
The Hourglass Effect: Why Hollowing Out the Middle Threatens Compliance Culture




AI is reshaping organisations. But transformation without intention is just disruption.


The Disconnect


Block cut 4,000 jobs. The stock soared 20%.


Meta announced 15,000 cuts (20% of their workforce). The stock climbed 3%.


HSBC is targeting 20,000 roles, 10% of its global headcount, while accelerating $1.5 billion in annual cost savings six months ahead of schedule.


The market celebrates. Thousands of families start doing very different maths.


This isn't a recession story. It's a restructuring story.


And it raises a question every risk and compliance leader should be asking: What happens to organisational capability when the middle disappears?


The Hourglass Effect


Picture an hourglass.


Senior leaders at the top: strategy, relationships, accountability. AI agents at the bottom: processing, analysing, flagging.


The middle? Being hollowed out.


Gartner predicts that by 2026, 20% of organisations will use AI to eliminate more than half of their middle management roles.


The middle isn't just where jobs live. It's where judgement develops.


It's where professionals learn to:

  • Read a room before the data catches up

  • Sense when something's off

  • Develop instincts that can't be taught in a course

  • Mentor the next generation


If we automate the apprenticeship, where do the next leaders come from?


The Expectation Gap


Are we asking people to work differently inside organisations that still work the same way?


Job descriptions now demand "AI fluency" and "automation mindset." But inside many organisations, workflows haven't changed. Reports still built manually. Data still siloed. Approvals still slow.


How are we supposed to build AI experience inside companies that are not set up to use it?


Satya Nadella reinforced this at Davos:


"The old model: departments hoarding knowledge, information trickling upward through hierarchy, is now structurally obsolete."


His warning: firms will see almost zero productivity gains from AI unless leaders actively redesign their structures, retrain their people, and rebuild how context moves through the organisation.


That is the actual AI race. And most companies are losing it before it even starts.


The Measurement Problem


If the expectation gap explains why individuals struggle, the measurement problem explains why organisations do.


Carey Ransom, writing in The Financial Revolutionist, puts it bluntly: "Banks can buy all the AI tools they want, but if they keep measuring and rewarding people the same way they always have, change will be slow or won't happen at all."


Most organisations still measure managers the way they've measured them for decades. How big is your team? How large is your budget? How many people report to you?


These are proxies for importance. They've been baked into compensation structures, title hierarchies, and promotion decisions for so long that nobody questions them anymore.


But think about what AI actually does. It makes smaller teams capable of producing the same output as much larger ones.


A manager running a two-person team that uses AI to deliver the same results as a traditional ten-person team is objectively creating more value. Lower cost, same output, probably faster turnaround.

Under today's measurement systems, that manager gets punished. Smaller team, smaller budget, smaller title: by the old proxies, a smaller claim to importance.
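
A rough, back-of-the-envelope sketch makes the distortion concrete. The figures below are entirely illustrative assumptions (team sizes, salaries, tooling cost, output), not data from any institution:

    # Illustrative sketch only: all figures are assumptions, not real data.
    TEAM_TRADITIONAL = {"headcount": 10, "avg_cost": 100_000, "units_delivered": 500}
    TEAM_AI_ENABLED = {"headcount": 2, "avg_cost": 100_000, "units_delivered": 500,
                       "ai_tooling_cost": 50_000}

    def cost_per_unit(team):
        """The value lens: what the organisation pays for each unit of output."""
        total_cost = team["headcount"] * team["avg_cost"] + team.get("ai_tooling_cost", 0)
        return total_cost / team["units_delivered"]

    def headcount_score(team):
        """The legacy lens: bigger team, more 'important' manager."""
        return team["headcount"]

    print(cost_per_unit(TEAM_TRADITIONAL), cost_per_unit(TEAM_AI_ENABLED))      # 2000.0 vs 500.0
    print(headcount_score(TEAM_TRADITIONAL), headcount_score(TEAM_AI_ENABLED))  # 10 vs 2

By cost per unit delivered, the AI-enabled team is four times cheaper. By headcount, its manager looks a fifth as important. Same facts, opposite rankings.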


That's backwards in the AI-enabled world we've entered.

Ransom's prediction: "The defining split in banking won't be AI versus no AI. It will be between banks that redesign their structures to drive AI-powered transformation and those that bolt AI onto unchanged organizations and wonder why nothing feels different."


Incentives drive behaviour. Until organisations change what gets rewarded, AI adoption will stay stuck in pilot mode.


The Uncomfortable Fiction


The Banker recently called AI agent compliance an "uncomfortable fiction."


BNY has given login credentials to more than 100 AI agents that report to human managers. Citi uses them to execute operational tasks. JPMorgan uses them to parse thousands of legal documents.


Yet no compliance framework currently governs what happens when one of these agents causes a financial loss.


Every layer of the existing compliance stack was written for people. Know Your Customer ties a credential to a natural person or legal entity. Anti-money laundering presumes human intent. The Bank Secrecy Act demands a verifiable identity behind every reportable action.


An AI agent satisfies none of these conditions.


Banks are improvising. Creating access protocols. Defining escalation paths. Assigning liability by fiat.

But here's the question no one's answering: Who's accountable when the algorithm gets it wrong?

The technology has outpaced the governance. And in compliance, that's not a feature; it's a risk.


The Talent Flight Risk


The best talent will figure out pretty quickly which institutions value their ability to work smarter and which ones still just count heads.


EY's Work Reimagined survey found that while 95% of banking employees now use AI, only 18% are positioned to achieve transformative impact — those with sufficient training combined with the right tools and mindset.


The rest? Caught in the gap between expectation and enablement.


BCG estimates AI agents could increase bank profitability by 30% and reduce costs by 30–40% by 2030. But only for institutions that redesign work, not just deploy tools.


The banks that win won't be the ones with the most AI agents. They'll be the ones that kept humans capable of supervising them.


And that requires investment in people, not just technology.


The Compliance Connection


For risk and compliance functions, this isn't abstract. It's operational.

Financial crime doesn't pause for restructuring. Regulatory expectations don't soften because your team is "leaner."


When experienced analysts are let go, institutional knowledge walks out the door. When mid-level managers are "optimised," the connective tissue between strategy and execution disappears.


The U.S. Treasury's new Financial Services AI Risk Management Framework makes this explicit: 230 specific control objectives covering governance, data quality, model development, validation, monitoring, third-party risk, and consumer protection.


The message is clear: human-in-the-loop oversight is now a regulatory expectation, not an optional governance practice.


You can't automate accountability.


What Leaders Should Do

If you're leading a risk or compliance function through AI-driven transformation, here are the questions worth asking:


1. Are we measuring what matters? Stop rewarding headcount. Start rewarding output, judgement, and capability development. If your compensation structure punishes managers who do more with less, you're actively discouraging transformation.


2. Where is judgement developed in our organisation? If we're cutting the middle, who's training the next generation of leaders? Document institutional wisdom. Create handover systems that transfer judgement, not just process.


3. Do our people have the capacity to do what we're asking? The EY data is stark: 95% using AI, 18% enabled to use it well. That gap is a leadership failure, not an individual one.


4. Who's accountable when AI gets it wrong? If you can't answer this clearly, you have a governance gap. Define escalation paths. Assign liability. Build audit trails.


5. Is our compliance culture real or performative? Rules don't stop financial crime. People do. Culture is what happens when no one's watching, and it's the only thing that scales when your team gets "leaner."


Where PULSE® Fits


This is why we built PULSE®.


PULSE® is a risk and compliance culture framework that assesses five dimensions: People, Undertaking, Leadership, Structure, and Ecosystem.


  • People: Are your teams enabled, engaged, and empowered?

  • Undertaking: Is your mission clear and aligned across the organisation?

  • Leadership: Do leaders model the culture they expect?

  • Structure: Does your governance support good decisions, or just documentation?

  • Ecosystem: Are you connected to the networks that matter?


It's not a tick-box audit. It's a lens for understanding whether your risk and compliance culture actually works, whether your people make good decisions when no one's watching, or just follow scripts when someone is.


For individuals and small teams: Where does your risk and compliance culture stand today?


For teams ready to build capability: What frameworks do you use to strengthen each dimension?


For organisations navigating transformation: Our training and advisory services help leadership teams redesign risk and compliance culture for the AI era.


The Bottom Line


The hourglass is real. The middle is being hollowed out.


But transformation doesn't have to mean losing capability. It means being deliberate about what you're building, not just what you're cutting.


The organisations that win won't be the ones that moved fastest on AI. They'll be the ones that kept humans capable of using it wisely.


It takes a network to defeat a network.


If you're building risk and compliance culture, not theatre, we'd like to hear from you.


I explored the personal side of this story in The Fourth Perspective, Edition 5: The Human Cost of Progress. If this resonated, that's where the human story lives.


JFourth Solutions helps financial institutions build compliance cultures that survive leadership transitions. From PULSE® assessments to board advisory and team training, we focus on the human side of risk and compliance.


If your team is navigating a leadership transition, or building culture from scratch, let's talk.


💙





 
 
 
