The Operating Model Imperative: How Agentic AI is Forcing Enterprise Reinvention

April 9, 2026

By Donncha Carroll, Co-Founder, Partner, and Chief Data Scientist at Lotis Blue Consulting

Business Survival in an AI World: The Case for Rapid Adoption

AI has expanded at an unprecedented pace, quickly evolving from early experimentation into an enterprise-wide priority. Organizations are pouring resources into it — buying platforms, running pilots, and standing up centers of excellence. Yet most are bolting these new tools and technologies onto how they already work today, which is a mistake because AI, especially agentic AI, brings entirely new capabilities to the workplace. Rather than focusing on how to improve the work they are already doing, companies should seize the opportunity to revolutionize the nature of work itself.

The enterprises that are creating real separation from their competitors right now are not asking, “Where can we use AI?” They are asking, “Now that we have these new tools, how should this work actually get done?” That’s a fundamentally different starting point, and it leads to fundamentally different results. Evidence of AI’s impact on the metrics that Boards and investors care about is still emerging; however, the chart below from the Ramp Economics Lab highlights a clear difference in business performance, measured here by revenue growth, between companies that invest in and leverage AI and those that don’t.

Line chart showing that Ramp customers with high AI spending saw revenue growth climb to just over 100% from 2023 to 2025, far outpacing U.S. nominal GDP and customers with no AI spend, which stayed around 20% or less.

Look at the companies making the news.

  • At TWG Global, the CEO asked: now that we have Palantir’s AIP platform, how should our insurance business actually run? He then led a company-wide effort, working directly with Palantir’s team to rethink operations from the ground up.1 The scale of what Palantir is enabling is significant. The company closed Q4 2025 with $4.3 billion in total contract value and substantial remaining deal value, while securing a $10 billion U.S. Defense deal.2 3
  • Lowe’s took a similar approach to decision-making, gaining the ability to cut decision time by 30%, not by adding AI on top of existing processes, but by changing how decisions get made.4
  • GE Aerospace looked carefully at aircraft maintenance and rebuilt the process from scratch.5
  • Cognizant investigated its entire engineering operation and built what it calls an “agentified enterprise,” a model where people and AI systems work together across the company, powered by Anthropic’s Claude.6

What these companies have in common is a bias toward redesigning the work itself, not layering technology on top of legacy processes. They leveraged AI to change what’s possible, and that order matters.

Roy Amara, an American researcher, scientist, futurist, and former president of the Institute for the Future, made an observation that feels especially relevant in this context as we see these early shifts in ways of working and financial performance:

“We overestimate the impact of technology in the short-term and underestimate the effect in the long run.”7

We’ve moved past the short-term hype cycle. The structural shift is here. The rest of this article maps out how that reinvention must advance: from the work itself, to the capabilities required, to the operating model that ties it all together.

The AI Revolution No One Can Ignore

The scale and speed of adoption are already visible in the data. GitHub, a central hub for developers to store, manage, track, and share code projects, hit 630 million repositories in 2025.8 Its AI coding tool, Copilot, surpassed 20 million users the same year.9 These insights tell a very compelling story, but the more important leading indicators of change are on the business side.

One example in particular should get every executive’s attention: Dario Amodei’s Anthropic. The company grew from $1 billion in annualized revenue to $14 billion in just 14 months,10 growth without precedent in B2B software.

Bar chart showing Anthropic ARR growing from $1 billion in December 2024 to $14 billion in February 2026, with milestones at $4 billion in mid-2025 and $9 billion at the end of 2025.

Its coding tool, Claude Code, launched to the public in May 2025 and reached $2.5 billion in annualized revenue by February 2026, doubling since the start of the year. Business subscriptions quadrupled over the same period. Four percent of all GitHub commits worldwide are now authored by Claude Code — double the figure from just one month prior. Eight of the Fortune 10 are Claude customers. Over 500 companies now spend more than $1 million a year on the platform.

This shift is already showing up at the practice level. I have personally developed and implemented four different business applications using Claude Code within our firm over the last three months.

These statistics and my own experience reflect an explosive level of adoption, driven by what these tools now make possible. Work that used to take a team of developers months now takes one person a weekend, but here’s why that matters for every industry — not just tech. That same collapse in time and effort applies to anyone building internal tools, automating a workflow, or rethinking a process.

The people inside your company who understand your work — operations managers, analysts, finance teams, and supply chain leads — can now build and automate things that used to require a software development request and a six-month wait, assuming the resources were made available at all. A claims adjuster who deeply understands the process can work with AI tools to prototype a better workflow in days. A logistics coordinator can build a routing tool over a weekend. The barrier between “knowing what needs to change” and “actually changing it” is disappearing.

This changes the game for every industry. When a regional competitor can build in a few weeks what took you years, the advantage isn’t your technology budget anymore. It’s how well you understand your own work and how fast you can rethink it with these tools in hand.

From Copilot to Colleague: How Agentic Systems are Becoming Part of the Workforce

There’s a difference between AI that answers your questions and AI that does your work.

The first wave of enterprise AI (chatbots, copilots, and search helpers) was and remains very useful. These tools make people faster at the things they were already doing. However, what’s happening now is different. AI agents can take a goal, break it into steps, use tools, and get things done autonomously with some level of human oversight to ensure quality, impact, and safety. For a minute, let’s imagine that this is not a new software tool — it’s a new kind of worker or collaborator.

At J.P. Morgan, this shows up in the systems they are building to enable AI agents to handle the full commercial cycle on behalf of customers — finding products, comparing options, and completing transactions.11 Rather than adding AI to an existing checkout flow, they’re rethinking what commerce looks like when an agent can act on a customer’s behalf.

Walmart’s 2025 Retail Rewired report points to the same shift, placing agentic AI at the center of how retail will operate going forward — not someday, but now.12

Palantir’s AIP bootcamps are delivering high-value operational results on day one — not a demo or a proof of concept, but actual work getting done differently. The market is validating this approach: Palantir’s Q4 2025 revenue reached $1.41 billion, up 70% year over year, with U.S. commercial revenue surging 137%.2

The shift toward agentic systems is already reflected in the data. Gartner estimated that fewer than 5% of enterprise applications featured task-specific AI agents in 2025 and projects that share will reach 40% by 2026.13

For every company, this means work that used to require a person to coordinate across systems — pulling data, checking rules, routing approvals, updating records — can now be handled by agents. Not all of it, but enough to force a hard question: which of our processes should be handed to agents, which require people, and where do they need to work together?

What’s emerging is a new form of digital labor. It’s not replacing people wholesale — it’s a new resource that must be designed into how the company runs, just as roles, reporting lines, and handoffs are designed for any other part of the workforce. However, deploying this new workforce without the right design has already proven costly.

Lessons from Spectacular AI Failures

Not every company that moved fast got it right. Some of the earliest and most visible AI deployments have already stumbled, and it’s worth examining what went wrong to learn how to avoid similar mistakes.

In August 2025, Lenovo’s AI chatbot “Lena” was tricked into exposing sensitive company data.14 The attack wasn’t sophisticated. A 400-character prompt entered into a chat window was enough. The system had access to data it should not have, and no one had considered how it might respond to manipulation.

That’s a vivid example, but the quieter failures may be more dangerous. Gartner found that 67% of enterprises see their AI models degrade within 12 months of deployment.15 MIT researchers refer to these as “silent failures.” The system appears healthy while quietly producing errors across the workflows it touches. No one gets an alert, and the damage compounds over weeks and months before anyone notices.

The security picture is also concerning. Gravitee reports that 88% of enterprises have experienced AI agent security incidents and that over half of deployed agents operate with no security logging.16 Meanwhile, a 2026 Teleport survey of 205 CISOs found that 70% of companies have agents with more access than humans performing the same work.17

Trust metrics tell a similar story. Harvard Business Review found that only 6% of companies fully trust AI agents for core processes.18 Only 20% report that their technology is ready, and 12% believe their governance is in place. Even OpenAI has acknowledged that prompt injection, the technique used against Lenovo’s chatbot, is “unlikely to ever be fully solved.”19

Bar chart showing that 20 percent of companies say AI technology is ready, 12 percent say governance is in place, and 6 percent fully trust AI agents for core business processes.

And then there is Microsoft, a company that bet billions on AI and still got the deployment wrong. In March 2026, Microsoft began rolling back Copilot integrations across Windows, removing the AI assistant from Photos, Notepad, Widgets, and other core apps.20 Users and IT administrators had been calling it “Copilot bloat” for months. The features worked fine in isolation. The problem was forcing them into every workflow, whether they fit or not.

Microsoft’s EVP of Windows acknowledged the company needed to be “more intentional about how and where Copilot integrates.” Pew Research published data that same month showing that half of U.S. adults are now more concerned than excited about AI, up from 37% in 2021.21 Even the company with the deepest pockets and the broadest distribution learned the same lesson: bolting AI onto existing workflows isn’t a strategy. It is, however, a great way to annoy your users and erode their trust.

To be clear, none of these examples add up to an argument for standing still; progress in all these areas is inevitable. The pattern across these failures is the same: the technology worked, but it was deployed without rethinking the work around it, including handoffs, oversight, and fallback plans. The tools aren’t the risk. Using new tools without examining and updating old ways of working is.

The New Shape of Work in the Age of AI

The future of work isn’t all AI or all human; it’s both. Anthropic’s Economic Index, published in January 2026, shows that augmentation — where people work with AI — accounts for 52% of Claude usage, while full automation accounts for 45%.22 When Anthropic introduced features such as persistent memory and workflow skills, usage shifted even further toward collaboration. People didn’t simply hand over more of their work to AI. They worked more closely with it to redesign the work for faster, more efficient outcomes.

For every core process, companies need to answer three questions:

  • Where does human judgment add value that agents can’t?
  • Where do agents add speed and consistency that people can’t?
  • Where are the handoff points that require human review before the agent continues?
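To make the third question concrete, here is a minimal Python sketch of a checkpoint that pauses an agentic workflow for human review. The step names and confidence threshold are hypothetical, for illustration only; a real deployment would tie these to the specific process being redesigned.

```python
from dataclasses import dataclass

# Hypothetical policy values, for illustration only.
CONFIDENCE_FLOOR = 0.85                        # below this, route to a person
HIGH_STAKES_STEPS = {"final_approval", "customer_refund"}

@dataclass
class StepResult:
    step: str
    agent_confidence: float  # 0.0-1.0, model-scored or self-reported

def needs_human_review(result: StepResult) -> bool:
    """Return True when a person must review before the agent continues."""
    if result.step in HIGH_STAKES_STEPS:
        return True                            # judgment-heavy steps always pause
    return result.agent_confidence < CONFIDENCE_FLOOR

# A routine, high-confidence step proceeds; a refund always pauses.
print(needs_human_review(StepResult("data_extraction", 0.97)))  # False
print(needs_human_review(StepResult("customer_refund", 0.99)))  # True
```

The design choice worth noting: the handoff rule lives in one explicit place, so operations teams can tighten or relax it as agent performance data accumulates.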

Getting those answers right requires understanding which work is truly at risk. A recent Cognizant study found that 93% of U.S. jobs include at least some tasks that AI can perform, representing $4.5 trillion in labor value.23 That sounds alarming until you break it down. The phrase “some tasks” is doing a lot of the heavy lifting in that statement. The question isn’t whether AI touches a job; it’s how much of that job can be automated.

The jobs where AI can replace the highest share of tasks include order processing (where over 95% of tasks can be automated), insurance claims handling, data entry, telemarketing, and paralegal and document review. The common thread is high-volume, rule-based, pattern-matching work. When a role primarily involves applying known rules to incoming data, an agent can perform most of it.

Chart comparing roles most exposed to automation with roles rising in demand. Highest-risk roles include order processing, data entry, insurance claims, telemarketing, and paralegal review, while rising roles include process architects, agent operators, data stewards, AI trainers, and strategic analysts.

However, there is a counterintuitive trend. Vanguard research published in late 2025 found that the roughly 100 occupations most exposed to AI automation are outperforming the rest of the labor market in both job growth and real wages.24 Job growth in these roles increased from 1% in the pre-COVID period to 1.7% from 2023 onward. AI exposure doesn’t automatically mean replacement. In many cases, it increases the value of the work as people focus on more complex responsibilities.

These two trends — high automation potential and rising job value — are not contradictory. They reflect what happens when routine tasks are absorbed by agents and the remaining work becomes more complex, more judgment-intensive, and harder to replace. AI doesn’t eliminate roles so much as it reshapes them, but that reshaping is not evenly distributed.

The real vulnerability is at the entry level. Stanford found that employment in early-career roles in AI-exposed fields is down 13% since 2022.25 IDC reports that 66% of enterprises are reducing entry-level hiring as they deploy AI.26 The World Economic Forum’s 2025 Future of Jobs Report found that 41% of employers plan to reduce headcount where AI can automate tasks.27 The people most at risk aren’t the experienced professionals — they’re the ones starting out in their careers.

The roles that are growing sit at the boundary between people and agents — people who design how humans and AI work together, monitor agent performance, and ensure the data agents rely on is accurate. They require deep knowledge of how the work gets done.

Organizations need people who can train AI systems, make ethical judgment calls that the AI can’t, and think strategically using AI-generated insights — in other words, people with the ability to work well with AI. These aren’t futuristic roles; they are needed in enterprises across the world now. Further, the risks of agentic systems are clearly non-zero, and you will need humans and, more specifically, domain experts to anticipate the unlikely and undesirable outcomes so they can be avoided.

Only a third of organizations say they’re prepared for AI-driven ways of working, and only a third of employees have received any AI training in the past year. If companies are cutting entry-level roles while failing to train the people they already have, they are building a workforce gap that will compound quickly.

Capabilities Required to Design and Operate an AI-Powered Business

Buying AI tools and systems isn’t the hard part. Any company can do that. The challenge is building the organizational muscle to know where, how, and why to use them well. That transition requires specific capabilities that most companies don’t have:

Process design for human-agent teams

For every critical process, someone must determine which steps agents should handle, which steps people should handle, and where human oversight is needed. This isn’t an IT job; it’s an operations capability, spanning multiple roles, that requires a deep understanding of how the work gets done now and a vision for how it should get done in the future.

Governance that builds trust

The trust deficit described earlier is real — only 6% of companies fully trust AI agents for core processes — and the fix is governance. Organizations need clear rules about what agents can and can’t do, audit trails that show what happened and why, and escalation paths when things go wrong. Companies that get governance right in an ambiguous environment scale quickly. Those that don’t remain stuck in pilots that never scale, which quickly becomes an existential risk.

The ability to experiment quickly

Creating a new agentic workflow should take weeks, not quarters. This requires having the right data infrastructure, the platforms, and the organizational permission to test ideas, measure results, and either scale or kill them fast. Palantir’s AIP bootcamps demonstrate this: they deliver operational results in days, not months, because the platform is designed for rapid iteration. The companies moving at light speed treat experimentation as a core operating discipline.

Reskilling that works

The training gap and entry-level squeeze identified earlier aren’t just workforce issues — they’re capability gaps. If the bottom rung of the ladder is removed, organizations need a deliberate plan for which new profiles to hire and how to develop them to deliver future capabilities. The operating model section that follows addresses what that looks like in practice.

Data that agents can use

Agents are only as good as the data they rely on. If data is scattered across systems, poorly labeled, locked behind manual processes, or is generally difficult to access, agents will fail — sometimes quietly but often expensively.

Security designed for agents

Agents require access to systems to perform their work, and that access is an attack surface. Companies should manage agent permissions with the same discipline used for employees — least privilege (granting only the minimum required permissions), logging, and regular review. Companies with least-privilege controls for agents report a 17% security incident rate. Companies without them report 76%. That’s a 4.5x difference driven entirely by the level of access you give your agents.28

Bar chart showing agent-security outcomes are far worse without access controls: 76 percent versus 17 percent with least-privilege controls.
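As a concrete illustration of least privilege in practice, the sketch below shows a default-deny permission check that writes an audit log entry for every decision. The agent IDs, scope names, and policy table are hypothetical; production systems would enforce this in an identity and access management layer rather than in application code.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Hypothetical policy table: each agent gets only the scopes its process needs.
AGENT_SCOPES = {
    "claims-agent": {"claims:read", "claims:write"},
    "reporting-agent": {"claims:read"},        # read-only by design
}

def authorize(agent_id: str, scope: str) -> bool:
    """Grant only explicitly listed scopes; log every decision (default-deny)."""
    allowed = scope in AGENT_SCOPES.get(agent_id, set())
    log.info("%s agent=%s scope=%s allowed=%s",
             datetime.now(timezone.utc).isoformat(), agent_id, scope, allowed)
    return allowed

print(authorize("reporting-agent", "claims:read"))   # True
print(authorize("reporting-agent", "claims:write"))  # False: least privilege holds
```

Unknown agents fall through to an empty scope set, so anything not explicitly granted is denied and still logged — the two controls (least privilege and security logging) the surveys above found most often missing.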

Regulatory readiness across jurisdictions

Regulatory requirements are evolving quickly and vary materially by region. The EU AI Act introduces enforceable requirements for high-risk systems, including risk management, transparency, human oversight, incident reporting, and conformity assessments.29 In contrast, U.S. policy is shifting toward a more limited regulatory approach.30 31 For any enterprise operating across borders, this divergence creates a real capability gap. Someone in the organization needs to own it, track what’s required where, embed compliance into agentic workflows, and ensure the company is prepared as enforcement increases.

The ability to continuously redesign work

AI capabilities are improving at a pace that makes any static process design obsolete within months. The following section on the operating model addresses what continuous redesign looks like in practice.

None of these capabilities emerge on their own. They require intentional and deliberate operating model design — defining how work flows between people and agents, where decision rights sit, and how the organization adapts as AI capabilities evolve. They require strategic workforce planning that goes beyond headcount: identifying which roles change, which emerge, and what skills the organization needs to build or acquire and acting fast to close the gaps. These are design disciplines, not technology decisions.

Reinventing the Operating Model with AI as a Core Enterprise Capability

Knowing you need to change isn’t the same as knowing how to change or how to make change happen.

The road to hell is paved with good intentions, and most companies can’t get past the point of discussion. They understand the capabilities they need but don’t know how to rewire the organization to support the evolution required to get there. It helps to start with what makes the new operating model fundamentally different from the old one.

The traditional model was designed around people doing repeatable work inside functional silos. Finance handles finance and operations handles operations, each with layers of management to coordinate and review. That model made sense when the work was mostly manual, the pace of change was slow, and the cost of coordination was high.

The new model looks different in four important ways:

Workflows across functions, not within them

Agentic workflows should not and do not respect organization chart boundaries. A claims process might start with an agent pulling data from three systems, routing it to a person for a judgment call, returning it to an agent for documentation, and ending with a person approving the outcome. The team is organized around the end-to-end process, not the department.

Fewer layers, faster decisions

Agents can surface relevant information, flag exceptions, and handle routine approvals. That collapses the middle layers that existed mainly to review, route, and report. Decision rights move closer to where the work happens.

Smaller teams do more

When agents handle high-volume, pattern-matching work, fewer people are needed for those tasks, but different capabilities are required. Organizations need more process designers, more people who can direct and oversee agents, and more people who handle the exceptions and judgment calls that agents can’t.

Continuous redesign, not periodic reorganization

The old model changed every few years through formal transformation programs. The new model must evolve continuously as tools improve. What agents couldn’t do six months ago, they can do now. Organizations need to find ways to absorb these changes continuously without launching a one- to two-year change initiative.

The workforce changes in specific ways:

The ratio shifts

Fewer people perform transaction-heavy, repetitive work, and more people design, oversee, and improve how agents do that work. A team of 20 processors might become 5 people managing a team of agents that handle the same or higher volume.

New roles appear

Process architects who design human-agent workflows. Agent operators who monitor performance and step in when things go wrong. Data stewards who keep the source data agents rely on clean and up to date. To be clear, these are not IT roles. They’re operations roles that didn’t previously exist.

Existing roles change shape

A senior underwriter doesn’t disappear, but instead of reviewing every file, they review the ones the agent flagged as unusual, train the agent on edge cases, and spend more time on the complex deals that require real judgment. The job gets harder and more valuable.

The entry-level pipeline breaks

If agents are doing the work that junior people used to learn on the job, then you must build new ways for people to develop the capabilities that matter. New and evolved apprenticeship models, simulation-based training, and rotational programs where people learn by working alongside agents on real processes are needed. Companies that don’t solve this will lose their talent pipeline within a few years and have no one left for a well-structured and effective generational transition.

The practical starting point is the same one that many leading companies have used for years. Start with the work, not the organization chart. Map core processes end-to-end. For each one, ask: given these new capabilities, what’s the best way to deliver this outcome? Redesign the process first: is this the right work? Do all these steps add value? Then determine the structure and mix of people and agents needed to support it.

Build feedback loops from day one

Organizations need ways to measure whether human-agent workflows are working, including output quality, speed, error rates, and customer outcomes. That data should be used to continuously refine the system. This isn’t a one-time redesign. It’s a new way of running the company.
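A feedback loop of this kind can start very simply. The sketch below, with hypothetical task records and metric names, computes a per-workflow scorecard covering speed, error rate, and how often agents escalate work to people.

```python
from statistics import mean

# Hypothetical per-task records for one human-agent workflow.
tasks = [
    {"minutes": 4.0, "error": False, "escalated_to_human": False},
    {"minutes": 6.5, "error": True,  "escalated_to_human": True},
    {"minutes": 3.2, "error": False, "escalated_to_human": False},
    {"minutes": 5.1, "error": False, "escalated_to_human": True},
]

def workflow_scorecard(records):
    """Summarize speed, quality, and handoff load for continuous review."""
    n = len(records)
    return {
        "avg_minutes": round(mean(r["minutes"] for r in records), 2),
        "error_rate": sum(r["error"] for r in records) / n,
        "escalation_rate": sum(r["escalated_to_human"] for r in records) / n,
    }

print(workflow_scorecard(tasks))
# {'avg_minutes': 4.7, 'error_rate': 0.25, 'escalation_rate': 0.5}
```

Reviewed weekly, even a scorecard this small shows whether a redesigned process is actually improving or quietly degrading — the "silent failure" pattern described earlier.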

Measure the right outcomes

A Futurum Group survey of 830 IT decision-makers in early 2026 found that enterprises’ evaluation of AI success has fundamentally shifted.32 Direct financial impact, including revenue growth and profitability, has nearly doubled as the primary success metric, while productivity gains have declined in importance. Boards are done hearing “we saved four hours per week.” They want to see business results on the P&L. In the same survey, agentic AI surged 31.5% year over year as a top technology priority. The pilot phase is over. Companies that continue to experiment without financial accountability are falling behind those that connect every AI workflow to margin, cost reduction, or revenue.

Chart showing a shift in AI success metrics from productivity gains to direct P&L impact, with P&L impact nearly doubling while productivity-based measurement declines.

Assign clear accountability

Every process that includes agents needs a clear owner — someone who is agile and courageous enough to take accountability for how the human-agent team performs. Not just technology. Not just the people. The whole thing. This is a new kind of role, and most organizations have not yet defined it.

The AI-Centric Operating Model Imperative: What Leaders Must Do Now

The advantage doesn’t go to the company with the most AI agents. It goes to the company that determines how people and agents should work together and builds the organization around that answer.

There are three things every enterprise leader should start now:

Pick two or three core processes and redesign them from scratch

Focus on high-volume, cross-functional processes where agents can take on the coordination burden. That’s where the payoff is greatest, and the lessons are clearest.

Build the guardrails before you scale

Lenovo didn’t have them; most companies don’t. Governance, access controls, human checkpoints, and security logging should be in place for initial agent workflows, ensuring a tested playbook before scaling further. The data is clear: companies with proper controls in place have a fraction of the security incidents. Those without them are rolling the dice every day.

Invest in your people’s ability to work with agents

This is the new competitive advantage. Not the tools — everyone has access to the same tools. The difference lies in whether people know how to use them to rethink and rebuild how work gets done. This requires training but also permission to experiment, redesign their own workflows, and explore what’s possible. The companies that build this muscle will move faster and adapt more effectively than those that treat AI as something owned solely by IT.

As Roy Amara observed decades ago, “We overestimate the impact of technology in the short-term and underestimate the effect in the long run.”7 Companies that treat this moment in time as a technology initiative will fall behind quickly. The ones that embrace this as an opportunity to rethink how the business works, starting with the work itself and then building the operating model to support it, will pull ahead at an accelerated rate.


Sources:

1.) Palantir and TWG Global Joint Venture: https://investors.palantir.com/news-details/2025/Palantir-and-TWG-Global-Announce-Joint-Venture-to-Deploy-AI-Program-Across-Financial-Services-and-Insurance/

2.) Palantir Q4 2025 Earnings: https://investors.palantir.com/news-details/2026/Palantir-Reports-Q4-2025-U-S–Comm-Revenue-Growth-of-137-YY-and-Revenue-Growth-of-70-YY-Issues-FY-2026-Revenue-Guidance-of-61-YY-and-U-S–Comm-Revenue-Guidance-of-115-YY-Crushing-Consensus-Expectations/

3.) U.S. Army Enterprise Service Agreement with Palantir: https://www.army.mil/article/287506/u_s_army_awards_enterprise_service_agreement_to_enhance_military_readiness_and_drive_operational_efficiency

4.) Lowe’s AI Decision-Making with Nvidia and Palantir: https://quantumzeitgeist.com/palantir-technologies-integrated-ai-platform/

5.) GE Aerospace AI Fact Sheet: https://www.geaerospace.com/sites/default/files/ai-fact-sheet.pdf

6.) Cognizant Adopts Anthropic’s Claude: https://investors.cognizant.com/news-and-events/news/news-details/2025/Cognizant-Adopts-Anthropics-Claude-to-Accelerate-Enterprise-AI-Adoption-at-Scale-and-Deploys-Claude-to-Drive-Internal-AI-Transformation/default.aspx

7.) Wikipedia: Roy Amara: https://en.wikipedia.org/wiki/Roy_Amara

8.) GitHub Octoverse 2025 Report: https://github.blog/news-insights/octoverse/octoverse-a-new-developer-joins-github-every-second-as-ai-leads-typescript-to-1/

9.) GitHub Copilot Crosses 20M Users (TechCrunch): https://techcrunch.com/2025/07/30/github-copilot-crosses-20-million-all-time-users/

10.) Anthropic Series G: $14B ARR, $380B Valuation: https://www.anthropic.com/news/anthropic-raises-30-billion-series-g-funding-380-billion-post-money-valuation

11.) J.P. Morgan Payments: Agentic Commerce with Mirakl: https://www.jpmorgan.com/payments/newsroom/mirakl-nexus-agentic-commerce

12.) Walmart Retail Rewired Report 2025: https://corporate.walmart.com/news/2025/06/04/walmarts-retail-rewired-report-2025-agentic-ai-at-the-heart-of-retail-transformation

13.) Gartner: 40% of Enterprise Apps to Feature AI Agents by 2026: https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025

14.) Lenovo AI Chatbot XSS Vulnerability (Adversa AI): https://adversa.ai/blog/lenovo-ai-chatbot-incident-critical-xss-vulnerability-exposes-enterprise-ai-security-gaps/

15.) Gartner: 67% AI Model Degradation (Beam.ai): https://beam.ai/agentic-insights/silent-failure-at-scale-the-ai-risk-that-compounds-before-anyone-notices

16.) Gravitee: State of AI Agent Security 2026: https://www.gravitee.io/blog/state-of-ai-agent-security-2026-report-when-adoption-outpaces-control

17.) Teleport 2026 State of AI in Enterprise Security Report: https://goteleport.com/about/newsroom/press-releases/2026-state-of-ai-in-enterprise-security-report/

18.) HBR: Only 6% Trust AI Agents for Core Processes: https://opendatascience.com/only-6-of-companies-fully-trust-ai-agents-to-run-core-business-processes-hbr-finds/

19.) OpenAI: Prompt Injection May Never Be Fully Solved: https://the-decoder.com/openai-admits-prompt-injection-may-never-be-fully-solved-casting-doubt-on-the-agentic-ai-vision/

20.) Microsoft Rolls Back Copilot AI on Windows (TechCrunch): https://techcrunch.com/2026/03/20/microsoft-rolls-back-some-of-its-copilot-ai-bloat-on-windows/

21.) Pew Research: Americans’ Views on AI: https://www.pewresearch.org/short-reads/2026/03/12/key-findings-about-how-americans-view-artificial-intelligence/

22.) Anthropic Economic Index, January 2026: https://www.anthropic.com/research/anthropic-economic-index-january-2026-report

23.) Cognizant: AI Can Unlock $4.5T in U.S. Labor Productivity: https://investors.cognizant.com/news-and-events/news/news-details/2026/AI-Can-Unlock-4-5-Trillion-in-U-S–Labor-Productivity-Today-Reveals-Cognizants-Latest-New-Work-New-World-2026-Report/default.aspx

24.) Vanguard: AI-Exposed Occupations Outperforming (Fortune): https://fortune.com/2025/12/27/occupations-most-exposed-to-ai-automation-outperform-vanguard/

25.) Stanford: Entry-Level AI Job Postings Down 13%: https://www.interviewquery.com/p/ai-killing-entry-level-jobs

26.) IDC: Work Rewired — The Human-AI Collaboration Wave: https://www.idc.com/resource-center/blog/work-rewired-navigating-the-human-ai-collaboration-wave/

27.) WEF 2025 Future of Jobs Report (Fortune): https://fortune.com/2025/01/16/world-economic-forum-says-41-of-bosses-worldwide-have-plans-to-fire-you-in-the-next-5-years-to-replace-you-with-ai/

28.) Teleport: New Teleport Research Reveal AI Security Crisis in the Enterprise: https://goteleport.com/about/newsroom/press-releases/2026-state-of-ai-in-enterprise-security-report/

29.) EU AI Act: 6 Steps Before August 2, 2026 (Orrick): https://www.orrick.com/en/Insights/2025/11/The-EU-AI-Act-6-Steps-to-Take-Before-2-August-2026

30.) Trump AI Executive Order, Dec 2025 (Sidley Austin): https://www.sidley.com/en/insights/newsupdates/2025/12/unpacking-the-december-11-2025-executive-order

31.) Trump AI Framework: Preempt State Laws (Reuters): https://www.reuters.com/world/us/white-house-releases-national-ai-framework-2026-03-20/

32.) Futurum Group: Enterprise AI ROI Shifts, Agentic Priorities Surge: https://futurumgroup.com/press-release/enterprise-ai-roi-shifts-as-agentic-priorities-surge/
