The hype around AI has dominated conversations among top executives. Yet most organizations are still struggling to generate meaningful returns from their AI initiatives. To explore these challenges in greater depth, we conducted extensive research, combining a survey of over 100 C-suite executives with more than two dozen interviews across industries. The survey revealed that 45% of executives found the ROI of AI adoption to be below expectations, while only 10% reported results exceeding expectations. It also highlighted that the most significant barriers are organizational rather than technical. Building on these findings, we identify a set of interlocking obstacles rooted in three areas: people, processes, and politics. In this article, we explore these obstacles and look at how companies are addressing them.

Cultivating People's Readiness for AI

When thinking about employees' readiness for AI, our research identified three problems: uncertainty, fear of replacement, and the self-image problem.

The Uncertainty Problem: "What will this AI actually do?"

Slack's 2024 global survey of more than 17,000 office workers found that 61% of employees had spent less than five hours learning about AI and 30% had received no training at all. In the absence of knowledge, opinions become polarized: Some employees dismiss AI as mere hype, while others assume it can do everything. Uncertainty extends beyond technical capability. One audit firm, for example, identified AI opportunities across its workflow, but both clients and auditors resisted, citing regulatory risk. In the end, the firm abandoned many of its AI-based approaches. To address these concerns, firms need to embed AI governance into daily work and make it intuitive for every employee. Effective governance not only safeguards against unintended consequences but also helps demystify AI.
As an example, in 2018 DBS Bank introduced the PURE framework—Purposeful, Unsurprising, Respectful, and Explainable—to evaluate every AI use case. Instead of relying on lengthy policy documents, employees are guided by four simple questions: Is the use purposeful and meaningful? Will the results surprise customers? Does it respect customers and their data? Can the outputs be explained? This approach reduces uncertainty while ensuring responsible use. DBS also established a Responsible Data Use Committee to review projects that do not meet PURE requirements. With an easy-to-grasp framework and clear human oversight, the bank empowered employees at all levels to innovate responsibly. By 2023, AI had already generated $274 million in value for DBS.

Fear of Replacement: "Will I keep my job?"

When employees suspect they are training a system that will replace them, they comply minimally. They drag their feet when asked to "label data" or "teach the model." This "training trap" slows the adoption of AI in service, retail, and manufacturing firms. Companies can counter it by sharing the upside—offering training royalties for data and labeling work, productivity bonuses tied to realized gains, and career guarantees that channel efficiency gains into reskilling rather than layoffs. Because many of these benefits rest on future promises that are easy to break, firms must make their commitments credible and easy to verify. One e-commerce company, for instance, pledged to increase total labor spending by 1% annually to demonstrate its commitment to investing in employees. The 1% figure is easy to check and hard to manipulate, so it helped build trust with workers. The company also gave workers formal seats on the AI steering committee and greater influence over personnel decisions. This enhanced worker power further reinforced trust.

Another form of resistance is fault-finding: holding AI to much higher standards than humans.
At a leading insurance company, employees' fault-finding missions led to demands for unrealistically high levels of accuracy from AI systems, which in turn slowed deployment and drove up investment costs. Independent studies and external audits comparing AI and human outputs helped restore realism. Finally, replacement fears ease when AI fuels growth rather than contraction. If technology expands the business, efficiency gains feel like opportunity, not threat.

The Self-Image Problem: "Will I appear competent?"

Fear of status loss can be even more powerful than fear of job loss. We've observed engineers who quietly use AI tools but conceal it to avoid appearing less skilled. Many worry that admitting to using AI could make them seem lazy, incompetent, or even dishonest. Similar image concerns lead radiologists to ignore AI recommendations to protect professional pride. One financial services firm flipped this stigma by launching an "AI Masters" program that fast-tracks employees who demonstrate exceptional AI skills, regardless of seniority. The program recasts proficiency with AI as sophistication and forward thinking, not laziness or incompetence. By broadcasting this message, organizations can create positive incentives for employees to embrace AI. Equally important is designing AI use for professional dignity. Companies can position AI as a tool that presents facts without judgment while leaving final conclusions to professionals. This framing reinforces expertise rather than undermining it. Some firms have created private "second-opinion consoles" where employees can consult AI without fear of embarrassment or reputational risk.

Processes: Redesigning Workflows

AI adoption often falters when organizations treat it as a simple overlay on existing processes. True transformation demands systematic change at three levels: individual workflows (nodes), cross-functional connections (edges), and system-wide coordination (networks).
The Node Level: Transforming How Individuals Work

A consulting firm's legal team initially used AI like a spell-check tool—running it at the very end of traditional reviews. The approach produced negligible benefits, because the AI was fully accurate on only 40% of error types. By restructuring the workflow so that AI conducted the first pass—checking only the error types it handled best—lawyers could focus solely on the remaining ones. This redesign demonstrated how rethinking workflows unlocks AI's value. Some firms accelerated the change by setting "mission impossible" goals that force teams to abandon old habits and discover new ways of working. One Boston startup, for instance, faced resistance to using AI in document preparation. To break through, it required that documents—previously completed in a week—be finished within a single day. The extreme time pressure left employees no choice but to integrate AI from the start and redesign their processes around it.

The Edge Level: Redesigning Connections

The edge level focuses on how improved local judgment and data can transform interdepartmental processes and decision-making flows. At a Japanese cosmetics company, beauty advisors in stores once supplied untrusted, anecdotal feedback. Generative AI helped them analyze customer conversations and traffic patterns, producing structured insights. Headquarters, now confident in the data, built a two-way loop: Campaigns could launch faster and be tweaked in real time based on credible field intelligence. The edge between local operations and central planning became a responsive circuit rather than a one-way command.

The Network Level: Orchestrating System-Wide Impact

To generate real business impact from AI, companies must consider the network level—how improvements across multiple nodes and edges interact within the broader system. Without this perspective, AI can simply shift bottlenecks from one part of the network to another, limiting overall performance gains.
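The bottleneck dynamic can be illustrated with a minimal throughput model (a hypothetical sketch; the stage names and rates are invented for illustration, not drawn from any company in our research): a serial pipeline delivers only as fast as its slowest stage, so accelerating a non-bottleneck node yields no system-wide gain.

```python
# Minimal sketch: a serial pipeline's throughput is capped by its slowest stage.
# Stage names and rates are hypothetical, chosen to mirror the bottleneck pattern.

def pipeline_throughput(stage_rates):
    """Units per week the whole pipeline can deliver (min over stages)."""
    return min(stage_rates.values())

stages = {
    "software_dev": 10,      # features/week
    "hardware_mfg": 4,       # the real bottleneck
    "integration_test": 6,
}

before = pipeline_throughput(stages)

# Gen AI triples speed at one node...
stages["software_dev"] = 30
after = pipeline_throughput(stages)

# ...but system-wide throughput is unchanged: the slowest stage still gates it.
print(before, after)  # 4 4
```

Under this lens, mapping the network topology means finding the current minimum; synchronizing adoption means raising the slowest stages first, since improvements elsewhere simply accumulate as idle capacity.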
This phenomenon is common because many organizations concentrate their gen AI efforts in a few high-impact areas—such as marketing, customer service, or software development—while overlooking the interdependence across business units. A major car manufacturer discovered this when it adopted generative AI to boost productivity in automotive software development (enhancing one set of nodes), enabling faster design iterations, code generation, and feature testing. Yet the overall vehicle production network showed little improvement, as hardware manufacturing became the primary bottleneck. The enhanced software development nodes were now waiting on unchanged hardware nodes, and the edges connecting them couldn't handle the increased pace of software output. Addressing such network-level challenges requires coordinated action across all nodes and edges. Organizations should begin by mapping the entire network topology—understanding how work flows between teams and identifying potential bottlenecks. AI adoption should then be synchronized across interconnected nodes so that capacity improvements are matched throughout the network.

Politics: Navigating Power and Influence

AI shapes who gains and who loses inside organizations. The resulting politics—over data, hierarchy, and accountability—often prove more formidable than technical issues. Successful AI adoption often requires redesigning governance structures, adjusting incentive mechanisms, and, in some cases, relying on senior leadership to broker agreements and remove barriers. Here are three specific problems we observed in our research:

Resource Hoarding

Organizations quickly discover that AI's hunger for data and knowledge collides head-on with deeply ingrained competitive instincts. At a large Chinese IT firm, researchers found that programmers were 16–18% less likely to recommend AI access to their own teammates, effectively hoarding knowledge to preserve their personal edge.
Across business units, larger, more successful divisions that own sophisticated AI models and valuable datasets often see little incentive to share them with the smaller units that could benefit most. Sharing can feel like enabling potential internal competitors while diluting their own performance metrics. In the Deloitte-HKU survey we conducted, C-suite participants identified "siloed departments preventing cooperation" as the top barrier to AI adoption. DBS Bank confronted this resistance by designing an incentive structure that rewarded units for converting proprietary datasets into reusable assets on the central platform. A key metric tracked the percentage of each unit's use-case-specific datasets that had been transformed into shareable resources. This approach breaks down silos by motivating both large and small units to contribute high-quality, accessible data.

Hierarchy Disruption

AI unsettles the traditional hierarchy built on two pillars: experience and headcount. The first weakens when junior employees armed with AI outperform seasoned veterans. In one software firm, programmers with only two years' experience began producing more, and cleaner, code than colleagues with five years' tenure. Juniors felt they were doing more for less. Some companies responded by expanding competency models to explicitly include AI mastery and by shortening promotion ladders. When advancement cycles shrink from five years to one or two, and mastery of new tools is rewarded, young employees see an immediate payoff for learning.

The second pillar, power through headcount and control over resources, creates even stronger resistance. Managers are the gatekeepers of AI adoption, yet their authority often depends on the size of their teams. When efficiency threatens to shrink those teams, self-interest can quietly derail otherwise valuable AI initiatives.
In one translation department, leaders hesitated to automate because doing so would shrink their headcount, bonuses, and prestige. OPPO, the smartphone maker, tackled this by staging an AI tournament in which every employee had equal access to tools and results were ranked by department. Suddenly, managers had to champion AI adoption or risk public embarrassment if their teams lagged. The contest reframed success: Status no longer came from managing large teams but from enabling them to achieve more with AI.

Accountability Attribution

AI also disrupts the traditional balance of blame and discretion within organizations. Its precision turns fuzzy responsibility into hard data—and that can create new political friction. At Dingdong Maicai, a Chinese grocery e-commerce company, AI systems began tracing every customer complaint back to the exact department at fault. When a customer received spoiled fruit, algorithms could pinpoint whether procurement bought poorly, storage mishandled goods, or delivery caused the damage. What had once been shared uncertainty became explicit accountability. Departments that had long operated under ambiguity now found themselves publicly exposed. The binary nature of algorithmic judgment—assigning full responsibility to one side—ignored the gray areas of real-world operations, escalating disputes and complaints from department heads. The lesson is that perfect accountability can undermine organizational harmony. Dingdong eventually changed its attribution system, leaving final judgment to humans. The goal was not to reject transparency but to buffer it with trust. Effective AI adoption requires knowing when precision helps performance—and when it merely fuels internal politics.

Pulling Multiple Levers for AI-Driven Transformation

A professional services firm with 2,200 practitioners—primarily software developers and product managers—began piloting gen AI initiatives in mid-2023.
Within weeks, individual productivity rose by 30–40%, yet by mid-2024, overall performance—measured by productivity and time-to-delivery—remained flat. Several factors explained the gap. Developers lacked incentives to boost output, fearing efficiency gains might trigger layoffs. The flood of new AI tools created inconsistent practices across teams, disrupting established standards and complicating project management. Meanwhile, junior developers often outperformed senior ones, but work assignments and recognition still followed traditional hierarchies.

To address these challenges, the firm pulled levers across people, processes, and politics. On the people front, it redefined its competency model to explicitly reward AI proficiency, making expertise visible across the organization and turning mastery into a source of pride. To counter fears of replacement, the compensation structure was overhauled: Base salaries were reduced to 80% of their prior level, while performance-based incentives of up to 40% were added, directly linking efficiency gains to individual rewards.

The process dimension was redesigned to embed AI throughout the workflow. Developers became data and process stewards, responsible for following standardized data definitions, coding practices, and AI protocols while participating in training to strengthen process consistency. A unified end-to-end framework harmonized AI integration across development stages, with updated SOPs incorporating AI-augmented steps for easier training and compliance. At the organizational level, a centralized governance model with defined checkpoints and Business Process Stewards ensured alignment across data, AI, and workflows.

Political barriers were confronted head-on. Job grades expanded from six to 14, with biannual reviews enabling rapid promotion or demotion. This system rewarded AI adopters with greater responsibility and influence, realigning incentives that once favored tenure over capability.
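The revised pay mix can be made concrete with a small worked example (illustrative figures only; just the 80% base and up-to-40% incentive split come from the case, and the firm's actual payout curve was not disclosed): total compensation ranges from 80% to 120% of the prior salary, depending on realized efficiency gains.

```python
# Illustrative sketch of the restructured pay mix described above.
# Only the 80% base / up-to-40% incentive split is from the case;
# the salary figure and linear payout curve are hypothetical.

def total_comp(old_salary, performance_score):
    """performance_score in [0, 1]: the share of the maximum incentive earned."""
    base = 0.80 * old_salary                            # base cut to 80% of prior pay
    incentive = 0.40 * old_salary * performance_score   # up to 40% on top
    return round(base + incentive, 2)

old = 100_000
print(total_comp(old, 0.0))   # 80000.0  -> no realized gains: pay falls
print(total_comp(old, 0.5))   # 100000.0 -> mid performance: breaks even
print(total_comp(old, 1.0))   # 120000.0 -> full payout: 20% above old pay
```

The design choice is visible in the endpoints: employees who capture efficiency gains can out-earn their old salary by 20%, while those who don't bear a real cut, which shifts the risk of AI adoption from the firm onto individual performance.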
By mid-2025, these changes began to pay off. Productivity rose by 22%, enabling a 10% price cut that boosted sales by 20%. Labor costs grew by 5% as the firm reinvested in its workforce, reinforcing its commitment to employees. Overall profitability improved by 3%, demonstrating that AI-driven transformation translated into tangible business value. Building on this foundation, the firm expanded into markets that had previously been too price-competitive to enter. AI-supported development also shortened the learning curve for new programming languages, enabling the company to broaden its offerings. This case shows that true AI transformation goes beyond technology. By aligning incentives, redesigning processes, and reconfiguring organizational power, the firm turned AI adoption into lasting business value.

. . .

Ultimately, the challenge is not adopting AI but evolving alongside it. The true advantage lies in building an organization that can fully harness AI's power. Firms that see it merely as a technical upgrade will inevitably fall short.