In the 1980s and 1990s, many large manufacturing companies pursued an offshoring strategy — not always because a careful analysis showed a clear link between offshoring and achieving their business objectives, but because their competitors were doing it. Within a few years, companies had moved significant production overseas, often at the expense of supply chain flexibility. The problem wasn’t offshoring itself, but rather that leaders were starting with the wrong question and not creating clarity around how offshoring fit within their overall strategy. Federal agencies are making the same mistake with AI.
The Trump administration’s AI Action Plan, unveiled in July 2025, has created urgency for agencies to demonstrate progress on artificial intelligence. But urgency without clear direction produces activity, not outcomes. Across government agencies, leaders are asking, “What’s our AI strategy?” when the question should be, “How can AI enable our strategy?” Here’s why.
What happens when pressure replaces strategy
The offshoring rush offers a cautionary tale that federal leaders should revisit. For many manufacturing companies, offshoring was an entirely reactive decision driven by intense pressure from Wall Street to demonstrate efforts to reduce costs.
Executives would announce an offshoring strategy, consultants would be hired, operations would be moved, and the real cost implications would often emerge only over time: hidden costs in coordination and quality control, and lost flexibility to withstand disruptions. In many cases, the operational changes created strategic vulnerabilities across supply chains.
The companies that were most successful with offshoring started with their strategic objectives and considered offshoring as one lever to help reduce costs or diversify supply chains. Treating it as a tool to improve cost performance, rather than as an imperative in itself, is what determined whether offshoring became a competitive advantage or an expensive distraction.
Today’s AI adoption race shows the same warning signs. Agencies are under pressure to demonstrate AI progress, and the easiest path is typically to launch pilots, create AI working groups, and report on the number of use cases identified. However, this focus on activities may or may not produce outcomes that matter to the agency’s mission.
The hidden cost of strategy-free AI adoption
When AI initiatives aren’t rooted in organizational strategy, predictable problems emerge. First, use cases cluster around process optimization rather than transformation. Teams identify ways to make existing workflows slightly faster or cheaper. While these improvements are real, they are only incremental. The transformative potential of AI to reimagine workflows entirely and change how work gets done remains untapped, because no one has clarified what transformation should look like in service of strategic goals.
Second, adoption becomes fragmented. Different business units pursue different tools to solve different problems with no coherent thread connecting them. This fragmentation makes it nearly impossible to build organizational capability in AI. Each initiative becomes a one-off experiment rather than a building block toward strategic objectives.
Third, and most damaging, employees disengage. When people are told to use AI without understanding how it advances the mission they care about, the mandate feels arbitrary. Especially with the heightened media coverage of AI-driven job displacement, this can lead to resistance. The goal of AI adoption is to reduce administrative burden and increase productivity. But without strategic framing, it can produce the opposite: reduced productivity as people spend time on tools they don’t understand in service of objectives that aren’t clear.
What strategy-first AI adoption looks like in practice
Consider two hypothetical federal agencies, both adopting the same AI tools.
Agency A starts by asking, “What’s our AI strategy?” They might form an AI task force, evaluate vendors, select platforms, and roll out training. They then track metrics on tool adoption and use cases identified. After a year, they report on how much of the workforce has used AI tools and the number of use cases documented. But when asked how those results tie back to the agency’s strategic mission, the answer is likely vague.
Agency B starts by asking, “What are our strategic imperatives?” and “Where are we seeing barriers to progress, or opportunities to accelerate?” Only then do they explore where AI could help remove those barriers or accelerate those opportunities. They might create mixed-level teams to test AI tools in sandbox environments, fail fast, and share learnings. Success is measured by progress against strategic priorities, not by adoption rates. After a year, they report that a smaller percentage of employees have used AI tools regularly, but those employees have eliminated major bottlenecks. These case studies and the results they achieved inspire many more people to adopt AI tools.
Which agency got more value from their AI investment? Which agency is likely to continue to build momentum on AI?
Why top-down alone fails
Successfully adopting AI across an organization requires both top-down strategic clarity and bottom-up experimentation happening simultaneously. Senior leaders must provide a strategic framework and ask themselves questions like: Which of our objectives could AI accelerate? Where should we focus resources? What does success look like?
However, leaders can’t identify every valuable AI application alone. Employees closer to the tactical work understand where manual processes create delays, where data exists but isn’t being leveraged, and where decisions could be faster with better information. Their insights are critical to making AI adoption practical rather than theoretical.
Successful AI integration requires leaders to provide strategic direction and resource allocation, and employees to experiment, learn and identify opportunities. This only happens if leaders create safe spaces for experimentation and reward employees who experiment, even when their experiments aren’t successful.
To further activate meaningful participation, federal leaders should engage employees in solving strategic challenges, not simply adopting technology mandates. When leaders invite people to join committees or create evaluation teams that include diverse perspectives, the connection between AI experimentation and mission advancement becomes clear.
Managing the human side of technological change
More than with any previous technology implementation, AI success depends on human behavior. Two employees with identical objectives and access can produce vastly different outcomes based on how they engage with the technology. Success depends on creativity, experimentation, and integration into daily workflows.
AI adoption is, therefore, fundamentally a behavior change challenge. Employees must understand how AI serves the strategic objectives they care about, and that it is not simply an attempt to replace their roles.
AI is evolving much faster than traditional management systems were designed to handle. Those systems were built to produce reliable, repeatable performance, not rapid change. Federal leaders may need to operate outside standard practices by using dynamic experimental teams, engaging more people in finding solutions, and encouraging peer-to-peer communication where employees share discoveries with each other.
If agencies avoid the mistakes of previous management fads, the AI Action Plan represents an opportunity to accelerate mission delivery. The agencies that recognize AI transformation as a people challenge rooted in strategic clarity — not just a technology implementation — will be the ones to truly realize value from their investments.
Gaurav Gupta is head of research and development at Kotter.