Breaking the Bottleneck | Getting to Success with Manufacturing AI
By: Aditya Raghupathy, Michael Risse, and Dan Hebert
Breaking the Bottleneck is a weekly manufacturing technology newsletter with perspectives, interviews, news, funding announcements, manufacturing market maps, 2025 predictions, and more!
💥 If you are building, operating, or investing in manufacturing, supply chain, or robots, please reach out to aditya@machinafactory.org. I’d love to chat!
🏭 If you were forwarded this and found it interesting, please sign up!
Getting to Success with Manufacturing AI 🏭🗞️🔬 📚
By: Aditya Raghupathy, Michael Risse, and Dan Hebert (ControlsPR)
Since late 2022, AI has captured the imagination of enterprise leaders worldwide, dominating roadmaps and commanding attention in boardrooms across industries. Walk into a manufacturing facility today, though, and you’ll encounter a different reality: the promise of AI hasn’t yet translated into measurable results. An MIT analysis of interviews, surveys, and roughly 300 public AI deployments reveals a sobering truth: 95% of organizations see no quantifiable impact on their bottom line, with only 5% of pilots actually driving meaningful revenue growth or cost reductions.
The disconnect becomes even more puzzling when considering that, in 2024, 78% of manufacturers surveyed by the Manufacturing Leadership Council and Deloitte stated that AI is central to their digital strategy. But here’s where the gap widens into a chasm. Deloitte’s latest research shows that only 29% of manufacturers run AI and machine learning at the facility or network level, and a mere 24% deploy generative AI at scale.
The gap between AI hype and delivered value shows up clearly in the vendor landscape itself. Declining headcount across many manufacturing AI vendors tells the story. Companies like Petuum have shut down entirely, while others like SparkCognition (now Avathon) are scrambling to rebrand and reposition themselves. Established players like C3 AI are facing earnings disappointments after losing nearly $1 billion over the last four years. If customer deployments of AI software were truly succeeding, revenues would follow, and these vendors would be thriving and expanding their teams. Instead, LinkedIn data reveals the opposite trend: shrinking workforces at traditional AI vendors that are struggling to demonstrate value.
Identifying the Challenges
“ChatGPT and large language models aren’t succeeding because users have become AI experts; they’re succeeding because the software connects with users’ existing skills and experience through intuitive interfaces that feel natural rather than burdensome.”
This chasm between AI innovation and widespread deployment poses a significant challenge for all parties: manufacturing companies, vendors, and society as a whole. The stakes extend beyond productivity to critical sustainability targets, workplace safety standards, and competitive positioning in global markets. Industries need AI to deliver a positive impact because the technology will underpin the future of manufacturing, and companies need insights that continuously enable faster, more accurate data-driven decisions to improve key production metrics and outcomes. Manufacturing organizations cannot simply work harder. They must work smarter by leveraging AI to make better decisions at scale.
So, what’s really behind the widespread failure of AI deployments? The conventional wisdom points to data and trust issues, and those factors absolutely matter. AI models require intensive data access, cleansing, contextualization, and movement, which has spawned the industrial data ops market and vendors like HighByte, Cognite, and Litmus Automation. Trust remains another hurdle, as plant employees naturally resist black-box AI solutions, though this resistance typically diminishes when analytics are defined by end users implementing their own data science tools and infrastructure.
However, the real culprit behind AI’s manufacturing struggles runs much deeper than issues with data or trust. The biggest obstacle to successful AI deployment is that there simply aren’t enough people to make it work. The New York Times reports 400,000 unfilled manufacturing jobs in the United States alone, with projections suggesting the industry may need up to 3.8 million additional workers through 2033. This creates a “newer and fewer” workforce reality, compounded by the fact that existing plant employees are already stretched thin managing their current responsibilities.
The employee shortage creates a fundamental paradox. While companies tackle data challenges and IT leadership increasingly demands AI integration in manufacturing plants, it’s not possible for these innovations to land effectively without available personnel to implement and manage them. Without sufficient human resources to deploy, monitor, and optimize AI systems, even the most sophisticated solutions will fail to deliver their promised impact.
This reality is driving a critical inflection point in the AI vendor landscape. We’re witnessing a transition from a first generation of AI manufacturing vendors that created software dependent on users (both in-house and outsourced) to a second generation of autonomous software designed to accelerate and empower users rather than burden them. This shift represents our best hope for improving upon that dismal 95% failure rate.
An alternative path, as voiced in recent analyst and influencer posts, envisions a new generation of upskilled plant employees trained to deploy and use AI innovations. In our view, such an approach would only exacerbate the workload problem for already scarce specialist resources. This thinking completely misses the lesson of the most successful AI adoption story in history: ChatGPT and large language models aren’t succeeding because users have become AI experts; they’re succeeding because the software connects with users’ existing skills and experience through intuitive interfaces that feel natural rather than burdensome.
What manufacturing needs isn’t more training for overwhelmed workers. Instead, solutions require fundamentally reimagining how AI collaborates with manufacturing teams, moving from systems that demand extensive change management and training to autonomous software that delivers value without adding to anyone’s workload.
Enter “Persona as a Service”
The “Persona as a Service” model offers a new path forward. This approach uses digital agents representing specific manufacturing roles, each capable of performing targeted tasks and delivering measurable business outcomes within specialized subsectors. Picture an Operator Agent that continuously monitors production parameters, a Reliability Agent that predicts equipment failures before they happen, a Diagnostics Agent that troubleshoots quality issues in real-time, a Maintenance Agent that optimizes scheduling based on actual asset conditions, and a Quality Agent that catches defects faster than any human inspector ever could. This level of specialization will be required to make these agents usable for a typical manufacturing employee. In contrast, current solutions tend to be general-purpose and can only deliver value to a select few.
These digital agents speak the plant team’s language, connect seamlessly to plant resources and applications such as APM, MES, and CMMS, and run 24/7 on real-time data, reasoning through and solving specific, goal-oriented challenges. Unlike traditional AI tools that require constant feeding and care by experts conversant in tech and manufacturing, this autonomous software operates independently to deliver and then, more importantly, act upon insights alongside the workforce, focusing on the highest-value strategic priorities.
This approach fundamentally reorients the conversation away from the sophistication of AI models toward the quantifiable impact on critical manufacturing KPIs. Success gets measured not by algorithmic complexity but by verified outcomes published in systems of record. Measurement of these outcomes includes clear baselines, interventions, and results that any plant manager can understand and trust. This evidence-based approach builds credibility during a period of widespread alarm fatigue while helping manufacturers address labor shortages by essentially offloading persona workloads, thereby reducing the human resource burden.
Palantir has already pioneered this model through its Forward Deployed Engineers (FDEs), who effectively operationalize the “Persona as a Service” concept with their tech-enabled services. Rather than simply licensing software and expecting customers to figure it out, Palantir embeds specialized engineers directly into manufacturing operations. The FDEs become temporary members of each functional persona (operator, maintenance, quality) to understand what success looks like, then build agents that can deliver those outcomes independently. This allows manufacturers (recently Boeing and Lear) to access world-class AI capabilities without needing to hire, train, and retain specialized data scientists and AI engineers. All that context and expertise, delivered in hours rather than months, is what vendors with “Persona as a Service” offerings aim to provide.
This outcomes-focused approach raises an interesting question about pricing models. Could manufacturing agents evolve toward outcome-based pricing that ties payments to verified results like dollars per downtime minute avoided, OEE improvements, or defect reductions? Such a model would lower customer adoption risk and incentivize vendors to deliver measurable performance. Though it’s been a rare approach in manufacturing, it would create strategic clarity around value propositions while allowing market dynamics to determine optimal pricing over time.
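As a rough illustration of how such a contract might be structured (all rates, baselines, and the formula itself are hypothetical assumptions, not taken from any real vendor agreement), an outcome-based fee could be computed directly from the verified results the article describes:

```python
# Hypothetical sketch of outcome-based pricing tied to verified results.
# Every rate and baseline below is illustrative, not from a real contract.

def outcome_fee(downtime_minutes_baseline: float,
                downtime_minutes_actual: float,
                rate_per_minute_avoided: float,
                oee_baseline: float,
                oee_actual: float,
                rate_per_oee_point: float) -> float:
    """Fee = payment for downtime avoided + payment for OEE points gained.

    Both components are floored at zero so the vendor is paid only for
    improvement beyond the agreed baseline.
    """
    minutes_avoided = max(0.0, downtime_minutes_baseline - downtime_minutes_actual)
    oee_points_gained = max(0.0, (oee_actual - oee_baseline) * 100)
    return (minutes_avoided * rate_per_minute_avoided
            + oee_points_gained * rate_per_oee_point)

# Example: 300 baseline downtime minutes vs. 180 actual at $50/minute avoided,
# plus OEE rising from 0.62 to 0.65 at $1,000 per OEE point ≈ $9,000 total.
fee = outcome_fee(300, 180, 50.0, 0.62, 0.65, 1000.0)
```

A model like this makes the baseline-intervention-result measurement discussed later in the article directly auditable by both parties, since the fee is a pure function of numbers published in systems of record.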
The Second Generation: Autonomous Software Is Already Here
The second generation of AI manufacturing software is not a future state. It’s already here in the offerings of dozens of vendors in the market today, with these vendors spanning a range of maturity levels. Established players like Uptime AI and Arch Systems have proven track records deploying autonomous solutions in production environments. New entrants, such as Juna AI and OpsMate (founded by PTC’s former executive, Howard Heppelmann), are bringing fresh approaches to addressing manufacturing challenges. Meanwhile, existing vendors like Tulip are shifting their platform toward autonomous capabilities with products like AI Composer for Video, Agent Library, and an Agent Builder, enabling users to design and deploy their own agents. Investor enthusiasm confirms the trend, with Insight Partners backing Apprentice.io and Sequoia investing in Squint.
What these vendors share, regardless of their vertical market focus or time on the market, is that they go beyond algorithms and large language models into offerings that autonomously reason, learn, and provide production insights and recommendations. One category demonstrating this evolution is what Peter Reynolds of ARC Advisory calls the Artificial Intelligence Optimization (AIO) market. This represents the next generation of Model Predictive Control (MPC) and Real-Time Optimization (RTO) offerings, where AI software optimizes process and asset performance in real-time to achieve defined priorities. Think of it as “operator as a service,” with 24x7 optimization of specific plant assets.
Imubit has deployed autonomous optimization agents in downstream oil and gas facilities, delivering 2-4% improvements in refinery margins worth tens of millions of dollars annually per facility. Phaidra’s agents optimize cooling systems in data centers, reducing energy consumption by 15-25% while maintaining performance requirements. Intelecy’s agents are deployed across food, beverage, and oil and gas facilities, transforming raw sensor data into production insights within hours of connection. In one deployment, Intelecy’s autonomous agents identified over €10 million in annual savings opportunities by optimizing process efficiency and reducing unplanned downtime, clearly demonstrating how quickly autonomous AI can create measurable business value. AMESA (formerly Composabl) and Intuigence are two more recent examples of companies attempting to do this with agentic infrastructure.
In this emerging landscape, AI doesn’t just assist operators; it supplements them for specific asset optimization tasks, implementing true second-generation autonomous software. The results speak for themselves: these companies are expanding, adding headcount, and scaling across multiple customer sites while traditional AI vendors struggle to survive.
Beyond process optimization, a second wave of manufacturing AI vendors is emerging across different functional areas with reasoning-enabled software solutions. Companies like Squint are building operator guidance agents that provide real-time work instructions and quality checks. Augury’s diagnostic agents monitor industrial equipment to predict failures weeks in advance. These specialized agents demonstrate how autonomous software can address specific manufacturing challenges while delivering measurable business impact.
Preparing Your Organization for Second-Generation AI Software
Technology is typically the smallest part of the equation when deploying innovation because culture eats strategy for breakfast. As organizations consider adopting second-generation AI software, the fundamental organizational and infrastructure issues that have impeded AI adoption won’t disappear overnight. Manufacturers need to take proactive steps to prepare for this new software generation, addressing both the known challenges associated with AI, including trust, data, and resource allocation, as well as the organizational transitions required for success.
Building the Right Data Foundation
The Manufacturers Alliance found that 47% of manufacturers view data fragmentation as a significant obstacle to implementing AI. Critical operational data remains trapped in disconnected systems, as operational technology, such as SCADA and PLCs, operates in isolation from information technology systems, including ERP and MES. This fragmentation prevents the holistic operational view that autonomous software requires. This fragmented data is often of poor quality, inconsistent, or missing essential metadata.
To address this issue, smart manufacturers are implementing Unified Namespace (UNS) architectures, essentially creating a common data language that enables all plant systems to communicate effectively. These architectures create real-time data fabrics that decouple data sources from consumers while standardizing semantics across industrial automation layers. The goal involves continuously enriching data to reconcile operational views (as they are operated, as maintained, and as designed) at asset, line, and plant levels. Effective governance through catalogs, access control, versioning, and lineage creates a managed layer that enables rapid agent trials without violating compliance requirements.
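To make the idea concrete, here is a minimal sketch of what that common data language can look like in practice. UNS deployments commonly arrange topics in an ISA-95-style hierarchy; the enterprise, site, and asset names below are invented for illustration and do not refer to any specific deployment:

```python
# Minimal sketch of an ISA-95-style Unified Namespace topic hierarchy.
# All site/area/line/asset names are hypothetical examples.

def uns_topic(enterprise: str, site: str, area: str, line: str,
              asset: str, metric: str) -> str:
    """Build a UNS topic path: enterprise/site/area/line/asset/metric."""
    return "/".join([enterprise, site, area, line, asset, metric])

# Any consumer (MES, historian, AI agent) subscribes to topics by pattern
# rather than integrating point-to-point with each data source:
topic = uns_topic("acme", "plant-01", "packaging", "line-3",
                  "filler-2", "temperature")
# → "acme/plant-01/packaging/line-3/filler-2/temperature"
```

Because producers publish into the shared namespace and consumers subscribe by pattern, sources and consumers stay decoupled, which is exactly the data-fabric property the architecture is meant to provide.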
A data strategy must also account for a critical duality: manufacturing data serves both as a corporate resource, requiring central analysis by business analysts, data scientists, and managers, and as a real-time operational resource at the plant level for engineers and operations teams. This dual requirement drives the adoption of hybrid architectures. Leading data operations vendors, such as HighByte, Litmus Automation, and Cognite, provide critical capabilities for data migration to cloud data centers while simultaneously supporting algorithms that run at the edge to enable ad hoc local analytics, where real-time decisions are made.
Aligning Employee and Investment Incentives
In plants and factories, moving beyond generic operations metrics means measuring operators, maintenance teams, and energy teams on specific outcomes. These include dollars per downtime minute avoided beyond baseline performance, or percentage yield improvements above established baselines. These familiar derivatives of Overall Equipment Effectiveness (OEE) are understood by the plant managers, supervisors, and operators who can directly influence them. When Imubit deployed optimization agents at a major refinery, they tied operator bonuses to the margin improvements their agents delivered. This created alignment between human employees and artificial intelligence, sustaining change far beyond the initial pilot program.
Just as data strategy requires attention to both plant and corporate levels, employee availability and incentives for innovation efforts need a similar dual focus. Many smart manufacturers are establishing Centers of Excellence (CoEs) to provide an organizational backbone across their manufacturing sites, featuring an executive sponsor, a cross-functional core team, a digital storefront, and a managed backlog for use cases.
These CoEs concentrate limited technical resources on winnable battles, build deep domain expertise within the organization, and establish the credibility required to expand into adjacent operational problems. The centralized structure also helps address resource constraints by creating clear pathways for collaboration between plant-level personnel and corporate AI and IT teams, as well as offering more flexibility to employees.
Critically, this model ensures AI systems avoid the standard failure mode of training on raw sensor data that lacks the rich operational context and frontline human expertise necessary for correct interpretation. This makes AI decision-making transparent and explainable to plant personnel, who need to trust and work alongside these systems. This significantly lowers adoption barriers while addressing cultural resistance and job displacement fears, which often represent the most significant non-technical barriers to AI project success.
Building Trust in Insights and Actions
The trust gap between plant employees and the insights provided by AI-based solutions is understandable. When safety and operational targets are on the line, “the algorithm recommended it” isn’t sufficient justification. This trust issue manifests everywhere, from vendor websites emphasizing “explainable AI” to the long history of unfulfilled AI promises that employees may have experienced, to the current flood of false positives and alarms from AI systems. These issues are an inevitable outcome when large amounts of data meet numerous algorithms without sufficient operational context or physics-based expertise.
Closing the gap requires time and demonstrated performance. Trust must be earned, though the process can be accelerated. Second-generation AI vendors build trust through systems that learn from human-in-the-loop interactions. When operators override a recommendation or validate a suggestion, the system improves, creating a virtuous cycle of better insights and greater confidence.
For closed-loop systems where AI can take direct action by changing settings and manipulating controls, two distinct paths have emerged in the market. The first offers graduated autonomy. Users enable closed-loop functionality once they’re comfortable with system insights developed in the cloud, transitioning from recommendations-only to approval-required, and finally to autonomous control at the edge as confidence builds. The second path focuses tightly on specific asset classes or vertical markets, allowing AI agency and control to be precisely defined and bounded. This tight scope makes it easier for users to understand the system’s limits and interact with its insights, building confidence through clarity. As users build trust in agents to make well-defined decisions within known boundaries, initial resistance transforms into productive collaboration.
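The graduated-autonomy path above can be sketched as a simple state machine. The level names, the acceptance-rate promotion rule, and the 95% threshold are all illustrative assumptions rather than any vendor's actual policy:

```python
# Sketch of graduated autonomy for a closed-loop agent.
# Levels, promotion rule, and threshold are hypothetical assumptions.

from enum import IntEnum


class AutonomyLevel(IntEnum):
    RECOMMEND_ONLY = 1     # agent suggests; humans act
    APPROVAL_REQUIRED = 2  # agent acts only after human sign-off
    AUTONOMOUS = 3         # agent acts within bounded limits at the edge


def promote(level: AutonomyLevel, accepted: int, total: int,
            threshold: float = 0.95) -> AutonomyLevel:
    """Advance one level once operators accept enough recommendations."""
    if total > 0 and accepted / total >= threshold \
            and level < AutonomyLevel.AUTONOMOUS:
        return AutonomyLevel(level + 1)
    return level


# Operators accepted 96 of the last 100 recommendations, so the agent
# graduates from recommend-only to approval-required operation.
level = promote(AutonomyLevel.RECOMMEND_ONLY, accepted=96, total=100)
```

Encoding the ladder explicitly means every expansion of the agent's authority is a deliberate, auditable step tied to demonstrated performance, which is how the trust-building the article describes becomes operational rather than aspirational.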
The Autonomous Industrial Future
The manufacturing industry stands at an inflection point. The first generation of AI software demanded more from already-stretched workforces, creating tools that required constant human attention and expertise. This second generation of autonomous software, designed for specific applications, embodied in the Persona as a Service model, completely reverses that equation by augmenting human capabilities. It captures institutional knowledge, learns from operational context, and reasons through complex operational challenges independently.
Today’s specialized agents optimize manufacturing processes by delivering specific insights that help manufacturers operate more efficiently. Tomorrow’s networked systems will orchestrate entire value chains, from procurement through production scheduling to quality control and logistics, creating industrial ecosystems that continuously learn, adapt, and improve without constant human intervention.
The manufacturers and vendors who master this transition by building, deploying, and proving the value of autonomous agents in high-impact roles won’t just participate in the future of manufacturing; they’ll define it with what Microsoft calls a Frontier Firm. The question isn’t whether this transformation will occur. The only question that remains is which side of that divide your organization will occupy.
To reach out to Michael and Dan, please feel free to find them here on LinkedIn!
Michael Risse - Advisor & Investor
Dan Hebert - Principal at Controls PR