Breaking the Bottleneck is a weekly manufacturing technology newsletter with perspectives, interviews, news, funding announcements, manufacturing market maps, 2025 predictions, and more!
💥 If you are building, operating, or investing in manufacturing, supply chain, or robotics, please reach out at aditya@machinafactory.org. I’d love to chat!
🏭 If you were forwarded this and found it interesting, please sign up!
How 25 Years in OT Revealed Key Risks in IT-Led AI Deployments🎙️💬
“Until that model is running on the plant floor and someone is making a decision based on its recommendations, you haven’t seen a lick of ROI. Everything that happened before that was a science experiment.”
You spent 25 years at RoviSys working in MES, historians, and other Level 3 systems before launching the Industrial AI division. How did that OT background shape your approach to AI?
When we launched the Industrial AI division in 2019, customer conversations were our main focus. We identified a clear market opportunity: some of our customers had small data science teams exploring AI, and new AI vendors were entering the OT space. But we kept hearing the same issue. These teams could develop ML models and predictive models that worked. However, after creating the models, everyone would wonder, “Who knows how to put this into operation on the plant floor?” There was a fundamental gap between the skills of data science teams and those necessary to keep a plant running 24/7. Customers told us they had success building models but struggled to move beyond the pilot stage.
So, what does it really take to operationalize an ML model on the plant floor? It’s not just about getting it up and running. You need a comprehensive approach that includes organizational change management. Consider this: you’re asking an operator who has spent over a decade learning to trust their senses to now trust an AI model. That demands significant organizational change. One reason many projects fail? Data science teams didn’t involve operators from the beginning. We know that for these projects to succeed, you must include the people who use them daily from the start. Anything seen as being forced from above is usually rejected outright.
We understood how to handle the people aspect. We recognized that processes would need adjustments, often requiring updates to the MES system and integration with the control system. We learned how to deploy technology on the plant floor that must operate 24/7. We knew all models experience drift, so a strategy for retraining and redeploying is essential. Importantly, from a technology perspective, despite many AI vendors promoting a cloud-based, closed-loop message, nearly none of our customers would accept that. Our customers still insist that any ML models deployed must be located either in the DMZ or the plant control network and remain completely disconnected from the internet. With this understanding, we decided, “Let’s get into this industry because we can make a difference with a different story.”
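The point above about drift, retraining, and redeployment can be made concrete. As a hypothetical sketch (not RoviSys’s actual tooling), a retraining trigger can start as simply as watching a key model input’s distribution shift away from the baseline it was trained on; the function names and the one-standard-deviation threshold here are illustrative assumptions:

```python
import statistics

def drift_score(baseline, recent):
    """Absolute shift in the mean of a model input, in units of the
    baseline standard deviation. A crude stability check; real
    deployments would use richer per-feature statistics (PSI, KS tests).
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(recent) - mu) / sigma

def needs_retraining(baseline, recent, threshold=1.0):
    # Flag the model for retraining when a key input drifts too far
    # from the distribution it was trained on.
    return drift_score(baseline, recent) > threshold
```

Because the customer’s models sit in the DMZ or on the plant control network with no internet access, a check like this has to run locally and feed an on-site retraining workflow rather than a cloud pipeline.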
You emphasize “walk before you run” and quick wins. What’s your playbook for designing that first pilot to deliver fast ROI and build C-suite confidence for larger, more transformative bets?
We know we’ll probably get only one shot. The individuals championing AI within the plant are taking a risk. They’re going to get one shot, so we have to pick a high-impact use case. Typically, customers approach us with a list of 7, 10, or 20 use cases. We quickly narrow that list down to one or two that will be high-impact.
First, we need the data to be available. Without data, we’re in trouble. Customers might say, “That’s a great use case, but we don’t have any data on this.” In that case, it needs to be tabled. We can run an AI readiness project to ensure we’re collecting data, then reevaluate in 6 or 12 months. We perform exploratory data analysis to examine the data and determine its predictive value. We must be pretty confident that we can build models based on that data. Afterwards, we evaluate the expected ROI, which requires customer input, but we already have a good idea of where to look for it.
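That exploratory-data-analysis step can be sketched as a first-pass screen: rank candidate historian tags by how strongly they correlate with the quality target before committing to a model. This is a hypothetical illustration of the idea, not the actual methodology; the tag names and Pearson-correlation ranking are assumptions:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between one candidate signal and the target."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_tags(tag_data, target):
    """Rank historian tags by |correlation| with the quality target,
    a rough screen for predictive value before building any model."""
    scores = {tag: abs(pearson(vals, target)) for tag, vals in tag_data.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A screen like this is only a sanity check, but it quickly separates “we have usable data” from “table it and run an AI readiness project first.”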
I categorize industrial AI into three groups: traditional AI, autonomous AI, and generative AI. For traditional AI projects, our typical target is an annual ROI of $100,000 to $200,000. That’s the ideal range to make these projects worthwhile. You see a return in about a year, and everything beyond that is profit. For larger, more complex autonomous AI projects, we usually aim for an annual ROI of $750,000 to $1 million. And we are achieving that.
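The payback arithmetic behind those targets is simple, and making it explicit shows why the $100,000–$200,000 range works. The figures below are illustrative, not actual project costs:

```python
def payback_months(project_cost, annual_roi):
    """Months until a project pays for itself; everything after is profit."""
    return 12 * project_cost / annual_roi

# e.g. a traditional-AI project costing $150k that returns $150k/year
# pays back in roughly twelve months, matching the "return in about
# a year" rule of thumb.
```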
We’re searching for low-hanging fruit that ticks all the boxes. We want the easiest win. If the data looks good but the project requires major changes to five other systems, including the ERP system, it probably won’t be a top priority. Finding the right use cases is an art, not a science. Still, the goal is to achieve that first success, get people excited, and make a positive impact on the plant floor.
One thing I try to get clients to understand is that until that model is running on the plant floor and someone is making a decision based on its recommendations, you haven’t seen a lick of ROI. Everything that happened before that was a science experiment. Operationalizing and selecting the right project, and then getting it into production, is where success happens. The ROI only comes once it’s actually running in the plant. That’s what we’re always working toward, and we’re aiming to get there as quickly as possible.
Many AI vendors claim “immediate value from your data,” but you emphasize operationalizing AI at the edge and directly controlling processes. Why do so many IT-centric AI approaches fail in industrial environments?
We’ve already discussed organizational change management, so I won’t revisit it. But I would say many of these projects fail because the client has only spoken to the AI vendor. The AI vendor often portrays a story about how easy this will be, how turnkey it is, and how, in three days or five hours, you’ll suddenly see an impact, which is often exaggerated, but that’s what they hear. Not having an outside, independent voice is another common issue because you haven’t set expectations correctly. To move beyond just producing dashboards, you need to become much more tactical. You must dive into the details of integrating with control systems. Decisions are made on the plant floor and in the control room, where personnel monitor HMIs and review trends generated by the historian. Those are the people making real-time decisions. Dashboards are usually not the primary basis for their decisions.
When we’re operationalizing, we focus on providing data from the AI model either directly on the HMI or on an adjacent screen through an easy-to-read web interface. That’s really what it’s about. We may also be gathering new data where digitization on the plant floor is needed, to capture data that hasn’t yet been collected in the control system. Currently, we have a customer who manufactures pigments, and we’re launching an AI rollout as part of an AI readiness project. The issue is that they want to connect the batch they’re working on to quality results, work orders, and other data, because that’s how the AI can generate insights. Without that, it’s not going to work. The problem is, they’re not capturing any of that at the machine. The machines are old and very disconnected. We’re building a kiosk that can gather at least basic data and link it back to what’s running on the machine at that moment. Would an IT-centric, top-down project have considered any of that? My guess is no.
The final point, and I realize it may seem self-serving, is to involve an independent OT system integrator who can guide you through this process. We work well with IT teams. This isn’t an either-or situation; it’s not about confrontation. We have solid relationships with IT teams, and we genuinely want to help bring their vision to life on the plant floor. We know how to do it and understand the potential challenges that can arise when working on these projects.
You partner with niche AI startups like AMESA rather than building everything in-house. What qualities do you look for in those partnerships, and do you prefer startups that white-label or co-brand?
I have a lot of messages for startups, and this could be an article in itself. I recently attended a roundtable hosted by Momenta that brought together diverse voices from its startups, where I was a guest presenter, and I shared a variety of messages there. However, if I had to choose a couple of key points, I would say we’re looking for startups that bring something unique to the table. If you’re another startup saying, “We’re going to monitor data and build dashboards,” there are already hundreds of platforms doing that. What makes you unique? AMESA stood out because I’m not aware of any other startups working on deep reinforcement learning in the industrial space. It was so distinctive that it was worth dedicating time to work with them.
Startups that haven’t conducted market research drive me crazy. If I ask how you compare to other vendors, you’d better have a compelling story for why you’re different or similar to existing vendors in the space. Startups that act like they’re the only ones in the market blow my mind. When I ask simple questions about major players and platforms, and they haven’t even done basic market research, that’s a problem. Why am I asking? Because those are the questions my customers will ask me right from the start. “We already have System X. Why do we need System Y?” Why should I have to come up with the answer? The startup should have it.
Regarding white labeling, we’re not particularly interested in that. We don’t own any intellectual property of our own, and we’re not keen on white labeling other companies’ technology. Our goal is for you to develop a brand. We want you to build recognition within our industry. We understand that’s challenging, but it’s part of what customers expect. Make it easier for me as an SI. If I have to introduce the entire pitch, explain your company’s history, and all that, why should I do it? That’s your responsibility. Establish brand recognition so that when I visit, at least the customer thinks, “Oh yeah, I’ve heard of them. I saw them at a trade show. What’s their deal? Are they legitimate?” That gives me a starting point to build from. I don’t believe I’m asking for much.
The final thing, and I know this is tough for startups, is something I look for when evaluating potential partners. We probably work with around 150 vendors, all listed on our website, and we maintain strong relationships with them. As a system integrator, we see our vendor ecosystem as essential to our operations because we integrate various systems. We leverage off-the-shelf software that covers 50-60% of our needs, and we handle the rest. A key factor I consider is a robust pipeline. If you’re a startup hoping RoviSys will introduce you to 20 opportunities, that’s just not how we operate. We’re happy to add you to our toolbox, and if the right opportunity comes up, we’ll recommend the platform to you. But we need to see demand generation directly from the startup.
Many startups say the right things: “We have a services group, but that was just to bootstrap operations. Now we really want to move to a partner-integrated focus.” That’s fine. But part of that has to be, “We’re overwhelmed. We need you to provide services for us.” I spoke with a startup earlier today that fits this description perfectly. They’re a robotics company with a unique, very impressive capability I haven’t seen before in this space, and they’re independently successful. I suggested we partner up, and they replied, “Yeah, that’d be really cool. We just had a project out in Asia, and boy, we struggled to get that implemented.” I said, “We’ve got a whole Asia operation with five offices across five different countries. Let’s talk.” Some smaller SIs are eager and prepared to partner with a startup, bringing them into every customer, and that’s fine. However, when you come to talk to RoviSys, it’s a different conversation.
You’ve identified “unsolvable problems”, where traditional automation falls short, as prime opportunities for deep reinforcement learning. What industrial problems are you most actively seeking partners or new technologies to tackle today?
Deep reinforcement learning has unique capabilities, and we aim to leverage it to solve complex problems where humans are currently involved. Many people will compare it to model predictive control or advanced process control, technologies that have been around for 20 years. What I typically say is that’s a good mental model because autonomous AI, my term for deep reinforcement learning, can do everything MPC can do, but more. You can’t use MPC in non-process industries; however, you can utilize autonomous AI in discrete industries. You can’t use MPC to build production schedules, but you can utilize autonomous AI to create them.
If we’re discussing control, we can create an autonomous AI agent that operates a process like an expert operator. When I say that, I mean that even when faced with situations never encountered before, deep reinforcement learning has proven to be capable of navigating new challenges and still achieving an optimal solution. The roots of deep reinforcement learning trace back to DeepMind. When you look at AlphaGo and AlphaZero, programs designed to compete against grandmasters, you realize those chess experts were throwing everything they could think of at the algorithm to try to stump it. It faced scenarios that never appeared during training, yet it still won every game. The only way that’s possible is because it can handle unfamiliar situations and still produce the best outcomes.
Beyond controllers, you mentioned dynamic scheduling. Absolutely. However, I prefer to step back and discuss orchestration issues. These are situations where you don’t even really have a production schedule. I’ll give you an example. We have a life sciences customer that uses thousands of liters of what’s called media, which is essentially sugar water that the bugs feed on while producing the pharmaceutical ingredient. That media turns out to be the most expensive input to the process, not labor, not electricity, but media. And it’s a one-time use only.
This customer’s process is that when they run low on media and need another 200 liters, they grab the next vessel. It doesn’t matter if it’s 1,500 liters. They use the 200 liters and are legally required to dump the rest. They’re not allowed to reuse that vessel, so they dump it right down the drain, at a cost of $100,000 to $200,000 per batch. This was easily a million-dollar-a-year operation. The thing is, it’s not anyone’s job to manage those media vessels. There’s no person working 24/7 to sit in the booth and say, “Use this one next, use that one next.”
We built an autonomous AI agent, and sometimes the correct answer is: if you wait 20 minutes, that vessel will be empty and available. And it’s 500 liters, not 1,500. That’s the kind of decision a human would make. When discussing dynamic scheduling, I prefer to see it as an orchestration problem because, in many cases, it’s no one’s specific job. Still, you can achieve an ROI of half a million to a million dollars a year with an orchestrator like that.
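The core of that orchestration decision can be sketched as a cost comparison: since leftover media must legally be dumped, waiting a few minutes for a smaller vessel can beat grabbing a large one now. This is a simplified greedy illustration of the decision, not the actual agent (which is deep reinforcement learning); the function and field names are hypothetical:

```python
def pick_vessel(need_liters, vessels, max_wait_min=30):
    """Pick the media vessel that wastes the least for a given demand.

    vessels: list of (name, size_liters, available_in_min) tuples.
    Leftover media in a vessel must be dumped, so minimizing the
    difference between vessel size and demand minimizes waste.
    """
    candidates = [v for v in vessels
                  if v[1] >= need_liters and v[2] <= max_wait_min]
    # Minimize dumped media first, then waiting time.
    return min(candidates, key=lambda v: (v[1] - need_liters, v[2]))
```

For a 200-liter demand, this prefers waiting 20 minutes for a 500-liter vessel (300 liters dumped) over immediately using a 1,500-liter one (1,300 liters dumped), which is exactly the kind of call an attentive human would make if it were anyone’s job to make it.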
High energy variability and reducing energy use are also common applications. This matters more as novice operators enter the field and experienced ones retire: experienced operators don’t just aim to meet specifications. They also have secondary goals: to run at the lower end of the spec to minimize product giveaway, and to reduce energy consumption while still staying within specifications. Novice operators are often just happy to get everything within spec. Once they do, they stop adjusting. But autonomous AI isn’t satisfied with simply meeting specs; that’s just a baseline it knows it must meet. Instead, it optimizes for secondary goals like cutting energy use and minimizing product giveaway. These are the kinds of cases that really excite me. I believe autonomous AI can make the biggest difference in these areas.
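The idea of spec-as-baseline with secondary objectives maps naturally onto how a reinforcement-learning reward might be shaped. The sketch below is a hypothetical illustration of that structure, not an actual reward function; the constants and penalty weights are assumed for the example:

```python
def reward(quality, spec_min, spec_max, energy_kwh):
    """Reward-shaping sketch: staying in spec is a hard baseline.

    Within spec, the agent is additionally rewarded for running lean:
    less product giveaway (quality near the lower spec limit) and
    lower energy consumption.
    """
    if not (spec_min <= quality <= spec_max):
        return -100.0                    # out of spec: large penalty
    giveaway = quality - spec_min        # product above the lower limit
    return 10.0 - giveaway - 0.1 * energy_kwh
```

Under a shaping like this, an agent that merely lands in spec earns less than one that also trims giveaway and energy, which mirrors the gap between a novice operator and an experienced one.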
To contact Bryan, reach out to him on LinkedIn here. He’s always happy to help and share valuable insights.