
ASCM Insights

Episode 57: Who's Afraid of AI? Not Us.

Bob Trebilcock: Welcome to The Rebound, where we'll explore the issues facing supply chain managers as our industry gets back up and running in a post-COVID world. This podcast is hosted by Abe Eshkenazi, CEO of the Association for Supply Chain Management and Bob Trebilcock, Editorial Director of Supply Chain Management Review. Remember that Abe and Bob welcome your comments. Now to today's episode.

Bob: Welcome to today's episode of The Rebound. Who's afraid of AI? Not us. I'm Bob Trebilcock.

Abe Eshkenazi: I'm Abe Eshkenazi.

Bob: Joining us today is Brandon Marugg. Brandon is the Chief Operating Officer at ALOM, a manager of global supply chains. Brandon, welcome.

Brandon Marugg: Thanks. Happy to be here. Hopefully, we'll relieve some of that fear around big, bad AI.

Bob: We're glad to have you, and hopefully, we are relieved when this is all said and done. Over the last 20 years, I've had the opportunity to report on any number of new technologies. I think I'm as familiar as the next guy with the technology hype cycle. That's where a new technology is lauded as the silver bullet for whatever ails you, literally. I remember a conference where an executive from Walmart predicted that RFID would ultimately lead to a cure for cancer and world hunger. Shocker. About two years later, Walmart abandoned its RFID strategy. By the way, I was listening to Bill Maher on TV a couple of weeks ago, and he said that AI is going to lead to a cure for cancer and world hunger.

Bob: It feels like we're at that same place. I've never seen anything quite like AI. On the one hand, it is lauded as the most important technological development of the last hundred years. It's one that will revolutionize our lives. On the other hand, well, it could be the end of humanity as we know it. It's all a little scary, but that doesn't mean it has to be. Brandon, let's start with this. Where do you come down on those extremes? End of humanity or the savior of humanity? How are you and your customers viewing AI at ALOM?

Brandon: Yes, it's funny you bring up the RFID thing with Walmart. I remember that. The other thing I think about a lot with AI is robots. Robots hit the scene decades ago, and it was a similar thing. We had a lot of movies about robots out in the world, that kind of thing, too. I don't think we need to start building bunkers in our backyards yet. Right now, we're really viewing it as a tool.

It's a shiny, really impressive new tool, but a tool, nonetheless. It's not something that's super new on the scene. It's been coming in slowly, even though we have these big releases, like when a ChatGPT comes out and everybody freaks out. We've had predictive text in our cell phones for-- gosh, how long now? A decade. That's AI. I think it will end up changing the way that we live in some ways, for sure, changing the way that we predict the outcomes of things. I think that's overall going to be a positive thing, as long as we can hone it and make it the tool that we need.

Abe: Brandon, we're hearing about a lot of different types of AI. As you indicated, we've been experiencing some type of AI, but we really haven't referred to it as AI. Then there's the concept of regenerative AI versus generative AI. From your perspective, what's the difference between the two, and is it mainstream yet, or are we still in the adoption awareness phase?

Brandon: Generative AI is the one that we're the most familiar with. That's your ChatGPTs and the like that are out there. They take care and feeding. What people don't really realize is that ChatGPT is constantly being refined, honed, and adjusted so that it doesn't make recommendations like you should murder your family to make your living expenses better. It has morality issues that are constantly being tackled by humans, by programmers. The next phase of that is that the system itself, AI itself, will participate in that refinement. It'll participate in that regeneration, you could call it, to make it better.

That's what can be pretty exciting, but it can also be pretty scary, because we have to really be in control of that regenerative process. We have not cracked that yet; there's no good example yet of regenerative AI that's really taking hold. One of the common things people talk about with regenerative AI is that it's what we need for all the cars to really start driving themselves. They're talking to each other, they're participating in the refinement of what's going on, and there's nobody holding the wheel, as it were. That's the difference. Regenerative is really something that's taking care of itself more than people taking care of it.

Bob: Brandon, let's talk a little bit about data. For AI to be effective, it's got to have data. I was at an event, I actually think it was at an ASCM event where someone was presenting on AI and big data and was saying that the reason the Chinese have an advantage in AI over the US is they've got 3 billion people to collect data from, and we have 300 million. Just look at the difference.

If we think about that, a couple of questions flow from it. One is, where does the data come from that AI tools are taking in and analyzing? I guess that's from a supply chain perspective. Can it be trusted? Then second, since machine learning is part of this, you're going to have to have enough data and enough history for AI to be effective. What do you see as the ramp-up time for that?

Brandon: Yes, and the way that AI is working right now is it needs massive, massive amounts of data. One of the ways you can think about the difference between an AI and a human brain, for instance, is the human brain actually does a really good job with pretty limited information. AI does a terrible job with limited information. It needs a ton of information to even really be able to operate. You hit it on the head, the information is coming from us, it's coming from the internet. It's really a new type of interface to what we've already put out into the world.

A lot of people say that ChatGPT in particular is a reflection of ourselves and what we put on the web over the past, call it, 30 or 40 years. That's really what the data is pulling from. Then you can have other AI products out there, not necessarily ChatGPT, but AI products geared toward supply chain or another industry like it, where they can isolate the data and say, well, ignore all these things that are not pertinent, ignore your grandmother's texts and those kinds of things, and still hone in on a certain subset of data. That's difficult because, as I just said, they need a lot of information. Without that information, it becomes less and less valuable.

Abe: Brandon, give me a sense. You referenced supply chain a little bit. Historically, where we've seen artificial intelligence or robotics is in doing the heavy lifting on repetitive tasks, the things that traditionally have been on the labor side, not on the knowledge side. Give me a sense of where you're seeing generative AI operate in the supply chain today, and connect it to the worker. Who is this affecting the most?

Brandon: There are lots of examples of AI at work in the supply chain. The most obvious one, the one I think people would associate with it most, is in logistics. That's the "where's my package" type of stuff, where you can speak to an AI chatbot and it can figure out if there's a problem with your order, when it's going to be delivered, those kinds of things. We've all had that kind of interaction with what might be a customer support agent but is instead an AI customer support agent.

I think that's going to continue to increase, and a major interface to supply chain customers will be the end user talking to AI, and then that AI adjusting things on the back end for transportation systems and logistics companies. Contract negotiation is another interesting one. Walmart did an experiment where Walmart International used a chatbot to close deals with suppliers for items such as shopping carts and store equipment. It negotiated payment discounts and extended termination terms, and the chatbot ended up closing deals with 65% of the 89 suppliers that participated.

Even things like that, where you're having contract negotiations, the AI can take on part of that and really increase efficiency. Another area is inventory. Generative AI can enhance the ability of teams to make decisions and to prioritize inventory actions based on a company's history in order quantities, lead times, and demand analysis. This is where we're seeing it right now. I like to think of it as safety stock levels on steroids, where we have been using AI as part of our business intelligence platform at ALOM to identify inventory trends and get ahead of changes in demand by using AI to spot those patterns.
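To make the "safety stock levels on steroids" idea concrete, here is a minimal Python sketch, not ALOM's actual platform; the demand history and drift threshold are invented. It flags a shift in demand and recomputes safety stock from the standard formula of z times the standard deviation of daily demand times the square root of lead time.

```python
import statistics
from math import sqrt

def safety_stock(daily_demand, lead_time_days, z=1.65):
    """Classic safety-stock formula: z * std dev of daily demand * sqrt(lead time).
    z = 1.65 corresponds to roughly a 95% service level."""
    sigma = statistics.stdev(daily_demand)
    return z * sigma * sqrt(lead_time_days)

def demand_shift_detected(daily_demand, window=7, threshold=1.15):
    """Very simple pattern check: flag when the recent window's average demand
    drifts more than `threshold` times away from the longer-run average."""
    recent = statistics.mean(daily_demand[-window:])
    baseline = statistics.mean(daily_demand[:-window])
    return recent > baseline * threshold or recent < baseline / threshold

# Invented daily demand history (units per day) for one SKU; demand steps up near the end.
history = [100, 98, 103, 101, 97, 99, 102, 100, 96, 104,
           118, 121, 125, 119, 123, 126, 122]

if demand_shift_detected(history):
    print("Demand pattern changed; recompute safety stock.")
    print(f"New safety stock: {safety_stock(history[-7:], lead_time_days=14):.0f} units")
```

A production system would layer forecasting and service-level targets on top of this, but the basic loop is the same: detect a pattern change, then re-derive the buffer.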

Bob: Brandon, just a follow-up on examples, and then I'll come to the next question I was going to ask. When you talk to, say, the consulting firms, like Accenture, which is doing a lot of work on AI, one of the areas they're looking at is digital twins. They say they're starting by asking, can we replicate how a machine is operating so we can look at that? Then could we take that further and look at a whole facility?

Then ultimately, could we extend it to bring in our transportation networks, our suppliers, and eventually everything from supplier through customer? Some of the examples you gave us are just where we are today. If you put on your future goggles, where do you potentially see it going, whether that's 3 years, 5 years, or 10 years from now?

Brandon: The digital twin thing is interesting because I do think the next evolution in digital twins that AI will help us with is an entire digital twin of someone's supply chain, just as you intimated. Not just the warehouse operation or the way a machine works, but the entire supply chain being digitized in a twin, so the AI can run different types of models and optimizations against that digital twin and really hone in on those outcomes and be precise with them. I think that is in our future for sure.
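As a toy illustration of what "running models against a digital twin" could look like, here is a hypothetical Python sketch. The nodes, lead-time ranges, and the air-freight scenario are all invented, and a real twin would be fed from live supplier, factory, and carrier data rather than hard-coded numbers.

```python
import random

# A toy "digital twin": each node is just a name and a lead-time range in days.
TWIN = {
    "supplier": (5, 9),
    "factory": (2, 4),
    "ocean_freight": (18, 30),
    "dc_to_customer": (1, 3),
}

def simulate_order(twin, overrides=None, runs=5000):
    """Monte Carlo estimate of the end-to-end lead time for one order."""
    overrides = overrides or {}
    totals = []
    for _ in range(runs):
        total = 0.0
        for node, default_range in twin.items():
            lo, hi = overrides.get(node, default_range)
            total += random.uniform(lo, hi)
        totals.append(total)
    return sum(totals) / runs

baseline = simulate_order(TWIN)
# Scenario: swap ocean freight for a hypothetical 3-5 day air option.
air = simulate_order(TWIN, overrides={"ocean_freight": (3, 5)})
print(f"Baseline lead time: {baseline:.1f} days; with air freight: {air:.1f} days")
```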

Another place where I was thinking about AI really taking hold is integration between systems. You can imagine codeless integration between two systems, say a warehouse management system and an ERP, or a warehouse management system and a transportation management system between two different companies. You can imagine an integration where the two systems are speaking to each other and almost inventing new connections on the fly. System A needs a new parameter from system B. They negotiate that themselves without human interaction.

That new parameter is made available from the system of record to the system of reference. All of a sudden, you just get that additional visibility, or maybe the AI then gets that additional visibility, recognizes a new pattern, and creates a new suggestion about a way to adjust an inventory level or a business process in general that could drive new optimization or enhanced performance or enhanced delivery time, all those different positive outcomes that we're looking for.
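The codeless integration Brandon describes is still speculative, but a bare-bones sketch helps show the idea: one system notices it needs a field, asks the other what it can expose, and maps the new parameter without a human building the interface. Everything below, the field names and the two stand-in system classes, is invented for illustration, and the "negotiation" is faked with a simple name-similarity match rather than an AI model.

```python
from difflib import get_close_matches

class WarehouseSystem:
    """Stand-in for a WMS acting as the system of record."""
    catalog = {"on_hand_qty": 412, "allocated_qty": 38, "expected_receipt_date": "2024-07-01"}

    def available_fields(self):
        return list(self.catalog.keys())

    def read(self, field):
        return self.catalog[field]

class PlanningSystem:
    """Stand-in for an ERP acting as the system of reference."""
    def request_field(self, wanted, other):
        # "Negotiation": find the closest field the other system can expose.
        match = get_close_matches(wanted, other.available_fields(), n=1)
        if not match:
            raise LookupError(f"No counterpart field found for '{wanted}'")
        print(f"Mapping '{wanted}' -> '{match[0]}' with no human-built interface")
        return other.read(match[0])

erp, wms = PlanningSystem(), WarehouseSystem()
print("Visibility gained:", erp.request_field("on_hand_quantity", wms))
```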

Bob: Brandon, when we think of the congressional hearings and so on, there's a lot of talk about guardrails to protect us from AI. One, does that apply to supply chain? If so, if I'm a supply chain executive, what are the kind of guardrails I should be thinking about with regard to AI and my supply chain?

Brandon: The first thing I would say is a general guardrail that I think everybody who's running a company supply chain or not needs to think about with AI and especially the large language models out there like ChatGPT. I know we keep bringing it up, but it is a good example. I like to call that ‘shadow AI’. This is where the company doesn't realize it's using AI, and that's because the employees are taking it upon themselves to do so. They're going into ChatGPT, and they're writing their new return to office policy on ChatGPT, and ChatGPT is doing that for them. This shadow IT has been a thing for a long, long time.

I think you guys are probably familiar with that, which is where departments go around the IT organization, doing cowboy IT and bringing in systems without the blessing of IT or the company as a whole. That's happening big time with AI. You need to have a policy in place, you need to realize that it's happening, and you need to educate people, because there are privacy issues with ChatGPT, confidentiality issues with ChatGPT, and the like. Many of them carry essentially no expectation of privacy. As soon as I use a prompt with ChatGPT, that prompt becomes part of its knowledge base.

If I keep telling ChatGPT that Bob and Abe have the best podcast in supply chain history, over and over again, it'll eventually become its truth. That can obviously be a dangerous thing. You can expose company secrets that way, you can expose security issues that way. Of course, there's PII and all of that mess that we don't even need to get into. That's a big thing, I would say. Then from a security aspect in our industry, the supply chain industry, it's also a major problem. As we all know, supply chain has been a recent, and historical, target for the bad guys out there in cyberattacks. This does not help.

There's a huge security aspect to AI where AI is pretty darn good at creating phishing emails or talking to an individual to get them to give up company secrets, these kinds of things. They can do it en masse, and they can do it very cheap. The more information they have about your company, the easier it is to do it. Those are some big things there. Another thing that I would say is that the proliferation of bad data is a pitfall for what I would call early adapters or early adopters, I should say, of AI. We're used to technology getting better and better and better. The newest version's always better than the older version.

Not so much with AI. AI is more up and down: it gets better, then it takes a step back, then it gets better, then it takes a step back. In fact, some of the more recent versions of ChatGPT were having problems solving simple math questions, like figuring out whether a number was a prime number or not, and would get it right about half the time. Anybody with a calculator would have had a better chance. The previous version of ChatGPT got it right 100% of the time.
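The prime-number anecdote is a useful reminder that this is deterministic arithmetic, the kind of thing a few lines of ordinary code get right every single time, which is why a model answering it correctly only about half the time is so jarring. A quick Python illustration:

```python
def is_prime(n: int) -> bool:
    """Deterministic trial-division primality check; same correct answer every run."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

print(is_prime(104729))  # True; no probability involved, unlike asking a language model
```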

Yes, there's bad data, but as I was talking about before, it's really the care and feeding once again, of these systems that are really refining the outcomes of these prompts to be accurate and to be trustworthy. Right now, it's very difficult to see, it's very difficult to quantify, and you should really take it with a grain of salt.
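One concrete guardrail against the shadow AI and confidentiality problems Brandon describes is to scrub obvious identifiers before a prompt ever leaves the company. The Python sketch below is only illustrative; the patterns cover a few common identifiers and are nowhere near a complete data-protection policy.

```python
import re

# Illustrative redaction patterns only; a real policy would also cover names,
# addresses, customer IDs, contract terms, and other confidential content.
REDACTIONS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(text):
    """Replace obvious identifiers before sending a prompt to an external LLM."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a return-to-office memo and cc jane.doe@example.com at 555-867-5309."
print(scrub_prompt(prompt))
```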

Abe: Brandon, first I need to agree with you. We do believe that this is the best podcast. Definitely, we agree with you there. Give me a sense when you're talking about regenerative AI, what are we looking for in terms of application? What's the potential for this?

Brandon: The potential is huge, obviously. It's exciting for things like demand planning, purchasing, vendor management, and the contract negotiations we talked about a little while ago; they'll all get even better. Really, anything where real-time self-correction, refinement, learning, you could call it, can be advantageous, and that can be in many different aspects. I think with generative AI, call it tier one support can be AI-driven, and most people will accept it. Tier two support, maybe not so much.

Regenerative will get you to tier two or tier three support, where it'll be better than getting a level two or level three support agent on the phone. I think that's not too far in our future. Transportation management systems are another: we could be looking for patterns and anomalies on a dashboard. Regenerative AI is really, really good at pattern recognition, and it'll be much faster than humans at recognizing those patterns. That, of course, comes with more danger. One of the things I was thinking about with relying on AI is that danger of instant action.

One of the things that we pay people in our industry to do is to look at a dashboard and make decisions based on pattern recognition. That's what humans are really good at, but humans are also good at not acting, seeing how things play out a little bit. I think AI has the danger of acting too quickly, making that adjustment because it just went over by that 0.1% when maybe that wasn't the right answer because it's going to come back down. We're going to have bumps in the road like that, and you have to be really careful with those issues.

Bob: Brandon, we had Yossi Sheffi on the podcast a couple of episodes ago. Abe had Yossi at the ASCM Connect in Louisville. One of the things Yossi talked about is jobs. When we asked Yossi, "Will AI robots replace humans?" His first answer was, "Well, talk to me in 10 years." His second answer was a little historical perspective, which is that technology often replaces jobs, but it also creates jobs. When you think of AI and jobs, do you think we have reason to worry? Where are you coming down on that?

Brandon: I would say I'm not worried. I think it's going to change our jobs. It's going to change the way we work. If you go back 150 years, what was it, 30% of the population were farmers, and now it's 0.03%, something like that. It's a similar type of thing, where we're going to create lots of new jobs, and even the jobs that we're doing today will be AI-enabled, but they'll still be jobs that humans are doing.

There's a product coming from Microsoft called Copilot, which I think is a really good name for their product because it is like a copilot. It's helping you, it's a tool, just like Microsoft Word is a tool, but you're still the one adding the value from a human perspective. The human brain is still the most complex thing in the universe, and it's going to remain that way for a while. There's going to be no substitute for it, I think, for a good long time.

I think jobs will change, but I don't think jobs will necessarily go away. People talk a lot about things like inventory analysts or paralegals, where AI will be able to do their jobs. I think we'll have a lot of inventory analysts that use AI, and the inventory analysts that don't use AI, we won't have as many of. That's where I land on that.

I guess that's where a lot of the fear comes from. A lot of the fear comes from job replacement. Once again, I go back to the old robot analogy, where robots were scary because when robots came out, they looked like us.

They had two arms and two legs and a head, and they were faster than us and could do things smarter than us. There's really no reason for a robot to look like a human. I think it's similar with the large language models like ChatGPT; it feels very human. It feels like I'm talking to a human. Really, that's just an illusion. ChatGPT and the large language models are just deciding what the next best word is to respond with. That's all they do. It feels very similar to the way we speak. That's why I think it's so scary. It's a little bit more of a parlor trick than it is a feature.
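Brandon's "deciding what the next best word is" description is, at heart, next-token prediction. The toy Python sketch below shows the mechanics with a hand-made probability table; a real large language model scores tens of thousands of tokens with a neural network conditioned on the whole prompt.

```python
import random

# Toy next-word predictor: given the last word, sample the next word from a
# hand-made probability table.
NEXT_WORD = {
    "supply": [("chain", 0.9), ("and", 0.1)],
    "chain": [("managers", 0.5), ("visibility", 0.3), ("risk", 0.2)],
}

def next_token(word):
    words, weights = zip(*NEXT_WORD.get(word, [("...", 1.0)]))
    return random.choices(words, weights=weights, k=1)[0]

sentence = ["supply"]
for _ in range(3):
    sentence.append(next_token(sentence[-1]))
print(" ".join(sentence))  # e.g. "supply chain managers ..."
```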

Abe: Brandon, last question. This is a conversation we had with Yossi as well, in terms of adoption of new technology. It seems like when a new technology is first introduced, there's a tremendous amount of hype. We know the Gartner hype curve and beyond, but there seems to be an overestimate of the impact in the short term and an underestimate of the impact in the long term. Blockchain, I think, is a good example. You rarely hear about it now, but there's much more happening on the implementation side today. How long do you think before we're in that same place with AI, with it being mainstream for us?

Brandon: Yes, I think it'll slowly work its way in. We're going to have these big things that come out, like Copilot, which is actually coming out this week and will be a big deal. I think that'll be another big bump, and it'll be exciting. Then it'll become normal. AI is already very mainstream. We're interacting with AI every single day. This call is being partly managed by AI to make sure that our voice levels are the same. We just don't think about it.

I think when people ask, "Oh, when is this really going to be mainstream?" what they're really thinking is, "When is AI going to be able to do my job, or when is it going to be this all-knowing thing?" I think that'll come very slowly, very slowly. I would say another 10 or 15 years, something like that, before we see some real changes.

Abe: Thanks, Brandon. That is all the time that we have today. A special thanks to our guest, Brandon Marugg from ALOM. Finally, a special thanks to you for joining us on this episode of The Rebound. We hope you'll be back for our next episode. For The Rebound, I'm Abe Eshkenazi.

Bob: I'm Bob Trebilcock.

Abe: All the best, everyone. Thank you.

Bob: The Rebound is a joint production of the Association for Supply Chain Management and Supply Chain Management Review. For more information, be sure to visit ASCM.org and SCMR.com. We hope you'll join us again.