Just a couple of weeks ago, I wrote about many of the very real concerns over artificial intelligence (AI), which primarily stem from the fact that AI models are trained on inadequate and inherently biased data. Problems include getting facts wrong, spreading misinformation, making up details — and even straight-up lying to coax desired behavior from humans. This week, these issues (and more) are being echoed by about 2,000 industry leaders and researchers who have signed an open letter calling for a moratorium on the development of many AI tools.
“We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4,” proclaims the letter, which was published by the Future of Life Institute, an organization whose mission is to “steer transformative technology toward benefitting life and away from extreme large-scale risks.” The letter has been signed by some big names: Apple cofounder Steve Wozniak; Tesla CEO Elon Musk; AI expert and Turing Award winner Yoshua Bengio; researchers from AI lab DeepMind, an Alphabet subsidiary; and professors representing Berkeley, Harvard, MIT and Princeton.
The letter outlines the signatories’ unease about the speed at which AI is developing: “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. … Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”
It goes on to urge that powerful AI systems be developed only once it is certain that their effects will be positive and their risks manageable. To achieve this goal, the letter advises that a set of shared safety protocols be established, rigorously audited and overseen by independent outside experts.
AI safety guidelines have long been recommended: The UK’s Alan Turing Institute published an ethics and safety guide in 2018; Montreal-based Mila is a community of scientists and interdisciplinary teams committed to socially responsible and beneficial AI; and the Asia Society has published reports aimed at raising AI safety standards in Southeast Asia.
Informed for the future
Despite their concerns, I’m sure all signatories would argue that AI is worthwhile — and each has its own interests in the technology. For instance, Tesla has been trying to use AI to power its self-driving car functionality for years, and Apple recently acquired a startup that uses AI to compress videos.
As for those of us in supply chain, we're embracing AI for everything from digital twins to hyperautomation to procurement negotiations, as detailed in these stories from the ASCM Insights blog. Check them out, plus so much more fascinating content by tech visionaries and top supply chain thought leaders. You’ll discover diverse perspectives about some of the most important industry topics to keep you fully informed on all things supply chain.