The Sweet Potential and Hidden Risks of AI: An Investigative RiskTech Journal Report
Breaking News Update
This week’s meeting of the EU-U.S. Trade and Technology Council (TTC) focused on artificial intelligence (AI), particularly generative AI such as ChatGPT. Margrethe Vestager, the EU tech chief, said a draft AI Code of Conduct could be compiled within weeks, creating a foundational structure for industry commitment. The code is expected to include measures such as watermarking and external audits. With the EU's AI Act still moving through the legislative process, Vestager stressed the need for immediate action given the swift advancement of AI technology[7].
Commenting on the meeting's outcomes, John A. Wheeler emphasized the importance of AI audits. "The incredible speed at which AI technology is advancing, coupled with its potential impacts, necessitates the implementation of thorough audit mechanisms. Much like our financial systems, we need to maintain transparency and accountability in AI. With these developments, we're on the right path to establish essential safeguards, but we need to act promptly and wisely."
As a former lead market analyst for integrated risk management (IRM) and the founder and CEO of Wheelhouse Advisors, I've long observed both the transformative potential of AI and its paradox. On the one hand, AI is like an artificial sweetener, enhancing our experiences and offering transformative advantages. On the other, without proper understanding, it can lead to unwanted outcomes. Through this lens, I've been examining how IRM can provide the vital balance we need in the world of AI.
Striking the Right Balance
Much like our cautious approach to artificial sweeteners, AI requires a balanced understanding to prevent disruptive consequences, such as job displacement and erosion of skills. IRM is the strategic framework that can guide us, helping to identify, evaluate, and mitigate inherent AI risks.
IRM: A Solution to the AI 'Black Box' Problem
The "black box" problem of AI, where its decision-making processes are enigmatic, mirrors the uncertainty we face with the long-term effects of artificial sweeteners[1]. Here, IRM solutions show their real value. They enhance transparency, addressing biases and ethical issues that could otherwise go unnoticed.
The Regulatory Landscape: An EU-US Comparison
In my investigations, I've come across regulatory complexities that resemble the intricate health considerations around artificial sweeteners. The EU and the US offer contrasting approaches: the EU imposes stringent regulations to protect user privacy and prevent AI misuse, while the US has, until recently, favored flexibility to encourage innovation[2][5]. This divergence, while providing a diverse landscape, poses a challenging maze for multinational corporations.
Enter the Federal Digital Platform Commission
Now, the US government is recognizing that AI can harm the public in previously unimagined ways. In response, several senators have proposed the creation of the Federal Digital Platform Commission (FDPC), which echoes the role of the FDA in monitoring artificial sweeteners[6]. The FDA, known for safeguarding public health by overseeing food and drug safety, serves as a model for this newly proposed commission, which would regulate digital platforms and AI to ensure their safe, ethical use.
The Guiding Map: Wheelhouse Advisors' IRM Navigator
Navigating this labyrinth becomes less daunting with tools like the IRM Navigator™ Market Map, a product of our own work at Wheelhouse Advisors. Offering a comprehensive overview of leading IRM solutions, it highlights industry frontrunners such as AuditBoard, ServiceNow, Navex, and Archer[3].
These technology leaders provide auditing and certification solutions for AI use, helping companies stay compliant with regulations and best practices. They guide companies through the regulatory maze, foster responsible AI use, and ultimately help us enjoy the sweet potential of AI without the risk of a bitter aftertaste.
Final Words
As Sam Altman, CEO of OpenAI, warns about the potential risks of AI in the future, we're reminded of the importance of careful consumption of both AI and artificial sweeteners[4]. With a balanced approach guided by IRM solutions and the upcoming FDPC, we can savor AI's sweetness without worrying about bitter surprises.
Citations:
[1] Stanford University. (2021). AI100 Report. https://ai100.stanford.edu/2021-report.
[2] The Brookings Institution. (2023). The EU and US diverge on AI regulation: A transatlantic comparison and steps to alignment. https://www.brookings.edu/research/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/.
[3] Wheelhouse Advisors. (2023). IRM Navigator™ Market Map. https://www.wheelhouseadvisors.com/irmnavigator-market-map.
[4] The Guardian. (2023). OpenAI CEO Warns of Unforeseen Dangers of Artificial Intelligence. https://www.theguardian.com/technology/2023/mar/17/openai-sam-altman-artificial-intelligence-warning-gpt4.
[5] Harvard Business Review. (2023). Who Is Going to Regulate AI? https://hbr.org/2023/05/who-is-going-to-regulate-ai.
[6] Nextgov. (2023). Senators Introduce Bill to Create Digital and AI Oversight Agency. https://www.nextgov.com/emerging-tech/2023/05/senators-introduce-bill-create-digital-and-ai-oversight-agency/386580/.
[7] Reuters. (2023, May 31). EU tech chief sees draft voluntary AI code within weeks. https://www.reuters.com/technology/eu-tech-chief-calls-voluntary-ai-code-conduct-within-months-2023-05-31/.