Last week, Senate Majority Leader Chuck Schumer announced the start of a series of lawmaker briefings on AI, focusing on
- the state of AI today,
- where AI is headed, and
- national security implications.
Other governments – including the EU and China – are also beginning to craft their own unique approaches to AI regulation. That is understandable: after all, Goldman recently forecast that up to 300 million jobs – one-fourth of current work tasks – are at risk as more companies adopt AI to boost profit margins, a shift that could trigger mass layoffs among high-paid workers.
While regulatory efforts are still nascent and vary across geographies, regulation, once implemented, will determine AI's development trajectory and, importantly, could mitigate some of the risk from the most disruptive AI use cases. In doing so, it could also substantially shrink the potential demand for AI services, undercutting the upside "use case" narrative – currently the fuel for the next market bubble – which markets treat as effectively unlimited.
That said, Morgan Stanley strategist Ariana Salvator writes in a recent note (available to pro subscribers) that there is currently no consensus for how governments around the world should engage with AI “and perhaps most importantly how they should balance fostering innovation with protecting users’ safety and privacy.” As a result, Morgan Stanley continues to watch this space carefully, “as incremental newsflow on AI regulation might signal where markets may be ahead of themselves.”
In that vein, the bank highlights two key things investors need to know when it comes to AI regulation:
- The global regulatory path forward is uneven and uncertain. The European Union currently leads the United States in regulatory efforts: the EU Parliament has approved its draft AI Act, which sorts AI use cases into a risk-management framework and applies different levels of government oversight on the basis of risk level. In the US, federal agencies have thus far issued guidelines and pursued individual enforcement actions, while Congress remains in learning mode, although a handful of senators have expressed interest in clamping down on certain areas of the technology, such as making sure Section 230 liability protections do not apply. In short, governments remain far from aligning their regulatory approaches even as AI continues to develop across borders.
- Government regulation – if implemented as proposed – could mitigate some of the most severe risks associated with AI. MS thinks of government regulation as a filter that effectively screens out some of the most draconian scenarios imagined for new technology. For example, the EU AI Act as proposed implements a risk-management approach that applies stricter government oversight to certain AI use cases while simply banning others that fall into the "unacceptable" risk category, such as real-time biometric screening or predictive policing systems. Regulation as now proposed could therefore play an important role in determining which of the most disruptive use cases actually come to fruition and which are sidelined as governments get involved. Investors, accordingly, should note these limitations when contemplating AI disruption across markets.
Much more in the Morgan Stanley AI Guidebook available to pro subscribers in the usual place.
Tyler Durden
Sun, 06/25/2023 – 21:00