Embracing AI means we must mitigate risk to firms, industries, consumers and society

As Americans adopt AI, government and industry must work together and be flexible

According to a recent poll by the Certified Financial Planner Board of Standards, nearly one in three investors (31%) would be willing to use artificial intelligence (AI) as their financial advisor.

For those unfamiliar, "AI" is commonly used as a catch-all term for the set of technologies and designs that make machine intelligence possible. In its broadest sense, AI refers to any technique that enables computer software to mimic human intelligence.

The "artificial narrow intelligence" (ANI) system, which presently exists in commercial applications, is software that is based on computational statistics used to create models that can help make decisions by human beings or other machines at ever-increasing speeds.  


Yet ANI still lacks the cognitive abilities and other "human-like" attributes required to be classified as "artificial general intelligence" (AGI), i.e., software with cognitive abilities similar to those of humans, a "sense" of consciousness, and human-like capacity for learning, thinking and processing information for decision-making.


Nearly one third of investors would be willing to have AI make trades for them. (iStock)

This ANI software is today embedded across a range of industries, firms, products and services, including online search engines, digital cameras and customer service interfaces, as well as ChatGPT, the large language model and generative AI application making recent media headlines.

Our forthcoming research study, to be published by the Center for Growth and Opportunity at Utah State University, focuses on one key question: What governance approach offers American society the most effective institutional framework for regulating and minimizing the potential technological risks associated with AI, while simultaneously encouraging the development and implementation of AI's technological benefits for American society?

Given the present state of AI, our research leads us to conclude that, because American society has an important stake in the ongoing development and implementation of ANI across industries, and because of the limitations of public regulation and the "pacing problem", i.e., the quickening pace of technological development and the inability of governments to keep up with the dynamic state of new knowledge about this technology's capabilities, the answer is to embrace flexible and adaptable meta-regulation.

This meta-regulation encompasses regulatory activities occurring in a wider regulatory space under the auspices of a variety of institutions, including the state, the private sector and public interest groups.

Meta-regulation addresses the tension inherent in exercising "social control", i.e., public regulation, over a still-emerging technology, while encouraging, through private governance initiatives, the potential commercial benefits accruing from future AI innovation.

Furthermore, Congress, in its ongoing efforts to regulate AI, should recognize that private governance is a major component of addressing basic issues related to the future of U.S. AI development and implementation across industries.  

Each industry will have unique issues related to commercializing ANI, and therefore will be ideally positioned to know the "best practices" to be instantiated in its standard-setting processes.

For example, in the digital media industry, major companies — including Alphabet (Google AI Principles), Meta (Five Pillars of Responsible AI) and Microsoft (Microsoft Responsible AI Standard (v2)) — have in recent years issued explicit private governance policies and/or principles of AI to be used in their business operations.  

What is crucial for American consumers is that companies have effective, operational policies delineating ANI operating practices and performance, along with clear, consumer-accessible disclosure of how well the firm is abiding by these industry ANI best practices.

In many cases, private-governance, market-driven mechanisms will significantly assist in the meta-regulation of firm-level ANI, including company AI liability insurance, reputational effects generated by real-time social media coverage of firm behavior, and stakeholder inquiries into that behavior, all of which can have positive or negative effects, amplified by general media coverage, on company financial performance.

One research approach to more effectively embracing private governance is "polycentric governance," a theoretical construct developed by Nobel laureate in economic sciences Elinor Ostrom.

Since governance in a democratic society requires a variety of tasks to be accomplished, each involving different types of "decision centers," both public and non-public, governance as a whole involves many decision centers acting in interdependent ways. Because it accommodates meta-regulation, the polycentric governance approach could be a valuable tool for evaluating the efficacy of non-governmental institutions' role in AI governance models.

Private governance is the critical component in mitigating AI technological risk to firms, industries, consumers and American society. The challenge remains, however, whether American technological leadership will recognize, invest in, and continue to maintain (and improve) an effective regulatory equilibrium.

This regulatory equilibrium, between government-mandated social control and, for the immediate future, responsible ANI innovation grounded in industry self-regulation, market forces and firm adherence to best practices, will allow the still-to-come benefits of this revolutionary technology to accrue to American society.

Thomas A. Hemphill is the David M. French Distinguished Professor of Strategy, Innovation and Public Policy in the School of Management, University of Michigan-Flint.

Phil Longstreet is an associate professor of Management Information Systems in the School of Management, University of Michigan-Flint.