U.S. President Joe Biden’s much-anticipated executive order (EO) on artificial intelligence set a new tone for the U.S. — markedly different from the Trump administration’s “AI with American values” that defined technical progress primarily as maintaining a lead over China.
In contrast, the Biden administration is emphasizing a global perspective, focusing on multilateral initiatives to support privacy-enhancing technologies, new safety standards, reporting requirements, and technical research to address global problems. The release of the EO in late October was timed to coincide with U.S. Vice President Kamala Harris’ participation in the international AI Safety Summit in the U.K.
This is good news on four levels. First, a go-it-alone nationalist strategy for AI was never going to work. Scientists and technologists from around the world have been contributing to the design and refinement of the surprisingly articulate large language models like ChatGPT that have suddenly caught the public’s attention. Taming these wild beasts so AI output can be identified through watermarking and related techniques is going to test the capacities of the brightest scientific minds. American researchers can use all the help they can get.
Second, the strength of these systems is dependent on the diversity of the language data on which they are built. The notion that the capacity of these systems should be built on a one-nation database reflecting in turn American or Armenian or Zimbabwean values is self-defeating.
Third, the motivating concern of governmental attention in the first place is risk. It is hardly likely that the malicious use of these systems or a carelessness in their design would be limited to American players. Both monitoring the behavior of these computational systems and inventing new means of assessment will benefit from global attention and participation.
Fourth, to the relief of many, the posture of the Biden administration is to resist the draconian systems of regulation, prohibition, and licensing proposed by some in Congress. The pivot on the Potomac is toward facilitating new means of assessment and pressure-testing these models so both the public and the marketplace can evaluate these competing tools for efficient and accurate information processing.
One central focus of the new executive order is privacy-enhancing technologies (PETs). There is ample and largely justified concern that advanced computational systems will collect and integrate every possible detail about our personal lives to be used in ultra-targeted online marketing. The PET concept turns this around and proposes personal privacy agents that can negotiate on our behalf, and with our guidance, to limit what information is made public.
You don’t have the time, the inclination, or the legal training to make sense of the 50,000-word personal privacy policies of various online platforms. But your AI personal privacy agent does. Further, since your personal information is valuable (marketers pay about $1,000 per online adult per year in the U.S. for these targeted access links), it might even negotiate a piece of the action for whatever information you are willing to share.
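To make the idea concrete, here is a minimal sketch of the negotiation rule such a privacy agent might apply. Everything here is an illustrative assumption — the preference fields, the dollar figures, and the function names are hypothetical, not part of any real PET standard.

```python
# Hypothetical sketch of a personal privacy agent's negotiation rule.
# All field names and dollar figures are illustrative assumptions.

USER_PREFERENCES = {
    "email": {"share": False},                               # never share
    "purchase_history": {"share": True, "min_price": 5.00},  # dollars/year
    "location": {"share": True, "min_price": 20.00},
}

def respond_to_request(requested_fields: list[str], offered_price: float) -> dict:
    """Decline protected fields; share the rest only if the platform's offer
    meets the user's asking price, otherwise counter with that price."""
    shareable = [f for f in requested_fields
                 if USER_PREFERENCES.get(f, {"share": False})["share"]]
    asking = sum(USER_PREFERENCES[f]["min_price"] for f in shareable)
    if offered_price >= asking:
        return {"share": shareable, "accepted_price": offered_price}
    return {"share": [], "counter_offer": asking}

# Email is off the table entirely; location alone is worth $20 to this
# user, so a $10 offer draws a counter-offer rather than a deal.
print(respond_to_request(["email", "location"], offered_price=10.00))
```

The point of the sketch is that the agent, not the user, reads the request and enforces the user's stated terms — exactly the division of labor the PET proposal envisions.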
Does that sound unrealistic? A computerized agent cutting a little financial deal? Well, that is how the marketers do it in the first place in creating those pop-up ads that follow you around the web. It is called programmatic advertising — platform computers negotiating with marketing computers to flash a timely ad, all done in a fraction of a second.
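The machinery behind programmatic advertising is, at its core, a rapid auction. The sketch below shows a simplified second-price auction of the kind commonly used in real-time bidding; the bidder names, segments, and prices are invented for illustration and do not reflect any particular ad exchange's protocol.

```python
# Simplified second-price auction, the common mechanism behind
# real-time programmatic ad bidding. All names and bids are illustrative.
from dataclasses import dataclass

@dataclass
class BidRequest:
    """Stand-in for the ad slot a publisher puts up for auction."""
    user_segment: str   # interest category the platform has inferred
    slot_id: str

def run_auction(request: BidRequest, bidders: dict[str, dict[str, float]]) -> tuple[str, float]:
    """Highest bidder wins but pays the runner-up's bid (second price).

    `bidders` maps advertiser name -> {user_segment: bid in dollars}.
    """
    bids = {name: offers.get(request.user_segment, 0.0)
            for name, offers in bidders.items()}
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

request = BidRequest(user_segment="running_shoes", slot_id="homepage_banner")
bidders = {
    "ShoeCo":  {"running_shoes": 2.40},
    "GadgetX": {"running_shoes": 1.10},
    "BookHub": {},  # no interest in this user segment
}
winner, price = run_auction(request, bidders)
# ShoeCo wins the slot and pays the second-highest bid, $1.10
```

In production systems this entire exchange — request, bids, clearing — completes in tens of milliseconds, which is what makes the "fraction of a second" negotiation in the paragraph above possible.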
Of course, this will be a task for private enterprise, not the federal government. But the White House from time to time gets to set the tone of American policy and signal what is important. In this case, it’s a pivot in the right direction.
W. Russell Neuman is professor of media technology at NYU. His latest book is Evolutionary Intelligence: How Technology Will Make Us Smarter (The MIT Press, 2023).