China and the United States are taking opposite approaches to governing artificial intelligence, and the contrast has big implications for both their global competition and the safety of their citizens.
China has built a robust domestic regulatory system for AI in public and commercial spaces but does not regulate AI use in the military, the opposite of the American approach. The U.S. has published robust rules for AI-driven military systems but has done nothing to regulate the tech industry's hasty release of generative AI models such as OpenAI's GPT-4 to the public.
China’s approach to generative AI elevates political stability over innovation, with strict regulation of the private/commercial sector. On April 11, the Cyberspace Administration of China (CAC) issued draft “Measures for the Management of Generative Artificial Intelligence Services.” These draft measures cover “deep synthesis” technologies, including machine-generated text, images, audio and video content, especially deepfakes.
PRC regulations prohibit AI-driven discrimination, hold Chinese companies liable for any harm, and mandate security assessments before AI models are released. These types of measures would also benefit citizens in democratic countries.
On the other hand, China does not regulate AI military use by the People’s Liberation Army (PLA). The PLA’s top priority is to rapidly apply AI to its missions and achieve what the leadership calls “intelligentization” of warfare. There is no visible framework of trustworthy, transparent or ethical Chinese military restraint to match the caution and control exercised over commercial companies developing AI in China.
By contrast, this year the U.S. published robust regulations on military AI, even as OpenAI, backed by Microsoft, released GPT-4 with no U.S. government regulation of the private sector. In January 2023, the Pentagon updated its directive “Autonomy in Weapon Systems,” stipulating that such systems remain under human control, be transparent and explainable, carry strong cyber protections and clear feedback loops, and be capable of being switched off.
U.S. military AI systems must also meet ethical requirements, meaning they are responsible, equitable, traceable, reliable and governable. There is nothing like this to govern AI use by the PLA.
What risks are China and the U.S. taking in their contrasting approaches? China is slowing AI innovation in its domestic sphere. Pressured to answer ChatGPT (which is not available in China), Baidu rushed out its own large language model, Ernie Bot, in March 2023.
But Ernie Bot is years behind OpenAI’s ChatGPT, makes frequent mistakes, is available only to select Chinese companies, and disappointed most Chinese observers. Those who fear that U.S. domestic regulation will enable China to charge ahead have not paid attention to China’s own stringent domestic regulations.
Unless the U.S. acts to protect Americans from the dangerous effects of untested AI models, determining who is winning the U.S.-China military competition may be irrelevant.