Europe is charging ahead in the regulation of AI. That is not good for AI, for Europe and for the world. The regulation of AI is necessary and desirable. But it must embrace, not squeeze, the potential of AI to connect, uplift and emancipate humanity.
Seven months have passed since GPT - the "generative pre-trained transformer" - showed it could turn Internet data into rudimentary artificial intelligence (AI). Never before has a technological innovation been picked up so quickly by so many, courtesy of existing Internet platforms and mobile data, combined with a business model that is affordable or even free. It is still too early to predict the direction and impact of this transformation, but the potential of AI for just about every area of knowledge, skill, creativity and activity is huge - no one doubts that.
One recognizable pattern did immediately repeat itself: America excels at entrepreneurship and Europe excels at regulation. Business magazine Forbes locates 44 of the 50 most promising AI companies worldwide in the United States, with the rest split evenly between Canada, the United Kingdom and Israel. Note: that list captures only the strongest innovators; it does not even count America's dominance in Big Tech. Together, the five American tech giants are investing hundreds of billions of dollars in AI - a truly historic R&D bonanza. And there are now six of them: the American chip maker Nvidia is approaching the still scarcely believable trillion-dollar market capitalization mark thanks to AI.
Over to the European continent. There, breakthrough companies and tech giants are few and far between, but regulators are everywhere. The European Commission is trying to fast-track a modified AI Act through the Brussels mill. That draft was originally supposed to address delicate frontier areas such as medical applications, bank loans and HR processes. Now the Commission wants to make foundation AI models responsible for the risks of all their applications, even applications they have no control over, on top of restrictions on data use.
Europe’s proactivity is not neutral. All over the world, people are discussing how humanity can best deal with artificial intelligence. Even technology companies are calling for legal boundaries and guardrails. At the recent G7 summit in Japan, the "Hiroshima AI process" was launched to explore international regulation and oversight.
Europe wants to set a European precedent right away. That precedent would guide the rest of the world, as Europe's regulatory zeal in technology typically goes further than elsewhere. It would force AI companies to bend to European standards, perhaps even to establish data centers in Europe for their European operations. It is no coincidence that both Sam Altman and Sundar Pichai, the CEOs of OpenAI and Google respectively, toured Europe on a charm offensive last week.
The European Union is using its large consumer market politically as leverage for European power in a sector where Europe lacks economic competitiveness. It did the same before with landmark EU laws on data privacy and Internet platforms. Thus the European bureaucracy indirectly forces American technology companies to invest in Europe or risk billions in fines, while directly imposing its regulation as an international norm. This distorts the relationship between Europe and the United States, notwithstanding a transatlantic technology council that is supposed to nurture harmonization.
The European regulatory push also undermines AI's potential for the rest of the world. Like the Internet itself, AI and its underlying data mining benefit from a transnational level playing field without borders. The best AI regulation would be global, including global oversight. The Internet would never have become the Internet we know and love without the World Wide Web, which provides the same information and services across borders, supported by global standards. Europe should join America in pursuing an open and shared AI future. In doing so, we would also offer an inspiring prospect for the world in times of cold war and geopolitical conflict.
Instead, Europe sides with ... China, which is also preparing its own AI law. The Chinese act actually resembles the European one. China also wants to make AI producers liable for the risks caused by AI consumers. That makes a lot of sense for a country where technology companies are an extension of the communist state and where the Internet exists only under a veil of censorship and behind a Chinese wall. China wants a “regime AI” with handpicked AI companies as mandated censors. That Europe spontaneously and autonomously prefers an approach analogous to China's says a lot about how values are drifting in our world and about the dogmatism inside the Brussels bubble. The regulation of AI is necessary and desirable. But it must embrace AI's potential to connect, uplift and emancipate humanity, not squeeze it to death.
Marc De Vos is a professor at the Ghent University School of Law (Belgium), a fellow of the Brussels-based Itinera Institute and a strategy consultant.