(Bloomberg) -- Officials from more than 60 governments, including the US, China and South Korea, met this week in The Hague to discuss the ethical and legal consequences of using artificial intelligence in the military.

The Responsible AI in the Military Domain (REAIM) summit, held in a city known for hosting tribunals that try those accused of violating the rules of war, was the first gathering of its kind to discuss AI’s role in security. It was an initial attempt to bring together government officials, policy makers, military personnel and industry to consider ways to potentially regulate the use of artificial intelligence in defense technologies.

Two days of wide-ranging discussions resulted in more than 60 countries signing a “call to action” — a first step to lay out norms for a field still in its infancy.

Here are some takeaways:

Time Pressure

Speakers stressed the need for speed in coming up with regulations. “There are huge gaps to be filled from regulating, to up-skilling, to accountability, and meanwhile, the technology is racing forward,” said Marietje Schaake, international policy director at Stanford University’s Cyber Policy Center and a former European Parliament member.

Ed Kronenburg, the Dutch diplomat serving as special envoy for the REAIM summit, said that while the conference was in the works long before the launch of OpenAI’s ChatGPT chatbot, recent public progress in AI has underscored how quickly regulation needs to arrive. “Developments are going so fast, especially the examples in the business community, that it’s difficult for society to keep up, let alone to talk about legislation,” he said.

Ethical Engineering 

Regulation aside, it’s critical for defense firms to have their own code of ethics and let that dictate their guidelines for development, participants said.

Take Saab AB, the Swedish defense company that has been using AI for years. “We have started in areas that are not sensitive from an ethical point of view,” Chief Technology Officer Petter Bedoire said in an interview. “Our current way of working is an interpretation based on the humanitarian law and some guidelines for development. We make sure that our engineers understand what areas are safe and in what areas should we be really careful.”

Humans Have Ultimate Responsibility 

Central to the debates at the summit was the continued role that humans should play in any autonomous weapons systems. “It’s about who’s taking ultimate responsibility for using the systems,” Kronenburg said, “and how do you want to regulate that.”

The responsibility humans have when deploying artificial intelligence in the military was also acknowledged in the call to action, released by the foreign and defense ministers from the countries attending the summit.  

“There is an understanding in the industry, and I think in the military, that meaningful human control would strengthen your automated system, not weaken it,” Amnesty International Secretary General Agnes Callamard told Bloomberg. “Some may have a different threshold for what meaningful human control means. We can discuss that, but I think there is a common feeling that this is the right entry point.”

The human rights group warned that the conference and the current debate around using AI in conflict focused too heavily on the context of the war in Ukraine. “Because of Ukraine, everyone here thinks of themselves as the good guy,” Callamard said. “It’s failing to understand that in an era of political volatility, countries can shift.”

©2023 Bloomberg L.P.