(Bloomberg Opinion) -- As the U.S. and China appear headed for a digital cold war, competing policy approaches to the same technologies are emerging. Artificial intelligence is a prime example: Policy makers in democratic societies should, in theory, be making sure it isn’t used to promote intellectual conformity or to persecute minorities and dissidents.

The idea that AI should be ethical and benefit society has led to the emergence of multiple versions of basic principles, drafted by governments, academics and industry groups. Last year, Chinese researchers Yi Zeng, Enmeng Lu and Cunqing Huangfu identified 27 such codes and made a website on which they can be compared. It makes a somewhat eerie impression, as if the various codes form a data set on which an AI algorithm could be trained to spew forth ethical principles for its peers.


On Wednesday, the Organization for Economic Cooperation and Development, a wealthy nations’ club set up to study best practices and develop policy recommendations, came up with its own AI guidelines. Michael Kratsios, a White House technology policy aide, made a speech in their support at the event.

The OECD recommendations focus on making the work of AI systems transparent to humans. If any decisions are made with the help of AI, those affected should know about it, understand how the decision was reached and have the opportunity to appeal it. If these recommendations are followed, and in the OECD’s case they usually are, companies won’t be allowed to build black-box AI algorithms and apply them to any decision-making that can affect a person’s rights — for example, creditworthiness assessments, stop-and-frisk policing or thousands of other fields where AI, with its capacity for analyzing oodles of data, can come in handy.

That’s a smart approach. As Jon Kleinberg from Cornell University and Sendhil Mullainathan from the University of Chicago showed in a just-published paper, the simpler the prediction rules built into an algorithm, the less fair its decisions. They pointed out that any simplified model can be strictly improved, and that improvement increases equity. So it makes sense to introduce rules that make it impossible to delegate final decisions to a simplified model, to build in opportunities for transparent human review — and to provide for the accountability of those who design the algorithm.
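The intuition behind that finding can be sketched with a toy, entirely invented example: a lending rule that scores applicants on a single variable (income) can be both less accurate and less equitable than a rule that also considers a second signal (savings). All names, groups and numbers below are hypothetical, chosen only to make the arithmetic work out cleanly; they are not drawn from the paper.

```python
# Toy illustration (hypothetical data): a one-variable "simplified" credit
# rule vs. a richer two-variable rule. The richer rule is both more accurate
# and approves the two groups at equal rates.
from dataclasses import dataclass

@dataclass
class Applicant:
    group: str      # demographic group, illustrative only
    income: int
    savings: int
    repays: bool    # ground truth: would this applicant repay?

APPLICANTS = [
    Applicant("A", 90, 10, True),  Applicant("A", 60, 10, False),
    Applicant("A", 85,  5, True),  Applicant("A", 55,  5, False),
    Applicant("B", 40, 50, True),  Applicant("B", 30, 20, False),
    Applicant("B", 45, 40, True),  Applicant("B", 35, 10, False),
]

def simple_rule(a: Applicant) -> bool:
    # "Simplified" model: income is the only input.
    return a.income >= 50

def richer_rule(a: Applicant) -> bool:
    # Adds a second signal, capturing creditworthy low-income savers.
    return a.income + a.savings >= 80

def accuracy(rule) -> float:
    return sum(rule(a) == a.repays for a in APPLICANTS) / len(APPLICANTS)

def approval_gap(rule) -> float:
    # Difference in approval rates between the two groups.
    rates = {}
    for g in ("A", "B"):
        members = [a for a in APPLICANTS if a.group == g]
        rates[g] = sum(rule(a) for a in members) / len(members)
    return abs(rates["A"] - rates["B"])

print(accuracy(simple_rule), approval_gap(simple_rule))   # 0.5 1.0
print(accuracy(richer_rule), approval_gap(richer_rule))   # 1.0 0.0
```

On this contrived data the income-only rule gets half its decisions wrong and approves only group A, while the two-variable rule is correct everywhere and approves both groups equally — the kind of simultaneous accuracy-and-equity improvement the authors argue is always available when a model has been artificially simplified.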

If this approach is incorporated into Western nations’ legislation, with adequate protections for people who may be judged by AI, it should rule out any variation on China’s infamous social credit system. As Rogier Creemers from Leiden University in the Netherlands put it, the eventual goal of that system is to monitor people and organizations “in order to automatically confront them with the consequences of their actions.” Today, the Chinese system rarely makes decisions algorithmically (the use of credit ratings is one of the few exceptions), but the practice may eventually become widespread because the Chinese government isn’t too worried about human rights and legal recourse.

The OECD recommendations have the potential to empower governments in a different way. AI systems used in business, education, health care and other areas will need to be tested and found compliant with transparency and accountability principles.

“Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems,” the recommendations say. “To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled-up, as appropriate.”

If AI is going to be as widespread as technologists and politicians would have us believe, its compliance testing will most likely grow into an industry and a regulatory branch. Companies deploying artificial intelligence will need plenty of human employees to review complaints and provide reports to users on how adverse decisions were reached — and governments will need to hire dedicated bureaucrats to handle cases where consumers still feel wronged.

“We firmly believe that a rush to impose onerous and duplicative regulations will only cede our competitive edge to authoritarian governments who do not share our same values,” Kratsios said in his OECD speech. But, if implemented, the group’s recommendations would impose quite a burden on AI-using companies. They would need to figure out whether, in some cases, an old-style decision process run by humans might not be more effective than the deployment of both AI and a human support structure. After this cost-benefit analysis, the Western world may end up with less AI than rival authoritarian regimes — and that could be a blessing, too.

To contact the author of this story: Leonid Bershidsky at lbershidsky@bloomberg.net

To contact the editor responsible for this story: Stacey Shick at sshick@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Leonid Bershidsky is Bloomberg Opinion's Europe columnist. He was the founding editor of the Russian business daily Vedomosti and founded the opinion website Slon.ru.

©2019 Bloomberg L.P.