Lessons for AI Governance From Crypto’s Decentralized Ethos

Strikingly, the loudest proponents of the dystopian AI narrative are technologists themselves, people who until recently were labeled techno-utopians by their critics.

A letter signed this week by more than 350 people, including Microsoft founder Bill Gates, OpenAI CEO Sam Altman and former Google scientist Geoffrey Hinton (sometimes dubbed the "Godfather of AI"), contained a single declarative sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

In an open letter sent two months earlier, Twitter and Tesla CEO Elon Musk, along with 31,800 other signatories, called for a six-month pause in AI development to allow society to assess the risks. In a TIME editorial published the same week, Eliezer Yudkowsky, considered a founder of the field of artificial general intelligence (AGI), explained that he had declined to sign that letter because it didn't go far enough. He called instead for a militarily enforced shutdown of AI labs, lest we create a sentient digital being that kills us all.

People around the world will find it hard to ignore these experts. For many, it is now a given that AI poses a real threat to human existence. The question is: how do we mitigate it?

I've written previously that I believe the crypto industry, working alongside other technologies and thoughtful, innovation-friendly regulation, can play an important role in society's efforts to keep AI on a path that serves humanity. Blockchains can help prove the provenance of data, counter deepfakes and other forms of disinformation, and enable collective rather than corporate ownership of AI. But even setting those considerations aside, the crypto community's "decentralization mindset" offers a valuable perspective on the dangers of concentrated ownership of AI.

AI Risks: A Byzantine View

What do I mean when I say “decentralization”?

Crypto is founded on a "don't trust, verify" philosophy. Set aside the money-grabbing operators of centralized token casinos who have discredited the industry; serious crypto developers run endless "Alice and Bob" thought experiments to probe every threat vector and failure point a malicious actor might exploit. Satoshi Nakamoto created Bitcoin by tackling the Byzantine Generals Problem, a game-theoretic scenario about how to trust information from parties you don't know.

Decentralization is viewed as the best way to mitigate those risks: when no central entity has the power to determine the outcome of an exchange between two parties, both can trust the information it produces, and the risk of malicious interference falls.
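The intuition behind the Byzantine Generals Problem can be sketched in a few lines of code. This is a toy, hypothetical illustration (not Bitcoin's actual proof-of-work mechanism, which solves the problem probabilistically): the classic result is that loyal generals can reach reliable agreement only when traitors number fewer than a third of the group, i.e. n ≥ 3f + 1.

```python
# Toy sketch of the Byzantine Generals Problem: n generals, f traitors.
# Loyal generals relay the true order; traitors relay the opposite.
# A simple majority vote recovers the true order only when the loyal
# generals outnumber the traitors (the full interactive protocol
# requires the stricter classic bound n >= 3f + 1).
from collections import Counter

def majority_decision(votes):
    """Return the order reported by the most generals."""
    return Counter(votes).most_common(1)[0][0]

def simulate(n_loyal, n_traitors, true_order="attack"):
    votes = [true_order] * n_loyal + ["retreat"] * n_traitors
    return majority_decision(votes)

# 4 generals, 1 traitor: the loyal majority prevails.
print(simulate(n_loyal=3, n_traitors=1))  # -> attack
# 4 generals, 3 traitors: the traitors corrupt the outcome.
print(simulate(n_loyal=1, n_traitors=3))  # -> retreat
```

The point of the exercise is the same one crypto developers internalize: agreement among mutually distrustful parties is only as robust as the share of honest participants, which is why concentrating power in a single node is the worst case.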

Now let's apply this worldview to the demands made in this week's AI extinction letter.

The signatories call on governments to work together on international policies to contain the AI threat. It's an admirable goal, but decentralization thinkers would call it naive. How can we expect governments, now or in the future, to recognize that cooperation beats going it alone, or trust them not to say one thing and do another? Monitoring North Korea's nuclear weapons program is hard enough; try peering behind a Kremlin-funded encryption wall to inspect its machine-learning experiments.

Consider how misplaced it was to expect global collaboration during the COVID pandemic, when the worst-case scenario was plain for everyone to see. It was one thing for bitter Cold War enemies, persuaded by the logic of mutually assured destruction (MAD), to agree never to use nuclear weapons; it is quite another to expect similar restraint around AI, whose trajectory is far less predictable.

Some in the crypto community worry that the big AI players are rushing to be regulated precisely to build a moat that protects their first-mover advantage and makes it harder for competitors to catch up. Why does that matter? Because entrenching a monopoly creates the very centralization risk these crypto thought experiments warn us about.

I've never put much stock in Google's old "Don't be evil" motto, but even if Alphabet, Microsoft, OpenAI and the rest are well intentioned, how do I know their technology won't someday be controlled by a differently motivated board, a government or a hacker? And if the technology is locked inside an impenetrable corporate black box, how can outsiders inspect its code?

Here is another thought exercise to explore the risks of centralization in AI:

If, as Yudkowsky believes, AI is destined to achieve artificial general intelligence (AGI), with an intelligence surpassing our own, what structural scenario would most likely lead it to conclude that it should kill us all? Arguably, an AI might turn on us if the compute and data that keep it "alive" are concentrated in a single entity that a government could shut down at any moment. By contrast, if the AI "lives" on a censorship-resistant network of decentralized nodes that no government or nervous CEO can switch off, this digital creature might never feel threatened enough to eliminate us.

Of course, I have no idea whether that's how things would play out. But in the absence of a crystal ball, Yudkowsky's AGI thesis demands that we run these thought experiments to understand how this potential future nemesis might "think."

Find the right mix

Most governments are unlikely to embrace any of these ideas, even as OpenAI's Altman and others actively deliver the "please regulate us" message. Governments want control. They want the power to subpoena CEOs and order shutdowns. It's in their DNA.

To be clear, we must be realistic. We live in a world organized around nation-states; like it or not, we are stuck with jurisdiction-based systems. Some level of regulation will have to be part of any strategy to mitigate AI extinction risk.

The challenge is to find the right mix of national regulation, international agreements and transnational governance models.

There may be lessons in how governments, academic institutions and private companies came together to regulate the internet. Multistakeholder bodies such as the Internet Corporation for Assigned Names and Numbers (ICANN) and the Internet Engineering Task Force (IETF) were installed to enable the development and adoption of common standards and to resolve disputes through arbitration rather than the courts.

AI will undoubtedly require some level of regulation, but this open, borderless and rapidly evolving technology cannot be corralled by governments alone. One hopes they will set aside their hostility toward the crypto industry, seek its advice, and enlist its help in tackling these challenges with decentralized solutions.
