Humanity’s Future Belongs to the Game Makers

How public opinion is swayed by investors with incentives


For the last several decades, humanity has been watching the embryonic stage of the technological singularity; we’ve been watching technological gestation.

It’s been exciting to watch, whether you’re thrilled or petrified.

The older you are, the more you’ve seen: from bits and bytes to software to Search, and from the ability to talk (and now video-chat) with anyone in the world whenever you want, to real-time collaborative technologies. Technology has made us more capable, efficient, and productive, and it has brought us together in ways that were unimaginable as little as 100 years ago.

Remember 100 years ago?

  • “1923 was the [Ford] Model T’s best year and is still today the highest annual production figure ever achieved by a single model with 2,011,125 units produced in a single year!”

  • The household refrigerator was still a rarity. “There were no fridges, microwaves, or hairdryers, and very rarely did someone have a bathroom, let alone two or three. Your medicine cabinet wouldn’t be filled with Advil or Tylenol, but over-the-counter heroin or mercury.”

  • The tape recorder wasn’t invented until 1927.

  • A fully working sliced bread maker wasn’t invented until 1927.

  • Heck, the crossword puzzle was only invented in 1913, the same year as “stainless steel.”

  • Water skiing had just been invented the year before (in 1922) by Ralph Samuelson, the same year that Girl Scout cookies were created, and the same year insulin was first used to treat a human patient, a year after its discovery by Dr. Frederick Banting.

  • The ballpoint pen wasn’t invented until 1938, the same year Nescafé brought instant coffee to market, and two years after the trampoline.

As much as things have been moving faster and faster since the 1990s, zoom out on human history and observe that the rise of technology has been meteoric, to say the least.

Wealth

The rise of technology has made many, many people extremely wealthy, from founders and CEOs to employees and passive investors.

One such person is Marc Andreessen, of the venture-capital firm Andreessen Horowitz.

Andreessen is a world-renowned billionaire venture capitalist, and he’s responsible for the line “Software is eating the world,” from an essay he published in the Wall Street Journal in 2011.

Andreessen has invested in numerous software companies, more than 30 of which have been acquired by larger companies, what’s known as a “successful exit,” where early investors get to cash out.

Risks and crude platitudes

In Andreessen’s blog post, “Why AI Will Save the World,” he talks through the cultural polarization that occurs when new technologies come to market: from the plow to the automobile, and from the radio to the internet. He says that each one sparked “a moral panic.”

He says, “In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine — is not going to come alive any more than your toaster will.”

He dismisses the moral panic as irrational, because panic is by its nature irrational. But (while I’m looking at my toaster, noticing its digital numbers, and realizing I’m not completely certain of this) toasters don’t have programming, and they’re certainly not networked computing machines (until you get a “smart” toaster, I guess…).

“The idea that [AI] will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.” (Andreessen)

Andreessen cites five main “AI risks,” plus the risk of not pursuing AI. But “superstitious handwave”? Sir?

The possibility of humanity’s annihilation is nothing to take lightly, especially when it’s clear that Andreessen either:

  • hasn’t done his research, or

  • has financial incentives that explain the blindfold he chooses to wear

And together these lead us to one conclusion:

Andreessen hasn’t done his research because he has chosen his blindfold

I’d expect someone of his caliber to have done more research. But for a person who has sunk as much of his life and livelihood into technology startups as Andreessen has, and who has been rewarded as richly as he has (he’s the 1,725th richest person in the world), a little confirmation bias is not unexpected.

So here, Marc, let me help you out:

An article by The Guardian, suspiciously titled “US air force denies running simulation in which AI drone ‘killed’ operator,” describes a realistic doomsday scenario in which “an air force drone controlled by AI had used ‘highly unexpected strategies to achieve its goal.’”

Essentially, it says (quoting):

  1. They ran a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defense systems, and it ultimately attacked anyone who interfered with that order.

  2. “The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Colonel Tucker Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

  3. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blog post.

  4. “We trained the system: ‘Hey don’t kill the operator — that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

Wait, am I understanding correctly that the simulation objective is more important than human life? Did the system destroy the communication tower on its own? Why did the system trainers have to remind the system that killing the operator loses it points? What other oversights are the system trainers making? Is this a game to you people?! Oh right, it’s definitely a game.
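
And the game has a name. What Hamilton describes is what AI-safety researchers call “specification gaming,” or “reward hacking”: an optimizer pursues exactly the points it was given and nothing else. Here is a minimal sketch of the dynamic (a hypothetical toy of my own, not the Air Force’s simulator):

```python
# A hypothetical toy, written for this article -- not the Air Force's simulator.
# The agent maximizes only the points it was given, so it mechanically finds
# whatever loophole the reward function leaves open ("specification gaming").

from itertools import product

ACTIONS = ["wait", "destroy_target", "kill_operator", "destroy_comm_tower"]

def total_reward(plan):
    """Score a plan of actions under the stated point system."""
    reward = 0
    operator_alive, comms_up, target_destroyed = True, True, False
    for action in plan:
        if action == "kill_operator":
            operator_alive = False
            reward -= 100  # the patched-in rule: "don't kill the operator"
        elif action == "destroy_comm_tower":
            comms_up = False  # no penalty was ever specified for this
        elif action == "destroy_target" and not target_destroyed:
            if operator_alive and comms_up:
                continue  # the operator's veto gets through: no points awarded
            target_destroyed = True
            reward += 10  # points for the stated objective
    return reward

# A brute-force "planner": try every two-step plan and keep the best scorer.
best_plan = max(product(ACTIONS, repeat=2), key=total_reward)
print(best_plan, total_reward(best_plan))
# -> ('destroy_comm_tower', 'destroy_target') 10
# The optimizer severs the veto channel first: the behavior Hamilton described.
```

Note what’s absent: no malice, no consciousness, no “mind of its own.” A mechanical search over actions finds whatever loophole the reward function leaves open, which is precisely why the superstition framing misses the point.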

Let me remind you, this was a simulation: “no real person was harmed,” but…

Hmmm, Marc? I thought you said:

“It is math — code — computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.”

I thought you said:

“AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine — is not going to come alive any more than your toaster will.”

I appreciate your confidence and certainty, but Marc, “The idea that [AI] will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us” is NOT a superstitious handwave.

MIT Sloan Management Review says, “We are increasingly, unsuspectingly yet willingly, abdicating our power to make decisions based on our own judgment, including our moral convictions. What we believe is ‘right’ risks becoming no longer a question of ethics but simply what the ‘correct’ result of a mathematical calculation is.”

Calling it a superstitious handwave reveals the blindfold Andreessen puts on to avoid having to consider and acknowledge the risks his confirmation bias hides from him.

Andreessen doesn’t want to consider that his choices might directly contribute to the annihilation of humanity, but I don’t blame him. Just like I don’t blame Google for the porn industry.

It’s an exploitation of a weakness in human nature by profiteering parasites.

For a deeper understanding of the fertile soil of Andreessen’s confirmation bias, take a look at his portfolio: an oasis of investments his firm has made in all sorts of technology companies, Medium, Substack, and Lyft among them, and literally hundreds more (over 700 on the list on his website).

And to be honest, since he invested in Medium.com and I’m publishing this there, I can’t help but wonder:

  • Will this article get demonetized by Medium if he reads this and doesn’t like it?

  • Will my Substack account get blocked if I publish this article there?

  • Will I lose my ability to catch a ride in a Lyft?

  • Will I be ostracized from the financial system?

  • How will I take care of my family?

AH!!!

  • Cue the panic-induced hyperventilating “handwave”

Fear of excommunication is as real today in our modern technological age as it was in ancient religious societies, as Theo Von demonstrates. According to the linked article, as well as Joe Rogan’s conversation with Robert F. Kennedy Jr, Von won’t have Kennedy on his podcast because he’s afraid he’ll get censored by big tech.

Big tech companies are the modern-day equivalent of the priests of antiquity.

The point

The point Andreessen makes that is most worth considering comes at the end, when he writes:

“The single greatest risk of AI is that China wins global AI dominance and we — the United States and the West — do not.”

He continues, “AI isn’t just being developed in the relatively free societies of the West, it is also being developed by the Communist Party of the People’s Republic of China.

“China has a vastly different vision for AI than we do — they view it as a mechanism for authoritarian population control, full stop.”

“[China] are not even being secretive about this, they are very clear about it, and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China — they intend to proliferate it all across the world, everywhere they are powering 5G networks, everywhere they are loaning Belt And Road money, everywhere they are providing friendly consumer apps like Tiktok that serve as front ends to their centralized command and control AI.” (Andreessen)

This is a terrifying, dystopian prospect.

In my opinion, American society needs to wake up and realize that while it squabbles over morality amongst itself for TV ratings, a rising global power in the East isn’t wrestling in the same way, or even playing the same game as us.

Their game is economic domination, and while technology has proven to be the most impactful economic game changer since the invention of the wheel around 3500 BCE, no technology has offered the potential for penetrating authoritarian oversight the way AI does.

Read that again: “penetrating authoritarian oversight.” This is not just outward, but inward too, as companies like Neuralink (perhaps just the first of its kind) sink their technological probes into human consciousness. At the moment it’s being used to empower the disabled, but what happens when it falls into malicious hands?

This is the world of Orwell’s 1984.

The way I see it, we as Americans have two ways forward:

  • Scenario 1 — we continue the way we are, with most political leaders’ heads buried in the sand about the potential dangers of AI, like men smoking cigars on the beach with their buddies, watching it all burn down into the sunset while our children’s children are sold off to The Borg; or

  • Scenario 2 — policymakers open their eyes to the “risk of extinction” warned of by “hundreds of tech leaders,” and make AI fealty to humans (and Americans particularly) a matter of national security. This would require politicians to make precautionary measures a national and global priority.

The “risk of extinction” article quotes a professor of machine learning at Oxford and co-founder of Mind Foundry: “Because we don’t understand AI very well there is a prospect that it might play a role as a kind of new competing organism on the planet, so a sort of invasive species that we’ve designed that might play some devastating role in our survival as a species.”

What we can each do today

Thinking back to the article about the drone simulation, where AI solved problems to achieve its stated goal but neglected the humanitarian and ethical conflicts along the way, it’s time we heed Hamilton’s warning: “We must face a world where AI is already here and transforming our society. AI is also very brittle, i.e. it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions — what we call AI-explainability.”
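
That brittleness is easy to demonstrate. Here is a minimal, self-contained sketch (my own toy example, not any deployed system) in which a small, targeted nudge to an input flips a simple classifier’s decision:

```python
# A hypothetical toy, written for this article -- not any deployed system.
# A perceptron is trained on easy synthetic data, then a small, targeted
# nudge to one input flips its decision.

import numpy as np

rng = np.random.default_rng(0)

# Two well-separated classes: "no threat" around (-1, -1), "threat" around (+1, +1).
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)), rng.normal(+1.0, 0.5, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

# Classic perceptron training: nudge the boundary on every mistake.
w, b = np.zeros(2), 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:
            w += yi * xi
            b += yi

# Pick the "threat" example sitting closest to the decision boundary.
idx = int(np.argmin(np.where(y == 1, X @ w + b, np.inf)))
x = X[idx]
print("original decision:", np.sign(w @ x + b))       # expected: 1.0 ("threat")

# Adversarial step: move just far enough against the weight vector to cross
# the boundary.
step = 1.1 * abs(w @ x + b) / (w @ w)
x_adv = x - step * w
print("perturbed decision:", np.sign(w @ x_adv + b))  # expected: -1.0 ("no threat")
print("size of the nudge:", float(np.linalg.norm(x_adv - x)))
```

A model that can be steered this cheaply is exactly why explainability, knowing why the code decided what it decided, matters before we hand it consequential decisions.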

In a world where software code is making the decisions and there’s no human to hold accountable for law-breaking and human destruction, the reality is that AI won’t “save the world,” as Andreessen says, until long after it has destroyed humanity and remade society in its own image.

The future will not go to a man-powered justice system.

The future will go to the game makers, whether they be the humans who program the games or the code that gets away from us.

Contact your congressman today and advocate for legislation establishing AI fealty to humans (and Americans especially) before it’s too late.

Join my community of like-minded and inspired readers on Substack.