By Dave Davies
For decades, scientists have envisioned computers so advanced they could think like humans, while also expressing concerns about the potential consequences if these machines started acting on their own.
Pulitzer Prize-winning investigative journalist Gary Rivlin emphasizes that regulation is crucial in managing AI’s impact. “I personally think AI could be an amazing thing around health, medicine, scientific discoveries, education, a wide array of things — as long as we’re deliberate about it,” he says. “And that’s my worry … that we’re not being deliberate.”
In his latest book, AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence, Rivlin explores the rapid developments in artificial intelligence. He describes today’s AI as “amplified intelligence,” a tool capable of identifying patterns in data far beyond human capacity. However, he also points out a critical aspect he refers to as “alien intelligence.”
“The weird thing about AI is that it seems to know everything, but it doesn’t understand a thing,” Rivlin says. “It’s like a parrot. It’s repeating words randomly, but it doesn’t really understand what it’s saying.”
Rivlin warns that AI could lead to both beneficial and harmful outcomes. He highlights alarming scenarios, such as AI potentially being used to engineer a new pandemic or cybercriminals exploiting it to steal trillions from the global economy. Yet he notes that this isn’t the first time a transformative technology has emerged.
“The automobile symbolized freedom and revolutionized society, but it also brought pollution and tens of thousands of deaths each year in the U.S.,” he points out. “And I look at AI the same way. It could be really great if we’re deliberate about it and take steps to ensure that we get more of the positives than the negatives, because I guarantee you there will be both positives and negatives.”
Interview highlights
On the power of AI development being in the hands of a few people
What scares me is there’s a movement in Silicon Valley, there’s a movement in tech, the accelerationists. Anything that stands in the way of our advancing artificial intelligence is bad. Often it’s put in the context of competing with China. We can have no rules in the way, and that is their agenda. I would say their real agenda is that they could make a lot of money — billions, hundreds of billions, ultimately trillions of dollars — off of this, and they don’t want anyone standing in their way. In fact, maybe that’s my biggest fear about AI: It’s so much power in the hands of a few people.
On big tech companies vs. startups in the race to develop new AI
Never underestimate the ability of a giant to stumble over its own feet. They have layers and layers of bureaucracy. They have a huge public relations department that’s whispering to CEOs. I don’t think it’s a coincidence that OpenAI, a startup founded in 2015, was the one that set off the starter’s pistol on this, because they didn’t have as much at stake. They could afford, reputation-wise, to release ChatGPT. They could just make the decision without 10 layers of decision-making before they did it. And so yes, they have an advantage, but Google also has like $100 billion of reserves, whereas OpenAI has to go out and raise funds. … Google, they just pay for themselves. Microsoft, Meta, they all have deep, deep, deep reserves of money. … And it’s not clear how any of these companies are gonna make money. Google can afford to lose money on these things for five years plus. A startup, that’s harder to do.
On possible good outcomes of widespread use of AI
I do feel that AI is gonna bring about incredible things. I think it’s being overstated. You hear people say that it’s going to close the divide between the developing world and the developed world. I don’t think that’s so, but there’s this interesting study that came out recently, the idea of an AI tutor, a tutor in the pocket. … 5 billion people around the globe have a smartphone, and you can use that smartphone as a tutor. … And I really do think around education, around science. … I think we’re gonna see some amazing scientific advancements. There are some who predict — and I actually think there’s a lot to it — that the mortality rate for most cancers is going to go way down because of AI. So I really do think AI could do some amazing things. It’s just, I just don’t know how bad the bad’s going to be.
On how much remains unknown about how AI works
Nowadays it’s neural networks, models that emulate how humans learn. They learn by reading vast stores of data: the open internet, books, whatever, and they improve through feedback and trial and error. You’re not really encoding the rules. … We don’t quite understand why they say what they say because they’re trying to emulate the human brain as best they can. … And so that’s part of the miracle, the gee whiz, these things are amazing, but it’s part of what’s scary, because we don’t fully understand. The people who create it don’t fully understand why [AI] says what it says.