Is AI moving fast to break things?

This article examines the rapid acceleration of artificial intelligence (AI) and its potential to disrupt societal structures. It explores the notion of AI "breaking things" and what such disruptions mean for our economy, politics, and daily lives.

When it comes to artificial intelligence, disinformation is a more immediate worry than apocalyptic scenarios.

In January 2015, the Future of Life Institute (FLI) invited artificial intelligence experts to gather in San Juan, Puerto Rico. There they drew up a research agenda to promote AI's positive impacts on humanity and released a largely optimistic statement. The statement nonetheless acknowledged possible disruptions, such as automated vehicles drastically reducing road fatalities while replacing them with lawsuits. FLI recognized the uncertainty over AI's impacts but saw the potential for good outcomes, like ending disease and poverty.

The open letter FLI published on March 29 was different. The group warned that AI labs were locked in an "out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control." The letter called for an immediate pause on the most advanced AI research and attracted thousands of signatures, including those of prominent figures in the field.

The letter is instructive on several levels for anyone trying to understand why people are panicking about AI. It shows how quickly the discourse around a new technology can swing from exuberant hope to dour skepticism.

According to Anthony Aguirre, FLI's vice president and board secretary, who helped write the recent letter, the atmosphere at the 2015 Puerto Rico gathering was amicable and hopeful. But he finds the change of direction in tech development since then worrying. "What wasn't present back then were large corporations competing against each other," he notes.

It seems obvious now that self-interested technology companies would come to dominate the field, yet that concern is absent from the 2015 documents. So is what many tech experts now consider one of the most frightening consequences of powerful chatbots: the dissemination of misinformation on an industrial scale.

Leading AI companies such as OpenAI, Google, Meta Platforms, and Microsoft have given no indication that they will change their practices in response to last month's letter. Several prominent AI experts also criticized FLI over its association with the polarizing effective altruism movement and with Elon Musk, a donor and adviser known for his conflicts of interest and attention-seeking behavior.

Silicon Valley disputes aside, critics say FLI did more harm than good by focusing on the wrong matters. The letter's air of existential danger is unmistakable; it explicitly warns of humans losing control over our civilization. Such concerns have long circulated in tech circles, but they tend to exaggerate the powers of whatever technology is currently most hyped (virtual reality, voice assistants, augmented reality, blockchain, mixed reality, the Internet of Things).

Aleksander Madry, faculty co-lead of MIT's AI Policy Forum, believes this framing perpetuates less pressing conversations and hampers attempts to confront realistic challenges. He is critical of FLI's letter: "It will change nothing, but we'll have to wait for it to subside so we can focus back on earnest matters."

Predicting that autonomous vehicles could halve traffic fatalities and warning that AI could signal the end of human civilization may sit at opposite ends of the techno-utopian spectrum. Both scenarios, however, assume that Silicon Valley's technological pursuits are far more powerful than the average person perceives them to be.

The leading commercial AI labs have made major announcements in swift succession. OpenAI debuted ChatGPT not long ago, then followed it with the more capable GPT-4. Although its inner workings are mostly undisclosed to those outside the company, its technology has been built into numerous products from Microsoft, OpenAI's chief investor, including one that professed its love for users. Google responded with its chatbot-powered search tool, Bard. More recently, Meta Platforms Inc. made an AI model available to researchers under certain conditions, and it quickly spread across the internet.



OpenAI, despite its name, has essentially taken the opposite position. Established in 2015 as a nonprofit to produce and share AI research, it launched a for-profit arm in 2019 (albeit with a cap on profits). It has since become a leading advocate for keeping AI technology closely guarded to prevent abuse by bad actors.

Arvind Narayanan, a Princeton University computer science professor, says we're already in a worst-of-both-worlds situation: a handful of companies control the most advanced AI models, while slightly older ones are widely available and can even run on smartphones. According to him, AI development happens behind the closed doors of corporate research labs, not in the hands of bad actors.

OpenAI has said it may submit its models for independent review or even limit its technology, but it has yet to explain how it would do so. For now, its strategy is to restrict access to its most advanced tools through licensing agreements with partners. Greg Brockman, OpenAI's co-founder and president, argues there needs to be a gap between the older, less powerful tools and the most advanced ones so the safety of these techniques can be properly monitored. "You want to have some gap so that we have some breathing room to focus on safety and get that right," he says.

It is hard to overlook how OpenAI's commercial interests align with this approach; a company official acknowledged that competition factors into decisions about what to reveal. Some academic researchers have expressed outrage that OpenAI refuses access to its key technology, arguing that this makes AI more dangerous by obstructing unconflicted research. A company representative countered that OpenAI works with independent scientists and conducted an extensive six-month evaluation before making its newest model available.



"As a citizen, I always find it a bit puzzling when people saying 'this is too dangerous' are the ones with the knowledge," says Joelle Pineau, vice president for AI research at Meta and a professor at McGill University. Meta says researchers can access versions of its artificial intelligence models to test them for implicit biases and other shortcomings.

The downsides of Meta's approach are starting to surface. In late February, the company gave researchers access to a large language model called LLaMA, similar to the kind ChatGPT depends on. Stanford University researchers later revealed they had used it as the base for a project in which they tried to replicate cutting-edge AI systems for a cost of around $600. Pineau said she hadn't verified the accuracy of Stanford's system but noted that such research is in line with Meta's ambitions.

Meta's openness also meant it had less control over what happened to LLaMA: within about a week, the model appeared on 4chan, a popular forum for internet trolls. "We're not thrilled about it," Pineau says.

A definitive answer to whether OpenAI or Meta is right may never come; the debate is a reincarnation of an old Silicon Valley rift. Even so, their divergent paths show that decisions about safeguarding AI are being made by executives at just a few large companies.

In other industries, companies must prove to public agencies that potentially hazardous products are safe before releasing them.

The Federal Trade Commission (FTC) noted as much in a March 20 blog post, warning tech innovators that it has taken legal action against businesses that failed to take necessary precautions before making their technology available. On March 30, the Center for AI & Digital Policy, an advocacy group, filed a complaint with the FTC asking it to stop OpenAI's progress with GPT-4.

The idea of declining to build something you could build is not new, but it runs counter to Silicon Valley's long-held mantra of moving fast and worrying about the damage later. AI and social media are different, yet many of the same people have been involved in both industries since their early days. When politicians finally attempted serious problem-solving with social media, it was already too late; their efforts did little to fix what had already happened. In 2015, there seemed to be enough time to hash out whatever problems AI might cause. As time has passed, that no longer seems like much of a possibility.