Strap in, we’re going to talk about notable AI news from the past week. I will do my best to rein in our trademark sensationalism, lest the language models that feed on every word we write decide that sarcasm is indisputable fact.
As I scrolled through Twitter today, as I do every day, I kept coming across articles about an AI expert claiming there is a 99.9% chance that AI will wipe out the human race. Maybe you’ve seen the same headlines in the past day or so and maybe you’re a little scared. Note that the expert was talking about a 100-year time frame, not next year. The articles refer to this podcast interview with Dr. Roman Yampolskiy (Wikipedia), and you may want to listen to the whole thing to fully understand what was said.
Not gonna lie, it gets pretty dark.
So this “there’s a 99.9% chance AI will kill the human race” headline is floating around. At the same time, a group of current and former OpenAI and Google DeepMind employees warned on June 4 that AI in its current, unregulated state gives too much power to AI companies that answer only to themselves. Remember before Enron, when audit firms mostly regulated themselves because people assumed these trusted servants of the capital markets would willingly do the right thing? Yes.
The full text of the letter posted on righttowarn.ai:
A right to warn about advanced artificial intelligence
We are current and former employees at frontier AI companies and believe in the potential of AI technology to deliver unprecedented benefits to humanity.
We also understand the serious risks these technologies pose. These risks range from further entrenching existing inequalities, to manipulation and misinformation, to loss of control of autonomous AI systems that could result in human extinction. AI companies themselves have acknowledged these risks [1, 2, 3], as have governments around the world [4, 5, 6] and other AI experts [7, 8, 9].
We hope that these risks can be adequately mitigated with sufficient guidance from the scientific community, policy makers and the public. However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe that mandated corporate governance structures are sufficient to change this.
AI companies possess considerable non-public information about the capabilities and limitations of their systems, the adequacy of their safeguards, and the risk levels of various types of harm. However, they currently have only weak obligations to share some of this information with governments and none with civil society. We don’t think everyone can be relied upon to share it voluntarily.
As long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. However, extensive confidentiality agreements prevent us from voicing our concerns, except to the companies themselves who may fail to address these issues. Conventional whistleblower protections are insufficient because they focus on illegal activity, while many of the risks that concern us are still unregulated. Some of us fear various forms of retaliation, given the history of such cases throughout the industry. We are not the first to encounter or talk about these issues.
Therefore, we call on advanced AI companies to commit to these principles:
- That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by withholding any vested economic benefit;
- That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
- That the company will support a culture of open criticism and allow its current and former employees to raise concerns about risks related to its technologies to the public, the company’s board, regulators or an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are adequately protected;
- That the company will not retaliate against current and former employees who publicly share confidential risk-related information after other processes have failed. We recognize that any effort to report risk-related concerns must avoid the release of confidential information unnecessarily. Therefore, once there is an adequate process for anonymously raising concerns to the company board, to regulators and to an appropriate independent organization with relevant expertise, we recognize that concerns should first be raised through such a process. However, as long as such a process does not exist, current and former employees should maintain their freedom to report their concerns to the public.
In alphabetical order, the employees who signed the letter are: Jacob Hilton (formerly OpenAI), Daniel Kokotajlo (formerly OpenAI), Ramana Kumar (formerly Google DeepMind), Neel Nanda (currently Google DeepMind, formerly Anthropic), William Saunders (formerly OpenAI), Carroll Wainwright (formerly OpenAI), and Daniel Ziegler (formerly OpenAI). Four current and two former OpenAI employees chose to remain anonymous. Additionally, the letter has been endorsed by OG computer scientists Yoshua Bengio (Wikipedia), Geoffrey Hinton (Wikipedia) and Stuart Russell (Wikipedia).
And all their footnotes with quotes and everything:
- OpenAI: “AGI would also come with serious risks of misuse, drastic accidents, and societal disruption … we will operate as if these risks are existential.”
- Anthropic: “If we build an AI system that is significantly more competent than human experts but pursues goals that conflict with our best interests, the consequences could be dire… the rapid progress of AI would be very disruptive, changing employment, macroeconomics and power structures… [we have already encountered] toxicity, bias, unreliability, dishonesty”
- Google DeepMind: “it is plausible that future AI systems could conduct offensive cyber operations, deceive people through dialogue, manipulate people into carrying out harmful actions, develop weapons (e.g. biological, chemical), … due to failures of alignment, these AI models might take harmful actions even without anyone intending so.”
- US Government: “irresponsible use can exacerbate societal harms such as fraud, discrimination, bias and misinformation; displace and disempower workers; stifle competition; and pose risks to national security.”
- United Kingdom Government: “[AI systems] can also further concentrate unaccountable power in the hands of a few, or be used maliciously to undermine social trust, erode public safety, or threaten international security… [AI could be misused] to generate disinformation, carry out sophisticated cyber attacks or assist in the development of chemical weapons.”
- Bletchley statement (29 countries represented): “we are particularly concerned by such risks in areas such as cyber security and biotechnology, … There is the potential for serious, even catastrophic, harm”
- Statement on AI Harms and Policy (FAccT) (over 250 signatories): “From the dangers of inaccurate or biased algorithms that deny life-saving healthcare, to language models that exacerbate manipulation and misinformation, …”
- Encode Justice and the Future of Life Institute: “We find ourselves face to face with tangible, far-reaching challenges from AI, such as algorithmic bias, disinformation, democratic erosion and workforce displacement. We are at the same time on the brink of even greater dangers from increasingly powerful systems.”
- Statement on AI Risk (CAIS) (over 1,000 signatories): “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
On a more positive, or rather more absurd, note, this is also making the rounds:
You may have noticed AI “Overviews” dominating every Google search you’ve done over the past few weeks, summaries that are similar to the featured snippets we’re all used to, but with pretty colors and a tendency to go completely off the rails. I was going to grab a screenshot of an example, but suddenly I’m not seeing them in search… because things went so badly that Google decided to reduce their frequency by 70 percent. Fortunately, this recent article from 404 Media has one.
From “Google Is Paying Reddit $60 Million for Fucksmith to Tell Its Users to Eat Glue”:
Screenshots of Google’s AI search going wrong have repeatedly gone viral, highlighting just how determined the company is to give its customers the most frustrating user experience possible while casually ruining the livelihoods of people who work on or create websites. They also point to the fact that Google’s AI is not a magical source of new knowledge; it is reassembled content from things people posted in the past, indiscriminately scraped off the internet and (sometimes) remixed to look like something new and “intelligent.”
It appears that the origin of Google AI’s conclusion was an 11-year-old Reddit post by the prominent Cypriot researcher. https://t.co/fG8i5ZlWtl pic.twitter.com/0ijXRqA16y
— Kurt Opsahl @kurt@mstdn.social (@kurtopsahl) May 23, 2024
In a May 30 blog post titled “AI Overviews: About last week,” Google’s Head of Search Liz Reid was a bit blunt about how bad AI Overviews were and why. TLDR: Fake news! Trolls!
Separately, there have been a large number of fake screenshots circulated widely. Some of these falsified results have been obvious and nonsensical. Others have suggested that we returned dangerous results on topics such as leaving dogs in cars, smoking during pregnancy and depression. Those AI Overviews never appeared. So we would encourage anyone who comes across these screenshots to do a search themselves to check.
But some strange, inaccurate, or unhelpful AI Overviews certainly appeared. And while these were generally about questions that people don’t usually ask, they highlighted some specific areas that we needed to improve.
…
In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza.
In a small number of cases, we have seen AI Overviews misinterpret language on web pages and present inaccurate information. We worked quickly to address these issues, either through improvements to our algorithms or through established processes to remove responses that don’t comply with our policies.
I could go on and on about how Google is destroying the internet, but I’ll spare you for today. Let’s just say if AI is blindly accepting random Reddit comments as authoritative fact, the truth is in trouble. Whatever happened to “don’t believe everything you read on the internet”?