A group of OpenAI insiders is blowing the whistle on what they describe as a culture of secrecy and recklessness inside the San Francisco artificial intelligence company, which is racing around the clock to build the most powerful AI systems ever created.
What could possibly go wrong?
If only there were a bunch of cautionary tales, written or made into movies, warning about the potential danger of playing god and attempting to make artificial life, which, by the way, lacks a human soul. Oh, curse you, fate, for this lack of thought-provoking entertainment!
That, of course, is 100 percent pure, unadulterated sarcasm. Hope you enjoyed it. I have plenty where that came from, so feel free to grab a second helping.
According to DNYUZ:
The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
“OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
The group of whistleblowers published an open letter on Tuesday calling on leaders in the AI industry, including OpenAI, to be more transparent and to provide greater protections for whistleblowers.
"*" indicates required fields
Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company, Mr. Kokotajlo said. One current and one former employee of Google DeepMind, Google’s central A.I. lab, also signed.
Lindsey Held, a spokeswoman for OpenAI, released a statement saying, “We’re proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world.”
A spokesman for Big Tech giant Google declined to comment on the letter.
“The campaign comes at a rough moment for OpenAI. It is still recovering from an attempted coup last year, when members of the company’s board voted to fire Sam Altman, the chief executive, over concerns about his candor. Mr. Altman was brought back days later, and the board was remade with new members,” the report said.
I’m reminded of some wise words from the character Ian Malcolm in “Jurassic Park,” as he tells John Hammond and the other scientists behind the park how insane they must be for recreating dinosaurs and other extinct species.
“Yeah, yeah, but your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should,” Malcolm said. Hammond responds that if he wanted to bring back a nearly extinct animal like the condor, Malcolm would have no problem with it, to which Malcolm points out that condors, if they got loose in the park, wouldn’t munch on the tourists.
The same point applies to artificial intelligence. We’re so excited by all of the possibilities of what we could create that we’re not stopping to think about whether we should. Like the team in “Jurassic Park,” we could be helping to build the very instruments of our own demise.
Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
“When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders stated.
Mr. Kokotajlo, 31, started working for OpenAI back in 2022 as a governance researcher and was soon asked to forecast the future of progress in AI development. To say he wasn’t optimistic about the direction we’re headed would be an understatement.
In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
He’s also convinced that the likelihood of humanity being destroyed by AI, or at the very least catastrophically harmed by it, is around 70 percent. Well, I don’t know about you, but I do not like those odds.
Kokotajlo worries that companies are far too focused on pushing the technology forward when they should be spending their time finding ways to safeguard against the risks these programs create, but it doesn’t look like a whole lot of changes have been made.
In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
“The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo said in the email. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
As if we didn’t have enough to worry about, right?