OpenAI Co-Founder Sutskever Sets Up New AI Company Devoted to ‘Safe Superintelligence’

Ilya Sutskever’s new company is focused on safely developing “superintelligence” – a reference to AI systems that are smarter than humans.

Ilya Sutskever, one of the founders of OpenAI who was involved in a failed effort to push out CEO Sam Altman, said he’s starting a safety-focused artificial intelligence company.

Sutskever, a respected AI researcher who left the ChatGPT maker last month, said in a social media post Wednesday that he’s created Safe Superintelligence Inc. with two co-founders. The company’s only goal and focus is safely developing “superintelligence” – a reference to AI systems that are smarter than humans.

The company vowed not to be distracted by “management overhead or product cycles,” and under its business model, work on safety and security would be “insulated from short-term commercial pressures,” Sutskever and his co-founders Daniel Gross and Daniel Levy said in a prepared statement.

The three said Safe Superintelligence is an American company with offices in Palo Alto, California, and Tel Aviv, “where we have deep roots and the ability to recruit top technical talent.”

Sutskever was part of a group that made an unsuccessful attempt last year to oust Altman. The boardroom shakeup, which Sutskever later said he regretted, also led to a period of internal turmoil centered on whether leaders at OpenAI were prioritizing business opportunities over AI safety.

At OpenAI, Sutskever jointly led a team focused on safely developing better-than-human AI known as artificial general intelligence, or AGI. When he left OpenAI, he said that he had plans for a “very personally meaningful” project, but offered no details.

Sutskever said that it was his choice to leave OpenAI.

Days after his departure, his team co-leader Jan Leike also resigned and leveled criticism at OpenAI for letting safety “take a backseat to shiny products.” OpenAI later announced the formation of a safety and security committee, but it’s been filled mainly with company insiders.
