OpenAI CEO Defends AI Safety Amidst Rising Concerns

OpenAI CEO Sam Altman defended his company’s AI technology as safe for widespread use, amid mounting concerns over potential risks and the lack of proper safeguards for ChatGPT-style AI systems. Altman’s remarks came at a Microsoft event in Seattle, where he addressed developers as a new controversy erupted over an OpenAI voice that closely resembled that of actress Scarlett Johansson. The CEO, who gained global prominence after OpenAI released ChatGPT in 2022, is grappling with questions about the safety of the company’s AI following the departure of the team responsible for mitigating long-term AI risks.

Altman emphasized the current opportunity for developers to leverage AI technology, urging them not to delay their plans. He highlighted OpenAI’s partnership with Microsoft, which uses the GPT-4 large language model to build AI tools. Despite acknowledging that GPT-4 is “far from perfect,” Altman assured the audience that it is “robust enough and safe enough” for various applications. He underscored the extensive work OpenAI has invested in ensuring the safety of its models, comparing it to the safety expectations one has when taking medicine.

However, OpenAI’s commitment to safety has been questioned, especially after it dissolved its “superalignment” group, which was dedicated to addressing long-term AI dangers. Team co-leader Jan Leike criticized OpenAI for prioritizing “shiny new products” over safety, expressing concerns about the company’s trajectory. This controversy was compounded by a public statement from Johansson, who was outraged that a voice used by OpenAI’s ChatGPT resembled the one she provided in the 2013 film “Her.” Altman apologized to Johansson while insisting the voice was not based on hers, but the incident has heightened scrutiny of OpenAI’s ethical and safety practices.
