Machine learning researcher Jan Leike, who co-led OpenAI's 'Superalignment' team, announced on May 17 in a thread on X that it was his last day at the company.
Leike's series of posts on X gave insight into the reasons behind his resignation. Describing his time at OpenAI as "a wild journey over the past three years," Leike said he joined the company because he thought it was "the best place in the world to do this research."
Leike then said that he had been disagreeing with OpenAI's leadership about the company's core priorities for quite some time, "until we finally reached a breaking point."
"Stepping away from this job has been one of the hardest things I have ever done," Leike said in the series of posts, "because we urgently need to figure out how to steer and control AI systems much smarter than us."
Highlighting how OpenAI approaches safety, Leike said that "over the past years, safety culture and processes have taken a backseat to shiny products," and that "OpenAI must become a safety-first AGI company."
According to Leike, OpenAI needs to spend more of its time "getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics." He added that he is concerned OpenAI is not on a trajectory to get there.
"Building smarter-than-human machines is an inherently dangerous endeavor," he said, alluding to the continued pursuit of AGI (artificial general intelligence).
"To ensure that AGI benefits all of humanity, we must prioritize preparing for it as best we can," Leike said, adding that OpenAI is "long overdue" in getting serious about the implications of AGI.
© IE Online Media Services Pvt Ltd
First uploaded on: May 19, 2024, 11:56 IST