Top AI researchers seek permission from OpenAI, Meta and others to conduct independent research


More than 100 prominent artificial intelligence researchers have signed an open letter calling on generative AI companies to give researchers access to their systems, arguing that opaque internal rules are hampering safety testing of tools used by millions of consumers.

Researchers say strict protocols designed to prevent malicious actors from exploiting AI systems have instead had a chilling effect on independent research. These auditors fear account bans and lawsuits if they attempt to test the safety of AI models without the company's permission.

The letter was signed by experts in AI research, policy, and law, including Percy Liang of Stanford University; Pulitzer Prize-winning journalist Julia Angwin; Renee DiResta of the Stanford Internet Observatory; Mozilla fellow Deb Raji, who has pioneered research on auditing AI models; Marietje Schaake, a former member of the European Parliament; and Suresh Venkatasubramanian, a professor at Brown University and former adviser to the White House Office of Science and Technology Policy.

The letter, sent to companies including OpenAI, Meta, Anthropic, Google, and Midjourney, calls on the tech companies to provide a legal and technical safe harbor for researchers to interrogate their products.

“Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter says.

The effort comes as AI companies have become more aggressive about keeping external auditors out of their systems.

OpenAI claimed in recent court documents that The New York Times' efforts to uncover potential copyright infringement amounted to “hacking” its ChatGPT chatbot. Meta's new terms say it will revoke the license to LLaMA 2, its latest large language model, if a user alleges that the system infringes their intellectual property rights. Another signatory, movie studio artist Reed Southen, had his account banned while testing whether the image generation tool Midjourney could be used to create copyrighted images of movie characters. After he publicized his findings, the company amended threatening language in its terms of service.

“If you knowingly infringe someone else's intellectual property and it costs us money, we will come after you and collect that money from you,” the terms state. “We may also do other things, such as asking a court to make you pay our legal fees. Don't do it.”

An accompanying policy proposal, co-authored by some of the signatories, states that OpenAI updated its terms to protect academic safety research after reading an earlier draft of the proposal, though “some ambiguity remains.”

AI companies' policies generally prohibit consumers from using their services to generate misleading content, commit fraud, infringe copyright, influence elections, or harass others. Users who violate the Terms may have their accounts suspended or banned without opportunity for appeal.

To conduct independent research, however, researchers often have to break these rules intentionally. Because the testing happens under their own accounts, some fear that AI companies, which are still developing ways to monitor potential rule-breakers, may disproportionately crack down on users who bring negative attention to their business.

While companies like OpenAI offer special programs to provide access to researchers, the letter argues that this setup encourages favoritism by allowing companies to manually select evaluators.

External research has discovered vulnerabilities in widely used models such as GPT-4, including the ability to defeat safeguards by translating English inputs into less commonly used languages such as Hmong.

Borhane Brili Hamelin, a researcher who works with the nonprofit AI Risk and Vulnerability Alliance, said that in addition to safe harbors, companies need to offer direct channels through which outside researchers can report problems with their tools.

Otherwise, he said, the best way to make potential harms visible may be to shame a company on social media, which hurts the public by narrowing the types of vulnerabilities that get investigated and puts companies in an adversarial position.

“Our oversight ecosystem is broken,” Brili Hamelin said. “Sure, people find problems. But the only channel for impact is the ‘gotcha’ moment of catching a company with its pants down.”





