OpenAI removes ban on military use of AI tools for national security scenarios

OpenAI has deleted part of its terms and conditions that prohibited the use of its AI technology for military and warfare purposes.

An OpenAI spokesperson told Verdict that while the company's policy does not allow its tools to be used to harm people, develop weapons, conduct communications surveillance, or injure others or destroy property, there are national security use cases that align with its mission.

"For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on," said the spokesperson, adding: "It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions."

The ChatGPT maker's usage policy previously included a ban on any activity involving "weapons development" and "military and warfare". However, the updated policy, which went live on 10 January, dropped the ban on "military and warfare". OpenAI retained the blanket ban on using the service "to harm yourself or others", with an example of using AI to "develop or use weapons".

"We've updated our usage policies to be more readable and added service-specific guidance," OpenAI said in a blog post. "We cannot predict all beneficial or abusive uses of our technology, so we proactively monitor for new abuse trends," the blog post added.

Sarah Myers West, managing director of the AI Now Institute, told The Intercept that AI being used to target civilians in Gaza makes now a notable time for OpenAI to change its terms of service.
The move follows OpenAI forming a team, in October 2023, to combat “catastrophic risks” arising from the development of AI models.
Kurt Robson, 17 January 2024
