
Google removes weapons development, surveillance pledges from AI ethics policy

Google has updated its ethical policies on artificial intelligence, eliminating its pledge not to use AI technology for weapons development or surveillance.

According to a now-archived version of Google’s AI principles seen on the Wayback Machine, the section titled “Applications we will not pursue” previously included weapons and other technology aimed at injuring people, along with technologies that “gather or use information for surveillance.”

As of Tuesday, the section was no longer listed on Google’s AI principles page.

The Hill reached out to Google for comment.

In a blog post Tuesday, Google head of AI Demis Hassabis and senior vice president for technology and society James Manyika explained that the company’s experience and research over the years, along with guidance from other AI firms, “have deepened our understanding of AI’s potential and risks.”

“Since we first published our AI principles in 2018, the technology has evolved rapidly,” Manyika and Hassabis wrote, adding, “It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers.”

Google said in the blog post that it will continue to “stay consistent with widely accepted principles of international law and human rights,” and evaluate whether the benefits “substantially outweigh potential risks.”

The new policy language also pledged to identify and assess AI risks through research, expert opinion and “red teaming,” during which a company tests its cybersecurity effectiveness by conducting a simulated attack.

The AI race has ramped up among domestic and international companies in recent years as Google and other leading tech firms increase their investments in the emerging technology.

As Washington increasingly embraces the use of AI, some policymakers have expressed concerns that the technology could be used for harm in the hands of bad actors.

The federal government is still working to harness the benefits of the technology, including in the military.

Late last year, the Defense Department announced a new office focused on accelerating the adoption of AI technology so the military can deploy autonomous weapons in the near future.

