
Google may now use AI for weapons, surveillance – prompting backlash

Google employees are reportedly up in arms over management’s decision to walk back a promise not to use artificial intelligence technology to develop weaponry and surveillance tools.

The search giant’s decision to revise its AI ethical guidelines, announced on Tuesday, marks a significant departure from the company’s earlier commitment to abstain from AI applications that could cause harm or be used for military purposes.


The updated guidelines no longer include a prior commitment to avoiding AI for weapons, surveillance, or other technologies that “cause or are likely to cause overall harm.”

Google did not explicitly acknowledge the removal of its AI weaponry ban in its official communications.

The Post has sought comment from Google.

Google employees expressed their concern about the new policy through posts on the company’s internal message board, Memegen.

One widely shared meme depicted Google CEO Sundar Pichai searching “how to become a weapons contractor” on Google, according to Business Insider.

Another referenced a popular comedy sketch featuring an actor dressed as a Nazi soldier, captioned: “Google lifts a ban on using its AI for weapons and surveillance. Are we the baddies?”


The “Are we the baddies?” meme comes from a British comedy sketch where two Nazi officers realize their skull-emblazoned uniforms might mean they’re the villains.

It’s used humorously to depict moments of ethical self-reflection.

A third meme featured Sheldon from “The Big Bang Theory” reacting to reports of Google’s increasing collaboration with defense agencies, exclaiming, “Oh, that’s why.”

While some employees voiced concerns, others within the company may support a more active role in defense technology, particularly as AI becomes a critical factor in global military and security strategies.

In the wake of the Oct. 7 massacre by Hamas terrorists, Google faced criticism over its $1.2 billion “Project Nimbus” contract with Israel, with employees and activists arguing that its cloud technology could aid military and surveillance operations against Palestinians.


Last year, the company fired more than two dozen employees who broke into executive offices and staged a sit-in that was live-streamed over the internet.

The shift in Google’s stance aligns with a broader industry trend of tech companies engaging more closely with national defense agencies.

Companies like Microsoft and Amazon have secured lucrative government contracts involving AI and military applications, and Google’s revised policy suggests a potential strategic realignment to remain competitive in the field.

DeepMind CEO Demis Hassabis and James Manyika, the company’s senior vice president for technology and society, defended the update in a blog post.

They cited an “increasingly complex geopolitical landscape” as a driving factor behind the change. They emphasized the need for collaboration between governments and businesses to ensure AI remains aligned with democratic values.

“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” the executives wrote.

“And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”


The company’s historical reluctance to engage in military AI projects stems from employee-led protests in 2018, when workers successfully pressured Google to abandon a Pentagon contract known as “Project Maven,” which aimed to enhance drone surveillance capabilities using AI.

At the time, Google established a set of AI principles that placed clear restrictions on military applications.

Now the original blog post outlining those principles has been updated to link to the revised guidelines, signaling a pivotal shift in company policy.

With AI technology evolving rapidly and geopolitical competition intensifying, Google’s new stance may position it to pursue defense-related contracts previously left to competitors.

However, the internal backlash underscores the ethical dilemmas tech companies face as they navigate the intersection of innovation, corporate responsibility, and national security.

Google parent Alphabet’s stock dropped by more than 8% on Wednesday, wiping out more than $200 billion in market value, after the company announced increased AI spending despite slowing revenue growth.

Shares of Alphabet ticked downward by around 0.5% as of 3 p.m. Eastern Time on Thursday. The company’s stock was trading at around $192 a share.

Investors are scrutinizing tech firms’ rising AI costs, especially after Chinese startup DeepSeek reportedly trained a model for under $6 million without Nvidia’s top hardware.

