Andrew Ng pleased Google dropped AI weapons pledge


Google’s Recent Stance Change

Andrew Ng, the founder and former leader of Google Brain, supports Google’s recent decision to drop its pledge not to build AI systems for weapons.

“I’m very glad that Google has changed its stance,” Ng said during an on-stage interview Thursday evening with TechCrunch at the Military Veteran Startup Conference in San Francisco.


Google’s AI Principles Update

Earlier this week, Google deleted a seven-year-old pledge from its AI principles webpage, which promised the company would not design AI for weapons or surveillance. Alongside the deletion, Google published a blog post penned by DeepMind CEO Demis Hassabis who noted companies and governments should work together to build AI that “supports national security.”

Google made its AI weapons pledge in 2018 following the Project Maven protests, in which thousands of employees protested the company’s contracts with the U.S. military. The protestors specifically took issue with Google supplying AI to a military program that helped interpret video imagery and could be used to improve the accuracy of drone strikes.

Ng’s Perspective on Project Maven

Ng, however, was baffled by the Project Maven protestors, he told an audience largely made up of veterans.

“Frankly, when the Project Maven thing went down […] A lot of you are going out, willing to shed blood for our country to protect us all,” said Ng. “So how the heck can an American company refuse to help our own service people that are out there, fighting for us?”

Ng did not work at Google when the Project Maven protests happened, but he did play a key role in shaping Google’s efforts around AI and neural networks. Today, Ng leads an AI-focused venture studio, AI Fund, and speaks out frequently about AI policy.

Regulatory Efforts and AI Development

Ng later said he was grateful that two AI regulatory efforts, California’s vetoed bill SB 1047 and the Biden administration’s now-rescinded AI executive order, were no longer in play. He had repeatedly argued that both measures would slow down open source AI development in America.

The real key to American AI safety, Ng argued, is to ensure America can compete with China technologically. He noted that AI drones would “completely revolutionize the battlefield.”

He’s not the only former Google executive spreading that message. Former Google CEO Eric Schmidt now spends his days lobbying Washington, D.C. to purchase AI drones to compete with China; his company, White Stork, may eventually supply those drones.

Internal Divisions at Google

While Ng and Schmidt seem to support the military’s use of AI, the topic has split the ranks within Google for years.

Meredith Whittaker, now the President of Signal, led the Maven protests in 2018 while working at Google as an AI researcher. When Google made the pledge to not renew its Project Maven contracts, Whittaker said she was happy about the decision, noting the company “should not be in the business of war.”

She’s not the only Googler who has dissented. Former Google AI researcher and Nobel laureate Geoffrey Hinton previously called for global governments to prohibit and regulate the use of AI in weapons. Another longtime and revered Google executive, Jeff Dean, now the chief scientist of DeepMind, previously signed a letter opposing the use of machine learning in autonomous weapons.

In recent years, Google and Amazon have fallen under renewed scrutiny for their military work, including their Project Nimbus contracts with the Israeli government. Employees of both cloud providers staged sit-ins last year to protest Project Nimbus, under which Google and Amazon reportedly provided cloud computing services to the Israel Defense Forces.

The Pentagon and militaries around the globe have a renewed appetite for AI, the Department of Defense’s chief AI officer previously told TechCrunch. As Google, Amazon, Microsoft, and other tech giants invest hundreds of billions of dollars in AI infrastructure, many are looking to recoup that investment through military partnerships.

Conclusion

The debate over Google’s involvement in military AI continues to spark discussion both inside and outside the company. While former executives like Ng and Schmidt support military uses of AI, others with Google ties, like Whittaker and Hinton, advocate for stricter regulation and ethical guardrails. The future of AI in military applications remains a complex and evolving issue.

FAQs

Q: What was Google’s AI weapons pledge and why was it changed?

A: Google initially pledged not to build AI systems for weapons or surveillance in 2018 but recently deleted this pledge, citing the need to support national security.

Q: How did employees react to Google’s involvement in Project Maven?

A: Thousands of Google employees protested the company’s contracts with the U.S. military during the Project Maven protests, leading to internal divisions and public scrutiny.

Q: What are the key arguments for and against Google’s military AI involvement?

A: Supporters argue that AI can enhance national security and military capabilities, while critics raise concerns about ethics, regulation, and the potential consequences of AI in warfare.


Credit: techcrunch.com
