Google AI ethics memo makes it clear military work will continue

The principles include aims such as safety, accountability, privacy, avoiding unfair bias, and being "socially beneficial". "How AI is developed and used will have a significant impact on society for many years to come", Google CEO Sundar Pichai wrote.

However, Google went on to confirm that it will continue to work with government bodies and the military. The AI principles do not make clear whether Google would be precluded from working on a project like Maven, which promised vast surveillance capabilities to the military but stopped short of enabling algorithmic drone strikes.

"In other words, the company acknowledges that some AI developed for one objective may in fact be re-purposed in unintended ways, even by the military", she said Friday. This officially turned Google into a defense contractor, which is a company that provides products or services to the USA military or US intelligence agencies.

"The global norms surrounding espionage, cyberoperations, mass information surveillance, and even drone surveillance are all contested and debated in the worldwide sphere", he said.

Google also called on employees and customers developing AI "to avoid unjust impacts on people", particularly around race, gender, sexual orientation and political or religious belief. "Where there is a material risk of harm", the principles state, "we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints".

After abandoning its involvement in a Pentagon project, Google has vowed never to develop AI for use in weapons or in surveillance that violates internationally accepted norms.

Only weapons that have a "principal purpose" of causing injury will be avoided, but it's unclear which weapons that refers to.

It's interesting that Google mentioned international human rights law here, because just recently the United Nations' Special Rapporteur called on technology companies to build international human rights law into their products and services by default, rather than applying their own filtering and censorship rules, or even the censorship rules of certain local governments.

The restriction could help Google management defuse months of protest by thousands of employees against the company's work with the U.S. military to identify objects in drone video.

The company will, however, continue to work with governments and the military in cybersecurity, training, veterans health care, search and rescue, and military recruitment, Pichai said.

"These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe", he wrote.

Today Google unveiled a new set of principles guiding its approach to artificial intelligence, including a pledge not to build AI weapons, "technologies that gather or use information for surveillance violating internationally accepted norms" or ones "whose goal contravenes widely accepted principles of worldwide law and human rights".
