TOKYO - Tech giant Google now applies artificial intelligence to just about all of its products, said Eric Schmidt, chairman of Google's parent company, Alphabet. Schmidt said it’s safer to trust Google than governments with the personal data that underpins this technology. Artificial intelligence includes smart software that can suggest how to reply to an email or automatically tag that furball photo as “cat.”
From Washington, Schmidt beamed into a Google-hosted seminar in Tokyo dedicated to machine learning - a branch of computer science that studies how machines adapt to new information. Nowadays the technology often makes use of personal information, such as emails and pictures, which Schmidt promised to safeguard.
“We are upset that it appears the U.S. government, the Chinese government, other governments over time have attacked computer companies without permission,” Schmidt said via video link. “We work very hard to keep your data protected, using incredibly powerful encryption. And to my knowledge, the safest place and most private place to keep your data is inside of Google. And I really believe that.”
In the past, engineers would program computers to follow specific steps or equations. Type “bear” into Google Translate, and maybe it’ll spit out “con gau” in Vietnamese. But with machine learning, the software will notice data patterns over time and figure out when you’re referring to a hibernating mammal, and when you mean “chiu,” or tolerate.
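The idea can be illustrated with a toy sketch: pick the sense of “bear” by counting which context words appear around it. This is only an illustration of learning from data patterns, not Google Translate's actual method; the cue words and sense labels below are invented for the example.

```python
# Illustrative sketch only: a toy word-sense picker for "bear".
# The cue-word sets are made up for the example; a real system
# would learn such associations from large amounts of text.

ANIMAL_CUES = {"hibernate", "forest", "cub", "grizzly"}
TOLERATE_CUES = {"pain", "burden", "endure", "cost"}

def guess_sense(sentence: str) -> str:
    """Pick a sense of 'bear' by counting context-word overlaps."""
    words = set(sentence.lower().split())
    animal_score = len(words & ANIMAL_CUES)
    tolerate_score = len(words & TOLERATE_CUES)
    return "animal" if animal_score >= tolerate_score else "tolerate"

print(guess_sense("the grizzly bear will hibernate in the forest"))  # animal
print(guess_sense("I cannot bear the pain and the cost"))            # tolerate
```

A real translation system replaces the hand-picked cue words with statistics gathered automatically from millions of sentences, but the underlying principle - let the surrounding data decide the meaning - is the same.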
But as companies push this tech frontier further, former National Security Agency analyst Melanie Teplinsky said, they sometimes place innovation before privacy. She and other privacy experts said Americans tend to trust tech firms more than do Europeans, not always realizing the companies are collecting personal information to make money, such as through targeted ads.
“I think there still isn’t a great understanding in the U.S. about what is happening,” said Teplinsky, now an adjunct professor at American University’s Washington College of Law, adding, “From the perspective of users, what we need to be concerned about is the slow chipping away of privacy over time.”
Google is putting its weight behind artificial intelligence, announcing on November 9 that it had released its TensorFlow machine learning software under an open-source license. The company wants scientists and developers to experiment freely with the artificial intelligence engine and improve its usefulness.
“What I actually think machine learning will do is make us behave less like machines, not more. When I - I think when you look back in the 1950s, there’s an awful lot of things that people did that we don’t do anymore, because they’ve been eliminated,” Schmidt said, citing repetitive factory work as an area where machine has replaced man. “The world is just so much more efficient, so much more human-friendly today,” he said.
In his view, people will behave less like machines by giving up mechanical jobs as robots get smarter. Researchers are experimenting to see whether computers can learn to react to new conditions, as with self-driving cars, or study enough gene configurations that when a patient comes into a doctor's office, a computer can diagnose the disease by cross-checking symptoms against its database.
Many of these tests rely on computer trial and error. Given a picture of a fern, the software might call it a tree at first. The program learns from its mistake and repeats the process so many times that eventually it gets pretty accurate at identifying ferns.
Though it has existed for decades, machine learning only recently came back into focus as the amount of data available for computers to learn from increased. Computers themselves have also become much more powerful, and thus able to process more information.
“Machine learning is like a rocket engine,” Google research scientist Greg Corrado said - and data is the rocket fuel.
But computers have their limits, particularly when artificial intelligence lacks emotional intelligence. Several months ago, for instance, Google was forced to apologize when its new Photos app assigned a “gorilla” tag to a picture of black people. The company’s humans stepped in to troubleshoot.