Google's machine vision capabilities are being used to enable services such as recognizing who is in pictures and what they are doing, as well as translating languages on signs viewed through smartphone cameras, demonstrations showed.
Advanced "Lens" features are being added first to the Google Photos application, which is available free.
Aiming a smartphone camera at a flower will prompt Lens to identify it, while aiming it at a complex password and hotspot name on a router will let the phone automatically log into the wireless network.
Google also unveiled a second-generation computer chip it designed specifically to improve cloud computing capabilities in data centers.
"We want Google Cloud to be the best cloud for machine learning," Pichai said.
He described the internet giant's core search service and its Google Assistant as the company's most important AI products.
Google Assistant, introduced last year, is now on more than 100 million devices, according to the team's vice president of engineering, Scott Huffman.
"We are really starting to crack the hard computer challenge of conversationality," Huffman said. "Soon, with Google Lens, your assistant will be able to have a conversation about what you see."
Google used the conference to announce a software kit that will let developers build Assistant capabilities into robots, applications, and other computerized creations.
Google also announced enhancements to its Home personal assistant, adding abilities such as hands-free telephone calls and acting as a speaker for wireless audio.