Google announced the Google Lens app at Google I/O 2017, designed to provide relevant information about what the camera sees using visual analysis. The app officially launched on October 4, 2017.
Now, at Google I/O 2018, a year after that first announcement, Google has unveiled a major update for the app.
First, Google Lens gets an improved prompt. The app previously showed a dialogue bubble like the one in Assistant, but it now displays a rounded sheet at the bottom of the screen telling users they can “Tap on objects and text”. The microphone is also accessible through this sheet, and sliding it up reveals the identification categories: Text, Products, Books & Media, and Barcodes.
Second, Google Lens gains the real-time AR capabilities announced at I/O. Users simply point their camera at the real world to get results: the app surfaces items for AR shopping, and recognized objects are marked with anchored dots that display information as soon as they are tapped.
Third, Google Lens now features smart text selection: when you point your camera at text, the app recognizes it and gives you more information and context. You can even copy and paste the recognized text.
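Google has not published how Lens itself implements this, but comparable on-device text recognition is available to developers through Firebase ML Kit. Here is a minimal sketch, assuming the firebase-ml-vision dependency is set up and a camera frame is already available as a Bitmap; recognizeText and onResult are hypothetical names used only for illustration:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Sketch: recognize text in a single camera frame and pass the result to a
// callback, roughly the step smart text selection performs before offering
// copy and paste. This is not Lens's actual pipeline.
fun recognizeText(frame: Bitmap, onResult: (String) -> Unit) {
    val image = FirebaseVisionImage.fromBitmap(frame)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { visionText ->
            // visionText.text holds all recognized text; its blocks, lines,
            // and elements carry the bounding boxes needed to make the text
            // tappable on screen.
            onResult(visionText.text)
        }
        .addOnFailureListener {
            onResult("") // recognition failed; return empty text
        }
}
```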
If you have used the previous version of the app, you will also notice that the “Keep” and “Share” buttons are gone.
All of this is made possible by machine learning and the use of Cloud TPUs, which allow the app to identify billions of words, phrases, and places.
The improved version is not yet available on Pixel or non-Pixel phones, but the update is expected to roll out in the coming weeks, as stated in the Google I/O keynote.