At its recently concluded I/O conference, Google gave a teaser of the mind-blowing things to look forward to in 2018. For a Data Scientist or a Machine Learning enthusiast, these announcements are no less exciting than a Marvel movie. Besides introducing some pretty fascinating features, they showcased the depth and breadth of opportunities that lie in the field of Data Science, and threw wide open new doors for AI's expedition across domains for the future of humanity.
The days when powerful, cool hardware like Google Glass, standalone AR and VR devices, smartwatches, and Android releases dominated the Google I/O conference are a matter of the past. This time Google has actually reinvented all its products by infusing them with Artificial Intelligence, with Machine Learning and Deep Learning models working under the hood. Be it its large portfolio of apps, its operating systems, or its autonomous car, everything now comes powered by powerful models, to the extent that Google has even renamed its research division Google AI.
Announcements pertaining to Artificial Intelligence
AI pushing the technological frontiers in the health care industry
1. Diagnose Diabetic Retinopathy: Deep Learning models applied to retinal images obtained from an eye scan can diagnose diabetic retinopathy, a condition that leads to blindness. This can bring quality medical assistance to areas where doctors are scarce, and also assist doctors with accurate early predictions. Interestingly, the same eye scan was additionally able to predict the five-year risk of an adverse cardiovascular event (heart attack or stroke), along with other predictions around blood pressure, diabetes, smoking, BMI, etc.
2. Predict medical events: Applying Machine Learning to de-identified medical records (data with masked personal identity), the models analyzed around 100,000 data points per patient to predict the chances of re-admission in the next 24-48 hours with 76% accuracy. A doctor is limited by human capability, but by leveraging machines, AI can be a life saver.
AI in Gmail
1. Smart Compose: A new email text recommendation system for Gmail. It uses Natural Language Processing and prediction models to start suggesting phrases based on the subject line and the first few words of the email. This autocomplete feature lets the user finish an email by simply selecting the recommended sentences, very similar to the next-word suggestions from Gboard, the Google keyboard we use on mobile phones.
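The idea behind Smart Compose can be illustrated with a toy next-phrase suggester: a bigram-style model that, given the last word typed, greedily proposes the most frequent continuation seen in a training corpus. This is only a minimal sketch under that simplifying assumption; Gmail's real system uses far larger neural sequence models, and all names here are illustrative.

```python
from collections import defaultdict, Counter

class PhraseSuggester:
    """Toy next-phrase suggester: learns which word most often
    follows a given word in the training corpus."""

    def __init__(self):
        self.follows = defaultdict(Counter)

    def train(self, sentences):
        for sentence in sentences:
            words = sentence.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.follows[prev][nxt] += 1

    def suggest(self, text, length=3):
        # Greedily extend the text with the most frequent next word.
        words = text.lower().split()
        current = words[-1]
        suggestion = []
        for _ in range(length):
            if not self.follows[current]:
                break
            current = self.follows[current].most_common(1)[0][0]
            suggestion.append(current)
        return " ".join(suggestion)

suggester = PhraseSuggester()
suggester.train([
    "thank you for your email",
    "thank you for your time",
    "looking forward to hearing from you",
])
print(suggester.suggest("thank", length=3))  # → "you for your"
```

A production system would condition on the whole prefix and the subject line rather than a single word, but the suggest-as-you-type interaction is the same.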
AI in Google Photos
1. Suggested actions: Google Photos has differentiated itself with AI at its core since its inception. In the new release, smart action suggestions will offer users certain actions to act on in real time, based on the image. Object recognition, image processing, and matching algorithms make it possible to suggest sharing a photo in real time with the friends who appear in it, to offer advanced image editing options, and to convert a document photo into a perfectly cropped PDF.
AI in Google Assistant
1. Six new voices for Google Assistant: Google Assistant is, at one end, about getting work done for its users, and at the other, about shedding the feel of a virtual program and coming across as more of a human companion. The WaveNet models employed in Google Assistant model raw audio from a small training data set, enough to capture the pitch, pace, and pauses of a desired person's voice in different scenarios, to create a more natural voice for the Assistant. This reduces the studio time needed for a human to record answers to every possible question, and at the same time the result sounds more natural.
2. Continued Conversation: One needn't be frustrated by having to wake the Google Assistant with "Ok Google" before every query anymore. With the new release it will be able to take in follow-up requests and even understand when the user is speaking to it and when to other people around. Smart algorithm, isn't it?
3. Multiple Actions: With coordination reduction, as it is called in linguistics, Google Assistant can now separate multiple tasks in a single command, then understand and perform a different action for each of them using Natural Language Processing and semantic learning.
4. Pretty Please: A feature that can be enabled in Google Assistant, intended to teach kids to be polite. It acts as a reinforcer by applying positive reinforcement: the Assistant expects commands phrased politely, with a "please", and performs the desired action as a reward for politeness. Technically, this flips the usual setup: the kids are the ones being trained, and the AI is the trainer performing reinforcement learning.
5. Google Duplex: The most talked-about announcement. Google Assistant can now call people on the user's behalf and perform requested actions, like fixing an appointment or making a reservation to start with. Further development will scale it to inquire about required information or perhaps convey an intended message. As cool as it seems, it has already scared people about the power of AI. Google has, however, clarified that it will not let the Assistant bluff call receivers by pretending to be human; rather, it will be encoded to identify itself at the beginning of the call.
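The "Multiple Actions" coordination split described above can be illustrated with a deliberately naive sketch: splitting a compound voice command on coordinating conjunctions. The function name and the rule set are hypothetical; the Assistant's real pipeline uses semantic parsing, not string splitting.

```python
import re

def split_actions(command):
    """Naively split a compound voice command into separate actions
    on coordinating conjunctions ("and", "then"), a toy stand-in
    for the coordination-reduction step."""
    parts = re.split(r"\band then\b|\band\b|\bthen\b", command.lower())
    return [p.strip() for p in parts if p.strip()]

print(split_actions("turn on the lights and play some jazz"))
# → ['turn on the lights', 'play some jazz']
```

The naive rule fails on commands with shared objects, like "call mom and dad", which is exactly why real coordination reduction needs semantic understanding of the sentence rather than surface-level splitting.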
AI in Google News
1. News Digest: Leveraging its huge database, multi-platform connectivity, user data, and advancements in AI, the news digest brings customization by showing users the news categories they are interested in, without requiring them to select their preferences, through reinforcement learning. The model learns from user activity in the app, and the more the app is used, the more accurate the news recommendations get.
"Information overload" is the rabbit-hole problem the big companies are dealing with. What it essentially means is that so much data is being collected and published every minute that it is becoming more and more difficult for users to find what they like. Good for business, perhaps, since it opens up space for newer models like Recommendation Engines and Natural Language Processing systems that understand user preferences and browse through the heap of data to fetch exactly what users want to look at, making them happier and more engaged and saving them quality time (it seems!).
2. Newscasts: A very interesting feature release in Google News. It extracts a short preview from a long news article using Natural Language Processing. The model grasps the key information, reviews, quotes, trailers, etc. from long news articles and presents them to the user in a short format, delivering the key content of an article with less time spent on it.
3. Full Coverage: The top feature of Google News, worth a clap. It intends to give a complete picture of a story by assembling news articles from different sources and presenting different perspectives. A model based on the Temporal Co-Locality technique maps relations between entities (news content such as articles, tweets, videos, etc.) from different sources (YouTube, news publishers, tweets, and live news telecasts) based on the information present in the article (places, people, and things) to create a 360-degree view of the story, from when it started to the current scenario.
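The Newscasts idea of pulling a short preview out of a long article can be sketched as a toy extractive summarizer: score each sentence by the frequency of its words across the whole article and keep the top scorers. This is a classic baseline under that assumption, not Google's actual model, and `extract_preview` is an illustrative name.

```python
import re
from collections import Counter

def extract_preview(article, num_sentences=1):
    """Toy extractive summarizer: score each sentence by the summed
    article-wide frequency of its words, then return the top-scoring
    sentences in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    freq = Counter(re.findall(r"\w+", article.lower()))
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"\w+", sentences[i].lower())),
    )
    chosen = sorted(ranked[:num_sentences])  # restore reading order
    return " ".join(sentences[i] for i in chosen)

article = ("Google announced new AI features. "
           "The AI features span many Google products. "
           "Weather was nice.")
print(extract_preview(article))
# → "The AI features span many Google products."
```

Frequency scoring favors sentences that concentrate the article's recurring terms; production summarizers instead use learned models that also capture quotes, reviews, and other salient fragments.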
AI in Android (mobile operating system)
1. Android P: Brings AI to the core of the operating system. This allows Machine Learning and Deep Learning algorithms to run on-device, keeping user data private on the phone.
2. Adaptive Battery: With the data points collected from the user's daily app usage, Machine Learning models can predict which apps the user will use next, which in turn helps the OS manage background processes and CPU wake-ups, all leading to improved battery life.
3. Adaptive Brightness: An advancement over the Auto Brightness feature, with Machine Learning models learning each user's individual brightness preferences in different lighting environments to adjust the screen automatically, reducing manual adjustment.
4. App Actions: Android already has a predicted-apps feature, which predicts which apps the user will open next. Advancing over it, Android P will roll out with app action prediction, which can predict what action the user will perform next, like calling mom, ordering food, or listening to music, driven by the user's daily usage patterns.
5. ML Kit: Google has created and offered app developers a kit of Machine Learning APIs to support incorporating AI into more and more apps, pushing app design and use to the next level. Built on TensorFlow Lite on-device, with cloud-based options, it comes with libraries for image processing, photo labeling, landmark recognition, facial recognition, etc.
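The Adaptive Battery idea above, predicting the next app from daily usage so the OS knows which processes to keep warm, can be sketched as a toy transition-count model. This is a minimal illustration under that assumption; the shipped feature uses a deep learning model, and the class and method names here are made up.

```python
from collections import defaultdict, Counter

class NextAppPredictor:
    """Toy next-app predictor: counts which app the user tends to
    open after each app, the kind of usage signal an OS could use
    to decide which background processes to keep alive."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def record(self, app_sequence):
        for current, nxt in zip(app_sequence, app_sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current_app):
        counts = self.transitions.get(current_app)
        if not counts:
            return None  # no usage history for this app
        return counts.most_common(1)[0][0]

predictor = NextAppPredictor()
predictor.record(["alarm", "news", "mail", "alarm", "news", "music"])
print(predictor.predict("alarm"))  # → "news"
```

Even this crude first-order model captures the key design point: usage history turns battery management from a static policy into a personalized, predictive one.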
AI in Google Maps
1. Your Match: Google Maps will roll out Machine Learning models for a content-based filtering recommendation system. This combines Google's large information database with the user's interests to rate the likelihood of the user liking a particular restaurant (e.g. a 98% match), based on the user's previous interactions with other restaurants: the ratings they have given, the cuisines they have shown interest in, etc.
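The percentage-match idea of content-based filtering can be sketched as cosine similarity between a user preference vector and each restaurant's feature vector. The feature set, weights, and names below are invented for illustration; Google's actual signal set is far richer.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Feature order: [italian, japanese, vegetarian, fine_dining]
user_profile = [0.9, 0.1, 0.8, 0.3]  # derived from past ratings and visits
restaurants = {
    "Pasta Palace": [1, 0, 1, 0],
    "Sushi Spot":   [0, 1, 0, 1],
}

for name, features in restaurants.items():
    match = round(cosine(user_profile, features) * 100)
    print(f"{name}: {match}% match")
# → Pasta Palace: 97% match
# → Sushi Spot: 23% match
```

Because the score depends only on the user's own history and item features, content-based filtering can rate a brand-new restaurant the user has never seen, which collaborative filtering alone cannot do.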
AI in Google lens
1. Style Match: Point the camera at an object of interest and the AI will search online images for similar styles and return the results, ranging from images and online shopping sites to YouTube videos, making it easier to find what we like on the overloaded internet.
Machine Learning here has a much bigger role to play than object recognition and image processing: traversing information at that scale and fetching results in real time can only be facilitated by Machine Learning algorithms and powerful computing infrastructure.
Tensor Processing Unit (TPU 3.0): These are special-purpose Machine Learning processing units that make all these developments at Google possible by helping them create better, larger, and more accurate models.
AI in Waymo (Self-Driving cars)
Well, self-driving cars have AI running in their every vein, but Google's expedition has pushed AI's capabilities further with deep semantic understanding. As the team puts it, "It is not the car we are building; rather, it is the better driver we are developing."
1. Perception Modelling: Deep nets trained on sensor data help detect objects in all their different sizes and forms (e.g. a human standing, sitting, or in a dinosaur costume).
2. Prediction Modelling: Unexpected movements on the road are not uncommon. The models have to be trained on more and more data to be prepared for these unexpected movements and keep getting better.
On an ending note…
The vision of AI has always been to be as human as possible, and that cannot be attained without semantic understanding. To be human is to feel emotion, understand behavior and habits, and empathize with people's needs and requirements. The advancements in Natural Language Processing are the way towards this vision. One other important point to take from the Google showcase is the effort businesses are putting into personalizing the experience: understanding user habits, interests, and behavior to customize every app, in fact every interaction of the user with the business. Supervised-learning-based Recommendation Engines are the front runners in the Machine Learning space for creating a more customized and personal experience for users. All the AI announcements above are a great inspiration to perceive the environment we interact with in the context of reinventing it, powered with AI. As Data Scientists and Machine Learning enthusiasts, this is the crazy life we choose to live.