Makoto Koike is a cucumber farmer in Japan. Koike is a former embedded systems designer who spent years working in the Japanese automobile industry, but in 2015 he returned home to help out on his parents' cucumber farm. He soon realized that the manual task of sorting cucumbers by color, shape, size, and attributes such as "thorniness" was often trickier and more arduous than growing them. Inspired by the deep learning innovation of Google's artificial intelligence (AI) software AlphaGo, he set out to automate the task.
Businesses are beginning to implement practical AI in all sorts of ways, but it's safe to say that no one saw Koike's AI cucumber-sorting solution coming. Koike had never worked with AI techniques before, but using the open-source TensorFlow machine learning (ML) library, he began feeding it images of cucumbers. Thanks to computer vision algorithms for recognizing objects and deep learning to train TensorFlow on the nuances of different cucumbers, the system learned to identify and sort the vegetables with a high degree of accuracy. Then, using nothing but TensorFlow and a cheap Raspberry Pi 3 computer, Koike built an automated sorting machine that the farm still uses today.
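Koike's exact model isn't public, but the general shape of a TensorFlow image classifier is simple to sketch. The snippet below uses the Keras API bundled with TensorFlow; the nine output grades, the tiny 32x32 inputs, and the layer sizes are all illustrative assumptions, not Koike's actual architecture.

```python
import numpy as np
import tensorflow as tf

NUM_GRADES = 9  # assumed number of cucumber quality grades; not confirmed by the article

# A small convolutional classifier: images in, a probability per grade out.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),          # tiny RGB crops of each cucumber
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_GRADES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Forward pass on a dummy batch; real training would call model.fit()
# on labeled cucumber photos.
batch = np.random.rand(4, 32, 32, 3).astype("float32")
probs = model(batch).numpy()
```

In a deployment like Koike's, the trained model would run on the Raspberry Pi, and the grade with the highest probability would drive the sorting hardware.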
TensorFlow is one of the many open-source algorithms and tools revolutionizing what businesses and developers can solve using AI. Google is also incorporating these techniques and application programming interfaces (APIs) into everything it does, baking ML into its products and fundamentally redefining how its software works in the process.
PCMag recently visited the Googleplex and spoke to executives from G Suite, Google Cloud Platform (GCP), and the company's Machine Learning Advanced Solution Lab (ML ASL) about how Google is rebuilding itself with AI.
Artificial Intelligence Everywhere
Let's say one of your customers is having an issue. An agent from your company's helpdesk department is in a live chat with the customer through Google My Business, using a chat feature currently in a pilot program (and not yet widely available). To resolve the issue, the customer needs to send the agent some sensitive personal data. Now let's say that customer is your grandma. The customer service rep asks her for a few specific pieces of information but, instead, grandma sends far more than she needs to: she uploads a picture of her Social Security card to the chat.
Instead of Google archiving that personally identifiable information (PII), the picture shows up with the Social Security number and other PII automatically redacted. The agent never sees any information they don't need, and none of that data goes into Google's encrypted archive. During a demo of the Google My Business chat at Google's headquarters in Mountain View, Calif., the company pulled back the curtain on how ML algorithms make this happen.
Rob Sadowski, Trust and Security Marketing Lead for Google Cloud, explained that the automatic redaction is powered by Google's data loss prevention (DLP) API working under the surface to classify sensitive data. The algorithm does the same thing with data such as credit card numbers, and can also analyze patterns to detect when a number is fake. This is but one example of Google's subtle strategy of weaving AI into its experiences, and giving businesses and developers such as Koike the resources to do the same.
Google is far from the only tech giant building a connective intelligence layer into its software but, along with Amazon and Microsoft, Google has arguably the broadest portfolio of cloud-based intelligence tools and services available. Breaking down the company's products, you can find Google Assistant and various ML and computer vision APIs in use just about everywhere.
Google Search uses ML algorithms in its RankBrain AI system to process and refine queries, re-ranking and aggregating data based on a host of changing factors to continually improve the quality of search results. Google Photos uses computer vision to stitch related photos together into memories and combine multiple shots of the same location into panoramas. Inbox gives users auto-generated Smart Replies to choose from, and surfaces relevant emails by bundling similar categories together. The company's new Google Allo chat app comes with Google Assistant built in. The list goes on.
All of these apps run on Google's cloud infrastructure, and the company is even applying ML in its data centers to reduce power consumption by adjusting cooling pumps based on load and weather data. Sadowski said ML also serves as the final layer of defense in Google's security strategy: the company uses machine intelligence and risk scoring within its security stack, applying predictive analytics to determine whether a system is compromised.
"Google takes all these ML and AI models we've developed and tunes them for security," Sadowski explained. "Security changes a lot more radically than most sectors of IT. Products that were the core of your security infrastructure three or four years ago like firewalls and endpoint protection are still important, but we want to provide defense in depth, at scale, and by default over a multi-tenant infrastructure with millions of daily active users.
"It starts with the underlying data center hardware [like the newly announced Titan chip]," Sadowski continued. "On top of that is application services and authentication with fully encrypted data and communication. On top of that is user identity. And the last layer of defense is how we operate with 24/7 monitoring, detection, and incident response. It's how we solve for things like secure remote access with the identity aware proxy. It's the programmatic DLP service finding and preventing data leaks and helping with data governance as well as security. We aim to make these capabilities easy, consumable, and get them working at scale."
A Smarter G Suite
ML is also embedded throughout Google's G Suite productivity apps. Allan Livingston, Director of Product Management for G Suite, broke down some of the ways AI is making G Suite smarter and more contextual without users even realizing it.
"Think about how G Suite brings all these applications together in a naturally integrated way," said Livingston. "You start your work in one of them and flow through as appropriate. You open a Gmail attachment in Drive, and that takes you into Docs; it's really automatic.
"We're trying to take thinking out of it for the user and that also involves machine learning. We started with smart replies in Inbox and we've had good success with Gmail, and that has led to the Explore feature in Docs, Sheets, and Slides."
Rolled out last fall, Explore applies natural language processing (NLP) to the in-app productivity experience. In Docs, Explore gives you instant suggestions based on the content in your document and automatically recommends related topics and resources. In Slides, it generates design suggestions to cut down on presentation formatting. The most interesting use case, however, is in Sheets. Livingston explained how Explore uses ML to simplify data analysis and business intelligence (BI) insights.
"A lot of users don't know what something like a pivot table is or how to use it to visualize a sheet of data," explained Livingston. "Let's say you're dealing with sales data for a customer, where each row is an item that has been sold. Explore lets you type in natural language queries like 'What's the top item on Black Friday?' and spits out a response like 'You sold 563 pairs of pants.' We're addressing data analysis in a way that saves time in making data-driven decisions, using machine learning to improve a common problem in a natural way."
A demo of the Explore feature in Sheets, from the Google Cloud NEXT conference this past March.
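Under the hood, a query like "What's the top item on Black Friday?" reduces to a pivot-style aggregation once the NLP layer has mapped the words onto columns and a date. A minimal sketch of that final aggregation step, with invented sales rows standing in for a real sheet:

```python
from collections import Counter

# Toy stand-in for a sheet of sales data: each row is one line item.
# Dates, items, and quantities are invented for illustration.
sales = [
    {"date": "2016-11-25", "item": "pants",  "qty": 563},
    {"date": "2016-11-25", "item": "shirts", "qty": 120},
    {"date": "2016-11-26", "item": "pants",  "qty": 40},
]

def top_item_on(rows, date):
    """Answer "What's the top item on <date>?" via a pivot-style sum."""
    totals = Counter()
    for row in rows:
        if row["date"] == date:
            totals[row["item"]] += row["qty"]
    item, qty = totals.most_common(1)[0]
    return f"You sold {qty} {item}"

print(top_item_on(sales, "2016-11-25"))  # -> You sold 563 pants
```

The hard part Explore actually solves is the NLP mapping from free-form English to this aggregation; the aggregation itself is ordinary spreadsheet math.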
According to Livingston, Google plans to expand this kind of ML-driven cloud search to third parties and start building an ecosystem around it. The overarching idea is a common theme in practical AI: automating manual processes to free users up for more creative work. That idea is at the heart of most ML apps: automating repeatable business processes and everyday tasks, including cucumber sorting.
"In business and with consumers, users have these natural interaction patterns. The shift to the cloud and to mobile productivity are really changing the way people work, and these applied machine learning techniques are fundamental to it," said Livingston. "Because of our strength in machine learning, because of our products serving as a base, because of all the data in our cloud, we're in a unique position to apply that and scale infinitely."
Powering a Machine Learning Revolution
The foundation of everything Google does around AI is rooted in its APIs, algorithms, and open-source tools. The company's TensorFlow library is the most widely used ML tool on GitHub, spawning apps such as Koike's cucumber sorter. The suite of APIs underlying Google Cloud—algorithms spanning computer vision, video intelligence, speech and NLP, prediction modeling, and large-scale ML through the Google Cloud Machine Learning Engine—is the technology powering every AI feature integrated into Google's apps and services.
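Most of these capabilities are exposed as plain REST calls. As an illustration, the sketch below builds the JSON body for the Cloud Vision API's `images:annotate` endpoint requesting label detection; actually sending the request (with an API key or OAuth credentials) is omitted, and the byte string stands in for a real image.

```python
import base64
import json

def vision_request(image_bytes: bytes, max_labels: int = 5) -> dict:
    """Build the JSON body for POST https://vision.googleapis.com/v1/images:annotate
    asking for label detection on a single image."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_labels}],
        }]
    }

# A real call would read photo bytes from disk or a camera.
body = vision_request(b"fake image bytes")
print(json.dumps(body)[:60])
```

The other APIs in the suite (speech, NLP, video intelligence) follow the same pattern: encode the media, name the features you want, and parse the JSON annotations that come back.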
Francisco Uribe, a Product Manager for Research and Machine Intelligence at Google, works at the heart of the engine that's rewriting how Google works. Uribe oversees Google's aforementioned ML ASL, a lab with an immersive program in which Google ML experts work directly with enterprises to implement AI solutions. By using Google's APIs and the Cloud ML Engine, the lab works with businesses to train and deploy their own models into production.
Uribe has worked in the AI space for more than a decade. He founded BlackLocus, a data-driven startup that built an automated pricing engine for retailers, which was acquired by Home Depot in 2012. After that, he joined Google and worked for four years on the Search Ads team applying ML to improve the ad experience. In 2016, he moved into a research role running the ML ASL and acting as a mentor in Google's Launchpad Accelerator. Uribe said he's continually surprised by how businesses and developers are using Google's tools.
"We've seen use cases across the board—from healthcare and finance to retail and agriculture," said Uribe. "We're trying to help customers improve perception capabilities. Speech translation, image analysis, video APIs, natural language: they're all part of democratizing access to machine and deep learning algorithms, which have finally entered applicability."
The ML ASL has worked with HSBC Bank plc, one of the largest banking and financial services organizations in the world, on ML solutions for anti-money laundering and predictive credit scoring. The ML ASL has also worked with the United Services Automobile Association (USAA), a Fortune 500 financial services group of companies, to train the organization's engineers on ML techniques applied to specific insurance scenarios. eBay used Google's tools to train its ShopBot digital assistant. Uribe explained the four pillars of the process the ML ASL follows when it works with a company.
"You need a strong compute offering to deal with the extreme requirements of ML jobs, and GCP's distributed fiber optics backbone moves data from node to node very efficiently," said Uribe. "We have the Cloud Machine Learning Engine to help customers train models. We help customers execute on data with access to Kaggle's community of 800,000 active data scientists. Finally, you need the talent to be there, so on the research side of things, we have the Brain Residency Program to train engineers on complex ML curriculum. We see these as the building blocks to help customers build intelligent applications."
This all feeds into the open-source community and third-party ecosystem that Google is building around its AI technology. The company even announced a ML startup competition earlier this year, which awards up to $500,000 in investment to ML startups. Uribe talked about some of the innovative applications he's already seen of Google's technology and where other possibilities might lie.
"Let's say you're a customer service analytics company. Think about a speech API to transcribe the content of calls, and then sentiment analysis to improve the quality of your customer service," said Uribe. "Use the vision API to take a photo of a street sign in a foreign country and then the translation API to translate that content in real time through an app experience. It's not just about increasing efficiency; it's about creating new and unique user experiences."
Uribe sees tools such as TensorFlow as the great enabler for large-scale ML adoption in the marketplace. These technologies have become core to what Google is and how it approaches product development, and Uribe believes widely available ML technology will help optimize businesses, open new revenue streams, and invent a new class of intelligent apps.
"Think of it like a new industrial revolution," said Uribe. "We're seeing these tools enable orders of magnitude increases in efficiency and experiences you've never seen before. It's amazing to see how startups are applying it. Look at the cucumber farmer in Japan. He used TensorFlow to build a model for classifying and sorting cucumbers based on patterns, size, textures, etc., and then built specialized hardware to execute it. That level of democratization is incredible to see and we've barely scratched the surface."