Tag Archives: machine learning

Fighting phishing with smarter protections

Editor’s note: October is Cybersecurity Awareness Month, and we're celebrating with a series of security announcements this week. This is the third post; read the first and second ones.

Online security is top of mind for everyone these days, and we’re more focused than ever on protecting you and your data on Google, in the cloud, on your devices, and across the web.


One of our biggest focuses is phishing: attacks that trick people into revealing personal information like their usernames and passwords. You may remember phishing scams as spammy emails from “princes” asking for money via wire transfer. But things have changed a lot since then. Today’s attacks are often highly targeted—this is called “spear-phishing”—more sophisticated, and may even seem to come from someone you know.


Even for savvy users, today’s phishing attacks can be hard to spot. That’s why we’ve invested in automated security systems that can analyze an internet’s worth of phishing attacks, detect the subtle clues that uncover them, and help us protect our users in Gmail, in other Google products, and across the web.


Our investments have enabled us to significantly decrease the volume of phishing emails that users and customers ever see. With our automated protections, account security features (like security keys), and warnings, Gmail is the most secure email service today.


Here is a look at some of the systems that have helped us secure users over time, and enabled us to add brand new protections in the last year.

More data helps protect your data


The best protections against large-scale phishing operations are even larger-scale defenses. Safe Browsing and Gmail spam filters are effective because they have such broad visibility across the web. By automatically scanning billions of emails, webpages, and apps for threats, they enable us to see the clearest, most up-to-date picture of the phishing landscape.


We’ve trained our security systems to block known threats for years. But new, sophisticated phishing emails may come from people’s actual contacts (yes, attackers are able to do this), or include familiar company logos and sign-in pages.


Attacks like these can be really difficult for people to spot. But new insights from our automated defenses have enabled us to immediately detect subtler threats like these, thwart them, and protect Gmail users as well.

Smarter protections for Gmail users, and beyond

Since the beginning of the year, we’ve added brand new protections that have reduced the volume of spam in people’s inboxes even further.

  • We now show a warning within Gmail’s Android and iOS apps if a user clicks a link to a phishing site that’s been flagged by Safe Browsing. These supplement the warnings we’ve shown on the web since last year.


  • We’ve built new systems that detect suspicious email attachments and submit them for further inspection by Safe Browsing. This protects all Gmail users, including G Suite customers, from malware that may be hidden in attachments.
  • We’ve also updated our machine learning models to specifically identify pages that look like common log-in pages and messages that contain spear-phishing signals (see the toy sketch below for a feel of what such signals look like).
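To make “spear-phishing signals” concrete, here is a deliberately tiny, hypothetical Python sketch. It is not Gmail’s model—Google’s production classifiers are large-scale learned systems—and every name and example address in it is made up for illustration.

```python
# Toy illustration only: a few hand-rolled spear-phishing signals.
# Not Gmail's classifier -- production systems learn these patterns at scale.
import re

URGENT_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def spear_phishing_signals(display_name, from_address, reply_to, body):
    """Extract a few simple signals from one email's headers and body."""
    domain = from_address.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower() if reply_to else domain
    words = set(re.findall(r"[a-z]+", body.lower()))
    return {
        # Display name claims a brand the sending domain doesn't contain.
        "name_domain_mismatch": display_name.lower() not in domain,
        # Replies silently routed to a different domain than the sender's.
        "reply_to_mismatch": reply_domain != domain,
        # Pressure language typical of credential-phishing messages.
        "urgency_score": len(words & URGENT_WORDS) / len(URGENT_WORDS),
    }

print(spear_phishing_signals(
    "Google Support", "alerts@gooogle-mail.example", "attacker@evil.example",
    "Urgent: verify your password immediately or your account is suspended"))
```

A real system feeds hundreds of signals like these, plus page and attachment analysis, into learned models rather than fixed rules.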

Safe Browsing helps protect more than 3 billion devices from phishing, across Google and beyond. It hunts and flags malicious extensions in the Chrome Web Store, helps block malicious ads, helps power Google Play Protect, and more. And of course, Safe Browsing continues to show millions of red warnings about websites it considers dangerous or insecure in multiple browsers—Chrome, Firefox, Safari—and across many different platforms, including iOS and Android.
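Safe Browsing’s verdicts aren’t limited to Google’s own products: developers can query the same blocklists through the public Safe Browsing v4 Lookup API. Below is a hedged sketch of such a check—the endpoint and request shape follow the public v4 documentation, while the API key and client fields are placeholders you would supply yourself.

```python
# Sketch: check a URL against Safe Browsing via the public v4 Lookup API.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: create one in the Google Cloud console
ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

def check_url(url):
    body = {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    # An empty JSON object in the response means no match was found.
    return resp.json().get("matches", [])

print(check_url("http://example.com/"))
```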


Layers of phishing protection


Phishing is a complex problem, and there isn’t a single, silver-bullet solution. That’s why we’ve provided additional protections for users for many years.

  • Since 2012, we’ve warned our users if their accounts are being targeted by government-backed attackers. We send thousands of these warnings each year, and we’ve continued to improve them so they stay helpful.
  • This summer, we began to warn people before they linked their Google account to an unverified third-party app.
  • We first offered two-step verification in 2011, and later strengthened it in 2014 with Security Key, the most secure version of this type of protection. These features add extra protection to your account because attackers need more than just your username and password to sign in.

We’ll never stop working to keep your account secure with industry-leading protections. More are coming soon, so stay tuned.

Pixel Visual Core: image processing and machine learning on Pixel 2

The camera on the new Pixel 2 is packed full of great hardware, software and machine learning (ML), so all you need to do is point and shoot to take amazing photos and videos. One of the technologies that helps you take great photos is HDR+, which makes it possible to get excellent photos of scenes with a large range of brightness levels, from dimly lit landscapes to a very sunny sky.

HDR+ produces beautiful images, and over the past year we’ve evolved the algorithm that powers it to use the Pixel 2’s application processor efficiently, enabling you to take multiple pictures in sequence by intelligently processing HDR+ in the background. In parallel, we’ve been working on hardware capabilities that deliver significantly greater computing power—beyond existing hardware—to bring HDR+ to third-party photography applications.

To expand the reach of HDR+, handle the most challenging imaging and ML applications, and deliver lower-latency and even more power-efficient HDR+ processing, we’ve created Pixel Visual Core.

Pixel Visual Core is Google’s first custom-designed co-processor for consumer products. It’s built into every Pixel 2, and in the coming months, we’ll turn it on through a software update to enable more applications to use Pixel 2’s camera for taking HDR+ quality pictures.

Magnified image of Pixel Visual Core

Let's delve into the details for you technical folks out there. The centerpiece of Pixel Visual Core is the Google-designed Image Processing Unit (IPU)—a fully programmable, domain-specific processor designed from scratch to deliver maximum performance at low power. With eight Google-designed custom cores, each with 512 arithmetic logic units (ALUs), the IPU delivers raw performance of more than 3 trillion operations per second on a mobile power budget. Using Pixel Visual Core, HDR+ can run 5x faster, using less than one-tenth the energy, compared with running on the application processor (AP).

A key ingredient in the IPU’s efficiency is the tight coupling of hardware and software—our software controls many more details of the hardware than in a typical processor. Handing more control to the software makes the hardware simpler and more efficient, but it also makes the IPU challenging to program using traditional programming languages. To ease that burden on both developers and the compiler, the IPU leverages domain-specific languages: Halide for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
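As a back-of-the-envelope sanity check on those numbers (our arithmetic, not an official spec): 8 cores × 512 ALUs gives 4,096 ALUs in total, so sustaining 3 trillion operations per second works out to 3×10¹² ÷ 4,096 ≈ 730 million operations per ALU per second—a clock on the order of 730 MHz, assuming each ALU retires one operation per cycle.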


In the coming weeks, we’ll enable Pixel Visual Core as a developer option in the developer preview of Android Oreo 8.1 (MR1). Later, we’ll enable it for all third-party apps using the Android Camera API, giving them access to the Pixel 2’s HDR+ technology. We can’t wait for the beautiful HDR+ photography you already get from your Pixel 2 camera to become available in your favorite photography apps.

HDR+ will be the first application to run on Pixel Visual Core. Notably, because Pixel Visual Core is programmable, we’re already preparing the next set of applications. The great thing is that as we port more machine learning and imaging applications to use Pixel Visual Core, Pixel 2 will continuously improve. So keep an eye out!

TensorFlow Lattice: Flexibility Empowered by Prior Knowledge



(Cross-posted on the Google Open Source Blog)

Machine learning has made huge advances in many applications, including natural language processing, computer vision, and recommendation systems, by capturing complex input/output relationships using highly flexible models. However, problems whose semantically meaningful inputs obey known global relationships—like “the estimated time to drive a road goes up if traffic is heavier, and all else is the same”—remain a challenge. Flexible models like DNNs and random forests may not learn these relationships, and may then fail to generalize to examples drawn from a different sampling distribution than the one the model was trained on.

Today we present TensorFlow Lattice, a set of prebuilt TensorFlow Estimators that are easy to use, plus TensorFlow operators to build your own lattice models. Lattices are multi-dimensional interpolated look-up tables (for more details, see [1-5]), similar to the look-up tables in the back of a geometry textbook that approximate a sine function. We take advantage of the look-up table’s structure, which can be keyed by multiple inputs to approximate an arbitrarily flexible relationship, to satisfy monotonic relationships that you specify in order to generalize better. That is, the look-up table values are trained to minimize the loss on the training examples, but adjacent values in the table are additionally constrained to increase along given directions of the input space, which makes the model outputs increase in those directions. Importantly, because they interpolate between the look-up table values, lattice models are smooth and their predictions are bounded, which helps avoid spuriously large or small predictions at test time.
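To make the mechanics concrete, here is a self-contained NumPy toy (our sketch, not the library’s implementation) of the simplest case: a 1-d interpolated look-up table fit by gradient descent, with a projection step that keeps its entries non-decreasing so the learned function is monotonic.

```python
# Toy 1-d lattice: a look-up table over fixed keypoints, linearly
# interpolated, trained with a monotonicity projection after each step.
import numpy as np

keypoints = np.linspace(0.0, 1.0, 6)   # fixed input grid
values = np.zeros_like(keypoints)      # trainable look-up table entries

def predict(x):
    # Piecewise-linear interpolation: smooth and bounded by the table.
    return np.interp(x, keypoints, values)

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(1.5 * x_train) + 0.1 * rng.normal(size=200)  # noisy, increasing

lr = 0.5
for _ in range(500):
    # Each example's error is split between its two surrounding keypoints.
    idx = np.clip(np.searchsorted(keypoints, x_train) - 1, 0, len(keypoints) - 2)
    w = (x_train - keypoints[idx]) / (keypoints[idx + 1] - keypoints[idx])
    err = predict(x_train) - y_train
    grad = np.zeros_like(values)
    np.add.at(grad, idx, (1.0 - w) * err)
    np.add.at(grad, idx + 1, w * err)
    values -= lr * grad / len(x_train)
    # Monotonicity projection: force the table entries to be non-decreasing.
    values = np.maximum.accumulate(values)

print(np.round(values, 3))  # non-decreasing by construction
```

A real lattice generalizes this table to many input dimensions and interpolates multilinearly between its vertices.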

How Lattice Models Help You
Suppose you are designing a system to recommend nearby coffee shops to a user. You would like the model to learn, “if two cafes are the same, prefer the closer one.” Below we show a flexible model (pink) that accurately fits some training data for users in Tokyo (purple), where there are many coffee shops nearby. The pink flexible model overfits the noisy training examples, and misses the overall trend that a closer cafe is better. If you used this pink model to rank test examples from Texas (blue), where businesses are spread farther out, you would find it acted strangely, sometimes preferring farther cafes!
Slice through a model’s feature space where all the other inputs stay the same and only distance changes. A flexible function (pink) that is accurate on training examples from Tokyo (purple) predicts that a cafe 10 km away is better than the same cafe 5 km away. This problem becomes more evident at test time if the data distribution has shifted, as shown here with blue examples from Texas, where cafes are spread out more.
A monotonic flexible function (green) is both accurate on the training examples and generalizes better on the Texas examples than the non-monotonic flexible function (pink) from the previous figure.
In contrast, a lattice model trained on the same examples from Tokyo can be constrained to satisfy such a monotonic relationship, resulting in a monotonic flexible function (green). The green line accurately fits the Tokyo training examples and also generalizes well to Texas, never preferring farther cafes.

In general, you might have many inputs about each cafe, e.g., coffee quality, price, etc. Flexible models have a hard time capturing global relationships of the form “if all other inputs are equal, nearer is better,” especially in parts of the feature space where your training data is sparse and noisy. Machine learning models that capture prior knowledge (e.g., how inputs should impact the prediction) work better in practice, and are easier to debug and more interpretable.

Pre-built Estimators
We provide a range of lattice model architectures as TensorFlow Estimators. The simplest estimator we provide is the calibrated linear model, which learns the best 1-d transformation of each feature (using 1-d lattices) and then combines all the calibrated features linearly. This works well if the training dataset is very small or there are no complex nonlinear input interactions. Another estimator is the calibrated lattice model, which combines the calibrated features nonlinearly using a two-layer single lattice model that can represent complex nonlinear interactions in your dataset. The calibrated lattice model is usually a good choice if you have 2-10 features; for 10 or more features, we expect you will get the best results with an ensemble of calibrated lattices, which you can train using the pre-built ensemble architectures. Monotonic lattice ensembles can achieve a 0.3-0.5% accuracy gain compared to Random Forests [4], and these new TensorFlow Lattice estimators can achieve a 0.1-0.4% accuracy gain compared to the prior state of the art in learning models with monotonicity [5].
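For a feel of what a pre-built estimator looks like for the cafe example, here is a hedged sketch. The identifiers below follow the original TensorFlow Lattice tutorials as we recall them—treat the exact names and signatures as assumptions and consult the GitHub repository for the authoritative API; train_input_fn stands in for your own input pipeline.

```python
# Sketch: a calibrated lattice regressor with a monotonicity constraint.
# API names are assumptions based on the TF Lattice 1.x tutorials.
import tensorflow as tf
import tensorflow_lattice as tfl

feature_columns = [
    tf.feature_column.numeric_column('distance_km'),
    tf.feature_column.numeric_column('coffee_quality'),
]

hparams = tfl.CalibratedLatticeHParams(
    feature_names=['distance_km', 'coffee_quality'],
    num_keypoints=20,      # keypoints for each feature's 1-d calibrator
    learning_rate=0.1)
# Prior knowledge: all else equal, a farther cafe is never better.
hparams.set_feature_param('distance_km', 'monotonicity', -1)

estimator = tfl.calibrated_lattice_regressor(
    feature_columns=feature_columns, hparams=hparams)
estimator.train(input_fn=train_input_fn)  # train_input_fn: your data feed
```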

Build Your Own
You may want to experiment with deeper lattice networks or research using partial monotonic functions as part of a deep neural network or other TensorFlow architecture. We provide the building blocks: TensorFlow operators for calibrators, lattice interpolation, and monotonicity projections. For example, the figure below shows a 9-layer deep lattice network [5].
Example of a 9-layer deep lattice network architecture [5], alternating layers of linear embeddings and ensembles of lattices with calibrator layers (which act like a sum of ReLUs in neural networks). The blue lines correspond to monotonic inputs, whose monotonicity is preserved layer by layer, and hence by the entire model. This and other arbitrary architectures can be constructed with TensorFlow Lattice because each layer is differentiable.
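To illustrate the monotonicity-projection building block on its own, here is a NumPy rendering (a toy, not the library’s TensorFlow op) of one standard way to compute such a projection: pool-adjacent-violators, which finds the closest non-decreasing sequence to a parameter vector in the least-squares sense.

```python
# Euclidean projection of v onto {x : x[0] <= x[1] <= ... <= x[-1]}
# via pool-adjacent-violators: merge out-of-order neighbors into their mean.
import numpy as np

def project_nondecreasing(v):
    merged = []  # list of [block_mean, block_size]
    for x in v:
        merged.append([float(x), 1])
        # Merge backwards while adjacent block means violate the ordering.
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, s2 = merged.pop()
            m1, s1 = merged.pop()
            merged.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    return np.concatenate([[m] * s for m, s in merged])

print(project_nondecreasing(np.array([0.1, 0.5, 0.3, 0.2, 0.9])))
# -> [0.1  0.333...  0.333...  0.333...  0.9]
```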
In addition to the choice of model flexibility and standard L1 and L2 regularization, we offer new regularizers with TensorFlow Lattice:
  • Monotonicity constraints [3] on your choice of inputs as described above.
  • Laplacian regularization [3] on the lattices to make the learned function flatter.
  • Torsion regularization [3] to suppress unnecessary nonlinear feature interactions.
We hope TensorFlow Lattice will be useful to the larger community working with meaningful semantic inputs. This is part of a larger research effort on interpretability and on controlling machine learning models to satisfy policy goals, enabling practitioners to take advantage of their prior knowledge. We’re excited to share this with all of you. To get started, please check out our GitHub repository and our tutorials, and let us know what you think!

By Maya Gupta, Research Scientist; Jan Pfeifer, Software Engineer; and Seungil You, Software Engineer

Acknowledgements
Developing and open sourcing TensorFlow Lattice was a huge team effort. We’d like to thank all the people involved: Andrew Cotter, Kevin Canini, David Ding, Mahdi Milani Fard, Yifei Feng, Josh Gordon, Kiril Gorovoy, Clemens Mewald, Taman Narayan, Alexandre Passos, Christine Robson, Serena Wang, Martin Wicke, Jarek Wilkiewicz, Sen Zhao, Tao Zhu

References
[1] Lattice Regression, Eric Garcia, Maya Gupta, Advances in Neural Information Processing Systems (NIPS), 2009
[2] Optimized Regression for Efficient Function Evaluation, Eric Garcia, Raman Arora, Maya R. Gupta, IEEE Transactions on Image Processing, 2012
[3] Monotonic Calibrated Interpolated Look-Up Tables, Maya Gupta, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin Canini, Alexander Mangylov, Wojciech Moczydlowski, Alexander van Esbroeck, Journal of Machine Learning Research (JMLR), 2016
[4] Fast and Flexible Monotonic Functions with Ensembles of Lattices, Mahdi Milani Fard, Kevin Canini, Andrew Cotter, Jan Pfeifer, Maya Gupta, Advances in Neural Information Processing Systems (NIPS), 2016
[5] Deep Lattice Networks and Partial Monotonic Functions, Seungil You, David Ding, Kevin Canini, Jan Pfeifer, Maya R. Gupta, Advances in Neural Information Processing Systems (NIPS), 2017

The best hardware, software and AI—together

Today, we introduced our second-generation family of consumer hardware products, all made by Google: new Pixel phones, Google Home Mini and Max, an all-new Pixelbook, the Google Clips hands-free camera, Google Pixel Buds, and an updated Daydream View headset. We see tremendous potential for devices to be helpful, make your life easier, and even get better over time when they’re created at the intersection of hardware, software, and advanced artificial intelligence (AI).


Why Google?

These days many devices—especially smartphones—look and act the same. That means that to create a meaningful experience for users, we need a different approach. A year ago, Sundar outlined his vision of how AI would change how people use computers. And in fact, AI is already transforming what Google’s products can do in the real world. For example, swipe typing has been around for a while, but AI lets people use Gboard to swipe-type in two languages at once. Google Maps uses AI to figure out what the parking is like at your destination and suggest alternative spots before you’ve even put your foot on the gas. But for this wave of computing to reach new breakthroughs, we have to build software and hardware that can bring more of the potential of AI into reality—which is what we’ve set out to do with this year’s new family of products.

Hardware, built from the inside out

We’ve designed and built our latest hardware products around a few core tenets. First and foremost, we want them to be radically helpful. They’re fast, they’re there when you need them, and they’re simple to use. Second, everything is designed for you, so that the technology doesn’t get in the way and instead blends into your lifestyle. Lastly, by creating hardware with AI at the core, our products can improve over time. They’re constantly getting better and faster through automatic software updates. And they’re designed to learn from you, so you’ll notice features—like the Google Assistant—get smarter and more assistive the more you interact with them.


You’ll see this reflected in our 2017 lineup of new Made by Google products:

  • The Pixel 2 has the best camera of any smartphone, again, along with a gorgeous display and augmented reality capabilities. Pixel owners get unlimited storage for their photos and videos, and an exclusive preview of Google Lens, which uses AI to give you helpful information about the things around you.
  • Google Home Mini brings the Assistant to more places throughout your home, with a beautiful design that fits anywhere. Max is our biggest and best-sounding Google Home device, powered by the Assistant, and with AI-based Smart Sound it can adapt your audio experience to you—your environment, context, and preferences.
  • With Pixelbook, we’ve reimagined the laptop as a high-performance Chromebook, with a versatile form factor that works the way you do. It’s the first laptop with the Assistant built in, and the Pixelbook Pen makes the whole experience even smarter.
  • Our new Pixel Buds combine Google smarts and the best digital sound. You’ll get elegant touch controls that put the Assistant just a tap away, and they’ll even help you communicate in a different language.
  • The updated Daydream View is the best mobile virtual reality (VR) headset on the market, and the simplest, most comfortable VR experience.
  • Google Clips is a totally new way to capture genuine, spontaneous moments—all powered by machine learning and AI. This tiny camera seamlessly sends clips to your phone, and even edits and curates them for you.

Assistant, everywhere

Across all these devices, you can interact with the Google Assistant any way you want—talk to it with your Google Home or your Pixel Buds, squeeze your Pixel 2, use your Pixelbook’s Assistant key, or circle things on your screen with the Pixelbook Pen. Wherever you are, and on any device with the Assistant, you can connect to the information you need and get help with the tasks that get you through your day. No other assistive technology comes close, and it continues to get better every day.

New hardware products

Google’s hardware business is just getting started, and we’re committed to building and investing for the long run. We couldn’t be more excited to introduce you to our second-generation family of products, which truly brings together the best of Google software and thoughtfully designed hardware with cutting-edge AI. We hope you enjoy using them as much as we do.

Source: Google Chrome

Best commute ever? Ride along with Google execs Diane Greene and Fei-Fei Li

Editor’s Note: The Grace Hopper Celebration of Women in Computing is coming up, and Diane Greene and Dr. Fei-Fei Li—two of our senior leaders—are getting ready. Sometimes Diane and Fei-Fei commute to the office together, and this time we happened to be along to capture the ride. Diane took over the music for the commute, and with Aretha Franklin’s “Respect” in the background, she and Fei-Fei chatted about the conference, their careers in tech, motherhood, and amplifying female voices everywhere. Hop in the backseat for Diane and Fei-Fei’s ride to work.

(A quick note for the riders: This conversation has been edited for brevity, and so you don’t have to read Diane and Fei-Fei talking about U-turns.)


Fei-Fei: Are you getting excited for Grace Hopper?

Diane: I’m super excited for the conference. We’re bringing together technical women to surface a lot of things that haven’t been talked about as openly in the past.

Fei-Fei: You’ve had a long career in tech. What makes this point in time different from the early days when you entered this field?

Diane: I got a degree in engineering in 1976 (ed note: Fei-Fei jumped in to remind Diane that this was the year she was born!). Computers were so exciting, and I learned to program. When I went to grad school to study computer science in 1985, there was actually a fair number of women at UC Berkeley. I’d say we had at least 30 percent women, which is way better than today.

It was a new, undefined field. And whenever there’s a new industry or technology, it’s wide open for everyone because nothing’s been established. Tech was that way, so it was quite natural for women to work in artificial intelligence and theory, and even in systems, networking, and hardware architecture. I came from mechanical engineering and the oil industry, where I was the only woman. Tech was full of women then, but now women make up less than 15 percent of the people in tech.

Fei-Fei: So do you think it’s too late?

Diane: I don’t think it’s too late. Girls in grade school and high school are coding. And certainly in colleges the focus on engineering is really strong, and the numbers are growing again.

Fei-Fei: You’re giving a talk at Grace Hopper—how will you talk to them about what distinguishes your career?

Diane: It’s wonderful that we’re both giving talks! Growing up, I loved building things so it was natural for me to go into engineering. I want to encourage other women to start with what you’re interested in and what makes you excited. If you love building things, focus on that, and the career success will come. I’ve been so unbelievably lucky in my career, but it’s a proof point that you can end up having quite a good career while doing what you’re interested in.

I want to encourage other women to start with what you’re interested in and what makes you excited. If you love building things, focus on that, and the career success will come. Diane Greene

Fei-Fei: And you are a mother of two grown, beautiful children. How did you prioritize them while balancing career?

Diane: When I was at VMware, I had the “go home for dinner” rule. When we founded the company, I was pregnant and none of the other founders had kids. But we were able to build the culture around families—every time someone had a kid, we gave them a VMware diaper bag. Whenever my kids had a school play or a parent-teacher conference, I would make a big show of leaving in the middle of the day so everyone would know they could do that too. And at Google, I encourage both men and women on my team to find that balance.

Fei-Fei: It’s so important for your message to get across because young women today are thinking about their goals and what they want to build for the world, but also for themselves and their families. And there are so many women and people of color doing great work, how do we lift up their work? How do we get their voices heard? This is something I think about all the time, the voice of women and underrepresented communities in AI.

Diane: This is about educating people—not just women—to surface the accomplishments of everybody and make sure there’s no unconscious bias going on. I think Grace Hopper is a phenomenal tool for this, and there are things that I incorporate into my work day to prevent that unconscious bias: pausing to make sure the right people are included in a meeting and that no one has been overlooked, and encouraging everyone in that meeting to participate so that all voices are heard.

Fei-Fei: Grace Hopper could be a great platform to share best practices for how to address these issues.

...young women today are thinking about their goals and what they want to build for the world, but also for themselves and their families. Dr. Fei-Fei Li

Diane: Every company is struggling to address diversity and there’s a school of thought that says having three or more people from one minority group makes all the difference in the world—I see it on boards. Whenever we have three or more women, the whole dynamic changes. Do you see that in your research group at all?

Fei-Fei: Yes, for a long time I was the only woman faculty member in the Stanford AI lab, but now it has attracted a lot of women who do very well because there’s a community. And that’s wonderful for me, and for the group.

Now back to you … you’ve had such a successful career, and I think a lot of women would love to know what keeps you going every day.

Diane: When you wake up in the morning, be excited about what’s ahead for the day. And if you’re not excited, ask yourself if it’s time for a change. Right now the Cloud is at the center of massive change in our world, and I’m lucky to have a front row seat to how it’s happening and what’s possible with it. We’re creating the next generation of technologies that are going to help people do things that we didn’t even know were possible, particularly in the AI/ML area. It’s exciting to be in the middle of the transformation of our world and the fast pace at which it’s happening.

Fei-Fei: Coming to Google Cloud, the most rewarding part is seeing how this is helping people go through that transformation and making a difference. And it’s at such a scale that it’s unthinkable on almost any other platform.

Diane: Cloud is making it easier for companies to work together and for people to work across boundaries together, and I love that. I’ve always found when you can collaborate across more boundaries you can get a lot more done.

To hear more from Fei-Fei and Diane, tune in to Grace Hopper’s live stream on October 4.

Source: Google Cloud

Access information quicker, do better work with Google Cloud Search

We all get sidetracked at work. We intend to be as efficient as possible, but inevitably, the “busyness” of business gets in the way—back-to-back meetings, unfinished docs, a rowdy inbox to manage. To be more efficient, you need quick access to your information, like relevant docs, important tasks, and context for your meetings.

Sadly, according to a report by McKinsey, workers spend up to 20 percent of their time—an entire day each week—searching for and consolidating information across a number of tools. We made Google Cloud Search available to Enterprise and Business edition customers earlier this year so that teams can access important information more quickly. Here are a few ways that Cloud Search can help you get the information you need to accomplish more throughout your day.

1. Search more intuitively, access information quicker

If you search for a doc, you’re probably not going to remember its exact name or where you saved it in Drive. Instead, you might remember who sent the doc to you or a specific piece of information it contains, like a statistic.

A few weeks ago, we launched a new, more intuitive way to search in Cloud Search using natural language processing (NLP) technology. Type questions in Cloud Search using everyday language, like “Documents shared with me by John,” “What’s my agenda next Tuesday?” or “What docs need my attention?” and it will track down useful information for you.

2. Prioritize your to-dos, use spare time more wisely

With so much work to do, deciding what to focus on and what to leave for later isn’t always simple. A study by McKinsey reports that only nine percent of executives surveyed feel “very satisfied” with the way they allocate their time. We think technology, like Cloud Search, should help you with more than just finding what you’re looking for—it should help you stay focused on what’s important.

Imagine if your next meeting gets cancelled and you suddenly have an extra half hour to accomplish tasks. You can open the Cloud Search app to help you focus on what’s important. Powered by machine intelligence, Cloud Search proactively surfaces information that it believes is relevant to you and organizes it into simple cards that appear in the app throughout your workday. For example, it suggests documents or tasks based on which documents need your attention or upcoming meetings you have in Google Calendar.

3. Prepare for meetings, get more out of them

Employees spend a lot of time in meetings. According to a UK study by the Centre for Economics and Business Research, office workers spend an average of four hours per week in meetings—and it’s common to join them unprepared. The same group surveyed feels that nearly half (47 percent) of the time spent in meetings is unproductive.

Thankfully, Cloud Search can help. It uses machine intelligence to organize and present information to set you up for success in a meeting. In addition to surfacing relevant docs, Cloud Search also surfaces information about meeting attendees from your corporate directory, and even includes links to relevant conversations from Gmail.

Start by going into Cloud Search to see info related to your next meeting. If you’re interested in looking at another meeting later in the day, just click on “Today’s meetings” and it will show you your agenda for the day. Next, select an event in your agenda (sourced from your Calendar) and Cloud Search will recommend information that’s relevant to that meeting.


Take back your time and focus on what’s important—open the Cloud Search app and get started today, or ask your IT administrator to enable it in your domain. You can also learn more about how Cloud Search can help your teams here.

Source: Google Cloud