Tag Archives: hardware

EfficientNet-EdgeTPU: Creating Accelerator-Optimized Neural Networks with AutoML



For several decades, computer processors have doubled their performance every couple of years by reducing the size of the transistors inside each chip, as described by Moore’s Law. As reducing transistor size becomes more and more difficult, there is a renewed focus in the industry on developing domain-specific architectures — such as hardware accelerators — to continue advancing computational power. This is especially true for machine learning, where efforts are aimed at building specialized architectures for neural network (NN) acceleration. Ironically, while there has been a steady proliferation of these architectures in data centers and on edge computing platforms, the NNs that run on them are rarely customized to take advantage of the underlying hardware.

Today, we are happy to announce the release of EfficientNet-EdgeTPU, a family of image classification models derived from EfficientNets, but customized to run optimally on Google’s Edge TPU, a power-efficient hardware accelerator available to developers through the Coral Dev Board and a USB Accelerator. Through such model customizations, the Edge TPU is able to provide real-time image classification performance while simultaneously achieving accuracies typically seen only when running much larger, compute-heavy models in data centers.

Using AutoML to customize EfficientNets for Edge TPU
EfficientNets have been shown to achieve state-of-the-art accuracy in image classification tasks while significantly reducing the model size and computational complexity. To build EfficientNets designed to leverage the Edge TPU’s accelerator architecture, we invoked the AutoML MNAS framework and augmented the original EfficientNet’s neural network architecture search space with building blocks that execute efficiently on the Edge TPU (discussed below). We also built and integrated a “latency predictor” module that provides an estimate of the model latency when executing on the Edge TPU, by running the models on a cycle-accurate architectural simulator. The AutoML MNAS controller implements a reinforcement learning algorithm to search this space while attempting to maximize the reward, which is a joint function of the predicted latency and model accuracy. From past experience, we know that Edge TPU’s power efficiency and performance tend to be maximized when the model fits within its on-chip memory. Hence we also modified the reward function to generate a higher reward for models that satisfy this constraint.
Overall AutoML flow for designing customized EfficientNet-EdgeTPU models.
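As a rough illustration of the search objective, here is a minimal sketch of a latency-aware reward in the spirit of the MNAS formulation described above. The latency target, exponent, memory size, and bonus factor are illustrative placeholders, not the values used in the actual search.

```python
# Hypothetical sketch of a latency-aware search reward in the spirit of the
# MNAS objective described above. All constants below are illustrative.

def search_reward(accuracy, predicted_latency_ms, model_size_bytes,
                  target_latency_ms=10.0,
                  on_chip_memory_bytes=8 * 1024 * 1024,
                  latency_exponent=-0.07,
                  memory_bonus=1.02):
    """Joint reward over model accuracy and simulator-predicted latency."""
    # Multiplicative latency penalty: models slower than the target are discounted.
    reward = accuracy * (predicted_latency_ms / target_latency_ms) ** latency_exponent
    # Higher reward for models whose parameters fit in the accelerator's
    # on-chip memory, where power efficiency and performance tend to be maximized.
    if model_size_bytes <= on_chip_memory_bytes:
        reward *= memory_bonus
    return reward

# Example: a slightly slower candidate must gain accuracy to earn the same reward.
print(search_reward(0.80, predicted_latency_ms=9.0, model_size_bytes=5_000_000))
print(search_reward(0.81, predicted_latency_ms=12.0, model_size_bytes=5_000_000))
```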
Search Space Design
When performing the architecture search described above, one must consider that EfficientNets rely primarily on depthwise-separable convolutions, a type of neural network block that factorizes a regular convolution to reduce the number of parameters as well as the amount of computations. However, for certain configurations, a regular convolution utilizes the Edge TPU architecture more efficiently and executes faster, despite the much larger amount of compute. While it is possible, albeit tedious, to manually craft a network that uses an optimal combination of the different building blocks, augmenting the AutoML search space with these accelerator-optimal blocks is a more scalable approach.
A regular 3x3 convolution (right) requires more compute (multiply-and-accumulate, or MAC, operations) than a depthwise-separable convolution (left), but for certain input/output shapes it executes faster on the Edge TPU due to ~3x more effective hardware utilization.
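To make the trade-off concrete, the sketch below counts MACs for the two block types at one hypothetical input/output shape: the regular convolution performs roughly 8x more MACs here, yet, as the caption notes, it can still execute faster on the Edge TPU when its effective hardware utilization is ~3x higher.

```python
# Rough MAC (multiply-and-accumulate) counts for the two block types, at one
# hypothetical input/output shape. The shape below is illustrative only.

def regular_conv_macs(h, w, c_in, c_out, k=3):
    # Every output pixel applies c_out filters of size k x k x c_in.
    return h * w * c_in * c_out * k * k

def separable_conv_macs(h, w, c_in, c_out, k=3):
    depthwise = h * w * c_in * k * k   # one k x k filter per input channel
    pointwise = h * w * c_in * c_out   # 1x1 convolution that mixes channels
    return depthwise + pointwise

h, w, c_in, c_out = 56, 56, 64, 128
print(f"regular:   {regular_conv_macs(h, w, c_in, c_out):,}")    # ~231 million MACs
print(f"separable: {separable_conv_macs(h, w, c_in, c_out):,}")  # ~27 million MACs
```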
In addition, removing from the search space certain operations that the Edge TPU compiler does not yet fully support, such as the swish non-linearity and the squeeze-and-excitation block, naturally leads to models that are readily ported to the Edge TPU hardware. These operations tend to improve model quality slightly, so by eliminating them from the search space, we have effectively instructed AutoML to discover alternate network architectures that may compensate for any potential loss in quality.
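For reference, here are minimal sketches of the two operations removed from the search space; the reduction ratio and tensor shapes are illustrative, not EfficientNet's exact block definitions.

```python
# Minimal sketches of the swish non-linearity and a squeeze-and-excitation
# block. Shapes and reduction ratio are illustrative placeholders.
import tensorflow as tf

def swish(x):
    # Smooth non-linearity: x * sigmoid(x).
    return x * tf.sigmoid(x)

def squeeze_and_excitation(x, reduction=4):
    # Channel attention: global-average-pool ("squeeze"), a small bottleneck
    # MLP ("excite"), then rescale the input channels.
    channels = x.shape[-1]
    s = tf.reduce_mean(x, axis=[1, 2], keepdims=True)
    s = tf.keras.layers.Dense(channels // reduction, activation="relu")(s)
    s = tf.keras.layers.Dense(channels, activation="sigmoid")(s)
    return x * s

x = tf.random.normal([1, 32, 32, 64])
y = squeeze_and_excitation(swish(x))   # same shape as x
```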

Model Performance
The neural architecture search (NAS) described above produced a baseline model, EfficientNet-EdgeTPU-S, which is subsequently scaled up using EfficientNet's compound scaling method to produce the -M and -L models. The compound scaling approach selects an optimal combination of input image resolution, network width, and network depth scaling to construct larger, more accurate models. The -M and -L models achieve higher accuracy at the cost of increased latency, as shown in the figure below.
EfficientNet-EdgeTPU-S/M/L models achieve better latency and accuracy than existing EfficientNets (B1), ResNet, and Inception by specializing the network architecture for Edge TPU hardware. In particular, our EfficientNet-EdgeTPU-S achieves higher accuracy, yet runs 10x faster than ResNet-50.
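The sketch below shows the form of EfficientNet-style compound scaling, which uniformly scales depth, width, and resolution with fixed ratios. The base coefficients are the ones reported for the original EfficientNets; the scaling exponents used to derive the EdgeTPU -M and -L variants are not restated here, so the loop values are only illustrative.

```python
# Sketch of EfficientNet-style compound scaling. The base coefficients are the
# ones reported for the original EfficientNets; the exponents below are
# illustrative, not the actual -M/-L settings.

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15   # depth, width, resolution bases

def compound_scale(phi):
    """Return (depth, width, resolution) multipliers for scaling exponent phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in (0, 1, 2):   # illustrative scaling exponents
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```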
Interestingly, the NAS-generated model employs regular convolutions quite extensively in the initial part of the network, where the depthwise-separable convolution tends to be less effective than the regular convolution when executed on the accelerator. This highlights the fact that trade-offs usually made while optimizing models for general-purpose CPUs (reducing the total number of operations, for example) are not necessarily optimal for hardware accelerators. These models also achieve high accuracy even without the use of esoteric operations. Compared with other image classification models such as Inception-ResNet-v2 and ResNet-50, EfficientNet-EdgeTPU models are not only more accurate, but also run faster on Edge TPUs.

This work represents a first experiment in building accelerator-optimized models using AutoML. The AutoML-based model customization can be extended to not only a wide range of hardware accelerators, but also to several different applications that rely on neural networks.

From Cloud TPU training to Edge TPU deployment
We have released the training code and pretrained models for EfficientNet-EdgeTPU in our GitHub repository. We employ TensorFlow's post-training quantization tool to convert a floating-point trained model to an Edge TPU-compatible integer-quantized model. For these models, post-training quantization works remarkably well and produces only a very slight loss in accuracy (~0.5%). The script for exporting the quantized model from a training checkpoint can be found here. For an update on the Coral platform, see this post on the Google Developers Blog, and for full reference materials and detailed instructions, please refer to the Coral website.
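For readers who want to try this themselves, here is a minimal sketch of TensorFlow post-training full-integer quantization of the kind the export script performs. The model path, input resolution, and random calibration data below are placeholders; see the released script for the exact conversion settings used for EfficientNet-EdgeTPU.

```python
# A minimal sketch of TensorFlow post-training full-integer quantization.
# The model path, input resolution, and calibration data are placeholders.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a handful of example inputs so the converter can calibrate
    # activation ranges; real calibration should use actual training images.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("/path/to/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to integer ops so the whole graph can map onto the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("efficientnet_edgetpu_quant.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting .tflite file is then typically passed through the Edge TPU compiler before it is deployed to a Coral device.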

Acknowledgements
Special thanks to Quoc Le, Hongkun Yu, Yunlu Li, Ruoming Pang, and Vijay Vasudevan from the Google Brain team; Bo Wu, Vikram Tank, and Ajay Nair from the Google Coral team; Han Vanholder, Ravi Narayanaswami, John Joseph, Dong Hyuk Woo, Raksit Ashok, Jason Jong Kyu Park, Jack Liu, Mohammadali Ghodrat, Cao Gao, Berkin Akin, Liang-Yun Wang, Chirag Gandhi, and Dongdong Li from the Google Edge TPU team.

Source: Google AI Blog


Glass Enterprise Edition 2: faster and more helpful

Glass Enterprise Edition has helped workers in a variety of industries—from logistics, to manufacturing, to field services—do their jobs more efficiently by providing hands-free access to the information and tools they need to complete their work. Workers can use Glass to access checklists, view instructions or send inspection photos or videos, and our enterprise customers have reported faster production times, improved quality, and reduced costs after using Glass.


Glass Enterprise Edition 2 helps businesses further improve the efficiency of their employees. As our customers have adopted Glass, we’ve received valuable feedback that directly informed the improvements in Glass Enterprise Edition 2. 

Glass Enterprise Edition 2 with safety frames by Smith Optics. Glass is a small, lightweight wearable computer with a transparent display for hands-free work.

Glass Enterprise Edition 2 is built on the Qualcomm Snapdragon XR1 platform, which features a significantly more powerful multicore CPU (central processing unit) and a new artificial intelligence engine. This enables significant power savings, enhanced performance and support for computer vision and advanced machine learning capabilities. We’ve also partnered with Smith Optics to make Glass-compatible safety frames for different types of demanding work environments, like manufacturing floors and maintenance facilities.

Additionally, Glass Enterprise Edition 2 features improved camera performance and quality, which builds on Glass's existing first-person video streaming and collaboration features. We've also added a USB-C port that supports faster charging, and increased overall battery life so customers can use Glass longer between charges.

Finally, Glass Enterprise Edition 2 is easier to develop for and deploy. It’s built on Android, making it easier for customers to integrate the services and APIs (application programming interfaces) they already use. And in order to support scaled deployments, Glass Enterprise Edition 2 now supports Android Enterprise Mobile Device Management.

Over the past two years at X, Alphabet's moonshot factory, we've collaborated with our partners to provide solutions that improve workplace productivity for a growing number of customers—including AGCO, Deutsche Post DHL Group, Sutter Health, and H.B. Fuller. We've been inspired by the ways businesses like these have been using Glass Enterprise Edition. X, which is designed to be a protected space for long-term thinking and experimentation, has been a great environment in which to learn and refine the Glass product. Now, in order to meet the demands of the growing market for wearables in the workplace and to better scale our enterprise efforts, the Glass team has moved from X to Google.

We’re committed to providing enterprises with the helpful tools they need to work better, smarter and faster. Enterprise businesses interested in using Glass Enterprise Edition 2 can contact our sales team or our network of Glass Enterprise solution partners starting today. We’re excited to see how our partners and customers will continue to use Glass to shape the future of work.

On the Path to Cryogenic Control of Quantum Processors



Building a quantum computer that can solve practical problems that would otherwise be classically intractable due to their computational complexity, cost, energy consumption, or time to solution is the longstanding goal of the Google AI Quantum team. Current thresholds suggest a first generation error-corrected quantum computer will require on the order of 1 million physical qubits, which is more than four orders of magnitude more qubits than exist in Bristlecone, our 72 qubit quantum processor. Increasing the number of physical qubits needed for a fault-tolerant quantum computer and maintaining high-quality control of each qubit are intertwined and exciting technological challenges that will require inventions beyond simply copying and pasting our current control architecture. One critical challenge is reducing the number of input/output control lines per qubit by relocating the room temperature analog control electronics to the 3 kelvin stage in the cryostat, while maintaining high-quality qubit control.

As a step towards solving that challenge, this week we presented our first generation cryogenic-CMOS single-qubit controller at the International Solid State Circuits Conference in San Francisco. Fabricated using commercial CMOS technology, our controller operates at 3 kelvin, consumes less than 2 milliwatts of power and measures just 1 mm by 1.6 mm. Functionally, it provides an instruction set for single-qubit gate operations, providing analog control of a qubit via digital lines between room temperature and 3 kelvin, all while consuming ~1000 times less power compared to our current room temperature control electronics.
Google’s first generation cryogenic-CMOS single-qubit controller (center and zoomed on the right) packaged and ready to be deployed inside our cryostat. The controller measures 1mm by 1.6mm.
How to Control 72 Qubits
In our lab in Santa Barbara, we run programs on Bristlecone by applying gigahertz frequency analog control signals to each of the qubits to manipulate the qubit state, to entangle qubits and to measure the outcomes of our computations. How well we define the shape and frequency of these control signals directly impacts the quality of our computation. To make high-quality qubit control signals, we leverage technology developed for smartphones packaged in server racks at room temperature. Individual coaxial cables deliver these signals to each qubit, which are themselves kept inside a cryostat chilled to 10 millikelvin. While this approach makes sense for a Bristlecone-scale quantum processor, which demands 2 control lines per qubit for 144 unique control signals, we realized that a more integrated approach would be required in order to scale our systems to the million qubit level.
Research Scientist Amit Vainsencher checking the wiring on Bristlecone in one of Google's flagship cryostats. Blue coaxial cables are connected from custom analog control electronics (server rack on the right) to the quantum processor.
In our current setup, the number of physical wires connected from room temperature to the qubits inside the cryostat and the finite cooling power of the cryostat represent a significant constraint. One way to alleviate this is to move the digital-to-analog control closer to the quantum processor. Currently, the room temperature digital-to-analog waveform generators used to control individual qubits dissipate ~1 watt of waste heat per qubit. The cooling power of our cryostat at 3 kelvin is 0.1 watt. That means if we crammed 150 waveform generators into our cryostat (never mind the limited physical space inside the refrigerator for a moment), we would overwhelm the cooling power of our cryostat by 1500x, thereby cooking our cryostat and rendering our qubits useless. Therefore, simply installing our existing digital-to-analog control in the cryostat will not set us on the path to controlling millions of qubits. It is clear we need an integrated, low-power qubit control solution.
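A quick back-of-the-envelope check of the numbers above:

```python
# Back-of-the-envelope check of the cooling budget described above.

qubits = 72
control_lines_per_qubit = 2
print(qubits * control_lines_per_qubit)            # 144 unique control signals

dac_heat_per_channel_w = 1.0    # ~1 W of waste heat per room-temperature waveform generator
cooling_power_at_3k_w = 0.1     # cooling power available at the 3 kelvin stage

heat_if_moved_inside_w = 150 * dac_heat_per_channel_w   # ~150 generators inside the cryostat
print(heat_if_moved_inside_w / cooling_power_at_3k_w)   # 1500x over the cooling budget
```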

A Cool Idea
In collaboration with University of Massachusetts Professor Joseph Bardin, we set out to develop custom integrated circuits (ICs) to control our qubits from within the cryostat to ultimately reduce the physical I/O connections to and from our future quantum processors. These ICs would be designed to operate in the ultracold environment, specifically 3 kelvin, and turn digital instructions into analog control pulses for qubits. A key research objective was to first design a custom IC with low power requirements, in order to prevent warming up the cryostat.

We designed our IC to dissipate no more than 2 milliwatts of power at 3 kelvin, which can be challenging as most physical CMOS models assume operation closer to 300 kelvin. After design and fabrication of the IC with the low power design constraints in mind, we verified that the cryogenic-CMOS qubit controller worked at room temperature. We then mounted it in our cryostat at 3 kelvin and connected it to a qubit (mounted at 10 millikelvin in the same cryostat). We carried out a series of experiments to establish that the cryogenic-CMOS qubit controller worked as designed, and most importantly, that we hadn't just installed a heater inside our cryostat.
Schematic of the cryogenic-CMOS qubit controller mounted on the 3 kelvin stage of our dilution refrigerator and connected to a qubit. Our standard qubit control electronics were connected in parallel to enable control and measurement of the qubit as an in-situ check experiment.
Performance at Low Temperature
Baseline experiments for our new quantum control hardware, including T1, Rabi oscillations, and single qubit gates, show similar performance compared to our standard room-temperature qubit control electronics: qubit coherence time was virtually unchanged, and high-visibility Rabi oscillations were observed by varying the amplitude of the pulses out of the cryogenic-CMOS qubit controller—a signature response of a driven qubit.

Comparison of the qubit coherence time measured using the standard and cryogenic quantum controllers.
Measured Rabi amplitude oscillations using the cryogenic controller. The green and black traces are the probability of measuring the qubits in the 1 and 0 states, respectively.
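As a rough illustration of what these baseline experiments measure, the sketch below models T1 decay and amplitude-Rabi oscillations with purely hypothetical parameters; they are not the values measured on the device.

```python
# Illustrative model of the two baseline experiments mentioned above. The
# coherence time and pulse calibration are hypothetical placeholders.
import numpy as np

T1_US = 15.0   # assumed energy-relaxation (T1) time, in microseconds

def t1_population(t_us):
    """Probability of still finding the qubit in |1> after waiting t_us."""
    return np.exp(-t_us / T1_US)

def rabi_p1(amplitude, amp_per_pi_pulse=1.0):
    """Probability of measuring |1> versus drive amplitude. A driven qubit
    rotates on the Bloch sphere, so P(|1>) oscillates sinusoidally with the
    pulse amplitude: the signature Rabi response."""
    theta = np.pi * amplitude / amp_per_pi_pulse
    return np.sin(theta / 2.0) ** 2

amps = np.linspace(0.0, 4.0, 9)
print(np.round(rabi_p1(amps), 3))   # oscillates between 0 and 1 as amplitude grows
```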
Next Steps
Although all of these results are promising, this first generation cryogenic-CMOS qubit controller is but one small step towards a truly scalable qubit control and measurement system. For instance, our controller is only able to address a single qubit, and it still requires several connections to room temperature. In addition, we still need to work hard to quantify the error rates for single qubit gates. As such, we are excited to reduce the energy required to control qubits and still maintain the delicate control required to perform high-quality qubit operations.

Acknowledgements
This work was carried out with the support of the Google Visiting Researcher Program while Prof. Bardin, an Associate Professor with the University of Massachusetts Amherst, was on sabbatical with the Google AI Quantum Team. This work would not have been possible without the many contributions of members of the Google AI Quantum team, especially Evan Jeffrey for his integration of the cryo-CMOS controller into the qubit calibration software, Ted White for his on-demand qubit calibrations and Trent Huang for his tireless design rules checks.

Source: Google AI Blog


5 reasons to love the new Chromecast



We launched our first Chromecast in 2013 with the aim to make it easy to get your favorite content right from your phone to your TV. With hundreds of compatible apps to cast from, people are tapping the Cast button more than ever. And since Chromecast, the Made by Google family of products has continued to grow, bringing the best of hardware, software, and AI together. So for this 5th year of Chromecast, we wanted to share the top 5 reasons we’re excited about our newest Chromecast:
  1. Fits right in. With a new design, Chromecast blends in with your decor and the rest of the Made by Google family.
  2. Stream hands-free. Chromecast and Google Home work seamlessly together. Just say what you want to watch from compatible services, like YouTube or Netflix, and control your TV just by asking. Try, “Hey Google, play Lost in Space from Netflix.” (You’ll need a Netflix subscription to get started.)
  3. Picture perfect at 60fps. Our newest Chromecast supports streaming in 1080p at 60 frames per second, giving you a more lifelike image. So when you’re watching the match, it will feel even more like you’re there.
  4. More than a screen, it’s a canvas. With Ambient Mode, you can personalize your TV with a constantly updating stream of the best and latest photos taken by you, your friends and your family from Google Photos. With new Live Albums from Google Photos, you can enjoy photos of people and pets you care about and skip blurry photos and duplicates -- all without lifting a finger. New photos will show up automatically on your TV -- no uploading hassles.
  5. And it has an MRP of just ₹3,499. So it’s the perfect gift this upcoming holiday season for the streamer in your life.
    The new Chromecast is available in Charcoal starting today exclusively from Flipkart, and comes with a one-year Sony LIV Premium subscription along with a six-month Gaana Plus subscription.
    So go ahead, #StreamOn!
    Posted by Jess Bonner,  Chromecast PM

    New ways to experience Made by Google products

    Today, we told you about what’s coming in our latest family of #MadebyGoogle products. But what's a line-up of shiny new products without a plethora of ways for the world to experience, try and buy them? As we continue to build products for everyone, we’re exploring helpful new ways to get our products to everyone.

    The Google Hardware Store pop-ups

    Starting on October 18, New Yorkers and Chicagoans can try out and buy our new products at a pop-up shop in each city—the only place you can shop Google products in a fully Google-made experiential space. Our pop ups will be open October 18 through December 31, so if you’re in Chicago (Bucktown at 1704 N. Damen) or NYC (SoHo at 131 Green Street), come visit us.

    The Google Store and Enjoy

    You can now pre-order and shop all of our products via the online Google Store, including the Pixel 3 and Pixel 3 XL (which work with all major carriers). And as of October 18, folks in the Bay Area can buy the new Pixel 3 or Pixel 3 XL and get it delivered in as little as three hours, expertly set up via the Enjoy service. You can also get the Pixel 2 XL, Pixelbook and Google Home Max via Enjoy delivery now. We’re bringing the Google Store to you!


    Google Store + Enjoy bring the expertise of the Google Store to you.

    b8ta

    Made by Google products are part of an interactive shopping experience in five b8ta stores across the country (Austin, Corte Madera, Houston, San Francisco, and Tysons Corner), and will be available in two new b8ta stores in Short Hills, NJ and Scottsdale, AZ opening later this year. As part of the unique in-store experience, customers can test out and shop Google’s Home products in interactive home-like vignettes. Visit a store and demo products with one of b8ta’s experts.

    Made by Google products are now part of a new interactive shopping experience in b8ta.

    goop

    goop is joining forces with Made by Google products to offer the Google Home smart speaker family across the U.S. in permanent goop Lab stores and goop GIFT pop-ups this holiday season. Abroad, customers can shop at the goop London pop-up which opened this past September. Keep an eye out for more information from goop + Made by Google later this month.

    And as for the future...

    Google Home Mini and Wing

    It’s a bird, it’s a plane. It’s Google Home Mini being delivered by drone! You read that right—along with Wing (an Alphabet company), we’re pushing the boundaries of conventional delivery. As a part of a small, localized test, Google Home Minis were recently dropped off at customers’ homes only 10 minutes after ordering. Although not a reality today, imagine the possibilities in years to come… 

    Google hardware. Designed to work better together.

    This year marks Google’s 20th anniversary—for two decades we’ve been working toward our mission to organize the world’s information and make it universally accessible and useful for everybody. Delivering information has always been in our DNA. It’s why we exist. From searching the world, to translating it, to getting a great photo of it, when we see an opportunity to help people, we’ll go the extra mile. We love working on really hard problems that make life easier for people, in big and small ways.

    There’s a clear line from the technology we were working on 20 years ago to the technology we’re developing today—and the big breakthroughs come at the intersection of AI, software and hardware, working together. This approach is what makes the Google hardware experience so unique, and it unlocks all kinds of helpful benefits. When we think about artificial intelligence in the context of consumer hardware, it isn’t artificial at all—it’s helping you get real things done, every day. A shorter route to work. A gorgeous vacation photo. A faster email response. 

    So today, we’re introducing our third-generation family of consumer hardware products, all made by Google:

    • For life on the go, we’re introducing the Pixel 3 and Pixel 3 XL—designed from the inside out to be the smartest, most helpful device in your life. It’s a phone that can answer itself, a camera that won’t miss a shot, and a helpful Assistant even while it’s charging.

    • For life at work and at play, we’re bringing the power and productivity of a desktop to a gorgeous tablet called Pixel Slate. This Chrome OS device is both a powerful workstation at the office, and a home theater you can hold in your hands.

    • And for life at home we designed Google Home Hub, which lets you hear and see the info you need, and manage your connected home from a single screen. With its radically helpful smart display, Google Home Hub lays the foundation for a truly thoughtful home.

    Please visit our updated online store to see the full details, pricing and availability.

    The new Google devices fit perfectly with the rest of our family of products, including Nest, which joined the Google hardware family at the beginning of this year. Together with Nest, we’re pursuing our shared vision of a thoughtful home that isn’t just smart, it’s also helpful and simple enough for everyone to set up and use. It's technology designed for the way you live.

    Ivy Ross + Hardware Design

    Our goal with these new products, as always, is to create something that serves a purpose in people’s lives—products that are so useful they make people wonder how they ever lived without them. The simple yet beautiful design of these new devices continues to bring the smarts of the technology to the forefront, while providing people with a bold piece of hardware.

    Our guiding principle

    Google's guiding principle is the same as it’s been for 20 years—to respect our users and put them first. We feel a deep responsibility to provide you with a helpful, personal Google experience, and that guides the work we do in three very specific ways:

    • First, we want to provide you with an experience that is unique to you. Just like Google is organizing the world’s information, the combination of AI, software and hardware can organize your information—and help out with the things you want to get done. The Google Assistant is the best expression of this, and it’s always available when, where, and however you need it.

    • Second, we’re committed to the security of our users. We need to offer simple, powerful ways to safeguard your devices. We’ve integrated Titan™ Security, the system we built for Google, into our new mobile devices. Titan™ Security protects your most sensitive on-device data by securing your lock screen and strengthening disk encryption.

    • Third, we want to make sure you’re in control of your digital wellbeing. From our research, 72 percent of our users are concerned about the amount of time people spend using tech. We take this very seriously and have developed new tools that make people’s lives easier and cut back on distractions.

    A few new things made by Google

    With these Made by Google devices, our goal is to provide radically helpful solutions. While it’s early in the journey, we’re taking an end-to-end approach to consumer technology that merges our most innovative AI with intuitive software and powerful hardware. Ultimately, we want to help you do more with your days while doing less with your tech—so you can focus on what matters most.

    Announcing Cirq: An Open Source Framework for NISQ Algorithms



    Over the past few years, quantum computing has experienced growth not only in the construction of quantum hardware, but also in the development of quantum algorithms. With the availability of Noisy Intermediate Scale Quantum (NISQ) computers (devices with ~50-100 qubits and high fidelity quantum gates), the development of algorithms to understand the power of these machines is of increasing importance. However, a common problem when designing a quantum algorithm on a NISQ processor is how to take full advantage of these limited quantum devices—spending resources on solving the hardest part of the problem rather than on overheads from poor mappings between the algorithm and hardware. Furthermore, some quantum processors have complex geometric constraints and other nuances, and ignoring these will either result in faulty quantum computation, or a computation that is modified and sub-optimal.*

    Today at the First International Workshop on Quantum Software and Quantum Machine Learning (QSML), the Google AI Quantum team announced the public alpha of Cirq, an open source framework for NISQ computers. Cirq is focused on near-term questions and helping researchers understand whether NISQ quantum computers are capable of solving computational problems of practical importance. Cirq is licensed under Apache 2, and is free to be modified or embedded in any commercial or open source package.
    Once installed, Cirq enables researchers to write quantum algorithms for specific quantum processors. Cirq gives users fine tuned control over quantum circuits, specifying gate behavior using native gates, placing these gates appropriately on the device, and scheduling the timing of these gates within the constraints of the quantum hardware. Data structures are optimized for writing and compiling these quantum circuits to allow users to get the most out of NISQ architectures. Cirq supports running these algorithms locally on a simulator, and is designed to easily integrate with future quantum hardware or larger simulators via the cloud.
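For a flavor of the workflow, here is a minimal sketch of a Cirq program that builds a small circuit on grid qubits and runs it on the bundled simulator; the particular gates and qubit layout are chosen only for illustration.

```python
# A minimal sketch of a Cirq program: build a small circuit on grid qubits and
# run it on the local simulator. The gates and qubit layout are illustrative.
import cirq

q0, q1 = cirq.GridQubit(0, 0), cirq.GridQubit(0, 1)

circuit = cirq.Circuit([
    cirq.X(q0) ** 0.5,               # square root of X on q0
    cirq.CZ(q0, q1),                 # entangle the two neighboring qubits
    cirq.measure(q0, q1, key="m"),   # measure both qubits
])
print(circuit)

result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key="m"))
```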
    We are also announcing the release of OpenFermion-Cirq, an example of a Cirq based application enabling near-term algorithms. OpenFermion is a platform for developing quantum algorithms for chemistry problems, and OpenFermion-Cirq is an open source library which compiles quantum simulation algorithms to Cirq. The new library uses the latest advances in building low depth quantum algorithms for quantum chemistry problems to enable users to go from the details of a chemical problem to highly optimized quantum circuits customized to run on particular hardware. For example, this library can be used to easily build quantum variational algorithms for simulating properties of molecules and complex materials.

    Quantum computing will require strong cross-industry and academic collaborations if it is going to realize its full potential. In building Cirq, we worked with early testers to gain feedback and insight into algorithm design for NISQ computers. Below are some examples of Cirq work resulting from these early adopters:
    To learn more about how Cirq is helping enable NISQ algorithms, please visit the links above where many of the adopters have provided example source code for their implementations.

    Today, the Google AI Quantum team is using Cirq to create circuits that run on Google’s Bristlecone processor. In the future, we plan to make this processor available in the cloud, and Cirq will be the interface in which users write programs for this processor. In the meantime, we hope Cirq will improve the productivity of NISQ algorithm developers and researchers everywhere. Please check out the GitHub repositories for Cirq and OpenFermion-Cirq — pull requests welcome!

    Acknowledgements
    We would like to thank Craig Gidney for leading the development of Cirq, Ryan Babbush and Kevin Sung for building OpenFermion-Cirq and a whole host of code contributors to both frameworks.


    * An analogous situation is how early classical programmers needed to run complex programs in very small memory spaces by paying careful attention to the lowest level details of the hardware.

    Source: Google AI Blog


    Google Wifi’s Network Check now tests multiple device connections

    Wi-Fi is a necessity for tons of connected devices in our homes. And when it isn’t working the way you expect, it can be a bit of a black box to troubleshoot. Google Wifi’s Network Check technology has always let you measure the speed of your internet connection and the quality of the network connection between your Google Wifi access points (if you have more than one). But what about that new smart TV in the bedroom that’s constantly buffering? Or your outdoor security camera with a flaky connection?


    Starting today, we’re rolling out a new feature to Google Wifi that lets you measure how each individual device is performing on your Wi-Fi network. When you notice a connectivity slowdown, knowing that Wi-Fi coverage is poor in a particular area of your home can help you pinpoint the exact bottleneck. Then, you’ll know to move your Google Wifi point closer to that device, or even move the device itself for a stronger connection.

    Network Check update

    In the past month alone, we saw an average of 18 connected devices on each Google Wifi network, globally. With so many devices on your network, we want to make sure you have a way to know each device has the best connection possible, and that your home Wi-Fi is doing its job.


    This update to our Network Check technology will be available in the coming weeks to all Google Wifi users around the world—just open the Google Wifi app to get started. Dead zones be gone!