Today, we told you about what’s coming in our latest family of #MadebyGoogle products. But what’s a line-up of shiny new products without plenty of ways for the world to experience, try and buy them? As we continue to build products for everyone, we’re exploring helpful new ways to get them into more people’s hands.
The Google Hardware Store pop-ups
Starting on October 18, New Yorkers and Chicagoans can try out and buy our new products at a pop-up shop in each city—the only place you can shop Google products in a fully Google-made experiential space. Our pop-ups will be open October 18 through December 31, so if you’re in Chicago (Bucktown at 1704 N. Damen) or NYC (SoHo at 131 Greene Street), come visit us.
The Google Store and Enjoy
You can now pre-order and shop all of our products via the online Google Store, including the Pixel 3 and Pixel 3 XL (which work with all major carriers). And as of October 18, folks in the Bay Area can buy the new Pixel 3 or Pixel 3 XL and have it delivered in as little as three hours and expertly set up via the Enjoy service. You can also get the Pixel 2 XL, Pixelbook and Google Home Max via Enjoy delivery now. We’re bringing the Google Store to you!
b8ta
Made by Google products are part of an interactive shopping experience in five b8ta stores across the country—Austin, Corte Madera, Houston, San Francisco, and Tysons Corner—and will be available in two new b8ta stores opening later this year in Short Hills, NJ and Scottsdale, AZ. As part of the unique in-store experience, customers can test out and shop Google’s home products in interactive, home-like vignettes. Visit a store and demo products with one of b8ta’s experts.
goop
goop is joining forces with Made by Google to offer the Google Home smart speaker family across the U.S. in permanent goop Lab stores and goop GIFT pop-ups this holiday season. Abroad, customers can shop at the goop London pop-up, which opened this past September. Keep an eye out for more information from goop + Made by Google later this month.
And as for the future...
It’s a bird, it’s a plane. It’s Google Home Mini being delivered by drone! You read that right—along with Wing (an Alphabet company), we’re pushing the boundaries of conventional delivery. As a part of a small, localized test, Google Home Minis were recently dropped off at customers’ homes only 10 minutes after ordering. Although not a reality today, imagine the possibilities in years to come…
This year marks Google’s 20th anniversary—for two decades we’ve been working toward our mission to organize the world’s information and make it universally accessible and useful for everybody. Delivering information has always been in our DNA. It’s why we exist. From searching the world, to translating it, to getting a great photo of it, when we see an opportunity to help people, we’ll go the extra mile. We love working on really hard problems that make life easier for people, in big and small ways.
There’s a clear line from the technology we were working on 20 years ago to the technology we’re developing today—and the big breakthroughs come at the intersection of AI, software and hardware, working together. This approach is what makes the Google hardware experience so unique, and it unlocks all kinds of helpful benefits. When we think about artificial intelligence in the context of consumer hardware, it isn’t artificial at all—it’s helping you get real things done, every day. A shorter route to work. A gorgeous vacation photo. A faster email response.
So today, we’re introducing our third-generation family of consumer hardware products, all made by Google:
For life on the go, we’re introducing the Pixel 3 and Pixel 3 XL—designed from the inside out to be the smartest, most helpful device in your life. It’s a phone that can answer itself, a camera that won’t miss a shot, and a helpful Assistant even while it’s charging.
For life at work and at play, we’re bringing the power and productivity of a desktop to a gorgeous tablet called Pixel Slate. This Chrome OS device is both a powerful workstation at the office, and a home theater you can hold in your hands.
And for life at home we designed Google Home Hub, which lets you hear and see the info you need, and manage your connected home from a single screen. With its radically helpful smart display, Google Home Hub lays the foundation for a truly thoughtful home.
The new Google devices fit perfectly with the rest of our family of products, including Nest, which joined the Google hardware family at the beginning of this year. Together with Nest, we’re pursuing our shared vision of a thoughtful home that isn’t just smart, it’s also helpful and simple enough for everyone to set up and use. It's technology designed for the way you live.
Our goal with these new products, as always, is to create something that serves a purpose in people’s lives—products that are so useful they make people wonder how they ever lived without them. The simple yet beautiful design of these new devices continues to bring the smarts of the technology to the forefront, while providing people with a bold piece of hardware.
Our guiding principle
Google's guiding principle is the same as it’s been for 20 years—to respect our users and put them first. We feel a deep responsibility to provide you with a helpful, personal Google experience, and that guides the work we do in three very specific ways:
First, we want to provide you with an experience that is unique to you. Just like Google is organizing the world’s information, the combination of AI, software and hardware can organize your information—and help out with the things you want to get done. The Google Assistant is the best expression of this, and it’s always available when, where, and however you need it.
Second, we’re committed to the security of our users. We need to offer simple, powerful ways to safeguard your devices. We’ve integrated Titan™ Security, the system we built for Google, into our new mobile devices. Titan™ Security protects your most sensitive on-device data by securing your lock screen and strengthening disk encryption.
Third, we want to make sure you’re in control of your digital wellbeing. From our research, 72 percent of our users are concerned about the amount of time people spend using tech. We take this very seriously and have developed new tools that make people’s lives easier and cut back on distractions.
With these Made by Google devices, our goal is to provide radically helpful solutions. While it’s early in the journey, we’re taking an end-to-end approach to consumer technology that merges our most innovative AI with intuitive software and powerful hardware. Ultimately, we want to help you do more with your days while doing less with your tech—so you can focus on what matters most.
Over the past few years, quantum computing has seen growth not only in the construction of quantum hardware, but also in the development of quantum algorithms. With the availability of Noisy Intermediate Scale Quantum (NISQ) computers (devices with roughly 50–100 qubits and high-fidelity quantum gates), developing algorithms to understand the power of these machines is of increasing importance. However, a common problem when designing a quantum algorithm for a NISQ processor is how to take full advantage of these limited devices—spending resources on solving the hardest part of the problem rather than on overheads from poor mappings between the algorithm and the hardware. Furthermore, some quantum processors have complex geometric constraints and other nuances; ignoring these will result in either a faulty quantum computation or a computation that is modified and sub-optimal.
Today at the First International Workshop on Quantum Software and Quantum Machine Learning (QSML), the Google AI Quantum team announced the public alpha of Cirq, an open source framework for NISQ computers. Cirq is focused on near-term questions and helping researchers understand whether NISQ quantum computers are capable of solving computational problems of practical importance. Cirq is licensed under Apache 2, and is free to be modified or embedded in any commercial or open source package.
Quantum computing will require strong cross-industry and academic collaborations if it is going to realize its full potential. In building Cirq, we worked with early testers to gain feedback and insight into algorithm design for NISQ computers. Below are some examples of Cirq work resulting from these early adopters:
- Zapata Computing: simulation of a quantum autoencoder (example code, video tutorial)
- QC Ware: QAOA implementation and integration into QC Ware’s AQUA platform (example code, video tutorial)
- Quantum Benchmark: integration of True-Q software tools for assessing and extending hardware capabilities (video tutorial)
- Heisenberg Quantum Simulations: simulating the Anderson Model
- Cambridge Quantum Computing: integration of proprietary quantum compiler t|ket> (video tutorial)
- NASA: architecture-aware compiler based on temporal-planning for QAOA (slides) and simulator of quantum computers (slides)
Today, the Google AI Quantum team is using Cirq to create circuits that run on Google’s Bristlecone processor. In the future, we plan to make this processor available in the cloud, and Cirq will be the interface through which users write programs for it. In the meantime, we hope Cirq will improve the productivity of NISQ algorithm developers and researchers everywhere. Please check out the GitHub repositories for Cirq and OpenFermion-Cirq — pull requests welcome!
We would like to thank Craig Gidney for leading the development of Cirq, Ryan Babbush and Kevin Sung for building OpenFermion-Cirq and a whole host of code contributors to both frameworks.
Source: Google AI Blog
Wi-Fi is a necessity for tons of connected devices in our homes. And when it isn’t working the way you expect, it can be a bit of a black box to troubleshoot. Google Wifi’s Network Check technology has always let you measure the speed of your internet connection and the quality of the network connection between your Google Wifi access points (if you have more than one). But what about that new smart TV in the bedroom that’s constantly buffering? Or your outdoor security camera with a flaky connection?
Starting today, we’re rolling out a new feature to Google Wifi that lets you measure how each individual device is performing on your Wi-Fi network. Knowing Wi-Fi coverage is poor in an area of your home can help you pinpoint the exact bottleneck when you notice a connectivity slowdown. Then, you’ll know to move your Google Wifi point closer to that device or even move the device itself for a stronger connection.
In the past month alone, we saw an average of 18 connected devices on each Google Wifi network, globally. With so many devices on your network, we want to make sure you have a way to know each device has the best connection possible, and that your home Wi-Fi is doing its job.
This update to our Network Check technology will be available in the coming weeks to all Google Wifi users around the world—just open the Google Wifi app to get started. Dead zones be gone!
Google Pixel Buds let you do a lot with just a quick touch. When you use Pixel Buds with your Pixel or other Android device with the Assistant, simply touch and hold the right earbud to ask for your favorite playlist, make a call, send a message or get walking directions to dinner. And, it allows you to control your audio too—just swipe forward or backward to control volume and tap to play or pause your music.
We’re adding three highly-requested features with the latest update that is beginning to roll out today. It’s as easy as 3, 2, 1.
Triple tap: On and off with touch. Pixel Buds can now be manually turned on or off by triple-tapping the right earbud.
Double tap: Next track. Until now, double-tapping let you hear notifications as they arrived on your phone. Now you can set double-tap to skip to the next track instead: go to the Pixel Buds’ settings within the Google Assistant app on your phone and enable the option. You can still use a Google Assistant voice command to skip tracks, even if you assign double-tap to the next-track feature.
One easy switch: Pairing devices made easy. To switch your Pixel Buds connection between your phone and computer (or any device you’ve previously paired), select your Pixel Buds from the Bluetooth™ menu of the desired device. Your Pixel Buds will disconnect from the device you were using and connect to the new one.
These updates are starting to roll out today and will be available to everyone by early next week. Go to g.co/pixelbuds to learn more.
Two months ago, we launched Google Clips, a lightweight, hands-free camera that captures life’s beautiful and spontaneous moments with the help of machine learning and motion detection. Since then we’ve seen some great clips from moms, dads, and pet owners who have captured all kinds of candid moments.
When it comes to kids and pets, you never know which moments you’ll want to capture. It’s not just about them smiling, looking at the camera, or posing on request (near-impossible with kids and pets who don’t want to sit still!). You may want to get your daughter jumping up and down in excitement, or your son kissing your cat. It’s all about the little moments and emotions that you can't stage or coordinate ahead of time.
To help capture these moments, we’re adding improved functionality to Clips so that it’s better at recognizing hugs, kisses, jumps and dance moves. All you need to do is find the best vantage point as you go about your day, and turn Clips on.
We’ve also heard from families using Clips that they want to be able to connect the device with more than one phone. So this month we’re adding family pairing, which lets multiple family members connect their phones to the Clips device to view and share content.
Clips’s improved intelligence can help you capture more of the candid and fleeting moments that happen in between those posed frames we are all so familiar with.
If you want to learn more about how Clips knows what makes a moment worth capturing, you can check out all the details on the Research blog.
Look for our May update this week (just in time for Mother’s Day!) on your Clips app and try out the improved functionality. For those of you who are looking to try it out, you can get $50 off in our Mother’s Day promotion.
To me, photography is the simultaneous recognition, in a fraction of a second, of the significance of an event as well as of a precise organization of forms which give that event its proper expression.
— Henri Cartier-Bresson
The last few years have witnessed a Cambrian-like explosion in AI, with deep learning methods enabling computer vision algorithms to recognize many of the elements of a good photograph: people, smiles, pets, sunsets, famous landmarks and more. But, despite these recent advancements, automatic photography remains a very challenging problem. Can a camera capture a great moment automatically?
Recently, we released Google Clips, a new, hands-free camera that automatically captures interesting moments in your life. We designed Google Clips around three important principles:
- We wanted all computations to be performed on-device. In addition to extending battery life and reducing latency, on-device processing means that none of your clips leave the device unless you decide to save or share them, which is a key privacy control.
- We wanted the device to capture short videos, rather than single photographs. Moments with motion can be more poignant and true-to-memory, and it is often easier to shoot a video around a compelling moment than it is to capture a perfect, single instant in time.
- We wanted to focus on capturing candid moments of people and pets, rather than the more abstract and subjective problem of capturing artistic images. That is, we did not attempt to teach Clips to think about composition, color balance, light, etc.; instead, Clips focuses on selecting ranges of time containing people and animals doing interesting activities.
How could we train an algorithm to recognize interesting moments? As with most machine learning problems, we started with a dataset. We created a dataset of thousands of videos in diverse scenarios where we imagined Clips being used. We also made sure our dataset represented a wide range of ethnicities, genders, and ages. We then hired expert photographers and video editors to pore over this footage to select the best short video segments. These early curations gave us examples for our algorithms to emulate. However, it is challenging to train an algorithm solely from the subjective selection of the curators — one needs a smooth gradient of labels to teach an algorithm to recognize the quality of content, ranging from "perfect" to "terrible."
To address this problem, we took a second data-collection approach, with the goal of creating a continuous quality score across the length of a video. We split each video into short segments (similar to the content Clips captures), randomly selected pairs of segments, and asked human raters to select the one they prefer.
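One standard way to turn pairwise preferences like these into a continuous score is a Bradley-Terry model, which assigns each segment a latent quality so that the probability a rater prefers segment i over segment j is a logistic function of the score difference. The sketch below (an illustration of the general technique, not necessarily the exact method used for Clips) fits such scores with plain gradient ascent in NumPy:

```python
import numpy as np

def fit_bradley_terry(pairs, n_items, lr=0.1, steps=500):
    """Fit per-segment quality scores s so that
    P(i preferred over j) = sigmoid(s[i] - s[j]).
    `pairs` is a list of (winner, loser) index tuples from human raters."""
    s = np.zeros(n_items)
    for _ in range(steps):
        grad = np.zeros(n_items)
        for w, l in pairs:
            # Gradient of the log-likelihood: 1 - sigmoid(s[w] - s[l]).
            p = 1.0 / (1.0 + np.exp(s[w] - s[l]))
            grad[w] += p
            grad[l] -= p
        s += lr * grad
        s -= s.mean()  # scores are only identified up to an additive constant
    return s

# Toy example: raters consistently prefer segment 0 over 1, and 1 over 2.
scores = fit_bradley_terry([(0, 1), (0, 1), (1, 2), (1, 2), (0, 2)], n_items=3)
```

The fitted scores recover the ordering implied by the comparisons, giving exactly the smooth "perfect to terrible" gradient of labels the curation data alone could not provide.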
Given this quality score training data, our next step was to train a neural network model to estimate the quality of any photograph captured by the device. We started with the basic assumption that knowing what’s in the photograph (e.g., people, dogs, trees, etc.) will help determine “interestingness”. If this assumption is correct, we could learn a function that uses the recognized content of the photograph to predict its quality score derived above from human comparisons.
To identify content labels in our training data, we leveraged the same Google machine learning technology that powers Google image search and Google Photos, which can recognize over 27,000 different labels describing objects, concepts, and actions. We certainly didn’t need all these labels, nor could we compute them all on device, so our expert photographers selected the few hundred labels they felt were most relevant to predicting the “interestingness” of a photograph. We also added the labels most highly correlated with the rater-derived quality scores.
Once we had this subset of labels, we then needed to design a compact, efficient model that could predict them for any given image, on-device, within strict power and thermal limits. This presented a challenge, as the deep learning techniques behind computer vision typically require strong desktop GPUs, and algorithms adapted to run on mobile devices lag far behind state-of-the-art techniques on desktop or cloud. To train this on-device model, we first took a large set of photographs and again used Google’s powerful, server-based recognition models to predict label confidence for each of the “interesting” labels described above. We then trained a MobileNet Image Content Model (ICM) to mimic the predictions of the server-based model. This compact model is capable of recognizing the most interesting elements of photographs, while ignoring non-relevant content.
The final step was to predict a single quality score for an input photograph from its content predicted by the ICM, using the 50M pairwise comparisons as training data. This score is computed with a piecewise linear regression model that combines the output of the ICM into a frame quality score. This frame quality score is averaged across the video segment to form a moment score. Given a pairwise comparison, our model should compute a moment score that is higher for the video segment preferred by humans. The model is trained so that its predictions match the human pairwise comparisons as well as possible.
While this data-driven score does a great job of identifying interesting (and non-interesting) moments, we also added some bonuses to our overall quality score for phenomena that we know we want Clips to capture, including faces (especially recurring and thus “familiar” ones), smiles, and pets. In our most recent release, we added bonuses for certain activities that customers particularly want to capture, such as hugs, kisses, jumping, and dancing. Recognizing these activities required extensions to the ICM model.
Given this powerful model for predicting the “interestingness” of a scene, the Clips camera can decide which moments to capture in real-time. Its shot control algorithms follow three main principles:
- Respect Power & Thermals: We want the Clips battery to last roughly three hours, and we don’t want the device to overheat — the device can’t run at full throttle all the time. Clips spends much of its time in a low-power mode that captures one frame per second. If the quality of that frame exceeds a threshold set by how much Clips has recently shot, it moves into a high-power mode, capturing at 15 fps. Clips then saves a clip at the first quality peak encountered.
- Avoid Redundancy: We don’t want Clips to capture all of its moments at once, and ignore the rest of a session. Our algorithms therefore cluster moments into visually similar groups, and limit the number of clips in each cluster.
- The Benefit of Hindsight: It’s much easier to determine which clips are the best when you can examine the totality of clips captured. Clips therefore captures more moments than it intends to show to the user. When clips are ready to be transferred to the phone, the Clips device takes a second look at what it has shot, and only transfers the best and least redundant content.
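The first of those principles, the low-power/high-power capture loop, can be sketched as a tiny state machine. Everything here (the threshold, the peak detection, the frame rates) is an illustrative simplification of the behavior described above, not the actual Clips firmware:

```python
LOW_POWER_FPS = 1     # illustrative: low-power mode samples one frame per second
HIGH_POWER_FPS = 15   # illustrative: high-power mode captures at 15 fps

def capture_session(frame_scores, threshold):
    """Toy shot-control loop: stay in low-power mode until a frame's quality
    score exceeds `threshold`, then switch to high-power mode and save a clip
    at the first quality peak (the point where scores stop increasing)."""
    saved = []
    mode = 'low'
    prev = None
    for t, score in enumerate(frame_scores):
        if mode == 'low':
            if score > threshold:
                mode = 'high'   # promising frame: start capturing at 15 fps
                prev = score
        else:
            if prev is not None and score < prev:
                saved.append(t - 1)  # previous frame was a quality peak
                mode = 'low'         # drop back to low power after saving
                prev = None
            else:
                prev = score
    return saved
```

For a score sequence that rises past the threshold and then falls, the loop saves exactly one clip, anchored at the peak frame. In the real device the threshold itself adapts based on how much Clips has recently shot.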
In addition to making sure our video dataset represented a diverse population, we also constructed several other tests to assess the fairness of our algorithms. We created controlled datasets by sampling subjects from different genders and skin tones in a balanced manner, while keeping variables like content type, duration, and environmental conditions constant. We then used this dataset to test that our algorithms had similar performance when applied to different groups. To help detect any regressions in fairness that might occur as we improved our moment quality models, we added fairness tests to our automated system. Any change to our software was run across this battery of tests, and was required to pass. It is important to note that this methodology can’t guarantee fairness, as we can’t test for every possible scenario and outcome. However, we believe that these steps are an important part of our long-term work to achieve fairness in ML algorithms.
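An automated fairness regression test of the kind described above can be as simple as checking that per-group performance on a balanced dataset stays within a tolerance. This sketch is hypothetical: the group labels, metric, and tolerance are all illustrative assumptions, not the actual test battery.

```python
def fairness_check(per_group_scores, tolerance=0.05):
    """Regression-test sketch: require that mean model performance for every
    group on a balanced evaluation set stays within `tolerance` of every
    other group. Returns (passed, worst_gap)."""
    means = {g: sum(v) / len(v) for g, v in per_group_scores.items()}
    worst_gap = max(means.values()) - min(means.values())
    return worst_gap <= tolerance, worst_gap

# Hypothetical per-group moment-quality accuracies from a balanced test set.
ok, gap = fairness_check({'group_a': [0.90, 0.92], 'group_b': [0.91, 0.93]})
bad, _ = fairness_check({'group_a': [0.90], 'group_b': [0.70]})
```

Wired into continuous integration, a check like this makes any model change that widens the gap between groups fail the build, which is the "detect regressions in fairness" behavior the text describes.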
Most machine learning algorithms are designed to estimate objective qualities – a photo contains a cat, or it doesn’t. In our case, we aim to capture a more elusive and subjective quality – whether a personal photograph is interesting, or not. We therefore combine the objective, semantic content of photographs with subjective human preferences to build the AI behind Google Clips. Also, Clips is designed to work alongside a person, rather than autonomously; to get good results, a person still needs to be conscious of framing, and make sure the camera is pointed at interesting content. We’re happy with how well Google Clips performs, and are excited to continue to improve our algorithms to capture that “perfect” moment!
The algorithms described here were conceived and implemented by a large group of Google engineers, research scientists, and others. Figures were made by Lior Shapira. Thanks to Lior and Juston Payne for video content.
Source: Google AI Blog
The apertus° AXIOM project is bringing the world’s first open hardware/free software digital motion picture production camera to life. The project has a rich history, exercises a steadfast adherence to the open source ethos, and all aspects of development have always revolved around supporting and utilising free technologies. The challenge of building a sophisticated digital cinema camera was perfect for Google Summer of Code 2017. But let’s start at the beginning: why did the team behind the project embark on their journey?
Modern Cinematography
For over a century, film was dominated by analog cameras and celluloid, but in the late 2000s things changed radically with the adoption of digital projection in cinemas. It was a natural next step, then, for filmmakers to shoot and produce films digitally. Certain applications in science, large-format photography and fine arts still hold onto 35mm film processing, but the reduction in costs and improved workflows associated with digital image capture have revolutionised how we create and consume visual content.
The DSLR revolution
|Photo by Matthew Pearce, licensed CC SA 2.0.|
|Photo by Dave Dugdale licensed CC BY-SA 2.0.|
Starting the revolution for real
The apertus° project joined forces with the Magic Lantern team to lay the foundation for a totally independent, open hardware, free software, digital cinema camera. They ran a successful crowdfunding campaign for initial development, and completed hardware development of the first developer kits in 2016. Unlike traditional cameras, the AXIOM is designed to be completely modular so it can continuously evolve, preventing it from ever becoming obsolete. How the camera evolves is determined by its user community: its design files and source code are freely available, and users are encouraged to duplicate, modify and redistribute anything and everything related to the camera.
While the camera is primarily for use in motion picture production, the AXIOM is useful in many other applications. Individuals in science, astronomy, medicine, aerial mapping, industrial automation, and those who record events or talks at conferences have expressed interest in the camera. A modular and open source device for digital imaging allows users to build a system that meets their unique requirements. One such company, Mavrx Inc., which uses aerial imagery to provide actionable insight for the agriculture industry, chose the camera because it enabled them not only to process data more efficiently than comparable cameras, but also to re-configure its form factor so that it could be installed alongside existing equipment.
Google Summer of Code 2017
Continuing their journey, apertus° participated in Google Summer of Code for the first time in 2017. They received about 30 applications from interested students, from which they needed to select three. Projects ranged from field programmable gate array (FPGA)-centered video applications to creating Linux kernel drivers for specific camera hardware. Similarly, TimVideos.us, an open hardware project for live event streaming and conference recording, is working on FPGA projects around video interfaces and processing.
After some preliminary work, the students came to grips with the camera’s operating processes and all three dove in enthusiastically. One student failed the first evaluation and another failed the second, but one student successfully completed their work.
That student, Vlad Niculescu, worked on defining control loops for a voltage controller using VHSIC Hardware Description Language (VHDL) for a potential future AXIOM Beta Power Board, an FPGA-driven smart switching regulator for increasing the power efficiency and improving flexibility around voltage regulation.
|Left: The printed circuit board (PCB) for testing the switching regulator FPGA logic. Right: After final improvements, the ripple in the voltages was reduced to around 30 mV at a 2 V target voltage.|
“The knowledge I acquired during my work with this project and apertus° was very satisfying. Besides the electrical skills gained I also managed to obtain other, important universal skills. One of the things I learned was that the key to solving complex problems can often be found by dividing them into small blocks so that the greater whole can be easily observed by others. Writing better code and managing the stages of building a complex project have become lessons that will no doubt become valuable in the future. I will always be grateful to my mentor as he had the patience to explain everything carefully and teach me new things step by step, and also to apertus° and Google’s Summer of Code program, without which I may not have gained the experience of working on a project like this one.”
We are grateful for Vlad’s work and congratulate him for successfully completing the program. If you find open hardware and video production interesting, we encourage you to reach out and join the community–both apertus° and TimVideos.us are back for Google Summer of Code 2018.
By Sebastian Pichelhofer, apertus°, and Tim 'mithro' Ansell, TimVideos.us