Bose speakers get smarter with the Google Assistant

With help from the Google Assistant, you can customize your entertainment at home with just your voice: ask the Assistant to play your favorite part of a song, pause a favorite show on your Chromecast-enabled TV to grab some snacks or dim the lights before the movie starts. And when you have great hardware that integrates with the Assistant, there's even more you can do.

Starting today, Bose is bringing the Google Assistant to its line of smart speakers and soundbars. This includes the Bose Home Speaker 500, Bose Soundbar 500 and 700, and an all-new, compact smart speaker coming later this summer, the Bose Home Speaker 300.

With the Google Assistant built in, you can play music, find answers on Google Search, manage everyday tasks and control smart devices around your home—just by saying “Hey Google.” If you’re using the Assistant for the first time on your Bose device, here are a few tips to get started: 

  • Enjoy entertainment: Ask the Google Assistant to play music and radio from your speaker, or stream videos to your Chromecast-enabled TV with a simple voice command to your Bose smart speaker. Later this summer, you’ll be able to play news and podcasts, too. 
  • Get answers: Ask about sports, weather, finance, calculations and translations.
  • Control compatible smart home devices: Check that the lights are turned off when you leave home and adjust the thermostat when you return. The Assistant works with over 3,500 home automation brands and more than 30,000 devices.
  • Plan your day: With your permission, get help with things like your flight information or your commute to work, and check on the latest weather and traffic in your area.
  • Manage tasks: With your permission, your Assistant can add items to your shopping list and help you stock up on essentials. Set alarms and timers hands-free.

How to pick the Assistant on your Bose speaker or soundbar 

If you already own one of these Bose smart speakers or soundbars, it’s easy to set up the Assistant. Your speaker or soundbar will automatically receive a software update that adds the Google Assistant as a voice assistant option. Go to “Voice Settings” for the device in the Bose Music app, select the Google Assistant and follow the guided setup process.

And if you’re purchasing a Bose smart speaker for the first time, you’ll be able to select the Assistant right at setup.

Through our collaboration with Bose, we hope you enjoy your home audio with the helpfulness of the Google Assistant. 


OpenTelemetry: The Merger of OpenCensus and OpenTracing

We’ve talked about OpenCensus a lot over the past few years, from the project’s initial announcement and its roots at Google, to partners (Microsoft, Dynatrace) joining the project, to the new functionality that we’re continually adding. The project has grown beyond our expectations and now sports a mature ecosystem, with Google, Microsoft, Omnition, Postmates, and Dynatrace making major investments and a broad base of community contributors.

We recently announced that OpenCensus and OpenTracing are merging into a single project, now called OpenTelemetry, which brings together the best of both projects and has a frictionless migration experience. We’ve made a lot of progress so far: we’ve established a governance committee, a Java prototype API + implementation, workgroups for each language, and an aggressive implementation schedule.

Today we’re highlighting the combined project in the KubeCon keynote and announcing that OpenTelemetry is now officially part of the Cloud Native Computing Foundation! Full details are available in the CNCF’s official blog post, which we’ve copied below:

A Brief History of OpenTelemetry (So Far)

After many months of planning, discussion, prototyping, more discussion, and more planning, OpenTracing and OpenCensus are merging to form OpenTelemetry, which is now a CNCF sandbox project. The seed governance committee is composed of representatives from Google, Lightstep, Microsoft, and Uber, and more organizations are getting involved every day.

And we couldn't be happier about it – here’s why.

Observability, Outputs, and High-Quality Telemetry

Observability is a fashionable word with some admirably nerdy and academic origins. In control theory, “observability” measures how well we can understand the internals of a given system using only its external outputs. If you’ve ever deployed or operated a modern, microservice-based software application, you have no doubt struggled to understand its performance and behavior, and that’s because those “outputs” are usually meager at best. We can’t understand a complex system if it’s a black box. And the only way to light up those black boxes is with high-quality telemetry: distributed traces, metrics, logs, and more.

So how can we get our hands – and our tools – on precise, low-overhead telemetry from the entirety of a modern software stack? One way would be to carefully instrument every microservice, piece by piece and layer by layer. That would technically work, but it’s a complete non-starter – we’d spend as much time on the measurement as we would on the software itself. We need telemetry as a built-in feature of our services.

The OpenTelemetry project is designed to make this vision a reality for our industry, but before we describe it in more detail, we should first cover the history and context around OpenTracing and OpenCensus.

OpenTracing and OpenCensus

In practice, there are several flavors (or “verticals” in the diagram) of telemetry data, and then several integration points (or “layers” in the diagram) available for each. Broadly, the cloud-native telemetry landscape is dominated by distributed traces, timeseries metrics, and logs; and end-users typically integrate with a thin instrumentation API or via straightforward structured data formats that describe those traces, metrics, or logs.
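To make that “thin instrumentation API” layer concrete, here’s a minimal, purely illustrative Java sketch of what application-level tracing instrumentation tends to look like. The Tracer and Span interfaces below are hypothetical stand-ins that mirror the general shape of OpenTracing-style APIs; they are not the actual OpenTelemetry API, which is still being prototyped.

// Illustrative only: a hypothetical thin tracing facade, not the real OpenTelemetry API.
interface Span extends AutoCloseable {
    void setAttribute(String key, String value);
    @Override void close();                  // ends the span and records its duration
}

interface Tracer {
    Span startSpan(String operationName);    // starts timing a logical operation
}

class CheckoutService {
    private final Tracer tracer;

    CheckoutService(Tracer tracer) {
        this.tracer = tracer;
    }

    void placeOrder(String orderId) {
        // Application code only touches the thin API; exporters and backends are wired up elsewhere.
        try (Span span = tracer.startSpan("placeOrder")) {
            span.setAttribute("order.id", orderId);
            // ... business logic ...
        }
    }
}

The point of standardizing this thin layer is that application code doesn’t have to change when the exporter or backend behind it does.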



For several years now, there has been a well-recognized need for industry-wide collaboration in order to amortize the shared cost of software instrumentation. OpenTracing and OpenCensus have led the way in that effort, and while each project made different architectural choices, the biggest problem with either project has been the fact that there were two of them – and, further, that the two weren’t working together or striving for mutual compatibility.

Having two similar-yet-not-identical projects out in the world created confusion and uncertainty for developers, and that made it harder for both efforts to realize their shared mission: built-in, high-quality telemetry for all.

Getting to One Project

If there’s a single thing to understand about OpenTelemetry, it’s that the leadership from OpenTracing and OpenCensus are co-committed to migrating their respective communities to this single and unified initiative. Although all of us have numerous ideas about how we could boil the ocean and start from scratch, we are resisting those impulses and focusing instead on preparing our communities for a successful transition; our priorities for the merger are clear:
  • Straightforward backwards compatibility with both OpenTracing and OpenCensus (via software bridges)
  • Minimizing the time during which OpenTelemetry, OpenTracing, and OpenCensus are co-developed: we plan to put OpenTracing and OpenCensus into “read-only mode” before the end of 2019.
  • And, again, simplifying and standardizing the telemetry solutions available to developers.
In many ways, it’s most accurate to think of OpenTelemetry as the next major version of both OpenTracing and OpenCensus. Like any version upgrade, we will try to make it easy for both new and existing end-users, but we recognize that the main benefit to the ecosystem is the consolidation itself – not some specific and shiny new feature – and we are prioritizing our own efforts accordingly.

How you can help

OpenTelemetry’s timeline is an aggressive one. While we have many open-source and vendor-licensed observability solutions providing guidance, we will always want as many end-users involved as possible. The single most valuable thing any end-user can do is also one of the easiest: check out the actual work we’re doing and provide feedback via GitHub, Gitter, email, or whatever feels easiest.

Of course we also welcome code contributions to OpenTelemetry itself, code contributions that add OpenTelemetry support to existing software projects, documentation, blog posts, and the rest of it. If you’re interested, you can sign up to join the integration effort by filling in this form.

By Ben Sigelman, co-creator of OpenTracing and member of the OpenTelemetry governing committee, and Morgan McLean, Product Manager for OpenCensus at Google since the project’s inception

Automatically provision users with three additional apps

What’s changing 

We’re adding auto-provisioning support for three new applications:
  • Hootsuite
  • Huddle
  • OfficeSpace

Who’s impacted 

Admins only

Why you’d use it 

When auto-provisioning is enabled for a supported third-party application, any users created, modified, or deleted in G Suite are automatically added, edited, or deleted in the third-party application as well. This feature is highly popular with admins, as it removes the overhead of managing users across multiple third-party SaaS applications.

How to get started 

  • Admins: For more information on how to set up auto-provisioning, check out the Help Center.
  • End users: No action needed.

Helpful links 

Help Center: Automated user provisioning 
Help Center: Using SAML to set up federated SSO 

Availability 

Rollout details 

G Suite editions 
  • G Suite Education, Business, and Enterprise customers can enable auto-provisioning for all supported applications 
  • G Suite Basic, Government, and Nonprofit customers can enable auto-provisioning for up to three applications 

On/off by default? 
This feature will be OFF by default and can be enabled at the OU level.

Stay up to date with G Suite launches

Wide Color Photos Are Coming to Android: Things You Need to Know to be Prepared

Posted by Peiyong Lin, Software Engineer

Android is now at the point where the sRGB color gamut with 8 bits per color channel is not enough to take advantage of the display and camera technology. At Android we have been working to make wide color photography happen end to end, i.e. more bits and bigger gamuts. This means that, eventually, users will be able to capture the richness of a scene, share wide color pictures with friends and view wide color pictures on their phones. And now with Android Q, it's getting really close to reality: wide color photography is coming to Android. So it's very important for applications to be wide color gamut ready. This article will show how you can test your application to see whether it's wide color gamut ready and wide color gamut capable, and the steps you need to take to be ready for wide color gamut photography.

But before we dive in, why wide color photography? Display panels and camera sensors on mobile are getting better every year. More and more newly released phones ship with calibrated display panels, some of which are wide color gamut capable. Modern camera sensors can capture scenes with a wider range of color outside of sRGB and thus produce wide color gamut pictures. When these two come together, they create an end-to-end photography experience with the more vibrant colors of the real world.

At a technical level, this means there will be pictures coming to your application with an ICC profile that is not sRGB but some other wider color gamut: Display P3, Adobe RGB, etc. For consumers, this means their photos will look more realistic.

orange sunset (Display P3)

orange sunset (sRGB)

Colorful umbrellas (Display P3)

Colorful umbrellas (sRGB)

Above are the Display P3 version and the sRGB version, respectively, of the same scenes. If you are reading this article on a calibrated, wide color gamut capable display, you will notice the significant difference between them.

Color Tests

There are two kinds of tests you can perform to find out whether your application is prepared: what we call the color correctness test, and the wide color test.

Color Correctness test: Is your application wide color gamut ready?

A wide color gamut ready application manages color proactively: when given images, it always checks the color space and converts based on its ability to show wide color gamut. Even if the application can't handle wide color gamut content, it can still show the image correctly within the sRGB gamut, without color distortion.

Below is a color-correct rendering of an image with a Display P3 ICC profile.

large round balloons outside on floor in front of a concrete wall

However, if your application is not color correct, it will typically end up manipulating or displaying the image without converting the color space correctly, resulting in color distortion. For example, you may get the image below, where the color is washed out and everything looks distorted.

large round balloons outside on floor in front of a concrete wall

Wide Color test: Is your application wide color gamut capable?

A wide color gamut capable application can, when given wide color gamut images, show the colors outside of the sRGB color space. Here's an image you can use to test whether your application is wide color gamut capable; if it is, a red Android logo will show up. Note that you must run this test on a wide color gamut capable device, for example a Pixel 3 or Samsung Galaxy S10.

red Android droid figure

What you should do to prepare

To prepare for wide color gamut photography, your application must at least pass the wide color gamut ready test, which we call the color correctness test. If your application passes it, that's awesome! But if it doesn't, here are the steps to make it wide color gamut ready.

The key to being prepared and future proof is that your application should never assume that the external images it gets are in the sRGB color space. This means the application must check the color space of decoded images and convert when necessary. Failing to do so will result in color distortion or the color profile being discarded somewhere in your pipeline.
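For instance, here's a minimal sketch of that check-and-convert step for a bitmap that has already been decoded. It assumes your app works internally in sRGB and that re-drawing through a software Canvas is an acceptable way to convert; the toSRGB helper name is ours, not a platform API.

// Hypothetical helper: return an sRGB copy of a bitmap decoded in another color space.
static Bitmap toSRGB(Bitmap src) {
    ColorSpace srgb = ColorSpace.get(ColorSpace.Named.SRGB);
    ColorSpace cs = src.getColorSpace();        // API 26+; may be null for unmanaged bitmaps
    if (cs == null || cs.equals(srgb)) {
        return src;                             // nothing to convert
    }
    Bitmap dst = Bitmap.createBitmap(src.getWidth(), src.getHeight(),
            Bitmap.Config.ARGB_8888, src.hasAlpha(), srgb);
    // Drawing into the sRGB-backed bitmap converts between the two color spaces.
    new Canvas(dst).drawBitmap(src, 0f, 0f, null);
    return dst;
}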

Mandatory: Be Color Correct

You must be at least color correct. If your application doesn't adopt wide color gamut, you will most likely just want to decode every image to the sRGB color space. You can do that with either BitmapFactory or ImageDecoder.

Using BitmapFactory

In API 26, we added inPreferredColorSpace to BitmapFactory.Options, which allows you to specify the target color space you want the decoded bitmap to have. Let's say you want to decode a file; below is the snippet you would most likely use to manage the color:

final BitmapFactory.Options options = new BitmapFactory.Options();
// Decode this file to the sRGB color space.
options.inPreferredColorSpace = ColorSpace.get(ColorSpace.Named.SRGB);
final Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);

Using ImageDecoder

In Android P (API level 28), we introduced ImageDecoder, a modernized approach for decoding images. If your app targets API level 28 or higher, we recommend using it instead of the BitmapFactory and BitmapFactory.Options APIs.

Below is a snippet to decode the image to an sRGB bitmap using ImageDecoder#decodeBitmap API.

// Note: FILE_PATH must be a java.io.File here; ImageDecoder.createSource has no String overload.
ImageDecoder.Source source =
        ImageDecoder.createSource(FILE_PATH);
try {
    Bitmap bitmap = ImageDecoder.decodeBitmap(source,
            new ImageDecoder.OnHeaderDecodedListener() {
                @Override
                public void onHeaderDecoded(ImageDecoder decoder,
                        ImageDecoder.ImageInfo info,
                        ImageDecoder.Source source) {
                    // Force the decoded bitmap into the sRGB color space.
                    decoder.setTargetColorSpace(ColorSpace.get(ColorSpace.Named.SRGB));
                }
            });
} catch (IOException e) {
    // Handle the exception.
}

ImageDecoder also lets you know the encoded color space of the bitmap before you get the final bitmap, by passing an ImageDecoder.OnHeaderDecodedListener and checking ImageDecoder.ImageInfo#getColorSpace(). So, depending on how your application handles color spaces, you can check the encoded color space of the content and set the target color space accordingly.

// As above, FILE_PATH must be a java.io.File.
ImageDecoder.Source source =
        ImageDecoder.createSource(FILE_PATH);
try {
    Bitmap bitmap = ImageDecoder.decodeBitmap(source,
            new ImageDecoder.OnHeaderDecodedListener() {
                @Override
                public void onHeaderDecoded(ImageDecoder decoder,
                        ImageDecoder.ImageInfo info,
                        ImageDecoder.Source source) {
                    // Inspect the encoded color space and choose a target accordingly.
                    ColorSpace cs = info.getColorSpace();
                    // Do something...
                }
            });
} catch (IOException e) {
    // Handle the exception.
}

For more detailed usage you can check out the ImageDecoder APIs here.

Known bad practices

Some typical bad practices include but are not limited to:

  • Always assume sRGB color space
  • Upload image as texture without necessary conversion
  • Ignore the ICC profile during compression

All of these cause a severe, user-visible result: color distortion. For example, below is a code snippet that results in an application that is not color correct:

// This is bad, don't do it!
// The bitmap's color space is never checked or converted before it's uploaded as a texture.
final BitmapFactory.Options options = new BitmapFactory.Options();
final Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES31.GL_RGBA, bitmap.getWidth(),
        bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE);

There's no color space check before the bitmap is uploaded as a texture, so the application ends up with the distorted image below in the color correctness test.

large round balloons outside on floor in front of a concrete wall

Optional: Be wide color capable

Besides the above changes, which you must make in order to handle images correctly, if your application is heavily image based you will want to take additional steps to display these images in their full vibrant range, by enabling wide gamut mode in your manifest or creating Display P3 surfaces.

To enable the wide color gamut in your activity, set the colorMode attribute to wideColorGamut in your AndroidManifest.xml file. You need to do this for each activity for which you want to enable wide color mode.

android:colorMode="wideColorGamut"

You can also set the color mode programmatically in your activity by calling the setColorMode(int) method and passing in COLOR_MODE_WIDE_COLOR_GAMUT.
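As a minimal sketch, assuming you make the call from an Activity (the method lives on the activity's Window, and the constant comes from ActivityInfo):

public class WideColorActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Request wide color gamut rendering for this activity's window (API 26+).
        getWindow().setColorMode(ActivityInfo.COLOR_MODE_WIDE_COLOR_GAMUT);
    }
}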

To render wide color gamut content, besides having wide color content, you will also need to create a wide color gamut surface to render to. In OpenGL, for example, your application must first check that the EGL extensions defining the constants used below (EGL_KHR_gl_colorspace and EGL_EXT_gl_colorspace_display_p3_passthrough) are supported.

Then request Display P3 as the color space when creating your surfaces, as shown in the following code snippet:

// These constants come from the EGL extensions above; EGL10 does not define them.
private static final int EGL_GL_COLORSPACE_KHR = 0x309D;
private static final int EGL_GL_COLORSPACE_DISPLAY_P3_PASSTHROUGH_EXT = 0x3490;

public EGLSurface createWindowSurface(EGL10 egl, EGLDisplay display,
                                      EGLConfig config, Object nativeWindow) {
  EGLSurface surface = null;
  try {
    int[] attribs = {
      EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_DISPLAY_P3_PASSTHROUGH_EXT,
      EGL10.EGL_NONE
    };
    surface = egl.eglCreateWindowSurface(display, config, nativeWindow, attribs);
  } catch (IllegalArgumentException e) {
    // eglCreateWindowSurface can throw if the native window is invalid.
  }
  return surface;
}

Also check out our post for more details on how you can adopt wide color gamut in native code.

API design guidelines for image libraries

Finally, if you own or maintain an image decoding/encoding library, you will need to at least pass the color correctness tests as well. To modernize your library, there are two things we strongly recommend you do when you extend APIs to manage color:

  1. Explicitly accept a ColorSpace as a parameter when you design new APIs or extend existing ones. An explicit ColorSpace parameter is more future proof than hardcoding a color space (see the sketch after this list).
  2. Have all legacy APIs explicitly decode the bitmap to the sRGB color space. Historically there was no color management, and Android implicitly treated everything as sRGB until Android 8.0 (API level 26); decoding legacy paths to sRGB helps your users maintain backward compatibility.
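As a minimal sketch of these two recommendations (the decode method and its shape are hypothetical, not an existing library's API), a decode entry point could accept an explicit ColorSpace while the legacy overload stays pinned to sRGB:

// Hypothetical library API: the caller states the color space it wants.
public Bitmap decode(File file, ColorSpace target) {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inPreferredColorSpace =
            (target != null) ? target : ColorSpace.get(ColorSpace.Named.SRGB);
    return BitmapFactory.decodeFile(file.getAbsolutePath(), options);
}

// Legacy overload: keep old callers on explicit sRGB for backward compatibility.
public Bitmap decode(File file) {
    return decode(file, ColorSpace.get(ColorSpace.Named.SRGB));
}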

After you finish, go back to the above section and perform the two color tests.

Behind Magenta, the tech that rocked I/O

On the second day of I/O 2019, two bands took the stage—with a little help from machine learning. Both YACHT and The Flaming Lips worked with Google engineers who say that machine learning could change the way artists create music.

“Any time there has been a new technological development, it has made its way into music and art,” says Adam Roberts, a software engineer on the Magenta team. “The history of the piano, essentially, went from acoustic to electric to the synthesizer, and now there are ways to play it directly from your computer. That just happens naturally. If it’s a new technology, people figure out how to use it in music.”

Magenta, which started nearly three years ago, is an open-source research project powered by TensorFlow that explores the role of machine learning as a tool in the creative process. Machine learning is a process of teaching computers to recognize patterns, with a goal of letting them learn by example rather than constantly receiving input from a programmer. So with music, for example, you can input two types of melodies, then use machine learning to combine them in a novel way.

Jesse Engel, Claire Evans, Wayne Coyne and Adam Roberts speak at I/O.  

But the Magenta team isn’t just teaching computers to make music—instead, they’re working hand-in-hand with musicians to help take their art in new directions. YACHT was one of Magenta’s earliest collaborators; the trio came to Google to learn more about how to use artificial intelligence and machine learning in their upcoming album.

The band first took all 82 songs from their back catalog and isolated each part, from bass lines to vocal melodies to drum rhythms; they then took those isolated parts and broke them up into four-bar loops. Then, they put those loops into the machine learning model, which put out new melodies based on their old work. They did a similar process with lyrics, using their old songs plus other material they considered inspiring. The final task was to pick lyrics and melodies that made sense, and pair them together to make a song.

Music and Machine Learning (Google I/O'19)

Music and Machine Learning Session from Google I/O'19

“They used these tools to push themselves out of their comfort zone,” says Jesse Engel, a research scientist on the Magenta team. “They imposed some rules on themselves that they had to use the outputs of the model to some extent, and it helped them make new types of music.”

Claire Evans, the singer of YACHT, explained the process during a presentation at I/O. “Using machine learning to make a song with structure, with a beginning, middle and end, is a little bit still out of our reach,” she explained. “But that’s a good thing. The melody was the model’s job, but the arrangement and performance was entirely our job.”

The Flaming Lips’ use of Magenta is a lot more recent; the band started working with the Magenta team to prepare for their performance at I/O. The Magenta team showcased all their projects to the band, who were drawn to one in particular: Piano Genie, which was dreamed up by a graduate student, Chris Donahue, who was a summer intern at Google. They decided to use Piano Genie as the basis for a new song to be debuted on the I/O stage.

Google AI collaboration with The Flaming Lips bears fruit at I/O 2019

Piano Genie distills 88 notes on a piano to eight buttons, which you can push to your heart’s content to make piano music. In what Jesse calls “an initial moment of inspiration,” someone put a piece of wire inside a piece of fruit, and turned fruit into the buttons for Piano Genie. “Fruit can be used as a capacitive sensor, like the screen on your phone, so you can detect whether or not someone is touching the fruit,” Jesse explains. “They were playing these fruits just by touching these different fruits, and they got excited by how that changed the interaction.”

Wayne Coyne, the singer of The Flaming Lips, noted during an I/O panel that a quick turnaround time, plus close collaboration with Google, gave them the inspiration to think outside the box. “For me, the idea that we’re not playing it on a keyboard, we’re not playing it on a guitar, we’re playing it on fruit, takes it into this other realm,” he said.

During their performance that night, Steven Drozd from The Flaming Lips, who usually plays a variety of instruments, played a “magical bowl of fruit” for the first time. He tapped each fruit in the bowl, which then played different musical tones, “singing” the fruit’s own name. With help from Magenta, the band broke into a brand-new song, “Strawberry Orange.”

The Flaming Lips’ Steven Drozd plays a bowl of fruit.

The Flaming Lips also got help from the audience: At one point, they tossed giant, blow-up “fruits” into the crowd, and each fruit was also set up as a sensor, so any audience member who got their hands on one played music, too. The end result was a cacophonous, joyous moment when a crowd truly contributed to the band’s sound.

Audience members “play” an inflatable banana.

You can learn more about the "Fruit Genie" and how to build your own at g.co/magenta/fruitgenie.

Though the Magenta team collaborated on a much deeper level with YACHT, they also found the partnership with The Flaming Lips to be an exciting look toward the future. “The Flaming Lips is a proof of principle of how far we’ve come with the technologies,” Jesse says. “Through working with them we understood how to make our technologies more accessible to a broader base of musicians. We were able to show them all these things and they could just dive in and play with it.”

How car-loving Googlers turned a “lemon” into lemonade

This April, Googlers Peter McDade and Clay McCauley spent an entire day trying to keep a $300 car running. No, they weren’t stuck on a nightmare of a road trip. They were competing in the 24 Hours of Lemons race, the culmination of eight months of blood, sweat and tears—and a whole lot of grease.

Peter and Clay work at a Google data center in Moncks Corner, S.C., located about 20 miles from Charleston. Like many Googlers, the two find joy in taking things apart and putting them back together to see how they work. The data center has a maker space for employees, where colleagues tinker with brewing, electronics and 3D printers, as well as an auto repair station, with a car lift and tools to let people work on their vehicles. But their “lemons” race was way more than an after-work hangout.

Here’s how a lemons race works: Participants must team up in groups, and each group must spend no more than $500 on a car. Then they fix it up, give it a wacky paint job and race it. This particular race, nicknamed Southern Discomfort, is a full-day race at the Carolina Motorsports Park; it’s one of the 24 Hours of Lemons races that take place across the U.S. throughout the year. Peter, Clay and two other friends each took one-hour shifts driving, while the rest of the group stayed on call as a pit crew, taking action in case anything broke. Which, given the price of the car, was pretty likely. “The point is not to win,” Peter says. “The point is to finish and have fun.”

Peter first came up with the idea of participating in the race, and spread the word at work. Clay was immediately interested and signed up to help, but didn’t think it would work out. “I was thinking, ‘Oh, it probably isn’t that serious, it probably will never happen,’” Clay says. But they stuck with it once other friends outside of Google stepped up to join.

Their “lemon” car, which they purchased for $300.

Their first challenge? Find a car for under $500. It took them months, but Clay ended up finding a listing for a $300 car, which had been sitting in a field for a long time. “It was actually sinking into the ground, it had been there for so long,” Clay says. “It had grass overgrown around it, and it had mold growing on the paint.” Though the car barely rolled, thanks to a badly bent wheel, they decided they could figure something out.

That was the beginning of five months of work. They stripped the car down, fixed elements like the brakes and the wheels and added required safety features like a roll cage. At first, they tinkered with the car on site at the data center, but soon moved it to Peter’s driveway, where it remained until the race. They spent Tuesday and Thursday evenings, plus weekends, working to get it in shape, and kept track of what they had to do with Google Sheets.

Peter worked on the car in his driveway.

On the big day, other teams didn’t even expect them to finish because of issues with the car’s fuel system and what Peter calls “electronic gremlins.” But they did, and they bested even their own expectations. The team, nicknamed “The Slow and Spontaneous” as a nod to the “Fast and the Furious” movies, made it the full 24 hours, doing 309 laps and finishing in 49th place out of 84 participants.

Emerging victorious wasn’t really the point, though. It was to work on a project with friends, and learn new skills to boot. “We’re not satisfied with something being broken and having to throw it away and buying something new,” Peter says. “It’s better to get something you know you might be able to fix, trying to find it, and realizing that yeah, I could fail, but if I fail, I’m going to learn something.” And they’ll apply those lessons to their next lemons race, taking place this fall.

Glass Enterprise Edition 2: faster and more helpful

Glass Enterprise Edition has helped workers in a variety of industries—from logistics, to manufacturing, to field services—do their jobs more efficiently by providing hands-free access to the information and tools they need to complete their work. Workers can use Glass to access checklists, view instructions or send inspection photos or videos, and our enterprise customers have reported faster production times, improved quality, and reduced costs after using Glass.


Glass Enterprise Edition 2 helps businesses further improve the efficiency of their employees. As our customers have adopted Glass, we’ve received valuable feedback that directly informed the improvements in Glass Enterprise Edition 2. 

Glass Enterprise Edition 2 with safety frames by Smith Optics. Glass is a small, lightweight wearable computer with a transparent display for hands-free work.

Glass Enterprise Edition 2 is built on the Qualcomm Snapdragon XR1 platform, which features a significantly more powerful multicore CPU (central processing unit) and a new artificial intelligence engine. This enables significant power savings, enhanced performance and support for computer vision and advanced machine learning capabilities. We’ve also partnered with Smith Optics to make Glass-compatible safety frames for different types of demanding work environments, like manufacturing floors and maintenance facilities.

Additionally, Glass Enterprise Edition 2 features improved camera performance and quality, which builds on Glass’s existing first-person video streaming and collaboration features. We’ve also added a USB-C port that supports faster charging, and increased overall battery life so customers can use Glass longer between charges.

Finally, Glass Enterprise Edition 2 is easier to develop for and deploy. It’s built on Android, making it easier for customers to integrate the services and APIs (application programming interfaces) they already use. And in order to support scaled deployments, Glass Enterprise Edition 2 now supports Android Enterprise Mobile Device Management.

Over the past two years at X, Alphabet’s moonshot factory, we’ve collaborated with our partners to provide solutions that improve workplace productivity for a growing number of customers—including AGCO, Deutsche Post DHL Group, Sutter Health, and H.B. Fuller. We’ve been inspired by the ways businesses like these have been using Glass Enterprise Edition. X, which is designed to be a protected space for long-term thinking and experimentation, has been a great environment in which to learn and refine the Glass product. Now, in order to meet the demands of the growing market for wearables in the workplace and to better scale our enterprise efforts, the Glass team has moved from X to Google.

We’re committed to providing enterprises with the helpful tools they need to work better, smarter and faster. Enterprise businesses interested in using Glass Enterprise Edition 2 can contact our sales team or our network of Glass Enterprise solution partners starting today. We’re excited to see how our partners and customers will continue to use Glass to shape the future of work.

A promising step forward for predicting lung cancer

Over the past three years, teams at Google have been applying AI to problems in healthcare—from diagnosing eye disease to predicting patient outcomes in medical records. Today we’re sharing new research showing how AI can predict lung cancer in ways that could boost the chances of survival for many people at risk around the world.


Lung cancer results in over 1.7 million deaths per year, making it the deadliest of all cancers worldwide—more than breast, prostate, and colorectal cancers combined—and it’s the sixth most common cause of death globally, according to the World Health Organization. While lung cancer has one of the worst survival rates among all cancers, interventions are much more successful when the cancer is caught early. Unfortunately, the statistics are sobering because the overwhelming majority of cancers are not caught until later stages.


Over the last three decades, doctors have explored ways to screen people at high-risk for lung cancer. Though lower dose CT screening has been proven to reduce mortality, there are still challenges that lead to unclear diagnosis, subsequent unnecessary procedures, financial costs, and more.

Our latest research

In late 2017, we began exploring how we could address some of these challenges using AI. Using advances in 3D volumetric modeling alongside datasets from our partners (including Northwestern University), we’ve made progress in modeling lung cancer prediction as well as laying the groundwork for future clinical testing. Today we’re publishing our promising findings in “Nature Medicine.”


Radiologists typically look through hundreds of 2D images within a single CT scan, and cancer can be minuscule and hard to spot. We created a model that can not only generate the overall lung cancer malignancy prediction (viewed in 3D volume) but also identify subtle malignant tissue in the lungs (lung nodules). The model can also factor in information from previous scans, useful in predicting lung cancer risk because the growth rate of suspicious lung nodules can be indicative of malignancy.

This is a high-level view of the modeling framework. For each patient, the AI uses the current CT scan and, if available, a previous CT scan as input. The model outputs an overall malignancy prediction.

In our research, we leveraged 45,856 de-identified chest CT screening cases (some in which cancer was found) from NIH’s research dataset from the National Lung Screening Trial study and Northwestern University. We validated the results with a second dataset and also compared our results against six U.S. board-certified radiologists.

When using a single CT scan for diagnosis, our model performed on par or better than the six radiologists. We detected five percent more cancer cases while reducing false-positive exams by more than 11 percent compared to unassisted radiologists in our study. Our approach achieved an AUC of 94.4 percent (AUC is a common metric used in machine learning that provides an aggregate measure of classification performance).

For an asymptomatic patient with no history of cancer, the AI system reviewed and detected potential lung cancer that had been previously called normal.

Next steps

Despite the value of lung cancer screenings, only 2-4 percent of eligible patients in the U.S. are screened today. This work demonstrates the potential for AI to increase both accuracy and consistency, which could help accelerate adoption of lung cancer screening worldwide.

These initial results are encouraging, but further studies will assess the impact and utility in clinical practice. We’re collaborating with the Google Cloud Healthcare and Life Sciences team to serve this model through the Cloud Healthcare API, and we’re in early conversations with partners around the world to continue additional clinical validation research and deployment. If you’re a research institution or hospital system that is interested in collaborating in future research, please fill out this form.


Tech Exchange students reflect on their future careers

What if this was your day? At 10 a.m., explore the impact of cybersecurity on society. Over lunch, chat with a famous YouTuber. Wrap up the day with a tour of the Google X offices. Then, head home to work on a machine intelligence group project.

Sound out of the ordinary? For the 65 students participating in Google’s Tech Exchange program, this has been their reality over the last nine months.

Tech Exchange, a student exchange program between Google and 10 Historically Black Colleges and Universities (HBCUs) and Hispanic-Serving Institutions (HSIs), hosts students at Google’s Mountain View campus and engages them in a variety of applied computer science courses. The curriculum includes machine learning, product management, computational theory and database systems, all co-taught by HBCU/HSI faculty and Google engineers.

Tech Exchange is one way Google makes long-term investments in education in order to increase pathways to tech for underrepresented groups. We caught up with four students to learn about their experiences, hear about their summer plans and understand what they’ll bring back to their home university campuses.

Taylor Roper

Howard University

Summer Plans: BOLD Internship with the Research and Machine Intelligence team at Google

What I loved most: “If I could take any of my Tech Exchange classes back to Howard, it would be Product Management. This was such an amazing class and a great introduction into what it takes to be a product manager. The main instructors were Googlers who are currently product managers. Throughout the semester, we learned how design, engineering and all other fields interpret the role of a product manager. Being able to ask experts questions was very insightful and helpful.”

Vensan Cabardo

New Mexico State University

Summer Plans: Google’s Engineering Practicum Program

Finding confidence and comrades: “As much as I love my friends back home, none of them are computer science majors, and any discussion on my part about computer science would fall on deaf ears. That changed when I came to Tech Exchange. I found people who love computing and talking about computing as much as I do. As you do these things and as you travel through life, there may be a voice in your head telling you that you made it this far on sheer luck alone, that you don’t belong here, or that your accomplishments aren’t that great. That’s the imposter syndrome talking. That voice is wrong. Internalize your success, internalize your achievements, and recognize that they are the result of your hard work, not just good luck.”

Pedro Luis Rivera Gómez

University of Puerto Rico at Mayagüez

Summer Plans: Software Engineering Internship at Google

The value of a network: “A lesson that I learned during the Tech Exchange program that has helped a lot is to establish a network and promote peer-collaboration. We all have our strengths and weaknesses, and when we are working on a project and do not have much experience, you can get stuck on a particular task. Having a network increases the productivity of the whole group. When one member gets stuck, they can ask a peer for advice.”


Garrett Tolbert

Florida A&M University

Summer Plans: Applying to be a GEM Fellow

Ask all the questions: “One thing I will never forget from Tech Exchange is that asking questions goes beyond the classroom. Everyone in this program has been so accessible and helpful with accommodating me for things I never thought were possible. Being in this program has showed me that if you don’t know, just ask! Research the different paths you can take within tech, and see which paths interest you. Then, find people who are in those fields and connect with them.”


Dark mode available for Calendar and Keep on Android

What’s changing 

Google Calendar and Keep will now support Dark mode on Android.

 

Dark mode for Google Calendar. 

 

Dark mode for Google Keep. 

Who’s impacted 

End users.

Why you’d use it 

Dark mode is a popular feature that’s frequently requested by Calendar and Keep users. It creates a better viewing experience in low-light conditions by reducing brightness.

How to get started 


  • Admins: No action required. 
  • End users: 
    • Calendar 
      • Enable Dark mode by going to Settings > General > Theme. 
    • Keep 
      • Enable Dark mode by going to Settings > Enable Dark Mode.

Additional details 


Both the Calendar and Keep apps need to be updated to the latest version to see this feature. 

Calendar 
Dark mode for Calendar will be supported on devices with Android N+ (i.e. Nougat and more recent releases).

Android Q users can set their OS to Dark mode, which means Calendar and all other apps will be in Dark mode by default. If users do not have their OS set to Dark mode, they can enable Dark mode in Calendar’s settings (see above).

For pre-Android-Q devices, users will be able to configure Calendar to go into Dark Mode when the device enters battery saving mode.

Keep 
Dark mode for Keep will be supported on devices with Android L-P. For these devices, Dark mode can be enabled from Keep’s settings (see above).

For Android Q devices, Dark mode will be on by default if the OS is set to Dark mode, or it can be enabled in Keep’s settings (see above).

Availability 

Rollout details 

  • Calendar: 
    • Gradual rollout (up to 15 days for feature visibility) starting on May 16, 2019. 
  • Keep: 
    • Gradual rollout (up to 15 days for feature visibility) starting on May 20, 2019. 

G Suite editions 
Available to all G Suite editions. 
On/off by default? 

  • Calendar: 
    • For Android N - P, Dark mode will be OFF by default and can be enabled in Calendar settings (see above). 
    • For Android Q, this feature will be ON by default when the OS is set to Dark mode or can be enabled in Calendar settings (see above). 
  • Keep: 
    • For Android L - P, this feature will be OFF by default and can be enabled in Keep settings (see above). 
    • For Android Q, this feature will be ON by default when the OS is set to Dark mode or can be enabled in Keep settings (see above).

Stay up to date with G Suite launches