Wide Color Photos Are Coming to Android: Things You Need to Know to be Prepared

Posted by Peiyong Lin, Software Engineer

Android is now at the point where the sRGB color gamut with 8 bits per color channel is not enough to take advantage of display and camera technology. At Android we have been working to make wide color photography happen end-to-end, i.e. more bits and bigger gamuts. This means that, eventually, users will be able to capture the richness of a scene, share wide color pictures with friends and view wide color pictures on their phones. And now, with Android Q, it's getting really close to reality: wide color photography is coming to Android. So it's very important for applications to be wide color gamut ready. This article shows how you can test your application to see whether it's wide color gamut ready and wide color gamut capable, and the steps you need to take to be ready for wide color gamut photography.

But before we dive in, why wide color photography? Display panels and camera sensors on mobile are getting better every year. More and more newly released phones ship with calibrated display panels, some of which are wide color gamut capable. Modern camera sensors can capture scenes with a wider range of color outside of sRGB and thus produce wide color gamut pictures. When these two come together, they create an end-to-end photography experience with the more vibrant colors of the real world.

At a technical level, this means there will be pictures coming to your application with an ICC profile that is not sRGB but some other wider color gamut: Display P3, Adobe RGB, etc. For consumers, this means their photos will look more realistic.

Orange sunset, Display P3

Orange sunset, sRGB

Colorful umbrellas, Display P3

Colorful umbrellas, sRGB

Above are the Display P3 and sRGB versions of the same scenes. If you are reading this article on a calibrated, wide color gamut capable display, you will notice a significant difference between them.

Color Tests

There are two kinds of tests you can perform to know whether your application is prepared or not. One is what we call the color correctness test; the other is the wide color test.

Color Correctness test: Is your application wide color gamut ready?

A wide color gamut ready application manages color proactively. This means that, when given images, the application always checks the color space and converts colors based on its own ability to show wide color gamut. Even if the application can't handle wide color gamut, it can still show the sRGB content of the image correctly, without color distortion.

Below is a color correct rendering of an image with a Display P3 ICC profile.

large round balloons outside on floor in front of a concrete wall

However, if your application is not color correct, it will typically end up manipulating or displaying the image without converting the color space correctly, resulting in color distortion. For example, you may get the image below, where the colors are washed out and everything looks distorted.

large round balloons outside on floor in front of a concrete wall

Wide Color test: Is your application wide color gamut capable?

A wide color gamut capable application can, when given wide color gamut images, show colors outside of the sRGB color space. Here's an image you can use to test whether your application is wide color gamut capable: if it is, a red Android logo will show up. Note that you must run this test on a wide color gamut capable device, for example a Pixel 3 or a Samsung Galaxy S10.

red Android droid figure

What you should do to prepare

To prepare for wide color gamut photography, your application must at least pass the wide color gamut ready test, which we call the color correctness test. If your application passes it, that's awesome! If it doesn't, here are the steps to make it wide color gamut ready.

The key to being prepared and future proof is that your application should never assume that the external images it receives are in the sRGB color space. This means the application must check the color space of decoded images and convert them when necessary. Failing to do so will result in color distortion or in the color profile being discarded somewhere in your pipeline.
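
For example, a minimal sketch of such a check, using the Bitmap#getColorSpace() API, might look like the following (FILE_PATH is a placeholder; the conversion itself is shown in the sections below):

// Check which color space the decoded bitmap actually has before using it.
Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH);
if (bitmap != null) {
    ColorSpace cs = bitmap.getColorSpace();
    if (cs != null && !cs.equals(ColorSpace.get(ColorSpace.Named.SRGB))) {
        // Not sRGB (it may be Display P3, Adobe RGB, etc.); convert it, or
        // re-decode it to a color space your pipeline can handle.
    }
}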

Mandatory: Be Color Correct

You must be at least color correct. If your application doesn't adopt wide color gamut, you will most likely just want to decode every image to the sRGB color space. You can do that with either BitmapFactory or ImageDecoder.

Using BitmapFactory

In Android 8.0 (API level 26), we added inPreferredColorSpace to BitmapFactory.Options, which allows you to specify the target color space you want the decoded bitmap to have. Let's say you want to decode a file; below is the snippet you are most likely to use in order to manage the color:

final BitmapFactory.Options options = new BitmapFactory.Options();
// Decode this file to the sRGB color space.
options.inPreferredColorSpace = ColorSpace.get(ColorSpace.Named.SRGB);
Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);

Using ImageDecoder

In Android P (API level 28), we introduced ImageDecoder, a modernized approach to decoding images. If your app targets API level 28 or higher, we recommend using it instead of the BitmapFactory and BitmapFactory.Options APIs.

Below is a snippet that decodes an image to an sRGB bitmap using the ImageDecoder#decodeBitmap API.

ImageDecoder.Source source =
        ImageDecoder.createSource(FILE_PATH);
Bitmap bitmap = null;
try {
    bitmap = ImageDecoder.decodeBitmap(source,
            new ImageDecoder.OnHeaderDecodedListener() {
                @Override
                public void onHeaderDecoded(ImageDecoder decoder,
                        ImageDecoder.ImageInfo info,
                        ImageDecoder.Source source) {
                    // Always decode to sRGB, regardless of the encoded color space.
                    decoder.setTargetColorSpace(ColorSpace.get(ColorSpace.Named.SRGB));
                }
            });
} catch (IOException e) {
    // Handle the exception.
}

ImageDecoder also has the advantage of letting you know the encoded color space of the image before you get the final bitmap: pass an ImageDecoder.OnHeaderDecodedListener and check ImageDecoder.ImageInfo#getColorSpace(). Depending on how your application handles color spaces, you can then inspect the encoded color space of the content and set the target color space accordingly.

ImageDecoder.Source source =
        ImageDecoder.createSource(FILE_PATH);
Bitmap bitmap = null;
try {
    bitmap = ImageDecoder.decodeBitmap(source,
            new ImageDecoder.OnHeaderDecodedListener() {
                @Override
                public void onHeaderDecoded(ImageDecoder decoder,
                        ImageDecoder.ImageInfo info,
                        ImageDecoder.Source source) {
                    // Inspect the encoded color space of the content.
                    ColorSpace cs = info.getColorSpace();
                    // Do something...
                }
            });
} catch (IOException e) {
    // Handle the exception.
}
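
For instance, here is a sketch of one possible policy inside onHeaderDecoded(), assuming the rest of your pipeline can render Display P3 (appSupportsWideColorGamut is a hypothetical flag your application would maintain):

// Keep wide gamut content when the app can render it; otherwise convert to sRGB.
ColorSpace cs = info.getColorSpace();
if (cs != null && cs.isWideGamut() && appSupportsWideColorGamut) {
    decoder.setTargetColorSpace(ColorSpace.get(ColorSpace.Named.DISPLAY_P3));
} else {
    decoder.setTargetColorSpace(ColorSpace.get(ColorSpace.Named.SRGB));
}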

For more detailed usage you can check out the ImageDecoder APIs here.

Known bad practices

Some typical bad practices include but are not limited to:

  • Always assuming the sRGB color space
  • Uploading an image as a texture without the necessary conversion
  • Ignoring the ICC profile during compression

All of these result in severe, user-visible color distortion. For example, below is a code snippet that makes an application not color correct:

// This is bad, don't do it!
// The bitmap is decoded with no color management and uploaded as a texture
// without any color space conversion.
final BitmapFactory.Options options = new BitmapFactory.Options();
final Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(),
        bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE);

There's no color space check before the bitmap is uploaded as a texture, so the application ends up with the distorted image below from the color correctness test.

large round balloons outside on floor in front of a concrete wall
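
A minimal color-correct alternative, based on the BitmapFactory snippet shown earlier, is to explicitly decode to sRGB before uploading the texture (FILE_PATH is a placeholder):

// Decode to sRGB explicitly, then upload the texture.
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredColorSpace = ColorSpace.get(ColorSpace.Named.SRGB);
final Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(),
        bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE);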

Optional: Be wide color capable

Besides the changes above, which you must make in order to handle images correctly, if your application is heavily image based you will want to take additional steps to display these images in their full vibrant range, by enabling wide color gamut mode in your manifest or by creating Display P3 surfaces.

To enable the wide color gamut in your activity, set the colorMode attribute to wideColorGamut in your AndroidManifest.xml file. You need to do this for each activity for which you want to enable wide color mode.

android:colorMode="wideColorGamut"

You can also set the color mode programmatically in your activity by calling the setColorMode(int) method and passing in COLOR_MODE_WIDE_COLOR_GAMUT.
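
For example, a minimal sketch of the programmatic approach (the activity class name is a placeholder; setColorMode(int) lives on the activity's Window):

import android.app.Activity;
import android.content.pm.ActivityInfo;
import android.os.Bundle;

public class WideColorActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Request the wide color gamut color mode for this activity's window.
        getWindow().setColorMode(ActivityInfo.COLOR_MODE_WIDE_COLOR_GAMUT);
    }
}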

To render wide color gamut content, besides having wide color content, you will also need to create a wide color gamut surface to render to. In OpenGL, for example, your application must first check that the required EGL extensions are supported; for the snippet below, those are EGL_KHR_gl_colorspace and EGL_EXT_gl_colorspace_display_p3_passthrough.
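
One way to perform that check, sketched here assuming you already have an EGL10 instance and an initialized EGLDisplay, is to query the extension string:

// Returns true if the display supports the EGL extensions used below
// (a minimal sketch; error handling omitted).
private boolean supportsDisplayP3(EGL10 egl, EGLDisplay display) {
    String extensions = egl.eglQueryString(display, EGL10.EGL_EXTENSIONS);
    return extensions != null
            && extensions.contains("EGL_KHR_gl_colorspace")
            && extensions.contains("EGL_EXT_gl_colorspace_display_p3_passthrough");
}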

Then request Display P3 as the color space when creating your surfaces, as shown in the following code snippet:

private static final int EGL_GL_COLORSPACE_KHR = 0x309D;
private static final int EGL_GL_COLORSPACE_DISPLAY_P3_PASSTHROUGH_EXT = 0x3490;

public EGLSurface createWindowSurface(EGL10 egl, EGLDisplay display,
                                      EGLConfig config, Object nativeWindow) {
  EGLSurface surface = null;
  try {
    // Request a Display P3 color space for the window surface.
    int[] attribs = {
      EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_DISPLAY_P3_PASSTHROUGH_EXT,
      EGL10.EGL_NONE
    };
    surface = egl.eglCreateWindowSurface(display, config, nativeWindow, attribs);
  } catch (IllegalArgumentException e) {
    // Handle the exception.
  }
  return surface;
}

Also check out our post for more details on how to adopt wide color gamut in native code.

API design guidelines for image libraries

Finally, if you own or maintain an image decoding/encoding library, it will also need to at least pass the color correctness test. To modernize your library, there are two things we strongly recommend when you extend your APIs to manage color:

  1. Explicitly accept ColorSpace as a parameter when you design new APIs or extend existing ones. Instead of hardcoding a color space, an explicit ColorSpace parameter is a more future-proof way forward (see the sketch after this list).
  2. Have all legacy APIs explicitly decode the bitmap to the sRGB color space. Historically there was no color management, and Android treated everything implicitly as sRGB until Android 8.0 (API level 26), so decoding to sRGB helps you maintain backward compatibility for your users.
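
Below is a hypothetical sketch of what those two recommendations might look like for a library; the class and method names are illustrative, not an existing API:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ColorSpace;

public final class MyImageLibrary {

    // New API: the caller states the target color space explicitly.
    public static Bitmap decodeFile(String filePath, ColorSpace target) {
        final BitmapFactory.Options options = new BitmapFactory.Options();
        options.inPreferredColorSpace = target;
        return BitmapFactory.decodeFile(filePath, options);
    }

    // Legacy API: keep the historical implicit-sRGB behavior for existing callers.
    public static Bitmap decodeFile(String filePath) {
        return decodeFile(filePath, ColorSpace.get(ColorSpace.Named.SRGB));
    }
}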

After you finish, go back to the above section and perform the two color tests.

Behind Magenta, the tech that rocked I/O

On the second day of I/O 2019, two bands took the stage—with a little help from machine learning. Both YACHT and The Flaming Lips worked with Google engineers who say that machine learning could change the way artists create music.

“Any time there has been a new technological development, it has made its way into music and art,” says Adam Roberts, a software engineer on the Magenta team. “The history of the piano, essentially, went from acoustic to electric to the synthesizer, and now there are ways to play it directly from your computer. That just happens naturally. If it’s a new technology, people figure out how to use it in music.”

Magenta, which started nearly three years ago, is an open-source research project powered by TensorFlow that explores the role of machine learning as a tool in the creative process. Machine learning is a process of teaching computers to recognize patterns, with a goal of letting them learn by example rather than constantly receiving input from a programmer. So with music, for example, you can input two types of melodies, then use machine learning to combine them in a novel way.


Jesse Engel, Claire Evans, Wayne Coyne and Adam Roberts speak at I/O.  

But the Magenta team isn’t just teaching computers to make music—instead, they’re working hand-in-hand with musicians to help take their art in new directions. YACHT was one of Magenta’s earliest collaborators; the trio came to Google to learn more about how to use artificial intelligence and machine learning in their upcoming album.

The band first took all 82 songs from their back catalog and isolated each part, from bass lines to vocal melodies to drum rhythms; they then took those isolated parts and broke them up into four-bar loops. Then, they put those loops into the machine learning model, which put out new melodies based on their old work. They did a similar process with lyrics, using their old songs plus other material they considered inspiring. The final task was to pick lyrics and melodies that made sense, and pair them together to make a song.

Music and Machine Learning Session from Google I/O'19

“They used these tools to push themselves out of their comfort zone,” says Jesse Engel, a research scientist on the Magenta team. “They imposed some rules on themselves that they had to use the outputs of the model to some extent, and it helped them make new types of music.”

Claire Evans, the singer of YACHT, explained the process during a presentation at I/O. “Using machine learning to make a song with structure, with a beginning, middle and end, is a little bit still out of our reach,” she explained. “But that’s a good thing. The melody was the model’s job, but the arrangement and performance was entirely our job.”

The Flaming Lips’ use of Magenta is a lot more recent; the band started working with the Magenta team to prepare for their performance at I/O. The Magenta team showcased all their projects to the band, who were drawn to one in particular: Piano Genie, which was dreamed up by a graduate student, Chris Donahue, who was a summer intern at Google. They decided to use Piano Genie as the basis for a new song to be debuted on the I/O stage.

Google AI collaboration with The Flaming Lips bears fruit at I/O 2019

Piano Genie distills 88 notes on a piano to eight buttons, which you can push to your heart’s content to make piano music. In what Jesse calls “an initial moment of inspiration,” someone put a piece of wire inside a piece of fruit, and turned fruit into the buttons for Piano Genie. “Fruit can be used as a capacitive sensor, like the screen on your phone, so you can detect whether or not someone is touching the fruit,” Jesse explains. “They were playing these fruits just by touching these different fruits, and they got excited by how that changed the interaction.”

Wayne Coyne, the singer of The Flaming Lips, noted during an I/O panel that a quick turnaround time, plus close collaboration with Google, gave them the inspiration to think outside the box. “For me, the idea that we’re not playing it on a keyboard, we’re not playing it on a guitar, we’re playing it on fruit, takes it into this other realm,” he said.

During their performance that night, Steven Drozd from The Flaming Lips, who usually plays a variety of instruments, played a “magical bowl of fruit” for the first time. He tapped each fruit in the bowl, which then played different musical tones, “singing” the fruit’s own name. With help from Magenta, the band broke into a brand-new song, “Strawberry Orange.”


The Flaming Lips’ Steven Drozd plays a bowl of fruit.

The Flaming Lips also got help from the audience: At one point, they tossed giant, blow-up “fruits” into the crowd, and each fruit was also set up as a sensor, so any audience member who got their hands on one played music, too. The end result was a cacophonous, joyous moment when a crowd truly contributed to the band’s sound.


Audience members “play” an inflatable banana.

You can learn more about the "Fruit Genie" and how to build your own at g.co/magenta/fruitgenie.

Though the Magenta team collaborated on a much deeper level with YACHT, they also found the partnership with The Flaming Lips to be an exciting look toward the future. “The Flaming Lips is a proof of principle of how far we’ve come with the technologies,” Jesse says. “Through working with them we understood how to make our technologies more accessible to a broader base of musicians. We were able to show them all these things and they could just dive in and play with it.”

How car-loving Googlers turned a “lemon” into lemonade

This April, Googlers Peter McDade and Clay McCauley spent an entire day trying to keep a $300 car running. No, they weren’t stuck on a nightmare of a road trip. They were competing in the 24 Hours of Lemons race, the culmination of eight months of blood, sweat and tears—and a whole lot of grease.

Peter and Clay work at a Google data center in Moncks Corner, S.C., located about 20 miles from Charleston. Like many Googlers, the two find joy in taking things apart and putting them back together to see how they work. The data center has a maker space for employees, where colleagues tinker with brewing, electronics and 3D printers, as well as an auto repair station, with a car lift and tools to let people work on their vehicles. But their “lemons” race was way more than an after-work hangout.

Here’s how a lemons race works: Participants must team up in groups, and each group must spend no more than $500 on a car. Then they fix it up, give it a wacky paint job and race it. This particular race, nicknamed Southern Discomfort, is a full-day race at the Carolina Motorsports Park; it’s one of the 24 Hours of Lemons races that take place across the U.S. throughout the year. Peter, Clay and two other friends each took one-hour shifts driving, while the rest of the group stayed on call as a pit crew, taking action in case anything broke. Which, given the price of the car, was pretty likely. “The point is not to win,” Peter says. “The point is to finish and have fun.”

Peter first came up with the idea of participating in the race, and spread the word at work. Clay was immediately interested and signed up to help, but didn’t think it would work out. “I was thinking, ‘Oh, it probably isn’t that serious, it probably will never happen,’” Clay says. But they stuck with it once other friends outside of Google stepped up to join.

Their “lemon” car, which they purchased for $300.

Their first challenge? Find a car for under $500. It took them months, but Clay ended up finding a listing for a $300 car, which had been sitting in a field for a long time. “It was actually sinking into the ground, it had been there for so long,” Clay says. “It had grass overgrown around it, and it had mold growing on the paint.” Though the car barely rolled, thanks to a badly bent wheel, they decided they could figure something out.

That was the beginning of five months of work. They stripped the car down, fixed elements like the brakes and the wheels and added required safety features like a roll cage. At first, they tinkered with the car on site at the data center, but soon moved it to Peter’s driveway, where it remained until the race. They spent Tuesday and Thursday evenings, plus weekends, working to get it in shape, and kept track of what they had to do with Google Sheets.

Peter worked on the car in his driveway.

On the big day, other teams didn’t even expect them to finish because of issues with the car’s fuel system and what Peter calls “electronic gremlins.” But they did, and they bested even their own expectations. The team, nicknamed “The Slow and Spontaneous” as a nod to the “Fast and the Furious” movies, made it the full 24 hours, doing 309 laps and finishing in 49th place out of 84 participants.

Emerging victorious wasn’t really the point, though. It was to work on a project with friends, and learn new skills to boot. “We’re not satisfied with something being broken and having to throw it away and buying something new,” Peter says. “It’s better to get something you know you might be able to fix, trying to find it, and realizing that yeah, I could fail, but if I fail, I’m going to learn something.” And they’ll apply those lessons to their next lemons race, taking place this fall.

Glass Enterprise Edition 2: faster and more helpful

Glass Enterprise Edition has helped workers in a variety of industries—from logistics, to manufacturing, to field services—do their jobs more efficiently by providing hands-free access to the information and tools they need to complete their work. Workers can use Glass to access checklists, view instructions or send inspection photos or videos, and our enterprise customers have reported faster production times, improved quality, and reduced costs after using Glass.


Glass Enterprise Edition 2 helps businesses further improve the efficiency of their employees. As our customers have adopted Glass, we’ve received valuable feedback that directly informed the improvements in Glass Enterprise Edition 2. 

Glass Enterprise Edition 2 with safety frames by Smith Optics. Glass is a small, lightweight wearable computer with a transparent display for hands-free work.

Glass Enterprise Edition 2 is built on the Qualcomm Snapdragon XR1 platform, which features a significantly more powerful multicore CPU (central processing unit) and a new artificial intelligence engine. This enables significant power savings, enhanced performance and support for computer vision and advanced machine learning capabilities. We’ve also partnered with Smith Optics to make Glass-compatible safety frames for different types of demanding work environments, like manufacturing floors and maintenance facilities.

Additionally, Glass Enterprise Edition 2 features improved camera performance and quality, which builds on Glass’s existing first-person video streaming and collaboration features. We’ve also added a USB-C port that supports faster charging, and increased overall battery life so customers can use Glass longer between charges.

Finally, Glass Enterprise Edition 2 is easier to develop for and deploy. It’s built on Android, making it easier for customers to integrate the services and APIs (application programming interfaces) they already use. And in order to support scaled deployments, Glass Enterprise Edition 2 now supports Android Enterprise Mobile Device Management.

Over the past two years at X, Alphabet’s moonshot factory, we’ve collaborated with our partners to provide solutions that improve workplace productivity for a growing number of customers—including AGCO, Deutsche Post DHL Group, Sutter Health, and H.B. Fuller. We’ve been inspired by the ways businesses like these have been using Glass Enterprise Edition. X, which is designed to be a protected space for long-term thinking and experimentation, has been a great environment in which to learn and refine the Glass product. Now, in order to meet the demands of the growing market for wearables in the workplace and to better scale our enterprise efforts, the Glass team has moved from X to Google.

We’re committed to providing enterprises with the helpful tools they need to work better, smarter and faster. Enterprise businesses interested in using Glass Enterprise Edition 2 can contact our sales team or our network of Glass Enterprise solution partners starting today. We’re excited to see how our partners and customers will continue to use Glass to shape the future of work.

A promising step forward for predicting lung cancer

Over the past three years, teams at Google have been applying AI to problems in healthcare—from diagnosing eye disease to predicting patient outcomes in medical records. Today we’re sharing new research showing how AI can predict lung cancer in ways that could boost the chances of survival for many people at risk around the world.


Lung cancer results in over 1.7 million deaths per year, making it the deadliest of all cancers worldwide—more than breast, prostate, and colorectal cancers combined—and it’s the sixth most common cause of death globally, according to the World Health Organization. While lung cancer has one of the worst survival rates among all cancers, interventions are much more successful when the cancer is caught early. Unfortunately, the statistics are sobering because the overwhelming majority of cancers are not caught until later stages.


Over the last three decades, doctors have explored ways to screen people at high risk for lung cancer. Though low-dose CT screening has been proven to reduce mortality, there are still challenges that lead to unclear diagnoses, subsequent unnecessary procedures, financial costs, and more.

Our latest research

In late 2017, we began exploring how we could address some of these challenges using AI. Using advances in 3D volumetric modeling alongside datasets from our partners (including Northwestern University), we’ve made progress in modeling lung cancer prediction as well as laying the groundwork for future clinical testing. Today we’re publishing our promising findings in “Nature Medicine.”


Radiologists typically look through hundreds of 2D images within a single CT scan, and cancer can be minuscule and hard to spot. We created a model that can not only generate the overall lung cancer malignancy prediction (viewed in 3D volume) but also identify subtle malignant tissue in the lungs (lung nodules). The model can also factor in information from previous scans, which is useful in predicting lung cancer risk because the growth rate of suspicious lung nodules can be indicative of malignancy.


This is a high-level view of the modeling framework. For each patient, the AI uses the current CT scan and, if available, a previous CT scan as input. The model outputs an overall malignancy prediction.

In our research, we leveraged 45,856 de-identified chest CT screening cases (some in which cancer was found) from the NIH's National Lung Screening Trial research dataset and from Northwestern University. We validated the results with a second dataset and also compared our results against six U.S. board-certified radiologists.

When using a single CT scan for diagnosis, our model performed on par or better than the six radiologists. We detected five percent more cancer cases while reducing false-positive exams by more than 11 percent compared to unassisted radiologists in our study. Our approach achieved an AUC of 94.4 percent (AUC is a common metric used in machine learning that provides an aggregate measure of classification performance).


For an asymptomatic patient with no history of cancer, the AI system reviewed and detected potential lung cancer that had been previously called normal.

Next steps

Despite the value of lung cancer screenings, only 2-4 percent of eligible patients in the U.S. are screened today. This work demonstrates the potential for AI to increase both accuracy and consistency, which could help accelerate adoption of lung cancer screening worldwide.

These initial results are encouraging, but further studies will assess the impact and utility in clinical practice. We’re collaborating with the Google Cloud Healthcare and Life Sciences team to serve this model through the Cloud Healthcare API and are in early conversations with partners around the world to continue additional clinical validation research and deployment. If you’re a research institution or hospital system that is interested in collaborating in future research, please fill out this form.


Tech Exchange students reflect on their future careers

What if this was your day? At 10 a.m., explore the impact of cybersecurity on society. Over lunch, chat with a famous YouTuber. Wrap up the day with a tour of the Google X offices. Then, head home to work on a machine intelligence group project.

Sound out of the ordinary? For the 65 students participating in Google’s Tech Exchange program, this has been their reality over the last nine months.

Tech Exchange, a student exchange program between Google and 10 Historically Black Colleges and Universities (HBCUs) and Hispanic-Serving Institutions (HSIs), hosts students at Google’s Mountain View campus and engages them in a variety of applied computer science courses. The curriculum includes machine learning, product management, computational theory and database systems, all co-taught by HBCU/HSI faculty and Google engineers.

Tech Exchange is one way Google makes long-term investments in education in order to increase pathways to tech for underrepresented groups. We caught up with four students to learn about their experiences, hear about their summer plans and understand what they’ll bring back to their home university campuses.

Taylor Roper

Howard University

Summer Plans: BOLD Internship with the Research and Machine Intelligence team at Google

What I loved most: “If I could take any of my Tech Exchange classes back to Howard, it would be Product Management. This was such an amazing class and a great introduction into what it takes to be a product manager. The main instructors were Googlers who are currently product managers. Throughout the semester, we learned how design, engineering and all other fields interpret the role of a product manager. Being able to ask experts questions was very insightful and helpful.”

Vensan Cabardo

New Mexico State University

Summer Plans: Google’s Engineering Practicum Program

Finding confidence and comrades: “As much as I love my friends back home, none of them are computer science majors, and any discussion on my part about computer science would fall on deaf ears. That changed when I came to Tech Exchange. I found people who love computing and talking about computing as much as I do. As you do these things and as you travel through life, there may be a voice in your head telling you that you made it this far on sheer luck alone, that you don’t belong here, or that your accomplishments aren’t that great. That’s the imposter syndrome talking. That voice is wrong. Internalize your success, internalize your achievements, and recognize that they are the result of your hard work, not just good luck.”

Pedro Luis Rivera Gómez

University of Puerto Rico at Mayagüez

Summer Plans: Software Engineering Internship at Google

The value of a network: “A lesson that I learned during the Tech Exchange program that has helped a lot is to establish a network and promote peer-collaboration. We all have our strengths and weaknesses, and when we are working on a project and do not have much experience, you can get stuck on a particular task. Having a network increases the productivity of the whole group. When one member gets stuck, they can ask a peer for advice.”


Garrett Tolbert

Florida A&M University

Summer Plans: Applying to be a GEM Fellow

Ask all the questions: “One thing I will never forget from Tech Exchange is that asking questions goes beyond the classroom. Everyone in this program has been so accessible and helpful with accommodating me for things I never thought were possible. Being in this program has shown me that if you don’t know, just ask! Research the different paths you can take within tech, and see which paths interest you. Then, find people who are in those fields and connect with them.”


Dark mode available for Calendar and Keep on Android

What’s changing 

Google Calendar and Keep will now support Dark mode on Android.

 

Dark mode for Google Calendar. 

 

Dark mode for Google Keep. 

Who’s impacted 

End users.

Why you’d use it 

Dark mode is a popular feature that’s frequently requested by Calendar and Keep users. It creates a better viewing experience in low-light conditions by reducing brightness.

How to get started 


  • Admins: No action required. 
  • End users: 
    • Calendar 
      • Enable Dark mode by going to Settings > General > Theme. 
    • Keep 
      • Enable Dark mode by going to Settings > Enable Dark Mode.

Additional details 


Both the Calendar and Keep apps need to be updated to the latest version to see this feature. 

Calendar 
Dark mode for Calendar will be supported on devices with Android N+ (i.e. Nougat and more recent releases).

Android Q users can set their OS to Dark mode, which means Calendar and all other apps will be in Dark mode by default. If users do not have their OS set to Dark mode, they can enable Dark mode in Calendar’s settings (see above).

For pre-Android-Q devices, users will be able to configure Calendar to go into Dark Mode when the device enters battery saving mode.

Keep 
Dark mode for Keep will be supported on devices with Android L-P. For these devices, Dark mode can be enabled from Keep’s settings (see above).

For Android Q devices, Dark mode will be on by default if the OS is set to Dark mode. It can also be enabled in Keep’s settings (see above).

Availability 

Rollout details 

  • Calendar: 
    • Gradual rollout (up to 15 days for feature visibility) starting on May 16, 2019. 
  • Keep: 
    • Gradual rollout (up to 15 days for feature visibility) starting on May 20, 2019. 

G Suite editions 

  • Available to all G Suite editions. 
On/off by default? 

  • Calendar: 
    • For Android N - P, Dark mode will be OFF by default and can be enabled in Calendar settings (see above). 
    • For Android Q, this feature will be ON by default when the OS is set to Dark mode or can be enabled in Calendar settings (see above). 
  • Keep: 
    • For Android L - P, this feature will be OFF by default and can be enabled in Keep settings (see above). 
    • For Android Q, this feature will be ON by default when the OS is set to Dark mode or can be enabled in Keep settings (see above).


Google.org and FII collaborate to empower low-income families

Since 2015, the Family Independence Initiative (FII) has used over $2.5 million in Google.org grants to empower families to escape poverty. Their technology platform UpTogether helps low-income families access small cash investments, connect with each other and share solutions—like how to find childcare or strategies to pay off debt. With the grants last year, FII improved their technology platform and expanded their sites to more cities including Austin and Chicago.

This year, the Family Independence Initiative is embarking on a mission of collaborative research to shift what’s possible for low-income families. And today, we’re expanding our investment in FII with a $1 million grant to support a pilot project called Trust and Invest Collaborative, which aims to guide policy decisions that will increase economic mobility for low-income families and their children. The grant will help FII, the City of Boston and the Department of Transitional Assistance examine learnings and successes from FII, and replicate them in future government services offered to low-income families.


In addition to our original grants to FII, we offered Google’s technical expertise. Over the last six months, six Google.org Fellows have been working full-time with FII to use their engineering and user experience expertise to help improve UpTogether. They used machine learning and natural language processing to make UpTogether’s data more useful in determining what leads to family success and to make it easier for families to share their own solutions with each other. These improvements in data quality will support the research for the pilot in Boston and Cambridge and help FII continue to share learnings from families’ own voices with future collaborators.

New Street View Cars to Start the Ultimate Kiwi Roadtrip





This week three new Street View vehicles will hit the streets in New Zealand, starting with the South Island, to gather updated, higher quality 360-degree imagery.


It’s been nine years since we’ve updated our camera technology, and just as smartphone cameras have dramatically evolved since then, we now have access to improved 360-degree camera technology. These new cutting-edge cameras fitted to our Street View cars will allow us to capture higher quality, sharper imagery, even in low-light conditions, across New Zealand.


Google Maps’ Street View - a global collection of 360 degree imagery - is used millions of times every day by people looking to explore the world, to preview places before they go, or experience places virtually they might never have the chance to visit in person.



Keep your eyes peeled and you may see one of the new cars in your neighbourhood in the coming months. To see where they’ve been and where they’re headed next, check out this link. Imagery from their journeys will be made available via Street View later this year.







