Announcing v12 of the Google Ads API

Today, we’re announcing the v12 release of the Google Ads API. To use some v12 features, you’ll need to upgrade your client libraries and client code. The updated client libraries and code examples will be published next week.

Where can I learn more?

If you have any questions or need additional help, contact us via the forum.

Natural Language Assessment: A New Framework to Promote Education

Whether it's a professional honing their skills or a child learning to read, coaches and educators play a key role in assessing a learner's answer to a question in a given context and guiding them toward a goal. These interactions have unique characteristics that set them apart from other forms of dialogue, yet this kind of feedback is not available when learners practice alone at home. In the field of natural language processing, this type of capability has received little attention and remains technologically challenging. We set out to explore how machine learning can be used to assess answers in a way that facilitates learning.

In this blog, we introduce an important natural language understanding (NLU) capability called Natural Language Assessment (NLA), and discuss how it can be helpful in the context of education. While typical NLU tasks focus on the user's intent, NLA allows for the assessment of an answer from multiple perspectives. In situations where a user wants to know how good their answer is, NLA can offer an analysis of how close the answer is to what is expected. In situations where there may not be a “correct” answer, NLA can offer subtle insights that include topicality, relevance, verbosity, and beyond. We formulate the scope of NLA, present a practical model for carrying out topicality NLA, and showcase how NLA has been used to help job seekers practice answering interview questions with Google's new interview prep tool, Interview Warmup.


Overview of Natural Language Assessment (NLA)

The goal of NLA is to evaluate the user's answer against a set of expectations. Consider the following components for an NLA system interacting with students (sketched as a data structure after the list):

  • A question presented to the student
  • Expectations that define what we expect to find in the answer (e.g., a concrete textual answer, a set of topics we expect the answer to cover, conciseness)
  • An answer provided by the student
  • An assessment output (e.g., correctness, missing information, too specific or general, stylistic feedback, pronunciation, etc.)
  • [Optional] A context (e.g., a chapter in a book or an article)
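To make the framework concrete, here is a minimal sketch of these components as Kotlin data classes. The type and field names are illustrative assumptions, not part of any published API.

```kotlin
// Illustrative only: names and fields are assumptions, not a published Google API.
data class NlaInput(
    val question: String,            // the question presented to the student
    val expectations: List<String>,  // e.g., a reference answer or a set of topics
    val answer: String,              // the student's answer
    val context: String? = null      // optional, e.g., a chapter in a book
)

data class NlaAssessment(
    val correctness: Double?,        // null when no single correct answer exists
    val topicsCovered: Set<String>,  // topicality output
    val feedback: List<String>       // e.g., "too general", "expresses uncertainty"
)
```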

With NLA, both the expectations about the answer and the assessment of the answer can be very broad. This enables teacher-student interactions that are more expressive and subtle. Here are two examples:

  1. A question with a concrete correct answer: Even in situations where there is a clear correct answer, it can be helpful to assess the answer more subtly than simply correct or incorrect. Consider the following:

    Context: Harry Potter and the Philosopher's Stone
    Question: “What is Hogwarts?”
    Expectation: “Hogwarts is a school of Witchcraft and Wizardry” [expectation is given as text]
    Answer: “I am not exactly sure, but I think it is a school.”

    The answer may be missing salient details but labeling it as incorrect wouldn’t be entirely true or useful to a user. NLA can offer a more subtle understanding by, for example, identifying that the student’s answer is too general, and also that the student is uncertain.

    Illustration of the NLA process from input question, answer and expectation to assessment output

    This kind of subtle assessment, along with noting the uncertainty the student expressed, can be important in helping students build skills in conversational settings.

  2. Topicality expectations: There are many situations in which a concrete answer is not expected. For example, if a student is asked an opinion question, there is no concrete textual expectation. Instead, there's an expectation of relevance and opinionation, and perhaps some level of succinctness and fluency. Consider the following interview practice setup:

    Question: “Tell me a little about yourself?”
    Expectations: { “Education”, “Experience”, “Interests” } (a set of topics)
    Answer: “Let’s see. I grew up in the Salinas valley in California and went to Stanford where I majored in economics but then got excited about technology so next I ….”

    In this case, a useful assessment output would map the user’s answer to a subset of the topics covered, possibly along with a markup of which parts of the text relate to which topic. This can be challenging from an NLP perspective as answers can be long, topics can be mixed, and each topic on its own can be multi-faceted.


A Topicality NLA Model

In principle, topicality NLA is a standard multi-class task for which one can readily train a classifier using standard techniques. However, training data for such scenarios is scarce, and it would be costly and time-consuming to collect for each question and topic. Our solution is to break each topic into granular components that can be identified using large language models (LLMs) with straightforward, generic tuning.

We map each topic to a list of underlying questions, and define a sentence as covering a topic if it contains an answer to one of that topic's underlying questions. For the topic “Experience” we might choose underlying questions such as:

  • Where did you work?
  • What did you study?

While for the topic “Interests” we might choose underlying questions such as:

  • What are you interested in?
  • What do you enjoy doing?

These underlying questions are designed through an iterative manual process. Importantly, because these questions are sufficiently granular, current language models (see details below) can capture their semantics. This allows us to offer a zero-shot setting for the NLA topicality task: once the model is trained (more on the model below), it is easy to add new questions and new topics, or to adapt existing topics by modifying their underlying content expectations, without the need to collect topic-specific data. See below the model's predictions for the sentence “I’ve worked in retail for 3 years” for the two topics described above:

A diagram of how the model uses underlying questions to predict the topic most likely to be covered by the user’s answer.

Since an underlying question for the topic “Experience” was matched, the sentence would be classified as “Experience”.
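As a sketch of this zero-shot setup, the topic-to-question mapping and the matching rule might look as follows in Kotlin. The `score` function stands in for the trained compatibility model described later; its signature and the 0.5 threshold are assumptions for illustration.

```kotlin
// Topics mapped to their underlying questions (taken from the lists above).
val underlyingQuestions = mapOf(
    "Experience" to listOf("Where did you work?", "What did you study?"),
    "Interests" to listOf("What are you interested in?", "What do you enjoy doing?")
)

// A sentence covers a topic if it answers any of the topic's underlying
// questions. `score` is a stand-in for the trained <question, answer>
// compatibility model; the threshold is an assumed value.
fun topicsCovered(
    sentence: String,
    score: (question: String, answer: String) -> Double,
    threshold: Double = 0.5
): Set<String> =
    underlyingQuestions
        .filterValues { questions -> questions.any { q -> score(q, sentence) >= threshold } }
        .keys

// With a scorer that matches "Where did you work?" against the sentence,
// topicsCovered("I've worked in retail for 3 years", scorer) == setOf("Experience").
```

Adding a new topic is then just a new map entry, with no topic-specific training data, which is what makes the setup zero-shot.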


Application: Helping Job Seekers Prepare for Interviews

Interview Warmup is a new tool developed in collaboration with job seekers to help them prepare for interviews in fast-growing fields of employment such as IT Support and UX Design. It allows job seekers to practice answering questions selected by industry experts and to become more confident and comfortable with interviewing. Working with job seekers to understand their challenges in preparing for interviews, and how an interview practice tool could be most useful, inspired our research and the application of topicality NLA.

We build the topicality NLA model once for all questions and topics, as follows: we train an encoder-only T5 model (the EncT5 architecture) with 350 million parameters on question-answer data to predict the compatibility of an <underlying question, answer> pair. We rely on data from SQuAD 2.0, which we processed to produce <question, answer, label> triplets.
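The post doesn't spell out how the triplets were constructed. One plausible sketch, under assumptions: treat each question-answer pair as a positive, and pair the question with an answer to a different question as a negative. The record shapes and the negative-sampling strategy below are illustrative, not the published pipeline.

```kotlin
// Assumed record shapes; SQuAD 2.0 itself ships as JSON with more fields.
data class QaRecord(val question: String, val answer: String)
data class Triplet(val question: String, val answer: String, val compatible: Boolean)

// One positive per record, plus one assumed negative built by pairing the
// question with the answer to a different question.
fun toTriplets(records: List<QaRecord>): List<Triplet> =
    records.flatMapIndexed { i, rec ->
        val mismatched = records[(i + 1) % records.size].answer
        listOf(
            Triplet(rec.question, rec.answer, compatible = true),
            Triplet(rec.question, mismatched, compatible = false)
        )
    }
```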

In the Interview Warmup tool, users can switch between talking points to see which ones were detected in their answer.

The tool does not grade or judge answers. Instead, it enables users to practice and identify ways to improve on their own. After a user replies to an interview question, their answer is parsed sentence by sentence with the topicality NLA model. They can then switch between different talking points to see which ones were detected in their answer. We know that there are many potential pitfalls in signaling to a user that their response is “good”, especially as we only detect a limited set of topics. Instead, we keep the control in the user’s hands and only use ML to help users make their own discoveries about how to improve.
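Continuing the earlier sketch, per-answer processing could split the reply into sentences and collect the detected talking points. This builds on the `topicsCovered` function above; the regex-based sentence splitter is a naive stand-in for real sentence segmentation.

```kotlin
// Naive per-answer processing: split into sentences, classify each one, and
// union the detected talking points. The splitter is an illustrative assumption.
fun talkingPointsDetected(
    answer: String,
    score: (question: String, answer: String) -> Double
): Set<String> =
    answer.split(Regex("(?<=[.!?])\\s+"))          // crude sentence split
        .flatMap { sentence -> topicsCovered(sentence, score) }
        .toSet()
```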

So far, the tool has had great results helping job seekers around the world, including in the US, and we have recently expanded it to Africa. We plan to continue working with job seekers to iterate and make the tool even more helpful to the millions of people searching for new jobs.

A short film showing how Interview Warmup and its NLA capabilities were developed in collaboration with job seekers.

Conclusion

Natural Language Assessment (NLA) is a technologically challenging and interesting research area. It paves the way for new conversational applications that promote learning by enabling the nuanced assessment and analysis of answers from multiple perspectives. Working together with communities, from job seekers and businesses to classroom teachers and students, we can identify situations where NLA has the potential to help people learn, engage, and develop skills across an array of subjects, and we can build applications in a responsible way that empower users to assess their own abilities and discover ways to improve.


Acknowledgements

This work is made possible through a collaboration spanning several teams across Google. We’d like to acknowledge contributions from Google Research Israel, Google Creative Lab, and Grow with Google teams among others.

Source: Google AI Blog


Better Device Compatibility with CameraX

Posted by The Android Team

CameraX is an Android Jetpack library that makes it easy to incorporate camera functionality directly in your Android app. Because camera behavior varies across the Android ecosystem, we focus heavily on device compatibility out-of-the-box, so you can focus on what makes your app unique.

In this post, we’ll look at three ways CameraX makes developers’ lives easier when it comes to device compatibility. First, we’ll take a peek into our CameraX Test Lab where we test over 150 physical phones every day. Second, we’ll look at Quirks, the mechanism CameraX uses to automatically handle device inconsistencies. Third, we’ll discuss the ways CameraX makes it easier to develop apps for foldable phones.


CameraX Test Lab

(Left) A single rack in our CameraX Test Lab. Each test enclosure contains two identical Android phones for testing front and back cameras. (Right) A GIF showing the inside of a test enclosure, with a rotating phone mount (for testing portrait and landscape orientations) and a high-resolution test chart (not pictured).

We built the CameraX Test Lab to ensure CameraX works on the Android devices most people have in their pockets. The Test Lab opened in 2019 with 52 phone models. Today, the Test Lab has 150 phone models. We prioritize devices with the most daily active users over the past 28 days (28DAUs) and devices that leverage a diverse range of systems on a chip (SoCs). The Test Lab currently covers over 750 million 28DAUs. We also test many different Android versions, going back to Android 5.1 (Lollipop).

To generate reliable test results, each phone model has its own test enclosure to control for light and other environmental factors. Each enclosure contains two phones of the same model to simplify testing the front and back cameras. On the opposite side of the test enclosure from the phones, there’s a high-resolution test chart. This chart has many industry-standard tests for camera attributes like color correctness, resolution, sharpness, and dynamic range. The chart also has some specific elements for functional tests like face detection.

When you adopt CameraX in your app, you get the assurance of this continuous testing across many devices and API levels. Additionally, we’re continuously making improvements to the Test Lab, including adding new phones based on market trends to ensure that the majority of your users are well represented. See our current test device list for the latest inventory in our Test Lab.

Quirks

Google provides a Camera Image Test Suite so that OEMs' cameras meet a baseline of consistency. Still, when dealing with the wide range of devices that run Android, there can be differences in the end-user camera experience. CameraX includes an abstraction layer, called Quirks, that removes these variations in behavior so that CameraX behaves consistently across all devices with no effort from app developers.

We find these quirks based on our own manual testing, the Test Lab’s automatic testing, and bug reports filed in our public CameraX issue tracker. As of today, CameraX has over 30 Quirks that automatically fix behavior inconsistencies for developers. Here are a few examples:

  • OnePixelShiftQuirk: Some phones shift a column of pixels when converting YUV data to RGB. CameraX automatically corrects for this on those devices.
  • ExtensionDisableQuirk: For phones that don’t support extensions or have broken behavior with extensions, CameraX disables certain extensions.
  • CameraUseInconsistentTimebaseQuirk: Some phones do not properly timestamp video and audio. CameraX fixes the timestamps so that the video and audio align properly.

These are just a few examples of how CameraX automatically handles quirky device behavior. We will continue to add more corrections as we find them, so app developers won’t have to deal with these one-offs on their own. If you find inconsistent behavior on a device you’re testing, you can file an issue in the CameraX component detailing the behavior and the device it’s happening on.

Foldable phones

Foldables continue to be the fastest growing smartphone form factor. Their flexibility in screen size adds complexity to camera development. Here are a few ways that CameraX simplifies the development of camera apps on foldables.

CameraX’s Preview use case handles differences between the aspect ratio of the camera and the aspect ratio of the screen. With traditional phone and tablet form factors, this difference should be small because Section 7.5.5 of the Android Compatibility Definition Document requires that the “long dimension of the camera aligns with the screen’s long dimension.” However, with foldable devices the screen aspect ratio can change, so this relationship might not always hold. With CameraX you can always preserve aspect ratio by filling the PreviewView (which may crop the preview image) or fitting the image into the PreviewView (which may result in letterboxing or pillarboxing). Set PreviewView.ScaleType to specify which method to use.
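For example, selecting between the two behaviors is a one-line setting on PreviewView. This uses the real androidx.camera.view API; only the wrapper function is illustrative.

```kotlin
import androidx.camera.view.PreviewView

fun configurePreviewScaling(previewView: PreviewView) {
    // Fill the view, cropping the preview if the aspect ratios differ...
    previewView.scaleType = PreviewView.ScaleType.FILL_CENTER

    // ...or fit the entire preview inside the view, accepting letterboxing
    // or pillarboxing when the ratios differ:
    // previewView.scaleType = PreviewView.ScaleType.FIT_CENTER
}
```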

The increase in foldable devices also increases the possibility that your app may be used in a multi-window environment. CameraX is set up for multi-window support out-of-the-box. CameraX handles all aspects of lifecycle management for you, including the multi-window case where other apps can take priority access of singleton resources, such as the microphone or camera. This means no additional effort is required from app developers when using CameraX in a multi-window environment.
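As a sketch of how little lifecycle code this requires, a minimal preview setup binds the use case to a lifecycle owner and lets CameraX handle open, close, and multi-window interruptions. These are real CameraX APIs; the wrapper function is illustrative.

```kotlin
import android.content.Context
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner

fun startCamera(context: Context, owner: LifecycleOwner, previewView: PreviewView) {
    val providerFuture = ProcessCameraProvider.getInstance(context)
    providerFuture.addListener({
        val provider = providerFuture.get()
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }
        // Binding to the lifecycle is the only camera management needed;
        // CameraX releases and re-acquires the camera as the lifecycle and
        // multi-window focus change.
        provider.bindToLifecycle(owner, CameraSelector.DEFAULT_BACK_CAMERA, preview)
    }, ContextCompat.getMainExecutor(context))
}
```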

We’re always looking for more ways to improve CameraX to make it even easier to use. With respect to foldables, for example, we’re exploring ways to let developers call setTargetResolution() without having to take into account the different configurations a foldable device can be in. Keep an eye on this blog and our CameraX release notes for updates on new features!

Getting started with CameraX

We have a number of resources to help you get started with CameraX. The best starting place is our CameraX codelab. If you want to dig a bit deeper with CameraX, check out our camera code samples, ranging from a basic app to more advanced features like camera extensions. For an overview of everything CameraX has to offer, see our CameraX documentation. If you have any questions, feel free to reach out to us on our CameraX discussion group.

A new genome sequencing tool powered with our technology

Genome sequencing provides a more complete description of cells and organisms, allowing scientists to uncover serious genetic conditions such as an elevated risk for breast cancer or pulmonary arterial hypertension. While genomics research has the potential to save lives and preserve people's quality of life, it's incredibly challenging work.

Back in January, we announced a partnership with PacBio, a developer of genome sequencing instruments, to further advance genomic technologies. Today, PacBio is introducing the Revio sequencing system, an instrument that runs on our deep learning technology, DeepConsensus. With DeepConsensus built right into Revio, researchers can quickly and accurately identify genetic variants that cause diseases.

How Google Health’s technology works

Genome sequencing requires observing individual DNA molecules amidst a complex and noisy background. To address the problem, Google Health worked to adapt the Transformer, one of our most influential deep learning architectures, originally developed in 2017 to understand language. We then applied it to PacBio's data, which uses fluorescence to encode DNA sequences. The result was DeepConsensus.

Last year, we demonstrated that DeepConsensus was capable of reducing sequencing errors by 42%, resulting in better genome assemblies and more accurate identification of genetic variants. Although this was a promising research demonstration, we knew that DeepConsensus could have the greatest impact if it was running directly on PacBio’s sequencing instrument. Over the last year, we’ve worked closely with PacBio to speed up DeepConsensus by over 500x from its initial release. We’ve also further improved its accuracy to reduce errors by 59%.

Combining our AI methods and genomics work with PacBio’s instruments and expertise, we were able to build better methods for the research community. Our partnership with PacBio doesn’t stop with Revio. There’s limitless potential to make an impact on the research community and improve healthcare and access for people around the world.

Get to know Google’s Coding Competitions

Posted by Julia DeLorenzo, Program Manager, Coding Competitions

Google’s Coding Competitions provide interactive rounds throughout the year to help you grow your skills, challenge yourself, and connect with developers from around the globe.

Google has three flagship Coding Competitions: Code Jam, Hash Code, and Kick Start. Each competition is unique and offers different types of challenges from algorithmic puzzles to team-based optimization problems. Our Coding Competitions are designed and tested by a team of Google engineers and program managers who craft new and engaging problems for users to tackle.

Google’s Coding Competitions have been around for quite a while (two decades!), and their passionate community of contributors and fans around the world makes each new season even more exciting than the last.

Hear from two program managers on the Coding Competitions team:

Emily Miller, Google’s Coding Competitions Lead Program Manager

Emily Miller Headshot

“My first year working on Coding Competitions was 2013 with Code Jam. The Finals were hosted in London that year — video proof — and I've been hooked ever since! It's been incredibly rewarding and a whole lot of fun to interact with coders from around the world over the years.

I find it so cool that even after 20 years of Code Jam, the space of online competitions continues to evolve and grow. To me, it's a testament to the strength of the global online community and the value that products like Code Jam, Hash Code, and Kick Start provide developers to connect and learn from one another. Plus, the problem statements are so creative and fun!

My advice to future participants is: jump in and try it out! We're all here for something unique to us, so find out what that is for you and pursue it. Hitting roadblocks along the way is likely, so don't get discouraged. Remember there's a global community of coders out there waiting to help you!”


Julia DeLorenzo, Google’s Coding Competitions Program Manager

Julia DeLorenzo Headshot

“My first introduction to Google’s Coding Competitions was in 2016, when I had the chance to volunteer at the Code Jam World Finals in New York City. The excitement and energy of that Finals stuck with me – four years later, in 2020, an opportunity to work on Coding Competitions full time came up and I jumped at the chance!

I love that Google’s Coding Competitions offer different ways to participate. No matter where you are in your competitive programming journey, there’s a Competition for you. People who are new to competitive programming can get familiar with the space by participating in Kick Start; those who want to participate with friends or teammates can try Hash Code; and folks looking for a challenge should try Code Jam. Some people participate in all three! The problems you’ll see are always different and creative, so you’re sure to have fun along the way.

As cliché as it sounds, my advice to future participants is that failure is an opportunity for growth. Don’t let imposter syndrome or fear of failure stand in the way of trying something new. If you come across a problem you can’t solve – that’s great! It’s an opportunity to challenge yourself and try a different approach.”


Stay Tuned!

Over the next few weeks, keep an eye on the blog – we’ll be spotlighting each of Google’s Coding Competitions in a series of blog posts to help you understand the ins and outs of each competition.

Supporting HBCU students on the path to tech careers

Last weekend I was welcomed back to my “home by the sea” — Hampton University, located on Chesapeake Bay — as the co-grand marshal for this year’s homecoming festivities along with fellow alumna Dr. Dietra Trent, White House Director of Historically Black Colleges and Universities (HBCU) initiatives. As a proud Hampton alumna and Google’s Chief Diversity Officer, it gives me great pride to continue Google’s long-standing partnership with the HBCU community.

I’ve seen firsthand the impact HBCU graduates are having on the next generation of leaders and thinkers across today’s industries, including tech. A recent United Negro College Fund (UNCF) study found that despite making up only 3% of the nation’s colleges and universities, HBCUs produce almost 20% of all African American graduates and 25% of African American graduates with science, technology, engineering and math (STEM) degrees.

A woman in sunglasses, wearing a blue blazer and white shirt, stands beside a black sports car with a white sign in the window that reads “Hampton Grand Marshall.”

Melonie Parker, Google’s Chief Diversity Officer at Hampton’s homecoming.

At Google, we remain steadfast in our investment and support for HBCUs, and we’ve partnered closely with them to build pathways to tech. One way we’ve done that is by welcoming students from 15 HBCUs for full-time roles and internships in the last year alone, and we've expanded our recruiting efforts to more than 900 schools in the last decade. We’ve also invested in programming to further opportunities and pathways for HBCU and Hispanic-Serving Institutions (HSI) students, including:

  • Tech Exchange, a semester-long immersive program for select HBCU and HSI students, has quadrupled in size and expanded to serve students from 16 HBCUs and Hispanic-Serving Institutions since launching in 2017.
  • Our Pathways to Tech initiative was designed to build equity for HBCU computing education, help job seekers find tech roles, and ensure that Black employees have growth opportunities and feel included at work.
  • The Grow with Google HBCU Career Readiness Program, a partnership with the Thurgood Marshall College Fund, brings digital skills training into the career centers of HBCUs. The program recently expanded to 20 HBCUs, and aims to help 20,000 students learn digital skills by the end of the current school year.
  • Finally, our Google in Residence (GIR) program gives experienced Google software engineers the chance to teach introductory computer science classes, which have reached more than 8,000 HBCU and HSI students since 2013. Two of our GIR students actually became instructors this year, and many have gone on to internships in our Student Training in Engineering Program and full-time software engineering roles at Google.

We also recognize the unique needs of students, faculty and staff within each of these historic institutions. I meet regularly with the HBCU Presidents’ Council, which advises on creating and executing meaningful programming that meets the needs of HBCU students. In 2021, we provided a $50 million grant to 10 HBCUs to support scholarships, invest in technical infrastructure for in-class and remote learning, and develop curricula and career support programs.

To build on this, Monday I was honored to announce a $5 million Google.org grant to Spelman College’s Center for Minority Women in STEM. A team of Google.org Fellows will partner with Spelman to build the first database that will conduct and publicize research on the experiences of women from historically underrepresented groups in STEM. The findings will be used to help empower and elevate women in STEM fields. This week we also announced $300,000 in funding for 18 HBCU and HSI partners to support faculty and students in tech majors. We plan to distribute this funding annually to enable growth and retention in computer science departments.

Finally, supporting our HBCU and HSI partners means showing up and continuing to shine a light on these historic and critical institutions:

  • We were proud to sponsor the National HBCU Week Conference organized by the White House Initiative on Advancing Educational Equity, Excellence, and Economic Opportunity through Historically Black Colleges and Universities. The event brought together more than 1,500 HBCU students, faculty and community leaders from across the U.S. for the first time since 2019. We hosted panels and workshops on career opportunities, resume building and personal brands.
  • Just last month we were the halftime sponsor at the inaugural HBCU New York Football Classic. More than 35,000 fans gathered in the stands for the September 17 game between Morehouse College and Howard University as part of HBCU Week. Our sponsorship included scholarships to 105 HBCU students and a partnership with HBCU Tools for School, a nonprofit that provides access to tools, resources and networks critical for academic success.
  • Finally, we’re working with the NBA Foundation on an upcoming promotion where a portion of proceeds from Pixel sales on the Google Store will go to HBCUs.

For more than a century, HBCUs have been a driving force in the cultivation of academic excellence and professional achievement within the Black community. We will continue to do our part to support these institutions, and their students, as we work to make tech more inclusive and representative at all levels of the workforce.

7 Google Photos tips for perfecting your pics on Pixel 7

Our new Pixel 7 and Pixel 7 Pro have incredible cameras for capturing your memories, but snapping a photo is just the beginning. We all want to get our photos looking just right to share them and reminisce. Thanks to advances in machine learning, Google Photos is packed with powerful editing features that are a breeze to use.

Here are seven tips to get your photos — both new ones taken on a Pixel 7 and old ones from past years (and phones) — looking picture-perfect with Google Photos on Pixel 7.

1. Fix blurry shots with Photo Unblur

Bring your blurry photos back into focus with just a few taps using Photo Unblur, a brand-new feature only on Pixel 7 and Pixel 7 Pro. Photo Unblur removes blur and visual noise so you can relive the moment as clearly as you remember it. Best of all, it works on pictures in your library taken with a different phone or camera, and even on scanned images.

Before and after animation of a picture with Photo Unblur applied to it.

2. Get rid of distractions with Magic Eraser

Magic Eraser, which was introduced last year, can detect distractions in your photos — like photobombers in the background, power lines and power poles. Just a few taps to remove them and, poof, gone. You can also circle or brush what you want to remove. No need to be precise — Magic Eraser will figure out what you’re trying to remove.

Before and after animation of a picture with background distractions removed using Magic Eraser

Bonus Magic Eraser tip: Don’t want to remove a distraction entirely, but want it to blend in a bit more? Use Camouflage in Magic Eraser to change the color of distracting objects in your photo. In just a few taps, the object’s colors and shading blend in naturally with the rest of the photo.

3. Make your subject stand out with Portrait blur

Portrait mode in the Pixel Camera can really make your subject shine. But what if you forgot to use it when snapping a picture or you want to edit a picture from the past? With Portrait blur, Google Photos can intelligently blur the background on photos of people — plus pets, food, flowers and more — post-snap.

Before and after animation of a picture of a butterfly on a flower with Portrait blur added to it.

4. Improve the lighting on faces with Portrait light

A good portrait can be hard to capture, especially if the lighting isn’t quite right or you took the photo with an older phone or camera. Use Portrait light to easily improve the lighting on faces, and you can even adjust the light position and brightness to customize your look.

Before and after animation of a selfie picture with Portrait light applied to it.

5. Bring balance to your photos with the HDR effect

If you’ve got older pictures with a dark foreground and bright background (or vice versa), it can be hard to make out all the details in the shot. Enter the HDR effect to help balance things out — enhancing the brightness and contrast across the image so you can soak in every detail.

Before and after animation of a picture of a mountain with the HDR edit applied to it.

6. Change up the mood and tone of your sunset pics with sky suggestions

Chances are you’ve got quite a few sunset photos in your library that didn’t quite capture the beauty of what you saw in the moment. How do you revive it and make it stand out from all the rest? Use sky suggestions to put your own creative twist on your golden hour images. Select from several palettes that adjust the color and contrast of the sky to change up the mood and tone of your pic to get it ready to share.

Before and after animation of a picture of a sunset with a sky edit applied to it.

7. Use the collage editor to make shareable creations

Make creative, shareable collages with the new collage editor. Pick up to six photos and select from more than 50 designs available to Pixel users. You can easily rearrange the layout with simple drag-and-drop controls and even edit each photo in the collage individually to get just the right look.

Animation showing various styles available in the collage editor in Google Photos.

Get creative and mix and match all of these features to create a stunning image that’s ready to share. Remove a photobomber in the background, then combine Photo Unblur, Portrait light and Portrait blur to create a fresh image that gives your memory new life.

Before and after animation of a picture with Photo Unblur, Magic Eraser, Portrait light and Portrait blur applied to it.

With Pixel 7 and Pixel 7 Pro, all the tools you need to perfect your photos or put a creative spin on them are right in Google Photos. Get to editing and share your best creations on social with the hashtag #FixedOnPixel.

Kwentuhan: Sharing our stories this Filipino American History Month

“Kwentuhan” roughly translates to “sharing stories” in English. For Filipino American History Month, or Kapamilya Month as our Filipino Googler Network refers to it, we sat down with Paolo Malabuyo, Director of User Experience in Google Maps and executive sponsor of the Filipino Googler Network, to learn about his story.

We heard that as a child growing up in the Philippines, you were somewhat of a Lego competition legend. How did this kickstart your interest in working in UX and design?

I was the youngest of four and I always felt like I was in the shadow of my older, smarter, more athletic and more accomplished siblings. I don’t recall having many ideas about what I wanted to be when I grew up, until my grandmother immigrated to the United States in the 1960s and started sending small handfuls of Lego pieces through the mail.

This started my fascination with Lego and culminated in my participation in Lego competitions across the Philippines. I still think that the greatest job in the world is designing Lego sets.

I immigrated to the States right before my 12th birthday and picked up drawing, reading, and crafting. This developed into a real affinity for art. I ended up getting a BFA in art and minors in communication design, art history and Chinese studies. I also took basic programming classes and learned web design by emulating early websites.

Like Lego pieces, my early days of creative building, combined with my art education and CS studies, were what constructed my career today. I got my first role as a graphic designer, which started my roundabout journey to becoming a UX designer, leader and educator.

Can you talk about your role at Google?

I lead the cross-disciplinary user experience teams for Geo Auto and Geo Sustainability. In Auto, we design and deliver the in-car, embedded Google Maps experiences for navigation, routing, and situational awareness so that drivers are safer and more confident, with a major focus on electric vehicles. In Sustainability, we provide platforms, insights, and solutions that help users and partners tackle climate change – the preeminent challenge for humanity today. It’s an incredibly interesting portfolio and it’s such a privilege to work with our teams and clients.

As an executive sponsor for the Filipino Googler Network, I get to work closely with other teams across Google on projects that impact the Filipino community. One example is the work happening on Maps to help business owners identify themselves, including the introduction of the Asian-owned attribute earlier this year. This attribute will help many Filipino businesses be recognized by current and future customers.

What else is Google doing in support of Filipino culture?

It’s great to see how Google’s products and services are celebrating Filipino culture and elevating our voices. This month, we ran a beautiful Google Doodle in the Philippines which celebrated the Regatta de Zamboanga, an annual sailing competition from the southern part of the country.

Six sailboats with the letters on the flags spelling GOOGLE.

Google TV is highlighting recent movies and TV shows that tell Filipino American stories and feature Filipino American lead actors in its “For you” tab.

The Google TV interface on the For you page displaying an image of Jacob Batalon with fangs promoting a TV show “Reginald the Vampire”

One of Google TV's highlighted shows for Filipino American History Month

And Google Arts & Culture has teamed up with amazing organizations to celebrate the rich culture and history of the Philippines, including the Filipinas Heritage Library, Filipino Street Art Project, and the Ballet Philippines.

Earlier, you talked about how you grew up in the Philippines. What role do you think Google has to play in supporting the local community?

Google has amazing resources that can help the people in the Philippines. We’ve done a lot to support inclusive distance learning, from a Google.org grant to help teachers, to the national deployment of G Suite for Education to 22 million learners in partnership with the Department of Education. We’re collaborating with local telecommunications companies to bring mobile access to learning tools and started a virtual training camp for Filipino YouTubers to accelerate development of quality learning content on the platform. Just last month, we announced we will be giving away Google Career Certificate scholarships to 39,000 Filipino youths.

Commitments like these are super valuable, and I’m grateful for the work to come.

Source: Google LatLong



5 Play Console updates to help you understand your app’s delivery performance

Posted by Lidia Gaymond, Product Manager, Google Play

Powered by Android App Bundles, Google Play gives all developers the benefits of modern Android distribution. As the Android ecosystem expands, it’s more important than ever to know how your app is being delivered to different devices.

Delivery insights help you better understand and analyze your app’s delivery performance and what contributes to it, and take action to optimize the experience for your users. Here are five recent Play Console updates you can use to get more insight into your delivery performance.


1. When you release your app, you’ll now see its expected app size and update size at the point of release creation, so you can determine if the size change from the previous release is acceptable.

Screenshot of Google Play Console showing expected app size and update size
Get the expected app size and update size when you create a new release.

2. If you use advanced Play delivery tools, such as Play Asset Delivery or Play Feature Delivery, detailed information about how these are shipped to users is now available on the Statistics page and in the Delivery tab in App bundle explorer. Understanding your feature module and asset pack usage can help you make better decisions about further modularization and uncover usage patterns across your users.

Screenshot of the Delivery tab in the App bundle explorer page in Play Console
Get detailed information about how your feature modules are shipped to users in the Delivery tab in the App bundle explorer page in Play Console.

Screenshot of performance metrics on the Statistics page in Play Console
See per-module performance metrics on the Statistics page in Play Console.


3. When analyzing your existing release, you can now see how many users are on it, helping you assess the “freshness” of your install base and how quickly users migrate to new releases. To improve your update rate, consider using the In-app updates API (see the sketch after this list).

Screenshot of the Release Summary showing percentage of install base on this release in Releases Overview in Play Console
Know how many users are on your existing release and how quickly users migrate to new releases.

4. For a deeper dive into your individual app version performance, you can find information about your download size per device model, most common update sizes, and install base in App bundle explorer.

Screenshot of App bundle explorer page in Play Console
Evaluate device-specific bundle download size and install base on the App bundle explorer page.

5. All of these features are also available in your App Dashboard, where you can track these measurements over time alongside other app metrics.

Screenshot of the App Dashboard in Play Console
Monitor these new delivery metrics on your App Dashboard.
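For item 3 above, a minimal sketch of checking for and starting a flexible update with the In-app updates API might look like this. These are real Play Core library classes, but the wrapper function and request-code handling are illustrative, and exact signatures can vary by library version.

```kotlin
import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.appupdate.AppUpdateOptions
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

fun checkForFlexibleUpdate(activity: Activity, requestCode: Int) {
    val manager = AppUpdateManagerFactory.create(activity)
    manager.appUpdateInfo.addOnSuccessListener { info ->
        if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
            info.isUpdateTypeAllowed(AppUpdateType.FLEXIBLE)
        ) {
            // Launches the Play-provided update flow; with a FLEXIBLE update,
            // the user keeps using the app while the update downloads.
            manager.startUpdateFlowForResult(
                info,
                activity,
                AppUpdateOptions.newBuilder(AppUpdateType.FLEXIBLE).build(),
                requestCode
            )
        }
    }
}
```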

We hope these changes will help you make more informed decisions about your app development and provide you with a detailed view of how your app is being delivered to end user devices.

