Phone Verification in Content API for Shopping

We are pleased to announce a new Content API interface developers can use to verify phone numbers for Merchant Center accounts. Phone verification is an important step in providing contact information for an account and can also help address account status issues such as PENDING_PHONE_VERIFICATION, which in some cases can enable the option for an account re-review. Prior to this release, this was only possible in the Merchant Center user interface.

Two new methods are provided for the 2-step verification process. Once verified, the phone number will appear in Accounts.AccountBusinessInformation. The new methods replace the prior approach of setting a phone number directly. We strongly recommend you use these new methods to verify the phone numbers for all Merchant Center accounts to avoid future issues. See the Phone Verification guide for examples and more detail.
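As a rough sketch of what the two-step flow looks like from a client, the snippet below builds the request bodies for the two calls. The method names (`requestphoneverification`, `verifyphonenumber`) and field names are assumptions drawn from the Content API v2.1 reference rather than from this announcement, so treat them as illustrative:

```python
# Hypothetical sketch of the 2-step phone verification flow.
# Method and field names are assumptions (Content API v2.1), not
# confirmed by this post.

def build_request_verification_body(phone_number, region_code,
                                    language, method="SMS"):
    """Body for step 1: ask for a verification code to be sent."""
    return {
        "phoneNumber": phone_number,        # e.g. "+15552175729"
        "phoneRegionCode": region_code,     # CLDR region, e.g. "US"
        "languageCode": language,           # e.g. "en-US"
        "phoneVerificationMethod": method,  # "SMS" or "PHONE_CALL"
    }

def build_verify_body(verification_id, code, method="SMS"):
    """Body for step 2: submit the code the user received."""
    return {
        "verificationId": verification_id,  # returned by step 1
        "verificationCode": code,
        "phoneVerificationMethod": method,
    }

# With google-api-python-client the calls would look roughly like:
#   resp = service.accounts().requestphoneverification(
#       merchantId=MERCHANT_ID, accountId=ACCOUNT_ID,
#       body=build_request_verification_body("+15552175729", "US", "en-US"),
#   ).execute()
#   service.accounts().verifyphonenumber(
#       merchantId=MERCHANT_ID, accountId=ACCOUNT_ID,
#       body=build_verify_body(resp["verificationId"], "123456"),
#   ).execute()
```

The two helper functions only assemble JSON bodies; the commented-out calls show where they would plug into an authenticated service object.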

If you require further support implementing this change, please visit the Content API for Shopping forum.

Jetpack Compose is now 1.0: announcing Android’s modern toolkit for building native UI

Posted by Anna-Chiara Bellini, Product Manager, Nick Butcher, Developer Relations

Today, we're launching version 1.0 of Jetpack Compose, Android's modern, native UI toolkit to help you build better apps faster. It's stable, and ready for you to adopt in production. We have been developing Compose in the open with feedback and participation from the Android community for the last two years. As we reach 1.0, there are already over 2000 apps in the Play Store using Compose - in fact, the Play Store app itself uses Compose! But that’s not all: we have been working with a number of top app developers, and their feedback and support have helped us make the 1.0 release even stronger. Square, for instance, told us that by using Compose, they can “focus on things that are unique to Square and their UI infrastructure, rather than solving the broader issue of building a declarative UI framework”. Monzo said Compose allows them to “build higher quality screens more quickly”. And Twitter summed it up nicely: “We love it! ❤️”

We designed Compose to make it faster and easier to build native Android apps. With a fully declarative approach, you just describe your UI, and Compose takes care of the rest. As app state changes, your UI automatically updates, making it a lot simpler to build UI quickly. Intuitive Kotlin APIs help you build beautiful apps with way less code, and native access to all existing Android code means you can adopt at your own pace. Powerful layout APIs and code-driven UI make it easy to support different form factors, like tablets and foldables, and Compose support is coming for WearOS, Homescreen Widgets, and more!

This 1.0 release is ready for use in production, offering key features that you need:

  • Interoperable: Compose is built to interoperate with your existing app. You can embed Compose UIs within Views or Views within Compose. You can add as little as a single button to a screen, or keep that custom view you’ve created in a now-Compose screen.
  • Jetpack Integration: Compose is built to integrate with the Jetpack libraries you already know and love. With integration with Navigation, Paging, LiveData (or Flow/RxJava), ViewModel and Hilt, Compose works with your existing architecture.
  • Material: Compose offers an implementation of Material Design components and theming, making it easy to build beautiful apps that reflect your brand. The Material theming system is easier to understand and trace, without having to consult multiple XML files.
  • Lists: Compose’s Lazy components offer a simple, succinct but powerful way to efficiently display lists of data, with minimal boilerplate.
  • Animation: Compose’s simple and coherent animation APIs make it far easier to delight your app’s users.


New Tools

The fully declarative approach in Jetpack Compose radically changes how you develop UI. To support new workflows and a different way of thinking, we are delivering new tools, designed specifically for Compose, and adding support for Compose to some of our existing tooling.

Compose Preview

The new Compose Preview, available in Android Studio Arctic Fox, allows you to see your Composables in different states, light and dark themes, or different font scalings, all at the same time, making component development easier without having to deploy a whole app to your device. Enhanced with live editing of literals, you can see updates without recompiling your project.


Deploy Preview

If you ever wished to be able to test parts of the UI on a device, without having to navigate through your app to the screen you’re working on, you will like the new Deploy Preview: just create a preview for your Composable, and deploy it on your device for fast iteration.

Compose support in Layout Inspector

Layout Inspector adds support for Composables, so that you can confidently mix Compose with existing Views.

Read more about Compose support in Android Studio Arctic Fox.

Sharing our roadmap for Compose

Adopting any new framework requires evaluation, especially something as far-reaching as a new UI toolkit. To help you make an informed decision about whether it’s the right time for you, we’re publishing a public roadmap that shares our plans to continue building out Jetpack Compose.





Learning Compose

To help you get composing, we’ve prepared an extensive set of resources for you and your team.


There’s a lot to learn! The Jetpack Compose Pathway provides a step-by-step journey through key codelabs, videos and docs to help guide you.

Enjoy composing!

We really believe that Jetpack Compose is a huge leap forward, making it so much faster and easier to build great UIs; we can’t wait to see what you build with it. Now that Compose is stable at 1.0, it’s time to get started; there’s nothing better than getting right to the code. Happy Composing!

Vaccines and our return-to-office plans

Sundar sent the following email to Google employees earlier this morning. The email has been edited to remove internal links. 

Hi everyone,

I hope you are all taking good care. Since the beginning of the pandemic, we’ve put the wellbeing of our Google community front and center. We’ve done this while also taking care of our customers and partners, launching over 200 new products and features to help people and businesses navigate this difficult time. 

In March of 2020, we made the early decision to send employees home to slow down the spread of COVID. Since then, we’ve extended our Carer’s Leave coverage to help employees care for loved ones. We’ve continued to cover the full wages of on-campus workers who couldn't perform their jobs because of office closures. And, we’ve made sure that Googlers and our extended workforce have access to vaccines as soon as they are available locally. Additionally, thanks to the generosity of Googlers and support from Google.org, we've helped Gavi to fully vaccinate over 1 million people in low- and middle-income countries globally. 

Even as the virus continues to surge in many parts of the world, it’s encouraging to see very high vaccination rates for our Google community in areas where vaccines are widely available. This is a big reason why we felt comfortable opening some of our offices to employees who wanted to return early. And I have to say it’s been great to see Googlers brainstorming around whiteboards and enjoying meals in cafes again in the many offices that have already re-opened globally. 

Getting vaccinated is one of the most important ways to keep ourselves and our communities healthy in the months ahead. As we look toward a global return to our offices, I wanted to share two key updates:

  • First, anyone coming to work on our campuses will need to be vaccinated. We’re rolling this policy out in the U.S. in the coming weeks and will expand to other regions in the coming months. The implementation will vary according to local conditions and regulations, and will not apply until vaccines are widely available in your area. You’ll get guidance from your local leads about how this will affect you, and we’ll also share more details on an exceptions process for those who cannot be vaccinated for medical or other protected reasons.

  • Second, we are extending our global voluntary work-from-home policy through October 18. We are excited that we’ve started to re-open our campuses and encourage Googlers who feel safe coming to sites that have already opened to continue doing so. At the same time, we recognize that many Googlers are seeing spikes in their communities caused by the Delta variant and are concerned about returning to the office. This extension will allow us time to ramp back into work while providing flexibility for those who need it. We’ll continue watching the data carefully and let you know at least 30 days in advance before transitioning into our full return to office plans. For those of you with special circumstances, we will soon be sharing expanded temporary work options that will allow you to apply to work from home through the end of 2021. We’re also extending Expanded Carer’s Leave through the end of the year for parents and caregivers.

I know that many of you continue to deal with very challenging circumstances related to the pandemic. While there is much that remains outside of our control, I’m proud of the way we continue to take care of each other while helping people, businesses and communities through these difficult times.  

I hope these steps will give everyone greater peace of mind as offices reopen. Seeing Googlers together in the offices these past few weeks filled me with optimism, and I’m looking forward to brighter days ahead. 

-Sundar

Using AI to map Africa’s buildings

Between 2020 and 2050, Africa’s population is expected to double, adding 950 million more people to its urban areas alone. However, according to 2018 figures, a scarcity of affordable housing in many African cities has forced over half of the city dwellers in Sub-Saharan Africa to live in informal settlements. And in rural areas, many also occupy makeshift structures due to widespread poverty.

These shelters have remained largely undetectable using traditional monitoring tools. Machine learning, computer vision and remote sensing have come some way in recognizing buildings and roads, but when it comes to denser neighborhoods, it becomes much harder to distinguish small and makeshift buildings. 

Why is this an issue? Because when preparing a humanitarian response, forecasting transportation needs, or planning basic services, being able to accurately map the built environment - which allows us to ascertain population density - is absolutely key.

Enter Google’s Open Buildings

Google’s Open Buildings is a new open access dataset containing the locations and geometry of buildings across most of Africa. From Lagos’ Makoko settlement to Dadaab’s refugee camps, millions of previously invisible buildings have popped up in our dataset. This improved building data helps refine the understanding of where people and communities live, providing actionable information for state and non-state actors looking to provide services from sanitation to education and vaccination.

Open Buildings uses AI to provide a digital footprint of buildings. This includes producing polygons with the outlines of at least 500 million buildings across the African continent, the majority of which are less than 20 square meters. The full dataset encompasses 50 countries.

The data provides the exact location and polygon outline of each building, its size, a confidence score for it being detected as a valid building and a Plus Code. There is, however, no information about the type of building, its street address, or any identifying data. We have also excluded sensitive areas such as conflict zones to protect vulnerable populations.
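As a small sketch of working with these per-building fields, the snippet below filters records by their confidence score. The column names (`latitude`, `longitude`, `area_in_meters`, `confidence`, `full_plus_code`) and the sample rows are assumptions about the CSV schema, used purely for illustration:

```python
# Minimal sketch: keep only Open Buildings rows above a confidence
# threshold. Column names and sample rows are assumed, not official.
import csv
import io

SAMPLE = """latitude,longitude,area_in_meters,confidence,full_plus_code
6.5244,3.3792,14.2,0.91,6FR5G9FH+QJ
0.0512,40.3129,8.7,0.62,6GGRX837+P4
"""

def high_confidence_buildings(csv_text, threshold=0.75):
    """Yield rows whose detection confidence meets the threshold."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        if float(row["confidence"]) >= threshold:
            yield row

kept = list(high_confidence_buildings(SAMPLE))
# Only the first sample row clears the 0.75 threshold.
```

Choosing a higher threshold trades recall for precision, which is why the dataset ships with suggested thresholds for particular precision levels.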


Satellite mapping using AI 

The Open Buildings dataset was generated by using a model trained to detect buildings using satellite imagery from the African continent. The information for the buildings detected is then saved in CSV files which are available to download. The technical details of the Open Buildings dataset, including usage and tutorials, are available on the dataset website and the Google AI blog.

Animation showing landscape in Africa being mapped

How will this improve planning?

There are many important ways in which this data can be used, including — but not limited to — the following:

Population mapping: Building footprints are a key ingredient for estimating population density. This information is vital to planning for services for communities. 


Humanitarian response: To plan the response to a flood, drought, or other natural disaster.


Environmental science: Knowledge of settlement density is useful for understanding the human impact on the natural environment. 


Addressing systems: In many areas, buildings do not have formal addresses. This can make it difficult for people to access social benefits and economic opportunities. Building footprint data can help with the rollout of digital addressing systems such as Plus Codes.


Vaccination planning: Knowing the density of population and settlements helps to anticipate demand for vaccines and the best locations for facilities. This data is also useful for precision epidemiology, as well as prevention efforts such as mosquito net distribution.


Statistical indicators: Buildings data can be used to help calculate statistical indicators for national planning, such as the numbers of houses in the catchment areas of schools and health centers, mean travel distances to the nearest hospital or demand forecast for transportation systems.

Google’s AI Center in Accra

This project was led by our team at the AI Research Center in Accra, Ghana. The center was launched in 2019 to bring together top machine learning researchers and engineers dedicated to AI research and its applications. The research team has already been improving Google Maps with AI, adding 120 million buildings and 228,000 km of roads across Africa to Maps in the last year. This work is part of our broader AI for Social Good efforts.

Mapping Africa’s Buildings with Satellite Imagery

An accurate record of building footprints is important for a range of applications, from population estimation and urban planning to humanitarian response and environmental science. After a disaster, such as a flood or an earthquake, authorities need to estimate how many households have been affected. Ideally there would be up-to-date census information for this, but in practice such records may be out of date or unavailable. Instead, data on the locations and density of buildings can be a valuable alternative source of information.

A good way to collect such data is through satellite imagery, which can map the distribution of buildings across the world, particularly in areas that are isolated or difficult to access. However, detecting buildings with computer vision methods in some environments can be a challenging task. Because satellite imaging involves photographing the earth from several hundred kilometres above the ground, even at high resolution (30–50 cm per pixel), a small building or tent shelter occupies only a few pixels. The task is even more difficult for informal settlements, or rural areas where buildings constructed with natural materials can visually blend into the surroundings. There are also many types of natural and artificial features that can be easily confused with buildings in overhead imagery.

Objects that can confuse computer vision models for building identification (clockwise from top left): pools, rocks, enclosure walls and shipping containers.

In “Continental-Scale Building Detection from High-Resolution Satellite Imagery”, we address these challenges, using new methods for detecting buildings that work in rural and urban settings across different terrains, such as savannah, desert, and forest, as well as informal settlements and refugee facilities. We use this building detection model to create the Open Buildings dataset, a new open-access data resource containing the locations and footprints of 516 million buildings with coverage across most of the African continent. The dataset will support several practical, scientific and humanitarian applications, ranging from disaster response or population mapping to planning services such as new medical facilities or studying human impact on the natural environment.

Model Development
We built a training dataset for the building detection model by manually labelling 1.75 million buildings in 100k images. The figure below shows some examples of how we labelled images in the training data, taking into account confounding characteristics of different areas across the African continent. In rural areas, for example, it was necessary to identify different types of dwelling places and to disambiguate them from natural features, while in urban areas we needed to develop labelling policies for dense and contiguous structures.

(1) Example of a compound containing both dwelling places as well as smaller outbuildings such as grain stores. (2) Example of a round, thatched-roof structure that can be difficult for a model to distinguish from trees, and where it is necessary to use cues from pathways, clearings and shadows to disambiguate. (3) Example of several contiguous buildings for which the boundaries cannot be easily distinguished.

We trained the model to detect buildings in a bottom-up way, first by classifying each pixel as building or non-building, and then grouping these pixels together into individual instances. The detection pipeline was based on the U-Net model, which is commonly used in satellite image analysis. One advantage of U-Net is that it is a relatively compact architecture, and so can be applied to large quantities of imaging data without a heavy compute burden. This is critical, because the final task of applying this to continental-scale satellite imagery means running the model on many billions of image tiles.

Example of segmenting buildings in satellite imagery. Left: Source image; Center: Semantic segmentation, with each pixel assigned a confidence score that it is a building vs. non-building; Right: Instance segmentation, obtained by thresholding and grouping together connected components.
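The threshold-then-group step in this pipeline can be sketched in a few lines. This is a toy stand-in for the real instance-segmentation stage, using a plain flood fill for 4-connected components rather than an optimized implementation:

```python
# Sketch of the bottom-up pipeline: threshold per-pixel building
# confidence, then group connected pixels into instances.
# Toy illustration only; the production pipeline is far more elaborate.

def instances_from_scores(scores, threshold=0.5):
    """Label 4-connected components of above-threshold pixels.
    Returns a grid of instance ids (0 = background)."""
    h, w = len(scores), len(scores[0])
    labels = [[0] * w for _ in range(h)]
    next_id = 0
    for i in range(h):
        for j in range(w):
            if scores[i][j] >= threshold and labels[i][j] == 0:
                next_id += 1
                stack = [(i, j)]  # flood fill from this seed pixel
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w
                            and labels[y][x] == 0
                            and scores[y][x] >= threshold):
                        labels[y][x] = next_id
                        stack += [(y + 1, x), (y - 1, x),
                                  (y, x + 1), (y, x - 1)]
    return labels

scores = [
    [0.9, 0.8, 0.1, 0.0],
    [0.7, 0.2, 0.0, 0.6],
    [0.0, 0.0, 0.0, 0.9],
]
labels = instances_from_scores(scores)
# The top-left blob and the right-hand blob come out as two instances.
```

In practice the choice of threshold interacts directly with the confidence scores published in the dataset.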

Initial experiments with the basic model had low precision and recall, for example due to the variety of natural and artificial features with building-like appearance. We found a number of methods that improved performance. One was the use of mixup as a regularisation method, where random training images are blended together by taking a weighted average. Though mixup was originally proposed for image classification, we modified it to be used for semantic segmentation. Regularisation is important in general for this building segmentation task, because even with 100k training images, the training data do not capture the full variation of terrain, atmospheric and lighting conditions that the model is presented with at test time, and hence, there is a tendency to overfit. This is mitigated by mixup as well as random augmentation of training images.
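The mixup adaptation described above can be sketched as blending two training images and their per-pixel label masks with a single shared weight. How mixup was applied in the actual training code is not shown in the post, so this is one plausible reading:

```python
# Sketch of mixup adapted to semantic segmentation: blend two images
# and their soft label masks with the same Beta-distributed weight.
import random

def mixup_pair(img_a, img_b, mask_a, mask_b, alpha=0.2, rng=random):
    """Return a blended (image, mask) pair using a shared weight lam."""
    lam = rng.betavariate(alpha, alpha)  # mixup's Beta(alpha, alpha) draw
    def blend(a, b):
        return [[lam * x + (1 - lam) * y for x, y in zip(ra, rb)]
                for ra, rb in zip(a, b)]
    return blend(img_a, img_b), blend(mask_a, mask_b)
```

The key difference from classification mixup is that the labels being mixed are whole masks, so the blended target stays spatially aligned with the blended image.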

Another method that we found to be effective was the use of unsupervised self-training. We prepared a set of 100 million satellite images from across Africa, and filtered these to a subset of 8.7 million images that mostly contained buildings. This dataset was used for self-training using the Noisy Student method, in which the output of the best building detection model from the previous stage is used as a ‘teacher’ to then train a ‘student’ model that makes similar predictions from augmented images. In practice, we found that this reduced false positives and sharpened the detection output. The student model gave higher confidence to buildings and lower confidence to background.

Difference in model output between the student and teacher models for a typical image. In panel (d), red areas are those that the student model finds more likely to be buildings than the teacher model, and blue areas more likely to be background.
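The teacher-student loop described above can be sketched schematically. Here `train`, `augment`, and the models are stand-in callables, not real training code; the point is only the shape of the loop:

```python
# Schematic sketch of Noisy Student self-training: the current best
# model pseudo-labels unlabelled tiles, and a student is trained on
# augmented copies of those (tile, pseudo-label) pairs.

def self_train(teacher, tiles, train, augment, rounds=1):
    """Run `rounds` of pseudo-label-then-retrain; returns the student."""
    student = teacher
    for _ in range(rounds):
        pseudo = [(t, student(t)) for t in tiles]              # teacher labels
        student = train([(augment(t), y) for t, y in pseudo])  # noisy student
    return student
```

In the building-detection setting, `tiles` corresponds to the 8.7 million filtered satellite images and the noise comes from the augmentations applied to the student's inputs.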

One problem that we faced initially was that our model had a tendency to create “blobby” detections, without clearly delineated edges and with a tendency for neighbouring buildings to be merged together. To address this, we applied another idea from the original U-Net paper, which is to use distance weighting to adapt the loss function to emphasise the importance of making correct predictions near boundaries. During training, distance weighting places greater emphasis at the edges by adding weight to the loss — particularly where there are instances that nearly touch. For building detection, this encourages the model to correctly identify the gaps in between buildings, which is important so that many close structures are not merged together. We found that the original U-Net distance weighting formulation was helpful but slow to compute. So, we developed an alternative based on Gaussian convolution of edges, which was both faster and more effective.

Distance weighting schemes to emphasise nearby edges: U-Net (left) and Gaussian convolution of edges (right).
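The Gaussian-convolution alternative can be sketched as: mark pixels on instance boundaries, blur that edge map with a small Gaussian, and use the result as a per-pixel loss weight. The kernel width and boost factor below are illustrative choices, not the values used in the paper:

```python
# Sketch of edge-based loss weighting: boundary pixels get extra loss
# weight, spread by a Gaussian blur. Pure-Python, toy-sized.
import math

def edge_map(labels):
    """1.0 at pixels bordering a differently-labelled neighbour."""
    h, w = len(labels), len(labels[0])
    edges = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w and labels[i][j] != labels[ni][nj]:
                    edges[i][j] = edges[ni][nj] = 1.0
    return edges

def gaussian_loss_weights(labels, sigma=1.0, boost=10.0):
    """Per-pixel weights: 1 everywhere plus a blurred edge term."""
    edges = edge_map(labels)
    h, w = len(edges), len(edges[0])
    r = max(1, int(3 * sigma))
    k = [math.exp(-(d * d) / (2.0 * sigma * sigma)) for d in range(-r, r + 1)]
    s = sum(k)
    k = [v / s for v in k]
    # Separable blur with edge clamping: rows first, then columns.
    rows = [[sum(edges[i][min(max(j + d, 0), w - 1)] * k[d + r]
                 for d in range(-r, r + 1)) for j in range(w)]
            for i in range(h)]
    blur = [[sum(rows[min(max(i + d, 0), h - 1)][j] * k[d + r]
                 for d in range(-r, r + 1)) for j in range(w)]
            for i in range(h)]
    return [[1.0 + boost * blur[i][j] for j in range(w)] for i in range(h)]
```

Because the blur is a fixed convolution, this scheme avoids the per-pixel distance computations of the original U-Net weighting, which is where the speedup comes from.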

Our technical report has more details on each of these methods.

Results
We evaluated the performance of the model on several different regions across the continent, in different categories: urban, rural, and medium-density. In addition, with the goal of preparing for potential humanitarian applications, we tested the model on regions with displaced persons and refugee settlements. Precision and recall did vary between regions, so achieving consistent performance across the continent is an ongoing challenge.

Precision-recall curves, measured at 0.5 intersection-over-union threshold.
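The 0.5 intersection-over-union criterion used in this evaluation can be computed for a predicted/ground-truth mask pair as the overlap of their building pixels divided by their union, as in this small sketch:

```python
# Sketch of IoU between two binary masks, as used for the 0.5
# matching threshold in the evaluation.

def iou(mask_a, mask_b):
    """Intersection-over-union of the nonzero pixels of two masks."""
    a = {(i, j) for i, row in enumerate(mask_a)
         for j, v in enumerate(row) if v}
    b = {(i, j) for i, row in enumerate(mask_b)
         for j, v in enumerate(row) if v}
    union = a | b
    return len(a & b) / len(union) if union else 0.0

pred  = [[1, 1, 0],
         [1, 1, 0]]
truth = [[0, 1, 1],
         [0, 1, 1]]
# 2 shared pixels over a union of 6: IoU = 1/3, below the 0.5 cutoff,
# so this prediction would not count as a match.
```

A detection only counts as a true positive when its IoU with a ground-truth building reaches the threshold, which is what makes the precision-recall curves comparable across regions.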

When visually inspecting the detections for low-scoring regions, we noted various causes. In rural areas, label errors were problematic. For example, single buildings within a mostly-empty area can be difficult for labellers to spot. In urban areas, the model had a tendency to split large buildings into separate instances. The model also underperformed in desert terrain, where buildings were hard to distinguish against the background.

We carried out an ablation study to understand which methods contributed most to the final performance, measured in mean average precision (mAP). Distance weighting, mixup and the use of ImageNet pre-training were the biggest factors for the performance of the supervised learning baseline. The ablated models that did not use these methods had a mAP difference of -0.33, -0.12 and -0.07 respectively. Unsupervised self-training gave a further significant boost of +0.06 mAP.

Ablation study of training methods. The first row shows the mAP performance of the best model combined with self-training, and the second row shows the best model with supervised learning only (the baseline). By disabling each training optimization from the baseline in turn, we observe the impact on mAP test performance. Distance weighting has the most significant effect.

Generating the Open Buildings Dataset
To create the final dataset, we applied our best building detection model to satellite imagery across the African continent (8.6 billion image tiles covering 19.4 million km², 64% of the continent), which resulted in the detection of 516M distinct structures.

Each building’s outline was simplified as a polygon and associated with a Plus Code, which is a geographic identifier made up of numbers and letters, akin to a street address, and useful for identifying buildings in areas that don’t have formal addressing systems. We also include confidence scores and guidance on suggested thresholds to achieve particular precision levels.

The sizes of the structures vary as shown below, tending towards small footprints. The inclusion of small structures is important, for example, to support analyses of informal settlements or refugee facilities.

Distribution of building footprint sizes.

The data is freely available and we look forward to hearing how it is used. In the future, we may add new features and regions, depending on usage and feedback.

Acknowledgements
This work is part of our AI for Social Good efforts and was led by Google Research, Ghana. Thanks to the co-authors of this work: Wojciech Sirko, Sergii Kashubin, Marvin Ritter, Abigail Annkah, Yasser Salah Edine Bouchareb, Yann Dauphin, Daniel Keysers, Maxim Neumann and Moustapha Cisse. We are grateful to Abdoulaye Diack, Sean Askay, Ruth Alcantara and Francisco Moneo for help with coordination. Rob Litzke, Brian Shucker, Yan Mayster and Michelina Pallone provided valuable assistance with geo infrastructure.

Source: Google AI Blog


Preparing for Google Play’s new safety section

Posted by Suzanne Frey, VP, Product, Android Security and Privacy

Today, we’re announcing additional details for the upcoming safety section in Google Play. At Google, we know that feeling safe online comes from using products that are secure by default, private by design, and give users control over their data. This new safety section will provide developers a simple way to showcase their app’s overall safety. Developers will be able to give users deeper insight into their privacy and security practices, as well as explain the data the app may collect and why — all before users install the app.

Ultimately, all Google Play store apps will be required to share information in the safety section. We want to give developers plenty of time to adapt to these changes, so we’re sharing more information about the data type definitions, user journey, and policy requirements of this new feature.



What the new safety section may look like:

Images are directional and subject to change

Users will see the new summary in an app’s store listing page. It’ll share the developer’s explanation of what data an app collects or shares and highlight safety details, such as whether:

  • The app has security practices, like data encryption
  • The app follows our Families policy
  • The app has been independently validated against a global security standard


Users can tap into the summary to see details like:

  • What type of data is collected and shared, such as location, contacts, personal information (e.g., name, email address), financial information and more
  • How the data is used, such as for app functionality, personalization, and more
  • Whether data collection is optional or required in order to use an app


In designing our labels, we learned developers appreciate when they can provide context about their data practices and more detail on whether their app automatically collects data versus if that collection is optional. We also learned that users care about whether their data is shared with other companies, and why.

The final design is subject to change as we continue working with developers and designing for the best blend of developer and user experiences.

Policy changes to support the safety section

Today we announced new user data policies designed to provide more user transparency and to help people make informed choices about how their data is collected, protected and used.

  • All developers must provide a privacy policy. Previously, only apps that collected personal and sensitive user data needed to share a privacy policy.
  • Developers are responsible for providing accurate and complete information in their safety section, including data used by the app’s third party libraries or SDKs.

This applies to all apps published on Google Play, including Google's own apps.

What you can expect

We want to provide developers with plenty of time and resources to get prepared.

Target Timeline. Dates subject to change.

Starting in October, developers can submit information in the Google Play Console for review. We encourage you to start early in case you have questions along the way. The new safety section will launch for apps in Google Play in Q1 2022.

We know that some developers will need more time to assess their apps and coordinate with multiple teams. So, you’ll have until April 2022 before your apps must have this section approved. Without an approved section, your new app submission or app update may be rejected.


If your app’s information is not approved by the time we launch the safety section in Google Play to users in Q1 2022, then it will display “No information available.”

How to get prepared:

  • Visit the Play Console Help Center for more details about providing app privacy details in Play Console, including data type lists and examples.
  • Review how your app collects, protects and shares data. In particular, check your app’s declared permissions and the APIs and libraries that your app uses. These may require you to indicate that your app collects and shares specific types of data.
  • Join a policy webinar and send us your questions in advance. You can register for Global, India, Japan, or Korea sessions.

We’ll continue to share more guidance, including specific dates, over the next few months.

Thank you for your continued partnership in building this feature alongside us and in making Google Play a safe and trustworthy platform for everyone.

Announcing Policy Updates To Bolster Privacy and Security

Posted by Krish Vitaldevara, Director, Product Management

We are always looking to make Google Play a safer and more trustworthy experience for developers and consumers. Today, we’re announcing new policy updates to bolster user control, privacy, and security.

Giving users more transparency into data privacy and security

We’re sharing our new policy for the upcoming safety section in Google Play alongside additional information, like data definitions. Learn more.

Improving advertising privacy and security

We’ve long offered users meaningful controls with advertising ID, like being able to reset their identifier at any time or opt out of allowing the identifier to be used for ads personalization. We’re continuing to add more controls this year.

As we pre-announced to developers on June 2, we’re making a technical change as part of a Google Play services update in late 2021. When users opt out of interest-based advertising or ads personalization, their advertising ID will be removed and replaced with a string of zeros. As a reminder, this Google Play services change will roll out in phases, affecting apps running on Android 12 devices starting in late 2021 and expanding to all apps running on devices that support Google Play in early 2022. Also, apps updating their target API level to Android 12 will need to declare a new Google Play services permission in the manifest file in order to use advertising ID.
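For apps that read the advertising ID, the practical consequence is that a zeroed identifier should be treated as "do not personalize." A minimal check might look like the sketch below; the all-zeros UUID shown is the conventional zeroed form, but treat the exact format as an assumption to verify against the platform documentation:

```python
# Sketch: treat a missing or zeroed advertising ID as an opt-out.
# The exact zeroed format is an assumption (standard all-zeros UUID).

ZEROED_AD_ID = "00000000-0000-0000-0000-000000000000"

def can_personalize_ads(advertising_id):
    """Return False for a missing or zeroed-out advertising ID."""
    return bool(advertising_id) and advertising_id != ZEROED_AD_ID
```

Guarding every personalization path behind a check like this keeps app behaviour correct through the phased rollout, regardless of when a given device picks up the change.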

We will also test a new feature that notifies developers and ad/analytics service providers of user opt-out preferences to help developers implement user choice and add to existing policy restrictions on how advertising ID can be used. When a user deletes their advertising ID, developers will receive a notification so they can promptly erase advertising IDs that are no longer in use.

In addition, we’re prohibiting linking persistent device identifiers to personal and sensitive user data or resettable device identifiers. This policy adds an additional layer of privacy protection when users reset their device identifiers or uninstall apps.

And last, we’re offering a developer preview of app set ID for essential use cases such as analytics or fraud prevention. App set ID is a unique ID that, on a given device, allows you to correlate usage or actions across a set of apps owned by your organization. You cannot use app set ID for ads personalization or ads measurement. It will also automatically reset if all of the developer’s apps on the device are uninstalled or none of them have accessed the ID in 13 months.

Enhancing protection for kids

As we introduce app set ID for analytics and fraud prevention, we are also making changes to further enhance privacy for kids. If an app is primarily directed to children, it cannot transmit identifiers like advertising ID. If an app’s audience is both kids and adults, then it needs to avoid transmitting these identifiers for kids.

Over the next several months, we’ll share more information for a smooth transition.

Strengthening security

Security is fundamental to enabling privacy across our platform. We’re announcing a few policy updates to help keep user data secure.

First, Google Play remains a safer ecosystem when developers actively maintain their apps. So, we will close dormant accounts that have been inactive or abandoned for a year. This includes accounts where the developer has never uploaded an app or accessed Google Play Console in a year.

We will continue supporting developers with actively growing apps. We won’t close accounts with apps that have had 1000+ installs or in-app purchases in the last 90 days. Developers whose accounts are closed can create new ones in the future, but they won’t be able to reactivate old accounts, apps, or data.

Second, it’s important for users to have an accessible experience that is secure. So, we’re adding new requirements on how AccessibilityService API and IsAccessibilityTool can be used. These tools help build accessible experiences, which often require access to user data and device functionality. Now, all apps that use the AccessibilityService API will need to disclose data access and purpose in Google Play Console and get approval. Learn more.

Reminder on Payments policy

As we shared earlier in July, after careful consideration of feedback from both large and small developers, we are giving developers an option to request a 6-month extension until March 31, 2022 to comply with our Payments policy.

For more resources

Thank you for helping us make Google Play an even more trustworthy platform for everyone.

Enhancements to Google Voice

What’s changing

We continuously listen to customer feedback as we make refinements to Google Voice, and based on your feedback we’ve made the following enhancements:


Missed Call Reason:  Now you can see why a call did not ring and what you can do to fix it by changing your settings. Just go to the Missed Call details section or the Voicemail section (in case you received a voicemail for the call) and take the recommended steps in Settings, such as turning off Do Not Disturb or setting the device to receive incoming calls.

Easily find out why a call did not ring and correct it in Settings




Call Drop Reason and Redial:  Find out why a call dropped and easily redial. If the call dropped due to poor internet connectivity, you'll have the option to call using your mobile carrier network.

See the reason your call didn’t connect




Caller ID:  Google Voice customers using iOS now have a setting that shows their Google Voice number as the caller ID when a call comes in to a number linked to Google Voice. When this setting is on, you will see your Google Voice number, rather than the caller’s number, as the caller ID for calls to your linked number.

You can choose to see your Google Voice number as the caller’s number for calls to numbers you linked to Voice




Delete multiple SMS messages at once:  Now you can delete multiple SMS messages at one time to streamline your workflow, a capability frequently requested by our users. To use this feature, simply tap the avatar on one or more SMS threads; a trash bin will appear in the app bar above the messages, allowing the message threads to be easily deleted.

Select multiple SMS messages and delete them at once




Who’s impacted

  • End users

Getting started

  • Admins: There is no admin control for this feature.
  • End users: The Caller ID feature is OFF by default and can be enabled from within Settings. Visit the Help Center to learn more about changing the caller ID for incoming calls.

Rollout pace

  • Missed Call Reason: Rapid Release and Scheduled Release domains: Gradual rollout (up to 15 days for feature visibility) starting July 15.
  • Call Drop Reason and Redial: This feature is available now for all users.
  • Caller ID: This feature is available now for all iOS users.
  • Delete multiple SMS messages at once: This feature is available now for all users.

Availability

  • Available to all Google Workspace customers who subscribe to Google Voice, as well as G Suite Basic and Business customers.

Resources

Italy’s capital of culture: Parma

The cultural scope of the beautiful Italian Peninsula never ceases to amaze people all over the world. But the possibility of getting to know the traditions and peculiarities of many Italian gems has been drastically reduced since the pandemic hit. Among such treasures is Parma, a delicate city set in the very heart of Italy. Beyond being the capital of iconic food such as Parmigiano and Prosciutto, Parma is a city of incredible cultural heritage that gained the prestigious title of “Italian Capital of Culture for the year 2020” but had to put a year-long calendar of events on hold due to the pandemic. 

Eighteen months later, the city is ready to celebrate its cultural heritage with the world on Google Arts & Culture. The collaboration between the Municipality of Parma and Google brought online the work of 33 institutional partners in the Parma area, including over 17,000 images from the archives of the municipal museums, 30 places digitized with Street View and much more. It’s a project of true cultural valorization that highlights the magic behind this city.

Travel to Italy from home and check out some of Parma’s wonders. Explore the masterpieces, enjoy the sound of music and get a taste of Italian cuisine:


  1. Deep into a towering dome: Step inside and see the details of a 27-meter-high dome like you’ve never seen it before, and learn about the artist Correggio’s devotion to the Benedictine congregation.

  2. Get your artists in place: Thanks to the Google Art Camera, the online experience faithfully reproduces over 200 masterpieces by international artists such as Picasso, Francis Bacon, Goya, Monet and Tiziano Vecellio but also by Italian artists including Ligabue, De Chirico, Boccioni, Filippo Lippi and Parmigianino.

  3. 300,000 bamboo plants: Did you know Parma holds the largest existing labyrinth in the world? Labirinto della Masone was created by the visionary mind of Franco Maria Ricci. It is composed of 300,000 bamboo plants and is considered a magical place, all waiting to be discovered!  

  4. Music to your ears: The land of renowned musicians Verdi and Toscanini, Parma is a favorite destination for opera lovers, who can now immerse themselves in a collection of 10,000 stage photographs, sketches and posters from the newly digitized Casa della Musica (literally “House of Music”) archives. Several museums are now online, with the goal of bringing the history of sound reproduction to all ears.

  5. No stereotypes when it comes to food: Parma is known worldwide for Parmigiano Reggiano, and the digital hub features the Parmigiano Reggiano Museum, where you can discover the history of one of the world’s most loved cheeses.

  6. Did you say Pasta?: “Pasta” is synonymous with Italy so of course the online hub also includes the famous Pasta Museum to virtually transport you from the wheat fields to the traditional Italian household to make pasta. Check it out to truly understand the role that this type of food has played and continues to play in gastronomy, art, culture and in the lives of people around the world.

The journey into the beauty of Parma doesn’t end here. Continue to discover the wonders of the Capital of Culture 2020 and 2021, and let yourself be amazed by the art, music and culture of the city.


Want to continue traveling to Italy from home? Look behind the curtain of one of the world’s greatest and oldest theaters, La Scala Theater in Milan, or take a virtual tour of some of Italy’s most iconic sites through the “Wonders of Italy” experience.


Find this and so much more on Google Arts & Culture and in the Google Arts & Culture app for iOS and Android.

One engineer’s tips for getting into Google

Welcome to the latest edition of “My Path to Google,” where we talk to Googlers, interns and alumni about how they got to Google, what their roles are like and even some tips on how to prepare for interviews.

Today we spoke with Akash Mukherjee, a Security Engineer at our Mountain View office, about what makes his work challenging and exciting.

What do you do at Google?

I’m a Security Engineer on the Chrome Browser Core Infrastructure team. My team makes sure that the infrastructure used to build and ship Chrome to billions of users is secure. We build tools that make secure development practices easy across Chrome. One cool part of this work is that we not only support Google’s internal developer community but also open source contributors.

What’s a typical workday like for you? 

Most of my day involves designing and building out tools, so a lot of writing code and design docs. I’d say I spend 15% of my time syncing with colleagues on updates for ongoing projects. I’m fortunate to have multiple projects to work on — this helps me feel constantly challenged and motivated to work.

I feel like I have a great balance between collaborating and working independently. 

What made you decide to apply to Google?

Google had always been at the back of my mind, but I was intimidated by the interview process and held off applying for a while. Still, I’d heard good stories about the work-life balance at Google from friends. I was actually getting ready to apply right when a recruiter reached out to me! It felt like a natural match not only in terms of technical skills, but also culturally.

How did you land in your current role?

Before joining Google, I was a security engineer at another company, where I was doing more automation work. Although it was exciting, I always felt something was missing. Joining Google I realized how much I value constant innovation and building new systems and tools. One of the coolest things about building new things is that it requires you to understand the vast existing infrastructure. It’s challenging, exciting work.

What inspires you to come in (or log in) every day? 

It’s fascinating to see how Google’s objective of building for everyone breaks down to the individual level. One of the benefits of working at Google is that the work we do impacts more than a billion people’s lives. That motivates me. It would be unfair not to also mention all the amazing people I work with on a daily basis — my colleagues are a crucial part of the work I do.

A golden retriever puppy lies on the trunk of a car while wearing a Noogler hat.

Besides work, I play soccer and love to explore by driving around. I also have the cutest golden retriever, and outside of work, that’s where I spend most of my time.

How did you prepare for the interview?

Google’s interview process really tests your fundamental knowledge. Work on strengthening those building blocks and answer questions with technical details. This is a good starting point that I have used. If you look at the questions, you’ll see how important the fundamentals are.


Any tips for aspiring Googlers?

Believe in yourself, especially during tough times and failures. Anyone out there reading this, just get past the fear of failure and start learning from it. Failures teach us much more than success.