View Google Classroom activity with new audit logs, view adoption and other metrics with BigQuery activity logs

What’s changing

We’re making two enhancements for Google Classroom, which will help Google Workspace for Education admins surface information about how Classroom is being used in their organization. Specifically, we’re introducing:

  • Classroom audit logs in the Admin console
  • Classroom activity logs in BigQuery and Data Studio dashboards

See below for more information and availability.






Who’s impacted

Admins



Why it’s important

By surfacing Classroom audit logs, admins can quickly pinpoint who did what in their domain, such as who removed a student from a class or who archived a class on a certain date. 

For Education Standard and Plus customers, admins can export the Classroom audit log data from the Admin console to BigQuery, which allows them to query the data as needed. As a starting point, we’ve provided a Data Studio report template, which surfaces your data in an easily digestible format. Admins can copy this template and further customize it using Data Studio.
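Once the audit data is in BigQuery, it can be queried like any other table. As a rough sketch of what such a query might look like, the snippet below builds a Standard SQL query for recent class-archive events; the dataset name, table name (`activity`), and field names (`time_usec`, `email`, `event_name`, `record_type`) are illustrative assumptions, so check the schema of your own export before running anything like this.

```python
# Hypothetical sketch: querying Classroom audit events exported to BigQuery.
# Table and field names below are assumptions, not the confirmed export schema.

def classroom_audit_query(dataset: str, days: int = 7) -> str:
    """Build a query for recent class-archive events in the exported logs."""
    return f"""
SELECT time_usec, email, event_name
FROM `{dataset}.activity`
WHERE record_type = 'classroom'
  AND event_name = 'archive_course'
  AND TIMESTAMP_MICROS(time_usec) >
      TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL {days} DAY)
ORDER BY time_usec DESC
""".strip()

print(classroom_audit_query("my_project.my_audit_dataset"))
```

A query like this could be pasted into the BigQuery console or run through a client library, and the same data feeds the Data Studio template mentioned above.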

We hope this makes it easier for admins to look up common activities in their organization and act quickly when support may be needed.



Getting started


Rollout pace


Availability

Classroom audit logs
  • Available to Google Workspace Education Fundamentals, Education Plus, Teaching and Learning Upgrade, Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Frontline, and Nonprofits, as well as G Suite Basic and Business customers

BigQuery Logs + Data Studio Templates
  • Available to Google Workspace Education Standard and Education Plus customers
  • Not available to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Frontline, Education Fundamentals, the Teaching and Learning Upgrade, and Nonprofits, as well as G Suite Basic and Business customers


Collaborating with the UN to accelerate crisis response

In remarks to the UN's High-Level Humanitarian Event on Anticipatory Action, Google SVP for Global Affairs, Kent Walker, discusses collaboration to accelerate crisis preparedness and predict crises before they happen. Read the full remarks below.


Mr. Secretary General, your excellencies, ladies and gentlemen - it’s an honor to join you as we come together to discuss these critical humanitarian issues.

As you know, technology is already raising living standards around the world—leveraging science to double life spans over the last 100 years, helping a billion people emerge from poverty in the last 30 years alone. And innovation will help drive environmental sustainability, raise living standards, improve healthcare, and enhance crisis response.

But addressing global needs in a meaningful way requires strong collaborations between technologists, governments, humanitarian organizations, and those most directly affected.

That’s why we are pleased to announce a $1.5 million commitment to OCHA’s Center for Humanitarian Data. Over the next two years, Google.org will support the Center in scaling up the use of forecasts and predictive models to anticipate humanitarian crises and trigger the release of funds before conditions escalate.

From the earliest days of efforts like Hans Rosling’s GapMinder, it’s been a dream that rather than waiting for a crisis to occur, data and technology could help predict events like droughts or food shortages weeks ahead of time, allowing agencies to provide alerts and deliver supplies to avert the crisis. That technology exists now, today—and we need to put it to work.

With the signs of climate change all around us, it’s essential that we improve our collective preparedness, and protect our most vulnerable populations.

Google is honored to support the critical work led by OCHA and the Center for Humanitarian Data, and we’re committed to combining funding, innovation, and technical expertise to support underserved communities and expand opportunity for everyone.

We hope others will join us in the important work of getting ahead of crises before they happen.

Thank you.

Introducing Android’s Private Compute Services

We introduced Android’s Private Compute Core in Android 12 Beta. Today, we're excited to announce a new suite of services that provide a privacy-preserving bridge between Private Compute Core and the cloud.

Recap: What is Private Compute Core?

Android’s Private Compute Core is an open source, secure environment that is isolated from the rest of the operating system and apps. With each new Android release we’ll add more privacy-preserving features to the Private Compute Core. Today, these include:

  • Live Caption, which adds captions to any media using Google’s on-device speech recognition
  • Now Playing, which recognizes music playing nearby and displays the song title and artist name on your device’s lock screen
  • Smart Reply, which suggests relevant responses based on the conversation you’re having in messaging apps

For these features to be private, they must:

  1. Keep the information on your device private. Android ensures that the sensitive data processed in the Private Compute Core is not shared to any apps without you taking an action. For instance, until you tap a Smart Reply, the OS keeps your reply hidden from both your keyboard and the app you’re typing into.
  2. Let your device use the cloud (to download new song catalogs or speech-recognition models) without compromising your privacy. This is where Private Compute Services comes in.

Introducing Android’s Private Compute Services

Machine learning features often improve by updating models, and Private Compute Services helps features get these updates over a private path. Android prevents any feature inside the Private Compute Core from having direct access to the network. Instead, features communicate over a small set of purposeful open-source APIs to Private Compute Services, which strips out identifying information and uses a set of privacy technologies, including Federated Learning, Federated Analytics, and Private information retrieval.

We will publish the source code for Private Compute Services so it can be audited by security researchers and other teams outside of Google. This means it can go through the same rigorous security programs that ensure the safety of the Android platform.

We’re enthusiastic about the potential for machine learning to power more helpful features inside Android, and Android’s Private Compute Core will help users benefit from these features while strengthening privacy protections via the new Private Compute Services. Android is the first open source mobile OS to include this kind of externally verifiable privacy; Private Compute Services helps the Android OS continue to innovate in machine learning, while also maintaining the highest standards of privacy and security.

Bringing richer navigation, charging, parking apps to more Android Auto users

Posted by Madan Ankapura, Product Manager

Illustration of car interior with map, parking and gas symbols

Today, we are releasing the beta of Android for Cars App Library version 1.1. Android Auto apps that use features requiring Car App API level 2 or higher, such as map interactivity, vehicle hardware data, multiple-length text, and the long message and sign-in templates, can now be used in cars running Android Auto 6.7+ (previously these features were limited to the Desktop Head Unit).

Two Android Auto GIF examples. Left GIF is 2GIS and right GIF is TomTom

With this announcement, we are also completing the transition to Jetpack and will no longer be accepting submissions built with the closed source library (com.google.android.libraries.car.app). If you haven’t already, we encourage you to migrate to the AndroidX library now.

For the entire list of changes in beta01, please see the release notes. To start building your app for the car, check out our updated developer documentation, car quality guidelines and design guidelines.

If you’re interested in joining our Early Access Program to get access to new features early in the future, please fill out this interest form. You can get started with the Android for Cars App Library today, by visiting g.co/androidforcars.

Personalized ASR Models from a Large and Diverse Disordered Speech Dataset

Speech impairments affect millions of people, with underlying causes ranging from neurological or genetic conditions to physical impairment, brain damage or hearing loss. Similarly, the resulting speech patterns are diverse, including stuttering, dysarthria, apraxia, etc., and can have a detrimental impact on self-expression, participation in society and access to voice-enabled technologies. Automatic speech recognition (ASR) technologies have the potential to help individuals with such speech impairments by improving access to dictation and home automation and by enhancing communication. However, while the increased computational power of deep learning systems and the availability of large training datasets has improved the accuracy of ASR systems, their performance is still insufficient for many people with speech disorders, rendering the technology unusable for many of the speakers who could benefit the most.

In 2019, we introduced Project Euphonia and discussed how we could use personalized ASR models of disordered speech to achieve accuracies on par with non-personalized ASR on typical speech. Today we share the results of two studies, presented at Interspeech 2021, that aim to expand the availability of personalized ASR models to more users. In “Disordered Speech Data Collection: Lessons Learned at 1 Million Utterances from Project Euphonia”, we present a greatly expanded collection of disordered speech data, composed of over 1 million utterances. Then, in “Automatic Speech Recognition of Disordered Speech: Personalized models outperforming human listeners on short phrases”, we discuss our efforts to generate personalized ASR models based on this corpus. This approach leads to highly accurate models that can achieve up to 85% improvement to the word error rate in select domains compared to out-of-the-box speech models trained on typical speech.

Impaired Speech Data Collection
Since 2019, speakers with speech impairments of varying degrees of severity across a variety of conditions have provided voice samples to support Project Euphonia’s research mission. This effort has grown Euphonia’s corpus to over 1 million utterances, comprising over 1400 hours from 1330 speakers (as of August 2021).

Distribution of severity of speech disorder and condition across all speakers with more than 300 utterances recorded. For conditions, only those with > 5 speakers are shown (all others aggregated into “OTHER” for k-anonymity).
ALS = amyotrophic lateral sclerosis; DS = Down syndrome; PD = Parkinson’s disease; CP = cerebral palsy; HI = hearing impaired; MD = muscular dystrophy; MS = multiple sclerosis

To simplify the data collection, participants used an at-home recording system on their personal hardware (laptop or phone, with and without headphones), instead of an idealized lab-based setting that would collect studio quality recordings.

To reduce transcription cost, while still maintaining high transcript conformity, we prioritized scripted speech. Participants read prompts shown on a browser-based recording tool. Phrase prompts covered use-cases like home automation (“Turn on the TV.”), caregiver conversations (“I am hungry.”) and informal conversations (“How are you doing? Did you have a nice day?”). Most participants received a list of 1500 phrases, which included 1100 unique phrases along with 100 phrases that were each repeated four more times.

Speech professionals conducted a comprehensive auditory-perceptual speech assessment while listening to a subset of utterances for every speaker, providing the following speaker-level metadata: speech disorder type (e.g., stuttering, dysarthria, apraxia), ratings of 24 features of abnormal speech (e.g., hypernasality, articulatory imprecision, dysprosody), and recording quality assessments of both technical (e.g., signal dropouts, segmentation problems) and acoustic (e.g., environmental noise, secondary speaker crosstalk) features.

Personalized ASR Models
This expanded impaired speech dataset is the foundation of our new approach to personalized ASR models for disordered speech. Each personalized model uses a standard end-to-end, RNN-Transducer (RNN-T) ASR model that is fine-tuned using data from the target speaker only.

Architecture of RNN-Transducer. In our case, the encoder network consists of 8 layers and the predictor network consists of 2 layers of uni-directional LSTM cells.

To accomplish this, we focus on adapting the encoder network, i.e. the part of the model dealing with the specific acoustics of a given speaker, as speech sound disorders were most common in our corpus. We found that only updating the bottom five (out of eight) encoder layers while freezing the top three encoder layers (as well as the joint layer and decoder layers) led to the best results and effectively avoided overfitting. To make these models more robust against background noise and other acoustic effects, we employ a configuration of SpecAugment specifically tuned to the prevailing characteristics of disordered speech. Further, we found that the choice of the pre-trained base model was critical. A base model trained on a large and diverse corpus of typical speech (multiple domains and acoustic conditions) proved to work best for our scenario.
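The selective fine-tuning recipe described above (update the bottom five of eight encoder layers; freeze the top three encoder layers, the joint layer, and the predictor) can be sketched in a few lines. The parameter-group names below are hypothetical stand-ins for however a real RNN-T implementation exposes its parameters; the point is only the selection logic.

```python
# Sketch of the fine-tuning recipe: only the bottom five of eight encoder
# layers receive gradient updates; the top three encoder layers, the joint
# layer, and the two predictor (decoder) layers stay frozen.
# Group names are illustrative, not actual Project Euphonia model code.

NUM_ENCODER_LAYERS = 8
NUM_TRAINABLE_ENCODER_LAYERS = 5  # bottom layers, closest to the acoustics

def trainable_param_groups():
    """Return the parameter-group names that should receive gradient updates."""
    all_groups = [f"encoder.layer{i}" for i in range(NUM_ENCODER_LAYERS)]
    all_groups += ["predictor.layer0", "predictor.layer1", "joint"]
    frozen = {f"encoder.layer{i}"
              for i in range(NUM_TRAINABLE_ENCODER_LAYERS, NUM_ENCODER_LAYERS)}
    frozen |= {"predictor.layer0", "predictor.layer1", "joint"}
    return [g for g in all_groups if g not in frozen]

print(trainable_param_groups())
```

In a framework like PyTorch, the equivalent step would be setting `requires_grad = False` on every parameter whose name falls in the frozen set before starting per-speaker fine-tuning.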

Results
We trained personalized ASR models for ~430 speakers who recorded at least 300 utterances. 10% of utterances were held out as a test set (with no phrase overlap) on which we calculated the word error rate (WER) for the personalized model and the unadapted base model.

Overall, our personalization approach yields significant improvements across all severity levels and conditions. Even for severely impaired speech, the median WER for short phrases from the home automation domain dropped from around 89% to 13%. Substantial accuracy improvements were also seen across other domains, such as conversational and caregiver.

WER of unadapted and personalized ASR models on home automation phrases.

To understand when personalization does not work well, we analyzed several subgroups:

  • HighWER and LowWER: Speakers with high and low personalized model WERs based on the 1st and 5th quintiles of the WER distribution.
  • SurpHighWER: Speakers with a surprisingly high WER (participants with typical speech or mild speech impairment of the HighWER group).

Different pathologies and speech disorder presentations are expected to impact ASR non-uniformly. The distribution of speech disorder types within the HighWER group indicates that dysarthria due to cerebral palsy was particularly difficult to model. Not surprisingly, median severity was also higher in this group.

To identify the speaker-specific and technical factors that impact ASR accuracy, we examined the differences (Cohen's D) in the metadata between the participants that had poor (HighWER) and excellent (LowWER) ASR performance. As expected, overall speech severity was significantly lower in the LowWER group than in the HighWER group (p < 0.01). Intelligibility and severity were the most prominent atypical speech features in the HighWER group; however, other speech features also emerged, including abnormal prosody, articulation, and phonation. These speech features are known to degrade overall speech intelligibility.

The SurpHighWER group had fewer training utterances and lower SNR compared with the LowWER group (p < 0.01) resulting in large (negative) effect sizes, with all other factors having small effect sizes, except fastness. In contrast, the HighWER group exhibited medium to large differences across all factors.
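Cohen's d, used above to compare metadata between groups, is the difference in group means scaled by the pooled standard deviation. A small sketch with hypothetical numbers (the utterance counts below are invented for illustration, not study data):

```python
# Cohen's d sketch for comparing a metadata variable (e.g. number of training
# utterances) between two speaker groups, using the pooled standard deviation.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled

# Hypothetical utterance counts: one group recorded fewer than the other.
few_utterances = [220, 250, 240, 230, 260]
many_utterances = [320, 340, 310, 330, 350]
print(round(cohens_d(few_utterances, many_utterances), 2))  # -5.69
```

A large negative value here mirrors the finding above: the group with fewer training utterances sits well below the comparison group on that variable.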

Speech disorder and technical metadata effect sizes for the HighWER-vs-LowWER and SurpHighWER-vs-LowWER pairs. Positive effects indicate that values for the HighWER group were greater than for the LowWER group.

We then compared personalized ASR models to human listeners. Three speech professionals independently transcribed 30 utterances per speaker. We found that WERs were, on average, lower for personalized ASR models compared to the WERs of human listeners, with gains increasing by severity.

Delta between the WERs of the personalized ASR models and the human listeners. Negative values indicate that personalized ASR performs better than human (expert) listeners.

Conclusions
With over 1 million utterances, Euphonia’s corpus is one of the largest and most diverse disordered speech corpora (in terms of disorder types and severities) and has enabled significant advances in ASR accuracy for these types of atypical speech. Our results demonstrate the efficacy of personalized ASR models for recognizing a wide range of speech impairments and severities, with potential for making ASR available to a wider population of users.

Acknowledgements
Key contributors to this project include Michael Brenner, Julie Cattiau, Richard Cave, Jordan Green, Rus Heywood, Pan-Pan Jiang, Anton Kast, Marilyn Ladewig, Bob MacDonald, Phil Nelson, Katie Seaver, Jimmy Tobin, and Katrin Tomanek. We gratefully acknowledge the support Project Euphonia received from members of many speech research teams across Google, including Françoise Beaufays, Fadi Biadsy, Dotan Emanuel, Khe Chai Sim, Pedro Moreno Mengibar, Arun Narayanan, Hasim Sak, Suzan Schwartz, Joel Shor, and many others. And most importantly, we wanted to say a huge thank you to the over 1300 participants who recorded speech samples and the many advocacy groups who helped us connect with these participants.

Source: Google AI Blog


Our commitment to water stewardship

I grew up in Muir Beach, California, and was fortunate to spend my childhood exploring its beautiful forests and streams. Today, these delicate ecosystems are threatened as the entire west coast of the U.S. is experiencing one of the worst droughts in recorded history. Unfortunately, this problem extends beyond the stretch of coastline I call home. Climate change is exacerbating water scarcity challenges around the world as places suffer from diminished rainfall — from Brazil's semi-arid region to Sub-Saharan Africa. At the same time, we’ve seen strong storms bring devastating floods to places like the eastern U.S., central China, and western Germany.

Last September, we announced our third and most ambitious decade of climate action and laid out our plan toward a carbon-free future. Building on this commitment, we are pledging to a water stewardship target to replenish more water than we consume by 2030 and support water security in communities where we operate. This means Google will replenish 120% of the water we consume, on average, across our offices and data centers. We’re focusing on three areas: enhancing our stewardship of water resources across Google office campuses and data centers; replenishing our water use and improving watershed health and ecosystems in water-stressed communities; and sharing technology and tools that help everyone predict, prevent and recover from water stress.


Managing the water we use responsibly

We use water to cool the data centers that make products like Gmail, YouTube, Google Maps and Search possible. Over the years, we've taken steps to address and improve our operational water sustainability. For example, we deployed technology that uses reclaimed wastewater to cool our data center in Douglas County, Georgia. At our office campuses in the San Francisco Bay Area, we worked with ecologists and landscape architects to develop an ecological design strategy and habitat guidelines to improve the resiliency of landscapes and nearby watershed health. This included implementing drip irrigation, using watering systems that adjust to local weather conditions, and fostering diverse landscapes on our campuses that can withstand the stresses of climate change. 

Our water stewardship journey will involve continuously enhancing our water use and consumption. At our data centers, we’ll identify opportunities to use freshwater alternatives where possible — whether that's seawater or reclaimed wastewater. When it comes to our office campuses, we’re looking to use more on-site water sources — such as collected stormwater and treated wastewater — to meet our non-potable water needs like landscape irrigation, cooling and toilet flushing.


Investing in community water security and healthy ecosystems

Water security is an issue that goes beyond our operations, and it’s not something we can solve alone. In partnership with others, we’ll invest in community projects that replenish 120% of the water we consume, on average, across all Google offices and data centers, and that improve the health of the local watersheds where our office campuses and data centers are located. 

Typically, the water we all use every day comes from local watersheds — areas of land where local precipitation collects and drains off into a common outlet, such as a river, bay or other receiving body of water. There are several ways to determine whether a watershed is sustainable including measuring water quality and availability and community access to the water. 

We’ll focus on solutions that address local water and watershed challenges. For example, we’re working with the Colorado River Indian Tribes project to reduce the amount of water that is withdrawn from Lake Mead reservoir on the Colorado River in Nevada and Arizona. In Dublin, Ireland, we’re installing rainwater harvesting systems to reduce stormwater flows to improve water quality in the River Liffey and the Dublin Bay. And in Los Angeles, we’re investing in efforts to remove water-thirsty invasive species to help the nearby ecosystem in the San Gabriel mountains.


Using data tools to predict and prevent water stress

Communities, policymakers and planners all need tools to measure and predict water availability and water needs. We’re dedicated to working with partners to make those tools and technologies universally accessible. To that end, we’ve recently worked with others on these water management efforts: 


  • Partnered with the United Nations Environment Programme and the European Commission’s Joint Research Centre (JRC) to create the Freshwater Ecosystems Explorer. This tool tracks surface water changes over time on a national and local scale. 

  • Co-developed the web application OpenET with academic and government researchers to make satellite-based data that shows how and where water moves when it evaporates available to farmers, landowners and water managers.

  • Provided Google.org funding for Global Water Watch and Windward Fund’s BlueConduit. Global Water Watch provides real-time indicators for current and future water management needs, and was built in partnership with Google.org, WRI, WWF and Deltares. BlueConduit quantifies and maps hazardous lead service lines, making it easier to replace water infrastructure in vulnerable communities.

When it comes to protecting the future of our planet and the resources we rely on, there’s a lot to be done. We’ll keep looking for ways we can use our products and expertise to be good water stewards and partner with others to address these critical and shared water challenges. 


Announcing The Google Ads API Migration Workshop

Today, we’re announcing The Google Ads API Migration Workshop, which will be presented live in three regions, to provide you with knowledge, resources, and support to migrate from the AdWords API to the Google Ads API.
Session Dates

Get to know the team supporting your migration during this three-day virtual workshop as we:
  • Teach you about the new API
  • Walk through interactive demonstrations
  • Help you create a migration plan
  • Discuss best practices for migrating your application

Whether you’re new to the Google Ads API or want to level up your skill set, we'll have a variety of sessions to help you achieve your goals. We’ll be delivering several talks to discuss what’s new in the Google Ads API and explain key concepts. Tune into our interactive sessions as we demonstrate how to migrate components of your application and leverage the suite of developer tools supporting the Google Ads API.


Throughout the workshop, you’ll have the opportunity to interact with our team via live Q&A and breakout sessions. We’ll also host a panel featuring the Tech Leads of the Google Ads API Engineering and Developer Relations teams.


Follow the event link to register for the event and view the full agenda. We look forward to seeing you!

A new training programme to help small businesses reduce their carbon emissions

The climate crisis is an urgent issue for everyone. The UK government has set an ambitious target to reach net zero by 2050, and businesses of all sizes need to play a part if we’re to reach that goal. 

This is not just about doing the right thing — today’s consumers expect action: according to research from Edelman, 80% of people want brands to solve society’s problems. 


Small businesses make up 99% of the UK’s business community so they’ll play a crucial role in reaching net zero. Yet, understandably, small businesses don’t always have the time, resources or expertise to dedicate to this — especially as they focus on recovery from the pandemic. A study from the British Chambers of Commerce and O2 found that only one in 10 small businesses are measuring their carbon footprint, and a fifth of small businesses don't fully understand the term "net zero". Cost, and an ability to understand, measure and report emissions are cited as two of the main barriers to change. 


Sustainability training for small businesses


To help small businesses overcome these obstacles, we’re announcing a new free, simple and actionable training programme to help SMEs reduce their emissions. We developed the training in partnership with leading sustainability and net zero certification group, Planet Mark, as part of the UK Government’s Together for our Planet Business Climate Leaders campaign, which encourages small businesses to commit to cutting their emissions in half by 2030 and to net zero by 2050. 


Our training is designed for small businesses starting their journey towards sustainability, with an emphasis on how a sustainability strategy can help drive business performance. It sets out the business case and imperative for cutting emissions, and explains practical, digitally-focused ways to decarbonize — from using paperless billing and Cloud-enabled technology, to renewable energy sourcing and supply chains. Since we know how much consumers care about this, it also covers how small businesses can use their sustainability credentials to differentiate. 


One business already doing this successfully is catering company, Fooditude. They made tangible changes to their business, like limiting their food waste, going paperless with admin systems and swapping to local suppliers, and reduced their emissions by over 30% per meal. Dean Kennett, Fooditude’s Managing Director, attributes £3 million in new revenue to their new sustainability credentials, as well as their ability to hire staff who share their values, and a shared purpose among employees. 

Swati Deshpande, part of the team at Fooditude

We’ll deliver the training through the Google Digital Garage, building on our experience of coaching more than 650,000 people and small businesses in the UK in digital and business skills. And we’ll lean on our expertise as leaders on climate change for over two decades, from becoming carbon neutral in 2007 to our latest and most ambitious commitment to become the first major company to operate on carbon-free energy 24 hours a day, seven days a week, 365 days a year.


We’re encouraging companies who complete the training to make a commitment to going net zero by signing up to the SME Climate Commitment, which can be found on the UK Business Climate Hub. Businesses who sign up and share their commitments will be recognized by the United Nations Race to Zero campaign initiative and inspire other businesses to take action. 


Helping SMEs track carbon emissions


Measuring carbon emissions accurately is essential if small businesses are to know whether their actions make a difference, but most small businesses can’t do this alone. That’s why we’re supporting Normative, the software platform behind the SME Climate Commitment, to help businesses track and account for their carbon emissions, making climate mitigation easier and more actionable. Over the next six months, as part of the Google.org Fellowship, we’ll provide a team of 11 Googlers to work full-time, pro bono, to assist Normative with building the technical infrastructure that underpins the free-to-access platform. Normative was one of the organisations to receive a €1M grant through the Google.org Impact Challenge on Climate, which funds bold ideas that aim to use technology to accelerate Europe’s progress toward a greener, more resilient future.


We’re optimistic that by supporting organisations and technologies like these we can help small businesses make the journey towards a carbon-free future. 


How to sign up


Small businesses can sign up to the training here.


Discover Dubai’s Culture & Heritage with Google Arts & Culture

In Dubai, we believe our future is derived from our past.  While my hometown has become renowned for its fast-paced development and soaring skyscrapers, many people still don’t know about the rich culture and heritage this city holds. 


Today, I’m proud to unveil ‘Dubai’s Culture and Heritage’, launched in collaboration with Google Arts & Culture, which will help you discover my hometown's story and its vibrant art scene through more than 80 expertly curated stories, 5 audio stories, 25 videos, and over 800 high-resolution images of arts, crafts, heritage sites and much more. 

Did you know Dubai was a trading port?

Some people wonder what it was like to live in Dubai before the city became a bustling metropolis. What better way to learn than to hear firsthand from some of Dubai’s residents, including pearl merchants, boat builders and craftspeople, and their childhood memories of swimming in the Dubai Creek?

For many of us, the traditional Emirati Majlis, a cultural and social space where members of the community come together for discussions, was and remains a staple feature of our social lives. 

To get a sense of traditional life in Dubai, take a virtual walk through alleyways and witness traditional architecture such as buildings with high air towers called Barajeel, in the Al-Fahidi district. You can also learn more about traditional embroidery, palm weaving and, for the coffee lovers, the history and culture of coffee in the UAE.

Modern Dubai, Zaha Hadid and the art scene

Fast-forward with the click of a button to see some of modern-day Dubai’s iconic architectural landmarks, from towers that rotate 90 degrees from top to bottom, to torus-shaped structures adorned with Arabic calligraphy, to the first hotel to have its interiors and exteriors designed by renowned architect Zaha Hadid. 


Dubai’s art scene has also evolved over the years, reflecting the diversity of its social fabric. Learn about one of the city’s first cinemas, learn more about the Sikka Arts Festival and Art Dubai — two important artistic events held in the heart of the city — and hear from emerging artists from around the world who are using Dubai as a hub to share their work. 

Ready to take the tour?

We’re excited to be able to help people, wherever they may be, discover our culture and heritage through our work with Google Arts & Culture. To learn more, visit g.co/dubaiculture or download the Google Arts & Culture app for Android or iOS.


New Digital Tools for Kiwi Teachers and Schools


Image: Manaiakalani Classroom using Chromebooks

Nearly 1 million students will find themselves out of school in New Zealand during a national COVID-19 lockdown. While this can put families, schools and teachers under immense pressure to ensure that students continue to learn, over the past 18 months Kiwi teachers and students have greatly accelerated their digital skills. Whether the ‘classroom’ is in-person, virtual or a hybrid of the two, building educators’ and students’ capacity and providing equal access to digital skills education has been central to the partnerships Google has developed throughout New Zealand.



That’s why today we’re pleased to announce the continuation and evolution of our agreement with the Ministry of Education. Since 2018 we’ve provided all state and state-integrated schools across New Zealand with Ministry-funded Chrome Education Upgrades to manage new and existing unmanaged Chromebooks. Now, in addition, the Chrome Education Upgrade will be available to schools via our distribution partner Synnex NZ, allowing schools to also upgrade their Google Workspace for Education Fundamentals (the free for Education edition, previously called G Suite for Education) to Google Workspace for Education Plus.


Google Workspace for Education Plus gives schools access to enterprise level teaching and learning, reporting and security tools. This comprehensive edition includes all the enhanced security features and tools from Education Standard, the Teaching and Learning Upgrade, and more to ensure your school has the best educational tools available.



Schools can harness the power of enhanced teaching and learning tools like secure breakout rooms in Google Meet, originality reports in Google Classroom and the ability to livestream important school events to the community wherever they are. Kura can customise and personalise BigQuery data exports of their student engagement to help support their students’ learning journeys. 



The Chrome Education Upgrade was developed to make device management in schools a breeze, so that teachers and students can focus on what’s most important: teaching and learning. Equipped with the Chrome Education Upgrade, schools can utilise essential education features to better support the many ways Chromebooks, the number one device in New Zealand schools, are used in the classroom.


The introduction of Chrome Education Upgrade licences with Workspace for Education now provides schools with an advanced set of Google Education tools and services, tailored for schools, clusters and homeschools, to collaborate, streamline instruction, and keep learning safe and secure.



Our team is working to make digital tools easier and more helpful for everyone and we hope this agreement enables even more educators and students around New Zealand to access and make the most of their digital learning.

