Google Nest Device Access Console now available for partners and individuals

Posted by Gabriel Rubinsky, Senior Product Manager

Today, we’re excited to announce the Device Access Console is available.

The Device Access program lets individuals and qualified partners securely access and control Nest products with their apps and solutions.

At the heart of the Device Access program is the Smart Device Management API. Since we announced the program, Alarm.com, Control4, DISH, OhmConnect, NRG Energy, and Vivint Smart Home have successfully completed the Early Access Program (EAP) with Nest thermostat, camera, or doorbell traits. In the coming months, we expect additional devices to be supported and more smart home partners to launch their new integrations as well.

Enhanced privacy and security

The Device Access program is built on a foundation of privacy and security. The program requires partner submission of qualified use cases and completion of a security assessment before being allowed to utilize the Smart Device Management API for commercial use. The program process gives our users the confidence that commercial partners offering integrated Nest solutions have data protections and safeguards in place that meet our privacy and security standards.

Nest device access and control

The Device Access program currently allows qualified partners to integrate directly with Nest devices, enable control of thermostats, access and view camera feeds, and receive doorbell notifications with images. All qualified partner solutions and services will require end-user consent before being able to access, control, and manage Nest devices as part of their service offerings, either through a partner client app or service platform. Ultimately, this gives users more choice in how to control their home and their own generated data.
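
As a concrete illustration, here is a minimal sketch of what an integration against the Smart Device Management API might look like once a user has granted consent. It assumes you have already created a Device Access project, completed the OAuth 2.0 flow, and hold a valid access token; the project ID, device ID, and token shown are placeholders, not real values.

    # Minimal sketch: list authorized devices and set a Nest thermostat
    # heat setpoint via the Smart Device Management API. PROJECT_ID,
    # DEVICE_ID, and ACCESS_TOKEN are placeholders obtained from the
    # Device Access Console and an OAuth 2.0 flow with end-user consent.
    import requests

    PROJECT_ID = "your-device-access-project-id"
    DEVICE_ID = "your-device-id"
    ACCESS_TOKEN = "ya29.example-access-token"

    BASE = "https://smartdevicemanagement.googleapis.com/v1"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

    # List the devices the end user has granted this project access to.
    devices = requests.get(
        f"{BASE}/enterprises/{PROJECT_ID}/devices", headers=headers).json()
    print(devices)

    # Ask a thermostat in HEAT mode to target 21 °C.
    command = {
        "command": "sdm.devices.commands.ThermostatTemperatureSetpoint.SetHeat",
        "params": {"heatCelsius": 21.0},
    }
    resp = requests.post(
        f"{BASE}/enterprises/{PROJECT_ID}/devices/{DEVICE_ID}:executeCommand",
        headers=headers, json=command)
    resp.raise_for_status()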

If you’re a developer or a Nest user interested in the Device Access program or access to the sandbox development environment,* you can find more information on our Device Access site.

  • Device Access for Commercial Developers

    The Device Access program allows trusted partners to offer access, management, and control of Nest devices within the partner’s app, solution, and ecosystem. It allows developers to test all API traits in the sandbox environment, before moving forward with commercial integration. Learn more

  • Device Access for Individuals

    For individual smart home developer enthusiasts, you can register to access the sandbox development environment, allowing you to directly control your own Nest devices through your private integrations and automations. Learn more

We’re doing the work to make Nest devices more secure and protect user privacy long into the future. This means expanding privacy and data security programs, and delivering flexibility for our customers to use thousands of products from partners to create a connected, helpful home.



* Registration consists of the acceptance of the Google API and Nest Device Access Sandbox Terms of Service, along with a one-time, non-refundable nominal fee per account

Doubling down on the edge with Coral’s new accelerator

Posted by The Coral Team


Moving into the fall, the Coral platform continues to grow with the release of the M.2 Accelerator with Dual Edge TPU. Its first application is in Google’s Series One room kits, where it helps remove interruptions and makes audio clearer for better video meetings. To help even more people build products with Coral intelligence, we’re dropping the prices on several of our products. And for those looking to level up their at-home video production, we’re sharing a demo of a pose-based AI director that makes multi-camera video easier to produce.

Coral M.2 Accelerator with Dual Edge TPU

The newest addition to our product family brings two Edge TPU co-processors to systems in an M.2 E-key form factor. While the design requires a dual bus PCIe M.2 slot, it brings enhanced ML performance (8 TOPS) to tasks such as running two models in parallel or pipelining one large model across both Edge TPUs.

The ability to scale across multiple edge accelerators isn’t limited to only two Edge TPUs. As edge computing expands to local data centers, cell towers, and gateways, multi-Edge TPU configurations will be required to help process increasingly sophisticated ML models. Coral allows the use of a single toolchain to create models for one or more Edge TPUs that can address many different future configurations.
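
As a rough sketch of the “two models in parallel” case, the snippet below loads two compiled Edge TPU models and binds each to one of the module’s Edge TPUs using the libedgetpu delegate’s device option ("pci:0" and "pci:1"). The model file names are placeholders, and this is an illustration rather than the pipelining API itself; in a real application each interpreter would typically run on its own thread.

    # Sketch: run two compiled models in parallel, one per Edge TPU on the
    # M.2 Accelerator with Dual Edge TPU. Requires tflite_runtime and
    # libedgetpu; model paths are placeholders.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    def make_interpreter(model_path, device):
        # The "device" delegate option selects a specific Edge TPU,
        # e.g. "pci:0" or "pci:1" on the dual-TPU module.
        delegate = tflite.load_delegate("libedgetpu.so.1", {"device": device})
        interpreter = tflite.Interpreter(
            model_path=model_path, experimental_delegates=[delegate])
        interpreter.allocate_tensors()
        return interpreter

    detector = make_interpreter("detector_edgetpu.tflite", "pci:0")
    classifier = make_interpreter("classifier_edgetpu.tflite", "pci:1")

    def run(interpreter, data):
        inp = interpreter.get_input_details()[0]
        out = interpreter.get_output_details()[0]
        interpreter.set_tensor(inp["index"], data)
        interpreter.invoke()
        return interpreter.get_tensor(out["index"])

    def dummy_input(interpreter):
        d = interpreter.get_input_details()[0]
        return np.zeros(d["shape"], dtype=d["dtype"])

    # Each model runs on its own Edge TPU, so the two invocations do not
    # compete for a single accelerator.
    print(run(detector, dummy_input(detector)).shape)
    print(run(classifier, dummy_input(classifier)).shape)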

A great example of how the Coral M.2 Accelerator with Dual Edge TPU is being used is in the Series One meeting room kits for Google Meet.

The new Series One room kits for Google Meet run smarter with Coral intelligence


Google’s new Series One room kits use our Coral M.2 Accelerator with Dual Edge TPU to bring enhanced audio clarity to video meetings. TrueVoice®, a multi-channel noise cancellation technology, minimizes distractions to ensure every voice is heard with up to 44 channels of echo and noise cancellation, making distracting sounds like snacking or typing on a keyboard a concern of the past.

Enabling the clearest possible communication in challenging environments was the target for the Google Meet hardware team. The consideration of what makes a challenging environment was not limited to unusually noisy environments, such as lunchrooms doubling as conference rooms. Any conference room can present challenging acoustics that make it difficult for all participants to be heard.

The secret to clarity without expensive and cumbersome equipment is to use virtual audio channels and AI driven sound isolation. Read more about how Coral was used to enhance and future-proof the innovative design.

Expanding the AI edge

Earlier this year, we reduced the prices of our prototyping devices and sensors. We are excited to share further price drops on more of our products. Our System-on-Module is now available for $99.99, and our Mini PCIe Accelerator, M.2 Accelerator A+E Key, and M.2 Accelerator B+M Key are now available at $24.99. We hope these lower prices will make our edge AI more accessible to more creative minds around the world. Later this month, our SoM offering will also expand to include 2 GB and 4 GB RAM options.

Multi-cam with AI


As we expand our platform and product family, we continue to keep new edge AI use cases in mind. We are continually inspired by our developer community’s experimentation and implementations. When recently faced with the challenges of multicam video production from home, Markku Lepistö, Solutions Architect at Google Cloud, created this real-time, pose-based multicam tool he aptly dubbed AI Director.

We love seeing such unique implementations of on-device ML and invite you to share your own projects and feedback at [email protected].

For a list of worldwide distributors, system integrators and partners, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform.

Google Summer of Code 2020: Learning Together


In the program’s 16th year, we are pleased to announce that 1,106 students from 65 countries have successfully completed Google Summer of Code (GSoC) 2020! These student projects are the result of three months of collaboration between students, 198 open source organizations, and over 2,000 mentors from 67 countries.

Over the course of the program, we learned that what mattered most to students was the ability to learn, mentorship, and community building. From the student evaluations completed at the end of the program, we collected additional statistics and found some common themes. A word cloud of those responses showed what mattered most to our students: the larger the word in the cloud, the more frequently it was used to describe mentors and open source.

Valuable insights collected from the students:
  • 94% of students think that GSoC helped their programming
  • 96% of students would recommend their GSoC mentors
  • 94% of students will continue working with their GSoC organization
  • 97% of students will continue working on open source
  • 27% of students said GSoC has already helped them get a job or internship
The GSoC program has been an invaluable learning journey for students. In tackling real-world, real-time implementations, they've grown their skills and confidence by leaps and bounds. With the support and guidance of mentors, they’ve also discovered that the value of their work isn’t just for the project at hand, but for the community at large. As newfound contributors, they leave the GSoC program enriched and eager to continue their open source journey.

Throughout its 16 years, GSoC has continued to inspire students to carry on their work and dedication to open source, even after their time with the program has ended. In the years to come, we look forward to many of this year’s students paying it forward by mentoring new contributors in their communities or even starting their own open source projects. Such lasting impact cannot be achieved without the inspiring work of mentors and organization administrators. Thank you all and congratulations on such a memorable year!

By Romina Vicente, Project Coordinator for the Google Open Source Programs Office

New Case Studies About Google’s Use of Go

Go started in September 2007 when Robert Griesemer, Ken Thompson, and I began discussing a new language to address the engineering challenges we and our colleagues at Google were facing in our daily work. The software we were writing was typically a networked server—a single program interacting with hundreds of other servers—and over its lifetime thousands of programmers might be involved in writing and maintaining it. But the existing languages we were using didn't seem to offer the right tools to solve the problems we faced in this complex environment.

So, we sat down one afternoon and started talking about a different approach.

When we first released Go to the public in November 2009, we didn’t know if the language would be widely adopted or if it might influence future languages. Looking back from 2020, Go has succeeded in both ways: it is widely used both inside and outside Google, and its approaches to network concurrency and software engineering have had a noticeable effect on other languages and their tools.

Go has turned out to have a much broader reach than we had ever expected. Its growth in the industry has been phenomenal, and it has powered many projects at Google.
Credit to Renee French for the gopher illustration.

The earliest production uses of Go inside Google appeared in 2011, the year we launched Go on App Engine and started serving YouTube database traffic with Vitess. At the time, Vitess’s authors told us that Go was exactly the combination of easy network programming, efficient execution, and speedy development that they needed, and that if not for Go, they likely wouldn’t have been able to build the system at all.

The next year, Go replaced Sawzall for Google’s search quality analysis. And of course, Go also powered Google’s development and launch of Kubernetes in 2014.

In the past year, we’ve posted sixteen case studies from end users around the world talking about how they use Go to build fast, reliable, and efficient software at scale. Today, we are adding three new case studies from teams inside Google:
  • Core Data Solutions: Google’s Core Data team replaced a monolithic indexing pipeline written in C++ with a more flexible system of microservices, the majority of them written in Go, that help support Google Search.
  • Google Chrome: Mobile users of Google Chrome in lite mode rely on the Chrome Optimization Guide server to deliver hints for optimizing page loads of well-known sites in their geographic area. That server, written in Go, helps deliver faster page loads and lower data usage to millions of users daily.
  • Firebase: Google Cloud customers turn to Firebase as their mobile and web hosting platform of choice. After joining Google, the team completely migrated its backend servers from Node.js to Go for its easy concurrency and efficient execution.
We hope these stories provide the Go developer community with deeper insight into the reasons why teams at Google choose Go, what they use Go for, and the different paths teams took to those decisions.

If you’d like to share your own story about how your team or organization uses Go, please contact us.

By Rob Pike, Distinguished Engineer

Summer updates from Coral

Posted by the Coral Team

Summer has arrived along with a number of Coral updates. We're happy to announce a new partnership with balena that helps customers build, manage, and deploy IoT applications at scale on Coral devices. In addition, we've released a series of updates to expand platform compatibility, make development easier, and improve the ML capabilities of our devices.

Open-source Edge TPU runtime now available on GitHub

First up, our Edge TPU runtime is now open-source and available on GitHub, including scripts and instructions for building the library for Linux and Windows. Customers running a platform that is not officially supported by Coral, including ARMv7 and RISC-V, can now compile the Edge TPU runtime themselves and start experimenting. An open-source runtime is easier to integrate into your customized build pipeline, enabling support for creating Yocto-based images as well as other distributions.

Windows drivers now available for the Mini PCIe and M.2 accelerators

Coral customers can now also use the Mini PCIe and M.2 accelerators on the Microsoft Windows platform. New Windows drivers for these products complement the previously released Windows drivers for the USB accelerator and make it possible to start prototyping with the Coral USB Accelerator on Windows and then to move into production with our Mini PCIe and M.2 products.

New fresh bits on the Coral ML software stack

We’ve also made a number of new updates to our ML tools:

  • The Edge TPU compiler is now version 14.1. It can be updated by running sudo apt-get update && sudo apt-get install edgetpu-compiler, or by following the instructions here
  • Our new Model Pipelining API allows you to divide your model across multiple Edge TPUs. The C++ version is currently in beta and the source is on GitHub
  • New embedding extractor models for EfficientNet, for use with on-device backpropagation. Embedding extractor models are compiled with the last fully-connected layer removed, allowing you to retrain for classification. Previously, only Inception and MobileNet were available, and now retraining can also be done on EfficientNet (a rough sketch of the idea follows this list)
  • New Colab notebooks to retrain a classification model with TensorFlow 2.0 and build C++ examples
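
To make the embedding extractor idea more tangible, here is a rough, illustrative sketch: it runs an embedding extractor model (last fully-connected layer removed) through the generic TensorFlow Lite interpreter and fits a simple nearest-centroid classifier head on the resulting embeddings. The model file name and image arrays are placeholders, and this is not the Coral backpropagation API itself, just the underlying retraining idea.

    # Illustrative sketch of retraining on top of an embedding extractor
    # (not the Coral backprop API). The model file is a placeholder; images
    # are expected as uint8 arrays already resized to the model's input.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    delegate = tflite.load_delegate("libedgetpu.so.1")
    extractor = tflite.Interpreter(
        model_path="efficientnet_embedding_extractor_edgetpu.tflite",
        experimental_delegates=[delegate])
    extractor.allocate_tensors()
    inp = extractor.get_input_details()[0]
    out = extractor.get_output_details()[0]

    def embed(image):
        extractor.set_tensor(inp["index"], image[np.newaxis, ...])
        extractor.invoke()
        return extractor.get_tensor(out["index"])[0].astype(np.float32)

    def train_centroids(images, labels):
        # One centroid per class in embedding space acts as the new head.
        embeddings = np.stack([embed(im) for im in images])
        labels = np.asarray(labels)
        return {c: embeddings[labels == c].mean(axis=0)
                for c in np.unique(labels)}

    def classify(image, centroids):
        e = embed(image)
        return min(centroids, key=lambda c: np.linalg.norm(e - centroids[c]))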

Balena partners with Coral to enable AI at the edge

We are excited to share that the Balena fleet management platform now supports Coral products!

Companies running a fleet of ML-enabled devices on the edge need to keep their systems up-to-date with the latest security patches in order to protect data, model IP, and hardware from being compromised. Additionally, ML applications benefit from being consistently retrained to recognize new use cases with maximum accuracy. Together, Coral and balena bring simplicity and ease to the provisioning, deployment, updating, and monitoring of your ML project at the edge, moving early prototyping seamlessly toward production environments with many thousands of devices.

Read more about all the benefits of Coral devices combined with balena container technology or get started deploying container images to your Coral fleet with this demo project.

New version of Mendel Linux

Mendel Linux (5.0 release Eagle) is now available for the Coral Dev Board and SoM and includes a more stable package repository that provides a smoother updating experience. It also brings compatibility improvements and a new version of the GPU driver.

New models

Last but not least, we’ve recently released BodyPix, a Google person-segmentation model that was previously only available for TensorFlow.js, as a Coral model. This enables real-time, privacy-preserving understanding of where people (and body parts) are in a camera frame. We first demoed this at CES 2020 and it was one of our most popular demos. Using BodyPix, we can remove people from the frame, display only their outlines, and aggregate over time to see heat maps of population flow.

Here are two possible applications of BodyPix: Body-part segmentation and anonymous population flow. Both are running on the Dev Board.

We’re excited to add BodyPix to the portfolio of projects the community is using to extend our models far beyond our demos, including tackling today’s biggest challenges. For example, Neuralet has taken our MobileNet V2 SSD Detection model and used it to implement Smart Social Distancing. Using the bounding boxes from person detection, they can compute a safe-distancing region and let a user know if social distance isn’t being maintained. The best part is that this is done without any sort of facial recognition or tracking; with Coral, we can accomplish this in real time in a privacy-preserving manner.
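
As a rough illustration of that bounding-box approach (not Neuralet’s actual implementation), the sketch below takes detected person boxes, approximates where each person is standing from the bottom-centre of their box, and flags any pair closer than a minimum distance. The threshold is given in pixels here; a real deployment would calibrate it to a physical distance for the camera in use.

    # Rough illustration of distance checking from person-detection boxes
    # (not Neuralet's code). Boxes are (xmin, ymin, xmax, ymax) in pixels;
    # MIN_PIXEL_DISTANCE is a placeholder threshold.
    from itertools import combinations
    import math

    MIN_PIXEL_DISTANCE = 150

    def ground_point(box):
        # Approximate where the person stands: bottom-centre of the box.
        xmin, ymin, xmax, ymax = box
        return ((xmin + xmax) / 2.0, ymax)

    def too_close_pairs(boxes, min_dist=MIN_PIXEL_DISTANCE):
        pairs = []
        for (i, a), (j, b) in combinations(enumerate(boxes), 2):
            if math.dist(ground_point(a), ground_point(b)) < min_dist:
                pairs.append((i, j))
        return pairs

    # Example: three detections, the first two standing close together.
    boxes = [(100, 50, 180, 400), (210, 60, 290, 410), (600, 40, 680, 390)]
    print(too_close_pairs(boxes))  # -> [(0, 1)]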

We can’t wait to see more projects that the community can make with BodyPix. Beyond anonymous population flow, there are endless possibilities with background and body-part manipulation. Let us know what you come up with at our community channels, including GitHub and StackOverflow.

________________________

We are excited to share all that Coral has to offer as we continue to evolve our platform. For a list of worldwide distributors, system integrators and partners, including balena, visit the Coral partnerships page. Please visit Coral.ai to discover more about our edge ML platform and share your feedback at [email protected].

Bringing internet access to millions more Indians with Jio



Today we signed an agreement to invest $4.5 billion (INR 33,737 crore) in Jio Platforms Ltd, taking a 7.73 percent stake in the company, pending regulatory review in India. This is the first investment from the Google for India Digitization Fund announced earlier this week, which aims to accelerate India’s digital economy over the next five to seven years through a mix of equity investments, partnerships, and operational, infrastructure and ecosystem investments.


Google and Jio Platforms have entered into a commercial agreement to jointly develop an entry-level affordable smartphone with optimizations to the Android operating system and the Play Store. Together we are excited to rethink, from the ground up, how millions of users in India can become owners of smartphones. This effort will unlock new opportunities, further power the vibrant ecosystem of applications and push innovation to drive growth for the new Indian economy.


This partnership comes at an exciting but critical stage in India’s digitization. It’s been amazing to see the changes in technology and network plans that have enabled more than half a billion Indians to get online. At the same time, the majority of people in India still don’t have access to the internet, and fewer still own a smartphone—so there’s much more work ahead. 


Our mission with Android has always been to bring the power of computing to everyone, and we’ve been humbled by the way Indians have embraced Android over recent years. We think the time is right to increase our commitment to India significantly, in collaboration with local companies, and this partnership with Jio is the first step. We want to work with Jio and other leaders in the local ecosystem to ensure that smartphones—together with the apps and services in the Play Store—are within reach for many more Indians across the country. And we believe the pace of Indian innovation means that the experiences we create for India can ultimately be expanded to the rest of the world.  


For Google, our work in India goes to the heart of our efforts to organize the world’s information and make it universally accessible. We opened our first Indian campuses in Bangalore and Hyderabad in 2004. Since then, we’ve made India central to our Next Billion Users initiative—designed to ensure the internet is useful for people coming online for the first time. We’ve improved our apps and services so they’re relevant in more Indian languages and created offline versions for those facing network constraints. We’ve extended our tools to small businesses, sought to close digital divides with initiatives like Internet Saathi, and we’re increasingly focused on helping India harness AI. More and more, apps we create for India—like Google Pay or our Read Along language-learning app—influence what we do globally. 


Jio, for its part, has made an extraordinary contribution to India’s technological progress over the past decade. Its investments to expand telecommunications infrastructure, low-cost phones and affordable internet have changed the way its hundreds of millions of subscribers find news and information, communicate with one another, use services and run businesses. Today, Jio is increasing its focus on the development of areas like digital services, education, healthcare and entertainment that can support economic growth and social inclusion at a critical time in the country’s history. 


In partnership, we can draw on each other’s strengths. We look forward to bringing smartphone access to more Indians—and exploring the many ways we can work together to improve Indians’ lives and advance India’s digital economy.

Posted by Sanjay Gupta, Country Head & VP, India, and Sameer Samat, VP, Product Management

Supporting local communities for Pride 2020

In August 1966, trans women, drag queens, and other members of the LGBTQ+ community fought for their rights and fair treatment outside Compton’s Cafeteria in San Francisco's Tenderloin neighborhood. Three years later on June 28, 1969, the LGBTQ+ community, once again, rose up against inequitable treatment and police misconduct at the Stonewall Inn. For both of these historic moments, LGBTQ+ people of colour—and in particular Black trans women and trans women of colour—helped lead the fight against hate and injustice. In many respects, the modern day LGBTQ+ movement for equality was born from these rebellious acts and the many events preceding them.

Pride should still be a protest. For those within the BIPOC and LGBTQ+ community—especially Black+ trans women—the injustices we're seeing today are a reminder of past and present struggles for equity, justice, and equality under the law. We believe communities must show up for one another, and we stand in solidarity with the Black+ community across the world, honoring the longstanding Pride tradition of unity.

This year for Pride, we’re focusing on helping local organizations in our community that are creating change for LGBTQ+ people of colour, trans and non-binary communities, LGBTQ+ families, and many more.

Supporting local organizations

Local LGBTQ+ organizations are providing critical services for those in need, whether they're helping someone find a bed in a shelter, offering skills and training services, or advocating for more inclusive and equitable policies. Lives depend on these organizations.

One of these organizations is Pride Toronto. Founded in 1981, Pride Toronto has a legacy of purposeful activism for equality and sharing the diverse stories and perspectives of the LGBTQ+ community. This year, we’re proud to be a Gold sponsor for Pride Toronto and support their work to bring the annual Pride celebrations online. The diverse programming has everything from trivia and workouts with Olympians to club nights and an online Pride parade on June 28.

Digital skills training for local businesses

To support LGBTQ+ small businesses and professionals, we’re partnering with Venture Out and Tech Proud on Digital Skills Pride Week from June 22 to 26. In collaboration with other tech companies, we’ll host sessions for small businesses on maintaining productivity, best practices for remote working, creating a website, and building an online presence. Businesses can learn more and register here.

Together, virtually

This year, Pride will feel different for many of us. We’re finding ways to bring people together virtually, including a toolkit that helps organizations host remote Pride events.

We’ve also launched a collection of videos on our YouTube Canada channel to elevate LGBTQ+ voices and share historic Canadian civil rights moments. Dive into interviews from The Queer Network, and listen to personal stories from a collection of Canadian creators, including Julie Vu, GigiGorgeous and AsapSCIENCE.


While Pride is usually marked by jubilant marches and beautiful parade floats, it’s much more than that. For us, Pride is about the ongoing struggle for equity, visibility, and acceptance. We’ll be spending Pride as allies to our Black+ community members, reflecting on the many LGBTQ+ people of colour who started our liberation movement decades ago, and finding ways to remedy systemic injustices.

Supporting Educators as they teach from home

Editor’s note: This guest post is authored by Michelle Armstrong, Director of EdTechTeam Canada.

“Post your questions in the Chat. We’re here to help.” This is a phrase we’ve gotten used to saying several times a day as our team supports teachers across the country through virtual learning.

A few months ago, schools, universities and colleges across Canada closed down because of concerns over the transmission of COVID-19. The entire Canadian education system had to quickly address the logistical challenges brought about by not being together in a physical classroom. Our facilitators at EdTechTeam Canada geared up immediately, and worked with Google to help parents, teachers and students make the most out of the digital resources available to them.

Since then, we’ve had the opportunity to connect with teachers from all across the country through our live virtual training sessions. With significant funding from our friends at Google Canada, these interactive workshops cover fundamental learning tools, including Google Classroom, Google Meet and Docs, as well as student engagement. By offering up to ten sessions a day along with personal follow-ups, we’ve now hosted hundreds of live workshops and reached tens of thousands of educators across the country, representing over 350 Divisions, Districts and School Boards. We’ve always prided ourselves on delivering engaging, interactive professional development. Thankfully, we are still able to achieve that; we just happen to be joining face-to-face from our own living rooms.

Through these interactive discussions with educators, we've seen incredible resilience and tenacity, and a desire to accommodate the needs of learners during this unprecedented time. Here are a few of our key takeaways for educators to consider when it comes to virtual learning.

Learn now, thrive long-term

While some may be scrambling now, being thrust into virtual teaching has created an opportunity for educators to learn new digital skills that will help both inside and outside the classroom. We’ve seen teachers sign up for introductory sessions, as well as the more advanced sessions like using Jamboard, creating quizzes and more. Teachers can check the EdTechTeam Canada website to sign up for live workshops, or review recorded sessions.

Explore new ways to engage students

When you aren’t physically in the classroom with students, it can be challenging to measure student engagement. Simple tricks like using comments within Google Docs and Classroom are a great way to have a two-way dialogue with students as you share feedback. Using Google Sites can also help keep students and parents up to date with important reminders. You can take a look at more resources, tools and tips here.

Supporting learners is more important than ever

With parents now wearing multiple hats, from parent to teacher and everything in between, we understand how important it is to support families that are learning at home. Share resources with parents and students who need a bit of practice with digital learning. Some helpful workshops include Get Started with Google Meet, and Google Classroom for Parents.

Our team has been inspired by the remarkable work of Canadian educators and school leaders who continue to adapt and innovate their processes through remote learning. To our educators - we thank you.


For any educators looking to join one of our live workshops, sign up here. We’ll continue to offer these workshops for the next few weeks, as the 2020 school year comes to an end. For more ideas to support educators during this time, try Teach from Home, a central hub of Google for Education tips and tools to help educators keep teaching, even when they aren’t in the classroom.