Author Archives: Juston Payne

Bringing more of Google’s productivity apps to Glass Enterprise

Imagine translating instructions into a colleague’s native language in real time. Or instantly crossing off daily tasks as you complete them. That’s part of our vision for augmented reality (AR), which has the potential to transform how companies and their frontline workers access information and make informed decisions while collaborating with their teams. We’re working to create a more natural and intuitive way to seek, interact with, and use information in the real world through AR.

Today, we’re announcing a new early access program focused on bringing more of Google’s productivity apps and collaboration tools to the Glass Enterprise platform. Companies interested in Glass Enterprise most often request AR features that help people communicate and complete tasks. Starting today, we’re inviting Google Workspace enterprise customers to partner with us in testing features focused on task completion, communication, and collaboration.

A more connected and efficient workforce with Glass Enterprise

Since 2017, Glass Enterprise has helped companies use AR so employees can work smarter, faster and hands-free. Working with software publishers that create bespoke solutions for companies, Glass Enterprise has helped customers like DB Schenker increase warehouse efficiency by 10%. Wendy's used Glass Enterprise to support food safety, quality practices and oversight of suppliers and distribution centers, as well as remote training and education for restaurant team members.

In 2020, we announced Google Meet on Glass Enterprise to give teams a first-person view from the wearer’s perspective — enabling real-time collaboration and problem solving. Since we launched Meet on Glass, remote team members have stayed connected for more than 750,000 minutes. Meet on Glass is generally available to anyone with Glass Enterprise.

Testing new features within our early access program

As part of this program, we’re expanding our productivity and collaboration offerings to include three new features across Google Tasks, language capabilities and photos:

  1. See step-by-step instructions: Tasks capabilities on Glass Enterprise provide hands-free access to step-by-step instructions to ensure accuracy and efficiency. Using any supported device, a warehouse manager can create a workflow in Tasks and share it directly with a teammate preparing a shipment, who can see and cross off tasks in real time on the Glass Enterprise display (see the sketch after this list).
  2. Enable natural communication: Language capabilities like translation and transcription on Glass Enterprise help a global workforce understand, train and collaborate, regardless of language. This feature currently supports 15 languages, with plans to add more in the near future. With Glass Enterprise, an employee who doesn’t share a common language with their manager can see direct translations in their line of sight.
  3. Collaborate securely: Glass Enterprise can now save images directly to your Pixel phone so you can seamlessly capture photos and share them across teams with Google Photos. Wearers can easily back up and share images and videos to check inventory, audit for quality, or diagnose and review equipment, all while staying hands-free.
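To make the first feature more concrete, here is a minimal sketch of how a manager’s step-by-step checklist could be created with the public Google Tasks API and its Python client. The shipment name, step titles, and token.json credential file are hypothetical examples, and the sketch covers only the manager’s side; delivering the list to a teammate’s Glass Enterprise display is handled by the platform and isn’t shown here.

```python
# Minimal sketch, assuming an OAuth token authorized for the Google
# Tasks scope already exists in token.json. The list contents below
# are hypothetical examples, not a documented Glass workflow.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/tasks"]
)
service = build("tasks", "v1", credentials=creds)

# Create a dedicated task list for this shipment's workflow.
tasklist = service.tasklists().insert(
    body={"title": "Shipment 1042: packing steps"}
).execute()

steps = [
    "Scan the order barcode",
    "Pick items from bins A3 and B7",
    "Verify quantities against the packing slip",
    "Seal and label the box",
]
# Add each step as a task; the wearer crosses these off one by one.
for step in steps:
    service.tasks().insert(
        tasklist=tasklist["id"], body={"title": step}
    ).execute()
```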

These capabilities are made possible by a new phone-enabled platform that uses the computing power of Google Tensor silicon on Pixel. This platform delivers more powerful and unique AR graphics and features on the Glass Enterprise display. It’s controlled by the Glass Enterprise Companion App, making it easier for workers to set up and manage settings out of the box.

We look forward to expanding access as we learn alongside our partners in the coming months, and to the release of more helpful AR features in upcoming programs.

If you are a current Workspace customer interested in testing how these new AR tools can benefit your team, apply to join our Glass Enterprise early access program.

Building and testing helpful AR experiences

Augmented reality (AR) is opening up new ways to interact with the world around us. It can help us quickly and easily access the information we need — like understanding another language or knowing how best to get from point A to point B. For example, we recently shared an early AR prototype we’ve been testing in our labs that puts real-time translation and transcription directly in your line of sight.

However, testing only in a lab environment has its limitations. So starting next month, we plan to test AR prototypes in the real world. This will allow us to better understand how these devices can help people in their everyday lives. And as we develop experiences like AR navigation, it will help us take factors such as weather and busy intersections into account — which can be difficult, sometimes impossible, to fully recreate indoors.

We’ll begin small-scale testing in public settings with AR prototypes worn by a few dozen Googlers and select trusted testers. These prototypes will include in-lens displays, microphones and cameras — but they’ll have strict limitations on what they can do. For example, our AR prototypes don’t support photography and videography, though image data will be used to enable experiences like translating the menu in front of you or showing you directions to a nearby coffee shop.

It's early, and we want to get this right, so we’re taking it slow, with a strong focus on ensuring the privacy of the testers and those around them. You can read more details about our limited public testing efforts for AR prototypes in the Google Help Center. As we continue to explore and learn what’s possible with AR, we look forward to sharing more updates.

Source: The Keyword


A new angle on your favorite moments with Google Clips

We love photos and videos. They take us back to a special time with our friends and family. Some of our favorites are genuine shots that capture the essence of the moment.


The trouble is, getting those spontaneous shots means that someone has to be the “designated photographer”—always waiting to snap a photo at just the right moment. I would have loved more images of me holding my kids, Clark and Juliet, when they were newborns, but because my wife and I had our hands full, these moments got away from us.


At Google, we’ve been working on a new type of camera that lets you capture more of these special moments, while allowing yourself also to be in the moment.


Today we’re introducing Google Clips, a lightweight, hands-free camera that helps you capture more genuine and spontaneous moments of the people—and pets!—who matter to you. You can set the camera down on the coffee table when the kids are goofing around or clip it to a chair to get a shot of your cat playing with its favorite toy. There’s also a shutter button—both on the camera and in the corresponding app—so you can capture other moments or subjects, whatever you please.

Google Clips is small, weighs almost nothing, and comes with a clip to hold it steady.

We’ve put machine learning capabilities directly into Clips so when you turn it on, the camera looks for good moments to capture. Clips looks for stable, clear shots of people you know. You can help the camera learn who is important to you so when grandma comes to town, you’ll capture the grand entrance.
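As a toy illustration of the kinds of signals a camera like this might weigh, here is a rough sketch that scores a single frame using stock OpenCV tools: variance of the Laplacian as a proxy for a stable, clear shot, plus the presence of detected faces. This is not Clips’ actual on-device model; the threshold and weighting below are arbitrary assumptions.

```python
# Toy illustration only: not Clips' actual model. Scores a frame on
# sharpness (a proxy for a stable, clear shot) and face presence.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def frame_score(frame) -> float:
    """Return a rough 'good moment' score for a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Variance of the Laplacian is a common blur/sharpness measure.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    faces = face_detector.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5
    )
    # Cap the sharpness term and reward each detected face.
    return min(sharpness / 100.0, 1.0) + len(faces)

# Webcam stand-in for the camera's live feed.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(f"score: {frame_score(frame):.2f}")
cap.release()
```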

The camera shoots short motion photos that last several seconds. As you probably guessed, we call these “clips.”

Your clips sync wirelessly and in seconds from the camera to the Google Clips app for Android or iOS. Simply swipe to save or delete your clips, or choose an individual frame to save as a high-resolution still photo. You can view and organize anything you’ve saved in Google Photos (or your favorite gallery app). And if you’re using Google Photos, you can back up unlimited clips for free.


We know privacy and control really matter, so we’ve been thoughtful about this for Clips users, their families, and friends. Clips was designed and engineered with these principles in mind.

  • It looks like a camera, and lights up when it's on so everyone knows what Clips does and when it’s capturing.
  • It works best when used at home with family and close friends. As you capture with Clips, the camera learns to recognize the faces of people that matter to you and helps you capture more moments of them.
  • Finally, all the machine learning happens on the device itself. And just like any point-and-shoot, nothing leaves your device until you decide to save it and share it.  

Google Clips is coming soon to the U.S. for $249. In this first edition, Clips is designed specifically with parents and pet owners in mind. It works best with Pixel, and also works with the Samsung Galaxy S7 and S8 and with iPhone 6 and up.

We hope Google Clips helps you capture more spontaneous moments in life, without any of the hassle.

One of my favorite clips I’ve captured with my family.