Tag Archives: ios

New look and feel for Google Meet mobile apps

Quick launch summary

We’re updating the user interface (UI) of the Google Meet mobile apps for Android and iOS. The new mobile UI will have the same look and feel as that of the meeting experience in the Gmail app.

A new look and feel for Google Meet mobile apps

In addition to the revamped design, you’ll now see a New Meeting button. When you tap on this button, you’ll see three options:
  • Get meeting joining info to share with others.
  • Start a Meet call instantly.
  • Schedule a new meeting in Google Calendar.

New options for new meetings in Google Meet mobile apps

This new UI is rolling out on iOS now; check back for an update on this post when the rollout to Android begins.

Getting started

  • Admins: There is no admin control for this feature.
  • End users: This new UI will appear by default once you’ve upgraded your Meet iOS app to version 45 or above.

Rollout pace

  • iOS: Rolling out now.
  • Android: Rollout has not yet begun; check back on this post for an update when it starts.

Availability

  • Available to all G Suite customers and users with personal Google Accounts

ML Kit Pose Detection Makes Staying Active at Home Easier

Posted by Kenny Sulaimon, Product Manager, ML Kit; Chengji Yan and Areeba Abid, Software Engineers, ML Kit

ML Kit logo

Two months ago we introduced the standalone version of the ML Kit SDK, making it even easier to integrate on-device machine learning into mobile apps. Since then we’ve launched the Digital Ink Recognition API, and also introduced the ML Kit early access program. Our first two early access APIs were Pose Detection and Entity Extraction. We’ve received an overwhelming amount of interest in these new APIs and today, we are thrilled to officially add Pose Detection to the ML Kit lineup.

ML Kit Overview

A New ML Kit API, Pose Detection


Examples of ML Kit Pose Detection

ML Kit Pose Detection is an on-device, cross-platform (Android and iOS), lightweight solution that tracks a subject's physical actions in real time. With this technology, building a one-of-a-kind experience for your users is easier than ever.

The API produces a full-body, 33-point skeletal match that includes facial landmarks (ears, eyes, mouth, and nose), along with hand and foot tracking. The API was also trained on a variety of complex athletic poses, such as yoga positions.

Skeleton image detailing all 33 landmark points

Under The Hood

Diagram of the ML Kit Pose Detection Pipeline

The power of the ML Kit Pose Detection API is in its ease of use. The API builds on the cutting-edge BlazePose pipeline and allows developers to build great experiences on Android and iOS with little effort. We offer a full-body model, support for both video and static image use cases, and have added multiple pre- and post-processing improvements to help developers get started with only a few lines of code.

The ML Kit Pose Detection API uses a two-step process for detecting poses. First, the API combines an ultra-fast face detector with a prominent-person detection algorithm to detect when a person has entered the scene. The API detects a single (highest-confidence) person in the scene and requires the user's face to be present to ensure optimal results.

Next, the API applies a full body, 33 landmark point skeleton to the detected person. These points are rendered in 2D space and do not account for depth. The API also contains a streaming mode option for further performance and latency optimization. When enabled, instead of running person detection on every frame, the API only runs this detector when the previous frame no longer detects a pose.

The ML Kit Pose Detection API also features two operating modes, “Fast” and “Accurate”. With “Fast” mode enabled, you can expect a frame rate of 30+ FPS on a modern Android device, such as a Pixel 4, and 45+ FPS on a modern iOS device, such as an iPhone X. With “Accurate” mode enabled, you can expect more stable x,y coordinates on both types of devices, but a slower frame rate overall.
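
As a concrete sketch of choosing between these modes on Android, the Kotlin snippet below configures one detector per mode. It follows the public ML Kit Pose Detection API surface; treat it as a minimal illustration rather than the announcement's own sample code.

import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.accurate.AccuratePoseDetectorOptions
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// "Fast" mode, tuned for live camera streams.
val fastDetector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build())

// "Accurate" mode, for more stable x,y coordinates at a lower frame rate.
val accurateDetector = PoseDetection.getClient(
    AccuratePoseDetectorOptions.Builder()
        .setDetectorMode(AccuratePoseDetectorOptions.SINGLE_IMAGE_MODE)
        .build())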

Lastly, we’ve also added a per point “InFrameLikelihood” score to help app developers ensure their users are in the right position and filter out extraneous points. This score is calculated during the landmark detection phase and a low likelihood score suggests that a landmark is outside the image frame.
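
Continuing that sketch, processing a frame and filtering by “InFrameLikelihood” might look like the following; the bitmap input and the 0.8 threshold are illustrative assumptions, not values prescribed by the API.

import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseLandmark

val image = InputImage.fromBitmap(bitmap, /* rotationDegrees= */ 0)
fastDetector.process(image)
    .addOnSuccessListener { pose ->
        // Keep only landmarks the model believes are inside the image frame.
        val visibleLandmarks =
            pose.allPoseLandmarks.filter { it.inFrameLikelihood > 0.8f }
        // Individual points can also be queried directly.
        val nose = pose.getPoseLandmark(PoseLandmark.NOSE)
    }
    .addOnFailureListener { e -> /* handle the error */ }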

Real World Applications


Examples of a pushup and squat counter using ML Kit Pose Detection

Keeping up with regular physical activity is one of the hardest things to do while at home. We often rely on gym buddies or physical trainers to help us with our workouts, but this has become increasingly difficult. Apps and technology can often help with this, but with existing solutions, many app developers are still struggling to understand and provide feedback on a user’s movement in real time. ML Kit Pose Detection aims to make this problem a whole lot easier.

The most common applications for pose detection are fitness and yoga trackers. It’s possible to use our API to track pushups, squats, and a variety of other physical activities in real time. These complex use cases can be achieved by applying angle heuristics to the output of the API, by tracking the distance between joints, or by feeding the landmarks to your own proprietary classifier model.
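
As one illustration of the angle-heuristic approach, the Kotlin helper below computes the angle formed at a middle joint from three detected landmarks; a squat counter could, for example, watch the hip-knee-ankle angle cross a threshold. This mirrors the approach in our classification tips, and the example landmark names in the comment are hypothetical.

import com.google.mlkit.vision.pose.PoseLandmark
import kotlin.math.abs
import kotlin.math.atan2

// Angle in degrees at midPoint, formed by the segments to firstPoint and lastPoint.
fun getAngle(firstPoint: PoseLandmark, midPoint: PoseLandmark, lastPoint: PoseLandmark): Double {
    var result = Math.toDegrees(
        (atan2(lastPoint.position.y - midPoint.position.y,
               lastPoint.position.x - midPoint.position.x) -
         atan2(firstPoint.position.y - midPoint.position.y,
               firstPoint.position.x - midPoint.position.x)).toDouble())
    result = abs(result)
    if (result > 180) result = 360.0 - result // Report the smaller of the two angles.
    return result
}

// Example (hypothetical variables): a knee angle near 90 degrees suggests a deep squat.
// val kneeAngle = getAngle(leftHip, leftKnee, leftAnkle)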

To help you jump-start pose classification, we are sharing additional tips on how to use angle heuristics to classify popular yoga poses. Check it out here.

Learning to Dance Without Leaving Home

Learning a new skill is always tough, but learning to dance without the aid of a real time instructor is even tougher. One of our early access partners, Groovetime, has set out to solve this problem.

With the power of ML Kit Pose Detection, Groovetime allows users to learn their favorite dance moves from popular short-form dance videos, while giving users automated real time feedback on their technique. You can join their early access beta here.

Groovetime App using ML Kit Pose Detection

Staying Active Wherever You Are

Our Pose Detection API is also helping adidas Training, another one of our early access partners, build a virtual workout experience that will help you stay active no matter where you are. This one-of-a-kind innovation will analyze and give feedback on your movements, using nothing more than your phone. Integration into the adidas Training app is still in the early phases of the development cycle, but stay tuned for more updates in the future.

How to get started

If you would like to start using the Pose Detection API in your mobile app, head over to the developer documentation or check out the sample apps for Android and iOS to see the API in action. For questions or feedback, please reach out to us through one of our community channels.

Google Docs mobile improvements: link previews and Smart Compose

What’s changing 

We’re improving the Android and iOS experiences for Google Docs users with two new features. These were previously available on the web, and are now available on mobile as well: 
  • Link previews, which help you get context from linked content without bouncing between apps and screens. 
  • Smart Compose, which helps you write faster and with more confidence. 

Read our Cloud Blog post to learn more about how these and other launches can help you collaborate from anywhere with Google Docs, Sheets, and Slides on mobile.


Who’s impacted 

End users 


Why it’s important 

These features build on other recent launches that improve the mobile user experience, including a new commenting interface in Docs on Android, dynamic email notifications for Gmail on mobile, and dark mode for Docs, Sheets, and Slides on Android.

Together, these features will help make it easier and quicker not only to read and review content on mobile devices, but also to create and collaborate on content, wherever you are. 


Additional details 

Link previews 
Linked content can enrich documents with useful information, but if opening a link means switching to another app or window, that can be distracting and disrupt your reading flow. Earlier this year, we launched link previews on the web. Now, we’re adding link previews to mobile as well. When you tap a link in Docs, dynamic information about the content will appear. This may include the title, description, and thumbnail images from public web pages, or the owner and latest activity for linked Drive files. This can help you decide whether to open linked content while staying in context.

Preview links in Google Docs on the web 


Preview links in Google Docs on mobile devices 

Smart Compose 

Smart Compose on mobile will help you write documents faster and reduce the chance of spelling and grammatical errors when working on the go. When a Smart Compose suggestion appears, simply swipe right to accept it. See more in our announcement for the feature on the web.

Getting started 

Admins: These features will be ON by default. There are no admin controls for them. 

End users: 
  • Link previews: This feature will be on by default. There is no setting to control the feature. 
  • Smart Compose: This feature may be on or off depending on whether you have turned it on or off on the web. When enabled, you’ll automatically see suggestions; swipe right to accept a suggestion. Visit the Help Center to learn more about using Smart Compose in Google Docs.

Rollout pace 

  • Link previews in Docs, iOS and Web
  • Link previews in Docs, Android
  • Smart Compose in Docs, iOS
  • Smart Compose in Docs, Android

Availability 

  • Link previews in Docs: Available to all G Suite customers and users with personal accounts. 
  • Smart Compose in Docs: Available to all G Suite customers. Not available to users with personal accounts. 

Simplify management of company-owned iOS devices with new Apple Business Manager integration

What’s changing 

We’re launching an integration between Google endpoint management and Apple Business Manager (formerly the Device Enrollment Program, or DEP). This makes it possible to securely distribute and manage company-owned iOS devices from the Google Admin console. 

The integration will enable G Suite Enterprise, G Suite Enterprise for Education, G Suite Enterprise Essentials, and Cloud Identity Premium customers to set Google endpoint management as an MDM server on Apple Business Manager. 


Who’s impacted 

Admins 


Why you’d use it 

With the integration between Google endpoint management and Apple Business Manager: 
  • Admins can manage company-owned iOS devices directly from the Admin console, in the same location as they manage other devices that access their organization’s data. 
  • Admins can control a wider range of features, including app installation, Apple app usage, authentication methods, and more, as shown in this table of supervised company-owned iOS device settings.
  • Apple Business Manager and Google endpoint management automatically sync for seamless device management. 
  • Users follow a simple device setup and enrollment through the built-in setup wizard. 
Apple Business Manager setup in the Admin console



Getting started 

  • Admins: To use this feature, you need to enable advanced mobile management for iOS devices in applicable OUs and have an Apple Business Manager account set up. Visit our Help Center to learn more about how to set up company-owned iOS device management.
  • End users: There is no end user setting for this feature. Once provisioned by an admin, users can follow the device setup wizard steps to enroll the device. Once the setup wizard is complete, the Google Device Policy app will automatically install and the user should sign in to it with their G Suite or Cloud Identity account. 

Rollout pace 

Availability 

  • Available to G Suite Enterprise, G Suite Enterprise for Education, G Suite Enterprise Essentials, and Cloud Identity Premium customers 
  • Not available to G Suite Basic, G Suite Business, G Suite for Education, G Suite for Nonprofits, and G Suite Essentials customers 

Update your G Suite mobile and desktop apps before August 12, 2020, to ensure they continue working

Quick summary

In 2018, we began making changes to our API and service infrastructure to improve performance and security. As a result of these changes, some older versions of G Suite desktop and mobile apps may stop working on August 12, 2020. In particular, versions released prior to December 2018 may be impacted.

To ensure their workflows are not disrupted, your users should update the following Google apps to the latest versions as soon as possible:

Getting started
  • Admins: Encourage your users to upgrade their apps. If you deploy Drive File Stream to your organization, ensure you’re using the latest version.
  • End users: Upgrade the apps listed above to the latest versions as soon as possible.
Rollout pace

Availability
  • This impacts all G Suite customers and users with personal Google accounts.

On-device machine learning solutions with ML Kit, now even easier to use

Posted by Christiaan Prins, Product Manager, ML Kit and Shiyu Hu, Tech Lead Manager, ML Kit

ML Kit logo

Two years ago at I/O 2018 we introduced ML Kit, making it easier for mobile developers to integrate machine learning into your apps. Today, more than 25,000 applications on Android and iOS make use of ML Kit’s features. Now, we are introducing some changes that will make it even easier to use ML Kit. In addition, we have a new feature and a set of improvements we’d like to discuss.

A new ML Kit SDK, fully focused on on-device ML

ML Kit API Overview

ML Kit's APIs are built to help you tackle common challenges in the Vision and Natural Language domains. We make it easy to recognize text, scan barcodes, track and classify objects in real time, translate text, and more.
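
As a minimal example of how little code these APIs require, here is a hedged Kotlin sketch of barcode scanning with the standalone SDK; the bitmap variable is an illustrative placeholder for a camera frame or decoded image.

import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

val scanner = BarcodeScanning.getClient()
scanner.process(InputImage.fromBitmap(bitmap, /* rotationDegrees= */ 0))
    .addOnSuccessListener { barcodes ->
        // Each detected barcode exposes its decoded payload.
        barcodes.forEach { barcode -> println(barcode.rawValue) }
    }
    .addOnFailureListener { e -> /* handle the error */ }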

The original version of ML Kit was tightly integrated with Firebase, and we heard from many of you that you wanted more flexibility when implementing it in your apps. As a result, we are now making all the on-device APIs available in a new standalone ML Kit SDK that no longer requires a Firebase project. You can still use both ML Kit and Firebase to get the best of both products if you choose to.

With this change, ML Kit is now fully focused on on-device machine learning, giving you access to the unique benefits that on-device versus cloud ML offers:

  • It’s fast, unlocking real-time use cases: since processing happens on the device, there is no network latency. This means we can run inference on a stream of images or video, or multiple times a second on text strings.
  • It works offline: you can rely on our APIs even when the network is spotty or your app’s end user is in an area without connectivity.
  • Privacy is retained: since all processing is performed locally, there is no need to send sensitive user data over the network to a server.

Naturally, you still get access to Google’s on-device models and processing pipelines, all accessible through easy-to-use APIs, and offered at no cost.

All ML Kit resources can now be found on our new website, where we’ve made it a lot easier to access sample apps, API reference docs, and the community channels that are there to help you if you have questions.

Object detection & tracking gif
Text recognition + Language ID + Translate gif

What does this mean if I already use ML Kit today?

If you are using ML Kit for Firebase’s on-device APIs in your app today, we recommend you migrate to the new standalone ML Kit SDK to benefit from new features and updates. For more information and step-by-step instructions to update your app, please follow our Migration guide. The cloud-based APIs, model deployment, and AutoML Vision Edge remain available through Firebase Machine Learning.
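
As a rough before-and-after sketch of the migration for on-device text recognition (the Migration guide is the authoritative reference; the inputImage variable is an illustrative placeholder):

import com.google.mlkit.vision.text.TextRecognition

// Before, with ML Kit for Firebase:
// val detector = FirebaseVision.getInstance().onDeviceTextRecognizer

// After, with the standalone ML Kit SDK (no Firebase project required):
val recognizer = TextRecognition.getClient()
recognizer.process(inputImage)
    .addOnSuccessListener { visionText -> println(visionText.text) }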

Shrink your app footprint with Google Play Services

Apart from making ML Kit easier to use, developers also asked us to ship ML Kit through Google Play Services, which results in a smaller app footprint and lets the model be reused between apps. In addition to Barcode scanning and Text recognition, we have now added Face detection / contour (model size: 20MB) to the list of APIs that support this functionality.

// Face detection / Face contour model
// Delivered via Google Play Services outside your app's APK…
implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.0.0'

// …or bundled with your app's APK
implementation 'com.google.mlkit:face-detection:16.0.0'

Jetpack Lifecycle / CameraX support

Android Jetpack Lifecycle support has been added to all APIs. Developers can use addObserver to automatically manage teardown of ML Kit APIs as the app goes through screen rotation or closure by the user / system. This makes CameraX integration easier. With this release, we are also recommending that developers adopt CameraX in their apps due to the ease of integration and image quality improvements (compared to Camera1) on a wide range of devices.

// ML Kit now supports Lifecycle
val recognizer = TextRecognition.getClient()
lifecycle.addObserver(recognizer)

// ...

// Just like CameraX
val camera = cameraProvider.bindToLifecycle(
    /* lifecycleOwner= */ this,
    cameraSelector, previewUseCase, analysisUseCase)

For an overview of all recent changes, check out the release notes for the new SDK.

Codelab of the day - ML Kit x CameraX

To help you get started with the new ML Kit and its support for CameraX, we have created this codelab to Recognize, Identify Language and Translate text. If you have any questions regarding the codelab, please raise them on Stack Overflow and tag them with [google-mlkit]. Our team will monitor this tag.

screenshot of app running

Early access program

Through our early access program, developers have an opportunity to partner with the ML Kit team and get access to upcoming features. Two new APIs are now available as part of this program:

  • Entity Extraction - Detect entities in text & make them actionable. We have support for phone numbers, addresses, payment numbers, tracking numbers, date/time and more.
  • Pose Detection - Low-latency pose detection supporting 33 skeletal points, including hands and feet tracking.

If you are interested, head over to our early access page for details.

pose detection on man jumping rope

Tomorrow - Support for custom models

ML Kit's turn-key solutions are built to help you tackle common challenges. However, if you need a more tailored solution, one that requires custom models, you have typically had to build an implementation from scratch. To help, we are now providing the option to swap out the default Google models for a custom TensorFlow Lite model. We’re starting with the Image Labeling and Object Detection and Tracking APIs, which now support custom image classification models.
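
A hedged sketch of what this looks like for Image Labeling with the standalone SDK is shown below; the asset path and confidence threshold are illustrative assumptions.

import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

// A custom TensorFlow Lite classifier bundled in the app's assets (hypothetical path).
val localModel = LocalModel.Builder()
    .setAssetFilePath("custom_models/flowers.tflite")
    .build()

// Swap the default Google model for the custom one.
val options = CustomImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.7f)
    .build()
val labeler = ImageLabeling.getClient(options)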

Tomorrow, we will dive a bit deeper into how to find or train a TensorFlow Lite model and use it either with ML Kit, or with Android Studio’s new ML binding functionality.

New data exfiltration protections for G Suite data on iOS devices

What’s changing 

We’re adding new security controls that admins can use to protect sensitive company data on iOS devices. Admins can now choose to:

  • Restrict copying and pasting of data from G Suite accounts to other accounts. This can prevent corporate data from being exfiltrated to personal accounts.
  • Restrict users’ ability to drag and drop files out of specific apps within their G Suite account.

At launch, admin controls will apply to five G Suite iOS apps: Gmail, Drive, Docs, Sheets, and Slides. This feature is available to G Suite Enterprise, G Suite Enterprise for Education, and Cloud Identity Premium customers. Users will still be able to copy and paste and drag and drop from personal accounts to G Suite accounts. Protections are available to devices managed with G Suite’s basic or advanced mobile device management, as well as devices with basic mobile management alongside a separate enterprise mobility management (EMM) solution.

Who’s impacted 

Admins

Why it’s important 

Without these features, admins have limited controls to prevent users from moving corporate data between corporate and personal accounts on the same iOS device. While admins can prevent sharing files between managed and unmanaged apps, users can still share data between accounts when apps support multiple accounts, or via cut/copy/paste actions. For example, iOS users can copy the text of a corporate email into a personal account. This introduces the potential for data leaks and reduces the overall security of your corporate data on iOS.

The admin controls introduced in this launch will help increase protections and make it more difficult for corporate data to be accidentally or intentionally shared to a personal account. Similar protections are already available on Android devices through Work Profiles.

See our post on the Cloud Blog to learn how this and other launches can help G Suite customers stay secure.

Getting started 


  • Admins: This feature will be OFF by default and can be enabled at the organizational unit (OU) level. Visit the Help Center to learn more about data protection on iOS devices.
  • End users: There is no end-user setting for this feature. If a user tries to perform a restricted copy and paste action, the text “This info can only be shared within your organization’s G Suite apps” will paste instead of the text they copied. 


Admin controls for data exfiltration protection on iOS 

Rollout pace 


  • This feature is already available for all domains. 

Availability 


  • Available to G Suite Enterprise, G Suite Enterprise for Education customers and Cloud Identity Premium customers 
  • Not available to G Suite Basic, G Suite Business, G Suite for Education, G Suite for Nonprofits customers, and Cloud Identity Free customers 

Gmail for iOS now allows you to add attachments from the Files app

Quick launch summary 

In the Gmail iOS app, when composing or replying to an email, you can now upload attachments from the Files app on your iPhone or iPad.



Getting started 


  • End users: This feature will be available by default. In the Gmail iOS app, when composing or replying to an email, tap the attachment icon and scroll to the “Attachments” section, then tap the folder icon to choose an attachment from the Files app.

Rollout pace 


  • Rapid Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on February 12, 2020 
  • Scheduled Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on February 12, 2020 

Availability 


  • Available to all Gmail iOS users.

Dynamic email in Gmail available on Android and iOS

Quick launch summary 

We previously announced dynamic emails for Gmail on the web. This functionality is now rolling out to Gmail on Android and iOS.

Dynamic email brings the richness and interactivity of AMP to your mobile device, allowing you to take action directly within a message. You can respond to a comment, RSVP to an event, manage subscription preferences, and more.


Dynamic email content can be kept up to date, which means you can open an email and view the current status of an e-commerce order or the latest job postings.

Availability

Rollout details
  • Rapid Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on November 21, 2019
  • Scheduled Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on November 21, 2019

G Suite editions
  • Available to all G Suite editions

On/off by default?
  • Dynamic email is ON by default.
