

Beginning today, the Gemini mobile app for Android and iOS is available to Google Workspace users who access Gemini as a core service. With the Gemini mobile app, users can do research or find quick answers while on the go. They can also use their phone's camera to take pictures of handwritten notes and export them to Google Docs or Gmail, or create presentation-ready visualizations of a chart drawn on a whiteboard. All of this comes with the enterprise data protections Google Workspace customers are accustomed to.
As part of this rollout, we’re also extending access to the Gemini mobile app to all Education users, both as a core service with a qualifying edition and as an additional service.
We're excited to announce Gaze Link as the winner of the Best Android App for our Gemini API Developer Competition!
This innovative app demonstrates the potential of the Gemini API to provide a communication system for individuals with amyotrophic lateral sclerosis (ALS) who develop severe motor and verbal disabilities, enabling them to type sentences with only their eyes.
Gaze Link uses Google’s Gemini 1.5 Flash model to predict the user’s intended sentence based on a few key words and the context of the conversation.
For example, if the context is “Is the room temperature ok?” and the user replies “hot AC two”, the app uses Gemini to generate the full sentence “I am hot, can you turn the AC down by two degrees?”.
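The keyword-to-sentence expansion described above can be sketched as a single prompt to Gemini 1.5 Flash through the Google AI Python SDK (`google-generativeai`). This is a minimal illustration only: the `build_prompt` helper and its exact wording are our own assumptions, not Gaze Link's actual prompt.

```python
# Sketch of keyword-to-sentence expansion with Gemini 1.5 Flash.
# build_prompt() and its wording are illustrative assumptions, not the
# prompt Gaze Link actually uses.

def build_prompt(context: str, keywords: str, language: str = "English") -> str:
    """Assemble a prompt asking Gemini to expand gaze-typed keywords
    into a full sentence, given the conversational context."""
    return (
        "You are helping a person with ALS communicate.\n"
        f'Conversation context: "{context}"\n'
        f'The user gaze-typed these keywords: "{keywords}"\n'
        f"Reply with a single natural {language} sentence that expresses "
        "the user's intent."
    )

prompt = build_prompt("Is the room temperature ok?", "hot AC two")
print(prompt)

# The model call itself needs an API key, so it is shown commented out:
# import google.generativeai as genai
# genai.configure(api_key="YOUR_API_KEY")
# model = genai.GenerativeModel("gemini-1.5-flash")
# print(model.generate_content(prompt).text)
```

Because the context and keywords are passed as plain text, switching the output language is just a change to the `language` argument, which is how a multilingual flow like Gaze Link's can stay simple.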
The Gaze Link team took advantage of Gemini 1.5 Flash’s multilingual capabilities to let the app generate sentences in English, Spanish, and Chinese, the three languages the app currently supports.
We were truly impressed by the Gaze Link app. The team combined the Gemini API with ML Kit Face Detection to empower individuals with ALS, providing them with a powerful communication system that is both accessible and affordable.
With Gemini 1.5 Flash currently supporting 38 languages, Gaze Link could add support for more languages in the future. In addition, the model’s multimodal abilities could let the team enhance the user experience by integrating image, audio, and video to augment the context of the conversation.
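If the team does pursue multimodal context, the Gemini API accepts mixed text-and-image inputs in a single request. A sketch under stated assumptions: the `build_multimodal_parts` helper and the idea of attaching a photo of the user's surroundings are our own illustrations, not a feature of the current app.

```python
# Sketch: attaching an image to the conversation context so Gemini can
# ground its sentence prediction in what the user sees.
# build_multimodal_parts() and the image argument are illustrative assumptions.

def build_multimodal_parts(context: str, keywords: str, image=None) -> list:
    """Assemble the mixed text/image parts list that
    GenerativeModel.generate_content accepts."""
    parts = [
        f'Conversation context: "{context}"',
        f'Gaze-typed keywords: "{keywords}"',
        "Reply with one natural sentence expressing the user's intent.",
    ]
    if image is not None:
        # e.g. a PIL.Image of the user's surroundings
        parts.insert(0, image)
    return parts

parts = build_multimodal_parts("Is the room temperature ok?", "hot AC two")

# With an API key and an image, the call would look like:
# import google.generativeai as genai
# from PIL import Image
# genai.configure(api_key="YOUR_API_KEY")
# model = genai.GenerativeModel("gemini-1.5-flash")
# response = model.generate_content(build_multimodal_parts(
#     "Is the room temperature ok?", "hot AC two",
#     image=Image.open("room.jpg")))
```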
Gaze Link’s integration of the Gemini API is inspiring. If you are working on an Android app today, we encourage you to explore the Gemini API’s capabilities and see how you can add generative AI to your app and delight your users.
To get started, go to the Android AI documentation!