Google Drive cut code and development time in half with Jetpack Compose and new architecture

Posted by Nick Butcher – Product Manager for Jetpack Compose, and Florina Muntenescu – Developer Relations Engineer

As one of the world’s most popular cloud-based storage services, Google Drive lets people do more than just store their files online. With Drive, users can synchronize, share, search, edit, and even pin specific files and content for secure offline use.

Recently, Drive’s developers revamped the application’s home screen to provide a more seamless experience across devices, matching updates made to Google Drive’s web version. However, the app’s previous architecture and codebase would’ve prevented the team from completing the updates in a timely manner.

Instead of struggling with the app’s previous tech stack to implement the update, the Drive team rebuilt the home page from the ground up using Android’s recommended architecture and Jetpack Compose, Android’s modern declarative toolkit for creating native UI.

“Compose, combined with architecture improvements, cut our development time nearly in half.” — Dale Hawkins, Senior software engineer and tech lead at Google Drive

Experimenting with Kotlin and Compose

The Drive team experimented with Kotlin — which the Compose toolkit is built with — for several months before planning the app’s home screen rebuild. Drive’s developers liked Kotlin’s improved syntax and null enforcement, which made it easier to produce code.

“We had been using RxJava, but started looking into replacing that with coroutines,” said Dale Hawkins, the features team lead for Google Drive. “This led to a more natural alignment between coroutines and Jetpack Compose. After a deep dive into Compose, we came away with a clear understanding of how Compose has numerous benefits over the Views-based approach.”
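To illustrate the kind of migration the team describes, here is a minimal, hypothetical sketch of an RxJava-style stream rewritten as a Kotlin Flow; the types and data are invented for illustration, not Drive's actual code:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.runBlocking

// Illustrative model only; not Drive's actual types.
data class DriveFile(val name: String)

// Before (RxJava): something like Observable<List<DriveFile>> built with
// Observable.create { emitter -> ... }.
// After (coroutines): the same stream expressed as a cold Flow.
fun observeFiles(): Flow<List<DriveFile>> = flow {
    emit(listOf(DriveFile("Resume.pdf")))        // initial snapshot
    delay(1_000)                                 // pretend a sync happened
    emit(listOf(DriveFile("Resume.pdf"), DriveFile("Notes.txt")))
}

fun main() = runBlocking {
    // In an app the UI layer collects this; Compose can observe a Flow
    // directly via collectAsState(), which is the "natural alignment"
    // between coroutines and Compose mentioned above.
    observeFiles().collect { files -> println(files.map { it.name }) }
}
```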

Following the Kotlin exploration, Dale experimented with Jetpack Compose. “I was pleased with how easy it was to build the UI using Compose. So I continued the experiment after that week,” said Dale. “I eventually rewrote the feature using Compose.”

Using Compose

Shortly after experimenting with Jetpack Compose, the Drive team decided to use it to completely rebuild the app’s home screen UI.

“We wanted to make some major changes to match the ones being done for the web version, but that project had a several-month head start. We wanted to release the Android version shortly after the web changes went live to ensure our users have a seamless Google Drive experience across devices,” said Dale.

The Drive team's experimentation and testing with Jetpack Compose showed that the new toolkit was powerful and reliable and that it would enable them to move faster. With this in mind, the Drive team decided to step away from their old codebase and embrace Jetpack Compose for the app’s home screen update. Not only would it be quicker and easier, but it would also better prepare the team to easily make future UI changes.

Using Android’s guidance for architecture

Before going all-in with Jetpack Compose, Drive developers wanted to restructure the application by implementing a completely new app architecture. Drive developers followed Android’s official architecture guidance to apply structural changes, paving the way for the new Kotlin codebase.

“The recommended architecture reinforces good separation between layers,” said Quintin Knudsen, an Android engineer for Google Drive. “We work in a highly dynamic environment and need to be able to adjust to any app changes. Using well-defined and independent layers helps isolate any changes or UI requirements. The recommendations from Android offered sound ways to structure the layers.” With a clear separation between the app’s data and UI layers, developers could work in parallel to significantly speed up testing and development.

Drive developers also relied on Mappers and UseCases when creating the new architecture. These patterns allowed them to create flexible code that is easier to manage. They also exposed flows from their ViewModels to make the UI respond immediately to any data changes, making it much simpler to implement and understand UI updates.
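Drive's code isn't public, but a minimal sketch of the pattern the team describes (hypothetical names throughout) might look like the following. In an androidx ViewModel you would use viewModelScope rather than an injected scope, and Compose would collect uiState with collectAsStateWithLifecycle():

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.SharingStarted
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.flowOf
import kotlinx.coroutines.flow.map
import kotlinx.coroutines.flow.stateIn

// Data layer (hypothetical): what a repository might expose.
data class FileEntity(val id: String, val title: String)

class FileRepository {
    // In a real app this would be backed by Room, the network, etc.
    fun observeRecentFiles(): Flow<List<FileEntity>> =
        flowOf(listOf(FileEntity("1", "Quarterly report")))
}

// Domain layer: a UseCase wraps a single unit of business logic.
class GetHomeFilesUseCase(private val repository: FileRepository) {
    operator fun invoke(): Flow<List<FileEntity>> = repository.observeRecentFiles()
}

// UI layer: a Mapper converts data-layer models into UI state.
data class HomeItemUiState(val label: String)

fun FileEntity.toUiState() = HomeItemUiState(label = title)

// The ViewModel exposes a StateFlow built from the UseCase's Flow, so the
// UI reacts to data changes automatically. A plain injected scope keeps
// this sketch standalone; androidx would provide viewModelScope.
class HomeViewModel(getHomeFiles: GetHomeFilesUseCase, scope: CoroutineScope) {
    val uiState: StateFlow<List<HomeItemUiState>> =
        getHomeFiles()
            .map { files -> files.map(FileEntity::toUiState) }
            .stateIn(scope, SharingStarted.WhileSubscribed(5_000), emptyList())
}
```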

Less code, faster development

With the app’s newly improved architecture and Jetpack Compose, the Drive team was able to develop the app’s new home screen in less than half the time that they expected. They also implemented the new code and finished quality assurance testing nearly seven weeks ahead of schedule.

“Thanks to Compose, we had the groundwork done within a couple of weeks. We delivered a great implementation over a month ahead of schedule, and it’s been praised by product, UX, and even other engineering teams,” said Dale.

Despite having fewer features, the original home screen required over 12,000 lines of code. The new Compose-based home screen has many new features and only required 5,100 lines of code—a 57% reduction. Having less code makes it much easier for developers to maintain the app and implement any updates.

Testing the new UI in Jetpack Compose also required significantly less code. Before Compose, Drive developers used roughly 9,000 lines of code to test about 62% of the UI. With Compose, it took only 2,200 lines to test over 80% of the new UI.

“The original home screen required over 12,000 lines of code. The Compose-based home screen only required 5,100 lines of code. That’s a 57% reduction.” — Dale Hawkins, Senior software engineer and tech lead at Google Drive

Looking forward

A new and improved app architecture paired with Jetpack Compose allowed Drive developers to rebuild the app’s home screen UI faster and more easily than they could’ve imagined. The Drive team plans to expand its use of Compose within the application for things like supporting large dynamic displays and text resizing.

“As we work on new projects, we’re taking the opportunity to update older UI code to make use of our new architecture and Compose. The new code will be objectively better and features will be easier to write, test, and maintain,” said Dale.

Get started

Improve app architecture using Android’s official architecture guidance and optimize your UI development with Jetpack Compose.

Gemini 1.5 Pro Now Available in 180+ Countries; With Native Audio Understanding, System Instructions, JSON Mode and More

Posted by Jaclyn Konzelmann and Megan Li - Google Labs

Grab an API key in Google AI Studio, and get started with the Gemini API Cookbook

Less than two months ago, we made our next-generation Gemini 1.5 Pro model available in Google AI Studio for developers to try out. We’ve been amazed by what the community has been able to debug, create and learn using our groundbreaking 1 million context window.

Today, we’re making Gemini 1.5 Pro available in 180+ countries via the Gemini API in public preview, with a first-ever native audio (speech) understanding capability and a new File API to make it easy to handle files. We’re also launching new features like system instructions and JSON mode to give developers more control over the model’s output. Lastly, we’re releasing our next generation text embedding model that outperforms comparable models. Go to Google AI Studio to create or access your API key, and start building.
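If you build in Kotlin, a first call through the Google AI client SDK might look like the sketch below; the model name and key handling are placeholders rather than prescribed values, so check Google AI Studio for current model names:

```kotlin
import com.google.ai.client.generativeai.GenerativeModel

// A minimal first call via the Google AI Kotlin client SDK.
// "gemini-1.5-pro-latest" and the env-var key are illustrative.
suspend fun firstCall() {
    val model = GenerativeModel(
        modelName = "gemini-1.5-pro-latest",
        apiKey = System.getenv("GOOGLE_API_KEY")
    )
    val response = model.generateContent(
        "Summarize what a 1 million token context window makes possible."
    )
    println(response.text)
}
```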


Unlock new use cases with audio and video modalities

We’re expanding the input modalities for Gemini 1.5 Pro to include audio (speech) understanding in both the Gemini API and Google AI Studio. Additionally, Gemini 1.5 Pro is now able to reason across both image (frames) and audio (speech) for videos uploaded in Google AI Studio, and we look forward to adding API support for this soon.


Screen grab: a college professor uses Gemini 1.5 Pro in Google AI Studio to create a quiz based on their latest lecture video.
You can upload a recording of a lecture, like this 117,000+ token lecture from Jeff Dean, and Gemini 1.5 Pro can turn it into a quiz with an answer key. Video sped up for demo purposes.
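As a sketch of the lecture-to-quiz flow above, here is how inline audio might be passed via the Google AI client SDK for Kotlin. Whether your SDK version accepts inline audio bytes is an assumption worth verifying; for large files such as a full lecture, the new File API is the intended path. All names here are illustrative.

```kotlin
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content
import java.io.File

suspend fun quizFromLecture(audioPath: String) {
    val model = GenerativeModel(
        modelName = "gemini-1.5-pro-latest",
        apiKey = System.getenv("GOOGLE_API_KEY")
    )
    // blob() attaches raw bytes with a MIME type. Inline audio support in
    // your SDK version is an assumption; very large recordings (like a
    // 117k-token lecture) are what the new File API is for.
    val response = model.generateContent(
        content {
            blob("audio/mp3", File(audioPath).readBytes())
            text("Create a five-question quiz with an answer key from this lecture.")
        }
    )
    println(response.text)
}
```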

Gemini API Improvements

Today, we’re addressing a number of top developer requests:

1. System instructions: Guide the model’s responses with system instructions, now available in Google AI Studio and the Gemini API. Define roles, formats, goals, and rules to steer the model's behavior for your specific use case. (A Kotlin sketch covering this and JSON mode follows this list.)

image showing where System Instructions is located in Google AI Studio
Set System Instructions easily in Google AI Studio

2. JSON mode: Instruct the model to only output JSON objects. This mode enables structured data extraction from text or images. You can get started with cURL, and Python SDK support is coming soon; see also the sketch after this list.

3. Improvements to function calling: You can now select modes to limit the model’s outputs, improving reliability. Choose text, function call, or just the function itself.
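As a rough sketch of how the first two items could look from Kotlin using the Google AI client SDK: systemInstruction is a constructor parameter on GenerativeModel, while responseMimeType is shown on the assumption that your SDK version already exposes JSON mode (at launch, cURL is the documented route, as noted above).

```kotlin
import com.google.ai.client.generativeai.GenerativeModel
import com.google.ai.client.generativeai.type.content
import com.google.ai.client.generativeai.type.generationConfig

suspend fun listRecipesAsJson() {
    val model = GenerativeModel(
        modelName = "gemini-1.5-pro-latest",
        apiKey = System.getenv("GOOGLE_API_KEY"),
        // Item 1: a system instruction steers every turn of the conversation.
        systemInstruction = content { text("You are a terse culinary assistant.") },
        // Item 2: JSON mode. Shown on the assumption that your SDK version
        // exposes responseMimeType; at launch the post points to cURL.
        generationConfig = generationConfig {
            responseMimeType = "application/json"
        }
    )
    val response = model.generateContent(
        "List three cookie recipes as a JSON array of objects with a \"name\" field."
    )
    println(response.text)
}
```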


A new embedding model with improved performance

Starting today, developers will be able to access our next generation text embedding model via the Gemini API. The new model, text-embedding-004 (text-embedding-preview-0409 in Vertex AI), achieves stronger retrieval performance and outperforms existing models with comparable dimensions on the MTEB benchmarks.

Table: Gecko — Versatile Text Embeddings Distilled from Large Language Models.
text-embedding-004 (aka Gecko) using 256-dimension output outperforms all larger 768-dimension output models on the MTEB benchmarks.
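For illustration, here is one way to call the new model's embedContent REST endpoint from Kotlin using only the JDK's HTTP client; the endpoint path and body shape reflect the v1beta API at the time of writing and should be verified against the current API reference.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Sketch: embedContent via REST with only the JDK HTTP client.
fun embedText(apiKey: String, text: String): String {
    val body = """{"model":"models/text-embedding-004","content":{"parts":[{"text":"$text"}]}}"""
    val url = "https://generativelanguage.googleapis.com/v1beta/models/text-embedding-004:embedContent?key=$apiKey"
    val request = HttpRequest.newBuilder()
        .uri(URI.create(url))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()
    // The response JSON carries an "embedding" object with a "values" array.
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
}

fun main() {
    println(embedText(System.getenv("GOOGLE_API_KEY"), "What is the MTEB benchmark?"))
}
```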

These are just the first of many improvements coming to the Gemini API and Google AI Studio in the next few weeks. We’re continuing to work on making Google AI Studio and the Gemini API the easiest way to build with Gemini. Get started today in Google AI Studio with Gemini 1.5 Pro, explore code examples and quickstarts in our new Gemini API Cookbook, and join our community channel on Discord.

Meet the inaugural cohort of our Google for Startups Accelerator: AI First North America

Posted by Matt Ridenour, Head of Startup Developer Ecosystem - USA

Startups are at the forefront of developing solutions for some of humanity's most pressing challenges by using AI, driving breakthroughs across industries from healthcare to cybersecurity.

To help AI-focused startups scale quickly while building responsibly, we’re thrilled to introduce the inaugural class of the Google for Startups Accelerator: AI-First program in North America. This new program is for startups based in the U.S. and Canada that are building AI solutions. It is the first of several AI-focused programs we'll offer throughout the year, with others to follow in Europe, India, and Brazil.

This equity-free program provides 10 weeks of hands-on mentorship and technical project support to startups using AI in their core service or product. Selected startups will collaborate with a cohort of top peer founders and engage with leaders across Google. The curriculum will give founders access to the latest AI tools (including Google’s own Gemini), and will also include workshops on tech and infrastructure, UX and product, growth, sales, leadership and OKRs.

Meet the inaugural class of Google for Startups Accelerator: AI-First, North America

We’re thrilled to introduce the 15 AI startups selected for this accelerator:

Aptori, San Jose, CA. Aptori helps developers and security engineers build secure, high-quality software.

Augmend, Seattle, WA. Augmend is an AI-native Loom built for developers, making it possible to share expertise, not just videos.

Backpack Healthcare, Elkridge, MD. Backpack Healthcare is a pediatric mental health company utilizing proprietary AI technology, an engagement platform, and live therapists to offer personalized care to patients.

BrainLogic AI, Menlo Park, CA. BrainLogic AI has built a localized AI agent that connects users and businesses through WhatsApp.

Cicerai, The Woodlands, TX. Cicerai is an AI-native legal practice management platform that boosts productivity and enhances quality.

CLIKA, San Jose, CA. CLIKA simplifies deploying AI models on diverse hardware by offering automated model compression and format compilation.

Easel AI, Inc., Los Angeles, CA. Easel AI is an AI avatar-based social chat app that runs on iMessage.

Findly, San Francisco, CA. Findly is a data visualization integrator using a natural language chat interface.

Glass Health, San Francisco, CA. Glass Health empowers clinicians with the best-in-class AI platform for clinical decision support.

Kodif, Sunnyvale, CA. Kodif is a low-code AI-powered automation platform for support agent workflows to resolve customer issues.

Liminal, Indianapolis, IN. Liminal empowers regulated enterprises to securely deploy and use generative AI, horizontally covering every interaction and use case.

Mbue, Austin, TX. Mbue leverages AI to instantly review architectural drawings, catching errors earlier and streamlining the process.

Modulo Bio, San Diego, CA. Modulo Bio is building a platform to discover therapeutics that prevent or reverse neurodegenerative diseases.

Rocket Doctor, Toronto, ON, Canada. Rocket Doctor is a digital health platform and marketplace that intelligently matches patients and clinicians in a telemedicine 2.0 approach.

Sibli, Montreal, QC, Canada. Sibli is a fintech platform that processes unstructured data and identifies key insights for financial analysts.

The program kicks off at Google Cloud Next 2024 and culminates with a high-profile Demo Day in June for potential partners, customers, and investors.

After graduation, startups join the dynamic Google for Startups accelerator community, where they receive ongoing support and have the opportunity to build lasting connections with like-minded founders, mentors and investors.

We are honored to partner with this cohort of companies through this accelerator and beyond, to advance their AI technologies. Register your interest to get updates on the program, and join us in celebrating these exceptional startups!

Protect sensitive admin actions with multi-party approvals

This announcement was part of Google Cloud Next ‘24. Visit the Workspace Blog to learn more about the next wave of innovations in Workspace, including enhancements to Gemini for Google Workspace.


What’s changing

To protect our customers from malicious actors taking sensitive admin actions, we’re launching multi-party approvals where one admin must approve certain sensitive actions initiated by another. Multi-party approvals will be required for the following settings:
  • 2-Step verification
  • Account recovery
  • Advanced Protection 
  • Google session control
  • Login Challenges
  • Passwordless (beta)
This feature is available for eligible Workspace customers with multiple super admin accounts — see the “Getting started” section below for more information.


Who’s impacted

Admins


Why it’s important

Multi-party approvals add an extra layer of security for sensitive actions taken in the Admin console by ensuring that no sensitive action happens in a silo and, most importantly, by helping prevent unauthorized or accidental changes. This added layer of approval helps ensure actions are taken appropriately, not too broadly or too often. It's also more convenient for admins: the action is executed automatically after approval, so the requester doesn’t need to take further steps. Multi-party approvals make super admins aware of what changes are being attempted and give them the opportunity to accept or reject these sensitive actions.


Outlined below is an example of the feature in action; in this case, an admin attempts to change 2-Step Verification policies:

When 2-step verification changes are attempted, admins will be required to submit the change to a super admin for approval.

Super admins can review and take action on these requests in the Admin console by navigating to Security > Multi-party approval. Super admins will also receive email alerts when a 2-step verification change is requested or any other protected action is attempted.

Admins can open a specific approval request to view more information, including who is impacted by the change, what the configuration was before the change, and what it will be afterward.

Getting started

  • Admins: 
    • This feature is available for eligible Workspace customers with two or more super admin accounts. Multi-party approvals are OFF by default and can be turned on in the Admin console by going to Security > Multi-party approval settings. Visit the Help Center to learn more about multi-party approvals for sensitive actions.


Rollout pace


Availability

  • Available to Google Workspace Enterprise Standard, Enterprise Plus, Education Standard, Education Plus, and Cloud Identity Premium customers


Control your users’ access to new Gemini for Google Workspace features before general availability

This announcement was part of Google Cloud Next ‘24. Visit the Workspace Blog to learn more about the next wave of innovations in Workspace, including enhancements to Gemini for Google Workspace.



What’s changing

We’re introducing a new setting in the Admin console which will give Gemini customers the ability to test Gemini for Google Workspace alpha features before they become generally available. Specifically, admins will be able to turn on alpha features for all Gemini provisioned Workspace users or for a subset of Gemini users in a particular Organizational Unit (OU) or Group.

To configure access to Gemini features, go to Account settings > Gemini for Google Workspace.



Who’s impacted

Admins and end users


Why it matters

As our Gemini for Workspace offerings continue to evolve, you may consider allowing your users to test Gemini features in alpha. This will give your users a head start on leveraging our latest AI features and provide Google with helpful feedback to improve Gemini features before they’re generally available. Alpha features get the same robust data protection standards that come with all Google Workspace services.

Getting started

Please consider the following before configuring alpha access for your users:
    • Your users will receive all Gemini for Workspace alpha features — it is not possible to enable a subset of features or opt out of specific features.
    • Features will appear in alpha as soon as they are available — there is no advance notice of these features appearing for Gemini for Workspace alpha-provisioned users.
    • As these features are not yet generally available, we will not offer full support for them. Alpha features get the same robust data protection standards that come with all Google Workspace services.
    • You can also help us improve Gemini for Workspace by allowing users at your organization to provide feedback via research studies and surveys.
Additionally, we strongly recommend that you and your users sign up for the Google Workspace alpha community page. Subscribing to this page will help users stay on top of the latest Gemini for Workspace alpha features. You can also ask questions about the features on this page.

Rollout pace


Availability

Introducing a new AI Security add-on for Google Workspace

This announcement was part of Google Cloud Next ‘24. Visit the Workspace Blog to learn more about the next wave of innovations in Workspace, including enhancements to Gemini for Google Workspace.



What’s changing

As we continue to expand our Gemini for Google Workspace offerings, we're excited to introduce the AI Security add-on for Google Workspace customers. 

At launch, the AI Security add-on will give customers access to the AI Classification capability in Google Drive. AI Classification allows IT teams to automatically and continuously identify, classify, and label sensitive files across the organization. This capability is powered by privacy-preserving AI models that can be uniquely trained for the specific needs of your organization. Classified files can then be protected with existing data loss prevention (DLP) controls.

Who’s impacted

Admins

Why it matters

Drive Labels enable Workspace administrators to up-level their security posture by closely monitoring activity on labeled files and using labels as a vehicle for data loss prevention and lifecycle management policies. The challenge with label-based policies is that they are only effective on files that are correctly identified and labeled. Further, labeling files manually places a considerable burden on admins.

This is where AI Classification can help. By training models on customer-identified examples of content that match their data classification definitions, AI Classification can evaluate any file from which text can be extracted and determine whether it should be labeled. This enables organizations to achieve label coverage at a scale and accuracy that is very difficult to accomplish through traditional means and manual admin intervention. Once labeled, the organization's data can be protected by fine-grained security policies.


Availability

The AI Security add-on is available for the following Google Workspace Editions:
  • Business Standard and Plus
  • Enterprise Standard and Plus
  • Enterprise Essentials and Essentials Plus
  • Frontline Starter and Standard
  • Google Workspace for Nonprofits 

Resources


Introducing the AI Meetings and Messaging for Google Workspace add-on

This announcement was part of Google Cloud Next ‘24. Visit the Workspace Blog to learn more about the next wave of innovations in Workspace, including enhancements to Gemini for Google Workspace.


What’s changing

As we continue to expand our Gemini for Google Workspace offerings, we're excited to introduce the AI Meetings and Messaging add-on, which will help you have richer meetings and foster more meaningful collaboration.


At launch, the AI Meetings and Messaging add-on will give customers access to Google Meet features such as studio look, studio lighting, studio sound, and take notes for me (coming soon in alpha), allowing customers to have more effective and efficient meetings. In the future, AI Meetings and Messaging will also provide access to Gemini features in Google Chat, such as on-demand conversation summaries and automatic translation of messages.


Who’s impacted

Admins


Why it’s important

The AI Meetings and Messaging add-on, along with the new AI Security add-on also announced at Google Cloud Next ‘24, gives our customers more ways to work with AI in whatever form best suits the needs of their organization. The AI Meetings and Messaging add-on can help enhance collaboration across Meet and Chat with a variety of features such as:

  • Generative backgrounds in Google Meet
  • Studio look, studio sound, and studio lighting in Google Meet
  • Real time translated captions in Google Meet
  • Take notes for me in Google Meet (coming soon in alpha)
  • And upcoming features like:
    • Translate for me in Google Meet and Chat for automatic language detection and translation 
    • Adaptive audio in Google Meet for synchronized audio and no feedback when multiple users join a meeting from a room using only their laptops
    • Screenshare watermark in Google Meet to help discourage the copying and unauthorized distribution of shared content
    • On-demand conversation summaries in the home view of Google Chat to get you caught up quickly

Visit our Help Center for a complete list of features available for the AI Meetings and Messaging add-on. Keep an eye on the Workspace Updates blog for new feature launches in the future.


Additional details

Some announced Meet and Chat features for this add-on will be available later this year. More details on timing will be shared in the coming months here on the Workspace Updates blog. This announcement on the Workspace Updates blog has more information about how to enable alpha testing for your end users.


Getting started

Availability

The AI Meetings and Messaging add-on is available for the following Google Workspace Editions:
  • Business Starter, Standard, and Plus
  • Enterprise Starter, Standard, and Plus
  • Frontline Starter and Standard
  • Enterprise Essentials, Essentials Plus
  • Nonprofits

Resources