
Build with Google AI: new video series for developers

Posted by Joe Fernandez, AI Developer Relations, and Jaimie Hwang, AI Developer Marketing

Artificial intelligence (AI) represents a new frontier for technology we are just beginning to explore. While many of you are interested in working with AI, we realize that most developers aren't ready to dive into building their own artificial intelligence models (yet). With this in mind, we've created resources to get you started building applications with this technology.

Today, we are launching a new video series called Build with Google AI. This series features practical, useful AI-powered projects that don't require deep knowledge of artificial intelligence, or huge development resources. In fact, you can get these projects working in less than a day.

From self-driving cars to medical diagnosis, AI is automating tasks, improving efficiency, and helping us make better decisions. At the center of this wave of innovation are artificial intelligence models, including large language models like Google PaLM 2 and more focused AI models for translation, object detection, and other tasks. The frontier of AI, however, is not simply building new and better AI models, but also creating high-quality experiences and helpful applications with those models.

Practical AI code projects

This series is by developers, for developers. We want to help you build with AI, and not just any code project will do. They need to be practical and extensible. We are big believers in starting small and tackling concrete problems. The open source projects featured in the series are selected so that you can get them working quickly, and then build beyond them. We want you to take these projects and make them your own. Build solutions that matter to you.

Finally, and most importantly, we want to promote the use of AI that's beneficial to users, developers, creators, and organizations. So, we are focused on solutions that follow our principles for responsible use of artificial intelligence.

For the first arc of this series, we focus on how you can leverage Google's AI language model capabilities for applications, particularly the Google PaLM API. Here's what's coming up:

  • AI Content Search with Doc Agent (10/3) We'll show you how a technical writing team at Google built an AI-powered conversation search interface for their content, and how you can take their open source project and build the same functionality for your content. 
  • AI Writing Assistant with Wordcraft (10/10) Learn how the People and AI Research team at Google built a story writing application with AI technology, and how you can extend their code to build your own custom writing app. 
  • AI Coding Assistant with Pipet Code Agent (10/17) We'll show you how the AI Developer Relations team at Google built a coding assistance agent as an extension for Visual Studio Code, and how you can take their open source project and make it work for your development workflow.
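
For a taste of the PaLM API these projects build on, here is a minimal sketch of a text generation call from Python, assuming the google-generativeai client library; the model name and response fields are assumptions to check against the PaLM API documentation.

# Minimal sketch of a PaLM API text generation call, assuming the
# google-generativeai library (pip install google-generativeai).
# The model name and response fields are assumptions; verify against
# the PaLM API docs.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder; use your own key

response = palm.generate_text(
    model="models/text-bison-001",  # assumed PaLM 2 text model
    prompt="Summarize what an AI coding assistant can do for developers.",
    temperature=0.7,
    max_output_tokens=256,
)
print(response.result)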

For the second arc of the series, we'll bring you a new set of projects that run artificial intelligence applications locally on devices for lower latency, higher reliability, and improved data privacy.

Insights from the development teams

As developers, we love code, and we know that understanding someone else's code project can be a daunting task. The series includes demos and tutorials on how to customize the code, and we'll talk with the people behind the code. Why did they build it? What did they learn along the way? You’ll hear insights directly from the project team, so you can take it further.

Discover AI technologies from across Google

Google provides a host of resources for developers to build solutions with artificial intelligence. Whether you are looking to develop with Google's AI language models, build new models with TensorFlow, or deploy full-stack solutions with Google Cloud Vertex AI, it's our goal to help you find the AI technology solution that works best for your development projects. To start your journey, visit Build with Google AI.

We hope you are as excited about the Build with Google AI video series as we are to share it with you. Check out Episode #1 now! Use those video comments to let us know what you think and tell us what you'd like to see in future episodes.

Keep learning! Keep building!

Studio Bot expands to 170+ international markets!

Posted by Isabella Fiterman – Product Marketing Manager, and Sandhya Mohan – Product Manager

At this year’s Google I/O, one of the most exciting announcements for Android developers was the introduction of Studio Bot, an AI-powered coding assistant that can be accessed directly in Android Studio. Studio Bot helps you write high-quality Android apps faster by generating code for your app, answering your questions, and finding relevant resources, all without ever having to leave Android Studio. After our announcement, you told us how excited you were about this AI-powered coding companion, and those of you outside of the U.S. were eager to get your hands on it. We heard your feedback, and expanded Studio Bot to over 170 countries and territories in the canary release channel of Android Studio.

Ask Studio Bot your Android development questions

Studio Bot is powered by artificial intelligence and can understand natural language, so you can ask development questions in your own words. While it’s now available in most countries, it is designed to be used in English. You can enter your questions in Studio Bot’s chat window ranging from very simple and open-ended ones to specific problems that you need help with. Here are some examples of the types of queries it can answer:

  • How do I add camera support to my app?
  • I want to create a Room database.
  • Can you remind me of the format for javadocs?
  • What's the best way to get location on Android?

Studio Bot remembers the context of the conversation, so you can also ask follow-up questions, such as “Can you give me the code for this in Kotlin?” or “Can you show me how to do it in Compose?”


Moving image showing a user having a conversation with Studio Bot

Designed with privacy in mind

Studio Bot was designed with privacy in mind. You don’t need to send your source code to take advantage of Studio Bot’s features. By default, Studio Bot’s responses are purely based on conversation history, and you control whether you want to share additional context or code for customized responses. Much like our work on other AI projects, we stick to a set of AI Principles that hold us accountable.

Focus on quality

Studio Bot is still in its early days, and we suggest validating its responses before using them in a production app. We’re continuing to improve its Android development knowledge base and quality of responses so that it can better support your development needs in the future. You can help us improve Studio Bot by trying it out and sharing your feedback on its responses using the thumbs up and down buttons.

Try it out!

Download the latest canary release of Android Studio and read more about how you can get started with Studio Bot. You can also sign up to receive updates on Studio Bot as the experience evolves.

Announcing the Inaugural Google for Startups Accelerator: AI First cohort

Posted by Yariv Adan, Director of Cloud Conversational AI, and Pati Jurek, Google for Startups Accelerator Regional Lead

This article is also shared on Google Cloud Blog

Today’s startups are addressing the world's most pressing issues, and artificial intelligence (AI) is one of their most powerful tools. To empower startups to scale their business towards success in the rapidly evolving AI landscape, Google for Startups Accelerator: AI First offers a 10-week, equity-free program for AI-first startups in partnership with Google Cloud. Designed for seed to series A startups based in Europe and Israel, the program helps them grow and build responsibly with AI and machine learning (ML) from the ground up, with access to experts from Google Cloud and Google DeepMind, a mix of in-person and virtual activities, 1:1 mentoring, and group learning sessions.

In addition, the program features deep dives and workshops focused on product design, business growth, and leadership development. Startups that are selected for the cohort also benefit from dedicated Google AI technical expertise and receive credits via the Google for Startups Cloud Program.

Out of hundreds of impressive applications, today we welcome the inaugural cohort of the Google for Startups Accelerator: AI First. The program includes 13 groundbreaking startups from eight different countries, all focused on different verticals and with a diverse array of founder and executive backgrounds. All participants are leveraging AI and ML technologies to solve significant problems and have the potential to transform their respective industries.


Congratulations to the cohort!

We are thrilled to present the inaugural Google for Startups Accelerator: AI First cohort:

  • Annea.Ai (Germany) utilizes AI and Digital Twin technology to forecast and prevent possible breakdowns in renewable energy assets, such as wind turbines.
  • Checktur.io (Germany) empowers businesses to manage their commercial vehicle fleets efficiently via an end-to-end fleet asset management ecosystem while using AI models and data-driven insights.
  • Exactly.ai (UK) lets artists create images in their own unique style with a simple written description.
  • Neurons (Denmark) has developed a precise AI model that can measure human subconscious signals to predict marketing responses.
  • PACTA (Germany) provides AI-driven contract lifecycle management with an intelligent no-code workflow on one central legal platform.
  • Quantic Brains (Spain) empowers users to generate movies and video games using AI.
  • Sarus (France) builds a privacy layer for Analytics & AI and allows data practitioners to query sensitive data without having direct access to it.
  • Releva (Bulgaria) provides an all-in-one AI automation solution for eCommerce marketing.
  • Semantic Hub (Switzerland) uses AI leveraging multilingual Natural Language Understanding to help global biopharmaceutical companies understand the patient experience through first-hand testimonies on social media.
  • Vazy Data (France) allows anyone to analyze data without technical knowledge by using AI.
  • Visionary.AI (Israel) leverages cutting-edge AI to improve real-time video quality in challenging visual conditions like extreme low-light.
  • ZENPULSAR (UK) provides social media analytics from over 10 social media platforms to financial institutions and corporations to facilitate investment and business decisions.
  • Zaya AI (Romania) uses machine learning to better understand and diagnose diseases, assisting healthcare professionals to make timely and informed medical decisions.
Grid image of logos and executives of all startups listed in the inaugural Google for Startups Accelerator

To learn more about the AI-first program, and to signal your interest in nominating your startup for future cohorts, visit the program page here.

Grow user acquisition and store conversions with the updated Play Store Listing Certificate course

Posted by Rob Simpson, Product Manager - Google Play & Joe Davis, Manager - Google Play Academy

Since we launched the Google Play Store Listing Certificate in 2021, our no-cost, self-paced training courses have helped thousands of developers in over 80 countries increase their app installs. Over the course of the training, developers learn essential mobile marketing best practices, including how to leverage Play Console growth tools like store listing experiments (SLEs) and custom store listings (CSLs).

Today, we’re excited to release a major update to our self-paced training, covering all the latest CSL and SLE features, as well as real-world examples showing how you might use them to drive user growth. We’re also releasing a video study guide series to help you lock in your new knowledge ahead of the exam.

Built for app growth marketers, take Google Play Academy's online growth marketing training and get certified in Google Play Store Listing best practices!
Videos: Google Play Store Listing Certificate Study Guide

What’s new: New features in custom store listings and store listing experiments

The new course content focuses on custom store listings and store listing experiments. For the unfamiliar, custom store listings allow you to show different versions of your title’s Play Store content to different people. For example, you might create versions tailored to users in different countries where feature availability varies, or an experience just for users who have lapsed or churned.

Custom store listings can help you convey the most effective messaging for different users. Based on an internal comparative analysis, CSLs can help increase an app or game’s monthly active users (MAUs) by an average of over 3.5%.1

Store listing experiments, on the other hand, offer a way to explore what icons, descriptions, screenshots (and more) convert the best for your title on the Play Store.

These are features you can use today! Google Play Academy’s training now includes four new courses on custom store listings, four updated existing courses, and nine new study guide videos.

Finding app and career growth

Here’s what some developers, entrepreneurs and marketers have said about their experience after getting trained up and certified:




Learning best practices for store listing experiments allowed me to know more about our audience. Something as simple as using the proper icon increased acquisitions of one of our games by approximately 60%.

Adrian Mojica 
Marketing Creative, GameHouse (Spain) 





The knowledge I gained empowered me to make more informed decisions and learn effective strategies. The credibility I got from the certificate has opened new doors in my career. 

Roshni Kumari 
Student & Campus Ambassador (India)


Play Academy increased efficiency in mentoring relationships by 50%, and we've seen a 30% increase in our game launch speed overall. 

Kimmie Vu
Chief Marketing Officer, Rocket Game Studio (Vietnam)


Top tips to prepare for your certificate exam

  1. Take the training and watch the study guide videos: Take the online training on Google Play Academy to learn best practices to help create a winning store listing, then lock in your knowledge with the new video study guides. You’ll learn key skills to help you drive growth with high-quality and policy-compliant store listings.

  2. Pass the exam and get certified: After the training, take the exam to get an industry-recognized certificate. You will also be invited to join the Google Developer Certification Directory, where you can network with other Google-certified developers.

  3. Get started with custom store listings and experiments: Time to put your new skills into action. Start with 2-3 custom store listings for markets important to your app or game, such as users in certain countries or lapsed or churned users. Or test a new icon or short description.

Start your learning journey on Google Play Academy today!


1 Source: Internal Google data [Nov 2022] comparing titles that use CSL to those that do not.

Programmatically access working locations with the Calendar API

Posted by Chanel Greco, Developer Advocate

Giving Google Workspace users the ability to set their working location and working hours in Google Calendar was an important step in helping our customers’ employees adapt to a hybrid world. Sending a Chat message asking “Will you be in the office tomorrow?” soon became obsolete as anyone could share where and when they would be working within Calendar.

To improve the hybrid working experience, many organizations rely on third-party or company-internal tools to enable tasks like hot desk booking or scheduling days in the office. Until recently, there was no way to programmatically synchronize the working location set in Calendar with such tools.


Image showing working locations visible via Google Calendar in the Robin app
Robin displays the working location from Google Calendar in their application and updates the user's Google Calendar when they book a desk in Robin

Programmatically read and write working locations

We are pleased to announce that the Calendar API has been updated to support working locations, and this added functionality is now generally available (only for eligible Workspace editions). This enables developers to programmatically read and write the working location of Google Workspace users. It can be especially useful in three use cases that have surfaced in discussions with customers, which we'll explore together.

1. Synchronize with third-party tools

Enhancing the Calendar API enables developers to synchronize a user’s working location with third-party tools like Robin and Comeen. For example, some companies provide their employees with desk booking tools so they can book their workplace in advance for the days they will be on-site. HR management tools are also commonly used by employees to request and set “Work from home” days. In both situations, the user previously had to set their working location in two separate places: their desk booking tool or HR management system, and Google Calendar.

Thanks to the working location being accessible through the Calendar API, this duplicate work is no longer necessary, since a user’s working location can be set programmatically. And if a user's calendar is the single source of truth? In that case, the API can be used to read the working location from the user’s calendar and write it to any permissioned third-party tool.
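
To make this concrete, here is a minimal sketch, assuming the google-api-python-client library and an already-authorized credentials object, of reading working location events from a user's primary calendar. The parameter and field names follow the working location documentation, but verify them against the current API reference before relying on this.

# Minimal sketch: read working location events with the Calendar API
# Python client (pip install google-api-python-client). Assumes `creds`
# is an authorized Credentials object with a Calendar scope; names are
# assumptions to check against the current API reference.
from googleapiclient.discovery import build

def print_working_locations(creds, time_min, time_max):
    service = build("calendar", "v3", credentials=creds)
    response = service.events().list(
        calendarId="primary",
        eventTypes="workingLocation",  # only return working location events
        timeMin=time_min,              # e.g. "2023-10-02T00:00:00Z"
        timeMax=time_max,
        singleEvents=True,
        orderBy="startTime",
    ).execute()
    for event in response.get("items", []):
        props = event.get("workingLocationProperties", {})
        # type is one of "homeOffice", "officeLocation", or "customLocation"
        print(event["start"].get("date"), props.get("type"))

Writing a working location is the same idea in reverse: per the documentation, you insert an all-day event with the working location event type and the matching workingLocationProperties.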


Image showing Google Workspace Add-on synchronizing users' working locations in the Comeen app.
Comeen’s Google Workspace Add-on synchronizes the user’s working location whenever the user updates it, either in Google Calendar or in Comeen's add-on

2. Display working location on other surfaces

The API enables the surfacing of the user's working location in other tools, creating interesting opportunities. For instance, some of our customers have asked for ways to better coordinate in-office days. Imagine you are planning to be at the office tomorrow. Who else from your team will be there? Who from a neighboring team might be on-site for a coffee chat?

With the Calendar API, a user's working location can be displayed in tools like directories, or a hybrid-work scheduling tool. The goal is to make a user’s working location available in the systems that are relevant to our customers.

3. Analyze patterns

The third use case that surfaced from discussions with our customers is analyzing working location patterns. With many of our customers taking a hybrid work approach, it’s vital to have a good understanding of working patterns. For example, on which days do locations reach maximum legal capacity? Or when does the on-campus restaurant have to prepare more meals for employees working on-site?

The API answers these and other questions so that facility management can adapt their resources to the needs of their employees.
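
As a hypothetical illustration of such an analysis, the sketch below counts on-site days per weekday from a list of working location events shaped like those returned by the read sketch earlier; the field names are the same assumptions as before.

# Hypothetical analysis: count on-site days per weekday from working
# location events (same assumed event shape as the earlier read sketch).
from collections import Counter
from datetime import date

def office_days_by_weekday(events):
    counts = Counter()
    for event in events:
        props = event.get("workingLocationProperties", {})
        if props.get("type") != "officeLocation":
            continue  # only count days worked from an office
        start = event["start"].get("date")  # all-day events carry a date
        if start:
            counts[date.fromisoformat(start).strftime("%A")] += 1
    return counts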


How to get started

Now that you have an idea of the possibilities the updated Calendar API creates, we want to guide you on how you can get started using it.

  • Check out the developer documentation for reading and writing a user's working locations.
  • Watch the announcement video on the Google Workspace Developers YouTube channel.
  • Check the original post about the launch of the working location feature for a list of all Google Workspace plans that have access to the feature.

MediaPipe for Raspberry Pi and iOS

Posted by Paul Ruiz, Developer Relations Engineer

Back in May we released MediaPipe Solutions, a set of tools for no-code and low-code solutions to common on-device machine learning tasks, for Android, web, and Python. Today we’re happy to announce that the initial version of the iOS SDK is available, along with an update to the Python SDK that adds support for the Raspberry Pi. These include support for audio classification, face landmark detection, and various natural language processing tasks. Let’s take a look at how you can use these tools for the new platforms.

Object Detection for Raspberry Pi

Aside from setting up your Raspberry Pi hardware with a camera, you can start by installing the MediaPipe dependency, along with OpenCV and NumPy if you don’t have them already.

python -m pip install mediapipe

From there you can create a new Python file and add your imports to the top.

import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
import cv2
import numpy as np
import time  # needed for the detect_async timestamp below

You will also want to make sure you have an object detection model stored locally on your Raspberry Pi. For your convenience, we’ve provided a default model, EfficientDet-Lite0, that you can retrieve with the following command.

wget -q -O efficientdet.tflite https://storage.googleapis.com/mediapipe-models/object_detector/efficientdet_lite0/int8/1/efficientdet_lite0.tflite

Once you have your model downloaded, you can start creating your new ObjectDetector, including some customizations, like the max results that you want to receive, or the confidence threshold that must be exceeded before a result can be returned.

# Initialize the object detection model
base_options = python.BaseOptions(model_asset_path=model)
options = vision.ObjectDetectorOptions(
    base_options=base_options,
    running_mode=vision.RunningMode.LIVE_STREAM,
    max_results=max_results,
    score_threshold=score_threshold,
    result_callback=save_result)
detector = vision.ObjectDetector.create_from_options(options)

After creating the ObjectDetector, you will need to open the Raspberry Pi camera to read the continuous frames. There are a few preprocessing steps that will be omitted here, but are available in our sample on GitHub.

Within that loop you can convert the processed camera image into a new MediaPipe.Image, then run detection on that new MediaPipe.Image before displaying the results that are received in an associated listener.

mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_image)
detector.detect_async(mp_image, time.time_ns())
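
For context, here is a minimal sketch of the capture loop around the detector created above, assuming the detector and save_result callback from the earlier snippet; the official GitHub sample adds preprocessing, FPS display, and bounding box drawing, so treat this as an outline rather than the sample itself.

# Minimal sketch of the capture loop, assuming the `detector` created
# above and its `save_result` callback. The official GitHub sample adds
# more preprocessing and draws the detected bounding boxes on each frame.
import time

import cv2
import mediapipe as mp

cap = cv2.VideoCapture(0)  # open the default camera
while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break
    rgb_image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV frames are BGR
    mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_image)
    # Results arrive asynchronously in save_result; make sure the timestamp
    # unit matches what your MediaPipe Tasks version expects.
    detector.detect_async(mp_image, time.time_ns())
    cv2.imshow("object_detection", frame)
    if cv2.waitKey(1) == 27:  # press ESC to quit
        break
cap.release()
cv2.destroyAllWindows()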

Once you draw out those results and detected bounding boxes, you should be able to see something like this:

Moving image of a person holding up a cup and a phone, with detected bounding boxes identifying these items in real time

You can find the complete Raspberry Pi example shown above on GitHub, or see the official documentation here.

Text Classification on iOS

While text classification is one of the more direct examples, the core ideas will still apply to the rest of the available iOS Tasks. Similar to the Raspberry Pi, you’ll start by creating a new MediaPipe Tasks object, which in this case is a TextClassifier.

var textClassifier: TextClassifier?

textClassifier = TextClassifier(modelPath: model.modelPath)

Now that you have your TextClassifier, you just need to pass a String to it to get a TextClassifierResult.

func classify(text: String) -> TextClassifierResult? {
    guard let textClassifier = textClassifier else {
        return nil
    }
    return try? textClassifier.classify(text: text)
}

You can do this from elsewhere in your app, such as a ViewController DispatchQueue, before displaying the results.

let result = self?.textClassifier.classify(text: inputText)
let categories = result?.classificationResult.classifications.first?.categories ?? []

You can find the rest of the code for this project on GitHub, as well as see the full documentation on developers.google.com/mediapipe.

Moving image of TextClassifier on an iPhone

Getting started

To learn more, watch our I/O 2023 sessions: Easy on-device ML with MediaPipe, Supercharge your web app with machine learning and MediaPipe, and What's new in machine learning, and check out the official documentation over on developers.google.com/mediapipe.

We look forward to all the exciting things you make, so be sure to share them with @googledevs and your developer communities!

Meet the Google for Startups Accelerator: Women Founders Class of 2023

Posted by Iran Karimian, Startup Developer Ecosystem Lead, Canada

It’s an unfortunate truth that women founders are massively underrepresented among venture-backed entrepreneurs and VC investors, with companies founded solely by women receiving less than 3% of all venture capital investments. This makes it all the more important to invest in women entrepreneurs in other ways, such as mentorship and technical support to help them grow and scale their businesses.

Back in 2020, we launched the Google for Startups Accelerator: Women Founders program to bridge the gender gap in the North American startup ecosystem, and provide high-quality mentorship opportunities, technical guidance, support and community for women founders in the region. Since then, the program has supported 36 women-led startups across North America, who have collectively raised $73.46M USD since graduating from their cohort. Now in its fourth year, the equity-free, 10-week intensive virtual accelerator program provides women-led startups the tools they need to prepare for the next phase of their growth journey.

Today, we are excited to introduce the 11 impressive women-led startups selected to participate in the 2023 cohort:

  • Aravenda (Fairfax, VA) is a comprehensive consignment shop software that is leading innovation in the fastest growing segment of retail through resales.
  • BorderlessHR (Ottawa, ON) offers global talent solutions for small businesses, providing instant matches to pre-vetted talent and AI-powered interviewers, saving SMBs the cost and time spent hiring the right talent on time and within budget. Borderless HR also offers a free suite of HR products to help manage talent.
  • Cobble (New York City, NY) is a platform that helps people reach collaborative agreement with others on ideas. Cobble offers a combination of decision-making tools, curated content and AI-driven social connections.
  • Craftmerce (Delaware City, DE) is a B2B technology platform that links African artisans to mainstream retail partners by providing tools for distributed production, enterprise management, and financing.
  • Dreami (Redwood City, CA) powers data-driven career development programs for the 36 million people in the US who face barriers to employment.
  • Medijobs (New York City, NY) offers virtual recruiting for the healthcare industry.
  • Monark (Calgary, AB) is a digital leadership development platform, preparing the next generation of leaders through on-demand personalized learning.
  • NLPatent (Toronto, ON) is an AI-patent search and analytics platform that uses a fine-tuned large language model, built from the ground up, to understand the language of innovation.
  • Rejoy Health (Mountain View, CA) is an AI-powered mobile application that uses computer vision technology to deliver at-home physical therapy, enabling individuals to effectively manage and alleviate chronic musculoskeletal conditions like back and joint pain.
  • Shimmer (San Francisco, CA) is an ADHD coaching platform that connects adults with ADHD and expert ADHD coaches for behavioral coaching.
  • Total Life (Jupiter, FL) reimagines aging for older adults through an easy, one-click platform that connects users with a Medicare covered healthcare provider.

Through data-driven insights, and leveraging the power of AI and ML, these women-led startups are leading innovation in the North American tech scene. We are thrilled to have them join the 10-week intensive virtual program, connecting them to the best of Google's programs, products, people and technology to help them reach their goals and unlock their next phase of growth. The 2023 Google for Startups Accelerator: Women Founders program kicks off this September.

Google Developer Groups & ecosystem partners bring Startup Success Days to 15 Indian cities

Posted by Harsh Dattani - Program Manager, Developer Ecosystem

The Indian startup ecosystem is thriving, with new startups being founded every day. The country has a large pool of talented engineers and entrepreneurs, and a growing number of investors, policy makers and new age enterprises are looking to back Indian startups.

Google Developer Groups (GDGs) in 50 key cities with varying tech ecosystems across India have seen a healthy mix of developers from the startup ecosystem participating in local meetups. As a result, GDGs have created a platform in collaboration with Google to help early-stage startups accelerate their growth. GDGs across India are increasingly playing a vital role in assisting startup founders and their teams with content, networking opportunities, hackathons, bootcamps, demo days, and more.

We are pleased to announce Startup Success Days with the goal of strengthening how developer communities interact with startup founders, VCs, and Googlers to discuss, share, and learn about the latest trends like Generative AI, Google Cloud, Google Maps, and Keras.

Google Developer Groups Success Days August to October 2023

Startup Success Days will be held in 15 cities across India, starting with 8 cities in August and September: Ahmedabad, Bangalore, Hyderabad, Indore, Chennai, New Delhi, Mumbai, and Pune.

The next event will be hosted in Bangalore at the Google office on August 12, 2023. The events are free to attend and open to all startups, regardless of stage or industry. They will cover technical topics focused on Google technologies, and will provide opportunities for startups to receive mentorship from industry experts, network with other startups, and meet VCs for feedback on their business models.

Learn more and register for Startup Success Days on our website.

We look forward to seeing you there!

Harsh Dattani
Program Manager, Developer Ecosystem at Google

Welcoming our inaugural Google for Startups Accelerator: Cloud North America cohort

Posted by Ashley Francisco, Head of Startup Ecosystem, North America, Google & Darren Mowry, Managing Director, Corporate Sales, Google

We’re kicking off a summer of accelerators by welcoming the inaugural 2023 North American Google for Startups Accelerator: Cloud cohort, our new class of cloud-native startups in the United States and Canada.

This 10-week virtual accelerator brings the best of Google's programs, products, people and technology to startups doing interesting work in the cloud. We’re excited to offer these startups cloud mentorship and technical project support, along with deep dives and workshops on product design, customer acquisition and leadership development for technology startup founders and leaders.

We heard from some of the founders in this year’s cohort, including New York City-based Harmonic Discovery, Toronto-based Oncoustics, and Vancouver-based OneCup AI, about how they are using Google Cloud data, analytics, AI, and other technologies across healthcare, agriculture and farming, and more. Read more on their aspirations for the program below:


"The team at Harmonic Discovery is excited to scale our deep learning infrastructure for drug discovery using Google Cloud. We also want to learn best practices from the Google team on training and developing machine learning models in a cost effective way.” – Rayees Rahman CEO, Harmonic Discovery


"We're very excited to grow our presence in the healthcare space by bringing our ultrasound based "virtual biopsy" solutions to clinics and serve over 2B people with liver diseases globally. Specifically in the Google for Startups Accelerator: Cloud program, we're looking to develop and hone our ability to efficiently scale our ML environments and processes to support the development of multiple new diagnostic products in parallel. We're also very excited about creating an edge-cloud hybrid solution with effective distribution of AI processing across GCP and Pixel 7 Pro.” – Beth Rogozinski CEO, Oncoustics


"Our primary objective is to leverage Google Cloud Platform's (GCP) cutting-edge technologies to enhance BETSY, our computer vision AI for animal care. Our milestones include developing advanced image recognition models and achieving real-time processing speeds for large-scale datasets. The accelerator will play a vital role in helping us refine our algorithms and optimize our infrastructure on GCP.” – Mokah Shmigelsly, Co-Founder & CEO and Geoffrey Shmigelsky, Co-Founder & CTO, OneCup AI


We received so many great applications for this program, and we're excited to welcome the 12 startups that make up the inaugural North American Cloud cohort:

  • Aiden Automotive (San Ramon, CA): Aiden is one of the first software solutions to provide streaming two-way communication directly with the vehicle and across vehicle brands. Aiden provides simple and intuitive 100% GDPR and CCPA compliant consent management, enabling car owners to choose which digital services they desire.
  • Binarly (Santa Monica, CA): Binarly’s agentless, enterprise-class AI-powered firmware security platform helps protect from advanced threats below the operating system. The company’s technology solves firmware supply chain security problems by identifying vulnerabilities, malicious firmware modifications and providing firmware SBOM visibility without access to the source code. Binarly’s cloud-agnostic solutions give enterprise security teams actionable insights, and reduce the cost and time to respond to security incidents.
  • Duality.ai (San Mateo, CA): Duality AI is an augmented digital twin platform that provides end-to-end workflows for predictive simulation and high fidelity visualization. The platform helps close data gaps for machine learning teams working on perception problems and helps robotics teams speed up design and validation of their autonomy software.
  • HalloAI (Provo, UT): Hallo is an AI-powered language learning platform for speaking. Press a button and start speaking any language with an AI teacher in 3 seconds.
  • Harmonic Discovery (New York, NY): Harmonic Discovery uses machine learning to design multi-targeted kinase drugs for cancer and autoimmune diseases.
  • MLtwist (Santa Clara, CA): MLtwist helps companies bring AI to the world faster. It gives data scientists and ML engineers access to the easiest and best way to get out of the weeds of data pipelines and back to what they enjoy and do best – design, build, and deploy AI.
  • Oncoustics (Toronto, ON): Oncoustics is creating advanced solutions for low-cost and non-invasive surveillance, diagnostics, and treatment monitoring of diseases with high unmet clinical need through the use of patented AI-based solutions running on ultrasound scans. Using a handheld point of care ultrasound, Oncoustics’ first solution allows clinicians to obtain a liver health assessment within 5 minutes.
  • OneCup AI (Vancouver, BC): OneCup uses computer vision for animal care. Our AI, BETSY, is the eyes of the rancher when the rancher is away.
  • Passio AI (Menlo Park, CA): Passio AI is a mobile AI platform that helps developers and companies build mobile applications powered by expert-level AI and computer vision.
  • RealKey (San Francisco, CA): RealKey is one of the first collaboration platforms built specifically for finance (starting with mortgages), automating documentation collection/review, tasks, and communication for all parties (not just borrowers) involved in transactions to reduce time, effort, and costs to close.
  • Sevco Security Inc. (Austin, TX): Sevco Security is a leading IT asset visibility and cybersecurity company that provides the industry’s first unified asset intelligence platform, designed to address the new extended attack surface and create a trusted data repository of all devices, users, and applications an organization uses.
  • VESSL AI (San Jose, CA): VESSL is an end-to-end MLOps platform aiming to be the next Snowflake for AI. The platform enables MLEs to run ML workloads at any scale on any cloud, such as AWS, Google Cloud Platform, Oracle Cloud, and on-premises.

As tech advancements continue at lightning speed, it’s an exciting opportunity to work with these founders and startup teams to help grow and scale their business. Programming for the Google for Startups Accelerator: Cloud begins mid-July and we can’t wait to see how far these startups go!

Media transcoding and editing, transform and roll out!

Posted by Andrew Lewis - Software Engineer, Android Media Solutions

The creation of user-generated content is on the rise, and users are looking for more ways to personalize and add uniqueness to their creations. These creations are then shared to a vast network of devices, each with its own capabilities. The Jetpack Media3 1.0 release includes new functionality in the Transformer module for converting media files between formats, or transcoding, and applying editing operations. For example, you can trim a clip from a longer piece of media and apply effects to the video track to share over social media, or transcode media into a more efficient codec for upload to a server.

The overall goal of Transformer is to provide an easy-to-use, reliable, and performant API for transcoding and editing media, including support for customizing functionality, following the same API design principles as ExoPlayer. The library is supported on devices running Android 5.0 Lollipop (API 21) onwards and includes device-specific optimizations, giving developers a strong foundation to build on. This post gives an introduction to the new functionality and describes some of the many features we're planning for upcoming releases!


Getting Started

Most operations with Transformer will follow the same general pattern:

  1. Configure a TransformationRequest with settings like your desired output format
  2. Create a Transformer and pass it your TransformationRequest
  3. Apply additional effects and edits
  4. Attach a listener to react to completion events
  5. Start the transformation

Of course, depending on your desired transformations, you may not need every step. Here's an example of transcoding an input video to the H.265/HEVC video format and removing the audio track.

// Create a TransformationRequest and set the output format to H.265
val transformationRequest =
    TransformationRequest.Builder().setVideoMimeType(MimeTypes.VIDEO_H265).build()

// Create a Transformer
val transformer = Transformer.Builder(context)
    .setTransformationRequest(transformationRequest) // Pass in TransformationRequest
    .setRemoveAudio(true) // Remove audio track
    .addListener(transformerListener) // transformerListener is an implementation of Transformer.Listener
    .build()

// Start the transformation
val inputMediaItem = MediaItem.fromUri("path_to_input_file")
transformer.startTransformation(inputMediaItem, outputPath)

During transformation you can get progress updates with Transformer.getProgress. When the transformation completes the listener is notified in its onTransformationCompleted or onTransformationError callback, and you can process the output media as needed.

Check out our documentation to learn about further capabilities in the Transformer APIs. You can also find details about using Transformer to accurately convert 10-bit HDR content to 8-bit SDR in the "Dealing with color washout" blog post to ensure your video's colors remain as vibrant as possible in the case that your app or the device doesn't support HDR content.


Edits, effects, and extensions

Media3 includes a set of core video effects for simple edits, such as scaling, cropping, and color filters, which you can use with Transformer. For example, you can create a Presentation effect to scale the input to 480p resolution while maintaining the original aspect ratio, and apply it with setVideoEffects:

Transformer.Builder(context)
    .setVideoEffects(listOf(Presentation.createForHeight(480)))
    .build()

You can also chain multiple effects to create more complex results. This example converts the input video to grayscale and rotates it by 30 degrees:

Transformer.Builder(context)
    .setVideoEffects(listOf(
        RgbFilter.createGrayscaleFilter(),
        ScaleToFitTransformation.Builder()
            .setRotationDegrees(30f)
            .build()))
    .build()

It's also possible to extend Transformer’s functionality by implementing custom effects that build on existing ones. Here is an example of subclassing MatrixTransformation, where we start zoomed in by 2 times, then zoom out gradually as the frame presentation time increases:

val zoomOutEffect = MatrixTransformation { presentationTimeUs ->
    val transformationMatrix = Matrix()
    // Video will zoom from 2x to 1x in the first second
    val scale = 2 - min(1f, presentationTimeUs / 1_000_000f)
    transformationMatrix.postScale(/* sx= */ scale, /* sy= */ scale)
    // The calculated transformations will be applied each frame in turn
    transformationMatrix
}

Transformer.Builder(context)
    .setVideoEffects(listOf(zoomOutEffect))
    .build()

Here's a screen recording that shows this effect being applied in the Transformer demo app:

Moving image showing the custom MatrixTransformation zoom-out effect in the Transformer demo app

For even more advanced use cases, you can wrap your own OpenGL code or other processing libraries in a custom GL texture processor and plug those into Transformer as custom effects. See the demo app for some examples of custom effects. The README also has instructions for trying a demo of MediaPipe integration with Transformer.


Coming soon

Transformer is actively under development but ready to use, so please give it a try and share your feedback! The Media3 development branch includes a sneak peek into several new features building on the 1.0 release described here, including support for tone-mapping HDR videos to SDR using OpenGL, previewing video effects using ExoPlayer.setVideoEffects, and custom audio processing. We are also working on support for editing multiple videos in more flexible compositions, with export from Transformer and playback through ExoPlayer, making Media3 an end-to-end solution for transforming media.

We hope you'll find Transformer an easy-to-use and powerful tool for implementing fantastic media editing experiences on Android! You can send us feature requests and bug reports in the Media3 GitHub issue tracker, and follow this blog to get updates on new features. Stay tuned for our upcoming talk “High quality Android media experiences” at Google I/O.