
Save the date for Firebase’s first Demo Day!

Posted by Annum Munir, Product Marketing Manager

This article was originally posted on the Firebase blog.

For the past six years, we have shared the latest and greatest updates to Firebase, Google’s app development platform, at our annual Firebase Summit – this year, we wanted to do something a little different for our community of developers. So, in addition to the Flutter Firebase festival that just wrapped up, and meeting you all over the world at DevFests, we’re thrilled to announce our very first Firebase Demo Day, happening on November 8, 2023!

What is Demo Day?

Demo Day will be a virtual experience where we'll unveil short demos (i.e. pre-recorded videos) that showcase what's new, what's possible, and how you can solve your biggest app development challenges with Firebase. You’ll hear directly from our team about what they’ve been working on in a format that will feel both refreshing and familiar.

What will you learn?

You’ll learn how Firebase can help you build and run fullstack apps faster, harness the power of AI to build smart experiences, and use Google technology and tools together to be more productive. We’ve been working closely with our friends from Flutter, Google Cloud, and Project IDX to ensure the demos cover a variety of topics and feature integrated solutions from your favorite Google products.

How can you participate?

Since Demo Day is not your typical physical or virtual event, you don’t need to worry about registering, securing a ticket, or even traveling. This is one of the easiest ways to peek at the exciting future of Firebase! Simply bookmark the website (and add the event to your calendar), then check back on Wednesday, November 8, 2023 at 1:00 pm EST to watch the videos at your own pace and be inspired to make your app the best it can be for users and your business.

In the meantime, we encourage you to follow us on X (formerly Twitter) and LinkedIn and join the conversation using #FirebaseDemoDay. We’ll be sharing teasers and behind-the-scenes footage throughout October as we count down to Demo Day, so stay tuned!

MediaPipe On-Device Text-to-Image Generation Solution Now Available for Android Developers

Posted by Paul Ruiz – Senior Developer Relations Engineer, and Kris Tonthat – Technical Writer

Earlier this year, we previewed on-device text-to-image generation with diffusion models for Android via MediaPipe Solutions. Today we’re happy to announce that it is available as an early, experimental solution, Image Generator, for developers to try out on Android devices, letting you generate images entirely on-device in as little as ~15 seconds on higher-end devices. We can’t wait to see what you create!

There are three primary ways that you can use the new MediaPipe Image Generator task:

  1. Text-to-image generation based on text prompts using standard diffusion models.
  2. Controllable text-to-image generation based on text prompts and conditioning images using diffusion plugins.
  3. Customized text-to-image generation based on text prompts using Low-Rank Adaptation (LoRA) weights that allow you to create images of specific concepts that you pre-define for your unique use-cases.

Models

Before we get into all of the fun and exciting parts of this new MediaPipe task, it’s important to know that our Image Generation API supports any model that exactly matches the Stable Diffusion v1.5 architecture. You can use a pretrained model, or your own fine-tuned model, by converting it to a model format supported by MediaPipe Image Generator using our conversion script.

You can also customize a foundation model via MediaPipe Diffusion LoRA fine-tuning on Vertex AI, injecting new concepts into a foundation model without having to fine-tune the whole model. You can find more information about this process in our official documentation.

If you want to try this task out today without any customization, we also provide links to a few verified working models in that same documentation.

Image Generation through Diffusion Models

The most straightforward way to try the Image Generator task is to give it a text prompt, and then receive a result image using a diffusion model.

Like MediaPipe’s other tasks, you will start by creating an options object. In this case you will only need to define the path to your foundation model files on the device. Once you have that options object, you can create the ImageGenerator.

val options = ImageGeneratorOptions.builder()
    .setImageGeneratorModelDirectory(MODEL_PATH)
    .build()

imageGenerator = ImageGenerator.createFromOptions(context, options)

After creating your new ImageGenerator, you can create a new image by passing in the prompt, the number of iterations the generator should run, and a seed value. This runs a blocking operation to create a new image, so you will want to run it on a background thread before returning your new Bitmap result object.

val result = imageGenerator.generate(prompt_string, iterations, seed)
val bitmap = BitmapExtractor.extract(result?.generatedImage())

In addition to this simple input in/result out format, we also support a way for you to step through each iteration manually through the execute() function, receiving the intermediate result images back at different stages to show the generative progress. While getting intermediate results back isn’t recommended for most apps due to performance and complexity, it is a nice way to demonstrate what’s happening under the hood. This is a little more of an in-depth process, but you can find this demo, as well as the other examples shown in this post, in our official example app on GitHub.
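To give a concrete idea of that flow, here is a minimal sketch of iterative generation that reuses the ImageGenerator created above. It assumes a setInputs()/execute() pattern alongside the execute() function mentioned here; exact signatures may differ slightly, so treat the official example app as the source of truth.

// Set the prompt, iteration count, and seed once up front (assumed setInputs() helper).
imageGenerator.setInputs(prompt_string, iterations, seed)

// Step through the iterations manually on a background thread,
// asking for an intermediate image back at each step.
for (step in 0 until iterations) {
    val result = imageGenerator.execute(/* showResult= */ true)
    val intermediateBitmap = result?.generatedImage()?.let { BitmapExtractor.extract(it) }
    // Post intermediateBitmap to the UI thread here to visualize progress.
}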

Moving image of an image generating in MediaPipe from the following prompt: a colorful cartoon racoon wearing a floppy wide brimmed hat holding a stick walking through the forest, animated, three-quarter view, painting

Image Generation with Plugins

While being able to create new images from only a prompt on a device is already a huge step, we’ve taken it a little further by implementing a new plugin system which enables the diffusion model to accept a condition image along with a text prompt as its inputs.

We currently support three different ways that you can provide a foundation for your generations: facial structures, edge detection, and depth awareness. The plugins give you the ability to provide an image, extract specific structures from it, and then create new images using those structures.
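As a rough sketch of how a plugin is wired in, the snippet below configures the edge-detection plugin and then passes a condition image alongside the prompt, reusing the options object from earlier. The class and method names (ConditionOptions, EdgeConditionOptions, setPluginModelBaseOptions, ConditionType.EDGE) reflect the current task API, and EDGE_PLUGIN_MODEL_PATH and conditionImage are placeholder names, so check the documentation and the example app for the exact details.

// Bundle the edge-detection plugin model with the foundation model options.
val edgeConditionOptions = ConditionOptions.EdgeConditionOptions.builder()
    .setPluginModelBaseOptions(
        BaseOptions.builder().setModelAssetPath(EDGE_PLUGIN_MODEL_PATH).build()
    )
    .build()

val conditionOptions = ConditionOptions.builder()
    .setEdgeConditionOptions(edgeConditionOptions)
    .build()

imageGenerator = ImageGenerator.createFromOptions(context, options, conditionOptions)

// Generate from a text prompt plus a condition image (an MPImage of the source photo).
val result = imageGenerator.generate(
    prompt_string,
    conditionImage,
    ConditionOptions.ConditionType.EDGE,
    iterations,
    seed
)
val bitmap = BitmapExtractor.extract(result?.generatedImage())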

Moving image of an image generating in MediaPipe from a provided image of a beige toy car, plus the following prompt: cool green race car

LoRA Weights

The third major feature we’re rolling out today is the ability to customize the Image Generator task with LoRA to teach a foundation model about a new concept, such as specific objects, people, or styles presented during training. With the new LoRA weights, the Image Generator becomes a specialized generator that is able to inject specific concepts into generated images.

LoRA weights are useful for cases where you may want every image to be in the style of an oil painting, or a particular teapot to appear in any created setting. You can find more information about LoRA weights on Vertex AI in the MediaPipe Stable Diffusion LoRA model card, and create them using this notebook. Once generated, you can deploy the LoRA weights on-device using the MediaPipe Tasks Image Generator API, or for optimized server inference through Vertex AI’s one-click deployment.
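On-device, using LoRA weights is mostly a matter of pointing the task options at the weights file when the generator is created. In the sketch below, setLoraWeightsFilePath and LORA_WEIGHTS_PATH are illustrative placeholders; verify the exact option name against the current documentation.

// Load the Stable Diffusion foundation model together with the converted LoRA weights.
val options = ImageGeneratorOptions.builder()
    .setImageGeneratorModelDirectory(MODEL_PATH)
    .setLoraWeightsFilePath(LORA_WEIGHTS_PATH)
    .build()
imageGenerator = ImageGenerator.createFromOptions(context, options)

// Prompts can now reference the concept the weights were trained on,
// e.g. "a bright purple monadikos teapot on a green table".
val result = imageGenerator.generate(prompt_string, iterations, seed)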

In the example below, we created LoRA weights using several images of a teapot from the Dreambooth teapot training image set, then used those weights to generate a new image of the teapot in different settings.

A grid of four photos of teapots generated with the training prompt 'a photo of a monadikos teapot' on the left, and a moving image showing an image being generated in MediaPipe from the prompt 'a bright purple monadikos teapot sitting on top of a green table with orange teacups'
Image generation with the LoRA weights

Next Steps

This is just the beginning of what we plan to support with on-device image generation. We’re looking forward to seeing all of the great things the developer community builds, so be sure to post them on X (formerly Twitter) with the hashtag #MediaPipeImageGen and tag @GoogleDevs. You can check out the official sample on GitHub demonstrating everything you’ve just learned about, read through our official documentation for even more details, and keep an eye on the Google for Developers YouTube channel for updates and tutorials as they’re released by the MediaPipe team.


Acknowledgements

We’d like to thank all team members who contributed to this work: Lu Wang, Yi-Chun Kuo, Sebastian Schmidt, Kris Tonthat, Jiuqiang Tang, Khanh LeViet, Paul Ruiz, Qifei Wang, Yang Zhao, Yuqi Li, Lawrence Chan, Tingbo Hou, Joe Zou, Raman Sarokin, Juhyun Lee, Geng Yan, Ekaterina Ignasheva, Shanthal Vasanth, Glenn Cameron, Mark Sherwood, Andrei Kulik, Chuo-Ling Chang, and Matthias Grundmann from the Core ML team, as well as Changyu Zhu, Genquan Duan, Bo Wu, Ting Yu, and Shengyang Dai from Google Cloud.

Build with Google AI: new video series for developers

Posted by Joe Fernandez, AI Developer Relations, and Jaimie Hwang, AI Developer Marketing

Artificial intelligence (AI) represents a new frontier for technology we are just beginning to explore. While many of you are interested in working with AI, we realize that most developers aren't ready to dive into building their own artificial intelligence models (yet). With this in mind, we've created resources to get you started building applications with this technology.

Today, we are launching a new video series called Build with Google AI. This series features practical, useful AI-powered projects that don't require deep knowledge of artificial intelligence, or huge development resources. In fact, you can get these projects working in less than a day.

From self-driving cars to medical diagnosis, AI is automating tasks, improving efficiency, and helping us make better decisions. At the center of this wave of innovation are artificial intelligence models, including large language models like Google PaLM 2 and more focused AI models for translation, object detection, and other tasks. The frontier of AI, however, is not simply building new and better AI models, but also creating high-quality experiences and helpful applications with those models.

Practical AI code projects

This series is by developers, for developers. We want to help you build with AI, and not just any code project will do. They need to be practical and extensible. We are big believers in starting small and tackling concrete problems. The open source projects featured in the series are selected so that you can get them working quickly, and then build beyond them. We want you to take these projects and make them your own. Build solutions that matter to you.

Finally, and most importantly, we want to promote the use of AI that's beneficial to users, developers, creators, and organizations. So, we are focused on solutions that follow our principles for responsible use of artificial intelligence.

For the first arc of this series, we focus on how you can leverage Google's AI language model capabilities for applications, particularly the Google PaLM API. Here's what's coming up:

  • AI Content Search with Doc Agent (10/3) We'll show you how a technical writing team at Google built an AI-powered conversation search interface for their content, and how you can take their open source project and build the same functionality for your content. 
  • AI Writing Assistant with Wordcraft (10/10) Learn how the People and AI Research team at Google built a story writing application with AI technology, and how you can extend their code to build your own custom writing app. 
  • AI Coding Assistant with Pipet Code Agent (10/17) We'll show you how the AI Developer Relations team at Google built a coding assistance agent as an extension for Visual Studio Code, and how you can take their open source project and make it work for your development workflow.

For the second arc of the series, we'll bring you a new set of projects that run artificial intelligence applications locally on devices for lower latency, higher reliability, and improved data privacy.

Insights from the development teams

As developers, we love code, and we know that understanding someone else's code project can be a daunting task. The series includes demos and tutorials on how to customize the code, and we'll talk with the people behind the code. Why did they build it? What did they learn along the way? You’ll hear insights directly from the project team, so you can take it further.

Discover AI technologies from across Google

Google provides a host of resources for developers to build solutions with artificial intelligence. Whether you are looking to develop with Google's AI language models, build new models with TensorFlow, or deploy full-stack solutions with Google Cloud Vertex AI, it's our goal to help you find the AI technology solution that works best for your development projects. To start your journey, visit Build with Google AI.

We hope you are as excited about the Build with Google AI video series as we are to share it with you. Check out Episode #1 now! Use those video comments to let us know what you think and tell us what you'd like to see in future episodes.

Keep learning! Keep building!

Studio Bot expands to 170+ international markets!

Posted by Isabella Fiterman – Product Marketing Manager, and Sandhya Mohan – Product Manager

At this year’s Google I/O, one of the most exciting announcements for Android developers was the introduction of Studio Bot, an AI-powered coding assistant that can be accessed directly in Android Studio. Studio Bot helps you write high-quality Android apps faster by generating code for your app, answering your questions, and finding relevant resources, all without ever having to leave Android Studio. After our announcement you told us how excited you were about this AI-powered coding companion, and those of you outside of the U.S. were eager to get your hands on it. We heard your feedback, and have expanded Studio Bot to over 170 countries and territories in the canary release channel of Android Studio.

Ask Studio Bot your Android development questions

Studio Bot is powered by artificial intelligence and can understand natural language, so you can ask development questions in your own words. While it’s now available in most countries, it is designed to be used in English. You can enter your questions in Studio Bot’s chat window ranging from very simple and open-ended ones to specific problems that you need help with. Here are some examples of the types of queries it can answer:

How do I add camera support to my app?


I want to create a Room database.

Can you remind me of the format for javadocs?

What's the best way to get location on Android?

Studio Bot remembers the context of the conversation, so you can also ask follow-up questions, such as “Can you give me the code for this in Kotlin?” or “Can you show me how to do it in Compose?”


Moving image showing a user having a conversation with Studio Bot

Designed with privacy in mind

Studio Bot was designed with privacy in mind. You don’t need to send your source code to take advantage of Studio Bot’s features. By default, Studio Bot’s responses are purely based on conversation history, and you control whether you want to share additional context or code for customized responses. Much like our work on other AI projects, we stick to a set of AI Principles that hold us accountable.

Focus on quality

Studio Bot is still in its early days, and we suggest validating its responses before using them in a production app. We’re continuing to improve its Android development knowledge base and quality of responses so that it can better support your development needs in the future. You can help us improve Studio Bot by trying it out and sharing your feedback on its responses using the thumbs up and down buttons.

Try it out!

Download the latest canary release of Android Studio and read more about how you can get started with Studio Bot. You can also sign up to receive updates on Studio Bot as the experience evolves.

Announcing the Inaugural Google for Startups Accelerator: AI First cohort

Posted by Yariv Adan, Director of Cloud Conversational AI, and Pati Jurek, Google for Startups Accelerator Regional Lead

This article is also shared on Google Cloud Blog

Today’s startups are addressing the world's most pressing issues, and artificial intelligence (AI) is one of their most powerful tools. To empower startups to scale their business towards success in the rapidly evolving AI landscape, Google for Startups Accelerator: AI First offers a 10-week, equity-free program for AI-first startups in partnership with Google Cloud. Designed for seed to series A startups based in Europe and Israel, the program helps them grow and build responsibly with AI and machine learning (ML) from the ground up, with access to experts from Google Cloud and Google DeepMind, a mix of in-person and virtual activities, 1:1 mentoring, and group learning sessions.

In addition, the program features deep dives and workshops focused on product design, business growth, and leadership development. Startups that are selected for the cohort also benefit from dedicated Google AI technical expertise and receive credits via the Google for Startups Cloud Program.

Out of hundreds of impressive applications, today we welcome the inaugural cohort of the Google for Startups Accelerator: AI First. The program includes 13 groundbreaking startups from nine different countries, all focused on different verticals and with a diverse array of founder and executive backgrounds. All participants are leveraging AI and ML technologies to solve significant problems and have the potential to transform their respective industries.


Congratulations to the cohort!

We are thrilled to present the inaugural Google for Startups Accelerator: AI First cohort:

  • Annea.Ai (Germany) utilizes AI and Digital Twin technology to forecast and prevent possible breakdowns in renewable energy assets, such as wind turbines.
  • Checktur.io (Germany) empowers businesses to manage their commercial vehicle fleets efficiently via an end-to-end fleet asset management ecosystem while using AI models and data-driven insights.
  • Exactly.ai (UK) lets artists create images in their own unique style with a simple written description.
  • Neurons (Denmark) has developed a precise AI model that can measure human subconscious signals to predict marketing responses.
  • PACTA (Germany) provides AI-driven contract lifecycle management with an intelligent no-code workflow on one central legal platform.
  • Quantic Brains (Spain) empowers users to generate movies and video games using AI.
  • Sarus (France) builds a privacy layer for Analytics & AI and allows data practitioners to query sensitive data without having direct access to it.
  • Releva (Bulgaria) provides an all-in-one AI automation solution for eCommerce marketing.
  • Semantic Hub (Switzerland) uses AI leveraging multilingual Natural Language Understanding to help global biopharmaceutical companies understand the patient experience through first-hand testimonies on social media.
  • Vazy Data (France) allows anyone to analyze data without technical knowledge by using AI.
  • Visionary.AI (Israel) leverages cutting-edge AI to improve real-time video quality in challenging visual conditions like extreme low-light.
  • ZENPULSAR (UK) provides social media analytics from over 10 social media platforms to financial institutions and corporations to facilitate investment and business decisions.
  • Zaya AI (Romania) uses machine learning to better understand and diagnose diseases, assisting healthcare professionals to make timely and informed medical decisions.
Grid image of logos and executives of all startups listed in the inaugural Google for Startups Accelerator

To learn more about the AI-first program, and to signal your interest in nominating your startup for future cohorts, visit the program page here.

Grow user acquisition and store conversions with the updated Play Store Listing Certificate course

Posted by Rob Simpson, Product Manager - Google Play & Joe Davis, Manager - Google Play Academy

Since we launched the Google Play Store Listing Certificate in 2021, our no-cost, self-paced training courses have helped thousands of developers in over 80 countries increase their app installs. Over the course of the training, developers learn essential mobile marketing best practices, including how to leverage Play Console growth tools like store listing experiments (SLEs) and custom store listings (CSLs).

Today, we’re excited to release a major update to our self-paced training, covering all the latest CSL and SLE features, as well as real-world examples showing how you might use them to drive user growth. We’re also releasing a video study guide series to help you lock in your new knowledge ahead of the exam.

Built for app growth marketers, take Google Play Academy's online growth marketing training and get certified in Google Play Store Listing best practices!
Videos: Google Play Store Listing Certificate Study Guide

What’s new in custom store listings and store listing experiments

The new course content focuses on custom store listings and store listing experiments. For the unfamiliar, custom store listings allow you to show different versions of your title’s Play Store content to different people. For example, you might create versions tailored to users in different countries where feature availability varies, or an experience just for users who have lapsed or churned.

Custom store listings can help you convey the most effective messaging for different users. Based on an internal comparative analysis, CSLs can help increase an app or game’s monthly active users (MAUs) by an average of over 3.5%1.

Store listing experiments, on the other hand, offer a way to explore what icons, descriptions, screenshots (and more) convert the best for your title on the Play Store.

These are features you can use today! Google Play Academy’s training now includes four new courses on custom store listings, four updated existing courses, and nine new study guide videos.

Finding app and career growth

Here’s what some developers, entrepreneurs and marketers have said about their experience after getting trained up and certified:

Learning best practices for store listing experiments allowed me to know more about our audience. Something as simple as using the proper icon increased acquisitions of one of our games by approximately 60%.

Adrian Mojica 
Marketing Creative, GameHouse (Spain) 

The knowledge I gained empowered me to make more informed decisions and learn effective strategies. The credibility I got from the certificate has opened new doors in my career. 

Roshni Kumari 
Student & Campus Ambassador (India)


Play Academy increased efficiency in mentoring relationships by 50%, and we've seen a 30% increase in our game launch speed overall. 

Kimmie Vu
Chief Marketing Officer, Rocket Game Studio (Vietnam)


Top tips to prepare for your certificate exam

  1. Take the training and watch the study guide videos
  Take the online training on Google Play Academy to learn best practices to help create a winning store listing, then lock in your knowledge with the new video study guides. You’ll learn key skills to help you drive growth with high-quality and policy-compliant store listings.

  2. Pass the exam and get certified
  After the training, take the exam to get an industry-recognized certificate. You will also be invited to join the Google Developer Certification Directory, where you can network with other Google-certified developers.

  3. Get started with custom store listings and experiments
  Time to put your new skills into action. Start with 2-3 custom store listings for markets important to your app or game, such as users in certain countries or lapsed or churned users. Or test a new icon or short description.

Start your learning journey on Google Play Academy today!


1 Source: Internal Google data [Nov 2022] comparing titles that use CSL to those that do not.

Programmatically access working locations with the Calendar API

Posted by Chanel Greco, Developer Advocate

Giving Google Workspace users the ability to set their working location and working hours in Google Calendar was an important step in helping our customers’ employees adapt to a hybrid world. Sending a Chat message asking “Will you be in the office tomorrow?” soon became obsolete as anyone could share where and when they would be working within Calendar.

To improve the hybrid working experience, many organizations rely on third-party or company-internal tools to enable tasks like hot desk booking or scheduling days in the office. Until recently, there was no way to programmatically synchronize the working location set in Calendar with such tools.


Image showing working locations visible via Google Calendar in the Robin app
Robin displays the working location from Google Calendar in their application and updates the user's Google Calendar when they book a desk in Robin

Programmatically read and write working locations

We are pleased to announce that the Calendar API has been updated to expose working locations, and that this functionality is now generally available (only for eligible Workspace editions). It enables developers to programmatically read and write the working location of Google Workspace users. This can be especially useful in three use cases that have surfaced in discussions with customers, which we'll explore together.

1.     Synchronize with third-party tools

The enhanced Calendar API enables developers to synchronize users’ working locations with third-party tools like Robin and Comeen. For example, some companies provide their employees with desk booking tools so they can book their workplace in advance for the days they will be on-site. HR management tools are also commonly used by employees to request and set “Work from home” days. In both situations the user previously had to set their working location in two separate places: their desk booking tool and/or HR management system, and Google Calendar.

Thanks to the working location being accessible through the Calendar API, this duplicate work is no longer necessary, since a user’s working location can be set programmatically. And if a user's calendar is the single source of truth? In that case, the API can be used to read the working location from the user’s calendar and write it to any permissioned third-party tool.
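As a minimal Kotlin sketch of both directions, the snippet below calls the Calendar REST API directly. It assumes you already have an OAuth 2.0 access token with a Calendar scope, and the field names used here (eventType, workingLocationProperties, the eventTypes filter) follow the working location event format; consult the developer documentation for the exact fields supported by your Workspace edition.

import java.net.HttpURLConnection
import java.net.URL

// Read a user's working location events from their primary calendar.
fun readWorkingLocations(accessToken: String, timeMin: String, timeMax: String): String {
    val url = URL(
        "https://www.googleapis.com/calendar/v3/calendars/primary/events" +
            "?eventTypes=workingLocation&singleEvents=true&timeMin=$timeMin&timeMax=$timeMax"
    )
    val conn = url.openConnection() as HttpURLConnection
    conn.setRequestProperty("Authorization", "Bearer $accessToken")
    return conn.inputStream.bufferedReader().readText() // JSON list of workingLocation events
}

// Write a "home office" working location for a single day (all-day event).
fun writeHomeOfficeDay(accessToken: String, date: String, nextDate: String) {
    val body = """
        {
          "eventType": "workingLocation",
          "summary": "Home",
          "start": {"date": "$date"},
          "end": {"date": "$nextDate"},
          "visibility": "public",
          "transparency": "transparent",
          "workingLocationProperties": {"type": "homeOffice", "homeOffice": {}}
        }
    """.trimIndent()
    val conn = URL("https://www.googleapis.com/calendar/v3/calendars/primary/events")
        .openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("Authorization", "Bearer $accessToken")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.outputStream.use { it.write(body.toByteArray()) }
    conn.inputStream.close() // a non-2xx response throws; inspect errorStream in real code
}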


Image showing Google Workspace Add-on synchronizing users' working locations in the Comeen app.
Comeen’s Google Workspace Add-on synchronizes users’ working locations whenever a user updates their working location, either in Google Calendar or in Comeen's add-on

2.     Display working location on other surfaces

The API enables the surfacing of the user's working location in other tools, creating interesting opportunities. For instance, some of our customers have asked for ways to better coordinate in-office days. Imagine you are planning to be at the office tomorrow. Who else from your team will be there? Who from a neighboring team might be on-site for a coffee chat?

With the Calendar API, a user's working location can be displayed in tools like directories, or a hybrid-work scheduling tool. The goal is to make a user’s working location available in the systems that are relevant to our customers.

3.     Analyze patterns

The third use case that surfaced from discussions with our customers is analyzing working location patterns. With many of our customers taking a hybrid work approach, it’s vital to have a good understanding of working patterns. For example, on which days do locations reach maximum legal capacity? Or, when does the on-campus restaurant have to prepare more meals for employees working on-site?

The API answers these and other questions so that facility management can adapt their resources to the needs of their employees.


How to get started

Now that you have an idea of the possibilities the updated Calendar API creates, we want to guide you on how you can get started using it.

  • Check out the developer documentation for reading and writing a user's working locations.
  • Watch the announcement video on the Google Workspace Developers YouTube channel.
  • Check the original post about the launch of the working location feature for a list of all Google Workspace plans that have access to the feature.

MediaPipe for Raspberry Pi and iOS

Posted by Paul Ruiz, Developer Relations Engineer

Back in May we released MediaPipe Solutions, a set of tools for no-code and low-code solutions to common on-device machine learning tasks, for Android, web, and Python. Today we’re happy to announce that the initial version of the iOS SDK, plus an update for the Python SDK to support the Raspberry Pi, are available. These include support for audio classification, face landmark detection, and various natural language processing tasks. Let’s take a look at how you can use these tools for the new platforms.

Object Detection for Raspberry Pi

Aside from setting up your Raspberry Pi hardware with a camera, you can start by installing the MediaPipe dependency, along with OpenCV and NumPy if you don’t have them already.

python -m pip install mediapipe

From there you can create a new Python file and add your imports to the top.

import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
import cv2
import numpy as np
import time  # used for detect_async timestamps below

You will also want to make sure you have an object detection model stored locally on your Raspberry Pi. For your convenience, we’ve provided a default model, EfficientDet-Lite0, that you can retrieve with the following command.

wget -q -O efficientdet.tflite https://storage.googleapis.com/mediapipe-models/object_detector/efficientdet_lite0/int8/1/efficientdet_lite0.tflite

Once you have your model downloaded, you can start creating your new ObjectDetector, including some customizations, like the max results that you want to receive, or the confidence threshold that must be exceeded before a result can be returned.

# Initialize the object detection model
base_options = python.BaseOptions(model_asset_path=model)
options = vision.ObjectDetectorOptions(
    base_options=base_options,
    running_mode=vision.RunningMode.LIVE_STREAM,
    max_results=max_results,
    score_threshold=score_threshold,
    result_callback=save_result)
detector = vision.ObjectDetector.create_from_options(options)

After creating the ObjectDetector, you will need to open the Raspberry Pi camera to read the continuous frames. There are a few preprocessing steps that will be omitted here, but are available in our sample on GitHub.

Within that loop you can convert the processed camera image into a new MediaPipe.Image, then run detection on that new MediaPipe.Image before displaying the results that are received in an associated listener.

mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb_image)
detector.detect_async(mp_image, time.time_ns())

Once you draw out those results and detected bounding boxes, you should be able to see something like this:

Moving image of a person holding up a cup and a phone, with detected bounding boxes identifying these items in real time

You can find the complete Raspberry Pi example shown above on GitHub, or see the official documentation here.

Text Classification on iOS

While text classification is one of the more direct examples, the core ideas will still apply to the rest of the available iOS Tasks. Similar to the Raspberry Pi, you’ll start by creating a new MediaPipe Tasks object, which in this case is a TextClassifier.

var textClassifier: TextClassifier?
textClassifier = TextClassifier(modelPath: model.modelPath)

Now that you have your TextClassifier, you just need to pass a String to it to get a TextClassifierResult.

func classify(text: String) -> TextClassifierResult? {
    guard let textClassifier = textClassifier else {
        return nil
    }
    return try? textClassifier.classify(text: text)
}

You can do this from elsewhere in your app, such as from a ViewController on a background DispatchQueue, before displaying the results.

let result = self?.textClassifier.classify(text: inputText)
let categories = result?.classificationResult.classifications.first?.categories ?? []

You can find the rest of the code for this project on GitHub, as well as see the full documentation on developers.google.com/mediapipe.

Moving image of TextClassifier on an iPhone

Getting started

To learn more, watch our I/O 2023 sessions: Easy on-device ML with MediaPipe, Supercharge your web app with machine learning and MediaPipe, and What's new in machine learning, and check out the official documentation over on developers.google.com/mediapipe.

We look forward to all the exciting things you make, so be sure to share them with @googledevs and your developer communities!

Meet the Google for Startups Accelerator: Women Founders Class of 2023

Posted by Iran Karimian, Startup Developer Ecosystem Lead, Canada

It’s an unfortunate truth that women founders are massively underrepresented among venture-backed entrepreneurs and VC investors, with companies founded solely by women receiving less than 3% of all venture capital investments. In response, the need to invest in women entrepreneurs in other ways, such as mentorship and technical support to help grow and scale their businesses, has become more apparent.

Back in 2020, we launched the Google for Startups Accelerator: Women Founders program to bridge the gender gap in the North American startup ecosystem, and provide high-quality mentorship opportunities, technical guidance, support and community for women founders in the region. Since then, the program has supported 36 women-led startups across North America, which have collectively raised $73.46M USD since graduating from their cohorts. Now in its fourth year, the equity-free, 10-week intensive virtual accelerator program provides women-led startups with the tools they need to prepare for the next phase of their growth journey.

Today, we are excited to introduce the 11 impressive women-led startups selected to participate in the 2023 cohort:

  • Aravenda (Fairfax, VA) is a comprehensive consignment shop software that is leading innovation in the fastest growing segment of retail through resales.
  • BorderlessHR (Ottawa, ON) offers global talent solutions for small businesses, providing instant matches to pre-vetted talent and AI-powered interviewers, saving SMBs the cost and time spent hiring the right talent on time and within budget. Borderless HR also offers a free suite of HR products to help manage talent.
  • Cobble (New York City, NY) is a platform that helps people reach collaborative agreement with others on ideas. Cobble offers a combination of decision-making tools, curated content and AI-driven social connections.
  • Craftmerce (Delaware City, DE) is a B2B technology platform that links African artisans to mainstream retail partners by providing tools for distributed production, enterprise management, and financing.
  • Dreami (Redwood City, Calif.) powers data-driven career development programs for the 36 million people in the US who face barriers to employment.
  • Medijobs (New York City, NY) offers virtual recruiting for the healthcare industry.
  • Monark (Calgary, AB) is a digital leadership development platform, preparing the next generation of leaders through on-demand personalized learning.
  • NLPatent (Toronto, ON) is an AI-patent search and analytics platform that uses a fine-tuned large language model, built from the ground up, to understand the language of innovation.
  • Rejoy Health (Mountain View, Calif.) is an AI-powered mobile application that uses computer vision technology to deliver at-home physical therapy, enabling individuals to effectively manage and alleviate chronic musculoskeletal conditions like back and joint pain.
  • Shimmer (San Francisco, Calif.) is an ADHD coaching platform that connects adults with ADHD and expert ADHD coaches for behavioral coaching.
  • Total Life (Jupiter, FL) reimagines aging for older adults through an easy, one-click platform that connects users with a Medicare covered healthcare provider.

Through data-driven insights, and leveraging the power of AI and ML, these women-led startups are leading innovation in the North American tech scene. We are thrilled to have them join the 10-week intensive virtual program, connecting them to the best of Google's programs, products, people and technology to help them reach their goals and unlock their next phase of growth. The 2023 Google for Startups Accelerator: Women Founders program kicks off this September.

Google Developer Groups & ecosystem partners bring Startup Success Days to 15 Indian cities

Posted by Harsh Dattani - Program Manager, Developer Ecosystem

The Indian startup ecosystem is thriving, with new startups being founded every day. The country has a large pool of talented engineers and entrepreneurs, and a growing number of investors, policy makers and new age enterprises are looking to back Indian startups.

Google Developer Groups (GDGs) in 50 key cities with varying tech ecosystems across India have seen a healthy mix of developers from the startup ecosystem participating in local meetups. As a result, GDGs have created a platform in collaboration with Google to help early-stage startups accelerate their growth. GDGs across India are increasingly playing a vital role in assisting startup founders and their teams with content, networking opportunities, hackathons, bootcamps, demo days, and more.

We are pleased to announce Startup Success Days, with the goal of strengthening how developer communities interact with startup founders, VCs, and Googlers to discuss, share, and learn about the latest technologies and trends, including generative AI, Google Cloud, Google Maps, and Keras.

Google Developer Groups Success Days August to October 2023

Startup Success Days will be held in 15 cities across India, starting with 8 cities in August and September: Ahmedabad, Bangalore, Hyderabad, Indore, Chennai, New Delhi, Mumbai, and Pune.

The next event will be hosted at the Google office in Bangalore on August 12, 2023. The events are free to attend and open to all startups, regardless of stage or industry. They will cover technical topics focused on Google technologies, and will provide opportunities for startups to receive mentorship from industry experts, network with other startups, and meet VCs to receive feedback on their business models.

Learn more and register for Startup Success Days on our website.

We look forward to seeing you there!

Harsh Dattani
Program Manager, Developer Ecosystem at Google