Posted by Jose Ugia – Developer Relations Engineer
Testing is an integral part of software engineering, especially in the context of payments, where small glitches can have significant implications for your business.
We previously introduced a set of test cards for you to use with the Google Pay API in TEST mode. These cards allow you to build simple test cases to verify that your Google Pay integration operates as expected. While this was a great start, a few predefined cards only let you run a limited number of happy-path test scenarios, confined to the domain of your applications.
Improved testing capabilities
Today, we are introducing PSP test cards, an upgrade to Google Pay’s test suite that lets you use test cards from your favorite payment service providers (PSPs) to build end-to-end test scenarios, enabling additional testing strategies, both manual and automated.
Figure 1: Test cards from your payment processor appear in Google Pay’s payment sheet when using TEST mode.
When you select a card, the result is returned to your application via the API, so you can use it to validate comprehensive payment flows end to end, including relaying payment information to your backend to complete the order with your processor. These test cards allow you to verify the behavior of your application against diverse payment outcomes, including successful transactions and transactions that fail due to fraud, declines, insufficient funds, and more.
Test automation
This upgrade also supports test automation, so you can write end-to-end UI tests using familiar tools like UIAutomator and Espresso on Android, and include them in your CI/CD flows to further strengthen your checkout experiences.
The new generation of Google Pay’s test suite is currently in beta, with web support coming later this year. You’ll be able to use test cards on Android for 5 of the most widely used PSPs: Stripe, Adyen, Braintree, WorldPay, and Checkout.com, and we’ll continue to add test cards from more PSPs.
Next steps
Improved testing capabilities have been one of the most frequent requests from the developer community. The Google Pay team is committed to providing you with the tools you need to harden your payment flows and improve your checkout performance.
Figure 2: With the upgraded test suite you can run end-to-end automated tests for successful and failed payment flows.
Take a look at the documentation to start enhancing your payments tests. Also, check out the sample test suite in the Google Pay demo open source application.
If you're reading this blog, then you're probably interested in creating a custom machine learning (ML) model. I recently went through the process myself, creating a custom dog detector to go with a Codelab, Create a custom object detection web app with MediaPipe. Like any new coding task, the process took some trial and error to figure out what I was doing along the way. To minimize the error part of your "trial and error" experience, I'm happy to share five takeaways from my model training experience with you.
1. Preparing data takes a long time. Be sure to make the time
Preparing your data for training will look different depending on the type of model you're customizing. In general, there is a step for sourcing data and a step for annotating data.
Sourcing data
Finding enough data points that best represent your use case can be a challenge. For one, you want to make sure you have the right to use any images or text you include in your data. Check the licensing for your data before training. One way to resolve this is to provide your own data. I just so happen to have hundreds of photos of my dogs, so choosing them for my object detector was a no-brainer. You can also look for existing datasets on Kaggle. There are so many options on Kaggle covering a wide range of use cases. If you're lucky, you'll find an existing dataset that serves your needs and it might even already have annotations!
Annotating data
MediaPipe Model Maker accepts data where each input has a corresponding XML file listing its annotations. For example:
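A typical annotation file follows the PASCAL VOC style that Model Maker's object detection tooling works with. The file name, label, and coordinates below are illustrative, not from a real dataset:

```xml
<annotation>
  <filename>ben_001.jpg</filename>
  <size>
    <width>1280</width>
    <height>960</height>
    <depth>3</depth>
  </size>
  <object>
    <name>ben</name>
    <bndbox>
      <xmin>212</xmin>
      <ymin>145</ymin>
      <xmax>980</xmax>
      <ymax>901</ymax>
    </bndbox>
  </object>
</annotation>
```

Each `<object>` element is one labeled bounding box, so an image with two dogs in it would have two `<object>` entries.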
There are several software programs that can help with annotation. This is especially useful when you need to highlight specific areas in images. Some software programs are designed to enable collaboration: an intuitive UI and instructions for annotators mean you can enlist the help of others. A common open source option is Label Studio, which is what I used to annotate my images.
2. Simplify, simplify, simplify
If you're anything like me, you have a wonderfully grand idea planned for your first custom model. My dog Ben was the inspiration for mine. He came from a local golden retriever rescue, but when I did a DNA test, it turned out that he's 0% golden retriever! My first idea was to create a golden retriever detector: a solution that could tell you if a dog was a "golden retriever" or "not golden retriever". I thought it could be fun to see what the model thought of Ben, but I quickly realized that I would have to source many more dog images than I had so I could run the model on other dogs as well. I'd also have to make sure that it could accurately identify golden retrievers of all shades. Hours into this endeavor, I realized I needed to simplify. That's when I decided to try building a solution for just my three dogs. I had plenty of photos to choose from, so I picked the ones that best showed the dogs in detail. This was a much more successful solution, and a great proof of concept for my golden retriever model, because I refuse to abandon that idea.
Here are a few ways to simplify your first custom model:
Start with fewer labels. Choose 2-5 classes to assign to your data.
Leave off the edge cases. If you're coming from a background in software engineering, you're used to paying attention to and addressing any edge cases. In machine learning, you might introduce errors or strange behavior by trying to train for edge cases. For example, I didn't choose any photos where my dogs' heads aren't visible. Sure, I may want a model that can detect my dogs even from just the back half. But I left partial dog photos out of my training, and it turns out that the model is still able to detect them.
The web app still identifies ACi in an image even when her head isn't visible
Include some edge cases in your testing and prototyping to see how the model handles them. Otherwise, don't sweat the edge cases.
A little data goes a long way. Since MediaPipe Model Maker uses transfer learning, you need much less data to train than you would if you were training a model from scratch. Aim for 100 examples for each class. You might be able to train with fewer than 100 examples if there aren't many possible variations of the data. For example, my colleague trained a model to detect two different Android figurines. He didn't need too many photos because there are only so many angles at which to view the figurines. Conversely, you might need more than 100 examples if the data has many possible variations. For example, a golden retriever comes in many colors. You might need several dozen examples for each color to ensure the model can accurately identify them, resulting in well over 100 examples.
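Before training, it's worth a quick sanity check that every class actually meets that rough 100-example guideline. This is a minimal stdlib sketch, not part of Model Maker; the helper names and the toy labels (my dogs Ben and ACi) are my own:

```python
from collections import Counter

def class_counts(annotations):
    """Count training examples per class label.

    `annotations` is a list of per-image label lists, e.g. the
    <object><name> entries parsed from PASCAL VOC XML files.
    """
    counts = Counter()
    for labels in annotations:
        counts.update(set(labels))  # count each image once per class
    return counts

def underrepresented(counts, minimum=100):
    """Return classes with fewer examples than the suggested minimum."""
    return sorted(c for c, n in counts.items() if n < minimum)

# Toy dataset: 3 images, 2 classes.
data = [["ben"], ["ben", "aci"], ["aci"]]
print(class_counts(data))                 # examples per class
print(underrepresented(class_counts(data)))
```

If `underrepresented` comes back non-empty, that's a hint to gather more photos for those classes before spending time on a training run.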
So when it comes to your first ML training experience, remember to simplify, simplify, simplify.
3. Expect several training iterations
As much as I'd like to confidently say you'll get the right results from your model the first time you train, it probably won't happen. Taking your time with choosing data samples and annotation will definitely improve your success rate, but there are so many factors that can change how the model behaves. You might find that you need to start with a different model architecture to reach your desired accuracy. Or, you might try a different split of training and validation data. You might need to add more samples to your dataset. Fortunately, transfer learning with MediaPipe Model Maker generally takes several minutes, so you can turn around new iterations fairly quickly.
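Trying a different split of training and validation data is one of the cheapest iterations to run. Model Maker has its own dataset-splitting utilities, so treat this stdlib sketch as illustrative only:

```python
import random

def split_dataset(examples, validation_fraction=0.2, seed=42):
    """Shuffle and split examples into train and validation sets.

    A fixed seed keeps the split reproducible between training runs,
    so you can tell whether a change in results came from the split
    or from a change to the data itself.
    """
    shuffled = list(examples)
    random.Random(seed).shuffle(shuffled)
    n_val = int(len(shuffled) * validation_fraction)
    return shuffled[n_val:], shuffled[:n_val]

train, val = split_dataset(range(100), validation_fraction=0.2)
print(len(train), len(val))  # 80 20
```

Changing only `validation_fraction` (or only `seed`) between runs keeps each iteration a controlled experiment.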
4. Prototype outside of your app
When you finish training a model, you're probably going to be very excited and eager to add it to your app. However, I encourage you to first try out your model in MediaPipe Studio for a couple of reasons:
Any time you make a change to your app, you probably have to wait for a compile and/or build step to complete. Even with a hot reload, there can be a wait time. So if you decide you want to tweak a configuration option like the score threshold, you'll be waiting through every tweak you make, and that time can add up. It's not worth waiting for a whole app build when you're just trying to test one component. With MediaPipe Studio, you can try out options and see results with very low latency.
If you don't get the expected results, you can't confidently determine if the issue is with your model, task configuration, or app.
With MediaPipe Studio, I was able to quickly try out different score thresholds on various images to determine what threshold I should use in my app. I also eliminated my own web app as a factor in this performance.
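The score threshold trade-off is easy to reason about in isolation: raising it drops low-confidence detections at the risk of dropping real ones. A toy sketch of that behavior (the labels and scores are made up, and real MediaPipe tasks apply their configured threshold for you):

```python
def filter_detections(detections, score_threshold):
    """Keep only detections whose confidence meets the threshold,
    mirroring what a task's score threshold option does."""
    return [(label, score) for label, score in detections
            if score >= score_threshold]

detections = [("ben", 0.91), ("aci", 0.64), ("couch", 0.31)]
print(filter_detections(detections, 0.5))  # drops the spurious couch box
print(filter_detections(detections, 0.8))  # keeps only the strongest match
```

Sweeping the threshold over a handful of representative images, as MediaPipe Studio lets you do interactively, is how you find the value worth hard-coding into your app.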
5. Make incremental changes
After sourcing quality data, simplifying your use case, training, and prototyping, you might find that you need to repeat the cycle to get the right result. When that happens, choose just one part of the process to change, and make a small change. In my case, many photos of my dogs were taken on the same blue couch. If the model was picking up on the couch, which is often inside the bounding box, that could affect how it categorized images where the dogs aren't on the couch. Rather than throwing out all the couch photos, I removed just a couple and added about 10 more of each dog where they aren't on the couch. This greatly improved my results. If you try to make a big change right away, you might end up introducing new issues rather than resolving them.
If you’d like to share some learnings from training your first model, post the details on LinkedIn along with a link to this blog post, and then tag me. I can't wait to see what you learn and what you build!
Posted by Maru Ahues Bouza, Director, Android Developer Relations
Google I/O 2023 is just a week away, kicking off on Wednesday May 10 at 10AM PT with the Google Keynote and followed at 12:15PM PT by the Developer Keynote. The program schedule launched last week, allowing you to save sessions to your calendar and start previewing content.
To help you get ready for this year's Google I/O, we’re taking a look back at some of Android’s favorite moments from past Google I/Os, as well as a playlist of developer content to help you prepare. Take a look below, and start getting ready!
Modern Android Development
Helping you stay more productive and create better apps, Modern Android Development is Android’s set of tools and APIs, and they were born across many Google I/Os. Tor Norbye, Director of Engineering for Android, reflects on how Android development tools, APIs, and best practices have evolved over the years, starting in 2013 when he and the team announced Android Studio. Here are some of the talks we’re excited for in developer productivity at this year’s Google I/O:
From the launch of Android Auto and Android Wear in 2014 to last year’s preview of the Google Pixel Tablet, Google I/O has always been an important moment for seeing the new form factors that Android is extending to. Sara Hamilton, Developer Relations Engineer for Android, discusses how we are continuing to invest in multi-device experiences and making it easier for you to build for the entire Android device ecosystem. Sara shares her excitement for developers continuing to bring unique experiences to all screen sizes and types, from tablets and foldables, to watches and TVs. Some of our favorite talks at this year’s Google I/O in the multi-device world include:
From originally playing a smaller part in Google I/O keynotes in the early days to announcing 3 billion monthly active users in 2021, Dan Sandler, Software Engineer for Android, looks back at the tremendous growth of the Android platform and how it’s continuing to evolve. With a focus on helping you make quality apps, here are some of our favorite Android platform talks this year:
We can’t wait to show you all that’s new across Android in just under a week. Be sure to tune in on the Google I/O website on May 10 to catch the latest Android updates and announcements this year!
Posted by Lyanne Alfaro, DevRel Program Manager, Google Developer Studio
Developer Journey is a monthly series to spotlight diverse and global developers sharing relatable challenges, opportunities, and wins in their journey. Every month, we will spotlight developers around the world, the Google tools they leverage, and the kind of products they are building.
What does Google I/O mean to you, and what are you looking forward to most this year?
To me, Google I/O is the paradise for embracing cutting-edge technologies. I have followed the keynotes online for two years, and it is so exciting that I will join in-person this year! I can’t wait to exchange thoughts with other amazing developers and listen to the game-changing AI topics.
What's your favorite part about Google I/O?
I’m obsessed with live demos for new technologies. Daring to do a live demo shows Google developers’ strong confidence and pride in their work. It is also exciting to see what kinds of use cases are emphasized and what metrics are evaluated.
What Google tools have you used to build?
As a full-stack developer and cloud engineer, I have built progressive apps and distributed services with Chrome, Android Studio, BigQuery, Analytics, Firebase, Google Maps, YouTube, and Google Cloud Platform. Other than those, I love exploring AI and ML features with Google Colab, Cloud TPU, and TensorFlow.
Which tool has been your favorite? Why?
Chrome has been my favorite. To me, it is the best choice for web app development: great compatibility across OS platforms, feature-rich developer tools, and smooth mobile integration. ChromeDriver is a sweet bonus when accessing deployments and automating tests on a server.
Tell us about something you've built in the past using Google tools.
I collaborated with my friends to build a web app aimed at helping people understand and analyze soccer games easier and faster with pre-trained ML models. This app includes accessing YouTube video sources, detecting targets with Yolo-v3 in TensorFlow, accelerating computation with Colab GPU, and storing results in Google Cloud.
What advice would you give someone starting in their developer journey?
Actively discuss with people and listen to their ideas, especially if you are a student or a beginner. Participating in GDSC and GDG events is a great source to connect with peers and senior developers near you and across the globe. I benefit so much simply by chatting about random tech topics with others. Good communication will open your mind and guide your direction. Meeting interesting people will also make your journey as a developer much more colorful and enjoyable!
Jolina Li
Toronto, Ontario, Canada
GDSC Lead
Google Developer Student Club, University of Toronto St. George
What does Google I/O mean to you, and what are you looking forward to most this year?
It has been a dream for me since high school to attend Google I/O. In previous years, I would watch clips of the keynotes online and browse through creators’ YouTube vlogs to see all the incredible technologies at the hands-on stations. This May, I can’t believe I will be traveling to Mountain View and experiencing Google I/O 2023 for the first time live in person. For me, Google I/O is an opportunity to connect with passionate individuals in the developer community, including students, and experts from around the world. It is a full day of learning, inspiration, innovation, community, and growth. This year, I’m looking forward to hearing all the exciting keynotes in person, interacting with transformative technology, and making new connections.
What's your favorite part about Google I/O?
My favorite part about Google I/O is the technical sessions after the keynotes, where I can learn about innovative products from experts and engage in product demonstrations. I love seeing developments in machine learning, so I will definitely visit the TensorFlow station. I’m also excited to explore other Google technology stations, including Google Cloud and Google Maps Platform, and learn as much as I can.
What Google tools have you used to build?
I have used Android to build mobile apps for my software design course and a tech entrepreneurship competition. I have also used Google Colab, a cloud-based Jupyter notebook environment, for my research and deep learning engineering internships.
Which tool has been your favorite? Why?
I love using Google Colab because it’s an accessible and cost-free tool for students working on data science and machine learning projects. The environment requires no setup and offers expensive computing resources such as GPUs at no cost. It uses Python, my favorite language, and contains all the main Python libraries. The user interface features independent code segments you can run and test rather than running the entire script every time you edit code. There is also an option to add text segments between code to document various script components. Google Colab notebooks can be easily shared with anyone for collaboration and stored in Google Drive for convenient access.
Tell us about something you've built in the past using Google tools.
For my software design course project, a few teammates and I built a cooking recipe organizer app using Android Studio that allows users to discover new recipes and build their own portfolio of recipes. Users can save interesting recipes that they found, give ratings and reviews, and also upload their own recipes to the database. I designed a recipe sorting and filtering system that allows users to sort their saved recipes alphabetically, by interest keywords or rating, and filter their recipes by genre.
Android Studio allowed me to preview the mobile app development using an emulator that functions across all types of Android devices. This feature helped me to understand the app from a user’s perspective and develop the UI/UX more efficiently. We also used Google Firebase for its cloud storage, non-relational feature, and high compatibility with Android.
What advice would you give someone starting in their developer journey?
When I began attending university, I had no experience in programming and had to start my computer science career from zero. I pursued computer science, however, because I was interested in learning about AI and building technology to solve global problems such as climate change.
I believe that when you are starting your career, it’s important to have a goal about what you want to achieve. There are so many possibilities in tech, and having a goal can help you make decisions and motivate you when you’re facing challenges. It’s also important to keep an open mind about different opportunities and explore multiple areas in tech to learn more about the field and discover your passions.
Another tip is to look for opportunities and resources to help you grow as a developer. Many opportunities and resources are available for beginners, including online courses, self-guided project tutorials, and beginner-friendly workshops.
Google has amazing developer communities, including student campus clubs (GDSC), professional developer groups (GDG), Google developer expert groups (GDE), and a women in tech community (WTM). You can also create your own opportunities by teaching a hands-on workshop to enhance your technical and soft skills, starting a local developer group to gain leadership and collaboration skills, or building projects to increase your knowledge and apply what you learn.
Learn a lot, discover new opportunities, gain new skills, connect with people in tech, and keep pursuing what you love about technology!
Maria Paz Muñoz Parra
Malmö, Sweden
Google Developer Groups Organizer and Women Techmakers Ambassador
What does Google I/O mean to you, and what are you looking forward to most this year?
Google I/O is an opportunity to stay up to date in Google technologies and initiatives. We get to witness innovation, connect with other developers and generate energetic conversations about what we are passionate about.
Besides Bard, this year I have a special interest in the WebGPU API. Currently, I work as a senior front-end developer on a Knowledge Graph project. There, one of the most powerful tools for ontologists and data scientists to model and understand data are the canvases. I’m curious about how we can boost the performance when rendering these graphs on the web, using the new features of WebGPU. Google I/O will surely be an inspiration for my work.
What's your favorite part about Google I/O?
It’s the perfect excuse to meet my colleagues and watch the event together, popcorn included! In the online realm, it’s always fun to follow the discussions on social media, and Google always finds a way to surprise us and keep us engaged in our learning process. I still remember the I/O Adventure platform of 2022. It was an outstanding virtual experience, interacting with people in the community booths. Later, I also followed the recorded talks. A gamified learning experience, top to bottom!
What Google tools have you used to build?
Chrome DevTools have been my everyday tools for the past 10 years. The ones that I have used the most are the Core Web Vitals metrics, the debugging tools (extra love for the ones that debug accessibility issues), and the tools for testing CSS in the browser (i.e. the grid properties and the media query emulation features).
Since last year, I’ve been testing the Instant Loading and Seamless APIs, and they have allowed me to deliver high-quality interfaces with intuitive navigation, as we are used to having in native mobile apps.
Which tool has been your favorite? Why?
Accessibility guidelines and tools are my favorite. Lighthouse, the accessibility scanner, and Material Design. These tools help us ensure that all users, including those with disabilities, can access and use content and services published on the web. With these tools integrated, other users can start educating themselves on the power of accessibility. My interest in this space started when I noticed that my mother, who has low vision and motor impairments in her hands, couldn’t easily access her favorite music on her phone. The voice search feature on YouTube was revolutionary for her, and probably for many other elders.
Many questions popped into my mind: “Who is considered a user with a disability? How are the interfaces I create used? Am I creating unintentional barriers?”
As a web developer, tools that allow me to test, audit, understand and improve are a must.
What advice would you give someone starting in their developer journey?
Many developers who start their journey come from other areas of expertise or industries. Imagine a journalist, nurse, or primary school teacher who wants to start a developer journey. They may feel they need to throw away all the knowledge they have acquired.
On the contrary, I believe prior knowledge is key to standing out as a developer. Every person has a different combination of interests, talents, and skills. Master the basics, and shine with your own story.
From meeting talented developers to exciting keynotes, there’s so much to look forward to at Google I/O 2023. To optimize your experience, create or connect a developer profile, and start saving content to My I/O to build your personal agenda. Share your experience with us by using #GoogleIO across your social media so we can find you!
Posted by Chanel Greco, Developer Advocate Google Workspace
We recently launched the Google Workspace APIs Explorer, a new tool to help streamline developing on the Google Workspace Platform. What is this handy tool and how can you start using it?
The Google Workspace APIs Explorer is a web-based tool that lets you explore and test Google Workspace APIs visually, without having to write any code. It's a great way to get familiar with the capabilities of the many Google Workspace APIs.
How to use the Google Workspace APIs Explorer
To use this tool, simply navigate to the Google Workspace APIs Explorer page and select the API that you want to explore. The Google Workspace APIs Explorer will then display a list of all the methods available for that API. You can click on any method to see more information about it, including its parameters, responses, and examples.
To test an API method, simply enter the required parameters and click on the "Execute" button. The Google Workspace APIs Explorer will then send the request to the API and return the response. Please note, the tool acts on real data and authenticates with your Google Account, so use caution when trying methods that create, modify, or delete data.
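Under the hood, an Explorer request is an ordinary REST call: an endpoint, a versioned resource path, and your parameters URL-encoded into the query string. As a rough illustration only, here is how that request shape can be composed for the Drive API's files.list method; the helper name is my own, the query values are examples, and a real call also needs an OAuth token:

```python
from urllib.parse import urlencode

def build_request_url(base, api, version, resource_path, **params):
    """Compose a REST URL from an endpoint, a versioned resource
    path, and URL-encoded query parameters."""
    query = urlencode(sorted(params.items()))
    return f"{base}/{api}/{version}/{resource_path}?{query}"

url = build_request_url(
    "https://www.googleapis.com", "drive", "v3", "files",
    q="mimeType='application/pdf'", pageSize=10)
print(url)
```

Seeing the request laid out this way makes it easier to move from experimenting in the Explorer to making the same call from a client library or your own backend.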
How you can benefit from using the Google Workspace APIs Explorer
These are some of the benefits of using the Google Workspace APIs Explorer:
You can browse and discover the 25+ different Google Workspace APIs.
The tool can help you create code samples for your integrations or add-ons.
It can assist with troubleshooting problems with Google Workspace APIs.
It is a neat way to see the results of API requests in real time.
Getting started
You can access the Google Workspace APIs Explorer tool on the Google Workspace for Developers documentation, either through the navigation (Resources > API Explorer), or on its dedicated page. You will need a Google account to use the tool. This account can either be a Google Workspace account or the Google account you use for accessing tools like Gmail, Drive, Docs, Calendar, and more.
We also have a video showing how you can get started using the Google Workspace APIs Explorer – check it out here!
Posted by Nari Yoon, Bitnoori Keum, Hee Jung, DevRel Community Manager / Soonson Kwon, DevRel Program Manager
Let’s explore the highlights and accomplishments of the vast Google Machine Learning communities over the first quarter of 2023. We are enthusiastic about, and grateful for, all the activities of the global network of ML communities. Here are the highlights!
ML Campaigns
ML Community Sprint
ML Community Sprint is a collaborative campaign that brings ML GDEs together with Googlers to produce relevant content for the broader ML community. Throughout February and March, the MediaPipe/TF Recommendation Sprint was carried out and 5 projects were completed.
Tweet Image Maker - Learning to recommend engaging images for tweets by ML GDE Victor Dibia (United States): a slide deck and a Kaggle Notebook
Training a recommendation model with dynamic embeddings by ML GDE Thushan Ganegedara (Australia): a slide deck and a Github repository
Easy Image Segmentation in Android with MediaPipe, TFLite Interpreter or Task Library by ML GDE George Soloupis (Greece): a slide deck and a blog posting
MediaPipe Intro - Image Classification & Embedding by ML GDE Margaret Maynard-Reid (United States): a slide deck and examples on GitHub
ML Olympiad is a series of associated Kaggle Community Competitions hosted by ML GDEs, TFUGs, and third-party ML communities, supported by Google Developers. The second edition, ML Olympiad 2023, has wrapped up successfully with 17 competitions and 300+ participants addressing important issues of our time: diversity, the environment, and more. Competition highlights include Breast Cancer Diagnosis, Water Quality Prediction, Detect ChatGpt answers, and Ensure healthy lives. Thank you all for participating in ML Olympiad 2023!
Various ways of serving Stable Diffusion by ML GDE Chansung Park (Korea) and ML GDE Sayak Paul (India) shares how to deploy Stable Diffusion with TF Serving, Hugging Face Endpoint, and FastAPI. Their other project, Fine-tuning Stable Diffusion using Keras, shows how to fine-tune the image encoder of Stable Diffusion on a custom dataset consisting of image-caption pairs.
Serving TensorFlow models with TFServing by ML GDE Dimitre Oliveira (Brazil) is a tutorial explaining how to create a simple MobileNet using the Keras API and how to serve it with TF Serving.
Lighting up Images in the Deep Learning Era by ML GDE Soumik Rakshit (India), ML GDE Saurav Maheshkar (UK), ML GDE Aritra Roy Gosthipaty (India), and Samarendra Dash explores deep learning techniques for low-light image enhancement. The article also talks about a library, Restorers, providing TensorFlow and Keras implementations of SoTA image and video restoration models for tasks such as low-light enhancement, denoising, deblurring, super-resolution, etc.
AI for Art and Design by ML GDE Margaret Maynard-Reid (United States) delivered a brief overview of how AI can be used to assist and inspire artists & designers in their creative space. She also shared a few use cases of on-device ML for creating artistic Android apps.
ML Engineering (MLOps)
End-to-End Pipeline for Segmentation with TFX, Google Cloud, and Hugging Face by ML GDE Sayak Paul (India) and ML GDE Chansung Park (Korea) discussed the crucial details of building an end-to-end ML pipeline for Semantic Segmentation tasks with TFX and various Google Cloud services such as Dataflow, Vertex Pipelines, Vertex Training, and Vertex Endpoint. The pipeline uses a custom TFX component that is integrated with Hugging Face Hub - HFPusher.
Textual Inversion Pipeline for Stable Diffusion by ML GDE Chansung Park (Korea) demonstrates how to manage multiple models and their prototype applications of fine-tuned Stable Diffusion on new concepts by Textual Inversion.
Scalability of ML Applications by TFUG Bangalore focused on the challenges and solutions related to building and deploying ML applications at scale. Googler Joinal Ahmed gave a talk entitled Scaling Large Language Model training and deployments.
Responsible IA Toolkit (video) by ML GDE Lesly Zerna (Bolivia) and Google DSC UNI was a meetup to discuss ethical and sustainable approaches to AI development. Lesly talked about the ethical side of building AI products, Responsible AI practices from Google, the PAIR guidebook, and other experiences building AI.
Women in AI/ML at Google NYC by GDG NYC discussed hot topics, including LLMs and generative AI. Googler Priya Chakraborty gave a talk entitled Privacy Protections for ML Models.
Learning JAX in 2023: Part 1 / Part 2 / Livestream video by ML GDE Aritra Roy Gosthipaty (India) and ML GDE Ritwik Raha (India) covered the power tools of JAX, namely grad, jit, vmap, pmap, and also discussed the nitty-gritty of randomness in JAX.
March Machine Learning Meetup, hosted by TFUG Kolkata, delivered two sessions: 1) You don't know TensorFlow by ML GDE Sayak Paul (India) presented some under-appreciated and under-used features of TensorFlow. 2) A Guide to ML Workflows with JAX by ML GDE Aritra Roy Gosthipaty (India), ML GDE Soumik Rakshit (India), and ML GDE Ritwik Raha (India) covered how one could use JAX functional transformations in ML workflows.
Stable Diffusion Finetuning by ML GDE Pedro Gengo (Brazil) and ML GDE Piero Esposito (Brazil) is a version of Stable Diffusion 1.5 fine-tuned on more aesthetic images. They used Vertex AI with multiple GPUs for the fine-tuning. The model reached Hugging Face's top 3, and more than 150K people downloaded and tested it.
Posted by Timothy Jordan, Director, Developer Relations & Open Source
I/O is just a few days away and we couldn’t be more excited to share the latest updates across Google’s developer products, solutions, and technologies. From keynotes to technical sessions and hands-on workshops, these announcements aim to help you build smarter and ship faster.
Here are some helpful tips to maximize your experience online.
Start building your personal I/O agenda
Starting now, you can save the Google and developer keynotes to your calendar and explore the program to preview content. Here are just a few noteworthy examples of what you’ll find this year:
What's new in Android
Get the latest news in Android development: Android 14, form factors, Jetpack + Compose libraries, Android Studio, and performance.
What’s new in Web
Explore new features and APIs that became stable across browsers on the Web Platform this year.
What’s new in Generative AI
Discover a new suite of tools that make it easy for developers to leverage and build on top of Google's large language models.
What’s new in Google Cloud
Learn how Google Cloud and generative AI will help you develop faster and more efficiently.
For the best experience, create or connect a developer profile and start saving content to My I/O to build your personal agenda. With over 200 sessions and other learning material, there’s a lot to cover, so we hope this will help you get organized.
This year we’ve introduced development focus filters to help you navigate content faster across mobile, web, AI, and cloud technologies. You can also peruse content by topic, type, or experience level so you can find what you’re interested in, faster.
Connect with the community
After the keynotes, you can talk to Google experts and other developers online in I/O Adventure chat. Here you can ask questions about new releases and learn best practices from the global developer community.
If you’re craving community now, visit the Community page to meet people with similar interests in your area or find a watch party to attend.
We hope these updates are useful, and we can’t wait to connect online in May!
Posted by Ashley Francisco, Head of Startup Developer Ecosystem, North America, & Darren Mowry, Managing Director, Corporate Sales
Startups are solving the world’s most important challenges with agility, innovative technology, and determination, and Google is proud to support them.
TL;DR: Applications are now open for the inaugural North American Google for Startups Accelerator: Cloud cohort. Designed to help connect founders who are building with Cloud to the people, products, and best practices they need to grow, this 10-week virtual accelerator will help 8-12 startups prepare for the next phase of their growth journey.
Around the world, the cloud is helping businesses and governments accelerate their digital transformations, scale their operations, and innovate in new areas. At Google Cloud, we’re helping businesses solve some of their toughest challenges. For instance, we’ve partnered with innovative digital native companies like cart.com to democratize ecommerce by giving brands of all sizes the full capabilities needed to take on the world’s largest online retailers, and with dynamic startups like kimo.ai, which leverages our AI tools to transform traditional approaches to online learning.
The adoption and acceleration of Google Cloud unlocks massive potential for startups as the global cloud market is set to grow to more than $470 billion over the next five years. With the artificial intelligence/machine learning (AI/ML) landscape evolving rapidly, this moment presents an exciting and unique opportunity for startups. The Google for Startups Accelerator: Cloud program helps cloud-native startups using AI/ML to seize the opportunities ahead.
Starting today, U.S.- and Canada-based startups can apply for the Google for Startups Accelerator: Cloud program. This equity-free, 10-week virtual accelerator will offer cloud mentorship and technical project support, as well as deep dives and workshops on product design, customer acquisition and leadership development for cloud startup founders and leaders.
The Accelerator program is designed to bring the best of Google's programs, products, people and technology to startups doing impactful work in the cloud.
Here’s what our recent North American Accelerator alumni had to say:
“Thanks to truly amazing mentorship and direct access to Googlers, we have been able to reach new levels of specialized knowledge and deployment capability in our GCP architecture and artificial intelligence projects. From a technical perspective to a business growth standpoint, this is simply invaluable. What we have built in three months with Google will be a part of our upcoming next-gen product line in both Healthcare and Non-Healthcare settings. We deeply thank all Googlers for their exceptional participation in our journey." – Francois Gand, Founder and CEO, NURO
“The accelerator provided F8th Inc. with so much more than we could have ever dreamed. The meaningful mentorship relationships that have been created continue to endure, the workshops have been impactful in helping our business scale, and we have developed new business contacts both in Canada and the US. The incredible support and guidance we received has been second to none. It’s been great to have access to a multidisciplinary team and Google’s outside-the-box thinking.” – Vivene Salmon, Co-Founder, F8th Inc.
Applications are now being accepted until May 30, and the Accelerator will kick off this July. Interested startups leveraging cloud to drive growth and innovation are encouraged to apply here.
From underserved communities needing more support with kids' education to families struggling to preserve the memories of lost loved ones, our latest release of #WeArePlay stories celebrates the inspiring founders who identified problems around them and made apps or games to solve them.
Starting with Maria, AnnMaria and Dennis from Minnesota, USA - founders of 7 Generation Games. Growing up as a Latina in rural North Dakota, Maria wanted to build something inspired by her experiences and help close the education gap in underserved communities. She teamed up with her mom AnnMaria, a teacher and computer programmer, and software developer Dennis, to set up 7 Generation Games. They make educational games – in English, Spanish and indigenous languages – to improve the math skills of Hispanic and Native American children. Making Camp Ojibwe is a village-building simulation where players earn points by answering math and social studies questions. Now with multiple titles, their games are proven to improve children’s school results.
Next, David, Arman & Hayk from Armenia - founders of Zoomerang. After uploading his music online, David got limited views because his video editing wasn’t engaging. It was his passion for music that led him to start Zoomerang with co-founders Arman and Hayk. They created a platform where content creators could get editing templates for their videos, allowing thousands to grow their brand and vivify their content.
Next, Rama from Jordan - founder of Little Thinking Minds. When she and her friend and co-founder Lamia had their first sons, they struggled to find resources to teach their children Arabic. So, they drew on their background in film production and started making children’s videos in Arabic in their backyards. When they held a screening at a local cinema, over 500 parents and children came to watch, and they had to screen it multiple times. A few years later, the content is digitized in a series of apps used in schools across 10 countries. The most popular, I Read Arabic, has educational videos, books, games, and a dashboard for teachers to track students' progress.
Last but not least, Prakash from South Africa - founder of ForKeeps. When Prakash’s sister passed away, his nieces longed to hear her voice again and keep her memory alive. When his father died, he felt the same and regretted not having all his photos and messages in one place. This inspired Prakash and his co-founders to create ForKeeps: a platform for preserving a person’s legacy with photo albums, stories, and voice messages. Through the app, people can feel their loved one’s presence after they're gone. The Forever Album tool also allows users to share and celebrate special occasions in real time. Now Prakash’s goal is to help more people across different cultures around the world record memories for their loved ones.
Check out their stories now at g.co/play/weareplay and keep an eye out for more stories coming soon.
Posted by Swathi Dharshna Subbaraj, Project Coordinator, Google Dev Library
With Flutter, developers can build, test, and deploy any application from a single codebase. With high performance and code reusability, it has transformed the app development process. Flutter has become the go-to framework for developers because it streamlines development, allowing applications to be built for multiple platforms with ease and efficiency.
In this blog, we will explore 6 Flutter/Dart projects from Google Dev Library, from a smart home controller to a Tetris game. These projects will help you grow as a developer and inspire you to build your first open source project. Let's dive in!
Flutter Design Patterns by Mangirdas Kazlauskas
Design patterns are reusable solutions to common software development problems. They help you create software that is easier to maintain, extend, and refactor. Written in Dart, this repository showcases all 23 design patterns, as described in Design Patterns: Elements of Reusable Object-Oriented Software, to help you learn and apply design patterns in your own projects, improving the quality and maintainability of your code.
A mobile application (developed using Flutter and Dart) designed to control various smart home devices. The app also allows users to create custom scenes to automate device actions based on certain conditions or events.
Learn about an easy-to-use package for accessing a device's photo library, including operations like retrieving images, videos, and albums, as well as deleting, creating, and updating files in the photo library. This package is built using the Flutter plugin architecture, which enables it to interact with native platform APIs for accessing photos and videos on iOS and Android devices.
This project implements the classic Tetris game using the Flutter framework. It’s structured into several classes that handle different aspects of the game.
FlutterGen is a code generator tool that helps you automate the process of generating boilerplate code for assets and fonts, making it easier to use them in Flutter projects. It works by scanning a project directory for specified assets and font files and generates code that can be easily used within a Flutter application. Overall, FlutterGen can save you time and effort in managing assets and fonts in your Flutter projects.
This app demonstrates using the Google Maps SDK and Directions API with Flutter. It offers several location-based functions, including detecting the user's current location. It also uses Geocoding to convert addresses into coordinates and vice versa, and allows users to add markers to the map view.
Are you actively contributing to the #FlutterDev community? Become a Google Dev Library Contributor!
Google Dev Library is a platform for showcasing open-source projects featuring Google technologies. Join our global community of developers to showcase your projects. Submit your content.