Author Archives: Google Developers

How Machine Learning GDE Henry Ruiz is inspired by resilience in his community

Posted by Kevin Hernandez, Developer Relations Community Manager

For Hispanic Heritage Month, we are celebrating Henry Ruiz, Machine Learning GDE, and Latin American and Hispanic developer voices.

Henry Ruiz, Machine Learning GDE, originally had aspirations of becoming a soccer player in his home country of Colombia, but when his brother got injured he knew that he had to have a backup plan. With a love for video games, Henry decided to pursue an education in development and eventually discovered the world of computer science.

Today, Henry is a Computer Scientist, working as a Research Specialist (Data Scientist) at Texas A&M AgriLife Research and finishing his Ph.D. in Engineering at Texas A&M University.

Image of Henry Ruiz in the field at Texas A&M University AgriLife Research Department

Henry, who barely spoke English when he immigrated to the United States, is now preparing to defend his Ph.D., thanks in part to the support of the Hispanic community.

As a first-generation college student in the United States, Henry was looking for a community where he could feel connected. He received a lot of support from international students and mentions that he always received a warm welcome specifically from the Hispanic community. Joining different clubs on campus, Henry connected with others through food and shared experiences and they served as a support system for one another by creating study groups. Through these connections, he began to notice the impact of developers from Latin America which deeply inspired him. Henry reflects, “We are considered a minority and don’t always have the same opportunities that developed countries have. So we have to be creative and put in an extra effort. So to see these stories of minority developers making an impact on the world is very significant to me.” Henry views Hispanic Heritage Month as a celebration of what Hispanic people have accomplished and it drives him in his work.

"Hispanic Heritage Month is a celebration of the hard work, the resilience, and the work that people in the community have done,” 

- Henry Ruiz, Machine Learning GDE
Image of Henry Ruiz conducting research at Texas A&M University AgriLife Research Department

Henry has seen progress being made in recognizing Hispanic contributions in the tech industry. “Big companies have been aware of the challenges that we have as minorities and they started creating different programs to get community members more involved in tech companies,” he explains. Well-known corporations have hosted conferences for the Hispanic community, and Google in particular offers scholarships such as the Generation Google Scholarship. This makes him feel seen and gives the community visibility in the industry. When he sees Hispanics in leadership positions, it shows him what can be accomplished, which fuels his work.

Today, Henry works on generative AI projects and leverages Google technologies (Cloud, TensorFlow, Kubernetes) to tackle challenges in the agricultural industry. Specifically, he’s working on a project to detect diseases and pests in bananas. Building on the strong foundation his community gave him, Henry is now actively helping communities with his research. As advice to the Hispanic community, Henry imparts the following words of wisdom: “Although some might not have access to the same tools and technologies as others, we have to remember that we are resilient, creative, and are problem solvers. Just continue moving forward.”

You can find Henry on LinkedIn, GitHub, and via his GDE Developer Profile.


The Google Developer Experts (GDE) program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.

MediaPipe On-Device Text-to-Image Generation Solution Now Available for Android Developers

Posted by Paul Ruiz – Senior Developer Relations Engineer, and Kris Tonthat – Technical Writer

Earlier this year, we previewed on-device text-to-image generation with diffusion models for Android via MediaPipe Solutions. Today we’re happy to announce that this is available as an early, experimental solution, Image Generator, for developers to try out on Android devices, allowing you to easily generate images entirely on-device in as little as ~15 seconds on higher-end devices. We can’t wait to see what you create!

There are three primary ways that you can use the new MediaPipe Image Generator task:

  1. Text-to-image generation based on text prompts using standard diffusion models.
  2. Controllable text-to-image generation based on text prompts and conditioning images using diffusion plugins.
  3. Customized text-to-image generation based on text prompts using Low-Rank Adaptation (LoRA) weights that allow you to create images of specific concepts that you pre-define for your unique use-cases.

Models

Before we get into all of the fun and exciting parts of this new MediaPipe task, it’s important to know that our Image Generation API supports any model that exactly matches the Stable Diffusion v1.5 architecture. You can use a pretrained model or a model you’ve fine-tuned yourself by converting it to a model format supported by MediaPipe Image Generator using our conversion script.

You can also customize a foundation model via MediaPipe Diffusion LoRA fine-tuning on Vertex AI, injecting new concepts into a foundation model without having to fine-tune the whole model. You can find more information about this process in our official documentation.

If you want to try this task out today without any customization, we also provide links to a few verified working models in that same documentation.

Image Generation through Diffusion Models

The most straightforward way to try the Image Generator task is to give it a text prompt, and then receive a result image using a diffusion model.

Like MediaPipe’s other tasks, you will start by creating an options object. In this case you will only need to define the path to your foundation model files on the device. Once you have that options object, you can create the ImageGenerator.

val options = ImageGeneratorOptions.builder()
    .setImageGeneratorModelDirectory(MODEL_PATH)
    .build()

imageGenerator = ImageGenerator.createFromOptions(context, options)

After creating your new ImageGenerator, you can create a new image by passing in a prompt, the number of iterations the generator should run, and a seed value. Generation is a blocking operation, so you will want to run it on a background thread before returning your new Bitmap result object.

val result = imageGenerator.generate(prompt_string, iterations, seed)
val bitmap = BitmapExtractor.extract(result?.generatedImage())
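Since generate() blocks, a minimal sketch of calling it from a Kotlin coroutine might look like the following. It assumes kotlinx.coroutines is available; the scope choice and the showImage() callback are placeholders for your own app code, not part of the MediaPipe API.

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// Sketch: run the blocking generate() call off the main thread with a coroutine.
// The generate() and BitmapExtractor calls match the snippet above; the scope,
// dispatcher, and showImage() callback are placeholders for your own app code.
fun generateImageAsync(prompt: String, iterations: Int, seed: Int) {
    CoroutineScope(Dispatchers.Main).launch {
        val bitmap = withContext(Dispatchers.Default) {
            val result = imageGenerator.generate(prompt, iterations, seed)
            BitmapExtractor.extract(result?.generatedImage())
        }
        showImage(bitmap) // placeholder: display the Bitmap in your UI
    }
}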

In addition to this simple prompt-in/result-out flow, we also support a way for you to step through each iteration manually through the execute() function, receiving the intermediate result images back at different stages to show the generative progress. While getting intermediate results back isn’t recommended for most apps due to performance and complexity, it is a nice way to demonstrate what’s happening under the hood. This is a little more of an in-depth process, but you can find this demo, as well as the other examples shown in this post, in our official example app on GitHub.
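For a rough idea of how that iterative flow fits together, here is a sketch that steps through the iterations one at a time and extracts each intermediate Bitmap. Only execute() is mentioned above; the setInputs() setup call and its parameters are assumed names for illustration, so treat this as the shape of the approach and check the Image Generator API reference and the example app for the real methods.

// Rough sketch of stepping through generation manually (run this off the main thread).
// NOTE: setInputs() and the execute(showResult) parameter are assumed names for
// illustration only; check the ImageGenerator API reference for the exact methods.
fun generateStepByStep(prompt: String, iterations: Int, seed: Int) {
    imageGenerator.setInputs(prompt, iterations, seed) // assumed setup call
    for (step in 0 until iterations) {
        val result = imageGenerator.execute(/* showResult= */ true)
        val intermediate = BitmapExtractor.extract(result?.generatedImage())
        onIntermediateImage(step, intermediate) // placeholder: render progress in your UI
    }
}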

Moving image of an image generating in MediaPipe from the following prompt: a colorful cartoon racoon wearing a floppy wide brimmed hat holding a stick walking through the forest, animated, three-quarter view, painting

Image Generation with Plugins

While being able to create new images from only a prompt on a device is already a huge step, we’ve taken it a little further by implementing a new plugin system which enables the diffusion model to accept a condition image along with a text prompt as its inputs.

We currently support three different ways that you can provide a foundation for your generations: facial structures, edge detection, and depth awareness. The plugins give you the ability to provide an image, extract specific structures from it, and then create new images using those structures.

Moving image of an image generating in MediaPipe from a provided image of a beige toy car, plus the following prompt: cool green race car

LoRA Weights

The third major feature we’re rolling out today is the ability to customize the Image Generator task with LoRA to teach a foundation model about a new concept, such as specific objects, people, or styles presented during training. With the new LoRA weights, the Image Generator becomes a specialized generator that is able to inject specific concepts into generated images.

LoRA weights are useful for cases where you may want every image to be in the style of an oil painting, or a particular teapot to appear in any created setting. You can find more information about LoRA weights on Vertex AI in the MediaPipe Stable Diffusion LoRA model card, and create them using this notebook. Once generated, you can deploy the LoRA weights on-device using the MediaPipe Tasks Image Generator API, or for optimized server inference through Vertex AI’s one-click deployment.
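As a rough sketch of what the on-device setup might look like, the snippet below extends the earlier options example with a path to the LoRA weights file. The setLoraWeightsFilePath() builder method and the LORA_WEIGHTS_PATH constant are assumed, illustrative names rather than confirmed API, so verify them against the Image Generator documentation.

// Sketch only: loading LoRA weights alongside the foundation model.
// setLoraWeightsFilePath() is an assumed option name and LORA_WEIGHTS_PATH is a
// placeholder; verify both against the official Image Generator documentation.
val loraOptions = ImageGeneratorOptions.builder()
    .setImageGeneratorModelDirectory(MODEL_PATH)
    .setLoraWeightsFilePath(LORA_WEIGHTS_PATH)
    .build()

imageGenerator = ImageGenerator.createFromOptions(context, loraOptions)

// Prompts that mention the trained concept (for example, the teapot below) will now
// pull the customized style or object into the generated image.
val result = imageGenerator.generate(prompt_string, iterations, seed)
val bitmap = BitmapExtractor.extract(result?.generatedImage())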

In the example below, we created LoRA weights using several images of a teapot from the Dreambooth teapot training image set. Then we use the weights to generate a new image of the teapot in different settings.

A grid of four photos of teapots generated with the training prompt 'a photo of a monadikos teapot' on the left, and a moving image showing an image being generated in MediaPipe from the prompt 'a bright purple monadikos teapot sitting on top of a green table with orange teacups'
Image generation with the LoRA weights

Next Steps

This is just the beginning of what we plan to support with on-device image generation. We’re looking forward to seeing all of the great things the developer community builds, so be sure to post them on X (formerly Twitter) with the hashtag #MediaPipeImageGen and tag @GoogleDevs. You can check out the official sample on GitHub demonstrating everything you’ve just learned about, read through our official documentation for even more details, and keep an eye on the Google for Developers YouTube channel for updates and tutorials as they’re released by the MediaPipe team.


Acknowledgements

We’d like to thank all team members who contributed to this work: Lu Wang, Yi-Chun Kuo, Sebastian Schmidt, Kris Tonthat, Jiuqiang Tang, Khanh LeViet, Paul Ruiz, Qifei Wang, Yang Zhao, Yuqi Li, Lawrence Chan, Tingbo Hou, Joe Zou, Raman Sarokin, Juhyun Lee, Geng Yan, Ekaterina Ignasheva, Shanthal Vasanth, Glenn Cameron, Mark Sherwood, Andrei Kulik, Chuo-Ling Chang, and Matthias Grundmann from the Core ML team, as well as Changyu Zhu, Genquan Duan, Bo Wu, Ting Yu, and Shengyang Dai from Google Cloud.

7 dos and don’ts of using ML on the web with MediaPipe

Posted by Jen Person, Developer Relations Engineer

If you're a web developer looking to bring the power of machine learning (ML) to your web apps, then check out MediaPipe Solutions! With MediaPipe Solutions, you can deploy custom tasks to solve common ML problems in just a few lines of code. View the guides in the docs and try out the web demos on Codepen to see how simple it is to get started. While MediaPipe Solutions handles a lot of the complexity of ML on the web, there are still a few things to keep in mind that go beyond the usual JavaScript best practices. I've compiled them here in this list of seven dos and don'ts. Do read on to get some good tips!


❌ DON'T bundle your model in your app

As a web developer, you're accustomed to making your apps as lightweight as possible to ensure the best user experience. When you have larger items to load, you already know that you want to download them in a thoughtful way that allows the user to interact with the content quickly rather than having to wait for a long download. Strategies like quantization have made ML models smaller and accessible to edge devices, but they're still large enough that you don't want to bundle them in your web app. Store your models in the cloud storage solution of your choice. Then, when you initialize your task, the model and WebAssembly binary will be downloaded and initialized. After the first page load, use local storage or IndexedDB to cache the model and binary so future page loads run even faster. You can see an example of this in this touchless ATM sample app on GitHub.


✅ DO initialize your task early

Task initialization can take a bit of time depending on model size, connection speed, and device type. Therefore, it's a good idea to initialize the solution before user interaction. In the majority of the code samples on Codepen, initialization takes place on page load. Keep in mind that these samples are meant to be as simple as possible so you can understand the code and apply it to your own use case. Initializing your model on page load might not make sense for you. Just focus on finding the right place to spin up the task so that processing is hidden from the user.

After initialization, you should warm up the task by passing a placeholder image through the model. This example shows a function for running a 1x1 pixel canvas through the Pose Landmarker task:

function dummyDetection(poseLandmarker: PoseLandmarker) {
  const width = 1;
  const height = 1;
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = 'rgba(0, 0, 0, 1)';
  ctx.fillRect(0, 0, width, height);
  poseLandmarker.detect(canvas);
}


✅ DO clean up resources

One of my favorite parts of JavaScript is automatic garbage collection. In fact, I can't remember the last time memory management crossed my mind. Hopefully you've cached a little information about memory in your own memory, as you'll need just a bit of it to make the most of your MediaPipe task. MediaPipe Solutions for web uses WebAssembly (WASM) to run C++ code in-browser. You don't need to know C++, but it helps to know that C++ makes you take out your own garbage. If you don't free up unused memory, your web page will use more and more memory over time, and it can develop performance issues or even crash.

When you're done with your solution, free up resources using the .close() method.

For example, I can create a gesture recognizer using the following code:

const createGestureRecognizer = async () => {
  const vision = await FilesetResolver.forVisionTasks(
    "https://cdn.jsdelivr.net/npm/@mediapipe/[email protected]/wasm"
  );
  gestureRecognizer = await GestureRecognizer.createFromOptions(vision, {
    baseOptions: {
      modelAssetPath: "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task",
      delegate: "GPU"
    },
  });
};
createGestureRecognizer();

Once I'm done recognizing gestures, I dispose of the gesture recognizer using the close() method:

gestureRecognizer.close();

Each task has a close method, so be sure to use it where relevant! Some tasks have close() methods for the returned results, so refer to the API docs for details.


✅ DO try out tasks in MediaPipe Studio

When deciding on or customizing your solution, it's a good idea to try it out in MediaPipe Studio before writing your own code. MediaPipe Studio is a web-based application for evaluating and customizing on-device ML models and pipelines for your applications. The app lets you quickly test MediaPipe solutions in your browser with your own data, and your own customized ML models. Each solution demo also lets you experiment with model settings for the total number of results, minimum confidence threshold for reporting results, and more. You'll find this especially useful when customizing solutions so you can see how your model performs without needing to create a test web page.

Screenshot of Image Classification page in MediaPipe Studio


✅ DO test on different devices

It's always important to test your web apps on various devices and browsers to ensure they work as expected, but I think it's worth adding a reminder here to test early and often on a variety of platforms. You can use MediaPipe Studio to test devices as well so you know right away that a solution will work on your users' devices.


❌ DON'T default to the biggest model

Each task lists one or more recommended models. For example, the Object Detection task lists three different models, each with benefits and drawbacks based on speed, size, and accuracy. It can be tempting to think that the most important thing is to choose the model with the very highest accuracy, but if you do so, you will sacrifice speed and increase the size of your model. Depending on your use case, your users might benefit from a faster result rather than a more accurate one. The best way to compare model options is in MediaPipe Studio. I realize that this is starting to sound like an advertisement for MediaPipe Studio, but it really does come in handy here!

photo of a whale breaching against a background of clouds in a deep, vibrant blue sky

✅ DO reach out!

Do you have any dos or don'ts of ML on the web that you think I missed? Do you have questions about how to get started? Or do you have a cool project you want to share? Reach out to me on LinkedIn and tell me all about it!

#WeArePlay | Meet Solape and Yomi from Nigeria. More stories from around the world

Posted by Leticia Lago, Developer Marketing

We continue to be inspired by the amazing #WeArePlay stories of app and game creators on Google Play, from all corners of the Earth. This month, hear stories ranging from a game-changing financial app for women in Nigeria to an early learning platform that uses augmented reality.


First up, we’re in Nigeria where two former colleagues at an investment bank, Solape and Yomi, channeled their economic expertise into improving women’s access to finance. HerVest is an app designed exclusively for farmers and small business owners, with saving and investment tips, financial education, and credit options. Intent on improving gender equality in the financial sphere, the pair plan to reach a million women by the end of 2024 and “become the go-to financial platform for the financially underserved in Africa”.


#WeArePlay Juliana BLW Social Singapore g.co/play/weareplay Google Play

Now we’re crossing the ocean into maritime Singapore, where native Brazilian Juliana launched her baby-led weaning app, BLW Meals. When her firstborn was 6 months old, she struggled to transition her onto solid foods. Unable to find adequate resources in her mother tongue, Portuguese, she decided to make her own platform, sharing everything she’d learned. Today, she’s overjoyed by how much the app - also offered in Spanish and English - has supported other moms through their weaning journey. Soon, she’s launching a new feature for chatting directly with nutritionists, ensuring parents always have an expert on hand to guide them.

#WeArePlay Harry & Luke Visible London, United Kingdom g.co/play/weareplay Google Play

Next, we’re heading over to the UK to meet mechanical engineer Harry, who’s on a mission to revolutionize perceptions around energy-limiting health conditions. When he got sick with long Covid after a mild infection in 2020, his ability to do the wild, athletic activities he once enjoyed – like cycling across Iceland – was no longer on the cards. Disappointed by the lack of treatment options, he decided to create a health monitoring app, partnering up with friend and tech lawyer Luke to make it happen. On Visible, patients are empowered to track and monitor their symptoms and activity levels. The anonymized data is also used by medical researchers to improve understanding and treatment options, feeding into Harry’s larger goal of “working to change health policy laws to recognize these conditions”.

#WeArePlay Ilan, Nastassja & Edison Pleiq Santiago, Chile g.co/play/weareplay Google Play

Finally, we’re heading to Chile, South America, to meet brothers Ilan and Edison and their friend Nastassja. A veritable dream team, the trio began their tech careers running an augmented reality advertisement agency in their native Venezuela. But when they saw how much kids loved their commercials, they decided to instead use their AR skills to develop an education platform for children. After being offered a place on an accelerator program, they moved to Chile to launch PleIQ – an immersive, early learning app for kids aged 3-8. Next, they’re expanding across Latin America with the goal of “improving education quality to create a more equal society”.

Discover more global #WeArePlay stories and share your favorites.




Make with MakerSuite Part 2: Tuning LLMs

Posted by Pranay Bhatia – Product Manager, Google Labs

AI is changing how developers work, and it’s also making it possible for more people to build. In Part 1, we learned how MakerSuite can be used to easily prompt LLMs through plain language. Today, in Part 2, we’re introducing Tuning in MakerSuite, which will let you customize a model for your specific needs in minutes.

What is tuning?

In Part 1, we introduced a technique called few-shot prompting to improve a model’s performance by giving it a handful of examples. Tuning improves on this technique by training the model on many more examples—so many that they can’t all fit in the prompt.


Fine-tuning vs. Parameter Efficient Tuning

You may have heard about classic “fine-tuning” of models. This is where a pre-trained model is adapted to a particular task by training it on a smaller set of task-specific labeled data. But with today’s LLMs and their huge number of parameters, re-training is complex: it requires machine learning expertise, lots of data, and lots of compute.

Tuning in MakerSuite uses a technique called Parameter Efficient Tuning (PET) to produce customized, high-quality models without the additional costs and complexity of traditional fine-tuning. In addition, PET produces high-quality models with as few as a few hundred data points, reducing the burden of data collection for the developer.


Tune models in MakerSuite in minutes


1. Create a tuned model

It’s easy to tune models in MakerSuite. Simply select “Create new” and choose “Tuned model.”

Moving image of how to access 'Tuned Model' option from Create New menu in MakerSuite

2. Select data for tuning

You can tune your model from a saved data prompt or import data from Google Sheets or a CSV file. We recommend using at least 100 examples to get the best performance before you hit the Tune button.

Moving image of importing data for tuning into MakerSuite

3. View your tuned model

View your tuning progress in your library. Once the model has finished tuning, you can view the details by clicking on your model.

Moving image of viewing details of a model once it has finished tuning

4. Run your tuned model

To start using your newly tuned model, create a new text or data prompt and select your newly tuned model from the list of available models.

Image showing location of model in list of available models in MakerSuite


MakerSuite: a powerful, easy tool for tuning

Tuning in MakerSuite empowers developers to harness the full potential of models like PaLM 2 with delightful ease. Whether you've already tuned a model with the API or just started experimenting with generative AI, you’ll find that MakerSuite opens up exciting possibilities to make the model more relevant and effective for your own application in just minutes.

Build with Google AI: new video series for developers

Posted by Joe Fernandez, AI Developer Relations, and Jaimie Hwang, AI Developer Marketing

Artificial intelligence (AI) represents a new frontier for technology we are just beginning to explore. While many of you are interested in working with AI, we realize that most developers aren't ready to dive into building their own artificial intelligence models (yet). With this in mind, we've created resources to get you started building applications with this technology.

Today, we are launching a new video series called Build with Google AI. This series features practical, useful AI-powered projects that don't require deep knowledge of artificial intelligence, or huge development resources. In fact, you can get these projects working in less than a day.

From self-driving cars to medical diagnosis, AI is automating tasks, improving efficiency, and helping us make better decisions. At the center of this wave of innovation are artificial intelligence models, including large language models like Google PaLM 2 and more focused AI models for translation, object detection, and other tasks. The frontier of AI, however, is not simply building new and better AI models, but also creating high-quality experiences and helpful applications with those models.

Practical AI code projects

This series is by developers, for developers. We want to help you build with AI, and not just any code project will do. They need to be practical and extensible. We are big believers in starting small and tackling concrete problems. The open source projects featured in the series are selected so that you can get them working quickly, and then build beyond them. We want you to take these projects and make them your own. Build solutions that matter to you.

Finally, and most importantly, we want to promote the use of AI that's beneficial to users, developers, creators, and organizations. So, we are focused on solutions that follow our principles for responsible use of artificial intelligence.

For the first arc of this series, we focus on how you can leverage Google's AI language model capabilities for applications, particularly the Google PaLM API. Here's what's coming up:

  • AI Content Search with Doc Agent (10/3) We'll show you how a technical writing team at Google built an AI-powered conversation search interface for their content, and how you can take their open source project and build the same functionality for your content. 
  • AI Writing Assistant with Wordcraft (10/10) Learn how the People and AI Research team at Google built a story writing application with AI technology, and how you can extend their code to build your own custom writing app. 
  • AI Coding Assistant with Pipet Code Agent (10/17) We'll show you how the AI Developer Relations team at Google built a coding assistance agent as an extension for Visual Studio Code, and how you can take their open source project and make it work for your development workflow.

For the second arc of the series, we'll bring you a new set of projects that run artificial intelligence applications locally on devices for lower latency, higher reliability, and improved data privacy.

Insights from the development teams

As developers, we love code, and we know that understanding someone else's code project can be a daunting task. The series includes demos and tutorials on how to customize the code, and we'll talk with the people behind the code. Why did they build it? What did they learn along the way? You’ll hear insights directly from the project team, so you can take it further.

Discover AI technologies from across Google

Google provides a host of resources for developers to build solutions with artificial intelligence. Whether you are looking to develop with Google's AI language models, build new models with TensorFlow, or deploy full-stack solutions with Google Cloud Vertex AI, it's our goal to help you find the AI technology solution that works best for your development projects. To start your journey, visit Build with Google AI.

We hope you are as excited about the Build with Google AI video series as we are to share it with you. Check out Episode #1 now! Use those video comments to let us know what you think and tell us what you'd like to see in future episodes.

Keep learning! Keep building!

Improving user safety in OAuth flows through new OAuth Custom URI scheme restrictions

Posted by Vikrant Rana, Product Manager

OAuth 2.0 Custom URI schemes are known to be vulnerable to app impersonation attacks. As part of Google’s continuous commitment to user safety and finding ways to make it safer to use third-party applications that access Google user data, we will be restricting the use of custom URI scheme methods. They’ll be disallowed for new Chrome extensions and will no longer be supported for Android apps by default.

Disallowing Custom URI scheme redirect method for new Chrome Extensions

To protect users from malicious actors who might impersonate Chrome extensions and steal their credentials, we no longer allow new extensions to use OAuth custom URI scheme methods. Instead, implement OAuth using the Chrome Identity API, a more secure way to deliver the OAuth 2.0 response to your app.

What do developers need to do?

New Chrome extensions will be required to use the Chrome Identity API method for authorization. While existing OAuth client configurations are not affected by this change, we strongly encourage you to migrate them to the Chrome Identity API method. In the future, we may disallow Custom URI scheme methods and require all extensions to use the Chrome Identity API method.

Disabling Custom URI scheme redirect method for Android clients by default

By default, new Android apps will no longer be allowed to use Custom URI schemes to make authorization requests. Instead, consider using the Google Identity Services for Android SDK to deliver the OAuth 2.0 response directly to your app.

What do developers need to do?

We strongly recommend switching existing apps to use the Google Identity Services for Android SDK. If you're creating a new app and the recommended alternative doesn’t work for your needs, you can enable the Custom URI scheme method for your app in the “Advanced Settings” section of the client configuration page on the Google API Console.
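For a concrete sense of the recommended path, here is a minimal Kotlin sketch that uses the Authorization API included in Google Identity Services for Android (play-services-auth) to request an OAuth scope without a custom URI scheme. The requested scope, function name, and result handling are illustrative assumptions on our part; follow the official Android authorization documentation for your actual integration.

import android.app.Activity
import com.google.android.gms.auth.api.identity.AuthorizationRequest
import com.google.android.gms.auth.api.identity.Identity
import com.google.android.gms.common.api.Scope

// Sketch only: request authorization for a Google API scope via Google Identity
// Services instead of a custom URI scheme redirect. Verify details against the docs.
fun requestAuthorization(activity: Activity) {
    val request = AuthorizationRequest.builder()
        // Ask only for the scopes your app actually needs (the Drive file scope is just an example).
        .setRequestedScopes(listOf(Scope("https://www.googleapis.com/auth/drive.file")))
        .build()

    Identity.getAuthorizationClient(activity)
        .authorize(request)
        .addOnSuccessListener { result ->
            if (result.hasResolution()) {
                // The user still needs to grant access: launch the returned PendingIntent
                // (for example, with an ActivityResultLauncher) to show the consent screen.
            } else {
                // Access was already granted; the result carries the OAuth response for your app.
            }
        }
        .addOnFailureListener { e ->
            // Authorization failed; log or surface the error as appropriate.
        }
}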

User-facing error message

Users may see an “invalid request” error message if they try to use an app that is making unauthorized requests using the Custom URI scheme method. They can learn more about this error by clicking on the "Learn more" link in the error message.

Image of user facing error message
User-facing error example

Developer-facing error message

Developers will be able to see additional error information when testing user flows for their applications. They can get more information about the error by clicking on the “see error details” link, including its root cause and links to instructions on how to resolve the error.

Image of developer facing error message
Developer-facing error example


Celebrating 25 years of Google Search: developer trends and history

Posted by Google for Developers

This month, Google Search turns 25. A lot has changed over the last quarter of a century when it comes to the development space, but one thing has remained a constant - whether you’re stuck on a problem, reading documentation, learning about new technology, or figuring out the best tech stack for your project, Search has been a helpful tool in getting your questions answered.

What you searched for is a strong signal when it comes to developer trends across web, mobile, cloud, and AI over the years. Let’s take a look at some of the interesting things you’ve looked up* – and some funny queries too – because everyone loves a good retrospective.

*Note: Google Trends data goes as far back as 2004.


Building a better web

After the dot-com bubble popped in 2000–2001, the web continued to advance and the internet exploded. Web development responded by enabling designers to incorporate multimedia into web pages. Cascading Style Sheets (CSS) (released in 1997) and Flash video (1996-2017) changed the way web pages looked and moved, and streaming changed the way people consumed video. However, the basic interface and structure of the web page remained the same. As a variety of browsers came to market, JavaScript frameworks and libraries rose alongside them, since JavaScript can run everywhere with both CSS and HTML. All these shifts led to some fun searches.

How to center a div

You can’t think of web development without CSS. And it turns out, “how to center a div” has been searched for from the beginning - it’s also provided the internet with a wealth of memes over the years.

JavaScript libraries

JavaScript is a front-end programming language that is used to add interactivity and dynamic behavior to web pages. It is one of the most popular programming languages in the world, and it is essential for building modern web applications. But at some point, most developers have to ask themselves what kind of JavaScript they should use. Vanilla? A framework? A library?

Starting in 2007 there was an uptick of searches for jQuery, which peaked in 2013 and started to fall after that. Meanwhile, developers started to show more interest in React and Angular right around the same time as jQuery’s peak. By April of 2018 they all had a similar volume of searches, and soon after React took over, followed by Angular. Nigeria searched for React the most, while Japan preferred jQuery, and Ecuador preferred Angular. Nowadays, the choice of JavaScript framework is the subject of a lot of controversy - what's your favorite? Share your thoughts with us.

Graph showing search term volume for “React”, “jQuery”, and “Angular” from 2004-present day
Search term volume for “React”, “jQuery”, and “Angular” from 2004-present day


The rise of mobile

As the web improved, so did mobile. Phones went from cellular to smart. The app economy blossomed. Due to limited infrastructure and financial constraints, many emerging markets in Asia, Africa, and Latin America skipped the desktop era in favor of mobile for their information and entertainment. Mobile development – Android in particular – kicked into high gear in response.

Android development

In 2007, Android was released as a developer platform before any devices were on the market, along with the first Android Developer Challenge, which launched to support and recognize developers who build great applications. In 2008, the Android OS was released and open sourced, and T-Mobile’s G1 launched as the first smartphone to run Android. That same year, the Android Market was released, giving developers an easy way to distribute apps to the Android community. In 2012, the marketplace was rebranded as Google Play. All of this momentum added to the frenzy, but searches really took off starting in 2012.

Graph of search term volume for “Android development” from 2007-2012
Search term volume for “Android development” from 2007-2012

Mobilegeddon

Even web developers couldn’t escape the importance of mobile in its heyday. By 2010, “mobile-first” and “responsive design” had become best practices for the web in order to support mobile traffic. In response to the clear signal that mobile wasn’t going anywhere, in 2015 Google’s search ranking algorithm changed to favor mobile-friendly content. The update was dubbed ‘Mobilegeddon’ by Chuck Price in a post on Search Engine Watch; developers quickly searched for the term and adjusted their practices toward responsive and mobile-first design. By 2017, mobile traffic accounted for approximately half of web traffic worldwide, before permanently surpassing desktop traffic in 2020.


Moving to the cloud

Over the last 25 years, cloud development has evolved from a niche technology to a mainstream solution for organizations of all sizes. Being free from managing infrastructure and operations provides a number of advantages like cost savings, speed, and scalability. In the early days, it was mainly used for hosting static websites and applications. But as technology matured, it became increasingly popular for a wider range of applications, including IoT, big data, real-time data, and ML in addition to more modern development practices like containers, microservices, and security.

Cloud computing

As development continued to modernize, developers, IT, and operations figured out fairly quickly that managing infrastructure and servers was painful and expensive. In response, many cloud environment providers launched between 2002 and 2010, including Google Cloud Platform.

Graph of search term volume for “cloud computing” from 2004-2012
Search term volume for “cloud computing” from 2004-2012

Cloud databases

Cloud services extend to storage, databases, and so much more – a necessity as technology becomes more robust, supporting large amounts of data in real time from IoT devices or use cases like ML and large language models. While there were searches for the term “cloud database” as far back as 2004, it spiked in 2017, coinciding with Google Cloud’s Cloud Spanner. And with the latest renaissance of AI technology, it’s pretty likely that this search term will keep going up in the coming months and years.


Present day innovations

Disruptive technologies like artificial intelligence and machine learning are infused in development today. From AI-assisted coding to solving problems by leveraging big data, AI is permeating our lives. So it’s no wonder developers are searching for some key terms.

Artificial intelligence, machine learning, and more

While some applications of AI, ML, deep learning, and large language models (LLMs) are new, most of the terms aren’t. Even in 2004, AI and ML were search terms of interest. In 2015, most of these terms started to pick back up, and they continue to trend upwards, with a sharp spike in interest in 2022. That same year, ‘generative AI’ was formally introduced to the world. Python, the coding language most closely associated with AI, became the most-searched programming language in 2019, finally surpassing Java.

Graph of search term volume for “artificial intelligence”, “machine learning”, “deep learning”, and “generative AI” from 2004-present day
Search term volume for “artificial intelligence”, “machine learning”, “deep learning”, and “generative AI” from 2004-present day

Looking ahead

While some aspects of development have gotten progressively cleaner, more modern, and more lightweight, there’s now more choice and complexity when it comes to your tech stack. So it’s no wonder “why is my code not working” spiked in both the early days and today. At Google, we’ll do our best to help streamline and simplify technology to help you build smarter and ship faster with new technology like Project IDX, Android Studio Bot, and coding with Bard.

Graph of search term volume for “why is my code not working?” from 2004-present day
Search term volume for “why is my code not working?” from 2004-present day

It’s inspiring to see what you have done with the answers to your questions, whether you’re trying to solve specific problems, learning new skills or best practices, figuring out what technology you want to use, or dreaming up your next big idea. We look forward to seeing what the next 25 years bring.

Follow more developer trends and insights on Google for Developers across YouTube, LinkedIn, and Instagram.

Make with MakerSuite – Part 1: An Introduction

Posted by Ray Thai – Product Manager, Labs

We’re always on the lookout for tools and technologies that bring innovative solutions to our developer community. Generative AI refers to the ability of machine learning models, such as Large Language Models (LLMs) trained on massive amounts of data, to learn patterns and create new content such as text, images, videos, or audio. These models are still under development, but we’re already seeing how models like PaLM 2 can enhance the quality of our code to make us more productive with tools like Project IDX and Android’s Studio Bot, or help us build new innovative user experiences like Bard. It’s exciting how simple it is to interact with these powerful LLMs, so we’re kicking off a 5-part series called “Make with MakerSuite” to show you how easy it is to get started.


What is MakerSuite?

MakerSuite is a fast, easy way to start building generative AI apps. It provides an efficient UI for prompting some of Google’s latest models and easily translates prompts into production-ready code you can integrate into your applications. Today, we’ve removed the waitlist so anyone in 179 countries and territories can use MakerSuite.

The art of prompting LLMs

Interacting with LLMs is as straightforward as crafting a plain language prompt, making it accessible to everyone. Prompts can be as simple as a single input, but you have the flexibility to provide additional context or examples, effectively guiding the model to produce the most optimal response. You'll observe that you can achieve different outcomes by simply tweaking the way you phrase your prompts. To harness the power of these models safely and effectively, careful crafting and iterative refining becomes essential.

Choosing the Right Prompt Type: Text, Data, or Chat?

When it comes to using MakerSuite, there are three prompt types to help you achieve your goals.

1. Text Prompts: Unleash Your Creativity

Text prompts in MakerSuite provide a flexible and freeform experience that allows you to express yourself creatively through your prompts. Whether you're a beginner or an experienced user, text prompts offer a simple way to interact with the model.

image showing user generating ideas in MakerSuite
Generating ideas for a dinner party using a text prompt in MakerSuite

2. Data Prompts: Structured Few-Shot Prompts

Data prompts are the go-to choice when you have examples to help you specify precisely what you want from the model. They are perfect for applications that require a consistent input and output format such as data generation, translation, and more.

image showing user creating a reverse dictionary in MakerSuite
A reverse dictionary using a data prompt in MakerSuite


3. Chat Prompts: Building Conversational Experiences

If your goal is to create interactive chatbots or to simulate conversations, chat prompts are the solution! These prompts enable you to build engaging and interactive conversational experiences.

Image showing user chatting with a snowman in MakerSuite
Chatting with a snowman using a chat prompt in MakerSuite

No matter which prompt type you choose, you’ll find how easy it is to use MakerSuite to prompt some of the latest models from Google to build exciting, new user experiences.


We can’t wait to see what you build

AI is fundamentally reshaping the landscape of developer work and creativity, and we’re committed to empowering our developer community with access to cutting-edge models. We believe an open and collaborative developer community fuels progress and we're thrilled to see companies like LlamaIndex and Chroma harnessing MakerSuite as building blocks for their own innovations.

You can sign up to get started with MakerSuite in 179 countries and territories. You’ll find sample prompts for inspiration or just start prompting to see what the model generates. Once you’re happy with your configuration, easily export to code from MakerSuite and start integrating it into your applications, products, and services. If you prefer to prompt our models directly with the API, sign up and grab your API key from MakerSuite to start!

Announcing the Inaugural Google for Startups Accelerator: AI First cohort

Posted by Yariv Adan, Director of Cloud Conversational AI, and Pati Jurek, Google for Startups Accelerator Regional Lead

This article is also shared on Google Cloud Blog

Today’s startups are addressing the world's most pressing issues, and artificial intelligence (AI) is one of their most powerful tools. To empower startups to scale their business towards success in the rapidly evolving AI landscape, Google for Startups Accelerator: AI First offers a 10-week, equity-free program for AI-first startups in partnership with Google Cloud. Designed for seed to series A startups based in Europe and Israel, the program helps them grow and build responsibly with AI and machine learning (ML) from the ground up, with access to experts from Google Cloud and Google DeepMind, a mix of in-person and virtual activities, 1:1 mentoring, and group learning sessions.

In addition, the program features deep dives and workshops focused on product design, business growth, and leadership development. Startups that are selected for the cohort also benefit from dedicated Google AI technical expertise and receive credits via the Google for Startups Cloud Program.

Out of hundreds of impressive applications, today we welcome the inaugural cohort of the Google for Startups Accelerator: AI First. The program includes 13 groundbreaking startups from eight different countries, all focused on different verticals and with a diverse array of founder and executive backgrounds. All participants are leveraging AI and ML technologies to solve significant problems and have the potential to transform their respective industries.


Congratulations to the cohort!

We are thrilled to present the inaugural Google for Startups Accelerator: AI First cohort:

  • Annea.Ai (Germany) utilizes AI and Digital Twin technology to forecast and prevent possible breakdowns in renewable energy assets, such as wind turbines.
  • Checktur.io (Germany) empowers businesses to manage their commercial vehicle fleets efficiently via an end-to-end fleet asset management ecosystem while using AI models and data-driven insights.
  • Exactly.ai (UK) lets artists create images in their own unique style with a simple written description.
  • Neurons (Denmark) has developed a precise AI model that can measure human subconscious signals to predict marketing responses.
  • PACTA (Germany) provides AI-driven contract lifecycle management with an intelligent no-code workflow on one central legal platform.
  • Quantic Brains (Spain) empowers users to generate movies and video games using AI.
  • Sarus (France) builds a privacy layer for Analytics & AI and allows data practitioners to query sensitive data without having direct access to it.
  • Releva (Bulgaria) provides an all-in-one AI automation solution for eCommerce marketing.
  • Semantic Hub (Switzerland) uses AI leveraging multilingual Natural Language Understanding to help global biopharmaceutical companies understand the patient experience through first-hand testimonies on social media.
  • Vazy Data (France) allows anyone to analyze data without technical knowledge by using AI.
  • Visionary.AI (Israel) leverages cutting-edge AI to improve real-time video quality in challenging visual conditions like extreme low-light.
  • ZENPULSAR (UK) provides social media analytics from over 10 social media platforms to financial institutions and corporations to facilitate investment and business decisions.
  • Zaya AI (Romania) uses machine learning to better understand and diagnose diseases, assisting healthcare professionals to make timely and informed medical decisions.
Grid image of logos and executives of all startups listed in the inaugural Google for Startups Accelerator

To learn more about the AI-first program, and to signal your interest in nominating your startup for future cohorts, visit the program page here.