Author Archives: Google Developers

Solution Challenge 2024 – Using Google Technology to Address UN Sustainable Development Goals

Posted by Rachel Francois, Global Program Manager, Google Developer Student Clubs

Google Developer Student Clubs celebrates 5 years of innovative solutions built by university students


This year marks the 5-year anniversary of the Google Developer Student Clubs Solution Challenge! For the past five years, the Solution Challenge has invited university students to use Google technologies to develop solutions for real-world problems.

Since 2019:

  • Over 110+ countries have participated
  • Over 4,000+ projects have been submitted
  • Over 1,000+ chapters have participated

The project solutions address one or more of the United Nations 17 Sustainable Development Goals, which aim to end poverty, ensure prosperity, and protect the planet by 2030. The goals were agreed upon by all 193 United Nations Member States in 2015.

If you have an idea for how you could use Android, Firebase, TensorFlow, Google Cloud, Flutter, or another Google product to promote employment for all, economic growth, and climate action, enter the 2024 GDSC Solution Challenge and share your ideas!

Solution Challenge prizes

Check out the many great prizes you can win by participating:

  • Top 100 teams receive branded swag, a certificate, and personalized mentorship from Google and experts to help further their solution ideas.
  • Final 10 teams receive a swag box, additional mentorship, and the opportunity to showcase their project solutions to Google teams and developers worldwide during the virtual 2024 Solution Challenge Demo Day, live on YouTube. Additional cash prize of $1,000 per student. Winnings for each qualifying team will not exceed $4,000.
  • Winning 3 teams receive a swag box, and each individual receives a cash prize of $3,000 and a feature on the Google Developers Blog. Winnings for each qualifying team will not exceed $12,000.

Joining the Solution Challenge

To join the Solution Challenge and get started on your project:

Google resources for Solution Challenge participants

Google supports Solution Challenge participants with resources to build strong projects, including:

  • Create a demo video and submit your project by February 22, 2024
  • Live online Q&A sessions
  • Mentorship from Googlers, Google Developer Experts, and the Google Developer Student Club community
  • Curated codelabs designed by Google for Developers
  • Access to Design Sprint guidelines developed by Google Ventures

and so much more!

Winner announcement dates

Once all projects are submitted, our panel of judges will evaluate and score each submission using specific criteria. After that, winners will be announced in three rounds:

  • Round 1 (April): Top 100 teams will be announced.
  • Round 2 (May): Final 10 teams will be announced.
  • Round 3 (June): The Winning 3 grand prize teams will be announced live on YouTube during the 2024 Solution Challenge Demo Day.

We're looking forward to seeing the solutions you create when you combine your enthusiasm for building a better world, coding skills, and help from Google technologies.

Learn more and sign up for the 2024 Solution Challenge here.

Navigating AI Safety & Compliance: A guide for CTOs

Posted by Fergus Hurley – Co-Founder & GM, Checks, and Pedro Rodriguez – Head of Engineering, Checks

The rapid advances in generative artificial intelligence (GenAI) have brought about transformative opportunities across many industries. However, these advances have raised concerns about risks, such as privacy, misuse, bias, and unfairness. Responsible development and deployment is, therefore, a must.

AI applications are becoming more sophisticated, and developers are integrating them into critical systems. Therefore, the onus is on technology leaders, particularly CTOs and Heads of Engineering and AI – those responsible for leading the adoption of AI across their products and stacks – to ensure they use AI safely, ethically, and in compliance with relevant policies, regulations, and laws.

While comprehensive AI safety regulations are nascent, CTOs cannot wait for regulatory mandates before they act. Instead, they must adopt a forward-thinking approach to AI governance, incorporating safety and compliance considerations into the entire product development cycle.

This article is the first in a series to explore these challenges. To start, this article presents four key proposals for integrating AI safety and compliance practices into the product development lifecycle:


1.     Establish a robust AI governance framework

Formulate a comprehensive AI governance framework that clearly defines the organization’s principles, policies, and procedures for developing, deploying, and operating AI systems. This framework should establish clear roles, responsibilities, accountability mechanisms, and risk assessment protocols.

Examples of emerging frameworks include the US National Institute of Standards and Technologies’ AI Risk Management Framework, the OSTP Blueprint for an AI Bill of Rights, the EU AI Act, as well as Google’s Secure AI Framework (SAIF).

As your organization adopts an AI governance framework, it is crucial to consider the implications of relying on third-party foundation models. These considerations include the data from your app that the foundation model uses and your obligations based on the foundation model provider's terms of service.


2.     Embed AI safety principles into the design phase

Incorporate AI safety principles, such as Google’s responsible AI principles, into the design process from the outset.

AI safety principles involve identifying and mitigating potential risks and challenges early in the development cycle. For example, mitigate bias in training or model inferences and ensure explainability of models behavior. Use techniques such as adversarial training – red teaming testing of LLMs using prompts that look for unsafe outputs – to help ensure that AI models operate in a fair, unbiased, and robust manner.


3.     Implement continuous monitoring and auditing

Track the performance and behavior of AI systems in real time with continuous monitoring and auditing. The goal is to identify and address potential safety issues or anomalies before they escalate into larger problems.

Look for key metrics like model accuracy, fairness, and explainability, and establish a baseline for your app and its monitoring. Beyond traditional metrics, look for unexpected changes in user behavior and AI model drift using a tool such as Vertex AI Model Monitoring. Do this using data logging, anomaly detection, and human-in-the-loop mechanisms to ensure ongoing oversight.


4.     Foster a culture of transparency and explainability

Drive AI decision-making through a culture of transparency and explainability. Encourage this culture by defining clear documentation guidelines, metrics, and roles so that all the team members developing AI systems participate in the design, training, deployment, and operations.

Also, provide clear and accessible explanations to cross-functional stakeholders about how AI systems operate, their limitations, and the available rationale behind their decisions. This information fosters trust among users, regulators, and stakeholders.


Final word

As AI's role in core and critical systems grows, proper governance is essential for its success and that of the systems and organizations using AI. The four proposals in this article should be a good start in that direction.

However, this is a broad and complex domain, which is what this series of articles is about. So, look out for deeper dives into the tools, techniques, and processes you need to safely integrate AI into your development and the apps you create.

Create smart chips for link previewing in Google Docs

Posted by Chanel Greco, Developer Advocate

Earlier this year, we announced the general availability of third-party smart chips in Google Docs. This new feature lets you add, view, and engage with critical information from third party apps directly in Google Docs. Several partners, including Asana, Atlassian, Figma, Loom, Miro, Tableau, and Whimsical, have already created smart chips so users can start embedding content from their apps directly into Docs. Sourabh Choraria, a Google Developer Expert for Google Workspace and hobby developer, published a third-party smart chip solution called “Link Previews” to the Google Workspace Marketplace. This app adds information to Google Docs from multiple commonly used SaaS tools.

In this blog post you will find out how you too can create your own smart chips for Google Docs.

Example of a smart chip that was created to preview information from an event management system
Example of a smart chip that was created to preview information from an event management system


Understanding how smart chips for third-party services work

Third-party smart chips are powered by Google Workspace Add-ons and can be published to the Google Workspace Marketplace. From there, an admin or user can install the add-on and it will appear in the sidebar on the right hand side of Google Docs.

The Google Workspace Add-on detects a service's links and prompts Google Docs users to preview them. This means that you can create smart chips for any service that has a publicly accessible URL. You can configure an add-on to preview multiple URL patterns, such as links to support cases, sales leads, employee profiles, and more. This configuration is done in the add-on’s manifest file.

{
  "timeZone": "America/Los_Angeles",
  "exceptionLogging": "STACKDRIVER",
  "runtimeVersion": "V8",
  "oauthScopes": [
    "https://www.googleapis.com/auth/workspace.linkpreview",
    "https://www.googleapis.com/auth/script.external_request"
  ],
  "addOns": {
    "common": {
      "name": "Preview Books Add-on",
      "logoUrl": "https://developers.google.com/workspace/add-ons/images/library-icon.png",
      "layoutProperties": {
        "primaryColor": "#dd4b39"
      }
    },
    "docs": {
      "linkPreviewTriggers": [
        {
          "runFunction": "bookLinkPreview",
          "patterns": [
            {
              "hostPattern": "*.google.*",
              "pathPrefix": "books"
            },
            {
              "hostPattern": "*.google.*",
              "pathPrefix": "books/edition"
            }
          ],
          "labelText": "Book",
          "logoUrl": "https://developers.google.com/workspace/add-ons/images/book-icon.png",
          "localizedLabelText": {
            "es": "Libros"
          }
        }
      ]
    }
  }
}
The manifest file contains the URL pattern for the Google Books API

The smart chip displays an icon and short title or description of the link's content. When the user hovers over the chip, they see a card interface that previews more information about the file or link. You can customize the card interface that appears when the user hovers over a smart chip. To create the card interface, you use widgets to display information about the link. You can also build actions that let users open the link or modify its contents. For a list of all the supported components for preview cards check the developer documentation.

function getBook(id) {
// Code to fetch the data from the Google Books API
}

function bookLinkPreview(event) {
 if (event.docs.matchedUrl.url) {
// Through getBook(id) the relevant data is fetched and used to build the smart chip and card

    const previewHeader = CardService.newCardHeader()
      .setSubtitle('By ' + bookAuthors)
      .setTitle(bookTitle);

    const previewPages = CardService.newDecoratedText()
      .setTopLabel('Page count')
      .setText(bookPageCount);

    const previewDescription = CardService.newDecoratedText()
      .setTopLabel('About this book')
      .setText(bookDescription).setWrapText(true);

    const previewImage = CardService.newImage()
      .setAltText('Image of book cover')
      .setImageUrl(bookImage);

    const buttonBook = CardService.newTextButton()
      .setText('View book')
      .setOpenLink(CardService.newOpenLink()
        .setUrl(event.docs.matchedUrl.url));

    const cardSectionBook = CardService.newCardSection()
      .addWidget(previewImage)
      .addWidget(previewPages)
      .addWidget(CardService.newDivider())
      .addWidget(previewDescription)
      .addWidget(buttonBook);

    return CardService.newCardBuilder()
    .setHeader(previewHeader)
    .addSection(cardSectionBook)
    .build();
  }
}
This is the Apps Script code to create a smart chip.

A smart chip hovered state.
A smart chip hovered state. The data displayed is fetched from the Google for Developers blog post URL that was pasted by the user.


For a detailed walkthrough of the code used in this post, please checkout the Preview links from Google Books with smart chips sample tutorial.



How to choose the technology for your add-on

When creating smart chips for link previewing, you can choose from two different technologies to create your add-on: Google Apps Script or alternate runtime.

Apps script is a rapid application development platform that is built into Google Workspace. This fact makes Apps Script a good choice for prototyping and validating your smart chip solution as it requires no pre-existing development environment. But Apps Script isn’t only for prototyping as some developers choose to create their Google Workspace Add-on with it and even publish it to the Google Workspace Marketplace for users to install.

If you want to create your smart chip with Apps Script you can check out the video below in which you learn how to build a smart chip for link previewing in Google Docs from A - Z. Want the code used in the video tutorial? Then have a look at the Preview links from Google Books with smart chips sample page.

If you prefer to create your Google Workspace Add-on using your own development environment, programming language, hosting, packages, etc., then alternate runtime is the right choice. You can choose from different programming languages like Node.js, Java, Python, and more. The hosting of the add-on runtime code can be on any cloud or on premise infrastructure as long as runtime code can be exposed as a public HTTP(S) endpoint. You can learn more about how to create smart chips using alternate runtimes from the developer documentation.



How to share your add-on with others

You can share your add-on with others through the Google Workspace Marketplace. Let’s say you want to make your smart chip solution available to your team. In that case you can publish the add-on to your Google Workspace organization, also known as a private app. On the other hand, if you want to share your add-on with anyone who has a Google Account, you can publish it as a public app.

To find out more about publishing to the Google Workspace Marketplace, you can watch this video that will walk you through the process.



Getting started

Learn more about creating smart chips for link previewing in the developer documentation. There you will find further information and code samples you can base your solution of. We can’t wait to see what smart chip solutions you will build.

Create smart chips for link previewing in Google Docs

Posted by Chanel Greco, Developer Advocate

Earlier this year, we announced the general availability of third-party smart chips in Google Docs. This new feature lets you add, view, and engage with critical information from third party apps directly in Google Docs. Several partners, including Asana, Atlassian, Figma, Loom, Miro, Tableau, and Whimsical, have already created smart chips so users can start embedding content from their apps directly into Docs. Sourabh Choraria, a Google Developer Expert for Google Workspace and hobby developer, published a third-party smart chip solution called “Link Previews” to the Google Workspace Marketplace. This app adds information to Google Docs from multiple commonly used SaaS tools.

In this blog post you will find out how you too can create your own smart chips for Google Docs.

Example of a smart chip that was created to preview information from an event management system
Example of a smart chip that was created to preview information from an event management system


Understanding how smart chips for third-party services work

Third-party smart chips are powered by Google Workspace Add-ons and can be published to the Google Workspace Marketplace. From there, an admin or user can install the add-on and it will appear in the sidebar on the right hand side of Google Docs.

The Google Workspace Add-on detects a service's links and prompts Google Docs users to preview them. This means that you can create smart chips for any service that has a publicly accessible URL. You can configure an add-on to preview multiple URL patterns, such as links to support cases, sales leads, employee profiles, and more. This configuration is done in the add-on’s manifest file.

{
  "timeZone": "America/Los_Angeles",
  "exceptionLogging": "STACKDRIVER",
  "runtimeVersion": "V8",
  "oauthScopes": [
    "https://www.googleapis.com/auth/workspace.linkpreview",
    "https://www.googleapis.com/auth/script.external_request"
  ],
  "addOns": {
    "common": {
      "name": "Preview Books Add-on",
      "logoUrl": "https://developers.google.com/workspace/add-ons/images/library-icon.png",
      "layoutProperties": {
        "primaryColor": "#dd4b39"
      }
    },
    "docs": {
      "linkPreviewTriggers": [
        {
          "runFunction": "bookLinkPreview",
          "patterns": [
            {
              "hostPattern": "*.google.*",
              "pathPrefix": "books"
            },
            {
              "hostPattern": "*.google.*",
              "pathPrefix": "books/edition"
            }
          ],
          "labelText": "Book",
          "logoUrl": "https://developers.google.com/workspace/add-ons/images/book-icon.png",
          "localizedLabelText": {
            "es": "Libros"
          }
        }
      ]
    }
  }
}
The manifest file contains the URL pattern for the Google Books API

The smart chip displays an icon and short title or description of the link's content. When the user hovers over the chip, they see a card interface that previews more information about the file or link. You can customize the card interface that appears when the user hovers over a smart chip. To create the card interface, you use widgets to display information about the link. You can also build actions that let users open the link or modify its contents. For a list of all the supported components for preview cards check the developer documentation.

function getBook(id) {
// Code to fetch the data from the Google Books API
}

function bookLinkPreview(event) {
 if (event.docs.matchedUrl.url) {
// Through getBook(id) the relevant data is fetched and used to build the smart chip and card

    const previewHeader = CardService.newCardHeader()
      .setSubtitle('By ' + bookAuthors)
      .setTitle(bookTitle);

    const previewPages = CardService.newDecoratedText()
      .setTopLabel('Page count')
      .setText(bookPageCount);

    const previewDescription = CardService.newDecoratedText()
      .setTopLabel('About this book')
      .setText(bookDescription).setWrapText(true);

    const previewImage = CardService.newImage()
      .setAltText('Image of book cover')
      .setImageUrl(bookImage);

    const buttonBook = CardService.newTextButton()
      .setText('View book')
      .setOpenLink(CardService.newOpenLink()
        .setUrl(event.docs.matchedUrl.url));

    const cardSectionBook = CardService.newCardSection()
      .addWidget(previewImage)
      .addWidget(previewPages)
      .addWidget(CardService.newDivider())
      .addWidget(previewDescription)
      .addWidget(buttonBook);

    return CardService.newCardBuilder()
    .setHeader(previewHeader)
    .addSection(cardSectionBook)
    .build();
  }
}
This is the Apps Script code to create a smart chip.

A smart chip hovered state.
A smart chip hovered state. The data displayed is fetched from the Google for Developers blog post URL that was pasted by the user.


For a detailed walkthrough of the code used in this post, please checkout the Preview links from Google Books with smart chips sample tutorial.



How to choose the technology for your add-on

When creating smart chips for link previewing, you can choose from two different technologies to create your add-on: Google Apps Script or alternate runtime.

Apps script is a rapid application development platform that is built into Google Workspace. This fact makes Apps Script a good choice for prototyping and validating your smart chip solution as it requires no pre-existing development environment. But Apps Script isn’t only for prototyping as some developers choose to create their Google Workspace Add-on with it and even publish it to the Google Workspace Marketplace for users to install.

If you want to create your smart chip with Apps Script you can check out the video below in which you learn how to build a smart chip for link previewing in Google Docs from A - Z. Want the code used in the video tutorial? Then have a look at the Preview links from Google Books with smart chips sample page.

If you prefer to create your Google Workspace Add-on using your own development environment, programming language, hosting, packages, etc., then alternate runtime is the right choice. You can choose from different programming languages like Node.js, Java, Python, and more. The hosting of the add-on runtime code can be on any cloud or on premise infrastructure as long as runtime code can be exposed as a public HTTP(S) endpoint. You can learn more about how to create smart chips using alternate runtimes from the developer documentation.



How to share your add-on with others

You can share your add-on with others through the Google Workspace Marketplace. Let’s say you want to make your smart chip solution available to your team. In that case you can publish the add-on to your Google Workspace organization, also known as a private app. On the other hand, if you want to share your add-on with anyone who has a Google Account, you can publish it as a public app.

To find out more about publishing to the Google Workspace Marketplace, you can watch this video that will walk you through the process.



Getting started

Learn more about creating smart chips for link previewing in the developer documentation. There you will find further information and code samples you can base your solution of. We can’t wait to see what smart chip solutions you will build.

Global developers use Google tools to build solutions in recruiting, mentorship and more

Posted by Lyanne Alfaro, DevRel Program Manager, Google Developer Studio

Developer Journey is a monthly series highlighting diverse and global developers sharing relatable challenges, opportunities, and wins in their journey. Every month, we will spotlight developers around the world, the Google tools they leverage, and the kinds of products they are building.

This month we speak with global developers across Google Developer Experts, and Women Techmakers, to learn more about their favorite Google tools, the applications they’ve built to serve diverse communities and the role of inclusive design in their process.


Miguel Ángel Durán Garcí

Headshot of Miguel Ángel Durán Garcí, smiling
Barcelona, Spain
Google Developer Expert, Web Technologies
Content Creator & Software Engineer

What Google tools have you used to build?

I've been using Firebase, Google Cloud Platform, CrUX Dashboard, and Chrome DevTools for years. As a web developer, I'm always excited about the new features that DevTools brings to us to improve our productivity and the performance of our applications.


Which tool has been your favorite to use? Why?

Lately, I've been trying Project IDX, an entirely web-based workspace for full-stack application development, and I'm really excited about the future of this project. I love the idea of being able to develop and deploy applications from the browser, without having to install anything on my computer.


Please share with us about something you’ve built in the past using Google tools.

Most recently, I've deployed AdventJS, a holiday calendar for developers. For optimizing the images, I've used Squoosh from the GoogleChromeLabs team. To ensure the website was accessible and to tweak performance, I've used Lighthouse from Chrome DevTools. Also, I used Google Bard to translate the content of the website into English and Portuguese.


What will you create with Google Bard?

I'm planning to expand a website I've created for the Spanish-speaking community to teach JavaScript from scratch. With Google Bard, I can check the content, create some code, and make it help me create challenges for the students.


What advice would you give someone starting in their developer journey?

I would tell them to be patient and to enjoy the process. It's a long journey, but it's worth it. Also, I would tell them to be curious and avoid sticking to only a few technologies. And finally, I would tell them to share their knowledge with the community, because it's the best way to learn and meet new people. You don't need to be an expert to share your knowledge; you just need to be one step ahead of the people you're teaching.


Marian Villa

Headshot of Marian Villa, smiling
Medellín, Colombia
Google Developer Expert, Web Technologies
Co-founder / Director Pionerasdev

What Google tools have you used to build?

Development and Creativity:

  • Google Chrome DevTools
  • Bard
  • TensorflowJS

Productivity and Communication:

  • Gmail
  • Google Calendar
  • Google Drive
  • Google Docs
  • Google Sheets
  • Google Slides
  • Google Meet

Marketing and Business:

  • Google Ads
  • Google Analytics
  • Google My Business
  • Google Workspace
  • Google Cloud Platform
  • Google Marketing Platform

Education and Learning:

  • Google Classroom
  • Google Forms
  • Google Sites
  • YouTube

Which tool has been your favorite to use? Why?

Choosing a favorite tool is quite a task given the unique strengths of Bard, TensorflowJS and Google Chrome DevTools, but I'd have to say that Google Chrome DevTools stands out for me. Its versatility in inspecting and debugging web pages, testing code variations, and providing insights into JavaScript behavior has been crucial in my web development endeavors. That being said, both Bard and TensorFlow.js have incredible capabilities. Bard plays a vital role in generating creative content, answering queries, and even composing code. TensorFlow.js, on the other hand, is a game-changer, enabling machine learning in JavaScript, and making it accessible for a wide range of applications. Each tool has its unique appeal, and the choice will depend on the context and specific requirements of the task at hand.


Please share with us about something you’ve built in the past using Google tools.

On our latest website, we use all the Google technologies at hand to enhance our image as an NGO. Find it here.


What will you create with Google Bard?

We are once again resuming a winning mentorship project to advance our career as developers, so Bard and Duet AI are great allies to inspect our code and once again create an MVP of this product for our community.


What advice would you give someone starting in their developer journey?

First, think about the problem you want to solve, or what you want to contribute to the world, then create and make it come true. This is easier if you rely on communities, and people who help you as mentors, sponsors and guides.


Rubens de Almeida Zimbres

Headshot of Rubens Zimbres, smiling
São Paulo - Brazil
Google Developer Expert, Machine Learning and Google Cloud
ML Engineer

What Google tools have you used to build?

I’ve been using the full stack of Google Products. I use Google Workspace daily in my life, my personal website is made on Google Sites, and Google Cloud; I started with Compute Engine and Jupyter Notebooks, customized to my needs.

As I acquired more knowledge through practical experience, Coursera and Google Cloud Skills Boost, I started building end to-end solutions using BigQuery, SQL, lots of Vertex AI (Generative AI Studio, Matching Engine, Speech-to-text, Pipelines, AutoML, Model Fine-Tuning), Cloud Run (and a little GKE - Kubernetes), Cloud Functions, Dialogflow and Document AI.

As the requirements of clients change according to the industry, like recruiting (Virtual Career Center) and contact center (Contact Center AI), I was able to test and deploy in production different Google products to solve the clients’ needs.


Which tool has been your favorite to use? Why?

Vertex AI is my favorite, as it is pure ML and Deep Learning optimized. Using AutoML with NAS (Neural Architecture Search) was a very interesting experience with awesome results. Developing Machine Learning pipelines with Kubeflow is a special pleasure, as this is going into production and the whole MLOps is involved.


Please share with us about something you’ve built in the past using Google tools.

I’ve built a recruiting solution that was implemented in six countries of Latin America, benefiting more than 365,000 people. This solution automatically analyzes resumes using OCR via Document AI.

I delivered a revenue prediction for a hotel chain using Tensorflow, where we increased the accuracy of the client’s model by 0.95%. I also built a Contact Center solution which uses Google Speech-to-Text and analytics to make management easier and also to generate strategic insights.

Lately, I was part of the team that delivered an end-to-end Virtual Career Center solution that matches job candidates to job vacancies using Vertex AI Matching Engine via text embeddings and SCANN. Both the recruiting solution and the contact center solution generated patents in Brazil, in the field of NLP (Natural Language Processing).


What will you create with Google Bard?

Google Bard is part of my daily routine. It helps me while coding, it helps me to plan trips, get to the right public transportation, visit interesting places around the world and it also helps by retrieving the Google search in an organized way, with updated content. My idea is to use Bard along with LangChain to perform optimizations in the finance industry.


What advice would you give someone starting in their developer journey?

Learn the basics first.

The temptation of learning this magnificent field as Machine Learning is gigantic, but coding is a great part of the solution. Learn to code properly, in whatever language you want. This brings efficiency and security if your solution needs to scale, decreasing infrastructure costs and improving user experience.

The same applies to Machine Learning: learn basic disciplines such as Calculus, Computer Science fundamentals and you will understand most of the content is shared today online. Only after learning ML you should dive into Deep Learning and the disciplines associated. Don’t fake it. Make it.

#WeArePlay | Meet Steven from Indonesia. More stories from around the world

Posted by Leticia Lago, Developer Marketing

As we bid farewell to 2023, we're excited to unveil our last #WeArePlay blog post of the year. From Lisbon to Dubai, let’s meet the creators behind the game-changing apps supporting communities, bringing innovation and joy to people.



We’re starting off in Indonesia, where Steven remembers his pocket money quickly running out while traveling around rural areas of Indonesia with his parents. Struck by how much more expensive food items were in the villages compared to Jakarta, he was inspired to create Super, providing more affordable goods outside the capital. The app allows shop owners to buy items stored locally and supply them to their communities at lower prices. It's helped boost the hyperlocal supply chain and raise living standards for rural populations. Steven is keen to point out that “it’s not just about entrepreneurship”, but “social impact”. He hopes to take Super even further and improve economic distribution across the whole of rural Indonesia.

ALT TEXT

Next, we’re crossing the Java Sea to Singapore, where twin brothers – and marathon runners – Jeromy and Kenny decided to turn their passion for self-care into a journaling app. On Journey, people can log their daily thoughts and work towards their mental health and self-improvement goals using prompts. With the guidance of coaches, they can practice gratitude, record their ambitions, and learn about mindfulness and building self-confidence. “People tell us it helps them find time to invest in themselves and dedicate space to self-care”, says Jeromy. In the future, the pair want to bring in additional coaches to support even more people to achieve their wellness goals.

ALT TEXT

Now we’re landing in the Middle East where former kindergarten friends Chris and Rene decided to use their experience being expats in Dubai to create a platform for connecting disparate communities across the city. On Hayi حي, locals can share information with their neighbors, find help within the community and connect with those living nearby. “Community is at the heart of everything we do and our goal is to have a positive effect”, says Chris. They’re currently working on creating groups for art and sport enthusiasts to encourage residents to bond over their interests. The pair are also dedicated to sustainability and plan on launching environmental projects, such as wide-scale city clean-ups.

ALT TEXT

And finally, we’re off to Europe where Lisbon-based university chums Rita, João and Martim saw unexpected success. Initially, the trio created a recipe-sharing platform, SaveCook. When they launched its accompaniment, Super Save, however, which compared prices of recipe ingredients across different supermarkets, it exploded in popularity. With rising inflation, people were hugely thankful to the founders “for providing a major service” at such a crucial time. Next, they’re working on a barcode scanner that tells shoppers where they can buy cheaper versions of products “to help people save as much as they can.”

Discover more founder stories from across the globe in the #WeArePlay collection.



How useful did you find this blog post?

Congratulations to the winners of Google’s Immersive Geospatial Challenge

Posted by Bradford Lee – Product Marketing Manager, Augmented Reality, and Ahsan Ashraf – Product Marketing Manager, Google Maps Platform

In September, we launched Google's Immersive Geospatial Challenge on Devpost where we invited developers and creators from all over the world to create an AR experience with Geospatial Creator or a virtual 3D immersive experience with Photorealistic 3D Tiles.

"We were impressed by the innovation and creativity of the projects submitted. Over 2,700 participants across 100+ countries joined to build something they were truly passionate about and to push the boundaries of what is possible. Congratulations to all the winners!" 

 Shahram Izadi, VP of AR at Google

We judged all submissions on five key criteria:

  • Functionality - How are the APIs used in the application?
  • Purpose - What problem is the application solving?
  • Content - How creative is the application?
  • User Experience - How easy is the application to use?
  • Technical Execution - How well are you showcasing Geospatial Creator and/or Photorealistic 3D Tiles?

Many of the entries are working prototypes, with which our judges thoroughly enjoyed experiencing and interacting. Thank you to everyone who participated in this hackathon.



From our outstanding list of submissions, here are the winners of Google’s Immersive Geospatial Challenge:


Category: Best of Entertainment and Events

Winner, AR Experience: World Ensemble

Description: World Ensemble is an audio-visual app that positions sound objects in 3D, creating an immersive audio-visual experience.


Winner, Virtual 3D Experience: Realistic Event Showcaser

Description: Realistic Event Showcaser is a fully configurable and immersive platform to customize your event experience and showcase its unique location stories and charm.


Winner, Virtual 3D Experience: navigAtoR

Description: navigAtoR is an augmented reality app that is changing the way you navigate through cities by providing a 3 dimensional map of your surroundings.



Category: Best of Commerce

Winner, AR Experience: love ya

Description: love ya showcases three user scenarios for a special time of year that connect local businesses with users.



Category: Best of Travel and Local Discovery

Winner, AR Experience: Sutro Baths AR Tour

Description: This guided tour through the Sutro Baths historical landmark using an illuminated walking path, information panels with text and images, and a 3D rendering of how the Sutro Baths swimming pool complex would appear to those attending.


Winner, Virtual 3D Experience: Hyper Immersive Panorama

Description: Hyper Immersive Panorama uses real time facial detection to allow the user to look left, right, up or down, in the virtual 3D environment.


Winner, Virtual 3D Experience: The World is Flooding!

Description: The World is Flooding! allows you to visualize a 3D, realistic flooding view of your neighborhood.


Category: Best of Productivity and Business

Winner, AR Experience: GeoViz

Description: GeoViz revolutionizes architectural design, allowing users to create, modify, and visualize architectural designs in their intended context. The platform facilitates real-time collaboration, letting multiple users contribute to designs and view them in AR on location.



Category: Best of Sustainability

Winner, AR Experience: Geospatial Solar

Description: Geospatial Solar combines the Google Geospatial API with the Google Solar API for instant analysis of a building's solar potential by simply tapping it.


Winner, Virtual 3D Experience: EarthLink - Geospatial Social Media

Description: EarthLink is the first geospatial social media platform that uses 3D photorealistic tiles to enable users to create and share immersive experiences with their friends.


Honorable Mentions

In addition, we have five projects that earned honorable mentions:

  1. Simmy
  2. FrameView
  3. City Hopper
  4. GEOMAZE - The Urban Quest
  5. Geospatial Route Check

Congratulations to the winners and thank you to all the participants! Check out all the amazing projects submitted. We can't wait to see you at the next hackathon.

It’s time for developers and enterprises to build with Gemini Pro

Posted by Jeanine Banks – VP/GM, Developer X and Developer Relations, and Burak Gokturk – VP/GM, Cloud AI and Industry Solutions

Learn more about how to integrate Gemini Pro into your app or business at ai.google.dev

This article is also published on the Keyword blog.

Last week, we announced Gemini, our largest and most capable AI model and the next step in our journey to make AI more helpful for everyone. It comes in three sizes: Ultra, Pro and Nano. We've already started rolling out Gemini in our products: Gemini Nano is in Android, starting with Pixel 8 Pro, and a specifically tuned version of Gemini Pro is in Bard.

Today, we’re making Gemini Pro available for developers and enterprises to build for your own use cases, and we’ll be further fine-tuning it in the weeks and months ahead as we listen and learn from your feedback.


Gemini Pro is available today

The first version of Gemini Pro is now accessible via the Gemini API and here’s more about it:

  • Gemini Pro outperforms other similarly-sized models on research benchmarks.
  • Today’s version comes with a 32K context window for text, and future versions will have a larger context window.
  • It’s free to use right now, within limits, and it will be competitively priced.
  • It comes with a range of features: function calling, embeddings, semantic retrieval and custom knowledge grounding, and chat functionality.
  • It supports 38 languages across 180+ countries and territories worldwide.
  • In today’s release, Gemini Pro accepts text as input and generates text as output. We’ve also made a dedicated Gemini Pro Vision multimodal endpoint available today that accepts text and imagery as input, with text output.
  • SDKs are available for Gemini Pro to help you build apps that run anywhere. Python, Android (Kotlin), Node.js, Swift and JavaScript are all supported.
A screenshot of a code snippet illustrating the SDKs supporting Gemini.
Gemini Pro has SDKs that help you build apps that run anywhere.

Google AI Studio: The fastest way to build with Gemini

Google AI Studio is a free, web-based developer tool that enables you to quickly develop prompts and then get an API key to use in your app development. You can sign into Google AI Studio with your Google account and take advantage of the free quota, which allows 60 requests per minute — 20x more than other free offerings. When you’re ready, you can simply click on “Get code” to transfer your work to your IDE of choice, or use one of the quickstart templates available in Android Studio, Colab or Project IDX. To help us improve product quality, when you use the free quota, your API and Google AI Studio input and output may be accessible to trained reviewers. This data is de-identified from your Google account and API key.

A screen recording of a developer using Google AI Studio.
Google AI Studio is a free, web-based developer tool that enables you to quickly develop prompts and then get an API key to use in your app development.

Build with Vertex AI on Google Cloud

When it's time for a fully-managed AI platform, you can easily transition from Google AI Studio to Vertex AI, which allows for customization of Gemini with full data control and benefits from additional Google Cloud features for enterprise security, safety, privacy and data governance and compliance.

With Vertex AI, you will have access to the same Gemini models, and will be able to:

  • Tune and distill Gemini with your own company’s data, and augment it with grounding to include up-to-minute information and extensions to take real-world actions.
  • Build Gemini-powered search and conversational agents in a low code / no code environment, including support for retrieval-augmented generation (RAG), blended search, embeddings, conversation playbooks and more.
  • Deploy with confidence. We never train our models on inputs or outputs from Google Cloud customers. Your data and IP are always your data and IP.

To read more about our new Vertex AI capabilities, visit the Google Cloud blog.


Gemini Pro pricing

Right now, developers have free access to Gemini Pro and Gemini Pro Vision through Google AI Studio, with up to 60 requests per minute, making it suitable for most app development needs. Vertex AI developers can try the same models, with the same rate limits, at no cost until general availability early next year, after which there will be a charge per 1,000 characters or per image across Google AI Studio and Vertex AI.

A screenshot of input and output prices for Gemini Pro.
Big impact, small price: Because of our investments in TPUs, Gemini Pro can be served more efficiently.

Looking ahead

We’re excited that Gemini is now available to developers and enterprises. As we continue to fine-tune it, your feedback will help us improve. You can learn more and start building with Gemini on ai.google.dev, or use Vertex AI’s robust capabilities on your own data with enterprise-grade controls.

Early next year, we’ll launch Gemini Ultra, our largest and most capable model for highly complex tasks, after further fine-tuning, safety testing and gathering valuable feedback from partners. We’ll also bring Gemini to more of our developer platforms like Chrome and Firebase.

We’re excited to see what you build with Gemini.

Bazel 7 Release

Posted by the Google Bazel team

Bazel 7 is now released. Bazel is Google's open source build system for fast and correct builds. It has built-in support for building both client and server software, including client applications for both Android and iOS platforms. It also provides an extensible framework that you can use to develop your own build rules. Bazel builds almost all Google products, including Google Search, GMail, and Google Docs.


What’s new in Bazel 7?

Bazel 7 is the latest major release on the long-term support (LTS) track. It includes:

Bzlmod: Bzlmod, Bazel's new modular external dependency management system, is now enabled by default (i.e. --enable_bzlmod defaults to true). If your project doesn't have a MODULE.bazel file, Bazel will create an empty one for you. The old WORKSPACE mechanism will continue to work alongside the new Bzlmod-managed system. Learn more about what’s changed since Bazel 6 and what’s coming up in Bazel 8 and 9.

Build without the Bytes (BwoB): Build without the Bytes for builds using remote execution is now enabled by default (i.e. --remote_download_outputs defaults to toplevel). Bazel will no longer try to download any intermediate outputs from the remote server, but only the outputs of requested top-level targets instead. This significantly improves remote build performance. Learn more about BwoB.

Merged analysis and execution (Skymeld): Project Skymeld aims to improve multi-target build performance by removing the boundary between the analysis and execution phases and allowing targets to be independently executed as soon as their analysis finishes.

Platform-based toolchain resolution for Android and C++: This change helps streamline the toolchain resolution API across all rulesets, obviating the need for language-specific flags. It also removes technical debt by having Android and C++ rules use the same toolchain resolution logic as other rulesets. Full details for Android developers are available in the Android Platforms announcement.


What's next?

Read the full release notes for Bazel 7, and follow along as we work together towards Bazel 8:

If you have any questions or feedback, or would like to share something you’ve built, reach out to [email protected]. We would love to hear from you!

Announcing the inaugural Google for Startups Accelerator: Women Founders program, Europe & Israel – applications now open.

Posted by Karina Govindji Senior Director – LEAD - Global Workforce Diversity, and Noa Havazelet – Head of Google's accelerator programs across Europe and Israel

Applications are also open for underrepresented founders in North America

Artificial intelligence (AI) stands at the forefront of transformative technologies, reshaping industries and redefining the way we live and work. Yet, a closer look at the AI startup ecosystem reveals a stark gender disparity. Women, despite their profound capabilities and innovative prowess, often find themselves navigating a maze of obstacles in their entrepreneurial journey. Despite investment in AI software is booming globally, the venture capital funding problem for women is even more marked. Women-founded startups accounted for only 2.1% of VC deals involving AI startups1. This is a reality that demands attention and action. Globally in 2023, all-women founding teams raised just 3% of all dollars invested in the year, with mixed gender founding teams taking 15%, leaving 82% of dollars to flow to founding teams that are all men2.

Google's accelerator programs have actively taken a leading role in championing diversity and empowering women and minority founders - having supported 1100+ startups across the globe since 2016, 36% of which are women-led startups. As such, we are pleased to announce the launch of the Google for Startups Accelerator: Women Founders program (Europe & Israel), a 12 week program for Seed to Series A AI startups based in Europe and Israel.

The Google for Startups Accelerator: Women Founders program (Europe & Israel) provides a comprehensive mix of mentorship, technical support, and workshops, establishing a robust foundation for participants. Beyond Google's expert guidance, the accelerator cultivates a collaborative network among women founders, propelling innovation within the tech startup space. By empowering women founders, the Google for Startups Accelerator: Women Founders program (Europe & Israel) proactively contributes to creating a more inclusive and equitable tech community.

Applications for the Google for Startups Accelerator: Women Founders Europe & Israel program are open until January 19th, 2024. You can learn more and apply here.

In a similar vein, in North America, two other Google for Startups Accelerator programs for underrepresented founders have opened applications for the fifth Women Founders and Black Founders programs. These 10 week equity- free programs are best suited for Seed to Series A, high potential revenue generative women-led and black-led startups with growing teams (5+ employees). Applications for both programs close on February 1st, 2024.

To further explore these opportunities and why you should apply - listen to what past participants of the North American Women Founder and Black Founder programs have to say here.