
Google Developer Groups & ecosystem partners bring Startup Success Days to 15 Indian cities

Posted by Harsh Dattani - Program Manager, Developer Ecosystem

The Indian startup ecosystem is thriving, with new startups being founded every day. The country has a large pool of talented engineers and entrepreneurs, and a growing number of investors, policy makers and new age enterprises are looking to back Indian startups.

Google Developer Groups (GDGs) in 50 key Indian cities, each with its own tech ecosystem, have seen a healthy mix of developers from the startup ecosystem participating in local meetups. As a result, GDGs have created a platform in collaboration with Google to help early-stage startups accelerate their growth. GDGs across India are increasingly playing a vital role in assisting startup founders and their teams with content, networking opportunities, hackathons, bootcamps, demo days, and more.

We are pleased to announce Startup Success Days, with the goal of strengthening how developer communities interact with startup founders, VCs, and Googlers to discuss, share, and learn about the latest technologies and trends, like Generative AI, Google Cloud, Google Maps, and Keras.

Google Developer Groups Startup Success Days, August to October 2023

Startup Success Days will be held in 15 cities across India, starting with 8 cities in August and September: Ahmedabad, Bangalore, Hyderabad, Indore, Chennai, New Delhi, Mumbai, and Pune.

The next event will be hosted at the Google office in Bangalore on August 12, 2023. The events will be free to attend and will be open to all startups, regardless of stage or industry. The events will cover technical topics, focused on Google technologies, and will provide opportunities for startups to receive mentorship from industry experts, network with other startups, and meet VCs to receive feedback on their business models.

Learn more and register for Startup Success Days on our website.

We look forward to seeing you there!

Harsh Dattani
Program Manager, Developer Ecosystem at Google

Expanding our Fully Homomorphic Encryption offering

Posted by Miguel Guevara, Product Manager, Privacy and Data Protection Office

At Google, it’s our responsibility to keep users safe online and ensure they’re able to enjoy the products and services they love while knowing their personal information is private and secure. We’re able to do more with less data through the development of our privacy-enhancing technologies (PETs) like differential privacy and federated learning.

And throughout the global tech industry, we’re excited to see that adoption of PETs is on the rise. The UK’s Information Commissioner’s Office (ICO) recently published guidance for how organizations including local governments can start using PETs to aid with data minimization and compliance with data protection laws. Consulting firm Gartner predicts that within the next two years, 60% of all large organizations will be deploying PETs in some capacity.

We’re on the cusp of mainstream adoption of PETs, which is why we also believe it’s our responsibility to share new breakthroughs and applications from our longstanding development and investment in this space. By open sourcing dozens of our PETs over the past few years, we’ve made them freely available for anyone – developers, researchers, governments, businesses, and more – to use in their own work, helping unlock the power of data sets without revealing personal information about users.

As part of this commitment, we open-sourced a first-of-its-kind Fully Homomorphic Encryption (FHE) transpiler two years ago, and have continued to remove barriers to entry along the way. FHE is a powerful technology that allows you to perform computations on encrypted data without being able to access sensitive or personal information, and we’re excited to share our latest developments that were born out of collaboration with our developer and research community to expand what can be done with FHE.
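To make the core idea concrete, here is a toy sketch in Python of the homomorphic property: computing on ciphertexts yields a ciphertext of the computed result. This is our own insecure, additive-only illustration for this post, not a real FHE scheme (real FHE supports arbitrary computation on encrypted data):

import secrets

MOD = 2**61 - 1  # arbitrary modulus for this toy scheme

def encrypt(m, key):
    # Toy "encryption": additive masking (NOT secure, illustration only).
    return (m + key) % MOD

def decrypt(c, key):
    return (c - key) % MOD

k1, k2 = secrets.randbelow(MOD), secrets.randbelow(MOD)
c1, c2 = encrypt(20, k1), encrypt(22, k2)

# A server can add the ciphertexts without ever seeing 20 or 22...
c_sum = (c1 + c2) % MOD

# ...and only the key holder can recover the result.
assert decrypt(c_sum, (k1 + k2) % MOD) == 42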

Furthering the adoption of Fully Homomorphic Encryption

Today, we are introducing additional tools to help the community apply FHE technologies to video files. This advancement is important because FHE processing of video can often be expensive and incur long run times, limiting the ability to scale FHE use to larger files and new formats.

This will encourage developers to try out more complex applications with FHE. Historically, FHE has been thought of as an intractable technology for large-scale applications. Our results processing large video files show it is possible to do FHE in previously unimaginable domains. Say you’re a developer at a company thinking of processing a large file (on the order of terabytes; it could be a video or a sequence of characters) for a given task (e.g., convolving around specific data points to apply a blur filter to a video or to detect object movement). You can now try this task using FHE.

To do so, we are expanding our FHE toolkit in three new ways to make it easier for developers to use FHE for a wider range of applications, such as private machine learning, text analysis, and video processing. As part of our toolkit, we will release a new compiler, a software crypto library, and an open-source compiler toolchain. Our goal is to provide these new tools to researchers and developers to help advance how FHE is used to protect privacy while simultaneously lowering costs.


Expanding our toolkit

We believe that, with more optimization and specialty hardware, there will be a wider range of use cases for a myriad of similar private machine learning tasks, such as privately analyzing more complex files like long videos, or processing text documents. That is why we are releasing a TensorFlow-to-FHE compiler that will allow any developer to compile their trained TensorFlow machine learning models into an FHE version of those models.

Once a model has been compiled to FHE, developers can use it to run inference on encrypted user data without having access to the content of the user inputs or the inference results. For instance, our toolchain can be used to compile a TensorFlow Lite model to FHE, producing a private inference in 16 seconds for a 3-layer neural network. This is just one way we are helping researchers analyze large datasets without revealing personal information.
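For orientation, here is a minimal sketch of the plain-TensorFlow side of that workflow: defining a small 3-layer network and exporting it to TensorFlow Lite, the format the toolchain consumes. The layer sizes are arbitrary placeholders, and the FHE compilation itself happens in the separate transpiler toolchain, which is not shown here:

import tensorflow as tf

# A small 3-layer network, analogous in size to the one cited above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")

# Export to TensorFlow Lite; the resulting file is what an FHE
# compilation toolchain would take as input.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())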

In addition, we are releasing Jaxite, a software library for cryptography that allows developers to run FHE on a variety of hardware accelerators. Jaxite is built on top of JAX, a high-performance cross-platform machine learning library, which allows Jaxite to run FHE programs on graphics processing units (GPUs) and Tensor Processing Units (TPUs). Google originally developed JAX for accelerating neural network computations, and we have discovered that it can also be used to speed up FHE computations.
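To illustrate the underlying machinery (this is plain JAX, not Jaxite’s own API), the same jax.jit tracing that accelerates neural networks also compiles the kind of bulk numerical kernels FHE relies on, such as large polynomial multiplications, into fused code for CPU, GPU, or TPU:

import jax
import jax.numpy as jnp

@jax.jit  # trace once, then run as a compiled accelerator kernel
def polymul(a, b):
    # Polynomial multiplication via convolution, a core workload in
    # lattice-based FHE schemes.
    return jnp.convolve(a, b)

a = jnp.arange(1024, dtype=jnp.int32)
b = jnp.ones(1024, dtype=jnp.int32)
print(polymul(a, b).shape)  # (2047,)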

Finally, we are announcing Homomorphic Encryption Intermediate Representation (HEIR), an open-source compiler toolchain for homomorphic encryption. HEIR is designed to enable interoperability of FHE programs across FHE schemes, compilers, and hardware accelerators. Built on top of MLIR, HEIR aims to lower the barriers to privacy engineering and research. We will be working on HEIR with a variety of industry and academic partners, and we hope it will be a hub for researchers and engineers to try new optimizations, compare benchmarks, and avoid rebuilding boilerplate. We encourage anyone interested in FHE compiler development to come to our regular meetings, which can be found on the HEIR website.

Launch diagram

Building advanced privacy technologies and sharing them with others

Organizations and governments around the world continue to explore how to use PETs to tackle societal challenges and help developers and researchers securely process and protect user data and privacy. At Google, we’re continuing to improve and apply these novel data processing techniques across many of our products, and investing in democratizing access to the PETs we’ve developed. We believe that every internet user deserves world-class privacy, and we continue to partner with others to further that goal. We’re excited for new testing and partnerships on our open source PETs and will continue investing in innovations, aiming to release more updates in the future.

These principles are the foundation for everything we make at Google and we’re proud to be an industry leader in developing and scaling new privacy-enhancing technologies (PETs) that make it possible to create helpful experiences while protecting our users’ privacy.

PETs are a key part of our Protected Computing effort at Google, which is a growing toolkit of technologies that transforms how, when and where data is processed to technically ensure its privacy and safety. And keeping users safe online shouldn’t stop with Google - it should extend to the whole of the internet. That’s why we continue to innovate privacy technologies and make them widely available to all.

Introducing Project IDX, An Experiment to Improve Full-stack, Multiplatform App Development

Posted by Bre Arder, UX Research Lead, Kirupa Chinnathambi, Product Lead, Ashwin Raghav Mohan Ganesh, Engineering Lead, Erin Kidwell, Director of Engineering, and Roman Nurik, Design Lead


These days, getting an app from zero to production – especially one that works well across mobile, web, and desktop platforms – can feel like building a Rube Goldberg machine. You’ve got to navigate an endless sea of complexity, duct-taping together a tech stack that'll help you bootstrap, compile, test, deploy, and monitor your apps.

While Google’s been working on making multiplatform app development easier for years – from Angular and Flutter to Google Cloud and Firebase – it feels like there’s even more we can do to make the entire multiplatform app development workflow faster and more frictionless. So several months ago, a few of us got together and started experimenting. And today, we’re excited to share a very early look at our experiment, which we’re calling Project IDX.

Moving illustration of Project IDX Logo

Project IDX is a browser-based development experience built on Google Cloud and powered by Codey, a foundational AI model trained on code and built on PaLM 2. It’s designed to make it easier to build, manage and deploy full-stack web and multiplatform applications, with popular frameworks and languages. Project IDX is also built on Code OSS, so it should feel familiar no matter what you’re building.

A big part of why we’re sharing Project IDX today is we’d love to hear from the broader developer community on what could help you work even faster. In the meantime, here’s a preview of what’s possible today with Project IDX.


Get to work quickly, from anywhere

At the heart of Project IDX is our conviction that you should be able to develop from anywhere, on any device, with the full fidelity of local development. Every Project IDX workspace has the full capabilities of a Linux-based VM, paired with the universal access that comes with being hosted in the cloud, in a datacenter near you.

Moving illustration of Project IDX workspace operating on a browser and generating a preview on a mobile device

Import your existing app, or start something new

Project IDX lets you import your existing projects from GitHub so you can pick up right where you left off. You can also create new projects, with pre-baked templates for popular frameworks, including Angular, Flutter, Next.js, React, Svelte, Vue, and languages such as JavaScript, Dart, and (coming soon) Python, Go, and more. We’re also actively working to add first-class support for more project types and frameworks. If you have any suggestions, we’d love your feedback on which stacks to support.

Image of logos of Project IDX supported frameworks – React, Angular, Next, Flutter, Vue, Svelte, Go, Python, GitHub

Preview your app across platforms

Creating successful apps today means optimizing your app design and behavior across platforms, and previewing your apps just as your users would see them. To make this easier, Project IDX includes a built-in web preview and, coming soon, a fully-configured Android emulator and an embedded iOS simulator, all available directly in the browser.

Moving illustration of app design and behavior optimized across multiple devices (iOS simulator, web browser, and Android emulator) with Project IDX

Help from AI

We spend a lot of time writing code, and recent advances in AI have created big opportunities to make that time more productive. With Project IDX, we’re exploring how Google’s innovations in AI — including the Codey and PaLM 2 models powering Studio Bot in Android Studio, Duet in Google Cloud and more – can help you not only write code faster, but also write higher-quality code. Currently, Project IDX has smart code completion, an assistive chatbot, and contextual code actions like “add comments” and “explain this code”. Our AI capabilities are in their very early days, and we’re working on making IDX AI even better at helping you as you work.

Moving illustration of IDX AI assisting you with smart code completion, assistive chatbot, and contextual code actions

Publish to the web with Firebase Hosting

Finally, a common pain point in getting your app into production is deploying it. We’ve made this easier by integrating Firebase Hosting, making it possible to deploy a shareable preview of your web app, or deploy to production with a fast, secure, and global hosting platform, with just a few clicks. And because Firebase Hosting supports dynamic backends, powered by Cloud Functions, this works great for full-stack frameworks like Next.js.


Let’s build Project IDX together

We shared how we think Project IDX can start to make multiplatform app development better, along with some strides we’ve started making in these areas. But we are just at the beginning of this journey to improve the end-to-end development workflow, and we can only make good on this vision with your help. So with that, we’d like to share an early version of Project IDX with you — rough edges and all — to iterate on what’s working well and what could be even better for your app team’s workflow. To join us on our journey, visit our website to sign up and be one of the first to try Project IDX.

As for what’s next, we’re continuously working on adding new capabilities and addressing your feedback. We’re already working on new collaboration features, as we know how important those are in this hybrid work world, as well as deeper framework integrations and more personalized/contextual AI. Please share your feature requests with us as well!

MakerSuite expands to 179 countries and territories, and adds helpful features for AI makers

Posted by Simon Tokumine, Director of Product Management

When we announced MakerSuite earlier this year, we were delighted to see people from all over the world sign up for the waitlist. With MakerSuite we want to help anyone become an AI maker and easily create innovative AI applications with Google’s large generative models. We’re excited to see how it’s being used.

Today, we’re expanding access to MakerSuite to cover 179 countries and territories, including anyone with a Google Workspace account. This means that more developers than ever can sign up to create AI applications with our latest language model, PaLM 2.

We’re also introducing three helpful features:

  • Automatically optimize your text prompts
    Image showing prompt suggestion in MakerSuite
    Want to write better prompts? Now you can write a text prompt and click "Prompt Suggestion" to get ideas and suggestions for better responses.
  • Enable dark mode
    Image showing light mode and dark mode UX in MakerSuite
    In MakerSuite, you can now switch from light mode to dark mode in the settings.
  • Import and export your data with Google Sheets and CSV to save time and collaborate effectively
    Image showing import data function in MakerSuite
    Import and export your data to and from Google Sheets or CSV files easily. This can save you time by eliminating the need to recreate data you have already created, and it can help you collaborate more effectively by letting you share your results easily.

Easily go from MakerSuite to code

Since the PaLM API is integrated into MakerSuite, it’s easy to quickly try different prompts from your browser, and then incorporate them into your code—no machine learning expertise required.

Moving image showing how users can copy their code with one click to integrate it into their project
Once your prompt is ready, simply copy your code in just one click and integrate it into your project
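As a hedged illustration, the copied snippet looks roughly like the following Python, using the PaLM API client library. The exact code MakerSuite generates may differ, and the API key and prompt here are placeholders:

import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder key

response = palm.generate_text(
    model="models/text-bison-001",
    prompt="Suggest three names for a hiking app.",
    temperature=0.7,
)
print(response.result)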

Get started

Sign up and learn more on our Generative AI for Developers website. Be sure to check out our quick-start guide, browse our prompt gallery, and explore sample apps for inspiration. We can't wait to see what you build with MakerSuite!

#WeArePlay | Meet Ayushi & Nikhil from India. More stories from around the world.

Posted by Leticia Lago, Developer Marketing

This month, we’re sharing new #WeArePlay stories from inspiring founders creating apps which help people improve their quality of life. From a diabetes management tracker to an upskilling platform for women, hear the stories behind some groundbreaking apps on Google Play.



Firstly, meet Nikhil and Ayushi from Bengaluru, India. During the Covid-19 lockdowns, Nikhil watched as his mother picked up new hobbies and tried making different dishes in the kitchen. Seeing his mom researching new recipes and cooking resources, it struck him that there was a lack of educational platforms in India specifically targeted at women. This gave him and his wife, Ayushi, the idea to create Alippo: an upskilling app for women that provides classes and training materials. It also has resources to help women launch and manage their own businesses using their newly acquired expertise. In the future, they want to add more learning materials, business guides and even financing options.


Image of Ed, Ken, and Erin of Health2Sync, located in Taipei City, Taiwan g.co/play/weareplay Google Play

Next up we have Ed, Ken and Erin from Taiwan. Ed comes from a family with a history of diabetes. But his grandma always stayed on top of her condition thanks to her habit of regularly noting down her blood sugar levels and sharing them with her doctor. Partnering with product manager Ken, whose mother also has diabetes, and former colleague Erin, he launched Health2Sync: a digital blood sugar tracker with a range of other features for tracking and managing diets, exercise and medication. Thanks to the app’s new AI-based food recognition feature, people can now track the contents and nutrients of their meals just by uploading a picture of their food.


Image of César and Lorenzo of WeCancer, located in Sao Paulo, Brazil g.co/play/weareplay Google Play

Now, Lorenzo and César from Brazil. Growing up, they both had personal experiences with cancer having lost their mothers to the disease. When they met some time later, via a mutual friend, they discussed their experiences, both agreeing that the hospital visits were tiring for their moms, and often unnecessary when measures could be taken to provide care at home. This inspired them to partner up and create WeCancer, a cancer treatment support platform where patients can receive support and medical care from the comfort of their own home, with monitoring and advice from doctors. In Lorenzo's own words, the app provides "qualified care outside of hospital walls to make life easier for patients”.


Image of John, Laura and Erich of Curable, located in Denver (CO), USA g.co/play/weareplay Google Play

Last but not least, Laura, Erich and John from the US. When they were colleagues, it was sharing their experiences around chronic pain that bonded them and brought them together as friends. When John began to teach the others some alternative methods he’d learnt for managing his pain, all three began to see huge improvements in their various conditions. Elated by how much these techniques and practices had helped them, they wanted to share the practices with others, inspiring them to team up to create Curable. On the app, chronic pain sufferers can follow a guided recovery program with a range of science-backed methods, including cognitive behavioral therapy and soothing meditation.


Discover more #WeArePlay stories from across the globe and stay tuned for more.




How it’s Made: TextFX is a suite of AI tools inspired by Lupe Fiasco’s lyrical and linguistic techniques

Posted by Aaron Wade, Creative Technologist

Google Lab Sessions is a series of experimental AI collaborations with innovators. In our latest Lab Session we wanted to explore specifically how AI could expand human creativity. So we turned to GRAMMY® Award-winning rapper and MIT Visiting Scholar Lupe Fiasco to build an AI experiment called TextFX.



The discovery process

We started by spending time with Lupe to observe and learn about his creative process. This process was invariably marked by a sort of linguistic “tinkering”—that is, deconstructing language and then reassembling it in novel and innovative ways. Some of Lupe’s techniques, such as simile and alliteration, draw from the canon of traditional literary devices. But many of his tactics are entirely unique. Among them was a clever way of creating phrases that sound identical to a given word but have different meanings, which he demonstrated for us using the word “expressway”:

express whey (speedy delivery of dairy byproduct)

express sway (to demonstrate influence)

ex-press way (path without news media)

These sorts of operations played a critical role in Lupe’s writing. In light of this, we began to wonder: How might we use AI to help Lupe explore creative possibilities with text and language?

When it comes to language-related applications, large language models (LLMs) are the obvious choice from an AI perspective. LLMs are a category of machine learning models that are specially designed to perform language-related tasks, and one of the things we can use them for is generating text. But the question still remained as to how LLMs would actually fit into Lupe’s lyric-writing workflow.

Some LLMs such as Google’s Bard are fine-tuned to function as conversational agents. Others such as the PaLM API’s Text Bison model lack this conversational element and instead generate text by extending or fulfilling a given input text. One of the great things about this latter type of LLM is their capacity for few-shot learning. In other words, they can recognize patterns that occur in a small set of training examples and then replicate those patterns for novel inputs.

As an initial experiment, we had Lupe provide more examples of his same-sounding phrase technique. We then used those examples to construct a prompt, which is a carefully crafted string of text that primes the LLM to behave in a certain way. Our initial prompt for the same-sounding phrase task looked like this:

Word: defeat
Same-sounding phrase: da feet (as in "the feet")

Word: surprise
Same-sounding phrase: Sir Prize (a knight whose name is Prize)

Word: expressway
Same-sounding phrase: express whey (speedy delivery of dairy byproduct)

(...additional examples...)

Word: [INPUT WORD]
Same-sounding phrase:


This prompt yielded passable outputs some of the time, but we felt that there was still room for improvement. We actually found that factors beyond just the content and quantity of examples could influence the output—for example, how the task is framed, how inputs and outputs are represented, etc. After several iterations, we finally arrived at the following:

A same-sounding phrase is a phrase that sounds like another word or phrase.


Here is a same-sounding phrase for the word "defeat":

da feet (as in "the feet")


Here is a same-sounding phrase for the word "surprise":

Sir Prize (a knight whose name is Prize)


Here is a same-sounding phrase for the word "expressway":

express whey (speedy delivery of dairy byproduct)


(...additional examples...)


Here is a same-sounding phrase for the word "[INPUT WORD]":

After successfully codifying the same-sounding phrase task into a few-shot prompt, we worked with Lupe to identify additional creative tasks that we might be able to accomplish using the same few-shot prompting strategy. In the end, we devised ten prompts, each uniquely designed to explore creative possibilities that may arise from a given word, phrase, or concept:

SIMILE - Create a simile about a thing or concept.

EXPLODE - Break a word into similar-sounding phrases.

UNEXPECT - Make a scene more unexpected and imaginative.

CHAIN - Build a chain of semantically related items.

POV - Evaluate a topic through different points of view.

ALLITERATION - Curate topic-specific words that start with a chosen letter.

ACRONYM - Create an acronym using the letters of a word.

FUSE - Find intersections between two things.

SCENE - Generate sensory details about a scene.

UNFOLD - Slot a word into other existing words or phrases.

We were able to quickly prototype each of these ideas using MakerSuite, which is a platform that lets users easily build and experiment with LLM prompts via an interactive interface.

Moving image showing a few-shot prompt in MakerSuite

How we made it: building using the PaLM API

After we finalized the few-shot prompts, we built an app to house them. We decided to call it TextFX, drawing from the idea that each tool has a different “effect” on its input text. Like a sound effect, but for text.

Moving image showing the TextFX user interface

We save our prompts as strings in the source code and send them to Google’s PaLM 2 model using the PaLM API, which serves as an entry point to Google’s large language models.

All of our prompts are designed to terminate with an incomplete input-output pair. When a user submits an input, we append that input to the prompt before sending it to the model. The model predicts the corresponding output(s) for that input, and then we parse each result from the model response and do some post-processing before finally surfacing the result in the frontend.

Diagram of information flow between TextFX and Google's PaLM 2 large language models

Users may optionally adjust the model temperature, which is a hyperparameter that roughly corresponds to the amount of creativity allowed in the model outputs.
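Here is a minimal sketch of that flow in Python with the PaLM API client. The helper name, the parsing step, and the truncated prompt are illustrative rather than TextFX’s actual source (the full prompts are in the open-sourced repo):

import google.generativeai as palm

SAME_SOUNDING_PROMPT = """\
A same-sounding phrase is a phrase that sounds like another word or phrase.

Here is a same-sounding phrase for the word "defeat":

da feet (as in "the feet")

Here is a same-sounding phrase for the word "expressway":

express whey (speedy delivery of dairy byproduct)

Here is a same-sounding phrase for the word "{word}":
"""

def run_text_effect(word: str, temperature: float = 0.7) -> list[str]:
    # Append the user's input to the few-shot prompt, then request candidates.
    completion = palm.generate_text(
        model="models/text-bison-001",
        prompt=SAME_SOUNDING_PROMPT.format(word=word),
        temperature=temperature,  # the user-adjustable "creativity" knob
        candidate_count=4,
    )
    # Light post-processing: trim whitespace and drop empty results.
    return [c["output"].strip() for c in completion.candidates
            if c["output"].strip()]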

Try it yourself

You can try TextFX for yourself at textfx.withgoogle.com.

We’ve also made all of the LLM prompts available in MakerSuite. If you have access to the public preview for the PaLM API and MakerSuite, you can create your own copies of the prompts using the links below. Otherwise, you can join the waitlist.


And in case you’d like to take a closer look at how we built TextFX, we’ve open-sourced the code here.

If you want to try building with the PaLM API and MakerSuite, join the waitlist.

A final word

TextFX is an example of how you can experiment with the PaLM API and build applications that leverage Google’s state of the art large language models. More broadly, this exploration speaks to the potential of AI to augment human creativity. TextFX targets creative writing, but what might it mean for AI to enter other creative domains as a collaborator? Creators play a crucial role in helping us imagine what these collaborations might look like. Our hope is that this Lab Session gives you a glimpse of what’s possible using the PaLM API and inspires you to use Google’s AI offerings to bring your own ideas to life, in whatever your craft may be.

If you’d like to explore more Lab Sessions like this one, head over to labs.google.com.

Indie Games Fund: Apply for support from Google Play’s $2M fund in Latin America

Posted by Daniel Trócoli, Head of Play Partnerships for Games - LATAM

In 2022, we first launched the Indie Games Fund in Latin America as part of our commitment to helping developers of all sizes grow on Google Play. Check out the 10 selected studios who received a share of the fund last year.

Today, we’re bringing back the Indie Games Fund for 2023. We will award $2 million in non-dilutive cash awards, in addition to hands-on support, to selected small games studios based in Latin America, helping them build and grow their businesses on our platform.

The program is open to indie game developers who have already launched a game - whether it’s on Google Play or another mobile platform, PC or console. Each selected recipient will get between $150,000 and $200,000 to help them take their game to the next level and build a successful business.

Check out all eligibility criteria and apply now. Applications close at 12:00pm BRT September 1, 2023. Priority will be given to applications received by 12:00pm BRT August 16, 2023.

For more updates about all our programs, resources and tools for indie game developers visit our website, and follow us on Twitter @GooglePlayBiz and Google Play business community on LinkedIn.




Champion Innovator David Cardozo, based in Victoriaville, Quebec

Posted by Max Saltonstall, Developer Relations Engineer

Google Cloud Champion Innovators are a global network of more than 500 non-Google professionals, who are technical experts in Google Cloud products and services. Each Champion specializes in one of nine different technical categories: cloud AI/ML, data analytics, hybrid multi-cloud, modern architecture, security and networking, serverless app development, storage, Workspace and databases.

In our ongoing interview series we sit down with Champion Innovators across the world to learn more about their journeys, their technology focus, and what excites them.

Today we're talking to David Cardozo, a Machine Learning Scientist, Kubeflow Community member and ML GDE.

Headshot of David Cardozo, smiling

What tech area has you most fascinated right now, and why?

I love all the creative ways people are using Machine Learning (ML) to solve problems. There are a ton of cool applications that I see through my consulting work – counting cranberries from drone footage, tallying fish in fish farms, classifying plastics for recycling – and there's great stuff going on in both the public and private sector.

I'm also digging into the Kubeflow community right now, learning from that group. It's a melting pot of languages: Go, Python, etc. By participating in the working group and its meetings, I'm learning so much more about current issues and blockers to progress, and gaining a deeper understanding of the technology itself. I love gaining that insight.

How do you like to learn new services, tools, and applications?

I read a lot: engineering blogs, books, documentation. Right now I'm learning system design from a variety of Google blogs, which helps me learn how to scale up the things I design. I'm also learning how to make ML models, and how to improve the ones I've deployed.

I'm passionate about contributing to the open source community and actively participate in various projects. Together with friends in the community, I recently developed Elegy, a high-level API for deep learning in JAX.

Writing about a topic also helps me learn. Right now, I am working on blogs focused on Kubeflow pipelines in version 2.0 and Vertex AI in Google Cloud.

When I'm diving into a brand new technology I try to join the working groups that are furthering its development, so I get an inside look at how things are moving. Those working groups, their discussions and notes, teach me a ton. I also use the Google Cloud Forum and StackOverflow communities to deepen my knowledge.

What are some exciting projects you have in flight right now?

Getting to play with Generative AI within Vertex (on Google Cloud) has been very fun. I like hearing about what the other Innovators are making; it's a very smart, creative group with cool projects. Learning more about the cutting edge of ML is very exciting.

I'm doing a bit more with Open Source in my free time, trying to understand more around Kubernetes and Kubeflow.

What engages you outside of the technology world?

I stay active: swimming, lots of soccer. I also have been learning about option trading, testing out the waters of active investing. The complexity of those economic systems stimulates my curiosity. I really want to understand how it works, and how to make it useful.

My background is in the social sciences; I'm a bit of a frustrated historian. My interest in school was history, but my family said that I shouldn't focus on social science, so I majored in Math and Physics but never finished my degree. Right now, after a few life and career pivots, I'm working on completing my Bachelor's through Coursera via the University of London, and earning a history degree requires a lot of reading. This has inspired me to make an AI project that summarizes the knowledge from very long documents, making history research more accessible by giving people a format that's easier to consume.

What brought you into the Innovators program?

I started as one of the Google Developer Experts, but I always wanted more opportunities to talk with Google engineers and get more feedback on the cloud architectures I was building, for myself or my clients. I also wanted to be more involved in the Cloud community.

When I see members of the community encountering challenges, struggling as I did, I feel the pull to help them. As a native Spanish speaker I wanted to make more content in Spanish for folks like myself. I didn't have a mentor as I was learning, and I'd like to fill that gap for others.

So I began organizing meetups in Latin America, and in Spanish speaking communities. I sought out more data scientists. And I went through Qwiklabs and Cloud Skills Boost to learn to improve my own skills.

Since joining the Innovators program, I've had the chance to play with new AI technologies, work more closely with Google experts, and receive credits for more Cloud experimentation.

What's one thing our readers should do next?

I recommend using some of the open, public teaching resources in Computer Science (CS), especially if you're like me and didn't focus on CS in school. For me, computers came very late to Colombia and I didn't have a chance to major in CS as a student, so I got into it via Math, then information security.

I also suggest taking a look at Elegy and getting involved by solving good first issues, providing feedback, and opening some pull requests :)

I've liked Stanford's course on Neural Networks (CS 231n), as well as MIT's open courseware classes and ML videos on YouTube by Joel Grus.


Each Champion Innovator is not affiliated with Google nor do they offer services on behalf of Google.

What’s new for developers building solutions on Google Workspace – mid-year recap

Posted by Chanel Greco, Developer Advocate Google Workspace

Google Workspace offers tools for productivity and collaboration for the ways we work. It also offers a rich set of APIs, SDKs, and no-code/low-code tools to create apps and workflows that integrate directly with surfaces across Google Workspace.

Leading software makers like Atlassian, Asana, LumApps and Miro are building integrations with Google Workspace apps—like Google Docs, Meet, and Chat—to make it easier than ever to access data and act right in the tools relied on by more than 3 billion users and 9 million paying customers.

At I/O’23 we had some exciting announcements for new features that give developers more options when integrating apps with Google Workspace.


Third-party smart chips in Google Docs

We announced the opening up of smart chips functionality to our partners. Smart chips allow you to tag and see critical information for linked resources, such as projects, customer records, and more. This preview information provides users with context and critical information right in the flow of their work. These capabilities are now generally available to developers to build their own smart chips.

Some of our partners have built and launched integrations using this new smart chips functionality. For example, Figma is integrated into Docs with smart chips, allowing users to tag Figma projects which allows readers to hover over a Figma link in a doc to see a preview of the design project. Atlassian is leveraging smart chips so users can seamlessly access Jira issues and Confluence pages within Google Docs.

Tableau uses smart chips to show the user the Tableau Viz's name, last updated date, and a preview image. With the Miro smart chip solution users have an easy way to get context, request access and open a Miro board from any document. The Whimsical smart chip integration allows users to see up-to-date previews of their Whimsical boards.

Moving image showing functionality of Figma smart chips in Google docs, allowing users to tag and preview projects in docs.

Google Chat REST API and Chat apps

Developers and solution builders can use the Google Chat REST API to create Chat apps and automate workflows to send alerts, create spaces, and share critical data right in the flow of the conversation. For instance, LumApps is integrating with the Chat APIs to allow users to start conversations in Chat right from within the employee experience platform.

The Chat REST API is now generally available.

Using the Chat API and the Google Workspace UI-kit, developers can build Chat apps that bring information and workflows right into the conversation. Developers can also build low code Chat apps using AppSheet.
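As a small illustration, here is a hedged Python sketch of posting an alert through the Chat REST API. It assumes the Chat app’s service account has already been added to the target space; the space ID and message are hypothetical:

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path
    scopes=["https://www.googleapis.com/auth/chat.bot"],
)
chat = build("chat", "v1", credentials=creds)

# Post an alert message into an existing Chat space (hypothetical ID).
chat.spaces().messages().create(
    parent="spaces/AAAA1234",
    body={"text": "Deploy finished: build #421 is live."},
).execute()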

Moving image showing interactive Google Meet add-ons by partner Jira

There are already Chat apps available from partners like Atlassian’s Jira, Asana, PagerDuty and Zendesk. Use Jira for Google Chat to collaborate on projects, create issues, and update tickets – all without having to switch context.

Google Workspace UI-kit

We are continuing to evolve the Workspace UI-kit to provide a more seamless experience across Google Workspace surfaces with easy to use widgets and visual optimizations.

For example, there is a new date and time picker widget for Google Chat apps and there is the new two-column layout to optimize space and organize information.

Google Meet SDKs and APIs

There are exciting new capabilities which will soon be launched in preview for Google Meet.

For example, the Google Meet Live Sharing SDK allows for the building of new shared experiences for users on Android, iOS, and web. Developers will be able to synchronize media content across participants’ devices in real time and offer shared content controls for everyone in the meeting.

The Google Meet Add-ons SDK enables developers to embed their app into Meet via an iframe, and choose between the main stage or the side panel. This integration can be published on the Google Workspace Marketplace for discoverability.

Partners such as Atlassian, Figma, Lucid Software, Miro and Polly.ai, are already building Meet add-ons, and we’re excited to see what apps and workflows developers will build into Meet’s highly-interactive surfaces.

Image of interactive Google Meet add-on by partner Miro

With the Google Meet APIs developers can add the power of Google Meet to their applications by pre-configuring and launching video calls right from their apps. Developers will also be able to pull data and artifacts such as attendance reporting, recordings, and transcripts to make them available for their users post-meeting.

Google Calendar API

The ability to programmatically read and write the working location from Calendar is now available in preview. In the second half of this year, we plan to make these two capabilities, along with the writing of sub-day working locations, generally available.

These new capabilities can be used for integrating with desk booking systems and coordinating in-office days, to mention just a few use cases. This information will help organizations adapt their setup to meet the needs of hybrid work.
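As a hedged sketch of the read side, the snippet below lists working-location events with the Calendar API’s Python client. It assumes preview access, credentials obtained elsewhere, and that the eventTypes filter and workingLocationProperties fields behave as documented at the time of writing:

from googleapiclient.discovery import build

service = build("calendar", "v3", credentials=creds)  # creds set up elsewhere

events = service.events().list(
    calendarId="primary",
    eventTypes=["workingLocation"],  # only working-location entries
    timeMin="2023-08-14T00:00:00Z",
    timeMax="2023-08-19T00:00:00Z",
    singleEvents=True,
).execute()

for event in events.get("items", []):
    props = event.get("workingLocationProperties", {})
    print(event["start"].get("date"), props.get("type"))  # e.g. "officeLocation"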

Google Workspace API Dashboard and APIs Explorer

Two new tools were released to assist developers: the Google Workspace API Dashboard and the APIs Explorer.

The API Dashboard is a unified way to access Google Workspace APIs through the Google Cloud Console—APIs for Gmail, Google Drive, Docs, Sheets, Chat, Slides, Calendar, and many more. From there, you now have a central location to manage all your Google Workspace APIs and view all of the aggregated metrics, quotas, credentials, and more for the APIs in use.

The APIs Explorer allows you to explore and test Google Workspace APIs without having to write any code. It's a great way to get familiar with the capabilities of the many Google Workspace APIs.

Apps Script

The eagerly awaited project history capability for Google Apps Script will soon be generally available. This feature allows users to view the list of versions created for the script, their content, and the differences between the selected version and the current version.

It was also announced that admins will be able to add a per-domain allowlist for URLs, enabling safer access controls and letting them control where their data can be sent externally.

The V8 runtime for Apps Script was launched back in 2020 and it enables developers to use modern JavaScript syntax and features. If you still have legacy scripts on the old Rhino runtime, now is the time to migrate them to V8.

AppSheet

We have been further improving AppSheet, our no-code solution builder, and announced multiple new features at I/O.

Later this year we will be launching Duet AI in AppSheet to make it easier than ever to create no-code apps for Google Workspace. Using a natural-language and conversational interface, users can build an app in AppSheet by simply describing their needs as a step-by-step conversation in chat.

Moving image of no-code app creation in AppSheet

The no-code Chat apps feature for AppSheet is now generally available; it can be used to quickly create Google Chat apps and publish them with one click.

AppSheet databases are also generally available. With this native database feature, you can organize data with structured columns and references directly in AppSheet.

Check out the Build a no-code app using the native AppSheet database and Add Chat to your AppSheet apps codelabs to get you started with these two new capabilities.

Google Workspace Marketplace

The Google Workspace Marketplace is where developers can distribute their Workspace integrations for users to find, install, and use. We launched the Intelligent Apps category which spotlights the AI-enabled apps developers build and helps users discover tools to work smarter and be more productive (eligibility criteria here).

Image of Intelligent Apps in Google Workspace

Start building today

If you want early access to the features in preview, sign up for the Developer Preview Program. Subscribe to the Google Workspace Developers YouTube channel for the latest news and video tutorials to kickstart your Workspace development journey.

We can’t wait to see what you will build on the Google Workspace platform.