Tag Archives: Announcements

How We Made SPACE INVADERS: World Defense, an AR game powered by ARCore

Posted by Dereck Bridie, Developer Relations Engineer, ARCore and Bradford Lee, Product Marketing Manager, Augmented Reality

To celebrate the 45th anniversary of “SPACE INVADERS,” we collaborated with TAITO, the Japanese developer of the original arcade game, and UNIT9 to launch “SPACE INVADERS: World Defense,” an immersive game that takes advantage of the most advanced location-based AR technology. Players around the world can go outside to explore their local neighborhoods, defend the Earth from virtual Space Invaders that spawn from nearby structures, and score points by taking them down – all with augmented reality.

The game is powered by our latest ARCore technology - Geospatial API, Streetscape Geometry API, and Geospatial Creator. We’re excited to show you behind the scenes of how the game was developed and how we used our newest features and tools to design the first-of-its-kind procedural, global AR gameplay.

Geospatial API: Turn the world into a playground

Geospatial API enables you to attach content remotely to any area mapped by Google Street View and create richer and more robust immersive experiences linked to real-world locations on a global scale. SPACE INVADERS: World Defense is available in over 100 countries in areas with high Visual Positioning Service (VPS) coverage in Street View, adapting the gameplay to busy urban environments as well as smaller towns and villages.

For players who live in areas without VPS coverage, we have recently updated the game to include our new mode called Indoor Mode, which allows you to defend the Earth from Space Invaders in any setting or location - indoors or outdoors.

Indoor Mode
The new Indoor Mode in Space Invaders brings the immersive gameplay to any indoor building setting

Creating the initial user flow

ARCore Geospatial API uses camera images from the user’s device to scan for feature points and compares those to images from Google Street View in order to precisely position the device in real-world space.

Geospatial API
Geospatial API is based on VPS with tens of billions of images in Street View to enable developers to build world-anchored experiences remotely in over 100 countries

This requires the user to hold up their phone and pan around the area so that enough data is collected to accurately position them. To do this, we employed a clever technique: we get users to scan the area by having them track the spaceship in the camera’s field of view.

Start of Game spaceship
To get started, follow the spaceship to scan your local surroundings

Using this user flow, we continually check whether the Geospatial API has gathered enough data for a high quality experience:

if (earthManager.EarthTrackingState == TrackingState.Tracking)
{
    var yawAcc = earthManager.CameraGeospatialPose.OrientationYawAccuracy;
    var horiAcc = earthManager.CameraGeospatialPose.HorizontalAccuracy;
    // Thresholds: yaw within 5 degrees, horizontal position within 10 meters.
    bool yawIsAccurate = yawAcc <= 5;
    bool horizontalIsAccurate = horiAcc <= 10;
    return yawIsAccurate && horizontalIsAccurate;
}
return false;

Transforming the environment into the playground

After scanning the nearby area, the game uses mesh data from the Streetscape Geometry API to algorithmically make playing the game in different locations a unique experience. Every real-world location has its own topography and city layout, affecting the gameplay in its own unique way.

Space Invaders played in different locations
Gameplay is varied depending on your location - from towns in the Czech Republic (left) to cities in New York (right)

In the game, SPACE INVADERS can spawn from buildings, so we constructed test cases using building geometry obtained from different parts of the world. This ensured that the game would perform optimally in diverse environments, from local villages to bustling cities.

Portal Placement
A visualization of how the algorithm would place portals in the real world

Entering the Invader’s dimension

From our research studies, we learned that it can be tiring for users to keep holding their hands up for a prolonged period of time for an augmented reality experience. This knowledge influenced our gameplay development - we created the Invader’s dimension to give players a chance to relax their phone arm and improve user comfort.

Our favorite ‘wow’ moment that really shows you the power of the Geospatial API is the transition between real-world AR and virtually generated, 3D dimensions.

Transition AR to 3D
Gameplay transition from real-world AR to 3D dimension

This effect is achieved by blending the camera feed with the virtual environment shader that renders the buildings and terrain in the distinct wireframe style.

Portal Transition Editor
The Invader’s dimension appears around the player in the Unity Editor, seamlessly transitioning between the two modes

After the player enters the Invader’s dimension, the player’s spaceship flies along an algorithmically generated path through their local neighborhood. This is done by creating a depth image of the user’s environment from an overhead camera. In this image, the red channel represents buildings and the blue channel represents open space that could potentially be used for the flight path. This image is then used to generate a grid of points that the path should follow, and an A* search algorithm solves for a path that visits them.
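The grid-search step can be sketched as follows. This is an illustrative Python sketch, not the game’s actual Unity code: the grid encoding (1 for building cells from the red channel, 0 for free space from the blue channel), the 4-connected moves, and the unit step cost are all assumptions.

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 2D grid; grid[y][x] == 1 marks a building cell
    (red channel), 0 marks free airspace (blue channel)."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            # Only expand in-bounds, free-space cells.
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                step = cost + 1
                heapq.heappush(frontier, (step + h((nx, ny)), step, (nx, ny), path + [(nx, ny)]))
    return None  # no route through the buildings

# A toy 4x4 "depth image" grid: 1 = building, 0 = free space.
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
path = astar(grid, (0, 0), (3, 3))
```

On this toy grid, the only way past the wall of buildings in the second row is the single free cell at (2, 1), so the solved path routes through it.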

Finally, the generated A* path is post-processed to smooth out any potential jittering, sharp turns, and collisions.

To smooth out the spaceship’s pathway, jitter is removed by sampling the path over a set interval of nodes. Then, we determine whether there are any sharp turns by analyzing the angles along the path. If a sharp turn is present, we introduce two additional points to round it out. Lastly, we check whether the smoothed path would collide with any obstacles and, if so, adjust it to fly over them.
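The first two smoothing steps might look something like this minimal Python sketch. The subsampling interval, the turn-angle threshold, and the corner-cutting strategy are assumptions, and the final collision check against obstacles is omitted:

```python
import math

def smooth_path(path, step=2, max_turn_deg=60.0):
    """Illustrative smoothing pass: subsample the path to remove jitter,
    then round each sharp turn by replacing the corner with the midpoints
    of its two segments (a simple corner-cutting step)."""
    # 1. De-jitter: keep every `step`-th node, plus the final endpoint.
    sampled = path[::step]
    if sampled[-1] != path[-1]:
        sampled.append(path[-1])

    # 2. Round sharp turns.
    out = [sampled[0]]
    for prev, cur, nxt in zip(sampled, sampled[1:], sampled[2:]):
        v1 = (cur[0] - prev[0], cur[1] - prev[1])
        v2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        # Turn angle between the incoming and outgoing segments, in degrees.
        angle = math.degrees(math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0]))
        angle = abs((angle + 180) % 360 - 180)
        if angle > max_turn_deg:
            out.append(((prev[0] + cur[0]) / 2, (prev[1] + cur[1]) / 2))
            out.append(((cur[0] + nxt[0]) / 2, (cur[1] + nxt[1]) / 2))
        else:
            out.append(cur)
    out.append(sampled[-1])
    return out

# A right-angle corner gets cut into two gentler turns.
flight = smooth_path([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)], step=2)
```

With these toy values, the 90-degree corner at (2, 0) exceeds the threshold and is replaced by the two segment midpoints, yielding a gentler dogleg while preserving the endpoints.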

Depth Composite on the left and 3D Path on the right
A visualization of the depth map and a generated sample path in the Invader’s dimension

Creating a global gaming experience

A key takeaway from building the game was that the complexity of the contextual generation required worldwide testing. With Unity, we brought multiple environments into test cases, which allowed us to rapidly iterate and validate changes to these algorithms. This gave us confidence to deploy the game globally.

Visualizing SPACE INVADERS using Geospatial Creator

We used Geospatial Creator, powered by ARCore and Photorealistic 3D Tiles from Google Maps Platform, to validate how virtual content, such as Space Invaders, would appear next to specific landmarks within Tokyo in Unity.

Japan 3D Tiles
With Photorealistic 3D Tiles, we were able to visualize Invaders in specific locations, including the Tokyo Tower in Japan

Future updates and releases

Since the game’s launch, we have heard our players’ feedback and have been actively updating and improving the gameplay experience.

  • We have added a new gameplay mode, Indoor Mode, which allows all players without VPS coverage or players who do not want to use AR mode to experience the game.
  • To encourage users to play the game in AR, scores have been rebalanced to reward players who play outside more than players who play indoors.

Download the game on Android or iOS today and join the ranks of an elite Earth defender force to compete in your neighborhood for the highest score. For the latest updates on how we are improving the game, follow us on Twitter (@GoogleARVR). Plus, visit our ARCore and Geospatial Creator websites to learn how to get started building with Google’s AR technology.

Latest ARTwork on hundreds of millions of devices

Posted by Serban Constantinescu, Product Manager

Wouldn’t it be great if each update improved start-up times, execution speed, and memory usage of your apps? Google Play system updates for the Android Runtime (ART) do just that. These updates deliver performance improvements, the latest security fixes, and unify the core OpenJDK APIs across hundreds of millions of devices, including all Android 12+ devices and soon Android Go.

ART is the engine behind the Android operating system (OS). It provides the runtime and core APIs that all apps and most OS services rely on. Both Java and Kotlin are compiled down to bytecode executed by ART. Improvements in the runtime, compiler, and core APIs benefit all developers, making app execution faster and bytecode compilation more efficient.

While parts of Android are customizable by device manufacturers, ART is the same for all devices and Google Play system updates enable a path to modular updates.

Modularizing the OS

Android was originally designed for monolithic updates, which meant that OS components did not need to have clear API boundaries. This is because all dependent software would be built together. However, this made it difficult to update ART independently of the rest of the OS. Our first challenge was to untangle ART's dependencies and create clear, well-defined, and tested API boundaries. This allowed us to modularize ART and make it independently updatable.

Illustration of a racecar with an engine part hovering above the hood. A curved arrow points to where this part should go

As a core part of the OS, ART had to blaze new trails and engineer new OS boundaries. These new boundaries were so extensive that manually adding and updating them would have been too time-consuming. Therefore, we generate them automatically through introspection in the build system.

Another example is stack unwinding, which reports the functions last executed when an issue is detected. Before modularizing the OS, all stack unwinding code was built together and could change across Android versions. This made the transition even more challenging: because a single version of ART is delivered to many versions of Android, we had to create a new API boundary and design it to be forward-compatible with newer versions of the ART APEX module on devices that are no longer getting full OS updates.

Recently, for Android 14, we refactored the interface between the Package Manager, the service that determines how to install and update apps, and ART. This moves the OS boundary from the ART dex2oat command line to a well-defined interface that enables future optimizations, such as finer-grained control over the compilation mode.

ART updatability also introduced new challenges. For example, the collection of Java libraries, referred to as the Boot Classpath, had to be securely recompiled to ensure good performance. This required introducing a new secure state for compilation during boot as well as a fallback JIT compilation mode.

On older devices, the secure compilation happens on the first reboot after an ART update. On newer devices that support the Android Virtualization Framework, the compilation happens while the device is idle, in an enclave called Isolated Compilation – saving up to 20 seconds of boot-time.

Testing the ART APEX module

The ART APEX module is a complex piece of software with an order of magnitude more APIs than any other APEX module. It also backs a quarter of the developer APIs available in the Android SDK. In addition, ART has a compiler that aims to make the most of the underlying hardware by generating chipset-specific instructions, such as Arm SVE. This, together with the multiple OS versions on which the ART APEX module has to run, makes testing challenging.

We first modularized the testing framework from per-platform releases (e.g., Android CTS) to per-module testing. We did this by introducing an ART-specific Mainline Test Suite (MTS), which tests both the compiler and the runtime, as well as core OpenJDK APIs, while collecting code coverage statistics.

Our target is 100% API coverage and high line coverage, especially for new APIs. Together with HWASan and fuzzing, all of the tests described above contribute to a massive test load that needs to be sharded across multiple devices to ensure that it completes in a reasonable amount of time.

Illustration of modularized testing framework

We test the upcoming ART release every day by compiling over 18 million APKs and running app compatibility tests, and startup, performance, and memory benchmarks on a variety of Android devices that replicate the diversity of our ecosystem as closely as possible. Once tests pass with all possible compilation modes, all Garbage Collector algorithms, and supported OS versions, we begin gradually rolling out the next ART release.

Benefits of ART Google Play system updates

By updating ART independently of OS updates, users get the latest performance optimizations and security fixes as quickly as possible, while developers get OpenJDK improvements and compiler optimizations that benefit both Java and Kotlin.

As shown in the graph below, the runtime and compiler optimizations in the ART 13 update delivered real-world app start-up improvements of up to 30% on some devices.

Graph of average app startup time showing startup time in milliseconds with improvement up to 30% across 12 weeks on devices running the latest ART Google Play system update

ART updates allow us to frequently deploy fixes with little additional effort from our ecosystem partners. They include propagating upstream OpenJDK fixes to Android devices as quickly as possible, as well as runtime and compiler security fixes, such as CVE-2022-20502, which was detected by our automated fuzzing tests.

For developers, ART updates mean that you can now target the latest programming features. ART 13 delivered OpenJDK 11 core language features, which was the fastest-ever adoption of a new OpenJDK release on Android devices.

What’s next

In the coming months, we'll be releasing ART 14 to all compatible devices. ART 14 includes OpenJDK 17 support along with new compiler and runtime optimizations that improve performance while reducing code size. Stay tuned for more details on ART 14!

Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.

Meet the student leaders building apps using Google technology

Posted by Kübra Zengin, North America GDSC Regional Lead

Serving as a Google Developer Student Clubs (GDSC) Lead at the university level builds technical skills and leadership skills that serve alumni well in their post-graduate careers. Four GDSC Alumni Leads from universities in Canada and the U.S. have gone on to meaningful careers in the tech industry, and share their experiences.

Image of Daniel Shirvani (right) with Ayman Bolad (left) at a Google Developer Students event

Daniel Shirvani: The Next Frontier in Patient Data

Daniel Shirvani graduated from the University of British Columbia (UBC) in Vancouver, Canada, in 2023, with a Bachelor of Science in Pharmacology, and will soon return to UBC for medical school. He served as a Google Developer Student Clubs (GDSC) Lead and founding team member. He also launched his own software company, Leftindust Systems, in 2019, to experiment with creating small-scale electronic medical record (EMR) software for the open source community. This project is now closed.

“I built a startup to rethink the use of medical software,” he says.

As a summer student volunteer at a Vancouver-area heart clinic, Shirvani was tasked with indexing hundreds of medical records to find patients who had specific blood glucose (HbA1c) levels and factors related to kidney disease, to see who would be eligible for a new cardiac drug. However, the clinic’s medical records software didn’t have the capability to flag patients in the system, so the only way to register the hundreds of files on Shirvani’s final list would be to do so manually, and that was impossible given the size of the list and the time remaining in his work term. He believed that the software should have been able not only to flag these patients, but also to automatically filter which patients met the criteria.

“Two to three hundred patients will not receive this life-saving drug because of this software,” Shirvani says. “My father is a patient who would have been eligible for this type of drug. His heart attack put things into perspective. There are families just like mine who will have the same experience that my father did, only because the software couldn’t keep up.”

Shirvani decided to combine his medical knowledge and programming skills to develop electronic medical record (EMR) software that could store patient data numerically, instead of within paragraphs. This allows doctors to instantly analyze patient data at both the individual and group level. Doctors across North America took notice, including those from UBC, Stanford, UCLA, and elsewhere.

“During the North America Connect conference, a 2-day in-person event bringing together organizers and members across North America from the Google for Developers community programs including Google Developer Group, Women Techmakers, Google Developer Experts, and Google Developer Student Clubs, I met with many GDEs and Googlers, such as Kevin A. McGrail, who is now a personal mentor,” says Shirvani, who continues to look for other ways to make change in the healthcare community.

“When systems disappoint, we see not an end, but a new beginning. It’s in that space that we shape the future.”


Image of Alex Cussell presenting at Carnegie Mellon University Swartz Center for Entrepreneurship

Alex Cussell: Becoming a tech entrepreneur

Alex Cussell graduated from the University of Central Florida in 2020, where she was a GDSC Lead her senior year. She says the experience inspired her to pursue her passion of becoming a tech entrepreneur.

“Leading a group of students with such differing backgrounds, addressing the world’s most pervasive problems, and loving every second of it taught me that I was meant to be a tech entrepreneur,” she says. “We were on a mission to save the lives of those involved in traffic accidents, when the world as we knew it came to a screeching halt due to the COVID-19 pandemic.”

After her virtual graduation, Cussell moved to Silicon Valley and earned a Master’s in Technology Ventures from Carnegie Mellon University. She studied product management, venture capital, and startup law, with a vision of building a meaningful company. After getting engaged and receiving multiple gift cards as bridal shower gifts, Cussell found herself confused about each card’s amount and challenged trying to keep them organized.

She created the Jisell app, which features a universal gift card e-wallet, allowing users to digitize their gift cards. To date, users have uploaded over five thousand dollars in gift cards, and the app has a partnership with the largest gift card distributor in the U.S. Jisell product manager Emily Robertson was Cussell’s roommate at the GDSC summit.

“Without Google Developer Student Clubs, I might never have realized how much I love problem-solving or technical leadership or known so much about the great tools offered by Google,” Cussell says. "Thank you to everyone who contributes to the GDSC experience; you have truly changed the lives of so many.”


Headshot of Angela Busheska, smiling

Angela Busheska: Founding a nonprofit to fight climate change

Angela Busheska is double majoring in electrical engineering and computer science, with a minor in mathematics, at Lafayette College in Easton, Pennsylvania, and anticipates graduating in 2025. A Google intern this summer and last summer, Busheska participated in Google’s Computer Science Research Mentorship Program from September 2021 to January 2022. The program supports the pursuit of computing research for students from historically marginalized groups through career mentorship, peer-to-peer networking, and building awareness of pathways within the field. Busheska investigated the computing processes across four different projects in the field of AI for Social Good.

During the pandemic, in 2020, Busheska founded EnRoute, a nonprofit to harness the power of everyday actions to fight climate change and break down the stigma that living sustainably is an expensive and challenging commitment. She also built a mobile app using Android and Flutter that helps users make simple daily transportation and shopping choices to reduce their carbon footprints. Since 2020, the app has guided thousands of users to reduce more than 100,000kg of CO2 emissions.

EnRoute honors Busheska’s aunt, who passed away when Busheska was 17. Busheska grew up in Skopje, in North Macedonia, one of the world’s most polluted cities.

“When I was 17 years old, Skopje’s dense air pollution led my aunt, who suffered from cardiovascular difficulties, to complete blood vessel damage, resulting in her swift passing,” says Busheska. “Inspired by my personal loss, I started researching the causes of the pollution.”

EnRoute has been featured on the Forbes 30 Under 30 Social Impact List and has been publicly recognized by Shawn Mendes, Prince William, One Young World, and the United Nations.


Headshot of Sapphira Ching, smiling

Sapphira Ching: Advancing Environmental, Social, and Governance (ESG) standards

Sapphira Ching, a senior at the University of Pennsylvania’s Wharton School, spent her junior year as UPenn’s GDSC Lead, after joining GDSC her first year, leading social media for the club that spring and heading marketing and strategy her sophomore year. As a GDSC Lead, Sapphira expanded GDSC’s campus membership and partnerships to reach an audience of over 2,000 students. In line with her passion for Environmental, Social, and Governance (ESG) standards and Diversity, Equity, and Inclusion (DEI), Sapphira built a leadership team from different areas of study, including engineering, business, law, medicine, and music.

Ching’s passions for ESG, technology, and business drive her choices, and she says, “I am eager to incorporate ESG into tech to bring people together using business acumen.”

The Wharton School appointed her as an inaugural undergraduate fellow at the Turner ESG Initiative, and she founded the Penn Innovation Network, an ESG innovation club. Her summer internships have focused on ESG: her 2021 summer internship at MSCI (formerly known as Morgan Stanley Capital International) centered on ESG, and her 2022 summer internship was at Soros Fund Management, an ESG juggernaut in finance. She is also an NCAA Division I student-athlete and Olympic hopeful in sabre fencing.

“I attribute my growth in ESG, tech, and business to how GDSC has helped me since my first year of college,” Ching says.

Are you a GDSC alumnus or current GDSC Lead? You can join the Google Developer Student Clubs (GDSC) LinkedIn Group here. The group is a great place to share ideas and connect with current and former GDSC Leads.

Interested in joining a GDSC near you? Google Developer Student Clubs (GDSC) are university-based community groups for students interested in Google developer technologies. Students from all undergraduate or graduate programs with an interest in growing as a developer are welcome. Learn more here.

Interested in becoming a GDSC Lead? GDSC Leads are responsible for starting and growing a Google Developer Student Club (GDSC) chapter at their university. GDSC Leads work with students to organize events, workshops, and projects. Learn more here.

Expanding our Fully Homomorphic Encryption offering

Posted by Miguel Guevara, Product Manager, Privacy and Data Protection Office

At Google, it’s our responsibility to keep users safe online and ensure they’re able to enjoy the products and services they love while knowing their personal information is private and secure. We’re able to do more with less data through the development of our privacy-enhancing technologies (PETs) like differential privacy and federated learning.

And throughout the global tech industry, we’re excited to see that adoption of PETs is on the rise. The UK’s Information Commissioner’s Office (ICO) recently published guidance for how organizations including local governments can start using PETs to aid with data minimization and compliance with data protection laws. Consulting firm Gartner predicts that within the next two years, 60% of all large organizations will be deploying PETs in some capacity.

We’re on the cusp of mainstream adoption of PETs, which is why we also believe it’s our responsibility to share new breakthroughs and applications from our longstanding development and investment in this space. By open sourcing dozens of our PETs over the past few years, we’ve made them freely available for anyone – developers, researchers, governments, business and more – to use in their own work, helping unlock the power of data sets without revealing personal information about users.

As part of this commitment, we open-sourced a first-of-its-kind Fully Homomorphic Encryption (FHE) transpiler two years ago, and have continued to remove barriers to entry along the way. FHE is a powerful technology that allows you to perform computations on encrypted data without being able to access sensitive or personal information and we’re excited to share our latest developments that were born out of collaboration with our developer and research community to expand what can be done with FHE.
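To build intuition for the homomorphic property (computing on ciphertexts so that the result decrypts to the computation on the plaintexts), here is a deliberately insecure toy scheme in Python. It supports only addition and is not real FHE, which relies on lattice-based cryptography and supports arbitrary circuits; it merely illustrates the idea:

```python
import random

# Toy additively homomorphic scheme (NOT real FHE, purely illustrative):
# Enc(m) = (m + k) mod N with a secret key k. The sum of two ciphertexts
# is a valid encryption of the sum of the plaintexts under key k1 + k2,
# so a server can add encrypted values without ever seeing them.
N = 2**31 - 1

def encrypt(m, k):
    return (m + k) % N

def decrypt(c, k):
    return (c - k) % N

k1, k2 = random.randrange(N), random.randrange(N)
c1, c2 = encrypt(20, k1), encrypt(22, k2)

# Server-side: add ciphertexts without knowing k1, k2 or the plaintexts.
c_sum = (c1 + c2) % N

# Client-side: decrypt with the combined key to recover 20 + 22.
result = decrypt(c_sum, (k1 + k2) % N)
```

A server holding only c1 and c2 can produce c_sum yet learns nothing about the underlying values; only the key holder can decrypt the result.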

Furthering the adoption of Fully Homomorphic Encryption

Today, we are introducing additional tools to help the community apply FHE technologies to video files. This advancement is important because applying FHE to video can be expensive and incur long run times, which has limited the ability to scale FHE use to larger files and new formats.

This will encourage developers to try out more complex applications with FHE. Historically, FHE has been thought of as an intractable technology for large-scale applications. Our results processing large video files show it is possible to use FHE in previously unimaginable domains. Say you’re a developer at a company thinking of processing a large file (on the order of terabytes; it could be a video or a sequence of characters) for a given task (e.g., convolution around specific data points to apply a blur filter to a video or to detect object movement). You can now try this task using FHE.

To do so, we are expanding our FHE toolkit in three new ways to make it easier for developers to use FHE for a wider range of applications, such as private machine learning, text analysis, and video processing. As part of our toolkit, we will release new hardware, a software crypto library and an open source compiler toolchain. Our goal is to provide these new tools to researchers and developers to help advance how FHE is used to protect privacy while simultaneously lowering costs.


Expanding our toolkit

We believe that, with more optimization and specialty hardware, there will be a wider range of use cases for a myriad of similar private machine learning tasks, like privately analyzing more complex files, such as long videos, or processing text documents. That is why we are releasing a TensorFlow-to-FHE compiler that will allow any developer to compile their trained TensorFlow machine learning models into an FHE version of those models.

Once a model has been compiled to FHE, developers can use it to run inference on encrypted user data without having access to the content of the user inputs or the inference results. For instance, our toolchain can be used to compile a TensorFlow Lite model to FHE, producing a private inference in 16 seconds for a 3-layer neural network. This is just one way we are helping researchers analyze large datasets without revealing personal information.

In addition, we are releasing Jaxite, a software library for cryptography that allows developers to run FHE on a variety of hardware accelerators. Jaxite is built on top of JAX, a high-performance cross-platform machine learning library, which allows Jaxite to run FHE programs on graphics processing units (GPUs) and Tensor Processing Units (TPUs). Google originally developed JAX for accelerating neural network computations, and we have discovered that it can also be used to speed up FHE computations.

Finally, we are announcing Homomorphic Encryption Intermediate Representation (HEIR), an open-source compiler toolchain for homomorphic encryption. HEIR is designed to enable interoperability of FHE programs across FHE schemes, compilers, and hardware accelerators. Built on top of MLIR, HEIR aims to lower the barriers to privacy engineering and research. We will be working on HEIR with a variety of industry and academic partners, and we hope it will be a hub for researchers and engineers to try new optimizations, compare benchmarks, and avoid rebuilding boilerplate. We encourage anyone interested in FHE compiler development to come to our regular meetings, which can be found on the HEIR website.

Launch diagram

Building advanced privacy technologies and sharing them with others

Organizations and governments around the world continue to explore how to use PETs to tackle societal challenges and help developers and researchers securely process and protect user data and privacy. At Google, we’re continuing to improve and apply these novel data processing techniques across many of our products, and investing in democratizing access to the PETs we’ve developed. We believe that every internet user deserves world-class privacy, and we continue to partner with others to further that goal. We’re excited for new testing and partnerships on our open source PETs, and we will continue investing in innovations, aiming to release more updates in the future.

These principles are the foundation for everything we make at Google and we’re proud to be an industry leader in developing and scaling new privacy-enhancing technologies (PETs) that make it possible to create helpful experiences while protecting our users’ privacy.

PETs are a key part of our Protected Computing effort at Google, which is a growing toolkit of technologies that transforms how, when and where data is processed to technically ensure its privacy and safety. And keeping users safe online shouldn’t stop with Google - it should extend to the whole of the internet. That’s why we continue to innovate privacy technologies and make them widely available to all.

MakerSuite expands to 179 countries and territories, and adds helpful features for AI makers

Posted by Simon Tokumine, Director of Product Management

When we announced MakerSuite earlier this year, we were delighted to see people from all over the world sign up for the waitlist. With MakerSuite we want to help anyone become an AI maker and easily create innovative AI applications with Google’s large generative models. We’re excited to see how it’s being used.

Today, we’re expanding access to MakerSuite to cover 179 countries and territories, including anyone with a Google Workspace account. This means that more developers than ever can sign up to create AI applications with our latest language model, PaLM 2.

We’re also introducing three helpful features:

  • Automatically optimize your text prompts
  • Image showing prompt suggestion in MakerSuite
    Want to write better prompts? Now, you can write a text prompt and click "Prompt Suggestion" to get ideas and suggestions for better responses
  • Enable dark mode
  • Image showing light mode and dark mode UX in MakerSuite
    In MakerSuite, you can now switch from light mode to dark mode in the settings.
  • Import and export your data with Google Sheets and CSV to save time and collaborate effectively
  • Image showing import data function in MakerSuite
    Import and export your data to and from Google Sheets or CSV files easily. This can save you time by eliminating the need to recreate data that you have already created. It can also help you collaborate more effectively with others by allowing you to share your results easily.

Easily go from MakerSuite to code

Since the PaLM API is integrated into MakerSuite, it’s easy to quickly try different prompts from your browser, and then incorporate them into your code—no machine learning expertise required.

Moving image showing how users can copy their code with one click to integrate it into their project
Once your prompt is ready, simply copy your code in just one click and integrate it into your project
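The code MakerSuite exports is essentially a parameterized call to the PaLM API. As a rough illustration, the sketch below assembles the kind of request the exported snippet makes; the client library (`google.generativeai`), model name (`models/text-bison-001`), and parameter names are assumptions to check against the exported code and API documentation.

```python
# Hypothetical sketch: the client library and model name are assumptions,
# not necessarily what MakerSuite exports for your project.

def build_palm_request(prompt: str, temperature: float = 0.7,
                       max_output_tokens: int = 256) -> dict:
    """Assemble the keyword arguments for a PaLM text-generation call."""
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    return {
        "model": "models/text-bison-001",  # assumed model name
        "prompt": prompt,
        "temperature": temperature,
        "max_output_tokens": max_output_tokens,
    }

# With the (assumed) client library installed, the call would look like:
# import google.generativeai as palm
# palm.configure(api_key="YOUR_API_KEY")
# response = palm.generate_text(**build_palm_request("Write a haiku about the sea"))
# print(response.result)
```

Because the prompt and sampling parameters travel as plain keyword arguments, iterating in the browser and then pasting the same values into code keeps the two in sync.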

Get started

Sign up and learn more on our Generative AI for Developers website. Be sure to check out our quick-start guide, browse our prompt gallery, and explore sample apps for inspiration. We can't wait to see what you build with MakerSuite!

A vision for more efficient media management

Petit Press’ new open source, cloud-based DAM platform helps publishers get rich media content in front of their audience at pace and scale.

Picture the scene: You’re an investigative journalist who has just wrapped up a new piece of video content offering incisive, timely commentary on a pressing issue of the day. Your editor wants to get the content in front of your audience as quickly as possible, but you soon find yourself bogged down in a laborious, manual process of archiving and uploading files, a process that is subject to human error and involves repeating the same tasks as you prepare the content for YouTube and for embedding within an article.

With the development of a new open source digital asset management (DAM) system, Slovak publishing house, Petit Press, is hoping to help the wider publishing ecosystem overcome these types of challenges.

Striving towards a universal, open source solution

Like many publishers in today’s fast-paced, fast-changing news landscape, Petit Press was feeling the pressure to be more efficient and do more with less, while at the same time maximizing the amount of high-quality, rich media content its journalists could deliver. “We wanted to find a solution to two main asset delivery issues in particular,” says Ondrej Podstupka, deputy editor in chief of SME.sk. “Firstly, to reduce the volume of work involved in transferring files from our journalists to our admin teams to the various platforms and CMS we use. Secondly, to avoid the risk of misplacing archived files or losing them entirely in an archive built on legacy technologies.”

As a publisher of over 35 print and digital titles, including one of Slovakia’s most-visited news portals, SME.sk, Petit Press also had a first-hand understanding of how useful the solution might be if it could flex to the different publishing scales, schedules, and platforms found across the news industry. With encouragement and support from GNI, Petit Press challenged themselves to build an entirely open source, API-based DAM system that flexes beyond their own use cases and can be easily integrated with any CMS, which means that other publishers can adapt and add functionality with minimal development costs.

Getting out of the comfort zone to overcome complexity

For the publisher, creating an open source project required collaboration, skill development, and a strong sense of purpose. GNI inspired the team members to work together in a positive, creative, and supportive environment. Crucial resources from GNI also enabled the team to broaden the scope of the project beyond Petit Press’ direct requirements to cover the edge use cases and automations that a truly open source piece of software requires.

“GNI has enabled our organization to make our code open source, helping to create a more collaborative and innovative environment in the media industry.” 
– Ondrej Podstupka, deputy editor in chief of SME.sk

Building and developing the tool was difficult at times with a team of software engineers, product managers, newsroom managers, UX designers, testers, and cloud engineers all coming together to see the project to completion. For a team not used to working on GitHub, the open source aspect of the project proved the primary challenge. The team, however, also worked to overcome everything from understanding the complexities of integrating a podcast feature, to creating an interface all users felt comfortable with, to ensuring compliance with YouTube’s security requirements.

Unburdening the newsroom and minimizing costs

The hard work paid off, though, when the system initially launched in early 2023. Serving as a unified distribution platform, asset delivery service, and long-term archive, the single solution is already unburdening the newsroom. It also benefits the tech/admin teams by addressing concerns about the long-term costs of various media storage services.

On Petit Press’ own platforms, the DAM system has already been successfully integrated into SME.sk’s user-generated content (UGC) blog. This integration allows for seamless content management and curation, enhancing the overall user experience. The system also makes regulatory compliance easier, thanks to its GDPR-compliant user deletion process.

In addition to the UGC Blog system, the DAM system has now launched for internal Petit Press users—specifically for managing video and podcast content, which has led to increased efficiency and organization within the team. By streamlining the video and podcast creation and distribution processes, Petit Press has already seen a 5-10% productivity boost. The new DAM system saves an estimated 15-20 minutes of admin time off every piece of video/podcast content Petit Press produces.

Working towards bigger-picture benefits

Zooming out, the DAM system is also playing a central part in Petit Press’ year-long, org-wide migration to the cloud. This transformation was set in motion to enhance infrastructure, streamline processes, and improve overall efficiency within the department.

Podstupka also illustrates how the system might benefit other publishers. “It could be used as an effective standalone, automated archive for videos and podcasts,” he says. For larger publishing houses, “if you use [the DAM system] to distribute videos to YouTube and archive podcasts, there is minimal traffic cost and very low storage cost. But you still have full control over the content in case you decide to switch to a new distribution platform or video hosting service.”

As the team at Petit Press continues to get to grips with the new system, there is a clear goal in mind: To have virtually zero administrative overhead related to audio or video.

Beyond the automation-powered efficiency savings, the team at Petit Press are also exploring the new monetisation opportunities that the DAM system presents. They are currently working on a way to automatically redistribute audio and image assets to their video hosting platform, to automatically create video from every podcast they produce. This video is then pushed to their CMS and optimized for monetisation on the site with very little additional development required.

Ultimately, though, the open source nature of the system makes the whole team excited to see where other publishers and developers might take the product. “It’s a futureproof way to leverage media content with new services, platforms and ideas that emerge in technology or media landscapes,” says Igor, Head of Development & Infrastructure. A succinct, but undeniably compelling way of summing up the system’s wide-ranging potential.

A guest post by the Petit Press team

Credential Manager beta: easy & secure authentication with passkeys on Android

Posted by Diego Zavala, Product Manager, and Niharika Arora, Android Developer Relations Engineer

Today, we are excited to announce the beta release of Credential Manager with a finalized API surface, making it suitable for use in production. As we previously announced, Credential Manager is a new Jetpack library that allows app developers to simplify their users' authentication journey, while also increasing security with support of passkeys.

Authentication provides secure access to personalized experiences, but it has challenges. Passwords, which are widely used today, are difficult to use and remember, and are not always secure. Many applications and services require two-factor authentication (2FA) to log in, adding more friction to the user's flow. Lastly, sign-in methods have proliferated, making it difficult for users to remember how they signed in. This proliferation has also added complexity for developers, who now need to support multiple integrations and APIs.

Credential Manager brings support for passkeys, a new passwordless authentication mechanism, together with traditional sign-in methods, such as passwords and federated sign-in, into a single interface for the user and a unified API for developers.


image showing end-to-end journey to sign in using a passkey on a mobile device
End-to-end journey to sign in using a passkey

With Credential Manager, users will benefit from seeing all their credentials in one place: passkeys, passwords, and federated credentials (such as Sign in with Google), without needing to tap three different places. This reduces user confusion and simplifies choices when logging in.


image showing the unified account selector that supports multiple credential types across multiple accounts on a mobile device
Unified account selector that supports multiple credential types across multiple accounts

Credential Manager also makes the login experience simpler by deduping across sign-in methods for the same account and surfacing only the safest and simplest authentication method, further reducing the number of choices users need to make. So, if a user has a password and a passkey for a single account, they won’t need to decide between them when signing in; rather, the system will propose using the passkey - the safest and simplest option. That way, users can focus on choosing the right account instead of the underlying technology.


image showing how a passkey and a password for the same account are deduped on a mobile device
A passkey and a password for the same account are deduped
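The dedup rule described above can be pictured as a small selection function: group credentials by account, then keep only the safest type per account. The sketch below is purely illustrative; the ranking (passkey over federated over password) reflects the behavior the post describes, not Credential Manager's actual implementation.

```python
# Illustrative sketch of the de-duplication rule: when one account has
# several credential types, surface only the safest/simplest one.
# The ranking below is an assumption for illustration.

SAFETY_RANK = {"passkey": 0, "federated": 1, "password": 2}

def dedupe_credentials(credentials: list) -> list:
    """Keep one entry per account, preferring the safest credential type."""
    best = {}
    for cred in credentials:
        account = cred["account"]
        current = best.get(account)
        if current is None or SAFETY_RANK[cred["type"]] < SAFETY_RANK[current["type"]]:
            best[account] = cred
    return list(best.values())
```

With this rule, a user who has registered both a password and a passkey for the same account is only ever shown the passkey entry in the selector.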

For developers, Credential Manager supports multiple sign-in mechanisms within a single API. It provides support for passkeys on Android apps, enabling the transition to a passwordless future. And at the same time, it also supports passwords and federated sign-in like Sign in with Google, simplifying integration requirements and ongoing maintenance.

Who is already using Credential Manager?

Kayak has already integrated with Credential Manager, providing users with the advantages of passkeys and simpler authentication flows.

"Passkeys make creating an account lightning fast by removing the need for password creation or navigating to a separate app to get a link or code. As a bonus, implementing the new Credential Manager library also reduced technical debt in our code base by putting passkeys, passwords and Google sign-in all into one new modern UI. Indeed, users are able to sign up to Kayak with passkeys twice as fast as with an email link, which also improves the sign-in completion rate."  

– Matthias Keller, Chief Scientist and SVP, Technology at Kayak 

Something similar has been observed at Shopify:

“Passkeys work across browsers and our mobile app, so it was a no-brainer decision for our team to implement, and the resulting one-tap user experience has been truly magical. Buyers who are using passkeys to log in to Shop are doing so 14% faster than those who are using other login methods (such as email or SMS verification)”

– Mathieu Perreault, Director of Engineering at Shopify

Support for multiple password managers

Credential Manager on Android 14 and higher supports multiple password managers at the same time, enabling users to choose the provider of their choice to store, sync and manage their credentials. We are excited to be working with several leading providers like Dashlane on their integration with Credential Manager.

“Adopting passkeys was a no-brainer for us. It simplifies sign-ins, replaces the guesswork of traditional authentication methods with a reliable standard, and helps our users ditch the downsides of passwords. Simply put, it’s a big win for both us and our users. Dashlane is ready to serve passkeys on Android 14!”

– Rew Islam, Director of Product Engineering and Innovation at Dashlane

Get started

To start using Credential Manager, you can refer to our integration guide.

We'd love to hear your input during this beta release, so please let us know about your experience integrating with Credential Manager, using passkeys, or any other feedback you might have.

What’s new for developers building solutions on Google Workspace – mid-year recap

Posted by Chanel Greco, Developer Advocate Google Workspace

Google Workspace offers tools for productivity and collaboration for the ways we work. It also offers a rich set of APIs, SDKs, and no-code/low-code tools for creating apps and workflows that integrate directly into the surfaces across Google Workspace.

Leading software makers like Atlassian, Asana, LumApps and Miro are building integrations with Google Workspace apps—like Google Docs, Meet, and Chat—to make it easier than ever to access data and act right in the tools relied on by more than 3 billion users and 9 million paying customers.

At I/O’23 we had some exciting announcements for new features that give developers more options when integrating apps with Google Workspace.


Third-party smart chips in Google Docs

We announced the opening up of smart chips functionality to our partners. Smart chips allow you to tag linked resources, such as projects, customer records, and more, and see critical information about them. This preview information provides users with context and critical information right in the flow of their work. These capabilities are now generally available to developers to build their own smart chips.

Some of our partners have built and launched integrations using this new smart chips functionality. For example, Figma is integrated into Docs with smart chips, allowing users to tag Figma projects so that readers can hover over a Figma link in a doc and see a preview of the design project. Atlassian is leveraging smart chips so users can seamlessly access Jira issues and Confluence pages within Google Docs.

Tableau uses smart chips to show the user the Tableau Viz's name, last updated date, and a preview image. With the Miro smart chip solution users have an easy way to get context, request access and open a Miro board from any document. The Whimsical smart chip integration allows users to see up-to-date previews of their Whimsical boards.

Moving image showing functionality of Figma smart chips in Google docs, allowing users to tag and preview projects in docs.

Google Chat REST API and Chat apps

Developers and solution builders can use the Google Chat REST API to create Chat apps and automate workflows to send alerts, create spaces, and share critical data right in the flow of the conversation. For instance, LumApps is integrating with the Chat APIs to allow users to start conversations in Chat right from within the employee experience platform.

The Chat REST API is now generally available.

Using the Chat API and the Google Workspace UI-kit, developers can build Chat apps that bring information and workflows right into the conversation. Developers can also build low-code Chat apps using AppSheet.

Moving image showing interactive Google Meet add-ons by partner Jira

There are already Chat apps available from partners like Atlassian’s Jira, Asana, PagerDuty and Zendesk. For example, Jira for Google Chat lets teams collaborate on projects, create issues, and update tickets, all without having to switch context.

Google Workspace UI-kit

We are continuing to evolve the Workspace UI-kit to provide a more seamless experience across Google Workspace surfaces with easy to use widgets and visual optimizations.

For example, there is a new date and time picker widget for Google Chat apps and there is the new two-column layout to optimize space and organize information.

Google Meet SDKs and APIs

There are exciting new capabilities which will soon be launched in preview for Google Meet.

For example, the Google Meet Live Sharing SDK allows developers to build new shared experiences for users on Android, iOS, and web. Developers will be able to synchronize media content across participants’ devices in real time and offer shared content controls for everyone in the meeting.

The Google Meet Add-ons SDK enables developers to embed their app into Meet via an iframe, and choose between the main stage or the side panel. This integration can be published on the Google Workspace Marketplace for discoverability.

Partners such as Atlassian, Figma, Lucid Software, Miro and Polly.ai, are already building Meet add-ons, and we’re excited to see what apps and workflows developers will build into Meet’s highly-interactive surfaces.

Image of interactive Google Meet add-on by partner Miro

With the Google Meet APIs developers can add the power of Google Meet to their applications by pre-configuring and launching video calls right from their apps. Developers will also be able to pull data and artifacts such as attendance reporting, recordings, and transcripts to make them available for their users post-meeting.

Google Calendar API

The ability to programmatically read and write the working location from Calendar is now available in preview. In the second half of this year, we plan to make these two capabilities, along with the writing of sub-day working locations, generally available.

These new capabilities can be used for integrating with desk booking systems and coordinating in-offices days, to mention just a few use cases. This information will help organizations adapt their setup to meet the needs of hybrid work.
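For a desk-booking integration like the one described above, writing a working location comes down to inserting an all-day event with the working-location event type. The sketch below assembles such an event body; the field names (`eventType`, `workingLocationProperties`, the location type values) follow the Calendar API's working-location events as best understood here and should be verified against the current API reference.

```python
# Sketch of the event body for a working-location entry; field names are
# assumptions to verify against the Calendar API reference.

def build_working_location_event(date, location_type, label=""):
    """Assemble an all-day working-location event body for events.insert."""
    props = {"type": location_type}
    if location_type == "officeLocation":
        props["officeLocation"] = {"label": label}
    elif location_type == "customLocation":
        props["customLocation"] = {"label": label}
    elif location_type == "homeOffice":
        props["homeOffice"] = {}
    else:
        raise ValueError(f"unknown working location type: {location_type}")
    return {
        "eventType": "workingLocation",
        "summary": label or location_type,
        "start": {"date": date},  # all-day event: date only, no time
        "end": {"date": date},
        "visibility": "public",
        "transparency": "transparent",  # does not block the user's calendar
        "workingLocationProperties": props,
    }

# The body would then be passed to the Calendar API, roughly:
# service.events().insert(calendarId="primary",
#     body=build_working_location_event("2023-08-01", "officeLocation", "HQ")
# ).execute()
```

A desk-booking system could emit one such event per booked day, and read them back to coordinate in-office days across a team.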

Google Workspace API Dashboard and APIs Explorer

Two new tools were released to assist developers: the Google Workspace API Dashboard and the APIs Explorer.

The API Dashboard is a unified way to access Google Workspace APIs through the Google Cloud Console—APIs for Gmail, Google Drive, Docs, Sheets, Chat, Slides, Calendar, and many more. From there, you now have a central location to manage all your Google Workspace APIs and view all of the aggregated metrics, quotas, credentials, and more for the APIs in use.

The APIs Explorer allows you to explore and test Google Workspace APIs without having to write any code. It's a great way to get familiar with the capabilities of the many Google Workspace APIs.

Apps Script

The eagerly awaited project history capability for Google Apps Script will soon be generally available. This feature allows users to view the list of versions created for the script, their content, and the differences between the selected version and the current version.

It was also announced that admins will be able to add a per-domain allowlist for URLs, enabling safer access controls and letting them control where their data can be sent externally.

The V8 runtime for Apps Script was launched back in 2020 and it enables developers to use modern JavaScript syntax and features. If you still have legacy scripts on the old Rhino runtime, now is the time to migrate them to V8.

AppSheet

We have been further improving AppSheet, our no-code solution builder, and announced multiple new features at I/O.

Later this year we will be launching Duet AI in AppSheet to make it easier than ever to create no-code apps for Google Workspace. Using a natural-language and conversational interface, users can build an app in AppSheet by simply describing their needs as a step-by-step conversation in chat.

Moving image of no-code app creation in AppSheet

The no-code Chat apps feature for AppSheet is now generally available; it can be used to quickly create Google Chat apps and publish them with one click.

AppSheet databases are also generally available. With this native database feature, you can organize data with structured columns and references directly in AppSheet.

Check out the “Build a no-code app using the native AppSheet database” and “Add Chat to your AppSheet apps” codelabs to get started with these two new capabilities.

Google Workspace Marketplace

The Google Workspace Marketplace is where developers can distribute their Workspace integrations for users to find, install, and use. We launched the Intelligent Apps category which spotlights the AI-enabled apps developers build and helps users discover tools to work smarter and be more productive (eligibility criteria here).

Image of Intelligent Apps in Google Workspace

Start building today

If you want early access to the features in preview, sign up for the Developer Preview Program. Subscribe to the Google Workspace Developers YouTube channel for the latest news and video tutorials to kickstart your Workspace development journey.

We can’t wait to see what you will build on the Google Workspace platform.

Prepare your app for the new Samsung tablets, foldables and watches

Posted by the Android team

From foldable innovations to seamless connectivity, Google and Samsung have continued to work together to create helpful experiences across Android phones, tablets, smartwatches and more. Today at Galaxy Unpacked in Seoul, Samsung unveiled the new Galaxy Z Flip5 and Z Fold5, Galaxy Watch6 series, and Galaxy Tab S9 series.

With these new devices from Samsung, there are four more reasons to ensure your app looks great across all your users’ favorite screens. Here are three ways you can ensure your app is ready for these great new Samsung devices:

1. Provide a great foldable experience

The launch of the new Galaxy Z Flip5 and Z Fold5 brings two brand new foldables to the Android ecosystem, so it is important to provide experiences that have fully adaptive UIs. The bottom line is that layout and app behavior should be based on device configuration and available features, and not the physical type of the device.

When it comes to providing a great foldable experience, here are a few of our top recommendations:

Illustration of Window Class sizes showing compact, medium, and expanded sizes across widths from 600dp through 840 dp

  • Use window size classes to guide layout decisions based on your current windowing state using opinionated breakpoints that are derived from common device types.
  • Observe folding features with Jetpack WindowManager, which provides the set of folding features that intersect your app's current window.
  • Make dynamic, runtime decisions based on whether a feature is available, instead of assuming that a feature is or is not available for a certain kind of device.
  • Referring in the UI to the user’s device as simply a “device” covers all form factors and is the simplest to implement. However, differentiating between the multiple devices a user may have provides a more polished experience and enables you to display the type of the device to the user using heuristics relevant to your particular use case.
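The breakpoints pictured above can be expressed as a simple width-to-class mapping. The sketch below is a language-agnostic illustration of that rule (compact below 600dp, medium from 600dp up to 840dp, expanded at 840dp and above); in an actual Android app you would use the opinionated `WindowSizeClass` APIs from androidx.window or Compose rather than hand-rolling this.

```python
# Illustration of the width-based window size class breakpoints; on Android,
# prefer the androidx.window WindowSizeClass APIs over a hand-rolled mapping.

def width_size_class(width_dp: float) -> str:
    """Map a window width in dp to its size class."""
    if width_dp < 600:
        return "compact"   # most phones in portrait
    if width_dp < 840:
        return "medium"    # foldables and small tablets
    return "expanded"      # large tablets, unfolded large foldables
```

Driving layout decisions from the current window width, rather than from the physical device type, is what lets the same code adapt when a foldable opens or the app enters multi-window mode.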

You can learn more about how (and why) to implement the recommendations above in this detailed blog and, to find best practices for updating your app, check out the Support different screen sizes page.

2. Design with multi-device experiences in mind

With new devices, big and small, it is important to think through the user experience you hope to accomplish. A large part of that is the UI and design of your app, with specific considerations to account for based on screen sizes and types.

Ensuring your app looks great on large screens is a critical part of your users’ experience. Material You supports beautiful, efficient tablet and foldable experiences – and, at Google I/O this year, the team dove into the latest updates to large screen guidelines for designers and developers. You can also get inspired by the latest design guidance and mockups in the Large Screens Gallery.

To help with the challenges of designing and building great watch experiences that work for all, we created a blog and an accompanying series of videos to get you started designing inclusive smartwatch apps. For even more information on beautiful smartwatch design, discover the new Wear OS Gallery, where you can find general design tips, verticalized use cases, and implementation ideas.

3. Get ready for Wear OS 4

The next generation of Wear OS is here! The Galaxy Watch6 series comes with the newest version of Google’s smartwatch platform, Wear OS 4. This platform update is also coming soon to other Samsung Galaxy watches, including the Watch4 and Watch5.

Wear OS 4 is based on Android 13, which is several versions newer than the current Wear OS version, so your app will need to handle the system behavior changes that took effect in Android 12 and Android 13. We recommend you start by testing your app and releasing a compatible update first; as devices get upgraded to Wear OS 4, this is a basic but critical level of quality that provides a good app experience for users.

Download the Wear OS 4 emulator in Android Studio Hedgehog to explore new features and test your app on Wear OS 4 Developer Preview.

The release of Wear OS 4 comes with many exciting changes – including a new way to build watchfaces.

The new Watch Face Format is a declarative XML format that allows you to configure the appearance and behavior of watch faces. This means that there's no executable code involved in creating a watch face, and there's no code embedded in your watch face APK. The Wear OS platform takes care of the logic needed to render the watch face so you can focus on your creative ideas, rather than code optimizations or battery performance.

Get started with watch faces using our documentation or create your own watch face with Samsung’s Watch Face Studio design tool.

Get started building a multi-device experience today!

With all the amazing additions to the Android ecosystem coming from Galaxy Unpacked, there has never been a better time to be sure your app looks great on all the devices your users know and love - from tablets to foldables to watches.

Learn more about building multi-device experiences from Deezer, where they increased their monthly active users 4X after improving multi-device support. Then get started with Jetpack WindowManager to help you build a responsive app for large screens by checking out the documentation and sample app. Finally, get to know Wear OS 4 and try it out with your app!

Meet the students using Google technologies to address the UN’s sustainability goals around the globe

Posted by Rachel Francois, Global Program Manager, Google Developer Student Clubs


Every year, university students who are members of Google Developer Student Clubs around the world are invited to create innovative solutions for real-world problems as part of the Solution Challenge. Participating students use Google products and platforms like Android, Firebase, TensorFlow, Google Cloud, and Flutter to build solutions for one or more of the United Nations’ 17 Sustainable Development Goals, which promote employment for all, economic growth, and climate action, to name a few. Agreed upon in 2015 by all 193 United Nations Member States, the goals aim to end poverty, ensure prosperity, and protect the planet by 2030.

On Demo Day, August 3, live on YouTube, the final 10 teams of the 2023 Solution Challenge will present their solutions to a panel of Google judges and a global audience of developers. These top 10 finalists were selected from among the top 100 teams globally. During the live event, judges will review team projects, ask questions, and choose the top 3 grand prize winners!

Want to be part of this awesome event? RSVP here to tune into Demo Day, vote for the People’s Choice Award, and watch the action as it unfolds in real time.

In the meantime, learn more about our top 10 finalists and their amazing solutions.

The Top 10 Projects


Buzzbusters, Universidad Mayor de San Andres in Bolivia 🇧🇴

UN Sustainable Goals Addressed: Goal 3: Good Health & Wellbeing, Goal 9: Industry, Innovation, & Infrastructure, Goal 11: Sustainable Cities, Goal 17: Partnerships

Buzzbusters is an early warning system designed to prevent epidemics of mosquito-borne diseases, like dengue, Zika, chikungunya, and yellow fever, by using Google Cloud monitoring technologies like Vertex AI, TensorFlow, Firebase, Flutter, Google Cloud Storage, Google Maps, and Google Colab.

Creators: Sergio Mauricio Nuñez, Saleth Jhoselin Mamani Huanca, Moises David Cisneros Laura, and Wendy Nayely Huayhua López


FarmX, Obafemi Awolowo University in Nigeria 🇳🇬

UN Sustainable Goals Addressed: Goal 2: Zero Hunger, Goal 12: Responsible Consumption & Production, Goal 13: Climate Action

FarmX is an app that empowers farmers to decide which crops to plant, how to implement precision agriculture, and how to detect crop diseases, using TensorFlow, Flutter, Firebase, and Google Cloud.

Creators: Victor Olufemi, Oluwaseun Salako, Lekan Adesina, and Festus Idowu


Femunity, Vellore Institute of Technology in India 🇮🇳

UN Sustainable Goals Addressed: Goal 3: Good Health & Wellbeing, Goal 4: Quality Education, Goal 5: Gender Equality and Women’s Empowerment, Goal 10: Reduced Inequalities

Femunity is an innovative social media platform that empowers women by providing a safe and inclusive online space, using Flutter and Firebase.

Creators: Amritansh Sharma and Arin Yadav


HeadHome, Nanyang Technological University in Singapore 🇸🇬

UN Sustainable Goals Addressed: Goal 3: Good Health & Wellbeing, Goal 11: Sustainable Cities

HeadHome is an app focused on tackling wandering by dementia patients, who can receive instructions from a dedicated watch or receive assistance from caregivers and volunteers. HeadHome is built on Google Cloud, using Cloud Run, Google Maps, and Firebase.

Creators: Chang Dao Zheng, Chay Hui Xiang, Ong Jing Xuan, and Marc Chern Di Yong


HearSitter, Yonsei University Seoul Campus in South Korea 🇰🇷

UN Sustainable Goals Addressed: Goal 3: Good Health & Wellbeing

HearSitter is a mobile app that helps deaf parents with young children be aware of their children's needs, alerting parents to a baby’s cry or sudden noises. HearSitter was built using Flutter, Go Lang, Fiber, and AngularJS.

Creators: DongJae Kim, Juii Kim, HyoJeong Park, and YoungMin Jin


Project REMORA, University of Southampton in United Kingdom 🇬🇧

UN Sustainable Goals Addressed: Goal 3: Good Health & Wellbeing, Goal 6: Clean Water & Sanitation

Project Remora is a smart water pollution tracking device that uses sensors to identify sources of water pollution, providing geo-tagged results that allow users to identify pollution sources using the concentration gradient. Project Remora was developed in the MIT App Inventor using Firebase, Realtime Database, and the Google Maps API.

Creators: Tong En Lim, Shao Qian Choong, Isaac Lim Rudd, and Aiman Haziq Bin Hairel Anuar


ReVita, Nazarbayev University in Kazakhstan 🇰🇿

UN Sustainable Goals Addressed: Goal 3: Good Health & Wellbeing, Goal 9: Industry, Innovation, & Infrastructure

ReVita is a mobile app that addresses the mental and emotional challenges faced by organ transplant recipients, as well as the physical challenges of recovering from surgery. The ReVita app is built on GoLang, Flutter, Firebase, Google Fit, Google Maps API, Google Chat, Google Meet API, and Google Calendar API.

Creators: Dias Baimukhanov, Madiyar Moldabayev, Dinmukhamed Nuran, and Ansar Serikbayev


SlugLoop, University of California, Santa Cruz in United States 🇺🇸

UN Sustainable Goals Addressed: Goal 4: Quality Education, Goal 11: Sustainable Cities, Goal 13: Climate Action

SlugLoop is a real-time bus tracking app that provides accurate route information for buses at the University of California Santa Cruz, allowing students to get to class on time, while reducing their carbon footprint. The SlugLoop app is built with React, Firebase, and Google Maps.

Creators: Bill Zhang, Alex Liu, Annie Liu, and Nicholas Szwed


Wonder, Korea University Seoul Campus in South Korea 🇰🇷

UN Sustainable Goals Addressed: Goal 3: Good Health & Wellbeing

Wonder partners with local volunteer organizations to provide opportunities for users to engage in walking-based activities that contribute to their communities, like walking dogs for shelters or delivering meals to isolated seniors. Wonder is built with Flutter and utilizes TensorFlow, Google Maps, and Google Cloud.

Creators: Chanho Park, Keo Kim, Boyoung Kim, and Sukyung Baek


Wonder Reader, Binus University International in Indonesia 🇮🇩

UN Sustainable Goals Addressed: Goal 4: Quality Education, Goal 10: Reduced Inequalities

Wonder Reader is a 3D printed digital braille reader that helps visually impaired students learn by connecting wirelessly to a smartphone, allowing teachers to send questions to the device through Bluetooth and students to reply using the built-in braille keyboard. Wonder Reader was built using Google Cloud, Firebase, Flutter, and Google Text to Speech API.

Creators: Philipus Adriel Tandra, Aric Hernando, Jason Jeremy Wijadi, and Jason Christian Hailianto

Special thanks to our Google mentors and Google Developer Experts for supporting the students as they developed their fascinating projects.

Feeling inspired and ready to learn more about Google Developer Student Clubs? Find a club near you, and be sure to RSVP and tune in to the upcoming Solution Challenge Demo Day livestream on August 3 at 10:00am ET.