Tag Archives: Announcements

Cloud photos now available in the Android photo picker

Posted by Roxanna Aliabadi Walker – Product Manager

Available now with Google Photos

Our photo picker has always been the gateway to your local media library, providing a secure, date-sorted interface for users to grant apps access to selected images and videos. But now, we're taking it a step further by integrating cloud photos from your chosen cloud media app directly into the photo picker experience.

Moving image of the photo picker access

Unifying your media library

Backed-up photos, also known as "cloud photos," will now be merged with your local ones in the photo picker, eliminating the need to switch between apps. Additionally, any albums you've created in your cloud storage app will be readily accessible within the photo picker's albums tab. If your cloud media provider has a concept of “favorites,” those photos will be showcased prominently within the albums tab of the photo picker for easy access. This feature is currently rolling out with the February Google System Update to devices running Android 12 and above.

Available now with Google Photos, but open to all

Google Photos already supports this new feature, and our APIs are open to any cloud media app that qualifies for our pilot program. Our goal is to make accessing your lifetime of memories effortless, regardless of the app you prefer.

The Android photo picker will attempt to auto-select a cloud media app for you, but you can change or remove your selected cloud media app at any time from photo picker settings.

Image of Cloud media settings in photo picker settings

Migrate today for an enhanced, frictionless experience

The Android photo picker substantially reduces friction by not requiring any runtime permissions. If you switch from a custom photo picker to the Android photo picker, you can offer this enhanced experience with cloud photos to your users, and reduce or entirely eliminate the overhead involved with acquiring and managing access to photos on the device. (Note that apps without a need for persistent or broad access to photos, for example apps that only need to set a profile picture, must adopt the Android photo picker in lieu of any sensitive file permissions to adhere to Google Play policy.)

The photo picker has been backported to Android 4.4, so you can migrate without needing to worry about device compatibility. Access to cloud content is only available to users running Android 12 and higher, but developers do not need to account for this when implementing the photo picker in their apps. To use the photo picker in your app, update the androidx.activity dependency to version 1.7.x or above and add the following code snippet:

// Registers a photo picker activity launcher in single-select mode.
val pickMedia = registerForActivityResult(PickVisualMedia()) { uri ->
    // Callback is invoked after the user selects a media item or closes the
    // photo picker.
    if (uri != null) {
        Log.d("PhotoPicker", "Selected URI: $uri")
    } else {
        Log.d("PhotoPicker", "No media selected")
    }
}


// Launch the photo picker and let the user choose images and videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageAndVideo))

// Launch the photo picker and let the user choose only images.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageOnly))

// Launch the photo picker and let the user choose only videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.VideoOnly))

More customization options are listed in our developer documentation.
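One such option is multi-select. The following sketch uses the same Activity Result API family as the snippet above; the item limit of 5 is illustrative, so check the documentation for the exact defaults and limits on your target devices:

```kotlin
// Registers a photo picker activity launcher in multi-select mode.
// In this example, the app lets the user select up to 5 media files.
val pickMultipleMedia =
    registerForActivityResult(PickMultipleVisualMedia(5)) { uris ->
        // Callback is invoked after the user selects media items or closes the
        // photo picker.
        if (uris.isNotEmpty()) {
            Log.d("PhotoPicker", "Number of items selected: ${uris.size}")
        } else {
            Log.d("PhotoPicker", "No media selected")
        }
    }

// Launch the photo picker and let the user choose only images.
pickMultipleMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageOnly))
```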

How We Made the CES 2024 AR Experience: Android Virtual Guide, powered by Geospatial Creator

Posted by Kira Rich – Senior Product Marketing Manager, AR and Bradford Lee – Product Marketing Manager, AR

Navigating a large-scale convention like CES can be overwhelming. To enhance the attendee experience, we created a 360°, event-scale augmented reality (AR) experience at our Google booth. Our friendly Android Bot served as a digital guide, providing:

  • Seamless wayfinding within our booth, letting you know about the must-try demos
  • Delightful content, only possible with AR, like replacing the Las Vegas Convention Center facade with our Generative AI Wallpapers or designing an interactive version of Android on Sphere for those who missed it in real life
  • Helpful navigation tips and quick directions to transportation hubs (Monorail, shuttle buses)

In partnership with Left Field Labs and Adobe, we used Google’s latest AR technologies to inspire developers, creators, and brands on how to elevate the conference experience for attendees. Here’s a behind the scenes look at how we used Geospatial Creator, powered by ARCore and Photorealistic 3D Tiles from Google Maps Platform, to promote the power and usefulness of Google on Android.

Using Google’s Geospatial Creator, we helped attendees navigate CES with the Android Bot as a virtual guide, providing helpful and delightful immersive tips on what to experience in our Google Booth.


Tools We Used


Geospatial Creator in Adobe Aero Pre-Release

Geospatial Creator in Adobe Aero enables creators and developers to easily visualize where in the real world they want to place their digital content, similar to how Google Earth visualizes the world. With Geospatial Creator, we were able to bring up the Las Vegas Convention Center in Photorealistic 3D Tiles from Google Maps Platform and understand the surroundings of where the Google Booth would be placed. In this case, the booth did not exist in the Photorealistic 3D Tiles because it was a temporary build for the conference. However, using the 3D model of the booth and the coordinates of where it would be built, we were able to easily estimate and visualize the booth inside Adobe Aero and build the experience around it seamlessly, including anchoring points for the digital content and the best attendee viewing points for the experience.


"At CES 2024, the Android AR experience, created in partnership with the Google AR, Android, and Adobe teams, brought smiles and excitement to attendees - ultimately that's what it's all about. The experience not only showcased the amazing potential of app-less AR with Geospatial Creator, but also demonstrated its practical applications in enhancing event navigation and engagement, all accessible with a simple QR scan." 
– Yann Caloghiris, Executive Creative Director at Left Field Labs
Adobe Aero provided us with an easy way to visualize and anchor the AR experience around the 3D model of the Google Booth at the Las Vegas Convention Center.

With Geospatial Creator, we had multiple advantages for designing the experience:

  • Rapid iteration with live previews of 3D assets and high fidelity visualization of the location with Photorealistic 3D Tiles from Google Maps Platform were crucial for building a location-based, AR experience without having to be there physically. 
  • Easy selection of the Las Vegas Convention Center and robust previews of the environment, as you would navigate in Google Earth, helped us visualize and develop the AR experience with precision and alignment to the real world location.

In addition, Google Street View imagery generated a panoramic skybox, which helped visualize the sight lines in Cinema 4D for storyboards. We also imported this and Photorealistic 3D Tiles from Google Maps Platform into Unreal Engine to visualize occlusion models at real world scale.

In Adobe Aero, we did the final assembly of all 3D assets and created all interactive behaviors in the experience. We also used it for animating simpler navigational elements, like the info panel assets in the booth.

AR development was primarily done with Geospatial Creator in Adobe Aero. Supplementary tools, including Unreal Engine and Autodesk Maya, were used to bring the experience to life.

Adobe Aero also supports Google Play Instant apps and App Clips¹, which means attendees did not have to download an app to access the experience. They simply scanned a QR code at the booth and launched directly into the experience, which proved to be ideal for onboarding users and reducing friction, especially at a busy event like CES.

Unreal Engine was used to bring in the Photorealistic 3D Tiles, allowing the team to build a 3D animated Android Bot that interacted closely with the surrounding environment. This approach was crucial for previewing the experience, allowing us to understand sight lines and where to best locate content for optimal viewing from the Google booth.

Autodesk Maya was used to create the Android Bot character, environmental masks, and additional 3D props for the different scenes in the experience. It was also used for authoring the final materials.

Babylon exporter was used for exporting from Autodesk Maya to glTF format for importing into Adobe Aero.

Figma was used for designing flat user interface elements that could be easily imported into Adobe Aero.

Cinema 4D was used for additional visualization and promotional shots, which helped with stakeholder alignment during the development of the experience.



Designing the experience

During the design phase, we envisioned the AR experience to have multiple interactions, so attendees could experience the delight of seeing precise and robust AR elements blended into the real world around them. In addition, they could experience the helpfulness of contextual information embedded into the real objects around them, providing the right information at the right time.

To make the AR experience more engaging for attendees, we created several possibilities for people to interact with their environment (click to enlarge).

Creative storyboarding

Creating an effective storyboard for a Geospatial AR experience using Adobe Aero begins with a clear vision of how the digital overlays interact with the real-world locations.

Left Field Labs started by mapping out key geographical points at the Las Vegas Convention Center location where the Google booth was going to stand, integrating physical and digital elements along the way. Each scene sketched in the storyboard illustrated how virtual objects and real-world environments would interplay, ensuring that user interactions and movements felt natural and intuitive.


“Being able to pin content to a location that’s mapped by Google and use Photorealistic 3D Tiles in Google’s Geospatial Creator provided incredible freedom when choosing how the experience would move around the environment. It gave us the flexibility to create the best flow possible.” 
– Chris Wnuk, Technical Director at Left Field Labs

Early on in the storyboarding process, we decided that the virtual 3D Android Bot would act as the guide. Users could follow the Bot around the venue by turning around in 360°, but staying at the same vantage point. This allowed us to design the interactive experience and each element in it for the right perspective from where the user would be standing, and give them a full look around the Google Booth and surrounding Google experiences, like the Monorail or Sphere.

The storyboard not only depicted the AR elements but also considered user pathways, sightlines, and environmental factors like time of day, occlusion, and overall layout of the AR content around the Booth and surrounding environment.

We aimed to connect the attendees with engaging, helpful, and delightful content, helping them visually navigate Google Booth at CES.

User experience and interactivity

When designing for AR, we have learned that user interactivity and ensuring that the experience has both helpful and delightful elements are key. Across the experience, we added multiple interactions that allowed users to explore different demo stations in the Booth, get navigation via Google Maps for the Monorail and shuttles, and interact with the Android Bot directly.

The Android brand team and Left Field Labs created the Android character to be both simple and expressive, showing playfulness and contextual understanding of the environment to delight users while managing the strain on users’ devices. Taking an agile approach, the team iterated on a wide range of both Android and iOS mobile devices to ensure smooth performance across different smartphones, form factors such as foldables, as well as operating system versions, making the AR experience accessible and enjoyable to the widest audience.

With Geospatial Creator in Adobe Aero, we were able to ensure that 3D content would be accurate to specific locations throughout the development process.

Testing the experience

We consistently iterated on the interactive elements based on location testing. We performed two location tests: First, in the middle of the design phase, which helped us validate the performance of the Visual Positioning Service (VPS) at the Las Vegas Convention Center. Second, at the end of the design phase and a few days before CES, which further validated the placement of the 3D content and enabled us to refine any final adjustments once the Google booth structure was built on site.


“It was really nice to never worry about deploying. The tracking on physical objects and quickness of localization was some of the best I’ve seen!” 
– Devin Thompson, Associate Technical Director at Left Field Labs

Attendee Experience

When attendees came to the Google Booth, they saw a sign with the QR code to enter the AR experience. We positioned the sign at the best vantage point at the booth, ensuring that people had enough space around them to scan with their device and engage in the AR experience.

By scanning a QR code, attendees entered directly into the experience and saw the virtual Android Bot pop up behind the Las Vegas Convention Center, guiding them through the full AR experience.

Attendees enjoyed seeing the Android Bot take over the Las Vegas Convention Center. Upon initializing the AR experience, the Bot revealed a Generative AI wallpaper scene right inside of a 3D view of the building, all while performing skateboarding tricks at the edge of the building’s facade.

With Geospatial Creator, it was possible for us to “replace” the facade of the Las Vegas Convention Center, revealing a playful scene where the Android Bot highlighted the depth and occlusion capabilities of the technology while showcasing a Generative AI Wallpaper demo.

Many people also called out the usefulness of seeing location-based, AR content with contextual information, like navigation through Google Maps, embedded into interesting locations around the Booth. Interactive panels overlaid around the Booth then introduced key physical demos located at each station around the Booth. Attendees could quickly scan the different themes and features demoed, orient themselves around the Booth, and decide which area they wanted to visit first.


“I loved the experience! Maps and AR make so much sense together. I found it super helpful seeing what demos are in each booth, right on top of the booth, as well as the links to navigation. I could see using this beyond CES as well!” 
– CES Attendee
The Android Bot helped attendees visually understand the different areas and demos at the Google Booth, helping them decide what they wanted to go see first.

Of the attendees we spoke to, over half engaged with the full experience. They were able to skip parts of the experience that felt less relevant to them and focus only on the interactions that added value. Overall, we learned that most people liked seeing a mix of delightful and helpful content, and they felt excited to explore the Booth further with other demos.

Many attendees engaged with the full AR experience to learn more about the Google Booth at CES.

Shahram Izadi, Google’s VP and GM, AR/XR, watching a demonstration of the full Geospatial AR experience at CES.

Location-based, AR experiences can transform event experiences for attendees who desire more ways to discover and engage with exhibitors at events. This trend underscores a broader shift in consumer expectations for a more immersive and interactive world around them and the blurring lines between online and offline experiences. At events like CES, AR content can offer a more immersive and personalized experience that not only entertains but also educates and connects attendees in meaningful ways.

To hear the latest updates about Google AR, Geospatial Creator, and more follow us on LinkedIn (@GoogleARVR) and X (@GoogleARVR). Plus, visit our ARCore and Geospatial Creator websites to learn how to get started building with Google’s AR technology.


¹ Available on select devices and may depend on regional availability and user settings.

Introducing Android emulators, iOS simulators, and other product updates from Project IDX

Posted by the IDX team

Six months ago, we launched Project IDX, an experimental, cloud-based workspace for full-stack, multiplatform software development. We built Project IDX to simplify and streamline the developer workflow, aiming to reduce the sea of complexities traditionally associated with app development. It certainly seems like we've piqued your interest, and we love seeing what IDX has helped you build.

For example, we recently learned about Tanaki, an AI-enhanced content creation app built using Project IDX:

Image of the content creation app Tanaki on a mobile device in the foreground, with code in Project IDX on a computer screen in the background.

Pasquale D’Silva, one of the developers who built Tanaki, said:

"Using the IDX shared workspace to build Tanaki has been so fun. It allows our remote team of imagineers to build together in one place. It is a magic collaboration portal!"

Developers at Google have also been using IDX internally to help speed up development across various projects. One example is the Firebase Blog, where the full authoring, development, and deployment of the Astro-powered project is handled using IDX:

Screen grab of The Firebase Blog on a computer

Another interesting project leveraging IDX’s extensibility model is Malloy, a new open-source data language available as a VS Code extension that operates against databases like BigQuery:

Screen grab of Malloy in Project IDX

Lloyd Tabb, a Distinguished Software Engineer at Google, told us:

“I use IDX with the Malloy project. I often have several different data projects going simultaneously and IDX lets me quickly spin up an instance to solve a problem and it is trivial to configure."

If you want to share what IDX has helped you build, use the #ProjectIDX tag on X.


What’s new in IDX?

In addition to seeing how you’re using IDX, a key part of building Project IDX is your feedback, so we’ve continued to roll out features for you to test. We're excited to share the latest updates we've implemented to expedite and streamline multiplatform app development, so you can deliver with speed, ease and quality.


Preview your app directly in IDX with our iOS simulator and Android emulator

We’re bringing the iOS Simulator and Android Emulator to the browser. Whether you’re building a Flutter or web app, Project IDX now allows you to preview your applications without having to leave your workspace. When you use a Flutter or web template, Project IDX intelligently loads the right preview environment for your application — Safari mobile and Chrome for web templates, or Android, iOS, and Chrome for Flutter templates.

Screen grab of an animation project in Project IDX

IDX’s web and Android emulators allow you to develop, test, and debug directly from your workspace, consolidating your multi-step, multiplatform process into one place. With iOS simulation you can spot-check your app's layout and behavior while you work. This feature is still experimental, so be sure to test it out and send us feedback.


Get started fast with a rich library of project templates

Four of our top ten feature requests have been to support more templates, so we’re pleased to share that we’ve added new templates for Astro, Go, Python/Flask, Qwik, Lit, Preact, Solid.js, and Node.js. Use these templates to jump right into your project so you can spend less time setting up and more time creating.

Check out our new and improved template gallery

Of course you can still import your own repo from GitHub, directly from your local files, or you can choose your own setup using a custom Nix environment.


Quickly build and customize your IDX workspace with improvements to Nix

.idx/dev.nix

IDX uses Nix to define the environment configuration for each workspace to give you flexibility and extensibility in IDX – even our templates and previews are configured using Nix to ensure they’re working correctly inside IDX. We’re continuously working on Nix improvements to help boost your productivity, so now you can:

  • Customize IDX starter templates easily by leveraging Nix extensibility.
  • Reduce the likelihood of errors and write code more efficiently with Nix file editing, including support for syntax highlighting, error detection, and suggested code completions.
  • Recover from broken configurations quickly and avoid unnecessary rebuild attempts with major improvements to our environment customization workflow, including seamless environment rebuilds and troubleshooting.

Easily build, test, and deploy apps with additional new IDX features and resources

image showing backend ports and workspace tasks in IDX
  • Auto-detect network ports needed for applications or services and adjust the firewall settings to permit ingress and egress without any additional configuration on your end.
  • Instantly run command-line tools, scripts, and utilities directly within your workspace without the need to install them locally on your machine.
  • Simplify the process of working with Docker containers and images directly from the development environment by enabling Docker in your dev.nix file.
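As an illustration, a trimmed `.idx/dev.nix` covering the last point might look like the sketch below. The package names are examples only; see the IDX documentation for the full configuration schema:

```nix
# .idx/dev.nix — illustrative sketch of a workspace configuration
{ pkgs, ... }: {
  # Tools made available in the workspace terminal.
  packages = [
    pkgs.nodejs_20
    pkgs.go
  ];
  # Enables Docker for container workflows, as described above.
  services.docker.enable = true;
}
```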

AI launched in 15 new regions


We’ve launched our AI capabilities in the following 15 countries: India, Australia, Israel, Brazil, Mexico, Colombia, Argentina, Peru, Chile, Singapore, Bangladesh, Pakistan, Canada, Japan, and South Korea. More countries will be enabled with AI access soon – indicate your interest for AI expansion in this feature tracking post and stay tuned for more AI updates.


Improving together

We're constantly working on adding new capabilities to help you do higher quality work, more efficiently, with less friction. We’ve addressed dozens of your feature requests and fixed a multitude of bugs you flagged for us, so thank you for your continued support and engagement – please keep the feedback coming by filing bugs and feature requests.

For walkthroughs and more information on all the features mentioned above, check out our documentation page. If you haven’t already, visit our website to sign up to try Project IDX and join us on our journey. Also, be sure to check out our new Project IDX Blog for the latest product announcements and updates from the team.

We can’t wait to see what you create with Project IDX!

What’s new in the Jetpack Compose January ’24 release

Posted by Ben Trengrove, Android Developer Relations Engineer

Today, as part of the Compose January ‘24 Bill of Materials, we’re releasing version 1.6 of Jetpack Compose, Android's modern, native UI toolkit that is used by apps such as Threads, Reddit, and Dropbox. This release largely focuses on performance improvements, as we continue to migrate modifiers and improve the efficiency of major parts of our API.

To use today’s release, upgrade your Compose BOM version to 2024.01.01

implementation platform('androidx.compose:compose-bom:2024.01.01')
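Because the BOM pins the versions of the individual Compose artifacts, you can then declare them without explicit version numbers. For example, a typical dependency block might look like this (the specific artifacts shown are illustrative):

```groovy
dependencies {
    // The BOM controls the versions of all Compose libraries below.
    implementation platform('androidx.compose:compose-bom:2024.01.01')

    // No version numbers needed on individual Compose artifacts.
    implementation 'androidx.compose.ui:ui'
    implementation 'androidx.compose.foundation:foundation'
    implementation 'androidx.compose.material3:material3'
}
```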

Performance

Performance continues to be our top priority, and this release of Compose has major performance improvements across the board. We are seeing an additional ~20% improvement in scroll performance and ~12% improvement to startup time in our benchmarks, and this is on top of the improvements from the August ‘23 release. As with that release, most apps will see these benefits just by upgrading to the latest version, with no other code changes needed.

The improvement to scroll performance and startup time comes from our continued focus on memory allocations and lazy initialization, to ensure the framework is only doing work when it has to. These improvements can be seen across all APIs in Compose, especially in text, clickable, Lazy lists, and graphics APIs, including vectors, and were made possible in part by the Modifier.Node refactor work that has been ongoing for multiple releases.

There is also new guidance for you to create your own custom modifiers with Modifier.Node.

Configuring the stability of external classes

Compose compiler 1.5.5 introduces a new compiler option to provide a configuration file for what your app considers stable. This option allows you to mark any class as stable, including your own modules, external library classes, and standard library classes, without having to modify these modules or wrap them in a stable wrapper class. Note that the standard stability contract applies; this is just another convenient method to let the Compose compiler know what your app should consider stable. For more information on how to use stability configuration, see our documentation.
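As a sketch, the configuration file is a plain-text list of class patterns, one per line, whose path is passed to the Compose compiler (at the time of writing, via the `stabilityConfigurationPath` compiler option). The class names below are illustrative; see the documentation for the exact syntax:

```
// compose_stability.conf (illustrative)
// Mark a single external class as stable.
com.example.externallib.DateTime
// Wildcards can cover a whole package.
kotlinx.collections.immutable.*
```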

Generated code performance

The code generated by the Compose compiler plugin has also been improved. Small tweaks in this code can lead to large performance improvements because the code is generated in every composable function. The Compose compiler tracks Compose state objects to know which composables to recompose when there is a change of value; however, many state values are only read once, and some state values are never read at all but still change frequently! This update allows the compiler to skip the tracking when it is not needed.

Compose compiler 1.5.6 also enables “intrinsic remember” by default. This mode transforms remember at compile time to take into account information we already have about any parameters of a composable that are used as a key to remember. This speeds up determining whether a remembered expression needs reevaluating, but it also means that if you place a breakpoint inside the remember function during debugging, it may no longer be hit, as the compiler has removed the usage of remember and replaced it with different code.

Composables not being skipped

We are also investing in making the code you write more performant, automatically. We want to optimize for the code you intuitively write, removing the need to dive deep into Compose internals to understand why your composable is recomposing when it shouldn’t.

This release of Compose adds support for an experimental mode we are calling “strong skipping mode”. Strong skipping mode relaxes some of the rules about which changes can skip recomposition, moving the balance towards what developers expect. With strong skipping mode enabled, composables with unstable parameters can also skip recomposition if the same instances of objects are passed in to its parameters. Additionally, strong skipping mode automatically remembers lambdas in composition that capture unstable values, in addition to the current default behavior of remembering lambdas with only stable captures. Strong skipping mode is currently experimental and disabled by default as we do not consider it ready for production usage yet. We are evaluating its effects before aiming to turn it on by default in Compose 1.7. See our guidance to experiment with strong skipping mode and help us find any issues.
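Since the mode is opt-in, enabling it is done through a Compose compiler option. A sketch of the Gradle configuration follows; the option name reflects the Compose compiler 1.5.x documentation at the time of writing and, being experimental, is subject to change:

```groovy
// build.gradle (app module) — enabling experimental strong skipping mode.
android {
    kotlinOptions {
        freeCompilerArgs += [
            "-P",
            "plugin:androidx.compose.compiler.plugins.kotlin:experimentalStrongSkipping=true"
        ]
    }
}
```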

Text

Changes to default font padding

This release now makes the includeFontPadding setting false by default. includeFontPadding is a legacy property that adds extra padding based on font metrics at the top of the first line and bottom of the last line of a text. Making this setting default to false brings the default text layout more in line with common design tools, making it easier to match the design specifications generated. Upon upgrading to the January ‘24 release, you may see small changes in your text layout and screenshot tests. For more information about this setting, see the Fixing Font Padding in Compose Text blog post and the developer documentation.

Line height with includeFontPadding as false on the left and true on the right.
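If you need time to update layouts or screenshot tests, the legacy behavior can be restored per-text through `PlatformTextStyle`. This is a sketch of that usage, not a recommendation to keep the legacy padding long term:

```kotlin
// Temporarily restores the legacy font padding for a single Text.
Text(
    text = "Hello",
    style = TextStyle(
        platformStyle = PlatformTextStyle(includeFontPadding = true)
    )
)
```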

Support for nonlinear font scaling

The January ‘24 release uses nonlinear font scaling for better text readability and accessibility. Nonlinear font scaling prevents large text elements on screen from scaling too large by applying a nonlinear scaling curve. This scaling strategy means that large text doesn't scale at the same rate as smaller text.
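To build intuition for what a nonlinear curve does, here is a toy sketch in plain Kotlin. This is NOT the real platform curve or API; it only demonstrates the idea that large base sizes receive a smaller effective multiplier than small ones:

```kotlin
// Toy illustration of nonlinear font scaling (not the actual platform curve).
fun scaledSp(baseSp: Float, userScale: Float): Float {
    if (userScale <= 1f) return baseSp * userScale // shrinking stays linear
    // Dampen the applied scale as the base size grows past 20sp.
    val damping = 20f / maxOf(baseSp, 20f)
    val effective = 1f + (userScale - 1f) * damping
    return baseSp * effective
}

fun main() {
    // At a 200% user setting, 12sp body text doubles to 24sp...
    println(scaledSp(12f, 2f))
    // ...but a 40sp headline only grows 1.5x, to 60sp.
    println(scaledSp(40f, 2f))
}
```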

Drag and drop

Compose Foundation adds support for platform-level drag and drop, which allows for content to be dragged between apps on a device running in multi-window mode. The API is 100% compatible with the View APIs, which means a drag and drop started from a View can be dragged into Compose and vice versa. To use this API, see the code sample.

Moving image illustrating drag and drop feature

Additional features

Other features landed in this release include:

    • Support for LookaheadScope in Lazy lists.
    • Composables that have been deactivated but kept alive for reuse in a Lazy list are now filtered from semantics trees by default.
    • Spline-based keyframes in animations.
    • Support for selection by mouse, including text selection.

Get started!

We’re grateful for all of the bug reports and feature requests submitted to our issue tracker — they help us to improve Compose and build the APIs you need. Continue providing your feedback, and help us make Compose better!

Wondering what’s next? Check out our roadmap to see the features we’re currently thinking about and working on. We can’t wait to see what you build next!

Happy composing!

Google Summer of Code 2024 Mentor Organization Applications Now Open

We are excited to announce that open source projects and organizations can now apply to participate as mentor organizations in the 2024 Google Summer of Code (GSoC) program. Applications for organizations will close on February 6, 2024 at 18:00 UTC.

We are celebrating a big milestone as we head into our 20th year of Google Summer of Code this year! In 2024 we are adding a third project size option which you can read more about in our announcement blog post.

Does your open source project want to learn more about becoming a mentor organization? Visit the program site and read the mentor guide to learn what it means to be a mentor organization and how to prepare your community (hint: have plenty of excited, dedicated mentors and well thought out project ideas!).

We welcome all types of organizations and are very eager to involve first-time mentor orgs in GSoC. We encourage new organizations to get a referral from experienced organizations that think they would be a good fit to participate in GSoC.

The open source projects that participate in GSoC as mentor organizations span many fields including those doing interesting work in AI/ML, security, cloud, development tools, science, medicine, data, media, and more! Projects can range from being relatively new (about 2 years old) to well established projects that started over 20 years ago. We welcome open source projects big, small, and everything in between.

This year we are looking to bring more open source projects in the AI/ML field into GSoC 2024. If your project is in the artificial intelligence or machine learning fields please chat with your community and see if you would be interested in applying to GSoC 2024.

One thing to remember is that open source projects wishing to apply need a solid community; the goal of GSoC is to bring new contributors into established and welcoming communities. While you don’t need 50+ community members, your project also can’t have as few as three people.

You can apply to be a mentor organization for GSoC starting today on the program site. The deadline to apply is February 6, 2024 at 18:00 UTC. We will publicly announce the organizations chosen for GSoC 2024 on February 21st.

Please visit the program site for more information on how to apply and review the detailed timeline for important deadlines. We also encourage you to check out the Mentor Guide, our ‘Intro to Google Summer of Code’ video, and our short video on why open source projects are excited to be a part of the GSoC program.

Good luck to all open source mentor organization applicants!

By Stephanie Taylor, Program Manager – Google Open Source Programs Office

A New Approach to Real-Money Games on Google Play

Posted by Karan Gambhir – Director, Global Trust and Safety Partnerships

As a platform, we strive to help developers responsibly build new businesses and reach wider audiences across a variety of content types and genres. In response to strong demand, in 2021 we began onboarding a wider range of real-money gaming (RMG) apps in markets with pre-existing licensing frameworks. Since then, this app category has continued to flourish with developers creating new RMG experiences for mobile.

To ensure Google Play keeps up with the pace of developer innovation, while promoting user safety, we’ve since conducted several pilot programs to determine how to support more RMG operators and game types. For example, many developers in India were eager to bring RMG apps to more Android users, so we launched a pilot program, starting with Rummy and Daily Fantasy Sports (DFS), to understand the best way to support their businesses.

Based on the learnings from the pilots and positive feedback from users and developers, Google Play will begin supporting more RMG apps this year, including game types and operators not covered by an existing licensing framework. In June, we’ll launch this expanded RMG support to developers for their users in India, Mexico, and Brazil, and plan to expand to users in more countries in the future.

We’re pleased that this new approach will provide new business opportunities to developers globally while continuing to prioritize user safety. It also enables developers currently participating in RMG pilots in India and Mexico to continue offering their apps on Play.

    • India pilot: For developers in the Google Play Pilot Program for distributing DFS and Rummy apps to users in India, we are extending the grace period for pilot apps to remain on Google Play until June 30, 2024, when the new policy will take effect. After that time, developers can distribute RMG apps on Google Play to users in India, beyond DFS and Rummy, in compliance with local laws and our updated policy.
    • Mexico pilot: For developers in the Google Play Pilot Program for DFS in Mexico, the pilot will end as scheduled on June 30, 2024, at which point developers can distribute RMG apps on Google Play to users in Mexico, beyond DFS, in compliance with local laws and our updated policy.

Google Play’s existing developer policies supporting user safety, such as requiring age-gating to limit RMG experiences to adults and requiring developers to use geo-gating to offer RMG apps only where legal, remain unchanged, and we’ll continue to strengthen them. In addition, Google Play will continue other key user safety and transparency efforts, such as our expanded developer verification mechanisms.

With this policy update, we will also be evolving our service fee model for RMG to reflect the value Google Play provides and to help sustain the Android and Play ecosystems. We are working closely with developers to ensure our new approach reflects the unique economics and various developer earning models of this industry. We will have more to share in the coming months on our new policy and future expansion plans.

For developers already involved in the real-money gaming space, or those looking to expand their involvement, we hope this helps you prepare for the upcoming policy change. As Google Play evolves our support of RMG around the world, we look forward to helping you continue to delight users, grow your businesses, and launch new game types in a safe way.

Solution Challenge 2024 – Using Google Technology to Address UN Sustainable Development Goals

Posted by Rachel Francois, Global Program Manager, Google Developer Student Clubs

Google Developer Student Clubs celebrates 5 years of innovative solutions built by university students


This year marks the 5-year anniversary of the Google Developer Student Clubs Solution Challenge! For the past five years, the Solution Challenge has invited university students to use Google technologies to develop solutions for real-world problems.

Since 2019:

  • 110+ countries have participated
  • 4,000+ projects have been submitted
  • 1,000+ chapters have participated

The project solutions address one or more of the United Nations’ 17 Sustainable Development Goals, which aim to end poverty, ensure prosperity, and protect the planet by 2030. The goals were agreed upon by all 193 United Nations Member States in 2015.

If you have an idea for how you could use Android, Firebase, TensorFlow, Google Cloud, Flutter, or another Google product to promote employment for all, economic growth, and climate action, enter the 2024 GDSC Solution Challenge and share your ideas!

Solution Challenge prizes

Check out the many great prizes you can win by participating:

  • Top 100 teams receive branded swag, a certificate, and personalized mentorship from Google and experts to help further their solution ideas.
  • Final 10 teams receive a swag box, additional mentorship, and the opportunity to showcase their project solutions to Google teams and developers worldwide during the virtual 2024 Solution Challenge Demo Day, live on YouTube. Each student on a final team also receives a $1,000 cash prize; winnings for each qualifying team will not exceed $4,000.
  • Winning 3 teams receive a swag box, and each individual receives a cash prize of $3,000 and a feature on the Google Developers Blog. Winnings for each qualifying team will not exceed $12,000.

Joining the Solution Challenge

To join the Solution Challenge and get started on your project:

Google resources for Solution Challenge participants

Google supports Solution Challenge participants with resources to build strong projects, including:

  • Create a demo video and submit your project by February 22, 2024
  • Live online Q&A sessions
  • Mentorship from Googlers, Google Developer Experts, and the Google Developer Student Club community
  • Curated codelabs designed by Google for Developers
  • Access to Design Sprint guidelines developed by Google Ventures

and so much more!

Winner announcement dates

Once all projects are submitted, our panel of judges will evaluate and score each submission using specific criteria. After that, winners will be announced in three rounds:

  • Round 1 (April): Top 100 teams will be announced.
  • Round 2 (May): Final 10 teams will be announced.
  • Round 3 (June): The Winning 3 grand prize teams will be announced live on YouTube during the 2024 Solution Challenge Demo Day.

We're looking forward to seeing the solutions you create when you combine your enthusiasm for building a better world, your coding skills, and help from Google technologies.

Learn more and sign up for the 2024 Solution Challenge here.

Thank you for creating excellent apps, across devices in 2023!

Posted by Anirudh Dewani, Director of Android Developer Relations

Hello Android Developers,

As we approach the end of 2023, I wanted to take a moment to reflect on all that we've accomplished together as a community, and send a huge *thank you* for all of your work!

It's been an incredible year for Android, with many new features and improvements released as part of the platform as well as many new delightful app experiences crafted and delivered by you, all for the benefit of our users across the world. Here are just a few of the highlights:

    • The release of the feature-packed and highly performant Android 14, our most ambitious release to date.
    • The incredible momentum on large screens and Wear OS, fueled by the hardware innovations of device makers and by the great app experiences you build for users.
    • The growth of Compose, from a mobile developer toolkit to Compose Everywhere, helping you build excellent apps for mobile, tablets, Wear OS, and TV.
    • The growth of the entire Android developer community around the world, and the millions of amazing apps you build for users!

I'm so proud of everything we've achieved together this year!

Your hard work and dedication continue to make Android the best mobile platform in the world, and I want to thank you for being a part of this community. Your contributions are invaluable, and I'm grateful for your continued support.

Thanks again for all that you do, and we can’t wait to see what you build next year!

Best,
Anirudh Dewani
Director, Android Developer Relations

Image of a “Thank You for building excellent apps across devices!” graphic with partner logos

Congratulations to the winners of Google’s Immersive Geospatial Challenge

Posted by Bradford Lee – Product Marketing Manager, Augmented Reality, and Ahsan Ashraf – Product Marketing Manager, Google Maps Platform

In September, we launched Google's Immersive Geospatial Challenge on Devpost where we invited developers and creators from all over the world to create an AR experience with Geospatial Creator or a virtual 3D immersive experience with Photorealistic 3D Tiles.

"We were impressed by the innovation and creativity of the projects submitted. Over 2,700 participants across 100+ countries joined to build something they were truly passionate about and to push the boundaries of what is possible. Congratulations to all the winners!" 

 Shahram Izadi, VP of AR at Google

We judged all submissions on five key criteria:

  • Functionality - How are the APIs used in the application?
  • Purpose - What problem is the application solving?
  • Content - How creative is the application?
  • User Experience - How easy is the application to use?
  • Technical Execution - How well are you showcasing Geospatial Creator and/or Photorealistic 3D Tiles?

Many of the entries are working prototypes, which our judges thoroughly enjoyed experiencing and interacting with. Thank you to everyone who participated in this hackathon.



From our outstanding list of submissions, here are the winners of Google’s Immersive Geospatial Challenge:


Category: Best of Entertainment and Events

Winner, AR Experience: World Ensemble

Description: World Ensemble is an audio-visual app that positions sound objects in 3D, creating an immersive audio-visual experience.


Winner, Virtual 3D Experience: Realistic Event Showcaser

Description: Realistic Event Showcaser is a fully configurable and immersive platform to customize your event experience and showcase its unique location stories and charm.


Winner, Virtual 3D Experience: navigAtoR

Description: navigAtoR is an augmented reality app that is changing the way you navigate through cities by providing a 3 dimensional map of your surroundings.



Category: Best of Commerce

Winner, AR Experience: love ya

Description: love ya showcases three user scenarios for a special time of year that connect local businesses with users.



Category: Best of Travel and Local Discovery

Winner, AR Experience: Sutro Baths AR Tour

Description: This guided tour takes users through the Sutro Baths historical landmark using an illuminated walking path, information panels with text and images, and a 3D rendering of how the Sutro Baths swimming pool complex would have appeared to visitors.


Winner, Virtual 3D Experience: Hyper Immersive Panorama

Description: Hyper Immersive Panorama uses real-time facial detection to allow the user to look left, right, up, or down in the virtual 3D environment.


Winner, Virtual 3D Experience: The World is Flooding!

Description: The World is Flooding! allows you to visualize a 3D, realistic flooding view of your neighborhood.


Category: Best of Productivity and Business

Winner, AR Experience: GeoViz

Description: GeoViz revolutionizes architectural design, allowing users to create, modify, and visualize architectural designs in their intended context. The platform facilitates real-time collaboration, letting multiple users contribute to designs and view them in AR on location.



Category: Best of Sustainability

Winner, AR Experience: Geospatial Solar

Description: Geospatial Solar combines the Google Geospatial API with the Google Solar API for instant analysis of a building's solar potential by simply tapping it.
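The core idea behind this entry — resolving a tapped building to geographic coordinates (via the ARCore Geospatial API) and feeding them to the Solar API — can be sketched as follows. This is only an illustration, not the winning project's code: the `buildingInsights:findClosest` endpoint is the Solar API's real lookup method, but the helper function, coordinates, and API key placeholder are our own, and obtaining the tapped location from the Geospatial API is assumed to happen in the AR app itself.

```python
from urllib.parse import urlencode

# The Solar API's lookup endpoint: given a point, it returns insights
# (roof area, sunshine hours, panel potential) for the nearest building.
SOLAR_API = "https://solar.googleapis.com/v1/buildingInsights:findClosest"

def building_insights_url(lat: float, lng: float, api_key: str) -> str:
    """Build the Solar API request URL for a tapped building.

    In the AR app, (lat, lng) would come from the ARCore Geospatial API
    after the user taps a building; here they are plain parameters.
    """
    params = {
        "location.latitude": f"{lat:.6f}",
        "location.longitude": f"{lng:.6f}",
        "key": api_key,
    }
    return f"{SOLAR_API}?{urlencode(params)}"

# Hypothetical tap near Google's Mountain View campus.
url = building_insights_url(37.4220, -122.0841, "YOUR_API_KEY")
print(url)
```

An HTTP GET on the resulting URL returns a JSON `BuildingInsights` payload that the app can render over the tapped building.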


Winner, Virtual 3D Experience: EarthLink - Geospatial Social Media

Description: EarthLink is the first geospatial social media platform that uses 3D photorealistic tiles to enable users to create and share immersive experiences with their friends.


Honorable Mentions

In addition, we have five projects that earned honorable mentions:

  1. Simmy
  2. FrameView
  3. City Hopper
  4. GEOMAZE - The Urban Quest
  5. Geospatial Route Check

Congratulations to the winners and thank you to all the participants! Check out all the amazing projects submitted. We can't wait to see you at the next hackathon.

Google Open Source Peer Bonus program announces second group of 2023 winners



We are excited to announce the second group of winners for the 2023 Google Open Source Peer Bonus Program! This program recognizes external open source contributors who have been nominated by Googlers for their exceptional contributions to open source projects.

The Google Open Source Peer Bonus Program is a key part of Google's ongoing commitment to open source software. By supporting the development and growth of open source projects, Google is fostering a more collaborative and innovative software ecosystem that benefits everyone.

This cycle's Open Source Peer Bonus Program received 163 nominations, and the winners come from 35 different countries, reflecting the program's global reach and the immense impact of open source software. Community collaboration is a key driver of innovation and progress, and we are honored to support and celebrate the contributions of these talented individuals from around the world through this program.

We would like to extend our congratulations to the winners! Included below are those who have agreed to be named publicly.

Winner – Open Source Project

  • Tim Dettmers – 8-bit CUDA functions for PyTorch
  • Odin Asbjørnsen – Accompanist
  • Lazarus Akelo – Android FHIR
  • Khyati Vyas – Android FHIR
  • Fikri Milano – Android FHIR
  • Veyndan Stuart – AndroidX
  • Alex Van Boxel – Apache Beam
  • Dezső Biczó – Apigee Edge Drupal module
  • Felix Yan – Arch Linux
  • Gerlof Langeveld – atop
  • Fabian Meumertzheim – Bazel
  • Keith Smiley – Bazel
  • Andre Brisco – Bazel Build Rules for Rust
  • Cecil Curry – beartype
  • Paul Marcombes – bigfunctions
  • Lucas Yuji Yoshimine – Camposer
  • Anita Ihuman – CHAOSS
  • Jesper van den Ende – Chrome DevTools
  • Aboobacker MK – CircuitVerse.org
  • Aaron Ballman – Clang
  • Alejandra González – Clippy
  • Catherine Flores – Clippy
  • Rajasekhar Kategaru – Compose Actors
  • Olivier Charrez – comprehensive-rust
  • John O'Reilly – Confetti
  • James DeFelice – container-storage-interface
  • Akihiro Suda – containerd, runc, OCI specs, Docker, Kubernetes
  • Neil Bowers – CPAN
  • Aleksandr Mikhalitsyn – CRIU
  • Daniel Stenberg – curl
  • Ryosuke TOKUAMI – Dataform
  • Salvatore Bonaccorso – Debian
  • Moritz Muehlenhoff – Debian
  • Sylvestre Ledru – Debian, LLVM
  • Andreas Deininger – Docsy
  • Róbert Fekete – Docsy
  • David Sherret – dprint
  • Justin Grant – ECMAScript Time Zone Canonicalization Proposal
  • Chris White – EditorConfig
  • Charles Schlosser – Eigen
  • Daniel Roe – Elk - Mastodon Client
  • Christopher Quadflieg – FakerJS
  • Ostap Taran – Firebase Apple SDK
  • Frederik Seiffert – Firebase C++ SDK
  • Juraj Čarnogurský – firebase-tools
  • Callum Moffat – Flutter
  • Anton Borries – Flutter
  • Tomasz Gucio – Flutter
  • Chinmoy Chakraborty – Flutter
  • Daniil Lipatkin – Flutter
  • Tobias Löfstrand – Flutter go_router package
  • Ole André Vadla Ravnås – Frida
  • Jaeyoon Choi – Fuchsia
  • Jeuk Kim – Fuchsia
  • Dongjin Kim – Fuchsia
  • Seokhwan Kim – Fuchsia
  • Marcel Böhme – FuzzBench
  • Md Awsafur Rahman – GCViT-tf, TransUNet-tf, Kaggle
  • Qiusheng Wu – GEEMap
  • Karsten Ohme – GlobalPlatform
  • Sacha Chua – GNU Emacs
  • Austen Novis – Goblet
  • Tiago Temporin – Golang
  • Josh van Leeuwen – Google Certificate Authority Service Issuer for cert-manager
  • Dustin Walker – google-cloud-go
  • Parth Patel – GUAC
  • Kevin Conner – GUAC
  • Dejan Bosanac – GUAC
  • Jendrik Johannes – Guava
  • Chao Sun – Hive, Spark
  • Sean Eddy – hmmer
  • Paulus Schoutsen – Home Assistant
  • Timo Lassmann – Kalign
  • Stephen Augustus – Kubernetes
  • Vyom Yadav – Kubernetes
  • Meha Bhalodiya – Kubernetes
  • Madhav Jivrajani – Kubernetes
  • Priyanka Saggu – Kubernetes
  • Daniel Finneran – kubeVIP
  • Junfeng Li – LanguageClient-neovim
  • Andrea Fioraldi – LibAFL
  • Dongjia Zhang – LibAFL
  • Addison Crump – LibAFL
  • Yuan Tong – libavif
  • Gustavo A. R. Silva – Linux kernel
  • Mathieu Desnoyers – Linux kernel
  • Nathan Chancellor – Linux kernel, LLVM
  • Gábor Horváth – LLVM / Clang
  • Martin Donath – Material for MkDocs
  • Jussi Pakkanen – Meson Build System
  • Amos Wenger – Mevi
  • Anders F Björklund – minikube
  • Maksim Levental – MLIR
  • Andrzej Warzynski – MLIR, IREE
  • Arnaud Ferraris – Mobian
  • Rui Ueyama – mold
  • Ryan Lahfa – nixpkgs
  • Simon Marquis – Now in Android
  • William Cheng – OpenAPI Generator
  • Kim O'Sullivan – OpenFIPS201
  • Yigakpoa Laura Ikpae – Oppia
  • Aanuoluwapo Adeoti – Oppia
  • Philippe Antoine – oss-fuzz
  • Tornike Kurdadze – Pinput
  • Andrey Sitnik – PostCSS (and others: Autoprefixer, Browserslist, Logux)
  • Marc Gravell – protobuf-net
  • Jean Abou Samra – Pygments
  • Qiming Sun – PySCF
  • Trey Hunner – Python
  • Will Constable – PyTorch/XLA
  • Jay Berkenbilt – qpdf
  • Ahmed El-Helw – Quran App for Android
  • Jan Gorecki – Reproducible benchmark of database-like ops
  • Ralf Jung – Rust
  • Frank Steffahn – Rust, ICU4X
  • Bhaarat Krishnan – Serverless Web APIs Workshop
  • Maximilian Keppeler – Sheets-Compose-Dialogs
  • Cory LaViska – Shoelace
  • Carlos Panato – Sigstore
  • Keith Zantow – spdx/tools-golang
  • Hayley Patton – Steel Bank Common Lisp
  • Qamar Safadi – Sunflower
  • Victor Julien – Suricata
  • Eyoel Defare – textfield_tags
  • Giedrius Statkevičius – Thanos
  • Michael Park – The Good Docs Project
  • Douglas Theobald – Theseus
  • David Blevins – TomEE
  • Anthony Fu – Vitest
  • Ryuta Mizuno – Volcago
  • Nicolò Ribaudo – WHATWG HTML Living Standard; ECMAScript Language Specification
  • Antoine Martin – xpra
  • Toru Komatsu – youki

We are incredibly proud of all of the nominees for their outstanding contributions to open source, and we look forward to seeing even more amazing contributions in the years to come. An additional thanks to Maria Tabak, who has helped lay the groundwork for and manage this program for the past 5 years!

By Mike Bufano, Google Open Source Peer Bonus Program Lead