Tag Archives: Explore

Google Cloud Next ’24 session library is now available

Posted by Max Saltonstall – Developer Relations Engineer

Google Cloud Next 2024 is coming soon, and our session library is live!

Next ‘24 covers a ton of ground, so choose your adventure. There's something on the menu for everyone, not just AI.

Developer-focused

Developers, this is your time. We've got a huge collection of edutainment in store for you at Next, including:

  • Thousands of Googlers on-site to connect and chat
  • Demos you can play with, try out, poke and see inside of (rather than just watching)
  • Talks from Champion Innovators about how they put cloud to use
  • Gathering spots for classes, interest groups, trainings and hanging out

This year we have more than double the number of advanced technical sessions, and recommendations for startups, small and medium businesses, and sustainability for all. Data scientists and data engineers can shard themselves out into 60+ big data sessions, including going to the cutting edge with BigQuery multi-modal data.


Artificial intelligence

If you want to build your own AI model, LLM, or chatbot, we've got sessions for that, covering ways to use Vertex AI to spin up your own large language models in the cloud, to search your multimedia library, and to maintain equity in the data you use for training.


Diversity, equity, and inclusion

Equity and inclusion go way past AI, and we’re really excited to have talks this year addressing allyship for your Muslim colleagues, growing inclusion in your org, and dialogues for change.

A cupped hand with a lock floating in a bed of clouds above it against a nebulous blue background. A faint ray of sunshine is shining through from the top left corner.

Security and data privacy

Don't forget security (really, who does?). Whether you are tackling security at the infrastructure, platform, machine or workload level, we've got sessions for you. Even if you're on multiple clouds, with multiple teams, you still need to get insight into the security and compliance of it all.

Speaking of all these fun chips, what about the salsa? We've got supply chain security with talks on SLSA and GUAC, plus numerous options for serverless workload security and ML data privacy.


Come join us

So, still on the fence?

Come for the magnificent shows in Vegas.

Come for the chance to sit down with expert developers and engineers.

Come for the amazing technical talks and tutorials.

Or just come for the spectacle. We've got it all at Google Cloud Next ‘24.

Check out sessions and secure your spot for three days of learning, community-building, and cloud tech with experts and peers at Mandalay Bay Convention Center in Las Vegas, April 9–11.

Wear OS hybrid interface: Boosting power and performance

Posted by Kseniia Shumelchyk, Android Developer Relations Engineer

In collaboration with our hardware partners, we’ve continued to prioritize the Wear OS by Google user experience. As such, we’ve made fundamental design changes to the platform and substantially expanded the capabilities of the Wear OS hybrid interface that improve two key areas: power and performance.

With OnePlus Watch 2, powered by the latest version of Wear OS (Wear OS 4), the dual-chipset architecture works with our hybrid interface to get both chips to work better in tandem. This enables even more use cases to benefit from a dramatically extended battery life of up to 100 hours of regular use, with all functionality accessible in Smart Mode.

Together, we’ve created a premium smartwatch experience that doesn’t compromise the advanced feature set or battery life. In this post, we’ll share how you can benefit from these changes when building experiences for Wear OS.

On the edge of innovation: redesigned smartwatch architecture

Wear OS smartwatches use a dual-chipset architecture: a powerful application processor (AP) capable of handling complex operations en masse, seamlessly coupled with an ultra low-power co-processor microcontroller unit (MCU).

The Wear OS hybrid interface enables intelligent switching between the MCU and the AP, allowing the AP to be suspended when not needed to preserve battery life. It helps, for instance, achieve more power-efficient experiences, like sensor data processing on the MCU while the AP is asleep. At the same time, the hybrid interface provides a seamless transition between these states, keeping a rich and premium user experience without jarring transitions between power modes.


Connectivity and notification experience

To enhance connectivity-reliant interactions like notifications and phone calls, OnePlus utilized platform capabilities with the notification API in the hybrid interface, enabling the MCU to process regular notification experiences and reduce the need to activate the AP.

For example, bridged notifications will be delivered to the watch without waking up the high-performance AP. Users can read and dismiss these notifications while the watch is still powered by the MCU. The MCU can also handle wearable-specific actions in notifications, such as quick replies or remote actions.

What this means for development

You can leverage existing Wear OS APIs to get these optimizations without any added effort – no code changes required!

Notifications

The notification hybrid interface works with the Wear OS notification stack to enable seamless transitions between power modes. You get the best notification performance by using the standard Notification API, as in the sketch below.
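For reference, bridged notifications use the same Notification APIs you already use on the phone; nothing Wear-specific is needed to benefit from the MCU optimizations. Here is a minimal Kotlin sketch of a notification with a quick-reply action (the channel ID, broadcast action, and icons are illustrative):

import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.core.app.RemoteInput

// Posts a notification with a quick-reply action. On watches with the hybrid
// interface, the bridged copy can be read, dismissed, and replied to while the
// low-power MCU is driving the display.
fun postBridgedNotification(context: Context) {
    val replyIntent = PendingIntent.getBroadcast(
        context,
        0,
        Intent("com.example.ACTION_REPLY"), // hypothetical reply receiver action
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_MUTABLE
    )
    val remoteInput = RemoteInput.Builder("key_text_reply")
        .setLabel("Reply")
        .build()
    val replyAction = NotificationCompat.Action.Builder(
        android.R.drawable.ic_menu_send, "Reply", replyIntent
    ).addRemoteInput(remoteInput).build()

    val notification = NotificationCompat.Builder(context, "messages") // channel assumed to exist
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle("New message")
        .setContentText("Hello from the phone")
        .addAction(replyAction)
        .build()

    // Requires the POST_NOTIFICATIONS permission on Android 13+.
    NotificationManagerCompat.from(context).notify(1001, notification)
}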

Health & Fitness experiences

The Wear OS hybrid interface also elevates the fitness experience with more precise workout tracking, automatic sports recognition and smarter health data monitoring. All of these can be offered to users without compromising battery life.

Starting with Wear OS 3, developers use Health Services on Wear OS to gain access to sensor data. The health hybrid interface works under the hood to enable power optimizations by batching sensor data on the MCU and periodically updating developer apps through the Health Services API on the AP.
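For context, apps read this batched data through the regular Health Services client; the MCU batching happens transparently underneath. Below is a minimal Kotlin sketch of passive heart-rate monitoring with the androidx.health.services.client library (the data type and service name are illustrative):

import android.content.Context
import androidx.health.services.client.HealthServices
import androidx.health.services.client.PassiveListenerService
import androidx.health.services.client.data.DataPointContainer
import androidx.health.services.client.data.DataType
import androidx.health.services.client.data.PassiveListenerConfig

// Receives heart rate samples that Health Services batched while the AP slept.
class HeartRateListenerService : PassiveListenerService() {
    override fun onNewDataPointsReceived(dataPoints: DataPointContainer) {
        val samples = dataPoints.getData(DataType.HEART_RATE_BPM)
        // Persist or process the batched samples here.
    }
}

fun registerPassiveHeartRate(context: Context) {
    val client = HealthServices.getClient(context).passiveMonitoringClient
    val config = PassiveListenerConfig.builder()
        .setDataTypes(setOf(DataType.HEART_RATE_BPM))
        .build()
    // Health Services delivers batched updates to the service above.
    client.setPassiveListenerServiceAsync(HeartRateListenerService::class.java, config)
}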

Watch Faces

With Wear OS 4, we launched the Watch Face Format, a declarative XML format to create customizable and power-efficient watch faces.

The platform now supports rendering the Watch Face Format on the MCU, so adopting the new format helps future-proof certain watch faces so they can take advantage of emerging battery optimizations on future devices.

Check out the watch face format documentation and design guidelines for Wear OS watch faces.

Expand your reach with Wear OS

With the additions to the Wear OS smartwatch ecosystem and expanded device capabilities, it's an ideal time to build experiences for smartwatches that can reach more users and benefit your business.

To begin developing apps for Wear OS, try our Compose for Wear OS codelab, and check out the documentation and samples.

Read more about developer updates in Wear OS 4, and how you can get your apps ready for the latest Wear OS watches.

We can’t wait to see what experiences you’ll build!

The First Developer Preview of Android 15

Posted by Dave Burke, VP of Engineering
Android 15 logo

We're releasing the first Developer Preview of Android 15 today so you, our developers, can collaborate with us to build a better Android.

Android 15 continues our work to build a platform that helps improve your productivity while giving you new capabilities to produce superior media experiences, minimize battery impact, maximize smooth app performance, and protect user privacy and security all on the most diverse lineup of devices out there.

Android enables your apps to take advantage of premium device hardware, including high-end camera capabilities, powerful GPUs, dazzling displays, and AI processing. The demand for large-screen devices, including tablets, foldables and flippables, continues to grow, offering an opportunity to reach high-value users. Also, Android is committed to providing tooling and libraries to help your apps take advantage of the latest advances in AI.

Your feedback on the Android 15 Developer Preview and QPR beta program plays a key role in helping Android continuously improve. The Android 15 developer site has more information about the preview, including downloads for Pixel and detailed documentation about changes. This preview is just the beginning, and we’ll have lots more to share as we move through the release cycle. Thank you in advance for your help in making Android a platform that works for everyone.

Protecting user privacy and security

Android is constantly working to create solutions that maximize user privacy and security.

Privacy Sandbox on Android

Android 15 brings Android AdServices up to extension level 10, incorporating the latest version of the Privacy Sandbox on Android, part of our work to develop new technologies that improve user privacy and enable effective, personalized advertising experiences for mobile apps. Our website has more about the Privacy Sandbox on Android developer preview and beta programs to help you get started.

Health Connect

Android 15 builds on the Android 14 extensions 10 updates to Health Connect by Android, a secure and centralized platform to manage and share app-collected health and fitness data. This update adds support for new data types across fitness, nutrition, and more.

File integrity

Android 15's FileIntegrityManager includes new APIs that tap into the power of the fs-verity feature in the Linux kernel. With fs-verity, files can be protected by custom cryptographic signatures, helping you ensure they haven't been tampered with or corrupted. This leads to enhanced security, protecting against potential malware or unauthorized file modifications that could compromise your app's functionality or data.
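As a rough sketch of how this could look in Kotlin, assuming the FileIntegrityManager additions described in the Android 15 preview documentation (API names may change before the final release, and the file is illustrative):

import android.content.Context
import android.security.FileIntegrityManager
import java.io.File

// Sketch: protect a downloaded file with fs-verity and read back its digest.
// setupFsVerity/getFsVerityDigest reflect the Android 15 preview APIs and are
// assumptions at this stage.
fun protectFile(context: Context, file: File) {
    val fim = context.getSystemService(FileIntegrityManager::class.java) ?: return
    fim.setupFsVerity(file)                  // enable fs-verity protection for the file
    val digest = fim.getFsVerityDigest(file) // cryptographic digest, or null if unprotected
    // Store or compare the digest against a trusted value to detect tampering.
}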

Partial screen sharing

Android 15 supports partial screen sharing so users can share or record just an app window rather than the entire device screen. This feature, enabled first in Android 14 QPR2, includes MediaProjection callbacks that allow your app to customize the partial screen sharing experience. Note that user consent is now required for each MediaProjection capture session.
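For orientation, these callbacks live on MediaProjection.Callback (introduced with Android 14); a minimal Kotlin sketch of registering them might look like this, assuming the projection itself comes from your existing capture flow:

import android.media.projection.MediaProjection
import android.os.Handler
import android.os.Looper

// Registers callbacks so the app can react when the user shares a single app
// window: the captured content may resize or become invisible independently
// of the full screen.
fun observePartialCapture(projection: MediaProjection) {
    val handler = Handler(Looper.getMainLooper())
    projection.registerCallback(object : MediaProjection.Callback() {
        override fun onCapturedContentResize(width: Int, height: Int) {
            // Resize the VirtualDisplay or encoder to match the shared window.
        }

        override fun onCapturedContentVisibilityChanged(isVisible: Boolean) {
            // Pause capture or show a placeholder while the shared window is hidden.
        }

        override fun onStop() {
            // The user revoked consent or the session ended; release resources.
        }
    }, handler)
}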

Supporting creators

Android continues its work to give you access to the tools and hardware you need to help creators bring their vision to life on Android.

In-app Camera Controls

Android 15 adds new extensions for more control over the camera hardware and its algorithms on supported devices.

Virtual MIDI 2.0 Devices

Android 13 added support for connecting to MIDI 2.0 devices via USB, which communicate using Universal MIDI Packets (UMP). Android 15 extends UMP support to virtual MIDI apps, enabling composition apps to control synthesizer apps as a virtual MIDI 2.0 device, just as they would with a USB MIDI 2.0 device.

Performance and quality

Android continues its focus on helping you improve the quality of your apps. Much of this focus is around tooling and libraries, including Jetpack Compose, Android Studio, and more.

Dynamic Performance

Android 15 continues our investment in the Android Dynamic Performance Framework (ADPF), a set of APIs that allow games and performance-intensive apps to interact more directly with the power and thermal systems of Android devices. On supported devices, Android 15 will add new ADPF capabilities:

    • A power-efficiency mode for hint sessions to indicate that their associated threads should prefer power saving over performance, great for long-running background workloads.
    • GPU and CPU work durations can both be reported in hint sessions, allowing the system to adjust CPU and GPU frequencies together to best meet workload demands.

To learn more about how to use ADPF in your apps and games, head over to the documentation.
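For a sense of the API shape, here is a minimal Kotlin sketch of an ADPF hint session built on PerformanceHintManager, which has been available since Android 12; the setPreferPowerEfficiency call reflects our reading of the new power-efficiency capability in the Android 15 preview and should be treated as an assumption:

import android.content.Context
import android.os.Build
import android.os.PerformanceHintManager
import android.os.Process
import android.os.SystemClock

// Sketch of an ADPF hint session for a long-running workload.
// The target and chunk of work are illustrative.
fun runWithHintSession(context: Context) {
    val hintManager = context.getSystemService(PerformanceHintManager::class.java) ?: return
    val targetNanos = 16_000_000L // e.g. one chunk of work per ~16 ms
    val session = hintManager.createHintSession(intArrayOf(Process.myTid()), targetNanos) ?: return

    if (Build.VERSION.SDK_INT >= 35) {
        // Assumed Android 15 preview API for the new power-efficiency mode;
        // the final name may differ.
        session.setPreferPowerEfficiency(true)
    }

    repeat(100) {
        val start = SystemClock.elapsedRealtimeNanos()
        doChunkOfWork() // placeholder for the app's real workload
        // Reporting actual durations lets the system tune CPU/GPU frequencies.
        session.reportActualWorkDuration(SystemClock.elapsedRealtimeNanos() - start)
    }
    session.close()
}

private fun doChunkOfWork() { /* ... */ }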

Developer Productivity

Android 15 continues to add OpenJDK APIs, including quality-of-life improvements around NIO buffers, streams, security, and more. These APIs are updated on over a billion devices running Android 12+ through Google Play System updates, so you can target the latest programming features.

App compatibility

Image of Android 15 Development timeline, indicating we are on time with Developer Previews in February

To give you more time to plan for app compatibility work, we’re letting you know our Platform Stability milestone well in advance.

At this milestone, we’ll deliver final SDK/NDK APIs and also final internal APIs and app-facing system behaviors. We’re expecting to reach Platform Stability in June 2024, and from that time you’ll have several months before the official release to do your final testing. The release timeline details are here.

Get started with Android 15

The Developer Preview has everything you need to try the Android 15 features, test your apps, and give us feedback. You can get started today by flashing a system image onto a Pixel 6, 7, or 8 series device, along with the Pixel Fold and Pixel Tablet. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio.

For the best development experience with Android 15, we recommend that you use the latest preview of Android Studio Jellyfish (or more recent Jellyfish+ versions). Once you’re set up, here are some of the things you should do:

    • Try the new features and APIs – your feedback is critical during the early part of the developer preview. Report issues in our tracker on the feedback page.
    • Test your current app for compatibility – learn whether your app is affected by changes in Android 15; install your app onto a device or emulator running Android 15 and extensively test it.

We’ll update the preview system images and SDK regularly throughout the Android 15 release cycle. This initial preview release is for developers only and not intended for daily or consumer use, so we're making it available by manual download only. Once you’ve manually installed a preview build, you’ll automatically get future updates over-the-air for all later previews and Betas. Read more here.

If you intend to move from the Android 14 QPR Beta program to the Android 15 Developer Preview program and don't want to have to wipe your device, we recommend that you move to Developer Preview 1 now. Otherwise, you may run into time periods where the Android 14 Beta has a more recent build date, which will prevent you from going directly to the Android 15 Developer Preview without doing a data wipe.

As we reach our Beta releases, we'll be inviting consumers to try Android 15 as well, and we'll open up enrollment for the Android Beta program at that time. For now, please note that the Android Beta program is not yet available for Android 15.

For complete information, visit the Android 15 developer site.


Java and OpenJDK are trademarks or registered trademarks of Oracle and/or its affiliates.

Build with Gemini models in Project IDX

Posted by Ali Satter – AI Lead, Roman Nurik – Design Lead

A few weeks ago, we announced a series of product updates to Project IDX to help streamline and simplify full-stack, multiplatform software development. This week, we’re excited to share how Project IDX uses Gemini models to provide you with AI features to further speed up and refine your end-to-end developer workflow.

Project IDX launched with support for AI-powered code completion, an assistive chatbot, and contextual code actions like "add comments" and “explain this code” to help you write high-quality code faster. Since launch, and thanks to your feedback, we’ve been working hard to add new AI functionality to help boost your productivity even more.


Work faster with inline AI assistance

You can now get inline AI assistance inside any file by pressing Cmd/Ctrl + I. Simply describe the changes you want to make to your code and IDX inline AI assistance will provide real-time error correction, code suggestions, and auto-completion in your code.

We integrated these AI enhancements directly into Project IDX’s centralized workspace to equip you with the necessary tools and resources for full-stack app development where and when you need them. From setting up your workspace to testing your app, IDX AI assistance helps accelerate and improve your workflow, ensuring that your end-to-end development experience is faster, easier, and higher quality.

For example, let’s say you want to add an authenticated API endpoint to your server. You can tell IDX AI to write the code necessary to enable secure task management using Firebase Authentication and Cloud Firestore. Given an input prompt, IDX AI assistance can write the code to construct the route, determine which APIs to use to verify the token, and save the data to the database. Instead of writing boilerplate code, you can focus on higher-level design and problem solving.

moving image illustrating the use of an input prompt in Project IDX to generate corresponding code
Input prompt for reference: Create a POST endpoint named /tasks. Get the ID Token from a cookie named _session. Verify this token with the Firebase Admin SDK. Use the UID property to assign the item to the user. Then save a task item with a server timestamp for createdAt to the Firestore database using the admin SDK.

Then, let's say you want to clean up your code a bit to improve its quality, readability, and maintainability. IDX AI assistance can help you quickly and easily refactor your code, so you can get right into optimizing your work without the hassle of manual refactoring.

moving image illustrating the use of input prompt: Refactor to use Node’s promise API.
Input prompt for reference: Refactor to use Node’s promise API.

And, as you wrap up your project, IDX AI can help you test and debug your code to make sure your application is running smoothly before deployment. Tell IDX AI assistance to write you a unit test for a function to ensure it’s working properly, saving you time and effort as you inspect the quality of your app.

moving image illustrating the use of input prompt: Create a unit test for this function
Input prompt for reference: Create a unit test for this function

Easily add AI features with the Gemini API template

We’re also simplifying the process of building with the Gemini API with Project IDX’s new Gemini API template. The Gemini API template uses the Gemini Pro model to embed AI-powered features into your applications without additional configuration on your end, so you can get started working with the Gemini API quickly and easily. There's even an option to use the Gemini API via the popular LangChain framework to simplify the process of building LLM-powered apps.

The Gemini API template is multimodal, meaning it can provide context-aware prompt output for a myriad of input modalities including images, text and, of course, code. This can help you add features like conversational interfaces, summarization of user reviews, translation, and automatic image caption creation.

To demonstrate its functionality, we pre-configured the Gemini API template with ‘Baking with the Gemini API’, a recipe builder application that, using the Gemini model’s multimodal capabilities, can reverse-engineer possible recipes for baked goods from just a picture.

moving image illustrating the use of an input prompt in Project IDX to generate corresponding code

But this recipe builder is just one example of the Gemini API template in action – with support for different input modalities and context-aware output generation, you can use IDX’s Gemini API template to create a myriad of innovative and impactful applications that deliver AI-enhanced experiences to your users.


Stay tuned for more AI updates

These updates are a continuation of our efforts to leverage Google’s AI innovations for Project IDX, so make sure to keep an eye out for more announcements to come, including the expansion of AI in IDX to more than 150 countries/regions in the coming weeks.

Thank you for your continued support and engagement – please keep the feedback coming by filing bugs and feature requests. For walkthroughs and more information on all the features mentioned above, check out our documentation. If you haven’t already, visit our website to sign up to try Project IDX and join us on our journey. Also, be sure to check out our new Project IDX Blog for the latest product announcements and updates from the team.

We can’t wait to see what you create with Project IDX!

Gemini 1.5: Our next-generation model, now available for Private Preview in Google AI Studio

Posted by Jaclyn Konzelmann and Wiktor Gworek – Google Labs

Last week, we released Gemini 1.0 Ultra in Gemini Advanced. You can try it out now by signing up for a Gemini Advanced subscription. The 1.0 Ultra model, accessible via the Gemini API, has seen a lot of interest and continues to roll out to select developers and partners in Google AI Studio.

Today, we’re also excited to introduce our next-generation Gemini 1.5 model, which uses a new Mixture-of-Experts (MoE) approach to improve efficiency. It routes your request to a group of smaller “expert” neural networks so responses are faster and higher quality.

Developers can sign up for our Private Preview of Gemini 1.5 Pro, our mid-sized multimodal model optimized for scaling across a wide range of tasks. The model features a new, experimental 1 million token context window, and will be available to try out in Google AI Studio. Google AI Studio is the fastest way to build with Gemini models and enables developers to easily integrate the Gemini API in their applications. It’s available in 38 languages across 180+ countries and territories.


1,000,000 tokens: Unlocking new use cases for developers

Before today, the largest context window in the world for a publicly available large language model was 200,000 tokens. We’ve been able to significantly increase this — running up to 1 million tokens consistently, achieving the longest context window of any large-scale foundation model. Gemini 1.5 Pro will come with a 128,000 token context window by default, but today’s Private Preview will have access to the experimental 1 million token context window.

We’re excited about the new possibilities that larger context windows enable. You can directly upload large PDFs, code repositories, or even lengthy videos as prompts in Google AI Studio. Gemini 1.5 Pro will then reason across modalities and output text.

  1. Upload multiple files and ask questions
     We’ve added the ability for developers to upload multiple files, like PDFs, and ask questions in Google AI Studio. The larger context window allows the model to take in more information — making the output more consistent, relevant and useful. With this 1 million token context window, we’ve been able to load in over 700,000 words of text in one go.

    moving image illustrating how Gemini 1.5 Pro can find and reason from particular quotes across the Apollo 11 PDF transcript.
    Gemini 1.5 Pro can find and reason from particular quotes across the Apollo 11 PDF transcript. 
    [Video sped up for demo purposes]

  2. Query an entire code repository
     The large context window also enables a deep analysis of an entire codebase, helping Gemini models grasp complex relationships, patterns, and understanding of code. A developer could upload a new codebase directly from their computer or via Google Drive, and use the model to onboard quickly and gain an understanding of the code.

    moving image illustrating how Gemini 1.5 Pro can help developers boost productivity when learning a new codebase.
    Gemini 1.5 Pro can help developers boost productivity when learning a new codebase.  
    [Video sped up for demo purposes]

  3. Add a full length video
     Gemini 1.5 Pro can also reason across up to 1 hour of video. When you attach a video, Google AI Studio breaks it down into thousands of frames (without audio), and then you can perform highly sophisticated reasoning and problem-solving tasks since the Gemini models are multimodal.

    moving image illustrating how Gemini 1.5 Pro can perform reasoning and problem-solving tasks across video and other visual inputs.
    Gemini 1.5 Pro can perform reasoning and problem-solving tasks across video and other visual inputs.  
    [Video sped up for demo purposes]

More ways for developers to build with Gemini models

In addition to bringing you the latest model innovations, we’re also making it easier for you to build with Gemini:

  • Easy tuning. Provide a set of examples, and you can customize Gemini for your specific needs in minutes from inside Google AI Studio. This feature rolls out in the next few days. 
  • New developer surfaces. Integrate the Gemini API to build new AI-powered features today with new Firebase Extensions, across your development workspace in Project IDX, or with our newly released Google AI Dart SDK
  • Lower pricing for Gemini 1.0 Pro. We’re also updating the 1.0 Pro model, which offers a good balance of cost and performance for many AI tasks. Today’s stable version is priced 50% less for text inputs and 25% less for outputs than previously announced. Pay-as-you-go plans for AI Studio are coming soon.

Since December, developers of all sizes have been building with Gemini models, and we’re excited to turn cutting edge research into early developer products in Google AI Studio. Expect some latency in this preview version due to the experimental nature of the large context window feature, but we’re excited to start a phased rollout as we continue to fine-tune the model and get your feedback. We hope you enjoy experimenting with it early on, like we have.

Calling all students: Learn how to become a Google Developer Student Club Lead

Posted by Rachel Francois, Global Program Manager, Google Developer Student Clubs

Does the idea of leading a student community at your university appeal to you? Are you enthusiastic about Google technologies or interested in learning more about them? Do you love planning tech-related events and new ways for your campus community to build skills? If so, consider leading a Google Developer Student Club!

What are Google Developer Student Clubs?

Google Developer Student Clubs (GDSC) are community groups for university students interested in learning and building with Google technologies. There are over 2,000 GDSC chapters in more than 100 countries around the world, where undergraduate and graduate students explore Artificial Intelligence, Machine Learning, Google Cloud, Android development, Flutter, and other innovative technologies together. GDSC chapters host in-person, project-based events, such as hackathons and the Solution Challenge, with guest speakers and technical experts provided by Google.

Apply to Lead a Google Developer Student Club

You can learn more about the 2024-2025 GDSC Lead application process here.

Leading a GDSC is a great opportunity to learn new programming skills, dive deep into Google technologies and create local impact, while also building your network.

Google Developer Student Club Leads hone their technical and leadership skills as they manage a campus-based community for peers. GDSC Leads:

  • Receive mentorship from Google
  • Join a global community of leaders
  • Train peers to use Google technologies in their developer journey
  • Use technology to find solutions for real-world challenges
Drashtant Chudasama, Lakehead University Google Developer Student Club lead

Meet Drashtant Chudasama, Lakehead University Google Developer Student Club lead. Drashtant hosted a 2-day DevFest On Campus event in Canada to help foster technology in his local area. The city's first DevFest included a handful of guest speakers and a hackathon. These are the types of things you will have the opportunity to do as a GDSC Lead.

If this sounds like your skill set or you’d like to explore a new leadership opportunity in technology, we encourage you to apply to become a GDSC Lead. You can check for application deadlines in your region here.


Google Developer Student Clubs Around the World

GDSC HITS lead, Amitasha Verma and her team

After a year’s hiatus, GDSC HITS lead Amitasha Verma and her team defied the odds to bring an interactive event to life. More than 80 students came together for a 3-hour "Unlocking the Power of Blockchain" event in India. This event demonstrated the unwavering spirit of students eager to explore the world of blockchain.

GDSC Fast National University in Islamabad

GDSC Fast National University in Islamabad collaborated with 15 other GDSC chapters to host the exciting "Techbuzz" competition, bringing together a diverse group of tech enthusiasts to showcase their skills through a variety of engaging activities. The event featured intense rapid-fire tech sessions that tested the participants' knowledge and quick thinking, while bringing a game-based learning platform to add an element of fun and excitement.


How to become a GDSC Lead

Learn more about the GDSC Lead role and criteria here. To get started click here.


Note: Google Developer Student Clubs are student-led independent organizations, and their presence does not indicate a relationship between Google and the students' universities.

Cloud photos now available in the Android photo picker

Posted by Roxanna Aliabadi Walker – Product Manager

Available now with Google Photos

Our photo picker has always been the gateway to your local media library, providing a secure, date-sorted interface for users to grant apps access to selected images and videos. But now, we're taking it a step further by integrating cloud photos from your chosen cloud media app directly into the photo picker experience.

Moving image of the photo picker access

Unifying your media library

Backed-up photos, also known as "cloud photos," will now be merged with your local ones in the photo picker, eliminating the need to switch between apps. Additionally, any albums you've created in your cloud storage app will be readily accessible within the photo picker's albums tab. If your cloud media provider has a concept of “favorites,” they will be showcased prominently within the albums tab of the photo picker for easy access. This feature is currently rolling out with the February Google System Update to devices running Android 12 and above.

Available now with Google Photos, but open to all

Google Photos is already supporting this new feature, and our APIs are open to any cloud media app that qualifies for our pilot program. Our goal is to make accessing your lifetime of memories effortless, regardless of the app you prefer.

The Android photo picker will attempt to auto-select a cloud media app for you, but you can change or remove your selected cloud media app at any time from photo picker settings.

Image of Cloud media settings in photo picker settings

Migrate today for an enhanced, frictionless experience

The Android photo picker substantially reduces friction by not requiring any runtime permissions. If you switch from using a custom photo picker to the Android photo picker, you can offer this enhanced experience with cloud photos to your users, as well as reduce or entirely eliminate the overhead involved with acquiring and managing access to photos on the device. (Note that apps without a need for persistent or broad-scale access to photos, for example to set a profile picture, must adopt the Android photo picker in lieu of any sensitive file permissions to adhere to Google Play policy.)

The photo picker has been backported to Android 4.4 to make it easy to migrate without needing to worry about device compatibility. Access to cloud content will only be available for users running Android 12 and higher, but developers do not need to consider this when implementing the photo picker into their apps. To use the photo picker in your app, update the androidx.activity dependency to version 1.7.0 or above and add the following code snippet:

import android.util.Log
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts.PickVisualMedia

// Inside a ComponentActivity or Fragment:
// Registers a photo picker activity launcher in single-select mode.
val pickMedia = registerForActivityResult(PickVisualMedia()) { uri ->
    // Callback is invoked after the user selects a media item or closes the
    // photo picker.
    if (uri != null) {
        Log.d("PhotoPicker", "Selected URI: $uri")
    } else {
        Log.d("PhotoPicker", "No media selected")
    }
}


// Launch the photo picker and let the user choose images and videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageAndVideo))

// Launch the photo picker and let the user choose only images.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.ImageOnly))

// Launch the photo picker and let the user choose only videos.
pickMedia.launch(PickVisualMediaRequest(PickVisualMedia.VideoOnly))

More customization options are listed in our developer documentation.

Introducing Android emulators, iOS simulators, and other product updates from Project IDX

Posted by the IDX team

Six months ago, we launched Project IDX, an experimental, cloud-based workspace for full-stack, multiplatform software development. We built Project IDX to simplify and streamline the developer workflow, aiming to reduce the sea of complexities traditionally associated with app development. It certainly seems like we've piqued your interest, and we love seeing what IDX has helped you build.

For example, we recently learned about Tanaki, an AI-enhanced content creation app built using Project IDX:

Image of content creation app Tanaki on a mobile device in the foreground, with coding in Project IDX on a computer screen in the background.

Pasquale D’Silva, one of the developers who built Tanaki, said:

"Using the IDX shared workspace to build Tanaki has been so fun. It allows our remote team of imagineers to build together in one place. It is a magic collaboration portal!"

Developers at Google have also been using IDX internally to help speed up development across various projects. One example is the Firebase Blog, where the full authoring, development, and deployment of the Astro-powered project is handled using IDX:

Screen grab of The Firebase Blog on a computer

Another interesting project leveraging IDX’s extensibility model is Malloy, a new open-source data language available as a VS Code extension that operates against databases like BigQuery:

Screen grab of Malloy in Project IDX

Lloyd Tabb, a Distinguished Software Engineer at Google, told us:

“I use IDX with the Malloy project. I often have several different data projects going simultaneously and IDX lets me quickly spin up an instance to solve a problem and it is trivial to configure."

If you want to share what IDX has helped you build, use the #ProjectIDX tag on X.


What’s new in IDX?

In addition to seeing how you’re using IDX, a key part of building Project IDX is your feedback, so we’ve continued to roll out features for you to test. We're excited to share the latest updates we've implemented to expedite and streamline multiplatform app development, so you can deliver with speed, ease and quality.


Preview your app directly in IDX with our iOS simulator and Android emulator

We’re bringing the iOS Simulator and Android Emulator to the browser. Whether you’re building a Flutter or web app, Project IDX now allows you to preview your applications without having to leave your workspace. When you use a Flutter or web template, Project IDX intelligently loads the right preview environment for your application — Safari mobile and Chrome for web templates, or Android, iOS, and Chrome for Flutter templates.

Screen grab of an animation project in Project IDX

IDX’s web and Android emulators allow you to develop, test, and debug directly from your workspace, consolidating your multi-step, multiplatform process into one place. With iOS simulation you can spot-check your app's layout and behavior while you work. This feature is still experimental, so be sure to test it out and send us feedback.


Get started fast with a rich library of project templates

Four of our top ten feature requests have been to support more templates, so we’re pleased to share that we’ve added new templates for Astro, Go, Python/Flask, Qwik, Lit, Preact, Solid.js, and Node.js. Use these templates to jump right into your project so you can spend less time setting up and more time creating.

Preview of template gallery in Project IDX
Check out our new and improved template gallery

Of course you can still import your own repo from GitHub, directly from your local files, or you can choose your own setup using a custom Nix environment.


Quickly build and customize your IDX workspace with improvements to Nix

.idx/dev.nix

IDX uses Nix to define the environment configuration for each workspace to give you flexibility and extensibility in IDX – even our templates and previews are configured using Nix to ensure they’re working correctly inside IDX. We’re continuously working on Nix improvements to help boost your productivity, so now you can:

  • Customize IDX starter templates easily by leveraging Nix extensibility.
  • Reduce the likelihood of errors and write code more efficiently with Nix file editing, including support for syntax highlighting, error detection, and suggested code completions.
  • Recover from broken configurations quickly and avoid unnecessary rebuild attempts with major improvements to our environment customization workflow, including seamless environment rebuilds and troubleshooting.

Easily build, test, and deploy apps with additional new IDX features and resources

image showing backend ports and workspace tasks in IDX
  • Auto-detect network ports needed for applications or services and adjust the firewall settings to permit ingress and egress without any additional configuration on your end.
  • Instantly run command-line tools, scripts, and utilities directly within workspace without the need to install them locally on your machine.
  • Simplify the process of working with Docker containers and images directly from the development environment by enabling Docker in your dev.nix file.

AI launched in 15 new regions


We’ve launched our AI capabilities in the following 15 countries: India, Australia, Israel, Brazil, Mexico, Colombia, Argentina, Peru, Chile, Singapore, Bangladesh, Pakistan, Canada, Japan, and South Korea. More countries will be enabled with AI access soon – indicate your interest for AI expansion in this feature tracking post and stay tuned for more AI updates.


Improving together

We're constantly working on adding new capabilities to help you do higher quality work, more efficiently, with less friction. We’ve addressed dozens of your feature requests and fixed a multitude of bugs you flagged for us, so thank you for your continued support and engagement – please keep the feedback coming by filing bugs and feature requests.

For walkthroughs and more information on all the features mentioned above, check out our documentation page. If you haven’t already, visit our website to sign up to try Project IDX and join us on our journey. Also, be sure to check out our new Project IDX Blog for the latest product announcements and updates from the team.

We can’t wait to see what you create with Project IDX!

A New Approach to Real-Money Games on Google Play

Posted by Karan Gambhir – Director, Global Trust and Safety Partnerships

As a platform, we strive to help developers responsibly build new businesses and reach wider audiences across a variety of content types and genres. In response to strong demand, in 2021 we began onboarding a wider range of real-money gaming (RMG) apps in markets with pre-existing licensing frameworks. Since then, this app category has continued to flourish with developers creating new RMG experiences for mobile.

To ensure Google Play keeps up with the pace of developer innovation, while promoting user safety, we’ve since conducted several pilot programs to determine how to support more RMG operators and game types. For example, many developers in India were eager to bring RMG apps to more Android users, so we launched a pilot program, starting with Rummy and Daily Fantasy Sports (DFS), to understand the best way to support their businesses.

Based on the learnings from the pilots and positive feedback from users and developers, Google Play will begin supporting more RMG apps this year, including game types and operators not covered by an existing licensing framework. We’ll launch this expanded RMG support in June to developers for their users in India, Mexico, and Brazil, and plan to expand to users in more countries in the future.

We’re pleased that this new approach will provide new business opportunities to developers globally while continuing to prioritize user safety. It also enables developers currently participating in RMG pilots in India and Mexico to continue offering their apps on Play.

    • India pilot: For developers in the Google Play Pilot Program for distributing DFS and Rummy apps to users in India, we are extending the grace period for pilot apps to remain on Google Play until June 30, 2024 when the new policy will take effect. After that time, developers can distribute RMG apps on Google Play to users in India, beyond DFS and Rummy, in compliance with local laws and our updated policy.
    • Mexico pilot: For developers in the Google Play Pilot Program for DFS in Mexico, the pilot will end as scheduled on June 30, 2024, at which point developers can distribute RMG apps on Google Play to users in Mexico, beyond DFS, in compliance with local laws and our updated policy.

Google Play’s existing developer policies supporting user safety, such as requiring age-gating to limit RMG experiences to adults and requiring developers to use geo-gating to offer RMG apps only where legal, remain unchanged, and we’ll continue to strengthen them. In addition, Google Play will continue other key user safety and transparency efforts, such as our expanded developer verification mechanisms.

With this policy update, we will also be evolving our service fee model for RMG to reflect the value Google Play provides and to help sustain the Android and Play ecosystems. We are working closely with developers to ensure our new approach reflects the unique economics and various developer earning models of this industry. We will have more to share in the coming months on our new policy and future expansion plans.

For developers already involved in the real-money gaming space, or those looking to expand their involvement, we hope this helps you prepare for the upcoming policy change. As Google Play evolves our support of RMG around the world, we look forward to helping you continue to delight users, grow your businesses, and launch new game types in a safe way.

Leverage Gemini in your Android apps

Posted by Dave Burke, VP of Engineering

Last week we unveiled our most capable foundation model, Gemini. Gemini is multimodal – it can accept both text and image inputs. We introduced a way for Android developers to leverage our smallest model Gemini Nano, on-device. This is available on select devices through AICore, a system service that handles model management, runtimes, safety features and more, simplifying the work for developers. And today, we're introducing new ways for Android developers to access the Gemini Pro model – which runs off-device, in Google's data centers.

App development with Gemini Pro

Gemini Pro is accessible via the Gemini API, and it’s our best model for scaling across a wide range of text and image reasoning tasks. To simplify integrating Gemini Pro, you can use the Google AI SDK, a client SDK for Android. This SDK enables direct integration from Android apps and removes the need for developers to build and manage their own backend infrastructure, reducing development costs and improving velocity.

Google AI Studio provides a streamlined way for developers to integrate the Gemini Pro model, craft prompts, create API keys, and effortlessly transform ideas into AI apps. Once you have developed your prompt in Google AI Studio, you can simply click on the “Get code” action to generate a Kotlin code snippet, and start integrating Gemini today using the Google AI SDK for Android.
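For reference, a generated snippet is only a few lines; here is a minimal Kotlin sketch using the Google AI SDK for Android (the model name, prompt, and API key placeholder are illustrative):

import com.google.ai.client.generativeai.GenerativeModel

// Minimal sketch: call the Gemini Pro model through the Google AI SDK for Android.
// Replace the placeholder with a key from Google AI Studio and store it securely.
val generativeModel = GenerativeModel(
    modelName = "gemini-pro",
    apiKey = "YOUR_API_KEY"
)

// generateContent is a suspend function, so call it from a coroutine.
suspend fun summarizeGeminiApi(): String? {
    val response = generativeModel.generateContent("Summarize what the Gemini API does in one sentence.")
    return response.text
}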

Generate Kotlin code for the Gemini API in Google AI Studio

We are also making it easier for developers to use the Gemini API directly in the latest preview version of Android Studio. We’re introducing a new project template for developers to get started with the Google AI SDK for Android right away. You’ll benefit from Android Studio’s enhanced code completion and lint checkers, helping with API keys and security.

New Project template for AI in Android Studio

To leverage the new template in Android Studio, start a new project through File > New > New Project and pick the Gemini API starter template. This template provides a pre-configured project with the necessary code to use the Gemini API. After choosing a project name and location, you will be prompted to generate an API key in Google AI Studio, and asked to enter it in Android Studio. Android Studio will automatically set up the project for you with the Gemini API connection, simplifying your workflow.

Alternatively, you can import the generative AI code sample and set it up in Android Studio through File > New > Import Sample, and searching for "Generative AI Sample".

Get started building AI-powered features and Android apps using Gemini Pro.