Tag Archives: Resources

Coral, Google’s platform for Edge AI, chooses ASUS as OEM partner for global scale

We launched Coral in 2019 with a mission to make edge AI powerful, private, and efficient, and also accessible to a wide variety of customers with affordable tools that reliably go from prototype to production. In these first few years, we’ve seen strong growth in demand for our products across industries and geographies, and with that, a growing need for worldwide availability and support.

That’s why we're pleased to announce that we have signed an agreement with ASUS IoT to help scale our manufacturing, distribution and support. With decades of experience in electronics manufacturing at a global scale, ASUS IoT will provide Coral with the resources to meet our growth demands while we continue to develop new products for edge computing.

ASUS IoT is a sub-brand of ASUS dedicated to the creation of solutions in the fields of AI and the internet of things (IoT). Their mission is to become a trusted provider of embedded systems and the wider AI and IoT ecosystem. ASUS IoT strives to deliver best-in-class products and services across diverse vertical markets, and to partner with customers in the development of fully-integrated and rapid-to-market applications that drive efficiency – providing convenient, efficient, and secure living and working environments for people everywhere.

ASUS IoT already has a long-standing history of collaboration with Coral, having been the first partner to release a product using the Coral SoM when it launched the Tinker Edge T development board. ASUS IoT has also integrated Coral accelerators into its enterprise-class intelligent edge computers and was the first to release a multi-Edge TPU device with the award-winning AI Accelerator PCIe Card. Because we have this history of collaboration, we know they share our strong commitment to innovation in edge computing.

ASUS IoT also has established manufacturing and distribution processes, and a strong reputation in enterprise-level sales and support. So we're excited to work with them to enable scale and long-term availability for Coral products.

With this agreement, the Coral brand and user experience will not change, as Google will maintain ownership of the brand and product portfolio. The Coral team will continue to work with our customers on partnership initiatives and case studies through our Coral Partnership Program. Those interested in joining our partner ecosystem can visit our website to learn more and apply.

Coral.ai will remain the home for all product information and documentation, and in the coming months ASUS IoT will become the primary channel for sales, distribution and support. With this partnership, our customers will gain access to dedicated teams for sales and technical support managed by ASUS IoT.

ASUS IoT will be working to expand the distribution network to make Coral available in more countries. Distributors interested in carrying Coral products will be able to contact ASUS IoT for consideration.

We continue to be impressed by the innovative ways in which our customers use Coral to explore new AI-driven solutions. And now with ASUS IoT bringing expanded sales, support and resources for long-term availability, our Coral team will continue to focus on building the next generation of privacy-preserving features and tools for neural computing at the edge.

We look forward to the continued growth of the Coral platform as it flourishes and we are excited to have ASUS IoT join us on our journey.

How can App Engine users take advantage of Cloud Functions?

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction

Recently, we discussed containerizing App Engine apps for Cloud Run, with or without Docker. But what about Cloud Functions… can App Engine users take advantage of that platform somehow? Back in the day, App Engine was always the right decision, because it was the only option. With Cloud Functions and Cloud Run joining in the serverless product suite, that's no longer the case.

Back when App Engine was the only choice, developers selected it to host small, single-function apps, and others built huge monolithic apps on it as well, simply because it was the only option. Fast forward to today, where code follows more service-oriented or event-driven architectures. Small apps can be moved to Cloud Functions to simplify the code and deployments, while large apps can be split into smaller components, each running on Cloud Functions.

Refactoring App Engine apps for Cloud Functions

A small, single-function app can be seen as a microservice, an API endpoint "that does something," or a utility likely called as the result of some event in a larger multi-tiered application, say to update a database row or send a customer an email message. App Engine apps require some kind of web framework and routing mechanism, while Cloud Function equivalents are freed from much of those requirements. Refactoring these types of App Engine apps for Cloud Functions will likely require less overhead, ease maintenance, and allow common components to be shared across applications.

Large, monolithic applications are often made up of multiple pieces of functionality bundled together in one big package, such as requisitioning a new piece of equipment, opening a customer order, authenticating users, processing payments, performing administrative tasks, and so on. By breaking this monolith up into microservices, each running as an individual function, each component can then be reused in other apps, maintenance is eased because software bugs will identify code closer to their root origins, and developers won't step on each others' toes.

Migration to Cloud Functions

In this latest episode of Serverless Migration Station, a Serverless Expeditions mini-series focused on modernizing serverless apps, we take a closer look at this product crossover, covering how to migrate App Engine code to Cloud Functions. There are several steps you need to take to prepare your code for Cloud Functions:

  • Divest from legacy App Engine "bundled services," e.g., Datastore, Taskqueue, Memcache, Blobstore, etc.
  • Cloud Functions supports modern runtimes; upgrade to Python 3, Java 11, or PHP 7
  • If your app is a monolith, break it up into multiple independent functions. (You can also keep a monolith together and containerize it for Cloud Run as an alternative.)
  • Make appropriate application updates to support Cloud Functions

The first three bullets are outside the scope of this video and its codelab, so we'll focus on the last one. The changes needed for your app include the following:

  1. Remove unneeded and/or unsupported configuration
  2. Remove use of the web framework and supporting routing code
  3. For each of your functions, assign an appropriate name and add the request object parameter it will receive when it is called.

Regarding the last point, note that you can have multiple "endpoints" coming into a single function which processes the request path, calling other functions to handle those routes. If you have many functions in your app, separate functions for every endpoint become unwieldy; if large enough, your app may be more suited for Cloud Run. The sample app in this video and corresponding code sample has only one function, so a single endpoint for that function works perfectly fine here.

This migration series focuses on our earliest users, starting with Python 2. Regarding the first point, the app.yaml file is deleted. Next, almost all Flask resources are removed except for the template renderer (the app still needs to output the same HTML as the original App Engine app). All app routes are removed, and there's no instantiation of the Flask app object. Finally, for the last step, the main function is renamed to the more appropriate visitme() and given a request object parameter.
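The end state of those steps might look roughly like the following sketch. This is an illustrative reconstruction rather than the exact sample code: the real app renders its HTML through the retained template renderer, while this sketch uses a plain f-string to stay self-contained.

```python
# Hypothetical sketch of the refactored sample app: the Flask app object,
# the routes, and app.yaml are gone; all that remains is a single function
# that receives the Flask-compatible request object Cloud Functions passes
# in when the function is invoked over HTTP.

def visitme(request):
    # Cloud Functions calls this with a flask.Request; for local testing,
    # any object exposing an `args` mapping with a `get` method works.
    who = request.args.get('who', 'stranger')
    return f'<h1>Hello, {who}! Welcome to the site.</h1>'
```

With no framework or routing left, the function can be deployed on its own with `gcloud functions deploy visitme --trigger-http`.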

This "migration module" starts with the (Python 3 version of the) Module 2 sample app, applies the steps above, and arrives at the migrated Module 11 app. Implementing those required changes is illustrated by this code "diff:"

Migration of sample app to Cloud Functions

Next steps

If you're interested in trying this migration on your own, feel free to try the corresponding codelab, which leads you step-by-step through this exercise, and use the video for additional guidance.

All migration modules, their videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 as well as content for the next-generation Cloud Functions service, so stay tuned. If you're curious whether it's possible to write apps that can run on App Engine, Cloud Functions, or Cloud Run with no code changes at all, the answer is yes. Hope this content is useful for your consideration when modernizing your own serverless applications!

How Jira for Google Chat uses the latest platform features for app and bot building

Posted by Kyle Zhao, Software Engineer and Charles Maxson, Developer Advocate

Nothing breaks the flow of getting work done like having to stop what you’re doing in one application and switch over to another to look up information, log an event, create a ticket or check on the status of a project task. For Google Workspace users who also rely on Atlassian’s Jira Software for their issue tracking and project management needs, Jira for Chat helps bridge the gap between having conversations with your team in Google Chat and seamlessly staying on top of issues and tasks from Jira while keeping everyone in the loop.

Recently, there have been a number of enhancements to the Google Chat framework that allow developers to make connections between applications like Jira and Google Chat a whole lot better. In this post, we’ll take a look at how the latest version of Jira for Chat takes advantage of some of those newer enhancements for building apps and bots for Chat. Whether you are thinking about building or upgrading your own integration with Chat, or are simply interested in getting more out of using Jira with Google Workspace for you and your team, we’ll cover how Jira for Chat brings those newer features to life.

Connections made easy: Improved Connection Flow

One of the most important steps for getting users to leverage any integration is to make it as easy as possible to set up. Setting up Jira to integrate with Chat requires two applications to be installed: 1) the Google Chat bot for Jira Cloud from the Atlassian Marketplace, and 2) Jira for Chat (unfortunately there are no direct links available, but you can navigate to it in the Chat catalog), located in the Google Chat application under the “+” icon to start a chat.

In the earlier version of Jira for Chat, the setup required a number of steps that were somewhat less intuitive. That’s changed, with the redesign of the new connection flow process that’s built around an improved connection wizard that provides detailed visual information to connect Jira for Chat to your Jira instance.

The new wizard (made possible by enhancements to the Chat dialogs feature) takes the guesswork out of trudging through a number of tedious steps, shows actionable errors if something has been misconfigured or isn’t working, and makes setup easier by parsing out Jira URLs to guide users along the way. See the connection wizard in action below. Now anyone can set it up like a pro!

Jira for Chat Connection Flow Wizard Dialog

Batched Notifications: Taking care of notification fatigue

A user-favorite feature of Jira for Chat is its ability to keep you informed via Google Chat of updates to your team's projects, tickets and tasks. But nobody likes a ‘chatty’ app either, and notification fatigue is real (and really annoying). Notifications are only useful when they provide valuable information in a timely fashion without overwhelming the user; otherwise they run the risk of being ignored or even turned off.

To avoid notification fatigue, the Jira Chat bot batches notifications based on the time elapsed since the last activity in an issue. When a lot of activity is happening in Jira, Jira for Chat groups all updates to a ticket into a single card, sending it to Google Chat once at least 15 seconds have passed since the last update to the issue or 60 seconds have passed since the first update in the group. The latter keeps notifications fresh in case continuous activity is happening.

Updates to the same Jira issue are grouped in one notification card, until one of the following conditions is true:

  1. 15 seconds have passed without any additional updates to the issue.

    Example: Alice reassigned issue X at 6:00:00, and then added a comment at 6:00:10. Both the “assignee change” and the “new comment” will be grouped into a single notification, sent at 6:00:25.
  2. 60 seconds have passed since the first update in the group (to ensure a timely delivery)

    Example: Alice reassigned issue X at 6:00:00, and kept adding comments every 10 seconds. A notification card should be posted around 6:01:00, with all the changes in the past 60 seconds.
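The two batching rules above can be sketched roughly like this. It is an illustration of the described behavior, not Jira for Chat's actual implementation; timestamps are passed in explicitly to keep the logic easy to follow.

```python
# Illustrative sketch of the batching rules: a group of updates to one
# issue is flushed as a single "card" after 15 s of quiet, or after 60 s
# since the first update in the group, whichever comes first.

IDLE_WINDOW = 15   # seconds without further updates to the issue
MAX_WINDOW = 60    # seconds since the first update in the group

class NotificationBatcher:
    def __init__(self):
        self.pending = []        # updates grouped into the next card
        self.first_ts = None     # time of the first update in the group
        self.last_ts = None      # time of the most recent update

    def add(self, update, now):
        """Record one issue update arriving at time `now` (seconds)."""
        self.pending.append(update)
        if self.first_ts is None:
            self.first_ts = now
        self.last_ts = now

    def maybe_flush(self, now):
        """Return the grouped updates as one card, or None if still batching."""
        if not self.pending:
            return None
        if (now - self.last_ts >= IDLE_WINDOW
                or now - self.first_ts >= MAX_WINDOW):
            card, self.pending = self.pending, []
            self.first_ts = self.last_ts = None
            return card
        return None
```

Replaying Alice's example: updates at 6:00:00 and 6:00:10 stay grouped, and the card flushes at 6:00:25, once the 15-second quiet window has elapsed.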

Example, Batching Notifications from 5 down to 1

Link Unfurling: Relevant context where you need it

One of the goals of integrating applications with Google Workspace is streamlining the flow of information with fewer clicks and fewer open tabs, making the new Link Unfurling feature a welcome addition to any Chat bot. Link Unfurling (also known as Link Previews) preemptively includes contextual information associated with a link posted in a Chat message, keeping the information inline and in context with the conversation while eliminating the need to interrupt your focus by following the link out of the conversation to its original source.

Specifically with Jira for Chat, this means that when a teammate posts a Jira link in Chat or pings you asking for more information about one of your tickets they’ve just linked in a message, you can now see that information immediately in the conversation along with the link, saving the steps of having to go back to Jira every time. Link unfurling with the Jira Chat bot happens automatically once the app has been added and configured within a Chat conversation; there’s nothing additional that users need to do, and any links that Jira can preview will automatically get previewed within Chat.

Link Unfurling example in Jira for Chat

Create Issue Dialog: Take action from within Chat

Imagine you are in a lengthy conversation thread with colleagues in Google Chat, when you come to the conclusion that the topic you are discussing warrants a new ticket being created in your Jira instance. Instead of pivoting away from the conversation in Chat to create a new ticket in Jira, you can now quickly create a new Jira issue in Chat thanks to Jira for Chat.

To create an issue from Chat, simply invoke the slash command /jira_create to bring up the Create Issue dialog (enabled by the Chat dialogs feature). Then specify the Project that you would like to assign the ticket to, select the Ticket Type, and enter a brief Summary. The rest of the fields, such as labels and description, are optional; those, as well as advanced fields, can always be filled out in your Jira instance later if you would like. This way you can jump right back into the conversation, knowing you won’t forget to get this ticket logged, without missing a beat of what your team is talking about.

Create a Jira Issue Dialog

Takeaway and More Resources

The new enhancements to Jira for Chat make it a super useful companion for teams that rely on Google Workspace and Jira Software to manage their work. Whether it's the new and improved connection flow, the less-is-more batched notifications handling, or the instant gratification of creating issues directly from Chat, it's not just a productivity booster, but also a great showcase of how the types of apps you can build with Google Chat are evolving.

Get started with Jira for Chat today or learn how you can build your own apps for Google Chat with the developer docs. To keep up with all the news about the Google Workspace Platform, please subscribe to our newsletter.

Happening now! #TheAndroidShow: Tablets, Jetpack Compose & Android 13

Posted by Florina Muntenescu & Huyen Tue Dao, Co-Hosts of #TheAndroidShow

We’re just about to kick off another episode of #TheAndroidShow, you can watch here! In this episode, we’ll take you behind-the-scenes with Jetpack Compose, Android 13 and all of the updates to Android tablets. If you haven’t already, there’s still time to get your burning Android tablet questions answered from the team, using #AskAndroid. We've assembled a team of experts ready to answer your questions live!


First, we go behind-the-scenes with Jetpack Compose

First up in #TheAndroidShow, we’ll be discussing Jetpack Compose, Android’s modern, native UI toolkit. Last month, we released version 1.1 of Jetpack Compose, which contains new features like improved focus handling, touch target sizing, ImageVector caching, and support for Android 12 stretch overscroll. Compose 1.1 also graduates a number of previously experimental APIs to stable and supports newer versions of Kotlin. In #TheAndroidShow, we’ll take you behind-the-scenes in the world of animations with one of the engineers who helps build them, Doris Liu. And then we hear from Twitter about how Compose helps them build new features in half the time!


Next: the world of tablets, including the 12L feature drop, now in AOSP

Next up, we’ll jump into the world of tablets, following the big news from earlier this week: we’ve officially released the 12L feature drop to AOSP, and it’s rolling out to all supported Pixel devices over the next few weeks. There are over 250 million large-screen Android devices, and 12L makes Android 12 even better on tablets, including updates like a new taskbar that lets users instantly drag and drop apps into split-screen mode, new large-screen layouts in the notification shade and lockscreen, and improved compatibility modes for apps. You can read more here.

Starting later this year, 12L will be available in planned updates on tablets and foldables from Samsung, Lenovo and Microsoft, so now is the time to make sure your apps are ready. We highly recommend testing your apps in split-screen mode with windows of various sizes, trying it in different orientations, and checking the new compatibility mode changes if they apply. You can read more about 12L for developers here.

We see large screens as a key surface for the future of Android, and we’re continuing to invest to give you the tools you need to build great experiences for tablets, Chromebooks, and foldables. You can learn more about how to get started optimizing for large screens, and make sure to check out our large screens developer resources.


Tune in now!

Finally, we wrap up the show with a conversation with the director for Android Developer Relations, Maru Ahues Bouza, where we talk about Android 13 as well as some of the broader themes you’ll continue to see from Android later this year.

It’s all happening right now on #TheAndroidShow - join us on YouTube!

Hyper-local ads targeting made easy and automated with Radium

Posted by Álvaro Lamas, Natalija Najdova

Location targeting helps your advertising focus on finding the right customers for your business. Are you, as a digital marketer, spending a lot of time optimizing the location targeting settings for your digital marketing campaigns? Are your ads running only in locations where you can deliver your services to your users, or outside those areas as well?

Read on to find out how Radium can help automate your digital marketing campaigns’ location targeting and make sure you only run ads where you deliver your services.

The location targeting settings challenge

Configuring accurate location targeting settings in marketing platforms like Google Ads allows your ads to appear in the geographic locations that you choose: countries, cities and zip codes, or a radius around a location. As a result, precise geo targeting could help improve the KPIs of your campaigns, such as the return on investment (ROI) or the cost per acquisition (CPA) at high volumes.

Mapping your business area to the available targeting options (country, city and zip code targeting or radius targeting) in Marketing Platforms is a challenge that every business doing online marketing campaigns has faced. This challenge becomes critical if you offer a service that is only available in a certain geographical area. This is particularly relevant for Food or Grocery Delivery Apps or organizations that run similar business models.

Adjusting these location targeting settings is a time-consuming process. In addition, manually translating your business or your physical stores’ delivery areas into geo targeting settings is an error-prone process. And suboptimal targeting options might lead to ads being shown to users you cannot actually deliver your services to, so setting location targeting manually would likely cost you both money and time.

How can Radium help you?

Radium is a simple web application, based on Apps Script, that can save you money and time. Its UI helps you automatically translate a business area into radius targeting settings, one of the three options for geo targeting in Google Ads. It also provides you with an overview of the geographical information about how well the radius targeting overlaps with your business delivery area.

It also offers a few extra features, like merging several areas into one and generating the optimal radius targeting settings for the combined area.

How does it work?

You can get your app deployed and running in less than an hour following these instructions. Once you’re done, in no time you can customize and improve your radius targeting settings to better meet your needs and optimize your marketing efforts.

For each delivery area that you provide, you will be able to visualize different circles in the UI and either select one of the default circles or opt for custom circle radius settings:

  • Large Circle: Pre-generated circle that encloses the rectangle surrounding the targeting area
  • Small Circle: Pre-generated circle contained in the rectangle surrounding the targeting area, touching its sides
  • Threshold Circle: Pre-generated circle with the minimum radius needed to cover at least 90% of your delivery area, to maximize targeting and minimize waste
  • Custom Circle: Circle whose center and radius can be customized manually by drag-and-drop and using the controls of the UI

Large Circle

Small Circle

Threshold Circle

Custom Circle

Take advantage of metrics to compare between all the radius targeting options and select the best fit for your needs. In red you can see the visualization of the business targeting area and, overlapped in gray, the generated radius targeting.

Metrics:

  • Radius: radius of the circle, in km
  • % Intersection: area of the Business Targeting Area inside the circle / total Business Targeting Area size
  • % Waste: area of the circle excluding the Business Targeting Area / total Business Targeting Area size
  • Circle Size: area of the circle, in km²
  • Intersection Size: area of the Business Targeting Area inside the circle, in km²
  • Waste Size: area of the circle excluding the Business Targeting Area, in km²
  • Circle Score: % Intersection - % Waste. The highest score represents the sweet spot, maximizing the targeting area and minimizing the waste area
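As a rough sketch of how these metrics relate to each other, assume you already know the circle's radius and the size of its intersection with the business targeting area (Radium derives these from the map geometry; here they are plain inputs, and all names are illustrative):

```python
import math

def circle_metrics(radius_km, intersection_km2, target_area_km2):
    """Compute the comparison metrics for one candidate targeting circle.

    radius_km        -- radius of the circle, in km
    intersection_km2 -- area of the business targeting area inside the circle
    target_area_km2  -- total area of the business targeting area
    """
    circle_km2 = math.pi * radius_km ** 2
    pct_intersection = intersection_km2 / target_area_km2
    waste_km2 = circle_km2 - intersection_km2      # circle outside the target
    pct_waste = waste_km2 / target_area_km2
    return {
        'radius_km': radius_km,
        '% intersection': pct_intersection,
        '% waste': pct_waste,
        'circle_size_km2': circle_km2,
        'intersection_size_km2': intersection_km2,
        'waste_size_km2': waste_km2,
        # score rewards coverage and penalizes spillover equally
        'circle_score': pct_intersection - pct_waste,
    }
```

A larger circle always raises % Intersection but also % Waste, which is why the score, their difference, peaks at the sweet spot between the two.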

Once you are done optimizing the radius settings, it’s time to activate them in your marketing campaigns. Radium offers you different ways of storing and activating this output, so you can use the one that better fits your needs:

  • Export your data to a Spreadsheet. This will allow you to have a mapping of readable names for each delivery area and its targeting settings, to generate the campaign settings in the csv format expected by Google Ads and to bulk upload them using Google Ads Editor
  • Directly download the csv file that can be uploaded to Google Ads via Google Ads Editor to bulk upload the settings of your campaigns
  • Upload them manually using the Google Ads UI

Find all the details about how to activate your location targeting settings in this step-by-step guide.

Getting started with Radium

There are only two things you need in order to benefit from Radium:

  • A Maps JavaScript API key (very easy to generate)
  • A map of your business’ delivery areas, in either format:
    • A KML file representing the polygon-shaped targeting areas (see sample file)
    • A CSV file with the lat-lng, radius and name of each area, more oriented to physical stores and restaurants (see sample file)
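For illustration, a file in the second format might look something like this; the exact column headers are an assumption here, so check the sample file in the repository for the layout Radium actually expects:

```csv
lat,lng,radius,name
40.4168,-3.7038,2.5,Madrid Centro Store
41.3874,2.1686,3.0,Barcelona Eixample Store
```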

To get you started, please visit the Radium repository on GitHub.

Summary

In conclusion, Radium helps you automate the location targeting configuration and optimization for your Google Ads campaigns, saving you time and minimizing the errors of manual adjustments.

Prediction Framework, a time saver for Data Science prediction projects

Posted by Álvaro Lamas, Héctor Parra, Jaime Martínez, Julia Hernández, Miguel Fernandes, Pablo Gil

Acquiring high-value customers using predicted lifetime value, taking specific actions on users with a high propensity to churn, generating and activating audiences based on machine learning processed signals… All of those marketing scenarios require analyzing first-party data, performing predictions on the data and activating the results in marketing platforms like Google Ads as frequently as possible to keep the data fresh.

Feeding marketing platforms like Google Ads on a regular and frequent basis requires a robust, report-oriented and cost-efficient ETL & prediction pipeline. These pipelines are very similar regardless of the use case, and it's very easy to fall into reinventing the wheel every time, or manually copying & pasting structural code and increasing the risk of introducing errors.

Wouldn't it be great to have a common reusable structure and just add the specific code for each of the stages?

Here is where Prediction Framework plays a key role in helping you implement and accelerate your first-party data prediction projects by providing the backbone elements of the predictive process.

Prediction Framework is a fully customizable pipeline that allows you to simplify the implementation of prediction projects. You only need to have the input data source, the logic to extract and process the data and a Vertex AutoML model ready to use along with the right feature list, and the framework will be in charge of creating and deploying the required artifacts. With a simple configuration, all the common artifacts of the different stages of this type of projects will be created and deployed for you: data extraction, data preparation (aka feature engineering), filtering, prediction and post-processing, in addition to some other operational functionality including backfilling, throttling (for API limits), synchronization, storage and reporting.

The Prediction Framework was built to be hosted on Google Cloud Platform. It makes use of Cloud Functions to do all the data processing (extraction, preparation, filtering and post-prediction processing); Firestore, Pub/Sub and schedulers for the throttling system and to coordinate the different phases of the predictive process; Vertex AutoML to host your machine learning model; and BigQuery as the final storage of your predictions.

Prediction Framework Architecture

To start using the Prediction Framework, a configuration file needs to be prepared with some environment variables about the Google Cloud project to be used, the data sources, the ML model to make the predictions and the scheduler for the throttling system. In addition, custom queries for the data extraction, preparation, filtering and post-processing need to be added in the deploy files. Then, the deployment is done automatically using a deployment script provided by the tool.

Once deployed, all the stages will be executed one after the other, storing the intermediate and final data in the BigQuery tables:

  • Extract: on a timely basis, this step queries the transactions from the data source corresponding to the run date (scheduler or backfill run date) and stores them in a new table in the local project’s BigQuery.
  • Prepare: immediately after the extract of the transactions for a specific date is available, the data is picked up from the local BigQuery and processed according to the specs of the model. Once processed, it is stored in a new table in the local project’s BigQuery.
  • Filter: this step queries the data stored by the prepare process, filters the required data and stores it in the local project’s BigQuery (e.g., only taking new customers’ transactions into consideration; what counts as a new customer is up to the instantiation of the framework for the specific use case).
  • Predict: once the new customers are stored, this step reads them from BigQuery and calls the Vertex API to run the prediction. A formula based on the result of the prediction could be applied to tune the value or to apply thresholds. Once the data is ready, it is stored in BigQuery within the target project.
  • Post_process: a formula could be applied to the AutoML batch results to tune the value or to apply thresholds. Once the data is ready, it is stored in BigQuery within the target project.
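The stage chain above can be sketched, highly simplified, as plain functions passing data through in-memory tables. In the real framework each stage is a Cloud Function coordinated via Pub/Sub, with BigQuery tables in between; all names below are illustrative, not the framework's actual API:

```python
# In-memory stand-in for the intermediate BigQuery tables between stages.
tables = {}

def extract(run_date, source):
    """Pull the transactions matching the run date (scheduler or backfill)."""
    tables['extracted'] = [t for t in source if t['date'] == run_date]

def prepare():
    """Feature engineering per the model's specs (illustrative projection)."""
    tables['prepared'] = [{'customer': t['customer'], 'amount': t['amount']}
                          for t in tables['extracted']]

def filter_new_customers(known_customers):
    """Keep only rows for customers the use case considers 'new'."""
    tables['filtered'] = [r for r in tables['prepared']
                          if r['customer'] not in known_customers]

def predict(model):
    """Score the filtered rows; the real framework calls Vertex AutoML here."""
    tables['predictions'] = [{**r, 'score': model(r)}
                             for r in tables['filtered']]
```

Running the stages in order on a day's transactions produces the scored rows that the post-process step would then tune or threshold before activation.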

One of the powerful features of the prediction framework is that it allows backfilling directly from the BigQuery user interface, so in case you’d need to reprocess a whole period of time, it could be done in literally 4 clicks.

In summary: Prediction Framework simplifies the implementation of first-party data prediction projects, saving time and minimizing errors of manual deployments of recurrent architectures.

For additional information and to start experimenting, you can visit the Prediction Framework repository on Github.

Prediction Framework, a time saver for Data Science prediction projects

Posted by Álvaro Lamas, Héctor Parra, Jaime Martínez, Julia Hernández, Miguel Fernandes, Pablo Gil

Acquiring high value customers using predicted Lifetime Value, taking specific actions on high propensity of churn users, generating and activating audiences based on machine learning processed signals…All of those marketing scenarios require of analyzing first party data, performing predictions on the data and activating the results into the different marketing platforms like Google Ads as frequently as possible to keep the data fresh.

Feeding marketing platforms like Google Ads on a regular and frequent basis, requires a robust, report oriented and cost reduced ETL & prediction pipeline. These pipelines are very similar regardless of the use case and it’s very easy to fall into reinventing the wheel every time or manually copy & paste structural code increasing the risk of introducing errors.

Wouldn't it be great to have a common reusable structure and just add the specific code for each of the stages?

Here is where Prediction Framework plays a key role in helping you implement and accelerate your first-party data prediction projects by providing the backbone elements of the predictive process.

Prediction Framework is a fully customizable pipeline that allows you to simplify the implementation of prediction projects. You only need to have the input data source, the logic to extract and process the data and a Vertex AutoML model ready to use along with the right feature list, and the framework will be in charge of creating and deploying the required artifacts. With a simple configuration, all the common artifacts of the different stages of this type of projects will be created and deployed for you: data extraction, data preparation (aka feature engineering), filtering, prediction and post-processing, in addition to some other operational functionality including backfilling, throttling (for API limits), synchronization, storage and reporting.

The Prediction Framework was built to be hosted on the Google Cloud Platform. It uses Cloud Functions for all the data processing (extraction, preparation, filtering and post-prediction processing); Firestore, Pub/Sub and Cloud Scheduler for the throttling system and to coordinate the different phases of the predictive process; Vertex AutoML to host your machine learning model; and BigQuery as the final storage for your predictions.
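The throttling mechanism mentioned above is built on Firestore, Pub/Sub and Cloud Scheduler; as a rough, self-contained illustration of the underlying idea (not the framework's actual implementation — the class and method names below are hypothetical), a token-bucket rate limiter for API calls can be sketched like this:

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter: allows roughly `rate` calls
    per second on average, with bursts of up to `capacity` calls."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# A caller would check try_acquire() before each API request and
# requeue (or wait) when it returns False.
bucket = TokenBucket(rate=5, capacity=2)
allowed = [bucket.try_acquire() for _ in range(4)]
```

In the real framework the "bucket" state lives in Firestore and requests wait in Pub/Sub, so throttling survives across Cloud Function invocations, but the accounting idea is the same.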

Prediction Framework Architecture

To get started with the Prediction Framework, a configuration file needs to be prepared with some environment variables: the Google Cloud project to be used, the data sources, the ML model that makes the predictions, and the scheduler for the throttling system. In addition, custom queries for the data extraction, preparation, filtering and post-processing need to be added to the deploy files. The deployment itself is then done automatically, using a deployment script provided by the tool.
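The real variable names live in the repository's deploy files; purely as an illustration of the kind of settings involved (every key below is hypothetical, not the framework's actual configuration schema), the shape of such a configuration might look like:

```python
# Hypothetical sketch of a deployment configuration; key names are
# illustrative only and do not match the framework's real variables.
config = {
    "project_id": "my-gcp-project",           # Google Cloud project to deploy into
    "bq_dataset": "predictions",              # BigQuery dataset for intermediate/final tables
    "model_endpoint": "vertex-automl-model",  # Vertex AutoML model used for predictions
    "extract_query_file": "extract.sql",      # custom query for the extract stage
    "schedule_cron": "0 */4 * * *",           # how often the pipeline runs
    "api_requests_per_minute": 60,            # throttling limit for downstream APIs
}


def missing_keys(cfg: dict) -> list:
    """Return the required keys absent from the configuration, sorted."""
    required = {"project_id", "bq_dataset", "model_endpoint"}
    return sorted(required - cfg.keys())


problems = missing_keys(config)  # empty list when the config is complete
```

A deployment script would typically run a check like this before creating any cloud resources, failing fast on an incomplete configuration.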

Once deployed, all the stages will be executed one after the other, storing the intermediate and final data in the BigQuery tables:

  • Extract: on a scheduled basis, this step queries the transactions corresponding to the run date (the scheduler or backfill run date) from the data source and stores them in a new BigQuery table in the local project.
  • Prepare: as soon as the extracted transactions for a specific date are available, the data is picked up from the local BigQuery and processed according to the specs of the model. Once processed, it is stored in a new BigQuery table in the local project.
  • Filter: this step queries the data stored by the prepare process, filters out the records that are not required, and stores the result in the local project's BigQuery (for example, only taking into consideration transactions from new customers; what counts as a new customer is up to the instantiation of the framework for the specific use case).
  • Predict: once the filtered records are stored, this step reads them from BigQuery and requests predictions through the Vertex API.
  • Post_process: a formula can be applied to the AutoML batch results to tune the value or to apply thresholds. Once the data is ready, it is stored in BigQuery within the target project.
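To make the flow concrete, here is a toy, self-contained Python sketch of the extract → prepare → filter → predict → post-process chain. In the real framework each stage is a Cloud Function reading from and writing to BigQuery, and predictions come from Vertex AutoML; the functions, field names, and scoring formula below are purely illustrative.

```python
# Toy sketch of the stage chain; in the actual framework each stage
# is a Cloud Function with BigQuery tables between stages.

def extract(transactions, run_date):
    """Pull the transactions for the run date from the data source."""
    return [t for t in transactions if t["date"] == run_date]

def prepare(rows):
    """Feature engineering: derive the inputs the model expects."""
    return [{**t, "amount_bucket": t["amount"] // 10} for t in rows]

def filter_new_customers(rows, known_customers):
    """Keep only transactions from customers not seen before."""
    return [t for t in rows if t["customer"] not in known_customers]

def predict(rows):
    """Stand-in for the Vertex AutoML batch prediction call."""
    return [{**t, "score": 0.1 * t["amount_bucket"]} for t in rows]

def post_process(rows, threshold=0.2):
    """Apply a formula/threshold to tune the raw model output."""
    return [t for t in rows if t["score"] >= threshold]

transactions = [
    {"customer": "a", "date": "2021-12-01", "amount": 35},
    {"customer": "b", "date": "2021-12-01", "amount": 12},
    {"customer": "c", "date": "2021-11-30", "amount": 80},
]
rows = extract(transactions, "2021-12-01")
rows = prepare(rows)
rows = filter_new_customers(rows, known_customers={"b"})
rows = post_process(predict(rows))
# `rows` now holds the scored, thresholded records ready for storage.
```

Each function boundary here corresponds to a BigQuery table in the framework, which is what makes the intermediate results inspectable and the backfilling described below possible.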

One of the powerful features of the Prediction Framework is that it supports backfilling directly from the BigQuery user interface, so if you need to reprocess a whole period of time, it can be done in literally four clicks.

In summary: Prediction Framework simplifies the implementation of first-party data prediction projects, saving time and minimizing errors of manual deployments of recurrent architectures.

For additional information and to start experimenting, you can visit the Prediction Framework repository on GitHub.

How to get started in cloud computing

Posted by Google Cloud training & certifications team

Validated cloud skills are in demand. With Google Cloud certifications, employers know that certified individuals have proven knowledge of various professional roles within the cloud industry. Google Cloud certifications have also been recognized as some of the highest-paying IT certifications for the past several years. This year, the Google Cloud Certified Professional Data Engineer topped the list with an average salary of $171,749, while the Google Cloud Certified Professional Cloud Architect came in second place, with an average salary of $169,029.

You may be wondering what sort of background you need to take advantage of these opportunities: What sort of classes should you take? How exactly do you get started in the cloud without experience? Here are some tips to start learning about Google Cloud and build your cloud computing skills.

Get hands-on experience with cloud computing

Google Cloud training offers a wide range of learning paths featuring comprehensive courses and hands-on labs, so you get to practice with the real Google Cloud console. For instance, if you wanted to take classes to prepare for the Professional Data Engineer certification mentioned above, there is a complete learning path featuring four courses and 31 hands-on labs to help familiarize you with relevant topics like BigQuery, machine learning, IoT, TensorFlow, and more.

There are nine learning paths providing you with a launch pad to all major pillars of cloud computing, from networking and cloud security to database management and hybrid cloud infrastructure. Each broader learning path contains specific learning paths to help you train for particular job roles, like Machine Learning Engineer. Visit the Google Cloud training page to find the right path for you.

Learn live from cloud experts

Google Cloud regularly hosts a half-day live training event called Cloud OnBoard which features hands-on learning led by experts. All sessions are also available to watch on-demand after the event.

If you’re a developer new to cloud computing, we recommend you start with Google Cloud Fundamentals, an entry-level course to learn about the basics of Google Cloud. Experts guide you through hands-on labs where you can practice using the Google Console, Google Cloud Shell, and more.

You’ll be introduced to core components of Google Cloud and given an overview of how its tools impact the entire cloud computing landscape. The curriculum covers Compute Engine: how to create VM instances from scratch and from existing templates, how to connect them together, and how to end up with projects that can talk to each other safely and securely. You will also learn about the different storage and database options available on Google Cloud.

Other Cloud OnBoard event topics include cloud architecture, Kubernetes, data analytics, and cloud application development.

Explore Google Cloud infrastructure

Cloud infrastructure is the backbone of the internet. Understanding it is a good starting point for digging deeper into cloud concepts, because it gives you a taste of the various aspects of cloud computing and helps you figure out what you like best, whether that’s networking, security, or application development.

Build your foundational Google Cloud knowledge with our on-demand infrastructure training in the cloud infrastructure learning path. This learning path will provide you with practical experience through expert-guided labs which dive into Cloud Storage and other key application services like Google Cloud’s operations suite and Cloud Functions.

Show off your skills

Once you have a strong grasp on Google Cloud basics, you can start earning skill badges to demonstrate your experience.

Skill badges are digital credentials that recognize your ability to solve real-world problems with your cloud knowledge. You can share them on your resume or social profile so your professional network sees your technical skills. This can be useful for recruiters or employers as you transition to cloud computing work. Skill badges also enable you to get in-depth, hands-on experience with different Google Cloud offerings on the way to earning the credential.

You can also use them to start preparing for Google Cloud certifications, which are more intensive and show employers that you are a cloud expert. Most Google Cloud certifications recommend at least six months, and up to several years, of industry experience, depending on the material.

Ready to get started in the cloud? Visit the Google Cloud training page to see all your options, from in-person classes and online courses to special events and more.

Promote your Google Workspace Marketplace apps with the new badge for developers

Posted by Elena Kingbo, Program Manager

Recently, Google Workspace Marketplace launched new features for developers to improve the application search and evaluation experience in the Marketplace for our Google Workspace admins and end users.

Continuing with updates to further improve the developer experience, we are excited to announce Google Workspace Marketplace badges.

The new badges will allow developers to promote their published Google Workspace Marketplace applications on their own websites. Users will be taken directly to the Marketplace application listing, where they can review application details, privacy policy, terms of service, and more. These users will then be able to securely install applications directly from the Google Workspace Marketplace.

This promotional tool can help you reach more potential users and grow your business. By being part of the Google Workspace Marketplace ecosystem, you receive direct access to the more than 3 billion Google Workspace users, where they can browse, search, and install solutions they need. To date, we've seen a stunning 4.8 billion apps installed in Google Workspace, transforming the way users connect, create, and collaborate.

The new Google Workspace Marketplace badge for developers.

Use the badge to show your existing and potential customers where they can find your application, backed by Google Workspace’s industry-leading standards of quality, security, and compliance. For developers with products available globally, the badge is available in 29 languages including Arabic, Spanish, Vietnamese, and more.

The new Google Workspace Marketplace badge creation page for developers.

Refresh your Marketplace application listing and marketing assets

When you are ready to incorporate the new badge onto your website to feature your Marketplace app, we recommend that you also review the creatives and information used in your app’s listing in the Google Workspace Marketplace. You should also include well-designed feature graphics in your listing, and localize your feature graphics, screenshots, and description to improve conversions globally.

Stay on top of developer news

To stay on top of the latest Google Workspace Platform updates for developers, subscribe to our Google Workspace Developers Newsletter for monthly updates, follow @workspacedevs, and join in on the community conversation.

Personalize user journeys by Pushing Dynamic Shortcuts to Assistant

Posted by Jessica Dene Earley-Cha, Developer Relations Engineer

Like many other people who use their smartphone to make their lives easier, I’m way more likely to use an app that adapts to my behavior and is customized to fit me. Android apps can already support some personalization, like listing common user journeys when you long-press an app’s icon. When I long-press my Audible app (an online audiobook and podcast service), it gives me a shortcut to the book I’m currently listening to; right now that is Daring Greatly by Brené Brown.

Now, imagine if these shortcuts could also be triggered by a voice command – and, when relevant to the user, show up in Google Assistant for easy use.

Wouldn't that be lovely?

Dynamic shortcuts on a mobile device

Well, now you can do that with App Actions by pushing dynamic shortcuts to the Google Assistant. Let’s go over what Shortcuts are, what happens when you push dynamic shortcuts to Google Assistant, and how to do just that!

Android Shortcuts

As an Android developer, you're most likely familiar with shortcuts. Shortcuts give your users the ability to jump into a specific part of your app. For cases where the destination in your app is based on individual user behavior, you can use a dynamic shortcut to jump to a specific thing the user was previously working with. For example, let’s consider a ToDo app, where users can create and maintain their ToDo lists. Since each item in a ToDo list is unique to each user, you can use dynamic shortcuts so that each user's shortcuts are based on the items on their own list.

Below is a snippet of an Android dynamic shortcut for the fictional ToDo app.

val shortcut = ShortcutInfoCompat.Builder(context, task.id)
    .setShortLabel(task.title)
    .setLongLabel(task.title)
    .setIcon(IconCompat.createWithResource(context, R.drawable.icon_active_task))
    .setIntent(intent)
    .build()

ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)

Dynamic Shortcuts for App Actions

If you're pushing dynamic shortcuts, it's a short hop to make those same shortcuts available for use by Google Assistant. You can do that by adding the Google Shortcuts Integration library and a few lines of code.

To extend a dynamic shortcut to Google Assistant through App Actions, two jetpack modules need to be added, and the dynamic shortcut needs to include .addCapabilityBinding.

val shortcut = ShortcutInfoCompat.Builder(context, task.id)
    .setShortLabel(task.title)
    .setLongLabel(task.title)
    .setIcon(IconCompat.createWithResource(context, R.drawable.icon_active_task))
    .addCapabilityBinding("actions.intent.GET_THING", "thing.name", listOf(task.title))
    .setIntent(intent)
    .build()

ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)

The addCapabilityBinding method binds the dynamic shortcut to a capability, which is a declared way a user can launch your app into the requested section. If you don’t already have App Actions implemented, you’ll need to add capabilities to your shortcuts.xml file. A capability is an expression of a relevant feature of your app and contains a Built-In Intent (BII). BIIs model voice commands that Assistant already understands, and linking a BII to a shortcut allows Assistant to use the shortcut as the fulfillment for a matching command. In other words, capabilities tell Assistant what to listen for and how to launch the app.

In the example above, addCapabilityBinding binds the dynamic shortcut to the actions.intent.GET_THING BII. When users request one of the items in their ToDo app, Assistant processes their request and triggers the capability with the GET_THING BII listed in their shortcuts.xml.

<shortcuts xmlns:android="http://schemas.android.com/apk/res/android">
  <capability android:name="actions.intent.GET_THING">
    <intent
      android:action="android.intent.action.VIEW"
      android:targetPackage="YOUR_UNIQUE_APPLICATION_ID"
      android:targetClass="YOUR_TARGET_CLASS">
      <!-- Eg. name = the ToDo item -->
      <parameter
        android:name="thing.name"
        android:key="name"/>
    </intent>
  </capability>
</shortcuts>

So in summary, the process to add dynamic shortcuts looks like this:

1. Configure App Actions by adding the two Jetpack modules (the ShortcutManagerCompat library and the Google Shortcuts Integration library). Then associate the shortcut with a Built-In Intent (BII) in your shortcuts.xml file. Finally, push the dynamic shortcut from your app.

2. Two major things happen when you push your dynamic shortcuts to Assistant:

  1. Users can open dynamic shortcuts through Google Assistant, fast tracking users to your content
  2. During contextually relevant times, Assistant can proactively suggest your Android dynamic shortcuts to users, displaying them on Assistant-enabled surfaces.

Not too bad. I don’t know about you, but I like to test out new functionality in a small app first. You're in luck! We recently launched a codelab that walks you through this whole process.

Dynamic Shortcuts Codelab

Looking for more resources to help improve your understanding of App Actions? We have a new learning pathway that walks you through the product, including the dynamic shortcuts that you just read about. Let us know what you think!

Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.

Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AppActions to share what you’re working on. Can’t wait to see what you build!