Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Register now for Firebase Summit 2022!

Posted by Grace Lopez, Product Marketing Manager

One of the best things about Firebase is our community, so after three long years, we’re thrilled to announce that our seventh annual Firebase Summit is returning as a hybrid event with both in-person and virtual experiences! Our 1-day, in-person event will be held at Pier 57 in New York City on October 18, 2022. It will be a fun reunion for us to come together to learn, network, and share ideas. But if you’re unable to travel, don’t worry: you’ll still be able to take part in the activities online from your office/desk/couch, wherever you are in the world.

Join us to learn how Firebase can help you accelerate app development, run your app with confidence, and scale your business. Registration is now open for both the physical and virtual events! Read on for more details on what to expect.


Keynote full of product updates

In-person and livestreamed

We’ll kick off the day with a keynote from our leaders, highlighting all the latest Firebase news and announcements. With these updates, our goal is to give you a seamless and secure development experience that lets you focus on making your app the best it can be.

#AskFirebase Live

In-person and livestreamed

Have a burning question you want to ask us? We’ll take questions from our in-person and virtual attendees and answer them live on stage during a special edition of everyone’s favorite, #AskFirebase.

NEW! Ignite Talks

In-person and livestreamed

This year at Firebase Summit, we’re introducing Ignite Talks: bite-sized, 7-15 minute talks focused on hot topics, tips, and tricks to help you get the most out of our products.

NEW! Expert-led Classes

In-person, with content released online later

You’ve been asking us for more technical deep dives, so this year we’ll also be running expert-led classes at Firebase Summit. These platform-specific classes will be designed to give you comprehensive knowledge and hands-on practice with Firebase products. Initially, these classes will be exclusive to in-person attendees, but we’ll repackage the content for self-paced learning and release them later for our virtual attendees.

We can’t wait to see you

In addition, Firebase Summit will be full of all the other things you love - interactive demos, lots of networking opportunities, exciting conversations with the community…and a few surprises too! The agenda is now live, so don't forget to check it out! In the meantime, register for the event, subscribe to the Firebase YouTube channel, and follow us on Twitter and LinkedIn to join the conversation using #FirebaseSummit.

Just launched: Apply for support from Google Play’s $2M Indie Games Fund in Latin America

Posted by Patricia Correa, Director, Global Developer Marketing

As part of our commitment to helping all developers grow on our platform, at Google Play we have various programs focused on supporting small games studios. A few weeks ago we announced the winners of the Indie Games Festival in Europe, Korea and Japan, and the 2022 class of the Indie Games Accelerator.

Today, we are launching the Indie Games Fund in Latin America. We will be awarding $2 million in non-dilutive cash awards, in addition to hands-on support, to selected small games studios based in Latin America, to help them build and grow their businesses on Google Play.

The program is open to indie game developers who have already launched a game - whether on Google Play or another mobile platform, PC, or console. Each selected recipient will get between $150,000 and $200,000 to help them take their game to the next level and build a successful business.

Check out all eligibility criteria and apply now. Priority will be given to applications received by 12:00 p.m. BRT, 31 October, 2022.

For more updates about all our programs, resources and tools for indie game developers, follow us on Twitter @GooglePlayBiz and Google Play business community on LinkedIn.








Introducing Discovery Ad Performance Analysis

Posted by Manisha Arora, Nithya Mahadevan, and Aritra Biswas, gPS Data Science team

Overview of Discovery Ads and need for Ad Performance Analysis

Discovery ads, launched in May 2019, allow advertisers to easily extend the reach of their social ads to users across YouTube, Google Feed, and Gmail worldwide. They give brands a new opportunity to reach 3 billion people as they explore their interests and search for inspiration across their favorite Google feeds (YouTube, Gmail, and Discover) -- all with a single campaign. Learn more about Discovery ads here.


Given these unique properties, customers need a data-driven method to identify the textual and imagery elements in Discovery ad copies that drive the Interaction Rate of their Discovery ad campaigns, where an interaction is defined as the main user action associated with an ad format: clicks and swipes for text and Shopping ads, views for video ads, calls for call extensions, and so on.

Interaction Rate = interactions / impressions


“Customers need a data driven method to identify textual & imagery elements in Discovery Ad copies that drive Interaction Rate of their campaigns.”

- Manisha Arora, Data Scientist



Our analysis approach:

The Data Science team at Google is investing in a machine learning approach to uncover insights from complex unstructured data and provide machine-learning-based recommendations to our customers. Machine learning helps us study what works in ads at scale, and these insights can greatly benefit advertisers.

We follow a six-step approach for Discovery Ad Performance Analysis:
  • Understand Business Goals
  • Build Creative Hypothesis
  • Data Extraction
  • Feature Engineering
  • Machine Learning Modeling
  • Analysis & Insight Generation

To begin with, we work closely with the advertisers to understand their business goals, current ad strategy, and future goals. We map this closely to industry insights to draw a larger picture and provide a customized analysis for each advertiser. As a next step, we build hypotheses that best describe the problem we are trying to solve. An example of a hypothesis might be: “Do superlatives (words like “top” or “best”) in the ad copy drive performance?”


“Machine Learning helps us study what works in ads at scale and these insights can greatly benefit the advertisers.”

- Manisha Arora, Data Scientist


Once we have a hypothesis we are working towards, the next step is to deep-dive into the technical analysis.

Data Extraction & Pre-processing


Our initial dataset includes raw ad text, imagery, performance KPIs & target audience details from historical ad campaigns in the industry. Each Discovery ad contains two text assets (Headline and Description) and one image asset. We then apply ML to extract text and image features from these assets.

Text Feature Extraction

We apply NLP to extract the text features from the ad text. We pass the raw text of the ad headline & description through Google Cloud’s Language API, which parses it into our feature set: commonly used keywords, sentiment, and so on.
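
As an illustration, here is a minimal sketch of this step using the google-cloud-language client library; the headline string is hypothetical ad copy for demonstration, not data from the analysis.

```python
# A sketch of text feature extraction with the Cloud Natural Language API.
# The headline below is hypothetical ad copy used purely for illustration.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
headline = "Best savings rates. Open an account today."
document = language_v1.Document(
    content=headline, type_=language_v1.Document.Type.PLAIN_TEXT
)

# Sentiment: score in [-1, 1]; magnitude reflects overall emotional strength.
sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
print(f"sentiment: score={sentiment.score:.2f} magnitude={sentiment.magnitude:.2f}")

# Entities: salient keywords and phrases mentioned in the copy.
entities = client.analyze_entities(request={"document": document}).entities
print("keywords:", [e.name for e in entities])

# Syntax: part-of-speech tags help flag imperative vs. indicative tone.
tokens = client.analyze_syntax(request={"document": document}).tokens
print("verbs:", [t.text.content for t in tokens
                 if t.part_of_speech.tag == language_v1.PartOfSpeech.Tag.VERB])
```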

Image Feature Extraction

We apply image processing to extract image features from the ad copy imagery. We pass the raw images through Google Cloud’s Vision API & extract image components, including objects, people, background, lighting, and so on.
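
As a companion illustration, here is a minimal sketch of the image step using the google-cloud-vision client library; "ad_creative.jpg" is a hypothetical local file standing in for an ad image.

```python
# A sketch of image feature extraction with the Cloud Vision API.
# "ad_creative.jpg" is a hypothetical local copy of an ad image.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("ad_creative.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Labels and localized objects feed the object-based features.
labels = client.label_detection(image=image).label_annotations
objects = client.object_localization(image=image).localized_object_annotations
print("labels:", [(l.description, round(l.score, 2)) for l in labels])
print("objects:", [o.name for o in objects])

# Face annotations support features such as smiling faces and group size.
faces = client.face_detection(image=image).face_annotations
print("faces detected:", len(faces))

# Dominant colors approximate the image's color profile and lighting.
props = client.image_properties(image=image).image_properties_annotation
for c in props.dominant_colors.colors[:3]:
    print(f"rgb({c.color.red:.0f},{c.color.green:.0f},{c.color.blue:.0f})"
          f" fraction={c.pixel_fraction:.2f}")
```
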
Together, these text and image features form the holistic set of features extracted from the ad content.

Feature Design


Text Feature Design

There are two types of text features included in DisCat:
1. Generic text features
a. These are features returned by Google Cloud’s Language API, including sentiment, word / character count, tone (imperative vs. indicative), symbols, most frequent words, and so on.

2. Industry-specific value propositions
a. These are features that apply only to a specific industry (e.g. finance) and are manually curated by the data science developer in collaboration with specialists and other industry experts.
  • For example, for the finance industry, one value proposition can be “Price Offer”. A list of keywords / phrases related to price offers (e.g. “discount”, “low rate”, “X% off”) is curated based on domain knowledge to identify this value proposition in the ad copies. NLP techniques (e.g. WordNet synsets) and manual examination are used to make sure this list is inclusive and accurate.
Image Feature Design

Like the text features, image features can largely be grouped into two categories:
1. Generic image features
a. These features apply to all images and include the color profile, whether any logos were detected, how many human faces are included, etc.
b. The face-related features also include some advanced aspects: we look for prominent smiling faces looking directly at the camera, we differentiate between individuals vs. small groups vs. crowds, etc.
2. Object-based features
a. These features are based on the list of objects and labels detected in all the images in the dataset, which can often be a massive list including generic objects like “Person” and specific ones like particular dog breeds.
b. The biggest challenge here is dimensionality: we have to cluster together related objects into logical themes like natural vs. urban imagery.
c. We currently take a hybrid approach to this problem: we use unsupervised clustering to create an initial grouping, then manually revise it as we inspect sample images. The process is as follows (a code sketch appears after this list):
  • Extract object and label names (e.g. Person, Chair, Beach, Table) from the Vision API output and filter out the most uncommon objects
  • Convert these names to 50-dimensional semantic vectors using a Word2Vec model trained on the Google News corpus
  • Using PCA, extract the top 5 principal components from the semantic vectors. This step takes advantage of the fact that each Word2Vec neuron encodes a set of commonly adjacent words, and different sets represent different axes of similarity and should be weighted differently
  • Use an unsupervised clustering algorithm, namely either k-means or DBSCAN, to find semantically similar clusters of words
  • We are also exploring augmenting this approach with a combined distance metric:
d(w1, w2) = a · (semantic distance) + b · (co-appearance distance)
where the co-appearance distance is a Jaccard distance metric
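
To make the pipeline concrete, here is a minimal sketch of the clustering steps. It assumes gensim’s publicly downloadable Google News vectors (300-dimensional, standing in for the 50-dimensional model mentioned above), scikit-learn for PCA and k-means, and a small hypothetical list of object names; the Jaccard term treats co-appearance as overlap between the sets of images each object appears in, which is one plausible reading of the metric.

```python
# A sketch of the object-clustering pipeline: names -> vectors -> PCA -> k-means.
# Object names and all parameters here are illustrative, not production values.
import gensim.downloader as api
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

object_names = ["person", "chair", "beach", "table", "dog",
                "car", "tree", "building", "boat", "street"]

# 1) Convert object/label names to semantic vectors with a pretrained model.
w2v = api.load("word2vec-google-news-300")
names = [n for n in object_names if n in w2v]
vectors = np.array([w2v[n] for n in names])

# 2) Keep the top 5 principal components of the semantic vectors.
components = PCA(n_components=5).fit_transform(vectors)

# 3) Cluster semantically similar objects (k-means; DBSCAN is the alternative).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(components)
for k in range(3):
    print(f"cluster {k}:", [n for n, c in zip(names, kmeans.labels_) if c == k])

# Exploratory combined distance: a * semantic + b * co-appearance, where the
# co-appearance term is a Jaccard distance over image sets (an assumption).
def combined_distance(v1, v2, imgs1, imgs2, a=0.7, b=0.3):
    semantic = 1 - np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    co_appearance = 1 - len(imgs1 & imgs2) / max(len(imgs1 | imgs2), 1)
    return a * semantic + b * co_appearance
```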

Each of these components represents a choice the advertiser made when creating the messaging for an ad. Now that we have a variety of ads broken down into components, we can ask: which components are associated with ads that perform well or not so well?

We use a fixed-effects[1] model to control for unobserved differences in the contexts in which different ads were served. This is because the features we are measuring are observed multiple times in different contexts, i.e., ad copy, audience group, time of year, and the device on which the ad is served.

The trained model estimates the impact of individual keywords, phrases & image components in the Discovery ad copies. The model estimates Interaction Rate (denoted ‘IR’ in the following formulas) as a function of individual ad copy features plus controls:
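
A standard fixed-effects specification consistent with this description would be (a sketch of the model form, not the exact production formula):

IR_i = β0 + Σ_k β_k · x_ik + α_audience(i) + γ_time(i) + δ_device(i) + ε_i

where x_ik are the text and image features of ad copy i, and α, γ, and δ are fixed effects absorbing the audience group, time of year, and device contexts.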



We use ElasticNet regularization to spread the effect of correlated features in the presence of multicollinearity and to improve the explanatory power of the model:
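
Concretely, ElasticNet augments the least-squares loss with both L1 and L2 penalties, roughly: minimize Σ_i (IR_i − ÎR_i)² + λ [ ρ‖β‖₁ + (1 − ρ)‖β‖₂² ]. Below is a minimal scikit-learn sketch on synthetic stand-in data; the real feature matrix would come from the extraction steps above.

```python
# A sketch of ElasticNet regression with cross-validated regularization.
# X and y are synthetic stand-ins for the ad feature matrix and interaction rates.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
X = rng.random((500, 40))                              # e.g., one-hot ad features
y = 0.05 + 0.02 * X[:, 0] + rng.normal(0, 0.01, 500)  # synthetic interaction rate

# l1_ratio trades off L1 (feature selection) vs. L2 (spreading weight across
# correlated features); cross-validation picks the regularization strength.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, y)
top = np.argsort(-np.abs(model.coef_))[:5]
print("strongest features:", top.tolist(), np.round(model.coef_[top], 4))
```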


“Machine Learning model estimates the impact of individual keywords, phrases, and image components in discovery ad copies.”

- Manisha Arora, Data Scientist

 

Outputs & Insights


Outputs from the machine learning model help us determine the significant features. The coefficient of each feature represents its percentage-point effect on CTR.

In other words, if the mean CTR without a feature is X% and feature ‘xx’ has a coefficient of Y, then the mean CTR with feature ‘xx’ included will be (X + Y)%. For example, a baseline CTR of 1.2% combined with a feature coefficient of 0.3 implies an expected CTR of 1.5% when that feature is present. This helps us determine the expected CTR if the most important features are included in the ad copies.

Key-takeaways (sample insights):

We analyze keywords & imagery tied to the unique value propositions of the product being advertised. There are six key value propositions we study in the model, and the analyses surface sample insights for each of them.
Shortcomings:

Although insights from DisCat are quite accurate and highly actionable, the model does have a few limitations:
1. The current model considers individual keywords rather than groups of keywords that might be driving ad performance (for example, the phrase “Buy Now” rather than the individual keywords “Buy” and “Now”).
2. Inference and predictions are based on historical data and aren’t necessarily an indication of future success.
3. Insights are based on industry insights and may need to be tailored for a given advertiser.

DisCat breaks down exactly which features are working well for the ad and which ones have room for improvement. These insights can help identify high-impact keywords in the ads, which can then be used to improve ad quality and, in turn, business outcomes. As a next step, we recommend testing the new ad copies in experiments to provide a more robust analysis. The Google Ads A/B testing feature also lets you create and run experiments to test these insights in your own campaigns.

Summary


Discovery Ads are a great way for advertisers to extend their social outreach to millions of people across the globe. DisCat helps break down Discovery ads by analyzing text and images separately and using advanced ML/AI techniques to identify the key aspects of an ad that drive greater performance. These insights help advertisers identify room for growth and high-impact keywords, and design better creatives that drive business outcomes.

Acknowledgement


Thank you to Shoresh Shafei and Jade Zhang for their contributions. Special mention to Nikhil Madan for facilitating the publishing of this blog.

Notes

  1. Greene, W. H. (2011). Econometric Analysis, 7th ed. Prentice Hall; Cameron, A. C., & Trivedi, P. K. (2005). Microeconometrics: Methods and Applications.

Come to the Tag1 & Google Performance Workshop at DrupalCon Europe 2022, Prague

Posted by Andrey Lipattsev, EMEA CMS Partnerships Lead

TL;DR: If you’re attending @DrupalConEur, submit your URL @ https://bit.ly/CWV-DrupalCon-22 to get your UX & performance right on #Drupal at the Tag1 & Google interactive workshop.


Getting your User Experience right, which includes performance, is critical for success. It’s a key driver of many success metrics (https://web.dev/tags/web-vitals) and a factor taken into account by platforms, including search engines, that surface links to your site (https://developers.google.com/search/docs/advanced/experience/page-experience).

Quantifying User Experience is not always easy, so one way to measure, track and improve it is by using Core Web Vitals (CWV, https://web.dev/vitals/). Building a site with great CWV on Drupal is, on average, easier than on many other platforms (https://bit.ly/CWV-tech-report), and yet there are certain tips and pitfalls you should be aware of.

In this workshop, the team from Tag1 and Google (Michael Meyers, Andrey Lipattsev and others) will use real-life examples of Drupal-based websites to illustrate some common pain points and the corresponding solutions. If you would like us to take a look at your website and provide actionable advice, please submit the URL via this link (https://bit.ly/CWV-DrupalCon-22). The workshop is interactive, so bring your laptop - we'll get you up and running and teach you hands-on how to code for the relevant improvements.

We cannot guarantee that all the submissions will be analysed as this depends on the number of submissions and the time that we have. However, we will make sure that all the major themes cutting across the submitted sites will be covered with relevant solutions.

See you in Prague!

Date & Time: Wednesday 21.09.2022, 16:15-18:00

Helping Developers Build with Google, Matters

Posted by Jeannie Zhang and Kevin Po; Product Managers, Nest

As the smart home industry prepares for a major shift in usability and interoperability with Matter launching later this year, we are working to help you build more devices and connections with Google products and beyond.

At Google I/O this year, we shared updates on how Google is continuing to support smart home developers, including the launch of our new and improved Google Home Developer Center. Today, we are excited to share that the Google Home Developer Console is now in Developer Preview at console.home.google.com.

What is the Google Home Developer Console?


The Google Home Developer Console is a guided flow for developers looking to integrate with Google, providing everything needed to build intelligent and innovative smart home products with Matter. By simplifying the process of building Matter-enabled smart home products, the console lets you spend more time innovating with your devices and less time on the basics.

The console is part of the Google Home Developer Center we announced earlier this year: the go-to starting place for anyone interested in developing smart home devices and apps with Google.

Google Home Device SDK


Along with this new console, we have also released two new software development kits to make building Matter devices with Google easier. We’ve created the Google Home Device SDK, which extends the open-source Matter SDK with development, testing, and go-to-market tools, making it the fastest and easiest way to build Matter devices.

Created with both new and experienced smart home developers in mind, the Google Home Device SDK has tools such as code samples, code labs and a Matter virtual device to help you start building, integrating and testing your Matter devices with Google easily.

At I/O this year, we announced Intelligence Clusters, which will allow you to access Google intelligence about the home locally and directly on your Matter devices, using a similar structure to clusters within Matter. To protect the privacy and security of our users, we have built guardrails into our Intelligence Clusters, beginning with Home & Away, to ensure that user information is always encrypted, processed locally, and only with user consent and visibility. You can learn more about these guardrails and fill out our interest form here.

Google Home Mobile SDK


Apps are invaluable to the user experience for your devices, so we have also released the Google Home Mobile SDK, a tool for building Android apps that connect directly with Matter devices. Our mobile SDK streamlines the setup process, creating a more consistent and reliable experience for Android users. These APIs make it easier to set up devices in your app, Google Home, and third-party ecosystems, and to share devices with other ecosystems and apps.

Why build with Google?


Even with Matter making interoperability the standard, determining the best platform for your smart devices is still an important consideration. Google's end-to-end tools for Matter devices and apps complement your existing development platforms, accelerate time-to-market for your devices, improve reliability, and let you differentiate with Google Home while having interoperability with other Matter platforms.

Getting Started


Looking to get started building with Matter? Before hopping into the Google Home Developer Console, head over to our Get Started page to gather all the information you need to know before building.

We’re committed to supporting smart home developers that build and innovate with Google, by providing easy and high-quality resources. The latest tools are just an example of our ongoing commitment to be partners in this industry. We can’t wait to see what you build!

Updates to Emoji: New Characters, New Animation, New Color Customization, and More!

Posted by Jennifer Daniel, Emoji and Expression Creative Director

It’s official: new emoji are here, there, and everywhere.

But what exactly is “new” and where is “here”? Great question.

Emoji have long eclipsed their humble beginnings in SMS text messages in the 1990s. Today, they appear in places you’d never expect, like self-checkout kiosks, television screens and yes, even refrigerators. As emoji increase in popularity and advance in how they are used, the Noto Emoji project has stepped up our emoji game to help everyone get the latest emoji without having to buy a new device (or a new refrigerator).

Over the past couple of years we’ve been introducing a suite of updates to make it easier than ever for apps to embrace emoji. Today, we’re taking it a step further by introducing new emoji characters (in color and in monochrome), metadata like shortcodes, a new font standard called COLRv1, open source animated emotes, and customization features in Emoji Kitchen. Now it’s easier than ever to operate at the speed of language online.

New Emoji!

First and foremost, earlier today the Unicode Consortium published all data files associated with the Unicode 15.0 release, including 31 new emoji characters.

The collection includes a wing, a leftwards and rightwards hand, and a shaking face. Now you too can make pigs fly, high five, and shake in your boots, all in emoji form.

These new characters bring our emoji total to 3,664. All of them are coming to Android soon and will become available across Google products early next year.

Can’t wait until then? You can download the font and use it today (wherever color vector fonts are supported). Our entire emoji library, including the source files and associated metadata like shortcodes, is open source on GitHub for you to build with and build on. (Note: keep an eye out for those source files on GitHub later this week.)

And before you ask: yes, the variable monochrome version of Noto Emoji that launched earlier this year is fully up to date with the new Unicode Standard.

Dancing Emotes

While emoji today are almost unrecognizable from what they were in the late 1990s, there are some things I miss about the original emoji sets from Japan. Notably, the animation. Behold the original dancer emoji via phone operator KDDI:
 

This animation is so good. Go get it, KDDI dancer.

Just as language doesn’t stand still, neither do emoji. Say hello to our first set of animations!

Scan the collection, download in your preferred file format, and watch them dance. You may have already seen a few in the Messages by Google app which supports these today. The artwork is available under the CC BY 4.0 license.  


New Color Font Support

Emoji innovation isn't limited to mobile anymore, and there is a lot to be explored in web environments. Thanks to a new font format called COLRv1, color fonts — such as Noto Color Emoji — can render with the crispness we’ve come to expect from digital imagery. You can also do some sweet things to customize the appearance of color fonts. If you’re viewing this on the latest version of Chrome, go ahead and give it a whirl.


(Having trouble using this demo? Please update to the latest version of Chrome.)

Make a vaporwave duck


Or a duck from the 1920s


Softie duckie

… a sunburnt duck?


Before you ask: no, you can’t send the 1920s duck as a traditional emoji using the COLRv1 tech; it’s more a demonstration of the possibilities of this new font standard. Because your ducks render in the browser (*), interoperability isn’t an issue! Take our vibrant and colorful drawings and stretch your imagination of what it even means to be an emoji. It’s an exciting time to be emoji-adjacent.

If you’d like to send goth emoji today in a messaging app, you’ll have to use Emoji Kitchen stickers in Gboard to customize their color. *COLRv1 is available on Google Chrome and in Edge. Expect it in other browsers such as Firefox soon.

Customized Emotes

That’s right, you can change the color of emoji using Emoji Kitchen. No shade: I love that “pink heart” was anointed “most anticipated emoji” on social media earlier this summer, but what if changing the color of an emote happened with the simple click of a button and didn’t require the Unicode Consortium, responsible for digitizing the world’s languages, to run a cross-linguistic study of color terms just to add three new colored hearts?

Customizing and personalizing emotes is becoming more technically feasible, thanks to Noto Emoji. Look no further than Emoji Kitchen available on Gboard: type a sequence of emoji including a colored heart to change its color.

No lime emoji? No problem.

Red rose too romantic for the moment? Try a yellow rose.

Feeling goth?

Go Cardinals! ❤️

While technically these are stickers, they’re a lovely example of how emoji are rapidly evolving. Whether you’re a developer, a designer, or just a citizen of the internet, Noto Emoji has something for everyone, and we love seeing what you make with it.

Google Cloud & Kotlin GDE Kevin Davin helps others learn in the face of challenges

Posted by Kevin Hernandez, Developer Relations Community Manager

Kevin Davin speaking at the SnowCamp Conference in 2019

Kevin Davin has always had a passion for learning and helping others learn, no matter their background or the unique challenges they may face. He explains, “I want to learn something new every day, I want to help others learn, and I’m addicted to learning.” This mantra is evident in everything he does, from giving talks at numerous conferences to helping people from underrepresented groups overcome imposter syndrome and even become GDEs. In addition to learning, Kevin is passionate about diversity and inclusion efforts, partly inspired by his experience navigating the world with partial blindness.

Kevin has been a professional programmer for 10 years and has been in the field of computer science for about 20. Through the years, he has emphasized the importance of learning how and where to learn. For example, while he learned a lot studying at university, he learned just as much through his colleagues. In fact, it was through his colleagues that he picked up lessons in teamwork and the ability to learn from people with different points of view and experience. Since he was able to learn so much from those around him, Kevin wanted to pay it forward and started volunteering at a school for people with disabilities. Guided by the Departmental Centers for People with Disabilities, the program aims to teach coding languages and reintegrate students into a technical profession. During his time at this center, Kevin helped students practice what they learned and ultimately transition successfully into a new career.

During these experiences, Kevin was always involved in the developer community through open-source projects. It was through these projects that he learned about the GDE program and was connected to Google Developer advocates. Kevin was drawn to the GDE program because he wanted to share his knowledge with others and have direct access to Google in order to become an advocate on behalf of developers. In 2016, he discovered Kubernetes and helped his company at the time move to Google Cloud. He always felt like this model was the right solution and invested a lot of time to learn it and practice it. “Google Cloud is made for developers. It’s like a Lego set because you can take the parts you want and put it together,” he remarked.

The GDE program has given him access to the things he values most: being a part of a developer community, being an advocate for developers, helping people from all backgrounds feel included, and above all, an opportunity to learn something new every day. Kevin’s parting advice for hopeful GDEs is: “Even if you can’t reach the goal of being a GDE now, you can always get accepted in the future. Don’t be afraid to fail because without failure, you won’t learn anything.” With his involvement in the program, Kevin hopes to continue connecting with the developer community and learning while supporting diversity efforts.

Learn more about Kevin on Twitter & LinkedIn.

The Google Developer Experts (GDE) program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.

#WeArePlay | Meet Sam from Chicago. More stories from Peru, Croatia and Estonia.

Posted by Leticia Lago, Developer Marketing

A medical game for doctors, a language game for kids, a scary game for horror lovers and an escape room game for thrill seekers! In this latest batch of #WeArePlay stories, we’re celebrating the founders behind a wonderful variety of games from all over the world. Have a read and get gaming! 

To start, let’s meet Sam from Chicago. Coming from a family of doctors, his Dad challenged him to make a game to help those in the medical field. Sam agreed, made a game and months later discovered over 100,000 doctors were able to practice medical procedures. This early success inspired him to found Level Ex - a company of 135, making world-class medical games for doctors across the globe. Despite his achievements, his Dad still hopes Sam may one day get into medicine himself and clinch a Nobel prize.


Next, a few more stories from around the world:

  • Aldo and Sandro from Peru - founders of Dark Dome. They combine storytelling and art to make thrilling and chilling games, filled with plot twists and jump scares.


  • Vladimir, Tomislav and Boris from Croatia - founders of Pine Studio. They won the Indie Games Festival 2021 with their game Cats In Time. 


  • Kelly, Mikk, Reimo and Madde from Estonia - founders of ALPA kids. Their language games for children have a huge impact on early education and language preservation.


Check out all the stories now at g.co/play/weareplay and stay tuned for even more coming soon.




Google Dev Library Letters: 13th Issue

Posted by Garima Mehra, Program Manager


Welcome to the 13th Issue: ‘Google Dev Library Letters’ is a technology newsletter curated to bring you some of the best projects developed with Google tech and submitted to the Google Dev Library platform. We are back with another boost of inspiration for your next project!


Hero Content of the month

Check out shortlisted content from the Google technologies of your choice.

Android



Contact Store API by Alex Styl

Contact Store is a modern API that makes accessing contacts on Android devices simple. It solves the most frequent use cases and makes development enjoyable.




Custom Progress Indicator by Samson Achiaga

The CustomProgressIndicator library is a simple, customizable progress indicator that gives Android applications a polished feel. It saves developers time by providing a unique, customizable loading view.










Flutter




Numbers by Bulent Bariskilic

Discover an app designed to show facts about numbers using the http://numbersapi.com API. The project is written entirely in Dart.










Cupertino Icons Gallery by Cephas Brian

Get access to over 1,335 icons in one centralized place - the Cupertino Icons Gallery is an open source, cross-platform space to find all the icons used in Flutter.




Machine Learning



Learn how to build a system by considering two MLOps scenarios: whether the model needs to be replaced later, and whether the model itself has to evolve with the data.



Probing Vision Transformers by Sayak Paul & Aritra Roy

Explore tools in this repository to probe into the representations learned by different families of Vision Transformers.

Google Cloud



Combining Google Apps Script with Google AppSheet by Aryan Irani

Learn how to combine Google Apps Script with Google AppSheet to make automation even more powerful.




What a beautiful stream!! by Mandar Chaphalkar

Now that Google Cloud has made Datastream CDC generally available, learn how to create a stream in six simple steps.



Curators Corner

Meet our curators, who have been working behind the scenes to bring you the best content submissions.

Android


"Android development changes fast and it's great to see developers write blogs to help others learn.

It's a pleasure to be part of the Android community. I enjoy seeing the android community. I enjoy seeing the Android community flourish by collaborating with each other and sharing their learnings" 

 

Andres Sandoval

Sr. Strategist, Google


Machine Learning



"We are loving the TensorFlow.js submissions we have seen so far, and have no doubt future ones will continue to push the boundaries of what's possible in this space, and because it is web powered anyone anywhere can try the demos typically with the click of a link!"


Jason Mayes 

Web ML Developer Relations Lead, Google
 




Liked what you read? Check out the latest projects and community-authored content by visiting our home page or subscribing to our newsletter.