
Yasmine Evjen shares her passion for Android development and how to get involved at DevFest

Posted by Komal Sandhu - Global Program Manager, Google Developer Groups

“I would love to see more stepping out of our comfort zones, playing with technology, and bringing back that joy of what got us into Android development in the first place.”

Learn Android tools and tips from Android Community Lead Yasmine Evjen, and hear from her first-hand how to get involved.


Meet Rose Niousha, GDSC Waseda Founder & WTM Ambassador

Posted by Takuo Suzuki, Developer Relations Program Manager, Japan

Rose Niousha wanted to create a community where students could explore their technical interests without being held back by external factors or stereotypes. A passion for inclusion set her on a path to growing her Google Developer Student Clubs chapter and discovering the Women Techmakers (WTM) program.

After majoring in Computer Science at Waseda University, Rose realized many students had difficulty applying what they learned in school to practical environments and internships. Seeing a gap between theory and practice, she aimed to tackle these problems by founding a Google Developer Student Club (GDSC) on her campus. Through her leadership, the club became the largest chapter in Japan, with 177 active members. This post highlights how Rose created a big impact in her community and then became a WTM Ambassador.

How GDSC Waseda emphasized inclusivity in their community

Rose wanted the Waseda community to champion diversity and inclusion. When Rose selected her core team members, she aimed to ensure diverse perspectives and different educational backgrounds were represented. By recruiting members from other majors, people didn't feel like outsiders in the community. As a result, the members of GDSC Waseda came from both technical and non-technical majors, with 47.8% female students, a near 50-50 gender ratio that is not typical of tech communities.

The 2021-2022 GDSC Waseda core team (Tokyo, Japan)
After building a core team for the chapter, Rose decided that breaking the language barrier could establish a more inclusive community. Rose wanted students from all backgrounds to be able to communicate with each other so she chose English as the main language for the chapter. Since her university is home to an international community, this helped address a common challenge in Japanese universities: students' lack of confidence to discuss professional fields in English. This brought students together and helped everyone improve their language abilities.

 

Hosting programs to educate, inspire, and connect students


The chapter hosted over 30 activities like speaker sessions and hands-on programming workshops where students gained a practical understanding of tools like Flutter, Google Cloud Platform, and Firebase.

Flutter sessions taught students to create natively compiled mobile apps and submit them to the annual GDSC Solution Challenge. Firebase sessions helped backend teams manage user databases and gain a basic understanding of NoSQL databases. Students could then apply this technology to strengthen their projects' scalability and data security.

Through collaborations with other companies, GDSC Waseda helped students experience different disciplines such as programming, team management, and design thinking. These workshops helped students find internship opportunities; even students in non-technical fields, like the humanities, secured internships at tech firms in UX/UI design and product management roles because they had been exposed to the practical side of the industry.
Event Participants from GDSC Waseda (Tokyo, Japan)

Leadership in action: GDSC Solution Challenge efforts in Japan


As a GDSC lead, Rose encouraged participation in the annual GDSC Solution Challenge. She approached it as a starting point, rather than a goal. With this positive attitude, four teams from the chapter submitted projects and team mimi4me, a mobile safety application using Machine Learning, became the first team from Japan to be selected as one of the Global Top 50. The team is continuing to scale their solution by planning to publish the application on Google Play.

Rose Niousha gives certificate to the Mini Solution Challenge winning team (Tokyo, Japan)

To showcase the efforts of all the teams after the Solution Challenge, the chapter hosted a Mini Solution Challenge event. All teams gave presentations describing the solutions they submitted, and event participants voted for their favorite project. Additionally, another team of students from GDSC Waseda and Keio founded an e-commerce startup that grew out of their time at GDSC.

Reflections and accomplishments along the way


Through Google connections and using tools like LinkedIn to find other like-minded leaders, Rose reached out to many inspiring women working in the tech industry. She prepared for the events for weeks in advance by conducting several meetings with the speakers. Through these helpful sessions, GDSC Waseda was able to inspire many more women on campus to join their community and discover their interests. Now, GDSC Waseda is proud to have a diverse community with a 50-50 gender ratio in members.

“Being a GDSC Lead has brought me tremendous opportunities,” says Rose. “Since one of my biggest objectives was to tackle the gender barrier in the tech industry through my GDSC community, I actively hosted events during International Women's Day (IWD) month.”


Rose Niousha with the Global Head of Google Developer Community Program, Erica Hanson (New York City, New York, USA)

Building an inclusive future as a WTM ambassador

Rose worked with her Google Community Manager in Japan, Reisa Matsuda, who helped develop her passion for creating a diverse and inclusive community. Reisa told Rose about the Women Techmakers (WTM) program and encouraged her to take advantage of many opportunities. With mentorship and guidance, soon after Rose became a GDSC Lead, she joined Women Techmakers (WTM) as an Ambassador.


Reisa Matsuda and Rose at GDSC Leads Graduation

As an alumna of the Women Developer Academy (WDA), a program that equips women in tech with the skills, resources, and support they need to become tech presenters and speakers, Rose felt confident and prepared to speak as a panelist at this year’s International Women’s Day event hosted by WTM Tokyo, the largest IWD event in Japan with over 180 participants. During the talk, she shared her experience with the WDA program and personal stories related to WTM’s IWD 2022 “Progress, not Perfection” campaign.


Rose Niousha with the Head of Google Women Techmakers, Caitlin Morrissey (Mountain View, California, USA)

As part of her involvement with the WTM program, Rose attended Google I/O offline at Shoreline on May 11, 2022. It was the first in-person Google developer event she had ever attended.


“I was surprised by its massive scale,” says Rose. “Kicking off the event with an inspiring talk by Google's CEO, Sundar Pichai, I had an amazing time listening to talks and networking. During my time in California, I was able to meet with many inspiring students and professionals, and bring unique ideas back to my chapter.”

 

Join a Google Developer Student Club near you

Google Developer Student Clubs (GDSC) are community groups for college and university students like Rose who are interested in Google developer technologies. With 1,800+ chapters in 112 countries, GDSC aims to empower developers like Rose to help their communities by building technical solutions. If you’re a student and would like to join a Google Developer Student Club community, look for a chapter near you here, or visit the program page to learn more about starting one in your area.

Learn more about Women Techmakers

Google’s Women Techmakers program provides visibility, community, and resources for women in technology. Women Techmakers Ambassadors are global leaders passionate about impacting their communities and building a world where all women can thrive in tech.

Google Cloud Next, developer style

Posted by Jeana Jorgensen, Senior Director, Cloud Product Marketing and Sustainability, Google

Google Cloud Next is coming up on October 11 - 13. Register at no cost today and join us live to explore what’s new and what’s coming next in Google Cloud.

You’ll find lots of developer-specific content in the Developer Zone. Here’s a preview of what we’ve curated for you this year.

A developer keynote to get you thinking about the future

For the Next developer keynote we’re going to share our top 10 cloud technology predictions that we believe could come true by the end of 2025.

Hear from our experts who are on the cutting edge of many of these technology trends, whether it's AI, data and analytics, or modern cloud infrastructure:

Jeanine Banks, VP of Developer Products and Community

Eric Brewer, VP of Infrastructure and Google Fellow

Andi Gutmans, VP & GM of Databases


DevFests to find your people

DevFests are local tech conferences hosted by Google Developer Groups around the world during Next ‘22. The content of each one will vary to suit the local developer community. You might find hands-on labs, technical talks, or simply a chance to connect.

To find a DevFest near you, visit the DevFest page and filter the map by Google Cloud Next. You can RSVP via the map interface. Quick side tip: this is separate from Next registration.

Challenges to flex your skills

Drone Racing

That’s right, you can use your development skills to influence drone races. Welcome to the Google Cloud Fly Cup Challenge!

In the challenge, you can use Drone Racing League (DRL) race data and Google Cloud analytics tools to predict race outcomes and then provide tips to DRL pilots to help enhance their season performance. Compete for the chance to win a trip to the season finale of the 2022-23 DRL Algorand World Championship and be celebrated on stage.

Google Clout Challenge

Spice up the middle of your week with a no-cost, 20-minute competition posted each Wednesday until October 10. All challenges will take place in Google Cloud Skills Boost. And as a new user, you can get 30 days of no-cost access to Google Cloud Skills Boost* – plenty of time to complete the whole challenge.

Test your knowledge against your fellow developers and race the clock to see how fast you can complete the challenge. The faster you go, the higher your score.

Can you top your last score?

To participate, follow these steps:

  1. Enroll - Go to our website, click the link to the weekly challenge, and enroll in the quest using your Google Cloud Skills Boost account.
  2. Play - Attempt the challenge as many times as you want. Remember the faster you are, the higher your score!
  3. Share - Share your score card on Twitter/LinkedIn using #GoogleClout
  4. Win - Complete all 10 weekly challenges to earn exclusive #GoogleClout badges

*Requires credit card

Innovator Hive livestreams to get the latest tech news

Innovator Hive livestreams are your unique opportunity to hear from Google Cloud executives and engineers as we announce the latest innovations. Join any livestream to explore technical content featuring new Google Cloud technologies.

Save your seat at Next

We at Google are getting excited for Next ‘22. It’s this year’s big moment to dive into the latest innovations, hear from Google experts, get inspired by what your peers are doing with technology, and try out some new skills.

There’s so much good stuff lined up – all we’re missing at this point is some #GoogleClout badge boasting, drone stat analyzing, technology-minded people to geek out with. Register for Next ‘22 today and join the fun live in October.

See you there!

Register now for Firebase Summit 2022!

Posted by Grace Lopez, Product Marketing Manager

One of the best things about Firebase is our community, so after three long years, we’re thrilled to announce that our seventh annual Firebase Summit is returning as a hybrid event with both in-person and virtual experiences! Our 1-day, in-person event will be held at Pier 57 in New York City on October 18, 2022. It will be a fun reunion for us to come together to learn, network, and share ideas. But if you’re unable to travel, don’t worry, you’ll still be able to take part in the activities online from your office/desk/couch wherever you are in the world.

Join us to learn how Firebase can help you accelerate app development, run your app with confidence, and scale your business. Registration is now open for both the physical and virtual events! Read on for more details on what to expect.


Keynote full of product updates

In-person and livestreamed

We’ll kick off the day with a keynote from our leaders, highlighting all the latest Firebase news and announcements. With these updates, our goal is to give you a seamless and secure development experience that lets you focus on making your app the best it can be.

#AskFirebase Live

In-person and livestreamed

Have a burning question you want to ask us? We’ll take questions from our in-person and virtual attendees and answer them live on stage during a special edition of everyone’s favorite, #AskFirebase.

NEW! Ignite Talks

In-person and livestreamed

This year at Firebase Summit, we’re introducing Ignite Talks: bite-sized talks of 7 to 15 minutes focused on hot topics, tips, and tricks to help you get the most out of our products.

NEW! Expert-led Classes

In-person and will be released later

You’ve been asking us for more technical deep dives, so this year we’ll also be running expert-led classes at Firebase Summit. These platform-specific classes will be designed to give you comprehensive knowledge and hands-on practice with Firebase products. Initially, these classes will be exclusive to in-person attendees, but we’ll repackage the content for self-paced learning and release them later for our virtual attendees.

We can’t wait to see you

In addition, Firebase Summit will be full of all the other things you love: interactive demos, lots of networking opportunities, exciting conversations with the community…and a few surprises too! The agenda is now live, so don't forget to check it out! In the meantime, register for the event, subscribe to the Firebase YouTube channel, and follow us on Twitter and LinkedIn to join the conversation using #FirebaseSummit.

Just launched: Apply for support from Google Play’s $2M Indie Games Fund in Latin America

Posted by Patricia Correa, Director, Global Developer Marketing

As part of our commitment to helping all developers grow on our platform, at Google Play we have various programs focused on supporting small games studios. A few weeks ago we announced the winners of the Indie Games Festival in Europe, Korea and Japan, and the 2022 class of the Indie Games Accelerator.

Today, we are launching the Indie Games Fund in Latin America. We will be awarding $2 million in non-dilutive cash awards, in addition to hands-on support, to selected small games studios based in LATAM, to help them build and grow their businesses on Google Play.

The program is open to indie game developers who have already launched a game, whether it’s on Google Play or another mobile platform, PC, or console. Each selected recipient will get between $150,000 and $200,000 to help them take their game to the next level and build a successful business.

Check out all eligibility criteria and apply now. Priority will be given to applications received by 12:00 p.m. BRT, 31 October, 2022.

For more updates about all our programs, resources and tools for indie game developers, follow us on Twitter @GooglePlayBiz and Google Play business community on LinkedIn.








Introducing Discovery Ad Performance Analysis

Posted by Manisha Arora, Nithya Mahadevan, and Aritra Biswas, gPS Data Science team

Overview of Discovery Ads and need for Ad Performance Analysis

Discovery ads, launched in May 2019, allow advertisers to easily extend the reach of their social ads to users across YouTube, Google Feed, and Gmail worldwide. They give brands a new opportunity to reach 3 billion people as they explore their interests and search for inspiration across their favorite Google feeds (YouTube, Gmail, and Discover), all with a single campaign. Learn more about Discovery ads here.


Because of these unique characteristics, customers need a data-driven method to identify the textual and imagery elements in Discovery ad copies that drive the Interaction Rate of their Discovery ad campaigns, where an interaction is defined as the main user action associated with an ad format: clicks and swipes for text and Shopping ads, views for video ads, calls for call extensions, and so on.

Interaction Rate = interaction / impressions
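As a quick illustration, the rate above can be computed directly (a trivial sketch; the function name is ours, not from the analysis pipeline):

```python
def interaction_rate(interactions: int, impressions: int) -> float:
    """Interaction Rate = interactions / impressions."""
    if impressions <= 0:
        raise ValueError("impressions must be positive")
    return interactions / impressions

# e.g. 50 interactions over 1,000 impressions gives a rate of 0.05
```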


“Customers need a data driven method to identify textual & imagery elements in Discovery Ad copies that drive Interaction Rate of their campaigns.”

- Manisha Arora, Data Scientist



Our analysis approach:

The Data Science team at Google is investing in a machine learning approach to uncover insights from complex unstructured data and provide machine learning based recommendations to our customers. Machine Learning helps us study what works in ads at scale and these insights can greatly benefit the advertisers.

We follow a six-step based approach for Discovery Ad Performance Analysis:
  • Understand Business Goals
  • Build Creative Hypothesis
  • Data Extraction
  • Feature Engineering
  • Machine Learning Modeling
  • Analysis & Insight Generation

To begin with, we work closely with advertisers to understand their business goals, current ad strategy, and future goals. We map these closely to industry insights to draw a larger picture and provide a customized analysis for each advertiser. As a next step, we build hypotheses that best describe the problem we are trying to solve. An example of a hypothesis: “Do superlatives (words like ‘top’ and ‘best’) in the ad copy drive performance?”


“Machine Learning helps us study what works in ads at scale and these insights can greatly benefit the advertisers.”

- Manisha Arora, Data Scientist


Once we have a hypothesis we are working towards, the next step is to deep-dive into the technical analysis.

Data Extraction & Pre-processing


Our initial dataset includes raw ad text, imagery, performance KPIs & target audience details from historic ad campaigns in the industry. Each Discovery ad contains two text assets (Headline and Description) and one image asset. We then apply ML to extract text and image features from these assets.

Text Feature Extraction

We apply NLP to extract text features from the ad text. We pass the raw text of the ad headline and description through Google Cloud’s Language API, which parses the raw text into our feature set: commonly used keywords, sentiment, etc.
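As a rough sketch of this step, the snippet below flattens a mocked Language API-style sentiment response plus simple counts into a feature row. The field names and the superlative list are illustrative assumptions; the production pipeline calls the Language API itself.

```python
# Sketch: turning a (mocked) Cloud Language API-style response into a
# flat feature row. Field names are illustrative, not the exact schema.

def text_features(headline: str, description: str, api_response: dict) -> dict:
    text = f"{headline} {description}"
    words = text.split()
    return {
        "char_count": len(text),
        "word_count": len(words),
        # sentiment as returned by the (mocked) API response
        "sentiment_score": api_response.get("documentSentiment", {}).get("score", 0.0),
        # hypothesis-driven feature: does the copy contain a superlative?
        "has_superlative": any(
            w.lower().strip(".,!") in {"top", "best", "greatest"} for w in words
        ),
    }

mock_response = {"documentSentiment": {"score": 0.8}}
row = text_features("Best rates in town", "Apply today", mock_response)
```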



Image Feature Extraction

We apply image processing to extract image features from the ad copy imagery. We pass the raw images through Google Cloud’s Vision API and extract image components including objects, people, background, lighting, etc. Together, these form the holistic set of features extracted from the ad content.
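A minimal sketch of this flattening step is shown below, parsing a mocked Vision API-style annotation payload into image features. The keys loosely mirror the REST response; treat them as assumptions rather than the exact schema.

```python
# Sketch: flattening a (mocked) Vision API-style annotation payload
# into image features such as face count, logo presence, and labels.

def image_features(annotations: dict) -> dict:
    labels = [l["description"] for l in annotations.get("labelAnnotations", [])]
    return {
        "num_faces": len(annotations.get("faceAnnotations", [])),
        "has_logo": bool(annotations.get("logoAnnotations")),
        "has_person": "Person" in labels,
        "labels": labels,
    }

mock = {
    "labelAnnotations": [{"description": "Person"}, {"description": "Beach"}],
    "faceAnnotations": [{}],  # one detected face
}
feats = image_features(mock)
```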

Feature Design


Text Feature Design

There are two types of text features included in DisCat:

1. Generic text features
  • These are features returned by Google Cloud’s Language API, including sentiment, word/character count, tone (imperative vs. indicative), symbols, most frequent words, and so on.

2. Industry-specific value propositions
  • These are features that apply only to a specific industry (e.g. finance) and are manually curated by the data science developer in collaboration with specialists and other industry experts.
  • For example, for the finance industry, one value proposition can be “Price Offer”. A list of keywords/phrases related to price offers (e.g. “discount”, “low rate”, “X% off”) is curated based on domain knowledge to identify this value proposition in the ad copies. NLP techniques (e.g. WordNet synsets) and manual examination are used to make sure this list is inclusive and accurate.
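The curated-list matching can be sketched as below. The phrase list is a hand-written placeholder; in practice it would be expanded with WordNet synsets and manual review as described above.

```python
# Sketch of industry-specific value-proposition tagging: a curated
# phrase list matched against the lower-cased ad copy. The phrases
# here are illustrative placeholders, not the production list.

PRICE_OFFER_PHRASES = ["discount", "low rate", "% off"]

def has_price_offer(ad_copy: str) -> bool:
    """True if the ad copy contains any curated 'Price Offer' phrase."""
    text = ad_copy.lower()
    return any(phrase in text for phrase in PRICE_OFFER_PHRASES)
```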
Image Feature Design

Like the text features, image features can largely be grouped into two categories:
1. Generic image features
  • These features apply to all images and include the color profile, whether any logos were detected, how many human faces are included, etc.
  • The face-related features also include some advanced aspects: we look for prominent smiling faces looking directly at the camera, and we differentiate between individuals vs. small groups vs. crowds.

2. Object-based features
  • These features are based on the list of objects and labels detected in all the images in the dataset, which can often be a massive list including generic objects like “Person” and specific ones like particular dog breeds.
  • The biggest challenge here is dimensionality: we have to cluster related objects into logical themes like natural vs. urban imagery.
  • We currently take a hybrid approach to this problem: we use unsupervised clustering to create an initial clustering, then manually revise it as we inspect sample images. The process is:
  • Extract object and label names (e.g. Person, Chair, Beach, Table) from the Vision API output and filter out the most uncommon objects
  • Convert these names to 50-dimensional semantic vectors using a Word2Vec model trained on the Google News corpus
  • Using PCA, extract the top 5 principal components from the semantic vectors. This step takes advantage of the fact that each Word2Vec neuron encodes a set of commonly adjacent words, and different sets represent different axes of similarity and should be weighted differently
  • Use an unsupervised clustering algorithm, such as k-means or DBSCAN, to find semantically similar clusters of words
  • We are also exploring augmenting this approach with a combined distance metric, d(w1, w2) = a * (semantic distance) + b * (co-appearance distance), where the co-appearance distance is a Jaccard distance
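The combined metric can be sketched as follows, with cosine distance between word vectors as the semantic term and Jaccard distance between the sets of ads each word appears in as the co-appearance term. The vectors, sets, and weights a and b below are toy assumptions, not production values.

```python
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Semantic distance between two word vectors."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def jaccard_distance(s1: set, s2: set) -> float:
    """Co-appearance distance between the sets of ads each word appears in."""
    return 1.0 - len(s1 & s2) / len(s1 | s2)

def combined_distance(vec1, vec2, ads1, ads2, a=0.7, b=0.3) -> float:
    """d(w1, w2) = a * (semantic distance) + b * (co-appearance distance)."""
    return a * cosine_distance(vec1, vec2) + b * jaccard_distance(ads1, ads2)

# Toy example: identical vectors (semantic distance 0), half-overlapping
# ad sets (Jaccard distance 0.5), so d = 0.7*0 + 0.3*0.5 = 0.15
v1, v2 = np.array([1.0, 0.0]), np.array([1.0, 0.0])
d = combined_distance(v1, v2, {1, 2, 3}, {2, 3, 4})
```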

Each of these components represents a choice the advertiser made when creating the messaging for an ad. Now that we have a variety of ads broken down into components, we can ask: which components are associated with ads that perform well or not so well?

We use a fixed effects model[1] to control for unobserved differences in the contexts in which different ads were served. This is because the features we measure are observed multiple times in different contexts, i.e., ad copy, audience group, time of year, and device on which the ad is served.

The trained model seeks to estimate the impact of individual keywords, phrases, and image components in the Discovery ad copies. The model estimates Interaction Rate (denoted as ‘IR’ in the following formulas) as a function of individual ad copy features plus controls:



We use ElasticNet to spread the effect of features in the presence of multicollinearity and to improve the explanatory power of the model:
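A minimal sketch of this modeling step is shown below: an ElasticNet regression of Interaction Rate on binary creative-feature columns. The data is synthetic and the hyperparameters are illustrative; the real model also includes the fixed-effect controls described above.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)

# 200 ads, 5 binary creative features (e.g. "has superlative", "has face")
X = rng.integers(0, 2, size=(200, 5)).astype(float)

# Synthetic ground truth: baseline IR of 5% plus per-feature effects
true_coef = np.array([0.02, -0.01, 0.0, 0.03, 0.0])
y = 0.05 + X @ true_coef + rng.normal(0, 0.001, size=200)

# ElasticNet mixes L1 and L2 penalties, spreading effect estimates
# across correlated features instead of arbitrarily picking one
model = ElasticNet(alpha=1e-4, l1_ratio=0.5).fit(X, y)
```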


“Machine Learning model estimates the impact of individual keywords, phrases, and image components in discovery ad copies.”

- Manisha Arora, Data Scientist

 

Outputs & Insights


Outputs from the machine learning model help us determine the significant features. The coefficient of each feature represents its percentage-point effect on CTR.

In other words, if the mean CTR without a feature is X% and feature ‘xx’ has a coefficient of Y, then the mean CTR with feature ‘xx’ included will be (X + Y)%. This can help us determine the expected CTR if the most important features are included in the ad copies.
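The percentage-point interpretation above reduces to simple arithmetic (the function name and figures are illustrative):

```python
def expected_ctr(base_ctr: float, feature_coefs: list[float]) -> float:
    """Baseline CTR (in %) plus the coefficients of the included features."""
    return base_ctr + sum(feature_coefs)

# A baseline CTR of 2.0% with two included features whose coefficients
# are +0.3 and +0.5 percentage points gives an expected CTR of 2.8%
```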

Key-takeaways (sample insights):

We analyze the keywords and imagery tied to the unique value propositions of the product being advertised. There are six key value propositions we study in the model, and the analyses yield sample insights for each.
Shortcomings:

Although insights from DisCat are quite accurate and highly actionable, the model does have a few limitations:
1. The current model does not consider groups of keywords that might be driving ad performance instead of individual keywords (Example - “Buy Now” phrase instead of “Buy” and “Now” individual keywords).
2. Inference and predictions are based on historical data and aren’t necessarily an indication of future success.
3. Insights are based on industry-level data and may need to be tailored for a given advertiser.

DisCat breaks down exactly which features are working well for an ad and which have scope for improvement. These insights can help identify high-impact keywords, which can then be used to improve ad quality and, in turn, business outcomes. As a next step, we recommend testing the new ad copies with experiments for a more robust analysis. Google Ads' A/B testing feature also lets you create and run experiments to validate these insights in your own campaigns.

Summary


Discovery Ads are a great way for advertisers to extend their social outreach to millions of people across the globe. DisCat helps break down Discovery ads by analyzing text and images separately, using advanced ML/AI techniques to identify the key aspects of the ad that drive greater performance. These insights help advertisers identify room for growth, identify high-impact keywords, and design better creatives that drive business outcomes.

Acknowledgement


Thank you to Shoresh Shafei and Jade Zhang for their contributions. Special mention to Nikhil Madan for facilitating the publishing of this blog.

Notes

  1. Greene, W. H. (2011). Econometric Analysis, 7th ed. Prentice Hall; Cameron, A. Colin & Trivedi, Pravin K. (2005). Microeconometrics: Methods and Applications.

Come to the Tag1 & Google Performance Workshop at DrupalCon Europe 2022, Prague

Posted by Andrey Lipattsev, EMEA CMS Partnerships Lead

TL;DR: If you’re attending @DrupalConEur submit your URL @ https://bit.ly/CWV-DrupalCon-22 to get your UX & performance right on #Drupal at the Tag1 & Google interactive workshop.


Getting your User Experience right, which includes performance, is critical for success. It’s a key driver of many success metrics (https://web.dev/tags/web-vitals) and a factor taken into account by platforms, including search engines, that surface links to your site (https://developers.google.com/search/docs/advanced/experience/page-experience).

Quantifying User Experience is not always easy, so one way to measure, track, and improve it is with Core Web Vitals (CWV, https://web.dev/vitals/). Building a site with great CWV on Drupal is, on average, easier than on many other platforms (https://bit.ly/CWV-tech-report), yet there are certain tips and pitfalls you should be aware of.

In this workshop the team from Tag1 and Google (Michael Meyers, Andrey Lipattsev and others) will use real life examples of Drupal-based websites to illustrate some common pain points and the corresponding solutions. If you would like us to take a look at your website and provide actionable advice, please submit the URL via this link (https://bit.ly/CWV-DrupalCon-22). The Workshop is interactive, so bring your laptop - we'll get you up and running and teach you hands-on how to code for the relevant improvements.

We cannot guarantee that all the submissions will be analysed as this depends on the number of submissions and the time that we have. However, we will make sure that all the major themes cutting across the submitted sites will be covered with relevant solutions.

See you in Prague!

Date & Time: Wednesday 21.09.2022, 16:15-18:00

Helping Developers Build with Google, Matters

Posted by Jeannie Zhang and Kevin Po; Product Managers, Nest

As the smart home industry prepares for a major shift in usability and interoperability with Matter launching later this year, we are working to help you build more devices and connections with Google products and beyond.

At Google I/O this year, we shared updates on how Google is continuing to support smart home developers, including the launch of our new and improved Google Home Developer Center. Today, we are excited to share that the Google Home Developer Console is now in Developer Preview at console.home.google.com.

What is the Google Home Developer Console?


The Google Home Developer Console is a guided flow for developers looking to integrate with Google. It provides everything needed to build intelligent and innovative smart home products with Matter. Because it simplifies the process of building Matter-enabled smart home products, you can spend more time innovating with your devices and less time on the basics.

The console is a part of the Google Home Developer Center we announced earlier this year; the go-to starting place for anyone interested in developing smart home devices and apps with Google.

Google Home Device SDK


Along with this new console, we have also released two new software development kits to make building Matter devices with Google easier. We’ve created the Google Home Device SDK, which extends the open-source Matter SDK with development, testing, and go-to-market tools, making it the fastest and easiest way to build Matter devices.

Created with both new and experienced smart home developers in mind, the Google Home Device SDK includes tools such as code samples, codelabs, and a Matter virtual device to help you easily start building, integrating, and testing your Matter devices with Google.

At I/O this year, we announced Intelligence Clusters, which will allow you to access Google intelligence about the home locally and directly on your Matter devices, using a similar structure to clusters within Matter. To protect the privacy and security of our users, we have built guardrails into our Intelligence Clusters, beginning with Home & Away, to ensure that user information is always encrypted, processed locally, and only with user consent and visibility. You can learn more about these guardrails and fill out our interest form here.

Google Home Mobile SDK


Apps are invaluable to the user experience for your devices, so we have also deployed the Google Home Mobile SDK, a tool to build Android Apps that connect directly with Matter devices. Our mobile SDK streamlines the setup process, creating a more consistent and reliable experience for Android users. These APIs make it easier to set up devices in your app, Google Home, and third party ecosystems, and to share devices with other ecosystems and apps.

Why build with Google?


Even with Matter making interoperability the standard, determining the best platform for your smart devices is still an important consideration. Google's end-to-end tools for Matter devices and apps complement your existing development platforms, accelerate time-to-market for your devices, improve reliability, and let you differentiate with Google Home while having interoperability with other Matter platforms.

Getting Started


Looking to get started building with Matter? Before hopping into the Google Home Developer Console, head over to our Get Started page to gather all the information you need to know before building.

We’re committed to supporting smart home developers that build and innovate with Google, by providing easy and high-quality resources. The latest tools are just an example of our ongoing commitment to be partners in this industry. We can’t wait to see what you build!

Updates to Emoji: New Characters, New Animation, New Color Customization, and More!

Posted by Jennifer Daniel, Emoji and Expression Creative Director

It’s official: new emoji are here, there, and everywhere.

But what exactly is “new” and where is “here”? Great question.

Emoji have long eclipsed their humble beginnings in SMS text messages in the 1990s. Today, they appear in places you'd never expect, like self-checkout kiosks, television screens, and yes, even refrigerators. As emoji increase in popularity and advance in how they are used, the Noto Emoji project has stepped up our emoji game to help everyone see the latest emoji without having to buy a new device (or a new refrigerator).

Over the past couple of years, we’ve been introducing a suite of updates to make it easier than ever for apps to embrace emoji. Today, we’re taking it a step further by introducing new emoji characters (in color and in monochrome), metadata like shortcodes, a new font standard called COLRv1, open source animated emotes, and customization features in Emoji Kitchen. Now it’s easier than ever to operate at the speed of language online.

New Emoji!

First and foremost, earlier today the Unicode Consortium published all data files associated with the Unicode 15.0 release, including 31 new emoji characters.

The collection includes a wing (🪽), a leftwards and a rightwards pushing hand, and a shaking face (🫨). Now you too can make pigs fly, high five, and shake in your boots, all in emoji form.

These new characters bring our emoji total to 3,664, all of which are coming to Android soon and will become available across Google products early next year.

Can’t wait until then? You can download the font and start using it today (wherever color vector fonts are supported). Our entire emoji library, including the source files and associated metadata like shortcodes, is open source on GitHub for you to build with and build on. (Note: keep an eye out for those source files on GitHub later this week.)

And before you ask: yes, the variable monochrome version of Noto Emoji that launched earlier this year is fully up to date with the new Unicode Standard.

Dancing Emotes

While emoji today are almost unrecognizable from what they were in the late 1990s, there are some things I miss about the original emoji sets from Japan. Notably, the animation. Behold the original dancer emoji via phone operator KDDI:
 

This animation is so good. Go get it, KDDI dancer.

Just as language doesn’t stand still, neither do emoji. Say hello to our first set of animations!

Scan the collection, download in your preferred file format, and watch them dance. You may have already seen a few in the Messages by Google app which supports these today. The artwork is available under the CC BY 4.0 license.  


New Color Font Support

Emoji innovation isn't limited to mobile anymore, and there is a lot to be explored in web environments. Thanks to a new font format called COLRv1, color fonts — such as Noto Color Emoji — can render with the crispness we’ve come to expect from digital imagery. You can also do some sweet things to customize the appearance of color fonts. If you’re viewing this on the latest version of Chrome, go ahead and give it a whirl.
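For the curious, this kind of recoloring is exposed to web developers through the CSS font-palette features that pair with COLRv1. Below is a minimal sketch; the font file URL and palette color indices are placeholders, not the actual palette layout of Noto Color Emoji:

```css
/* Load a COLRv1 build of the font (URL is illustrative). */
@font-face {
  font-family: "Noto Color Emoji";
  src: url("NotoColorEmoji-COLRv1.woff2") format("woff2");
}

/* Define a custom palette: override selected palette entries
   (the indices 0–2 here are hypothetical) with goth-friendly grays. */
@font-palette-values --goth {
  font-family: "Noto Color Emoji";
  override-colors: 0 #111111, 1 #444444, 2 #888888;
}

/* Apply the palette to any element rendering the color font. */
.goth-emoji {
  font-family: "Noto Color Emoji";
  font-palette: --goth;
}
```

Because the recoloring happens at render time in the browser, the underlying characters stay plain Unicode text; only their presentation changes.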


(Having trouble using this demo? Please update to the latest version of Chrome.)

Make a vaporwave duck


Or a duck from the 1920s


Softie duckie

… a sunburnt duck?


Before you ask: no, you can’t send the 1920s duck as a traditional emoji using the COLRv1 tech; it’s more a demonstration of what this new font standard makes possible. And because your ducks render in the browser(*), interoperability isn’t an issue! Take our vibrant, colorful drawings and stretch your imagination of what it even means to be an emoji. It’s an exciting time to be emoji-adjacent.

If you’d like to send goth emoji today in a messaging app, you’ll have to use Emoji Kitchen stickers in Gboard to customize their color. *COLRv1 is available on Google Chrome and in Edge. Expect it in other browsers such as Firefox soon.

Customized Emotes

That’s right: you can change the color of emoji using Emoji Kitchen. No shade: I love that “pink heart” was anointed “most anticipated emoji” on social media earlier this summer, but what if changing the color of an emote happened with the simple click of a button, and didn’t require the Unicode Consortium, the body responsible for digitizing the world’s languages, to conduct a cross-linguistic study of color terms just to add three new colored hearts?

Customizing and personalizing emotes is becoming more technically feasible, thanks to Noto Emoji. Look no further than Emoji Kitchen available on Gboard: type a sequence of emoji including a colored heart to change its color.

No lime emoji? No problem.







Red rose too romantic for the moment? Try a yellow rose.








Feeling goth?



Go Cardinals! ❤️









While technically these are stickers, it’s a lovely example of how rapidly emoji are evolving. Whether you're a developer, a designer, or just a citizen of the Internet, Noto Emoji has something for everyone, and we love seeing what you make with it.