Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 93 (93.0.4577.25) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Aussies can now save vaccination certificates on Android devices

Since the onset of COVID-19, Australia has faced many seasons and surges of the pandemic. As we continue to battle the Delta variant across multiple states, Government and health authorities are working harder than ever to test and vaccinate people – and pave the way to safely open up communities. 

To support these efforts, we’ve been working with Services Australia to give you a convenient and secure way to view, save and show your vaccination status and information, straight from your smartphone. 


So, we just expanded our COVID Card feature to Australia – providing a simple, private and secure way to save and access vaccination information on Android smartphones after you’ve had your second jab. Vaccine information is only stored on your device (it is not stored by Google). 

To access your vaccination certificate, simply log in to the Express Plus Medicare app or via the Medicare portal of the MyGov website and select the options to ‘View your COVID-19 digital certificate’ and ‘Save to Phone.’  

For added convenience, you can access your vaccine information even when you’re offline, which means you do not need a mobile or Wi-Fi connection. If you have the Google Pay app on your Android phone, you can also access the certificate from the same place where you access your other cards and passes.

Every time you access your certificate, you will be asked for the password, PIN or biometric method that you have set up for your Android device. If you do not have this set up on your phone, you’ll be prompted to do so to strengthen security. 

The launch of the COVID Card feature in Australia builds on the many ways we’ve been working to help authorities, businesses and Australians stay safe and informed during the pandemic. This includes surfacing the latest updates, health and travel advice from authorities, and giving almost $5M AUD in ad grants to the Federal Government to support these public health initiatives. We’ve provided regular updates on Search trends and launched COVID-19 Community Mobility Reports to offer local insights on the impact of social distancing. And to help meet the cost of the virus, we’ve offered $20M in ad credits to businesses to support their pivot to online trading during these challenging times. To keep across the latest news, check out our local COVID microsite featuring the latest updates and health resources: google.com.au/covid19. 

Beta Channel Update for Desktop

The Beta channel has been updated to 93.0.4577.25 for Windows, Linux and Mac.


A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Prudhvikumar Bommana

Two New Datasets for Conversational NLP: TimeDial and Disfl-QA

A key challenge in natural language processing (NLP) is building conversational agents that can understand and reason about different language phenomena that are unique to realistic speech. For example, because people do not always premeditate exactly what they are going to say, a natural conversation often includes interruptions to speech, called disfluencies. Such disfluencies can be simple (like interjections, repetitions, restarts, or corrections), which simply break the continuity of a sentence, or more complex semantic disfluencies, in which the underlying meaning of a phrase changes. In addition, understanding a conversation also often requires knowledge of temporal relationships, like whether an event precedes or follows another. However, conversational agents built on today’s NLP models often struggle when confronted with temporal relationships or with disfluencies, and progress on improving their performance has been slow. This is due, in part, to a lack of datasets that involve such interesting conversational and speech phenomena.

To stir interest in this direction within the research community, we are excited to introduce TimeDial, for temporal commonsense reasoning in dialog, and Disfl-QA, which focuses on contextual disfluencies. TimeDial presents a new multiple-choice span-filling task targeted at temporal understanding, with an annotated test set of ~1.1k dialogs. Disfl-QA is the first dataset containing contextual disfluencies in an information-seeking setting, namely question answering over Wikipedia passages, with ~12k human-annotated disfluent questions. These benchmark datasets are the first of their kind and show a significant gap between human performance and current state-of-the-art NLP models.

TimeDial
While people can effortlessly reason about everyday temporal concepts, such as duration, frequency, or relative ordering of events in a dialog, such tasks can be challenging for conversational agents. For example, current NLP models often make a poor selection when tasked with filling in a blank (as shown below) that assumes a basic level of world knowledge for reasoning, or that requires understanding explicit and implicit inter-dependencies between temporal concepts across conversational turns.

It is easy for a person to judge that “half past one” and “quarter to two” are more plausible options to fill in the blank than “half past three” and “half past nine”. However, performing such temporal reasoning in the context of a dialog is not trivial for NLP models, as it requires appealing to world knowledge (i.e., knowing that the participants are not yet late for the meeting) and understanding the temporal relationship between events (“half past one” is before “three o’clock”, while “half past three” is after it). Indeed, current state-of-the-art models like T5 and BERT end up picking the wrong answers — “half past three” (T5) and “half past nine” (BERT).

The TimeDial benchmark dataset (derived from the DailyDialog multi-turn dialog corpus) measures models’ temporal commonsense reasoning abilities within a dialog context. Each of the ~1.5k dialogs in the dataset is presented in a multiple choice setup, in which one temporal span is masked out and the model is asked to find all correct answers from a list of four options to fill in the blank.

In our experiments, we found that while people can easily answer these multiple choice questions (at 97.8% accuracy), state-of-the-art pre-trained language models still struggle on this challenge set. We experiment across three different modeling paradigms: (i) classification over the four provided options using BERT, (ii) mask filling for the masked span in the dialog using BERT-MLM, and (iii) generative methods using T5. We observe that all the models struggle on this challenge set, with the best variant only scoring 73%.

Model   2-best Accuracy
Human   97.8%
BERT - Classification   50.0%
BERT - Mask Filling   68.5%
T5 - Generation   73.0%
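To make the "2-best accuracy" in the table above concrete: each TimeDial instance offers four candidate spans, more than one of which can be acceptable. The sketch below is one plausible formulation under our own assumptions (an instance counts as correct only when the model's two top-scoring options are both acceptable); it is illustrative, not the paper's exact evaluation code.

```python
def two_best_accuracy(instances):
    """Score a list of (scores, correct_indices) pairs.

    scores: one model score per candidate option (four in TimeDial).
    correct_indices: set of indices of the acceptable options.
    An instance counts as correct when the two top-scoring options
    are both in the acceptable set.
    """
    hits = 0
    for scores, correct in instances:
        # Indices of the two highest-scoring options.
        top2 = sorted(range(len(scores)),
                      key=lambda i: scores[i], reverse=True)[:2]
        if all(i in correct for i in top2):
            hits += 1
    return hits / len(instances)

# Toy example: two instances, each with four options and two correct answers.
data = [
    ([0.9, 0.8, 0.1, 0.2], {0, 1}),  # top-2 = {0, 1}: correct
    ([0.9, 0.1, 0.8, 0.2], {0, 1}),  # top-2 = {0, 2}: wrong
]
print(two_best_accuracy(data))  # → 0.5
```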

Qualitative error analyses show that the pre-trained language models often rely on shallow, spurious features (particularly text matching), instead of truly doing reasoning over the context. It is likely that building NLP models capable of performing the kind of temporal commonsense reasoning needed for TimeDial requires rethinking how temporal objects are represented within general text representations.

Disfl-QA
As disfluency is inherently a speech phenomenon, it is most commonly found in text output from speech recognition systems. Understanding such disfluent text is key to building conversational agents that understand human speech. Unfortunately, research in the NLP and speech community has been impeded by the lack of curated datasets containing such disfluencies, and the datasets that are available, like Switchboard, are limited in scale and complexity. As a result, it’s difficult to stress test NLP models in the presence of disfluencies.

Disfluency   Example
Interjection   When is, uh, Easter this year?
Repetition   When is Eas- Easter this year?
Correction   When is Lent, I mean Easter, this year?
Restart   How much, no wait, when is Easter this year?
Different kinds of disfluencies. The reparandum (words intended to be corrected or ignored; in red), interregnum (optional discourse cues; in grey) and repair (the corrected words; in blue).

Disfl-QA is the first dataset containing contextual disfluencies in an information seeking setting, namely question answering over Wikipedia passages from SQuAD. Disfl-QA is a targeted dataset for disfluencies, in which all questions (~12k) contain disfluencies, making for a much larger disfluent test set than prior datasets. Over 90% of the disfluencies in Disfl-QA are corrections or restarts, making it a much more difficult test set for disfluency correction. In addition, compared to earlier disfluency datasets, it contains a wider variety of semantic distractors, i.e., distractors that carry semantic meaning as opposed to simpler speech disfluencies. 

Passage: …The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse ("Norman" comes from "Norseman") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, …
Q1:   In what country is Normandy located? France ✓
DQ1:   In what country is Norse found no wait Normandy not Norse? Denmark ✗
Q2:   When were the Normans in Normandy? 10th and 11th centuries ✓
DQ2:   From which countries no tell me when were the Normans in Normandy? Denmark, Iceland and Norway ✗
A passage and questions (Qi) from the SQuAD dataset, along with their disfluent versions (DQi), containing semantic distractors (like “Norse” and “from which countries”) and predictions from a T5 model.

Here, the first question (Q1) is seeking an answer about the location of Normandy. In the disfluent version (DQ1) Norse is mentioned before the question is corrected. The presence of this correctional disfluency confuses the QA model, which tends to rely on shallow textual cues from the question for making predictions.

Disfl-QA also includes newer phenomena, such as coreference (expressions referring to the same entity) between the reparandum and the repair.

SQuAD  Disfl-QA
Who does BSkyB have an operating license from?  Who removed [BSkyB’s] operating license, no scratch that, who do [they] have [their] operating license from?

Experiments show that the performance of existing state-of-the-art language model–based question answering systems degrades significantly when tested on Disfl-QA and heuristic disfluencies (presented in the paper) in a zero-shot setting.

Dataset   F1
SQuAD   89.59
Heuristics   65.27 (-24.32)
Disfl-QA   61.64 (-27.95)
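The F1 numbers above are the standard SQuAD-style token-overlap score between a predicted answer span and the gold answer. A minimal sketch of that metric (omitting the official evaluation script's normalization of articles and punctuation) looks like this:

```python
from collections import Counter

def token_f1(prediction, ground_truth):
    """SQuAD-style token-level F1 between a predicted and a gold answer."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    # Per-token overlap, counting duplicates at most as often as they appear.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Partial credit for an answer that overlaps but isn't an exact match.
print(token_f1("10th and 11th centuries",
               "in the 10th and 11th centuries"))  # ≈ 0.8
```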

We show that data augmentation methods partially recover the loss in performance and also demonstrate the efficacy of using human-annotated training data for fine-tuning. We argue that researchers need large-scale disfluency datasets in order for NLP models to be robust to disfluencies.

Conclusion
Understanding language phenomena that are unique to human speech, like disfluencies and temporal reasoning, among others, is a key ingredient for enabling more natural human–machine communication in the near future. With TimeDial and Disfl-QA, we aim to fill a major research gap by providing these datasets as testbeds for NLP models, in order to evaluate their robustness to ubiquitous phenomena across different tasks. It is our hope that the broader NLP community will devise generalized few-shot or zero-shot approaches to effectively handle these phenomena, without requiring task-specific human-annotated training datasets, constructed specifically for these challenges.

Acknowledgments
The TimeDial work has been a team effort involving Lianhui Qin, Luheng He, Yejin Choi, Manaal Faruqui and the authors. The Disfl-QA work has been a collaboration involving Jiacheng Xu, Diyi Yang and Manaal Faruqui.

Source: Google AI Blog



Stadia Savepoint: July updates

It’s time for another update to our Stadia Savepoint series, recapping the new games, features and updates on Stadia.

In July, Stadia Pro subscribers enjoyed new games like Moonlighter, Street Power Football, Terraria and The Darkside Detective. Claiming these titles within the growing Pro library of more than 20 games makes them easy to play instantly, for as long as you’re a Pro subscriber. Plus, Stadia Pro offered free play weekends for Dead by Daylight, The Crew 2, Marvel’s Avengers and Olympic Games Tokyo 2020: The Official Video Game.

In addition, fights broke out within the popular arcade-brawler Streets of Rage 4 when it launched on the Stadia store, while other players joined Luffy and his band of Straw Hat Pirates in the open world action-adventure ONE PIECE World Seeker. JRPG fans rejoiced with the arrivals of Ys IX: Monstrum Nox and Cris Tales. For players interested in RPGs of the side-scrolling variety, Bloodstained: Ritual of the Night delivered an homage to the Metroidvania series.

While Olympic Games Tokyo 2020: The Official Video Game officially launched in June, the arcade sports title was at the top of the leaderboards in July with the start of the in-person events in Tokyo. YouTube Creators have the chance to compete with their viewers in-game across 18 different Olympic events with Crowd Play, available in beta (apply through our form for feature access).

Crowd Play demo video.

Use Crowd Play on Stadia to play games with YouTube livestream viewers.

Game recommendations on Google TV

Look for recommendations for recently played games within the “Top Picks for You” section on Google TV’s home screen.

Smart suggestions for new friends on web, mobile

The friends list on web and mobile devices now contains smart suggestions for new friends based on platform interactions and other player activity.

New games coming to Stadia announced in July:

That’s all for now — we’ll be back next month to share more updates. As always, stay tuned to the Stadia Community Blog, Facebook, YouTube and Twitter for the latest news.


Cloud NDB to Cloud Datastore migration

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

An optional migration

Serverless Migration Station is a mini-series from Serverless Expeditions focused on helping users on one of Google Cloud's serverless compute platforms modernize their applications. The video today demonstrates how to migrate a sample app from Cloud NDB (or App Engine ndb) to Cloud Datastore. While Cloud NDB suffices as a current solution for today's App Engine developers, this optional migration is for those who want to consolidate their app code to using a single client library to talk to Datastore.

Cloud Datastore started as Google App Engine's original database but matured into its own standalone product in 2013. At that time, native client libraries were created for the new product so that non-App Engine apps as well as second-generation App Engine apps could access the service. Long-time developers have been using the original App Engine service APIs to access Datastore; for Python, this would be App Engine ndb. While the legacy ndb service is still available, its limitations and lack of availability in Python 3 are why we recommend users switch to standalone libraries like Cloud NDB, as covered in the preceding video in this series.

While Cloud NDB lets users break free from proprietary App Engine services and upgrade their applications to Python 3, it also gives non-App Engine apps access to Datastore. However, Cloud NDB's primary role is a transition tool for Python 2 App Engine developers. Non-App Engine developers and new Python 3 App Engine developers are directed to the Cloud Datastore native client library, not Cloud NDB.

As a result, those with a collection of Python 2 or Python 3 App Engine apps as well as non-App Engine apps may be using completely different libraries (ndb, Cloud NDB, Cloud Datastore) to connect to the same Datastore product. Following the best practices of code reuse, developers should consider consolidating to a single client library to access Datastore. Shared libraries provide stability and robustness with code that's constantly tested, debugged, and battle-proven. Module 2 showed users how to migrate from App Engine ndb to Cloud NDB, and today's Module 3 content focuses on migrating from Cloud NDB to Cloud Datastore. Users can also go straight from ndb directly to Cloud Datastore, skipping Cloud NDB entirely.

Migration sample and next steps

Cloud NDB follows an object model identical to App Engine ndb and is deliberately meant to be familiar to long-time Python App Engine developers while use of the Cloud Datastore client library is more like accessing a JSON document store. Their querying styles are also similar. You can compare and contrast them in the "diffs" screenshot below and in the video.
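To make the contrast concrete, here is a minimal sketch of the two styles side by side. The Visit model and its properties are illustrative rather than the sample app's exact code, and running this requires a Google Cloud project with credentials configured; treat it as a sketch of the two client libraries' idioms.

```python
from datetime import datetime, timezone

# Cloud NDB style: declarative models, familiar to App Engine ndb users.
from google.cloud import ndb

class Visit(ndb.Model):
    visitor = ndb.StringProperty()
    timestamp = ndb.DateTimeProperty(auto_now_add=True)

ndb_client = ndb.Client()
with ndb_client.context():
    Visit(visitor='1.2.3.4').put()
    recent = Visit.query().order(-Visit.timestamp).fetch(10)

# Cloud Datastore style: schemaless entities, closer to a JSON document store.
from google.cloud import datastore

ds_client = datastore.Client()
entity = datastore.Entity(key=ds_client.key('Visit'))
entity.update({'visitor': '1.2.3.4',
               'timestamp': datetime.now(timezone.utc)})
ds_client.put(entity)

query = ds_client.query(kind='Visit')
query.order = ['-timestamp']
recent = list(query.fetch(limit=10))
```

Note how the Datastore version trades the model class for free-form entities and moves query construction onto the client object; migrating mostly means rewriting these touchpoints while the rest of the app stays the same.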

The "diffs" between the Cloud NDB and Cloud Datastore versions of the sample app

All that said, this migration is optional and only useful if you wish to consolidate to using a single client library. If your Python App Engine apps are stable with ndb or Cloud NDB, and you don't have any code using Cloud Datastore, there's no real reason to move unless Cloud Datastore has a compelling feature inaccessible from your current client library. If you are considering this migration and want to try it on a sample app before considering for yours, see the corresponding codelab and use the video for guidance.

It begins with the Module 2 code completed in the previous codelab/video; use your solution or ours as the "START". Both Python 2 (Module 2a folder) and Python 3 (Module 2b folder) versions are available. The goal is to arrive at the "FINISH" with an identical, working app but using a completely different Datastore client library. Our Python 2 FINISH can be found in the Module 3a folder while Python 3's FINISH is in the Module 3b folder. If something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. We will continue our Datastore discussion ahead in Module 6 as Cloud Firestore represents the next generation of the Datastore service.

All of these learning modules, corresponding videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 and others, so stay tuned. Up next in Module 4, we'll take a different turn and showcase a product crossover, showing App Engine developers how to containerize their apps and migrate them to Cloud Run, our scalable container-hosting service in the cloud. If you can't wait for either Modules 4 or 6, try out their respective codelabs or access the code samples in the table at the repo above. Migrations aren't always easy, and we hope content like this helps you modernize your apps.

The evolution of lifestyle and beauty blogger Keiko Lynn

When Keiko Lynn moved to New York City, she had $700, a fledgling fashion line and the LiveJournal blog she had written since she was 15. “I was working from home and sewing every day,” she says. “I only knew two people in New York and I never left home unless I was walking my dog.” Her home is where the modern incarnation of keikolynn.com began.


“I started documenting my outfits every day to hold myself accountable and make sure that I was getting up and getting dressed,” she says. It may have started as a simple idea, but it quickly took off. In the last 20 years, her blog has transformed from an online diary to a promotional tool to a full-time business. 

Keiko wears a patterned dress with a handbag.

Keiko often curates her blog with colorful dresses and handbags fit for the season.

“I was so young when I started my blog,” Keiko shares. “It was before I had a camera of any kind, so it was all text and I treated it like a diary.” When Keiko moved away for college, the blog underwent its first major transformation. “I got a digital camera and used it to document my life and keep in touch with friends and family,” she says. In many ways, her blog has remained true to its original intent: It’s personal and conversational, and it reads like recommendations from a friend. She carefully considers product reviews she writes, keeps price in mind and has yet to recommend something you couldn’t re-wear. 


“I think my blog is much more personal than a lot of people's blogs because that's how it started out,” Keiko says. “There's a certain level of expectation where people want to know what's going on in my life that I haven't ever really fully moved away from.”


Keiko is the first person featured in the new Creator Insights series, hosted on the Google Web Creators YouTube channel. In her Creator Insights videos, Keiko weighs in on critical parts of her journey to becoming a full-time creator, discussing how to work smarter, determine your rates and generate more traffic. We recently caught up with Keiko to learn more.


On the ins and outs of being your own brand

When asked about her brand, she says, “I didn’t start with the intention of it becoming a business. I fell into it, but if I were coaching someone today, that’d be the first thing I would tell them to figure out.” She continues, “I am my own brand, which can be great, but it can also be tricky. If somebody associates your brand with a specific identity, any evolution can feel like a betrayal. Because I am the brand, I have to go with what feels right.” And what feels right, Keiko notes, often goes against typical fashion and style rules. 

Cameras sit on a shelf below a photo adorned with painted flowers

Keiko’s interest in vintage design and photography is on full display in her home office.

On staying true to yourself despite expectations

The blog “explores whatever I'm interested in at the moment,” Keiko says. “It never adheres to what people think I should be doing, wearing or what's popular. It's all about being yourself and having fun with fashion and beauty without adhering to any sort of standard rules, especially within this blogging world.”


If you’re a fashion or beauty blogger and constant consumption doesn’t fit your lifestyle, Keiko says, “disregard it. Disregard the rule that you have to be a hyper-consumer. There's such a pressure to keep up with the Joneses, always to have something new and fresh to talk about. But for me, it just doesn't make sense for me because that's not the lifestyle that I live.”


On sparking initial interest through her clothing line

Keiko began sewing and selling clothes to support herself during college. “I was making my own clothes because I couldn't afford to buy clothes,” she says. “I would go to thrift stores, buy stuff and rework it. I was like [the girl in] ‘Pretty in Pink.’” She started selling her clothes through her LiveJournal blog. Then “magazines and brands started reaching out to me,” she says, “and I realized this was a great way to market my clothing line." 

Photo of the back of a woman’s head with hair clips that read “Friday,” “Whatever,” and “Party.”

Keiko shares her favorite DIY projects, like making felt hats or custom hair clips.

After running her fashion line for many years, Keiko decided to focus entirely on the blog. It transitioned from a tool for self-promotion to “a marketing platform for whatever I was interested in at the moment and brand partnerships,” Keiko says. “It evolved as a business.” 


On making it work as an influencer

Keiko says her primary sources of income are brand partnerships and affiliate links, but getting new blog readers is still a priority. Social media, Keiko advises, “is a valuable tool to remind people that you still have those long-form posts.” Newsletters are great, too. “They’re like the RSS feeds we used to have...a little ‘new-blog-is-up!’ reminder,” she says. The people who open them are likely some of your most engaged readers. It’s helpful to give them a nudge.


It’s also essential to find the type of social media that best augments your strategy. “Pinterest is underutilized for bloggers,” she says. “Food bloggers and travel bloggers already know that Pinterest is highly valuable, but in the style and beauty space, a lot of people who still maintain blogs don't utilize it enough and are seeing the benefits of Pinterest from other people pinning their photos.”

A woman sits on a cement ledge before a red wall.

Keiko Lynn is her own brand and follows her interests wherever they take her.

While Instagram posts may disappear or peak in 24 hours, Pinterest posts have the potential to remain evergreen. “I have posts that are over a decade old that still are top traffic earners,” Keiko says. 


But the most crucial advice Keiko has for bloggers in any industry is to “carve out a space for yourself instead of trying to mirror what you view as successful,” she says. “You don’t want to be a carbon copy because you’re always competing with the already successful person. Separate and allow yourself to be as weird as you want to be.”


"It's great to poll the audience to see what they want, but you also have to tell them and show them what they want because they came to you for a reason."


Your interests and needs will change, but if you invest in your own evolution and consider which platforms align with your content, your audience, too, will follow you anywhere. You can learn more about Keiko Lynn by watching Creator Insights on the Google Web Creators YouTube channel.

August 2021 update to Display & Video 360 API v1

Today we’re releasing an update to the Display & Video 360 API that includes several new features. More detailed information about this update can be found in the Display & Video 360 API release notes. Before using these new features, make sure to update your client library to the latest version.

If you run into issues or need help with these new features or samples, please contact us using our support contact form.