Monthly Archives: November 2018

Highlights from the 2018 Google PhD Fellowship Summit



Google created the PhD Fellowship Program to recognize and support outstanding graduate students doing exceptional research in Computer Science and related disciplines. This program provides a unique opportunity for students pursuing a graduate degree in Computer Science (or a related field) who seek to influence the future of technology. Now in its tenth year, the program has helped support close to 400 graduate students globally across Australia, China and East Asia, India, North America, Europe, the Middle East and Africa, the most recent region to join the program.
Every year, Google PhD Fellows are invited to our global PhD Fellowship Summit where they are exposed to state-of-the-art research pursued across Google, and are given the opportunity to network with Google’s research community as well as other PhD Fellows from around the world. Below we share some highlights from our most recent summit, and also announce the newest recipients.

Summit Highlights
At this year’s annual Global PhD Fellowship Summit, Fellows from around the world converged on our Mountain View campus for two days of talks, focused discussions, shared research and networking. VP of Education and University Programs Maggie Johnson welcomed the Fellows and presented Google's approach to research and its various outreach efforts that encourage collaboration with academia. The agenda covered a range of topics: Principal Scientist Maya Gupta opened with a keynote on controlling machine learning models with constraints and goals to make them do what you want, and researchers Andrew Tomkins, Rahul Sukthankar, Sai Teja Peddinti, Amin Vahdat, Martin Stumpe, Ed Chi and Ciera Jaspan followed with talks from a variety of research perspectives. Jeff Dean, Senior Fellow and SVP of Google AI, closed the summit with a presentation on using deep learning to solve a variety of challenging research problems at Google.
Starting clockwise from top left: Researchers Rahul Sukthankar and Ed Chi talking with Fellow attendees; Jeff Dean delivering the closing talk; Poster session in full swing.
Fellows had the chance to connect with each other and with Google researchers to discuss their work during a poster session, and to receive feedback from leaders in their fields in smaller deep-dive sessions. A panel of Fellowship alumni, two from academia and two from Google, offered both perspectives on career paths.
Google Fellows attending the 2018 PhD Fellowship Summit.
The Complete List of 2018 Google PhD Fellows
We believe that the Google PhD Fellows represent some of the best and brightest young researchers around the globe in Computer Science, and it is our ongoing goal to support them as they make their mark on the world. As such, we would like to announce the latest recipients from China and East Asia, India, Australia and Africa, who join the North America, Europe and Middle East Fellows we announced last April. Congratulations to all of this year’s awardees! The complete list of recipients is:

Algorithms, Optimizations and Markets
Emmanouil Zampetakis, Massachusetts Institute of Technology
Manuela Fischer, ETH Zurich
Pranjal Dutta, Chennai Mathematical Institute
Thodoris Lykouris, Cornell University
Yuan Deng, Duke University

Computational Neuroscience
Ella Batty, Columbia University
Neha Spenta Wadia, University of California - Berkeley
Reuben Feinman, New York University

Human Computer Interaction
Gierad Laput, Carnegie Mellon University
Mike Schaekermann, University of Waterloo
Minsuk (Brian) Kahng, Georgia Institute of Technology
Niels van Berkel, The University of Melbourne
Siqi Wu, Australian National University
Xiang Zhang, The University of New South Wales

Machine Learning
Abhijeet Awasthi, Indian Institute of Technology - Bombay
Aditi Raghunathan, Stanford University
Futoshi Futami, University of Tokyo
Lin Chen, Yale University
Qian Yu, University of Southern California
Ravid Shwartz-Ziv, Hebrew University
Shuai Li, Chinese University of Hong Kong
Shuang Liu, University of California - San Diego
Stephen Tu, University of California - Berkeley
Steven James, University of the Witwatersrand
Xinchen Yan, University of Michigan
Zelda Mariet, Massachusetts Institute of Technology

Machine Perception, Speech Technology and Computer Vision
Antoine Miech, INRIA
Arsha Nagrani, University of Oxford
Arulkumar S, Indian Institute of Technology - Madras
Joseph Redmon, University of Washington
Raymond Yeh, University of Illinois - Urbana-Champaign
Shanmukha Ramakrishna Vedantam, Georgia Institute of Technology

Mobile Computing
Lili Wei, Hong Kong University of Science & Technology
Rizanne Elbakly, Egypt-Japan University of Science and Technology
Shilin Zhu, University of California - San Diego

Natural Language Processing
Anne Cocos, University of Pennsylvania
Hongwei Wang, Shanghai Jiao Tong University
Jonathan Herzig, Tel Aviv University
Rotem Dror, Technion - Israel Institute of Technology
Shikhar Vashishth, Indian Institute of Science - Bangalore
Yang Liu, University of Edinburgh
Yoon Kim, Harvard University
Zhehuai Chen, Shanghai Jiao Tong University
Imane Khaouja, Université Internationale de Rabat

Privacy and Security
Aayush Jain, University of California - Los Angeles

Programming Technology and Software Engineering
Gowtham Kaki, Purdue University
Joseph Benedict Nyansiro, University of Dar es Salaam
Reyhaneh Jabbarvand, University of California - Irvine
Victor Lanvin, Fondation Sciences Mathématiques de Paris

Quantum Computing
Erika Ye, California Institute of Technology

Structured Data and Database Management
Lingjiao Chen, University of Wisconsin - Madison

Systems and Networking
Andrea Lattuada, ETH Zurich
Chen Sun, Tsinghua University
Lana Josipovic, EPFL
Michael Schaarschmidt, University of Cambridge
Rachee Singh, University of Massachusetts - Amherst
Stephen Mallon, The University of Sydney

Source: Google AI Blog


SDK developers: sign up to stay up to date with the latest tips, news and updates

Posted by Parul Soi, Strategic Partner Development Manager, Google Play

Android is fortunate to have an incredibly rich ecosystem of SDKs and libraries to help developers build great apps more efficiently. These SDKs can range from developer tools that simplify complicated feature development to end-to-end services such as analytics, attribution, engagement, etc. All of these tools can help Android developers reduce cost and ship products faster.

For the past few months, various teams at Google have been working together on new initiatives to expand the resources and support we offer for the developers of these tools. Today, SDK developers can sign up and register their SDK with us to receive updates that will keep them informed about Google Play policy changes, updates to the platform, and other useful information.

Our goal is to provide you with whatever you need to better serve your technical and business goals in helping your partners create better apps or games. Going forward we will be sharing further resources to help SDK developers, so stay tuned for more updates.

If you develop an SDK or library for Android, make sure you sign up and register your SDK to receive updates about the latest tools and information to help serve customers better. And, if you're an app developer, share this blogpost with the developers of the SDKs that you use!


Fujii Bokujo Dairy Farm: milking the best of the internet


As part of our series of interviews with Asia-Pacific entrepreneurs who use the internet to connect, create and grow, we chatted with Yuichiro Fujii, President of Fujii Bokujo Inc., a dairy farm based in Hokkaido, Japan. Founded in 1904, Fujii Bokujo runs the entire process of dairy product production—from milking, to breeding, to feed production—and needs a regular supply of seasonal workers to keep the farm going. In 2016, Fujii Bokujo was ranked as the third most popular company in Japan for employee welfare.


Can you tell us a bit about your farm and how your business works?

We have 900 cows at our farm in Furano, Hokkaido. We use the most cutting-edge technology and practices available in the dairy industry, such as fully automated milking machines. And we’re proud to export our homemade cheeses and ice creams worldwide. Business is booming and we’re eager to hire new employees each year, but farming isn’t everyone’s first choice of career. Each year, it gets harder and harder to attract new graduates. Most young people want to move to the cities and there’s a shortage of talent in the countryside.


What’s it like working on the farm?

Working life on the farm is fun, but it takes a lot of energy! Most of our 24 employees are in their twenties. Many come into the business with no experience of farming, but our motto is “We nurture our cows and our people.”  We’re constantly trying to create an environment where our people can grow professionally, and maybe personally too.



One of the residents of the Fujii Bokujo dairy farm.

What difference has the internet made for your business?

We are the descendants of pioneer dairy farmers in Hokkaido. Edwin Dun, considered the father of modern-day dairy farming in Hokkaido, coined the phrase “Kaitakusha tare” (meaning “the pride of the pioneers”). We continue to practice the pioneer spirit today by always trying out new things.


So this year, to deal with our manpower crunch, instead of waiting for responses to help-wanted ads in newspapers and magazines, we decided to go online. To drive interest and awareness of Fujii Bokujo and the dairy industry, we used YouTube video ads and banner ads on the Google Display Network. In particular, we hoped our ads would reach young people at nearby universities with dairy farming courses.


We got 260 enquiries for the three positions we had open and attracted 80 participants to a seminar we held to introduce our company. I was surprised by how far the message reached—we got responses from students not just from Hokkaido but also well-known schools in Tokyo and Osaka. In the end, we offered five students jobs, completing our hiring process three months earlier than last year.


What’s next for your business?

I’m looking forward to meeting next year’s graduates! We are in an age where domestic milk production cannot keep up with demand. In line with the spirit of Fujii Bokujo, it’s my life’s dream to develop and train the next generation for the dairy business.


I am also eager to use video not just for our corporate brand and hiring, but also for our product marketing efforts in the future. We are developing content that will help entice young people into the world of dairy.


Finally, with the Olympics coming up in 2020, nothing would make me happier than contributing to athletes winning medals through food. My dream is to have the athletes of the 2020 Tokyo Olympics enjoy the high quality milk we carefully produce at Fujii Bokujo.  


Announcing v0_6 of the Google Ads API

Today we’re announcing the beta release of Google Ads API v0_6. This release brings you the features listed in Required Minimum Functionality (RMF). Now that the core functionality of this new API is available, you should get started planning, designing, and coding against it. With this version, you’ll continue pointing to v0 as your endpoint; however, you'll need to update your client libraries. Here are the highlights:
  • Manager account authentication: If you’re authenticating as a manager account, set the ID of the manager account you’re authorizing as in the login-customer-id request header, then set the customer you want to interact with in the request as usual. This tells the API which level of the manager account hierarchy you’re authenticating at.
  • Mix mutate operations: Pass in multiple kinds of operations with GoogleAdsService.Mutate.
  • Hotel Ads: We extended the GoogleAdsService so you can query, through the HotelPerformanceView, hotel performance metrics that were previously available in the Travel Partner API. Supported performance metrics are cost, clicks, impressions, and average lead values. Some derived metrics are also precomputed: average position, average cost per click, average cost per thousand impressions, and click-through rate. These metrics can be segmented by:
    • Itinerary segments: check-in date, check-in day of week, booking window days, and date selection type
    • Hotel segments: hotel center ID, hotel ID, class, city, state, and country
    • Date segments: hour, date, day of week, week, month, quarter, and year
    • Others: campaign, ad group, ad network, and device
  • Feeds: Manage and retrieve feeds with AdGroupFeedService, CustomerFeedService, FeedService, CampaignFeedService, and FeedMappingService.
  • Account management:
    • We introduced CustomerClient, which provides the ability to get an expanded hierarchy of customer clients (both direct and indirect) for a given manager customer.
    • This release also supports the creation of new customer clients under a manager using the CustomerService.CreateCustomerClient method.
    • CustomerService now supports mutates.
  • Recommendations: Added the DismissRecommendation method to the RecommendationService, making it possible to dismiss the recommendations listed in our guide.
  • Ad formats: Gmail ads and image ads are now supported.
  • Search query reporting: The SearchTermView resource is now available, providing metrics aggregated at the search term level. SearchTermView provides functionality similar to the Search Query Performance Report of the AdWords API (see the sketch after this list).
  • Audiences: Create audiences using UserListService.
  • Criteria types: You can now create criteria with CriterionType LANGUAGE, CARRIER, USER_LIST, USER_INTEREST and IP_BLOCK.
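As a quick illustration of two of the items above, the sketch below uses the Python client library to authenticate under a manager account via login_customer_id and to pull SearchTermView metrics through GoogleAdsService. This is not official sample code: import paths and method signatures reflect recent versions of the google-ads library rather than the v0 beta specifically, and the customer ID and query are placeholders.

```python
# Minimal sketch, not official sample code. Assumes a google-ads.yaml file that
# contains your developer token, OAuth2 credentials and, when authenticating as
# a manager account, login_customer_id (sent to the API as the
# login-customer-id header described above).
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
ga_service = client.get_service("GoogleAdsService")

customer_id = "1234567890"  # placeholder: the client account you want to query
query = """
    SELECT
      search_term_view.search_term,
      metrics.clicks,
      metrics.impressions
    FROM search_term_view
    WHERE segments.date DURING LAST_7_DAYS"""

# Stream SearchTermView rows, aggregated at the search term level.
for batch in ga_service.search_stream(customer_id=customer_id, query=query):
    for row in batch.results:
        print(row.search_term_view.search_term,
              row.metrics.clicks,
              row.metrics.impressions)
```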
To get started with the API, review these helpful resources. The updated client libraries and code examples will be published within the next two business days. If you have any questions or need help, please contact us via the forum.

Learning to Predict Depth on the Pixel 3 Phones



Portrait Mode on the Pixel smartphones lets you take professional-looking images that draw attention to a subject by blurring the background behind it. Last year, we described, among other things, how we compute depth from a single camera's Phase-Detection Autofocus (PDAF) pixels (also known as dual-pixel autofocus) with a traditional non-learned stereo algorithm. This year, on the Pixel 3, we turn to machine learning to improve depth estimation and produce even better Portrait Mode results.
Left: The original HDR+ image. Right: A comparison of Portrait Mode results using depth from traditional stereo and depth from machine learning. The learned depth result has fewer errors. Notably, in the traditional stereo result, many of the horizontal lines behind the man are incorrectly estimated to be at the same depth as the man and are kept sharp.
(Mike Milne)
A Short Recap
As described in last year’s blog post, Portrait Mode uses a neural network to determine which pixels correspond to people versus the background, and augments this two-layer person segmentation mask with depth information derived from the PDAF pixels. This enables a depth-dependent blur, which is closer to what a professional camera does.

PDAF pixels work by capturing two slightly different views of a scene, shown below. Flipping between the two views, we see that the person is stationary, while the background moves horizontally, an effect referred to as parallax. Because parallax is a function of the point’s distance from the camera and the distance between the two viewpoints, we can estimate depth by matching each point in one view with its corresponding point in the other view.
The two PDAF images on the left and center look very similar, but in the crop on the right you can see the parallax between them. It is most noticeable on the circular structure in the middle of the crop.
However, finding these correspondences in PDAF images (a method called depth from stereo) is extremely challenging because scene points barely move between the views. Furthermore, all stereo techniques suffer from the aperture problem. That is, if you look at the scene through a small aperture, it is impossible to find correspondence for lines parallel to the stereo baseline, i.e., the line connecting the two cameras. In other words, when looking at the horizontal lines in the figure above (or vertical lines in portrait orientation shots), any proposed shift of these lines in one view with respect to the other view looks about the same. In last year’s Portrait Mode, all these factors could result in errors in depth estimation and cause unpleasant artifacts.
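To make the parallax cue concrete, the tiny helper below encodes the textbook two-view stereo relation (generic pinhole geometry, not a PDAF-specific formula): depth is inversely proportional to disparity, scaled by the focal length and the baseline between the two viewpoints. Because the PDAF baseline lies within a single small lens, the disparities are correspondingly tiny, which is exactly why the matching problem above is so hard.

```python
def depth_from_disparity(f, b, d):
    """Textbook pinhole stereo relation: Z = f * b / d.

    f: focal length, b: baseline between the two viewpoints, d: disparity
    (the parallax shift of a point between the views), all in consistent units.
    A small baseline b produces small disparities d, so noise in d translates
    into large errors in the estimated depth Z.
    """
    return f * b / d
```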

Improving Depth Estimation
With Portrait Mode on the Pixel 3, we fix these errors by utilizing the fact that the parallax used by depth from stereo algorithms is only one of many depth cues present in images. For example, points that are far away from the in-focus plane appear less sharp than ones that are closer, giving us a defocus depth cue. In addition, even when viewing an image on a flat screen, we can accurately tell how far things are because we know the rough size of everyday objects (e.g. one can use the number of pixels in a photograph of a person’s face to estimate how far away it is). This is called a semantic cue.

Designing a hand-crafted algorithm to combine these different cues is extremely difficult, but by using machine learning, we can do so while also better exploiting the PDAF parallax cue. Specifically, we train a convolutional neural network, written in TensorFlow, that takes as input the PDAF pixels and learns to predict depth. This new and improved ML-based method of depth estimation is what powers Portrait Mode on the Pixel 3.
Our convolutional neural network takes as input the PDAF images and outputs a depth map. The network uses an encoder-decoder style architecture with skip connections and residual blocks.
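For readers who want to see what such an architecture looks like in code, here is a minimal TensorFlow/Keras sketch of an encoder-decoder with skip connections and residual blocks that maps a stacked two-channel PDAF pair to a single-channel depth map. It is an illustration under assumed input sizes and channel counts, not the network that ships on the Pixel 3.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Two 3x3 convolutions with a shortcut connection."""
    shortcut = x
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    if shortcut.shape[-1] != filters:  # match channel count before the add
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    return layers.ReLU()(layers.Add()([shortcut, x]))

def build_depth_net(input_shape=(512, 384, 2)):  # two PDAF views stacked as channels (assumed size)
    inputs = layers.Input(shape=input_shape)

    # Encoder: downsample while increasing channels, keeping skip tensors.
    skips = []
    x = inputs
    for filters in (32, 64, 128):
        x = residual_block(x, filters)
        skips.append(x)                      # saved for the skip connection
        x = layers.MaxPooling2D(2)(x)        # halve spatial resolution

    x = residual_block(x, 256)               # bottleneck

    # Decoder: upsample and concatenate the matching encoder features.
    for filters, skip in zip((128, 64, 32), reversed(skips)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skip])   # skip connection
        x = residual_block(x, filters)

    depth = layers.Conv2D(1, 3, padding="same", name="relative_depth")(x)
    return tf.keras.Model(inputs, depth)

model = build_depth_net()
model.summary()
```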
Training the Neural Network
In order to train the network, we need lots of PDAF images and corresponding high-quality depth maps. And since we want our predicted depth to be useful for Portrait Mode, we also need the training data to be similar to pictures that users take with their smartphones.

To accomplish this, we built our own custom “Frankenphone” rig that contains five Pixel 3 phones, along with a Wi-Fi-based solution that allowed us to simultaneously capture pictures from all of the phones (within a tolerance of ~2 milliseconds). With this rig, we computed high-quality depth from photos by using structure from motion and multi-view stereo.
Left: Custom rig used to collect training data. Middle: An example capture flipping between the five images. Synchronization between the cameras ensures that we can calculate depth for dynamic scenes, such as this one. Right: Ground truth depth. Low confidence points, i.e., points where stereo matches are not reliable due to weak texture, are colored in black and are not used during training. (Sam Ansari and Mike Milne)
The data captured by this rig is ideal for training a network for the following main reasons:
  • Five viewpoints ensure that there is parallax in multiple directions and hence no aperture problem.
  • The arrangement of the cameras ensures that a point in an image is usually visible in at least one other image, resulting in fewer points with no correspondences.
  • The baseline, i.e., the distance between the cameras, is much larger than the PDAF baseline, resulting in more accurate depth estimation.
  • Synchronization between the cameras ensures that we can calculate depth for dynamic scenes like the one above.
  • Portability of the rig ensures that we can capture photos in the wild, simulating the photos users take with their smartphones.
However, even though the data captured from this rig is ideal, it is still extremely challenging to predict the absolute depth of objects in a scene — a given PDAF pair can correspond to a range of different depth maps (depending on lens characteristics, focus distance, etc.). To account for this, we instead predict the relative depths of objects in the scene, which is sufficient for producing pleasing Portrait Mode results.
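As an illustration of training for relative rather than absolute depth, the sketch below implements a generic scale-invariant log-depth loss in the spirit of Eigen et al. (2014), which ignores a global depth offset and can mask out the low-confidence ground-truth points mentioned above. It is an assumed, representative choice, not necessarily the loss used for Portrait Mode.

```python
import tensorflow as tf

def scale_invariant_loss(y_true, y_pred, mask=None, eps=1e-6):
    """Generic scale-invariant log-depth loss (Eigen et al., 2014).

    Differences in log depth are penalized only up to a global offset, so the
    model is rewarded for getting relative depth right even when the absolute
    scale is ambiguous. Illustrative only; not the Portrait Mode training loss.
    """
    d = tf.math.log(y_pred + eps) - tf.math.log(y_true + eps)
    if mask is not None:
        # Drop low-confidence ground-truth points (e.g. weakly textured regions).
        d = tf.boolean_mask(d, mask)
    n = tf.cast(tf.size(d), tf.float32)
    return tf.reduce_sum(tf.square(d)) / n - tf.square(tf.reduce_sum(d)) / (n * n)
```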

Putting it All Together
This ML-based depth estimation needs to run fast on the Pixel 3 so that users don’t have to wait too long for their Portrait Mode shots. However, to get good depth estimates that make use of subtle defocus and parallax cues, we have to feed full-resolution, multi-megapixel PDAF images into the network. To ensure fast results, we use TensorFlow Lite, a cross-platform solution for running machine learning models on mobile and embedded devices, together with the Pixel 3’s powerful GPU to compute depth quickly despite our abnormally large inputs. We then combine the resulting depth estimates with masks from our person segmentation neural network to produce beautiful Portrait Mode results.
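To give a flavor of the on-device deployment step, the sketch below converts a Keras model to a TensorFlow Lite flatbuffer. It uses today's TF 2.x converter API and a placeholder model, and is purely illustrative; the post does not describe the production conversion pipeline or the GPU delegate configuration.

```python
import tensorflow as tf

# Placeholder model standing in for the depth network sketched earlier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(512, 384, 2)),     # two-channel PDAF pair (assumed size)
    tf.keras.layers.Conv2D(1, 3, padding="same"),   # single-channel depth map
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional: shrink the model
tflite_model = converter.convert()

with open("depth_net.tflite", "wb") as f:
    f.write(tflite_model)  # load this file on-device with the TFLite interpreter
```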

Try it Yourself
In Google Camera App version 6.1 and later, our depth maps are embedded in Portrait Mode images. This means you can use the Google Photos depth editor to change the amount of blur and the focus point after capture. You can also use third-party depth extractors to extract the depth map from a jpeg and take a look at it yourself. Also, here is an album showing the relative depth maps and the corresponding Portrait Mode images for traditional stereo and the learning-based approaches.

Acknowledgments
This work wouldn’t have been possible without Sam Ansari, Yael Pritch Knaan, David Jacobs, Jiawen Chen, Juhyun Lee and Andrei Kulik. Special thanks to Mike Milne and Andy Radin who captured data with the five-camera rig.

Source: Google AI Blog


Host Hangouts Meet meetings with up to 100 participants

Meeting with teammates, clients, or customers around the globe is critical to getting your job done. For those times when you need to meet with a larger group, Hangouts Meet now allows G Suite Enterprise users to organize meetings with up to 100 participants and G Suite Business users to host up to 50 participants. This participant count includes people from both inside and outside of your organization.



This new extended participant limit supports people joining from any mixture of video and dial-in entry points so you can flexibly bring together even more people from all over the world. It is now fully rolled out across all domains.

If you need to host an even larger meeting, you can enable live streaming, allowing up to 100,000 viewers to watch at once.

Launch Details
Release track:
Launching to both Rapid Release and Scheduled Release

Editions:

  • 100-person meetings available to G Suite Enterprise and G Suite Enterprise for Education editions only
  • 50-person meetings available to G Suite Business and G Suite for Education editions only
Rollout pace:
Full rollout (1–3 days for feature visibility)

Impact:
All end users

Action:
Change management suggested/FYI

More Information
Help Center: Get Started with Meet
Help Center: Hangouts Meet Benefits and features
G Suite Learning Center: How many people can join a video meeting?


Announcing the Google Security and Privacy Research Awards



We believe that cutting-edge research plays a key role in advancing the security and privacy of users across the Internet. While we do significant in-house research and engineering to protect users’ data, we maintain strong ties with academic institutions worldwide. We provide seed funding through faculty research grants and cloud credits to unlock new experiments, and we foster active collaborations, including working with visiting scholars and research interns.

To accelerate the next generation of security and privacy breakthroughs, we recently created the Google Security and Privacy Research Awards program. These awards, selected via internal Google nominations and voting, recognize academic researchers who have made recent, significant contributions to the field.

We’ve been developing this program for several years. It began as a pilot when we awarded researchers for their work in 2016, and we expanded it more broadly for work from 2017. So far, we have awarded $1 million to 12 scholars. We are preparing the shortlist of 2018 nominees and will announce the winners next year. In the meantime, we wanted to highlight the previous award winners and the influence they’ve had on the field.
2017 Awardees

Lujo Bauer, Carnegie Mellon University
Research area: Password security and attacks against facial recognition

Dan Boneh, Stanford University
Research area: Enclave security and post-quantum cryptography

Aleksandra Korolova, University of Southern California
Research area: Differential privacy

Daniela Oliveira, University of Florida
Research area: Social engineering and phishing

Franziska Roesner, University of Washington
Research area: Usable security for augmented reality and at-risk populations

Matthew Smith, Universität Bonn
Research area: Usable security for developers


2016 Awardees

Michael Bailey, University of Illinois at Urbana-Champaign
Research area: Cloud and network security

Nicolas Christin, Carnegie Mellon University
Research area: Authentication and cybercrime

Damon McCoy, New York University
Research area: DDoS services and cybercrime

Stefan Savage, University of California San Diego
Research area: Network security and cybercrime

Marc Stevens, Centrum Wiskunde & Informatica
Research area: Cryptanalysis and lattice cryptography

Giovanni Vigna, University of California Santa Barbara
Research area: Malware detection and cybercrime


Congratulations to all of our award winners.

Elevate your quizzing and grading experience with two G Suite for Education beta programs

We’re offering two new beta programs for G Suite for Education customers to improve their quizzing and grading experience.

Locked mode in Quizzes in Google Forms 
This summer, we announced locked mode in Quizzes in Google Forms as a new way to keep students focused during assessments. Available only on managed Chromebooks running Chrome OS 68 or higher, locked mode prevents students from navigating away from the quiz in their Chrome browser until they submit their answers. Teachers can turn on locked mode with a simple checkbox, giving them full control over assessments.




Better grading in Classroom 
Earlier this year, we introduced new grading tools and a comment bank for richer, better feedback. Today, we’re continuing to strengthen the grading process in Classroom with a beta for a new Gradebook, which helps teachers keep their assignments and grades in one place and stay better organized.



Express interest in the betas 
Beta programs for locked mode and Gradebook are now available to G Suite for Education customers. All teachers and G Suite for Education admins can express interest by completing this form. Check out the full post on the Google for Education blog and the Help Center for more details. 

Launch Details

Editions:
Available to G Suite for Education editions only

Action:
Admins and teachers can express interest by completing this form

More Information
Help Center
Google for Education blog post




Upgrade your daily drive with new Android Auto features

When you’re on the road, the journey can be just as important as the destination. Today we’ve added new Android Auto features that make your drives even more simple, personal and helpful—including easier access to your favorite content with improved media browse and search features, plus new ways to stay connected with visual message previews and group messaging.

You can try out these new features with some of your favorite media apps—like Google Play Books, Google Play Music, iHeartRadio, Pocket Casts and Spotify. Popular messaging apps like Messages, Hangouts and WhatsApp also work with the new messaging features. In the coming months, we’ll work with more apps to add support for these new features.

New ways to discover media

If driving in silence isn’t for you, playing and finding media just got easier. By bringing content to the forefront of your screen, you can now spend less time browsing and more time enjoying the content you like most. An improved layout, featuring large album art views, lets you quickly identify and select something to play.


Got something specific in mind? We’ve also made improvements to the voice search experience. Just say “OK Google, play 80s music” or “OK Google, play Lilt” to view even more categorized search results from your app right on the screen.


More options for messaging

As you’re driving, you don’t have to worry about missing important messages—you can now safely stay connected with multiple people at once. When messages arrive, Android Auto can show you a short preview of the text when your vehicle is stopped. This message previewing capability is purely optional (enabled via Android Auto’s settings menu), giving you the ability to choose what’s most important—privacy or convenience.


In addition to SMS messaging, Android Auto now supports apps that use MMS (multimedia messaging service) and RCS (rich communication services). This means that your favorite messaging apps can now offer additional capabilities, like support for group messaging.


The updates will be fully available in the next several days. Check out the Android Auto app in the Google Play Store to try out these new features on your next drive.

Source: Android


Introducing more ways to share your Stories on YouTube


As a creator, you're always looking to strengthen your relationship with your audience. You bring them along on your travels, give them a backstage pass to one of your videos, or even a sneak peek at your upcoming video. Through testing the Stories format with a small group of you over the past year, we’ve seen you do just that, from FashionbyAlly giving updates on what’s coming next, to Colin and Samir bringing their fans into the creative process. We applied feedback that we got from you to build a product specifically designed with you, the YouTube creator, in mind. And starting today, we are excited to announce that we are rolling out YouTube Stories to all eligible creators with 10K+ subscribers.









Creating with Stories is lightweight, easy, and fun. Stories will have the fun creation tools that you know and love. You can add text, music, filters, YouTubey stickers, and more to make your story uniquely you! To create a story, just open the YouTube mobile app, tap on the video camera icon, and select "Create Story."




We’ve also added comments to Stories, so the entire community can be a part of the conversation. Your fans can comment and give comments a thumbs up or thumbs down, and you can heart comments. All of the comment moderation tools available on video uploads will also be available on Stories. You can also respond directly to a fan’s comment with a photo or video for the entire community to see!




Images via Colin and Samir




Once posted, Stories are available in the mobile app for 7 days to ensure that your fans have a chance to see them. Stories may show up both to subscribers on the Subscriptions tab and to non-subscribers on Home and in the Up Next list below videos.




We’re excited to see how you continue to use Stories to reach out to your community. Give it a try today!



Todd Sherman, Product Lead for YouTube Stories