Category Archives: YouTube Engineering and Developers Blog

What’s happening with engineering and developers at YouTube

New YouTube live features: live 360, 1440p, embedded captions, and VP9 ingestion

Yesterday at NAB 2016 we announced exciting new live and virtual reality features for YouTube. We’re working to get you one step closer to actually being in the moments that matter while they are happening. Let’s dive into the new features and capabilities that we are introducing to make this possible:

Live 360: About a year ago we announced the launch of 360-degree videos on YouTube, giving creators a new way to connect with their audience and share their experiences. This week, we took the next step by introducing support for 360-degree live streaming on YouTube for all creators and viewers around the globe.

To make sure creators can tell awesome stories with virtual reality, we’ve been working with several camera and software vendors, such as ALLie and VideoStitch, to support this new feature. Manufacturers can use our YouTube Live Streaming API to send 360-degree live streams to YouTube.

Other 360-degree cameras can also be used to live stream to YouTube as long as they produce compatible output, for example, cameras that can act as a webcam over USB (see this guide for details on how to live stream to YouTube). Like 360-degree uploads, 360-degree live streams need to be streamed in the equirectangular projection format. Creators can use our Schedule Events interface to set up 360 live streams using this new option:

[Screenshot: the 360° option in the Schedule Events interface]


Check out this help center page for more details.
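For developers working through the Live Streaming API rather than the Schedule Events UI, here is a minimal sketch of scheduling a 360-degree broadcast in Python. It assumes an authorized google-api-python-client service object and that the liveBroadcasts resource’s contentDetails.projection field accepts "360"; treat it as an illustration, not the definitive integration.

    # Minimal sketch: scheduling a 360-degree live broadcast.
    # Assumes `youtube` is an authorized service from build("youtube", "v3", ...).
    broadcast = youtube.liveBroadcasts().insert(
        part="snippet,status,contentDetails",
        body={
            "snippet": {
                "title": "My 360 live stream",
                "scheduledStartTime": "2016-04-20T00:00:00.000Z",
            },
            "status": {"privacyStatus": "unlisted"},
            "contentDetails": {"projection": "360"},  # mark the stream as 360
        },
    ).execute()
    print("Created broadcast:", broadcast["id"])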



1440p live streaming: Content such as live 360 as well as video games is best enjoyed at high resolutions and high frame rates. We are also announcing support for 1440p 60fps live streams on YouTube. Live streams at 1440p have about 78 percent more pixels than the standard HD resolution of 1080p. To ensure that your stream can be viewed on the broadest possible range of devices and networks, including those that don’t support such high resolutions or frame rates, we perform full transcoding on all streams and resolutions. A 1440p60 stream gets transcoded to 1440p60, 1080p60, and 720p60, as well as all resolutions from 1440p30 down to 144p30.

Support for 1440p will be available from our creation dashboard as well as our Live API. Creators interested in using this high resolution should make sure that their encoder is able to encode at such resolutions and that they have sufficient upload bandwidth on their network to sustain successful ingestion. A good rule of thumb is to provision at least twice the video bitrate.

VP9 ingestion / DASH ingestion: We are also announcing support for VP9 ingestion. VP9 is a modern video codec that lets creators stream higher-resolution video with less bandwidth, which is particularly important for high-resolution 1440p content. To facilitate ingestion of this new video codec, we are also announcing support for DASH ingestion, a simple, codec-agnostic, HTTP-based protocol. DASH ingestion will support H.264 as well as VP9 and VP8. HTTP-based ingestion is more resilient to corporate firewalls and also allows ingestion over HTTPS. It is also a simpler protocol to implement for game developers that want to offer in-game streaming with royalty-free video codecs. MediaExcel and Wowza Media Systems will both be demoing DASH VP9 encoding with YouTube live at their NAB booths.

We will soon publish a detailed guide to DASH ingestion on our support website. For developers interested in DASH ingestion, please join this Google group to receive updates.
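Until that guide lands, here is the general shape of HTTP-based segment ingestion, sketched in Python. The endpoint, segment names, and naming scheme below are placeholders of our own invention; the forthcoming guide will define the real details.

    import requests

    INGEST_URL = "https://upload.example.com/dash"  # placeholder endpoint

    def put_segment(name, data):
        # Each initialization or media segment travels as a plain HTTP PUT,
        # which works over HTTPS and through most corporate firewalls.
        response = requests.put(INGEST_URL + "/" + name, data=data)
        response.raise_for_status()

    with open("init.mp4", "rb") as f:          # initialization segment first
        put_segment("init.mp4", f.read())
    for i in range(1, 4):                      # then the media segments
        with open("segment_%d.m4s" % i, "rb") as f:
            put_segment("segment_%d.m4s" % i, f.read())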

Embedded captions: To provide more support to broadcasters, we now accept embedded EIA-608/CEA-708 captions over RTMP (H.264/AAC). That makes it easier to send captioned video content to YouTube and removes the need to post caption data over side-band channels. We initially offer caption support while streams are live, and will soon carry caption data over to the live recordings as well. Visit the YouTube Help Center for more information on our live captioning support.



We first launched live streaming back in 2011, and we’ve live streamed memorable moments since: the 2012 Olympics, the Red Bull Stratos jump, the League of Legends Championship, and the Coachella Music Festival. We are excited to see what our community can create with these new tools!

Nils Krahnstoever, Engineering Manager for Live
Kurt Wilms, Senior Product Manager for VR and Live
Sanjeev Verma, Product Manager for Video Formats

Chat it up, streamers! New Live Chat, Fan Funding & Sponsorships APIs

From the moment YouTube Gaming launched in August, we’ve consistently seen a pair of requests from our community: “Where are the chat bots? Where are the stream overlays?” A number of developers were happy to oblige, and some great new tools have launched for YouTube streamers.

With those new tools has come some feedback on our APIs - in particular, that there aren’t enough of them. So much is happening on YouTube live streams -- chatting, fan funding, sponsoring -- but there’s no good way to get the data out and into the types of apps that streamers want, like on-screen overlays, chat moderation bots, and more.

Well well, what have we here? A whole bunch of new additions to the Live Streaming API, getting you access to all those great chat messages, fan funding alerts and new sponsor messages!

  • Fan Funding events, which occur when a user makes a one-time voluntary payment to support a creator.
  • Live Chat events, which let you read the content of a YouTube live chat in real time and add new chat messages on behalf of the authenticated channel (see the sketch after this list).
  • Live Chat bans, which enable the automated application of chat “time-outs” and “bans.”
  • Sponsors, which provides access to the list of YouTube users that are sponsoring the channel. A sponsor provides recurring monetary support to a creator and receives special benefits.
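To make that concrete, here is a minimal polling loop for live chat, sketched with the Python client library. It assumes `youtube` is an authorized google-api-python-client service object and that you can access the broadcast; a real bot or overlay would need error handling and quota awareness on top.

    import time

    def poll_live_chat(youtube, broadcast_id):
        # Look up the broadcast to find its live chat ID.
        broadcast = youtube.liveBroadcasts().list(
            part="snippet", id=broadcast_id).execute()
        live_chat_id = broadcast["items"][0]["snippet"]["liveChatId"]

        page_token = None
        while True:
            response = youtube.liveChatMessages().list(
                liveChatId=live_chat_id,
                part="snippet,authorDetails",
                pageToken=page_token).execute()
            for message in response["items"]:
                author = message["authorDetails"]["displayName"]
                text = message["snippet"].get("displayMessage", "")
                print(author + ": " + text)
            page_token = response.get("nextPageToken")
            # The response says how long to wait before polling again.
            time.sleep(response["pollingIntervalMillis"] / 1000.0)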

In addition, we’ve closed a small gap in the LiveBroadcasts API by adding the ability to retrieve and modify the LiveBroadcast object for a channel’s “Stream now” stream.

As part of the development process we gave early access to a few folks, and we’re excited to show off some great integrations that launch today:

  • Using our new Sponsorships feature? Discord will let you offer your sponsors access to private voice and text servers.
  • Add live chat, new sponsors and new fan funding announcements to an overlay with the latest beta of Gameshow.
  • Looking for some help with moderating and managing your live chat? Try out Nightbot, a chat bot that can perform a variety of moderating tasks specifically designed to create a more efficient and friendly environment for your community.
  • Show off your live chat with an overlay in XSplit Broadcaster using their new YouTube Live Chat plugin.

We’ve also spotted some libraries and sample code on GitHub that might help get you started, including this chat library in Go and this one in Python.

We hope these new APIs can bring whole new categories of tools to the creator community. We’re excited to see what you build!

Marc Chambers, Developer Relations, recently watched ”ArmA 3| Episode 1|Pilot the CH53 E SS.”

Smoother <video> in Chrome

Video quality matters, and when an HD or HFR playback isn’t smooth, we notice. Chrome noticed. YouTube noticed. So we got together to make YouTube video playback smoother in Chrome, and we call it Project Butter.


For some context: our brains fill in the motion between frames if each frame is onscreen for the same amount of time - this is called motion interpolation. In other words, a 30 frames per second video won’t appear smooth unless each frame is spaced evenly, every 1/30th of a second. Smoothness is more complicated than just this - you can read more about it in this article by Michael Abrash at Valve.


Frame rates, display refresh rates and cadence
Your device’s screen redraws itself at a certain refresh rate. Videos present frames at a certain rate. These rates are often not the same. At YouTube we commonly see videos authored at 24, 25, 29.97, 30, 48, 50, 59.94, and 60 frames per second (fps), and these videos are viewed on displays with different refresh rates - the most common being 50Hz (Europe) and 60Hz (USA).


For a video to be smooth we need to figure out the best, most regular way to display the frames - the best cadence. The ideal cadence is calculated as the ratio of the display rate to the frame rate. For example, if we have a 60Hz display (a 1/60 second display interval) and a 30 fps clip, 60 / 30 == 2, which means each video frame should be displayed for two display intervals, a total duration of 2 * 1/60 = 1/30 second.
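As a quick illustration of that arithmetic (in Python; purely a sketch of the formula above):

    def ideal_cadence(display_hz, video_fps):
        """How many display intervals each video frame should occupy."""
        return display_hz / video_fps

    print(ideal_cadence(60, 30))   # 2.0 -> each frame shown for 2 intervals
    print(ideal_cadence(60, 24))   # 2.5 -> alternate 3 and 2 (3:2 cadence)
    print(ideal_cadence(50, 25))   # 2.0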


We played videos a bunch of different ways and scored them on smoothness.  


Smoothness score
Using off-the-shelf HDMI capture hardware and some special video clips, we computed a percentage score based on the number of times each video frame was displayed relative to a calculated optimal display count. The higher the score, the more frames aligned with the optimal display frequency. Below is a figure showing how Chrome 43 performed when playing a 30fps clip back on a 60Hz display:
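In spirit, the scoring reduces to something like this (illustrative Python; the real measurement runs over captured HDMI output):

    def smoothness_score(display_counts, ideal_count):
        # Fraction of frames whose display count matched the ideal cadence.
        aligned = sum(1 for c in display_counts if c == ideal_count)
        return 100.0 * aligned / len(display_counts)

    # A 30fps clip on a 60Hz display has an ideal count of 2 per frame.
    print(smoothness_score([2, 2, 3, 2, 1, 2], ideal_count=2))  # ~66.67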


Smoothness: 68.49%, ~Dropped: 5 / 900 (0.555556%)


The y-axis is the number of times each frame was displayed, while the x-axis is the frame number. As mentioned previously, the calculated ideal display count for a 30fps clip on a 60Hz display is 2. So, in an ideal situation, the graph should be a flat horizontal line at 2, yet Chrome dropped many frames and displayed certain frames for as many as 4 display cycles! The smoothness score reflects this - only 68.49 percent of frames were displayed correctly. How could we track down what was going on?


Using some of the performance tracing tools built into Chrome, we identified timing issues inherent to the existing design for video rendering as the culprit. These issues resulted in both missed and irregular video frames on a regular basis.



There were two main problems in the interactions between Chrome’s compositor (responsible for drawing frames) and its media pipeline (responsible for generating frames):
  1. The compositor didn’t have a timely way of knowing when a video frame needed display. Video frames were selected on the media pipeline thread, while the compositor would occasionally come along looking for them on the compositor thread; if the compositor thread was busy, it wouldn’t get the notification in time.
  2. Chrome’s media pipeline didn’t know when the compositor would be ready to draw its next frame. This led to the media pipeline sometimes picking a frame that was too old by the time the compositor displayed it.


In Chrome 44, we re-architected the media and compositor pipelines to communicate carefully about the intent to generate and display each frame. We also improved frame selection by using the optimal display count information. With these changes, Chrome 44 significantly improved smoothness scores across all video frame rates and display refresh rates:
Smoothness: 99.33%, ~Dropped: 0 / 900 (0.000000%)


Smooth like butter. Read more in the public design document if you’re interested in further details.


Dale Curtis, Software Engineer, recently watched WARNING: SCARIEST GAME IN YEARS | Five Nights at Freddy's - Part 1
Richard Leider, Engineering Manager, recently watched Late Art Tutorial.
Renganathan Ramamoorthy, Product Manager, recently watched Video Game High School

Improving YouTube video thumbnails with deep neural nets

Video thumbnails are often the first things viewers see when they look for something interesting to watch. A strong, vibrant, and relevant thumbnail draws attention, giving viewers a quick preview of the content of the video, and helps them to find content more easily. Better thumbnails lead to more clicks and views for video creators.

Inspired by the recent remarkable advances of deep neural networks (DNNs) in computer vision, such as image and video classification, our team has recently launched an improved automatic YouTube "thumbnailer" in order to help creators showcase their video content. Here is how it works.

The Thumbnailer Pipeline
While a video is being uploaded to YouTube, we first sample frames from the video at one frame per second. Each sampled frame is evaluated by a quality model and assigned a single quality score. The frames with the highest scores are selected, enhanced and rendered as thumbnails with different sizes and aspect ratios. Among all the components, the quality model is the most critical and turned out to be the most challenging to develop. In the latest version of the thumbnailer algorithm, we used a DNN for the quality model. So, what is the quality model measuring, and how is the score calculated?

The main processing pipeline of the thumbnailer.
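In pseudocode terms, the pipeline looks roughly like the following Python sketch. The frame decoder and quality model here are hypothetical stand-ins, not YouTube’s production components.

    def pick_thumbnail_candidates(video_path, quality_model, top_k=3):
        # Sample frames at one frame per second, score each one, and keep
        # the highest-scoring frames as thumbnail candidates.
        scored = []
        for frame in decode_frames_at_1fps(video_path):  # hypothetical decoder
            scored.append((quality_model.score(frame), frame))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [frame for _, frame in scored[:top_k]]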

(Training) The Quality Model
Unlike the task of identifying whether a video contains your favorite animal, judging the visual quality of a video frame can be very subjective - people often have very different opinions and preferences when selecting frames as video thumbnails. One of the main challenges we faced was how to collect a large set of well-annotated training examples to feed into our neural network. Fortunately, in addition to algorithmically generated thumbnails, many YouTube videos also come with carefully designed custom thumbnails uploaded by creators. Those thumbnails are typically well framed, in focus, and centered on a specific subject (e.g. the main character in the video). We consider these custom thumbnails from popular videos as positive (high-quality) examples, and randomly selected video frames as negative (low-quality) examples. Some examples of the training images are shown below.

Example training images.
The visual quality model essentially solves a problem we call "binary classification": given a frame, is it of high quality or not? We trained a DNN on this set using an architecture similar to the Inception network in GoogLeNet, which achieved the top performance in the ImageNet 2014 competition.
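As a rough sketch of that binary objective, here is how one might set up a comparable classifier with today’s Keras InceptionV3 as a stand-in for the Inception-style network described above (the production model and training setup are, of course, different):

    import tensorflow as tf

    base = tf.keras.applications.InceptionV3(
        include_top=False, pooling="avg", input_shape=(299, 299, 3))
    head = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, head)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(frames, labels)  # 1 = custom thumbnail, 0 = random frame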

Results
Compared to the previous automatically generated thumbnails, the DNN-powered model is able to select frames with much better quality. In a human evaluation, the thumbnails produced by our new models are preferred to those from the previous thumbnailer in more than 65% of side-by-side ratings. Here are some examples of how the new quality model performs on YouTube videos:

Example frames with low and high quality score from the DNN quality model, from video “Grand Canyon Rock Squirrel”.
Thumbnails generated by old vs. new thumbnailer algorithm.

We recently launched this new thumbnailer across YouTube, which means creators can start to choose from higher quality thumbnails generated by our new thumbnailer. Next time you see an awesome YouTube thumbnail, don’t hesitate to give it a thumbs up. ;)

Access to YouTube Analytics data in bulk

Want to get all of your YouTube data in bulk? Are you hitting the quota limits while accessing analytics data one request at a time? Do you want to be able to break down reports by more dimensions? What about accessing assets and revenue data?

With the new YouTube Reporting API, your authorized application can retrieve bulk data reports in the form of CSV files that contain YouTube Analytics data for a channel or content owner. Once activated, reports are generated daily and contain data for a unique, 24-hour period.

While the existing YouTube Analytics API supports real-time, targeted queries of much of the same data, the YouTube Reporting API is designed for applications that can retrieve and import large data sets, then use their own tools to filter, sort, and mine that data.

As of now, the API supports video, playlist, ad performance, estimated earnings, and asset reports.

How to start developing


  • Choose your reports:
    • Video reports provide statistics for all user activity related to a channel's videos or a content owner's videos. For example, these metrics include the number of views or ratings that videos received. Some video reports for content owners also include earnings and ad performance metrics.
    • Playlist reports provide statistics that are specifically related to video views that occur in the context of a playlist.
    • Ad performance reports provide impression-based metrics for ads that ran during video playbacks. These metrics account for each ad impression, and each video playback can yield multiple impressions.
    • Estimated earnings reports provide the total earnings for videos from Google-sold advertising sources as well as from non-advertising sources. These reports also contain some ad performance metrics.
    • Asset reports provide user activity metrics related to videos that are linked to a content owner's assets. For its data to be included in the report, a video must have been uploaded by the content owner and then claimed as a match of an asset in the YouTube Content ID system.

  • Schedule reports:
  1. Get an OAuth token (authentication credentials)
  2. Call the reportTypes.list method to retrieve a list of the available report types
  3. Create a new reporting job by calling jobs.create and passing the desired report type (and/or query in the future)

  • Retrieve reports (see the sketch after this list):
  1. Get an OAuth token (authentication credentials)
  2. Call the jobs.list method to retrieve the available reporting jobs and note the ID of the job you want.
  3. Call the reports.list method with the jobId filter parameter set to that ID to retrieve the list of downloadable reports the job created.
  4. Check the report’s last modified date to determine whether the report has been updated since you last retrieved it.
  5. Fetch the report from the URL obtained in step 3.

  • Use our sample code and tools:
    • Client libraries for many different programming languages can help you implement the YouTube Reporting API as well as many other Google APIs.
    • Don't write code from scratch! Our Java, PHP, and Python code samples will help you get started.
    • The APIs Explorer lets you try out sample calls before writing any code.
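Putting the scheduling and retrieval steps together, a minimal sketch with the Python client library might look like this. It assumes `youtube_reporting` is an authorized service object from build("youtubereporting", "v1", ...), and the report type ID shown is just an example taken from reportTypes.list:

    from io import FileIO
    from googleapiclient.http import MediaIoBaseDownload

    # Schedule: list the available report types, then create a job for one.
    types = youtube_reporting.reportTypes().list().execute()
    for rt in types.get("reportTypes", []):
        print(rt["id"], "-", rt["name"])

    job = youtube_reporting.jobs().create(body={
        "reportTypeId": "channel_basic_a1",  # example report type ID
        "name": "Daily channel report",
    }).execute()

    # Retrieve: list the job's reports and download each CSV from its URL.
    reports = youtube_reporting.jobs().reports().list(jobId=job["id"]).execute()
    for report in reports.get("reports", []):
        request = youtube_reporting.media().download(resourceName=" ")
        request.uri = report["downloadUrl"]  # fetch from the report's URL
        with FileIO(report["id"] + ".csv", mode="wb") as out:
            downloader = MediaIoBaseDownload(out, request)
            done = False
            while not done:
                _, done = downloader.next_chunk()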


Cheers,


Ten years of YouTube video tech in ten videos

2005: YouTube is born

Me at the Zoo is the first video uploaded to YouTube

2006: Google buys YouTube

One year after YouTube launches, videos play in the FLV container with the H.263 codec at a maximum resolution of 240p. We scale videos up to 640x360, but you can still click a button to play at original size.

2007: YouTube goes mobile

YouTube is one of the original applications on the iPhone. Because the iPhone doesn't support Flash, we re-encode every single YouTube video into H.264 with the MP4 container. YouTube videos get a resolution bump to 360p.

2008: YouTube kicks it up to HD

With upload sizes and download speeds growing, videos jump in size up to 720p HD. Lower resolution files get higher quality by squeezing Main Profile H.264 into FLVs.

2009: YouTube enters the third dimension

YouTube supports 3D videos, 1080p and live streaming.

2010: YouTube's on TV

The biggest screen in your house now gets YouTube courtesy of Flash Lite and ActionScript 2. 2010 also sees the first playbacks with HTML5 <video> thanks to VP8, an open source video codec. We bump up the maximum resolution to 4K, known as "Original" at the time.

2011: YouTube slices bread (and videos) to battle buffering

We launch Sliced Bread, codename for a project that enables adaptive bitrate in the Flash player by requesting videos a little piece at a time. Users see higher quality videos more often and buffering less often.

2012: YouTube live streaming hits prime time

We scale up our live streaming infrastructure to support the 2012 Summer Olympics, with over 1,200 events. In October, over 8 million people watch live as Felix Baumgartner jumps from the stratosphere.

2013: YouTube's first taste of VP9

We start our first experiments with VP9 in Chrome, which brings higher quality video at less bandwidth. Adaptive bitrate streaming in the HTML5 and Flash players moves to the DASH standard using both FMP4 and MKV video containers.

2014: Silky smooth 60fps comes to YouTube

High frame rate isn't just for games anymore: YouTube now supports videos that play at up to 60fps. Gangnam Style becomes the first YouTube video to break the MAX_INT barrier with more than 2^32 / 2 - 1 (2,147,483,647) views.

2015: YouTube adds spherical video (look behind you!)

You can now upload videos that wrap 360 degrees around the viewer. Even 4K videos can play up to 60fps. HTML5 becomes the default YouTube web player.

Richard Leider, Engineering Manager, recently watched David Bowie - Oh You Pretty Things
Jonathan Levine, Product Manager, recently watched Candide Thovex - One of those days 2

Bye-bye, YouTube Data API v2

UPDATE 08/03/15: Starting today, API v2 of comments, captions and video flagging services are turned down.
------------------------------------------------------------------------------------------------------------------------------------------------------
UPDATE 06/03/15: Starting today, most YouTube Data API v2 calls will receive 410 Gone HTTP responses.
------------------------------------------------------------------------------------------------------------------------------------------------------
UPDATE 05/06/15: Starting today, YouTube Data API v2 video feeds will only return the support video.
------------------------------------------------------------------------------------------------------------------------------------------------------
UPDATE: With the launch of video abuse reporting and video search for developers, the Data API v3 supports every feature scheduled to be migrated from the soon-to-be-turned-down Data API v2.
------------------------------------------------------------------------------------------------------------------------------------------------------

With the recent additions of comments, captions, and RSS push notifications, the Data API v3 supports almost every feature scheduled to be migrated from the soon-to-be-turned-down Data API v2. The only remaining feature to be migrated is video flagging, which will launch in the coming days. The new API brings in many features from the latest version of YouTube, making sure your users are getting the best YouTube experience on any screen.

For a quick trip down memory lane: in March 2014, we announced that the Data API v2 would be retired on April 20, 2015, and would be shut down soon thereafter. To help with your migration, we launched the migration guide in September 2014, and have also been giving you regular notices on v3 feature updates.

Retirement plan
If you’re still using the Data API v2, today we’ll start showing a video at the top of your users’ video feeds that will notify them of how they might be affected. Apart from that, your apps will work as usual.

In early May, Data API v2 video calls will start returning only the warning video introduced on April 20. Users will not be able to view other videos on apps that use the v2 API video calls. See youtube.com/devicesupport for affected devices.

By late May, v2 API calls except for comments and captions will receive 410 Gone HTTP responses. You can test your application’s reaction to this response by pointing the application at eol.gdata.youtube.com instead of gdata.youtube.com. While you should migrate your app as soon as possible, these features will work in the Data API v2 until the end of July 2015 to avoid any outages.
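For a quick sanity check of that behavior, something like the following works (illustrative Python; the feed path is just an example v2 endpoint):

    import requests

    resp = requests.get("http://eol.gdata.youtube.com/feeds/api/videos?v=2")
    print(resp.status_code)   # expect 410 Gone from the end-of-life host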

How you can migrate
Check out the frequently asked questions and migration guide for the most up-to-date instructions on how to update specific features to use the Data API v3. The guide now lists all of the Data API v2 functionality that is being deprecated and won't be offered in the Data API v3. It also includes updated instructions for a few newly migrated features, like comments, captions, and video flagging.

- Ibrahim Ulukaya, and the YouTube for Developers team

Manage comments with the YouTube Data API v3

YouTube Sentiment Analysis Demo
Cindy 3 hours ago
I wish my app could manage YouTube comments.

Ibrahim 2 hours ago
Then it's your day today. With the new YouTube Data API (v3) you can now have comments in your app. Just register your application to use the v3 API and then check out the documentation for the Comments and CommentThreads resources and their methods.

Andy 2 hours ago
+Cindy R u still on v2? U know the v2 API is being deprecated on April 20, and you’ve updated to v3 right?

Andy 1 hour ago
+Ibrahim I can haz client libraries, too?

Ibrahim 30 minutes ago
Yes, there are client libraries for many different programming languages, and there are already Java, PHP, and Python code samples.

Matt 20 minutes ago
My brother had a python and he used to feed it mice. Pretty gross!

Cindy 10 minutes ago
Thanks, +Ibrahim. This is very cool. The APIs Explorer lets you try out sample calls before writing any code, too.

Ibrahim 5 minutes ago
Check out this interactive demo that uses the new comments retrieval feature and Google Prediction APIs. The demo displays audience sentiment against any video by retrieving the video's comments and feeding them to the Cloud Prediction API for the sentiment analysis.
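For a taste of the new feature, here’s a minimal sketch of retrieving a video’s comment threads with the Python client library. It assumes `youtube` is an authorized service object from build("youtube", "v3", ...) and VIDEO_ID identifies the target video:

    threads = youtube.commentThreads().list(
        part="snippet",
        videoId=VIDEO_ID,
        textFormat="plainText",
        maxResults=25).execute()

    for thread in threads.get("items", []):
        top = thread["snippet"]["topLevelComment"]["snippet"]
        print(top["authorDisplayName"] + ": " + top["textDisplay"])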

VP9: Faster, better, buffer-free YouTube videos

As more people watch more high-quality videos across more screens, we need video formats that provide better resolution without increasing bandwidth usage. That’s why we started encoding YouTube videos in VP9, the open-source codec that brings HD and even 4K (2160p) quality at half the bandwidth used by other known codecs.

VP9 is the most efficient video compression codec in widespread use today. In the last year alone, YouTube users have already watched more than 25 billion hours of VP9 video, billions of which would not have been played in HD without VP9's bandwidth benefits. And with more of our device partners adopting VP9, we wanted to give you a primer on the technology.

How VP9 works

Videos hold a lot of information. If video were stored in the same format that a camera sensor uses when shooting a scene, the resulting files would be enormous — raw 4K is up to 18,000 Mbps! Instead, modern video compression looks at a video more like a person might, by encoding a description of the features in a scene, and tracking how those features move and change. This compression is hundreds of times more efficient than a camera sensor's recording and is what makes video streaming possible.

While VP9 uses the same basic blueprint as previous codecs, the WebM team has packed improvements into VP9 to get more quality out of each byte of video. For instance, the encoder prioritizes the sharpest image features, and the codec now uses asymmetric transforms to help keep even the most challenging scenes looking crisp and block-free.

Here's a comparison between the image quality you'd get watching Janelle Monáe with VP9 or legacy H.264 transcodes on a 600Kbps connection:

[Image comparison with selectable views: VP9 / H.264 / Combined]

Bringing quality to the people

This new format bumps everybody one notch closer to our goal of instant, high-quality, buffer-free videos. That means that if your Internet connection used to only play up to 480p without buffering on YouTube, it can now play silky smooth 720p with VP9.

VP9 also has benefits for people with limited bandwidth or expensive data plans. By cutting bitrates by as much as half, it dramatically increases the number of users who can watch 360p-quality video without increased rebuffering or cost.

Reduced time spent watching low quality formats thanks to VP9

Opening the door to 4K

And for those who can never get enough pixels (including your humble author!), VP9 unlocks the burgeoning world of 4K videos. At larger video sizes, VP9 actually gets even more efficient than its predecessors, so uninterrupted 4K content can now be streamed by a significant and growing part of the YouTube audience. The amount of 4K video uploaded to YouTube has more than tripled in the past year, and VP9 helps us plan for improved streaming into the future. You can find 4K videos by using the search filter, or see some of our favorites in this playlist.

Where can I use VP9?

Thanks to our device partners, VP9 decoding support is available today in the Chrome web browser, in Android devices like the Samsung Galaxy S6, and in TVs and game consoles from Sony, LG, Sharp, and more. More than 20 device partners across the industry are launching products in 2015 and beyond using VP9.

To learn more about producing your own VP9 content, see our FFmpeg encoding guide or check out the Adobe Premiere WebM plugin.
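As a starting point, a basic VP9 encode with FFmpeg's libvpx-vp9 encoder can be driven like this (a sketch only; the bitrate and file names are placeholders, and the encoding guide covers tuned settings):

    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-c:v", "libvpx-vp9", "-b:v", "2M",  # VP9 video at a 2 Mbps target
        "-c:a", "libopus",                   # Opus audio for the WebM file
        "output.webm",
    ], check=True)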

Steven Robertson, Software Engineer, recently watched “St. Lucia - Before The Dive.”

Scaling MySQL in the cloud with Vitess and Kubernetes

[Cross-posted from the Google Cloud Platform Blog]

Your new website is growing exponentially. After a few rounds of high fives, you start scaling to meet this unexpected demand. While you can always add more front-end servers, eventually your database becomes a bottleneck, which leads you to . . .

  • Add more replicas for better read throughput and data durability
  • Introduce sharding to scale your write throughput and let your data set grow beyond a single machine
  • Create separate replica pools for batch jobs and backups, to isolate them from live traffic
  • Clone the whole deployment into multiple datacenters worldwide for disaster recovery and lower latency

At YouTube, we went on that journey as we scaled our MySQL deployment, which today handles the metadata for billions of daily video views and 300 hours of new video uploads per minute. To do this, we developed the Vitess platform, which addresses scaling challenges while hiding the associated complexity from the application layer.

Vitess is available as an open-source project and runs best in a containerized environment. With Kubernetes and Google Container Engine as your container cluster manager, it's now a lot easier to get started. We’ve created a single deployment configuration for Vitess that works on any platform that Kubernetes supports.

In addition to being easy to deploy in a container cluster, Vitess also takes full advantage of the benefits offered by a container cluster manager, in particular:

  • Horizontal scaling – add capacity by launching additional nodes rather than making one huge node
  • Dynamic placement – let the cluster manager schedule Vitess containers wherever it wants
  • Declarative specification – describe your desired end state, and let the cluster manager create it
  • Self-healing components – recover automatically from machine failures

In this environment, Vitess provides a MySQL storage layer with improved durability, scalability, and manageability.

We're just getting started with this integration, but you can already run Vitess on Kubernetes yourself. For more on Vitess, check out vitess.io, ask questions on our forum, or join us on GitHub. In particular, take a look at our overview to understand the trade-offs of Vitess versus NoSQL solutions and fully-managed MySQL solutions like Google Cloud SQL.

-Posted by Anthony Yeh, Software Engineer, YouTube