Monthly Archives: December 2019

Android’s commitment to Kotlin

Posted by David Winer, Kotlin Product Manager


When we announced Kotlin as a supported language for Android, there was a tremendous amount of excitement among developers. Since then, there has been a steady increase in the number of developers using Kotlin. Today, we’re proud to say nearly 60% of the top 1,000 Android apps contain Kotlin code, with more and more Android developers introducing safer and more concise code using Kotlin.

During this year’s I/O, we announced that Android development will be Kotlin-first, and we’ve stood by that commitment. This is one of the reasons why Android is the gold partner for this year’s KotlinConf.

Seamless Kotlin on Android

In 2019, we focused on making programming in Kotlin on Android a seamless experience, with modern Kotlin-first APIs across the Android platform. Earlier this year, we launched a developer preview of Jetpack Compose, a modern UI toolkit for Android built using a Kotlin domain-specific language (DSL). We also incorporated coroutines into several of the flagship Jetpack libraries, including Room and Lifecycle. Finally, we brought Kotlin extensions (KTX) to even more major Google libraries, including Firebase and Play Core.

On the tooling side, we strengthened our commitment to Kotlin in Android Studio and the Android build pipeline. Significant updates to R8 (the code shrinker for Android) brought the ability to detect and handle Kotlin-specific bytecode patterns. Support was added for .kts Gradle build scripts in Android Studio, along with improved Kotlin support in Dagger. We worked closely with the JetBrains team to optimize support for the Kotlin plugin, and make the Kotlin editing experience in Android Studio fluid and fast.

Better Kotlin learning

This year we’ve also invested in quality Kotlin on Android learning content.

We released two free video learning courses in partnership with Udacity: Developing Android Apps in Kotlin and Advanced Android in Kotlin. This content was also released as the Codelab courses Android Kotlin Fundamentals and Advanced Android in Kotlin, for those who prefer text-based learning. The popular Kotlin Bootcamp for Programmers Udacity course was also published as a Codelabs course, helping provide a Kotlin foundation for non-Kotlin developers. Kotlin-based instructional Codelabs were also created for topics including Material Design, Kotlin coroutines, location, refactoring to Kotlin, billing in Kotlin, and Google Pay in Kotlin. It hasn’t been just about new content: we've updated Kotlin Codelab favorites to take advantage of important features such as coroutines.

Looking ahead

In 2020, Android development will continue to be Kotlin-first. We’ve been listening to your feedback, and will continue partnering with JetBrains to improve your experience with Kotlin.

This includes working with JetBrains to improve the Kotlin compiler over the next year. Our teams are making the compiler more extensible with a new backend, and making your builds faster with a significantly faster frontend. We’re also working with many of the largest annotation processors to make compilation faster for Kotlin code. You can also expect more Kotlin-first updates to Android, including more Jetpack libraries that make use of Kotlin features such as coroutines.

Thank you for letting us be part of your app development journey this year. We look forward to continuing the journey with you in 2020.

Understanding Transfer Learning for Medical Imaging



As deep neural networks are applied to an increasingly diverse set of domains, transfer learning has emerged as a highly popular technique in developing deep learning models. In transfer learning, the neural network is trained in two stages: 1) pretraining, where the network is generally trained on a large-scale benchmark dataset representing a wide diversity of labels/categories (e.g., ImageNet); and 2) fine-tuning, where the pretrained network is further trained on the specific target task of interest, which may have fewer labeled examples than the pretraining dataset. The pretraining step helps the network learn general features that can be reused on the target task.
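As a toy illustration of this two-stage recipe (not the paper's code: the single linear model and synthetic data below are stand-ins for a deep network, an ImageNet-scale source dataset, and a small medical target task):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear(X, y, W, lr=0.1, steps=200):
    """A few steps of logistic-regression gradient descent; a toy
    stand-in for training a deep network."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ W))      # sigmoid predictions
        W -= lr * X.T @ (p - y) / len(y)      # gradient step
    return W

d = 20

# Stage 1: "pretraining" on a large source dataset.
X_src = rng.normal(size=(1000, d))
w_true = rng.normal(size=d)
y_src = (X_src @ w_true > 0).astype(float)
W_pre = train_linear(X_src, y_src, np.zeros(d))

# Stage 2: fine-tuning on a small, related target dataset, starting
# either from the pretrained weights or from scratch.
X_tgt = rng.normal(size=(50, d))
y_tgt = (X_tgt @ (w_true + 0.1 * rng.normal(size=d)) > 0).astype(float)

W_transfer = train_linear(X_tgt, y_tgt, W_pre.copy(), steps=20)
W_scratch = train_linear(X_tgt, y_tgt, np.zeros(d), steps=20)
```

The open question the paper studies is when, and by how much, `W_transfer` actually beats `W_scratch` on specialized target tasks.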

This kind of two-stage paradigm has become extremely popular in many settings, and particularly so in medical imaging. In the context of transfer learning, standard architectures designed for ImageNet with corresponding pretrained weights are fine-tuned on medical tasks ranging from interpreting chest x-rays and identifying eye diseases, to early detection of Alzheimer’s disease. Despite its widespread use, however, the precise effects of transfer learning are not yet well understood. While recent work challenges many common assumptions, including the effects on performance improvement, contribution of the underlying architecture and impact of pretraining dataset type and size, these results are all in the natural image setting, and leave many questions open for specialized domains, such as medical images.

In our NeurIPS 2019 paper, “Transfusion: Understanding Transfer Learning for Medical Imaging,” we investigate these central questions for transfer learning in medical imaging tasks. Through both a detailed performance evaluation and analysis of neural network hidden representations, we uncover many surprising conclusions, such as the limited benefits of transfer learning for performance on the tested medical imaging tasks, a detailed characterization of how representations evolve through the training process across different models and hidden layers, and feature-independent benefits of transfer learning for convergence speed.

Performance Evaluation
We first performed a thorough study of the effect of transfer learning on model performance, comparing models trained from random initialization directly on the tasks to models pretrained on ImageNet and fine-tuned on the same tasks. We looked at two large-scale medical imaging tasks — diagnosing diabetic retinopathy from fundus photographs and identifying five different diseases from chest x-rays. We evaluated various neural network architectures, including standard architectures popularly used for medical imaging (ResNet50, Inception-v3) as well as a family of simple, lightweight convolutional neural networks consisting of four or five layers of the standard convolution-batchnorm-ReLU progression (CBRs).
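To get a feel for the size gap, here is a back-of-the-envelope parameter count for a hypothetical 4-layer CBR stack; the channel widths below are made up for illustration and may not match the paper's exact configurations:

```python
def cbr_params(c_in, channels, k=3):
    """Parameters in a stack of convolution-batchnorm-ReLU (CBR) blocks.
    Each block: a k x k conv (with bias) plus batchnorm scale and shift."""
    total = 0
    for c_out in channels:
        conv = k * k * c_in * c_out + c_out   # conv weights + biases
        bn = 2 * c_out                        # batchnorm gamma and beta
        total += conv + bn
        c_in = c_out
    return total

# Hypothetical 4-layer CBR on RGB input; excludes the classifier head.
n = cbr_params(3, [64, 128, 256, 512])
print(f"CBR backbone: {n/1e6:.2f}M parameters vs ~25.6M for ResNet50")
```

Even with generous channel widths, such a stack comes to well under 2M parameters, an order of magnitude below ResNet50's roughly 25.6M.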

The results from evaluating all of these models on the different tasks with and without transfer learning give us four main takeaways:
  • Surprisingly, transfer learning does not significantly affect performance on medical imaging tasks, with models trained from scratch performing nearly as well as standard ImageNet transferred models.
  • On the medical imaging tasks, the much smaller CBR models perform at a level comparable to the standard ImageNet architectures.
  • As the CBR models are much smaller and shallower than the standard ImageNet models, they perform much worse on ImageNet classification, highlighting that ImageNet performance is not indicative of performance on medical tasks.
  • The two medical tasks are much smaller in size than ImageNet (~200k vs ~1.2m training images), but in the very small data regime, there may only be a few thousand training examples. We evaluated transfer learning in this very small data regime, finding that while there was a larger gap in performance between transfer and training from scratch for large models (ResNet), this was not true for smaller models (CBRs), suggesting that the large models designed for ImageNet may be overparameterized for the very small data regime.
Representation Analysis
We next study the degree to which transfer learning affects the kinds of features and representations learned by the neural networks. Given the similar performance, does transfer learning result in different representations from random initialization? Is knowledge from the pretraining step reused, and if so, where? To answer these questions, we analyze and compare the hidden representations (i.e., representations learned in the latent layers of the network) of the different neural networks trained to solve these tasks. This quantitative analysis can be challenging, due to the complexity and lack of alignment of different hidden layers. But a recent method, singular vector canonical correlation analysis (SVCCA; code and tutorials), based on canonical correlation analysis (CCA), helps overcome these challenges, and can be used to calculate a similarity score between a pair of hidden representations.
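The core of a CCA-based similarity score can be sketched in a few lines of numpy; this is plain CCA on activation matrices, omitting the singular-vector preprocessing step that gives SVCCA its name (see the linked code for the full method):

```python
import numpy as np

def cca_similarity(X, Y):
    """Mean canonical correlation between two sets of activations.

    X, Y: (num_datapoints, num_neurons) matrices of hidden activations
    for the same inputs, possibly from different networks or layers.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)   # orthonormal basis for the columns of X
    Qy, _ = np.linalg.qr(Y)   # orthonormal basis for the columns of Y
    # Canonical correlations are the singular values of Qx^T Qy.
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return rho.mean()

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 30))
print(cca_similarity(A, A))   # identical representations -> 1.0
print(cca_similarity(A, rng.normal(size=(500, 30))))  # unrelated -> low
```

A score near 1 means the two layers encode essentially the same subspace of features; unrelated representations score much lower.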

Similarity scores are computed for some of the hidden representations from the top latent layers of the networks (closer to the output) between networks trained from random initialization and networks trained from pretrained ImageNet weights. As a baseline, we also compute similarity scores of representations learned from different random initializations. For large models, representations learned from random initialization are much more similar to each other than those learned from transfer learning. For smaller models, there is greater overlap between representation similarity scores.
Representation similarity scores between networks trained from random initialization and networks trained from pretrained ImageNet weights (orange), and baseline similarity scores of representations trained from two different random initializations (blue). Higher values indicate greater similarity. For larger models, representations learned from random initialization are much more similar to each other than those learned through transfer. This is not the case for smaller models.
The reason for this difference between large and small models becomes clear with further investigation into the hidden representations. Large models change less through training, even from random initialization. We perform multiple experiments that illustrate this, from simple filter visualizations to tracking changes between different layers through fine-tuning.

When we combine the results of all the experiments from the paper, we can assemble a table summarizing how much representations change through training on the medical task across (i) transfer learning, (ii) model size and (iii) lower/higher layers.
Effects on Convergence: Feature Independent Benefits and Hybrid Approaches
One consistent effect of transfer learning was a significant speedup in the time taken for the model to converge. But having seen the mixed results for feature reuse from our representational study, we looked into whether there were other properties of the pretrained weights that might contribute to this speedup. Surprisingly, we found a feature-independent benefit of pretraining — the weight scaling.

We initialized the weights of the neural network as independent and identically distributed (iid), just like random initialization, but using the mean and variance of the pretrained weights. We called this initialization the Mean Var Init, which keeps the pretrained weight scaling but destroys all the features. This Mean Var Init offered significant speedups over random initialization across model architectures and tasks, suggesting that the pretraining process of transfer learning also helps with good weight conditioning.
Filter visualization of weights initialized according to pretrained ImageNet weights, Random Init, and Mean Var Init. Only the ImageNet Init filters have pretrained (Gabor-like) structure, as Rand Init and Mean Var weights are iid.
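The Mean Var Init itself is straightforward to express; in this sketch the layer name and shape are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_var_init(pretrained):
    """Re-sample each weight tensor iid from a normal distribution with
    the pretrained tensor's mean and variance: the scaling is kept, but
    all learned features are destroyed."""
    return {name: rng.normal(W.mean(), W.std(), size=W.shape)
            for name, W in pretrained.items()}

# Hypothetical pretrained conv weights (a 3x3 conv, 3 -> 64 channels).
pretrained = {"conv1": rng.normal(0.0, 0.05, size=(64, 3, 3, 3))}
init = mean_var_init(pretrained)
```

Because only the first two moments are copied, any speedup from this initialization cannot be attributed to feature reuse.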
Recall that our earlier experiments suggested that feature reuse primarily occurs in the lowest layers. To understand this, we performed weight transfusion experiments, where only a subset of the pretrained weights (corresponding to a contiguous set of layers) are transferred, with the remainder of weights being randomly initialized. Comparing convergence speeds of these transfused networks with full transfer learning further supports the conclusion that feature reuse is primarily happening in the lowest layers.
Learning curves comparing the convergence speed with AUC on the test set. Using only the scaling of the pretrained weights (Mean Var Init) helps with convergence speed. The figures compare the standard transfer learning and the Mean Var initialization scheme to training from random initialization.
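A weight-transfusion experiment can be sketched as copying a contiguous prefix of the pretrained layers and randomly initializing the rest; the layer names, shapes, and initialization scale here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def transfuse(pretrained, num_layers):
    """Keep the first `num_layers` pretrained tensors (in network order,
    from input to output); randomly re-initialize the remainder."""
    init = {}
    for i, (name, W) in enumerate(pretrained.items()):
        if i < num_layers:
            init[name] = W.copy()                      # transferred
        else:
            init[name] = rng.normal(0, 0.05, W.shape)  # random init
    return init

# Hypothetical 4-layer network weights, ordered from input to output.
pretrained = {f"layer{i}": rng.normal(size=(8, 8)) for i in range(4)}
partial = transfuse(pretrained, num_layers=2)
```

Comparing convergence across different prefix lengths is what localizes feature reuse to the lowest layers.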
This suggests hybrid approaches to transfer learning, where instead of reusing the full neural network architecture, we can recycle its lowest layers and redesign the upper layers to better suit the target task. This gives us most of the benefits of transfer learning while further enabling flexible model design. In the Figure below, we show the effect of reusing pretrained weights up to Block2 in Resnet50, halving the remainder of the channels, initializing those layers randomly, and then training end-to-end. This matches the performance and convergence of full transfer learning.
Hybrid approaches to transfer learning on Resnet50 (left) and CBR models (right) — reusing a subset of the weights and slimming the remainder of the network (Slim), and using mathematically synthesized Gabors for conv1 (Synthetic Gabor).
The figure above also shows the results of an extreme version of this partial reuse, transferring only the very first convolutional layer with mathematically synthesized Gabor filters (pictured below). Using just these (synthetic) weights offers significant speedups, and hints at many other creative hybrid approaches.
Synthetic Gabor filters used to initialize the first layer of neural networks in some of the experiments in this paper. The Gabor filters are generated as grayscale images and repeated across the RGB channels. Left: low frequencies. Right: high frequencies.
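Filters like these can be synthesized in closed form; the parameter values below are illustrative rather than the paper's exact settings:

```python
import numpy as np

def gabor_filter(size, theta, lam, sigma, gamma=0.5):
    """A 2D Gabor filter: a cosine plane wave of wavelength `lam` along
    orientation `theta`, under a Gaussian envelope of width `sigma`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# A bank of filters at several orientations, generated as grayscale and
# repeated across the RGB channels to initialize a first conv layer.
bank = [gabor_filter(7, theta, lam=4.0, sigma=2.0)
        for theta in np.linspace(0, np.pi, 8, endpoint=False)]
conv1 = np.stack([np.repeat(f[None, :, :], 3, axis=0) for f in bank])
```

Varying `lam` produces the low- and high-frequency banks shown in the figure.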
Conclusion and Open Questions
Transfer learning is a central technique for many domains. In this paper we provide insights on some of its fundamental properties in the medical imaging context, studying performance, feature reuse, the effect of different architectures, convergence and hybrid approaches. Many interesting open questions remain: How much of the original task has the model forgotten? Why do large models change less? Can we get further gains matching higher order moments of pretrained weight statistics? Are the results similar for other tasks, such as segmentation? We look forward to tackling these questions in future work!

Acknowledgements
Special thanks to Samy Bengio and Jon Kleinberg, who are co-authors on this work. Thanks also to Geoffrey Hinton for helpful feedback.

Source: Google AI Blog


5 ways to beat holiday stress with the Google Assistant

Five more gifts to buy, three projects to wrap up before the holiday break and one big family dinner to host. Anyone else have an end-of-the-year list like this? Here's how the Google Assistant is helping me get through it all:

1. Stay organized with notes and lists

If you’re like me, inspiration strikes when you’re busy, like while cooking, commuting or playing with the kids. Starting today, we’re rolling out the ability to use your Assistant to create and manage your notes and lists in Google Keep, Any.do, AnyList, or Bring! across Assistant-enabled phones and smart speakers. Lists are also available on Smart Displays.


To get started, simply connect the Assistant with the app you use to create notes or lists. Select the “Services” tab in your Google Assistant settings and then choose your preferred provider name from the “Notes and Lists” section. Once connected, new notes and lists created from supported Assistant surfaces will appear in your chosen provider. You can also ask the Assistant for historical notes and lists that were created before you connected the Assistant with your chosen provider, but these will not be visible in the provider’s app.


Here are a few things to try, starting with “Hey Google…”:

  • “Create a holiday gift list.”

  • “Add Chromebook to my holiday gift list.”

  • “Add cranberries to my grocery list.”

  • “Take a note.”

  • “Show me my notes.”

2. Assign reminders to your housemates and family members 

Assignable reminders help families and housemates collaborate and stay organized all year round. You can create reminders for your partner or roommate to pick up eggnog from the store, order gift wrapping paper or mail your holiday cards. To assign a reminder, ask your Assistant, “Hey Google, remind Nick to pick up Mom from the airport tonight.”

3. Find and share photos using just your voice

It’s now easier than ever to find and share your favorite holiday memories, simply by using your voice. On your Android phone, just say, “Hey Google, look up photos from this weekend," tap your favorite pictures and then say, "Hey Google, share these photos with Lizzie.” Your Assistant helps you search through your photos, pick your favorites, and send them to your friends or family. 

4. Listen to podcasts by topic

Heading to a potluck and tasked with bringing an entree? Turn to your Assistant for some cooking inspiration. When you ask the Assistant for podcasts about a certain topic—“Hey Google, find a podcast about holiday cooking”—it’ll suggest relevant episodes for you. Or if you’re looking to get a head start on productivity and self improvement, just ask, “Hey Google, show me podcasts about New Year's resolutions.” The feature is available now in English on all Assistant-enabled devices globally. 

5. Enjoy a pick-me-up while running errands 

And, while you’re getting your last-minute holiday shopping done, the Assistant can help you get a sweet treat or a pick-me-up from Dunkin’. If you have the Dunkin’ app installed on your Android phone, just say “Hey Google, order a latte from Dunkin’” to quickly start your order.


I hope these features will make your holiday season just a little bit easier, so you can focus on spending time with family.


Our annual pay equity review

Compensation should be based on what you do, not who you are. We design compensation to be fair and equitable from the outset—but because these are human processes, it’s important to double-check them. 

Each year we run a rigorous statistical analysis to make sure all new salaries, bonuses and equity awards are fair. We take into account things that should impact pay, such as role, level, location and performance. If we find any differences in proposed pay between men and women globally or by race and ethnicity or age in the U.S., we make upward adjustments.

Each year, we continue to improve our analytical approach. This year we included a higher percentage of Googlers in our analysis than before (now 93 percent worldwide), and for the first time we analyzed Googlers age 40 and over in the U.S. After thorough review, we increased compensation for 2 percent of employees to ensure that there were no inconsistencies for any demographic group. Increases totaled $5.1 million, and Googlers who received adjustments fell into every demographic category.

Ensuring fairness is a never-ending process, and our pay equity analysis is just one part of a larger effort to improve our practices. We know that employees’ level, performance ratings, and promotion history also impact pay, which is why we’re continuing to focus on all of our people processes to ensure that Google is a great place to work for everyone. 

You can read more about our pay equity analysis methodology on our re:Work site.

Europe and Africa code weeks: 136,000 students learn to code

Within the next 10 to 15 years, 90 percent of all jobs in Europe will require some level of technology education, and now is the time for the future workforce to start acquiring these skills. Computer Science (CS) programs all over the world are helping prepare students for the new global economy and helping them channel their excitement and passion into real-world creations.

This October, we supported Europe Code Week, a movement started by the European Commission, for the sixth consecutive year, and Africa Code Week for the fourth consecutive year. In total, Google funded 88 education organizations in 41 countries, reaching a grand total of 136,000 students.

This is part of our commitment to help one million Europeans grow their careers by the end of 2020 and to train 10 million Africans in digital skills by 2022 as part of Grow with Google. 

As our work with Europe Code Week shows, this support is making a difference. Here are just a few stories from the 33 organizations we funded across 23 countries, through which 21,291 students learned CS.

Europe Code Week

Africa Code Week 

In Africa, we joined forces with SAP and Africa Code Week to fund 55 organizations and grassroots groups across 18 countries. Over 115,000 students were able to explore CS through a variety of fun and interactive workshops. See some of their stories below.

We’re thrilled to help these students and teachers gain coding experience in Europe and Africa and look forward to inspiring even more students in 2020.

#YouTubeRewind – What India watched in 2019



For the last several years, video has increasingly become a medium that inspires and fascinates Indians, and a canvas for their imaginations. In the twelfth year of YouTube’s journey in India, 2019 has proven to be a coming-of-age year in more ways than one.


While movies and music continued to rule the hearts and minds of India, with ‘Rowdy Baby’ from Dhanush-starrer Maari 2 making it to YouTube’s global most viewed charts, breakout creators like Khandeshi Movies and their signature style of down-home comedy had us laughing with the rest of the country.


2019 was also the year when previously niche genres like farming, gaming and learning blossomed into categories to be reckoned with, notching impressive reach and engagement. Across categories, women creators could be seen leading from the front. While 2016 had just one woman creator with a subscriber base of over 1 million, 2019 has seen that number climb to a whopping 120 women creators with over a million subscribers.


While the portfolio of categories widened to add new verticals like learning and farming, Indian languages continued their expansion across verticals, with languages like Bengali, Tamil, Telugu, Kannada and Malayalam turning into fast growing video ecosystems in their own right. From comedy to gaming to beauty, each of these languages today houses a full showcase of the range of content on YouTube, with millions of creators fuelling the growth, and advertisers leveraging it to reach their marketing objectives. 


Here’s a quick look at 2019 in video, and what made the top charts globally and in India.


YouTube Most Viewed Music Videos Globally


  1. Daddy Yankee & Snow - Con Calma (Video Oficial)
  2. ROSALÍA, J Balvin - Con Altura (Official Video) ft. El Guincho
  3. Anuel AA, KAROL G - Secreto
  4. Anuel AA, Daddy Yankee, Karol G, Ozuna & J Balvin - China (Video Oficial)
  5. Jhay Cortez, J. Balvin, Bad Bunny - No Me Conoce (Remix)
  6. Shawn Mendes, Camila Cabello - Señorita
  7. Maari 2 - Rowdy Baby (Video Song) | Dhanush, Sai Pallavi | Yuvan Shankar Raja | Balaji Mohan
  8. BLACKPINK - 'Kill This Love' M/V
  9. Billie Eilish - bad guy
  10. Ariana Grande - 7 rings

YouTube Top Trending Videos in India

  1. Khandeshi Movies - Chotu Ke Golgappe
  2. Jaipur The Pink City - New Arabic Mehndi Design by Sonia Goyal
  3. Team Naach - O Saki Saki | Batla House
  4. The Motor Mouth - When Kapil Sharma met Doraemon voice artist
  5. Satish Tech - How To Make Helicopter Matchbox Helicopter Toy DIY
  6. Animated Video Pro - लालची दूधवाली | जादुई इंजेक्शन
  7. Cricket.com - Kohli, Dhoni too good for the Aussies | Second Gillette ODI
  8. Discovery Channel India - Exclusive Sneak Peek | Man VS Wild with Bear Grylls and PM Modi
  9. Sarpmitra Akash Jadhav - Dangerous Rescue Operation | Rescue indian cobra snake in the well from Ahmednagar maharashtra
  10. Experiment King - Chewing gum vs Hot oil experiment

YouTube Top Trending Music Videos in India

  1. Rowdy Baby - Dhanush, Sai Pallavi | Yuvan Shankar Raja | Maari 2
  2. Vaaste - Dhvani Bhanushali, Tanishk Bagchi | Nikhil D | Bhushan Kumar | Radhika Rao, Vinay Sapru
  3. She Don't Know - Millind Gaba Song | Shabby
  4. Coca Cola - Kartik A, Kriti S | Tony Kakkar | Tanishk Bagchi | Neha Kakkar | Luka Chuppi
  5. Coka - Sukh-E Muzical Doctorz | Alankrita Sahai | Jaani | Arvindr Khaira
  6. Ve Maahi - Akshay Kumar, Parineeti Chopra | Arijit Singh & Asees Kaur | Tanishk Bagchi | Kesari
  7. Dheeme Dheeme - Tony Kakkar ft. Neha Sharma
  8. Lehanga - Jass Manak, Satti Dhillon
  9. Pachtaoge - Arijit Singh | Vicky Kaushal, Nora Fatehi | Jaani, B Praak, Arvindr Khaira | Bhushan Kumar
  10. O Saki Saki - Nora Fatehi, Tanishk B, Neha K, Tulsi K, B Praak, Vishal-Shekhar | Batla House

With the massive depth and diversity of content on the platform, some of the country’s most-loved brands have leveraged YouTube to tell some of the year’s most compelling brand stories, the best of which form the YouTube Ads Leaderboard for 2019.

YouTube Ads Leaderboard India

  1. Kia Motors - Kia Motors India | Magical Inspirations | Stunning Designs
  2. Samsung - Samsung India Good Vibes App: Caring for the Possibilities
  3. Pepsi - Har Ghoont Mein Swag | Tiger Shroff | Disha Patani | Badshah | Ahmed Khan | Bhushan Kumar
  4. Mi Smart LED TV - Mi Smart LED TV sab ki sunega | Say It See It - Xiaomi India
  5. OPPO - OPPO F11 Pro | Features, Specs & Product Overview | Available Now
  6. Google Assistant - Pooche koi bhi sawaal Hindi mein (mausam ki jaankari) | Google Assistant
  7. Aditya Birla Group - Aditya Birla Group - Big in Your Life
  8. OnePlus - 90 Hz Smooth Moves | OnePlus x Robert Downey Jr
  9. Horlicks - #FearLessKota #BottleOfLove
  10. Vivo - All new vivo S1 with 32MP Selfie Camera | #ItsMyStyle | HDFC and Jio Offer

Our Rewind 2019 video compiles the top videos and creators that you liked and watched the most around the world, from the biggest games to must-watch beauty palettes and breakout stars.

For a deeper look at the year on YouTube and to see the top videos and trends in many other countries, head to this year's Rewind site.

Posted by Satya Raghavan, Director - YouTube Partnerships, India

Restrict the use of Drive File Stream to company-owned devices

Quick launch summary

Earlier this year, we gave admins more control over their corporate data by integrating controls for Drive File Stream in Google’s device management interface. The option to restrict the use of Drive File Stream to company-owned devices only is now available to opt into.

Admins can access the setting by going to the Admin console and navigating to Apps > G Suite > Settings for Drive and Docs > Features and Applications. Then, select “Allow Drive File Stream in your Organization” and “Only allow Drive File Stream on authorized devices (Beta)”.

Availability

G Suite editions

  • Available to all G Suite editions

On/off by default?

  • This feature will be OFF by default and can be enabled at the OU level.

Stay up to date with G Suite launches

Android Game SDK

Posted by Dan Galpin, Developer Advocate

With over 2.5 billion monthly active devices, the Android platform gives game developers incredible reach. Taking advantage of that opportunity can be a challenge, particularly if your game really tries to push the limits of what mobile can do. We've spent years working with game developers to capture and address their biggest issues, and we're just beginning to see the fruits of that effort with the launch of the Android Game SDK, a set of libraries that you can use to enhance your Android game.

The first library we are launching in the Android Game SDK helps developers with frame pacing, the synchronization of a game's rendering loop with the OS display subsystem and underlying display hardware. Android's display subsystem is designed to avoid the tearing that occurs when the display hardware switches to a new frame in the middle of an update. To this end, it buffers past frames, detects late frame submissions, and repeats the display of past frames when late frames are detected. When a game's render loop runs at a different rate than the native display hardware, such as a game running at 30 frames per second attempting to render on a device that natively supports 60 FPS, the optimal display flow involves synchronization between the game render loop, the system compositor, and the display hardware.

Optimal Display Flow

Any mismatch in synchronization can create substantial inconsistencies in frame times. If a frame takes substantially less time to render, it can shorten the presentation of the previous frame, causing something like a 33ms, 16ms, and 50ms sequence.

Synchronization Mismatch: Rendering too Fast

If a frame takes too long to render, a similar problem occurs. The frame will be presented for an extra frame, causing something like a 50ms, 16ms, and 33ms sequence.

Synchronization Mismatch: Slow Frame

In either of these two scenarios, the game player will experience inconsistent delays between game input and screen updates. Visually, things will look less smooth and polished. Both visuals and gameplay can be impacted.
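The arithmetic behind these sequences can be sketched with a toy simulation in which each frame is latched at the first vsync after it finishes rendering (the real library works with Choreographer and presentation timestamps; the frame times here are illustrative):

```python
import math

VSYNC_MS = 16.67  # refresh period of an assumed 60 Hz display

def present_times(render_done_ms):
    """Latch each frame at the first vsync at or after it finishes."""
    return [math.ceil(t / VSYNC_MS) * VSYNC_MS for t in render_done_ms]

def on_screen_ms(render_done_ms):
    """How long each frame stays on screen, truncated to whole ms."""
    latched = present_times(render_done_ms)
    return [int(b - a) for a, b in zip(latched, latched[1:])]

# A 30 FPS game: frames normally finish every ~33.3 ms, but one slow
# frame (50 ms) stretches its predecessor and squeezes itself.
done = [0.0, 33.3, 83.3, 100.0, 133.3]
print(on_screen_ms(done))
```

The slow frame yields exactly the 50ms, 16ms, 33ms pattern described above; frame pacing smooths this by scheduling presentation times deliberately instead of letting frames latch whenever they happen to finish.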

The Frame Pacing library uses Android's Choreographer API for synchronization with the display subsystem, using presentation timestamp extensions on both OpenGL and Vulkan APIs to make sure frames are presented at the proper time, and sync fences to avoid buffer stuffing. Multiple refresh rates are handled if supported by the device, giving a game more flexibility in presenting a frame. For a device that supports a 60 Hz refresh rate as well as 90 Hz, a game that cannot produce 60 frames per second can drop to 45 FPS instead of 30 FPS to remain smooth. The library detects the expected game frame rate and auto-adjusts frame presentation times accordingly. The Frame Pacing library allows games to take advantage of higher refresh rate 90 and 120 Hz displays, while also making it easy to lock the refresh rate to a desired value, regardless of the underlying display refresh rate.

The Frame Pacing library is built into Unity versions 2019.2 and beyond. Just select the optimized Frame Pacing checkbox under Android Settings to enable smoother frame rates for your game. If you have source access to your game engine, it's straightforward to integrate the library into your OpenGL or Vulkan renderer. We've just added library binaries for download at developer.android.com/games/sdk/, or you can download the source code from the Android Open Source Project.

To learn more about Frame Pacing, check out the documentation at developer.android.com, along with the Frame Pacing section of the Optimizing Android Games Performance talk from Google I/O 2019. Be sure to subscribe to our Twitter channel and stay tuned for our announcements at GDC 2020 for more on how we're working to make Android game development better, so you can bring the best game experience to billions of devices.

Kiwis’ Top Trending YouTube Videos Revealed for 2019

It's time to hit #YouTubeRewind and check out the top trending videos for 2019.

In 2019 we revelled in the return of Sam Smith, the scandalous revelations of James Charles and recreations of Billie Eilish. We bopped to ‘Bad Guy’ and attempted to learn how to create the video clip at home!

Let’s dive into our annual look back at the year that was in online video, and reflect on the moments that captured the hearts and minds of Kiwis in 2019.

Kardashians, conspiracies and spicy wings reviews all made it into our top trending videos. Well-known creators dominated our watchlist, with the confessions and investigations of James Charles and Shane Dawson, while Gordon Ramsay and the infamous ‘twenty bucks’ Karen brought us entertainment with a side of colourful language.

The spectacular leaping acrobatic display by Katelyn Ohashi clearly captured us, as did the opportunity to jump on a bandwagon to “Make this video the most liked video on YouTube” (sorry guys, no luck this time).

Breakout artists of the year Billie Eilish and Camila Cabello made our top list, but we are still clearly obsessed with Ariana Grande (with two of her tracks in the top 10!). And if you claim you haven’t watched or listened to Old Town Road - you’re a liar.

Some may have inspired a belly laugh, while others a breakdown. These are the videos that had Kiwis laughing, leering and losing it in 2019.

New Zealand’s Top Trending Videos

  1. No More Lies
  2. Katelyn Ohashi - 10.0 Floor (1-12-19)
  3. Gordon Ramsay Savagely Critiques Spicy Wings | Hot Ones
  4. Conspiracy Theories with Shane Dawson
  5. Make This Video The Most Liked Video On Youtube
  6. 73 Questions With Kim Kardashian West (ft. Kanye West) | Vogue
  7. how to create billie eilish's "bad guy"
  8. New Zealand Today - Karen wants her $20 back.
  9. Minecraft Part 1
  10. Gangsters in Paradise - The Deportees of Tonga

New Zealand’s Top Trending Music Videos

While some artists returned for a second year in a row, we also saw breakout stars Billie Eilish and Camila Cabello feature this year. But the great return of Sam Smith dominated.

  1. Sam Smith, Normani - Dancing With A Stranger
  2. Billie Eilish - bad guy
  3. Shawn Mendes, Camila Cabello - Señorita
  4. Lil Nas X - Old Town Road (Official Movie) ft. Billy Ray Cyrus
  5. Ariana Grande - 7 rings
  6. Khalid - Talk (Official Video)
  7. Lil Dicky - Earth (Official Music Video)
  8. Khalid, Kane Brown - Saturday Nights REMIX (Official Video)
  9. Cardi B & Bruno Mars - Please Me (Official Video)
  10. Ariana Grande - break up with your girlfriend, i'm bored

As 2020 draws near, we also take this moment to celebrate YouTube with the annual Rewind mashup. This year, we tried something different and looked at what you did like — a lot. Our Rewind 2019 video compiles the top videos and creators that you liked, shared, and watched the most around the world, from the biggest games to must-try beauty tutorials and breakout stars.

Check out the full video below and head over to our Rewind site for more!




Beta Channel Update for Chrome OS

The Beta channel has been updated to 79.0.3945.66 (Platform version: 12607.47.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Cindy Bayless

Google Chrome OS