Last call for Google’s Computer Science Summer Institute applications

Applications for Google’s Computer Science Summer Institute (CSSI) and the Generation Google Scholarship close on Monday, Mar 18. Submit your application today!


Ever wonder what it’s like to be a CSSIer? Meet Jonathan James Mshelia, Tarik Brown, and Kaycee Tate — three CSSI students from this past summer here to share their CSSI experiences and give any CSSI/Generation Google applicant (we’re lookin’ at you!) a better idea of what’s to come.
Jonathan is currently a junior at Medgar Evers College. He grew up in Nigeria and moved to America to pursue an education in computer science. When he’s not glued to the computer screen, he’s usually hanging out with friends or learning a new language.
Tarik is now a freshman at the University of Notre Dame (Go Irish!) where he intends to double major in Computer Science and Economics. He has a deep love for jazz and the world of technology (especially robotics). Tarik lost hearing in his right ear when he was young and explains that he is “quite grateful for this disability because it made me into the dedicated and motivated person that I am today.”
Kaycee grew up in Alabama and is currently attending Xavier University of Louisiana. If she’s not studying for her double major (Computer Science and Computer Engineering) or working on campus, she enjoys quiet time reading, coloring, or researching things that interest her — currently it’s investing and digital currency.

What motivated you to apply to CSSI?


Kaycee: I knew that I wanted to expand my mind as much as possible before getting to college so that I was prepared — not only in actual programming and coding skills, but also in the ability to think creatively and share my perspective in innovative ways. I definitely believe that the CSSI experience gave me a chance to do that.

Tarik: From as early as I can remember, I was always interested in how things worked. This inclination to enjoy knowing the inner workings of everything that I worked with steered me into the direction of the tech world and introduced me to Computer Science.

Jonathan: My passion for computers and my attitude toward learning were the driving forces behind my choice to join the Google CSSI program. Before CSSI, I had only tried to learn the syntax of a programming language and did not necessarily know how to apply what I learned to build anything. During the program, I put those languages to proper use and began to see them differently.


What do you wish you’d known before you arrived at Google for CSSI?

Tarik: A valuable lesson that I learned from Google’s Computer Science Summer Institute was that I am much more capable than I give myself credit for. Imposter syndrome is real and it affects many people. I wish I had known that there was no reason to doubt myself; CSSI definitely gave me a more positive outlook on my abilities and self-worth.

Jonathan: Upon getting started at Google CSSI I had some experience with a few front end technologies and that gave me the ability to learn more and improve my skills. The learning experience was fun so I never thought about wishing I knew more than I already knew.

Kaycee: Before going to CSSI, I wish I had truly understood that it didn’t matter how much you knew about computer science and programming prior to the experience. Of course the FAQs and application mentioned that, but truly processing that and just hearing it are two different things.


Can you tell us how the CSSI experience has impacted you?

Jonathan: The CSSI experience opened my eyes to the possibilities technology has to offer. I now understand the internal workings of the web — how the front end and back end work hand in hand to deliver a fully functioning website. This helped me at college because I was able to accomplish more in terms of applying my knowledge to school work.

Kaycee: CSSI isn’t just about computer science — I feel like CSSI promotes the idea that to be good in anything you do, you first have to know yourself, what you’re striving for, and what you want to get out of every experience you are able to partake in.

Tarik: The knowledge I gained from CSSI was truly invaluable. We delved into the world of web development and received instruction on front-end and back-end development. We ran the gauntlet when it came to learning multiple programming languages, picking up HTML/CSS, JavaScript and Python — all essential tools in web development. We also learned how to use Google App Engine, which actually runs well-known applications such as Snapchat. With this we were able to create our own web application from scratch, and it was truly an amazing experience. Not only did I gain a wealth of technical skills, I also acquired essential soft skills, such as collaborating in small teams and explaining my work to others. We learned how to tactfully use version control with GitHub and focused on team-based work. In the end, we presented our projects to the entire office of Google software engineers.

Reminder: applications for Google’s Computer Science Summer Institute (CSSI) and the Generation Google Scholarship close on Monday, Mar 18. Submit your application today!

Grow your games business with ads

There’s so much that goes into building a great mobile game. Building a thriving business on top of it? That’s next level. Today, we’re announcing new solutions to increase the lifetime value of your players. Now, it’s easier than ever to re-engage your audience and take advantage of a new, smarter approach to monetization.

Help inactive players rediscover your game

Let's face it, the majority of players you acquire aren't going to continue engaging with your game after just a handful of days. One of the biggest opportunities you have to grow your business is to get those inactive players to come back and play again.

We’re introducing App campaigns for engagement in Google Ads to help players rediscover your game by engaging them with relevant ads across Google’s properties. With App campaigns for engagement, you can reconnect with players in many different ways, such as encouraging lapsed players to complete the tutorial, introducing new features that have been added since a player’s last session, or getting someone to open the game for the first time on Android (which only Google can help with).

Learn more about it here or talk to your Google account representative if you’re interested in trying it out.


Generate revenue from non-spending players

Acquiring and retaining users is important, but retention alone doesn’t generate revenue.  Our internal data shows that, on average, less than four percent of players will ever spend on in-app items. One way to increase overall revenue is through ads. However, some developers worry that ads might hurt in-app purchase revenue by disrupting gameplay for players who do spend. What if you could just show ads to the players who aren't going to spend in your app? Good news—now you can.

We’re bringing a new approach to monetization that combines ads and in-app purchases in one automated solution. Available today, new smart segmentation features in Google AdMob use machine learning to segment your players based on their likelihood to spend on in-app purchases.

Ad units with smart segmentation will show ads only to users who are predicted not to spend on in-app purchases. Players who are predicted to spend will see no ads, and can simply continue playing.  Check it out by creating an interstitial ad unit with smart segmentation enabled.


To learn more about new ways to help you increase the lifetime value of your players, please join us at the Game Developers Conference. Location and details are below:


What: Google Ads Keynote
Where: Moscone West, room #2020
When: Wednesday, March 20 at 12:30 PM


I'm excited for the week ahead and all the new games you’re building—I’m always on the lookout for my next favorite.


Source: Google Ads


Your mission, gumshoe: Catch Carmen Sandiego in Google Earth

I distinctly remember being tucked into the couch, computer on and ready for the chase. With my assignment from ACME (first stop: Paris) I traveled from Singapore to Tokyo to Kathmandu chasing VILE villains, always on the lookout for that iconic scarlet coat and fedora.

Like many of my friends, I spent much of my time in the ‘90s obsessing over “Where in the World Is Carmen Sandiego?”—the games, the cartoon and the classic game show. I can remember Carmen Sandiego teaching me the currency of Hungary (forint), the capital of Iraq (Baghdad), and dozens of country flags—Argentina’s blue and white, Germany’s black, red and gold.

But Carmen Sandiego was more than just fun facts for children and adults alike. The globe-trotting game taught me the world was bigger than my couch, and got me excited to learn about new cultures and customs. That curiosity has taken me to more than 30 countries. (Carmen’s also responsible for a theme song that has been stuck in my head for decades.)

Where on Google Earth is Carmen Sandiego?

To celebrate the global explorer in all of us, today we’re introducing The Crown Jewels Caper, the first in a series of Carmen Sandiego games in Google Earth. Created in collaboration with Houghton Mifflin Harcourt, the home of Carmen Sandiego, our game is an homage to the original. It’s for all those gumshoes who grew up with the chase, and for the next generation feeling that geography itch for the first time.


To get your assignment, look for the special edition Pegman icon in Google Earth for Chrome, Android and iOS. Good luck, super sleuths!

Grow your indie game with Google Play

Posted by Patricia Correa, Director, Platforms & Ecosystems Developer Marketing

Google Play empowers game developers of all sizes to engage and delight people everywhere, and build successful businesses too. We are inspired by the passion and creativity we see from the indie games community, and, over the past few years, we've invested in and nurtured indie games developers around the world, helping them express their unique voice and bring ideas to life.

This year, we've put together several initiatives to help the indie community.

Indie Games Showcase

For indie developers who are constantly pushing the boundaries of storytelling, visual excellence, and creativity in mobile, today we are announcing the Indie Games Showcase, an international competition for games studios from Europe*, South Korea and Japan. Those of you who meet the eligibility criteria (as outlined below) can enter your game for a chance to win several prizes, including:

  • A paid trip and accommodation to the final event in your region to showcase your game.
  • Promotion on the Google Play Store.
  • Promotion on Android and Google Play marketing channels.
  • Dedicated consultations with the Google Play team.
  • Google hardware.
  • And more...

How to enter the competition

If you're over 18 years old, based in one of the eligible countries, have 30 or fewer full-time employees, and have published a new game on Google Play after 1 January 2018, you can enter your game. If you're planning on publishing a new game soon, you can also enter by submitting a private beta. Submissions close on May 6, 2019. Check out all the details in the terms and conditions for each region. Enter now!

Indie Games Accelerator

Last year we launched our first games accelerator for developers in Southeast Asia, India and Pakistan and saw great results. We are happy to announce that we are expanding the format to accept developers from select countries in the Middle East, Africa, and Latin America, with applications for the 2019 cohort opening soon. The Indie Games Accelerator is a six-month intensive program for top games startups, powered by mentors from the gaming industry as well as Google experts, offering a comprehensive curriculum that covers all aspects of building a great game and company.

Mobile Developer Day at GDC

We will be hosting our annual Developer Day at the Game Developers Conference in San Francisco on Monday, March 18th. Join us for a full day of sessions covering tools and best practices to help build a successful mobile games business. We'll focus on game quality, effective monetization and growth strategies, and how to create, connect, and scale with Google. Sign up to stay up to date or join us via livestream.

Developer Days

We also want to engage with you in person with a series of events. We will be announcing them shortly, so please make sure to sign up to our newsletter to get notified about events and programs for indie developers.

Academy for App Success

Looking for tips on how to use various developer tools in the Play Console? Get free training through our e-learning program, the Academy for App Success. We even have a custom Play Console for game developers course to get a jump start on Google Play.

We look forward to seeing your amazing work and sharing your creativity with other developers, gamers and industry experts around the world. And don't forget to submit your game for a chance to get featured on Indie Corner on Google Play.

* The competition is open to developers from the following European countries: Austria, Belgium, Belarus, Czech Republic, Denmark, Finland, France, Germany, Israel, Italy, Netherlands, Norway, Poland, Romania, Russia, Slovakia, Spain, Sweden, Ukraine, and the United Kingdom (including Northern Ireland).


How useful did you find this blog post?

Chrome for Android Update

Hi, everyone! We've just released Chrome 73 (73.0.3683.75) for Android: it'll become available on Google Play over the next few weeks.

This release contains the following features, as well as stability and performance improvements:
  • Offline Content on the Dino Page: easily browse suggested articles while offline
  • Lite pages: get optimized pages that save data and load faster
You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Ben Mason
Google Chrome

Google Drive is getting a new look on iOS and Android

What’s changing  

Google Drive is getting a new look and feel on iOS and Android, making it easier to communicate and collaborate across files in Drive on mobile devices.



This Material redesign is part of a larger effort to bring the look and feel of our G Suite apps together as a whole, with ease-of-use in mind.

Some improvements you’ll see include:
  • New Home tab and bottom navigation
    • Similar to Drive on the web, the Home tab will surface the files that are most important to you, based on things like:
      • The last time you accessed or edited a file
      • Who specific files are frequently shared with
      • What files are used at specific times of day
    • A more intuitive bottom navigation bar features options to switch between Home, Starred, files shared with you (Shared), and all files (Files), allowing for quicker access to your most important items.
  • Expanded search bar
    • The search bar is now more accessible across the application, including from the Team Drives page.
  • My Drive, Team Drives and Computers in Files view
    • Team Drives will now be displayed as a tab next to My Drive in the Files view. Users will also see a Computers tab if they have backed up content from a local machine to their account.
  • New account switching experience
    • The feature to switch accounts is moving from the left navigation menu to an icon in the top right.
  • Revised actions menu
    • A revised actions menu attached to every file and folder emphasizes the most frequently used actions at the top. Toggles for starred and offline have been changed to buttons.

Who’s impacted

End users

Why you’d use it

We know that mobile devices are critical to getting work done, whether it’s at our desk, in a meeting, sending an email, or collaborating. Drive is not just a way to back up files to the cloud, but a critical way to easily share work, make last-minute changes to content, or review important content on the go. The Drive mobile redesign aims to make these workflows easier.

How to get started

• Admins: No action required.
• End users: You’ll see the new look coming your way soon.

Additional details

iOS users will begin seeing the redesign starting on March 12, 2019. Android users will see the redesign starting on March 18, 2019.

To help your users navigate this redesign, see this change management guide or download this PDF.

Helpful links

View the change management guide for this update. Also available as a PDF.
Using Google Drive on Android
Using Google Drive on iOS

Availability

Rollout details
• iOS: Gradual rollout (up to 15 days for feature visibility) starting March 12, 2019.
• Android: Gradual rollout (up to 15 days for feature visibility) starting March 18, 2019.

G Suite editions
Available to all G Suite editions.

On/off by default?
This feature will be ON by default.

Stay up to date with G Suite launches

        With Lookout, discover your surroundings with the help of AI

Whether it’s helping to detect cancer cells or drive our cars, artificial intelligence is playing an increasingly large role in our lives. With Lookout, our goal is to use AI to provide more independence to the nearly 253 million people in the world who are blind or visually impaired.

        Now available to people with Pixel devices in the U.S. (in English only), Lookout helps those who are blind or have low vision identify information about their surroundings. It draws upon similar underlying technology as Google Lens, which lets you search and take action on the objects around you, simply by pointing your phone. Since we announced Lookout at Google I/O last year, we’ve been working on testing and improving the quality of the app’s results.

        We designed Lookout to work in situations where people might typically have to ask for help—like learning about a new space for the first time, reading text and documents, and completing daily routines such as cooking, cleaning and shopping. By holding or wearing your device (we recommend hanging your Pixel phone from a lanyard around your neck or placing it in a shirt front pocket), Lookout tells you about people, text, objects and much more as you move through a space. Once you’ve opened the Lookout app, all you have to do is keep your phone pointed forward. You won’t have to tap through any further buttons within the app, so you can focus on what you're doing in the moment.


Screenshots of Lookout’s modes, including “Explore,” “Shopping,” and “Quick read,” and of Lookout detecting a dog in the camera frame.

        As with any new technology, Lookout will not always be 100 percent perfect. Lookout detects items in the scene and takes a best guess at what they are, reporting this to you. We’re very interested in hearing your feedback and learning about times when Lookout works well (and not so well) as we continue to improve the app. Send us feedback by contacting the Disability Support team at g.co/disabilitysupport.

        We hope to bring Lookout to more devices, countries and platforms soon. People with a Pixel device in the US can download Lookout on Google Play today. To learn more about how Lookout works, visit the Help Center.

        Stable Channel Update for Desktop

        The Chrome team is delighted to announce the promotion of Chrome 73 to the stable channel for Windows, Mac and Linux. This will roll out over the coming days/weeks.

        Chrome 73.0.3683.75 contains a number of fixes and improvements -- a list of changes is available in the log. Watch out for upcoming Chrome and Chromium blog posts about new features and big efforts delivered in 73.

        Security Fixes and Rewards
        Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.

        This update includes 60 security fixes. Below, we highlight fixes that were contributed by external researchers. Please see the Chrome Security Page for more information.

        [$TBD][913964] High CVE-2019-5787: Use after free in Canvas. Reported by Zhe Jin(金哲),Luyao Liu(刘路遥) from Chengdu Security Response Center of Qihoo 360 Technology Co. Ltd on 2018-12-11
        [$N/A][925864] High CVE-2019-5788: Use after free in FileAPI. Reported by Mark Brand of Google Project Zero on 2019-01-28
        [$N/A][921581] High CVE-2019-5789: Use after free in WebMIDI. Reported by Mark Brand of Google Project Zero on 2019-01-14
        [$7500][914736] High CVE-2019-5790: Heap buffer overflow in V8. Reported by Dimitri Fourny (Blue Frost Security) on 2018-12-13
        [$1000][926651] High CVE-2019-5791: Type confusion in V8. Reported by Choongwoo Han of Naver Corporation on 2019-01-30
        [$500][914983] High CVE-2019-5792: Integer overflow in PDFium. Reported by pdknsk on 2018-12-13
        [$TBD][937487] Medium CVE-2019-5793: Excessive permissions for private API in Extensions. Reported by Jun Kokatsu, Microsoft Browser Vulnerability Research on 2019-03-01
        [$TBD][935175] Medium CVE-2019-5794: Security UI spoofing. Reported by Juno Im of Theori on 2019-02-24
        [$N/A][919643] Medium CVE-2019-5795: Integer overflow in PDFium. Reported by pdknsk on 2019-01-07
        [$N/A][918861] Medium CVE-2019-5796: Race condition in Extensions. Reported by Mark Brand of Google Project Zero on 2019-01-03
        [$N/A][916523] Medium CVE-2019-5797: Race condition in DOMStorage. Reported by Mark Brand of Google Project Zero on 2018-12-19
        [$N/A][883596] Medium CVE-2019-5798: Out of bounds read in Skia. Reported by Tran Tien Hung (@hungtt28) of Viettel Cyber Security on 2018-09-13
        [$1000][905301] Medium CVE-2019-5799: CSP bypass with blob URL. Reported by sohalt on 2018-11-14
        [$1000][894228] Medium CVE-2019-5800: CSP bypass with blob URL. Reported by Jun Kokatsu (@shhnjk) on 2018-10-10
        [$500][921390] Medium CVE-2019-5801: Incorrect Omnibox display on iOS. Reported by Khalil Zhani on 2019-01-13
        [$500][632514] Medium CVE-2019-5802: Security UI spoofing. Reported by Ronni Skansing on 2016-07-28
[$1000][909865] Low CVE-2019-5803: CSP bypass with JavaScript URLs. Reported by Andrew Comminos of Facebook on 2018-11-28
        [$500][933004] Low CVE-2019-5804: Command line command injection on Windows. Reported by Joshua Graham of TSS on 2019-02-17


        We would also like to thank all security researchers that worked with us during the development cycle to prevent security bugs from ever reaching the stable channel.

        As usual, our ongoing internal security work was responsible for a wide range of fixes:
        • [940992] Various fixes from internal audits, fuzzing and other initiatives
        Many of our security bugs are detected using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL.

        Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

        Thank you,
        Abdul Syed

        YouTube Music Debuts in South Africa: It’s All Here

        From the “Global Citizen Festival: Mandela 100” live stream featuring Beyonce to landmark videos like “Despacito,” “New Rules” and “This Is America,” people come to YouTube to be part of music culture and discover new music.


But YouTube was made for watching, which has meant fans had to jump back and forth between multiple music apps and YouTube. Those days will soon be over. Today, we’re excited to bring YouTube Music to South Africa.

YouTube Music is a new music streaming service made for music listening, built on top of the magic of YouTube: making the world of music easier to explore and more personalized than ever. Whether you want to listen, watch or discover, all the ways music moves you can be found in one place.




        Here are six reasons we think you’re gonna like YouTube Music:
        1. It’s ALL here. Not just music videos, but official albums, singles, remixes, live performances, covers and hard-to-find music you can only get on YouTube.
        2. Recommendations built for you. A home screen that dynamically adapts to provide recommendations based on what you’ve played before, where you are and what you’re doing. At the gym workin’ on that fitness? Escaping during your commute? The right music is right here, built just for you.
        3. Thousands of playlists across any genre, mood or activity. That means no matter what kind of music you like, where you are, what you’re doing, or what mood you’re in, you can easily find the right playlist for that moment. Try “House hotlist” to discover the best in house music or “SA’s hip hop hotlist” to get the heart rate going.
        4. Smart search so we’ll find the song, even if you can’t remember what it’s called. Try “that South African fainting song” or “the song with clicking” - We got you. You can also search by lyrics (even if they’re wrong). It’s “Starbucks lovers,” right?
        5. The hottest new music. We’ll keep you on top of what’s hot! The hottest videos in the country right now are right there, on their own dedicated Hotlist screen. And our popular YouTube Charts like Top 100 Songs South Africa are available in YouTube Music as playlists so you can keep up with the latest and greatest.
        6. No internet? No problem. Get YouTube Music Premium to listen ad-free, in the background and on-the-go with downloads. Plus, your Offline Mixtape automatically downloads songs and videos you love just in case you forgot to.

While fans can enjoy the new ad-supported version of YouTube Music for free, we’re also launching YouTube Music Premium, a paid membership that gives you background listening, downloads and an ad-free experience for R59.99 a month. And for a limited time, music fans can get three months of YouTube Music Premium free here (R59.99 per month after; R89.99 per month for a Family Plan).





        YouTube Premium also launches today
Starting today, you can also upgrade to YouTube Premium, which gives members the benefits of Music Premium, plus ad-free viewing, background play and downloads across all of YouTube. Try YouTube Premium free for three months here (R71.99 per month after; R109.99 per month for a Family Plan).

        Google Play Music subscribers will automatically receive access to YouTube Music Premium at their current price. Nothing is changing with Google Play Music - you'll still be able to access all of your purchased music, uploads and playlists in Google Play Music just like always.


Get the new YouTube Music from the Play Store and App Store today or check out the web player at music.youtube.com. You can sign up for YouTube Premium at youtube.com/premium.


        Posted by the YouTube Music Team





        An All-Neural On-Device Speech Recognizer



        In 2012, speech recognition research showed significant accuracy improvements with deep learning, leading to early adoption in products such as Google's Voice Search. It was the beginning of a revolution in the field: each year, new architectures were developed that further increased quality, from deep neural networks (DNNs) to recurrent neural networks (RNNs), long short-term memory networks (LSTMs), convolutional networks (CNNs), and more. During this time, latency remained a prime focus — an automated assistant feels a lot more helpful when it responds quickly to requests.

        Today, we're happy to announce the rollout of an end-to-end, all-neural, on-device speech recognizer to power speech input in Gboard. In our recent paper, "Streaming End-to-End Speech Recognition for Mobile Devices", we present a model trained using RNN transducer (RNN-T) technology that is compact enough to reside on a phone. This means no more network latency or spottiness — the new recognizer is always available, even when you are offline. The model works at the character level, so that as you speak, it outputs words character-by-character, just as if someone was typing out what you say in real-time, and exactly as you'd expect from a keyboard dictation system.
        This video compares the production, server-side speech recognizer (left panel) to the new on-device recognizer (right panel) when recognizing the same spoken sentence. Video credit: Akshay Kannan and Elnaz Sarbar
        A Bit of History
Traditionally, speech recognition systems consisted of several components: an acoustic model that maps segments of audio (typically 10 millisecond frames) to phonemes, a pronunciation model that connects phonemes together to form words, and a language model that expresses the likelihood of given phrases. In early systems, these components were optimized independently.

        Around 2014, researchers began to focus on training a single neural network to directly map an input audio waveform to an output sentence. This sequence-to-sequence approach to learning a model by generating a sequence of words or graphemes given a sequence of audio features led to the development of "attention-based" and "listen-attend-spell" models. While these models showed great promise in terms of accuracy, they typically work by reviewing the entire input sequence, and do not allow streaming outputs as the input comes in, a necessary feature for real-time voice transcription.

        Meanwhile, an independent technique called connectionist temporal classification (CTC) had helped halve the latency of the production recognizer at that time. This proved to be an important step in creating the RNN-T architecture adopted in this latest release, which can be seen as a generalization of CTC.

        Recurrent Neural Network Transducers
        RNN-Ts are a form of sequence-to-sequence models that do not employ attention mechanisms. Unlike most sequence-to-sequence models, which typically need to process the entire input sequence (the waveform in our case) to produce an output (the sentence), the RNN-T continuously processes input samples and streams output symbols, a property that is welcome for speech dictation. In our implementation, the output symbols are the characters of the alphabet. The RNN-T recognizer outputs characters one-by-one, as you speak, with white spaces where appropriate. It does this with a feedback loop that feeds symbols predicted by the model back into it to predict the next symbols, as described in the figure below.
Representation of an RNN-T, with the input audio samples, x, and the predicted symbols y. The predicted symbols (outputs of the Softmax layer) are fed back into the model through the Prediction network, as y_(u-1), ensuring that the predictions are conditioned both on the audio samples so far and on past outputs. The Prediction and Encoder Networks are LSTM RNNs, and the Joint model is a feedforward network (paper). The Prediction Network comprises 2 layers of 2048 units, with a 640-dimensional projection layer. The Encoder Network comprises 8 such layers. Image credit: Chris Thornton
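To make that feedback loop concrete, here is a minimal, illustrative sketch of greedy RNN-T decoding in Python. It is not Google's implementation: the small random matrices stand in for the trained Encoder, Prediction and Joint networks (LSTMs and a feedforward network in the actual model), and this toy prediction step conditions only on the single most recent symbol rather than the full output history.

```python
# Minimal sketch of greedy RNN-T decoding (illustrative only, not the production recognizer).
# Random matrices stand in for the trained Encoder, Prediction and Joint networks; the point
# is the control flow: emit characters frame by frame and feed each one back into the model.
import numpy as np

np.random.seed(0)
VOCAB = ["<blank>"] + list("abcdefghijklmnopqrstuvwxyz ")  # output symbols: blank + characters
BLANK = 0
ENC_DIM, PRED_DIM = 16, 16

W_enc = 0.1 * np.random.randn(40, ENC_DIM)              # 40-dim acoustic features -> encoder state
W_pred = 0.1 * np.random.randn(len(VOCAB), PRED_DIM)    # previous symbol -> prediction state
W_joint = 0.1 * np.random.randn(ENC_DIM + PRED_DIM, len(VOCAB))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def greedy_decode(audio_frames, max_symbols_per_frame=3):
    """Each prediction is conditioned on the audio seen so far (encoder state)
    and on the previously emitted symbol (prediction state)."""
    hypothesis, prev_symbol = [], BLANK
    for frame in audio_frames:                           # streaming over time
        enc_state = np.tanh(frame @ W_enc)
        for _ in range(max_symbols_per_frame):
            pred_state = np.tanh(W_pred[prev_symbol])
            logits = np.concatenate([enc_state, pred_state]) @ W_joint
            symbol = int(np.argmax(softmax(logits)))
            if symbol == BLANK:                          # nothing more to emit for this frame
                break
            hypothesis.append(VOCAB[symbol])             # emit the character immediately
            prev_symbol = symbol                         # feedback loop into the next prediction
    return "".join(hypothesis)

# Fifty dummy 10 ms frames of 40-dimensional features.
print(greedy_decode(np.random.randn(50, 40)))
```

With trained weights, this loop is what lets the recognizer print words character by character as you speak, instead of waiting for the whole utterance to finish.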
        Training such models efficiently was already difficult, but with our development of a new training technique that further reduced the word error rate by 5%, it became even more computationally intensive. To deal with this, we developed a parallel implementation so the RNN-T loss function could run efficiently in large batches on Google's high-performance Cloud TPU v2 hardware. This yielded an approximate 3x speedup in training.

        Offline Recognition
        In a traditional speech recognition engine, the acoustic, pronunciation, and language models we described above are "composed" together into a large search graph whose edges are labeled with the speech units and their probabilities. When a speech waveform is presented to the recognizer, a "decoder" searches this graph for the path of highest likelihood, given the input signal, and reads out the word sequence that path takes. Typically, the decoder assumes a Finite State Transducer (FST) representation of the underlying models. Yet, despite sophisticated decoding techniques, the search graph remains quite large, almost 2GB for our production models. Since this is not something that could be hosted easily on a mobile phone, this method requires online connectivity to work properly.

        To improve the usefulness of speech recognition, we sought to avoid the latency and inherent unreliability of communication networks by hosting the new models directly on device. As such, our end-to-end approach does not need a search over a large decoder graph. Instead, decoding consists of a beam search through a single neural network. The RNN-T we trained offers the same accuracy as the traditional server-based models but is only 450MB, essentially making a smarter use of parameters and packing information more densely. However, even on today's smartphones, 450MB is a lot, and propagating signals through such a large network can be slow.

        We further reduced the model size by using the parameter quantization and hybrid kernel techniques we developed in 2016 and made publicly available through the model optimization toolkit in the TensorFlow Lite library. Model quantization delivered a 4x compression with respect to the trained floating point models and a 4x speedup at run-time, enabling our RNN-T to run faster than real time speech on a single core. After compression, the final model is 80MB.
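As a rough sketch of what post-training quantization looks like with the TensorFlow Lite tooling mentioned above (this is not the production conversion pipeline, and the toy Keras model is only a placeholder for a real trained recognizer), the default dynamic-range path quantizes float32 weights to 8-bit integers while keeping float activations, which is what the "hybrid" kernels refer to:

```python
# Illustrative post-training ("hybrid"/dynamic-range) quantization with the TensorFlow Lite
# converter. The tiny Keras model below is just a placeholder for a trained speech model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(40,)),
    tf.keras.layers.Dense(29),   # e.g. characters + blank
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Optimize.DEFAULT quantizes the float32 weights to 8-bit integers while activations stay
# in float ("hybrid" kernels), typically shrinking the model by about 4x.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("recognizer_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```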

        Our new all-neural, on-device Gboard speech recognizer is initially being launched to all Pixel phones in American English only. Given the trends in the industry, with the convergence of specialized hardware and algorithmic improvements, we are hopeful that the techniques presented here can soon be adopted in more languages and across broader domains of application.

        Acknowledgements:
        Raziel Alvarez, Michiel Bacchiani, Tom Bagby, Françoise Beaufays, Deepti Bhatia, Shuo-yiin Chang, Yanzhang He, Alex Gruenstein, Anjuli Kannan, Bo Li, Qiao Liang, Ian McGraw, Ruoming Pang, Rohit Prabhavalkar, Golan Pundak, Kanishka Rao, David Rybach, Tara Sainath, Haşim Sak, June Yuan Shangguan, Matt Shannon, Mohammadinamul Sheik, Khe Chai Sim, Gabor Simko, Trevor Strohman, Mirkó Visontai, Yonghui Wu, Ding Zhao, Dan Zivkovic.

        Source: Google AI Blog