Monthly Archives: January 2019

YouTube Music Unveils its First-Ever ‘Artists to Watch’ in Australia and New Zealand

Singers such as Kaiit, Bene and Didirri, and rappers including the Triple One crew and Kwame, feature in the Top 10 Australian and New Zealand acts predicted to break through in the next 12 months.


YouTube Music today released its top 10 Artists To Watch in Australia and New Zealand for 2019. Some are already building impressive reputations locally but we believe all of them have the talent to go significantly further this year, both at home and overseas.
In alphabetical order, the 10 Artists To Watch from Australia and New Zealand in 2019 are:

  • Bene (Auckland, New Zealand) 
  • Didirri (Melbourne, Australia) 
  • G Flip (Melbourne, Australia) - pictured above 
  • JessB (Auckland, New Zealand) 
  • Kaiit (Melbourne, Australia) 
  • Kian (Castlemaine, Australia) 
  • Kwame (Sydney, Australia, via Auckland, New Zealand) 
  • The Kid Laroi (Sydney, Australia) 
  • Triple One (Sydney, Australia) 
  • Tyne-James Organ (Melbourne via Sydney, Australia) 


YouTube Music compiled its Artists To Watch for 2019 using factors including YouTube views, engagement from global music fans and YouTube Music analytics.
Melbourne’s all-singing, all-drumming indie firecracker G Flip, real name Georgia Flipo, says she is “stoked” to be one of YouTube Music’s Artists To Watch.
“I think it’s so cool they are championing breaking artists and I'm honoured to be chosen as one of them,” she said.
Her fellow Melbournian, sunny troubadour Didirri, describes his inclusion as “an absolute honour”. 
“I think it’s time we bring some positivity back into the limelight this year,” Didirri adds. “Hoping to bring a bunch of us together and share the music.”
That would include Sydney hip-hop trio Triple One, as they brace themselves for a huge 12 months, saying, “We've come into the new year with something to prove. 2019 will be our biggest year yet and our biggest evolution as a group.”
And when soulful Kiwi Bene heard she had made YouTube Music’s Artists To Watch list, she said, “So much luv, can't wait to show you more of ma shtuff.”
Burgeoning singer-songwriters Kaiit, Kian and Tyne-James Organ, and talented rappers JessB, Kwame and The Kid Laroi round out the chosen ones.
Head to YouTube Music to further explore these Artists To Watch and enjoy a brand-new YouTube Music playlist featuring our top 10 artists and the longlist, as well as an in-app spotlight featuring audio and video content.

Soft Actor-Critic: Deep Reinforcement Learning for Robotics



Deep reinforcement learning (RL) provides the promise of fully automated learning of robotic behaviors directly from experience and interaction in the real world, due to its ability to process complex sensory input using general-purpose neural network representations. However, many existing RL algorithms require days' or weeks' (or more) worth of real-world data in order to converge to the desired behavior. Furthermore, such systems can be tough to deploy on complex robotic platforms (such as legged robots), which can easily get damaged during the exploration phase; hyperparameter settings can be challenging to tune; and various safety considerations can introduce further limitations.

In collaboration with UC Berkeley, we recently released Soft Actor-Critic (SAC), a stable and efficient deep RL algorithm suitable for real-world robotic skill learning that is well-aligned with the requirements of robotic experimentation. Importantly, SAC is efficient enough to solve real-world robot tasks in only a handful of hours, and works on a variety of environments with a single set of hyperparameters. Below, we discuss some of the research behind SAC, and also describe some of our recent experiments.

Requirements for Real-World Robotic Learning
Real-world robotic experimentation brings significant challenges, such as constant interruptions in the data stream due to hardware failures and manual resets, and the need for smooth exploration to avoid mechanical wear and tear on the robot. These constraints place additional restrictions on both the algorithm and its implementation, including (but not limited to):
  • Good sample efficiency to lower the learning time
  • Minimal number of hyperparameters that require tuning
  • Reusing already-collected data across different scenarios (known as off-policy learning)
  • Ensuring that learning and exploration does not damage the hardware
Soft Actor-Critic
Soft actor-critic is based on maximum entropy reinforcement learning, a framework that aims to maximize both the expected reward (the standard RL objective) and the policy's entropy. Policies with higher entropy are more random, which intuitively means that maximum entropy reinforcement learning prefers the most random policy that still achieves a high reward.
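
Concretely, the maximum entropy objective augments the usual sum of expected rewards with an entropy bonus at every step (a standard formulation, with temperature \alpha):

    J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]

Here \rho_\pi is the state-action distribution induced by the policy \pi, \mathcal{H}(\pi(\cdot \mid s_t)) = -\mathbb{E}_{a \sim \pi}[\log \pi(a \mid s_t)] is the policy's entropy at state s_t, and the temperature \alpha \ge 0 trades off reward maximization against randomness; setting \alpha = 0 recovers the standard RL objective.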

Why might this be desirable for robotic learning? The most obvious reason is that policies optimized for maximum entropy will be more robust: if the policy can tolerate highly random behavior during training, it is more likely to respond successfully to unexpected perturbations at test time. However, a more subtle reason is that training for maximum entropy can improve both the algorithm's robustness to hyperparameters and its sample efficiency (to learn more, see this BAIR blog post, and this tutorial).

Soft actor-critic maximizes the entropy-augmented reward by learning a stochastic policy that maps states to actions and a Q-function that estimates the objective value of the current policy, optimizing both using approximate dynamic programming. In doing so, SAC views the maximum entropy objective as a grounded way to derive better reinforcement learning algorithms that perform consistently and are sample-efficient enough to be applicable to real-world robotic applications. For technical details, please see our technical report.
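
To make the structure concrete, here is a rough sketch of one SAC update step written in PyTorch. This is an illustrative simplification only (a single Q-function and a fixed temperature; the network sizes and names are hypothetical), not the released implementation:

    # Minimal, illustrative sketch of one Soft Actor-Critic update step.
    # NOT the released implementation; all names and shapes are placeholders.
    import torch
    import torch.nn as nn

    obs_dim, act_dim, alpha, gamma = 8, 2, 0.2, 0.99  # hypothetical sizes / temperature

    # Stand-in networks: a Gaussian policy and a Q-function (plus a target copy).
    policy_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 2 * act_dim))
    q_net = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    q_target = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    q_target.load_state_dict(q_net.state_dict())

    policy_opt = torch.optim.Adam(policy_net.parameters(), lr=3e-4)
    q_opt = torch.optim.Adam(q_net.parameters(), lr=3e-4)

    def sample_action(obs):
        """Sample a tanh-squashed Gaussian action and its log-probability."""
        mean, log_std = policy_net(obs).chunk(2, dim=-1)
        std = log_std.clamp(-5, 2).exp()
        dist = torch.distributions.Normal(mean, std)
        pre_tanh = dist.rsample()                      # reparameterized sample
        action = torch.tanh(pre_tanh)
        # Change-of-variables correction for the tanh squashing.
        log_prob = (dist.log_prob(pre_tanh) - torch.log(1 - action.pow(2) + 1e-6)).sum(-1)
        return action, log_prob

    def sac_update(obs, act, rew, next_obs, done):
        """One gradient step on the Q-function and the entropy-regularized policy.

        obs/next_obs: (B, obs_dim); act: (B, act_dim); rew/done: (B,) float tensors.
        """
        # Critic: regress Q(s, a) toward r + gamma * (Q_target(s', a') - alpha * log pi(a'|s')).
        with torch.no_grad():
            next_act, next_logp = sample_action(next_obs)
            target_q = q_target(torch.cat([next_obs, next_act], -1)).squeeze(-1)
            target = rew + gamma * (1 - done) * (target_q - alpha * next_logp)
        q_pred = q_net(torch.cat([obs, act], -1)).squeeze(-1)
        q_loss = ((q_pred - target) ** 2).mean()
        q_opt.zero_grad()
        q_loss.backward()
        q_opt.step()

        # Actor: maximize E[Q(s, a) - alpha * log pi(a|s)] via reparameterized actions.
        new_act, logp = sample_action(obs)
        policy_loss = (alpha * logp - q_net(torch.cat([obs, new_act], -1)).squeeze(-1)).mean()
        policy_opt.zero_grad()
        policy_loss.backward()
        policy_opt.step()

        # Slowly track the Q-network with its target copy (Polyak averaging).
        with torch.no_grad():
            for p, p_targ in zip(q_net.parameters(), q_target.parameters()):
                p_targ.mul_(0.995).add_(0.005 * p)

The full algorithm additionally uses two Q-functions and can tune the temperature automatically; see the technical report for those details.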

Performance of SAC
We evaluated SAC on two tasks: 1) quadrupedal walking with the Minitaur robot from Ghost Robotics, and 2) rotating a valve with a three-fingered Dynamixel Claw. Learning to walk presents a substantial challenge, as the robot is underactuated and must therefore delicately balance contact forces on the legs to make forward progress. An untrained policy can lose balance and fall, and too many falls will eventually damage the robot, making sample-efficient learning essential.

Although we trained our policy only on flat terrain, we subsequently tested it on varied terrains and obstacles. In principle, policies learned with soft actor-critic should be robust to test-time perturbations, because they are trained to maximize entropy (i.e., inject maximal noise) at training-time. Indeed, we observe that the policies learned with our method are robust to these perturbations without any additional learning.
Illustration of learned walking, using SAC implemented on the Minitaur robot. A full video of the learning process can be found at our project website.
The manipulation task requires the hand to rotate a valve-like object so that the colored peg faces to the right, as shown below. This task is exceptionally challenging due to both the perception challenges and the need to control a hand with 9 degrees of freedom. In order to perceive the valve, the robot must use raw RGB images shown in the inset at the bottom right. The initial position of the valve is reset uniformly at random for each episode, forcing the policy to learn to use the raw RGB images to perceive the current valve orientation.
Soft actor-critic solves both of these tasks quickly: the Minitaur locomotion takes 2 hours, and the valve-turning task from image observations takes 20 hours. We also learned a policy for the valve-turning task without images by providing the actual valve position as an observation to the policy. Soft actor-critic can learn this easier version of the valve task in 3 hours. For comparison, prior work has used natural policy gradients to learn the same task without images in 7.4 hours.

Conclusion
Our work demonstrates that deep reinforcement learning based on the maximum entropy framework can be applied to learn robot skills in challenging real-world settings. Since the policies are learned directly in the real world, they exhibit robustness to variations in the environment, which can be difficult to obtain otherwise. We also showed that we can learn directly from high-dimensional image observations, which represents a significant challenge in classical robotics. We hope that the release of SAC helps other research teams in their efforts to adopt deep RL for more complex real-world tasks in the future.

For more technical details, please visit the BAIR blog post, or read an early preprint of the locomotion experiment and a more complete description of the algorithm. You can find the implementation on GitHub.

Acknowledgements
This research was done in collaboration between Google and UC Berkeley. We would like to thank all the people who were involved, including Sehoon Ha, Kristian Hartikainen, Jie Tan, George Tucker, Vincent Vanhoucke and Aurick Zhou.

Source: Google AI Blog


Applications are open for the Google North America Public Policy Fellowship

Starting today, we’re accepting applications for the 2019 North America Google Policy Fellowship. Our fellowship gives undergraduate and graduate students a paid opportunity to spend 10 weeks diving headfirst into Internet policy at leading nonprofits, think tanks and advocacy groups. In addition to opportunities in Washington, D.C. and California, we’ve expanded our program to include academic institutions and advocacy groups in New York and Utah, where students will have the chance to be at the forefront of debates on internet freedom and economic opportunity. We’re looking for students from all majors and degree programs who are passionate about technology and want to gain hands-on experience exploring important intersections of tech policy.

The application period opens today for the North America region, and all applications must be received by 12:00 p.m. ET/9:00 a.m. PT on Friday, February 15th. This year’s program will run from early June through early August, with regular programming throughout the summer. More specific information, including a list of this year’s hosts and locations, can be found on our site.

You can learn about the program, application process and host organizations on the Google Public Policy Fellowship website.

Beta Channel Update for Chrome OS

The Beta channel has been updated to 72.0.3626.59 (Platform version: 11316.82.0 / 11316.82.1) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements.  A list of changes can be found here.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

David McMahon
Google Chrome

Dev Channel Update for Desktop

The dev channel has been updated to 73.0.3673.0 for Windows, Mac & Linux.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Abdul Syed
Google Chrome

Ways to succeed in Google News

With the New Year now underway, we'd like to offer some best practices and advice we hope will lead publishers to more success within Google News in 2019.

General advice

There is a lot of helpful information within the Google News Publisher Help Center. Be sure to read the material there, in particular the content and technical guidelines.

Headlines and dates


  • Present clear headlines: Google News looks at a variety of signals to determine the headline of an article, including your HTML title tag and the most prominent text on the page. Review our headline tips.
  • Provide accurate times and dates: Google News tries to determine the time and date to display for an article in a variety of ways. You can help ensure we get it right by using the following methods:
    • Show one clear date and time: As per our date guidelines, show a clear, visible date and time between the headline and the article text. Prevent other dates from appearing on the page whenever possible, such as for related stories.
    • Use structured data: Use the datePublished and dateModified schema and the correct time zone designator for AMP or non-AMP pages (see the illustrative snippet after this list).
  • Avoid artificially freshening stories: If an article has been substantially changed, it can make sense to give it a fresh date and time. However, don't artificially freshen a story without adding significant information or some other compelling reason for the freshening. Also, do not create a very slightly updated story from one previously published, then delete the old story and redirect to the new one. That's against our article URLs guidelines.
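
As a hedged illustration (the field values and URL below are placeholders, not markup copied from the guidelines), article dates can be exposed as schema.org structured data in JSON-LD, using ISO 8601 timestamps that carry an explicit time zone designator. The sketch below builds such a block in Python:

    # Illustrative only: schema.org NewsArticle markup with explicit time zone
    # designators on datePublished / dateModified (all values are placeholders).
    import json
    from datetime import datetime, timezone, timedelta

    publication_tz = timezone(timedelta(hours=11))  # example zone; use your site's zone

    article_markup = {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": "Example headline",
        "datePublished": datetime(2019, 1, 15, 9, 30, tzinfo=publication_tz).isoformat(),
        "dateModified": datetime(2019, 1, 16, 14, 0, tzinfo=publication_tz).isoformat(),
        "mainEntityOfPage": "https://example.com/news/example-article",
    }

    # Produces timestamps such as 2019-01-15T09:30:00+11:00, and a JSON-LD block
    # that belongs in the page's <head>.
    print('<script type="application/ld+json">')
    print(json.dumps(article_markup, indent=2))
    print('</script>')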

Duplicate content

Google News seeks to reward independent, original journalistic content by giving credit to the originating publisher, as both users and publishers would prefer. This means we try not to allow duplicate content—which includes scraped, rewritten, or republished material—to perform better than the original content. In line with this, these are guidelines publishers should follow:

  • Block scraped content: Scraping commonly refers to taking material from another site, often on an automated basis. Sites that scrape content must block scraped content from Google News.
  • Block rewritten content: Rewriting refers to taking material from another site, then rewriting that material so that it is not identical. Sites that rewrite content in a way that provides no substantial or clear added value must block that rewritten content from Google News. This includes, but is not limited to, rewrites that make only very slight changes or those that make many word replacements but still keep the original article's overall meaning.
  • Block or consider canonical for republished content: Republishing refers to when a publisher has permission from another publisher or author to republish an original work, such as material from wire services or in partnership with other publications.
    Publishers that allow others to republish content can help ensure that their original versions perform better in Google News by asking those republishing to block or make use of canonical.
    Google News also encourages those that republish material to consider proactively blocking such content or making use of the canonical, so that we can better identify the original content and credit it appropriately.
  • Avoid duplicate content: If you operate a network of news sites that share content, the advice above about republishing applies to your network. Select what you consider to be the original article and consider blocking duplicates or using the canonical element to point to the original (see the example after this list).
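
As an illustration (the URLs are placeholders, and this is simply one way to emit the tag), a republished page can credit the original article by pointing a canonical link element back at the originating publisher:

    # Illustrative only: a republished article pointing its canonical link
    # element back at the original publisher's URL (placeholder address).
    original_url = "https://original-publisher.example.com/news/original-article"

    canonical_tag = '<link rel="canonical" href="%s" />' % original_url

    # This tag belongs in the <head> of the republished page, so the original
    # article can be identified and credited.
    print(canonical_tag)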

Transparency


  • Be transparent: Visitors to your site want to trust and understand who publishes it and information about those who have written articles. That's why our content guidelines stress that content should have posts with clear bylines, information about authors, and contact information for the publication.
  • Don't be deceptive: Our content policies do not allow sites or accounts that impersonate any person or organization, or that misrepresent or conceal their ownership or primary purpose. We do not allow sites or accounts that engage in coordinated activity to mislead users. This includes, but isn't limited to, sites or accounts that misrepresent or conceal their country of origin or that direct content at users in another country under false premises.

More tips


  • Avoid taking part in link schemes: Don't participate in link schemes, which can include large-scale article marketing programs or selling links that pass PageRank. Review our page on link schemes for more information.
  • Use structured data for rich presentation: Whether you use AMP or non-AMP pages, you can make use of structured data to optimize your content for rich results or carousel-like presentations.
  • Protect your users and their data: Consider securing every page of your website with HTTPS to protect the integrity and confidentiality of the data users exchange on your site. You can find more useful tips in our best practices on how to implement HTTPS.

Here's to a great 2019!

We hope these tips help publishers succeed in Google News over the coming year. For those who have more questions about Google News, we are unable to offer one-to-one support. However, we do monitor our newly revamped Google News Publisher Forum and try to provide guidance on questions that might help a number of publishers at once. The forum is also a great resource where publishers share tips and advice with each other.

Sunset of the Ad Manager API v201802

On Thursday, February 28, 2019, in accordance with the deprecation schedule, v201802 of the Ad Manager API will be sunset. At that time, any requests made to this version will return errors.

If you’re still using this version, now is the time to upgrade to the latest release and take advantage of new functionality like new reporting Dimensions, enhanced options for Targeting, and improved Forecast breakdowns.

To upgrade, check the release notes to identify any breaking changes, grab the latest version of your client library, and update your code.
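
For example, if you use the Python client library (an assumption here; the same idea applies to the other client libraries), upgrading typically means installing the latest library release and bumping the version string your code requests, roughly as sketched below:

    # Illustrative sketch (Python client library assumed): after running
    #   pip install --upgrade googleads
    # point your code at a supported API version instead of the sunset v201802.
    from googleads import ad_manager

    # Loads credentials and network settings from googleads.yaml.
    client = ad_manager.AdManagerClient.LoadFromStorage()

    # v201811 is used here only as an example of a then-current version;
    # pick the latest version listed in the release notes.
    network_service = client.GetService('NetworkService', version='v201811')
    network = network_service.getCurrentNetwork()
    print('Connected to network: %s' % network.displayName)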

The highlights above are not an exhaustive list of changes, so be sure to check the release notes for a full list. As always, don't hesitate to reach out to us with any questions.

To be notified of future deprecations and sunsets, join the Ad Manager API Sunset Announcements group and adjust your notification settings. If you are an administrator on your network, you can also receive notifications when an application is making requests to your network using a deprecated version, as explained in this post.

Allow Google Calendar users to book Microsoft Exchange resources

What’s changing

You can now let your Google Calendar users book Microsoft Exchange calendar resources, such as meeting rooms, when they schedule a meeting.

Who’s impacted

Admins and end users

Why you’d use it

We know that some of you manage a coexistence of Google Calendar users and Microsoft Exchange users within your organizations. Last year, we added the ability to share free/busy information across users in these two environments. With this launch, Calendar users can now easily book any resources that are stored in Exchange.

How to get started


  • Admins: To enable Exchange room booking in the Admin console, please follow these instructions.
  • End users: Once this feature is enabled, Calendar users will see both Calendar and Exchange resources displayed as bookable options.

Additional details

For more information about Calendar interop, check out the Help Center.

Helpful links

Help Center: Allow Calendar users to book Exchange resources

Availability

Rollout details


G Suite editions
Available to all G Suite editions

On/off by default?
This feature will be OFF by default and can be enabled at the domain level.

Stay up to date with G Suite launches
Notice the new format for these launch announcements? Give us feedback on it here.

What’s new in Scratch 3.0, a programming language designed for kids

In 2013, the MIT Media Lab started creating a new version of Scratch, a graphical, block-based programming language used by tens of millions of kids to create and share interactive stories, games and animations. We partnered with the Media Lab on this new version of the language—Scratch 3.0—and the Google Blockly team developed the programming language's graphical coding blocks. Our CS First program, which offers kids in fourth through eighth grades Scratch coding lessons, also created new activities designed to teach Scratch's new features.

On January 2, Scratch 3.0 launched with a new look, new sprites (digital characters that perform actions in a project), backdrops (backgrounds), sounds, and extensions—plus, it’s now available on tablets. To help educators get ready for Scratch 3.0, we’ve created a comprehensive help article that includes support documents and videos featuring the new interface and customizable lesson plans.

I recently caught up with Mitchel Resnick, who leads the group at MIT that develops Scratch, to talk about the programming language and what’s new in version 3.0.

What is Scratch 3.0 and why is it cool?

Scratch 3.0 is a new version of Scratch that expands how and what students can create with code. We’re excited to see the diverse and creative projects that students will develop with it.

What are your favorite features of 3.0?

I love the Scratch 3.0 “extensions.” Each extension gives students an extra set of coding blocks to take Scratch’s capabilities even further. With new robotics extensions, students can use Scratch to program motors, lights and sensors. With the Google Translate extension, students can program characters to speak in other languages. As the library of extensions continues to grow, Scratch will have even more capabilities.

If you had to choose a sprite to represent yourself, which would you choose and why?

I’d choose the Ten80 Dance sprite. I’m a really bad dancer myself and wish I had moves like these.


What are some of the new sprites and backdrops?

We worked with artists and illustrators (including long-time Scratcher ipzy) to create a diverse collection of new sprites and backdrops. You’ll find new fashion sprites, animals, snacks, cars and more. Do you want to create a fantasy world with centaurs, griffins and unicorns? How about a game set in outer space? Are you into sports or dancing or dinosaurs? Whatever you’re interested in, we think there’s something for everyone in Scratch 3.0.

What are some Scratch 3.0 features that educators will like?

Educators will appreciate the new video tutorials in Scratch 3.0—there are tutorials to help students get started, explain new features and support new types of projects. We also worked closely with the CS First team to ensure that CS First videos and activities are ready for use with Scratch 3.0. Plus, Scratch 3.0 works on many different platforms, including touch devices like tablets—and there’s a desktop version of Scratch 3.0, so you can still use Scratch 3.0 even if you don’t have an internet connection.

Scratch 3.0 is live in CS First now, so be sure to check out its new look and features. To get some inspiration for your next creation, head to the online community to see others' Scratch projects.

Scratch 3.0’s new programming blocks, built on Blockly

Posted by Erik Pasternak, Blockly team Manager

Coding is a powerful tool for creating, expressing, and understanding ideas. That's why our goal is to make coding available to kids around the world. It's also why, in late 2015, we decided to collaborate with the MIT Media Lab on the redesign of the programming blocks for their newest version of Scratch.

Left: Scratch 2.0's code rendering. Right: Scratch 3.0's new code rendering.

Scratch is a block-based programming language used by millions of kids worldwide to create and share animations, stories, and games. We've always been inspired by Scratch, and CS First, our CS education program for students, provides lessons for educators to teach coding using Scratch.

But Scratch 2.0 was built on Flash, and by 2015, it became clear that the code needed a JavaScript rewrite. This would be an enormous task, so having good code libraries would be key.

And this is where the Blockly team at Google came in. Blockly is a library that makes it easy for developers to add block programming to their apps. By 2015, many of the web's visual coding activities were built on Blockly, through groups like Code.org, App Inventor, and MakeCode. Today, Blockly is used by thousands of developers to build apps that teach kids how to code.

One of our Product Managers, Champika (who earned her master's degree in Scratch's lab at MIT), believed Blockly could be a great fit for Scratch 3.0. She brought together the Scratch and Google Blockly teams for informal discussions. It was clear the teams had shared goals and values and could learn a lot from one another. Blockly brought a flexible, powerful library to the table, and the Scratch team brought decades of experience designing for kids.

Champika and the Blockly team together at I/O Youth, 2016.

Those early meetings kicked off three years of fun (and hard work) that led to the new blocks you see in Scratch 3.0. The two teams regularly traveled across the country to work together in person, trade puns, and pore over designs. Scratch's feedback and design drove lots of new features in Blockly, and Blockly made those features available to all developers.

On January 2nd, Scratch 3.0 launched with all of the code open source and publicly developed. At Google, we created two coding activities that showcase this code base. The first was Code a Snowflake, which was used by millions of kids as part of Google's Santa Tracker. The second was a Google Doodle that celebrated 50 years of kids coding and gave millions of people their first experience with block programming. As an added bonus, we worked with Scratch to include an extension for Google Translate in Scratch 3.0.

With Scratch 3.0, even more people are programming with blocks built on Blockly. We're excited to see what else you, our developers, will build on Blockly.