YouTube creators meet Pope Francis to discuss promoting understanding and empathy

YouTube has helped millions of people see that we have a lot in common, despite our differences. Building these bridges can start with a simple conversation, and over the past 11 years, we’ve seen YouTube creators use the power of video to do just that. From Hayla Ghazal encouraging women in the Middle East and around the world to speak up, to Dulce Candy sharing her own story as an undocumented immigrant and military veteran, creators from around the world have used our platform to express themselves, encourage new perspectives, and inspire solidarity within global fan bases.

We want to continue empowering people to come to YouTube to tell stories and form connections that encourage empathy and understanding between diverse communities.

That’s why today 11 international YouTube creators met with Pope Francis, who cares deeply about bringing young people together. This first-of-its-kind dialogue took place during the VI Scholas World Congress, which the Pope created to encourage peace through real encounters with youth from different backgrounds.

The YouTube creators who participated in this conversation represent more than 27 million subscribers globally. They come from ten different countries and diverse religious backgrounds: Louise Pentland (United Kingdom), Lucas Castel (Argentina), Matemática Río (Brazil), Hayla Ghazal (United Arab Emirates), Dulce Candy (United States), Matthew Patrick (United States), Jamie and Nikki (Australia and Sudan/Egypt), Greta Menchi (Italy), Los Polinesios (Mexico) and anna RF (Israel).



During the conversation with the Pope, these creators raised topics that they are passionate about as role models, including immigrant rights, gender equality, loneliness and self-esteem, and greater respect for diversity of all kinds.

We’re inspired by the many conversations these creators have sparked throughout their YouTube journeys. To hear more about what they discussed at the Vatican today, tune in to each of their channels for personal videos in the coming weeks. We hope to continue helping people share their stories: the more we can all understand, the more we can come together as a global community.

Juniper Downs, Head of Policy for YouTube, recently watched “I am a Muslim, hug me if you trust me.”

Source: YouTube Blog


Flaky Tests at Google and How We Mitigate Them

by John Micco

At Google, we run a very large corpus of tests continuously to validate our code submissions. Everyone from developers to project managers relies on the results of these tests to decide whether the system is ready for deployment or whether code changes are OK to submit. Developer productivity at Google depends on these tests finding real problems with the code being changed or developed, in a timely and reliable fashion.

Tests are run before submission (pre-submit testing), which gates submission and verifies that changes are acceptable, and again after submission (post-submit testing) to decide whether the project is ready to be released. In both cases, all of the tests for a particular project must report a passing result before submitting code or releasing a project.

Unfortunately, across our entire corpus of tests, we see a continual rate of about 1.5% of all test runs reporting a "flaky" result. We define a "flaky" test result as a test that exhibits both a passing and a failing result with the same code. There are many root causes of flaky results, including concurrency, reliance on non-deterministic or undefined behavior, flaky third-party code, infrastructure problems, and so on. We have invested a lot of effort in removing flakiness from tests, but overall the insertion rate is about the same as the fix rate, meaning we are stuck with a certain rate of tests that provide value but occasionally produce a flaky result. Almost 16% of our tests have some level of flakiness associated with them! This is a staggering number; it means that more than 1 in 7 of the tests written by our world-class engineers occasionally fail in a way not caused by changes to the code or tests.
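
To make the concurrency failure mode concrete, here is a minimal, illustrative Python test (not from Google's codebase) that passes or fails depending on how threads happen to interleave, even though the code under test never changes:

import threading
import unittest

class Counter:
    """A counter with a deliberate race: the read-modify-write is not atomic."""
    def __init__(self):
        self.value = 0

    def increment(self):
        current = self.value       # read
        self.value = current + 1   # write; another thread may interleave here

class FlakyCounterTest(unittest.TestCase):
    def test_concurrent_increments(self):
        counter = Counter()
        threads = [threading.Thread(
                       target=lambda: [counter.increment() for _ in range(1000)])
                   for _ in range(8)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # Passes on most runs, fails when increments interleave: the same
        # code produces both outcomes, which is exactly a flaky result.
        self.assertEqual(counter.value, 8000)

if __name__ == '__main__':
    unittest.main()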

When doing post-submit testing, our Continuous Integration (CI) system identifies when a passing test transitions to failing, so that we can investigate the code submission that caused the failure. What we find in practice is that about 84% of the transitions we observe from pass to fail involve a flaky test! This causes extra repetitive work to determine whether a new failure is a flaky result or a legitimate failure, and it is quite common for legitimate failures in flaky tests to be ignored because of the high number of false positives. At the very least, build monitors typically wait for additional CI cycles to re-run the test and determine whether a submission really broke it, which delays the identification of real problems and increases the pool of changes that could have contributed.

In addition to the cost of build monitoring, consider that the average project contains about 1,000 individual tests. To release a project, we require that all of these tests pass with the latest code changes. If 1.5% of test results are flaky, roughly 15 tests will fail on any given run, requiring expensive investigation by a build cop or developer. In some cases, developers dismiss a failing result as flaky, only to realize later that it was a legitimate failure caused by the code. It is human nature to ignore alarms when there is a history of false signals coming from a system; see, for example, this article about airline pilots ignoring an alarm on 737s. The same phenomenon occurs with pre-submit testing: the same 15 or so failing tests block submission and introduce costly delays into the core development process, and ignoring legitimate failures at this stage results in the submission of broken code.
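
To see why a fully green run is so rare at this scale, here is a back-of-the-envelope calculation using the numbers above (a sketch assuming each test result flakes independently, which real test suites only approximate):

# Illustrative model: each of 1,000 test results independently flakes
# with probability 0.015 (the observed flaky-result rate).
num_tests = 1000
flake_rate = 0.015

expected_flaky_failures = num_tests * flake_rate
prob_all_green = (1 - flake_rate) ** num_tests

print(expected_flaky_failures)  # 15.0 flaky failures expected per run
print(prob_all_green)           # ~2.7e-07: an all-passing run is vanishingly unlikely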

We have several mitigation strategies for flaky tests during pre-submit testing, including the ability to re-run only failing tests and an option to re-run tests automatically when they fail. We even have a way to denote a test as flaky, causing it to report a failure only if it fails three times in a row. This reduces false positives, but it encourages developers to ignore flakiness in their own tests unless those tests start failing three times in a row, which is hardly a perfect solution. Imagine a 15-minute integration test, marked as flaky, that is broken by my code submission. The breakage will not be discovered until three executions of the test complete, or 45 minutes, after which someone still has to determine whether the test is truly broken (and needs to be fixed) or whether it simply flaked three times in a row.
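
As a rough illustration of the "report failure only after N consecutive failures" policy, here is a sketch in Python. This is not Google's internal tooling (real CI systems implement the policy at the scheduler level, not as a decorator); it only shows the shape of the logic:

import functools

def tolerate_flakes(max_consecutive_failures=3):
    """Report failure only after the test fails this many times in a row.

    Illustrative sketch of the policy described above. Any single pass
    counts as a pass, so a flaky test rarely blocks anyone -- and a truly
    broken test takes max_consecutive_failures runs to surface.
    """
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for attempt in range(max_consecutive_failures):
                try:
                    return test_fn(*args, **kwargs)  # first pass wins
                except AssertionError as e:          # sketch: assertion failures only
                    last_error = e
            raise last_error  # failed on every attempt: report a real failure
        return wrapper
    return decorator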

Other mitigation strategies include:
  • A tool that monitors the flakiness of tests and, if the flakiness is too high, automatically quarantines the test (a simplified sketch of this policy follows the list). Quarantining removes the test from the critical path and files a bug for developers to reduce the flakiness. This prevents flakiness from becoming a problem for developers, but it could easily mask a real race condition or some other bug in the code being tested.
  • Another tool detects changes in the flakiness level of a test and works to identify the code change that caused the shift.
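
A minimal sketch of the quarantine policy, assuming a hypothetical per-test history of recent results and a hypothetical file_bug helper (the threshold and all names here are invented for illustration):

FLAKINESS_THRESHOLD = 0.10  # hypothetical: quarantine above 10% flaky runs

def flakiness(recent_results):
    """Fraction of recent runs that failed; a real system would compare
    outcomes at identical code versions rather than use a raw failure rate."""
    failures = sum(1 for passed in recent_results if not passed)
    return failures / len(recent_results)

def maybe_quarantine(test_name, recent_results, file_bug):
    """Remove a too-flaky test from the critical path and file a bug."""
    if flakiness(recent_results) > FLAKINESS_THRESHOLD:
        file_bug(test_name, "Test exceeds flakiness threshold; quarantined.")
        return True   # caller drops the test from blocking runs
    return False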

In summary, test flakiness is an important problem, and Google is continuing to invest in detecting, mitigating, tracking, and fixing test flakiness throughout our code base. For example:
  • We have a new team dedicated to providing accurate and timely information about test flakiness, so that developers and build monitors know whether and how they are being affected by it.
  • As we analyze the data from flaky test executions, we are seeing promising correlations with features that should enable us to identify a flaky result accurately without re-running the test.


By continually advancing the state of the art for teams at Google, we aim to remove the friction caused by test flakiness from the core developer workflows.

Bringing virtual cats to your world with Project Tango

Posted by Jason Guo, Developer Programs Engineer, Project Tango

Project Tango brings augmented reality (AR) experiences to life. From the practical to the whimsical, Project Tango apps help place virtual objects -- anything from new living room furniture to a full-sized dinosaur -- into your physical world.

Last month we showed you how to quickly and easily make a simple solar system in AR. But if you are ready for something more advanced, the tutorial below describes how to use Project Tango’s depth APIs to associate virtual objects with real world objects. It also shows you how to use a Tango Support Library function to find the planar surface in an environment.

So what’s our new tutorial project? We figured that since cats rule the Internet, we’d place a virtual cat in AR! The developer experience is designed to be simple -- when you tap on the screen, the app creates a virtual cat based on real-world geometry. You then use the depth camera to locate the surface you tapped on, and register (place) the cat in the right 3D position.

Bring on the cats!

Before you start, you’ll need to download the Project Tango Unity SDK. Then you can follow the steps below to create your own cats.

Step 1: Create a new Unity project and import the Tango SDK package into the project.

Step 2: Create a new scene. If you don’t know how to do this, look back at the solar system tutorial. Just like the solar system project, you’ll use the Tango Manager and Tango AR Camera prefabs in the scene and remove the default Main Camera gameobject. After doing this, your scene hierarchy should look like this:

Step 3: Build and run once, making sure the application shows the video feed from Tango’s camera.

Step 4: Enable the Depth checkbox on the Tango Manager gameobject.

Step 5: Drag and drop the Tango Point Cloud prefab to the scene from the TangoPrefab folder.

Tango Point Cloud includes a number of useful functions related to the point cloud, including finding the floor, transforming the point cloud into Unity world space, and rendering debug points. In this case, you’ll use the FindPlane function to find a plane based on the touch event.

Step 6: Create a UIController gameobject in the scene. To do this, click the “Create” button under the Hierarchy tab, then click “Create Empty.” The UIController gameobject will host the KittyUIController.cs script you’ll create in the next step.

Step 7: Select the UIController gameobject, then click “Add Component” in the Inspector window to add a C# script named KittyUIController.cs. KittyUIController.cs will handle the touch event, call the FindPlane function, and place your kitty into the scene.

Step 8: Double click on the KittyUIController.cs file and replace its contents with the following code:

using UnityEngine;
using System.Collections;

public class KittyUIController : MonoBehaviour
{
    public GameObject m_kitten;
    private TangoPointCloud m_pointCloud;

    void Start()
    {
        // Cache the TangoPointCloud script on the Tango Point Cloud prefab.
        m_pointCloud = FindObjectOfType<TangoPointCloud>();
    }

    void Update()
    {
        if (Input.touchCount == 1)
        {
            // Trigger the place-kitten function when a single touch ends.
            Touch t = Input.GetTouch(0);
            if (t.phase == TouchPhase.Ended)
            {
                PlaceKitten(t.position);
            }
        }
    }

    void PlaceKitten(Vector2 touchPosition)
    {
        // Find the plane under the touch point.
        Camera cam = Camera.main;
        Vector3 planeCenter;
        Plane plane;
        if (!m_pointCloud.FindPlane(cam, touchPosition, out planeCenter, out plane))
        {
            Debug.Log("cannot find plane.");
            return;
        }

        // Place the kitten on the surface, and make it always face the camera.
        if (Vector3.Angle(plane.normal, Vector3.up) < 30.0f)
        {
            Vector3 up = plane.normal;
            Vector3 right = Vector3.Cross(plane.normal, cam.transform.forward).normalized;
            Vector3 forward = Vector3.Cross(right, plane.normal).normalized;
            Instantiate(m_kitten, planeCenter, Quaternion.LookRotation(forward, up));
        }
        else
        {
            Debug.Log("surface is too steep for kitten to stand on.");
        }
    }
}

Notes on the code
Here are some notes on the code above:
  • m_kitten is a reference to the Kitten gameobject (we’ll add the model in the following steps)
  • m_pointCloud is a reference to the TangoPointCloud script on the Tango Point Cloud gameobject. We need this reference to call the FindPlane method on it
  • We assign the m_pointCloud reference in the Start() function
  • In the Update() function, we check the touch count and its state, and trigger placement once a single touch has ended
  • We invoke the PlaceKitten(Vector2 touchPosition) function to place the cat into 3D space. It queries the main camera’s location (in this case, the AR camera), then calls the FindPlane function based on the camera’s position and the touch position. FindPlane returns an estimated plane from the touch point; we then place the cat on that plane if it isn’t too steep. As a note, the FindPlane function is provided by the Tango Support Library; you can visit TangoSDK/TangoSupport/Scripts/TangoSupport.cs to see all of its functionality.

Step 9: Put everything together by downloading the kitty.unitypackage, which includes a cat model with some simple animations. Double click on the package to import it into your project. In the project folder you will find a Kitty prefab, which you can drag and drop to the Kitten field on the KittyUIController.

Step 10: Compile and run the application again. You should be able to tap the screen and place kittens everywhere!

We hope you enjoyed this tutorial combining the joy of cats with the magic of AR. Stay tuned to this blog for more AR updates and tutorials!

A final note on this tutorial
So you’ve just created virtual cats that live in AR. That’s great, but from a coding perspective, you’ll need to follow some additional steps to make a truly performant AR application. Check out our Unity example code on GitHub (especially the Augmented Reality example) to learn more about building a good AR application. And if you need a refresher, check out this talk from I/O about building 6DOF games with Project Tango.

Learn about building for Google Maps over Coffee with Ankur Kotwal

Posted by Laurence Moroney, Developer Advocate

If you’ve ever used any of the Google Maps or Geo APIs, you’ll likely have watched a video, read a doc, or explored some code written by Ankur Kotwal. We sat down with him to discuss his experience with these APIs: past, present, and future. We also discuss how to get started building mapping apps, and how to consume many of the Google web services that support them.

We also discuss the Santa Tracker application, which Ankur was instrumental in delivering, including some fun behind-the-scenes stories about the hardest project manager he’s ever worked with!

Take in the Sights of Rio de Janeiro Before the Games Begin

Preparations are underway in the “Marvelous City” in anticipation of the 2016 Olympic Games, which are expected to draw an extra half a million people to Brazil this summer. The Google Street View team has also been busy preparing for the festivities. Over the past few months, we’ve been capturing fresh imagery so everyone can enjoy the magic of Rio de Janeiro, whether they plan to attend in person or watch the excitement from afar.


Google Street View engineer takes pictures from the inside of Rio’s Olympic Park

Starting today, a quick visit to Street View will give you a preview of the places where the world's most talented athletes will make history. Barrel down the Olympic mountain bike trail and take a stroll on the track where runners will sprint as fast as their legs will carry them in an attempt to bring home the gold.


Olympic Mountain Bike Trail

We’re also releasing indoor imagery of more than 200 hotels, restaurants and bars across the city. Take a peek at the pink-carpeted Copacabana Palace or the breathtaking poolside ocean views at the Fasano Hotel. If you’ll be in Rio for the Games, check out the vibe of a restaurant before making reservations, or scope out the local bar to ensure there’s ample room on the dance floor to bust out your samba moves.


Suite in the Fasano Hotel, famous for its celebrity guests in Rio

Step outside to take in some of the most iconic sights of Rio, including Christ the Redeemer, the Dona Marta hilltop, and Arpoador Beach. We’ve captured imagery of every main tourist attraction in Rio, including the famous Selarón Steps.


View from Arpoador Beach

In addition to the many beautiful sights, we’ve collected up-to-date Street View imagery of Rio’s streets and neighborhoods so you can get a feel for the area around your accommodation ahead of time. Check out the bus stops to familiarize yourself with local transportation, or pick the perfect people-watching juice bar to enjoy an açaí bowl on the way from the hotel to the beach.

Whether you’re preparing to visit in person or simply enjoying the sights from afar, make yourself a caipirinha or have some pão de queijo (delicious Brazilian cheese bread) while you explore.

Posted by Marcus Leal, Google Maps manager in Brazil

Source: Google LatLong


Google Photos: One year, 200 million users, and a whole lot of selfies

A year ago, we introduced Google Photos with one mission: To be a home for all your photos and videos, organized and brought to life, so that you can share and save what matters.

Now 200 million of you are using Google Photos each month. We’ve delivered more than 1.6 billion animations, collages and movies, among other things. You’ve collectively freed up 13.7 petabytes of storage on your devices—it would take 424 years to swipe through that many photos! We’ve also applied 2 trillion labels, and 24 billion of those have been for ... selfies.

To celebrate our first birthday, we’ve gathered a few of the team's favorite tips and updates we’ve made in the past year, so you can keep all that good stuff going...

1. To fly through Google Photos on the web at photos.google.com, press Shift-? to see a list of keyboard shortcuts.

2. Narrow down your search results by searching for more than one thing at a time. Search for two people: “Mom and Dad,” or a person and a place: “Mom Yosemite,” a place and a thing: “Yosemite bear,” or a person and a thing: “Mom bear” to find that photo of your mama bear with the real bear.

3. Running out of Google storage? On photos.google.com, under Settings, you can choose to convert your uploaded content from “Original quality” to the free “High quality” setting to recover lots of space.

4. Enter your favorite emoji (😎 🍂 💗 🎂 ) into search to pull up your corresponding photos. Not joking.

5. On photos.google.com, easily find the photos you recently uploaded by going to search, then choosing "Show More” and then “Recently Added.”

6. Tap into your device folders from the top of the albums page on Android, and see which folders are being backed up. Double-check that all those screenshots are safe!

7. Create a shared album for your family. Every time someone adds a new photo, everyone will get a notification so they can see your latest photo or video.

8. Have you spied the easter egg in the photo editor on Android? Hint...It’s out of this world.

9. Occasionally photos can appear out of order in your gallery—perhaps because the date was incorrectly set on your phone or camera when you took them. On photos.google.com, you can edit both the time and time zone of a photo or group of photos to put them in the right order in your library. Change one and they all get adjusted.

10. At the top of the albums page on mobile, scroll the carousel to the right and tap on the videos tile to get a view of all the videos in your library (on photos.google.com, you’ll see videos at the top of the album page).

Thanks for a wonderful first year—keep it up; all those selfies aren’t going to take themselves!


Machine learning at the museum: this week on Google Cloud Platform




Is there a limit to what you can do with machine learning? Doesn’t seem like it.

At Moogfest last week, Google researchers presented Project Magenta, which uses TensorFlow, the open-source machine learning library we developed, to explore whether computers can create original pieces of art and music.

Researchers have also shown that they can use TensorFlow to train systems to imitate the grand masters. With this implementation of neural style in TensorFlow, it becomes easy to render an image that looks like it was created by Vincent van Gogh or Pablo Picasso — or a combination thereof. Take an image of Frank Gehry’s Stata Center in winter, add style inputs from Van Gogh’s Starry Night and a Picasso Dora Maar, and you end up with a Picasso-esque Van Gogh for the 21st century. Voilà!

(The code for neural style was first posted to GitHub last December, but the author continues to update it and welcomes pull requests.)
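
For a sense of what “style” means here: neural style summarizes a painting’s style as Gram matrices of convolutional feature maps, and optimizes an output image to match those statistics. Here is a minimal NumPy sketch of that style representation (illustrative only; the linked TensorFlow implementation is the real thing):

import numpy as np

def gram_matrix(features):
    """Style representation used in neural style transfer.

    features: a feature map of shape (height, width, channels), e.g. the
    activations of one convolutional layer for the style image.
    Returns a (channels, channels) matrix of channel co-activations;
    matching these matrices transfers style while ignoring layout.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)  # one row per pixel position
    return flat.T @ flat / (h * w)     # normalized channel correlations

# Example: compare the style statistics of two (random) feature maps.
style = gram_matrix(np.random.rand(64, 64, 16))
output = gram_matrix(np.random.rand(64, 64, 16))
style_loss = np.mean((style - output) ** 2)  # neural style minimizes this per layer
print(style_loss)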

Or maybe fine art isn’t your thing. This week, we also saw how to use TensorFlow to solve trivial programming problems, forecast demand — even predict the elections.

Because TensorFlow is open source, anyone can use it, on the platform of their choice. But it’s worth mentioning that running machine learning on Google Cloud Platform works especially well. You can learn more about GCP’s machine learning capabilities here. And if you’re doing something awesome with machine learning on GCP, we’d love to hear about it — just tweet us at @googlecloud.

Announcing v201605 of the AdWords API

Today we’re announcing the release of AdWords API v201605. This is the third release that follows the new release schedule announced in January 2016. Here are the highlights:
  • Expanded Text Ads and responsive ads for Display. Support was added for creating the new ExpandedTextAd and ResponsiveDisplayAd ad types in test accounts. Give these new formats a try with your test accounts; a sketch of creating an ExpandedTextAd follows this list. This functionality will be made available for production accounts over the next few months.
  • Campaign bid estimates by platform. Inclusion of campaign estimates by platform was added to TrafficEstimatorService. Check out the updated Estimating Traffic guide for more details.
  • Bid modifiers for all platforms (test accounts only). Campaign and ad group bid modifiers for all platforms are now supported across all versions, but only in test accounts. Previously, platform bid modifiers were only supported for the HighEndMobile (30001) platform. This functionality will be made available for production accounts over the next few months, as mentioned in a recent Inside AdWords blog post.
  • Improved reporting on quality score. The new HasQualityScore field lets you filter report rows based on the presence or absence of quality score data for each criterion; a report sketch also follows this list. In addition, the QualityScore field will now have a value of '--' on rows where HasQualityScore = false. Previously, these rows had a QualityScore value of 6 for Search campaigns and 0 (zero) for Display campaigns.
  • Image dimensions in reports. The ImageCreativeImageHeight and ImageCreativeImageWidth fields were added to the Ad Performance report so you can retrieve image dimensions for all image ads without making multiple AdGroupAdService requests.
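
As a rough illustration of the new ad type, here is a sketch using the googleads Python client library. The field names follow the v201605 ExpandedTextAd type, but treat the exact client-library usage as an assumption and check the updated code examples once they are published; the ad group ID is hypothetical:

from googleads import adwords

# Assumes credentials in ~/googleads.yaml and an existing ad group
# in a TEST account (this ad type is test-account-only for now).
client = adwords.AdWordsClient.LoadFromStorage()
service = client.GetService('AdGroupAdService', version='v201605')

AD_GROUP_ID = 123456789  # hypothetical ad group ID in your test account

operation = {
    'operator': 'ADD',
    'operand': {
        'adGroupId': AD_GROUP_ID,
        'ad': {
            'xsi_type': 'ExpandedTextAd',
            'headlinePart1': 'Luxury Cruise to Mars',
            'headlinePart2': 'Best Space Cruise Line',
            'description': 'Buy your tickets now!',
            'finalUrls': ['http://www.example.com'],
        },
    },
}
result = service.mutate([operation])
print(result['value'][0]['ad']['id'])

And a sketch of pulling the new quality score fields in a report, continuing from the client above. The fields come from the release notes; the exact AWQL predicate syntax is an assumption:

import sys

report_downloader = client.GetReportDownloader(version='v201605')
query = ('SELECT Id, Criteria, HasQualityScore, QualityScore '
         'FROM CRITERIA_PERFORMANCE_REPORT '
         'WHERE HasQualityScore = TRUE '
         'DURING LAST_7_DAYS')
report_downloader.DownloadReportWithAwql(query, 'CSV', sys.stdout)
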
If you’re using v201601 of the AdWords API, please note that it’s now deprecated and will be sunset on August 23, 2016. We encourage you to skip v201603 and migrate straight to v201605.

As with every new version of the AdWords API, we encourage you to carefully review all changes in the release notes and the v201605 migration guide. The updated client libraries and code examples will be published shortly.

If you have any questions or need help with migration, please post on the forum or the Ads Developers Plus Page.

Speeding up the Google app for iOS users

This just in: your Google app for iPhone and iPad is now faster. We’ve cut down loading time and updated the app with some new features to help you save time and get the information you need as quickly as possible.

Faster every step of the way
Each time you open the app or do a search, everything will load just a bit quicker. Whether or not you notice the difference, these small improvements will save app users a combined 6.5 million hours this year.

Instant article loading with AMP
A few months ago, we announced that Accelerated Mobile Pages (AMP) were coming to the mobile web. Starting today, AMP will be available in the Google app for iOS. So now news articles from a vast array of publishers will load instantly for your reading pleasure. Just look out for the lightning bolt and “AMP” next to articles in the “Top Stories” section of your search results and enjoy blazing-fast news.


Play sports highlights right from your Now cards
With the NBA and NHL playoffs in full swing and the Olympics around the corner, it’s a good time to be a sports fan. And now, you can instantly watch sports highlights right from your Now cards. When you get a card with sports highlights, just tap the play button and watch it right from the app. Score!

So whether you’re searching for news from around the world or video playbacks from your favorite sports team, the Google app’s got you covered, faster. Ready, set, search!

Unni Narayanan, Director, Product Management

Source: Inside Search


Dev Channel Update for Chrome OS

The Dev channel has been updated to 52.0.2743.0 (Platform version: 8350.1.0, 8350.2.0, 8350.3.0) for all Chrome OS devices except x86-mario, x86-zgb, and expresso. This build contains a number of bug fixes, security updates and feature enhancements. A list of changes can be found here.
If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).


Grace Kihumba
Google Chrome