For centuries, creative people have turned tools into art or come up with inventions that change how we think about the world around us. Today you can explore the intersection of art and technology through two new experiments, created by artists in collaboration with the Google Arts & Culture Lab and announced at Google I/O 2019.
Created by artists Molmol Kuo & Zach Lieberman, Weird Cuts lets you make collages using augmented reality. You can select one of the cutouts shown in the camera screen to take a photo in a particular shape. The resulting cut-out can then be copy-pasted into the space around you, as seen through your camera’s eye. Download the app, available on iOS and Android, at g.co/weirdcuts.
Want to design your very own artwork with AI? Artist duo Pinar & Viola and Google engineer Alexander Mordvintsev—best known for his work on DeepDream—used machine learning to create a tool to do so. To use Infinite Patterns, upload an image and a DeepDream algorithm will transform and morph it into a unique pattern. For Pinar & Viola it is the perfect way to find new design inspirations for fashion by challenging one’s perception of shape, color and reality.
These experiments were created in the Google Arts & Culture Lab, where we invite artists and coders to explore how technology can inspire artistic creativity. Collaborations at the Lab gave birth to Cardboard, the affordable VR headset, and Art Selfie, which has matched millions of selfies with works of art around the world.
To continue to encourage this emerging field of art with machine intelligence, we’re announcing the Artists + Machine Intelligence Grants for contemporary artists exploring creative applications of machine learning. This program will offer artists engineering mentorship, access to core Google research, and project funding.
Machine learning and artificial intelligence are great tools for artists, and there’s so much more to learn. If you’re curious about their origins and future, dive into the online exhibition “AI: More than Human” by the Barbican Centre, in which some of the world’s leading experts, artists and innovators explore the evolving relationship between humans and technology.
You can try our new experiments as well as the digital exhibition on the Google Arts & Culture app for iOS and Android.
While Google Voice gives you the flexibility to use your work phone number on any device, there may be times when a desk phone is preferred or helps ease the transition from a legacy telephony system to Google Voice.
How to get started
Admins: Use our Help Center to see how to provision a desk phone for Voice.
End users: Once a desk phone has been set up for you by an admin, see how to use a desk phone with Voice.
Students are currently wrapping up the Community Bonding phase, where they become familiar with the open source projects they will be working on by learning the codebase and the community’s best practices, and by integrating into the community. Students will start their 12-week coding projects on May 29th.
Each year we like to share program statistics about the GSoC program and the accepted students and mentors involved in the program.
89.2% are participating in their first GSoC
75% are first time applicants
77.5% are undergraduates, 16.6% are masters students, and 5.9% are in PhD programs
72.8% are Computer Science majors, 3.5% are Mathematics majors, 16.8% are other Engineering majors (Electrical, Mechanical, Aerospace, etc.)
Students are in a variety of majors including Atmospheric Science, Neuroscience, Economics, Linguistics, Geology, and Pharmacy.
There were a record number of students submitting proposals for the program this year: 5,606 students from 103 countries submitted 7,555 proposals.
In our next GSoC statistics post we will delve deeper into the schools, gender breakdown, mentors, and registration numbers for the 2019 program.
Video calling on Duo helps you savor the moments with people who matter to you, and today we have a couple of updates that help you connect with loved ones and personalize your calls and messages.
Video call with the whole family
No need to play telephone: now up to eight people can catch up with group calling on Duo. Group calling is now available globally on both iOS and Android, and like all Duo calls and video messages, group calls are also encrypted end-to-end so your conversations stay private.
Data Saving mode
Data can be costly, so in select regions including Indonesia, India, and Brazil, you can limit data usage on mobile networks and Wi-Fi on Android. If you turn on Data Saving mode in Settings, both you and the person you’re calling will save on data usage in video calls. Data saving mode will be rolling out to more markets in the coming months.
Personalize video messages
Video messages let you record a quick hello when you don’t have time to call or when the person you’re calling can’t pick up. Now on Android and coming soon to iOS, you can personalize video messages by adding text and emojis, or even drawing on your message using brushes.
Ok, no more stalling. Time to pick up the phone to leave Mom a video message!
Posted by Tali Dekel, Research Scientist and Forrester Cole, Software Engineer, Machine Perception
The human visual system has a remarkable ability to make sense of our 3D world from its 2D projection. Even in complex environments with multiple moving objects, people are able to maintain a feasible interpretation of the objects’ geometry and depth ordering. The field of computer vision has long studied how to achieve similar capabilities by computationally reconstructing a scene’s geometry from 2D image data, but robust reconstruction remains difficult in many cases.
A particularly challenging case occurs when both the camera and the objects in the scene are freely moving. This confuses traditional 3D reconstruction algorithms based on triangulation, which assumes that the same object can be observed from at least two different viewpoints at the same time. Satisfying this assumption requires either a multi-camera array (like Google’s Jump) or a scene that remains stationary as the single camera moves through it. As a result, most existing methods either filter out moving objects (assigning them “zero” depth values) or ignore them (resulting in incorrect depth values).
Left: The traditional stereo setup assumes that at least two viewpoints capture the scene at the same time. Right: We consider the setup where both camera and subject are moving.
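The triangulation constraint above can be made concrete with a minimal sketch. In the classic rectified two-viewpoint setup, the depth of a static point follows directly from its pixel disparity between the views; all numbers below are illustrative, not from the paper.

```python
# Minimal sketch of triangulation-based depth from a rectified stereo pair.
# For a static point seen from two horizontally offset viewpoints:
#     depth = focal_length * baseline / disparity

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Depth (meters) of a static point from its disparity between two views."""
    if disparity_px <= 0:
        # Zero disparity means the two observations give no triangulation signal.
        raise ValueError("Triangulation needs a nonzero disparity between viewpoints.")
    return focal_length_px * baseline_m / disparity_px

# A point that shifts by 20 px between cameras 0.1 m apart (f = 1000 px)
# lies 5 m away. A moving object violates the static-scene assumption,
# so its apparent "disparity" no longer encodes depth.
print(depth_from_disparity(1000.0, 0.1, 20.0))  # 5.0
```

This is exactly the assumption that breaks when the subject moves between the two captures, which motivates the learning-based approach described next.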
In “Learning the Depths of Moving People by Watching Frozen People”, we tackle this fundamental challenge by applying a deep learning-based approach that can generate depth maps from an ordinary video, where both the camera and subjects are freely moving. The model avoids direct 3D triangulation by learning priors on human pose and shape from data. While there is a recent surge in using machine learning for depth prediction, this work is the first to tailor a learning-based approach to the case of simultaneous camera and human motion. In this work, we focus specifically on humans because they are an interesting target for augmented reality and 3D video effects.
Our model predicts the depth map (right; brighter=closer to the camera) from a regular video (left), where both the people in the scene and the camera are freely moving.
Sourcing the Training Data
We train our depth-prediction model in a supervised manner, which requires videos of natural scenes, captured by moving cameras, along with accurate depth maps. The key question is where to get such data. Generating data synthetically requires realistic modeling and rendering of a wide range of scenes and natural human actions, which is challenging. Further, a model trained on such data may have difficulty generalizing to real scenes. Another approach might be to record real scenes with an RGBD sensor (e.g., Microsoft’s Kinect), but depth sensors are typically limited to indoor environments and have their own set of 3D reconstruction issues.
Instead, we make use of an existing source of data for supervision: YouTube videos in which people imitate mannequins by freezing in a wide variety of natural poses, while a hand-held camera tours the scene. Because the entire scene is stationary (only the camera is moving), triangulation-based methods such as multi-view stereo (MVS) work, and we can get accurate depth maps for the entire scene, including the people in it. We gathered approximately 2,000 such videos, spanning a wide range of realistic scenes with people naturally posing in different group configurations.
Videos of people imitating mannequins while a camera tours the scene, which we used for training. We use traditional MVS algorithms to estimate depth, which serves as supervision during training of our depth-prediction model.
Inferring the Depth of Moving People
The Mannequin Challenge videos provide depth supervision for a moving camera and “frozen” people, but our goal is to handle videos with a moving camera and moving people. We need to structure the input to the network in order to bridge that gap.
A possible approach is to infer depth separately for each frame of the video (i.e., the input to the model is just a single frame). While such a model already improves over state-of-the-art single image methods for depth prediction, we can improve the results further by considering information from multiple frames. For example, motion parallax, i.e., the relative apparent motion of static objects between two different viewpoints, provides strong depth cues. To benefit from such information, we compute the 2D optical flow between each input frame and another frame in the video, which represents the pixel displacement between the two frames. This flow field depends on both the scene’s depth and the relative position of the camera. However, because the camera positions are known, we can remove their dependency from the flow field, which results in an initial depth map. This initial depth is valid only for static scene regions. To handle moving people at test time, we apply a human-segmentation network to mask out human regions in the initial depth map. The full input to our network then includes: the RGB image, the human mask, and the masked depth map from parallax.
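The input assembly described above can be sketched as a toy example. This is illustrative plain Python, not the authors' code; the resolution, depth values, and mask region are all made up.

```python
# Toy sketch of assembling the network input: an RGB frame, a binary human
# mask, and a parallax-based initial depth with human pixels removed.

H, W = 4, 6  # tiny illustrative resolution

rgb = [[[0.5, 0.5, 0.5] for _ in range(W)] for _ in range(H)]   # frame t
human_mask = [[1.0 if 1 <= y < 3 and 2 <= x < 5 else 0.0
               for x in range(W)] for y in range(H)]            # 1 = person
initial_depth = [[1.0 + 0.1 * x for x in range(W)] for y in range(H)]

# Depth from motion parallax is only valid for static scenery, so zero it
# out wherever the mask marks a (possibly moving) person.
masked_depth = [[initial_depth[y][x] * (1.0 - human_mask[y][x])
                 for x in range(W)] for y in range(H)]

# Per-pixel channels fed to the network: 3 (RGB) + 1 (mask) + 1 (depth) = 5.
net_input = [[rgb[y][x] + [human_mask[y][x], masked_depth[y][x]]
              for x in range(W)] for y in range(H)]
print(len(net_input), len(net_input[0]), len(net_input[0][0]))  # 4 6 5
```

The network then has everything it needs: appearance everywhere, reliable parallax depth in static regions, and an explicit mask telling it where depth must be inferred from learned priors instead.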
Depth prediction network: The input to the model includes an RGB image (Frame t), a mask of the human region, and an initial depth for the non-human regions, computed from motion parallax (optical flow) between the input frame and another frame in the video. The model outputs a full depth map for Frame t. Supervision for training is provided by the depth map, computed by MVS.
The network’s job is to “inpaint” the depth values for the regions with people, and to refine the depth elsewhere. Intuitively, because humans have consistent shape and physical dimensions, the network can internally learn such priors by observing many training examples. Once trained, our model can handle natural videos with arbitrary camera and human motion. Below are some examples of our depth-prediction model’s results on videos, with comparisons to recent state-of-the-art learning-based methods.
Comparison of depth prediction models on a video clip with moving cameras and people. Top: Learning-based monocular depth prediction methods (DORN; Chen et al.). Bottom: Learning-based stereo method (DeMoN), and our result.
3D Video Effects Using Our Depth Maps
Our predicted depth maps can be used to produce a range of 3D-aware video effects. One such effect is synthetic defocus. Below is an example, produced from an ordinary video using our depth map.
Other possible applications for our depth maps include generating a stereo video from a monocular one, and inserting synthetic CG objects into the scene. Depth maps also provide the ability to fill in holes and disoccluded regions with the content exposed in other frames of the video. In the following example, we have synthetically wiggled the camera at several frames and filled in the regions behind the actor with pixels from other frames of the video.
Acknowledgements
The research described in this post was done by Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu and Bill Freeman. We would like to thank Miki Rubinstein for his valuable feedback.
We’re providing more control over user password policies for some customers using third-party identity providers (IdPs) via SAML. Previously, these customers could not enforce the “Require password change” setting for their users. Now, SSO customers who have a network mask defined can turn on this setting and force their users to change their Google password the next time they log in using their G Suite or Cloud Identity credentials.
Why you’d use it
For many customers who use third-party IdPs via SAML, preventing “Require password change” is the desired behavior. Their users only need to know their credentials for their IdP, so forcing them to change their Google password is not meaningful.
However, some G Suite admins in domains with a third-party IdP use a network mask to allow some of their users to log in using their G Suite or Cloud Identity credentials. In such deployments, there may be users who sign in using their G Suite credentials. For these users, admins may want to generate a temporary password and then have the user change it on the next login. This update will help admins of domains that use SSO and a network mask to do this.
How to get started
Admins: This update will only impact domains with a SAML IdP configured for SSO and a network mask. To check if you have a network mask, go to Admin console > Security > Network masks and see if there’s information defined.
Admins at domains with SAML IdP configured for SSO and a network mask can turn on the setting in the Admin console (“Require password change”) or via the Admin SDK (“Do Force password change on Next Login”). Once turned on, it will be enforced for that user’s next login. See the sample screenshot below.
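For admins who prefer the API route, the Admin SDK Directory API exposes this setting as the user field `changePasswordAtNextLogin`. The sketch below builds the request body; the actual call is shown only as a commented-out example, since it requires real admin credentials, and the user email is a placeholder.

```python
# Hedged sketch: enforcing "Require password change" for a user via the
# Admin SDK Directory API (users.update). The helper only constructs the
# request body, which is the part specific to this setting.

def build_password_change_patch(require_change=True):
    """Request body asking that the user change their password at next login."""
    return {"changePasswordAtNextLogin": require_change}

# Example call (illustration only; needs google-api-python-client and
# domain admin credentials, and "user@example.com" is a placeholder):
#
# from googleapiclient.discovery import build
# service = build("admin", "directory_v1", credentials=creds)
# service.users().update(
#     userKey="user@example.com",
#     body=build_password_change_patch(True),
# ).execute()

print(build_password_change_patch(True))  # {'changePasswordAtNextLogin': True}
```

As described above, the setting takes effect at that user's next login through G Suite or Cloud Identity credentials.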
If your domain has SSO but does not have a network mask configured, then there will be no change. The required password change option will show as OFF and you won’t be able to turn it on. See the sample screenshot below.
French fries, lettuce wraps, massaman curry, chicken wings, cupcakes—I could go on. When I was pregnant with my son last year, my cravings were completely overpowering. Lucky for me, I didn’t have to jump into the car and go to my favorite restaurants to get my fill—food delivery services saved my bacon on more occasions than I’d be comfortable admitting to the world.
Ever since then, I’ve counted myself as one of the millions of people who regularly order food for home delivery. Starting today, we’re making it even easier to get food delivered to your doorstep.
Find food and order faster
Now you can use Google Search, Maps or the Assistant to order food from services like DoorDash, Postmates, Delivery.com, Slice, and ChowNow, with Zuppler and others coming soon. Look out for the “Order Online” button in Search and Maps when you search for a restaurant or type of cuisine. For participating restaurants, you can make your selections with just a few taps, view delivery or pickup times, and check out with Google Pay.
Let the Google Assistant handle dinner
To use the Assistant on your phone to get your food fix, simply say, “Hey Google, order food from [restaurant].” You can also quickly reorder your go-to meal with some of our delivery partners by saying, “Hey Google, reorder food from [restaurant].” The Assistant pulls up your past orders, and in just a few seconds, you can select your usual dish.
Now's the perfect time to let Google help with your cravings. So, what are we ordering tonight?
Changes in the G Suite AdminSDK Reports API: We’re introducing a new consolidated log named groups_enterprise, which includes changes to groups and group memberships across all products and APIs. These were previously split across the groups and admin audit logs.
Changes in GCP Cloud Audit Logging: We’re adding Google Groups information to Cloud Audit Logs (CAL) in Stackdriver. See our Cloud Blog post for more details on how this could help GCP customers. Note that this will not change visibility of these logs in the G Suite Admin console; it just adds them to Cloud Audit Logs (CAL) in Stackdriver as well.
Who’s impacted
G Suite and GCP Admins only
Why you’d use it
These changes will help improve the security and usability of Groups as an IAM tool by streamlining administration, transparency, and access monitoring.
How to get started
Admins: Changes in GCP Cloud Audit Logging are opt-in and can be enabled at G Suite Admin console > Company profile > Legal & Compliance > Sharing options.
End users: No action needed.
Changes in the G Suite AdminSDK Reports API
Changes to groups have historically been logged in either the groups or admin audit logs. Changes made in the Google Groups product are logged in the groups log, while changes made through admin tools like the Admin console, AdminSDK, and GCDS are logged in the admin log. As part of our efforts to streamline administration and increase transparency, we’re introducing a new consolidated log named groups_enterprise, which includes changes to groups and group memberships across all products and APIs. This new log is now available through the AdminSDK Reports API and will be available in the Admin console in the future.
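As a sketch of how the consolidated log can be read, the AdminSDK Reports API lists audit events per application name, and the new log is addressed by the groups_enterprise application name described above. The helper below only builds the query parameters; the actual call is shown as a commented-out example because it requires real admin credentials.

```python
# Hedged sketch: querying the consolidated groups_enterprise audit log via
# the AdminSDK Reports API (activities.list).

def build_activities_query(application_name="groups_enterprise", user_key="all"):
    """Parameters for a Reports API activities.list call on the new log."""
    return {"userKey": user_key, "applicationName": application_name}

# Example call (illustration only; needs google-api-python-client and
# domain admin credentials):
#
# from googleapiclient.discovery import build
# service = build("admin", "reports_v1", credentials=creds)
# events = service.activities().list(**build_activities_query()).execute()

print(build_activities_query())
```

Because the new log consolidates group and membership changes across all products and APIs, a single query like this replaces separate reads of the groups and admin logs.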
Changes in GCP Cloud Audit Logging
Google Groups are the recommended way to grant access to GCP resources when using IAM policies. GCP customers have told us that having group audit logs available in Google Cloud Audit Logs would help streamline security and access monitoring. With that in mind, we’re adding Google Groups information to Cloud Audit Logs (CAL) in Stackdriver. See our Cloud Blog post for more details on how this can help GCP customers.
The Dev channel has been updated to 76.0.3789.0 (Platform version: 12200.0.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements.
If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Cindy Bayless
Google Chrome
Google Tag Manager and Tag Manager 360 help you more easily and safely deploy tags for all your marketing and measurement tools. Security and collaboration features give IT teams more control over the tagging process, while features like auto-event triggers and built-in templates help marketers get the data they need without having to deal with code.
Today, we’re introducing Custom Templates—a new set of features in Tag Manager and Tag Manager 360 to give you more transparency and control over the tags on your site.
With Custom Templates, you can use a built-in Template Editor to design tag and variable templates that can be used throughout your container.
This means that less technical users can manage instances of your custom tags just like the built-in tags, without messing with code. (Custom Templates will show up alongside the built-in templates when you go to add a new tag or variable.) And, since you can write your template once and reuse it, less code will need to be loaded on your site.
When you use these APIs, associated template permissions will automatically be surfaced and require that you declare how you’re using them (e.g. where external scripts can be loaded from, which cookies can be accessed, where data can be sent, etc.):
The behavior of your templates is tightly controlled by these permissions. Other users will be able to see exactly what your custom tags and variables are permitted to do. And, developers can write on-page policies to govern their behavior.
Starting today, you’ll see a new Templates section in the left sidebar of your containers. Whether you’re a marketer wanting to do more in Tag Manager without code or a developer wanting more control over third-party tags on your site, Custom Templates will improve your tagging capabilities.