Tools to help you vote in the EU elections

You probably turn to the web to get information about an election before casting your vote—and you want to get to the important stuff quickly, like learning more about your candidates and understanding how to cast your ballot. To help you find the information you need about the European Parliamentary elections, we’ve introduced a set of useful features across Search in the European Union.  

Helping EU citizens find election information in Search

When you search for instructions on how to vote in your country, you now see those details right on the results page. We source this data directly from the European Parliament to ensure you get trusted information.

Example of voting requirements that appear in Search

New ways for candidates and parties to reach voters

Supporting the electoral process also means helping voters learn more about their choices in the elections by providing accurate information about candidates, political parties, and their key priorities. The German Press Agency (dpa) provides us with information from electoral commissions in each EU country on candidates and parties running in the elections. This information appears within Knowledge Panels—dedicated spaces with key information about those parties and politicians when you search for their names.

Candidates who claim their Knowledge Panels can submit a brief statement outlining their electoral platform, their top three policy priorities, and links to relevant social media profiles. All of this is visible right inside the Knowledge Panel, in the candidate’s local language. Political parties running in the EU elections can also claim ownership of their panels and use Posts on Google to provide updates in the form of videos, text, or event listings, again available right on Search.

Bringing more transparency to election advertising online

To help people better understand the election ads they see online, earlier this year we outlined a new process to verify advertisers for the EU Parliamentary elections. These verified election ads also incorporate a clear “paid for by” disclosure. We recently launched our EU political advertising transparency report, which includes a library of election ads that appear across Google, YouTube, and partner properties. We’ve made this data downloadable, so researchers and journalists can easily use and analyze the content.

With these tools, we hope it will be easier to get the information you need to vote in the EU elections.

Update all linked content with one click in Docs and Slides

What’s changing 

We’re adding a new “Linked objects” sidebar where users can see all linked content in their documents, such as embedded charts, tables, slides, and drawings.

Who’s impacted 

End users

Why you’d use it 

The Linked objects sidebar gives users quick access to all linked content in a document, so they can see whether anything is outdated and update everything with a single click.


How to get started 

Admins: No action required.
End users: To update the data in multiple charts or tables:

  • In Docs or Slides, click Tools > Linked objects at the top.
  • A sidebar will open on the right; at the bottom, click Update all.
    • Note: Click Update next to specific objects to update them individually. 
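
If you’d rather refresh linked charts programmatically (for example, as part of a reporting workflow), the Slides API offers a refreshSheetsChart request that covers linked Sheets charts in a presentation. Here’s a minimal sketch using the Python client; the credentials file and presentation ID are placeholders, and this doesn’t cover linked content in Docs:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/presentations"]
PRESENTATION_ID = "your-presentation-id"  # placeholder

# Placeholder credentials file; any OAuth flow with the Slides scope works too.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)

slides = build("slides", "v1", credentials=creds)
presentation = slides.presentations().get(presentationId=PRESENTATION_ID).execute()

# Queue a refresh request for every linked Sheets chart in the deck.
requests = []
for slide in presentation.get("slides", []):
    for element in slide.get("pageElements", []):
        if "sheetsChart" in element:
            requests.append(
                {"refreshSheetsChart": {"objectId": element["objectId"]}}
            )

if requests:
    slides.presentations().batchUpdate(
        presentationId=PRESENTATION_ID, body={"requests": requests}
    ).execute()
```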

Additional details 

If you don’t see Update or Update all, your charts, tables, or slides may not be linked. To learn how to link charts, tables, or slides, see this article in our Help Center.

Helpful links 

Update charts, tables, slides or drawings in a document or presentation. 
Link a chart, table, or slide to Google Docs or Slides. 

Availability 

Rollout details 

  • Rapid Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on May 20, 2019. 
  • Scheduled Release domains: Extended rollout (potentially longer than 15 days for feature visibility) starting on June 17, 2019. 

G Suite editions

  • Available to all G Suite editions. 

On/off by default? 

  • This feature will be ON by default.

Stay up to date with G Suite launches

How artists use AI and AR: collaborations with Google Arts & Culture

For centuries, creative people have turned tools into art, or come up with inventions to change how we think about the world around us. Today you can explore the intersection of art and technology through two new experiments, created by artists in collaboration with the Google Arts & Culture Lab and recently announced at Google I/O 2019.


Created by artists Molmol Kuo & Zach Lieberman, Weird Cuts lets you make collages using augmented reality. You can select one of the cutouts shown in the camera screen to take a photo in a particular shape. The resulting cut-out can then be copy-pasted into the space around you, as seen through your camera’s eye. Download the app, available on iOS and Android, at g.co/weirdcuts.

Weird cuts in action 

Want to design your very own artwork with AI? Artist duo Pinar & Viola and Google engineer Alexander Mordvintsev—best known for his work on DeepDream—used machine learning to create a tool to do so. To use Infinite Patterns, upload an image and a DeepDream algorithm will transform and morph it into a unique pattern. For Pinar & Viola it is the perfect way to find new design inspirations for fashion by challenging one’s perception of shape, color and reality.
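
Infinite Patterns runs as a hosted tool, but the underlying DeepDream idea is simple to sketch: run gradient ascent on the pixels of an input image so that a chosen layer’s activations get amplified. Below is a minimal illustration of that general technique in PyTorch, using a standard pretrained GoogLeNet and an arbitrary layer choice; it is not the Infinite Patterns implementation, and real DeepDream pipelines add ImageNet normalization, jitter, and multi-scale “octaves.”

```python
import torch
import torchvision

# Standard pretrained GoogLeNet; the layer chosen below is arbitrary.
model = torchvision.models.googlenet(weights="DEFAULT").eval()

activations = {}

def save_activation(module, inputs, output):
    activations["feat"] = output

model.inception4c.register_forward_hook(save_activation)

# Start from random noise (or load and normalize a real photo instead).
img = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    model(img)
    # Gradient ascent: maximize the mean squared activation of the chosen layer.
    loss = -activations["feat"].pow(2).mean()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        img.clamp_(0.0, 1.0)  # keep pixels in a displayable range
```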

Infinite Patterns

These experiments were created in the Google Arts & Culture Lab, where we invite artists and coders to explore how technology can inspire artistic creativity. Collaborations at the Lab gave birth to Cardboard, the affordable VR headset, and Art Selfie, which has matched millions of selfies with works of art around the world.


To continue to encourage this emerging field of art with machine intelligence, we’re announcing the Artists + Machine Intelligence Grants for contemporary artists exploring creative applications of machine learning. This program will offer artists engineering mentorship, access to core Google research, and project funding.

Machine learning and artificial intelligence are great tools for artists, and there’s so much more to learn. If you’re curious about their origins and future, dive into the online exhibition “AI: More than Human” by the Barbican Centre, in which some of the world’s leading experts, artists and innovators explore the evolving relationship between humans and technology.


You can try our new experiments as well as the digital exhibition on the Google Arts & Culture app for iOS and Android.

Use desk phones with Google Voice

What’s changing 

It’s now easier to set up and manage desk phones with Google Voice. Specifically, you can now use the Admin console to:

  • See all desk phones in your organization, including the model, phone status, assigned user, and more. 
  • Provision a Polycom VVX x50 OBi Edition device to a specific user in just a few clicks. When you provision a phone, the user’s number will be assigned to the phone after an automatic update. 
Use our Help Center to find out more and watch a brief video about how to set up desk phones with Google Voice.

Who’s impacted 

Admins and end users

Why you’d use it 

While Google Voice gives you the flexibility to use your work phone number on any device, there may be times when a desk phone is preferred or helps ease the transition from a legacy telephony system to Google Voice.

How to get started 


  • Admins: Use our Help Center to see how to provision a desk phone for Voice. 
  • End users: Once a desk phone has been set up for you by an admin, see how to use a desk phone with Voice.  

Helpful links 




Availability 

Rollout details 


Google Voice subscriptions 

  • Available to Google Voice Standard and Google Voice Premier subscriptions. 
  • Not available to Google Voice Starter subscriptions. 

On/off by default? 

  • This feature will be OFF by default.

Stay up to date with G Suite launches

Google Summer of Code 2019 (Statistics Part 1)

Since 2005, Google Summer of Code (GSoC) has been bringing new developers into the open source community every year. This year, we accepted 1,276 students from 63 countries into the 2019 GSoC program to work with 201 open source organizations over the summer.

Students are currently wrapping up the Community Bonding phase, where they become familiar with the open source projects they will be working with by learning the codebase and the community’s best practices, and by integrating into the community. Students will start their 12-week coding projects on May 29th.

Each year we like to share program statistics about the GSoC program and the accepted students and mentors involved in the program.

Accepted Students

  • 89.2% are participating in their first GSoC
  • 75% are first-time applicants

Degrees

  • 77.5% are undergraduates, 16.6% are masters students, and 5.9% are in PhD programs
  • 72.8% are Computer Science majors, 3.5% are Mathematics majors, 16.8% are other Engineering majors (Electrical, Mechanical, Aerospace, etc.)
  • Students are in a variety of majors including Atmospheric Science, Neuroscience, Economics, Linguistics, Geology, and Pharmacy.

Proposals

We saw a record number of proposals this year: 5,606 students from 103 countries submitted 7,555 proposals.
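
Those figures imply an acceptance rate of roughly 23% and about 1.35 proposals per applicant:

```python
accepted, applicants, proposals = 1_276, 5_606, 7_555

print(f"Acceptance rate: {accepted / applicants:.1%}")           # ~22.8%
print(f"Proposals per applicant: {proposals / applicants:.2f}")  # ~1.35
```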

In our next GSoC statistics post we will delve deeper into the schools, gender breakdown, mentors, and registration numbers for the 2019 program.

By Stephanie Taylor, Google Open Source

A few new features to try on your next video call with Google Duo

Video calling on Duo helps you savor the moments with people who matter to you, and today we have a couple of updates that help you connect with loved ones and personalize your calls and messages.


Video call with the whole family


No need to play telephone: now up to eight people can catch up with group calling on Duo. Group calling is now available globally on both iOS and Android, and like all Duo calls and video messages, group calls are encrypted end-to-end so your conversations stay private.
Group calling with Google Duo

Data Saving mode


Data can be costly, so in select regions, including Indonesia, India, and Brazil, you can limit data usage on mobile networks and Wi-Fi on Android. If you turn on Data Saving mode in Settings, both you and the person you’re calling will save on data usage in video calls. Data Saving mode will be rolling out to more markets in the coming months.

Personalize video messages


Video messages let you record a quick hello when you don’t have time to call or when the person you’re calling can’t pick up. Now on Android and coming soon to iOS, you can personalize video messages by adding text and emojis, or even drawing on your message using brushes.


Draw and write on video messages with Google Duo

Ok, no more stalling. Time to pick up the phone to leave Mom a video message!

Moving Camera, Moving People: A Deep Learning Approach to Depth Prediction



The human visual system has a remarkable ability to make sense of our 3D world from its 2D projection. Even in complex environments with multiple moving objects, people are able to maintain a feasible interpretation of the objects’ geometry and depth ordering. The field of computer vision has long studied how to achieve similar capabilities by computationally reconstructing a scene’s geometry from 2D image data, but robust reconstruction remains difficult in many cases.

A particularly challenging case occurs when both the camera and the objects in the scene are freely moving. This confuses traditional 3D reconstruction algorithms that are based on triangulation, which assumes that the same object can be observed from at least two different viewpoints, at the same time. Satisfying this assumption requires either a multi-camera array (like Google’s Jump), or a scene that remains stationary as the single camera moves through it. As a result, most existing methods either filter out moving objects (assigning them “zero” depth values), or ignore them (resulting in incorrect depth values).
Left: The traditional stereo setup assumes that at least two viewpoints capture the scene at the same time. Right: We consider the setup where both camera and subject are moving.
In “Learning the Depths of Moving People by Watching Frozen People”, we tackle this fundamental challenge by applying a deep learning-based approach that can generate depth maps from an ordinary video, where both the camera and subjects are freely moving. The model avoids direct 3D triangulation by learning priors on human pose and shape from data. While there is a recent surge in using machine learning for depth prediction, this work is the first to tailor a learning-based approach to the case of simultaneous camera and human motion. In this work, we focus specifically on humans because they are an interesting target for augmented reality and 3D video effects.
Our model predicts the depth map (right; brighter=closer to the camera) from a regular video (left), where both the people in the scene and the camera are freely moving.
Sourcing the Training Data
We train our depth-prediction model in a supervised manner, which requires videos of natural scenes, captured by moving cameras, along with accurate depth maps. The key question is where to get such data. Generating data synthetically requires realistic modeling and rendering of a wide range of scenes and natural human actions, which is challenging. Further, a model trained on such data may have difficulty generalizing to real scenes. Another approach might be to record real scenes with an RGBD sensor (e.g., Microsoft’s Kinect), but depth sensors are typically limited to indoor environments and have their own set of 3D reconstruction issues.

Instead, we make use of an existing source of data for supervision: YouTube videos in which people imitate mannequins by freezing in a wide variety of natural poses, while a hand-held camera tours the scene. Because the entire scene is stationary (only the camera is moving), triangulation-based methods such as multi-view stereo (MVS) work, and we can get accurate depth maps for the entire scene, including the people in it. We gathered approximately 2000 such videos, spanning a wide range of realistic scenes with people naturally posing in different group configurations.
Videos of people imitating mannequins while a camera tours the scene, which we used for training. We use traditional MVS algorithms to estimate depth, which serves as supervision during training of our depth-prediction model.
Inferring the Depth of Moving People
The Mannequin Challenge videos provide depth supervision for moving camera and “frozen” people, but our goal is to handle videos with a moving camera and moving people. We need to structure the input to the network in order to bridge that gap.

A possible approach is to infer depth separately for each frame of the video (i.e., the input to the model is just a single frame). While such a model already improves over state-of-the-art single image methods for depth prediction, we can improve the results further by considering information from multiple frames. For example, motion parallax, i.e., the relative apparent motion of static objects between two different viewpoints, provides strong depth cues. To benefit from such information, we compute the 2D optical flow between each input frame and another frame in the video, which represents the pixel displacement between the two frames. This flow field depends on both the scene’s depth and the relative position of the camera. However, because the camera positions are known, we can remove their dependency from the flow field, which results in an initial depth map. This initial depth is valid only for static scene regions. To handle moving people at test time, we apply a human-segmentation network to mask out human regions in the initial depth map. The full input to our network then includes: the RGB image, the human mask, and the masked depth map from parallax.
Depth prediction network: The input to the model includes an RGB image (Frame t), a mask of the human region, and an initial depth for the non-human regions, computed from motion parallax (optical flow) between the input frame and another frame in the video. The model outputs a full depth map for Frame t. Supervision for training is provided by the depth map, computed by MVS.
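To make the input structure concrete, here is a rough sketch of how those channels might be stacked for one frame; the array shapes, function name, and exact channel ordering are illustrative assumptions, not the paper’s actual code:

```python
import numpy as np

def build_model_input(rgb, parallax_depth, human_mask):
    """Assemble the per-frame network input described above (illustrative sketch).

    rgb:            (H, W, 3) reference frame, float in [0, 1]
    parallax_depth: (H, W) initial depth from motion parallax; only valid for
                    static regions of the scene
    human_mask:     (H, W) binary mask, 1 where a person is detected
    """
    # Depth from parallax is unreliable on (potentially moving) people,
    # so mask it out there; the network learns to inpaint those regions.
    masked_depth = parallax_depth * (1.0 - human_mask)

    # Channels: RGB + human mask + masked parallax depth -> (H, W, 5)
    return np.concatenate(
        [rgb, human_mask[..., None], masked_depth[..., None]], axis=-1
    )
```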
The network’s job is to “inpaint” the depth values for the regions with people, and refine the depth elsewhere. Intuitively, because humans have consistent shape and physical dimensions, the network can internally learn such priors by observing many training examples. Once trained, our model can handle natural videos with arbitrary camera and human motion.
Below are some examples of our depth-prediction model results based on videos, with comparison to recent state-of-the-art learning-based methods.
Comparison of depth prediction models applied to a video clip with moving cameras and people. Top: Learning-based monocular depth prediction methods (DORN; Chen et al.). Bottom: Learning-based stereo method (DeMoN), and our result.
3D Video Effects Using Our Depth Maps
Our predicted depth maps can be used to produce a range of 3D-aware video effects. One such effect is synthetic defocus. Below is an example, produced from an ordinary video using our depth map.
Bokeh video effect produced using our estimated depth maps. Video courtesy of Wind Walk Travel Videos.
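As a toy illustration of a depth-dependent blur (a simplified stand-in for the synthetic defocus renderer used above), one can blend a sharp and a blurred copy of a frame, weighted by each pixel’s distance from a chosen focal depth:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_defocus(image, depth, focal_depth, max_sigma=8.0):
    """Blend sharp and blurred copies of `image` based on |depth - focal_depth|.

    image: (H, W, 3) float array; depth: (H, W); focal_depth: depth kept in focus.
    """
    blurred = gaussian_filter(image, sigma=(max_sigma, max_sigma, 0))
    # Blur weight grows from 0 at the focal plane to 1 at the largest depth offset.
    w = np.abs(depth - focal_depth)
    w = (w / (w.max() + 1e-6))[..., None]
    return (1.0 - w) * image + w * blurred
```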
Other possible applications for our depth maps include generating a stereo video from a monocular one, and inserting synthetic CG objects into the scene. Depth maps also provide the ability to fill in holes and disoccluded regions with the content exposed in other frames of the video. In the following example, we have synthetically wiggled the camera at several frames and filled in the regions behind the actor with pixels from other frames of the video.
Acknowledgements
The research described in this post was done by Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu and Bill Freeman. We would like to thank Miki Rubinstein for his valuable feedback.

Source: Google AI Blog


SSO + network mask domains can now force Google password reset on next login

What’s changing 

We’re providing more control over user password policies for some customers using third-party identity providers (IdPs) via SAML. Previously, these customers could not enforce the “Require password change” setting for their users. Now, SSO customers who have a network mask defined can turn on this setting and force their users to change their Google password the next time they log in using their G Suite or Cloud Identity credentials.

Who’s impacted 

Admins only

Why you’d use it 

For many customers who use third-party IdPs via SAML, not enforcing “Require password change” is the desired behavior. Their users only need to know their credentials for their IdP, so forcing them to change their Google password is not meaningful.

However, some G Suite admins in domains with a third-party IdP use a network mask to allow some of their users to log in using their G Suite or Cloud Identity credentials. In such deployments, there may be users who sign in using their G Suite credentials. For these users, admins may want to generate a temporary password and then have the user change it on the next login. This update will help admins of domains that use SSO and a network mask to do this.

How to get started 


  • Admins: This update will only impact domains with a SAML IdP configured for SSO and a network mask. To check if you have a network mask, go to Admin console > Security > Network masks and see if there’s information defined. 




  • Admins at domains with a SAML IdP configured for SSO and a network mask can turn on the setting in the Admin console (“Require password change”) or via the Admin SDK (“Do Force password change on Next Login”). Once turned on, it will be enforced for that user’s next login. See the sample screenshot below; a sample API call is also sketched after this list. 




  • If your domain has SSO but does not have a network mask configured, then there will be no change. The required password change option will show as OFF and you won’t be able to turn it on. See the sample screenshot below. 
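
For reference, the Admin SDK Directory API exposes this setting as the changePasswordAtNextLogin field on the user resource. Here’s a minimal sketch with the Python client; the service account file and admin address are placeholders, and domain-wide delegation is assumed:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user"]

# Placeholder credentials: a service account with domain-wide delegation,
# impersonating a super admin in your domain.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

# Force the user to change their Google password at next login.
directory.users().update(
    userKey="user@example.com",
    body={"changePasswordAtNextLogin": True},
).execute()
```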


Helpful links 

Help Center: Set up single sign-on for managed Google Accounts using third-party Identity providers
G Suite Admin SDK documentation for updating user details 

Availability 

Rollout details 


G Suite editions 

  • Available to all G Suite editions 

On/off by default? 

  • The new setting is automatically available to SSO domains that have a network mask configured.

Stay up to date with G Suite launches

What’s for dinner? Order it with Google

French fries, lettuce wraps, massaman curry, chicken wings, cupcakes—I could go on. When I was pregnant with my son last year, my cravings were completely overpowering. Lucky for me, I didn’t have to jump into the car and go to my favorite restaurants to get my fill—food delivery services saved my bacon on more occasions than I’d be comfortable admitting to the world.

Ever since then, I’ve counted myself as one of the millions of people who regularly order food for home delivery. Starting today, we’re making it even easier to get food delivered to your doorstep.

Find food and order faster
Now you can use Google Search, Maps or the Assistant to order food from services like DoorDash, Postmates, Delivery.com, Slice, and ChowNow, with Zuppler and others coming soon. Look out for the “Order Online” button in Search and Maps when you search for a restaurant or type of cuisine. For participating restaurants, you can make your selections with just a few taps, view delivery or pickup times, and check out with Google Pay.  

Let the Google Assistant handle dinner
To use the Assistant on your phone to get your food fix, simply say, “Hey Google, order food from [restaurant].” You can also quickly reorder your go-to meal with some of our delivery partners by saying, “Hey Google, reorder food from [restaurant].” The Assistant pulls up your past orders, and in just a few seconds, you can select your usual dish.

Now's the perfect time to let Google help with your cravings. So, what are we ordering tonight?

Consolidated Google Groups audit logs now available in G Suite and GCP

What’s changing 

Consolidated Google Groups audit logs are now available in the G Suite AdminSDK Reports API and GCP Cloud Audit Logs. Specifically, you’ll notice:

  • Changes in the G Suite AdminSDK Reports API: We’re introducing a new consolidated log named groups_enterprise, which includes changes to groups and group memberships across all products and APIs. These were previously split across the groups and admin audit logs. 
  • Changes in GCP Cloud Audit Logging: We’re adding Google Groups information to Cloud Audit Logs (CAL) in Stackdriver. See our Cloud Blog post for more details on how this could help GCP customers. Note that this will not change the visibility of these logs in the G Suite Admin console; it just adds them to Cloud Audit Logs (CAL) in Stackdriver as well. 


Who’s impacted 

G Suite and GCP Admins only

Why you’d use it 

These changes will help improve the security and usability of Groups as an IAM tool by streamlining administration and improving transparency and access monitoring.

How to get started 


  • Admins: 
    • Changes in the G Suite AdminSDK Reports API: Get started with the AdminSDK Reports API
    • Changes in GCP Cloud Audit Logging: This is an opt-in feature that can be enabled at G Suite Admin console > Company profile > Legal & Compliance > Sharing options. 
  • End users: No action needed. 


Additional details 

Changes in the G Suite AdminSDK Reports API 
Changes to groups have historically been logged in either the groups or admin audit logs. Changes made in the Google Groups product are logged in the groups log, while changes made through admin tools like the Admin console, AdminSDK, and GCDS are logged in the admin log. As part of our efforts to streamline administration and increase transparency, we’re introducing a new consolidated log named groups_enterprise, which includes changes to groups and group memberships across all products and APIs. This new log is now available through the AdminSDK Reports API and will be available in the Admin console in the future.
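
For those who want to try the new log right away, here is a minimal sketch of querying it through the Reports API with the Python client; the credentials setup (a service account with domain-wide delegation and a placeholder admin address) is an assumption about your environment:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]

# Placeholder credentials: a service account with domain-wide delegation,
# impersonating an admin in your domain.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")

reports = build("admin", "reports_v1", credentials=creds)

# List recent events from the consolidated groups_enterprise log.
response = reports.activities().list(
    userKey="all", applicationName="groups_enterprise", maxResults=50
).execute()

for activity in response.get("items", []):
    for event in activity.get("events", []):
        print(activity["id"]["time"], event.get("name"))
```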

Changes in GCP Cloud Audit Logging 
Google Groups are the recommended way to grant access to GCP resources when using IAM policies. GCP customers have told us that having group audit logs available in Google Cloud Audit Logs would help streamline security and access monitoring. With that in mind, we’re adding Google Groups information to Cloud Audit Logs (CAL) in Stackdriver. See our Cloud Blog post for more details on how this can help GCP customers.

Helpful links 

Cloud Blog: Integrated Google Groups Audit Transparency from G Suite to GCP Cloud Audit Logs 
Get started with the G Suite AdminSDK Reports API 

Availability 

Rollout details 


G Suite editions 
  • Google Groups are available to all G Suite editions. 

On/off by default? 
  • G Suite AdminSDK Reports API for consolidated group events will be ON by default. 
  • GCP Cloud Audit Logging for groups will be OFF by default and can be enabled at the domain level.


Stay up to date with G Suite launches