Monthly Archives: October 2017

The meeting room, by G Suite

(Cross-posted from The Keyword)




With G Suite, we’re focused on building tools that help you bring great ideas to life. We know meetings are the main entry point for teams to share and shape ideas into action. That’s why we recently introduced Hangouts Meet, an evolution of Google Hangouts designed specifically for the workplace, and Jamboard, a way to bring creative brainstorming directly into meetings.

Combined with Calendar and Drive, these tools extend collaboration beyond four walls and transform how we work—so every team member has a voice, no matter their location.

But the transformative power of video meetings is wasted if it’s not affordable and accessible to all organizations. So today, we’re introducing Hangouts Meet hardware—a new way to bring high-quality video meetings to businesses of any size. We’re also announcing new software updates designed to make your meetings even more productive.

Introducing Hangouts Meet hardware 

Hangouts Meet hardware is a cost-effective way to bring high-quality video meetings to your business. The hardware kit consists of four components: a touchscreen controller, speakermic, 4K-sensor Ultra HD camera and ASUS Chromebox.

The new controller provides a modern, intuitive touchscreen interface that allows people to easily join scheduled events from Calendar or view meeting details with a single tap. You can pin and mute team members, as well as control the camera, making managing meetings easy. You can also add participants with the dial-a-phone feature and present from a laptop via HDMI. If you’re a G Suite Enterprise edition customer, you can record the meeting to Drive.



Designed by Google, the Hangouts Meet speakermic actively eliminates echo and background noise to provide crisp, clear audio. Up to five speakermics can be daisy-chained together with a single wire, providing coverage for larger rooms without tabletop clutter.

The 4K-sensor Ultra HD camera, with its 120° field of view, easily captures everyone at the table, even in small spaces that some cameras find challenging. Each camera component is fine-tuned to make meetings more personal and distraction-free. Built with machine learning, the camera can intelligently detect participants and automatically crop and zoom to frame them.

Powered by ChromeOS, the ASUS Chromebox makes deploying and managing Hangouts Meet hardware easier than ever. The Chromebox can automatically push updates to other components in the hardware kit, making it easier for large organizations to ensure security and reliability. Remote device monitoring and management make it easy for IT administrators to stay in control, too.

Bradley Rhodes, IT Analyst for End User Computing at Woolworths Ltd Australia, says: “We are very excited about the new Hangouts Meet hardware, particularly the easy-to-use touchscreen. The enhancements greatly improve the user experience and simplify our meeting rooms. We have also seen it create new ways for our team to collaborate, like via the touch-to-record functionality which allows absent participants to catch up more effectively.”

More features, better meetings

We’re also announcing updates to Meet based on valuable feedback. If you’re a G Suite Enterprise edition customer, you can:


  • Record meetings and save them to Drive: Can’t make the meeting? No problem. Record your meeting directly to Drive. Even without a Hangouts Meet hardware kit, Meet on web can save your team’s ideas with a couple of clicks.
  • Host meetings with up to 50 participants: Meet supports up to 50 participants in a meeting, especially useful for bringing global teams together from both inside and outside of your organization.
  • Dial in from around the globe: The dial-in feature in Meet is now available in more than a dozen markets. If you board a flight in one country and land in another, Meet will automatically update your meeting’s dial-in listing to a local phone number.





These new features are rolling out gradually. The hardware kit is priced at $1,999 and is available in select markets around the globe beginning today.

Whether you're collaborating in Jamboard, recording meetings and referencing discussions in Drive or scheduling your next team huddle in Calendar, Hangouts Meet hardware makes it even easier to bring the power of your favorite G Suite tools into team meetings. For more information, visit the G Suite website.

Additional information for G Suite admins

  • More details on recording meetings and saving them to Drive are available here
  • 50-person meeting support in Meet is coming soon. Specific timing and details to follow on the G Suite Updates blog.



Launch Details
Release track:

  • Record a meeting: Launching to Rapid Release, with Scheduled Release coming in 2 weeks
  • 50-person meeting support in Meet: Coming soon.
  • International dial-in: Available to Rapid and Scheduled Release


Editions:

  • Meeting features are available to the G Suite Enterprise edition only
  • Hangouts Meet hardware is available to all G Suite editions


    Impact:
    All end users

    Action:
    Change management suggested/FYI

    More Information
    Manage Hangouts Meet for your G Suite team
    Record a meeting


    Record a Hangouts Meet meeting and save it to Google Drive

    Whether for training sessions, important announcements, or syncing with your team, meetings serve many purposes. Sometimes not every teammate can attend, or there is a need to share or reference notes from meetings after they have ended. To simplify this process, Hangouts Meet video meetings for the G Suite Enterprise edition can now be recorded and saved to the cloud, making them easy to share, view, and even play in sped-up mode.

    Any participant in the same domain as the organizer can start and stop a recording from web or Hangouts Meet hardware (and Chromebox for Meetings), and all participants are notified that the meeting is being recorded.



    Recordings are saved to a “Meet Recordings” folder in the Drive of the meeting owner and the recording is automatically attached to the Calendar event and shared with all invited guests in the same domain.
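If you want to audit these recordings programmatically, the same “Meet Recordings” folder can be queried with the Google Drive API. Below is a minimal sketch using google-api-python-client; the token.json credentials file and its scope are assumptions about your own OAuth setup, not part of the Meet feature itself.

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Assumption: previously authorized user credentials stored in token.json.
creds = Credentials.from_authorized_user_file(
    "token.json", scopes=["https://www.googleapis.com/auth/drive.readonly"])
drive = build("drive", "v3", credentials=creds)

# Find the "Meet Recordings" folder that Meet creates in the owner's Drive.
folders = drive.files().list(
    q="name='Meet Recordings' and mimeType='application/vnd.google-apps.folder'",
    fields="files(id, name)").execute().get("files", [])

if folders:
    # List the recordings saved inside that folder.
    recordings = drive.files().list(
        q=f"'{folders[0]['id']}' in parents",
        fields="files(id, name, createdTime)").execute().get("files", [])
    for rec in recordings:
        print(rec["createdTime"], rec["name"])
```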

    G Suite Enterprise edition admins can control whose meetings can be recorded at the organizational unit (OU) level. Within the Admin console, navigate to Apps > G Suite > Settings for Google Hangouts and select “Meet Settings.” Please note, this setting is on by default for all OUs. The setting is disabled for OUs that don’t have Drive enabled.

    Launch Details
    Release track:
    Launching to Rapid Release, with Scheduled Release coming in 2 weeks

    Editions:
    Available to G Suite Enterprise edition only

    Rollout pace:
    Full rollout (1–3 days for feature visibility)

    Impact:
    All end users

    Action:
    Change management suggested/FYI

    More Information
    Help Center


    The Keyword
    The meeting room, by G Suite


    Google Cloud Dedicated Interconnect gets global routing, more locations, and is GA



    We have major updates to Dedicated Interconnect, which helps enable fast private connections to Google Cloud Platform (GCP) from numerous facilities across the globe, so you can extend your on-premises network to your GCP Virtual Private Cloud (VPC) network. With faster private connections offered by Dedicated Interconnect, you can build applications that span on-premises infrastructure and GCP without compromising privacy or performance.

    Dedicated Interconnect is now generally available (GA), ready for production-grade workloads, and covered by a service level agreement. Dedicated Interconnect can be configured to offer a 99.9% or a 99.99% uptime SLA. Please see the Dedicated Interconnect documentation for details on how to achieve these SLAs.

    Going global with the help of Cloud Router


    Dedicated Interconnect now supports global routing for Cloud Router, a new feature that allows subnets in GCP to be accessible from any on-premises network through the Google network. This feature adds a new flag in Cloud Router that allows the network to advertise all the subnets in a project. For example, a connection from your on-premises data center in Chicago to GCP’s Dedicated Interconnect location in Chicago now gives you access to all subnets running in all GCP regions around the globe, including those in the Americas, Asia and Europe. We believe this functionality is unique among leading cloud providers. This feature is generally available, and you can learn more about it in the Cloud Router documentation.
    Using Cloud Router Global Routing to connect on-premises workloads via "Customer Peering Router" with GCP workloads in regions anywhere in the world.
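If you manage this programmatically, note that in the current Compute Engine API this behavior is surfaced as the VPC network's dynamic routing mode. The sketch below, using google-api-python-client with application-default credentials, flips a network to global routing; the project and network names are placeholders.

```python
from googleapiclient.discovery import build

compute = build("compute", "v1")  # assumes application-default credentials

project, network = "my-project", "my-vpc"  # placeholder names

# Switch the network's dynamic routing mode to GLOBAL so Cloud Routers
# advertise subnets from every region over the interconnect attachment.
operation = compute.networks().patch(
    project=project,
    network=network,
    body={"routingConfig": {"routingMode": "GLOBAL"}},
).execute()
print(operation["status"])
```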

    Dedicated Interconnect is your new neighbor


    Dedicated Interconnect is also available from four new locations: Mumbai, Munich, Montreal and Atlanta. This means you can connect to Google’s network from almost anywhere in the world. For a full list of locations, visit the Dedicated Interconnect locations page. Please note, in the graphic below, many locations (blue dots) offer service from more than one facility.
    In addition to those four new Google locations, we’re also working with Equinix to offer Dedicated Interconnect access in multiple markets across the globe, ensuring that no matter where you are, there's a Dedicated Interconnect connection close to you.
    "By providing direct access to Google Cloud Dedicated Interconnect, we are helping enterprises leverage Google’s network  the largest in the world and accelerate their hybrid cloud strategies globally. Dedicated Interconnect offered in collaboration with Equinix enables customers to easily build the cloud of their choice with dedicated, low-latency connections and SLAs that enterprise customers have come to expect from hybrid cloud architectures." 
    Ryan Mallory, Vice President, Global Solutions Enablement, Equinix

    Here at Google Cloud, we’re really excited about Dedicated Interconnect, including the 99.99% uptime SLA, four new locations, and Cloud Router Global Routing. Dedicated Interconnect will make it easier for more businesses to connect to Google Cloud, and we can’t wait to see the next generation of enterprise workloads that Dedicated Interconnect makes possible.

    If you’d like to learn which connection option is right for you, more about pricing, and a whole lot more, please take a look at the Interconnect product page.

    Ten third-party applications added to the G Suite pre-integrated SSO apps catalog

    With single sign-on (SSO), users can access all of their enterprise cloud applications—including the Admin console for admins—after signing in just one time. Google supports the two most popular enterprise SSO standards, OpenID Connect and SAML, and our third-party apps catalog already includes more than 800 applications with pre-integrated SSO support.

    We’re now adding SAML integration for ten additional applications: Aha!, Atlassian Cloud, Datadog, Desk, GitHub Business, HackerOne, Mavenlink, Mixpanel, SpringerLink, and Springerlink Test.

    You can find our full list of pre-integrated applications, as well as instructions for installing them, in the Help Center.

    Note that apart from the pre-integrated SAML applications, G Suite also supports installing “Custom SAML Applications,” which means that admins can install any third-party application that supports SAML. The advantage of a pre-integrated app is that the installation is much easier. You can learn more about installing Custom SAML Applications in this Help Center article.
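To make the custom SAML flow concrete: when Google acts as the identity provider, it hands you standard SAML 2.0 IdP metadata containing the entity ID, SSO URL, and signing certificate your service provider needs. A small illustrative sketch for extracting those values (the metadata file name is a placeholder):

```python
import xml.etree.ElementTree as ET

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

# Placeholder file name for IdP metadata downloaded from the Admin console.
root = ET.parse("GoogleIDPMetadata.xml").getroot()

sso = root.find(".//md:IDPSSODescriptor/md:SingleSignOnService", NS)
cert = root.find(".//ds:X509Certificate", NS)

print("Entity ID:", root.get("entityID"))
print("SSO URL:", sso.get("Location"))
print("Signing cert (base64, truncated):", cert.text.strip()[:40], "...")
```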

    Launch Details
    Release track:
    Launching to both Rapid release and Scheduled release

    Editions:
    Available to all G Suite editions

    Rollout pace:
    Gradual rollout (potentially longer than 3 days for feature visibility)

    Impact:
    Admins only

    Action:
    Admin action suggested/FYI

    More Information
    Help Center: Using SAML to set up federated SSO


    Reach more customers with Local Services by Google

    When people need a plumber or a locksmith, they search online for a business nearby. With Local Services by Google, businesses like yours can show up at the top of Search, so that you can reach local clients right when they’re interested, and book more jobs.

    Today we’re announcing that Local Services, previously in a pilot as Home Services, is running in 17 cities across the U.S., and will be available in 30 major metro areas by the end of 2017.

    Local Services unit with results for the search query “house cleaning in Menlo Park”

    All Google Guaranteed businesses that appear in Local Services are background checked and display a badge of trust, which limits deceptive advertisers, elicits trust among users, and highlights quality businesses. Once on the platform, you can make a personalized profile page that displays your reviews, contact info, and unique aspects about your business like being eco-friendly or family-owned. Potential clients can view your profile, make a decision, and get in touch right away. You only pay for leads that are relevant to the services you offer, and it’s easy for you to turn your ads on and off so you get leads when you want them.

    How Local Services became a “game changer” for one small business

    Luis Gonzalez, owner of Roses Cleaning Corporation

    Luis Gonzalez started his cleaning company in 2010, after being laid off in the wake of the financial crisis. At first, Luis vacuumed the hallways in his apartment building for $100 off his rent.

    Realizing there was a demand for local cleaning services, Luis created a website and founded Roses House Cleaning Services, named after his wife.

    In the beginning, Luis used online directories to advertise locally, but a year ago he started using Local Services and saw an immediate jump in calls.

    While a $300 newspaper ad might yield three calls from potential customers, Luis says, “I get 3-5 calls every day through Local Services.” He calls the service a "game changer" that has sustained and grown his business: now he doesn't have to worry about getting enough jobs to fill his schedule each week.

    The Local Services mobile app is available on Android and iOS

    Booking appointments and tracking real results on the go

    Luis uses the Local Services app to manage leads from his phone throughout the day, making it easy to integrate into his existing workflow. He can answer questions right away, giving customers the individual attention that keeps them coming back. Luis can track the number of leads he's received and how they're converting to jobs, right from the app.

    He also uses the Local Services app to manage his budget and track calls on the go. Luis says, “If they’re not repeat customers, all the new calls are coming from Google.” When he’s too busy, it’s easy for Luis to turn the ads off so he only gets leads when he wants them.

    With the consistent flow of business and the help of Google, Roses Cleaning has grown. They’ve recently hired two new employees to keep up with the demand.

    Luis has hired two new employees to keep up with the calls from Local Services

    Small businesses all over the U.S. are using Local Services to bring in more calls from new customers who are actively looking to book service providers through the platform. Iftah Sagi, the owner of IVS Security in Atlanta says he gets about eight calls a day from customers that found his business through Local Services. Dan Travers, the owner of 1-800-ANYTYME Plumbing, Heating and Air, says his booking rates are up by almost 70% since joining Local Services. Both owners have also hired additional employees to keep up with the increase in call volume, just like Luis.

    We’re passionate about helping small businesses like Roses Cleaning, IVS Security, and 1-800-ANYTYME reach new customers directly and grow their business. You can sign up to be one of our service providers here.

    Closing the Simulation-to-Reality Gap for Deep Robotic Learning



    When it comes to basic sensorimotor skills like grasping, each of us can learn remarkably complex skills that far exceed the proficiency and robustness of even the most sophisticated robots. However, we also draw on a lifetime of experience, learning over the course of multiple years how to interact with the world around us. Requiring such a lifetime of experience for a learning-based robot system is quite burdensome: the robot would need to operate continuously, autonomously, and initially at a low level of proficiency before it could become useful. Fortunately, robots have a powerful tool at their disposal: simulation.

    Simulating many years of robotic interaction is quite feasible with modern parallel computing, physics simulation, and rendering technology. Moreover, the resulting data comes with automatically-generated annotations, which is particularly important for tasks where success is hard to infer automatically. The challenge with simulated training is that even the best available simulators do not perfectly capture reality. Models trained purely on synthetic data fail to generalize to the real world, as there is a discrepancy between simulated and real environments, in terms of both visual and physical properties. In fact, the more we increase the fidelity of our simulations, the more effort we have to expend in order to build them, both in terms of implementing complex physical phenomena and in terms of creating the content (e.g., objects, backgrounds) to populate these simulations. This difficulty is compounded by the fact that powerful optimization methods based on deep learning are exceptionally proficient at exploiting simulator flaws: the more powerful the machine learning algorithm, the more likely it is to discover how to "cheat" the simulator to succeed in ways that are infeasible in the real world. The question then becomes: how can a robot utilize simulation to enable it to perform useful tasks in the real world?

    The difficulty of transferring simulated experience into the real world is often called the "reality gap." The reality gap is a subtle but important discrepancy between reality and simulation that prevents simulated robotic experience from directly enabling effective real-world performance. Visual perception often constitutes the widest part of the reality gap: while simulated images continue to improve in fidelity, the peculiar and pathological regularities of synthetic pictures, and the wide, unpredictable diversity of real-world images, make bridging the reality gap particularly difficult when the robot must use vision to perceive the world, as is the case for example in many manipulation tasks. Recent advances in closing the reality gap with deep learning in computer vision for tasks such as object classification and pose estimation provide promising solutions. For example, Shrivastava et al. and Bousmalis et al. explored pixel-level domain adaptation. Ganin et al. and Bousmalis and Trigeorgis et al. focus on feature-level domain adaptation. These advances required a rethinking of the approaches used to solve the simulation-to-reality domain shift problem for robotic manipulation as well. Although a number of recent works have sought to address the reality gap in robotics, through techniques such as machine learning-based domain adaptation (Tzeng et al.) and randomization of simulated environments (Sadeghi and Levine), effective transfer in robotic manipulation has been limited to relatively simple tasks, such as grasping rectangular, brightly-colored objects (Tobin et al. and James et al.) and free-space motion (Christiano et al.). In this post, we describe how learning in simulation (in our case, PyBullet) combined with domain adaptation methods that deal with the simulation-to-reality domain shift can accelerate learning of robotic grasping in the real world. This approach can enable real robots to grasp a large variety of physical objects, unseen during training, with a high degree of proficiency.

    The performance effect of using 8 million simulated samples of procedural objects with no randomization and various amounts of real data.

    Before we consider introducing simulated experience, what does it take for our robots to learn to reliably grasp such not-before-seen objects with only real-world experience? In a previous post, we discussed how the Google Brain team and X’s robotics teams teach robots how to grasp a variety of ordinary objects by just using images from a single monocular camera. It takes tens to hundreds of thousands of grasp attempts, the equivalent of thousands of robot-hours of real-world experience. Although distributing the learning across multiple robots expedites this, the realities of real-world data collection, including maintenance and wear-and-tear, mean that these kinds of data collection efforts still take a significant amount of real time. As mentioned above, an appealing alternative is to use off-the-shelf simulators and learn basic sensorimotor skills like grasping in a virtual environment. Training a robot how to grasp in simulation can be parallelized easily over any number of machines, and can provide large amounts of experience in dramatically less time (e.g., hours rather than months) and at a fraction of the cost.
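To make the parallelization point concrete, here is a toy sketch of distributing simulated grasp-attempt episodes across processes in PyBullet, the simulator used in this work. The cube stand-in object, the fixed episode length, and the success check are placeholders, not the actual training setup.

```python
import multiprocessing as mp

import pybullet as p
import pybullet_data

def run_grasp_episodes(n_episodes):
    """Run a toy grasp-attempt loop in a headless physics simulation."""
    cid = p.connect(p.DIRECT)  # headless server, one per worker process
    p.setAdditionalSearchPath(pybullet_data.getDataPath())
    successes = 0
    for _ in range(n_episodes):
        p.resetSimulation(physicsClientId=cid)
        p.setGravity(0, 0, -9.8, physicsClientId=cid)
        p.loadURDF("plane.urdf", physicsClientId=cid)
        # A small cube stands in for a graspable object; a real setup would
        # load a robot arm URDF and execute a grasp policy here.
        obj = p.loadURDF("cube_small.urdf", [0, 0, 0.05], physicsClientId=cid)
        for _ in range(240):  # simulate one second at 240 Hz
            p.stepSimulation(physicsClientId=cid)
        # Placeholder success criterion: the object is still above the plane.
        pos, _ = p.getBasePositionAndOrientation(obj, physicsClientId=cid)
        successes += pos[2] > 0.0
    p.disconnect(cid)
    return successes

if __name__ == "__main__":
    # One independent simulator per worker; episodes scale with process count.
    with mp.Pool(4) as pool:
        print("successes per worker:", pool.map(run_grasp_episodes, [100] * 4))
```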


    If the goal is to bridge the reality gap for vision-based robotic manipulation, we must answer a few critical questions. First, how do we design simulation so that simulated experience appears realistic to a neural network? And second, how should we integrate simulated and real experience in a way that maximizes transfer to the real world? We studied these questions in the context of a particularly challenging and important robotic manipulation task: vision-based grasping of diverse objects. We extensively evaluated the effect of various simulation design decisions in combination with various techniques for integrating simulated and real experience for maximal performance.
    The setup we used for collecting the simulated and real-world datasets.

    Images used during training of simulated grasping experience with procedurally generated objects (left) and of real-world experience with a varied collection of everyday physical objects (right). In both cases, we see pairs of image inputs with and without the robot arm present.
    When it comes to simulation, there are a number of choices we have to make: the type of objects to use for simulated grasping, whether to use appearance and/or dynamics randomization, and whether to extract any additional information from the simulator that could aid adaptation to the real world. The type of objects we use in simulation is a particularly important choice. A question that comes naturally is: how realistic do the objects used in simulation need to be? Using randomly generated procedural objects is the most desirable choice, because these objects are generated effortlessly on demand and are easy to parameterize if we change the requirements of the task. However, they are not realistic, and one could imagine they might not be useful for transferring the experience of grasping them to the real world. Using realistic 3D object models from a publicly available model library, such as the widely used ShapeNet, is another choice, though it ties our findings to the characteristics of the specific models we use. In this work, we compared the effect of using procedurally generated objects against realistic objects from the ShapeNet model repository, and found that simply using random objects generated programmatically was not only sufficient for efficient experience transfer from simulation to reality, but also generalized better to the real world than the ShapeNet models did.
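As a minimal illustration of what "generated programmatically" can mean, the sketch below spawns box primitives with random size, color, and mass in PyBullet; the procedural objects used in this work are more elaborate compositions of primitives.

```python
import random

import pybullet as p

def spawn_random_box(cid):
    """Create one random box primitive; a stand-in for procedural objects."""
    half_extents = [random.uniform(0.01, 0.04) for _ in range(3)]
    col = p.createCollisionShape(p.GEOM_BOX, halfExtents=half_extents,
                                 physicsClientId=cid)
    vis = p.createVisualShape(p.GEOM_BOX, halfExtents=half_extents,
                              rgbaColor=[random.random() for _ in range(3)] + [1],
                              physicsClientId=cid)
    return p.createMultiBody(baseMass=random.uniform(0.05, 0.5),
                             baseCollisionShapeIndex=col,
                             baseVisualShapeIndex=vis,
                             basePosition=[random.uniform(-0.1, 0.1),
                                           random.uniform(-0.1, 0.1), 0.1],
                             physicsClientId=cid)

if __name__ == "__main__":
    cid = p.connect(p.DIRECT)
    print("spawned bodies:", [spawn_random_box(cid) for _ in range(10)])
    p.disconnect(cid)
```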

    Some of the procedurally-generated objects used in simulation.

    Some of the ShapeNet objects used in simulation.

    Some of the physical objects used to collect real grasping experience.
    Another decision about our simulated environment has to do with randomization. Simulation randomization has shown promise in providing generalization to real-world environments in previous work. We further evaluated it by separately measuring the effect of appearance randomization (randomly changing the textures of different visual components of the virtual environment) and dynamics randomization (randomly changing object mass and friction properties). For our task, visual randomization had a positive effect when we did not use domain adaptation methods to aid with generalization, and had no effect when we included domain adaptation. Using dynamics randomization did not show a significant improvement for this particular task; however, it is possible that dynamics randomization might be more relevant in other tasks. These results suggest that, although randomization can be an important part of simulation-to-real-world transfer, the inclusion of effective domain adaptation can have a substantially more pronounced impact for vision-based manipulation tasks.

    Appearance randomization in simulation.
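Both kinds of randomization can be sketched in a few lines of PyBullet; this simplification recolors links with flat colors and perturbs base-link dynamics, whereas the actual setup randomizes textures and per-object physical properties.

```python
import random

import pybullet as p

def randomize_appearance(body, cid):
    # Appearance randomization: give every link a random flat RGBA color.
    for link in range(-1, p.getNumJoints(body, physicsClientId=cid)):
        p.changeVisualShape(body, link,
                            rgbaColor=[random.random() for _ in range(3)] + [1],
                            physicsClientId=cid)

def randomize_dynamics(body, cid):
    # Dynamics randomization: perturb the mass and friction of the base link.
    p.changeDynamics(body, -1,
                     mass=random.uniform(0.05, 0.5),
                     lateralFriction=random.uniform(0.3, 1.0),
                     physicsClientId=cid)

if __name__ == "__main__":
    cid = p.connect(p.DIRECT)
    col = p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.02] * 3,
                                 physicsClientId=cid)
    vis = p.createVisualShape(p.GEOM_BOX, halfExtents=[0.02] * 3,
                              physicsClientId=cid)
    body = p.createMultiBody(baseMass=0.1, baseCollisionShapeIndex=col,
                             baseVisualShapeIndex=vis, physicsClientId=cid)
    randomize_appearance(body, cid)
    randomize_dynamics(body, cid)
    p.disconnect(cid)
```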
    Finally, the information we choose to extract and use for our domain adaptation methods has a significant impact on performance. In one of our proposed methods, we utilize the extracted semantic map of the simulated image, i.e., the description of each pixel in the simulated image, and use it to ground our proposed domain adaptation approach to produce semantically meaningful, realistic samples, as we discuss below.

    Our main proposed approach to integrating simulated and real experience, which we call GraspGAN, takes as input synthetic images generated by a simulator, along with their semantic maps, and produces adapted images that look similar to real-world ones. This is possible with adversarial training, a powerful idea proposed by Goodfellow et al. In our framework, a convolutional neural network, the generator, takes as input synthetic images and generates images that another neural network, the discriminator, cannot distinguish from actual real images. The generator and discriminator networks are trained simultaneously and improve together, resulting in a generator that can produce images that are both realistic and useful for learning a grasping model that will generalize to the real world. One way to make sure that these images are useful is to use the semantic maps of the synthetic images to ground the generator. By using the prediction of these masks as an auxiliary task, the generator is encouraged to produce meaningful adapted images that correspond to the original label attributed to the simulated experience. We train a deep vision-based grasping model with both visually-adapted simulated and real images, and attempt to account for the domain shift further by using a feature-level domain adaptation technique, which helps produce a domain-invariant model. Below, you can see the GraspGAN adapting simulated images into realistic ones, along with a semantic map it infers.
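For readers who want to see the adversarial objective in code, the sketch below implements minimal non-saturating GAN losses in JAX, with tiny fully connected networks standing in for GraspGAN's convolutional generator and discriminator; the auxiliary semantic-map prediction loss described above is omitted for brevity.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    """Tiny MLP; a stand-in for GraspGAN's convolutional networks."""
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

def init_mlp(key, sizes):
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, k = jax.random.split(key)
        params.append((0.1 * jax.random.normal(k, (d_in, d_out)),
                       jnp.zeros(d_out)))
    return params

def d_loss(d_params, g_params, real, z):
    # Discriminator: score real images high and generated images low.
    fake = mlp(g_params, z)
    return -(jnp.mean(jax.nn.log_sigmoid(mlp(d_params, real))) +
             jnp.mean(jax.nn.log_sigmoid(-mlp(d_params, fake))))

def g_loss(g_params, d_params, z):
    # Generator: non-saturating loss, i.e., try to fool the discriminator.
    return -jnp.mean(jax.nn.log_sigmoid(mlp(d_params, mlp(g_params, z))))

# Toy shapes: an 8-dim "synthetic input" mapped to a 16-dim "adapted image".
g_params = init_mlp(jax.random.PRNGKey(0), [8, 32, 16])
d_params = init_mlp(jax.random.PRNGKey(1), [16, 32, 1])
real = jax.random.normal(jax.random.PRNGKey(2), (64, 16))
z = jax.random.normal(jax.random.PRNGKey(3), (64, 8))

# Gradients for one simultaneous update of both players.
print(d_loss(d_params, g_params, real, z), g_loss(g_params, d_params, z))
d_grads = jax.grad(d_loss)(d_params, g_params, real, z)
g_grads = jax.grad(g_loss)(g_params, d_params, z)
```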


    By using synthetic data and domain adaptation we are able to reduce the number of real-world samples required to achieve a given level of performance by up to 50 times, using only randomly generated objects in simulation. This means that we have no prior information about the objects in the real world, other than pre-specified size limits for the graspable objects. We have shown that we are able to increase performance with various amounts of real-world data, and also that by using only unlabeled real-world data and our GraspGAN methodology, we obtain real-world grasping performance without any real-world labels that is similar to that achieved with hundreds of thousands of labeled real-world samples. This suggests that, instead of collecting labeled experience, it may be sufficient in the future to simply record raw unlabeled images, use them to train a GraspGAN model, and then learn the skills themselves in simulation.

    Although this work has not addressed all the issues around closing the reality gap, we believe that our results show that using simulation and domain adaptation to integrate simulated and real robotic experience is an attractive choice for training robots. Most importantly, we have extensively evaluated the performance gains for different available amounts of labeled real-world samples, and for the different design choices for both the simulator and the domain adaptation methods used. This evaluation can hopefully serve as a guide for practitioners to use for their own design decisions and for weighing the advantages and disadvantages of incorporating such an approach in their experimental design.

    This research was conducted by K. Bousmalis, A. Irpan, P. Wohlhart, Y. Bai, M. Kelcey, M. Kalakrishnan, L. Downs, J. Ibarz, P. Pastor, K. Konolige, S. Levine, and V. Vanhoucke, with special thanks to colleagues at Google Research and X who've contributed their expertise and time to this research. An early preprint is available on arXiv.

    The collection of procedurally-generated objects we used in simulation was made publicly available here by Laura Downs.

    Security and disinformation in the U.S. 2016 election

    We’ve seen many types of efforts to abuse Google’s services over the years. And, like other internet platforms, we have found some evidence of efforts to misuse our platforms during the 2016 U.S. election by actors linked to the Internet Research Agency in Russia. 

    Preventing the misuse of our platforms is something that we take very seriously; it’s a major focus for our teams. We’re committed to finding a way to stop this type of abuse, and to working closely with governments, law enforcement, other companies, and leading NGOs to promote electoral integrity and user security, and combat misinformation. 

    We have been conducting a thorough investigation related to the U.S. election across our products drawing on the work of our information security team, research into misinformation campaigns from our teams, and leads provided by other companies. Today, we are sharing results from that investigation. While we have found only limited activity on our services, we will continue to work to prevent all of it, because there is no amount of interference that is acceptable.

    We will be launching several new initiatives to provide more transparency and enhance security, which we also detail in these information sheets: what we found, steps against phishing and hacking, and our work going forward.

    Our work doesn’t stop here, and we’ll continue to investigate as new information comes to light. Improving transparency is a good start, but we must also address new and evolving threat vectors for misinformation and attacks on future elections. We will continue to do our best to help people find valuable and useful information, an essential foundation for an informed citizenry and a robust democratic process.


    GNSS Analysis Tools from Google

    Posted by Frank van Diggelen, Software Engineer

    Last year in Android Nougat, we introduced APIs for retrieving Global Navigation Satellite System (GNSS) Raw measurements from Android devices. This past week, we publicly released GNSS Analysis Tools to process and analyze these measurements.

    Android powers over 2 billion devices, and Android phones are made by many different manufacturers. The primary intent of these tools is to enable device manufacturers to see in detail how well the GNSS receivers are working in each particular device design, and thus improve the design and GNSS performance in their devices. However, with the tools publicly available, there is also significant value to the research and app developer community.

    How to use the tool

    The GNSS Analysis Tool is a desktop application that takes as input the raw GNSS measurements logged from your Android device.

    This desktop application provides interactive plots, organized into three columns showing the behavior of the RF, Clock, and Measurements. This data allows you to see the behavior of the GNSS receiver in great detail, including receiver clock offset and drift to the order of 1 nanosecond and 1 ppb, and measurement errors on a satellite-by-satellite basis. You can thus do sophisticated analysis at a level that, until now, was almost inaccessible to anyone but the chip manufacturers themselves.
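As one example of the analysis these measurements enable, a GPS pseudorange can be reconstructed from the raw GnssMeasurement fields. The sketch below follows the commonly documented reconstruction for GPS L1 signals; it is a simplification that ignores week rollovers mid-log and the different received-time references of other constellations.

```python
WEEK_NS = 604_800 * 1_000_000_000  # nanoseconds in one GPS week
C = 299_792_458.0                  # speed of light, m/s

def gps_pseudorange_m(time_nanos, full_bias_nanos, bias_nanos,
                      time_offset_nanos, received_sv_time_nanos):
    """Reconstruct a GPS L1 pseudorange (meters) from GnssMeasurement fields."""
    # Receiver timestamp expressed in the GPS time scale, in nanoseconds.
    t_rx_gps = time_nanos + time_offset_nanos - (full_bias_nanos + bias_nanos)
    # Reduce to time of week, since ReceivedSvTimeNanos for GPS is
    # referenced to the start of the current GPS week.
    t_rx_tow = t_rx_gps % WEEK_NS
    # Pseudorange = (receive time - transmit time) * speed of light.
    return (t_rx_tow - received_sv_time_nanos) * 1e-9 * C
```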

    The tools support multi-constellation (GPS, GLONASS, Galileo, BeiDou and QZSS) and multi-frequency analysis. The image below shows the satellite locations for L1, L5, E1 and E5 signals tracked by a dual-frequency chip.

    The tools provide an interactive control screen from which you can manipulate the plots, shown below. From this control screen, you can change the background color, enable the Menu Bars for printing or saving, and select specific satellites for the plots.

    Receiver test report

    The tools also provide automatic test reports of receivers. Click "Make Report" to automatically create the test report. The report evaluates the API implementation, Received Signal, Clock behavior, and Measurement accuracy. In each case it will report PASS or FAIL based on the performance against known good benchmarks. This test report is primarily meant for the device manufacturers to use as they iterate on the design and implementation of a new device. A sample report is shown below.

    Our goal with providing these Analysis Tools is to empower device manufacturers, researchers, and developers with data and knowledge to make Android even better for our customers. You can visit the GNSS Measurement site to learn more and download this application.