Monthly Archives: November 2017

Getting Started with the Poly API

Posted by Bruno Oliveira, Software Engineer

As developers, we all know that having the right assets is crucial to the success of a 3D application, especially with AR and VR apps. Since we launched Poly a few weeks ago, many developers have been downloading and using Poly models in their apps and games. To make this process easier and more powerful, today we launched the Poly API, which allows applications to dynamically search and download 3D assets at both edit and run time.

The API is REST-based, so it's inherently cross-platform. To help you make the API calls and convert the results into objects that you can display in your app, we provide several toolkits and samples for some common game engines and platforms. Even if your engine or platform isn't included in this list, remember that the API is based on HTTP, which means you can call it from virtually any device that's connected to the Internet.
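To make that concrete, here is a minimal sketch in Python of a listing request against the REST endpoint. It assumes you have created an API key in the Google Cloud Console; the endpoint and parameter names shown here (poly.googleapis.com/v1/assets, keywords, format, curated) follow the public REST documentation, but check the developer site for the authoritative reference.

# Minimal sketch: list Poly assets matching a keyword and print their
# download URLs. Requires an API key; endpoint and parameter names are
# taken from the public REST documentation.
import requests

API_KEY = "YOUR_API_KEY"  # created in the Google Cloud Console
BASE_URL = "https://poly.googleapis.com/v1/assets"

params = {
    "keywords": "piano",   # free-text search
    "format": "OBJ",       # only assets that offer an OBJ download
    "curated": "true",     # restrict to curated assets
    "pageSize": 5,
    "key": API_KEY,
}

response = requests.get(BASE_URL, params=params)
response.raise_for_status()

for asset in response.json().get("assets", []):
    print(asset["displayName"], "by", asset["authorName"])
    # Each asset lists one or more formats; each format has a root file URL.
    for fmt in asset.get("formats", []):
        print("  ", fmt["formatType"], fmt["root"]["url"])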

Here are some of the things the API allows you to do:

  • List assets, with many possible filters:
    • keyword
    • category ("Animals", "Technology", "Transportation", etc.)
    • asset type (Blocks, Tilt Brush, etc.)
    • complexity (low, medium, or high)
    • curated (only curated assets or all assets)
  • Get a particular asset by ID
  • Get the user's own assets
  • Get the user's liked assets
  • Download assets. Formats vary by asset type (OBJ, GLTF1, GLTF2).
  • Download material files and textures for assets.
  • Get asset metadata (author, title, description, license, creation time, etc.)
  • Fetch thumbnails for assets

Poly Toolkit for Unity Developers

If you are using Unity, we offer Poly Toolkit for Unity, a plugin that includes all the necessary functionality to automatically wrap the API calls, download and convert assets, and expose them through a simple C# API. For example, you can fetch and import an asset into your scene at runtime with a single line of code:

PolyApi.GetAsset(ASSET_ID,
    result => { PolyApi.Import(result.Value, PolyImportOptions.Default()); });

Poly Toolkit can also handle authentication for you, so that you can list the signed-in user's own private assets, or the assets the user has liked on the Poly website.

Poly Toolkit for Unity also comes with an editor window, where you can search for and import assets from Poly into your Unity scene directly from the editor.

Poly Toolkit for Unreal Developers

If you are using Unreal, we also offer Poly Toolkit for Unreal, which wraps the API and performs automatic download and conversion of OBJ and Blocks models from Poly. It allows you to query for assets, filter results, and download and import assets as ready-to-use Unreal actors for your game.

Credit: Piano by Bruno Oliveira

How to use the Poly API in Android, web, or iOS apps

Not using a game engine? No problem! If you are developing for Android, check out our Android sample code, which includes a basic sample with no external dependencies, and also a sample that shows how to use the Poly API in conjunction with ARCore. The samples include:

  • Asynchronous HTTP connections to the Poly API.
  • Asynchronous downloading of asset files.
  • Conversion of OBJ and MTL files to OpenGL-compatible VBOs and IBOs (a simplified parsing sketch follows this list).
  • Examples of basic shaders.
  • Integration with ARCore (dynamically downloads an object from Poly and lets the user place it in the scene).
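As a rough illustration of that conversion step (the real sample is written in Java for Android and handles much more of the format), here is a minimal Python sketch that flattens an OBJ file's vertex positions and triangle faces into the kind of arrays you would upload as a VBO and an IBO:

# Minimal sketch: flatten an OBJ file's positions and faces into
# VBO/IBO-style arrays. Ignores normals, texture coordinates and MTL
# materials, and assumes faces can be fan-triangulated.
def parse_obj(path):
    vertices = []   # flat [x, y, z, x, y, z, ...] for the vertex buffer
    indices = []    # zero-based triangle indices for the index buffer
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                vertices.extend(float(v) for v in parts[1:4])
            elif parts[0] == "f":
                # OBJ indices are 1-based and may look like "7/1/3".
                face = [int(p.split("/")[0]) - 1 for p in parts[1:]]
                # Fan-triangulate polygons with more than three vertices.
                for i in range(1, len(face) - 1):
                    indices.extend([face[0], face[i], face[i + 1]])
    return vertices, indices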

Credit: Cactus wren by Poly by Google

If you are an iOS developer, we have two samples for you as well: one using SceneKit and one using ARKit, showing how to build an iOS app that downloads and imports models from Poly. This includes all the logic necessary to open an HTTP connection, make the API requests, parse the results, build the 3D objects from the data, and place them in the scene.

For web developers, we also offer a complete WebGL sample using Three.js, showing how to get and display a particular asset, or perform searches. There is also a sample showing how to import and display Tilt Brush sketches.

Credit: Forest by Alex "SAFFY" Safayan

No matter what engine or platform you are using, we hope that the Poly API will help bring high quality assets to your app and help you increase engagement with your users! You can find more information about the Poly API and our toolkits and samples on our developers site.

Understanding Bias in Peer Review



In the 1600s, a series of practices came into being, known collectively as the “scientific method.” These practices encoded verifiable experimentation as a path to establishing scientific fact. Scientific literature arose as a mechanism to validate and disseminate findings, and standards of scientific peer review developed as a means to control the quality of entrants into this literature. Over the course of the development of peer review, one key structural question remains unresolved to the current day: should the reviewers of a piece of scientific work be made aware of the identity of the authors? Those in favor argue that such additional knowledge may allow the reviewer to set the work in perspective and evaluate it more completely. Those opposed argue instead that the reviewer may form an opinion based on past performance rather than the merit of the work at hand.

Existing academic literature on this subject describes specific forms of bias that may arise when reviewers are aware of the authors. In 1968, Merton proposed the Matthew effect, whereby credit goes to the best-established researchers. More recently, Knobloch-Westerwick et al. proposed a Matilda effect, whereby papers from male-first authors were considered to have greater scientific merit than those from female-first authors. But with the exception of one classic study performed by Rebecca Blank in 1991 at the American Economic Review, there have been few controlled experimental studies of such effects on reviews of academic papers.

Last year we had the opportunity to explore this question experimentally, resulting in “Reviewer bias in single- versus double-blind peer review,” a paper that just appeared in the Proceedings of the National Academy of Sciences. Working with Professor Min Zhang of Tsinghua University, we performed an experiment during the peer review process of the 10th ACM Web Search and Data Mining Conference (WSDM 2017) to compare the behavior of reviewers under single-blind and double-blind review. Our experiment ran as follows:
  1. We invited a number of experts to join the conference Program Committee (PC).
  2. We randomly split these PC members into a single-blind cadre and a double-blind cadre.
  3. We asked all PC members to “bid” for papers they were qualified to review, but only the single-blind cadre had access to the names and institutions of the paper authors.
  4. Based on the resulting bids, we then allocated two single-blind and two double-blind PC members to each paper.
  5. Each PC member read his or her assigned papers and entered reviews, again with only single-blind PC members able to see the authors and institutions.
At this point, we closed our experiment and performed the remainder of the conference reviewing process under the single-blind model. As a result, we were able to assess the difference in bidding and reviewing behavior of single-blind and double-blind PC members on the same papers. We discovered a number of surprises.

Our first finding shows that compared to their double-blind counterparts, single-blind PC members tend to enter higher scores for papers from top institutions (the finding holds for both universities and companies) and for papers written by well-known authors. This suggests that a paper authored by an up-and-coming researcher might be reviewed more negatively (by a single-blind PC member) than exactly the same paper written by an established star of the field.

Digging a little deeper, we show some additional findings related to the “bidding process,” in which PC members indicate which papers they would like to review. We found that single-blind PC members (a) bid for about 22% fewer papers than their double-blind counterparts, and (b) bid preferentially for papers from top schools and companies. Finding (a) is especially intriguing; with no author information, reviewers have less to go on, arguably making the job of weighing the merit of each paper more difficult. Yet the double-blind reviewers bid for more work, not less, than their single-blind counterparts. This suggests that double-blind reviewers become more engaged in the review process. Finding (b) is less surprising, but nonetheless enlightening: when author names and institutions are visible, this information is incorporated into the reviewers’ bids. All else being equal, the odds that a single-blind reviewer bids on a paper from a top institution are about 15 percent above parity.

We also studied whether the actual or perceived gender of authors influenced the behavior of single-blind versus double-blind reviewers. Here the results are a little more nuanced. Compared to double-blind reviewers, we saw about a 22% decrease in the odds that a single-blind reviewer would give a female-authored paper a favorable review, but due to the smaller count of female-authored papers this result was not statistically significant. In an extended version of our paper, we consider our study as well as a range of other studies in the literature and perform a “meta-analysis” of all these results. From this larger pool of observations, the combined results do show a significant finding for the gender effect.

To conclude, we see that the practice of double-blind reviewing yields a denser landscape of bids, which may result in a better allocation of papers to qualified reviewers. We also see that reviewers who see author and institution information tend to bid more for papers from top institutions, and are more likely to vote to accept papers from top institutions or famous authors than their double-blind counterparts. This offers some evidence to suggest that a particular piece of work might be accepted under single-blind review if the authors are famous or come from top institutions, but rejected otherwise. Of course, the situation remains complex: double-blind review imposes an administrative burden on conference organizers, reduces the opportunity to detect several varieties of conflict of interest, and may in some cases be difficult to implement due to the existence of pre-prints or long-running research agendas that are well-known to experts in the field. Nonetheless, we recommend that journal editors and conference chairs carefully consider the merits of double-blind review.

Please take a look at our full paper for more details of our study.

Machine learning gives environmentalists something to tweet about

Editor’s note: TensorFlow, our open source machine learning library, is just that—open to anyone. Companies, nonprofits, researchers and developers have used TensorFlow in some pretty cool ways, and we’re sharing those stories here on Keyword. Here’s one of them.


Victor Anton captured tens of thousands of birdsong recordings over a three-year period. But he had no way to figure out which birdsong belonged to which bird.

The recordings, taken at 50 locations around a bird sanctuary in New Zealand known as “Zealandia,” were part of an effort to better understand the movement and numbers of threatened species including the Hihi, Tīeke and Kākāriki. Because researchers didn’t have reliable information about where the birds were and how they moved about, it was difficult to make good decisions about where to target conservation efforts on the ground.

Endangered species include the Kākāriki, Hihi, and Tīeke.

That’s where the recordings come in. Yet the amount of audio data was overwhelming. So Victor—a Ph.D. student at Victoria University of Wellington, New Zealand—and his team turned to technology.

“We knew we had lots of incredibly valuable data tied up in the recordings, but we simply didn’t have the manpower or a viable solution that would help us unlock this,” Victor tells us. “So we turned to machine learning to help us.”

Some of the audio recorders set up at 50 sites around the sanctuary.

In one of the quirkier applications of machine learning, they trained a Google TensorFlow-based system to recognize specific bird calls and measure bird activity. The more audio it deciphered, the more it learned, and the more accurate it became.


It worked like this: the system took the stored audio, chopped it into minute-long segments, and converted each segment into a spectrogram. The spectrograms were then chopped into chunks, each spanning less than a second, and processed individually by a deep convolutional neural network. A recurrent neural network then tied the chunks together and produced a continual prediction of which of the three birds was present across the minute-long segment. These segments were compiled to create a fuller picture of the presence and movement of the birds.
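The post does not publish the team's actual architecture, but the pipeline it describes (per-chunk spectrogram features from a convolutional network, tied together over the minute by a recurrent network) could be sketched roughly as follows in TensorFlow. The chunk sizes, layer widths, and the three-species-plus-background output are illustrative assumptions, not the real model:

# Rough sketch of the described pipeline: a CNN extracts features from each
# sub-second spectrogram chunk, and a recurrent layer ties the chunks
# together to predict, per chunk, which species (or background noise) is
# present. Shapes and layer sizes are illustrative assumptions only.
import tensorflow as tf

CHUNKS_PER_MINUTE = 80               # sub-second slices per 1-minute segment
CHUNK_HEIGHT, CHUNK_WIDTH = 128, 64  # frequency bins x time frames per chunk
NUM_CLASSES = 4                      # Hihi, Tieke, Kakariki, or background

# CNN applied to a single spectrogram chunk.
chunk_cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu",
                           input_shape=(CHUNK_HEIGHT, CHUNK_WIDTH, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
])

# Apply the CNN to every chunk in the minute, then tie the chunks together
# with a recurrent layer and emit a per-chunk class prediction.
model = tf.keras.Sequential([
    tf.keras.layers.TimeDistributed(
        chunk_cnn,
        input_shape=(CHUNKS_PER_MINUTE, CHUNK_HEIGHT, CHUNK_WIDTH, 1)),
    tf.keras.layers.GRU(64, return_sequences=True),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")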

TensorFlow processed the spectrograms and learned to identify the calls of different species.

The team faced some unique challenges. They started with only a small quantity of labelled data; the software would often pick up other noises like construction, cars and even doorbells; and some of the bird species had a variety of birdsongs, or two would sing at the same time.

To overcome these hurdles, they tested, verified and retrained the system many times over. As a result, they have learned things that would have otherwise remained locked up in thousands of hours of data. While it’s still early days, already conservation groups are talking to Victor about how they can use these initial results to better target their efforts. Moreover, the team has seen enough encouraging signs that they believe that their tools can be applied to other conservation projects.

“We are only just beginning to understand the different ways we can put machine learning to work in helping us protect different fauna,” says Victor, “ultimately allowing us to solve other environmental challenges across the world.”

Preventing unauthorized inventory

Advertising should be free of invalid activity – including unauthorized, misrepresented, and fake ad inventory – which diverts revenue from legitimate publishers and tricks marketers into wasting their money. Earlier this year we worked with the IAB Tech Lab to create the ads.txt standard, a simple solution to help stop bad actors from selling unauthorized inventory across the industry. Since then, we’ve shared our plans to integrate the standard into our advertiser and publisher advertising platforms.

As of November 8th, Google’s advertising platforms filter all unauthorized ad inventory identified by published ads.txt files:
  • Marketers and agencies using DoubleClick Bid Manager and AdWords will not buy unauthorized impressions as identified by publishers’ ads.txt files.
  • DoubleClick Ad Exchange and AdSense publishers that use ads.txt are protected against unauthorized inventory being sold in our auctions.

Preventing the sale of unauthorized inventory depends on having complete and accurate ads.txt information. So, to make sure our systems are filtering traffic as accurately as possible, we built an ads.txt crawler based on concepts used in our search index technology. It scans all active sites across our network (over 30 million domains) every day for ads.txt files, to prevent unauthorized inventory from entering our systems.
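An ads.txt file is simply a plain-text list of authorized seller records: the ad system's domain, the publisher's account ID on that system, the relationship (DIRECT or RESELLER), and an optional certification authority ID. As a simplified illustration of the kind of check this enables (not of how Google's crawler or filtering actually works), a buyer-side system might fetch a publisher's file and verify a seller account against it roughly like this:

# Simplified sketch: fetch a publisher's ads.txt and check whether a given
# (ad system domain, seller account ID) pair is authorized. Illustrative
# only; it does not reflect Google's actual crawler or filtering.
import requests

def fetch_ads_txt_records(publisher_domain):
    url = f"http://{publisher_domain}/ads.txt"
    text = requests.get(url, timeout=10).text
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line or "=" in line:            # skip blanks and variables (e.g. contact=)
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            # (ad system domain, seller account ID, relationship)
            records.append((fields[0].lower(), fields[1], fields[2].upper()))
    return records

def is_authorized(publisher_domain, ad_system, seller_account_id):
    return any(domain == ad_system.lower() and account == seller_account_id
               for domain, account, _ in fetch_ads_txt_records(publisher_domain))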



The adoption of ads.txt has been growing quickly and the standard is reaching scale across publishers:
  • Over 100,000 ads.txt files have been published
  • 750 of the comScore 2,000 have ads.txt files
  • Over 50% of inventory seen by DBM comes from domains with ads.txt files

We believe ads.txt is a significant step in cleaning up bad inventory and it's great to have the broad support of our partners like L’Oreal, Omnicom Media Group, and the Financial Times.
“Consumers place enormous value on the ability to trust brands, which is why transparency in advertising is a top priority at L’Oreal. We look forward to collaborating with Google on this initiative as we continue to encourage the industry to follow suit.”
- Marie Gulin-Merle, CMO L’Oreal USA
"Removing counterfeit inventory from the ecosystem is critical to maintaining trust in digital. The simple act of publishing an ads.txt file helps provide the transparency we need to quickly reduce counterfeit inventory from harming our clients."
- Steve Katelman, EVP Global Strategic Partnerships, Omnicom Media Group
“It's great to see adoption of ads.txt across the industry and we're happy to see Google put their support behind this initiative. By eliminating counterfeit inventory from the ecosystem, marketers' budgets will work that much harder and revenue will reach real working media to fund the independent, high-quality journalism which society depends upon."
- Anthony Hitchings, Digital Advertising Operations Director, Financial Times

It’s amazing to see how fast the industry is adopting ads.txt, but there is still more to be done. Supporting industry initiatives like ads.txt is critical to maintaining the health of the digital advertising ecosystem. That’s why we’ll continue to invest and innovate to make the ecosystem more valuable, transparent, and trusted for everyone.

Posted by Per Bjorke
Product Manager, Google Ad Traffic Quality


Get the most out of Data Studio Community Connectors

Data Studio Community Connectors enable direct connections from Data Studio to any internet-accessible data source. Anyone can build their own Community Connector or use any of the available ones.

Try out the new Community Connectors in the gallery

We have recently added additional Community Connectors to the Data Studio Community Connector gallery from developers including: DataWorx, Digital Inspiration, G4interactive, Kevpedia, Marketing Miner, MarketLytics, Mito, Power My Analytics, ReportGarden, and Supermetrics. These connectors will let you access data from additional external sources, leveraging Data Studio as a free and powerful reporting and analysis solution. You can now use more than 50 Community Connectors from within the Gallery to access all your data.

Try out these free Community Connectors: Salesforce, Twitter, Facebook Marketing.

Find the connector you need

In the Data Studio Community Connector gallery, it is possible for multiple connectors to connect to the same data source. There are also instances where a single connector can connect to multiple data sources. To help users find the connector they need, we have added the Data Sources page where you can search for Data Sources and see what connectors are available to use. The connector list includes native connectors in Data Studio as well as verified and Open Source Community Connectors. You can directly use the connectors by clicking the direct links on the Data Sources page.

Vote for your data source

If your data source is not available through any existing connector, you can Vote for your data source. This will let developers know which data sources are most in demand. If you are a developer, please also let us know which Community Connector you are building. We will use this information to update the Data Sources page.

Tell us your story

If you have any interesting connector stories or ideas, or if you’d like to share some amazing reports you’ve created using Community Connectors, please let us know by giving us a shout or sending your story to [email protected].

Google Summer of Code 2017 Mentor Summit

This year Google brought over 320 mentors from all over the world (33 countries!) to Google's offices in Sunnyvale, California for the 2017 Google Summer of Code Mentor Summit. In all, 149 organizations were represented, which provided the perfect opportunity to meet like-minded open source enthusiasts and discuss ways to make open source better and more sustainable.
Group photo by Dmitry Levin used under a CC BY-SA 4.0 license.
The Mentor Summit is run as an unconference in which attendees create and join sessions based on their interests. “I liked the unconference sessions, that they were casual and discussion based and I got a lot out of them. It was the place I connected with the most people,” said Cassie Tarakajian, attending on behalf of the Processing Foundation.

Attendees quickly filled the schedule boards with interesting sessions. One theme in this year’s session schedule was the challenging topic of failing students. Derk Ruitenbeek, part of the phpBB contingent, had this to say:
“This year our organisation had a high failure rate of 3 out of 5 accepted students. During the Mentor Summit I attended multiple sessions about failing students and rating proposals and got a lot [of] useful tips. Talking with other mentors about this really helped me find ways to improve student selection for our organisation next time.”
This year was the largest Mentor Summit ever – with the exception of our 10 Year Reunion in 2014 – and had the best gender diversity yet. Katarina Behrens, a mentor who worked with LibreOffice, observed:
“I was pleased to see many more women at the summit than last time I participated. I'm also beyond happy that now not only women themselves, but also men engage in increasing (not only gender) diversity of their projects and teams.”
We've held the Mentor Summit for the past 10+ years as a way to meet some of the thousands of mentors whose generous work for the students makes the program successful, and to give some of them and the projects they represent a chance to meet. For 52% of attendees, this was their first Mentor Summit, giving us a lot of fresh perspectives to learn from!

We love hosting the Mentor Summit and attendees enjoy it, as well, especially the opportunity to meet each other. In fact, some attendees met in person for the first time at the Mentor Summit after years of collaborating remotely! According to Aveek Basu, who mentored for The Linux Foundation, the event was an excellent opportunity for “networking with like minded people from different communities. Also it was nice to know about people working in different fields from bioinformatics to robotics, and not only hard core computer science.” 

You can browse the event website and read through some of the session notes that attendees took to learn a bit more about this year’s Mentor Summit.

Now that Google Summer of Code 2017 and the Mentor Summit have come to a close, our team is busy gearing up for the 2018 program. We hope to see you then!

By Maria Webb, Google Open Source 

Host Hangouts Meet meetings with up to 50 participants

We recently announced a few exciting additions to the Hangouts Meet suite of products and features, including support of up to 50 participants in a meeting. This feature is now available for all meetings organized by a G Suite Enterprise edition user.

The 50-participant limit supports people joining from any mixture of video and dial-in entry points so you can bring together even more people from all over the world.

Launch Details
Release track:
Launching to Rapid Release, with Scheduled Release coming in 2 weeks

Editions:
Available to G Suite Enterprise edition only

Rollout pace:
Full rollout (1–3 days for feature visibility)

Impact:
All end users

Action:
Change management suggested/FYI

More Information
G Suite Updates blog: The meeting room, by G Suite
Help Center: Get Started with Meet
Help Center: Hangouts Meet Benefits and features
G Suite Learning Center: How many people can join a video meeting?


Control who can move your domain’s content out of Team Drives

Team Drives allow you to share files with people inside and outside of your domain. While you may want people outside of your domain, such as clients and partners, to add and contribute to your domain’s Team Drives, it’s important that you have control over who can move files out as well.

Today, we are introducing a new sharing setting in the Admin console that allows you, as a G Suite admin, to control who can remove content from your domain’s Team Drives, helping prevent your data from leaving your organization. This setting applies both to moving content from a Team Drive in your domain to a Team Drive or My Drive in an external domain, and to moving content from the My Drive of a user in your domain to a Team Drive in an external domain.

There are three options to choose from within this setting: “Anyone,” “No one,” or “Only users in this domain.”



You can find this setting in the Admin console under Apps > G Suite > Settings for Drive and Docs > Sharing settings.

By default, this setting is set to “Anyone,” which matches the Google Drive behavior that was previously in place with Team Drives. Additionally, these permissions are determined at the organizational unit (OU) level. This means that the setting will take effect based on the owner of the file and the setting of that owner’s OU.

This new setting will not be available in the Admin console if the “Sharing outside of [domain name]” selection is set to “off.”

Please note: this setting does not prevent users from transferring ownership by adding collaborators or using the sharing dialog. It only controls ownership transfer that happens as a result of moving content out of a shared Team Drive.

For more information on sharing settings for Team Drives, check out the Help Center.

Launch Details
Release track:
Launching to both Rapid Release and Scheduled Release

Editions:
Available to all G Suite editions

Rollout pace:
Full rollout (1–3 days for feature visibility)

Impact:
Admins only

Action:
Admin action suggested/FYI

More Information
Help Center: Manage your Team Drive users and activity


Get local help with your Google Assistant

No matter what questions you’re asking—whether about local traffic or a local business—your Google Assistant should be able to help. And starting today, it’s getting better at helping you find nearby services like an electrician, plumber, house cleaner and more.

To get started, say “Ok Google, find me a plumber” to your Assistant on your Android phone, iPhone, or voice-activated speaker, like Google Home. The Assistant will then ask you a few follow-up questions, and you’ll get a list of local options nearby.


In the U.S., this feature will start rolling out over the coming week, so help is just around the corner. In many cities the Google Assistant will suggest providers that have been prescreened by Google and companies like HomeAdvisor and Porch so you can feel confident they're ready to take on the job. And if you’re in a city that doesn’t have any available guaranteed or screened providers, you’ll still get an answer from the Assistant with other nearby results.

So start planning your next big project—whether it's fixing your garage door or painting your garage door—all with your Assistant by your side.