
Announcing ARCore 1.0 and new updates to Google Lens

Anuj Gosalia, Director of Engineering, AR

With ARCore and Google Lens, we're working to make smartphone cameras smarter. ARCore enables developers to build apps that can understand your environment and place objects and information in it. Google Lens uses your camera to help make sense of what you see, whether that's automatically creating contact information from a business card before you lose it, or soon being able to identify the breed of a cute dog you saw in the park. At Mobile World Congress, we're launching ARCore 1.0 along with new support for developers, and we're releasing updates for Lens and rolling it out to more people.

ARCore, Google's augmented reality SDK for Android, is out of preview and launching as version 1.0. Developers can now publish AR apps to the Play Store, and it's a great time to start building. ARCore works on 100 million Android smartphones, and advanced AR capabilities are available on all of these devices. It works on 13 different models right now (Google's Pixel, Pixel XL, Pixel 2 and Pixel 2 XL; Samsung's Galaxy S8, S8+, Note8, S7 and S7 edge; LGE's V30 and V30+ (Android O only); ASUS's Zenfone AR; and OnePlus's OnePlus 5). And beyond those available today, we're partnering with many manufacturers to enable their upcoming devices this year, including Samsung, Huawei, LGE, Motorola, ASUS, Xiaomi, HMD/Nokia, ZTE, Sony Mobile, and Vivo.

Making ARCore work on more devices is only part of the equation. We're bringing developers additional improvements and support to make their AR development process faster and easier. ARCore 1.0 features improved environmental understanding that enables users to place virtual assets on textured surfaces like posters, furniture, toy boxes, books, cans and more. Android Studio Beta now supports ARCore in the Emulator, so you can quickly test your app in a virtual environment right from your desktop.

Everyone should get to experience augmented reality, so we're working to bring it to people everywhere, including China. We'll be supporting ARCore in China on partner devices sold there—starting with Huawei, Xiaomi and Samsung—to enable them to distribute AR apps through their app stores.

We've partnered with a few great developers to showcase how they're planning to use AR in their apps. Snapchat has created an immersive experience that invites you into a "portal"—in this case, FC Barcelona's legendary Camp Nou stadium. Visualize different room interiors inside your home with Sotheby's International Realty. See Porsche's Mission E Concept vehicle right in your driveway, and explore how it works. With OTTO AR, choose pieces from an exclusive set of furniture and place them, true to scale, in a room. Ghostbusters World, based on the film franchise, is coming soon. In China, place furniture and over 100,000 other pieces with Easyhome Homestyler, see items and place them in your home when you shop on JD.com, or play games from NetEase, Wargaming and Game Insight.

With Google Lens, your phone's camera can help you understand the world around you, and we're expanding availability of the Google Lens preview. With Lens in Google Photos, when you take a picture, you can get more information about what's in your photo. In the coming weeks, Lens will be available to all Google Photos English-language users who have the latest version of the app on Android and iOS. Also over the coming weeks, English-language users on compatible flagship devices will get the camera-based Lens experience within the Google Assistant. We'll add support for more devices over time.

And while it's still a preview, we've continued to make improvements to Google Lens. Since launch, we've added text selection features, the ability to create contacts and events from a photo in one tap, and—in the coming weeks—improved support for recognizing common animals and plants, like different dog breeds and flowers.

Smarter cameras will enable our smartphones to do more. With ARCore 1.0, developers can start building delightful and helpful AR experiences for them right now. And Lens, powered by AI and computer vision, makes it easier to search and take action on what you see. As these technologies continue to grow, we'll see more ways that they can help people have fun and get more done on their phones.

Looking for Europe’s top entrepreneurs: The 2018 Digital Top 50 Awards

Posted by Torsten Schuppe, Vice President, Marketing

Tech entrepreneurs are changing the world through their own creativity and passion. To celebrate Europe's thriving developer and entrepreneurial scene and honor the most promising tech companies, in 2016 we founded the Digital Top 50 Awards, in association with McKinsey and Rocket Internet.

The 2018 edition of the awards is now open for applications, and companies with a digital product or service from the EU and EFTA countries can apply on the Digital Top 50 website until April 1, 2018.

All top 50 companies will receive free tickets and showcase space at Tech Open Berlin on June 20-21 2018, where the final winners in each category will be announced. The winner in the Tech for Social Impact category will be granted a cash prize of 50,000 euros, and all five winners will be provided with support from the founding partners to scale their businesses further—through leading professional advice, structured consulting and coaching programs, as well as access to a huge network of relevant industry contacts.

Helping people embrace new digital opportunities is at the heart of our Grow with Google initiative in Europe. With the DT50 awards, we hope to recognize a new generation of startups and scale-ups, and help them grow further and realize their dreams.

Google Developers Launchpad Studio welcomes more machine learning healthcare startups

Posted by Malika Cantor, Developer Relations Program Manager

We're excited to announce the three new startups joining Launchpad Studio, our 6-month mentorship program tailored to help applied machine learning startups build great products using the most advanced tools and technologies available. We intend to support these startups by leveraging some of our platforms like Google Cloud Platform, TensorFlow, and Android, while also providing one-on-one support from product and research experts from several Google teams, including Google Cloud, Verily, X, Brain, and ML Research. Launchpad Studio has also enlisted the expertise of a number of top industry practitioners and thought leaders to ensure Studio startups are successful over the long term. These three startups were selected based on the novel ways they've applied ML to important challenges in the healthcare industry:

Nanowear: Managing congestive heart failure

The cost of treating heart failure in the US is currently estimated at ~$40bn annually. With the continued aging of the US population, the impact of Congestive Heart Failure is expected to increase substantially.

Through lightweight, low-cost, cloth-based form factors, Nanowear can capture and transmit medical-grade data directly from the skin, enabling deep analytics and prescriptive recommendations. As a first product application, Nanowear's SimpleSense aims to transform Congestive Heart Failure management.

Nanowear intends to develop predictive models that provide both physicians and patients with leading indicators and data to anticipate potential hospitalizing events. Combining these datasets with deep machine learning capabilities will position Nanowear at the epicenter of the next generation of telemedicine and connected-self healthcare.

Owkin: Decentralizing healthcare data

With the big data revolution, the medical and scientific communities have more information to work with than in all of history combined. However, with such a wealth of information, it is increasingly difficult to differentiate productive leads from dead ends.

Artificial intelligence and machine learning powered by systems biology can organize, validate, predict and compare the overabundance of information. Owkin builds mathematical models and algorithms that can interpret omics, visual data, biostatistics and patient profiles.

Owkin is focused on federated learning in healthcare to overcome the data sharing problem, building collective intelligence from distributed data.

Portal Telemedicina: Bringing healthcare to rural areas

A low ratio of healthcare specialists to patients and a lack of interoperability between medical devices mean that exam results in Brazil take an average of 60 days to be ready, cost hundreds of dollars, and leave millions of people with no access to quality healthcare.

The standard solution to this problem is telemedicine, but the lack of direct, automatic communication with medical devices and of pre-processing AI limits its scalability, resulting in very low adoption worldwide.

Portal Telemedicina is a digital healthcare platform that provides reliable, fast, low-cost online diagnostics to hundreds of cities in Brazil. Thanks to revolutionary communication protocols and AI automation, the solution enables interoperability across systems and patients, and exams flow seamlessly from medical devices to diagnostics. The company has a huge proprietary dataset and uses Google's TensorFlow to train machine learning algorithms on millions of images and correlated health records to predict pathologies with human-level accuracy.

By leveraging artificial intelligence to empower doctors, the startup is improving millions of lives in Brazil and wants to expand to provide universal access to healthcare.

More about the Launchpad Studio program

Each startup will get tailored, equity-free support, with the goal of successfully completing an ML-focused project during the term of the program. To support this process, we provide resources, including deep engagement with engineers in Google Cloud, Google X, and other product teams, as well as Google Cloud credits. We also include both Google Cloud Platform and G Suite training in our engagement with all Studio startups.

Join Us

Based in San Francisco, Launchpad Studio is a fully tailored product development acceleration program that matches top ML startups and experts from Silicon Valley with the best of Google - its people, network, and advanced technologies - to help accelerate applied ML and AI innovation. The program's mandate is to support the growth of the ML ecosystem, and to develop product methodologies for ML.

Launchpad Studio is looking to work with the best and most game-changing ML startups from around the world. While we're currently focused on working with startups in the Healthcare and Biotech space, we'll soon be announcing other industry verticals, and any startup applying AI/ML technology to a specific industry vertical can apply on a rolling basis.

AMP stories: Bringing visual storytelling to the open web

Posted by Rudy Galfi, Product Manager for AMP at Google

The AMP story format is a recently launched addition to the AMP Project that provides content publishers with a mobile-focused format for delivering news and information as visually rich, tap-through stories.

A visual-driven format for evolving news consumption on mobile

Some stories are best told with text, while others are best expressed through images and videos. On mobile devices, users browse lots of articles but engage with few in depth. Images, videos and graphics help publishers get their readers' attention as quickly as possible and keep them engaged through immersive and easily consumable visual information.

Recently, as with many new or experimental features within AMP, contributors from multiple companies — in this case, Google and a group of publishers — came together to work toward building a story-focused format in AMP. The collective desire was that this format offer new, creative and visually rich ways of storytelling specifically designed for mobile.

Minimize technical challenges and let creators focus on the storytelling

The mobile web is great for distributing and sharing content, but mastering performance can be tricky. Creating visual stories on the web with the fast and smooth performance that users have grown accustomed to in native apps can be challenging. Getting these key details right often poses prohibitively high startup costs, particularly for small publishers.

AMP stories are built on the technical infrastructure of AMP to provide a fast, beautiful experience on the mobile web. Just like any web page, a publisher hosts an AMP story HTML page on their site and can link to it from any other part of their site to drive discovery. And, as with all content in the AMP ecosystem, discovery platforms can employ techniques like pre-renderable pages, optimized video loading and caching to optimize delivery to the end user.

AMP stories aim to make the production of stories as easy as possible from a technical perspective. The format comes with preset but flexible layout templates, standardized UI controls, and components for sharing and adding follow-on content.

Yet, the design gives great editorial freedom to content creators to tell stories true to their brand. Publishers involved in the early development of the AMP stories format — CNN, Conde Nast, Hearst, Mashable, Meredith, Mic, Vox Media, and The Washington Post — have brought together their reporters, illustrators, designers, producers, and video editors to creatively use this format and experiment with novel ways to tell immersive stories for a diverse set of content categories.

Developer preview for AMP stories is starting today

Today AMP stories are available for everyone to try on their websites. As part of the AMP Project, the AMP story format is free and open for anyone to use. To get started, check out the tutorial and documentation. We are looking forward to feedback from content creators and technical contributors alike.

Also, starting today, you can see AMP stories on Google Search. To try it out, search for publisher names (like the ones mentioned above) within g.co/ampstories using your mobile browser. At a later point, Google plans to bring AMP stories to more products across Google, and expand the ways they appear in Google Search.

The cpu_features library

Originally posted by Guillaume Chatelet from the Google Compiler Research Team on the Google Open Source Blog

"Write Once, Run Anywhere." That was the promise of Java back in the 1990s. You could write your Java code on one platform, and it would run on any CPU implementing a Java Virtual Machine.


But for developers who need to squeeze every bit of performance out of their applications, that's not enough. Since the dawn of computing, performance-minded programmers have used insights about hardware to fine tune their code.

Let's say you're working on code for which speed is paramount, perhaps a new video codec or a library to process tensors. There are individual instructions that will dramatically improve performance, like fused multiply-add, as well as entire instruction sets like SSE2 and AVX that can give the critical portions of your code a speed boost.

Here's the problem: there's no way to know a priori which instructions your CPU supports. Identifying the CPU manufacturer isn't sufficient. For instance, Intel's Haswell architecture supports the AVX2 instruction set, while Sandy Bridge doesn't. Some developers resort to desperate measures like reading /proc/cpuinfo to identify the CPU and then consulting hardcoded mappings of CPU IDs to instructions.

Enter cpu_features, a small, fast, and simple open source library to report CPU features at runtime. Written in C99 for maximum portability, it allocates no memory and is suitable for implementing fundamental functions and running in sandboxed environments.

The library currently supports x86, ARM/AArch64, and MIPS processors, and we'll be adding to it as the need arises. We also welcome contributions from others interested in making programs "write once, run fast everywhere."

Grow your app business with Google’s new education program for Universal App campaigns

Posted by Sissie Hsiao, VP of Product, Mobile App Advertising at Google

Today, we're launching a new interactive education program for Universal App campaigns (UAC). UAC makes it easy for you to reach users and grow your app business at scale. It uses Google's machine learning technology to help find the customers that matter most to you, based on your business goals — across Google Play, Google.com, YouTube and the millions of sites and apps in the Display Network.

UAC is a shift in the way you market your mobile apps, so we designed the program's first course to help you learn how to get the best results from UAC. Here are a few reasons we encourage you to take the course:

  • Learn from industry experts - The course was created by marketers who've been in your shoes and vetted by the team who built the Universal App campaign.
  • Learn on your schedule - The course is made up of short, snackable 3-minute videos that you can watch at your own pace to master the content faster.
  • Practice what you learn - Complete interactive activities based on real life scenarios like using UAC to help launch a new app or release an update for your app.

So, take the course today and let us know what you think. You can also read more about UAC best practices here and here.

Happy New Year and hope to see you in class!

Announcing TensorFlow 1.5

Posted by Laurence Moroney, Developer Advocate

We're delighted to announce that TensorFlow 1.5 is now public! Install it now to get a bunch of new features that we hope you'll enjoy!

Eager Execution for TensorFlow

First off, Eager Execution for TensorFlow is now available as a preview. We've heard lots of feedback about the programming style of TensorFlow, and how developers really want an imperative, define-by-run programming style. With Eager Execution for TensorFlow enabled, you can execute TensorFlow operations immediately as they are called from Python. This makes it easier to get started with TensorFlow, and can make research and development more intuitive.

For example, think of a simple computation like a matrix multiplication. Today, in TensorFlow it would look something like this:

x = tf.placeholder(tf.float32, shape=[1, 1])
m = tf.matmul(x, x)

with tf.Session() as sess:
    print(sess.run(m, feed_dict={x: [[2.]]}))

If you enable Eager Execution for TensorFlow, it will look more like this:

x = [[2.]]
m = tf.matmul(x, x)

print(m)
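
To try this today, eager execution has to be switched on once at program startup. Here's a minimal sketch, assuming the TensorFlow 1.5 preview API, where this lives under tf.contrib.eager:

import tensorflow as tf
import tensorflow.contrib.eager as tfe

# Enable eager execution before any other TensorFlow operations run.
tfe.enable_eager_execution()

x = [[2.]]
m = tf.matmul(x, x)

print(m)  # The result is computed and printed immediately, no Session required.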

You can learn more about Eager Execution for TensorFlow here (check out the user guide linked at the bottom of the page, and also this presentation) and the API docs here.

TensorFlow Lite

The developer preview of TensorFlow Lite is built into version 1.5. TensorFlow Lite, TensorFlow's lightweight solution for mobile and embedded devices, lets you take a trained TensorFlow model and convert it into a .tflite file, which can then be executed on a mobile device with low latency. Training doesn't have to be done on the device, and the device doesn't need to upload data to the cloud for processing. So, for example, if you want to classify an image, a trained model can be deployed to the device and the classification happens on-device directly.
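
To give a feel for the conversion workflow, here's a minimal sketch of turning a trivial graph into a .tflite file; it assumes the TensorFlow 1.5 developer preview, where the converter is exposed under tf.contrib.lite:

import tensorflow as tf

# Build a toy graph: one input tensor passed through an identity op.
img = tf.placeholder(tf.float32, shape=[1, 64, 64, 3], name="input")
out = tf.identity(img, name="output")

with tf.Session() as sess:
    # Serialize the graph as a TensorFlow Lite flatbuffer.
    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])

with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)

The resulting file is what gets bundled with your mobile app and executed by the on-device interpreter.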

TensorFlow Lite includes a sample app to get you started. The app uses a MobileNet model covering 1,001 unique image categories: it takes an image and lists the top 3 categories it recognizes. The app is available on both Android and iOS.

You can learn more about TensorFlow Lite, and how to convert your models to be available on mobile here.

GPU Acceleration Updates

If you are using GPU Acceleration on Windows or Linux, TensorFlow 1.5 now has CUDA 9 and cuDNN 7 support built-in.

To learn more about NVIDIA's Compute Unified Device Architecture (CUDA) 9, check out NVIDIA's site here.

This is enhanced by the CUDA Deep Neural Network Library (cuDNN), the latest release of which is version 7. Support for this is now included in TensorFlow 1.5.

Here are some Medium articles on GPU support on Windows and Linux, and how to install these libraries on your workstation (if it has the requisite hardware).
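
Once the drivers and libraries are in place, a quick sanity check (a minimal sketch, assuming the GPU-enabled TensorFlow package is installed) is to list the devices TensorFlow can see:

from tensorflow.python.client import device_lib

# Any "/device:GPU:N" entries here mean the CUDA 9 / cuDNN 7 setup was
# detected; an empty list means TensorFlow is running CPU-only.
print([d.name for d in device_lib.list_local_devices()
       if d.device_type == "GPU"])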

Documentation Site Updates

In line with this release we've also overhauled the documentation site, including an improved Getting Started flow that will get you from no knowledge to building a neural network to classify different types of iris in a very short time. Check it out!

Other Enhancements

Beyond these features, there are lots of other enhancements to Accelerated Linear Algebra (XLA), updates to RunConfig, and much more. Check the release notes here.

Installing TensorFlow 1.5

To get TensorFlow 1.5, you can use the standard pip installation (or pip3 if you use Python 3):

$  pip install --ignore-installed --upgrade tensorflow
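
If you want the GPU acceleration described above, the GPU-enabled build ships as a separate package; assuming the CUDA 9 and cuDNN 7 libraries are already installed, it is installed the same way:

$  pip install --ignore-installed --upgrade tensorflow-gpu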

Google Play Games Services C++ SDK 3.0 Released

Posted by Clayton Wilkinson, Developer Relations

We're pleased to announce the availability of the Google Play Games Services C++ SDK version 3.0. The highlights of this release are:

  • Requires Android NDK r14 or greater.
  • Compiled using the clang toolchain. The use of clang with projects using this SDK is strongly recommended in order to avoid unexpected behavior.
  • The armeabi ABI has been removed. You should use armeabi-v7a.
  • Bug fixes for the Nearby API.
  • Refinements in the Snapshots API.

More details can be found in the release notes on the downloads page.

The SDK can be downloaded from: https://developers.google.com/games/services/downloads/sdks

Samples using this SDK can be downloaded from GitHub: https://github.com/playgameservices/cpp-android-basic-samples

Thanks and happy coding!

Real-world data in PageSpeed Insights

Posted by Mushan Yang and Xiangyu Luo, Software Engineers

PageSpeed Insights provides information about how well a page adheres to a set of best practices. In the past, these recommendations were presented without the context of how fast the page performed in the real world, which made it hard to understand when it was appropriate to apply these optimizations. Today, we're announcing that PageSpeed Insights will use data from the Chrome User Experience Report to make better recommendations for developers, and that the Optimization score has been tuned to align more closely with real-world data.

The PSI report now has several different elements:

  • The Speed score categorizes a page as being Fast, Average, or Slow. This is determined by looking at the median value of two metrics: First Contentful Paint (FCP) and DOM Content Loaded (DCL). If both metrics are in the top one-third of their category, the page is considered fast.
  • The Optimization score categorizes a page as being Good, Medium, or Low by estimating its performance headroom. The calculation assumes that a developer wants to keep the same appearance and functionality of the page.
  • The Page Load Distributions section presents how this page's FCP and DCL events are distributed in the data set. These events are categorized as Fast (top third), Average (middle third), and Slow (bottom third) by comparing to all events in the Chrome User Experience Report.
  • The Page Stats section describes the round trips required to load the page's render-blocking resources, the total bytes used by the page, and how it compares to the median number of round trips and bytes used in the dataset. It can indicate if the page might be faster if the developer modifies the appearance and functionality of the page.
  • Optimization Suggestions is a list of best practices that could be applied to this page. If the page is fast, these suggestions are hidden by default, as the page is already in the top third of all pages in the data set.
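
The same data is also available programmatically. Here's a hypothetical sketch (the v4 endpoint and response field names are assumptions; check the PageSpeed Insights API documentation before relying on them) that fetches a page's report and prints its overall speed category:

import json
import urllib.parse
import urllib.request

# Query the PageSpeed Insights REST API for a page and print the overall
# speed category derived from Chrome User Experience Report data.
API = "https://www.googleapis.com/pagespeedonline/v4/runPagespeed"
page = "https://developers.google.com/"

with urllib.request.urlopen(API + "?url=" + urllib.parse.quote(page, safe="")) as resp:
    report = json.load(resp)

print(report["loadingExperience"]["overall_category"])  # FAST, AVERAGE, or SLOW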

For more details on these changes, see About PageSpeed Insights. As always, if you have any questions or feedback, please visit our forums and remember to include the URL that is being evaluated.

Actions on Google: new directory, device availability and smart home controls

Posted by Brad Abrams, Product Manager

With the Google Assistant and Actions on Google, we're excited for 2018 and look forward to continuing the developer momentum you've helped us build. To start the year off right, we're at the Consumer Electronics Show in Las Vegas showcasing the Assistant at home, on the go and in the car—and all the ways it can help in each of those places. You can learn more here. For developers like you, we're building upon those same areas to extend the ways you can reach users in those places, too.

Helping users get more done, together

Today we're introducing a new web directory and an updated directory experience with the Assistant on phones. These directories give users even more visibility into everything your app can help them do. They also make it even easier for users to share links to your apps. And together with your help, we're adding Actions all the time, including those that are coming soon from SpotHero and Starbucks.

Even better, when you publish your first app, you'll become eligible for our developer community program, which supports you with up to $200 in monthly Google Cloud credit and an Assistant t-shirt, with the perks and opportunities growing the more you do, including earning a Google Home.

At home, on the go and in the car

With the Assistant, your apps are available across many devices, and this year we're making them even more available with new integrations at home, on the go and in the car.

For the home, we announced that smart displays with the Assistant built in are coming later this year. Smart displays come with the added benefit of a touch screen, so they can provide a visual experience for users.

Beyond smart displays, the Assistant is also coming to new speakers and TVs, as well as new headphones that are optimized for the Assistant.

Finally, starting later this week, we're bringing the Assistant to Android Auto, allowing users to project Android Auto, and with it the Assistant, onto the screen in their compatible car.

The best part is that compatible apps will be available to users on all these devices without additional work. With that said, to ensure the best user experience, here are a few tips:

  • Smart displays — use high resolution imagery as users will be interacting with larger images than those sized for phones.
  • Android Auto — since this experience is in the car, only voice-only apps will be available; keep voice interactions and sounds simple, and avoid anything too jarring or distracting.

More control of your home with smart home control

In addition to the enhanced home experience with built in devices, we're also updating our home control experience, making it easier than ever to build for smart homes. The Google Assistant already works with more than 1,500 smart devices from 200+ brands, but this is still just the start for the number of devices we anticipate will be built for the smart home.

We first launched smart home Actions at I/O this year, starting with support for things like lights, plugs and thermostats. Now, we're excited to announce we've added direct support for a number of new device types, including cameras, dishwashers, dryers, vacuums and washers. This means that users can control all kinds of appliances in their home just by asking the Google Assistant. And in order to support these new integrations, we're also expanding the supported device traits to include: camera stream, dock, modes, run cycle, scene, start/stop and toggles. With all these new devices, it's a good thing we've made it even easier to build smart home Actions, with a streamlined development flow and insightful analytics to help you improve your smart home Action. Ready to begin? Start here!

And that's our news for now. Thanks for everything you do to make the Assistant more helpful, fun and interactive! It's been an exciting year to see the platform expand to new languages and devices and to see what you've all created. We can't wait to see what you build and the new ways users are able to get things done as a result. Here's to a great year!