
Building for Billions

Originally posted on Android Developers blog

Posted by Sam Dutton, Ankur Kotwal, Developer Advocates; Liz Yepsen, Program Manager

‘TOP-UP WARNING.’ ‘NO CONNECTION.’ ‘INSUFFICIENT BANDWIDTH TO PLAY THIS RESOURCE.’

These are common warnings for many smartphone users around the world.

To build products that work for billions of users, developers must address key challenges: limited or intermittent connectivity, device compatibility, varying screen sizes, high data costs, and short-lived batteries. We first presented developers.google.com/billions and related Android and Web resources at Google I/O last month, and today you can watch the video presentations about Android or the Web.

These best practices can help developers reach billions by delivering exceptional performance across a range of connections, data plans, and devices. g.co/dev/billions will help you:

Seamlessly transition between slow, intermittent, and offline environments

Your users move from place to place, from speedy wireless to patchy or expensive data. Manage these transitions by storing data, queueing requests, optimizing image handling, and performing core functions entirely offline.
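To make the store-and-forward idea concrete, here is a minimal, platform-agnostic Python sketch; the endpoint URL and queue file name are hypothetical, and on Android or the Web you would use the platform's own storage and networking APIs instead:

# Store-and-forward sketch: persist requests while offline, replay them
# when connectivity returns. URL and file names are hypothetical.
import json
import requests

QUEUE_FILE = 'pending_requests.json'

def load_queue():
    try:
        with open(QUEUE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def send_or_queue(payload, url='https://example.com/api/sync'):
    # Try everything queued so far plus the new payload; keep failures.
    remaining = []
    for item in load_queue() + [payload]:
        try:
            requests.post(url, json=item, timeout=5).raise_for_status()
        except requests.RequestException:
            remaining.append(item)  # still offline; retry next time
    with open(QUEUE_FILE, 'w') as f:
        json.dump(remaining, f)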

Provide the right content for the right context

Keep context in mind: how and where do your users consume your content? Selecting text and media that work well across different viewport sizes, keeping text short (for scrolling on the go), providing a simple UI that doesn't distract from content, and removing redundant content can all improve the perceived quality of your app while delivering real performance gains like reduced data transfer. Once these practices are in place, localization options can grow audience reach and increase engagement.

Optimize for mobile hardware

Ensure your app or Web content is served and runs well for your widest possible addressable market, covering all actively used OS versions, while still following best practices; test on virtual or actual devices in your target markets. Native Android apps should set minimum and target SDKs. Also remember that low-cost phones have smaller amounts of RAM; apps should adjust usage accordingly and minimize background activity. For in-depth information on minimizing APK size, check out this series of Medium posts. On the Web, optimize JavaScript CPU usage, avoid raster image rendering, and minimize resource requests. Find out more here.

Reduce battery consumption

Low-cost phones usually have shorter battery life. Users are sensitive to battery consumption levels, and excessive consumption can lead to a high uninstall rate or avoidance of your site. Benchmark your battery usage against sessions on other pages or apps, using tools such as Battery Historian, and avoid long-running processes that drain batteries.

Conserve data usage

Whatever you’re building, conserve data usage in three simple steps: understand loading requirements, reduce the amount of data required for interaction, and streamline navigation so users get what they want quickly. Conserving data on behalf of your users (and, with native apps, offering configurable network usage) helps retain data-sensitive users, especially those on prepaid plans or contracts with limited data, as even “unlimited” plans can become expensive when roaming or when unexpected fees apply.
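One generic way to cut repeat transfers, sketched below in Python with a hypothetical URL, is to revalidate cached resources with conditional requests so that checking unchanged content costs almost nothing:

# Conditional-request sketch: re-download a resource only if it changed.
# The URL is hypothetical; the server must send ETag headers for this.
import requests

url = 'https://example.com/feed.json'
first = requests.get(url)
etag = first.headers.get('ETag')

# Later, revalidate instead of re-downloading.
headers = {'If-None-Match': etag} if etag else {}
later = requests.get(url, headers=headers)
if later.status_code == 304:
    print('Not modified; reuse the cached copy.')
else:
    print('Fetched %d fresh bytes.' % len(later.content))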

Have another insight, or a success story about launching in low-connectivity conditions or on low-cost devices? Let us know on our G+ post.

Announcing turndown of the Google Feed API

Posted by Dan Ciruli, Product Manager, Google Cloud Platform Team

The Google Feed API was one of Google’s original AJAX APIs, first announced in 2007. It had a good run. However, interest in and use of the API have waned over time, and it is running on API infrastructure that is now two generations old at Google.

In April 2012, along with many of our other free APIs, we announced that we would be deprecating it in three years’ time. As of April 2015, the deprecation period has elapsed. While we have continued to operate the API in the interim, it is now time to announce the turndown.

As a final courtesy to developers, we plan to operate the API until September 29, 2016, when Google Feed API will cease operation. Please ensure that your application is no longer using this API by then.

Google appreciates how important APIs and developer trust are, and we do not take decisions like this one lightly. We remain committed to providing great services and being open and communicative about their statuses.

For those developers who find RSS an essential part of their workflow, there are commercial alternatives that may fit your use case very well.

Universal rendering with SwiftShader, now open source

Originally posted on Chromium Blog

Posted by Nicolas Capens, Software Engineer and Pixel Pirate
SwiftShader is a software library for high-performance graphics rendering on the CPU. Google already uses this library in multiple products, including Chrome, Android development tools, and cloud services. Starting today, SwiftShader is fully open source, expanding its pool of potential applications.


Since 2009, Chrome has used SwiftShader to enable 3D rendering on systems that can’t fully support hardware-accelerated rendering. While 3D content like WebGL is written for a GPU, some users’ devices don’t have graphics hardware capable of executing this content. Others may have drivers with serious bugs which can make 3D rendering unreliable, or even impossible. Chrome uses SwiftShader on these systems in order to ensure 3D web content is available to all users.

Chrome running without SwiftShader on a machine with an inadequate GPU (left) cannot run the WebGL Globe experiment. The same machine with SwiftShader enabled (right) is able to fully render the content.


SwiftShader implements the same OpenGL ES graphics API used by Chrome and Android. Making SwiftShader open source will enable other browser vendors to support 3D content universally and move the web platform forward as a whole. In particular, unconditional WebGL support allows web developers to create more engaging content, such as casual games, educational apps, collaborative content creation software, product showcases, virtual tours, and more. SwiftShader also has applications in the cloud, enabling rendering on GPU-less systems.


To provide users with the best performance, SwiftShader uses several techniques to efficiently perform graphics calculations on the CPU. Dynamic code generation enables tailoring the code towards the tasks at hand at run-time, as opposed to the more common compile-time optimization. This complex approach is simplified through the use of Reactor, a custom C++ embedded language with an intuitive imperative syntax. SwiftShader also uses vector operations in SIMT fashion, together with multi-threading technology, to increase parallelism across the CPU’s available cores and vector units. This enables real-time rendering for uses such as app streaming on Android.
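To make the data-parallel idea concrete, here is a toy Python/NumPy sketch (not SwiftShader's actual Reactor code) that shades a row of pixels with whole-vector operations instead of a scalar per-pixel loop:

# Toy illustration of SIMT-style shading (not SwiftShader/Reactor code):
# each arithmetic operation applies across many pixel 'lanes' at once.
import numpy as np

def shade_scalar(xs):
    # One pixel at a time: one multiply and add per loop iteration.
    return [min(1.0, 0.5 * x + 0.25) for x in xs]

def shade_simt(xs):
    # Whole row at once: the same math runs across all lanes in parallel.
    return np.minimum(1.0, 0.5 * xs + 0.25)

lanes = np.linspace(0.0, 1.0, 8)  # eight pixel lanes
assert np.allclose(shade_scalar(lanes), shade_simt(lanes))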


Developers can access the SwiftShader source code from its git repository. Sign up for the mailing list to stay up to date on the latest developments and collaborate with other SwiftShader developers from the open-source community.

Daydream Labs: animating 3D objects in VR

Rob Jagnow, Software Engineer, Google VR

Whether you're playing a game or watching a video, VR lets you step inside a new world and become the hero of a story. But what if you want to tell a story of your own?

Producing immersive 3D animation can be difficult and expensive. It requires complex software to set keyframes with splined interpolation or costly motion capture setups to track how live actors move through a scene. Professional animators spend considerable effort to create sequences that look expressive and natural.

At Daydream Labs, we've been experimenting with ways to reduce technical complexity and even add a greater sense of play when animating in VR. In one experiment we built, people could bring characters to life by picking up toys, moving them through space and time, and then replaying the scene.


As we watched people play with the animation experiment, we noticed a few things:

The need for complex metaphors goes away in VR: What can be complicated in 2D can be made intuitive in 3D. Instead of animating with graph editors or icons representing location, people could simply reach out, grab a virtual toy, and carry it through the scene. These simple animations had a handmade charm that conveyed a surprising degree of emotion.

The learning curve drops to zero: People were already familiar with how to interact with real toys, so they jumped right in and got started telling their stories. They didn't need a lengthy tutorial, and they were able to modify their animations and even add new characters without any additional help.

People react to virtual environments the same way they react to real ones: When people entered a playful VR environment, they understood it was a safe space to play with the toys around them. They felt comfortable performing and speaking in funny voices. They took more risks knowing the virtual environment was designed for play.

To create more intricate animations, we also built another experiment that let people independently animate the joints of a single character. It let you record your character’s movement as you separately animated the feet, hands, and head — just like you would with a puppet.


VR allows us to rethink software and make certain use cases more natural and intuitive. While this kind of animation system won’t replace professional tools, it can allow anyone to tell their own stories. There are many examples of using VR for storytelling, especially with video and animation, and we’re excited to see new perspectives as more creators share their stories in VR.

TensorFlow v0.9 now available with improved mobile support

Posted by Pete Warden, Software Engineer

When we started building TensorFlow, supporting mobile devices was a top priority. We were already supporting many of Google’s mobile apps like Translate, Maps, and the Google app, which use neural networks running on devices. We knew that we had to make mobile a first-class part of open source TensorFlow.

TensorFlow has been available to developers on Android since launch, and today we're happy to add iOS in v0.9 of TensorFlow, along with Raspberry Pi support and new compilation options.

To build TensorFlow on iOS we’ve created a set of scripts, including a makefile, to drive the cross-compilation process. The makefile can also help you build TensorFlow without using Bazel, which is not always available.

All this is in the latest TensorFlow distribution. You can read more by visiting our Mobile TensorFlow guide and the documentation in our iOS samples and Android sample. The mobile samples allow you to classify images using the ImageNet Inception v1 classifier.
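To give a flavor of what the samples do, here is a rough sketch using the 2016-era TensorFlow Python API that loads a frozen classifier graph and runs one image through it; the file names and tensor names below are placeholders that depend on the model you export:

# Sketch: classify one image with a frozen TensorFlow graph. The file
# and tensor names are placeholders and depend on the exported model.
import tensorflow as tf

with tf.gfile.FastGFile('graph.pb', 'rb') as f:  # frozen GraphDef
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

image_data = tf.gfile.FastGFile('cat.jpg', 'rb').read()  # raw JPEG bytes
with tf.Session() as sess:
    preds = sess.run('softmax:0',
                     {'DecodeJpeg/contents:0': image_data})
    print(preds.argmax())  # index of the most likely class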

These mobile samples are just the beginning. We'd love your help and your contributions. Tag social media posts with #tensorflow so we can hear about your projects!

See the full TensorFlow 0.9.0 release notes here.

New Google Cast SDK released for Android and iOS

Posted by Adam Champy, Product Manager for Google Cast SDK

Google Cast makes it easy for developers to extend their mobile experience to the most beautiful screens and speakers in the home.

At Google I/O, we announced our new Google Cast SDK. This new SDK focuses on making development for Cast quicker, more reliable, and easier to maintain. We’ve introduced full state management that helps you implement the right abstraction between your app and Google Cast. We’ve also delivered a full Cast user experience, matching the Google Cast design checklist.

Today we are releasing this SDK for Android and iOS Senders, including an introductory video, full documentation, and reference sample apps and codelab tutorials for both platforms. Initial developer feedback is that first-time implementations can save significant development time compared with our previous SDKs.


A few things we’ve announced will be coming in the next few months, including a customizable Expanded Controller and customization options for the Mini Controller, to help accelerate development even further.

Drop by our Cast developer site to learn about the new SDK and APIs, and join our developer community on Google+ at g.co/googlecastdev to discuss this with other developers.

Project Bloks: Making code physical for kids

Originally posted on Google Research Blog



At Google, we’re passionate about empowering children to create and explore with technology. We believe that when children learn to code, they’re not just learning how to program a computer—they’re learning a new language for creative expression and are developing computational thinking: a skillset for solving problems of all kinds.
In fact, it’s a skillset whose importance is being recognised around the world—from President Obama’s CS4All program to the inclusion of Computer Science in the UK National Curriculum. We’ve long supported and advocated for CS education through programs and platforms such as Blockly, Scratch Blocks, CS First and Made w/ Code.
Today, we’re happy to announce Project Bloks, a research collaboration between Google, Paulo Blikstein (Stanford University) and IDEO with the goal of creating an open hardware platform that researchers, developers and designers can use to build physical coding experiences. As a first step, we’ve created a system for tangible programming and built a working prototype with it. We’re sharing our progress before conducting more research over the summer to inform what comes next.
Physical coding

Kids are inherently playful and social. They naturally play and learn by using their hands, building stuff and doing things together. Making code physical, known as tangible programming, offers a unique way to combine the way children innately play and learn with computational thinking.
Project Bloks is preceded and shaped by a long history of educational theory and research in the area of hands-on learning, from Friedrich Froebel, Maria Montessori and Jean Piaget’s pioneering work on learning by experience, exploration and manipulation, to the research started in the 1970s by Seymour Papert and Radia Perlman with LOGO and TORTIS. This exploration has continued to grow and includes a wide range of research and platforms.
However, designing kits for tangible programming is challenging—requiring the resources and time to develop both the software and the hardware. Our goal is to remove those barriers. By creating an open platform, Project Bloks will allow designers, developers and researchers to focus on innovating, experimenting and creating new ways to help kids develop computational thinking. Our vision is that, one day, the Project Bloks platform becomes for tangible programming what Blockly is for on-screen programming.
The Project Bloks system

We’ve designed a system that developers can customise, reconfigure and rearrange to create all kinds of different tangible programming experiences.
A bird’s-eye view of the customisable and reconfigurable Project Bloks system
The Project Bloks system is made up of three core components: the “Brain Board”, “Base Boards” and “Pucks”. When connected together, they create a set of instructions that can be sent to connected devices, things like toys or tablets, over WiFi or Bluetooth.
The three core components of the Project Bloks system
Pucks: abundant, inexpensive, customisable physical instructions

Pucks are what make the Project Bloks system so versatile. They help bring the infinite flexibility of software programming commands to tangible programming experiences. Pucks can be programmed with different instructions, such as ‘turn on or off’, ‘move left’ or ‘jump’. They can also take the shape of many different interactive forms—like switches, dials or buttons. With no active electronic components, they’re also incredibly cheap and easy to make. At a minimum, all you'd need to make a puck is a piece of paper and some conductive ink.
Pucks allow for the cheap and easy creation and customisation of an endless number of different domain-specific physical instructions.
Base Boards: a modular design for diverse tangible programming experiences

Base Boards read a Puck’s instruction through a capacitive sensor. They act as a conduit for a Puck’s command to the Brain Board. Base Boards are modular and can be connected in sequence and in different orientations to create different programming flows and experiences.
The modularity of the Base Boards means they can be arranged in different configurations and flows
Each Base Board is fitted with a haptic motor and LEDs that can be used to give end-users real-time feedback on their programming experience. The Base Boards can also trigger audio feedback from the Brain Board’s built-in speaker.

Brain Board: control any device that has an API over WiFi or Bluetooth

The Brain Board is the processing unit of the system, built on a Raspberry Pi Zero. It also provides the other boards with power, and contains an API to receive and send data to the Base Boards. It sends the Base Boards’ instructions to any device with WiFi or Bluetooth connectivity and an API. As a whole, the Project Bloks system can take on different form factors and be made out of different materials. This means developers have the flexibility to create diverse experiences that can help kids develop computational thinking: from composing music using functions to playing around with sensors, or anything else they care to invent.
The Project Bloks system can be used to create all sorts of different physical programming experiences for kids
The Coding Kit

To show how designers, developers, and researchers might make use of the system, the Project Bloks team worked with IDEO to create a reference device, called the Coding Kit. It lets kids learn basic concepts of programming by allowing them to put code bricks together to create a set of instructions that can be sent to control connected toys and devices—anything from a tablet, to a drawing robot, or educational tools for exploring science like LEGO® Education WeDo 2.0.
What’s next?

We are looking for participants (educators, developers, parents and researchers) from around the world who would like to help shape the future of Computer Science education by remotely taking part in our research studies later in the year. If you would like to be part of our research study or simply receive updates on the project, please sign up. If you want more context and detail on Project Bloks, you can read our position paper. Finally, a big thank you to the team beyond Google who’ve helped us get this far—including the pioneers of tangible learning and programming who’ve inspired us and informed so much of our thinking.

Introducing Firebase Authentication

Originally posted on Firebase blog

For most developers, building an authentication system for your app can feel a lot like paying taxes. Both are tasks that are relatively hard to understand, that you have no choice but to do, and that can have big consequences if you get them wrong. No one ever started a company to pay taxes, and no one ever built an app just so they could create a great login system. They just seem to be inescapable costs.

But now, you can at least free yourself from the auth tax. With Firebase Authentication, you can outsource your entire authentication system to Firebase so that you can concentrate on building great features for your app. Firebase Authentication makes it easier to get your users signed in without having to understand the complexities of implementing your own authentication system. It offers a straightforward getting-started experience and optional UX components designed to minimize user friction, and it is built on open standards and backed by Google infrastructure.

Implementing Firebase Authentication is relatively fast and easy. From the Firebase console, just choose the popular login methods that you want to offer (like Facebook, Google, Twitter and email/password) and then add the Firebase SDK to your app. Your app will then be able to connect securely with the Realtime Database, Firebase Storage or your own custom back end. If you have an auth system already, you can use Firebase Authentication as a bridge to other Firebase features.

Firebase Authentication also includes an open source UI library that streamlines building the many auth flows required to give your users a good experience. Password resets, account linking, and login hints that reduce the cognitive load around multiple login choices are all pre-built with Firebase Authentication UI. These flows are based on years of UX research optimizing the sign-in and sign-up journeys on Google, YouTube and Android. It includes Smart Lock for Passwords on Android, which has led to significant improvements in sign-in conversion for many apps. And because Firebase UI is open source, the interface is fully customizable, so it feels like a completely natural part of your app. If you prefer, you are also free to create your own UI from scratch using our client APIs.

Firebase Authentication is also built around openness and security. It leverages OAuth 2.0 and OpenID Connect, industry standards designed for security, interoperability, and portability. Members of the Firebase Authentication team helped design these protocols and used their expertise to weave in the latest security practices, like ID tokens, revocable sessions, and native app anti-spoofing measures, to make your app easier to use and help you avoid many common security problems. The code is independently reviewed by the Google Security team, and the service is protected by Google’s infrastructure.

Fabulous uses Firebase Auth to quickly implement sign-in

Fabulous uses Firebase Authentication to power its login system. Fabulous is a research-based app incubated in Duke University’s Center for Advanced Hindsight. Its goal is to help users embark on a journey to reset poor habits, replacing them with healthy rituals, and ultimately improve health and well-being.

The developers of Fabulous wanted to implement an onboarding flow that was easy to use, required minimal updates, and reduced friction for the end user. They wanted an anonymous option so that users could experiment with the app before signing up. They also wanted to support multiple login types and offer a sign-in flow consistent with the look and feel of the app.

“I was able to implement auth in a single afternoon. I remember that I spent weeks before creating my own solution that I had to update each time the providers changed their API.”
- Amine Laadhari, Fabulous CTO.

Malang Studio cut time-to-market by months using Firebase Auth

Chu-Day is an application (available on Android and iOS) that helps couples never forget the dates that matter most to them. It was created by the Korean firm Malang Studio, which develops character-centric, gamified lifestyle applications.

Generally, countdown and anniversary apps do not require users to sign in, but Malang Studio wanted to make Chu-Day special and differentiate it from others by offering the ability to connect couples so they could jointly count down to a special anniversary date. This required a sign-in feature, and in order to prevent users from dropping out, Chu-Day needed to make the sign-in process seamless.

Malang Studio was able to integrate an onboarding flow into their apps, using Facebook and Google Sign-In, in one day, without having to worry about server deployment or databases. In addition, Malang Studio has been taking advantage of the Firebase User Management Console, which helped them develop and test their sign-in implementation as well as manage their users:

“Firebase Authentication required minimum configuration so implementing social account signup was easy and fast. User management feature provided in the console was excellent and we could easily implement our user auth system.”
- Marc Yeongho Kim, CEO / Founder from Malang Studio

For more about Firebase Authentication, visit the developers site and watch our I/O 2016 session, “Best practices for a great sign-in experience.”

Introducing the Android Basics Nanodegree

Posted by Shanea King-Roberson, Lead Program Manager (Twitter: @shaneakr, Instagram: @theshanea)


Do you have an idea for an app but you don’t know where to start? There are over 1 billion Android devices worldwide, providing a way for you to deliver your ideas to the right people at the right time. Google, in partnership with Udacity, is making Android development accessible and understandable to everyone, so that regardless of your background, you can learn to build apps that improve the lives of people around you.

Enroll in the new Android Basics Nanodegree. This series of courses and services teaches you how to build simple Android apps, even if you have little or no programming experience. Take a look at some of the apps built by our students:

The app "ROP Tutorial" built by student Arpy Vanyan raises awareness of a potentially blinding eye disorder called Retinopathy of Prematurity that can affect newborn babies.

And user Charles Tommo created an app called “Dr Malaria” that teaches people ways to prevent malaria.

With courses designed by Google, you can learn skills that are applicable to building apps that solve real-world problems. You can learn at your own pace to use Android Studio (Google’s official tool for Android app development) to design app user interfaces and implement user interactions using the Java programming language.

The courses walk you step-by-step through building an order form for a coffee shop, an app to track pets in a shelter, an app that teaches vocabulary words from the Native American Miwok tribe, and an app on recent earthquakes in the world. At the end of the course, you will have an entire portfolio of apps to share with your friends and family.

Upon completing the Android Basics Nanodegree, you also have the opportunity to continue your learning with the Career-track Android Nanodegree (for intermediate developers). The first 50 participants to finish the Android Basics Nanodegree have a chance to win a scholarship for the Career-track Android Nanodegree. Please visit udacity.com/legal/scholarship for additional details and eligibility requirements. You now have a complete learning path to help you become a technology entrepreneur or, most importantly, build very cool Android apps for yourself, your communities, and even the world.

All of the individual courses that make up this Nanodegree are available online for no charge at udacity.com/google. In addition, for a fee, Udacity provides paid services including access to coaches, guidance on your project, help staying on track, career counseling, and a certificate upon completion.

You will be exposed to introductory computer science concepts in the Java programming language as you learn the following skills:

  • Build app user interfaces
  • Implement user interactions
  • Store information in a database
  • Pull data from the internet into your app
  • Identify and fix unexpected behavior in the app
  • Localize your app to support other languages

To enroll in the Android Basics Nanodegree program, click here.

See you in class!

Introducing the Google Sheets API v4: Transferring data from a SQL database to a Sheet

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Apps

At Google I/O 2016, we launched a new Google Sheets API—click here to watch the entire announcement. The updated API includes many new features that weren’t available in previous versions, including access to functionality found in the Sheets desktop and mobile user interfaces. My latest DevByte video shows developers how to get data into and out of a Google Sheet programmatically, walking through a simple script that reads rows out of a relational database and transfers the data to a brand new Google Sheet.

Let’s take a sneak peek at the code covered in the video. Assuming that SHEETS has been established as the API service endpoint (see the sketch after the snippets below), SHEET_ID is the ID of the Sheet to write to, and data is an array with all the database rows, this is the only call developers need to make to write that raw data into the Sheet:


# The request body must be a ValueRange: rows go under the 'values' key.
SHEETS.spreadsheets().values().update(spreadsheetId=SHEET_ID,
    range='A1', body={'values': data}, valueInputOption='RAW').execute()
Reading rows out of a Sheet is even easier. With SHEETS and SHEET_ID again, this is all you need to read and display those rows:
rows = SHEETS.spreadsheets().values().get(spreadsheetId=SHEET_ID,
    range='Sheet1').execute().get('values', [])
for row in rows:
    print(row)
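For completeness, here is one way the SHEETS endpoint used above might be established. This is a minimal sketch using the Google APIs Client Library for Python with OAuth client secrets; the credential file names are illustrative:

# A sketch of establishing the SHEETS endpoint; file names are illustrative.
from googleapiclient import discovery
from httplib2 import Http
from oauth2client import file, client, tools

SCOPES = 'https://www.googleapis.com/auth/spreadsheets'
store = file.Storage('storage.json')  # caches user credentials locally
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
    creds = tools.run_flow(flow, store)
SHEETS = discovery.build('sheets', 'v4', http=creds.authorize(Http()))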

If you’re ready to get started, take a look at the Python quickstart, or the quickstarts in a variety of other languages, before checking out the DevByte. If you want a deeper dive into the code covered in the video, check out the post at my Python blog. Once you get going with the API, one of the challenges developers face is constructing the JSON payload to send in API calls—the common operations samples (and the small example below) can really help you with this. Finally, if you’re ready for a meatier example, check out our JavaScript codelab, where you’ll write a sample Node.js app that manages customer orders for a toy company; the same database is used in this DevByte, preparing you for the codelab.
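To give a flavor of those payloads, here is one small batchUpdate request body, expressed as a Python dict. It renames a spreadsheet; the title shown is just an example:

# A small batchUpdate payload: rename the spreadsheet. The 'fields'
# mask limits the update to the title property.
body = {'requests': [{
    'updateSpreadsheetProperties': {
        'properties': {'title': 'Toy company orders'},
        'fields': 'title',
    },
}]}
SHEETS.spreadsheets().batchUpdate(
    spreadsheetId=SHEET_ID, body=body).execute()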

We hope all these resources help developers create amazing applications and awesome tools with the new Google Sheets API! Please subscribe to our channel, give us your feedback below, and tell us what topics you would like to see in future episodes!