Category Archives: Google Developers Blog

News and insights on Google platforms, tools and events

Join us live on May 23rd as we announce the latest Ads, Analytics and DoubleClick innovations

Posted by Sridhar Ramaswamy Senior Vice President, Ads and Commerce

What: Google Marketing Next keynote live stream
When: Tuesday, May 23rd at 9:00 a.m. PT/12:00 p.m. ET.
Duration: 1 hour
Where: On the Inside AdWords Blog



Be the first to hear about Google’s latest marketing innovations, the moment they’re announced. Watch live as my team and I share new Ads, Analytics and DoubleClick innovations designed to improve your ability to reach consumers, simplify campaign measurement and increase your productivity. We’ll also give you a sneak peek at how brands are starting to use the Google Assistant to delight customers.

Register for the live stream here.

Until then, follow us on Twitter, Google+, Facebook and LinkedIn for previews of what's to come.

XLA – TensorFlow, compiled

By the XLA team within Google, in collaboration with the TensorFlow team


One of the design goals and core strengths of TensorFlow is its flexibility. TensorFlow was designed to be a flexible and extensible system for defining arbitrary data flow graphs and executing them efficiently in a distributed manner using heterogeneous computing devices (such as CPUs and GPUs).


But flexibility is often at odds with performance. While TensorFlow aims to let you define any kind of data flow graph, it’s challenging to make all graphs execute efficiently because TensorFlow optimizes each op separately. When an op with an efficient implementation exists or when each op is a relatively heavyweight operation, all is well; otherwise, the user can still compose this op out of lower-level ops, but this composition is not guaranteed to run in the most efficient way.



This is why we’ve developed XLA (Accelerated Linear Algebra), a compiler for TensorFlow. XLA uses JIT compilation techniques to analyze the TensorFlow graph created by the user at runtime, specialize it for the actual runtime dimensions and types, fuse multiple ops together and emit efficient native machine code for them - for devices like CPUs, GPUs and custom accelerators (e.g. Google’s TPU).


Fusing composable ops for increased performance

Consider the tf.nn.softmax op, for example. It computes the softmax activations of its parameter as follows:

softmax(logits)[i] = exp(logits[i]) / Σⱼ exp(logits[j])


Softmax can be implemented as a composition of primitive TensorFlow ops (exponent, reduction, elementwise division, etc.):

softmax = exp(logits) / reduce_sum(exp(logits), dim)
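
In TensorFlow 1.x terms, that decomposition might look like the small sketch below (the placeholder name and shape are arbitrary; this is just an illustration of composing primitive ops versus using the compound op):

import tensorflow as tf

logits = tf.placeholder(tf.float32, shape=[None], name="logits")
# Decomposed softmax built from primitive ops (exp, reduce_sum, division).
softmax_decomposed = tf.exp(logits) / tf.reduce_sum(tf.exp(logits))
# Compound op with a hand-optimized kernel, for comparison.
softmax_builtin = tf.nn.softmax(logits)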


This could potentially be slow, due to the extra data movement and materialization of temporary results that aren’t needed outside the op. Moreover, on co-processors like GPUs such a decomposed implementation could result in multiple “kernel launches” that make it even slower.


XLA is the secret compiler sauce that helps TensorFlow optimize compositions of primitive ops automatically. TensorFlow, augmented with XLA, retains flexibility without sacrificing runtime performance by analyzing the graph at runtime, fusing ops together and producing efficient machine code for the fused subgraphs.
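
As a rough sketch of how this could be switched on around the TensorFlow 1.0 timeframe, the experimental global JIT flag asks TensorFlow to compile eligible subgraphs with XLA; the exact knob is experimental and may differ across versions:

import tensorflow as tf

# Sketch: ask TensorFlow to JIT-compile eligible subgraphs with XLA.
# Ops without XLA support simply fall back to the normal executor.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = (
    tf.OptimizerOptions.ON_1)

with tf.Session(config=config) as sess:
  pass  # build and run your graph as usual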


For example, a decomposed implementation of softmax as shown above would be optimized by XLA to be as fast as the hand-optimized compound op.


More generally, XLA can take whole subgraphs of TensorFlow operations and fuse them into efficient loops that require a minimal number of kernel launches. Consider, for example, a graph that computes a matmul followed by a bias add, a ReLU and a softmax:

Many of the operations in this graph can be fused into a single element-wise loop. Consider a single element of the bias vector being added to a single element from the matmul result, for example. The result of this addition is a single element that can be compared with 0 (for ReLU). The result of the comparison can be exponentiated and divided by the sum of exponents of all inputs, resulting in the output of softmax. We don’t really need to create the intermediate arrays for matmul, add, and ReLU in memory.


s[j] = softmax[j](ReLU(bias[j] + matmul_result[j]))


A fused implementation can compute the end result within a single element-wise loop, without allocating needless memory. In more advanced scenarios, these operations can even be fused into the matrix multiplication.
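
To make the idea concrete, here is a small NumPy sketch (purely illustrative, not actual XLA output) contrasting the decomposed version, which materializes a temporary array per op, with a fused pass that only allocates the output:

import numpy as np

def unfused(bias, matmul_result):
  # Each primitive op materializes a full temporary array.
  added = matmul_result + bias        # temp 1
  relu = np.maximum(added, 0.0)       # temp 2
  exps = np.exp(relu)                 # temp 3
  return exps / exps.sum()

def fused(bias, matmul_result):
  # One element-wise pass; only the output array is allocated.
  out = np.empty_like(matmul_result)
  denom = 0.0
  for j in range(out.shape[0]):
    e = np.exp(max(matmul_result[j] + bias[j], 0.0))
    out[j] = e
    denom += e
  return out / denom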


XLA helps TensorFlow retain its flexibility while eliminating performance concerns.


On internal benchmarks, XLA shows up to 50% speedups over TensorFlow without XLA on Nvidia GPUs. The biggest speedups come, as expected, in models with long sequences of elementwise operations that can be fused to efficient loops. However, XLA should still be considered experimental, and some benchmarks may experience slowdowns.


In this talk from the TensorFlow Developer Summit, Chris Leary and Todd Wang describe how TensorFlow can make use of XLA, JIT, AOT, and other compilation techniques to minimize execution time and maximize computing resources.


Extreme specialization for executable size reduction


In addition to improved performance, TensorFlow models can benefit from XLA in restricted-memory environments (such as mobile devices) due to the executable size reduction it provides. tfcompile is a tool that leverages XLA for ahead-of-time (AOT) compilation: a whole graph is compiled with XLA, which then emits tight machine code that implements the ops in the graph. Coupled with a minimal runtime, this scheme provides considerable size reductions.


For example, given a 3-deep, 60-wide stacked LSTM model on android-arm, the original TF model size is 2.6 MB (1 MB runtime + 1.6 MB graph); when compiled with XLA, the size goes down to 600 KB.


This size reduction is achieved by the full specialization of the model implied by its static compilation. When the model runs, the full power and flexibility of the TensorFlow runtime is not required - only the ops implementing the actual graph the user is interested in are compiled to native code. That said, the performance of the code emitted by the CPU backend of XLA is still far from optimal; this part of the project requires more work.

Support for alternative backends and devices


To execute TensorFlow graphs on a new kind of computing device today, one has to re-implement all the TensorFlow ops (kernels) for the new device. Depending on the device, this can be a very significant amount of work.


By design, XLA makes supporting new devices much easier by adding custom backends. Since TensorFlow can target XLA, one can add a new device backend to XLA and thus enable it to run TensorFlow graphs. XLA provides a significantly smaller implementation surface for new devices, since XLA operations are just the primitives (recall that XLA handles the decomposition of complex ops on its own). We’ve documented the process for adding a custom backend to XLA on this page. Google uses this mechanism to target TPUs from XLA.


Conclusion and looking forward


XLA is still in the early stages of development. It is showing very promising results for some use cases, and it is clear that TensorFlow can benefit even more from this technology in the future. We decided to release XLA early on the TensorFlow GitHub repository to solicit contributions from the community and to provide a convenient surface for optimizing TensorFlow for various computing devices, as well as for retargeting the TensorFlow runtime and models to run on new kinds of hardware.


Introducing Python Fire, a library for automatically generating command line interfaces

By David Bieber, Software Engineer on Google Brain

Originally posted on the Google Open Source Blog


Today we are pleased to announce the open-sourcing of Python Fire. Python Fire generates command line interfaces (CLIs) from any Python code. Simply call the Fire function in any Python program to automatically turn that program into a CLI. The library is available from PyPI via `pip install fire`, and the source is available on GitHub.


Python Fire will automatically turn your code into a CLI without you needing to do any additional work. You don't have to define arguments, set up help information, or write a main function that defines how your code is run. Instead, you simply call the `Fire` function from your main module, and Python Fire takes care of the rest. It uses inspection to turn whatever Python object you give it -- whether it's a class, an object, a dictionary, a function, or even a whole module -- into a command line interface, complete with tab completion and documentation, and the CLI will stay up-to-date even as the code changes.


To illustrate this, let's look at a simple example.
#!/usr/bin/env python
import fire


class Example(object):
  def hello(self, name='world'):
    """Says hello to the specified name."""
    return 'Hello {name}!'.format(name=name)


def main():
  fire.Fire(Example)


if __name__ == '__main__':
  main()


When the Fire function is run, our command will be executed. Just by calling Fire, we can now use the Example class as if it were a command line utility.


$ ./example.py hello
Hello world!
$ ./example.py hello David
Hello David!
$ ./example.py hello --name=Google
Hello Google!


Of course, you can continue to use this module like an ordinary Python library, enabling you to use the exact same code both from Bash and Python. If you're writing a Python library, then you no longer need to update your main method or client when experimenting with it; instead you can simply run the piece of your library that you're experimenting with from the command line. Even as the library changes, the command line tool stays up to date.
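
For instance, assuming the file above is saved as example.py, the very same class can be used from Python with no CLI plumbing in the way:

from example import Example

# The class remains an ordinary Python library.
print(Example().hello())          # Hello world!
print(Example().hello('Google'))  # Hello Google!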


At Google, engineers use Python Fire to generate command line tools from Python libraries. We have an image manipulation tool built by using Fire with the Python Imaging Library, PIL. In Google Brain, we use an experiment management tool built with Fire, allowing us to manage experiments equally well from Python or from Bash.


Every Fire CLI comes with an interactive mode. Run the CLI with the `--interactive` flag to launch an IPython REPL with the result of your command, as well as other useful variables already defined and ready to use. Be sure to check out Python Fire's documentation for more on this and the other useful features Fire provides.


Between Python Fire's simplicity, generality, and power, we hope you find it a useful library for your own projects.

Apply now to Launchpad Accelerator—now including Africa and Europe!

Posted By: Roy Glasberg, Global Lead, Launchpad Program & Accelerator

After recently hosting another amazing group of startups for Launchpad Accelerator, we're ready to kick things off again with the next class! Apply here by 9am PST on April 24, 2017.

Starting today, we'll be accepting applications from growth-stage innovative tech startups from these countries:
  • Asia: India, Indonesia, Thailand, Vietnam, Malaysia and the Philippines
  • Latin America: Argentina, Brazil, Chile, Colombia and Mexico
And we're delighted to expand the program to countries in Africa and Europe for the first time!
  • Africa: Kenya, Nigeria and South Africa
  • Europe: Czech Republic, Hungary and Poland
The equity-free program will begin on July 17th, 2017 at the Google Developers Launchpad Space in San Francisco and will include 2 weeks of all-expense-paid training.

What are the benefits?

The training at Google HQ includes intensive mentoring from 20+ Google teams, and expert mentors from top technology companies and VCs in Silicon Valley. Participants receive equity-free support, credits for Google products, PR support and continue to work closely with Google back in their home country during the 6-month program.

What do we look for when selecting startups?

Each startup that applies to the Launchpad Accelerator is considered holistically and with great care. Below are general guidelines behind our process to help you understand what we look for in our candidates.

All startups in the program must:
  • Be a technological startup.
  • Be targeting their local markets.
  • Have proven product-market fit (beyond ideation stage).
  • Be based in the countries listed above.
Additionally, we are interested in what kind of startup you are. We also consider:
  • The problem you are trying to solve. How does it create value for users? How are you addressing a real challenge for your home city, country or region?
  • Does your management team have a leadership mindset and the drive to become an influencer?
  • Will you share what you learn in Silicon Valley for the benefit of other startups in your local ecosystem?
If you're based outside of these countries, stay tuned, as we expect to add more countries to the program in the future.

We can't wait to learn more about your startup and work together to solve your challenges and help grow your business.

“Level up” your gaming business with new innovations for apps

Originally shared on the Inside AdMob Blog
Posted by Sissie Hsiao, Product Director, Mobile Advertising, Google. Last played Fire Emblem Heroes for Android

Mobile games mean more than just fun. They mean business. Big business. According to App Annie, game developers should capture almost half of the $189B global market for in-app purchases and advertising by 2020.¹

Later today, at the Game Developers Conference (GDC) in San Francisco, I look forward to sharing a series of new innovations across ad formats, monetization tools and measurement insights for apps.

  • New playable and video ad formats to get more people into your game
  • Integrations to help you create better monetization experiences 
  • Measurement tools that provide insights about how players are interacting with your game
Let more users try your game with a playable ad format

There’s no better way for a new user to experience your game than to actually play it. So today, we introduced playables, an interactive ad format in Universal App Campaigns that allows users to play a lightweight version of your game, right when they see it in any of the 1M+ apps in the Google Display Network.

Jam City’s playable ad for Cookie Jam

Playables help you get more qualified installs from users who tried your game in the ad and made the choice to download it for more play time. By attracting already-engaged users into your app, playables help you drive the long-term outcomes you care about — rounds played, levels beat, trophies won, purchases made and more.

"Jam City wants to put our games in the hands of more potential players as quickly as possible. Playables get new users into the game right from the ad, which we've found drives more engagement and long-term customer value." Josh Yguado, President & COO Jam City, maker of Panda Pop and Cookie Jam.

Playables will be available for developers through Universal App Campaigns in the coming months, and will be compatible with HTML5 creatives built through Google Web Designer or third-party agencies.

Improve the video experience with ads designed for mobile viewing

Most mobile video ad views on the Google Display Network are watched on devices held vertically.² This can create a poor experience when users encounter video ad creatives built for horizontal viewing.

Developers using Universal App Campaigns will soon be able to use an auto-flip feature that automatically orients your video ads to match the way users are holding their phones. If you upload a horizontal video creative in AdWords, we will automatically create a second, vertical version for you.

Cookie Jam horizontal video and vertical-optimized video created through auto-flip technology

The auto-flip feature uses Google's machine learning technology to identify the most important objects in every frame of your horizontal video creative. It then produces an optimized, vertical version of your video ad that highlights those important components of your original asset. Early tests show that click-through rates are about 20% higher on these dynamically-generated vertical videos than on horizontal video ads watched vertically.³

Unlock new business with rewarded video formats, and free, unlimited reporting

Developers have embraced AdMob's platform to mediate rewarded video ads as a way to let users watch ads in exchange for an in-app reward. Today, we are delighted to announce that we are bringing Google’s video app install advertising demand from AdWords to AdMob, significantly increasing rewarded demand available to developers. Advertisers that use Universal App Campaigns can seamlessly reach this engaged, game-playing audience using your existing video creatives.

We are also investing in better measurement tools for developers by bringing the power of Firebase Analytics to more game developers with a generally available C++ SDK and an SDK for Unity, a leading gaming engine.

C++ and Unity developers can now access Firebase Analytics for real-time player insights

With Firebase Analytics, C++ and Unity developers can now capture billions of daily events — like level completes and play time — to get more nuanced player insights and gain a deeper understanding of metrics like daily active users, average revenue per user and player lifetime value.

This is an exciting time to be a game developer. It’s been a privilege to meet so many of you at GDC 2017 and learn about the amazing games that you’re all building. We hope the innovations we announced today help you grow long-term gaming businesses and we look forward to continuing on this journey with you.

Until next year, GDC!

1 - App Monetization Report, November 2016, App Annie
2 - More than 80% of video ad views in mobile apps on the Google Display Network are from devices held vertically, Google Internal Data
3 - Google Internal Data

Adding text and shapes with the Google Slides API

Originally shared on the G Suite Developers Blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite
When the Google Slides team launched their very first API last November, it immediately opened up a whole new class of applications. These applications can interact with the Slides service, so you can perform operations on presentations programmatically. Since its launch, we've published several videos to help you realize some of those possibilities.
Today, we're releasing the latest Slides API tutorial in our video series. This one goes back to basics a bit: adding text to presentations. But we also discuss shapes—not only adding shapes to slides, but also adding text within shapes. Most importantly, we cover one best practice when using the API: create your own object IDs. By doing this, developers can execute more requests while minimizing API calls.



Developers use insertText requests to tell the API to add text to slides. This is true whether you're adding text to a textbox, a shape or table cell. Similar to the Google Sheets API, all requests are made as JSON payloads sent to the API's batchUpdate() method. Here's the JavaScript for inserting text in some object (objectID) on a slide:
{
  "insertText": {
    "objectId": objectID,
    "text": "Hello World!\n"
  }
}
Adding shapes is a bit more challenging, as you can see from its sample JSON structure:

{
  "createShape": {
    "shapeType": "SMILEY_FACE",
    "elementProperties": {
      "pageObjectId": slideID,
      "size": {
        "height": {
          "magnitude": 3000000,
          "unit": "EMU"
        },
        "width": {
          "magnitude": 3000000,
          "unit": "EMU"
        }
      },
      "transform": {
        "unit": "EMU",
        "scaleX": 1.3449,
        "scaleY": 1.3031,
        "translateX": 4671925,
        "translateY": 450150
      }
    }
  }
}
Placing or manipulating shapes or images on slides requires more information so the cloud service can properly render these objects. Be aware that it does involve some math, as you can see from the Page Elements page in the docs as well as the Transforms concept guide. In the video, I drop a few hints and good practices so you don't have to start from scratch.

Regardless of how complex your requests are, if you have at least one, say in an array named requests, you'd make an API call with the aforementioned batchUpdate() method, which in Python looks like this (assuming SLIDES is the service endpoint and a presentation ID of deckID):

SLIDES.presentations().batchUpdate(presentationId=deckID,
    body={'requests': requests}).execute()
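
Putting it together, here's a hedged sketch (not the full DevByte sample) that follows the create-your-own-object-IDs tip: by choosing the shape's objectId ourselves, the follow-up insertText request can reference it in the same batch, without an extra read call. SLIDES, deckID and slideID are assumed to be defined as above, and SHAPE_ID is just a placeholder value.

SHAPE_ID = 'my-smiley-01'  # hypothetical ID we pick ourselves
requests = [
    {'createShape': {
        'objectId': SHAPE_ID,
        'shapeType': 'SMILEY_FACE',
        'elementProperties': {
            'pageObjectId': slideID,
            'size': {'height': {'magnitude': 3000000, 'unit': 'EMU'},
                     'width':  {'magnitude': 3000000, 'unit': 'EMU'}},
            'transform': {'unit': 'EMU', 'scaleX': 1.0, 'scaleY': 1.0,
                          'translateX': 4671925, 'translateY': 450150},
        },
    }},
    {'insertText': {'objectId': SHAPE_ID, 'text': 'Hello World!\n'}},
]
SLIDES.presentations().batchUpdate(presentationId=deckID,
    body={'requests': requests}).execute()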
For a detailed look at the complete code sample featured in the DevByte, check out the deep dive post. As you can see, adding text is fairly straightforward. If you want to learn how to format and style that text, check out the Formatting Text post and video as well as the text concepts guide.
To learn how to perform text search-and-replace, say to replace placeholders in a template deck, check out the Replacing Text & Images post and video as well as the merging data into slides guide. We hope these developer resources help you create that next great app that automates the task of producing presentations for your users!

Debug TensorFlow Models with tfdbg

Posted by Shanqing Cai, Software Engineer, Tools and Infrastructure.

We are excited to share TensorFlow Debugger (tfdbg), a tool that makes debugging machine learning (ML) models in TensorFlow easier.
TensorFlow, Google's open-source ML library, is based on dataflow graphs. A typical TensorFlow ML program consists of two separate stages:
  1. Setting up the ML model as a dataflow graph by using the library's Python API,
  2. Training or performing inference on the graph by using the Session.run() method.
If errors and bugs occur during the second stage (i.e., the TensorFlow runtime), they are difficult to debug.

To understand why that is the case, note that to standard Python debuggers, the Session.run() call is effectively a single statement and does not expose the running graph's internal structure (nodes and their connections) and state (output arrays or tensors of the nodes). Lower-level debuggers such as gdb cannot organize stack frames and variable values in a way relevant to TensorFlow graph operations. A specialized runtime debugger has been among the most frequently raised feature requests from TensorFlow users.

tfdbg addresses this runtime debugging need. Let's see tfdbg in action with a short snippet of code that sets up and runs a simple TensorFlow graph to fit a simple linear equation through gradient descent.

import numpy as np
import tensorflow as tf
import tensorflow.python.debug as tf_debug

xs = np.linspace(-0.5, 0.49, 100)
x = tf.placeholder(tf.float32, shape=[None], name="x")
y = tf.placeholder(tf.float32, shape=[None], name="y")
k = tf.Variable([0.0], name="k")
y_hat = tf.multiply(k, x, name="y_hat")
sse = tf.reduce_sum((y - y_hat) * (y - y_hat), name="sse")
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.02).minimize(sse)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

sess = tf_debug.LocalCLIDebugWrapperSession(sess)  # wrap the session for debugging
for _ in range(10):
  sess.run(train_op, feed_dict={x: xs, y: 42 * xs})

As the line wrapping the session in this example shows, the session object is wrapped in a debugging class (LocalCLIDebugWrapperSession), so calling the run() method will launch the command-line interface (CLI) of tfdbg. Using mouse clicks or commands, you can proceed through the successive run calls, inspect the graph's nodes and their attributes, and visualize the complete history of the execution of all relevant nodes in the graph through the list of intermediate tensors. By using the invoke_stepper command, you can let the Session.run() call execute in "stepper mode", in which you can step to nodes of your choice, observe and modify their outputs, and then take further stepping actions, in a way analogous to debugging procedural languages (e.g., in gdb or pdb).

A frequently encountered class of issues in developing TensorFlow ML models is the appearance of bad numerical values (infinities and NaNs) due to overflow, division by zero, log of zero, etc. In large TensorFlow graphs, finding the source of such nodes can be tedious and time-consuming. With the help of the tfdbg CLI and its conditional breakpoint support, you can quickly identify the culprit node. The video below demonstrates how to debug infinity/NaN issues in a neural network with tfdbg:

A screencast of the TensorFlow Debugger in action, from this tutorial.
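
For reference, a minimal sketch of the conditional-breakpoint setup described in that tutorial looks like this (it reuses the sess from the snippet above; the has_inf_or_nan filter ships with tfdbg, while the filter name passed to `run -f` is our own choice):

import tensorflow.python.debug as tf_debug

sess = tf_debug.LocalCLIDebugWrapperSession(sess)
# Register a tensor filter that flags any tensor containing inf or NaN values.
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)

# Then, at the tfdbg prompt, run until the filter fires:
#   tfdbg> run -f has_inf_or_nan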


Compared with alternative debugging options such as Print ops, tfdbg requires fewer lines of code change, provides more comprehensive coverage of the graphs, and offers a more interactive debugging experience. It will speed up your model development and debugging workflows. It offers additional features such as offline debugging of dumped tensors from server environments and integration with tf.contrib.learn. To get started, please visit this documentation. This research paper lays out the design of tfdbg in greater detail.

The minimum required TensorFlow version for tfdbg is 0.12.1. To report bugs, please open issues on TensorFlow's GitHub Issues Page. For general usage help, please post questions on Stack Overflow using the tag tensorflow.
Acknowledgements
This project would not be possible without the help and feedback from members of the Google TensorFlow Core/API Team and the Applied Machine Intelligence Team.





Announcing TensorFlow 1.0

Posted By: Amy McDonald Sandjideh, Technical Program Manager, TensorFlow

In just its first year, TensorFlow has helped researchers, engineers, artists, students, and many others make progress with everything from language translation to early detection of skin cancer and preventing blindness in diabetics. We're excited to see people using TensorFlow in over 6000 open-source repositories online.


Today, as part of the first annual TensorFlow Developer Summit, hosted in Mountain View and livestreamed around the world, we're announcing TensorFlow 1.0:


It's faster: TensorFlow 1.0 is incredibly fast! XLA lays the groundwork for even more performance improvements in the future, and tensorflow.org now includes tips & tricks for tuning your models to achieve maximum speed. We'll soon publish updated implementations of several popular models to show how to take full advantage of TensorFlow 1.0 - including a 7.3x speedup on 8 GPUs for Inception v3 and 58x speedup for distributed Inception v3 training on 64 GPUs!


It's more flexible: TensorFlow 1.0 introduces a high-level API for TensorFlow, with tf.layers, tf.metrics, and tf.losses modules. We've also announced the inclusion of a new tf.keras module that provides full compatibility with Keras, another popular high-level neural networks library.


It's more production-ready than ever: TensorFlow 1.0 promises Python API stability (details here), making it easier to pick up new features without worrying about breaking your existing code.

Other highlights from TensorFlow 1.0:

  • Python APIs have been changed to resemble NumPy more closely. For this and other backwards-incompatible changes made to support API stability going forward, please use our handy migration guide and conversion script.
  • Experimental APIs for Java and Go
  • Higher-level API modules tf.layers, tf.metrics, and tf.losses - brought over from tf.contrib.learn after incorporating skflow and TF Slim
  • Experimental release of XLA, a domain-specific compiler for TensorFlow graphs, that targets CPUs and GPUs. XLA is rapidly evolving - expect to see more progress in upcoming releases.
  • Introduction of the TensorFlow Debugger (tfdbg), a command-line interface and API for debugging live TensorFlow programs.
  • New Android demos for object detection and localization, and camera-based image stylization.
  • Installation improvements: Python 3 docker images have been added, and TensorFlow's pip packages are now PyPI compliant. This means TensorFlow can now be installed with a simple invocation of pip install tensorflow.
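
As a quick taste of the new high-level modules mentioned above, here's a minimal, hedged sketch of a classifier head written with tf.layers and tf.losses (the layer sizes and placeholder shapes are arbitrary examples, not from the announcement):

import tensorflow as tf

features = tf.placeholder(tf.float32, shape=[None, 784])
labels = tf.placeholder(tf.int32, shape=[None])

# Two dense layers from the new tf.layers module.
hidden = tf.layers.dense(features, units=128, activation=tf.nn.relu)
logits = tf.layers.dense(hidden, units=10)

# Loss from the new tf.losses module.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)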

We're thrilled to see the pace of development in the TensorFlow community around the world. To hear more about TensorFlow 1.0 and how it's being used, you can watch the TensorFlow Developer Summit talks on YouTube, covering recent updates from higher-level APIs to TensorFlow on mobile to our new XLA compiler, as well as the exciting ways that TensorFlow is being used:





Click here for a link to the livestream and video playlist (individual talks will be posted online later in the day).


The TensorFlow ecosystem continues to grow with new techniques like Fold for dynamic batching and tools like the Embedding Projector along with updates to our existing tools like TensorFlow Serving. We're incredibly grateful to the community of contributors, educators, and researchers who have made advances in deep learning available to everyone. We look forward to working with you on forums like GitHub issues, Stack Overflow, @TensorFlow, the discuss@tensorflow.org group, and at future events.



G Suite Developer Sessions at Google Cloud Next 2017

Originally posted on the G Suite Developers Blog

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

There are over 200 sessions happening next month at Google Cloud's Next 2017 conference in San Francisco... so many choices! Along with content geared towards Google Cloud Platform, this year features the addition of G Suite so all 3 pillars of cloud computing (IaaS, PaaS, SaaS) are represented!


There are already thousands of developers including Independent Software Vendors (ISVs) creating solutions to help schools and enterprises running the G Suite collaboration and productivity suite (formerly Google Apps). If you're thinking about becoming one, consider building applications that extend, enhance, and integrate G Suite apps and data with other mission critical systems to help businesses and educational institutions succeed.


Looking for inspiration? Here's a preview of some of the sessions that current and potential G Suite developers should consider:


The first covers the latest Google Sheets API (see its intro blog post & video) as well as the Google Slides API (intro blog post & video). Part of the talk also covers Google Apps Script, the JavaScript-in-the-cloud solution that gives developers programmatic access to authorized G Suite data along with the ability to connect to other Google and external services.


If that's not enough Apps Script for you, or you're new to that technology, swing by to hear its Product Manager give an introduction in his talk; there's also a quick intro video to give you an idea of what you can do with it!


Did you know that Apps Script also powers "add-ons," which extend the functionality of Google Docs, Sheets, and Forms? Then come learn about the G Suite Marketplace, where administrators or employees can install your add-ons for their organizations.


In addition to Apps Script apps, all your Google Docs, Sheets, and Slides documents live in Google Drive. But did you know that Drive is not just for individual file storage? Hear directly from a Drive Product Manager on how, with the Drive API and Team Drives, you can extend what Drive can do for your organization. One example from the most recent Google I/O tells the story of how WhatsApp used the Drive API to back up all your conversations! To get started with your own Drive API integration, check out this blog post and short video. Confused by when you should use Google Drive or Google Cloud Storage? I've got an app, err video, for that too! :-)


Not a software engineer but still code as part of your profession? Want to build a custom app for your department or line of business without having to worry about IT overhead? You may have heard about Google App Maker, our low-code development tool that does exactly that. Curious to learn more about it? Hear directly from its Product Manager lead in his talk.

All of these talks are just waiting for you at Next, the best place to get your feet wet developing for G Suite and, of course, the Google Cloud Platform. Start by checking out the session schedule. Next will also offer many opportunities to meet and interact with industry peers along with representatives from all over Google who love the cloud. Register today and see you in San Francisco!




Introducing Google Developers India: A Local YouTube Channel for India's Mobile Development Revolution

Posted By Peter Lubbers, Senior Program Manager

Today, we're launching the Google Developers India channel: a brand new YouTube channel tailored for Indian developers. The channel will include original content like interviews with local experts, developer spotlights, technical tutorials, and complete Android courses to help you be a successful developer.

Why India?

By 2018, India will have the largest developer base in the world with over 4 million developers. Our initiative to train 2 million Indian developers, along with the tremendous popularity of mobile development in the country and the desire to build better mobile apps, will be best served by an India-specific developer channel featuring Indian developers, influencers, and experts.



Here is a taste of what's to come in 2017:
  • Tech Interviews: Advice from India's top developers, influencers and tech experts.
  • Developer Stories: Inspirational stories of Indian developers.
  • DevShow India: A weekly show that will keep new and seasoned developers updated on all the news, trainings, and APIs from Google.
  • Skilled to Scaled: A real-life developer journey that takes us from the germination of an idea for an app, all the way to monetizing it on Google Play.
So what's next?


The channel is live now. Click here to check it out.