Monthly Archives: February 2017

Debug TensorFlow Models with tfdbg

Posted by Shanqing Cai, Software Engineer, Tools and Infrastructure.

We are excited to share TensorFlow Debugger (tfdbg), a tool that makes debugging machine learning (ML) models in TensorFlow easier.
TensorFlow, Google's open-source ML library, is based on dataflow graphs. A typical TensorFlow ML program consists of two separate stages:
  1. Setting up the ML model as a dataflow graph by using the library's Python API,
  2. Training or performing inference on the graph by using the Session.run() method.
If errors and bugs occur during the second stage (i.e., the TensorFlow runtime), they are difficult to debug.

To understand why that is the case, note that to standard Python debuggers, the Session.run() call is effectively a single statement and does not expose the running graph's internal structure (nodes and their connections) or state (the output arrays, or tensors, of the nodes). Lower-level debuggers such as gdb cannot organize stack frames and variable values in a way relevant to TensorFlow graph operations. A specialized runtime debugger has been among the most frequently raised feature requests from TensorFlow users.

tfdbg addresses this runtime debugging need. Let's see tfdbg in action with a short snippet of code that sets up and runs a simple TensorFlow graph to fit a simple linear equation through gradient descent.

import numpy as np
import tensorflow as tf
import tensorflow.python.debug as tf_debug

# Build a simple linear model y_hat = k * x, to be fit by gradient descent.
xs = np.linspace(-0.5, 0.49, 100)
x = tf.placeholder(tf.float32, shape=[None], name="x")
y = tf.placeholder(tf.float32, shape=[None], name="y")
k = tf.Variable([0.0], name="k")
y_hat = tf.multiply(k, x, name="y_hat")
sse = tf.reduce_sum((y - y_hat) * (y - y_hat), name="sse")
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.02).minimize(sse)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Wrap the session for debugging: each run() call below now starts the tfdbg CLI.
sess = tf_debug.LocalCLIDebugWrapperSession(sess)
for _ in range(10):
    sess.run(train_op, feed_dict={x: xs, y: 42 * xs})

As the wrapped session in this example shows, the session object is wrapped with a debugging class (LocalCLIDebugWrapperSession), so calling the run() method launches the command-line interface (CLI) of tfdbg. Using mouse clicks or commands, you can proceed through the successive run calls, inspect the graph's nodes and their attributes, and visualize the complete execution history of all relevant nodes in the graph through the list of intermediate tensors. By using the invoke_stepper command, you can let the Session.run() call execute in "stepper mode", in which you can step to nodes of your choice, observe and modify their outputs, and then take further stepping actions, in a way analogous to debugging procedural languages (e.g., in gdb or pdb).
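For a flavor of the workflow, here is a minimal sketch of a tfdbg CLI session against the example graph above (the command names follow the tfdbg documentation; the annotations after # describe what each command does):

run                      # execute the next Session.run() call under the debugger
lt                       # list the intermediate tensors dumped during the run
pt y_hat:0               # print the value of a tensor from the example graph
ni -a y_hat              # show a node's attributes
invoke_stepper           # enter stepper mode to step through nodes one by one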

A class of frequently encountered issues in developing TensorFlow ML models is the appearance of bad numerical values (infinities and NaNs) due to overflow, division by zero, log of zero, and so on. In large TensorFlow graphs, finding the source of such values can be tedious and time-consuming. With the help of the tfdbg CLI and its conditional breakpoint support, you can quickly identify the culprit node. The video below demonstrates how to debug infinity/NaN issues in a neural network with tfdbg:

A screencast of the TensorFlow Debugger in action, from this tutorial.
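In code, this kind of conditional breakpoint is set up by registering a tensor filter on the wrapped session. A minimal sketch using the built-in has_inf_or_nan filter that ships with tfdbg:

import tensorflow.python.debug as tf_debug

sess = tf_debug.LocalCLIDebugWrapperSession(sess)
# Register the built-in filter that flags any tensor containing an inf or NaN.
sess.add_tensor_filter("has_inf_or_nan", tf_debug.has_inf_or_nan)

Once the filter is registered, entering run -f has_inf_or_nan at the tfdbg prompt executes Session.run() calls until the first intermediate tensor containing an infinity or NaN appears, landing you right at the offending node.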


Compared with alternative debugging options such as Print ops, tfdbg requires fewer lines of code change, provides more comprehensive coverage of the graph, and offers a more interactive debugging experience. It will speed up your model development and debugging workflows. It also offers additional features, such as offline debugging of tensors dumped from server environments and integration with tf.contrib.learn. To get started, please visit this documentation. This research paper lays out the design of tfdbg in greater detail.
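As a sketch of that offline workflow (assuming the DumpingDebugWrapperSession wrapper available in later TensorFlow releases; the dump directory path is an illustrative placeholder), a server process can write debug dumps to disk instead of opening an interactive CLI:

import tensorflow.python.debug as tf_debug

# Each run() call writes its debug dumps under the given directory,
# to be inspected later with the offline tfdbg tools.
sess = tf_debug.DumpingDebugWrapperSession(sess, "/tmp/tfdbg_dumps")
sess.run(train_op, feed_dict={x: xs, y: 42 * xs})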

The minimum required TensorFlow version for tfdbg is 0.12.1. To report bugs, please open issues on TensorFlow's GitHub Issues Page. For general usage help, please post questions on Stack Overflow using the tag tensorflow.
Acknowledgements
This project would not be possible without the help and feedback from members of the Google TensorFlow Core/API Team and the Applied Machine Intelligence Team.

Developer Advocates offer up their favorite Google Cloud NEXT 17 sessions

Here at Google Cloud, we employ a small army of developer advocates, DAs for short, who are out on the front lines at conferences, on customer premises, and on social media, explaining our technologies and communicating back to people like me and our product teams about your needs as members of the development community.

DAs take the responsibility of advocating for developers seriously, and have spent time poring over the extensive Google Cloud Next '17 session catalog, bookmarking the talks that will benefit you. To wit:
  • If you’re a developer working in Ruby, you know to turn to Aja Hammerly for all things Ruby/Google Cloud Platform (GCP)-related. Aja’s top pick for Rubyists at Next is Google Cloud Platform <3 Ruby with Google Developer Program Engineer Remi Taylor, but there are other noteworthy mentions on her personal blog.
  • Mete Atamel is your go-to DA for all things Windows on GCP. Selfishly, his top Next session is his own about running ASP.NET apps on GCP, but he has plenty more suggestions for you to choose from.
  • Groovy nut Guillaume Laforge is going to be one busy guy at Next, jumping between sessions about PaaS, serverless and containers, to name a few. Here’s his full list of must-see sessions.
  • If you’re a game developer, let Mark Mandel be your guide. Besides co-presenting with Rob Whitehead, CTO of Improbable, Mark has bookmarked sessions about location-based gaming, using GPUs and game analytics. Mosey on over to his personal blog for the full list.
  • In the past year, Google Apps Script has opened the door to building amazing customizations for G Suite, our communication and collaboration platform. In this G Suite Developers blog post, Wesley Chun walks you through some of the cool Apps Script sessions, as well as sessions about App Maker and some nifty G Suite APIs. 
  • Want to attend sessions that teach you about our machine learning services? That’s where you’ll find our hands-on ML expert Sara Robinson, who, in addition to recommending her favorite Next sessions, also examines her talk from last year’s event using the Cloud Natural Language API.
For my part, I’m really looking forward to Day 3, which we’re modeling after my favorite open source conferences thanks to Sarah Novotny’s leadership. We’ll have a carefully assembled set of open talks on Kubernetes, TensorFlow and Apache Beam that cover the technologies, how to contribute, the ecosystems around them and small group discussions with the developers. For a full list of keynotes, bootcamps and breakout sessions, check out the schedule and reserve your spot.

Google Cloud at HIMSS: engaging with the healthcare and health IT community

At Google Cloud, we’re working closely with the healthcare industry to provide the technology and tools that help create better patient experiences, empower care teams to work together and accelerate research. We're focused on supporting the digital transformation of our healthcare customers through data management at scale and advancements in machine learning for timely and actionable insights.

Next week at the HIMSS Health IT Conference, we're demonstrating the latest innovations in smart data, digital health, APIs, machine learning and real-time communications from Google Cloud, Research, Search, DeepMind and Verily. Together, we offer solutions that help enable hospital and health IT customers to tackle the rapidly evolving and long standing challenges facing the healthcare industry. Here’s a preview of the Google Cloud customers and partners who are joining us at HIMSS.

For customers like the Colorado Center for Personalized Medicine (CCPM) at the University of Colorado Denver, trust and security are paramount. CCPM has worked closely with the Google Cloud Platform (GCP) team to securely manage and analyze a complicated data set to identify genetic patterns across a wide range of diseases and reveal new treatment options based on a patient’s unique DNA.

And the Broad Institute of MIT and Harvard has used Google Genomics for years to combine the power, security features and scale of GCP with the Broad Institute’s expertise in scientific analysis.

“At the Broad Institute we are committed to driving the pace of innovation through sharing and collaboration. Google Cloud Platform has profoundly transformed the way we build teams and conduct science and has accelerated our research,” William Mayo, Chief Information Officer at the Broad Institute, told us.

To continue to offer these and other healthcare customers the tools they need, today we’re announcing support for the HL7 FHIR Foundation to help the developer community advance data interoperability efforts. The FHIR open standard defines a modern, web API-based approach to communicating healthcare data, making it easier to securely communicate across the healthcare ecosystem including hospitals, labs, applications and research studies.

"Google Cloud Platform’s commitment to support the ongoing activities of the FHIR community will help advance our goal of global health data interoperability. The future of health computing is clearly in the cloud, and our joint effort will serve to accelerate this transition," said Grahame Grieve, Principal at Health Intersections, FHIR Product Lead

Beyond open source, we're committed to supporting a thriving ecosystem of partners whose solutions enable customers to improve patient care across the industry.

We’ve seen great success for our customers in collaboration with Kinvey, which launched its HIPAA-compliant digital health platform on GCP to leverage our cloud infrastructure and integrate its capabilities with our machine learning and analytics services.  

“In the past year, we’ve seen numerous organizations in healthcare, from institutions like Thomas Jefferson University and Jefferson Health that are building apps to transform care, education and research, and startups like iTether and TempTraq that are driving innovative new solutions, turn to GCP to accelerate their journey to a new patient-centric world,” said Sravish Sridhar, CEO of Kinvey.

We’ve also published a new guide for HIPAA compliance on GCP, which describes our approach to data security on GCP and provides best-practice guidance on how to securely bring healthcare workloads to the cloud.

Stop by our booth at HIMSS to hear more about how we’re working with the healthcare industry across Google. We would love to learn how we can engage with you on your next big idea to positively transform healthcare.


Understanding App Monetization with Google AdMob

All views expressed in this blog post are solely those of the author, and not Google. This guest post is from Sreeraman Thiagarajan, a Google Developers Expert in the app marketing and monetization space and a published author in the Economic Times. Sreeraman is featured as our guest blogger to share insights and tips from his experience to help AdMob developers grow their earnings. If you’re new to AdMob, be sure to sign up here.

There’s never been a better time to be an app developer in India. According to reports referenced in the Economic Times, app downloads on Google Play from India grew from 3.5 billion in 2015 to 6.2 billion in 2016 [1]. Based on this report, India now holds the top spot in the world for apps downloaded on Google Play, outranking even the US and Brazil.

As noted in a recent article by Quartz India, nearly 90% of India’s more than 220 million smartphone users have Android smartphones, so the ~6 billion app download figure comes as no surprise [2]. However, from a revenue perspective, the Times of India reported that India is far behind and does not feature among the top ten markets [3]. (According to Android Authority, Japan, the U.S. and South Korea rank highest [4].)

Worldwide, an article on Marketwired states that iOS and Android app publishers earned over 89 billion dollars in 2016 in revenue from their apps, which includes paid apps, in-app purchases (IAP) and, of course, ad revenue [5].
Indian app developers are in need of proven app monetization techniques. When exploring revenue generation opportunities with Google’s in-app advertising suite, AdMob is a great starting point.

Here’s how:

Don’t be shy or scared of using in-app ads. In my interactions with many startups and app developers, I’ve discovered a troubling belief: many developers think in-app ads are a clichéd way of generating revenue, and that they must come up with a unique and novel way of making money. It is not necessary to reinvent the wheel. Ad-supported businesses have been thriving for decades. Besides being a source of income for the publisher, advertising subsidizes the price of a product for consumers. For example, if not for ads in a local newspaper, we might have to pay 10x or more than its current selling price. Without advertising, the freemium app model might never have taken off either.

Many of the largest and most recognizable apps use advertising to support their business model. Rather than reinventing their revenue models, they constantly innovate to maximize the ad revenue. From major sporting events to longstanding publishing houses to new age tech-based content providers, every one of them smartly leverages the power of monetizing the massive eyeballs they receive by showing ads, without disrupting the user experience.

An app or game is no different from the examples above. Apps and games can generate money through ads if they garner users at scale and engage them frequently (converting users into DAUs, or daily active users). Google’s AdMob can help developers immensely in building an ad-supported app and in diversifying revenue streams beyond paid subscriptions or in-app upgrades and purchases.

Picking the right in-app ad platform: There are many options to choose from when picking an ad monetization platform. In fact, there are over 50 ad networks that app developers can choose from. They can also build their own ad serving mechanism within the app to show ‘house ads’, that is, to cross-promote their other apps or services, or sell ad inventory (such as a masthead, a branded product placed within an app, or branded power-ups in games) through direct sales teams. However, building one’s own ad suite or depending largely on direct ads is not scalable, and demands too much time and effort from developers and ad sales teams alike to work profitably. This is where AdMob makes its biggest contribution, making life easy for both iOS and Android developers.

AdMob has a built-in mechanism that lets developers show ‘house ads’ to cross promote their portfolio of other apps for free. AdMob can also power your direct deals, which lets you run your own directly-negotiated ad deals with advertisers.

Another exciting feature of AdMob is ‘mediation’. Mediation is a technology that helps an app maximize the number of ads it shows, and thus helps increase revenue. Through AdMob mediation, one can integrate nearly 40 different mobile ad networks, and even use SDK-less mediation for a select set of networks. With mediation, apps enjoy the benefits of dynamic bidding and direct integration with other ad networks, which allows automatic CPM updates and eliminates the time and effort of manually adjusting bidding floors. In terms of in-app monetization, AdMob is one handy tool that has all you need to survive, and thrive.

Watch out for part 2 of this series, where we’ll discuss optimizing and measuring app monetization. Google has made a lot of resources available on AdMob here, and if you are a developer with an app that has over 100,000 downloads, you can request a free consultation here.

[1] http://tech.economictimes.indiatimes.com/news/internet/india-is-top-market-for-google-play-store/56638573
[2] https://qz.com/886985/india-logged-the-most-android-app-downloads-and-usage-in-2016/
[3] http://timesofindia.indiatimes.com/companies/india-number-one-in-google-play-app-downloads-usage/articleshow/56680067.cms
[4] http://www.androidauthority.com/google-play-performance-q2-2015-google-and-apple-gain-big-from-new-emerging-markets-626622/
[5] http://www.marketwired.com/press-release/app-annie-reports-publishers-made-over-89-billion-as-downloads-reached-90-billion-2016-2188696.htm

Source: Inside AdMob


Get in the game with NBA VR on Daydream

Can't get enough dunks, three pointers, and last-second jumpers? Experience the NBA in a whole new way with the new NBA VR app, available on Daydream.

Catch up with highlights in your own virtual sports lounge or watch the NBA’s first original VR series, “House of Legends,” where NBA legends discuss everything from pop culture to the greatest moments of their career. The series tips off today with seven-time NBA Champion Robert Horry. New episodes featuring stars like Chauncey Billups and Baron Davis will debut regularly.

Daydream gives sports fans a new way to connect to the leagues, teams and players they care about most. The NBA VR app joins a lineup that already includes:

  • NFL VR: Get access to the NFL Immersed series featuring 360° behind-the-scenes looks into the lives of players, coaches, cheerleaders, and even fans themselves as they prepare for game day.
  • MLB.com Home Run Derby VR: Hit monster home runs with the Daydream controller in eight iconic MLB ballparks and bring home the ultimate Derby crown.
  • NextVR: From NBA games and the Kentucky Derby, to the NFL and the US Open, experience your favorite sporting events live or revisit them through highlights.

You're just a download away from being closer than ever to the sporting events and athletes you love!


Bringing digital skills training to more classrooms in Korea

Recently a group of Googlers visited Ogeum Middle School in Seoul, where they joined a junior high school class that had some fun trying out machine learning-based experiments. The students got to see neural nets in action, with experiments that have trained computers to guess what someone’s drawing, or that turn a picture taken with a smartphone into a song.

Ogeum School - Giorgio Cam
Students at Ogeum Middle School trying out Giorgio Cam, an experiment built with machine learning that lets you make music with the computer just by taking a picture. It uses image recognition to label what it sees, then it turns those labels into lyrics of a song.

We’re always excited to see kids develop a passion for technology, because it seeds an interest in using technology to solve challenges later in life.

The students at Ogeum Middle School are among the first of over 3,000 kids across Korea we hope to reach through “Digital Media Campus” (or 디지털 미디어 캠퍼스 in Korean), a new digital literacy education program. Through a Google.org grant to the Korea Federation of Science Culture and Education Studies (KOSCE), we plan to reach junior high school students in 120 schools across the country this year. Students in their ‘free semester’—a time when middle schoolers can take up electives to explore future career paths—will be able to enroll in this 32-hour course spanning 16 weeks beginning next month.

KOSCE-trained tutors will show kids how to better evaluate information online and assess the validity of online sources, teach them to use a range of digital tools so they can do things like edit videos and create infographics, and help them experience exciting technologies like AR and VR. By giving them a glimpse of how these technologies work, we hope to excite them about the endless possibilities offered by technology. Perhaps this will even encourage them to consider the world of careers that technology opens up to them.  

Helping kids to recognize these opportunities often starts with dismantling false perceptions at home. This is why we’re also offering a two-hour training session to 2,000 parents, who’ll pick up tips to help their kids use digital media.

We ran a pilot of the program last year, and have been heartened by the positive feedback we’ve received so far. Teachers and parents have told us that they appreciate the skills it teaches kids to be competitive in a digital age. And the students are excited to discover new digital tools and resources that are useful to them in their studies.

While we might not be able to reach every student with this program, we hope to play a small role in helping to inspire Korea’s next generation of tech innovators.


Explore new dimensions of film at Adelaide Fringe Festival, with a wave of your smartphone

Falling in love is often described as a temporary madness. We feel time stop, lose our words, and find ourselves head over heels. Through a new interactive film, Love at Fifth Site, audiences are invited to explore the joy and the jungle of love with the wave of a smartphone.

Our Creative Lab team and Grumpy Sailor Creative are presenting Love at Fifth Site, premiering at the Adelaide Fringe Festival from 17 February to 19 March, with a cast of young Australian talent, including Susie Youssef, Shannon Murphy and Rarriwuy Hick. Love at Fifth Site allows the audience to ‘shine a light’ onto the inner monologue of the film’s protagonists across a series of serendipitous and sometimes awkward encounters.


Through an installation of ‘mini-sets’ across Adelaide Fringe’s Digital Arcade space, the audience watches Sam and Tina over 20 years as they almost get together, and is exposed to the fears, insecurities and sometimes plain bad luck that come between them.

Using the mobile browser’s device orientation API and Chromebooks, the installation turns any smartphone into a remote control for a nearby display. The smartphone’s gyroscope then responds to interaction and movement, allowing audiences to delve into different dimensions of the story.



Love at Fifth Site builds on previous work by the Creative Lab to explore how technology can help artists create new and engaging experiences for their audiences. The work forms part of an ongoing exploration of how technology can help artists push the boundaries of traditional storytelling. Check it out if you are heading to Adelaide Fringe!