Monthly Archives: October 2017

Boo! A Very Fiber Halloween


Halloween has always been special at Google -- from our early days, Googleween celebrations were the stuff of legend. At Google Fiber, we continue that tradition -- and help to bring the spirit of the holiday to our Fiber cities, as well. From office decorating to trunk or treating in our communities, Google Fiber teams scared up some surprises and celebrated together this Halloween.


Check out some photos from our Halloween activities around the country (and be sure to scroll all the way to the bottom for another proud Google tradition - Dooglers!).

Team KC brings the hot dogs, flying squirrels, and more!
Salt Lake City embraces their inner Harry Potter.
Team Charlotte brings the bacon, unicorns, and more!
Team Nashville and families - Taylor Swift, Maleficent, and the Brawny man -- oh my!
Raleigh Durham mixes it up!
Rockford Peaches in Charlotte



GCP arrives in India with launch of Mumbai region



The first Google Cloud Platform region in India is now open for you to build applications and store your data, and promises to significantly improve latency for GCP customers and end users in the area.¹

The new Mumbai region, asia-south1, joins Singapore, Taiwan, Sydney and Tokyo in Asia Pacific and makes it easier to build highly available, performant applications using resources across those geographies.

Hosting applications in the new region can improve latency by 20% to 90% for end users in Chennai, Hyderabad, Bangalore, and of course Mumbai, compared to hosting them in the closest alternative region, Singapore.

Services 


The Mumbai region has everything you need to build your next great application, with three zones to help your workloads stand up to whatever Mother Nature has to offer.
Interested in a GCP service that’s not available in the India region? No problem. You can access it via the Google Network, the largest cloud network as measured by number of points of presence.

If you’d like to privately connect to the Mumbai region, we offer Dedicated Interconnect at two locations: GPX Mumbai and Tata Mumbai IDC.

Pay in local currency


With the opening of the Mumbai region, Indian customers are now able to buy GCP services directly in Indian rupees, and the SKUs pricing tool now displays prices in rupees as well.

What customers are saying


Indian companies welcome the addition of this GCP region to South Asia.

“We wanted to have a low latency and secure cloud platform to create our active-active, high availability and load balanced multi-cloud setup. GCP gave us a low latency network, better than expected SSL performance, and the ability to optimize costs further with custom machine types. The new India region will help us bring our service even closer to Indian consumers.” 
— Manish Verma, Chief Technology Officer at Hungama
“As a senior leader within the organization, I see the key benefits of GCP and other technologies being lower cost, greater efficiency and improved business continuity. For example, the current data center team can be redeployed to other initiatives as the technical experts at GCP will be undertaking most of the management and maintenance tasks.” 
— R D Bhatnagar, Chief Technology Officer at DB Corp
“Google has been pushing hard into deep learning and making powerful tools and technologies available on GCP. We really appreciate the stability and scalability of the GCP platform. As a fast-growing startup, we can scale our platform up and down in minutes with few worries.” 
— Dr. Sven Niedner, Chief Operations Officer at Innoplexus

"Once our team was exposed to GCP and understood the benefits of the platform, our mindset changed from 'let us do everything on our own' to 'let us do what we do best' and delegate the remainder. We are always eager to see what new services are being launched and are extremely excited about what GCP can provide as part of its roadmap." 
— Sandeep Kalidindi, Head of Technology at PaGaLGuy

Getting started


For help migrating to GCP, please contact our local partners. For additional details on the Mumbai region, please visit our Mumbai region page where you’ll get access to free resources, whitepapers, the "Cloud On-Air" on-demand video series and more. Our locations page provides updates on the availability of additional services and regions. Contact us to request early access to new regions and help us prioritize what we build next.


¹ Please visit our Service Specific Terms to get detailed information on our data storage capabilities.

Investing even more in Australia’s future

Great innovations that improve the lives of millions usually have one thing in common: they are born from an obsession with solving a specific problem.
We take the same approach to our products and services at Google, focusing on the needs of our users and employing technology to make their lives easier. It’s that same fixation that also drives the ten nonprofit organisations that Google Australia supported a year ago when we announced $5 million in funding through the Google.org Impact Challenge.
Anna Marsden, managing director of the Great Barrier Reef Foundation, explains how they are helping to save the reef with autonomous robots. 
These include nonprofits such as the George Institute for Global Health, which is creating an SMS-based support service to help people with chronic diseases lead healthier lives, and the Centre for Eye Research Australia, which is building an eyesight self-assessment system for Australians in remote areas. The Great Barrier Reef Foundation is protecting coral reef ecosystems through a low-cost, autonomous robot designed to monitor, map, manage and preserve them, while The Nature Conservancy Australia is deploying mobile technology to protect global fish stocks.

This week we celebrated the work of those ten organisations in the year since we announced their funding, hearing about their progress and the milestones they have achieved.
Today, we are thrilled to announce that next year we will invest even more to help tackle Australia’s toughest problems.
The Olga Tennison Autism Research Centre is working on an app to help parents detect autism in their children.
In 2018 we will hold Australia’s third Google Impact Challenge, with a minimum funding commitment of $5 million, inviting charities and nonprofits to propose technology-driven solutions to challenges facing our society. Working with prominent Australians to judge submissions, and inviting the public to vote on their favourite projects, we will select another 10 projects to support with funding and resources from Google.
This will be the third time we have run the Google.org Impact Challenge in Australia, making Australia the first country outside the United States to do so. We started back in 2014, supporting nonprofits such as Infoxchange and AIME, which yesterday launched its game Second Chances, designed to encourage Indigenous kids to engage more with maths and science.
Normally we only run an Impact Challenge in a country once, but we have been impressed by the calibre of the ideas and the teams that came forward. The ideas have come from innovators all across Australia, and they are on par with the best innovation ideas we've seen globally.
We are excited to see the new ideas that will emerge through the 2018 Google Impact Challenge, and to assist organisations with visions to use technology in addressing important causes.
Representatives of the ten nonprofits that received funding in the 2016 Google.org Impact Challenge at the anniversary celebration held at Google's offices in Sydney.
Google has always worked best when it helped people work on big problems in new ways. Through Google.org, we rally our philanthropy, people, and products to support nonprofits making an impact in their communities.
In Australia, that commitment continues to grow. We aim to keep helping all Australians create a safer, more inclusive society for everyone.

Eager Execution: An imperative, define-by-run interface to TensorFlow

Posted by Asim Shankar and Wolff Dobson, Google Brain Team

Today, we introduce eager execution for TensorFlow.

Eager execution is an imperative, define-by-run interface where operations are executed immediately as they are called from Python. This makes it easier to get started with TensorFlow, and can make research and development more intuitive.

The benefits of eager execution include:

  • Fast debugging with immediate run-time errors and integration with Python tools
  • Support for dynamic models using easy-to-use Python control flow
  • Strong support for custom and higher-order gradients
  • Almost all of the available TensorFlow operations

Eager execution is available now as an experimental feature, so we're looking for feedback from the community to guide our direction.

To understand this all better, let's look at some code. This gets pretty technical; familiarity with TensorFlow will help.

Using Eager Execution

When you enable eager execution, operations execute immediately and return their values to Python without requiring a Session.run(). For example, to multiply two matrices together, we write this:

import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

x = [[2.]]
m = tf.matmul(x, x)

It's straightforward to inspect intermediate results with print or the Python debugger.


print(m)
# The 1x1 matrix [[4.]]

Dynamic models can be built with Python flow control. Here's an example of the Collatz conjecture using TensorFlow's arithmetic operations:

a = tf.constant(12)
counter = 0
while not tf.equal(a, 1):
  if tf.equal(a % 2, 0):
    a = a / 2
  else:
    a = 3 * a + 1
  print(a)

Here, the use of the tf.constant(12) Tensor object promotes all math operations to tensor operations, so all return values will be tensors.

Gradients

Most TensorFlow users are interested in automatic differentiation. Because different operations can occur during each call, we record all forward operations to a tape, which is then played backwards when computing gradients. After we've computed the gradients, we discard the tape.

If you're familiar with the autograd package, the API is very similar. For example:

def square(x):
  return tf.multiply(x, x)

grad = tfe.gradients_function(square)

print(square(3.)) # [9.]
print(grad(3.)) # [6.]

The gradients_function call takes a Python function square() as an argument and returns a Python callable that computes the partial derivatives of square() with respect to its inputs. So, to get the derivative of square() at 3.0, invoke grad(3.0), which is 6.

The same gradients_function call can be used to get the second derivative of square:

gradgrad = tfe.gradients_function(lambda x: grad(x)[0])

print(gradgrad(3.)) # [2.]

As we noted, control flow can cause different operations to run, such as in this example.

def abs(x):
  return x if x > 0. else -x

grad = tfe.gradients_function(abs)

print(grad(2.0)) # [1.]
print(grad(-2.0)) # [-1.]

Custom Gradients

Users may want to define custom gradients for an operation, or for a function. This may be useful for multiple reasons, including providing a more efficient or more numerically stable gradient for a sequence of operations.

Here is an example that illustrates the use of custom gradients. Let's start by looking at the function log(1 + e^x), which commonly occurs in the computation of cross entropy and log likelihoods.

def log1pexp(x):
  return tf.log(1 + tf.exp(x))

grad_log1pexp = tfe.gradients_function(log1pexp)

# The gradient computation works fine at x = 0.
print(grad_log1pexp(0.))
# [0.5]
# However it returns a `nan` at x = 100 due to numerical instability.
print(grad_log1pexp(100.))
# [nan]

We can use a custom gradient for the above function that analytically simplifies the gradient expression. Notice how the gradient function implementation below reuses an expression (tf.exp(x)) that was computed during the forward pass, making the gradient computation more efficient by avoiding redundant computation.

@tfe.custom_gradient
def log1pexp(x):
  e = tf.exp(x)
  def grad(dy):
    return dy * (1 - 1 / (1 + e))
  return tf.log(1 + e), grad

grad_log1pexp = tfe.gradients_function(log1pexp)

# Gradient at x = 0 works as before.
print(grad_log1pexp(0.))
# [0.5]
# And now gradient computation at x=100 works as well.
print(grad_log1pexp(100.))
# [1.0]

Building models

Models can be organized in classes. Here's a model class that creates a (simple) two-layer network that can classify the standard MNIST handwritten digits.

class MNISTModel(tfe.Network):
  def __init__(self):
    super(MNISTModel, self).__init__()
    self.layer1 = self.track_layer(tf.layers.Dense(units=10))
    self.layer2 = self.track_layer(tf.layers.Dense(units=10))

  def call(self, input):
    """Actually runs the model."""
    result = self.layer1(input)
    result = self.layer2(result)
    return result

We recommend using the classes (not the functions) in tf.layers since they create and contain model parameters (variables). Variable lifetimes are tied to the lifetime of the layer objects, so be sure to keep track of them.

Why are we using tfe.Network? A Network is a container for layers and is a tf.layers.Layer itself, allowing Network objects to be embedded in other Network objects. It also contains utilities to assist with inspection, saving, and restoring.

Even without training the model, we can imperatively call it and inspect the output:

# Let's make up a blank input image
model = MNISTModel()
batch = tf.zeros([1, 1, 784])
print(batch.shape)
# (1, 1, 784)
result = model(batch)
print(result)
# tf.Tensor([[[ 0. 0., ...., 0.]]], shape=(1, 1, 10), dtype=float32)

Note that we do not need any placeholders or sessions. The first time we pass in the input, the sizes of the layers' parameters are set.
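
To make that deferred shape inference concrete, here is a minimal sketch; it assumes the variables property that tfe.Network inherits from tf.layers.Layer, and the exact variable ordering may differ:

model = MNISTModel()
batch = tf.zeros([1, 1, 784])

# Before the first call, the Dense layers have not built any variables yet.
print(len(model.variables))  # 0

model(batch)  # the first call builds the layer parameters

# After the first call, each Dense layer has a kernel and a bias whose sizes
# were inferred from the 784-wide input, e.g.:
print([v.shape for v in model.variables])
# [TensorShape([784, 10]), TensorShape([10]), TensorShape([10, 10]), TensorShape([10])]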

To train any model, we define a loss function to optimize, calculate gradients, and use an optimizer to update the variables. First, here's a loss function:

def loss_function(model, x, y):
  y_ = model(x)
  return tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_)

And then, our training loop:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)

for (x, y) in tfe.Iterator(dataset):
  grads = tfe.implicit_gradients(loss_function)(model, x, y)
  optimizer.apply_gradients(grads)

implicit_gradients() calculates the derivatives of loss_function with respect to all the TensorFlow variables used during its computation.
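
For intuition about the structure it returns, here is a small sketch continuing the loop above for a single batch (x, y); it assumes, as the apply_gradients() call suggests, that implicit_gradients() yields (gradient, variable) pairs:

grad_fn = tfe.implicit_gradients(loss_function)
grads_and_vars = grad_fn(model, x, y)

# One (gradient, variable) pair per trainable variable touched by loss_function.
for g, v in grads_and_vars:
  print(v.name, g.shape)

optimizer.apply_gradients(grads_and_vars)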

We can move computation to a GPU the same way we've always done with TensorFlow:

with tf.device("/gpu:0"):
  for (x, y) in tfe.Iterator(dataset):
    optimizer.minimize(lambda: loss_function(model, x, y))

(Note: We're skipping the explicit gradient computation here and calling optimizer.minimize() directly, but you could also use the apply_gradients() method above; they are equivalent.)

Using Eager with Graphs

Eager execution makes development and debugging far more interactive, but TensorFlow graphs have a lot of advantages with respect to distributed training, performance optimizations, and production deployment.

The same code that executes operations when eager execution is enabled will construct a graph describing the computation when it is not. To convert your models to graphs, simply run the same code in a new Python session where eager execution hasn't been enabled, as demonstrated in the MNIST example. The value of model variables can be saved and restored from checkpoints, allowing us to move between eager (imperative) and graph (declarative) programming easily. With this, models developed with eager execution enabled can be easily exported for production deployment.
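
For contrast, here is a minimal sketch of the matrix multiplication from the start of this post, run in a fresh Python session where eager execution has not been enabled; the same ops now build a graph, and values are only produced by running it in a Session:

import tensorflow as tf  # eager execution NOT enabled in this session

x = [[2.]]
m = tf.matmul(x, x)  # a symbolic Tensor; nothing has been computed yet
print(m)             # e.g. Tensor("MatMul:0", shape=(1, 1), dtype=float32)

with tf.Session() as sess:
  print(sess.run(m))  # [[4.]]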

In the near future, we will provide utilities to selectively convert portions of your model to graphs. In this way, you can fuse parts of your computation (such as the internals of a custom RNN cell) for high performance, while keeping the flexibility and readability of eager execution.

How does my code change?

Using eager execution should be intuitive to current TensorFlow users. There are only a handful of eager-specific APIs; most of the existing APIs and operations work with eager enabled. Some notes to keep in mind:

  • As with TensorFlow generally, we recommend that if you have not yet switched from queues to using tf.data for input processing, you should. It's easier to use and usually faster. For help, see this blog post and the documentation page; a minimal sketch follows this list.
  • Use object-oriented layers, like tf.layers.Conv2D() or Keras layers; these have explicit storage for variables.
  • For most models, you can write code so that it will work the same for both eager execution and graph construction. There are some exceptions, such as dynamic models that use Python control flow to alter the computation based on inputs.
  • Once you invoke tfe.enable_eager_execution(), it cannot be turned off. To get graph behavior, start a new Python session.
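
As a small illustration of the first point, here is a minimal sketch (with made-up NumPy data standing in for a real input pipeline) of building a tf.data dataset and iterating over it with eager execution enabled, using tfe.Iterator as in the training loop above:

import numpy as np
import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

# Hypothetical in-memory data; replace with your real inputs.
images = np.random.rand(32, 784).astype(np.float32)
labels = np.random.rand(32, 10).astype(np.float32)

dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(8)

for (x, y) in tfe.Iterator(dataset):
  print(x.shape, y.shape)  # (8, 784) (8, 10)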

Getting started and the future

This is still a preview release, so you may hit some rough edges. To get started today, see the eager execution documentation and examples in the TensorFlow repository.

There's a lot more to talk about with eager execution and we're excited… or, rather, we're eager for you to try it today! Feedback is absolutely welcome.

October Talks at Google: a month of celebrity sightings

It was a star-studded month for Talks at Google, our very own speaker series. A few celebs stopped by to chat about what they’re up to on the screen and the stage. Check them out below: 

Reese Witherspoon, Jon Rudnitsky, and Hallie Meyers-Shyer visited Google NYC to talk about their new movie "Home Again." The interview reveals the celebrity history behind the house where the movie was filmed, Reese's mission to "show a girl she can be the center of her own story," and the story behind why Reese started her own production company.

Reese Witherspoon, Jon Rudnitsky, and Hallie Meyers-Shyer stop by Google NYC to talk about their new movie "Home Again."

DeepMind CEO Demis Hassabis and director Denis Villeneuve discuss Villeneuve's new film "Blade Runner 2049" and how "cinema can evolve when we capture life in front of the camera." Villeneuve explains that it's important to give actors the space to create things that weren't planned—he calls this the "chaos of life." If you can't get enough of Blade Runner, check out Harrison Ford's talk, too.

DeepMind CEO Demis Hassabis and Denis Villeneuve discuss Villeneuve's new film "Blade Runner 2049."

Watch the cast of Broadway's Miss Saigon perform a few songs, and discuss how the play—which takes place in the 1970s during the Vietnam War—is relevant today, and helps create an open dialogue about issues we’re facing nearly 50 years after the story takes place.  

The cast of Broadway’s Miss Saigon perform a few songs

Executive Producers Morgan Freeman and Lori McCreary discuss CBS's "Madam Secretary" as Season 4 kicks off, sharing their personal histories, why they created their powerhouse production company, Revelations Entertainment, and Lori's amazing history as one of the first women to bring computer technology to the motion picture industry.

Stars of “A Bad Mom’s Christmas” Mila Kunis, Kristen Bell, and Kathryn Hahn stopped by Google HQ to discuss their new movie, parenthood, and how they recharge.

Stars of “A Bad Mom’s Christmas”

Actress, singer and author Anna Kendrick chats about her book, "Scrappy Little Nobody," and (naturally) brings the laughs with funny anecdotes from her life and career.

Actress, singer and author Anna Kendrick chats about her book, "Scrappy Little Nobody."

As always, to see more talks, subscribe to Talks at Google on YouTube, follow them on Twitter or browse their website.

Manage system apps on company-owned Android devices

System apps are those apps that come preinstalled on Android devices, like Clock and Calculator. Many of these apps can’t be uninstalled and aren’t available in the Play Store for management. We want to give G Suite admins greater control over these system apps, so we’re introducing settings in the Admin console to:
  • enable all system apps,
  • disable all system apps,
  • enable select system apps, or
  • disable select system apps.

These settings will only apply to system apps on company-owned Android devices (i.e. Android devices in Device Owner mode). At launch, by default, all system apps will be enabled.


For more details on how to use these features, check out the Help Center.

IMPORTANT: These settings launched in the Admin console on October 31st, but they will not take effect for end users and devices until November 14th. If you’d prefer to disable some or all system apps, we recommend doing so before the settings take effect.

Launch Details
Release track:
  • Admin console settings launching to both Rapid Release and Scheduled Release on October 31, 2017
  • End user impact launching to both Rapid Release and Scheduled Release on November 14, 2017

Editions:
Available to all G Suite editions

Rollout pace:
  • Full rollout (1–3 days for feature visibility) for Admin console settings
  • Gradual rollout (up to 15 days for feature visibility) for end user impact

Impact:
Admins and end users

Action:
Admin action suggested/FYI

More Information
Help Center: Set up mobile device management
Help Center: Manage apps on mobile devices



The meeting room, by G Suite

With G Suite, we’re focused on building tools that help you bring great ideas to life. We know meetings are the main entry point for teams to share and shape ideas into action. That’s why we recently introduced Hangouts Meet, an evolution of Google Hangouts designed specifically for the workplace, and Jamboard, a way to bring creative brainstorming directly into meetings. Combined with Calendar and Drive, these tools extend collaboration beyond four walls and transform how we work—so every team member has a voice, no matter their location.

But the transformative power of video meetings is wasted if it’s not affordable and accessible to all organizations. So today, we’re introducing Hangouts Meet hardware—a new way to bring high-quality video meetings to businesses of any size. We’re also announcing new software updates designed to make your meetings even more productive.

Introducing Hangouts Meet hardware

Hangouts Meet hardware is a cost-effective way to bring high-quality video meetings to your business. The hardware kit consists of four components: a touchscreen controller, speakermic, 4K sensor camera and ASUS Chromebox.

Hangouts Meet controller

The new controller provides a modern, intuitive touchscreen interface that allows people to easily join scheduled events from Calendar or view meeting details with a single tap. You can pin and mute team members, as well as control the camera, making managing meetings easy. You can also add participants with the dial-a-phone feature and present from a laptop via HDMI. If you’re a G Suite Enterprise edition customer, you can record the meeting to Drive.

Designed by Google, the Hangouts Meet speakermic actively eliminates echo and background noise to provide crisp, clear audio. Up to five speakermics can be daisy-chained together with a single wire, providing coverage for larger rooms without tabletop clutter.

The 4K sensor camera with 120° field of view easily captures everyone at the table, even in small spaces that some cameras find challenging. Each camera component is fine-tuned to make meetings more personal and distraction-free. Built with machine learning, the camera can intelligently detect participants and automatically crop and zoom to frame them.

Powered by Chrome OS, the ASUS Chromebox makes deploying and managing Hangouts Meet hardware easier than ever. The Chromebox can automatically push updates to other components in the hardware kit, making it easier for large organizations to ensure security and reliability. Remote device monitoring and management make it easy for IT administrators to stay in control, too.


Says Bradley Rhodes, IT Analyst End User Computing at Woolworths Ltd Australia, “We are very excited about the new Hangouts Meet hardware, particularly the easy-to-use touchscreen. The enhancements greatly improve the user experience and simplify our meeting rooms. We have also seen it create new ways for our team to collaborate, like via the touch-to-record functionality which allows absent participants to catch up more effectively.”

More features, better meetings

We’re also announcing updates to Meet based on valuable feedback. If you’re a G Suite Enterprise edition customer, you can:

  • Record meetings and save them to Drive. Can’t make the meeting? No problem. Record your meeting directly to Drive. Even without a Hangouts Meet hardware kit, Meet on web can save your team’s ideas with a couple of clicks.
  • Host meetings with up to 50 participants. Meet supports up to 50 participants in a meeting, especially useful for bringing global teams together from both inside and outside of your organization.
  • Dial in from around the globe. The dial-in feature in Meet is now available in more than a dozen markets. If you board a flight in one country and land in another, Meet will automatically update your meeting’s dial-in listing to a local phone number.

These new features are rolling out gradually. The hardware kit is priced at $1999 and is available in select markets around the globe beginning today.

Whether you're collaborating in Jamboard, recording meetings and referencing discussions in Drive or scheduling your next team huddle in Calendar, Hangouts Meet hardware makes it even easier to bring the power of your favorite G Suite tools into team meetings. For more information, visit the G Suite website.

Announcing Fast Pair – effortless Bluetooth pairing for Android

Posted by Ritesh Nayak M and Ronald Ho, Product Managers

Today we're announcing Fast Pair, a hassle-free way to pair Bluetooth accessories with Android devices running Google Play services 11.7+, with compatibility back to Android 6.0 (Marshmallow). Fast Pair makes discovery and pairing of Bluetooth devices easy and is rolling out to Android 6.0+ devices now. You can try it out with Google Pixel Buds or Libratone's Q Adapt On-Ear, and soon on Plantronics Voyager 8200 series wireless headsets.

Ease of use, speed and security are the design principles driving the Fast Pair specification. Fast Pair uses BLE (Bluetooth Low Energy) for advertising and discovery and uses classic Bluetooth for pairing. Here's what a Fast Pair flow looks like:

  1. Turn on a Fast Pair-enabled device and put it in pairing mode.
    • Android scans for BLE broadcasts in close proximity to the user's phone and discovers a Fast Pair packet (provided Bluetooth and Location are turned on).
    • This packet is sent to our servers to get back the device's product image, product name and companion app (if there is one).
  2. The user receives a high priority notification asking them to "Tap to pair" to the device. The notification contains the product name and image.
  3. When the user taps on the notification, we use classic Bluetooth to establish a connection.
  4. A success notification is shown which contains a link to download the companion app (if there is one).

Imagine doing all of this without ever fumbling with Bluetooth settings. Users get a seamless and secure pairing experience and confidence that they're connecting to the right product. Manufacturers get their brand, device name and companion app in front of the users.

Thanks to a couple of our partners who have been instrumental in prototyping and testing this spec, and whose feedback has been invaluable to the Fast Pair effort. If you are a Bluetooth accessory manufacturer and want to adopt Fast Pair for your device, please reach out to us.

Plantronics is an audio pioneer and a global leader in the communications industry. From Unified Communications and customer service ecosystems, to data analytics and Bluetooth headsets, Plantronics delivers high-quality communications solutions that customers count on today, while relentlessly innovating on behalf of their future. For more information visit plantronics.com

Libratone is on a mission to liberate sound and to expand peoples' experiences with music in the era of streaming. Founded in 2009 in Denmark, Libratone is one of the first audio companies to consider the aesthetics of speakers – to move them out of the corner of the room and into the center and onward, for the consumer on the move. For more information visit libratone.com
