Posted by Chris Hohorst, Head of Mobile Sites Transformation
Mobile now accounts for over half of all web traffic¹, making performance on small screens more important than ever.
Despite this increase, a recent study by Google found that the average time it takes to load a mobile landing page is 22 seconds. When you consider that 53% of mobile site visitors will leave a site if it takes more than three seconds to load, it's clear why conversion rates are consistently lower on mobile than desktop.
Website visitors now expect their mobile experience to be as flawless as desktop, and the majority of online businesses are failing to deliver.
With this in mind, we're introducing the new Google Mobile Sites certification. Passing the Mobile Sites exam signals that you have a demonstrated ability to build and optimize high-quality sites, and allows you to promote yourself as a Google accredited mobile site developer.
By codifying best practices in mobile site development, we hope to raise the overall standard of mobile design and speed, and to make it easier for businesses to find the best talent.
What the exam covers
To pass the exam, you'll need to show proficiency across mobile site design, mobile UX best practice, mobile site speed optimization, and advanced web technologies. We've put together a study guide that covers everything you'll need to know.
What are the benefits?
We know that a lot of web developers are doing great work on mobile sites, and this certification is a way of promoting them to a wider audience. Being certified means being recognized by Google as an expert in mobile site optimization, making you more visible and attractive to potential clients looking for exactly those services.
The certification will display on your Partners profile, helping you stand out to businesses looking for mobile site development, and can also be shared across social media.
How to sign up
Check out our study guide to get started. Then, to take the exam, please click on the Mobile Sites certification link and log in to your Google Partners account. If you're not signed up yet, you can create a Partners user profile by registering here.
The exam is open to all web developers globally in English and, once completed, the certification will remain valid for 12 months.
You may have read recently that the Google Cloud Platform team upgraded to Issue Tracker, the same system that Google uses internally. This allows for improved collaboration between all of us and all of you. Issues you file will have better exposure internally, and you get improved transparency in terms of seeing the issues we're actively working on. Starting today, G Suite developers will also have a new issue tracker to which we've already migrated existing issues from previous systems.
Whether you've found a bug or want to submit a feature request, the new issue tracker is here for you. Heads up: you need to be logged in with your Google credentials to view or update issues in the tracker.
The new issue tracker for G Suite developers.
Each G Suite API and developer tool has its own "component" number that you can search. For your convenience, below is the entire list. You may browse for issues relevant to the Google APIs that you're using, or click on the convenience links to report an issue or request a new/missing feature:
By Sunil Vemuri, Product Manager for Actions on Google
Since we launched the Actions on Google platform last year, we've seen a lot of creative actions for use cases ranging from meditation to insurance. But one of the areas we're especially excited about is gaming. Games from Akinator to SongPop demonstrate that developers can create new and engaging experiences for users. To bring more great games online, we're adding new tools to Actions on Google to make it easier than ever for you to build games for the Google Assistant.
First, we're releasing a brand new sound effect library. These effects can make your games more engaging, help you create a more fun persona for your Action, and hopefully put smiles on your users' faces. From airplanes, slide whistles, and bowling to cats purring and thunder, you'll find hundreds of options that will add some pizzazz to your Action.
Second, for those of you who feel nostalgic about interactive text adventures, we just published a handy guide on how to bring these games to life with the Google Assistant. With many old favorites being open source or in the public domain, you are now able to re-introduce these classics to Google Assistant users on Google Home.
Finally, for those of you who are looking to build new types of games, we've recently expanded the list of tool and consulting companies that have integrated their development solutions with Actions on Google. New collaborators like Pullstring, Converse.AI, Solstice and XAPP Media are now also able to help turn your vision into reality.
We can't wait to see how you use our sound library and for the new and classic games you'll bring to Google Assistant users on Google Home! Make sure you join our Google+ community to discuss Actions on Google with other developers.
Free and open source software has been part of our technical and organizational foundation since Google's early beginnings. From servers running the Linux kernel to an internal culture of being able to patch any other team's code, open source is part of everything we do. In return, we've released millions of lines of open source code, run programs like Google Summer of Code and Google Code-in, and sponsor open source projects and communities through organizations like Software Freedom Conservancy, the Apache Software Foundation, and many others.
Today, we're launching opensource.google.com, a new website for Google Open Source that ties together all of our initiatives with information on how we use, release, and support open source.
This new site showcases the breadth and depth of our love for open source. It will contain the expected things: our programs, organizations we support, and a comprehensive list of open source projects we've released. But it also contains something unexpected: a look under the hood at how we "do" open source.
Helping you find interesting open source
One of the tenets of our philosophy towards releasing open source code is that "more is better." We don't know which projects will find an audience, so we help teams release code whenever possible. As a result, we have released thousands of projects under open source licenses, ranging from larger products like TensorFlow, Go, and Kubernetes to smaller projects such as Light My Piano, Neuroglancer, and Periph.io. Some are fully supported while others are experimental or just for fun. With so many projects spread across 100 GitHub organizations and our self-hosted Git service, it can be difficult to see the scope and scale of our open source footprint.
To provide a more complete picture, we are launching a directory of our open source projects which we will expand over time. For many of these projects we are also adding information about how they are used inside Google. In the future, we hope to add more information about project lifecycle and maturity.
How we do open source
Open source is about more than just code; it's also about community and process. Participating in open source projects and communities as a large corporation comes with its own unique set of challenges. In 2014, we helped form the TODO Group, which provides a forum to collaborate and share best practices among companies that are deeply committed to open source. Inspired by many discussions we've had over the years, today we are publishing our internal documentation for how we do open source at Google.
Our policies and procedures are informed by many years of experience and lessons we've learned along the way. We know that our particular approach to open source might not be right for everyone—there's more than one way to do open source—and so these docs should not be read as a "how-to" guide. Similar to how it can be valuable to read another engineer's source code to see how they solved a problem, we hope that others find value in seeing how we approach and think about open source at Google.
To hear a little more about the backstory of the new Google Open Source site, we invite you to listen to the latest episode from our friends at The Changelog. We hope you enjoy exploring the new site!
Posted by Sridhar Ramaswamy, Senior Vice President, Ads and Commerce
What: Google Marketing Next keynote live stream
When: Tuesday, May 23rd at 9:00 a.m. PT / 12:00 p.m. ET
Duration: 1 hour
Where: On the Inside AdWords Blog
Be the first to hear about Google’s latest marketing innovations, the moment they’re announced. Watch live as my team and I share new Ads, Analytics and DoubleClick innovations designed to improve your ability to reach consumers, simplify campaign measurement and increase your productivity. We’ll also give you a sneak peek at how brands are starting to use the Google Assistant to delight customers.
By the XLA team within Google, in collaboration with the TensorFlow team
One of the design goals and core strengths of TensorFlow is its flexibility. TensorFlow was designed to be a flexible and extensible system for defining arbitrary data flow graphs and executing them efficiently in a distributed manner using heterogeneous computing devices (such as CPUs and GPUs).
But flexibility is often at odds with performance. While TensorFlow aims to let you define any kind of data flow graph, it’s challenging to make all graphs execute efficiently because TensorFlow optimizes each op separately. When an op with an efficient implementation exists or when each op is a relatively heavyweight operation, all is well; otherwise, the user can still compose this op out of lower-level ops, but this composition is not guaranteed to run in the most efficient way.
This is why we’ve developed XLA (Accelerated Linear Algebra), a compiler for TensorFlow. XLA uses JIT compilation techniques to analyze the TensorFlow graph created by the user at runtime, specialize it for the actual runtime dimensions and types, fuse multiple ops together and emit efficient native machine code for them - for devices like CPUs, GPUs and custom accelerators (e.g. Google’s TPU).
Fusing composable ops for increased performance
Consider the tf.nn.softmax op, for example. It computes the softmax activations of its parameter as follows:
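The formula displayed in the original post is the standard softmax; in LaTeX form, roughly:

\mathrm{softmax}(x)_i = \frac{e^{x_i}}{\sum_j e^{x_j}}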
Softmax can be implemented as a composition of primitive TensorFlow ops (exponent, reduction, elementwise division, etc.):
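The snippet shown in the original post isn't reproduced here; a minimal sketch of such a composition using TF 1.x-era primitive ops might look like this:

import tensorflow as tf

# Softmax built from primitives: elementwise exponent, reduction, elementwise division.
logits = tf.constant([[1.0, 2.0, 3.0]])  # example input
exps = tf.exp(logits)
softmax = exps / tf.reduce_sum(exps, axis=-1, keep_dims=True)

with tf.Session() as sess:
    print(sess.run(softmax))  # matches tf.nn.softmax(logits)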
This could potentially be slow, due to the extra data movement and materialization of temporary results that aren’t needed outside the op. Moreover, on co-processors like GPUs such a decomposed implementation could result in multiple “kernel launches” that make it even slower.
XLA is the secret compiler sauce that helps TensorFlow optimize compositions of primitive ops automatically. TensorFlow, augmented with XLA, retains flexibility without sacrificing runtime performance by analyzing the graph at runtime, fusing ops together, and producing efficient machine code for the fused subgraphs.
For example, a decomposed implementation of softmax as shown above would be optimized by XLA to be as fast as the hand-optimized compound op.
More generally, XLA can take whole subgraphs of TensorFlow operations and fuse them into efficient loops that require a minimal number of kernel launches. For example:
Many of the operations in this graph can be fused into a single element-wise loop. Consider a single element of the bias vector being added to a single element from the matmul result, for example. The result of this addition is a single element that can be compared with 0 (for ReLU). The result of the comparison can be exponentiated and divided by the sum of exponents of all inputs, resulting in the output of softmax. We don’t really need to create the intermediate arrays for matmul, add, and ReLU in memory.
A fused implementation can compute the end result within a single element-wise loop, without allocating needless memory. In more advanced scenarios, these operations can even be fused into the matrix multiplication.
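As a purely conceptual illustration (plain Python, not actual XLA output), the fused computation for one row of the matmul result could look roughly like this, with the softmax normalization handled by a running sum rather than materialized intermediate arrays:

import math

def fused_bias_relu_softmax_row(matmul_row, bias):
    """Conceptual fusion of bias-add, ReLU, exp and softmax for a single row."""
    out = [0.0] * len(matmul_row)
    total = 0.0
    for i, v in enumerate(matmul_row):        # one pass: add bias, apply ReLU, exponentiate
        e = math.exp(max(v + bias[i], 0.0))
        out[i] = e
        total += e
    return [e / total for e in out]           # softmax normalization

print(fused_bias_relu_softmax_row([1.0, -2.0, 0.5], [0.1, 0.2, 0.3]))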
XLA helps TensorFlow retain its flexibility while eliminating performance concerns.
On internal benchmarks, XLA shows up to 50% speedups over TensorFlow without XLA on Nvidia GPUs. The biggest speedups come, as expected, in models with long sequences of elementwise operations that can be fused to efficient loops. However, XLA should still be considered experimental, and some benchmarks may experience slowdowns.
In this talk from TensorFlow Developer Summit, Chris Leary and Todd Wang describe how TensorFlow can make use of XLA, JIT, AOT, and other compilation techniques to minimize execution time and maximize computing resources.
Extreme specialization for executable size reduction
In addition to improved performance, TensorFlow models can benefit from XLA in restricted-memory environments (such as mobile devices) due to the executable size reduction it provides. tfcompile is a tool that leverages XLA for ahead-of-time (AOT) compilation: a whole graph is compiled to XLA, which then emits tight machine code that implements the ops in the graph. Coupled with a minimal runtime, this scheme provides considerable size reductions.
For example, given a 3-deep, 60-wide stacked LSTM model on android-arm, the original TF model size is 2.6 MB (1 MB runtime + 1.6 MB graph); when compiled with XLA, the size goes down to 600 KB.
This size reduction is achieved by the full specialization of the model implied by its static compilation. When the model runs, the full power and flexibility of the TensorFlow runtime is not required - only the ops implementing the actual graph the user is interested in are compiled to native code. That said, the performance of the code emitted by the CPU backend of XLA is still far from optimal; this part of the project requires more work.
Support for alternative backends and devices
To execute TensorFlow graphs on a new kind of computing device today, one has to re-implement all the TensorFlow ops (kernels) for the new device. Depending on the device, this can be a very significant amount of work.
By design, XLA makes supporting new devices much easier by adding custom backends. Since TensorFlow can target XLA, one can add a new device backend to XLA and thus enable it to run TensorFlow graphs. XLA provides a significantly smaller implementation surface for new devices, since XLA operations are just the primitives (recall that XLA handles the decomposition of complex ops on its own). We’ve documented the process for adding a custom backend to XLA on this page. Google uses this mechanism to target TPUs from XLA.
Conclusion and looking forward
XLA is still in the early stages of development. It is showing very promising results for some use cases, and it is clear that TensorFlow can benefit even more from this technology in the future. We decided to release XLA early on the TensorFlow GitHub to solicit contributions from the community and to provide a convenient surface for optimizing TensorFlow for various computing devices, as well as for retargeting the TensorFlow runtime and models to run on new kinds of hardware.
Today we are pleased to announce the open-sourcing of Python Fire. Python Fire generates command line interfaces (CLIs) from any Python code. Simply call the Fire function in any Python program to automatically turn that program into a CLI. The library is available from pypi via `pip install fire`, and the source is available on GitHub.
Python Fire will automatically turn your code into a CLI without you needing to do any additional work. You don't have to define arguments, set up help information, or write a main function that defines how your code is run. Instead, you simply call the `Fire` function from your main module, and Python Fire takes care of the rest. It uses inspection to turn whatever Python object you give it -- whether it's a class, an object, a dictionary, a function, or even a whole module -- into a command line interface, complete with tab completion and documentation, and the CLI will stay up-to-date even as the code changes.
To illustrate this, let's look at a simple example.
#!/usr/bin/env python
import fire

class Example(object):
  def hello(self, name='world'):
    """Says hello to the specified name."""
    return 'Hello {name}!'.format(name=name)

def main():
  fire.Fire(Example)

if __name__ == '__main__':
  main()
When the Fire function is run, our command will be executed. Just by calling Fire, we can now use the Example class as if it were a command line utility.
$ ./example.py hello
Hello world!
$ ./example.py hello David
Hello David!
$ ./example.py hello --name=Google
Hello Google!
Of course, you can continue to use this module like an ordinary Python library, enabling you to use the exact same code both from Bash and Python. If you're writing a Python library, then you no longer need to update your main method or client when experimenting with it; instead you can simply run the piece of your library that you're experimenting with from the command line. Even as the library changes, the command line tool stays up to date.
At Google, engineers use Python Fire to generate command line tools from Python libraries. We have an image manipulation tool built using Fire with the Python Imaging Library, PIL. In Google Brain, we use an experiment management tool built with Fire, allowing us to manage experiments equally well from Python or from Bash.
Every Fire CLI comes with an interactive mode. Run the CLI with the `--interactive` flag to launch an IPython REPL with the result of your command, as well as other useful variables, already defined and ready to use. Be sure to check out Python Fire's documentation for more on this and the other useful features Fire provides.
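As a hypothetical invocation of the example.py script above (depending on your Fire version, the flag may need to follow a `--` separator so it isn't treated as an argument to your own code):

$ ./example.py hello -- --interactive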
Between Python Fire's simplicity, generality, and power, we hope you find it a useful library for your own projects.
Posted By: Roy Glasberg, Global Lead, Launchpad Program & Accelerator
After recently hosting another amazing group of startups for Launchpad Accelerator, we're ready to kick things off again with the next class! Apply here by 9 a.m. PST on April 24, 2017.
Starting today, we'll be accepting applications from growth-stage innovative tech startups from these countries:
Asia: India, Indonesia, Thailand, Vietnam, Malaysia and the Philippines
Latin America: Argentina, Brazil, Chile, Colombia and Mexico
And we're delighted to expand the program to countries in Africa and Europe for the first time!
Africa: Kenya, Nigeria and South Africa
Europe: Czech Republic, Hungary and Poland
The equity-free program will begin on July 17th, 2017 at the Google Developers Launchpad Space in San Francisco and will include 2 weeks of all-expense-paid training.
What are the benefits? The training at Google HQ includes intensive mentoring from 20+ Google teams, as well as expert mentors from top technology companies and VCs in Silicon Valley. Participants receive equity-free support, credits for Google products, and PR support, and they continue to work closely with Google back in their home countries during the 6-month program.
What do we look for when selecting startups? Each startup that applies to the Launchpad Accelerator is considered holistically and with great care. Below are general guidelines behind our process to help you understand what we look for in our candidates.
All startups in the program must:
Be a technological startup.
Be targeting their local markets.
Have proven product-market fit (beyond ideation stage).
Be based in the countries listed above.
Additionally, we are interested in what kind of startup you are, so we also consider:
The problem you are trying to solve. How does it create value for users? How are you addressing a real challenge for your home city, country or region?
Does your management team have a leadership mindset and the drive to become an influencer?
Will you share what you learn in Silicon Valley for the benefit of other startups in your local ecosystem?
If you're based outside of these countries, stay tuned, as we expect to add more countries to the program in the future.
We can't wait to learn more about your startup and work together to solve your challenges and help grow your business.
Originally shared on the Inside AdMob Blog
Posted by Sissie Hsiao, Product Director, Mobile Advertising, Google
Last played: Fire Emblem Heroes for Android
Mobile games mean more than just fun. They mean business. Big business. According to App Annie, game developers should capture almost half of the $189B global market for in-app purchases and advertising by 2020¹.
Later today, at the Game Developers Conference (GDC) in San Francisco, I look forward to sharing a series of new innovations across ad formats, monetization tools and measurement insights for apps:
New playable and video ad formats to get more people into your game
Integrations to help you create better monetization experiences
Measurement tools that provide insights about how players are interacting with your game
Let more users try your game with a playable ad format
There’s no better way for a new user to experience your game than to actually play it. So today, we introduced playables, an interactive ad format in Universal App Campaigns that allows users to play a lightweight version of your game, right when they see it in any of the 1M+ apps in the Google Display Network.
Jam City’s playable ad for Cookie Jam
Playables help you get more qualified installs from users who tried your game in the ad and made the choice to download it for more play time. By attracting already-engaged users into your app, playables help you drive the long-term outcomes you care about — rounds played, levels beat, trophies won, purchases made and more.
"Jam City wants to put our games in the hands of more potential players as quickly as possible. Playables get new users into the game right from the ad, which we've found drives more engagement and long-term customer value." Josh Yguado, President & COO Jam City, maker of Panda Pop and Cookie Jam.
Playables will be available for developers through Universal App Campaigns in the coming months, and will be compatible with HTML5 creatives built through Google Web Designer or third-party agencies.
Improve the video experience with ads designed for mobile viewing
Most mobile video ad views on the Google Display Network are watched on devices held vertically². This can create a poor experience when users encounter video ad creatives built for horizontal viewing.
Developers using Universal App Campaigns will soon be able to use an auto-flip feature that automatically orients your video ads to match the way users are holding their phones. If you upload a horizontal video creative in AdWords, we will automatically create a second, vertical version for you.
Cookie Jam horizontal video and vertical-optimized video created through auto-flip technology
The auto-flip feature uses Google's machine learning technology to identify the most important objects in every frame of your horizontal video creative. It then produces an optimized, vertical version of your video ad that highlights those important components of your original asset. Early tests show that click-through rates are about 20% higher on these dynamically-generated vertical videos than on horizontal video ads watched vertically³.
Unlock new business with rewarded video formats, and free, unlimited reporting
Developers have embraced AdMob's platform to mediate rewarded video ads as a way to let users watch ads in exchange for an in-app reward. Today, we are delighted to announce that we are bringing Google’s video app install advertising demand from AdWords to AdMob, significantly increasing rewarded demand available to developers. Advertisers that use Universal App Campaigns can seamlessly reach this engaged, game-playing audience using your existing video creatives.
We are also investing in better measurement tools for developers by bringing the power of Firebase Analytics to more game developers with a generally available C++ SDK and an SDK for Unity, a leading gaming engine.
C++ and Unity developers can now access Firebase Analytics for real-time player insights
With Firebase Analytics, C++ and Unity developers can now capture billions of daily events — like level completes and play time — to get more nuanced player insights and gain a deeper understanding of metrics like daily active users, average revenue per user and player lifetime value.
This is an exciting time to be a game developer. It’s been a privilege to meet so many of you at GDC 2017 and learn about the amazing games that you’re all building. We hope the innovations we announced today help you grow long-term gaming businesses and we look forward to continuing on this journey with you.
Until next year, GDC!
1 - App Monetization Report, November 2016, App Annie
2 - More than 80% of video ad views in mobile apps on the Google Display Network are from devices held vertically, Google Internal Data
3 - Google Internal Data
When the Google Slides team launched their very first API last November, it immediately opened up a whole new class of applications: applications that interact with the Slides service so you can perform operations on presentations programmatically. Since its launch, we've published several videos to help you realize some of those possibilities, showing you how to:
Today, we're releasing the latest Slides API tutorial in our video series. This one goes back to basics a bit: adding text to presentations. But we also discuss shapes—not only adding shapes to slides, but also adding text within shapes. Most importantly, we cover one best practice when using the API: create your own object IDs. By doing this, developers can execute more requests while minimizing API calls.
Developers use insertText requests to tell the API to add text to slides. This is true whether you're adding text to a textbox, a shape or table cell. Similar to the Google Sheets API, all requests are made as JSON payloads sent to the API's batchUpdate() method. Here's the JavaScript for inserting text in some object (objectID) on a slide:
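The JavaScript snippet from the original post isn't reproduced here; as a sketch, the same request body expressed as a Python dictionary (mirroring the JSON payload, with objectID assumed to hold the ID of the target shape or text box; table cells additionally need a cellLocation) would be roughly:

requests = [{
    'insertText': {
        'objectId': objectID,      # the element to write into
        'insertionIndex': 0,       # insert at the start of any existing text
        'text': 'Hello World!\n',
    },
}]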
Placing or manipulating shapes or images on slides requires more information so the cloud service can properly render these objects. Be aware that it does involve some math, as you can see from the Page Elements page in the docs as well as the Transforms concept guide. In the video, I drop a few hints and good practices so you don't have to start from scratch.
Regardless of how complex your requests are, if you have at least one, say in an array named requests, you'd make an API call with the aforementioned batchUpdate() method, which in Python looks like this (assuming SLIDES is the service endpoint and a presentation ID of deckID):
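A sketch of that call using the Google APIs client library for Python, under the post's stated assumptions (SLIDES is the built service object, deckID the presentation ID, and requests the list of request dictionaries):

SLIDES.presentations().batchUpdate(
    presentationId=deckID,
    body={'requests': requests},
).execute()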
For a detailed look at the complete code sample featured in the DevByte, check out the deep dive post. As you can see, adding text is fairly straightforward. If you want to learn how to format and style that text, check out the Formatting Text post and video as well as the text concepts guide. To learn how to perform text search-and-replace, say to replace placeholders in a template deck, check out the Replacing Text & Images post and video as well as the merging data into slides guide. We hope these developer resources help you create that next great app that automates the task of producing presentations for your users!