Monthly Archives: October 2015

Proguard and AdMob mediation

If you’re an Android developer who uses ProGuard to post-process builds, you already know the improvements it can make to APK size and speed. Just as handy, though, is its ability to obfuscate your compiled code by stripping out debug information and renaming classes, methods, and fields to generic identifiers. It’s a great way to discourage reverse-engineering of your application. If you’re an AdMob publisher who uses mediation, however, you need to take special care when configuring ProGuard in order to avoid obfuscating some of the code used in the mediation process.

AdMob mediation needs two classes to keep their original names in your final APK: AdUrlAdapter and AdMobAdapter. If either of them has been renamed by ProGuard, the SDK may incorrectly return “no fill” responses for the AdMob demand in your mediated ad units.

The good news is that it’s easy to avoid this problem. Just add the following two keep options to your ProGuard configuration file:

-keep class com.google.ads.mediation.admob.AdMobAdapter {
    *;
}

-keep class com.google.ads.mediation.AdUrlAdapter {
    *;
}

These options instruct ProGuard to avoid renaming the two classes, and to leave the names of their fields and methods unobfuscated as well. With the original names intact, the mediation system will be able to instantiate them dynamically whenever they’re needed, and your otherwise obfuscated application won’t miss out on any AdMob impressions.

The third-party networks your app mediates may also need certain classes exempted from obfuscation. Be sure to check with those networks to find out if they have recommendations for ProGuard configuration.
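
As an illustration, a keep rule for a third-party adapter usually follows the same pattern as the ones above. The class name below is a placeholder; substitute whatever adapter class your network’s documentation specifies:

-keep class com.example.thirdparty.SampleAdapter {
    *;
}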

If you have technical questions about this (or anything else relating to the Google Mobile Ads SDK), stop by our forum.

tags: android, admob_mediation, mobile_ads_sdk

Google Summer of Code wrap-up: STE||AR Group

Today we are featuring the STE||AR Group, another Google Summer of Code veteran organization. Adrian Serio gives an overview of their five students’ summer projects below.


The STE||AR Group is an international team of researchers who aim to improve application scalability by more efficiently utilizing hardware resources available to developers. This summer has been an exciting time for the STE||AR Group’s Google Summer of Code (GSoC) mentors and students alike! We were very pleased with the dedication and effort of all five of our participants.

Our students made contributions to three of our software products:
  • HPX: a distributed C++ runtime system which comes with a standards-compliant API and allows users to scale their applications across thousands of machines
  • LibGeoDecomp: an auto-parallelizing library for petascale computer simulations which is able to take advantage of HPX to better adapt fluctuating workloads to the system
  • LibFlatArray: a highly efficient multidimensional array library which provides an object-oriented interface but stores data in a vectorization-friendly Struct-of-Arrays format.

Just as these three products can work together as a tightly integrated stack, our goal with the GSoC projects was to create synergy between them and steer our development toward increasing the adaptivity and efficiency of our software. Below are summaries of our students’ projects.

Implementation of a New Resource Manager in HPX: Nidhi Makhijani
This project set out to properly assign hardware resources to executors: C++ objects that dictate how a thread should be executed. Nidhi’s work allocates resources to an executor when it is created and returns them when it stops. Additionally, Nidhi laid the groundwork for dynamic allocation, where the resource manager can monitor and share resources among all of the running executors.
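
To make the executor concept concrete, here is a minimal, self-contained C++ sketch of a thread-pool executor that acquires its hardware resources (worker threads) on construction and returns them when it is destroyed. It is illustrative only; it does not use HPX’s actual classes or the interfaces from Nidhi’s resource manager:

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Toy executor: owns its worker threads for its whole lifetime,
// acquiring them on construction and returning them on destruction.
class simple_executor {
public:
    explicit simple_executor(std::size_t num_threads) {
        for (std::size_t i = 0; i < num_threads; ++i)
            workers_.emplace_back([this] { work_loop(); });
    }

    ~simple_executor() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            done_ = true;
        }
        ready_.notify_all();
        for (std::thread& t : workers_)
            t.join();  // hardware resources are handed back here
    }

    // Schedule a task for execution on one of the owned threads.
    void add(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        ready_.notify_one();
    }

private:
    void work_loop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                ready_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (tasks_.empty())
                    return;  // done_ was set and no work is left
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable ready_;
    bool done_ = false;
};

int main() {
    simple_executor exec(4);  // four OS threads allocated to this executor
    for (int i = 0; i < 8; ++i)
        exec.add([i] { std::cout << "task " << i << " done\n"; });
}   // destructor drains remaining tasks and joins the workers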

SIMD Wrapper for ARM NEON, Intel AVX512 & KNC in LibFlatArray: Larry Xiao
Vectorization is imperative for writing highly efficient numerical kernels. The goal of this project was to extend LibFlatArray’s existing SIMD wrappers to more architectures (e.g. ARM NEON, Intel AVX512) and to extend the capabilities of these wrappers. Larry set out to study the different ISAs (Instruction Set Architectures) and to make the library run efficiently on them.
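
To give a flavor of what such a wrapper provides, below is a small illustrative AVX example (the class name is ours, not LibFlatArray’s actual API): one object stands for four doubles, and each overloaded operator maps to a single vector instruction. Build with an AVX-enabled compiler flag such as -mavx:

#include <immintrin.h>
#include <cstdio>

// Illustrative wrapper in the spirit of a SIMD library class:
// a vec4d holds four doubles in one AVX register.
class vec4d {
public:
    vec4d(double scalar) : v_(_mm256_set1_pd(scalar)) {}
    explicit vec4d(const double* ptr) : v_(_mm256_loadu_pd(ptr)) {}
    vec4d(__m256d v) : v_(v) {}

    vec4d operator+(const vec4d& other) const { return _mm256_add_pd(v_, other.v_); }
    vec4d operator*(const vec4d& other) const { return _mm256_mul_pd(v_, other.v_); }

    void store(double* ptr) const { _mm256_storeu_pd(ptr, v_); }

private:
    __m256d v_;
};

int main() {
    double a[4] = {1.0, 2.0, 3.0, 4.0};
    double b[4] = {10.0, 20.0, 30.0, 40.0};
    double out[4];

    // out = a * 2.0 + b, computed four lanes at a time.
    (vec4d(a) * vec4d(2.0) + vec4d(b)).store(out);

    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}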

CSV Formatted Performance Counters for HPX: Devang Bacharwar
HPX provides users with a uniform interface to access arbitrary system information from anywhere in the system. Devang’s project allows users to request these performance counters in CSV format, and he added the option to include a timestamp with each value. These features will make it easier for HPX users to analyze the performance data gathered from an application.

Integrate a C++ AMP Kernel with HPX: Marcin Copik
The HPX runtime system can coordinate the execution and synchronization of OpenCL kernels on arbitrary OpenCL devices, such as GPUs, in a system. In his GSoC project, Marcin used a C++ AMP compiler to produce an OpenCL kernel from a parallel algorithm implemented by HPX. Marcin integrated the Kalmar AMP compiler into the HPX build system, transformed a parallel for_each algorithm into an OpenCL kernel, dispatched the kernel to a GPU, and synchronized the result with a concurrently running HPX application.
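
For context, the kind of code this work targets looks roughly like the sketch below, written against the standard-conforming parallel algorithm interface that HPX provides (header and namespace spellings have shifted between HPX versions, so treat the exact names as approximate). With the Kalmar backend from Marcin’s project, the lambda body is the part that gets compiled to an OpenCL kernel:

#include <hpx/hpx_main.hpp>                    // run plain main() on the HPX runtime
#include <hpx/include/parallel_for_each.hpp>
#include <vector>

int main() {
    std::vector<float> data(1024, 1.0f);

    // A data-parallel loop with no cross-iteration dependencies:
    // exactly the shape that can be dispatched to a GPU as a kernel.
    hpx::parallel::for_each(hpx::parallel::par, data.begin(), data.end(),
        [](float& x) { x = 2.0f * x + 1.0f; });

    return 0;
}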

A Flexible IO Infrastructure for LibGeoDecomp: Konstantin Kronfeldner
In LibGeoDecomp, users are able to read from and write to arbitrary regions of the simulation space. These operations are carried out by objects which we call Steerers and Writers. Over the summer, Konstantin added the ability for these Steerers and Writers to be dynamically created and destroyed. LibGeoDecomp is typically used on supercomputers, where jobs are executed non-interactively via a batch system. Konstantin's extensions enable users to interact with the application at runtime. They can view and modify the simulation model dynamically. The benefit of this is a significantly lower turnaround time for domain scientists who need to carry out many computational experiments.

By Adrian Serio, Scientific Program Coordinator, STE||AR Group

Discover Europe’s hidden gems with Google Maps and the Financial Times

We’re always trying to build technology that helps people access and explore the world around them, from the Liwa desert to a tucked-away restaurant in Playa Carmen. That’s why today, we’re excited to announce a partnership with the Financial Times called Hidden Cities, an FT Weekend series that combines Google technology with FT journalism. It allows readers to discover places to eat, drink, and shop in the world’s political and cultural capitals, and easily explore them using Google Maps.
We’re kicking off the series by showcasing gems off the beaten path in Brussels. Readers can expect lots of beer and chocolate recommendations from local tastemakers and FT journalists alike - including picks from master chocolatier Pierre Marcolini, brewery chief Jean Van Roy, and the FT Bureau staff past and present.
You can check out the online experience at ft.com/hiddencities and this weekend’s FT Weekend Magazine. Look out for the next Hidden Cities installment in November, which will take users under the surface of another European capital - London.

Posted by Molly Welch, Product Marketing Manager

Camio offers AI-based, real-time video monitoring that learns user preferences using Google Cloud Platform

Today’s guest post comes from Carter Maslan, CEO of Camio, a startup that converts real-time video into useful information.

I adore our dog Jane. She moved 3,000 miles away, but I still wake up to her barking and pillow-fluffing. That's because our son shares his Camio feed with us. We get video alerts on the iPad he leaves behind as a pet monitor. These alerts come out of Camio’s AI technology, which includes hundreds of servers running neural nets competing to learn the exact moments our family wants to see. To riff on an old Google bumper sticker, "My other pet sitter is a data center" — or actually many data centers running Camio.

Camio is a cloud-based video monitoring service for watching, storing, analyzing and searching for real-time videos. It turns any camera into a smart home or business monitor, where real-time video is uploaded and saved in the cloud. Retailers can use Camio to watch and understand foot traffic in stores. Families can capture video from birthday parties and family reunions. Building owners protect offices from theft and vandalism. Care providers monitor elders living alone. Parents know at a glance when their kids get home or the dog sitter leaves.

People can use Camio to view live feeds and engage in auto-answering two-way chats for free, from anywhere in the world. They can also receive alerts when particular search conditions are met, such as when a car pulls into the driveway.

A huge technical challenge we had was to design and implement a highly scalable backend system for capturing, storing, retrieving and analyzing millions of videos — without building and managing our own servers. We wanted to spend every incremental engineering dollar on making the product great, instead of investing in infrastructure.

We had already been using Google App Engine, Google Compute Engine, Google Cloud Datastore and Google Cloud Storage, and then one of our engineers read about the Google Cloud Platform for Startups credit program. We asked one of our investors, Greylock Partners, to endorse our application to the program, which prompted us to explore Google BigQuery, Google Cloud Monitoring, Google Cloud Bigtable, Managed VMs and other services too.

As a result of being in the startup program, we were able to access valuable technical resources, including consultations with many of Google’s top engineers. The credits also allowed us to defer operating costs, so we could devote more resources to building our product.

We store videos in Cloud Storage, a high-capacity, highly scalable storage solution ideal for working with extremely large data objects. The videos are processed by Compute Engine to identify patterns and events, and the metadata output from Compute Engine is indexed in Cloud Datastore, a schemaless NoSQL database. The videos themselves are delivered to users via App Engine. It’s all scalable, and able to grow as we grow.
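
As a rough sketch of that flow, the Python below uses Google’s Cloud client libraries to store one clip and index its metadata. The bucket, kind, and field names are made up for illustration and are not Camio’s actual code:

from google.cloud import datastore, storage

BUCKET = "example-video-clips"  # hypothetical bucket name

def ingest_clip(local_path, labels, timestamp):
    """Upload one video clip and record searchable metadata for it."""
    # 1. Put the raw video bytes into Cloud Storage.
    blob = storage.Client().bucket(BUCKET).blob("clips/%s.mp4" % timestamp)
    blob.upload_from_filename(local_path)

    # 2. Index the analysis output in Cloud Datastore so searches
    #    ("car pulls into the driveway") can find the right moments.
    client = datastore.Client()
    event = datastore.Entity(client.key("VideoEvent"))
    event.update({
        "gcs_path": "gs://%s/clips/%s.mp4" % (BUCKET, timestamp),
        "labels": labels,        # e.g. ["dog", "motion"], from video analysis
        "timestamp": timestamp,
    })
    client.put(event)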

There’s no advantage in our doing the things that Google does so well for us. Being maniacally focused on our product — that’s the road to success.

Posted by Carter Maslan, CEO, Camio

Catch your breath and take in the best views of Sabah on Street View

Malaysia’s easternmost state, Sabah, sits just south of a typhoon belt. Seafarers used to call it the "Land Below The Wind," as it provided refuge from the raging storms to the north. From today, you too can catch your breath in awe at the beauty of Sabah with the launch of new Street View imagery of 23 islands and nature reserves in the area.

Sabah is home to the highest mountain in Malaysia, so you can now scale the Mount Kinabalu peak from your couch. Mount Kinabalu sits in Kinabalu Park, a UNESCO World Heritage Site that teems with unique flora and fauna — including the gigantic Rafflesia plant and orangutans.

Scaling the Mount Kinabalu peak is even tougher with an 18kg Trekker on your back

Or take a quiet cruise down the Kinabatangan River. The longest river in Sabah, the Kinabatangan winds through a forest-covered floodplain that is home to proboscis monkeys, Sumatran rhinos and Asian elephants. If you like wildlife, try to catch a glimpse of the orangutans at the Sepilok Orang Utan Reserve:

Two orangutans having a tête-à-tête at the Sepilok Orang Utan Reserve

Once you’ve explored the jungle, why not go island hopping and visit Mabul or Mataking? The clear turquoise waters of the Celebes Sea are teeming with sea life and gently sloping reefs, which makes these islands diving hotspots.

Take a virtual dip at Mabul Island

The tiny Mataking Island can be walked around in an hour. We bet you could go even faster with Street View.

Sabah is home to remarkable natural diversity. We hope you enjoy scaling the peaks of Kinabalu, going deep into the jungle, or lazing on the many island beaches with this new Street View collection.

Posted by Nhazlisham Hamdan, Street View Operations Lead Malaysia, Indonesia & Thailand


Source: Google LatLong


How to measure translation quality in your user interfaces

Worldwide, there are about 200 languages that are spoken by at least 3 million people. In this global context, software developers often need to translate their user interfaces into many languages. While graphical user interfaces have evolved substantially compared to text-based user interfaces, they still rely heavily on textual information. The perceived language quality of translated user interfaces (UIs) can have a significant impact on the overall quality and usability of a product. But how can software developers and product managers learn about the quality of a translation when they don’t speak the language themselves?

Key information in interaction elements and content is mostly conveyed through text. This can be illustrated by removing the text elements from a UI, as shown in the figure below.
Three versions of the YouTube UI: (a) the original, (b) YouTube without text elements, and (c) YouTube without graphic elements. It becomes apparent how the textless version is stripped of the most useful information: it is almost impossible to choose a video to watch, and navigating the site is impossible.
In "Measuring user rated language quality: Development and validation of the user interface Language Quality Survey (LQS)", recently published in the International Journal of Human-Computer Studies, we describe the development and validation of a survey that enables users to provide feedback about the language quality of the user interface.

UIs are generally developed in one source language and then translated string by string. The translation process is prone to errors and can introduce problems that are not present in the source, most often because context is missing. For example, the word “auto” can be translated into French as automatique (automatic) or automobile (car), which obviously have different meanings; translators might choose the wrong term if context is missing during the process. Another problem arises from words that behave as a verb when placed on a button but as a noun when part of a label. For example, “access” can stand for “you have access” (as a label) or “you can request access” (as a button).

Further pitfalls are gender, prepositions without context or other characteristics of the source text that might influence translation. These problems sometimes even get aggravated by the fact that translations are made by different linguists at different points in time. Such mistranslations might not only negatively affect trustworthiness and brand perception, but also the acceptance of the product and its perceived usefulness.

This work was motivated by anecdotal evidence gathered by the YouTube internationalization team in 2012, which suggested that some language versions of YouTube might benefit from improvement efforts. While expert evaluations led to significant improvements in text quality, these evaluations were expensive and time-consuming. We therefore decided to develop a survey that enables users to provide feedback about the language quality of the user interface, as a scalable way of gathering quantitative data about language quality.

The Language Quality Survey (LQS) contains 10 questions about language quality. The first five questions form the factor “Readability”, which describes how natural and smooth the text is to read. For instance, one question targets ease of understanding (“How easy or difficult to understand is the text used in the [product name] interface?”). Questions 6 to 9 capture the frequency of (in)consistencies in the text, called “Linguistic Correctness”, and the survey closes with a single global language quality item. The full survey can be found in the publication.

Case study: applying the LQS in the field

As the LQS was developed to discover problematic translations of the YouTube interface and to allow focused quality improvement efforts, it was made available in over 60 languages, and data were gathered for all these versions of the YouTube interface. To understand the quality of each UI version, we compared the results for the translated versions to the source language (here: US English). We first inspected the global item, in combination with Linguistic Correctness and Readability. Second, we inspected each item separately to understand which aspects of Linguistic Correctness or Readability showed worse (or better) values. Here are some results:
  • The data revealed that about one third of the languages showed subpar language quality levels, when compared to the source language.
  • To understand the source of these problems and fix them, we analyzed the qualitative feedback users had provided (every time someone selected one of the two lowest scale points, indicating a problem with the language, a text box surfaced asking for examples or links illustrating the issue).
  • The analysis of these comments provided linguists with valuable feedback of various kinds. For instance, users pointed to confusing terminology, untranslated words that were missed during translation, typographical or grammatical problems, words that were translated but are commonly used in English, or screenshots in help pages that were in English but needed to be localized. Some users also pointed to readability aspects, such as sections with an old-fashioned or overly formal tone, overly informal translations, complex technical or legal wording, unnatural translations, or rather lengthy sections of text. In some languages users also pointed to text that was too small or criticized the readability of the font that was used.
  • In parallel, in-depth expert reviews (so-called “language find-its”) were organized. In these sessions, a group of experts for each language met and screened all of YouTube to discover aspects of the language that could be improved, then decided on concrete actions to fix them. By using the LQS data to select target languages, it was possible to reduce the number of language find-its to about one third of the original estimate (if all languages had been screened).

LQS has since been successfully adapted and used for various Google products such as Docs, Analytics, and AdWords. We have found the LQS to be a reliable, valid and useful tool for language quality evaluation and improvement. The LQS can be regarded as a small piece in the puzzle of understanding and improving localization quality. Google is making this survey broadly available, so that everyone can start improving their products for everyone around the world.

New Course on Developing Android Apps for Google Cast and Android TV

Posted by Josh Gordon, Developer Advocate

Go where your users are: the living room! Google Cast lets users stream their favorite apps from Android, iOS and the Web right onto their TV. Android TV turns a TV into an Android device, only bigger!

We've partnered with Udacity to launch a new online course - Google Cast and Android TV Development. This course teaches you how to extend your existing Android app to work with these technologies. It’s packed with practical advice, code snippets, and deep dives into sample code.

You can take advantage of both without having to rewrite your app. Android TV is just Android on a new form factor, and the Leanback library makes it easy to add a big-screen, cinematic UI to your app. Google Cast comes with great samples and guides to help you get started. Google also provides the Cast Companion Library, which makes it faster and easier to add Cast to your Android app.
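
As a small taste of the Cast side, initializing the Cast Companion Library is typically a one-time call in your Application class. The sketch below follows the initialize() signature of older CCL releases (newer releases configure a CastConfiguration object instead), and the application ID is a placeholder:

import android.app.Application;

import com.google.android.libraries.cast.companionlibrary.cast.VideoCastManager;

public class CastSampleApplication extends Application {

    // Placeholder: use the receiver application ID from the Cast developer console.
    private static final String APP_ID = "YOUR_CAST_APP_ID";

    @Override
    public void onCreate() {
        super.onCreate();
        // One-time setup: context, receiver app ID, target activity for the
        // expanded controller (null for default), and custom data namespace (null).
        VideoCastManager.initialize(this, APP_ID, null, null);
    }
}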

This class is part of our larger series on Ubiquitous Computing across Google platforms, including Android Wear and Android Auto. Designed as short, standalone courses, you can take any one on its own, or take them all!

Get started now and try it out at no cost. Your users are waiting!


Try the new cloud management console

Every day, our customers log into the Cloud Console to manage resources running on Google Cloud Platform. Many of you spend a lot of time there, deploying, configuring and managing various aspects of the platform.

We’re excited to share a beta of the new UI design, which helps you focus on what matters most to you.



Highlights include:

  • Improved Navigation:
    • Global navigation: We’ve re-architected the navigation so that you can access features in several different ways. Open the menu to see all Cloud Platform offerings in one consolidated place, or use the keyboard shortcut to jump straight into search-based navigation.
    • Local navigation: We’ve streamlined the navigation layout of each feature to allow complete focus within that space and to maximize screen real estate.
  • Search: Newly added navigational search allows you to jump to any feature, page or API in the console from a persistent search box.
  • Customization: Focus the console on the services you actually use by “pinning” them to the top bar for fast one-click access from anywhere.
  • Increased visibility: Identify and address issues from a configurable dashboard of your resources and services.

The new console is in beta for a month before it’s released to general availability in late November. Please visit console.cloud.google.com, click the button at the top that reads “Try the beta console,” and let us know what you think by using the feedback icon in the new UI.

-Posted by Stewart Fife, Product Manager, Google Cloud Platform

Updated Guidance on Implementing AdMob Interstitial Ads

We’ve recently updated our AdMob Help Center to provide further guidance on implementing interstitial ads. These best practices, along with examples of what you should and shouldn’t do, are designed to help developers get interstitial implementations right.

Mobile devices have limited screen size, which means that careful planning for your ad placement is especially important. Improper implementation can lead to accidental clicks, and our goal is to build a strong ecosystem that benefits users, advertisers and developers in the long term.

These examples are meant to help developers create positive user experiences. Users should not be overwhelmed with interstitial ads: repeated interstitials often lead to poor user experiences and accidental clicks.

Additionally, users should not be surprised by interstitial ads. Placing interstitial ads so that they suddenly appear when a user is focused on a task at hand (e.g., playing a game, filling out a form, reading content) may lead to accidental clicks and often creates a frustrating user experience.

Finally, ads should not be placed in a way that prevents viewing the app’s core content, nor in a way that interferes with navigating or interacting with the app’s core content and functionality.

More examples and details, along with additional visual examples of best practices, can be found in the AdMob Help Center. There you will find disallowed implementations, recommendations on how to fix interstitial implementations, and general interstitial ad guidance.

Subscribe to app development insights from AdMob by Google here.


Posted by John Brown
Head of Publisher Policy Communications

Source: Inside AdMob


Tracking our annual carbon footprint

As the world looks toward global climate negotiations at COP21 in Paris this December, we’d like to share updates on our commitment to carbon neutrality. The latest figures just posted on our Google Green website show that we're a carbon neutral company for the eighth year in a row, our carbon footprint is growing more slowly than our business, and our use of renewables continues to increase.

For 2014, we reported a carbon footprint of 2.49 million metric tons of carbon dioxide equivalent (CO2e) to the Carbon Disclosure Project (CDP), a global nonprofit that collects and shares climate change data. Our carbon intensity, which measures greenhouse gas emissions per million dollars of revenue, has dropped for the sixth year in a row: for every million dollars of revenue we generated in 2014, we emitted 22.9 metric tons of CO2e from our operations and buildings. Our footprint continues to grow more slowly than our business because we’re able to get more done with each gram of carbon we emit.

Improved data center efficiency initiatives, renewable energy purchases, and high-quality carbon offset purchases all help bring our net carbon footprint down to zero. For example, our data centers now get 3.5 times the computing power out of the same amount of energy as they did five years ago. Our focus on keeping our carbon footprint in check means that people using Google’s products can also feel good about the minimal environmental impact of their searches, Gmail messages, YouTube views, and more. Our calculation still holds true that serving an active Google user for one month is like driving a car just one mile.

We are the largest corporate purchaser of renewable energy in the world. As of 2014, 37% of the electricity for our operations—which include our offices, data centers and other infrastructure—came from renewable sources. That’s up from 35% in 2013, which is striking given how quickly we’re growing as a company. To keep up with that growth, we’re continuing to sign new long-term energy contracts, including one that can power our entire main campus in Mountain View with 100% local wind energy. These long-term commitments are not only good for the environment; they also make good business sense.

We are committed to making investments that will significantly increase the amount of energy we get from renewable sources over the next couple of years while adding new capacity to the grid. It’s a pattern we anticipate will accelerate; this past summer we also doubled down—make that tripled down!—with our climate pledge to the White House to triple our renewable energy purchases over the next decade.

Climate change is one of the most significant global challenges of our time; Google wants to do its part and make a difference. We’ll continue to update you on that progress.