Monthly Archives: August 2013

RenderScript Intrinsics

Posted by R. Jason Sams, Android RenderScript Tech Lead

RenderScript has a very powerful ability called Intrinsics. Intrinsics are built-in functions that perform well-defined operations often seen in image processing. Intrinsics can be very helpful to you because they provide extremely high-performance implementations of standard functions with a minimal amount of code.

RenderScript intrinsics will usually be the fastest possible way for a developer to perform these operations. We’ve worked closely with our partners to ensure that the intrinsics perform as fast as possible on their architectures — often far beyond anything that can be achieved in a general-purpose language.

Table 1. RenderScript intrinsics and the operations they provide.

Name Operation
ScriptIntrinsicConvolve3x3, ScriptIntrinsicConvolve5x5 Performs a 3x3 or 5x5 convolution.
ScriptIntrinsicBlur Performs a Gaussian blur. Supports grayscale and RGBA buffers and is used by the system framework for drop shadows.
ScriptIntrinsicYuvToRGB Converts a YUV buffer to RGB. Often used to process camera data.
ScriptIntrinsicColorMatrix Applies a 4x4 color matrix to a buffer.
ScriptIntrinsicBlend Blends two allocations in a variety of ways.
ScriptIntrinsicLUT Applies a per-channel lookup table to a buffer.
ScriptIntrinsic3DLUT Applies a color cube with interpolation to a buffer.

Your application can use one of these intrinsics with very little code. For example, to perform a Gaussian blur, the application can do the following:

RenderScript rs = RenderScript.create(theActivity);
ScriptIntrinsicBlur theIntrinsic = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
Allocation tmpIn = Allocation.createFromBitmap(rs, inputBitmap);
Allocation tmpOut = Allocation.createFromBitmap(rs, outputBitmap);
theIntrinsic.setRadius(25.f);
theIntrinsic.setInput(tmpIn);
theIntrinsic.forEach(tmpOut);
tmpOut.copyTo(outputBitmap);

This example creates a RenderScript context and a Blur intrinsic. It then uses the intrinsic to perform a Gaussian blur with a 25-pixel radius on the allocation. The default implementation of blur uses carefully hand-tuned assembly code, but on some hardware it will instead use hand-tuned GPU code.

What do developers get from the tuning that we’ve done? On the new Nexus 7, running that same 25-pixel radius Gaussian blur on a 1.6 megapixel image takes about 176ms. A simpler intrinsic like the color matrix operation takes under 4ms. The intrinsics are typically 2-3x faster than a multithreaded C implementation and often 10x+ faster than a Java implementation. Pretty good for eight lines of code.
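The color matrix intrinsic cited above is just as concise. As a sketch (reusing the rs context and the tmpIn/tmpOut allocations from the blur example; the greyscale conversion is our own illustrative choice, using the intrinsic's built-in luminance matrix):

```java
// Sketch: greyscale conversion with ScriptIntrinsicColorMatrix.
// Assumes the RenderScript context (rs) and the U8_4 Allocations
// (tmpIn, tmpOut) created in the blur example above.
ScriptIntrinsicColorMatrix colorMatrix =
        ScriptIntrinsicColorMatrix.create(rs, Element.U8_4(rs));
colorMatrix.setGreyscale();          // built-in luminance matrix
colorMatrix.forEach(tmpIn, tmpOut);  // apply the 4x4 matrix per pixel
tmpOut.copyTo(outputBitmap);         // copy the result back to the Bitmap
```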


Figure 1. Performance gains with RenderScript intrinsics, relative to equivalent multithreaded C implementations.

Applications that need additional functionality can mix these intrinsics with their own RenderScript kernels. An example of this would be an application that is taking camera preview data, converting it from YUV to RGB, adding a vignette effect, and uploading the final image to a SurfaceView for display.

In this example, we’ve got a stream of data flowing between a source device (the camera) and an output device (the display) with a number of possible processors along the way. Today, these operations can all run on the CPU, but as architectures become more advanced, using other processors becomes possible.

For example, the vignette operation can happen on a compute-capable GPU (like the ARM Mali T604 in the Nexus 10), while the YUV to RGB conversion could happen directly on the camera’s image signal processor (ISP). Using these different processors could significantly improve power consumption and performance. As more of these processors become available, future Android updates will enable RenderScript to run on them, and applications written for RenderScript today will begin to make use of those processors transparently, without any additional work for developers.

Intrinsics provide developers a powerful tool they can leverage with minimal effort to achieve great performance across a wide variety of hardware. They can be mixed and matched with general purpose developer code allowing great flexibility in application design. So next time you have performance issues with image manipulation, I hope you give them a look to see if they can help.

Building Better Maps in Russia and Hong Kong

When you travel or look at a map of your city, you want it to be as accurate as possible. We do too. That’s why we’re launching our Ground Truth initiative in Hong Kong and parts of Russia (including Moscow, St. Petersburg, Novosibirsk, Yekaterinburg and large areas in the west of the country), so we can build a better map that helps you find what you need and get to where you’re going, quickly and easily. These new, updated maps for Russia and Hong Kong will automatically become part of your maps using our JavaScript, Android, and iOS APIs. Ground Truth enables us to update a country’s map at a faster pace to provide you with an up-to-date map that mirrors the real world as closely as possible. Ground Truth also makes it possible for you to contribute your local knowledge to the map and report any issues you find through the Report a Problem tool, so together we can build a better map.

The updated maps in Russia and Hong Kong now provide detailed walking paths in many well-known parks and landmarks, making navigating easier, especially in pedestrian-friendly Hong Kong. For example, we’ve added walking paths to Victoria Park - you can now zigzag across the park as you please.

Russia is rather large, so many people prefer to travel by car. Today’s update is good news for drivers as well, as we’ve made big improvements to our road network. We’ve updated street names, turn restrictions and one-way streets as well as completely new maps in more than 50 towns across Russia. So the next time you drive to the city center for shopping, try out the Google Maps app for Android and iPhone to get there.

If you’d rather adventure by sea, we’ve also added ferry routes, down to the specific harbor of departure. For example, in the updated map of Hong Kong’s Central and Western District below you can see the ferry routes as well as nearby points of interests and transportation options.

The updated map also indicates places of interest more clearly, such as hospitals, national parks, and universities. For example, Moscow State University, Russia’s oldest and largest university, now has more detail with cleaner walking paths, named roads, and labels for different department buildings.

Whether you’re gazing at the awe-inspiring spirals of St. Basil’s Cathedral in Moscow or strolling through the bustling Tsim Sha Tsui Promenade in Hong Kong, Google Maps is here to help you see, explore, and discover the world!

To learn more about Ground Truth, check out this presentation from Google I/O 2013:

Posted by Kirill Levin, Google Maps Software Engineer

Continuing to invest in a clean, open exchange

Recently there has been a great deal of discussion about applications that inject or overlay ads on sites without the express approval of users and those sites, and then monetize the inventory as their own. We believe that this kind of activity is bad for end users and damages the integrity of the advertising industry. In order for the programmatic marketplace to achieve its full potential and help as many marketers and publishers as we think it can, there needs to be trust between advertisers, publishers, and users.

We’ve invested, since the beginning, in strong policies and a system of checks and filters to ensure that the inventory on the DoubleClick Ad Exchange is the highest quality in the industry. Here’s a quick summary of what we do to stop invalid injected inventory from entering our exchange.

We don’t support spammy applications. Period.
Both the Google Platforms program policies and the DoubleClick Ad Exchange (AdX) Seller Program Guidelines strictly prohibit the use of systems, including toolbars, that overlay ad space on a given site without express permission of the site owner. In addition, we have numerous processes and technologies in place to review publishers’ inventory as well as advertisers’ ads to maintain a high standard of quality for how advertising is transacted on our platforms. 

In light of the increased concerns on this subject, many publishers have asked us for guidance on what to ask the exchanges or networks they work with. Here are three suggested questions any publisher partner should be able to answer in regards to protecting against injected inventory:
  • Does your platform work with or supply advertising for clients who inject display ads in browsers?
  • Do your program policies prohibit the use of systems to inject display ads in browsers, without first having obtained user consent or consent from the site affected?
  • Can you provide a report of all the inventory partners on your platform serving my domain?

We do, and will always, support our publisher partners. 
Finally, I’d like to thank the millions of publishers who use the DoubleClick Ad Exchange, large and small, that day in and day out, provide amazing value both to their users and their advertisers. We welcome a broader discussion with our partners and with the industry about how to collectively solve this issue and others. Together, we can all ask the tough questions, hold each other accountable, and ultimately create the web we all want, where publishers, users and advertisers all thrive.

Posted by Scott Spencer, Director of Product Management

Here’s my playlist, so submit a video, maybe?

Update: YouTube Direct Lite for iOS is now available as well. This version demonstrates best practices for using the YouTube APIs on iOS.

YouTube Direct Lite allows you to solicit videos from your users and then moderate those submissions into standard YouTube playlists for display. And now there is an app for that.

With the YouTube Direct Lite apps (Android, iOS), your fans can

  • record a new video,
  • upload an existing video from their device,
  • pick one of their own YouTube uploads

and submit it to your playlist, all from their device. You can then moderate their submissions, which won't show up in your playlist until you explicitly approve them.

The YouTube Direct Lite platform doesn’t require any server-side code to be configured or deployed. As the moderator, you will see a playlist of videos waiting for your approval. The videos you approve will be added to your channel.

How to start using the Android application

1) Register your Android app.
2) Enable the YouTube Data API v3 and Google+ API in your API Console.
3) Include the Google Play Services library in your project to build this application.
4) Plug your Playlist Id into and Android API Key into

Main Activity     YouTube player     Upload Service

How to start using the iOS application

1) Plug your Playlist Id, Client ID and Client Secret into Utils.h.
2) Install the Google Client Library.
3) Run the sample.

Uploads Playlist     iFrame Player     YouTube Upload

Open-sourced to reference best practices of YouTube APIs on Android and iOS

The YouTube Direct Lite apps (Android, iOS) are open-source projects, and you are more than welcome to customize them for your needs. You can also contribute back to the projects with bug reports, feature requests, or merge requests.

The Android application uses the YouTube Data API v3, the YouTube Android Player API, YouTube Resumable Uploads, Google Play Services, and the Google+ API.

In addition to Android best practices for the YouTube APIs, this project follows the design and development guidelines for Android. This project adheres to Holo style, typography, 48dp rhythm, iconography and uses IntentService, BigPictureStyle notification, and GoogleAccountCredential.

iOS application uses the YouTube Data API v3, the YouTube iFrame Player API, and YouTube Resumable Uploads.

In addition, in these videos, we talk about the philosophy we followed in building these apps and a few best practices for the YouTube APIs, Android, and iOS development.

This app is still experimental, so stay tuned here and subscribe to the YouTube for Developers channel to keep up on the latest.

Ibrahim Ulukaya, YouTube API Team

Respecting Audio Focus

Posted by Kristan Uccello, Google Developer Relations

It’s rude to talk during a presentation: it disrespects the speaker and annoys the audience. If your application doesn’t respect the rules of audio focus, then it’s disrespecting other applications and annoying the user. If you have never heard of audio focus, you should take a look at the Android developer training material.

With multiple apps potentially playing audio it's important to think about how they should interact. To avoid every music app playing at the same time, Android uses audio focus to moderate audio playback—your app should only play audio when it holds audio focus. This post provides some tips on how to handle changes in audio focus properly, to ensure the best possible experience for the user.

Requesting audio focus

Audio focus should not be requested when your application starts (don’t get greedy), instead delay requesting it until your application is about to do something with an audio stream. By requesting audio focus through the AudioManager system service, an application can use one of the AUDIOFOCUS_GAIN* constants (see Table 1) to indicate the desired level of focus.

Listing 1. Requesting audio focus.

1. AudioManager am = (AudioManager) mContext.getSystemService(Context.AUDIO_SERVICE);
3.  int result = am.requestAudioFocus(mOnAudioFocusChangeListener,
4.    // Hint: the music stream.
5.    AudioManager.STREAM_MUSIC,
6.    // Request permanent focus.
7.    AudioManager.AUDIOFOCUS_GAIN);
8.  if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
9.    mState.audioFocusGranted = true;
10. } else if (result == AudioManager.AUDIOFOCUS_REQUEST_FAILED) {
11.   mState.audioFocusGranted = false;
12. }

In line 7 above, you can see that we have requested permanent audio focus. An application could instead request transient focus using AUDIOFOCUS_GAIN_TRANSIENT which is appropriate when using the audio system for less than 45 seconds.

Alternatively, the app could use AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK, which is appropriate when the use of the audio system may be shared with another application that is currently playing audio (e.g. for playing a "keep it up" prompt in a fitness application and expecting background music to duck during the prompt). The app requesting AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK should not use the audio system for more than 15 seconds before releasing focus.
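As a sketch of that fitness-prompt pattern (reusing the am and mOnAudioFocusChangeListener objects from Listing 1; playPrompt() is a hypothetical helper, not an Android API):

```java
// Sketch: request ducking focus for a short prompt, then release it.
int result = am.requestAudioFocus(mOnAudioFocusChangeListener,
    AudioManager.STREAM_MUSIC,
    AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK);
if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    playPrompt(); // hypothetical helper that plays the short audio cue
}
// When the prompt finishes, give focus back so background music un-ducks.
am.abandonAudioFocus(mOnAudioFocusChangeListener);
```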

Handling audio focus changes

In order to handle audio focus change events, an application should create an instance of OnAudioFocusChangeListener. In the listener, the application will need to handle the AUDIOFOCUS_GAIN* and AUDIOFOCUS_LOSS* events (see Table 1). It should be noted that AUDIOFOCUS_GAIN has some nuances which are highlighted in Listing 2, below.

Listing 2. Handling audio focus changes.

1. mOnAudioFocusChangeListener = new AudioManager.OnAudioFocusChangeListener() {
3. @Override
4. public void onAudioFocusChange(int focusChange) {
5.   switch (focusChange) {
6.   case AudioManager.AUDIOFOCUS_GAIN:
7.     mState.audioFocusGranted = true;
9.     if (mState.released) {
10.      initializeMediaPlayer();
11.    }
13.    switch (mState.lastKnownAudioFocusState) {
14.    case UNKNOWN:
15.      if (mState.state == PlayState.PLAY && !mPlayer.isPlaying()) {
16.        mPlayer.start();
17.      }
18.      break;
19.    case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:
20.      if (mState.wasPlayingWhenTransientLoss) {
21.        mPlayer.start();
22.      }
23.      break;
24.    case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
25.      restoreVolume();
26.      break;
27.    }
29.    break;
30.  case AudioManager.AUDIOFOCUS_LOSS:
31.    mState.userInitiatedState = false;
32.    mState.audioFocusGranted = false;
33.    teardown();
34.    break;
35.  case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:
36.    mState.userInitiatedState = false;
37.    mState.audioFocusGranted = false;
38.    mState.wasPlayingWhenTransientLoss = mPlayer.isPlaying();
39.    mPlayer.pause();
40.    break;
41.  case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
42.    mState.userInitiatedState = false;
43.    mState.audioFocusGranted = false;
44.    lowerVolume();
45.    break;
46.  }
47.  mState.lastKnownAudioFocusState = focusChange;
48. }
49. };

AUDIOFOCUS_GAIN is used in two distinct scopes of an application's code. First, it can be used when registering for audio focus, as shown in Listing 1. This does NOT translate to an event for the registered OnAudioFocusChangeListener; on a successful audio focus request, the listener will NOT receive an AUDIOFOCUS_GAIN event for the registration.

AUDIOFOCUS_GAIN is also used in the implementation of an OnAudioFocusChangeListener as an event condition. As stated above, the AUDIOFOCUS_GAIN event will not be triggered on audio focus requests. Instead the AUDIOFOCUS_GAIN event will occur only after an AUDIOFOCUS_LOSS* event has occurred. This is the only constant in the set shown in Table 1 that is used in both scopes.

There are four cases that need to be handled by the focus change listener. When the application receives an AUDIOFOCUS_LOSS this usually means it will not be getting its focus back. In this case the app should release assets associated with the audio system and stop playback. As an example, imagine a user is playing music using an app and then launches a game which takes audio focus away from the music app. There is no predictable time for when the user will exit the game. More likely, the user will navigate to the home launcher (leaving the game in the background) and launch yet another application or return to the music app causing a resume which would then request audio focus again.

However another case exists that warrants some discussion. There is a difference between losing audio focus permanently (as described above) and temporarily. When an application receives an AUDIOFOCUS_LOSS_TRANSIENT, the behavior of the app should be that it suspends its use of the audio system until it receives an AUDIOFOCUS_GAIN event. When the AUDIOFOCUS_LOSS_TRANSIENT occurs, the application should make a note that the loss is temporary, that way on audio focus gain it can reason about what the correct behavior should be (see lines 13-27 of Listing 2).

Sometimes an app loses audio focus (receives an AUDIOFOCUS_LOSS) and the interrupting application terminates or otherwise abandons audio focus. In this case the last application that had audio focus may receive an AUDIOFOCUS_GAIN event. On the subsequent AUDIOFOCUS_GAIN event the app should check whether it is receiving the gain after a temporary loss, and can thus resume use of the audio system, or whether it is recovering from a permanent loss and should set up for playback.

If an application will only be using the audio capabilities for a short time (less than 45 seconds), it should use an AUDIOFOCUS_GAIN_TRANSIENT focus request and abandon focus after it has completed its playback or capture. Audio focus is handled as a stack on the system — as such the last process to request audio focus wins.

When audio focus has been gained this is the appropriate time to create a MediaPlayer or MediaRecorder instance and allocate resources. Likewise when an app receives AUDIOFOCUS_LOSS it is good practice to clean up any resources allocated. Gaining audio focus has three possibilities that also correspond to the three audio focus loss cases in Table 1. It is a good practice to always explicitly handle all the loss cases in the OnAudioFocusChangeListener.
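Listing 2 calls initializeMediaPlayer() and teardown() without showing them; one plausible sketch of those helpers (our own illustration, not the listing's actual implementation) is:

```java
// Sketch: allocate the player on focus gain, free it on permanent loss.
private void initializeMediaPlayer() {
    mPlayer = new MediaPlayer();
    mPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
    // ... setDataSource()/prepare() as appropriate for the app ...
    mState.released = false;
}

private void teardown() {
    if (mPlayer != null) {
        mPlayer.stop();
        mPlayer.release(); // free codec and audio resources
        mPlayer = null;
    }
    mState.released = true;
}
```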

Table 1. Audio focus gain and loss implication.


Note: AUDIOFOCUS_GAIN is used in two places. When requesting audio focus it is passed in as a hint to the AudioManager and it is used as an event case in the OnAudioFocusChangeListener. The gain events highlighted in green are only used when requesting audio focus. The loss events are only used in the OnAudioFocusChangeListener.

Table 2. Audio stream types.

Stream Type Description
STREAM_ALARM The audio stream for alarms
STREAM_DTMF The audio stream for DTMF Tones
STREAM_MUSIC The audio stream for "media" (music, podcast, videos) playback
STREAM_NOTIFICATION The audio stream for notifications
STREAM_RING The audio stream for the phone ring
STREAM_SYSTEM The audio stream for system sounds

An app will request audio focus (see an example in the sample source code linked below) from the AudioManager (Listing 1, line 1). The three arguments it provides are an audio focus change listener object (optional), a hint as to what audio channel to use (Table 2, most apps should use STREAM_MUSIC) and the type of audio focus from Table 1, column 1. If audio focus is granted by the system (AUDIOFOCUS_REQUEST_GRANTED), only then handle any initialization (see Listing 1, line 9).

Note: The system will not grant audio focus (AUDIOFOCUS_REQUEST_FAILED) if there is a phone call currently in process and the application will not receive AUDIOFOCUS_GAIN after the call ends.

What an application should do when its OnAudioFocusChangeListener receives an onAudioFocusChange() event is summarized in Table 3.

In the cases of losing audio focus be sure to check that the loss is in fact final. If the app receives an AUDIOFOCUS_LOSS_TRANSIENT or AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK it can hold onto the media resources it has created (don’t call release()) as there will likely be another audio focus change event very soon thereafter. The app should take note that it has received a transient loss using some sort of state flag or simple state machine.

If an application were to request permanent audio focus with AUDIOFOCUS_GAIN and then receive an AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK an appropriate action for the application would be to lower its stream volume (make sure to store the original volume state somewhere) and then raise the volume upon receiving an AUDIOFOCUS_GAIN event (see Figure 1, below).
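Listing 2 likewise references lowerVolume() and restoreVolume(); a minimal sketch using MediaPlayer.setVolume() (the 0.1f duck level is an arbitrary choice, and mPlayer is the app's MediaPlayer from Listing 2) might look like:

```java
// Sketch: duck and restore playback volume on the MediaPlayer itself.
private static final float DUCK_VOLUME = 0.1f; // arbitrary duck level

private void lowerVolume() {
    if (mPlayer != null) {
        mPlayer.setVolume(DUCK_VOLUME, DUCK_VOLUME); // left, right channels
    }
}

private void restoreVolume() {
    if (mPlayer != null) {
        mPlayer.setVolume(1.0f, 1.0f); // back to full stream volume
    }
}
```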

Table 3. Appropriate actions by focus change type.

Focus Change Type Appropriate Action
AUDIOFOCUS_GAIN Gain event after loss event: Resume playback of media unless other state flags set by the application indicate otherwise. For example, the user paused the media prior to loss event.
AUDIOFOCUS_LOSS Stop playback. Release assets.
AUDIOFOCUS_LOSS_TRANSIENT Pause playback and keep a state flag that the loss is transient so that when the AUDIOFOCUS_GAIN event occurs you can resume playback if appropriate. Do not release assets.
AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK Lower volume or pause playback keeping track of state as with AUDIOFOCUS_LOSS_TRANSIENT. Do not release assets.

Conclusion and further reading

Understanding how to be a good audio citizen application on an Android device means respecting the system's audio focus rules and handling each case appropriately. Try to make your application behave in a consistent manner and not negatively surprise the user. There is a lot more that can be talked about within the audio system on Android and in the material below you will find some additional discussions.

Example source code is available here:

Africa Connected: Tell us your web success story

Every day, the web is changing lives in Africa. In the next five years, 7 out of the world’s 10 fastest growing economies will be from Africa. The web is playing an important part in this development. Today we’re launching a new initiative called ‘Africa Connected: Success stories powered by the web’, and we need your help. Have you or has someone you know embraced the web and Google’s tools? Has it transformed your life, your passion or your business? If so, we want to hear from you!

The five most inspiring stories stand to win prizes of $25,000 each, and the winners will also have the opportunity to work with a Google champion to help them make their venture even more awesome.   

Looking across the web in Africa, we’re already seeing awe-inspiring success stories like those of Just A Band, a music & arts collective from Kenya, or Chike, a businessman and his family from Nigeria, who are the creators of the popular Afrinolly mobile app, which already has over 3 million downloads. Other inspiring examples include Naa, a young jewelry designer from Ghana; Noel, a citizen journalist from Togo; Mdu, a young South African animator; and Asurf, a self-taught filmmaker from Nigeria. All of these web heroes have used the web and Google tools to develop and scale their ideas.

Do you have a healthy disregard for the impossible? Are you using the web and technology to do cool and extraordinary things? Whether you’re a photographer, an entrepreneur, a fashion designer or a community activist, this is an opportunity to share your story with the world.   

Categories for contest entries include:
  • Education
  • Entertainment/Arts/Sports
  • Technology 
  • Community and NGOs
  • Small Business
20 semi-finalists will be selected to take part in an interview and to produce a short promotional video. A judging panel made up of Googlers and external judges will then determine the 10 finalists. Submissions are open from August 27, 2013 to October 31, 2013. The competition will run until February 2014 when the winners will be announced. 

Even if you do not yet have a story to share, you can learn more about the web and Google through the Africa Connected platform. We're looking forward to showcasing your amazing web success story! 

For more information and to enter the Africa Connected contest, visit

Posted by Affiong Osuchukwu, Country Marketing Manager, Google Nigeria


Africa Connected: Tell us your web success story

Every day, the web is changing lives in Africa. In the next five years, seven of the world's ten fastest-growing economies will be African. The web plays an essential role in this development. Today we're launching a new initiative, "Africa Connected: Success stories powered by the web", and we need your help. Have you, or has someone you know, embraced the web and Google's tools? Has it transformed your life, your passion, or your business? If so, we want to hear your story!

The five most inspiring stories will each win a prize of $25,000, and the winners will also have the opportunity to work with a Google champion to make their venture even more exciting.

Looking across the African web, we have already spotted inspiring stories such as those of Just A Band, a Kenyan music and arts collective, or Chike, a businessman living in Nigeria with his family, who created the Afrinolly mobile app, already downloaded more than 3 million times. Other promising examples include Naa, a young jewelry designer from Ghana; Noel, a citizen journalist from Togo; Mdu, a young South African animator; and Asurf, a self-taught filmmaker from Nigeria. All of these web heroes have used Google tools to develop and scale their ideas.

Do you have a healthy disregard for the impossible? Are you using the web and technology to do extraordinary things? Whether you're a photographer, an entrepreneur, a fashion designer, or a community activist, this is your opportunity to share your story with the whole world.

The contest includes several categories:

  • Education
  • Entertainment/Arts/Sports
  • Technology
  • Communities and NGOs
  • Small Business
Twenty semi-finalists will be selected to take part in an interview and produce a short promotional video. A judging panel made up of Googlers and external judges will then select the ten finalists. Entries must reach us between August 27, 2013 and October 31, 2013. The competition will end in February 2014, when the winners will be announced.

Even if you don't yet have a story to tell, you can learn more about the web and Google through the Africa Connected platform. We look forward to hearing your web success story!

For more information and to enter the Africa Connected contest, visit

Posted by Affiong Osuchukwu, Country Marketing Manager, Google Nigeria

Joining a moment in history through the modern web

Nearly 50 years ago, Dr. Martin Luther King Jr. delivered a stirring speech on the steps of the Lincoln Memorial with the words “I have a dream.” Today, we’re sharing a new way to take part in this historic moment through a web experience developed by our friends at Organic and Unit9 for the National Park Foundation.

Called “March on Washington,” the experience invites you to relive that moment in time by listening to an original recording of Dr. King’s words accompanied by immersive photography from the event itself.

One of the most powerful abilities of the web is that it connects people from all over the world in new ways. In “March on Washington,” you can also virtually join this historic event by recording yourself reciting Dr. King’s words. Then, you can play back other participants’ recordings as a crowd-sourced narrative of voices, hearing the timeless message repeated back from people all over the world.

We’re excited to see the modern web enable experiences like “March on Washington” that bring together people and history in new, powerful ways. Head over to on a laptop, phone or tablet to check it out.

Posted by Max Heinritz, Associate Product Manager & Modern Marcher

(Cross-posted from the Chrome blog)

Map of the Week: Orbitz

In today’s guest blog post, we hear from Monika Szymanski and Mike Kelley, of Orbitz' Android engineering team, who recently migrated from version 1 to version 2 of the Google Maps Android API.

About Orbitz
Nearly 30% of hotel bookings are now made via mobile devices, fueled in part by the growth of the Android platform. The recently released 3rd-generation update of the Orbitz - Flights, Hotels, Cars app for Android brings major speed and ease of use improvements along with the latest Android UI design patterns to the app. The Google Maps Android API v2 is also integrated into the hotel search experience. Read on to find out how we did it, with tips and sample code along the way.

Migrating from v1 to v2 of the Google Maps Android API 
While users of the Android app will notice some changes to the app’s user interface for maps, the changes to our code are more than skin deep. New classes offered in v2 of the Google Maps Android API like MapFragment and SupportMapFragment, the transition from ItemizedOverlays to Markers, and the addition of a well-supported info window pattern have made including a Google Map in an Android app much easier.

Say hello to the 3rd generation of Orbitz - Flights, Hotels, Cars app, using the Google Maps Android API v2

Featuring Fragments 
Prior to the introduction of MapFragment (and SupportMapFragment)  in v2, we had to write a lot of code to manually show/hide the map view in our app. Only one instance of MapView could be created per activity, so we had to be overly clever about persisting that instance in our Activity. Lack of proper Fragment support was a common pain point for developers integrating v1 of the Google Maps Android API in their application.

When Fragment support was added in v2, we essentially rewrote our map code to take advantage of the new features of MapFragment. Let’s start by taking a look at our hotel results Activity layout:

You’ll notice that we’re not including the actual fragment in the layout - we add the Fragment at runtime, because we don’t want to pay the cost of the fragment transaction and add all the markers on the map, unless the user requests it.
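A runtime fragment transaction along these lines would do it (a sketch: R.id.map_container and mMapFragment are assumed names, not necessarily Orbitz's actual identifiers):

```java
// Sketch: add the map fragment lazily, only when the user asks for it,
// to avoid paying the transaction and marker-setup cost up front.
if (mMapFragment == null) {
    mMapFragment = SupportMapFragment.newInstance();
    getSupportFragmentManager().beginTransaction()
        .add(R.id.map_container, mMapFragment)
        .commit();
}
```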

You’ll also notice a bit of a hack at the bottom of the layout. In testing, we found that the MapFragment would leave a black box artifact on the screen on certain devices when the user opened our sliding menu navigation. Adding a simple FrameLayout “above” the map seems to fix the problem.

Extending SupportMapFragment makes it much easier to separate the map display logic from our Activity and list fragment. Our SupportMapFragment (and its inner classes) is responsible for:
  • Adding markers representing each available hotel 
  • Customizing the GoogleMap UI options 
  • Centering and animating the map to show the added markers 
  • Showing an info window when a marker is clicked 
  • Launching an Intent to display more details when an info window is clicked 
Next up, we’ll talk about how we add markers to the map and keep memory usage down.

Managing Markers 
One of the challenges in migrating from v1 to v2 of the Google Maps Android API was figuring out the best way to know which hotel’s info to display when a marker is tapped. To solve this, we place each <Marker, Hotel> pair in a HashMap when adding the markers to the Google Map. Later, we can use this HashMap to look up a marker's corresponding hotel info.

The code snippets below illustrate how we do it.
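The snippets from the original post aren't reproduced here, but the lookup pattern can be sketched framework-free. In this sketch, Marker and Hotel are minimal stand-ins for the Maps API's Marker class and Orbitz's hotel model; the real code keys the HashMap by the Marker instance returned from GoogleMap.addMarker().

```java
import java.util.HashMap;
import java.util.Map;

// Framework-free sketch of the Marker -> Hotel lookup pattern described above.
// Marker and Hotel are stand-ins for the Maps API's Marker and a hotel model.
public class MarkerLookupSketch {

    static class Marker {
        final String id;
        Marker(String id) { this.id = id; }
    }

    static class Hotel {
        final String name;
        final double pricePerNight;
        Hotel(String name, double pricePerNight) {
            this.name = name;
            this.pricePerNight = pricePerNight;
        }
    }

    private final Map<Marker, Hotel> markerToHotel = new HashMap<>();

    // Called once per search result while populating the map;
    // stands in for GoogleMap.addMarker() plus the HashMap put.
    Marker addHotelMarker(Hotel hotel) {
        Marker marker = new Marker("m" + markerToHotel.size());
        markerToHotel.put(marker, hotel);
        return marker;
    }

    // Called from the InfoWindowAdapter when a marker is tapped.
    Hotel hotelFor(Marker marker) {
        return markerToHotel.get(marker);
    }

    public static void main(String[] args) {
        MarkerLookupSketch sketch = new MarkerLookupSketch();
        Marker m = sketch.addHotelMarker(new Hotel("Hypothetical Inn", 129.0));
        System.out.println(sketch.hotelFor(m).name); // prints "Hypothetical Inn"
    }
}
```

Because the default Object identity is used for Marker equality, each marker placed on the map maps to exactly one hotel, mirroring how the real API hands back a distinct Marker per addMarker() call.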

This HashMap allows us to look up the selected hotel in our InfoWindowAdapter, enabling us to display more information about it.

We place quite a few markers on the map for hotel results, and each marker can have a different custom image. It's really easy to run out of memory, and we hit quite a few OutOfMemoryErrors early in development. To manage memory more effectively, we made sure we didn't create a new Bitmap and BitmapDescriptor for every marker placed on the map, and we recycled those resources as soon as we were done with them.
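One way to avoid a fresh allocation per marker is to build each distinct icon once and hand the same cached instance to every marker that uses it. The sketch below shows the pattern with hypothetical stand-ins (IconCache and the String "descriptor" type are not real Android classes; the real code deals in Bitmap and BitmapDescriptor objects).

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the reuse pattern described above: decode each marker icon once,
// then reuse the cached instance instead of creating a new Bitmap and
// BitmapDescriptor per marker.
public class IconCache {

    private final Map<String, String> cache = new HashMap<>();
    private int loads = 0; // counts how many icons were actually built

    // Stand-in for decoding a Bitmap and wrapping it in a BitmapDescriptor.
    private String loadIcon(String iconName) {
        loads++;
        return "descriptor:" + iconName;
    }

    String iconFor(String iconName) {
        return cache.computeIfAbsent(iconName, this::loadIcon);
    }

    int loadCount() { return loads; }

    public static void main(String[] args) {
        IconCache icons = new IconCache();
        // 100 markers sharing 2 icon styles cost only 2 loads.
        for (int i = 0; i < 100; i++) {
            icons.iconFor(i % 2 == 0 ? "hotel_pin" : "hotel_pin_selected");
        }
        System.out.println(icons.loadCount()); // prints 2
    }
}
```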

When the user taps a marker, we want to show more information; that’s where info windows come in handy. 

Introducing Info Windows 
Aside from simply viewing the location of all available hotels on a map, users are typically interested in the name and price of the hotel. The architecture for implementing this information window changed considerably from version 1 to version 2 of the Google Maps Android API. 

Before: Info windows in the Google Maps Android API v1
When using v1 of the Google Maps Android API, our app displayed more detailed hotel information in a custom info view when the user tapped on a hotel marker. That custom view displayed the hotel name and price, and triggered a custom animation when the view was added to the screen. This animation made it appear that the view was growing from inside the pin on the map.

We achieved this effect by setting the LayoutParams to MapView.LayoutParams.BOTTOM_CENTER and MapView.LayoutParams.MODE_MAP, which centered the bottom of the custom view on top of the tapped hotel marker.

With the introduction of the Google Maps Android API v2, MapView.LayoutParams.MODE_MAP was removed, so we explored alternative treatments to show the hotel information when the user clicks on a result. For our purposes, the best alternative was to use the new info window interface. 

After: Info windows in the Google Maps Android API v2
Creating an InfoWindowAdapter is pretty straightforward. The API provides two ways to populate the info window: either by supplying just the contents (shown in the default window style) or by creating a full View. Because we wanted a custom window background loaded from a 9-patch, we opted to build a complete View for the info window, overriding getInfoContents() to return null and returning our custom View from getInfoWindow().

Here’s a sample of our code:
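The original snippet isn't reproduced here, but the getInfoWindow()/getInfoContents() split can be sketched with simplified stand-in types. The interface below mirrors the shape of the Maps API's GoogleMap.InfoWindowAdapter contract; Marker, View, and HotelInfoWindowAdapter are stand-ins, not the real Android or Orbitz classes.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of the v2 info window pattern described above: returning
// a fully built view from getInfoWindow() replaces the default window (and
// its background) entirely, while getInfoContents() returning null is ignored.
public class InfoWindowSketch {

    static class Marker { }

    static class View {
        String renderedText;
    }

    interface InfoWindowAdapter {
        View getInfoWindow(Marker marker);   // full window, incl. background
        View getInfoContents(Marker marker); // contents inside default window
    }

    static class Hotel {
        final String name;
        Hotel(String name) { this.name = name; }
    }

    static class HotelInfoWindowAdapter implements InfoWindowAdapter {
        private final Map<Marker, Hotel> markerToHotel;

        HotelInfoWindowAdapter(Map<Marker, Hotel> markerToHotel) {
            this.markerToHotel = markerToHotel;
        }

        @Override
        public View getInfoWindow(Marker marker) {
            Hotel hotel = markerToHotel.get(marker);
            View view = new View(); // real code inflates a custom 9-patch layout
            view.renderedText = hotel.name;
            return view;
        }

        @Override
        public View getInfoContents(Marker marker) {
            // Unused: the custom window from getInfoWindow() takes precedence.
            return null;
        }
    }

    public static void main(String[] args) {
        Map<Marker, Hotel> lookup = new HashMap<>();
        Marker m = new Marker();
        lookup.put(m, new Hotel("Hypothetical Inn"));
        HotelInfoWindowAdapter adapter = new HotelInfoWindowAdapter(lookup);
        System.out.println(adapter.getInfoWindow(m).renderedText);
    }
}
```

Note how the adapter leans on the Marker-to-Hotel HashMap described earlier to find the right hotel for the tapped marker.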

We could further simplify this code by having our HotelView take a Hotel model as a parameter in the constructor. 

A caveat with info windows is that even though they are populated by the returned View, the info window is not treated like a *live* View object. The system will call the view’s draw() method only once, then use that cached rendering as the user continues to interact with the map. Our existing animation didn’t work in the Google Maps Android API v2, but we decided to be consistent with the platform and remove the animation rather than try to hack around this limitation.

We <3 Google Maps Android API v2
Upgrading from version 1 to version 2 of the Google Maps Android API was virtually painless and fun to do! The introduction of MapFragment helped us separate the map display logic from the rest of the code and made code reuse much easier. Using custom info views was very straightforward with the new info window interface. We look forward to adding even more Google Map features to our app.

Posted by Monica Tran, Maps Developer Marketing

Monika Szymanski is a Lead Engineer on the Android team at Orbitz, where she works on apps that are friendly, fast, and easy to use. In her free time, she enjoys outdoors, running, red wine, and anything chocolate.

Mike Kelley is a Software Engineer at Orbitz, where he works on Android travel tools to help people travel smarter. He's a Michigan grad, transportation and technology enthusiast and craft beer buff. Some of Mike's ideas and projects live online at

Orbitz Worldwide (NYSE: OWW) is a leading global online travel company that uses innovative technology to enable leisure and business travelers to search for, plan and book a broad range of travel products and services including airline tickets, hotels, car rentals, cruises, and vacation packages. Orbitz Worldwide owns a portfolio of consumer brands that includes Orbitz, CheapTickets, ebookers, and HotelClub. Also within the Orbitz Worldwide family, Orbitz Partner Network delivers private label travel solutions to a broad range of partners including many of the world's largest airlines, and Orbitz for Business delivers managed corporate travel solutions for corporations.

Crowdsourcing Solutions in the AdWords Community for the Back-to-School Season

Back-to-school season — the second biggest retail event of the year — is upon us, and over on the AdWords Community, we’re getting back to the ABCs of AdWords with our “Boost Your Account” optimization series.

In the series, you can join users from the AdWords Community forum in sharing solutions with businesses looking for help addressing challenges with their current advertising plans.

Got ideas for improving conversion rate on video campaigns or selecting the right landing page for your ad? Share them with Bill directly on this Community thread and learn more about his unique challenges in this Hangout on Air.

Check out another business featured in the series here and come back to the AdWords Community throughout this month for new installments. If you're interested in having your business featured in the series, please fill out the interest form here.

Here, there and everywhere: Google Keep reminds you at the right time

Notes are a good way to keep track of all you have to do, but most of us need a little nudge now and then. Google Keep can remind you of important tasks and errands at just the right time and place. For example, Keep works with Google Now to remind you of your grocery list when you walk into your favorite grocery store, and nudges you on Thursday night to take out the trash.

To get started, select the “Remind me” button from the bottom of any note and choose the type of reminder you want to add. You can add time-based reminders for a specific date and time, or a more general time of day, like tomorrow morning. Adding a location reminder is incredibly easy too—as soon as you start typing Google Keep suggests places nearby.

Of course, sometimes plans change. If you get a reminder you’re not ready to deal with, simply snooze it to a time or place that’s better for you.


It’s now even easier to get to all of your notes using the new navigation drawer, which includes a way to view all of your upcoming reminders in one place. And for people who want more separation between their home and work lives, the drawer also lets you easily switch between your accounts. 

And finally, we've made it easier to add your existing photos to a Google Keep note on Android. When you tap the camera icon you can choose between taking a new photo or adding one you already have from Gallery.

The new update is gradually rolling out in Google Play, and available now on the web at and in the Chrome App.

Posted by Erin Rosenthal, Product Manager