Monthly Archives: August 2017

Launchpad Accelerator is open to more countries around the world! Apply now.

Posted by Roy Glasberg, Global Lead, Launchpad Program & Accelerator

Launchpad Accelerator gives us an opportunity to work with and empower amazing developers, who are solving major challenges all around the world -- whether it's streamlining digital commerce across Africa, providing access to multimedia tools that support special needs education, or using AI to simplify business operations.

That's why we're doubling down on our efforts: starting today, we're opening applications for the next class of the program to more countries for the first time. Here's the full list of the new additions:

  • Africa: Algeria, Egypt, Ghana, Morocco, Tanzania, Tunisia & Uganda
  • Asia: Bangladesh, Myanmar, Pakistan & Sri Lanka
  • Europe: Estonia, Romania, Ukraine, Belarus & Russia
  • Latin America: Costa Rica, Panama, Peru & Uruguay

They'll be joined by our larger list of countries that are already part of the program, including: Argentina, Brazil, Chile, Colombia, Czech Republic, Hungary, India, Indonesia, Kenya, Malaysia, Mexico, Nigeria, Philippines, Poland, South Africa, Thailand, and Vietnam.

The application process for the equity-free program will end on October 2, 2017 at 9AM PT. Later in the year, selected startups will be invited to the Google Developers Launchpad Space in San Francisco for two weeks of all-expenses-paid training.

What are the benefits?

The training at Google HQ includes intensive mentoring from 20+ Google teams, as well as expert mentors from top technology companies and VCs in Silicon Valley. Participants receive equity-free support, credits for Google products, and PR support, and continue to work closely with Google back in their home countries during the 6-month program. Hear from some alumni about their experiences here.

What do we look for when selecting startups?

Each startup that applies to the Launchpad Accelerator is considered holistically and with great care. Below are general guidelines behind our process to help you understand what we look for in our candidates.

All startups in the program must:

  • Be a technological startup.
  • Be targeting their local markets.
  • Have proven product-market fit (beyond ideation stage).
  • Be based in the countries listed above.

Additionally, we are interested in what kind of startup you are. We also consider:

  • The problem you are trying to solve. How does it create value for users? How are you addressing a real challenge for your home city, country or region?
  • Does your management team have a leadership mindset and the drive to become an influencer?
  • Will you share what you learn in Silicon Valley for the benefit of other startups in your local ecosystem?

If you're based outside of these countries, stay tuned, as we expect to add more countries to the program in the future.

We can't wait to hear from you and see how we can work together to improve your business.

Participants from Class 4

Travel photography 101 with #teampixel

We’ve traveled far and wide with #teampixel this summer but not as far as Jeremy Foster, this week’s Pixel expert. He’s been globetrotting for the past seven years and offers some sage advice on taking photos with your Pixel. So whether you’re a weekend warrior or a seasoned explorer, check out his tips for travel photography:

Tip #1: All about HDR

Use for: Those glorious sunsets you only seem to find when traveling.

High Dynamic Range (HDR) is best utilized when you have an uneven exposure in your photo—that is to say, when some of the photo is bright and some is dark (for example, a landscape shot with a bright sky and dark foreground). When you activate HDR, your Pixel will take three photos in burst mode, at different exposures, and blend them all together for a well-balanced photo.

Tip #2: Turn up the volume

Use for: Street photography in crowded places. 

Hardware buttons (like the volume button) are easier to access than software buttons (like in your camera app). For a more discreet shooting experience, skip the on-screen shutter button and opt to use the volume button instead. You’ll also have a sturdier grip on the phone, which means there’s less chance for motion blur in your photos. Pro tip: If the phone is in sleep mode, double-click the power button to open the camera and slide your finger over by an inch to the volume button to snap a photo! You don’t even need to look at the screen.

Tip #3: Let’s get down to details

Use when: Your photo of that gorgeous mountain range doesn’t look like the real thing.

A camera can capture only a fraction of the detail the human eye sees, but editing your photos can bring more of that stunning landscape into view. Tap the “Auto” filter for the Pixel’s best guess, or, for more fine-tuned control, tap the slider icon to adjust Light, Color, and Pop to create your desired effect. Want to get even more granular? Tap the down arrows next to Light and Color for full control over exposure, contrast, highlights, shadows, saturation, skin tone, and more.

Tip #4: Anyone can be a videographer

Use when: pictures just won’t do.

The Google Pixel can capture 4K video, one of the sharpest video formats available on a smartphone. Go into your camera settings and make sure your back camera video resolution is set to “UHD 4K (30 fps)”—that stands for “Ultra High Definition 4K” at 30 frames per second. Not bad for a piece of hardware that sits in your back pocket.

And here’s another weekly roundup of our favorite photos! Keep crushing it #teampixel ✌️

Transformer: A Novel Neural Network Architecture for Language Understanding



Neural networks, in particular recurrent neural networks (RNNs), are now at the core of the leading approaches to language understanding tasks such as language modeling, machine translation and question answering. In Attention Is All You Need we introduce the Transformer, a novel neural network architecture based on a self-attention mechanism that we believe to be particularly well-suited for language understanding.

In our paper, we show that the Transformer outperforms both recurrent and convolutional models on academic English to German and English to French translation benchmarks. On top of higher translation quality, the Transformer requires less computation to train and is a much better fit for modern machine learning hardware, speeding up training by up to an order of magnitude.
BLEU scores (higher is better) of single models on the standard WMT newstest2014 English to German translation benchmark.
BLEU scores (higher is better) of single models on the standard WMT newstest2014 English to French translation benchmark.
Accuracy and Efficiency in Language Understanding
Neural networks usually process language by generating fixed- or variable-length vector-space representations. After starting with representations of individual words or even pieces of words, they aggregate information from surrounding words to determine the meaning of a given bit of language in context. For example, deciding on the most likely meaning and appropriate representation of the word “bank” in the sentence “I arrived at the bank after crossing the…” requires knowing if the sentence ends in “... road.” or “... river.”

RNNs have in recent years become the typical network architecture for translation, processing language sequentially in a left-to-right or right-to-left fashion. Because they read one word at a time, RNNs must perform multiple steps to make decisions that depend on words far away from each other. Processing the example above, an RNN could only determine that “bank” is likely to refer to the bank of a river after reading each word between “bank” and “river” step by step. Prior research has shown that, roughly speaking, the more such steps decisions require, the harder it is for a recurrent network to learn how to make those decisions.

The sequential nature of RNNs also makes it more difficult to fully take advantage of modern fast computing devices such as TPUs and GPUs, which excel at parallel and not sequential processing. Convolutional neural networks (CNNs) are much less sequential than RNNs, but in CNN architectures like ByteNet or ConvS2S the number of steps required to combine information from distant parts of the input still grows with increasing distance.

The Transformer
In contrast, the Transformer only performs a small, constant number of steps (chosen empirically). In each step, it applies a self-attention mechanism which directly models relationships between all words in a sentence, regardless of their respective position. In the earlier example “I arrived at the bank after crossing the river”, to determine that the word “bank” refers to the shore of a river and not a financial institution, the Transformer can learn to immediately attend to the word “river” and make this decision in a single step. In fact, in our English-French translation model we observe exactly this behavior.

More specifically, to compute the next representation for a given word - “bank” for example - the Transformer compares it to every other word in the sentence. The result of these comparisons is an attention score for every other word in the sentence. These attention scores determine how much each of the other words should contribute to the next representation of “bank”. In the example, the disambiguating “river” could receive a high attention score when computing a new representation for “bank”. The attention scores are then used as weights for a weighted average of all words’ representations which is fed into a fully-connected network to generate a new representation for “bank”, reflecting that the sentence is talking about a river bank.
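As a rough illustration, this comparison step can be sketched as scaled dot-product attention. The sketch below is a simplified, plain-Python version; the real Transformer also applies learned query, key, and value projections and uses multiple attention heads, which are omitted here:

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(vectors):
    """For each word vector, compare it against every word in the
    sentence, turn the scores into weights, and return the
    attention-weighted average as its new representation."""
    d = len(vectors[0])
    outputs = []
    for q in vectors:
        # Dot-product similarity with every word (including itself),
        # scaled by sqrt(dimension), as in scaled dot-product attention.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]
        weights = softmax(scores)  # attention scores, summing to 1
        # Weighted average of all word representations.
        outputs.append([sum(w * v[i] for w, v in zip(weights, vectors))
                        for i in range(d)])
    return outputs

# Toy 2-dimensional "embeddings" for a three-word sentence.
embeddings = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
new_reprs = self_attention(embeddings)
```

In the full model, the weighted average is then fed through a fully-connected network, and the whole step is repeated in parallel for every word.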

The animation below illustrates how we apply the Transformer to machine translation. Neural networks for machine translation typically contain an encoder reading the input sentence and generating a representation of it. A decoder then generates the output sentence word by word while consulting the representation generated by the encoder. The Transformer starts by generating initial representations, or embeddings, for each word. These are represented by the unfilled circles. Then, using self-attention, it aggregates information from all of the other words, generating a new representation per word informed by the entire context, represented by the filled balls. This step is then repeated multiple times in parallel for all words, successively generating new representations.
The decoder operates similarly, but generates one word at a time, from left to right. It attends not only to the other previously generated words, but also to the final representations generated by the encoder.

Flow of Information
Beyond computational performance and higher accuracy, another intriguing aspect of the Transformer is that we can visualize what other parts of a sentence the network attends to when processing or translating a given word, thus gaining insights into how information travels through the network.

To illustrate this, we chose an example involving a phenomenon that is notoriously challenging for machine translation systems: coreference resolution. Consider the following sentences and their French translations:

“The animal didn’t cross the street because it was too tired.” / “L’animal n’a pas traversé la rue parce qu’il était trop fatigué.”
“The animal didn’t cross the street because it was too wide.” / “L’animal n’a pas traversé la rue parce qu’elle était trop large.”
It is obvious to most that in the first sentence pair “it” refers to the animal, and in the second to the street. When translating these sentences to French or German, the translation for “it” depends on the gender of the noun it refers to - and in French “animal” and “street” have different genders. In contrast to the current Google Translate model, the Transformer translates both of these sentences to French correctly. Visualizing what words the encoder attended to when computing the final representation for the word “it” sheds some light on how the network made the decision. In one of its steps, the Transformer clearly identified the two nouns “it” could refer to and the respective amount of attention reflects its choice in the different contexts.
The encoder self-attention distribution for the word “it” from the 5th to the 6th layer of a Transformer trained on English to French translation (one of eight attention heads).
Given this insight, it might not be that surprising that the Transformer also performs very well on the classic language analysis task of syntactic constituency parsing, a task the natural language processing community has attacked with highly specialized systems for decades.
In fact, with little adaptation, the same network we used for English to German translation outperformed all but one of the previously proposed approaches to constituency parsing.

Next Steps
We are very excited about the future potential of the Transformer and have already started applying it to other problems involving not only natural language but also very different inputs and outputs, such as images and video. Our ongoing experiments are accelerated immensely by the Tensor2Tensor library, which we recently open sourced. In fact, after downloading the library you can train your own Transformer networks for translation and parsing by invoking just a few commands. We hope you’ll give it a try, and look forward to seeing what the community can do with the Transformer.

Acknowledgements
This research was conducted by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez and Łukasz Kaiser. Additional thanks go to David Chenell for creating the animation above.

Back to School in the Tar Heel State

This week, North Carolina students headed back to class, and their teachers were ready for them. Beyond arranging their rooms, hosting open houses and preparing lesson plans, teachers are seeking out the tech resources that will make their classrooms better able to equip students with 21st century skills.


Thanks to a partnership with the Kenan Fellows Program at North Carolina State University, two elementary school STEM educators spent this summer as interns at the Charlotte and Raleigh/Durham Google Fiber offices, experiencing what it’s like to work at a tech company and taking new ideas and experiences back to the classroom.


Michelle McElhiney, a 5th grade teacher at Oakhurst STEAM Academy in Charlotte, and Amelia Robinson, 3rd-grade math teacher at Envision Science Academy serving students in Raleigh and Wake Forest, both dedicated their summer to this project.


Their efforts resulted in two extensive STEM-focused projects, featuring new career-connected curricula and "STEAM To-Go" mobile learning labs, or kits. The STEAM To-Go labs pair technology resources like Chromebooks, Makey Makeys, and stop-motion animation tools with lesson ideas and online resources for use in public school classrooms. Teachers can choose between two mobile lab options: "Circuitry and Music" and "Animation and Coding." Each kit comes with a teacher’s guide that helps teachers get up to speed on the technology, offers links to online tools and apps, recommends ways to connect the projects to state-mandated outcomes, and includes "literacy links" to enhance grade-level goals in the classroom.

These Google Fiber funded kits are available at no cost to elementary and middle school teachers at the STEM and STEAM academies in Charlotte Mecklenburg Schools starting in the fall of 2017. STEAM To-Go mobile labs, which include hardware and educator guides, can be borrowed by teachers for one week, allowing educators to integrate them into classroom activities, link them to literacy extensions and align them with grade level standards. The experiences are designed for kids who have little or no coding experience; educators can easily modify the program for students ranging from lower elementary to middle school.


To launch this project, Google Fiber hosted a back to school reception earlier this month at our Charlotte Google Fiber Space for teachers at STEM elementary and middle schools. More than 60 educators and administrators attended to learn about the STEAM To-Go mobile labs, have fun with hands-on STEM activities and celebrate the start of another school year together.



Stay tuned for more news about upcoming teacher leadership trainings in Raleigh and Durham.  Let’s start the school year full STEAM ahead!

By Tia Bethea in Raleigh-Durham and Jess George in Charlotte, Google Fiber Community Impact Managers

Updates to Google Play policy promote standalone Android Wear apps

Posted by Hoi Lam, Lead Developer Advocate, Android Wear
Strava - a standalone wear app available to both Android and iOS users

Android Wear 2.0 represents the latest evolution of the Android Wear platform. It introduced the concept of standalone apps that can connect to the network directly and work independently of a smartphone. This is critical to providing apps not only to our Android users, but also to iOS users - which is increasingly important as we continue to expand our diverse ecosystem of watches and users. In addition, Wear 2.0 brought multi-APK support to Wear apps, which reduces the APK size of your phone apps and makes it possible for iOS users to experience your Wear apps.

Today, we are announcing that multi-APKs will also work for Android Wear 1.0 watches, so you can now reach all of your users without needing to bundle your Wear app within your phone app's APK. Additionally, the Google Play Store policy will change to promote the use of multi-APKs and standalone apps. This covers all types of apps that are designed to run on the watch, including watch faces, complication data providers as well as launchable apps.

Policy change

The policy change will be effective from the 18th of January, 2018. At this time, the following apps will lose the "Enhanced for Android Wear" badge in the Google Play Store and will not be eligible to be listed in the top charts in the Play Store for Android Wear:

  • Mobile apps that support Wear notification enhancements but do not have a separate Wear app.
  • Wear apps that are bundled with mobile apps instead of using multi-APK.

Since multi-APK is now supported by devices running Wear 1.0 and 2.0, developers embedding their Wear app APKs in phone APKs should unbundle their Wear APK and upload it to the Play Store as a multi-APK. This will allow them to continue to qualify for the "Enhanced for Android Wear" badge as well as be eligible to appear in the Android Wear top charts. The two APKs can continue to share the same package name.

In addition to providing top app charts, we periodically put together curated featured collections. To be eligible for selection for these collections, developers will need to make their Wear apps function independently from the phone, as a standalone app. These apps will need to work on watches that are paired with both iOS and Android phones.

What are standalone apps?

Standalone apps are Wear apps that do not require a phone app to run. The app either does not require network access or can access the network directly without the phone app - something that is supported by Android Wear 2.0.

To mark your app as standalone, put the following meta-data tag in the AndroidManifest.xml:

<application>
...
  <meta-data
    android:name="com.google.android.wearable.standalone"
    android:value="true" />
...
</application>

In some rare cases, the user experience may be enhanced by the syncing of data between the phone and watch. For example, a cycling app can use the watch to display the current pace, and measure the user's heart rate, while displaying a map on the phone. In this scenario, we recommend that developers ensure that their Wear apps function without a phone and treat the phone experience as optional as far as the Wear apps are concerned. In these cases, a Wear app is still considered standalone and should be marked as such in its AndroidManifest.xml file.

Wear what you want

From the beginning, Android Wear has been about wear what you want -- the styles, watches, and apps you want to wear. This latest policy change lets you highlight your Android Wear apps, giving users even more choice about what apps they want on their watches.

Automatic protections in Android: Q&A with a security expert

Editor's note: The Android security team works to keep more than two billion users safe, and with the release of Android Oreo, they’ve rolled out some new security protections. We sat down with Adrian Ludwig, Director of Android Security, to learn about his team, their approach to security, and what Oreo’s new protections mean for people who use and love Android.

Keyword: Talk to us a bit about what your team does.

Adrian: We build security features for Android that help keep the whole ecosystem safe. Our software engineers write code that encrypts user data, helps find security bugs faster, prevents bugs from becoming security exploits, and finds applications that are trying to harm users or their information.  

How do you build these protections?

It starts with research. Because security is constantly evolving, our teams have to understand today’s issues, in Android and elsewhere, so we can provide better security now and in the future. Researchers in and out of Google are like detectives: they find new stuff, work to understand it deeply, and share it with the broader security community.

We then use those findings to make our protections stronger. We’re focused on tools like Google Play Protect and efforts like “platform hardening,” incremental protections to the Android platform itself. We’re also starting to apply machine learning to security threats, an early stage effort that we’re really excited about.

The final step is enabling all Android users to benefit from the protections. I’m really proud of the work our team has done with Google Play Protect, for example. Every day, it monitors more than 50 billion apps in Play, other app marketplaces, and across the web for potentially unsafe apps. If it finds any, we’ll prevent people from installing them and sometimes remove them from users’ phones directly. Users don’t need to do anything—this just works, automatically.

What are the challenges to protecting Android?

In security, we often talk about the trade-off between usability and protection. Sometimes, you can protect a device more effectively if there are certain things users can’t do on your device. And security is always much easier when things are predictable: for instance when all of the devices you are protecting are built the same way and can basically do the same thing.

But, Android security is different because the ecosystem is so diverse. The variety of use cases, form factors, and users forces us to be open-minded about how we should secure without limiting Android’s flexibility. We can’t possibly protect Android users with a single safeguard—our diversity of protections reflects the diversity in the Android ecosystem.

What are some of the new ways you’re protecting users in Android Oreo (not in robo-speak, please)?

Hang on, I gotta turn on Google Translate.

There are a … 0101100110 … sorry … a bunch! We’ve invested significantly in making it easier to update devices with security “patches,” fixes for potential safety problems, more commonly known as vulnerabilities. As a sidenote, you may have heard about “exploits.” If a vulnerability is a window, an exploit is a way to climb through it. The vast majority of the time, we’ll patch a vulnerability before anyone can exploit it. We have a project called Treble that makes it easier for us to work with partners and deliver updates to users. We want to close the window (and add some shutters) as quickly as possible.

We’ve also worked to improve verified boot, which confirms the device is in a known good state when it starts up, further hardened the Android kernel, which makes sure that hackers can’t change the way that code executes on a device, and evolved Seccomp which limits the amount of code that is visible to hackers.  Basically, we’re moving all the windows higher so any open ones are harder to climb through.

You announced Google Play Protect earlier this year. Tell us a bit about that and why it’s important for Android users?

For several years, we’ve been building “security services” which periodically check devices for potential security issues, allow Google and/or the user to review the status, and then use that information to protect the device. These services interact with Google Play in real-time to help secure it, hence the name “Google Play Protect.”

Our goal with Google Play Protect is to make sure that every user and every device has constant access to the best protections that Google can provide. Those protections are easy to use (ironically, for many people, Google Play Protect is so easy to use that they didn’t even know it was turned on!) and they benefit from everything Google knows about the security of Android devices.

Google Play Protect isn’t available just for users with Oreo -- it guards any device with Google Play Services running Android Gingerbread or later.

Updates are a challenge with Android, especially in regard to security. Why is that so hard? What are you doing to improve it?

What makes Android so cool and unique—its flexibility and openness—also presents a really big security challenge. There is a broad and diverse range of devices running Android, operated by a complex collection of partners and device manufacturers around the world. It’s our responsibility to make it easy for the entire ecosystem to receive and deploy updates, but the ecosystem has to work together in order to make it happen. One approach to the problem is to make updates easier through technical changes, such as Project Treble. Another is to work with partners to better understand how updates are produced, tested, and delivered to users.  

What’s the toughest part of your job?

Prioritization. Often we need to balance researching super cool, extremely rare issues with more incremental maintenance of our existing systems. It’s really important that we are laser-focused on both; it’s the only way we can protect the entire ecosystem now and longer-term.

What’s your favorite part?

I’m amazed and humbled by how many people use Android as their primary (or only) way to connect to the internet and to the broader world. We’ve still got a ton of work to do, but I’m incredibly proud of the role my team has played in making those connections safe and secure.  

Ok, last question: How do you eat your Oreos?

In one bite. (But I can’t handle the Double Stufs).

Source: Android


Google Play Developer API new fields for In-app Billing information

Posted by Neto Marin, Developer Advocate

We'd like to share some good news about an improvement in the data available via the Google Play Developer API. Starting Monday, Aug 28, the API for Purchases.products and Purchases.subscriptions will be returning a couple of new values:

  • orderId
    • Returned via both the products and subscriptions APIs.
      • For products, this will be the order id present in the purchase.
      • For subscriptions, this will be the order id of the most recent recurring order.
  • New subscription cancelReason: 2. Subscription replaced
    • Returned for subscriptions which were canceled because the user changed subscription plans (e.g. upgrading to a new plan).

This additional data will be automatically returned to you in the JSON responses to your API calls. Please double check your integration to make sure this new field and value will not cause any problems for you.

To view all of the values returned by the APIs, check the Purchases.products and Purchases.subscriptions reference pages.
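As a sketch of how a server-side integration might consume the new fields, the helper below inspects a Purchases.subscriptions JSON response; the function name and sample response are illustrative, not part of the API, and the reason strings are paraphrased from the reference documentation:

```python
# Map cancelReason codes to human-readable descriptions.
CANCEL_REASONS = {
    0: "user canceled",
    1: "canceled by the system (e.g. billing problem)",
    2: "subscription replaced (plan change)",  # new value announced here
}

def describe_subscription(resource: dict) -> str:
    """Summarize a Purchases.subscriptions response, tolerating absent fields."""
    order_id = resource.get("orderId", "unknown")  # new field announced here
    reason = resource.get("cancelReason")
    if reason is None:
        return f"order {order_id}: active"
    return f"order {order_id}: {CANCEL_REASONS.get(reason, 'unknown reason')}"

# Example response fragment carrying both new values:
sample = {"orderId": "GPA.1234-5678-9012-34567", "cancelReason": 2}
print(describe_subscription(sample))
```

Using `.get()` rather than direct key access keeps the code working against older responses that predate the new fields, which is exactly the "double check your integration" advice above.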

Coding gets easier with new series of books on Google Play

Girls Who Code wants to close the gender gap in technology.

In order to do that, they help kids develop an interest in science and technology at an early age. This is where Google Play comes in—we’re putting together a new collection of 13 books on Google Play that will get kids excited about coding.

We’re releasing the first two books today: Girls Who Code: Learn to Code and Change the World explains coding in a way that’s actually easy to understand, and shares real-life stories of women working at places like Pixar and NASA. In Girls Who Code: The Friendship Code #1, you’ll get to know Lucy as she joins the coding club at school and needs your help translating cryptic coding messages.

The next 11 titles—to be released over the next two years—will range from board books and picture books for children, to coding manuals, activity books and coding-themed journals for young adults. While you’re waiting for the remaining titles to be released, check out this list of books recommended by Girls Who Code.

Get reading and start coding!

Explore your new campus with Google Maps

Whether you’re a freshman, transfer student, or visiting parent, Google Maps helps you get where you’re going (and more) on campus and off.

Navigate to specific areas on campus
College campuses can be huge, with sprawling buildings, social areas, and sports stadiums. When navigating to a campus on Google Maps, just type in the college name and tap the navigation button. You’ll automatically see a list of the most popular areas on campus to choose from. Tap the one you’re headed to in order to get directions directly there. If you’re worried about parking, tap “find parking” to see the nearest garages or lots.

Get your bearings with Street View
Once you’ve unpacked your bags, it’s time to get acquainted with the rest of campus. Using Google Maps for Mobile, search for your university and check out panoramic views of your new campus via the Street View thumbnail. Google Maps shows Street View imagery of thousands of campuses around the world. So if your 8 a.m. class is on the opposite side of campus, a little bit of digital exploring will help you know your surroundings and get there on time.

Find your new favorite coffee shop
Need to find a local coffee shop with free Wi-Fi to cram or an art supply store for that project you procrastinated on? Google Maps doesn’t just give you directions–it helps you find the places you need, when you need them. Simply enter the category you’re looking for in the search bar to see the relevant available options near you.

Heading to college for the first time can be exciting and intimidating. Let Google Maps take the uncertainty out of getting around and exploring your new area, so you can focus on picking a major.

Source: Google LatLong