
Balanced Partitioning and Hierarchical Clustering at Scale



Solving large-scale optimization problems often starts with graph partitioning, which means partitioning the vertices of the graph into clusters to be processed on different machines. The need to make sure that clusters are of near equal size gives rise to the balanced graph partitioning problem. In simple terms, we need to partition the vertices of a given graph into k almost equal clusters, while minimizing the number of edges that are cut by the partition. This NP-hard problem is notoriously difficult in practice: the best approximation algorithms for small instances rely on semidefinite programming, which is impractical for larger instances.

This post presents the distributed algorithm we developed, which is more applicable to large instances. We introduced this balanced graph-partitioning algorithm in our WSDM 2016 paper, and have applied this approach to several applications within Google. Our more recent NIPS 2017 paper provides more details of the algorithm via a theoretical and empirical study.

Balanced Partitioning via Linear Embedding
Our algorithm first embeds vertices of the graph onto a line, and then processes vertices in a distributed manner guided by the linear embedding order. We examine various ways to find the initial embedding, and apply four different techniques (such as local swaps and dynamic programming) to obtain the final partition. The best initial embedding is based on “affinity clustering”.
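
To make the pipeline concrete, here is a deliberately simplified, single-machine sketch of the second stage; it is our own illustration under stated assumptions, not Google's distributed implementation. Given a linear ordering of the vertices, it cuts the ordering into k contiguous, equal-sized clusters and then tries local swaps across cluster boundaries to reduce the number of cut edges.

```python
import collections

def partition_from_ordering(order, edges, k, swap_rounds=2):
    """Toy sketch: split a vertex ordering into k equal contiguous blocks,
    then attempt local swaps across block boundaries to reduce cut edges."""
    n = len(order)
    size = (n + k - 1) // k
    cluster = {v: i // size for i, v in enumerate(order)}

    adj = collections.defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def cut_delta(v, target):
        # Change in the number of cut edges if v alone moves to `target`.
        same_now = sum(1 for w in adj[v] if cluster[w] == cluster[v])
        same_new = sum(1 for w in adj[v] if cluster[w] == target)
        return same_now - same_new

    for _ in range(swap_rounds):
        for i in range(size, n, size):        # boundary between two blocks
            u, v = order[i - 1], order[i]     # one candidate from each side
            # Swapping u and v keeps the cluster sizes perfectly balanced.
            # (Approximate test: the two moves are evaluated independently.)
            if cut_delta(u, cluster[v]) + cut_delta(v, cluster[u]) < 0:
                cluster[u], cluster[v] = cluster[v], cluster[u]
    return cluster
```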

Affinity Hierarchical Clustering
Affinity clustering is an agglomerative hierarchical graph clustering method based on Borůvka’s classic Maximum-cost Spanning Tree algorithm. As discussed above, this algorithm is a critical part of our balanced partitioning tool. The algorithm starts by placing each vertex in a cluster of its own: v0, v1, and so on. Then, in each iteration, the highest-cost edge out of each cluster is selected in order to induce larger merged clusters: A0, A1, A2, etc. in the first round, B0, B1, etc. in the second round, and so on. The set of merges naturally produces a hierarchical clustering and gives rise to a linear ordering of the leaf vertices (the original graph vertices, which form the leaves of the hierarchy). The image below demonstrates this, with the numbers at the bottom corresponding to the ordering of the vertices.
Our NIPS’17 paper explains how we run affinity clustering efficiently in the massively parallel computation (MPC) model, in particular using distributed hash tables (DHTs) to significantly reduce running time. This paper also presents a theoretical study of the algorithm. We report clustering results for graphs with tens of trillions of edges, and also observe that affinity clustering empirically beats other clustering algorithms such as k-means in terms of “quality of the clusters”. This video contains a summary of the result and explains how this parallel algorithm may produce higher-quality clusters even compared to a sequential single-linkage agglomerative algorithm.
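
As a rough illustration of the merging rule described above, here is a minimal single-machine sketch of affinity clustering. The production version runs in the MPC model with DHTs, so treat this purely as a didactic approximation with names of our own choosing.

```python
def affinity_clustering(n, weighted_edges, rounds=3):
    """Toy sketch of affinity clustering: in each round, every cluster picks
    its highest-weight outgoing edge and merges along it (Boruvka-style),
    yielding one level of the hierarchy per round."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    hierarchy = []
    for _ in range(rounds):
        best = {}  # cluster -> (weight, cluster, neighboring cluster)
        for u, v, w in weighted_edges:
            cu, cv = find(u), find(v)
            if cu == cv:
                continue
            for c, other in ((cu, cv), (cv, cu)):
                if c not in best or w > best[c][0]:
                    best[c] = (w, c, other)
        if not best:
            break  # no edges left between clusters
        for w, c, other in best.values():
            parent[find(c)] = find(other)  # merge along the selected edge
        hierarchy.append({v: find(v) for v in range(n)})
    return hierarchy
```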

Comparison to Previous Work
In comparing our algorithm to previous work in (distributed) balanced graph partitioning, we focus on FENNEL, Spinner, METIS, and a recent label propagation-based algorithm. We report results on several public social networks as well as a large private map graph. For a Twitter followership graph, we see a consistent improvement of 15–25% over previous results (Ugander and Backstrom, 2013), and for the LiveJournal graph, our algorithm outperforms all the others for all cases except k = 2, where ours is slightly worse than FENNEL's.

The following table presents the fraction of cut edges in the Twitter graph obtained via different algorithms for various values of k, the number of clusters. The numbers given in parentheses denote the size imbalance factor: i.e., the relative difference of the sizes of largest and smallest clusters. Here “Vanilla Affinity Clustering” denotes the first stage of our algorithm where only the hierarchical clustering is built and no further processing is performed on the cuts. Notice that this is already as good as the best previous work (shown in the first two columns below), cutting a smaller fraction of edges while achieving a perfect (and thus better) balance (i.e., 0% imbalance). The last column in the table includes the final result of our algorithm with the post-processing.

| k   | UB13 (5%) | (second baseline) | Vanilla Affinity Clustering (0%) | Final Algorithm (0%) |
| --- | --------- | ----------------- | -------------------------------- | -------------------- |
| 20  | 37.0%     | 38.0%             | 35.71%                           | 27.50%               |
| 40  | 43.0%     | 40.0%             | 40.83%                           | 33.71%               |
| 60  | 46.0%     | 43.0%             | 43.03%                           | 36.65%               |
| 80  | 47.5%     | 44.0%             | 43.27%                           | 38.65%               |
| 100 | 49.0%     | 46.0%             | 45.05%                           | 41.53%               |

Applications
We apply balanced graph partitioning to multiple applications including Google Maps driving directions, the serving backend for web search, and finding treatment groups for experimental design. For example, in Google Maps the world map graph is stored in several shards. Navigational queries spanning multiple shards are substantially more expensive than those handled within a shard. Using the methods described in our paper, we can reduce cross-shard queries by 21% by increasing the shard imbalance factor from 0% to 10%. As discussed in our paper, live experiments on real traffic show that our cut-optimization techniques produce 40% fewer multi-shard queries than a baseline Hilbert embedding technique. This, in turn, results in less CPU usage in response to queries. In a future blog post, we will talk about the application of this work in the web search serving backend, where balanced partitioning helped us design a cache-aware load balancing system that dramatically reduced our cache miss rate.

Acknowledgements
We especially thank Vahab Mirrokni, whose guidance and technical contribution were instrumental in developing these algorithms and writing this post. We also thank our other co-authors and colleagues for their contributions: Raimondas Kiveris, Soheil Behnezhad, Mahsa Derakhshan, MohammadTaghi Hajiaghayi, Silvio Lattanzi, Aaron Archer, and other members of the NYC Algorithms and Optimization research team.

Source: Google AI Blog


The real world as your playground: Build real-world games with Google Maps APIs



The mobile gaming landscape is changing as more and more studios develop augmented reality games. In order to mix realities, developers first need to understand the real world — the physical environment around their players. That’s why we’re excited to announce a new offering for building real-world games using Google Maps’ tried-and-tested model of the world.



Game studios can easily reimagine our world as a medieval fantasy, a bubble gum candy land, or a zombie-infested post-apocalyptic city. With Google Maps’ real-time updates and rich location data, developers can find the best places for playing games, no matter where their players are.



Completely customize your games

To make it easy to get started, we’ve brought the richness of Google Maps to the Unity game engine. We turn buildings, roads, and parks into GameObjects in Unity, where developers can then add texture, style, and customization to match the look and feel of their game. This means that they can focus on building rich, immersive gameplay without the overhead of scaffolding a global-scale game world.



“With Google Maps data integrated into Unity, we were able to focus our time and energy on building detailed virtual experiences for our users to find virtual dinosaurs in the real world.” - Alexandre Thabet, CEO, Ludia



Create immersive experiences all over the globe

Game developers will now have access to a rich, accurate, and living model of the world to form the foundation of their game worlds. With access to over 100 million 3D buildings, roads, landmarks, and parks from over 200 countries, they can deliver rich, engaging gameplay across the globe.



"We are excited to partner with Google to provide the most up-to-date and rich location data to enable us to create an immersive experience tied to your location. When new buildings or roads are built, we’ll have access to them in our game. Google Maps’ unrivalled location data, covering world-famous landmarks, businesses and buildings, like the Statue of Liberty, Eiffel Tower, London Eye, Burj Khalifa, and India Gate, makes exploring your surroundings a breathtaking experience,” said Teemu Huuhtanen, CEO, Next Games



Design rich and engaging games in the real world

Designing interactions around real-world places at global scale is a huge challenge and requires knowing a lot about a player’s environment. We make it easy to find places that are appropriate, pleasant, and fun to play — no matter where your players are.



"Building game interactions around real-world places at global scale and finding places that are relevant to users and fun to play is challenging. Google Maps APIs helped us incorporate the real-world, user relevant locations into our game. Users from all over the world can experience the Ghostbusters virtual world through our game, leveraging Google's location data.​" - HAN Sung Jin, CEO, FourThirtyThree Inc.(4:33)



Deliver game experiences at Google-scale

Building on top of Google Maps’ global infrastructure means faster response times, the ability to scale on demand, and peace of mind knowing that your game will just work.



We're excited to be bringing the best of Google to mobile gaming. All our early access partners leveraged ARCore to better understand the user's environment and reach over 100M devices across the ecosystem. At Google we have even more products to help developers – from Google Cloud for your game server needs to YouTube and Google Play for promotional help, and more.



We’ll be featuring a live demo at the Game Developers Conference in booth 823 next week in San Francisco. If you’re interested in building real-world gaming experiences, visit our web page or contact sales.



Behind the Motion Photos Technology in Pixel 2


One of the most compelling things about smartphones today is the ability to capture a moment on the fly. With motion photos, a new camera feature available on the Pixel 2 and Pixel 2 XL phones, you no longer have to choose between a photo and a video, so every photo you take captures more of the moment. When you take a photo with motion enabled, your phone also records and trims up to 3 seconds of video. Using advanced stabilization built upon technology we pioneered in Motion Stills for Android, these pictures come to life in Google Photos. Let’s take a look behind the technology that makes this possible!
Motion photos on the Pixel 2 in Google Photos. With the camera frozen in place the focus is put directly on the subjects. For more examples, check out this Google Photos album.
Camera Motion Estimation by Combining Hardware and Software
The image and video pair that is captured every time you hit the shutter button is a full resolution JPEG with an embedded 3 second video clip. On the Pixel 2, the video portion also contains motion metadata that is derived from the gyroscope and optical image stabilization (OIS) sensors to aid the trimming and stabilization of the motion photo. By combining software based visual tracking with the motion metadata from the hardware sensors, we built a new hybrid motion estimation for motion photos on the Pixel 2.

Our approach aligns the background more precisely than the technique used in Motion Stills or a purely hardware-sensor-based approach. Built on the Fused Video Stabilization technology, it reduces artifacts from the visual analysis that arise in complex scenes with many depth layers or when a foreground object occupies a large portion of the field of view. It also improves on the hardware-sensor-based approach by refining the motion estimation to be more accurate, especially at close distances.
Motion photo as captured (left) and after freezing the camera by combining hardware and software (right). For more comparisons, check out this Google Photos album.
The purely software-based technique we introduced in Motion Stills uses the visual data from the video frames, detecting and tracking features over consecutive frames to yield motion vectors. It then classifies the motion vectors into foreground and background using motion models such as an affine transformation or a homography. However, this classification is not perfect and can be misled, e.g., by a complex scene or a dominant foreground.
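
The rough OpenCV sketch below illustrates this purely visual pipeline; it is our own approximation, not the Motion Stills code. Features are tracked between two frames, a single homography is fit with RANSAC, and inliers are treated as background.

```python
import cv2

def classify_motion_vectors(prev_gray, curr_gray, ransac_thresh=3.0):
    """Track features between two grayscale frames, fit a background motion
    model (homography) with RANSAC, and label each track as background
    (inlier) or foreground (outlier)."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    ok = status.ravel() == 1
    p0, p1 = pts_prev[ok], pts_curr[ok]

    # RANSAC homography: the dominant (assumed background) inter-frame motion.
    H, inlier_mask = cv2.findHomography(p0, p1, cv2.RANSAC, ransac_thresh)
    is_background = inlier_mask.ravel().astype(bool)
    return H, p0.reshape(-1, 2), p1.reshape(-1, 2), is_background
```
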
Feature classification into background (green) and foreground (orange) by using the motion metadata from the hardware sensors of the Pixel 2. Notice how the new approach not only labels the skateboarder accurately as foreground but also the half-pipe that is at roughly the same depth.
For motion photos on the Pixel 2, we improved this classification by using the motion metadata derived from the gyroscope and the OIS. This accurately captures the camera motion with respect to the scene at infinity, which one can think of as the background in the distance. However, for pictures taken at closer range, parallax is introduced for scene elements at different depth layers, which is not accounted for by the gyroscope and OIS. Specifically, we mark motion vectors that deviate too much from the motion metadata as foreground. This results in a significantly more accurate classification of foreground and background, which also enables us to use a more complex motion model known as mixture homographies that can account for rolling shutter and undo the distortions it causes.
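
As a simplified illustration of that idea (our own sketch, with a hypothetical sensor-predicted homography standing in for the gyro/OIS metadata), one can threshold how far each tracked point lands from where the sensor motion says it should be:

```python
import numpy as np

def classify_against_sensor_motion(p0, p1, H_sensor, max_residual_px=4.0):
    """Label tracked points as foreground when their observed motion deviates
    too much from the motion predicted by the hardware (gyro/OIS) metadata.
    p0, p1: (N, 2) matched points; H_sensor: 3x3 predicted homography."""
    ones = np.ones((len(p0), 1))
    proj = np.hstack([p0, ones]) @ H_sensor.T   # apply predicted motion
    proj = proj[:, :2] / proj[:, 2:3]           # dehomogenize
    residual = np.linalg.norm(proj - p1, axis=1)
    return residual > max_residual_px           # True = foreground
```
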
Background motion estimation in motion photos. By using the motion metadata from Gyro and OIS we are able to accurately classify features from the visual analysis into foreground and background.
Motion Photo Stabilization and Playback
Once we have accurately estimated the background motion for the video, we determine an optimally stable camera path to align the background using linear programming techniques outlined in our earlier posts. Further, we automatically trim the video to remove any accidental motion caused by putting the phone away. All of this processing happens on your phone and produces a small amount of metadata per frame that is used to render the stabilized video in real time using a GPU shader when you tap the Motion button in Google Photos. In addition, we play the video starting at the exact timestamp of the HDR+ photo, producing a seamless transition from still image to video.
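
The actual formulation is more involved, but the following deliberately simplified 1D sketch (our own, using SciPy rather than any production solver) shows the core idea of computing a stable path with a linear program: minimize the total frame-to-frame motion of the virtual camera while keeping it within a crop margin of the estimated real camera path.

```python
import numpy as np
from scipy.optimize import linprog

def smooth_camera_path_1d(camera_path, crop_margin):
    """Minimize sum |p[t] - p[t-1]| subject to |p[t] - camera_path[t]| <= margin,
    written as an LP with slack variables s[t] >= |p[t] - p[t-1]|."""
    n = len(camera_path)
    num_vars = n + (n - 1)                       # p[0..n-1], s[1..n-1]
    cost = np.concatenate([np.zeros(n), np.ones(n - 1)])

    A_ub, b_ub = [], []
    for t in range(1, n):
        row = np.zeros(num_vars)                 #  p[t] - p[t-1] - s[t] <= 0
        row[t], row[t - 1], row[n + t - 1] = 1, -1, -1
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(num_vars)                 # -p[t] + p[t-1] - s[t] <= 0
        row[t], row[t - 1], row[n + t - 1] = -1, 1, -1
        A_ub.append(row); b_ub.append(0.0)

    bounds = [(c - crop_margin, c + crop_margin) for c in camera_path]
    bounds += [(0, None)] * (n - 1)

    res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[:n]                             # the stabilized path
```
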
Motion photos stabilize even complex scenes with large foreground motions.
Motion Photo Sharing
Using Google Photos, you can share motion photos with your friends as videos and GIFs, watch them on the web, or view them on any phone. This is another example of combining hardware, software, and machine learning to create new features for the Pixel 2.

Acknowledgements
Motion photos is a result of a collaboration across several Google Research teams, Google Pixel and Google Photos. We especially want to acknowledge the work of Karthik Raveendran, Suril Shah, Marius Renn, Alex Hong, Radford Juang, Fares Alhassen, Emily Chang, Isaac Reynolds, and Dave Loxton.

Source: Google AI Blog


Semantic Image Segmentation with DeepLab in TensorFlow



Semantic image segmentation, the task of assigning a semantic label such as “road”, “sky”, “person”, or “dog” to every pixel in an image, enables numerous new applications, such as the synthetic shallow depth-of-field effect shipped in the portrait mode of the Pixel 2 and Pixel 2 XL smartphones and mobile real-time video segmentation. Assigning these semantic labels requires pinpointing the outline of objects, and thus imposes much stricter localization accuracy requirements than other visual entity recognition tasks such as image-level classification or bounding box-level detection.
Today, we are excited to announce the open source release of our latest and best performing semantic image segmentation model, DeepLab-v3+ [1]*, implemented in TensorFlow. This release includes DeepLab-v3+ models built on top of a powerful convolutional neural network (CNN) backbone architecture [2, 3] for the most accurate results, intended for server-side deployment. As part of this release, we are additionally sharing our TensorFlow model training and evaluation code, as well as models already pre-trained on the Pascal VOC 2012 and Cityscapes benchmark semantic segmentation tasks.
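
For a sense of what running one of the released checkpoints looks like, here is a minimal TF 1.x-style inference sketch. The tensor names and the 513-pixel input size follow our reading of the released demo code and should be treated as assumptions; check the repository for the model you download.

```python
import numpy as np
import tensorflow as tf  # TF 1.x-style API, matching the 2018-era release
from PIL import Image

# Assumed to match the released demo; verify against the repository.
INPUT_TENSOR = 'ImageTensor:0'
OUTPUT_TENSOR = 'SemanticPredictions:0'
INPUT_SIZE = 513

def load_frozen_graph(pb_path):
    graph = tf.Graph()
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        tf.import_graph_def(graph_def, name='')
    return graph

def segment(graph, image_path):
    """Run inference on one image and return a per-pixel label map."""
    image = Image.open(image_path).convert('RGB')
    scale = INPUT_SIZE / max(image.size)
    target = tuple(int(d * scale) for d in image.size)
    resized = image.resize(target, Image.ANTIALIAS)
    with tf.Session(graph=graph) as sess:
        seg_map = sess.run(OUTPUT_TENSOR,
                           feed_dict={INPUT_TENSOR: [np.asarray(resized)]})
    return seg_map[0]  # (height, width) array of class indices
```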

Since the first incarnation of our DeepLab model [4] three years ago, improved CNN feature extractors, better object scale modeling, careful assimilation of contextual information, improved training procedures, and increasingly powerful hardware and software have led to improvements with DeepLab-v2 [5] and DeepLab-v3 [6]. With DeepLab-v3+, we extend DeepLab-v3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further apply the depthwise separable convolution to both atrous spatial pyramid pooling [5, 6] and decoder modules, resulting in a faster and stronger encoder-decoder network for semantic segmentation.
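
As a toy illustration of the building block (our own Keras sketch, not the released implementation), depthwise separable convolutions can be applied in parallel at several atrous rates and fused, loosely mimicking an ASPP-style module:

```python
import tensorflow as tf
from tensorflow.keras import layers

def atrous_separable_block(x, filters, rates=(6, 12, 18)):
    """Depthwise separable convolutions at several atrous (dilation) rates,
    applied in parallel and fused with a 1x1 projection."""
    branches = [layers.Conv2D(filters, 1, padding='same', activation='relu')(x)]
    for rate in rates:
        branches.append(
            layers.SeparableConv2D(filters, 3, padding='same',
                                   dilation_rate=rate, activation='relu')(x))
    fused = layers.Concatenate()(branches)
    return layers.Conv2D(filters, 1, padding='same', activation='relu')(fused)

# Tiny example: downsample once, then apply the block.
inputs = layers.Input(shape=(513, 513, 3))
x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(inputs)
x = atrous_separable_block(x, filters=128)
model = tf.keras.Model(inputs, x)
```
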
Modern semantic image segmentation systems built on top of convolutional neural networks (CNNs) have reached accuracy levels that were hard to imagine even five years ago, thanks to advances in methods, hardware, and datasets. We hope that publicly sharing our system with the community will make it easier for other groups in academia and industry to reproduce and further improve upon state-of-art systems, train models on new datasets, and envision new applications for this technology.

Acknowledgements
We would like to thank Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille (co-authors of DeepLab-v1 and -v2), as well as Mark Sandler, Andrew Howard, Menglong Zhu, Chen Sun, Derek Chow, Andre Araujo, Haozhi Qi, Jifeng Dai, and the Google Mobile Vision team for their support and valuable discussions.

References
  1. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation, Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam, arXiv: 1802.02611, 2018.
  2. Xception: Deep Learning with Depthwise Separable Convolutions, François Chollet, Proc. of CVPR, 2017.
  3. Deformable Convolutional Networks — COCO Detection and Segmentation Challenge 2017 Entry, Haozhi Qi, Zheng Zhang, Bin Xiao, Han Hu, Bowen Cheng, Yichen Wei, and Jifeng Dai, ICCV COCO Challenge Workshop, 2017.
  4. Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, Proc. of ICLR, 2015.
  5. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, TPAMI, 2017.
  6. Rethinking Atrous Convolution for Semantic Image Segmentation, Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam, arXiv:1706.05587, 2017.


* DeepLab-v3+ is not used to power Pixel 2's portrait mode or real time video segmentation. These are mentioned in the post as examples of features this type of technology can enable.

Source: Google AI Blog


Updated basemap style for Google Maps APIs





Google Maps APIs will soon be updated with a new look and feel to provide an experience in line with the recent updates to Google Maps. Late last year, we refined the color, typography, and iconography of the Google Maps basemap to improve focus, clarity of information, and readability.



This means that the maps in your products will eventually get an update as well, with stylistic changes such as:




  • New basemap with an updated color scheme and typography

  • An updated pin style marking points of interest on the map, in place of the previous circular icons

  • Different colors and icons reflecting categories of points of interest (Food & Drink, Shopping, Transport, etc.)





Existing design




New design

Timeline

The transition to the new look will happen over time and by individual API, with an opt-in period (defaulting to the previous style) and an opt-out period (defaulting to the new style) before the new style is enforced.



The first APIs to offer the new look are the Google Maps SDK for iOS and the Google Places API for iOS, which we are launching as opt-in today. To get updates on the timelines for each API, star the following issues on the Maps APIs Issue Tracker.



If you experience any issues with the new Google Maps APIs styles, please let us know by creating a bug report.

| API | Estimated opt-in launch | Tracking issue |
| --- | --- | --- |
| Google Maps SDK for iOS | 13 February | |
| Google Places API for iOS | 13 February | |
| Google Maps JavaScript API | 14 February (version 3.32) | |
| Google Static Maps API | Mid February | |
| Google Maps Android API | April | |
| Google Places API for Android | May | |



The updated style is already live across all Google products that incorporate Google Maps, including the Assistant, Search, and Android Auto. Opt in to the new style to give your users the same consistent experience no matter how or where they see our maps.


With Google Maps APIs, Toyota Europe keeps teen drivers safe and sound





Editor’s note: Today’s post is from Christophe Hardy, Toyota Motor Europe’s Manager of Social Business. He’ll explain how Toyota used Google Maps APIs to build an Android app to keep teen drivers safe.



It’s a milestone that teenagers celebrate and parents fear: getting that first driver’s license. For teens, a license means freedom and a gateway to adulthood. For parents, it means worrying about their kid’s safety, with no way to make sure they’re doing the right thing behind the wheel.



We know that the risk of motor vehicle crashes is higher among 16-19-year-olds than among any other age group, and that speeding and using smartphones are two of the main causes. So as part of Toyota's efforts to eliminate accidents and fatalities, we worked with Molamil and MapsPeople to build Safe and Sound, an Android app for European teen drivers. It takes a lighthearted but effective approach to help young drivers stay focused on speed limits and the rules of the road, not on their cellphones. And it can be used by anyone, not just Toyota owners.



One way Safe and Sound combats speeding and distracted driving is by using music. Before parents turn over their car keys, parents and teens download and run the app to set it up. The app syncs with Spotify, and uses the Google Maps Roads API to monitor a teen’s driving behavior. If Safe and Sound determines the teen is speeding, it’ll override the teen’s music with a Spotify playlist specifically chosen by the parent—and the teen can’t turn it off. As any parent knows, parents and kids don’t always agree on music. And there’s nothing less cool to a teen than being forced to listen to folk ballads or ‘70s soft rock. (The embarrassment doubles if their friends are in the car.) The parents’ playlist turns off and switches back to the teen’s only when the teen drives at the speed limit.
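
As a rough sketch of the kind of check such an app could make (our illustration, not Toyota's code), the Roads API's speed limits service can be queried with a path of recent GPS samples; note that speed-limit data is a premium feature, and the exact fields should be verified against the Roads API documentation.

```python
import requests

SPEED_LIMITS_URL = "https://roads.googleapis.com/v1/speedLimits"

def is_speeding(samples, current_speed_kph, api_key):
    """Compare the current speed against posted limits along recent GPS samples.
    `samples` is a list of (lat, lng) tuples from the phone's location updates."""
    path = "|".join(f"{lat},{lng}" for lat, lng in samples)
    resp = requests.get(SPEED_LIMITS_URL, params={
        "path": path,
        "units": "KPH",
        "key": api_key,
    })
    resp.raise_for_status()
    limits = [s["speedLimit"] for s in resp.json().get("speedLimits", [])]
    return bool(limits) and current_speed_kph > min(limits)
```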



The app also helps prevent distracted driving. When it detects the car is moving above nine miles an hour, it switches on a “do not disturb” mode that blocks social media notifications, incoming and outgoing texts, and phone calls. If the teen touches the phone, the app will detect that too, and play the parents' Spotify playlist until the teen removes his or her hand. At the end of the drive, Safe and Sound alerts parents to how many times their teen exceeded the speed limit or touched the phone. Parents can also tap a link in the app that displays the route the teen drove in Google Maps.



Google Maps provided us with the ideal platform for building Safe and Sound. It has accurate, up-to-date, and comprehensive map data, including road speed limits. The documentation is great, which made using the Google Maps Roads API simple. It also scales to handle millions of users, an important consideration as we roll out the app to more of Europe.



Safe and Sound is currently available in English throughout the continent, with a Spanish version launching soon in Spain, and a Dutch and French version coming to Belgium. And we’re looking to localize Safe and Sound into even more languages.



We hope Safe and Sound helps keep more teens safe, and brings more parents peace of mind. Plus, there’s never been a better use for that playlist of yacht rock classics.

Faster, more affordable access to the web across Africa

When it comes to searching, faster is always better. Whether you’re commuting to work, searching for the latest sports news, or for the phone number of a nearby restaurant, quick access to information from the web is crucial. In Africa, we see that nearly 40% of people with Android devices may have a slow or delayed experience while they're on the web due to insufficient RAM (random access memory) on their device.

We’re now introducing a feature which we hope will help you get the information you’re looking for more quickly, easily, and affordably. Now, for most of Africa, when you search on Google with a low-RAM device (512MB of RAM or less) via the Google App, Chrome, or the Android browser, webpages that you access from Google’s search results page will be optimized to load faster and use less data.



This feature is already available in Indonesia, India, Brazil and Nigeria, and analyses show that these optimized pages load three times faster and use 80 percent less data. Traffic to webpages from Google search also increased by up to 40 percent. However, if you’d prefer to see the original page, you can choose that option at the top of your page.

We hope this feature can help improve the Search experience for millions of people where network connections are slow and access to devices is limited. Search on!

Removing Place Add, Delete & Radar Search features

Back in 2012, we launched the Place Add / Delete feature in the Google Places API to enable applications to instantly update the information in Google Maps’ database for their own users, as well as submit new places to add to Google Maps. We also introduced Radar Search to help users identify specific areas of interest within a geographic area.



Unfortunately, since we introduced these features, they have not been widely adopted, and we’ve recently launched easier ways for users to add missing places. At the same time, these features have proven incompatible with future improvements we plan to introduce into the Places API.



Therefore, we’ve decided to remove the Place Add / Delete and Radar Search features in the Google Places API Web Service and JavaScript Library. Place Add is also being deprecated in the Google Places API for Android and iOS. These features will remain available until June 30, 2018. After that date, requests to the Places API attempting to use these features will receive an error response.




Next steps


We recommend removing these features from all your applications before they are turned down at the end of June 2018.



Nearby Search can work as an alternative to Radar Search when used with rankby=distance and without keyword or name. For more details, please check the Developer's Guide for the Web Service or the Places library in the Google Maps JavaScript API.
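
For reference, here is a minimal sketch of such a replacement request against the Places API Web Service (rankby=distance requires at least one of keyword, name, or type, so a type filter is used here):

```python
import requests

NEARBY_SEARCH_URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def nearby_by_distance(lat, lng, place_type, api_key):
    """Radar-style replacement: Nearby Search ranked by distance rather than
    constrained to a fixed radius, returning results from nearest to farthest."""
    resp = requests.get(NEARBY_SEARCH_URL, params={
        "location": f"{lat},{lng}",
        "rankby": "distance",
        "type": place_type,   # e.g. "cafe"
        "key": api_key,
    })
    resp.raise_for_status()
    return resp.json().get("results", [])
```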



The Client Libraries for Google Maps Web Services for Python, Node.js, Java and Go are also being updated to reflect the deprecated status of this functionality.



We apologize for any inconvenience this may cause, but we hope that the alternative options we provide will still help meet your needs. Please submit any questions or feedback to our issue tracker.







Posted by Fontaine Foxworth, Product Manager, Google Maps APIs


Get your users where they need to go on any platform with Google Maps URLs

Last week at Google I/O we announced Google Maps URLs, a new way for developers to link directly to Google Maps from any app. Over one billion people use the Google Maps apps and sites every month to get information about the world, and now we're making it easier to leverage the power of our maps from any app or site.





Why URLs?


Maps can be important to help your users get things done, but we know sometimes maps don't need to be a core part of your app or site. Sometimes you just need the ability to complete your users’ journey—including pointing them to a specific location. Maybe they're ready to buy from you and need to find your nearest store, or they want to set up a meeting place with other users. All of these can be done easily in Google Maps already.



What you can do is use Google Maps URLs to link into Google Maps and trigger the functionality you or your users need automatically. Google Maps URLs are not new. You've probably noticed that copying our URLs out of a browser works—on some platforms. While we have Android Intents and an iOS URL Scheme, they only work on their native platforms. Not only is that more work for developers, it means any multi-user functionality is limited to users on that same platform.




Cross platform


So to start, we needed a universal URL scheme we could support cross-platform—Android, iOS, and web. A messaging app user should be able to share a location to meet up with their friend without worrying about whether the message recipient is on Android or iOS. And for something as easy as that, developers shouldn't have to reimplement the same feature with two different libraries either.



So when a Google Maps URL is opened, it will be handled by the Google Maps app installed on the user's device, whatever device that is. If Google Maps for Android or iOS is available, that's where the user will be taken. Otherwise, Google Maps will open in a browser.




Easy to use


Getting started is simple—just replace some values in the URL based on what you're trying to accomplish. That means we made it easy to construct URLs programmatically. Here are a few examples to get you started:



Say someone has finished booking a place to stay and needs to figure out how to get there, or wants to see what restaurants are nearby:

https://www.google.com/maps/search/?api=1&query=sushi+near+94043





The query parameter does what it says: plugs a query in. Here we've specified a place, but if you use the same link with no location, it will search near the user who clicks it. Try it out: click here for sushi near you.





This is similar to our query above, but this time we got back a single result, so it gets additional details shown on the page:

google.com/maps/search/?api=1&query=shoreline+amphitheatre





The api parameter (mandatory) specifies the version of Maps URLs that you're using. We're launching version 1.







Or if a user has set up their fitness app and wants to try out a new route on their bike:

www.google.com/maps/dir/?api=1&destination=stevens+creek+trail&travelmode=bicycling&dir_action=navigate













We can set the travelmode to bicycling and the destination to a nearby bike trail, and we're done!



And we can also open Street View directly with a focus of our choice to give a real sense of what a place is like:

www.google.com/maps/@?api=1&map_action=pano&viewpoint=36.0665,-112.0906&heading=85&pitch=10&fov=75





The viewpoint is a LatLng coordinate we want to get imagery for, and heading, pitch, and fov allow you to specify exactly where to look.
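
Because these are plain URLs, they are easy to assemble in any language with a URL-encoding utility. Here is a small sketch (ours, using Python's standard library) that builds the search, directions, and Street View URLs shown above:

```python
from urllib.parse import urlencode

MAPS_BASE = "https://www.google.com/maps"

def maps_search_url(query):
    return f"{MAPS_BASE}/search/?{urlencode({'api': 1, 'query': query})}"

def maps_directions_url(destination, travelmode="driving", navigate=False):
    params = {"api": 1, "destination": destination, "travelmode": travelmode}
    if navigate:
        params["dir_action"] = "navigate"
    return f"{MAPS_BASE}/dir/?{urlencode(params)}"

def maps_streetview_url(lat, lng, heading=0, pitch=0, fov=90):
    params = {"api": 1, "map_action": "pano", "viewpoint": f"{lat},{lng}",
              "heading": heading, "pitch": pitch, "fov": fov}
    return f"{MAPS_BASE}/@?{urlencode(params)}"

print(maps_search_url("sushi near 94043"))
print(maps_directions_url("stevens creek trail", "bicycling", navigate=True))
print(maps_streetview_url(36.0665, -112.0906, heading=85, pitch=10, fov=75))
```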




Need more functionality?


Google Maps URLs are great to help your users accomplish some tasks in Google Maps. However, when you need more flexibility, customization, or control, we recommend integrating Google Maps into your app or site instead. This is where our more powerful Google Maps APIs come into play. With our feature-rich range of APIs, you can access full functionality: control the camera, draw shapes on the map, or style your maps to match your app and brand, or just for a better UI. And if you want to go beyond the map, we have metadata on Places, images, and much more.




Learn more


When you're happy to delegate the heavy lifting and make use of the Google Maps app for your needs, Maps URLs are for you. Check out our new documentation.



Thank you for using Google Maps URLs and the Google Maps APIs! Be sure to share your feedback or any issues in the issue tracker.







Posted by Joel Kalmanowicz, Product Manager, Google Maps APIs


Google Maps and Particle partner to bring location-aware capabilities to IoT devices





Particle and Google Maps make it easy for IoT devices to identify their location without the use of a GPS. With a single line of code, a device or sensor dispersed across a network (an IoT edge device) can access Google’s geospatial database of Wi-Fi and cellular networks using the Google Maps Geolocation API.



This means you no longer need to invest in expensive and power-hungry GPS modules to know the location of your IoT devices and sensors. Alternatively, you can also use Google Maps APIs in conjunction with existing GPS systems to increase accuracy and provide location data even when GPS fails, as it often does indoors.



Particle and Google now provide the whole chain: location-aware devices that send context-rich data to Google Cloud Platform. When IoT sensors know their location, the information they collect and send back becomes more contextualized, allowing you to make more informed, high-order decisions. By feeding context-rich data back into Google Cloud Platform, you have access to a robust set of cloud products and services.



Although asset tracking is traditionally built on a foundation that includes GPS, satellite-based GPS often fails in dense urban environments and indoors. In these scenarios, GPS signals are blocked by tall buildings or roofs. The Geolocation API is based on cell tower and Wi-Fi signals that continue to operate where GPS fails. This capability allows you to track your assets anywhere, both indoors and out.
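
For illustration, here is a minimal sketch of the kind of request a device (or a server acting on its behalf) can send to the Geolocation API; the JSON field names follow the public API documentation as we understand it, and the access-point and cell-tower values below are placeholders.

```python
import requests

GEOLOCATION_URL = "https://www.googleapis.com/geolocation/v1/geolocate"

def locate(api_key, wifi_access_points=None, cell_towers=None):
    """Resolve a position from nearby Wi-Fi and cell signals.
    Returns (lat, lng, accuracy_in_meters)."""
    body = {"considerIp": False}
    if wifi_access_points:
        body["wifiAccessPoints"] = wifi_access_points
    if cell_towers:
        body["cellTowers"] = cell_towers
    resp = requests.post(GEOLOCATION_URL, params={"key": api_key}, json=body)
    resp.raise_for_status()
    data = resp.json()
    return data["location"]["lat"], data["location"]["lng"], data["accuracy"]

# Placeholder payload shapes:
wifi = [{"macAddress": "00:25:9c:cf:1c:ac", "signalStrength": -65}]
cells = [{"cellId": 42, "locationAreaCode": 415,
          "mobileCountryCode": 310, "mobileNetworkCode": 410}]
```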



In an IoT-driven world, you can track more than just location. Additional signals can be critical to your objectives. For example, in the cold supply chain, temperature as well as location are key pieces of data to track in the factory, on the loading dock, and in transit. This enables a holistic view of the supply chain and its ability to deliver a high-quality product.



With a Wi-Fi enabled product built on the Particle platform, you can use the Google Maps Geolocation API to offer location-aware auto-configuration. This creates a seamless setup experience, enhanced operation, and valuable analytics. Using geolocation, your Particle devices can auto-configure their time zone, tune to available broadcast bands, and connect to regional service providers.



For example, location-aware window blinds can reference the number of available hours of sunlight and then make informed decisions on how to passively heat a room. A smart coffee machine can report back its location, allowing your marketing teams to better understand its market penetration and target demographic.



Visit the documentation for full directions to enable geolocation on your Particle devices. There are four basic steps to complete:




  1. Get a Google Maps API key enabled for Geolocation.

  2. Flash the Google Maps Firmware on your Particle Devices.

  3. Enable the Google Maps Integration in the Particle Console.

  4. Test it Out!




Google and Particle will be demoing the integration at IoT World beginning May 16. Stop by booth #310 near the main hall entrance to see the demo in person, or review our developer documentation for more information and get started today.







About Ken: Ken is a Lead on the Industry Solutions team. He works with customers to bring innovative solutions to market.