Tag Archives: GCP

Google Dev Library Letters — 12th Issue

Posted by Garima Mehra, Program Manager

‘Google Dev Library Letters’ is curated to bring you some of the latest projects developed with Google tech and submitted to the Google Dev Library platform. We hope this brings you the inspiration you need for your next project!


Android

Shape your Image: Circle, Rounded Square, or Cuts at the corner in Android by Sriyank Siddhartha

Using the Material Design Components (MDC) library, you can shape images in just a few lines of code with ShapeableImageView.


Foso/Ktorfit by Jens Klingenberg

An HTTP client and Kotlin Symbol Processor for Kotlin Multiplatform (JS, JVM, Android, Native, iOS) that uses KSP and Ktor clients, inspired by Retrofit.

ML in Action: Campaign to Collect and Share Machine Learning Use Cases

Posted by Hee Jung, Developer Relations Community Manager / Soonson Kwon, Developer Relations Program Manager

ML in Action is a virtual event to collect and share cool and useful machine learning (ML) use cases that leverage multiple Google ML products. This is the first run of an ML use case campaign by the ML Developer Programs team.

Let us announce the winners right here, right now. They have showcased practical uses of ML and how ML can be adapted to real-life situations. We hope these projects can spark new applied ML project ideas and provide opportunities for ML community leaders to discuss ML use cases.

The four winners of "ML in Action" are:

Detecting Food Quality with Raspberry Pi and TensorFlow

By George Soloupis, ML Google Developer Expert (Greece)

This project helps people with smell impairment by identifying food degradation. The idea came when a friend revealed that he had lost his sense of smell in a bike crash. Despite attending many IT meetings, George had never seen this problem addressed, and the power of machine learning seemed like something to rely on. Hence the goal: to create a prototype that is affordable, accurate, and usable by people with minimal knowledge of computers.

The basic setup for food quality detection is as follows. A Raspberry Pi collects data from air sensors over time during the food degradation process. This single-board computer was very useful! With the GUI, it’s easy to execute Python scripts and see the results on screen. Eight sensors collected data on chemical compounds such as NH3, H2S, O3, CO, and CH4. After operating the prototype for one day, categories were set based on the results: the first hours of the food out of the refrigerator were labeled "good" and the rest "bad". Then a model was trained and evaluated with the help of TensorFlow, and inference was done with TensorFlow Lite.
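To make the inference step concrete, here is a minimal sketch of running a TensorFlow Lite classifier on a single reading from the eight sensors. The model file, input values, and label names are illustrative assumptions, not the project’s actual code.

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # on a desktop, tf.lite.Interpreter works the same way

# Hypothetical model trained on eight gas-sensor channels (file name and values are made up).
interpreter = tflite.Interpreter(model_path="food_quality.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

reading = np.array([[0.12, 0.40, 0.05, 0.22, 0.31, 0.08, 0.17, 0.25]], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], reading)
interpreter.invoke()
probabilities = interpreter.get_tensor(output_details[0]["index"])[0]

labels = ["good", "bad"]  # the two categories described above
print(labels[int(np.argmax(probabilities))], probabilities)
```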

Since there were no open source prototypes out there with similar goals, it was a complete adventure. Sensors on PCBs and standalone sensors were used to get the best mix of accuracy, stability, and sensitivity. A logic level converter was used to minimize the use of resistors, and capacitors were added for stability. The result is a compact prototype: the Raspberry Pi attaches directly to a board with slots for eight sensors. It is designed so that sensors can be replaced at any time, and users can experiment with different sensors. At inference time, the values are sent over Bluetooth to a mobile device. As an end result, a user with no advanced technical knowledge can see the food quality in an Android app built with Kotlin.

Reference: Github, more to read

* This project is supported by the Google Impact Fund.


Election Watch: Applying ML in Analyzing Elections Discourse and Citizen Participation in Nigeria

By Victor Dibia, ML Google Developer Expert (USA)

This project explores the use of GCP tools in ingesting, storing and analyzing data on citizen participation and election discourse in Nigeria. It began on the premise that the proliferation of social media interactions provides an interesting lens to study human behavior, and ask important questions about election discourse in Nigeria as well as interrogate social/demographic questions.

It is based on data collected from Twitter between September 2018 and March 2019 (tweets geotagged to Nigeria and tweets containing election-related keywords). Overall, the data set contains 25.2 million tweets and retweets, 12.6 million original tweets, 8.6 million geotagged tweets, and 3.6 million tweets labeled (using an ML model) as political.

By analyzing election discourse, we can learn a few important things including - issues that drive election discourse, how social media was utilized by candidates, and how participation was distributed across geographic regions in the country. Finally, in a country like Nigeria where updated demographics data is lacking (e.g., on community structures, wealth distribution etc), this project shows how social media can be used as a surrogate to infer relative statistics (e.g., existence of diaspora communities based on election discussion and wealth distribution based on device type usage across the country).

Data for the project was collected using Python scripts that wrote tweets from the Twitter streaming API (matching certain criteria) to BigQuery. BigQuery queries were then used to generate aggregate datasets used for visualizations/analysis and for training machine learning models (political text classification models to label political text and multi-class classification models to label general discourse). The models were built using TensorFlow 2.0 and trained on Colab notebooks powered by GCP GPU compute VMs.
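As a rough sketch of the ingestion step, the snippet below streams rows into BigQuery with the google-cloud-bigquery client. The table name and the get_matching_tweets() generator are hypothetical stand-ins for the project’s actual collection scripts.

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.election_watch.tweets"  # hypothetical dataset and table

def get_matching_tweets():
    """Placeholder for the Twitter streaming logic (keyword and geo filters)."""
    yield {"id": "1234567890", "text": "example tweet", "created_at": "2019-02-16T10:00:00Z"}

buffer = []
for tweet in get_matching_tweets():
    buffer.append(tweet)
    if len(buffer) >= 500:  # insert in batches to stay within streaming quotas
        errors = client.insert_rows_json(table_id, buffer)
        if errors:
            print("BigQuery insert errors:", errors)
        buffer = []

if buffer:  # flush whatever is left when the stream ends
    client.insert_rows_json(table_id, buffer)
```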

References: Election Watch website, ML models descriptions one, two


Bioacoustic Sound Detector (To identify bird calls in soundscapes)

By Usha Rengaraju, TFUG Organizer (India)

 
(Bird image by Krisztian Toth on Unsplash)

The “Visionary Perspective Plan (2020-2030) for the conservation of avian diversity, their ecosystems, habitats and landscapes in the country”, proposed by the Indian government to help conserve birds and their habitats, inspired me to take up this project.

Extinction of bird species is an increasing global concern, as it has a huge impact on food chains. Bioacoustic monitoring can provide a passive, low-labor, and cost-effective strategy for studying endangered bird populations. Recent advances in machine learning have made it possible to automatically identify bird songs for common species with ample training data. This innovation makes it easier for researchers and conservation practitioners to accurately survey population trends, and to regularly and more effectively evaluate threats and adjust their conservation actions.

This project is an implementation of a bioacoustic monitor using Masked Autoencoders in TensorFlow and Cloud TPUs. The project will be presented as a browser-based application using Flask. The deep learning prototype can process continuous audio data and acoustically recognize the species.
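As a sketch of how such a browser-based detector might be served, the Flask endpoint below loads a saved TensorFlow model and returns a prediction for an uploaded audio clip. The model path, preprocessing, and response format are illustrative assumptions rather than the actual project code.

```python
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify

app = Flask(__name__)
model = tf.keras.models.load_model("bird_call_detector")  # hypothetical saved model

def audio_to_features(raw_bytes):
    """Placeholder preprocessing: decode WAV audio and compute the spectrogram the model expects."""
    audio, _ = tf.audio.decode_wav(raw_bytes)
    spectrogram = tf.signal.stft(tf.squeeze(audio, axis=-1), frame_length=1024, frame_step=512)
    return tf.expand_dims(tf.abs(spectrogram), 0)  # add a batch dimension

@app.route("/predict", methods=["POST"])
def predict():
    features = audio_to_features(request.files["audio"].read())
    scores = model(features).numpy()[0]
    return jsonify({"species_index": int(np.argmax(scores)), "score": float(np.max(scores))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```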

The goal of the project when I started was to build a basic prototype for monitoring rare bird species in India. In the future, I would like to expand the project to monitor other endangered species as well.

References: Kaggle Notebook, Colab Notebook, Github, the dataset and more to read


Persona Labs' Digital Personas

By Martin Andrews and Sam Witteveen, ML Google Developer Experts (Singapore)

Over the last 3 years, Red Dragon AI (a company co-founded by Martin and Sam) has been developing real-time digital “Personas”. The key idea is to enable users to interact with life-like Personas in a format similar to a Zoom call: speaking to them and seeing them respond in real time, just as a human would. Naturally, each Persona can be tailored to the tasks required (by adjusting the appearance, voice, and ‘motivation’ of the dialog system behind the scenes and their corresponding backend APIs).

The components required to make the Personas work effectively include dynamic face models, expression generation models, Text-to-Speech (TTS), dialog backend(s) and Speech Recognition (ASR). Much of this was built on GCP, with GPU VMs running the (many) Deep Learning models and combining the outputs into dynamic WebRTC video that streams to users via a browser front-end.

Much of the previous years’ work focused on making the Personas’ faces behave in a life-like way, while making sure that the overall latency (i.e. the time between the Persona hearing the user ask a question and its lips starting the response) is kept low, and that the rendering of individual images matches the 25 frames-per-second video rate required. As you might imagine, there were many Deep Learning modeling challenges, coupled with hard engineering issues to overcome.

In terms of backend technologies, Google Cloud GPUs were used to train the Deep Learning models (built using TensorFlow/TFLite, PyTorch/ONNX & more recently JAX/Flax), and the real-time serving is done by Nvidia T4 GPU-enabled VMs, launched as required. Google ASR is currently used as a streaming backend for speech recognition, and Google’s WaveNet TTS is used when multilingual TTS is needed. The system also makes use of Google’s serverless stack with CloudRun and Cloud Functions being used in some of the dialog backends.
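As one small, concrete example from that stack, a minimal sketch of calling a WaveNet voice through the Cloud Text-to-Speech Python client might look like this (the voice name and output handling are illustrative, not the Personas’ actual pipeline):

```python
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="Hello, I'm a digital Persona."),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Wavenet-D",  # one of the WaveNet voices
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.LINEAR16,
        sample_rate_hertz=24000,
    ),
)

with open("persona_reply.wav", "wb") as out:
    out.write(response.audio_content)  # LINEAR16 responses include a WAV header
```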

Visit the Persona Labs website (linked below) and you can see videos that demonstrate several aspects: what the Personas look like, their multilingual capability, potential applications, and more. However, the videos can’t really demonstrate what the interactivity ‘feels like’. For that, it’s best to get a live demo from Sam and Martin, and see what real-time Deep Learning model generation looks like!

Reference: The Persona Labs website

#IamaGDE: Martin Kleppe

#IamaGDE series presents: Google Maps Platform

The Google Developers Experts program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.


Martin Kleppe is a co-founder of Ubilabs, a Google Cloud partner since 2012. He is the head of development, responsible for R&D and technical strategy. He talks with customers to understand their challenges and which location technologies will best serve them. He also works with partners like Google to determine the best ways to combine the latest technologies.

Ubilabs – a data and location technology consultancy specializing in product and software development, visualization, implementation, and data management – built the official feature tour and travel app demos for Google Maps Platform at Google I/O 2021.

“We worked with the latest Google Maps Platform APIs to create demos that showcase the combined features,” he says.

Ubilabs also works closely with the Google Earth team, developing content and UI components for its storytelling platform. In addition, they are in close contact with the Google Maps Platform team, giving feedback on the latest products and features before they’re released.


Kleppe’s background is in design and coding web applications. At Bauhaus University, Kleppe was in the Media Art and Design program, where he learned the principle of simplicity in design, with a focus on what users want. Students’ projects combined technology and art, and while programming courses weren’t required, students were expected to explain their thinking. Kleppe worked on live blogging platforms using maps and mobile phones, even before popular maps APIs and iPhones came out.

“I was always focused on interactive maps,” Kleppe says. “Before Google Maps, I used NASA satellite imagery and Japanese cell phones with access to the user’s location. That’s how I got into programming and into maps.”

After university, he decided to start a company.

“When the Google Maps API came out, I thought it was a perfect time to make a business out of it,” he said. “We started with a few friends, and then the company took off.”

Kleppe keeps his focus on the value of maps to users: Maps connect virtual users on computer screens to the real world, providing a connection between humans and data, and they provide context around a location.

“Maps, especially Google Maps, make it easy for users to get access to information,” says Kleppe. “With the Google Maps API, it was super easy to put things on a map.”


Getting involved in the developer community

Two different events got Kleppe involved in the developer community. In Hamburg, where he lives, he belonged to a small local JavaScript user group and gave frequent talks that led to group discussions about the Google Maps API and the usability aspects of the Google API. Ubilabs was also invited to talk about Google Maps at a Google technical event. The Google office in Hamburg held different meetups, and Kleppe found himself giving 12 talks a year and organizing conferences with tracks on topics including JavaScript and Google Maps.

When Kleppe’s first child was born, he cut down on his travel and poured his energy into his burgeoning relationships with developer advocates and teams. Around the same time, Kleppe became a Google Developer Expert (GDE).

“I wanted to become a GDE because I’m always eager to share Ubilabs’ latest use of Google technologies and inspire people with showcases of our work, pushing the limit of what’s possible,” Kleppe says. “As a GDE, you’re closer to the development and what’s happening at Google.”

GDEs get previews of APIs, know the people who are working on projects at Google, and give feedback.

“It is very interesting and important for us as a company to be prepared and to know what clients will be able to use in the future,” he says. “And it’s important for me, as well, because I always want to learn the latest things and be on top of tech trends.”

Kleppe enjoys interacting with the other Google Maps Platform GDEs, who live all over the world.

“They are teachers at universities, self-employed developers who work on small apps, and full-time speakers and educators,” Kleppe says. “That’s valuable to me because I work with clients, and I have my own views, but seeing how other people are working on the same thing from other perspectives helps me understand why different aspects of APIs exist.”

These days, Kleppe primarily works with the members of the Google Maps Platform, Google Earth, and Google Cloud product teams who develop new APIs, to provide his expert feedback.

“We work with a lot of different clients and understand what developers, clients, and users need and can pass useful feedback,” he says. “I think there is value to the community in having us test and review the new APIs. We think about new ways to use them, and if we find errors, we can help the Google Maps Platform team address these issues before the APIs are released to the public.”

In 2020, Kleppe found himself even more connected with the Google team when Ubilabs was selected to build the official Google Maps Platform demos for Google I/O 2021, featuring the new WebGL-powered features that are currently in beta release.


Favorite Maps features and current projects

“WebGL is a different way of rendering maps,” says Kleppe. “The underlying technology has huge potential. It uses GPU-accelerated computing, where you use the graphics card in your machine to render 3D buildings and place 3D objects in space. Before this technology was available, your data was an additional layer that covered the map. Now, you have control over how the data is visually merged with the existing map. This will provide the user an immersive experience, like looking at a view of a city.”

Kleppe notes that the Google Maps APIs used to be limited to about a thousand objects, but now they can handle millions of moving points on a map. For him, being able to visualize and analyze large amounts of data is useful to Ubilabs because the company focuses on data visualization and analysis.

“You can visualize, animate, and draw data on the map,” he says. “You can analyze real-time information on maps.”

Kleppe says the best and most interesting feature of the Google Maps Platform APIs is the ability to combine so many data sources – the base map with streets, a layer to draw business info, an extra API to calculate directions, and the ability to draw one’s own information on top of all that.

“You’re not limited to the set of data in the API,” he says. “You’re always free to merge your own data into an abstract view of the world.”

Ubilabs put 3D features on their wish list years ago.

“The 3D feature is the most awaited feature for us,” he says. “Nine years later, it’s finally becoming a reality, so I’m super excited.”


Future plans

In his role at Ubilabs, Kleppe is sharpening the company’s strategic goals and adding focus on the field of data analytics with location aspects.

“We can work with clients and enable them to gain insights from their data,” he says. “Every dataset has a connection to something in the real world, and if you use that information, you can make decisions about your business.”

His GDE role has allowed Kleppe to try Google Cloud’s data-specific products.

“We combine the visualization power of maps with the power of data analytics in Google Cloud and stream massive amounts of highly dynamic location data; for example, to track ships or analyze movements of assets,” Kleppe says. “We built a cloud team with a clear focus on data analytics on GCP over the past three years, to look into how to preprocess data in the cloud, and we see a lot of potential.”

Follow Martin on Twitter at @aemkei | Read more about Ubilabs | Check out Martin’s experiments.

For more information on Google Maps Platform, visit our website or learn more about our GDE program.

Introducing the Open Source Insights Project

Open Source Insights

Google has been working on software supply-chain security for many years, and transitive dependencies remain one of the most complex and least understood aspects. While we will be integrating this data into our Cloud and internal products in a variety of ways, we believe there is immediate value in helping developers understand and visualize dependencies. Today, we are excited to share an exploratory visualization site: Open Source Insights, which provides an interactive view of the dependencies of open source projects.

Software development practices have evolved significantly over the last few years. Collaborative development with distributed feature development, consumption of open source and third-party packages, and publicly maintained software libraries have become commonplace, partly as a result of the widespread use of open source software. The advantages of open source are so clear that people and companies that would once have rejected OSS are now adopting it as a critical element of their environment.

But there are challenges brought by OSS too. The pace of change is electric, and it can be hard to keep up. The software packages that a large project depends on might update too frequently to keep a clear picture of what is happening. And those packages, in turn, can change their dependencies to provide new features or fix bugs. Security problems and other issues can arise unexpectedly in your project as a result, and the scale of the problem can make it all difficult to manage. Even a modest OSS project might depend on hundreds of packages.

There are tools to help, of course: vulnerability scanners and dependency audits that can help identify when a package is exposed to a vulnerability. But it can still be difficult to visualize the big picture, to understand what you depend on, and what that implies.

Open Source Insights provides a visualization of a project’s dependencies and their properties. Our exploratory website can be used to get an overview of how a particular software package is put together. Among other features, it provides interactive tools to visualize and analyze full, transitive dependency graphs. It also has a comparison tool to highlight how different versions of a package might affect your dependencies, perhaps by changing their own dependencies, adding licensing requirements, or fixing security problems.

Dependency graph for express 4.17.1

Open Source Insights shows you all this information about a package without asking you to install the package first. You can see instantly what installing a package—or an updated version—might mean for your project and how popular it is, find links to source code and other information, and then decide whether it should be installed. Insights also helps you see the importance of your project by showing the projects that depend on it: its dependents. Even a small project is important if a large number of other projects depend on it, either directly or through transitive dependencies.

Open Source Insights continuously scans millions of projects in the open source software ecosystem, gathering information about packages, including licensing, ownership, security issues, and other metadata such as download counts, popularity signals, and OpenSSF Scorecards. It then constructs a full dependency graph—transitively tracking dependencies, dependencies' dependencies, and so on—and incorporates the metadata, then publishes it so you can see how it all might affect your software. And the information it provides is continually updated.
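Conceptually, assembling the full transitive graph is a breadth-first traversal over each package’s direct dependencies. The sketch below illustrates the idea with a tiny hard-coded registry standing in for the service’s actual data pipeline.

```python
from collections import deque

def direct_dependencies(package):
    """Hypothetical lookup of a package's direct dependencies (a real system would query a registry)."""
    registry = {"express": ["body-parser", "cookie"], "body-parser": ["bytes"], "cookie": [], "bytes": []}
    return registry.get(package, [])

def transitive_dependencies(root):
    """Breadth-first traversal collecting every package reachable from `root`, plus the edges."""
    seen, queue, edges = {root}, deque([root]), []
    while queue:
        pkg = queue.popleft()
        for dep in direct_dependencies(pkg):
            edges.append((pkg, dep))
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen - {root}, edges

deps, edges = transitive_dependencies("express")
print(sorted(deps))  # ['body-parser', 'bytes', 'cookie']
```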

Filtered dependency graph showing how eslint 7.27.0 depends on chalk 2.4.2 and 4.1.1

This information can help visualize how software is put together, whether an update is worth doing, or how to fix a problem.

Today, Open Source Insights supports npm, Maven, Go modules, and Cargo. While we work on adding additional packaging systems, we want to hear from you: How could this data fit into your development workflow? What would make it more useful? You can reach the team at deps.dev to share your thoughts; we’ll be collecting feedback over the coming months and look forward to hearing your ideas on how best to improve supply-chain security.

Visit our website at deps.dev to try it out.

From the Google Cloud team: What this means for GCP’s open cloud

For users of open source software, this may be the first time you’re seeing dependency and vulnerability information in an organized and accessible way. If you’re using a managed service based on open source, it’s important to remember that you may not be affected by all vulnerabilities listed. Your provider may have taken steps to harden the products you use, and when a new vulnerability is disclosed, your provider may take responsibility for patching this on your behalf.

Google Cloud follows both these steps to help users get the benefits of open cloud while prioritizing security. Multiple layers of hardening create defense-in-depth, which helps protect services like Google Kubernetes Engine (GKE), Cloud Run and Cloud Functions from a container escape vulnerability. For components that are the user’s responsibility, we’re constantly rolling out new services—like GKE Autopilot—that automate these responsibilities.

We’re committed to protecting our customers, both through our patch rewards program and the recently launched cyber insurance partnership, the Risk Protection Program, which moves from shared responsibility to shared fate. We look forward to bringing our customers new information on their open source dependencies.

By Andrew Gerrand, Michael Goddard, Rob Pike and Nicky Ringland of the Open Source Insights Team

OpenTelemetry’s First Release Candidates

OpenTelemetry has hit another milestone with the tracing specification reaching release candidate status.

With the specification now ready to go, expect to see tracing release candidates of the official APIs and SDKs over the next few weeks, along with updated exporters for Cloud Trace. In the coming months the same will follow for the metrics specification, then metrics release candidates of the APIs, SDKs, and Cloud Monitoring exporters, and finally the project’s general availability. At that point we’ll switch our default application metrics and distributed tracing instrumentation from OpenCensus to OpenTelemetry.

This is exciting news for Google Cloud customers, as OpenTelemetry will enable even better observability experiences, whether with Cloud Monitoring and Cloud Trace or with the third-party monitoring and operations tools of your choice.

Originally posted on the OpenTelemetry blog.


As we’ve discussed in past announcements, we’re hard at work building OpenTelemetry’s first GA quality release. Today marks another milestone in this journey, with the freezing and first release candidate of the tracing specification.

Tracing Spec Release Candidate

The tracing specification is now considered to be a release candidate (RC) and is frozen, and the OpenTelemetry APIs and SDKs have a stable specification to build their own release candidates against. This means:
  • API, SDK, and Collector release candidates will appear within the next few weeks.
  • No breaking spec changes are allowed between now and the final GA specification, beyond any showstopper (P1) issues that are revealed in the RC period. We don’t expect any of these to appear, but the purpose of the RC period is for us to validate that we have a GA-worthy spec.
  • Some non-breaking changes will be allowed during the RC period. Most of these are clarifications of existing behaviour or are pure editorial updates.
The release candidate sections of the specification include all tracing-related dependencies, specifically the following sections: Trace, Baggage, Resource, Context Propagation, Environment Variables, and Exporters (for traces). You can view the progress of each OpenTelemetry component’s implementation in the project status matrix.

What’s Coming Next?

Achieving a release candidate of the tracing specification has been the top priority of OpenTelemetry since releasing our beta in March. With this completed, our focus now shifts to tracing release candidates of the APIs, SDKs, Collector, and auto instrumentation components, and producing a release candidate of the metrics specification.

RC Tracing Implementations

Most OpenTelemetry APIs and SDKs are close to completing their tracing RC implementations, and we expect the first wave of these to arrive within the next two weeks. Contributors who are looking to provide instrumentation (for various web frameworks, storage clients, etc.) can start building against release candidate APIs once they arrive. While the APIs may change in response to issues discovered during RC usage and testing (which will result in multiple pre-GA release candidates for these components), these will be extremely constrained.
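As a rough illustration of what instrumenting against the Python API and SDK looks like, here is a minimal tracing sketch. Module paths and processor names have shifted between pre-GA releases, so treat this as indicative rather than exact.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Configure the SDK once at application startup: a provider plus an exporter pipeline.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Instrumentation code talks only to the API.
tracer = trace.get_tracer("example.instrumentation")

with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("http.method", "GET")
    with tracer.start_as_current_span("query-database"):
        pass  # the actual work goes here
```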

Several SDKs will have two waves of release candidate milestones: the first will contain functionality from the tracing and context propagation sections of the specification, and the second will include release candidate implementations for baggage, exporters, resources, and environment variables.

Metrics

In parallel to the tracing RC component releases, we will apply the focus that we’ve had on tracing to the metrics specification. Starting this week, we will categorize which work items are required for GA, which can be optionally allowed in GA (non-breaking), and which will be shifted to post-GA. After completing this, we will track our burndown progress, and lock the metrics specification and publish a metrics specification release candidate once all P1 items are complete. Shortly after this, the APIs, SDKs, Collector, and other components will publish release candidates with RC-quality tracing and metrics functionality.

Productionization and GA Readiness Work

Once the metrics specification, SDKs, Collector, and other components reach release candidate status, we will focus on productionization tasks like writing documentation, producing a post-GA versioning strategy, building additional automated tests, etc. Once we are satisfied with each component’s adoptability and reliability, we will announce their general availability.

Overall Timeline

  1. Components (APIs, SDKs, Collector, auto instrumentation, etc.) issue release candidates with RC-quality tracing functionality.
  2. The metrics section of the specification achieves RC quality and is frozen.
  3. Components issue release candidates with RC-quality tracing and metrics functionality.
  4. Once we are satisfied with our metrics + tracing release candidates, OpenTelemetry goes GA.
  5. Logging enters beta, then issues an RC specification, followed by RC-quality logging functionality in each component, followed by a GA for logging.
We will have a better understanding of our GA release timeline in the coming weeks once outstanding work on the metrics specification is fully accounted for.

Tracking a Language’s Progress

As mentioned above, you can view the progress of a particular component (API, SDK, etc.) in the project status matrix. Each component’s implementation has its own timeline, though a core set (the JavaScript, Java, Go, Python, and .NET APIs + SDKs, the Collector, and Java auto instrumentation) are all tracking well. Each component has its own GA burndown board.

FAQ

I want to use OpenTelemetry on my production services; what’s the impact of today’s announcement?

SDKs with release candidate quality tracing support will be available in a few weeks. Release candidates are not recommended for critical production services; however, they are functional and are intended to offer APIs that are compatible with their upcoming GA counterparts.

I want to write instrumentation for OpenTelemetry; what’s the impact of today’s announcement?

APIs with release candidate quality tracing support will be available shortly (prior to the SDKs). You can bind against these to produce traces that will be picked up by the OpenTelemetry SDKs or any other implementations that implement the OpenTelemetry APIs.

When will OpenTelemetry offer drop-in replacements for OpenCensus and OpenTracing?

Work is currently underway on bridge APIs that allow OpenTelemetry SDKs to seamlessly replace OpenCensus libraries or OpenTracing implementations. While the delivery date of this functionality is not tied to OpenTelemetry’s GA goals, we expect this to arrive between each API + SDK’s release candidate and GA milestones.

Wrapping Up

Producing a specification release candidate is an important milestone for the OpenTelemetry community, and it took significant effort on the part of our contributors to make this happen. We’d like to thank every person and every organization that was a part of this release, and to recognize that their contributions are laying the groundwork for the project's long term success.

If you haven’t been a part of the OpenTelemetry community but would like to join, now is the perfect time! OpenTelemetry is now in the top three CNCF projects by weekly and cumulative commits, and no matter your level of commitment (ha!) to the project, contributions are always welcome. If you have a particular area that you’re interested in (for example, the Python API + SDK), the best way to get involved is to join the relevant weekly SIG meetings or interact with other contributors on Gitter.

By Morgan McLean, Google Cloud

Recreating historical streetscapes using deep learning and crowdsourcing

For many, gazing at an old photo of a city can evoke feelings of both nostalgia and wonder. We have Google Street View for places in the present day, but what about places in the past? What was it like to walk through Manhattan in the 1940s? To create a rewarding “time travel” experience for both research and entertainment purposes, Google Research is launching Kartta Labs, an open source, scalable system on Google Cloud and Kubernetes that tackles the difficult problem of reconstructing what cities looked like in the past from scarce historical maps and photos.

Kartta Labs consists of three main parts:
  • A temporal map server, which shows how maps change over time;
  • A crowdsourcing platform, which allows users to upload historical maps of cities, georectify them (i.e., match them to real-world coordinates), and vectorize them;
  • And an upcoming 3D experience platform, which runs on top of the maps, creating the 3D experience by using deep learning to reconstruct buildings in 3D from limited historical images and map data.

Maps & Crowdsourcing

Kartta Labs is a growing suite of open source tools that work together to create a map server with a time dimension, allowing users to populate the service with historically accurate data.
Animated GIF of the Editor in use

Warper

The entry point to crowdsourcing is Warper, an open source web app based on MapWarper that allows users to upload historical images of maps and georectify them by finding control points on the historical map and corresponding points on a base map.

Once a user uploads a scanned historical map, Warper makes a best guess of the map’s geolocation by extracting textual information from the map. This initial guess is used to place the map roughly in its location and allow the user to georeference the map pixels by placing pairs of control points on the historical map and a reference map. Given the georeferenced points, the application warps the image such that it aligns well with the reference map.

Warper runs as a Ruby on Rails application using a number of open source geospatial libraries and technologies, including but not limited to PostGIS and GDAL. The resulting maps can be exported in PNG, GeoTIFF, and other open formats. Warper also runs a raster tiles server that serves each georectified map at a tile URL. This raster tile server is used to load the georectified map as a background in the Editor application that is described next.
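To make the warping step concrete, here is a minimal sketch of georeferencing a scanned map with GDAL’s Python bindings, roughly the kind of operation Warper performs. The file names and control-point coordinates are made up for illustration.

```python
from osgeo import gdal

# Ground control points: real-world lon/lat paired with pixel/line positions on the scan.
gcps = [
    gdal.GCP(-74.0060, 40.7128, 0, 1200, 800),   # lon, lat, elevation, pixel, line
    gdal.GCP(-73.9857, 40.7484, 0, 2100, 350),
    gdal.GCP(-74.0445, 40.6892, 0, 600, 1500),
]

# Attach the GCPs to the scanned image, then warp it into a georeferenced GeoTIFF.
src = gdal.Open("scanned_map.png")
with_gcps = gdal.Translate("/vsimem/with_gcps.tif", src, GCPs=gcps, outputSRS="EPSG:4326")
gdal.Warp("georectified_map.tif", with_gcps, dstSRS="EPSG:4326", resampleAlg="bilinear")
```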

Editor

Editor is an open source web application which is a customized version of the OpenStreetMap editor; customizations include support for time dimension and integration with the other tools in the Kartta Labs suite. Editor allows users to load the georectified historical maps and trace their geographic features (e.g., building footprints, roads, etc.). This traced data is stored in vector format.

Extracted geometries in vector format, as well as metadata (e.g., address, name, and start or end dates), are stored in a geospatial database that can be queried, edited, styled, and rendered into new maps.

Kartta

Finally, the temporal map front end, Kartta (based on Tegola), visualizes the vector tiles, allowing users to navigate historical maps in space and time. Kartta works like any familiar map application (such as Google Maps), but also has a time slider so the user can choose the year at which they want to see the map. By moving the time slider, the user is able to see how features on the map, such as buildings and roads, change over time.
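Under the hood, the time slider amounts to keeping only the features whose validity interval contains the selected year; a simplified sketch of that filtering (with made-up feature records) is shown below.

```python
def features_visible_in_year(features, year):
    """Keep features whose start/end dates (when present) bracket the selected year."""
    visible = []
    for feature in features:
        start = feature.get("start_date", float("-inf"))
        end = feature.get("end_date", float("inf"))
        if start <= year <= end:
            visible.append(feature)
    return visible

buildings = [
    {"name": "Old warehouse", "start_date": 1880, "end_date": 1932},
    {"name": "Row houses", "start_date": 1901},  # still standing: no end date
]
print([b["name"] for b in features_visible_in_year(buildings, 1910)])  # both appear
```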

3D Experience

To actually create the “time traveling” 3D experience, the forthcoming 3D Models module aims to reconstruct the detailed full 3D structures of historical buildings. The module will associate images with maps data, organize these 3D models properly in one repository, and render them on the historical maps with a time dimension.

Preliminary Results

Figure 2 – Bird’s eye view of 3D-reconstructed Chelsea, Manhattan with a time slider
Figure 3 – Street level view of 3D-reconstructed Chelsea, Manhattan

Conclusion

We developed the tools outlined above to facilitate crowdsourcing and tackle the main challenge of insufficient historical data. We hope Kartta Labs acts as a nexus for an active community of developers, map enthusiasts, and casual users that not only utilizes our historical datasets and open source code, but actively contributes to both. The launch of our implementation of the Kartta Labs suite is imminent—keep an eye out on the Google AI blog for that announcement!

By Raimondas Kiveris – Google Research

W3C Trace Context Specification: What it Means for You

Since the first days of Google Cloud Platform (GCP), Google has been at the forefront of making your applications more observable. Beyond Stackdriver, our most visible impact in this space is OpenTelemetry, which we initiated in 2017 (as OpenCensus) and which has grown into a huge community that includes the majority of APM / monitoring vendors and cloud platforms.

While OpenTelemetry allows developers to easily capture distributed traces and metrics from their own services, there’s also a need to trace requests as they propagate through components that developers don’t directly control, like managed services, load balancers, network hardware, etc. To solve this we co-defined a prototype HTTP header that these components can rely on, gathered partners, and moved the work into the W3C.

This work is now complete, and the W3C Trace Context format is now an official standard. Once implemented in GCP, this will make our services even easier to manage, both with Stackdriver and with other third-party distributed tracing tools. We explain more in the official post on the W3C blog, which I’ve copied below:

The W3C Distributed Tracing working group has moved the Trace Context specification to the next maturity level. The specification is already being adopted and implemented by many platforms and SDKs. This article describes the Trace Context specification and how it improves troubleshooting and monitoring of modern distributed apps.

The W3C Trace Context specification defines the format for propagating distributed tracing context between services. Distributed tracing makes it easy for developers to find the causes of issues in highly distributed microservices applications by tracking how a single interaction was processed across multiple services. Each step of a trace is correlated through an ID that is passed between services, and W3C Trace Context now defines a standard for these context propagation headers.
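As an illustration of what the propagated context looks like on the wire, the snippet below builds a version-00 traceparent header in the version-traceid-parentid-flags form defined by the specification.

```python
import secrets

def make_traceparent(sampled=True):
    """Build a W3C traceparent header: 2-hex version, 32-hex trace-id, 16-hex parent-id, 2-hex flags."""
    trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex characters
    parent_id = secrets.token_hex(8)   # 8 random bytes -> 16 hex characters
    flags = "01" if sampled else "00"  # the sampled bit
    return f"00-{trace_id}-{parent_id}-{flags}"

print(make_traceparent())
# e.g. 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
```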

Until now, different tracing systems have defined their own headers. Examples include Zipkin’s B3 format and X-Google-Cloud-Trace. Adopting a common context propagation format has been long desired by developers, APM vendors, and cloud platform hosts, as compatibility provides numerous benefits:
  • Web and RPC frameworks that use this standard to provide context propagation out of the box will also offer cross-service log correlation, even for developers who haven’t set up distributed tracing.
  • API producers can record the trace IDs of requests from API consumers and provide additional spans or metadata to their customers for a given traced request. Producers can also correlate customer trace IDs to internal traces when debugging technical issues raised by consumers.
  • Networking infrastructure (proxies, load balancers, routers, etc.) can both ensure that context propagation headers are not removed from requests passing through them, and can record spans or logs for a given trace, without having to support multiple vendor-specific formats. Potential examples of these include router appliances, cloud load balancers, and sidecar proxies like Envoy.
  • Instrumentation can be further decoupled from a developer’s choice of APM vendor. For example, using both OpenTelemetry and a given vendor’s agents, a developer can instrument different services in an application, and traces will flow through the system and be processed correctly by the vendor’s backend.
  • Web browsers and other clients can use these identifiers to correlate their telemetry with traces collected from backend services. This functionality is currently being defined.
To address this need, a group of cloud providers, open source contributors, and APM vendors started defining a standard HTTP context propagation header that would replace their homegrown formats. This specification has been discussed and iterated on over the past two years, and the group working on it has grown significantly over that time. Sponsors include Google, Microsoft, Dynatrace, and New Relic (W3C members), and the group was officially moved into the W3C in 2018 for the work to proceed under the guidance of an official standards body and to spur even greater adoption.

TraceContext has since been adopted by OpenTelemetry (which enables it by default and also serves as the reference implementation), Azure services, Dynatrace, Elastic, Google Cloud Platform, Lightstep, and New Relic. We are tracking adoption in this list.

This first phase of work has focused on HTTP, as it is commonly used and has no built-in affordances for trace context propagation (gRPC and some newer RPC systems do). The same group of committee members are also working to define trace context propagation in other formats, starting with AMQP and MQTT for IoT; other upcoming topics include context propagation from clients and web browsers.

By Morgan McLean, OpenTelemetry + Stackdriver

A new issue tracker for G Suite developers

Originally Posted on the G Suite Developers Blog
Posted by Ryan Roth, Developer Programs Engineer & Wesley Chun, Developer Advocate, G Suite

You may have read recently that the Google Cloud Platform team upgraded to Issue Tracker, the same system that Google uses internally. This allows for improved collaboration between all of us and all of you. Issues you file will have better exposure internally, and you get improved transparency in terms of seeing the issues we're actively working on. Starting today, G Suite developers will also have a new issue tracker to which we've already migrated existing issues from previous systems.

Whether you've found a bug or wish to submit a feature request, the new issue tracker is here for you. Heads up: you need to be logged in with your Google credentials to view or update issues in the tracker.



The new issue tracker for G Suite developers.

Each G Suite API and developer tool has its own "component" number that you can search. For your convenience, below is the entire list. You may browse for issues relevant to the Google APIs that you're using, or click on the convenience links to report an issue or request a new/missing feature:
To get started, take a look at the documentation pages, as well as the FAQ. For more details, be sure to check out the Google Cloud Platform announcement, too. We look forward to working more closely with all of you soon!