
Launchpad Accelerator expands with regional programs

Posted by Josh Yellin - Global Lead, Launchpad Accelerator

For the past five years, Launchpad has been connecting startups from around the world with the best of Google - its people, network, methodologies, and technologies. We have worked with market leaders including BrainQ (Israel), Flutterwave (Nigeria), Jumo (South Africa), Nestaway (India), and Nubank (Brazil), empowering their sustainable growth through high-touch programs in AI/ML implementation, leadership best practices, and access to global capital.

In order to better support startup ecosystems in key regions, we're thrilled to expand Launchpad Accelerator by introducing new, standalone initiatives. Building on Google VP Yossi Matias's initial engagement with startups in Tel Aviv, we are introducing accelerators in Tel Aviv, Israel; Lagos, Nigeria; and São Paulo, Brazil. These regional accelerators are representative of our long-term commitment to support and learn from startup ecosystems around the world.

In Tel Aviv, we are working with Machine Learning startups. Our first class launched in March with four ML startups focused on healthcare technology solutions. If you are interested in joining our next class, you can find more information here.

In Lagos, we are working with seed-stage companies solving a range of market-related problems. Our first class, which also launched in March, includes 12 startups working across e-commerce, education, and supply chain. Learn more about the Launchpad Africa program here.

In São Paulo, we will be working with growth-stage companies that operate across a number of sectors. Applications for Class One are currently open, and the class will begin in May 2018. If you wish to apply, please do so here.

We will continue to operate a program in San Francisco for top growth-stage global startups. With the addition of accelerators in key regions, we are able to design more customized programs, develop stronger relationships with our partners on the ground, and support the growth of local startup ecosystems.


Stay updated on developments and future opportunities by subscribing to the Google Developers newsletter.

Google Fonts launches Korean support

Posted by the Google Fonts team

The Google Fonts catalog now includes Korean web fonts for designers and developers working with the nation's unique Hangul writing system. While some of the fonts themselves have been available in beta for years now, we introduced official support for Korean earlier this month after devising a more efficient means of serving Chinese, Japanese, and Korean (CJK) font files, which have very large character sets and file sizes.

We've always wanted to offer CJK fonts, and over the years we've worked on foundational technologies such as WOFF2 and CSS3 unicode-range in order to make this possible. Last year, Google engineers experimented with different approaches to slicing fonts into smaller subsets, and found that certain techniques had very good results that enabled this launch.

The Hangul script is distinct from Chinese Hanzi and Japanese Kanji characters. In some ways, it shares greater similarity with Western writing systems because it is constructed from a phonetic alphabet. Whereas the visual features of Hanzi and Kanji logograms give no direct indication of their pronunciation, Hangul is a phonographic script in which written words are built from their constituent sounds.

Hangul starts with a set of 19 consonants and 21 vowels (1). When writing a sentence, individual characters are first identified (2), then combined into blocks that represent complete words (3), and finally conjugated and arranged in grammatical form to create a sentence (4).

Despite the elegant logic underlying the Hangul script, Korean fonts present the same basic difficulty for developers that Chinese and Japanese fonts do. Hangul characters may be constructed from just 40 basic elements, but the final forms add up quickly: a Korean font eventually requires over ten thousand characters, meaning the files are too large for most users to download quickly enough to appear instantly upon visiting a website. A typical full Korean font hovers around 4 MB, whereas even fairly extensive Latin fonts rarely exceed 250 KB.
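
For the curious, the arithmetic behind that figure is straightforward. Here is a quick back-of-the-envelope sketch in Python (11,172 is the number of precomposed Hangul syllables defined in Unicode; the variable names are ours):

initials = 19   # initial consonants (choseong)
vowels = 21     # vowels (jungseong)
finals = 27     # final consonants (jongseong), plus the option of no final at all
print(initials * vowels * (finals + 1))  # 11172 possible syllable blocks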

During the time that Korean fonts were only available on the Google Fonts Early Access system, we were surprised that many web developers were willing to accept the latency implications of serving full font files to their users. Still, in order to graduate these fonts out of our Early Access system, we needed to devise a way for them to work for a wider cross-section of web users, especially those with relatively slow connections.

The Google Fonts API offers larger font files as several subsets, such as "latin" and "cyrillic." When the service launched, these subsets had to be selected by developers. For a few years, we've enabled the 'unicode-range' property of CSS3 for browsers that support it. This means when a large font file is sliced into subsets, the ranges of the Unicode characters in each subset are declared as part of the @font-face declaration. This allows browsers to fetch only a particular subset when those characters appear in a web page.

One of the key benefits of the Google Fonts API is cross-site caching, and this benefit continues to apply to the delivery of font subsets through unicode-range. The font files we serve are used by many domains, so after you visit a site and your browser downloads its fonts, the files are saved in the browser's cache. Then the next time you visit another site that uses the same font files, there's no need for your browser to download them again. This latency benefit only increases over time, and since the many subsets of large font files are cached the same way, you'll see the same cross-site benefits with our CJK fonts.

Over the years we have worked with the W3C and browser developers to ensure that unicode-range would become well supported. Now that Chrome, Firefox, Safari, and Edge have shipped this feature, there is enough support to enable a new means of delivering Korean web fonts that works seamlessly for these browsers.

Support for the unicode-range feature has become widespread, according to caniuse.com

In order to maximize efficiency, we wanted to know which characters it made the most sense to cluster together in a subset. We devised a slicing strategy by analyzing text on the Korean-language web to extract patterns of Unicode characters, building topic models of which ones tend to appear together on the same page.

As we evaluated different slicing strategies to decide which Korean characters to include in each subset, our goal was to minimize both the number of subsets and the number of requests. If we sliced the script into 1,000 arbitrary subsets, without factoring in usage and commonality, we would get way too many HTTP requests. We built a testing framework to see how a variety of strategies worked with real-world traffic using our Early Access system, and we launched Korean fonts in our directory with the most efficient one we've found so far.

Strategy 1 is no slicing. The best strategy had 20 times fewer connection requests than the worst, which simply divides the font into equal parts without accounting for patterns of language use.

Moving forward, we think we can do even better. With our scale, a small improvement can justify a lot of effort. By continuing to use our testing framework on different approaches to slicing, we can tune our serving to be as efficient as possible. For the web developers who use our API, and all end users, these kinds of changes are totally transparent and don't require any further work on your part. For example, when WOFF2 came out in 2015, every user with a browser supporting WOFF2 got a 25% faster experience. We transparently make things better for all users on an ongoing basis, and there's enormous potential for future improvements in the delivery of CJK fonts.

This launch began with five Korean fonts originally designed by the leading Korean type foundry Sandoll for Naver. Since the initial launch, we have grown the collection to 23 Korean families, and to showcase them we commissioned a digital specimen website from Math Practice, a digital design studio in New York City. Here you can see beautiful Korean typography in action—and with fast page loads made possible by our new slicing technique.

Thanks to SooYoung Jang, Irin Kim, E Roon Kang, Wonyoung So, Guhong Min, Hannah Son, Aaron Bell, Marc Foley, and all the typeface designers involved in growing the Korean fonts collection and developing the minisite.

Transitioning Google URL Shortener to Firebase Dynamic Links

Posted by Michael Hermanto, Software Engineer, Firebase

We launched the Google URL Shortener back in 2009 as a way to help people more easily share links and measure traffic online. Since then, many popular URL shortening services have emerged and the ways people find content on the Internet have also changed dramatically, from primarily desktop webpages to apps, mobile devices, home assistants, and more.

To refocus our efforts, we're turning down support for goo.gl over the coming weeks and replacing it with Firebase Dynamic Links (FDL). FDLs are smart URLs that allow you to send existing and potential users to any location within an iOS, Android or web app. We're excited to grow and improve the product going forward. While most features of goo.gl will eventually sunset, all existing links will continue to redirect to the intended destination.

For consumers

Starting April 13, 2018, anonymous users and users who have never created short links before today will not be able to create new short links via the goo.gl console. If you are looking to create new short links, we recommend you use Firebase Dynamic Links or check out popular services like Bitly and Ow.ly as an alternative.

If you have existing goo.gl short links, you can continue to use all features of goo.gl console for a period of one year, until March 30, 2019, when we will discontinue the console. You can manage all your short links and their analytics through the goo.gl console during this period.

After March 30, 2019, all links will continue to redirect to the intended destination. Your existing short links will not be migrated to the Firebase console, however, you will be able to export your link information from the goo.gl console.

For developers

Starting May 30, 2018, only projects that have accessed URL Shortener APIs before today can create short links. To create new short links, we recommend FDL APIs. FDL short links will automatically detect the user's platform and send the user to either the web or your app, as appropriate.
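
For illustration, here is a minimal sketch (in Python) of creating a short link through the FDL REST API. The page.link domain, target URL, and API key are placeholders, and the exact request shape should be confirmed against the Firebase Dynamic Links documentation:

import json
import urllib.request

# Placeholders: substitute your project's Web API key and Dynamic Links domain.
api_key = "YOUR_WEB_API_KEY"
body = {"longDynamicLink": "https://example.page.link/?link=https://www.example.com/"}

req = urllib.request.Request(
    "https://firebasedynamiclinks.googleapis.com/v1/shortLinks?key=" + api_key,
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"})
print(json.load(urllib.request.urlopen(req)))  # the response includes the generated short link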

If you are already calling URL Shortener APIs to manage goo.gl short links, you can continue to use them for a period of one year, until March 30, 2019, when we will discontinue the APIs.

As it is for consumers, all links will continue to redirect to the intended destination after March 30, 2019. However, existing short links will not be migrated to the Firebase console/API.

URL Shortener has been a great tool that we're proud to have built. As we look towards the future, we're excited about the possibilities of Firebase Dynamic Links, particularly when it comes to dynamic platform detection and links that survive the app installation process. We hope you are too!

Announcing TensorRT integration with TensorFlow 1.7

Posted by Laurence Moroney (Google) and Siddarth Sharma (NVIDIA)

Today we are announcing integration of NVIDIA® TensorRT™ and TensorFlow. TensorRT is a library that optimizes deep learning models for inference and creates a runtime for deployment on GPUs in production environments. It brings a number of FP16 and INT8 optimizations to TensorFlow and automatically selects platform-specific kernels to maximize throughput and minimize latency during inference on GPUs. We are excited about the new integrated workflow as it simplifies the path to using TensorRT from within TensorFlow with world-class performance. In our tests, we found that ResNet-50 performed 8x faster under 7 ms latency with the TensorFlow-TensorRT integration using NVIDIA Volta Tensor Cores, as compared with running TensorFlow only.

Sub-Graph Optimizations within TensorFlow

Now in TensorFlow 1.7, TensorRT optimizes compatible sub-graphs and lets TensorFlow execute the rest. This approach makes it possible to rapidly develop models with the extensive TensorFlow feature set while getting powerful optimizations from TensorRT when performing inference. If you were already using TensorRT with TensorFlow models, you know that certain unsupported TensorFlow layers had to be imported manually, which in some cases could be time consuming.

From a workflow perspective, you need to ask TensorRT to optimize TensorFlow's sub-graphs and replace each subgraph with a TensorRT optimized node. The output of this step is a frozen graph that can then be used in TensorFlow as before.

During inference, TensorFlow executes the graph for all supported areas, and calls TensorRT to execute the TensorRT-optimized nodes. As an example, suppose your graph has three segments: A, B, and C. Segment B is optimized by TensorRT and replaced by a single node. During inference, TensorFlow executes A, then calls TensorRT to execute B, and then TensorFlow executes C.

The newly added TensorFlow API to optimize TensorRT takes the frozen TensorFlow graph, applies optimizations to sub-graphs and sends back to TensorFlow a TensorRT inference graph with optimizations applied. See the code below as an example.

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt

# Reserve memory for TensorRT inference engine
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=number_between_0_and_1)
...
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,
    outputs=output_node_name,
    max_batch_size=batch_size,
    max_workspace_size_bytes=workspace_size,
    precision_mode=precision)  # Get optimized graph

The per_process_gpu_memory_fraction parameter defines the fraction of GPU memory that TensorFlow is allowed to use, with the remainder available for TensorRT. This parameter should be set the first time the TensorFlow-TensorRT process is started. As an example, a value of 0.67 would allocate 67% of GPU memory to TensorFlow, leaving the remaining 33% for TensorRT engines.

The create_inference_graph function takes a frozen TensorFlow graph and returns an optimized graph with TensorRT nodes. Let's look at the function's parameters:

  • input_graph_def: frozen TensorFlow graph
  • outputs: list of strings with names of output nodes e.g. ["resnet_v1_50/predictions/Reshape_1"]
  • max_batch_size: integer, size of input batch e.g. 16
  • max_workspace_size_bytes: integer, maximum GPU memory size available for TensorRT
  • precision_mode: string, allowed values "FP32", "FP16" or "INT8"

As an example, if the GPU has 12GB memory, in order to allocate ~4GB for TensorRT engines, set the per_process_gpu_memory_fraction parameter to (12 - 4) / 12 ≈ 0.67 and the max_workspace_size_bytes parameter to 4000000000.
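
As a rough sketch of that arithmetic (illustrative values only, not part of the API):

# Illustrative only: splitting a 12 GB GPU so that ~4 GB stays free for TensorRT engines
total_gpu_memory_gb = 12
tensorrt_workspace_gb = 4
per_process_gpu_memory_fraction = (total_gpu_memory_gb - tensorrt_workspace_gb) / total_gpu_memory_gb  # ~0.67
max_workspace_size_bytes = tensorrt_workspace_gb * 1000 ** 3  # 4000000000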

Let's apply the new API to ResNet-50 and see what the optimized model looks like in TensorBoard. The complete code to run the example is available here. The image on the left is ResNet-50 without TensorRT optimizations, and the image on the right is the graph after optimization. In this case, most of the graph gets optimized by TensorRT and replaced by a single node (highlighted).

Optimized INT8 Inference performance

TensorRT provides capabilities to take models trained in single (FP32) and half (FP16) precision and convert them for deployment with INT8 quantizations at reduced precision with minimal accuracy loss. INT8 models compute faster and place lower requirements on bandwidth but present a challenge in representing weights and activations of neural networks because of the reduced dynamic range available.

Format   Dynamic Range                Minimum Positive Value
FP32     -3.4×10^38 ~ +3.4×10^38      1.4×10^-45
FP16     -65504 ~ +65504              5.96×10^-8
INT8     -128 ~ +127                  1

To address this, TensorRT uses a calibration process that minimizes the information loss when approximating the FP32 network with a limited 8-bit integer representation. With the new integration, after optimizing the TensorFlow graph with TensorRT, you can pass the graph to TensorRT for calibration as below.

trt_graph = trt.calib_graph_to_infer_graph(calibGraph)

The rest of the inference workflow remains unchanged from above. The output of this step is a frozen graph that is executed by TensorFlow as described earlier.
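
Putting the pieces together, here is a hedged sketch of the INT8 flow described above. The run_calibration helper is a placeholder for your own inference loop over representative calibration data:

# Step 1: build a calibration graph by requesting INT8 precision
calib_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph_def,
    outputs=output_node_name,
    max_batch_size=batch_size,
    max_workspace_size_bytes=workspace_size,
    precision_mode="INT8")
# Step 2: run inference on representative data to collect calibration statistics (placeholder helper)
run_calibration(calib_graph, calibration_data)
# Step 3: convert the calibrated graph into the final INT8 inference graph
trt_graph = trt.calib_graph_to_infer_graph(calib_graph)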

Automatically use Tensor Cores on NVIDIA Volta GPUs

TensorRT runs half-precision TensorFlow models on Tensor Cores in Volta GPUs for inference. Tensor Cores provide 8x more throughput than single-precision math pipelines. Compared to higher-precision FP32 or FP64, half-precision (also known as FP16) data reduces the memory usage of the neural network. This allows the training and deployment of larger networks, and FP16 data transfers take less time than FP32 or FP64 transfers.

Each Tensor Core performs D = A x B + C, where A, B, C and D are matrices. A and B are half-precision 4x4 matrices, whereas D and C can be either half or single precision 4x4 matrices. The peak performance of Tensor Cores on the V100 is about an order of magnitude (10x) faster than double precision (FP64) and about 4 times faster than single precision (FP32).
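
To make the operation concrete, here is a small NumPy sketch of the mixed-precision multiply-accumulate that a Tensor Core performs in hardware (this runs on the CPU and is purely illustrative):

import numpy as np

# A and B are half-precision 4x4 matrices; C and D use single precision here
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)
D = A.astype(np.float32) @ B.astype(np.float32) + C  # D = A x B + C, accumulated in FP32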

Availability

We are excited about this release and will continue to work closely with NVIDIA to enhance this integration. We expect the new solution to ensure the highest performance possible while maintaining the ease and flexibility of TensorFlow. And as TensorRT supports more networks, you will automatically benefit from the updates without any changes to your code.

To get the new solution, you can use the standard pip install process once TensorFlow 1.7 is released:

pip install tensorflow-gpu==1.7.0

Till then, find detailed installation instructions here: https://github.com/tensorflow/tensorflow/tree/r1.7/tensorflow/contrib/tensorrt

Try it out and let us know what you think!

[Video] Hamilton app built in 3 months with Flutter reaches 1M+ installs

Originally posted on Flutter's Medium by Martin Aguinis

Hamilton and Posse, a design and development agency in New York, had three short months to develop and launch mobile apps for the hit Broadway show. How did they accomplish that? Using Flutter, Google's new mobile UI framework.

The apps have reached millions of users, with an outstanding half a million monthly active users, and have been featured on both the App Store and Google Play. They let fans enter the ticket lottery, buy merchandise, play trivia, take selfies with the #HamCam, read frequently updated news and interviews, and more.

Watch this video case study to see how Flutter continues to help apps like Hamilton succeed on iOS and Android. You can read more details about the development of this app on Posse's blog post.

Flutter is free and open source. Get started today at flutter.io. We can't wait to see what you build!

Funding 15,000 web and Android scholarships in Africa to provide employable developer skills

Posted by William Florance, Head, Economic Impact Programs

Africa's digital journey is rapidly gaining speed. According to recent data, over 73 million people in Africa came online for the first time in 2017 - that's more than the population of the UK! This means there are now about 435 million people on the continent using the Web to engage, connect, and access information online. That's a good thing! But with this growth comes an increased need to scale efforts to make the Web more relevant and useful to African users. This will require more skilled hands working with individuals and local businesses to develop content and platforms that will support Africa's digital growth.

In July 2017, Google's CEO, Sundar Pichai, announced a pledge to provide digital skills training to ten million people in Africa, and also to provide mobile developer training to 100,000 Africans. Today, in line with that commitment, we're excited to announce the launch of our new Africa Web and Android Scholarship program, aimed at providing 15,000 scholarships to developers resident in African countries.

Working in partnership with Udacity and Andela, we will be offering 15,000 2-month 'single course' scholarships and 500 6-month nanodegree scholarships to aspiring and professional developers across Africa. The training will be available online via the Udacity training website, and the Andela Learning Community will support the students (in Nigeria and Kenya) through mentorship, in-person meet-ups, and online communities.

In order to access the full Nanodegree scholarships, learners will have to complete the lessons and quizzes in the courses offered under the Udacity single course scholarships (also known as challenge courses), in addition to actively participating in and supporting classmates in the student community. We will be offering 10,000 scholarships to beginners (with little or no programming experience) and 5,000 to professional developers (with 1+ years of experience), spread across Android and mobile web development tracks. The 10,000 beginner scholarships will include Android beginner courses and a basic introduction to HTML & CSS, while the 5,000 intermediate scholarships include courses on Android fundamentals for intermediate developers and building offline web applications, respectively. Both tracks are taught in English through an online program on Udacity open to African residents. The top 500 students at the end of the challenge will earn a full Nanodegree scholarship to one of four Nanodegree programs in Android or web development.

The application period closes on April 24th. Interested, or want to learn more? Visit https://www.udacity.com/google-africa-scholarships?utm_source=devblog

Discontinuing support for JSON-RPC and Global HTTP Batch Endpoints

Posted by Dan O’Meara, Program Manager, Google Cloud Platform team

We have invested heavily in our API and service infrastructure to improve performance and security and to add features developers need to build world-class APIs. As we make changes we must address features that are no longer compatible with the latest architecture and business requirements.

The JSON-RPC protocol (http://www.jsonrpc.org/specification) and Global HTTP Batch (Javascript example) are two such features. Our support for these features was based on an architecture using a single shared proxy to receive requests for all APIs. As we move towards a more distributed, high performance architecture where requests go directly to the appropriate API server we can no longer support these global endpoints.

As a result, next year, on January 25, 2019, we will discontinue support for both of these features.

We know that these changes have customer impact and have worked to make the transition steps as clear as possible. Please see the guidance below which will help ease the transition.

What do you need to do?

Google API Client Libraries have been regenerated to no longer make requests to the global HTTP batch endpoint. Clients using these libraries must upgrade to the latest version. Clients not using the Google API Client Libraries and/or making custom calls to the JSON-RPC endpoint or HTTP batch endpoint will need to make the changes outlined below.

JSON-RPC

To identify whether you use JSON-RPC, you can check whether you send requests to https://www.googleapis.com/rpc or https://content.googleapis.com/rpc. If you do, you should migrate.

  1. If you are using client libraries (either the Google-published libraries or other libraries) that use the JSON-RPC endpoint, switch to client libraries that speak to the API's REST endpoint.

    Example code for JavaScript:

    Before

    // json-rpc request for the list method
    gapi.client.rpcRequest('zoo.animals.list', 'v2',
    {name:'giraffe'}).execute(x=>console.log(x))

    After

    // json-rest request for the list method
    gapi.client.zoo.animals.list({name:'giraffe'}).then(x=>console.log(x))

    OR

  2. If you are not using client libraries (i.e. making raw HTTP requests):
      1. Use the REST URLs, and
      2. Change how you form the request and parse the response.

    Example code

    Before

    // Request URL (JSON-RPC)
    POST https://content.googleapis.com/rpc?alt=json&key=xxx
    // Request Body (JSON-RPC)
    [{
    "jsonrpc":"2.0","id":"gapiRpc",
    "method":"zoo.animals.list",
    "apiVersion":"v2",
    "params":{"name":"giraffe"}
    }]

    After

    // Request URL (JSON-REST)
    GET https://content.googleapis.com/zoo/v2/animals?name=giraffe&key=xxx

HTTP batch

A batch request is homogeneous if the inner requests are addressed to the same API, even if addressed to different methods of that API. It is heterogeneous if the inner requests go to different APIs. Heterogeneous batching will not be supported after the turndown of the Global HTTP batch endpoint. Homogeneous batching will still be supported, but through API-specific batch endpoints.

  1. If you are currently forming heterogeneous batch requests:
    1. Change your client code to send only homogeneous batch requests.

    Example code

    This example demonstrates how to split a heterogeneous batch request for two APIs (urlshortener and zoo) into two homogeneous batch requests.

    Before

    // heterogeneous batch request example.

    // Notice that the outer batch request contains inner API requests
    // for two different APIs.

    // Request to urlshortener API
    request1 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});

    // Request to zoo API
    request2 = gapi.client.zoo.animals.list();

    // Request to urlshortener API
    request3 = gapi.client.urlshortener.url.get({"shortUrl": "https://goo.gl/XYFuPH"});

    // Request to zoo API
    request4 = gapi.client.zoo.animals.get({"name": "giraffe"});

    // Creating single heterogeneous batch request object
    heterogeneousBatchRequest = gapi.client.newBatch();
    // adding the 4 batch requests
    heterogeneousBatchRequest.add(request1);
    heterogeneousBatchRequest.add(request2);
    heterogeneousBatchRequest.add(request3);
    heterogeneousBatchRequest.add(request4);
    // print the heterogeneous batch request
    heterogeneousBatchRequest.then(x=>console.log(x));

    After

    // Split heterogeneous batch request into two homogenous batch requests

    // Request to urlshortener API
    request1 = gapi.client.urlshortener.url.get({"shortUrl": "http://goo.gl/fbsS"});

    // Request to zoo API
    request2 = gapi.client.zoo.animals.list();

    // Request to urlshortener API
    request3 = gapi.client.urlshortener.url.get({"shortUrl": "https://goo.gl/XYFuPH"});

    // Request to zoo API
    request4 = gapi.client.zoo.animals.get({"name": "giraffe"});
    // Creating homogenous batch request object for urlshortener
    homogenousBatchUrlshortener = gapi.client.newBatch();

    // Creating homogenous batch request object for zoo
    homogenousBatchZoo = gapi.client.newBatch();
    // adding the 2 batch requests for urlshortener
    homogenousBatchUrlshortener.add(request1);
    homogenousBatchUrlshortener.add(request3);

    // adding the 2 batch requests for zoo
    homogenousBatchZoo.add(request2);
    homogenousBatchZoo.add(request4);
    // execute the 2 homogenous batch requests and print the responses
    Promise.all([homogenousBatchUrlshortener, homogenousBatchZoo])
    .then(x=>console.log(x));

    OR

  2. If you are currently forming homogeneous batch requests:
    1. If you are using Google API Client Libraries, simply update to the latest versions.
    2. If you are using non-Google API client libraries or no client library (i.e. making raw HTTP requests), then:
      • Change the endpoint from www.googleapis.com/batch to www.googleapis.com/batch/<api>/<version>, or
      • Simply read the value of 'batchPath' from the API's discovery doc and use that value, as sketched below.
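
    For reference, here is a minimal sketch (in Python) of reading 'batchPath' from an API's discovery document. The 'zoo' service below is just the illustrative API used throughout this post, so the URL itself is a placeholder:

    import json
    import urllib.request

    # Fetch the discovery document for an API (illustrative 'zoo' service, version v2)
    url = "https://www.googleapis.com/discovery/v1/apis/zoo/v2/rest"
    doc = json.load(urllib.request.urlopen(url))
    print(doc["batchPath"])  # e.g. "batch/zoo/v2"; use this value as the batch endpoint path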

For help on migration, consult the API documentation or tag Stack Overflow questions with the 'google-api' tag.

Firebase Crashlytics graduates from beta

Originally posted on the Firebase Blog by Jason St. Pierre, Product Manager.

Back in October, we were thrilled to launch a beta version of Firebase Crashlytics. As the top ranked mobile app crash reporter for over 3 years running, Crashlytics helps you track, prioritize, and fix stability issues in realtime. It's been exciting to see all the positive reactions, as thousands of you have upgraded to Crashlytics in Firebase!

Today, we're graduating Firebase Crashlytics out of beta. As the default crash reporter for Firebase going forward, Crashlytics is the next evolution of the crash reporting capabilities of our platform. It empowers you to achieve everything you want to with Firebase Crash Reporting, plus much more.

This release includes several major new features, in addition to our stamp of approval when it comes to service reliability. Here's what's new.

Integration with Analytics events

We heard from many of you that you love Firebase Crash Reporting's "breadcrumbs" feature. (Breadcrumbs are the automatically created Analytics events that help you retrace user actions preceding a crash.) Starting today, you can see these breadcrumbs within the Crashlytics section of the Firebase console, helping you to triage issues more easily.

To use breadcrumbs on Crashlytics, install the latest SDK and enable Google Analytics for Firebase. If you already have Analytics enabled, the feature will automatically start working.

Crash insights

By broadly analyzing aggregated crash data for common trends, Crashlytics automatically highlights potential root causes and gives you additional context on the underlying problems. For example, it can reveal how widespread incorrect UIKit rendering was in your app so you would know to address that issue first. Crash insights allows you to make more informed decisions on what actions to take, save time on triaging issues, and maximize the impact of your debugging efforts.

From our community:

"In the few weeks that we've been working with Crashlytics' crash insights, it's been quite helpful on a few particularly pesky issues. The description and quality of the linked resources makes it easy to immediately start debugging."

- Marc Bernstein, Software Development Team Lead, Hudl

Pinning important builds

Generally, you have a few builds you care most about, while others aren't as important at the moment. With this new release of Crashlytics, you can now "pin" your most important builds which will appear at the top of the console. Your pinned builds will also appear on your teammates' consoles so it's easier to collaborate with them. This can be especially helpful when you have a large team with hundreds of builds and millions of users.

dSYM uploading

To show you stability issues, Crashlytics automatically uploads your dSYM files in the background to symbolicate your crashes. However, some complex situations can arise (e.g. Bitcode-compiled apps) that prevent your dSYMs from being uploaded properly. That's why today we're also releasing a new dSYM uploader tool within your Crashlytics console. Now, you can manually upload your dSYMs for cases where they cannot be uploaded automatically.

Firebase's default crash reporter

With today's GA release of Firebase Crashlytics, we've decided to sunset Firebase Crash Reporting, so we can best serve you by focusing our efforts on one crash reporter. Starting today, you'll notice the console has changed to only list Crashlytics in the navigation. If you need to access your existing crash data in Firebase Crash Reporting, you can use the app picker to switch from Crashlytics to Crash Reporting.

Firebase Crash Reporting will continue to be functional until September 8th, 2018 - at which point it will be retired fully.

Upgrading to Crashlytics is easy: just visit your project's console, choose Crashlytics in the left navigation and click "Set up Crashlytics":

Linking Fabric and Firebase Crashlytics

If you're currently using both Firebase and Fabric, you can now link the two to see your existing crash data within the Firebase console. To get started, click "Link app in Fabric" within the console and go through the flow on fabric.io:

If you are only using Fabric right now, you don't need to take any action. We'll be building out a new flow in the coming months to help you seamlessly link your existing app(s) from Fabric to Firebase. In the meantime, we encourage you to try other Firebase products.

We are excited to bring you the best-in-class crash reporter in the Firebase console. As always, let us know your thoughts and we look forward to continuing to improve Crashlytics. Happy debugging!

Artifact management for open source software

Posted by Kit Merker, JFrog

It's often said that open source is free like speech, not free like beer. But every so often, the developers behind an open source project can take advantage of free services to make their project better.

We believe in supporting the good work of open source projects to help the maintainers, who do an often thankless job, to be more productive.


Last year, we collaborated with Google to announce the availability of Artifactory Pro hosted on Google Cloud Platform free of charge for qualifying open source projects. The idea was to make sure that open source maintainers could reliably share their build outputs between team members for development, testing and deployment. This will help ensure that the open source projects which developers around the world rely on are easy to consume.

Since the announcement, over 30 projects have qualified for and joined, including OpenMRS, Psono, and Grails.

If you run an open source project and are interested, we encourage you to apply.

Join the “Build Actions for Your Community” Event Series

Posted by Ido Green, Developer Advocate

Ever wanted to learn about developing for the Google Assistant and meet other developers that are passionate about conversational UI? Well, we've got some good news!

Today, we are launching a global series of events about Actions on Google, run by Google Developers Groups (GDG) and other community groups. In these events, you'll be able to meet other developers and go together through educational content, uniquely crafted for these events by Google engineers. This includes tutorials on how to build your first Action and advanced sessions on how to use more complex features of the platform. By the end of the event you attend, you'll be able to build an Action for your community - be it your hometown, your professional network, or interest group.

And if you don't see an event near you, don't worry - you can always organize your own. We'll help!

It's going to be a great year for Actions developers. Please join us and check out the dedicated event website with all the event details and more information: developers.google.com/events/buildactions!