Tag Archives: Beta

Beta Channel Update for ChromeOS

The Beta channel has been updated to 89.0.4329.16 (Platform version: 13729.8.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. Changes can be viewed here.


If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Geo Hsu
Google Chrome

Cloud Spanner Emulator Reaches 1.0 Milestone!

The Cloud Spanner emulator provides application developers with the full set of Cloud Spanner APIs, including the full breadth of SQL and DDL features, running locally for prototyping, development, and testing. The offline emulator is free and improves developer productivity for customers. Today, we are happy to announce that the Cloud Spanner emulator is generally available (GA), with support for the Partition APIs, the Cloud Spanner client libraries, and the full set of SQL features.

Since the Cloud Spanner emulator’s beta launch in April 2020, we have seen strong adoption of the local emulator by Cloud Spanner customers. Several new and existing customers have adopted the emulator in their development and continuous test pipelines, and they report significant improvements in developer productivity, faster test execution, and fewer errors in the applications they deploy to production. We also added several features in this release based on the valuable feedback we received from beta users. The full list of features is documented in the GitHub README.

Partition APIs

When reading or querying large amounts of data from Cloud Spanner, it can be useful to divide the query into smaller pieces, or partitions, and use multiple machines to fetch the partitions in parallel. The emulator now supports Partition Read, Partition Query, and Partition DML APIs.
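
As a rough illustration, here is a minimal sketch using the Java client library (com.google.cloud.spanner); the project, instance, database, and table names are placeholders, and spanner is a Spanner client created via SpannerOptions (as in the emulator connection sketch further down). It splits a query into partitions and fetches each one in the same process, whereas a real pipeline would typically hand each partition to a separate worker.

BatchClient batchClient = spanner.getBatchClient(
    DatabaseId.of("test-project", "test-instance", "test-database"));
BatchReadOnlyTransaction txn =
    batchClient.batchReadOnlyTransaction(TimestampBound.strong());
List<Partition> partitions = txn.partitionQuery(
    PartitionOptions.getDefaultInstance(),
    Statement.of("SELECT SingerId, FirstName FROM Singers"));
for (Partition partition : partitions) {
  // Each partition could be shipped to a different machine; here we simply
  // execute them one after another against the same read-only transaction.
  try (ResultSet results = txn.execute(partition)) {
    while (results.next()) {
      System.out.println(results.getLong("SingerId"));
    }
  }
}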

Cloud Spanner client libraries

With the GA launch, the latest versions of all the Cloud Spanner client libraries support the emulator. We have added support for the C#, Node.js, PHP, Python, and Ruby client libraries as well as the Cloud Spanner JDBC driver, in addition to the C++, Go, and Java client libraries that were already supported at the beta launch. Be sure to check the minimum version of each client library that supports the emulator.

Use the Getting Started guides to try the emulator with the client library of your choice.
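
For example, here is a minimal, hedged sketch of pointing the Java client library at a locally running emulator. The project, instance, and database names are placeholders, 9010 is the emulator's default gRPC port, and setEmulatorHost() (or the SPANNER_EMULATOR_HOST environment variable) is what recent google-cloud-spanner releases use to redirect traffic to the emulator.

SpannerOptions options = SpannerOptions.newBuilder()
    .setProjectId("test-project")
    .setEmulatorHost("localhost:9010")  // or export SPANNER_EMULATOR_HOST=localhost:9010
    .build();
Spanner spanner = options.getService();
DatabaseClient client = spanner.getDatabaseClient(
    DatabaseId.of("test-project", "test-instance", "test-database"));
try (ResultSet rs = client.singleUse().executeQuery(Statement.of("SELECT 1"))) {
  while (rs.next()) {
    System.out.println(rs.getLong(0));
  }
}
spanner.close();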

SQL features

The emulator now supports the full set of SQL features provided by Cloud Spanner. Notable additions include the SQL functions JSON_VALUE, JSON_QUERY, CEILING, POWER, CHARACTER_LENGTH, and FORMAT. We also now support untyped parameter bindings in SQL statements, which are used by client libraries written in dynamically typed languages such as Python, PHP, Node.js, and Ruby.
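
As a small illustration, several of the newly supported functions can be exercised with a simple statement; this sketch reuses the DatabaseClient from the connection example above, and the literal values are arbitrary.

Statement statement = Statement.of(
    "SELECT CEILING(2.4) AS rounded_up, "
        + "POWER(2, 10) AS kib, "
        + "CHARACTER_LENGTH('spanner') AS len, "
        + "FORMAT('%d rows scanned', 42) AS message");
try (ResultSet rs = client.singleUse().executeQuery(statement)) {
  while (rs.next()) {
    System.out.println(rs.getString("message"));
  }
}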

Using Emulator in CI/CD pipelines

You can now point the majority of your existing CI/CD pipelines at the Cloud Spanner emulator instead of a real Cloud Spanner instance brought up on GCP. This saves you both cost and time, since an emulator instance comes up instantly and is free to use!

What’s even better is that a single execution of the emulator can host multiple instances, and of course multiple databases. Tests that interact with a Cloud Spanner database can therefore run in parallel, each against its own database, making the tests hermetic. This can reduce flakiness in unit tests and reduce the number of bugs that make their way to continuous integration tests or to production.

In case your existing CI/CD architecture assumes the existence of a Cloud Spanner test instance and/or test database against which the tests run, you can achieve similar functionality with the emulator as well. Note that the emulator doesn’t come up with a default instance or a default database, because we expect users to create instances and databases as required in their tests for hermeticity, as explained above. Below are two examples of how you can bring up an emulator with a default instance or database: 1) using a Docker image, or 2) programmatically.

Starting Emulator from Docker

The emulator can be started using Docker on Linux, macOS, and Windows. As a prerequisite, you need to install Docker on your system. To bring up an emulator with a default database/instance, you can execute a shell script from your Dockerfile that makes RPC calls to CreateInstance and CreateDatabase after bringing up the emulator server. You can also look at this example of how to put this together when using Docker.

Run Emulator Programmatically

You can bring up the emulator binary in the same process as your test program. Then you can create a default instance/database in your ‘Setup’ and clean them up when the tests are over. Note that the exact procedure for bringing up an ‘in-process’ service may vary with the client library language and platform of your choice.
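
Below is a rough JUnit-style sketch of that pattern in Java. Everything specific in it is an assumption for illustration: the emulator binary path and flag, the gRPC port, the "emulator-config" instance config name, and the schema are placeholders you would replace with whatever matches your setup.

public class EmulatorTestBase {
  private static Process emulator;
  private static Spanner spanner;

  @BeforeClass
  public static void startEmulatorAndCreateDatabase() throws Exception {
    // Placeholder launch command; the path and flags depend on how you obtained the emulator.
    emulator = new ProcessBuilder("/path/to/emulator_main", "--host_port=localhost:9010")
        .inheritIO().start();
    Thread.sleep(2000);  // crude readiness wait; poll the port in real code

    spanner = SpannerOptions.newBuilder()
        .setProjectId("test-project")
        .setEmulatorHost("localhost:9010")
        .build()
        .getService();

    // Create a default instance and database for the tests to use.
    spanner.getInstanceAdminClient().createInstance(
        InstanceInfo.newBuilder(InstanceId.of("test-project", "test-instance"))
            .setInstanceConfigId(InstanceConfigId.of("test-project", "emulator-config"))
            .setNodeCount(1)
            .setDisplayName("test instance")
            .build()).get();

    spanner.getDatabaseAdminClient().createDatabase("test-instance", "test-database",
        Collections.singletonList(
            "CREATE TABLE Singers (SingerId INT64, FirstName STRING(MAX)) PRIMARY KEY (SingerId)"))
        .get();
  }

  @AfterClass
  public static void stopEmulator() {
    spanner.close();
    emulator.destroy();
  }
}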

Other alternatives for starting the emulator, including pre-built Linux binaries, are listed here.

Try it now

Learn more about Google Cloud Spanner emulator and try it out now.

By Asheesh Agrawal, Google Open Source

Update on the Google Ads API Beta

Since our last announcement in July, we've made several updates to improve the performance of the Google Ads API. Your feedback was essential to us in making these improvements and will continue to be throughout the remainder of the Beta.

What has been fixed?
Over the past several months, we've rolled out performance improvements to Google Ads API read and mutate functionality. Some of these updates are visible in recent versions of the API, such as the launch of GoogleAdsService.SearchStream() in v3_0, while other improvements have sped up the response times of existing services and methods.

What's next for the Google Ads API Beta?
It is our top priority to get the Google Ads API ready for general availability. This involves rolling out features for key user journeys, for example: a service for asynchronous batch updates. If you have any feedback on the API's readiness to address your tool's requirements, we'd like to hear from you!

Which API should I use?
Throughout the remainder of the Beta, the AdWords API will continue to be the primary API for programmatically accessing and managing Google Ads campaigns. When deciding whether to use the Google Ads API Beta to run production systems, please keep in mind that we may release updates in preparation for general availability. As a reminder, changes will be released in new versions of the Google Ads API Beta. They will not affect your existing code unless announced otherwise on this blog.

If you have any feedback or questions regarding the performance, feature availability and overall usability of the Google Ads API Beta, please contact us at [email protected].

The Tekton Pipelines Beta release

Tekton is a powerful and flexible open-source framework for creating CI/CD systems, allowing developers to build, test, and deploy across cloud providers and on-premises systems. The project recently released its Beta, which raises the level of stability by bringing the best features into the Pipelines Beta, giving users more reason to trust and build on them.


Tekton is used for infrastructure development on top of Kubernetes; it provides an open source framework for creating CI/CD systems, allowing developers to easily build, test, and deploy applications across cloud providers and on-premises systems.

With the new Beta policy, users can rest assured that Beta features will not be removed, and that there will be a 9-month window dedicated to finding solutions for incompatible API changes. Since many in the Tekton community are building on the Tekton Pipelines API, this new release helps guarantee that anything developed on top of Tekton is reliable and optimized for best performance, with a budget of several months to make any necessary adjustments.

As platform builders require a stable API and feature set, the Beta launch includes Tasks, ClusterTasks, TaskRuns, Pipelines, and PipelineRuns, providing a foundation that users can rely on. Google created working groups in conjunction with contributors from various companies to drive the Beta release. The team continues to deliver new Pipeline features towards a GA launch at the end of the year, while also focusing on bringing other components, like metadata storage, Triggers, and the Catalog, to Beta.


Tekton initially started as part of the Knative project from Google, built in collaboration with developers from other organizations, and was donated to the Continuous Delivery Foundation (CDF) in early 2019. Tekton’s initial interface design was even inspired by the Cloud Build API, and to this day Google remains heavily involved in Tekton’s development, participating in the governing board and staffing a dedicated team invested in the success of the project. These characteristics make Tekton a prime example of collaboration in open source.

Since its launch in February 2019, Tekton has had 3712 pull requests from 262 contributors across 39 companies spanning 16 countries. Many widely used projects across the open source industry are built on Tekton:
  • Puppet Project Nebula
  • Jenkins X
  • Red Hat OpenShift Pipelines
  • IBM Cloud Continuous Delivery
  • Kabanero – open source project led by IBM
  • Rio – open source project led by Rancher
  • Kf – open source project led by Google
Interested in trying out Tekton yourself? To install Tekton in your own Kubernetes cluster (1.15 or newer), use kubectl to install the latest Tekton release:

kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

You can jump right in by saving this Task to a file called task.yaml:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-world
spec:
  steps:
  - image: ubuntu
    script: |
      echo "hello world"


Tasks are one of the most important building blocks of Tekton! Head over to tektoncd/catalog for more examples of reusable Tasks.

To run the hello-world Task, first apply it to your cluster with kubectl:

kubectl apply -f task.yaml

The easiest way to start running our Task is to use the Tekton command line tool, tkn. Install tkn using the right method for your OS, and you can run your Task with:

tkn task start hello-world --showlog

That’s just a taste of Tekton! At tekton.dev/try the community is hard at work adding interactive tutorials that let you try Tekton in a virtual environment. You can dive straight into the docs at tekton.dev/docs and join the Tekton community at github.com/tektoncd/community.

Congratulations to all the contributors who made this Beta release possible!

By Radha Jhatakia and Christie Wilson, Google Open Source

Beta Channel Update for Desktop

The beta channel has been updated to 77.0.3865.35 for Windows, Mac, and Linux.


A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Lakshmana Pamarthy
Google Chrome

Beta Channel Update for Desktop

The beta channel has been updated to 76.0.3809.100 for Windows, Mac, and Linux.

A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Abdul Syed
Google Chrome

Beta Channel Update for Desktop

The beta channel has been updated to 74.0.3729.108 for Windows, Mac, and Linux.

A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Abdul Syed
Google Chrome

What’s new in Android P Beta

Posted by Dave Burke, VP of Engineering

Earlier today we unveiled a beta version of Android P, the next release of Android. Android P puts AI at the core of the operating system and focuses on intelligent and simple experiences. You can read more about the new user features here.

For developers, Android P beta offers a range of ways to take advantage of these new smarts, especially when it comes to increasing engagement with your apps.

You can get Android P beta on Pixel devices by enrolling here. And thanks to Project Treble, you can now get the beta on top devices from our partners as well -- Essential, Nokia, Oppo, Sony, Vivo, and Xiaomi, with others on the way.

Visit android.com/beta for the full list of devices, and details on how to get Android P beta on your device. To get started developing with Android P beta, visit developer.android.com/preview.

A smarter smartphone, with machine learning at the core

Android P makes a smartphone smarter, helping it learn from and adapt to the user. Your apps can take advantage of the latest in machine intelligence to help you reach more users and offer new kinds of experiences.

Adaptive Battery

Battery life is the number one concern we hear about from mobile phone users, regardless of the device they are using. In Android P we've partnered with DeepMind on a new feature we call Adaptive Battery that optimizes how apps use battery.

Adaptive Battery uses machine learning to prioritize access to system resources for the apps the user cares about most. It puts running apps into groups with different restrictions, using four new "App Standby buckets" ranging from "active" to "rare". Apps will change buckets over time, and apps not in the "active" bucket will have restrictions on jobs, alarms, network access, and high-priority Firebase Cloud Messages.

If your app is optimized for Doze, App Standby, and Background Limits, Adaptive Battery should work well for you right out of the box. We recommend testing your app in each of the four buckets. Check out the documentation for the details.
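
As a small sketch of how a test build might check which bucket it is currently in, UsageStatsManager.getAppStandbyBucket() (API 28) reports the bucket for your own app; context and TAG are assumed to come from the enclosing class.

UsageStatsManager usageStats = context.getSystemService(UsageStatsManager.class);
switch (usageStats.getAppStandbyBucket()) {
  case UsageStatsManager.STANDBY_BUCKET_ACTIVE:
    Log.d(TAG, "Standby bucket: active");
    break;
  case UsageStatsManager.STANDBY_BUCKET_WORKING_SET:
    Log.d(TAG, "Standby bucket: working set");
    break;
  case UsageStatsManager.STANDBY_BUCKET_FREQUENT:
    Log.d(TAG, "Standby bucket: frequent");
    break;
  case UsageStatsManager.STANDBY_BUCKET_RARE:
    Log.d(TAG, "Standby bucket: rare");
    break;
  default:
    Log.d(TAG, "Standby bucket: other");
}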

App Actions

App Actions are a new way to raise the visibility of your app to users as they start their tasks. They put your app's core capabilities in front of users as suggestions to handle those tasks, at key touch-points across the system like the Launcher, Smart Text Selection, Google Play, the Google Search app, and the Assistant.

Actions use machine learning to surface just the right apps to users based on their context or recent interactions. Because Actions highlight your app where and when it's most relevant, they're a great way to reach new users and re-engage with existing users.

To support App Actions, just define your app's capabilities as semantic intents. App Actions use the same catalog of common intents as conversational Actions for the Google Assistant, which surface on voice-activated speakers, Smart displays, cars, TVs, headphones, and more. There's no API surface needed for App Actions, so they will work on any supported Android platform version.

Actions will be available soon for developers to try, sign up here if you'd like to be notified.

Slices

Along with App Actions we're introducing Slices, a new way for your apps to provide remote content to users. With Slices you can surface rich, templated UI in places like Google Search and Assistant. Slices are interactive with support for actions, toggles, sliders, scrolling content, and more.

Slices are a great new way to engage users and we wanted them to be available as broadly as possible. We added platform support in Android P, and we built the developer APIs and templates into Android Jetpack, our new set of libraries and tools for building great apps. Through Jetpack, your Slices implementation can target users all the way back to KitKat -- across 95% of active Android devices. We'll also be able to update the templates regularly to support new use cases and interactions (such as text input).

Check out the Getting Started guide to learn how to build with Slices -- you can use the SliceViewer tool to see how your Slices look. Over time we plan to expand the number of places that your Slices can appear, including remote display in other apps.

Smart reply in notifications

The Smart Reply feature in Gmail and Inbox is an excellent example of how machine intelligence can positively transform an app experience. In Android P we've brought Smart Replies to Notifications, with an API to let you provide this optimization to your users. To make it easier to populate replies in your notifications, you'll soon be able to use ML Kit -- see developers.google.com/mlkit for details.

Text Classifier

In Android P we've extended the ML models that identify entities in content or text input to support more types, like Dates and Flight Numbers, and we're making those improvements available to developers through the TextClassifier API. We're also updating the Linkify API, which automatically creates links, to take advantage of these TextClassification models, and we've enriched the options users have for quick follow-on actions. Developers also have the option of linkifying any of the entities recognized by the TextClassifier service. Smart Linkify brings significant improvements in the accuracy and precision of detection, as well as in performance.

Even better, the models are now updated directly from Google Play, so your apps can take advantage of model improvements using the same APIs. Once the updated models are installed, all of the entity recognition happens on-device and data is not sent over the network.
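
As a hedged sketch of the API shape, classifying a span of text looks roughly like this; the input string is made up, context and TAG come from the enclosing class, and the call must run off the main thread.

// classifyText() must be called on a worker thread.
TextClassificationManager tcm =
    context.getSystemService(TextClassificationManager.class);
TextClassifier classifier = tcm.getTextClassifier();

String text = "Flight LX 38 departs on 2018-06-20";
TextClassification result = classifier.classifyText(
    new TextClassification.Request.Builder(text, 0, text.length()).build());
if (result.getEntityCount() > 0) {
  String entity = result.getEntity(0);  // e.g. TextClassifier.TYPE_FLIGHT_NUMBER
  Log.d(TAG, "Top entity: " + entity
      + " (confidence " + result.getConfidenceScore(entity) + ")");
}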

Simplicity

We put a special emphasis on simplicity in Android P, evolving Android's UI to streamline and enhance user tasks. For developers, the changes help improve the way users find, use, and manage your apps.

New system navigation

We're introducing a new system navigation in Android P that gives users easier access to Home, Overview, and the Assistant from a single button on every screen. The new navigation simplifies multitasking and makes discovering related apps much easier. In the Overview, users have a much larger view of what they were doing when they left each app, making it much easier to see and resume the activity. The Overview also provides access to search, predicted apps, and App Actions, and takes users to All Apps with another swipe.

Text Magnifier

In Android P we've also added a new Magnifier widget, designed to make it easier to select text and manipulate the text cursor. By default, classes that extend TextView automatically support the magnifier, but you can use the Magnifier API to attach it to any custom View, which opens it up to a variety of uses.
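
A brief sketch of that second case, assuming a hypothetical custom drawing view: show the magnifier while the finger is down, using coordinates relative to the view, and dismiss it on release.

public class InkView extends View {
  private final Magnifier magnifier;

  public InkView(Context context, AttributeSet attrs) {
    super(context, attrs);
    magnifier = new Magnifier(this);
  }

  @Override
  public boolean onTouchEvent(MotionEvent event) {
    switch (event.getActionMasked()) {
      case MotionEvent.ACTION_DOWN:
      case MotionEvent.ACTION_MOVE:
        // Magnify the content under the finger.
        magnifier.show(event.getX(), event.getY());
        return true;
      case MotionEvent.ACTION_UP:
      case MotionEvent.ACTION_CANCEL:
        magnifier.dismiss();
        return true;
    }
    return super.onTouchEvent(event);
  }
}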

Background restrictions

We're making it simple for users to identify and manage apps that are using battery in the background. Building on our work on Android Vitals, Android can detect battery-draining app behaviors such as excessive wake locks. Now in Android P, Battery Settings lists such apps and lets users restrict their background activities with a single tap.

When an app is restricted, its background jobs, alarms, services, and network access are affected. To stay off of the list, pay attention to your Android Vitals dashboard in the Play Console, which can help you understand performance and battery issues.

Background restrictions ensure baseline behaviors that developers can build for across devices and manufacturers. Although device makers can add restrictions on top of the core set, they must provide user controls via Battery Settings.

We've added a standard API to let apps check whether they are restricted, as well as new ADB commands to let you manually apply restrictions to your apps for testing. See the documentation for details. We also plan to add restriction-related metrics to your Play Console Android Vitals dashboard in the future.
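
That check is essentially a one-liner; a sketch under the assumption that you gate optional background work on it (context is the enclosing Context):

ActivityManager activityManager = context.getSystemService(ActivityManager.class);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P
    && activityManager.isBackgroundRestricted()) {
  // The user has restricted this app's background activity: skip optional
  // background work and, where appropriate, explain how to lift the
  // restriction from Battery settings.
}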

Enhanced audio with Dynamics Processing

Android P introduces a new Dynamics Processing Effect in the Audio Framework that lets developers improve audio quality. With Dynamics Processing, you can isolate specific frequencies, lower loud sounds, and boost soft ones to enhance the acoustic quality of your application. For example, your app can improve the sound of someone who speaks quietly in a loud, distant, or otherwise acoustically challenging environment.

The Dynamics Processing API gives you access to a multi-stage, multi-band dynamics processing effect that includes a pre-equalizer, a multi-band compressor, a post-equalizer, and a linked limiter. It lets you modify the audio coming out of Android devices and optimize it according to the preferences of the listener or the ambient conditions. The number of bands and active stages is fully configurable, and most parameters, such as gains, attack/release times, and thresholds, can be controlled in real time.

To see what you can do with the Dynamics Processing Effect, please see the documentation.
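
To give a feel for the shape of the API, here is a rough, hedged sketch; the variant, band counts, gain, and audioSessionId below are illustrative placeholders rather than recommended values.

DynamicsProcessing.Config config = new DynamicsProcessing.Config.Builder(
        DynamicsProcessing.VARIANT_FAVOR_FREQUENCY_RESOLUTION,
        2,         // channel count
        true, 4,   // pre-EQ in use, band count
        true, 4,   // multi-band compressor in use, band count
        true, 4,   // post-EQ in use, band count
        true)      // linked limiter in use
    .build();

DynamicsProcessing dynamics =
    new DynamicsProcessing(0 /* priority */, audioSessionId, config);
dynamics.setInputGainAllChannelsTo(3.0f);  // e.g. lift quiet speech slightly
dynamics.setEnabled(true);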

Security

Biometric prompt

Android P provides a standard authentication experience across the growing range of biometric sensors. Apps can use the new BiometricPrompt API instead of displaying their own biometric auth dialogs. This new API replaces the FingerprintDialog API added in DP1. In addition to supporting Fingerprints (including in-display sensors), it also supports Face and Iris authentication, providing a system-wide consistent experience. There is a single USE_BIOMETRIC permission that covers all device-supported biometrics. FingerprintManager and the corresponding USE_FINGERPRINT permission are now deprecated, so please switch to BiometricPrompt as soon as possible.
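
A hedged sketch of the new flow follows; the strings are placeholders, and the callbacks only hint at what a real app would do after authentication.

BiometricPrompt prompt = new BiometricPrompt.Builder(context)
    .setTitle("Sign in")
    .setDescription("Confirm your identity to continue")
    .setNegativeButton("Cancel", context.getMainExecutor(),
        (dialog, which) -> { /* user tapped Cancel */ })
    .build();

prompt.authenticate(new CancellationSignal(), context.getMainExecutor(),
    new BiometricPrompt.AuthenticationCallback() {
      @Override
      public void onAuthenticationSucceeded(BiometricPrompt.AuthenticationResult result) {
        // Unlock the protected action or key.
      }

      @Override
      public void onAuthenticationError(int errorCode, CharSequence errString) {
        // Biometrics unavailable or the attempt was aborted.
      }
    });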

Protected Confirmation

Android P introduces Android Protected Confirmation, which uses the Trusted Execution Environment (TEE) to guarantee that a given prompt string is shown to and confirmed by the user. Only after successful user confirmation will the TEE sign the prompt string, which the app can then verify.
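
A rough sketch of how an app might use it: the prompt text and the nonceFromServer extra data are placeholders for whatever your app needs the approval to be bound to, and ConfirmationPrompt/ConfirmationCallback are the android.security classes added in P.

if (ConfirmationPrompt.isSupported(context)) {
  ConfirmationPrompt prompt = new ConfirmationPrompt.Builder(context)
      .setPromptText("Transfer $100 to Alice?")
      .setExtraData(nonceFromServer)  // placeholder: data bound to the approval
      .build();
  try {
    prompt.presentPrompt(context.getMainExecutor(), new ConfirmationCallback() {
      @Override
      public void onConfirmed(byte[] dataThatWasConfirmed) {
        // Sign dataThatWasConfirmed with a confirmation-gated key and send the
        // signed blob to your server for verification.
      }

      @Override
      public void onDismissed() {
        // The user declined the prompt.
      }
    });
  } catch (Exception e) {
    // Prompt unavailable or already showing.
  }
}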

Stronger protection for private keys

We've added StrongBox as a new KeyStore type, providing API support for devices that provide key storage in tamper-resistant hardware with isolated CPU, RAM, and secure flash. You can set whether your keys should be protected by a StrongBox security chip in your KeyGenParameterSpec.
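
For example, a short sketch of requesting a StrongBox-backed signing key (the key alias is a placeholder); on devices without the hardware, key generation throws StrongBoxUnavailableException, so a real app would fall back to the regular Keystore.

static KeyPair createStrongBoxKey() throws GeneralSecurityException {
  KeyPairGenerator generator =
      KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore");
  generator.initialize(new KeyGenParameterSpec.Builder(
          "payment-key", KeyProperties.PURPOSE_SIGN | KeyProperties.PURPOSE_VERIFY)
      .setDigests(KeyProperties.DIGEST_SHA256)
      .setIsStrongBoxBacked(true)  // request tamper-resistant hardware
      .build());
  return generator.generateKeyPair();
}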

Android P Beta

Bringing a new version of Android to users takes a combined effort across Google, silicon manufacturers (SM), device manufacturers (OEMs), and carriers. The process is technically challenging and can take time -- to make it easier, we launched Project Treble last year as part of Android Oreo. Since then we've been working with partners on the initial bring-up and now we're seeing proof of what Treble can do.

Today we announced that 6 of our top partners are joining us to release Android P Beta on their devices -- Sony Xperia XZ2, Xiaomi Mi Mix 2S, Nokia 7 Plus, Oppo R15 Pro, Vivo X21UD and X21, and Essential PH‑1. We're inviting early adopters and developers around the world to try Android P Beta on any of these devices -- as well as on Pixel 2, Pixel 2 XL, Pixel, and Pixel XL.

You can see the full list of supported partner and Pixel devices at android.com/beta. For each device you'll find specs and links to the manufacturer's dedicated site for downloads, support, and to report issues. For Pixel devices, you can now enroll your device in the Android Beta program and automatically receive the latest Android P Beta over-the-air.

Try Android P Beta on your favorite device today and let us know your feedback! Stay tuned for updates on Project Treble coming soon.

Make your apps compatible

With more users starting to get Android P Beta on their devices, now is the time to test your apps for compatibility, resolve any issues, and publish an update as soon as possible. See the migration guide for steps and a recommended timeline.

To test for compatibility, just install your current app from Google Play onto a device or emulator running Android P Beta and work through the user flows. The app should run and look great, and handle the Android P behavior changes properly. In particular, pay attention to Adaptive Battery, Wi-Fi permissions changes, restrictions on use of the camera and sensors from the background, stricter SELinux policy for app data, changes to TLS defaults, and the Build.SERIAL restriction.

Compatibility through public APIs

It's important to test your apps for uses of non-SDK interfaces. As noted previously, in Android P we're starting a gradual process to restrict access to selected non-SDK interfaces, asking developers -- including app teams inside Google -- to use the public equivalents instead.

If your apps are using private Android interfaces and libraries, you should move to using public APIs from the Android SDK or NDK. The first developer preview displayed a toast warning for uses of non-SDK interfaces -- starting in Android P Beta, uses of non-SDK interfaces that are not exempted will generate errors in your apps -- so you'll now get exceptions thrown instead of a warning.

To help you identify reflective usage of non-SDK APIs, we've added two new methods in StrictMode. You can use detectNonSdkApiUsage() to warn when your app accesses non-SDK APIs via reflection or JNI, and you can use permitNonSdkApiUsage() to suppress StrictMode warnings for those accesses. This can help you understand your app's use of non-SDK APIs -- even if the APIs are exempted at this time, it's best to plan for the future and eliminate their use.
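
A small sketch of wiring that up in debug builds (BuildConfig.DEBUG is the usual per-app flag, assumed here):

if (BuildConfig.DEBUG && Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
  StrictMode.setVmPolicy(new StrictMode.VmPolicy.Builder()
      .detectNonSdkApiUsage()  // log reflective/JNI access to non-SDK APIs
      .penaltyLog()
      .build());
}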

In cases where there is no public API that meets your use-case, please let us know immediately. We want to make sure that the initial rollout only affects interfaces where developers can easily migrate to public alternatives. More about the restrictions is here.

Test with display cutout

It's also important to test your app with display cutout. Now you can use several of our partner devices running Android Beta to make sure your app looks its best with a display cutout. You can also use the emulated cutout support that's available on any Android P device through Developer options.

Get started with Android P

When you're ready, dive into Android P and learn about the many new features and APIs you can take advantage of in your apps. To make it easier to explore the new APIs, take a look at the API diff reports (API 27->DP2, DP1->DP2) along with the Android P API reference. Visit the Developer Preview site for details. Also check out this video highlighting what's new for developers in Android P Beta.

To get started with Android P, download the P Developer Preview SDK and tools into Android Studio 3.1 or use the latest version of Android Studio 3.2. If you don't have a device that runs Android P Beta, you can use the Android emulator to run and test your app.

As always, your feedback is critical, so please let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here. We have separate hotlists for filing platform issues, app compatibility issues, and third-party SDK issues.

Beta Channel Update for Desktop

The beta channel has been updated to 66.0.3359.117 for Windows, Mac, and Linux.

A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Abdul Syed
Google Chrome

Firebase Crashlytics graduates from beta

Originally posted on the Firebase Blog by Jason St. Pierre, Product Manager.

Back in October, we were thrilled to launch a beta version of Firebase Crashlytics. As the top-ranked mobile app crash reporter for more than three years running, Crashlytics helps you track, prioritize, and fix stability issues in real time. It's been exciting to see all the positive reactions, as thousands of you have upgraded to Crashlytics in Firebase!

Today, we're graduating Firebase Crashlytics out of beta. As the default crash reporter for Firebase going forward, Crashlytics is the next evolution of the crash reporting capabilities of our platform. It empowers you to achieve everything you want to with Firebase Crash Reporting, plus much more.

This release includes several major new features, in addition to our stamp of approval when it comes to service reliability. Here's what's new.

Integration with Analytics events

We heard from many of you that you love Firebase Crash Reporting's "breadcrumbs" feature. (Breadcrumbs are the automatically created Analytics events that help you retrace user actions preceding a crash.) Starting today, you can see these breadcrumbs within the Crashlytics section of the Firebase console, helping you to triage issues more easily.

To use breadcrumbs on Crashlytics, install the latest SDK and enable Google Analytics for Firebase. If you already have Analytics enabled, the feature will automatically start working.

Crash insights

By broadly analyzing aggregated crash data for common trends, Crashlytics automatically highlights potential root causes and gives you additional context on the underlying problems. For example, it can reveal how widespread incorrect UIKit rendering was in your app, so you would know to address that issue first. Crash insights lets you make more informed decisions on what actions to take, save time triaging issues, and maximize the impact of your debugging efforts.

From our community:

"In the few weeks that we've been working with Crashlytics' crash insights, it's been quite helpful on a few particularly pesky issues. The description and quality of the linked resources makes it easy to immediately start debugging."

- Marc Bernstein, Software Development Team Lead, Hudl

Pinning important builds

Generally, you have a few builds you care most about, while others aren't as important at the moment. With this new release of Crashlytics, you can now "pin" your most important builds which will appear at the top of the console. Your pinned builds will also appear on your teammates' consoles so it's easier to collaborate with them. This can be especially helpful when you have a large team with hundreds of builds and millions of users.

dSYM uploading

To show you stability issues, Crashlytics automatically uploads your dSYM files in the background to symbolicate your crashes. However, some complex situations (e.g. Bitcode-compiled apps) can prevent your dSYMs from being uploaded properly. That's why today we're also releasing a new dSYM uploader tool within your Crashlytics console. Now, you can manually upload your dSYMs for cases where they cannot be uploaded automatically.

Firebase's default crash reporter

With today's GA release of Firebase Crashlytics, we've decided to sunset Firebase Crash Reporting, so we can best serve you by focusing our efforts on one crash reporter. Starting today, you'll notice the console has changed to only list Crashlytics in the navigation. If you need to access your existing crash data in Firebase Crash Reporting, you can use the app picker to switch from Crashlytics to Crash Reporting.

Firebase Crash Reporting will continue to be functional until September 8, 2018, at which point it will be fully retired.

Upgrading to Crashlytics is easy: just visit your project's console, choose Crashlytics in the left navigation, and click "Set up Crashlytics".

Linking Fabric and Firebase Crashlytics

If you're currently using both Firebase and Fabric, you can now link the two to see your existing crash data within the Firebase console. To get started, click "Link app in Fabric" within the console and go through the flow on fabric.io.

If you are only using Fabric right now, you don't need to take any action. We'll be building out a new flow in the coming months to help you seamlessly link your existing app(s) from Fabric to Firebase. In the meantime, we encourage you to try other Firebase products.

We are excited to bring you the best-in-class crash reporter in the Firebase console. As always, let us know your thoughts, and we look forward to continuing to improve Crashlytics. Happy debugging!