Posted by Jeff Bailey, AOSP Team
AOSP has been around for more than 10 years and visibility into the project has often been restricted to the Android Team and Partners. A lot of that has been rooted in business needs: we want to have fun things to show off at launches and the code wasn't factored in a way that let us do more in the open.
At the Android Developer Summit last month, we demoed GSI running on a number of partner devices, enabled through Project Treble. The work done to make that happen has provided the separation needed, and has also made it easier to work with our partners to upstream fixes for Android into AOSP. As a result of this, more than 40% of the commits to our git repository came in through our open source tree in Q3 of this year.
Publishing Android's Continuous Integration Dashboard
In order to support the developers working directly in AOSP and our partners upstreaming changes, we have enabled more than 8000 tests in presubmit -- tests that are run before the code is checked in -- and are working to add other continuous testing like the Compatibility Test Suite which ensures that our AOSP trees are in a continuously releasable state. Today we are excited to open this up for you through https://ci.android.com/.
On this dashboard, across the top are the targets that we are building, down the left are the revisions. As we add more targets (such as GSI), they will appear here. Each square in the table provides access to the build artifacts. An anchor on the left provides a permanent URL for that revision. Find out more at https://source.android.com/setup/build/dashboard.
Our DroidCop team (similar to Chromium's Tree Sheriffs) watches this dashboard and works with developers to ensure the health of the tree. This is just the start for us, and we are building on this tool to add more in the coming months.
I'd like to thank the Android Engineering Productivity Team for embracing this and I'm excited for us to take this step! I'd love to hear how you use this. Contact me at @jeffbaileyaosp on Twitter, firstname.lastname@example.org, or tag /u/jeffbailey in a post to reddit.com/r/androiddev.
Source: Android Developers Blog
Heading home for the holidays? Here’s hoping it’ll be a joyous reunion with friends and family, with plenty of cookies to go around. But if you’ve already been dreading those questions from your great-aunt about your love life, consider the ways we teach the Assistant to have natural conversations—it’ll make the talk with your great-aunt a little less dreadful.
I’m on the Assistant’s conversational design team, where we work to make your chats with the Google Assistant as pleasant as possible. I’ve been teaching computers how to talk for nearly 20 years, starting my career working on some of those automated phone systems you’ve probably dealt with when you lost a suitcase at the airport. (In my case on a recent trip to Norway, it took 10 of those phone calls to find that lost bag!)
In my years in the industry, I’ve learned a thing or two about how to make conversations work. And so has the Google Assistant.
Give just the right amount of information.
We’ve all had that one relative who keeps droning on about a boring topic at the dinner table, oblivious to the fact that half the room has dozed off. And sometimes we experience the opposite problem, where we ask someone a question and they don’t provide enough information. Like when I ask my son what time it is, and he responds, “Yes.”
To strike the right balance when we design conversations for the Google Assistant, we follow something called the Cooperative Principle, proposed by Paul Grice in the 1970s. His Maxim of Quantity means we shouldn’t talk too much, or too little.
Here’s an example of a conversation that follows the Maxim of Quantity, along with one that doesn’t.
Uncle Anthony: So, how is your first year of college going?
Me: Great! I’m taking four classes. My favorite is called “Taking care of turtles in the 21st century.” Do you know what a turtle’s favorite food is?
Uncle Anthony: So, how is your first year of college going?
Me: Great! I’m taking four classes. My favorite is called “Taking care of turtles in the 21st century.” Some turtles are carnivores, and some are vegetarian. Sea turtles even eat squid. Leatherback sea turtles can grow to 1000 pounds!
Uncle Anthony: Zzzzzzz….
Make it clear when it’s the other person’s turn to talk.
We use a variety of signals to let another person know when we’ve finished talking, and when it’s the next person’s turn to talk. For example, when I pause to take a bite of peppermint bark, that’s an opening for the other person to speak. When designing conversations with computers, which aren’t able to use things like eye contact and body language to determine when it’s their turn, it’s key to end each turn with a question or an instruction, to avoid confusion. And that tactic can work with your family, too, so you’re not always talking over one another.
Me: So I went to this awesome concert. Have you ever been to a concert?
Grandma: Yes, I went to see the Beach Boys in 1987. What a show! Who did you see, dear?
Me: Wow, how interesting. I went to a show called Punky Kittens.
Me: So I went to this awesome concert. Have you ever been to a concert? I went--
Grandma Zara: Yes, I went to--
Me: --went to the greatest show the other day, and...
Grandma Zara: What?
Acknowledge the person you’re speaking with.
One of our most basic desires as humans is to be understood. We want to know the other person is hearing us correctly, like when you ask your brother to pass the green beans, not the gravy. One way the Assistant does this is by using something called “implicit confirmation.” This is how you let someone know they’ve been heard, and establish trust. Let’s see an example where, due to a misunderstanding, a cranberry crisis nearly occurs:
Me: Hey Joanne, I love (mumble mumble) cranberry sauce!
Chef cousin who hates canned cranberry sauce: You like canned cranberry sauce?
Me: Actually I said FRESH cranberry sauce…
Chef: Me too!
Me: Hey Joanne, I love (mumble mumble) cranberry sauce!
Chef: What? I hate that stuff!
Me: Oh yeah? I don’t see why, you make it every year!
Only use visuals when they’re appropriate.
The Google Assistant is available on multiple types of devices, from the voice-only Google Home, to the voice-forward Home Hub, to the multi-modal mobile phone. Because of this, we need to consider when it’s most appropriate to introduce visuals, such as cards or carousels, to the conversation.
Our go-to design principle is to add visuals when they enhance the discussion, and not to let them overshadow the rest of the conversation. Try to keep this in mind when you’re sitting down with family and friends.
Me: I just came back from a trip to Costa Rica, where we saw some amazing monkeys. Here’s my favorite monkey picture! <shows 1 photo>
Everyone: Oooh! How cute!
Me: Who wants to see my slideshow of my cruise to Costa Rica? I have 350 photos. Let me find that one on the beach where I saw a monkey. In fact, I’ll show you all 350 of them!
Everyone: (Runs away.)
Get some practice with your Assistant.
Before you head out for the holidays, try having a few conversations with your Google Assistant and see if you can spot these great communication principles in action. We hope that by following some of these best practices, your holiday dinners will be more pleasant and relaxed.
And if you’re looking for some fun things to do with your Google Assistant, try saying “Hey Google, talk to Santa” or “Hey Google, tell me a winter story.”
- Refreshed look for Camera app
- Fingerprint and PIN enrollment in Out of Box Experience
- Autocomplete in Launcher search
- Adaptive top UI in Chrome browser based on user scrolling
- Unified setup flow to connect with an Android phone
- Assistant natively integrated into the OS (Pixel Slate first, expanding to more devices later)
- New features for families including app management and screen time limits
- Ability to create semi-full pages in Launcher for customizations
- Launched Android P on Pixel Slate
- Fingerprint authentication mode on Pixel Slate
- Portrait mode for Camera app on Pixel Slate
We know app developers of all sizes need valuable and easy-to-use solutions to earn more from their apps. That’s why we’ve invested in providing tools that not only empower you to build sustainable revenue streams, but also make your job easier.
Here are a few vital ways we help developers grow their businesses, along with a look at what’s new.
Google’s advanced monetization technology
Earlier this year we introduced Open Bidding in beta, a new monetization model where all participating ad buyers compete simultaneously in one unified auction. Developers using Open Bidding are already seeing more ad revenue and less latency for their users.
Today, we are excited to announce that the Open Bidding program now features eight advertising partners for mobile ad buying. In addition to OpenX, Index Exchange, Smaato, Tapjoy and AdColony, now Facebook Audience Network, AppLovin, and Rubicon Project are joining the ongoing beta. With these new partners, we're offering diverse sources of app advertising to compete for ad inventory in real time, driving even more revenue for app developers.
"AdColony is excited to join forces with Google to move the app monetization ecosystem forward with Open Bidding. AdMob’s scale of advertiser demand and ease of integration provides a tremendous opportunity for app developers to drive more revenue and operational efficiency.”
- David Pokress, EVP Publishing & Account Management at AdColony
We’re continuing to add new features to Open Bidding based on feedback from our beta participants, including support for more ad formats: interstitial, rewarded, and banner formats are all available today, and native will be added soon.
In addition to asking for more formats and advertising demand, developers participating in our beta have also asked us for transparency into who is bidding on and buying their ad inventory. We’re pleased to announce that we'll be adding a new auction report that allows developers to understand how their different advertising partners are performing. Open Bidding Auction Reports will be available to beta participants early next year.
While we’re paving the way for the next era of monetization technology, we also know that waterfall mediation* isn’t going away anytime soon. That’s why we’ve built Open Bidding to work seamlessly with waterfall mediation to maximize the value of every impression and simplify operations. Beta participants have noted the compatibility as a meaningful value-add.
Get set up quickly with developer-first tools
Regardless of whether developers use waterfall mediation, Open Bidding, or both, we’re committed to delivering the best experience on our platform. We know onboarding processes can be painful and create extra work—but we’ve got a few new tools to help with that.
AdMob’s new Mediation Test Suite beta makes it easier to test if your app is set up correctly to display ads, so you don’t miss out on revenue. Now, if you hit a snag with the SDK integration, you can test each individual network and instantly identify the source of the issue (e.g. SDK, adapter, credentials, etc.)—no more blindly troubleshooting issues and searching for the source. Once everything checks out, you’ll see an ad in the testing environment to confirm that the pipes are ready.
Another new beta feature is the ability to “warm up” your SDK adapters to reduce timeouts on the first ad request. Soon, you'll be able to initialize all SDK adapters in a single call to AdMob, ensuring all adapters are ready to go when the first ad is requested.
Our goal is to make setup easier. And if you do get stuck, we have the resources and global support to help you move fast.
In addition to building better tools, we’re partnering with players across the ecosystem and moving to a more efficient model that enables developers to earn more from their apps. Stay tuned for more exciting announcements over the next few months.
*Waterfall mediation uses historical revenue data to prioritize networks and call them one at a time.
This blog post is a collaboration between Google and Twitter, authored by Jingwei Hao with support from César Puerta and Fred Lohner from Twitter, and developed with Jingyu Shi from Google.
As app developers at Twitter, we know that battery life is an important aspect of the mobile experience for our users. Over time we've taken several steps to optimize our app to work with the power saving features, particularly around push notifications. In this article, we'll share what we did to save battery life on our users' devices in the hope this will help other developers optimize their apps as well.
Firebase Cloud Messaging migration
Earlier this year, we upgraded our notifications messaging library to the next evolution of GCM: Firebase Cloud Messaging (FCM). This gave us the ability to use the latest APIs and get access to the additional features Firebase has to offer.
This was a very unique migration for us for multiple reasons:
- Push notifications are an important part of Twitter's mobile engagement strategy. Notifications help our users stay informed and connected with the world, and helped a man get his nuggs. Therefore, we couldn't afford unreliable delivery of notifications during or after the migration, which would negatively impact the platform.
- Since it's not possible to support both GCM and FCM in the same application, we were not able to use typical A/B testing techniques during the migration.
Migrating to FCM proved valuable when testing against the power saving features on Android. With APIs like getPriority() and getOriginalPriority(), FCM gave us insight into whether the priority of our FCM messages was downgraded.
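The downgrade check those two APIs enable can be sketched as a small pure function. This is a minimal sketch, assuming the priority constant values from Firebase's RemoteMessage class; on a device you would call it from FirebaseMessagingService.onMessageReceived() with the real message fields.

```kotlin
// Values mirror the com.google.firebase.messaging.RemoteMessage priority
// constants (assumed here so the sketch is self-contained).
const val PRIORITY_UNKNOWN = 0
const val PRIORITY_HIGH = 1
const val PRIORITY_NORMAL = 2

// A message was downgraded when it was sent as high priority but delivered
// as normal priority -- e.g. because the app exhausted its standby-bucket
// quota for high priority messages.
fun wasDowngraded(deliveredPriority: Int, originalPriority: Int): Boolean =
    originalPriority == PRIORITY_HIGH && deliveredPriority == PRIORITY_NORMAL

// In an Android app this would be invoked roughly as:
//   override fun onMessageReceived(message: RemoteMessage) {
//       if (wasDowngraded(message.priority, message.originalPriority)) {
//           // log the downgrade for analytics
//       }
//   }
```

Logging this signal over time is what lets you verify, as we did, whether your high priority sends are surviving delivery intact.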
Set the right FCM message priority
On the backend at Twitter, we always try to make sure that notifications are assigned the appropriate priority, so that high priority FCM messages are only used to generate a user-visible notification. In fact, only a very small fraction of the notifications we send are classified as high priority.
App Standby Buckets were introduced in Android 9 Pie. This feature imposes restrictions on the number of high priority messages an app will receive based on which bucket the app belongs to. As a result, high priority messages should be reserved for the notifications users are most likely to interact with. Using high priority FCM messages for actions that do not involve user interaction can have negative consequences: once an app exhausts its standby bucket quota, subsequent genuinely urgent FCM messages will be downgraded to normal priority and delayed while the device is in Doze.
To understand how our app performs with App Standby Buckets, we gathered statistics on the notification priorities at both send and delivery time for the Twitter App:
- During our test, we observed that none of the high priority FCM messages were downgraded, not even for the 2% of devices that were in the frequent or a lower bucket when the notifications were delivered. This is particularly worth noting since the system could impose restrictions on high priority FCM messages for devices in less active buckets.
- 86% of the devices were in the active bucket when high priority FCM messages were delivered. This is another positive signal that our priority assignment is consistent with our users' usage patterns.
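The bucket classification behind these statistics can be sketched as follows. This is a minimal sketch, assuming the standby bucket constant values from android.app.usage.UsageStatsManager; the helper name is ours, not an Android API.

```kotlin
// Bucket values mirror the android.app.usage.UsageStatsManager constants
// (assumed here so the sketch is self-contained); a larger value means a
// less active bucket.
const val STANDBY_BUCKET_ACTIVE = 10
const val STANDBY_BUCKET_WORKING_SET = 20
const val STANDBY_BUCKET_FREQUENT = 30
const val STANDBY_BUCKET_RARE = 40

// "Frequent or lower" in the stats above means a bucket value of
// FREQUENT or greater (i.e. a less active bucket).
fun isFrequentOrLower(bucket: Int): Boolean = bucket >= STANDBY_BUCKET_FREQUENT

// On a device (API 28+) the current bucket would be read with:
//   context.getSystemService(UsageStatsManager::class.java).appStandbyBucket
```

Recording this value at delivery time is how an app can correlate bucket placement with downgrade behavior, as in the measurements above.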
Avoid follow-up data prefetch for notifications
Prefetching data is a popular practice to enrich the user experience around notifications. This entails including a piece of metadata within the payload of a notification; when the notification is delivered, the app leverages the payload data to start one or more network calls to download more data before rendering the notification. FCM payloads have a 4KB limit, so when more data is needed to create a rich notification, this prefetch practice is used. But doing so has a trade-off: it increases both device power consumption and notification latency. At Twitter, among all the types of notifications we push to our users, only one type does prefetching, and it makes up less than 1% of the volume. In addition, in the cases where data prefetching for a notification is unavoidable, it should be scheduled as a JobScheduler or WorkManager task to avoid issues with the background execution limits introduced in Android 8.0 Oreo.
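The send-side decision—does the notification fit in the payload, or must the app prefetch?—can be sketched like this. A minimal sketch: the 4 * 1024 limit is FCM's documented payload cap, and the WorkManager usage is described in comments rather than code so the core stays self-contained.

```kotlin
// FCM caps a single message payload at 4KB. If the rich-notification data
// fits, ship it in the payload; otherwise ship only a small reference
// (e.g. a media URL) and prefetch the rest after delivery.
const val FCM_PAYLOAD_LIMIT_BYTES = 4 * 1024

fun fitsInPayload(payload: ByteArray): Boolean =
    payload.size <= FCM_PAYLOAD_LIMIT_BYTES

// When the data does not fit, the delivery-side prefetch should be enqueued
// as a WorkManager task instead of running directly in onMessageReceived(),
// so it respects the Oreo+ background execution limits, roughly:
//   val request = OneTimeWorkRequestBuilder<PrefetchWorker>()
//       .setConstraints(
//           Constraints.Builder()
//               .setRequiredNetworkType(NetworkType.CONNECTED)
//               .build())
//       .build()
//   WorkManager.getInstance(context).enqueue(request)
// (PrefetchWorker is a hypothetical Worker subclass for the download.)
```

Keeping the prefetch path rare—as in our less-than-1% figure—is what keeps the power and latency cost of this pattern bounded.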
Create notification channels
In addition, Notification Channels were introduced in the Android 8 Oreo release. We designed the importance level of our channels with multiple factors in mind, user experience being the most important and power saving being another. Currently, Twitter for Android has nine notification channels; only direct messaging, emergency, and security are designated high importance, leaving most of the channels at a lower importance level, making them less intrusive.
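A channel design like the one described above can be sketched as a mapping from channel id to importance. This is a minimal sketch: the importance values mirror android.app.NotificationManager constants, and the channel ids are illustrative, not Twitter's actual channel ids.

```kotlin
// Importance values mirror android.app.NotificationManager constants
// (assumed here so the sketch is self-contained).
const val IMPORTANCE_LOW = 2
const val IMPORTANCE_HIGH = 4

// Hypothetical channel layout: only the channels users must not miss get
// high importance; everything else stays low to be less intrusive.
val channelImportance = mapOf(
    "direct_messages" to IMPORTANCE_HIGH,
    "emergency" to IMPORTANCE_HIGH,
    "security" to IMPORTANCE_HIGH,
    "recommendations" to IMPORTANCE_LOW,
    "news" to IMPORTANCE_LOW,
)

fun highImportanceCount(channels: Map<String, Int>): Int =
    channels.count { it.value == IMPORTANCE_HIGH }

// On Android 8+, each entry would become a NotificationChannel, roughly:
//   val channel = NotificationChannel(id, userVisibleName, importance)
//   notificationManager.createNotificationChannel(channel)
```

The design choice is the ratio: keeping high-importance channels to a small minority is what lets the rest of the app's notifications stay quiet and power friendly.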
Summary
We set out to improve our user experience by migrating to FCM, prioritizing notifications cautiously, limiting prefetching, and carefully designing notification channels. We were glad to find that these changes had a positive impact on battery performance and enabled our app to take advantage of the power-saving features introduced in recent versions of Android.
Designing for power optimization in a large evolving application is a complex and ongoing process, particularly as the Android platform grows and provides more granular controls. At Twitter we strongly believe in continuous refinement to improve application performance and resource consumption. We hope this discussion is useful in your own quest to optimize your app's performance and use of resources 💙
Source: Android Developers Blog