Alternative input methods for Android TV

Posted by Benjamin Baxter, Developer Advocate and Bacon Connoisseur

Hero image: phones and TVs communicating with each other.

All TVs have the same problem with keyboard input: It is very cumbersome to hunt and peck for each letter using a D-pad with a remote. And if you make a mistake, trying to correct it compounds the problem.

APIs like Smart Lock and Autofill can ease users' frustrations, but for certain flows, like login, you still need to collect complex input that is difficult for users to enter with the on-screen keyboard.

With the Nearby Connections API, you can use a second screen to gather input from the user with less friction.

How Nearby Connections works

From the documentation:

"Nearby Connections is an offline peer-to-peer socket model for communication based on advertising and discovering devices in proximity.

Usage of the API falls into two phases: pre-connection, and post-connection.

In the pre-connection phase, Advertisers advertise themselves, while Discoverers discover nearby Advertisers and send connection requests. A connection request from a Discoverer to an Advertiser initiates a symmetric authentication flow that results in both sides independently accepting (or rejecting) the connection request.

After a connection request is accepted by both sides, the connection is established and the devices enter the post-connection phase, during which both sides can exchange data."

In most cases the TV is the advertiser and the phone is the discoverer. In the example below, the second device is assumed to be a phone, but the API and patterns described in this article are not limited to phones; a tablet, for example, could also be the second screen device.

The TV is the Advertiser and the phone is the Discoverer.

Login Example

There are many times when keyboard input is required. Authenticating users and collecting billing information (like zip codes and the name on the card) are common cases. This example walks through a login flow that uses a second screen to show how Nearby Connections can help reduce friction.

1. The user opens your app on their TV and needs to log in. You can show a screen of options similar to the setup flow for a new TV.

Android TV setup with prompt to continue on the user's phone.

2. After the user chooses to log in with their phone, the TV should start advertising and send the user to the associated login app on their phone, which should start discovering.

There are a variety of ways to open the app on the phone. As an example, Android TV's setup flow has the user open the corresponding app on their mobile device. Initiating the hand-off is more a UX concern than a technology concern.

Animation showing setup hand off from TV to phone.

3. The phone app should display the advertising TV and prompt the user to initiate the connection. After the (encrypted -- see Security Considerations below for more on this) connection is established the TV can stop advertising and the phone can stop discovering.

"Advertising/Discovery using Nearby Connections for hours on end can affect a device's battery. While this is not usually an issue for a plugged-in TV, it can be for mobile devices, so be conscious about stopping advertising and discovery once they're no longer needed."

4. Next, the phone can start collecting the user's input. Once the user enters their login information, the phone should send it to the TV in a BYTES payload over the secure connection.

5. When the TV receives the message it should send an ACK (using a BYTES payload) back to the phone to confirm delivery.

6. When the phone receives the ACK, it can safely close the connection.

The following diagram summarizes the sequence of events:

Sequence diagram of order of events to setup a connect and send a message.

UX considerations

Nearby Connections needs location permissions to be able to discover nearby devices. Be transparent with your users. Tell them why they need to grant the location permission on their phone.

Since the TV is advertising, it does not need location permissions.

Start advertising: The TV code

After the user chooses to login on the phone, the TV should start advertising. This is a very simple process with the Nearby API.

override fun onGuidedActionClicked(action: GuidedAction?) {
    super.onGuidedActionClicked(action)
    if (action == loginAction) {
        // Update the UI so the user knows to check their phone
        navigationFlowCallback.navigateToConnectionDialog()
        doStartAdvertising(requireContext())
    }
}

When the user clicks a button, update the UI to tell them to look at their phone to continue. Be sure to offer a way to cancel the remote login and try manually with the cumbersome onscreen keyboard.

This example uses a GuidedStepFragment but the same UX pattern applies to whatever design you choose.

Advertising is straightforward. You need to supply a name, a service id (typically the package name), and a `ConnectionLifecycleCallback`.

You also need to choose a strategy that both the TV and the phone use. Since it is possible that the user has multiple TVs (living room, bedroom, etc.), the best strategy to use is P2P_CLUSTER.

Then start advertising. The onSuccessListener and onFailureListener tell you whether or not the device was able to start advertising; they do not indicate that a device has been discovered.

fun doStartAdvertising(context: Context) {
    Nearby.getConnectionsClient(context).startAdvertising(
        context.getString(R.string.tv_name),
        context.packageName,
        connectionLifecycleCallback,
        AdvertisingOptions.Builder().setStrategy(Strategy.P2P_CLUSTER).build()
    )
    .addOnSuccessListener {
        Log.d(LoginStepFragment.TAG, "We are advertising!")
    }
    .addOnFailureListener {
        Log.d(LoginStepFragment.TAG, "We cannot start advertising.")
        Toast.makeText(
            context, "We cannot start advertising.", Toast.LENGTH_LONG)
                .show()
    }
}

The real magic happens in the `connectionLifecycleCallback` that is triggered when devices start to initiate a connection. The TV should accept the handshake from the phone (after performing the necessary authentication -- see Security Considerations below for more) and supply a payload listener.

val connectionLifecycleCallback = object : ConnectionLifecycleCallback() {

    override fun onConnectionInitiated(
            endpointId: String, 
            connectionInfo: ConnectionInfo
    ) {
        Log.d(TAG, "Connection initialized to endpoint: $endpointId")
        // Make sure to authenticate using `connectionInfo.authenticationToken` 
        // before accepting
        Nearby.getConnectionsClient(context)
            .acceptConnection(endpointId, payloadCallback)
    }

    override fun onConnectionResult(
        endpointId: String, 
        connectionResolution: ConnectionResolution
    ) {
        Log.d(TAG, "Received result from connection: ${connectionResolution.status.statusCode}")
        doStopAdvertising()
        when (connectionResolution.status.statusCode) {
            ConnectionsStatusCodes.STATUS_OK -> {
                Log.d(TAG, "Connected to endpoint: $endpointId")
                otherDeviceEndpointId = endpointId
            }
            else -> {
                otherDeviceEndpointId = null
            }
        }
    }

    override fun onDisconnected(endpointId: String) {
        Log.d(TAG, "Disconnected from endpoint: $endpointId")
        otherDeviceEndpointId = null
    }
}
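The `doStopAdvertising()` call in `onConnectionResult` is not shown above. A minimal sketch (assuming the same `context` property used by the other TV-side snippets) is just a thin wrapper around the Nearby client:

```kotlin
// Sketch of the doStopAdvertising() helper referenced in
// onConnectionResult() above. Assumes the same `context`
// property used by the other TV-side snippets.
fun doStopAdvertising() {
    Log.d(TAG, "Stopping advertising.")
    Nearby.getConnectionsClient(context).stopAdvertising()
}
```

Stopping advertising does not tear down an already established connection, so it is safe to call as soon as a connection result arrives.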

The payloadCallback listens for the login information sent from the phone. After receiving the login information, the connection is no longer needed. We go into more detail later in the Ending the Conversation section.

Discovering the big screen: The phone code

Nearby Connections does not show its own consent dialog. However, the location permission must be granted in order for discovery with Nearby Connections to work its magic. (It uses BLE scanning under the covers.)

After opening the app on the phone, start by prompting the user for location permission if not already granted on devices running Marshmallow and higher.
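As a sketch, that permission check might look like the following. Nearby Connections required ACCESS_COARSE_LOCATION at the time of writing; `REQUEST_CODE_LOCATION` and `startDiscovery()` are hypothetical names for your request code and for a method wrapping the discovery call shown below.

```kotlin
// Sketch: check for the location permission before discovering.
// REQUEST_CODE_LOCATION and startDiscovery() are hypothetical names.
private fun checkAndRequestLocationPermission() {
    val permission = Manifest.permission.ACCESS_COARSE_LOCATION
    if (ContextCompat.checkSelfPermission(this, permission)
            == PackageManager.PERMISSION_GRANTED) {
        startDiscovery()
    } else {
        // The user's answer arrives in onRequestPermissionsResult().
        ActivityCompat.requestPermissions(
            this, arrayOf(permission), REQUEST_CODE_LOCATION)
    }
}
```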

Once the permission is granted, start discovering, confirm the connection, collect the credentials, and send a message to the TV app.

Discovering is as simple as advertising. You need a service id (typically the package name -- this should be the same on the Discoverer and Advertiser for them to see each other), a name, and an `EndpointDiscoveryCallback`. Similar to the TV code, the flow is triggered by callbacks based on the connection status.

Nearby.getConnectionsClient(context).startDiscovery(
        context.packageName,
        mobileEndpointDiscoveryCallback,
        DiscoveryOptions.Builder().setStrategy(Strategy.P2P_CLUSTER).build()
    )
    .addOnSuccessListener {
        // We're discovering!
        Log.d(TAG, "We are discovering!")
    }
    .addOnFailureListener {
        // We were unable to start discovering.
        Log.d(TAG, "We cannot start discovering!")
    }

The Discoverer's listeners are similar to the Advertiser's success and failure listeners; they signal if the request to start discovery was successful or not.

Once you discover an advertiser, the `EndpointDiscoveryCallback` is triggered. You need to keep track of the other endpoint so you know where to send the payload (e.g., the user's credentials) later.

val mobileEndpointDiscoveryCallback = object : EndpointDiscoveryCallback() {
    override fun onEndpointFound(
        endpointId: String, 
        discoveredEndpointInfo: DiscoveredEndpointInfo
    ) {
        // An endpoint was found!
        Log.d(TAG, "An endpoint was found, ${discoveredEndpointInfo.endpointName}")
        Nearby.getConnectionsClient(context)
            .requestConnection(
                context.getString(R.string.phone_name), 
                endpointId, 
                connectionLifecycleCallback)
    }

    override fun onEndpointLost(endpointId: String) {
        // A previously discovered endpoint has gone away.
        Log.d(TAG, "An endpoint was lost, $endpointId")
    }
}

One of the devices must initiate the connection. Since the Discoverer has a callback for endpoint discovery, it makes sense for the phone to request the connection to the TV.

The phone asks for a connection supplying a `connectionLifecycleCallback` which is symmetric to the callback in the TV code.

val connectionLifecycleCallback = object : ConnectionLifecycleCallback() {
    override fun onConnectionInitiated(
        endpointId: String,
        connectionInfo: ConnectionInfo
    ) {
        Log.d(TAG, "Connection initialized to endpoint: $endpointId")
        // Make sure to authenticate using `connectionInfo.authenticationToken` before accepting
        Nearby.getConnectionsClient(context)
                .acceptConnection(endpointId, payloadCallback)
    }

    override fun onConnectionResult(
        endpointId: String,
        connectionResolution: ConnectionResolution
    ) {
        Log.d(TAG, "Connection result from endpoint: $endpointId")
        when (connectionResolution.status.statusCode) {
            ConnectionsStatusCodes.STATUS_OK -> {
                Log.d(TAG, "Connected to endpoint: $endpointId")
                otherDeviceEndpointId = endpointId
                waitingIndicator.visibility = View.GONE
                emailInput.editText?.isEnabled = true
                passwordInput.editText?.isEnabled = true

                Nearby.getConnectionsClient(this@MainActivity).stopDiscovery()
            }
            else -> {
                otherDeviceEndpointId = null
            }
        }
    }

    override fun onDisconnected(endpointId: String) {
        Log.d(TAG, "Disconnected from endpoint: $endpointId")
        otherDeviceEndpointId = null
    }
}

Once the connection is established, stop discovery to avoid keeping this battery-intensive operation running longer than needed. The example stops discovery after the connection is established, but it is possible for a user to leave the activity before that happens. Be sure to stop the discovery/advertising in onStop() on both the TV and phone.


override fun onStop() {
    super.onStop()
    Nearby.getConnectionsClient(this).stopDiscovery()
}
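The TV side can do the equivalent for advertising. A sketch, assuming the advertising code lives in a fragment:

```kotlin
// TV-side analog (sketch): stop advertising when the fragment stops,
// in case the user leaves before a connection is established.
override fun onStop() {
    super.onStop()
    Nearby.getConnectionsClient(requireContext()).stopAdvertising()
}
```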


Just like in the TV app, when you accept the connection you supply a payload callback. The callback listens for messages from the TV app, such as the ACK described above, to clean up the connection.

After the devices are connected, the user can use the keyboard and send their authentication information to the TV by calling `sendPayload()`.

fun sendCredentials() {

    val email = emailInput.editText?.text.toString()
    val password = passwordInput.editText?.text.toString()

    val creds = "$email:$password"
    val payload = Payload.fromBytes(creds.toByteArray())
    // Avoid logging the credentials themselves.
    Log.d(TAG, "Sending login payload")
    otherDeviceEndpointId?.let { endpointId ->
        Nearby.getConnectionsClient(this)
                .sendPayload(endpointId, payload)
    }
}

Ending the conversation

After the phone sends the payload to the TV (and the login is successful), there is no reason for the devices to remain connected. The TV can initiate the disconnection with a simple shutdown protocol.

The TV should send an ACK to the phone after it receives the credential payload.

val payloadCallback = object : PayloadCallback() {
    override fun onPayloadReceived(endpointId: String, payload: Payload) {
        if (payload.type == Payload.Type.BYTES) {
            payload.asBytes()?.let {
                val body = String(it)
                Log.d(TAG, "A payload was received: $body")
                // Validate that this payload contains the login credentials, and process them.

                val ack = Payload.fromBytes(ACK_PAYLOAD.toByteArray())
                Nearby.getConnectionsClient(context).sendPayload(endpointId, ack)
            }
        }
    }

    override fun onPayloadTransferUpdate(
        endpointId: String,
        update: PayloadTransferUpdate
    ) {    }
}

The phone should have a `PayloadCallback` that initiates a disconnection in response to the ACK. This is also a good time to reset the UI to show an authenticated state.

private val payloadCallback = object : PayloadCallback() {
    override fun onPayloadReceived(endpointId: String, payload: Payload) {
        if (payload.type == Payload.Type.BYTES) {
            payload.asBytes()?.let {
                val body = String(it)
                Log.d(TAG, "A payload was received: $body")

                if (body == ACK_PAYLOAD) {
                    waitingIndicator.visibility = View.VISIBLE
                    waitingIndicator.text = getString(R.string.login_successful)
                    emailInput.editText?.isEnabled = false
                    passwordInput.editText?.isEnabled = false
                    loginButton.isEnabled = false

                    Nearby.getConnectionsClient(this@MainActivity)
                        .disconnectFromEndpoint(endpointId)
                }
            }
        }
    }

    override fun onPayloadTransferUpdate(
        endpointId: String,
        update: PayloadTransferUpdate
    ) {    }
}

Security considerations

For security (especially since we're sending over sensitive information like login credentials), it's strongly recommended that you authenticate the connection by showing a code and having the user confirm that the two devices being connected are the intended ones -- without this, the connection established by Nearby Connections is encrypted but not authenticated, and is susceptible to Man-In-The-Middle attacks. The documentation goes into greater detail on how to authenticate a connection.

Let the user accept the connection by displaying a confirmation code on both devices.
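One way to do this (a sketch, not the only pattern) is to surface `connectionInfo.authenticationToken` in `onConnectionInitiated` on both devices, and only accept once the user confirms the codes match. The AlertDialog here is illustrative; any confirmation UI works.

```kotlin
// Sketch: authenticate before accepting. The short token string is
// identical on both devices for the same connection, so showing it
// on both screens lets the user verify the pairing.
override fun onConnectionInitiated(
    endpointId: String,
    connectionInfo: ConnectionInfo
) {
    AlertDialog.Builder(context)
        .setTitle("Accept connection to ${connectionInfo.endpointName}?")
        .setMessage("Confirm this code matches on both devices: " +
            connectionInfo.authenticationToken)
        .setPositiveButton("Accept") { _, _ ->
            Nearby.getConnectionsClient(context)
                .acceptConnection(endpointId, payloadCallback)
        }
        .setNegativeButton("Cancel") { _, _ ->
            Nearby.getConnectionsClient(context)
                .rejectConnection(endpointId)
        }
        .show()
}
```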

Does your app offer a second screen experience?

There are many times when a user needs to supply input to a TV app. The Nearby Connections API provides a way to offload the hardship of an on-screen D-pad-driven keyboard to an easy and familiar phone keyboard.

What use cases do you have where a second screen would simplify your user's life? Leave a comment or send me (@benjamintravels) or Varun (@varunkapoor, Team Lead for Nearby Connections) a tweet to continue the discussion.

The High Five: Put some R-E-S-P-E-C-T on it

Daydreaming and I’m thinking of Trends. This week, with a little help from the Google News Lab we honor the Queen of Soul, celebrate birthdays, shine some light on the left-handed among us and much more. Here’s a look at this week’s top trends.

Paying respect to a legend

On Thursday, we said goodbye to Aretha “Queen of Soul” Franklin who lost her battle with pancreatic cancer at 76. This musical legend gave the world iconic hits like “Respect,” “Natural Woman,” “Think,” “I Say A Little Prayer,” and “Chain of Fools,” all of which came in as the most-searched Aretha Franklin songs this week. She never shied away from the opportunity to flaunt her dramatic furs and show-stopping hats, all while reaching octaves and bravados that could make anyone drown in their own tears. Celebrities, world leaders and fans alike took time to pay their respects with folks in D.C., Michigan, Maryland, Georgia and Mississippi continuing to Rock Steady and search for details on the Queen of Soul. May we forever ride the midnight train of soul and take a drive down the Freeway of Love because that’s what Aretha would want us to do.

Dodging traffic in Los Angeles

Traffic is a way of life in Los Angeles and Elon Musk’s Boring Company is looking to make life a little bit easier, at least if you’re going to Dodger Stadium. The company is proposing a 3.6-mile underground tunnel in an effort to curb congestion on L.A. roads and people are intrigued. So much so that search interest in “boring company tunnel” spiked more than 60 percent over the past week in the U.S. Some people had tunnel vision, also searching for the North River Tunnels, Twin Peaks tunnel and Hezekiah’s Tunnel.

Sixty going on thirty

Madonna and Angela Bassett celebrated their sixtieth birthdays this week and people were in utter disbelief. Questions like “How old was Madonna when she had her daughter?” and “How does Angela Bassett stay looking so young?” were trending, as the mystery behind their fountain of youth glow remains unsolved. Washington D.C. was one of the top regions searching for both Madonna and Angela Bassett and the top search question on turning 60 was “What to say to someone turning 60?” Uhhh … Happy Birthday?  

Righty tighty, lefty loosey

Left-handers day was this past Monday and to celebrate, Oreo created a special left-handers package and even sent a free package of cookies to all the residents of Left Hand, West Virginia. Lefties and righties alike took to Search to find out, "What percentage of people are left handed," "Is there a left handers day club," and "Is LeBron James left handed." They also wanted to know if there were products and perks made especially for lefties such as "Best pens for lefties" and "Scholarships for lefties." Looks like this week was the right week to be left-handed.

Representation matters

Crazy Rich Asians, the first major studio production in 25 years to star an all-Asian cast, opened in theaters this week and is on track to net $30 million by the end of the weekend. The top five states searching for the film include D.C., California, Hawaii, New York and Washington, and people searching for Crazy Rich Asians also searched for Geetha Govindam, Mile 22, and BlacKkKlansman over the past week in the U.S. Double-feature weekend at the theater anyone?

ZuriHac 2018: Haskell hackathon in Rapperswil

Google Open Source recently co-sponsored a three-day hackathon for Haskell, an open source functional programming language. Ivan Krišto from Google’s Zürich office talks more about the event below.

Over the weekend of June 9th, Rapperswil, Switzerland became a home for 300 Haskellers. Hochschule für Technik Rapperswil hosted the seventh annual ZuriHac, the biggest Haskell Hackathon in Europe. ZuriHac is a free, international coding festival with the goal to expand our community and to build and improve Haskell libraries, tools and infrastructure.

Participants could choose to hack all day long, attend the Haskell beginners course led by Julie Moronuki, join the Glasgow Haskell Compiler (GHC) DevOps track organized by GHC contributors with the goal to bring in new contributors, listen to the Haskell flavoured talks, or socialize and swim in the lake. The event was colocated with C++ standardization committee meetings which offered a unique opportunity for sharing ideas between the two communities.

Here is a short summary of the featured talks at ZuriHac. The event concluded with a presentation of the results of the three-day hackathon: project presentations.

Video by Hochschule für Technik Rapperswil.

Once again, we broke the attendance record! We’re already preparing for ZuriHac 2019 and hope to keep up this amazing growth. See you next year!

By Ivan Krišto, Software Engineer

Safety-first AI for autonomous data center cooling and industrial control

Many of society’s most pressing problems have grown increasingly complex, so the search for solutions can feel overwhelming. At DeepMind and Google, we believe that if we can use AI as a tool to discover new knowledge, solutions will be easier to reach.

In 2016, we jointly developed an AI-powered recommendation system to improve the energy efficiency of Google’s already highly-optimized data centers. Our thinking was simple: Even minor improvements would provide significant energy savings and reduce CO2 emissions to help combat climate change.

Now we’re taking this system to the next level: instead of human-implemented recommendations, our AI system is directly controlling data center cooling, while remaining under the expert supervision of our data center operators. This first-of-its-kind cloud-based control system is now safely delivering energy savings in multiple Google data centers.

How it works

Every five minutes, our cloud-based AI pulls a snapshot of the data center cooling system from thousands of sensors and feeds it into our deep neural networks, which predict how different combinations of potential actions will affect future energy consumption. The AI system then identifies which actions will minimize the energy consumption while satisfying a robust set of safety constraints. Those actions are sent back to the data center, where the actions are verified by the local control system and then implemented.
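As a purely conceptual sketch (none of these types or functions are from the actual system; they just mirror the loop the text describes), one control step looks like: take a snapshot, score candidate actions, drop low-confidence ones, enforce safety constraints, and pick the action with the lowest predicted energy. The confidence threshold is a hypothetical constant.

```kotlin
// Conceptual sketch of one control step: snapshot -> predict ->
// filter by confidence and safety -> choose the lowest-energy action.
data class Action(val setting: String, val value: Double)

const val CONFIDENCE_THRESHOLD = 0.9  // hypothetical cutoff

fun controlStep(
    snapshot: Map<String, Double>,                          // sensor readings
    candidates: List<Action>,                               // potential actions
    predictEnergy: (Map<String, Double>, Action) -> Double, // neural net stand-in
    confidence: (Action) -> Double,
    satisfiesSafetyConstraints: (Action) -> Boolean
): Action? =
    candidates
        .filter { confidence(it) >= CONFIDENCE_THRESHOLD }  // drop low-confidence actions
        .filter { satisfiesSafetyConstraints(it) }          // cloud-side constraint check
        .minByOrNull { predictEnergy(snapshot, it) }        // minimize predicted energy
```

The chosen action would then still be verified by the local control system, as described below.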

The idea evolved out of feedback from our data center operators who had been using our AI recommendation system. They told us that although the system had taught them some new best practices—such as spreading the cooling load across more, rather than less, equipment—implementing the recommendations required too much operator effort and supervision. Naturally, they wanted to know whether we could achieve similar energy savings without manual implementation.


We’re pleased to say the answer was yes!

We wanted to achieve energy savings with less operator overhead. Automating the system enabled us to implement more granular actions at greater frequency, while making fewer mistakes.
Dan Fuenffinger, Data Center Operator, Google

Designed for safety and reliability

Google's data centers contain thousands of servers that power popular services including Google Search, Gmail and YouTube. Ensuring that they run reliably and efficiently is mission-critical. We've designed our AI agents and the underlying control infrastructure from the ground up with safety and reliability in mind, and use eight different mechanisms to ensure the system will behave as intended at all times.

One simple method we’ve implemented is to estimate uncertainty. For every potential action—and there are billions—our AI agent calculates its confidence that this is a good action. Actions with low confidence are eliminated from consideration.

Another method is two-layer verification. Optimal actions computed by the AI are vetted against an internal list of safety constraints defined by our data center operators. Once the instructions are sent from the cloud to the physical data center, the local control system verifies the instructions against its own set of constraints. This redundant check ensures that the system remains within local constraints and operators retain full control of the operating boundaries.

Most importantly, our data center operators are always in control and can choose to exit AI control mode at any time. In these scenarios, the control system will transfer seamlessly from AI control to the on-site rules and heuristics that define the automation industry today.

Find out about the other safety mechanisms we’ve developed below:

Diagram of the safety mechanisms in the AI control system.

Increasing energy savings over time

Whereas our original recommendation system had operators vetting and implementing actions, our new AI control system directly implements the actions. We’ve purposefully constrained the system’s optimization boundaries to a narrower operating regime to prioritize safety and reliability, meaning there is a risk/reward trade off in terms of energy reductions.

Despite being in place for only a matter of months, the system is already delivering consistent energy savings of around 30 percent on average, with further expected improvements. That’s because these systems get better over time with more data, as the graph below demonstrates. Our optimization boundaries will also be expanded as the technology matures, for even greater reductions.

Graph of AI control system performance over time relative to the historical baseline.

This graph plots AI performance over time relative to the historical baseline before AI control. Performance is measured by a common industry metric for cooling energy efficiency, kW/ton (or energy input per ton of cooling achieved). Over nine months, our AI control system performance increases from a 12 percent improvement (the initial launch of autonomous control) to around a 30 percent improvement.

Our direct AI control system is finding yet more novel ways to manage cooling that have surprised even the data center operators. Dan Fuenffinger, one of Google’s data center operators who has worked extensively alongside the system, remarked: "It was amazing to see the AI learn to take advantage of winter conditions and produce colder than normal water, which reduces the energy required for cooling within the data center. Rules don’t get better over time, but AI does."

We’re excited that our direct AI control system is operating safely and dependably, while consistently delivering energy savings. However, data centers are just the beginning. In the long term, we think there's potential to apply this technology in other industrial settings, and help tackle climate change on an even grander scale.

Beta Channel Update for Desktop

The beta channel has been updated to 69.0.3497.42 for Windows, Mac, and Linux.


A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Krishna Govind
Google Chrome

Simplifying Data Studio embeds and social sharing

Today we are introducing two new features to make sharing your Data Studio visualizations easier, including enhanced support for embedding your reports across the web, and rich snippets of your reports when you share them on social networks.

Embed with Embed.ly

Data Studio now supports embedding interactive reports on Medium, Reddit and hundreds of other sites that use Embed.ly. To embed your report, simply paste the report URL into your article. The embedded report syncs in real time, making it possible for you to distribute your interactive reports. Learn more.

Here are some examples of embedded reports:

DS Embed 1

Medium article showing Stack Overflow trends. Link

DS Embed 2

Reddit post showing real time departures for BART (Bay Area Rapid Transit). Link

Share rich snippets on social

When you share your report link on social platforms or messaging apps, you will now see a rich snippet including the title, thumbnail and description of the report. Your audience will know what to expect from the link and have better visibility to your reports. Rich snippets also help make your content more searchable on social networks.

To generate rich snippets, post the report URL you intend to share.

Here's an example of a rich snippet:

DS Embed 3

Rich snippets work on any social platform or messaging app that supports Open Graph Protocol including Google+, Facebook, Twitter, LinkedIn, Reddit and apps like Hangouts, iMessage and Slack.

Helping you find useful information fast on Search

Imagine you’re remodeling your kitchen, and you want information about how quartz compares to granite for your new countertops. Sure, Google can tell you what quartz and granite are, but that’s perhaps not what you had in mind. Chances are you’re hoping to learn more about the differences in cost, benefits, and durability of each, and may be looking for guidance on other subtopics to explore.


For these types of queries, we’re introducing a new way to get you to relevant information fast and help you get a glimpse of multiple aspects of a topic with a single search.


quartz vs. granite

Now when you search for something like [quartz vs. granite], you’ll see a panel with a set of relevant subtopics to explore. As another example, when you search [emergency fund], you'll get a quick view of information that relates to the recommended size, purpose, and importance of an emergency fund, and you can easily click the links to these relevant sources to learn more. This new format is meant to help guide you with what we understand to be common, useful aspects of the topic and help you sift through the information available, all with the goal of delivering the most relevant results for you.


These new panels are automatically generated based on our understanding of these topics from content on the web, and we hope you find them useful as they roll out over the next few days.


This update is the latest in a series of improvements we’ve been making to help you get information quickly with Search. As always, if you have any feedback on the information you see, please let us know via the feedback link at the bottom of the search results page. To learn more about how these types of features work, check out our post on featured snippets.

Source: Search


Streaming support spec for hearing aids on Android

Posted by Seang Chau, Vice President, Engineering

According to the World Health Organization, around 466 million people worldwide have disabling hearing loss. This number is expected to increase to 900 million people by the year 2050. Google is working with GN Hearing to create a new open specification for hearing aid streaming support on future versions of Android. Users with hearing loss will be able to connect, pair, and monitor their hearing aids so they can hear their phones loudly and clearly.

Hearing aid users expect a high quality, low latency experience with minimal impact on phone and hearing aid battery life. We've published a new hearing aid spec for Android smartphones: Audio Streaming for Hearing Aids (ASHA) on Bluetooth Low Energy Connection-Oriented Channels. ASHA is designed to have minimal impact on battery life and low latency while maintaining a high quality audio experience for users who rely on hearing aids. We look forward to continually evolving the spec to even better meet the needs of our users.

The spec details the pairing and connectivity, network topology, system architecture, and system requirements for implementing hearing aids using low energy connection-oriented channels. Any hearing aid manufacturer can now build native hearing aid support for Android.

The protocol specification is available here.

Making it easier to Search in Swahili

Habari ya leo? Swahili is one of the most spoken African languages and we’re now making it much easier for the over 100 million Swahili speakers to search for things they care about. When someone conducts a search, they want answers as quickly as possible. To help Swahili speakers discover new information more easily, we’re now making the Google Knowledge Graph available in Swahili. So next time a Swahili user is searching for Nobel Peace Prize winner Wangari Maathai, we’ll show them things, not strings – and they’ll instantly get information that’s relevant to their query such as Wangari’s date of birth, her awards, or related books about her.



The image on the right-hand side shows the new search experience pulling information from the Knowledge Graph.

The Knowledge Graph enables you to search for things, people or places that Google knows about—landmarks, celebrities, cities, sports teams, buildings, geographical features, movies, celestial objects, works of art and more. It’s not just rooted in public sources such as Freebase, Wikipedia and the CIA World Factbook, it’s also augmented at a much larger scale—because we’re focused on comprehensive breadth and depth. The Knowledge Graph is currently available in 59 languages, mapping out how more than 1 billion things in the real world are connected, and over 70 billion facts about them. And it’s tuned based on what people search for, and what we find out on the web, improving results over time.
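For developers, the Knowledge Graph described above is also queryable through the public Knowledge Graph Search API, which accepts a `languages` parameter, so a request can ask for Swahili ("sw") results. The sketch below only builds such a request URL; the endpoint and parameter names come from that API, and the API key is a placeholder.

```python
from urllib.parse import urlencode

# Sketch: build a Knowledge Graph Search API request asking for Swahili
# results. "YOUR_KEY" is a placeholder for a real API key.
KG_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def build_kg_request(query: str, language: str = "sw",
                     api_key: str = "YOUR_KEY") -> str:
    """Return a request URL for the given query in the given language."""
    params = {"query": query, "languages": language,
              "limit": 1, "key": api_key}
    return KG_ENDPOINT + "?" + urlencode(params)

# Example: build_kg_request("Wangari Maathai") yields a URL whose
# response would include entity facts such as her date of birth.
```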

We’ve now rolled out the Knowledge Graph in Swahili to users around the world. We hope that this update will make Search an even better experience for the millions of Swahili speakers in East Africa.

Posted by Ankita Goel, Senior Product Manager

Announcing the 10 startups that will take the stage at Demo Day Asia

After our open call for startups to apply and pitch to top global investors at Demo Day Asia—taking place this September in Shanghai—we received hundreds of submissions. They came from founders from every corner of Asia-Pacific, across industries as diverse as agriculture, entertainment, and healthcare.
It was difficult to narrow down such an impressive field, but after painstaking deliberation the results are finally in. Out of 305 qualifying applications, the 10 finalists that will take the stage at Demo Day Asia are... (drum-roll!):

- SigTuple from India creates AI-based solutions to automate healthcare screening.
- DycodeX from Indonesia develops Internet of Things solutions for livestock farming.
- FreightExchange from Australia is an online platform for freight carriers to sell their unused space to shippers.
- GITAI from Japan specializes in building robots that can help humans conduct scientific experiments in space.
- Marham from Pakistan is a healthcare platform that helps people search for, book appointments with, and consult doctors online.
- Miotech from China is a fintech startup developing artificial intelligence-based software for financial services firms.
- OneStockHome from Thailand offers an e-commerce platform for construction materials.
- Origami Labs from Hong Kong makes smart rings that allow people to hear and send text messages without taking out their phones.
- SkyMagic from Singapore produces drone swarming technology for live entertainment and traffic management systems.
- Swingvy from Korea provides human resources solutions for businesses.

We’re proud that several of these companies come from organisations in the Google for Entrepreneurs partner network, a global community of over 35 member spaces and programs supporting startups. They include startups from Kibar in Indonesia, Fishburners in Australia, Hubba in Thailand, and Found in Singapore.


Congratulations to these outstanding startups and their founders! They will pitch to a distinguished panel of leaders from Google for Entrepreneurs, Sequoia Capital China, and Venturra Capital on September 20th in Shanghai. The startups that impress could come home with funding from investors and up to $100,000 in Google Cloud Platform credits. Most importantly, we hope these incredible startups blaze a path forward for other founders and continue to improve the lives of others with their innovative products. Good luck in Shanghai!


By Michael Kim, Partnerships Manager, Google For Entrepreneurs