Posted by Matteo Vallone, Google Play Games Business Development
Back in November, we launched the Google Play Indie Games Contest for developers from 15 European countries, to celebrate the passion and innovation of the indie community in the region. The contest will reward the winners with exposure to industry experts and players worldwide, as well as other prizes that will showcase their art and help them grow their business on Android and Google Play.
Thank you to the nearly 1,000 of you who submitted high-quality games across all genres! Your creativity, enthusiasm and dedication have once again impressed and inspired us. We had great fun testing and judging the games on fun, innovation, design excellence, and technical and production quality, and it was challenging to select only 20 finalists:
At the final event, attendees will have a say in which 10 of these finalists get to pitch their games to the jury, which will then select the contest winners who receive the top prizes.
Register now to join us in London, meet the developers, check out their great games, vote for your favourites, and have fun with various industry experts and indie developers.
A big thank you again to everyone who entered and congratulations to the finalists. We look forward to seeing you at the Saatchi Gallery in London on 16th February.
With the release of the 25.1.0 Support Library, there's a new entry in the family: the ExifInterface Support Library. With significant improvements introduced in Android 7.1 to the framework's ExifInterface, it only made sense to make those available to all API 9+ devices via the Support Library's ExifInterface.
The basics are still the same: the ability to read and write Exif tags embedded within image files, now with 140 different attributes (almost 100 of them new to Android 7.1 and this Support Library!), including information about the camera itself, the camera settings, orientation, and GPS coordinates.
Camera Apps: Writing Exif Attributes
For camera apps, writing is probably the most important capability, although writing attributes is still limited to JPEG image files. Normally you wouldn't need to use this during the actual camera capture itself - you'd instead call the Camera2 API CaptureRequest.Builder.set() with JPEG_ORIENTATION, JPEG_GPS_LOCATION, or the equivalents in the Camera1 Camera.Parameters. However, using ExifInterface allows you to make changes to the file after the fact (say, removing the location information at the user's request).
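For instance, here is a minimal sketch of stripping the GPS tags from a captured JPEG after the fact (pathToJpeg is a placeholder for your file's path; passing null for a value clears that tag):
try {
    ExifInterface exifInterface = new ExifInterface(pathToJpeg);
    // Setting an attribute to null removes that tag from the metadata
    exifInterface.setAttribute(ExifInterface.TAG_GPS_LATITUDE, null);
    exifInterface.setAttribute(ExifInterface.TAG_GPS_LATITUDE_REF, null);
    exifInterface.setAttribute(ExifInterface.TAG_GPS_LONGITUDE, null);
    exifInterface.setAttribute(ExifInterface.TAG_GPS_LONGITUDE_REF, null);
    // saveAttributes() writes the changes back to the file (JPEG only)
    exifInterface.saveAttributes();
} catch (IOException e) {
    // Handle any errors
}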
Reading Exif Attributes
For the rest of us though, reading those attributes is going to be our bread-and-butter; this is where we see the biggest improvements.
Firstly, you can read Exif data from JPEG and raw images (specifically, DNG, CR2, NEF, NRW, ARW, RW2, ORF, PEF, SRW and RAF files). Under the hood, this was a major restructuring, removing all native dependencies and building an extensive test suite to ensure that everything actually works.
For apps that receive images from other apps with a content:// URI (such as those sent by apps that target API 24 or higher), ExifInterface now works directly off of an InputStream; this allows you to easily extract Exif information directly out of content:// URIs you receive without having to create a temporary file.
Uri uri; // the URI you've received from the other app
InputStream in = null;
try {
    in = getContentResolver().openInputStream(uri);
    ExifInterface exifInterface = new ExifInterface(in);
    // Now you can extract any Exif tag you want
    // Assuming the image is a JPEG or supported raw format
} catch (IOException e) {
    // Handle any errors
} finally {
    if (in != null) {
        try {
            in.close();
        } catch (IOException ignored) {}
    }
}
Note: ExifInterface will not work with remote InputStreams, such as those returned from an HttpURLConnection. It is strongly recommended to use it only with content:// or file:// URIs.
One of the most important attributes when it comes to displaying images is the image orientation, stored in the aptly-named TAG_ORIENTATION, which returns one of the ORIENTATION_ constants. To convert this to a rotation angle, you can post-process the value.
int rotation = 0;
int orientation = exifInterface.getAttributeInt(
        ExifInterface.TAG_ORIENTATION,
        ExifInterface.ORIENTATION_NORMAL);
switch (orientation) {
    case ExifInterface.ORIENTATION_ROTATE_90:
        rotation = 90;
        break;
    case ExifInterface.ORIENTATION_ROTATE_180:
        rotation = 180;
        break;
    case ExifInterface.ORIENTATION_ROTATE_270:
        rotation = 270;
        break;
}
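From there, one way to apply the rotation when you display the image is with a Matrix (a minimal sketch; pathToImage is a placeholder and error handling is omitted):
// Decode the image, then rotate it to match the Exif orientation
Bitmap bitmap = BitmapFactory.decodeFile(pathToImage);
if (bitmap != null && rotation != 0) {
    Matrix matrix = new Matrix();
    matrix.postRotate(rotation);
    bitmap = Bitmap.createBitmap(bitmap, 0, 0,
            bitmap.getWidth(), bitmap.getHeight(), matrix, true);
}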
There are some helper methods to extract values from specific Exif tags. For location data, the getLatLong() method gives you the latitude and longitude as floats and getAltitude() will give you the altitude in meters. Some images also embed a small thumbnail. You can check for its existence with hasThumbnail() and then extract the byte[] representation of the thumbnail with getThumbnail() - perfect to pass to BitmapFactory.decodeByteArray().
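A quick sketch of those helpers (the float-array form of getLatLong() is assumed here, matching the framework method; error handling is omitted):
float[] latLong = new float[2];
if (exifInterface.getLatLong(latLong)) {
    // latLong[0] is the latitude, latLong[1] is the longitude
}
// Altitude in meters, with 0 as the default when the tag is missing
double altitude = exifInterface.getAltitude(0);
if (exifInterface.hasThumbnail()) {
    byte[] thumbnail = exifInterface.getThumbnail();
    Bitmap thumbnailBitmap =
            BitmapFactory.decodeByteArray(thumbnail, 0, thumbnail.length);
}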
Working with Exif: Everything is optional
One thing that is important to understand with Exif data is that there are no required tags: each and every tag is optional, and some services even specifically strip Exif data. Therefore, throughout your code you should always handle cases where there is no Exif data, either because there is no data for a specific attribute or because the image format doesn't support Exif data at all (say, the ubiquitous PNG or WebP formats).
Add the ExifInterface Support Library to your project with a Gradle dependency along these lines (assuming the 25.1.0 release discussed above):
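// Module-level build.gradle
compile "com.android.support:exifinterface:25.1.0"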
But when an Exif attribute is exactly what you need to prevent a mis-rotated image in your app, the ExifInterface Support Library is there to help you #BuildBetterApps
Take a break this holiday season and paint with satellite images of the Earth through a new experiment called Land Lines. The project lets you explore Google Earth images in unexpected ways through gesture. Earth provides the palette; your fingers, the paintbrush. There are two ways to explore: drag or draw. "Draw" to find satellite images that match your every line. "Drag" to create an infinite line of connected rivers, highways and coastlines. Here's a quick demo:
Everything runs in real time in your phone's web browser without any servers. The responsiveness of the project is a result of using machine learning, data optimization, and vantage-point trees to analyze the images and store that data.
We preprocessed the images using a combination of OpenCV's Structured Forests machine-learning-based edge detection and ImageJ's Ridge Detection library. This culled the initial dataset of over fifty thousand high-res images down to just a few thousand, selected based on the presence of lines, as shown in the example below. What would ordinarily take days was completed in just a few hours.
Example output from the line detection processing. The dominant line is highlighted in red while secondary lines are highlighted in green.
In the drawing exploration, we stored the resulting data in a vantage-point tree. This enabled us to efficiently run gesture matching against all the images and have results appear in milliseconds.
An early example of gesture matching using vantage point trees, where the drawn input is on the right and the closest results on the left.
Another example of user gesture analysis, where the drawn input is on the right and the closest results on the left.
Built in collaboration with Zach Lieberman, Land Lines is an experiment in big visual data that explores themes of connection. We tried several machine learning libraries in our development process; what we learned from that experience can be found in the case study, while the project code is available open source on GitHub. Start with a line at g.co/landlines.
Presentations rely on a set of images to impart ideas to the audience. As a result, one of the best practices for creating great slide decks is to minimize the overall amount of text. That means the (few) words you do use must have high impact and be visually appealing. This is even more true when the slides are generated by a software application, say using the Google Slides API, rather than being crafted by hand.
The G Suite team recently launched the first Slides API, opening up a whole new category of applications. Since then, we've published several videos to help you realize some of those possibilities, showing you how to replace text and images in slides as well as how to generate slides from spreadsheet data. To round out this trifecta of key API use cases, we're adding text formatting to the conversation.
Developers manipulate text in Google Slides by sending API requests. Similar to the Google Sheets API, these requests come in the form of JSON payloads sent to the API's batchUpdate() method. The JavaScript for inserting text in some shape (shapeID) on a slide looks something like this:
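var requests = [{
  insertText: {
    objectId: shapeID,     // the ID of the shape to write into
    insertionIndex: 0,     // optional; defaults to 0
    text: 'Hello World!'   // the copy to insert
  }
}];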
In the video, developers learn that writing text, as in the request above, is less complex than reading or formatting, because both of the latter require developers to know how text on a slide is structured. Notice that for writing, just the copy and, optionally, an index are all that's required. (That index defaults to zero if not provided.)
Assuming "Hello World!" has been successfully inserted in a shape on a slide, a request to bold just the "Hello" looks like this:
If you've got at least one request, like the ones above, in an array named requests, you'd ask the API to execute them all with a single call, which in Python looks something like this (assuming SLIDES is your service endpoint and the slide deck ID is deckID):
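# Send all queued requests to the presentation in a single round trip
SLIDES.presentations().batchUpdate(
    presentationId=deckID,
    body={'requests': requests}
).execute()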
To better understand text structure and styling in Google Slides, check out the text concepts guide in the documentation. For a detailed look at the complete code sample featured in the DevByte, check out the deep dive post. To see more samples for common API operations, take a look at this page. We hope the videos and all these developer resources help you create that next great app that automates producing highly impactful presentations for your users!
Earlier this week, the Google Developers YouTube channel crossed the threshold of 1 million subscribers. This is a monumental achievement for us, and we are extremely honored that so many of you have found our content valuable enough to click that red ‘Subscribe’ button. The Google Developers YouTube channel has been bringing you content for just over 8 years, covering major developer events, like Google I/O and Playtime, as well as providing best practices on the latest tools to help you build high quality experiences! In that time, we’ve shared over 2,000 videos that have been viewed over 100 million times. Here is a look back at how we got to this milestone:
We are gearing up for another year of videos to help developers all over the world. To avoid missing any of it, you can subscribe to each of our YouTube channels using the following links: Google Developers | Android Developers | Chrome Developers | Firebase
Posted by Champika Fernando, Product Manager, Google, and Kasia Chmielinski, Product Lead, MIT Scratch Team
We want to empower developers to build great creative learning apps for kids. That's why, earlier this year, we announced Scratch Blocks, a free, open-source project created by the MIT Scratch and Google Kids Coding teams. Together, we are building this highly tinkerable and playful block-based programming grammar based on MIT's popular Scratch language and Blockly's architecture. With Scratch Blocks, developers can integrate Scratch-style coding into apps for kids.
Today, we're excited to share our progress in a number of areas:
1. Designing for tinkerability
Research from the MIT Media Lab has highlighted the importance of providing children with opportunities for quick experimentation and rapid cycles of iteration. For example, the Scratch programming environment makes it easy for kids to adjust the code while it's running, as well as try coding blocks by just tapping on them. Since our initial announcement in May, we've focused on supporting this type of tinkerability in the Scratch Blocks project by making it very easy for developers to connect Scratch Blocks directly to the Scratch VM (a related open-source project being developed by MIT). In this model, instead of blocks being converted to a text-based language like JavaScript which is then interpreted, the blocks themselves are the code. The result is a more tinkerable experience for the end-user.
2. Designing for all levels
Computational thinking is a valuable skill for everyone. In order to support developers building a wide diversity of coding experiences with Scratch Blocks, we've designed two related block grammars that can be used in a variety of contexts. One grammar uses icon-based blocks that connect horizontally, while the other uses text-based blocks that connect vertically.
We started by developing the horizontal grammar, which is well-suited for beginners of all ages due to its simplicity and limited number of blocks; additionally, this grammar is easier to manipulate on small screens. You can see an example of the horizontal icon-based grammar in Code a Snowflake (an activity in this year's Google Santa Tracker) built by the Google Kids Coding Team. More recently, we've added the vertical grammar, which supports a wider range of complex concepts. The horizontal grammar can also be translated into vertical blocks, making it possible to transition between the grammars. We imagine this will be useful in a number of situations, including designing for multiple screen sizes, or as an element of the app's learning experience.
3. Designing for all devices
We're building Scratch Blocks for a world that is increasingly mobile, where people of all ages will tinker with code in a variety of environments. We've improved the mobile experience in many key areas, both in Scratch Blocks and in the underlying Blockly project:
Redesigned blocks for improved touchability
Fast loading of large projects on low-powered devices
Optimization of block manipulation and code editing on touch screens
Redesigned multi-touch support for a better experience on touch devices
How to get involved
These first six months of Scratch Blocks have been a lot of work - and a ton of fun. To stay up to date on the project, check out our GitHub project and learn more on our Developer Page.
A key part of Android Wear 2.0 is letting
watch apps work as standalone apps, so users can respond to messages, track
their fitness, and use their favorite apps, even when their phone isn't around.
Developer Preview 4 includes a number of new APIs that will help you build more
powerful standalone apps.
Seamless authentication
To make authentication a seamless experience for both Android phone and iPhone
users, we have created new APIs for OAuth
and added support for one-click Google Sign-in. With the OAuth API for
Android Wear, users can tap a button on the watch that opens an authentication
screen on the phone. Your watch app can then authenticate with your server-side
APIs directly. With Google Sign-In, it's even easier. All the user needs to do
is select which account they want to authenticate with and they are done.
In-app billing
In addition to paid apps, we have added in-app
billing support, to give you another way to monetize your Android Wear app
or watch face. Users can authorize purchases quickly and easily on the watch
through a 4-digit Google Account PIN. Whether it's new levels in a game or new
styles on a watch face, if you can build it, users can buy it.
Cross-device promotion
What if your watch app doesn't work standalone? Or what if it offers a better
user experience when both the watch and phone apps are installed? We've been
listening carefully to your feedback, and we've added two
new APIs (PlayStoreAvailability and RemoteIntent)
to help you navigate users to the Play Store on a paired device so they can
more easily install your app. Developers can also open custom URLs on the phone
from the watch via the new RemoteIntent API; no phone app or data
layer is required.
// Check Play Store is available
int playStoreAvailabilityOnPhone =
        PlayStoreAvailability.getPlayStoreAvailabilityOnPhone(getApplicationContext());
if (playStoreAvailabilityOnPhone == PlayStoreAvailability.PLAY_STORE_ON_PHONE_AVAILABLE) {
    // To launch a web URL, setData to Uri.parse("https://g.co/wearpreview")
    Intent intent =
            new Intent(Intent.ACTION_VIEW)
                    .addCategory(Intent.CATEGORY_BROWSABLE)
                    .setData(Uri.parse("market://details?id=com.google.android.wearable.app"));
    // mResultReceiver is optional; it can be null.
    RemoteIntent.startRemoteActivity(this, intent, mResultReceiver);
}
Swipe-to-dismiss is back
Many of you have given us the feedback that the swipe-to-dismiss gesture from
Android Wear 1.0 is an intuitive time-saver. We agree, and have reverted back to
the previous behavior with this developer preview release. To support
swipe-to-dismiss in this release, we've made the following platform and API
changes:
Activities now automatically support swipe-to-dismiss.
Swiping an activity from left to right will result in it being dismissed and the
app will navigate down the back stack.
New Fragment and View support. Developers can wrap the containing views of a
Fragment, or Views in general, in the new SwipeDismissFrameLayout to implement
custom actions, such as going down the back stack when the user swipes rather
than exiting the activity (see the sketch below).
The hardware button now maps to "power" instead of "back", which means it can
no longer be intercepted by apps.
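As a rough sketch of the Fragment/View case (the layout ID and back-stack handling are placeholders, and the addCallback/Callback shape shown follows the general Wear widget API rather than anything specific to this preview):
SwipeDismissFrameLayout swipeLayout =
        (SwipeDismissFrameLayout) findViewById(R.id.swipe_dismiss_root);
swipeLayout.addCallback(new SwipeDismissFrameLayout.Callback() {
    @Override
    public void onDismissed(SwipeDismissFrameLayout layout) {
        // Navigate down the back stack instead of exiting the activity
        getFragmentManager().popBackStackImmediate();
    }
});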
Additional details are available under the behavior
changes section of the Android Wear Preview site.
Compatibility with Android Wear 1.0 apps
Android Wear apps packaged using the legacy embedded app mechanism can now be
delivered to Android Wear 2.0 watches. When a user installs a phone app that
also contains an embedded Android Wear app, the user will be prompted to install
the embedded app via a notification. If they choose not to install the embedded
app at that moment, they can find it in the Play Store on Android Wear under a
special section called "Apps you've used".
Despite support for the existing mechanism, there are significant benefits for
apps that transition to the multi-APK
delivery mechanism. Multi-APK allows the app to be searchable in the Play
Store on Android Wear, to be eligible for merchandising on the homepage, and to
be remotely installed from the web to the watch. As a result, we strongly
recommend that developers move to multi-APK.
More additions in Developer Preview 4
Action
and Navigation Drawers: An enhancement to peeking behavior
allows the user to take action without scrolling all the way to the top or
bottom of a list. Developers can further fine-tune drawer peeking behavior
through new APIs, such as setShouldPeekOnScrollDown for the action
drawer.
WearableRecyclerView:
The curved layout is now opt-in, and with this, the WearableRecyclerView is now
a drop-in replacement for RecyclerView.
Burn-in
protection icon for complications: Complication data providers can now
provide icons for use on screens susceptible to burn-in. These burn-in-safe
icons are normally the outline of the icon in interactive mode. Previously,
watch faces may have chosen not to display the icon at all in ambient mode to
prevent screen burn-in.
Feedback welcome!
Thanks for all your terrific feedback on Android Wear 2.0. Check out g.co/wearpreview for the latest builds and
documentation, keep the feedback coming by filing bugs or posting in our Android Wear
Developers community, and stay tuned for Android Wear Developer Preview 5!
Posted by Jason Douglas, PM Director for Actions on Google
The Google Assistant brings together all of the technology and smarts we've been building for years, from the Knowledge Graph to Natural Language Processing. To be a truly successful Assistant, it should be able to connect users across the apps and services in their lives. This makes enabling an ecosystem where developers can bring diverse and unique services to users through the Google Assistant really important.
In October, we previewed Actions on Google, the developer platform for the Google Assistant. Actions on Google further enhances the Assistant user experience by enabling you to bring your services to the Assistant. Starting today, you can build Conversation Actions for Google Home and request to become an early access partner for upcoming platform features.
Conversation Actions for Google Home
Conversation Actions let you engage your users to deliver information, services, and assistance. And the best part? It really is a conversation -- users won't need to enable a skill or install an app, they can just ask to talk to your action. For now, we've provided two developer samples of what's possible; just say "Ok Google, talk to Number Genie" or try "Ok Google, talk to Eliza" for the classic 1960s AI exercise.
You can get started today by visiting the Actions on Google website for developers. To help create a smooth, straightforward development experience, we worked with a number of development partners, including conversational interaction development tools API.AI and Gupshup, analytics tools DashBot and VoiceLabs, and consulting companies such as Assist, Notify.IO, Witlingo and Spoken Layer. We also created a collection of samples and voice user interface (VUI) resources, or you can check out the integrations from our early access partners as they roll out over the coming weeks.
Coming soon: Actions for Pixel and Allo + Support for Purchases and Bookings
Today is just the start, and we're excited to see what you build for the Google Assistant. We'll continue to add more platform capabilities over time, including the ability to make your integrations available across the various Assistant surfaces like Pixel phones and Google Allo. We'll also enable support for purchases and bookings as well as deeper Assistant integrations across verticals. Developers who are interested in creating actions using these upcoming features should register for our early access partner program and help shape the future of the platform.
Build, explore and let us know what you think about Actions on Google! And to stay in the loop, be sure to sign up for our newsletter, join our Google+ community, and use the “actions-on-google” tag on StackOverflow.
Posted by Patricia Correa, Head of Developer Marketing, Google Play
We’re wrapping up our annual global Playtime series of events with a last stop in Tokyo, Japan. This year Google Play hosted events in 10 cities: London, Paris, Berlin, Hong Kong, Singapore, Gurgaon, San Francisco, Sao Paulo, Seoul and Tokyo. We met with app and game developers from around the world to discuss how to build successful businesses on Google Play, share experiences, give feedback, collaborate, and get inspired.
You can now watch some of the best Playtime sessions on our Android Developers YouTube Channel, as listed below. The playlist opens with a video that celebrates collaboration.
Learn how we're helping users discover apps in the right context, creating new
ways to engage with users beyond the install, and powering innovative
experiences on emerging platforms like virtual reality, wearables, and auto.
Android development is more powerful and efficient than ever before. Android
Studio brings you speed, smarts, and support for Android Nougat. The broad range
of cross-platform tools on Firebase can improve your app on Android and beyond.
Material Design and Vulkan continue to improve the user experience and increase
engagement.
Daydream View is a VR headset and controller by Google that lets people explore
new worlds, or play games that put them at the center of the action. Learn how we're
helping users discover apps in the right context and powering new experiences
with Daydream and Tango.
Augmented reality engages and delights people everywhere. In this fireside chat,
online furniture seller Wayfair and Niantic's Pokémon
GO share their experiences with AR and discuss how other developers can make
the most of the platform.
Learn how to create apps and games for emerging markets, which are expected to
drive 80% of global smartphone growth by 2020, by recognizing the key challenges
and designing the right app experiences to overcome them.
At minute 16:41, hear tips from Hugo Obi, co-founder of Nigerian games developer
Maliyo.
Set your app up for success using experimentation and iteration. Learn best
practices for soft launching and adapting your app for different markets and
device types.
Planning and executing a great growth strategy involves a complex set of choices
and mastery of many tools. In this session we discuss topics including key
business objectives, tools, and techniques to help you solve the growth puzzle
with our partner, SoundCloud.
User growth isn't just about growing the number of users you have. The key to
sustainability is creating and delivering core product value. In this session,
VC Greylock discusses how to identify your core action to focus on and shows you
how to use these insights to optimize your app for long term growth.
As the app marketplace becomes more competitive, developer success depends on
retaining users in apps they love. Find out which Google tools and features can
help you analyze your users' behaviors, improve engagement and retention in your
app, and hear insights from other developers, including Lifesum.
Deep dive into lifetime value models and predictive analytics in the apps ecosystem,
with tactics to get the most out of identified segments and upgrade their
behaviors to minimize churn.
Learn about Google's efforts to enable users, around the world, to seamlessly
and safely pay for content. This session provides updates on Google Play billing
and recent enhancements to our subscriptions platform.
Customize your game's experience for different users by targeting them with lifetime value
models and predictive analytics. Hear how these concepts are applied by
Space Ape Games to improve retention and monetization of their titles.
Learn how to use Google's latest tools, like Firebase, for benchmarking,
acquiring users and measuring your activities. Also, hear game
developer Seriously share their latest insights and strategies on YouTube
influencer campaigns.
Learn how successful developers keep their games fresh and engaging with Live
Operations. In this talk, the LiveOps expert on Marvel: Contest of Champions
discusses tips about the art and science of running an engaging LiveOps event.
Family-based households with children have higher tablet and smartphone
ownership rates than the general population. These families are more likely to
make purchases on their mobile devices and play games. Learn about how parents
choose what to download and buy, and how you can prepare for maximum conversion.
Papumba has a clear vision to grow a global business. Hear how they work with
experts to adapt their games to local markets and leverage Google Play's
developer tools to find success around the world.
You've spent time and resources getting users to download your apps, but what
happens after the install? Learn how to minimize churn and keep families engaged
with your content long term.
PlayKids has been at the forefront of the subscription business model since
their inception. See how they best serve their subscribers by refreshing their
content, expanding their offerings and investing in new verticals.