
Handling Device Orientation Efficiently in Vulkan With Pre-Rotation

[Illustration of pre-rotation]

By Omar El Sheikh, Android Engineer

Francesco Carucci, Developer Advocate

Vulkan provides developers with the power to specify much more information to devices about rendering state compared to OpenGL. With that power, though, come some new responsibilities: developers are expected to explicitly implement things that in OpenGL were handled by the driver. One of these is the relationship between device orientation and render surface orientation. Currently, there are three ways Android can reconcile the render surface with the orientation of the device:

  1. The device has a Display Processing Unit (DPU) that can efficiently handle surface rotation in hardware to match the device orientation (requires a device that supports this)
  2. The Android OS can handle surface rotation by adding a compositor pass that will have a performance cost depending on how the compositor has to deal with rotating the output image
  3. The application itself can handle the surface rotation by rendering a rotated image onto a render surface that matches the current orientation of the display

What Does This Mean For Your Apps?

There currently isn't a way for an application to know whether surface rotation handled outside of the application will be free. Even if there is a DPU to take care of it, there will still likely be a measurable performance penalty to pay. If your application is CPU bound, this becomes a significant power issue due to the increased GPU usage by the Android Compositor, which usually runs at a boosted frequency as well. And if your application is GPU bound, it becomes a potentially large performance issue on top of that, as the Android Compositor preempts your application's GPU work, causing your application to drop its frame rate.

On a Pixel 4 XL, we have seen on shipping titles that SurfaceFlinger (the higher-priority task that drives the Android Compositor) both regularly preempts the application's work, causing 1-3 ms hits to frame times, and puts increased pressure on the GPU's vertex/texture memory, as the Compositor has to read the frame to do its composition work.

Handling orientation properly stops GPU preemption by SurfaceFlinger almost entirely, and we have seen the GPU frequency drop 40% as the boosted frequency used by the Android Compositor is no longer needed.

To ensure surface rotations are handled properly with as little overhead as possible (as seen in the case above), we recommend implementing method 3; this is known as pre-rotation. The primary mechanism is telling the Android OS that we are handling the surface rotation ourselves, by specifying the orientation during swapchain creation through the passed-in surface transform flags, which stops the Android Compositor from doing the rotation itself.

Knowing how to set the surface transform flag is important for every Vulkan application, since applications tend to either support multiple orientations, or support a single orientation whose render surface differs from what the device considers its identity orientation: for example, a landscape-only application on a portrait-identity phone, or a portrait-only application on a landscape-identity tablet.

In this post we will describe in detail how to implement pre-rotation and handle device rotation in your Vulkan application.

Modify AndroidManifest.xml

To handle device rotation in your app, change the application’s AndroidManifest.xml to tell Android that your app will handle orientation and screen size changes. This prevents Android from destroying and recreating the Android activity and calling the onDestroy() function on the existing window surface when an orientation change occurs. This is done by adding the orientation (to support API level <13) and screenSize attributes to the activity’s configChanges section:

<activity android:name="android.app.NativeActivity"
          android:configChanges="orientation|screenSize">

If your application fixes its screen orientation using the screenOrientation attribute, you do not need to do this. Also, if your application uses a fixed orientation, it will only need to set up the swapchain once, on application startup/resume.
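
For illustration, a landscape-only application might fix its orientation in the manifest like this (a minimal sketch; the screenOrientation value depends on your app):

<activity android:name="android.app.NativeActivity"
          android:screenOrientation="landscape">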

Get the Identity Screen Resolution and Camera Parameters

The next thing that needs to be done is to detect the device's screen resolution associated with VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR, as this is the resolution the swapchain will always need to be set to. The most reliable way to get it is to call vkGetPhysicalDeviceSurfaceCapabilitiesKHR() at application startup and store the returned extent, swapping the width and height based on the currentTransform that is also returned, in order to ensure we are storing the identity screen resolution:

    VkSurfaceCapabilitiesKHR capabilities;
    vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physDevice, surface, &capabilities);

    uint32_t width = capabilities.currentExtent.width;
    uint32_t height = capabilities.currentExtent.height;
    if (capabilities.currentTransform & VK_SURFACE_TRANSFORM_ROTATE_90_BIT_KHR ||
        capabilities.currentTransform & VK_SURFACE_TRANSFORM_ROTATE_270_BIT_KHR) {
      // Swap to get identity width and height
      capabilities.currentExtent.height = width;
      capabilities.currentExtent.width = height;
    }

    displaySizeIdentity = capabilities.currentExtent;

displaySizeIdentity is a VkExtent2D that we use to store the identity resolution of the app's window surface in the display's natural orientation.

Detect Device Orientation Changes (Android 10+)

The most reliable way to know when the application has encountered an orientation change is to check the return value of vkQueuePresentKHR() and see whether it returned VK_SUBOPTIMAL_KHR:

  auto res = vkQueuePresentKHR(queue_, &present_info);
  if (res == VK_SUBOPTIMAL_KHR){
    orientationChanged = true;
  }

One thing to note about this solution is that it only works on devices running Android Q and above, as that is when Android started to return VK_SUBOPTIMAL_KHR from vkQueuePresentKHR().

orientationChanged is a boolean stored somewhere accessible from the application's main rendering loop.

Detect Device Orientation Changes (Pre-Android 10)

For devices running versions of Android below 10, a different implementation is needed, since we do not have access to VK_SUBOPTIMAL_KHR.

Using Polling

On pre-10 devices we can poll the current device transform every pollInterval frames, where pollInterval is a granularity decided on by the programmer. We do this by calling vkGetPhysicalDeviceSurfaceCapabilitiesKHR() and comparing the returned currentTransform field with the currently stored surface transformation (in this code example stored in pretransformFlag):

  currFrameCount++;
  if (currFrameCount >= pollInterval){
    VkSurfaceCapabilitiesKHR capabilities;
    vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physDevice, surface, &capabilities);

    if (pretransformFlag != capabilities.currentTransform) {
      window_resized = true;
    }
    currFrameCount = 0;
  }

On a Pixel 4 running Android Q, polling vkGetPhysicalDeviceSurfaceCapabilitiesKHR() took between 0.120 and 0.250 ms; on a Pixel XL running Android O, it took between 0.110 and 0.350 ms.

Using Callbacks

A second option for devices running below Android 10 is to register an onNativeWindowResized() callback that sets the orientationChanged flag, signaling to the application that an orientation change has occurred:

void android_main(struct android_app *app) {
  ...
  app->activity->callbacks->onNativeWindowResized = ResizeCallback;
}

Where ResizeCallback is defined as:

void ResizeCallback(ANativeActivity *activity, ANativeWindow *window){
  orientationChanged = true;
}

The drawback of this solution is that onNativeWindowResized() only gets called on 90-degree orientation changes (going from landscape to portrait or vice versa). For example, an orientation change from landscape to reverse landscape will not trigger swapchain recreation, so the Android Compositor will have to do the flip for your application.

Handling the Orientation Change

To actually handle the orientation change, we first check at the top of the main rendering loop whether the orientationChanged variable has been set to true, and if so go into the orientation change routine:

bool VulkanDrawFrame() {
 if (orientationChanged) {
   OnOrientationChange();
 }
 // ... render and present the frame ...
}

And within the OnOrientationChange() function we will do all the work necessary to recreate the swapchain. This involves destroying any existing Framebuffers and ImageViews; recreating the swapchain while destroying the old swapchain (which will be discussed next); and then recreating the Framebuffers with the new swapchain’s DisplayImages. Note that attachment images (depth/stencil images for example) usually do not need to be recreated as they are based on the identity resolution of the pre-rotated swapchain images.

void OnOrientationChange() {
 vkDeviceWaitIdle(getDevice());

 for (int i = 0; i < getSwapchainLength(); ++i) {
   vkDestroyImageView(getDevice(), displayViews_[i], nullptr);
   vkDestroyFramebuffer(getDevice(), framebuffers_[i], nullptr);
 }

 createSwapChain(getSwapchain());
 createFrameBuffers(render_pass, depthBuffer.image_view);
 orientationChanged = false;
}

And at the end of the function we reset the orientationChanged flag to false to show that we have handled the orientation change.

Swapchain Recreation

In the previous section we mentioned having to recreate the swapchain. The first step in doing so is getting the new characteristics of the rendering surface:

void createSwapChain(VkSwapchainKHR oldSwapchain) {
   VkSurfaceCapabilitiesKHR capabilities;
   vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physDevice, surface, &capabilities);
   pretransformFlag = capabilities.currentTransform;

With the VkSurfaceCapabilitiesKHR struct populated with the new information, we can check whether an orientation change has occurred by examining the currentTransform field, and we store it in the pretransformFlag field, as we will need it later when we make adjustments to the MVP matrix.

In order to do so, we must make sure we properly specify some attributes within the VkSwapchainCreateInfoKHR struct:

VkSwapchainCreateInfoKHR swapchainCreateInfo{
  .sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR,
  ...
  .imageExtent = displaySizeIdentity,
  .preTransform = pretransformFlag,
  .oldSwapchain = oldSwapchain,
};

vkCreateSwapchainKHR(device_, &swapchainCreateInfo, nullptr, &swapchain_);

if (oldSwapchain != VK_NULL_HANDLE) {
  vkDestroySwapchainKHR(device_, oldSwapchain, nullptr);
}

The imageExtent field will be populated with the displaySizeIdentity extent that we stored at application startup. The preTransform field will be populated with our pretransformFlag variable (which is set to the currentTransform field of the surfaceCapabilities). We also set the oldSwapchain field to the swapchain that we are about to destroy.

It is important that the surfaceCapabilities.currentTransform field and the swapchainCreateInfo.preTransform field match because this lets the Android OS know that we are handling the orientation change ourselves, thus avoiding the Android Compositor.

MVP Matrix Adjustment

The last thing that needs to be done is to actually apply the pre-transformation. This is done by applying a rotation matrix to your MVP matrix. What this essentially does is apply the rotation in clip space so that the resulting image is rotated to the device's current orientation. You can then simply pass this updated MVP matrix into your vertex shader and use it as normal, without the need to modify your shaders.

glm::mat4 pre_rotate_mat = glm::mat4(1.0f);
glm::vec3 rotation_axis = glm::vec3(0.0f, 0.0f, 1.0f);

if (pretransformFlag & VK_SURFACE_TRANSFORM_ROTATE_90_BIT_KHR) {
 pre_rotate_mat = glm::rotate(pre_rotate_mat, glm::radians(90.0f), rotation_axis);
} else if (pretransformFlag & VK_SURFACE_TRANSFORM_ROTATE_270_BIT_KHR) {
 pre_rotate_mat = glm::rotate(pre_rotate_mat, glm::radians(270.0f), rotation_axis);
} else if (pretransformFlag & VK_SURFACE_TRANSFORM_ROTATE_180_BIT_KHR) {
 pre_rotate_mat = glm::rotate(pre_rotate_mat, glm::radians(180.0f), rotation_axis);
}

MVP = pre_rotate_mat * MVP;

Consideration - Non-Full Screen Viewport and Scissor

If your application is using a non-full screen viewport/scissor region, they will need to be updated according to the orientation of the device. This requires that we enable the dynamic Viewport and Scissor options during Vulkan’s pipeline creation:

VkDynamicState dynamicStates[2] = {
  VK_DYNAMIC_STATE_VIEWPORT,
  VK_DYNAMIC_STATE_SCISSOR,
};

VkPipelineDynamicStateCreateInfo dynamicInfo = {
  .sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO,
  .pNext = nullptr,
  .flags = 0,
  .dynamicStateCount = 2,
  .pDynamicStates = dynamicStates,
};

VkGraphicsPipelineCreateInfo pipelineCreateInfo = {
  .sType = VK_STRUCTURE_TYPE_GRAPHICS_PIPELINE_CREATE_INFO,
  ...
  .pDynamicState = &dynamicInfo,
  ...
};

vkCreateGraphicsPipelines(device, VK_NULL_HANDLE, 1, &pipelineCreateInfo, nullptr, &mPipeline);

The actual computation of the viewport extent during command buffer recording looks like this:

int x = 0, y = 0, w = 500, h = 400;
glm::vec4 viewportData;

switch (device->GetPretransformFlag()) {
  case VK_SURFACE_TRANSFORM_ROTATE_90_BIT_KHR:
    viewportData = {bufferWidth - h - y, x, h, w};
    break;
  case VK_SURFACE_TRANSFORM_ROTATE_180_BIT_KHR:
    viewportData = {bufferWidth - w - x, bufferHeight - h - y, w, h};
    break;
  case VK_SURFACE_TRANSFORM_ROTATE_270_BIT_KHR:
    viewportData = {y, bufferHeight - w - x, h, w};
    break;
  default:
    viewportData = {x, y, w, h};
    break;
}

const VkViewport viewport = {
    .x = viewportData.x,
    .y = viewportData.y,
    .width = viewportData.z,
    .height = viewportData.w,
    .minDepth = 0.0F,
    .maxDepth = 1.0F,
};

vkCmdSetViewport(renderer->GetCurrentCommandBuffer(), 0, 1, &viewport);

Where x and y define the coordinates of the top left corner of the viewport, and w and h define the width and height of the viewport respectively.

The same computation can be used to also set the scissor test, and is included below for completeness:

int x = 0, y = 0, w = 500, h = 400;
glm::vec4 scissorData;

switch (device->GetPretransformFlag()) {
  case VK_SURFACE_TRANSFORM_ROTATE_90_BIT_KHR:
    scissorData = {bufferWidth - h - y, x, h, w};
    break;
  case VK_SURFACE_TRANSFORM_ROTATE_180_BIT_KHR:
    scissorData = {bufferWidth - w - x, bufferHeight - h - y, w, h};
    break;
  case VK_SURFACE_TRANSFORM_ROTATE_270_BIT_KHR:
    scissorData = {y, bufferHeight - w - x, h, w};
    break;
  default:
    scissorData = {x, y, w, h};
    break;
}

const VkRect2D scissor = {
    .offset =
        {
            .x = (int32_t)scissorData.x,
            .y = (int32_t)scissorData.y,
        },
    .extent =
        {
            .width = (uint32_t)scissorData.z,
            .height = (uint32_t)scissorData.w,
        },
};

vkCmdSetScissor(renderer->GetCurrentCommandBuffer(), 0, 1, &scissor);

Consideration - Fragment Shader Derivatives

If your application uses derivative computations such as dFdx and dFdy, additional transformations may be needed to account for the rotated coordinate system, as these computations are executed in pixel space. This requires the app to pass some indication of the preTransform into the fragment shader (such as an integer representing the current device orientation) and use it to map the derivative computations properly (a GLSL sketch follows the list below):

  • For a 90 degree pre-rotated frame
    • dFdx needs to be mapped to dFdy
    • dFdy needs to be mapped to dFdx
  • For a 270 degree pre-rotated frame
    • dFdx needs to be mapped to -dFdy
    • dFdy needs to be mapped to -dFdx
  • For a 180 degree pre-rotated frame
    • dFdx needs to be mapped to -dFdx
    • dFdy needs to be mapped to -dFdy
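
A minimal GLSL sketch of this remapping, assuming the app passes the orientation as a push-constant integer (the preRotation field and remappedDerivatives helper below are hypothetical names):

layout(push_constant) uniform PreRotation { int preRotation; } pc;

// Derivatives of v remapped to the device's upright coordinate system.
// pc.preRotation: 0 = identity, 1 = 90 degrees, 2 = 180 degrees, 3 = 270 degrees.
vec2 remappedDerivatives(float v) {
  vec2 d = vec2(dFdx(v), dFdy(v));
  if (pc.preRotation == 1) return vec2(d.y, d.x);    // 90:  dFdx <-> dFdy
  if (pc.preRotation == 2) return vec2(-d.x, -d.y);  // 180: negate both
  if (pc.preRotation == 3) return vec2(-d.y, -d.x);  // 270: swap and negate
  return d;                                          // identity
}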

Conclusion

In order for your application to get the most out of Vulkan on Android, implementing pre-rotation is a must. The most important takeaways from this blog post are:

  1. Ensure that during swapchain creation or recreation, the pretransform flag is set to match the flag returned by the Android operating system. This avoids the compositor overhead.
  2. Keep the swapchain size fixed to the identity resolution of the app's window surface in the display's natural orientation.
  3. Rotate the MVP matrix in clip space to account for the device's orientation, since the swapchain resolution/extent no longer updates with the orientation of the display.
  4. Update viewport and scissor rectangles as needed by your application.

Here is a link to a minimal example of pre-rotation implemented for an Android application:

https://github.com/google/vulkan-pre-rotation-demo

Wide Color Photos Are Coming to Android: Things You Need to Know to be Prepared

Posted by Peiyong Lin, Software Engineer

Android is now at the point where the sRGB color gamut with 8 bits per color channel is not enough to take advantage of display and camera technology. At Android we have been working to make wide color photography happen end-to-end, i.e., more bits and bigger gamuts. This means that, eventually, users will be able to capture the richness of a scene, share wide color pictures with friends, and view wide color pictures on their phones. And now with Android Q, it's getting really close to reality: wide color photography is coming to Android. So it's very important for applications to be wide color gamut ready. This article shows how you can test whether your application is wide color gamut ready and wide color gamut capable, and the steps you need to take to be ready for wide color gamut photography.

But before we dive in, why wide color photography? Display panels and camera sensors on mobile are getting better every year. More and more newly released phones ship with calibrated display panels, some of which are wide color gamut capable. Modern camera sensors can capture scenes with a wider range of color outside of sRGB, and thus produce wide color gamut pictures. When these two come together, they create an end-to-end photography experience with more of the vibrant colors of the real world.

At a technical level, this means there will be pictures coming to your application with an ICC profile that is not sRGB but some other wider color gamut: Display P3, Adobe RGB, etc. For consumers, this means their photos will look more realistic.

[Image: orange sunset — Display P3]

[Image: orange sunset — sRGB]

[Image: colorful umbrellas — Display P3]

[Image: colorful umbrellas — sRGB]

Above are the Display P3 version and the sRGB version of the same scenes. If you are reading this article on a calibrated, wide color gamut capable display, you will notice the significant difference between them.

Color Tests

There are two kinds of tests you can perform to know whether your application is prepared: what we call color correctness tests, and wide color tests.

Color Correctness test: Is your application wide color gamut ready?

A wide color gamut ready application manages color proactively. This means that when given images, the application always checks the color space and converts based on its capability to show wide color gamut. Thus, even if the application can't handle wide color gamut, it can still show the sRGB portion of the image correctly, without color distortion.

Below is a color correct example of an image with Display P3 ICC profile.

[Image: large round balloons outside on a floor in front of a concrete wall — color correct]

However, if your application is not color correct, it will typically end up manipulating or displaying the image without converting the color space, resulting in color distortion. For example, you may get the image below, where the color is washed out and everything looks distorted.

[Image: the same balloons, washed out due to incorrect color handling]

Wide Color test: Is your application wide color gamut capable?

A wide color gamut capable application can show colors outside of the sRGB color space when given wide color gamut images. Here's an image you can use to test whether your application is wide color gamut capable; if it is, a red Android logo will show up. Note that you must run this test on a wide color gamut capable device, for example a Pixel 3 or a Samsung Galaxy S10.

[Image: wide color test image — a red Android droid figure appears if your application is wide color gamut capable]

What you should do to prepare

To prepare for wide color gamut photography, your application must at least pass the wide color gamut ready test, which we call the color correctness test. If your application passes it, that's awesome! But if it doesn't, here are the steps to make it wide color gamut ready.

The key to being prepared and future proof is that your application should never assume that external images are in the sRGB color space. Your application must check the color space of decoded images and convert when necessary. Failure to do so will result in color distortion or the color profile being discarded somewhere in your pipeline.
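
For example, a minimal sketch of such a check on an already decoded Bitmap (handleWideGamutBitmap is a hypothetical placeholder for your own handling path):

Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH);
ColorSpace cs = bitmap.getColorSpace();  // may be null for some configurations
if (cs != null && !cs.equals(ColorSpace.get(ColorSpace.Named.SRGB))) {
    // Not sRGB (e.g. Display P3): convert, or route through a
    // wide color gamut aware path.
    handleWideGamutBitmap(bitmap);
}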

Mandatory: Be Color Correct

You must be at least color correct. If your application doesn't adopt wide color gamut, you will most likely want to just decode every image to the sRGB color space. You can do that using either BitmapFactory or ImageDecoder.

Using BitmapFactory

In API level 26, we added inPreferredColorSpace to BitmapFactory.Options, which allows you to specify the target color space you want the decoded bitmap to have. Let's say you want to decode a file; below is the snippet you would most likely use to manage the color:

final BitmapFactory.Options options = new BitmapFactory.Options();
// Decode this file to sRGB color space.
options.inPreferredColorSpace = ColorSpace.get(ColorSpace.Named.SRGB);
Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);

Using ImageDecoder

In Android P (API level 28), we introduced ImageDecoder, a modernized approach to decoding images. If you upgrade your app to API level 28 or beyond, we recommend using it instead of the BitmapFactory and BitmapFactory.Options APIs.

Below is a snippet that decodes the image to an sRGB bitmap using the ImageDecoder#decodeBitmap API:

ImageDecoder.Source source =
        ImageDecoder.createSource(FILE_PATH);
try {
    bitmap = ImageDecoder.decodeBitmap(source,
            new ImageDecoder.OnHeaderDecodedListener() {
                @Override
                public void onHeaderDecoded(ImageDecoder decoder,
                        ImageDecoder.ImageInfo info,
                        ImageDecoder.Source source) {
                    decoder.setTargetColorSpace(ColorSpace.get(ColorSpace.Named.SRGB));
                }
            });
} catch (IOException e) {
    // handle exception.
}

ImageDecoder also has the advantage of letting you know the encoded color space of the bitmap before you get the final bitmap, by passing an ImageDecoder.OnHeaderDecodedListener and checking ImageDecoder.ImageInfo#getColorSpace(). Thus, depending on how your application handles color spaces, you can check the encoded color space of the content and set the target color space accordingly:

ImageDecoder.Source source =
        ImageDecoder.createSource(FILE_PATH);
try {
    bitmap = ImageDecoder.decodeBitmap(source,
            new ImageDecoder.OnHeaderDecodedListener() {
                @Override
                public void onHeaderDecoded(ImageDecoder decoder,
                        ImageDecoder.ImageInfo info,
                        ImageDecoder.Source source) {
                    ColorSpace cs = info.getColorSpace();
                    // Do something...
                }
            });
} catch (IOException e) {
    // handle exception.
}

For more detailed usage, check out the ImageDecoder API reference.

Known bad practices

Some typical bad practices include but are not limited to:

  • Always assume sRGB color space
  • Upload image as texture without necessary conversion
  • Ignore the ICC profile during compression

All of these cause a severe, user-perceivable result: color distortion. For example, below is a code snippet that results in an application that is not color correct:

// This is bad, don't do it!
final BitmapFactory.Options options = new BitmapFactory.Options();
final Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);
glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(),
        bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE);

There is no color space check before uploading the bitmap as a texture, so the application will end up with the distorted image below from the color correctness test.

[Image: the same balloons, distorted — the result of skipping color space conversion]

Optional: Be wide color capable

Besides the above changes you must make to handle images correctly, if your application is heavily image based, you will want to take additional steps to display these images in their full vibrant range by enabling wide gamut mode in your manifest or creating Display P3 surfaces.

To enable the wide color gamut in your activity, set the colorMode attribute to wideColorGamut in your AndroidManifest.xml file. You need to do this for each activity for which you want to enable wide color mode.

android:colorMode="wideColorGamut"

You can also set the color mode programmatically in your activity by calling the setColorMode(int) method and passing in COLOR_MODE_WIDE_COLOR_GAMUT.
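
A minimal sketch of the programmatic path, assuming you call it from your activity (for example in onCreate()):

// Enable wide color gamut for this activity's window.
getWindow().setColorMode(ActivityInfo.COLOR_MODE_WIDE_COLOR_GAMUT);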

To render wide color gamut contents, besides the wide color contents themselves, you will also need to create wide color gamut surfaces to render to. In OpenGL, for example, your application must first check for the following extensions, which the snippet below relies on:

  • EGL_KHR_gl_colorspace
  • EGL_EXT_gl_colorspace_display_p3_passthrough
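
A minimal sketch of that extension check using the same EGL10 objects as the snippet below:

String extensions = egl.eglQueryString(display, EGL10.EGL_EXTENSIONS);
boolean wideColorSurfaceSupported = extensions != null
    && extensions.contains("EGL_KHR_gl_colorspace")
    && extensions.contains("EGL_EXT_gl_colorspace_display_p3_passthrough");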

Then, request Display P3 as the color space when creating your surfaces, as shown in the following code snippet:

private static final int EGL_GL_COLORSPACE_KHR = 0x309D;
private static final int EGL_GL_COLORSPACE_DISPLAY_P3_PASSTHROUGH_EXT = 0x3490;

public EGLSurface createWindowSurface(EGL10 egl, EGLDisplay display,
                                      EGLConfig config, Object nativeWindow) {
  EGLSurface surface = null;
  try {
    int attribs[] = {
      EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_DISPLAY_P3_PASSTHROUGH_EXT,
      egl.EGL_NONE
    };
    surface = egl.eglCreateWindowSurface(display, config, nativeWindow, attribs);
  } catch (IllegalArgumentException e) {}
  return surface;
}

Also check out our post on adopting wide color gamut in native code for more details.

API design guidelines for image libraries

Finally, if you own or maintain an image decoding/encoding library, it will need to at least pass the color correctness tests as well. To modernize your library, there are two things we strongly recommend you do when you extend its APIs to manage color (see the sketch after this list):

  1. Explicitly accept ColorSpace as a parameter when you design new APIs or extend existing ones. Instead of hardcoding a color space, an explicit ColorSpace parameter is the more future-proof way forward.
  2. Have all legacy APIs explicitly decode the bitmap to the sRGB color space. Historically there was no color management, and Android treated everything as sRGB implicitly until Android 8.0 (API level 26). Decoding legacy paths to sRGB maintains backward compatibility for your users.
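
As a sketch of the first recommendation (the method names below are hypothetical, for illustration only):

// New API: the caller states the target color space explicitly.
public Bitmap decode(InputStream in, ColorSpace target) {
    // ... decode, then convert when the encoded color space
    // differs from `target` ...
}

// Legacy API: keep backward compatibility by delegating with
// an explicit sRGB target.
public Bitmap decode(InputStream in) {
    return decode(in, ColorSpace.get(ColorSpace.Named.SRGB));
}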

After you finish, go back to the above section and perform the two color tests.