Tag Archives: compression

Google and Binomial Partner to Open-Source Basis Universal Texture Format

Today, Google and Binomial are excited to announce that we have partnered to open source the Basis Universal texture codec to improve the performance of transmitting images on the web and within desktop and mobile applications, while maintaining GPU efficiency. This release fills an important gap in the graphics compression ecosystem and complements earlier work in Draco geometry compression.

The Basis Universal texture format is 6-8 times smaller than JPEG on the GPU, yet has a similar storage size to JPEG – making it a great alternative to current GPU compression methods that are inefficient and don’t operate cross-platform – and provides a more performant alternative to JPEG/PNG. It creates compressed textures that work well in a variety of use cases - games, virtual & augmented reality, maps, photos, short videos, and more!

Without a universal texture format, developers are left with two options:

  • Use GPU formats and take the storage size hit.
  • Use other formats that have a smaller storage size but can't match GPU performance.

Maintaining so many different GPU formats is a burden on the whole ecosystem, from GPU manufacturers to software developers to the end user who can’t get a great cross platform experience. We’re streamlining this with one solution that has built-in flexibility (like optional higher quality modes) but is much easier on everyone to improve and maintain.

How does it all work? Compress your image using the encoder, choosing the quality settings that make sense for your project (you can also submit multiple images for short videos or optimization purposes; just know they’ll share the same color palette). Insert the transcoder code before rendering, which will turn the intermediate format into the GPU format your computer can read. The image stays compressed throughout this process, even on your GPU! Instead of needing to decode and read the whole image, the GPU reads only the parts it needs. Enjoy the performance benefits!
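To make that flow concrete, here is a minimal C++ sketch of the runtime transcoding step. It is illustrative only: the names approximate the transcoder API published in the GitHub repository (basisu_transcoder.h); enum values and constructor arguments vary between releases, so treat every call here as an assumption and check the header before using it.

    // Illustrative sketch only -- verify names against transcoder/basisu_transcoder.h.
    #include "basisu_transcoder.h"
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> transcode_to_etc1(const uint8_t* basis_data, uint32_t basis_size,
                                           uint32_t blocks_x, uint32_t blocks_y) {
        basist::basisu_transcoder_init();          // one-time global setup
        basist::basisu_transcoder transcoder;      // some releases also take a selector codebook here
        std::vector<uint8_t> gpu_blocks(blocks_x * blocks_y * 8);  // ETC1 uses 8 bytes per 4x4 block

        if (!transcoder.start_transcoding(basis_data, basis_size))
            return {};
        // Transcode mip level 0 of image 0 straight into the GPU block format; the data
        // stays compressed and can be uploaded with e.g. glCompressedTexImage2D.
        transcoder.transcode_image_level(
            basis_data, basis_size,
            /*image_index=*/0, /*level_index=*/0,
            gpu_blocks.data(), blocks_x * blocks_y,   // output buffer size, in blocks
            basist::transcoder_texture_format::cTFETC1_RGB);  // name may differ in older releases
        return gpu_blocks;
    }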
Basis Universal can efficiently target the most common GPU formats
Google and Binomial will be working together to continue to support, maintain and add features, so check back frequently for the latest. This initial release of Basis Universal transcodes into the following GPU formats: PVRTC1 opaque, ETC1, ETC2 basic alpha, BC1-5, and BC7 opaque. Over the coming months more functionality will be added including BC7 transparent, ASTC opaque and alpha, PVRTC1 transparent, and higher quality BC7/ASTC.
Basis Universal reduces transmission size for textures while maintaining similar image quality.
See full benchmarking results
Basis Universal improves GPU memory usage over .jpeg and .png
With this partnership, we hope to see adoption of the transcoder in all major browsers to make performant cross-platform compressed textures accessible to everyone via the WebGL API and the forthcoming WebGPU API. In addition to opening up the possibility of seamless integration into pipelines, everyone now has access to the state-of-the-art compressor, which will also be open sourced.

We look forward to seeing what people do with Basis Universal now that it's open sourced. Check out the code and demo on GitHub, let us know what you think, and how you plan to use it! Currently, Basis Universal transcoders are available in C++ and WebAssembly.

By Stephanie Hurlburt, Binomial and Jamieson Brettle, Chrome Media

Announcing the Second Workshop and Challenge on Learned Image Compression



Last year, we announced the Workshop and Challenge on Learned Image Compression (CLIC), an event that aimed to advance the field of image compression with and without neural networks. Held during the 2018 Computer Vision and Pattern Recognition conference (CVPR 2018), CLIC was quite a success, with 23 accepted workshop papers, 95 authors and 41 entries in the competition. This spawned many new algorithms for image compression, domain-specific applications to medical image compression, and augmentations to existing methods, with the winner, Tucodec (abbreviated TUCod4c in the image below), achieving a 13% better mean opinion score (MOS) than Better Portable Graphics (BPG) compression.
An example image from the 2018 test set, comparing the original image to BPG, JPEG and the results from nine competing teams. All the methods are better than JPEG in color reproduction and many of them are comparable to BPG in their ability to create legible text on the sign.
This year, we are again happy to co-sponsor the second Workshop and Challenge on Learned Image Compression at CVPR 2019 in Long Beach, California. The half-day workshop will feature talks from invited guests Anne Aaron (Netflix), Aaron Van Den Oord (DeepMind) and Jyrki Alakuijala (Google), along with presentations from the five top-performing teams in the 2019 competition, which is currently open for submissions.

This year's competition features two tracks for participants to compete in. The first track remains the same as last year's, in what we're calling the "low-rate compression" track. The goal for low-rate compression is to compress an image dataset to 0.15 bits per pixel while maintaining the highest quality, as measured by PSNR, MS-SSIM and a human-evaluated rating task.
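For readers new to these metrics, the bitrate target and PSNR are simple to compute; the short C++ sketch below shows both (MS-SSIM is more involved and omitted here). The helper names are our own, for illustration only.

    #include <cmath>
    #include <cstdint>

    // Bits per pixel: total compressed bits divided by the number of pixels.
    double bits_per_pixel(uint64_t compressed_bytes, uint32_t width, uint32_t height) {
        return (8.0 * double(compressed_bytes)) / (double(width) * double(height));
    }

    // PSNR in dB for 8-bit images, given the mean squared error between
    // the original and the reconstructed image.
    double psnr_db(double mse) {
        return 10.0 * std::log10((255.0 * 255.0) / mse);
    }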

The second track incorporates feedback from last year's workshop, in which participants expressed interest in the inverse challenge of determining the amount an image could be compressed and still look good. In this "transparent compression" challenge, we set a relatively high quality threshold for the test dataset (in both PSNR and MS-SSIM) with the goal of compressing the dataset to the smallest file sizes.

If you're doing research in the field of learned image compression, we encourage you to participate in CLIC during CVPR 2019. For more details on the competition and dates, please refer to compression.cc.

Acknowledgements
This workshop is being jointly hosted by researchers at Google, Twitter and ETH Zürich. We'd like to thank: George Toderici (Google), Michele Covell (Google), Johannes Ballé (Google), Nick Johnston (Google), Eirikur Agustsson (Google), Wenzhe Shi (Twitter), Lucas Theis (Twitter), Radu Timofte (ETH Zürich), Fabian Mentzer (ETH Zürich) for their contributions.

Source: Google AI Blog


Brotli Compression in Google Display Ads

Posted by Michael Burns, Software Engineer, Publisher Tagging & Ads Latency Team

Our goal is to help publishers monetize their content and build sustainable businesses through advertising products that allow sites to load as fast as possible, minimizing the impact on user experience.

Almost two years ago, our compression team announced a new compression algorithm called Brotli. Today, we are happy to announce that the Brotli compression algorithm is now being used to compress Google Display Ads whenever possible. In our experiments, we see data savings of 15% in aggregate over standard gzip compression, and in some instances, a savings of over 40%! This reduces the amount of data sent to end users by tens of thousands of gigabytes every day! This also results in faster page loads and less battery consumption.

We hope results like this will encourage wider adoption and will advance web standards such as Brotli compression.

AMP Compression Update

Posted by Zachary Nado, Software Engineer

Recently we announced the addition of Brotli compression to the Google AMP Cache. All AMP documents served from the Google AMP Cache can now be served with Brotli, which will save a considerable amount of bandwidth for our users and further our goal of improving the mobile experience.

Brotli is a newer, more efficient compression algorithm created by Jyrki Alakuijala and Zoltán Szabadka with the Google Research Europe Compression Team. Launched in 2015, it has already been used to enable considerable savings in other areas of Google. While it is a generic compression algorithm, it performs particularly well when applied to web documents; we have seen an average decrease in document size of around 10% when using Brotli instead of gzip, which has amounted to hundreds of gigabytes of bandwidth saved per day across the Google AMP Cache.

With smaller document sizes, pages load faster while also saving bandwidth which can amount to noticeable savings for users on limited data plans. The Google AMP Cache is just the beginning though, as engineering teams are working on Brotli support in many other products which can enable bandwidth savings throughout Google.

Announcing Guetzli: A New Open Source JPEG Encoder

Crossposted on the Google Research Blog

At Google, we care about giving users the best possible online experience, both through our own services and products and by contributing new tools and industry standards for use by the online community. That’s why we’re excited to announce Guetzli, a new open source algorithm that creates high quality JPEG images with file sizes 35% smaller than currently available methods, enabling webmasters to create webpages that can load faster and use even less data.

Guetzli [guɛtsli] — cookie in Swiss German — is a JPEG encoder for digital images and web graphics that can enable faster online experiences by producing smaller JPEG files while still maintaining compatibility with existing browsers, image processing applications and the JPEG standard. From a practical viewpoint this is very similar to our Zopfli algorithm, which produces smaller PNG and gzip files without needing to introduce a new format, and different from the techniques used in RNN-based image compression, RAISR, and WebP, which all need client and ecosystem changes for compression gains at internet scale.

The visual quality of a JPEG image is directly correlated with its multi-stage compression process: color space transform, discrete cosine transform, and quantization. Guetzli specifically targets the quantization stage, where the more visual quality is sacrificed, the smaller the resulting file becomes. Guetzli strikes a balance between minimal loss and file size by employing a search algorithm that tries to bridge the gap between the psychovisual modeling implicit in the JPEG format and Guetzli’s own psychovisual model, which approximates color perception and visual masking in a more thorough and detailed way than what is achievable with simpler color transforms and the discrete cosine transform. However, while Guetzli creates smaller image files, the tradeoff is that this search takes significantly longer to compress an image than currently available methods.
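To see why the quantization stage is where file size is won or lost, consider the toy C++ sketch below: each 8x8 block of DCT coefficients is divided by a quantization step and rounded, and larger steps zero out more coefficients, shrinking the file at the cost of detail. This is a conceptual illustration of standard JPEG quantization, not Guetzli's actual code; Guetzli's contribution is the psychovisual search that decides how aggressive those steps can be.

    #include <cmath>
    #include <cstdint>

    // Toy illustration of JPEG-style quantization for one 8x8 block of DCT coefficients.
    void quantize_block(const double dct[64], const uint16_t qtable[64], int16_t out[64]) {
        for (int i = 0; i < 64; ++i)
            out[i] = static_cast<int16_t>(std::lround(dct[i] / qtable[i]));  // bigger step -> more zeros
    }

    // Decoding multiplies back, so anything below roughly half a step is lost for good.
    void dequantize_block(const int16_t in[64], const uint16_t qtable[64], double dct[64]) {
        for (int i = 0; i < 64; ++i)
            dct[i] = static_cast<double>(in[i]) * qtable[i];
    }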

Figure 1. 16x16 pixel synthetic example of a phone line hanging against a blue sky — traditionally a case where JPEG compression algorithms suffer from artifacts. The uncompressed original is on the left. Guetzli (right) shows fewer ringing artifacts than libjpeg (middle) and has a smaller file size.
And while Guetzli produces smaller image file sizes without sacrificing quality, we additionally found in experiments where compressed image file sizes were kept constant that human raters consistently preferred the images Guetzli produced over libjpeg images, even when the libjpeg files were the same size or slightly larger. We think this makes the slower compression a worthy tradeoff.

Figure 2. 20x24 pixel zoomed areas from a picture of a cat’s eye. The uncompressed original is on the left. Guetzli (right) shows fewer ringing artifacts than libjpeg (middle) without requiring a larger file size.
It is our hope that webmasters and graphic designers will find Guetzli useful and apply it to their photographic content, making users’ experience smoother on image-heavy websites in addition to reducing load times and bandwidth costs for mobile users. Last, we hope that the new explicitly psychovisual approach in Guetzli will inspire further image and video compression research.

By Robert Obryk and Jyrki Alakuijala, Software Engineers, Google Research Europe

Introducing Draco: compression for 3D graphics

3D graphics are a fundamental part of many applications, including gaming, design and data visualization. As graphics processors and creation tools continue to improve, larger and more complex 3D models will become commonplace and help fuel new applications in immersive virtual reality (VR) and augmented reality (AR).  Because of this increased model complexity, storage and bandwidth requirements are forced to keep pace with the explosion of 3D data.

The Chrome Media team has created Draco, an open source compression library to improve the storage and transmission of 3D graphics. Draco can be used to compress meshes and point-cloud data. It also supports compressing points, connectivity information, texture coordinates, color information, normals and any other generic attributes associated with geometry.
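As a rough illustration of how an application might drive the library, the C++ sketch below encodes a mesh with quantized positions and normals, using the 14-bit and 7-bit settings quoted in the benchmark footnote at the end of this post. The calls follow the encoder API in the GitHub repository, but treat the exact names and signatures as assumptions and check draco/compression/encode.h before relying on them.

    // Illustrative sketch only -- verify against draco/compression/encode.h.
    #include "draco/compression/encode.h"
    #include "draco/mesh/mesh.h"

    draco::EncoderBuffer compress_mesh(const draco::Mesh& mesh) {
        draco::Encoder encoder;
        // Quantize positions to 14 bits and normals to 7 bits, matching the benchmark settings below.
        encoder.SetAttributeQuantization(draco::GeometryAttribute::POSITION, 14);
        encoder.SetAttributeQuantization(draco::GeometryAttribute::NORMAL, 7);

        draco::EncoderBuffer buffer;               // receives the compressed .drc payload
        encoder.EncodeMeshToBuffer(mesh, &buffer); // returns a draco::Status, ignored in this sketch
        return buffer;
    }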

With Draco, applications using 3D graphics can be significantly smaller without compromising visual fidelity. For users this means apps can now be downloaded faster, 3D graphics in the browser can load quicker, and VR and AR scenes can now be transmitted with a fraction of the bandwidth, rendered quickly and look fantastic.


Sample Draco compression ratios and encode/decode performance*

Transmitting 3D graphics for web-based applications is significantly faster using Draco’s JavaScript decoder, which can be tied to a 3D web viewer. The following video shows how efficient transmitting and decoding 3D objects in the browser can be - even over poor network connections.



Video and audio compression have shaped the internet over the past 10 years with streaming video and music on demand. With the emergence of VR and AR on the web and on mobile, and the increasing proliferation of sensors like LIDAR, we will soon be swimming in a sea of geometric data. Compression technologies like Draco will play a critical role in ensuring these experiences are fast and accessible to anyone with an internet connection. More exciting developments are in store for Draco, including support for creating multiple levels of detail from a single model to further improve the speed of loading meshes.

We look forward to seeing what people do with Draco now that it's open source. Check out the code on GitHub and let us know what you think. Also available is a JavaScript decoder with examples on how to incorporate Draco into the three.js 3D viewer.

By Jamieson Brettle and Frank Galligan, Chrome Media Team

* Specifications: tests were run with textures and positions quantized at 14-bit precision and normal vectors at 7-bit precision, on a single core of a 2013 MacBook Pro. JavaScript decoding used Chrome 54 on Mac OS X.

ETC2Comp: fast texture compression for games and VR

For mobile game and VR developers, the ETC2 texture format has become an increasingly valuable tool for texture compression. It produces good on-GPU sizes (it stays compressed in memory) and higher quality textures than its ETC1 counterpart.

These benefits come with a significant downside, however: ETC2 textures take significantly longer to compress than their ETC1 counterparts. As adoption of the ETC2 format increases in a project, so do build times. As such, developers have had to make the classic choice between quality and time.

We wanted to eliminate the need for developers to make that choice, so we’ve released ETC2Comp, a fast and high quality ETC2 encoder for games and VR developers.

ETC2 takes a long time to compress textures because the format defines a large number of possible combinations for encoding a block in the texture. Finding the perfect, highest quality compressed image means brute-forcing this incredibly large number of combinations, which is clearly not a time-efficient option.

We designed ETC2Comp to get the same visual results at much faster speeds by deploying a few optimization techniques:

Directed Block Search. Rather than a brute-force search, ETC2Comp uses a much more limited, targeted search for the best encoding for a given block. ETC2Comp comes with a precomputed set of archetype blocks, where each archetype is associated with a sorted list of the ETC2 block format types that provide its best encodings. During the actual compression of a texture, each block is initially assigned an archetype, and multiple passes are done to test the block against its block format list to find the best encoding. As a result, the best option can be found much quicker than with a brute-force method.

Full effort setting. During each pass of the encoding process, all the blocks of the image are sorted by their visual quality (worst-looking to best-looking). ETC2Comp takes an effort parameter whose value specifies what percentage of the blocks to update during each pass of encoding. An effort value of 25, for instance, means that on each pass, only the 25% worst looking blocks are tested against the next format in their archetypes' format-chains. The result is a tradeoff between optimizing blocks that already look good, and the time it takes to do it.

Highly multi-threaded code. Since blocks can be evaluated independently during each pass, it’s straightforward to apply multithreading to the work. During encoding, ETC2Comp can take advantage of available parallel threads, and it even accepts a jobs parameter, where you can define exactly the number of threads you’d like it to use... in case you have a 256-core machine.
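To make the effort setting described above concrete, here is a small conceptual C++ sketch of one encoding pass: blocks are sorted from worst-looking to best-looking and only the worst fraction is re-encoded against the next candidate format in its archetype's chain. This is our own simplification for illustration, not ETC2Comp's actual code or API.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Block {
        float error;        // current visual error of this block
        int   format_index; // position in the archetype's sorted list of ETC2 block formats
    };

    // One pass: re-encode only the worst `effort_percent` of blocks with the next
    // candidate format from their archetype's format chain (conceptual only).
    void encode_pass(std::vector<Block>& blocks, float effort_percent) {
        std::sort(blocks.begin(), blocks.end(),
                  [](const Block& a, const Block& b) { return a.error > b.error; });
        const std::size_t n =
            static_cast<std::size_t>(blocks.size() * effort_percent / 100.0f);
        for (std::size_t i = 0; i < n; ++i) {
            blocks[i].format_index++;  // advance to the next-best format for this archetype
            // ... re-encode blocks[i] with that format and update blocks[i].error ...
        }
    }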

Check out the code on GitHub to get started with ETC2Comp and let us know what you think. You can use the tool from the command line or embed the C++ library in your project. If you want to know more about what’s going on under the hood, check out this blog post.

By Colt McAnlis, Developer Advocate

Introducing Brotli: a new compression algorithm for the internet

At Google, we think that internet users’ time is valuable, and that they shouldn’t have to wait long for a web page to load. Because fast is better than slow, two years ago we published the Zopfli compression algorithm. This received such positive feedback in the industry that it has been integrated into many compression solutions, ranging from PNG optimizers to preprocessing web content. Based on its use and other modern compression needs, such as web font compression, today we are excited to announce that we have developed and open sourced a new algorithm, the Brotli compression algorithm.

While Zopfli is Deflate-compatible, Brotli is a whole new data format. This new format allows us to get 20–26% higher compression ratios over Zopfli. In our study ‘Comparison of Brotli, Deflate, Zopfli, LZMA, LZHAM and Bzip2 Compression Algorithms’ we show that Brotli is roughly as fast as zlib’s Deflate implementation. At the same time, it compresses slightly more densely than LZMA and bzip2 on the Canterbury corpus. The higher data density is achieved by second-order context modeling, reuse of entropy codes, a larger memory window of past data, and joint distribution codes. Just like Zopfli, the new algorithm is named after a Swiss bakery product: Brötli means ‘small bread’ in Swiss German.
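For developers who want to experiment with the format, compressing a buffer through the library's C API is a one-call affair. The sketch below assumes the BrotliEncoderCompress entry point and constants declared in brotli/encode.h; check the header in the repository for the exact signature in your version.

    // Minimal sketch of one-shot compression via the C API in <brotli/encode.h>.
    #include <brotli/encode.h>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    std::vector<uint8_t> compress_with_brotli(const std::vector<uint8_t>& input) {
        std::size_t out_size = BrotliEncoderMaxCompressedSize(input.size());
        std::vector<uint8_t> output(out_size);
        // Quality 11 is the densest setting; lower values trade ratio for encoding speed.
        if (!BrotliEncoderCompress(/*quality=*/11, BROTLI_DEFAULT_WINDOW, BROTLI_MODE_GENERIC,
                                   input.size(), input.data(), &out_size, output.data()))
            return {};
        output.resize(out_size);
        return output;
    }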

The smaller compressed size allows for better space utilization and faster page loads. We hope that this format will be supported by major browsers in the near future, as the smaller compressed size would give additional benefits to mobile users, such as lower data transfer fees and reduced battery use.

By Zoltan Szabadka, Software Engineer, Compression Team