
Google funds open source silicon manufacturing shuttles for GlobalFoundries PDK

In August, we released the Process Design Kit (PDK) for the GlobalFoundries 180nm MCU technology platform under the Apache 2.0 license. This open source PDK, the result of our ongoing pathfinding partnership with GlobalFoundries, gives open source silicon designers new capabilities for high-volume production, affordability, and more voltage options. It includes:
  • Digital standard cell libraries (7-track and 9-track)
  • Low (3.3V), medium (5V, 6V), and high (10V) voltage devices
  • SRAM macros (64x8, 128x8, 256x8, 512x8)
  • I/O and primitive cell libraries (resistors, capacitors, transistors, eFuses)
Following the announcement about GlobalFoundries joining Google’s open source silicon initiative, we are now sponsoring a series of no-cost OpenMPW shuttle runs for the GF180MCU PDK in the coming months.


These shuttles will leverage the existing OpenMPW shuttle infrastructure, based on the OpenLane automated design flow, with the same Caravel harness and the Efabless platform for project submissions.

Each shuttle run will select 40 projects based on the following criteria:
  • Design sources must be released publicly under an open source license.
  • Projects must be reproducible from design sources and the GF180MCU PDK.
  • Projects must be submitted by the shuttle deadline (projects submitted earlier get additional chances to be selected).
  • Projects must pass the pre-manufacturing checks.
The first shuttle, GF-MPW-0, will be a test shuttle, with submissions open from Oct. 31, 2022 to Dec. 5, 2022. It will be used to validate, together with the community, the integration of the new PDK with the open source silicon toolchain and the Caravel harness; further shuttles will have a longer project application window and improved testing.

We encourage you to re-submit your previous OpenMPW shuttle projects to this shuttle as a way to validate their portability across open source PDKs:
  • Go to developers.google.com/silicon.
  • Navigate to the "Create a new Project" link.
  • Follow the instructions to integrate your project into the latest version of the caravel_user_project template.
  • Make sure you select the right variant of the GF180MCU PDK (5LM_1TM_9K) by exporting the environment variable PDK=gf180mcuC in your workspace before running any commands (see the sketch after this list).
  • Submit your project for manufacturing on the Efabless platform.
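For illustration, here is a minimal Python sketch of the environment variable step; the make target is a placeholder, not a documented command, so substitute whatever your caravel_user_project workflow actually uses:

```python
import os
import subprocess

# Select the GF180MCU PDK variant (5LM_1TM_9K) before running any flow commands.
env = os.environ.copy()
env["PDK"] = "gf180mcuC"

# Hypothetical flow invocation; replace "user_project_wrapper" with the
# actual target from your caravel_user_project checkout.
subprocess.run(["make", "user_project_wrapper"], env=env, check=True)
```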
We're excited to see designers leverage this program, both by porting existing projects previously submitted to OpenMPW shuttles and by designing new projects that target the GF180MCU PDK, as we find paths together to research and advance the open source silicon ecosystem.

By Ethan Mahintorabi, Software Engineer and Johan Euphrosine, Developer Programs Engineer – Hardware Toolchains Team, and Aaron Cunningham, Technical Program Manager – Google Open Source Programs Office

Sigstore project announces general availability and v1.0 releases


Today, the Sigstore community announced the general availability of their free, community-operated certificate authority and transparency log services. In addition, two of Sigstore’s foundational projects, Fulcio and Rekor, published v1.0 releases denoting a commitment to API stability. Google is proud to celebrate these open source community milestones.

Sigstore is a standard for signing, verifying, and protecting open source software. With increased industry attention on software supply chain security, including the recent Executive Order on Cybersecurity, the ability to know and trust where software comes from has never been more important. Sigstore simplifies and automates the complex parts of digitally signing software, making signing more accessible and trustworthy than ever before.

Beginning in 2020 as an open source collaboration between Red Hat and Google, the Sigstore project has grown into a vendor-neutral, community-operated and community-designed project that is part of the Open Source Security Foundation (OpenSSF). The ecosystem has also continued to grow across multiple package managers and ecosystems, and if you download a new release of an open source project like Python or Kubernetes, you’ll see that it has been signed with Sigstore.

Google is an active, contributing member of the Sigstore community, and has contributed in several ways beyond upstream code. We are part of a larger open source community helping develop and run Sigstore, and welcome new adopters and contributors! To learn more about getting started using Sigstore, the project documentation guides you through the process of signing and verifying your software. To get started contributing, several individual repositories within the Sigstore GitHub organization use “good first issue” labels to flag approachable tasks. The project maintains a Slack community (use the invite to join) and regularly holds community meetings.

By Dave Lester – Google Open Source Programs Office, and Bob Callaway – Google Open Source Security Team

Kubeflow applies to become a CNCF incubating project

Google has pioneered AI and ML and has a history of innovative technology donations to the open source community (e.g., TensorFlow and JAX). Google is also the initial developer of and largest contributor to Kubernetes, and brings a wealth of experience to the project and its community. Building an ML platform on our state-of-the-art Google Kubernetes Engine (GKE), we learned best practices from our users, and in 2017, we used that experience to create and open source the Kubeflow project.

In May 2020, with the v1.0 release, Kubeflow reached maturity across a core set of its stable applications. During that year, we also graduated Kubeflow Serving as an independent project, KServe, which is now incubating in Linux Foundation AI & Data.

Today, Kubeflow has developed into an end-to-end, extendable ML platform, with multiple distinct components to address specific stages of the ML lifecycle: model development (Kubeflow Notebooks), model training (Kubeflow Pipelines and Kubeflow Training Operator), model serving (KServe), and automated machine learning (Katib).
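To give a flavor of what working with one of these components looks like, here is a minimal, hypothetical pipeline definition using the Kubeflow Pipelines SDK (kfp); the component logic is a placeholder, and the exact imports vary across kfp releases:

```python
from kfp import compiler, dsl

# A toy component; a real component would contain actual training logic.
@dsl.component
def train(learning_rate: float) -> str:
    return f"model trained with lr={learning_rate}"

# A one-step pipeline that a Kubeflow Pipelines instance can execute.
@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(learning_rate: float = 0.01):
    train(learning_rate=learning_rate)

# Compile the pipeline to an IR file for submission to Kubeflow Pipelines.
compiler.Compiler().compile(training_pipeline, "training_pipeline.yaml")
```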

The Kubeflow project now has close to 200 contributors from over 30 organizations, and the Kubeflow community has hosted several summits and contributor meetups across the world. The broader Kubeflow ecosystem includes a number of distributions across multiple cloud service providers and on-prem environments. Kubeflow’s powerful development experience helps data scientists build, train, and deploy their ML models, enabling enterprise ML operations teams to deploy and scale advanced workflows on a variety of infrastructures.

Google’s application for Kubeflow to become a CNCF incubating project is the next big milestone for the Kubeflow community, and we’re thrilled to see how developers will continue to build and innovate in ML using this project.

What's next? The pull request we’ve opened today to join the CNCF as an incubating project is only the first step. Google and the Kubeflow community will work with the CNCF and its Technical Oversight Committee (TOC) to meet the incubation stage requirements. While the due diligence and eventual TOC decision can take a few months, the Kubeflow project will continue developing and releasing throughout this process.

If Kubeflow is accepted into CNCF, the project’s assets will be transferred to the CNCF, including the source code, trademark, website, and other collaboration and social media accounts. At Google, we believe that using open source comes with a responsibility to contribute, maintain, and improve those projects. In that spirit, we will continue supporting the Kubeflow project and work with the community towards the next level of innovation.

Thanks to everyone who has contributed to Kubeflow over the years! We are excited for what lies ahead for the Kubeflow community.

By Thea Lamkin, Senior Program Manager and Mark Chmarny, Senior Technical Program Manager – Google Open Source

ko applies to become a CNCF sandbox project

Back in 2018, the team at Google working on Knative needed a faster way to iterate on Kubernetes controllers. They created a new tool dedicated to deploying Go applications to Kubernetes without having to worry about container images. That tool has proven to be indispensable to the Knative community, so in March 2019, Google released it as a stand-alone open source project named ko.

Since then, ko has gained popularity as a simple, fast, and secure container image builder for Go applications. More recently, the ko community has added, among many other features, multi-platform support and automatic SBOM generation. Today, like the original team at Google, many open source and enterprise development teams depend on ko to improve their developer productivity. The ko project is also increasingly used as a solution for a number of build use cases, and is being integrated into a variety of third-party CI/CD tools.

At Google, we believe that using open source comes with a responsibility to contribute, sustain, and improve the projects that make our ecosystem better. To support the next phase of community-driven innovation, enable net-new adoption patterns, and to further raise the bar in the container tool industry, we are excited to announce today that we have submitted ko as a sandbox project to the Cloud Native Computing Foundation (CNCF).

This step begins the process of transferring the ko trademark, IP, and code to CNCF. We are excited to see how the broader open source community will continue innovating with ko.

By Mark Chmarny – Google Open Source Programs Office

Announcing KataOS and Sparrow


As we find ourselves increasingly surrounded by smart devices that collect and process information from their environment, it's more important now than ever that we have a simple solution to build verifiably secure systems for embedded hardware. If the devices around us can't be mathematically proven to keep data secure, then the personally-identifiable data they collect—such as images of people and recordings of their voices—could be accessible to malicious software.

Unfortunately, system security is often treated as a software feature that can be added to existing systems, or solved with an extra piece of ASIC hardware; this generally is not good enough. Our team in Google Research has set out to solve this problem by building a provably secure platform that's optimized for embedded devices that run ML applications. This is an ongoing project with plenty left to do, but we're excited to share some early details and invite others to collaborate on the platform so we can all build intelligent ambient systems that have security built-in by default.

To begin collaborating with others, we've open sourced several components for our secure operating system, called KataOS, on GitHub, as well as partnered with Antmicro on their Renode simulator and related frameworks. As the foundation for this new operating system, we chose seL4 as the microkernel because it puts security front and center; it is mathematically proven secure, with guaranteed confidentiality, integrity, and availability. Through the seL4 CAmkES framework, we're also able to provide statically-defined and analyzable system components. KataOS provides a verifiably-secure platform that protects the user's privacy because it is logically impossible for applications to breach the kernel's hardware security protections and the system components are verifiably secure. KataOS is also implemented almost entirely in Rust, which provides a strong starting point for software security, since it eliminates entire classes of bugs, such as off-by-one errors and buffer overflows.

The current GitHub release includes most of the KataOS core pieces, including the frameworks we use for Rust (such as the sel4-sys crate, which provides seL4 syscall APIs), an alternate rootserver written in Rust (needed for dynamic system-wide memory management), and the kernel modifications to seL4 that can reclaim the memory used by the rootserver. And we've collaborated with Antmicro to enable GDB debugging and simulation for our target hardware with Renode.

Internally, KataOS is also able to dynamically load and run third-party applications built outside of the CAmkES framework. At the moment, the code on GitHub does not include the required components to run these applications, but we hope to publish these features in the near future.

To prove out a secure ambient system in its entirety, we're also building a reference implementation for KataOS called Sparrow, which combines KataOS with a secured hardware platform. In addition to the logically-secure operating system kernel, Sparrow includes a logically-secure root of trust built with OpenTitan on a RISC-V architecture. However, for our initial release, we're targeting a more standard 64-bit ARM platform running in simulation with QEMU.

Our goal is to open source all of Sparrow, including all hardware and software designs. For now, we're just getting started with an early release of KataOS on GitHub. So this is just the beginning, and we hope you will join us in building a future where intelligent ambient ML systems are always trustworthy.

By Sam, Scott, and June – AmbiML Developers

Flutter SLSA Progress & Identity and Access Management through Infrastructure As Code

We are excited to announce several new achievements in Dart and Flutter's mission to harden security. We have achieved Supply Chain Levels for Software Artifacts (SLSA) Level 2 security on Flutter’s Cocoon application, reduced our Identity and Access Management permissions to the minimum required access, and implemented Infrastructure-as-Code to manage permissions for some of our applications. These achievements follow our recent success enabling Allstar and Security Scorecards.

Highlights

Achieving Flutter’s Cocoon SLSA Level 2: The Cocoon application provides continuous integration orchestration for Flutter infrastructure. Cocoon also helps integrate several CI services with GitHub and provides tools to make GitHub development easier. Achieving SLSA Level 2 for Cocoon means we have addressed all the security concerns of Levels 1 and 2 across the application. Under SLSA Level 2, Cocoon has “extra resistance to specific threats” to its supply chain. The Google Open Source Security team has audited and validated our achievement of SLSA Level 2 for Cocoon.


Implementing Identity & Access Management (IAM) via Infrastructure-as-Code: We have implemented additional security hardening features by onboarding docs-flutter-dev, master-docs-flutter-dev, and flutter-dashboard to use Identity and Access Management through an Infrastructure-as-Code system. These projects host applications, provide public documentation for Flutter, and contain a dashboard website for Flutter build status.

Using our Infrastructure-as-Code approach, security permission changes require code changes, ensuring approval is granted before the change is made. This also means that changes to security permissions are audited through source control and contain associated reasoning for the change. Existing IAM roles for these applications have been pared so that the applications follow the Principle of Least Privilege.

Advantages

  • Cocoon now meets SLSA Level 2, addressing all the security concerns of Levels 1 and 2 and giving it “extra resistance to specific threats” to its supply chain.
  • Provenance is now generated for both flutter-dashboard and auto-submit artifacts through Cocoon’s automated build process. Provenance on these artifacts shows proof of their code source and tamper-proof build evidence. This work helps harden the security of the multiple tools used during the Cocoon build process: Google Cloud Platform, Cloud Build, App Engine, and Artifact Registry.
  • Overall, we addressed 83% of all SLSA requirements across all levels for the Cocoon application. We have identified the remaining work for each level and category of SLSA compliance, so we are well positioned to continue toward SLSA Level 4.

Learnings and Best Practices

  1. Relatively small changes to the Cocoon application’s build process significantly increased the security of its supply chain. Google Cloud Build made this simple, since provenance metadata is created automatically during the Cloud Build process.
  2. Regulating IAM permissions through code changes adds many additional benefits and can make granting first time access simpler.
  3. Upgrading the SLSA level of an application requires varying effort depending on the application's build process. Working toward SLSA Level 4 will likely necessitate different configuration and code changes than were required for SLSA Level 2.

Coming Soon

This is the beginning of the Flutter and Dart journey toward greater SLSA accomplishments, and we hope to apply our learnings to more applications. We plan to begin work toward SLSA Level 2 and beyond for more complex repositories like flutter/flutter, and to achieve an even higher level of SLSA compliance for the Cocoon application.

References

Supply Chain Levels for Software Artifacts (SLSA) is a security framework which outlines levels of supply chain security for an application as a checklist.

By Jesse Seales, Software Engineer – Dart and Flutter Security Working Group

Announcing the second group of Open Source Peer Bonus winners in 2022



We’re excited to announce our second group of Open Source Peer Bonus winners in 2022! The Google Open Source Peer Bonus program is designed to recognize external open source contributors nominated by Googlers for their open source contributions. This cycle, we are pleased to announce a total of 141 winners across 110+ projects, residing in 36 countries.

All open source contributors external to Google are eligible to be nominated. Whether you’re a software engineer, technical writer, community advocate, mentor, user experience designer, security expert, or educator, you can be nominated for a peer bonus.

Our awards often come as a surprise to some while also providing motivation to others to responsibly contribute to open source. Learn more about what the Google Open Source Peer Bonus program means to our winners from this cycle:

“It was a very nice surprise to receive the Open Source Peer Bonus notification. I hope it can help lift contributors off, not only for their code contributions but for community contributions too.” – Oriol Abril Pla, ArviZ, PyMC

“The Kubernetes and CNCF ecosystem is massive. So, there are tons of opportunities to carve out your own niche in them. One of my key goals has been to make the project(s) more secure than how they were when I joined them. These awards are a welcome sprinkle of motivation to keep being a responsible open source contributor.” – Pushkar Joglekar, Kubernetes and CNCF

“I’m very pleased and proud to receive a Google Open Source Peer Bonus award. I was nominated for my contributions to The Good Docs Project where we are creating technical writing templates to help other projects create high-quality documentation. I’m passionate about the work we’re doing there, and have been hanging around the project since its inception in 2019. This is a friendly, inclusive community creating a safe space for folk to dip their toe into open source. We are global, and new folk are always welcome.” – Felicity Brand, The Good Docs Project

“I've been actively working on open source projects since my time at NIST with the FDS project starting in 2006. More recently with The Good Docs Project (TGDP) since 2020. It's been a very rewarding experience to contribute to TGDP, with such an amazing diversity of participants, perspectives and interests involved. To be given recognition through the OSPB program was a pleasant and unexpected surprise. While it's not at all what I am participating in the project for, it feels great to have someone else in the project bring my name up for this award. Thank you to TGDP and to Google for this honor.” – Bryan Klein, The Good Docs Project

“The Open Source Peer Bonus program is more than an appreciation for our contribution to the open source world. It encourages people to share their talent. To be the hero of the ones who are benefiting from your work, put your codes in the open source world.” – Nan YE, Orange Innovation China

“The TFX team and community is by far the most responsive, helpful and knowledgeable open-source project that I have worked on. It's a great feeling to be a part of the democratizing of productionised ML workflows, and being officially recognised on your efforts and contributions is the cherry on top.” – Jens Wiren, Analytical Impact Solutions

“The HTTP Archive team is welcoming to contributors and happily showed me the ropes until I got going. The project is invaluable to the web community, and working on the Web Almanac allowed me to work with domain experts on several topics, including Performance, JavaScript, and Third Parties.” – Kevin Farrugia, HTTP Archive

“Participating in these projects has been a great learning experience and has given me the opportunity to connect with a lot of great people. I am humble and grateful for the recognition and appreciation this program gives to the contributions made to these projects.” – Ole Markus With, kOps/etcdadm

“Google has been very generous in recognising VertFlow, which is a tool still in its infancy after the idea popped into my head a few months ago in conversation with a Google Cloud Customer Engineer. I hope this will encourage users to adopt VertFlow to reduce their carbon footprint when using GCP.” – Jack Lockyer-Stevens, VertFlow

Below is the list of current winners who gave us permission to thank them publicly:

  • abap2xlsx – Gregor Wolf
  • ABC A System for Sequential Synthesis and Verification – Alan Mishchenko
  • Accelerated HW Synthesis – Zihao Li
  • Agones – Daniel Oliveira
  • Android, Pithus, Exodus Privacy, PiRogue, Frida – Esther Onfroy
  • AndroidX Jetpack – Michał Zieliński
  • Angular – Dario Piotrowicz
  • Angular Language Service – Ivan Wan
  • Apache Airflow – Elad Kalif
  • Apache Beam – Alex Van Boxel
  • Apache Beam – Austin Bennett
  • Apache Beam – Moritz Mack
  • Apache Hop – Matt Casters
  • aroman – Avi Romanoff
  • ArviZ and PyMC – Oriol Abril Pla
  • Babel – Nicolò Ribaudo
  • Bazel – Fabian Meumertzheim
  • Beam – Alex Kosolapov
  • Blockly – Johnny Oshika
  • BRLTTY – Dave Mielke
  • Bun – Jarred Sumner
  • cargo-make – Sagie Gur-Ari
  • Chrome DevTools Frontend – Percy Ley
  • Chromium – Juba Borgohain
  • Chromium – David Sanders
  • Chromium – Amos Lim
  • ClangBuiltLinux – Nathan Chancellor
  • cloud-data-quality – Amandeep Singh
  • CNCF – Ragashree M C
  • Contibuting.today Open Source meetup – Floor Drees
  • CoreDNS and Kubernetes – Chris O'Haver
  • cpu_features – Mykola Hohsadze
  • DartPad – Tim Maffett
  • dbus – Simon McVittie
  • Dill – Mike McKerns
  • distroless – Ole-Martin Bratteng
  • Don't kill my app and merge to Google Android CTS – Petr Nálevka
  • ecma262 – Richard Gibson
  • Firebase Admin .NET SDK – Levi Muriuki
  • Firebase Admin Node.js SDK – Igor Savin
  • Firebase Admin Node.js SDK – Aras Abbasi
  • Firebase Apple SDK – Mike Hardy
  • Firebase Apple SDK – Jake Krog
  • Firebase Apple SDK – Alex Zchut
  • Firebase Arduino Client Library for ESP8266 and ESP32 – Suwatchai Klakerdpol
  • Firebase Crashlytics – Sergio Campamá
  • firebase-ios-sdk – Fumito Ito
  • firebase-ios-sdk – Tito Ciuro
  • firebase-js-sdk – Andi Pätzold
  • fish-shell – Peter Ammon
  • Flashrom – Thomas Heijligen
  • Flashrom – Felix Singer
  • FreeCAD – Lei Zheng
  • Fuchsia – Alexander Popov
  • Git – Jorawar Singh
  • git and openssh – Fabian Stelzer
  • GNU Guix – Ludovic Courtès
  • GNU Mes – Janneke Nieuwenhuizen
  • go-clean-arch – Iman Tumorang
  • golang/protobuf – Cassondra Foesch
  • google-cloud-pricing-cost-calculator – Nils Knieling
  • gopls – Ruslan Nigmatullin
  • GrapheneOS – Daniel Micay
  • GSYVideoPlayer – Asher Guo
  • Hello World gRPC-Gateway – Rajiv Singh
  • Lichess – Thibault Duplessis
  • JRuby – Charles Nutter
  • Keras – Sayak Paul
  • KernelWireguard – Jason Donenfeld
  • Knative – Mahamed Ali
  • Knative – Gabriel Freites
  • Kubernetes, CNCF – Pushkar Joglekar
  • Kubernetes (kOps, etcdadm etc) – Ciprian Hacman
  • Kubernetes (particularly kOps / etcdadm) – Ole Markus With
  • Kubernetes (particularly kOps / etcdadm) – Peter Rifel
  • Kubernetes Gateway API – Keith Mattix
  • KUnit/Linux kernel – Shuah Khan
  • Leaflet – Volodymyr Agafonkin
  • libyuv – Yuan Tong
  • lnav – Tim Stack
  • Log4J – Ralph Goers
  • Magit – Jonas Bernoulli
  • medium_stats – Oliver Tosky
  • Mockk – Oleksii Pylypenko
  • moja global – Harsh Bardhan Mishra
  • mvt (Mobile Verification Toolkit) – Claudio Guarnieri
  • OSS educator and collaborator – José Luis Chiquete
  • notcurses – nick black
  • Nudge – Erik Gomez
  • OpenSSF Allstar – Yori Yano
  • Oppia – Om Khandade
  • Oppia – Chantel Chan
  • OR-Tools – Xiang Chen
  • pcileech (and LeechCore subproject) – Ulf Frisk
  • Project Jupyter – Min Ragan-Kelley
  • Protocol Buffers – Yannic Bonenberger
  • pyinfra – Nick Mills-Barrett
  • PyPI – Jack Lockyer-Stevens
  • PyTorch / XLA – Ronghang Hu
  • QGIS – Nyall Dawson
  • react-native-firebase – Minsik Kim
  • Rich, Textualize – Will McGugan
  • Rust for Linux – Björn Roy Baron
  • sableangle – Miki Huang
  • Samba – David Mulder
  • Scorecards – Varun Sharma
  • Scorecards – Naveen Srinivasan
  • SimpleWebAuthn – Matthew Miller
  • SLSA – Michael Lieberman
  • Spock – Leonard Brünings
  • SQLAlchemy – Michael Bayer
  • stage0 – Jeremiah Orians
  • styler – Lorenz Walthert
  • Surelog – Alain Dargelas
  • Svelte – Rich Harris
  • TC39 – Jordan Harband
  • Tekton – Parth Patel
  • Tekton – Andrew Bayer
  • TensorFlow – Stefano Fabri
  • TensorFlow – Jason Zaman
  • TensorFlow Lite Examples - Android – Nan Ye
  • TFX – Ukjae Jeong
  • TFX – Jens Wiren
  • TFX-Addons – Gerard Casas Saez
  • TFX-Addons – Hannes Hapke
  • TFX-BSL – Martin Bomio
  • tfx-helper – Tomasz Mackowiak
  • The Good Docs Project – Aaron Peters
  • The Good Docs Project – Felicity Brand
  • The Good Docs Project – Ian Nguyen
  • The Good Docs Project – Bryan Klein
  • The Good Docs Project – Serena Jolley
  • Tow-Boot – Samuel Dionne-Riel
  • Trivy – Teppei Fukuda
  • TUF, CNCF – Marina Moore
  • V8 – Ao Wang
  • ViSQOL – Feargus O'Gorman
  • W3C WebGPU standard – Mehmet Oguz Derin
  • wdi5 – Volker Buzek
  • Web Almanac – Kevin Farrugia
  • WebRTC – Byoungchan Lee

Congratulations to our winners above and thank you for your open source contributions. We look forward to your continued support and efforts in the open source communities. Additionally, thank you to all of the Googlers who submitted nominations and our review committee members for reviewing nominations.

By Joe Sylvanovich – Google Open Source Programs Office

Lyra V2 – a better, faster, and more versatile speech codec

Since we open sourced the first version of Lyra on GitHub last year, we have been delighted to see a vibrant community growing around it, with thousands of stars, hundreds of forks, and many comments and pull requests. There are people who fixed and formatted our code, built continuous integration for the project, and even added support for WebAssembly.

We are incredibly grateful for all these contributions, and we also heard the community's feedback asking us to improve Lyra. Developers wanted to run Lyra on more platforms and develop applications in more languages, and asked for a model that computes faster, offers more bitrate options, has lower latency, and produces better audio quality with fewer artifacts.

That's why we are now releasing Lyra V2, with a new architecture that enjoys wider platform support, provides scalable bitrate capabilities, has better performance, and generates higher quality audio. With this release, we hope to continue to evolve with the community and, through its collective creativity, see new applications being developed and new directions emerging.

New Architecture

Lyra V2 is based on an end-to-end neural audio codec called SoundStream. The architecture has a residual vector quantizer (RVQ) sitting before and after the transmission channel, which quantizes the encoded information into a bitstream and reconstructs it on the decoder side.

Lyra V2's SoundStream architecture
The integration of RVQ into the architecture allows changing the bitrate of Lyra V2 at any time by selecting the number of quantizers to use. When more quantizers are used, higher quality audio is generated (at the cost of a higher bitrate). Lyra V2 supports three bitrates: 3.2 kbps, 6 kbps, and 9.2 kbps. This enables developers to choose the bitrate most suitable for their network conditions and quality requirements.
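To make the quantizer-count/bitrate tradeoff concrete, here is a minimal NumPy sketch of residual vector quantization; the stage count, codebook size, and dimensions are illustrative toy values, not Lyra V2's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_STAGES = 3      # more stages -> more bits per frame -> higher bitrate
CODEBOOK_SIZE = 16  # log2(16) = 4 bits per stage
DIM = 8             # toy embedding dimension

# In a real codec these codebooks are learned; here they are random.
codebooks = rng.normal(size=(NUM_STAGES, CODEBOOK_SIZE, DIM))

def rvq_encode(x, num_stages):
    """Each stage quantizes the residual left over by the previous stage."""
    indices, residual = [], x
    for cb in codebooks[:num_stages]:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        indices.append(idx)
        residual = residual - cb[idx]  # the next stage sees what's left
    return indices

def rvq_decode(indices):
    """Sum the selected codewords to reconstruct the embedding."""
    return sum(codebooks[stage][idx] for stage, idx in enumerate(indices))

x = rng.normal(size=DIM)
for stages in (1, 2, 3):  # scalable bitrate: just drop trailing stages
    err = np.linalg.norm(x - rvq_decode(rvq_encode(x, stages)))
    print(f"{stages} stage(s): reconstruction error {err:.3f}")
```

With learned codebooks, each additional stage refines the reconstruction, which is the mechanism behind Lyra V2's selectable 3.2/6/9.2 kbps operating points.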

Lyra V2's model is exported in TensorFlow Lite, TensorFlow's lightweight cross-platform solution for mobile and embedded devices, which supports various platforms and hardware accelerations. The code is tested on Android phones and Linux, with experimental Mac and Windows support. Operation on iOS and other embedded platforms is not currently supported, although we expect it is possible with additional effort. Moreover, this paradigm opens Lyra to any future platform supported by TensorFlow Lite.
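As a sketch of what consuming such a model looks like with standard TensorFlow Lite tooling in Python (the file name and tensor shapes here are hypothetical; see the Lyra release for the actual .tflite artifacts and supported APIs):

```python
import numpy as np
import tensorflow as tf

# Load a TFLite model with the stock interpreter. "lyra_encoder.tflite" is a
# hypothetical file name standing in for one of the shipped Lyra V2 models.
interpreter = tf.lite.Interpreter(model_path="lyra_encoder.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a silent dummy frame with whatever shape/dtype the model expects.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
encoded = interpreter.get_tensor(out["index"])
print(encoded.shape)
```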

Better Performance

With the new architecture, the delay is reduced from the 100 ms of the previous version to 20 ms. In this regard, Lyra V2 is comparable to Opus, the most widely used audio codec for WebRTC, which has typical delays of 26.5 ms, 46.5 ms, and 66.5 ms.

Lyra V2 also encodes and decodes five times faster than the previous version. On a Pixel 6 Pro phone, Lyra V2 takes 0.57 ms to encode and decode a 20 ms audio frame, which is 35 times faster than real time. The reduced complexity means that more phones can run Lyra V2 in real time than V1, and that the overall battery consumption is lowered.

Higher Quality

Driven by advances in machine learning research over the years, the quality of the generated audio has also improved. Our listening tests show that the audio quality (measured in MUSHRA score, an indication of subjective quality) of Lyra V2 at 3.2 kbps, 6 kbps, and 9.2 kbps measures up to Opus at 10 kbps, 13 kbps, and 14 kbps respectively.

Lyra vs. Opus at various bitrates

[Audio players removed: for each of two samples, the post lets you compare the original clip against Opus @6 kbps, LyraV1, Opus @10 kbps, LyraV2 @3.2 kbps, Opus @13 kbps, LyraV2 @6 kbps, Opus @14 kbps, and LyraV2 @9.2 kbps.]

This makes Lyra V2 a competitive alternative to other state-of-the-art telephony codecs. While Lyra V1 already compares favorably to the Adaptive Multi-Rate Narrowband (AMR-NB) codec, Lyra V2 further outperforms Enhanced Voice Services (EVS) and Adaptive Multi-Rate Wideband (AMR-WB), and is on par with Opus, all the while using only 50%–60% of their bandwidth.

Lyra vs. state-of-the-art codecs

[Audio players removed: for each of two samples, the post lets you compare the original clip against AMR-NB, LyraV1, EVS, AMR-WB, Opus @13 kbps, and LyraV2 @6 kbps.]
This means more devices can be connected in bandwidth-constrained environments, or that additional information can be sent over the network to reduce voice choppiness through forward error correction and packet loss concealment.

Open Source Release

Lyra V2 continues to provide what is already in Lyra V1 (the build tools, the testing frameworks, the C++ encoding and decoding API, the signal processing toolchain, and the example Android app). Developers who have experience with the Lyra V1 API will find that the V2 API looks familiar, but with a few changes. For example, now it's possible to change bitrates during encoding (more information is available in the release notes). In addition, the model definitions and weights are included as .tflite files. As with V1, this release is a beta version and the API and bitstream are expected to change. The code for running Lyra is open sourced under the Apache license. We can’t wait to see what innovative applications people will create with the new and improved Lyra!

By Hengchin Yeh – Chrome

Acknowledgements

The following people helped make the open source release possible: from Chrome: Alejandro Luebs, Michael Chinen, Andrew Storus, Tom Denton, Felicia Lim, Bastiaan Kleijn, Jan Skoglund, Yaowu Xu, Jamieson Brettle, Omer Osman, Matt Frost, Jim Bankoski; and from Google Research: Neil Zeghidour, Marco Tagliasacchi

Co-simulating ML with Springbok using Renode

The landscape of Machine Learning software libraries and models is evolving rapidly, and to satisfy the ever-increasing demand for memory and compute while managing latency, power and security considerations, hardware must be developed in an iterative process alongside the workloads it is meant to run.

With its open architecture, custom instruction support, and flexible vector extensions, the RISC-V ISA offers an unprecedented capacity for such co-design. By energizing the open hardware ecosystem, RISC-V has supercharged research and innovation into how to improve chipmaking itself to better leverage the methods, and suit the needs, of software. Initiatives such as Google’s OpenMPW shuttle program show how a more open and software-focused approach to building hardware is key to enabling a new wave of more powerful and transparent ML-focused solutions.

A RISC-V-based ML accelerator with a HW/SW co-design flow

In the past months, Google Research has joined forces with Antmicro to work on a silicon project that can serve as a template for efficient hardware-software co-design. For their secure ML solution, the Google Research team, supported by Antmicro, has been developing a completely open source, rapid pre-silicon ML development flow using Renode, Antmicro’s open source simulation framework.

This builds on the results of last year's cooperation, in which Antmicro implemented Renode support for the RISC-V Vector extensions used in the Google team’s RISC-V-based ML accelerator, codenamed Springbok. To provide a more well-rounded developer experience, Antmicro is also working as part of the project on improving support for the underlying SoC and on a large number of user-oriented features such as OS-aware debugging, performance optimizations, payload profiling, and performance measurement capabilities.

Springbok is part of Google’s AmbiML project, which aims to create an open source ML development ecosystem centered on privacy and security. By using the RISC-V Vector extensions, the Google Research team has a standard but flexible way to parallelize the matrix multiply and accumulate operations that are universal in ML payloads. And thanks to Renode, the team can make informed choices about exactly how to leverage RISC-V’s flexibility, analyzing tradeoffs between speed, complexity, and specialization in a practical, iterative fashion: the data Renode generates and its text-based configuration capabilities let them play around with hardware composition and functionality in a matter of minutes, not days.

Diagram of a RISC-V-based ML accelerator with a HW/SW co-design flow

On the ML software side, the ecosystem revolves around IREE—Google’s research project developing an open source ML compiler and runtime for constrained devices, based on LLVM MLIR.

IREE allows you to load models from typical ML frameworks such as TensorFlow or TensorFlow Lite and convert them to an intermediate representation (MLIR), which then goes through graph-level optimizations and an LLVM compilation flow to produce a runtime best fitted to a specific target. For deploying models on target devices, IREE provides APIs for both the C and Python programming languages, as well as a TFLite C API that follows the same conventions as TFLite for model loading, tensor management, and inference invocation.
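As a rough sketch of that flow using IREE's Python bindings (the dialect syntax, backend name "llvm-cpu", and driver name "local-task" are assumptions that have shifted across IREE releases; consult the IREE documentation for the current API):

```python
import numpy as np
from iree import compiler as ireec
from iree import runtime as ireert

# A tiny MLIR module: elementwise multiply of two 4-element tensors.
MLIR = """
func.func @simple_mul(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
  %0 = arith.mulf %a, %b : tensor<4xf32>
  return %0 : tensor<4xf32>
}
"""

# Compile the module for a CPU target and load it with a local driver.
vmfb = ireec.compile_str(MLIR, target_backends=["llvm-cpu"])
module = ireert.load_vm_flatbuffer(vmfb, driver="local-task")

a = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
b = np.array([4.0, 3.0, 2.0, 1.0], dtype=np.float32)
print(module.simple_mul(a, b))  # -> [4. 6. 6. 4.]
```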

Using these runtimes, the model can be deployed and tested, debugged, benchmarked and executed on the target device or in a simulation environment like Renode.

Demoing the flow at Spring 2022 RISC-V Week

In the run-up to the Spring 2022 RISC-V Week in Paris, the first such large open hardware meeting in years, an initial version of the AmbiML bare-metal ML flow was released as open source. This includes both the ability to run interactively and an example CI setup using Antmicro’s GitHub Renode Action, showing how such a workflow can be tested automatically on each commit. As a Google Cloud partner, Antmicro is currently working with Google Cloud to make Renode available for massive-scale CI testing and deployments for scenarios similar to this one.

In a joint talk at the Paris event, Antmicro and Google presented the software co-development flow, together with a demo of a heterogeneous multi-core solution, with one core running the AmbiML Springbok payload and another core running Zephyr.

In the presented scenario, the Springbok core, acting as an ML compute offload unit for the main CPU, executed inference on the MobileNetV1 network and reported the work done to the application core via a RISC-V custom instruction. Adding and modifying custom instructions is trivial in Renode: it takes a single line of Python or C#, and instructions can even be co-simulated in RTL.

Renode helps ML developers and silicon designers not only run and test their solutions, but also learn more about what their software is actually doing. As part of the Paris demonstration, Antmicro and Google showed how you can count executed instructions and how often specific opcodes are used to measure how well your solution is performing. These features, accompanied by execution metrics analysis, executed-function logging, and recently developed execution trace generation, give you great insight into every detail of your emulated ML environment.

These capabilities join the wide arsenal of hardware/software co-development solutions in Renode, such as RTL co-simulation, which Antmicro has been developing with Microchip, and support for verilated custom instructions, developed with another ML-focused Google team responsible for RISC-V Custom Function Units and also used in the EU-funded VEDLIoT project.

Future plans

This is just the beginning of a wider effort by the Google Research team Antmicro is working with to release software and hardware components, as well as tools supporting a collaborative co-design ecosystem for secure ML development. If you think Renode, RISC-V, and co-development could help in building your next ML-focused product, go ahead and try the AmbiML flow yourself!

Visit the iree-rv32-springbok repository on GitHub, clone it locally and follow the instructions from README.md.


You can also grab Renode from the official repository and start playing with the available demos, or head to the Renode documentation to read up on features helpful for ML acceleration development such as Verilator co-simulation.

By Peter Zierhoffer – Antmicro