Tag Archives: Open source

GSoC 2022: It’s a wrap!

We just wrapped up the final projects for Google Summer of Code 2022 and want to share some highlights from this year’s program. We are pleased to announce that a total of 1,054 GSoC contributors successfully completed the 2022 cycle.

2022 saw some considerable changes to the Google Summer of Code program. Let’s start with some stats around those three major changes:

    • The standard 12-week project length was used by 71.2% of contributors, 19.21% spent 13–18 weeks on their project, and 9.33% of GSoC contributors took advantage of the 19–22 week project lengths. It is clear from feedback written by mentors and contributors alike that the option for extended project lengths was a hit with participants.
    • GSoC 2022 allowed both medium-size (~175 hours) and large-size (~350 hours) projects. For 2022, 47% of the contributor projects were medium while 53% were large projects.
    • This year, for the first time, the program was also open to participants other than students, and 10.4% of the accepted GSoC contributors were non-students.

In the final weeks of the program we asked contributors and mentors questions about their experiences with the program this year. Here are some of the key takeaways from the participants:

Favorite part of GSoC 2022

There were a few themes that rose to the top when contributors were asked what their favorite part of the program was:

  1. Getting involved in their organization’s open source community with folks from all around the world and their amazing mentors.
  2. Learning new skills (programming languages, new technologies) and learning more about open source communities.
  3. Contributing to a meaningful community and project.
  4. Learning from experienced and thoughtful developers (their mentors and their whole community).

Improved programming skills

96% of contributors think that GSoC helped their programming skills. The most common responses to how GSoC improved their skills were:

  • Improving the quality of their code through feedback from mentors, collaboration and learning more about the importance of code reviews.
  • Gaining confidence in their coding skills and knowledge about best practices. Learning how to write more efficient code and to meet the org’s coding standards.
  • Gaining the ability to read and understand real, complex codebases, and learning how to integrate their code with other developers’ code.

Most challenging parts of GSoC

The most common struggles included:
  • Managing their time effectively with many other commitments.
  • Initial days starting with the organization, understanding the codebase, and sometimes learning a new programming language along the way.
  • Communicating with mentors and community members in different time zones and collaborating remotely.

Additional fun stats from GSoC Contributors

  • 99% of GSoC contributors would recommend their GSoC mentors
  • 98% of GSoC contributors plan to continue working with their GSoC organization
  • 99% of GSoC contributors plan to continue working on open source
  • 35% of GSoC contributors said GSoC has already helped them get a job or internship
  • 84% of GSoC contributors said they would consider being a mentor
  • 95% of GSoC contributors said they would apply to GSoC again

We know that’s a lot of numbers to read through, but folks ask us for more information and feedback on GSoC each year, and we hope these details give a fuller picture of the 2022 program. We also want to acknowledge the mentors and GSoC contributors who took the time to fill in their evaluations and give us thoughtful written feedback on how the program affected them.

As we look forward to Google Summer of Code 2023, we want to thank all of our mentors, organization administrators, and contributors for a successful and smooth GSoC 2022. Thank you all for the time and energy you put in to make open source communities stronger and healthier.

Remember GSoC 2023 will be open for organization applications from January 23–February 7, 2023. We will announce the 2023 accepted GSoC organizations February 22 on the program site: g.co/gsoc. GSoC contributor applications will be open March 20–April 4, 2023.

By Stephanie Taylor, Program Manager – Google Open Source

Open sourcing the attention center model

When you look at an image, what parts do you pay attention to first? Would a machine be able to learn this? We provide a machine learning model that can be used to do just that. Why is it useful? The latest generation image format (JPEG XL) supports serving the parts that you pay attention to first, which results in an improved user experience: images will appear to load faster. The model is not only useful for encoding JPEG XL images, but can be used whenever we need to know where a human would look first.

An open source attention center model

What regions in an image will attract the majority of human visual attention first? We trained a model, called the attention center model, to predict such a region given an image, and it is now open sourced. In addition to the model, we provide a script to use it in combination with the JPEG XL encoder: google/attention-center.

Some example predictions of our attention center model are shown in the following figure, where the green dot is the predicted attention center point for the image. Note that in the “two parrots” image both parrots’ heads are visually important, so the attention center point will be in the middle.

Example predictions on four images: a red door with a brass doorknob; a portrait of a girl wearing a colorful sweater, ribbons in her hair, and face paint; a teal-shuttered window in a stucco wall with hibiscus in the foreground; and a blue-and-yellow macaw next to a red-and-green macaw.
Images are from Kodak image data set: http://r0k.us/graphics/kodak/

The model is 2MB and in the TensorFlow Lite format. It takes an RGB image as input and outputs a 2D point, which is the predicted center of human attention on the image. That predicted center is the place where we should start with operations (decoding and displaying, in the JPEG XL case). This allows the most visually salient/important regions to be processed as early as possible. Check out the code and continue to build upon it!
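To give a feel for how the released model can be used outside of JPEG XL, here is a minimal sketch of running it with the TensorFlow Lite interpreter in Python. The model file name, input size, preprocessing, and output layout below are assumptions for illustration only; the repository documents the exact details.

# Minimal sketch: run the attention center TensorFlow Lite model on one image.
# Model path, input size, preprocessing, and output order are assumptions;
# consult google/attention-center for the exact details.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="attention_center.tflite")  # assumed filename
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Resize the RGB image to the shape the model expects and scale to [0, 1].
_, height, width, _ = input_details["shape"]
image = Image.open("photo.jpg").convert("RGB").resize((width, height))
batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_details["index"], batch)
interpreter.invoke()
center_x, center_y = interpreter.get_tensor(output_details["index"])[0]
print(f"Predicted attention center: ({center_x:.1f}, {center_y:.1f})")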

Attention center ground-truth data

To train a model to predict the attention center, we first need ground-truth data for the attention center. Given an image, attention points can either be collected by eye trackers [1], or approximated by mouse clicks on a blurry version of the image [2]. We first apply temporal filtering to those attention points and keep only the initial ones, and then apply spatial filtering to remove noise (e.g., random gazes). We then compute the center of the remaining attention points as the attention center ground truth. The figure below illustrates the process of obtaining the ground truth.

Five images in a row illustrating the process on a photo of a person standing on a rock by the ocean: the original image, the raw gaze/attention points, the points after temporal filtering, the points after spatial filtering, and the resulting attention center.
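As a rough sketch of the filtering-and-averaging step described above, the snippet below keeps only the earliest attention points, discards spatial outliers, and averages what remains. The thresholds are purely illustrative, not the values used to build the actual ground truth.

# Illustrative ground-truth sketch: temporal filtering, spatial filtering,
# then the mean of the remaining attention points. Thresholds are made up.
import numpy as np

def attention_center_ground_truth(points, timestamps, time_window=0.5, max_dist=0.2):
    points = np.asarray(points, dtype=np.float32)        # (N, 2) normalized (x, y)
    timestamps = np.asarray(timestamps, dtype=np.float32)

    # Temporal filtering: keep points recorded shortly after the first fixation.
    early = points[timestamps <= timestamps.min() + time_window]

    # Spatial filtering: drop points far from the median (e.g., random gazes).
    median = np.median(early, axis=0)
    distances = np.linalg.norm(early - median, axis=1)
    kept = early[distances <= max_dist]

    # The ground-truth attention center is the mean of the remaining points.
    return kept.mean(axis=0)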

Attention center model architecture

The attention center model is a deep neural net which takes an image as input and uses a pre-trained classification network, e.g., ResNet, MobileNet, etc., as the backbone. Several intermediate layers output by the backbone network are used as inputs for the attention center prediction module. These different intermediate layers contain different information: shallow layers often contain low level information like intensity/color/texture, while deeper layers usually contain higher-level, more semantic information like shape/object. All are useful for the attention prediction. The attention center prediction module applies convolution, deconvolution, and/or resizing operators, together with aggregation and a sigmoid function, to generate a weighting map for the attention center. An operator (the Einstein summation operator in our case) is then applied to compute the (gravity) center from the weighting map. The L2 norm between the predicted attention center and the ground-truth attention center is used as the training loss.

Attention center model architecture
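The gravity-center step can be made concrete with a short TensorFlow sketch: normalize the predicted weighting map, compute its weighted center with an Einstein summation, and compare that center to the ground truth with an L2 loss. This is a simplified illustration of the idea, not the exact code of the released model.

# Simplified sketch of the center computation and training loss described above.
import tensorflow as tf

def weighted_center(weight_map):
    # weight_map: (batch, height, width) non-negative weights, e.g. after a sigmoid.
    weights = weight_map / (tf.reduce_sum(weight_map, axis=[1, 2], keepdims=True) + 1e-8)
    ys = tf.cast(tf.range(tf.shape(weight_map)[1]), tf.float32)
    xs = tf.cast(tf.range(tf.shape(weight_map)[2]), tf.float32)
    center_y = tf.einsum('bhw,h->b', weights, ys)   # weighted mean row index
    center_x = tf.einsum('bhw,w->b', weights, xs)   # weighted mean column index
    return tf.stack([center_x, center_y], axis=-1)  # (batch, 2)

def center_loss(predicted_center, ground_truth_center):
    # L2 distance between predicted and ground-truth centers, averaged over the batch.
    return tf.reduce_mean(tf.norm(predicted_center - ground_truth_center, axis=-1))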

Progressive JPEG XL images with attention center model

JPEG XL is a new image format that allows the user to encode images in a way that ensures the more interesting parts come first. This has the advantage that when viewing images transferred over the web, we can display the attention-grabbing part of the image first, i.e. the parts where the user looks first, and by the time the user looks elsewhere, ideally the rest of the image has already arrived and been decoded. The earlier post “Using Saliency in progressive JPEG XL images” on the Google Open Source Blog illustrates how this works in principle. In short, in JPEG XL the image is divided into square groups (typically of size 256 x 256), and the JPEG XL encoder chooses a starting group in the image and then grows concentric squares around that group. It was this need to figure out where the attention center of an image is that led us to open source the attention center model, together with a script to use it in combination with the JPEG XL encoder. Progressive decoding of JPEG XL images has recently been added to Chrome, starting from version 107. At the moment, JPEG XL is behind an experimental flag, which can be enabled by going to chrome://flags and searching for “jxl”.
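As an illustration of how the predicted center could be handed to the encoder, the sketch below shells out to cjxl with center-first group ordering. It assumes a cjxl build that exposes the --group_order, --center_x, and --center_y options; check cjxl --help for your libjxl version, as flag names can vary.

# Sketch: encode an image so decoding starts from the predicted attention center.
# Assumes cjxl supports --group_order/--center_x/--center_y (check your build).
import subprocess

def encode_center_first(input_png, output_jxl, center_x, center_y):
    subprocess.run([
        "cjxl", input_png, output_jxl,
        "--group_order=1",              # center-first group ordering
        f"--center_x={int(center_x)}",  # predicted attention center, in pixels
        f"--center_y={int(center_y)}",
    ], check=True)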

To try out how partially loaded progressive JPEG XL images look, you can go to https://google.github.io/attention-center/.

By Moritz Firsching, Junfeng He, and Zoltan Szabadka – Google Research

References

[1] Valliappan, Nachiappan, Na Dai, Ethan Steinberg, Junfeng He, Kantwon Rogers, Venky Ramachandran, Pingmei Xu et al. "Accelerating eye movement research via accurate and affordable smartphone eye tracking." Nature communications 11, no. 1 (2020): 1-12.

[2] Jiang, Ming, Shengsheng Huang, Juanyong Duan, and Qi Zhao. "Salicon: Saliency in context." In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1072-1080. 2015.

Explore the new Learn Kubernetes with Google website!

As Kubernetes has become a mainstream global technology, with 96% of organizations surveyed by the CNCF using or evaluating Kubernetes for production use, it is now estimated that 31% of backend developers worldwide are Kubernetes developers. Adding to the growing popularity, the CNCF's 2021 annual report also listed close to 60 enhancements by special interest and working groups to the Kubernetes project. With so much information in the ecosystem, how can Kubernetes developers stay on top of the latest developments and learn what to prioritize to best support their infrastructure?

The new website Learn Kubernetes with Google brings together under one roof the guidance of Kubernetes experts—both from Google and across the industry—to communicate the latest trends in building your Kubernetes infrastructure. You can access knowledge in two formats.

One option is to participate in scheduled live events, which consist of virtual panels that allow you to ask questions to experts via a Q&A forum. Virtual panels last for an hour and happen once a quarter. So far, we’ve hosted panels on building a multi-cluster infrastructure, the Dockershim deprecation, bringing High Performance Computing (HPC) to Kubernetes, and securing your services with Istio on Kubernetes. The other option is to pick one of the multiple on-demand series available. Series are made up of several 5-10 minute episodes and you can go through them at your own pace. They cover different topics, including the Kubernetes Gateway API, the MCS API, Batch workloads, and Getting started with Kubernetes. You can use the search bar on the top right side of the website to look up specific topics.
As the cloud native ecosystem becomes increasingly complex, this website will continue to offer evergreen content for Kubernetes developers and users. We recently launched a new content category for ecosystem projects, which started by covering how to run Istio on Kubernetes. Soon, we will also launch a content category for developer tools, starting with Minikube.

Join hundreds of developers who are already part of the Learn Kubernetes with Google community! Bookmark the website, sign up for an event today, and be sure to check back regularly for new content.

By María Cruz, Program Manager – Google Open Source Programs Office

Get ready for Google Summer of Code 2023!

We are thrilled to announce the 2023 Google Summer of Code (GSoC) program and share the timeline with you to get involved! 2023 will be our 19th consecutive year of hosting GSoC and we could not be more excited to welcome more organizations, mentors, and new contributors into the program.

With just three weeks left in the 2022 program, we had an exciting year with 958 GSoC contributors completing their projects with 198 open source organizations.

Our 2022 contributors and mentors have given us extensive feedback and we are keeping the big changes we made this year, with one adjustment around eligibility described below.
  • Increased flexibility in project lengths (10-22 weeks, not a set 12 weeks for everyone) allowed many more people to participate and not feel rushed as they wrapped up their projects. We have 109 GSoC contributors wrapping up their projects over the next three weeks.
  • Choice of project time commitment: there are now two options, medium at ~175 hours or large at ~350 hours, chosen by 47% and 53% of GSoC contributors, respectively.
  • Our most talked about change was GSoC being open to contributors new to open source software development (and not just to students anymore). For 2023, we are expanding the program to be open to students and to beginners in open source software development.
We are excited to launch the 2023 GSoC program and to continue to help grow the open source community. GSoC’s mission of bringing new contributors into open source communities is centered around mentorship and collaboration. We are so grateful for all the folks that continue to contribute, mentor, and get involved in open source communities year after year.

Interested in applying to the Google Summer of Code Program?

Open Source Organizations
Check out our website to learn what it means to be a participating organization. Watch our new GSoC Org Highlight videos and get inspired about projects that contributors have worked on in the past.

Think you have what it takes to participate as a mentor organization? Take a look through our mentor guide to learn about what it means to be part of Google Summer of Code, how to prepare your community, gather excited mentors, create achievable project ideas, and tips for applying. We welcome all types of open source organizations and encourage you to apply—it is especially exciting for us to welcome new orgs into the program and we hope you are inspired to get involved with our growing community.

Want to be a GSoC Contributor?
Are you new to open source development or a student? Are you eager to gain experience on real-world software development projects that will be used by thousands or millions of people? It is never too early to start thinking about what kind of open source organization you’d like to learn more about and how the application process works!

Watch our new ‘Introduction to GSoC’ video to see a quick overview of the program. Read through our contributor guide for important tips from past participants on preparing your proposal, what to think about if you wish to apply for the program, and everything you wanted to know about the program. We also hope you’re inspired by checking out the nearly 200 organizations that participated in 2022 and the 1,000+ projects that have been completed so far!

We encourage you to explore our website for other resources and continue to check for more information about the 2023 program.

You are welcome and encouraged to share information about the 2023 GSoC program with your friends, family, colleagues, and anyone you think may be interested in joining our community. We are excited to welcome many more contributors and mentoring organizations in the new year!

By Stephanie Taylor, Program Manager, and Perry Burnham, Associate Program Manager for the Google Open Source Programs Office

Open Source Pass Converter for Mobile Wallets

Posted by Stephen McDonald, Developer Programs Engineer, and Nick Alteen, Technical Writer, Engineering, Wallet

Each mobile wallet app implements its own technical specification for passes that can be saved to the wallet. Pass structure and configuration vary by both the wallet application and the specific type of pass, meaning developers have to build and maintain separate code bases for each platform.

As part of Developer Relations for Google Wallet, our goal is to make life easier for those who want to integrate passes into their mobile or web applications. Today, we're excited to release the open-source Pass Converter project. The Pass Converter lets you take existing passes for one wallet application, convert them, and make them available in your mobile or web application for another wallet platform.

Moving image of Pass Converter successfully converting an external pkpass file to a Google Wallet pass

The Pass Converter launches with support for Google Wallet and Apple Wallet apps, with plans to add support for others in the future. For example, if you build an event ticket pass for one wallet, you can use the converter to automatically create a pass for another wallet. The following pass types are supported on their respective platforms:

  • Event tickets
  • Generic passes
  • Loyalty/Store cards
  • Offers/Coupons
  • Flight/Boarding passes
  • Other transit passes

We designed the Pass Converter with flexibility in mind. The following features provide additional customization to meet your needs.

  • A hints.json file can be provided to the Pass Converter to map Google Wallet pass properties to custom properties in other passes.
  • For pass types that require certificate signatures, you can simply generate the pass structure and hand it off to your existing signing process.
  • Since images in Google Wallet passes are referenced by URLs, the Pass Converter can host the images itself, store them in Google Cloud Storage, or send them to another image host you manage.

If you want to quickly test converting different passes, the Pass Converter includes a demo mode where you can load a simple webpage to test converting passes. Later, you can run the tool via the command line to convert existing passes you manage. When you’re ready to automate pass conversion, the tool can be run as a web service within your environment.

The following command provides a demo web page on http://localhost:3000 to test converting passes.

node app.js demo

The next command converts passes locally. If the output path is omitted, the Pass Converter will output JSON to the terminal (for PKPass files, this will be the contents of pass.json).

node app.js <pass input path> <pass output path>

Lastly, the following command runs the Pass Converter as a web service. This service accepts POST requests to the root URL (e.g. https://localhost:3000/) with multipart/form-data encoding. The request body should include a single pass file.

node app.js
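
As an illustration, a client could call the service from Python roughly as shown below; the form field name used here is an assumption, so check the project README for the exact request format.

# Hypothetical client sketch for the Pass Converter web service.
# The form field name ("pass") and response handling are assumptions.
import requests

with open("ticket.pkpass", "rb") as pass_file:
    response = requests.post(
        "http://localhost:3000/",                       # converter started with `node app.js`
        files={"pass": ("ticket.pkpass", pass_file)},   # multipart/form-data body
    )
response.raise_for_status()
print(response.text)  # the converted pass, e.g. Google Wallet JSON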


Ready to get started? Check out the GitHub repository where you can try converting your own passes. We welcome contributions back to the project as well!

Google funds open source silicon manufacturing shuttles for GlobalFoundries PDK

In August, we released the Process Design Kit (PDK) for the GlobalFoundries 180nm MCU technology platform under the Apache 2.0 license. This open source PDK, resulting from our ongoing pathfinding partnership with GlobalFoundries, offers open source silicon designers new capabilities for high volume production, affordability, and more voltage options by including the following:
  • Digital standard cells' libraries (7-track and 9-track)
  • Low (3.3V), Medium (5V, 6V) and High (10V) voltage devices
  • SRAM macros (64x8, 128x8, 256x8, 512x8)
  • I/O and primitives (Resistors, Capacitors, Transistors, eFuses) cells' libraries
Following the announcement about GlobalFoundries joining Google’s open source silicon initiative, we are now sponsoring a series of no-cost OpenMPW shuttle runs for the GF180MCU PDK in the coming months.


Those shuttles will leverage the existing OpenMPW shuttle infrastructure based on the OpenLane automated design flow with the same Caravel harness and the Efabless platform for project submissions.

Each shuttle run will select 40 projects based on the following criteria:
  • Design sources must be released publicly under an open source license.
  • Projects must be reproducible from design sources and the GF180MCU PDK.
  • Projects must be submitted within the shuttle deadline (projects submitted earlier get additional chances to be selected).
  • Projects must pass the pre-manufacturing checks.
The first shuttle GF-MPW-0 will be a test shuttle, with submissions open from Oct. 31, 2022 to Dec. 5, 2022. It will be used as a way to validate together with the community the integration of the new PDK with the open source silicon toolchain and the Caravel harness; further shuttles will have a longer project application window and improved testing.

We encourage you to re-submit your previous OpenMPW shuttle projects to this shuttle as a way to validate their portability across open source PDKs:
  • Go to developers.google.com/silicon.
  • Navigate to the "Create a new Project" link.
  • Follow the instructions to integrate your project into the latest version of the caravel_user_project template.
  • Make sure you select the right variant of the GF180MCU PDK (5LM_1TM_9K) by exporting the environment variable PDK=gf180mcuC in your workspace before running any commands.
  • Submit your project for manufacturing on the Efabless platform.
We're excited to see designers leverage this program, both by porting existing projects previously submitted to OpenMPW shuttles and by designing new projects that target the GF180MCU PDK, as we find paths together to research and advance the open source silicon ecosystem.

By Ethan Mahintorabi, Software Engineer and Johan Euphrosine, Developer Programs Engineer – Hardware Toolchains Team, and Aaron Cunningham, Technical Program Manager – Google Open Source Programs Office

Interview with Doug Duhaime, contributor to Google’s Dev Library

Posted by the Google Dev Library Team

Introducing the Dev Library Contributor Spotlights - a blog series highlighting developers who are supporting the thriving development ecosystem by contributing their resources and tools to Google Dev Library.

We met with Doug Duhaime, Full Stack Developer in Yale University's Digital Humanities Lab, to discuss his passion for machine learning, his process, and what inspired him to release his PixPlot project as open source.

What led you to explore the field of machine learning?

I was an English major in undergrad and in graduate school. I have a PhD in English literature. My dissertation was exploring copyright history and the ways that changes in copyright law affected the book market. How does the institution of fixed duration copyright influence the book market? To answer this question, I had to mine an enormous collection of data - half a million books, published before 1800 - to look at different patterns. That was one of the key projects that got me inspired to further explore the world of Machine Learning.

In fact, one of my projects - the PixPlot library - uses computer vision to analyze image collections, which was also partially used in my research. Part of my research looked at plagiarism detection and how readily people are inclined to copy images once it becomes legal to copy them from other texts. Computer vision helps us to answer these questions and identify key patterns.

I’ve seen machine learning and programming as a way to ask new questions in historical contexts. And there's a whole field of us - we're called digital humanists. Yale University, where I've been for the last five years, has a fantastic digital humanities program where researchers are asking questions like this and using fun machine learning platforms like TensorFlow to answer those questions.

Screenshot from the PixPlot library showing Image Fields in the Meserve-Kunhardt Collection with the following identified hotspots: Boxers, Buildings, Buttons, Chairs, Gowns

Can you tell us more about the evolution of your PixPlot library project?

We started in Yale's digital humanities lab with a project called neural neighbors. And the idea here was to find patterns in the Meserve-Kunhardt Collection of images.

Meserve-Kunhardt is a collection of photographs largely from the 19th century that Yale recently acquired. After being acquired by the university, some curators were preparing to identify all this really rich metadata to describe these images. However, they had a backlog, and they needed help to try to make sense of what's in this collection. And so, Neural Neighbors was our initial attempt to answer this question.

As this project went on, we started running up against limitations and asking bigger questions. For example, instead of just looking at the pictures, what would it be like to look at the entire collection all at once? In order to answer this question, we needed a more performant rendering layer.

So we decided to utilize TensorFlow, which allowed us to extract a vector representation of each image. We then compressed the dimensionality of those vectors down to 2D. But for PixPlot, we decided to use a different dimensionality reduction technique called UMAP. And that brought us to the first release of PixPlot.

The idea here was to take the whole collection, shoot it down into 2D, and then let you move through it and look at the images in the collection wherein we expect images with similar content to be placed close by one another.
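For readers curious what that pipeline looks like in practice, here is a rough sketch of the general idea: embed each image with a pretrained TensorFlow model, then project the embeddings down to 2D with UMAP. The backbone and parameters shown are illustrative and not necessarily what PixPlot uses.

# Illustrative sketch: image embeddings from a pretrained network, reduced to 2D with UMAP.
import numpy as np
import tensorflow as tf
import umap  # pip install umap-learn

model = tf.keras.applications.InceptionV3(include_top=False, pooling="avg")

def embed(image_paths):
    vectors = []
    for path in image_paths:
        img = tf.keras.utils.load_img(path, target_size=(299, 299))
        arr = tf.keras.applications.inception_v3.preprocess_input(
            np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
        vectors.append(model.predict(arr, verbose=0)[0])
    return np.stack(vectors)

# Each image becomes an (x, y) point; visually similar images land close together.
positions = umap.UMAP(n_components=2).fit_transform(embed(["a.jpg", "b.jpg"]))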

And so it's just evolved from that early genesis and Neural Neighbors through to where it is today.

What inspired you to release PixPlot as an open source project?

In the case of PixPlot, I was working for Yale University, and we had a goal to make as much of our contributions to the software world as possible open and publicly accessible without any commercial terms.

It was a huge privilege to spend time with the lab and build software that others found useful. I would say even more generally, in my personal life, I really like building things that people find useful and, when possible, contributing back to the open source world because, I think, so many of us learn from open source.

“We look at other people's examples and get excited by tools and projects others are building. And many of those are non-commercial. They're just open and free to the world. And it's great to give back when we can.” – Doug Duhaime, Dev Library Contributor

Find out more content contributed and authored by Doug Duhaime and discover more unique tools and resources on the Google Dev Library website!

Sigstore project announces general availability and v1.0 releases


Today, the Sigstore community announced the general availability of their free, community-operated certificate authority and transparency log services. In addition, two of Sigstore’s foundational projects, Fulcio and Rekor, published v1.0 releases denoting a commitment to API stability. Google is proud to celebrate these open source community milestones.

Sigstore is a standard for signing, verifying, and protecting open source software. With increased industry attention being given to software supply chain security, including the recent Executive Order on Cybersecurity, the ability to know and trust where software comes from has never been more important. Sigstore simplifies and automates the complex parts of digitally signing software—making this more accessible and trustworthy than ever before.

Beginning in 2020 as an open source collaboration between Red Hat and Google, the Sigstore project has grown into a vendor-neutral, community-operated and designed project that is part of the Open Source Security Foundation (OpenSSF). The ecosystem has also continued to grow, spanning multiple package managers and language communities, and if you download a new release of an open source project like Python or Kubernetes today, you’ll see that it has been signed with Sigstore.

Google is an active, contributing member of the Sigstore community, supporting the project through upstream code contributions and in several other ways.

We are part of a larger open source community helping develop and run Sigstore, and welcome new adopters and contributors! To learn more about getting started using Sigstore, the project documentation helps guide you through the process of signing and verifying your software. To get started contributing, several individual repositories within the Sigstore GitHub organization use “good first issue” labels to give a hint of approachable tasks. The project maintains a Slack community (use the invite to join) and regularly holds community meetings.

By Dave Lester – Google Open Source Programs Office, and Bob Callaway – Google Open Source Security Team

Kubeflow applies to become a CNCF incubating project

Google has pioneered AI and ML and has a history of innovative technology donations to the open source community (e.g. TensorFlow and Jax). Google is also the initial developer and largest contributor to Kubernetes, and brings with it a wealth of experience to the project and its community. Building an ML Platform on our state-of-the-art Google Kubernetes Engine (GKE), we have learned best practices from our users, and in 2017, we used that experience to create and open source the Kubeflow project.

In May 2020, with the v1.0 release, Kubeflow reached maturity across a core set of its stable applications. During that year, we also graduated Kubeflow Serving as an independent project, KServe, which is now incubating in Linux Foundation AI & Data.

Today, Kubeflow has developed into an end-to-end, extendable ML platform, with multiple distinct components to address specific stages of the ML lifecycle: model development (Kubeflow Notebooks), model training (Kubeflow Pipelines and Kubeflow Training Operator), model serving (KServe), and automated machine learning (Katib).

The Kubeflow project now has close to 200 contributors from over 30 organizations, and the Kubeflow community has hosted several summits and contributor meetups across the world. The broader Kubeflow ecosystem includes a number of distributions across multiple cloud service providers and on-prem environments. Kubeflow’s powerful development experience helps data scientists build, train, and deploy their ML models, enabling enterprise ML operation teams to deploy and scale advanced workflows in a variety of infrastructures.

Google’s application for Kubeflow to become a CNCF incubating project is the next big milestone for the Kubeflow community, and we’re thrilled to see how developers will continue to build and innovate in ML using this project.

What's next? The pull request we’ve opened today to join the CNCF as an incubating project is only the first step. Google and the Kubeflow community will work with the CNCF and its Technical Oversight Committee (TOC) to meet the incubation stage requirements. While the due diligence and eventual TOC decision can take a few months, the Kubeflow project will continue developing and releasing throughout this process.

If Kubeflow is accepted into the CNCF, the project’s assets will be transferred to the CNCF, including the source code, trademark, website, and other collaboration and social media accounts. At Google, we believe that using open source comes with a responsibility to contribute, maintain, and improve those projects. In that spirit, we will continue supporting the Kubeflow project and work with the community towards the next level of innovation.

Thanks to everyone who has contributed to Kubeflow over the years! We are excited for what lies ahead for the Kubeflow community.

By Thea Lamkin, Senior Program Manager and Mark Chmarny, Senior Technical Program Manager – Google Open Source

ko applies to become a CNCF sandbox project

Back in 2018, the team at Google working on Knative needed a faster way to iterate on Kubernetes controllers. They created a new tool dedicated to deploying Go applications to Kubernetes without having to worry about container images. That tool has proven to be indispensable to the Knative community, so in March 2019, Google released it as a stand-alone open source project named ko.

Since then, ko has gained in popularity as a simple, fast, and secure container image builder for Go applications. More recently, the ko community has added, amongst many other features, multi-platform support and automatic SBOM generation. Today, like the original team at Google, many open source and enterprise development teams depend on ko to improve their developer productivity. The ko project is also increasingly used as a solution for a number of build use-cases, and is being integrated into a variety of third party CI/CD tools.

At Google, we believe that using open source comes with a responsibility to contribute, sustain, and improve the projects that make our ecosystem better. To support the next phase of community-driven innovation, enable net-new adoption patterns, and to further raise the bar in the container tool industry, we are excited to announce today that we have submitted ko as a sandbox project to the Cloud Native Computing Foundation (CNCF).

This step begins the process of transferring the ko trademark, IP, and code to CNCF. We are excited to see how the broader open source community will continue innovating with ko.

By Mark Chmarny – Google Open Source Programs Office