Tag Archives: Open source

Blockly Summit 2019: Rendering, Accessibility, and More!


It has been over eight years since we started work on Blockly, an open source library for building drag-and-drop block coding apps. In that time, the team has grown from a single developer to a small team and a large community. Blockly is now a standard in the CS education space, used by Scratch, MakeCode, AppInventor, and hundreds of other developers to enable tens of millions of kids around the world to create and express themselves with code.

But Blockly isn't only used for education. The library provides everything an app developer needs to create rich block coding languages and is highly customizable and extensible. This means Blockly is also used by hobbyists and commercial companies alike for business logic, computer games, virtual reality, robotics, and just about anything else you can do with code.


The work we do on Blockly wouldn't be possible without the many folks who contribute back with code, suggestions, and support on the forums. As such, we were very excited to welcome around 30 members of the Blockly open source community to our second annual Blockly User Summit and to be able to make all of the talks available online!

The summit spanned two days in October and included 16 talks, over half of which were given by external contributors, and a Q&A with the Blockly team. The talks covered everything from Blockly's brand new rendering framework and building custom fields to explorations in performance and debugging block code. Check out the full playlist.

We also held a hackathon on the second day of the summit, with quick start guides for using our new rendering and accessibility APIs. If you're new to Blockly and you'd like a good starting point, take a look at our CodeLab, and if you build your own cool demo, let us know on our forums.



By Erik Pasternak, Kids Coding Team

RecSim: A Configurable Simulation Platform for Recommender Systems

Originally posted on the Google AI Blog

Significant advances in machine learning, speech recognition, and language technologies are rapidly transforming the way in which recommender systems engage with users. As a result, collaborative interactive recommenders (CIRs)—recommender systems that engage in a deliberate sequence of interactions with a user to best meet that user's needs—have emerged as a tangible goal for online services.

Despite this, the deployment of CIRs has been limited by challenges in developing algorithms and models that reflect the qualitative characteristics of sequential user interaction. Reinforcement learning (RL) is the de facto standard ML approach for addressing sequential decision problems, and as such is a natural paradigm for modeling and optimizing sequential interaction in recommender systems. However, it remains under-investigated and under-utilized for use in CIRs in both research and practice. One major impediment is the lack of general-purpose simulation platforms for sequential recommender settings, whereas simulation has been one of the primary means for developing and evaluating RL algorithms in real-world applications like robotics.

To address this, we have developed RᴇᴄSɪᴍ (available here), a configurable platform for authoring simulation environments to facilitate the study of RL algorithms in recommender systems (and CIRs in particular). RᴇᴄSɪᴍ allows both researchers and practitioners to test the limits of existing RL methods in synthetic recommender settings. RecSim’s aim is to support simulations that mirror specific aspects of user behavior found in real recommender systems and serve as a controlled environment for developing, evaluating and comparing recommender models and algorithms, especially RL systems designed for sequential user-system interaction.

As an open-source platform, RᴇᴄSɪᴍ: (i) facilitates research at the intersection of RL and recommender systems; (ii) encourages reproducibility and model-sharing; (iii) aids the recommender-systems practitioner, interested in applying RL to rapidly test and refine models and algorithms in simulation, before incurring the potential cost (e.g., time, user impact) of live experiments; and (iv) serves as a resource for academic-industry collaboration through the release of “realistic” stylized models of user behavior without revealing user data or sensitive industry strategies.

Reinforcement Learning and Recommendation Systems

One challenge in applying RL to recommenders is that most recommender research is developed and evaluated using static datasets that do not reflect the sequential, repeated interaction a recommender has with its users. Even those with temporal extent, such as MovieLens 1M, do not (easily) support predictions about the long-term performance of novel recommender policies that differ significantly from those used to collect the data, as many of the factors that impact user choice are not recorded within the data. This makes the evaluation of even basic RL algorithms very difficult, especially when it comes to reasoning about the long-term consequences of some new recommendation policy—research shows changes in policy can have long-term, cumulative impact on user behavior. The ability to model such user behaviors in a simulated environment, and devise and test new recommendation algorithms, including those using RL, can greatly accelerate the research and development cycle for such problems.

Overview of RᴇᴄSɪᴍ

RᴇᴄSɪᴍ simulates a recommender agent’s interaction with an environment consisting of a user model, a document model and a user choice model. The agent interacts with the environment by recommending sets or lists of documents (known as slates) to users, and has access to observable features of simulated individual users and documents to make recommendations. The user model samples users from a distribution over (configurable) user features (e.g., latent features, like interests or satisfaction; observable features, like user demographic; and behavioral features, such as visit frequency or time budget). The document model samples items from a prior distribution over document features, both latent (e.g., quality) and observable (e.g., length, popularity). This prior, as all other components of RᴇᴄSɪᴍ, can be specified by the simulation developer, possibly informed (or learned) from application data.

The level of observability for both user and document features is customizable. When the agent recommends documents to a user, the response is determined by a user-choice model, which can access observable document features and all user features. Other aspects of a user’s response (e.g., time spent engaging with the recommendation) can depend on latent document features, such as document topic or quality. Once a document is consumed, the user state undergoes a transition through a configurable user transition model, since user satisfaction or interests might change.
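To make these moving parts concrete, here is a self-contained toy loop in the same shape: a user model, a document model, a user-choice model, and a transition model. This is not the actual RᴇᴄSɪᴍ API; the component names, features, and numbers are all invented for illustration.

```python
import random

random.seed(0)

# Toy stand-ins for the simulator's components (illustrative only).

def sample_user():
    # Latent per-topic interest plus an observable "time budget".
    return {"interest": [random.random() for _ in range(3)], "budget": 10.0}

def sample_documents(n=3):
    # Each document has an observable topic and a latent quality.
    return [{"topic": random.randrange(3), "quality": random.random()}
            for _ in range(n)]

def choice_model(user, slate):
    # The user picks the slate item whose topic best matches their
    # interests, weighted by the document's (latent) quality.
    scores = [user["interest"][d["topic"]] * d["quality"] for d in slate]
    return scores.index(max(scores))

def transition(user, doc):
    # Consuming a document shifts interest toward its topic and spends budget.
    user["interest"][doc["topic"]] = min(1.0, user["interest"][doc["topic"]] + 0.1)
    user["budget"] -= 1.0

# One simulated session: the agent recommends a slate, the user chooses
# via the choice model, and the user state transitions.
user = sample_user()
while user["budget"] > 0:
    slate = sample_documents(n=3)   # a naive "recommend at random" agent
    chosen = slate[choice_model(user, slate)]
    transition(user, chosen)

print(round(user["budget"], 1))  # budget exhausted: 0.0
```

In the real platform each of these pieces is a configurable class supplied by the simulation developer; the point here is only the control flow between them.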

We note that RᴇᴄSɪᴍ provides the ability to easily author specific aspects of user behavior of interest to the researcher or practitioner, while ignoring others. This can provide the critical ability to focus on modeling and algorithmic techniques designed for novel phenomena of interest (as we illustrate in two applications below). This type of abstraction is often critical to scientific modeling. Consequently, high-fidelity simulation of all elements of user behavior is not an explicit goal of RᴇᴄSɪᴍ. That said, we expect that it may also serve as a platform that supports “sim-to-real” transfer in certain cases (see below).
Data Flow through components of RᴇᴄSɪᴍ. Colors represent different model components — user and user-choice models (green), document model (blue), and the recommender agent (red).

Applications

We have used RᴇᴄSɪᴍ to investigate several key research problems that arise in the use of RL in recommender systems. For example, slate recommendation gives rise to challenging RL problems, since the action space grows combinatorially with slate size, posing difficulties for exploration, generalization and action optimization. We used RᴇᴄSɪᴍ to develop a novel decomposition technique that exploits simple, widely applicable assumptions about user choice behavior to tractably compute Q-values of entire recommendation slates. In particular, RᴇᴄSɪᴍ was used to test a number of experimental hypotheses, such as algorithm performance and robustness to different assumptions about user behavior.
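The flavor of that decomposition can be sketched as follows. This is an illustrative fragment, not code from the paper, and the numbers are made up: it assumes a multinomial-logit choice model, under which the Q-value of a slate reduces to a choice-probability-weighted sum of per-item Q-values, so the agent never has to enumerate the combinatorially many possible slates.

```python
import math

def slate_q_value(item_affinities, item_q_values):
    # Assumed choice model: the user picks item i from the slate with
    # probability proportional to exp(v_i), where v_i is an affinity score.
    exp_v = [math.exp(v) for v in item_affinities]
    total = sum(exp_v)
    choice_probs = [e / total for e in exp_v]
    # Under that assumption, Q(s, A) = sum_i P(i | A) * Q(s, i):
    # only per-item Q-values are needed, not one value per slate.
    return sum(p * q for p, q in zip(choice_probs, item_q_values))

# Example with three candidate items and hypothetical per-item values Q(s, i).
affinities = [1.0, 0.5, 0.2]
q_values = [3.0, 5.0, 1.0]
print(round(slate_q_value(affinities, q_values), 3))
```

Note that the slate's value is not simply the best item's value: it is an expectation over what the simulated user would actually choose.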

Future Work

While RᴇᴄSɪᴍ provides ample opportunity for researchers and practitioners to probe and question assumptions made by RL/recommender algorithms in stylized environments, we are developing several important extensions. These include: (i) methodologies to fit stylized user models to usage logs to partially address the “sim-to-real” gap; (ii) the development of natural APIs using TensorFlow’s probabilistic APIs to facilitate model specification and learning, as well as scaling up simulation and inference algorithms using accelerators and distributed execution; and (iii) the extension to full-factor, mixed-mode interaction models that will be the hallmark of modern CIRs—e.g., language-based dialogue, preference elicitation, explanations, etc.

Our hope is that RᴇᴄSɪᴍ will serve as a valuable resource that bridges the gap between recommender systems and RL research — the use cases above are examples of how it can be used in this fashion. We also plan to pursue it as a platform to support academic-industry collaborations, through the sharing of stylized models of user behavior that, at suitable levels of abstraction, reflect a degree of realism that can drive useful model and algorithm development.

Further details of the RᴇᴄSɪᴍ framework can be found in the white paper, while code and colabs/tutorials are available here.

Acknowledgements
We thank our collaborators and early adopters of RᴇᴄSɪᴍ, including the other members of the RᴇᴄSɪᴍ team: Eugene Ie, Vihan Jain, Sanmit Narvekar, Jing Wang, Rui Wu and Craig Boutilier.

By Martin Mladenov, Research Scientist and Chih-wei Hsu, Software Engineer, Google Research

Google Code-in 2019 Contest for Teenagers

Today is the start of the 10th consecutive year of the Google Code-in (GCI) contest for teens. We anticipate this being the biggest contest yet!

The Basics

What is Google Code-in?
Our global, online contest introducing students to open source development. The contest runs for seven weeks until January 23, 2020.

Who can register?
Pre-university students ages 13-17 who have their parent or guardian's permission to register for the contest.

How do students register and participate?
Students can register for the contest beginning today at g.co/gci. Once students have registered, and the parental consent form has been submitted and approved by Program Administrators, students can choose which “task” they want to work on first. Students choose the task they find interesting from a list of thousands of available tasks created by 29 participating open source organizations. Tasks take an average of 3-5 hours to complete. There are even beginner tasks that are a wonderful way for students to get started in the contest.

The task categories are:
  • Coding
  • Design
  • Documentation/Training
  • Outreach/Research
  • Quality Assurance
Why should students participate?
Students not only have the opportunity to work on a real open source software project, thus gaining invaluable skills and experience, but they also have the opportunity to be a part of the open source community. Mentors are readily available to help answer their questions while they work through the tasks.

Google Code-in is a contest, so there are prizes*! Complete one task and receive a digital certificate; complete three tasks and you'll also get a fun Google t-shirt. Finalists earn a jacket, runners-up earn backpacks, and grand prize winners (two from each organization) will receive a trip to Google headquarters in California in 2020!

Details
Over the past nine years, more than 11,000 students from 108 countries have successfully completed over 55,000 tasks in GCI. Curious? Learn more about GCI by checking out the Contest Rules, short videos, and FAQs. Please visit our contest site and read the Getting Started Guide.

Teachers, if you are interested in getting your students involved in Google Code-in we have resources available to help you get started.

By Stephanie Taylor, Google Open Source

* There are a handful of countries we are unable to ship physical goods to, as listed in the FAQs.

Google and Pixar add Draco Compression to Universal Scene Description (USD) Format

Google and Pixar have collaborated to add Draco compression to USD files to enable significantly smaller meshes for transmission, and real-time asset delivery on the web or in mobile applications.

Draco is an open source compression library to improve the storage and transmission of 3D assets—including compressing points, connectivity information, texture coordinates, color information, normals and any other attributes associated with geometry.

With Draco, applications can present complex 3D assets to the user much more quickly without compromising visual fidelity. For users, this means apps can be downloaded faster, 3D graphics can load quicker, and assets can be transmitted over any type of network, regardless of bandwidth.

USD addresses the need to robustly and scalably interchange and augment arbitrary 3D scenes that may be composed from many models and animations. USD also enables users to assemble and organize any number of assets into virtual sets, scenes, and shots; transmit them from application to application; and non-destructively edit them (as overrides), all with a single, consistent API in a single scenegraph. USD provides a rich toolset for reading, writing, editing, and rapidly previewing 3D geometry and shading.

We tested Draco compression performance on a representative set of USD objects and found that Draco on average compressed objects by more than 15X. On a typical 4G network, these assets would load 2.5X faster, all while using less of your users’ data plan.

Public Domain model Kore dressed in chiton and cape from SMK National Gallery of Denmark compressed 15X with Draco. 
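A quick back-of-the-envelope model shows why a 15X smaller payload yields a smaller (here 2.5X) end-to-end speedup: load time also includes decode and setup costs that compression doesn't shrink. The transfer and overhead times below are invented for illustration; only the 15X ratio comes from our tests.

```python
transfer_s = 9.0  # assumed 4G transfer time for the uncompressed asset
fixed_s = 5.0     # assumed decode/parse/GPU-upload cost, unaffected by compression
ratio = 15.0      # measured average compression from our tests

load_original = transfer_s + fixed_s           # 14.0 s
load_compressed = transfer_s / ratio + fixed_s  # 0.6 s + 5.0 s

print(round(load_original / load_compressed, 1))  # -> 2.5
```

The larger the fixed costs relative to transfer time, the further the observed speedup falls below the raw compression ratio.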

Compressing USD objects with Draco enables a wide range of use cases moving forward, especially when delivering run-time assets to consumer devices. Anything from 3D commerce to complex AR scenes can benefit from reduced data requirements and quicker time to launch.

We look forward to seeing what people do with this combination of Draco compression and USD format. Check out the code on GitHub and let us know what you think and how you plan to use it!

By F. Sebastian Grassia, Pixar and Jamieson Brettle, Chrome Media



Why Google’s Celebrating at KubeCon



It's hard to believe it was just five years ago Googlers decided to open up Kubernetes to the world, partnering with Red Hat, and eventually many others, to build a community that would reshape the world of infrastructure. Kubernetes' impact has been accelerated by an incredible 35,000 contributors, who give their time to make the project a cornerstone of cloud native computing, and the center of three huge events a year around the world.

No surprise then that we're excited to be at KubeCon + CloudNativeCon North America 2019 in San Diego: to celebrate Kubernetes, plus a constellation of other open source projects. You can meet leaders from these projects at the Google Cloud Community Lounge, part of Google's numerous activities at KubeCon + CloudNativeCon.

Happy Birthday, Go!

First of all, let's take a moment to celebrate the project that started it all, and wish the Go programming language a happy 10th birthday! As Kubernetes co-founder Joe Beda pointed out in 2014, Go is "Goldilocks for system software", and has been foundational to Kubernetes' success.

Since Kubernetes' adoption of Go, it has become known as the language of the cloud. Most of the projects you'll see this year at KubeCon are built with, or are compatible with, Go. And the Go community is a vibrant ecosystem in its own right: over the last ten years it has grown to see over 20 conferences a year, as well as 2,100 contributors. Read more about the history and evolution of Go in these 10th birthday blog posts from Russ Cox and Steve Francia.

Come join us in celebrating Go's birthday during the booth crawl, from 6.40pm to close on Tuesday November 19! Sweet treats will be served, and you can meet some of the Go team and other enthusiasts.

Google Open Source At KubeCon



KubeCon is a great chance to learn about the many open source projects that Google has founded or contributes to, and meet face-to-face with our engineers and community leaders. We want to hear about your use cases, meet with contributors (aspiring or experienced!), and connect with the whole community. Google's participation in the conference is not limited to technical talks. In fact, we sponsor and participate in many important community gatherings like the Diversity Lunch+Hack, and project governance like the Steering Committee and Technical Oversight Committee. We recognize and care about fostering a healthy, inclusive environment that extends far beyond just technology concerns.

Of course, Kubernetes is at the center of the conference. And, Google is deeply committed to the Kubernetes ecosystem, including creating and contributing to sub-projects such as kubebuilder, kustomize, KIND, and krew, as well as building testing infrastructure and fostering a community that inspires the whole world of open source. We're still here and still trailblazing. Come and talk with us at the community lounge, attend our talks and tutorials throughout the week, and drop in on hallway discussions galore!

gRPC, developed at Google and donated to CNCF, is a modern, open source, high-performance RPC framework that can run in any environment. Use gRPC to efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication. Meet some of the gRPC maintainers, connect with the community, and pick up the latest limited edition PanCakes sticker at the Community Lounge on Wednesday!

Knative is an open source serverless platform hosted on Kubernetes. It abstracts away much of the complexity, allowing developers to focus on their code and build highly scalable, secure stateless applications. Knative codifies the best practices shared by successful real-world implementations, and solves the "boring but difficult" parts of deploying and managing cloud native services on Kubernetes.

Knative recently cut its 10th release, providing a stable v1 API for the serving project and additional work to enable production readiness. Meet some of the maintainers and learn more about Knative in a hands-on workshop on Monday November 18.

Istio is an open source service mesh, providing observability, control and security over communication in a distributed application, running on Kubernetes or on VMs. Started in 2017 by Google and IBM, and reaching 1.0 in 2018, Istio has developed a vibrant community. Over the last 12 months, more than 600 developers from 125+ companies have committed PRs! GitHub acknowledged this recently, reporting that Istio is in the top four of all GitHub projects for contributor growth. The most recent quarterly release was Istio 1.4, which featured scalability and performance improvements as well as a greatly improved getting-started experience.

Kubeflow is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. The community is building a powerful development experience for data scientists to build, train and deploy from notebooks, as well as the enterprise capabilities ML operations teams rely on to deploy and scale advanced workflows in a variety of infrastructures. The Kubeflow project is supported by 180+ contributors from 25+ organizations.

Kubeflow just released v0.7, featuring beta functionality for the Kubeflow 1.0 release, expected in early 2020. Learn how Kubeflow is being used and built in the 17+ talks featuring the project in the Machine Learning + Data track. If you're getting started, attend our Kubeflow OSS Hands-On Workshop on November 18, 1-3pm. Connect with the Kubeflow maintainers attending KubeCon by following @kubeflow.

Agones is an open source, multiplayer-dedicated, game server scaling and orchestration platform built for Kubernetes. Google founded the project in early 2018, along with Ubisoft, and recently Agones reached its 1.0 release milestone. It is being used in production for several games, with more to come!

If you're a game developer or just interested in non-traditional Kubernetes workloads, please join us for a workshop on Monday November 18, where we'll combine forces with our sibling project Open Match to turn a Kubernetes cluster into a game services backend. We're very excited about the future of open source and game development, and invite you to join us Tuesday morning in the Google Cloud Community Lounge for an open source in gaming meetup!

Krew makes it easy to discover and install kubectl plugins. Initially developed at Google, and now a Kubernetes subproject, over 60 open source kubectl plugins are distributed through Krew, and can be installed with a single command. Learn more in our breakout session, and come and talk about Krew in a Community Lounge Q&A session.

KIND makes it easy and cheap to test Kubernetes locally with Docker container "nodes". Started in the Kubernetes SIG-Testing group, this project has flourished and is now used for testing Kubernetes Pull Requests and many subprojects in the ecosystem including kubeadm, CSI, Istio, and Cluster API. KIND v0.6.0 will release before KubeCon, and the team will be presenting a deep dive and running a workshop on contributing to Kubernetes using KIND.

Envoy is a fundamental building block of service mesh architectures, providing a programmable data plane for services. Service mesh helps deploy multi-cloud applications and microservices at scale, by decoupling applications from networking, and service development from operations. Google is a lead contributor to the Envoy project, a critical part of Google Cloud's enterprise-grade service mesh products such as Anthos Service Mesh, Traffic Director and L7 ILB. We are excited to be a sponsor for EnvoyCon on Monday November 18, and would love for you to join us for an intro to Envoy on Thursday November 21.

OpenTelemetry provides the best way to capture metrics and distributed traces from your applications. Google is one of the founders of OpenTelemetry, and the community has grown to include most major monitoring, APM, and cloud vendors. OpenTelemetry is fully supported by the Google Cloud Platform and Stackdriver. Hear more about OpenTelemetry in the KubeCon keynotes, and go deeper in the overview and advanced breakout sessions.

Tekton defines reusable building blocks on Kubernetes for Cloud Native CI/CD. In March Tekton was donated to the CDF (the Continuous Delivery Foundation), and is being developed in collaboration with companies such as Red Hat, IBM, CloudBees and other friends.

With the first release of Tekton Triggers and Tekton Pipelines, it's now possible to create an entire bespoke CI/CD system from scratch! Hear more about Tekton at the Continuous Delivery Summit on Monday, and catch the breakout sessions Mario's Adventures in Tekton Land and Russian Doll: Extending Containers with Nested Processes. Meet the Tekton maintainers at the Google Cloud Community Lounge on Thursday November 21 from 12.30-2.25pm, and follow @tektoncd for updates during the conference!

gVisor is a container-native sandbox for defense-in-depth anywhere. It provides a per-sandbox user-space kernel to improve isolation and mitigate privilege escalation vulnerabilities. gVisor integrates with Kubernetes and other container orchestration systems, while preserving the portability and resource efficiency of containers. Connect with the gVisor team in the Google Cloud Community Lounge during the Cloud Native Security meetup from 3.55-4.25pm on Thursday November 21.

Skaffold is a tool from Google that speeds up the feedback loop (build, tag, push, deploy) when developing applications on Kubernetes. Using Skaffold, you create a configuration for your project, and on every source code change Skaffold builds the images, deploys the app, and starts tailing logs from pods and forwarding ports. For continuous delivery pipelines, Skaffold provides a one-off deployment, with the ability to wait for health-checks. Skaffold is now GA, and Patrick Flynn and Balint Pato are available to chat about Skaffold and the Kubernetes Developer Experience from 4-5pm on Tuesday November 19, at the Google Cloud Community Lounge.

Additionally, join us for a breakout session on Binary Authorization in Kubernetes, which brings together Grafeas, an artifact metadata API for software supply chains, and Kritis, a deploy-time policy enforcer for Kubernetes applications.

We're excited to meet with so many users, friends, and fellow contributors at KubeCon, and hope you can join us in our talks and the Google Cloud Community Lounge.

Still can't get enough?

If you're interested in keeping up with the Kubernetes ecosystem, including all the news from KubeCon, subscribe to the Kubernetes Podcast from Google. Every week we release an episode covering news from the community, and in-depth interviews with leaders from the Kubernetes and cloud native ecosystem. Subscribe via your favorite podcast platform.

Knative governance update from the steering committee

Open source collaboration exemplifies the best aspects of contributors and companies uniting to solve difficult technical problems. And, at Google, we support thousands of projects, each with their own unique communities and challenges. Recently, the Knative Steering Committee came together to write a letter that distills this ethos, and we wanted to share it with you here.

***

Dear Knative Community,

Since the previous announcement on Knative ownership and stewardship, we’ve heard a lot of feedback from you and from the ecosystem at large. As a Steering Committee, our primary job is guiding the long-term success of the project, and this really means your success building great things with Knative.

Accomplishing that important goal requires everyone involved to be aligned on our values, on what it means to be an active contributor, on how contributors build progressive trust and responsibility, and, fundamentally, on how we work together to build that success. The Kubernetes project explicitly defined these values, and for us as a governing body, they strongly resonate. We will work to construct similar values and vet them across our community.

Trust is the heart of open source. And, clear governance is a means of building and maintaining that trust. It also provides clear signal to future contributors that joining the community is a good bet, and that everyone is visibly working toward the same goals. While there are always differences in approaches and ideas, the power of the community is its ability to collectively reconcile for the benefit of our user community first and foremost. This project is bigger than any one company or individual.

These aspirations are not enough. They will require rethinking how we structure project governance. The overarching goals of the project's governance will be:
    • Create a clear and documented contributor ladder that recognizes both code and non-code contributions as valuable, and provides a means to obtain membership in governance bodies like the Technical Oversight Committee and Steering Committee based on those contributions.
    • Allow the Steering Committee to oversee the usage and implementation of the Knative trademark, with the intent of limiting confusion for adopters, and providing assurances of implementation consistency. Google will provide the Steering Committee with a legal escalation path for enforcement when needed.
    • Widen the contributor community to include additional vendors, end-users, and ecosystem stakeholders such that fair, representational governance organically prevents any one vendor from having a majority in any part of the project. To be clear, no one company should aspire to control outcomes, as that is inherently in conflict with the goal of community stewardship. Committee representation must be a reflection of the diversity of contributors, and also allocated fairly based on the people doing the work. This is of course a delicate balance, but one we intend to solve with community input.
    • Develop the governance documents, community feedback, required tooling for metrics collection, and whatever else is necessary to enact these changes. Because the community we have now is ideally a small subset of the community we aspire to see in a year, we will target a one-year transition period to the new governance we define, similar to how the Kubernetes project moved from a bootstrap committee and charter to the new community-driven model. Building consensus is a painstaking process, so it is important to allocate enough time for all voices to be heard.
The most important takeaway here is that we are working together on this, and will do so with community input, in an inclusive way. This is the beginning of the process, and we want to go back to our roots and focus on the problems we are trying to solve for adopters of our work. Let's take this moment and rejoin our efforts to do great things together.

Respectfully yours,

Michael Behrendt (IBM), Brenda Chan (Pivotal), Paul Morie (Red Hat), Jaice Singer DuMars (Google), Ryan Gregg (Google), Donna Malayeri (Google), Tomas Isdal (Google)

Members, Knative Steering Committee


Improving Developer Experience for Writing Structured Data

Though we’re still waiting on the full materialization of the promise of the Semantic Web, search engines—including Google—are heavy consumers of structured data on the web through Schema.org. In 2015, pages with Schema.org markup accounted for 31.3% of the web. Among SEO communities, interest in Schema.org and structured data has been on the rise in recent years.

Yet, as the use of structured data continues to grow, the developer experience in authoring pieces of structured data remains spotty. I ran into this as I was trying to write my own snippets of JSON-LD. It turns out, the state-of-the-art way of writing JSON-LD is to: read the Schema.org reference; try writing a JSON literal on your own; when you think you’re done, paste the JSON into a validator (like Google’s structured data testing tool); see what’s wrong and fix it; and repeat as needed.

If it’s your first time writing JSON-LD, you might spend a few minutes figuring out how to represent an enum or boolean, looking for examples as needed.
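To make that concrete, here is the kind of literal you end up hand-writing (a hypothetical movie listing; the titles and prices are placeholders). Two conventions are easy to miss on a first attempt: Schema.org enum members are spelled as full URLs, while booleans are bare JSON literals.

```typescript
// A hand-written Schema.org JSON-LD literal (hypothetical example data).
// Easy-to-miss conventions:
//   - enum values, like ItemAvailability, are full Schema.org URLs
//   - booleans, like isFamilyFriendly, are bare JSON literals, not strings
const movieListing = {
  "@context": "https://schema.org",
  "@type": "Movie",
  name: "Example Movie",
  isFamilyFriendly: true,
  offers: {
    "@type": "Offer",
    price: "9.99",
    priceCurrency: "USD",
    availability: "https://schema.org/InStock",
  },
};

console.log(JSON.stringify(movieListing, null, 2));
```

Nothing in a plain editor warns you if you write `availability: "InStock"` or `isFamilyFriendly: "true"`; you only find out once the validator rejects the snippet.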

Enter schema-dts

My experience left me with a feeling that things could be improved; writing JSON-LD should be no harder than writing any JSON that is constrained by a schema. This led me to create schema-dts (npm, GitHub), a TypeScript-based library (and an optional codegen tool) with type definitions of the latest Schema.org JSON-LD spec.

The thinking was this: Just as IDEs (and, later, language server protocols for lightweight code editors) supercharge our developer experience with as-you-type error highlighting and code completions, we can supercharge the experience of writing those JSON-LD literals.

With IDEs and language server protocols, the write-test-debug loop was made much tighter. Developers get immediate feedback on the basic correctness of the code they write, rather than having to save sporadically and feed their code to a compiler for that feedback. With schema-dts, we try to take validators like the structured data testing tool out of the critical path of write-test-debug. Instead, you can use a library to type-check your JSON, reporting errors as you type, and offering completions for `@type`s, property names, and their values.


Thanks to TypeScript’s structural typing and discriminated unions, the general shape of Schema.org’s JSON-LD can be well-represented in TypeScript typings. I have previously described the type theory behind creating a TypeScript structure that expresses the Schema.org class structure, enumerations, `DataType`s, and properties.

Schema-dts includes two related pieces: the ‘default’ schema-dts NPM package, which includes the latest Schema.org definitions, and the schema-dts-gen CLI, which allows you to create your own typing definitions from Schema.org-like .nt N-Triples files. The CLI also has flags to control whether deprecated classes, properties, and enums should be included, what `@context` should be assumed by objects you write, etc.

Goals and Non-Goals

The goal of schema-dts isn’t to make type definitions that accept all legal Schema.org JSON literals. Rather, it is to make sure we provide typings that always (or almost always) result in legal Schema.org JSON-LD literals that search engines would accept. In the process, we’d like to make sure it’s as general as possible, without sacrificing type checking and useful completions.

For instance, RDF’s perspective is that structured data is property-centric: the domains and ranges that the Schema.org reference documents for a property are only hints for inference, and RDF itself permits a value of any type to be assigned to any property. schema-dts, by contrast, deliberately constrains you to the ranges Schema.org declares.

***

If you’re passionate about structured data, try schema-dts and join the conversation on GitHub!

By: Eyas Sharaiha, Geo Engineering & Open Source schema-dts Project

The Go language turns 10: A Look at Go’s Growth in the Enterprise

Posted by Steve Francia, Go Team

The Go gopher was created by renowned illustrator Renee French. This image is adapted from a drawing by Egon Elbre.

November 10 marked Go’s 10th anniversary—a milestone that we are lucky enough to celebrate with our global developer community.

The Gopher community will be celebrating Go’s 10th anniversary at conferences such as Gopherpalooza in Mountain View and KubeCon in San Diego, and dozens of meetups around the world.

In recognition of this milestone, we’re taking a moment to reflect on the tremendous growth and progress Go (also known as golang) has made: from its creation at Google and open sourcing, to many early adopters and enthusiasts, to the global enterprises that now rely on Go everyday for critical workloads.

New to Go?

Go is an open-source programming language designed to help developers build fast, reliable, and efficient software at scale. It was created at Google and is now supported by over 2100 contributors, primarily from the open-source community. Go is syntactically similar to C, but with the added benefits of memory safety, garbage collection, structural typing, and CSP-style concurrency.

Most importantly, Go was purposefully designed to improve productivity for multicore, networked machines and large codebases—allowing programmers to rapidly scale both software development and deployment.

Millions of Gophers!

Today, Go has more than a million users worldwide, ranging across industries, experience, and engineering disciplines. Go’s simple and expressive syntax, ease-of-use, formatting, and speed have helped it become one of the fastest growing languages—with a thriving open source community.

As Go’s use has grown, more and more foundational services have been built with it. Popular open source applications built on Go include Docker, Hugo, and Kubernetes. Google’s hybrid cloud platform, Anthos, is also built with Go.

Go was first adopted to support many of Google’s services and infrastructure. Today, Go is used by companies including American Express, Dropbox, The New York Times, Salesforce, Target, Capital One, Monzo, Twitch, IBM, Uber, and Mercado Libre. For many enterprises, Go has become their language of choice for building on the cloud.

An Example of Go In the Enterprise

One exciting example of Go in action is at MercadoLibre, which uses Go to scale and modernize its ecommerce ecosystem and to improve cost-efficiency and system response times.

MercadoLibre’s core API team builds and maintains the largest APIs at the center of the company’s microservices solutions. Historically, much of the company’s stack was based on Grails and Groovy backed by relational databases. However, this heavyweight framework with its multiple layers soon ran into scalability issues.

Converting that legacy architecture to Go as a new, very thin framework for building APIs streamlined those intermediate layers and yielded great performance benefits. For example, one large Go service is now able to run 70,000 requests per machine with just 20 MB of RAM.

“Go was just marvelous for us,” explains Eric Kohan, Software Engineering Manager at MercadoLibre. “It’s very powerful and very easy to learn, and with backend infrastructure has been great for us in terms of scalability.”

Using Go allowed MercadoLibre to cut the number of servers they use for this service to one-eighth the original number (from 32 servers down to four), plus each server can operate with less power (originally four CPU cores, now down to two CPU cores). With Go, the company obviated 88 percent of their servers and cut CPU on the remaining ones in half—producing a tremendous cost-savings.

With Go, MercadoLibre’s build times are three times (3x) faster and their test suite runs an amazing 24 times faster. This means the company’s developers can make a change, then build and test that change much faster than they could before.

Today, roughly half of MercadoLibre's traffic is handled by Go applications.

"We really see eye-to-eye with the larger philosophy of the language," Kohan explains. "We love Go's simplicity, and we find that having its very explicit error handling has been a gain for developers because it results in safer, more stable code in production."

Visit go.dev to Learn More

We’re thrilled by how the Go community continues to grow, through developer usage, enterprise adoption, package contribution, and in many other ways.

Building off of that growth, we’re excited to announce go.dev, a new hub for Go developers.

There you’ll find centralized information for Go packages and modules, a wealth of learning resources to get started with the language, and examples of critical use cases and case studies of companies using Go.

MercadoLibre’s recent experience is just one example of how Go is being used to build fast, reliable, and efficient software at scale.

You can read more about MercadoLibre’s success with Go in the full case study.

Hey! Ho! Ten Years of Go!



Ten years ago, we announced the Go release here on this blog. This weekend we marked Go's 10th birthday as an open-source programming language and ecosystem for building modern networked software.

Go's original target was networked system infrastructure, anticipating what we now call the cloud. Go has become the language of the cloud, but more than that, Go has become the language of the open-source cloud, including Containerd, CoreDNS, Docker, Envoy, Etcd, Istio, Kubernetes, Prometheus, Terraform, and Vitess.

From our earliest days working on Go, we planned for Go to be open source. We knew that bootstrapping a new language and ecosystem was too large a project for one team or even one company to do alone. Go needed a thriving open-source community to curate and grow the ecosystem, to write books and tutorials, to teach courses to developers of all skill levels, and of course to find bugs and work on code improvements and new features. And of course we also wanted to share what we had created with everyone.

Open source at its best is about people working together to accomplish far more than any of them could have done alone. We are incredibly grateful to the thousands of people who have built up Go, its ecosystem, and its community with us over the past decade.

There are over a million Go developers worldwide, and companies all over the globe are looking to hire more. In fact, people often tell us that learning Go helped them get their first jobs in the tech industry. In the end, what we're most proud of about Go is not a well-designed feature or a clever bit of code but the positive impact Go has had in so many people's lives. We aimed to create a language that would help us be better developers, and we are thrilled that Go has helped so many others. Today we launched go.dev to be a hub for all Go developers to learn more and find ways to connect with each other.

As a thank you from us on the Go team at Google to Go contributors and developers worldwide for joining us on Go's journey, we are distributing a commemorative 10th anniversary pin at this month's Go Developer Network meetups. Renee French, who created the Go gopher for the release back in 2009, designed this special pin and also painted the mission control gopher scene at the top of this post. We thank Renee for giving Go so much of her time and a mascot that continues to delight and inspire a decade on.

As #GoTurns10, we hope everyone will take a moment to celebrate the Go community and all we have achieved together. On behalf of the entire Go team at Google, thank you to everyone who has joined us over the past decade. Let's make the next one even more incredible!



By Russ Cox, for the Go team