
Google Code-in 2019 Org Applications are Open!

We are now accepting applications from open source organizations interested in participating in Google Code-in 2019, the contest's tenth year. Google Code-in (GCI) invites pre-university students ages 13-17 to learn hands-on by contributing to open source software.

Each year we have heard inspiring stories from the participating mentors about their commitment to working with young students. We only select organizations that have participated in Google Summer of Code because they have gained experience in mentorship and know how to provide a support system for these new, young contributors.

Organization applications are now open and all interested open source organizations must apply before Monday, October 28, 2019 at 17:00 UTC.

In 2018, 27 organizations were accepted—9 of which were participating in GCI for the first time! Over the last 9 years, 11,232 students from 108 countries have completed more than 40,000 tasks for participating open source projects. Tasks fall into 5 categories:
  • Code: writing or refactoring.
  • Documentation/Training: creating/editing documents and helping others learn more.
  • Outreach/Research: community management, outreach/marketing, or studying problems and recommending solutions.
  • Quality Assurance: testing and ensuring code is of high quality.
  • Design: graphic design or user interface design.
Once an organization is selected for Google Code-in 2019, it will define these tasks and recruit mentors from its community who are interested in providing online support for students during the seven-week contest.

You can find a timeline, FAQ and other information about Google Code-in on our website. If you’re an educator interested in sharing Google Code-in with your students, please see the resources here.

By Radha Jhatakia, Google Open Source

Understanding Scheduling Behavior with SchedViz

Linux kernel scheduling behavior can be a key factor in application responsiveness and system utilization. Today, we’re announcing SchedViz, a new tool for visualizing Linux kernel scheduling behavior. We’ve used it inside Google to discover many opportunities for better scheduling choices and to root-cause many latency issues.

Thread Scheduling

Modern operating systems execute multiple processes concurrently by running each for a brief burst and then switching to the next, a technique called multiprogramming. Modern processors include multiple cores, each of which can run its own thread, a capability known as multiprocessing. When these two features are combined, a new engineering challenge emerges: when should a thread run? How long should it run, and on what processor? This thread scheduling strategy is a complex problem, and can have a significant effect on performance. In particular, threads that don't get scheduled to run can suffer starvation, which can adversely affect user-visible latencies.

In an ideal system, a simple strategy of assigning chunks of CPU-time to threads in a round-robin manner would maximize fairness by ensuring all threads are equally starved. But, of course, real systems are far from ideal, and this view of fairness may not be an appropriate performance goal. Here are a few factors that make scheduling tricky:
  • Not all threads are equally important. Each thread has a priority that specifies its importance relative to other threads. Thread priorities must be selected carefully, and the scheduler must honor those selections.
  • Not all cores are equal. The structure of the memory hierarchy can make it costly to shift a thread from one core to another, especially if that shift moves it to a new NUMA node. Users can explicitly pin threads to a CPU or a set of CPUs, or exclude threads from specific CPUs, using features like sched_setaffinity or cgroups (see the sketch after this list). But such restrictions make scheduling even tougher.
  • Not all threads want to run all the time. Threads may sleep waiting for some event, yielding their core to other execution. When the event occurs, pending threads should be scheduled quickly.
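
For a concrete sense of what such a restriction looks like, here is a minimal sketch assuming Linux and the Python standard library; the CPU numbers are arbitrary examples:

import os

# Pin the calling process (pid 0 means "self") to CPUs 2 and 3. From now on
# the kernel may only schedule this process's threads on those two cores,
# which also narrows the scheduler's options when those cores are busy.
os.sched_setaffinity(0, {2, 3})

# Read the mask back to confirm the restriction took effect.
print(os.sched_getaffinity(0))   # e.g. {2, 3}
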
SchedViz permits you to observe real scheduling behavior. Comparing this with the expected or desired behavior can point to specific problems and possible solutions.

Tracepoints and Kernel Tracing

The Linux kernel is instrumented with hooks called tracepoints; when certain actions occur, any code hooked to the relevant tracepoint is called with arguments that describe the action. The kernel also provides a debug feature that can trace this data and stream it to a buffer for later analysis.

Hundreds of different tracepoints exist, arranged into families of related functions. The sched family includes tracepoints that can reconstruct thread scheduling behavior—when threads switched in, blocked on some event, or migrated between cores. These sched tracepoints provide fine-grained and comprehensive detail about thread scheduling behavior over a short period of traced execution.
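
As a rough illustration of what that data looks like, the sketch below parses sched_switch events from the ftrace text output. It assumes the common textual format, which can vary slightly across kernel versions, and that the sched_switch tracepoint has been enabled under /sys/kernel/tracing/events/sched/:

import re

# Matches the common text form of a sched_switch event, e.g.
# "bash-42 [003] d..2  123.456789: sched_switch: prev_comm=bash prev_pid=42
#  ... ==> next_comm=python next_pid=777 ..." (exact flags vary by kernel).
SCHED_SWITCH = re.compile(
    r"\[(?P<cpu>\d+)\].*?(?P<ts>\d+\.\d+): sched_switch: "
    r"prev_comm=(?P<prev_comm>.*?) prev_pid=(?P<prev_pid>\d+).*?"
    r"==> next_comm=(?P<next_comm>.*?) next_pid=(?P<next_pid>\d+)"
)

def parse_switch(line):
    """Return (timestamp, cpu, prev_pid, next_pid), or None for other events."""
    m = SCHED_SWITCH.search(line)
    if not m:
        return None
    return (float(m.group("ts")), int(m.group("cpu")),
            int(m.group("prev_pid")), int(m.group("next_pid")))

# Example: stream events from the ftrace pipe (requires root).
if __name__ == "__main__":
    with open("/sys/kernel/tracing/trace_pipe") as pipe:
        for line in pipe:
            event = parse_switch(line)
            if event:
                print(event)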

SchedViz: Visualize Thread Scheduling Over Time

SchedViz provides an easy way to gather kernel scheduling traces from hosts, and visualize those traces over time. Tracing is simple:
$ sudo ./trace.sh -capture_seconds 5 -out ~/traces
Then, importing the resulting collection into SchedViz takes just one click.


Once imported, a collection will always be available for later viewing, until you delete it.

The SchedViz UI displays collections in several ways. A zoomable and pannable heatmap shows system cores on the y-axis, and the trace duration on the x-axis. Each core in the system has a swim-lane, and each swim-lane shows CPU utilization (when that CPU is being kept busy) and wait-queue depth (how many threads are waiting to run on that CPU). The UI also includes a thread list that displays which threads were active in the heatmap, along with how long they ran, waited to run, and blocked on some event, and how many times they woke up or migrated between cores. Individual threads can be selected to show their behavior over time, or expanded to see their details.

Using SchedViz to Identify Antagonisms: Not all threads are equally important

Antagonism describes the situation in which a victim thread is ready to run on some CPU, while another antagonist thread runs on that same CPU. Long antagonisms, or high cumulative duration of antagonisms, can degrade user experience or system efficiency by making a critical process unavailable at critical times.

Antagonist analysis is useful when threads are meant to have exclusive access to some core but don’t get it. In SchedViz, such antagonisms are listed in each thread’s summary, as well as being immediately visible as breaks in the victim thread's running bar. Zooming in reveals exactly what work is interfering.
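
The bookkeeping behind this kind of analysis is simple to sketch. Given intervals describing when a victim thread was runnable but waiting on a CPU, and which thread that CPU actually ran, the cumulative antagonism per antagonist is just the overlap between the two. This is a naive O(n·m) sketch with made-up example data, not how SchedViz itself computes it:

def antagonisms(waiting, running, victim_pid):
    """Attribute a victim's runnable-but-waiting time to the threads that
    occupied its CPU during that time.

    waiting: list of (start, end) intervals when the victim was runnable on
             the CPU but not running (derived from wakeup/switch events).
    running: list of (start, end, pid) intervals of what the CPU actually ran.
    Returns {antagonist_pid: cumulative_seconds}.
    """
    totals = {}
    for w_start, w_end in waiting:
        for r_start, r_end, pid in running:
            if pid == victim_pid:
                continue
            overlap = min(w_end, r_end) - max(w_start, r_start)
            if overlap > 0:
                totals[pid] = totals.get(pid, 0.0) + overlap
    return totals

# Example: thread 42 waited for 10 ms on CPU 0 while threads 7 and 9 ran there.
running = [(0.000, 0.004, 7), (0.004, 0.010, 9), (0.010, 0.020, 42)]
waiting = [(0.000, 0.010)]
print(antagonisms(waiting, running, victim_pid=42))  # roughly {7: 0.004, 9: 0.006}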

Several antagonisms affect a thread that wants its CPU exclusively.
Root-causing an antagonism via zooming in.

Round-robin queueing, in which two or more threads, each wanting to run most or all of the time, occupy a single CPU for a period of time, also yields antagonisms. In this case, the scheduler attempts to avert starvation by giving multiple threads short time-slices to run in a round-robin manner. This reduces the throughput of affected threads while introducing often-significant, repeating, latencies. It is a sign that some portion of the system is overloaded.

In SchedViz, round-robin scheduling appears as a sequence of fixed-size intervals in which the running thread, and the set of waiting threads, changes with each interval. The SchedViz UI makes it easy to better understand what caused this phenomenon.

An overloaded CPU with two threads engaged in round-robin queueing. Running intervals are shown as ovals at top; waiting intervals as rectangles at bottom.
Zooming out and viewing more CPUs reveals that round-robin queueing started when a thread migrated into the overtaxed CPU.

Using SchedViz to Identify NUMA Issues: Not all cores are equal

Larger servers often have several NUMA nodes; a CPU can access a subset of memory (the DRAM local to its NUMA node) more quickly than other memory (other nodes' DRAMs). This non-uniformity is a practical consequence of growing core count, but it brings challenges.

On the one hand, a thread migrated away from the DRAM that holds most of its state will suffer, since it will then have to pay an extra tax for each DRAM access. SchedViz can help identify cases like this, making it clear when a thread has had to migrate across NUMA boundaries.

On the other hand, it is important to ensure that all NUMA nodes in a system are well balanced, lest part of the machine be overloaded while another part sits idle.
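
To relate the CPUs seen in a trace to NUMA nodes, one option is to read the standard Linux sysfs topology. A small sketch, assuming /sys/devices/system/node is present on the machine:

import glob, os

def numa_topology():
    """Map NUMA node id -> CPU list string, e.g. {0: "0-15,32-47", 1: "16-31,48-63"}."""
    topo = {}
    for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        node = int(os.path.basename(node_dir)[len("node"):])
        with open(os.path.join(node_dir, "cpulist")) as f:
            topo[node] = f.read().strip()
    return topo

print(numa_topology())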

A thread (in yellow) risks higher-latency memory accesses as it migrates across NUMA nodes.
A system risks both under-utilization and increased latency due to NUMA imbalance.

Beyond Scheduling

Many issues can be identified and explored using only sched tracepoints. But there are many other tracepoints, reflecting a wide variety of phenomena, and many of them pair well with scheduling data. For example, irq events can reveal when thread running time is spent handling interrupts; sys events can help reveal when execution moves into the kernel, and what it’s doing there; and workqueue events can show when kernel work is underway, and what work is being done. SchedViz presently offers limited support for visualizing these non-sched tracepoint families, but improving that support is an active area of development for us.

Google Summer of Code 2019 (Statistics Part 2)

2019 has been an epic year for Google Summer of Code as we celebrated 15 years of connecting university students from around the globe with 201 open source organizations big and small.

We want to congratulate the 1,134 students who completed GSoC 2019. Great work everyone!

Now that GSoC 2019 is over we would like to wrap up the program with some more statistics to round out the year.

Student Registrations

We had 30,922 students from 148 countries register for GSoC 2019 (that’s a 19.5% increase in registrations over last year, the previous record). Interest in GSoC clearly continues to grow and we’re excited to see it growing in all parts of the world.

For the first time ever we had students register from Bhutan, Fiji, Grenada, Papua New Guinea, South Sudan, and Swaziland.

Universities

The 1,276 students accepted into the GSoC 2019 program hailed from 648 universities, 164 of which had students participating in GSoC for the first time.

Schools with the most accepted students for GSoC 2019:

University | # of Accepted Students
Indian Institute of Technology, Roorkee | 48
International Institute of Information Technology - Hyderabad | 29
Birla Institute of Technology and Science, Pilani (BITS Pilani) | 27
Guru Gobind Singh Indraprastha University (GGSIPU Dwarka) | 20
Indian Institute of Technology, Kanpur | 19
Indian Institute of Technology, Kharagpur | 19
Amrita University / Amrita Vishwa Vidyapeetham | 14
Delhi Technological University | 11
Indian Institute of Technology, Bombay | 11
Indraprastha Institute of Information and Technology, New Delhi | 11

Mentors

Each year we pore over gobs of data to extract some interesting statistics about the GSoC mentors. Here’s a quick synopsis of our 2019 crew:
  • Registered mentors: 2,815
  • Mentors with assigned student projects: 2,066
  • Mentors who have participated in GSoC for 10 or more years: 70
  • Mentors who have been a part of GSoC for 5 years or more: 307
  • Mentors that are former GSoC students: 691
  • Mentors that have also been involved in the Google Code-in program: 498
  • Percentage of new mentors: 35.84%
GSoC 2019 mentors are from all parts of the world, representing 81 countries!

Every year thousands of GSoC mentors help introduce the next generation to the world of open source software development—for that we are forever grateful. We cannot stress enough that without our invaluable mentors the GSoC program would not exist. Mentorship is why GSoC has remained strong for 15 years; the relationships built between students and mentors have helped sustain the program and many of these communities. Sharing their passion for open source, our mentors have paved the road for generations of contributors to enter open source development.

Thank you to all of our mentors, organization administrators, and all of the “unofficial” mentors that help in our open source organizations’ communities. Google Summer of Code is a community effort and we appreciate each and every one of you.

By Stephanie Taylor, Google Open Source

Unleashing Open Source Silicon

Open Source Silicon

We all know that open source software has changed the fundamental nature of the software industry and that Google generously adds fuel to this culture of openness and community through Google Summer of Code. What few people realize is that there is another major industry that is ripe for an open source overhaul—the silicon industry. And, this summer, a Google Summer of Code student helped open the floodgates.

If you search social media for “open source silicon,” you’ll find a few dozen names that pop up with some frequency. These folks are fanatically driving forward with open source circuit models and software for creating them. You’ll also find people clamoring to jump aboard the RISC-V bandwagon. RISC-V, like x86, MIPS, and others before it, is a CPU “instruction set architecture,” and the mere fact that it is free of proprietary licenses has inspired countless open source implementations and an industry shake-up that has ARM quaking in its boots.

While this open source silicon community is a hotbed of enthusiasm, it is several decades behind the world of open source software. In this post, I’ll reveal the three reasons this movement has, thus far, not been able to take off like open source software, and I’ll explain why these three obstacles are all coming to a sudden and dramatic end that will unleash a tidal wave and catch the silicon industry by surprise. And you’ll see that Google Summer of Code, this year, played a pivotal role.

What’s Standing in the Way

So, why is coding and sharing circuit models any different from sharing software? Three reasons:
  1. Implementation Details: There’s more to worry about with hardware than software. Correct functionality is far from the only concern. Particular care must be given to physical implementation, and those details must be reworked for each specific silicon technology and set of design constraints. As a result, leveraging open source logic can involve a substantial amount of rework.
  2. Access to software: While compilers for software tend to be open source, electronic design automation (EDA) tools for compiling hardware are traditionally proprietary and prohibitively expensive.
  3. Access to hardware: Unlike software, circuit models must be turned into silicon to be useful. Fabricating custom silicon is out of the question for a hobbyist, but field-programmable gate arrays (FPGAs) provide a more realistic option. These are chips that can be quickly reconfigured, or “programmed,” to implement any logic function. While FPGAs are within reach, they still cost money, and they are delivered by postal service, not a web browser. And, worst of all, it could take weeks to get an FPGA platform up and running and communicating with the open source logic.

Breaking Down the Barriers

Let’s look at what the open source community is doing to help:
  1. Implementation Details: There is a trend toward designing more abstractly, and leaving the details to tools. Open source tools can now compile C++ into silicon (with caveats). And several open source hardware description languages leverage modern software language innovations that make it easier to rework implementation details. The open source community has shown a greater willingness than industry to explore and adopt these languages. Though hardware remains fundamentally different from software, their differences are becoming less prominent.
  2. Access to software: Open source EDA software has marked some significant achievements in the past several years. Circuit designs have been implemented on FPGAs using 100% free and open source EDA tools. (Google Summer of Code has helped to fund a few open source EDA capabilities in projects under the Free and Open Source Silicon Foundation.) The US government has recognized the opportunity and is providing significant fuel to the fire through the Posh Open Source Hardware initiative. Being restricted to open source software can still be a bit limiting, but it is no longer prohibitive.
  3. Access to hardware: Hmmm. This is still a problem.
My personal contributions to this open source silicon movement stem from my startup, Redwood EDA. We directly target problem #1 by providing tools that support advanced (yet simpler) circuit modeling techniques. And, to address #2, we make all of our software freely available online for open source development. But neither open source EDA nor the efforts of my startup had been able to noticeably impact problem #3, access to hardware.

This is where bigger forces have stepped in. In the past few years, cloud providers have begun incorporating FPGAs into their datacenters. These are available to anyone with an internet connection and a credit card, bundled with industry-class EDA software, on a pay-per-use basis. Wow! This is the solution to hardware access! An open source developer can provide not only their hardware model but also the platform for which their model was designed. A user can download and go, just like they can with software! …in theory.

So here’s the rub. The learning curve for cloud FPGA platforms has been way too high for the open source community to latch on.

Our Project

With a bit of help from Politecnico di Milano’s NECST Lab and ThroughPuter Inc., I was able to get a project off the ground, and it attracted some attention for this year’s Google Summer of Code. I was happy to see an application from Ákos Hadnagy, who had done some other ground-breaking work with me in the last Summer of Code, and he was accepted into the program. Together, this summer, we built infrastructure, automated flows, and wrote documentation (or, more to the point, eliminated the need for documentation). Instead of taking a month to ramp up, it is now possible to develop for this platform in a matter of minutes!

We dubbed our framework “1st CLaaS,” where we have coined the term “CLaaS” for custom logic as a service. Very simply, 1st CLaaS wraps a developer’s custom FPGA logic as a microservice. Standard web protocols can be used to stream bits to and from this logic, and platform details are hidden by the framework.
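
To give a flavor of the model, here is a hypothetical client sketch that streams a chunk of bytes to an FPGA-hosted kernel over a WebSocket, using the third-party Python websockets package. The URL, port, and framing below are made up for illustration; consult the 1st CLaaS documentation for the real interface of a deployed kernel.

import asyncio
import websockets  # third-party: pip install websockets

# Hypothetical endpoint; the actual URL, port, and message framing depend on
# how a given 1st CLaaS kernel is deployed.
KERNEL_URL = "ws://localhost:8888/ws"

async def run_kernel(payload: bytes) -> bytes:
    # Stream a chunk of input bits to the FPGA-backed microservice and wait
    # for the processed chunk to come back over the same WebSocket.
    async with websockets.connect(KERNEL_URL) as ws:
        await ws.send(payload)
        return await ws.recv()

if __name__ == "__main__":
    result = asyncio.run(run_kernel(b"\x00\x01\x02\x03"))
    print(len(result), "bytes returned")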

Implications and Wrap-up

So there is no longer anything standing in the way! Hobbyists can build and share hardware, and open source silicon can thrive. Just imagine the disruption this will cause in an industry currently driven by corporate giants. And with easy web integration, the opportunity and demand for hardware acceleration should rise, and we could start to see some interesting new capabilities on the web that were not imaginable until now.

Google certainly didn’t have this specific industry transformation in mind when starting Google Summer of Code, but I suspect the whole point of the program was to inspire and enable the unexpected. And it did!

If you’d like to contribute to 1st CLaaS or collaborate on some of the world’s first FPGA-accelerated web applications, we’d be more than happy to have you involved. I look forward to next year's applications.

By Steve Hoover, Redwood EDA, Google Summer of Code mentor

Google Code-in 2019 is Right Around the Corner!

This year is the 10th anniversary of the Google Code-in (GCI) contest! Students ages 13–17, globally, can learn about open source development by working on real projects, with mentorship from active developers. GCI begins on December 2, 2019 and runs for seven weeks, ending January 23, 2020.

Google Code-in is unique because students have the autonomy to select what they’re interested in working on from 2,500+ tasks created by open source organizations, all while having mentors available to answer questions as they work on tasks.

There are many questions that developers of any age ask themselves when they first get involved in open source, from where to start to whether they have the expertise to truly support the organization. The beauty of GCI lies in the participating open source organizations: they know that teens are often first-time contributors, so their volunteer mentors come prepared with the patience and experience to help these newcomers join the open source community.
New contributors bring fresh perspectives, ideas, and enthusiasm into their open source communities, helping them thrive. Over the last 9 years, 58 GCI organizations have helped 11,000 students from 108 countries make real contributions to open source projects, and to this day many of those students continue to participate in various open source communities and many have become mentors themselves! Some have even gone on to join Google Summer of Code (GSoC).

Contest participants work on a varied level of tasks that require anywhere from beginner to advanced skills in the following five categories:
  • Code: writing or refactoring
  • Documentation/Training: creating/editing documents and helping others learn more
  • Outreach/Research: community management, marketing, or studying problems and recommending solutions
  • Quality Assurance: testing and ensuring code is of high quality
  • Design: graphic design or user interface design
Organizations that are interested in mentoring students can apply for Google Code-in beginning Thursday, October 10th. Google Code-in starts for students Monday, December 2nd!
Visit the contest site g.co/gci to learn more about the contest and find flyers, slide decks, timelines, and more.

By Radha Jhatakia, Google Open Source

Project Ihmehimmeli: Temporal Coding in Spiking Neural Networks



The discoveries being made regularly in neuroscience are an ongoing source of inspiration for creating more efficient artificial neural networks that process information in the same way as biological organisms. These networks have recently achieved resounding success in domains ranging from playing board and video games to fine-grained understanding of video. However, there is one fundamental aspect of biological brains that artificial neural networks are not yet fully leveraging: temporal encoding of information. Preserving temporal information allows a better representation of dynamic features, such as sounds, and enables fast responses to events that may occur at any moment. Furthermore, despite the fact that biological systems can consist of billions of neurons, information can be carried by a single signal (‘spike’) fired by an individual neuron, with information encoded in the timing of the signal itself.

Based on this biological insight, project Ihmehimmeli explores how artificial spiking neural networks can exploit temporal dynamics using various architectures and learning settings. “Ihmehimmeli” is a Finnish tongue-in-cheek word for a complex tool or a machine element whose purpose is not immediately easy to grasp. The essence of this word captures our aim to build complex recurrent neural network architectures with temporal encoding of information. We use artificial spiking networks with a temporal coding scheme, in which more interesting or surprising information, such as louder sounds or brighter colours, causes earlier neuronal spikes. Along the information processing hierarchy, the winning neurons are those that spike first. Such an encoding can naturally implement a classification scheme where input features are encoded in the spike times of their corresponding input neurons, while the output class is encoded by the output neuron that spikes earliest.
The Ihmehimmeli project team holding a himmeli, a symbol for the aim to build recurrent neural network architectures with temporal encoding of information.
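
As a toy illustration of this coding scheme (not the Ihmehimmeli model itself), stronger inputs can be mapped to earlier spike times, and the predicted class read off as the output neuron that spikes first:

import numpy as np

def encode_spike_times(features, t_max=1.0):
    """Map feature intensities in [0, 1] to spike times in [0, t_max]:
    stronger (more 'interesting') inputs spike earlier."""
    features = np.clip(np.asarray(features, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - features)

def classify_by_first_spike(output_spike_times):
    """The predicted class is the output neuron that spikes earliest."""
    return int(np.argmin(output_spike_times))

# A bright pixel (0.9) spikes early, a dim one (0.1) spikes late.
print(encode_spike_times([0.9, 0.1]))                 # -> [0.1 0.9]
print(classify_by_first_spike([0.42, 0.17, 0.63]))    # -> class 1
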
We recently published and open-sourced a model in which we demonstrated the computational capabilities of fully connected spiking networks that operate using temporal coding. Our model uses a biologically-inspired synaptic transfer function, where the electric potential on the membrane of a neuron rises and gradually decays over time in response to an incoming signal, until there is a spike. The strength of the associated change is controlled by the "weight" of the connection, which represents the synapse efficiency. Crucially, this formulation allows exact derivatives of postsynaptic spike times with respect to presynaptic spike times and weights. The process of training the network consists of adjusting the weights between neurons, which in turn leads to adjusted spike times across the network. Much like in conventional artificial neural networks, this was done using backpropagation. We used synchronization pulses, whose timing is also learned with backpropagation, to provide a temporal reference to the network.

We trained the network on classic machine learning benchmarks, with features encoded in time. The spiking network successfully learned to solve noisy Boolean logic problems and achieved a test accuracy of 97.96% on MNIST, a result comparable to conventional fully connected networks with the same architecture. However, unlike conventional networks, our spiking network uses an encoding that is in general more biologically-plausible, and, for a small trade-off in accuracy, can compute the result in a highly energy-efficient manner, as detailed below.

While training the spiking network on MNIST, we observed the neural network spontaneously shift between two operating regimes. Early during training, the network exhibited a slow and highly accurate regime, where almost all neurons fired before the network made a decision. Later in training, the network spontaneously shifted into a fast but slightly less accurate regime. This behaviour was intriguing, as we did not optimize for it explicitly. Thus spiking networks can, in a sense, be “deliberative”, or make a snap decision on the spot. This is reminiscent of the trade-off between speed and accuracy in human decision-making.
A slow (“deliberative”) network (top) and a fast (“impulsive”) network (bottom) classifying the same MNIST digit. The figures show a raster plot of spike times of individual neurons in individual layers, with synchronization pulses shown in orange. In this example, both networks classify the digit correctly; overall, the “slow” network achieves better accuracy than the “fast” network.
We were also able to recover representations of the digits learned by the spiking network by gradually adjusting a blank input image to maximize the response of a target output neuron. This indicates that the network learns human-like representations of the digits, as opposed to other possible combinations of pixels that might look “alien” to people. Having interpretable representations is important in order to understand what the network is truly learning and to prevent a small change in input from causing a large change in the result.
How the network “imagines” the digits 0, 1, 3 and 7.
This work is one example of an initial step that project Ihmehimmeli is taking in exploring the potential of time-based biology-inspired computing. In other ongoing experiments, we are training spiking networks with temporal coding to control the walking of an artificial insect in a virtual environment, or taking inspiration from the development of the neural system to train a 2D spiking grid to predict words using axonal growth. Our goal is to increase our familiarity with the mechanisms that nature has evolved for natural intelligence, enabling the exploration of time-based artificial neural networks with varying internal states and state transitions.

Acknowledgements
The work described here was authored by Iulia Comsa, Krzysztof Potempa, Luca Versari, Thomas Fischbacher, Andrea Gesmundo and Jyrki Alakuijala. We are grateful for all discussions and feedback on this work that we received from our colleagues at Google.

Source: Google AI Blog


Enabling Developers and Organizations to Use Differential Privacy

Originally posted on the Google Developers Blog
By: Miguel Guevara, Product Manager, Privacy and Data Protection Office


Whether you're a city planner, a small business owner, or a software developer, gaining useful insights from data can help make services work better and answer important questions. But, without strong privacy protections, you risk losing the trust of your citizens, customers, and users.

Differentially-private data analysis is a principled approach that enables organizations to learn from the majority of their data while simultaneously ensuring that those results do not allow any individual's data to be distinguished or re-identified. This type of analysis can be implemented in a wide variety of ways and for many different purposes. For example, if you are a health researcher, you may want to compare the average amount of time patients remain admitted across various hospitals in order to determine if there are differences in care. Differential privacy is a high-assurance, analytic means of ensuring that use cases like this are addressed in a privacy-preserving manner.
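
For intuition, here is a textbook-style sketch of that kind of query using the Laplace mechanism; it is not Google's library. It assumes each patient contributes at most one record, lengths of stay are clamped to a public range, and the privacy budget is split between a noisy sum and a noisy count.

import numpy as np

def dp_mean(values, lo, hi, epsilon, rng=None):
    """Differentially private mean via the Laplace mechanism.

    Assumes each individual contributes at most one value and that [lo, hi]
    is a public range. Values are clamped to it, so replacing one clamped
    value changes the sum by at most (hi - lo). Half the budget goes to a
    noisy sum, half to a noisy count."""
    rng = rng or np.random.default_rng()
    x = np.clip(np.asarray(values, dtype=float), lo, hi)
    noisy_sum = x.sum() + rng.laplace(scale=(hi - lo) / (epsilon / 2))
    noisy_count = len(x) + rng.laplace(scale=1.0 / (epsilon / 2))
    return noisy_sum / max(noisy_count, 1.0)

# Example: average length of stay in days, clamped to [0, 30], epsilon = 1.
stays = [2, 5, 1, 14, 3, 7, 30, 4]
print(dp_mean(stays, lo=0, hi=30, epsilon=1.0))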

Today, we’re rolling out the open-source version of the differential privacy library that helps power some of Google’s core products. To make the library easy for developers to use, we’re focusing on features that can be particularly difficult to execute from scratch, like automatically calculating bounds on user contributions. It is now freely available to any organization or developer that wants to use it.

A deeper look at the technology

Our open source library was designed to meet the needs of developers. In addition to being freely accessible, we wanted it to be easy to deploy and useful. 

Here are some of the key features of the library:
  • Statistical functions: Most common data science operations are supported by this release. Developers can compute counts, sums, averages, medians, and percentiles using our library.
  • Rigorous testing: Getting differential privacy right is challenging. Besides an extensive test suite, we’ve included an extensible ‘Stochastic Differential Privacy Model Checker library’ to help prevent mistakes.
  • Ready to use: The real utility of an open-source release is in answering the question “Can I use this?” That’s why we’ve included a PostgreSQL extension along with common recipes to get you started. We’ve described the details of our approach in a technical paper that we’ve just released today.
  • Modular: We designed the library so that it can be extended to include other functionalities such as additional mechanisms, aggregation functions, or privacy budget management.

Investing in new privacy technologies

We have driven the research and development of practical, differentially-private techniques since we released RAPPOR to help improve Chrome in 2014, and continue to spearhead their real-world application. 

We’ve used differentially private methods to create helpful features in our products, like how busy a business is over the course of a day or how popular a particular restaurant’s dish is in Google Maps, and to improve Google Fi.


This year, we’ve announced several open source privacy technologies—TensorFlow Privacy, TensorFlow Federated, Private Join and Compute—and today’s launch adds to this growing list. We're excited to make this library broadly available and hope developers will consider leveraging it as they build out their comprehensive data privacy strategies. From medicine, to government, to business, and beyond, it’s our hope that these open-source tools will help produce insights that benefit everyone.

Acknowledgements
Software Engineers: Alain Forget, Bryant Gipson, Celia Zhang, Damien Desfontaines, Daniel Simmons-Marengo, Ian Pudney, Jin Fu, Michael Daub, Priyanka Sehgal, Royce Wilson, William Lam


That’s a Wrap for Google Summer of Code 2019

As the 15th year of Google Summer of Code (GSoC) comes to a close, we are pleased to announce that 1,134 students from 61 countries have successfully completed the 2019 program. Congratulations to all of our students and mentors who made this summer’s program so memorable!

Throughout the last 12 weeks, the GSoC students worked eagerly with 201 open source organizations and over 2,000 mentors from 72 countries—learning to work virtually on teams and developing complex pieces of code. The student projects are now public so feel free to take a look at the amazing efforts they put in over the summer.

Many open source communities rely on new perspectives and talent to keep their projects thriving, and without student contributions like these they wouldn’t be able to grow. GSoC students help redesign and enhance these organizations’ codebases, sometimes as first-time contributors not only to the project but to open source itself! And this is just the beginning: many GSoC students go on to become mentors, even more become long-term committers, and some will start their own open source projects in the years to come.

And last but not least, we would like to thank the mentors and organization administrators who make GSoC possible. Their dedication to welcoming new student contributors into their communities is inspiring and vital to grow the open source community. Thank you all!

Bringing Live Transcribe’s Speech Engine to Everyone

Earlier this year, Google launched Live Transcribe, an Android application that provides real-time automated captions for people who are deaf or hard of hearing. Through many months of user testing, we've learned that robustly delivering good captions for long-form conversations isn't so easy, and we want to make it easier for developers to build upon what we've learned. Live Transcribe's speech recognition is provided by Google's state-of-the-art Cloud Speech API, which under most conditions delivers pretty impressive transcript accuracy. However, relying on the cloud introduces several complications—most notably robustness to ever-changing network connections, data costs, and latency. Today, we are sharing our transcription engine with the world so that developers everywhere can build applications with robust transcription.

Those who have worked with our Cloud Speech API know that sending infinitely long streams of audio is currently unsupported. To help solve this challenge, we take measures to close and restart streaming requests before hitting the timeout, including restarting the session during long periods of silence and closing whenever a pause in the speech is detected; otherwise, hitting the timeout would truncate a sentence or word. In between sessions, we buffer audio locally and send it upon reconnection. This reduces the amount of text lost mid-conversation, whether due to restarting speech requests or switching between wireless networks.
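
The buffering idea can be sketched as follows; open_session and the session's send/close methods are hypothetical stand-ins for illustration, not actual Cloud Speech API calls.

from collections import deque

class BufferedStreamer:
    """Sketch of the buffer-and-resend idea. open_session() and the session's
    send()/close() are hypothetical stand-ins, not Cloud Speech API calls."""

    def __init__(self, open_session):
        self.open_session = open_session
        self.pending = deque()     # audio captured while no session is open
        self.session = None

    def push_audio(self, chunk: bytes):
        self.pending.append(chunk)
        if self.session is not None:
            self._flush()

    def reconnect(self):
        # Called when a new streaming request is started; everything buffered
        # while offline (or between requests) is sent first, in order.
        self.session = self.open_session()
        self._flush()

    def close_session(self):
        # Called shortly before the server-side timeout, during long silence,
        # or at a detected pause in speech, so nothing is cut off mid-word.
        if self.session is not None:
            self.session.close()
            self.session = None

    def _flush(self):
        while self.pending:
            self.session.send(self.pending.popleft())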



Endlessly streaming audio comes with its own challenges. In many countries, network data is quite expensive and in spots with poor internet, bandwidth may be limited. After much experimentation with audio codecs (in particular, we evaluated the FLAC, AMR-WB, and Opus codecs), we were able to achieve a 10x reduction in data usage without compromising accuracy. FLAC, a lossless codec, preserves accuracy completely, but doesn't save much data. It also has noticeable codec latency. AMR-WB, on the other hand, saves a lot of data, but delivers much worse accuracy in noisy environments. Opus was a clear winner, allowing data rates many times lower than most music streaming services while still preserving the important details of the audio signal—even in noisy environments. Beyond relying on codecs to keep data usage to a minimum, we also support using speech detection to close the network connection during extended periods of silence. That means if you accidentally leave your phone on and running Live Transcribe when nobody is around, it stops using your data.

Finally, we know that if you are relying on captions, you want them immediately, so we’ve worked hard to keep latency to a minimum. Though most of the credit for speed goes to the Cloud Speech API, Live Transcribe’s final trick lies in our custom Opus encoder. At the cost of only a minor increase in bitrate, we see latency that is visually indistinguishable from sending uncompressed audio.

Today, we are excited to make all of this available to developers everywhere. We hope you'll join us in trying to build a world that is more accessible for everyone.

By Chet Gnegy, Alex Huang, and Ausmus Chang from the Live Transcribe Team