Tag Archives: Computer Science

Celebrate #CSEdWeek with Google Developers!

Posted by Google Developer Studio

Cartoon banner showing computer with graduation hat and browser window with text bubbles

Computer Science Education Week kicks off on Monday, December 7th and runs through the 13th. This annual call-to-action was started in 2009 by the Computer Science Teachers Association (CSTA) to raise awareness about the need to encourage CS education at all levels and to highlight the importance of computing across industries.

Google Developers strives to make learning Google technology accessible to all and works to empower developers to build good things together.

Whether you’re a student or a teacher, check out our collection of introductory resources and demos below. Learn how you can get started in your developer journey or empower others to get started.

Note: Some resources may require additional background knowledge

Actions on Google

The Google Assistant developer platform lets you create software to extend the functionality of the Google Assistant with “Actions”. Actions let users get things done through a conversational interface that can range from a quick command, such as turning on the lights, to a longer conversation, such as playing a trivia game or exploring a recipe for dinner.

As a developer, you can use the platform to easily create and manage unique and effective conversational experiences for your users.

Mobile devices showing demo Google Assistant conversation

Actions auto-generated from web content.

Codelab: Build Actions for Google Assistant using Actions Builder (Level 1)

This codelab covers beginner-level concepts for developing with Google Assistant; you do not need any prior experience with the platform to complete it. You’ll learn how to build a simple Action for the Google Assistant that tells users their fortune as they begin their adventure in the mythical land of Gryffinberg. Continue on to level 2 if you’re ready!

Codelab: Build Actions for Google Assistant using Actions SDK (Level 1)

This codelab covers beginner-level concepts for developing with the Actions SDK for Google Assistant; you do not need any prior experience with the platform to complete it.

Tip: If you prefer to work with more visual tools, do the Level 1 Actions Builder codelab instead; it builds the same Action using the in-console Actions Builder. View additional codelabs here.


Android Developers

Android is the world's most powerful mobile platform, with more than 2.5 billion active devices.

Build your first Android app by taking the free online Android Basics in Kotlin course created by Google. No programming experience is needed. You'll learn important concepts for building an app as well as the fundamentals of programming in Kotlin, the recommended programming language for developers who are new to Android. Start with the first unit: Kotlin Basics for Android!

Once you’re ready to take your app development to the next level, check out Android Basics: Unit 2, where you'll build a tip calculator app and an app with a scrollable list of images. You can customize these apps or start building your own Android apps!

You can find more resources such as courses and documentation on developer.android.com. Stay up to date on the latest educational resources from the Android engineering team by following our YouTube channel and Twitter account and subscribing to our newsletter.


Developer Student Clubs (DSC)

Developer Student Clubs are university-based community groups for students interested in Google developer technologies.

DSC Solution Challenge

For two years, DSC has challenged students to solve problems in their local communities using technology. Learn the steps to get started on a real-life project with tips from Arman Hezarkhani. Get inspired by the 2020 Solution Challenge winners and see what they built here.

If you’re a university student interested in joining or leading a DSC near you, click here to learn more.

Developer Student Clubs logo

Firebase

Firebase is a mobile and web application development platform that helps you manage and solve key challenges across the app lifecycle with its full suite of tools for building, quality, and business.

Codelab: Get to know Firebase for web

In this introductory codelab, you'll learn some of the basics of Firebase to create interactive web applications. Learn how to build and deploy an event RSVP and guestbook chat app using several Firebase products.

Codelab: Firebase web codelab

After “Get to know Firebase for web”, take this next codelab to learn how to use Firebase to easily create web applications by implementing and deploying a chat client using Firebase products and services.

Get all the latest educational resources from the Firebase engineering team by following our YouTube channel, Twitter account, and visiting the website.


Flutter

Do you want to learn how to build natively compiled apps for mobile, web, and desktop from a single codebase? If the answer is yes, we have some great resources for you.

This is a guide to creating your first Flutter app. If you are familiar with object-oriented code and basic programming concepts such as variables, loops, and conditionals, you can complete this tutorial. You don’t need previous experience with Dart, mobile, or web programming.

Check out this free course from Google and Udacity, perfect if you’re brand new to Flutter.


Google Cloud Platform

Google Cloud Platform helps you build what's next with secure infrastructure, developer tools, APIs, data analytics and machine learning.

Cloud OnBoard: Core Infrastructure

In this training, learn the fundamentals of Google Cloud and how it can boost your flexibility and efficiency. During sessions and demos, you'll see the ins and outs of some of Google Cloud's most impactful tools, discover how to maximize your VM instances, and explore the best ways to approach your container strategy.

Google Cloud Codelabs and Challenges

Complete a codelab and coding challenge on Google Cloud topics such as Google Cloud Basics, Compute, Data, Mobile, Monitoring, Machine Learning and Networking.

For in-depth Google Cloud tutorials and the latest Google Cloud news, tune into our Google Cloud Platform YouTube channel!


Google Pay

Google Pay lets your customers pay with the press of a button — using payment methods saved to their Google Account. Learn how to integrate the Google Pay APIs for web and Android.

  1. Get started integrating Google Pay APIs for Android.
  2. Get started integrating Google Pay APIs for Web.

Google Workspace Developers

Google Workspace, formerly known as G Suite, includes all of the productivity apps you know and love—Gmail, Calendar, Drive, Docs, Sheets, Slides, Meet, and many more.

The Google Workspace Developer Platform is a collection of tools and resources that let you customize, extend, and integrate with Google Workspace. Low-code tools such as Apps Script enable you to build customizations that automate routine tasks, and professional resources such as Add-ons and APIs enable software vendors to build applications that extend and integrate with Google Workspace.

Learn Apps Script fundamentals with codelabs

If you're new to Apps Script, you can learn the basics with our Fundamentals of Apps Script with Google Sheets codelab playlist:

  1. Fundamentals of Apps Script with Google Sheets #1: Macros & Custom Functions
  2. Fundamentals of Apps Script with Google Sheets #2: Spreadsheets, Sheets, and Ranges
  3. Fundamentals of Apps Script with Google Sheets #3: Working with Data
  4. Fundamentals of Apps Script with Google Sheets #4: Data Formatting
  5. Fundamentals of Apps Script with Google Sheets #5: Chart and Present Data in Slides

Stay updated on the newest Google Workspace developer tools and tutorials by following us on our YouTube channel and Twitter!


Material Design

Material Design is a design system, created by Google and backed by open-source code, that helps teams build high-quality digital experiences. Whether you’re building for Android, Flutter, or the web, we have guidelines, code, and resources to help you build beautiful products, faster. We’ve compiled the best beginner resources here:

  1. Learn how to choose the right navigation for your app
  2. Building Beautiful Transitions with Material Motion for Android
  3. Get Started with Material Components for Android

If you’re interested in learning more about Material Design, subscribe to the brand-new YouTube channel for updates and Q&A-format videos.


TensorFlow

TensorFlow is an open source machine learning platform with an entire ecosystem of tools, libraries, and community resources to help you solve challenging, real-world problems.

Teachable Machine

Teachable Machine is a web tool that makes creating machine learning models fast, easy, and accessible to everyone. See how you can create a machine learning model without writing any code, save your models, and use them in your own future projects. The models you make are real TensorFlow.js models that work anywhere JavaScript runs.

Machine Learning Foundations

Machine Learning Foundations is a free training course where you’ll learn the fundamentals of building machine-learned models using TensorFlow. Heads up! You will need to know a little bit of Python.
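To give a flavor of the kind of model such a course starts with, here is a minimal, hedged sketch in TensorFlow/Keras. It is an illustration rather than material taken from the course: a single-neuron network fit to the line y = 2x - 1.

```python
# Illustrative sketch (not from the course): fit y = 2x - 1 with a
# single-neuron network using TensorFlow/Keras.
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

print(model.predict(np.array([[10.0]])))  # Should print a value close to 19.
```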

Subscribe to our YouTube channel and Twitter account for all the latest in machine learning.




Celebrate more with Google

Here are other ways our friends within the ecosystem are supporting #CSEdWeek.

Google for Education

  • CS First Unplugged: A free computer science curriculum designed by teachers that makes coding easy to teach and fun to learn in a wide range of classrooms. Check out the website to collect more teacher resources from CS First.
  • Teach an Hour of Code: Flexible one-hour coding activities that can be completed online with a device, or offline and unplugged. Get started with a printable booklet with computational thinking activities for students to learn about how computer science helps us communicate and stay connected with people around the world.

Experiments with Google

A collection of innovative projects using Chrome, Android, AI, AR, Web, and more, along with helpful tools and resources to inspire others to create new experiments. New projects are added weekly, like this machine learning collaboration between Billie Eilish, YouTube Music and Google Creative Lab.

How Google’s 2020 summer interns became the newest contributors in open source

Our internship program changed in structure this year to accommodate a virtual environment, and we enjoyed seeing the intern involvement in our open source teams. Now, as the Summer 2020 Interns have departed Google, we’ve seen widespread impact across these OSS projects. Some accomplishments from the intern community included:
  • Mohamed Ibrahim, a Software Engineering major at the University of Ontario Institute of Technology, interned on the Earth Engine team in Geo. He built a web app from scratch that allows Earth Engine developers, who are primarily climate and remote-sensing researchers, to build rich UIs for their Earth Engine Apps without needing to write any code. Mohamed also learned two coding languages unfamiliar to him, enabling him to write over 10,000 lines of TypeScript and 480 lines of Go, and to merge over 30 PRs during one internship.
App creator demo
Web app demo
  • Vismita Uppalli, a Cloud intern and Computer Science major at the University of Virginia, wrote a tutorial showing how to use AI Platform Operators on Apache Airflow, which is now published in the official Airflow docs.
  • Colin Marsch interned with the Android team and published a blog post for Android developers, "Re-writing the AOSP DeskClock App in Kotlin," which has reached over 1,600 viewers! He is scheduled to graduate from the University of Waterloo with a major in Computer Science in Spring 2021.
  • Satyam Ralhan worked in the MyHeart team in Research to build a first-of-its-kind Android app that engages users in conversations to encourage healthy habits. He created a demo, which explores the different phases of the app and how it learns to personalize lifestyle suggestions for various kinds of users. He is in his fourth year at the Indian Institute of Technology, Kanpur, studying Computer Science and Engineering.
    MyHeart app demo
  • An Apigee intern, Nicole Gizzo, presented her work analyzing API vocabularies at the API Specifications Conference. She is majoring in Computer Science and Cognitive Science at Rensselaer Polytechnic Institute, and will graduate in May 2021.
  • The OSS Fuzzing Interns have found and reported over 600 bugs to critical open source projects like the Linux kernel and Nginx, over 100 of which were security vulnerabilities.
  • Madelyn Dubuk, a SWE Intern on the Cloud DPE team and a Computer Science major at USC, worked with three other interns to create a full stack web app to help better understand test flakiness, and enjoyed working directly with other interns.
Initial feedback from our interns indicates that their OSS contributions won’t stop when their internships end. Of the interns who worked on OSS projects, 69% plan to continue contributing to OSS, enjoying the ability to talk about their work and have a broader impact.

Beyond the impact on OSS, we’ve seen tremendous professional growth for our interns. Lucia Cantu-Miller, an intern on the Chrome team and Computer Science major at ITESM Monterrey, reflected that she was “proud of seeing how I’ve grown during the internship. As the days passed I became more confident in my work and in asking questions, I have grown a lot as a person and as a professional student.” Lucia wasn’t the only intern to experience this: 98% of interns who worked on OSS feel that Google is a good place to start a career.

The success of this summer’s internship is due in large part to the many contributions of Google’s OSS community, from the intern hosts to the project champions and mentors. We can’t thank them enough for their support.

By Emma Stamp, Google Engineering Education

Quantum Supremacy Using a Programmable Superconducting Processor



Physicists have been talking about the power of quantum computing for over 30 years, but the questions have always been: will it ever do something useful and is it worth investing in? For such large-scale endeavors it is good engineering practice to formulate decisive short-term goals that demonstrate whether the designs are going in the right direction. So, we devised an experiment as an important milestone to help answer these questions. This experiment, referred to as a quantum supremacy experiment, provided direction for our team to overcome the many technical challenges inherent in quantum systems engineering to make a computer that is both programmable and powerful. To test the total system performance we selected a sensitive computational benchmark that fails if just a single component of the computer is not good enough.

Today we published the results of this quantum supremacy experiment in the Nature article, “Quantum Supremacy Using a Programmable Superconducting Processor”. We developed a new 54-qubit processor, named “Sycamore”, that is composed of fast, high-fidelity quantum logic gates, in order to perform the benchmark testing. Our machine performed the target computation in 200 seconds, and from measurements in our experiment we determined that it would take the world’s fastest supercomputer 10,000 years to produce a similar output.
Left: Artist's rendition of the Sycamore processor mounted in the cryostat. (Full Res Version; Forest Stearns, Google AI Quantum Artist in Residence) Right: Photograph of the Sycamore processor. (Full Res Version; Erik Lucero, Research Scientist and Lead Production Quantum Hardware)
The Experiment
To get a sense of how this benchmark works, imagine enthusiastic quantum computing neophytes visiting our lab in order to run a quantum algorithm on our new processor. They can compose algorithms from a small dictionary of elementary gate operations. Since each gate has a probability of error, our guests would want to limit themselves to a modest sequence with about a thousand total gates. Assuming these programmers have no prior experience, they might create what essentially looks like a random sequence of gates, which one could think of as the “hello world” program for a quantum computer. Because there is no structure in random circuits that classical algorithms can exploit, emulating such quantum circuits typically takes an enormous amount of classical supercomputer effort.

Each run of a random quantum circuit on a quantum computer produces a bitstring, for example 0000101. Owing to quantum interference, some bitstrings are much more likely to occur than others when we repeat the experiment many times. However, finding the most likely bitstrings for a random quantum circuit on a classical computer becomes exponentially more difficult as the number of qubits (width) and number of gate cycles (depth) grow.
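The fidelity of such sampling experiments is commonly estimated with cross-entropy benchmarking. As a hedged illustration, the sketch below assumes only the standard linear form of the estimator, F = 2^n * mean(p_ideal(x_i)) - 1; the helper name and the toy inputs are hypothetical, not taken from the paper.

```python
# Hedged sketch: the linear cross-entropy benchmarking (XEB) fidelity estimate,
# F = 2^n * mean(p_ideal(x_i)) - 1, computed from bitstrings sampled on hardware
# and the ideal probabilities obtained from classical simulation.
# `samples` and `ideal_probs` below are made-up toy inputs for illustration.
import numpy as np

def linear_xeb_fidelity(samples, ideal_probs, num_qubits):
    """samples: sampled bitstrings; ideal_probs: bitstring -> simulated probability."""
    mean_prob = np.mean([ideal_probs[bits] for bits in samples])
    return (2 ** num_qubits) * mean_prob - 1.0

# Toy 3-qubit example: a uniform sampler scores about 0; a sampler that favors
# the high-probability bitstrings scores higher.
ideal_probs = {format(i, "03b"): p
               for i, p in enumerate([0.30, 0.05, 0.20, 0.05, 0.15, 0.05, 0.15, 0.05])}
samples = ["000", "010", "100", "000", "110", "000", "010", "100"]
print(linear_xeb_fidelity(samples, ideal_probs, num_qubits=3))
```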
Process for demonstrating quantum supremacy.
In the experiment, we first ran random simplified circuits from 12 up to 53 qubits, keeping the circuit depth constant. We checked the performance of the quantum computer using classical simulations and compared with a theoretical model. Once we verified that the system was working, we ran random hard circuits with 53 qubits and increasing depth, until reaching the point where classical simulation became infeasible.
Estimate of the equivalent classical computation time assuming 1M CPU cores for quantum supremacy circuits as a function of the number of qubits and number of cycles for the Schrödinger-Feynman algorithm. The star shows the estimated computation time for the largest experimental circuits.
This result is the first experimental challenge against the extended Church-Turing thesis, which states that classical computers can efficiently implement any “reasonable” model of computation. With the first quantum computation that cannot reasonably be emulated on a classical computer, we have opened up a new realm of computing to be explored.

The Sycamore Processor
The quantum supremacy experiment was run on a fully programmable 54-qubit processor named “Sycamore.” It is composed of a two-dimensional grid where each qubit is connected to four other qubits. As a consequence, the chip has enough connectivity that the qubit states quickly interact throughout the entire processor, making the overall state impossible to emulate efficiently with a classical computer.

The success of the quantum supremacy experiment was due to our improved two-qubit gates with enhanced parallelism that reliably achieve record performance, even when operating many gates simultaneously. We achieved this performance using a new type of control knob that is able to turn off interactions between neighboring qubits. This greatly reduces the errors in such a multi-connected qubit system. We made further performance gains by optimizing the chip design to lower crosstalk, and by developing new control calibrations that avoid qubit defects.

We designed the circuit in a two-dimensional square grid, with each qubit connected to four other qubits. This architecture is also forward compatible for the implementation of quantum error-correction. We see our 54-qubit Sycamore processor as the first in a series of ever more powerful quantum processors.
Heat map showing single- (e1; crosses) and two-qubit (e2; bars) Pauli errors for all qubits operating simultaneously. The layout shown follows the distribution of the qubits on the processor. (Courtesy of Nature magazine.)

Testing Quantum Physics
To ensure the future utility of quantum computers, we also needed to verify that there are no fundamental roadblocks coming from quantum mechanics. Physics has a long history of testing the limits of theory through experiments, since new phenomena often emerge when one starts to explore new regimes characterized by very different physical parameters. Prior experiments showed that quantum mechanics works as expected up to a state-space dimension of about 1000. Here, we expanded this test to a size of 10 quadrillion and find that everything still works as expected. We also tested fundamental quantum theory by measuring the errors of two-qubit gates and finding that this accurately predicts the benchmarking results of the full quantum supremacy circuits. This shows that there is no unexpected physics that might degrade the performance of our quantum computer. Our experiment therefore provides evidence that more complex quantum computers should work according to theory, and makes us feel confident in continuing our efforts to scale up.

Applications
The Sycamore quantum computer is fully programmable and can run general-purpose quantum algorithms. Since achieving quantum supremacy results last spring, our team has already been working on near-term applications, including quantum physics simulation and quantum chemistry, as well as new applications in generative machine learning, among other areas.

We also now have the first widely useful quantum algorithm for computer science applications: certifiable quantum randomness. Randomness is an important resource in computer science, and quantum randomness is the gold standard, especially if the numbers can be self-checked (certified) to come from a quantum computer. Testing of this algorithm is ongoing, and in the coming months we plan to implement it in a prototype that can provide certifiable random numbers.

What’s Next?
Our team has two main objectives going forward, both towards finding valuable applications in quantum computing. First, in the future we will make our supremacy-class processors available to collaborators and academic researchers, as well as companies that are interested in developing algorithms and searching for applications for today’s NISQ processors. Creative researchers are the most important resource for innovation — now that we have a new computational resource, we hope more researchers will enter the field motivated by trying to invent something useful.

Second, we’re investing in our team and technology to build a fault-tolerant quantum computer as quickly as possible. Such a device promises a number of valuable applications. For example, we can envision quantum computing helping to design new materials — lightweight batteries for cars and airplanes, new catalysts that can produce fertilizer more efficiently (a process that today produces over 2% of the world’s carbon emissions), and more effective medicines. Achieving the necessary computational capabilities will still require years of hard engineering and scientific work. But we see a path clearly now, and we’re eager to move ahead.

Acknowledgements
We’d like to thank our collaborators and contributors — University of California Santa Barbara, NASA Ames Research Center, Oak Ridge National Laboratory, Forschungszentrum Jülich, and many others who helped along the way.


Source: Google AI Blog


DevFest 2019: It’s time for Latin America!

DevFest banner

Posted by Mariela Altamirano, Community Manager for Latin America, with Grant Timmerman, Developer Programs Engineer, and Mete Atamel, Developer Advocate

DevFest season is always full of lively surprises with enchanting adventures right around the corner. Sometimes these adventures are big: attending a DevFest in the Caribbean, in the heart of the Amazon jungle, or traveling more than 3,000 meters above sea level to discover the beautiful South American highlands. Other times they are small but precious: unlocking a new way of thinking that completely shifts how you code.

October marks the beginning of our DevFest 2019 season in Latin America, where all of these experiences become a reality thanks to the efforts of our communities.

What makes DevFests in LATAM different? Our community is free spirited, eager to explore the natural landscapes we call home, proud of our deep cultural diversity, and energized by our big cities. At the same time, we are connected to the tranquil spirit of our small towns. This year, we hope to reflect this way of life through our 55 official Latin America DevFests.

During the season, Latin America will open its doors to Google Developer Experts, Women Techmakers, Googlers, and other renowned speakers, to exchange ideas on Google products such as Android, TensorFlow, Flutter, and Google Cloud Platform. Activities include hackathons, codelabs, and training sessions. This season, we will be joined by Googlers Grant Timmerman and Mete Atamel.

Grant is a Developer Programs Engineer at Google where he works on Cloud Functions, Cloud Run, and other serverless technologies on Google Cloud Platform. He loves open source, Node, and plays the alto saxophone in his spare time. During his time in Latin America, he'll be discussing all things serverless at DevFests and Cloud Summits in Chile, Argentina, Peru, Colombia, and Mexico.

Grant Timmerman, developer programs engineer
Mete Atamel, developer advocate

Mete is a Developer Advocate based in London. He focuses on helping developers with Google Cloud. At DevFest Sul in Floripa and other conferences and meetups throughout Brazil in October, he’ll be talking about serverless containers using Knative and Cloud Run. He first came to the region back in 2017, when he visited São Paulo.

Afterwards, he went to Rio de Janeiro and immediately fell in love with the city, its friendly people, and its positive vibe. Since then, he has spoken at a number of conferences and meetups in Mexico, Colombia, Peru, Argentina, Uruguay, and Brazil, and has always been impressed with the eagerness of people to learn more.

This year we will be visiting new countries such as Jamaica, Haiti, Guyana, Honduras, Venezuela and Ecuador that have created their first GDG (Google Developer Group) communities. Most of these new communities are celebrating their first DevFest! We'll also be hosting diversity and inclusion events, so keep an eye out for more details!

We thank everyone for being a part of DevFest and our community.

We hope you join us!

#DevFest

#DevFestLATAM

Find a DevFest near you at g.co/dev/fest/sa

Developer Student Clubs: A Walk That Changed Healthcare

Posted by Erica Hanson, Program Manager

ARUA, UGANDA - Samuel Mugisha is a 23-year-old university student with a laugh that echoes off every wall and a mind determined to make change. Recently he heard from a healthcare worker that many children at a local clinic were missing vaccinations, so he decided to take a walk. He toured his community, neighbor to neighbor, and asked one simple question: “Can I see your vaccination card?”

In response he was given dirt-stained, wrinkled, torn pieces of paper holding life-or-death information, all written in scribble.

He squinted, held the cards to the light, rubbed them on his pant leg, but to no avail. They were impossible to read. As Samuel put it, “They were broken.”

From the few cards he could read, Samuel noted children who had missed several vaccinations - they were unknowingly playing the odds, waiting to see if disease would find them.

“Looking through the cards, you could tell these kids had missed several vaccinations.”

Without hesitation, Samuel got right to work, determined to fix the healthcare system with technology.

He first brought together his closest friends from Developer Student Clubs (DSC), a program supporting students impacting their communities through tech. He asked them: “Why can’t technology solve our problem?”

Team photo of Developer Student Club

This newly formed team, including Samuel, Joshwa Benkya and Norman Acidri, came up with a twofold plan:

  1. Create a mobile app to replace the broken cards, so healthcare workers can clearly track which vaccines their young patients have received.
  2. Create a notification to alert healthcare workers when a child is due for a new vaccination.

The idea came together right as Developer Student Clubs launched its first Solution Challenge, an open call for all members to submit projects they recently imagined. These young developers had to give it a shot. They created a model, filled out an application, and pitched the idea. After waiting a month, they heard back - their team won the competition! Their idea was selected from a pool of 170 applicants across India, Africa, and Indonesia. In other words, everything was about to change.

In a country where talent can go unnoticed and problems often go unsolved, this new team had pushed through the odds. Developer Student Clubs is a platform for these types of bold thinkers: students who view the issues of their region not simply as obstacles to overcome, but as chances to mend their home, build a better life for themselves, and transform the experiences of their people.

The goal of the Solution Challenge, and all other DSC programs, is to educate young developers early and equip them with the right skills to make an impact in their community.

In this case, office space in Uganda was expensive and hard to find. Samuel’s team previously had few chances to all work under the same roof. After winning the challenge, Developer Student Clubs helped them find a physical space of their own to come together and collaborate - a simple tool, but one that led to a turning point. As Samuel described it,

“Developer Student Clubs helped us not be alone and apart from each other while trying to solve this problem. They gave us the space to come together and learn. We could all be in the same room, thinking together.”

Image of developers in classroom

With this new space to work, DSC then brought some of Africa’s best Google Developer Group Leads directly to the young developers. In these meetings, the students were given high-level insights on how to best leverage Android, Firebase, and Presto to propel their product forward. As Samuel put it:

“If we wanted to learn something, they gave us the best expert.”

As a result, the team realized that with the scarcity of internet in Uganda, Firebase was the perfect technology to build with - allowing healthcare workers to use the app offline but “check in” and receive updates when they were able to find internet.

Although the app has made impressive strides since winning the competition, this young team knows they can make it even better. They want to improve its usability by implementing more visuals and are working to create a version for parents, so families can track the status of their child’s vaccination on their own.

While there is plenty of work ahead, with these gifted students and Developer Student Clubs taking each step forward together, any challenge seems solvable.

What has the team been up to recently? From August 5th-9th they attended the Startup Africa Roadtrip, an intensive training week on how best to refine a startup business model.

Google AI Princeton: Current and Future Research



Google has long partnered with academia to advance research, collaborating with universities all over the world on joint research projects which result in novel developments in Computer Science, Engineering, and related fields. Today we announce the latest of these academic partnerships in the form of a new lab, across the street from Princeton University’s historic Nassau Hall, opening early next year. By fostering closer collaborations with faculty and students at Princeton, the lab aims to broaden research in multiple facets of machine learning, focusing its initial research efforts on optimization methods for large-scale machine learning, control theory and reinforcement learning. Below we give a brief overview of the research progress thus far.

Large-Scale Optimization
Imagine you have gone for a mountain hike and have run out of water. You need to get to a lake. How can you do so most efficiently? This is a matter of optimizing your route, and the mathematical analogue of this is the gradient descent method: you move in the direction of steepest descent until you find the nearest lake at the bottom of your path. The trajectory of gradient descent resembles the path, shown below, that a thirsty yet avid hiker would take in order to get down to a lake as fast as she can.
Gradient descent (GD), and its randomized version, stochastic gradient descent (SGD), are the methods of choice for optimizing the weights of neural networks. Stacking all of the parameters together, we form a single parameter vector. Let us take a simplistic view and assume that our neural net merely has 5 different parameters. Taking a gradient descent step amounts to subtracting the gradient vector (red) from the current set of parameters (blue) and putting the result back into the parameter vector.
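In code, one step of this update is just a vector subtraction. The minimal sketch below, with made-up numbers for the 5 parameters, only illustrates the mechanics.

```python
# Minimal sketch of one gradient descent step on a 5-parameter model:
# subtract the (scaled) gradient vector from the current parameter vector.
import numpy as np

params = np.array([0.5, -1.2, 3.0, 0.1, -0.7])   # current parameters (blue)
grad = np.array([0.2, -0.1, 0.4, 0.0, -0.3])     # gradient at params (red)
learning_rate = 0.1

params = params - learning_rate * grad           # one gradient descent step
print(params)
```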
Going back to our avid hiker, suppose she finds an unmarked path that is long and narrow, with limited visibility as she gazes down. If she follows the descent method, her path will zig-zag down the hill, as shown in the illustration below on the left. However, she can make faster progress by exploiting the skewed geometry of the terrain. That is, she can make a bigger leap forward than to the sides. In the context of gradient descent, picking up the pace in this way is called acceleration. A popular class of acceleration methods is named adaptive regularization, or adaptive preconditioning, first introduced by the AdaGrad algorithm devised in collaboration with Prof. John Duchi from Stanford while he was at Google.
The idea is to change the geometry of the landscape of the optimization objective to make it easier for gradient descent to work. In order to do so, preconditioning methods stretch and rotate the space. The terrain after preconditioning looks like the serene, perfectly spherical lake above on the right, and the descent trajectory is a straight line! Procedurally, instead of subtracting the gradient vector from the parameter vector per se, adaptive preconditioning first multiplies the gradient by a 5×5 matrix, called a preconditioner, as shown below.
This preconditioning operation yields a stretched and rotated gradient which is then subtracted as before, allowing much faster progress toward a basin. However, there is a downside to preconditioning, namely, its computational cost. Instead of subtracting a 5-dimensional gradient vector from a 5-dimensional parameter vector, the preconditioning transformation itself requires 5×5=25 operations. Suppose we would like to precondition gradients in order to learn a deep network with 10 million parameters. A single preconditioning step would require 100 trillion operations. In order to save computation, a diagonal version in which preconditioning amounts to stretching sans rotation was also introduced in the original AdaGrad paper. The diagonal version was later adopted and modified, yielding another very successful algorithm called Adam.
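The diagonal restriction mentioned above keeps only the stretching part of this transform: each coordinate accumulates its own squared gradients and is rescaled accordingly. The sketch below is a hedged, simplified illustration of that AdaGrad-style diagonal update, not the exact published algorithm.

```python
# Hedged sketch of diagonal (AdaGrad-style) preconditioning: accumulate squared
# gradients per coordinate and stretch each step by 1/sqrt(accumulator).
# This only stretches the space; it cannot rotate it.
import numpy as np

def adagrad_step(params, grad, accum, lr=0.1, eps=1e-8):
    accum = accum + grad ** 2                       # per-coordinate history
    step = grad / (np.sqrt(accum) + eps)            # diagonal preconditioning
    return params - lr * step, accum

params, accum = np.zeros(5), np.zeros(5)
params, accum = adagrad_step(params, np.array([0.2, -0.1, 0.4, 0.0, -0.3]), accum)
print(params)
```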

This simplified diagonal preconditioning imposes only a marginal additional cost to gradient descent. However, oversimplification has its own downside: we are no longer able to rotate our space. Going back to our hiker, if the deep-and-narrow canyon runs from southeast to northwest, she can no longer take large westward leaps. Had we provided her with a “rigged” compass in which the north pole is in the northwest, she could have followed her descent procedure as before. In high dimensions, the analog of compass rigging is full-matrix preconditioning. We thus asked ourselves whether we could devise a preconditioning method that is computationally efficient while allowing for the equivalent of coordinate rotations.

At Google AI Princeton, we developed a new method for full-matrix adaptive preconditioning at roughly the same computational cost as the commonly-used diagonal restriction. Details can be found in the paper, but the key idea behind the method is depicted below. Instead of using a full matrix, we replace the preconditioning matrix by a product of three matrices: a tall & thin matrix, a (small) square matrix, and a short & fat matrix. The vast amount of computation is performed using the smaller matrix. If we have d parameters, instead of a single large d × d matrix, the matrices maintained by GGT (shorthand for the operation Gradient Gradientᵀ), the proposed method, are of sizes d × k, k × k, and k × d, respectively.

For reasonable choices of k, which can be thought of as the “window size” of the algorithm, the computational bottleneck has been mitigated from a single large d × d matrix to a much smaller k × k matrix. In our implementation we typically choose k to be, say, 50, and maintaining the smaller square matrix is significantly less expensive while yielding good empirical performance. When compared to other adaptive methods on standard deep learning tasks, GGT is competitive with AdaGrad and Adam.
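A rough sketch of this low-rank idea follows, under the stated assumption that it is only in the spirit of GGT and not the exact published update: keep a window G of the last k gradients (a d × k matrix), do the heavy linear algebra on the small k × k Gram matrix GᵀG, and apply a (GGᵀ + εI)^(-1/2)-style transform to the current gradient.

```python
# Hedged sketch of low-rank full-matrix preconditioning in the spirit of GGT
# (simplified; not the exact published algorithm). The expensive algebra runs
# on the k x k Gram matrix of a window of recent gradients, never on d x d.
import numpy as np

def ggt_like_step(params, grad, grad_window, lr=0.1, eps=1e-4):
    G = np.stack(grad_window, axis=1)            # d x k window of past gradients
    gram = G.T @ G                               # k x k: the only "big" matrix we touch
    evals, V = np.linalg.eigh(gram)              # O(k^3) eigendecomposition
    evals = np.maximum(evals, 0.0)
    U = (G @ V) / np.sqrt(evals + eps)           # d x k approximate singular directions
    proj = U.T @ grad                            # coordinates inside the window's span
    inside = U @ (proj / np.sqrt(evals + eps))   # preconditioned component
    outside = (grad - U @ proj) / np.sqrt(eps)   # component orthogonal to the window
    return params - lr * (inside + outside)

d, k = 1000, 5
window = [np.random.randn(d) for _ in range(k)]
params = ggt_like_step(np.zeros(d), np.random.randn(d), window)
```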

Spectral Filtering for Control and Reinforcement Learning
Another broad mission of Google’s research group in Princeton is to develop principled building blocks for decision-making systems. In particular, the group strives to leverage provable guarantees from the field of online learning, which studies the robust (worst-case) guarantees of decision-making algorithms under uncertainty. An online algorithm is said to attain a no-regret guarantee if it learns to make decisions as well as the best "offline" decision in hindsight. Ideas from this field have already enabled many innovations within theoretical computer science, and provide a mathematically elegant framework to study a widely-used technique called boosting. We envision using ideas from online learning to broaden the toolkit of modern reinforcement learning.

With that goal in mind, and in collaboration with researchers and students at Princeton, we developed the algorithmic technique of spectral filtering for estimation and control of linear dynamical systems (see several recent publications). In this setting, noisy observations (e.g., location sensor measurements) are being streamed from an unknown source. The source of the signal is a system whose state evolves over time following a set of linear equations (e.g. Newton's laws). To forecast future signals (prediction), or to perform actions which bring the system to a desired state (control), the usual approach starts with learning the model explicitly (a task termed system identification), which is often slow and inaccurate.

Spectral filtering circumvents the need to model the dynamics explicitly, by reformulating prediction and control as convex programs, enabling provable no-regret guarantees. A major component of the technique is that of a new signal processing transformation. The idea is to summarize the long history of past input signals through convolution with a tailored bank of filters, and then use this representation to predict the dynamical system’s future outputs. Each filter compresses the input signal into a single real number, by taking a weighted combination of the previous inputs.
A set of filters depicted in a plot of filter amplitude versus time. With our technique of spectral filtering, multiple filters are used to predict the state of a linear dynamical system at any given time. Each filter is a set of weights used to summarize past observations, such that combining them in a weighted fashion over time allows us to accurately predict the system.
The mathematical derivation of these weights (filters) has an interesting connection to the spectral theory of Hankel matrices.
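As a hedged sketch of this construction, the filters can be taken as the top eigenvectors of such a Hankel matrix, and each filter compresses the whole input history into one number. The specific Hankel entries used below, Z_ij = 2 / ((i + j)^3 - (i + j)), are an assumption drawn from the spectral filtering papers and should be checked against the publications; the helper names are illustrative.

```python
# Hedged sketch of spectral filtering features: build filters as the top
# eigenvectors of a fixed Hankel matrix, then summarize the history of inputs
# by one weighted combination (one real number) per filter.
# The Hankel entries Z_ij = 2 / ((i+j)^3 - (i+j)) are an assumption taken from
# the spectral filtering papers, included here only for illustration.
import numpy as np

def spectral_filters(history_len, num_filters):
    idx = np.arange(1, history_len + 1)
    s = idx[:, None] + idx[None, :]
    Z = 2.0 / (s ** 3 - s)                        # symmetric Hankel matrix
    evals, evecs = np.linalg.eigh(Z)
    top = np.argsort(evals)[::-1][:num_filters]   # keep the largest eigenvalues
    return evals[top], evecs[:, top]

def compress_history(past_inputs, filters):
    # Each column of `filters` maps the entire input history to a single feature.
    return filters.T @ past_inputs

eigvals, filters = spectral_filters(history_len=100, num_filters=10)
features = compress_history(np.random.randn(100), filters)
print(features.shape)                             # 10 numbers summarize 100 past inputs
```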

Looking Forward
We are excited about the progress we have made thus far in partnership with Princeton’s faculty and students, and we look forward to the official opening of the lab in the coming weeks. It has long been Google’s view that both industry and academia benefit significantly from an open research culture, and we look forward to our continued close collaboration.

Acknowledgments
The research and results discussed in this post would not have been possible without contributions from the following researchers: Naman Agarwal, Brian Bullins, Xinyi Chen, Udaya Ghai, Tomer Koren, Karan Singh, Cyril Zhang, Yi Zhang, and visiting professor Sham Kakade. Since joining Google earlier this year, the research team has been working remotely from both the Google NYC office as well as the Princeton University campus, and they look forward to moving into the new Google space across from the Princeton campus in the weeks to come.

Source: Google AI Blog


The Question of Quantum Supremacy



Quantum computing integrates the two largest technological revolutions of the last half century, information technology and quantum mechanics. If we compute using the rules of quantum mechanics, instead of binary logic, some intractable computational tasks become feasible. An important goal in the pursuit of a universal quantum computer is the determination of the smallest computational task that is prohibitively hard for today’s classical computers. This crossover point is known as the “quantum supremacy” frontier, and is a critical step on the path to more powerful and useful computations.

In “Characterizing quantum supremacy in near-term devices” published in Nature Physics (arXiv here), we present the theoretical foundation for a practical demonstration of quantum supremacy in near-term devices. It proposes the task of sampling bit-strings from the output of random quantum circuits, which can be thought of as the “hello world” program for quantum computers. The upshot of the argument is that the output of random chaotic systems (think butterfly effect) very quickly becomes harder to predict the longer they run. If one makes a random, chaotic qubit system and examines how long a classical system would take to emulate it, one gets a good measure of when a quantum computer could outperform a classical one. Arguably, this is the strongest theoretical proposal to prove an exponential separation between the computational power of classical and quantum computers.

Determining where exactly the quantum supremacy frontier lies for sampling random quantum circuits has rapidly become an exciting area of research. On one hand, improvements in classical algorithms to simulate quantum circuits aim to increase the size of the quantum circuits required to establish quantum supremacy. This forces an experimental quantum device with a sufficiently large number of qubits and low enough error rates to implement circuits of sufficient depth (i.e., the number of layers of gates in the circuit) to achieve supremacy. On the other hand, we now understand better how the particular choice of the quantum gates used to build random quantum circuits affects the simulation cost, leading to improved benchmarks for near-term quantum supremacy (available for download here), which are in some cases quadratically more expensive to simulate classically than the original proposal.

Sampling from random quantum circuits is an excellent calibration benchmark for quantum computers, which we call cross-entropy benchmarking. A successful quantum supremacy experiment with random circuits would demonstrate the basic building blocks for a large-scale fault-tolerant quantum computer. Furthermore, quantum physics has not yet been tested for highly complex quantum states such as this.
Space-time volume of a quantum circuit computation. The computational cost for quantum simulation increases with the volume of the quantum circuit, and in general grows exponentially with the number of qubits and the circuit depth. For asymmetric grids of qubits, the computational space-time volume grows slower with depth than for symmetric grids, and can result in circuits exponentially easier to simulate.
In “A blueprint for demonstrating quantum supremacy with superconducting qubits” (arXiv here), we illustrate a blueprint towards quantum supremacy and experimentally demonstrate a proof-of-principle version for the first time. In the paper, we discuss two key ingredients for quantum supremacy: exponential complexity and accurate computations. We start by running algorithms on subsections of the device ranging from 5 to 9 qubits. We find that the classical simulation cost grows exponentially with the number of qubits. These results are intended to provide a clear example of the exponential power of these devices. Next, we use cross-entropy benchmarking to compare our results against that of an ordinary computer and show that our computations are highly accurate. In fact, the error rate is low enough to achieve quantum supremacy with a larger quantum processor.

Beyond achieving quantum supremacy, a quantum platform should offer clear applications. In our paper, we apply our algorithms towards computational problems in quantum statistical-mechanics using complex multi-qubit gates (as opposed to the two-qubit gates designed for a digital quantum processor with surface code error correction). We show that our devices can be used to study fundamental properties of materials, e.g. microscopic differences between metals and insulators. By extending these results to next-generation devices with ~50 qubits, we hope to answer scientific questions that are beyond the capabilities of any other computing platform.
Photograph of two gmon superconducting qubits and their tunable coupler developed by Charles Neill and Pedram Roushan.
These two publications introduce a realistic proposal for near-term quantum supremacy, and demonstrate a proof-of-principle version for the first time. We will continue to decrease the error rates and increase the number of qubits in quantum processors to reach the quantum supremacy frontier, and to develop quantum algorithms for useful near-term applications.

Source: Google AI Blog


Understanding Bias in Peer Review



In the 1600s, a series of practices came into being that are known collectively as the “scientific method.” These practices encoded verifiable experimentation as a path to establishing scientific fact. Scientific literature arose as a mechanism to validate and disseminate findings, and standards of scientific peer review developed as a means to control the quality of entrants into this literature. Over the course of the development of peer review, one key structural question remains unresolved to the current day: should the reviewers of a piece of scientific work be made aware of the identity of the authors? Those in favor argue that such additional knowledge may allow the reviewer to set the work in perspective and evaluate it more completely. Those opposed argue instead that the reviewer may form an opinion based on past performance rather than the merit of the work at hand.

Existing academic literature on this subject describes specific forms of bias that may arise when reviewers are aware of the authors. In 1968, Merton proposed the Matthew effect, whereby credit goes to the best-established researchers. More recently, Knobloch-Westerwick et al. proposed a Matilda effect, whereby papers from male-first authors were considered to have greater scientific merit than those from female-first authors. But with the exception of one classical study performed by Rebecca Blank in 1991 at the American Economic Review, there have been few controlled experimental studies of such effects on reviews of academic papers.

Last year we had the opportunity to explore this question experimentally, resulting in “Reviewer bias in single- versus double-blind peer review,” a paper that just appeared in the Proceedings of the National Academy of Sciences. Working with Professor Min Zhang of Tsinghua University, we performed an experiment during the peer review process of the 10th ACM Web Search and Data Mining Conference (WSDM 2017) to compare the behavior of reviewers under single-blind and double-blind review. Our experiment ran as follows:
  1. We invited a number of experts to join the conference Program Committee (PC).
  2. We randomly split these PC members into a single-blind cadre and a double-blind cadre.
  3. We asked all PC members to “bid” for papers they were qualified to review, but only the single-blind cadre had access to the names and institutions of the paper authors.
  4. Based on the resulting bids, we then allocated two single-blind and two double-blind PC members to each paper.
  5. Each PC member read his or her assigned papers and entered reviews, again with only single-blind PC members able to see the authors and institutions.
At this point, we closed our experiment and performed the remainder of the conference reviewing process under the single-blind model. As a result, we were able to assess the difference in bidding and reviewing behavior of single-blind and double-blind PC members on the same papers. We discovered a number of surprises.

Our first finding shows that compared to their double-blind counterparts, single-blind PC members tend to enter higher scores for papers from top institutions (the finding holds for both universities and companies) and for papers written by well-known authors. This suggests that a paper authored by an up-and-coming researcher might be reviewed more negatively (by a single-blind PC member) than exactly the same paper written by an established star of the field.

Digging a little deeper, we show some additional findings related to the “bidding process,” in which PC members indicate which papers they would like to review. We found that single-blind PC members (a) bid for about 22% fewer papers than their double-blind counterparts, and (b) bid preferentially for papers from top schools and companies. Finding (a) is especially intriguing; with author identities hidden, reviewers have less information to go on, arguably making the job of weighing the merit of each paper more difficult. Yet, the double-blind reviewers bid for more work, not less, than their single-blind counterparts. This suggests that double-blind reviewers become more engaged in the review process. Finding (b) is less surprising, but nonetheless enlightening: in the presence of author names and institutions, this information is incorporated into the reviewers’ bids. All else being equal, the odds that single-blind reviewers bid on papers from top institutions are about 15 percent above parity.

We also studied whether the actual or perceived gender of authors influenced the behavior of single-blind versus double-blind reviewers. Here the results are a little more nuanced. Compared to double-blind reviewers, we saw about a 22% decrease in the odds that a single-blind reviewer would give a female-authored paper a favorable review, but due to the smaller count of female-authored papers this result was not statistically significant. In an extended version of our paper, we consider our study as well as a range of other studies in the literature and perform a “meta-analysis” of all these results. From this larger pool of observations, the combined results do show a significant finding for the gender effect.

To conclude, we see that the practice of double-blind reviewing yields a denser landscape of bids, which may result in a better allocation of papers to qualified reviewers. We also see that reviewers who see author and institution information tend to bid more for papers from top institutions, and are more likely to vote to accept papers from top institutions or famous authors than their double-blind counterparts. This offers some evidence to suggest that a particular piece of work might be accepted under single-blind review if the authors are famous or come from top institutions, but rejected otherwise. Of course, the situation remains complex: double-blind review imposes an administrative burden on conference organizers, reduces the opportunity to detect several varieties of conflict of interest, and may in some cases be difficult to implement due to the existence of pre-prints or long-running research agendas that are well-known to experts in the field. Nonetheless, we recommend that journal editors and conference chairs carefully consider the merits of double-blind review.

Please take a look at our full paper for more details of our study.

Consistent Hashing with Bounded Loads



Running a large-scale web service, such as content hosting, necessarily requires load balancing — distributing clients uniformly across multiple servers such that none get overloaded. Further, it is desirable to find an allocation that does not change very much over time in a dynamic environment in which both clients and servers can be added or removed at any time. In other words, we need the allocation of clients to servers to be consistent over time.

In collaboration with Mikkel Thorup, a visiting researcher from the University of Copenhagen, we developed a new, efficient allocation algorithm for this problem with tight guarantees on the maximum load of each server, and studied it theoretically and empirically. We then worked with our Cloud team to implement it in Google Cloud Pub/Sub, a scalable event streaming service, and observed substantial improvement in the uniformity of the load allocation (in terms of the maximum load assigned to servers) while maintaining the consistency and stability objectives. In August 2016 we described our algorithm in the paper “Consistent Hashing with Bounded Loads”, and shared it on ArXiv for potential use by the broader research community.

Three months later, Andrew Rodland from Vimeo informed us that he had found the paper, implemented it in haproxy (a widely used piece of open source software), and used it for Vimeo's load balancing project. The results were dramatic: applying these algorithmic ideas helped them decrease the cache bandwidth by a factor of almost 8, eliminating a scaling bottleneck. He recently summarized this story in a blog post detailing his use case. Needless to say, we were excited to learn that our theoretical research had not only been put into practice, but had also proven useful and been open-sourced.

Background
While consistent hashing was developed to deal with load balancing in dynamic environments, a fundamental issue with all previously developed schemes is that, in certain scenarios, they may result in sub-optimal load balancing on many servers.

Additionally, both clients and servers may be added or removed periodically, and with such changes we do not want to move too many clients. Thus, while the dynamic allocation algorithm must always ensure proper load balancing, it should also aim to minimize the number of clients moved after each change to the system. Such allocation problems become even more challenging when we face hard constraints on the capacity of each server: each server has a capacity that its load may not exceed. Typically, we want capacities close to the average load.

In other words, we want to simultaneously achieve both uniformity and consistency in the resulting allocations. There is a vast amount of literature on solutions in the much simpler case where the set of servers is fixed and only the client set is updated, but in this post we discuss solutions that are relevant in the fully dynamic case where both clients and servers can be added and removed.

The Algorithm
We can think of the servers as bins and the clients as balls, matching the notation of well-studied balls-to-bins stochastic processes. The uniformity objective encourages all bins to have a load roughly equal to the average density (the number of balls divided by the number of bins). For some parameter ε, we set the capacity of each bin to either the floor or the ceiling of the average load times (1+ε). This extra capacity allows us to design an allocation algorithm that meets the consistency objective in addition to the uniformity property.
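As a small sketch of how such a capacity might be computed (the helper name and the particular choice of rounding below are ours, for illustration only):

```python
import math

def bin_capacity(num_balls, num_bins, eps, round_up=True):
    """Capacity of each bin: the average load scaled by (1 + eps),
    rounded either down or up (both choices are allowed)."""
    target = (1.0 + eps) * num_balls / num_bins
    return math.ceil(target) if round_up else math.floor(target)

# For example, 6 balls and 3 bins with eps = 0.2 and floor rounding
# gives a capacity of 2 per bin, the value used in the worked example below.
print(bin_capacity(6, 3, 0.2, round_up=False))  # 2
```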

Imagine a given range of numbers overlaid on a circle. We apply a hash function to balls and a separate hash function to bins to obtain numbers in that range that correspond to positions on that circle. We then allocate balls in a fixed order independent of their hash values (say, by their ID). Each ball moves clockwise from its position and is assigned to the first bin with spare capacity.
As an example, suppose 6 balls and 3 bins are assigned, using the two separate hash functions, to random locations on the circle, and assume the capacity of each bin is set to 2. We start allocating balls in increasing order of their ID values. Ball number 1 moves clockwise and goes to bin C. Ball number 2 goes to A. Balls 3 and 4 go to bin B. Ball number 5 goes to bin C. Then ball number 6 moves clockwise and hits bin B first. However, bin B has capacity 2 and already contains balls 3 and 4, so ball 6 keeps moving and reaches bin C, but that bin is also full. Finally, ball 6 ends up in bin A, which has a spare slot for it.
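Below is a minimal Python sketch of this allocation procedure. It is an illustration under simplifying assumptions rather than the production implementation: positions on the circle come from SHA-256, each bin gets a single point on the circle (real deployments typically place many virtual copies of each bin), and the function and variable names are ours.

```python
import hashlib
from bisect import bisect_left

def position(value, salt):
    """Hash a ball or bin identifier to a point on the unit circle [0, 1)."""
    digest = hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()
    return int(digest, 16) / 2**256

def allocate(ball_ids, bin_ids, capacity):
    """Assign each ball, in ID order, to the first bin clockwise with spare capacity."""
    ring = sorted((position(b, "bin"), b) for b in bin_ids)   # bins placed on the circle
    ring_positions = [pos for pos, _ in ring]
    load = {b: 0 for b in bin_ids}
    assignment = {}
    for ball in sorted(ball_ids):                             # fixed order, independent of hashes
        start = bisect_left(ring_positions, position(ball, "ball"))
        for step in range(len(ring)):                         # walk clockwise, wrapping around
            _, candidate = ring[(start + step) % len(ring)]
            if load[candidate] < capacity:                    # first bin with spare capacity wins
                assignment[ball] = candidate
                load[candidate] += 1
                break
    return assignment

# Six balls, three bins, capacity 2 per bin, mirroring the parameters of the worked example.
print(allocate(range(1, 7), ["A", "B", "C"], capacity=2))
```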

Upon any update to the system (ball or bin insertion/deletion), the allocation is recomputed to preserve the uniformity objective. The art of the analysis is to show that a small update (a small number of insertions and deletions) results in only minor changes to the state of the allocation, and therefore that the consistency objective is met. In our paper we show that every ball removal or insertion in the system results in O(1/ε²) movements of other balls. Most importantly, this upper bound is independent of the total number of balls or bins in the system: if the number of balls or bins is doubled, the bound does not change. An upper bound independent of the number of balls or bins leaves room for scalability, since the consistency objective is not violated as we move to bigger instances. Simulations of the number of movements (relocations) per update, when a bin/server is inserted or deleted, are shown below.
The red curve shows the average number of movements and the blue bars indicate the variance for different values of ε (the x-axis). The dashed curve is the upper bound suggested by our theoretical results, which fits nicely as a prediction of the actual number of movements. Furthermore, for any value of ε, we know the load of each bin is at most (1+ε) times the average load. Below we see the load distribution of bins for ε = 0.1, ε = 0.3, and ε = 0.9.
The distribution of loads for several values of ε. The load distribution is nearly uniform, covering the full range of loads from 0 to (1+ε) times the average, with many bins having a load equal to (1+ε) times the average.
As one can see, there is a tradeoff: a lower ε helps with uniformity but not with consistency, while larger ε values help with consistency at the cost of uniformity. A lower ε also ensures that many loads sit exactly at the hard capacity limit of (1+ε) times the average, while the rest follow a decaying distribution.
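To make the movement-counting experiment concrete, here is a toy version that reuses the allocate sketch above: it removes one server, recomputes the allocation, and counts how many balls moved. Note that this naive from-scratch recomputation is only an illustration; the update procedure analyzed in the paper is what achieves the O(1/ε²) bound, and the numbers printed here are not the simulation results shown in the figures.

```python
import math

balls = list(range(1, 101))
servers_before = [f"server-{i}" for i in range(10)]
servers_after = servers_before[:-1]                  # one server removed

eps = 0.3
cap_before = math.ceil((1 + eps) * len(balls) / len(servers_before))
cap_after = math.ceil((1 + eps) * len(balls) / len(servers_after))

before = allocate(balls, servers_before, cap_before)
after = allocate(balls, servers_after, cap_after)

moved = sum(1 for b in balls if before[b] != after[b])
print(f"{moved} of {len(balls)} balls moved after removing one server")
```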

When providing content hosting services, one must be ready to face a variety of instances with different characteristics. This consistent hashing scheme is ideal for such scenarios as it performs well even for worst-case instances.

While our internal results are exciting, we are even more pleased that the broader community found our solution useful enough to open-source, allowing anyone to use this algorithm. If you are interested in further details of this research, please see the paper on ArXiv, and stay tuned for more research from the NYC Algorithms Team!

Acknowledgements:
We would like to thank Alex Totok, Matt Gruskin, Sergey Kondratyev and Haakon Ringberg from the Google Cloud Pub/Sub team, and of course Mikkel Thorup for his invaluable contributions to this paper.