COVID-19: $6.5 million to help fight coronavirus misinformation

Health authorities have warned that an overabundance of information can make it harder for people to obtain reliable guidance about the coronavirus pandemic.
Helping the world make sense of this information requires a broad response, involving scientists, journalists, public figures, technology platforms and many others. Here are some ways we plan to help.
Supporting coronavirus fact-checking and verification efforts
We’re providing $6.5 million in funding to fact-checkers and nonprofits fighting misinformation around the world, with an immediate focus on coronavirus.
Collaboration is a crucial component of journalism’s response to a story as complicated and all-encompassing as COVID-19. For this reason, the Google News Initiative (GNI) is stepping up its support for First Draft. The nonprofit is providing an online resource hub, dedicated training and crisis simulations for reporters covering COVID-19 all over the globe. First Draft is also using its extensive CrossCheck network to help newsrooms respond quickly and address escalating content that is causing confusion and harm. We’re also renewing our support for the collaborative verification project Comprova in Brazil.
As fact-checkers address heightened demand for their work, we are providing immediate support to several organizations. Full Fact and Maldita.es will coordinate efforts in Europe focused on countries with the most cases (Italy, Spain, Germany, France and the United Kingdom) to amplify experts, share trends, and help reduce the spread of harmful false information. In Germany, CORRECTIV will step up its efforts to engage citizens in the fight against misinformation.
LatamChequea, coordinated by Chequeado, is providing a single hub to highlight the work of 21 fact-checking organizations across 15 countries in the Spanish-speaking world and Latin America. With our support, PolitiFact and Kaiser Health News will expand their health fact-checking partnership to focus on COVID-19 misinformation. 
Increasing access to data, scientific expertise and fact checks
Access to primary expert sources during an evolving public health crisis is both challenging and fundamental for journalists covering the story. To make this easier, we’re providing funding to SciLine, based at the American Association for the Advancement of Science, and the Australian Science Media Centre, creators of Scimex.org. We’re supporting the creation of a database for reporters developed by the journalism technology nonprofit Meedan in partnership with public health experts.
The GNI is also supporting the JSK Journalism Fellowships at Stanford University and Stanford's Big Local News group to create a global data resource for reporters working on COVID-19. The new project will collate data from around the world and help journalists tell data-driven stories that have impact in their communities.
The International Fact-Checking Network (IFCN) continues to advocate for fact-checkers worldwide; our renewed support will boost their efforts to uphold best practices in the fact-checking field and showcase the work of the CoronaVirusFacts alliance. In addition, Science Feedback will conduct a network analysis using the hundreds of COVID-19 fact checks published globally to track the spread of related misinformation.
We also want to highlight fact-check articles that address potentially harmful health misinformation more prominently to our users, and we’re experimenting with how best to include a dedicated fact check section in the COVID-19 Google News experience.
Providing insights to fact-checkers, reporters and health authorities
So that reporters can understand and explain how the world is searching for the virus, we’ve made Google Trends data readily available in localized pages with embeddable visualizations. 
We’re also making more local Google Trends data available for journalists, health organizations and local authorities to help them understand people's information needs around the world.
Embedded Google Trends visualizations: questions in Search on coronavirus, in cities and around the world.
Fact-checkers and health authorities need help to identify topics that people are searching for and where there might be a gap in the availability of good information online. Unanswered user questions—such as “what temperature kills coronavirus?”—can provide useful insights to fact-checkers and health authorities about content they may want to produce. 

To help, we’re supporting Data Leads in partnership with BOOM Live in India and Africa Check in Nigeria to leverage data from Question Hub. This will be complemented by an effort to train 1,000 journalists across India and Nigeria to spot health misinformation.

Our online resources are being updated to support the vital work journalists are doing. The GNI Training Center has tools for data journalism and verification in 16 languages, and our global team of Teaching Fellows is delivering workshops entirely online in 10 languages.

Today's announcement is one of several efforts we’re working on to support those covering this pandemic. We look forward to sharing more soon. 

Posted by Alexios Mantzarlis, News and Information Credibility Lead, Google News Lab

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 81 (81.0.4044.91) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Ben Mason
Google Chrome

Announcing the 2020 Image Matching Benchmark and Challenge



Reconstructing 3D objects and buildings from a series of images is a well-known problem in computer vision, known as Structure-from-Motion (SfM). It has diverse applications in photography and cultural heritage preservation (e.g., allowing people to explore the sculptures of Rapa Nui in a browser) and powers many services across Google Maps, such as the 3D models created from StreetView and aerial imagery. In these examples, images are usually captured by operators under controlled conditions. While this ensures homogeneous data with a uniform, high-quality appearance in the images and the final reconstruction, it also limits the diversity of sites captured and the viewpoints from which they are seen. What if, instead of using images from tightly controlled conditions, one could apply SfM techniques to better capture the richness of the world using the vast amounts of unstructured image collections freely available on the internet?

In order to accelerate research into this topic, and how to better leverage the volume of data already publicly available, we present “Image Matching across Wide Baselines: From Paper to Practice”, a collaboration with UVIC, CTU and EPFL that presents a new public benchmark to evaluate methods for 3D reconstruction. Following on the results of the first Image Matching: Local Features and Beyond workshop held at CVPR 2019, this project now includes more than 25k images, each of which includes accurate pose information (location and orientation). This data is publicly available, along with the open-sourced benchmark, and is the foundation of the 2020 Image Matching Challenge to be held at CVPR 2020¹.

Recovering 3D Structure In the Wild
Google Maps already uses images donated by users to inform visitors about popular locations or to update business hours. However, using this type of data to build 3D models is much more difficult, since donated photos have a wide variety of viewpoints, lighting and weather conditions, occlusions from people and vehicles, and the occasional user-applied filters. The examples below highlight the diversity of images for the Trevi Fountain in Rome.
Some example images sampled from the Image Matching Challenge dataset, showing different perspectives of the Trevi Fountain.
In general, the use of SfM to reconstruct 3D scenes starts by identifying which parts of the images capture the same physical points of a scene, the corners of a window, for instance. This is achieved using local features, i.e., salient locations in an image that can be reliably identified across different views. They contain short description vectors (model representations) that capture the appearance around the point of interest. By comparing these descriptors, one can establish likely correspondences between the pixel coordinates of image locations across two or more images, and recover the 3D location of the point by triangulation. Both the pose from where the images were captured as well as the 3D location of the physical points observed (for example, identifying where the corner of the window is relative to the camera location) can then be jointly estimated. Doing this over many images and points allows one to obtain very detailed reconstructions.
A 3D reconstruction generated from over 3000 images, including those from the previous figure.
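To make the descriptor-matching step concrete, here is a minimal NumPy sketch of nearest-neighbor matching with Lowe's ratio test, run on synthetic descriptors. This is an illustration of the general technique, not the benchmark's actual pipeline:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching with Lowe's ratio test.

    desc_a, desc_b: (N, D) and (M, D) arrays of local feature descriptors
    from two images. Returns (i, j) index pairs deemed likely correspondences.
    """
    # Pairwise Euclidean distances between every descriptor in A and B.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(dists):
        j1, j2 = np.argsort(row)[:2]  # two closest candidates in image B
        # Keep the match only if it is clearly better than the runner-up;
        # ambiguous matches (e.g., on repeated structure) are discarded.
        if row[j1] < ratio * row[j2]:
            matches.append((i, j1))
    return matches

# Toy data: descriptors in "image B" are noisy copies of those in "image A".
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(50, 128))
desc_b = desc_a + rng.normal(scale=0.05, size=(50, 128))
matches = match_descriptors(desc_a, desc_b)
print(len(matches))
```

In a real pipeline the descriptors would come from a detector such as SIFT, and the surviving matches would feed triangulation and pose estimation.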
The challenge for this approach is the risk of having incorrect correspondences due, for example, to repeated structure such as the windows of the building, that may be very similar to each other, or transient elements that do not persist across images, such as the crowds admiring the Trevi Fountain. One way to filter these out is by reasoning about relations between correspondences using multiple images. An additional, even more powerful approach is to design better methods for identifying and isolating local features, for instance, by ignoring points on transient elements such as people. But to better understand the shortcomings of existing local feature algorithms for SfM and to provide insight into promising directions for future research, it is necessary to have a reliable benchmark to measure performance.
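The sample-and-verify idea behind filtering out incorrect correspondences can be illustrated with a toy RANSAC loop. Production SfM pipelines estimate epipolar geometry between views; the sketch below fits a plain 2D translation instead, purely to show the mechanics of hypothesizing a model from a minimal sample and keeping the hypothesis with the most inliers:

```python
import numpy as np

def ransac_translation(pts_a, pts_b, iters=200, tol=1.0, seed=0):
    """Toy RANSAC: estimate a 2D translation mapping pts_a onto pts_b.

    Real pipelines fit fundamental/essential matrices, but the
    sample-score-keep-inliers loop is the same idea.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(pts_a))   # one correspondence fixes a translation
        t = pts_b[k] - pts_a[k]
        residuals = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = residuals < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers for the final estimate.
    t = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return t, best_inliers

# 80 correct correspondences offset by (5, -3), plus 20 gross outliers
# standing in for mismatches on repeated structure or transient objects.
rng = np.random.default_rng(1)
pts_a = rng.uniform(0, 100, size=(100, 2))
pts_b = pts_a + np.array([5.0, -3.0])
pts_b[80:] = rng.uniform(0, 100, size=(20, 2))  # outliers
t, inliers = ransac_translation(pts_a, pts_b)
print(t.round(2), int(inliers.sum()))
```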

A Benchmark for Evaluating Local Features for 3D Reconstruction
Local features power many Google services, such as Image Search and product recognition in Google Lens, and are also used in mixed reality applications, like Google Maps' Live View, which relies on traditional, handcrafted local features. Designing better algorithms to identify and describe local features will lead to better performance overall.

Comparing the performance of local feature algorithms, however, has been difficult, because it is not obvious how to collect "ground-truth" data for this purpose. Some computer vision tasks rely on crowdsourcing: Google's OpenImages dataset labels "objects" with bounding boxes or pixel masks by combining machine learning techniques with human annotators. This is not possible in this case, as it is not known a priori what constitutes a "good" local feature, making labelling infeasible. Additionally, existing benchmarks such as HPatches are often small or limited to a narrow range of transformations, which can bias the evaluation.

What matters is the quality of the reconstruction, and that benchmarks reflect real-world scale and challenges in order to highlight opportunities for developing new approaches. To this end, we have created the Image Matching Benchmark, the first benchmark to include a large dataset of images for training and evaluation. The dataset includes more than 25k images (sourced from the public YFCC100m dataset), each of which has been augmented with accurate pose information (location and orientation). We obtain this "pseudo" ground-truth from large-scale SfM (100s-1000s of images, for each scene), which provides accurate and stable poses, and then run our evaluation on smaller subsets (10s of images), a much more difficult problem. This approach does not require expensive sensors or human labelling, and it provides better proxy metrics than previous benchmarks, which were restricted to small and homogeneous datasets.
Visualizations from our benchmark. We show point-to-point matches generated by different local feature algorithms. Left to right: SIFT, HardNet, LogPolarDesc, R2D2. For details, please refer to our website.
We hope this benchmark, dataset and challenge helps advance the state of the art in 3D reconstruction with heterogeneous images. If you’re interested in participating in the challenge, please see the 2020 Image Matching Challenge website for more details.

Acknowledgements
The benchmark is joint work by Yuhe Jin and Kwang Moo Yi (University of Victoria), Anastasiia Mishchuk and Pascal Fua (EPFL), Dmytro Mishkin and Jiří Matas (Czech Technical University), and Eduard Trulls (Google). The CVPR workshop is co-organized by Vassileios Balntas (Scape Technologies/Facebook), Vincent Lepetit (Ecole des Ponts ParisTech), Dmytro Mishkin and Jiří Matas (Czech Technical University), Johannes Schönberger (Microsoft), Eduard Trulls (Google), and Kwang Moo Yi (University of Victoria).

1 Please note that as of April 2, 2020, CVPR is currently on track, despite the COVID-19 pandemic. Challenge information will be updated as the situation develops. Please see the 2020 Image Matching Challenge website for details.

Source: Google AI Blog


Beta Channel Update for Desktop

The beta channel has been updated to 81.0.4044.92 for Windows, Mac, and Linux.

A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Prudhvikumar Bommana
Google Chrome

Stable Channel Update for Desktop

The stable channel has been updated to 80.0.3987.163 for Windows, Mac, and Linux, which will roll out over the coming days/weeks.



A list of all changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Krishna Govind
Google Chrome

Resources to help Kiwi businesses manage through uncertainty caused by COVID-19



Small businesses are at the heart of New Zealand’s economy and local communities. So while COVID-19 has created unprecedented challenges for Kiwi businesses, we want to make sure the best of Google’s business resources and tools are readily available and helpful to get them through this time.

Today, Google New Zealand has launched Google for Small Business (g.co/smallbiz-covid19), a new online hub to provide helpful advice and resources to small and medium businesses as they navigate challenges caused by the spread of COVID-19.

The resources are designed to help businesses communicate effectively with their customers and employees, and maintain business operations and continuity planning in response to fast changing external conditions.

It includes step-by-step advice and links so business owners can adjust their existing arrangements as needed - for example, in response to having to temporarily close shopfront operations or moving employees to remote working arrangements.

The launch of this site closely follows an announcement by Google’s CEO Sundar Pichai, who has committed more than US$800 million globally to support small- and medium-sized businesses (SMBs), health organisations and governments, and health workers on the front line of this global pandemic.

It also builds on steps already taken by Google including making video conferencing and productivity tools available free of charge for customers working remotely and for educational purposes, and providing online tips to small businesses.

A summary of the tips and resources is below:

Keep your customers informed
  • If your business or one of your locations has temporarily closed, mark the location as temporarily closed on Google Maps and Search.
  • If you have moved business operations online, or to takeaway or delivery, edit your Business Profile on Google so customers know how to buy from you.
  • Use Posts to tell customers on your Business Profile what is happening and if there are changes to how you are operating - for example, if you are now offering online sales or delivery or special offers.
  • If you have a shopfront which is closed but you’re still taking phone calls, update your business phone number to your mobile phone, so you can answer business calls remotely.
  • Set an email auto-reply to share your latest updates with customers - for example, if you are temporarily closed, or taking phone, online or delivery orders.

Continue to adapt to new customer behaviour
  • Ask what customers need from a business like yours right now - consider reaching out directly via your social media channels, or using tools like Google Trends and Google Alerts for insight into your local market or industry.
  • If you do not have a website for your business, start by getting a domain and exploring options for building a website. Your website can be simple – just make sure you include key information about your business and how potential customers can contact you.
  • Consider starting a free YouTube channel for your business. You can create videos to introduce your business, showcase what’s great about your products or services or teach customers how to do something new.

Run your business remotely
  • Help your team work effectively from home with these tools and resources.
  • Make a business continuity plan, and share it with employees via an email address they can access outside of the office.
  • Collaborate with your co-workers using online tools and platforms - for example using a shared document, a quick conference call, or by creating an email list or a chat room.
  • Make sure you’re able to access important documents from anywhere by uploading them to the Cloud through tools like Google Drive or downloading to your mobile phone or computer for offline access.
  • If you’re using Chromebooks, ensure they have the right policies in place to access company resources from home and to keep devices and data secure.

Adjust your advertising (if necessary)

  • Edit your ads as needed to let customers know whether you're open for business and if you offer helpful services like expedited shipping.
  • Pause campaigns if your product availability is impacted by supply chain issues, increased demand, or other restrictions.
  • If your business relies on customers from countries most affected by the virus, consider prioritising your ad budget to other locations.


Join the beta for the new AdMob API

Today we’re announcing the open beta release of the AdMob API v1. It offers a new and improved way to interact with AdMob reporting data programmatically.

Built with app publishers in mind, the new AdMob API will replace the need for you to use the AdSense API and provide enhanced capabilities to query AdMob reporting data. For example, unlike the AdSense API, which uses different definitions of certain ads metrics for app publishers, the AdMob API includes metrics that are consistent with the AdMob UI.

The AdMob API v1 beta release offers the following benefits for app publishers:

  • Receive metrics that are more accurate and consistent with the AdMob UI.
  • Gain access to mediation reporting programmatically.
  • Integrate newer technologies like JSON REST into your product sooner.
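As a sketch of what the new JSON REST surface might look like, the request body below follows the general shape of the v1 networkReport.generate method; treat the specific dimension and metric names as illustrative assumptions drawn from the published reference, not a tested integration:

```python
import json

# Hypothetical network report request for the AdMob API v1 (REST).
# Field names follow the v1 networkReport.generate reference; exact
# values here are an illustrative sketch.
report_request = {
    "reportSpec": {
        "dateRange": {
            "startDate": {"year": 2020, "month": 3, "day": 1},
            "endDate": {"year": 2020, "month": 3, "day": 31},
        },
        # Break results down by day and app.
        "dimensions": ["DATE", "APP"],
        # Metrics defined consistently with the AdMob UI.
        "metrics": ["ESTIMATED_EARNINGS", "IMPRESSIONS", "MATCH_RATE"],
    }
}

# The body would be POSTed to (publisher_id is a placeholder):
#   https://admob.googleapis.com/v1/accounts/{publisher_id}/networkReport:generate
print(json.dumps(report_request, indent=2))
```

The Getting Started guide and client libraries linked below handle authentication and the actual HTTP call.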

We will continue to make improvements to the AdMob API and we encourage you to join the open beta now and provide feedback to influence the product roadmap before the general release.

How can I join the beta?

The beta is available to all AdMob users. You can start with the Getting Started guide or use the client libraries that we have created for you. Additional client library samples will be coming soon.

Where can I learn more?

If you have any questions or need additional help, please contact us via the forum. We look forward to hearing your feedback.

A Step Towards Protecting Patients from Medication Errors



While no doctor, nurse, or pharmacist wants to make a mistake that harms a patient, research shows that 2% of hospitalized patients experience serious preventable medication-related incidents that can be life-threatening, cause permanent harm, or result in death. There are many factors contributing to medical mistakes, often rooted in deficient systems, tools, processes, or working conditions, rather than the flaws of individual clinicians (IOM report). To mitigate these challenges, one can imagine a system more sophisticated than the current rules-based error alerts provided in standard electronic health record software. The system would identify prescriptions that looked abnormal for the patient and their current situation, similar to a system that produces warnings for atypical credit card purchases on stolen cards. However, determining which medications are appropriate for any given patient at any given time is complex — doctors and pharmacists train for years before acquiring the skill. With the widespread use of electronic health records, it may now be feasible to use this data to identify normal and abnormal patterns of prescriptions.

In an initial effort to explore solutions to this problem, we partnered with UCSF's Bakar Computational Health Sciences Institute to publish “Predicting Inpatient Medication Orders in Electronic Health Record Data” in Clinical Pharmacology and Therapeutics, which evaluates the extent to which machine learning could anticipate normal prescribing patterns by doctors, based on electronic health records. Similar to our prior work, we used comprehensive clinical data from de-identified patient records, including the sequence of vital signs, laboratory results, past medications, procedures, diagnoses and more. Based on the patient’s current clinical state and medical history, our best model was able to anticipate physicians’ actual prescribing decisions three quarters of the time.

Model Training
The dataset used for model training included approximately three million medication orders from over 100,000 hospitalizations. It used retrospective electronic health record data, which was de-identified by randomly shifting dates and removing identifying portions of the record in accordance with HIPAA, including names, addresses, contact details, record numbers, physician names, free-text notes, images, and more. The data was not joined or combined with any other data. All research was done using the open-sourced Fast Healthcare Interoperability Resources (FHIR) format, which we’ve previously used to make healthcare data more effective for machine learning. The dataset was not restricted to a particular disease or therapeutic area, which made the machine learning task more challenging, but also helped to ensure that the model could identify a larger variety of conditions; e.g. patients suffering from dehydration require different medications than those with traumatic injuries.

We evaluated two machine learning models: a long short-term memory (LSTM) recurrent neural network and a regularized, time-bucketed logistic model, which are commonly used in clinical research. Both were compared to a simple baseline that ranked the most frequently ordered medications based on a patient’s hospital service (e.g., General Medical, General Surgical, Obstetrics, Cardiology, etc.) and amount of time since admission. Each time a medication was ordered in the retrospective data, the models ranked a list of 990 possible medications, and we assessed whether the models assigned high probabilities to the medications actually ordered by doctors in each case.

As an example of how the model was evaluated, imagine a patient who arrived at the hospital with signs of an infection. The model reviewed the information recorded in the patient’s electronic health record — a high temperature, elevated white blood cell count, quick breathing rate — and estimated how likely it would be for different medications to be prescribed in that situation. The model’s performance was evaluated by comparing its ranked choices against the medications that the physician actually prescribed (in this example, the antibiotic vancomycin and sodium chloride solution for rehydration).
Based on a patient’s medical history and current clinical characteristics, the model ranks the medications a physician is most likely to prescribe.
Findings
Our best-performing model was the LSTM model, a class of models particularly effective for handling sequential data, including text and language. These models are capable of capturing the ordering and time recency of events in the data, making them a good choice for this problem.

Nearly all (93%) top-10 lists contained at least one medication that would be ordered by clinicians for the given patient within the next day. Fifty-five percent of the time, the model correctly placed medications prescribed by the doctor as one of the top-10 most likely medications, and 75% of ordered medications were ranked in the top-25. Even for ‘false negatives’ — cases where the medication ordered by doctors did not appear among the top-25 results — the model highly ranked a medication in the same class 42% of the time. This performance was not explained by the model simply predicting previously prescribed medications. Even when we blinded the model to previous medication orders, it maintained high performance.
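Top-k metrics like these can be computed directly from the model's ranked scores. The sketch below uses made-up probabilities over a handful of candidate medications, not the study's data:

```python
import numpy as np

def top_k_hit_rate(scores, ordered, k):
    """Fraction of order events whose true medication is in the model's top-k.

    scores:  (n_orders, n_meds) array of per-order model probabilities.
    ordered: (n_orders,) indices of the medication actually ordered.
    """
    # Indices of the k highest-scoring medications for each order event.
    top_k = np.argsort(-scores, axis=1)[:, :k]
    hits = (top_k == ordered[:, None]).any(axis=1)
    return hits.mean()

# Toy example: 4 order events, 6 candidate medications (the study ranked 990).
scores = np.array([
    [0.10, 0.60, 0.10, 0.10, 0.05, 0.05],
    [0.30, 0.20, 0.25, 0.10, 0.10, 0.05],
    [0.05, 0.05, 0.10, 0.10, 0.20, 0.50],
    [0.40, 0.30, 0.10, 0.10, 0.05, 0.05],
])
ordered = np.array([1, 2, 5, 3])
print(top_k_hit_rate(scores, ordered, k=2))
```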

What Does This Mean for Patients and Clinicians?
It’s important to remember that models trained this way reproduce physician behavior as it appears in historical data, and have not learned optimal prescribing patterns, how these medications might work, or what side effects might occur. However, learning ‘normal’ is a starting point to eventually spot abnormal, potentially dangerous orders. In our next phase of research, we will examine under which circumstances these models are useful for finding medication errors that could harm patients.

The results from this exploratory work are early first steps towards testing the hypothesis that machine learning can be applied to build systems that prevent mistakes and help to keep patients safe. We look forward to collaborating with doctors, pharmacists, other clinicians, and patients as we continue research to quantify whether models like this one are capable of catching errors and keeping patients safe in the hospital.

Acknowledgements
We would like to thank Atul Butte (UCSF), Claire Cui, Andrew Dai, Michael Howell, Laura Vardoulakis, Yuan (Emily) Xue, and Kun Zhang for their contributions towards the research work described in this post. We’d additionally like to thank members of our broader research team who have assisted in the development of analytical tools, data collection, maintenance of research infrastructure, assurance of data quality, and project management: Gabby Espinosa, Gerardo Flores, Michaela Hardt, Sharat Israni (UCSF), Jeff Love (UCSF), Dana Ludwig (UCSF), Hong Ji, Svetlana Kelman, I-Ching Lee, Mimi Sun, Patrik Sundberg, Chunfeng Wen, and Doris Wong.

Source: Google AI Blog


Chromebook accessibility tools for distance learning

Around the world, 1.5 billion students are now adjusting to learning from home. For students with disabilities, this adjustment is even more difficult without hands-on classroom instruction and support from teachers and learning specialists.

For educators and families using Chromebooks, there are a variety of built-in accessibility features to customize students’ learning experience and make them even more helpful. We’ve put together a list of some of these tools to explore as you navigate at-home learning for students with disabilities.

Supporting students who are low vision

To help students see screens more easily, you can find instructions for locating and turning on several Chromebook accessibility features in this Chromebook Help article. Here are a few examples of things you can try, based on students’ needs:

  • Increase the size of the cursor, or increase text size for better visibility. 

  • Add a highlighted circle around the cursor when moving the mouse, text caret when typing, or keyboard-focused item when tabbing. These colorful rings appear when the items are in motion to draw greater visual focus, and then fade away.

  • For students with light sensitivity or eye strain, you can turn on high-contrast mode to invert colors across the Chromebook (or add this Chrome extension for web browsing in high contrast).

  • Increase the size of browser or app content, or make everything on the screen—including app icons and Chrome tabs—larger for greater visibility. 

  • For higher levels of zoom, try the fullscreen or docked magnifiers in Chromebook accessibility settings. The fullscreen magnifier zooms the entire screen, whereas the docked magnifier makes the top one-third of the screen a magnified area. Learn more in this Chromebook magnification tutorial.

Helping students read and understand text

Features that read text out loud can be useful for students with visual impairments, learning and processing challenges, or even students learning a new language.

  • Select-to-speak lets students hear the text they choose on-screen spoken out loud, with word-by-word visual highlighting for better audio and visual connection.

  • With ChromeVox, the built-in screen reader for Chromebooks, students can navigate the Chromebook interface using spoken feedback or braille. To hear whatever text is under the cursor, turn on Speak text under the mouse in ChromeVox options. This is most beneficial for students who have significant vision loss.

  • Add the Read&Write Chrome extension from Texthelp for spelling and grammar checks, talking and picture dictionaries, text-to-speech, and additional reading and writing supports, all in one easy-to-use toolbar.

  • For students with dyslexia, try the OpenDyslexic Font Chrome extension to replace web page fonts with a more readable font. Or use the BeeLine Reader Chrome extension to color-code text to reduce eye strain and help students better track from one line of text to the next. You can also use the Thomas Jockin font in Google Docs, Sheets and Slides.

Guiding students with writing challenges or mobility impairments

Students can continue to develop writing skills while they’re learning from home.

  • Students can use their voice to enter text by enabling dictation in Chromebook accessibility settings, which works in edit fields across the device. If dictating longer assignments, students can also use voice typing in Google Docs to access a rich set of editing and formatting voice commands. Dictating writing assignments can also be very helpful for students who get a little stuck and want to get thoughts flowing by speaking instead of typing. 

  • Students with mobility impairments can use features like the on-screen keyboard to type using a mouse or pointer device, or automatic clicks to hover over items to click or scroll.

  • Try the Co:Writer Chrome extension for word prediction and completion, as well as excellent grammar help. Don Johnston is offering free access to this and other eLearning tools. Districts, schools, and education practitioners can submit a request for access.

How to get started with Chromebook accessibility tools

We just shared a 12-part video series with training for G Suite and Chromebook accessibility features, made by teachers for teachers. These videos highlight teachers’ experience using these features in the classroom, as well as which types of diverse learners each feature benefits. For more, you can watch these videos from the Google team, read our G Suite accessibility user guide, or join a Google Group to ask questions and get real-time answers. To find great accessibility apps and ideas on how to use them, check out the Chromebook App Hub, and for training, head to the Teacher Center.


We’re also eager to hear your ideas—leave your thoughts in this Google Form and help educators benefit from your experience.

Become a Developer Student Club Lead

Posted by Erica Hanson, Global Program Lead, Developer Student Clubs

Calling all student developers: If you’re someone who wants to lead, is passionate about technology, loves problem-solving, and is driven to give back to your community, then Developer Student Clubs has a home for you. Interest forms for the upcoming 2020-2021 academic year are now available. Ready to dive in? Get started at goo.gle/dsc-leads.

Want to know more? Check out these details below.

Image description: People holding up a Developer Student Clubs sign

What are Developer Student Clubs?

Developer Student Clubs (DSC) are university based community groups for students interested in Google developer technologies. With programs that meet in person and online, students from all undergraduate and graduate programs with an interest in growing as a developer are welcome. By joining a DSC, students grow their knowledge in a peer-to-peer learning environment and build solutions for local businesses and their community.

Why should I join?

- Grow your skills as a developer with training content from Google.

- Think of your own project, then lead a team of your peers to scale it.

- Build prototypes and solutions for local problems.

- Participate in a global developer competition.

- Receive access to select Google events and conferences.

- Gain valuable experience.

Is there a Developer Student Club near me?

Developer Student Clubs are now in 68+ countries with 860+ groups. Find a club near you, or learn how to start your own here.

When do I need to submit the interest form?

You may express interest through the form until May 15th, 11:59pm PST. Get started here.

Make sure to learn more about our program criteria.

Our DSC Leads are working on meaningful projects around the world. Watch this video of how one lead worked to protect her community from dangerous floods in Indonesia. Similarly, read this story of how another lead helped modernize healthcare in Uganda.

We’re looking forward to welcoming a new group of leads to Developer Student Clubs. Have a friend who you think is a good fit? Pass this article along. Wishing all developer students the best on the path towards building great products and community.

Submit interest form here.



*Developer Student Clubs are student-led independent organizations, and their presence does not indicate a relationship between Google and the students' universities.