Only nine percent of C-level positions—and six percent of CEOs—at European startups are held by women. Of all the funds raised by European venture capital-backed companies in 2018, a staggering 93 percent went to all-male founding teams. To combat this, last year Google for Startups introduced the first Women Founders Residency at our Campus in London—one of seven Campuses around the world—to back women-led startups using technology to tackle key social issues. Founders receive access to Google products, resources, and mentoring to level the playing field for startup success. The program proved so successful that we are now accepting applications for the second Women Founders cohort.
To learn more about the Google for Startups Residency, we chatted with Elina Naydenova: biomedical engineer, data scientist and founder of Feebris, a healthcare startup that graduated from Campus Residency in 2019. Not based in the UK? Explore other Google for Startups places and programs for founders of all backgrounds at startup.google.com.
What inspired you to found Feebris? What problem are you trying to solve?
Healthcare should be a human right; yet, millions of people can’t get the care they need, when they need it. It’s unacceptable that in 2019, we can do our communications, our banking, our navigation, our shopping at a touch of a button, but still nearly one million children die of pneumonia because it gets diagnosed too late.
When I realized these deaths can be avoided through early diagnosis, I became obsessed with solving the problem. We set up Feebris so that the most vulnerable patients—children and the elderly—can diagnose pneumonia early. The Feebris AI platform lets anyone capture and interpret important health information in order to identify disease early and monitor conditions in the community. Feebris algorithms paired with sensors, such as digital stethoscopes, can be used by anyone, such as a teacher or a parent, in any remote area to detect issues early, avoid complications and prevent hospitalization.
How did Google for Startups Residency help you achieve your goals?
The most valuable training we received from Residency was how to implement an Objectives and Key Results (OKRs) framework for our startup. When we started Residency, we were going through due diligence with investors, so we worked with a senior Googler to set clear goals. This gave our investors confidence in our ability to map out our journey and identify appropriate milestones, and we went on to close our seed round of £1.1 million. Striking a balance between structure and agility is tremendously important in tech, and even more so for a startup. Residency gave us the right tools to forge a framework that we continue to follow and adapt as we evolve.
Second, the pool of expertise and deep knowledge that Google offers to the Residency startups is second to none. We’ve been connected with leading experts in technologies like TensorFlow and ChromeOS to help us develop core product functionality and our technical infrastructure.
Third, as a health technology startup, credibility is hugely important as we grow our footprint with healthcare providers. Residency provided us with a public platform to share our story and build awareness for the work we are doing, from public speaking opportunities to media articles.
What does Residency offer that is different from a traditional accelerator or other program you've attended?
Support at Campus is personalized to your needs and led by people who have successfully launched and scaled startups. Unlike the one-size-fits-all classroom programs, Residency is focused on unlocking opportunities and removing barriers for each business individually.
What does Google 1:1 mentorship offer you specifically? What were the most helpful takeaways?
Our Google mentor, Vitor Rodriguez, was generous with his time and advice. He has built a career at Google and also worked in a startup, so he understood the challenges we faced. Vitor spent hours with us, thinking through software architecture options and nurturing our ability to make scalable decisions. Vitor was our conduit into the immense pool of Google knowledge. He helped us analyze the problems that we faced and connected us with domain experts who hold essential insights to reach a solution. Vitor also taught us how to conduct highly technical interviews and cut through the wall of jargon that candidates build to reach a true evaluation of their abilities.
Mentorship also helped us recruit some of our key hires. We went in as a team of two, and by the end had grown to six. The Googlers we worked with during Residency helped us structure evaluation criteria and even conducted technical interviews with us, proving fundamental to the recruitment process.
What advice would you want to share with other founders?
Prioritize hiring, even when you are not hiring. As a founder, finding the right people is one of the most important jobs you have. But it can take a long time and you don’t want to feel rushed and get it wrong. Over time, build relationships with people you like and admire because they might become your future dream team.
Source: G Suite Updates Blog
What’s changing
We’re enabling enhanced desktop security for Windows with a new beta. This will allow you to manage and secure Windows 10 devices through the Admin console, just as you do for Android, iOS, Chrome, and Jamboard devices today. It will also enable SSO so users can more easily access G Suite and other SSO-enabled applications on Windows 10 devices.
With these new controls G Suite admins can:
- Enable their organization to use existing G Suite account credentials to log in to Windows 10 devices, and easily access apps and services with SSO
- Protect user accounts with anti-phishing, anti-hijacking, and suspicious login detection technologies
- Ensure that all Windows 10 devices used to access G Suite are updated, secure, and within compliance
- Perform admin actions, such as wiping a device and pushing device configuration updates, to Windows 10 devices from the cloud without specific network requirements
Sign up for the beta here.
Why you’d use it
Automatic device registration, the ability to secure all of your devices in a single Admin console, and cloud-based policy and device configuration deployment will simplify device management and security for your organization. Additionally, the ability to remotely wipe devices can help increase your organization’s data security.
This also makes life easier for users by reducing the hurdles and logins needed to access applications and get work done. Users need to log in just once to their Windows 10 device using their G Suite credentials, and they’ll be able to access Google apps and any other enterprise cloud applications with SSO enabled without further logins.
How to get started
- Admins: Learn more and sign up for the beta here.
- End users: No action required until admins activate the beta.
Set policies, push configurations to devices, and wipe devices as needed
Admins can deploy policies and device configuration updates from the cloud, removing any network or other restraints for installing these updates on user devices. Policies and updates that can be applied by admins include BitLocker, Windows Update, and desktop customization. Additionally, admins can block or wipe devices if needed from the device page in the Admin console.
Availability
G Suite editions
- Available to G Suite Enterprise, G Suite Enterprise for Education, and Cloud Identity Premium customers
- Not available to G Suite Basic, G Suite Business, G Suite for Education, G Suite for Nonprofits, and Cloud Identity Free customers
Beta sign up
Find more information and sign up for the beta here.
Source: G Suite Updates Blog
As usual, our ongoing internal security work was responsible for a wide range of fixes:
- Various fixes from internal audits, fuzzing and other initiatives
Source: Google Chrome Releases
Understanding sequential data — such as language, music or videos — is a challenging task, especially when there is dependence on extensive surrounding context. For example, if a person or an object disappears from view in a video only to re-appear much later, many models will forget how it looked. In the language domain, long short-term memory (LSTM) neural networks cover enough context to translate sentence-by-sentence. In this case, the context window (i.e., the span of data taken into consideration in the translation) covers from dozens to about a hundred words. The more recent Transformer model not only improved performance in sentence-by-sentence translation, but could be used to generate entire Wikipedia articles through multi-document summarization. This is possible because the context window used by Transformer extends to thousands of words. With such a large context window, Transformer could be used for applications beyond text, including pixels or musical notes, enabling it to be used to generate music and images.
However, extending Transformer to even larger context windows runs into limitations. The power of Transformer comes from attention, the process by which it considers all possible pairs of words within the context window to understand the connections between them. So, in the case of a text of 100K words, this would require assessment of 100K x 100K word pairs, or 10 billion pairs for each step, which is impractical. Another problem is with the standard practice of storing the output of each model layer. For applications using large context windows, the memory requirement for storing the output of multiple model layers quickly becomes prohibitively large (from gigabytes with a few layers to terabytes in models with thousands of layers). This means that realistic Transformer models, using numerous layers, can only be used on a few paragraphs of text or generate short pieces of music.
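The quadratic blow-up described above can be checked with a one-line back-of-the-envelope calculation (a sketch for illustration; `attention_pairs` is our own name, not anything from the paper):

```python
# Cost of full attention over a sequence of length L: every position
# attends to every other, so the score matrix has L * L entries.
def attention_pairs(seq_len: int) -> int:
    """Number of query-key pairs scored in one full-attention step."""
    return seq_len * seq_len

# A 100K-word context needs 10 billion pairs per step, as noted above.
print(attention_pairs(100_000))  # 10000000000
```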
Today, we introduce the Reformer, a Transformer model designed to handle context windows of up to 1 million words, all on a single accelerator and using only 16GB of memory. It combines two crucial techniques to solve the problems of attention and memory allocation that limit Transformer’s application to long context windows. Reformer uses locality-sensitive hashing (LSH) to reduce the complexity of attending over long sequences and reversible residual layers to more efficiently use the memory available.
The Attention Problem
The first challenge when applying a Transformer model to a very large text sequence is how to handle the attention layer. Reformer addresses this with LSH, which computes a hash function that matches similar vectors together, instead of searching through all possible pairs of vectors. For example, in a translation task, where each vector from the first layer of the network represents a word (even larger contexts in subsequent layers), vectors corresponding to the same words in different languages may get the same hash. In the figure below, different colors depict different hashes, with similar words having the same color. When the hashes are assigned, the sequence is rearranged to bring elements with the same hash together and divided into segments (or chunks) to enable parallel processing. Attention is then applied within these much shorter chunks (and their adjoining neighbors to cover the overflow), greatly reducing the computational load.
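As an illustration only (not Reformer's actual implementation, which lives in the Trax library), a toy random-projection LSH that buckets similar vectors together and then chunks the sorted sequence might look like this in NumPy:

```python
import numpy as np

def lsh_buckets(vectors: np.ndarray, n_hashes: int = 4, seed: int = 0) -> np.ndarray:
    """Toy locality-sensitive hash: project each vector onto a few random
    hyperplanes and read the sign pattern as a bucket id. Nearby vectors
    tend to fall on the same side of each hyperplane, so they tend to
    share a bucket."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(vectors.shape[1], n_hashes))
    signs = (vectors @ planes) > 0               # (n_vectors, n_hashes) bools
    return signs @ (1 << np.arange(n_hashes))    # pack sign bits into an int id

# Sort by bucket so same-hash items are adjacent, then split into short
# chunks; attention would run only inside each chunk (plus its neighbors).
vecs = np.random.default_rng(1).normal(size=(16, 8))
order = np.argsort(lsh_buckets(vecs), kind="stable")
chunks = np.array_split(order, 4)
```

Identical vectors always hash to the same bucket, which is the property the chunked attention relies on.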
While LSH solves the problem with attention, there is still a memory issue. A single layer of a network often requires up to a few GB of memory and usually fits on a single GPU, so even a model with long sequences could be executed if it only had one layer. But when training a multi-layer model with gradient descent, activations from each layer need to be saved for use in the backward pass. A typical Transformer model has a dozen or more layers, so memory quickly runs out if used to cache values from each of those layers.
The second novel approach implemented in Reformer is to recompute the input of each layer on-demand during back-propagation, rather than storing it in memory. This is accomplished by using reversible layers, where activations from the last layer of the network are used to recover activations from any intermediate layer, by what amounts to running the network in reverse. In a typical residual network, each layer in the stack keeps adding to vectors that pass through the network. Reversible layers, instead, have two sets of activations for each layer. One follows the standard procedure just described and is progressively updated from one layer to the next, but the other captures only the changes to the first. Thus, to run the network in reverse, one simply subtracts the activations applied at each layer.
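The subtraction trick can be sketched with a toy reversible block in NumPy (the sublayers `F` and `G` here are simple stand-ins for attention and feed-forward, not the real Reformer layers):

```python
import numpy as np

# Reversible residual block in the style described above:
#   forward:  y1 = x1 + F(x2);  y2 = x2 + G(y1)
#   inverse:  x2 = y2 - G(y1);  x1 = y1 - F(x2)
# so a layer's inputs can be recomputed during the backward pass
# instead of being cached in memory.
def rev_forward(x1, x2, F, G):
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2, F, G):
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

F = lambda v: np.tanh(v)   # stand-in sublayers for illustration
G = lambda v: 0.5 * v
x1, x2 = np.ones(4), np.arange(4.0)
y1, y2 = rev_forward(x1, x2, F, G)
r1, r2 = rev_inverse(y1, y2, F, G)
assert np.allclose(r1, x1) and np.allclose(r2, x2)  # inputs recovered exactly
```

Because the inverse uses only subtractions of the same sublayer outputs, the recovery is exact (up to floating-point arithmetic), whatever `F` and `G` compute.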
The novel application of these two approaches in Reformer makes it highly efficient, enabling it to process text sequences of lengths up to 1 million words on a single accelerator using only 16GB of memory. Since Reformer has such high efficiency, it can be applied directly to data with context windows much larger than virtually all current state-of-the-art text domain datasets. Perhaps Reformer’s ability to deal with such large datasets will stimulate the community to create them.
One area where there is no shortage of large-context data is image generation, so we experiment with the Reformer on images. In this colab, we present examples of how Reformer can be used to “complete” partial images. Starting with the image fragments shown in the top row of the figure below, Reformer can generate full frame images (bottom row), pixel-by-pixel.
Top: Image fragments used as input to Reformer. Bottom: “Completed” full-frame images. Original images are from the Imagenet64 dataset.
We believe Reformer provides the basis for future use of Transformer models, both for long text and for applications outside of natural language processing. Following our tradition of doing research in the open, we have already started exploring how to apply it to even longer sequences and how to improve handling of positional encodings. Read the Reformer paper (selected for oral presentation at ICLR 2020), explore our code and develop your own ideas too. Few long-context datasets are widely used in deep learning yet, but in the real world long context is everywhere. Maybe you can find a new application for Reformer — start with this colab and chat with us if you have any problems or questions!
This research was conducted by Nikita Kitaev, Łukasz Kaiser and Anselm Levskaya. Additional thanks go to Afroz Mohiuddin, Jonni Kanerva and Piotr Kozakowski for their work on Trax and to the whole JAX team for their support.
Source: Google AI Blog
If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).
Source: Google Chrome Releases
BazelCon 2019 by the Numbers
- 400+ attendees (2x increase over BazelCon 2018)
- 125 organizations represented including Microsoft, Spotify, Uber, Apple, Cruise, EA, Lyft, Tesla, SpaceX, SAP, Bloomberg, Wix, Etsy, BMW and others
- 26 full-length talks and 15 lightning talks by members of the external community and Googlers
- 16 hours of Q&A during Office Hours with Bazel team members
- 45 Bazel Bootcamp attendees
- 5 Birds of a Feather sessions on iOS, Python, Java, C++ and Front-end Bazel rules
- 182 users in the #bazelcon2019 Slack channel
BazelCon 2019 Full Length Talks
- Keynote: The Role of Catastrophic Failure in Software Design – Jeff Atwood (Stack Overflow/Discourse)
- Bazel State of the Union – John Field and Dmitry Lomov (Google)
- Building Self Driving Cars with Bazel – Axel Uhlig and Patrick Ziegler (BMW Group)
- Moving to a Bazel-based CI system: 6 Learnings – Or Shachar (Wix)
- Bazel Federation – Florian Weikert (Google)
- Lessons from our First 100,000 Bazel Builds – Kevin Gessner (Etsy)
- Migrating Lyft-iOS to Bazel – Keith Smiley and Dave Lee (Lyft)
- Test Selection – Benjamin Peterson (Dropbox)
- Porting iOS Apps to Bazel – Oscar Bonilla (LinkedIn)
- Boosting Dev Box Performance with Remote Execution for Non-Hermetic Build Engines – Erik Mavrinac (Microsoft)
- Building on Key - Keeping your Actions and Remote Executions in Tune – George Gensure (UberATG)
- Bazel remote execution API vs Goma – Mostyn Bramley-Moore (Vewd Software)
- Integrating with ease: leveraging BuildStream interaction with Bazel build for consistent results – Daniel Silverstone (Codethink)
- Building Self-Driving Cars with Bazel – Michael Broll and Nico Valigi (Cruise)
- Make local development (with Bazel) great again! – Ittai Zeidman (Wix)
- Gradle to Bazel – Chip Dickson and Charles Walker (SUM Global Technology)
- Bazel Bootcamp – Kyle Cordes (Oasis Digital)
- Bazel migration patterns: how to prove business value with a small investment – Alex Eagle and Greg Magolan (Google)
- Dynamic scheduling: Fastest clean and incremental builds – Julio Merino (Google)
- Building a great CI with Bazel – Philipp Wollermann (Google)
Source: Google Open Source Blog
We’ve all been there: You have lots of tabs open and one of them starts playing a video, but you can’t figure out which one. Or you’re listening to music in your browser in the background and want to change the song without stopping your work to find the right tab.
With Chrome’s latest update, it’s now easier to control audio and video in your browser. Just click the icon in the top right corner of Chrome on desktop, open the new media hub and manage what’s playing from there.
Designed to minimize disruptions to whatever you need to get done in your browser, the new media hub helps you be more productive by bringing all your media notifications to one place and letting you manage audio and video playback without having to hunt through tabs. We first brought these media controls to Chromebooks in August, and today we rolled out the media hub in Chrome for Windows, Mac and Linux.
These new controls are the latest in a series of updates to enhance your media experience in Chrome, including support for media hardware keys for easy access to your media, and the Picture-in-Picture extension and API to help you with multitasking in your browser. We'll continue to add more functionality for you to control media in Chrome over time.
When Grow with Google launched the IT Support Professional Certificate, we aimed to equip learners around the world with the fundamentals to kickstart careers in information technology. Now, on the program’s two-year anniversary, we’re expanding our IT training offering with the new Google IT Automation with Python Professional Certificate. Python is now the most in-demand programming language, and more than 530,000 U.S. jobs, including 75,000 entry-level jobs, require Python proficiency. With this new certificate, you can learn Python, Git and IT automation within six months. The program includes a final project where learners will use their new skills to solve a problem they might encounter on the job, like building a web service using automation.
With over 100,000 people now enrolled in our original certificate program, we’ve seen how it can aid aspiring IT professionals. While working as a van driver in Washington, D.C., Yves Cooper took the course through Merit America, a Google.org-funded organization that helps working adults find new skills. Within five days of completing the program, he was offered a role as an IT helpdesk technician—a change that’s set him on a career path he’s excited about. All over the world, people like Yves are using this program to change their lives. In fact, 84 percent of people who take the program report a career impact—like getting a raise, finding a new job, or starting a business—within six months.
Among the many people who’ve enrolled in the IT certificate, 60 percent identify as female, Black, Latino, or veteran—backgrounds that have historically been underrepresented in the tech industry. To ensure learners from underserved backgrounds have access to both IT Professional Certificates, Google.org will fund 2,500 need-based scholarships through nonprofits like Goodwill, Merit America, Per Scholas and Upwardly Global. Along with top employers like Walmart, Hulu and Sprint, Google considers program completers when hiring for IT roles.
Self-paced and continuous education is one way we’re helping expand opportunity for all Americans. Our Grow with Google trainings and workshops have helped more than 3 million Americans grow their businesses and careers. With this new professional certificate, even more people can continue to grow their careers through technology.