Category Archives: Online Security Blog

The latest news and insights from Google on security and safety on the Internet

An Investigation of Chrysaor Malware on Android



Google is constantly working to improve our systems that protect users from Potentially Harmful Applications (PHAs). Usually, PHA authors attempt to install their harmful apps on as many devices as possible. However, a few PHA authors spend substantial effort, time, and money to create and install their harmful app on one or a very small number of devices. This is known as a targeted attack.

In this blog post, we describe Chrysaor, a newly discovered family of spyware that was used in a targeted attack on a small number of Android devices, and how investigations like this help Google protect Android users from a variety of threats.

What is Chrysaor?

Chrysaor is spyware believed to be created by NSO Group Technologies, a company specializing in the creation and sale of software and infrastructure for targeted attacks. Chrysaor is believed to be related to the Pegasus spyware that was first identified on iOS and analyzed by Citizen Lab and Lookout.

Late last year, after receiving a list of suspicious package names from Lookout, we discovered that a few dozen Android devices may have installed an application related to Pegasus, which we named Chrysaor. Although the applications were never available in Google Play, we immediately identified the scope of the problem by using Verify Apps. We gathered information from affected devices, and concurrently, attempted to acquire Chrysaor apps to better understand its impact on users. We’ve contacted the potentially affected users, disabled the applications on affected devices, and implemented changes in Verify Apps to protect all users.

What is the scope of Chrysaor?

Chrysaor was never available in Google Play and had a very low volume of installs outside of Google Play. Among the over 1.4 billion devices protected by Verify Apps, we observed fewer than 3 dozen installs of Chrysaor on victim devices. These devices were located in the following countries:


How we protect you

To protect Android devices and users, Google Play provides a complete set of security services that update outside of platform releases. Users don’t have to install any additional security services to keep their devices safe. In 2016, these services protected over 1.4 billion devices, making Google one of the largest providers of on-device security services in the world.

Additionally, we are providing detailed technical information to help the security industry in our collective work against PHAs.

What do I need to do?

It is extremely unlikely you or someone you know was affected by Chrysaor malware. Through our investigation, we identified fewer than 3 dozen devices affected by Chrysaor; we have disabled Chrysaor on those devices, and we have notified users of all known affected devices. Additionally, the improvements we made to our protections have been enabled for all users of our security services.

To ensure you are fully protected against PHAs and other threats, we recommend these 5 basic steps:

  • Install apps only from reputable sources: Install apps from a reputable source, such as Google Play. No Chrysaor apps were on Google Play. 
  • Enable a secure lock screen: Pick a PIN, pattern, or password that is easy for you to remember and hard for others to guess. 
  • Update your device: Keep your device up-to-date with the latest security patches. 
  • Verify Apps: Ensure Verify Apps is enabled. 
  • Locate your device: Practice finding your device with Android Device Manager because you are far more likely to lose your device than install a PHA. 
How does Chrysaor work? 

To install Chrysaor, we believe an attacker coaxed specifically targeted individuals to download the malicious software onto their device. Once Chrysaor is installed, a remote operator can surveil the victim’s activities on the device and within its vicinity by leveraging the microphone and camera, collecting data, and logging and tracking activity in communication apps such as phone and SMS.

One representative Chrysaor sample that we analyzed was tailored to devices running Jelly Bean (4.3) or earlier. The following is a review of the scope and impact of the Chrysaor app named com.network.android, tailored for Samsung devices, with SHA256 digest:

ade8bef0ac29fa363fc9afd958af0074478aef650adeb0318517b48bd996d5d5 

Upon installation, the app uses known framaroot exploits to escalate privileges and break Android’s application sandbox. If the targeted device is not vulnerable to these exploits, then the app attempts to use a superuser binary pre-positioned at /system/csk to elevate privileges.
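
For analysts who want to triage a captured APK against this particular sample, a minimal sketch in Python (the local file name suspect.apk is a placeholder, not a name from our investigation) can compare the file's SHA256 digest with the one published above:

    import hashlib

    # SHA256 digest of the com.network.android sample described above.
    CHRYSAOR_SHA256 = "ade8bef0ac29fa363fc9afd958af0074478aef650adeb0318517b48bd996d5d5"

    def sha256_of_file(path):
        """Return the hex SHA256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # "suspect.apk" is a placeholder for an APK pulled from a device under analysis.
    if sha256_of_file("suspect.apk") == CHRYSAOR_SHA256:
        print("Digest matches the known Chrysaor sample.")
    else:
        print("No match against this particular sample.")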

After escalating privileges, the app immediately protects itself and starts to collect data by:

  • Installing itself on the /system partition to persist across factory resets 
  • Removing Samsung’s system update app (com.sec.android.fotaclient) and disabling auto-updates to maintain persistence (sets Settings.System.SOFTWARE_UPDATE_AUTO_UPDATE to 0) 
  • Deleting WAP push messages and changing WAP message settings, possibly for anti-forensic purposes. 
  • Starting content observers and the main task loop to receive remote commands and exfiltrate data.
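
The persistence indicators above lend themselves to a rough on-device triage. The sketch below is illustrative only; it assumes a test device reachable over adb and simply checks for the pre-positioned /system/csk binary, the removed Samsung FOTA client, and the auto-update setting described above:

    import subprocess

    def adb_shell(command):
        """Run a shell command on the attached device and return its output."""
        result = subprocess.run(["adb", "shell", command],
                                capture_output=True, text=True)
        return result.stdout.strip()

    # 1. Superuser binary pre-positioned for privilege escalation.
    print("/system/csk:", adb_shell("test -e /system/csk && echo present || echo absent"))

    # 2. Samsung system update app removed to maintain persistence.
    print("fotaclient installed:",
          "com.sec.android.fotaclient" in adb_shell("pm list packages"))

    # 3. Auto-update setting forced to 0 by the spyware.
    print("SOFTWARE_UPDATE_AUTO_UPDATE:",
          adb_shell("settings get system SOFTWARE_UPDATE_AUTO_UPDATE"))
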
The app uses six techniques to collect user data:

  • Repeated commands: use alarms to periodically repeat actions on the device to expose data, including gathering location data. 
  • Data collectors: dump all existing content on the device into a queue. Data collectors are used in conjunction with repeated commands to collect user data including SMS settings, SMS messages, call logs, browser history, calendar entries, contacts, emails, and messages from selected messaging apps, including WhatsApp, Twitter, Facebook, Kakao, Viber, and Skype, by making the /data/data directories of those apps world readable. 
  • Content observers: use Android’s ContentObserver framework to gather changes in SMS, Calendar, Contacts, Cell info, Email, WhatsApp, Facebook, Twitter, Kakao, Viber, and Skype. 
  • Screenshots: captures an image of the current screen via the raw frame buffer. 
  • Keylogging: record input events by hooking IPCThreadState::Transact from /system/lib/libbinder.so and intercepting android::Parcel objects with the interface com.android.internal.view.IInputContext. 
  • RoomTap: silently answers a telephone call and stays connected in the background, allowing the caller to hear conversations within the range of the phone's microphone. If the user unlocks their device, they will see a black screen while the app drops the call, resets call settings and prepares for the user to interact with the device normally. 
Finally, the app can remove itself in any of three ways:

  • Via a command from the server 
  • Automatic removal if the device has not been able to check in with the server for 60 days 
  • Via an antidote file: if /sdcard/MemosForNotes is present on the device, the Chrysaor app removes itself from the device.
Samples uploaded to VirusTotal

To encourage further research in the security community, we’ve uploaded these sample Chrysaor apps to VirusTotal.


Additional digests with links to Chrysaor 

As a result of our investigation, we have identified these additional Chrysaor-related apps.


Lookout has completed their own independent analysis of the samples we acquired; their report can be viewed here.

Updates to Google Safe Browsing’s Site Status Tool



Google Safe Browsing gives users tools to help protect themselves from web-based threats like malware, unwanted software, and social engineering. We are best known for our warnings, which users see when they attempt to navigate to dangerous sites or download dangerous files. We also provide other tools, like the Site Status Tool, where people can check the current safety status of a web page (without having to visit it).

We host this tool within Google’s Safe Browsing Transparency Report. As with other sections in Google’s Transparency Report, we make this data available to give the public more visibility into the security and health of the online ecosystem. Users of the Site Status Tool input a webpage (as a URL, website, or domain) into the tool, and the most recent results of the Safe Browsing analysis for that webpage are returned, along with references to troubleshooting help and educational materials.
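
For researchers who prefer a programmatic check to the web tool, similar verdicts are exposed through the Safe Browsing Lookup API (v4). The sketch below assumes you have obtained an API key and installed the third-party requests library; the client name and example URL are placeholders:

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder: obtain a key from the Google API Console
    ENDPOINT = "https://safebrowsing.googleapis.com/v4/threatMatches:find"

    body = {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": "http://example.com/"}],
        },
    }

    response = requests.post(ENDPOINT, params={"key": API_KEY}, json=body)
    # An empty "matches" list means Safe Browsing has no current verdict for the URL.
    print(response.json().get("matches", []))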



We’ve just launched a new version of the Site Status Tool that provides simpler, clearer results and is better designed for the primary users of the page: people who are visiting the tool from a Safe Browsing warning they’ve received, or doing casual research on Google’s malware and phishing detection. The tool now features a cleaner UI, easier-to-interpret language, and more precise results. We’ve also moved some of the more technical data on associated ASes (autonomous systems) over to the malware dashboard section of the report.

While the interface has been streamlined, the additional diagnostic information is not gone: researchers who wish to find more details can drill down elsewhere in Safe Browsing’s Transparency Report, while site owners can find additional diagnostic information in Search Console. One of the goals of the Transparency Report is to shed light on complex policy and security issues, so we hope the design adjustments will indeed provide our users with additional clarity.

Reassuring our users about government-backed attack warnings



Since 2012, we’ve warned our users if we believe their Google accounts are being targeted by government-backed attackers.

We send these out of an abundance of caution — the notice does not necessarily mean that the account has been compromised or that there is a widespread attack. Rather, the notice reflects our assessment that a government-backed attacker has likely attempted to access the user’s account or computer through phishing or malware, for example. You can read more about these warnings here.
In order to secure some of the details of our detection, we often send a batch of warnings to groups of at-risk users at the same time, and not necessarily in real-time. Additionally, we never indicate which government-backed attackers we think are responsible for the attempts; different users may be targeted by different attackers.

Security has always been a top priority for us. Robust, automated protections help prevent scammers from signing into your Google account; Gmail always uses an encrypted connection when you receive or send email; we filter more than 99.9% of spam, a common source of phishing messages, from Gmail; and we show users when messages are from an unverified or unencrypted source.

An extremely small fraction of users will ever see one of these warnings, but if you receive this warning from us, it's important to take action on it. You can always take a two-minute Security Checkup, and for maximum protection from phishing, enable two-step verification with a Security Key.

Diverse protections for a diverse ecosystem: Android Security 2016 Year in Review


Today, we’re sharing the third annual Android Security Year In Review, a comprehensive look at our work to protect more than 1.4 billion Android users and their data.

Our goal is simple: keep users safe. In 2016, we improved our abilities to stop dangerous apps, built new security features into Android 7.0 Nougat, and collaborated with device manufacturers, researchers, and other members of the Android ecosystem. For more details, you can read the full Year in Review report or watch our webinar.

Protecting users from PHAs
It’s critical to keep people safe from Potentially Harmful Apps (PHAs) that may put their data or devices at risk. Our ongoing work in this area requires us to find ways to track and stop existing PHAs, and anticipate new ones that haven’t even emerged yet.

Over the years, we’ve built a variety of systems to address these threats, such as application analyzers that constantly review apps for unsafe behavior, and Verify Apps which regularly checks users’ devices for PHAs. When these systems detect PHAs, we warn users, suggest they think twice about downloading a particular app, or even remove the app from their devices entirely.

We constantly monitor threats and improve our systems over time. Last year’s data reflected those improvements: Verify Apps conducted 750 million daily checks in 2016, up from 450 million the previous year, enabling us to reduce the PHA installation rate in the top 50 countries for Android usage.

Google Play continues to be the safest place for Android users to download their apps. Installs of PHAs from Google Play decreased in nearly every category:

  • Trojans: now 0.016 percent of installs, down 51.5 percent from 2015
  • Hostile downloaders: now 0.003 percent of installs, down 54.6 percent from 2015
  • Backdoors: now 0.003 percent of installs, down 30.5 percent from 2015
  • Phishing apps: now 0.0018 percent of installs, down 73.4 percent from 2015 

By the end of 2016, only 0.05 percent of devices that downloaded apps exclusively from Play contained a PHA, down from 0.15 percent in 2015.

Still, there’s more work to do for devices overall, especially those that install apps from multiple sources. While only 0.71 percent of all Android devices had PHAs installed at the end of 2016, that was a slight increase from about 0.5 percent in the beginning of 2015. Using improved tools and the knowledge we gained in 2016, we think we can reduce the number of devices affected by PHAs in 2017, no matter where people get their apps.
New security protections in Nougat
Last year, we introduced a variety of new protections in Nougat, and continued our ongoing work to strengthen the security of the Linux Kernel.

  • Encryption improvements: In Nougat, we introduced file-based encryption, which enables each user profile on a single device to be encrypted with a unique key. If you have personal and work accounts on the same device, for example, the key from one account can’t unlock data from the other. More broadly, encryption of user data has been required for capable Android devices since late 2014, and we now see that feature enabled on over 80 percent of Android Nougat devices.
  • New audio and video protections: We did significant work to improve security and re-architect how Android handles video and audio media. One example: we now store different media components in individual sandboxes, where previously they lived together. Now, if one component is compromised, it doesn’t automatically have permission to access other components, which helps contain any additional issues.
  • Even more security for enterprise users: We introduced a variety of new enterprise security features including “Always On” VPN, which protects your data from the moment your device boots up and ensures it isn't traveling from a work phone to your personal device via an insecure connection. We also added security policy transparency, process logging, improved wifi certification handling, and client certification improvements to our growing set of enterprise tools.

Working together to secure the Android ecosystem

Sharing information about security threats between Google, device manufacturers, the research community, and others helps keep all Android users safer. In 2016, our biggest collaborations were via our monthly security updates program and ongoing partnership with the security research community.

Security updates are regularly highlighted as a pillar of mobile security—and rightly so. We launched our monthly security updates program in 2015, following the public disclosure of a bug in Stagefright, to help accelerate patching security vulnerabilities across devices from many different device makers. This program expanded significantly in 2016:

  • More than 735 million devices from 200+ manufacturers received a platform security update in 2016.
  • We released monthly Android security updates throughout the year for devices running Android 4.4.4 and up—that accounts for 86.3 percent of all active Android devices worldwide.
  • Our carrier and hardware partners helped expand deployment of these updates, releasing updates for over half of the top 50 devices worldwide in the last quarter of 2016.

We provided monthly security updates for all supported Pixel and Nexus devices throughout 2016, and we’re thrilled to see our partners invest significantly in regular updates as well. There’s still a lot of room for improvement, however. About half of devices in use at the end of 2016 had not received a platform security update in the previous year. We’re working to increase device security updates by streamlining our security update program to make it easier for manufacturers to deploy security patches and releasing A/B updates to make it easier for users to apply those patches.

On the research side, our Android Security Rewards program grew rapidly: we paid researchers nearly $1 million for their reports in 2016. In parallel, we worked closely with various security firms to identify and quickly fix issues that may have posed risks to our users.

We appreciate all of the hard work by Android partners, external researchers, and teams at Google that led to the progress the ecosystem has made with security in 2016. But it doesn’t stop there. Keeping users safe requires constant vigilance and effort. We’re looking forward to new insights and progress in 2017 and beyond.

Detecting and eliminating Chamois, a fraud botnet on Android



Google works hard to protect users across a variety of devices and environments. Part of this work involves defending users against Potentially Harmful Applications (PHAs), an effort that gives us the opportunity to observe various types of threats targeting our ecosystem. For example, our security teams recently discovered and defended users of our ads and Android systems against a new PHA family we've named Chamois.

Chamois is an Android PHA family capable of:
  • Generating invalid traffic through ad pop-ups that display deceptive graphics inside the ad
  • Performing artificial app promotion by automatically installing apps in the background
  • Performing telephony fraud by sending premium text messages
  • Downloading and executing additional plugins

Interference with the ads ecosystem

We detected Chamois during a routine ad traffic quality evaluation. We analyzed malicious apps based on Chamois and found that they employed several methods to avoid detection and tried to trick users into clicking ads by displaying deceptive graphics. This sometimes resulted in the download of other apps that commit SMS fraud. So we blocked the Chamois app family using Verify Apps and also kicked out bad actors who were trying to game our ad systems.
Our previous experience with ad fraud apps like this one enabled our teams to swiftly take action to protect both our advertisers and Android users. Because the malicious app didn't appear in the device's app list, most users wouldn't have seen or known to uninstall the unwanted app. This is why Google's Verify Apps is so valuable, as it helps users discover PHAs and delete them.

Under Chamois's hood

Chamois was one of the largest PHA families seen on Android to date, and it was distributed through multiple channels. To the best of our knowledge, Google is the first to publicly identify and track Chamois.
Chamois had a number of features that made it unusual, including:
  • Multi-staged payload: Its code is executed in 4 distinct stages using different file formats, as outlined in this diagram.

This multi-stage process makes it more complicated to immediately identify apps in this family as a PHA, because the layers have to be peeled back first to reach the malicious part. However, Google's pipelines weren't tricked, as they are designed to tackle these scenarios properly.
  • Self-protection: Chamois tried to evade detection using obfuscation and anti-analysis techniques, but our systems were able to counter them and detect the apps accordingly.
  • Custom encrypted storage: The family uses a custom, encrypted file storage for its configuration files and additional code that required deeper analysis to understand the PHA.
  • Size: Our security teams sifted through more than 100K lines of sophisticated code written by seemingly professional developers. Due to the sheer size of the APK, it took some time to understand Chamois in detail.

Google's approach to fighting PHAs

Verify Apps protects users from known PHAs by warning them when they are downloading an app that is determined to be a PHA, and it also enables users to uninstall such an app if it has already been installed. Additionally, Verify Apps monitors the state of the Android ecosystem for anomalies and investigates the ones that it finds. It also helps find unknown PHAs through behavior analysis on devices. For example, many apps downloaded by Chamois were highly ranked by the DOI scorer. We have implemented rules in Verify Apps to protect users against Chamois.
Google continues to significantly invest in its counter-abuse technologies for Android and its ad systems, and we're proud of the work that many teams do behind the scenes to fight PHAs like Chamois.

We hope this summary provides insight into the growing complexity of Android botnets and into Google's anti-PHA efforts to ameliorate the risks they pose to users, devices, and ad systems. For more details, keep an eye open for the upcoming "Android Security 2016 Year In Review" report.

VRP news from Nullcon


We’re thrilled to be joining the security research community at Nullcon this week in Goa, India. This is a hugely important event for the Google Vulnerability Rewards Program and for our work with the security research community, more broadly. To mark the occasion, we wanted to share a few updates about the VRP.
Tougher bugs, bigger rewards
Since the launch of our program in 2010, Google has offered a range of rewards: from $100 USD for low severity issues, up to $20,000 USD for critical vulnerabilities in our web properties (see Android and Chrome rewards). But, because high severity vulnerabilities have become harder to identify over the years, researchers have needed more time to find them. We want to demonstrate our appreciation for the significant time researchers dedicate to our program, and so we’re making some changes to our VRP.

Starting today we will be increasing the reward for “Remote Code Execution” on the Google VRP from $20,000 USD to $31,337 USD. We are increasing the reward for “Unrestricted file system or database access” from $10,000 USD to $13,337 USD as well. Please check out the VRP site for more details and specifics.

Also, we are now donating rewards attributed to reports generated by our internal web security scanner; we have donated over $8,000 to rescue.org so far this year. Cloud Security Scanner allows App Engine customers to utilize a version of the same tool.
Growing the security research community in India
In 2016’s VRP Year in Review, we featured Jasminder Pal Singh, a longtime contributor who uses rewards to fund his startup, Jasminder Web Services Point. He’s emblematic of the vibrant and fast-growing computer security research community in India. We saw that new momentum reflected in last year’s VRP data: India was surpassed by only two other locations in terms of total individual researchers paid. We received reports from ~40% more Indian researchers than in 2015 and gave out 30% more rewards, which almost tripled the total amount paid and doubled the average payout (both per researcher and per reward). We are excited to see this growth, as all users of Google’s products benefit.
Globally, we’ve noticed other interesting trends. Russia has consistently occupied a position in the top 10 every year for the last 7 years. We have noticed a 3X increase in reports from Asia, which made up 70% of the Android Security Rewards in 2016. We have also seen increases in the number of researchers reporting valid bugs from Germany (27%) and France (44%). France broke into our top 5 countries in 2016 for the first time.


In 2016, we delivered technical talks along with educational trainings to an audience of enthusiastic security professionals in Goa at the Nullcon security conference. This year, we continue our investment at Nullcon to deliver content focused on the growing group of bug hunters we see in India. If you are attending Nullcon please stop by and say “Hello”!

Expanding protection for Chrome users on macOS



Safe Browsing is broadening its protection of macOS devices, enabling safer browsing experiences by improving defenses against unwanted software and malware targeting macOS. As a result, macOS users may start seeing more warnings when they navigate to dangerous sites or download dangerous files (example warning below).
As part of this next step towards reducing macOS-specific malware and unwanted software, Safe Browsing is focusing on two common abuses of browsing experiences: unwanted ad injection, and manipulation of Chrome user settings, specifically the start page, home page, and default search engine. Users deserve full control of their browsing experience and Unwanted Software Policy violations hurt that experience.


The recently released Chrome Settings API for Mac gives developers the tools to make sure users stay in control of their Chrome settings. From here on, the Settings Overrides API will be the only approved path for making changes to Chrome settings on macOS, as it currently is on Windows. Also, developers should know that only extensions hosted in the Chrome Web Store are allowed to make changes to Chrome settings.
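
As a rough illustration, an extension that legitimately changes the start page, home page, or default search engine declares those changes in the settings overrides section of its manifest instead of modifying settings behind the user's back. The snippet below is a partial, hypothetical manifest; the names and URLs are placeholders:

    {
      "name": "Example Search Extension",
      "version": "1.0",
      "manifest_version": 2,
      "chrome_settings_overrides": {
        "homepage": "https://www.example.com",
        "search_provider": {
          "name": "Example Search",
          "keyword": "example",
          "search_url": "https://www.example.com/search?q={searchTerms}",
          "favicon_url": "https://www.example.com/favicon.ico",
          "encoding": "UTF-8",
          "is_default": true
        },
        "startup_pages": ["https://www.example.com/start"]
      }
    }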


Starting March 31, 2017, Chrome and Safe Browsing will warn users about software that attempts to modify Chrome settings without using the API.


For more information about the criteria we use to guide our efforts to protect Safe Browsing’s users, please visit our malware and unwanted software help center.

E2EMail research project has left the nest



Whether they’re concerned about insider risks, compelled data disclosure demands, or other perceived dangers, some people prudently use end-to-end email encryption to limit the scope of systems they have to trust. The best-known method, PGP, has long been available in command-line form, as a plug-in for IMAP-based email clients, and it clumsily interoperates with Gmail by cut-and-paste. All these scenarios have demonstrated over 25 years that it’s too hard to use. Chromebook users also have never had a good solution; choosing between strong crypto and a strong endpoint device is unsatisfactory.

These are some of the reasons we’ve continued working on the End-To-End research effort. One of the things we’ve done over the past year is add the resulting E2EMail code to GitHub: E2EMail is not a Google product; it’s now a fully community-driven open source project, to which passionate security engineers from across the industry have already contributed.

E2EMail offers one approach to integrating OpenPGP into Gmail via a Chrome Extension, with improved usability, while carefully keeping all cleartext of the message body exclusively on the client. E2EMail is built on a proven, open source JavaScript crypto library developed at Google.

E2EMail in its current incarnation uses a bare-bones central keyserver for testing, but the recent Key Transparency announcement is crucial to its further evolution. Key discovery and distribution lie at the heart of the usability challenges that OpenPGP implementations have faced. Key Transparency delivers a solid, scalable, and thus practical solution, replacing the problematic web-of-trust model traditionally used with PGP.

We look forward to working alongside the community to integrate E2EMail with the Key Transparency server, and beyond. If you’re interested in delving deeper, check out the e2email-org/e2email repository on GitHub.

Announcing the first SHA1 collision


Cryptographic hash functions like SHA-1 are a cryptographer’s Swiss Army knife. You’ll find that hashes play a role in browser security, managing code repositories, or even just detecting duplicate files in storage. Hash functions compress large amounts of data into a small message digest. As a cryptographic requirement for widespread use, finding two messages that lead to the same digest should be computationally infeasible. Over time, however, this requirement can fail due to attacks on the mathematical underpinnings of hash functions or to increases in computational power.

Today, more than 20 years after SHA-1 was first introduced, we are announcing the first practical technique for generating a collision. This represents the culmination of two years of research that sprang from a collaboration between the CWI Institute in Amsterdam and Google. We’ve summarized how we went about generating a collision below. As a proof of the attack, we are releasing two PDFs that have identical SHA-1 hashes but different content.
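
One way to confirm the result for yourself, assuming you have downloaded the two released PDFs (distributed as shattered-1.pdf and shattered-2.pdf), is to compare their SHA-1 and SHA-256 digests; only the SHA-1 digests match:

    import hashlib

    def file_digest(path, algorithm):
        """Hash a file with the named algorithm and return the hex digest."""
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    for algorithm in ("sha1", "sha256"):
        a = file_digest("shattered-1.pdf", algorithm)
        b = file_digest("shattered-2.pdf", algorithm)
        # The SHA-1 digests collide; the SHA-256 digests differ.
        print(algorithm, "digests equal:", a == b)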

For the tech community, our findings emphasize the necessity of sunsetting SHA-1 usage. Google has advocated the deprecation of SHA-1 for many years, particularly when it comes to signing TLS certificates. As early as 2014, the Chrome team announced that they would gradually phase out SHA-1. We hope our practical attack on SHA-1 will cement the fact that the hash function should no longer be considered secure.

We hope that our practical attack against SHA-1 will finally convince the industry that it is urgent to move to safer alternatives such as SHA-256.

What is a cryptographic hash collision?
A collision occurs when two distinct pieces of data—a document, a binary, or a website’s certificate—hash to the same digest. In practice, collisions should never occur for secure hash functions. However, if the hash algorithm has some flaws, as SHA-1 does, a well-funded attacker can craft a collision. The attacker could then use this collision to deceive systems that rely on hashes into accepting a malicious file in place of its benign counterpart: for example, two insurance contracts with drastically different terms.

Finding the SHA-1 collision

In 2013, Marc Stevens published a paper that outlined a theoretical approach to creating a SHA-1 collision. We started by creating a PDF prefix specifically crafted to allow us to generate two documents with arbitrary distinct visual contents but that would hash to the same SHA-1 digest. In building this theoretical attack in practice, we had to overcome some new challenges. We then leveraged Google’s technical expertise and cloud infrastructure to compute the collision, which is one of the largest computations ever completed.

Here are some numbers that give a sense of how large scale this computation was:

  • Nine quintillion (9,223,372,036,854,775,808) SHA-1 computations in total
  • 6,500 years of CPU computation to complete the first phase of the attack
  • 110 years of GPU computation to complete the second phase

While those numbers seem very large, the SHAttered attack is still more than 100,000 times faster than a brute force attack, which remains impractical.
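
As a back-of-the-envelope check on that claim: the nine quintillion SHA-1 computations quoted above are roughly 2^63, while a generic birthday-style brute-force search for a collision in SHA-1's 160-bit output requires on the order of 2^80 computations, a gap of about 2^17, or roughly 131,000:

    # Cost of the collision attack, as reported above (about 2^63 SHA-1 computations).
    shattered_cost = 9_223_372_036_854_775_808

    # Generic birthday-bound brute force against a 160-bit hash: about 2^80 computations.
    brute_force_cost = 2 ** 80

    # Roughly 131,072x, consistent with "more than 100,000 times faster".
    print(brute_force_cost // shattered_cost)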

Mitigating the risk of SHA-1 collision attacks

Moving forward, it’s more urgent than ever for security practitioners to migrate to safer cryptographic hashes such as SHA-256 and SHA-3. Following Google’s vulnerability disclosure policy, we will wait 90 days before releasing code that allows anyone to create a pair of PDFs that hash to the same SHA-1 sum, given two distinct images and some preconditions. To prevent this attack from being actively used, we’ve added protections for Gmail and G Suite users that detect our PDF collision technique. Furthermore, we are providing a free detection system to the public.

You can find more details about the SHA-1 attack and detailed research outlining our techniques here.

About the team

This result is the product of a long-term collaboration between the CWI institute and Google’s Research security, privacy and anti-abuse group.

Marc Stevens and Elie Bursztein started collaborating on making Marc’s cryptanalytic attacks against SHA-1 practical using Google infrastructure. Ange Albertini developed the PDF attack, Pierre Karpman worked on the cryptanalysis and the GPU implementation, Yarik Markov took care of the distributed GPU code, Alex Petit Bianco implemented the collision detector to protect Google users and Clement Baisse oversaw the reliability of the computations.


Another option for file sharing



Existing mechanisms for file sharing are so fragmented that people waste time on multi-step copying and repackaging. With the new project Upspin, we aim to improve the situation by providing a global name space to name all your files. Given an Upspin name, a file can be shared securely, copied efficiently without "download" and "upload", and accessed by anyone with permission from anywhere with a network connection.

Our target audience is personal users, families, or groups of friends. Although Upspin might have application in enterprise environments, we think that focusing on the consumer case enables easy-to-understand and easy-to-use sharing.

File names begin with the user's email address followed by a slash-separated Unix-like path name:
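A minimal illustrative name, using a placeholder address and path rather than a real account, looks like this:

    ann@example.com/dir/file
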
Any user with appropriate permission can access the contents of this file by using Upspin services to evaluate the full path name, typically via a FUSE filesystem so that unmodified applications just work. Upspin names usually identify regular static files and directories, but may point to dynamic content generated by devices such as sensors or services.

If the user wishes to share a directory (the unit at which sharing privileges are granted), she adds a file called Access to that directory. In that file she describes the rights she wishes to grant and the users she wishes to grant them to. For instance,
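an Access file containing a single line like the one below (the addresses are placeholders for real Upspin user names)

    read: joe@example.com, mae@example.com
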
allows Joe and Mae to read any of the files in the directory holding the Access file, and also in its subdirectories. As well as limiting who can fetch bytes from the server, this access is enforced end-to-end cryptographically, so cleartext only resides on Upspin clients, and use of cloud storage does not extend the trust boundary.

Upspin looks a bit like a global file system, but its real contribution is a set of interfaces, protocols, and components from which an information management system can be built, with properties such as security and access control suited to a modern, networked world. Upspin is not an "app" or a web service, but rather a suite of software components, intended to run in the network and on devices connected to it, that together provide a secure, modern information storage and sharing network. Upspin is a layer of infrastructure that other software and services can build on to facilitate secure access and sharing. This is an open source contribution, not a Google product. We have not yet integrated with the Key Transparency server, though we expect to eventually, and for now use a similar technique of securely publishing all key updates. File storage is inherently an archival medium without forward secrecy; loss of the user's encryption keys implies loss of content, though we do provide for key rotation.

It’s early days, but we’re encouraged by the progress and look forward to feedback and contributions. To learn more, see the GitHub repository at upspin.