Watch With Me on Google TV: Simu Liu’s watchlist

Movies and TV can make us laugh, cry and even shape who we are. Our watchlists can be surprisingly revealing. We’re teaming up with entertainers, artists and cultural icons on a new Watch With Me series on Google TV to share their top picks and give you a behind-the-scenes look at the TV and movies that inspired them.

Actor and writer Simu Liu loved movies and TV from a young age; he liked losing himself in a story. He loved it so much, he wanted to pursue acting, and he broke new ground when he was cast as the first Asian American superhero in the Marvel Cinematic Universe, playing the title character in Marvel Studios’ “Shang-Chi and The Legend of The Ten Rings.” Even after playing a superhero himself, he still enjoys watching epic tales. “I’m someone who loves escaping into completely new and different worlds through watching movies and TV. Whether it’s a world of wizards, orcs or Greco-Roman gods, I’m just somebody that loves being whisked away.”

Also important to Simu Liu is seeing Asian representation on the big screen. We recently sat down with Simu Liu to learn about his Google TV watchlist and what his top picks mean to him. “My watchlist says I’m somebody who cares deeply about Asian representation and telling our stories, because they haven't been told yet in Hollywood in a deep and meaningful way or they are just starting to be told,” he says. “All these films have deeply impacted me in some personal way, and I'm so excited to share them with you.”

Google TV showing Watch With Me page with Simu Liu’s watchlist.

Before diving into his top picks, we asked Simu a few questions to get to know him and his love for movies and TV.

What’s your go-to movie snack?

Simu Liu: The essential movie snack is butter popcorn with the liquid butter drizzle topping. How many pumps of butter is best? That's a very good question that I think about. There are basically two schools of thought: One is to pump all the butter at the top and let it seep down. And the other is to pump as you fill the bucket to evenly distribute.

What’s your favorite genre?

Simu Liu: My favorite genre is superhero movies. I mean, come on. It’s the best.

What makes a great date-night movie?

Simu Liu: A great date-night movie needs a fantastic “meet cute” moment. We all know that moment in a romantic movie where one of the main characters chases the other down and professes their feelings in this beautiful, perfect monologue.

Do you love watching movies at home or in the theaters?

Simu Liu: I like watching at home, but there's nothing better than sitting in a theater in a dark room, experiencing a movie on a massive screen with a bunch of strangers and just munching on popcorn.

You come from the world of sitcoms and love watching them. What makes a good sitcom for you?

Simu Liu: The characters are really what makes a great sitcom. You have to have a balanced cast of wacky characters that you love to love and love to hate... it's got to have the perfect balance.

Why are movies and TV important to you?

Simu Liu: I believe in the transformative power of storytelling. Movies and TV have the capacity to change people's lives or, at the very least, show them something about their lives that is relatable, that is poignant and that makes them think. My life was so affected by the movies and TV that I saw; they were how I taught myself language and culture when my family immigrated to Canada when I was a young child.

Explore Simu’s watchlist and uncover more fun facts about Simu, like his love for musicals, on Google TV, rolling out over the next few days. Tell us your favorites using #WatchWithMe.

Chrome Beta for Android Update

Hi everyone! We've just released Chrome Beta 96 (96.0.4664.45) for Android: it's now available on Google Play.

You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.

If you find a new issue, please let us know by filing a bug.

Krishna Govind
Google Chrome

Making Better Future Predictions by Watching Unlabeled Videos

Machine learning (ML) agents are increasingly deployed in the real world to make decisions and assist people in their daily lives. Making reasonable predictions about the future at varying timescales is one of the most important capabilities for such agents because it enables them to predict changes in the world around them, including other agents’ behaviors, and plan how to act next. Importantly, successful future prediction requires both capturing meaningful transitions in the environment (e.g., dough transforming into bread) and adapting to how transitions unfold over time in order to make decisions.

Previous work in future prediction from visual observations has largely been constrained by the format of its output (e.g., pixels that represent an image) or a manually-defined set of human activities (e.g., predicting if someone will keep walking, sit down, or jump). These are either too detailed and hard to predict or lack important information about the richness of the real world. For example, predicting “person jumping” does not capture why they’re jumping, what they’re jumping onto, etc. Also, with very few exceptions, previous models were designed to make predictions at a fixed offset into the future, which is a limiting assumption because we rarely know when meaningful future states will happen.

For example, in a video about making ice cream (depicted below), the meaningful transition from “cream” to “ice cream” occurs over 35 seconds, so a model predicting such transitions would need to look 35 seconds ahead. But this time interval varies widely across activities and videos: meaningful transitions can occur at any distance into the future. Learning to make such predictions at flexible intervals is hard because the desired ground truth may be relatively ambiguous. For example, the correct prediction could be the just-churned ice cream in the machine, or scoops of the ice cream in a bowl. In addition, collecting such annotations at scale (i.e., frame-by-frame for millions of videos) is infeasible. However, many existing instructional videos come with speech transcripts, which often offer concise, general descriptions throughout entire videos. This source of data can guide a model’s attention toward important parts of the video, obviating the need for manual labeling and allowing a flexible, data-driven definition of the future.

In “Learning Temporal Dynamics from Cycles in Narrated Video”, published at ICCV 2021, we propose an approach that is self-supervised, using a recent large unlabeled dataset of diverse human action. The resulting model operates at a high level of abstraction, can make predictions arbitrarily far into the future, and chooses how far into the future to predict based on context. Called Multi-Modal Cycle Consistency (MMCC), it leverages narrated instructional video to learn a strong predictive model of the future. We demonstrate how MMCC can be applied, without fine-tuning, to a variety of challenging tasks, and qualitatively examine its predictions. In the example below, MMCC predicts the future (d) from present frame (a), rather than less relevant potential futures (b) or (c).

This work uses cues from vision and language to predict high-level changes (such as cream becoming ice cream) in video (video from HowTo100M).

Viewing Videos as Graphs
The foundation of our method is to represent narrated videos as graphs. We view videos as a collection of nodes, where nodes are either video frames (sampled at 1 frame per second) or segments of narrated text (extracted with automatic speech recognition systems), encoded by neural networks. During training, MMCC constructs a graph from the nodes, using cross-modal edges to connect video frames and text segments that refer to the same state, and temporal edges to connect the present (e.g., strawberry-flavored cream) and the future (e.g., soft-serve ice cream). The temporal edges operate on both modalities equally — they can start from either a video frame, some text, or both, and can connect to a future (or past) state in either modality. MMCC achieves this by learning a latent representation shared by frames and text and then making predictions in this representation space.
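As a rough illustration of the cross-modal edges, nodes close to each other in the shared latent space can be linked by nearest-neighbor matching. This is a toy sketch, not the paper's code: the random vectors below stand in for the outputs of the learned frame and text encoders.

```python
import numpy as np

# Toy sketch of the graph construction described above. The embeddings are
# random placeholders for the learned encoders; in MMCC, frames and text
# segments share one latent representation space.
rng = np.random.default_rng(0)
frame_emb = rng.normal(size=(5, 8))  # 5 video frames, sampled at 1 fps
text_emb = rng.normal(size=(3, 8))   # 3 ASR text segments

# Normalize so dot products are cosine similarities.
frame_emb /= np.linalg.norm(frame_emb, axis=1, keepdims=True)
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)

# Cross-modal edges: link each frame to the text segment closest to it
# in the shared representation space.
similarity = frame_emb @ text_emb.T            # shape (5, 3)
cross_modal_edges = similarity.argmax(axis=1)  # matched text index per frame
print(cross_modal_edges.shape)  # (5,)
```

Temporal edges work analogously, except the target node is produced by a learned predictor rather than similarity to the query itself.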

Multi-modal Cycle Consistency
To learn the cross-modal and temporal edge functions without supervision, we apply the idea of cycle consistency. Here, cycle consistency refers to the construction of cycle graphs, in which the model constructs a series of edges from an initial node to other nodes and back again: Given a start node (e.g., a sample video frame), the model is expected to find its cross-modal counterpart (i.e., text describing the frame) and combine them as the present state. To do this, at the start of training, the model assumes that frames and text with the same timestamps are counterparts, but then relaxes this assumption later. The model then predicts a future state, and the node most similar to this prediction is selected. Finally, the model attempts to invert the above steps by predicting the present state backward from the future node, and thus connecting the future node back with the start node.

The discrepancy between the model’s prediction of the present from the future and the actual present is the cycle-consistency loss. Intuitively, this training objective requires the predicted future to contain enough information about its past to be invertible, leading to predictions that correspond to meaningful changes to the same entities (e.g., tomato becoming marinara sauce, or flour and eggs in a bowl becoming dough). Moreover, the inclusion of cross-modal edges ensures future predictions are meaningful in either modality.
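The round-trip objective can be made concrete with a small numeric sketch. Here a hand-made invertible linear map stands in for the learned forward and backward predictors, so this is purely illustrative:

```python
import numpy as np

# Toy cycle-consistency check: predict the future from the present, then
# predict the present back from that future. The linear maps below are
# placeholders for MMCC's learned temporal edge functions.
rng = np.random.default_rng(1)
W_forward = rng.normal(size=(8, 8))
W_backward = np.linalg.inv(W_forward)  # a perfectly invertible "predictor"

present = rng.normal(size=8)
predicted_future = W_forward @ present
recovered_present = W_backward @ predicted_future

# Cycle-consistency loss: distance between the round-trip result and the
# actual present. Training drives this toward zero, which forces the
# predicted future to retain enough information about its past.
cycle_loss = float(np.sum((recovered_present - present) ** 2))
print(cycle_loss < 1e-8)  # True for this invertible toy model
```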

To learn the temporal and cross-modal edge functions end-to-end, we use the soft attention technique, which first outputs how likely each node is to be the target node of the edge, and then “picks” a node by taking the weighted average among all possible candidates. Importantly, this cyclic graph constraint makes few assumptions for the kind of temporal edges the model should learn, as long as they end up forming a consistent cycle. This enables the emergence of long-term temporal dynamics critical for future prediction without requiring manual labels of meaningful changes.
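The weighted-average "pick" can be sketched as a generic soft-attention step (this is the standard technique, not the model's exact code):

```python
import numpy as np

def soft_pick(query, candidates, temperature=1.0):
    """Soft attention 'pick': score every candidate node against the query,
    softmax the scores into weights, and return the weighted average.
    A generic sketch of the technique described above."""
    scores = candidates @ query / temperature
    scores -= scores.max()     # subtract max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum()   # weights form a probability distribution
    return weights @ candidates, weights

rng = np.random.default_rng(2)
nodes = rng.normal(size=(4, 8))
picked, weights = soft_pick(nodes[0], nodes, temperature=0.1)
print(np.isclose(weights.sum(), 1.0))  # True
```

Because the output is a differentiable average rather than a hard choice, gradients flow through the node selection during training.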

An example of the training objective: A cycle graph is expected to be constructed between the chicken with soy sauce and the chicken in chili oil because they are two adjacent steps in the chicken’s preparation (video from HowTo100M).

Discovering Cycles in Real-World Video
MMCC is trained without any explicit ground truth, using only long video sequences and randomly sampled starting conditions (a frame or text excerpt) and asking the model to find temporal cycles. After training, MMCC can identify meaningful cycles that capture complex changes in video.

Given frames as input (left), MMCC selects relevant text from video narrations and uses both modalities to predict a future frame (middle). It then finds text relevant to this future and uses it to predict the past (right). Using its knowledge of how objects and scenes change over time, MMCC “closes the cycle” and ends up where it started (videos from HowTo100M).
The model can also start from narrated text rather than frames and still find relevant transitions (videos from HowTo100M).

Zero-Shot Applications
For MMCC to identify meaningful transitions over time in an entire video, we define a “likely transition score” for each pair (A, B) of frames in a video, according to the model's predictions — the closer B is to our model’s prediction of the future of A, the higher the score assigned. We then rank all pairs according to this score and show the highest-scoring pairs of present and future frames detected in previously unseen videos (examples below).
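The pair-scoring procedure can be sketched as follows; `predict_future` is a trivial placeholder for the trained MMCC predictor, and the frame embeddings are random:

```python
import numpy as np

# Sketch of the "likely transition score" ranking described above.
rng = np.random.default_rng(3)
frames = rng.normal(size=(6, 8))  # six frame embeddings from one video

def predict_future(x):
    return x + 0.1  # illustrative stand-in, not a real model

scores = {}
for a in range(len(frames)):
    predicted = predict_future(frames[a])
    for b in range(len(frames)):
        if a == b:
            continue
        # The closer frame B is to the predicted future of A, the higher
        # the transition score assigned to the ordered pair (A, B).
        scores[(a, b)] = -float(np.linalg.norm(frames[b] - predicted))

# Rank all pairs; the top pair is the most "likely transition" in the video.
ranked_pairs = sorted(scores, key=scores.get, reverse=True)
print(len(ranked_pairs))  # 30 ordered pairs for 6 frames
```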

The highest-scoring pairs from eight random videos, which showcase the versatility of the model across a wide range of tasks (videos from HowTo100M).

We can use this same approach to temporally sort an unordered collection of video frames without any fine-tuning by finding an ordering that maximizes the overall confidence scores between all adjacent frames in the sorted sequence.
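For a handful of frames, that ordering search can even be done by brute force. In this sketch the timestamps and score function are made up: the score peaks when frame B is exactly one step after frame A, so the ordering that maximizes the summed adjacent scores recovers the true temporal order.

```python
from itertools import permutations

# Brute-force sketch of the unshuffling idea: choose the ordering that
# maximizes the summed transition scores between adjacent frames.
true_time = {0: 3, 1: 1, 2: 0, 3: 2}  # hidden timestamp of each shuffled frame

def score(a, b):
    # Illustrative score: highest when frame b is one step after frame a.
    return -(true_time[b] - true_time[a] - 1) ** 2

best_order = max(
    permutations(true_time),
    key=lambda order: sum(score(a, b) for a, b in zip(order, order[1:])),
)
print(best_order)  # (2, 1, 3, 0): the frames sorted by their true timestamps
```

In the real model, the score would come from MMCC's learned predictions rather than known timestamps, and the exhaustive search would be replaced by something cheaper for longer sequences.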

Left: Shuffled frames from three videos. Right: MMCC unshuffles the frames. The true order is shown under each frame. Even when MMCC does not predict the ground truth, its predictions often appear reasonable, and so, it can present an alternate ordering (videos from HowTo100M).

Evaluating Future Prediction
We evaluate the model’s ability to anticipate action, potentially minutes in advance, using the top-k recall metric, which here measures a model’s ability to retrieve the correct future (higher is better). On CrossTask, a dataset of instruction videos with labels describing key steps, MMCC outperforms the previous self-supervised state-of-the-art models in inferring possible future actions.

Recall

Model          Top-1   Top-5   Top-10
Cross-modal     2.9    14.2     24.3
Repr. Ant.      3.0    13.3     26.0
MemDPC          2.9    15.8     27.4
TAP             4.5    17.1     27.9
MMCC            5.4    19.9     33.8
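The top-k recall metric used in this comparison can be sketched as follows (the action labels are toy examples, not CrossTask data):

```python
# Toy sketch of top-k recall: the fraction of examples whose ground-truth
# future appears among the model's k highest-ranked candidates.
def top_k_recall(ranked_predictions, ground_truth, k):
    hits = sum(truth in preds[:k]
               for preds, truth in zip(ranked_predictions, ground_truth))
    return hits / len(ground_truth)

ranked = [["pour", "mix", "bake"],   # model's ranked guesses, example 1
          ["mix", "bake", "pour"]]   # model's ranked guesses, example 2
truth = ["mix", "bake"]              # the action that actually came next
print(top_k_recall(ranked, truth, k=1))  # 0.0: neither truth is ranked first
print(top_k_recall(ranked, truth, k=2))  # 1.0: both appear in the top 2
```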

Conclusions
We have introduced a self-supervised method to learn temporal dynamics by cycling through narrated instructional videos. Despite the simplicity of the model’s architecture, it can discover meaningful long-term transitions in vision and language, and can be applied without further training to challenging downstream tasks, such as anticipating far-away action and ordering collections of images. An interesting future direction is transferring the model to agents so they can use it to conduct long-term planning.

Acknowledgements
The core team includes Dave Epstein, Jiajun Wu, Cordelia Schmid, and Chen Sun. We thank Alexei Efros, Mia Chiquier, and Shiry Ginosar for their feedback, and Allan Jabri for inspiration in figure design. Dave would like to thank Dídac Surís and Carl Vondrick for insightful early discussions on cycling through time in video.

Source: Google AI Blog


Beta Channel Update for Desktop

The Beta channel has been updated to 96.0.4664.45 for Windows, Mac and Linux.



A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Srinivas Sista

Analyzing a watering hole campaign using macOS exploits

To protect our users, TAG routinely hunts for 0-day vulnerabilities exploited in-the-wild. In late August 2021, TAG discovered watering hole attacks targeting visitors to Hong Kong websites for a media outlet and a prominent pro-democracy labor and political group. The watering hole served an XNU privilege escalation vulnerability (CVE-2021-30869) unpatched in macOS Catalina, which led to the installation of a previously unreported backdoor.

As is our policy, we quickly reported this 0-day to the vendor (Apple) and a patch was released to protect users from these attacks.

Based on our findings, we believe this threat actor to be a well-resourced group, likely state backed, with access to their own software engineering team based on the quality of the payload code.

In this blog we analyze the technical details of the exploit chain and share IOCs to help teams defend against similar style attacks.

Watering Hole

The websites leveraged for the attacks contained two iframes which served exploits from an attacker-controlled server—one for iOS and the other for macOS.


iOS Exploits

The iOS exploit chain used a framework based on Ironsquirrel to encrypt exploits delivered to the victim's browser. We did not manage to get a complete iOS chain this time, just a partial one where CVE-2019-8506 was used to get code execution in Safari.

macOS Exploits

The macOS exploits did not use the same framework as the iOS ones. The landing page contained a simple HTML page loading two scripts: one for Capstone.js and another for the exploit chain.


The parameter rid is a global counter which records the number of exploitation attempts. This number was in the 200s when we obtained the exploit chain.

While the JavaScript that starts the exploit chain checks whether the visitor is running macOS Mojave (10.14) or Catalina (10.15) before proceeding to run the exploits, we only observed remnants of an exploit when visiting the site with Mojave, but received the full non-encrypted exploit chain when browsing the site with Catalina.

The exploit chain combined an RCE in WebKit exploiting CVE-2021-1789, which was patched on Jan 5, 2021 (before we discovered this campaign), and a 0-day local privilege escalation in XNU (CVE-2021-30869), patched on Sept 23, 2021.

Remote Code Execution (RCE)

Loading a page with the WebKit RCE on the latest version of Safari (14.1) did not trigger the exploit, telling us the RCE was an n-day: the vulnerability had already been fixed. To verify this hypothesis, we ran git bisect and determined it was fixed in this commit.

Sandbox Escape and Local Privilege Escalation (LPE)

Capstone.js

It was interesting to see the use of Capstone.js, a port of the Capstone disassembly framework, in an exploit chain as Capstone is typically used for binary analysis. The exploit authors primarily used it to search for the addresses of dlopen and dlsym in memory. Once the embedded Mach-O is loaded, the dlopen and dlsym addresses found using Capstone.js are used to patch the Mach-O loaded in memory.


Since Capstone.js was configured for X86-64 rather than ARM, we can also infer that the targeted hardware was Intel-based Macs.


Embedded Mach-O

After the WebKit RCE succeeds, an embedded Mach-O binary is loaded into memory, patched, and run. Upon analysis, we realized this binary contained code which could escape the Safari sandbox, elevate privileges, and download a second stage from the C2.

Analyzing the Mach-O was reminiscent of a CTF reverse engineering challenge. It had to be extracted and converted into binary from a Uint32Array.


The extracted binary was heavily obfuscated with a relatively tedious encoding mechanism: each string is XOR-encoded with a different key. Fully decoding the Mach-O was necessary to obtain all the strings representing the dynamically loaded functions used in the binary. There were a lot of strings, and decoding them manually would have taken a long time, so we wrote a short Python script to make quick work of the obfuscation. The script parsed the Mach-O sections where the strings were located, decoded each string with its respective XOR key, and patched the binary with the resulting strings.
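The core decoding step is a plain single-byte XOR; the key value and sample string below are hypothetical, since the real binary used a different key per string:

```python
def xor_decode(data: bytes, key: int) -> bytes:
    """XOR every byte with a single-byte key. XOR is its own inverse,
    so the same function both encodes and decodes."""
    return bytes(b ^ key for b in data)

# Hypothetical example: an obfuscated string round-trips back to plaintext.
plaintext = b"dlopen"
obfuscated = xor_decode(plaintext, 0x5A)
assert xor_decode(obfuscated, 0x5A) == plaintext
print(obfuscated.hex())  # → 3e36352a3f34
```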


Once we had all of the strings decoded, it was time to figure out what capabilities the binary had. There was code to download a file from a C2, but we did not come across any URL strings in the Mach-O, so we checked the JavaScript and saw there were two arguments passed when the binary is run: the URL for the payload and its size.


After downloading the payload, the binary removes the file's quarantine attribute to bypass Gatekeeper. It then elevates privileges to install the payload.

N-day or 0-day?

Before further analyzing how the exploit elevated privileges, we needed to figure out if we were dealing with an N-day or a 0-day vulnerability. An N-day is a known vulnerability with a publicly available patch. Threat actors have used N-days shortly after a patch is released to capitalize on the patching delay of their targets. In contrast, a 0-day is a vulnerability with no available patch which makes it harder to defend against.

Despite the exploit being an executable instead of shellcode, it was not a standalone binary we could run in our virtual environment. It needed the address of dlopen and dlsym patched after the binary was loaded into memory. These two functions are used in conjunction to dynamically load a shared object into memory and retrieve the address of a symbol from it. They are the equivalent of LoadLibrary and GetProcAddress in Windows.


To run the exploit in our virtual environment, we decided to write a loader in Python which did the following:

  • load the Mach-O in memory
  • find the address of dlopen and dlsym
  • patch the loaded Mach-O in memory with the address of dlopen and dlsym
  • pass our payload url as a parameter when running the Mach-O

For our payload, we wrote a simple bash script which runs id and pipes the result to a file in /tmp. The result of the id command would tell us whether our script was run as a regular user or as root.
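A payload along those lines might look like this (the output path is illustrative):

```shell
#!/bin/sh
# Minimal test payload of the kind described above: record which user ran it.
# If the privilege escalation worked, the file will start with "uid=0(root)".
id > /tmp/priv_check.txt
cat /tmp/priv_check.txt
```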

Having a loader and a payload ready, we set out to test the exploit on a fresh install of Catalina (10.15) since it was the version in which we were served the full exploit chain. The exploit worked and ran our bash script as root. We updated our operating system with the latest patch at the time (2021-004) and tried the exploit again. It still worked. We then decided to try it on Big Sur (11.4) where it crashed and gave us the following exception.


The exception indicates that Apple added generic protections in Big Sur which rendered this exploit useless. Since Apple still supports Catalina and pushes security updates for it, we decided to take a deeper look into this exploit.

Elevating Privileges to Root

The Mach-O was calling a lot of undocumented functions as well as XPC calls to mach_msg with a MACH_SEND_SYNC_OVERRIDE flag. This looked similar to an earlier in-the-wild iOS vulnerability analyzed by Ian Beer of Google Project Zero. Beer was able to quickly recognize this exploit as a variant of an earlier port type confusion vulnerability he analyzed in the XNU kernel (CVE-2020-27932). Furthermore, it seems this exact exploit was presented by Pangu Lab in a public talk at zer0con21 in April 2021 and Mobile Security Conference (MOSEC) in July 2021.

In exploiting this port type confusion vulnerability, the exploit authors were able to change the mach port type from IKOT_NAMED_ENTRY to a more privileged port type like IKOT_HOST_SECURITY allowing them to forge their own sec_token and audit_token, and IKOT_HOST_PRIV enabling them to spoof messages to kuncd.

MACMA Payload

After gaining root, the downloaded payload is loaded and run in the background on the victim's machine via launchctl. The payload seems to be a product of extensive software engineering. It uses a publish-subscribe model via a Data Distribution Service (DDS) framework for communicating with the C2. It also has several components, some of which appear to be configured as modules. For example, the payload we obtained contained a kernel module for capturing keystrokes. There are also other functionalities built into the components which were not directly accessed from the binaries included in the payload but may be used by additional stages which can be downloaded onto the victim's machine.

Notable features for this backdoor include:

  • victim device fingerprinting
  • screen capture
  • file download/upload
  • executing terminal commands
  • audio recording
  • keylogging

Conclusion

Our team is constantly working to secure our users and keep them safe from targeted attacks like this one. We continue to collaborate with internal teams like Google Safe Browsing to block domains and IPs used for exploit delivery and industry partners like Apple to mitigate vulnerabilities. We are appreciative of Apple’s quick response and patching of this critical vulnerability.

For those interested in following our in-the-wild work, we will soon publish details surrounding another, unrelated campaign we discovered using two Chrome 0-days (CVE-2021-37973 and CVE-2021-37976). That campaign is not connected to the one described in today’s post.

Related IOCs

Delivery URLs

  • http://103[.]255[.]44[.]56:8372/6nE5dJzUM2wV.html
  • http://103[.]255[.]44[.]56:8371/00AnW8Lt0NEM.html
  • http://103[.]255[.]44[.]56:8371/SxYm5vpo2mGJ?rid=<redacted>
  • http://103[.]255[.]44[.]56:8371/iWBveXrdvQYQ?rid=?rid=<redacted>
  • https://appleid-server[.]com/EvgSOu39KPfT.html
  • https://www[.]apple-webservice[.]com/7pvWM74VUSn2.html
  • https://appleid-server[.]com/server.enc
  • https://amnestyhk[.]org/ss/defaultaa.html
  • https://amnestyhk[.]org/ss/4ba29d5b72266b28.html
  • https://amnestyhk[.]org/ss/mac.js

Javascript

  • cbbfd767774de9fecc4f8d2bdc4c23595c804113a3f6246ec4dfe2b47cb4d34c (capstone.js)
  • bc6e488e297241864417ada3c2ab9e21539161b03391fc567b3f1e47eb5cfef9 (mac.js)
  • 9d9695f5bb10a11056bf143ab79b496b1a138fbeb56db30f14636eed62e766f8

Sandbox escape / LPE

  • 8fae0d5860aa44b5c7260ef7a0b277bcddae8c02cea7d3a9c19f1a40388c223f
  • df5b588f555cccdf4bbf695158b10b5d3a5f463da7e36d26bdf8b7ba0f8ed144

Backdoor

  • cf5edcff4053e29cb236d3ed1fe06ca93ae6f64f26e25117d68ee130b9bc60c8 (2021 sample)
  • f0b12413c9d291e3b9edd1ed1496af7712184a63c066e1d5b2bb528376d66ebc (2019 sample)

C2

  • 123.1.170.152
  • 207.148.102.208

What should we do with old electronics?

Cleaning out a drawer or closet can be extremely therapeutic. Old clothes and items go into a donation pile; other things might be great to give away. But then… you pull open that drawer full of your old electronics: phones, speakers, music players and more. What do you do with these?

As electronics get smaller and more ubiquitous, more devices are hibernating in drawers, closets, attics and garages. Recycling electronics isn’t an everyday activity and doesn’t follow the same process as recycling normal household waste. In 2019, only about 17% of electronic waste was recycled globally.

As part of our sustainability commitments, Google has committed to including recycled materials in all our consumer hardware. The future of electronics recycling depends on developing better technologies that extract materials from discarded products, too. But that’s not the only challenge in creating effective recycling systems: Getting useful but unused products to people who need them and unusable ones into recycling centers are both essential to making electronics more sustainable.

Many cities have numerous drop-off options at retail or municipal locations, and major electronics brands also offer mail-in services for old devices. But it’s not enough to have these services available — it’s critical to truly understand what else people need in order to recycle electronics they’re no longer using.

To learn more, Google talked with individual users about their electronics recycling struggles. The lessons — which are outlined in our white paper Electronics Hibernation: Understanding Barriers to Consumer Participation in Electronics Recycling Services — were both surprising and familiar. People have relationships with their electronics that extend beyond their usefulness — the way we think about our devices is completely different from how we think about an empty juice bottle, for example. Our research identified major barriers to consumer electronics recycling, and we hope that by sharing these initial insights, we will encourage others to join the conversation and inspire new ideas.

The Awareness Barrier

Illustration of a Google Search bar with the words "recycling services near me" in it.

For starters, one issue is that people don’t know about their options — even though some of them exist in plain sight. Think about it this way: Even if you haven’t heard of a specific movie, you know the name of a popular streaming service where you could watch it. Or maybe you don’t know the name of a book everyone is talking about, but you know the bookstore where you could buy it. Electronics recycling services have nowhere near those levels of awareness, even though they are offered by major brands that people are familiar with. A quick internet search will show plenty of results, but it can create more questions than answers as consumers wade through the complexities of what devices are eligible, varying costs and deciding which services seem reputable enough to consider.

The Value Barrier

Illustration of various generations of a smartphone lined up.

An old laptop that still works seems like it should be worth something. When consumers discover their product is worth much less than they thought, it’s a disappointing moment that discourages trade-in and recycling actions alike. For some, an old smartphone might be useful as a backup if they lose or damage their newer one. Other products still have emotional value even as they sit unused — a laptop may represent cherished college years, or a music player may remind us of a fun activity. These all represent value, and ironically, that can make recycling seem like a waste.

The Data Barrier

Illustration of a laptop open; on its screen is an abstract representation of data.

Many people have electronics they don’t want or need, but that still contain their data. While the hardware might not be valuable, the documents, photos and videos often are. Finding a way to transfer data to a new device or storage solution is a daunting task that becomes more challenging with time. The older the product, the harder it is to find the right cables, set up the network and remember how to even use the software. Professional services exist, but they can be expensive, and if the data isn’t urgently needed, it's easy to put the task off until later, making things even more difficult.

The Security Barrier

Illustration of a paper shredder with documents going through it.

Even if data transfer isn’t needed, many people want to securely erase all information before donating or recycling. It’s a technical task, and the process of doing so can be different across devices and products. Similar to transferring data, figuring out how to erase data and settings can become more challenging as products age. Self-service resources exist, but the time and effort can seem monumental for a low priority task.

The Convenience Barrier

Illustration of a box with old electronics in it.

Few people think of filing tax returns as “convenient,” but it’s something that has to be done, and services exist that make it convenient enough. Handing off electronics for recycling might be objectively more convenient than filing tax returns, but for most, it’s a lower priority task, meaning the bar for convenience is higher.

If you’re trying to recycle an old device, you might only experience one or two barriers, but collectively they’re significant, and overcoming them will require new ideas. Until then, online tech support articles from most product brands are helpful in figuring out data transfer and erasure. Looking for R2, e-Stewards, or WEELABEX certification is a good first step in identifying a reputable recycler. And companies like Google, for example, provide resources on how to recycle your old and unused devices.

There’s a lot of work to be done to make it simple and sustainable to say goodbye to old products. By working together with other companies and consumers, we hope to make the sustainable choice to recycle your electronics an easier one.

Beta Channel Update for Chrome OS

The Beta channel is being updated to 96.0.4664.43 (Platform version: 14268.32.0) for most Chrome OS devices.

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).

Daniel Gagnon,
Google Chrome OS

Open source DDR controller framework for mitigating Rowhammer

Rowhammer is a hardware vulnerability that affects DRAM memory chips and can be exploited to modify memory contents, potentially providing root access to the system. It occurs because Dynamic RAM consists of multiple memory cells packed tightly together and specific access patterns can cause unwanted effects that propagate to nearby memory cells and cause bit-flips in cells which have not been accessed by the attacker.

The problem has been known for several years, but as shown by recent research from Google, performed with the open source platform developed by Antmicro that we’ll describe in this note, it has yet to be completely solved. The trend in DRAM manufacturing is to make chips denser to pack more memory into the same area, which inevitably increases interdependency between memory cells, making Rowhammer an ongoing problem.

Diagram of Rowhammer attack principle

Solutions like TRR (Target Row Refresh) introduced in newer memory chips mitigate the issue, although only in part—and attack methods like Half-Double or TRRespass keep emerging. To go beyond the all-too-often used “security through obscurity” approach, Antmicro has been helping build open source platforms which give security researchers full control over the entire technology stack and enable them to find new solutions to emerging threats.

The Rowhammer Tester platform

The Rowhammer Tester platform was developed for and with Google, who, just like Antmicro, believe that open source, well documented technical infrastructure is critical in speeding up research and increasing collaboration with the industry. In this case, we wanted to give memory security researchers and manufacturers access to a flexible platform for experimenting with new types of attacks and finding better Rowhammer mitigation techniques.

Current Rowhammer test methods involve using the chip-specific MBIST (Memory Built-In Self-Test) or costly ATE (Automated Test Equipment), which means that the existing approaches are either costly, inflexible, or both. MBISTs are specialized IP cores that test memory chips for errors. Although effective, they lack the flexibility to change the testing algorithms hardcoded into the IP core. ATE devices are usually used at foundries to run various tests on wafers. Access to these devices is limited and expensive; chip vendors have to rely on DFT (Design for Test) software to produce compressed test patterns, which require less access time to ATE while ensuring high test coverage.

The main goal of the project was to address those limitations by providing an FPGA-based Rowhammer testing platform that enables full control over the commands sent to the DRAM chip. This is important because DRAM memory requires specialized hardware controllers, so any software-based testing approach has to communicate with the DRAM indirectly via the controller, which distances researchers from the main research subject: the behaviour of the DRAM chip itself.

Platform architecture

Diagram of platform architecture

The Rowhammer Tester consists of two parts: the FPGA gateware that is loaded onto the hardware platform, and a set of Python scripts used to communicate with the FPGA system from the user’s PC. Internally, all the important modules of the FPGA system are connected to a shared WishBone bus. We use an EtherBone bridge to interface with the FPGA WishBone bus from the host PC; EtherBone is a protocol that allows regular WishBone transactions to be performed over Ethernet. This way, all of the communication between the user’s PC and the FPGA happens efficiently over an Ethernet cable.

The FPGA gateware has four main parts: a Bulk transfer module, a Payload Executor, the LiteDRAM controller, and a VexRiscv CPU. The Bulk transfer module provides an efficient way of filling and testing the whole memory contents. It supports user-configurable access and data patterns, using high-performance DMA to make use of the full bandwidth offered by the LiteDRAM controller. When the Bulk transfer module is used, LiteDRAM handles all the required DRAM logic, including row activation, refreshing, etc., and ensures that all DRAM timings are met.
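The fill-and-verify idea behind the Bulk transfer module can be sketched in miniature. The pattern values below are classic memory-test conventions, and the byte-array memory is our own simplification; the real module works on the DRAM via DMA, not on a Python buffer.

```python
# Miniature fill-and-verify memory test, mimicking the idea behind the
# Bulk transfer module: write a known pattern everywhere, read everything
# back, and report the addresses whose contents no longer match.

PATTERNS = {
    "all_ones": 0xFF,
    "checkerboard": 0x55,  # alternating 01010101 bits
}

def fill(memory, pattern):
    """Write the same byte pattern to every address."""
    for addr in range(len(memory)):
        memory[addr] = pattern

def verify(memory, pattern):
    """Return the addresses where the read-back value differs."""
    return [addr for addr, value in enumerate(memory) if value != pattern]

mem = bytearray(1024)
fill(mem, PATTERNS["checkerboard"])
mem[100] ^= 0x01  # inject a single bit-flip for demonstration
print(verify(mem, PATTERNS["checkerboard"]))  # → [100]
```

In a real Rowhammer run, the addresses reported by the verify pass are the bit-flips induced by hammering, and their physical location relative to the aggressor rows is what the researcher is after.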

If more fine-grained control is required, our Rowhammer Tester provides the Payload Executor module. The Payload Executor can be thought of as a simple processor that executes our custom instruction set. Most of the instructions map directly to DRAM commands, with minimal control flow provided by the LOOP instruction. A user can compile a “program” and load it into the Rowhammer Tester’s instruction SRAM, where it will then be executed. To execute a program, the Payload Executor disconnects the LiteDRAM controller and sends the requested command sequences directly to the DRAM chip via the PHY’s DFI interface. After execution, the LiteDRAM controller is reconnected and the contents of the memory can be inspected to search for potential bit-flips.
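To give a flavor of what compiling such a “program” might look like, here is a sketch of assembling a payload for a Payload-Executor-style engine. The opcodes, field widths, and LOOP semantics below are hypothetical and made up for illustration; the real Rowhammer Tester defines its own instruction encoding.

```python
# Hypothetical payload assembler for a Payload-Executor-style engine.
# Opcode values and the 3-bit/29-bit word layout are illustrative only.

OPCODES = {"NOOP": 0x0, "ACT": 0x1, "READ": 0x2, "PRE": 0x3, "LOOP": 0x7}

def encode(op, arg=0):
    """Pack a 3-bit opcode and a 29-bit argument into one 32-bit word."""
    return (OPCODES[op] << 29) | (arg & 0x1FFF_FFFF)

# Hammer row 5: activate, read, precharge, then loop the sequence.
program = [
    encode("ACT", 5),       # open row 5
    encode("READ", 5),      # read from it
    encode("PRE", 5),       # close (precharge) it
    encode("LOOP", 10_000), # repeat the preceding sequence 10,000 times
]

print([f"0x{word:08x}" for word in program])
```

Because each word maps almost directly to a DRAM command, a researcher retains cycle-level control over the access pattern, which is exactly what software running behind a conventional memory controller cannot get.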

In our platform, we use LiteDRAM, an open source controller that we have been using in multiple different projects. It is part of the wider LiteX ecosystem, which is also a very popular choice for many of our FPGA projects. The controller supports different memory types (SDR, DDR, DDR2, DDR3, DDR4, …), as well as many FPGA platforms (Lattice ECP5, Xilinx Series 6, 7, UltraScale, UltraScale+, …). Since it is an open source FPGA IP core, we have complete control over its internals. That means two things: firstly, we were able to easily integrate it with the rest of our system and contribute back to improve LiteDRAM itself. Secondly, and perhaps even more importantly, groups focused on researching new memory attack methods can modify the controller in order to expose existing vulnerabilities. The results of such experiments should motivate vendors to work on mitigating the uncovered flaws, rather than rely on the “security by obscurity” approach.

Our Rowhammer Tester is fully open source. We provide an extensive set of Python scripts for controlling the board, performing Rowhammer attacks and harvesting the results. For more complex testing you can use the so-called Playbook, a framework that lets you describe complex testing scenarios using JSON files, and which provides some predefined attack configurations.
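A Playbook-style scenario might be built and round-tripped like this. To be clear, the field names and values below are invented for illustration; the actual Playbook defines its own JSON schema.

```python
import json

# Illustrative Playbook-style attack scenario. The schema (field names,
# value formats) is made up for demonstration, not the real Playbook format.
scenario = {
    "name": "double_sided_hammer",
    "aggressor_rows": [4, 6],   # hammer both neighbors of the victim
    "victim_row": 5,
    "iterations": 1_000_000,
    "fill_pattern": "0x55",     # checkerboard fill before hammering
}

# Serialize to JSON (as the Playbook consumes files) and read it back.
text = json.dumps(scenario, indent=2)
loaded = json.loads(text)
assert loaded == scenario
print(loaded["name"])  # → double_sided_hammer
```

Keeping scenarios in plain JSON means an attack configuration can be versioned, shared between labs, and replayed bit-for-bit, which matters when trying to reproduce a published result.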

Antmicro is actively collaborating with Google and memory makers to help study the Rowhammer vulnerability, contributing to standardization efforts under the JEDEC initiative. The platform has already been used with great success in state-of-the-art Rowhammer research (as in the discovery of a new type of Rowhammer attack called Half-Double, mentioned previously).

New DRAM PHYs

Initially, our Rowhammer Tester targeted two easily available and price-optimized boards: Digilent Arty (DDR3, Xilinx Series 7 FPGA) and Xilinx ZCU104 (DDR4, Xilinx UltraScale+ FPGA). They were a good starting point, as DDR3 and DDR4 PHYs for these boards were already supported by LiteDRAM. After the initial version of the Rowhammer Tester was ready and tested on these boards, proving the validity of the concept, the next step was to cover more memory types, some of which find their way into many devices that we interact with daily. A natural target was LPDDR4 DRAM—a relatively new type of memory designed for low-power operation with throughputs up to 3200 MT/s. To this end, we designed our dedicated LPDDR4 Test Board, which has already been covered in a previous blog note.

LPDDR4 Test Board

The design is quite interesting because we decided to put the LPDDR4 memory chips on a module, which is against the usual practice of putting LPDDR4 directly on the PCB, as close as possible to the CPU/FPGA, to minimize trace impedance. The reason was simple: we needed the platform to be able to test many memory types interchangeably, without desoldering and resoldering parts, complicated interposers, or other niche techniques—the platform is supposed to be open and approachable to all.

Alongside the hardware platform, we had to develop a new LPDDR4 PHY IP, as LiteDRAM didn’t support LPDDR4 at that time, resolving problems related to the differences between LPDDR4 and previously supported DRAM types, such as the new training modes. After a phase of verification and testing on our hardware, the newly implemented PHY was contributed back to LiteDRAM.

What’s next?

The project does not stop there; we are already working on an LPDDR5 PHY for next-gen low-power memory support. This latest low-power memory standard published by JEDEC poses some new and interesting challenges, including a new clocking architecture and operation at an even lower voltage. As bleeding-edge technology, LPDDR5 chips are still hardly available on the market, but we are continuing our work to prepare LPDDR5 support for our future hardware platform in simulation, using custom and vendor-provided simulation models.

The fact that our platform has already been successfully used to demonstrate new types of Rowhammer attacks proves that open source test platforms can make a difference. We are pleased to see a growing collaborative ecosystem around the project, in a joint effort to ensure that we find robust and transparent mitigation techniques for all variants of Rowhammer for the foreseeable future.

Ultimately, our work with the Rowhammer Tester platform shows that by using open source, vendor-neutral IP, tools and hardware, we can create better platforms for more effective research and product development. In the future, building on the success of the FPGA version, our work as part of the CHIPS Alliance will most likely lead to demonstrating the LiteDRAM controller in ASIC form, unlocking even more performance based on the same solid platform.

If you are interested in the state-of-the-art, high-speed FPGA I/O and extreme customizability that open source FPGA blocks can offer, get in touch with Antmicro at [email protected] to hire our development services for your next product.

Originally posted on the Antmicro blog.


By guest author Michael Gielda, Antmicro

This creator built an LGBTQ+-friendly site for car talk

Queer automotive educator, journalist and influencer Chaya Milchtein has carved out an unexpected niche at the intersection of the LGBTQ+ community, car repair and empowerment. Starting with blog posts that answered common questions about auto maintenance, she gradually built up her brand, Mechanic Shop Femme, into a mini-empire that spans workshops, one-on-one consultations, articles and podcasts, and more.

It wasn’t a path she ever expected. On her own at the age of 18, Chaya was “desperate” for a job. A connection landed her a position in the auto department at Sears, even though she didn’t even have a driver’s license when she interviewed for the job. But she really enjoyed working with customers and explaining what was wrong with their vehicles. “I’m what I like to call a translator — I translate complex car topics and information into language that the average consumer can understand,” she explains.

While she enjoyed the work, she felt she had reached a ceiling by 2017. Climbing the corporate ladder was a possibility, but she didn’t want to stop working directly with customers, the part of the job that gave her the most joy. Meanwhile, friends in the queer community were regularly reaching out for car advice. A career coach suggested starting a blog — and even though Chaya didn’t have a lot of confidence in her writing skills, she jumped in.

The blog section of the Mechanic Shop Femme website features thumbnails and preview text about two car-focused posts.

Chaya’s posts demystify all things automotive for an inclusive audience.

Almost immediately, Chaya started planning her next steps and trying to figure out how to turn her concept into something bigger. In addition to the blog, she started offering online classes on car topics, which led to more classes and speaking engagements. She also launched a career as a freelance writer, landing bylines in publications like Real Simple and Shondaland.

All of that came in handy when she got laid off from her job in April 2020 and decided to scale up her efforts. Mechanic Shop Femme is now her full-time gig. Chaya explains how she managed to build a following and unite a diverse range of interests under the umbrella of her website.

Show your whole self

From day one, Chaya was open about who she was, from the name of her site to posts about her wife. “It was important to me that I could show up as my full self,” she says. She also recognized that her unique point of view is an attraction. “There’s lots of places where you can learn about cars, but none quite from my perspective,” she points out. “Cars are what draws people to me. And they learned that I was queer and obviously saw that I was fat and where I come from, and they would stick around for the full meal. Because that's what was interesting.”

Showing that she’s part of the LGBTQ+ community also helps build trust among an audience that may feel intimidated by or excluded from car-centric settings. “I want to make sure that the people who come to my platform know that they’re not just there to learn about cars, that the space I created is not just something where they’re an afterthought, but that they’re welcome.”

Venture outside your niche

One piece of advice Chaya often heard was to focus on one topic. “While that might be great advice for some people, that's not necessarily good advice for everybody,” she says. On the blog, Chaya weaves in a queer or body-positivity angle on everything from fashion to travel in addition to her car content. Exploring different topics helps attract different and new readers, and it keeps her from burning out on car talk.

A tattooed woman in a swimsuit splashes in a pool. A headline below says, “I tried on 10 plus size swimsuits to help you find the perfect swimsuit for your body.”

Besides cars, Chaya regularly posts about fashion, body positivity and sex. Her plus-size swimsuit lookbook is one of the most popular posts on Mechanic Shop Femme.

Treat your site like a business

Chaya refers to her work as an octopus with different tentacles — her blog, her classes, her journalism and her consulting, with her website at the center. “If you want to book a call with me, if you want to pick a class, if you want to read my writing, my website is going to have all of those things,” she says. From the start, it was important for her to own her platform rather than rely solely on social media, where influencers have less control. “I’ve spent a lot of time on TikTok, it’s part of my overall business strategy,” she explains. “But I’m aware this platform can go away, unlike my site, where I own the content.”