The Dev channel has been updated to 119.0.6034.6 for Windows, Mac and Linux.
A partial list of changes is available in the Git log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
We’re expanding client-side encryption in Gmail to Android and iOS devices, so you can read and write encrypted messages directly from your device. This allows your users to work with your most sensitive data from anywhere on their mobile devices while adhering to compliance and regulatory requirements. The Gmail mobile apps support encrypted mail natively, so users don't need to download multiple apps, or navigate to an external portal, to access their encrypted messages.
While Workspace encrypts data at rest and in transit by using secure-by-design cryptographic libraries, client-side encryption ensures that you have sole control over encryption keys and access to your data. Client-side encryption ensures sensitive data in the email body and attachments are indecipherable to Google servers — you retain control over encryption keys and the identity service to access those keys. For more information, check out our original announcement and the Workspace blog.
Getting started
Admins: Admins will need to enable the Android and iOS clients in the CSE admin interface for users to have access. This can be done in the Admin Console by going to Security > Access and data control > Client-side encryption > Identity provider configuration.
End users: To add client-side encryption to any message, click the lock icon, select additional encryption, then compose your message and add attachments as normal. Visit the Help Center to learn more about using client-side encryption for Gmail.
In 2022, we introduced in-line threading for Google Chat, and since March 2023, all newly created spaces in Google Chat have been in-line threaded by default.
On September 30, 2023, we will begin taking the next step toward a single, streamlined flow of conversation in Google Chat: all existing spaces organized by conversation topic will be upgraded to the in-line threaded experience. We’d like to share more information regarding the migration, what to expect, as well as what’s next for Google Chat.
Who’s impacted
Admins and end users
Why it’s important
Whether it’s a 1:1 conversation or a space, Google Chat plays a critical role in collaborating and communicating. Our goal is to continue evolving Chat to best serve our users and keep teams productive and connected.
We’ve heard from our customers that the way conversations were structured in spaces could be improved. Specifically, users found topic-based conversations restrictive and tricky to navigate: they struggled to keep track of individual topics as new replies were added, and often found themselves scrolling back through threads to locate relevant topics. Over time, topic threads have become noisier and more complex for many users.
What became clear was the need for a continuous conversation flow. In-line threading lets users reply to any message and create a discussion separate from the main conversation, and users have reported much higher satisfaction with it compared to topic-based spaces. Users can also follow specific threads, receiving notifications for replies and @ mentions in that thread, which helps cut through clutter and stay on top of what matters most.
What to expect during the upgrade period
Beginning September 30, 2023, we will start upgrading conversations grouped by topic to in-line threading. We anticipate this change will be completed by March 2024.
To minimize disruption to day-to-day work, we will do our best to initiate upgrades during off-peak times on weekends. If you are using Chat during these upgrades, spaces that are being upgraded will be inaccessible for a few minutes. Admins can use this form to request a preferred month and whether they would like the upgrade to take place on weekdays or weekends. Please note this will be on a best-effort basis, and the form must be submitted by October 15, 2023. Based on the preference selected by a customer, we’ll choose an upgrade window on a weekday or a weekend in the selected month and upgrade all eligible spaces during off-peak hours.
Prior to the upgrade
A minimum of two weeks prior to the upgrade taking place, a banner will be displayed in impacted spaces notifying users about the impending change. Users will be able to click through to the Help Center for more information.
During the upgrade
As mentioned above, we plan to execute these changes during off-peak times to help minimize disruption. If you’re using an impacted space when the upgrade begins, most functions, such as sending and receiving messages, will be unavailable. Typically, this will only last for a few minutes, after which users simply need to refresh the browser tab to access and use the newly upgraded space. Other direct messages, group conversations and in-line threaded spaces will not be impacted and will remain accessible during the upgrade.
After the upgrade
Once the upgrade is complete, the space will use the in-line threaded model. Messages sent before the upgrade will be retained, and will be arranged chronologically, instead of by topic. There will also be a separator titled “Begin New Topic” to indicate every time a new topic was started.
In some cases, when people have replied to older topics, the new chronological order takes precedence. This means that messages may not appear next to the original topic, but rather according to their timestamp. When this occurs, the new response will quote the last message it is replying to, as seen in the image below.
You’ll also see a separator between the last message sent before the migration, and a message indicating that the space has been upgraded to a space with in-line replies. Going forward, all new messages will feature in-line threaded functionality.
More new ways to work with threads
To further elevate the in-line threading experience, we’ll be introducing several new features during the remainder of the year and into 2024. Here’s a preview of some of those features — be sure to subscribe to the Workspace Updates blog for the latest updates on availability. Meanwhile, please refer to this post for even more features coming to Google Chat.
Resizable threads panel
You’ll be able to easily resize the threads sidebar to best suit your screen size or increase the focus on threads most important to you.
Home shortcut
“Home” is the place to manage and catch up on your Chat messages. Messages from your followed threads will be shown in Home. You can also filter to only see your followed threads and unread conversations. You’ll be able to open a conversation or reply from the Home view.
Select "Home" from the sidebar.
You can choose to get notified for all messages and automatically follow all threads within a space, ensuring you don’t miss any updates.
Thread participants
You will be able to see the avatars of users who have replied to a thread, giving you better context to decide whether the thread is relevant to you.
Getting started
Admins:
If you want to request the upgrade in a specific month, please fill out this form by October 15, 2023 (this is on a best-effort basis).
You can find more details about the upgrade including FAQs here. If you have any questions or concerns, please reach out to your Google contact.
Posted by Dohyun Kim, Developer Relations Engineer, Android Games
Finding the balance between graphics quality and performance
Ares: Rise of Guardians is a mobile-to-PC sci-fi MMORPG developed by Second Dive, a game studio based in Korea known for its expertise in developing action RPG series, and published by Kakao Games. Set in a vast universe with a detailed, futuristic background, Ares is full of exciting gameplay and beautifully rendered characters, including combatants wearing battle suits. However, because of these richly detailed graphics, some users’ devices struggled to maintain performance during gameplay.
For some users, their device would overheat after just a few minutes of gameplay and enter a thermally throttled state. In this state, the CPU and GPU frequencies are reduced, which affects the game’s performance and causes the FPS to drop. As soon as the lower FPS improved the thermal situation, the FPS would increase again and the cycle would repeat. This FPS fluctuation made the game feel janky.
Adjust the performance in real time with Android Adaptability
To solve this problem, Kakao Games used Android Adaptability and Unity Adaptive Performance to improve the performance and thermal management of their game.
Android Adaptability is a set of tools and libraries for understanding and responding to changing performance, thermal, and user situations in real time. These include the Android Dynamic Performance Framework’s thermal APIs, which provide information about the thermal state of a device, and the PerformanceHint API, which helps Android choose the optimal CPU operating point and core placement. Both APIs work with the Unity Adaptive Performance package to help developers optimize their games.
Android Adaptability and Unity Adaptive Performance work together to adjust the graphics settings of your app or game to match the capabilities of the user’s device. As a result, it can improve performance, reduce thermal throttling and power consumption, and preserve battery life.
Results
After integrating Adaptive Performance, Ares was better able to manage its thermal situation, which resulted in less throttling. As a result, users were able to enjoy a higher frame rate, and FPS stability increased from 75% to 96%.
In the charts below, the blue line indicates the thermal warning level. The bottom line (0.7) indicates no warning, the midline (0.8) is throttling imminent, and the upper line (0.9) is throttling. As you can see in the first chart, before implementing Android Adaptability, throttling happened after about 16 minutes of gameplay. In the second chart, you can see that after integration, throttling didn’t occur until around 22 minutes.
Kakao Games also wanted to reduce device heating, which they knew wasn’t possible with a continuously high graphic quality setting. The best practice is to gradually lower the graphical fidelity as device temperature increases to maintain a constant framerate and thermal equilibrium. So Kakao Games created a six-step change sequence with Android Adaptability, offering stable FPS and lower device temperatures. Automatic changes in fidelity are reflected in the in-game graphic quality settings (resolution, texture, shadow, effect, etc.) in the settings menu. Because some users want the highest graphic quality even if their device can’t sustain performance at that level, Kakao Games gave them the option to manually disable Unity Adaptive Performance.
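To make the idea of a fidelity ladder concrete, here is a small illustrative sketch in Python-style pseudocode. The helper name and thresholds are our own assumptions, not Kakao Games’ actual Unity/C# integration; it only shows the kind of control logic such a six-step sequence implies, using the thermal warning scale from the charts above.

```python
QUALITY_LEVELS = 6  # the six-step fidelity sequence described above


def adjust_quality(current_level: int, thermal_warning: float) -> int:
    """Step graphics fidelity down as the thermal warning level rises,
    and back up once the device has thermal headroom again.

    thermal_warning follows the scale shown in the charts above:
    ~0.7 = no warning, ~0.8 = throttling imminent, ~0.9 = throttling.
    (Illustrative helper only; the real integration uses Unity Adaptive
    Performance callbacks with the Android provider.)
    """
    if thermal_warning >= 0.8 and current_level > 0:
        return current_level - 1   # shed CPU/GPU load before throttling starts
    if thermal_warning <= 0.7 and current_level < QUALITY_LEVELS - 1:
        return current_level + 1   # headroom available, restore fidelity
    return current_level           # hold steady near thermal equilibrium
```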
Get started with Android Adaptability
Android Adaptability and Unity Adaptive Performance are now available to all Android game developers using the Android provider on most Android devices running API level 30 or higher (thermal APIs) and API level 31 or higher (PerformanceHint API). Developers can use the Android provider starting with Adaptive Performance version 5.0.0. The thermal APIs are integrated with Adaptive Performance to help developers easily retrieve device thermal information, and the PerformanceHint API is called automatically on every Update() without any additional work.
Hi everyone! We've just released Chrome Beta 118 (118.0.5993.32) for Android. It's now available on Google Play.
You can see a partial list of the changes in the Git log. For details on new features, check out the Chromium blog, and for details on web platform updates, check here.
If you find a new issue, please let us know by filing a bug.
Posted by Zhengqi Li and Noah Snavely, Research Scientists, Google Research
A mobile phone’s camera is a powerful tool for capturing everyday moments. However, capturing a dynamic scene using a single camera is fundamentally limited. For instance, if we wanted to adjust the camera motion or timing of a recorded video (e.g., to freeze time while sweeping the camera around to highlight a dramatic moment), we would typically need an expensive Hollywood setup with a synchronized camera rig. Would it be possible to achieve similar effects solely from a video captured using a mobile phone’s camera, without a Hollywood budget?
In “DynIBaR: Neural Dynamic Image-Based Rendering”, a best paper honorable mention at CVPR 2023, we describe a new method that generates photorealistic free-viewpoint renderings from a single video of a complex, dynamic scene. Neural Dynamic Image-Based Rendering (DynIBaR) can be used to generate a range of video effects, such as “bullet time” effects (where time is paused and the camera is moved at a normal speed around a scene), video stabilization, depth of field, and slow motion, from a single video taken with a phone’s camera. We demonstrate that DynIBaR significantly advances video rendering of complex moving scenes, opening the door to new kinds of video editing applications. We have also released the code on the DynIBaR project page, so you can try it out yourself.
Given an in-the-wild video of a complex, dynamic scene, DynIBaR can freeze time while allowing the camera to continue to move freely through the scene.
Background
The last few years have seen tremendous progress in computer vision techniques that use neural radiance fields (NeRFs) to reconstruct and render static (non-moving) 3D scenes. However, most of the videos people capture with their mobile devices depict moving objects, such as people, pets, and cars. These moving scenes lead to a much more challenging 4D (3D + time) scene reconstruction problem that cannot be solved using standard view synthesis methods.
Standard view synthesis methods output blurry, inaccurate renderings when applied to videos of dynamic scenes.
Other recent methods tackle view synthesis for dynamic scenes using space-time neural radiance fields (i.e., Dynamic NeRFs), but such approaches still exhibit inherent limitations that prevent their application to casually captured, in-the-wild videos. In particular, they struggle to render high-quality novel views from videos featuring long time duration, uncontrolled camera paths and complex object motion.
The key pitfall is that they store a complicated, moving scene in a single data structure. In particular, they encode scenes in the weights of a multilayer perceptron (MLP) neural network. MLPs can approximate any function — in this case, a function that maps a 4D space-time point (x, y, z, t) to an RGB color and density that we can use in rendering images of a scene. However, the capacity of this MLP (defined by the number of parameters in its neural network) must increase according to the video length and scene complexity, and thus, training such models on in-the-wild videos can be computationally intractable. As a result, we get blurry, inaccurate renderings like those produced by DVS and NSFF (shown below). DynIBaR avoids creating such large scene models by adopting a different rendering paradigm.
DynIBaR (bottom row) significantly improves rendering quality compared to prior dynamic view synthesis methods (top row) for videos of complex dynamic scenes. Prior methods produce blurry renderings because they need to store the entire moving scene in an MLP data structure.
Image-based rendering (IBR)
A key insight behind DynIBaR is that we don’t actually need to store all of the scene contents in a video in a giant MLP. Instead, we directly use pixel data from nearby input video frames to render new views. DynIBaR builds on an image-based rendering (IBR) method called IBRNet that was designed for view synthesis for static scenes. IBR methods recognize that a new target view of a scene should be very similar to nearby source images, and therefore synthesize the target by dynamically selecting and warping pixels from the nearby source frames, rather than reconstructing the whole scene in advance. IBRNet, in particular, learns to blend nearby images together to recreate new views of a scene within a volumetric rendering framework.
DynIBaR: Extending IBR to complex, dynamic videos
To extend IBR to dynamic scenes, we need to take scene motion into account during rendering. Therefore, as part of reconstructing an input video, we solve for the motion of every 3D point, where we represent scene motion using a motion trajectory field encoded by an MLP. Unlike prior dynamic NeRF methods that store the entire scene appearance and geometry in an MLP, we only store motion, a signal that is more smooth and sparse, and use the input video frames to determine everything else needed to render new views.
We optimize DynIBaR for a given video by taking each input video frame, rendering rays to form a 2D image using volume rendering (as in NeRF), and comparing that rendered image to the input frame. That is, our optimized representation should be able to perfectly reconstruct the input video.
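For readers less familiar with volume rendering, below is a minimal NumPy sketch of the standard NeRF-style compositing step for a single ray, with our own variable names. It is illustrative only; DynIBaR additionally warps sample points into nearby source frames and blends their colors, as illustrated in the figure that follows.

```python
import numpy as np


def composite_ray(colors, densities, deltas):
    """Standard NeRF-style volume rendering along one ray.

    colors:    (N, 3) RGB color predicted at each sample point
    densities: (N,)   volume density (sigma) at each sample point
    deltas:    (N,)   distance between consecutive sample points
    """
    alphas = 1.0 - np.exp(-densities * deltas)            # opacity of each sample
    # Transmittance: probability the ray reaches sample i without being absorbed.
    transmittance = np.cumprod(
        np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10])
    )
    weights = transmittance * alphas                      # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)        # final pixel color
```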
We illustrate how DynIBaR renders images of dynamic scenes. For simplicity, we show a 2D world, as seen from above. (a) A set of input source views (triangular camera frusta) observe a cube moving through the scene (animated square). Each camera is labeled with its timestamp (t-2, t-1, etc.). (b) To render a view from a camera at time t, DynIBaR shoots a virtual ray through each pixel (blue line), and computes colors and opacities for sample points along that ray. To compute those properties, DynIBaR projects those samples into other views via multi-view geometry, but first, we must compensate for the estimated motion of each point (dashed red line). (c) Using this estimated motion, DynIBaR moves each point in 3D to the relevant time before projecting it into the corresponding source camera, to sample colors for use in rendering. DynIBaR optimizes the motion of each scene point as part of learning how to synthesize new views of the scene.
However, reconstructing and deriving new views for a complex, moving scene is a highly ill-posed problem, since there are many solutions that can explain the input video — for instance, it might create disconnected 3D representations for each time step. Therefore, optimizing DynIBaR to reconstruct the input video alone is insufficient. To obtain high-quality results, we also introduce several other techniques, including a method called cross-time rendering. Cross-time rendering refers to the use of the state of our 4D representation at one time instant to render images from a different time instant, which encourages the 4D representation to be coherent over time. To further improve rendering fidelity, we automatically factorize the scene into two components, a static one and a dynamic one, modeled by time-invariant and time-varying scene representations respectively.
Creating video effects
DynIBaR enables various video effects. We show several examples below.
Video stabilization
We use a shaky, handheld input video to compare DynIBaR’s video stabilization performance to existing 2D video stabilization and dynamic NeRF methods, including FuSta, DIFRINT, HyperNeRF, and NSFF. We demonstrate that DynIBaR produces smoother outputs with higher rendering fidelity and fewer artifacts (e.g., flickering or blurry results). In particular, FuSta yields residual camera shake, DIFRINT produces flicker around object boundaries, and HyperNeRF and NSFF produce blurry results.
Simultaneous view synthesis and slow motion
DynIBaR can perform view synthesis in both space and time simultaneously, producing smooth 3D cinematic effects. Below, we demonstrate that DynIBaR can take video inputs and produce smooth 5X slow-motion videos rendered using novel camera paths.
Video bokeh
DynIBaR can also generate high-quality video bokeh by synthesizing videos with dynamically changing depth of field. Given an all-in-focus input video, DynIBaR can generate high-quality output videos with varying out-of-focus regions that call attention to moving content (e.g., the running person and dog) and static content (e.g., trees and buildings) in the scene.
Conclusion
DynIBaR is a leap forward in our ability to render complex moving scenes from new camera paths. While it currently involves per-video optimization, we envision faster versions that can be deployed on in-the-wild videos to enable new kinds of effects for consumer video editing using mobile devices.
Acknowledgements
DynIBaR is the result of a collaboration between researchers at Google Research and Cornell University. The key contributors to the work presented in this post include Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, and Noah Snavely.
We’re announcing Google-Extended, a new control that web publishers can use to manage whether their sites help improve Bard and Vertex AI generative APIs, including futu…
Ramnath Kumar, Pre-Doctoral Researcher, and Arun Sai Suggala, Research Scientist, Google Research
Deep neural networks (DNNs) have become essential for solving a wide range of tasks, from standard supervised learning (image classification using ViT) to meta-learning. The most commonly-used paradigm for learning DNNs is empirical risk minimization (ERM), which aims to identify a network that minimizes the average loss on training data points. Several algorithms, including stochastic gradient descent (SGD), Adam, and Adagrad, have been proposed for solving ERM. However, a drawback of ERM is that it weights all the samples equally, often ignoring the rare and more difficult samples, and focusing on the easier and abundant samples. This leads to suboptimal performance on unseen data, especially when the training data is scarce.
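For reference, ERM trains the network parameters $\theta$ to minimize the average loss over the $n$ training examples (notation ours):

$$\min_{\theta}\ \frac{1}{n}\sum_{i=1}^{n}\ell\big(f_{\theta}(x_i),\,y_i\big),$$

so every example contributes equally to the objective, no matter how easy or hard it is.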
To overcome this challenge, recent works have developed data re-weighting techniques for improving ERM performance. However, these approaches focus on specific learning tasks (such as classification) and/or require learning an additional meta model that predicts the weights of each data point. The presence of an additional model significantly increases the complexity of training and makes them unwieldy in practice.
In “Stochastic Re-weighted Gradient Descent via Distributionally Robust Optimization” we introduce a variant of the classical SGD algorithm that re-weights data points during each optimization step based on their difficulty. Stochastic Re-weighted Gradient Descent (RGD) is a lightweight algorithm that comes with a simple closed-form expression, and can be applied to solve any learning task using just two lines of code. At any stage of the learning process, RGD simply reweights a data point as the exponential of its loss. We empirically demonstrate that the RGD reweighting algorithm improves the performance of numerous learning algorithms across various tasks, ranging from supervised learning to meta learning. Notably, we show improvements over state-of-the-art methods on DomainBed and Tabular classification. Moreover, the RGD algorithm also boosts performance for BERT using the GLUE benchmarks and ViT on ImageNet-1K.
Distributionally robust optimization
Distributionally robust optimization (DRO) is an approach that assumes a “worst-case” data distribution shift may occur, which can harm a model's performance. If a model has focused on identifying a few spurious features for prediction, these “worst-case” data distribution shifts could lead to the misclassification of samples and, thus, a performance drop. DRO optimizes the loss for samples in that “worst-case” distribution, making the model robust to perturbations (e.g., removing a small fraction of points from a dataset, minor up/down weighting of data points, etc.) in the data distribution. In the context of classification, this forces the model to place less emphasis on noisy features and more emphasis on useful and predictive features. Consequently, models optimized using DRO tend to have better generalization guarantees and stronger performance on unseen samples.
Inspired by these results, we develop the RGD algorithm as a technique for solving the DRO objective. Specifically, we focus on Kullback–Leibler divergence-based DRO, where one adds perturbations to create distributions that are close to the original data distribution in the KL divergence metric, enabling a model to perform well over all possible perturbations.
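To make this more concrete, one common penalized form of a KL-based DRO objective is shown below (for illustration only; the exact formulation used in the paper may differ). Its inner maximization over perturbed distributions $q$ has a closed-form solution that exponentially tilts the data distribution toward high-loss points, which is precisely the kind of reweighting RGD implements:

$$\min_{\theta}\,\max_{q}\;\Big(\mathbb{E}_{q}[\ell_{\theta}] - \tfrac{1}{\beta}\,\mathrm{KL}(q\,\|\,p)\Big)
\;=\;\min_{\theta}\;\tfrac{1}{\beta}\log \mathbb{E}_{p}\big[e^{\beta \ell_{\theta}}\big],
\qquad q^{*}(x)\,\propto\,p(x)\,e^{\beta \ell_{\theta}(x)}.$$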
Figure illustrating DRO. In contrast to ERM, which learns a model that minimizes expected loss over the original data distribution, DRO learns a model that performs well on several perturbed versions of the original data distribution.
Stochastic re-weighted gradient descent
Consider a random subset of samples (called a mini-batch), where each data point has an associated loss Li. Traditional algorithms like SGD give equal importance to all the samples in the mini-batch, and update the parameters of the model by descending along the averaged gradients of the loss of those samples. With RGD, we reweight each sample in the mini-batch and give more importance to points that the model identifies as more difficult. To be precise, we use the loss as a proxy to calculate the difficulty of a point, and reweight it by the exponential of its loss. Finally, we update the model parameters by descending along the weighted average of the gradients of the samples.
Due to stability considerations, in our experiments we clip and scale the loss before computing its exponential. Specifically, we clip the loss at some threshold T, and multiply it by a scalar that is inversely proportional to the threshold. An important aspect of RGD is its simplicity: it doesn’t rely on a meta model to compute the weights of data points. Furthermore, it can be implemented with two lines of code and combined with any popular optimizer (such as SGD, Adam, and Adagrad).
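As a concrete illustration, here is a minimal PyTorch-style sketch of the update described above. The code and the clipping threshold are our own assumptions; see the paper and its released code for the exact clipping and scaling used.

```python
import torch
import torch.nn.functional as F


def rgd_step(model, optimizer, inputs, labels, clip_threshold=2.0):
    """One RGD-style update: upweight hard (high-loss) examples in the mini-batch."""
    optimizer.zero_grad()
    logits = model(inputs)
    # Per-sample losses (no reduction) act as the proxy for example difficulty.
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    # Clip and scale before exponentiating, for numerical stability.
    scaled = torch.clamp(per_sample_loss, max=clip_threshold) / clip_threshold
    # The RGD reweighting: exponential of the (clipped, scaled) loss.
    weights = torch.exp(scaled).detach()
    # Weighted average loss -> weighted average of per-sample gradients.
    loss = (weights * per_sample_loss).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The “two lines of code” mentioned above correspond to computing the weights and taking the weighted mean; everything else is a standard training step.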
Figure illustrating the intuitive idea behind RGD in a binary classification setting. Feature 1 and Feature 2 are the features available to the model for predicting the label of a data point. RGD upweights the data points with high losses that have been misclassified by the model.
Results
We present empirical results comparing RGD with state-of-the-art techniques on standard supervised learning and domain adaptation (refer to the paper for results on meta learning). In all our experiments, we tune the clipping level and the learning rate of the optimizer using a held-out validation set.
Supervised learning
We evaluate RGD on several supervised learning tasks, including language, vision, and tabular classification. For the task of language classification, we apply RGD to the BERT model trained on the General Language Understanding Evaluation (GLUE) benchmark and show that RGD outperforms the BERT baseline by +1.94% with a standard deviation of 0.42%. To evaluate RGD’s performance on vision classification, we apply RGD to the ViT-S model trained on the ImageNet-1K dataset, and show that RGD outperforms the ViT-S baseline by +1.01% with a standard deviation of 0.23%. Moreover, we perform hypothesis tests to confirm that these results are statistically significant with a p-value that is less than 0.05.
RGD’s performance on language and vision classification using the GLUE and ImageNet-1K benchmarks. Note that MNLI, QQP, QNLI, SST-2, MRPC, RTE and CoLA are the diverse datasets that comprise the GLUE benchmark.
For tabular classification, we use MET as our baseline, and consider various binary and multi-class datasets from UC Irvine's machine learning repository. We show that applying RGD to the MET framework improves its performance by 1.51% and 1.27% on binary and multi-class tabular classification, respectively, achieving state-of-the-art performance in this domain.
Performance of RGD for classification of various tabular datasets.
Domain generalization
To evaluate RGD’s generalization capabilities, we use the standard DomainBed benchmark, which is commonly used to study a model’s out-of-domain performance. We apply RGD to FRR, a recent approach that improved out-of-domain benchmarks, and show that RGD with FRR performs an average of 0.7% better than the FRR baseline. Furthermore, we confirm with hypothesis tests that most benchmark results (except for Office Home) are statistically significant with a p-value less than 0.05.
Performance of RGD on DomainBed benchmark for distributional shifts.
Class imbalance and fairness
To demonstrate that models learned using RGD perform well despite class imbalance, where certain classes in the dataset are underrepresented, we compare RGD’s performance with ERM on long-tailed CIFAR-10. We report that RGD improves the accuracy of baseline ERM by an average of 2.55% with a standard deviation of 0.23%. Furthermore, we perform hypothesis tests and confirm that these results are statistically significant with a p-value of less than 0.05.
Performance of RGD on the long-tailed CIFAR-10 benchmark for the class imbalance domain.
Limitations
The RGD algorithm was developed using popular research datasets, which were already curated to remove corruptions (e.g., noise and incorrect labels). Therefore, RGD may not provide performance improvements in scenarios where training data has a high volume of corruptions. A potential approach to handle such scenarios is to apply an outlier removal technique to the RGD algorithm. This outlier removal technique should be capable of filtering out outliers from the mini-batch and sending the remaining points to our algorithm.
Conclusion
RGD has been shown to be effective on a variety of tasks, including out-of-domain generalization, tabular representation learning, and class imbalance. It is simple to implement and can be seamlessly integrated into existing algorithms with just two lines of code change. Overall, RGD is a promising technique for boosting the performance of DNNs, and could help push the boundaries in various domains.
Acknowledgements
The paper described in this blog post was written by Ramnath Kumar, Arun Sai Suggala, Dheeraj Nagaraj and Kushal Majmundar. We extend our sincere gratitude to the anonymous reviewers, Prateek Jain, Pradeep Shenoy, Anshul Nasery, Lovish Madaan, and the numerous dedicated members of the machine learning and optimization team at Google Research India for their invaluable feedback and contributions to this work.
Video of men and women exercising and interacting while all wearing Charge 6.
Work out smarter and understand your body better with the new Fitbit Charge 6, available for pre-order today. (1)
Charge 6 helps you stay on track with your goals thanks to advanced health sensors that, combined with a new machine learning algorithm, bring you our most accurate heart rate tracking on a Fitbit tracker yet,(2) and the ability to connect to compatible gym equipment and fitness apps to see your real-time heart rate during workouts. Plus, it’s helpful when you’re on the go with its new haptic side button, 7 days of battery life (3), and the ability to do even more right from your wrist — like control YouTube Music and use Google Maps and Wallet.
Here’s a look at all the ways the Fitbit Charge 6 can take your health and fitness up a notch.
Take a beat with improved heart rate tracking
Charge 6 debuts the most accurate heart rate tracking on a Fitbit tracker yet, thanks to an improved machine learning algorithm that brings over innovation from the Pixel Watch and has been optimised for a tracker. Heart rate tracking during vigorous activities — like HIIT workouts, spinning and rowing — is up to 60% more accurate than before, giving you added confidence in your health stats.(4) Better heart rate accuracy means even more precise readings for you — from calories and Active Zone Minutes to your Daily Readiness Score (5) and Sleep Score. You can still assess your heart rhythm for atrial fibrillation on-wrist with the ECG app,(6) and get high and low heart rate notifications, keeping your beat in check at all times.
Man fist bumps while running wearing the Charge 6 in Coral.
See your live heart-pumping progress and connect to fitness apps and machines
Connect your Charge 6 to compatible exercise apps and machines to stay motivated at home or at the gym. Easily and securely connect to compatible exercise equipment with encrypted Bluetooth — from partners like NordicTrack, Peloton and Concept2 (7) — to see your real-time heart rate displayed live during a workout. You can also connect to see your real-time heart rate within popular Android and iOS phone or tablet fitness apps such as Peloton.
Woman streams her real-time heart rate from Charge 6 in Coral to the screen of a stationary NordicTrack rower.
Fuel your fitness routine with more ways to track workouts and stay motivated
With even more personalised ways to track and stay motivated during workouts, you’re sure to get your movement in. Choose from more than 40 exercise modes — including 20 new options like HIIT, strength training and snowboarding — to get important workout stats. Need to track an outdoor workout? Leave your compatible phone(8) at home thanks to Charge 6’s built-in GPS that allows you to easily track your distance.
With YouTube Music controls (9)(10) on Charge 6, you can be the DJ of your workouts as you start, stop and skip over 100 million songs right from your wrist. When you want to change things up, YouTube Music Premium can also recommend workout mixes based on your exercise.
Woman in a wheelchair plays pickleball while wearing Charge 6 with a sport band.
Bring the helpful tools you need, on the go
For the first time, we’re bringing helpful Google tools(11) to a tracker. Charge 6 will have Google Maps and Google Wallet, making it convenient to go from workouts to errands and everywhere in between. Navigate on the go using Google Maps to get turn-by-turn directions right on your wrist, or grab a post-workout snack using Google Wallet to make contactless payments. With just the right smarts you need for your daily routine, it’s never been easier to explore a new running route and quickly tap to pay for a recovery smoothie on the way home.
Charge 6 also features our first accessibility feature on a Fitbit device: Zoom + Magnification. With just a couple of taps anywhere on the screen, you can magnify on-screen words if it’s difficult to read small text or you prefer a larger font.
Biker pays for a snack using Google Wallet on Charge 6 in Coral.
Make sense of your wellbeing
Charge 6 health and wellness features are built from Fitbit’s advanced sensors that power in-depth insights. Here are some of the ways it helps you keep tabs on your health:
Wake up to your Sleep Score each morning to assess how well you slept based on the time you’re in different sleep stages, your heart rate while sleeping, how restless you were and more.
Manage your stress with an electrodermal activity (EDA) scan to measure your body’s physical responses in the moment and get actionable guidance on how to manage your stress. Check your Stress Management Score to see how well your body is handling stress and make a plan for the day.
Access other health metrics like blood oxygen saturation (SpO2),(12) heart rate variability, breathing rate and more.
With six months of Fitbit Premium (13) included, you can access thousands of workout sessions like HIIT, cycling, dance cardio and more, as well as a range of mindfulness sessions.
The all-new Fitbit app helps you focus on your goals and understand the metrics that matter to you like Daily Readiness Score, a Premium feature that helps you understand your body’s readiness to tackle a tough workout or take a day to recover, with daily activity recommendations based on your score.
Daily Readiness Score in the newly redesigned Fitbit app.
Ready to get that Fitbit feeling? Beginning today, you can pre-order Charge 6 online for $289.95 at the Google Store, Fitbit.com or major retailers. Available from October 12th. It comes in three colour options: Obsidian, Porcelain and Coral. There are also new accessories to fit your style for any occasion available on Fitbit.com — whether you’re getting the new Charge 6 or want to freshen up another Fitbit device. Check out the Ocean woven band and Hazel sport band for Charge 6 and Charge 5; a Desert Tan leather and Ocean woven sport band for Fitbit smartwatches; and translucent bands and a matte black stainless steel mesh band for Inspire 3.
New Ocean woven band; new Hazel sport band for Charge 5 and Charge 6.
Posted by TJ Varghese, Director of Product Management
(1) Fitbit Charge 6 works with most phones running Android 9.0 or newer or iOS 15 or newer and requires a Google Account and internet access. Some features require a Fitbit mobile app and/or a paid subscription. See Fitbit.com/devices for more information.
(2) Compared to other Fitbit fitness trackers as of Fall 2023. Does not include Pixel or Fitbit smartwatches. Performance of heart rate tracking may be affected by physiology, location of device and your movements and activity.
(3) Average battery life is approximate and is based on testing conducted in California in mid 2023 on pre-production hardware and software, using default settings with a median Fitbit user battery usage profile across a mix of data, standby, and use of other features. Battery life depends on features enabled, usage, environment and many other factors. Use of certain features will decrease battery life. Actual battery life may be lower.
(4) Compared to Charge 5. Based on 90th percentile BPM errors from 2023 testing of individuals engaged in HIIT, spinning and rowing using pre-production Charge 6 and Charge 5. Percentage improvement does not relate to other exercises.
(5) Daily Readiness Score requires a Fitbit Premium membership. Premium content recommendations are not available in all locales and may be in English only.
(6) The Fitbit ECG app is only available in select countries. Not intended for use by people under 22 years old. See fitbit.com/ecg for additional details.
(7) Compatible with select workout machines that support the Bluetooth Heart Rate Profile, and coming soon to more. See here for more information on Charge 6-compatible machines.
(8) Fitbit Charge 6 works with most phones running Android 9.0 or newer or iOS 15 or newer and requires a Google Account and internet access. Some features require a Fitbit mobile app and/or a paid subscription. See Fitbit.com/devices for more information.
(9) YouTube Music controls require a compatible phone within Bluetooth range and a paid YouTube Music Premium subscription. Data rates may apply.
(10) YouTube Music controls requires a paid YouTube Music Premium subscription. Try a 1-month free trial to unlock more of the YouTube love. Terms apply.
(11) Google apps and services require a compatible phone within Bluetooth range of your Fitbit device and are not available in all countries or languages. Data rates may apply.
(12) Not available in all countries. The SpO2 feature is not intended to diagnose or treat any medical condition or for any other medical purpose. It is intended to help you manage your well-being and keep track of your information. This feature requires more frequent charging.
(13) With eligible device purchase. New and returning Premium members only. Must activate membership within 60 days of device activation. Valid form of payment required. $9.99/month after expiration of 6-month membership. Cancel anytime. Membership cannot be gifted. Content and features may change. See g.co/fitbitpremium/tos for more details.