The dev channel has been updated to 113.0.5668.0 for Windows, Linux and Mac.
A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Unless otherwise indicated, the features below are fully launched or in the process of rolling out (rollouts should take no more than 15 business days to complete), launching to both Rapid and Scheduled Release at the same time (if not, each stage of rollout should take no more than 15 business days to complete), and available to all Google Workspace and G Suite customers.
Enhancing the Google Drive mobile experience on Android tablets
We’re optimizing the Google Drive app for tablet displays through several modernizations, including shifting the navigation bar to the side, optimizing visual components to take advantage of the larger screen, and making it easier to see file details for a selected file. | Rolling out to Rapid Release domains now; launch to Scheduled Release domains planned for April 3, 2023.
Previous announcements
The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.
Improvements to content organization in Google Docs
We’re rolling out improvements to the formatting and customization options for tables of contents in Google Docs. We’re also reorganizing the options included in the table properties sidebar in Docs to make it easier for you to find and utilize table formatting options. Learn more.
Add or remove client-side encryption from a Google Doc
You can now choose to add client-side encryption to an existing document or remove it from an already encrypted document (File > Make a copy > Add/Remove additional encryption). This update gives you the flexibility to control encryption as your documents and projects evolve and progress. | Learn more.
Today, we’re excited to announce the launch of our 2023 Google Mobile Ads SDK Developer Survey. As part of our efforts to continue updating the AdMob and Ad Manager products, we’d like to hear from you about where we should focus our efforts. This includes product feedback as well as feedback on our guides, code samples and other resources. Your feedback will help shape our future product and resource roadmap.
This anonymous survey should only take about 15 minutes to complete and will provide our team with your valuable feedback as we plan for the months ahead. Whether you’re an engineer, Ad Ops personnel, or a PM, your feedback on AdMob, Ad Manager, and the Google Mobile Ads SDK is valuable to us. We appreciate you taking the time to help improve our developer experience!
Posted by Boris Babenko, Software Engineer, and Akib Uddin, Product Manager, Google Research
Last year we presented results demonstrating that a deep learning system (DLS) can be trained to analyze external eye photos and predict a person’s diabetic retinal disease status and elevated glycated hemoglobin (or HbA1c, a biomarker that indicates the three-month average level of blood glucose). It was previously unknown that external eye photos contained signals for these conditions. This exciting finding suggested the potential to reduce the need for specialized equipment since such photos can be captured using smartphones and other consumer devices. Encouraged by these findings, we set out to discover what other biomarkers can be found in this imaging modality.
In “A deep learning model for novel systemic biomarkers in photos of the external eye: a retrospective study”, published in Lancet Digital Health, we show that a number of systemic biomarkers spanning several organ systems (e.g., kidney, blood, liver) can be predicted from external eye photos with an accuracy surpassing that of a baseline logistic regression model that uses only clinicodemographic variables, such as age and years with diabetes. The comparison with a clinicodemographic baseline is useful because risk for some diseases could also be assessed using a simple questionnaire, and we seek to understand if the model interpreting images is doing better. This work is in the early stages, but it has the potential to increase access to disease detection and monitoring through new non-invasive care pathways.
A model generating predictions for an external eye photo.
Model development and evaluation
To develop our model, we worked with partners at EyePACS and the Los Angeles County Department of Health Services to create a retrospective de-identified dataset of external eye photos and measurements in the form of laboratory tests and vital signs (e.g., blood pressure). We filtered down to 31 lab tests and vitals that were more commonly available in this dataset and then trained a multi-task DLS with a classification “head” for each lab and vital to predict abnormalities in these measurements.
Importantly, evaluating predictive performance for many abnormalities in parallel can be problematic because of the higher chance of finding a spurious result (the multiple comparisons problem). To mitigate this, we first evaluated the model on a portion of our development dataset. We then narrowed the list down to the nine most promising prediction tasks and evaluated the model on our test datasets while correcting for multiple comparisons (a rough illustration of the issue follows the table). These nine tasks, their associated anatomy, and their significance for associated diseases are listed in the table below.
Albumin: indication of hypoalbuminemia, which can be due to decreased production of albumin from liver disease or increased loss of albumin from kidney disease.
Platelet count: indication of thrombocytopenia, which can be due to decreased production of platelets from bone marrow disorders, such as leukemia or lymphoma, or increased destruction of platelets due to autoimmune disease or medication side effects.
White blood cell count: indication of leukopenia, which can affect the body’s ability to fight infection.
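To see why testing many candidate biomarkers at once inflates the chance of a spurious finding, consider a simplified illustration (assuming independent tests at significance level α, which is a simplification; the study's actual correction procedure may differ). The probability of at least one false positive across m tests is

$$P(\text{at least one false positive}) = 1 - (1 - \alpha)^{m},$$

which for α = 0.05 and m = 31 candidate targets is already about 1 − 0.95³¹ ≈ 0.80. Narrowing to nine pre-specified tasks and correcting for multiple comparisons keeps this risk under control.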
Key results
As in our previous work, we compared our external eye model to a baseline model (a logistic regression model taking clinicodemographic variables as input) by computing the area under the receiver operating characteristic curve (AUC). The AUC ranges from 0 to 100%, with 50% indicating random performance and higher values indicating better performance. For all but one of the nine prediction tasks, our model statistically significantly outperformed the baseline model. In terms of absolute performance, the model’s AUCs ranged from 62% to 88%. While these levels of accuracy are likely insufficient for diagnostic applications, they are in line with other initial screening tools, like mammography and pre-screening for diabetes, which are used to help identify individuals who may benefit from additional testing. And as a non-invasive, accessible modality, taking photographs of the external eye may offer the potential to help screen and triage patients for confirmatory blood tests or other clinical follow-up.
Results on the EyePACS test set, showing AUC performance of our DLS compared to a baseline model. The variable “n” refers to the total number of datapoints, and “N” refers to the number of positives. Error bars show 95% confidence intervals computed using the DeLong method. †Indicates that the target was pre-specified as secondary analysis; all others were pre-specified as primary analysis.
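As a point of reference for reading these AUCs (this is the standard probabilistic interpretation, not something specific to this study), the AUC equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case:

$$\mathrm{AUC} = P\left(s_{\text{positive}} > s_{\text{negative}}\right),$$

so 50% corresponds to ranking cases at random and 100% to a model that always ranks positives above negatives.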
The external eye photos used in both this and the prior study were collected using tabletop cameras that include a headrest for patient stabilization and produce high-quality images with good lighting. Since image quality may be worse in other settings, we wanted to explore the extent to which the DLS is robust to quality changes, starting with image resolution. Specifically, we scaled the images in the dataset down to a range of sizes and measured the performance of the DLS when retrained to handle the downsampled images.
Below we show a selection of results from this experiment (see the paper for more complete results). They demonstrate that the DLS is fairly robust and, in most cases, outperforms the baseline model even when the images are scaled down to 150x150 pixels. At 22,500 pixels, this is well under 0.1 megapixels, far below the resolution of a typical smartphone camera.
Effect of input image resolution. Top: Sample images scaled to different sizes for this experiment. Bottom: Comparison of the performance of the DLS (red) trained and evaluated on different image sizes and the baseline model (blue). Shaded regions show 95% confidence intervals computed using the DeLong method.
Conclusion and future directions
Our previous research demonstrated the promise of the external eye modality. In this work, we performed a more exhaustive search to identify the possible systemic biomarkers that can be predicted from these photos. Though these results are promising, many steps remain to determine whether technology like this can help patients in the real world. In particular, as mentioned above, the imagery in our studies was collected using large tabletop cameras in a setting that controlled factors such as lighting and head positioning. Furthermore, the datasets used in this work consist primarily of patients with diabetes and did not have sufficient representation of a number of important subgroups; more focused data collection for DLS refinement and evaluation on a more general population and across subgroups will be needed before considering clinical use.
We are excited to explore how these models generalize to smartphone imagery given the potential reach and scale that this enables for the technology. To this end, we are continuing to work with our co-authors at partner institutions like Chang Gung Memorial Hospital in Taiwan, Aravind Eye Hospital in India, and EyePACS in the United States to collect datasets of imagery captured on smartphones. Our early results are promising and we look forward to sharing more in the future.
Acknowledgements
This work involved the efforts of a multidisciplinary team of software engineers, researchers, clinicians and cross functional contributors. Key contributors to this project include: Boris Babenko, Ilana Traynis, Christina Chen, Preeti Singh, Akib Uddin, Jorge Cuadros, Lauren P. Daskivich, April Y. Maa, Ramasamy Kim, Eugene Yu-Chuan Kang, Yossi Matias, Greg S. Corrado, Lily Peng, Dale R. Webster, Christopher Semturs, Jonathan Krause, Avinash V Varadarajan, Naama Hammel and Yun Liu. We also thank Dave Steiner, Yuan Liu, and Michael Howell for their feedback on the manuscript; Amit Talreja for reviewing code for the paper; Elvia Figueroa and the Los Angeles County Department of Health Services Teleretinal Diabetic Retinopathy Screening program staff for data collection and program support; Andrea Limon and Nikhil Kookkiri for EyePACS data collection and support; Dr. Charles Demosthenes for extracting the data and Peter Kuzmak for getting images for the VA data. Last but not least, a special thanks to Tom Small for the animation used in this blog post.
Concepts is a digital illustration app created by TopHatch that helps creative thinkers bring their visions to life. The app uses an infinitely large canvas format, so its users can sketch, plan, and edit all of their big ideas without limitation, while its vector-based ink provides the precision needed to refine and reorganize their ideas as they go.
For Concepts, having more on-screen real estate means more comfort, more creative space, and a better user experience overall. That’s why the app was specifically designed with large screens in mind. Concepts’ designers and engineers are always exploring new ways to expand the app’s large screen capabilities on Android. Thanks to Android’s suite of developer tools and resources, that’s easier than ever.
Evaluating an expanding market of devices
Large-screen devices are the fastest-growing segment of Android users, with more than 270 million users across tablets, foldables, and ChromeOS devices. It’s no surprise, then, that Concepts, an app that benefits users by giving them more screen space, was drawn to the format. The Concepts team was also excited to innovate on foldables, because pairing the large-screen experience with greater portability gives users more opportunities to use the app in the ways that work best for them.
The team at Concepts spends a lot of time evaluating new large screen technologies and experiences, trying to find what hardware or software features might benefit the app the most. The team imagines and storyboards several scenarios, shares the best ones with a close-knit beta group, and quickly builds prototypes to determine whether these updates improve the UX for its larger user base.
For instance, Concepts’ designers recently tested the Samsung Galaxy Fold and found that users benefited from having more screen space when the device was folded. With help from the Jetpack WindowManager library, Concepts’ developers implemented a feature to automatically collapse the UI when the Galaxy’s large screen was folded, allowing for more on-screen space than if the UI were expanded.
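As a rough sketch of how an app might react to fold state with Jetpack WindowManager (illustrative only, not Concepts’ actual implementation; `setToolbarsCollapsed` is a hypothetical helper):

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.lifecycle.Lifecycle
import androidx.lifecycle.lifecycleScope
import androidx.lifecycle.repeatOnLifecycle
import androidx.window.layout.FoldingFeature
import androidx.window.layout.WindowInfoTracker
import kotlinx.coroutines.launch

class DrawingActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        lifecycleScope.launch {
            // Collect window layout changes only while the activity is at least STARTED.
            repeatOnLifecycle(Lifecycle.State.STARTED) {
                WindowInfoTracker.getOrCreate(this@DrawingActivity)
                    .windowLayoutInfo(this@DrawingActivity)
                    .collect { layoutInfo ->
                        // A FoldingFeature is present on foldables; its absence means
                        // a regular flat display.
                        val fold = layoutInfo.displayFeatures
                            .filterIsInstance<FoldingFeature>()
                            .firstOrNull()
                        // Hypothetical app-specific helper: free up canvas space
                        // by collapsing toolbars when a fold is detected.
                        setToolbarsCollapsed(collapsed = fold != null)
                    }
            }
        }
    }

    private fun setToolbarsCollapsed(collapsed: Boolean) {
        // UI adjustment (hiding panels, shrinking the toolbar, etc.) would go here.
    }
}
```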
Concepts’ first release for Android was optimized for ChromeOS and, because of this, supporting resizable windows was important to the user experience from the very beginning. Initially, the team needed physical devices to test various screen sizes. Now, the Concepts team can use Android’s resizable emulator, which makes testing different screen sizes much easier.
Android’s APIs and toolkit carry the workload
The developers’ goal with Concepts is to make the illustration experience feel as natural as putting pen to paper. For the Concepts team, this meant achieving as close to zero lag as possible between the stylus tip and the lines drawn on the Concepts canvas.
When Concepts’ engineers first created the app, they put a lot of effort into creating low-latency drawing themselves. Now, Android’s graphical APIs eliminate the complexity of creating efficient inking.
“The hardware to support low-latency inking with higher refresh rate screens and more accurate stylus data keeps getting better,” said David Brittain, co-founder and CEO of TopHatch, parent company of Concepts. “Android’s mature set of APIs make it easy.”
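As a simplified illustration of one piece of that puzzle (not Concepts’ implementation, and omitting the front-buffered rendering and motion prediction that production low-latency inking also relies on), batched stylus samples can be read from a MotionEvent’s history so strokes stay smooth even when input arrives faster than frames are drawn:

```kotlin
import android.content.Context
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.Path
import android.view.MotionEvent
import android.view.View

// Illustrative sketch: dense stroke capture using batched (historical) motion samples.
class InkView(context: Context) : View(context) {

    private val path = Path()
    private val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        style = Paint.Style.STROKE
        strokeWidth = 4f
    }

    override fun onTouchEvent(event: MotionEvent): Boolean {
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> path.moveTo(event.x, event.y)
            MotionEvent.ACTION_MOVE -> {
                // Batched samples arrive in the event's history; using them keeps
                // strokes smooth even when input outpaces the frame rate.
                for (i in 0 until event.historySize) {
                    path.lineTo(event.getHistoricalX(i), event.getHistoricalY(i))
                }
                path.lineTo(event.x, event.y)
            }
        }
        invalidate()
        return true
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        canvas.drawPath(path, paint)
    }
}
```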
Concepts’ engineers also found that the core Android View APIs take care of most of the workload for supporting tablets and foldables, and the app makes heavy use of custom Views and ViewGroups. Its color wheel, for example, is a custom View that draws to a Canvas and uses Animators for the reveal animation. View, Canvas, and Animator are all classes from the Android SDK.
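A minimal sketch of that pattern (a simplified, hypothetical wheel, not Concepts’ actual color wheel) might look like this:

```kotlin
import android.animation.ValueAnimator
import android.content.Context
import android.graphics.Canvas
import android.graphics.Color
import android.graphics.Paint
import android.util.AttributeSet
import android.view.View

// Simplified wheel that sweeps into view with a ValueAnimator-driven reveal.
class SimpleWheelView @JvmOverloads constructor(
    context: Context,
    attrs: AttributeSet? = null
) : View(context, attrs) {

    private val paint = Paint(Paint.ANTI_ALIAS_FLAG)
    private var revealFraction = 0f

    fun playRevealAnimation() {
        ValueAnimator.ofFloat(0f, 1f).apply {
            duration = 300L
            addUpdateListener { animator ->
                revealFraction = animator.animatedValue as Float
                invalidate() // redraw with the updated reveal fraction
            }
            start()
        }
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        val cx = width / 2f
        val cy = height / 2f
        val radius = minOf(cx, cy) * revealFraction
        // Draw a handful of hue segments as a stand-in for a full color wheel.
        val segments = 12
        for (i in 0 until segments) {
            paint.color = Color.HSVToColor(floatArrayOf(i * 360f / segments, 1f, 1f))
            canvas.drawArc(
                cx - radius, cy - radius, cx + radius, cy + radius,
                i * 360f / segments, 360f / segments, true, paint
            )
        }
    }
}
```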
“Android’s tools and platform are making it easier to address the variety of screen sizes and input methods, with well-structured APIs for developing and increasing the number of choices for testing. Plus, Kotlin allows us to create concise, readable code,” said David.
Concepts’ users prefer large screens
Tablets and foldables represent the bulk of Concepts’ investment and user base, and the company doesn’t see that changing any time soon. Currently, tablet users deliver 50% higher revenue per user than smartphone users. Tablets also account for eight of the top 10 most frequently used devices among Concepts’ users, with the other two being ChromeOS devices.
Additionally, Concepts’ monthly users spend 70% more time engaging with the app on tablets than on traditional smartphones. The application’s rating is also 0.3 stars higher on tablets.
“We’re looking forward to future improvements in platform usability and customization while increasing experimentation with portable form factors. Continued efforts in this area will ensure high user adoption well into the future,” said David.
Start developing for large screens today
Learn how you can reach a growing audience of users by increasing development for large screens and foldables today.
Posted by Yasmine Evjen, Community lead, Android DevRel
The community of Android developers is at the heart of everything we do. Seeing the community come together to build new things, encourage one another, and share their knowledge inspires us to keep pushing the limits of Android.
At the core of this is our Android Google Developer Experts, a global community that comes together to share best practices through speaking, open-source contributions, workshops, and articles. This is a caring community whose members mentor and support each other and aren’t afraid to get their hands dirty with early-access Android releases, providing feedback to make each one the best release for developers across the globe.
We asked, “What do you love most about being in the #AndroidDev and Google Developer Expert community?”
Gema Socorro says, “I love helping other devs in their Android journey,” and Jaewoong Eum shares the joy of “Learning, building, and sharing innovative Android technologies for everyone.”
Hear from the Google Developer Expert Community
We also sat down with Ahmed Tikiwa, Annyce Davis, Dinorah Tovar, Harun Wangereka, Madona S. Wambua, and Zarah Dominguez to hear about their journeys as Android developers and GDEs and what the role means to them. Watch the conversation on The Android Show below.
Annyce Davis, VP of Engineering at Meetup, shares, “The community is a great sounding board to solve problems, and helps me stay technical and keep learning.”
Does the community inspire you? Get involved by speaking at your local developer conferences, sharing your latest Android projects, and not being afraid to experiment with new technology. This year, we’re spotlighting community projects! Tag us in your blogs, videos, tips, and tricks to be featured in the latest #AndroidSpotlight.
LTS-108 is being updated in the LTS channel to 108.0.5359.224 (Platform Version: 15183.86.0) for most ChromeOS devices. Want to know more about Long Term Support? Click here.
This update contains multiple security fixes, including:
[1415366] Critical CVE-2023-0941: Use after free in Prompts