Google Workspace Updates Weekly Recap – January 12, 2024

4 New updates

Unless otherwise indicated, the features below are available to all Google Workspace customers, and are fully launched or in the process of rolling out. Rollouts should take no more than 15 business days to complete if launching to both Rapid and Scheduled Release at the same time. If not, each stage of rollout should take no more than 15 business days to complete.


Version history limits for Apps Script projects 
For all new scripts, you’ll be able to create and save up to 200 versions of your script. If needed, you can permanently delete a script version from the project history page. | This is available now to all Google Workspace customers. | Learn more using our developer documentation on working with Apps Script versions
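For developers who manage versions programmatically, the Apps Script API also exposes a projects.versions.create method. Below is a minimal TypeScript sketch, assuming you already have a script ID and an OAuth access token with the appropriate Apps Script scope; the identifiers and description text are placeholders, not part of the announcement.

    // Hedged sketch: create a new version of an Apps Script project via the
    // Apps Script API (projects.versions.create). scriptId and accessToken
    // are placeholders the caller must supply.
    async function createVersion(scriptId: string, accessToken: string): Promise<void> {
      const res = await fetch(
        `https://script.googleapis.com/v1/projects/${scriptId}/versions`,
        {
          method: 'POST',
          headers: {
            'Authorization': `Bearer ${accessToken}`,
            'Content-Type': 'application/json',
          },
          // A short description helps identify the version on the project history page.
          body: JSON.stringify({ description: 'Snapshot before refactor' }),
        },
      );
      const version = await res.json();
      console.log(`Created version ${version.versionNumber}`);
    }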


Share a link to a specific time in a Google Drive video 
We’re adding new functionality to the Drive sharing button that lets you share timestamped links to specific parts of a video. On web, simply navigate to drive.google.com > find and open a video file > play the video (you can pause the playback before performing the following steps) > select the dropdown on the “Share” button in the top-right corner > select “Copy link to this time” > send the link. | Rolling out now to Rapid Release and Scheduled Release domains. | Available to all Google Workspace customers and users with personal Google Accounts. | Learn more about copying a specific time in the video

Introducing dropdown options on the sharing button in Google Docs, Sheets, Slides and Drawings
We’re adding a new feature that ensures a seamless sharing experience across Workspace. In Google Docs, Sheets, Slides and Drawings, you will now see a dropdown on the Share button that surfaces quick actions, such as pending access requests and the “Copy link” option. | Rolling out now to Rapid Release and Scheduled Release domains. | Available to all Google Workspace customers and users with personal Google Accounts.

Using functions in Connected Sheets for BigQuery 
Today, Connected Sheets for BigQuery supports 23 Sheets functions, such as AVERAGE and XLOOKUP. However, all of these functions behave somewhat differently than their native counterparts. Thus, to help Connected Sheets users write better functions, we now display context-aware Help Center content in Sheets. The ‘formula help’ shows descriptions for Connected Sheets functions when writing a formula that would query BigQuery, and otherwise shows descriptions of native Sheets functions. | This is available now to all Google Workspace customers and users with personal Google Accounts. | Learn more about the XLOOKUP function.


Previous announcements

The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Updates to metrics in Google Drive Apps Reports and Reports API 
We’re making some updates to the Google Drive metrics in the Admin Console Apps reports and the Reports API. As a result of these improvements, admins who analyze metrics will have more reporting clarity and can better understand activity trends within their domain. | Learn more about metrics in Drive Apps Reports and Reports API. 

Easily share Google Drive files to Google Calendar meeting attendees 
We’re introducing the option to share any file with all meeting participants on a Google Calendar invite via the sharing dialog within a file. | Learn more about sharing Drive files to Calendar. 

Google Meet is now available on Logitech Android appliances 
Google Meet is now supported on Logitech’s Rally Bar and Rally Bar Mini Android-based appliances for collaboration rooms and spaces of just about any size. After initial setup, admins can easily enroll, manage, and monitor these devices using the Google Admin console. Google Meet is supported as a video conferencing provider on CollabOS v1.11. | Learn more about Meet on Logitech Android appliances.

Google Meet hardware devices from Poly now support interoperability with Cisco Webex and Zoom 
We’re expanding the existing interoperability between Google Meet, Cisco Webex, and Zoom to include Android-based Meet hardware devices from Poly. Specifically, these devices include the Poly Studio X30, X50, X52, and X70. | Learn more about interoperability with Cisco Webex and Zoom.

Extending Trusted Types to Gmail
We’re excited to announce the expansion of Trusted Types to Gmail. This provides a defense against DOM XSS and further enhances our advanced data protection controls to keep users and data safe across more of the apps they use every day. | Learn more about Trusted Types.


Completed rollouts

The features below completed their rollouts to Rapid Release domains, Scheduled Release domains, or both. Please refer to the original blog posts for additional details.

Rapid Release Domains: 

Scheduled Release Domains: 

Rapid and Scheduled Release Domains: 

For a recap of announcements in the past six months, check out What’s new in Google Workspace (recent releases). 

Extending Trusted Types to Gmail

What’s changing

Last year, we improved the client-side security of Google Docs, Sheets, Slides, Forms, Sites, Drawings, Drive, and Calendar with Trusted Types. This browser-based runtime feature limits the uses of Document Object Model (DOM) APIs that are used by the apps listed above or third-party extensions. Trusted Types also reduce the possibility of Document Object Model Cross Site Scripting (DOM XSS), which continues to be one of the most critical threats to web security. 

DOM XSS occurs when a cyber attacker injects malicious code into a web page, which can then be executed by the victim's browser. This can allow the cyber attacker to steal cookies, hijack sessions, and even take control of the victim's computer. 

To defend against this, we’re excited to announce the expansion of Trusted Types to Gmail. This will provide a defense against DOM XSS and further enhance our advanced data protection controls to keep users and data safe across more of the apps they use every day.


Who’s impacted 

Developers relying on any Chrome extensions that modify DOM APIs.


Additional details 

This new enforcement mode will require third-party extensions to use typed objects instead of strings when assigning values to DOM APIs. Once Trusted Types are fully enforced, the Trusted Types directive will be present in the Content Security Policy (CSP) header: 

Content-Security-Policy: require-trusted-types-for 'script';report-uri https://mail.google.com/mail/cspreport 


Getting started 

  • Admins: There is no admin control for this feature. 
  • Developers: 
    • To make code Trusted Types compliant, signal to the browser that data being used within the context of these DOM APIs is trustworthy by creating a Trusted Type special object. 
    • There are several ways to be Trusted Types compliant, such as removing the offending code, using a library (such as safevalues or DOMPurify), or creating a Trusted Types policy (see the sketch after this list). To ensure a seamless experience for users, we recommend employing these techniques before Trusted Types enforcement is rolled out. Failure to make code Trusted Types compliant may cause feature breakages for third-party extensions, as their DOM manipulations will be blocked by the browser.
  • End users: There is no end user setting for this feature. 
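
To make the policy option concrete, here is a minimal TypeScript sketch of a Trusted Types policy. The policy name ("extension-policy"), the element ID, and the choice of DOMPurify as the sanitizer are assumptions for illustration, not requirements.

    // Illustrative sketch of a Trusted Types policy; names are example choices.
    declare const DOMPurify: { sanitize(html: string): string };

    const tt = (window as any).trustedTypes;
    if (tt && tt.createPolicy) {
      const policy = tt.createPolicy('extension-policy', {
        // Sanitize untrusted markup before it is wrapped in a TrustedHTML object.
        createHTML: (input: string) => DOMPurify.sanitize(input),
      });
      // The browser accepts this assignment because the value is TrustedHTML,
      // not a raw string, satisfying require-trusted-types-for 'script'.
      document.getElementById('preview')!.innerHTML = policy.createHTML('<b>example</b>');
    }

Whether via a policy or a vetted library, the end state is the same: DOM sinks receive typed objects rather than raw strings, which is what the CSP directive above enforces.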

Rollout pace 


Availability 

  • Available to all Google Workspace customers and users with personal Google Accounts 

Resources 

MLK Day and the pursuit of equitable internet access

As Martin Luther King Jr. Day approaches, we are reminded of the ongoing pursuit for equity, justice, and the civil rights leader’s legacy and vision of a world with equal access to opportunities. Google Fiber aspires to be a part of this dream, by helping to bridge the digital divide and foster inclusivity in our increasingly connected world. 


Everyone deserves fast, reliable internet at an accessible price, and the ability to put that internet connection to use: to connect to opportunity. We’re grateful to work with many organizations across the country that put that principle to work every day, helping their clients and constituents get more out of their lives, both online and beyond.




Here are a few ways our incredible community partners are marking this important day across the country:





GFiber is proud to be a small part of these efforts, and others, working towards a more equitable and just world.


Of course, there is still more work to be done. We will continue to push forward to make the internet more accessible and to help others harness its power and opportunity. You can help too! One small way to act right now: the Affordable Connectivity Program helps make high-speed internet available to millions of US households through a $30-a-month subsidy, but the program will end in April 2024 unless Congress acts to allocate additional funding. A bill before Congress would extend funding for this critical program. Take a moment to ask your Congresspeople to support the Affordable Connectivity Program Extension Act of 2024 and keep that connection strong for everyone.


Subscribe to get the GFiber Blog in your email.


Posted by Jess George, Head of Digital Equity & Community Impact 



AMIE: A research AI system for diagnostic medical reasoning and conversations

The physician-patient conversation is a cornerstone of medicine, in which skilled and intentional communication drives diagnosis, management, empathy and trust. AI systems capable of such diagnostic dialogues could increase availability, accessibility, quality and consistency of care by being useful conversational partners to clinicians and patients alike. But approximating clinicians’ considerable expertise is a significant challenge.

Recent progress in large language models (LLMs) outside the medical domain has shown that they can plan, reason, and use relevant context to hold rich conversations. However, there are many aspects of good diagnostic dialogue that are unique to the medical domain. An effective clinician takes a complete “clinical history” and asks intelligent questions that help to derive a differential diagnosis. They wield considerable skill to foster an effective relationship, provide information clearly, make joint and informed decisions with the patient, respond empathically to their emotions, and support them in the next steps of care. While LLMs can accurately perform tasks such as medical summarization or answering medical questions, there has been little work specifically aimed towards developing these kinds of conversational diagnostic capabilities.

Inspired by this challenge, we developed Articulate Medical Intelligence Explorer (AMIE), a research AI system based on an LLM and optimized for diagnostic reasoning and conversations. We trained and evaluated AMIE along many dimensions that reflect quality in real-world clinical consultations from the perspective of both clinicians and patients. To scale AMIE across a multitude of disease conditions, specialties and scenarios, we developed a novel self-play based simulated diagnostic dialogue environment with automated feedback mechanisms to enrich and accelerate its learning process. We also introduced an inference-time chain-of-reasoning strategy to improve AMIE’s diagnostic accuracy and conversation quality. Finally, we tested AMIE prospectively in real examples of multi-turn dialogue by simulating consultations with trained actors.

AMIE was optimized for diagnostic conversations, asking questions that help to reduce its uncertainty and improve diagnostic accuracy, while also balancing this with other requirements of effective clinical communication, such as empathy, fostering a relationship, and providing information clearly.

Evaluation of conversational diagnostic AI

Besides developing and optimizing AI systems themselves for diagnostic conversations, how to assess such systems is also an open question. Inspired by accepted tools used to measure consultation quality and clinical communication skills in real-world settings, we constructed a pilot evaluation rubric to assess diagnostic conversations along axes pertaining to history-taking, diagnostic accuracy, clinical management, clinical communication skills, relationship fostering and empathy.

We then designed a randomized, double-blind crossover study of text-based consultations with validated patient actors interacting either with board-certified primary care physicians (PCPs) or the AI system optimized for diagnostic dialogue. We set up our consultations in the style of an objective structured clinical examination (OSCE), a practical assessment commonly used in the real world to examine clinicians’ skills and competencies in a standardized and objective way. In a typical OSCE, clinicians might rotate through multiple stations, each simulating a real-life clinical scenario where they perform tasks such as conducting a consultation with a standardized patient actor (trained carefully to emulate a patient with a particular condition). Consultations were performed using a synchronous text-chat tool, mimicking the interface familiar to most consumers using LLMs today.

AMIE is a research AI system based on LLMs for diagnostic reasoning and dialogue.

AMIE: an LLM-based conversational diagnostic research AI system

We trained AMIE on real-world datasets comprising medical reasoning, medical summarization and real-world clinical conversations.

It is feasible to train LLMs on real-world dialogues obtained by passively collecting and transcribing in-person clinical visits; however, two substantial challenges limit their effectiveness in training LLMs for medical conversations. First, existing real-world data often fails to capture the vast range of medical conditions and scenarios, hindering scalability and comprehensiveness. Second, data derived from real-world dialogue transcripts tends to be noisy, containing ambiguous language (including slang, jargon, humor and sarcasm), interruptions, ungrammatical utterances, and implicit references.

To address these limitations, we designed a self-play based simulated learning environment with automated feedback mechanisms for diagnostic medical dialogue in a virtual care setting, enabling us to scale AMIE’s knowledge and capabilities across many medical conditions and contexts. We used this environment to iteratively fine-tune AMIE with an evolving set of simulated dialogues in addition to the static corpus of real-world data described.

This process consisted of two self-play loops: (1) an “inner” self-play loop, where AMIE leveraged in-context critic feedback to refine its behavior on simulated conversations with an AI patient simulator; and (2) an “outer” self-play loop where the set of refined simulated dialogues were incorporated into subsequent fine-tuning iterations. The resulting new version of AMIE could then participate in the inner loop again, creating a virtuous continuous learning cycle.
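
As a schematic illustration of this two-loop structure, consider the following TypeScript sketch. Every name in it (Model, PatientSimulator, Critic, fineTune) is an illustrative stand-in; the post does not specify AMIE’s actual interfaces.

    // Schematic only: the types and functions below are stand-ins, not AMIE's real code.
    interface Dialogue { turns: string[]; }
    interface PatientSimulator { scenario: string; }
    interface Critic { review(d: Dialogue): string; }
    interface Model {
      converse(patient: PatientSimulator): Dialogue;
      revise(d: Dialogue, feedback: string): Dialogue;
    }

    // Inner loop: refine one simulated conversation using in-context critic feedback.
    function innerLoop(model: Model, patient: PatientSimulator, critic: Critic, rounds: number): Dialogue {
      let dialogue = model.converse(patient);
      for (let i = 0; i < rounds; i++) {
        dialogue = model.revise(dialogue, critic.review(dialogue));
      }
      return dialogue;
    }

    // Outer loop: fold refined simulated dialogues into the next fine-tuning round,
    // alongside the static real-world corpus, yielding a new model for the inner loop.
    function outerLoop(model: Model, realData: Dialogue[], scenarios: PatientSimulator[],
                       critic: Critic, iterations: number,
                       fineTune: (m: Model, data: Dialogue[]) => Model): Model {
      for (let iter = 0; iter < iterations; iter++) {
        const simulated = scenarios.map(s => innerLoop(model, s, critic, 3));
        model = fineTune(model, realData.concat(simulated));
      }
      return model;
    }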

Further, we also employed an inference-time chain-of-reasoning strategy which enabled AMIE to progressively refine its response conditioned on the current conversation to arrive at an informed and grounded reply.

AMIE uses a novel self-play based simulated dialogue learning environment to improve the quality of diagnostic dialogue across a multitude of disease conditions, specialties and patient contexts.

We tested performance in consultations with simulated patients (played by trained actors), compared to those performed by 20 real PCPs using the randomized approach described above. AMIE and PCPs were assessed from the perspectives of both specialist attending physicians and our simulated patients in a randomized, blinded crossover study that included 149 case scenarios from OSCE providers in Canada, the UK and India in a diverse range of specialties and diseases.

Notably, our study was not designed to emulate either traditional in-person OSCE evaluations or the ways clinicians usually use text, email, chat or telemedicine. Instead, our experiment mirrored the most common way consumers interact with LLMs today, a potentially scalable and familiar mechanism for AI systems to engage in remote diagnostic dialogue.

Overview of the randomized study design to perform a virtual remote OSCE with simulated patients via online multi-turn synchronous text chat.

Performance of AMIE

In this setting, we observed that AMIE performed simulated diagnostic conversations at least as well as PCPs when both were evaluated along multiple clinically-meaningful axes of consultation quality. AMIE had greater diagnostic accuracy and superior performance for 28 of 32 axes from the perspective of specialist physicians, and 24 of 26 axes from the perspective of patient actors.

AMIE outperformed PCPs on multiple evaluation axes for diagnostic dialogue in our evaluations.
Specialist-rated top-k diagnostic accuracy. AMIE’s and PCPs’ top-k differential diagnosis (DDx) accuracy are compared across 149 scenarios with respect to the ground truth diagnosis (a) and all diagnoses listed within the accepted differential diagnoses (b). Bootstrapping (n = 10,000) confirms all top-k differences between AMIE and PCP DDx accuracy are significant with p < 0.05 after false discovery rate (FDR) correction.
Diagnostic conversation and reasoning qualities as assessed by specialist physicians. On 28 out of 32 axes, AMIE outperformed PCPs while being comparable on the rest.
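
For readers unfamiliar with the metric, top-k DDx accuracy asks whether the ground-truth diagnosis appears among the first k entries of the ranked differential. A minimal sketch, assuming a simple data shape for the evaluated cases:

    // Top-k DDx accuracy: fraction of cases whose ground-truth diagnosis appears
    // in the first k entries of the ranked differential. The Case shape is an
    // assumption for illustration, not the study's actual data format.
    interface Case { ddx: string[]; groundTruth: string; }

    function topKAccuracy(cases: Case[], k: number): number {
      const hits = cases.filter(c => c.ddx.slice(0, k).includes(c.groundTruth)).length;
      return hits / cases.length;
    }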

Limitations

Our research has several limitations and should be interpreted with appropriate caution. Firstly, our evaluation technique likely underestimates the real-world value of human conversations, as the clinicians in our study were limited to an unfamiliar text-chat interface, which permits large-scale LLM–patient interactions but is not representative of usual clinical practice. Secondly, any research of this type must be seen as only a first exploratory step on a long journey. Transitioning from an LLM research prototype that we evaluated in this study to a safe and robust tool that could be used by people and those who provide care for them will require significant additional research. There are many important limitations to be addressed, including experimental performance under real-world constraints and dedicated exploration of such important topics as health equity and fairness, privacy, robustness, and many more, to ensure the safety and reliability of the technology.


AMIE as an aid to clinicians

In a recently released preprint, we evaluated the ability of an earlier iteration of the AMIE system to generate a DDx alone or as an aid to clinicians. Twenty (20) generalist clinicians evaluated 303 challenging, real-world medical cases sourced from the New England Journal of Medicine (NEJM) ClinicoPathologic Conferences (CPCs). Each case report was read by two clinicians randomized to one of two assistive conditions: either assistance from search engines and standard medical resources, or AMIE assistance in addition to these tools. All clinicians provided a baseline, unassisted DDx prior to using the respective assistive tools.

Assisted randomized reader study setup to investigate the assistive effect of AMIE to clinicians in solving complex diagnostic case challenges from the New England Journal of Medicine.

AMIE exhibited standalone performance that exceeded that of unassisted clinicians (top-10 accuracy 59.1% vs. 33.6%, p = 0.04). Comparing the two assisted study arms, the top-10 accuracy was higher for clinicians assisted by AMIE, compared to clinicians without AMIE assistance (24.6%, p < 0.01) and clinicians with search (5.45%, p = 0.02). Further, clinicians assisted by AMIE arrived at more comprehensive differential lists than those without AMIE assistance.

In addition to strong standalone performance, using the AMIE system led to significant assistive effect and improvements in diagnostic accuracy of the clinicians in solving these complex case challenges.

It's worth noting that NEJM CPCs are not representative of everyday clinical practice. They are unusual case reports covering only a few hundred individuals, so they offer limited scope for probing important issues like equity or fairness.


Bold and responsible research in healthcare — the art of the possible

Access to clinical expertise remains scarce around the world. While AI has shown great promise in specific clinical applications, engagement in the dynamic, conversational diagnostic journeys of clinical practice requires many capabilities not yet demonstrated by AI systems. Doctors wield not only knowledge and skill but a dedication to myriad principles, including safety and quality, communication, partnership and teamwork, trust, and professionalism. Realizing these attributes in AI systems is an inspiring challenge that should be approached responsibly and with care. AMIE is our exploration of the “art of the possible”, a research-only system for safely exploring a vision of the future where AI systems might be better aligned with attributes of the skilled clinicians entrusted with our care. It is early experimental-only work, not a product, and has several limitations that we believe merit rigorous and extensive further scientific studies in order to envision a future in which conversational, empathic and diagnostic AI systems might become safe, helpful and accessible.


Acknowledgements

The research described here is joint work across many teams at Google Research and Google DeepMind. We are grateful to all our co-authors: Tao Tu, Mike Schaekermann, Anil Palepu, Daniel McDuff, Jake Sunshine, Khaled Saab, Jan Freyberg, Ryutaro Tanno, Amy Wang, Brenna Li, Mohamed Amin, Sara Mahdavi, Karan Singhal, Shekoofeh Azizi, Nenad Tomasev, Yun Liu, Yong Cheng, Le Hou, Albert Webson, Jake Garrison, Yash Sharma, Anupam Pathak, Sushant Prakash, Philip Mansfield, Shwetak Patel, Bradley Green, Ewa Dominowska, Renee Wong, Juraj Gottweis, Dale Webster, Katherine Chou, Christopher Semturs, Joelle Barral, Greg Corrado and Yossi Matias. We also thank Sami Lachgar, Lauren Winer and John Guilyard for their support with narratives and the visuals. Finally, we are grateful to Michael Howell, James Manyika, Jeff Dean, Karen DeSalvo, Zoubin Ghahramani and Demis Hassabis for their support during the course of this project.


Source: Google AI Blog



Google Meet hardware devices from Poly now support interoperability with Cisco Webex and Zoom

What’s changing 

We’re expanding the existing interoperability between Google Meet, Cisco Webex, and Zoom to include Android-based Meet hardware devices from Poly. Specifically, these devices include the Poly Studio X30, X50, X52, and X70.


Note that Webex and Zoom interoperability supports core video conferencing features. Some advanced features, such as polls, wired present, and dual-screen support, may not be available when using Poly Meet hardware to join Webex or Zoom meetings.

Getting started


Admins: 
End users: 
  • When enabled by your admin, you can join a Webex or Zoom meeting from a Poly Android-based Google Meet hardware device by: 
    • Joining an ad-hoc call by tapping "Join or start a meeting" on your touch controller and selecting Webex or Zoom from the dropdown options. 
    • Joining a scheduled call by adding a room to an event with Webex or Zoom meeting details.  
      • Note: Calendar events that originate outside of Google Calendar must be duplicated and populated with room details manually.
  • Visit the Help Center to learn more about Google Meet interoperability.

Rollout pace


Availability

  • Available to all Google Workspace customers with Google Meet hardware subscriptions

Resources


Long Term Support Channel Update for ChromeOS

The new LTS Candidate, LTC-120 version 120.0.6099.203 (Platform Version: 15662.64.3), is being rolled out for most ChromeOS devices. 


If you have devices in the LTC channel, they will be updated to this version. The LTS channel remains on LTS-114 until April 2nd, 2024. 

Release notes for LTC-120 can be found here.
Want to know more about Long-term Support? Click here.


Giuliana Pritchard 
Google Chrome OS

A New Approach to Real-Money Games on Google Play

Posted by Karan Gambhir – Director, Global Trust and Safety Partnerships

As a platform, we strive to help developers responsibly build new businesses and reach wider audiences across a variety of content types and genres. In response to strong demand, in 2021 we began onboarding a wider range of real-money gaming (RMG) apps in markets with pre-existing licensing frameworks. Since then, this app category has continued to flourish with developers creating new RMG experiences for mobile.

To ensure Google Play keeps up with the pace of developer innovation, while promoting user safety, we’ve since conducted several pilot programs to determine how to support more RMG operators and game types. For example, many developers in India were eager to bring RMG apps to more Android users, so we launched a pilot program, starting with Rummy and Daily Fantasy Sports (DFS), to understand the best way to support their businesses.

Based on the learnings from the pilots and positive feedback from users and developers, Google Play will begin supporting more RMG apps this year, including game types and operators not covered by an existing licensing framework. We’ll launch this expanded RMG support in June to developers for their users in India, Mexico, and Brazil, and plan to expand to users in more countries in the future.

We’re pleased that this new approach will provide new business opportunities to developers globally while continuing to prioritize user safety. It also enables developers currently participating in RMG pilots in India and Mexico to continue offering their apps on Play.

    • India pilot: For developers in the Google Play Pilot Program for distributing DFS and Rummy apps to users in India, we are extending the grace period for pilot apps to remain on Google Play until June 30, 2024 when the new policy will take effect. After that time, developers can distribute RMG apps on Google Play to users in India, beyond DFS and Rummy, in compliance with local laws and our updated policy.
    • Mexico pilot: For developers in the Google Play Pilot Program for DFS in Mexico, the pilot will end as scheduled on June 30, 2024, at which point developers can distribute RMG apps on Google Play to users in Mexico, beyond DFS, in compliance with local laws and our updated policy.

Google Play’s existing developer policies supporting user safety, such as requiring age-gating to limit RMG experiences to adults and requiring developers to use geo-gating to offer RMG apps only where legal, remain unchanged, and we’ll continue to strengthen them. In addition, Google Play will continue other key user safety and transparency efforts such as our expanded developer verification mechanisms.

With this policy update, we will also be evolving our service fee model for RMG to reflect the value Google Play provides and to help sustain the Android and Play ecosystems. We are working closely with developers to ensure our new approach reflects the unique economics and various developer earning models of this industry. We will have more to share in the coming months on our new policy and future expansion plans.

For developers already involved in the real-money gaming space, or those looking to expand their involvement, we hope this helps you prepare for the upcoming policy change. As Google Play evolves our support of RMG around the world, we look forward to helping you continue to delight users, grow your businesses, and launch new game types in a safe way.