Chrome Stable for iOS Update

Hi everyone! We've just released Chrome Stable 107 (107.0.5304.66) for iOS; it'll become available on the App Store in the next few hours.

This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.

Erhu Eakpobaro
Google Chrome


Preserving one of Nigeria’s last sacred groves

Editor's note:

The Honourable Minister of Information and Culture for the Federal Republic of Nigeria, Alhaji Lai Mohammed, authors this piece about the new Osun Osogbo project by Google Arts & Culture, in collaboration with CyArk and the Adunni Olorisha Trust / Osun Foundation, which exhibits this sacred UNESCO World Heritage Site and makes it accessible to everyone online.



-----




On the forested banks of the Osun river in Osogbo, Nigeria lies one of the last cultural sites of its kind. In this sacred grove, Yoruba deities are embodied in shapely, sculpted shrines and creativity and spirituality come to life. The Osun Osogbo Sacred Grove is a truly unique and special place.


I’m truly delighted that, for the first time ever, the shrine and its surroundings have been digitized thanks to a collaboration between CyArk, the Adunni Olorisha Trust / Osun Foundation and Google Arts & Culture. Now both are protected for posterity, so anybody from anywhere can explore them.





I said when I visited in 2019 that it was important to refocus national and global attention on this site, and I’m glad we achieved our purpose. For even though this place of active worship and art is a UNESCO World Heritage Site and a priceless cultural asset, it is in danger of destruction. Flooding and heavy rain due to climate change, along with a number of other conservation risks, threaten the grove’s survival.


This is why CyArk and the Adunni Olorisha Trust / Osun Foundation partnered with Google Arts & Culture to digitize the shrines and surroundings at Osun Osogbo Sacred Grove, and to tell the stories of its spiritual, artistic and cultural significance. In 2019, the grove’s Busanyin Shrine was wrecked in a flood; the 3D imagery captured in the early phases of the project was among the last to be taken of the site before it was destroyed. So while this project may not stop the impact of flooding or the activities of land grabbers, it will ensure that future generations can see the grove as it is today.

“CyArk's work in Osogbo has been a true collaboration between Nigerian government officials, local NGOs, the community of Osogbo and His royal highness Jimoh Oyetunji Olanipekun Larooye II, who have partnered with CyArk and are working together to share the stories of Osogbo with a wider audience.” - Kacey Hadick, Director of Programs and Development, CyArk.


Although this flood was a devastating loss, it reinforces the importance of using a variety of tools to preserve the world’s cultural and spiritual places, from digital documentation to on-site restoration work. And this project highlights the broad spectrum of preservation that, in this case, can help protect a rich Yoruba cultural heritage – through 3D models, Street View, archival and contemporary photographs, video and audio interviews and written stories.


Olufemi A. Akinsanya is Chair of the Save Our Art! Save Our Heritage! Campaign. He says, “We want to expose the world to this incredible Yoruba heritage and art treasure, introduce the remarkable artists of the New Sacred Art Movement who saved it from destruction in the 1960s and champion the next generation who are preserving it now.”


While a virtual experience of the site can never replace the real thing, we invite you to get lost in the Sacred Grove of Osun Osogbo and experience its art, culture, and preservation like never before on Google Arts & Culture.


This work forms part of the Google Arts & Culture Heritage on the Edge project, which tells how people around the world are using technology to help protect cultural sites against the effects of climate change.


Google Arts & Culture and CyArk have collaborated with cultural heritage site managers to carry out similar digitization training sessions. Learn more about the stories of five other cultural sites impacted by climate change in Scotland, Bangladesh, Tanzania, Peru and Rapa Nui.





Posted by Alhaji Lai Mohammed, Honourable Minister of Information and Culture, Federal Republic of Nigeria

 ==== 

Squiz Kids partners with Google to help students build media literacy skills

Image: Newshounds by Squiz Kids in the classroom


This week, UNESCO’s Media Literacy Week is focused on nurturing trust in media and information. There’s no better time to educate and empower people to be confident consumers of media than at school, which is why Squiz Kids has partnered with the Google News Initiative to roll out its media literacy program ‘Newshounds’ to primary schools across New Zealand. 



Squiz Kids, a daily news podcast for 8- to 12-year-olds, has developed Newshounds by Squiz Kids as a plug-and-play media literacy teaching resource comprising eight 10-minute podcasts and accompanying in-classroom activities, packaged in an engaging board-game-style format.



Squiz-E the Newshound takes primary-aged kids on a media literacy journey, teaching them to understand the myriad forms of media to which they’re exposed every day and recognise the multiple agendas that drive them. Underpinning it all are exercises that give kids the skills to identify misinformation and disinformation. 



“Kids today have more information coming at them on a daily basis than at any other time in history,”  said Squiz Kids Director Bryce Corbett. “We created Newshounds to make kids critical consumers of media - to teach them to stop, think and check before believing everything they come across on the internet. Teachers and parents alike know it’s important to teach their children media literacy, but few know where to start. By partnering with Google, it’s hoped that Newshounds starts conversations with adults that help kids recognise online fact from fiction.”



The partnership with Google will allow classrooms across New Zealand to access the Newshounds media literacy program for free from this week.


The Manaiakalani Education schools in Tāmaki Makaurau have been running a pilot of the programme in their classrooms over the past few months. They found that students were engaged by the content and, most importantly, were transferring these concepts to other areas of learning when they were online.



Listeners, readers and viewers are incredibly powerful in the fight against misinformation: the more they demand quality information, the better the chance that facts win the battle. But those audiences need support.


Understanding the many complex elements that go into deciding what is fact and what is falsehood starts at an early age, which is why we’re so proud to work with Squiz Kids to launch Newshounds in New Zealand schools. This partnership builds on our efforts to support a sustainable, diverse and innovative news ecosystem.



Teachers are invited to create a free account at newshounds.squizkids.com.au and start their class on the path to media literacy.


Android Dev Summit ‘22: Here’s how to tune in!

Posted by Yasmine Evjen, Community Lead, Android Developer Relations

Android Dev Summit is about to kick off at 9AM PT on Monday October 24, so it’s time to tune in! You can watch the livestream on developers.android.com, on YouTube, or right below:

Whether you’re tuning in online or, for the first time since 2019, joining in person at locations around the world, it’s your opportunity to learn from the source about building excellent apps across devices. We just dropped information on the livestream agenda, technical talks, and speakers — so start planning your schedule!
 
Here’s what you can expect: we’re kicking things off at 9am PT with the Android Dev Summit keynote, where you’ll hear about the latest in Modern Android Development, innovations in our core platform, and how to take advantage of Android’s momentum across devices, including wearables and large screens. And right after the keynote, at 9:50 AM PT, we’ll be broadcasting live the first of three tracks: Modern Android Development (MAD)!
Modern Android Development Track @ Android Dev Summit October 24, 2022 at 9:00 AM PT 
Agenda: 9:00 AM Keynote, 9:50 AM Custom Layouts and Graphics in Compose, 10:10 AM Making Apps Blazing Fast with Baseline Profiles, 10:30 AM State of the Art of Compose Tooling, 10:50 AM State Holders and State Production in the UI Layer, 11:10 AM 5 Ways Compose Improves UI Testing, 11:15 AM 5 Android Studio Features You Don't Want to Miss, 11:30 AM Pre-recorded MAD Technical Talks, 12:20 PM Where to Hoist that State in Compose, 12:25 PM Material You in Compose Apps, 12:30 PM Compose Modifiers Deep Dive, 12:50 PM Practical Room Migrations, 12:55 PM Type Safe, Multi-Module Best Practices with Navigation, 1:00 PM What's New in Android Build, 1:20 PM From Views to Compose: Where Can I Start?, 1:25 PM Test at Scale with Gradle Managed Devices, 1:35 PM MAD #AskAndroid. Broadcast live on d.android.com/dev-summit & YouTube.
Then, ADS continues into November with two more tracks. First, on November 9, ADS travels to London, where we’ll broadcast the full Form Factors track; read the full list of talks here.
Form Factors Track @ Android Dev Summit November 9, 2022 
Sessions: Deep Dive into Wear OS App Architecture, Build Better UIs Across Form Factors with Android Studio, Designing for Large Screens: Canonical Layouts and Visual Hierarchy, Compose: Implementing Responsive UI for Large Screens, Creating Helpful Fitness Experiences with Health Services and Health Connect, The Key to Keyboard and Mouse Support across Tablets and ChromeOS, Your Camera App on Different Form Factors, Building Media Apps on Wear OS, Why and How to Optimize Your App for ChromeOS.
Broadcast live on d.android.com/dev-summit & YouTube.


Then, on November 14, we’ll broadcast our Platform track; you can check out the talks here.
Platform Track @ Android Dev Summit November 14, 2022 
Sessions: Migrate Your Apps to Android 13, Presenting a High-quality Media Experience for all Users, Improving Your Social Experience Quality with Android Camera, Building for a Multilingual World, Everything About Storage on Android, Migrate to Play Billing Library 5: More flexible subscriptions on Google Play, Designing a High Quality App with the Latest Android Features, Hardware Acceleration for ML on-device, Demystifying Attestation, Building Accessibility Support for Compose.
Broadcast live on d.android.com/dev-summit & YouTube.

Burning question? #AskAndroid to the rescue!

To cap off each of our live streamed tracks, we’ll be hosting a live Q&A (#AskAndroid) for each track topic, so you can get your burning questions answered live by the team that built Android. Post your questions to Twitter or comment in the YouTube livestream using #AskAndroid for a chance to have your questions answered on the livestream.

We’re so excited for this year’s Android Dev Summit, and we’re looking forward to connecting with you!

Google at ECCV 2022

Google is proud to be a Platinum Sponsor of the European Conference on Computer Vision (ECCV 2022), a premier forum for the dissemination of research in computer vision and machine learning (ML). This year, ECCV 2022 will be held as a hybrid event, in person in Tel Aviv, Israel with virtual attendance as an option. Google has a strong presence at this year’s conference with over 60 accepted publications and active involvement in a number of workshops and tutorials. We look forward to sharing some of our extensive research and expanding our partnership with the broader ML research community.

Registered for ECCV 2022? We hope you’ll visit our on-site or virtual booths to learn more about the research we’re presenting at ECCV 2022, including several demos and opportunities to connect with our researchers. Learn more about Google's research being presented at ECCV 2022 below (Google affiliations in bold).


Organizing Committee

Program Chairs include: Moustapha Cissé

Awards Paper Committee: Todd Zickler

Area Chairs include: Ayan Chakrabarti, Tali Dekel, Alireza Fathi, Vittorio Ferrari, David Fleet, Dilip Krishnan, Michael Rubinstein, Cordelia Schmid, Deqing Sun, Federico Tombari, Jasper Uijlings, Ming-Hsuan Yang, Todd Zickler


Accepted Publications

NeuMesh: Learning Disentangled Neural Mesh-Based Implicit Field for Geometry and Texture Editing
Bangbang Yang, Chong Bao, Junyi Zeng, Hujun Bao, Yinda Zhang, Zhaopeng Cui, Guofeng Zhang

Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Networks
Zihang Zou, Boqing Gong, Liqiang Wang

Exploiting Unlabeled Data with Vision and Language Models for Object Detection
Shiyu Zhao, Zhixing Zhang, Samuel Schulter, Long Zhao, Vijay Kumar B G, Anastasis Stathopoulos, Manmohan Chandraker, Dimitris N. Metaxas

Waymo Open Dataset: Panoramic Video Panoptic Segmentation
Jieru Mei, Alex Zhu, Xinchen Yan, Hang Yan, Siyuan Qiao, Yukun Zhu, Liang-Chieh Chen, Henrik Kretzschmar

PRIF: Primary Ray-Based Implicit Function
Brandon Yushan Feng, Yinda Zhang, Danhang Tang, Ruofei Du, Amitabh Varshney

LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling
Boyan Jiang, Xinlin Ren, Mingsong Dou, Xiangyang Xue, Yanwei Fu, Yinda Zhang

k-Means Mask Transformer (see blog post)
Qihang Yu*, Siyuan Qiao, Maxwell D Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen

MaxViT: Multi-Axis Vision Transformer (see blog post)
Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li

E-Graph: Minimal Solution for Rigid Rotation with Extensibility Graphs
Yanyan Li, Federico Tombari

RBP-Pose: Residual Bounding Box Projection for Category-Level Pose Estimation
Ruida Zhang, Yan Di, Zhiqiang Lou, Fabian Manhardt, Federico Tombari, Xiangyang Ji

GOCA: Guided Online Cluster Assignment for Self-Supervised Video Representation Learning
Huseyin Coskun, Alireza Zareian, Joshua L Moore, Federico Tombari, Chen Wang

Scaling Open-Vocabulary Image Segmentation with Image-Level Labels
Golnaz Ghiasi, Xiuye Gu, Yin Cui, Tsung-Yi Lin*

Adaptive Transformers for Robust Few-Shot Cross-Domain Face Anti-spoofing
Hsin-Ping Huang, Deqing Sun, Yaojie Liu, Wen-Sheng Chu, Taihong Xiao, Jinwei Yuan, Hartwig Adam, Ming-Hsuan Yang

DualPrompt: Complementary Prompting for Rehearsal-Free Continual Learning
Zifeng Wang*, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister

BLT: Bidirectional Layout Transformer for Controllable Layout Generation
Xiang Kong, Lu Jiang, Huiwen Chang, Han Zhang, Yuan Hao, Haifeng Gong, Irfan Essa

V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer
Runsheng Xu, Hao Xiang, Zhengzhong Tu, Xin Xia, Ming-Hsuan Yang, Jiaqi Ma

Learning Visibility for Robust Dense Human Body Estimation
Chun-Han Yao, Jimei Yang, Duygu Ceylan, Yi Zhou, Yang Zhou, Ming-Hsuan Yang

Are Vision Transformers Robust to Patch Perturbations?
Jindong Gu, Volker Tresp, Yao Qin

PseudoAugment: Learning to Use Unlabeled Data for Data Augmentation in Point Clouds
Zhaoqi Leng, Shuyang Cheng, Ben Caine, Weiyue Wang, Xiao Zhang, Jonathon Shlens, Mingxing Tan, Dragomir Anguelov

Structure and Motion from Casual Videos
Zhoutong Zhang, Forrester Cole, Zhengqi Li, Noah Snavely, Michael Rubinstein, William T. Freeman

PreTraM: Self-Supervised Pre-training via Connecting Trajectory and Map
Chenfeng Xu, Tian Li, Chen Tang, Lingfeng Sun, Kurt Keutzer, Masayoshi Tomizuka, Alireza Fathi, Wei Zhan

Novel Class Discovery Without Forgetting
Joseph K J, Sujoy Paul, Gaurav Aggarwal, Soma Biswas, Piyush Rai, Kai Han, Vineeth N Balasubramanian

Hierarchically Self-Supervised Transformer for Human Skeleton Representation Learning
Yuxiao Chen, Long Zhao, Jianbo Yuan, Yu Tian, Zhaoyang Xia, Shijie Geng, Ligong Han, Dimitris N. Metaxas

PACTran: PAC-Bayesian Metrics for Estimating the Transferability of Pretrained Models to Classification Tasks
Nan Ding, Xi Chen, Tomer Levinboim, Soravit Changpinyo, Radu Soricut

InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images
Zhengqi Li, Qianqian Wang*, Noah Snavely, Angjoo Kanazawa*

Generalizable Patch-Based Neural Rendering (see blog post)
Mohammed Suhail*, Carlos Esteves, Leonid Sigal, Ameesh Makadia

LESS: Label-Efficient Semantic Segmentation for LiDAR Point Clouds
Minghua Liu, Yin Zhou, Charles R. Qi, Boqing Gong, Hao Su, Dragomir Anguelov

The Missing Link: Finding Label Relations Across Datasets
Jasper Uijlings, Thomas Mensink, Vittorio Ferrari

Learning Instance-Specific Adaptation for Cross-Domain Segmentation
Yuliang Zou, Zizhao Zhang, Chun-Liang Li, Han Zhang, Tomas Pfister, Jia-Bin Huang

Learning Audio-Video Modalities from Image Captions
Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago Manen, Chen Sun, Cordelia Schmid

TL;DW? Summarizing Instructional Videos with Task Relevance & Cross-Modal Saliency
Medhini Narasimhan*, Arsha Nagrani, Chen Sun, Michael Rubinstein, Trevor Darrell, Anna Rohrbach, Cordelia Schmid

On Label Granularity and Object Localization
Elijah Cole, Kimberly Wilber, Grant Van Horn, Xuan Yang, Marco Fornoni, Pietro Perona, Serge Belongie, Andrew Howard, Oisin Mac Aodha

Disentangling Architecture and Training for Optical Flow
Deqing Sun, Charles Herrmann, Fitsum Reda, Michael Rubinstein, David J. Fleet, William T. Freeman

NewsStories: Illustrating Articles with Visual Summaries
Reuben Tan, Bryan Plummer, Kate Saenko, J.P. Lewis, Avneesh Sud, Thomas Leung

Improving GANs for Long-Tailed Data Through Group Spectral Regularization
Harsh Rangwani, Naman Jaswani, Tejan Karmali, Varun Jampani, Venkatesh Babu Radhakrishnan

Planes vs. Chairs: Category-Guided 3D Shape Learning Without Any 3D Cues
Zixuan Huang, Stefan Stojanov, Anh Thai, Varun Jampani, James Rehg

A Sketch Is Worth a Thousand Words: Image Retrieval with Text and Sketch
Patsorn Sangkloy, Wittawat Jitkrittum, Diyi Yang, James Hays

Learned Monocular Depth Priors in Visual-Inertial Initialization
Yunwen Zhou, Abhishek Kar, Eric L. Turner, Adarsh Kowdle, Chao Guo, Ryan DuToit, Konstantine Tsotsos

How Stable are Transferability Metrics Evaluations?
Andrea Agostinelli, Michal Pandy, Jasper Uijlings, Thomas Mensink, Vittorio Ferrari

Data-Free Neural Architecture Search via Recursive Label Calibration
Zechun Liu*, Zhiqiang Shen, Yun Long, Eric Xing, Kwang-Ting Cheng, Chas H. Leichner

Fast and High Quality Image Denoising via Malleable Convolution
Yifan Jiang*, Bartlomiej Wronski, Ben Mildenhall, Jonathan T. Barron, Zhangyang Wang, Tianfan Xue

Concurrent Subsidiary Supervision for Unsupervised Source-Free Domain Adaptation
Jogendra Nath Kundu, Suvaansh Bhambri, Akshay R Kulkarni, Hiran Sarkar, Varun Jampani, Venkatesh Babu Radhakrishnan

Learning Online Multi-Sensor Depth Fusion
Erik Sandström, Martin R. Oswald, Suryansh Kumar, Silvan Weder, Fisher Yu, Cristian Sminchisescu, Luc Van Gool

Hierarchical Semantic Regularization of Latent Spaces in StyleGANs
Tejan Karmali, Rishubh Parihar, Susmit Agrawal, Harsh Rangwani, Varun Jampani, Maneesh K Singh, Venkatesh Babu Radhakrishnan

RayTran: 3D Pose Estimation and Shape Reconstruction of Multiple Objects from Videos with Ray-Traced Transformers
Michał J Tyszkiewicz, Kevis-Kokitsi Maninis, Stefan Popov, Vittorio Ferrari

Neural Video Compression Using GANs for Detail Synthesis and Propagation
Fabian Mentzer, Eirikur Agustsson, Johannes Ballé, David Minnen, Nick Johnston, George Toderici

Exploring Fine-Grained Audiovisual Categorization with the SSW60 Dataset
Grant Van Horn, Rui Qian, Kimberly Wilber, Hartwig Adam, Oisin Mac Aodha, Serge Belongie

Implicit Neural Representations for Image Compression
Yannick Strümpler, Janis Postels, Ren Yang, Luc Van Gool, Federico Tombari

3D Compositional Zero-Shot Learning with DeCompositional Consensus
Muhammad Ferjad Naeem, Evin Pınar Örnek, Yongqin Xian, Luc Van Gool, Federico Tombari

FindIt: Generalized Localization with Natural Language Queries (see blog post)
Weicheng Kuo, Fred Bertsch, Wei Li, AJ Piergiovanni, Mohammad Saffar, Anelia Angelova

A Simple Single-Scale Vision Transformer for Object Detection and Instance Segmentation
Wuyang Chen*, Xianzhi Du, Fan Yang, Lucas Beyer, Xiaohua Zhai, Tsung-Yi Lin, Huizhong Chen, Jing Li, Xiaodan Song, Zhangyang Wang, Denny Zhou

Improved Masked Image Generation with Token-Critic
Jose Lezama, Huiwen Chang, Lu Jiang, Irfan Essa

Learning Discriminative Shrinkage Deep Networks for Image Deconvolution
Pin-Hung Kuo, Jinshan Pan, Shao-Yi Chien, Ming-Hsuan Yang

AudioScopeV2: Audio-Visual Attention Architectures for Calibrated Open-Domain On-Screen Sound Separation
Efthymios Tzinis*, Scott Wisdom, Tal Remez, John Hershey

Simple Open-Vocabulary Object Detection with Vision Transformers
Matthias Minderer, Alexey Gritsenko, Austin C Stone, Maxim Neumann, Dirk Weißenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, Neil Houlsby

COMPOSER: Compositional Reasoning of Group Activity in Videos with Keypoint-Only Modality
Honglu Zhou, Asim Kadav, Aviv Shamsian, Shijie Geng, Farley Lai, Long Zhao, Ting Liu, Mubbasir Kapadia, Hans Peter Graf

Video Question Answering with Iterative Video-Text Co-tokenization (see blog post)
AJ Piergiovanni, Kairo Morton*, Weicheng Kuo, Michael S. Ryoo, Anelia Angelova

Class-Agnostic Object Detection with Multi-modal Transformer
Muhammad Maaz, Hanoona Abdul Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer, Ming-Hsuan Yang

FILM: Frame Interpolation for Large Motion (see blog post)
Fitsum Reda, Janne Kontkanen, Eric Tabellion, Deqing Sun, Caroline Pantofaru, Brian Curless

Compositional Human-Scene Interaction Synthesis with Semantic Control
Kaifeng Zhao, Shaofei Wang, Yan Zhang, Thabo Beeler, Siyu Tang


Workshops

LatinX in AI
Mentors include: José Lezama
Keynote Speakers include: Andre Araujo

AI for Creative Video Editing and Understanding
Keynote Speakers include: Tali Dekel, Negar Rostamzadeh

Learning With Limited and Imperfect Data (L2ID)
Invited Speakers include: Xiuye Gu
Organizing Committee includes: Sadeep Jayasumana

International Challenge on Compositional and Multimodal Perception (CAMP)
Program Committee includes: Edward Vendrow

Self-Supervised Learning: What is Next?
Invited Speakers include: Mathilde Caron, Arsha Nagrani
Organizers include: Andrew Zisserman

3rd Workshop on Adversarial Robustness In the Real World
Invited Speakers include: Ekin Dogus Cubuk
Organizers include: Xinyun Chen, Alexander Robey, Nataniel Ruiz, Yutong Bai

AV4D: Visual Learning of Sounds in Spaces
Invited Speakers include: John Hershey

Challenge on Mobile Intelligent Photography and Imaging (MIPI)
Invited Speakers include: Peyman Milanfar

Robust Vision Challenge 2022
Organizing Committee includes: Alina Kuznetsova

Computer Vision in the Wild
Challenge Organizers include: Yi-Ting Chen, Ye Xia
Invited Speakers include: Yin Cui, Yongqin Xian, Neil Houlsby

Self-Supervised Learning for Next-Generation Industry-Level Autonomous Driving (SSLAD)
Organizers include: Fisher Yu

Responsible Computer Vision
Organizing Committee includes: Been Kim
Invited Speakers include: Emily Denton

Cross-Modal Human-Robot Interaction
Invited Speakers include: Peter Anderson

ISIC Skin Image Analysis
Organizing Committee includes: Yuan Liu
Steering Committee includes: Yuan Liu, Dale Webster
Invited Speakers include: Yuan Liu

Observing and Understanding Hands in Action
Sponsored by Google

Autonomous Vehicle Vision (AVVision)
Speakers include: Fisher Yu

Visual Perception for Navigation in Human Environments: The JackRabbot Human Body Pose Dataset and Benchmark
Organizers include: Edward Vendrow

Language for 3D Scenes
Invited Speakers include: Jason Baldridge
Organizers include: Leonidas Guibas

Designing and Evaluating Computer Perception Systems (CoPe)
Organizers include: Andrew Zisserman

Learning To Generate 3D Shapes and Scenes
Panelists include: Pete Florence

Advances in Image Manipulation
Program Committee includes: George Toderici, Ming-Hsuan Yang

TiE: Text in Everything
Challenge Organizers include: Shangbang Long, Siyang Qin
Invited Speakers include: Tali Dekel, Aishwarya Agrawal

Instance-Level Recognition
Organizing Committee: Andre Araujo, Bingyi Cao, Tobias Weyand
Invited Speakers include: Mathilde Caron

What Is Motion For?
Organizing Committee: Deqing Sun, Fitsum Reda, Charles Herrmann
Invited Speakers include: Tali Dekel

Neural Geometry and Rendering: Advances and the Common Objects in 3D Challenge
Invited Speakers include: Ben Mildenhall

Visual Object-Oriented Learning Meets Interaction: Discovery, Representations, and Applications
Invited Speakers include: Klaus Greff, Thomas Kipf
Organizing Committee includes: Leonidas Guibas

Vision with Biased or Scarce Data (VBSD)
Program Committee includes: Yizhou Wang

Multiple Object Tracking and Segmentation in Complex Environments
Invited Speakers include: Xingyi Zhou, Fisher Yu

3rd Visual Inductive Priors for Data-Efficient Deep Learning Workshop
Organizing Committee includes: Ekin Dogus Cubuk

DeeperAction: Detailed Video Action Understanding and Anomaly Recognition
Advisors include: Rahul Sukthankar

Sign Language Understanding Workshop and Sign Language Recognition, Translation & Production Challenge
Organizing Committee includes: Andrew Zisserman
Speakers include: Andrew Zisserman

Ego4D: First-Person Multi-Modal Video Understanding
Invited Speakers include: Michal Irani

AI-Enabled Medical Image Analysis: Digital Pathology & Radiology/COVID19
Program Chairs include: Po-Hsuan Cameron Chen
Workshop Partner: Google Health

Visual Object Tracking Challenge (VOT 2022)
Technical Committee includes: Christoph Mayer

Assistive Computer Vision and Robotics
Technical Committee includes: Maja Mataric

Human Body, Hands, and Activities from Egocentric and Multi-View Cameras
Organizers include: Francis Engelmann

Frontiers of Monocular 3D Perception: Implicit x Explicit
Panelists include: Pete Florence


Tutorials

Self-Supervised Representation Learning in Computer Vision
Invited Speakers include: Ting Chen

Neural Volumetric Rendering for Computer Vision
Organizers include: Ben Mildenhall, Pratul Srinivasan, Jon Barron
Presenters include: Ben Mildenhall, Pratul Srinivasan

New Frontiers in Efficient Neural Architecture Search!
Speakers include: Ruochen Wang



*Work done while at Google.  

Source: Google AI Blog


Google Workspace Updates Weekly Recap – October 21, 2022

New updates 

Unless otherwise indicated, the features below are fully launched or in the process of rolling out (rollouts should take no more than 15 business days to complete), launching to both Rapid and Scheduled Release at the same time (if not, each stage of rollout should take no more than 15 business days to complete), and available to all Google Workspace and G Suite customers. 

Initiate dialog workflows from the Chat app using message cards
Previously, the only way for developers and Chat app users to open dialogs was through slash commands. Now, we’re adding the ability to trigger the dialog by using buttons on an in-stream message card. This addition provides a much more convenient way to initiate workflows that involve dialog surfaces. | Learn more
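To make the shape of this concrete, here is a minimal sketch (in Python, building the JSON message body) of a card message whose button is marked to open a dialog. It follows the `cardsV2` pattern from the Chat API documentation, where setting the button action's `interaction` to `OPEN_DIALOG` tells Chat to render the handler's response as a dialog; the card ID, button text, and handler function name below are illustrative assumptions, so check exact field names against the Chat API reference.

```python
def build_dialog_trigger_card(function_name: str) -> dict:
    """Return a Chat message body containing a card with a dialog-opening button.

    `function_name` is the (hypothetical) handler the Chat app invokes when the
    button is clicked; its response is rendered as a dialog because the action
    is marked with the OPEN_DIALOG interaction.
    """
    return {
        "cardsV2": [{
            "cardId": "start-workflow-card",  # illustrative ID
            "card": {
                "sections": [{
                    "widgets": [{
                        "buttonList": {
                            "buttons": [{
                                "text": "Start workflow",
                                "onClick": {
                                    "action": {
                                        # OPEN_DIALOG marks this action as a
                                        # dialog trigger rather than a plain
                                        # card update.
                                        "function": function_name,
                                        "interaction": "OPEN_DIALOG",
                                    }
                                },
                            }]
                        }
                    }]
                }]
            },
        }]
    }
```

A Chat app would return this structure as its message body; clicking the button then invokes the named function, whose response supplies the dialog's contents.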


Add shared drives to specific organizational units, now generally available 
Earlier this year, we launched a beta that allows admins to place shared drives into sub organizational units (OUs). Doing so enables admins to configure sharing policies, data regions, access management, and more at a granular level. We’re excited to announce this is now generally available. | Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Standard, Education Plus, the Teaching and Learning Upgrade, and Nonprofits customers only. | Learn more


See when colleagues are out of the office on Android 
When viewing a person information card in Google Voice, Calendar, Gmail, and Chat on Android, you are now able to see your colleagues’ out-of-office status via an out-of-office banner. The banner also shows when the person is expected to return. 

More ways to work with, display, and organize your content across Google Workspace on Android
  • Link previews in Google Sheets: We’re improving the Android experience by adding link previews to Sheets. This feature is already available on the web and allows you to get context from linked content without bouncing between apps and screens. | Gradual rollout (up to 15 days for feature visibility) starting on October 24, 2022. | Learn more
  • Google Sheets drag & drop improvements: We’ve enhanced drag & drop support for the Sheets Android app by adding the ability to drag, copy, and share charts and in-cell images.


Previous announcements


The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.


Workspace Admins are now notified when Label editing is restricted by set rules
We’ve added a new Label Manager UI feature showing which rules a label is used within. Specifically, a message identifying and linking the label to the exact rule(s) will now appear in the Label Manager to ensure admins understand why label modification is disabled. | Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Plus, Education Standard customers only. | Learn more.


Encouraging Working Location coverage across organizations 
Admins now have access to a new tool that aims to drive Working Location usage across their organizations. This setting adds a customizable banner to users’ Calendar either encouraging or requiring them to set up their working location. | Available to Google Workspace Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, Education Standard, and the Teaching and Learning Upgrade, as well as legacy G Suite Business customers only. | Learn more.


Enhanced menus in Google Slides and Drawings improves findability of key features 
We’re updating the menus in Google Slides and Google Drawings to make it easier to locate the most commonly-used features. | Learn more.


Preview or download client-side encrypted files with Google Drive on Android and iOS 
Admins for select Google Workspace editions can update their client-side encryption configurations to include Drive Android and iOS. When enabled, users can preview or download client-side encrypted files. | Learn more.


Split table cells in Google Docs to better organize information
You can now split table cells into a desired number of rows and columns in Google Docs. | Learn more.


Updates to storage management tools in the Admin console 
To further enhance the set of tools for managing storage, we’re rolling out a new Storage Admin role. The ability to apply storage limits to shared drives and a new column called Shared drive ID in the Manage Shared Drives page are coming soon. | Learn more.


Hold separate conversations in Google Chat spaces with in-line threading 
You can now reply directly to any message in new spaces and some existing spaces. This creates a separate in-line thread where smaller groups of people can continue a conversation on a specific topic. | Learn more.


Conversation summaries in Google Chat help you stay on top of messages in Spaces 
We've introduced conversation summaries in Google Chat on web, which provide a helpful digest of conversations in a space, allowing you to quickly catch up on unread messages and navigate to the most relevant threads. | Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Plus, Education Standard, the Teaching and Learning Upgrade, Frontline, and Nonprofits customers only. | Learn more.


Present Google Slides directly in Google Meet 
You will now be able to control your Slides and engage with your audience all in one screen by presenting Slides from Meet. This updated experience can help you present with greater confidence and ultimately make digital interactions feel more like when you’re physically together. | Available to Google Workspace Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Education Standard, Enterprise Plus, Education Plus, the Teaching and Learning Upgrade, and Nonprofits customers only. | Learn more.


Easily find Google Workspace Marketplace apps with enhanced search filters 
We’ve introduced enhanced search filters in the Google Workspace Marketplace to help you quickly find relevant apps. These new filters allow you to search by category, price, rating, whether it’s a private app for the organization, whether it works with other apps, and more. | Available to Google Workspace Business Starter, Business Standard, Business Plus, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, and Nonprofits, as well as legacy G Suite Basic and Business customers only. | Learn more.


Improving the Google Chat and Gmail search experience on web and mobile 
In order to help you find more accurate and customized search suggestions and results, we’ve introduced three features that improve the Google Chat and Gmail search experience on web and mobile: Search suggestions, Gmail labels, and Related results. | Learn more.



How AI can help in the fight against breast cancer

In 2020, there were 2.3 million people diagnosed with breast cancer and 685,000 deaths globally. Early cancer detection is key to better health outcomes. But screenings are work intensive, and patients often find getting mammograms and waiting for results stressful.

In response to these challenges, Google Health and Northwestern Medicine partnered in 2021 on a clinical research study to explore whether artificial intelligence (AI) models can reduce the time to diagnosis during the screening process, narrowing the assessment gap and improving the patient experience. This work is among the first prospective randomized controlled studies for AI in breast cancer screening, and the results will be published in early 2023.

Behind this work are scientists and researchers united in the fight against breast cancer. We spoke with Dr. Sunny Jansen, a technical program manager at Google, and Sally Friedewald, MD, the division chief of Breast and Women's Imaging at Northwestern University’s Feinberg School of Medicine, on how they hope this work will help screening providers catch cancer earlier and improve the patient experience.

What were you hoping to achieve with this work in the fight against breast cancer?

Dr. Jansen: Like so many of us, I know how breast cancer can impact families and communities, and how critical early detection can be. The experiences of so many around me have influenced my work in this area. I hope that AI can make the future of breast cancer screening easier, faster, more accurate — and, ultimately, more accessible for women globally.

So we sought to understand how AI can reduce diagnostic delays and help patients receive diagnoses as soon as possible by streamlining care into a single visit. For patients with abnormal findings at screening, the diagnostic delay to get additional imaging tests is typically a couple of weeks in the U.S. Often, the results are normal after the additional imaging tests, but that waiting period can be nerve-racking. Additionally, it can be harder for some patients to come back to get additional imaging tests, which exacerbates delays and leads to disparities in the timeliness of care.

Dr. Friedewald: I anticipate an increase in the demand for screenings and challenges in having enough providers with the necessary specialized training. Using AI, we can identify patients who need additional imaging while they are still in the clinic. We can expedite their care and, in many cases, eliminate the need for return visits. Patients who aren’t flagged still receive the care they need as well. This translates into operational efficiencies and ultimately leads to patients getting a breast cancer diagnosis faster. We already know the earlier treatment starts, the better.

What were your initial beliefs about applying AI to identify breast cancer? How have these changed through your work on this project?

Dr. Jansen: Most existing publications about AI and breast cancer analyze AI performance retrospectively by reviewing historical datasets. While retrospective studies have a lot of value, they don’t necessarily represent how AI works in the real world. Sally decided early on that it would be important to do a prospective study, incorporating AI into real-world clinical workflows and measuring the impact. I wasn’t sure what to expect!

Dr. Friedewald: Computer-aided detection (CAD), which was developed a few decades ago to help radiologists identify cancers via mammogram, has proven to be helpful in some environments. Overall, in the U.S., CAD has not resulted in increased cancer detection. I was concerned that AI would be similar to CAD in efficacy. However, AI gathers data in a fundamentally different way. I am hopeful that with this new information we can identify cancers earlier, with the ultimate goal of saving lives.

The research will be published in early 2023. What did you find most inspiring and hopeful about what you learned?

Dr. Jansen: The patients who consented to participate in the study inspired me. Clinicians and scientists must conduct quality real-world research so that the best ideas can be identified and moved forward, and we need patients as equal partners in our research.

Dr. Friedewald: Agreed! There’s an appetite to improve our processes and make screening easier and less anxiety-provoking. I truly believe that if we can streamline care for our patients, we will decrease the stress associated with screening and hopefully improve access for those who need it.

Additionally, AI has the potential to go beyond the prioritization of patients who need care. By prospectively identifying patients who are at higher risk of developing breast cancer, AI could help us determine patients that might need a more rigorous screening regimen. I am looking forward to collaborating with Google on this topic and others that could ultimately improve cancer survival.

Helping all New Yorkers pursue a career in tech

As New York emerges from the COVID-19 pandemic, the tech sector continues to play a critical role in the city’s economic recovery. While hiring has slowed in many of the city’s industries, tech is still among the fastest areas of job growth. In fact, there were more openings for tech positions during the pandemic than in any other industry.

We believe the city’s good-paying tech jobs should be within reach of all New Yorkers. That’s why earlier this year we announced the Google NYC Tech Opportunity Fund — a $4 million commitment to computer science (CS) education, career development and job-preparedness to make sure every New Yorker, today and in the future, has the chance to get into tech.

With over 680,000 good-paying tech jobs, New York has more tech workers than any other U.S. city. That means for every one Googler in New York, there are over 50 additional tech jobs here. So we’ve extended our support for tech in New York beyond our own hiring to the city’s overall tech employment pipeline — starting from the classroom all the way to the office.

We’ve had some early success: We’ve trained 1,200 New York City high school students through our CS education programs like Code Next and the Computer Science Summer Institute (CSSI). Meanwhile, Grow with Google has partnered with over 530 organizations to train more than 430,000 New Yorkers on digital skills with the help of organizations like public libraries and chambers of commerce. We also launched an apprenticeship program where over 90% of participants nationally landed quality jobs in tech, including at Google, within six months of completing the program. And we’re supporting New York-based startups through Google’s Black Founders Fund and Latino Founders Fund.

With the Google NYC Tech Opportunity Fund, we’re going a step further. We’ve identified key areas we believe Google can help address larger systemic issues and where we’ll focus our investments.

Support for teaching early tech skills

P-12 students with access to CS classes in school are nearly three times more likely to aspire to have a job in the field. But to offer these courses, schools need teachers who are trained in computational skills. After supporting a CS teacher training program at Hunter College in 2021, we committed an additional $1.5 million to The City University of New York (CUNY) and Hunter College to help them train more CS teachers and incorporate computational thinking into their curricula.

New York City's public libraries are essential learning environments for many, especially in under-resourced communities. Thousands of teens use the city’s three library systems annually to get college and career mentoring, build digital literacy, borrow books and more. So we granted a total of $1.5 million to Brooklyn Public Library, The New York Public Library and Queens Public Library to help them create special teen centers. These spaces will offer access to technology, resources and programs teens need to develop essential career skills for the future.

Resources for job seekers

We’re also providing a $1 million Google.org grant to the New York City Employment and Training Coalition (NYCETC) to assemble a consortium of leaders in tech education and workforce development, and to seed a grant fund for organizations that support BIPOC job seekers in NYC.

As part of this effort, we also offer free Google Career Certificates for community colleges, such as The State University of New York’s (SUNY) online center. Over 10,000 New Yorkers have already completed a Google Career Certificate and built up their qualifications for high-demand tech jobs.

By taking steps to support students and those already in the workforce, we can help ensure all New Yorkers have access to career opportunities so the tech sector in New York really looks like New York.