Hi everyone! We've just released Chrome Stable 107 (107.0.5304.101) for iOS; it'll become available on the App Store in the next few hours.
This release includes stability and performance improvements. You can see a full list of the changes in the Git log. If you find a new issue, please let us know by filing a bug.
Posted by Diana Wong, Product Manager
Last month, we kicked off the first part of Android Dev Summit, and later this week comes the second session track: Form Factors! In this track, we’ll walk you through all things Android form factors, including the API, tooling, and design guidance needed to help make your app look great on Android watches, tablets, TVs and more. We dropped information on the livestream agenda, technical talks, and speakers — so start planning your schedule!
Here’s what to expect on November 9th:
Get ready for all things form factors! We’re kicking the livestream off at 1:00 PM GMT on November 9th on YouTube and developer.android.com, where you’ll be able to watch over 20 sessions and check out the latest announcements on building for different form factors, with talks such as:
Build Better UIs Across Form Factors with Android Studio
Deep Dive into Wear OS App Architecture
Do's and Don'ts: Mindset for Optimizing Apps for Large Screens
And to wrap up the livestream, at 4:20 PM GMT we’ll be hosting a live Q&A, #AskAndroid, so you can get your burning form factor questions answered live by the team that builds Android. Post your questions on Twitter or comment in the YouTube livestream using #AskAndroid for a chance to have them answered during the stream.
There’s more to come!
There is even more to get excited about as Android Dev Summit continues later this month with the Platform track. On November 14, we’re broadcasting our Platform technical talks, where you’ll learn about the latest innovations and updates to the Android platform. You’ll be able to watch talks such as Android 13: Migrate your apps, Presenting a high-quality media experience for all users, and Migrating to Billing Library 5 and more flexible subscriptions on Google Play. Get a sneak peek at all the Platform talks here.
Missed the kickoff event? Watch the keynote on YouTube and check out the keynote recap so you don’t miss a beat! Plus, get up to speed on all things Modern Android Development with a recap video, blog, and the full MAD playlist, where you can find case studies and technical sessions.
We’re so excited for all the great content yet to come from Android Dev Summit, and we’re looking forward to connecting with you!
Posted by Noah Snavely and Zhengqi Li, Research Scientists, Google Research
We live in a world of great natural beauty — of majestic mountains, dramatic seascapes, and serene forests. Imagine seeing this beauty as a bird does, flying past richly detailed, three-dimensional landscapes. Can computers learn to synthesize this kind of visual experience? Such a capability would allow for new kinds of content for games and virtual reality experiences: for instance, relaxing within an immersive flythrough of an infinite nature scene. But existing methods that synthesize new views from images tend to allow for only limited camera motion.
In a research effort we call Infinite Nature, we show that computers can learn to generate such rich 3D experiences simply by viewing nature videos and photographs. Our latest work on this theme, InfiniteNature-Zero (presented at ECCV 2022) can produce high-resolution, high-quality flythroughs starting from a single seed image, using a system trained only on still photographs, a breakthrough capability not seen before. We call the underlying research problem perpetual view generation: given a single input view of a scene, how can we synthesize a photorealistic set of output views corresponding to an arbitrarily long, user-controlled 3D path through that scene? Perpetual view generation is very challenging because the system must generate new content on the other side of large landmarks (e.g., mountains), and render that new content with high realism and in high resolution.
Example flythrough generated with InfiniteNature-Zero. It takes a single input image of a natural scene and synthesizes a long camera path flying into that scene, generating new scene content as it goes.
Background: Learning 3D Flythroughs from Videos
To establish the basics of how such a system could work, we’ll describe our first version, “Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image” (presented at ICCV 2021). In that work we explored a “learn from video” approach, where we collected a set of online videos captured from drones flying along coastlines, with the idea that we could learn to synthesize new flythroughs that resemble these real videos. This set of online videos is called the Aerial Coastline Imagery Dataset (ACID). In order to learn how to synthesize scenes that respond dynamically to any desired 3D camera path, however, we couldn’t simply treat these videos as raw collections of pixels; we also had to compute their underlying 3D geometry, including the camera position at each frame.
The basic idea is that we learn to generate flythroughs step-by-step. Given a starting view, like the first image in the figure below, we first compute a depth map using single-image depth prediction methods. We then use that depth map to render the image forward to a new camera viewpoint, shown in the middle, resulting in a new image and depth map from that new viewpoint.
However, this intermediate image has some problems — it has holes where we can see behind objects into regions that weren’t visible in the starting image. It is also blurry, because we are now closer to objects, but are stretching the pixels from the previous frame to render these now-larger objects.
To handle these problems, we learn a neural image refinement network that takes this low-quality intermediate image and outputs a complete, high-quality image and corresponding depth map. These steps can then be repeated, with this synthesized image as the new starting point. Because we refine both the image and the depth map, this process can be iterated as many times as desired — the system automatically learns to generate new scenery, like mountains, islands, and oceans, as the camera moves further into the scene.
Our Infinite Nature methods take an input view and its corresponding depth map (left). Using this depth map, the system renders the input image to a new desired viewpoint (center). This intermediate image has problems, such as missing pixels revealed behind foreground content (shown in magenta). We learn a deep network that refines this image to produce a new high-quality image (right). This process can be repeated to produce a long trajectory of views. We thus call this approach “render-refine-repeat”.
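To make the loop concrete, here is a minimal Python sketch of the render-refine-repeat idea. It is not the released Infinite Nature code: `predict_depth`, `warp_to_view`, and `refiner` are hypothetical callables standing in for the single-image depth predictor, the differentiable warp, and the learned refinement network described above.

```python
def generate_flythrough(seed_image, cameras, predict_depth, warp_to_view, refiner):
    """Synthesize a flythrough along `cameras`, starting from `seed_image`.

    seed_image: the single input photograph (e.g., a (1, 3, H, W) tensor)
    cameras:    a list of camera poses along the desired 3D path
    The three callables are illustrative stand-ins, not names from the paper.
    """
    depth = predict_depth(seed_image)          # depth map for the seed view
    image, frames = seed_image, [seed_image]
    for cam, next_cam in zip(cameras[:-1], cameras[1:]):
        # Render: warp the current RGB-D frame into the next viewpoint. The
        # result has holes behind foreground content and stretched pixels.
        warped_rgb, warped_depth, valid_mask = warp_to_view(image, depth, cam, next_cam)
        # Refine: inpaint missing regions and restore detail, producing a
        # complete image *and* depth map so the loop can repeat indefinitely.
        image, depth = refiner(warped_rgb, warped_depth, valid_mask)
        frames.append(image)
    return frames
```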
We train this render-refine-repeat synthesis approach using the ACID dataset. In particular, we sample a video from the dataset and then a frame from that video. We then use this method to render several new views moving into the scene along the same camera trajectory as the ground truth video, as shown in the figure below, and compare these rendered frames to the corresponding ground truth video frames to derive a training signal. We also include an adversarial setup that tries to distinguish synthesized frames from real images, encouraging the generated imagery to appear more realistic.
Infinite Nature can synthesize views corresponding to any camera trajectory. During training, we run our system for T steps to generate T views along a camera trajectory calculated from a training video sequence, then compare the resulting synthesized views to the ground truth ones. In the figure, each camera viewpoint is generated from the previous one by performing a warp operation R, followed by the neural refinement operation gθ.
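The training signal described above can be sketched as follows. This is an illustration rather than the paper's exact recipe: `rollout` (which runs render-refine-repeat for T steps along a given camera path) and `discriminator` are assumed helpers, and the loss weighting is a placeholder.

```python
import torch
import torch.nn.functional as F

def training_losses(rollout, discriminator, clip_frames, clip_cameras, T):
    """Reconstruction + adversarial losses for one sampled ACID clip."""
    # Re-synthesize T views starting from the clip's first frame, following
    # the same camera trajectory as the ground-truth video.
    synth_frames = rollout(clip_frames[0], clip_cameras[: T + 1])

    # Reconstruction: each synthesized view should match the real frame
    # captured from the same viewpoint.
    recon = sum(F.l1_loss(synth, real)
                for synth, real in zip(synth_frames[1:], clip_frames[1: T + 1]))

    # Adversarial: a discriminator tries to tell synthesized frames from real
    # images; the generator is trained to make them indistinguishable.
    fake_logits = torch.cat([discriminator(f) for f in synth_frames[1:]])
    adv = F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))

    return recon + 0.1 * adv  # the 0.1 weight is illustrative only
```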
The resulting system can generate compelling flythroughs, as featured on the project webpage, along with a “flight simulator” Colab demo. Unlike prior methods on video synthesis, this method allows the user to interactively control the camera and can generate much longer camera paths.
InfiniteNature-Zero: Learning Flythroughs from Still Photos
One problem with this first approach is that video is difficult to work with as training data. High-quality video with the right kind of camera motion is challenging to find, and the aesthetic quality of an individual video frame generally cannot compare to that of an intentionally captured nature photograph. Therefore, in “InfiniteNature-Zero: Learning Perpetual View Generation of Natural Scenes from Single Images”, we build on the render-refine-repeat strategy above, but devise a way to learn perpetual view synthesis from collections of still photos — no videos needed. We call this method InfiniteNature-Zero because it learns from “zero” videos. At first, this might seem like an impossible task — how can we train a model to generate video flythroughs of scenes when all it’s ever seen are isolated photos?
To solve this problem, we had the key insight that if we take an image and render a camera path that forms a cycle — that is, where the path loops back such that the last image is from the same viewpoint as the first — then we know that the last synthesized image along this path should be the same as the input image. Such cycle consistency provides a training constraint that helps the model learn to fill in missing regions and increase image resolution during each step of view generation.
However, training with these camera cycles is insufficient for generating long and stable view sequences, so as in our original work, we include an adversarial strategy that considers long, non-cyclic camera paths, like the one shown in the figure above. In particular, if we render T frames from a starting frame, we optimize our render-refine-repeat model such that a discriminator network can’t tell which was the starting frame and which was the final synthesized frame. Finally, we add a component trained to generate high-quality sky regions to increase the perceived realism of the results.
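A rough sketch of these two constraints, using the same hypothetical `rollout` and `discriminator` helpers as before; the looping camera path, the losses, and their combination are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(rollout, photo, cycle_cameras):
    """Render along a camera path that loops back to the starting viewpoint.

    `cycle_cameras` ends at the same pose it starts from, so the final
    synthesized frame should reproduce the input photograph.
    """
    frames = rollout(photo, cycle_cameras)
    return F.l1_loss(frames[-1], photo)

def start_vs_end_adversarial_loss(rollout, discriminator, photo, long_cameras):
    """For long, non-cyclic paths, train so a discriminator cannot tell the
    real starting frame from the frame synthesized T steps later."""
    frames = rollout(photo, long_cameras)
    fake_logit = discriminator(frames[-1])
    return F.binary_cross_entropy_with_logits(
        fake_logit, torch.ones_like(fake_logit))
```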
With these insights, we trained InfiniteNature-Zero on collections of landscape photos, which are available in large quantities online. Several resulting videos are shown below — these demonstrate beautiful, diverse natural scenery that can be explored along arbitrarily long camera paths. Compared to our prior work — and to prior video synthesis methods — these results exhibit significant improvements in quality and diversity of content (details available in the paper).
Several nature flythroughs generated by InfiniteNature-Zero from single starting photos.
Conclusion
There are a number of exciting future directions for this work. For instance, our methods currently synthesize scene content based only on the previous frame and its depth map; there is no persistent underlying 3D representation. Our work points towards future algorithms that can generate complete, photorealistic, and consistent 3D worlds.
Acknowledgements
Infinite Nature and InfiniteNature-Zero are the result of a collaboration between researchers at Google Research, UC Berkeley, and Cornell University. The key contributors to the work represented in this post include Angjoo Kanazawa, Andrew Liu, Richard Tucker, Zhengqi Li, Noah Snavely, Qianqian Wang, Varun Jampani, and Ameesh Makadia.
Starting today, Google’s OpenRTB implementation will be updated to only include fields and messages for the latest supported version, OpenRTB 2.5. This will affect the OpenRTB protocol and any APIs or tools used to configure bidder endpoints.
OpenRTB protocol changes
As a result of supporting only the latest version, the following fields are deprecated and will no longer be populated:
BidRequest.imp[].banner.hmax
BidRequest.imp[].banner.hmin
BidRequest.imp[].banner.wmax
BidRequest.imp[].banner.wmin
Additionally, the following fields may now be populated for all bidders (a short sketch of reading a few of these from a JSON bid request follows the list):
BidRequest.device.ext.user_agent_data
BidRequest.device.geo.accuracy
BidRequest.device.geo.utcoffset
BidRequest.imp[].banner.api: This will support the value MRAID_3.
BidRequest.imp[].banner.format
BidRequest.imp[].banner.vcm
BidRequest.imp[].metric
BidRequest.imp[].native.api: This will support the value MRAID_3.
BidRequest.imp[].video.api: This will support the value MRAID_3.
BidRequest.imp[].video.companionad.api
BidRequest.imp[].video.companionad.format
BidRequest.imp[].video.companionad.vcm
BidRequest.imp[].video.linearity
BidRequest.imp[].video.maxduration
BidRequest.imp[].video.placement
BidRequest.imp[].video.playbackend
BidRequest.imp[].video.skip
BidRequest.site.mobile
BidRequest.test
BidRequest.wlang
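As referenced above, here is a small, illustrative Python sketch (not official sample code) of how a bidder consuming the JSON form of the protocol might read a few of these newly populated fields. The field paths follow the list above; the helper name and return structure are hypothetical.

```python
import json

def summarize_new_fields(bid_request_json: str) -> dict:
    """Pull a handful of the newly populated OpenRTB 2.5 fields, if present."""
    req = json.loads(bid_request_json)
    device = req.get("device", {})
    geo = device.get("geo", {})
    first_imp = (req.get("imp") or [{}])[0]
    video = first_imp.get("video", {})
    return {
        "is_test": bool(req.get("test", 0)),         # BidRequest.test
        "allowed_languages": req.get("wlang", []),   # BidRequest.wlang
        "geo_accuracy": geo.get("accuracy"),         # device.geo.accuracy
        "geo_utc_offset": geo.get("utcoffset"),      # device.geo.utcoffset
        "video_skippable": video.get("skip"),        # imp[].video.skip
        "video_placement": video.get("placement"),   # imp[].video.placement
    }
```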
If a newer OpenRTB specification is published, Google may upgrade the current supported version to match it. Previously deprecated fields that are removed from the specification will also be removed from the protocol. Non-deprecated fields that are removed will be marked as deprecated in the protocol, and eventually removed following a brief deprecation period.
Authorized Buyers Real-time Bidding API changes
The behavior of the bidders.endpoints resource will change. The following enum values for bidProtocol will be deprecated:
OPENRTB_2_2
OPENRTB_2_3
OPENRTB_PROTOBUF_2_3
OPENRTB_2_4
OPENRTB_PROTOBUF_2_4
OPENRTB_2_5
OPENRTB_PROTOBUF_2_5
New enum values for bidProtocol will be added to represent the latest supported OpenRTB version in either JSON or Protobuf formats:
OPENRTB_JSON
OPENRTB_PROTOBUF
If you have existing endpoints with their bidProtocol set to any of the deprecated values above, they will automatically be migrated to either OPENRTB_JSON or OPENRTB_PROTOBUF depending on the format specified by the original value. Additionally, any modifications to your endpoints that would set bidProtocol to the deprecated values will instead set it to OPENRTB_JSON or OPENRTB_PROTOBUF.
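For bidders who prefer to update an endpoint explicitly rather than wait for the automatic migration, a hedged sketch using the Authorized Buyers Real-time Bidding API Python client might look like the following; the bidder and endpoint IDs are placeholders, and credential setup is assumed to be handled elsewhere.

```python
from googleapiclient.discovery import build

def set_endpoint_to_protobuf(credentials, bidder_id: str, endpoint_id: str):
    """Patch an endpoint's bidProtocol to the new OPENRTB_PROTOBUF value."""
    service = build("realtimebidding", "v1", credentials=credentials)
    endpoint_name = f"bidders/{bidder_id}/endpoints/{endpoint_id}"
    return (
        service.bidders()
        .endpoints()
        .patch(
            name=endpoint_name,
            updateMask="bidProtocol",   # only change the protocol setting
            body={"bidProtocol": "OPENRTB_PROTOBUF"},
        )
        .execute()
    )
```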
Feel free to reach out to us via the Authorized Buyers API support forum with any feedback or questions you may have related to these changes.
- Mark Saniscalchi, Authorized Buyers Developer Relations
It’s the most wonderful time of the year! Bustling shoppers, home-cooked meals and quality time with loved ones.
Even with all this holiday cheer, the season can feel overwhelming. To help you save time, money and your sanity, here are five must-know Google Maps tips and tricks to navigate the busy holiday season with ease.
Search for stops along your route
Forgot something important for your holiday gathering? Start navigating to your destination and tap on the magnifying glass on the top right-hand side of the app. You’ll find grocery stores, rest stops, gas stations, hotels and more along the way so you can avoid a major detour.
Share your ETA ⌚
Running late to the festivities? Let friends and family know you’re on your way by sharing your ETA right from Maps. When navigating, tap on the bottom screen and then on “Share trip progress.”
Drive sustainably and save money on gas ♻️
Use eco-friendly routing to see and choose the most fuel- or energy-efficient route to your destination — whether it’s an ice rink or a tree farm — whenever you navigate with Google Maps. You can also search for "gas prices" to see the price of fuel at stations nearby so you can pick the cheapest option.
Find your way indoors fast with the Directory tab
If you’re planning to shop or travel this season, you can search for any mall, airport or transit station and tap on the “Directory” tab to see what businesses are inside — like if a specific car rental company is located at the airport. You’ll also see helpful details like what floor a place is on and if it’s open.
Nothing dampens a festive mood like long lines. Search for any place on Google Maps — like a bakery, grocery store or airport — and pull up its business page. Scroll down to see how busy it is at the moment or how busy it tends to be on a given day and hour to save time.
Still hungry for more? Check out these Google Maps traffic predictions, popular times, and activity trends — we promise they’re pumpkin to talk about!
The holidays are upon us once again. While we love the crinkle of wrapping paper and the smell of freshly baked gingerbread, the holiday season’s long lines and endless traffic are enough to turn anyone into a Scrooge. To help you navigate this holiday season like a pro, we’ve pulled together Google Maps traffic predictions, popular times and activity trends to get you out the door and on your merry way with ease.
The best times for Thanksgiving travel
Whether you’re hitting the road early or heading out on Thanksgiving day, you can outsmart a potential slowdown by choosing the right time to leave. We took a look at last year’s Thanksgiving traffic patterns across more than 20 major U.S. cities to help you plan your trip and quit traffic cold turkey.
When to hit the road. The best time to get on the road before Thanksgiving is typically Monday at 8 p.m. local time. Try to avoid driving on Tuesday or Wednesday between 4 and 5 p.m., as that’s typically when Thanksgiving traffic hits its peak.
Turkey-day travels. Planning to make your turkey trot on Thanksgiving day? Try to hit the road before noon or after 5 p.m. Roads are typically more congested between 3 and 5 p.m., which could cause some ruffled feathers.
Black Friday shopping. If you manage to emerge from your Thanksgiving food coma to shop ‘til you drop, there’s no sense in getting caught in traffic! On Black Friday, we typically see traffic pick up around noon and peak around 4 p.m. in most places across the U.S. You’ll see fewer cars on the road at 7 a.m., 10 a.m., and between 7 and 8 p.m.
Home for the weekend. To make sure your journey home is all gravy after the festivities are over, try to avoid the roads at 4 p.m. on Saturday and Sunday. Typically the best times to leave are 6 a.m. or 8 p.m. local time.
The best times to travel, shop and run errands
Holiday crowds are snow joke. We looked at Popular Times information to determine the best and worst times to visit the places you need most during the holidays — so you can spend more time celebrating and less time waiting in line.
✈️ Airports. Turkeys aren’t the only ones trying to fly out of here! Airports in the U.S. are typically at their busiest on Saturdays at noon, so build in extra time if you’re traveling around then. Airports are at their least busy on Wednesdays around 8 p.m.
Bakeries. Looking for a sweet treat? You can expect to stand in line at the bakery on Saturday at 10 a.m., but you’ll have the best chance of avoiding the crowds if you visit on Monday at 3 p.m.
Grocery Stores. On a mission to get everything you need to be the hostess with the most-est? Grocery stores across the U.S. are typically busiest on Sunday at 1 p.m. and least busy on Tuesday at 9 a.m.
Post Offices. We can’t all deliver our gifts in a sleigh, but you can slay your trip to the post office. Visit on Friday around noon to beat the crowds, and make sure you avoid the typical Tuesday 3 p.m. rush.
Shopping Centers. For when your presents is requested. Visit a local mall or shopping center around Monday at 3 p.m. and you’ll be in and out faster than you can say Kris Kringle, but visit on Saturday at 1 p.m. and yule surely be sorry.
The most popular holiday activities
From Christmas tree farms and holiday markets to ice skating rinks, we took a look at how popular holiday activities compare in each state. Dive in for some tree-mendous activity trends!
Order up! The most popular chain restaurants people navigate to on Thanksgiving are McDonald’s, Starbucks, and Dunkin’, according to Google Maps data. Tennessee is the only state that doesn’t favor one of these three, opting for Cracker Barrel Old Country Store instead!
Among the most popular holiday activities are Christmas tree farms and ice skating rinks, according to Google Maps direction requests. Let’s see how they compare:
⛸️ Ice, ice baby. Ice skating rinks took the lead in 33 states: Alaska, Arizona, California, Colorado, Connecticut, Delaware, Florida, Georgia, Idaho, Illinois, Indiana, Kentucky, Maryland, Massachusetts, Michigan, Minnesota, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oklahoma, Rhode Island, South Dakota, Tennessee, Texas, Utah, Virginia and Wyoming.
? Rockin’ around the Christmas tree. Christmas tree farms were the more popular activity in 17 states. The tree-huggin’ states are: Alabama, Arkansas, Hawaii, Iowa, Kansas, Louisiana, Maine, Mississippi, North Carolina, Ohio, Oregon, Pennsylvania, South Carolina, Vermont, Washington, West Virginia and Wisconsin.
Whether you’re traveling far and wide or welcoming friends and family to your home, we know you’ve got your work cut out for you. Check out these Google Maps tips and tricks for navigating the holidays with ease.
The Dev channel is being updated to 109.0.5391.0 (Platform version: 15227.0.0) for most ChromeOS devices. This build contains a number of bug fixes and security updates.
If you find new issues, please let us know in one of the following ways
Unless otherwise indicated, the features below are fully launched or in the process of rolling out (rollouts should take no more than 15 business days to complete), launching to both Rapid and Scheduled Release at the same time (if not, each stage of rollout should take no more than 15 business days to complete), and available to all Google Workspace and G Suite customers.
Use smart chips in Google Sheets as links on mobile
With smart canvas, you’re currently able to quickly add people and files into Google Sheets with smart chips on the web. Starting this week on mobile, these chips are treated like linked text and will show relevant hovercards and context menu items.
Offline printing now available for Google Sheets
We’ve launched offline printing to support users working in a Sheet with offline access. In addition, emojis and system fonts are now included in offline printing. This feature is only available to those using Google Chrome. As a reminder, your organization can encrypt Sheets files with Workspace Client-side encryption, which is also supported by offline printing. | Learn more.
Previous announcements
The announcements below were published on the Workspace Updates blog earlier this week. Please refer to the original blog posts for complete details.
Improved flow for expiring access controls in Google Drive
You can now set expiring access when sharing files in My Drive. This update improves the existing expiring access capabilities by allowing you to add an expiration when sharing, as opposed to after a person already has access to the file. | Available to Google Workspace Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Plus, Education Standard, and Nonprofits customers only. | Learn more.
Reminder to migrate your classic Google Sites before January 30, 2023
We’re extending the previously announced timeline to give Google Workspace customers more time to migrate from classic Google Sites to new Google Sites:
On January 30, 2023 (previously December 1, 2022): We will begin to turn off the ability to edit any remaining classic Sites in your domain. Your classic Sites will remain viewable until the automatic conversion.
After January 30, 2023 (previously January 1, 2023): We will begin to automatically convert all remaining classic Sites to new Sites drafts for site owners to review and publish, and export static archives to Google Takeout. Your classic Sites will be deleted after they are converted to new Sites. Note: Refer to the CSV file included with the export generated after your domain’s autoconversion completes for any classic Sites we were unable to convert. | Learn more.
Google for Education transformation reports window open for customers worldwide
Google for Education transformation reports are available for K-12 Google Workspace for Education customers worldwide, at no cost. Note: transformation reports are only available in English at this time. | Available to K-12 Google Workspace for Education Fundamentals, Education Standard, Education Plus, and the Teaching and Learning Upgrade customers only. | Learn more.
Manage projects & tasks with a new timeline view on Google Sheets
We’ve introduced an interactive timeline view that allows you to track projects in Google Sheets. This new visual layer displays project information stored in Sheets, such as the task start and end date, description, and owner. | Available to Google Workspace Essentials, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Education Fundamentals, Education Plus, Education Standard, the Teaching and Learning Upgrade, and Nonprofits customers only. | Learn more.
Last week marked the UNESCO Global Media and Information Literacy Week — perfect timing for our recently concluded Google News Initiative (GNI) Youth Verification Challenge.
The Challenge is for those aged 15 to 24 across the Asia Pacific region, and presents teams with a series of tutorials and quizzes on identifying online misinformation. This year, over 4,000 participants from 28 countries took part, learning about fact-checking tools used in the real world. Each team had the chance to hone their investigative skills and learn from experts.
We spoke with the winning team from India — made up of students Royal, Amitabh, and Rooka — about their thoughts on fact-checking. Royal attends the National Institute of Technology Karnataka, while Amitabh is a student at the Gaya College of Engineering, and Rooka studies at the Jawaharlal Nehru Technological University. Together they became the self-named team “Espionage Experts”, and experts they truly were! They had to solve more than 200 problems in order to win first place.
How can news be more engaging for a younger audience?
By providing a concise explanation of the story, news can definitely be enjoyable for younger audiences. Simple story formats, explainers and background context make news easier to digest, but to truly engage younger audiences, media organizations need to consider more diverse points of view and experiment with different kinds of storytelling formats. They can give us a call if they want some tips!
How aware do you think Gen Z is when it comes to online misinformation?
We think the younger generation needs to be even more aware, as many people appear to be falling for misinformation online. Events like the Youth Verification Challenge are great initiatives that can be conducted to inform youths about how to discern true stories from misinformation in an engaging manner.
Why do you think fact-checking is important?
Fact-checking is very important in the present world. We believe that true stories should and must prevail. It is important for us to get into a habit of fact-checking, given the cognitive biases that make us (unfortunately) receptive to fake news. Fact-checking can help mitigate the threat that misinformation poses to factual accuracy.
How will the team continue to bring fact-checking skills into their daily routine?
We are really passionate about sharing our experiences and the skills we picked up from the Youth Verification Challenge, and aspire to help our communities get better at fact-checking, too. Beyond encouraging people around us to use fact-checking tools like Google’s Fact Check Explorer, we will also focus on developing new skills to adapt to external trends in today’s digital world. The tools and strategies we use now will change when technology and the world of disinformation inevitably changes. Things are evolving fast and we all have to keep up!
Through the Youth Verification Challenge, we hope to keep encouraging younger internet users to fight misinformation as they equip themselves with the tools to approach the internet with confidence.
The Dev channel has been updated to 109.0.5396.2 for Windows and Linux, with Mac coming soon.
A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.