The last two years have been challenging across multiple dimensions for people and businesses around the world. There has never been a greater need for authoritative, timely journalism, yet many news organizations are confronting challenges as the world reorients.
Throughout this year, we have announced several initiatives to address the needs of startup news businesses as well as large-scale news publishers in this altered landscape.
Today we are pleased to launch the Google News Initiative Advertising Lab to support small and medium-sized news publishers producing original news for local and regional communities in India. The program, aimed at newsrooms employing up to 100 members, will focus on technical and product training of teams, as well as technical implementation to help grow each organization’s digital ad revenues.
The program will select up to 800 small and medium-sized news publishers and work closely with a select subset to guide them through the optimization and setup of their content management systems, websites, and ad configurations.
This program is the latest in a host of GNI programs working directly with news organizations of all sizes on developing new products, programs and partnerships to help news publishers grow their businesses.
"The insights and business direction provided through the interactive sessions as part of the Google News Initiative were eye opening for the team. We are excited and thankful that a similarly scaled program is being launched to further help us keep pace with the changes in the publishing landscape." - Harisha Bhat, Chief Technology Officer, Udayavani, Manipal Media Network Ltd.
Detailed eligibility criteria and applications can be accessed at the GNI Advertising Lab program website. Applications are open starting today until 5 November 2021.
Posted by Shilpa Jhunjhunwala, Head of India News Partnerships & APAC News Programs
You can now find and view additional information about people within your organization, your Contacts, and more across additional Google Workspace products. This information includes:
Contact information, such as phone number and email address,
Team and manager,
Office and desk location,
Whether you’ve received email from them before, and more.
This feature is already available for Gmail, and will now be available from the following products: Google Chat, Calendar, Docs, Sheets, and Slides.
Getting started
Admins: To maximize this feature, it’s helpful to have user data fully populated across Google Workspace apps. Workspace admins can populate this data in a few locations.
End users: There is no end user setting for this feature. Click “Open Detailed View” while hovering over a user’s information card, or select the Contacts icon in the side panel. Visit the Help Center to learn more about using Google products side by side.
We’re adding an Admin console setting which will enable admins to control whether students can unenroll from classes. If turned on, it will prevent students from unenrolling themselves from classes. A teacher or admin would have to unenroll them from the class instead.
Who’s impacted
Admins and end users
Why you’d use it
Students erroneously unenrolling from classes can cause disruption for teachers and an increased support volume for admins. By using this setting, you can help ensure your class rosters are accurate and up to date. Additionally, if you’re using roster import, this makes it easy to maintain your SIS as the source of truth for the roster.
Getting started
Admins: This feature will be OFF by default, and can be enabled at the domain or OU level. Find the setting at Admin console > Apps > Google Workspace > Settings for Classroom > Student unenrollment. Visit the Help Center to learn more about controlling student unenrollment settings.
End users: If turned on by their admin, students will no longer see the unenroll button on the course cards on the Classroom homepage.
Available as a core service to Google Workspace Education Fundamentals, Education Standard, the Teaching and Learning Upgrade, and Education Plus.
Available as an additional service to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Frontline, and Nonprofits, as well as G Suite Basic and Business customers.
We’ve added several new features to Google Meet in Classroom, making it easier and more secure:
The Class Meet link is now accessible on the side of the class stream, so students can easily join and teachers can manage the link from the stream.
Students will be directed to a waiting room until a teacher has joined the meeting via the class Meet link.
Guests not on the class roster will have to “ask to join” and be admitted by the teacher before they can participate, so no unintended participants join the meetings.
All designated co-teachers for a class will automatically be co-hosts in the meeting. This moderation tool will enable co-hosts to start the meeting with the same Meet link without the class teacher needing to be present.
Who’s impacted
Teachers and student end users
Why it matters
These features make it easier for teachers to manage meetings, help prevent unintended meeting participants from joining meetings, and generally help meetings run more smoothly. Overall, teachers and students will have a smoother and more secure experience while using Google Classroom and Google Meet.
Additional details
Please note that after a Meet link is generated, if a co-teacher is added or removed, you must regenerate the Meet link to update the host status. We are working on changing this functionality so the host status auto-updates, and hope to implement the change by the end of the year.
You can follow this Forum post to stay updated on the progress of the rollout, get additional tips, FAQs and other useful updates on this launch.
Getting started
Admins: There is no admin control for this feature, but Google Meet must be turned on for these features to be available in Google Classroom. Learn more about how to set up and manage Meet and use Meet for distance learning.
End users: These features will be ON by default for new meetings created in Google Classroom. Users with existing class Meet links should reset the link to get this updated functionality. Visit the Help Center to learn more about starting a video meeting for education.
Available as a core service to Google Workspace Education Fundamentals, Education Standard, the Teaching and Learning Upgrade, and Education Plus.
Available as an additional service to Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, Enterprise Plus, Frontline, and Nonprofits, as well as G Suite Basic and Business customers.
Posted by Jae Hun Ro, Software Engineer and Ananda Theertha Suresh, Research Scientist, Google Research
Federated learning is a machine learning setting where many clients (i.e., mobile devices or whole organizations, depending on the task at hand) collaboratively train a model under the orchestration of a central server, while keeping the training data decentralized. For example, federated learning makes it possible to train virtual keyboard language models based on user data that never leaves a mobile device.
Federated learning algorithms accomplish this by first initializing the model at the server and completing three key steps for each round of training:
The server sends the model to a set of sampled clients.
These sampled clients train the model on local data.
After training, the clients send the updated models to the server and the server aggregates them together.
An example federated learning algorithm with four clients.
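The three steps above can be sketched concretely. The following is a minimal NumPy illustration of one federated averaging round, using a hypothetical linear-regression task (the data, model, and hyperparameters are illustrative, not part of FedJAX):

```python
import numpy as np

# Hypothetical setup: each client holds a small private regression dataset.
rng = np.random.default_rng(0)
NUM_CLIENTS, DIM = 4, 3
client_data = [
    (rng.normal(size=(20, DIM)), rng.normal(size=20)) for _ in range(NUM_CLIENTS)
]

def local_train(weights, X, y, lr=0.1, epochs=5):
    """Step 2: a sampled client refines the server model on its local data
    (plain gradient descent on squared error)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(server_weights, clients):
    # Step 1: the server broadcasts the current model to the sampled clients.
    # Step 2: each client trains on its own data; raw data never leaves the client.
    local_models = [local_train(server_weights, X, y) for X, y in clients]
    # Step 3: the server aggregates the updated models,
    # weighting each one by the client's local dataset size.
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(local_models, axis=0, weights=sizes)

w = np.zeros(DIM)
for _ in range(10):
    w = federated_round(w, client_data)
```

In a real federated setting only the model parameters cross the network in steps 1 and 3; the per-client datasets in step 2 stay on-device.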
Federated learning has become a particularly active area of research due to an increased focus on privacy and security. Being able to easily translate ideas into code, iterate quickly, and compare and reproduce existing baselines is important for such a fast growing field.
In light of this, we are excited to introduce FedJAX, a JAX-based open source library for federated learning simulations that emphasizes ease-of-use in research. With its simple building blocks for implementing federated algorithms, prepackaged datasets, models and algorithms, and fast simulation speed, FedJAX aims to make developing and evaluating federated algorithms faster and easier for researchers. In this post we discuss the library structure and contents of FedJAX. We demonstrate that on TPUs FedJAX can be used to train models with federated averaging on the EMNIST dataset in a few minutes, and the Stack Overflow dataset in roughly an hour with standard hyperparameters.
Library Structure
Keeping ease of use in mind, FedJAX introduces only a few new concepts. Code written with FedJAX resembles the pseudo-code used to describe novel algorithms in academic papers, making it easy to get started. Additionally, while FedJAX provides building blocks for federated learning, users can replace these with the most basic implementations using just NumPy and JAX while still keeping the overall training reasonably fast.
Included Datasets and Models
In the current landscape of federated learning research, there are a variety of commonly used datasets and models for tasks such as image recognition, language modeling, and more. A growing number of these datasets and models can be used straight out of the box in FedJAX, so the preprocessed datasets and models do not have to be written from scratch. This not only encourages valid comparisons between different federated algorithms but also accelerates the development of new algorithms.
At present, FedJAX comes packaged with the following datasets and sample models:
In addition to these standard setups, FedJAX provides tools to create new datasets and models that can be used with the rest of the library. Finally, FedJAX comes with standard implementations of federated averaging and other federated algorithms for training a shared model on decentralized examples, such as adaptive federated optimizers, agnostic federated averaging, and Mime, to make comparing and evaluating against existing algorithms easier.
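The difference between plain federated averaging and the adaptive federated optimizers mentioned above is mainly in the server step: instead of simply averaging client models, the server treats the averaged client update as a pseudo-gradient and feeds it to a server-side optimizer. A hedged NumPy sketch of a FedAdam-style server step (function name and hyperparameters are illustrative, not the FedJAX API):

```python
import numpy as np

def server_adam_step(weights, avg_client_delta, state, lr=0.1,
                     b1=0.9, b2=0.99, eps=1e-3):
    """One adaptive server update: the averaged client delta
    (mean of client_model - server_model) acts as a pseudo-gradient
    for an Adam-style optimizer held at the server."""
    m, v, t = state
    g = -avg_client_delta            # delta points downhill, so negate for a gradient
    m = b1 * m + (1 - b1) * g        # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * g**2     # second-moment (scale) estimate
    t += 1
    m_hat = m / (1 - b1**t)          # bias correction
    v_hat = v / (1 - b2**t)
    new_weights = weights - lr * m_hat / (np.sqrt(v_hat) + eps)
    return new_weights, (m, v, t)

# Usage: start from zeroed optimizer state and apply one step per round.
w = np.zeros(2)
state = (np.zeros(2), np.zeros(2), 0)
for _ in range(3):
    w, state = server_adam_step(w, np.array([1.0, -1.0]), state)
```

Setting `b1 = b2 = 0` and a unit denominator recovers plain federated averaging, which is why these methods slot into the same round structure.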
Performance Evaluation
We benchmarked a standard FedJAX implementation of adaptive federated averaging on two tasks: the image recognition task for the federated EMNIST-62 dataset and the next word prediction task for the Stack Overflow dataset. Federated EMNIST-62 is a smaller dataset that consists of 3,400 users and their writing samples, each labeled as one of 62 alphanumeric characters, while the Stack Overflow dataset is much larger and consists of millions of questions and answers from the Stack Overflow forum for hundreds of thousands of users.
We measured performance on various hardware specialized for machine learning. For federated EMNIST-62, we trained a model for 1500 rounds with 10 clients per round on GPU (NVIDIA V100) and TPU (1 TensorCore on a Google TPU v2) accelerators.
For Stack Overflow, we trained a model for 1500 rounds with 50 clients per round on GPU (NVIDIA V100) using jax.jit, TPU (1 TensorCore on a Google TPU v2) using only jax.jit, and multi-core TPU (8 TensorCores on a Google TPU v2) using jax.pmap. In the charts below, we’ve recorded the average training round completion time, time taken for full evaluation on test data, and time for the overall execution, which includes both training and full evaluation.
Benchmark results for federated EMNIST-62.
Benchmark results for Stack Overflow.
With standard hyperparameters and TPUs, the full federated EMNIST-62 experiments can be completed in a few minutes, and the Stack Overflow experiments in roughly an hour.
Stack Overflow average training round duration as the number of clients per round increases.
We also evaluated the Stack Overflow average training round duration as the number of clients per round increases. By comparing the average training round duration between TPU (8 cores) and TPU (1 core) in the figure, it is evident that using multiple TPU cores results in considerable runtime improvement when the number of clients participating per round is large (useful for applications like differentially private learning).
Conclusions and Future Work
In this post, we introduced FedJAX, a fast and easy-to-use federated learning simulation library for research. We hope that FedJAX will foster even more investigation and interest in federated learning. Moving forward, we plan to continually grow our existing collection of algorithms, aggregation mechanisms, datasets, and models.
Acknowledgements
We would like to thank Ke Wu and Sai Praneeth Kamireddy for contributing to the library and various discussions during development.
We would also like to thank Ehsan Amid, Theresa Breiner, Mingqing Chen, Fabio Costa, Roy Frostig, Zachary Garrett, Alex Ingerman, Satyen Kale, Rajiv Mathews, Lara Mcconnaughey, Brendan McMahan, Mehryar Mohri, Krzysztof Ostrowski, Max Rabinovich, Michael Riley, Vlad Schogol, Jane Shapiro, Gary Sivek, Luciana Toledo-Lopez, and Michael Wunder for helpful comments and contributions.
Today we’re pushing the source to the Android Open Source Project (AOSP) and officially releasing the latest version of Android. Keep an eye out for Android 12 coming to a device near you starting with Pixel in the next few weeks and Samsung Galaxy, OnePlus, Oppo, Realme, Tecno, Vivo, and Xiaomi devices later this year.
As always, thank you for your feedback during Android 12 Beta! More than 225,000 of you tested our early releases on Pixel and devices from our partners, and you sent us nearly 50,000 issue reports to help improve the quality of the release. We also appreciate the many articles, discussions, surveys, and in-person meetings where you voiced your thoughts, as well as the work you’ve done to make your apps compatible in time for today’s release. Your support and contributions are what make Android such a great platform for everyone.
We’ll also be talking about Android 12 in more detail at this year’s Android Dev Summit, coming up on October 27-28. We’ve just released more information on the event, including a snapshot of the technical Android sessions; read on for more details later in the post.
What’s in Android 12 for developers?
Here’s a look at some of what’s new in Android 12 for developers. Make sure to check out the Android 12 developer site for details on all of the new features.
A new UI for Android
Material You - Android 12 introduces a new design language called Material You, helping you to build more personalized, beautiful apps. To bring all of the latest Material Design 3 updates into your apps, try an alpha version of Material Design Components and watch for support for Jetpack Compose coming soon.
Redesigned widgets - We refreshed app widgets to make them more useful, beautiful, and discoverable. Try them with new interactive controls, responsive layouts for any device, and dynamic colors to create a personalized but consistent look. More here.
Notification UI updates - We also refreshed notification designs to make them more modern and useful. Android 12 also decorates custom notifications with standard affordances to make them consistent with all other notifications. More here.
Stretch overscroll - To make scrolling your app’s content more smooth, Android 12 adds a new “stretch” overscroll effect to all scrolling containers. It’s a natural scroll-stop indicator that’s common across the system and apps. More here.
App launch splash screens - Android 12 also introduces splash screens for all apps. Apps can customize the splash screen in a number of ways to meet their unique branding needs. More here.
Performance
Faster, more efficient system performance - We reduced the CPU time used by core system services by 22% and the use of big cores by 15%. We’ve also improved app startup times and optimized I/O for faster app loading, and for database queries we’ve improved CursorWindow by as much as 49x for large windows.
Optimized foreground services - To provide a better experience for users, Android 12 prevents apps from starting foreground services while in the background. Apps can use a new expedited job in JobScheduler instead. More here.
More responsive notifications - Android 12’s restriction on notification trampolines helps reduce latency for apps started from a notification. For example, the Google Photos app now launches 34% faster after moving away from notification trampolines. More here.
Performance class - Performance Class is a set of device capabilities that together support demanding use-cases and higher quality content on Android 12 devices. Apps can check for a device’s performance class at runtime and take full advantage of the device’s performance. More here.
Faster machine learning - Android 12 helps you make the most of ML accelerators and always get the best possible performance through the Neural Networks API. ML accelerator drivers are also now updatable outside of platform releases, through Google Play services, so you can take advantage of the latest drivers on any compatible device.
Privacy
Privacy Dashboard - A new dashboard in Settings gives users better visibility over when your app accesses microphone, camera, and location data. More here.
Approximate location - Users have even more control over their location data, and they can grant your app access to approximate location even if it requests precise location. More here.
Microphone and camera indicators - Indicators in the status bar let users know when your app is using the device camera or microphone. More here.
Microphone and camera toggles - On supported devices, new toggles in Quick Settings make it easy for users to instantly disable app access to the microphone and camera. More here.
Nearby device permissions - Your app can use new permissions to scan for and pair with nearby devices without needing location permission. More here.
Better user experience tools
Rich content insertion - A new unified API lets you receive rich content in your UI from any source: clipboard, keyboard, or drag-and-drop. For back-compatibility, we’ve added the unified API to AndroidX. More here.
Support for rounded screen corners - Many modern devices use screens with rounded corners. To deliver a great UX on these devices, you can use new APIs to query for corner details and then manage your UI elements as needed. More here.
AVIF image support - Android 12 adds platform support for AV1 Image File Format (AVIF). AVIF takes advantage of the intra-frame encoded content from video compression to dramatically improve image quality for the same file size when compared to older image formats, such as JPEG.
Compatible media transcoding - For video, HEVC format offers significant improvements in quality and compression and we recommend that all apps support it. For apps that can’t, the compatible media transcoding feature lets your app request files in AVC and have the system handle the transcoding. More here.
Easier blurs, color filters and other effects - New APIs make it easier to apply common graphics effects to your Views and rendering hierarchies. You can use RenderEffect to apply blurs, color filters, and more to RenderNodes or Views. You can also create a frosted glass effect for your window background using a new Window.setBackgroundBlurRadius() API, or use blurBehindRadius to blur all of the content behind a window.
Enhanced haptic experiences - Android 12 expands the tools you can use to create informative haptic feedback for UI events, immersive and delightful effects for gaming, and attentional haptics for productivity. More here.
New camera effects and sensor capabilities - New vendor extensions let your apps take advantage of the custom camera effects built by device manufacturers—bokeh, HDR, night mode, and others. You can also use new APIs to take full advantage of ultra high-resolution camera sensors that use Quad / Nona Bayer patterns. More here.
Better debugging for native crashes - Android 12 gives you more actionable diagnostic information to make debugging NDK-related crashes easier. Apps can now access detailed crash dump files called tombstones through the App Exit Reasons API.
Android 12 for Games - With Game Mode APIs, you can react to the players' performance profile selection for your game - like better battery life for a long commute, or performance mode to get peak frame rates. Play as you download will allow game assets to be fetched in the background during install, getting your players into gameplay faster.
Get your apps ready for Android 12
Now with today’s public release of Android 12, we’re asking all Android developers to finish your compatibility testing and publish your updates as soon as possible, to give your users a smooth transition to Android 12.
To test your app for compatibility, just install it on a device running Android 12 and work through the app flows looking for any functional or UI issues. Review the Android 12 behavior changes for all apps to focus on areas where your app could be affected. Here are some of the top changes to test:
Privacy dashboard — Use this new dashboard in Settings to check your app’s accesses to microphone, location, and other sensitive data, and consider providing details to users on the reasons. More here.
Microphone & camera indicators — Android 12 shows an indicator in the status bar when an app is using the camera or microphone. Make sure this doesn’t affect your app’s UI. More here.
Microphone & camera toggles — Try using the new toggles in Quick Settings to disable microphone and camera access for apps and ensure that your app handles the change properly. More here.
Clipboard read notification — Watch for toast notifications when your app reads data from the clipboard unexpectedly. Remove unintended accesses. More here.
Stretch overscroll — Try your scrolling content with the new “stretch” overscroll effect and ensure that it displays as expected. More here.
App splash screens — Launch your app from various flows to test the new splash screen animation. If necessary, you can customize it. More here.
Keygen changes — Several deprecated BouncyCastle cryptographic algorithms are removed in favor of Conscrypt versions. If your app uses a 512-bit key with AES, you’ll need to use one of the standard sizes supported by Conscrypt. More here.
Remember to test the libraries and SDKs in your app for compatibility. If you find any SDK issues, try updating to the latest version of the SDK or reaching out to the developer for help.
Tune in to Android Dev Summit to learn about Android 12 and more!
The #AndroidDevSummit is back! Join us October 27-28 to hear about the latest updates in Android development, including Android 12. This year’s theme is excellent apps, across devices; tune in later this month to learn more about the development tools, APIs and technology to help you be more productive and create better apps that run across billions of devices, including tablets, foldables, wearables, and more.
We’ve just released more information on the event, including a snapshot of the 30+ technical Android sessions; you can take a look at some of those sessions here, and start planning which talks you want to check out. Over the coming weeks, we’ll be asking you to share your top #AskAndroid questions, to be answered live by the team during the event.
The show kicks off at 10 AM PT on October 27 with The Android Show, a 50-minute technical keynote where you’ll hear all the latest news and updates for Android developers. You can learn more and sign up for updates here.