Alpa: Automated Model-Parallel Deep Learning

Over the last several years, the rapidly growing size of deep learning models has quickly exceeded the memory capacity of single accelerators. Earlier models like BERT (with parameters totaling less than 1 GB) can scale efficiently across accelerators through data parallelism, in which model weights are duplicated on every accelerator and only the training data is partitioned and distributed. However, recent large models like GPT-3 (with 175 billion parameters) can only scale using model-parallel training, where a single model is partitioned across different devices.
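To make the contrast concrete, data parallelism can be sketched in a few lines of plain Python (an illustrative toy with invented numbers, not any framework's implementation): every replica holds a full copy of the weights, computes gradients on its own shard of the batch, and the gradients are averaged (an all-reduce) before the update.

```python
# Toy sketch of data parallelism: weights are replicated, data is sharded,
# and gradients are averaged across replicas before the weight update.

def data_parallel_step(weights, data_shards, grad_fn, lr=0.1):
    grads = [grad_fn(weights, shard) for shard in data_shards]  # one per device
    avg = [sum(g) / len(grads) for g in zip(*grads)]            # "all-reduce"
    return [w - lr * g for w, g in zip(weights, avg)]

# Quadratic loss 0.5*(w.x - 1)^2 per example; gradient wrt w is (w.x - 1)*x.
def grad_fn(w, shard):
    g = [0.0] * len(w)
    for x in shard:
        err = sum(wi * xi for wi, xi in zip(w, x)) - 1.0
        for k, xk in enumerate(x):
            g[k] += err * xk / len(shard)
    return g

# Two "devices", each seeing one training example.
w = data_parallel_step([0.0, 0.0], [[[1.0, 0.0]], [[0.0, 1.0]]], grad_fn)
print(w)
```

Because every replica must hold the full weights, this scheme breaks down once the model itself no longer fits on one accelerator, which is what motivates model parallelism.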

While model parallelism strategies make it possible to train large models, they are more complex to use because they must be tailored to the target neural network and compute cluster. For example, Megatron-LM uses a model parallelism strategy that splits weight matrices by rows or columns and then synchronizes results among devices. Device placement and pipeline parallelism partition the operators of a neural network into multiple groups and the input data into micro-batches that are executed in a pipelined fashion. Model parallelism often requires significant effort from system experts to identify an optimal parallelism plan for a specific model, which is too onerous for most machine learning (ML) researchers, whose primary focus is running a model, with performance a secondary priority. As such, there remains an opportunity to automate model parallelism so that it can easily be applied to large models.

In “Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning”, published at OSDI 2022, we describe a method for automating the complex model parallelism process. We demonstrate that with only one line of code, Alpa can transform any JAX neural network into a distributed version with an optimal parallelization strategy that can be executed on a user-provided device cluster. We are also excited to release Alpa’s code to the broader research community.

Alpa Design
We begin by grouping existing ML parallelization strategies into two categories, inter-operator parallelism and intra-operator parallelism. Inter-operator parallelism assigns distinct operators to different devices (e.g., device placement), often accelerated with a pipeline execution schedule (e.g., pipeline parallelism). With intra-operator parallelism, which includes data parallelism (e.g., DeepSpeed ZeRO), operator parallelism (e.g., Megatron-LM), and expert parallelism (e.g., GShard-MoE), individual operators are split and executed on multiple devices, and collective communication is often used to synchronize the results across devices.

The difference between these two approaches maps naturally to the heterogeneity of a typical compute cluster. Inter-operator parallelism has lower communication bandwidth requirements because it only transmits activations between operators on different accelerators. But it suffers from device underutilization because of its pipeline data dependency, i.e., some operators are inactive while waiting on the outputs from other operators. In contrast, intra-operator parallelism doesn’t have the data dependency issue, but requires heavier communication across devices. In a GPU cluster, the GPUs within a node have higher communication bandwidth that can accommodate intra-operator parallelism. However, GPUs across different nodes are often connected with much lower bandwidth (e.g., Ethernet), so inter-operator parallelism is preferred.

By leveraging heterogeneous mapping, we design Alpa as a compiler that conducts various passes when given a computational graph and a device cluster from a user. First, the inter-operator pass slices the computational graph into subgraphs and the device cluster into submeshes (i.e., a partitioned device cluster) and identifies the best way to assign a subgraph to a submesh. Then, the intra-operator pass finds the best intra-operator parallelism plan for each pipeline stage from the inter-operator pass. Finally, the runtime orchestration pass generates a static plan that orders the computation and communication and executes the distributed computational graph on the actual device cluster.

An overview of Alpa. In the sliced subgraphs, red and blue represent the way the operators are partitioned and gray represents operators that are replicated. Green represents the actual devices (e.g., GPUs).

Intra-Operator Pass
Similar to previous research (e.g., Mesh-TensorFlow and GSPMD), intra-operator parallelism partitions a tensor on a device mesh. This is shown below for a typical 3D tensor in a Transformer model with a given batch, sequence, and hidden dimensions. The batch dimension is partitioned along device mesh dimension 0 (mesh0), the hidden dimension is partitioned along mesh dimension 1 (mesh1), and the sequence dimension is replicated to each processor.

A 3D tensor that is partitioned on a 2D device mesh.
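This mapping can be sketched in plain Python (an illustrative stand-in for the bookkeeping, not Alpa's implementation; the mesh and tensor sizes are made up): the batch dimension is split along mesh axis 0, the hidden dimension along mesh axis 1, and the sequence dimension is replicated on every device.

```python
# Illustrative sketch: compute which index ranges of a (batch, seq, hidden)
# tensor each device in a 2D logical mesh owns.

def shard_ranges(dim_size, num_shards):
    """Split [0, dim_size) into num_shards contiguous index ranges."""
    step = dim_size // num_shards
    return [(i * step, (i + 1) * step) for i in range(num_shards)]

def partition_3d(batch, seq, hidden, mesh0, mesh1):
    """Return, per device (i, j), the index ranges it owns."""
    batch_shards = shard_ranges(batch, mesh0)
    hidden_shards = shard_ranges(hidden, mesh1)
    placement = {}
    for i in range(mesh0):
        for j in range(mesh1):
            placement[(i, j)] = {
                "batch": batch_shards[i],
                "seq": (0, seq),          # replicated: full range everywhere
                "hidden": hidden_shards[j],
            }
    return placement

placement = partition_3d(batch=8, seq=128, hidden=1024, mesh0=2, mesh1=2)
print(placement[(0, 1)])
# device (0, 1) holds its batch shard, the full sequence, and its hidden shard
```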

With the partitions of tensors in Alpa, we further define a set of parallelization strategies for each individual operator in a computational graph. We show example parallelization strategies for matrix multiplication in the figure below. Defining parallelization strategies on operators leads to possible conflicts on the partitions of tensors because one tensor can be both the output of one operator and the input of another. In this case, re-partition is needed between the two operators, which incurs additional communication costs.

The parallelization strategies for matrix multiplication.

Given the partitions of each operator and the re-partition costs, we formulate the intra-operator pass as an Integer Linear Programming (ILP) problem. For each operator, we define a one-hot variable vector to enumerate the partition strategies. The ILP objective is to minimize the sum of compute and communication costs (node costs) and re-partition communication costs (edge costs). The solution of the ILP translates to one specific way to partition the original computational graph.
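The shape of this objective can be illustrated with a toy example (all strategy names and costs are invented; Alpa solves the real problem with an ILP solver over the full graph, whereas here a short operator chain is small enough to enumerate):

```python
from itertools import product

# Toy version of the intra-operator objective: each operator picks one
# partition strategy; total cost = per-operator node costs plus an edge
# (re-partition) cost whenever adjacent operators disagree on layout.

STRATEGIES = ["row", "col", "replicate"]
node_cost = {"row": 1.0, "col": 1.0, "replicate": 3.0}   # compute + comm

def edge_cost(a, b):
    """Re-partition communication cost between adjacent operators."""
    return 0.0 if a == b else 2.0

def best_plan(num_ops):
    best = None
    for plan in product(STRATEGIES, repeat=num_ops):
        cost = sum(node_cost[s] for s in plan)
        cost += sum(edge_cost(a, b) for a, b in zip(plan, plan[1:]))
        if best is None or cost < best[1]:
            best = (plan, cost)
    return best

plan, cost = best_plan(3)
print(plan, cost)
# the optimum keeps one cheap layout end-to-end, avoiding all edge costs
```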

Inter-Operator Pass
The inter-operator pass slices the computational graph and the device cluster for pipeline parallelism. As shown below, each box represents a micro-batch of input, and each pipeline stage represents a submesh executing a subgraph. The horizontal dimension represents time, showing the pipeline stage at which each micro-batch is executed. The goal of the inter-operator pass is to minimize the total execution latency, i.e., the end-to-end time to execute the entire workload on the cluster, as illustrated in the figure below. Alpa uses a Dynamic Programming (DP) algorithm to minimize the total latency. The computational graph is first flattened and then fed to the intra-operator pass, where the performance of all possible partitions of the device cluster into submeshes is profiled.
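A simplified version of this search can be sketched as follows (the latency model and numbers are invented for illustration and much coarser than Alpa's profiled cost model): slice a chain of per-layer latencies into contiguous pipeline stages so that the pipeline's end-to-end latency is minimized.

```python
# Toy sketch of the inter-operator search: with B micro-batches, a common
# approximation of pipeline latency is
#     sum(stage latencies) + (B - 1) * max(stage latency),
# so the search balances stages to keep the slowest one small.

def pipeline_latency(stage_times, microbatches):
    return sum(stage_times) + (microbatches - 1) * max(stage_times)

def best_slicing(layer_times, stages, microbatches):
    """Exhaustively try every contiguous slicing into `stages` stages."""
    n = len(layer_times)
    best = (float("inf"), None)

    def rec(start, left, acc):
        nonlocal best
        if left == 1:
            times = acc + [sum(layer_times[start:])]
            lat = pipeline_latency(times, microbatches)
            if lat < best[0]:
                best = (lat, times)
            return
        for cut in range(start + 1, n - left + 2):
            rec(cut, left - 1, acc + [sum(layer_times[start:cut])])

    rec(0, stages, [])
    return best

lat, stage_times = best_slicing([1, 2, 3, 2], stages=2, microbatches=4)
print(stage_times, lat)
# the balanced slicing [1,2] | [3,2] beats putting three layers in one stage
```

Alpa's actual DP additionally chooses which submesh each stage runs on, using latencies profiled by the intra-operator pass rather than fixed numbers.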

Pipeline parallelism. For a given time, this figure shows the micro-batches (colored boxes) that a partitioned device cluster and a sliced computational graph (e.g., stage 1, 2, 3) is processing.

Runtime Orchestration
After the inter- and intra-operator parallelization strategies are complete, the runtime generates and dispatches a static sequence of execution instructions for each device submesh. These instructions include RUN (execute a specific subgraph), SEND/RECEIVE (transfer tensors to/from other meshes), and DELETE (free the memory held by a specific tensor). By following these instructions, the devices can execute the computational graph without any other coordination.
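A minimal interpreter for such a static instruction list might look like the following (the instruction names mirror the text above, but the buffer and mailbox machinery is an invented stand-in for real cross-mesh communication):

```python
# Toy sketch: each submesh walks its own static instruction list with no
# central coordinator; a shared list stands in for the network channel.

def execute(instructions, buffers):
    """Run a static instruction list against a dict of named tensors."""
    log = []
    for op, *args in instructions:
        if op == "RUN":
            subgraph, src, dst = args
            buffers[dst] = subgraph(buffers[src])
            log.append(f"RUN -> {dst}")
        elif op == "SEND":
            name, mailbox = args
            mailbox.append(buffers[name])      # stand-in for cross-mesh send
            log.append(f"SEND {name}")
        elif op == "RECEIVE":
            name, mailbox = args
            buffers[name] = mailbox.pop(0)
            log.append(f"RECEIVE {name}")
        elif op == "DELETE":
            del buffers[args[0]]               # free memory eagerly
            log.append(f"DELETE {args[0]}")
    return log

mailbox = []
mesh0 = {"x": 2}
log0 = execute([("RUN", lambda v: v * 10, "x", "y"),
                ("SEND", "y", mailbox),
                ("DELETE", "x")], mesh0)
mesh1 = {}
log1 = execute([("RECEIVE", "y", mailbox),
                ("RUN", lambda v: v + 1, "y", "z")], mesh1)
print(mesh1["z"])  # 21
```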

Evaluation
We test Alpa on eight AWS p3.16xlarge instances, each with eight 16 GB V100 GPUs, for 64 GPUs in total. We examine weak scaling, growing the model size as we increase the number of GPUs. We evaluate three models: (1) the standard Transformer model (GPT); (2) the GShard-MoE model, a Transformer with mixture-of-expert layers; and (3) Wide-ResNet, a significantly different model with no existing expert-designed model parallelization strategy. Performance is measured in peta floating point operations per second (PFLOPS) achieved on the cluster.

We demonstrate that for GPT, Alpa outputs a parallelization strategy very similar to the one computed by the best existing framework, Megatron-LM, and matches its performance. For GShard-MoE, Alpa outperforms the best expert-designed baseline on GPU (i.e., DeepSpeed) by up to 8x. Results for Wide-ResNet show that Alpa can generate the optimal parallelization strategy for models that have not been studied by experts. We also show linear scaling numbers for reference.

GPT: Alpa matches the performance of Megatron-LM, the best expert-designed framework.
GShard MoE: Alpa outperforms DeepSpeed (the best expert-designed framework on GPU) by up to 8x.
Wide-ResNet: Alpa generalizes to models without manual plans. Pipeline and Data Parallelism (PP-DP) is a baseline that uses only pipeline and data parallelism, without other intra-operator parallelism.
The parallelization strategy for Wide-ResNet on 16 GPUs consists of three pipeline stages and is a complicated strategy even for an expert to design. Stages 1 and 2 are on 4 GPUs performing data parallelism, and stage 3 is on 8 GPUs performing operator parallelism.

Conclusion
The process of designing an effective parallelization plan for distributed model-parallel deep learning has historically been a difficult and labor-intensive task. Alpa is a new framework that leverages intra- and inter-operator parallelism for automated model-parallel distributed training. We believe that Alpa will democratize distributed model-parallel learning and accelerate the development of large deep learning models. Explore the open-source code and learn more about Alpa in our paper.

Acknowledgements
Thanks to the co-authors of the paper: Lianmin Zheng, Hao Zhang, Yonghao Zhuang, Yida Wang, Danyang Zhuo, Joseph E. Gonzalez, and Ion Stoica. We would also like to thank Shibo Wang, Jinliang Wei, Yanping Huang, Yuanzhong Xu, Zhifeng Chen, Claire Cui, Naveen Kumar, Yash Katariya, Laurent El Shafey, Qiao Zhang, Yonghui Wu, Marcello Maggioni, Mingyao Yang, Michael Isard, Skye Wanderman-Milne, and David Majnemer for their collaborations to this research.



Source: Google AI Blog


Share your video feed when using Companion mode in Google Meet

Quick summary 

When using Companion mode in Google Meet, you can now turn your camera on and share your video feed with all other participants. For in-room participants attending a hybrid meeting, this feature helps improve collaboration and representation equity by giving everyone the ability to share their own video with other participants on the call.


Getting started

  • Admins: There is no admin control for this feature.
  • End users: 
    • This feature will be available by default. You can join a meeting on the web using Companion mode from the green room before your meeting. To share your video feed, select “Turn on camera” from the Meet toolbar.
    • Use this Help Center article and video guide to learn more about using Companion mode in Google Meet.

Rollout pace


Google Ads Scripts, AdWords API and Google Ads API reporting issues on April 25 and 26, 2022

Between April 25th 2:32 PM PT and April 26th 12:24 PM PT, there was an issue which may have impacted some read report requests across Google Ads scripts, the AdWords API, and the Google Ads API. If you were using these products to request reporting data for your accounts, then a small percentage of report downloads may have been missing rows or may have had incorrect data in a given row. This issue has been resolved. As a precaution, we recommend re-running any reports that you executed during this period, as the missing data has been restored.

If you have any questions, please contact us via the Google Ads API forum or the Google Ads scripts forum.

Mothers.day: Highlighting inequality in maternal health

The path to parenthood looks different for everyone, but one element of becoming a parent is universal: the need for quality healthcare and community support. Sadly, this basic need is out of reach for far too many people. Every day, more than 800 people around the world die from pregnancy- and childbirth-related causes that could have been prevented, according to the World Health Organization.

Google Registry launched the .day top-level domain earlier this year, and today we’re introducing mothers.day — a resource dedicated to highlighting inequities in maternal health and helping families at different stages of parenthood. The website also lists ways you and your loved ones can help bridge these gaps by volunteering or donating to organizations making an impact in this space.

This year, I've asked my family to make giving to others the focus of our Mother’s Day celebration. To help pass on the value of generosity, the mothers.day website points to several nonprofits for Mother’s Day giving, including:

  • Postpartum Support International is the world’s leading nonprofit organization dedicated to helping women suffering from perinatal mood and anxiety disorders, including postpartum depression.
  • Black Mamas Matter Alliance is a Black women-led group that advocates, drives research, builds power, and shifts culture for Black maternal health, rights and justice.
  • Fistula Foundation provides life-transforming surgery to women injured in childbirth who are left incontinent and often shunned.
  • The Cradle is a nonprofit, licensed child welfare agency providing adoption services, counseling and education and a nursery for birth parents and adoptive families.
  • Hello Neighbor's Smart Start program provides refugee and immigrant mothers with socio-emotional, logistical, and material need support throughout pregnancy and postpartum.

These are just a few organizations committed to making the journey to parenthood equitable for everyone. In addition to giving, mothers.day includes information on how you can make an impact on maternal healthcare by participating in research studies:

  • Powermom is a mobile research platform with the goal of addressing health disparities and partnering with all participants during pregnancy and the postpartum period.
  • PM3 study is a study for Black women, by Black women and helps new moms in the state of Georgia stay healthy after pregnancy.
  • Maternal Near Miss aims to gather insights from women of color who've had near-death experiences during pregnancy and/or childbirth in order to inform maternal health policies and clinical practices.

There are so many ways to support birthing people and their families around the world. For more ways to get involved, visit mothers.day.


Update on cyber activity in Eastern Europe

Google’s Threat Analysis Group (TAG) has been closely monitoring the cybersecurity activity in Eastern Europe with regard to the war in Ukraine. Since our last update, TAG has observed a continuously growing number of threat actors using the war as a lure in phishing and malware campaigns. Similar to other reports, we have also observed threat actors increasingly target critical infrastructure entities including oil and gas, telecommunications and manufacturing.

Government-backed actors from China, Iran, North Korea and Russia, as well as various unattributed groups, have used various Ukraine war-related themes in an effort to get targets to open malicious emails or click malicious links. Financially motivated and criminal actors are also using current events as a means for targeting users.

As always, we continue to publish details surrounding the actions we take against coordinated influence operations in our quarterly TAG bulletin. We promptly identify and remove any such content but have not observed any significant shifts from the normal levels of activity that occur in the region.

Here is a deeper look at the campaign activity TAG has observed and the actions the team has taken to protect our users over the past few weeks:

APT28, also known as Fancy Bear, a threat actor attributed to Russia's GRU, was observed targeting users in Ukraine with a new variant of malware. The malware, distributed via email attachments inside password protected zip files (ua_report.zip), is a .NET executable that, when executed, steals cookies and saved passwords from the Chrome, Edge and Firefox browsers. The data is then exfiltrated via email to a compromised email account.

Malware samples:

TAG would like to thank the Yahoo! Paranoids Advanced Cyber Threats Team for their collaboration in this investigation.

Turla, a group TAG attributes to Russia's FSB, continues to run campaigns against the Baltics, targeting defense and cybersecurity organizations in the region. Similar to recently observed activity, these campaigns were sent via email and contained a unique link per target that led to a DOCX file hosted on attacker controlled infrastructure. When opened, the DOCX file would attempt to download a unique PNG file from the same attacker controlled domain.

Recently observed Turla domains:

  • wkoinfo.webredirect[.]org
  • jadlactnato.webredirect[.]org

COLDRIVER, a Russia-based threat actor sometimes referred to as Callisto, continues to use Gmail accounts to send credential phishing emails to a variety of Google and non-Google accounts. The targets include government and defense officials, politicians, NGOs and think tanks, and journalists. The group's tactics, techniques and procedures (TTPs) for these campaigns have shifted slightly from including phishing links directly in the email, to also linking to PDFs and/or DOCs hosted on Google Drive and Microsoft OneDrive. Within these files is a link to an attacker controlled phishing domain.

These phishing domains have been blocked through Google Safe Browsing – a service that identifies unsafe websites across the web and notifies users and website owners of potential harm.

An example of this technique

Recently observed COLDRIVER credential phishing domains:

  • cache-dns[.]com
  • docs-shared[.]com
  • documents-forwarding[.]com
  • documents-preview[.]com
  • protection-link[.]online
  • webresources[.]live

Ghostwriter, a Belarusian threat actor, has remained active during the course of the war and recently resumed targeting of Gmail accounts via credential phishing. This campaign, targeting high risk individuals in Ukraine, contained links leading to compromised websites where the first stage phishing page was hosted. If the user clicked continue, they would be redirected to an attacker controlled site that collected the user's credentials. No accounts were compromised in this campaign, and Google will alert all targeted users of these attempts through our monthly government-backed attacker warnings.

Both pages from this campaign are shown below.

An example webpage
An example page

In mid-April, TAG detected a Ghostwriter credential phishing campaign targeting Facebook users. The targets, primarily located in Lithuania, were sent links to attacker controlled domains from a domain spoofing the Facebook security team.

Facebook campaign

Recently observed Ghostwriter credential phishing domains and emails:

  • noreply.accountsverify[.]top
  • microsoftonline.email-verify[.]top
  • lt-microsoftgroup.serure-email[.]online
  • facebook.com-validation[.]top
  • lt-meta.com-verification[.]top
  • lt-facebook.com-verification[.]top
  • secure@facebookgroup[.]lt

Curious Gorge, a group TAG attributes to China's PLA SSF, has remained active against government, military, logistics and manufacturing organizations in Ukraine, Russia and Central Asia. In Russia, long running campaigns against multiple government organizations have continued, including the Ministry of Foreign Affairs. Over the past week, TAG identified additional compromises impacting multiple Russian defense contractors and manufacturers and a Russian logistics company.

Protecting Our Users

Upon discovery, all identified websites and domains were added to Safe Browsing to protect users from further exploitation. We also send all targeted Gmail and Workspace users government-backed attacker alerts notifying them of the activity. We encourage any potential targets to enable Google Account Level Enhanced Safe Browsing and ensure that all devices are updated.

The team continues to work around the clock, focusing on the safety and security of our users and the platforms that help them access and share important information. We’ll continue to take action, identify bad actors and share relevant information with others across industry and governments, with the goal of bringing awareness to these issues, protecting users and preventing future attacks. While we are actively monitoring activity related to Ukraine and Russia, we continue to be just as vigilant in relation to other threat actors globally, to ensure that they do not take advantage of everyone’s focus on this region.


Use new table templates and dropdown chips in Google Docs to create highly collaborative documents

What’s changing 

We’re introducing two new enhancements for our flexible, smart canvas for collaboration: dropdown chips and table templates in Google Docs. 


You can use dropdown chips to easily indicate the status of your document or various project milestones outlined in your document. There are two default dropdown options: 
  • Project Status, which includes selections for “Not Started”, “Blocked”, “In Progress” and “Complete” 
  • Review Status, which includes selections for “Not Started”, “In Progress”, “Under Review” and “Approved”. 


Additionally, you can create a dropdown chip with custom options and colors to best suit your needs.




We’re also adding table templates, which allow you to quickly insert building blocks for common workflows, such as a:
  • Launch content tracker
  • Project asset
  • Review tracker
  • Product roadmap



The columns within the template include a sample row of content to help guide you on how they can be used and customized.


Who’s impacted

End users


Why you’d use them

We hope these features help you to create highly customized and organized documents in Google Docs, making it easier to collaborate and drive your project forward. 


Getting started 

  • Admins: There is no admin control for this feature.
  • End users: These features will be available by default. You can insert a dropdown chip by selecting Insert > Dropdown. To insert a table template, select Insert > Table > Table templates.

Rollout pace 

Dropdown chips


Table templates

Availability 

  • Available to all Google Workspace customers, as well as legacy G Suite Basic and Business customers
  • Available to users with personal Google Accounts

Resources 



Learn Android with Jetpack Compose (no programming experience needed!)

Posted by Murat Yener, Android Developer Relations Engineer

Blue graphic with Android phone and Jetpack Compose logos 

There are many fulfilling opportunities found in Android development: from launching a career, expressing yourself in fun ways, working on an app that makes a difference, or starting a business. At Google, we’re committed to increasing opportunities for anyone to learn Android development, so more people can experience this. As the next evolution of our journey to make Android development accessible to all, we released the first two units of Android Basics with Compose. This is the first free course that teaches Android development with Jetpack Compose to everyone. Compose simplifies and accelerates Android UI development, bringing your app to life faster with less code, powerful tools, and intuitive Kotlin APIs. If you are curious about learning Android development with Android's latest offering for building native UI, this is a great place to start!

Similar to the Android Basics in Kotlin course, Android Basics with Compose teaches the fundamentals of programming in Kotlin; you do not need any prior programming experience other than basic computer literacy to get started with this course. Not only does the course cover the most recent Android app building techniques, it is also designed to make it easier and more fun for you to learn Android. We built this course from scratch, taking into account feedback we received from learners, instructors, and designers from previous Android development courses.

The course contains learning pathways that teach you the basics of programming along with how to use the Kotlin programming language, with additional development topics introduced during your learning journey! If you are familiar with programming or the Kotlin programming language, you can skip ahead and focus on learning how to develop with Jetpack Compose.

The Android Basics with Compose and Android Basics in Kotlin courses will co-exist as our latest Android training offerings. Android Basics with Compose shares a similar course structure with Android Basics in Kotlin; in many cases they share the same sample apps, but are written using different UI toolkits. This allows you to see, compare, and learn the differences between Views and Compose; you can even work through both courses simultaneously.

This course also introduces new content formats such as code-along videos for Codelabs, practice problems to give you more hands-on coding experience, and open-ended projects to unleash your creativity. These two units are just the beginning; more will be coming soon. Check out Android Basics with Compose to get started on your Android development journey!