The Extended Stable channel has been updated to 106.0.5249.165 for Windows and Mac which will roll out over the coming days/weeks.
A full list of changes in this build is available in the log. Interested in switching release channels? Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
The Dev channel has been updated to 108.0.5359.19 for Windows, Mac and Linux.
A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.
Today, we’re announcing the v12 release of the Google Ads API. To use some v12 features, you’ll need to upgrade your client libraries and client code. The updated client libraries and code examples will be published next week.
conversion_tracking_id will always be greater than 0 for all customers. In previous versions, this field could be 0 for customers who had never created any conversion actions (see the sketch after this list).
(For allowlisted customers) Added support for adding, updating, and removing CampaignAsset and AdGroupAsset with the field_type: AD_IMAGE (Image assets).
Added support for mutating and retrieving location asset sets for test accounts only. The location assets for the asset sets are automatically generated and non-mutable.
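For example, client code that previously guarded against a zero conversion_tracking_id can now rely on the field being populated. The Kotlin sketch below is illustrative only: `runGaqlQuery` is a hypothetical stand-in for a search call in your client library of choice, and the returned value and customer ID are placeholders; only the GAQL field name comes from the API.

```kotlin
// Hypothetical helper standing in for a Google Ads API search call in
// your client library of choice; it is not part of the API itself.
fun runGaqlQuery(customerId: String, query: String): List<Long> =
    listOf(1234567890L)  // placeholder result for illustration

fun main() {
    val ids = runGaqlQuery(
        customerId = "INSERT_CUSTOMER_ID",
        query = """
            SELECT customer.conversion_tracking_setting.conversion_tracking_id
            FROM customer
        """.trimIndent()
    )
    // As of v12, this holds for all customers, even those that have
    // never created a conversion action.
    check(ids.all { it > 0 })
}
```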
Where can I learn more? The following resources can help you get started:
Posted by Kedem Snir, Software Engineer, and Gal Elidan, Senior Staff Research Scientist, Google Research
Whether it's a professional honing their skills or a child learning to read, coaches and educators play a key role in assessing the learner's answer to a question in a given context and guiding them towards a goal. These interactions have unique characteristics that set them apart from other forms of dialogue, yet are not available when learners practice alone at home. In the field of natural language processing, this type of capability has not received much attention and is technologically challenging. We set out to explore how we can use machine learning to assess answers in a way that facilitates learning.
In this blog, we introduce an important natural language understanding (NLU) capability called Natural Language Assessment (NLA), and discuss how it can be helpful in the context of education. While typical NLU tasks focus on the user's intent, NLA allows for the assessment of an answer from multiple perspectives. In situations where a user wants to know how good their answer is, NLA can offer an analysis of how close the answer is to what is expected. In situations where there may not be a “correct” answer, NLA can offer subtle insights that include topicality, relevance, verbosity, and beyond. We formulate the scope of NLA, present a practical model for carrying out topicality NLA, and showcase how NLA has been used to help job seekers practice answering interview questions with Google's new interview prep tool, Interview Warmup.
Overview of Natural Language Assessment (NLA)
The goal of NLA is to evaluate the user's answer against a set of expectations. Consider the following components for an NLA system interacting with students (a rough type sketch follows the list):
A question presented to the student
Expectations that define what we expect to find in the answer (e.g., a concrete textual answer, a set of topics we expect the answer to cover, conciseness)
An answer provided by the student
An assessment output (e.g., correctness, missing information, too specific or general, stylistic feedback, pronunciation, etc.)
[Optional] A context (e.g., a chapter in a book or an article)
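To make these components concrete, here is one way they might be modeled as types. This is our own illustrative Kotlin sketch; none of the names come from a released API.

```kotlin
// Illustrative types only; all names are our own, not from any released API.
sealed interface Expectation {
    data class ConcreteAnswer(val text: String) : Expectation   // a concrete textual answer
    data class Topics(val topics: Set<String>) : Expectation    // topics the answer should cover
}

data class NlaInput(
    val question: String,
    val expectation: Expectation,
    val answer: String,
    val context: String? = null   // optional, e.g., a chapter in a book or an article
)

data class NlaAssessment(
    val coveredTopics: Set<String> = emptySet(),  // for topicality expectations
    val notes: List<String> = emptyList()         // e.g., "too general", "student is uncertain"
)
```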
With NLA, both the expectations about the answer and the assessment of the answer can be very broad. This enables teacher-student interactions that are more expressive and subtle. Here are two examples:
A question with a concrete correct answer: Even in situations where there is a clear correct answer, it can be helpful to assess the answer more subtly than simply correct or incorrect. Consider the following:
Context: Harry Potter and the Philosopher's Stone
Question: “What is Hogwarts?”
Expectation: “Hogwarts is a school of Witchcraft and Wizardry” [expectation is given as text]
Answer: “I am not exactly sure, but I think it is a school.”
The answer may be missing salient details but labeling it as incorrect wouldn’t be entirely true or useful to a user. NLA can offer a more subtle understanding by, for example, identifying that the student’s answer is too general, and also that the student is uncertain.
Illustration of the NLA process from input question, answer and expectation to assessment output
This kind of subtle assessment, along with noting the uncertainty the student expressed, can be important in helping students build skills in conversational settings.
Topicality expectations: There are many situations in which a concrete answer is not expected. For example, if a student is asked an opinion question, there is no concrete textual expectation. Instead, there's an expectation of relevance and opinionation, and perhaps some level of succinctness and fluency. Consider the following interview practice setup:
Question: “Tell me a little about yourself?”
Expectations: { “Education”, “Experience”, “Interests” } (a set of topics)
Answer: “Let’s see. I grew up in the Salinas valley in California and went to Stanford where I majored in economics but then got excited about technology so next I ….”
In this case, a useful assessment output would map the user’s answer to a subset of the topics covered, possibly along with a markup of which parts of the text relate to which topic. This can be challenging from an NLP perspective as answers can be long, topics can be mixed, and each topic on its own can be multi-faceted.
A Topicality NLA Model
In principle, topicality NLA is a standard multi-class task for which one can readily train a classifier using standard techniques. However, training data for such scenarios is scarce, and it would be costly and time consuming to collect for each question and topic. Our solution is to break each topic into granular components that can be identified using large language models (LLMs) with straightforward generic tuning.
We map each topic to a list of underlying questions and define that if the sentence contains an answer to one of those underlying questions, then it covers that topic. For the topic “Experience” we might choose underlying questions such as:
Where did you work?
What did you study?
…
While for the topic “Interests” we might choose underlying questions such as:
What are you interested in?
What do you enjoy doing?
…
These underlying questions are designed through an iterative manual process. Importantly, since these questions are sufficiently granular, current language models (see details below) can capture their semantics. This allows us to offer a zero-shot setting for the NLA topicality task: once trained (more on the model below), it is easy to add new questions and new topics, or adapt existing topics by modifying their underlying content expectation without the need to collect topic specific data. See below the model’s predictions for the sentence “I’ve worked in retail for 3 years” for the two topics described above:
A diagram of how the model uses underlying questions to predict the topic most likely to be covered by the user’s answer.
Since an underlying question for the topic “Experience” was matched, the sentence would be classified as “Experience”.
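The matching logic itself is simple once a compatibility scorer exists. The Kotlin sketch below is a minimal illustration: the topic map and threshold are our own, and `scoreCompatibility` is a crude keyword placeholder standing in for the trained model described in the next section.

```kotlin
// Placeholder scorer: crude keyword overlap, standing in for the trained
// <underlying question, answer> compatibility model described below.
fun scoreCompatibility(underlyingQuestion: String, sentence: String): Double {
    val keywords = underlyingQuestion.lowercase()
        .filter { it.isLetter() || it.isWhitespace() }
        .split(" ")
        .filter { it.length > 3 }
    return if (keywords.any { sentence.lowercase().contains(it) }) 1.0 else 0.0
}

// Illustrative topic map; in practice the underlying questions are
// designed through an iterative manual process.
val topicToQuestions = mapOf(
    "Experience" to listOf("Where did you work?", "What did you study?"),
    "Interests" to listOf("What are you interested in?", "What do you enjoy doing?")
)

// A sentence covers a topic if it answers any of that topic's underlying questions.
fun detectTopics(sentence: String, threshold: Double = 0.5): Set<String> =
    topicToQuestions.filterValues { questions ->
        questions.any { q -> scoreCompatibility(q, sentence) >= threshold }
    }.keys

fun main() {
    println(detectTopics("I've worked in retail for 3 years"))  // [Experience]
}
```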
Application: Helping Job Seekers Prepare for Interviews
Interview Warmup is a new tool developed in collaboration with job seekers to help them prepare for interviews in fast-growing fields of employment such as IT Support and UX Design. It allows job seekers to practice answering questions selected by industry experts and to become more confident and comfortable with interviewing. Working with job seekers to understand their challenges in preparing for interviews, and how an interview practice tool could be most useful, inspired our research and the application of topicality NLA.
We build the topicality NLA model (once for all questions and topics) as follows: we train an encoder-only T5 model (EncT5 architecture) with 350 million parameters on question-answer data to predict the compatibility of an <underlying question, answer> pair. We rely on data from SQuAD 2.0, which was processed to produce <question, answer, label> triplets.
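The post does not spell out the exact processing, but one common way to derive such triplets is to pair each question with its own answer as a positive and with an answer drawn from a different question as a negative. The Kotlin sketch below assumes that scheme and is not the verified pipeline.

```kotlin
// Assumed pairing scheme; the exact SQuAD 2.0 processing is not
// described in the post.
data class SquadExample(val question: String, val answer: String)
data class Triplet(val question: String, val answer: String, val label: Int)

fun buildTriplets(examples: List<SquadExample>): List<Triplet> =
    examples.flatMapIndexed { i, ex ->
        val other = examples[(i + 1) % examples.size]  // an answer to a different question
        listOf(
            Triplet(ex.question, ex.answer, 1),    // compatible pair
            Triplet(ex.question, other.answer, 0)  // incompatible pair
        )
    }
```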
In the Interview Warmup tool, users can switch between talking points to see which ones were detected in their answer.
The tool does not grade or judge answers. Instead it enables users to practice and identify ways to improve on their own. After a user replies to an interview question, their answer is parsed sentence-by-sentence with the Topicality NLA model. They can then switch between different talking points to see which ones were detected in their answer. We know that there are many potential pitfalls in signaling to a user that their response is “good”, especially as we only detect a limited set of topics. Instead, we keep the control in the user’s hands and only use ML to help users make their own discoveries about how to improve.
So far, the tool has had great results helping job seekers around the world, including in the US, and we have recently expanded it to Africa. We plan to continue working with job seekers to iterate and make the tool even more helpful to the millions of people searching for new jobs.
A short film showing how Interview Warmup and its NLA capabilities were developed in collaboration with job seekers.
Conclusion
Natural Language Assessment (NLA) is a technologically challenging and interesting research area. It paves the way for new conversational applications that promote learning by enabling the nuanced assessment and analysis of answers from multiple perspectives. Working together with communities, from job seekers and businesses to classroom teachers and students, we can identify situations where NLA has the potential to help people learn, engage, and develop skills across an array of subjects, and we can build applications in a responsible way that empower users to assess their own abilities and discover ways to improve.
Acknowledgements
This work is made possible through a collaboration spanning several teams across Google. We’d like to acknowledge contributions from Google Research Israel, Google Creative Lab, and Grow with Google teams among others.
Posted by The Android Team

CameraX is an Android Jetpack library that makes it easy to incorporate camera functionality directly in your Android app. We focus heavily on device compatibility out of the box so you can focus on what makes your app unique.
In this post, we’ll look at three ways CameraX makes developers’ lives easier when it comes to device compatibility. First, we’ll take a peek into our CameraX Test Lab where we test over 150 physical phones every day. Second, we’ll look at Quirks, the mechanism CameraX uses to automatically handle device inconsistencies. Third, we’ll discuss the ways CameraX makes it easier to develop apps for foldable phones.
CameraX Test Lab
(Left) A single rack in our CameraX Test Lab. Each test enclosure contains two identical Android phones for testing front and back cameras. (Right) A GIF showing the inside of a test enclosure, with a rotating phone mount (for testing portrait and landscape orientations) and a high-resolution test chart (not pictured).
We built the CameraX Test Lab to ensure CameraX works on the Android devices most people have in their pockets. The Test Lab opened in 2019 with 52 phone models. Today, the Test Lab has 150 phone models. We prioritize devices with the most daily active users over the past 28 days (28DAUs) and devices that leverage a diverse range of systems on a chip (SoCs). The Test Lab currently covers over 750 million 28DAUs. We also test many different Android versions, going back to Android 5.1 (Lollipop).
To generate reliable test results, each phone model has its own test enclosure to control for light and other environmental factors. Each enclosure contains two phones of the same model to simplify testing the front and back cameras. On the opposite side of the test enclosure from the phones, there’s a high-resolution test chart. This chart has many industry-standard tests for camera attributes like color correctness, resolution, sharpness, and dynamic range. The chart also has some specific elements for functional tests like face detection.
When you adopt CameraX in your app, you get the assurance of this continuous testing across many devices and API levels. Additionally, we’re continuously making improvements to the Test Lab, including adding new phones based on market trends to ensure that the majority of your users are well represented. See our current test device list for the latest inventory in our Test Lab.
Quirks
Google provides a Camera Image Test Suite so that OEMs' cameras meet a baseline of consistency. Still, across the wide range of devices that run Android, there can be differences in the end-user camera experience. CameraX includes an abstraction layer, called Quirks, to remove these variations in behavior so that CameraX behaves consistently across all devices with no effort from app developers.
We find these quirks based on our own manual testing, the Test Lab’s automatic testing, and bug reports filed in our public CameraX issue tracker. As of today, CameraX has over 30 Quirks that automatically fix behavior inconsistencies for developers. Here are a few examples:
OnePixelShiftQuirk: Some phones shift a column of pixels when converting YUV data to RGB. CameraX automatically corrects for this on those devices.
ExtensionDisableQuirk: For phones that don’t support extensions or have broken behavior with extensions, CameraX disables certain extensions.
CameraUseInconsistentTimebaseQuirk: Some phones do not properly timestamp video and audio. CameraX fixes the timestamps so that the video and audio align properly.
These are just a few examples of how CameraX automatically handles quirky device behavior. We will continue to add more corrections as we find them, so app developers won’t have to deal with these one-offs on their own. If you find inconsistent behavior on a device you’re testing, you can file an issue in the CameraX component detailing the behavior and the device it’s happening on.
Foldable phones
Foldables continue to be the fastest growing smartphone form factor. Their flexibility in screen size adds complexity to camera development. Here are a few ways that CameraX simplifies the development of camera apps on foldables.
CameraX’s Preview use case handles differences between the aspect ratio of the camera and the aspect ratio of the screen. With traditional phone and tablet form factors, this difference should be small because Section 7.5.5 of the Android Compatibility Definition Document requires that the “long dimension of the camera aligns with the screen’s long dimension.” However, with foldable devices the screen aspect ratio can change, so this relationship might not always hold. With CameraX you can always preserve aspect ratio by filling the PreviewView (which may crop the preview image) or fitting the image into the PreviewView (which may result in letterboxing or pillarboxing). Set PreviewView.ScaleType to specify which method to use.
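For example, a minimal Kotlin snippet (assuming you already have a PreviewView in your layout):

```kotlin
import androidx.camera.view.PreviewView

fun configurePreviewScaling(previewView: PreviewView) {
    // FIT_CENTER preserves aspect ratio by fitting the image inside the
    // view (possible letterboxing/pillarboxing); FILL_CENTER preserves it
    // by filling the view (possibly cropping the preview image).
    previewView.scaleType = PreviewView.ScaleType.FIT_CENTER
}
```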
The increase in foldable devices also increases the possibility that your app may be used in a multi-window environment. CameraX is set up for multi-window support out-of-the-box. CameraX handles all aspects of lifecycle management for you, including the multi-window case where other apps can take priority access of singleton resources, such as the microphone or camera. This means no additional effort is required from app developers when using CameraX in a multi-window environment.
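The standard binding call looks like this; a minimal sketch assuming an Activity with a previewView field already set up, with CameraX releasing and re-acquiring the camera for you as the lifecycle changes:

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat

class CameraActivity : AppCompatActivity() {
    private lateinit var previewView: PreviewView  // assumed to be initialized in onCreate

    private fun startCamera() {
        val providerFuture = ProcessCameraProvider.getInstance(this)
        providerFuture.addListener({
            val cameraProvider = providerFuture.get()
            val preview = Preview.Builder().build().also {
                it.setSurfaceProvider(previewView.surfaceProvider)
            }
            // Binding to the Activity's lifecycle means CameraX releases and
            // re-acquires the camera automatically, including when another
            // app takes priority access in a multi-window environment.
            cameraProvider.unbindAll()
            cameraProvider.bindToLifecycle(
                this, CameraSelector.DEFAULT_BACK_CAMERA, preview
            )
        }, ContextCompat.getMainExecutor(this))
    }
}
```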
We’re always looking for more ways to improve CameraX to make it even easier to use. With respect to foldables, for example, we’re exploring ways to let developers call setTargetResolution() without having to take into account the different configurations a foldable device can be in. Keep an eye on this blog and our CameraX release notes for updates on new features!
Getting started with CameraX
We have a number of resources to help you get started with CameraX. The best starting place is our CameraX codelab. If you want to dig a bit deeper with CameraX, check out our camera code samples, ranging from a basic app to more advanced features like camera extensions. For an overview of everything CameraX has to offer, see our CameraX documentation. If you have any questions, feel free to reach out to us on our CameraX discussion group.