
Fixing a Test Hourglass

By Alan Myrvold


Automated tests make it safer and faster to create new features, fix bugs, and refactor code. When planning the automated tests, we envision a pyramid with a strong foundation of small unit tests, some well-designed integration tests, and a few large end-to-end tests. As Just Say No to More End-to-End Tests argues, tests should be fast, reliable, and specific; end-to-end tests, however, are often slow, unreliable, and difficult to debug.


As software projects grow, the shape of our test distribution often becomes undesirable: either top-heavy (few or no unit or medium-sized integration tests) or shaped like an hourglass.


The hourglass test distribution has a large set of unit tests, a large set of end-to-end tests, and few or no medium integration tests.



To transform the hourglass back into a pyramid, so that you can test the integration of components in a reliable, sustainable way, you need to figure out how to architect the system under test and the test infrastructure, and then make improvements to system testability and to the test code.


I worked on a project with a web UI, a server, and many backends. There were unit tests at all levels with good coverage and a quickly increasing set of end-to-end tests.


The end-to-end tests found issues that the unit tests missed, but they ran slowly, and environmental issues caused spurious failures, including test data corruption. In addition, some functional areas were difficult to test because they covered more than a single unit yet required state within the system that was hard to set up.




We eventually found a good test architecture for faster, more reliable integration tests, but with some missteps along the way.

An example UI-level end-to-end test, written in Protractor, looked something like this:


describe('Terms of service are handled', () => {
  it('accepts terms of service', async () => {
    const user = getUser('termsNotAccepted');
    await login(user);
    await see(termsOfServiceDialog());
    await click('Accept');
    await logoff();
    await login(user);
    await not.see(termsOfServiceDialog());
  });
});


This test logs on as a user, sees the terms of service dialog that the user needs to accept, accepts it, then logs off and logs back on to ensure the user is not prompted again.


This terms of service test was a challenge to run reliably, because once an agreement was accepted, the backend server had no RPC method to reverse the operation and “un-accept” the TOS. We could create a new user for each test, but that was time-consuming and hard to clean up.


The first attempt to make the terms of service feature testable without end-to-end testing was to hook the server RPC method and set the expectations within the test. The hook intercepts the RPC call and provides expected results instead of calling the backend API.




This approach worked. The test interacted with the backend RPC without really calling it, but it cluttered the test with extra logic.


describe('Terms of service are handled', () => {
  it('accepts terms of service', async () => {
    const user = getUser('someUser');
    await hook('TermsOfService.Get()', true);
    await login(user);
    await see(termsOfServiceDialog());
    await click('Accept');
    await logoff();
    await hook('TermsOfService.Get()', false);
    await login(user);
    await not.see(termsOfServiceDialog());
  });
});



The test met the goal of testing the integration of the web UI and server, but it was unreliable. As the system scaled under load, there were several server processes, and there was no guarantee that the UI would reach the same server process for every RPC call, so the hook might be set in one server process while the UI was being served by another.


The hook also wasn't at a natural system boundary, so it required more maintenance as the system evolved and the code was refactored.


The next design of the test architecture was to fake the backend that eventually processes the terms of service call.


The fake implementation can be quite simple:

public class FakeTermsOfService implements TermsOfService.Service {
  private static final Map<String, Boolean> accepted = new ConcurrentHashMap<>();

  @Override
  public TosGetResponse get(TosGetRequest req) {
    // Assumes TosGetResponse simply wraps the accepted flag.
    return new TosGetResponse(accepted.getOrDefault(req.UserID(), Boolean.FALSE));
  }

  @Override
  public void accept(TosAcceptRequest req) {
    accepted.put(req.UserID(), Boolean.TRUE);
  }
}



And the test is now uncluttered by the expectations:

describe('Terms of service are handled', () => {
  it('accepts terms of service', async () => {
    const user = getUser('termsNotAccepted');
    await login(user);
    await see(termsOfServiceDialog());
    await click('Accept');
    await logoff();
    await login(user);
    await not.see(termsOfServiceDialog());
  });
});


Because the fake stores the accepted state in memory, there is no need to reset the state for the next test iteration; it is enough just to restart the fake server.


This worked, but it was problematic when fake and real backends were mixed, because state shared among the real backends was now out of sync with the fake backend.


Our final, successful integration test architecture was to provide fake implementations for all but one of the backends, all sharing the same in-memory state. The one real backend was included in the system under test because it was tightly coupled with the Web UI; its dependencies were all wired to fake backends. These are integration tests over the entire system under test, but with the backend dependencies removed. They expand the medium-sized tests in the test hourglass, allowing us to have fewer end-to-end tests with real backends.
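
As an illustration only (none of these class or method names come from the real system), the fakes might share state through a single object handed to each of them when the test environment starts:

// Hypothetical container for the in-memory state shared by every fake backend,
// so the fakes stay consistent with one another.
final class FakeBackendState {
  final Map<String, Boolean> acceptedTos = new ConcurrentHashMap<>();
  // ... other per-user state used by the other fake backends.
}

// The terms-of-service fake now reads and writes the shared state instead of
// keeping its own static map.
public class FakeTermsOfService implements TermsOfService.Service {
  private final FakeBackendState state;

  public FakeTermsOfService(FakeBackendState state) {
    this.state = state;
  }

  @Override
  public void accept(TosAcceptRequest req) {
    state.acceptedTos.put(req.UserID(), Boolean.TRUE);
  }
  // get(...) reads the same shared map, as in the earlier fake.
}

// At startup, one FakeBackendState instance is passed to every fake, e.g.:
//   FakeBackendState state = new FakeBackendState();
//   startFakes(new FakeTermsOfService(state), new FakeProfileService(state), ...);
// (startFakes and FakeProfileService are invented names for this sketch.)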


Note that these integration tests are not the only option. For logic in the Web UI, we can write page-level unit tests, which run faster and more reliably. For the terms of service feature, however, we want to test the Web UI and server logic together, so integration tests are a good solution.





This resulted in UI tests that ran, unmodified, on both the real and fake backend systems. 


When run with fake backends, the tests were faster and more reliable, which made it easier to add test scenarios that would have been challenging to set up with real backends. We also deleted end-to-end tests whose coverage was duplicated by the integration tests, ending up with more integration tests than end-to-end tests.



By iterating, we arrived at a sustainable test architecture for the integration tests.


If you're facing a test hourglass, the right test architecture for medium-sized tests may not be obvious. I'd recommend experimenting, dividing the system along well-defined interfaces, and making sure the new tests provide value by running faster and more reliably, or by unlocking hard-to-test areas.



Testing on the Toilet: Testing UI Logic? Follow the User!

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

By Carlos Israel Ortiz García


After years of anticipation, you're finally able to purchase Google's hottest new product, gShoe*. But after clicking the "Buy" button, nothing happened! Inspecting the HTML, you notice the problem:

<button disabled="true" click="$handleBuyClick(data)">Buy</button>

Users couldn’t buy their gShoes because the “Buy” button was disabled. The problem was due to the unit test for handleBuyClick, which passed even though the user interface had a bug:

it('submits purchase request', () => {
  controller = new PurchasePage();
  // Call the method that handles the "Buy" button click
  controller.handleBuyClick(data);
  expect(service).toHaveBeenCalledWith(expectedData);
});

In the above example, the test failed to detect the bug because it bypassed the UI element and instead directly invoked the "Buy" button click handler. To be effective, tests for UI logic should interact with the components on the page as a browser would, which allows testing the behavior that the end user experiences. Writing tests against UI components, rather than calling handlers directly, faithfully simulates user interactions (e.g., adding items to a shopping cart, clicking a purchase button, or verifying that an element is visible on the page), making the tests more comprehensive.


The test for the “Buy” button should instead exercise the entire UI component by interacting with the HTML element, which would have caught the disabled button issue:

it('submits purchase request', () => {
  // Renders the page with the "Buy" button and its associated code.
  render(PurchasePage);
  // Tries to click the button, fails the test, and catches the bug!
  buttonWithText('Buy').dispatchEvent(new Event('click'));
  expect(service).toHaveBeenCalledWith(expectedData);
});


Why should tests be written this way? Unlike end-to-end tests, tests for individual UI components don’t require a backend server or the entire app to be rendered. Instead, these tests run in the same self-contained environment and take a similar amount of time to execute as unit tests that just execute the underlying event handlers directly. Therefore, the UI acts as the public API, leaving the business logic as an implementation detail (also known as the "Use the Front Door First" principle), resulting in better coverage of a feature.

Disclaimer: “gShoe” is not a real Google product. Unfortunately you can’t buy a pair even if the bug is fixed!

Testing on the Toilet: Avoid Hardcoding Values for Better Libraries

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

By Adel Saoud


You may have been in a situation where you're using a value that always remains the same, so you define a constant. This can be a good practice, as it removes magic values and improves code readability. But be mindful that hardcoding values can make the code significantly harder to use and to refactor.

Consider the following function that relies on hardcoded values:
// Declared in the module.
constexpr int kThumbnailSizes[] = {480, 576, 720};

// Returns thumbnails of various sizes for the given image.
std::vector<Image> GetThumbnails(const Image& image) {
  std::vector<Image> thumbnails;
  for (const int size : kThumbnailSizes) {
    thumbnails.push_back(ResizeImage(image, size));
  }
  return thumbnails;
}


Using hardcoded values can make your code:
  • Less predictable: The caller might not expect the function to rely on hardcoded values beyond its parameters; a user of the function shouldn’t need to read the function’s code to discover that it does. Also, it is difficult to predict the product/resource/performance implications of changing these hardcoded values.
  • Less reusable: The caller is not able to call the function with different values and is stuck with the hardcoded ones. If the caller doesn’t need all these sizes or needs a different size, the function has to be forked or refactored to avoid the aforementioned complications for existing callers.

When designing a library, prefer to pass required values, such as through a function call or a constructor. The code above can be improved as follows:
std::vector<Image> GetThumbnails(const Image& image, absl::Span<const int> sizes) {
  std::vector<Image> thumbnails;
  for (const int size : sizes) {
    thumbnails.push_back(ResizeImage(image, size));
  }
  return thumbnails;
}


If most of the callers use the same value for a certain parameter, make your code configurable so that this value doesn't need to be duplicated by each caller. For example, you can define a public constant that contains a commonly used value, or use default arguments in languages that support this feature (e.g. C++ or Python).
// Declared in the public header.
inline constexpr int kDefaultThumbnailSizes[] = {480, 576, 720};

// Default argument allows the function to be used without specifying a size.
std::vector<Image> GetThumbnails(const Image& image,
                                 absl::Span<const int> sizes = kDefaultThumbnailSizes);

Code Coverage Best Practices

By Carlos Arguelles, Marko Ivanković‎, and Adam Bender


We have spent several decades driving software testing initiatives at various very large software companies. One of the areas we have consistently advocated for is the use of code coverage data to assess risk and identify gaps in testing. However, the value of code coverage is a highly debated and surprisingly polarizing topic. Every time code coverage is mentioned in a large group of people, seemingly endless arguments ensue, which tend to lead the conversation away from productive progress as people dig into their respective camps. The purpose of this document is to give you tools to steer people at all ends of the spectrum toward common ground, so that you can move forward and use coverage information pragmatically. Below, we put forth best practices in the domain of code coverage for working effectively toward code health.

  • Code coverage provides significant benefits to the developer workflow. It is not a perfect measure of test quality, but it does offer a reasonable, objective, industry-standard metric with actionable data. It does not require significant human interaction, it applies universally to all products, and there are ample tools available in the industry for most languages. You must treat it with the understanding that it’s a lossy and indirect metric that compresses a lot of information into a single number, so it should not be your only source of truth. Instead, use it in conjunction with other techniques to create a more holistic assessment of your testing efforts.
  • It is an open research question whether code coverage alone reduces defects, but our experience shows that efforts to increase code coverage can often lead to culture changes in engineering excellence that in the long run reduce defects. For example, teams that give code coverage priority tend to treat testing as a first-class citizen, and tend to bake stronger testability into their product design so that they can achieve their testing goals with less effort. All this in turn leads to writing higher-quality code to begin with (more modular, cleaner contracts in their APIs, more manageable code reviews, etc.). They also start caring more about their overall code health, and about engineering and operational excellence.
  • A high code coverage percentage does not guarantee high quality in the test coverage. Focusing on getting the number as close as possible to 100% leads to a false sense of security. It could also be wasteful, burning machine cycles and creating technical debt from low-value tests that now need to be maintained. Bad code being pushed to production due to missing tests could happen either because (a) your tests did not cover a specific path of code, a gap that is easy to identify with code coverage analysis, or (b) your tests did not cover a specific edge case in an area that did have code coverage, which is difficult or impossible to catch with code coverage analysis. Code coverage does not guarantee that the covered lines or branches have been tested correctly; it only guarantees that they have been executed by a test. Be mindful of copy/pasting tests just for the sake of increasing coverage, or adding tests with little actual value, to comply with the number. A better technique to assess whether you’re adequately exercising the lines your tests cover, and adequately asserting on failures, is mutation testing.
  • But a low code coverage number does guarantee that large areas of the product are going completely untested by automation on every single deployment. This increases our risk of pushing bad code to production, so it should receive attention. In fact a lot of the value of code coverage data is to highlight not what’s covered, but what’s not covered.
  • There is no “ideal code coverage number” that universally applies to all products. The level of testing you want/need for a set of code should be a function of (a) business impact/criticality of the code; (b) how often you will need to touch/change the code; (c) how much longer you expect the code to live, its complexity, and domain variables. We cannot mandate every single team should have x% code coverage; this is a business decision best made by the owners of the product with domain-specific knowledge. Any mandate to reach x% code coverage should be accompanied by infrastructure investments to make testing easy, such as integrating tools into the developer workflow. Be mindful that engineers may start treating your target like a checkbox and avoid increasing coverage beyond the target, even if doing so would be prudent.
  • In general, code coverage for a lot of products is below the bar; we should aim to significantly improve code coverage across the board. Although there is no “ideal code coverage number,” at Google we offer the general guidelines of 60% as “acceptable”, 75% as “commendable”, and 90% as “exemplary.” However, we like to stay away from broad top-down mandates and encourage every team to select the value that makes sense for their business needs.
  • We should not be obsessing on how to get from 90% code coverage to 95%. The gains of increasing code coverage beyond a certain point are logarithmic. But we should be taking concrete steps to get from 30% to 70% and always making sure new code meets our desired threshold.
  • More important than the percentage of lines covered is human judgment over the actual lines of code (and behaviors) that aren’t being covered (analyzing the gaps in testing) and whether this risk is acceptable or not. What’s not covered is more meaningful than what is covered. Pragmatic discussions over specific lines of code not covered that take place during the code review process are more valuable than over-indexing on an arbitrary target number. We have found that embedding code coverage into your code review process makes code reviews faster and easier. Not all code is equally important; for example, testing debug log lines is often less important. So when developers can see not just the coverage number, but each covered line highlighted as part of the code review, they will make sure that the most important code is covered.
  • Just because your product has low code coverage doesn’t mean you can’t take concrete, incremental steps to improve it over time. Inheriting a legacy system with poor testing and poor testability can be daunting, and you may not feel empowered to turn it around, or even know where to start. But at the very least, you can adopt the ‘boy-scout rule’ (leave the campground cleaner than you found it). Over time, and incrementally, you will get to a healthy location.
  • Make sure that frequently changing code is covered. While project-wide goals above 90% are most likely not worth it, per-commit coverage goals of 99% are reasonable, and 90% is a good lower threshold. We need to ensure that our tests are not getting worse over time.
  • Unit test code coverage is only a piece of the puzzle. Integration/system test code coverage is important too, and the aggregate view of the coverage of all sources in your pipeline (unit and integration) is paramount, as it gives you the bigger picture of how much of your code is not exercised by your test automation as it makes its way through your pipeline to a production environment. Be aware that while unit tests have a high correlation between executed and evaluated code, some of the coverage from integration and end-to-end tests is incidental rather than deliberate. But incorporating code coverage from integration tests can help you avoid a false sense of security in which you assume that code not covered by your unit tests must be covered by your integration tests.
  • We should gate deployments that do not meet our code coverage standards. Teams should debate and decide which gating mechanism makes sense for them. You should, however, be careful that the gate doesn’t turn into a checkbox that must be filled, as that can backfire (pressure to 'hit the metric' almost never yields the desired outcome). There are many mechanisms available: gate on coverage for all code vs. new code only; gate on a specific hard-coded coverage number vs. the delta from the prior version; ignore or focus on specific parts of the code. Then commit to upholding these as a team. Drops in code coverage that violate the gate should prevent the code from being checked in and reaching production.

If you would like to learn more about Google's coverage infrastructure, we welcome you to read our paper “Coverage at Google” which can be found here.

Testing on the Toilet: Don’t Mock Types You Don’t Own

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

By Stefan Kennedy and Andrew Trenk

The code below mocks a third-party library. What problems can arise when doing this?

// Mock a salary payment library
@Mock SalaryProcessor mockSalaryProcessor;
@Mock TransactionStrategy mockTransactionStrategy;
...
when(mockSalaryProcessor.addStrategy()).thenReturn(mockTransactionStrategy);
when(mockSalaryProcessor.paySalary()).thenReturn(TransactionStrategy.SUCCESS);
MyPaymentService myPaymentService = new MyPaymentService(mockSalaryProcessor);
assertThat(myPaymentService.sendPayment()).isEqualTo(PaymentStatus.SUCCESS);

Mocking types you don’t own can make maintenance more difficult:
  • It can make it harder to upgrade the library to a new version: The expectations of an API hardcoded in a mock can be wrong or get out of date. This may require time-consuming work to manually update your tests when upgrading the library version. In the above example, an update that changes addStrategy() to return a new type derived from TransactionStrategy (e.g. SalaryStrategy) requires the mock to be updated to return this type, even though the code under test doesn’t need to be changed since it can still reference TransactionStrategy.
  • It can make it harder to know whether a library update introduced a bug in your code: The assumptions built into mocks may get out of date as changes are made to the library, resulting in tests that pass even when the code under test has a bug. In the above example, if a library update changes paySalary() to instead return TransactionStrategy.SCHEDULED, a bug could potentially be introduced due to the code under test not handling this return value properly. However, the maintainer wouldn’t know because the mock would not return this value so the test would continue to pass.
Instead of using a mock, use the real implementation, or if that’s not feasible, use a fake implementation that is ideally provided by the library owner. This reduces the maintenance burden since the issues with mocks listed above don’t occur when using a real or fake implementation. For example:
FakeSalaryProcessor fakeProcessor = new FakeSalaryProcessor(); // Designed for tests
MyPaymentService myPaymentService = new MyPaymentService(fakeProcessor);
assertThat(myPaymentService.sendPayment()).isEqualTo(PaymentStatus.SUCCESS);
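
For illustration, here is a minimal sketch of what such a library-provided fake might look like. The method names and the TransactionStrategy.SUCCESS constant are taken from the mock setup above; the always-succeed behavior, and SalaryProcessor being an interface the fake can implement, are assumptions for this sketch only.

// Hypothetical fake provided by the library owner. It implements the real
// SalaryProcessor API, so tests exercise the same surface that production code uses.
public class FakeSalaryProcessor implements SalaryProcessor {
  @Override
  public TransactionStrategy addStrategy() {
    return TransactionStrategy.SUCCESS;  // A strategy that always succeeds (assumed).
  }

  @Override
  public TransactionStrategy paySalary() {
    return TransactionStrategy.SUCCESS;  // Every payment succeeds in tests (assumed).
  }
}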

If you can’t use the real implementation and a fake implementation doesn’t exist (and library owners aren’t able to create one), create a wrapper class that calls the type, and mock this instead. This reduces the maintenance burden by avoiding mocks that rely on the signatures of the library API. For example:


@Mock MySalaryProcessor mockMySalaryProcessor; // Wraps the SalaryProcessor library
...
// Mock the wrapper class rather than the library itself
when(mockMySalaryProcessor.sendSalary()).thenReturn(PaymentStatus.SUCCESS);

MyPaymentService myPaymentService = new MyPaymentService(mockMySalaryProcessor);
assertThat(myPaymentService.sendPayment()).isEqualTo(PaymentStatus.SUCCESS);
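
The wrapper itself can be a thin adapter. Here is a minimal sketch, assuming sendSalary() is its only method (as mocked above), that TransactionStrategy.SUCCESS is a comparable constant, and that a failure status such as PaymentStatus.FAILED exists; none of these details are prescribed by the example above.

// Thin wrapper owned by your codebase. Only this class depends on the library's
// API, so library changes are absorbed here rather than in mocks across tests.
public class MySalaryProcessor {
  private final SalaryProcessor salaryProcessor;

  public MySalaryProcessor(SalaryProcessor salaryProcessor) {
    this.salaryProcessor = salaryProcessor;
  }

  // Expresses the operation in the application's own terms.
  public PaymentStatus sendSalary() {
    salaryProcessor.addStrategy();
    return salaryProcessor.paySalary() == TransactionStrategy.SUCCESS
        ? PaymentStatus.SUCCESS
        : PaymentStatus.FAILED;  // PaymentStatus.FAILED is assumed for this sketch.
  }
}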

To avoid the problems listed above, prefer to test the wrapper class with calls to the real implementation. The downsides of testing with the real implementation (e.g. tests taking longer to run) are limited only to the tests for this wrapper class rather than tests throughout your codebase.

“Don’t mock types you don’t own” is also described by Steve Freeman and Nat Pryce in their book, Growing Object-Oriented Software, Guided by Tests. For more details about the downsides of overusing mocks (even for types you do own), see this Google Testing Blog post.

COOL to be a TE @ Google

By Anantha Keesara

Test Engineers are a part of Google’s Engineering Productivity (EngProd) Group. As mentioned in a previous post, we advocate for our users, provide comprehensive testing solutions, and play a key role in creating successful and reliable products and platforms. At Google, Test Engineers are not manual testers; we are technical engineers whose focus is on advancing product excellence and engineering productivity.

In short, it’s COOL (Constant learner, Out-of-the-box thinker, Orchestrator, Leading-edge user) to be a Test Engineer at Google:


Constant learning is what keeps Google Test Engineers motivated. We understand holistically how all the pieces of the software stack are interconnected and what kind of coverage exists or is needed to test the connections between the stacks. This product knowledge makes us test experts. We work closely with Software Engineers from the very beginning of the development process to discuss the testability of the designs before the features are implemented. We develop test strategies, methodologies, and test plans; we write scripts, design systems, and build tools and test infrastructure. We review design docs, do deep dives into Google's massive codebase, evaluate stack traces, and determine the root causes of production outages. Through this constant learning, we not only build deep technical expertise and manage risk by identifying weak spots in the code base, but also find creative ways to break software and identify potential problems. Our job ladder also gives us the flexibility and independence to explore and learn new technologies like ML concepts and Cloud computing and to build new testing solutions or improve existing ones.


Out-of-the-box thinking, a result of constant learning, is another thing that keeps us motivated. As Google Test Engineers, we champion engineering excellence by providing optimized solutions to address engineering inefficiencies, testing gaps, and process gaps. We constantly think of ways to make machines do the work to increase testability and productivity. Hundreds of thousands of lines of code get checked in every minute at Google. To maintain velocity, quality, and code health, we devise creative ways to test and debug test failures, such as performing diff testing, building dynamic test cases from the logs, designing heuristic algorithms to identify culprits for test failures, building solutions to reduce test run time, and implementing stubs, fakes, and mock objects and servers to help developers write stable unit and integration tests. Along with devising creative ways to test and debug test failures, we also focus on improving engineering excellence and product excellence by defining and measuring productivity metrics and product health metrics like quality, stability, and performance. The testing of, for example, Search, Ads, Maps, YouTube, Cloud, self-driving cars, and Google Apps would not have scaled with traditional testing practices.

Orchestrating the testing efforts is a key responsibility of Google Test Engineers. As orchestrators, we collaborate with cross-functional teams, including Product Managers, Technical Program Managers, and Software Engineers, to define critical user journeys (CUJs), determine testing strategies, and ensure that the right tests are run on the right configurations and environments. With our strong communication and collaboration skills, we work with the cross-functional teams and play the role of evangelists in spreading the word on new tools, technologies, and best testing practices. We also have the opportunity to host Hackathons and Fixits, host interns, drive college recruiting events, engage with the open source community in testing open source products, listen to feedback, and convert that feedback into product improvements.

Leading-edge user: the fun part of being a Test Engineer! We can engage with product development, participate in the review of product designs, documentation, and prototypes, play with features and products early on, and provide informed feedback. Best of all, as early adopters we get to wear wearables, ride in self-driving cars, be in our own world with AR/VR, engage with Google Assistant to get our chores done, and have multiple laptops, phones, and smart display units!


Stay tuned to learn more COOL things about Test Engineering at Google! 

Testing on the Toilet: Tests Too DRY? Make Them DAMP!

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

By Derek Snyder and Erik Kuefler

The test below follows the DRY principle (“Don’t Repeat Yourself”), a best practice that encourages code reuse rather than duplication, e.g., by extracting helper methods or by using loops. But is it a well-written test?
def setUp(self):
  self.users = [User('alice'), User('bob')]  # This field can be reused across tests.
  self.forum = Forum()

def testCanRegisterMultipleUsers(self):
  self._RegisterAllUsers()
  for user in self.users:  # Use a for-loop to verify that all users are registered.
    self.assertTrue(self.forum.HasRegisteredUser(user))

def _RegisterAllUsers(self):  # This method can be reused across tests.
  for user in self.users:
    self.forum.Register(user)

While the test body above is concise, the reader needs to do some mental computation to understand it, e.g., by following the flow of self.users from setUp() through _RegisterAllUsers(). Since tests don't have tests, it should be easy for humans to manually inspect them for correctness, even at the expense of greater code duplication. This means that the DRY principle often isn’t a good fit for unit tests, even though it is a best practice for production code.

In tests we can use the DAMP principle (“Descriptive and Meaningful Phrases”), which emphasizes readability over uniqueness. Applying this principle can introduce code redundancy (e.g., by repeating similar code), but it makes tests more obviously correct. Let’s add some DAMP-ness to the above test:

def setUp(self):
  self.forum = Forum()

def testCanRegisterMultipleUsers(self):
  # Create the users in the test instead of relying on users created in setUp.
  user1 = User('alice')
  user2 = User('bob')

  # Register the users in the test instead of in a helper method, and don't use a for-loop.
  self.forum.Register(user1)
  self.forum.Register(user2)

  # Assert each user individually instead of using a for-loop.
  self.assertTrue(self.forum.HasRegisteredUser(user1))
  self.assertTrue(self.forum.HasRegisteredUser(user2))

Note that the DRY principle is still relevant in tests; for example, using a helper function for creating value objects can increase clarity by removing redundant details from the test body. Ideally, test code should be both readable and unique, but sometimes there’s a trade-off. When writing unit tests and faced with a choice between the DRY and DAMP principles, lean more heavily toward DAMP.

Code Health: Respectful Reviews == Useful Reviews

This is another post in our Code Health series. A version of this post originally appeared in Google bathrooms worldwide as a Google Testing on the Toilet episode. You can download a printer-friendly version to display in your office.

By Liz Kammer (Google), Maggie Hodges (UX research consultant), and Ambar Murillo (Google)

While code review is recognized as a valuable tool for improving the quality of software projects, code review comments that are perceived as being unclear or harsh can have unfavorable consequences: slow reviews, blocked dependent code reviews, negative emotions, or negative perceptions of other contributors or colleagues.

Consider these tips to resolve code review comments respectfully.

As a Reviewer or Author:
  • DO: Assume competence. An author’s implementation or a reviewer’s recommendation may be due to the other party having different context than you. Start by asking questions to gain understanding.
  • DO: Provide rationale or context, such as a best practices document, a style guide, or a design document. This can help others understand your decision or provide mentorship.
  • DO: Consider how comments may be interpreted. Be mindful of the differing ways hyperbole, jokes, and emojis may be perceived.
    Author Don’t: “I prefer short names so I’d rather not change this. Unless you make me? :)”
    Author Do: “Best practice suggests omitting obvious/generic terms. I’m not sure how to reconcile that advice with this request.”
  • DON’T: Criticize the person. Instead, discuss the code. Even the perception that a comment is about a person (e.g., due to using “you” or “your”) distracts from the goal of improving the code.
    Reviewer Don’t: “Why are you using this approach? You’re adding unnecessary complexity.”
    Reviewer Do: “This concurrency model appears to be adding complexity to the system without any visible performance benefit.”
  • DON’T: Use harsh language. Code review comments with a negative tone are less likely to be useful. For example, prior research found very negative comments were considered useful by authors 57% of the time, while more-neutral comments were useful 79% of the time.  

As a Reviewer:
  • DO: Provide specific and actionable feedback. If you don’t have specific advice, sometimes it’s helpful to ask for clarification on why the author made a decision.
    Reviewer Don’t: “I don’t understand this.”
    Reviewer Do: “If this is an optimization, can you please add comments?”
  • DO: Clearly mark nitpicks and optional comments by using prefixes such as ‘Nit’ or ‘Optional’. This allows the author to better gauge the reviewer’s expectations.

As an Author:
  • DO: Clarify code or reply to the reviewer’s comment in response to feedback. Failing to do so can signal a lack of receptiveness to implementing improvements to the code.
    Author Don’t: “That makes sense in some cases but not here.”
    Author Do: “I added a comment about why it’s implemented that way.”
  • DO: When disagreeing with feedback, explain the advantage of your approach. In cases where you can’t reach consensus, follow Google’s guidance for resolving conflicts in code review.

Truth 1.0: Fluent Assertions for Java and Android Tests

By Chris Povirk, Java Core Libraries

Software testing is important—and sometimes frustrating. The frustration can come from working on innately hard domains, like concurrency, but too often it comes from a thousand small cuts:

assertEquals("Message has been sent", getString(notification, EXTRA_BIG_TEXT));
assertTrue(
    getString(notification, EXTRA_TEXT)
        .contains("Kurt Kluever <[email protected]>"));


The two assertions above test almost the same thing, but they are structured differently. The difference in structure makes it hard to identify the difference in what's being tested.

A better way to structure these assertions is to use a fluent API:

assertThat(getString(notification, EXTRA_BIG_TEXT))
    .isEqualTo("Message has been sent");
assertThat(getString(notification, EXTRA_TEXT))
    .contains("Kurt Kluever <[email protected]>");


A fluent API naturally leads to other advantages:
  • IDE autocompletion can suggest assertions that fit the value under test, including rich operations like containsExactly(permission.SEND_SMS, permission.READ_SMS), as sketched after this list.
  • Failure messages can include the value under test and the expected result. Contrast this with the assertTrue call above, which lacks a failure message entirely.
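
For example, a collection assertion with the containsExactly operation mentioned above might look like this; requestedPermissions is an illustrative variable for this sketch, and the permission constants are Android's android.Manifest.permission values:

// On failure, the assertion reports which elements were missing or unexpected.
List<String> requestedPermissions =
    ImmutableList.of(permission.SEND_SMS, permission.READ_SMS);
assertThat(requestedPermissions).containsExactly(permission.SEND_SMS, permission.READ_SMS);
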
Google's fluent assertion library for Java and Android is Truth. We're happy to announce that we've released Truth 1.0, which stabilizes our API after years of fine-tuning.



Truth started in 2011 as a Googler's personal open source project. Later, it was donated back to Google and cultivated by the Java Core Libraries team, the people who bring you Guava.

You might already be familiar with assertion libraries like Hamcrest and AssertJ, which provide similar features. We've designed Truth to have a simpler API and more readable failure messages. For example, here's a failure message from AssertJ:

java.lang.AssertionError:
Expecting:
<[year: 2019
month: 7
day: 15
]>
to contain exactly in any order:
<[year: 2019
month: 6
day: 30
]>
elements not found:
<[year: 2019
month: 6
day: 30
]>
and elements not expected:
<[year: 2019
month: 7
day: 15
]>


And here's the equivalent message from Truth:

value of:
iterable.onlyElement()
expected:
year: 2019
month: 6
day: 30

but was:
year: 2019
month: 7
day: 15


For more details, read our comparison of the libraries, and try Truth for yourself.

Also, if you're developing for Android, try AndroidX Test. It includes Truth extensions that make assertions even easier to write and failure messages even clearer:


assertThat(notification).extras().string(EXTRA_BIG_TEXT)
    .isEqualTo("Message has been sent");
assertThat(notification).extras().string(EXTRA_TEXT)
    .contains("Kurt Kluever <[email protected]>");


Coming soon: Kotlin users of Truth can look forward to Kotlin-specific enhancements.

Android Platform Testing Made Easy

By Simran Basi, Dan Shi, Dan Willemsen, and Clay Murphy

Android Engineering Productivity (Android EngProd) seeks to ease development of the Android operating system for the entire ecosystem. Android EngProd creates tools, processes, and documentation aimed at Android platform development. We are now starting to push the best previously internal development infrastructure into the open for all to benefit.

Although comprehensive, the Android Compatibility Test Suite (CTS) and Trade Federation Test Harness can be unwieldy to configure. So we recently publicly released new tooling and associated docs that simplify device configuration and testing: the Soong build system replacing Make, Test Mapping for easy configuration, and Atest for running tests locally.

Configuring tests in Soong builds

The Soong build system was introduced in Android 8.0 (Oreo) to eventually replace the Make-based system (i.e. Android.mk files) used in previous releases. Soong allows simple build configuration with support for android_test declarations arriving in Android Q, now available in the Android Open Source Project (AOSP) master branch.

Soong uses Android.bp files, which are simple, JSON-like declarative descriptions of the modules to build. Here is an example test configuration in Soong, from /platform_testing/tests/example/instrumentation/Android.bp:
android_test {
    name: "HelloWorldTests",
    srcs: ["src/**/*.java"],
    sdk_version: "current",
    static_libs: ["android-support-test"],
    certificate: "platform",
    test_suites: ["device-tests"],
}

Note that the android_test declaration at the beginning indicates this is a test; including android_app instead would indicate that this is a build package. Complex test configuration options still exist for test modules that require customized setup and teardown that cannot be performed within the test case itself.

Mapping tests in the source tree

Test Mapping allows developers to create pre- and post-submit test rules directly in the Android source tree and leave the decisions of branches and devices to be tested to the test infrastructure itself. Test Mapping definitions are JSON files named TEST_MAPPING that can be placed in any source directory.

Test Mapping categorizes tests via test groups. The name of a test group can be any string. For example, presubmit can be used for a group of tests to run when validating changes, and postsubmit tests can be used to validate the builds after changes are merged.

For the directory requiring test coverage, simply add a TEST_MAPPING JSON file resembling the example below. These rules will ensure the tests run in presubmit checks when any files are touched in that directory or any of its subdirectories.

Here is a sample TEST_MAPPING file:
{
  "presubmit": [
    {
      "name": "CtsAccessibilityServiceTestCases",
      "options": [
        {
          "include-annotation": "android.platform.test.annotations.Presubmit"
        }
      ]
    }
  ],
  "postsubmit": [
    {
      "name": "CtsWindowManagerDeviceTestCases"
    }
  ],
  "imports": [
    {
      "path": "frameworks/base/services/core/java/com/android/server/am"
    }
  ]
}

Running tests locally with Atest

Atest is a command line tool that allows developers to build, install, and run Android tests locally, greatly speeding test re-runs without requiring knowledge of Trade Federation Test Harness command line options.

Atest commands take the following form:
atest [optional-arguments] test-to-run

You can run one or more tests by separating test references with spaces, like so:
atest test-to-run-1 test-to-run-2

To run an entire test module, use its module name. Input the name as it appears in the LOCAL_MODULE or LOCAL_PACKAGE_NAME variables in that test's Android.mk or Android.bp file.

For example:
atest FrameworksServicesTests
atest CtsJankDeviceTestCases

Discovering tests with Atest and Test Mapping

Atest and Test Mapping work together to solve the problem of test discovery, i.e., what tests need to be run when a directory of code is edited. For example, to execute all presubmit test rules for a given directory locally:

  1. Go to the directory containing the TEST_MAPPING file.
  2. Run the command: atest
All presubmit tests configured in the TEST_MAPPING files of the current directory and its parent directories are run. Atest will locate and run two tests for presubmit.

Finding more testing documentation

Additional introductory testing documents were published on source.android.com to support Soong and platform testing in general.
In addition to exposing more testing documentation, Android has recently opened up build infrastructure to monitor submissions through ci.android.com. See the More visibility into the Android Open Source Project blog post and the Continuous Integration Dashboard for instructions on viewing build status and downloading build artifacts.

Android EngProd endeavors to bring you more previously internal-only features to make your life easier. Watch this Google Testing Blog, the Android Developers Blog, and source.android.com for future enhancements.