Continuous Integration
totsamiynixon, 2021-05-30 22:46:57

How to write correct Unit/Integration tests?

The story is entirely fictional and is told this way only for the sake of the narrative.

Let's take the simplest example from the Internet, a calculator, which is usually used to demonstrate how to write unit tests and the TDD approach.

So I have some kind of system that calculates the salary of employees.

Code example:

public class SalaryCalculationService {
    private readonly ICalculator _calculator;

    public SalaryCalculationService(ICalculator calculator) {
        _calculator = calculator;
    }

    public int CalculateSalary(Employee employee) {
        // logic that uses the calculator goes here
    }
}

One of its dependencies is the Calculator class.
SalaryCalculationService test example:
public class SalaryCalculationServiceTest {

    [TestMethod]
    public void ShouldCalculateSalaryCorrectly() {
        // Arrange
        // We abstract away from the concrete ICalculator implementation
        var calculatorMock = new Mock<ICalculator>();
        calculatorMock.Setup(c => c.Sum(It.IsAny<int>(), It.IsAny<int>()))
                      .Returns((int a, int b) => a + b);
        var calculationService = new SalaryCalculationService(calculatorMock.Object);
        var employee = new Employee();
        employee.SetWorkingDays(10);

        // Act
        var result = calculationService.CalculateSalary(employee);

        // Assert
        Assert.AreEqual(40, result);
    }
}


Calculator code example:
public class Calculator : ICalculator {
    public int Sum(int a, int b) {
        return a + b;
    }
}


CalculatorTest code example:
public class CalculatorTest {

    [TestMethod]
    public void ShouldCalculateSumCorrectly() {
        // Arrange
        var calculator = new Calculator();

        // Act
        var result = calculator.Sum(2, 2);

        // Assert
        Assert.AreEqual(4, result);
    }
}


And then comes a new requirement: the calculator should now handle only odd numbers.
My colleague works TDD-style: he creates a test for this behavior, implements it, runs all the tests, and everything is green, which means he can check in.

Obviously, the system as a whole will no longer work, because the various parts of the system that use the Calculator do not expect it to behave in such a strange way. And so it turned out: our testers quickly found this bug.

To avoid such cases, we introduced integration tests, which check how modules interact with each other. To keep them fast, we mock all database connections, the network, the disk, and so on. These integration tests revealed a number of bugs of this kind in our application, which were successfully fixed.
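The difference between the two levels can be shown with a minimal sketch: the same check as in the unit test above, but with the real Calculator wired into the service instead of a mock, so a behavior change in Calculator (such as the "odd numbers only" requirement) makes this test fail. The body of CalculateSalary and the daily rate are assumptions here, since the original post elides them.

```csharp
using System;

public interface ICalculator { int Sum(int a, int b); }

public class Calculator : ICalculator {
    public int Sum(int a, int b) => a + b;
}

public class Employee {
    public int WorkingDays { get; private set; }
    public void SetWorkingDays(int days) => WorkingDays = days;
}

public class SalaryCalculationService {
    private const int DailyRate = 4; // assumed rate matching the "10 days -> 40" example
    private readonly ICalculator _calculator;

    public SalaryCalculationService(ICalculator calculator) => _calculator = calculator;

    public int CalculateSalary(Employee employee) {
        // assumed logic: sum the daily rate over each working day
        var salary = 0;
        for (var day = 0; day < employee.WorkingDays; day++)
            salary = _calculator.Sum(salary, DailyRate);
        return salary;
    }
}

public static class IntegrationStyleTest {
    public static void Main() {
        // Real dependency, no mock: a regression inside Calculator is caught here,
        // while the mocked unit test above would stay green.
        var service = new SalaryCalculationService(new Calculator());
        var employee = new Employee();
        employee.SetWorkingDays(10);

        if (service.CalculateSalary(employee) != 40)
            throw new Exception("salary calculation regressed");
        Console.WriteLine("integration-style test passed");
    }
}
```

The point is not the trivial arithmetic but the wiring: the fewer collaborators are replaced by mocks, the more of the real interaction the test exercises.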

But here's the catch: we still get bugs, even in modules that are covered by tests. We dig into one such case and find that a service that uses the database sends an SQL query with a syntax error. Here we go again: we write everything TDD-style, all tests are green, but bugs still appear.

There is a feeling that the less we mock, the more bugs our tests catch. Or, put another way: the higher the level at which we test, the more realistic the picture. If we are developing a backend, it feels like testing should start from the moment an HTTP request reaches the code we control.
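Testing "from the HTTP request inward" can be sketched with ASP.NET Core's in-memory TestServer (the Microsoft.AspNetCore.TestHost package). The endpoint, route, and response below are toy stand-ins; a real test would point the web host at the actual application's pipeline instead of the inline Configure call.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.Hosting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SalaryHttpTest {
    [TestMethod]
    public async Task ShouldReturnSalaryOverHttp() {
        using var host = await new HostBuilder()
            .ConfigureWebHost(webBuilder => webBuilder
                .UseTestServer() // in-memory server, no real sockets
                .Configure(app => app.Run(async ctx =>
                    await ctx.Response.WriteAsync("40")))) // stand-in for the real pipeline
            .StartAsync();

        var client = host.GetTestClient(); // HttpClient wired to the in-memory server

        var response = await client.GetAsync("/api/employees/1/salary"); // hypothetical route
        var body = await response.Content.ReadAsStringAsync();

        Assert.AreEqual("40", body);
    }
}
```

Because the request goes through the full middleware pipeline, routing, serialization, and wiring mistakes surface here even when every unit test is green.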

I have read hundreds of articles on unit/integration testing and have not found an answer to the question: how do you properly organize unit/integration testing so that the tests catch bugs and survive refactoring, i.e. so the code can be refactored without constant changes to the tests (today, changing a dependency means rewriting all of its mocks throughout the application), and so on?

Please share your experience; it is very important to hear different opinions on this problem!


2 answers
soloveid, 2021-05-31
@soloveid

> the less we mock, the more bugs our tests catch.

Yes, that's right.
Mocks are for testing non-standard or hard-to-reproduce behavior, for example the database connection dropping or the network going down.
But why mock a calculator that is used everywhere anyway, when in effect a copy of its own code is substituted into the mock (has no one heard of the DRY principle?), I don't understand.

Vasily Bannikov, 2021-05-31
@vabka

> how to properly organize Unit/Integration testing

Choose the right balance between ease of writing, speed of execution, and coverage.
The best in terms of coverage are end-to-end tests, in which you bring up the entire infrastructure and make HTTP requests to the service under test.
But they are quite slow, they require a complex infrastructure, and when writing them you need to make sure the tests do not affect each other.
And now about my experience:
There was a project without tests, and it was clear that it needed a complete refactoring, that a couple of modules had to be rewritten from scratch, and that there was also a migration from one DBMS to another. But there was documentation for all the endpoints.
In this case, the approach of writing tests against HTTP was ideal.
Back then I wrote them in C#, but now I know that in Postman you can not only make requests but also write tests, which you can then run from the console.
And now for the rest:

We write everything in TDD, all tests are green, but bugs still appear.

Most likely an architectural problem. Because of the large number of mocks, you have a lot of untested code.
Whatever you are mocking should also be fully tested.
> one of our services that uses a database sends a SQL query with a syntax error

Extract the piece of code that generates this SQL query into a separate service, and test that it always produces syntactically valid queries.
I also advise you to read about clean architecture.
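A minimal sketch of this extraction: the SQL-building logic moves into its own class so its output can be asserted directly, without a database. The class, method, table, and column names below are made up for illustration; only the extraction pattern itself is the point.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical query builder, extracted from the service that talked to the database.
public class EmployeeQueryBuilder {
    public string BuildSelectByDepartment() {
        // A parameter placeholder instead of string concatenation avoids both
        // injection and the quoting mistakes that typically cause syntax errors.
        return "SELECT Id, Name, Salary FROM Employees WHERE Department = @department";
    }
}

[TestClass]
public class EmployeeQueryBuilderTest {
    [TestMethod]
    public void ShouldProduceWellFormedSql() {
        var sql = new EmployeeQueryBuilder().BuildSelectByDepartment();

        StringAssert.StartsWith(sql, "SELECT");
        StringAssert.Contains(sql, "FROM Employees");
        StringAssert.Contains(sql, "@department"); // parameterized, not concatenated
    }
}
```

These assertions only check the query's shape; a stricter variant could run the generated SQL through a parser, or execute it against an in-memory database in an integration test.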
