Software testing
Mikhail Akushsky, 2012-06-29 15:50:04

How do I get into automated testing?

I write a fairly large amount of production code, mostly extending the capabilities of the DocsVision platform (an EDMS, for those not in the know). I try to use modern technologies such as WPF and MVVM, and to rely on frameworks rather than reinvent the wheel (almost everything has been invented before us anyway).

Nevertheless, there is one problem: I just can't warm up to the idea of unit testing. Yes, I've read the TDD folks; in theory I can picture it all, but apparently I can't figure out where to start.

To be more specific: here is a module that, say, attaches files to a document card. What is there to test? Compare the size of the original file with the stored one? What else? Some stage of the document's life cycle? IMHO, it's easier to do it by hand and check whether the task arrived where it should. What is there to automate?

There is simply a feeling of constant discomfort: there is some cool mechanism out there, everyone uses it and is happy, while I merely know it exists, and that's all.

What am I doing wrong? How do I get into the idea of testing? And does it really make life easier with production code?

9 answer(s)
egorinsk, 2012-06-29
@egorinsk

Unfortunately, much of what is written about testing and TDD comes from theorists and amateurs computing factorials. TDD combines especially poorly with active refactoring.
Unit testing doesn't make much sense for modules with primitive logic. It is applied, first, to modules with math or tricky internal logic, and second, the result has to be checked in an alternative way.
Example of proper unit testing:
For example, there is a function that solves a quadratic equation: x1, x2 = solveQuadEq(a, b, c);
We write a test for it:
a, b, c = 1, -3, 2;
x1, x2 = solveQuadEq(a, b, c);
test::assertFloatEqual(a * x1 * x1 + b * x1 + c, 0);
test::assertFloatEqual(a * x2 * x2 + b * x2 + c, 0);
(Note that the result is checked by substituting the roots back into the equation, not by solving the equation a second time. The coefficients here are chosen so that real roots exist.) Repeat this several times with different a, b, c.
It makes sense to use unit testing, for example, to test a module for extracting phone numbers from text or an OCR module.
Testing "thin" and "dumb" controllers and views makes exactly as much sense as testing the printf() function. That is, none.
For your task, functional testing is a better fit, that is, testing individual processes and work scenarios, for example: create a document, add a file, edit the document, delete the document, checking along the way that there are no freezes (timeouts), errors or warnings on the client or the server. Optionally (not necessarily), you can also check, for example, that there is one more document, that a file appeared in and then disappeared from the repository, that lines were added to the workflow report, or that the user was sent an email containing such-and-such a document number or keyword.
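Such a scenario test could be sketched like this (Python; DocumentService here is a toy in-memory stand-in invented for illustration, not the real EDMS API):

```python
import uuid

class DocumentService:
    """Toy stand-in for an EDMS backend; the real API will differ."""
    def __init__(self):
        self.docs = {}

    def create_document(self, title):
        doc_id = str(uuid.uuid4())
        self.docs[doc_id] = {"title": title, "files": []}
        return doc_id

    def attach_file(self, doc_id, name, data):
        self.docs[doc_id]["files"].append({"name": name, "size": len(data)})

    def delete_document(self, doc_id):
        del self.docs[doc_id]

def test_document_lifecycle():
    svc = DocumentService()
    before = len(svc.docs)
    doc_id = svc.create_document("Contract")
    assert len(svc.docs) == before + 1            # there is one more document
    svc.attach_file(doc_id, "scan.pdf", b"%PDF-fake")
    assert svc.docs[doc_id]["files"][0]["name"] == "scan.pdf"
    svc.delete_document(doc_id)
    assert doc_id not in svc.docs                 # and it disappeared again

test_document_lifecycle()
```

The same shape (create, act, verify side effects, clean up) carries over to tests that drive the real system.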
I do not know if there is a tool for organizing such testing of your software. It is possible that this does not exist in nature. Even browser-based web application testing tools like Selenium are very inconvenient and underdeveloped.
But checking the result is not even that important: a single check that every button in your application can be pressed without producing errors or timeouts is already more than enough (with high probability that means the program works). Such a test is useful, for example, for web applications: when programmers refactor code actively, they can break some JavaScript button in a long-forgotten dialog, and nobody will notice. Imagine how long it takes testers without automation to crawl through all the pages and click all the buttons.
Also, if all this seems too difficult, you can test the product on users: turn on maximum logging of all errors and warnings, put assert() everywhere in the code (worth doing anyway) and collect user complaints about bugs. This is not always acceptable, though: it's one thing when a free service like Facebook breaks and nothing terrible happens, and quite another when an error disrupts multimillion-dollar business processes in a large corporation, or bank accounts.

Wott, 2012-06-29
@Wott

To build up an understanding, you can look in the direction of regression testing.
For example, you wrote a module and created tests for it, and everyone was happy. But the tests are not thrown away; they are collected into a suite that is executed in full on every subsequent build after its own unit tests pass. That way you can say with certainty that nothing that used to work has been broken. The problem is that such tests accumulate, and over time a significant share of resources goes to testing. The solution is automated testing, to which unit tests are handed over once their development cycle is complete. Automated regression tests can be run overnight or on a schedule, on a single environment or across multiple systems. Hardware is cheap: you can spin up entire clusters serviced by a single tester-admin, while developers and testers focus on more intellectual work.

Antelle, 2012-06-29
@Antelle

IMHO, it's easier to create it with your hands and check whether the task has come to the right place or not. What is there to automate?
This is the wrong way to think about it. Imagine that you have not one but a hundred such projects in your company. Will you (or a tester) go in every day and check whether everything arrived where it should? Unlikely. But a test will. That's the whole point.
As for where to start: look at what modules you have and what operations they perform, and test those operations. Of course, the modules need to be loosely coupled so that they can be tested (if we are talking about unit testing, of course).
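The "loosely coupled" requirement usually boils down to dependency injection: the module receives its collaborators from outside, so a test can substitute fakes. A minimal sketch (Python; all class names invented for illustration):

```python
import datetime

class SystemClock:
    """Production implementation: real system time."""
    def now(self):
        return datetime.datetime.now()

class FixedClock:
    """Test double: always returns the same instant."""
    def __init__(self, instant):
        self.instant = instant
    def now(self):
        return self.instant

class TaskChecker:
    # The clock is injected through the constructor, so a test can
    # pin time to a known value instead of depending on "right now".
    def __init__(self, clock):
        self.clock = clock
    def is_overdue(self, deadline):
        return self.clock.now() > deadline

# In a test, the fake makes the behavior deterministic:
checker = TaskChecker(FixedClock(datetime.datetime(2012, 6, 29, 12, 0)))
assert checker.is_overdue(datetime.datetime(2012, 6, 28))
assert not checker.is_overdue(datetime.datetime(2012, 6, 30))
```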

BigMosquito, 2012-06-29
@BigMosquito

More specifically - here is a module, well, let's say, attaching files to a document card. What to test here? Match the size of the original file or a new one to check? Ali what else? Or some stage in the life of the document?

You can, for example, attach a file, close/save the document, then open the document again, "pull" the attachment out and check its size/name/that it opens in its native application.
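A minimal round-trip check of this kind (Python; the real attachment storage is replaced with a temporary directory for illustration):

```python
import hashlib
import os
import tempfile

def attach_round_trip(data, name):
    """Save an 'attachment', re-open it, and return what a checker would compare."""
    path = os.path.join(tempfile.mkdtemp(), name)
    with open(path, "wb") as f:          # attach: write the file out
        f.write(data)
    with open(path, "rb") as f:          # re-open: "pull" the attachment back
        restored = f.read()
    return os.path.basename(path), len(restored), hashlib.sha256(restored).hexdigest()

original = b"fake document bytes"
name, size, digest = attach_round_trip(original, "report.docx")
assert name == "report.docx"
assert size == len(original)
assert digest == hashlib.sha256(original).hexdigest()  # content survived intact
```

Comparing a hash rather than just the size catches corruption that a size check alone would miss.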

retran, 2012-06-29
@retran

www.amazon.com/Test-Driven-Development-By-Example/dp/0321146530

Vladimir Chernyshev, 2012-06-29
@VolCh

Think about what the usage scenarios for your module are: what the inputs are and what the outputs should be. Input and output are not only constructor/method parameters and return values, but also the state of the database, file system, etc.; a thrown exception is an output too. For each scenario, prepare the input from scratch (create the necessary runtime objects, files, and the schema and data in the database), execute the scenario (in the simplest case, call one method) and check the output.
If the architecture is not tailored for testing, then most likely the tests will be large, fragile and slow, because in addition to your own logic they test the logic of the system, framework, libraries, OS, etc. That means it's time to refactor so that testing becomes easier. Say all file operations in the module under test are moved into the methods of a separate FileSystem class, which is passed to your module in parameters or through a container. Make sure the first group of tests still passes after the refactoring, copy it as a new group, and start replacing the real calls in it with fakes, mocks and stubs. You get simpler tests. Something like that. The book Working Effectively with Legacy Code helped me a lot in my time.
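A sketch of that FileSystem extraction (Python instead of .NET for brevity; all names are invented for illustration):

```python
class FileSystem:
    """Thin wrapper around real file operations; production implementation."""
    def read(self, path):
        with open(path, "rb") as f:
            return f.read()
    def write(self, path, data):
        with open(path, "wb") as f:
            f.write(data)

class FakeFileSystem(FileSystem):
    """In-memory stand-in used by tests; no disk access at all."""
    def __init__(self):
        self.files = {}
    def read(self, path):
        return self.files[path]
    def write(self, path, data):
        self.files[path] = data

class AttachmentModule:
    # The module under test receives the file system in a parameter,
    # so its tests exercise only its own logic, not the OS.
    def __init__(self, fs):
        self.fs = fs
    def copy_attachment(self, src, dst):
        self.fs.write(dst, self.fs.read(src))

# A simple, fast test using the fake:
fs = FakeFileSystem()
fs.write("/in/scan.pdf", b"%PDF-fake")
AttachmentModule(fs).copy_attachment("/in/scan.pdf", "/card/scan.pdf")
assert fs.files["/card/scan.pdf"] == b"%PDF-fake"
```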

petuhov_k, 2012-07-06
@petuhov_k

If your module is just a thin layer over DV, then there is nothing to test in it; that's the concern of the "guys from St. Petersburg". If your module contains some logic of its own, then that logic should be tested.
For example, before attaching a file to a card you check its size and format. Say you don't want users putting full-color scans or videos into documents (we've had that happen too). Then you write a "layer" over DV so that it can be replaced for the duration of the tests instead of calling DV itself directly. And then you check the various use cases: 1) given a black-and-white BMP, the call to the "layer" to add the file should go through; 2) given a 200 MB AVI, no call reaches DV, and an exception is thrown or the corresponding code is returned, and so on.
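Those two cases can be sketched as follows (Python; the fake layer, the extension whitelist and the size limit are all invented for illustration):

```python
class FakeDocsVisionLayer:
    """Test double for the 'layer' over DV; records calls instead of making them."""
    def __init__(self):
        self.attached = []
    def attach(self, name, size):
        self.attached.append(name)

ALLOWED = {".bmp", ".pdf", ".docx"}
MAX_SIZE = 50 * 1024 * 1024  # 50 MB cap; the real limit is a policy decision

def attach_with_checks(layer, name, size):
    ext = name[name.rfind("."):].lower()
    if ext not in ALLOWED or size > MAX_SIZE:
        raise ValueError("file rejected: %s (%d bytes)" % (name, size))
    layer.attach(name, size)

layer = FakeDocsVisionLayer()
attach_with_checks(layer, "page.bmp", 10000)           # case 1: goes through
assert layer.attached == ["page.bmp"]
try:
    attach_with_checks(layer, "video.avi", 200 * 1024 * 1024)  # case 2: rejected
except ValueError:
    pass
assert layer.attached == ["page.bmp"]                  # no call reached the layer
```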

AM5800, 2012-07-08
@AM5800

I would advise you to start by writing tests for what has already broken. For example, after a testing pass the tester reports that clicking the "Create Document" button throws an exception. OK. Write a test that reproduces this. Only then fix the exception.
Make it a rule: before rushing headlong to fix such errors, first write the test. Over time an understanding will come of what to test and how, and most importantly, after heavy torment in the spirit of "Daaamn, how do you write a test for THIS?", an understanding of how to write code so that it can be tested.
Plenty of examples have already been given here of how to write tests for different situations: "if you have this and that, the test should look like this." But the best generator of examples is life itself. Write tests. Learn from your mistakes. As usual.
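A sketch of that "test first, fix second" workflow (Python; create_document and the "empty title" bug are invented for illustration):

```python
# Step 1: a test that reproduces the reported bug (empty title crashed the app).
# Written and watched failing BEFORE touching the production code.
def test_create_document_with_empty_title():
    doc = create_document("")
    assert doc["title"] == "Untitled"

# Step 2: the fix, written only after the failing test existed.
def create_document(title):
    return {"title": title if title else "Untitled"}

# The reproduction test now doubles as a permanent regression test.
test_create_document_with_empty_title()
```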
I also strongly recommend the automated build and test systems described in Wott's answer. For example, after a push to the VCS, a dedicated server builds the project and runs all the tests. For me personally the main advantage is that there is almost no need to run the tests on my own machine; after all, they can take several hours.
I can also recommend test-coverage analysis tools. Besides coverage itself, such tools can often compute other useful metrics, for example the CRAP metric. Sorting by a metric like that helps you discover complex, untested functions; those are worth covering with unit tests first.

ganouver, 2012-07-13
@ganouver

I went through this myself and asked myself the same questions.
And then something clicked and everything fell into place.
I wrote a post about it, and the comments there contain answers to the very questions you are asking.
