Software testing
YoungSkipper, 2013-09-11 20:32:17

Two sets of tests for debug and release builds?

There is code we would like to cover with tests. The peculiarity of the application is that when some invalid situation occurs in the debug build (used by developers and internal testers), we prefer to get a crash (and in most cases the crash report will reach the developer with enough information to understand the problem).

But in the release build (which is tested by external testers and goes live), any application behavior (an invalid state, visual glitches, even a freeze) is better than a crash.

As a result, the code contains, on the one hand, assertion checks for invalid situations and, on the other hand, checks of return values, validation of input parameters, try/catch where needed, and so on; in other words, in any case we try to save the situation.


An exaggerated example: a function getBotTplById(long id) that returns a pointer to some data structure from a collection. If it is called with an id for something that has not yet been added to the content, in debug an assertion fires and the game crashes. In release we simply return a null pointer, which the caller then has to check.
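
For illustration, a minimal sketch of what such a function could look like, assuming a standard assert that is compiled out by NDEBUG in release (the collection type and the storage variable are invented here; only getBotTplById comes from the question):

#include <cassert>
#include <unordered_map>

struct BotTpl { /* bot template data */ };

// Hypothetical storage for the templates loaded from content.
std::unordered_map<long, BotTpl> g_botTpls;

// Debug: the assert fires and the game crashes, producing a crash report.
// Release: the assert is compiled out (NDEBUG) and we return a null pointer,
// which every caller is expected to check.
BotTpl* getBotTplById(long id)
{
    auto it = g_botTpls.find(id);
    assert(it != g_botTpls.end() && "getBotTplById: unknown id");
    if (it == g_botTpls.end())
        return nullptr;
    return &it->second;
}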

Clearly, there can be situations where the assert was written but the returned data is not checked everywhere, and, conversely, situations where the checks were written but someone forgot to insert the assert.

Now we want to write a test for this function: pass it a deliberately invalid id and check that it is handled properly.

How should we write it?

#ifdef DEBUG
// debug: the assert inside someFunc is expected to fire (assuming it throws)
ASSERT_ANY_THROW(someFunc(notValidId));
ASSERT_NO_THROW(someFunc(validId));
#else
// release: the function must survive and report the error via its return value
ASSERT_EQ(someFunc(notValidId), nullptr);
ASSERT_NE(someFunc(validId), nullptr);
#endif

Or like this?

ASSERT_ANY_THROW(ASSERT_EQ(someFunc(notValidId), nullptr));
ASSERT_NO_THROW(ASSERT_NE(someFunc(validId), nullptr));

If that is even technically possible.

In any case, maintaining two sets of tests is quite a hassle. Or should we not bother and run the tests only on the debug build? Or, on the contrary, only on the release build?

What do you think?


4 answers
spiritms, 2013-09-12
@spiritms

I can share our experience. Our test system takes a build as input plus a parameter saying whether it is a release or a debug build. A set of test cases is then run on that build, and the output data, crashes, and logs are analyzed (the test code is the same for release and debug). So, with identical test-system code, we simply get slightly different reports for different builds (which, by the way, are useful to compare against each other later if something goes wrong).

I also want to point out that the "debug" build is not just the build the developers use: it is a specially configured build, set up so that asserts, exceptions, and other extended information are dumped to a log that is convenient to analyze automatically afterwards when generating the report. So the "debug" build is in fact a third kind of build, but it differs from the ordinary one only by a single .h file with the assert and exception settings. This whole workflow is wired into the continuous integration system and runs automatically. It is really not that hard to implement, and even easier to maintain.
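
Purely as an illustration of the "one .h file with the assert settings" idea, a sketch of such a header; all names and the TEST_BUILD flag are made up, not taken from the answer:

// test_config.h - the single header that differs between the ordinary build
// and the specially configured "debug"/test build described above.
#pragma once
#include <cstdio>

#ifdef TEST_BUILD   // assumed to be defined by the CI system for the test build
  // Failed checks and other extended information go to a log that the
  // test system parses automatically when it builds its report.
  #define CHECK(cond) \
      do { if (!(cond)) \
          std::fprintf(stderr, "CHECK failed: %s (%s:%d)\n", #cond, __FILE__, __LINE__); \
      } while (0)
#else
  // Ordinary build: fall back to the project's normal assert behaviour.
  #include <cassert>
  #define CHECK(cond) assert(cond)
#endif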

abby, 2013-09-11
@abby

In my opinion, the behavior of the program should be the same regardless of debug or release mode. In that light, I believe explicit checks should be everywhere. Asserts are at your discretion; they only help you notice a wrong state in time. The tests should also be identical, without #ifdef DEBUG; for that you can use your own assertions, and assert.h, as a rule, is written specifically so that this can be done.
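
If, as suggested here, the function reports the error the same way in both builds (say, it always returns a null pointer for an unknown id), the test needs no preprocessor branch at all. A sketch in Google Test style, where notValidId and validId are the placeholders from the question:

#include <gtest/gtest.h>

// One test body, identical for debug and release builds.
TEST(GetBotTplByIdTest, HandlesUnknownAndKnownIds)
{
    EXPECT_EQ(getBotTplById(notValidId), nullptr);  // invalid id: null pointer
    EXPECT_NE(getBotTplById(validId), nullptr);     // valid id: real template
}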

abby, 2013-09-12
@abby

Maybe it is not worth testing that the application should crash at all? For unit tests, create assertions that never throw exceptions but instead write, for example, to a special log. I understand that sometimes a bug is very difficult to reproduce and a log entry is not enough. For such cases ("complex" integration testing with random data, and testers who analyze additional information in the debug version), you can use an assertion implementation that crashes the application, so that developers can later identify the cause of the crash from the dump.
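
One possible way to get both behaviours from a single assertion macro is a replaceable failure handler; everything below, names included, is an invented sketch rather than code from the answer:

#include <cstdio>
#include <cstdlib>
#include <functional>

// Called when an assertion fails; the default implementation crashes so that
// a dump is produced for the developers to analyze.
inline std::function<void(const char*)> g_onAssertFail =
    [](const char* msg) { std::fprintf(stderr, "%s\n", msg); std::abort(); };

#define MY_ASSERT(cond) \
    do { if (!(cond)) g_onAssertFail("assertion failed: " #cond); } while (0)

// In the unit-test binary the handler is swapped for one that only logs:
//   g_onAssertFail = [](const char* msg) { /* write msg to the special log */ };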

Nikolai Turnaviotov, 2013-09-14
@foxmuldercp

It seems to me that the debug version should differ from the release not in the tests but in how much it logs for every little thing, depending on the log level; the tests themselves should be the same, apart from, say, whether or not to display a "test fps" counter.
It should not matter to the tests whether it is a release or a debug build: if a test fails, that is a reason to argue about where the defect is, and if it does not fail, everything is OK. By the way, Visual Studio has an interesting setting: do not start the build in TFS if the code does not compile correctly.
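
A tiny sketch of the "same code, different log verbosity" idea; the macro and the threshold constant are assumptions, not from the answer:

#include <cstdio>

enum class LogLevel { Error = 0, Info = 1, Trace = 2 };

#ifdef DEBUG
constexpr LogLevel kLogThreshold = LogLevel::Trace;  // debug: log every little thing
#else
constexpr LogLevel kLogThreshold = LogLevel::Error;  // release: errors only
#endif

#define LOG(level, msg) \
    do { if ((level) <= kLogThreshold) std::puts(msg); } while (0)

// Usage: LOG(LogLevel::Trace, "entering getBotTplById");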
