Python
Maxim Vasiliev, 2013-03-07 08:01:10

How to test an asynchronous multi-headed dragon?

I wrote an asynchronous server with several heads:
ORM (django), frontend (django, websocket, jsonrpc), frontend API, homemade cron, big head, small head, backend api, entropy source (asterisk).
The heads communicate with each other via direct method calls, deferreds/callbacks, and observable/events.
Each component's logic is defined by its behavior and its reaction to the various situations signaled through events.
State changes are also signaled through events.
All possible modes of behavior are concentrated in one central component.
The control flow of direct calls is more or less linear: API -> big head -> small head -> backend -> asterisk.
The flow of events is mostly linear as well, in the opposite direction.
The overall behavior logic is organized into a bunch of micro-workflows (call processing) that are relatively loosely coupled.
The question is how to test such a design?
What are the general methodologies for such cases?
How to divide is clear enough, but how to conquer is not.
I can simulate the main source of entropy and asynchrony (asterisk) fairly reliably with a talking pig, so that it generates predictable input data.
But I have no idea at all what to do with everything else, especially the time-dependent processes. And it's scary.
I'm only trained in unit testing, which is completely beside the point here.
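
For concreteness, this is roughly how I imagine scripting the talking pig: a stand-in for asterisk that replays a fixed event sequence on Twisted's fake clock (task.Clock), so a test can advance time by hand. All class and event names here are invented:

from twisted.internet.task import Clock

class FakeAsterisk(object):
    """Replays a predetermined script of (delay, event, payload) tuples."""
    def __init__(self, clock, script):
        self.listeners = []
        for delay, event, payload in script:
            clock.callLater(delay, self._fire, event, payload)

    def subscribe(self, listener):
        self.listeners.append(listener)

    def _fire(self, event, payload):
        for listener in self.listeners:
            listener(event, payload)

# In a test: drive time by hand instead of waiting for real delays.
clock = Clock()
pig = FakeAsterisk(clock, [(0.0, "incoming_call", {"from": "101"}),
                           (1.5, "hangup", {"cause": "normal"})])
seen = []
pig.subscribe(lambda event, payload: seen.append(event))
clock.advance(2.0)  # both scripted events have fired by now
assert seen == ["incoming_call", "hangup"]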


3 answers
yeg, 2013-03-07

The question is not formulated very clearly: it is not obvious what exactly makes all of this hard to test, or why the time-dependent processes are a problem here. I will venture an equally vague plan of action (no methodologies, just a rough sketch):
1. Ideally, write detailed use cases and use them to regularly check that the business logic works correctly.
2. Describe every interface the system has.
3. For each interface, write stubs that return correct values but do nothing.
4. Test the interfaces of each component separately against those stubs (see the sketch right after this list).
5. Test each workflow against its use case. Where there are timing gaps, you can insert artificial delays, preferably random ones, but make sure the chosen values are written to the logs.
6. Where possible, write automated tests for all interfaces and for the end-to-end scenarios, and run them after every revision. Plus smoke tests.
7. Do some creative testing on the theme of "how else can we bring it down?"
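
For points 3 and 4, something as simple as this is usually enough. A minimal sketch, assuming a backend interface with an originate_call method; the BackendStub and BigHead names are hypothetical, substitute your own components:

class BackendStub(object):
    """Returns canned, correct-looking values and records what was called."""
    def __init__(self):
        self.calls = []

    def originate_call(self, number):
        # Hypothetical backend method: pretend the call was placed.
        self.calls.append(("originate_call", number))
        return {"status": "ok", "call_id": "fake-1"}

def test_big_head_originates_call():
    backend = BackendStub()
    head = BigHead(backend=backend)  # your real component under test
    head.handle_api_request({"action": "call", "number": "101"})
    assert backend.calls == [("originate_call", "101")]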

yeg, 2013-03-07

If the setup allows, write a log record of the appropriate type for each event (dialed, beeped, made a sad face), and then go through the logs with a script checking that the sequence of events is correct.
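
A crude checker for such a log might look like this (the log line format and the expected sequence are made up, substitute your own):

import re

# Expected sequence of events for one call; adjust to your workflow.
EXPECTED = ["incoming_call", "answered", "hangup"]
# Assumed log line format: "... EVENT call=<id> type=<event> ..."
LINE = re.compile(r"EVENT call=(?P<call>\S+) type=(?P<event>\S+)")

def check_log(path):
    per_call = {}
    with open(path) as log:
        for line in log:
            m = LINE.search(line)
            if m:
                per_call.setdefault(m.group("call"), []).append(m.group("event"))
    # Any call whose event sequence differs from EXPECTED is suspicious.
    return {call: events for call, events in per_call.items() if events != EXPECTED}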

m08pvv, 2013-03-07

If you want to check the interaction model itself for correctness, you can (simplifying it down to the required minimum) describe your system in a dedicated modelling language and check the resulting graph for problems with LTSA (Labelled Transition System Analyser).
