Agile
vadik_kmv, 2018-02-10 12:01:55

How to evaluate autotests in story points?

Colleagues, hello!
I write the autotests on a project myself. The question came up that I'm probably underestimating my tasks.
But there are problems here:
1) Suppose an autotest is large, with many checks of interface elements, but none of them are complicated. Should this task be estimated at 1 SP, or maybe at 5 or more? What can I take as a baseline, some average "standard" autotest?
2) Every sprint, a lot of autotests from previous sprints break. The functionality changes on the fly, and I spend a huge amount of time updating them, very often to the detriment of current tasks. What should I do here (discuss with the analysts the need to select stories for automation more carefully)?
I would be glad for any advice on estimating my tasks.


1 answer
kn0ckn0ck, 2018-02-10
@kn0ckn0ck

1. SP is a ruler for measuring complexity: something very simple = 1, something complex = 13. It's a relative scale, so every team has its own. Estimates in SP are empirical, that is, based on previous experience. If it's unclear how to estimate something, break it into pieces and estimate each piece separately. For example, a large autotest with thirty simple UI checks can be split into three groups of ten checks, with each group estimated on its own.
2. Developers have the same problem; discuss it with them. It's called "technical debt". Every change in functionality requires rewriting something previously written. This extra work steals time and reduces the team's velocity. Obviously, it needs to be dealt with.
I know of two approaches that can be applied here:
a) Build the development incrementally (this is the same as choosing stories for automation more carefully, yes): think each story through more deeply and ship functionality consistently, so that the next sprints don't break much of what was done in the previous ones. This is teamwork - a compromise can always be found; the main thing is that everyone understands that every decision has a price.
b) Continuously improve the design (of both code and tests) so that new stories are not expensive to maintain. This one is entirely in the hands of the developer/tester - make the design flexible. A banal example: in autotests, don't bind to element captions or to their location inside the UI tree, but only to the IDs of the interface elements.
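To illustrate the last point, here is a minimal sketch (the page data and field names are hypothetical, not from the question): the same button is looked up first by its visible caption, then by its ID. After a redesign renames and moves the button, the caption-based locator silently stops finding it, while the ID-based one still works.

```python
# The same button before and after a redesign: its caption and position
# in the layout change, but its id stays stable.
page_v1 = [
    {"id": "submit-btn", "text": "Send",    "path": "/form/div[1]/button"},
    {"id": "cancel-btn", "text": "Cancel",  "path": "/form/div[2]/button"},
]
page_v2 = [  # after the redesign: labels renamed, elements reordered
    {"id": "cancel-btn", "text": "Discard", "path": "/form/div[1]/button"},
    {"id": "submit-btn", "text": "Submit",  "path": "/form/div[2]/button"},
]

def find_by_text(page, text):
    """Brittle locator: depends on the visible caption."""
    return next((e for e in page if e["text"] == text), None)

def find_by_id(page, element_id):
    """Stable locator: depends only on the element id."""
    return next((e for e in page if e["id"] == element_id), None)

# The caption-based locator works on v1 but breaks after the redesign:
assert find_by_text(page_v1, "Send") is not None
assert find_by_text(page_v2, "Send") is None   # the old autotest fails here

# The id-based locator survives the redesign unchanged:
assert find_by_id(page_v1, "submit-btn") is not None
assert find_by_id(page_v2, "submit-btn") is not None
```

The same reasoning applies to real UI frameworks: a locator tied to an XPath position or a label has to be updated every time the layout or wording changes, while an ID-based locator only breaks when the element itself is removed.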
