OOP
frontjss, 2020-03-31 21:02:37

What is the law of leaky abstractions?

Please explain clearly what it means.


1 answer
Victor Bomberow, 2020-03-31
@majstar_Zubr

Any abstraction may include part or all of the definition of another abstraction.
There are some entities.
They interact in some environment in a certain way.
Let's capture these rules in a specification.
The specification describes the interactions in some language, but what is important is this: each entity is associated with a certain abstraction, and so is each interaction. These rules can perhaps be generalized, so new concepts and abstractions emerge that capture whole classes of entities or classes of interactions (strategies, command templates). In the end, there are fewer abstractions than there are entities.
This is convenient: even if the number of entities grows, we can still describe all the interactions without problems.
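The point that a handful of abstractions can cover many entities can be sketched in code. A hypothetical Python example (the names `Shape`, `Square`, `Circle`, and `total_area` are mine, not from the answer):

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    """One abstraction standing in for many concrete entities."""
    @abstractmethod
    def area(self) -> float: ...

class Square(Shape):
    def __init__(self, side: float):
        self.side = side
    def area(self) -> float:
        return self.side ** 2

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius
    def area(self) -> float:
        return 3.14159 * self.radius ** 2

# One interaction, written once against the abstraction,
# works for any number of concrete entities.
def total_area(shapes: list) -> float:
    return sum(s.area() for s in shapes)
```

Adding a new entity (say, `Triangle`) requires no change to the interaction: the specification was written in terms of the abstraction, not the entities.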
Now suppose we want to solve the inverse problem: we want to specify a conceptually new interaction. How do we achieve that if our set of abstractions is tied to specific entities?
Here everything depends on the specification's terminology, on how many abstractions have already been described, and on their interactions.
The first option: we may be able to build a composite out of abstractions that are already defined. In that case we get an abstraction of a higher level (maybe several), which in turn gives rise to new interactions whose description necessarily includes our composite. Making changes like this is relatively easy, because it is a simple build-up. For example, this is how one gets from atomic operations to the concept of a transaction.
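The atomic-operations-to-transaction example mentioned above can be sketched as follows. A naive Python illustration; `debit`, `credit`, and `transfer` are hypothetical names, and the rollback is deliberately simplistic:

```python
# Atomic operations: the existing, lower-level abstractions.
def debit(accounts: dict, name: str, amount: int) -> None:
    if accounts[name] < amount:
        raise ValueError("insufficient funds")
    accounts[name] -= amount

def credit(accounts: dict, name: str, amount: int) -> None:
    accounts[name] += amount

# Composite: a "transaction" built purely out of the existing
# operations -- a higher-level abstraction, no new primitives needed.
def transfer(accounts: dict, src: str, dst: str, amount: int) -> None:
    snapshot = dict(accounts)      # naive rollback point
    try:
        debit(accounts, src, amount)
        credit(accounts, dst, amount)
    except Exception:
        accounts.clear()
        accounts.update(snapshot)  # restore state on failure
        raise
```

Nothing below the composite had to change; the new concept is pure build-up over the old vocabulary.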
Option two: we cannot build a composite, because the specification simply does not have enough concepts. We need an entity that in one case would act as one abstraction and in another case as a different one, or we need partial behavior taken from several different entities. This matters because we describe interactions through a finite set of terms, and until this task came up those terms were exact synonyms for entities. Now, to solve the problem, we have to introduce an intermediate mapping layer between entities on one side and concepts (abstractions) on the other. In effect, we have to invent and describe not just a new entity but a new abstraction.
When we start describing that new entity, we may discover that we lack the terms for it, and that this forces us into another, lower level of description: we begin to decompose our entities into sets of lower-level abstractions, describe the interactions of those low-level abstractions, produce generalizations, and build up low-level terminology relative to the original entities, all so that we can assemble a composite that sits at the same level of abstraction as the original entities. This is a conditionally bad option, because the entity's encapsulation has leaked, and we had to dive down to a lower level of abstraction.
So the law itself says that you can always come up with a problem whose rational solution requires going down a level below what the current set of abstractions allows. This is logical, because abstractions are always secondary, just a by-product; only the interactions matter.
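A minimal sketch of such a leak (a hypothetical example, not from the answer): a wrapper that hides a serialization format still forces the caller down to that format the moment something goes wrong, because the error that escapes belongs to the lower level.

```python
import json

class Store:
    """A 'simple' key-value abstraction that hides the JSON layer below it."""
    def __init__(self, text: str = "{}"):
        self._data = json.loads(text)
    def get(self, key: str):
        return self._data[key]

# The abstraction promises "give me text, get a store", but with
# malformed input the lower level leaks through: the caller must
# know about json.JSONDecodeError, a term from below the abstraction.
def load_store(text: str):
    try:
        return Store(text)
    except json.JSONDecodeError:
        return None  # caller had to reason at the JSON level
```

To handle this failure properly, the caller has to learn lower-level terminology (`json.JSONDecodeError`, line and column numbers in the raw text) that the `Store` abstraction was supposed to hide.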
The corollary of the law is that an ideal API or ideal library only ever grows by composites into higher abstractions, never has deprecated methods, never has duplicate analogues, and never has backward-compatibility issues.
