.NET
Ruslan Fedoseenko, 2020-08-23 15:24:15

Is it correct to copy data between microservices?

Hello.
Our project uses a microservice architecture (at least we think it does). There is an application for managing certain entities; let's take a list of movies as an example. The applications communicate both synchronously (HTTP, RPC) and asynchronously via events and commands.
Let's say we have microservices with the following responsibilities:

  • Movie management API
  • Payment
  • Video streaming
  • Recommendations

Each microservice needs different information about a movie.
Currently, when a movie is created, modified, or deleted, an event is generated and sent through the message broker to the other microservices. Each of them takes the data it needs, saves it to its local database, and works with it. This approach invites a number of problems, such as data inconsistency.
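To make the current setup concrete, here is a minimal sketch of an event consumer in one of the services (the event shape, field names, and the local SQLite store are all assumptions, not your actual code). The key point is that the handler keeps only the fields this service needs and is idempotent, so redelivered events do not corrupt the local copy:

```python
import json
import sqlite3

# Hypothetical local store for one consumer (e.g. the recommendations
# service): it keeps only the movie fields it needs, not the full record.
def init_store(conn: sqlite3.Connection) -> None:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS movies (id TEXT PRIMARY KEY, title TEXT, genres TEXT)"
    )

def handle_movie_event(conn: sqlite3.Connection, raw_event: str) -> None:
    """Apply a MovieCreated/MovieUpdated/MovieDeleted event to the local copy."""
    event = json.loads(raw_event)
    if event["type"] == "MovieDeleted":
        conn.execute("DELETE FROM movies WHERE id = ?", (event["id"],))
    else:
        # Upsert makes the handler idempotent: a redelivered event is harmless.
        conn.execute(
            "INSERT INTO movies (id, title, genres) VALUES (?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET title = excluded.title, genres = excluded.genres",
            (event["id"], event["title"], ",".join(event["genres"])),
        )
    conn.commit()
```

Idempotent handlers remove one class of inconsistency (duplicate delivery); ordering and lost messages still have to be handled by the broker or by replaying events.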

One option I considered is putting the movie information in a cache (Redis, Memcached) and reading it from there. The problem is the loss of flexibility: whenever the structure of the stored movie data changes, every consumer of that data has to be updated.
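The cache option usually takes the cache-aside form sketched below (a plain dict stands in for Redis/Memcached, and `load_movie` is a hypothetical call to the movie management API). The flexibility problem from the question is visible here: every consumer parses the same serialized structure, so changing that structure touches all of them:

```python
import json
from typing import Callable

# Cache-aside read: try the cache first, fall back to the source of truth.
# `cache` is a plain dict standing in for Redis/Memcached; `load_movie`
# represents a call to the movie management API.
def get_movie(cache: dict, movie_id: str, load_movie: Callable[[str], dict]) -> dict:
    cached = cache.get(movie_id)
    if cached is not None:
        # Every consumer parses this shared serialized structure, which is
        # exactly why changing the stored format forces them all to update.
        return json.loads(cached)
    movie = load_movie(movie_id)
    cache[movie_id] = json.dumps(movie)
    return movie
```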

The third option I see is to query the movie management API synchronously.
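The main cost of the synchronous option is coupling to the movie service's availability, so a real client needs timeouts and retries. A small sketch (the URL scheme is hypothetical, and the transport is injected so the retry logic can be shown without a network):

```python
import time
from typing import Callable, Optional

# Synchronous option: call the movie management API directly. A real client
# would use HTTP; `transport` is injected here so retries can be demonstrated.
def fetch_movie(movie_id: str, transport: Callable[[str], dict],
                retries: int = 3, backoff: float = 0.1) -> dict:
    url = f"/api/movies/{movie_id}"  # illustrative path
    last_error: Optional[Exception] = None
    for attempt in range(retries):
        try:
            return transport(url)
        except ConnectionError as exc:
            # The price of the synchronous approach: the caller must cope
            # with the movie service being slow or down.
            last_error = exc
            time.sleep(backoff * (2 ** attempt))
    raise last_error
```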

What other options are there for properly organizing this kind of data management?


3 answer(s)
Griboks, 2020-08-23
@Griboks

1. All data is stored in one database.
2. The database has a management layer with an API. When the schema changes, you change the API internals, but not its external interface.
3. All data consumers and producers use this common API. They can use it flexibly, for example through GraphQL. (Personally, I found GraphQL not flexible enough and wrote my own.) If a microservice changes, its query is rewritten, not the API.
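Point 2 above boils down to keeping the external contract stable while the internal schema moves. A tiny illustration (all table, column, and field names are invented for the example):

```python
# One stable external interface over a database whose internal schema may
# change. When the schema changes, only the mapping inside the API layer is
# updated; consumers keep the same contract.

INTERNAL_ROWS = {
    # Imagine an internal schema change renamed "title" to "name"
    # and split the release year into its own column.
    "m1": {"name": "Alien", "release_year": 1979},
}

def get_movie(movie_id: str) -> dict:
    row = INTERNAL_ROWS[movie_id]
    # The external contract stays {"id", "title", "year"} regardless
    # of the internal column names.
    return {"id": movie_id, "title": row["name"], "year": row["release_year"]}
```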

Sanes, 2020-08-23
@Sanes

There should be a single database for writes per entity. For reads, you can set up replication. With this scheme, each service can work relatively independently.
A news site, for example, might have:

  1. Users
  2. Articles
  3. Comments
  4. Banners

Any of these lists can be split further into separate read/write services.
The goal is that a failure of the master (write) server of any module is survived as painlessly as possible.
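The one-writer / replicated-readers scheme described above can be sketched as a small connection router (the "connections" here are plain stand-in values; in practice they would be database connections or DSNs):

```python
import random

# Route writes to the single master and spread reads over replicas,
# as in the one-database-per-entity scheme above.
class Router:
    def __init__(self, master, replicas):
        self.master = master      # single write endpoint for this entity
        self.replicas = replicas  # read-only copies

    def for_write(self):
        return self.master

    def for_read(self):
        # Fall back to the master when no replicas are available.
        return random.choice(self.replicas) if self.replicas else self.master
```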

Roman Mirilaczvili, 2020-08-24
@2ord

"Each micro-service needs different information about the movie."
It may be enough to send, through the message broker, all the information each service needs for its operation. Then a service does not have to request data from another one.
"They take the necessary data, save it to the local database and work with it. With this, a lot of problems are possible in the form of data inconsistency, etc."
It would be worth clarifying where the inconsistency and the other problems come from; without knowing the actual problem, it is hard to help.
The Event Sourcing pattern says that all application state changes should be represented as a sequence of events.
I don't know which message broker you are using, but to be able to reconstruct the entire sequence of changes in each microservice, the events must first be stored in some kind of central event log. Apache Kafka is well suited for this purpose.
If not Kafka, then you need a way to deliver the same event through several channels so that each microservice can receive all the information it needs.
Changes to the DBMS must be applied atomically to avoid data inconsistency.
Event Sourcing Pattern
https://martinfowler.com/eaaDev/EventSourcing.html
https://microservices.io/patterns/data/event-sourc...
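The core of the pattern can be sketched in a few lines: an append-only log is the source of truth, and any service rebuilds its state by replaying the log from the start (the role Kafka plays in the answer above). The event shapes are illustrative:

```python
# Minimal event-sourcing sketch: state is derived from an event log,
# never stored as the primary copy.

def apply(state: dict, event: dict) -> dict:
    kind = event["type"]
    if kind == "MovieCreated":
        state[event["id"]] = {"title": event["title"]}
    elif kind == "MovieRenamed":
        state[event["id"]]["title"] = event["title"]
    elif kind == "MovieDeleted":
        state.pop(event["id"], None)
    return state

def replay(log: list) -> dict:
    """Reconstruct the current state from the full event sequence."""
    state = {}
    for event in log:
        state = apply(state, event)
    return state
```

A new microservice (or one recovering from data loss) calls `replay` over the whole log instead of copying another service's database, which is what makes the central log approach attractive.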
