Software testing
Evgeny Nazarchuk, 2019-11-07 10:37:27

"Automation of deployment/testing" in Big Data?

Hello. I work in software testing and recently changed projects; the technology stack changed from .NET to Big Data :)
A lot of questions have come up.
Current stack (not an exact list, but the most important parts):
Hadoop, Spark, Hive, Cassandra, Kafka, REST services (reading from/writing to Kafka topics), Hue, and various other less important things.
Problem:
Very roughly speaking, each task requires creating ("roughly" because you can reuse entities that other testers, analysts, and developers already use):
- configuring Spark jobs (streaming jobs read Kafka topics and write to other Kafka topics, modifying the data; batch jobs read Kafka topics and write to Hadoop)
- creating topics in Kafka
- creating tables in Hive
- creating tables in Cassandra (jobs read topics and write to Cassandra)
- bringing up the REST service locally
Is it possible to wrap all of this in containers so that the jobs and other services for the desired branch can be deployed at the click of a button, without struggling every time with the job settings for the right topics and tables?
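Roughly what I imagine, as a minimal docker-compose sketch (the image names, ports, and the init service are only my assumptions; heavyweight parts like Hadoop/Hive/the Spark cluster would probably stay on a shared test environment rather than in per-branch containers):

```yaml
version: "3.8"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    depends_on: [zookeeper]
    ports: ["9092:9092"]
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  cassandra:
    image: cassandra:3.11
    ports: ["9042:9042"]
  init:
    # one-shot container: creates this branch's topics and tables,
    # e.g. by running the provisioning script sketched below
    build: ./init
    depends_on: [kafka, cassandra]
```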
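And a provisioning script that the one-shot init container could run, so topics and tables for a branch are created by script rather than by hand. This is a sketch assuming the kafka-python and cassandra-driver packages; the branch, topic, keyspace, and table names are made up for illustration:

```python
"""Branch provisioning sketch: creates Kafka topics and Cassandra tables."""
import os

from kafka.admin import KafkaAdminClient, NewTopic
from kafka.errors import TopicAlreadyExistsError
from cassandra.cluster import Cluster

# Branch name injected by CI, for example; also used as the keyspace name,
# so it must stay alphanumeric/underscore.
BRANCH = os.environ.get("BRANCH", "feature_123")


def create_topics():
    admin = KafkaAdminClient(bootstrap_servers="kafka:9092")
    topics = [
        NewTopic(name=f"{BRANCH}.{suffix}", num_partitions=3, replication_factor=1)
        for suffix in ("events.in", "events.out")
    ]
    try:
        admin.create_topics(topics)
    except TopicAlreadyExistsError:
        pass  # already provisioned for this branch: safe to re-run


def create_cassandra_tables():
    session = Cluster(["cassandra"]).connect()
    session.execute(
        f"CREATE KEYSPACE IF NOT EXISTS {BRANCH} "
        "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}"
    )
    session.execute(
        f"CREATE TABLE IF NOT EXISTS {BRANCH}.events "
        "(id uuid PRIMARY KEY, payload text)"
    )


if __name__ == "__main__":
    create_topics()
    create_cassandra_tables()
```

The same idea would extend to Hive tables (a beeline/HiveServer2 step) and to submitting the branch's Spark jobs, so the whole environment comes up from one command per branch.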
