JavaScript
kator, 2019-07-17 12:16:54

What are the pitfalls of running a torrent tracker parser?

I have a parser for popular (and not-so-popular) torrent trackers. I initially built it for myself as a way to practice Node.js, but now I'd like to share it. This raises a few related questions:

  1. For each client request, Node makes up to 20 requests to different trackers (something like async.parallel(parseFunctions)), and possibly more as new trackers are added over time. A single client request causes no problems, but how will Node handle 5-10-20 simultaneous client requests?
  2. What kind of hosting should I be looking at for this sort of load?
  3. Is there a risk of being blocked? I remember a similar project that was blocked for a while; it moved to a .me domain and now seems to work, though rather poorly. What determines whether the banhammer falls? Again, the site doesn't store anything and isn't a tracker itself.
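The fan-out described in question 1 can be sketched with plain promises. This is a minimal illustration, not the project's actual code: the tracker names and `parseTracker` helper are hypothetical, and the real HTTP requests are replaced with a stub. `Promise.allSettled` plays the role of `async.parallel` here, so one failing tracker doesn't sink the whole response.

```javascript
// Hypothetical sketch: fan one client request out to many tracker parsers.
// In the real parser each function would make an HTTP request; here it is simulated.
const parseTracker = (name) => async (query) => {
  return { tracker: name, results: [`${name}: ${query}`] };
};

// Illustrative tracker list, not the project's actual set.
const parseFunctions = ['rutracker', 'nnmclub', 'kinozal'].map(parseTracker);

async function search(query) {
  // allSettled instead of all: a down tracker yields a 'rejected' entry
  // instead of rejecting the combined promise.
  const settled = await Promise.allSettled(parseFunctions.map((fn) => fn(query)));
  return settled
    .filter((r) => r.status === 'fulfilled')
    .map((r) => r.value);
}
```

Each client request then costs one `search()` call; the per-tracker requests run concurrently on the event loop rather than sequentially.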


2 answer(s)
hzzzzl, 2019-07-17
@hzzzzl

1 - Node doesn't care at all: nothing runs truly "simultaneously" anyway, the event loop processes everything in turn; with 1000 requests instead of 10 you'll just wait a bit longer for the result. "5-10-20 simultaneous requests" is nothing at all.
3 - there is certainly a possibility ^_^
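On point 3, one common mitigation is to cap how many requests hit the trackers at once, so bursts of client requests don't hammer each site simultaneously. Below is a minimal sketch of a concurrency limiter (what libraries like p-limit provide ready-made); all names are illustrative, nothing here is from the project itself.

```javascript
// Minimal concurrency limiter sketch: at most `max` tasks run at once,
// the rest wait in a FIFO queue. Assumption: tasks are functions
// returning promises, as in the tracker-parser case.
function createLimiter(max) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= max || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task()
      .then(resolve, reject)
      .finally(() => {
        active--;
        next(); // start the next queued task, if any
      });
  };
  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}
```

Wrapping each tracker request in `limit(() => fetchTracker(...))` keeps the outgoing request rate bounded regardless of how many clients arrive at once.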

Alexey Skobkin, 2019-07-17
@skobkin

This doesn't quite answer your question, but if the goal is to index the contents of torrents, it may be more efficient to index the DHT instead, as magnetico and other projects do (there are Node.js implementations too).
If the goal is to parse the release descriptions, then yes, you do need the trackers themselves, but that brings a lot of problems.
