Communication protocols
fsfsfs32, 2020-09-27 19:16:25

Why do all modern protocols transmit small chunks?

Why, for example, when downloading a 1 GB file via HTTP or BitTorrent, don't they just send the whole gigabyte over the network to the client at once, instead of breaking it into pieces and dragging it out for hours? Okay, when it's something real-time like streaming or multiplayer, but why for static files? Because of network limitations? What limitations, exactly?


4 answer(s)
Alexander, 2020-09-27
@NeiroNx

Because if there is a loss or an error in one of the parts, only that part is downloaded again. Now imagine downloading 1 GB as a single piece: it reaches 60% and bang, an error, so you start again from 0; this time it reaches 89% and errors out again, and the download restarts from 0 once more. You have already transferred 1.5 GB of traffic, but you still haven't received the file.
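A minimal sketch of the idea in this answer (not code from the answer itself): download a file in fixed-size chunks via HTTP Range requests, so that a network error only forces a retry of the current chunk instead of restarting from zero. The URL and chunk size below are made up, and the server is assumed to support Range requests.

```python
import requests

URL = "https://example.com/big.iso"   # hypothetical file
CHUNK = 4 * 1024 * 1024               # 4 MiB per request
RETRIES = 5

# Total size, taken from the Content-Length header of a HEAD request.
total = int(requests.head(URL, allow_redirects=True).headers["Content-Length"])

with open("big.iso", "wb") as out:
    offset = 0
    while offset < total:
        end = min(offset + CHUNK, total) - 1
        for attempt in range(RETRIES):
            try:
                resp = requests.get(
                    URL, headers={"Range": f"bytes={offset}-{end}"}, timeout=30
                )
                resp.raise_for_status()
                out.write(resp.content)
                offset = end + 1
                break  # this chunk is done; only it would be retried on failure
            except requests.RequestException:
                if attempt == RETRIES - 1:
                    raise  # give up only after several failed attempts on one chunk
```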

Ivan Shumov, 2020-09-27
@inoise

Because that is how networks are built in principle, for one. And because it lets the transfer be controlled, for two. And because of network losses. Just for reference: packets get lost regularly, which is why TCP has retransmissions to guarantee data delivery. It would be a shame to download a multi-gigabyte file and get it corrupted, right? Then you'd have to start all over.
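To illustrate the "get it corrupted" point: TCP's checksums and retransmissions protect individual segments, but the end-to-end integrity of a large download is usually checked separately against a published digest. A small sketch, with a hypothetical file name and a placeholder SHA-256 value:

```python
import hashlib

# Hypothetical expected digest, as published next to the download link.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so multi-gigabyte files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of("big.iso") != EXPECTED_SHA256:
    print("File is corrupted - download it again")
```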

Alexey Kharchenko, 2020-09-27
@AVX

You phrased the question somewhat imprecisely.
All protocols send data in parts, because all information is discrete: bits and bytes. And TCP, UDP and the protocols above them transmit frames and packets because there are losses, delays, and other delights of the network.
If you mean the pieces you can see in torrents, they exist to distribute the download: you can take different pieces from different peers instead of waiting for the one peer who has the entire file to show up (and if everyone started downloading from him at once, they would simply saturate his whole channel).
But besides that there are SMB, DC, FTP and a bunch of others, and in some of them nothing is split at the application level, although resume (FTP, DC) is often supported (a sketch of resuming follows below). And while you can download a 10 GB file over SMB on a LAN without any problems, over the Internet it will be a broken file 90% of the time, and you will have to download it again.
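A hedged sketch of the resume feature mentioned for FTP: if a partial local copy already exists, the client continues from its current size by passing a REST offset instead of starting over. The host, credentials and file names here are made up.

```python
import os
from ftplib import FTP

HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"   # made-up server
REMOTE_NAME, LOCAL_NAME = "big.iso", "big.iso"

# Resume from however many bytes are already on disk, if any.
offset = os.path.getsize(LOCAL_NAME) if os.path.exists(LOCAL_NAME) else 0

with FTP(HOST) as ftp:
    ftp.login(USER, PASSWORD)
    with open(LOCAL_NAME, "ab") as out:          # append to the partial file
        # rest=offset makes the server start sending from that byte.
        ftp.retrbinary(f"RETR {REMOTE_NAME}", out.write, rest=offset)
```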

P.S. At work in the early 2000s we had one particularly unstable communication line (a 50 km overhead link), and the speed was something like 19 kbit/s. Getting a 5 MB program update onto a remote computer was not easy. We had to archive it with WinRAR, split into parts of at most 80-100 KB each, with redundant recovery information. We transferred all those parts and reassembled them on the other end; almost always some parts were corrupted, but the archive's redundancy saved us. Only later did more or less decent download managers appear and we started using them, but they could not verify integrity, and they certainly could not repair a file after errors.

Griboks, 2020-09-27
@Griboks

1. It's faster: parallel transmission.
2. It's more reliable: the probability of an error in each piece is reduced.
3. What protocols exactly? Almost all the protocols I use transfer the data whole; the splitting happens at the lower layers.
4. In any case, TCP/IP breaks any data into packets of roughly 1 KB (a small demonstration follows below).
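To illustrate point 4: whatever the application does, TCP delivers a byte stream in segments, so the receiver typically reads the data back in many pieces no matter how it was written. A toy local demonstration (the port number is arbitrary):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007      # arbitrary local test port
PAYLOAD = b"x" * (1 << 20)           # 1 MiB written with a single sendall()
ready = threading.Event()

def receiver():
    with socket.socket() as srv:     # TCP by default (AF_INET, SOCK_STREAM)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                  # the client may connect now
        conn, _ = srv.accept()
        with conn:
            received, reads = 0, 0
            while received < len(PAYLOAD):
                chunk = conn.recv(65536)
                if not chunk:
                    break
                received += len(chunk)
                reads += 1
            print(f"got {received} bytes back in {reads} separate reads")

t = threading.Thread(target=receiver)
t.start()
ready.wait()

with socket.socket() as cli:
    cli.connect((HOST, PORT))
    cli.sendall(PAYLOAD)             # one logical write; the stack segments it

t.join()
```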
