Why do all modern protocols transmit data in small chunks?
Why, for example, when downloading a 1 GB file via HTTP or BitTorrent, is the whole gigabyte not sent to the client in one go, instead of being split into pieces and dragged out for hours? Fine, when it's something real-time like streaming or multiplayer, but why for static files? Is it a network limitation? What limitation exactly?
Because if one of the parts is lost or corrupted, only that part is downloaded again. Now imagine downloading the whole gigabyte in one piece: the download reaches 60% and bang, an error, so you start again from zero; this time it reaches 89% and errors out again, and the download restarts from zero once more. You've already transferred 1.5 GB of traffic, but you still don't have the file.
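A rough sketch of this "retry only the broken piece" idea. The piece size, failure simulation, and function names here are all made up for illustration; a real client would be fetching over a socket rather than slicing a byte string:

```python
import random

CHUNK_SIZE = 256 * 1024  # hypothetical piece size: 256 KiB

def flaky_fetch(data, offset, size, fail_rate=0.3, rng=None):
    """Simulate fetching one piece over a lossy link; sometimes fails."""
    rng = rng or random
    if rng.random() < fail_rate:
        raise IOError("transfer error")
    return data[offset:offset + size]

def download_in_chunks(data, chunk_size=CHUNK_SIZE, rng=None):
    """Fetch piece by piece; on error, retry only the failed piece."""
    out = bytearray()
    offset = 0
    while offset < len(data):
        while True:
            try:
                out += flaky_fetch(data, offset, chunk_size, rng=rng)
                break      # this piece arrived intact, move on
            except IOError:
                continue   # retry just this piece, not the whole file
        offset += chunk_size
    return bytes(out)
```

Even with a 30% failure rate per piece, every failure costs at most one piece of wasted traffic, never the whole transfer.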
That's how networks are built in principle, for one. And it keeps the transfer controllable, for two. And there's network loss: for reference, packets get lost regularly, which is why TCP has retransmissions to guarantee delivery. It would be a shame to upload a multi-gigabyte file and receive it corrupted, right? Then you'd have to start all over.
You've phrased the question somewhat inaccurately.
All protocols send data in parts, because all information is discrete: bits and bytes. And TCP, UDP, and the protocols above them transmit in frames and packets, because the network has losses, delays, and other such delights.
If you mean the pieces you can observe in torrents, those exist to distribute the download: you can take different pieces from different peers instead of waiting for someone who has the entire file to appear (and even then, if everyone started downloading from him at once, they would simply saturate his entire channel).
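To make the "different pieces from different peers" point concrete, here is a toy piece-assignment sketch. Real BitTorrent clients use a rarest-first strategy, not round-robin, and the function name is invented for illustration:

```python
def assign_pieces(num_pieces, peers):
    """Plan which peer to request each piece from, round-robin.
    Returns {peer: [piece indices]}. Illustrative only: real clients
    pick the rarest pieces first and adapt to peer speed."""
    plan = {p: [] for p in peers}
    for i in range(num_pieces):
        plan[peers[i % len(peers)]].append(i)
    return plan
```

With two peers and five pieces, each peer serves roughly half the file in parallel, which is exactly why a swarm is faster than a single source.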
But besides that, there are SMB, DC, FTP, and plenty of others, and in some of them nothing is split at the application level at all, although resuming (FTP, DC) is often supported. And while you can download a 10 GB file over SMB on a LAN without any problems, over the Internet you'll get a corrupted file 90% of the time and have to download it all over again.
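One reason per-piece transfer helps with corruption: with a whole-file transfer you only discover damage at the end, while per-piece hashes (the approach BitTorrent takes) pinpoint exactly which pieces to re-download. A toy sketch with a deliberately tiny piece size and invented helper names:

```python
import hashlib

PIECE = 4  # unrealistically tiny piece size, just for illustration

def piece_hashes(data, piece=PIECE):
    """Hash each piece separately, like a torrent's metadata does."""
    return [hashlib.sha1(data[i:i + piece]).digest()
            for i in range(0, len(data), piece)]

def find_bad_pieces(received, expected_hashes, piece=PIECE):
    """Return indices of pieces whose hash doesn't match.
    Only these pieces need re-downloading, not the whole file."""
    return [i for i, h in enumerate(expected_hashes)
            if hashlib.sha1(received[i * piece:(i + 1) * piece]).digest() != h]
```

A protocol without this (plain SMB copy over a flaky link) can only tell you "the 10 GB file is broken somewhere", which means transferring all 10 GB again.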
1. It's faster - pieces can be transferred in parallel, including from multiple sources
2. It's more reliable - an error affects only a small piece, which can be re-sent cheaply
3. What protocols do you mean? Almost all the protocols I use transfer the data as a whole; the splitting happens at the lower layers
4. TCP/IP breaks any data into packets of roughly 1 KB in any case