Solid State Drives
koshelevatamila4741, 2021-09-13 18:30:02

Will the high-speed cache in the SSD work on the entire volume?

I've read about the cache, but it's still not clear to me. As I understand it, it is a temporary high-speed storage area that files are first written into, and then the controller copies the data from the cache to (some other memory?). For example, a Samsung 970 EVO (500 GB):
Dynamic SLC cache = 18 GB, and there is also a Static SLC cache = 4 GB (what's the difference?). Does that mean that when copying a file larger than 18 GB, the speed drops after those first 18 GB have been transferred? And if I copy two 10 GB files 10 minutes apart, will both be copied at maximum speed? Or will the cache fill up so that the speed stays lower from then on?

2 answers
rPman, 2021-09-13
@rPman

Each manufacturer's controller uses different algorithms, and all of them are closed, so it is doubtful you will get a definitive answer here.
A little is written about static and dynamic cache here, but it is not explained very clearly:
Types of SLC Cache
There are two types of SLC cache: static and dynamic. Whether an SSD uses static or dynamic cache depends on the firmware algorithm.
Static Cache. As its name implies, the size of the assigned area is fixed. The main advantage is that there is a guaranteed area allocated for SLC cache. This space is disabled only when the rest of the TLC is fully utilized. However, since only a specific area is assigned for SLC cache, such area will sustain more intensive reads and writes, resulting in higher P/E cycles, which could in turn affect the drive’s endurance.
Dynamic Cache. In contrast, dynamic cache refers to an area that is not fixed. The key advantage is that wear leveling is much more uniform across the entire drive. The disadvantage is that, due to its size flexibility, the cache size is not guaranteed.

A cache means that writes and reads go through this memory first: according to some internal algorithm, written data is moved from the cache to the main flash, and read data is either placed in the cache or bypasses it.
A simple example: you have a 10-gigabyte cache and decide to write 12 gigabytes. Say writing 1 gigabyte into the cache takes 1 second, while writing to the flash itself is 10 times slower. For the first 10 seconds you write the first 10 gigabytes at full speed, but as soon as the cache (which works as a write buffer) fills up, your speed drops, and the remaining 2 gigabytes take another 20 seconds. The drive slows your writes down because it has to wait until the next portion of data in the cache is flushed to the flash and space is freed for new data; in other words, it is held back not by the new data but by the old data still sitting in the cache. (That's not guaranteed, by the way; there are other, non-FIFO strategies for managing the cache.)
If you write 10 gigabytes to an idle drive, the whole write lands in the cache in 10 seconds, i.e. at full speed. Then, to be able to write another 10 gigabytes at maximum speed, you would need to write nothing for 100 seconds (in this example, that is how long it takes to completely flush the cache buffer to flash). If you wait only 10 seconds, only 1 gigabyte of the next write will go at maximum speed. A rough simulation of this timing model is sketched below.
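A minimal sketch of that timing model in Python, using the purely illustrative numbers above (a 10 GB cache filled at 1 GB/s, flash 10 times slower) and, like the worked example, ignoring the fact that a real controller drains the cache concurrently and uses smarter strategies:

# Simplified SLC write-buffer model with the illustrative numbers from the answer.
CACHE_SIZE_GB = 10.0      # size of the SLC write buffer
CACHE_SPEED_GBPS = 1.0    # write speed into the SLC cache
FLASH_SPEED_GBPS = 0.1    # write speed of the backing TLC flash (10x slower)

def burst_write_time(data_gb: float, free_cache_gb: float) -> float:
    """Time to write data_gb when free_cache_gb of the cache is empty."""
    cached_part = min(data_gb, free_cache_gb)       # lands in SLC at full speed
    overflow_part = data_gb - cached_part           # limited by the TLC flash speed
    return cached_part / CACHE_SPEED_GBPS + overflow_part / FLASH_SPEED_GBPS

def freed_after_idle(idle_seconds: float) -> float:
    """How much cache space the controller frees while the host stays idle."""
    return min(CACHE_SIZE_GB, idle_seconds * FLASH_SPEED_GBPS)

print(burst_write_time(12, CACHE_SIZE_GB))          # 12 GB burst -> 10 + 20 = 30 s
print(burst_write_time(10, CACHE_SIZE_GB))          # 10 GB into an empty cache -> 10 s
print(freed_after_idle(10))                         # after 10 s idle only 1 GB is free
print(burst_write_time(10, freed_after_idle(10)))   # next 10 GB burst -> 1 + 90 = 91 s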
And that is only the write cache; there is also a read cache, and the controller somehow has to divide the cache memory between different kinds of data. Which is better: evicting the data you frequently read so that your writes don't slow down, or the opposite, keeping your reads fast and letting writes hit the slow flash?
The best answer will come from your own tests with your own workload. Almost certainly, modern SSD controllers can adapt their cache eviction strategy to the current load... at the very least, they all perform beautifully in popular benchmarks ;)
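If you want to see where the cache runs out on your own drive, here is a crude do-it-yourself sketch of such a test; the file name and sizes are placeholders, adjust them to your SSD and free space, and os.fsync is used so the OS page cache does not mask the result:

import os
import time

TARGET = "testfile.bin"               # put this file on the SSD you want to test
CHUNK_SIZE = 1024 ** 3                # report throughput per 1 GB written
CHUNKS = 30                           # 30 GB total, larger than a typical SLC cache
block = os.urandom(64 * 1024 * 1024)  # 64 MB of incompressible data

with open(TARGET, "wb", buffering=0) as f:
    for i in range(CHUNKS):
        start = time.perf_counter()
        written = 0
        while written < CHUNK_SIZE:
            written += f.write(block)
        os.fsync(f.fileno())          # force the data out of the OS page cache
        elapsed = time.perf_counter() - start
        print(f"chunk {i + 1}: {written / elapsed / 1e6:.0f} MB/s")

os.remove(TARGET)

The per-chunk speed should stay high while the SLC cache absorbs the writes and drop once it is exhausted.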

Saboteur, 2021-09-13
@saboteur_kiev

No, you won't notice anything.
The cache isn't really there for copying large files at all: reading and writing large files is sequential I/O at linear speed, whereas caches shine with random access.
Suppose you are compiling a project: thousands of small files are read and processed, and for each one a small .obj file (from tens of bytes to a couple of megabytes) is created and written out. Then the linker runs through them all and assembles them into the main executable.
That is, several thousand operations are performed, each reading or writing a couple of kilobytes.
A fast cache can quickly absorb hundreds of such operations, then take its time and write them to the main memory in one long operation.
As a result, everything ends up written to the main memory as if it were one sequential operation; a toy sketch of this buffering idea follows below.
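A toy sketch of that coalescing idea in Python; nothing here is SSD-specific, a RAM buffer simply stands in for the fast cache, and the class name, threshold and sizes are invented for illustration:

import io
import os
import tempfile

class CoalescingWriter:
    """Absorb many tiny writes in RAM and flush them as one long sequential append."""

    def __init__(self, backing_path: str, flush_threshold: int = 1 << 20):
        self.backing_path = backing_path
        self.flush_threshold = flush_threshold   # flush once ~1 MB has accumulated
        self.buffer = io.BytesIO()

    def write(self, data: bytes) -> None:
        self.buffer.write(data)                  # fast: touches only the buffer
        if self.buffer.tell() >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        with open(self.backing_path, "ab") as f: # one long append instead of
            f.write(self.buffer.getvalue())      # thousands of tiny writes
        self.buffer = io.BytesIO()

if __name__ == "__main__":
    target = os.path.join(tempfile.gettempdir(), "coalesced.bin")
    writer = CoalescingWriter(target)
    for _ in range(5000):                        # simulate a build: thousands of
        writer.write(os.urandom(512))            # small .obj-sized chunks
    writer.flush()
    print(os.path.getsize(target), "bytes written in a few large flushes")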
Dynamic and static simply differ in how that space is handed out. The static cache is pre-cut into ready-made pieces, while the dynamic cache can carve out a piece per operation, whether a few kilobytes, 100 kilobytes or gigabytes, though it will be a little slower.
P.S. Ultimately, everything will most likely depend on the file system's cluster size.
Read the details if you need them: https://www.atpinc.com/blog/what-is-SLC-cache-diff...
But for linear copying of large files, the cache usually has little effect.
