Memory modules for servers (Registered, Unbuffered, ECC-avail)?
Hello,
I know the difference between Registered and Unbuffered, as well as what ECC is. The question is: how much does performance differ between Registered ECC and Unbuffered (ECC or non-ECC)?
We're choosing memory for an HP ProLiant Gen8 (350e) server, which will host AD DS, DNS, DFS, SMB file sharing, a 1C file server (a relic of the past, but alas, still relevant), user accounts, and RemoteApp. Right now it has two HP 647657-071 (Unbuffered ECC) modules installed. Naturally, the server runs 24/7.
Are there any comparison tests between them at all? I couldn't find any with a search — apparently I'm searching badly. I'd like to find the optimal solution: reliability with minimal loss of performance.
And what's the difference if the server supports only one type of memory?
Depending on the model, this is either RDIMM or UDIMM.
UPD: the answer is not entirely clear-cut — comparing individual modules, registered memory is slower; with several modules per channel, it comes out faster.
Based on guidance from Intel and internal testing, RDIMMs have better bandwidth when using more than one DIMM per memory channel (recall that Nehalem has up to 3 memory channels per socket). But, based on results from Intel, for a single DIMM per channel, UDIMMs produce approximately 0.5% better memory bandwidth than RDIMMs for the same processor frequency and memory frequency (and rank). For two DIMMs per channel, RDIMMs are about 8.7% faster than UDIMMs.
en.community.dell.com/techcenter/b/techcenter/arch...
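To put those quoted Intel figures side by side, here is a back-of-the-envelope comparison. The 0.5% and 8.7% deltas are from the quote above; the 10.6 GB/s per-channel baseline is an assumed DDR3-1333 figure, used only for illustration:

```python
# Rough RDIMM vs UDIMM per-channel bandwidth, using the relative
# figures quoted above (0.5% and 8.7%). The baseline is an assumption.
BASE_GBPS = 10.6  # assumed DDR3-1333 per-channel peak, illustrative only

# One DIMM per channel: UDIMM ~0.5% faster than RDIMM
udimm_1dpc = BASE_GBPS
rdimm_1dpc = BASE_GBPS / 1.005

# Two DIMMs per channel: RDIMM ~8.7% faster than UDIMM
rdimm_2dpc = BASE_GBPS  # assume RDIMM holds the rated speed
udimm_2dpc = BASE_GBPS / 1.087

print(f"1 DPC: UDIMM {udimm_1dpc:.2f} GB/s vs RDIMM {rdimm_1dpc:.2f} GB/s")
print(f"2 DPC: RDIMM {rdimm_2dpc:.2f} GB/s vs UDIMM {udimm_2dpc:.2f} GB/s")
```

In other words, with a single stick per channel the difference is noise; it only becomes visible when channels are populated with two or more modules.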
Are there any comparison tests between RDIMM and UDIMM regarding performance?
I know that in terms of speed, registered/buffered memory may lose out, because the register adds a one-cycle buffering delay. The question is: how much does it lose?
If the memory is PC3-10600 DDR3-1333, it will run at 1333 MT/s with a peak linear bandwidth of 10600 MB/s, and it doesn't matter whether it is ECC or not. That's what the standard specifies; the implementation is on the memory manufacturer's conscience.
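The 10600 MB/s figure follows directly from the transfer rate and the 64-bit data bus — a quick sanity check (plain standard arithmetic, nothing module-specific):

```python
# DDR3-1333 peak bandwidth: 1333 MT/s on a 64-bit (8-byte) data bus.
# ECC modules carry 8 extra check bits, but those add no data bandwidth.
transfers_per_sec = 1333e6   # MT/s
bus_bytes = 64 // 8          # 64-bit data bus = 8 bytes per transfer

peak_mb_s = transfers_per_sec * bus_bytes / 1e6
print(peak_mb_s)  # ~10664 MB/s, marketed as PC3-10600
```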
As for delays, you wrote it yourself: if there is a one-cycle buffering delay, then the latency of a single access will be about 1/17 worse if the memory has a latency of 17 cycles.
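Putting a number on that fraction (the 17-cycle figure is the one from the discussion, not a value from any spec sheet):

```python
# Extra register delay relative to total access latency.
base_cycles = 17   # illustrative latency figure from the discussion above
extra_cycles = 1   # one-cycle delay added by the register

penalty = extra_cycles / base_cycles
print(f"{penalty:.1%}")  # ~5.9% longer single-access latency
```

And that penalty only applies to latency-bound access patterns; streaming workloads are governed by the bandwidth figures above.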
You won't find this anywhere ready-made; you can only meditate over the AIDA cache and memory tests that turn up online (e.g. i59.tinypic.com/51czvd.jpg) — they show both linear speed and latency. The most important point: look at the processor's memory controller. If it is poor and can't sustain a linear speed above 17-20 GB/s, no overclocked modules will save you.
Two additions. First, I didn't check which modules your server supports — that's on your conscience, so to speak. Second, if the server runs Linux, look at how Linux folks measure memory, since I don't deal with Linux at all.
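For a crude DIY check that works the same on Linux or Windows, you can time a large buffer copy in Python. This is not a substitute for AIDA or proper tools like STREAM or sysbench — absolute numbers will be far below the hardware peak because of interpreter overhead and a single thread — but results are comparable between machines:

```python
import time

# Crude linear-bandwidth probe: time one full copy of a large buffer.
SIZE = 64 * 1024 * 1024  # 64 MiB, big enough to spill out of caches

src = bytearray(SIZE)
t0 = time.perf_counter()
dst = bytes(src)          # one full copy of the buffer
elapsed = time.perf_counter() - t0

print(f"{SIZE / elapsed / 1e9:.2f} GB/s (copy)")
```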
If your server only supports Unbuffered ECC, then that's the only thing you can install. Nothing will sag, especially with your almost idle services — what you have is practically a home server, with no highload in sight.