Server benchmark gives weird results, wtf?
Preamble:
I'm at the start of developing a site that is expected to handle high load, so I'm choosing between 2 languages: Node.js and Go.

Why Node.js:
- one thread per process, so the server will have to be parallelized under load; moreover, standard request balancing is not a good fit: user-session-sticky balancing is needed, because each process has its own memory
- even with session-sticky balancing, communication between processes is still necessary, and that means serialization/deserialization costs
Why Go:
+ native multithreading in a single process, hence shared memory
+ asynchrony via blocking goroutines, no callback/promise hell (ES7 async/await is still just a dream for me; generators are not optimized and need a library like co or bluebird to be usable)
- I haven't written anything serious in it yet, so development will be slower; implementing concurrent writes to shared memory will also slow development down
- duplication of code for the same kinds of tasks between the Go server and the JS client

Test machine: Ubuntu x64 14.04, AMD A8 2.8GHz 4 cores, 4GB RAM; not a clean system, a lot of extraneous stuff running plus a GUI.
In all tests, ab was run with concurrency 100.
1st test - single-threaded; Go was limited with runtime.GOMAXPROCS(1) in func init()
node.js - 4k rps / 25ms per request (hereinafter mspr)
Go - 9k rps / 11mspr
Excellent: Go wins in performance by 2-2.5x. When ab was run again against the already-running server, both improved slightly.
2nd test - 4 cores; Node was parallelized with the standard cluster module, so in fact it ran 5 processes (a master and 4 workers)
node.js - 3k rps / 30mspr - this is understandable: the test is not a great fit, since passing the file descriptor between processes takes time; on a real application with logic I expect the results to be better than in the single-threaded run
Go - to my surprise, performance in the multi-threaded run also dropped, to 7k rps / 15mspr
wtf? Go runs as a single process with shared memory and file descriptors (or am I missing something?), so why does its performance drop with multithreading?
1) Upgrade to 1.5.1.
2) Instead of the feeble ab, take at least https://github.com/wg/wrk. For example, I get 16567 rps with ab and 45519 rps with wrk on 1.5.1.
3) This test/code is synthetic and only shows that the 1.2 scheduler does not perform well on multi-core machines; on >= 1.5 the situation should be different. On 1.5.1 I get 39637 rps with 1 core, 45519 rps with 4, and 48112 rps if you don't touch GOMAXPROCS at all.
Before Go 1.5, runtime.GOMAXPROCS defaulted to 1, so a server had to raise it explicitly to use multiple cores. What version of Go do you have?