bernex, 2016-03-26 14:37:24

What should I choose for the algorithm: C++, C, or Go?

The gist of the future service: 700 million 64-bit uint values stored in RAM.
For each request, roughly 100,000 values will be selected, processed, and returned as the result.
Essentially it comes down to a set of bitwise operations and for loops.
Does C have any speed advantage over C++? My tests show similar results, and methods on structs are a big plus for C++ over C.
Go is slower and "eats" 3 times more memory. I liked it a lot, though! Its syntax is sometimes worse than C's. And in C I can also write simple code without memory management.
D is 4x slower than Go in my test, so there is no point in trying it. CPU time is very expensive here.
And the second question: is it actually feasible to run such for loops and bit operations under load, or does it scale poorly?
I am a beginner in C and C++ and will learn as I go, but I have 12 years of PHP + JS behind me.
Plus, C and C++ can be built as a Node.js module or a PHP extension, which is a huge advantage over Go and D.
And if C++ requires non-standard libraries, will the code be slower?
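
For scale: 700 million uint64 values occupy about 700e6 × 8 B ≈ 5.6 GB of RAM, and each request boils down to one linear pass with a bitwise test. A minimal Go sketch of that inner loop, just to make the shape of the work concrete (the mask value and the filter name are hypothetical placeholders, not the real predicate):

// Hypothetical mask standing in for the real bit predicate.
const mask uint64 = 0xFF00000000000000

// filter appends every value whose masked bits are all zero to out
// and returns the grown slice; a single pass over the data.
func filter(data []uint64, out []uint64) []uint64 {
	for _, v := range data {
		if v&mask == 0 { // the bitwise test stands in for the real selection
			out = append(out, v)
		}
	}
	return out
}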


3 answers
uvelichitel, 2016-03-26
@bernex

Go is natively concurrent and scales better out of the box. For example, for two cores, something like:

var Data []uint64           // 700 million 64-bit uint values
var Respond []uint64        // roughly 100,000 values per request
var res = make(chan uint64) // channel carrying the values that pass the filter

// TakeFromData is the filter: it takes a chunk of the data and the channel for good values.
func TakeFromData(DataPart []uint64, res chan<- uint64, wg *sync.WaitGroup) {
	defer wg.Done()
	for _, val := range DataPart {
		if Good(val) {
			res <- val // writes good values into the channel
		}
	}
}

var wg sync.WaitGroup
wg.Add(2)
go TakeFromData(Data[:len(Data)/2], res, &wg) // runs concurrently in a separate goroutine
go TakeFromData(Data[len(Data)/2:], res, &wg) // several instances
go func() { wg.Wait(); close(res) }()         // close the channel once both halves are filtered
for val := range res {             // the aggregator reads from the channel as values arrive
	Respond = append(Respond, val) // and assembles the result
}
Of course, you can write multithreaded code in C as well, but it is more hassle.
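
For more than two cores the same pattern generalizes out of the box. A sketch under the same assumptions (Good is the same hypothetical predicate as above) that splits the data across runtime.NumCPU() goroutines and closes the channel once every worker is done:

import (
	"runtime"
	"sync"
)

// FilterAll splits the scan over all available cores and aggregates the matches.
func FilterAll(data []uint64) []uint64 {
	res := make(chan uint64, 1024) // buffered to reduce send contention
	var wg sync.WaitGroup
	n := runtime.NumCPU()
	chunk := (len(data) + n - 1) / n // ceiling division so nothing is dropped
	for i := 0; i < len(data); i += chunk {
		end := i + chunk
		if end > len(data) {
			end = len(data)
		}
		wg.Add(1)
		go func(part []uint64) {
			defer wg.Done()
			for _, val := range part {
				if Good(val) {
					res <- val
				}
			}
		}(data[i:end])
	}
	go func() { wg.Wait(); close(res) }() // close once every worker is done
	var out []uint64
	for val := range res {
		out = append(out, val)
	}
	return out
}

For real throughput you would typically go further and have each goroutine build its own result slice, merging at the end, so the hot loop does no channel sends at all.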

nirvimel, 2016-03-26
@nirvimel

I write number crunchers like this in Python (don't rush to laugh) using Numba.
700 million × 64 bits == 5.6 GB of memory. I don't have that much, so I'll take half.
So, sampling 100 thousand 64-bit values out of 350 million takes 0.315 seconds, meaning with 700 million I would fit into roughly 0.6 seconds. All this on a fairly cheap Pentium.
This is clearly the limit of hardware performance: scanning 350 million × 8 bytes = 2.8 GB in 0.315 s is about 8.9 GB/s, on the order of the machine's memory bandwidth, so no assembler will speed this up by more than a few percent.

import numba as nb
import numpy as np
import time

max_value = np.iinfo(np.intc).max  # upper bound for the random test data


@nb.jit(nopython=True)
def search(src, dst):
    # Linear scan: copy values below a threshold into dst until dst is full.
    src_size, = src.shape
    dst_size, = dst.shape
    # On uniform random data, about dst_size of the src_size values fall below this threshold.
    factor = max_value / src_size * dst_size
    dst_ptr = 0
    for src_ptr in range(src_size):
        value = src[src_ptr]
        if value < factor and dst_ptr < dst_size:
            dst[dst_ptr] = value
            dst_ptr += 1


def search_and_time_it(from_size, to_size):
    src = np.random.randint(max_value, size=from_size)
    dst = np.empty(to_size, dtype=src.dtype)  # match the source dtype
    t1 = time.time()
    search(src, dst)
    t2 = time.time()
    print('search {0:,d} values from {1:,d} takes {2:.3f} seconds'.format(to_size, from_size, t2 - t1))


# search 100,000 values from 350,000,000
search_and_time_it(350 * 1000 * 1000, 100 * 1000)

Result:
search 100,000 values from 350,000,000 takes 0.315 seconds
