PostgreSQL
Nikita Zvonarev, 2015-09-08 16:24:18

How to write a very large JOIN or equivalent?

Here's the situation.
There are many tables (m, around 50) of identical structure, each with two columns: id (the primary key) and value (a double precision number). Every table contains the same set of ids, and there can be quite a lot of rows, several million.
The task seems simple: combine all the small tables into one wide table of the form (id, table_1.value, ..., table_m.value).
I have two solutions. The first is CREATE TABLE ... AS SELECT with one big long JOIN. It works fast enough, but only on Postgres and only on a small dataset; on a large one there is not enough RAM. On another database engine, Greenplum, a JOIN across that many tables simply hangs the database.
The second solution is to first build a table containing all the primary keys, and then add the corresponding value columns one by one. This takes too long.
Is there any quick solution to the problem?
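For reference, here is a minimal sketch of what the first approach (one wide JOIN) looks like when the statement is generated programmatically. sqlite3 stands in for Postgres/Greenplum purely for illustration; the table names t_1..t_m, column names, and row counts are assumptions, not taken from the question.

```python
import sqlite3

m = 3  # number of small tables; ~50 in the question
con = sqlite3.connect(":memory:")

# create m tables of the form (id, value) with the same set of ids
for t in range(1, m + 1):
    con.execute(f"CREATE TABLE t_{t} (id INTEGER PRIMARY KEY, value DOUBLE)")
    con.executemany(f"INSERT INTO t_{t} VALUES (?, ?)",
                    [(i, float(i * t)) for i in range(1, 6)])

# build CREATE TABLE ... AS SELECT with one JOIN per additional table
cols = ", ".join(f"t_{t}.value AS value_{t}" for t in range(1, m + 1))
joins = " ".join(f"JOIN t_{t} USING (id)" for t in range(2, m + 1))
sql = f"CREATE TABLE wide AS SELECT t_1.id, {cols} FROM t_1 {joins}"
con.execute(sql)

rows = con.execute("SELECT * FROM wide ORDER BY id").fetchall()
print(rows[0])  # (1, 1.0, 2.0, 3.0)
```

On Postgres this is exactly the statement that runs out of RAM at scale, since the planner has to deal with m-1 joins at once.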


2 answers
Artur Nurullin, 2015-09-08
@Splo1ter

The data structure is wrong to begin with, and now you're paying for it.

Alexander Melekhovets, 2015-09-08
@Blast

Write an external script that streams all 50 tables in parallel in ascending id order, merges the rows, and bulk-loads them into the combined table?
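A minimal sketch of this idea, assuming every table contains exactly the same set of ids. sqlite3 stands in for Postgres here; with a real Postgres you would use server-side cursors on the reading side and COPY (or batched inserts) on the writing side instead of collecting everything in a list. All names are illustrative.

```python
import sqlite3

m = 3
con = sqlite3.connect(":memory:")
for t in range(1, m + 1):
    con.execute(f"CREATE TABLE t_{t} (id INTEGER PRIMARY KEY, value DOUBLE)")
    con.executemany(f"INSERT INTO t_{t} VALUES (?, ?)",
                    [(i, float(i * t)) for i in range(1, 6)])

# one streaming cursor per table, each ordered by id
cursors = [con.execute(f"SELECT id, value FROM t_{t} ORDER BY id")
           for t in range(1, m + 1)]

merged = []
for rows in zip(*cursors):  # advances all cursors in lockstep
    ids = {r[0] for r in rows}
    assert len(ids) == 1, "tables disagree on ids"
    merged.append((rows[0][0],) + tuple(r[1] for r in rows))

print(merged[0])  # (1, 1.0, 2.0, 3.0)
```

The key point is that no single query ever joins more than one table, so memory usage stays flat regardless of how many tables or rows there are.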
