GPGPU
Daniil Smirnov, 2017-10-25 18:48:58

How to implement a GPU on an FPGA?

I have been working with FPGAs for more than a year. In that time I have mastered VHDL, can confidently implement state machines, and have also tried the Nios and MicroBlaze soft processors. In addition, I once got a MIPS core running at Yuri Panchul's seminar. Now I would like to dig into GPU architecture and understand the fundamental steps for implementing one on an FPGA. What literature should I read first? How do I approach implementing a GPU on an FPGA? What are the pitfalls?


2 answers
yatanai, 2019-08-28
@yatanai

Too late, but... it depends on what you mean by a GPU. Historically, 3D accelerators and other graphics hardware were built from simple cores with a minimal instruction set aimed at specific tasks (sometimes even off-the-shelf RISC cores with an FPU). Over that evolution the essence of the GPU as a parallel number cruncher has not changed much: it is clustered groups of cores with a simple instruction set and very little program memory (on the order of 256 instructions, as a reference point). Each cluster also has its own special-purpose blocks (rasterization units, for example) to speed up forming an image or any other kind of map.
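To make that concrete, here is a minimal VHDL sketch of what such a "simple core with tiny program memory" might look like. The 16-bit encoding, the four opcodes, the register count and the toy program are all invented for illustration; no real GPU uses exactly this.

```vhdl
-- Hypothetical minimal processing element: an accumulator machine with a
-- 256-word instruction ROM, 8 registers and 4 opcodes.
-- Encoding (16 bit): [15:14] opcode | [13:11] register | [10:0] immediate.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity tiny_pe is
    port (
        clk   : in  std_logic;
        rst   : in  std_logic;
        acc_o : out signed(15 downto 0)   -- expose the accumulator for debugging
    );
end entity;

architecture rtl of tiny_pe is
    type rom_t  is array (0 to 255) of std_logic_vector(15 downto 0);
    type regs_t is array (0 to 7)   of signed(15 downto 0);

    -- Toy program: acc := 5; r0 := acc; acc := acc + r0; loop forever.
    constant PROG : rom_t := (
        0      => "00" & "000" & "00000000101",  -- LDI 5
        1      => "11" & "000" & "00000000000",  -- ST  r0
        2      => "01" & "000" & "00000000000",  -- ADD r0
        3      => "10" & "000" & "00000000011",  -- JMP 3 (spin here)
        others => (others => '0'));

    signal pc   : unsigned(7 downto 0) := (others => '0');
    signal acc  : signed(15 downto 0)  := (others => '0');
    signal regs : regs_t               := (others => (others => '0'));
begin
    acc_o <= acc;

    process (clk)
        variable instr : std_logic_vector(15 downto 0);
        variable rd    : integer range 0 to 7;
    begin
        if rising_edge(clk) then
            if rst = '1' then
                pc  <= (others => '0');
                acc <= (others => '0');
            else
                instr := PROG(to_integer(pc));
                rd    := to_integer(unsigned(instr(13 downto 11)));
                pc    <= pc + 1;                 -- default: next instruction

                case instr(15 downto 14) is
                    when "00" =>                 -- LDI: load immediate
                        acc <= resize(signed(instr(10 downto 0)), 16);
                    when "01" =>                 -- ADD: acc := acc + reg
                        acc <= acc + regs(rd);
                    when "10" =>                 -- JMP: absolute jump
                        pc <= unsigned(instr(7 downto 0));
                    when others =>               -- ST: reg := acc
                        regs(rd) <= acc;
                end case;
            end if;
        end if;
    end process;
end architecture;
```

A real GPU lane would add an FPU, predication and a way to broadcast one instruction stream across many such cores, but the overall shape is the same.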
If we are talking about what you need to know in order to make a GPU as a 3D drawing device, then at a minimum you need to understand how graphics libraries build an image.
The Habr articles on CPU rendering by Yakimto Professor helped me here; they show how it all looks "from the outside".
That is basically all you need to know about the GPU itself; the rest of the implementation is up to your imagination, and believe me, there is no end of ways to do it.
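As one illustration of what "building the image" boils down to per pixel, here is a sketch of a combinational triangle-coverage test based on edge functions. The entity name, port widths and coordinate format are my own assumptions; a real design would pipeline this and add a proper fill rule for edges shared between triangles.

```vhdl
-- Hypothetical "is this pixel inside the triangle?" test using three edge
-- functions: E(x,y) = (x - x0)*(y1 - y0) - (y - y0)*(x1 - x0).
-- For a counter-clockwise triangle the pixel is covered when all three are >= 0.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity edge_test is
    port (
        -- screen-space vertex coordinates (12-bit signed is enough for 800x600)
        x0, y0, x1, y1, x2, y2 : in  signed(11 downto 0);
        px, py                 : in  signed(11 downto 0);  -- pixel being tested
        inside                 : out std_logic
    );
end entity;

architecture comb of edge_test is
    -- one edge function; the result needs 2*12+1 = 25 bits to be safe
    function edge(ax, ay, bx, by, cx, cy : signed(11 downto 0)) return signed is
    begin
        return resize((cx - ax) * (by - ay), 25) - resize((cy - ay) * (bx - ax), 25);
    end function;

    signal e0, e1, e2 : signed(24 downto 0);
begin
    e0 <= edge(x0, y0, x1, y1, px, py);
    e1 <= edge(x1, y1, x2, y2, px, py);
    e2 <= edge(x2, y2, x0, y0, px, py);

    inside <= '1' when e0 >= 0 and e1 >= 0 and e2 >= 0 else '0';
end architecture;
```

Sweeping this test across the triangle's bounding box, one pixel (or a 2x2 quad) per clock, is the simplest way to "turn triangles into a picture".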
The only real problem with FPGAs is performance: on a simple EP4CE22 (F24) I squeezed out 1-2 GFLOPS with ten cores in a 2x(4+1) arrangement. You also need fairly fast RAM: SDR SDRAM is barely adequate even for 800x600, at best enough to just turn the triangles into a picture. You really want DDR at 400 MHz or higher (if it comes up at that speed).
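A rough back-of-the-envelope on the memory point, assuming 16-bit color: scanning out 800x600 at 60 Hz already costs about 800 * 600 * 60 * 2 ≈ 57 MB/s, and every rendered pixel adds at least one more write (plus a read-modify-write if you do depth testing or blending). A 16-bit SDR SDRAM at 100 MHz tops out around 200 MB/s before refresh and row-activation overhead, so scan-out alone eats a large slice of it, while DDR-400 on a 16-bit bus gives roughly 800 MB/s of headroom.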
P.S. My cores were not vector cores, by the way.

Mikhail Usotsky, 2017-10-25
@AquariusStar

An FPGA may simply not have enough resources to implement a full GPU; for that you would need powerful parts such as Intel Arria or Stratix, or Xilinx Virtex. For simple graphics, though, what you already have is enough. Opencores.org has ready-made solutions you can study to see how they are implemented. But first you need to learn the basics of outputting graphical information: start by mastering VGA, since it is the foundation of everything, even the fancy DisplayPort and HDMI.
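To illustrate the "master VGA first" advice, here is a minimal sketch of a sync generator for the standard 640x480 @ 60 Hz mode. The entity and signal names are mine, but the timing constants are the standard ones for that mode (25.175 MHz pixel clock, negative sync polarity).

```vhdl
-- Minimal VGA sync generator for 640x480 @ 60 Hz
-- (pixel clock ~25.175 MHz; 25 MHz from an FPGA PLL is usually close enough).
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity vga_sync is
    port (
        pix_clk  : in  std_logic;                -- ~25.175 MHz
        rst      : in  std_logic;
        hsync    : out std_logic;                -- active low
        vsync    : out std_logic;                -- active low
        video_on : out std_logic;                -- '1' inside the visible area
        pixel_x  : out unsigned(9 downto 0);     -- 0..639 when visible
        pixel_y  : out unsigned(9 downto 0)      -- 0..479 when visible
    );
end entity;

architecture rtl of vga_sync is
    -- horizontal: 640 visible + 16 front porch + 96 sync + 48 back porch = 800
    constant H_VISIBLE : natural := 640;
    constant H_FRONT   : natural := 16;
    constant H_SYNC    : natural := 96;
    constant H_TOTAL   : natural := 800;
    -- vertical: 480 visible + 10 front porch + 2 sync + 33 back porch = 525
    constant V_VISIBLE : natural := 480;
    constant V_FRONT   : natural := 10;
    constant V_SYNC    : natural := 2;
    constant V_TOTAL   : natural := 525;

    signal hcnt : unsigned(9 downto 0) := (others => '0');
    signal vcnt : unsigned(9 downto 0) := (others => '0');
begin
    process (pix_clk)
    begin
        if rising_edge(pix_clk) then
            if rst = '1' then
                hcnt <= (others => '0');
                vcnt <= (others => '0');
            elsif hcnt = H_TOTAL - 1 then
                hcnt <= (others => '0');
                if vcnt = V_TOTAL - 1 then
                    vcnt <= (others => '0');
                else
                    vcnt <= vcnt + 1;
                end if;
            else
                hcnt <= hcnt + 1;
            end if;
        end if;
    end process;

    -- sync pulses are active low in this mode
    hsync <= '0' when (hcnt >= H_VISIBLE + H_FRONT and
                       hcnt <  H_VISIBLE + H_FRONT + H_SYNC) else '1';
    vsync <= '0' when (vcnt >= V_VISIBLE + V_FRONT and
                       vcnt <  V_VISIBLE + V_FRONT + V_SYNC) else '1';

    video_on <= '1' when (hcnt < H_VISIBLE and vcnt < V_VISIBLE) else '0';
    pixel_x  <= hcnt;
    pixel_y  <= vcnt;
end architecture;
```

Once video_on, pixel_x and pixel_y exist, a first "renderer" can be as simple as combinationally mapping those coordinates to a color; a framebuffer in external RAM comes next.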
