Why are ordinary controls such as fields, inputs, and buttons all called "windows" in the WinAPI (C++)?
Am I correct in assuming that any graphical element in Qt, Java Swing, or HTML is, at least under Windows, ultimately a "window"? And a second question: how does the system determine, at a very low level, exactly where I clicked the mouse? Does it work the way a canvas does, where I create my own drawing surface, draw my own button, and write a function that checks whether a click landed on that button? Is that comparable in speed to a standard button, and does the standard button determine click positions the same way?
In the WinAPI, user interaction is built around "windows". A window class is registered with the RegisterClass function, which takes a structure containing the class name, the window characteristics, and a pointer to the window procedure that will process all messages for windows of that class (some of those messages also drive rendering).
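A minimal sketch of that registration, assuming a hypothetical class name "MyWindowClass": the WNDCLASS structure ties the class name to the window procedure, and the procedure handles a couple of representative messages.

```cpp
#include <windows.h>

// Window procedure: receives every message sent to windows of this class.
LRESULT CALLBACK MyWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_PAINT:  // rendering is also driven by a message
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        TextOut(hdc, 10, 10, TEXT("Hello"), 5);
        EndPaint(hwnd, &ps);
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam); // default handling for the rest
}

void RegisterMyClass(HINSTANCE hInstance)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = MyWndProc;             // message handler for this class
    wc.hInstance     = hInstance;
    wc.lpszClassName = TEXT("MyWindowClass"); // name used later when creating windows
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    RegisterClass(&wc);
}
```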
Then, using that class name, you can create the actual "window" (visual element) with CreateWindow or CreateWindowEx.
BUTTON, EDIT, and so on are simply the names of predefined system window classes.
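A sketch of that idea: the same CreateWindow call produces both a push button and a text field, because BUTTON and EDIT are just class names (hwndParent and hInstance are assumed to exist in the surrounding program).

```cpp
HWND hButton = CreateWindow(
    TEXT("BUTTON"),                 // predefined system window class
    TEXT("Click me"),               // button caption
    WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
    10, 10, 120, 30,                // x, y, width, height
    hwndParent,                     // parent "window"
    (HMENU)1001,                    // control ID
    hInstance,
    NULL);

HWND hEdit = CreateWindow(
    TEXT("EDIT"),                   // another predefined class (a text field)
    TEXT(""),
    WS_CHILD | WS_VISIBLE | WS_BORDER | ES_AUTOHSCROLL,
    10, 50, 200, 24,
    hwndParent,
    (HMENU)1002,
    hInstance,
    NULL);
```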
The kernel handles interrupts from input devices, converts them into window messages, and delivers them to the appropriate window. The windowing system knows about every window and its position, so it can determine which window should receive each message. (You can get a list of all top-level windows with the EnumWindows function.)
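As an illustration of how the system "sees" all windows, here is a small sketch that uses EnumWindows to walk the top-level windows and print each one's class name and title:

```cpp
#include <windows.h>
#include <cstdio>

// Callback invoked once per top-level window.
BOOL CALLBACK PrintWindow(HWND hwnd, LPARAM /*lParam*/)
{
    char cls[256]   = {0};
    char title[256] = {0};
    GetClassNameA(hwnd, cls, sizeof(cls));
    GetWindowTextA(hwnd, title, sizeof(title));
    if (IsWindowVisible(hwnd))
        printf("%p  class=%s  title=%s\n", (void*)hwnd, cls, title);
    return TRUE; // keep enumerating
}

int main()
{
    EnumWindows(PrintWindow, 0);
    return 0;
}
```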
When you click the mouse, Windows generates several messages for that single action (WM_LBUTTONDOWN, WM_LBUTTONUP, etc.), each carrying the cursor position in the window's client coordinates.
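Which is essentially what your "own drawer" would do too. A sketch, with an assumed rectangle for a custom-drawn button: the window procedure reads the click coordinates from WM_LBUTTONDOWN and hit-tests them with PtInRect, which is conceptually the same check a standard control performs for you.

```cpp
#include <windows.h>
#include <windowsx.h>   // GET_X_LPARAM / GET_Y_LPARAM

static RECT g_myButton = { 20, 20, 140, 60 };  // area of the custom-drawn "button"

LRESULT CALLBACK CanvasWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_LBUTTONDOWN:
    {
        // Click position in client coordinates, taken from the message itself.
        POINT pt = { GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam) };
        if (PtInRect(&g_myButton, pt))
        {
            MessageBox(hwnd, TEXT("Custom button clicked"), TEXT("Hit"), MB_OK);
        }
        return 0;
    }
    case WM_LBUTTONUP:
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```

A rectangle test like this is a few comparisons, so a hand-rolled button is not slower than a standard one in this respect; the standard control simply packages the same hit-testing, drawing, and state handling for you.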