Keith Packard is a major contributor to Linux graphics. A recent blog post of his notes:
> So, you've got a fine head-mounted display and want to explore the delights of virtual reality. Right now, on Linux, that means getting the window system to cooperate because the window system is the DRM master and holds sole access to all display resources. So, you plug in your device, play with RandR to get it displaying bits from the window system and then carefully configure your VR application to use the whole monitor area and hope that the desktop will actually grant you the boon of page flipping so that you will get reasonable performance and maybe not even experience tearing. Results so far have been mixed, and depend on a lot of pieces working in ways that aren't exactly how they were designed to work.
His blog has some recent updates. He is consulting for Valve, working on VR.
Best I can tell, he is the only one willing to touch Xorg internals. The rest just putter around the edges, delete code, and hack on Wayland because it's shinier.
This is an extremely simplistic view that misunderstands the situation.
For one, the people working on Wayland made enormous contributions to the X server long before Wayland existed. The creator of Wayland, Kristian Høgsberg, did AIGLX, to name just one example.
There are few things I hate with more passion than the Linux "graphics stack." I realize that it inherited a toxic culture from UNIX (remember the X/Motif/xnews/SunTools/etc. wars? I do), and even today there is active warfare in the stack: Wayland vs. Xorg, GTK vs. Qt, GNOME vs. KDE vs. *desktop, hardware vendors vs. FOSS, OpenGL vs. OpenGL, etc.
<rant> A great example of this is how broken the simplest of things can be. I've got a machine sitting next to me that refuses to boot into a "good" graphics configuration: it boots to a 'safe' default of a 1024 x 800 screen (where the actual screen is a 2K screen) even though it's connected over HDMI (which can tell it everything it needs to know), using a widely deployed nVidia card and the current non-free, non-FOSS nVidia graphics stack. Re-install the driver package and it works (until reboot). Yes, there is a knob somewhere that screws it up, but there are a billion knobs from a dozen layers: is it DRM/KMS? Is it LightDM? Is it Nvidia? Is it OpenGL? Is it a friggin' boot option buried in the boot args? It can be anywhere, and it's going to take me a few hours to figure out where. Which, in terms of wasted salary time, is enough for me to buy a Mac Pro or a Windows box that works every time, all the time. </rant>
I will never forget the day I decided to dive down that rabbit hole, to get some OpenGL rendering working in a Qt app. I will never look at Linux OpenGL apps the same way again.
And for the curious: it turned out that the nvidia framebuffer kernel module was loaded after the default VGA framebuffer module, so that first module took precedence.
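If the culprit really is a generic framebuffer module claiming the device first, one common workaround is to blacklist the generic module so the vendor one wins regardless of load order. A minimal sketch (the module name below is illustrative; check `lsmod` for the one actually loaded on your system):

```conf
# /etc/modprobe.d/blacklist-fb.conf
# Keep the generic framebuffer driver from grabbing the card before
# the vendor module loads.  "vesafb" is an example name only.
blacklist vesafb
```

After editing, the initramfs usually has to be regenerated (e.g. update-initramfs -u on Debian/Ubuntu) for the change to take effect at boot.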
Try booting with the latest Ubuntu/Fedora live CD to see if it's a problem with your install. I've had good results blowing away an old install while preserving /home.
The comp.graphics archives? Mostly that war was fought on the mailing lists and Usenet newsgroups of the day. The Open Software Foundation (aka OSF, aka Oppose Sun Forever :-)) instigated the Motif war, as I recall. SGI, Microsoft, nVidia, S3, and 3dfx instigated the hardware-vs-FOSS wars, armed with PMDs (Patents of Mass Destruction :-).
There is another great resource, mainly about X11, called Xplain[1], with an accompanying repository[2] that contains some hidden (not yet ready, I suppose) chapters, for example the override-redirect description[3].
I don't like these slides. It's not that they're terribly wrong, but they gloss over some really important aspects, and they make rather simple problems appear harder than they are.
Take, for example, slide 29. It suggests that off-screen redirection of OpenGL applications (as required for composition) is something special that needs to be treated differently from non-OpenGL graphics. This is simply not true. If OpenGL is used in a window-system-integrated context (WSI is a rather new term that has only recently been properly defined, but the principle has been the same since the beginning; what's new is that since OpenGL 3 you can use a GL context without WSI), the window framebuffer is not managed by the OpenGL implementation but by whatever windowing system is used (e.g. X11, Win32 GDI, etc.), and the OpenGL implementation just borrows it. And the same mechanism (and, incidentally, code paths) that allows a WSI drawable to be used as a rendering destination for OpenGL also allows the flow of data to be turned around and a WSI drawable to be used as a source for texture access. Somewhere at the bottom it's all just pointers to regions of graphics memory, after all.
It's a pretty simple process, actually, and the only complicated thing is the weirdly convoluted set of APIs that have grown around it to expose something that had always been there but had been hidden from applications. Consider how quickly AIGLX was hacked together after Xgl showed up: IIRC, there were just a couple of months between them.
That's the main insight that led to the Wayland project: do away with the API cruft and expose the one thing that has been possible all along anyway.
Oh, and it should maybe also be pointed out that GLX_EXT_texture_from_pixmap is useful for much more than just composition, and can be used without Composite redirection.
And then there's slide 13, which is simply wrong in stating that there was a time when "Indirect Rendering (…) didn't allow for hardware acceleration." Indirect rendering never implied that; it just implies that there is no fast path between the application process and the graphics hardware, which slowed down data transfers. But display lists were a staple of OpenGL back then, and they did offer (and, for legacy code that uses them, still do offer) excellent performance; it actually took some time for the buffer-object-based vertex array code paths in OpenGL drivers to catch up with display list performance. And one could use display lists over indirect GLX just fine (and the ARB_vertex_buffer_object extension actually defines GLX opcodes, so you can even have that over indirect contexts, too).
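The "just pointers to regions of graphics memory" point above can be sketched conceptually in plain Python (an analogy only, not driver code; the names vram, window_fb, and texture are invented for illustration): a window's backing pixmap and a texture can alias the same underlying buffer, which is why no copy is needed for a compositor to sample a redirected window.

```python
import array

# Toy model of "graphics memory": one flat buffer of bytes.
vram = array.array("B", [0] * 64)

# A window framebuffer and a texture are, at bottom, just views
# (pointers + offsets) into that memory.  Here both alias the same
# bytes, which is the essence of GLX_EXT_texture_from_pixmap.
window_fb = memoryview(vram)[0:16]   # OpenGL's rendering destination
texture = memoryview(vram)[0:16]     # the compositor's texture source

window_fb[0] = 255                   # the app "draws" into its window
print(texture[0])                    # the texture sees it immediately
```

The real mechanism, of course, involves the driver handing out GPU memory handles rather than Python views, but the data-flow reversal the comment describes is exactly this kind of aliasing.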
The slides are written from the point of view of XFree86/X.org, where indirect GLX was originally implemented by the Mesa software rasterizer inside the X server, and the X server knew mostly nothing about DRM state. (For comparison, Xsun has supported accelerated indirect GLX since forever, and it even worked reasonably over the network.) AIGLX is what allows the X server to use an external libGL.so to implement indirect GLX; GLX_EXT_texture_from_pixmap is mostly unrelated, but was introduced to X.org as part of that work.
Crypto mining is not "graphics," though, and it bypasses X11 (it talks to the GPU via CUDA or OpenCL). You can happily mine without ever having an X server running.
What I meant is that mining uses the GPU in a totally different way than graphics does. For OpenGL you have to have X11 running and the correct libraries installed; mining bypasses all that and uses OpenCL or CUDA instead.
Everything using Qt Quick uses GPU-based rendering, though that's not exactly intensive. (Much like compositing, it requires graphics hardware that tolerates light use without consuming an inordinate amount of power).
Some Qt Quick applications can be quite heavy on the GPU, though, but that depends more on the application.
https://keithp.com/blogs/DRM-lease/
Xorg will live and die with Packard, sadly.
Intel and AMD graphics have much better open source drivers (but are not as good at 3D acceleration).
[1]: https://magcius.github.io/xplain/article/
[2]: https://github.com/magcius/xplain
[3]: https://magcius.github.io/xplain/article/menu.html
https://www.youtube.com/watch?v=ZTdUmlGxVo0
https://en.wikipedia.org/wiki/Direct_Rendering_Manager
It's a MESA->this[] of technologies :)
Blender can be extremely demanding. Cryptocurrency mining is also very hard on the GPU.
[1] http://store.steampowered.com/agecheck/app/252490/
Sadly, with the move to compositing everything, anything can be considered "graphically intensive".