Discussion:
Retrace sync. in Linux
e***@odense.kollegienet.dk
2003-09-24 10:11:35 UTC
I've done some more experimenting and found out some more things (again, probably
only of interest to people with little to no experience with SDL/graphics
programming under Linux).

The tearing in my scrolling example posted a while ago stops completely when I
use the DGA driver and run the program as root. My frame rate drops from about
120-130 to 85, which is my monitor's refresh rate. So it seems my tearing problem
is that there is no waiting for the retrace.

Is there any way to do this without having to use suid programs? I can't use
the DGA driver without being root, so that's not a good idea it seems.

I found this old post by Olofson:
http://www.libsdl.org/pipermail/sdl/2001-January/032774.html

Does anyone know if there has been any progress on the matter, without having
to run as root?

Regards
Henning
Paulo Pinto
2003-09-24 11:58:20 UTC
No, DGA still requires you to setuid your programs.
In Linux, if you want hardware acceleration and to
run your application as a normal user, your only
solution is OpenGL.
Post by e***@odense.kollegienet.dk
I've done some more experimenting and found out some more things (again, probably
only of interest to people with little to no experience with SDL/graphics
programming under Linux).
The tearing in my scrolling example posted a while ago stops completely when I
use the DGA driver and run the program as root. My frame rate drops from about
120-130 to 85, which is my monitor's refresh rate. So it seems my tearing problem
is that there is no waiting for the retrace.
Is there any way to do this without having to use suid programs? I can't use
the DGA driver without being root, so that's not a good idea it seems.
http://www.libsdl.org/pipermail/sdl/2001-January/032774.html
Does anyone know if there has been any progress on the matter, without having
to run as root?
Regards
Henning
Sam Lantinga
2003-09-24 14:14:13 UTC
Post by Paulo Pinto
No, DGA still requires you to setuid your programs.
Don't ever setuid your programs. This will lead to a false sense of security
on the part of the user... "Of course it's safe, I'm not running it as root"...

What I typically do is make DGA an option in the program, but default to use
X11. Then, if the user feels comfortable running the program as root, they
can do so for added benefits.
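A minimal sketch of that pattern, assuming SDL 1.2 (which picks its video
backend from the SDL_VIDEODRIVER environment variable at init time); the
--dga switch and the fallback logic here are made up for illustration:

/* Use the DGA backend only when explicitly requested and actually
 * running as root; otherwise fall back to the ordinary X11 driver. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include "SDL.h"

int main(int argc, char *argv[])
{
    char drivername[32];
    int want_dga = (argc > 1 && strcmp(argv[1], "--dga") == 0);

    if (want_dga && geteuid() == 0)
        setenv("SDL_VIDEODRIVER", "dga", 1);    /* root: try DGA */
    else
        setenv("SDL_VIDEODRIVER", "x11", 1);    /* default: plain X11 */

    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }
    if (SDL_VideoDriverName(drivername, sizeof(drivername)))
        printf("Using video driver: %s\n", drivername);
    SDL_Quit();
    return 0;
}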

BTW, the version of the DGA library that's built into SDL has support for
using DGA as a normal user if the framebuffer console is properly configured.
I don't think the patch ever made it into XFree86, but it's handy just for
this particular problem.

See ya,
-Sam Lantinga, Software Engineer, Blizzard Entertainment
Patrick McFarland
2003-09-24 14:59:16 UTC
Post by Sam Lantinga
BTW, the version of the DGA library that's built into SDL has support for
using DGA as a normal user if the framebuffer console is properly configured.
I don't think the patch ever made it into XFree86, but it's handy just for
this particular problem.
What's so stupid is, X already has root (usually). DGA was badly designed, and
I wish someone would replace it with a better method. XImage is supposedly
that better method, but I've yet to see anything really good come out of it.
--
Patrick "Diablo-D3" McFarland || ***@panax.com
"Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd
all be running around in darkened rooms, munching magic pills and listening to
repetitive electronic music." -- Kristian Wilson, Nintendo, Inc, 1989
e***@odense.kollegienet.dk
2003-09-24 15:41:49 UTC
Post by Sam Lantinga
BTW, the version of the DGA library that's built into SDL has support for
using DGA as a normal user if the framebuffer console is properly configured.
I don't think the patch ever made it into XFree86, but it's handy just for
this particular problem.
I don't really understand the benefit of making a graphics library (or
extension, or whatever DGA is) that can only be used as root. I don't know how
DGA is implemented, but I read in an article (from SDL's article links) that
Loki had created their own version of DGA that did not need root privileges
to run, for some game. Also, it seems that OpenGL apps have access to hardware
acceleration (I don't know whether or not retrace sync could be called
acceleration).

I don't understand why some libraries are allowed to access hardware, while
others are not.
But if it's possible to do it at all, then I guess I could just go ahead and
write my own code for doing a retrace sync. I have absolutely no idea how to
do that yet, though.

Regards
Henning
David Olofson
2003-09-24 17:25:19 UTC
On Wednesday 24 September 2003 17.41, ***@odense.kollegienet.dk
wrote:
[...]
Post by e***@odense.kollegienet.dk
Also, it
seems that OpenGL apps have access to hardware acceleration (I don't
know whether or not retrace sync could be called acceleration).
Right. However, there is one important difference: As opposed to DGA
apps, OpenGL apps do *not* have direct access to VRAM or texture RAM.
That makes things quite a bit easier for drivers, and it also
eliminates the dreadfully inefficient situations that occur when
people try to mix s/w rendering and accelerated rendering.
Post by e***@odense.kollegienet.dk
I don't understand why some libraries are allowed to access
hardware, while others are not.
No libraries are allowed to access hardware. (Well, not unless you're
root, or the lib has a friend that is root or lives in kernel space.)

The OpenGL lib on a Linux box doesn't mess with the hardware. Instead,
it just sends commands to the low level OpenGL driver, which lives in
the X server (which means it's root). OpenGL drivers usually have a
helper kernel module as well, that provides some very low level
services to the X server driver module.
Post by e***@odense.kollegienet.dk
But if it's possible to do it at all, then I guess I could just go
ahead and write my own code for doing a retrace sync. I have
absolutely no idea how to do that yet, tho.
I have some code that does, but there's that problem again: You have
to be root to get access to the ports. What I had in mind was to hack
a retrace sync daemon that applications could get retrace info and
events from, but I doubt that would perform very well without
preemptive lowlatency kernels. I'm quite sure it would perform *very*
well with such kernels though, given that that approach works great
for audio (JACK), which deals with an order of magnitude higher
"frame rates" and much stricter requirements. (Drop-outs are not
tolerated at all.)

Either way, you don't want to go there. Doing the sync on the
application side is not only hard to get right; it's the wrong
answer! This is particularly true if you render using OpenGL or some
other asynchronous rendering system. OpenGL doesn't *flip* when you
tell it to SwapBuffers; it enqueues a "flip here" mark and then lets
your application go on with the next frame - instantly. No blocking
until the command buffer is full, or you're more than one or two
frames ahead of the rendering. (The latter is *very* important. I
once had a driver that didn't do that, and it was completely useless
for animation...)

So, if you hard-sync your app to the retrace while rendering with
OpenGL (using a driver w/o retrace sync), all you do is concentrate
the tearing to a fixed area of the screen, which usually looks worse
than not sync'ing at all.


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
David Olofson
2003-09-24 17:00:41 UTC
On Wednesday 24 September 2003 12.11, ***@odense.kollegienet.dk
wrote:
[...]
Post by e***@odense.kollegienet.dk
http://www.libsdl.org/pipermail/sdl/2001-January/032774.html
Do anyone know if there has been some progress on the matter?
Without having to run as root.
Dunno if anything is done to fix DGA, but I'm quite sure it could be
done. However, there's one major difference between DGA and other low
level "direct video" APIs, and OpenGL; the latter is asynchronous,
while the former are synchronous. This means that in the case of
OpenGL, the driver is the only part that has to be in strict "hard
sync" with the retrace, whereas with DGA, the application has to be
sync'ed as well. This turns out to be a real PITA in general purpose
OSes. You basically need an RTOS to do that kind of stuff properly.

Anyway, OpenGL is the way to go. It's the only remotely portable
solution that can reliably provide serious h/w acceleration. OpenGL
also provides asynchronous rendering, which cuts the applications
some more slack, allowing for more scheduling jitter without dropping
frames. Finally, the best part: Most OpenGL drivers for current
hardware (ie the closed source ones...) support retrace sync.
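For reference, a minimal SDL 1.2 + OpenGL double-buffered setup along these
lines (a sketch; the window size and the render loop are placeholders, and
whether SwapBuffers actually waits for the retrace is up to the driver):

/* Retrace sync itself is a driver setting (e.g. __GL_SYNC_TO_VBLANK for
 * the NVIDIA driver); SDL_GL_SwapBuffers() only requests the swap. */
#include <stdio.h>
#include "SDL.h"
#include "SDL_opengl.h"

int main(void)
{
    int frame;

    if (SDL_Init(SDL_INIT_VIDEO) < 0) {
        fprintf(stderr, "SDL_Init: %s\n", SDL_GetError());
        return 1;
    }
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    if (!SDL_SetVideoMode(640, 480, 0, SDL_OPENGL)) {
        fprintf(stderr, "SDL_SetVideoMode: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }
    for (frame = 0; frame < 300; ++frame) {
        glClearColor(0.0f, 0.0f, frame / 300.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        SDL_GL_SwapBuffers();   /* enqueue the swap; may or may not block */
    }
    SDL_Quit();
    return 0;
}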


IMNSHO, "traditional" 2D acceleration is effectively obsolete, and s/w
2D rendering directly into VRAM has been an absolute last resort for
a good while. There are many different 2D APIs, most of which are
platform specific and/or have serious limitations and/or very few -
if any - accelerated drivers. As to "pure" s/w rendering, that's
practically useless for high quality animation due to performance
issues with CPU transfers through PCI and AGP. If you really want s/w
rendering, you should use busmaster DMA to transfer the output to the
screen - but unfortunately, virtually no targets, apart from
DirectDraw on reasonably well supported hardware, support this.


That said, s/w 2D rendering can still be useful for less demanding
applications. It allows the applications to run on pretty much
anything, and even though this method scales very poorly (ie your 3+
GHz CPU and 500 MHz GPU won't help much at all), there are ways to
add full h/w acceleration very easily to most such applications.

glSDL (the "compile-in" wrapper) is one way (which is used by Kobo
Deluxe: http://www.olofson.net/kobodl ), and in a not too distant
future, we (the Not So Secret Backend Team ;-) hope to get a true
backend version into SDL, so users can make use of OpenGL without
even recompiling applications.

glSDL/wrapper:
http://olofson.net/mixed.html
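Roughly, using the compile-in wrapper looks like this (a sketch based on the
description in this thread: include the wrapper's header instead of SDL.h,
compile glSDL.c into the project, and add the SDL_GLSDL flag when setting the
video mode; the exact file and flag names are assumptions taken from the thread):

/* With SDL_GLSDL set, the wrapper routes rendering through OpenGL;
 * without it, this behaves like a normal SDL 1.2 application. */
#include "glSDL.h"

int init_video(void)
{
    SDL_Surface *screen;

    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return -1;

    screen = SDL_SetVideoMode(640, 480, 0,
                              SDL_GLSDL | SDL_DOUBLEBUF | SDL_FULLSCREEN);
    if (!screen)
        return -1;

    /* ...normal SDL blits, SDL_Flip(screen), etc... */
    return 0;
}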


Another way is to think twice (or more ;-) when designing your
application, so you can snap in an alternative rendering backend
which uses OpenGL natively. There's a very simple demonstration of
that technique in my smoothscroll example:

smoothscroll:
http://olofson.net/examples.html


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
Henning
2003-09-25 16:34:51 UTC
Post by David Olofson
http://olofson.net/mixed.html
Oh my!
This is really neat. My frame rate jumps from something like 150 to 1150 fps. Now I see how they make games so fast. I must say I really had no idea that my gfx card was utilized so badly before.
I think the tearing is still there, but as you said earlier it might get less visible with higher frame rates.

A funny thing is this. If I set __GL_SYNC_TO_VBLANK=1 my driver will do the retrace sync. This drops my frame rate to 60, which is okay, since I guess this is my monitor's refresh rate at whatever screen mode glSDL gives me.
What I don't understand about this, however, is that enabling it makes the lower part of the screen (about half I'd say) seem like it's some frames "ahead" of the upper part. That is, the image looks something like this:

<----image scrolls that way
##**********
##**********
**********##
**********##

Don't know if it's important, but I could mention that using DGA my frame rates (monitor refresh rates) depend on the bpp:
8 or 16 bpp = 60 fps
24 or 32 bpp = 85 fps

Using glSDL and enabling vsync always puts my frame rate at 60. I don't know if this is because GL ignores the color depth setting passed to SDL_SetVideoMode or what.


Regards
Henning
Bob Pendleton
2003-09-25 17:34:22 UTC
Post by Henning
Post by David Olofson
http://olofson.net/mixed.html
Oh my!
This is really neat. My framerate jumps from something like 150 to 1150fps. Now I see, how they make games so fast. I must say I really had no idea that my gfx card was utilized so badly before.
I think the tearing is still there, but as you said earlier it might get less visible with higher framerates.
A funny thing is this. If I set __GL_SYNC_TO_VBLANK=1 my driver will make the retrace sync. This drops my framerate to 60, which is okay, since I guess then that this is my monitors refresh rate at whatever screen mode glSDL gives me.
<----image scrolls that way
##**********
##**********
**********##
**********##
8 or 16bpp = 60fps
24 or 32bpp = 85fps
Using glSDL and enabling vsync always puts my frame rate at 60. I dont know if this is because GL ignores my color depth setting passed to SDL_SetVideoMode or what.
When using OpenGL with SDL, only the GL attributes you set have anything
to do with the frame buffer depth. The values passed to
SDL_SetVideoMode() are *always* ignored. Even the attributes are only a
minimum requirement: SDL may give you any visual whose attribute
values are equal to or greater than the ones you requested.
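A short illustration of that (a sketch; the requested values are arbitrary):
set the attributes with SDL_GL_SetAttribute() before SDL_SetVideoMode(), then
read back what the driver actually granted with SDL_GL_GetAttribute():

#include <stdio.h>
#include "SDL.h"

static void report(SDL_GLattr attr, const char *name)
{
    int value = 0;
    SDL_GL_GetAttribute(attr, &value);
    printf("%s = %d\n", name, value);
}

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);

    /* Minimum requirements, not exact requests. */
    SDL_GL_SetAttribute(SDL_GL_RED_SIZE,     5);
    SDL_GL_SetAttribute(SDL_GL_GREEN_SIZE,   5);
    SDL_GL_SetAttribute(SDL_GL_BLUE_SIZE,    5);
    SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE,  16);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

    SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);  /* the bpp argument is ignored */

    report(SDL_GL_RED_SIZE,     "SDL_GL_RED_SIZE");
    report(SDL_GL_DEPTH_SIZE,   "SDL_GL_DEPTH_SIZE");
    report(SDL_GL_DOUBLEBUFFER, "SDL_GL_DOUBLEBUFFER");

    SDL_Quit();
    return 0;
}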

Bob Pendleton
Post by Henning
Regards
Henning
--
+-----------------------------------+
+ Bob Pendleton: independent writer +
+ and programmer. +
+ email: ***@Pendleton.com +
+ web: www.GameProgrammer.com +
+-----------------------------------+
David Olofson
2003-09-25 18:10:32 UTC
On Thursday 25 September 2003 19.34, Bob Pendleton wrote:
[...]
Post by Bob Pendleton
Post by Henning
Using glSDL and enabling vsync always puts my frame rate at 60. I
dont know if this is because GL ignores my color depth setting
passed to SDL_SetVideoMode or what.
When using OpenGL with SDL only the GL attributes you set have
anything to do with the frame buffer depth. The values passed to
SDL_SetVideoMode() are *always* ignored. Even the attributes are
only a minimum requirement. SDL may give you any visual that has
Attribute values equal to or greater than the attribute values.
That doesn't apply to glSDL applications, since (gl)SDL_SetVideoMode()
translates most of the relevant flags into GL attributes. glSDL
applications are not really aware that they're running on OpenGL, so
it just wouldn't work without this feature.


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
Mikkel Gjøl
2003-09-25 17:43:54 UTC
Post by Henning
Post by David Olofson
http://olofson.net/mixed.html
Oh my!
This is really neat. My framerate jumps from something like 150 to 1150fps. Now I see, how they make games so fast. I must say I really had no idea that my gfx card was utilized so badly before.
Weird - is SDL slow? Any idea what caused the delay?
(Sorry if I ask questions already answered previously in this thread, but
I just joined the list.)
Post by Henning
A funny thing is this. If I set __GL_SYNC_TO_VBLANK=1 my driver will make the retrace sync. This drops my framerate to 60, which is okay, since I guess then that this is my monitors refresh rate at whatever screen mode glSDL gives me.
<----image scrolls that way
##**********
##**********
**********##
**********##
8 or 16bpp = 60fps
24 or 32bpp = 85fps
In Windoze I believe SDL uses the settings that are set in DirectX - these
are set independently per color depth. In Linux, you probably use the
vidmodes specified in your /etc/X11/XF86Config (-4). You shouldn't
experience any kind of tearing when flipping the buffer, though? Are you
using double-buffering? You might just have set it up for single-buffer :)

glSDL probably just chooses a default setting of 60Hz as it is just a
"proof of concept" :)


Regards,
\\Mikkel Gjoel
David Olofson
2003-09-25 18:23:14 UTC
Post by Mikkel Gjøl
Post by Henning
Post by David Olofson
http://olofson.net/mixed.html
Oh my!
This is really neat. My framerate jumps from something like 150
to 1150fps. Now I see, how they make games so fast. I must say I
really had no idea that my gfx card was utilized so badly before.
Weird - is SDL slow? Any idea what caused the delay?
On most targets, it's not accelerated, and even when it is, it's using
more or less obsolete 2D APIs, which often have poor driver support.

OpenGL OTOH, is a major 3D acceleration API that many of the current
commercial games use, so drivers tend to be solid (well, relatively
speaking ;-) and fast, and make full use of the capabilities of the
video cards.


[...]
Post by Mikkel Gjøl
Post by Henning
Don't know if it's important, but I could mention that using DGA
my frame rates (monitor refresh rates) are depending on bpp: 8 or
16bpp = 60fps
24 or 32bpp = 85fps
In Windoze I believe SDL uses the settings that is set in DirectX -
this is set independently per colordepth. In Linux, you probably
use the vidmodes specified in your /etc/X11/XF86Config (-4).
Yep.
Post by Mikkel Gjøl
You
shouldn't experience any kind of tearing when flipping the buffer,
though?
If he's actually *flipping*, that is...
Post by Mikkel Gjøl
Are you using double-buffering?
Most probably yes...
Post by Mikkel Gjøl
You might just have set it
up for single-buffer :)
...because I doubt he would fail to notice the hysterical flickering
that single buffering would result in. ;-)

(Yes, you *really* get single buffering if you ask for it with glSDL,
even on X11/Linux, unless you have some weird OpenGL driver that
doesn't support it.)
Post by Mikkel Gjøl
glSDL probably just chooses a default setting of 60Hz as it is just
a "proof of concept" :)
glSDL doesn't choose anything. It just forwards the request to SDL,
with the addition of the SDL_OPENGL flag, if the application sets
SDL_GLSDL.

SDL does choose on some targets, I think, but even so, it'll have to
pick from the list of available modelines in the configuration, so
my guess is that 16 bpp has a different set of modelines in his
configuration.


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
David Olofson
2003-09-25 22:46:41 UTC
Post by David Olofson
Post by Mikkel Gjøl
Post by Henning
Post by David Olofson
http://olofson.net/mixed.html
Oh my!
This is really neat. My framerate jumps from something like 150
to 1150fps. Now I see, how they make games so fast. I must say
I really had no idea that my gfx card was utilized so badly
before.
Weird - is SDL slow? Any idea what caused the delay?
On most targets, it's not accelerated, and even when it is, it's
using more or less obsolete 2D APIs, which often have poor driver
support.
To make things clear: that's not quite true, and the figures above are
probably not representative of a direct comparison between glSDL and
the traditional 2D backends. (I'm just being affected by the poor
status of 2D APIs on certain platforms...)

It is true that some APIs and driver architectures have bandwidth
limitations, but the ones *most* users will be running perform pretty
well, as long as applications behave properly. (SDL_DisplayFormat(),
use h/w surfaces when needed to make use of acceleration and all that
FAQ stuff.)
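As a rough sketch, the "FAQ stuff" in code form (SDL 1.2; the file name and
mode flags are placeholders): request a hardware double-buffered screen,
convert source surfaces to the display format once at load time, then blit
and flip:

#include "SDL.h"

static SDL_Surface *load_converted(const char *path)
{
    SDL_Surface *tmp = SDL_LoadBMP(path);
    SDL_Surface *img = NULL;

    if (tmp) {
        img = SDL_DisplayFormat(tmp);   /* match the screen's pixel format */
        SDL_FreeSurface(tmp);
    }
    return img;
}

int main(void)
{
    SDL_Surface *screen, *sprite;

    SDL_Init(SDL_INIT_VIDEO);
    screen = SDL_SetVideoMode(640, 480, 0,
                              SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_FULLSCREEN);
    sprite = load_converted("sprite.bmp");       /* hypothetical asset */

    if (screen && sprite) {
        SDL_BlitSurface(sprite, NULL, screen, NULL);
        SDL_Flip(screen);                        /* flip, or back->front blit */
        SDL_Delay(2000);
    }
    SDL_Quit();
    return 0;
}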

DirectDraw is probably *the* most commonly used SDL backend, and it
provides true h/w pageflipping (no back->front blits) and accelerated
blits on pretty much all PCI and AGP video cards, and probably some
older beasts like ISA or VLB "windows accelerators". DMA transfers
from system RAM (s/w surfaces) are possible on most cards that are
still in use, which means you should be able to get full s/w
rendering with decent performance. (Touching VRAM with the CPU over
PCI or AGP is very slow, so you render in system RAM instead, and use
DMA blits to transfer to VRAM.)

However, DirectDraw is available only on Windows, and some other
targets don't have any commonly available alternatives that can
deliver similar performance. XFree86 (used on Linux, FreeBSD and
others) for example has DGA, but requires that applications are run
as root ("Administrator") to gain access to it, and few drivers care
to accelerate it. Usually, all you get is h/w pageflipping and direct
CPU access to VRAM. The X11 API (what most apps use) is usually
accelerated, but the way it's designed, SDL cannot really make use of
it, and falls back to s/w rendering.

Now, there *is* still hope if you want insane frame rates! OpenGL is
used by countless 3D games and applications, and as a result, there's
great h/w support and decent drivers - even on Linux. Now, it is a
"3D" API - but the screen is still 2D, right? That is, OpenGL is
still great for 2D rendering, and although it hates some of the
rendering methods used by a few games, glSDL proves that OpenGL can
be used as an alternative rendering backend for SDL, with great
results. So, you can have the extra performance when accelerated
OpenGL is present, but your applications will still run on pretty
much anything, since there will always be *some* SDL backend that
works.


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
Neil Brown
2003-09-25 23:12:58 UTC
Post by David Olofson
Now, there *is* still hope if you want insane frame rates! OpenGL is
used by countless 3D games and applications, and as a result, there's
great h/w support and decent drivers - even on Linux. Now, it is a
"3D" API - but the screen is still 2D, right? That is, OpenGL is
still great for 2D rendering, and although it hates some of the
rendering methods used by a few games, glSDL proves that OpenGL can
be used as an alternative rendering backend for SDL, with great
results. So, you can have the extra performance when accelerated
OpenGL is present, but your applications will still run on pretty
much anything, since there will always be *some* SDL backend that
works.
All looks very promising! Do you have a rough idea in what version glSDL
will be integrated into actual SDL? (IIRC it's a separate library atm)

Neil.
David Olofson
2003-09-25 23:45:07 UTC
On Friday 26 September 2003 01.12, Neil Brown wrote:
[...glSDL...]
Post by Neil Brown
All looks very promising! Do you have a rough idea in what version
glSDL will be integrated into actual SDL? (IIRC its a separate
library atm)
Actually, the currently available version (which is no longer being
worked on) isn't even a library, but just a header and a .c file that
you throw into your project.

We have a backend version that's mostly working already. There are
some issues with SDL_DisplayFormat*() remaining, and maybe some other
things we haven't found yet.

Can't say how far it is from mainstream SDL, or even when we'll start
releasing patches to the public, but the latter may not be very far
away at all. I guess after we've fixed the currently known issues
might be a good time to start beta testing.

BTW, maybe it would be a good idea to add glSDL to the 1.3 tree first,
test it thoroughly, and then backport it to 1.2?

Anyway, it's done when it's done! ;-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
David Olofson
2003-09-25 18:08:42 UTC
Post by Henning
Post by David Olofson
http://olofson.net/mixed.html
Oh my!
This is really neat. My framerate jumps from something like 150 to
1150fps. Now I see, how they make games so fast. I must say I
really had no idea that my gfx card was utilized so badly before. I
think the tearing is still there, but as you said earlier it might
get less visible with higher framerates.
Yeah. If you get multiple tears per frame, each tear will be smaller
and less visible. With "normal" scrolling speeds, the tearing may not
ever be visible if you get a really insane frame rate.

This is really a tremendous waste of blitting power, rendering the
whole screen over and over, while only a fraction of every frame is
ever seen - but that's what you get with a broken framework and/or
crappy drivers... *heh*
Post by Henning
A funny thing is this. If I set __GL_SYNC_TO_VBLANK=1 my driver
will make the retrace sync. This drops my framerate to 60, which is
okay, since I guess then that this is my monitors refresh rate at
whatever screen mode glSDL gives me.
Yeah, sounds reasonable - and it sounds like you should hack your
configuration, unless you're fine with 60 Hz. (I'm not, on any CRT
monitor I've seen so far. Stroboscope...)
Post by Henning
What I don't understand about
this however, is that enabling this makes the lower part of the
screen (about half I'd say)
Half! That's really bad... I get it somewhere near the top of the
screen. (P-III 933, ATI FireGL 8800, XFree86 4.2.1, fglrx 2.5.1.)
Post by Henning
seem like its some frames "ahead" of
<----image scrolls that way
##**********
##**********
**********##
**********##
That's because double buffering is done with the off-screen buffer +
blit method. The position of the tearing indicates where the
back->front blit passes the ray. This shouldn't really happen, but
unless you have RTLinux, RTAI or similar installed, it's virtually
impossible to avoid, since the retrace only lasts for a tiny fraction
of the duration of a video frame.

Apparently, your driver doesn't even bother to wait for the rendering
to finish before sync'ing and starting the blit, which makes it even
worse, as the sync-to-blit latency becomes dependent on what you're
rendering. If the tear is bouncing around when playing 3D games, I'm
95% certain this is the problem.

Now, there *is* a solution that might work (most of the time) without
RTLinux or RTAI: "Half Buffering". Use a timer to start blitting the
upper half of the screen when the ray is in the middle of the screen
and then start blitting the lower half after sync'ing with the
retrace and setting up the timer for the next frame. However, this
requires ms accurate timers, so it still won't work on 100 Hz kernels
without some high-res timer patch...

Either way, I'd be surprised if any driver vendor bothered
implementing something like this, or proper pageflipping (the Right
Way(TM)). H*ll, they don't even bother enabling retrace sync by
default. :-(
Post by Henning
Don't know if it's important, but I could mention that using DGA my
frame rates (monitor refresh rates) are depending on bpp: 8 or
16bpp = 60fps
24 or 32bpp = 85fps
Using glSDL and enabling vsync always puts my frame rate at 60. I
dont know if this is because GL ignores my color depth setting
passed to SDL_SetVideoMode or what.
Could be something funny with your modelines and/or the DGA driver.
Did you forget to list your preferred modelines in the config for 16
bpp? (Each bpp has a separate configuration...)

Anyway, yes; your depth setting is most probably ignored, since OpenGL
drivers generally don't use DGA, and thus, are not able to change the
display bit depth.

(Instead, they set up a window the normal way and have the 3D
accelerator render directly into the area occupied by that window.
Fullscreen is done by creating a borderless window of the right size
and then switching the display resolution and disabling panning and
resolution switching shortcuts.)
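A bare-bones sketch of that mechanism using Xlib and the XF86VidMode
extension (not SDL's actual code; error handling and restoring the original
mode are omitted, and 640x480 is just an example):

/* Switch to a matching modeline, lock the mode-switch hotkeys, pin the
 * viewport, and map a borderless (override-redirect) window over it.
 * Build with: cc fullscreen.c -lX11 -lXxf86vm */
#include <X11/Xlib.h>
#include <X11/extensions/xf86vmode.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr, i, count = 0;
    XF86VidModeModeInfo **modes;
    XSetWindowAttributes attrs;
    Window win;

    if (!dpy)
        return 1;
    scr = DefaultScreen(dpy);

    XF86VidModeGetAllModeLines(dpy, scr, &count, &modes);
    for (i = 0; i < count; i++)
        if (modes[i]->hdisplay == 640 && modes[i]->vdisplay == 480)
            break;
    if (i == count)
        return 1;                                /* no matching modeline */

    XF86VidModeSwitchToMode(dpy, scr, modes[i]);
    XF86VidModeLockModeSwitch(dpy, scr, True);   /* disable Ctrl+Alt+plus/minus */
    XF86VidModeSetViewPort(dpy, scr, 0, 0);      /* pin panning to the corner */

    attrs.override_redirect = True;              /* borderless, unmanaged window */
    win = XCreateWindow(dpy, RootWindow(dpy, scr), 0, 0, 640, 480, 0,
                        CopyFromParent, InputOutput, CopyFromParent,
                        CWOverrideRedirect, &attrs);
    XMapRaised(dpy, win);
    XSync(dpy, False);
    /* ...create the GL context on 'win' and render here... */
    return 0;
}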


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
Sami Näätänen
2003-09-25 19:06:20 UTC
Post by David Olofson
(Instead, they set up a window the normal way and have the 3D
accelerator render directly into the area occupied by that window.
Fullscreen is done by creating a borderless window of the right size
and then switching the display resolution and disabling panning and
resolution switching shortcuts.)
In X they render to some offscreen video memory location, which is then
clip-blitted to the window. This can be clearly seen if one runs
glxgears without retrace sync at its default window size.

Let it run for, say, 10 seconds; after the last output, switch to another
desktop for 10 seconds. If you then get about double the frame rate you saw,
it is simply because the driver needs to blit the rendered
GFX to the X screen. This part can be avoided, but that needs Quadro
class HW, because those can render directly to windows, if the NVIDIA
driver settings are enabled.

I think this same problem persists in all Linux and, I think, Windows
drivers as well, because rendering to a window can't clip those parts
that are covered by other windows.

Sadly this same condition is still present in fullscreen in SDL,
because fullscreen is simply a borderless window. I don't know if
it would be possible to use RandR and get the root window's GC in
fullscreen situations. That would eliminate the need for the extra
blit, but I don't know if the drivers support this correctly.
Glenn Maynard
2003-09-25 20:40:37 UTC
Post by Sami Näätänen
Post by David Olofson
(Instead, they set up a window the normal way and have the 3D
accelerator render directly into the area occupied by that window.
Fullscreen is done by creating a borderless window of the right size
and then switching the display resolution and disabling panning and
resolution switching shortcuts.)
In X they render to some offscreen video memory location, which is then
clip-blitted to the window. This can be clearly seen if one runs
glxgears without retrace sync at its default window size.
Let it run for, say, 10 seconds; after the last output, switch to another
desktop for 10 seconds. If you then get about double the frame rate you saw,
it is simply because the driver needs to blit the rendered
GFX to the X screen. This part can be avoided, but that needs Quadro
class HW, because those can render directly to windows, if the NVIDIA
driver settings are enabled.
I think this same problem persists in all Linux and, I think, Windows
drivers as well, because rendering to a window can't clip those parts
that are covered by other windows.
Sadly this same condition is still present in fullscreen in SDL,
because fullscreen is simply a borderless window. I don't know if
In SDL under what?

Windows fullscreen windows are borderless windows that cover the screen;
Windows notices this and drivers are able to page flip instead of blit.
There's really no special flag as such that tells Windows to do this;
I've been able to get fast page flipping in Windows without changing the
video mode at all, only making the application cover the whole desktop.

There is a CDS_FULLSCREEN flag for ChangeDisplaySettings, but the main
effect of that is to tell Windows to revert the settings when the
program loses focus, not to enable page flipping (you can use that flag
and not actually make a fullscreen window).
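For illustration, a hedged Win32 sketch of that call (not SDL's code; the
mode values are placeholders):

#include <windows.h>

/* CDS_FULLSCREEN marks the mode change as temporary: Windows restores
 * the old settings when the program exits or loses focus. Making the
 * window actually cover the screen is a separate step. */
static BOOL switch_mode(int width, int height, int bpp)
{
    DEVMODE dm;

    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize       = sizeof(dm);
    dm.dmPelsWidth  = width;
    dm.dmPelsHeight = height;
    dm.dmBitsPerPel = bpp;
    dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;

    return ChangeDisplaySettings(&dm, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL;
}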

(Of course, be very careful when taking advantage of these properties--I'm
sure there are plenty of undocumented exceptions, and drivers that don't
do things the right way, and some parts of this might well be special
optimizations of my own nVidia drivers.)

I think Direct3D operates in an overlay, which could, in theory, allow
fast page flipping in a window; I don't know if it actually has that
effect, or why OpenGL implementations don't do this.
--
Glenn Maynard
David Olofson
2003-09-25 21:49:09 UTC
On Thursday 25 September 2003 22.40, Glenn Maynard wrote:
[...OpenGL, windowed mode & clipping...]
Post by Glenn Maynard
Post by Sami Näätänen
Sadly this same condition is still present in the fullscreen in
SDL, because the fullscreen is simply a borderless window. I
don't know if
In SDL under what?
No, this is about XFree86, which is what most people use on Linux,
FreeBSD and other Unix- and Unix-like systems.

It's possible that some Windows drivers use similar methods, but I
think most of them just take over the display hardware (killing the
desktop) and set up a display. That means they can use true page
flipping if you ask for double buffering, since they manage all
resources directly.
Post by Glenn Maynard
Windows fullscreen windows are borderless windows that cover the screen;
There's one difference from XFree86: Windows changes the desktop size
(which means the display surface is reallocated - potentially with
multiple pages), whereas XFree86 sticks with the desktop it allocated
when it was fired up.
Post by Glenn Maynard
Windows notices this and drivers are able to page flip
instead of blit. There's really no special flag as such that tells
Windows to do this; I've been able to get fast page flipping in
Windows without changing the video mode at all, only making the
application cover the whole desktop.
Interesting... That is, DX automatically starts to flip between the
visible part of the desktop (where your window is) and its back
surfaces, when it discovers that you can't see anything but the
client area of the window? That's a rather cool hack! :-)
Post by Glenn Maynard
There is a CDS_FULLSCREEN flag for ChangeDisplaySettings, but the
main effect of that is to tell Windows to revert the settings when
the program loses focus, not to enable page flipping (you can use
that flag and not actually make a fullscreen window).
...but then it will (normally) blit instead of flip?
Post by Glenn Maynard
(Of course, be very careful when taking advantage of these
properties--I'm sure there are plenty of undocumented exceptions,
and drivers that don't do things the right way, and some parts of
this might well be special optimizations of my own nVidia drivers.)
Right. Just for starters, what happens when the window is partially
occluded? Then it can no longer serve as one of the pages - or can
it...?
Post by Glenn Maynard
I think Direct3D operates in an overlay,
Now, *that* would certainly explain things. :-) However, that's
obviously a hardware feature - and I suspect it's one with certain
limitations, such as a limit to the number of overlays you can have
on-screen. Many cards have such a h/w overlay that's normally used
for video playback, but the ones I've seen so far seem to support
only one h/w overlay. Maybe this is a generalization and extension of
that stuff?
Post by Glenn Maynard
which could, in theory,
allow fast page flipping in a window; I don't know if it actually
has that effect, or why OpenGL implementations don't do this.
Well, it could definitely be used for that purpose, since flipping an
overlay is just like flipping the whole screen; change a pointer and
the RAMDAC DMA will do the rest...


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
Glenn Maynard
2003-09-25 22:23:44 UTC
Post by David Olofson
Interesting... That is, DX automatically starts to flip between the
visible part of the desktop (where your window is) and it's back
surfaces, when it discovers that you can't see anything but the
client area of the window? That's a rather cool hack! :-)
OpenGL, not DX. I think that's what it does. I was testing this a
while back, when messing with SDL's DIB bootstrap code; it was a while
ago, and this is from memory, so there may be parity errors here and there.
(Tip for OpenGL apps: compile SDL without DX support. It offers no benefits
over DIB and avoids having to deal with the occasional system with a broken
DX installation.)
Post by David Olofson
Post by Glenn Maynard
There is a CDS_FULLSCREEN flag for ChangeDisplaySettings, but the
main effect of that is to tell Windows to revert the settings when
the program loses focus, not to enable page flipping (you can use
that flag and not actually make a fullscreen window).
...but then it will (normally) blit instead of flip?
Right. At least with OpenGL, I don't think it can ever flip in a window.
Post by David Olofson
Right. Just for starters, what happens when the window is partially
occluded? Then it can no longer serve as one of the pages - or can
it...?
I'm not sure; probably not.
--
Glenn Maynard
David Olofson
2003-09-25 22:56:30 UTC
Post by Glenn Maynard
Post by David Olofson
Interesting... That is, DX automatically starts to flip between
the visible part of the desktop (where your window is) and it's
back surfaces, when it discovers that you can't see anything but
the client area of the window? That's a rather cool hack! :-)
OpenGL, not DX.
Ah, I see... Then it's most probably a driver hack, because AFAIK,
Windows doesn't provide much of a standard framework for OpenGL
drivers.
Post by Glenn Maynard
(Tip for OpenGL apps: compile SDL without DX
support. It offers no benefits over DIB and avoids having to deal
with the occasional system with a broken DX installation.)
Speaking of which, how would such a broken installation manifest
itself? (I have an old Win95 box here that refuses to accelerate
OpenGL, despite having opengl32.dll and the proper ICD installed...)


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
Glenn Maynard
2003-09-25 23:18:21 UTC
Post by David Olofson
Speaking of which, how would such a broken installation manifest
itself? (I have an old Win95 box here that refuses to accelerate
OpenGL, despite having opengl32.dll and the proper ICD installed...)
I had old bug reports where DX wouldn't initialize with some error or
another, I think. It was a while ago, and when I noticed that SDL had
two init paths and they both ended up mostly on the DIB end anyway, I
just removed DX5.
--
Glenn Maynard
Sami Näätänen
2003-09-26 20:36:42 UTC
Post by Glenn Maynard
Post by Sami Näätänen
Post by David Olofson
(Instead, they set up a window the normal way and have the 3D
accelerator render directly into the area occupied by that
window. Fullscreen is done by creating a borderless window of the
right size and then switching the display resolution and
disabling panning and resolution switching shortcuts.)
In X they render to some offscreen video memory location, which
is then clip-blitted to the window. This can be clearly seen if one
runs glxgears without retrace sync at its default window size.
Let it run for, say, 10 seconds; after the last output, switch to another
desktop for 10 seconds. If you then get about double the frame rate
you saw, it is simply because the driver needs to blit
the rendered GFX to the X screen. This part can be avoided, but
that needs Quadro class HW, because those can render directly to
windows, if the NVIDIA driver settings are enabled.
I think this same problem persist in all Linux and I think in
Windows drivers as well, because the rendering to window can't clip
those parts that are covered by another windows.
Sadly this same condition is still present in the fullscreen in
SDL, because the fullscreen is simply a borderless window. I don't
know if
In SDL under what?
Windows fullscreen windows are borderless windows that cover the
screen; Windows notices this and drivers are able to page flip
instead of blit. There's really no special flag as such that tells
Windows to do this; I've been able to get fast page flipping in
Windows without changing the video mode at all, only making the
application cover the whole desktop.
So if you make a borderless window the size of the desktop, you get the same
performance regardless of whether you set SDL_FULLSCREEN or not?

I think that there is a difference, looking at the SDL code (dx5).
This is from SetVideoMode, where the real resolution is changed, and
above this it says that it was copied from the DIB driver.

	if ( video->flags & SDL_FULLSCREEN ) {
		top = HWND_TOPMOST;
	} else {
		top = HWND_NOTOPMOST;
	}

So if the window is FULLSCREEN, it is made topmost, i.e. no window
or requester can cover it, thus making it possible to use real page
flipping.

David Olofson
2003-09-25 20:53:58 UTC
Post by Sami Näätänen
Post by David Olofson
(Instead, they set up a window the normal way and have the 3D
accelerator render directly into the area occupied by that
window. Fullscreen is done by creating a borderless window of the
right size and then switching the display resolution and
disabling panning and resolution switching shortcuts.)
In X they render to some offscreen video memory location, which
then is clip blitted to the window.
I know *some* drivers do this all the time, but some do indeed render
directly into the window if you tell them to. ATI's FireGL drivers
do, obviously, as that's what I'm using here.
Post by Sami Näätänen
This can be clearly seen if one
runs glxgears without retrace sync at its default window size.
Let it run for, say, 10 seconds; after the last output, switch to another
desktop for 10 seconds. If you then get about double the frame
rate you saw, it is simply because the driver needs to blit
the rendered GFX to the X screen.
That tells you whether the card is using true pageflipping or
back->front blits.

True single buffering is a lot easier to detect: If everything that
moves flickers like h*ll, you have single buffering. :-)

Frankly, I don't think single buffering is useful for anything but
debugging, or as a poor man's progress indicator when rendering still
images of extremely complex scenes.
Post by Sami Näätänen
This part can be avoided, but
that needs Quadro class HW, because those can render directly to
windows, if the NVIDIA driver settings are enabled.
Most cards can render directly into a window, if the driver supports
it. All it takes is changing the frame buffer address and pitch.
However, without hardware support, complex region clipping is going
to be rather slow and awkward to implement.

Dunno' if a FireGL 8800 has complex region clipping, but the driver
I'm using certainly has a few bugs:

Bug #1: When using single buffering + retrace sync, the maximum
frame rate is half the refresh rate. Seems like each
rectangle takes two retrace syncs...

Bug #2: Looks like every rectangular blit is retrace sync'ed
separately, as the resulting frame rate is exactly the
CRT refresh rate divided by the number of rectangles
needed to draw the partially occluded window. Divide
that by 2 again, in single buffer mode. (2 syncs/rect.)

Bug #3: Rectangle subdivision is suboptimal. A corner of another
window results in 3 rects, while 2 would have been
enough. (Easy to conclude after having spotted the two
bugs above.)

Guess I'll try the latest driver some time and see if some of this has
been fixed, though it hasn't annoyed me too much so far... (I don't
use single buffering, and I don't have OpenGL windows partially
occluded by other windows when using them.)
Post by Sami Näätänen
I think this same problem persist in all Linux and I think in
Windows drivers as well, because the rendering to window can't clip
those parts that are covered by another windows.
It sure can (or I'm hallucinating here ;-), but it's hairy and
possibly slow, unless there's hardware support for it.

I think I'll disable retrace sync and do some more tests to see how
the FireGL performs...
Post by Sami Näätänen
Sadly this same condition is still present in the fullscreen in
SDL, because the fullscreen is simply a borderless window. I don't
know if it could be possible to use randr and get the root windows
gc in fullscreen situations. This would eliminate the need for the
extra blit, but I don't know if the drivers support this correctly.
Well, all you need is some way of setting up two "windows" and a way
to flip between them...

How about using two single buffered windows and desktop panning (h/w
scrolling)? Requires a desktop big enough to fit two screens of the
desired resolution, but it might actually be possible to do on the
application level, at least with cards that support true single
buffering.

Anyway, that's just a quick/fun hack, though it demonstrates one of
the issues with XFree86 and drivers that want to do this sort of
stuff; you have one display buffer, and that's it.

DGA seems to be able to get around that, but I don't know if it's of
much help to OpenGL drivers as it is.


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
Sami Näätänen
2003-09-26 19:14:53 UTC
Post by David Olofson
Post by Sami Näätänen
Post by David Olofson
(Instead, they set up a window the normal way and have the 3D
accelerator render directly into the area occupied by that
window. Fullscreen is done by creating a borderless window of the
right size and then switching the display resolution and
disabling panning and resolution switching shortcuts.)
In X they render to some offscreen video memory location, which
then is clip blitted to the window.
I know *some* drivers do this all the time, but some do indeed render
directly into the window if you tell them to. ATI's FireGL drivers
do, obviously, as that's what I'm using here.
So ATI has done it differently then. Have they optimized it, i.e. if the
window is occluded in a very complex way, does it fall back to rendering
offscreen and then blitting to the visible part, or will it still render
in multiple passes? And if you occlude the window with a
rounded window corner, does it look correct?
Post by David Olofson
Post by Sami Näätänen
This can be clearly seen if one
runs glxgears without retrace sync at its default window size.
Let it run for, say, 10 seconds; after the last output, switch to another
desktop for 10 seconds. If you then get about double the frame
rate you saw, it is simply because the driver needs to blit
the rendered GFX to the X screen.
That tells you whether the card is using true pageflipping or
back->front blits.
Yeah that was the point.
Post by David Olofson
True single buffering is a lot easier to detect: If everything that
moves flickers like h*ll, you have single buffering. :-)
Yeah, and I get a headache if I look at that. :)
Post by David Olofson
Frankly, I don't think single buffering is useful for anything but
debugging, or as a poor man's progress indicator when rendering still
images of extremely complex scenes.
Well, better not use that as a progress indicator, because if you occlude
a truly single-buffered window you lose the occluded stuff. :)
Post by David Olofson
Post by Sami Näätänen
This part can be avoided, but
that needs Quatro class HW, because they can render directly to
windows, if the Nvidia driver settings are enabled.
Most cards can render directly into a window, if the driver supports
it. All it takes is changing the frame buffer address and pitch.
However, without hardware support, complex region clipping is going
to be rather slow and awkward to implement.
Yeah, it is possible, if the driver allows it, but as we know it
brings problems with clipping. I think NVIDIA didn't want to bother
with it and simply made it "flip" with a blit. That's a shame, because
it restricts performance.
Post by David Olofson
Post by Sami Näätänen
I think this same problem persist in all Linux and I think in
Windows drivers as well, because the rendering to window can't clip
those parts that are covered by another windows.
It sure can (or I'm hallucinating here ;-), but it's hairy and
possibly slow, unless there's hardware support for it.
Well, the ATI one was a pleasant surprise, but as we can see from the
bugs that you pointed out, it isn't easy to get it working perfectly.
Also, it might be even slower than doing it the way NVIDIA does it.
For example, if you have a small window with rounded corners in the middle of
the OpenGL window, you get tons of separate rectangles even with
optimized rectangle calculation. I don't know how this should be taken
care of, but I think that the way ATI has done it allows one to get the
full power out of OpenGL, unlike NVIDIA's version.
Post by David Olofson
I think I'll disable retrace sync and do some more tests to see how
the FireGL performs...
I would be interested to see your frame rate with glxgears (retrace sync
disabled of course), and how much it differs if you completely occlude
the glxgears window.
Henning
2003-09-25 19:00:17 UTC
Post by David Olofson
rendering. If the tear is bouncing around when playing 3D games, I'm
95% certain this is the problem.
Not really tried any 3D games on Linux. Apart from... Chromium, which uses OpenGL but is not 3D in the traditional sense, I'd say.
Post by David Olofson
Way(TM)). H*ll, they don't even bother enabling retrace sync by
default. :-(
Which suddenly seems to be a good thing, as it's not very nice to look at that constant tear in the middle. Suddenly I prefer it the other way around.
Post by David Olofson
Did you forget to list your preferred modelines in the config for 16
bpp? (Each bpp has a separate configuration...)
I'm using XFree4.something, and I can't find any modelines in my XF86Config file. I don't know how to change the refresh rate.

I'm really grateful for your help, I've learned a lot about all this already.

Regards
Henning
David Olofson
2003-09-25 19:51:25 UTC
Post by Henning
Post by David Olofson
rendering. If the tear is bouncing around when playing 3D games,
I'm 95% certain this is the problem.
Not really tried any 3d games on linux. Apart from... chromium,
which uses openGL but is not 3d in the traditional sense, I'd say.
If you'd like to have some for reference and/or fun, there are
playable demos of Q3, RTCW etc. If you have the full versions for
Windows or Mac, you can just download the latest updates for Linux,
install them and grab the data files from the Mac/Windows versions.
Post by Henning
Post by David Olofson
Way(TM)). H*ll, they don't even bother enabling retrace sync by
default. :-(
Which suddenly seems to be a good thing, as it's not very nice to
look at that constant tear in the middle. Suddenly I prefer it the
other way around.
Right, that's probably it; they can't get the flipping right, so they
disable retrace sync to spread the tearing... *heh*
Post by Henning
Post by David Olofson
Did you forget to list your preferred modelines in the config for
16 bpp? (Each bpp has a separate configuration...)
I'm using XFree4.something, and I can't find any modelines in my
XF86Config file.
That is, you're using the default, and perhaps whatever XFree might
get out of your monitor through the PnP stuff. That, however, should
be independent of resolution, except for the highest resolutions,
which could have refresh rate restrictions due to internal bandwidth
limitations in the video card.
Post by Henning
I don't know how to change the refresh rate.
It's an integral part of each modeline. You can either hack modelines
manually (there's a HOWTO somewhere), or tweak them "live" with
xvidtune. There are modeline calculators and lists of sensible
modelines as well, but I can't really point you at any single
complete resource...

I have a small collection of modelines for weird low resolution modes
for arcade games as well as extreme highres modes, and a bunch of
modes I collected from the standard configurations from various Linux
distros. I can make them available if anyone's interested.
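To give an idea of what such a configuration looks like, here is an
abbreviated XF86Config-4 sketch (the modeline is the standard VESA
1024x768 @ 85 Hz timing; the sync ranges are placeholders that should come
from your monitor's manual):

Section "Monitor"
    Identifier  "Monitor0"
    HorizSync   30-86             # placeholder - use your monitor's specs
    VertRefresh 50-120            # placeholder
    #        name           clock  hdisp hss  hse  htot  vdisp vss vse vtot
    Modeline "1024x768@85"   94.5  1024 1072 1168 1376    768 769 772 808 +hsync +vsync
EndSection

Section "Screen"
    Identifier   "Screen0"
    Monitor      "Monitor0"
    DefaultDepth 24
    # Each depth has its own Display subsection - and its own mode list.
    SubSection "Display"
        Depth 16
        Modes "1024x768@85" "800x600"
    EndSubSection
    SubSection "Display"
        Depth 24
        Modes "1024x768@85" "800x600"
    EndSubSection
EndSection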

Anyway, I'm not sure it will help in your case, considering that your
X server somehow *does* get the right modelines. It just doesn't use
the ones you want when you go 16 bpp. If there are no modelines in
your config, I honestly have no idea why this happens. (It's the wrong
way too; if anything, the *higher* bpps would have lower refresh
rates.)


I remember some thread about SDL scanning modes and stuff, and that
some version started scanning them in reverse order or something, to
find the highest refresh rates first, instead of the lowest. Might
have something to do with it...
Post by Henning
I'm really grateful for your help, I've learned a lot about all this already.
Well, that's what mailing lists are for. :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
Henning
2003-09-26 11:45:02 UTC
Post by David Olofson
If you'd like to have some for reference and/or fun, there are
playable demos of Q3, RTCW etc. If you have the full versions for
Downloaded the Q3 demo. Looks fine, no tearing or anything.
I was worried that the tearing might be very visible again once I had implemented enough stuff to make the framerate drop back to about 120 where it's visible when not using GL. Can't see which framerate I'm running Q3 with, but it looks good.
Post by David Olofson
Right, that's probably it; they can't get the flipping right, so they
disable retracy sync to spread the tearing... *heh*
I read once that NVIDIA were actually the ones making the best drivers for Linux. Guess that's open for debate, but at least they work.
Post by David Olofson
That is, you're using the default, and perhaps whatever XFree might
This is my version of XFree:
XFree86 Version 4.3.0 (Red Hat Linux release: 4.3.0-2)

I don't think I really want to edit the XF86Config file too much. I usually just end up with an X server that doesn't work. I'll just wait for the next Red Hat release, and hope they've fixed something so I can get the right frequency. I'm using a Philips 107T4, and that doesn't appear anywhere in the list of "known" monitors, so I guess that might have something to do with my weird frequencies at different color depths. Well, at least I've got my 85 Hz at 24+ bpp.
Post by David Olofson
I have a small collection of modelines for weird low resolution modes
for arcade games as well as extreme highres modes, and a bunch of
modes I collected from the standard configurations from various Linux
distros. I can make them available if anyone's interested.
If there were some way for me to find out which color depth glSDL actually gives me, then I guess I could put that modeline into the XF86Config file? That might give me a better refresh rate, but I guess it won't solve the vsyncing anyway.
Post by David Olofson
the ones you want when you go 16 bpp. If there are no modelines in
you config, I honestly have no idea why this happens. (It's the wrong
way too; if anything, the *higher* bpps would have lower refresh
rates.)
I would like something like windoze, where you can just pick your preferred refresh rate for the various resolutions and color depths.

Regards
Henning
David Olofson
2003-09-26 12:37:34 UTC
On Friday 26 September 2003 13.45, Henning wrote:
[...Q3 frame rate...]
Pull down the console and type: /cg_drawfps 1
Post by Henning
Post by David Olofson
Right, that's probably it; they can't get the flipping right, so
they disable retracy sync to spread the tearing... *heh*
I read once that nvidia actually was the ones to make the best
drivers for linux. Guess that's open for debate, but at least they
work.
They probably *are* making the best drivers, with the possible
exception of Xi Graphics, which sells drivers for Linux/x86 and
Solaris/x86.

Just look at the bugs I found yesterday... There are more of them,
some of which are much more serious, and most of them exist in the
Windows drivers as well. (The "back and front faces must have same
mode if one is disabled" one that I hacked a GtkRadiant work-around
for, for example.) This seems to be more or less representative for
ATI's drivers.

Matrox's drivers have loads of issues as well (like the G400 cards not
being able to handle multiple contexts, as used in most 3D modeller
apps and the like - major showstopper), and AFAIK, they still don't
have Linux drivers for the Parhelia cards. (Unfortunately, it seems
like Xi doesn't either...)
Post by Henning
Post by David Olofson
That is, you're using the default, and perhaps whatever XFree might
XFree86 Version 4.3.0 (Red Hat Linux release: 4.3.0-2)
I don't think I really want to edit the XF86Config file too much. I
usually just end up with an Xserver that doesn't work.
Great fun, isn't it! You should try *compiling* the SOB from source
some time... ;-)
Post by Henning
I'll just
wait for the next redhat release, and hope they've fixed something
so I can get the right frequency. I'm using a Philips 107T4, and
that doesn't appear anywhere in the list of "known" monitors, so I
guess that might have something to do with my weird frequencies at
different color depths. Well, at least I've got my 85hz at 24+bpp.
You don't really need a known monitor; any reasonably modern monitor
should be able to report what frequencies it supports, and AFAIK,
XFree86 has supported that for a while. Could be wrong, though...


[...]
Post by Henning
I guess if there were some way for me to find out which color depth
glSDL actually gives me, then I could put that modeline into the
XF86Config file I guess? That might give me better refresh rate,
Well, if you ask for 0, it should use the default. If not, it depends
on whether or not the OpenGL driver has some workaround that lets it
change the bpp. If it doesn't (most drivers don't), the bpp used by
glSDL and OpenGL apps is whatever you're using for your desktop.

Easy way of testing it: Try changing the bpp in the Q3A demo. If it
makes no difference, the driver can't change the bpp.
Post by Henning
but I guess it won't solve the vsyncing anyway.
Most probably not.


[...]
Post by Henning
I would like something like windoze, where you can just pick your
preferred refresh rate for the various resolutions and color
depths.
Yeah, that would be handy... Some distro might have such a tool, but I
haven't really looked for one, as I'm a bit nervous about "nice and
easy" tools hacking my config files. :-)


//David Olofson - Programmer, Composer, Open Source Advocate

.- Audiality -----------------------------------------------.
| Free/Open Source audio engine for games and multimedia. |
| MIDI, modular synthesis, real time effects, scripting,... |
`-----------------------------------> http://audiality.org -'
--- http://olofson.net --- http://www.reologica.se ---
Sami Näätänen
2003-09-25 21:14:02 UTC
Post by Henning
Post by David Olofson
Did you forget to list your preferred modelines in the config for
16 bpp? (Each bpp has a separate configuration...)
I'm using XFree4.something, and I can't find any modelines in my
XF86Config file. I don't know how to change the refresh rate.
If you have 4.3.x XFree86 and SDL is not 1.2.6, then the problem is the
automatic modelines that 4.3.x X versions produce. The newer X will
list multiple instances of different resolutions in descending refresh
rate order. The older X versions simply listed only the highest refresh
rate mode. This leads to the problem in the earlier SDL versions,
because those SDL versions traversed the list in reverse order and
thus picked the lowest matching refresh rate line.

If this doesn't work out for you with the 1.2.6 version, then I simply
got lucky, because the Gentoo package maker has provided the patch with
the 1.2.6 version as they did with the 1.2.5. :)
Stephane Marchesin
2003-09-26 13:38:20 UTC
Post by Henning
Post by David Olofson
I have a small collection of modelines for weird low resolution modes
for arcade games as well as extreme highres modes, and a bunch of
modes I collected from the standard configurations from various Linux
distros. I can make them available if anyone's interested.
I guess if there were some way for me to find out which color depth glSDL actually gives me, then I could put that modeline into the XF86Config file I guess? That might give me better refresh rate, but I guess it won't solve the vsyncing anyway.
You seem to use the Linux NVIDIA drivers. If that's the case, you only have to do the following to get vsync:
export __GL_SYNC_TO_VBLANK=1

You might want to put that in one of your init scripts if you like it.
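Alternatively (a sketch, not from this thread): since this is just an
environment variable read by the NVIDIA driver, an application can set it
itself before the GL context is created; drivers that don't know the
variable simply ignore it:

#include <stdlib.h>
#include "SDL.h"

int main(void)
{
    /* Set the NVIDIA-specific vsync variable before SDL creates the
     * OpenGL context; 0 = don't override the user's own setting. */
    setenv("__GL_SYNC_TO_VBLANK", "1", 0);

    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    SDL_SetVideoMode(640, 480, 0, SDL_OPENGL);
    /* ...render, SDL_GL_SwapBuffers(), etc... */
    SDL_Quit();
    return 0;
}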

Stephane