Setting depth buffer (z buffer) size (16bit / 24bit)
Hello all,
I have a GeForce 8500 GT running Linux with driver version 173.14.12. I would like to change the size of the depth buffer (or z-buffer, if you like) from 24 bit to 16 bit, but I cannot find any option for that, neither in the xorg driver options nor in the nvidia-settings graphical tool.
Is this supported by the Linux driver? The Windows driver lets me do that.
Setting depth buffer (z buffer) size (16bit / 24bit)
GLUT can get you part of the way, but it only lets you indicate that you want a depth buffer; it does not give you any control over its size. To control the size, you need to use a platform-specific interface such as AGL, GLX, or WGL. Of course, you also need graphics hardware that supports the depth size you want to set.
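On Linux that means GLX. Here is a minimal, untested sketch of requesting a 16-bit depth buffer through glXChooseVisual; the helper name choose_visual_16bit_depth is just illustrative, the Display/screen are assumed to come from your usual Xlib setup, and the driver is free to give you more depth bits than you asked for.
Code:
#include <GL/glx.h>

/* Sketch: ask GLX for an RGBA, double-buffered visual with at least
 * 16 depth bits.  Returns NULL if no matching visual exists. */
static XVisualInfo *choose_visual_16bit_depth(Display *dpy, int screen)
{
    int attribs[] = {
        GLX_RGBA,
        GLX_DOUBLEBUFFER,
        GLX_DEPTH_SIZE, 16,   /* request at least 16 depth bits */
        None
    };
    return glXChooseVisual(dpy, screen, attribs);
}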
Setting depth buffer (z buffer) size (16bit / 24bit)
I am trying to do some sanity debugging while writing shaders. It would be much easier if I could set the depth buffer's clear value to something known, so I can test for it inside my shader. The value you pass is what every pixel in the depth buffer gets initialized to. In OpenGL this is done with glClearDepth as far as I can tell; I am not sure about D3D. The right place for it seems to be the Viewport class. Maybe the interface would look like this:
Code:
/// @param value the value to initialize all pixels in the depth buffer with
void Viewport::setDepthClear(double value);
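For reference, a hypothetical implementation of that method would be little more than a wrapper around glClearDepth; the function name set_depth_clear below is just illustrative.
Code:
#include <GL/gl.h>

/* Illustrative sketch: set the value the depth buffer is filled with on the
 * next clear.  This only changes the clear value, not the buffer's bit depth. */
void set_depth_clear(double value)
{
    glClearDepth(value);              /* e.g. 1.0 = far plane */
    glClear(GL_DEPTH_BUFFER_BIT);     /* fill the depth buffer with it */
}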
Setting depth buffer (z buffer) size (16bit / 24bit)
You used to be able to just pick the OpenGL ES template and enable the depth buffer by changing a 0 to a 1 in the EAGLView.m file.
Code:
#define USE_DEPTH_BUFFER 0   // template default: no depth buffer
#define USE_DEPTH_BUFFER 1   // change it to this to enable the depth buffer
When OS 3.0 came out, the OpenGL ES template was rewritten to support OpenGL ES 2.0, and that simple way of enabling the depth buffer went away.
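If it helps, here is a rough sketch of what the old USE_DEPTH_BUFFER path did, which you can still add by hand to an ES 1.1 view. The helper name attach_depth_buffer is just illustrative, backingWidth/backingHeight are assumed to match the color renderbuffer, and GL_DEPTH_COMPONENT16_OES is what requests the 16-bit depth buffer.
Code:
#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>

/* Sketch: create a 16-bit depth renderbuffer and attach it to the
 * currently bound framebuffer. */
static GLuint attach_depth_buffer(GLint backingWidth, GLint backingHeight)
{
    GLuint depthRenderbuffer = 0;
    glGenRenderbuffersOES(1, &depthRenderbuffer);
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthRenderbuffer);
    glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES,
                             backingWidth, backingHeight);
    glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
                                 GL_RENDERBUFFER_OES, depthRenderbuffer);
    return depthRenderbuffer;
}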
Setting depth buffer (z buffer) size (16bit / 24bit)
Turning on the depth buffer with QGLFormat was not a problem, and I have read through the documentation on this on a few different sites. What I cannot find documented anywhere is how to set the number of depth bits to use. The GTK code would be as follows:
Code:
int attrlist[] =
{
    GDK_GL_RGBA,
    GDK_GL_DOUBLEBUFFER,
    GDK_GL_DEPTH_SIZE, 16,
    GDK_GL_NONE
};

if ((glarea = gtk_gl_area_new(attrlist)) == NULL)
{
    g_print("Error creating GtkGLArea!\n");
    return NULL;