dopod0901
2013-01-25 17:55
Viewed 87 times
Accepted

glClear() gives GL_OUT_OF_MEMORY on Intel HD 4000 (GL 4.0) but not on GeForce (GL 4.2)… why?

Now, this is an extremely odd behavior.

TL;DR: in a render-to-texture setup, after resizing the window, only the very next glClear(GL_COLOR_BUFFER_BIT) call on bound framebuffer 0 (the window's client area) reports GL_OUT_OF_MEMORY, and only on one of two GPUs; rendering nevertheless proceeds correctly.

Now, all the gritty details:

So this is on a Vaio Z with two GPUs (switchable via a physical toggle button on the machine):

  1. OpenGL 4.2.0 @ NVIDIA Corporation GeForce GT 640M LE/PCIe/SSE2 (GLSL: 4.20 NVIDIA via Cg compiler)

  2. OpenGL 4.0.0 - Build 9.17.10.2867 @ Intel Intel(R) HD Graphics 4000 (GLSL: 4.00 - Build 9.17.10.2867)

My program is in Go 1.0.3 64-bit under Win 7 64-bit using GLFW 64-bit.

I have a fairly simple and straightforward render-to-texture "mini pipeline". First, normal 3D geometry is rendered with the simplest of shaders (no lighting, nothing, just textured triangle meshes which are just a number of cubes and planes) to a framebuffer that has both a depth/stencil renderbuffer as depth/stencil attachment and a texture2D as color attachment. For the texture all filtering is disabled as are mip-maps.
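For reference, the off-screen target described above can be sketched roughly as follows. This is a sketch using the modern go-gl bindings (not necessarily the exact package available for Go 1.0.3 in 2013); `width` and `height` are assumed to be the window's framebuffer dimensions, and error handling is minimal.

```go
package main

import (
	"log"

	"github.com/go-gl/gl/v4.1-core/gl"
)

// makeRenderTarget builds an FBO with a texture color attachment
// (filtering and mipmapping disabled) plus a combined depth/stencil
// renderbuffer, mirroring the setup described in the question.
func makeRenderTarget(width, height int32) (fbo, tex uint32) {
	gl.GenFramebuffers(1, &fbo)
	gl.BindFramebuffer(gl.FRAMEBUFFER, fbo)

	// Color attachment: a plain RGBA8 texture, nearest filtering, no mipmaps.
	gl.GenTextures(1, &tex)
	gl.BindTexture(gl.TEXTURE_2D, tex)
	gl.TexImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, width, height, 0,
		gl.RGBA, gl.UNSIGNED_BYTE, nil)
	gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST)
	gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST)
	gl.FramebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
		gl.TEXTURE_2D, tex, 0)

	// Depth/stencil attachment: a single combined renderbuffer.
	var rb uint32
	gl.GenRenderbuffers(1, &rb)
	gl.BindRenderbuffer(gl.RENDERBUFFER, rb)
	gl.RenderbufferStorage(gl.RENDERBUFFER, gl.DEPTH24_STENCIL8, width, height)
	gl.FramebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT,
		gl.RENDERBUFFER, rb)

	if gl.CheckFramebufferStatus(gl.FRAMEBUFFER) != gl.FRAMEBUFFER_COMPLETE {
		log.Fatal("render-to-texture framebuffer incomplete")
	}
	gl.BindFramebuffer(gl.FRAMEBUFFER, 0)
	return fbo, tex
}
```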

Then I render a full-screen quad (actually a single "oversized" full-screen triangle) that just samples said framebuffer texture (the color attachment) with texelFetch(tex, ivec2(gl_FragCoord.xy), 0), so no wrapping is used.
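The fragment shader for that full-screen pass boils down to something like the following sketch (uniform and output names are assumed, not taken from the question):

```glsl
#version 400

uniform sampler2D tex;  // the FBO's color attachment
out vec4 fragColor;

void main() {
    // texelFetch takes integer texel coordinates and bypasses
    // filtering and wrap modes entirely, so the disabled sampler
    // state on the texture never comes into play.
    fragColor = texelFetch(tex, ivec2(gl_FragCoord.xy), 0);
}
```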

Both GPUs render this just fine, whether I force a core profile or not. No GL errors are ever reported, and everything renders as expected, too. Except when I resize the window while using the Intel HD 4000 GPU's GL 4.0 renderer (in both core and compatibility profiles): only in that case, a single resize records a GL_OUT_OF_MEMORY error directly after the very next glClear(GL_COLOR_BUFFER_BIT) call on framebuffer 0 (the screen), but only once after the resize, not in every subsequent loop iteration.

Interestingly, I don't even do any allocations on resize! I have temporarily disabled ALL logic occurring on window resize; right now I simply ignore the window-resize event entirely, meaning the RTT framebuffer and its depth and color attachments are not resized or recreated at all. The next glViewport call therefore still uses the same dimensions as when the window and GL context were first created, yet the error still occurs on glClear() (not before, only after, only once; I've double- and triple-checked).

Would this be a driver bug, or is there anything I could be doing wrongly here?


1 answer

  • dszn2485 2013-02-04 18:27
    Accepted

    Interesting glitch in the HD's GL driver, it seems:

    When I switched to the render-to-texture setup, I also set both the depth and stencil bits at GL context creation for the primary framebuffer 0 (i.e. the screen/window) to 0. This is when I started seeing this error, and the framerate became quite sluggish compared to before.

    I experimentally set the (strictly speaking unneeded) depth bits to 8, and both of these issues are gone. So it seems the current HD 4000 GL 4.0 driver just doesn't like a value of 0 for its depth-buffer bits at GL context creation.
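    The workaround amounts to requesting a (nominally unused) depth buffer when creating the window. A sketch using the modern GLFW 3 Go bindings (the original program used an earlier GLFW, where these were parameters to the window-open call; size and title here are placeholders):

    ```go
    package main

    import "github.com/go-gl/glfw/v3.3/glfw"

    func openWindow() (*glfw.Window, error) {
    	if err := glfw.Init(); err != nil {
    		return nil, err
    	}
    	// Requesting nonzero depth bits for framebuffer 0, even though the
    	// render-to-texture pipeline never depth-tests against the window's
    	// framebuffer, sidesteps the HD 4000 driver glitch described above.
    	glfw.WindowHint(glfw.DepthBits, 8)
    	glfw.WindowHint(glfw.StencilBits, 0)
    	return glfw.CreateWindow(1280, 720, "rtt", nil, nil)
    }
    ```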

