This doesn't seem like the proper place for this, but I've looked over my implementation on and off over the past couple of days and I do not see what's wrong. I think there's something I don't know about that's leading to this issue.

I made a tool that lets me quickly change font sizes and view the width and height of a specific character, a rendering of that character in the console, and a rendering of it as an OpenGL texture. Whenever the width of a character's bitmap is not a multiple of 4, the rows of the image look out of phase with each other. This is better explained with images, of course.

Font Size: 27

This trend repeats for every multiple of 4. My first thought was that there is some padding in the buffer returned by GetCodepointBitmap, but I quickly dropped that as a possibility because all of the console renders I have made look perfectly fine, even though using the same buffer as a texture results in the mess you see above.

Given the nature of the issue, it seems like something is being done in groups of four bytes rather than single bytes. I don't believe the problem is where I upload the bitmap to the GPU, because I combed through that a few times to ensure it's correct. Here's the code that actually performs the uploading:

    for (size_t i = 0; i < 93; ++i)
    {
        Character& character = characters[i];
        // 33 is added to i because we are only rendering the chunk of
        // characters that have a visible representation.
        character.mBitmap = stbtt_GetCodepointBitmap(
            &fontInfo,
            0,
            scale,
            (int)(i + 33),
            &character.mWidth,
            &character.mHeight,
            &character.xOffset,
            &character.yOffset);
        glGenTextures(1, &character.mId);
        glBindTexture(GL_TEXTURE_2D, character.mId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(
            GL_TEXTURE_2D,
            0,
            GL_R8,
            character.mWidth,
            character.mHeight,
            0,
            GL_RED,
            GL_UNSIGNED_BYTE,
            character.mBitmap);
        glGenerateMipmap(GL_TEXTURE_2D);
    }

I can show my shader code if that is of interest, but I have looked at that as a potential issue as well, and I see nothing wrong there after performing some tests to ensure the UVs are correct. With the behavior I have seen, the snippet above seems like the only thing that can contain the issue.

Also, a small nitpick. This is in one of the sample programs:

    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 512, 512, 0, GL_ALPHA, GL_UNSIGNED_BYTE, temp_bitmap);
It turns out it was definitely an issue of 4 bytes vs. 1. When the width of the rendering is a multiple of 4 everything is fine, because OpenGL expects every row of the image to occupy some multiple of 4 bytes by default. I padded every row with the number of bytes it was missing in my own modified buffer, and the character rendered correctly.

The row alignment is actually something you can specify within OpenGL, and you can do that with glPixelStorei: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glPixelStore.xhtml. I thought something like this would be handled by glTexImage2D because of the GL_UNSIGNED_BYTE param and now feel outplayed.