Use varyings for texture2D coordinates!
I’m using WebGL fragment shaders to apply per-pixel processing to a video stream. This post shows an inefficiency in my implementation, due to calculating texture coordinates in the fragment shader, then shows how to improve on it by using a varying.
Consider a “hello world” identity shader applied to your camera: a demo that captures your webcam stream, then draws each frame to a canvas. The start of this pipeline looks like:
- Get a `MediaStream` from `navigator.mediaDevices.getUserMedia(...)`
- Set it as the `srcObject` of a `<video>` element
- Get a callback for each frame of the video, using `.requestVideoFrameCallback` or `.requestAnimationFrame`
- Copy each frame into a WebGL texture with `gl.texImage2D`
- Draw each texture to the canvas using a `WebGLProgram`, possibly applying some per-pixel transformation
In that last step, here was my naive implementation of an “identity” shader:
```glsl
// Vertex shader
attribute vec2 clipspaceCoord;
void main(void) {
  gl_Position = vec4(clipspaceCoord, 0.0, 1.0);
}

// Fragment shader
precision mediump float;
uniform sampler2D tex;
uniform vec2 texSize;
void main(void) {
  vec2 texCoord = gl_FragCoord.xy / texSize;
  gl_FragColor = texture2D(tex, texCoord);
}
```
The fragment shader converts the fragment coordinate to a texture coordinate by dividing `gl_FragCoord.xy` by the pixel size of the texture. This works, but it means the fragment shader performs a division for every single pixel, and it means we have to pass the size of the texture to the fragment shader as a uniform.
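To make the per-pixel cost concrete, here’s a sketch in plain JavaScript of the work the naive fragment shader does for each fragment (`fragCoordToTexCoord` is a hypothetical name for illustration, not a WebGL API):

```javascript
// gl_FragCoord.xy gives the pixel center, e.g. (0.5, 0.5) for the
// bottom-left pixel; dividing by the texture size normalizes it
// into the [0, 1] range that texture2D expects.
function fragCoordToTexCoord(fragCoord, texSize) {
  return [fragCoord[0] / texSize[0], fragCoord[1] / texSize[1]];
}

// For a 640x480 texture, the fragment centered at (320.5, 240.5)
// samples the texture near (0.5, 0.5). That's two divisions,
// repeated for all 640 * 480 = 307200 fragments, every frame.
console.log(fragCoordToTexCoord([320.5, 240.5], [640, 480]));
```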
Here’s a better implementation, which uses a varying variable for the texture coordinate:
```glsl
// Vertex shader
attribute vec2 clipspaceCoord;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
void main(void) {
  v_texCoord = a_texCoord;
  gl_Position = vec4(clipspaceCoord, 0.0, 1.0);
}

// Fragment shader
precision mediump float;
uniform sampler2D tex;
varying vec2 v_texCoord;
void main(void) {
  gl_FragColor = texture2D(tex, v_texCoord);
}
```
The fragment shader no longer does any division, and our WebGL program never needs to know the pixel size of our texture. The texture coordinate `v_texCoord` is instead assigned once for each vertex, then linearly interpolated for each fragment.
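To see what that interpolation does, here’s a sketch in plain JavaScript. The GPU does this in hardware for every varying; `lerp` and `lerpVec2` are hypothetical helper names, not WebGL APIs:

```javascript
// Linear interpolation between two scalars.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// Interpolate a vec2 varying between two vertex values.
function lerpVec2(a, b, t) {
  return [lerp(a[0], b[0], t), lerp(a[1], b[1], t)];
}

// A fragment halfway along the edge from the bottom-left vertex
// (texture coordinate [0, 0]) to the bottom-right vertex ([1, 0])
// receives the interpolated texture coordinate [0.5, 0].
console.log(lerpVec2([0, 0], [1, 0], 0.5)); // [0.5, 0]
```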
The vertex shader just copies the texture coordinate from an attribute `a_texCoord`, which ultimately comes from a buffer that we populate from our application, like this:
```js
const texCoordBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
  0, 1, // texture coordinate for top left
  0, 0, // texture coordinate for bottom left
  1, 0, // texture coordinate for bottom right
  1, 1, // texture coordinate for top right
]), gl.STATIC_DRAW);
const texCoordLoc = gl.getAttribLocation(prog, 'a_texCoord');
gl.vertexAttribPointer(texCoordLoc, 2, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(texCoordLoc);
```
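Since the quad covers the whole canvas, the `clipspaceCoord` values can be derived from the texture coordinates: texture space runs from 0 to 1, while clipspace runs from -1 to 1. A minimal sketch (`texCoordToClipspace` is a hypothetical helper, and whether your texture needs a vertical flip depends on how it was uploaded, e.g. with `gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true)`):

```javascript
// Map a [0, 1] texture coordinate to a [-1, 1] clipspace coordinate
// for a full-canvas quad.
function texCoordToClipspace([u, v]) {
  return [u * 2 - 1, v * 2 - 1];
}

// The top-left texture coordinate [0, 1] maps to the top-left
// of clipspace, [-1, 1].
console.log(texCoordToClipspace([0, 1])); // [-1, 1]
```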
This may all seem obvious, but I overlooked it for a long time, because what I was doing worked. As a rule of thumb: if you’re using `texture2D` in your fragment shader, you probably want to pass in a varying. If you’re not passing in a varying, consider how you could do so.
Tagged #programming, #webgl. All content copyright James Fisher 2020. This post is not associated with my employer.