Kinect v2 delivers a 1080p (1920x1080) RGB color stream at 30 frames per second, so you get much better image quality than Kinect v1, which topped out at 640x480.
Let’s talk about the interfaces we need for getting the color stream from the sensor.
RGBQUAD is a structure that stores the red, green and blue data for a single pixel of an image.
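For reference, RGBQUAD comes from the Windows GDI headers and looks roughly like this. One detail worth knowing: despite the name, the members are laid out blue, green, red, plus a reserved byte.

```cpp
#include <cstdint>

// RGBQUAD as declared in the Windows GDI headers (wingdi.h).
// Note the member order: blue first, then green, then red.
typedef struct tagRGBQUAD {
    uint8_t rgbBlue;
    uint8_t rgbGreen;
    uint8_t rgbRed;
    uint8_t rgbReserved; // normally unused (can serve as alpha)
} RGBQUAD;
```

Because all four members are single bytes, the struct is exactly 4 bytes with no padding, which is what makes the cast to a raw `unsigned char` buffer (used later in this tutorial) safe.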
IColorFrameReader is the interface we use to read color frames from the sensor.
IColorFrameSource is the interface that represents the sensor's color source; it acts as a bridge that lets us open a color frame reader.
IColorFrame represents a frame from the frame reader.
IFrameDescription stores the properties of an image frame from the sensor, such as its height, width and field of view.
The flow for getting color data is similar to getting body data. First we connect to the sensor, get the color source from it, and use the color source to open a color frame reader. We then use that reader to acquire a color frame, and from the frame we get the raw RGBQUAD data. A cast converts the RGBQUAD buffer to whatever RGB representation your graphics library needs; for example, with openFrameworks you can cast it to unsigned char and load that data into a texture. Let's see how this works out in code. I'm going to be using openFrameworks for this tutorial.
Let’s see our member variables.
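Here is a sketch of what the member variables might look like in `ofApp.h`. The reader name `m_ColorFrameReader` is used later in this tutorial; the other names (`m_Sensor`, `m_ColorBuffer`, `m_ColorTexture`, and the width/height members) are my own and you can name them however you like.

```cpp
// ofApp.h (sketch) -- member names other than m_ColorFrameReader are
// illustrative; adapt them to your project.
#include "ofMain.h"
#include <Kinect.h> // Kinect for Windows SDK v2

class ofApp : public ofBaseApp {
public:
    void setup();
    void update();
    void draw();

private:
    IKinectSensor*     m_Sensor           = nullptr; // the Kinect itself
    IColorFrameReader* m_ColorFrameReader = nullptr; // reads color frames each tick
    RGBQUAD*           m_ColorBuffer      = nullptr; // raw pixel storage (width * height)
    ofTexture          m_ColorTexture;               // what we actually draw
    int                m_ColorWidth       = 1920;    // 1080p color stream
    int                m_ColorHeight      = 1080;
};
```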
Now, let’s setup the sensor for color stream.
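A possible `ofApp::setup` could look like the sketch below, assuming the member names from the header above. It follows the flow described earlier: connect to the sensor, get the color source, and open the reader; it also reads the stream dimensions through IFrameDescription rather than hard-coding them.

```cpp
// ofApp::setup (sketch): connect to the sensor, open the color reader,
// and allocate the pixel buffer and texture.
void ofApp::setup() {
    // 1. Connect to the default sensor and open it.
    if (FAILED(GetDefaultKinectSensor(&m_Sensor)) || m_Sensor == nullptr) return;
    m_Sensor->Open();

    // 2. Get the color source from the sensor.
    IColorFrameSource* colorSource = nullptr;
    if (SUCCEEDED(m_Sensor->get_ColorFrameSource(&colorSource))) {
        // Read the stream dimensions from the frame description.
        IFrameDescription* frameDescription = nullptr;
        if (SUCCEEDED(colorSource->get_FrameDescription(&frameDescription))) {
            frameDescription->get_Width(&m_ColorWidth);   // 1920
            frameDescription->get_Height(&m_ColorHeight); // 1080
            frameDescription->Release();
        }
        // 3. Use the color source to open the color frame reader.
        colorSource->OpenReader(&m_ColorFrameReader);
        colorSource->Release(); // only needed to open the reader
    }

    // Allocate the RGBQUAD buffer and the texture we'll draw into.
    m_ColorBuffer = new RGBQUAD[m_ColorWidth * m_ColorHeight];
    m_ColorTexture.allocate(m_ColorWidth, m_ColorHeight, GL_RGBA);
}
```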
From now on, we're going to use m_ColorFrameReader to access color data. Since we need fresh data every frame, we'll acquire the new frames in our ofApp::update method.
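The update step might be sketched like this, again assuming the member names introduced above. `AcquireLatestFrame` only hands back a frame when a new one is ready, so when it fails we simply keep the previous texture. Note that the sensor's native color format is not RGBA, which is why we ask `CopyConvertedFrameDataToArray` to convert for us.

```cpp
// ofApp::update (sketch): acquire the latest color frame, copy it into
// the RGBQUAD buffer as RGBA, and upload it to the texture.
void ofApp::update() {
    if (m_ColorFrameReader == nullptr) return;

    IColorFrame* colorFrame = nullptr;
    if (SUCCEEDED(m_ColorFrameReader->AcquireLatestFrame(&colorFrame))) {
        UINT capacity = m_ColorWidth * m_ColorHeight * sizeof(RGBQUAD);
        if (SUCCEEDED(colorFrame->CopyConvertedFrameDataToArray(
                capacity,
                reinterpret_cast<BYTE*>(m_ColorBuffer),
                ColorImageFormat_Rgba))) {
            // Cast RGBQUAD* to unsigned char* and load it into the texture.
            m_ColorTexture.loadData(
                reinterpret_cast<unsigned char*>(m_ColorBuffer),
                m_ColorWidth, m_ColorHeight, GL_RGBA);
        }
        colorFrame->Release(); // release every frame, or the reader stalls
    }
}
```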
Lastly we draw the image in our ofApp::draw method.
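Drawing is the easy part; one way to do it is to stretch the texture across the window:

```cpp
// ofApp::draw (sketch): draw the color texture across the whole window.
void ofApp::draw() {
    m_ColorTexture.draw(0, 0, ofGetWidth(), ofGetHeight());
}
```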
Keep in mind that color frames arrive at 30 FPS. If you are developing a game or an interactive application with higher frame-rate requirements, you'll have to get the data from another thread to prevent FPS loss. We'll look into how you can do that after we finish with the streams.