I recently had to build a demo client that shows short video messages, for an Ubuntu environment.
After checking out GTK+ I decided to go with the more natively OOP Qt toolbox (GTKmm didn't look right to me), and I think I made the right choice.
So anyway, I have my video files encoded in some unknown format, and I need my program to show them in some widget. I went around looking for an existing example, but I couldn't find anything concrete, except for a good tip here that led me here for an example of using ffmpeg's libavformat and libavcodec, but no end-to-end example including the Qt code.
The ffmpeg example was simple enough to just copy-paste into my project, but the whole business of painting over the widget's canvas was not covered. It turns out painting video is not as simple as overriding paintEvent()…
First, you need a separate thread for grabbing frames from the video file, because you can't let the GUI event thread do that.
That makes sense, but when the frame-grabbing thread (which I called VideoThread) actually grabbed a frame and put it somewhere in memory, I needed to tell the GUI thread to take those buffered pixels and paint them over the widget's canvas.
This is the moment where I praise Qt’s excellent Signals/Slots mechanism. So I’ll have my VideoThread emit a signal notifying some external entity that a new frame is in the buffer.
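I won't show my full class declaration, but a minimal sketch of VideoThread could look like this (only the frameReady() signal, the _v pointer and run() actually appear in the code below; the rest is one plausible shape, so adapt as needed):

class VideoThread : public QThread
{
    Q_OBJECT

public:
    // _v is the widget that will display the frames.
    VideoThread(VideoWidget *v) : _v(v) {}

signals:
    void frameReady(); // emitted whenever a new frame is in the buffer

protected:
    void run(); // the frame-grabbing loop, shown below

private:
    VideoWidget *_v;
};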
Here’s a little code:
void VideoThread::run()
{
    /* ... Initialize libavformat & libavcodec data structures.
       You can see it in the example I referred to before */

    // Open video file
    if(av_open_input_file(&pFormatCtx, "lala.avi", NULL, 0, NULL)!=0)
        return; // Couldn't open file

    // Retrieve stream information
    if(av_find_stream_info(pFormatCtx)<0)
        return; // Couldn't find stream information

    // Find the first video stream ...
    // Get a pointer to the codec context for the video stream...
    // Find the decoder for the video stream...
    // Open codec...

    // Allocate video frame
    pFrame=avcodec_alloc_frame();

    // Allocate an AVFrame structure
    pFrameRGB=avcodec_alloc_frame();
    if(pFrameRGB==NULL)
        return;

    int dst_fmt = PIX_FMT_RGB24;
    int dst_w = 160;
    int dst_h = 120;

    // Determine required buffer size and allocate buffer
    numBytes = avpicture_get_size(dst_fmt, dst_w, dst_h);
    buffer = new uint8_t[numBytes + 64];

    // Put a PPM header on the buffer
    int headerlen = sprintf((char *) buffer, "P6\n%d %d\n255\n", dst_w, dst_h);

    _v->buf = (uchar*)buffer;
    _v->len = avpicture_get_size(dst_fmt,dst_w,dst_h) + headerlen;

    // Assign appropriate parts of buffer to image planes in pFrameRGB...

    // I use libswscale to scale the frames to the required size.
    // Setup the scaling context:
    SwsContext *img_convert_ctx;
    img_convert_ctx = sws_getContext(
        pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
        dst_w, dst_h, dst_fmt,
        SWS_BICUBIC, NULL, NULL, NULL);

    // Read frames and notify
    i=0;
    while(av_read_frame(pFormatCtx, &packet)>=0) {
        // Is this a packet from the video stream?
        if(packet.stream_index==videoStream) {
            // Decode video frame
            avcodec_decode_video(pCodecCtx, pFrame, &frameFinished,
                                 packet.data, packet.size);

            // Did we get a video frame?
            if(frameFinished) {
                // Convert the image to RGB
                sws_scale(img_convert_ctx,
                          pFrame->data, pFrame->linesize,
                          0, pCodecCtx->height,
                          pFrameRGB->data, pFrameRGB->linesize);

                emit frameReady();

                // My video is 5FPS so sleep for 200ms.
                this->msleep(200);
            }
        }

        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
    }

    // Free the RGB image
    delete [] buffer;
    av_free(pFrameRGB);

    // Free the YUV frame
    av_free(pFrame);

    // Close the codec...
    // Close the video file...
} // end VideoThread::run
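One line I glossed over above is the "assign appropriate parts of buffer to image planes in pFrameRGB" step. In the ffmpeg example I linked to, this is done with avpicture_fill(); a sketch of how it could look here, assuming the pixel data should land right after the PPM header:

// Sketch of the elided plane-assignment step (assuming the
// avpicture_fill() idiom from the ffmpeg example): point pFrameRGB's
// data pointers into our buffer, just past the PPM header, so that
// sws_scale() writes the RGB pixels where the widget will read them.
avpicture_fill((AVPicture *)pFrameRGB, buffer + headerlen,
               dst_fmt, dst_w, dst_h);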
OK, so I have a frame-grabber that emits a frameReady signal every time the buffer is full and ready for painting.
A couple of things to notice:
- I convert the image format to PIX_FMT_RGB24 (avcodec.h), which is required by Qt’s QImage::fromData() method.
- I scale the image using ffmpeg's libswscale. All conversion/scaling methods inside libavcodec are deprecated now.
But it's fairly simple, and here's a good example; just remember you need a sws_getContext() and then a sws_scale().
- I totally disregard the actual frame rate here; I just sleep for 200ms because I know my file is 5FPS. For a (far) more sophisticated way to get the FPS, which is very important if this is not a constant-frame-rate video, you can find one here. A rough sketch follows this list.
- I don't cover audio in this example, although the mechanism to extract it from the file exists… you just need to grab the audio stream's frames. For playing audio you also need some Qt-external library. In a different project I used SDL very easily; here's an example online.
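Here is that rough sketch of pacing by the stream's reported frame rate instead of the hard-coded 200ms sleep (an addition of mine, assuming the same ffmpeg-era API as above; for truly variable frame rates you would need the packet timestamps instead):

// Sketch (assumption): derive the inter-frame delay from the
// stream's reported frame rate instead of hard-coding 200ms.
AVStream *st = pFormatCtx->streams[videoStream];
double fps = av_q2d(st->r_frame_rate); // e.g. 5.0 for my 5FPS file
if(fps > 0)
    this->msleep((unsigned long)(1000.0 / fps));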
Now, for painting over the widget.
This is fairly easy:
void VideoWidget::paintEvent(QPaintEvent * e)
{
    QPainter painter(this);
    if(buf) {
        QImage i = QImage::fromData(buf, len, "PPM");
        painter.drawImage(QPoint(0,0), i);
    }
}
Two things to note:
- The widget needs to be given the pointer to the video frame buffer (buf).
- The frame buffer needs to be in PPM format. That means it needs a PPM header, which looks something like this: "P6\n320 240\n255\n", followed by all the pixels in 3-byte-per-pixel format (RGB24). You can see that I take care of that in the previous code block.
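For completeness, a minimal sketch of what the VideoWidget declaration could look like (only buf, len and paintEvent() are actually used in this post; the rest is one plausible shape):

class VideoWidget : public QWidget
{
    Q_OBJECT

public:
    VideoWidget(QWidget *parent = 0) : QWidget(parent), buf(NULL), len(0) {}

    uchar *buf; // PPM header + RGB24 pixels, filled by VideoThread
    int    len; // total buffer length, header included

protected:
    void paintEvent(QPaintEvent *e); // shown above
};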
Finally we need to orchestrate this whole mess.
So in my GUI-screen class I do:
....
vt = new VideoThread();
connect(vt, SIGNAL(frameReady()), this, SLOT(updateVideoWidget()));
vt->start();
....
And:
void playMessage::updateVideoWidget()
{
    videoWidget->repaint(); // or update().
}
This will make the widget repaint whenever a new frame is ready.
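One detail worth spelling out (a side note of mine): frameReady() is emitted from VideoThread, while updateVideoWidget() has to run in the GUI thread. Qt's default Qt::AutoConnection will normally queue the call across threads for you, but you can make it explicit if you prefer:

// Hypothetical variant: make the cross-thread delivery explicit.
// With Qt::QueuedConnection the slot runs in the receiver's (GUI)
// thread the next time its event loop spins, which painting requires.
connect(vt, SIGNAL(frameReady()), this, SLOT(updateVideoWidget()),
        Qt::QueuedConnection);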
Note:
- In this example I don't take care of multi-threading issues. Since the GUI and the ffmpeg decoder threads share a memory buffer, I should probably have a mutex to protect it; it's a classic producer-consumer problem (see the sketch after these notes).
- Performance-wise, Qt's paint mechanism is by far the worst way to go when displaying video… but it's great for a quick-and-dirty solution (I only needed 5FPS). A more performance-friendly solution would probably use an overlay and frame-serving with SDL.
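Here is a minimal sketch of that mutex protection, assuming a single QMutex shared between VideoThread and VideoWidget (my code above runs unprotected):

// In VideoThread::run(): lock while sws_scale() writes the buffer.
mutex.lock();
sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize,
          0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
mutex.unlock();
emit frameReady();

// In VideoWidget::paintEvent(): hold the same (shared) mutex while
// QImage::fromData() reads the buffer.
void VideoWidget::paintEvent(QPaintEvent *e)
{
    QPainter painter(this);
    QMutexLocker locker(&mutex); // hypothetical shared QMutex
    if(buf) {
        QImage i = QImage::fromData(buf, len, "PPM");
        painter.drawImage(QPoint(0,0), i);
    }
}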
Enjoy!
Roy.
4 replies on “Showing video with Qt toolbox and ffmpeg libraries”
Hi,
I nearly got your piece of code to work, but you left out some parts which aren’t really obvious to me.
Especially these:
(1)
_v->buf = (uchar*)buffer;
_v->len = avpicture_get_size(dst_fmt,dst_w,dst_h) + headerlen;
What is _v? Is it a pointer to the VideoWidget as a member of VideoThread, or a global, or something…
(2)
// Assign appropriate parts of buffer to image planes
// in pFrameRGB…
Which parts? I had something like
buffer[0*numBytes] = pFrameRGB->data[0];
buffer[1*numBytes] = pFrameRGB->data[1];
buffer[2*numBytes] = pFrameRGB->data[2];
But I immediately knew it's completely wrong, as this doesn't put any data in the buffer (it's copying pointers to variables, for a start). So do I have to copy those data arrays into the buffer? Hmm, I doubt it, as you only do it once for the whole video stream. So maybe pFrameRGB has to point to parts of the buffer. Is that it? But you already allocated something for pFrameRGB in
pFrameRGB=avcodec_alloc_frame();
And that’s where I don’t get it… 🙂
Can you give a bit more advice on those parts? Or do you have a working copy of that code that you want to share?
Thanks a lot,
Brecht
Hi
I added the code to the Google Code repository of the blog (http://code.google.com/p/morethantechnical/); you can check it out.
It’s under the QT_ffmpeg_video directory.
If it actually works… you'll find out yourself. This project is kind of old, and I haven't kept expanding it since, so this is all I have.
I'm sure though that it's not too complicated to get it to work.
Good luck
Roy.
Hi,
Is there any way to get this code:
(http://code.google.com/p/morethantechnical/)
This is exactly what I want to do.
Thanks, Malik
Sorry, this is a very old project, so I don't have its code anymore.
But if I remember correctly, all the code is in the post; you just need to copy-paste it into a file and compile.
Roy.