
video encoding - ffmpeg: RGB to YUV conversion loses color and scale

I am trying to convert RGB frames to YUV420P format in ffmpeg/libav. Below is the code for the conversion, along with images from before and after the conversion. The converted image loses all color information and the scale also changes significantly. Does anybody have an idea how to handle this? I am completely new to ffmpeg/libav!

// Did we get a video frame?
   if(frameFinished)
   {
       i++;
       sws_scale(img_convert_ctx, (const uint8_t * const *)pFrame->data,
                 pFrame->linesize, 0, pCodecCtx->height,
                 pFrameRGB->data, pFrameRGB->linesize);                   

       //==============================================================
       AVFrame *pFrameYUV = avcodec_alloc_frame();
       // Determine required buffer size and allocate buffer
       int numBytes2 = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,                                 
                                          pCodecCtx->height);
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes2*sizeof(uint8_t));

       avpicture_fill((AVPicture *)pFrameYUV, buffer, PIX_FMT_RGB24,
                       pCodecCtx->width, pCodecCtx->height);


       rgb_to_yuv_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,  
                                       PIX_FMT_RGB24,
                                       pCodecCtx->width,pCodecCtx->height, 
                                       PIX_FMT_RGB24,
                                       SWS_BICUBIC, NULL,NULL,NULL);

       sws_scale(rgb_to_yuv_ctx, pFrameRGB->data, pFrameRGB->linesize, 0, 
                 pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);

       sws_freeContext(rgb_to_yuv_ctx);

       SaveFrame(pFrameYUV, pCodecCtx->width, pCodecCtx->height, i);

       av_free(buffer);
       av_free(pFrameYUV);
   }

(image: original RGB24 frame)

(image: frame after RGB24->YUV420P conversion)



1 Reply


Well, for starters, I will assume that where you have:

rgb_to_yuv_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,  
                                   PIX_FMT_RGB24,
                                   pCodecCtx->width,pCodecCtx->height, 
                                   PIX_FMT_RGB24,
                                   SWS_BICUBIC, NULL,NULL,NULL);

You really intended:

rgb_to_yuv_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,  
                                   PIX_FMT_RGB24,
                                   pCodecCtx->width,pCodecCtx->height, 
                                   PIX_FMT_YUV420P,
                                   SWS_BICUBIC, NULL,NULL,NULL);

I'm also not sure why you are calling swscale twice!

YUV420P is a planar format. This means all three channels are stored independently, whereas packed RGB is stored interleaved, like: RGBRGBRGB

YUV420P is stored like: YYYYYYYYYYYYYYYY..UUUUUUUUUU..VVVVVVVV

So swscale requires you to give it three pointers, one per plane.
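To make the packed vs. planar difference concrete, here is a minimal sketch of how pixel (x, y) would be read in each layout; the function and parameter names are hypothetical, not taken from the code above:

#include <stdint.h>

/* Hypothetical example: reading pixel (x, y) from a packed RGB24 buffer
 * versus a planar YUV420P buffer. Each plane has its own stride. */
static void read_pixel(const uint8_t *rgb, int rgb_stride,
                       const uint8_t *y_plane, int y_stride,
                       const uint8_t *u_plane, const uint8_t *v_plane, int uv_stride,
                       int x, int y)
{
    /* Packed RGB24: the three components of one pixel sit next to each other. */
    uint8_t r = rgb[y * rgb_stride + 3 * x + 0];
    uint8_t g = rgb[y * rgb_stride + 3 * x + 1];
    uint8_t b = rgb[y * rgb_stride + 3 * x + 2];

    /* Planar YUV420P: luma is full resolution; U and V are subsampled 2x2,
     * so one chroma sample covers a 2x2 block of luma samples. */
    uint8_t luma = y_plane[y * y_stride + x];
    uint8_t cb   = u_plane[(y / 2) * uv_stride + (x / 2)];
    uint8_t cr   = v_plane[(y / 2) * uv_stride + (x / 2)];

    (void)r; (void)g; (void)b; (void)luma; (void)cb; (void)cr;
}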

Next, you want your line stride to be a multiple of 16 or 32 so the processor's vector units can be used. And finally, the dimensions of the Y plane need to be divisible by two (because the U and V planes are each a quarter the size of the Y plane: half the width and half the height).

So, let's rewrite this:

#define RNDTO2(X)  ( (X) & 0xFFFFFFFE )
#define RNDTO32(X) ( ( (X) % 32 ) ? ( ( (X) + 32 ) & 0xFFFFFFE0 ) : (X) )




if(frameFinished)
{
    static struct SwsContext *swsCtx = NULL;

    // Round the destination dimensions down to even values and the strides
    // up to a multiple of 32.
    int width    = RNDTO2 ( pCodecCtx->width );
    int height   = RNDTO2 ( pCodecCtx->height );
    int ystride  = RNDTO32 ( width );
    int uvstride = RNDTO32 ( width / 2 );
    int ysize    = ystride * height;
    int vusize   = uvstride * ( height / 2 );
    int size     = ysize + ( 2 * vusize );

    // One contiguous buffer holding the Y plane, then U, then V.
    uint8_t *pFrameYUV = (uint8_t *)malloc( size );
    uint8_t *plane[]   = { pFrameYUV, pFrameYUV + ysize, pFrameYUV + ysize + vusize, NULL };
    int      stride[]  = { ystride, uvstride, uvstride, 0 };

    // Convert straight from the decoded frame, using the decoder's pixel
    // format instead of assuming RGB.
    swsCtx = sws_getCachedContext ( swsCtx, pCodecCtx->width, pCodecCtx->height,
                                    pCodecCtx->pix_fmt, width, height, AV_PIX_FMT_YUV420P,
                                    SWS_LANCZOS | SWS_ACCURATE_RND, NULL, NULL, NULL );
    sws_scale ( swsCtx, (const uint8_t * const *)pFrame->data, pFrame->linesize, 0,
                pCodecCtx->height, plane, stride );

    // ... use the planes here (e.g. write them out), then release the buffer.
    free( pFrameYUV );
}

I also switched your algorithm to use SWS_LANCZOS | SWS_ACCURATE_RND. This will give you better-looking images; change it back if it is too slow. I also used the pixel format from the source frame instead of assuming it is RGB all the time.
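One more thing to watch: the SaveFrame() from your original code presumably writes packed pixel data, so it will not work unchanged on a planar buffer. Here is a minimal sketch of dumping the three YUV420P planes to a raw .yuv file instead; the helper name and file naming are hypothetical, and it assumes the plane/stride arrays from the snippet above:

#include <stdio.h>
#include <stdint.h>

// Hypothetical helper: write a strided YUV420P buffer as a tightly packed
// raw .yuv file (Y plane, then U, then V). The result can be checked with
// e.g. "ffplay -f rawvideo -pixel_format yuv420p -video_size WxH frame1.yuv".
static void save_yuv420p(uint8_t *plane[], int stride[],
                         int width, int height, int iFrame)
{
    char szFilename[32];
    FILE *pFile;
    int y;

    sprintf(szFilename, "frame%d.yuv", iFrame);
    pFile = fopen(szFilename, "wb");
    if (!pFile)
        return;

    // Y plane: full resolution, written row by row to drop the stride padding.
    for (y = 0; y < height; y++)
        fwrite(plane[0] + y * stride[0], 1, width, pFile);
    // U and V planes: half width, half height.
    for (y = 0; y < height / 2; y++)
        fwrite(plane[1] + y * stride[1], 1, width / 2, pFile);
    for (y = 0; y < height / 2; y++)
        fwrite(plane[2] + y * stride[2], 1, width / 2, pFile);

    fclose(pFile);
}

You would call it inside the loop above as, for example, save_yuv420p(plane, stride, width, height, i), using the frame counter i from your original loop.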

