ffmpeg: splitting RGB and alpha channels using a filter

I'm trying to use ffmpeg to split an input file into two separate files:

  1. An MP4 with only the R, G, and B channels
  2. An MP4 with the "extracted" A channel (a so-called key clip; see http://ffmpeg-users.933282.n4.nabble.com/quot-Extracting-quot-Alpha-Channel-td3700227.html)

I've managed to do both separately, but now I want to combine them into a single command. Here's what I do:

ffmpeg -r $FPS -y -i input.flv -vcodec libx264 -vpre ipod640 -acodec libfaac -s 256x256 -r $FPS -filter_complex INSERT_FILTER_HERE rgb.mp4 alpha.mp4

where INSERT_FILTER_HERE is:

format=rgba, split [rgb_in][alpha_in];
[rgb_in] fifo, lutrgb=a=minval [rgb_out];
[alpha_in] format=rgba, split [T1], fifo, lutrgb=r=maxval:g=maxval:b=maxval, [T2] overlay [out];
[T1] fifo, lutrgb=r=minval:g=minval:b=minval [T2]

In short, I split the input into two streams: for the first stream I "remove" the alpha channel, and for the second stream I extract a grayscale representation of the alpha channel. When I put this through graph2dot, it works fine, with a nullsink as output.
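
For reference, the graph2dot check mentioned above can be reproduced roughly like this (a sketch only: graph2dot is the helper shipped in ffmpeg's tools/ directory, Graphviz's dot is assumed to be installed, and nullsrc/nullsink are added purely so the graph has a source and sinks to connect to):

# render the filtergraph to a PNG for inspection (nullsrc/nullsink stand in for the real input/outputs)
echo "nullsrc, format=rgba, split [rgb_in][alpha_in]; \
      [rgb_in] fifo, lutrgb=a=minval, nullsink; \
      [alpha_in] format=rgba, split [T1], fifo, lutrgb=r=maxval:g=maxval:b=maxval, [T2] overlay, nullsink; \
      [T1] fifo, lutrgb=r=minval:g=minval:b=minval [T2]" \
  | tools/graph2dot -o graph.tmp && dot -Tpng graph.tmp -o graph.png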

However, when I run it in ffmpeg with -filter_complex, I get:

ffmpeg version N-41994-g782763e Copyright (c) 2000-2012 the FFmpeg developers
  built on Jun 28 2012 17:45:15 with gcc 4.6.3
  configuration: --enable-gpl --enable-nonfree --enable-pthreads --enable-filters --enable-libfaac --enable-libmp3lame --enable-libx264 --enable-libtheora --enable-libvpx --enable-postproc --enable-avfilter
  libavutil      51. 63.100 / 51. 63.100
  libavcodec     54. 29.101 / 54. 29.101
  libavformat    54. 11.100 / 54. 11.100
  libavdevice    54.  0.100 / 54.  0.100
  libavfilter     3.  0.100 /  3.  0.100
  libswscale      2.  1.100 /  2.  1.100
  libswresample   0. 15.100 /  0. 15.100
  libpostproc    52.  0.100 / 52.  0.100
Input #0, flv, from 'input.flv':
  Metadata:
    audiodelay      : 0
    canSeekToEnd    : true
  Duration: 00:01:10.56, start: 0.000000, bitrate: 1964 kb/s
    Stream #0:0: Video: vp6a, yuva420p, 800x950, 1536 kb/s, 25 tbr, 1k tbn, 1k tbc
    Stream #0:1: Audio: mp3, 44100 Hz, stereo, s16, 128 kb/s
[graph 0 input from stream 0:0 @ 0x2e4c6e0] w:800 h:950 pixfmt:yuva420p tb:1/30 fr:30/1 sar:0/1 sws_param:flags=2
Output pad "default" for the filter "Parsed_lutrgb_3" of type "lutrgb" not connected to any destination

Any ideas on how to make ffmpeg recognize that it has to write [rgb_out] to rgb.mp4 and [out] to alpha.mp4?

Thanks in advance!

1 Reply

You need to explicitly map the outputs from the filters to output files using -map. From the documentation for -filter_complex:

Output link labels are referred to with -map. Unlabeled outputs are added to the first output file.

For example, to overlay an image over video:

ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]'
  -map '[out]' out.mkv

So in your case you'd want something like:

ffmpeg ... -i input ... -filter_complex 'split [rgb_in][alpha_in]; ... [rgb_out];
  ... [alpha_out]' -map '[rgb_out]' rgb.mp4 -map '[alpha_out]' alpha.mp4
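
Putting that together with the filtergraph and options from the question, the full command would look roughly like this (an untested sketch: the codec settings are copied verbatim from the question, the input audio is mapped into rgb.mp4 only, and alpha.mp4 is written without audio via -an; adjust the audio mapping as needed):

# the two -map options route the two labelled filter outputs to the two output files
ffmpeg -r $FPS -y -i input.flv \
  -filter_complex "format=rgba, split [rgb_in][alpha_in]; \
                   [rgb_in] fifo, lutrgb=a=minval [rgb_out]; \
                   [alpha_in] format=rgba, split [T1], fifo, lutrgb=r=maxval:g=maxval:b=maxval, [T2] overlay [out]; \
                   [T1] fifo, lutrgb=r=minval:g=minval:b=minval [T2]" \
  -map '[rgb_out]' -map 0:a -vcodec libx264 -vpre ipod640 -acodec libfaac -s 256x256 -r $FPS rgb.mp4 \
  -map '[out]' -an -vcodec libx264 -vpre ipod640 -s 256x256 -r $FPS alpha.mp4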
