python - Convert an h264 byte string to OpenCV images

In Python, how do I convert an h264 byte string to images OpenCV can read, only keeping the latest image?

Long version:

Hi everyone.

Working in Python, I'm trying to get the output of adb screenrecord piped in a way that allows me to capture a frame whenever I need it and use it with OpenCV. As I understand it, I need to read the stream continuously because it's H.264.

I've tried multiple things to get it working and concluded that I needed to ask for specific help.

The following gets me the stream I need and works very well when I print stream.stdout.read(n).

import subprocess as sp

adbCmd = ['adb', 'exec-out', 'screenrecord', '--output-format=h264', '-']
stream = sp.Popen(adbCmd, stdout = sp.PIPE, universal_newlines = True)
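
For instance, a quick sanity check is to print a chunk straight from the pipe (1024 here is an arbitrary chunk size):

print(stream.stdout.read(1024))  # dump a chunk of the incoming screenrecord stream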

Universal newlines was necessary to get it to work on Windows.

Doing:

sp.call(['ffplay', '-'], stdin = stream.stdout, universal_newlines = True)

Works.

The problem is that I am now trying to use ffmpeg to take the incoming H.264 stream and output as many frames as possible, overwriting the last frame if needed.

ffmpegCmd = ['ffmpeg', '-f', 'image2pipe', '-pix_fmt', 'bgr24', '-vcodec', 'h264', 'fps=30', '-']
ffmpeg = sp.Popen(ffmpegCmd, stdin = stream.stdout, stdout = sp.PIPE, universal_newlines = True)

This is what I think should be used, but I always get the error "Output file #0 does not contain any stream".

Edit:

Final Answer

Turns out the universal_newlines option was ruining the line endings and gradually corrupting the output. Also, the ffmpeg command was wrong; see LordNeckbeard's answer.

Here's the correct ffmpeg command to achieve what I wanted:

ffmpegCmd = ['ffmpeg', '-i', '-', '-f', 'rawvideo', '-vcodec', 'bmp', '-vf', 'fps=5', '-']
ffmpeg = sp.Popen(ffmpegCmd, stdin = stream.stdout, stdout = sp.PIPE)
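
Note that the adb pipe feeding stream.stdout must also be opened without universal_newlines, since that option was what corrupted the stream; a minimal sketch of the corrected upstream line, reusing the same adbCmd as before:

import subprocess as sp

# Binary pipe: universal_newlines would translate line endings and mangle the raw H.264 bytes.
adbCmd = ['adb', 'exec-out', 'screenrecord', '--output-format=h264', '-']
stream = sp.Popen(adbCmd, stdout=sp.PIPE)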

And then to convert the result into an OpenCV image, you do the following:

import cv2
import numpy as np

fileSizeBytes = ffmpeg.stdout.read(6)
# The BMP header stores the total file size as a little-endian 32-bit
# integer at byte offsets 2-5, which tells us how many bytes remain.
fileSize = 0
for i in range(4):
    fileSize += fileSizeBytes[i + 2] * 256 ** i
bmpData = fileSizeBytes + ffmpeg.stdout.read(fileSize - 6)
image = cv2.imdecode(np.frombuffer(bmpData, dtype=np.uint8), cv2.IMREAD_COLOR)

Repeating this read gets every frame of the stream as an OpenCV image.
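
To meet the original goal of only keeping the latest image, one option (my own sketch, not taken from this thread) is to run that read in a background thread and overwrite a shared variable on every frame; latest_frame and reader below are hypothetical names, and ffmpeg is the pipe opened above:

import threading
import cv2
import numpy as np

latest_frame = None  # most recently decoded frame, overwritten on every read

def reader():
    global latest_frame
    while True:
        header = ffmpeg.stdout.read(6)
        if len(header) < 6:
            break  # stream ended
        # BMP file size is a little-endian 32-bit integer at byte offsets 2-5
        size = int.from_bytes(header[2:6], 'little')
        bmp = header + ffmpeg.stdout.read(size - 6)
        latest_frame = cv2.imdecode(np.frombuffer(bmp, np.uint8), cv2.IMREAD_COLOR)

threading.Thread(target=reader, daemon=True).start()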


1 Reply


Use any of these:

ffmpeg -i - -pix_fmt bgr24 -f rawvideo -
ffmpeg -i pipe: -pix_fmt bgr24 -f rawvideo pipe:
ffmpeg -i pipe:0 -pix_fmt bgr24 -f rawvideo pipe:1
  • You didn't provide much info about your input, so you may need to add additional input options.

  • You didn't specify your desired output format, so I just chose rawvideo; a minimal Python reading sketch follows this list. You can see a list of supported output formats (muxers) with ffmpeg -muxers (or ffmpeg -formats if your ffmpeg is outdated). Not all formats are suitable for piping; MP4, for example, is not.

  • See FFmpeg Protocols: pipe.
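
With the rawvideo/bgr24 output from the commands above, each frame is exactly width × height × 3 bytes, so it can be reshaped straight into an OpenCV image without decoding. A minimal sketch, assuming the adb stream pipe from the question and a 1080×1920 portrait resolution (the resolution is an assumption and must match what screenrecord actually produces):

import subprocess as sp
import numpy as np

WIDTH, HEIGHT = 1080, 1920  # assumption: must match the real stream resolution

ffmpegCmd = ['ffmpeg', '-i', '-', '-pix_fmt', 'bgr24', '-f', 'rawvideo', '-']
ffmpeg = sp.Popen(ffmpegCmd, stdin=stream.stdout, stdout=sp.PIPE)

frame_size = WIDTH * HEIGHT * 3                # bytes in one bgr24 frame
raw = ffmpeg.stdout.read(frame_size)           # read exactly one frame
frame = np.frombuffer(raw, np.uint8).reshape((HEIGHT, WIDTH, 3))  # usable with cv2

Each subsequent read of frame_size bytes yields the next frame.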

