f***@me.com
2018-10-03 17:10:10 UTC
I recently did just this. frame->data[3] holds a CVPixelBufferRef. Here is some sample code in Swift:
let pixelBuffer =
    Unmanaged<CVPixelBuffer>.fromOpaque(frame!.pointee.data.3!)
        .retain()
        .takeRetainedValue()
In C, you would do something like:
CVPixelBufferRef pixelBuffer = CVPixelBufferRetain((CVPixelBufferRef)frame->data[3]);
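If the end goal is to get that pixel buffer into OpenGL without copying it back to system memory, something along these lines should work on macOS. This is an untested sketch, not code from hw_decode.c: the helper names are made up, it assumes you already have a CGL context and pixel format, and note that Apple deprecated the CVOpenGLTextureCache API in macOS 10.14 in favor of Metal (it still works for GL pipelines):

/* Untested sketch (macOS only); function and variable names are made up. */
#include <CoreVideo/CVPixelBuffer.h>
#include <CoreVideo/CVOpenGLTextureCache.h>
#include <CoreVideo/CVOpenGLTexture.h>
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>
#include <libavutil/frame.h>
#include <stdio.h>

/* Create the texture cache once, after the GL context exists. */
static CVOpenGLTextureCacheRef create_texture_cache(CGLContextObj ctx,
                                                    CGLPixelFormatObj cgl_pix_fmt)
{
    CVOpenGLTextureCacheRef cache = NULL;
    CVReturn err = CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL,
                                              ctx, cgl_pix_fmt, NULL, &cache);
    return err == kCVReturnSuccess ? cache : NULL;
}

/* Per decoded frame: wrap frame->data[3] in a GL texture and bind it. */
static int bind_frame_texture(CVOpenGLTextureCacheRef cache,
                              const AVFrame *frame)
{
    CVPixelBufferRef pixel_buffer = (CVPixelBufferRef)frame->data[3];
    CVOpenGLTextureRef texture    = NULL;

    CVReturn err = CVOpenGLTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixel_buffer, NULL, &texture);
    if (err != kCVReturnSuccess) {
        fprintf(stderr, "CreateTextureFromImage failed: %d\n", (int)err);
        return -1;
    }

    /* On macOS the target is usually GL_TEXTURE_RECTANGLE, so texture
     * coordinates are in pixels rather than 0..1. */
    glBindTexture(CVOpenGLTextureGetTarget(texture),
                  CVOpenGLTextureGetName(texture));

    /* ...draw with the bound texture here... */

    /* Release the CV texture and flush the cache so recycled pixel
     * buffers do not accumulate. */
    CVOpenGLTextureRelease(texture);
    CVOpenGLTextureCacheFlush(cache, 0);
    return 0;
}

One lifetime note: the C one-liner above retains the buffer, so remember a matching CVPixelBufferRelease() when you are done with it (in Swift, ARC balances the retain for you). If you only touch the buffer while the AVFrame is still alive, the frame already holds a reference and no extra retain is strictly needed.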
Cheers!
fumoboy007
On Sat Aug 18 05:50:45 EEST 2018, Bo Zhou <***@gmail.com> wrote:
Hi!
I'm using FFmpeg 4.0.2 and successfully ran the hw_decode example on Mac,
and I can see the trace log:
[h264 @ 0x7f97d9800000] Format videotoolbox_vld chosen by get_format().
[h264 @ 0x7f97d9800000] Format videotoolbox_vld requires hwaccel initialisation.
[h264 @ 0x7f97d9800000] Reinit context to 1280x720, pix_fmt: videotoolbox_vld
This should mean that it's using videotoolbox_vld.
In hw_decode.c, we can find the following piece of code:
if (frame->format == hw_pix_fmt) {
    /* retrieve data from GPU to CPU */
    if ((ret = av_hwframe_transfer_data(sw_frame, frame, 0)) < 0) {
        fprintf(stderr, "Error transferring the data to system memory\n");
        goto fail;
    }
    tmp_frame = sw_frame;
} else
    tmp_frame = frame;
So I'd like to know: is it possible to get the GPU buffer directly? I can
see that frame->data[3] and frame->buf[0] hold pointers, but how should I
use them? I just want to map the hardware buffer into OpenGL directly.
Thank you very much.
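Tying the two together: with the texture-cache sketch from the reply above, the branch quoted from hw_decode.c could keep the hardware frame instead of transferring it to system memory. A hypothetical rewrite, reusing the made-up bind_frame_texture() helper and a texture_cache created at init time:

if (frame->format == hw_pix_fmt) {
    /* Keep the hardware frame: frame->data[3] is the CVPixelBufferRef and
     * frame->buf[0] holds the reference that keeps it alive until the
     * frame is unreferenced. */
    if ((ret = bind_frame_texture(texture_cache, frame)) < 0)
        goto fail;
} else {
    /* Software frame: upload tmp_frame->data[] yourself (e.g. glTexImage2D). */
    tmp_frame = frame;
}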