Discussion:
[FFmpeg-user] Blending two inputs with custom expression
SviMik
2017-05-09 02:17:36 UTC
Let's assume A is a video, and B is a png image with alpha channel. I need
to do the following blending (assuming the format is rgba and B_alpha is in
0...1 range):

red = (A_red - B_red * B_alpha) / (1 - B_alpha);
green = (A_green - B_green * B_alpha) / (1 - B_alpha);
blue = (A_blue - B_blue * B_alpha) / (1 - B_alpha);

I was looking into the "blend" filter, but it seems I can't access the other
channels' data.
The "geq" filter is limited to a single source.

What shall I use in that case?
I really hate the idea of digging into the ffmpeg sources and writing my own
filter. Is there any chance to do this kind of blending without that?
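
(For context: these equations presumably just invert the usual "over" compositing
step. If the overlay had been composited as
composited = B * B_alpha + original * (1 - B_alpha),
then solving for the original pixel gives
original = (composited - B * B_alpha) / (1 - B_alpha),
which is the formula above, with A playing the role of the composited frame.)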
Gyan
2017-05-09 15:50:27 UTC
Post by SviMik
Let's assume A is a video, and B is a png image with alpha channel. I need
to do the following blending (assuming the format is rgba and B_alpha is in 0...1 range):
red = (A_red - B_red * B_alpha) / (1 - B_alpha);
green = (A_green - B_green * B_alpha) / (1 - B_alpha);
blue = (A_blue - B_blue * B_alpha) / (1 - B_alpha);
You can achieve this using a sequence of filters, where the above
expression is realized piecemeal. I haven't tested this, but it should work.

I assume the first input to ffmpeg is the video, and the (unlooped) image
second. And both are 8-bit RGBA.

-filter_complex "[1]geq=r='p(X,Y)*alpha(X,Y)/255':g='p(X,Y)*alpha(X,Y)/255':b='p(X,Y)*alpha(X,Y)/255'[imgpremult];
[imgpremult][0]blend=all_expr=B-A:c3_expr=A,lutrgb=a=maxval-val,
geq=r='255*p(X,Y)/alpha(X,Y)':g='255*p(X,Y)/alpha(X,Y)':b='255*p(X,Y)/alpha(X,Y)'"

The video output will have an alpha plane populated during the blend
operation. If you need to preserve the original alpha, insert
[0]blend=all_expr=B:c3_expr=A at the end. Only one input is specified, the
2nd is the unconnected output from the last geq filter.

There's no attempt to clip or validate the values from any of the
expressions, e.g. A_red - B_red * B_alpha = -0.25 if A_red = 0.5, B_red = 1,
B_alpha = 0.75, which is out of range [0,1] and will remain so
even after division by 1 - B_alpha. Not to mention how you wish to handle
B_alpha = 1, where the denominator becomes zero.
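
Assembled into one command, that would be something like this (equally untested;
in.mp4 and overlay.png are placeholders for your actual files):

ffmpeg -i in.mp4 -i overlay.png -filter_complex
"[1]geq=r='p(X,Y)*alpha(X,Y)/255':g='p(X,Y)*alpha(X,Y)/255':b='p(X,Y)*alpha(X,Y)/255'[imgpremult];[imgpremult][0]blend=all_expr=B-A:c3_expr=A,lutrgb=a=maxval-val,geq=r='255*p(X,Y)/alpha(X,Y)':g='255*p(X,Y)/alpha(X,Y)':b='255*p(X,Y)/alpha(X,Y)'"
out.mp4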
SviMik
2017-05-10 00:11:24 UTC
Post by Gyan
Post by SviMik
Let's assume A is a video, and B is a png image with alpha channel. I need
to do the following blending (assuming the format is rgba and B_alpha is in 0...1 range):
red = (A_red - B_red * B_alpha) / (1 - B_alpha);
green = (A_green - B_green * B_alpha) / (1 - B_alpha);
blue = (A_blue - B_blue * B_alpha) / (1 - B_alpha);
You can achieve this using a sequence of filters, where the above
expression is realized piecemeal. I haven't tested this, but it should work.
I assume the first input to ffmpeg is the video, and the (unlooped) image
second. And both are 8-bit RGBA.
-filter_complex "[1]geq=r='p(X,Y)*alpha(X,Y)/255':g='p(X,Y)*alpha(X,Y)/255':b='p(X,Y)*alpha(X,Y)/255'[imgpremult];
[imgpremult][0]blend=all_expr=B-A:c3_expr=A,lutrgb=a=maxval-val,
geq=r='255*p(X,Y)/alpha(X,Y)':g='255*p(X,Y)/alpha(X,Y)':b='255*p(X,Y)/alpha(X,Y)'"
The video output will have an alpha plane populated during the blend
operation. If you need to preserve the original alpha, insert
[0]blend=all_expr=B:c3_expr=A at the end. Only one input is specified, the
2nd is the unconnected output from the last geq filter.
There's no attempt to clip or validate the values from any of the
expressions, e.g. A_red - B_red * B_alpha = -0.25 if A_red = 0.5, B_red = 1,
B_alpha = 0.75, which is out of range [0,1] and will remain so
even after division by 1 - B_alpha. Not to mention how you wish to handle
B_alpha = 1, where the denominator becomes zero.
Great answer, thank you for the idea! I have tried to run it, and it
produced a black screen (all zeroes), but I think it may be my fault
somewhere. Maybe the alpha needs to be inverted? In my equations B_alpha
assumes 0.0 = transparent and 1.0 = opaque; I forgot to mention this.
[UPD: tried to invert the alpha channel in the png image, no change]
I'm still looking at your code, and it will take me some time to understand
it. Sometimes the complex filters *are* complex :)

Just to make sure we're on the same page, I'm attaching the working php
code. Don't be confused if you don't write php; it's much like C and
easy to read. The one exception is how php handles the alpha value, which I
tried to make explicit in the comments.

for($x=0;$x<$w;$x++){
    for($y=0;$y<$h;$y++){
        /* A screencap from the video */
        $rgb = imagecolorat($im, $x, $y);
        $r = ($rgb >> 16) & 0xFF;
        $g = ($rgb >> 8) & 0xFF;
        $b = $rgb & 0xFF;

        /* The png image */
        $rgb = imagecolorat($imm, $x, $y);
        $mt = ($rgb >> 24) & 0xFF; /* in php the alpha range is 0...127 */
        $mr = ($rgb >> 16) & 0xFF;
        $mg = ($rgb >> 8) & 0xFF;
        $mb = $rgb & 0xFF;

        /* in php 0 is opaque and 127 is transparent */
        /* invert and rescale it to 0.0=transparent, and 1.0=opaque */
        $mt = (127 - $mt) / 127;

        /* Yes, I'm aware that a 100% opaque pixel will cause div by zero */
        /* No need to validate it, the png does not have such values */
        $r = ($r - $mr * $mt) / (1 - $mt);
        $g = ($g - $mg * $mt) / (1 - $mt);
        $b = ($b - $mb * $mt) / (1 - $mt);

        /* In real screencaps a pixel or two occasionally gets clipped */
        $r = max(0, min(255, round($r)));
        $g = max(0, min(255, round($g)));
        $b = max(0, min(255, round($b)));

        imagesetpixel($im2, $x, $y, ($r << 16) | ($g << 8) | $b);
    }
}
Gyan
2017-05-10 05:07:12 UTC
Post by SviMik
Great answer, thank you for the idea! I have tried to run it, and it
produced a black screen (all zeroes), but I think it may be my fault
somewhere.
In the first geq filter, add a='p(X,Y)' after the b expression. Turns out
the geq initializes alpha to 1 if no expression is provided. Ideally, it
ought to pass through.
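
i.e. the first filter in the chain becomes:

[1]geq=r='p(X,Y)*alpha(X,Y)/255':g='p(X,Y)*alpha(X,Y)/255':b='p(X,Y)*alpha(X,Y)/255':a='p(X,Y)'[imgpremult]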
SviMik
2017-05-11 14:38:11 UTC
Post by Gyan
Post by SviMik
Great answer, thank you for the idea! I have tried to run it, and it
produced a black screen (all zeroes), but I think it may be my fault
somewhere.
In the first geq filter, add a='p(X,Y)' after the b expression. Turns out
the geq initializes alpha to 1 if no expression is provided. Ideally, it
ought to pass through.
It works! Thank you so much!

By the way, I got a few pixels with a value >255, and it turns out ffmpeg
doesn't handle that by itself - a pixel with r=266 simply wrapped around to r=10. So I had
to add min() to the last geq to avoid the overflow:
r='min(255,255*p(X,Y)/alpha(X,Y))':g='min(255,255*p(X,Y)/alpha(X,Y))':b='min(255,255*p(X,Y)/alpha(X,Y))'
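(Presumably the expression evaluator's clip() could do the same job, e.g.
r='clip(255*p(X,Y)/alpha(X,Y),0,255)', but min() works fine.)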

Now it works like a charm.

If you have any link for further reading about geq, it would be highly
appreciated. For example, the min() function was just a guess that happened
to exist purely by chance. I still can't find a comprehensive
list of geq features and functions. Does it have conditionals? Can I declare
variables? Is there a way to conditionally turn the whole filter on and off
depending on some pixel color? Or (even better) count the clipped pixels
and revert the whole frame to the original state if too many errors
occurred? That would help to detect automatically when the filter needs to
be applied, instead of figuring out the affected video fragments manually.
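
For instance, if geq uses ffmpeg's generic expression evaluator (the one described
under "Expression Evaluation" in the ffmpeg-utils manual), then maybe a per-pixel
condition could guard the division by zero, something like
r='if(eq(alpha(X,Y),0), p(X,Y), min(255,255*p(X,Y)/alpha(X,Y)))'
- but I'm only guessing here.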

Sorry if I'm too annoying :) You can just throw me a link for reading.
SviMik
2017-05-11 16:02:23 UTC
Post by SviMik
Post by Gyan
Post by SviMik
Great answer, thank you for the idea! I have tried to run it, and it
produced a black screen (all zeroes), but I think it may be my fault
somewhere.
In the first geq filter, add a='p(X,Y)' after the b expression. Turns out
the geq initializes alpha to 1 if no expression is provided. Ideally, it
ought to pass through.
It works! Thank you so much!
By the way, I got a few pixels with a value >255, and it turns out ffmpeg
doesn't handle that by itself - a pixel with r=266 simply wrapped around to r=10. So I had
to add min() to the last geq to avoid the overflow:
r='min(255,255*p(X,Y)/alpha(X,Y))':g='min(255,255*p(X,Y)/alpha(X,Y))':b='min(255,255*p(X,Y)/alpha(X,Y))'
Post by SviMik
Now it works like a charm.
I was too fast. I thought that if an ffmpeg filter works on an image, then it
should work on a video too. I was wrong. The blend filter makes ffmpeg
simply drop frames:

frame= 2 fps=0.5 q=-1.0 Lsize= 223kB time=00:00:06.78 bitrate= 268.8kbits/s dup=0 drop=194

I think it processes the first frame and then fails, because the top
layer is a png and the bottom is a video. I tried to switch top and bottom
and swap A/B in the expressions. Now it doesn't drop frames, but I
couldn't manage to get the same result in the output. It seems the layer
position matters here, and swapping the layers changed the result of the
blending.
Gyan
2017-05-11 16:10:51 UTC
Loop the image input i.e. `-loop 1 -i in.png`

Unfortunately, there's a bug at present which prevents the use of
shortest=1 in the blend filter. (https://trac.ffmpeg.org/ticket/6292)

So, you'll have to terminate using -t N where N is the duration of the
video, or if the video has an audio stream, by adding -shortest.
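
So the full invocation would be something like this (still untested; -t 30 stands
in for your video's actual duration, and tmp.flv / mask.png stand in for the
actual video and image):

ffmpeg -i tmp.flv -loop 1 -i mask.png -filter_complex
"[1]geq=r='p(X,Y)*alpha(X,Y)/255':g='p(X,Y)*alpha(X,Y)/255':b='p(X,Y)*alpha(X,Y)/255':a='p(X,Y)'[imgpremult];[imgpremult][0]blend=all_expr=B-A:c3_expr=A,lutrgb=a=maxval-val,geq=r='min(255,255*p(X,Y)/alpha(X,Y))':g='min(255,255*p(X,Y)/alpha(X,Y))':b='min(255,255*p(X,Y)/alpha(X,Y))'"
-t 30 out.mkv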
SviMik
2017-05-11 16:37:50 UTC
Post by Gyan
Loop the image input i.e. `-loop 1 -i in.png`
Unfortunately, there's a bug at present which prevents the use of
shortest=1 in the blend filter. (https://trac.ffmpeg.org/ticket/6292)
So, you'll have to terminate using -t N where N is the duration of the
video, or if the video has an audio stream, by adding -shortest.
Works. But the result was still wrong. Then I figured out why,
and my idea of swapping the layers was actually right. The final solution is
to convert the video to rgba to fix the wrong result, and to swap the
blend layers to fix the drops; then we don't need that loop at all. I have also
stored the premultiplied png separately to try to optimize the speed:

ffmpeg -i mask.png -filter_complex
"geq=r='p(X,Y)*alpha(X,Y)/255':g='p(X,Y)*alpha(X,Y)/255':b='p(X,Y)*alpha(X,Y)/255':a='p(X,Y)'"
mask_premult.png

ffmpeg -i tmp.flv -i mask_premult.png -filter_complex
"[0:v]format=rgba[rgbv];[rgbv][1:v]blend=all_expr=A-B:c3_expr=B,lutrgb=a=maxval-val,geq=r='min(255,255*p(X,Y)/alpha(X,Y))':g='min(255,255*p(X,Y)/alpha(X,Y))':b='min(255,255*p(X,Y)/alpha(X,Y))'"
tmp.mkv

But the fps=1.0 makes me sad (compared with fps=40 when just encoding
without filters). Perhaps writing the filter in C is the right way to go if I want
something usable in a real-time application...
Paul B Mahol
2017-05-11 19:59:20 UTC
Post by SviMik
Post by Gyan
Loop the image input i.e. `-loop 1 -i in.png`
Unfortunately, there's a bug at present which prevents the use of
shortest=1 in the blend filter. (https://trac.ffmpeg.org/ticket/6292)
So, you'll have to terminate using -t N where N is the duration of the
video, or if the video has an audio stream, by adding -shortest.
Works. But the result was still wrong. Then I figured out why,
and my idea of swapping the layers was actually right. The final solution is
to convert the video to rgba to fix the wrong result, and to swap the
blend layers to fix the drops; then we don't need that loop at all. I have also
stored the premultiplied png separately to try to optimize the speed:
ffmpeg -i mask.png -filter_complex
"geq=r='p(X,Y)*alpha(X,Y)/255':g='p(X,Y)*alpha(X,Y)/255':b='p(X,Y)*alpha(X,Y)/255':a='p(X,Y)'"
mask_premult.png
ffmpeg -i tmp.flv -i mask_premult.png -filter_complex
"[0:v]format=rgba[rgbv];[rgbv][1:v]blend=all_expr=A-B:c3_expr=B,lutrgb=a=maxval-val,geq=r='min(255,255*p(X,Y)/alpha(X,Y))':g='min(255,255*p(X,Y)/alpha(X,Y))':b='min(255,255*p(X,Y)/alpha(X,Y))'"
tmp.mkv
But the fps=1.0 makes me sad (compared with fps=40 when just encoding
without filters). Perhaps writing the filter in C is the right way to go if I want
something usable in a real-time application...
If this is about alpha premultiply, there is the premultiply filter.
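
i.e. the one-off premultiplication of the png could presumably be done with
something like this (assuming a build where premultiply has the inplace option,
which uses the input stream's own alpha plane):

ffmpeg -i mask.png -vf premultiply=inplace=1 mask_premult.png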
Paul B Mahol
2017-05-09 22:02:40 UTC
Post by SviMik
Let's assume A is a video, and B is a png image with alpha channel. I need
to do the following blending (assuming the format is rgba and B_alpha is in 0...1 range):
red = (A_red - B_red * B_alpha) / (1 - B_alpha);
green = (A_green - B_green * B_alpha) / (1 - B_alpha);
blue = (A_blue - B_blue * B_alpha) / (1 - B_alpha);
Those equations are very strange.
SviMik
2017-05-09 22:48:13 UTC
Post by Paul B Mahol
Those equations are very strange.
That's off topic, but I'll answer. I have tested it with my own test
program using some sample screenshots, and it does what's expected. Now I
want to apply it to a live video. Of course a little sanity
checking may be needed (I'm not sure yet how the filter deals with clipping), but in
my use case there was no clipping at all (unless the filter is applied to a
video it was not meant for). Of course, there are no completely opaque
pixels in the png image; they all have some transparency. And I suppose
any equation will look strange without a clue about what the goal is and
how I even came to this solution. (Spoiler: it's the reverse of the equation
some other filter applies. If you like a quest, you may try to guess which
one.)