What's the right way to normalize a convolution filter?

I'm following these instructions, but my results aren't coming out right. I'm using this kernel:



-1 -1 -1
-1 8 -1
-1 -1 -1


So the sum for a given convolution will be between -8 and 8, assuming I have already normalized my input (0-255 -> 0-1). Then I do the convolution and find how far the result falls between the minimum and maximum possible values. For example, with this kernel my min is -8 and max is 8, so if the value is 0 that's 50%, which works out to 255 * 0.5 = 127.5. But that's clearly not right: what it gives is a mostly gray image, and the parts that aren't gray are still monochrome even though I'm running the kernel on each channel individually.



[output image: mostly gray; the non-gray parts are monochrome]



    static int EvaluateKernelAtPoint(Bitmap bitmap, Matrix<double> kernel, int x, int y, Func<int, int, int> onGetIntensity)
    {
        double sum = 0;
        for (int a = 0; a < kernel.ColumnCount; a++)
        {
            for (int b = 0; b < kernel.RowCount; b++)
            {
                // Offset by 1 because the 3x3 kernel is centered on (x, y).
                var xn = x + a - 1;
                var yn = y + b - 1;

                var intensity = (double)onGetIntensity(xn, yn); // returns the R, G, or B channel at that pixel
                intensity /= 255; // intensity is now 0-1
                sum += intensity * kernel.At(a, b);
            }
        }

        var result = (sum - (-8d)) / (8d - (-8d)); // fraction of the way between the assumed min of -8 and max of 8
        result *= 255; // bring it back to 0-255

        return (int)Math.Round(result); // round so the value can be written back to a byte channel
    }
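For context, here is a minimal sketch of a driver that could call this method once per channel for every interior pixel. `ApplyKernel` is a hypothetical name, `GetPixel`/`SetPixel` are used only to keep the example short (they are slow), and the loop skips the one-pixel border so the 3x3 neighborhood stays inside the image:

    using System;
    using System.Drawing;
    using MathNet.Numerics.LinearAlgebra;

    // Hypothetical driver (not part of the original post): applies EvaluateKernelAtPoint
    // to each channel of every interior pixel and writes the mapped values to a new bitmap.
    static Bitmap ApplyKernel(Bitmap source, Matrix<double> kernel)
    {
        var output = new Bitmap(source.Width, source.Height);

        for (int y = 1; y < source.Height - 1; y++)
        {
            for (int x = 1; x < source.Width - 1; x++)
            {
                int r = EvaluateKernelAtPoint(source, kernel, x, y, (px, py) => source.GetPixel(px, py).R);
                int g = EvaluateKernelAtPoint(source, kernel, x, y, (px, py) => source.GetPixel(px, py).G);
                int b = EvaluateKernelAtPoint(source, kernel, x, y, (px, py) => source.GetPixel(px, py).B);
                output.SetPixel(x, y, Color.FromArgb(r, g, b));
            }
        }

        return output;
    }

The kernel itself can be built with MathNet.Numerics as `Matrix<double>.Build.DenseOfArray(new double[,] { { -1, -1, -1 }, { -1, 8, -1 }, { -1, -1, -1 } })`.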

image-processing edge-detection

asked Nov 26 '18 at 1:42
user875234

  • Those extremes are extremely unlikely. You indeed want to set the zero to 127 or 128, then you can choose a scaling that will make your image look good. One approach is to find the min and max in the output image.

    – Cris Luengo
    Nov 26 '18 at 1:50

  • "The parts that aren't gray are still monochrome": don't expect the Laplacian output to show meaningful colors. Also, applying different gain coefficients to the three channels is not recommended as it introduces a bias.

    – Yves Daoust
    Nov 26 '18 at 7:07
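A minimal sketch of the rescaling the first comment above describes: compute the raw convolution sums first, then map the minimum and maximum actually observed in that output to 0-255. The array layout and the name `NormalizeByObservedRange` are assumptions for illustration, not anything from the thread:

    using System;

    // Hypothetical helper: 'raw' holds the unscaled convolution sums for one channel
    // (roughly [-8, 8] for 0-1 inputs), indexed as [y, x].
    static byte[,] NormalizeByObservedRange(double[,] raw)
    {
        int height = raw.GetLength(0), width = raw.GetLength(1);

        // Find the range that actually occurs in this image.
        double min = double.MaxValue, max = double.MinValue;
        foreach (var v in raw)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }

        var result = new byte[height, width];
        double range = max - min;
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                // Map the observed [min, max] to [0, 255]; fall back to mid-gray for a flat image.
                double scaled = range > 0 ? (raw[y, x] - min) / range * 255.0 : 128.0;
                result[y, x] = (byte)Math.Round(scaled);
            }
        }
        return result;
    }

This stretches each image as far as it will go, at the price of a zero level and gain that change from image to image, which is exactly the trade-off the answer below points out.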

1 Answer

There is no right way to normalize, because the range is content-dependent.

In good RGB images, the range of values is usually [0, 255], provided the dynamic range is well adjusted.

The output of this Laplacian filter, which can be seen as the difference between the original image and a smoothed version of it, usually has a much smaller amplitude. But this depends on the local variations: an already smooth image will give no response, while noise (especially salt & pepper) can yield huge values.

You also need to decide what to do with negative values: shift so that zero appears as mid-gray, clamp to zero, or take the absolute value.

Taking the min-max range and mapping it to 0-255 is an option, but it leads to a "floating" zero and an uncontrolled gain. I would prefer to set a constant gain for a set of images of the same origin.

Last but not least, histogram equalization is another option.

answered Nov 26 '18 at 7:15 – Yves Daoust

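A minimal sketch of the "zero at mid-gray with a constant gain" option from the answer above. The gain value is an arbitrary illustration (not something the answer prescribes), and `rawSum` is assumed to be the unscaled convolution sum for one channel of one pixel:

    using System;

    // Hypothetical fixed-gain mapping: 128 represents zero response, positive responses go
    // brighter, negative responses go darker, and the same gain is reused for a whole batch
    // of images so their outputs stay comparable.
    static byte MapWithConstantGain(double rawSum, double gain = 64.0)
    {
        double value = 128.0 + gain * rawSum;

        // Clamp instead of rescaling, so the zero level never "floats".
        if (value < 0) value = 0;
        if (value > 255) value = 255;
        return (byte)Math.Round(value);
    }

With this scheme a flat region always comes out as the same mid-gray, and an edge of a given strength always maps to the same output level, regardless of what else is in the picture.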
  • Alright, I will try fiddling with those values. It's on the back burner right now though. Quick question: the reason I got confused is that I was comparing my image to what you get in the browser with a -1 -1 -1 -1 8 -1 -1 -1 -1 convolution filter. Any guesses on what the browser might be doing here? jsfiddle.net/5zcro4hf

    – user875234
    Nov 27 '18 at 3:04

  • @user875234: it does the right thing, no?

    – Yves Daoust
    Nov 27 '18 at 8:16

  • Yeah, it looks good to me. I just can't figure out how to get my own code to do it. If I follow the steps on Mozilla's docs (which point to the W3C spec, which I have only read a couple of parts of) I don't get the same result. I think the code in my question is a faithful interpretation of their pseudocode (bias would be 0 and divisor would be 1, so I've left them out of this question). That's why I'm confused about the results not matching up.

    – user875234
    Nov 27 '18 at 12:28
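Regarding the jsfiddle comparison above: one plausible explanation (an assumption on my part, not something confirmed in the thread) is that the browser filter keeps the convolution in the 0-255 domain, applies the divisor of 1 and bias of 0 mentioned in the last comment, and then simply clamps each channel instead of rescaling by the theoretical -8..8 range. For one channel that pipeline would look roughly like this:

    using System;
    using MathNet.Numerics.LinearAlgebra;

    // Sketch of an assumed browser-style convolution for one channel: no division by 255,
    // no min/max remapping, just divisor, bias and a clamp to the valid byte range.
    static int ConvolveChannelClamped(Func<int, int, int> getChannel, Matrix<double> kernel, int x, int y)
    {
        double sum = 0;
        for (int a = 0; a < kernel.ColumnCount; a++)
        {
            for (int b = 0; b < kernel.RowCount; b++)
            {
                sum += getChannel(x + a - 1, y + b - 1) * kernel.At(a, b); // stays in the 0-255 domain
            }
        }

        const double divisor = 1.0; // the values the last comment says apply here
        const double bias = 0.0;
        double value = sum / divisor + bias;

        // Out-of-range results are clipped rather than rescaled, so flat areas map to 0,
        // not to mid-gray as they do when -8..8 is stretched to 0..255.
        return (int)Math.Round(Math.Max(0.0, Math.Min(255.0, value)));
    }

If that is what the fiddle does, it would account for the difference from the mostly-gray result produced by mapping the full theoretical range.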