Removing noise when the signal is not smooth














Suppose we have (an interval of) a time series of measurements:



[plot of raw (simulated) data]



We assume it can be explained as a "simple" underlying signal overlaid by noise. I'm interested in finding a good algorithm to estimate the value of the simple signal at a given point in time -- primarily for the purpose of displaying it to human users who know, more or less, how to interpret the signal but would be distracted by the noise.



For a human observer of the plot it looks very clearly like the underlying signal has a jump discontinuity at about $t=18$. But that's a problem for automatic noise removal, because the techniques I know are all predicated on a "nice" underlying signal meaning a "smooth" one. A typical anti-noise filter would be something like convolving with a Gaussian kernel:



[plot of Gaussian smoothing]



which completely fails to convey that the left slope of the dip is any different from the right one. My current solution is to use a "naive" rolling average (i.e. convolution with a square kernel):



[plot of square smoothing]



whose benefit (aside from simplicity) is that at least the sharp bends in the signal estimate alert the viewer that something fishy is going on. But it takes quite some training for the viewer to know what fishy thing this pattern indicates. And it is still a tricky business to pinpoint when the abrupt change happened, which is sometimes important in my application.
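For reference, the two smoothers described above amount to plain convolutions. A minimal NumPy sketch (the kernel widths here are arbitrary illustrative choices, not the ones used for the plots):

```python
import numpy as np

def gaussian_smooth(x, sigma=5.0):
    """Convolve with a normalized Gaussian kernel (the first filter above).

    mode="same" zero-pads, so estimates near the edges are biased low."""
    radius = int(4 * sigma)
    k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
    k /= k.sum()
    return np.convolve(x, k, mode="same")

def square_smooth(x, width=21):
    """Rolling average, i.e. convolution with a normalized square kernel."""
    k = np.ones(width) / width
    return np.convolve(x, k, mode="same")
```

Both kernels are normalized to sum to one, so a constant signal passes through unchanged (away from the edges); the difference the question describes is entirely in how each kernel smears a discontinuity.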



The widths of the convolution kernels in the two examples above were chosen to give about the same smoothing of the pure noise (since I've cheated and actually constructed the sample data I'm showing as the sum of a crisp deliberate signal and some explicit noise). If we make them narrower, we can get the estimate to show that there's an abrupt change going on, but then they don't remove all of the noise:



[plot of estimation with a narrower kernel]



I can't be the first person ever to face this problem. Does it ring a bell for anyone? I'd appreciate ideas, pointers to literature, search terms, a conventional name for the problem, whatever.



Miscellaneous remarks:




  1. No, I cannot rigorously define what it is I want to optimize for. That would be too easy.


  2. It would be nice if the smoothed signal could show clearly that there's no significant systematic change in the signal before the jump at $t=18$.


  3. In the example I show here, the jump was much larger than the amplitude of the noise. That's not always the case in practice.


  4. I don't need real-time behavior, so it is fine that the estimate of the signal at some time depends on later samples. In fact I'd prefer to find a solution that commutes with time inversions.


  5. Basing a solution on outside knowledge about how the particular signal I'm looking at ought to behave is not an option. There are too many different measurements I want to apply it to, and often it's something we don't have a good prior expectation for.























  • (I'm not completely convinced that this is "mathematics", but unsure what else it would be). – Henning Makholm, Dec 1 '12 at 22:03

  • Statistics. stats.se might be a better place to ask. – Jonathan Christensen, Dec 1 '12 at 22:10

  • @JonathanChristensen: Not sure it could be statistics; I can't imagine how to formulate a null hypothesis here. – Henning Makholm, Dec 1 '12 at 22:20

  • Statistics isn't all about hypothesis testing. Source: I'm a statistician. – Jonathan Christensen, Dec 1 '12 at 22:20

  • I think this is a signal processing problem first and then a statistics question. – Bjorn Roche, Dec 2 '12 at 1:00

















reference-request soft-question signal-processing data-analysis
asked Dec 1 '12 at 22:03 by Henning Makholm


3 Answers



















I am working in a group where we study sudden drops in current (due to ion-current blockades) in a particular measurement setup.



We detect these drops by calculating the moving average (and, recently, also a moving standard deviation, since the noise level varies over time in "bad" measurements) and selecting current data points whose drop is more than $5\sigma$ (i.e. a significant drop).



Any contiguous set of these points is called an "event", and to make sure we have the right kind of events (as opposed to noise), we further select "events" by integrating the difference between the event and the moving average, looking at its duration (i.e. number of data points) and its amplitude, but I suppose you won't need all of those.



What you could do is detect events like we do, and then do interpolation or smoothing only between events (piecewise).



I must admit that we probably miss (don't detect) some "actual" events because of noise issues. Looking at your example, however, this should not be a problem for you and our code would detect it without hesitation.
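A minimal sketch of this detection scheme (rolling mean and standard deviation over a trailing window, flagging deviations beyond $5\sigma$; the window length and the simple O(n·window) loop are illustrative choices, not the group's actual code):

```python
import numpy as np

def detect_events(x, window=50, k=5.0):
    """Flag samples deviating more than k rolling standard deviations
    from the mean of the preceding `window` samples."""
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        seg = x[i - window:i]           # trailing window, excludes x[i]
        mu, sigma = seg.mean(), seg.std()
        if sigma > 0 and abs(x[i] - mu) > k * sigma:
            flags[i] = True
    return flags
```

Contiguous runs of flagged samples form the candidate "events"; smoothing or interpolation can then be applied piecewise between them, as suggested above.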





One more word on our research because it's so much fun:



In fact, in our case the drops in current are so small and the noise so large that we first apply an 8-pole Bessel filter (typically at 10 kHz) before we can even see these drops. But once the events have been detected, we can go back to the unfiltered signal and continue analysis there.



For an introduction to our research, see this paper.






answered Dec 1 '12 at 22:38 by akkkk (edited Dec 1 '12 at 22:59)






















If I understand you correctly, your current approach (Gaussian and moving-average filters) is adequate for detecting overall changes (what akkkk calls "events") but not for displaying the shape of the underlying data. Obviously, you can't truly know the shape of your data because it's masked by noise, but the eye seems to see it, so the question is: shouldn't we be able to automatically show more or less the same line that the eye sees?



There are some techniques for this, but you'll have to experiment to see what works. I'm sure there are others, but here's what I can think of:




  • Broad-band noise reduction is used in audio to remove noise. It works well if your noise is broad-spectrum and your signal is narrow-spectrum, which seems to be the case here. It is tricky to implement. I don't have any good references for you, but maybe this book. I believe it's also covered in DAFX, but there have got to be better sources on it.

  • Median filtering is an excellent technique that works well in certain kinds of image processing and would probably work well here. The mathematics behind it don't work out well (i.e. it's hard to prove it works), but in practical situations it often works well, and that's good enough for many people. I suspect it will work well here, but you'll have to try it.

  • The Gaussian filter and moving average filter you've used can be thought of as special cases of low-pass filters. If you have some idea of the frequency content of your signal vs. that of your noise (or if you are willing to experiment with your real data until the output matches what your eye sees) you can design a low-pass filter. This is what akkkk was suggesting when he talked about the 8-pole Bessel filter (I'm not sure this particular filter would preserve the shape of the signal, but it probably would). Even a second-order recursive filter would be a lot better than these filters, I imagine. I have a tutorial on simple audio filters to get you started.

  • There exist adaptive filters which, depending on your situation, you might be able to use. See Wiener and Kalman filtering. I've never used a Kalman filter, but skimming the Wikipedia page it looks like it might be purpose-built for the kind of thing you are doing.



If these techniques seem daunting to you, try median filtering first. It's simple to implement and will probably help significantly.
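A minimal sketch of the median-filter suggestion (the window length is an illustrative choice; SciPy's `scipy.signal.medfilt` implements the same operation):

```python
import numpy as np

def median_filter(x, window=11):
    """Sliding-window median: suppresses noise like a moving average,
    but preserves a jump discontinuity instead of smearing it.
    `window` should be odd; edges are handled by repeating end values."""
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])
```

On a clean step, the output jumps within a single sample rather than ramping over the kernel width, which is exactly the edge-preserving behavior the question is after.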



















  • +1 for median filtering. I've used that to great effect many times, and I think it would work well for this data. – Adrian Keister, Dec 13 '18 at 14:05




















Use two algorithms: one that checks for discontinuities, and another that takes each "smooth" section and applies a typical noise-removal algorithm. For the first, I would recurse through the entire time interval, first looking at very large chunks and then at smaller and smaller ones with each recursion, comparing the slope of each chunk's linear fit.
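One way to sketch the discontinuity-finding step (a recursive bisection comparing the two halves of each chunk; for brevity this compares mean levels rather than fitted slopes, and `min_len`/`thresh` are illustrative choices, not a tested change-point algorithm):

```python
import numpy as np

def find_breaks(x, min_len=16, thresh=4.0):
    """Recursively bisect the series and record a break at any midpoint
    where the two halves' means differ by more than `thresh` times the
    quieter half's standard deviation."""
    breaks = []

    def split(lo, hi):
        if hi - lo < 2 * min_len:
            return
        mid = (lo + hi) // 2
        left, right = x[lo:mid], x[mid:hi]
        noise = max(min(left.std(), right.std()), 1e-12)
        if abs(left.mean() - right.mean()) > thresh * noise:
            breaks.append(mid)
        split(lo, mid)
        split(mid, hi)

    split(0, len(x))
    return sorted(breaks)
```

The detected break indices then delimit the "smooth" sections, each of which can be handed to an ordinary smoother without blurring across the jump.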



















      Your Answer





      StackExchange.ifUsing("editor", function () {
      return StackExchange.using("mathjaxEditing", function () {
      StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
      StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
      });
      });
      }, "mathjax-editing");

      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "69"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: true,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: 10,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














      draft saved

      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f248843%2fremoving-noise-when-the-signal-is-not-smooth%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown

























      3 Answers
      3






      active

      oldest

      votes








      3 Answers
      3






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      3












      $begingroup$

      I am working at a group where we study sudden drops in current (due to ion current blockades) in some particular measurement setup.



      We detect these drops by calculating the moving average (and in fact, as of recently also a moving standard deviation, since the noise level varies over time in "bad" measurements) and selecting current data points whose drop is more than $5sigma$ (ie significant drop).



      Any contiguous set of these points is called an "event", and to make sure we have the right kind of events (as opposed to noise), we further select "events" by integrating the difference of the event and the moving average, looking at its duration (ie. number of data points), and its amplitude, but I suppose you won't need all of those.



      What you could do is detect events like we do, and then do interpolation or smoothing only between events (piecewise).



      I must admit that we probably miss (don't detect) some "actual" events because of noise issues. Looking at your example, however, this should not be a problem for you and our code would detect it without hesitation.





      One more word on our research because it's so much fun:



      In fact the drops in current are so small, and the noise is so big, in our case, that we first do an 8 pole Bessel filter (typically at 10kHz) before we can even see these drops. But once the events have been detected, we can go back to the unfiltered signal and continue analysis there.



      For an introduction to our research, see this paper.






      share|cite|improve this answer











      $endgroup$


















        3












        $begingroup$

        I am working at a group where we study sudden drops in current (due to ion current blockades) in some particular measurement setup.



        We detect these drops by calculating the moving average (and in fact, as of recently also a moving standard deviation, since the noise level varies over time in "bad" measurements) and selecting current data points whose drop is more than $5sigma$ (ie significant drop).



        Any contiguous set of these points is called an "event", and to make sure we have the right kind of events (as opposed to noise), we further select "events" by integrating the difference of the event and the moving average, looking at its duration (ie. number of data points), and its amplitude, but I suppose you won't need all of those.



        What you could do is detect events like we do, and then do interpolation or smoothing only between events (piecewise).



        I must admit that we probably miss (don't detect) some "actual" events because of noise issues. Looking at your example, however, this should not be a problem for you and our code would detect it without hesitation.





        One more word on our research because it's so much fun:



        In fact the drops in current are so small, and the noise is so big, in our case, that we first do an 8 pole Bessel filter (typically at 10kHz) before we can even see these drops. But once the events have been detected, we can go back to the unfiltered signal and continue analysis there.



        For an introduction to our research, see this paper.






        share|cite|improve this answer











        $endgroup$
















          3












          3








          3





          $begingroup$

          I am working at a group where we study sudden drops in current (due to ion current blockades) in some particular measurement setup.



          We detect these drops by calculating the moving average (and in fact, as of recently also a moving standard deviation, since the noise level varies over time in "bad" measurements) and selecting current data points whose drop is more than $5sigma$ (ie significant drop).



          Any contiguous set of these points is called an "event", and to make sure we have the right kind of events (as opposed to noise), we further select "events" by integrating the difference of the event and the moving average, looking at its duration (ie. number of data points), and its amplitude, but I suppose you won't need all of those.



          What you could do is detect events like we do, and then do interpolation or smoothing only between events (piecewise).



          I must admit that we probably miss (don't detect) some "actual" events because of noise issues. Looking at your example, however, this should not be a problem for you and our code would detect it without hesitation.





          One more word on our research because it's so much fun:



          In fact the drops in current are so small, and the noise is so big, in our case, that we first do an 8 pole Bessel filter (typically at 10kHz) before we can even see these drops. But once the events have been detected, we can go back to the unfiltered signal and continue analysis there.



          For an introduction to our research, see this paper.






          share|cite|improve this answer











          $endgroup$



          I am working at a group where we study sudden drops in current (due to ion current blockades) in some particular measurement setup.



          We detect these drops by calculating the moving average (and in fact, as of recently also a moving standard deviation, since the noise level varies over time in "bad" measurements) and selecting current data points whose drop is more than $5sigma$ (ie significant drop).



          Any contiguous set of these points is called an "event", and to make sure we have the right kind of events (as opposed to noise), we further select "events" by integrating the difference of the event and the moving average, looking at its duration (ie. number of data points), and its amplitude, but I suppose you won't need all of those.



          What you could do is detect events like we do, and then do interpolation or smoothing only between events (piecewise).



          I must admit that we probably miss (don't detect) some "actual" events because of noise issues. Looking at your example, however, this should not be a problem for you and our code would detect it without hesitation.





          One more word on our research because it's so much fun:



          In fact the drops in current are so small, and the noise is so big, in our case, that we first do an 8 pole Bessel filter (typically at 10kHz) before we can even see these drops. But once the events have been detected, we can go back to the unfiltered signal and continue analysis there.



          For an introduction to our research, see this paper.







          share|cite|improve this answer














          share|cite|improve this answer



          share|cite|improve this answer








          edited Dec 1 '12 at 22:59

























          answered Dec 1 '12 at 22:38









          akkkk



































              $begingroup$

              If I understand you correctly, your current approach (Gaussian and moving-average filters) is adequate for detecting overall changes (what akkkk calls "events") but not for displaying the shape of the underlying data. Obviously, you can't truly know the shape of the underlying signal, because it's masked by noise; but the eye seems to see it, so the question is whether we can automatically draw more or less the same line that the eye sees.



              There are some techniques for this, but you'll have to experiment to see what works. I'm sure there are other techniques, but here's what I can think of:




              • Broad-band noise reduction is used in audio to remove noise. It works well if your noise is broad spectrum and your signal is narrow spectrum, which seems to be the case here. It is tricky to implement. I don't have any good references for you, but maybe this book. I believe it's also covered in DAFX, but there have got to be better sources on it.


              • Median filtering is an excellent technique that works well in certain kinds of image processing and would probably work well here. The mathematics behind it is less tractable (i.e., it's hard to prove that it works), but in practical situations it often works well, and that's good enough for many people. I suspect it will work well here, but you'll have to try it.


              • The Gaussian filter and moving-average filter you've used can be thought of as special cases of low-pass filters. If you have some idea of the frequency content of your signal vs. that of your noise (or if you are willing to experiment with your real data until the output matches what your eye sees), you can design a low-pass filter. This is what akkkk was suggesting when he talked about the 8-pole Bessel filter (I'm not sure that particular filter would preserve the shape of the signal, but it probably would). Even a second-order recursive filter would be a lot better than these filters, I imagine. I have a tutorial on simple audio filters to get you started.


              • There exist adaptive filters which, depending on your situation, you might be able to use. See Wiener and Kalman filtering. I've never used a Kalman filter, but skimming the Wikipedia page it looks like it might be purpose-built for the kind of thing you are doing.
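The "recursive filter" idea from the third bullet can be illustrated with its simplest possible cousin, a first-order (one-pole) low-pass; the smoothing constant `alpha` is a hypothetical knob, not a prescribed value:

```python
import numpy as np

def one_pole_lowpass(x, alpha=0.2):
    """First-order recursive low-pass:
        y[n] = y[n-1] + alpha * (x[n] - y[n-1])
    Smaller alpha -> heavier smoothing (and a slower step response)."""
    y = np.empty(len(x))
    acc = x[0]
    for n, v in enumerate(x):
        acc += alpha * (v - acc)
        y[n] = acc
    return y
```

Like any linear low-pass, it still rounds a step edge; the point is that tuning the cutoff lets you trade noise rejection against edge sharpness explicitly.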



              If these techniques seem daunting to you, try median filtering first. It's simple to implement and will probably help significantly.
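As an illustration of that suggestion, a sliding-window median takes only a few lines of NumPy (the window width is a hypothetical parameter you would tune to your data):

```python
import numpy as np

def median_filter(x, width=21):
    """Sliding-window median (width should be odd). Unlike a moving
    average, the median preserves a step edge instead of smearing it;
    edges of the array use a shrunken window."""
    half = width // 2
    return np.array([np.median(x[max(0, i - half):i + half + 1])
                     for i in range(len(x))])
```

SciPy ships an equivalent, `scipy.signal.medfilt`, if you'd rather not roll your own.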

















              $endgroup$













              • +1 for median filtering. I've used that to great effect many times, and I think it would work well for this data. – Adrian Keister, Dec 13 '18 at 14:05
























              edited Dec 31 '18 at 16:37

























              answered Dec 2 '12 at 1:36









              Bjorn Roche



































              $begingroup$

              Use two algorithms: one that checks for discontinuities, and another that takes each "smooth" section and applies a typical noise-removal algorithm. For the first, I would recurse through the entire time interval, first looking at very large chunks and then at smaller and smaller ones with each recursion, comparing the slope of each chunk's linear fit.
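A rough sketch of such a recursive discontinuity detector (illustrative parameters; this variant compares the residual of each chunk's linear fit rather than the slope alone, since a jump inside a chunk inflates the residual more reliably than it changes the slope):

```python
import numpy as np

def locate_jump(x, lo=0, hi=None, min_len=8):
    """Recursively bisect [lo, hi), descending into the half that a
    straight line fits worst (a jump inside a chunk inflates the fit
    residual far more than noise does); return the centre of the
    final small chunk as the estimated jump location."""
    if hi is None:
        hi = len(x)
    if hi - lo <= min_len:
        return (lo + hi) // 2
    mid = (lo + hi) // 2

    def fit_error(a, b):
        # mean squared residual of a linear fit over x[a:b]
        t = np.arange(a, b)
        slope, intercept = np.polyfit(t, x[a:b], 1)
        return np.mean((x[a:b] - (slope * t + intercept)) ** 2)

    if fit_error(lo, mid) >= fit_error(mid, hi):
        return locate_jump(x, lo, mid, min_len)
    return locate_jump(x, mid, hi, min_len)
```

Once the jump index is found, each smooth section on either side can be handed to an ordinary smoother.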















              $endgroup$




























                  answered Dec 1 '12 at 22:31









                  cheepychappy





























