What is meant by a continuous-time white noise process?












In a discussion following a question a few months ago, I stated that as an engineer, I am used to thinking
of a continuous-time wide-sense-stationary white noise process
$\{X(t) \colon -\infty < t < \infty\}$
as a zero-mean process having autocorrelation function $R_X(\tau) = E[X(t)X(t+\tau)] = \sigma^2\delta(\tau)$, where $\delta(\tau)$ is the Dirac delta or impulse, and power spectral density $S_X(f) = \sigma^2$, $-\infty < f < \infty$. At that time, several people with very high reputation on Math.SE
assured me
that this was an unduly restrictive notion, and that no difficulties arise
if one takes the autocorrelation function to be
$$E[X(t)X(t+\tau)] = \begin{cases}\sigma^2, & \tau = 0,\\
0, & \tau \neq 0. \end{cases}$$



What engineers like to call a white noise process is a hypothetical
beast that is never observed directly in any physical system, but
which can be used to account for
the fact that the output of a linear time-invariant system whose
input is thermal noise is well modeled by a wide-sense-stationary
Gaussian process whose power spectral density is proportional to
$|H(f)|^2$, where $H(f)$ is the transfer function of the linear
system. Standard second-order random process theory says that
the input and output power spectral densities $S_X(f)$ and $S_Y(f)$
are related as
$$S_Y(f) = S_X(f)|H(f)|^2.$$
Thus, pretending that thermal noise is a white Gaussian noise process in the
engineering sense, and pretending that the second-order theory
extends to white noise processes (even though their variance is
not finite), allows us to get to the result that the output power
spectral density is proportional to $|H(f)|^2$.
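As a numerical illustration (my addition, not part of the original post), here is a minimal sketch that stands in for white noise with i.i.d. Gaussian samples of variance $\sigma^2/\Delta t$ and checks that the output PSD of a simple LTI filter is proportional to $|H(f)|^2$. The filter and all parameter values are hypothetical choices for the demo.

```python
import numpy as np
from scipy import signal

# Hypothetical demo parameters; none of these come from the post.
rng = np.random.default_rng(0)
sigma2, dt, n = 2.0, 1e-3, 200_000
fs = 1.0 / dt

# Discrete stand-in for white noise: iid N(0, sigma^2/dt) samples, so the
# sample autocorrelation mimics sigma^2 * delta(tau) as dt -> 0.
x = rng.normal(scale=np.sqrt(sigma2 / dt), size=n)

# An arbitrary LTI system: first-order Butterworth low-pass filter.
b, a = signal.butter(1, 0.05)
y = signal.lfilter(b, a, x)

# Measured one-sided output PSD vs. 2 * sigma^2 * |H(f)|^2
# (factor 2 because welch returns a one-sided density).
f, S_y = signal.welch(y, fs=fs, nperseg=4096)
_, H = signal.freqz(b, a, worN=f, fs=fs)
ratio = S_y[1:-1] / (2 * sigma2 * np.abs(H[1:-1]) ** 2)
print(round(float(np.median(ratio)), 2))  # ~ 1.0: S_Y(f) proportional to |H(f)|^2
```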



My query about the definition of a white noise process
is occasioned by a more recent question regarding the variance of a random variable $Y$ defined as
$$Y = \int_0^T h(t)X(t)\, \mathrm dt$$
where $\{X(t)\}$ is a white Gaussian noise process.
The answer given by Nate Eldredge
leads to
$$\operatorname{var}(Y) = \sigma^2 \int_0^T |h(t)|^2\, \mathrm dt$$
(as I pointed out in a comment on the answer) if the autocorrelation
function is taken to be $R_X(\tau) = \sigma^2\delta(\tau)$
(the engineering definition). However, the OP on that question
specified $R_X(0) = \sigma^2$, not $\sigma^2\delta(\tau)$,
that is, the definition accepted by mathematicians. For
this autocorrelation function, the variance is
$$\int_0^T \int_0^T E[X(t)X(s)]h(t)h(s)\,\mathrm dt\,\mathrm ds = 0$$
since the integrand is nonzero only on a set of measure $0$.
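To see the tension concretely, here is a small sketch (again my own illustration, with an arbitrary $h$) comparing discrete approximations of $Y$ under the two definitions: scaling the per-sample variance as $\sigma^2/\Delta t$ (delta-correlated, the engineering convention) keeps $\operatorname{var}(Y)$ near $\sigma^2\int_0^T h^2$, while a fixed per-sample variance $\sigma^2$ (the pointwise definition) drives it to $0$ as $\Delta t \to 0$.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, T, trials = 1.0, 1.0, 1_000
h = lambda t: np.cos(2 * np.pi * t)  # arbitrary: sigma^2 * int_0^1 h^2 = 0.5

for n in (100, 1_000, 10_000):
    dt = T / n
    t = (np.arange(n) + 0.5) * dt
    # Engineering (delta-correlated) scaling: per-sample variance sigma^2/dt.
    X_eng = rng.normal(scale=np.sqrt(sigma2 / dt), size=(trials, n))
    # Pointwise definition: every sample has fixed variance sigma^2.
    X_pt = rng.normal(scale=np.sqrt(sigma2), size=(trials, n))
    Y_eng = (h(t) * X_eng).sum(axis=1) * dt   # Riemann sum for Y
    Y_pt = (h(t) * X_pt).sum(axis=1) * dt
    print(n, Y_eng.var().round(3), Y_pt.var().round(6))
# Y_eng.var() stays near 0.5; Y_pt.var() shrinks like dt toward 0.
```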



So, what is the variance of the random variable $Y$? And what
do readers of Math.SE understand by the phrase "white noise process"?



Perhaps this question should be converted to a Community wiki?






















  • CW-hammered per OP request. – Willie Wong, Apr 24 '12 at 12:38






  • Short answer: (1) $Y$ is undefined; (2) the RHS of $Y$ is shorthand for a mathematically elaborate object called a stochastic integral; (3) applying to this object operations valid for classical (deterministic) integrals can lead to chaos. – Did, Jan 15 '13 at 7:16










  • I'm not the best at this, but I feel that there may be a confusion about $\delta$. My understanding is that $\delta(x)$ is infinity if $x=0$ and $0$ otherwise, such that $\int_a^b \delta(x)\,dx = 1$ if $a<0<b$. This function should be consistent with the mathematical definition, but is usually not used by mathematicians. – Teepeemm, Jun 8 '14 at 20:27
















Tags: probability-theory, stochastic-processes, noise






Asked by Dilip Sarwate (community wiki; last edited Apr 13 '17 at 12:20)













2 Answers


















Answer by Ben CW (community wiki, 15 votes):

This is a bit late, but I see that the main points in this question have not been completely addressed. I'll set $\sigma = 1$ for this answer.

The definition of white noise may be context-dependent: how you define it depends on what you want to do with it. There's nothing inherently wrong with saying that white noise (indexed by a set $T$) is just the process of i.i.d. standard normal random variables indexed by $T$, i.e.
\begin{equation}
E[X(t)X(s)] = \begin{cases} 1, & t = s, \\ 0, & t \neq s. \end{cases}
\end{equation}
However, as cardinal noted here, Example 1.2.5 of Kallianpur's text shows that this process is not measurable (as a function of $(t, \omega)$). This is why, as Did commented above, $Y$ is undefined (with this definition of $X$). Thus, this definition of white noise is not appropriate for defining objects like $Y$.



Rather, you want $Y$ to have covariance given by the Dirac delta. But the $\delta$ function is not a function but rather a measure, and the best context for understanding it is the theory of distributions (or generalized functions; these are not to be confused with "probability distributions"). Likewise, the appropriate context for white noise is the theory of random distributions.



Let's warm up with a heuristic explanation: we'll think of white noise as the "derivative" of Brownian motion, "$dB_t/dt = X_t$". So, ignoring rigor for a moment, we could write
\begin{equation}
\int_0^T h(t) X(t)\, dt = \int_0^T h(t) \frac{dB_t}{dt}\, dt = \int_0^T h(t)\, dB_t.
\end{equation}



The reason this isn't rigorous is that Brownian motion is nowhere differentiable. However, the theory of distributions allows us to "differentiate" non-differentiable functions. First of all, a distribution is a linear functional (a linear map taking values in the real numbers) on a space of "test functions" (usually smooth functions of compact support). A continuous function $F$ can be viewed as a distribution via the pairing
\begin{equation}
(F, f) = \int_0^\infty F(t) f(t)\, dt.
\end{equation}
The distributional derivative of $F$ is the distribution $F'$ whose pairing with a test function $f$ is defined by
\begin{equation}
(F', f) = -(F, f').
\end{equation}
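As a concrete sanity check (my addition, not part of the original answer), take $F$ to be the Heaviside step $H(t) = \mathbf{1}_{t \ge 0}$, with the pairing extended over all of $\mathbb{R}$. Then
\begin{equation}
(H', f) = -(H, f') = -\int_0^\infty f'(t)\, dt = f(0) = (\delta, f),
\end{equation}
since test functions vanish at infinity: the distributional derivative of the step function is exactly the Dirac delta.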



Thinking of Brownian motion as a random function, we can define white noise $X$ as its distributional derivative. Thus, $X$ is a random distribution whose pairing with a test function $f$ is the random variable
\begin{equation}
(X, f) = -(B, f') = -\int_0^\infty B(t) f'(t)\, dt.
\end{equation}
By stochastic integration by parts,
\begin{equation}
(X, f) = \int_0^\infty f(t)\, dB_t;
\end{equation}
this is the Itô integral of $f$ with respect to $B$.



Now, a well-known fact in stochastic calculus is that $M_T = \int_0^T f(t)\, dB_t$ is a martingale starting at $M_0 = 0$, so $E(X, f) = 0$. Moreover, by the Itô isometry,
\begin{equation}
\operatorname{Var}((X, f)) = E(X, f)^2 = \int_0^\infty f(t)^2\, dt.
\end{equation}
It can also be verified that $(X, f)$ is Gaussian.



My main point is that a more appropriate definition of $Y$ might be
\begin{equation}
Y = \int_0^T h(t)\, dB_t.
\end{equation}
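A quick simulation (my own sketch, with an arbitrary integrand $h$) makes this definition tangible: approximate $Y = \int_0^T h(t)\, dB_t$ by an Itô-style Riemann sum over Brownian increments and check that the sample variance matches $\int_0^T h(t)^2\, dt$.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, trials = 1.0, 1_000, 10_000
dt = T / n
t = np.arange(n) * dt                      # left endpoints (Ito convention)
h = np.exp(-t)                             # hypothetical choice of h

dB = rng.normal(scale=np.sqrt(dt), size=(trials, n))  # Brownian increments
Y = dB @ h                                 # Y ~ sum_i h(t_i) * dB_i

print(round(float(Y.mean()), 3))           # ~ 0 (martingale property)
print(round(float(Y.var()), 3))            # ~ int_0^1 e^{-2t} dt
print(round(float((1 - np.exp(-2 * T)) / 2), 3))  # = 0.432 exactly
```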



As a last note, because of the way $X$ was defined above, $X_t$ is not defined but $(X, f)$ is. That is, $X$ is a stochastic process, but one whose index set is $T = \{ \text{test functions} \}$ rather than $T = [0, \infty)$. Moreover, again by the Itô isometry,
\begin{equation}
E[(X, f)(X, g)] = \int_0^\infty f(t) g(t)\, dt.
\end{equation}
Abandoning rigor again, this becomes
\begin{equation}
E[(X, f)(X, g)] = \int_0^\infty \int_0^\infty f(s)\, \delta(s - t)\, g(t)\, ds\, dt,
\end{equation}
and it is in this sense that the covariance of $X$ is the Dirac delta.
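In the same spirit (again my own illustration), one can check the covariance pairing numerically. Here $f$ and $g$ are arbitrary functions on $[0,1]$ standing in for test functions; they are not compactly supported smooth functions, but the isometry computation is identical.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 1_000, 10_000
dt = 1.0 / n
t = np.arange(n) * dt
f = np.sin(np.pi * t)                      # arbitrary stand-ins for
g = t * (1.0 - t)                          # test functions on [0, 1]

dB = rng.normal(scale=np.sqrt(dt), size=(trials, n))
Xf, Xg = dB @ f, dB @ g                    # (X, f) and (X, g) as Ito sums

print(round(float(np.mean(Xf * Xg)), 3))   # sample E[(X,f)(X,g)]
print(round(float((f * g).sum() * dt), 3)) # int_0^1 f g dt = 4/pi^3 ~ 0.129
```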



Edit: Note that we could leave the definition of $(X, f)$ in terms of the ordinary integral and do all the above calculations using Fubini's theorem and (ordinary) integration by parts (it's just a bit messier).






Answer by Raju Gujarati (community wiki, -4 votes):

Actually, $X$ is the digital signal and $Y$ is the average analog signal power generated in the time domain. The variance shows how the deviation behaves if the signal $Y$ runs for a long time.

Moreover, you should learn more about the delta function (your function becomes one if $x = 0$). The engineering and mathematical ways of explaining the autocorrelation function are both correct; there is no contradiction between them.





