Solve general 2nd order ODE numerically with 2nd order time-differences
I want to solve a 2nd order ODE of the following general form
$\ddot{x} = f(\dot{x}, x)$
where the dot denotes a time derivative. A simple numerical solution that is first-order accurate in time would be
$$
\begin{eqnarray}
x(t + \Delta t) &=& x(t) + v(t)\,\Delta t \\
v(t + \Delta t) &=& v(t) + f(v(t),x(t))\,\Delta t
\end{eqnarray}
$$
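For concreteness, here is a minimal Python sketch of this first-order scheme; the damped-oscillator $f$, step size, and initial data below are illustrative assumptions, not part of the question:

```python
def euler_step(x, v, f, dt):
    """One explicit Euler step for x'' = f(x', x); first-order accurate in time."""
    x_new = x + v * dt
    v_new = v + f(v, x) * dt
    return x_new, v_new

# Illustrative example: damped oscillator x'' = -x - 0.1 x'
f = lambda v, x: -x - 0.1 * v
x, v, dt = 1.0, 0.0, 0.01
for _ in range(1000):
    x, v = euler_step(x, v, f, dt)
```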
However, the accuracy of this solution is unsatisfactory, so I would like a solution that is 2nd order accurate in time. Naively, I apply 2nd order central differences for the first and second derivatives:
$$\frac{x(t+\Delta t) - 2 x(t) + x(t-\Delta t)}{\Delta t^2} = f\biggl(\frac{x(t+\Delta t) - x(t-\Delta t)}{2\Delta t},\, x(t)\biggr)$$
In order to advance the solution numerically, I need to express the future in terms of the past, that is, solve the above equation for $x(t+\Delta t)$. However, for a general $f$ there may be no way to do this explicitly.
Question: Is there a way to solve the above ODE numerically to 2nd order precision, if the only thing we are allowed to do with the RHS is evaluate it at given inputs?
Note: For the initial conditions, $x$ and $v$ are known at $t=0$. If the proposed method requires more than two steps of memory, please indicate how to initialize it correctly.
differential-equations numerical-methods finite-differences
asked Nov 26 at 11:22 by Aleksejs Fomins (edited Nov 26 at 12:05)
For the second-order formula for the first derivative you have $$ \dot{x} \approx \frac{x(t+\Delta t)-x(t-\Delta t)}{2\Delta t} $$
– rafa11111, Nov 26 at 12:03
Thanks, my bad, will fix now
– Aleksejs Fomins, Nov 26 at 12:05
You're welcome. AFAIK, there are specific methods for solving equations of motion, such as Verlet integration or leapfrog integration. I'm not familiar with those methods, but I think you would get a good start by looking at those, rather than trying to derive a scheme from scratch.
– rafa11111, Nov 26 at 12:11
Thanks a lot, I'll have a look. I used to know these things, but it was ages ago...
– Aleksejs Fomins, Nov 26 at 12:18
@rafa11111: Verlet integration is exactly this scheme for situations where $f$ does not depend on the first derivative $\dot x$, in other words, where the force is conservative, i.e., the system is Hamiltonian. The special properties of symplectic integration methods depend on this symplectic framework for the ODE system.
– LutzL, Nov 26 at 12:25
1 Answer
answered Nov 26 at 11:34 by LutzL (accepted; edited Nov 26 at 12:13)
You could just iterate
$$
x^{[m+1]}(t+\Delta t)=2x(t)-x(t-\Delta t)+\Delta t^2\, f\left(\frac{x^{[m]}(t+\Delta t)-x(t-\Delta t)}{2\Delta t},\,x(t)\right)
$$
starting with the simple extrapolation $x^{[0]}(t+\Delta t)=2x(t)-x(t-\Delta t)$. Usually the first or second iteration already reaches sufficient accuracy.
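A minimal Python sketch of this iteration, assuming an illustrative $f$ and step size; the start-up value $x(-\Delta t) \approx x(0) - v(0)\Delta t + \tfrac12 \Delta t^2 f(v(0), x(0))$ is a standard second-order Taylor initialization addressing the question's note, not something stated in this answer:

```python
def step(x_prev, x_curr, f, dt, iters=2):
    """One step of the central-difference scheme via fixed-point iteration."""
    x_next = 2 * x_curr - x_prev              # predictor: x^[0](t+dt) = 2x(t) - x(t-dt)
    for _ in range(iters):
        v_mid = (x_next - x_prev) / (2 * dt)  # central estimate of x'(t)
        x_next = 2 * x_curr - x_prev + dt**2 * f(v_mid, x_curr)
    return x_next

# Illustrative problem: x'' = -x - 0.1 x', x(0) = 1, v(0) = 0
f = lambda v, x: -x - 0.1 * v
dt, x0, v0 = 0.01, 1.0, 0.0
x_prev = x0 - v0 * dt + 0.5 * dt**2 * f(v0, x0)   # Taylor start-up for x(-dt)
x_curr = x0
for _ in range(1000):
    x_prev, x_curr = x_curr, step(x_prev, x_curr, f, dt)
```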
For any twice differentiable function you get $x(t+\Delta t)-2x(t)+x(t-\Delta t)=O(\Delta t^2)$, so that the error of the iteration is $O(\Delta t^2(L\Delta t/2)^k)$ after $k$ iterations, where $L$ is a Lipschitz constant of $f$ with respect to its first argument. As the expected truncation error of the method is $O(\Delta t^4)$, $k=2$ iterations should be sufficient; all further iterations just reduce and regularize the error coefficient.
You get convergence to the solution closest to the initial estimate by a contraction argument such as the Banach fixed-point theorem, as long as the combined Lipschitz constant of the iteration satisfies $L\Delta t/2<1$.
For faster or more robust convergence, you would need to implement a Newton-like method.
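As a hedged sketch of such a Newton alternative (the residual $g$, the finite-difference derivative, and all names below are illustrative choices, not part of the answer): one solves $g(X)=X-2x(t)+x(t-\Delta t)-\Delta t^2 f\left(\frac{X-x(t-\Delta t)}{2\Delta t}, x(t)\right)=0$ for $X=x(t+\Delta t)$, for example:

```python
def newton_step(x_prev, x_curr, f, dt, tol=1e-12, max_iter=20):
    """Solve the implicit central-difference equation for x(t+dt) by Newton's
    method with a finite-difference derivative (a sketch, not robust code)."""
    def g(X):
        return X - 2 * x_curr + x_prev - dt**2 * f((X - x_prev) / (2 * dt), x_curr)
    X = 2 * x_curr - x_prev            # same extrapolation predictor as above
    for _ in range(max_iter):
        gX = g(X)
        if abs(gX) < tol:
            break
        h = 1e-8 * (1.0 + abs(X))      # step for the numerical derivative g'(X)
        X -= gX * h / (g(X + h) - gX)  # Newton update X - g(X)/g'(X)
    return X
```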
I would appreciate an elaboration of the statement that the initial extrapolation does not significantly damage the accuracy of the method (proof or citation). Also, a link to a more robust Newton-like method would be appreciated.
– Aleksejs Fomins, Nov 26 at 11:42
Added some semi-quantitative estimates.
– LutzL, Nov 26 at 11:55
OK, I think I get the convergence rate. One last thing I am worried about is whether it converges to the right thing. In principle, I can imagine that a nonlinear equation can have multiple solutions for $x(t+\Delta t)$. Is there an intuition for why it should not converge to some spurious root of the equation?
– Aleksejs Fomins, Nov 26 at 12:03
Also, ignore my question about what a Newton method is. I was being stupid; it is just a better way of finding the root of an equation.
– Aleksejs Fomins, Nov 26 at 12:04
I corrected an error: the Lipschitz constant of the iteration is $\Delta t^2\cdot L\cdot \frac{1}{2\Delta t}=L\Delta t/2$, thus 2 iterations are necessary to reach error order 4; 3 or more give better results (with rapidly diminishing improvements).
– LutzL, Nov 26 at 12:15