Is it legitimate to specify a statement as a Theorem if it is proved using numerical methods (since it can't be proved analytically)?














I have a statement which is "proved" by numerical methods (specifically, using simulations), since it does not seem possible to prove it analytically. Is it legitimate to state it as a Theorem if it isn't proved analytically? If not, what should I call it?



P.S. More details regarding the statement.



I have a data generating process and need to prove that the generated data, as a function of some variable, has certain properties. For example, I have generated data $x^1,\dots,x^n$ ($n=100{,}000$) and need to show that the function $x(\gamma)$ is positive over its domain. Therefore, I apply an interpolation technique to the generated data in order to construct the function, and then I check the property.
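For concreteness, the check described above can be sketched as follows. `x_of_gamma` is a hypothetical stand-in for the simulated data (the real data-generating process is not shown here), and note that this inspects only finitely many sample points:

```python
def x_of_gamma(gamma):
    # hypothetical stand-in for the simulated data as a function of gamma
    return gamma ** 2 + 0.5

def sampled_positive(f, lo, hi, n):
    """Check positivity of f at n evenly spaced sample points in [lo, hi].

    This checks only the sampled points; it is numerical evidence,
    not a proof that f > 0 on the whole interval.
    """
    step = (hi - lo) / (n - 1)
    return all(f(lo + i * step) > 0 for i in range(n))

print(sampled_positive(x_of_gamma, -1.0, 1.0, 100_000))  # True
```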


















  • How is it proved by numerical methods? There are computer proofs that use reliable numerical methods based on interval arithmetic. See scicomp.stackexchange.com/a/1028/640 – lhf, Jan 6 at 10:32

  • Not really, no. You could call it a conjecture, and cite the computation as evidence towards it being true. – Theo Bendit, Jan 6 at 10:43

  • @sane This might be something that I'd have to see the specifics to comment on properly, but in general, if you want to call it a theorem, you'll need to have a proof attached. You might have sampled 100,000 points, but why that number? Could your opinion change if you sampled 1,000,000,000 points? Why didn't you sample, say, 10 instead? What is it about this property that makes you absolutely certain that 100,000 points will tell you definitively whether a function has it or not, but sampling 10 won't? A proof should leave no room for doubt, when verified. – Theo Bendit, Jan 6 at 10:53

  • @sane It won't change things. It'll provide more evidence, but no (finite) number of points will constitute a proof in itself. You have no way of knowing whether the function ducks below zero at other points, maybe at a value much larger than the values tested, or maybe at one tricky point in between the points you've tested. Most numbers cannot be expressed by notation, let alone computers! That is, most numbers have absolutely no chance of being tested numerically. How do you know the function doesn't become negative there? – Theo Bendit, Jan 6 at 11:08

  • @sane How do you know that it cannot be proved? – Noah Schweber, Jan 6 at 12:12
















proof-writing numerical-methods






asked Jan 6 at 10:14 by sane · edited Jan 6 at 15:40














5 Answers



















In general, numerical methods don't constitute proofs. If all we have is an unknown black-box function $f:\mathbb{R} \to \mathbb{R}$ that we know nothing about, and all we can do is compute its value at (finitely) many points, then we simply can't prove that $f$ is positive.



However, in specific cases, we could have arguments based on numerical methods that are valid. Typically, we'd need to make a numerical approximation, and then prove, using non-numerical methods, that our numerical approximation is accurate enough for the theorem to follow. As such, how numerical methods can aid us in proving a statement is very statement-specific.



Take, for example, the following problem: Prove that $f(x) = x^2 + 1 > 0$ for all $x \in [-1, 1]$.



Invalid proof: We computed $f(x)$ at $10^{10^{10000}}$ random points and used linear interpolation between them. Here's a plot. We can see that $f(x)$ is always positive.



Valid proof 1: We computed $f(x)$ at three points: $f(-1) = 2$, $f(0) = 1$, and $f(1)=2$. Let $g(x)$ be the linear interpolation of the points $(-1, 2)$, $(0, 1)$, and $(1, 2)$; $g$ attains its minimum at $g(0) = 1$. Since $f^{\prime\prime} = 2$, we can compute an error bound on our interpolation (see https://ccrma.stanford.edu/~jos/resample/Linear_Interpolation_Error_Bound.html): $|f(x) - g(x)| \leq \frac{2}{8}$. Therefore, we can conclude that $f(x) \geq \frac{3}{4} > 0$.



Note: Often, if we need to resort to numerical methods, it would be just as hard to compute derivatives. However, we don't need the actual derivatives; we just need an upper bound. The better the bound, the fewer points we need to evaluate $f(x)$ at. Furthermore, a bound on the first derivative is enough, but having one on the second derivative can further reduce the number of points needed.
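As a sketch of how "valid proof 1" can be mechanized: assuming (as above) an analytic bound on $|f''|$, the standard linear-interpolation error bound $|f(x)-g(x)| \leq M h^2/8$ turns sampled values into a certified lower bound on $f$:

```python
def certified_lower_bound(f, second_deriv_bound, lo, hi, n_nodes):
    """Lower bound on f over [lo, hi], certified by the interpolation
    error bound |f - g| <= M * h^2 / 8 for linear interpolation g
    with node spacing h, where |f''| <= M is a *proved* bound."""
    h = (hi - lo) / (n_nodes - 1)
    # min of a piecewise-linear interpolant is attained at a node
    g_min = min(f(lo + i * h) for i in range(n_nodes))
    err = second_deriv_bound * h * h / 8
    return g_min - err  # f(x) >= g_min - err everywhere on [lo, hi]

f = lambda x: x * x + 1          # the example from the answer
bound = certified_lower_bound(f, 2.0, -1.0, 1.0, 3)
print(bound > 0)  # True: certifies f > 0 on [-1, 1], with bound = 3/4
```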



Valid proof 2: We know that $f(x)$ is convex. We use a numerical method to compute its minimum and find that $\min f(x) \approx 1.0000000075$. We also have a (true, non-numerical) error bound on our approximation: $|1.0000000075 - \min f(x)| < 0.001$. Therefore, $f(x) > 1.0000000075 - 0.001 > 0$.



Finally, it doesn't really matter whether an analytical proof exists or not. The validity of a proof is determined by that proof alone, not by the existence or absence of other proofs.



In fact, it has been proven that not all true statements can be proven. But that is no reason to reduce our standards of rigor.















  • Thank you for your answer. I have a question regarding "valid proof 1". You have computed the error bound of the interpolation, but this computation is affordable when you have the functional form $f(x)$ (you have actually assumed it). In my case I don't have the functional form, so how can I replicate valid proof 1 for my statement? – sane, Jan 6 at 11:34

  • "Valid proof 1" is perverse (it belongs in the well-known book "Mathematics Made Difficult", IMO): if you know enough calculus to find the second derivative and understand and use the result quoted, you know enough to show that the minimum value is at $x = 0$ without using the "numerical method" at all. Numerical method 2 is not a proof unless you have a non-numerical proof that the error bound of your algorithm is guaranteed to be correct when using approximate arithmetic (e.g. working to a fixed number of decimal places, or using IEEE floating point arithmetic on a computer). – alephzero, Jan 6 at 12:34

  • @alephzero The point of the first proof was to explain how one would argue something "numerically" using a simple example. Of course one can show that $x^2+1$ is non-negative in a billion easier ways, but they would be totally irrelevant to the question. – user98602, Jan 6 at 13:30

  • @sane A guess. Because that's what it is: an educated guess with a reasonable amount of evidence behind it. – user3482749, Jan 6 at 20:14

  • @dmtri No, we don't like it because they're not actual proofs. Without the proper regularity arguments we can end up "proving" numerically things that are actually wrong. – RcnSc, Jan 7 at 9:54




















Well, is it proven or is it not proven? The way you've phrased your question is "if I've proven something with numerical methods, have I proven it?". Well, yes - you just said you had.



Say you want to prove that $f(n)$ is a prime number for $n<10^7$. Well then if you loop through all the numbers less than $10^7$ and check that $f(n)$ is prime for all of them, you have a proof. It isn't somehow less of a proof just because it doesn't involve a bunch of algebraic symbols and words like "therefore" and "by contradiction".
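A finite check of this kind can itself be a sketch of a valid proof. For illustration (a small stand-in for the $n<10^7$ example), here is an exhaustive verification that Euler's polynomial $n^2+n+41$ is prime for all $n<40$:

```python
def is_prime(m):
    # trial division; slow but unambiguous for a finite check
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

# Exhaustively verify the finite claim: n^2 + n + 41 is prime for n < 40.
all_prime = all(is_prime(n * n + n + 41) for n in range(40))
print(all_prime)  # True: the finite, exhaustive loop is the proof
```

The same loop proves nothing about $n \geq 40$ (and indeed $n=40$ gives $41^2$, which is composite), which is exactly the answer's point.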



However, if you want to prove that $f(n)$ is a prime number for all numbers, and you only check for $n\leq 10^7$, that simply isn't a proof. There's no sophisticated philosophical reason why it isn't; it just clearly isn't, is it? I mean, $f(10^7+1)$ might not be prime, for all you know.



So what you really should be asking is whether or not you actually have proven your statement.















  • Thank you for your answer. Actually, I check only finitely many points, so according to your answer I can't consider it a proof. What should I call it, then? – sane, Jan 6 at 15:12

  • @sane It's computations which suggest a result might be true. – Jack M, Jan 6 at 16:32

  • @sane You don't call it anything. It's just data. – The Great Duck, Jan 7 at 4:14

  • @sane: In short, your claim must be as accurate as your proof. If you've run your number simulation on a set of numbers (which is obviously finite), then you can only claim that it is proven for that set of numbers. You cannot make any claims about untested numbers. What you should do is amend your claim: instead of saying "f(n) is a prime", say "f(n) is a prime for 0 < n < 10^7" (for example). – Flater, Jan 7 at 14:37




















It turns out that, with further analysis of your actual problem, you could have found an exact proof using first-semester calculus. There is no need for speculative numerical exploration.



For your actual problem, to find solutions $x(\gamma,y)>0$ for the equation
$$
(\gamma^{n-1}y)^n+(\gamma^{n-2}y)^{n-1}+\dots+(\gamma y)^2+y=A(\gamma,y)=x^n+x^{n-1}+\dots+x,
$$

it is rather easy to answer that there is exactly one positive root $x$ for fixed positive $y$ and $\gamma$.



The right side at $x=0$ is zero and grows monotonically and convexly towards infinity, thus there is a positive root of the equation by the intermediate value theorem. Further, there is a lower bound
$$
x>\frac{A}{1+A}\ge\frac{y}{1+y}
$$

for the positive root, as either $x\ge 1$ or, for $0<x<1$,
$$
A(1-x)=x(1-x^n)<x\implies x>\frac{A}{1+A}
$$
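To make the argument concrete, here is a bisection sketch for one hypothetical parameter choice ($n=3$, $\gamma=1.5$, $y=2$; values picked purely for illustration). It finds the unique positive root and confirms the lower bound above:

```python
def bisect_root(p, lo, hi, tol=1e-12):
    # standard bisection, assuming p(lo) < 0 < p(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if p(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

gamma, y = 1.5, 2.0
# left side of the equation for n = 3: (g^2 y)^3 + (g y)^2 + y
A = (gamma**2 * y)**3 + (gamma * y)**2 + y
p = lambda x: x**3 + x**2 + x - A   # right side minus left side

root = bisect_root(p, 0.0, A + 1.0)  # p(0) = -A < 0 and p(A+1) > 0
print(root > A / (1 + A) >= y / (1 + y))  # True: matches the bound
```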





In general, there are methods to determine the number of positive roots, like Descartes' rule of signs or the Hurwitz stability criteria. Descartes' rule counts the sign changes in the coefficient sequence. Here $x^n+\dots+x^2+x-A=0$ has exactly one sign change, which proves the existence of exactly one positive root.



The polynomial equation $a_nx^n+\dots+a_1x+a_0=0$ has lower root bounds
$$
|x|\ge\frac{|a_0|}{|a_0|+\max_{k=1..n}|a_k|}
~~ \text{ or } ~~
|x|\ge\frac{|a_0|}{\max(|a_0|,\,|a_1|+\dots+|a_n|)}
$$

which are lower bounds for a single positive root.
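Both tools can be sketched in a few lines; here for the small hypothetical instance $p(x)=x^3+x^2+x-A$ with $A=5$ (the first of the two bound formulas above):

```python
def sign_changes(coeffs):
    """Descartes' rule input: count sign changes in a0, a1, ..., an."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def lower_root_bound(coeffs):
    # |x| >= |a0| / (|a0| + max_{k=1..n} |a_k|) for any root, a0 != 0
    a0 = abs(coeffs[0])
    return a0 / (a0 + max(abs(c) for c in coeffs[1:]))

A = 5.0
coeffs = [-A, 1.0, 1.0, 1.0]        # a0 + a1 x + a2 x^2 + a3 x^3
print(sign_changes(coeffs))          # 1: exactly one positive root
print(lower_root_bound(coeffs))      # 5/6, i.e. about 0.833
```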



















  • Sorry, I had a typo; this isn't the problem that I consider. – sane, Jan 6 at 16:05

  • @sane: Then please add the correct problem statement somewhere, preferably to the body of the question text. It is better to add relevant, essential information there than in comments. There are many elementary and not so elementary results on the location and bounds of real roots, starting with Descartes' rule. – LutzL, Jan 6 at 16:09

  • @sane: This should be in your Question, not hidden in comments. In your first equation, where are the transitions between $a$ and $a+1$ in the numerators on each side? In your second equation, does the sum on the right terminate? – Eric Towers, Jan 6 at 18:53

  • Ok, it looks more difficult to decide on the positivity of $h$. Still, I do not see it as hopeless to try to derive theoretical results; the right sides are still monotonic and convex polynomials in $1/(1+x)$ resp. $1/(1+z)$ with a nice structure. – LutzL, Jan 6 at 18:55

  • @sane: It belongs in your Question, because your question is an instance of an XY problem. – Eric Towers, Jan 6 at 19:02




















I'm going to take the first paragraph, and your saying "it cannot be proven analytically", to mean "it was proven that a proof or disproof does not exist". Well, I hate to break it to you, then: you're in one of two scenarios.




  1. The statement you are working with can be true or false without contradiction (think of the fifth postulate of Euclidean geometry, which many people attempted to prove).


  2. An actual truth value exists for this statement, but the answer is not Turing-computable. If you don't know what that means: given nothing but a finite amount of pencil and paper and an arbitrary finite amount of time, you will never get an answer, ever. You would require either infinite paper being manipulated at once or infinitely many steps somehow done in a finite period of time.



What this means is that the answer is quite simple here. It doesn't matter what your method was. It doesn't matter if it was a sophisticated AI-powered real-analysis proof-solving system capable of writing a proof for pretty much anything you ask it to. It doesn't matter, because you've already proven that a proof doesn't exist. Therefore anything you obtain that claims to be a proof cannot be a proof; that would be a contradiction, and you obviously can't have that.



Your only exception would be if you have a computer that is computationally stronger than a Turing machine. No, that does not mean processing power; I mean it can solve the Halting Problem. However, if you have a computer that can do that, then you probably shouldn't post about it on here. The current count of "hypercomputers", as such machines have been called, built by the entire human race is... zero. If you did build one, then refer to the other answers for more info on the validity of various methods.



So which is it? Is it impossible to prove the statement, or was that reasoning flawed and the numerical-method proof (possibly) valid, depending on the exact methods used? It cannot be both. That would imply that mathematics as a whole is broken, and if you were to discover or prove that, the entire notion of truth would go completely out the door. It would also likely result in the death of this site. Please don't do that, sir. (More likely, the axioms currently being used for whatever you are studying would get a correction and all would be well.)




























As other answers have said, a statement is either proven or unproven. If it is proven, we call it a theorem, regardless of the methods used. I'm answering separately to call attention to some of the pitfalls of trying to "prove" things with numerical methods.


The vast majority of real-world numerical computing (outside of the financial sector) uses IEEE 754 floating point to represent numbers which may not be integers. In short, the floating point numbers are a finite (but large) subset of the dyadic rationals, which become sparser (at a roughly logarithmic rate) as we move away from zero. There are also a few "special" values which are used to represent the results of invalid operations or arithmetic overflow.


Why does this matter? It's important to understand the limitations of floating point arithmetic. In particular:


  • It does not support irrational numbers whatsoever. The Dirichlet function and its cousins cannot be usefully analyzed at all.

  • Numbers which are farther from zero have less absolute precision, so functions with essential singularities will be difficult to work with in the neighborhoods of their singularities.

  • Limits are potentially an issue. Normally, a limit only exists if it exists under any possible choice of epsilon, but the floating point numbers can only provide finitely many epsilons, all of which are dyadic rationals. So if you take a limit of a function which behaves "specially" over the dyadic rationals (e.g. certain combinations of trigonometric functions), you may derive an incorrect value or incorrectly conclude that the limit exists when it does not.

  • The floating point numbers are non-associative under every operation that matters, because of intermediate rounding. No useful algebraic manipulations will preserve equality, so equality is basically useless and you have to approximate it by looking at whether numbers are "close enough" instead. For some applications, this is a problem (e.g. proving that a value is never exactly zero is impossible if it comes arbitrarily close to zero).
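The non-associativity point is easy to demonstrate with ordinary IEEE 754 doubles; the intermediate rounding of each addition makes the grouping matter:

```python
# Same three terms, two groupings: intermediate rounding differs.
left = (0.1 + 0.2) + 0.3   # 0.1 + 0.2 rounds to 0.30000000000000004
right = 0.1 + (0.2 + 0.3)  # 0.2 + 0.3 rounds to exactly 0.5
print(left == right)  # False
```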



















  • And even for integers, a real-world computer is probably really working with the integers modulo some power of 2. In many cases, they silently overflow, so considerable care is needed even if you expect that a solution exists within the available range. – badjohn, Jan 7 at 14:22












    Your Answer





    StackExchange.ifUsing("editor", function () {
    return StackExchange.using("mathjaxEditing", function () {
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    });
    });
    }, "mathjax-editing");

    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "69"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: true,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: 10,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3063680%2fis-it-legitimate-to-specify-a-statement-as-a-theorem-if-it-is-proved-using-numer%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    5 Answers
    5






    active

    oldest

    votes








    5 Answers
    5






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    50












    $begingroup$

    In general, numerical methods don't constitute proofs. If all we have is an unknown blackbox function $f:mathbb{R} to mathbb{R}$ that we know nothing about, and all we can do is compute its value at (finitely) many points, then we simply can't prove that $f$ is positive.



    However, in specific cases, we could have arguments based on numerical methods that are valid. Typically, we'd need to make a numerical approximation, and then prove, using non-numerical methods, that our numerical approximation is accurate enough for the theorem to follow. As such, how numerical methods can aid us in proving a statement is very statement-specific.



Take, for example, the following problem: Prove that $f(x) = x^2 + 1 > 0$ for all $x \in [-1, 1]$.



    Invalid proof: We computed $f(x)$ at $10^{10^{10000}}$ random points and used linear interpolation between them. Here's a plot. We can see that $f(x)$ is always positive.



Valid proof 1: We computed $f(x)$ at three points: $f(-1) = 2$, $f(0) = 1$, and $f(1)=2$. Let $g(x)$ be the linear interpolation of the points $(-1, 2)$, $(0, 1)$, and $(1, 2)$; $g$ attains its minimum at $g(0) = 1$. Since $f''(x) = 2$, we can compute an error bound on our interpolation (see https://ccrma.stanford.edu/~jos/resample/Linear_Interpolation_Error_Bound.html): $|f(x) - g(x)| \leq \frac{2}{8}$. Therefore, we can conclude that $f(x) \geq \frac{3}{4} > 0$.
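To make this style of argument concrete, here is a hedged Python sketch (my illustration, not part of the original answer; the name `certify_positive` and the grid size are made up). It assumes a known bound `M` on $|f''|$ and subtracts the interpolation error $Mh^2/8$ on each subinterval; in a fully rigorous proof, floating-point rounding would still need separate treatment.

```python
# Sketch: certify f > 0 on [a, b] from finitely many samples, given a
# bound M on |f''|.  On each subinterval of width h the linear
# interpolant g satisfies |f - g| <= M*h^2/8, and g attains its minimum
# at an endpoint, so f >= min(endpoint values) - M*h^2/8 there.

def certify_positive(f, a, b, M, n):
    h = (b - a) / n
    err = M * h * h / 8.0
    vals = [f(a + i * h) for i in range(n + 1)]
    # Guaranteed lower bound for f over each subinterval, then overall:
    lower = min(min(vals[i], vals[i + 1]) - err for i in range(n))
    return lower > 0, lower

# f(x) = x^2 + 1 on [-1, 1] with M = 2 and just two subintervals:
ok, bound = certify_positive(lambda x: x * x + 1, -1.0, 1.0, 2.0, 2)
# err = 2/8 = 0.25, so the certified lower bound is 1 - 0.25 = 0.75
```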



Note: Often, if we need to resort to numerical methods, it would be just as hard to compute derivatives. However, we don't need the actual derivatives, we just need an upper bound. The better the bound, the fewer points we need to evaluate $f(x)$ at. Furthermore, a bound on the first derivative is enough, but having one on the second can further reduce the number of points needed.



Valid proof 2: We know that $f(x)$ is convex. We use a numerical method to compute its minimum and find that $\min f(x) \approx 1.0000000075$. We also have a true, non-numerical error bound on our approximation: $|1.0000000075 - \min f(x)| < 0.001$. Therefore, $f(x) > 1.0000000075 - 0.001 > 0$.
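"Valid proof 2" can be sketched similarly (hypothetical code; the function and tolerance are illustrative). Ternary search on a convex function provably keeps the minimizer inside a shrinking bracket; turning the bracket into a rigorous lower bound on $f$ still requires a separate non-numerical argument, as noted above.

```python
# Sketch: locate the minimum of a convex f by ternary search.  Each step
# discards a third of the bracket while, by convexity, provably keeping
# the minimizer inside it.

def ternary_min(f, lo, hi, tol=1e-9):
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2   # minimizer lies in [lo, m2]
        else:
            lo = m1   # minimizer lies in [m1, hi]
    return (lo + hi) / 2.0

x_star = ternary_min(lambda x: x * x + 1, -1.0, 1.0)
# x_star is within tol of 0, where the minimum value 1 is attained
```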



    Finally, it doesn't really matter whether analytical proofs exist or not. The validity of any proof is only determined by that proof and no others.



    In fact, it has been proven that not all true statements can be proven. But that is no reason to reduce our standards of rigor.






• Thank you for your answer. I have a question regarding "valid proof 1". You have computed the error bound of the interpolation, but this computation is affordable when you have the functional form $f(x)$ (actually you have assumed it). But in my case I don't have the functional form, so how can I replicate Valid proof 1 for my statement? – sane, Jan 6 at 11:34

• "Valid proof 1" is perverse (it belongs in the well known book "Mathematics made difficult", IMO) – if you know enough calculus to find the second derivative and use the result quoted, you know enough to show that the minimum value is at $x = 0$ without using the "numerical method" at all. Numerical method 2 is not a proof unless you have a non-numerical proof that the error bound of your algorithm is guaranteed to be correct when using approximate arithmetic (e.g. working to a fixed number of decimal places, or using IEEE floating point arithmetic on a computer). – alephzero, Jan 6 at 12:34

• @alephzero The point of the first proof was to explain how one would argue something "numerically" using a simple example. Of course one can show that $x^2+1$ is non-negative in a billion easier ways, but they would be totally irrelevant to the question. – user98602, Jan 6 at 13:30

• @sane A guess. Because that's what it is. An educated guess with a reasonable amount of evidence behind it. – user3482749, Jan 6 at 20:14

• @dmtri No, we don't like it because they're not actual proofs. Without the proper regularity arguments we can end up "proving" numerically things that are actually wrong. – RcnSc, Jan 7 at 9:54










edited Jan 10 at 19:23
answered Jan 6 at 11:13 by Todor Markov













    Well, is it proven or is it not proven? The way you've phrased your question is "if I've proven something with numerical methods, have I proven it?". Well, yes - you just said you had.



    Say you want to prove that $f(n)$ is a prime number for $n<10^7$. Well then if you loop through all the numbers less than $10^7$ and check that $f(n)$ is prime for all of them, you have a proof. It isn't somehow less of a proof just because it doesn't involve a bunch of algebraic symbols and words like "therefore" and "by contradiction".



However, if you want to prove that $f(n)$ is a prime number for all numbers, and you check for $n\leq10^7$, that simply isn't a proof. There's no sophisticated philosophical reason why it isn't, it just clearly isn't, is it? I mean, $f(10^7+1)$ might not be prime, for all you know.
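The finite-range point can be made concrete with a classical example (my illustration, not part of the original answer): Euler's polynomial $n^2+n+41$ is prime for every $n<40$, which an exhaustive loop genuinely proves for that range, yet the statement fails at $n=40$.

```python
# Sketch: an exhaustive check is a proof for the checked range -- and
# nothing beyond it.  Euler's polynomial n^2 + n + 41 is prime for all
# 0 <= n < 40, but 40^2 + 40 + 41 = 1681 = 41^2 is composite.

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

checked = all(is_prime(n * n + n + 41) for n in range(40))  # True: proven for n < 40
breaks = is_prime(40 * 40 + 40 + 41)                        # False: 1681 = 41^2
```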



    So what you really should be asking is whether or not you actually have proven your statement.






• Thank you for your answer. Actually, I pick up (finite) points, therefore according to your answer I can't consider it as a proof. Then what should I call it? – sane, Jan 6 at 15:12

• @sane It's computations which suggest a result might be true. – Jack M, Jan 6 at 16:32

• @sane You don't call it anything. It's just data. – The Great Duck, Jan 7 at 4:14

• @sane: In short, your claim must be as accurate as your proof. If you've run your number simulation on a set of numbers (which is obviously finite), then you can only claim that it is proven for that set of numbers. You cannot make any claims about untested numbers. What you should do is amend your claim: instead of saying "f(n) is a prime", say "f(n) is a prime for 0 < n < 10^7" (for example). – Flater, Jan 7 at 14:37
















answered Jan 6 at 12:40 by Jack M



















And it turns out that with further analysis of your actual problem you could have found an exact proof by first-semester calculus means. There is no need for speculative numerical explorations.



    For your actual problem, to find solutions $x(\gamma,y)>0$ of the equation
    $$
    (\gamma^{n-1}y)^n+(\gamma^{n-2}y)^{n-1}+\dots+(\gamma y)^2+y=A(\gamma,y)=x^n+x^{n-1}+\dots+x,
    $$

    it is rather easy to show that there is exactly one positive root $x$ for fixed positive $y$ and $\gamma$.



    The right side at $x=0$ is zero and grows monotonically and convexly towards infinity, so there is a positive root of the equation by the intermediate value theorem. Further, there is a lower bound
    $$
    x>\frac{A}{1+A}\ge\frac{y}{1+y}
    $$
    for the positive root, as either $x\ge 1$ or, for $0<x<1$,
    $$
    A(1-x)=x(1-x^n)<x \implies x>\frac{A}{1+A}.
    $$





    In general, there are methods to determine the number of positive roots, such as Descartes' rule of signs or the Hurwitz stability criteria. Descartes' rule counts the sign changes in the coefficient sequence: here $x^n+\dots+x^2+x-A=0$ has exactly one sign change, which proves the existence of exactly one positive root.



    The polynomial equation $a_nx^n+\dots+a_1x+a_0=0$ has the lower root bounds
    $$
    |x|\ge\frac{|a_0|}{|a_0|+\max_{k=1..n}|a_k|}
    ~~ \text{ or } ~~
    |x|\ge\frac{|a_0|}{\max(|a_0|,\,|a_1|+\dots+|a_n|)},
    $$
    which are lower bounds for a single positive root.
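    A small numerical sketch (my own, not part of the original answer) checking both claims for sample values of $n$, $y$, $\gamma$: the coefficient sequence of $x^n+\dots+x-A$ has exactly one sign change, and the positive root satisfies the lower bound $x>A/(1+A)$.

    ```python
    # Sketch: verify Descartes' sign-change count and the lower root bound
    # for sample parameters. The right-hand side p(x) = x^n + ... + x - A
    # is strictly increasing for x > 0, so bisection finds the unique root.

    def sign_changes(coeffs):
        """Number of sign changes in a coefficient sequence (zeros skipped)."""
        signs = [c for c in coeffs if c != 0]
        return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

    def A_value(n, y, g):
        # A = (g^{n-1} y)^n + (g^{n-2} y)^{n-1} + ... + (g y)^2 + y
        return sum((g ** (k - 1) * y) ** k for k in range(1, n + 1))

    def positive_root(n, A, tol=1e-12):
        p = lambda x: sum(x ** k for k in range(1, n + 1)) - A
        lo, hi = 0.0, 1.0
        while p(hi) < 0:          # grow the bracket until p changes sign
            hi *= 2.0
        while hi - lo > tol:      # bisection
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
        return (lo + hi) / 2

    n, y, g = 5, 0.7, 1.3          # arbitrary sample parameters
    A = A_value(n, y, g)
    coeffs = [1.0] * n + [-A]      # x^n + ... + x - A, highest degree first
    one_change = sign_changes(coeffs) == 1        # Descartes: one positive root
    x = positive_root(n, A)
    bound_holds = x > A / (1 + A) >= y / (1 + y)  # the answer's lower bound
    ```

    This is only a spot check for particular parameters, of course; the proof in the answer is what covers all positive $y$ and $\gamma$.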






    $endgroup$













    • $begingroup$
      Sorry, I had a typo, this isn't the problem that I consider.
      $endgroup$
      – sane
      Jan 6 at 16:05






    • $begingroup$
      @sane : Then please add the correct problem statement somewhere, preferably to the body of the question text. It is better to add relevant, essential information there than in comments. There are many elementary and not so elementary results on the location and bounds of real roots, starting with Descartes' rule.
      $endgroup$
      – LutzL
      Jan 6 at 16:09








    • $begingroup$
      @sane: This should be in your Question, not hidden in comments. In your first equation, where are the transitions between $a$ and $a+1$ in the numerators on each side? In your second equation, does the sum on the right terminate?
      $endgroup$
      – Eric Towers
      Jan 6 at 18:53










    • $begingroup$
      Ok, it looks more difficult to decide on the positivity of $h$. Still, I do not see it as hopeless to try to derive theoretical results, the right sides are still monotonic and convex polynomials in $1/(1+x)$ resp. $1/(1+z)$ with a nice structure.
      $endgroup$
      – LutzL
      Jan 6 at 18:55






    • $begingroup$
      @sane : It belongs in your Question, because your question is an instance of an XY problem.
      $endgroup$
      – Eric Towers
      Jan 6 at 19:02
















    edited Jan 6 at 16:53

























    answered Jan 6 at 15:54









    LutzL
















    $begingroup$

    I'm going to take the first paragraph, and your saying "it cannot be proven analytically", to mean "it was proven that a proof or disproof does not exist". Well, I hate to break it to you then: you're in one of two scenarios.




    1. The statement you are working with can be true or false without contradiction (think of the 5th postulate of Euclidean geometry, which many people attempted to prove).


    2. An actual truth value exists for this statement, but the answer is not Turing-computable. If you don't know what that means: given nothing but a finite supply of pencil and paper and an arbitrary finite amount of time, you will never get an answer, ever. You would require either infinite paper manipulated at once or infinitely many steps somehow completed in a finite period of time.



    What this means is that the answer here is quite simple. It doesn't matter what your method was. It doesn't matter if it was a sophisticated AI-powered real-analysis proof system capable of writing a proof of pretty much anything you ask it to. It doesn't matter, because you've already proven that a proof doesn't exist. Therefore anything you obtain that claims to be a proof cannot be one; that would be a contradiction, and you obviously can't have that.



    Your only exception would be if you have a computer that is computationally stronger than a Turing machine. No, that does not mean more processing power; I mean it can solve the Halting Problem. However, if you have a computer that can do that, then you probably shouldn't be posting about it here. The current count of such "hypercomputers" built by the entire human race is... zero. If you did build one, then refer to the other answers for more on the validity of various methods.



    So which is it? Is it impossible to prove the statement, or was that reasoning flawed and the numerical-method proof (possibly) valid, depending on the exact methods used? It cannot be both. That would imply that mathematics as a whole is broken, and if you were to discover or prove that, the entire notion of truth would go completely out the door. It would also likely result in the death of this site. Please don't do that. (More likely, the axioms currently used for whatever you are studying would get a correction and all would be well.)






    $endgroup$


















        answered Jan 7 at 4:26









    The Great Duck



































            $begingroup$

            As other answers have said, a statement is either proven or unproven. If it is proven, we call it a theorem, regardless of the methods used. I'm answering separately to call attention to some of the pitfalls of trying to "prove" things with numerical methods.



            The vast majority of real-world numerical computing (outside of the financial sector) uses IEEE 754 floating point to represent numbers which may not be integers. In short, the floating point numbers are a finite (but large) subset of the dyadic rationals, which become sparser (at a roughly logarithmic rate) as we move away from zero. There are also a few "special" values which are used to represent the results of invalid operations or arithmetic overflow.



            Why does this matter? It's important to understand the limitations of floating point arithmetic. In particular:




            • It does not support irrational numbers whatsoever. The Dirichlet function and its cousins cannot be usefully analyzed at all.

            • Numbers which are farther from zero have less precision, so functions with essential singularities will be difficult to work with in the neighborhoods of their singularities.

            • Limits are potentially an issue. Normally, a limit only exists if it exists under any possible choice of epsilon, but floating point can only represent finitely many epsilons, all of which are dyadic rationals. So if you take a limit of a function which behaves "specially" over the dyadic rationals (e.g. certain combinations of trigonometric functions), you may derive an incorrect value or incorrectly conclude that the limit exists when it does not.

            • The floating point numbers are non-associative under every operation that matters, because of intermediate rounding, so algebraic manipulations do not generally preserve equality. Exact equality is therefore nearly useless, and you have to approximate it by asking whether numbers are "close enough" instead. For some applications this is a problem (e.g. proving that a value is never exactly zero is impossible if it comes arbitrarily close to zero).
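            A minimal demonstration of the last two points, using Python floats (which are standard IEEE 754 binary64 doubles):

            ```python
            # Addition of doubles is not associative: the intermediate
            # rounding differs depending on grouping.
            a, b, c = 0.1, 0.2, 0.3

            left = (a + b) + c    # 0.6000000000000001
            right = a + (b + c)   # 0.6
            not_associative = left != right   # True

            # Exact equality is therefore fragile; compare with a
            # tolerance instead.
            import math
            close_enough = math.isclose(left, right, rel_tol=1e-9)  # True
            ```

            Interval arithmetic libraries exist precisely to turn such rounded computations back into rigorous enclosures, but plain floating point gives no such guarantee.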






            $endgroup$













            • $begingroup$
              And even for integers, a real world computer is probably really working with the integers modulo some power of 2. In many cases, they silently overflow so considerable care is needed even if you expect that a solution exists within the available range.
              $endgroup$
              – badjohn
              Jan 7 at 14:22
















            2












            $begingroup$

            As other answers have said, a statement is either proven or unproven. If it is proven, we call it a theorem, regardless of the methods used. I'm answering separately to call attention to some of the pitfalls of trying to "prove" things with numerical methods.



            The vast majority of real-world numerical computing (outside of the financial sector) uses IEEE 754 floating point to represent numbers which may not be integers. In short, the floating point numbers are a finite (but large) subset of the dyadic rationals, which become sparser (at a roughly logarithmic rate) as we move away from zero. There are also a few "special" values which are used to represent the results of invalid operations or arithmetic overflow.



            Why does this matter? It's important to understand the limitations of floating point arithmetic. In particular:




            • It does not support irrational numbers whatsoever. The Dirichlet function and its cousins cannot be usefully analyzed at all.

            • Numbers which are farther from zero have less precision, so functions with essential singularities will be difficult to work with in the neighborhoods of their singularities.

            • Limits are potentially an issue. Normally, a limit only exists if it exists under any possible choice of epsilon, but the floating point numbers can only choose finitely many epsilons, all of which are dyadic rationals. So if you take a limit of a function which behaves "specially" over the dyadic rationals (e.g. certain combinations of trigonometric functions), you may derive an incorrect value or incorrectly conclude that the limit exists at all.

            • The floating point numbers are non-associative under every operation that matters, because of intermediate rounding. No useful algebraic manipulations will preserve equality. So equality is basically useless and you have to approximate it by looking at whether numbers are "close enough" instead. For some applications, this is a problem (e.g. proving that a value is never exactly zero is impossible if it comes arbitrarily close to zero).






            share|cite|improve this answer









            $endgroup$













            • $begingroup$
              And even for integers, a real world computer is probably really working with the integers modulo some power of 2. In many cases, they silently overflow so considerable care is needed even if you expect that a solution exists within the available range.
              $endgroup$
              – badjohn
              Jan 7 at 14:22














            2












            2








            2





            $begingroup$

            As other answers have said, a statement is either proven or unproven. If it is proven, we call it a theorem, regardless of the methods used. I'm answering separately to call attention to some of the pitfalls of trying to "prove" things with numerical methods.



            The vast majority of real-world numerical computing (outside of the financial sector) uses IEEE 754 floating point to represent numbers which may not be integers. In short, the floating point numbers are a finite (but large) subset of the dyadic rationals, which become sparser (at a roughly logarithmic rate) as we move away from zero. There are also a few "special" values which are used to represent the results of invalid operations or arithmetic overflow.



            Why does this matter? It's important to understand the limitations of floating point arithmetic. In particular:




            • It does not support irrational numbers whatsoever. The Dirichlet function and its cousins cannot be usefully analyzed at all.

            • Numbers which are farther from zero have less precision, so functions with essential singularities will be difficult to work with in the neighborhoods of their singularities.

            • Limits are potentially an issue. Normally, a limit only exists if it exists under any possible choice of epsilon, but the floating point numbers can only choose finitely many epsilons, all of which are dyadic rationals. So if you take a limit of a function which behaves "specially" over the dyadic rationals (e.g. certain combinations of trigonometric functions), you may derive an incorrect value or incorrectly conclude that the limit exists at all.

            • The floating point numbers are non-associative under every operation that matters, because of intermediate rounding. No useful algebraic manipulations will preserve equality. So equality is basically useless and you have to approximate it by looking at whether numbers are "close enough" instead. For some applications, this is a problem (e.g. proving that a value is never exactly zero is impossible if it comes arbitrarily close to zero).






            share|cite|improve this answer









            $endgroup$



            answered Jan 6 at 16:02









            Kevin

            1,648
















            • $begingroup$
              And even for integers, a real world computer is probably really working with the integers modulo some power of 2. In many cases, they silently overflow so considerable care is needed even if you expect that a solution exists within the available range.
              $endgroup$
              – badjohn
              Jan 7 at 14:22
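              The wraparound badjohn describes can be simulated directly (Python's own integers are arbitrary-precision, so `wrap32` below is a hypothetical helper that mimics two's-complement 32-bit arithmetic as a C `int32_t` would behave):

```python
def wrap32(n):
    """Reduce n to a signed 32-bit two's-complement value,
    simulating fixed-width machine integer arithmetic."""
    return ((n + 2**31) % 2**32) - 2**31

INT32_MAX = 2**31 - 1             # 2147483647

# Adding 1 to the largest 32-bit int silently wraps to the smallest one.
print(wrap32(INT32_MAX + 1))      # -2147483648

# A product that "should" be positive can silently come out as zero.
print(wrap32(65536 * 65536))      # 0
```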













































































































