How many pairs of brackets are sufficient to make Brainfuck Turing complete?












Brainfuck is a Turing complete programming language that uses only 8 symbols (6 if you ignore I/O).



The two most notable ones that push it to Turing completeness are [ and ], essentially Brainfuck's label and goto.



Normally, programs in Brainfuck use many pairs of [], but I was wondering exactly how many pairs of these brackets have to be used for Brainfuck to be Turing complete?



More precisely, what is the least number of brackets that you'd need to simulate an n-state Turing machine? (Give the number of brackets for 1-, 2-, and 3-state Turing machines.)



Notes:

  • We are assuming an infinite tape and no computational limits.

  • It is a 2-symbol Turing machine.







turing-completeness






asked Jan 4 at 3:03 by MilkyWay90
edited Jan 9 at 19:42 by Jacob Bundgaard








  • "how many pairs of these brackets have to be used?" Can you clarify "have to be used"? For example, what if I ask BrainF to count to $2^{1000000}$?
    – Apass.Jack, Jan 4 at 4:08

  • @Apass.Jack The minimum number of brackets.
    – MilkyWay90, Jan 4 at 4:13

  • Oh, do you mean the minimum number of brackets to simulate an $n$-state Turing machine, as a function of $n$? Anyway, can you give a nontrivial example that is as simple as possible?
    – Apass.Jack, Jan 4 at 4:16

  • @Apass.Jack Okay, I'm coming up with a buggy BF program which works for a one-state Turing machine.
    – MilkyWay90, Jan 4 at 4:28

  • @Apass.Jack Never mind, it is way too hard for me. Basically, make a BF interpreter for my programming language, Turing Machine But Way Worse, when it uses only two possible symbols (0 and 1) and remove the I/O and halting aspects completely.
    – MilkyWay90, Jan 4 at 4:38














2 Answers


















This is a further development of @ais523's answer, reducing it to only two sets of brackets, and also using a more compact cell placement based on Golomb ruler theory. ais523 has made a compiler for this construction, as well as this TIO session showing a sample resulting BF program running with debug tracing of the TWM counters.



Like the original, this starts with a program in The Waterfall Model, with some restrictions that don't lose generality:




  1. All counters have the same self-reset value $R$; that is, the TWM trigger map $f$ has the property that $f(x,x)=R$ for all $x$.

  2. There is a single halting counter $h$.

  3. The number $c$ of counters is $(p-1)/2$ for some prime number $p$.


Golomb ruler



We combine the Erdős–Turán construction with the permutation function of a Welch–Costas array in order to get a Golomb ruler with the necessary properties.



(I'm sure this combined construction cannot be a new idea but we just found and fit together these two pieces from Wikipedia.)



Let $r$ be a primitive root of $p=2c+1$. Define the function



$$g(k)=4ck - ((r^k-1)\bmod(2c+1)),\quad k=0,\ldots,2c-1.$$





  1. $g$ is a Golomb ruler of order $2c$. That is, the difference $g(i)-g(j)$ is unique for every pair of distinct numbers $i,j \in \{0,\ldots,2c-1\}$.


  2. $g(k)\bmod(2c)$ takes on every value $0,\ldots,2c-1$ exactly once.
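
A quick way to convince yourself of these two properties is to check them numerically. This is a minimal Python sketch (my own illustration, not part of the original answer), assuming the small case $c=5$, $p=11$, primitive root $r=2$:

    # Hypothetical example: c = 5 counters, p = 2c+1 = 11, primitive root r = 2.
    c, p, r = 5, 11, 2

    def g(k):
        # g(k) = 4ck - ((r^k - 1) mod (2c+1)) for k = 0, ..., 2c-1
        return 4 * c * k - ((pow(r, k) - 1) % (2 * c + 1))

    marks = [g(k) for k in range(2 * c)]

    # Property 1: all pairwise differences are distinct (Golomb ruler of order 2c).
    diffs = [a - b for a in marks for b in marks if a != b]
    assert len(diffs) == len(set(diffs))

    # Property 2: the marks hit every residue class mod 2c exactly once.
    assert sorted(m % (2 * c) for m in marks) == list(range(2 * c))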


Tape structure



For each TWM counter $x\in \{0,\ldots,c-1\}$, we assign two BF tape cell positions, a fallback cell $u(x)$ and a value cell $v(x)$:



$$u(x)=g(k_1)<v(x)=g(k_2)\mbox{ with }u(x)\equiv v(x)\equiv x\pmod c$$



By the second property of $g$ there are exactly two distinct $k_1,k_2$ values to choose from.
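
Continuing the Python sketch above (again just an illustration, not part of the original answer), the cell assignment can be read off directly from the ruler:

    # For each counter x, the two marks congruent to x (mod c) become the
    # fallback cell u(x) (smaller) and the value cell v(x) (larger).
    cells = {x: sorted(m for m in marks if m % c == x) for x in range(c)}
    u = {x: cells[x][0] for x in range(c)}
    v = {x: cells[x][1] for x in range(c)}
    # For the assumed c = 5 this gives u = {0: 0, 1: 76, 2: 37, 3: 53, 4: 19}
    # and v = {0: 175, 1: 91, 2: 112, 3: 134, 4: 158}.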



A fallback cell's content will most of the time be kept at $0$, except when its counter has just been visited, when it will be at $2R$, twice the counter self-reset value. A value cell will be kept at twice the value of the corresponding TWM counter.



All other cells that can be reached by the BF program execution (a finite number) will be kept at odd values, so that they always test as nonzero. After initialization this is automatic because all cell adjustments are by even amounts.



If desired, all cell positions can be shifted rightwards by a constant in order to avoid moving to the left of the initial BF tape position.



BF program structure



Let $H = v(h)-u(h)$ be the distance between the halting counter's value and fallback cells, and let $N$ be a number large enough that $cN+1 \geq v((x+1)\bmod c) - u(x)$ for all counters $x$. Then the basic BF program structure is




initialization [ >$\times (H+cN+1)$ [ <$\times c$ ] adjustments <$\times H$ ]
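
As a sketch of how this skeleton could be assembled (continuing the hypothetical example above; the real construction is the compiler mentioned at the top), note that only two [ ] pairs appear:

    import math

    # Halting counter taken as h = 0 purely for illustration.
    h = 0
    H = v[h] - u[h]
    N = max(math.ceil((v[(x + 1) % c] - u[x] - 1) / c) for x in range(c))

    init_code, adjust_code = "", ""          # bracket-free sections, described below
    program = (init_code
               + "[" + ">" * (H + c * N + 1)   # move right by H + cN + 1
               + "[" + "<" * c + "]"           # search left in steps of c for a zero cell
               + adjust_code
               + "<" * H + "]")                # step H left of the active cell and loop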




Initialization



The initialization phase sets all cells reachable by the program to their initial values, in a state as if the last counter had just been visited and the active cell had been its fallback cell $u(c-1)$:




  1. Value cells are initialized to twice the initial content of the corresponding TWM counter, except that counter $0$ is pre-decremented.

  2. Fallback cells are set to $0$, except cell $u(c-1)$, which is set to $2R$.

  3. All other cells reachable by the program (a finite number) are set to $1$.


Then the tape pointer is moved to position $u(c-1)-H$ (an always non-zero cell) before we reach the program's first [.



Beginning of outer loop



At the beginning of an iteration of the outer loop, the tape pointer will be at either $u(x)-H$ or $v(x)-H$ for a counter $x$.



Let $y=((x+1)\bmod c)$ be the next counter to visit.



The movement >$\times (H+cN+1)$ places the tape pointer on a position that is $\equiv y\pmod c$ and not to the left of $v(y)$.



The inner loop [ <$\times c$ ] now searches leftwards in steps of $c$ for a zero cell. If counter $y$ is zero, then it will stop at the (zero) value cell $v(y)$; otherwise it will find the fallback cell $u(y)$.



Whichever cell is found becomes the new active cell.



Adjustments



The adjustment phase adjusts various cells on the tape based on their position relative to the active cell. This section contains only +->< commands and so these adjustments happen unconditionally. However, because all counter-related cells are in a Golomb ruler pattern, any adjustments that are not proper for the current active cell will miss all the important cells and adjust some irrelevant cell instead (while keeping it odd).



Separate code must thus be included in the program for each possible required pair of active and adjusted cell, except for an active cell's self-adjustment, which, because adjustment is based solely on relative position, must be shared between all of them.



The required adjustments are:




  1. Adjust the previous counter's fallback cell $u(x)$ by $-2R$.

  2. Adjust the current counter's fallback cell $u(y)$ by $2R$, except if the current active cell is $v(h)$ and so we should halt.

  3. Adjust the next counter's value cell $v((y+1)\bmod c)$ by $-2$ (decrementing the counter).

  4. When the active cell is a value cell $v(y)$ (so the counter $y$ has reached zero), adjust all value cells $v(z)$ by $2f(y,z)$ from the TWM trigger map. $v(y)$ itself becomes adjusted by $2R$.


The first and second adjustments above are made necessary by the fact that all active cells must adjust themselves by the same value, which is $2R$ for value cells, and thus also for fallback cells. This requires preparing and cleaning up the fallback cells to ensure they get back to $0$ in both the value and fallback branches.
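
For concreteness, here is a minimal Python sketch (an illustration only, not the answer's actual compiler) of how a single relative adjustment could be emitted; the Golomb-ruler spacing is what guarantees that, when the active cell is not the one an adjustment was written for, the change lands on an irrelevant odd cell:

    def emit_adjust(offset, delta):
        # Walk to the cell `offset` away from the active cell, change it by the
        # (even) amount `delta`, and walk back, using only + - > < commands.
        there = ">" * offset if offset >= 0 else "<" * -offset
        back = "<" * offset if offset >= 0 else ">" * -offset
        change = "+" * delta if delta >= 0 else "-" * -delta
        return there + change + back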



End of outer loop



The movement <$\times H$ represents that at the end of the adjustment phase, the tape pointer is moved $H$ places to the left of the active cell.



For all active cells other than the halting counter's value cell $v(h)$, this is an irrelevant cell, and so odd and non-zero, and the outer loop continues for another iteration.



For $v(h)$, the pointer is instead placed on its corresponding fallback cell $u(h)$, for which we have made an exception above to keep it zero, and so the program exits through the final ] and halts.






answered Jan 7 at 4:32 by Ørjan Johansen






















    I'm not 100% sure that it's impossible to do this with two sets of brackets. However, if the cells of the BF tape allow unbounded values, three sets of brackets are enough. (For simplicity, I'll also assume that we can move the tape head left past its starting point, although because this construction only uses a finite region of the tape, we could lift that restriction by adding sufficiently many > commands at the start of the program.) The construction below requires assuming Artin's conjecture to be able to compile arbitrarily large programs; however, even if Artin's conjecture is false, it will still be possible to show Turing-completeness indirectly via translating an interpreter for a Turing-complete language into BF using the construction below, and running arbitrary programs via giving them as input to that interpreter.



    The Turing-complete language that we're compiling into unbounded BF is The Waterfall Model, which is one of the simplest known computational models. For people who don't know it already, it consists of a number of counters (and initial values for them), and a function $f$ from pairs of counters to integers; program execution consists of repeatedly subtracting 1 from every counter, then if any counter $x$ is 0, adding $f(x,y)$ to each counter $y$ (the program is written in such a way that this never happens to multiple counters simultaneously). There's a Turing-completeness proof for this language behind my link. Without loss of generality, we'll assume that all counters have the same self-reset value (i.e. $f(x,x)$ is the same for all $x$); this is a safe assumption because for any specific $x$, adding the same constant to each $f(x,y)$ will not change the behaviour of the program.



    Let $p$ be the number of counters; without loss of generality (assuming Artin's conjecture), assume that $p$ has a primitive root 2. Let $q$ be $p(1+s+s^2)$, where $s$ is the lowest power of 2 greater than $p$. Without loss of generality, $2q$ will be less than $2^p$ ($2q$ is bounded polynomially, $2^p$ grows exponentially, so any sufficiently large $p$ will work).
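
    These conditions are easy to check for a small instance. As a minimal Python sketch (my own illustration; p = 13 is an assumed example, not a value from the answer):

        # p = 13 is prime and has 2 as a primitive root; s is the lowest power of 2
        # above p, and q = p(1 + s + s^2) satisfies 2q < 2^p.
        p = 13
        assert len({pow(2, k, p) for k in range(1, p)}) == p - 1   # 2 is a primitive root mod p

        s = 1
        while s <= p:
            s *= 2                     # lowest power of 2 greater than p
        q = p * (1 + s + s * s)
        assert 2 * q < 2 ** p          # q = 3549, so 2q = 7098 < 8192 = 2^13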



    The tape arrangement is as follows: we number each counter with an integer $0 \leq i < p$ (and without loss of generality, we assume that there's a single halt counter and number it $2$). The value of most counters is stored on tape cell $2^i$, with the exception of counter 0, which is stored on tape cell $2q$. For each odd-numbered tape cell from cell -1 up to and including $2^{p+1}+2p+1$, that tape cell always holds 1, unless it's immediately to the left of a counter, in which case it always holds 0. Even-numbered tape cells that aren't being used as counters have irrelevant values (which might or might not be 0); and odd-numbered tape cells outside the stated range also have irrelevant values. Note that setting the tape into an appropriate initial state requires initialising only finitely many tape elements to constant values, meaning that we can do it with a sequence of <>+- instructions (in fact, only >+ are needed), thus no brackets. At the end of this initialisation, we move the tape pointer to cell -1.
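
    Continuing the small Python example above (still just an illustration for the assumed p = 13), the cell layout looks like this:

        # Cell of each counter: counter 0 at 2q, counter i (i >= 1) at 2^i.
        positions = {0: 2 * q}
        positions.update({i: 2 ** i for i in range(1, p)})
        markers = {pos - 1: i for i, pos in positions.items()}   # cell left of counter i

        # Odd cells from -1 up to 2^(p+1) + 2p + 1 hold 1, except the marker cells,
        # which hold 0 (the counters' own even cells are left out of this sketch).
        tape = {cell: 1 for cell in range(-1, 2 ** (p + 1) + 2 * p + 2, 2)}
        tape.update({cell: 0 for cell in markers})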



    The general shape of our program will look like this:




    initialisation [>>>[ >$\times(2^p-1)$ [ <$\times(2p)$ ]>-] adjustment <<<]




    The initialisation puts the tape into the expected shape and the pointer on cell -1. This is not the cell to the left of a counter (0 is not a power of 2), so it has value 1, and we enter the loop. The loop invariant for this outermost loop is that the tape pointer is (at the start and end of each loop iteration) three cells to the left of a counter; it can be seen that the loop will thus only exit if we're three cells to the left of counter 2 (each other counter has a 1 three cells to its left, as to have a 0 there would imply that two counters' tape positions were 2 cells apart; the only two powers of 2 that differ by 2 are $2^1$ and $2^2$, and $q$'s binary representation changes from strings of $0$s to strings of $1$s or vice versa at least four times and thus cannot be 1 away from a power of 2).



    The second loop repeatedly loops over the counters, decrementing them. The loop invariant is that the tape pointer is always pointing to a counter; thus the loop will exit when some counter becomes 0. The decrement is just -; the way we get from one counter to the next is more complex. The basic idea is that moving $2^p-1$ spaces to the right from $2^x$ will place us on an odd-numbered cell $2^p+2^x-1$, which is to the right of any counter (every counter cell is below $2^p$, and $2^x-1$ is positive because $x$ is positive); modulo $2p$, this value is congruent to (by Fermat's Little Theorem) $2^x+1$. The innermost loop repeatedly moves left by $2p$ spaces, also not changing the index of the tape cell modulo $2p$, and must eventually find the cell congruent to $2^x+1$ modulo $2p$ that holds the value 0 (which will be the cell to the left of some counter); because of our primitive root requirement there's exactly one such cell ($2q-1$ is congruent to $-1$ modulo $2p$, and $2^{\log_{2,p}(r)+1}-1$ is congruent to $2r-1$ for any other $r$, where $\log_{2,p}$ is the discrete logarithm to base 2 modulo $p$). Additionally, it can be seen that the position of the tape pointer modulo $2p$ increases by $2$ each time round the middle loop. Thus, the tape pointer must cycle between all $p$ counters (in order of their values modulo $2p$). Thus, every $p$ iterations, we decrease every counter (as required). If the loop breaks partway through an iteration, we'll resume the decrease when we re-enter the loop (because the rest of the outermost loop makes no net change to the tape pointer position).
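
    The cycling argument can be checked numerically. Continuing the small Python example above (an illustration under the assumed p = 13, not part of the original answer):

        def next_counter(i):
            cell = positions[i] + 2 ** p - 1      # > x (2^p - 1): odd cell right of every counter
            while cell not in markers:            # [ < x (2p) ]: step left by 2p until a zero cell
                cell -= 2 * p
            return markers[cell]                  # ends one cell left of the next counter

        seen, cur = [], 0
        for _ in range(p):
            cur = next_counter(cur)
            seen.append(cur)
        assert sorted(seen) == list(range(p))     # one sweep visits every counter exactly once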



    When a counter does hit 0, the middle loop breaks, taking us to the "adjustment" code. This is basically just an encoding of $f$; for every pair $(x,y)$, it adds $f(x,y)$ to the tape element which is the same distance left/right of the current tape pointer as counter $y$'s tape location is left/right of counter $x$'s tape location (and then returns the tape pointer to where it started). Whenever $x\neq y$, this distance turns out to be unique:




    • The difference between two powers of 2 is a binary number consisting of a string of 1 or more $1$s followed by a string of 0 or more $0$s (with the place values of the start of the number, and the start of the $0$ string, depending on the larger and smaller respectively of $x$ and $y$); thus all those differences are distinct.

    • As for the difference of a power of 2 and $q$, it must contain at least two transitions between strings of $1$s and $0$s ($q$ contains at least four such transitions, the subtraction can only remove 2), thus is distinct from all differences of two powers of two, and these differences are obviously also distinct from each other.


    For $x=y$, we obviously find that the distance moved is 0. But because all $f(x,x)$ are equal, we can just use this as the adjustment for the current cell. And it can be seen that the adjustment code thus implements the "when a counter hits 0" effect for every counter; all the cells that actually represent counters will be adjusted by the correct amount, and all the other adjustments will affect non-counter even-numbered cells (the difference between two even numbers is even), which have no effect on the program's behaviour.
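
    Continuing the Python sketch, this uniqueness can be verified directly for the assumed example:

        # For x != y, the offsets between counter cells are pairwise distinct, so the
        # shared adjustment code never hits the wrong counter cell.
        offsets = [positions[y] - positions[x]
                   for x in range(p) for y in range(p) if x != y]
        assert len(offsets) == len(set(offsets))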



    Thus, we now have a working compilation of any program in The Waterfall Model to BF (including halt behaviour, but not including I/O, which isn't needed for Turing-completeness) using only three pairs of brackets, and thus three pairs of brackets are enough for Turing-completeness.






    answered by ais523













    • Nice job! I see you worked on this in TNB!
      – MilkyWay90, Jan 4 at 5:20

    • I think you need s to be at least p+2. When s=p+1, q is 1 less than a power of 2.
      – Ørjan Johansen, Jan 5 at 2:32

    • I think I found a much simpler (as in requiring no prime number theory) counter placement: 2p*2^i+2i.
      – Ørjan Johansen, Jan 5 at 4:56

    • @ØrjanJohansen: Right, I think I mentioned that construction in #esoteric (some time after I wrote this post)? All you actually need is a Golomb ruler for which each element is distinct modulo the number of elements, and there are various ways to construct those (although finding optimal ones is hard; the longest I've found (via brute force) is [0, 1, 3, 7, 20, 32, 42, 53, 58] for p=9).
      – ais523, Jan 5 at 17:04

    • Oh so you did (just before I said my brain refused to be in math mode, so there's my excuse :P). I guess I found out k=0 was sufficient, then. I think Wikipedia's Erdős–Turan_construction gives a polynomially growing (and presumably O()-optimal?) one if you use just the first half of the elements (the other half repeats (mod p)).
      – Ørjan Johansen, Jan 5 at 18:54












    Your Answer





    StackExchange.ifUsing("editor", function () {
    return StackExchange.using("mathjaxEditing", function () {
    StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
    StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
    });
    });
    }, "mathjax-editing");

    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "419"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fcs.stackexchange.com%2fquestions%2f102363%2fhow-many-pairs-of-brackets-are-sufficient-to-make-brainfuck-turing-complete%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    2 Answers
    2






    active

    oldest

    votes








    2 Answers
    2






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    9












    $begingroup$

    This is a further development of @ais523's answer, reducing it to only two sets of brackets, and also using a more compact cell placement based on Golomb ruler theory. ais523 has made a compiler for this construction, as well as this TIO session showing a sample resulting BF program running with debug tracing of the TWM counters.



    Like the original, this starts with a program in The Waterfall Model, with some restrictions that don't lose generality:




    1. All counters have the same self-reset value $R$; that is, the TWM trigger map $f$ has the property that $f(x,x)=R$ for all $x$.

    2. There is a single halting counter $h$.

    3. The number $c$ of counters is $(p-1)/2$ for some prime number $p$.


    Golomb ruler



    We combine the Erdős–Turán construction with the permutation function of a Welch–Costas array in order to get a Golomb ruler with the necessary properties.



    (I'm sure this combined construction cannot be a new idea but we just found and fit together these two pieces from Wikipedia.)



    Let $r$ be a primitive root of $p=2c+1$. Define the function



    $$g(k)=4ck - ((r^k-1)bmod(2c+1)), k=0,ldots,2c-1.$$





    1. $g$ is a Golomb ruler of order $2c$. That is, the difference $g(i)-g(j)$ is unique for every pair of distinct numbers $i,j in {0,ldots,2c-1}$.


    2. $g(k)bmod(2c)$ takes on every value $0,ldots,2c-1$ exactly once.


    Tape structure



    For each TWM counter $xin {0,ldots,c-1}$, we assign two BF tape cell positions, a fallback cell $u(x)$ and a value cell $v(x)$:



    $$u(x)=g(k_1)<v(x)=g(k_2)mbox{ with }u(x)equiv v(x)equiv xpmod c$$



    By the second property of $g$ there are exactly two distinct $k_1,k_2$ values to choose from.



    A fallback cell's content will most of the time be kept at $0$, except when its counter has just been visited, when it will be at $2R$, twice the counter self-reset value. A value cell will be kept at twice the value of the corresponding TWM counter.



    All other cells that can be reached by the BF program execution (a finite number) will be kept at odd values, so that they always test as nonzero. After initialization this is automatic because all cell adjustments are by even amounts.



    If desired, all cell positions can be shifted rightwards by a constant in order to avoid moving to the left of the initial BF tape position.



    BF program structure



    Let $H = v(h)-u(h)$ be the distance between the halting counter's value and fallback cells, and let $N$ be a number large enough that $cN+1 geq v((x+1)bmod c) - u(x)$ for all counters $x$. Then the basic BF program structure is




    initialization [ >$times (H+cN+1)$ [ <$times c$ ] adjustments <$times H$ ]




    Initialization



    The initialization phase sets all cells reachable by the program to their initial values, in a state as if the last counter had just been visited and the just active cell was its fallback cell $u(c-1)$:




    1. Value cells are initialized to twice the initial content of the corresponding TWM counter, except that counter $0$ is pre-decremented.

    2. Fallback cells are set to $0$, except cell $u(c-1)$, which is set to $2R$.

    3. All other cells reachable by the program (a finite number) are set to $1$.


    Then the tape pointer is moved to position $u(c-1)-H$ (an always non-zero cell) before we reach the program's first [.



    Beginning of outer loop



    At the beginning of an iteration of the outer loop, the tape pointer will be at either $u(x)-H$ or $v(x)-H$ for a counter $x$.



    Let $y=((x+1)bmod c)$ be the next counter to visit.



    The movement >$times (H+cN+1)$ places the tape pointer on a position that is $equiv ypmod c$ and not to the left of $v(y)$.



    The inner loop [ <$times c$ ] now searches leftwards in steps of $c$ for a zero cell. If counter $y$ is zero, then it will stop at the (zero) value cell $v(y)$; otherwise it will find the fallback cell $u(y)$.



    Whichever cell is found becomes the new active cell.



    Adjustments



    The adjustment phase adjusts various cells on the tape based on their position relative to the active cell. This section contains only +->< commands and so these adjustments happen unconditionally. However, because all counter-related cells are in a Golomb ruler pattern, any adjustments that are not proper for the current active cell will miss all the important cells and adjust some irrelevant cell instead (while keeping it odd).



    Separate code must thus be included in the program for each possible required pair of active and adjusted cell, except for an active cell's self-adjustment, which, because adjustment is based solely on relative position, must be shared between all of them.



    The required adjustments are:




    1. Adjust the previous counter's fallback cell $u(x)$ by $-2R$.

    2. Adjust the current counter's fallback cell $u(y)$ by $2R$, except if the current active cell is $v(h)$ and so we should halt.

    3. Adjust the next counter's value cell $v((y+1)bmod c)$ by $-2$ (decrementing the counter).

    4. When the active cell is a value cell $v(y)$ (so the counter $y$ has reached zero), adjust all value cells $v(z)$ by $2f(y,z)$ from the TWM trigger map. $v(y)$ itself becomes adjusted by $2R$.


    The first and second adjustments above are made necessary by the fact that all active cells must adjust themselves by the same value, which is $2R$ for value cells, and thus also for fallback cells. This requires preparing and cleaning up the fallback cells to ensure they get back to $0$ in both the value and fallback branches.



    End of outer loop



    The movement <$times H$ represents that at the end of the adjustment phase, the tape pointer is moved $H$ places to the left of the active cell.



    For all active cells other than the halting counter's value cell $v(h)$, this is an irrelevant cell, and so odd and non-zero, and the outer loop continues for another iteration.



    For $v(h)$, the pointer is instead placed on its corresponding fallback cell $u(h)$, for which we have made an exception above to keep it zero, and so the program exits through the final ] and halts.






    share|cite|improve this answer









    $endgroup$


















      9












      $begingroup$

      This is a further development of @ais523's answer, reducing it to only two sets of brackets, and also using a more compact cell placement based on Golomb ruler theory. ais523 has made a compiler for this construction, as well as this TIO session showing a sample resulting BF program running with debug tracing of the TWM counters.



      Like the original, this starts with a program in The Waterfall Model, with some restrictions that don't lose generality:




      1. All counters have the same self-reset value $R$; that is, the TWM trigger map $f$ has the property that $f(x,x)=R$ for all $x$.

      2. There is a single halting counter $h$.

      3. The number $c$ of counters is $(p-1)/2$ for some prime number $p$.


      Golomb ruler



      We combine the Erdős–Turán construction with the permutation function of a Welch–Costas array in order to get a Golomb ruler with the necessary properties.



      (I'm sure this combined construction cannot be a new idea but we just found and fit together these two pieces from Wikipedia.)



      Let $r$ be a primitive root of $p=2c+1$. Define the function



      $$g(k)=4ck - ((r^k-1)bmod(2c+1)), k=0,ldots,2c-1.$$





      1. $g$ is a Golomb ruler of order $2c$. That is, the difference $g(i)-g(j)$ is unique for every pair of distinct numbers $i,j in {0,ldots,2c-1}$.


      2. $g(k)bmod(2c)$ takes on every value $0,ldots,2c-1$ exactly once.


      Tape structure



      For each TWM counter $xin {0,ldots,c-1}$, we assign two BF tape cell positions, a fallback cell $u(x)$ and a value cell $v(x)$:



      $$u(x)=g(k_1)<v(x)=g(k_2)mbox{ with }u(x)equiv v(x)equiv xpmod c$$



      By the second property of $g$ there are exactly two distinct $k_1,k_2$ values to choose from.



      A fallback cell's content will most of the time be kept at $0$, except when its counter has just been visited, when it will be at $2R$, twice the counter self-reset value. A value cell will be kept at twice the value of the corresponding TWM counter.



      All other cells that can be reached by the BF program execution (a finite number) will be kept at odd values, so that they always test as nonzero. After initialization this is automatic because all cell adjustments are by even amounts.



      If desired, all cell positions can be shifted rightwards by a constant in order to avoid moving to the left of the initial BF tape position.



      BF program structure



      Let $H = v(h)-u(h)$ be the distance between the halting counter's value and fallback cells, and let $N$ be a number large enough that $cN+1 geq v((x+1)bmod c) - u(x)$ for all counters $x$. Then the basic BF program structure is




      initialization [ >$times (H+cN+1)$ [ <$times c$ ] adjustments <$times H$ ]




      Initialization



      The initialization phase sets all cells reachable by the program to their initial values, in a state as if the last counter had just been visited and the just active cell was its fallback cell $u(c-1)$:




      1. Value cells are initialized to twice the initial content of the corresponding TWM counter, except that counter $0$ is pre-decremented.

      2. Fallback cells are set to $0$, except cell $u(c-1)$, which is set to $2R$.

      3. All other cells reachable by the program (a finite number) are set to $1$.


      Then the tape pointer is moved to position $u(c-1)-H$ (an always non-zero cell) before we reach the program's first [.



      Beginning of outer loop



      At the beginning of an iteration of the outer loop, the tape pointer will be at either $u(x)-H$ or $v(x)-H$ for a counter $x$.



      Let $y=((x+1)bmod c)$ be the next counter to visit.



      The movement >$times (H+cN+1)$ places the tape pointer on a position that is $equiv ypmod c$ and not to the left of $v(y)$.



      The inner loop [ <$times c$ ] now searches leftwards in steps of $c$ for a zero cell. If counter $y$ is zero, then it will stop at the (zero) value cell $v(y)$; otherwise it will find the fallback cell $u(y)$.



      Whichever cell is found becomes the new active cell.



      Adjustments



      The adjustment phase adjusts various cells on the tape based on their position relative to the active cell. This section contains only +->< commands and so these adjustments happen unconditionally. However, because all counter-related cells are in a Golomb ruler pattern, any adjustments that are not proper for the current active cell will miss all the important cells and adjust some irrelevant cell instead (while keeping it odd).



      Separate code must thus be included in the program for each possible required pair of active and adjusted cell, except for an active cell's self-adjustment, which, because adjustment is based solely on relative position, must be shared between all of them.



      The required adjustments are:




      1. Adjust the previous counter's fallback cell $u(x)$ by $-2R$.

      2. Adjust the current counter's fallback cell $u(y)$ by $2R$, except if the current active cell is $v(h)$ and so we should halt.

      3. Adjust the next counter's value cell $v((y+1)bmod c)$ by $-2$ (decrementing the counter).

      4. When the active cell is a value cell $v(y)$ (so the counter $y$ has reached zero), adjust all value cells $v(z)$ by $2f(y,z)$ from the TWM trigger map. $v(y)$ itself becomes adjusted by $2R$.


      The first and second adjustments above are made necessary by the fact that all active cells must adjust themselves by the same value, which is $2R$ for value cells, and thus also for fallback cells. This requires preparing and cleaning up the fallback cells to ensure they get back to $0$ in both the value and fallback branches.



      End of outer loop



      The movement <$times H$ represents that at the end of the adjustment phase, the tape pointer is moved $H$ places to the left of the active cell.



      For all active cells other than the halting counter's value cell $v(h)$, this is an irrelevant cell, and so odd and non-zero, and the outer loop continues for another iteration.



      For $v(h)$, the pointer is instead placed on its corresponding fallback cell $u(h)$, for which we have made an exception above to keep it zero, and so the program exits through the final ] and halts.






      share|cite|improve this answer









      $endgroup$
















        9












        9








        9





        $begingroup$

        This is a further development of @ais523's answer, reducing it to only two sets of brackets, and also using a more compact cell placement based on Golomb ruler theory. ais523 has made a compiler for this construction, as well as this TIO session showing a sample resulting BF program running with debug tracing of the TWM counters.



        Like the original, this starts with a program in The Waterfall Model, with some restrictions that don't lose generality:




        1. All counters have the same self-reset value $R$; that is, the TWM trigger map $f$ has the property that $f(x,x)=R$ for all $x$.

        2. There is a single halting counter $h$.

        3. The number $c$ of counters is $(p-1)/2$ for some prime number $p$.


        Golomb ruler



        We combine the Erdős–Turán construction with the permutation function of a Welch–Costas array in order to get a Golomb ruler with the necessary properties.



        (I'm sure this combined construction cannot be a new idea but we just found and fit together these two pieces from Wikipedia.)



        Let $r$ be a primitive root of $p=2c+1$. Define the function



        $$g(k)=4ck - ((r^k-1)bmod(2c+1)), k=0,ldots,2c-1.$$





        1. $g$ is a Golomb ruler of order $2c$. That is, the difference $g(i)-g(j)$ is unique for every pair of distinct numbers $i,j in {0,ldots,2c-1}$.


        2. $g(k)bmod(2c)$ takes on every value $0,ldots,2c-1$ exactly once.


        Tape structure



        For each TWM counter $xin {0,ldots,c-1}$, we assign two BF tape cell positions, a fallback cell $u(x)$ and a value cell $v(x)$:



        $$u(x)=g(k_1)<v(x)=g(k_2)mbox{ with }u(x)equiv v(x)equiv xpmod c$$



        By the second property of $g$ there are exactly two distinct $k_1,k_2$ values to choose from.



        A fallback cell's content will most of the time be kept at $0$, except when its counter has just been visited, when it will be at $2R$, twice the counter self-reset value. A value cell will be kept at twice the value of the corresponding TWM counter.



        All other cells that can be reached by the BF program execution (a finite number) will be kept at odd values, so that they always test as nonzero. After initialization this is automatic because all cell adjustments are by even amounts.



        If desired, all cell positions can be shifted rightwards by a constant in order to avoid moving to the left of the initial BF tape position.



        BF program structure



        Let $H = v(h)-u(h)$ be the distance between the halting counter's value and fallback cells, and let $N$ be a number large enough that $cN+1 geq v((x+1)bmod c) - u(x)$ for all counters $x$. Then the basic BF program structure is




        initialization [ >$times (H+cN+1)$ [ <$times c$ ] adjustments <$times H$ ]




        Initialization



        The initialization phase sets all cells reachable by the program to their initial values, in a state as if the last counter had just been visited and the just active cell was its fallback cell $u(c-1)$:




        1. Value cells are initialized to twice the initial content of the corresponding TWM counter, except that counter $0$ is pre-decremented.

        2. Fallback cells are set to $0$, except cell $u(c-1)$, which is set to $2R$.

        3. All other cells reachable by the program (a finite number) are set to $1$.


        Then the tape pointer is moved to position $u(c-1)-H$ (an always non-zero cell) before we reach the program's first [.



        Beginning of outer loop



        At the beginning of an iteration of the outer loop, the tape pointer will be at either $u(x)-H$ or $v(x)-H$ for a counter $x$.



        Let $y=((x+1)bmod c)$ be the next counter to visit.



        The movement >$times (H+cN+1)$ places the tape pointer on a position that is $equiv ypmod c$ and not to the left of $v(y)$.



        The inner loop [ <$times c$ ] now searches leftwards in steps of $c$ for a zero cell. If counter $y$ is zero, then it will stop at the (zero) value cell $v(y)$; otherwise it will find the fallback cell $u(y)$.



        Whichever cell is found becomes the new active cell.



        Adjustments



        The adjustment phase adjusts various cells on the tape based on their position relative to the active cell. This section contains only +->< commands and so these adjustments happen unconditionally. However, because all counter-related cells are in a Golomb ruler pattern, any adjustments that are not proper for the current active cell will miss all the important cells and adjust some irrelevant cell instead (while keeping it odd).



        Separate code must thus be included in the program for each possible required pair of active and adjusted cell, except for an active cell's self-adjustment, which, because adjustment is based solely on relative position, must be shared between all of them.



        The required adjustments are:




        1. Adjust the previous counter's fallback cell $u(x)$ by $-2R$.

        2. Adjust the current counter's fallback cell $u(y)$ by $2R$, except if the current active cell is $v(h)$ and so we should halt.

        3. Adjust the next counter's value cell $v((y+1)bmod c)$ by $-2$ (decrementing the counter).

        4. When the active cell is a value cell $v(y)$ (so the counter $y$ has reached zero), adjust all value cells $v(z)$ by $2f(y,z)$ from the TWM trigger map. $v(y)$ itself becomes adjusted by $2R$.


        The first and second adjustments above are made necessary by the fact that all active cells must adjust themselves by the same value, which is $2R$ for value cells, and thus also for fallback cells. This requires preparing and cleaning up the fallback cells to ensure they get back to $0$ in both the value and fallback branches.



        End of outer loop



        The movement <$times H$ represents that at the end of the adjustment phase, the tape pointer is moved $H$ places to the left of the active cell.



        For all active cells other than the halting counter's value cell $v(h)$, this is an irrelevant cell, and so odd and non-zero, and the outer loop continues for another iteration.



        For $v(h)$, the pointer is instead placed on its corresponding fallback cell $u(h)$, for which we have made an exception above to keep it zero, and so the program exits through the final ] and halts.






        share|cite|improve this answer









        $endgroup$



        This is a further development of @ais523's answer, reducing it to only two sets of brackets, and also using a more compact cell placement based on Golomb ruler theory. ais523 has made a compiler for this construction, as well as this TIO session showing a sample resulting BF program running with debug tracing of the TWM counters.



        Like the original, this starts with a program in The Waterfall Model, with some restrictions that don't lose generality:




        1. All counters have the same self-reset value $R$; that is, the TWM trigger map $f$ has the property that $f(x,x)=R$ for all $x$.

        2. There is a single halting counter $h$.

        3. The number $c$ of counters is $(p-1)/2$ for some prime number $p$.


        Golomb ruler



        We combine the Erdős–Turán construction with the permutation function of a Welch–Costas array in order to get a Golomb ruler with the necessary properties.



        (I'm sure this combined construction cannot be a new idea but we just found and fit together these two pieces from Wikipedia.)



        Let $r$ be a primitive root of $p=2c+1$. Define the function



        $$g(k)=4ck - ((r^k-1)bmod(2c+1)), k=0,ldots,2c-1.$$





        1. $g$ is a Golomb ruler of order $2c$. That is, the difference $g(i)-g(j)$ is unique for every pair of distinct numbers $i,j in {0,ldots,2c-1}$.


        2. $g(k)bmod(2c)$ takes on every value $0,ldots,2c-1$ exactly once.


        Tape structure



        For each TWM counter $xin {0,ldots,c-1}$, we assign two BF tape cell positions, a fallback cell $u(x)$ and a value cell $v(x)$:



        $$u(x)=g(k_1)<v(x)=g(k_2)mbox{ with }u(x)equiv v(x)equiv xpmod c$$



        By the second property of $g$ there are exactly two distinct $k_1,k_2$ values to choose from.



        A fallback cell's content will most of the time be kept at $0$, except when its counter has just been visited, when it will be at $2R$, twice the counter self-reset value. A value cell will be kept at twice the value of the corresponding TWM counter.



        All other cells that can be reached by the BF program execution (a finite number) will be kept at odd values, so that they always test as nonzero. After initialization this is automatic because all cell adjustments are by even amounts.



        If desired, all cell positions can be shifted rightwards by a constant in order to avoid moving to the left of the initial BF tape position.



        BF program structure



        Let $H = v(h)-u(h)$ be the distance between the halting counter's value and fallback cells, and let $N$ be a number large enough that $cN+1 geq v((x+1)bmod c) - u(x)$ for all counters $x$. Then the basic BF program structure is




        initialization [ >$times (H+cN+1)$ [ <$times c$ ] adjustments <$times H$ ]




        Initialization



        The initialization phase sets all cells reachable by the program to their initial values, in a state as if the last counter had just been visited and the just active cell was its fallback cell $u(c-1)$:




        1. Value cells are initialized to twice the initial content of the corresponding TWM counter, except that counter $0$ is pre-decremented.

        2. Fallback cells are set to $0$, except cell $u(c-1)$, which is set to $2R$.

        3. All other cells reachable by the program (a finite number) are set to $1$.


        Then the tape pointer is moved to position $u(c-1)-H$ (an always non-zero cell) before we reach the program's first [.



        Beginning of outer loop



        At the beginning of an iteration of the outer loop, the tape pointer will be at either $u(x)-H$ or $v(x)-H$ for a counter $x$.



        Let $y=((x+1)bmod c)$ be the next counter to visit.



        The movement >$times (H+cN+1)$ places the tape pointer on a position that is $equiv ypmod c$ and not to the left of $v(y)$.



        The inner loop [ <$times c$ ] now searches leftwards in steps of $c$ for a zero cell. If counter $y$ is zero, then it will stop at the (zero) value cell $v(y)$; otherwise it will find the fallback cell $u(y)$.



        Whichever cell is found becomes the new active cell.



        Adjustments



        The adjustment phase adjusts various cells on the tape based on their position relative to the active cell. This section contains only +->< commands and so these adjustments happen unconditionally. However, because all counter-related cells are in a Golomb ruler pattern, any adjustments that are not proper for the current active cell will miss all the important cells and adjust some irrelevant cell instead (while keeping it odd).



        Separate code must thus be included in the program for each possible required pair of active and adjusted cell, except for an active cell's self-adjustment, which, because adjustment is based solely on relative position, must be shared between all of them.



        The required adjustments are:




        1. Adjust the previous counter's fallback cell $u(x)$ by $-2R$.

        2. Adjust the current counter's fallback cell $u(y)$ by $2R$, except if the current active cell is $v(h)$ and so we should halt.

        3. Adjust the next counter's value cell $v((y+1)bmod c)$ by $-2$ (decrementing the counter).

        4. When the active cell is a value cell $v(y)$ (so the counter $y$ has reached zero), adjust all value cells $v(z)$ by $2f(y,z)$ from the TWM trigger map. $v(y)$ itself becomes adjusted by $2R$.


        The first and second adjustments above are made necessary by the fact that all active cells must adjust themselves by the same value, which is $2R$ for value cells, and thus also for fallback cells. This requires preparing and cleaning up the fallback cells to ensure they get back to $0$ in both the value and fallback branches.



        End of outer loop



        The movement <$times H$ represents that at the end of the adjustment phase, the tape pointer is moved $H$ places to the left of the active cell.



        For all active cells other than the halting counter's value cell $v(h)$, this is an irrelevant cell, and so odd and non-zero, and the outer loop continues for another iteration.



        For $v(h)$, the pointer is instead placed on its corresponding fallback cell $u(h)$, for which we have made an exception above to keep it zero, and so the program exits through the final ] and halts.







        share|cite|improve this answer












        share|cite|improve this answer



        share|cite|improve this answer










        answered Jan 7 at 4:32









        Ørjan JohansenØrjan Johansen

        2063




        2063























            12












            $begingroup$

            I'm not 100% sure that it's impossible to do this with two sets of brackets. However, if the cells of the BF tape allow unbounded values, three sets of brackets are enough. (For simplicity, I'll also assume that we can move the tape head left past its starting point, although because this construction only uses a finite region of the tape, we could lift that restriction by adding sufficiently many > commands at the start of the program.) The construction below requires assuming Artin's conjecture to be able to compile arbitrarily large programs; however, even if Artin's conjecture is false, it will still be possible to show Turing-completeness indirectly via translating an interpreter for a Turing-complete language into BF using the construction below, and running arbitrary programs via giving them as input to that interpreter.



            The Turing-complete language that we're compiling into unbounded BF is The Waterfall Model, which is one of the simplest known computational models. For people who don't know it already, it consists of a number of counters (and initial values for them), and a function $f$ from pairs of counters to integers; program execution consists of repeatedly subtracting 1 from every counter, then if any counter $x$ is 0, adding $f(x,y)$ to each counter $y$ (the program is written in such a way that this never happens to multiple counters simultaneously). There's a Turing-completeness proof for this language behind my link. Without loss of generality, we'll assume that all counters have the same self-reset value (i.e. $f(x,x)$ is the same for all $x$); this is a safe assumption because for any specific $x$, adding the same constant to each $f(x,y)$ will not change the behaviour of the program.



            Let $p$ be the number of counters; without loss of generality (assuming Artin's conjecture), assume that $p$ has a primitive root 2. Let $q$ be $p(1+s+s^2)$, where $s$ is the lowest power of 2 greater than $p$. Without loss of generality, $2q$ will be less than $2^p$ ($2q$ is bounded polynomially, $2^p$ grows exponentially, so any sufficiently large $p$ will work).



            The tape arrangement is as follows: we number each counter with an integer $0 leq i < p$ (and without loss of generality, we assume that there's a single halt counter and number it $2$). The value of most counters is stored on tape cell $2^i$, with the exception of counter 0, which is stored on tape cell $2q$. For each odd-numbered tape cell from cell -1 up to and including $2^{p+1}+2p+1$, that tape cell always holds 1, unless it's immediately to the left of a counter, in which case it always holds 0. Even-numbered tape cells that aren't being used as counters have irrelevant values (which might or might not be 0); and odd-numbered tape cells outside the stated range also have irrelevant values. Note that setting the tape into an appropriate initial state requires initialising only finitely many tape elements to constant values, meaning that we can do it with a sequence of <>+- instructions (in fact, only >+ are needed), thus no brackets. At the end of this initialisation, we move the tape pointer to cell -1.



            The general shape of our program will look like this:




            initialisation [>>>[ >$\times(2^p-1)$ [ <$\times(2p)$ ]>-] adjustment <<<]
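            Spelled out as string concatenation (an illustrative sketch; `initialisation` and `adjustment` stand for the bracket-free pieces described above and below), the three bracket pairs are used like this:

                def emit_program(p, initialisation, adjustment):
                    # the only three bracket pairs in the whole program appear in this template
                    middle = '>' * (2 ** p - 1) + '[' + '<' * (2 * p) + ']' + '>-'
                    return initialisation + '[' + '>>>' + '[' + middle + ']' + adjustment + '<<<' + ']'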




            The initialisation puts the tape into the expected shape and the pointer on cell -1. Cell -1 is not immediately to the left of a counter (that would require a counter at cell 0, and 0 is neither a power of 2 nor $2q$), so it holds 1, and we enter the outermost loop. The loop invariant for this outermost loop is that, at the start and end of each iteration, the tape pointer is three cells to the left of a counter; the loop can therefore only exit when we're three cells to the left of counter 2, the halt counter. (Every other counter has a 1 three cells to its left: a 0 there would mean two counters' tape positions were 2 cells apart, but the only two powers of 2 that differ by 2 are $2^1$ and $2^2$, and $q$'s binary representation switches between runs of $0$s and runs of $1$s at least four times, so $2q$ cannot be 2 away from a power of 2.)



            The second loop repeatedly cycles over the counters, decrementing them. The loop invariant is that the tape pointer is always pointing at a counter; thus the loop will exit when some counter becomes 0. The decrement is just -; the way we get from one counter to the next is more complex. The basic idea is that moving $2^p-1$ spaces to the right from cell $2^x$ will place us on an odd-numbered cell $2^p+2^x-1$, which is to the right of every counter ($2^p$ exceeds every counter's position, and $2^x-1$ is positive because $x$ is positive); modulo $2p$, this cell's index is congruent (by Fermat's Little Theorem) to $2^x+1$. The innermost loop repeatedly moves left by $2p$ spaces, which does not change the index of the tape cell modulo $2p$, and must eventually find the cell congruent to $2^x+1$ modulo $2p$ that holds the value 0 (which will be the cell to the left of some counter); because of our primitive-root requirement there's exactly one such cell ($2q-1$ is congruent to $-1$ modulo $2p$, and $2^{\log_{2,p}(r)+1}-1$ is congruent to $2r-1$ for any other $r$, where $\log_{2,p}$ is the discrete logarithm to base 2 modulo $p$). Additionally, it can be seen that the position of the tape pointer modulo $2p$ increases by $2$ each time round the middle loop. Thus, the tape pointer must cycle through all $p$ counters (in order of their positions modulo $2p$), so every $p$ iterations we decrement every counter (as required). If the loop breaks partway through such a cycle, we'll resume it when we re-enter the loop (because the rest of the outermost loop makes no net change to the tape pointer's position).
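            This residue argument can be checked mechanically. The sketch below (illustrative; it hard-codes $p=13$, the smallest prime with 2 as a primitive root for which $2q < 2^p$ also holds) verifies that the left-of-counter cells occupy $p$ distinct residues modulo $2p$, so the leftward search stops at exactly one counter per class, and that following the "move right $2^p-1$, search left by $2p$" rule from counter to counter visits every counter once per cycle:

                p = 13                                   # prime with 2 as a primitive root, and 2q < 2^p
                s = 16                                   # lowest power of 2 that is at least p + 2
                q = p * (1 + s + s * s)                  # 3549, so counter 0 sits on cell 2q = 7098
                pos = {i: 2 ** i for i in range(1, p)}
                pos[0] = 2 * q

                res_to_counter = {(c - 1) % (2 * p): i for i, c in pos.items()}
                assert len(res_to_counter) == p          # left-of-counter cells hit p distinct residues mod 2p

                cur = 1                                  # start the walk at counter 1 (any counter works)
                visited = []
                for _ in range(p):
                    r = pos[cur] % (2 * p)               # residue of the counter we are standing on
                    # next counter: move right 2^p-1 cells, search left in steps of 2p for a 0, step right
                    cur = res_to_counter[(r + 1) % (2 * p)]
                    visited.append(cur)
                assert sorted(visited) == list(range(p)) # every counter is visited exactly once per cycle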



            When a counter does hit 0, the middle loop breaks, taking us to the "adjustment" code. This is basically just an encoding of $f$: for every pair $(x,y)$, it adds $f(x,y)$ to the tape cell that is the same distance to the left/right of the current tape pointer as counter $y$'s tape location is to the left/right of counter $x$'s tape location (and then moves the tape pointer back to where it started). Whenever $x \neq y$, this distance turns out to be unique:




            • The difference between two powers of 2 is a binary number consisting of a string of 1 or more $1$s followed by a string of 0 or more $0$s (with the place values of the start of the number, and of the start of the $0$ string, determined by the larger and smaller respectively of $x$ and $y$); thus all those differences are distinct.

            • As for the difference between a power of 2 and $2q$ (counter 0's cell), it must contain at least two transitions between runs of $1$s and runs of $0$s ($2q$ has at least four such transitions, and the subtraction can remove at most 2 of them), so it is distinct from every difference of two powers of 2; and these differences are obviously also distinct from each other.


            For $x=y$, the distance moved is obviously 0. But because all the self-reset values $f(x,x)$ are equal, we can simply apply that common value to the current cell once. It can then be seen that the adjustment code implements the "when a counter hits 0" effect correctly for whichever counter triggered it: all the cells that actually represent counters are adjusted by the correct amount, and all the other adjustments land on non-counter even-numbered cells (the difference between two even numbers is even), which have no effect on the program's behaviour.
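            For concreteness, the bracket-free "adjustment" block could be generated along these lines (an illustrative sketch; `pos` is the counter-position map from the layout sketch above, `f` a $p \times p$ table of integers with equal diagonal entries):

                def emit_adjustment(f, pos):
                    # Emitted after the middle loop exits, with the pointer on the counter that just hit 0.
                    # For every ordered pair (x, y) with x != y, add f[x][y] at relative offset
                    # pos[y] - pos[x] and walk back; by the uniqueness of these offsets, only the
                    # row belonging to the counter that actually triggered lands on real counters.
                    out = []
                    for x in pos:
                        for y in pos:
                            if x == y:
                                continue                 # the shared self-reset value is handled once below
                            d = pos[y] - pos[x]
                            move = '>' * d if d > 0 else '<' * (-d)
                            back = '<' * d if d > 0 else '>' * (-d)
                            amount = f[x][y]
                            change = '+' * amount if amount >= 0 else '-' * (-amount)
                            out.append(move + change + back)
                    self_reset = f[0][0]                 # equal to f[x][x] for every x by assumption
                    out.append('+' * self_reset if self_reset >= 0 else '-' * (-self_reset))
                    return ''.join(out)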



            Thus, we now have a working compilation of any program in The Waterfall Model to BF (including halt behaviour, but not including I/O, which isn't needed for Turing-completeness) using only three pairs of brackets, and thus three pairs of brackets are enough for Turing-completeness.






            share|cite|improve this answer











            $endgroup$













            edited Jan 4 at 5:34

























            answered Jan 4 at 5:16









            ais523

            22315
















            • $begingroup$
              Nice job! I see you worked on this in TNB!
              $endgroup$
              – MilkyWay90
              Jan 4 at 5:20










            • $begingroup$
              I think you need s to be at least p+2. When s=p+1, q is 1 less than a power of 2.
              $endgroup$
              – Ørjan Johansen
              Jan 5 at 2:32










            • $begingroup$
              I think I found a much simpler (as in requiring no prime number theory) counter placement: 2p*2^i+2i.
              $endgroup$
              – Ørjan Johansen
              Jan 5 at 4:56












            • $begingroup$
              @ØrjanJohansen: Right, I think I mentioned that construction in #esoteric (some time after I wrote this post)? All you actually need is a Golomb ruler for which each element is distinct modulo the number of elements, and there are various ways to construct those (although finding optimal ones is hard; the longest I've found (via brute force) is [0, 1, 3, 7, 20, 32, 42, 53, 58] for p=9). (A small checker for this property is sketched below, after the comments.)
              $endgroup$
              – ais523
              Jan 5 at 17:04










            • $begingroup$
              Oh so you did (just before I said my brain refused to be in math mode, so there's my excuse :P). I guess I found out k=0 was sufficient, then. I think Wikipedia's Erdős–Turan_construction gives a polynomially growing (and presumably O()-optimal?) one if you use just the first half of the elements (the other half repeats (mod p)).
              $endgroup$
              – Ørjan Johansen
              Jan 5 at 18:54
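            The property mentioned in ais523's comment above (a Golomb ruler whose marks are also distinct modulo the number of marks) is easy to verify mechanically; a minimal checker sketch, applied to the quoted $p=9$ example:

                from itertools import combinations

                def is_good_ruler(marks):
                    # Golomb ruler (all pairwise differences distinct) whose marks are
                    # also pairwise distinct modulo the number of marks.
                    p = len(marks)
                    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
                    return len(set(diffs)) == len(diffs) and len({m % p for m in marks}) == p

                assert is_good_ruler([0, 1, 3, 7, 20, 32, 42, 53, 58])   # the p = 9 example above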

















