Introduction

In section 5.5, Theorem 20 of The Busy Beaver Frontier [1], Scott Aaronson describes Chaitin's proof 5.1 [2], which shows that, for any universal prefix-free language L, K(BBL(n+1) | BBL(n)) = O(log n), where K(A|B) is the Kolmogorov complexity of A given the input string B. That is, if you know the n-th Busy Beaver, you can determine BBL(n+1) with O(log n) additional bits. At the end of the section, Aaronson poses the following question:

On the other hand, for bit-based programs, could Theorem 20 be improved to get the number of advice bits below O(log n)—possibly even down to a constant? How interrelated are the successive values of BB(n)?

In this paper, I give a proof that the number of advice bits needed to determine BBL(n+1) from BBL(n) is O(log(log(n))) for almost all n. This uses an approach similar to that of Theorem 20, but instead of incrementally checking lower bounds on Chaitin’s constant, we incrementally check a tally of the number of halting programs in L of length n+1.

The intuition for why the proof provides better bounds than Theorem 20 (for almost all n) comes from the fact that Chaitin’s constant is “overpowered” for determining whether an L-program halts. The first n+1 bits of Chaitin’s constant, Ωn+1, can determine the number of halting programs of length n+1, denoted Hn+1, and therefore BBL(n+1). However, Ωn+1 also encodes information about much larger programs: it gives Hn+2 within a factor of 2, Hn+3 within a factor of 4, and so on. That’s a lot to ask from just BBL(n), requiring O(log(n)) advice bits to make up the difference and generate Ωn+1. By focusing solely on Hn+1 rather than later values of H, we reduce the “level of effort”. Usually, anyway – if Hn+1 is very large (i.e. O(2^n)), then that does tell you something about Hn+2 and so on (because the language is prefix-free, a large number of halting L-programs of length n+1 “rules out” many longer strings), and in that case you would still need O(log n) advice bits. However, it turns out that the asymptotic density over n of such large values of Hn+1 is 0.

More precisely: Lemma 1 shows that for almost all n, Hn < 2^n / n. Lemma 2 provides a method for determining Hn+1 with a number of advice bits in terms of n and Hn+1. Theorem 3 then puts these two lemmas together: whenever Hn+1 fits within that typical bound, the number of advice bits is O(log log n). Finally, Corollary 4 shows that this same bound on advice bits applies when using the Busy Beaver variant ΣL(n), described below.

A note about the language L

The language L in this proof is prefix-free, universal, and has self-delimiting input. That is, a string in L is a concatenation of an L-expression bitstring and an input bitstring, b, where the L-expression reads exactly |b| bits of input before halting. The L-expression is not given the length of b, and instead must “decide” how many input bits to read, making L self-delimiting and therefore prefix-free. Languages with this feature have also been referred to as “Chaitin machines” [3], “optimal description” [4], or simply “universal” [5].

A key benefit of this property is that L can accept strings in any other prefix-free language L2, with an O(1) overhead for an L-expression that is an L2-interpreter. This makes L “optimal” in the sense of the invariance theorem. This property does make L resistant to analysis, since an L2-interpreter might have syntax entirely different from L (which could even be uncomputable, e.g. “a string in L2 consists of a prefix-free binary encoding of a Turing Machine, followed by a ‘1’ bit for every shift that machine performs when run on an all-zero tape”). L’s analysis resistance does not affect the results of this proof.

Examples of languages that have this property are Chaitin’s LISP variant [6], self-delimited Binary Lambda Calculus [7], and Keraia [3]. It can also be envisioned [4] as a multi-tape Turing Machine where one of the tapes is a read-only unidirectional “input” tape, and the machine has no way to determine if it has reached the end of the input tape – in the example languages, reading too many bits causes the program to block forever, never halting.

While this does differ from Theorem 20 (which covers “standard prefix-free universal” languages), I’m adding the self-delimiting input constraint for a number of reasons:

  • It matches Chaitin’s proof 5.1 [2] and the definition of Chaitin’s constant
  • It makes the Busy Beaver function “optimal”, i.e. for any universal prefix-free language L2 (not necessarily with self-delimited input) there exists a constant c such that for all n, BBL(n+c) >= BBL2(n) [8]
  • It lets us add arbitrary properties (such as efficient Levenshtein coding of integers) to L with only a constant overhead
  • It lets us mostly ignore (with O(1) overhead) the distinction between BBL(n) and ΣL(n) (see Corollary 4)

The implications for this proof are:

  • We don’t have to enforce any requirements about the features or behavior of L, other than that it is prefix-free, universal, and has self-delimiting input.
    • If there’s some feature we need, we can assume that the feature is in some language L2, which means that L plus constant overhead also has that feature via the L2-interpreter.
  • To determine if a string is in L, we need to check if it read all of the string’s input bits and no more.
    • For example: take a string s with total length m that consists of an L-expression plus zero or more input bits appended.
      • If s halts after reading all of the input bits, it’s a complete L-program with length m.
      • If s halts without reading some bits, then it’s instead a shorter program with some unnecessary bits appended, and is not in L because L is prefix-free.
      • If s tries to read more bits than are provided, it is not in L (though s might be a prefix of a longer L-program).
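
The three cases above can be illustrated with a toy self-delimiting language (a hypothetical stand-in for L, for illustration only): an expression of the form "1"*k + "0" reads exactly k input bits and then halts. A sketch in Python:

```python
def classify(s):
    """Classify bitstring s against a toy self-delimiting language in which
    the expression '1' * k + '0' reads exactly k input bits, then halts.
    (A hypothetical stand-in for L, for illustration only.)"""
    if "0" not in s:
        return "prefix"          # expression never completes; s may extend to a program
    k = s.index("0")             # leading ones = number of input bits to read
    provided = len(s) - k - 1    # input bits actually present after the expression
    if provided == k:
        return "complete"        # halts having read every input bit: in the language
    if provided > k:
        return "trailing bits"   # a shorter program plus junk: excluded by prefix-freeness
    return "prefix"              # blocks trying to read missing bits; maybe a prefix

# The expression "110" reads two input bits:
print(classify("11011"))    # complete
print(classify("110110"))   # trailing bits
print(classify("1101"))     # prefix
```

The same three-way distinction is what the L-program emulator in Lemma 2 has to make when tallying halting programs.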

A note about Busy Beaver variants

There are a number of different Busy Beaver variants, including:

  1. BB(n): the maximum number of shifts performed by a two-symbol, halting Turing machine with up to n states when started on an all-zero tape. Radó [9] originally named this function S(n) but BB(n) is much more commonly used, including in Frontier [1].
  2. Σ(n): the maximum number of ones printed by a two-symbol, halting Turing machine with up to n states when started on an all-zero tape. This is the “traditional” Busy Beaver function envisioned by Radó.
  3. BBL(n): the maximum runtime of a halting program in a prefix-free and universal language L that is n bits or less in length.
  4. ΣL(n): the largest number returned by a halting program in a prefix-free and universal language L that is n bits or less in length. Denoted BB’L(n) in Frontier.

Variant (4), ΣL(n), is perhaps the most interesting for Algorithmic Information Theory. It has the property that it does not depend on implementation details – the precise value of BBL(n) depends on whether “runtime” means the number of CPU cycles, Turing machine shifts, β-reductions in Lambda calculus, or some other notion of a “computational step”, but ΣL(n) is only determined by outputs. If the language L has self-delimited inputs, ΣL(n) becomes linked to Kolmogorov complexity, as ΣL(n) is then equivalent to “the largest number whose Kolmogorov complexity in L is less than or equal to n”. It has been observed by Chaitin [10] that, with such an L, ΣL(n) bounds BBL(n) within O(1). That is, BBL(n - O(1)) <= ΣL(n) for all n.

This paper uses variant (3), BBL(n), consistent with Frontier. If ΣL(n) is known rather than BBL(n), there are a few amendments needed to account for the O(1) overhead. These changes are described in Corollary 4, and do not affect the O(log log n) bounds of the proof.


Proof

First, some notation:

  • Hx is the number of halting L-programs (L-expression plus input bits) with program length equal to x.
  • log(x) is always base 2, rounded down to the nearest integer.
  • enc(x) returns the (prefix-free) Levenshtein coding of the integer x. Note that for all x, |enc(x)| = log(x) + O(log(log(x))) and |enc(x + 1)| >= |enc(x)|.
  • “Almost all n” is meant in the sense of natural density. That is, the number of integers less than n that satisfy the condition, divided by n, tends asymptotically to one.
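
For concreteness, here is a sketch of a Levenshtein encoder (my own implementation of the standard construction, not taken from the sources above), which can be used to check the two properties of enc(x) numerically:

```python
def levenshtein_encode(x):
    """Levenshtein (prefix-free) coding of a non-negative integer."""
    if x == 0:
        return "0"
    parts = []
    c = 1                    # count of encoding rounds
    while True:
        b = bin(x)[3:]       # binary expansion of x with its leading '1' dropped
        parts.append(b)
        if not b:            # x was 1: nothing left to describe
            break
        c += 1
        x = len(b)           # next round encodes the length just written
    return "1" * c + "0" + "".join(reversed(parts))

# e.g. 0 -> "0", 1 -> "10", 2 -> "1100", 5 -> "1110001"
codes = [levenshtein_encode(x) for x in range(8)]
```

Both |enc(x + 1)| >= |enc(x)| and prefix-freeness (no codeword is a prefix of another) hold for this construction.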

Lemma 1: For almost all n, Hn < 2^n / n

The definition of Chaitin’s constant Ω is Ω = Σ 2^(-|p|), summed over all halting programs p in L. An equivalent expression is Ω = Σ 2^(-n) Hn for all n = 1 → ∞. This means that:

  • Ω = Σ 2^(-n) Hn for all n = 1 → ∞
  • Ω >= Σ 2^(-n) Hn for all n = 1 → ∞ such that Hn >= 2^n / n
  • Ω >= Σ 2^(-n) · 2^n / n for all n = 1 → ∞ such that Hn >= 2^n / n
  • Ω >= Σ 1/n for all n = 1 → ∞ such that Hn >= 2^n / n

So, Chaitin’s constant is greater than or equal to some subseries of the harmonic series, specifically the subseries over n where Hn >= 2^n / n. However, Chaitin’s constant is finite, which per Theorem 1 of Lubeck and Ponomarenko [11] means that the elements of the harmonic series in that convergent subseries must have asymptotic density 0, so such n must have asymptotic density 0. Conversely, n such that Hn < 2^n / n must have asymptotic density 1. □
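
The phenomenon Lemma 1 relies on can be illustrated numerically with a synthetic example (this is only an illustration of the convergent-subseries-implies-density-zero direction, not part of the proof): the subseries of 1/n over powers of two converges, and correspondingly the density of powers of two below N vanishes as N grows.

```python
# Illustration: a convergent subseries of the harmonic series has density 0.
# Take S = {1, 2, 4, 8, ...}; the sum of 1/n over S converges (to 2), and
# the density of S below N shrinks like log(N)/N.
is_power_of_two = lambda n: n & (n - 1) == 0

def density(N):
    return sum(1 for n in range(1, N + 1) if is_power_of_two(n)) / N

subsum = sum(1.0 / n for n in range(1, 2**20) if is_power_of_two(n))
print(subsum)                  # approaches 2
print(density(100))            # 0.07
print(density(10000))          # 0.0014
```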

Lemma 2: There is a procedure that can be used to determine BBL(n+1) from BBL(n) and a number of advice bits, p, such that p <= log(Hn+1) + O(1) + |enc(n)| + |enc(p)| - n

The procedure we describe will be similar to that of Theorem 20 [1], except that we test approximations of Hn+1 rather than approximations of Chaitin’s constant. The other primary difference is a parameter p, which encodes the imprecision in our candidate for Hn+1. We describe an L-program doesThisManyHalt (pseudocode in Appendix A). We run this program multiple times, each time varying p and the approximation of Hn+1. By using BBL(n) to evaluate whether the program halts on any particular set of inputs, we will be able to arrive at an estimate for Hn+1 with p bits of lost precision.

Inputs and total program size: doesThisManyHalt consists of its constant-length program definition with three inputs. The first two inputs are n (the length of the known BBL(n)) and p (the amount of imprecision in our candidate estimate for Hn+1), in the prefix-free Levenshtein coding. The third input, the most significant digits of the candidate, is encoded as a simple non-prefix-free binary integer. This is legitimate because the candidate is the last input and the program will be able to compute its length from the other inputs, resulting in a program of length n:

  • candidate_length = n - |doesThisManyHalt| - |enc(n)| - |enc(p)|
  • candidate_length = n - O(1) - |enc(n)| - |enc(p)|

Leading zeroes are allowed, so the length of the binary expression of candidate can be less than candidate_length. This gives us log(candidate) + 1 <= candidate_length.

Program logic: First, the number candidate_length is inferred by subtracting from n the length of the program definition (a hard-coded constant) and the lengths of the prefix-free encodings of the inputs n and p. The program reads the candidate bits as a binary number (with read_bit() in the pseudocode), and stores that number as candidate. The special case where candidate = 0 is checked, and the program immediately halts if so. It then iterates through each of the 2^(n+1) strings of length n+1, emulating them as L-programs in parallel. Whenever one of the programs halts with total size n+1 bits (both L-expression and input bits), it is added to a tally halted. If that tally reaches candidate * 2^p, then doesThisManyHalt halts.

Recall that a string is only in L if all input bits are read. So, the L-program emulator in doesThisManyHalt must track how many input bits are read by the L-expression, so that only those that halt with a total size of n+1 bits are added to the tally. If the L-expression reads fewer bits of input, then the string is not a valid L-program. Similarly, if a program tries to read more bits than are provided as input, then the string is not a valid L-program, and should not be added to the tally either. These features of the L-program emulator are part of the program definition of doesThisManyHalt.

Estimating Hn+1: Suppose we know BBL(n). Because doesThisManyHalt plus its inputs are a length n L-program, we can evaluate whether it halts. This can be used in a test to estimate Hn+1 given BBL(n):

  • Start with candidate = 0 and p = 0.
  • Run doesThisManyHalt with prefix-free n and p, with candidate encoded into a non-prefix-free binary string of length candidate_length, left-padded if necessary. Use BBL(n) to determine if it halts.
  • If doesThisManyHalt halts, Hn+1 >= candidate * 2^p, because candidate * 2^p halting programs of length n+1 were found. Increment candidate by one. If that pushes the length of the binary expression of candidate over candidate_length, reset candidate to zero and increment p instead. Return to the previous step.
  • If doesThisManyHalt does not halt, Hn+1 < candidate * 2^p. Call the inputs to the first non-halting run candidate’ and p’.
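
The search loop above can be sketched as follows, with the BBL(n)-powered halting test replaced by a stub oracle over a hidden value of Hn+1 (everything here is illustrative, and the names find_bracket and halts are mine; halts stands in for emulating doesThisManyHalt for up to BBL(n) steps):

```python
def find_bracket(H, candidate_length):
    """Iterate (candidate, p) as in the text until the first non-halting run.
    halts(c, p) stubs the BB_L(n)-based test: doesThisManyHalt halts iff
    c == 0, or at least c * 2**p programs of length n+1 halt (i.e. H >= c * 2**p)."""
    halts = lambda c, p: c == 0 or H >= c * (1 << p)
    candidate, p = 0, 0
    while halts(candidate, p):
        candidate += 1
        if candidate.bit_length() > candidate_length:
            candidate, p = 0, p + 1   # candidate overflowed: reset it, bump p
    return candidate, p               # the first non-halting inputs (candidate', p')

# The bracket (candidate' - 1) * 2**p' <= H < candidate' * 2**p' always holds:
c, p = find_bracket(H=1000, candidate_length=4)
print(c, p)   # 8 7, and indeed 7 * 128 = 896 <= 1000 < 1024 = 8 * 128
```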

Because the input p is adjustable rather than hard-coded, candidate * 2^p can reach 2^(n+1), the worst-case upper bound of Hn+1, with p encoded in only O(log(n)) bits. That is, as long as candidate_length > 0 for p = n+1, or:

  • 0 < n - O(1) - |enc(n)| - |enc(p)|
  • O(1) + |enc(n)| + |enc(n + 1)| < n

we will eventually find inputs candidate’ and p’ such that doesThisManyHalt does not halt.

Consider the inputs candidate’’ and p’’ of the last halting run of doesThisManyHalt, just before the one with non-halting inputs candidate’ and p’. We can infer that p’’ = p’ and candidate’’ = candidate’ - 1. Suppose otherwise: p was incremented instead, so p’’ = p’ - 1. Any time p is incremented, candidate is reset to zero, so candidate’ = 0. However, whenever the candidate input is zero, doesThisManyHalt halts immediately, so the inputs candidate’ and p’ would have halted, a contradiction. Therefore:

  • (candidate’ - 1) * 2^p’ <= Hn+1 < candidate’ * 2^p’

giving us Hn+1 within p’ bits of precision.

Specifically, there must exist an integer k, 0 <= k < 2^p’, such that (candidate’ - 1) * 2^p’ + k = Hn+1. If these p’ bits of k are provided as advice bits, we will have the exact value of Hn+1. We can then run all programs of length n+1 in parallel until that many halt with total program length n+1 bits, select the runtime of the longest-running such program, and set BBL(n+1) to either that program’s runtime or BBL(n), whichever is greater.
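
This final reconstruction step can be sketched with toy data standing in for the dovetailed emulation of all length-(n+1) strings (the name bb_next is mine, and runtimes is a hypothetical list of each candidate program's halting time, with None for strings that never halt or are not in L):

```python
def bb_next(runtimes, H, bb_prev):
    """Dovetail until exactly H programs have halted; the step count at that
    moment is the runtime of the longest-running halting program found.
    Returns the larger of that and the previous Busy Beaver value."""
    t, halted = 0, 0
    while halted < H:
        t += 1
        halted = sum(1 for r in runtimes if r is not None and r <= t)
    return max(bb_prev, t)

# Toy example: the H = 4 halting "programs" have runtimes 2, 3, 5, 7,
# so the dovetailer stops at step 7, which exceeds the previous value 5.
print(bb_next([3, 5, None, 2, None, 7], H=4, bb_prev=5))   # 7
```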

Lower bounds on candidate: Recall from “Inputs and total program size” that:

  • candidate_length = n - O(1) - |enc(n)| - |enc(p)|

We can infer that either p’ = 0 (in which case we have a precise value for Hn+1 and can determine BBL(n+1)) or candidate_length = log(candidate’) + 1. For the sake of contradiction, assume otherwise: p’ > 0 and candidate_length > log(candidate’) + 1. For this to be true, there would need to be at least one bit of padding in front of candidate’. Recall also that (candidate’ - 1) * 2^p’ <= Hn+1 < candidate’ * 2^p’.

Observe that |enc(p’-1)| <= |enc(p’)|, so candidate_length with p = p’-1 is at least as long as candidate_length with p = p’. Both candidate’ * 2 and candidate’ * 2 - 1 are at most one bit longer than candidate’, so they would have fit into the same candidate_length as candidate’, because candidate’ has at least one bit of padding. So, doesThisManyHalt could have been run with inputs p’-1 and either candidate’ * 2 or candidate’ * 2 - 1 while satisfying the constraint on candidate_length.

At least one of these inputs would have resulted in non-halting doesThisManyHalt:

  • If (candidate’ - 1) * 2^p’ <= Hn+1 < (candidate’ - 1/2) * 2^p’ then doesThisManyHalt with inputs p’-1 and candidate’ * 2 - 1 would not have halted
  • If (candidate’ - 1/2) * 2^p’ <= Hn+1 < candidate’ * 2^p’ then doesThisManyHalt with inputs p’-1 and candidate’ * 2 would not have halted
  • (candidate’ - 1) * 2^p’ <= Hn+1 < candidate’ * 2^p’, so one of the two sets of inputs would have caused doesThisManyHalt not to have halted

However, the runs of doesThisManyHalt are ordered by ascending p, exhausting all possible values of candidate that can fit into candidate_length. The inputs p’-1 and either candidate’ * 2 or candidate’ * 2 - 1 both would have been tried before p’ and candidate’, because p’-1 < p’. One of them would have been found to not halt, resulting in different values for p’ and candidate’ than the ones found. A contradiction.

So if p’ is nonzero:

  • log(candidate’) + 1 = n - O(1) - |enc(n)| - |enc(p’)|
  • log(candidate’) = n - O(1) - |enc(n)| - |enc(p’)|

Upper bounds on p’: Recall that (candidate’ - 1) * 2^p’ <= Hn+1.

  • (candidate’ - 1) * 2^p’ <= Hn+1
  • log(candidate’ - 1) + p’ <= log(Hn+1)
  • p’ <= log(Hn+1) - log(candidate’ - 1)
  • p’ <= log(Hn+1) - log(candidate’) + 1
  • p’ <= log(Hn+1) - (n - O(1) - |enc(n)| - |enc(p’)|) + 1
  • p’ <= log(Hn+1) + O(1) + |enc(n)| + |enc(p’)| - n

Remarks: It is noticeable that, while this Lemma and Theorem 20 [1] use similar procedures (estimating Hn+1 or Ωn+1 through iterated runs, tallying halting machines), this Lemma required significantly more paperwork than Theorem 20. This is because Theorem 20 only needed to show that O(log n) advice bits suffice, while the savings from Lemma 1 amount to exactly log n bits, leaving no room for slack. Rather than encoding both n and the candidate bitstring in a prefix-free way, which would have simplified the reasoning, we could only afford to encode one of them.

Theorem 3: Given BBL(n), BBL(n+1) can be determined with O(log(log(n))) advice bits for almost all n

Substituting the upper bound of Hn from Lemma 1 into the advice bit upper bound from Lemma 2, we have that for almost all n the number of advice bits p satisfies:

  • p <= log(Hn+1) + O(1) + |enc(n)| + |enc(p)| - n
  • p <= log(2^(n+1) / (n+1)) + O(1) + |enc(n)| + |enc(p)| - n
  • p <= (n + 1) - log(n+1) + O(1) + log(n) + O(log(log(n))) + |enc(p)| - n
  • p <= O(1) + O(log(log(n))) + |enc(p)|
  • p <= O(log(log(n)))

Corollary 4: Given ΣL(n), ΣL(n+1) can be determined with O(log(log(n))) advice bits for almost all n

Often, ΣL(n) is considered rather than BBL(n). We will show that the same (big-O) number of advice bits is needed to determine ΣL(n+1) given ΣL(n). Lemma 1 and Theorem 3 do not directly use BBL(n), so it suffices to show that Lemma 2 holds with ΣL(n+1) instead of BBL(n+1).

Recall that, while BBL(n) is the maximum runtime of a halting n-bit program in L, ΣL(n) is the largest number returned by a halting n-bit program in L. As Chaitin [10] observes, because L has self-delimited input, BBL(n - O(1)) <= ΣL(n). This can be shown with a constant-length L-interpreter that runs a program in L, and returns the number of steps taken during that program’s runtime. If the L-interpreter for calculating runtime is c bits long, then ΣL(n) is at least as large as BBL(n-c). Since any function that gives an upper bound for BBL(n-c) can be used to compute BBL(n-c), ΣL(n) gives us BBL(n-c).

In Lemma 2, the only place where we use BBL(n) is when we use it to determine if doesThisManyHalt and its inputs halt. Because ΣL(n) only gives us BBL(n-c), we have to make doesThisManyHalt and its inputs c bits shorter. The program itself and most of its inputs are either fixed (|doesThisManyHalt|, n) or independent of n (p), so we reduce candidate_length. That is, in “Inputs and total program size”:

  • candidate_length = n - |doesThisManyHalt| - |enc(n)| - |enc(p)| - c
  • candidate_length = n - O(1) - |enc(n)| - |enc(p)|

This gives the same (big-O) value for candidate_length as was found with BBL(n), so all of the derivations based on candidate_length hold without modification. We then update “Program logic” to reflect that the program length of doesThisManyHalt and its inputs are now n-c bits, by subtracting an additional c when inferring candidate_length in the logic of doesThisManyHalt (the updated pseudocode would read int candidate_length = n - |doesThisManyHalt| - |enc(n)| - |enc(p)| - c;).

Finally, we update “Estimating Hn+1”. doesThisManyHalt and its inputs are now n-c bits, so the known BBL(n-c) can be used to determine whether it halts. Once we have sufficient advice bits to determine Hn+1, we compute ΣL(n+1): we run all 2^(n+1) strings of length n+1 in parallel until Hn+1 of them halt with total program length n+1 bits, select the largest number returned by these programs, compare that to ΣL(n), and select whichever is larger.

Every other use of n (e.g. the first input to doesThisManyHalt) remains unchanged. The rest of the proof follows. □


Discussion

Unfortunately, this proof doesn’t provide better bounds for BB(n), defined over n-state, two-symbol Turing machines. In Theorem 21 [1], Aaronson finds that K(BB(n+1) | BB(n)) = O(n). Note that the number of bits to describe an n-state Turing machine is n log(n) + O(n), compared to n for an n-bit program in a universal prefix-free language. If we try to improve the bounds of Theorem 21 using the proof above, we find that the bits “saved” for an n-state Turing machine would be only O(log(n)). So, the required bits would still be O(n).

An interesting conclusion is that all but O(log(n)^k) programs of length n+1 halt before BBL(n), for some integer k. Specifically, the longest-running halting run of doesThisManyHalt with n total bits runs longer than all but 2^O(log(log(n))) of the halting programs of length n+1. So, incrementing n only adds a relatively small number of “interesting” programs (i.e. programs of length n+1 that run longer than BBL(n)). This goes against my intuition that things ought to get “exponentially more interesting” as program lengths increase.

Limitations of the method in Lemma 2

Lemma 1 lets us save log(n) bits due to the harmonic series diverging. But the series Σ 1/(n log(n)) diverges as well, as does Σ 1/(n log(n) log(log(n))) and so on. Similarly, the Levenshtein coding of n uses log(n) + log(log(n)) + … + 2 + log*(n) bits (where log*(x) is the iterated logarithm of x). So, it seems like we could use the repeated logarithms in the diverging series to save more bits from the encoding of n, as we do in Theorem 3 with the first log(n) term. Could the method in Lemma 2 perhaps do even better than we’ve shown? As it turns out, probably not.

While it is shown [11] that a convergent subseries of the harmonic series must have asymptotic density zero, the general case of a convergent subseries of an arbitrary diverging series with monotonically-decreasing terms doesn’t give such strict bounds. Šalát [12] shows (his Theorem 2) that such a convergent subseries has density with lim inf = 0, and gives (his Theorem 1) an example where the lim sup is nonzero. Nonzero lim sup but zero lim inf means that, for this to affect the needed advice bits in Theorem 3 for more than asymptotic-density-zero n, the density of n where Hn+1 >= 2^n / (n log(n)) would have to “jump around” between zero and some nonzero value as n tends towards infinity.

While this seems implausible for any sensible language, L has self-delimiting input. Therefore, the design space of L includes all possible prefix-free languages, with O(1) overhead for an interpreter. Pathological languages that fit the above description will contribute to the tally of Hn+1, once n is large enough to fit the interpreter for those languages. Similarly, languages where there exist n such that K(BBL(n+1) | BBL(n)) > O(log log n), such as in Appendix B, will contribute as well – though, per Lemma 1, the asymptotic density of such n will be zero.

This would imply that if it’s possible to determine BBL(n+1) from BBL(n) with much less than O(log log n) advice bits, it would require some other method than the one described here.


Appendix

Appendix A: Pseudocode for doesThisManyHalt

doesThisManyHalt(int n, int p) {
    // Infer the candidate's length from the total program length n
    int candidate_length = n - |doesThisManyHalt| - |enc(n)| - |enc(p)|;
    int candidate = 0;
    int bitsRead = 0;
    while (bitsRead < candidate_length) {
        candidate = candidate * 2 + read_bit();
        bitsRead++;
    }
    if (candidate == 0) {
        exit;  // special case: a zero candidate halts immediately
    }
    int halted = 0;
    // Dovetail every string of length n+1 as an L-program
    getAllStringsOfLength(n + 1).runInParallel().whenHalt(program -> {
        // Count only complete L-programs: expression plus read input = n+1 bits
        if (program.expression_length + program.read_bits == n + 1) {
            halted++;
            if (halted >= candidate * 2^p) {
                exit;  // found candidate * 2^p halting programs of length n+1
            }
        }
    });
    // If the tally never reaches candidate * 2^p, run forever (never halt)
}

Appendix B: A language with some n such that K(BBL(n+1) | BBL(n)) > O(log log n)

Consider the prefix-free language which is defined:

  • If the first bit is a zero, treat the rest of the stream as input to the interpreter of some other prefix-free language (Lisp etc).
  • If the first bit is a one, continue reading the input until you reach a zero. Set b to be the number of initial ones read. Read 2^(2^b) - b - 1 bits after the first zero (for a total of 2^(2^b) bits), then halt.
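
The count of halting programs contributed by the “first bit is a one” branch can be checked by brute force for small b (a sanity check only; the interpreter’s constant overhead c is ignored here, and the name is_branch_one_program is mine). For b = 1, programs have total length k = 2^(2^1) = 4, and there should be 2^(k - b - 1) = 4 of them:

```python
from itertools import product

def is_branch_one_program(s):
    """Does bitstring s parse as a complete 'first bit is a one' program?
    Shape: b ones, a zero, then exactly 2**(2**b) - b - 1 further bits."""
    if not s.startswith("1"):
        return False
    b = len(s) - len(s.lstrip("1"))   # count of leading ones
    if b == len(s):                   # no terminating zero was read
        return False
    return len(s) == 2 ** (2 ** b)    # total length must be exactly 2^(2^b)

count = sum(is_branch_one_program("".join(bits))
            for bits in product("01", repeat=4))
print(count)   # 4 == 2**(4 - 1 - 1)
```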

Let k be any integer that can be expressed as 2^(2^b) for an integer b (noting that therefore b = log log k), and c be the length of the constant-length interpreter for this language in L. From the above definition, there are at least 2^(k - b - 1) halting programs of length k+c. Per Lemma 2, for n=k+c this gives us:

  • p <= log(Hk+c) + O(1) + |enc(n)| + |enc(p)| - n
  • p <= log(2^(k - b - 1)) + O(1) + |enc(n)| + |enc(p)| - n
  • p <= k - b + O(1) + |enc(n)| + |enc(p)| - n
  • p <= (n - c) - log(log(n - c)) + O(1) + log(n) + log(log(n)) + O(log log log n) + |enc(p)| - n
  • p <= O(log(n))

so we’ll need O(log(n)) advice bits for infinitely many n. Specifically, for all n such that n = k+c = 2^(2^b) + c. Such n occur with density log(log(n-c))/n, which asymptotically approaches zero, so Lemma 1 still holds.


  1. Scott Aaronson. 2020. The Busy Beaver Frontier. https://www.scottaaronson.com/papers/bb.pdf

  2. G. Chaitin. 2010. To a mathematical theory of evolution and biological creativity. Technical Report 391, Centre for Discrete Mathematics and Theoretical Computer Science. https://www.cs.auckland.ac.nz/research/groups/CDMTCS/researchreports/391greg.pdf

  3. Michael Stay. 2005. Very Simple Chaitin Machines for Concrete AIT. Fundamenta Informaticae 68(3), 231–247.

  4. Alexander Shen, Vladimir A. Uspensky, and Nikolay Vereshchagin. 2017. Kolmogorov Complexity and Algorithmic Randomness. Volume 220. American Mathematical Society.

  5. John Tromp. 2006. Binary Lambda Calculus and Combinatory Logic. doi:10.1142/9789812770837_0014

  6. G.J. Chaitin. 1995. The Limits of Mathematics—Tutorial Version. arXiv: Chaotic Dynamics.

  7. John Tromp. Binary Lambda Calculus.

  8. John Tromp. 2023. OEIS A361211.

  9. Tibor Radó. May 1962. On non-computable functions. Bell System Technical Journal 41(3), 877–884.

  10. G.J. Chaitin. 1987. Computing the Busy Beaver Function. In: Cover, T.M., Gopinath, B. (eds), Open Problems in Communication and Computation. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-4808-8_28

  11. Brian Lubeck and Vadim Ponomarenko. 2018. Subsums of the Harmonic Series. The American Mathematical Monthly 125, 351–355. doi:10.1080/00029890.2018.1420996

  12. Tibor Šalát. 1964. On subseries. Mathematische Zeitschrift 85(3), 209–225.

