
Iterated logarithm

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recurrence relation:

\log^{*}n := \begin{cases} 0 & \text{if } n \leq 1 \\ 1 + \log^{*}(\log n) & \text{if } n > 1 \end{cases}
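
The recurrence above applies to any logarithm base; the following minimal Python sketch fixes base 2, matching the binary iterated logarithm lg* mentioned later in the article. The function name iterated_log2 is our own, not from the article.

```python
import math

def iterated_log2(n) -> int:
    """Binary iterated logarithm lg*: how many times log2 must be applied until the value is <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

print(iterated_log2(16))          # 3:  16 -> 4 -> 2 -> 1
print(iterated_log2(65536))       # 4:  65536 -> 16 -> 4 -> 2 -> 1
print(iterated_log2(2 ** 65536))  # 5
```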


In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2) instead of the natural logarithm (with base e). Mathematically, the iterated logarithm is well defined for any base greater than e^(1/e) ≈ 1.444667, not only for base 2 and base e. The "super-logarithm" function slog_b(n)

A log ratio of −1, and an unchanged expression rate can be described by a log ratio of zero, for instance. Data points obtained in this way are often visualized as a scatterplot in which one or both of the coordinate axes are binary logarithms of intensity ratios, or in visualizations such as the MA plot and RA plot that rotate and scale these log ratio scatterplots. In music theory,

A measure is given by the cent, which divides the octave into 1200 equal intervals (12 semitones of 100 cents each). Mathematically, given tones with frequencies f₁ and f₂, the number of cents in the interval from f₁ to f₂ is 1200 log₂(f₂/f₁). The millioctave is defined in the same way, but with a multiplier of 1000 instead of 1200. In competitive games and sports involving two players or teams in each game or match,
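
A short Python illustration of this cent calculation (our own sketch, not part of the article; the function names are assumptions):

```python
import math

def cents(f1: float, f2: float) -> float:
    """Size of the interval from frequency f1 to f2, in cents: 1200 * log2(f2/f1)."""
    return 1200 * math.log2(f2 / f1)

def millioctaves(f1: float, f2: float) -> float:
    """The same interval measured in millioctaves: 1000 * log2(f2/f1)."""
    return 1000 * math.log2(f2 / f1)

print(cents(440.0, 880.0))            # an octave is exactly 1200 cents
print(round(cents(440.0, 660.0), 2))  # a just perfect fifth (3:2) is about 701.96 cents
```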

A sample of biological material. Different rates of expression of a gene are often compared by using the binary logarithm of the ratio of expression rates: the log ratio of two expression rates is defined as the binary logarithm of the ratio of the two rates. Binary logarithms allow for a convenient comparison of expression rates: a doubled expression rate can be described by a log ratio of 1, a halved expression rate can be described by

A solution for a problem of size n. Similarly, a perfectly balanced binary search tree containing n elements has height log₂(n + 1) − 1. The running time of an algorithm is usually expressed in big O notation, which is used to simplify expressions by omitting their constant factors and lower-order terms. Because logarithms in different bases differ from each other only by a constant factor, algorithms that run in O(log₂ n) time can also be said to run in, say, O(log₁₃ n) time. The base of

A unique integer n such that 2^n ≤ x < 2^(n+1), or equivalently 1 ≤ 2^(−n) x < 2. Now the integer part of the logarithm is simply n, and the fractional part is log₂(2^(−n) x). In other words, log₂ x = n + log₂ y, where y = 2^(−n) x lies in the interval [1, 2). For normalized floating-point numbers, the integer part is given by the floating-point exponent, and for integers it can be determined by performing a count leading zeros operation. The fractional part of
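
As an illustrative sketch (ours, not the article's), this split can be computed in Python with math.frexp and math.ldexp; the function name log2_parts is an assumption.

```python
import math

def log2_parts(x: float) -> tuple[int, float]:
    """Split log2(x) into its integer part n and fractional part log2(2**(-n) * x)."""
    m, e = math.frexp(x)        # x == m * 2**e with 0.5 <= m < 1
    n = e - 1                   # integer part: 2**n <= x < 2**(n+1)
    y = math.ldexp(x, -n)       # scale x into [1, 2)
    return n, math.log2(y)      # fractional part lies in [0, 1)

print(log2_parts(10.0))         # (3, 0.3219...), since log2(10) ≈ 3.3219
```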

Is O(log* n). In computational complexity theory, Santhanam shows that the computational resources DTIME (computation time for a deterministic Turing machine) and NTIME (computation time for a non-deterministic Turing machine) are distinct up to n√(log* n).

Binary logarithm

In mathematics,

Is "essentially equivalent" to the base-b iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration. The iterated logarithm is useful in analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms such as: The iterated logarithm grows at an extremely slow rate, much slower than

Is helpful to have a measure of the size of an interval that is finer than an octave and is additive (as logarithms are) rather than multiplicative (as frequency ratios are). That is, if tones x, y, and z form a rising sequence of tones, then the measure of the interval from x to y plus the measure of the interval from y to z should equal the measure of the interval from x to z. Such

Is invertible on this interval. The inverse, the generalized exponential function, is defined by leaving x unchanged for 0 ≤ x ≤ 1 and, for x > 1, applying the ordinary exponential to the function's value at x − 1. The density of values X represented by x has no discontinuities as we go from level ℓ to ℓ + 1 (a very desirable property) since: The generalized logarithm function is closely related to the iterated logarithm used in computer science analysis of algorithms. Formally, we can define

Is necessary to have at least one round in which not all remaining competitors play. For example, log₂ 6 is approximately 2.585, which rounds up to 3, indicating that a tournament of 6 teams requires 3 rounds (either two teams sit out the first round, or one team sits out the second round). The same number of rounds is also necessary to determine a clear winner in a Swiss-system tournament. In photography, exposure values are measured in terms of
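
A minimal Python illustration of this rounds calculation (our own sketch; the function name rounds_needed is an assumption):

```python
import math

def rounds_needed(teams: int) -> int:
    """Rounds in a single-elimination tournament: log2(teams), rounded up."""
    return math.ceil(math.log2(teams))

print(rounds_needed(4))   # 2
print(rounds_needed(6))   # 3, since log2(6) ≈ 2.585 rounds up
print(rounds_needed(32))  # 5
```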



Is often written as log₂ n. However, several other notations for this function have been used or proposed, especially in application areas. Some authors write the binary logarithm as lg n, the notation listed in The Chicago Manual of Style. Donald Knuth credits this notation to a suggestion of Edward Reingold, but its use in both information theory and computer science dates to before Reingold

Is related to the number of leading zeros of the 32-bit unsigned binary representation of x, nlz(x). The integer binary logarithm can be interpreted as the zero-based index of the most significant 1 bit in the input. In this sense it is the complement of the find first set operation, which finds the index of the least significant 1 bit. Many hardware platforms include support for finding
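
The relationship ⌊log₂ x⌋ = 31 − nlz(x) for 32-bit values can be sketched in Python as follows (our own illustration; nlz here is a hypothetical helper built on int.bit_length, not a hardware instruction):

```python
def nlz(x: int) -> int:
    """Number of leading zeros in the 32-bit unsigned representation of x (x > 0)."""
    return 32 - x.bit_length()

def floor_log2(x: int) -> int:
    """Zero-based index of the most significant 1 bit, i.e. floor(log2(x))."""
    return 31 - nlz(x)

print(floor_log2(1))     # 0
print(floor_log2(32))    # 5
print(floor_log2(1000))  # 9
```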

Is the inverse function of the power of two function. As well as log₂, an alternative notation for the binary logarithm is lb (the notation preferred by ISO 31-11 and ISO 80000-2). Historically, the first application of binary logarithms was in music theory, by Leonhard Euler: the binary logarithm of a frequency ratio of two musical tones gives the number of octaves by which

Is the number of seconds of exposure. Binary logarithms (expressed as stops) are also used in densitometry, to express the dynamic range of light-sensitive materials or digital sensors. An easy way to calculate log₂ n on calculators that do not have a log₂ function is to use the natural logarithm (ln) or the common logarithm (log or log₁₀) functions, which are found on most scientific calculators. To change

The binary logarithm (log₂ n) is the power to which the number 2 must be raised to obtain the value n. That is, for any real number x, x = log₂ n if and only if 2^x = n. For example, the binary logarithm of 1 is 0, the binary logarithm of 2 is 1, the binary logarithm of 4 is 2, and the binary logarithm of 32 is 5. The binary logarithm is the logarithm to the base 2 and
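
A small Python check of these examples (our own sketch, using the standard math.log2):

```python
import math

for n in (1, 2, 4, 32):
    x = math.log2(n)
    print(n, x, 2 ** x)  # 2**x recovers n, since x = log2(n) means 2**x = n
```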

The complex logarithm in this definition allows the binary logarithm to be extended to the complex numbers. As with other logarithms, the binary logarithm obeys the following equations, which can be used to simplify formulas that combine binary logarithms with multiplication or exponentiation: log₂(xy) = log₂ x + log₂ y, log₂(x/y) = log₂ x − log₂ y, and log₂(x^p) = p log₂ x. For more, see list of logarithmic identities. In mathematics, the binary logarithm of a number n

The integer part, ⌊log₂ x⌋ (called the characteristic of the logarithm). This reduces the problem to one where the argument of the logarithm is in a restricted range, the interval [1, 2), simplifying the second step of computing the fractional part (the mantissa of the logarithm). For any x > 0, there exists

The interval or perceptual difference between two tones is determined by the ratio of their frequencies. Intervals coming from rational number ratios with small numerators and denominators are perceived as particularly euphonious. The simplest and most important of these intervals is the octave, a frequency ratio of 2:1. The number of octaves by which two tones differ is the binary logarithm of their frequency ratio. To study tuning systems and other aspects of music theory that require finer distinctions between tones, it

The 8th-century Jain mathematician Virasena is credited with a precursor to the binary logarithm. Virasena's concept of ardhacheda has been defined as the number of times a given number can be divided evenly by two. This definition gives rise to a function that coincides with the binary logarithm on the powers of two, but it is different for other integers, giving the 2-adic order rather than
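
A short Python sketch contrasting the two functions (our own illustration; the name ardhacheda and the comparison against math.floor(math.log2(n)) are our choices):

```python
import math

def ardhacheda(n: int) -> int:
    """Number of times n can be divided evenly by two (the 2-adic order of n)."""
    count = 0
    while n > 0 and n % 2 == 0:
        n //= 2
        count += 1
    return count

for n in (8, 12, 20):
    print(n, ardhacheda(n), math.floor(math.log2(n)))
    # 8: both give 3;  12: 2-adic order 2 vs floor(log2) = 3;  20: 2 vs 4
```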

The algorithm for symmetric level-index (SLI) arithmetic, and a parallel implementation of it. There has been extensive work on developing the SLI arithmetic algorithms and extending them to complex and vector arithmetic operations. The idea of the level-index system is to represent a non-negative real number X as a tower of exponentials, X = e^(e^(⋯^(e^f))), where 0 ≤ f < 1 and



The algorithm reduces the number of choices by a factor of two, then the number of iterations needed to select a single choice is again the integral part of log₂ n. This idea is used in the analysis of several algorithms and data structures. For example, in binary search, the size of the problem to be solved is halved with each iteration, and therefore roughly log₂ n iterations are needed to obtain
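
A minimal Python binary search that counts its halving iterations, illustrating the roughly log₂ n bound (our own sketch; the names are assumptions):

```python
import math

def binary_search(items, target):
    """Return (index or None, number of halving iterations performed)."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid, steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, steps

data = list(range(1000))
print(binary_search(data, 999), math.ceil(math.log2(1000)))  # steps stay near log2(1000) ≈ 10
```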

the algorithm:

\begin{aligned}
\log_{2}x &= n + 2^{-m_{1}}\left(1 + 2^{-m_{2}}\left(1 + 2^{-m_{3}}\left(1 + \cdots \right)\right)\right) \\
          &= n + 2^{-m_{1}} + 2^{-m_{1}-m_{2}} + 2^{-m_{1}-m_{2}-m_{3}} + \cdots
\end{aligned}

In

The argument to the log2 function is allowed to be a negative number, returning a complex one.

Generalized logarithm function

The level-index (LI) representation of numbers, and its algorithms for arithmetic operations, were introduced by Charles Clenshaw and Frank Olver in 1984. The symmetric form of the LI system and its arithmetic operations were presented by Clenshaw and Peter Turner in 1987. Michael Anuta, Daniel Lozier, Nicolas Schabanel and Turner developed

The binary logarithm has several applications in combinatorics: The binary logarithm also frequently appears in the analysis of algorithms, not only because of the frequent use of binary number arithmetic in algorithms, but also because binary logarithms occur in the analysis of algorithms based on two-way branching. If a problem initially has n choices for its solution, and each iteration of

The binary logarithm indicates the number of rounds necessary in a single-elimination tournament to determine a winner. For example, a tournament of 4 players requires log₂ 4 = 2 rounds to determine the winner, a tournament of 32 teams requires log₂ 32 = 5 rounds, etc. In this case, for n players/teams where n is not a power of 2, log₂ n is rounded up since it

The binary logarithm of a power of two is just its position in the ordered sequence of powers of two. On this basis, Michael Stifel has been credited with publishing the first known table of binary logarithms in 1544. His book Arithmetica Integra contains several tables that show the integers with their corresponding powers of two. Reversing the rows of these tables allows them to be interpreted as tables of binary logarithms. Earlier than Stifel,

The binary logarithm of the amount of light reaching the film or sensor, in accordance with the Weber–Fechner law describing a logarithmic response of the human visual system to light. A single stop of exposure is one unit on a base-2 logarithmic scale. More precisely, the exposure value of a photograph is defined as EV = log₂(N²/t), where N is the f-number measuring the aperture of the lens during the exposure, and t
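
A minimal Python sketch of this exposure-value formula (our own illustration; the function name is an assumption):

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """EV = log2(N^2 / t), with N the f-number and t the exposure time in seconds."""
    return math.log2(f_number ** 2 / shutter_seconds)

print(exposure_value(8.0, 1 / 125))   # about 13.0
print(exposure_value(11.0, 1 / 125))  # about one stop higher, roughly 13.9
```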

The binary logarithm, as it is instead reserved for the common logarithm log₁₀ n. The number of digits (bits) in the binary representation of a positive integer n is the integral part of 1 + log₂ n, i.e. ⌊log₂ n⌋ + 1. In information theory, the definition of the amount of self-information and information entropy is often expressed with the binary logarithm, corresponding to making
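
A quick Python check of this bit-count formula (our own sketch), comparing it with the built-in int.bit_length:

```python
import math

for n in (1, 5, 1023, 1024):
    formula = math.floor(math.log2(n)) + 1
    print(n, formula, n.bit_length())  # the two values agree for positive integers
```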

The bit the fundamental unit of information. With these units, the Shannon–Hartley theorem expresses the information capacity of a channel as the binary logarithm of its signal-to-noise ratio, plus one. However, the natural logarithm and the nat are also used in alternative notations for these definitions. Although the natural logarithm is more important than the binary logarithm in many areas of pure mathematics such as number theory and mathematical analysis,

The design of sports tournaments, and photography. Binary logarithms are included in the standard C mathematical functions and other mathematical software packages. The powers of two have been known since antiquity; for instance, they appear in Euclid's Elements, Props. IX.32 (on the factorization of powers of two) and IX.36 (half of the Euclid–Euler theorem, on the structure of even perfect numbers). And


The exponents of the time bounds for some divide and conquer algorithms, such as the Karatsuba algorithm for multiplying n-bit numbers in time O(n^(log₂ 3)), and the Strassen algorithm for multiplying n × n matrices in time O(n^(log₂ 7)). The occurrence of binary logarithms in these running times can be explained by reference to the master theorem for divide-and-conquer recurrences. In bioinformatics, microarrays are used to measure how strongly different genes are expressed in

The integers from 1 to 8, to seven decimal digits of accuracy. The binary logarithm function may be defined as the inverse function to the power of two function, which is a strictly increasing function over the positive real numbers and therefore has a unique inverse. Alternatively, it may be defined as ln n / ln 2, where ln is the natural logarithm, defined in any of its standard ways. Using

The inverse grows much slower: log*_b(x) ≪ log_b^n(x). For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in

The known universe), the iterated logarithm with base 2 has a value no more than 5. Higher bases give smaller iterated logarithms. The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times someone must replace the number by the sum of its digits before reaching its digital root,
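
A small Python sketch of additive persistence (our own illustration; the function name is an assumption):

```python
def additive_persistence(n: int) -> int:
    """Number of digit-sum steps before n becomes a single digit (its digital root)."""
    steps = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))
        steps += 1
    return steps

print(additive_persistence(9))    # 0, already a single digit
print(additive_persistence(199))  # 3: 199 -> 19 -> 10 -> 1
```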

The logarithm base from e or 10 to 2 one can use the formulae log₂ n = ln n / ln 2 = log₁₀ n / log₁₀ 2, or approximately log₂ n ≈ 1.442695 ln n ≈ 3.321928 log₁₀ n. The binary logarithm can be made into a function from integers to integers by rounding it up or down. These two forms of integer binary logarithm are related by this formula: ⌈log₂ n⌉ = ⌊log₂(n − 1)⌋ + 1 for integers n > 1. The definition can be extended by defining ⌊log₂(0)⌋ = −1. Extended in this way, this function
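
A Python sketch of the change-of-base calculation (ours; it simply checks the formulae above against the built-in math.log2):

```python
import math

n = 100.0
print(math.log(n) / math.log(2))      # natural-log route
print(math.log10(n) / math.log10(2))  # common-log route
print(3.321928 * math.log10(n))       # approximate multiplier 1 / log10(2)
print(math.log2(n))                   # reference value, about 6.6439
```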

The logarithm in expressions such as O(log n) or O(n log n) is therefore not important and can be omitted. However, for logarithms that appear in the exponent of a time bound, the base of the logarithm cannot be omitted. For example, O(2^(log₂ n)) is not the same as O(2^(2 log₂ n)) because the former is equal to O(n) and the latter to O(n²). Algorithms with running time O(n log n) are sometimes called linearithmic. Some examples of algorithms with running time O(log n) or O(n log n) are: Binary logarithms also occur in

The logarithm itself, or repeats of it. This is because tetration grows much faster than the iterated exponential:

{}^{y}b = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_{y} \gg \underbrace{b^{b^{\cdot^{\cdot^{b^{y}}}}}}_{n}

The logarithm. The modern form of a binary logarithm, applying to any number (not just powers of two), was considered explicitly by Leonhard Euler in 1739. Euler established the application of binary logarithms to music theory, long before their applications in information theory and computer science became known. As part of his work in this area, Euler published a table of binary logarithms of

The number of leading zeros, or equivalent operations, which can be used to quickly find the binary logarithm. The fls and flsl functions in the Linux kernel and in some versions of the libc software library also compute the binary logarithm (rounded up to an integer, plus one). For a general positive real number, the binary logarithm may be computed in two parts. First, one computes

The process of exponentiation is performed ℓ times, with ℓ ≥ 0. ℓ and f are the level and index of X respectively. x = ℓ + f is the LI image of X. For example, so its LI image is The symmetric form is used to allow negative exponents, if the magnitude of X is less than 1. One takes sgn(log(X)) or sgn(|X| − |X|⁻¹) and stores it (after substituting +1 for 0 for


The reciprocal sign, since for X = 1 = e^0 the LI image is x = 1.0 and uniquely defines X = 1, and we can do without a third state and use only one bit for the two states −1 and +1) as the reciprocal sign r_X. Mathematically, this is equivalent to taking the reciprocal (multiplicative inverse) of a small magnitude number, and then finding the SLI image for

The reciprocal. Using one bit for the reciprocal sign enables the representation of extremely small numbers. A sign bit may also be used to allow negative numbers. One takes sgn(X) and stores it (after substituting +1 for 0 for the sign, since for X = 0 the LI image is x = 0.0 and uniquely defines X = 0, and we can do without a third state and use only one bit for

The result is log₂ y and can be computed iteratively, using only elementary multiplication and division. The algorithm for computing the fractional part can be described in pseudocode as follows (see the sketch below). The result of this is expressed by the following recursive formulas, in which m_i is the number of squarings required in the i-th iteration of
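
A minimal Python rendering of this repeated-squaring idea (our own sketch, not the article's original pseudocode listing): starting from y in [1, 2), square y until it reaches [2, 4); if that takes m squarings, the current term contributes a factor 2^(−m), and y/2 is fed back in for the next term.

```python
import math

def log2_fractional(y: float, terms: int = 30) -> float:
    """Fractional part of log2(y) for 1 <= y < 2, via repeated squaring."""
    result = 0.0
    weight = 1.0
    for _ in range(terms):
        if y == 1.0:
            break                # exact result: all remaining terms are zero
        m = 0
        while y < 2.0:
            y = y * y            # squaring doubles log2(y)
            m += 1
        weight *= 2.0 ** -m      # this term's coefficient is 2**(-m1 - m2 - ... - mi)
        result += weight
        y = y / 2.0              # scale back into [1, 2) for the next term
    return result

print(log2_fractional(1.25), math.log2(1.25))  # both approximately 0.3219281
```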

The series is truncated after the i-th term, then the error in the result is less than 2^(−(m₁ + m₂ + ⋯ + m_i)). The log2 function is included in the standard C mathematical functions. The default version of this function takes double precision arguments but variants of it allow the argument to be single-precision or to be a long double. In computing environments supporting complex numbers and implicit type conversion such as MATLAB

The special case where the fractional part in step 1 is found to be zero, this is a finite sequence terminating at some point. Otherwise, it is an infinite series that converges according to the ratio test, since each term is strictly less than the previous one (since every m_i > 0). For practical use, this infinite series must be truncated to reach an approximate result. If

The tones differ. Binary logarithms can be used to calculate the length of the representation of a number in the binary numeral system, or the number of bits needed to encode a message in information theory. In computer science, they count the number of steps needed for binary search and related algorithms. Other areas in which the binary logarithm is frequently used include combinatorics, bioinformatics,

The two states −1 and +1) as the sign s_X. Mathematically, this is equivalent to taking the inverse (additive inverse) of a negative number, and then finding the SLI image for the inverse. Using one bit for the sign enables the representation of negative numbers. The mapping function is called the generalized logarithm function. It is defined recursively, leaving X unchanged for 0 ≤ X ≤ 1 and, for X > 1, taking the value 1 plus the generalized logarithm of ln X; it maps [0, ∞) onto itself monotonically and so it
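
A short Python sketch of this generalized logarithm and its connection to the iterated logarithm (our own illustration; the function name is an assumption):

```python
import math

def generalized_log(x: float) -> float:
    """Generalized logarithm: x itself for 0 <= x <= 1, else 1 + generalized_log(ln x)."""
    level = 0
    while x > 1.0:
        x = math.log(x)   # take the natural logarithm at each level
        level += 1
    return level + x      # level plus the final index in [0, 1]

print(generalized_log(0.5))        # 0.5, already in [0, 1]
print(generalized_log(1234567.0))  # roughly 3.97: level 3 plus an index near 0.97
```

The integer part of this value counts how many logarithms were applied before the argument dropped to 1 or below, which is essentially the natural-base iterated logarithm of x.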

Was active. The binary logarithm has also been written as log n with a prior statement that the default base for the logarithm is 2. Another notation that is often used for the same function (especially in the German scientific literature) is ld n, from Latin logarithmus dualis or logarithmus dyadis. The DIN 1302, ISO 31-11 and ISO 80000-2 standards recommend yet another notation, lb n. According to these standards, lg n should not be used for
