Recursion

[Image: Sierpinski triangle]

In mathematics and computer science, recursion specifies (or constructs) a class of objects or methods (or an object from a certain class) by defining a few very simple base cases (often just one), and then defining rules that break complex cases down into simpler cases.

For example, the following is a recursive definition of a person's ancestors:

  • One's parents are one's ancestors (base case);
  • The parents of any ancestor are also ancestors of the person under consideration (recursion step).

It is convenient to think of a recursive definition as defining objects in terms of "previously defined" objects of the class being defined.

Definitions such as these are often found in mathematics. For example, the formal definition of natural numbers is: 0 is a natural number, and each natural number has a successor, which is also a natural number.


Recursion in mathematics

Recursively defined sets

  • Example: the natural numbers

The canonical example of a recursively defined set is given by the natural numbers:

0 is in N
if n is in N, then n + 1 is in N
The set of natural numbers is the smallest set satisfying the previous two properties.
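
The two clauses above can be mirrored directly in code. The following is a minimal Python sketch, assuming natural numbers are represented as nested tuples; the names Zero, Succ and to_int are purely illustrative:

Zero = ()                        # base case: 0 is in N

def Succ(n):
    return (n,)                  # recursion step: if n is in N, then n + 1 is in N

def to_int(n):
    # Recover an ordinary integer by counting the nesting depth.
    return 0 if n == Zero else 1 + to_int(n[0])

three = Succ(Succ(Succ(Zero)))
print(to_int(three))             # prints 3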

Here's an alternative recursive definition of N:

0, 1 are in N;
if n and n + 1 are in N, then n + 2 is in N;
N is the smallest set satisfying the previous two properties.
  • Example: The set of true reachable propositions

Another interesting example is the set of all true "reachable" propositions in an axiomatic system.

  • If a proposition is an axiom, it is a true reachable proposition.
  • If a proposition can be obtained from true reachable propositions by means of inference rules, it is a true reachable proposition.
  • The set of true reachable propositions is the smallest set of reachable propositions satisfying these conditions.

This set is called 'true reachable propositions' (rather than simply 'true propositions') because, in non-constructive approaches to the foundations of mathematics, the set of true propositions may be larger than the set recursively constructed from the axioms and rules of inference. See also Gödel's incompleteness theorems.

(Note that determining whether a given object belongs to a recursively defined set is not, in general, an algorithmic task.)

Functional Recursion

A function may be partly defined in terms of itself. A familiar example is the Fibonacci sequence: F(n) = F(n-1) + F(n-2). For such a definition to be useful, it must lead to values which are non-recursively defined, in this case F(0) = 0 and F(1) = 1.
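
A direct transcription of this definition into code might look like the following minimal Python sketch (the function name fib is illustrative):

def fib(n):
    # Base cases: F(0) = 0 and F(1) = 1.
    if n < 2:
        return n
    # Recursive step: F(n) = F(n-1) + F(n-2).
    return fib(n - 1) + fib(n - 2)

Note that this direct transcription recomputes the same values many times and takes time exponential in n, which is why such definitions are often rewritten iteratively or memoized in practice.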

A famous recursive function is the Ackermann function, which, unlike the Fibonacci sequence, is rather difficult to express without recursion.
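
A minimal Python sketch of the common two-argument (Ackermann–Péter) form, in the same style:

def ackermann(m, n):
    # Base case.
    if m == 0:
        return n + 1
    # Recursive cases: the function recurses on its own results.
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))   # prints 9

The values explode very quickly as m grows, so only small inputs are feasible to compute this way.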

Recursive Proofs

The standard way to define new systems of mathematics or logic is to define objects (such as "true" and "false", or "all natural numbers"), then define operations on these. These are the base cases. After this, all valid computations in the system are defined with rules for assembling these. In this way, if the base cases and rules are all proven to be calculable, then any formula in the mathematical system will also be calculable.

This sounds unexciting, but this type of proof is the normal way to prove that a calculation is impossible, which can often save a lot of time. For example, proofs of this kind were used to show that squaring the circle is impossible and that no arbitrary angle can be trisected with compass and straightedge -- both puzzles that fascinated the ancients.

Recursion in language

The use of recursion in linguistics, and the use of recursion in general, dates back to the ancient Indian linguist Pāṇini in the 5th century BC, who made use of recursion in his grammar rules of Sanskrit.

Linguist Noam Chomsky produced evidence that unlimited extension of a language such as English is possible only by the recursive device of embedding sentences in sentences. Thus, a talky little girl may say, "Dorothy, who met the Wicked Witch of the West in Munchkin Land where her Wicked Witch sister was killed, liquidated her with a pail of water." Clearly, two simple sentences — "Dorothy met the Wicked Witch of the West in Munchkin Land" and "Her sister was killed in Munchkin Land" — can be embedded in a third sentence, "Dorothy liquidated her with a pail of water," to obtain a very talky sentence.

Niels K. Jerne, the 1984 Nobel Prize laureate in Medicine and Physiology, used Chomsky's transformational-generative grammar model to explain the human immune system, equating "components of a generative grammar ... with various features of protein structures." The title of Jerne's Stockholm Nobel lecture was The Generative Grammar of the Immune System.

Here is another, perhaps simpler way to understand recursive processes:

  1. Are we done yet? If so, return the results. Without such a termination condition, a recursion would go on forever.
  2. If not, simplify the problem into one or more simpler subproblems, solve those, and assemble the results into a solution for the original problem. Then return that solution.
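
As a minimal Python sketch of this two-step pattern (summing a list is chosen purely for illustration, as is the name total):

def total(numbers):
    # Step 1: are we done yet? An empty list sums to 0 (termination condition).
    if not numbers:
        return 0
    # Step 2: simplify the problem (drop the first element), solve the smaller
    # problem, and assemble the result into a solution for the original problem.
    return numbers[0] + total(numbers[1:])

print(total([1, 2, 3, 4]))   # prints 10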

A more humorous illustration goes: "In order to understand recursion, one must first understand recursion." Or perhaps more accurate is the following due to Andrew Plotkin: "If you already know what recursion is, just remember the answer. Otherwise, find someone who is standing closer to Douglas Hofstadter than you are; then ask him or her what recursion is."

Examples of mathematical objects often defined recursively are functions, sets, and especially fractals.

Recursion in plain English

Recursion is the process a procedure goes through when one of the steps of the procedure involves rerunning the entire procedure. A procedure that goes through recursion is said to be recursive. Something is also said to be recursive when it is the result of a recursive procedure.

To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps that are to be taken based on a set of rules. The running of a procedure involves actually following the rules and performing the steps. An analogy might be that a procedure is like a menu in that it is the possible steps, while running a procedure is actually choosing the courses for the meal from the menu.

A procedure is recursive if one of the steps that makes up the procedure calls for a new running of the procedure. A recursive four-course meal would therefore be a meal in which one of the choices of appetizer, salad, entrée, or dessert was an entire meal unto itself. So a recursive meal might be potato skins, baby greens salad, chicken parmesan, and, for dessert, another four-course meal consisting of crab cakes, Caesar salad, a further four-course meal as the entrée, and chocolate cake for dessert, and so on until each of the meals within the meals is completed.

A recursive procedure must complete every one of its steps. Even if a new running is called for in one of its steps, each running must still work through the remaining steps. What this means is that even if the salad is an entire four-course meal unto itself, you still have to eat your entrée and dessert.

Recursive humour

A common geeky joke is the following "definition" of recursion.

Recursion
See "Recursion".

This is a parody of cross-references in dictionaries, which in some careless cases may lead to circular definitions. Every joke has an element of wisdom, but also an element of misunderstanding. This one is also the second-shortest possible example of an erroneous recursive definition of an object, the error being the absence of a termination condition (or, viewed from the opposite direction, the lack of an initial state). Newcomers to recursion are often bewildered by its apparent circularity, until they learn to appreciate that a termination condition is key.

Other examples are recursive acronyms, such as GNU, PHP or TTP (Dilbert; "The TTP Project").

Recursion in computer science


A common method of simplification is to divide a problem into subproblems of the same type. As a computer programming technique, this is called divide and conquer and is key to the design of many important algorithms, as well as being a fundamental part of dynamic programming.
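
Merge sort is a standard instance of this pattern. The following minimal Python sketch (the names merge_sort and items are illustrative) splits the input into two subproblems of the same type, solves each recursively, and then combines the results:

def merge_sort(items):
    # Base case: a list of zero or one element is already sorted.
    if len(items) <= 1:
        return items
    # Divide: split the problem into two subproblems of the same type.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Combine: merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3]))   # prints [1, 2, 3, 4, 5, 7]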

Recursion in computer programming is exemplified when a function is defined in terms of itself. One example application of recursion is in parsers for programming languages. The great advantage of recursion is that an infinite set of possible sentences, designs or other data can be defined, parsed or produced by a finite computer program.
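
As a rough illustration of recursion in parsing, here is a minimal Python sketch of a recursive-descent parser for a toy arithmetic grammar; the grammar, the token format, and the names parse_expr, parse_term and parse_factor are all assumptions made for this example:

# Grammar: expr := term ('+' term)* ; term := factor ('*' factor)* ;
#          factor := NUMBER | '(' expr ')'
# The mutually recursive functions mirror the recursive structure of the grammar.

def parse_expr(tokens, pos=0):
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == '+':
        rhs, pos = parse_term(tokens, pos + 1)
        value += rhs
    return value, pos

def parse_term(tokens, pos):
    value, pos = parse_factor(tokens, pos)
    while pos < len(tokens) and tokens[pos] == '*':
        rhs, pos = parse_factor(tokens, pos + 1)
        value *= rhs
    return value, pos

def parse_factor(tokens, pos):
    if tokens[pos] == '(':
        value, pos = parse_expr(tokens, pos + 1)
        return value, pos + 1            # skip the closing ')'
    return int(tokens[pos]), pos + 1     # a literal number

print(parse_expr(['2', '*', '(', '3', '+', '4', ')'])[0])   # prints 14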

Recurrence relations are equations to define one or more sequences recursively. Some specific kinds of recurrence relation can be "solved" to obtain a non-recursive definition.
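
For example, the recurrence relation <math>a_n = a_{n-1} + d</math> with initial value <math>a_0 = c</math> (an arithmetic progression) can be solved to give the non-recursive, closed-form definition <math>a_n = c + nd</math>.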

A classic example of recursion is the definition of the factorial function, given here in pseudocode:

function factorial(n) {
  // base case: 0! = 1 and 1! = 1
  if (n <= 1)
    return 1;
  else
    // recursive step: n! = n * (n-1)!
    return n * factorial(n-1);
}

The function calls itself recursively on a smaller version of the input (n - 1) and multiplies the result of the recursive call by n, until reaching the base case, analogously to the mathematical definition of factorial.
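
For example, factorial(4) expands to 4 * factorial(3), then 4 * 3 * factorial(2), then 4 * 3 * 2 * factorial(1), and finally evaluates to 4 * 3 * 2 * 1 = 24 once the base case is reached.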

The recursion theorem

In set theory, this is a theorem guaranteeing that recursively defined functions exist. Given a set <math>X</math>, an element <math>a</math> of <math>X</math> and a function <math>f: X \rightarrow X</math>, the theorem states that there is a unique function <math>F: N \rightarrow X</math> (where <math>N</math> denotes the set of natural numbers) such that

<math>F(0) = a</math>
<math>F(n + 1) = f(F(n))</math>

for any natural number <math>n</math>.
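
A minimal Python sketch of this construction, assuming the set <math>X</math> is modelled by ordinary Python values (the name define_by_recursion is illustrative):

def define_by_recursion(a, f):
    # Returns the unique F with F(0) = a and F(n + 1) = f(F(n)).
    def F(n):
        return a if n == 0 else f(F(n - 1))
    return F

# Example: with a = 1 and f(x) = 2 * x, the resulting F satisfies F(n) = 2 ** n.
F = define_by_recursion(1, lambda x: 2 * x)
print([F(n) for n in range(6)])   # prints [1, 2, 4, 8, 16, 32]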

Proof of uniqueness

Take two functions <math>g</math> and <math>h</math> with domain <math>N</math> and codomain <math>X</math> such that:

<math>g(0) = a</math>
<math>h(0) = a</math>
<math>g(n + 1) = f(g(n))</math>
<math>h(n + 1) = f(h(n))</math>

where <math>a</math> is an element of <math>X</math> and <math>f: X \rightarrow X</math> is the given function. We want to prove that <math>g = h</math>. Two functions are equal if they:

i. have equal domains and codomains;
ii. have the same graph.
i. This holds by construction: both functions have domain <math>N</math> and codomain <math>X</math>.
ii. By mathematical induction we show that for all <math>n</math> in <math>N</math>, <math>g(n) = h(n)</math> (call this condition <math>Eq(n)</math>):
1. <math>Eq(0)</math> holds, since <math>g(0) = a = h(0)</math>.
2. Let <math>n</math> be an element of <math>N</math>. Assuming that <math>Eq(n)</math> holds, we want to show that <math>Eq(n + 1)</math> holds as well, which is immediate: <math>g(n + 1) = f(g(n)) = f(h(n)) = h(n + 1)</math>.

(Here <math>N</math> is taken to include 0.)

Proof of existence

[See Hungerford, "Algebra", first chapter on set theory]




