Addition
Addition is the basic operation of arithmetic. In its simplest form, addition combines two numbers, the addends or terms, into a single number, the sum.
Adding more than two numbers can be viewed as repeated addition; this procedure is known as summation and includes ways to add infinitely many numbers in an infinite series. Repeated addition of the number one is the most basic form of counting.
Addition is also defined for mathematical objects other than numbers — for example, matrices or polynomials, often by analogy.
Notation and terminology
Addition is written using the plus sign "+" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example,
- 1 + 1 = 2
- 2 + 2 = 4
- 5 + 4 + 2 = 11 (see "associativity" below)
- 3 + 3 + 3 + 3 = 12 (see "multiplication" below)
There are also situations where addition is "understood" even though no symbol appears:
- A column of numbers, with the last number in the column underlined, usually indicates that the numbers in the column are to be added, with the sum written below the underlined number.
- A whole number followed immediately by a fraction indicates the sum of the two, called a mixed number.Template:Ref For example,
3½ = 3 + ½ = 3.5.
This notation can cause confusion, since in most other contexts, juxtaposition denotes multiplication instead.
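As a minimal illustration, the following Python sketch (using the standard fractions module; the variable name is arbitrary) confirms that the mixed-number notation denotes a sum rather than a product:

```python
from fractions import Fraction

# The mixed number 3½ denotes a sum, not a product:
mixed = 3 + Fraction(1, 2)
print(mixed)         # 7/2
print(float(mixed))  # 3.5
```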
The numbers or the objects to be added are generally called the "terms", the "addends", or the "summands"; this terminology carries over to the summation of multiple terms. This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend. In fact, during the Renaissance, many authors did not consider the first addend an "addend" at all. Today, due to the symmetry of addition, "augend" is rarely used, and both terms are generally called addends.Template:Ref
All of this terminology derives from Latin. "Addition" and "add" are English words derived from the Latin verb addere, which is in turn a compound of ad "to" and dare "to give", from the Indo-European root do- "to give"; thus to add is to give to.Template:Ref Using the gerundive suffix -nd results in "addend", "thing to be added".Template:Ref Likewise from augere "to increase", one gets "augend", "thing to be increased".
"Sum" and "summand" derive from the Latin noun summa "the highest, the top" and associated verb summare. This is appropriate not only because the sum of two positive numbers is greater than either, but because it was once common to add upward, contrary to the modern practice of adding downward, so that a sum was literally higher than the addends.Template:Ref Addere and summare date back at least to Boethius, if not to earlier Roman writers such as Vitruvius and Frontinus; Boethius also used several other terms for the addition operation. The later Middle English terms "adden" and "adding" were popularized by Chaucer.Template:Ref
Interpretations
Addition is used to model countless physical processes. Even for the simple case of adding natural numbers, there are many possible interpretations and even more visual representations.
Combining sets
Possibly the most fundamental interpretation of addition lies in combining sets:
- When two or more collections are combined into a single collection, the number of objects in the single collection is the sum of the number of objects in the original collections.
This interpretation is easy to visualize, with little danger of ambiguity. However, it is not obvious how one should extend this version of addition to include fractional numbers or negative numbers; making sense of sets with "fractional cardinality" requires considerable sophistication.
One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods.Template:Ref Rather than just combining collections of segments, rods can be joined end-to-end.
Extending a measure
- When an original measure is extended by a given amount, the final measure is the sum of the original measure and the measure of the extension.
Under this interpretation, the parts of a sum a + b play asymmetric roles; instead of calling both a and b addends, it is more appropriate to call a the augend, since a plays a passive role. In geometry, a might be a point and b a vector; their sum is then another point, the translation of a by b. In analytic geometry, a and b might both be represented by ordered pairs of numbers, but they remain conceptually different.Template:Ref
Here, the addition operation is not so much a binary operation as a family of unary operations; the function (+b) is acting on a.Template:Ref The unary and binary views are formally equivalent: if X is the set of all possible augends and Y is the set of all possible addends, there is a natural identification of sets of functions
- <math>X^{X\times Y}\cong \left(X^X\right)^Y.</math>Template:Ref
This formula is a special case of a law of exponentiation that may be more familiar for numbers.
The unary view is useful, for example, when discussing subtraction. Addition and subtraction are not inverses as binary operations, but they are inverses as families of unary operations.
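For concreteness, here is a small Python sketch of this identification, often called currying; the function names add and curry are illustrative choices, and ordinary integers stand in for the sets X and Y:

```python
# A sketch of the identification X^(X×Y) ≅ (X^X)^Y ("currying"),
# with ordinary integers standing in for both X and Y.

def add(a, b):              # an element of X^(X×Y): a binary operation
    return a + b

def curry(f):               # send f : X×Y → X  to  F : Y → (X → X)
    return lambda b: (lambda a: f(a, b))

plus_three = curry(add)(3)  # the unary operation "+3", acting on augends
print(plus_three(5))        # 8: the translation of 5 by 3
```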
Combining translations
- When two motions are performed in succession, the measure of the resulting motion is the sum of the measures of the original motions.
Properties
Commutativity
Addition is commutative: the two terms in a sum can be exchanged without changing the result. Symbolically, if a and b are any two numbers, then
- a + b = b + a.
The fact that addition is commutative is known as the "commutative law of addition". This phrase suggests that there are other commutative laws: for example, there is a commutative law of multiplication. However, many binary operations are not commutative, such as subtraction and division, so it is misleading to speak of an unqualified "commutative law".
Associativity
A somewhat subtler property of addition is associativity, which comes up when one tries to define repeated addition. Should the expression
- "a + b + c"
be defined to mean (a + b) + c or a + (b + c)? That addition is associative tells us that the choice of definition is irrelevant. For any three numbers a, b, and c, it is true that
- (a + b) + c = a + (b + c).
Not all operations are associative, so in expressions with operations other than addition, it is important to specify the order of operations.
Zero and one
Adding zero to any number leaves the number unchanged; zero is the identity element for addition, also known as the additive identity. In symbols, for any a,
- a + 0 = 0 + a = a.
This law was first identified in Brahmagupta's Brahmasphutasiddhanta in 628, although he wrote it as three separate laws, depending on whether a is negative, positive, or zero itself, and he preferred words to algebraic symbols. Later Indian mathematicians refined the concept; around the year 830, Mahavira wrote, "zero becomes the same as what is added to it", corresponding to the unary statement 0 + a = a. In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same", corresponding to the unary statement a + 0 = a.Template:Ref
In the context of integers, addition of one also plays a special role: for any integer a, the integer (a + 1) is the least integer greater than a, also known as the successor of a.
Units
To add certain kinds of quantities numerically, such as vulgar fractions or physical quantities with units, they must first be expressed in terms of a common denominator or a common unit. For example, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is another name for 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental in dimensional analysis.
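A minimal Python sketch of adding measures with mixed units, converting to a common unit (inches) before adding; the function name is an arbitrary choice for illustration:

```python
# Converting to a common unit before adding, as in the 5 feet + 2 inches example.
INCHES_PER_FOOT = 12

def add_lengths_in_inches(feet, inches):
    # express both measures in inches, then add
    return feet * INCHES_PER_FOOT + inches

print(add_lengths_in_inches(5, 2))  # 62
```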
Performing addition
Innate ability
Studies on mathematical development starting around the 1980s have exploited the phenomenon of habituation: infants look longer at situations that are unexpected.Template:Ref A seminal experiment by Karen Wynn in 1992 involving Mickey Mouse dolls manipulated behind a screen demonstrated that five-month-old infants expect 1 + 1 to be 2, and they are comparatively surprised when a physical situation seems to imply that 1 + 1 is either 1 or 3. This finding has since been affirmed by a variety of laboratories using different methodologies.Template:Ref Another 1992 experiment with older toddlers, between 18 and 35 months, exploited their development of motor control by allowing them to retrieve ping-pong balls from a box; the youngest responded well for small numbers, while older subjects were able to compute sums up to 5.Template:Ref
Even some nonhuman animals show a limited ability to add, particularly primates. In a 1995 experiment imitating Wynn's 1992 result (but using eggplants instead of dolls), rhesus macaques and cottontop tamarins performed similarly to human infants. More dramatically, after being taught the meanings of the Arabic numerals 0 through 4, one chimpanzee was able to compute the sum of two numerals without further training.Template:Ref
Elementary methods
Typically children master the art of counting first, and this skill extends into a form of addition called "counting-on"; asked to find three plus two, children count two past three, saying "four, five", and arrive at five. This strategy seems almost universal; children can easily pick it up from peers or teachers, and some even invent it independently.Template:Ref Those who count to add also quickly learn to exploit the commutativity of addition by counting up from the larger number.
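A rough Python sketch of the counting-on strategy, including counting up from the larger addend; the function name count_on is only illustrative:

```python
# A sketch of the "counting-on" strategy for natural numbers.
def count_on(a, b):
    # exploit commutativity: count up from the larger addend
    start, steps = (a, b) if a >= b else (b, a)
    total = start
    for _ in range(steps):
        total += 1          # say the next counting word
    return total

print(count_on(3, 2))       # counts "four, five" and returns 5
```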
Decimal system
The prerequisite to addition in the decimal system is the internalization of the 100 single-digit "addition facts". Conceivably one could memorize all the facts, but many strategies besides rote learning are more enlightening and, for most people, more efficient:Template:Ref
- One or two more: Adding 1 or 2 is a basic task, and it can be accomplished through counting on or, ultimately, intuition.
- Zero: Since zero is the additive identity, adding zero is trivial. Nonetheless, some children are introduced to addition as a process that always increases the addends; word problems may help rationalize the "exception" of zero.
- Doubles: Adding a number to itself is related to counting by two and to multiplication; doubles facts form a backbone for many related facts, and fortunately, children find them relatively easy to grasp.
- Near-doubles: Sums such as 6 + 7 can be found by adjusting the neighboring double 6 + 6 by one.
- Five and ten...
- Making ten: An advanced strategy uses 10 as an intermediate for sums involving 8 or 9; for example, 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14.
To add multidigit numbers, one typically aligns the addends vertically and adds the columns, starting from the ones column on the right. If the sum of a column is ten or more, the extra digit is "carried" into the next column.Template:Ref For a more detailed description of this algorithm, see Elementary arithmetic: Addition. An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum.
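A sketch of the right-to-left column algorithm in Python, operating on lists of decimal digits; the digit-list representation and function name are illustrative assumptions, not a standard interface:

```python
# A sketch of the schoolbook column-addition algorithm on lists of decimal
# digits stored least-significant first (so 256 is [6, 5, 2]).
def add_digits(x, y):
    result, carry = [], 0
    for i in range(max(len(x), len(y))):
        column = carry
        column += x[i] if i < len(x) else 0
        column += y[i] if i < len(y) else 0
        result.append(column % 10)   # digit written below the column
        carry = column // 10         # digit carried into the next column
    if carry:
        result.append(carry)
    return result

print(add_digits([6, 5, 2], [7, 8]))  # [3, 4, 3], i.e. 256 + 87 = 343
```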
Computers
Analog computers work directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with an averaging lever. If the addends are the rotation speeds of two shafts, they can be added with a differential. A hydraulic adder might need to add the pressures in two chambers, which can be done by balancing forces on an assembly of pistons via Newton's second law. The most common situation for a general-purpose analog computer is to add two voltages (referenced to ground); this can be accomplished roughly with a resistor network, but a better design exploits an operational amplifier.Template:Ref
Addition is not tremendously important to analog computers, whose essential function is integration.Template:Ref By contrast, addition is fundamental to the operation of digital computers. For digital computers, the efficiency of addition, in particular the carry mechanism, is an important limitation to overall performance.
In fact, addition was not only a tool but also a basic goal for the earliest automatic, digital computers, and well into the 20th century, mechanical calculators were still called "adding machines". Wilhelm Schickard's 1623 Calculating Clock could add and subtract, but it was severely limited by an awkward carry mechanism. As he wrote to Kepler describing the novel device, "You would burst out laughing if you were present to see how it carries by itself from one column of tens to the next..." Adding 999,999 and 1 on Schickard's machine would require enough force to propagate the carries that the gears might be damaged, so he limited his machines to six digits, even though Kepler required more. By 1642 Blaise Pascal independently developed an adding machine with an ingenious gravity-assisted carry mechanism. Pascal's calculator was limited by its carry mechanism in a different sense: its wheels turned only one way, so it could add but not subtract, except by the method of complements. By 1674 Gottfried Leibniz made the first mechanical multiplier; it was still powered, but not motivated, by addition.Template:Ref
Adders execute integer addition in electronic digital computers, usually using binary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm taught to children. One slight improvement is the carry skip design, again following human intuition; one does not perform all the carries in computing 999 + 1, but one bypasses the group of 9s and skips to the answer. This old method predates electronic computing; it was known to Charles Babbage as the "anticipating carriage".Template:Ref
Since they compute digits one at a time, the above methods are too slow for most modern purposes. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all the floating-point operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Almost all modern implementations are, in fact, hybrids of these last three designs.Template:Ref
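For concreteness, here is a rough Python model of the simplest of these architectures, the ripple-carry adder, built from full adders acting on lists of bits; the function names are illustrative, and real adders are of course implemented in hardware, not software:

```python
# A rough model of a ripple-carry adder built from full adders,
# operating on lists of bits stored least-significant first.
def full_adder(a, b, carry_in):
    total = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return total, carry_out

def ripple_carry_add(x, y):
    result, carry = [], 0
    for a, b in zip(x, y):           # the carry ripples one position at a time
        bit, carry = full_adder(a, b, carry)
        result.append(bit)
    result.append(carry)
    return result

# 3 (binary 011) + 1 (binary 001) = 4 (binary 100)
print(ripple_carry_add([1, 1, 0], [1, 0, 0]))  # [0, 0, 1, 0]
```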
Unlike addition on paper, addition on a computer often changes the addends. On the ancient abacus and adding board, both addends are destroyed, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish.Template:Ref In modern times, the ADD instruction of a microprocessor replaces the augend with the sum but preserves the addend.Template:Ref In a high-level programming language, evaluating a + b does not change either a or b; to change the value of a one uses the addition assignment operator a += b.
Definitions and proofs for the real numbers
In order to prove the usual properties of addition, one must first define addition for the context in question. Addition is first defined on the natural numbers. In set theory, addition is then extended to larger sets that include the natural numbers: the integers, the rational numbers, and the real numbers.Template:Ref (In mathematics education,Template:Ref positive fractions are added before negative numbers are even considered; this is also the historical route.Template:Ref)
Naturals
There are two popular ways to define the sum of two natural numbers a and b. If one defines natural numbers to be the cardinalities of finite sets, then it is appropriate to define their sum as follows:
- Let N(S) be the cardinality of a set S. Take two disjoint sets A and B, with N(A) = a and N(B) = b. Then a + b is defined as N(A ∪ B).Template:Ref
Here, A ∪ B is the union of A and B. An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism which allows any common elements to be separated out and therefore counted twice.
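A tiny Python illustration of the set-theoretic definition, assuming two disjoint finite sets (the element names are arbitrary):

```python
# Addition as the cardinality of a union of disjoint sets: N(A ∪ B) = N(A) + N(B).
A = {"a1", "a2"}            # N(A) = 2
B = {"b1", "b2", "b3"}      # N(B) = 3
assert A.isdisjoint(B)
print(len(A | B))           # 5 = 2 + 3
```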
The other popular definition is recursive:
- Let n+ be the successor of n. Define a + 0 = a. Define the general sum recursively by a + (b+) = (a + b)+.Template:Ref
Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the Recursion Theorem on the poset N².Template:Ref On the other hand, some sources prefer to state a restricted Recursion Theorem that applies only to the natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a + ", and pastes these unary operations for all a together to form the full binary operation.Template:Ref
This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades.Template:Ref He proved the associative and commutative properties, among others, through mathematical induction; for examples of such inductive proofs, see Addition of natural numbers.
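A minimal Python rendering of the recursive definition, with ordinary nonnegative integers standing in for the natural numbers and an explicit successor function; this is a sketch of the definition, not an efficient algorithm:

```python
# The recursive definition a + 0 = a, a + (b+) = (a + b)+,
# with ordinary nonnegative ints standing in for the natural numbers.
def succ(n):
    return n + 1

def add(a, b):
    if b == 0:
        return a                    # a + 0 = a
    return succ(add(a, b - 1))      # a + (b+) = (a + b)+

print(add(2, 3))  # 5
```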
Integers
The simplest conception of an integer is that it consists of an absolute value (which is a natural number) and a sign (generally either positive or negative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases:
- For an integer n, let |n| be its absolute value. Let a and b be integers. If either a or b is zero, treat it as an identity. If a and b are both positive, define a + b = |a| + |b|. If a and b are both negative, define a + b = −(|a|+|b|). If a and b have different signs, define a + b to be the difference between |a| and |b|, with the sign of the term whose absolute value is larger.Template:Ref
Although this definition can be useful for concrete problems, it is far too complicated to produce elegant general proofs; there are too many cases to consider.
A much more convenient conception of the integers is the Grothendieck group construction. The essential observation is that every integer can be expressed (not uniquely) as the difference of two natural numbers, so we may as well define an integer as the difference of two natural numbers. Addition is then defined to be compatible with subtraction:
- Given two integers a − b and c − d, where a, b, c, and d are natural numbers, define (a − b) + (c − d) = (a + c) − (b + d).Template:Ref
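A small Python sketch of this construction, representing the integer a − b by the pair (a, b) of naturals; the helper names are illustrative:

```python
# Integers as formal differences of naturals: the pair (a, b) stands for a − b,
# and (a − b) + (c − d) = (a + c) − (b + d).
def add_pairs(p, q):
    a, b = p
    c, d = q
    return (a + c, b + d)

def to_int(p):
    a, b = p
    return a - b                    # collapse the pair to an ordinary integer

p = (1, 4)                          # represents −3
q = (7, 2)                          # represents +5
print(to_int(add_pairs(p, q)))      # 2
```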
Rationals
Addition of rational numbers can be computed using the least common denominator, but a conceptually simpler definition involves only integer addition and multiplication:
- Define <math>\frac ab + \frac cd = \frac{ad+bc}{bd}.</math>
The commutativity and associativity of rational addition are easy consequences of the laws of integer arithmetic.Template:Ref For a more rigorous and general discussion, see field of fractions.
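A short Python sketch of this definition using only integer arithmetic; the reduction by gcd at the end is a cosmetic convenience, not part of the definition:

```python
# a/b + c/d = (ad + bc)/(bd), using only integer addition and multiplication;
# the final reduction by gcd is cosmetic, not part of the definition.
from math import gcd

def add_fractions(a, b, c, d):
    num, den = a * d + b * c, b * d
    g = gcd(num, den)
    return num // g, den // g

print(add_fractions(1, 2, 1, 3))    # (5, 6), i.e. 1/2 + 1/3 = 5/6
```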
Reals
A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be a Dedekind cut of rationals: a non-empty set of rationals that is closed downward and has no greatest element. The sum of real numbers a and b is defined element by element:
- Define <math>a+b = \{q+r | q\in a, r\in b\}.</math>Template:Ref
This definition was first published, in a slightly modified form, by Richard Dedekind in 1872.Template:Ref The commutativity and associativity of real addition are immediate; defining the real number 0 to be the set of negative rationals, it is easily seen to be the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses.Template:Ref
Unfortunately, dealing with multiplication of Dedekind cuts is a case-by-case nightmare similar to the addition of signed integers. Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of a Cauchy sequence of rationals, lim aₙ. Addition is defined term by term:
- Define <math>\lim_na_n+\lim_nb_n = \lim_n(a_n+b_n).</math>Template:Ref
This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different.Template:Ref One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions.Template:Ref
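A rough Python sketch of termwise addition, with a Cauchy sequence modeled as a function from n to its n-th rational approximation; the approximating sequence chosen here (truncated decimal expansions of √2) is just one convenient example:

```python
# Termwise addition of Cauchy sequences, each modeled as a function
# sending n to its n-th rational approximation.
from fractions import Fraction

def sqrt2(n):
    # truncated decimal approximations of √2 form a Cauchy sequence
    return Fraction(int(2 ** 0.5 * 10 ** n), 10 ** n)

def add_sequences(a, b):
    return lambda n: a(n) + b(n)    # lim aₙ + lim bₙ = lim (aₙ + bₙ)

s = add_sequences(sqrt2, sqrt2)
print(float(s(6)))                  # 2.828426, approximating 2√2 ≈ 2.8284271
```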
Generalizations
- There are many things that can be added: numbers, vectors, matrices, spaces, shapes, sets, functions, equations, strings, chains... —Alexander Bogomolny
Real addition extends to addition operations on even larger sets, such as the set of complex numbers or a many-dimensional vector space in linear algebra.
In algebra
There are many more sets that support an operation called addition.
There are already infinitely many natural numbers, and the set of real numbers is even larger. It is also useful to study addition on smaller sets, even finite ones. In modular arithmetic, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as "exclusive or".
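A brief Python illustration of these inherited additions; the helper name add_mod is an arbitrary choice:

```python
# Inherited addition on the integers modulo 12 and modulo 2.
def add_mod(a, b, m):
    return (a + b) % m

print(add_mod(9, 5, 12))   # 2: "clock" arithmetic, as in musical set theory
print(add_mod(1, 1, 2))    # 0
print(1 ^ 1, 0 ^ 1)        # 0 1: mod-2 addition agrees with exclusive or
```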
The ideas of extending and compacting sets can be combined. In geometry, the sum of two angles is often taken to be their sum as two real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on many-dimensional tori.
A general form of addition occurs in abstract algebra, where addition may be almost any well-defined binary operation on a set. For an operation to be called "addition" in abstract algebra, it is required to be associative and commutative. Basic algebraic structures with an addition operation include commutative monoids and abelian groups.
Addition of sets
One extraordinary generalization of the addition of natural numbers is the addition of ordinal numbers. Unlike most addition operations, ordinal addition is not commutative. However, passing to the "smaller" class of cardinal numbers, we recover a commutative operation. Cardinal addition is closely related to the disjoint union of two sets. In category theory, the disjoint union is a kind of coproduct, so coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts are named to evoke their connection with addition; see Direct sum and Wedge sum.
Related operations
Arithmetic
Subtraction can be thought of as a kind of addition—that is, the addition of an additive inverse. Subtraction is itself a sort of inverse to addition, in that adding x and subtracting x are inverse functions.
Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example. On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction.Template:Ref
Multiplication can be thought of as repeated addition. If a single term x appears in a sum n times, then the sum is the product of n and x. If n is not a natural number, the product may still make sense; for example, multiplication by −1 yields the additive inverse of a number.
In the real and complex numbers, addition and multiplication can be interchanged by the exponential function:
- ea + b = ea eb.Template:Ref
This identity allows multiplication to be carried out by consulting a table of logarithms and computing addition by hand; it also enables multiplication on a slide rule. The formula is still a good first-order approximation in the broad context of Lie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associated Lie algebra.Template:Ref
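A quick numerical check in Python of how the identity lets one trade a multiplication for an addition of logarithms, which is exactly what log tables and slide rules exploit; the sample values are arbitrary:

```python
# Trading a multiplication for an addition of logarithms: xy = exp(log x + log y).
import math

x, y = 6.0, 7.0
print(math.exp(math.log(x) + math.log(y)))  # ≈ 42.0, up to rounding
```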
There are even more generalizations of multiplication than addition.Template:Ref In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity is enough to uniquely determine the multiplication operation. The distributive property also provides information about addition; by expanding the product (1 + 1)(a + b) in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general.Template:Ref
Division is an arithmetic operation remotely related to addition. Since a/b = ab⁻¹, division is right distributive over addition: (a + b) / c = a / c + b / c.Template:Ref However, division is not left distributive over addition; 1/(2 + 2) is not the same as 1/2 + 1/2.
Ordering
The maximum operation "max (a, b)" is a binary operation similar to addition. In fact, if two nonnegative numbers a and b are of different orders of magnitude, then their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example in truncating Taylor series. However, it presents a perpetual difficulty in numerical analysis, essentially since "max" is not invertible. If b is much greater than a, then a straightforward calculation of (a + b) − b can accumulate an unacceptable round-off error, perhaps even returning zero. See also Loss of significance.
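A short Python demonstration of this round-off effect in double-precision floating point; the particular magnitudes are arbitrary:

```python
# Round-off in floating point: with b much larger than a,
# computing (a + b) - b can lose a entirely.
a, b = 1.0, 1e17
print(a + b == b)      # True: a is absorbed by the much larger b
print((a + b) - b)     # 0.0 rather than 1.0
```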
The approximation becomes exact in a kind of infinite limit; if either a or b is an infinite cardinal number, their cardinal sum is exactly equal to the greater of the two.Template:Ref Accordingly, there is no subtraction operation for infinite cardinals.Template:Ref
Maximization is commutative and associative, like addition. Furthermore, since addition preserves the ordering of real numbers, addition distributes over "max" in the same way that multiplication distributes over addition:
- a + max (b, c) = max (a + b, a + c).
For these reasons, in tropical geometry one replaces multiplication with addition and addition with maximization. In this context, addition is called "tropical multiplication", maximization is called "tropical addition", and the tropical "additive identity" is negative infinity.Template:Ref Some authors prefer to replace addition with minimization; then the additive identity is positive infinity.Template:Ref
Tying these observations together, tropical addition is approximately related to regular addition through the logarithm:
- log (a + b) ≈ max (log a, log b),
which becomes more accurate as the base of the logarithm increases.Template:Ref The approximation can be made exact by extracting a constant h, named by analogy with Planck's constant from quantum mechanics,Template:Ref and taking the "classical limit" as h tends to zero:
- <math>\max(a,b) = \lim_{h\to 0}h\log(e^{a/h}+e^{b/h}).</math>
In this sense, the maximum operation is a dequantized version of addition.Template:Ref
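A quick numerical check of this limit in Python; the sample values of a, b, and h are arbitrary:

```python
# Numerical check of max(a, b) = lim_{h→0} h·log(e^(a/h) + e^(b/h)).
import math

def dequantized_max(a, b, h):
    return h * math.log(math.exp(a / h) + math.exp(b / h))

a, b = 2.0, 3.0
for h in (1.0, 0.1, 0.01):
    print(h, dequantized_max(a, b, h))   # approaches max(a, b) = 3 as h shrinks
```

For very small h the exponentials overflow in floating point; a more careful version would factor out the larger of a/h and b/h before exponentiating.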
Other ways to add
Incrementation, also known as the successor operation, is the addition of 1 to a number. In formal treatments of addition, such as the Peano axioms, the successor is an elementary operation, and addition is defined from successors through recursion.
Summation describes the addition of arbitrarily many numbers, usually more than just two. It includes the idea of the sum of a single number, which is itself, and the empty sum, which is 0. An infinite summation is known as a series.
Counting is an intuitive procedure that can be formalized as the summation of 1 over some finite domain. In everyday counting, the domain is typically a small set of physical objects; in mathematics it may be large and abstract, as it is for the prime counting function.
Integration is a kind of "summation" over a continuum, or more precisely and generally, over a differentiable manifold. Integration over a zero-dimensional manifold reduces to summation.
Linear combinations combine multiplication and summation; they are sums in which each term has a multiplier, usually a real or complex number. Linear combinations are especially useful in contexts where straightforward addition would violate some normalization rule, such as mixing of strategies in game theory or superposition of states in quantum mechanics.
Convolution is used to add two independent random variables defined by distribution functions. Its usual definition combines integration, subtraction, and multiplication. In general, convolution is useful as a kind of domain-side addition; by contrast, vector addition is a kind of range-side addition.
In literature
- In chapter 9 of Lewis Carroll's Through the Looking-Glass, the White Queen asks Alice, "And you do Addition? ... What's one and one and one and one and one and one and one and one and one and one?" Alice admits that she lost count, and the Red Queen declares, "She can't do Addition".
- In George Orwell's Nineteen Eighty-Four, the value of 2 + 2 is questioned; the State contends that if it declares 2 + 2 = 5, then it is so. See Two plus two make five for the history of this idea.
Notes
- Template:Note From Enderton (p.138): "...select two sets K and L with card K = 2 and card L = 3. Sets of fingers are handy; sets of apples are preferred by textbooks."
- Template:Note Karpinski pp.56–57, reproduced on p.104
- Template:Note Compare figures in Van de Walle pp.160–164
- Template:Note Compare Viro Figure 1 (p.2)
- Template:Note Devine et al p.263
- Template:Note Schwartzman p.19
- Template:Note Schwartzman p.19
- Template:Note "Addend" is not a Latin word; in Latin it must be further conjugated, as in numerus addendus "the number to be added".
- Template:Note Schwartzman (p.212) attributes adding upwards to the Greeks and Romans, saying it was about as common as adding downwards. On the other hand, Karpinski (p.103) writes that Leonard of Pisa "introduces the novelty of writing the sum above the addends"; it is unclear whether Karpinski is claiming this as an original invention or simply the introduction of the practice to Europe.
- Template:Note Karpinski pp.150–153
- Template:Note Adding it up (p.73) compares adding measuring rods to adding sets of cats: "For example, inches can be subdivided into parts, which are hard to tell from the wholes, except that they are shorter; whereas it is painful to cats to divide them into parts, and it seriously changes their nature."
- Template:Note Stewart makes the distinction by writing angle brackets for vectors and parentheses for points, although this notation is not widely used. See the chapter Vectors.
- Template:Note Weaver (p.62) argues for the importance of contrasting the two views, going so far as to term the version of commutativity satisfied by unary addition "pseudocommutativity".
- Template:Note Enderton (p.142, Theorem 6I) discusses this relationship in the context of cardinal arithmetic identities.
- Template:Note Kaplan pp.69–71
- Template:Note Wynn p.5
- Template:Note Wynn p.15
- Template:Note Wynn p.17
- Template:Note Wynn p.19
- Template:Note F. Smith p.130
- Template:Note Fosnot and Dolk p.99
- Template:Note The word "carry" may be inappropriate for education; Van de Walle (p.211) calls it "obsolete and conceptually misleading", preferring the word "trade".
- Template:Note Truitt and Rogers pp.1;44–49 and pp.2;77–78
- Template:Note Truitt and Rogers p.1;86
- Template:Note Williams pp.122–140
- Template:Note Flynn and Overman pp.2, 8
- Template:Note Flynn and Overman pp.1–9
- Template:Note Karpinski pp.102–103
- Template:Note The identity of the augend and addend varies with architecture. For ADD in x86 see Horowitz and Hill p.679; for ADD in 68k see p.767.
- Template:Note Enderton chapters 4 and 5, for example, follow this development.
- Template:Note California standards; see grades 2, 3, and 4.
- Template:Note Baez (p.37) explains the historical development, in "stark contrast" with the set theory presentation: "Apparently, half an apple is easier to understand than a negative apple!"
- Template:Note Begle p.49, Johnson p.120, Devine et al p.75
- Template:Note Enderton p.79
- Template:Note For a version that applies to any poset with the descending chain condition, see Bergman p.100.
- Template:Note Enderton (p.79) observes, "But we want one binary operation +, not all these little one-place functions."
- Template:Note Ferreirós p.223
- Template:Note K. Smith p.234, Sparks and Rees p.66
- Template:Note Enderton p.92
- Template:Note The verifications are carried out in Enderton p.104 and sketched for a general field of fractions over a commutative ring in Dummit and Foote p.263.
- Template:Note Enderton p.114
- Template:Note Ferreirós p.135; see section 6 of Stetigkeit und irrationale Zahlen.
- Template:Note The intuitive approach, inverting every element of a cut and taking its complement, works only for irrational numbers; see Enderton p.117 for details.
- Template:Note Textbook constructions are usually not so cavalier with the "lim" symbol; see Burrill (p.138) for a more careful, drawn-out development of addition with Cauchy sequences.
- Template:Note Ferreirós p.128
- Template:Note Burrill p.140
- Template:Note The set still must be nonempty. Dummit and Foote (p.48) discuss this criterion written multiplicatively.
- Template:Note Rudin p.178
- Template:Note Lee p.526, Proposition 20.9
- Template:Note Linderholm (p.49) observes, "By multiplication, properly speaking, a mathematician may mean practically anything. By addition he may mean a great variety of things, but not so great a variety as he will mean by 'multiplication'."
- Template:Note Dummit and Foote p.224. For this argument to work, one still must assume that addition is a group operation and that multiplication has an identity.
- Template:Note For an example of left and right distributivity, see Loday, especially p.15.
- Template:Note Enderton calls this statement the "Absorption Law of Cardinal Arithmetic"; it depends on the comparability of cardinals and therefore on the Axiom of Choice.
- Template:Note Enderton p.164
- Template:Note Mikhalkin p.1
- Template:Note Akian et al p.4
- Template:Note Mikhalkin p.2
- Template:Note Litvinov et al p.3
- Template:Note Viro p.4
References
- Education
- California State Board of Education mathematics content standards Adopted December 1997, accessed December 2005.
- Mathematical research
- Litvinov, Maslov, and Sobolevskii (1999). Idempotent mathematics and interval analysis. Reliable Computing, Kluwer.
- Viro, Oleg (2000). Dequantization of real algebraic geometry on logarithmic paper. Plenary talk at 3rd ECM, Barcelona.