Byte

For the computer industry magazine, see BYTE.

A byte is commonly used as a unit of storage measurement in computers, regardless of the type of data being stored. It is also one of the basic integral data types in many programming languages.

Meanings

The word "byte" has several closely related meanings:

  1. A contiguous sequence of a fixed number of bits (binary digits). In recent years, the use of a byte to mean 8 bits has become nearly ubiquitous.
  2. A contiguous sequence of bits within a binary computer that comprises the smallest addressable sub-field of the computer's natural word size; that is, the smallest unit of binary data on which meaningful computation, or natural data boundaries, can be applied. For example, the CDC 6400 and other scientific mainframes divided their 60-bit floating-point words into 10 six-bit bytes. These bytes conveniently held Hollerith data from punched cards, typically the upper-case alphabet and decimal digits. The PDP-10 used assembly instructions LDB and DPB to extract bytes; these operations survive today in Common Lisp. Bytes of six, seven, or nine bits were used on some computers, for example within the 36-bit word of the PDP-10. (The first sketch after this list illustrates this kind of variable-width byte extraction.)
  3. A contiguous sequence of bits in a serial data stream, such as in modem or satellite communications, or from a disk-drive head, which is the smallest meaningful unit of data. Such bytes might include start bits, stop bits, or parity bits, and thus could vary from 7 to 12 bits in order to carry a single 7-bit ASCII code.
  4. A datatype or synonym for a datatype in certain programming languages. C, for example, defines a byte as a unit of data storage large enough to hold any character of the execution environment (clause 3.5 of the C standard). Since the C char integral type must be at least 8 bits wide (clause 5.2.4.2.1), a byte in C can hold at least 256 distinct values, whether char is signed or unsigned. Java's primitive byte data type is always 8 bits and signed, holding values from -128 to 127. (The second sketch after this list demonstrates these properties in C.)
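
As a rough illustration of meaning 2, the following minimal C sketch extracts six-bit "bytes" from a CDC-style 60-bit word held in a 64-bit integer. The helper load_byte is hypothetical; it merely mimics the idea behind the PDP-10's LDB instruction:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper in the spirit of the PDP-10 LDB (load byte):
       extract a width-bit field starting pos bits from the low end. */
    static uint64_t load_byte(uint64_t word, unsigned pos, unsigned width) {
        return (word >> pos) & ((UINT64_C(1) << width) - 1);
    }

    int main(void) {
        uint64_t word = UINT64_C(012345670123456701234); /* a 60-bit word, written in octal */
        /* Print the ten 6-bit bytes, high-order byte first; each 6-bit byte
           corresponds to exactly two octal digits of the word. */
        for (int i = 9; i >= 0; i--)
            printf("%02llo ", (unsigned long long)load_byte(word, (unsigned)(i * 6), 6));
        printf("\n");
        return 0;
    }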
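For meaning 4, this second sketch (assuming a hosted C99 environment) prints the implementation's bits-per-byte and the range of unsigned char; on virtually all current machines it reports 8 bits:

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* sizeof measures storage in bytes, and sizeof(char) is 1 by definition. */
        printf("bits per byte (CHAR_BIT): %d\n", CHAR_BIT);               /* at least 8 */
        printf("sizeof(char):             %zu\n", sizeof(char));          /* always 1 */
        printf("unsigned char range:      0..%u\n", (unsigned)UCHAR_MAX); /* at least 255 */
        return 0;
    }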

The term "byte" came from "bite," as in the smallest amount of data a computer could "bite" at once. The spelling change not only reduced the chance of a "bite" being mistaken for a "bit," but also was consistent with the penchant of early computer scientists to make up words and change spellings.

Early microprocessors, such as Intel's 8008 (the direct predecessor of the 8080, and ultimately of the Pentium), could perform a small number of operations on four-bit quantities, such as the DAA (decimal adjust) instruction, and provided a "half carry" flag; both were used to implement decimal arithmetic routines. These four-bit quantities were called "nibbles," in homage to the then-common 8-bit "bytes."
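
Such decimal routines typically worked on packed binary-coded decimal (BCD), one decimal digit per nibble. A minimal C sketch of the idea (the variable names are illustrative, not taken from any processor manual):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* Packed BCD stores one decimal digit in each nibble of a byte,
           so the byte 0x42 encodes the decimal number 42. */
        uint8_t bcd  = 0x42;
        uint8_t high = bcd >> 4;    /* high nibble: 4 */
        uint8_t low  = bcd & 0x0F;  /* low nibble:  2 */
        printf("BCD 0x%02X encodes decimal %u%u\n",
               (unsigned)bcd, (unsigned)high, (unsigned)low);
        return 0;
    }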

History

The term byte was coined by Werner Buchholz in 1957 during the early design phase for the IBM Stretch computer. Originally it was defined in instructions by a 4-bit byte-size field, allowing byte lengths from one to sixteen bits; typical I/O equipment of the period used six-bit units. A fixed eight-bit byte size was later adopted and promulgated as a standard by the IBM System/360. The spelling comes from mutating the word bite so it would not be accidentally misspelled as bit.

Alternate words

The eight-bit byte is often called an octet in formal contexts such as industry standards, as well as in networking and telecommunication, in order to avoid any confusion about the number of bits involved. However, 8-bit bytes are now firmly embedded in such common standards as Ethernet and HTML. Octet is also the word used for the eight-bit quantity in many non-English languages, where the pun on bite does not translate.

Half of an eight-bit byte (four bits) is sometimes called a nibble (sometimes spelled nybble) or a hex digit. The nibble is often called a semioctet in a networking or telecommunication context and also by some standards organizations.

Abbreviation/Symbol

IEEE 1541 and the Metric Interchange Format specify "B" as the symbol for byte (e.g. MB means megabyte), whilst IEC 60027 seems silent on the subject.

IEEE 1541 specifies "b" as the symbol for bit; however, IEC 60027 and the Metric Interchange Format specify "bit" as the symbol (e.g. Mbit for megabit), achieving maximum disambiguation from byte.

French-speaking countries sometimes use "o" as the symbol for "octet" (e.g. Mo for megaoctet). This is not acceptable within SI because of the risk of confusion with the digit zero.

Names for larger units

Note: the names "kilobyte", "megabyte", etc. may refer to either the SI multipliers (powers of 1000) or the binary multipliers (powers of 1024). For further discussion, see Binary prefix.
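
The ambiguity is easy to quantify. A minimal C sketch comparing the two conventions for "megabyte":

    #include <stdio.h>

    int main(void) {
        /* SI (decimal) convention: 1 MB = 1000 * 1000 bytes. */
        long si_megabyte     = 1000L * 1000L;
        /* Binary convention: 1 MB (strictly, 1 MiB) = 1024 * 1024 bytes. */
        long binary_megabyte = 1024L * 1024L;
        printf("SI megabyte:     %ld bytes\n", si_megabyte);                   /* 1000000 */
        printf("binary megabyte: %ld bytes\n", binary_megabyte);               /* 1048576 */
        printf("difference:      %ld bytes\n", binary_megabyte - si_megabyte); /* 48576 */
        return 0;
    }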