UTF-8
UTF-8 (8-bit Unicode Transformation Format) is a variable-length character encoding for Unicode created by Ken Thompson and Rob Pike. It can represent any character in the Unicode standard, yet is backwards compatible with ASCII. For these reasons, it is steadily becoming the preferred encoding for e-mail, web pages, and other places where characters are stored or streamed.
UTF-8 uses one to four bytes (strictly, octets) per character, depending on the Unicode code point. Only one byte is needed to encode the 128 US-ASCII characters (Unicode range U+0000 to U+007F). Two bytes are needed for Latin letters with diacritics, for combining diacritical marks, and for the Greek, Cyrillic, Armenian, Hebrew, Arabic, Syriac and Thaana scripts (Unicode range U+0080 to U+07FF). Three bytes are needed for the rest of the Basic Multilingual Plane (which contains virtually all characters in common use). Four bytes are needed for characters in the other planes of Unicode.
Four bytes may seem like a lot for one character (code point). However, code points outside the Basic Multilingual Plane are generally very rare, and UTF-16 (the main alternative to UTF-8) also needs four bytes for these code points. Whether UTF-8 or UTF-16 is more efficient depends on the range of code points being used. The differences between encodings can, however, become negligible once conventional compression such as DEFLATE is applied. For short items of text, where conventional algorithms do not perform well and size is important, the Standard Compression Scheme for Unicode could be considered instead.
The Internet Engineering Task Force (IETF) requires all Internet protocols to identify the encoding used for character data, and to support UTF-8 as at least one of those encodings. The Internet Mail Consortium (IMC) recommends[1] that all e-mail programs be able to display and create mail using UTF-8.
Description
There are several current, slightly different definitions of UTF-8 in various standards documents:
- RFC 3629 / STD 63 (2003), which establishes UTF-8 as a standard Internet protocol element
- The Unicode Standard, Version 4.0, §3.9–§3.10 (2003)
- ISO/IEC 10646-1:2000 Annex D (2000)
They supersede the definitions given in the following obsolete works:
- ISO/IEC 10646-1:1993 Amendment 2 / Annex R (1996)
- The Unicode Standard, Version 2.0, Appendix A (1996)
- RFC 2044 (1996)
- RFC 2279 (1998)
- The Unicode Standard, Version 3.0, §2.3 (2000) plus Corrigendum #1: UTF-8 Shortest Form (2000)
- Unicode Standard Annex #27: Unicode 3.1 (2001)
They are all the same in their general mechanics, the main differences being the allowed range of code point values and the safe handling of invalid input.
The bits of a Unicode code point are divided into several groups, which are then distributed among the lower bit positions of the UTF-8 bytes.
A character whose code point is below U+0080 is encoded with a single byte that contains its code point: these correspond exactly to the 128 characters of 7-bit ASCII.
In all other cases, two to four bytes are required, and every byte of such a sequence has 1 as its uppermost bit. This prevents any of these bytes from being confused with a 7-bit ASCII character, in particular with characters whose code points are below U+0020, traditionally called control characters (for example, carriage return).
Code range (hexadecimal) | Scalar value (binary) | UTF-16 (big-endian) | UTF-8 (binary) | Notes |
---|---|---|---|---|
000000–00007F (7 bits) | 0xxxxxxx | 00000000 0xxxxxxx | 0xxxxxxx | ASCII equivalence range; the byte begins with 0 |
000080–0007FF (11 bits) | 00000zzz zxxxxxxx | 00000zzz zxxxxxxx | 110zzzzx 10xxxxxx | the first byte begins with 110, the following byte begins with 10; zzzz > 0000 |
000800–00FFFF (16 bits) | zzzzzxxx xxxxxxxx | zzzzzxxx xxxxxxxx | 1110zzzz 10zxxxxx 10xxxxxx | the first byte begins with 1110, the following bytes begin with 10; zzzzz > 00000 |
010000–10FFFF (21 bits) | 000zzzzz xxxxxxxx xxxxxxxx | 110110yy yyxxxxxx 110111xx xxxxxxxx | 11110zzz 10zzxxxx 10xxxxxx 10xxxxxx | UTF-16 requires a "surrogate pair" with yyyy = zzzzz − 1 and zzzzz > 00000; the bit pattern in UTF-8 is identical to that of the abstract Unicode code point |
For example, the character aleph (א), which is Unicode U+05D0, is encoded into UTF-8 in this way:
- It falls into the range U+0080 to U+07FF. The table shows that it is encoded using two bytes, with the bit layout 110xxxxx 10xxxxxx (writing x for every payload position).
- Hexadecimal 0x05D0 is 101 1101 0000 in binary.
- The eleven bits are placed, in order, into the positions marked "x": 11010111 10010000.
- The result is the two bytes 0xD7 0x90. That is the letter aleph in UTF-8.
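The same two-byte rule can be written as a few lines of code. The Python sketch below (the helper name encode_two_byte is invented for this example; the built-in str.encode is used only to cross-check the result) fills the eleven payload bits into the 110xxxxx 10xxxxxx pattern and reproduces 0xD7 0x90:

```python
def encode_two_byte(code_point: int) -> bytes:
    """Encode a code point in the range U+0080..U+07FF as two UTF-8 bytes."""
    assert 0x80 <= code_point <= 0x7FF
    byte1 = 0b11000000 | (code_point >> 6)            # 110 prefix + top five payload bits
    byte2 = 0b10000000 | (code_point & 0b00111111)    # 10 prefix + low six payload bits
    return bytes([byte1, byte2])

print(encode_two_byte(0x05D0).hex())     # d790
print('\u05D0'.encode('utf-8').hex())    # d790 -- matches the built-in encoder
```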
So the first 128 characters (US-ASCII) need one byte each. The next 1,920 characters need two bytes to encode; these include Latin letters with diacritics and the Greek, Cyrillic, Coptic, Armenian, Hebrew and Arabic alphabets. The rest of the BMP uses three bytes per character, and characters in the other planes require four bytes.
By continuing the pattern given above, it is possible to handle much larger code points: the original specification allowed for sequences of up to six bytes, covering the whole range U+0000 to U+7FFFFFFF (31 bits). In November 2003, however, RFC 3629 restricted UTF-8 to the range covered by the formal Unicode definition, U+0000 to U+10FFFF. Before this, only the bytes 0xC0, 0xC1, 0xFE, and 0xFF could never occur in UTF-8 encoded data. After the limit was introduced, the number of unused byte values grew to 13:
Codes (binary) | Codes (hexadecimal) | Notes |
---|---|---|
1100000x | C0, C1 | Overlong encoding: lead byte of a 2-byte sequence whose code point would be 127 or less |
11111110, 11111111 | FE, FF | Never valid: would be the lead byte of a 7- or 8-byte sequence, which was never defined |
111110xx, 1111110x | F8, F9, FA, FB, FC, FD | Restricted by RFC 3629: lead byte of a 5- or 6-byte sequence |
11110101, 1111011x | F5, F6, F7 | Restricted by RFC 3629: lead byte of a 4-byte sequence for a code point above U+10FFFF |
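The list of 13 unused byte values can be checked empirically. The following Python sketch (assuming the interpreter's built-in UTF-8 encoder) encodes every code point permitted by RFC 3629 and records which byte values actually occur:

```python
# Collect every byte value that appears in well-formed UTF-8 (RFC 3629 range).
seen = set()
for cp in range(0x110000):
    if 0xD800 <= cp <= 0xDFFF:      # surrogate code points are not encodable
        continue
    seen.update(chr(cp).encode('utf-8'))

unused = sorted(set(range(256)) - seen)
print([f'{b:02X}' for b in unused])
# ['C0', 'C1', 'F5', 'F6', 'F7', 'F8', 'F9', 'FA', 'FB', 'FC', 'FD', 'FE', 'FF']
```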
UTF-8 derivations
Java
The Java programming language, which uses UTF-16 for its internal text representation, supports a non-standard modification of UTF-8 for string serialization, called modified UTF-8. There are two differences between modified and standard UTF-8. The first is that the null character (U+0000) is encoded with two bytes instead of one, specifically as 11000000 10000000 (0xC0 0x80). This ensures that there are no embedded null bytes in the encoded string, presumably to address the concern that if the encoded string is processed in a language such as C, where a null byte signifies the end of a string, an embedded null would truncate the string.
The second difference is in the way characters outside the BMP are encoded. In standard UTF-8 these characters are encoded using the four-byte format above. In modified UTF-8 these characters are first represented as surrogate pairs (as in UTF-16), and then the surrogate pairs are encoded individually in sequence as in CESU-8. The reason for this modification is more subtle. In Java a character is 16 bits long; therefore some Unicode characters require two Java characters in order to be represented. This aspect of the language predates the supplementary planes of Unicode; however, it is important for performance as well as backwards compatibility, and is unlikely to change. The modified encoding ensures that an encoded string can be decoded one UTF-16 code unit at a time, rather than one Unicode code point at a time. Unfortunately, this also means that characters requiring four bytes in UTF-8 require six bytes in modified UTF-8.
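As an illustration only (Java's own implementation lives in java.io.DataOutputStream.writeUTF and is not reproduced here), the following Python sketch applies the two modifications described above; the function name encode_modified_utf8 is invented for this example:

```python
def encode_modified_utf8(text: str) -> bytes:
    """Rough sketch of Java's modified UTF-8, for illustration only."""
    out = bytearray()
    for ch in text:
        cp = ord(ch)
        if cp == 0:
            out += b'\xc0\x80'                 # difference 1: U+0000 as a two-byte form
        elif cp <= 0xFFFF:
            out += ch.encode('utf-8')          # BMP characters use standard UTF-8
        else:
            # Difference 2: split into a UTF-16 surrogate pair, then encode each
            # surrogate as its own three-byte sequence (as in CESU-8).
            cp -= 0x10000
            for surrogate in (0xD800 | (cp >> 10), 0xDC00 | (cp & 0x3FF)):
                out += bytes([0xE0 | (surrogate >> 12),
                              0x80 | ((surrogate >> 6) & 0x3F),
                              0x80 | (surrogate & 0x3F)])
    return bytes(out)

print(encode_modified_utf8('\x00').hex())          # c080
print(encode_modified_utf8('\U00010400').hex())    # eda081edb080 -- six bytes, not four
```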
Mac OS X
The Mac OS X operating system uses a special form of UTF-8 (sometimes referred to as UTF-8-MAC) when writing file and folder names to the file system. All UTF-8-MAC text is valid UTF-8, but precomposed characters are forbidden: combining diacritics must be used in their place. This makes sorting far simpler, but can be confusing for software built around the assumption that precomposed characters are the norm and combining diacritics are used only to form unusual combinations. (This is an example of the NFD variant of text normalization.)
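As an illustration, the Python sketch below uses the unicodedata module's NFD normalization as a stand-in for the file-system behaviour; HFS+ applies its own decomposition tables, so this only approximates the effect on file names:

```python
import unicodedata

name = 'café'                                        # ends with precomposed U+00E9
decomposed = unicodedata.normalize('NFD', name)      # 'e' followed by U+0301 combining acute

print(name.encode('utf-8').hex())         # 636166c3a9
print(decomposed.encode('utf-8').hex())   # 63616665cc81 -- the decomposed form UTF-8-MAC stores
```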
Rationale behind UTF-8's mechanics
The encoding of UTF-8 is based loosely on Huffman coding, a way of representing frequency-sorted binary trees. As a consequence of the exact mechanics of UTF-8, the following properties of multi-byte sequences hold:
- The most significant bit of a single-byte character is always 0.
- The most significant bits of the first byte of a multi-byte sequence determine the length of the sequence: they are 110 for two-byte sequences, 1110 for three-byte sequences, and so on.
- The remaining bytes in a multi-byte sequence have 10 as their two most significant bits.
UTF-8 was designed to satisfy these properties in order to guarantee that no byte sequence of one character is contained within a longer byte sequence of another character. This ensures that byte-wise sub-string matching can be applied to search for words or phrases within a text; some older variable-length 8-bit encodings (such as Shift-JIS) did not have this property and thus made string-matching algorithms rather complicated. Although this property adds redundancy to UTF-8–encoded text, the advantages outweigh this concern; besides, data compression is not one of Unicode's aims and must be considered independently. This also means that if one or more complete bytes are lost due to error or corruption, one can resynchronize at the beginning of the next character and thus limit the damage.
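The self-synchronisation property can be demonstrated with a short sketch. The following Python code (the helper name resync is invented for this example) skips continuation bytes, whose two most significant bits are 10, to find the next character boundary after a cut:

```python
def resync(data: bytes) -> bytes:
    """Skip any leading continuation bytes so decoding can restart at a boundary."""
    i = 0
    while i < len(data) and (data[i] & 0b11000000) == 0b10000000:
        i += 1                              # still inside a damaged multi-byte sequence
    return data[i:]

encoded = 'héllo'.encode('utf-8')           # b'h\xc3\xa9llo'
damaged = encoded[2:]                       # the cut falls in the middle of 'é'
print(resync(damaged).decode('utf-8'))      # 'llo' -- only the damaged character is lost
```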
Also, due to the design of the byte sequences, if a sequence of bytes that is supposed to represent text validates as UTF-8, it is fairly safe to assume it actually is UTF-8. The chance of a random byte sequence being valid UTF-8 and not pure ASCII is 1 in 32 for a 2-byte sequence, 5 in 256 for a 3-byte sequence, and even lower for longer sequences.
While natural-language text in legacy encodings is far from a random byte sequence, it is also unlikely to produce byte sequences that pass a UTF-8 validity test and are then misinterpreted. (Pure ASCII text trivially passes a UTF-8 validity test, but provided the legacy encodings under consideration are also ASCII-based, this is not a problem.) For example, for ISO-8859-1 text to be misrecognized as UTF-8, the only non-ASCII characters in it would have to occur in sequences starting with either an accented letter or the multiplication sign and ending with a symbol.
The bit patterns can be used to identify UTF-8 sequences by hand. If the first hexadecimal digit of a byte is 0 through 7, it is a single-byte ASCII character. If it is C or D, the byte is the lead byte of a two-byte sequence (an 11-bit code point); if E, the lead byte of a three-byte sequence (16 bits); and if F, the lead byte of a four-byte sequence (21 bits). A byte whose first hexadecimal digit is 8 through B can never be a lead byte, but every continuation byte must be in that range. Thus one can tell at a glance that 0xA9 on its own is not valid UTF-8, while 0x54 or 0xE3 0xB4 0xB1 are valid UTF-8 sequences.
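A sketch of this lead-byte test, written in Python and following the simplified rule in the paragraph above (it deliberately ignores the individually invalid lead bytes 0xC0, 0xC1 and 0xF5–0xF7 listed earlier), might look like this:

```python
def sequence_length(lead: int) -> int:
    """Return the length of the UTF-8 sequence introduced by this lead byte."""
    if lead < 0x80:
        return 1            # 0xxxxxxx: a bare ASCII byte
    if 0xC0 <= lead <= 0xDF:
        return 2            # 110xxxxx: lead byte of a two-byte sequence
    if 0xE0 <= lead <= 0xEF:
        return 3            # 1110xxxx: lead byte of a three-byte sequence
    if 0xF0 <= lead <= 0xF7:
        return 4            # 11110xxx: lead byte of a four-byte sequence
    raise ValueError('continuation byte or invalid lead byte')

print(sequence_length(0x54))   # 1
print(sequence_length(0xE3))   # 3
# sequence_length(0xA9) raises: 0xA9 (10101001) can only be a continuation byte
```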
Overlong forms, invalid input, and security considerations
The exact response required of a UTF-8 decoder on invalid input is not uniformly defined by the standards. In general, there are several ways a UTF-8 decoder might behave in the event of an invalid byte sequence:
- Insert a replacement character (e.g. '?', '�').
- Ignore the bytes.
- Interpret the bytes according to a different character encoding (often the ISO-8859-1 character map).
- Not notice and decode as if the bytes were some similar bit of UTF-8 (this would indicate the decoder is buggy).
- Stop decoding and report an error.
It is possible for a decoder to behave in different ways for different types of invalid input.
RFC 3629 requires only that UTF-8 decoders not decode "overlong sequences" (where a character is encoded in more bytes than needed but still adheres to the forms above). The Unicode Standard requires a Unicode-compliant decoder to "…treat any ill-formed code unit sequence as an error condition. This guarantees that it will neither interpret nor emit an ill-formed code unit sequence."
Overlong forms are one of the most troublesome types of data. The current RFC says they must not be decoded, but older specifications for UTF-8 only gave a warning, and many simpler decoders will happily decode them. Overlong forms have been used to bypass security validations in high-profile products, including Microsoft's IIS web server. Therefore, care must be taken to avoid security issues if validation is performed before conversion from UTF-8.
To maintain security in the case of invalid input there are two options. The first is to decode the UTF-8 before doing any input validation checks. The second is to use a decoder that, in the event of invalid input, returns either an error or text that the application considers to be harmless.
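As an illustration (Python's built-in decoder is assumed here), a strict decoder refuses the classic overlong encoding of the slash character, 0xC0 0xAF, and can either report an error or substitute replacement characters depending on the error-handling mode the application selects:

```python
# The two-byte sequence C0 AF is an overlong (forbidden) encoding of '/' (U+002F).
overlong_slash = b'\xc0\xaf'

# Option 1: substitute a replacement character for the invalid bytes.
print(overlong_slash.decode('utf-8', errors='replace'))   # prints two U+FFFD characters

# Option 2: stop decoding and report an error.
try:
    overlong_slash.decode('utf-8')
except UnicodeDecodeError as exc:
    print('rejected:', exc.reason)
```

In neither mode does the decoder silently yield '/', which is exactly the behaviour that prevents overlong forms from slipping past validation.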
Advantages and disadvantages
- General
- Advantages
- UTF-8 is a superset of ASCII. A plain ASCII string is also a valid UTF-8 string. This backwards-compatibility means that no conversion needs to be done for ASCII text and existing software based on ASCII and its extensions can handle UTF-8.
- Sorting of UTF-8 strings using standard byte-oriented sorting routines will produce the same results as sorting them based on Unicode code points. (This has limited usefulness, though, since it is unlikely to represent the culturally acceptable sort order of any particular language or locale.)
- UTF-8 and UTF-16 are the standard encodings for XML documents. All other encodings must be specified explicitly either externally or through a text declaration. [2]
- Any byte oriented string search algorithm can be used with UTF-8 data (as long as one ensures that the inputs only consist of complete UTF-8 characters). Care must be taken with regular expressions and other constructs that count characters, however.
- UTF-8 strings can be fairly reliably recognized as such by a simple algorithm. That is, the probability that a string of characters in any other encoding appears as valid UTF-8 is low, diminishing with increasing string length. For instance, the octet values C0, C1, F5 to FF never appear. For better reliability, regular expressions can be used to take into account illegal overlong and surrogate values (see the W3 FAQ: Multilingual Forms for a Perl regular expression to validate a UTF-8 string).
- Disadvantages
- A badly-written (and not compliant with current versions of the standard) UTF-8 parser could accept a number of different pseudo-UTF-8 representations and convert them to the same Unicode output. This provides a way for information to leak past validation routines designed to process data in its eight-bit representation.
- Compared to legacy encodings
- Advantages
- UTF-8 can encode any Unicode character. In most cases, legacy encodings can be converted to Unicode and back with no loss and—as UTF-8 is an encoding of Unicode—this applies to it too.
- Character boundaries are easily found from anywhere in an octet stream (scanning either forwards or backwards). This implies that if a stream of bytes is scanned starting in the middle of a multibyte sequence, only the information represented by the partial sequence is lost and decoding can begin correctly on the next character. Similarly, if a number of bytes are corrupted or dropped then correct decoding can resume on the next character boundary. Many legacy multi-byte encodings are much harder to resynchronise.
- A byte sequence for one character never occurs as part of a longer sequence for another character as it did in older variable-length encodings like Shift-JIS (see the previous section on this). For instance, US-ASCII octet values do not appear otherwise in a UTF-8 encoded character stream. This provides compatibility with file systems or other software (e.g., the printf() function in C libraries) that parse based on US-ASCII values but are transparent to other values.
- The first byte of a multibyte sequence is enough to determine the length of the multibyte sequence. This makes it extremely simple to extract a substring from a given string without elaborate parsing. This was often not the case in legacy multibyte encodings.
- Efficient to encode using simple bit operations. UTF-8 does not require slower mathematical operations such as multiplication or division (unlike the obsolete UTF-1 encoding).
- Disadvantages
- UTF-8 is generally larger than the appropriate legacy encoding for everything except diacritic-free, Latin-alphabet text. Most alphabetic scripts had only a single byte per character in legacy encodings but their letters take at least two bytes in UTF-8. Ideographic scripts generally had two bytes per character in their legacy encodings yet take three bytes per character in UTF-8.
  - Legacy encodings for almost all non-ideographic scripts use a single byte per character, making string cutting and joining easy.
- Compared to UTF-7
- Advantages
- UTF-8 uses significantly fewer bytes per character for all non-ASCII characters.
- UTF-8 encodes "+" as itself whereas UTF-7 encodes it as "+-".
- Disadvantages
  - UTF-8 requires the transmission system to be eight-bit clean. In the case of e-mail this means it has to be further encoded using quoted-printable or base64. This extra stage of encoding carries a significant size penalty: base64 adds 33⅓% overhead, while the overhead of quoted-printable varies with how ASCII-heavy the text is (about 14% for French, but up to 200% for non-Roman scripts containing no ASCII characters). For any text that contains more non-ASCII characters than "+" signs, UTF-7 will be smaller than UTF-8 with quoted-printable, with the difference growing as the proportion of non-ASCII characters increases (even for ASCII-heavy languages such as French, UTF-7 is about 4% smaller than quoted-printable UTF-8), and for many texts it will also beat UTF-8 with base64.
- Compared to UTF-16
- Advantages
- Byte values of 0 (The ASCII NUL character) do not appear in the encoding unless U+0000 (the Unicode NUL character) is represented. This means that legacy C library string functions (such as strcpy()) that use a null-terminator will not incorrectly truncate strings.
  - Since ASCII characters can be represented in a single byte, text consisting mostly of diacritic-free Latin letters is around half the size in UTF-8 that it would be in UTF-16. Text in many other alphabets is slightly smaller in UTF-8 than in UTF-16 because of the presence of spaces.
  - Most existing computer programs (including operating systems) were not written with Unicode in mind, and using UTF-16 with them would create major compatibility issues, since it is not a superset of ASCII. UTF-8 allows programs to handle ASCII exactly as they always did, and changes behaviour only for bytes above 127, whose meaning already varied between legacy encodings.
- In UTF-8, characters outside the basic multilingual plane are not a special case.
- UTF-8 uses a byte as its atomic unit whilst UTF-16 uses a 16-bit word which is generally represented by a pair of bytes. This representation raises a couple of potential problems of its own.
- When representing a word in UTF-16 as two bytes, the order of those two bytes becomes an issue. A variety of mechanisms can be used to deal with this issue (for example, the Byte Order Mark), but they still present an added complication for software and protocol design.
- If an odd number of bytes are removed from the beginning of UTF-16-encoded text, the result will be either invalid UTF-16 or completely meaningless text. In UTF-8, if part of a multi-byte character is removed, only that character is affected and not the rest of the text.
- Disadvantages
  - UTF-8 is variable-length: different characters take sequences of different lengths to encode. The impact of this can be reduced by creating an abstract interface for working with UTF-8 strings that makes the encoding transparent to the user. (UTF-16 is technically also variable-length, but many programmers are unaware of this or simply ignore the rarely used code points outside the BMP.)
  - Chinese, Japanese, and Korean (CJK) ideographs use three bytes in UTF-8 but only two in UTF-16, so CJK text takes up more space when represented in UTF-8. A few other less-well-known groups of code points are affected in the same way (a size comparison sketch follows this list).
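The size trade-offs discussed in this section can be illustrated with a short sketch. The sample strings below are arbitrary, and Python's built-in codecs are assumed:

```python
# Compare encoded sizes of UTF-8 and UTF-16 for a few sample strings.
samples = {
    'English': 'Hello, world',
    'Greek':   'Καλημέρα κόσμε',
    'Chinese': '你好，世界',
}
for name, text in samples.items():
    utf8 = len(text.encode('utf-8'))
    utf16 = len(text.encode('utf-16-le'))   # little-endian, no byte order mark
    print(f'{name}: {utf8} bytes in UTF-8, {utf16} bytes in UTF-16')
```

ASCII-only text is half the size in UTF-8, alphabetic scripts such as Greek come out roughly even, and CJK text is larger in UTF-8, as described above.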
History
UTF-8 was invented by Ken Thompson on September 2, 1992, on a placemat in a New Jersey diner with Rob Pike. The day after, Pike and Thompson implemented it and updated their Plan 9 operating system to use it throughout.
UTF-8 was first officially presented at the USENIX conference in San Diego, January 25–29, 1993.
See also
- ASCII
- ISO 8859
- GB18030
- Universal Character Set
- Byte Order Mark
- Unicode and HTML
- Character encodings in HTML
- Unicode and e-mail
- Comparison of Unicode encodings
- Alt codes
External links
- Rob Pike tells the story of UTF-8's creation
- Original UTF-8 paper
- RFC 3629, the UTF-8 standard
- RFC 2277, IETF policy on character sets and languages
- UTF-8 and Unicode FAQ for Unix/Linux
- UTF-8
- a UTF-8 test page · another UTF-8 test page
- UTF-8 and Debian and Linux UTF-8 How-To
- Using UTF-8 with Gentoo Linux
- UTF-8 encoding table and Unicode characters - UTF-8 encoding displayed in a variety of formats, with Unicode and HTML encoding information.