The ASCII character set (excluding the extended characters defined by IBM) is divided into four groups of 32 characters.
The first 32 characters, ASCII codes 0 through 1Fh, form a special set of non-printing characters called the control characters. We call them control characters because they perform various printer/display control operations rather than displaying symbols.
Examples of common control characters include carriage return (which positions the cursor at the left side of the current line), line feed (which moves the cursor down one line on the output device), and backspace (which moves the cursor back one position to the left).
Unfortunately, different control characters perform different operations
on different output devices. There is very little standardization among
output devices. To find out exactly how a control character affects a
particular device, you will need to consult its manual.
The second group of 32 ASCII character codes comprises various punctuation symbols, special characters, and the numeric digits. The most notable characters in this group include the space character (ASCII code 20h) and the numeric digits (ASCII codes 30h through 39h).
Note that the numeric digits differ from their numeric values only in the
high order nibble. By subtracting 30h from the ASCII code for any particular
digit, you can obtain the numeric equivalent of that digit. You will learn
how to strip off the ASCII part of a number in Laboratory Exercise 6.
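As a minimal sketch in C, the subtraction looks like this (the digit '7' is an arbitrary choice for illustration):

```c
#include <stdio.h>

int main(void) {
    char digit = '7';            /* ASCII code 37h                  */
    int  value = digit - 0x30;   /* subtract 30h ('0') to get 7     */
    printf("'%c' -> %d\n", digit, value);
    return 0;
}
```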
The third group of 32 ASCII characters is reserved for the upper case alphabetic
characters.
The ASCII codes for the characters "A" through "Z" lie in the range 41h
through 5Ah. Since there are only 26 different alphabetic characters, the
remaining six codes hold various special symbols.
The fourth,
and final, group of 32 ASCII character codes is reserved for the lower
case alphabetic symbols, five additional special symbols, and another
control character (delete).
Note that the lower case character symbols use the ASCII codes 61h through 7Ah. If you convert the ASCII codes for the upper and lower case characters to binary, you will notice that the upper case symbols differ from their lower case equivalents in exactly one bit position.
The only place these two codes differ is in bit five. Upper case characters always contain a zero in bit five; lower case alphabetic characters always contain a one in bit five. You can use this fact to quickly convert between upper and lower case. If you have an upper case character you can force it to lower case by setting bit five to one. If you have a lower case character and you wish to force it to upper case, you can do so by setting bit five to zero. You can toggle an alphabetic character between upper and lower case by simply inverting bit five.
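A rough C sketch of these three operations follows; bit five corresponds to the constant 20h, and the operations are only meaningful when applied to alphabetic characters:

```c
#include <stdio.h>

int main(void) {
    char upper = 'A';             /* 41h: bit five is zero             */
    char lower = upper | 0x20;    /* set bit five    -> 'a' (61h)      */
    char back  = lower & ~0x20;   /* clear bit five  -> 'A' (41h)      */
    char flip  = 'q' ^ 0x20;      /* invert bit five -> 'Q' (51h)      */
    printf("%c %c %c %c\n", upper, lower, back, flip);
    return 0;
}
```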
Indeed, bits five and six determine which of the four groups in the ASCII character set you're in:
Bit 6 | Bit 5 | Group
  0   |   0   | Control Characters
  0   |   1   | Digits and Punctuation
  1   |   0   | Upper Case and Special
  1   |   1   | Lower Case and Special
So you could, for instance, convert any upper or lower case (or corresponding special) character to its equivalent control character by setting bits five and six to zero.
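For example, masking with 1Fh clears bits five and six. Here is a brief C sketch; the letter 'G' is an arbitrary choice, and 07h happens to be the bell (BEL) control code:

```c
#include <stdio.h>

int main(void) {
    char ch   = 'G';          /* 47h                                  */
    char ctrl = ch & 0x1F;    /* clear bits five and six -> 07h (BEL) */
    printf("control code: %d\n", ctrl);
    return 0;
}
```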
Consider, for a moment, the ASCII codes of the numeric digit characters:
Char | Dec | Hex
'0'  |  48 | 30
'1'  |  49 | 31
'2'  |  50 | 32
'3'  |  51 | 33
'4'  |  52 | 34
'5'  |  53 | 35
'6'  |  54 | 36
'7'  |  55 | 37
'8'  |  56 | 38
'9'  |  57 | 39
The decimal representations of these ASCII codes are not very enlightening. However, the hexadecimal representation of these ASCII codes reveals something very important: the low order nibble of the ASCII code is the binary equivalent of the represented digit.
By stripping away (i.e., setting to zero) the high order nibble of a numeric character, you can convert that character code to the corresponding binary representation. Conversely, you can convert a binary value in the range 0 through 9 to its ASCII character representation by simply setting the high order nibble to three. Note that you can use the logical-AND operation to force the high order bits to zero; likewise, you can use the logical-OR operation to force the high order bits to 0011 (three).
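A short C sketch of these two masking operations (the digit character '5' is an arbitrary example):

```c
#include <stdio.h>

int main(void) {
    char ch    = '5';           /* 35h                                      */
    int  value = ch & 0x0F;     /* AND clears the high nibble   -> 05h      */
    char back  = value | 0x30;  /* OR sets the high nibble to 3 -> '5'      */
    printf("%d %c\n", value, back);
    return 0;
}
```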
Note that you cannot convert a string of numeric characters to their equivalent binary representation by simply stripping the high order nibble from each digit in the string. Converting 123 (31h 32h 33h) in this fashion yields three bytes: 010203h, not the correct value which is 7Bh. Converting a string of digits to an integer requires more sophistication than this; the conversion above works only for single digits.
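One common approach, sketched here in C, strips each digit in turn and accumulates the result by multiplying the running total by ten. The function name digits_to_int is purely illustrative, and the sketch ignores signs and overflow:

```c
#include <stdio.h>

/* Convert a string of ASCII decimal digits to an integer by
   multiplying the running total by ten and adding each new digit. */
int digits_to_int(const char *s) {
    int value = 0;
    while (*s >= '0' && *s <= '9') {
        value = value * 10 + (*s - 0x30);   /* strip 30h from each digit */
        s++;
    }
    return value;
}

int main(void) {
    printf("%d\n", digits_to_int("123"));   /* prints 123 (7Bh) */
    return 0;
}
```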
Bit seven in standard ASCII is always zero. This means that the ASCII character set consumes only half of the possible character codes in an eight bit byte. The PC uses the remaining 128 character codes for various special characters including international characters (those with accents, etc.), math symbols, and line drawing characters. Note that these extra characters are a non-standard extension to the ASCII character set. Most printers support the PC's extended character set.
Should you need to exchange data with other machines that are not PC-compatible, you have only two alternatives: stick to standard ASCII or ensure that the target machine supports the extended IBM-PC character set. Some machines, like the Apple Macintosh, do not provide native support for the extended IBM-PC character set; however, you may obtain a PC font which lets you display the extended character set. Other computers (e.g., Amiga and Atari ST) have similar capabilities. However, the 128 characters in the standard ASCII character set are the only ones you should count on transferring from system to system.
Despite the fact that it is a "standard", simply encoding your data using standard ASCII characters does not guarantee compatibility across systems. While it's true that an "A" on one machine is most likely an "A" on another machine, there is very little standardization across machines with respect to the use of the control characters. Indeed, of the 32 control codes plus delete, there are only four control codes commonly supported: backspace (BS), tab, carriage return (CR), and line feed (LF). Worse still, different machines often use these control codes in different ways. End of line is a particularly troublesome example. MS-DOS, CP/M, and other systems mark end of line by the two-character sequence CR/LF. Apple Macintosh, Apple II, and many other systems mark the end of line by a single CR character. UNIX systems mark the end of a line with a single LF character. Needless to say, attempting to exchange simple text files between such systems can be an experience in frustration. Even if you use standard ASCII characters in all your files on these systems, you will still need to convert the data when exchanging files between them. Fortunately, such conversions are rather simple.
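As an illustration of how simple such a conversion can be, here is a C sketch that normalizes MS-DOS style CR/LF pairs and Macintosh style bare CR characters to the single LF that UNIX expects; a real conversion tool would also handle the reverse direction and leave binary files alone:

```c
#include <stdio.h>

/* Copy stdin to stdout, converting CR/LF and bare CR line endings
   to a single LF. */
int main(void) {
    int c, prev = 0;
    while ((c = getchar()) != EOF) {
        if (c == '\r') {
            putchar('\n');        /* CR (or the CR of a CR/LF pair) -> LF */
        } else if (c == '\n' && prev == '\r') {
            /* the LF of a CR/LF pair was already emitted; skip it */
        } else {
            putchar(c);
        }
        prev = c;
    }
    return 0;
}
```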
Despite some major shortcomings, ASCII data is the standard for data
interchange across computer systems and programs. Most programs can accept
ASCII data; likewise most programs can produce ASCII data. Since you will be
dealing with ASCII characters in assembly language, it would be wise to
study the layout of the character set and memorize a few key ASCII codes
(e.g., "0", "A", "a", etc.).
Page adapted from Brookdale Community College