In this article, we’re going to dive into how computers store and represent numbers. We’ll look at the binary math behind bits and bytes and see how everything from negative numbers to text gets encoded.
Computers use a system called binary, which has only two states: 1 and 0. Despite this simplicity, binary can represent all kinds of information, not just simple yes-or-no answers. Each binary digit, or “bit,” holds one of two values, and by stringing more bits together, we can represent bigger numbers.
In the decimal system, which we use every day, each digit can be one of ten values (0 through 9). To show numbers bigger than 9, we add more digits. For example, the number 263 means 2 hundreds, 6 tens, and 3 ones.
Binary works in a similar way but uses base-two, so each column is a power of two. For example, the binary number 101 means 1 four, 0 twos, and 1 one, which equals 5 in decimal.
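To make the place-value idea concrete, here is a small Python sketch (the function name `binary_to_decimal` is just illustrative) that expands a binary string column by column:

```python
def binary_to_decimal(bits: str) -> int:
    """Expand a binary string by its place values (powers of two)."""
    total = 0
    for i, digit in enumerate(reversed(bits)):
        total += int(digit) * (2 ** i)  # column i is worth 2**i
    return total

print(binary_to_decimal("101"))  # 1*4 + 0*2 + 1*1 = 5
print(int("101", 2))             # Python's built-in base-2 parser agrees: 5
```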
Binary arithmetic is like decimal arithmetic, but it can be a bit tricky because there are only two symbols. When adding binary numbers, if the sum is more than 1, we carry over to the next column, just like in decimal addition.
Let’s look at adding the binary numbers for 183 and 19, which equals 202 in decimal. In binary, 183 is 10110111 and 19 is 00010011. Adding them column by column, carrying over just as in decimal, gives 11001010, which is 202 in decimal.
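If you’d like to watch the carrying happen step by step, here is a minimal Python sketch (the helper name `add_binary` is our own) that adds two binary strings one column at a time:

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings column by column, carrying past 1."""
    a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal width
    carry, result = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        result.append(str(s % 2))  # digit that stays in this column
        carry = s // 2             # carry into the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("10110111", "00010011"))  # 183 + 19 -> '11001010' (202)
```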
A byte is made up of 8 bits, which allows for 256 different values ($2^8$). This mattered in early computers: with one byte per value, graphics and sound were limited to 256 colors or tones. Data sizes are often described with terms like kilobytes (KB), megabytes (MB), and gigabytes (GB), where 1 kilobyte is traditionally 1024 bytes in binary terms.
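A quick Python snippet shows how each extra bit doubles the number of representable values, which is where the 256 figure comes from:

```python
# Each extra bit doubles the number of representable values.
for bits in (1, 4, 8, 16):
    print(f"{bits:2d} bits -> {2 ** bits} values")
# 8 bits -> 256 values: the range 0..255 of a single byte
```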
Modern computers often use 32-bit or 64-bit systems, which can handle much larger numbers. An unsigned 32-bit number can represent values up to about 4.3 billion ($2^{32} - 1$), while a signed 64-bit number can reach around 9.2 quintillion ($2^{63} - 1$). This is important for managing large datasets and memory in today’s computers.
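You can check where these figures come from with a few powers of two:

```python
print(2 ** 32 - 1)  # 4294967295: max unsigned 32-bit (~4.3 billion)
print(2 ** 63 - 1)  # 9223372036854775807: max signed 64-bit (~9.2 quintillion)
print(2 ** 64 - 1)  # 18446744073709551615: max unsigned 64-bit
```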
One simple way to show both positive and negative numbers is sign and magnitude: the first (leftmost) bit acts as a sign (0 for positive, 1 for negative), and the remaining bits hold the magnitude. With 32 bits, this gives a range of about ±2 billion. (Most modern computers actually use a closely related scheme called two’s complement.)
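Here is where the roughly ±2 billion figure comes from: with one of the 32 bits reserved for the sign, 31 bits remain for the magnitude.

```python
value_bits = 31                     # 32 bits minus 1 sign bit
magnitude_max = 2 ** value_bits - 1
print(magnitude_max)                # 2147483647: about +/- 2.1 billion
```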
For numbers that aren’t whole, like 12.7 or 3.14, computers use floating point representation. The IEEE 754 standard is the most common way to encode these numbers, using a format similar to scientific notation.
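As a rough illustration using Python’s standard `struct` module, we can pack 3.14 into the 32-bit IEEE 754 format and look at its sign, exponent, and fraction bits; note that the stored value is only an approximation:

```python
import struct

# Pack 3.14 as a 32-bit IEEE 754 float and show its raw bits.
raw = struct.pack(">f", 3.14)            # big-endian single precision
bits = "".join(f"{byte:08b}" for byte in raw)
print(bits[0], bits[1:9], bits[9:])      # sign | exponent | fraction
print(struct.unpack(">f", raw)[0])       # ~3.14 (stored only approximately)
```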
Computers also need to represent text, which they do by assigning numbers to letters. The American Standard Code for Information Interchange (ASCII) was created in 1963 as a 7-bit code that could represent 128 different characters, including letters, numbers, and symbols.
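Python’s built-in `ord` and `chr` functions map directly between characters and their ASCII codes:

```python
print(ord("A"))                 # 65: the ASCII code for 'A'
print(chr(65))                  # 'A': back from code to character
print(format(ord("A"), "07b"))  # '1000001': the 7-bit pattern
```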
While ASCII was great for basic text, it was mainly for English, which caused problems with other languages. This led to the creation of Unicode, a system that can represent over a million characters from different languages and scripts.
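A short Python example shows Unicode text being encoded to bytes with UTF-8, the most common Unicode encoding; notice that the accented letter takes two bytes:

```python
text = "héllo"                  # mixes ASCII and non-ASCII letters
encoded = text.encode("utf-8")  # Unicode text -> bytes
print(list(encoded))            # 'é' takes two bytes in UTF-8
print(encoded.decode("utf-8"))  # bytes -> text again
```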
In summary, computers use binary to store and work with numbers, using bits and bytes to encode everything from numbers to text. Understanding these basics is key to knowing how computers work. In the next part, we’ll see how computers manipulate these binary sequences to perform computations.
Try converting decimal numbers to binary and vice versa. Start with simple numbers like 5 and 10, and then try larger numbers. Remember, in binary, each column represents a power of two. For example, the binary number 101 equals 5 in decimal. Use this activity to practice and reinforce your understanding of binary representation.
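If you want to check your answers, here is one possible Python sketch (the function name `decimal_to_binary` is our own) that converts by repeatedly dividing by 2 and collecting the remainders:

```python
def decimal_to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder is the next binary digit
        n //= 2
    return "".join(reversed(digits))

print(decimal_to_binary(5))   # '101'
print(decimal_to_binary(10))  # '1010'
print(bin(10))                # '0b1010': Python's built-in agrees
```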
Pair up with a classmate and race to solve binary addition problems. Start with simple additions like 101 + 110 and gradually increase the complexity. Remember to carry over when the sum exceeds 1, just like in decimal addition. This will help you get comfortable with binary arithmetic.
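To settle a race, a short checker like this (the helper name `check_answer` is illustrative) verifies an answer using Python’s base-2 parser:

```python
def check_answer(a: str, b: str, answer: str) -> bool:
    """Check a binary addition answer using Python's base-2 parser."""
    return int(a, 2) + int(b, 2) == int(answer, 2)

print(check_answer("101", "110", "1011"))  # True: 5 + 6 = 11
```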
Research and list different file sizes you encounter in everyday life, such as a song, a photo, or a video. Convert these sizes from bytes to kilobytes (KB), megabytes (MB), and gigabytes (GB). Remember, 1 kilobyte is traditionally 1024 bytes. This will help you understand data sizes and their significance.
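A small converter like the sketch below can do the unit math for you; the example file sizes are rough, made-up figures for illustration:

```python
def describe_size(num_bytes: float) -> str:
    """Express a byte count in KB, MB, or GB (1 unit = 1024 of the smaller)."""
    for unit in ("bytes", "KB", "MB", "GB"):
        if num_bytes < 1024 or unit == "GB":
            return f"{num_bytes:.1f} {unit}"
        num_bytes /= 1024

print(describe_size(4_200_000))      # a typical song: '4.0 MB'
print(describe_size(2_000_000_000))  # a long video: '1.9 GB'
```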
Practice representing positive and negative numbers using the sign and magnitude method. Choose a few numbers, such as 25 and -25, and represent them in binary using the first bit as the sign bit. This will help you understand how computers handle negative numbers.
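Here is a possible sketch of the 8-bit sign-and-magnitude encoding (the function name is our own), using one sign bit followed by seven magnitude bits:

```python
def sign_magnitude_8bit(n: int) -> str:
    """Encode n in 8-bit sign-and-magnitude: 1 sign bit + 7 magnitude bits."""
    if not -127 <= n <= 127:
        raise ValueError("magnitude needs more than 7 bits")
    sign = "1" if n < 0 else "0"
    return sign + format(abs(n), "07b")

print(sign_magnitude_8bit(25))   # '00011001'
print(sign_magnitude_8bit(-25))  # '10011001'
```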
Explore the ASCII and Unicode tables to see how characters are represented in computers. Try encoding your name in ASCII and find out how it would be represented in Unicode. This activity will help you understand text representation and the limitations of ASCII.
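As a starting point, here is a short Python sketch using “Ada” as a sample name; it prints the ASCII byte values and the corresponding Unicode code points:

```python
name = "Ada"

# ASCII: one byte per character (works because "Ada" is all-ASCII).
print(list(name.encode("ascii")))         # [65, 100, 97]

# Unicode: the same characters written as U+XXXX code points.
print([f"U+{ord(c):04X}" for c in name])  # ['U+0041', 'U+0064', 'U+0061']
```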
Binary – A number system that uses only two digits, 0 and 1, to represent all numbers. It is the foundation of how all code and data are represented in computing systems. – Computers process data in binary because they use electrical signals that have two states: on (1) and off (0).
Bit – The smallest unit of data in a computer, represented by a 0 or 1 in binary code. – A single bit can represent two values, such as true or false, in a computer program.
Byte – A group of eight bits, which is the standard unit of data used to represent a character in a computer. – The word “hello” is stored in a computer using five bytes, one for each letter.
Decimal – A number system based on ten, using digits from 0 to 9. It is the standard system for denoting integer and non-integer numbers. – When converting the binary number $1010_2$ to decimal, we get $10_{10}$.
Arithmetic – The branch of mathematics dealing with numbers and basic operations like addition, subtraction, multiplication, and division. – In computer science, arithmetic operations are performed using algorithms that manipulate binary numbers.
Negative – Referring to numbers less than zero, often represented in computers using two’s complement notation. – The negative number $-5$ is stored in binary as $11111011$ using an 8-bit two’s complement representation.
Floating – Referring to floating-point numbers, which represent values with fractional parts by storing a sign, an exponent, and a fraction, much like scientific notation. – The number $3.14$ is stored in a computer as a floating-point number to allow for precision in calculations.
Text – A sequence of characters that can be processed by a computer, often encoded using standards like ASCII or Unicode. – When you type a message on your computer, it is stored as text data that can be read and edited.
Unicode – A character encoding standard that allows computers to represent and manipulate text from any writing system in the world. – Unicode enables the display of characters from different languages, such as Chinese, Arabic, and Russian, on the same webpage.
ASCII – A character encoding standard that uses 7 bits (usually stored in an 8-bit byte) to represent text characters, primarily English letters, digits, and symbols. – The ASCII code for the letter ‘A’ is 65, which is stored as $01000001$ in binary.