Bits and Bytes

Computers don’t understand words, numbers, or anything else fed into them the way humans do. How, then, can a computer understand, interpret, and process data? Internally, everything is represented by a binary electrical signal that registers in one of two states: ON (1) or OFF (0). This is why the terms bits and bytes come up so often in computing.

A language or system in which everything is represented as a series of 0s and 1s is called the binary number system.

What Is the Binary Number System

The binary number system, in mathematics, is a positional numeral system that uses 2 as its base and therefore needs only two different symbols for its digits, 0 and 1, instead of the ten symbols used in the decimal system. The numbers from 0 to 10 are thus written in binary as 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, and 1010.
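
As a quick check, here is a minimal Python sketch; the built-in bin() function produces exactly this sequence:

    # Print the binary form of each decimal number from 0 to 10.
    for n in range(11):
        print(n, "->", bin(n)[2:])  # bin(10) returns '0b1010'; strip the '0b' prefix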

Why Do Computers Use Binary

So you may be wondering: why only 0 and 1? Couldn’t you just add another digit? Let’s look at the reason.

Every piece of information in your computer is an electrical signal, and in the early days of computing, electrical signals were much harder to measure and control precisely. It made more sense to distinguish only between an “ON” state, represented by a negative charge, and an “OFF” state, represented by a positive charge.

Why is the “OFF” state represented by a positive charge? Because electrons carry a negative charge: more electrons flowing means more current, and that current is negative.

Bits and Bytes

So computers work by manipulating 1s and 0s. Each of these is a binary digit, or bit for short. A single bit is too small to be of much use, so bits are grouped together into units of 8. Each 8-bit unit is called a byte, the basic unit that is passed around the computer, often in groups. Because of this, the number 8 and its multiples have become important in computing, and a computer’s memory is measured in bytes.

1 Kilobyte (KB) = 1024 Bytes

1 Megabyte (MB) = 1024 Kilobytes

1 Gigabyte (GB) = 1024 Megabytes

1 Terabyte (TB) = 1024 Gigabytes
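
Since each unit is 1024 times the previous one, all of these sizes are powers of 2. A minimal Python sketch makes the arithmetic concrete (the variable names here are our own shorthand, not part of any library):

    KB = 1024          # 2**10 bytes
    MB = 1024 * KB     # 2**20 bytes
    GB = 1024 * MB     # 2**30 bytes
    TB = 1024 * GB     # 2**40 bytes
    print(KB, MB, GB, TB)  # 1024 1048576 1073741824 1099511627776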

Why 1 Byte is 8 Bits

To understand this, let’s see how many patterns we can create using 1 bit, 2 bits, 3 bits, and so on.

Number of Bits    Number of Patterns
1                 2^1 = 2
2                 2^2 = 4
3                 2^3 = 8
4                 2^4 = 16
5                 2^5 = 32
6                 2^6 = 64
7                 2^7 = 128
8                 2^8 = 256

When computers were first designed, it was thought that 256 patterns or symbols would be enough to represent the data. That is how 1 byte came to be defined as a group of 8 bits.
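
You can verify the table above with a short Python sketch that enumerates every possible pattern of n bits:

    from itertools import product

    # List all strings of n bits; there are always 2**n of them.
    for n in range(1, 9):
        count = len(list(product("01", repeat=n)))
        print(n, "bits ->", count, "patterns")  # e.g. 8 bits -> 256 patterns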

Why 1 Kilo = 1024 in Computers

Generally 1 kilo = 1000: 1 kilogram = 1000 grams, 1 kilometre = 1000 metres, 1 kilolitre = 1000 litres. So why does 1 kilo = 1024 in computers?

In everyday math we use the metric system, where units are expressed as powers of 10:

10^0 = 1, 10^1 = 10, 10^2 = 100, 10^3 = 1000 = 1 kilo

The higher units follow the same pattern: 10^6 = 10^3 kilo = 1 mega; 10^9 = 10^3 mega = 1 giga; and so on.

As discussed above, computers use the binary number system, so their units are measured in powers of 2:

2^0 = 1, 2^1 = 2, 2^2 = 4, 2^3 = 8, …, 2^10 = 1024.

Since 1024 is very close to 1000 (1 kilo), 1024 is taken as 1 kilo in computers.
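
You can check how close the two values are:

    # 2**10 is only 2.4% larger than 10**3, close enough to borrow the name "kilo".
    print(10**3, 2**10, 2**10 / 10**3)  # 1000 1024 1.024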

Bytes and Characters

In computer science, a character is a unit of information equivalent to one letter, digit, or symbol, following the everyday notion of a character as a single unit of written language.
A widely used standard for encoding characters is ASCII (American Standard Code for Information Interchange). Each ASCII character requires one byte, or eight bits, of data storage.

Some of the characters and their ASCII codes are:

Binary     Character    Binary     Character    Binary     Character
0110000    0            1000001    A            1100001    a
0110001    1            1000010    B            1100010    b
0110010    2            1000011    C            1100011    c
0110011    3            1000100    D            1100100    d
0110100    4            1000101    E            1100101    e
0110101    5            1000110    F            1100110    f
0110110    6            1000111    G            1100111    g
0110111    7            1001000    H            1101000    h
0111000    8            1001001    I            1101001    i
0111001    9            1001010    J            1101010    j
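
You can look these codes up yourself in Python with the built-ins ord() (character to code) and chr() (code to character):

    # Print each character's ASCII code in decimal and as 7 bits of binary.
    for ch in "0Aa":
        print(ch, "->", ord(ch), "->", format(ord(ch), "07b"))
    # 0 -> 48 -> 0110000
    # A -> 65 -> 1000001
    # a -> 97 -> 1100001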
