I bet you are wondering why the intensity values for RGB range from 0 to 255. Why not 0 to 99, or 0 to 999? What is so special about 256 anyway?
A bit is simply a binary digit that has only two possible values: 0 (off) or 1 (on). Ultimately, a computer is a binary machine and sees everything as zeros and ones.
A byte is the smallest addressable unit of memory for storing digital information, and for the vast majority of computers it consists of eight bits.
Consider this.
There is a pattern to this!
# bits (b) | 2^b | # possible values
1 | 2^1 | 2
2 | 2^2 | 4
3 | 2^3 | 8
4 | 2^4 | 16
5 | 2^5 | 32
8 | 2^8 | 256
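If you want to verify the pattern yourself, here is a minimal Python sketch (the loop and the values printed are just an illustration, not part of the lesson) that shows how many values a given number of bits can represent:

```python
# Each additional bit doubles the number of possible values: 2**b values for b bits.
for bits in (1, 2, 3, 4, 5, 8):
    print(f"{bits} bit(s) -> {2 ** bits} possible values")
# 8 bits give 2**8 = 256 values, which is why RGB intensities run from 0 to 255.
```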
Remember, a single RGB color includes intensity values for red, green, and blue. With 256 possible values for each of those three channels, the total number of colors is 256 × 256 × 256 = 16,777,216. Recall from the video that this should be over 16 million!
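As a quick sanity check, the same figure can be computed in one line (shown here in Python purely as an illustration):

```python
# Three color channels, 256 intensity levels each.
print(256 ** 3)  # 16777216, i.e. over 16 million possible RGB colors
```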
Using fewer than 8 bits per byte would have allowed fewer possible values, but it would have taken up less space (memory). For RGB color data, this would have meant fewer colors (and less memory required).
Using more than 8 bits per byte would have allowed more possible values, but it would have taken up more space (memory). For RGB color data, this would have meant more colors (and more memory required).
The goal would be to use the fewest bits needed to sufficiently store data drawn from a given set of values. For instance, if I needed to store data that could take 20 different values, I would use 5 bits (capacity of 32 values), because 4 bits would not be sufficient (capacity of 16 values) and 6 bits would be overkill (capacity of 64 values).
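That rule of thumb is easy to express in code; here is a minimal Python sketch (the helper name min_bits is my own, purely for illustration):

```python
import math

def min_bits(num_values: int) -> int:
    """Smallest number of bits whose 2**bits capacity covers num_values."""
    return max(1, math.ceil(math.log2(num_values)))

print(min_bits(20))   # 5, since 2**4 = 16 is too small and 2**5 = 32 is enough
print(min_bits(256))  # 8, the capacity of a standard byte
```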
Computer monitors were not originally in color; they displayed visual elements by simply turning pixels on (light) or off (no light). So the size of a byte was not determined for the purpose of color, but for some other values that computers stored.
So, if not color intensity values, what did computers need to store that required 8 bits of memory? The answer to this mystery is a character set (symbols such as those in an alphabet) used to communicate meaning. The character set used (which included the English alphabet) consisted of more than 128 symbols (the limit of 7 bits) but fewer than 512 (the limit of 9 bits).
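For a concrete sense of how characters map onto byte-sized codes, here is a small Python illustration (the specific characters are just examples I picked): Python's ord gives a character's numeric code, and each code shown fits within 8 bits.

```python
# Character codes for a few symbols; each fits within a single byte (0-255).
for ch in ("A", "a", "0", "?"):
    code = ord(ch)
    print(f"{ch!r} -> {code} (binary {code:08b})")
```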
Of course, even counting each letter of the 26-letter English alphabet twice (for uppercase and lowercase characters) would have required only 52 symbols. So...