In the computing world, units of measurement are surprisingly complex, and the standards behind them are not consistent. So, I am going to write down the most common conventions here.

k 1000 Lower Case (k)
K 1024 Upper Case (K)

It was unbelievable to me at first that k and K, the lower- and upper-case forms of the same letter, represent different values in the computing world: 1000 and 1024 respectively.


Next comes the smallest countable unit in a computer, the bit (b); eight bits (8 b) equal one byte (B).

b 1 bit (1/8 byte) Lower Case (b)
B 1 byte (8 bits) Upper Case (B)
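
To make the relationship concrete, here is a minimal Python sketch; the function names are my own, just for illustration:

    BITS_PER_BYTE = 8

    def bits_to_bytes(bits: int) -> float:
        # One byte is eight bits, so divide.
        return bits / BITS_PER_BYTE

    def bytes_to_bits(nbytes: int) -> int:
        # Going the other way, multiply.
        return nbytes * BITS_PER_BYTE

    print(bits_to_bytes(8))  # 1.0 -> 8 b = 1 B
    print(bytes_to_bits(1))  # 8   -> 1 B = 8 b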

A computer is a binary machine, and so is every operating system that runs on it. Therefore, the concept of the binary number system is extremely important to understand.

n-bit 2^n
0     1
1     2
2     4
3     8
4     16
5     32
6     64
7     128
8     256
9     512
10    1024
11    2048
12    4096
13    8192
14    16384
15    32768
16    65536
17    131072
18    262144
19    524288
20    1048576
21    2097152
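
This table is easy to regenerate yourself. A small Python loop (my own sketch) prints the same values:

    # Print 2^n for every n from 0 to 21, matching the table above.
    for n in range(22):
        print(f"{n:>2}  {2 ** n}")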

It is easy to understand why K stands for 1024 now.

K is 2^10, 2 to the power of 10, which is 1024.

Some ask: "Why is k for 1000, and K for 1024?"

In daily life, we have always used the lower-case k to represent 1000; it is the metric prefix for "a thousand", as in km and kg.

But in a computer, everything is based on the binary system, so to represent roughly "a thousand" we use 2^10, which is 1024. This is why the real storage capacity a computer reports is less than what is stated on the storage device: the computer treats K as 1024, while storage device manufacturers treat k as 1000. (The IEC prefixes such as KiB, the "kibibyte", were later introduced to make the 1024-based units unambiguous.)
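
As a quick illustration (the drive size is my own example), here is the arithmetic for a drive marketed as "500 GB":

    # The manufacturer counts in decimal: 500 GB = 500 * 10^9 bytes.
    marketed_bytes = 500 * 10**9

    # The operating system counts in binary: divide by 1024^3.
    binary_gb = marketed_bytes / 1024**3

    print(f"{binary_gb:.1f}")  # ~465.7, so the OS shows roughly 465 GB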

No worries. It is not that hard to remember the difference between "k" and "K". When you are confused, just think about when the unit was invented and whether it is related to computers. If the unit existed before computers were invented, or has nothing to do with them, it is the lower-case "k", which stands for 1000. If the unit is computer-related, it is the upper-case "K", which stands for 1024.