What does "-1" represent in the value range for unsigned int and signed int?
I am learning C and have a dumb question regarding the "-1" in the value range for unsigned int and signed int. I can't seem to find an explanation for it anywhere.
The paragraph below explains the data range. However, it does not explain the "-1". What does the "-1" represent/mean? Is it -1 because it skips 0 and 0 has no value?
In 32-bit integers, an unsigned integer has a range of 0 to 2^32 - 1 = 0 to 4,294,967,295 or about 4 billion. The signed version goes from -2^31 - 1 to 2^31, which is -2,147,483,648 to 2,147,483,647 or about -2 billion to +2 billion. The range is the same size, but it is shifted on the number line.
Adding to @Yunnosch's excellent explanation of unsigned numbers: almost all modern computers use "two's complement" to represent signed binary integers. In two's complement, the most significant bit is used as the "sign bit", and a negative number is represented by the complement of its absolute value, plus 1. So for the 3-bit example, while the range for unsigned values is 0 to 7, the range for signed values is -4 to 3:
100 : -4
101 : -3
110 : -2
111 : -1
000 : 0
001 : 1
010 : 2
011 : 3
Notice that for signed numbers the range of negative numbers is one greater than the range of positive numbers. That's because, while in number theory 0 is neither positive nor negative, in binary representation 0 has to be either negative or positive. Because it has the most significant bit cleared, 0 is part of the positive number domain, so that leaves one less positive number available.
Consider the values you can achieve with 2 bits:
00 : 0
01 : 1
10 : 2
11 : 3
There are 4 of them, 2 to the power of 2.
But the highest value is not 4, it is 3.
The highest value is 2 to the power of 2, minus 1. I.e. in your representation, 2^2 - 1.
Add a bit and you get twice the number of values, by adding:
100 : 4
101 : 5
110 : 6
111 : 7
Total number 8, but highest number 7.
So the "-1" is because the first of the total of 2^n values is always used for 0, the 2nd is used for 1, the 3rd is used for 2. In the end, the (2^n)th one is not available for 2^n; it is already used for 2^n - 1.
Where did you find this incorrect paragraph? It appears to be about 2's complement but has the -1 in the wrong place.
For C implementations using one's complement or sign/magnitude signed integers, the range is symmetric around zero (with 2 bit patterns that both represent 0, so the positive range and the negative range are the same size).
Basically nothing ever uses that these days, but the ISO C standard specifies that signed integers are binary and use either two's complement, one's complement, or sign/magnitude.
In two's complement (nearly universal these days), the range of representable values using n bits is [-2^(n-1), 2^(n-1) - 1].
One bit-pattern (all bits zero) represents the value zero. Every bit has a place-value of 2^i, except the final one, which has a place-value of -2^(n-1).
The bit-pattern with all bits set represents -1, because sum(2^i, i=0..n-1) is one less than 2^n.
With only the sign bit set, we get the most-negative number: -INT_MIN is signed overflow (undefined behaviour) because it's not representable as an int; it requires a wider integer. Or with wrapping, -INT_MIN == INT_MIN. This is the "2's complement anomaly".
https://en.wikipedia.org/wiki/Two%27s_complement#Most_negative_number
You can avoid widening if you're doing an absolute-value operation: e.g.
unsigned abs = i >= 0 ? i : -(unsigned)i;
(Converting a negative value to unsigned in C has well-defined behaviour of modulo-reducing until it's in the representable range. In C this is independent of the signed-integer encoding; what matters is the value. So (uint8_t)-1 is always 255. For 2's complement it just copies the bit-pattern. For sign/magnitude or one's complement, a C implementation would have to do some math to cast from signed to unsigned. Notice that I did the cast before negation, which means 0 - i with the usual unsigned wrapping.)
n bits can hold 2^n different values. (The first bit can have two values * the second bit can have two values * the third bit can have two values * ...)
For example, 3 bits can hold 2^3 = 8 different values:
000
001
010
011
100
101
110
111
If each bit pattern represents an integer, then an n-bit integer can represent 2^n different integers. For example,

- It could represent the integers from 0 to 2^n - 1 inclusively (because (2^n - 1) - (0) + 1 = 2^n different values). For example,

  000 : 0
  001 : 1
  010 : 2
  011 : 3
  100 : 4
  101 : 5
  110 : 6
  111 : 7

- It could represent the integers from -2^(n-1) to 2^(n-1) - 1 inclusively (because (2^(n-1) - 1) - (-2^(n-1)) + 1 = 2^n different values). For example,

  100 : -4
  101 : -3
  110 : -2
  111 : -1
  000 : 0
  001 : 1
  010 : 2
  011 : 3

You could assign any meaning to these values, but the previously stated ranges are the ones understood by two's-complement machines for unsigned integers and signed integers respectively. [1]
[1] On a ones'-complement machine, there are two ways of writing zero (0000...0000 and 1111...1111), so the range is only -(2^(n-1) - 1) to 2^(n-1) - 1. I think all modern machines are two's-complement machines, though.