C does not check whether a pointer is out of bounds, but the underlying hardware might behave in strange ways when an address is computed that falls outside the boundaries of an object; pointing just past the end of an object is the one exception. The C Standard explicitly describes such computations as causing undefined behavior.
In most current environments, the code above does not pose a problem, but similar situations could have caused segmentation faults on x86 systems in 16-bit protected mode, some 25 years ago.
In the language of the Standard, such a value could be a trap representation, something that cannot be manipulated without invoking undefined behavior.
The pertinent section of the C11 Standard is:
6.5.6 Additive operators

When an expression that has integer type is added to or subtracted from a pointer, the result has the type of the pointer operand. If the pointer operand points to an element of an array object, and the array is large enough, the result points to an element offset from the original element such that the difference of the subscripts of the resulting and original array elements equals the integer expression. [...] If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined. If the result points one past the last element of the array object, it shall not be used as the operand of a unary * operator that is evaluated.
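To make the rule concrete, here is a minimal sketch (the array a and its size are chosen purely for illustration): computing a + 4 is valid and the resulting pointer may be compared, but not dereferenced, whereas merely computing a + 5 is already undefined behavior:

#include <stdio.h>

int main(void) {
    int a[4] = {1, 2, 3, 4};
    int *end = a + 4;        /* OK: points one past the last element */
    /* int *bad = a + 5; */  /* undefined behavior: more than one past the end */

    for (int *p = a; p != end; p++)  /* end may be compared, never dereferenced */
        printf("%d\n", *p);
    return 0;
}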
A similar example of undefined behavior is this:
char *p;
char *q = p;
Merely loading the value of the uninitialized pointer p invokes undefined behavior, even if it is never dereferenced.
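For contrast, a minimal sketch of the well-defined version, where the pointer is given a value before it is read:

char *p = NULL;   /* p now holds a well-defined null pointer value */
char *q = p;      /* fine: copying a null pointer is perfectly legal */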
EDIT: there is no point arguing about this. The Standard says computing such an address invokes undefined behavior, so it does. The fact that some implementations might just compute some value and store it, or not, is irrelevant. Do not rely on any assumptions regarding undefined behavior: the compiler might take advantage of its inherently unpredictable nature to perform optimizations you cannot imagine.
For example, this loop:

for (int i = 1; i != 0; i++) {
    ...
}
might compile to an infinite loop without any test at all: i++ invokes undefined behavior if i is INT_MAX, so the compiler's analysis is this:
- the initial value of i is > 0.
- for any positive value of i < INT_MAX, i++ is still > 0.
- for i = INT_MAX, i++ invokes undefined behavior, so we can assume i > 0 because we can assume anything we please.

Therefore i is always > 0 and the test code can be removed.
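Here is a minimal, self-contained sketch of this effect (the -O2 flag and the printing inside the loop are illustrative assumptions; actual behavior depends on the compiler and options):

/* loop.c -- compile with optimizations, e.g.: cc -O2 loop.c
   The compiler may assume signed overflow never occurs, so the test
   i != 0 can be folded to "always true" and the loop may become
   infinite instead of stopping when i wraps around past INT_MAX. */
#include <stdio.h>

int main(void) {
    for (int i = 1; i != 0; i++) {
        /* incrementing i past INT_MAX is undefined behavior */
        if (i % 100000000 == 0)
            printf("still looping, i = %d\n", i);
    }
    return 0;
}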