C Programming/stdint.h

stdint.h is a header file in the C standard library, introduced in section 7.18 of the C99 standard, that allows programmers to write more portable code by providing a set of typedefs that specify exact-width integer types, together with macros defining the minimum and maximum allowable values for each type. This header is particularly useful for embedded programming, which often involves considerable manipulation of hardware-specific I/O registers requiring integer data of fixed widths, specific locations and exact alignments. stdint.h (for C or C++) and cstdint (for C++) can be downloaded or quickly created if they are not provided.

The naming convention for exact-width integer types is intN_t for signed and uintN_t for unsigned integers. For example, int8_t and uint8_t, amongst others, can be declared, with their corresponding ranges INT8_MIN to INT8_MAX and 0 (zero) to UINT8_MAX defined using a similar but upper-case naming convention. In addition, stdint.h defines integer types capable of holding object pointers, intptr_t and uintptr_t, whose width depends on the processor and its address range.

The exact-width types and their corresponding range macros are only included in the header if they exist for that specific compiler/processor. Note that even on the same processor, two different compiler implementations can differ. Preprocessor conditionals such as #ifdef or #if defined can therefore be used to include or exclude code depending on which types exist, so that the correct exact-width set is selected for a compiler and its processor target.

The related include file limits.h provides macro values for the range limits of the common integer types. In C++ the same limits are available through climits. In contrast to the exact-width limits in stdint.h, which are implementation independent, all maximum and minimum integer values defined in limits.h are specific to the compiler implementation. For example, a compiler generating 32-bit executables will define LONG_MIN as −2,147,483,648 [−2^31], whereas for 64-bit targets LONG_MIN can be −9,223,372,036,854,775,808 [−2^63].

Corresponding integer types
The C standard has a notion of "corresponding integer types". Informally, this means that each signed integer type and its unsigned counterpart form a pair: for example, int and unsigned int are said to be corresponding integer types. (Note: typedef doesn't create a new type, it creates a new identifier as a synonym for the given type, so a typedef'd name corresponds to whatever its underlying type corresponds to.) This is important for two reasons:


 * corresponding types are friendly to aliasing and type puns
 * corresponding types have the same size and alignment, and values representable in both have the same representation

Both of these combined mean that code which reads a signed object through a pointer to its corresponding unsigned type (or vice versa) has defined behavior by the standard (as opposed to being undefined in the general case). There are many caveats to how far you can push this, so it's important to actually read the C standard to see what's legal or not (the bulk of this deals with padding bits and out-of-range representations).

Representation
The C99 standard elaborated the difference between value representations and object representations.

The object representation of an integer consists of 0 or more padding bits, 1 or more value bits, and, depending on the signedness of the integer type, either 0 or 1 sign bits (a sign bit does not count as a value bit).

The value representation is a conceptual representation of an integer. It ignores any padding bits and (possibly) rearranges the bits so that the integer is ordered sequentially from most significant value bit to least significant value bit. Most programmers deal with this representation because it allows them to write portable code easily, dealing only with −0 and out-of-range values, as opposed to those plus the tricky aliasing rules and trap representations they would face by working with the object representation directly.

Signed representation
The C standard allows only three signed integer representations, the choice being left to the implementation:


 * sign and magnitude
 * one's complement
 * two's complement (the most widely used)

Integer types
The types intN_t and uintN_t are required to be corresponding signed and unsigned integer types. For the types that are marked optional, an implementation must define either both intN_t and uintN_t or neither of the two. The limits of these types shall be defined with similarly named macros in the fashion described below.

In a type of the form intN_t (or, similarly, in a preprocessor define), N must be a positive decimal integer with no leading 0's.

Exact-width integer types
These are of the form intN_t and uintN_t. Both types must be represented by exactly N bits with no padding bits. intN_t must be encoded as a two's complement signed integer and uintN_t as an unsigned integer. These types are optional, except that if the implementation supports types with widths of 8, 16, 32 or 64 bits, it shall typedef them to the exact-width names with the corresponding N. Any other N is optional.

The limits of these types are defined with macros with the following formats:
 * INTN_MAX is the maximum value (2^(N−1) − 1) of intN_t.
 * INTN_MIN is the minimum value (−2^(N−1)) of intN_t.
 * UINTN_MAX is the maximum value (2^N − 1) of uintN_t.

Minimum-width integer types
These are of the form int_leastN_t and uint_leastN_t, where int_leastN_t is a signed integer type and uint_leastN_t an unsigned integer type.

The standard mandates that these have widths greater than or equal to N, and that no smaller type with the same signedness has N or more bits. For example, if a system provides only 32-bit and 64-bit unsigned integer types, uint_least16_t must be equivalent to the 32-bit type.

An implementation is required to define these for the following N: 8, 16, 32, 64. Any other N is optional.

The limits of these types are defined with macros with the following formats:
 * INT_LEASTN_MAX is the maximum value (2^(N−1) − 1 or greater) of int_leastN_t.
 * INT_LEASTN_MIN is the minimum value (−2^(N−1) + 1 or less) of int_leastN_t.
 * UINT_LEASTN_MAX is the maximum value (2^N − 1 or greater) of uint_leastN_t.

stdint.h also defines macros that expand constant decimal, octal or hexadecimal values into constants guaranteed to be suitable for the corresponding types and usable in preprocessor expressions:
 * INTN_C(value) expands to a constant suitable for int_leastN_t. For example, if int_least32_t is typedef'd to long, INT32_C(1234) corresponds to 1234L.
 * UINTN_C(value) expands to a constant suitable for uint_leastN_t.

Fastest minimum-width integer types
These are of the form int_fastN_t and uint_fastN_t.

The standard does not mandate anything about these types except that their widths must be greater than or equal to N. It also leaves it up to the implementer to decide what it means to be a "fast" integer type.

An implementation is required to define these for the following N: 8, 16, 32, 64.

The limits of these types are defined with macros with the following formats:
 * INT_FASTN_MAX is the maximum value (2^(N−1) − 1 or greater) of int_fastN_t.
 * INT_FASTN_MIN is the minimum value (−2^(N−1) + 1 or less) of int_fastN_t.
 * UINT_FASTN_MAX is the maximum value (2^N − 1 or greater) of uint_fastN_t.

Integers wide enough to hold pointers
intptr_t and uintptr_t are, respectively, signed and unsigned integer types guaranteed to be able to hold the value of a pointer: any valid void pointer can be converted to one of these types and back, and the result compares equal to the original pointer. These two types are optional.

The limits of these types are defined with the following macros:
 * INTPTR_MIN is the minimum value (−32,767 [−2^15 + 1] or less) of intptr_t.
 * INTPTR_MAX is the maximum value (32,767 [2^15 − 1] or greater) of intptr_t.
 * UINTPTR_MAX is the maximum value (65,535 [2^16 − 1] or greater) of uintptr_t.

Greatest-width integer types
intmax_t and uintmax_t are, respectively, the signed and unsigned integer types of the greatest supported width. They are, in other words, the integer types with the greatest limits.

The limits of these types are defined with macros with the following formats:
 * INTMAX_MAX is the maximum value (9,223,372,036,854,775,807 [2^63 − 1] or greater) of intmax_t.
 * INTMAX_MIN is the minimum value (−9,223,372,036,854,775,807 [−2^63 + 1] or less) of intmax_t.
 * UINTMAX_MAX is the maximum value (18,446,744,073,709,551,615 [2^64 − 1] or greater) of uintmax_t.

Macros that expand constant decimal, octal or hexadecimal values into constants suitable for these types are also defined:
 * INTMAX_C(value) expands to a constant suitable for intmax_t.
 * UINTMAX_C(value) expands to a constant suitable for uintmax_t.

Other integer limits

 * PTRDIFF_MIN is the minimum value of ptrdiff_t.
 * PTRDIFF_MAX is the maximum value of ptrdiff_t.
 * SIZE_MAX is the maximum value (2^16 − 1 or greater) of size_t.
 * SIG_ATOMIC_MIN is the minimum value of sig_atomic_t.
 * SIG_ATOMIC_MAX is the maximum value of sig_atomic_t.
 * WCHAR_MIN is the minimum value of wchar_t.
 * WCHAR_MAX is the maximum value of wchar_t.
 * WINT_MIN is the minimum value of wint_t.
 * WINT_MAX is the maximum value of wint_t.

Criticisms and caveats

 * Some (non-conforming) implementations tack C99 support on top of a C89 runtime library. One of the consequences of this is that the new printf/scanf length modifiers (such as hh, ll, j, z and t) aren't recognized and will probably lead to something undefined. The typical ways of working around this are:
 * The most common (and the most wrong) way is to use the long or unsigned long types as an intermediate step and pass these into printf or scanf. This works reasonably well for the exact, minimum, and fast integer types of less than 32 bits, but may cause trouble with intptr_t and uintptr_t and the types larger than 32 bits, typically on platforms that use 32-bit longs and 64-bit pointers.
 * Not using scanf directly, but manually reading into a buffer, calling a conversion function such as strtol, and then converting the result to the desired type. This doesn't help with printing integers out, though.
 * Using a third-party printf and scanf library that is C99 compatible.
 * Using the C99 format specifier macros, for example PRId64. These are declared in inttypes.h.


 * The rules for integer rank and corresponding integer types may force implementers to choose the lesser of two evils among not supporting an integer type, making a bad compromise, or supporting an integer type in a non-conforming way.
 * For example, there are machines that have special support for an extremely large signed integer register, or an extremely large unsigned integer register, without supporting the other type. An implementation can choose not to expose this type to C at all, synthesize a slow corresponding integer type, synthesize a weird corresponding integer type, or expose the integer to the programmer without mapping it to the stdint.h types or synthesizing a corresponding integer type.


 * The exact-width intN_t types are a compromise between the desire for guaranteed two's complement integer types and the desire for guaranteed types with no padding bits (as opposed to a more fine-grained approach which would define more types). Because of the "all or nothing" approach to the exact-width types, an implementation might have to play the same sort of games described above, depending on whether it cares about speed, programmer convenience, or standards conformance.