In my code, I do not use int or unsigned int. I only use size_t or ssize_t, for portability. For example:

typedef size_t intc;    // (instead of unsigned int)
typedef ssize_t uintc;  // (instead of int)

Because strlen, string, vector... all use size_t, I usually use size_t, and I only use ssize_t when the value may be negative.

But I find that:

The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules.

in the book The C++ Programming Language.

So I am puzzled. Am I wrong? Why does the STL not abide by the book's suggestion?

Ar Rakin
  • `size_t` is used in the standard library for representing sizes. It would be strange if the size of a container could be negative. The interface states its behavior. I think the book assumes day-to-day usage, not interface design – kassak Apr 01 '13 at 07:49
  • @kassak - No, in this case an unsigned type actually *is* used to get one extra bit for the value. Some members of the committee thought it important to be able to have a `std::vector` larger than half the available memory. And the quote says "*almost* never"... – Bo Persson Apr 01 '13 at 08:03
  • @Bo Persson, thanks. The C++ library uses size_t for sizes, but that brings us trouble of its own: if we mix it with int, we must be careful when comparing them. – hgyxbll Apr 01 '13 at 12:24
  • Nominate for re-opening, as the duplicate cited does not address this post closely enough. – chux - Reinstate Monica Nov 24 '15 at 18:48
  • Using `intc` for _unsigned_ `size_t` (as you say, '[signed] `int`' in comment, even though it's probably longer), and `uintc` for _signed_ `ssize_t` ('`unsigned int`' in comment), is confusing to me, because normally the _'u'_ stands for _unsigned_, e.g. `uint32_t` is the _unsigned_ version of `int32_t`, a 4-byte integer. – RastaJedi Aug 20 '16 at 20:39
  • To those who marked this as a duplicate: this is definitely not a duplicate. The question is not about signed vs unsigned, but size_t vs ssize_t, that is, "when should I use either?" – EnzoR Oct 12 '17 at 13:06
  • for (size_t i = a; i < b; i += 2), how many iterations? With a = 0, b = 0xffffffff (32-bit pointers) => an infinite loop. For (ssize_t i = a; i < b; i += 2), how many iterations? (b - a + 1)/2, or none (in case b <= a). Why? By the standard, unsigned overflow is well defined and signed overflow is UB (undefined behavior). Compilers sometimes use UB for optimizations (they assume UB never happens). Some optimizations need to calculate the number of loop iterations. From that point of view, signed counters are preferable. – Павел Jul 15 '20 at 21:07
  • Now I always use 'int' for most cases, because 'int' is natural, the default type of an integer literal is 'int', and the range of 'int' is enough for me. – hgyxbll Apr 28 '21 at 02:00

2 Answers


ssize_t is used for functions whose return value could either be a valid size, or a negative value to indicate an error. It is guaranteed to be able to store values at least in the range [-1, SSIZE_MAX] (SSIZE_MAX is system-dependent).

So you should use size_t whenever you mean to return a size in bytes, and ssize_t whenever you would return either a size in bytes or a (negative) error value.

See: http://pubs.opengroup.org/onlinepubs/007908775/xsh/systypes.h.html

Chen Li
  • I have used ssize_t instead of int. Because CPU handle fast with ssize_t when it is x64. – hgyxbll Apr 01 '13 at 12:09
  • Well, this answer fails to fully explain the consequences of basing such decisions on pure interface considerations. It is unlikely that any implementation will use a wider type for `ssize_t` than it uses for `size_t`. This immediately means that the price you will pay for the ability to return negative values is *halving* of the positive range of the type. I.e. `SSIZE_MAX` is usually `SIZE_MAX / 2`. This should be kept in mind. In many cases this price is not worth paying just for the ability to return `-1` as a negative value. – AnT stands with Russia Dec 25 '14 at 19:56
  • @AnT having unsigned values at all is one of the greatest failures in C++. There is no case where the price is not worth paying. If you need such large numbers, use int64_t instead... – thesaint May 02 '15 at 11:01
  • @thesaint What the heck are you talking about? It's not about capacity. How would you add four to the stack pointer if its current value may or may not be negative? – josaphatv Sep 17 '15 at 23:33
  • @josaphatv What? How would you add four to the value of the stack pointer if you don't know if that would overflow it? Not to mention that the whole phrase "adding four to the stack pointer" has nothing to do with C++. Unsigned values are bad because 1) They cause type conversion issues all over the place 2) They add absolutely no benefit 3) They are even less intuitive and predictable than signed ints, since they overflow already at zero. An expression like "while(some_uint - 1 >= 0)" is wrong and hard to spot. Signed overflows are much less likely. – thesaint Sep 19 '15 at 08:26
  • Besides that, your statement makes absolutely no sense. Adding a number to either int or uint is only a matter of whether or not it overflows. If the stack pointer was indeed negative then you have a different problem lol. And even if so, adding something doesn't inherently break anything. Something was broken before. How you think this is any worse than adding four to 0x7FFFFFFF is beyond me (which overflows into kernel space => BOOM). – thesaint Sep 19 '15 at 08:31
  • @hgyxbll What do you mean by ssize_t being faster than int? ssize_t can be either int or long; it is not an actual type of its own. I could not understand this, please explain. – xis Nov 06 '15 at 20:25
  • Didn't unsigned integers come about *first* (prior to signed ones) because of the hardware - obviously this is closer to bare-metal C than most C++ usage - but the use of the MSB to allow signed arithmetic was not done just to halve the absolute magnitude that an "integer" could represent, but because there was a need for subtractive maths. Signed ints and unsigned ints are pears and apples - different but able to cross under certain, limited circumstances...! (*Half of the range of each is shared.*) – SlySven Jan 14 '16 at 17:57
  • Where the confusion seems to arise, to me, is when a function seems as though it should produce only an unsigned value, i.e. the number of bytes that `read(2)` has actually been able to read. However, in the case of an error the value -1 (probably encoded as ALL bits set) is returned - not to make things difficult, but because it is a **sentinel** value that cannot arise normally. – SlySven Jan 14 '16 at 17:59
  • @thesaint Conversely, if a value cannot be lower than 0, allowing it to be signed is a failure to communicate intent. You can't logically have an array, or other container, of -10 `char`s, for example (logically, a container with negative size would require the power of creation, because it's so small that it actually _increases_ the number of uncontained objects in its vicinity, creating new ones out of nothing), so it wouldn't make sense to use a signed type for array bounds instead of an unsigned one. – Justin Time - Reinstate Monica Dec 02 '16 at 20:10
  • The problem with unsigned types isn't that they exist, it's that they're considered and treated as integers (since all integers, by mathematical definition, are signed); this is directly responsible for people treating them the same as signed types, and by extension for all the problems associated with them. Unsigned types are inherently more predictable (due to having defined overflow behaviour), but a lot of people completely fail to take it into account when checking them because they're used to signed counters (which, IMO, is a failure at teaching, when the counter cannot be less than 0). – Justin Time - Reinstate Monica Dec 02 '16 at 20:54
  • Instead of `while(some_uint - 1 >= 0)` or `for(some_uint = 10; some_uint >= 0; --some_uint)`, a proper check that takes unsigned properties into account would look something like `while(some_uint != 0)` or `for(some_uint = 10; some_uint + 1 != 0; --some_uint)`. This loop condition works properly regardless of the loop counter's signedness: It ends the loop when `some_uint` reaches `-1`, (which is a valid value for signed types, and equal to the maximum storeable value for unsigned types due to overflow/underflow rules); regardless of signedness, `-1 + 1` must always equal `0`. – Justin Time - Reinstate Monica Dec 02 '16 at 20:54
  • The `while` loop works because: For `some_uint - 1` to be `>= 0`, `some_uint` must logically be `>= 1`; therefore, once `some_uint == 0`, and is thus no longer `>= 1`, `some_uint - 1` can no longer be `>= 0`, and the loop is terminated. The `for` loop works on the same principle: For `some_uint` to be `>= 0`, `some_uint + 1` must be `>= 1`; therefore, once `some_uint + 1 == 0`, and is thus no longer `>= 1`, `some_uint` can no longer be `>= 0`, and the loop is terminated. Simple, well-defined, and makes perfect sense if one knows how unsigned types work. – Justin Time - Reinstate Monica Dec 02 '16 at 20:54
  • Alternatively, it could be written to specifically check for `static_cast(-1)`; this is more verbose, but communicates intent more clearly. – Justin Time - Reinstate Monica Dec 02 '16 at 21:01
  • The type ssize_t has a range of [-SSIZE_MAX, SSIZE_MAX], not [-1, SSIZE_MAX]. – dtouch3d Jan 20 '17 at 14:22
  • Supporters of unsigned sizes do not understand two things. One is that they implicitly advocate for unsigned arithmetic, which is totally broken for calculating sizes (also in terms of performance). The second thing they may not understand is a bit more subtle (too complicated for me, I admit), but here is an example: in transportation they give negative waiting times for events all the time. Negative waiting times are not an error in this case but a useful concept, similar to imaginary numbers. BTW, not sure, but was there ever a discussion about negative numbers in bookkeeping? – Patrick Fromberg Apr 13 '20 at 11:52
  • @PatrickFromberg what's broken about using unsigned arithmetic for calculating sizes? I've never seen a negative size. You? – Tom Lint Sep 15 '21 at 16:24
  • @TomLint, what is broken is that 1-2 equals some hardware-dependent, arbitrary-looking number when using arithmetic operators on unsigned numbers. There are some exotic uses for that, but most of the time this is not the result that you want. `int main () {unsigned int a = 1, b = 2; std::cout << (a-b) << std::endl;}` – Patrick Fromberg Sep 17 '21 at 05:03

ssize_t is not included in the standard and isn't portable. size_t should be used when handling the size of objects (there's ptrdiff_t too, for pointer differences).

  • ssize_t is from POSIX: http://pubs.opengroup.org/onlinepubs/009696799/basedefs/sys/types.h.html – Kafumanto Jan 09 '17 at 13:34
  • C++11 and later implement the template `std::make_signed`, but it's somewhat of a grey area whether using `size_t` as its parameter is well-defined. In C++20, use of this template with types not allowed by the standard results in ill-formed code, but existing implementations allow the use of `size_t`. – Swift - Friday Pie Jul 01 '20 at 15:34