Another C programmer's nightmare is overflow. We must remember that C assigns fixed widths to its variable types on any given platform: a short int is typically 16 bits wide, and an int 32 (the standard only guarantees minimum ranges, but these are the common sizes). When we consider what this means in terms of base-10 numbers... Well, 32 bits, each of which can be 1 or 0. So the largest number one can have in such a field is 2^32 - 1. And when we think of signed integers, it's roughly half of that, 2^31 - 1, because the other half of the bit patterns is reserved for negative numbers. It is quite a big number, but it can be exceeded. This is especially a problem in mathematical calculations, which often involve computing long series of powers that can reach really large values. There are also other issues with mathematical programming, like the accuracy of real numbers (accuracy of results, but also accuracy when comparing numbers), but I won't get into that here.

Just remember: overflowing a signed integer is actually undefined behaviour in C, though in practice it usually wraps around to a really negative value; overflowing an unsigned integer is well-defined and wraps around to a really tiny positive value. Also, casting a really large unsigned value to a signed type will give you a negative value (strictly speaking, the result is implementation-defined)... So beware - C is full of pitfalls :)

Well, I'll once again give you something to ponder for a while:

Example of a common - yet dirty and dangerous habit.

(No, I am not talking about smoking!)

One often sees things like the following:

```c
typedef struct foosNbars
{
    int  foo[20];
    char bar[200];
} foosNbars;

foosNbars *obtain_foosNbars(unsigned int amount)
{
    foosNbars *fbs;

    fbs = (foosNbars *)malloc(amount * sizeof(foosNbars));
    if (NULL == fbs)
    {
        HANDLE_ERR("could neither allocate foos nor bars", ErrType_noMem);
        return (foosNbars *)NULL;
    }
    return fbs;
}
```

HANDLE_ERR() here is just some custom error handler that gets the error message and type and deals with them; it is not really relevant here. So the question is: what happens? What is the issue? Where is the problem?
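Once you've pondered it, here is one defensive pattern you'll commonly see: check that the multiplication cannot wrap before handing it to malloc(). This is a sketch, not the original function; the name `obtain_foosNbars_checked` is my own, and I've dropped the error-handler call to keep it self-contained:

```c
#include <stdlib.h>
#include <stdint.h>

typedef struct foosNbars
{
    int  foo[20];
    char bar[200];
} foosNbars;

foosNbars *obtain_foosNbars_checked(size_t amount)
{
    /* If amount * sizeof(foosNbars) would exceed SIZE_MAX, the
       multiplication would silently wrap and malloc() would happily
       return a much smaller buffer than the caller expects. */
    if (amount != 0 && amount > SIZE_MAX / sizeof(foosNbars))
        return NULL;                 /* refuse: byte count would overflow */

    return malloc(amount * sizeof(foosNbars));
}
```

(calloc() performs an equivalent overflow check internally, which is one reason some people prefer it for array allocations.)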
