Here are a few guidelines that will help the compiler generate more efficient code; some of the tips are specific to this compiler, others are generally good programming practice. Keep in mind that the operands of arithmetic operators are promoted to int before the operation is performed:
unsigned char uc = 0xfe;
if (uc * uc < 0) /* this is true! */
{
    ....
}
uc * uc is evaluated as (int) uc * (int) uc =
(int) 0xfe * (int) 0xfe = (int) 0xfc04 = -1020.
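If the full unsigned product is what you actually want, cast one operand to unsigned int so the multiplication is carried out in unsigned arithmetic. A minimal sketch (the variable names are only illustrative):

unsigned char uc = 0xfe;
unsigned int square;

/* Casting one operand keeps the result unsigned: 0xfc04 = 64516. */
square = (unsigned int) uc * uc;

The cast also avoids the signed overflow that the plain 16 bit int multiplication would otherwise produce.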
Another one:
(unsigned char) -12 / (signed char) -3 = ...
No, the result is not 4:
(int) (unsigned char) -12 / (int) (signed char) -3 =
(int) (unsigned char) 0xf4 / (int) (signed char) 0xfd =
(int) 0x00f4 / (int) 0xfffd =
(int) 244 / (int) -3 =
(int) -81 = (int) 0xffaf;
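If the intuitive answer of 4 is what you are after, keep the signedness of the operands consistent, so that both values survive the promotion to int. A minimal sketch (the variable names are only illustrative):

signed char a = -12;
signed char b = -3;
int q = a / b;   /* both promote to int as -12 and -3, so q == 4 */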
Don't complain that gcc gives you a different result: gcc uses 32
bit ints, while SDCC uses 16 bit ints, so the results differ.
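If you need code that produces the same result on both compilers, one option is to do the arithmetic in explicit-width types. This is only a sketch; it assumes <stdint.h> is available (it is in recent SDCC releases and in gcc), and the function name is made up:

#include <stdint.h>

/* 0xfe -> 0xfc04 (64516) regardless of whether int is 16 or 32 bits. */
uint16_t square_u8 (uint8_t x)
{
    return (uint16_t) ((uint16_t) x * x);
}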
From the "comp.lang.c FAQ":
If well-defined overflow characteristics are important and negative values are not, or if you want to steer clear of sign-extension problems when manipulating bits or bytes, use one of the corresponding unsigned types. (Beware when mixing signed and unsigned values in expressions, though.)
Although character types (especially unsigned char) can be used as tiny integers, doing so is sometimes more trouble than it's worth, due to unpredictable sign extension and increased code size.
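To illustrate the sign-extension problem mentioned above, consider what happens when a negative signed char is widened (a sketch only; the hexadecimal values assume SDCC's 16 bit int):

signed char   sc = -128;              /* bit pattern 0x80              */
unsigned char b  = 0x80;              /* same bit pattern, value 128   */
unsigned int  u  = (unsigned int) sc; /* sign-extended: 0xff80 (65408) */
unsigned int  v  = (unsigned int) b;  /* zero-extended: 0x0080 (128)   */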