Here are a few guidelines that will help the compiler generate more efficient code. Some of the tips are specific to this compiler; others are generally good programming practice.
unsigned char uc = 0xfe;
if (uc * uc < 0) /* this is true! */
{
....
}
uc * uc is evaluated as (int) uc * (int) uc = (int) 0xfe * (int) 0xfe = (int) 0xfc04 = -1020.
(unsigned char) -12 / (signed char) -3 = ...

No, the result is not 4:

(int) (unsigned char) -12 / (int) (signed char) -3 =
(int) (unsigned char) 0xf4 / (int) (signed char) 0xfd =
(int) 0x00f4 / (int) 0xfffd =
(int) 244 / (int) -3 =
(int) -81 = (int) 0xffaf;

Don't complain that gcc gives you different results: gcc uses 32 bit ints, while SDCC uses 16 bit ints, so the results differ.
If well-defined overflow characteristics are important and negative values are not, or if you want to steer clear of sign-extension problems when manipulating bits or bytes, use one of the corresponding unsigned types. (Beware when mixing signed and unsigned values in expressions, though.)
Although character types (especially unsigned char) can be used as "tiny" integers, doing so is sometimes more trouble than it's worth, due to unpredictable sign extension and increased code size.
In the following example the modulus is computed in full int width:

foobar(unsigned int p1, unsigned char ch)
{
unsigned char ch1 = p1 % ch ;
....
}

For the modulus operation the variable ch will be promoted to unsigned int first, then the modulus operation will be performed (this will lead to a call to the support routine _moduint()), and the result will be cast to a char. If the code is changed to

foobar(unsigned int p1, unsigned char ch)
{
unsigned char ch1 = (unsigned char)p1 % ch ;
....
}

it would substantially reduce the generated code (future versions of the compiler will be smart enough to detect such optimization opportunities).