About Question enthuware.ocajp.i.v7.2.1332 :
Posted: Wed Nov 19, 2014 3:31 am
Hi,
I have a question regarding the narrowing conversion of primitive numeric data types in the context of the provided question: in my Java book I've read that a conversion from a "big" to a "small" data type cuts off the leading bits. This might cause a change of sign, e.g.:
The value of an integer might be
01001010011101111000011001010111 (32 bit)
A conversion to a short might simply delete the first 16 bits, resulting in
1000011001010111 (16 bit)
For the short value, the first bit is now 1, which means the value is negative...
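To make this concrete, here is a minimal sketch of that example (the binary literal just reuses the 32-bit pattern above, so the exact value is only illustrative):

    int i = 0b01001010011101111000011001010111;        // 1249347159, high bit 0, positive
    short s = (short) i;                                // keeps only the low-order 16 bits
    System.out.println(s);                              // -31145, because the new high-order bit is 1
    System.out.println(Integer.toBinaryString(s & 0xFFFF)); // 1000011001010111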
But I cannot apply this approach to the "correct" answer of the given question: in the line b = (byte) i; I would assume that the trailing bits of the integer i are removed, so the sign information gets lost.
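To check what actually happens, here is a small test one could run (the value of i is just the example pattern from above, not the value from the original question):

    int i = 0b01001010011101111000011001010111;  // same hypothetical bit pattern as above
    byte b = (byte) i;                           // narrowing cast; only the low-order 8 bits (01010111) survive
    System.out.println(b);                       // 87, positive here because the new high-order bit is 0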
Where is my mistake?
Thank you!