The following two declarations should be identical. However, MaxValues1 will not compile, failing with "The operation overflows at compile time in checked mode." Can someone please explain what is going on with the compiler, and how I can get around it without having to use a hard-coded value as in MaxValues2?
public const ulong MaxValues1 = 0xFFFF * 0xFFFF * 0xFFFF;
public const ulong MaxValues2 = 0xFFFD0002FFFF;
To make the literals unsigned, add the u suffix, and to make them long, the l suffix; i.e. you need ul.
If you really want the overflow behavior, you can add unchecked, as in unchecked(0xFFFF * 0xFFFF * 0xFFFF), but that's likely not what you want. You get the overflow because the literals are interpreted as Int32, not as ulong, and 0xFFFF * 0xFFFF * 0xFFFF does not fit in a 32-bit integer, since it is approximately 2^48.
public const ulong MaxValues1 = 0xFFFFul * 0xFFFFul * 0xFFFFul;
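For comparison, here is a minimal sketch (the class and constant names are just for illustration) contrasting the unchecked version with the suffixed one; assuming ordinary 32-bit wraparound, the unchecked constant ends up holding only the low 32 bits of the true 48-bit product:

using System;

class SuffixDemo
{
    // unchecked: the product is computed in Int32 and wraps, keeping
    // only the low 32 bits of the real result.
    public const ulong Wrapped = unchecked(0xFFFF * 0xFFFF * 0xFFFF); // 0x2FFFF

    // ul suffix: every literal is a ulong, so the product is computed
    // in 64 bits and nothing overflows.
    public const ulong Correct = 0xFFFFul * 0xFFFFul * 0xFFFFul;      // 0xFFFD0002FFFF

    static void Main()
    {
        Console.WriteLine(Wrapped.ToString("X")); // 2FFFF
        Console.WriteLine(Correct.ToString("X")); // FFFD0002FFFF
    }
}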
By default, integer literals are of type int. You can add the 'UL' suffix to change them to ulong literals.
public const ulong MaxValues1 = 0xFFFFUL * 0xFFFFUL * 0xFFFFUL;
public const ulong MaxValues2 = 0xFFFD0002FFFFUL;
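A quick way to see how the suffix changes the literal's type (the variable names here are only for illustration):

using System;

class LiteralTypes
{
    static void Main()
    {
        var plain    = 0xFFFF;   // no suffix: the literal is an int
        var suffixed = 0xFFFFUL; // UL suffix: the literal is a ulong

        Console.WriteLine(plain.GetType());    // System.Int32
        Console.WriteLine(suffixed.GetType()); // System.UInt64

        // With the suffix the whole product is computed in 64 bits:
        Console.WriteLine((0xFFFFUL * 0xFFFFUL * 0xFFFFUL).ToString("X")); // FFFD0002FFFF
    }
}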
I think it's actually not a ulong until you assign it at the end; try
public const ulong MaxValues1 = (ulong)0xFFFF * (ulong)0xFFFF * (ulong)0xFFFF;
i.e. in MaxValues1 you are multiplying three 32-bit ints together, which overflows because the intermediate result is also a 32-bit int. With the casts, the operation becomes multiplying three ulongs together, which won't overflow because each intermediate result is a ulong.
(ulong)0xFFFF * 0xFFFF * 0xFFFF;
0xFFFF * (ulong)0xFFFF * 0xFFFF;
also work, as the result type of each multiplication is based on the larger operand type,
but
0xFFFF * 0xFFFF * (ulong)0xFFFF;
won't work, as the first two values are multiplied as ints and overflow before anything is converted to ulong.
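The same left-to-right evaluation can be demonstrated at run time with plain int variables instead of constants; a small sketch (variable names assumed, and note that with non-constant operands each operand needs its own cast, since an int variable does not implicitly convert to ulong):

using System;

class CastOrder
{
    static void Main()
    {
        int f = 0xFFFF;

        // Cast each operand first: every multiplication is done in ulong.
        ulong ok = (ulong)f * (ulong)f * (ulong)f;
        Console.WriteLine(ok.ToString("X")); // FFFD0002FFFF

        // Multiply first, cast afterwards: the product is computed in int
        // arithmetic, so inside checked it throws instead of wrapping.
        try
        {
            ulong bad = checked((ulong)(f * f * f));
            Console.WriteLine(bad.ToString("X"));
        }
        catch (OverflowException)
        {
            Console.WriteLine("f * f overflowed as an int");
        }
    }
}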
Add the numeric suffix 'UL' to each of the numbers. Otherwise, C# treats them as Int32.