Trying to Assign a Large Number at Compile Time


3

The following two operations are identical, yet MaxValues1 will not compile, failing with "The operation overflows at compile time in checked mode." Can someone please explain what is going on in the compiler, and how I can get around it without having to use a hard-coded value as in MaxValues2?

public const ulong MaxValues1 = 0xFFFF * 0xFFFF * 0xFFFF;

public const ulong MaxValues2 = 0xFFFD0002FFFF;
2012-04-05 00:14
by George Funke
Please add the appropriate language tag. - Oliver Charlesworth 2012-04-05 00:15


4

To make integer literals unsigned, add the u suffix; to make them long, add the l suffix. In other words, you need ul.

If you really want the overflow behavior, you can wrap the expression in unchecked, as in unchecked(0xFFFF * 0xFFFF * 0xFFFF), but that's likely not what you want. You get the overflow because the literals are interpreted as Int32 and not as ulong, and 0xFFFF * 0xFFFF * 0xFFFF does not fit in a 32-bit integer, since it is approximately 2^48.

public const ulong MaxValues1 = 0xFFFFul * 0xFFFFul * 0xFFFFul;
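To see concretely why unchecked is likely not what you want, here is a small runnable sketch (class and field names are illustrative): the unchecked product wraps in 32-bit arithmetic, so only the low 32 bits of the true result survive.

```csharp
using System;

class OverflowDemo
{
    // With ul suffixes the whole product is computed as ulong: no overflow.
    public const ulong MaxValues1 = 0xFFFFul * 0xFFFFul * 0xFFFFul;

    // unchecked suppresses the compile-time error, but the literals are still
    // ints, so the product wraps modulo 2^32 and only the low bits remain.
    public const ulong Wrapped = unchecked(0xFFFF * 0xFFFF * 0xFFFF);

    static void Main()
    {
        Console.WriteLine($"0x{MaxValues1:X}"); // 0xFFFD0002FFFF
        Console.WriteLine($"0x{Wrapped:X}");    // 0x2FFFF -- low 32 bits only
    }
}
```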
2012-04-05 00:19
by CodesInChaos
'and thus doesn't fit a long'. Might want to change that statement - Kendall Frey 2012-04-05 00:23
@KendallFrey I already reformulated that part, and wasn't sure if I wrote doesn't fit a long or doesn't fit an int. long is of course wrong - CodesInChaos 2012-04-05 00:26


2

By default, integer literals are of type int. You can add the 'UL' suffix to change them to ulong literals.

public const ulong MaxValues1 = 0xFFFFUL * 0xFFFFUL * 0xFFFFUL;

public const ulong MaxValues2 = 0xFFFD0002FFFFUL;
2012-04-05 00:21
by Kendall Frey


1

I think it's actually not a ulong until you assign it at the end; try

public const ulong MaxValues1 = (ulong)0xFFFF * (ulong)0xFFFF * (ulong)0xFFFF;

i.e. in MaxValues1 you are multiplying three 32-bit ints together, which overflows because the result is also inferred to be a 32-bit int. When you cast, the operation changes to multiplying ulongs, which won't overflow, since the result is inferred to be a ulong.

(ulong)0xFFFF * 0xFFFF * 0xFFFF;

0xFFFF * (ulong)0xFFFF * 0xFFFF;

also work, since the result type of each multiplication is promoted to the largest operand type,

but

0xFFFF * 0xFFFF * (ulong)0xFFFF;

won't work, because the first two operands are multiplied as ints first, and that product already overflows before the cast is reached.
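A runnable sketch of the above (the class name is illustrative): once one operand has been cast to ulong, the rest of the left-to-right evaluation is promoted as well, so the remaining multiplications happen in 64-bit arithmetic.

```csharp
using System;

class PromotionDemo
{
    // Casting the first (or middle) operand promotes the rest of the
    // left-to-right evaluation to ulong, so neither of these overflows:
    public const ulong A = (ulong)0xFFFF * 0xFFFF * 0xFFFF;
    public const ulong B = 0xFFFF * (ulong)0xFFFF * 0xFFFF;

    // This does NOT compile: 0xFFFF * 0xFFFF is evaluated first as int * int
    // and overflows before the cast applies.
    // public const ulong C = 0xFFFF * 0xFFFF * (ulong)0xFFFF;

    static void Main()
    {
        Console.WriteLine(A == B);     // True
        Console.WriteLine($"0x{A:X}"); // 0xFFFD0002FFFF
    }
}
```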

2012-04-05 00:20
by Luke McGregor
That got it working. The 0xFFFFs are actually hard-coded constants within another package whose source I cannot change. Adding the ulong casts seems to have fixed the problem. Thank you - George Funke 2012-04-05 00:24
@George, if this solved your problem you should click the check mark to the left of the answer - Dour High Arch 2012-04-05 00:30


1

Add the numeric suffix UL to each of the numbers. Otherwise, C# treats them as Int32.

C# - Numeric Suffixes

2012-04-05 00:21
by woodings