What is the rationale behind "0xHHHHHHHH" formatted Microsoft error codes?


-5

Why does Microsoft tend to report "error codes" as hexadecimal values?

Error codes are 32-bit double word values (4-byte values). This is likely the raw integer return code of whatever C-style function has reported an error.

However, why report the error to a user in hexadecimal? The "0x" prefix is worthless, and the savings in character length is minimal. These errors end up displayed to end users in Microsoft software and even on Microsoft websites.

For example:

  • 0x80302010 is 10 characters long, and very cryptic.
  • 2150637584 is the decimal equivalent, and much more user friendly.

Is there any description of the "standard" use of a 32-bit field as an error code mechanism (possibly dividing the field into multiple fields for developer interpretation) or of the logic behind presenting a hexadecimal code to end users?

2012-04-03 20:41
by James
How is 2150637584 more user friendly? - Cornstalks 2012-04-03 20:45
You are missing a rather major insight into the way an HRESULT error code works. There are 4 pieces of information crammed into a 32-bit integer: the severity of the error, whether it is a system- or app-defined error, which facility (library) generated the error, and the error code itself. You can see those instantly when you have the hex version of the error; the decimal version requires firing up calc.exe to convert it to hex. Look in the WinError.h SDK header file for the definitions (see the decoding sketch after these comments) - Hans Passant 2012-04-03 21:25
It's more user friendly because it drops the "0x". I said END USER, not DEVELOPER, people - James 2012-04-16 02:29
End users tend to search for the error codes, so it seems to be a good kind of convention. Besides my old and hated friend, FILENOTFOUND on DLL loads, I found that such errors could happen quite often, even if as an end user I wasn't bothered to solve such problems when a program gave me this error - JackGrinningCat 2018-06-13 12:53
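
A minimal sketch of the layout described in the comment above, assuming only the standard HRESULT bit fields documented in WinError.h; the sample value 0x80070002 is just an illustration:

    #include <stdio.h>

    /* Decode the fields packed into a 32-bit HRESULT (layout per WinError.h):
     * bit 31 = severity, bit 29 = customer (app-defined), bits 16-26 = facility,
     * bits 0-15 = code. The sample value 0x80070002 means: failure, Microsoft-
     * defined, FACILITY_WIN32 (7), code 2 (ERROR_FILE_NOT_FOUND). */
    int main(void)
    {
        unsigned long hr = 0x80070002UL;        /* example HRESULT */

        unsigned severity = (hr >> 31) & 0x1;   /* 1 = failure, 0 = success */
        unsigned customer = (hr >> 29) & 0x1;   /* 1 = app-defined, 0 = Microsoft */
        unsigned facility = (hr >> 16) & 0x7FF; /* which subsystem raised it */
        unsigned code     = (unsigned)(hr & 0xFFFF); /* the underlying error code */

        printf("HRESULT  0x%08lX\n", hr);
        printf("severity %u, customer %u, facility %u, code %u\n",
               severity, customer, facility, code);
        return 0;
    }

In the hex form 0x80070002, the facility (the 8007 part) and the code (0002) are visible at a glance; in the decimal form 2147942402, none of that structure survives.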


3

We can only guess about the reason, so this question cannot be answered for sure. But let's guess:

One reason might be that with hex numbers you know the number will have 8 digits. If it has more or fewer digits, the number is "corrupt" (for example, the customer mistyped it). With decimal numbers, the number of digits for the same range of values varies.

Also, to a developer, hex numbers are more convenient and natural than decimal numbers. For example, if some information is encoded as bit flags, you can easily decipher it by hand in hex, but not in decimal.
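
A small sketch of that bit-flag point; the flag names and values here are made up purely for illustration:

    #include <stdio.h>

    /* Hypothetical flag values, made up purely for illustration. */
    #define FLAG_READ    0x0001
    #define FLAG_WRITE   0x0002
    #define FLAG_HIDDEN  0x0010
    #define FLAG_SYSTEM  0x0100

    int main(void)
    {
        unsigned flags = FLAG_READ | FLAG_HIDDEN | FLAG_SYSTEM;

        printf("hex:     0x%04X\n", flags);  /* 0x0111 - each set flag is obvious */
        printf("decimal: %u\n", flags);      /* 273    - tells you nothing by eye */
        return 0;
    }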

2012-04-03 20:49
by DarkDust


3

Whether hexadecimal or decimal error codes are more user friendly is somewhat subjective. Here is a scenario where hexadecimal error codes are significantly more convenient, which could be part of the reason they are used in the first place.

Consider the documentation for Win32 Error Codes for Active Directory Service Interfaces: ADSI uses error codes of the form 0x8007XXXX, where the XXXX corresponds to a DWORD value that maps to a Win32 error code.

This makes it extremely easy to get the corresponding Win32 error code, because you can just read off the last four hex digits. This would not be possible with a decimal representation.
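
A quick sketch of that shortcut; the sample value 0x80070005 (Win32 code 5, ERROR_ACCESS_DENIED) is just an example:

    #include <stdio.h>

    int main(void)
    {
        /* Example ADSI-style error 0x8007XXXX with Win32 code 0x0005
         * (ERROR_ACCESS_DENIED) in the low word. */
        unsigned long adsiError = 0x80070005UL;

        /* In hex you just read the last four digits (0005); in code,
         * that is the low 16 bits. */
        unsigned win32Code = (unsigned)(adsiError & 0xFFFF);

        printf("ADSI error:  0x%08lX\n", adsiError);    /* 0x80070005 */
        printf("Win32 code:  %u\n", win32Code);         /* 5 */
        printf("decimal:     %lu\n", adsiError);        /* 2147942405 - no shortcut */
        return 0;
    }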

2012-04-03 20:53
by Andrew Clark


0

The middle-ground answer would be that formatting the number like an IPv4 address would be more user-friendly while preserving some structure that helps the developers.

Although TBH I think hex is fine: the hypothetical non-technical user has no more idea what 0x1234ABCD means than 1234101112 or "Cracked gangle pin on fwip valve".
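
For what it's worth, a sketch of that IPv4-style idea, purely hypothetical and not anything Microsoft actually does, using the question's own example value:

    #include <stdio.h>

    /* Hypothetical "dotted" rendering of a 32-bit error code, in the spirit of
     * the IPv4-style suggestion above; not an actual Microsoft format. */
    static void print_dotted(unsigned long code)
    {
        printf("%lu.%lu.%lu.%lu\n",
               (code >> 24) & 0xFF,
               (code >> 16) & 0xFF,
               (code >> 8)  & 0xFF,
               code & 0xFF);
    }

    int main(void)
    {
        print_dotted(0x80302010UL);   /* prints 128.48.32.16 */
        return 0;
    }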

2012-10-15 12:03
by John U