Specifically, we found that the number 10100401822940525 appears to simply not exist in these programming environments. You can test this for yourself; simply trace or log the following statements:
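The original statements aren't preserved in this copy, but a JavaScript sketch consistent with the behavior described below would look like this (the exact expressions are my reconstruction, not the author's):

```javascript
// Each of these logs 10100401822940524, though only the first should:
console.log(10100401822940524);            // 10100401822940524
console.log(10100401822940525);            // 10100401822940524 (!)
console.log(10100401822940524 + 1);        // 10100401822940524 (!)
console.log(Number("10100401822940525"));  // 10100401822940524 (!)
```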
All of the above statements will output “10100401822940524”, when obviously all but the first one should return “10100401822940525”.
There's no obvious significance to that value. The binary/hex representation isn't flipping any high-order bit, and there are no results for it on Google or Wolfram Alpha. Additional testing showed the same result with other numbers in this range.
I thought it might be related to the value being larger than the maximum 32-bit uint value, but adding a decimal (forcing it to be treated as a floating-point Number) does not help; it simply rounds up to “10100401822940526” instead. Number should support much larger values (up to ~1.8e308). I confirmed this with:
Number.MAX_VALUE > 10100401822940526 // outputs true
As it turns out, the issue is a side effect of how Numbers are stored. A Number is a 64-bit IEEE 754 double-precision float: only 53 bits (52 stored, plus one implicit) hold the significand, the precise digits of the value, while the remaining 11 bits hold the exponent, which sets the magnitude, and one bit holds the sign. (See the IEEE 754 double-precision format for details.) So, the largest integer we can store exactly is 2^53:
9007199254740992 < 10100401822940525

This means the precise value cannot be stored. Knowing this, we can rework our logic to store the value across two uints / Numbers.

Thanks to everyone on Twitter who provided input for this article. I'd name contributors individually, but I had a lot of duplicate responses and don't want to leave anyone out.
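As an addendum, here is a minimal JavaScript sketch of both the 2^53 boundary and one possible way to keep the value exact across two Numbers. The article doesn't show its actual rework; this version assumes the value is available as a decimal string of more than 8 digits, and the names `split53`/`join53` are illustrative, not from the original.

```javascript
// The integer precision boundary: 2^53.
console.log(Math.pow(2, 53));     // 9007199254740992
console.log(Math.pow(2, 53) + 1); // 9007199254740992 (the odd neighbor can't be represented)
console.log(Math.pow(2, 53) + 2); // 9007199254740994 (representable integers are now 2 apart)

// One hypothetical rework: keep the decimal string split into a high part
// and a low 8-digit part, each small enough to be stored exactly.
// Assumes numStr has more than 8 digits.
function split53(numStr) {
  return {
    hi: Number(numStr.slice(0, -8)),
    lo: Number(numStr.slice(-8)),
  };
}

function join53(parts) {
  // Reassemble as a string so we never round-trip through a single float.
  return String(parts.hi) + String(parts.lo).padStart(8, "0");
}

const parts = split53("10100401822940525");
console.log(parts.hi, parts.lo);  // 101004018 22940525
console.log(join53(parts));       // "10100401822940525" (exact)
```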