Number

The number type is inhabited by real numeric values. It is implemented using double-precision (IEEE 754) floating-point numbers, which carry 53 bits of significand precision: every integer in the interval $$\left[-2^{53},2^{53}\right]$$ is represented exactly, while integers of larger magnitude may not be representable, so arithmetic beyond that range can silently lose precision.