It is not the first time that a user has been confused by this R output:
> is.integer(1)
[1] FALSE
The reason is that the type of such a number is actually:
> typeof(1)
[1] "double"
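For comparison, an integer literal written with the L suffix, or an explicit as.integer coercion, should come out as an actual integer:
> typeof(1L)
[1] "integer"
> is.integer(1L)
[1] TRUE
> is.integer(as.integer(1))
[1] TRUE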
The help page of is.integer actually does a good job of pointing this out. However, that same help page makes an interesting suggestion for a function is.wholenumber, whose purpose is to check whether the value of a number (as opposed to its type) is an integer:
> is.wholenumber <-
+ function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol
> is.wholenumber(1)
[1] TRUE
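To see what such a tolerance buys, take a value that is mathematically a whole number but picks up a tiny rounding error in floating point, say sqrt(2)^2 (my example, not from the help page); an exact comparison should reject it, while is.wholenumber should accept it:
> x <- sqrt(2)^2   # mathematically 2, but stored with a tiny rounding error
> x == 2
[1] FALSE
> is.wholenumber(x)
[1] TRUE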
Now the question is: why this tolerance of \(\sqrt{\epsilon}\)?
I probably would have implemented this function in a more naïve way:
> my.is.wholenumber <- function(x) x==round(x)
i.e., use a tolerance of zero. Maybe a tolerance equal to the machine precision .Machine$double.eps could be useful (I am not sure of that), but a tolerance of \(\sqrt{\epsilon}\)? I came up with this hypothesis: \(\epsilon \approx 2.2\times 10^{-16}\), which corresponds to roughly 16 decimal places, so its square root \(\sqrt{\epsilon} \approx 1.5\times 10^{-8}\) corresponds to roughly 7 or 8 decimal places. The default number of significant digits in a print statement is 7:
> .Machine$double.eps
[1] 2.220446e-16
> .Options$digits
[1] 7
So maybe the goal was to keep the print precision and the behavior of is.wholenumber in sync:
> 1.000005
[1] 1.000005
> 1.0000000005
[1] 1
> is.wholenumber(1.000005)
[1] FALSE
> is.wholenumber(1.0000000005)
[1] TRUE
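As a side note, that rounding is purely a display effect; asking print for more significant digits should reveal the stored value:
> print(1.0000000005, digits = 15)
[1] 1.0000000005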
It does not work perfectly:
> 1.00000005
[1] 1
> is.wholenumber(1.00000005)
[1] FALSE
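My best guess at the mismatch: print rounds to 7 significant digits, so 1.00000005 is displayed as 1, but its distance to the nearest whole number is about 5e-08, which is still larger than \(\sqrt{\epsilon} \approx 1.5\times 10^{-8}\). A quick check along these lines:
> sqrt(.Machine$double.eps)
[1] 1.490116e-08
> abs(1.00000005 - round(1.00000005)) < sqrt(.Machine$double.eps)
[1] FALSE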
Any better explanations around?