shannon.information {truecluster}        R Documentation
Description

These functions calculate univariate information-theoretic measures such as information, entropy and redundancy.
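As a minimal sketch of how the measures relate (assuming the truecluster package is attached; the identity is taken from the comment "Redundancy = 1 - NormalizedEntropy" in the Examples below):

library(truecluster)
p <- laplace.probability(4)                        # rep(1/4, 4)
sum(normalized.entropy(p)) + sum(redundancy(p))    # per the Examples, this should equal 1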
Usage

laplace.probability(n)
laplace.information(n)
shannon.information(p)
absolute.entropy(p)
normalized.entropy(p)
redundancy(p)
distribution.entropy(x, FUN = pnorm, ..., normalized = TRUE)
density.entropy(lower = -Inf, upper = Inf, FUN = dnorm, ..., normalized = TRUE)
Arguments

n           scalar: number of (equally likely) possibilities
p           vector of probabilities
x           sequence of values at which FUN is evaluated
lower       lower bound for integrate
upper       upper bound for integrate
FUN         a cumulative distribution function (default pnorm); for density.entropy, a density function (default dnorm)
...         further arguments passed to FUN
normalized  TRUE (default) to use normalized.entropy, FALSE to use absolute.entropy (see the sketch below)
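To illustrate how these arguments interact, here is a small sketch based on the calls in the Examples: arguments in ... are forwarded to FUN (here sd for pnorm), and normalized = FALSE switches from normalized to absolute entropy contributions.

x <- seq(-30, 30, 0.01)
sum(distribution.entropy(x, FUN = pnorm, sd = 10))                      # normalized (default)
sum(distribution.entropy(x, FUN = pnorm, sd = 10, normalized = FALSE))  # absolute entropy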
Value

Function shannon.information returns a vector of log2(p) values (or zero where p = 0). The entropy functions return a vector of entropy contributions, one for each element of p.
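As a brief illustration of the per-element return values (assuming truecluster is attached; the probability vector is the one used in the Examples), the total entropy is obtained by summing the contributions:

p <- c(0.5, 0.25, 0.15, 0.1)
shannon.information(p)       # one value per probability
normalized.entropy(p)        # one entropy contribution per probability
sum(normalized.entropy(p))   # total normalized entropy of the distribution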
Author(s)

Jens Oehlschlägel
References

MacKay, David J.C. (2003). Information Theory, Inference, and Learning Algorithms (chapter 8). Cambridge University Press.
See Also

twoway.entropy, dist.entropy, Kullback.Leibler, log, exact.margin.info
Examples

# elementary
laplace.information(2)        # = log2(n)
p <- laplace.probability(2)   # = rep(1/n, n)
p
shannon.information(p)        # = log2(p)
absolute.entropy(p)           # = p*log2(p)
normalized.entropy(p)         # = p*log2(p) / log2(n)
sum(absolute.entropy(p))      # Max = log2(n)
sum(normalized.entropy(p))    # Max = 1
sum(redundancy(p))            # Redundancy = 1 - NormalizedEntropy

laplace.information(2)
laplace.information(4)        # more categories, more information

sum(absolute.entropy(laplace.probability(2)))
sum(absolute.entropy(laplace.probability(4)))    # more categories, more entropy

sum(normalized.entropy(laplace.probability(2)))
sum(normalized.entropy(laplace.probability(4)))  # more categories, constant normalized entropy

p <- c(0.5, 0.25, 0.15, 0.1)
sum(normalized.entropy(p))    # unequal probabilities, lower entropy (normalized or not)

sum(distribution.entropy(seq(-3, 3, 0.01)))
sum(distribution.entropy(seq(-3, 3, 0.001)))
sum(distribution.entropy(seq(-30, 30, 0.01), sd=10))
sum(distribution.entropy(seq(-3, 3, 0.01), FUN=punif, -3, 3))

sum(distribution.entropy(seq(-3, 3, 0.01), normalized=FALSE))
sum(distribution.entropy(seq(-3, 3, 0.001), normalized=FALSE))
sum(distribution.entropy(seq(-30, 30, 0.01), sd=10, normalized=FALSE))
sum(distribution.entropy(seq(-3, 3, 0.01), FUN=punif, -3, 3, normalized=FALSE))

density.entropy(-3, 3)
density.entropy(-30, 30, sd=10)