Gaussian mixture model for multi-dimensional data.
Instance Methods:

__init__(self, X, K, train=True, axis=None)
    x.__init__(...) initializes x; see help(type(x)) for signature.
datapoint(self, m, k) -> numpy array
    Training point number m as if it belonged to component k.
del_cache(self)
    Clear the model parameter cache (force recalculation).
em(self, n_iter=100, eps=1e-30)
    Expectation maximization.
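No detail is given for em beyond its signature. As an illustration only (a minimal sketch of EM for an isotropic Gaussian mixture over (M, D) data, reusing the documented attribute names means, sigma and w — not this class's actual implementation):

```python
import numpy as np

def em_isotropic_gmm(X, K, n_iter=100, eps=1e-30, means0=None, seed=0):
    """Sketch of EM for an isotropic Gaussian mixture (illustrative only).

    X: (M, D) data, K: number of components. Returns (means, sigma, w)
    matching the shapes documented for the class attributes.
    """
    rng = np.random.default_rng(seed)
    M, D = X.shape
    if means0 is None:
        means0 = X[rng.choice(M, K, replace=False)]   # K samples from X
    means = np.asarray(means0, dtype=float)           # (K, D)
    sigma = np.full(K, X.var())                       # per-component variance
    w = np.full(K, 1.0 / K)                           # component weights

    for _ in range(n_iter):
        # E-step: responsibilities r[m, k] proportional to
        # w_k * N(x_m | mu_k, sigma_k * I)
        delta = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        log_p = (np.log(w) - 0.5 * D * np.log(2 * np.pi * sigma)
                 - delta / (2 * sigma))
        log_p -= log_p.max(axis=1, keepdims=True)     # numerical stabilization
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0) + eps                      # eps guards division by 0
        w = nk / M
        means = (r.T @ X) / nk[:, None]
        delta = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        sigma = (r * delta).sum(axis=0) / (D * nk) + eps

    return means, sigma, w
```

Here n_iter and eps mirror the documented defaults, but eps is used only to guard divisions; the real method may interpret it differently (e.g. as a convergence threshold).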
estimate_means(self)
    Update means from the current model and samples.
overlap(self, other) -> float in interval [0.0, 1.0]
    Similarity of two mixtures measured in membership overlap.
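overlap is documented only as a membership-overlap similarity in [0.0, 1.0]. One plausible reading — an assumption, not the library's definition — compares hard membership arrays up to a permutation of the component labels:

```python
import numpy as np
from itertools import permutations

def membership_overlap(m1, m2, K):
    """Illustrative similarity in [0, 1] of two hard membership arrays.

    m1, m2: (N,) integer component labels. Component labels are arbitrary,
    so take the best agreement over all K! label permutations. A sketch of
    the idea behind overlap(), not the class's actual code.
    """
    m1, m2 = np.asarray(m1), np.asarray(m2)
    best = 0.0
    for perm in permutations(range(K)):
        relabeled = np.asarray(perm)[m2]      # apply label permutation to m2
        best = max(best, float(np.mean(m1 == relabeled)))
    return best
```

The brute-force search over permutations is only practical for small K; a real implementation would likely use soft memberships or an assignment algorithm instead.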
randomize_means(self)
    Pick K samples from X as means.
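The documented behaviour of randomize_means — pick K samples from X as means — can be sketched as a standalone helper (illustrative, not the class method itself):

```python
import numpy as np

def randomize_means(X, K, rng=None):
    """Pick K distinct rows of X as initial component means.

    A sketch of the documented behaviour; the class method operates on
    its own X and K and stores the result internally.
    """
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(X), size=K, replace=False)   # K distinct indices
    return X[idx]
```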
randomize_scales(self, ordered=True)
    Random scales initialization.
Inherited from object: __delattr__, __format__, __getattribute__, __hash__,
__new__, __reduce__, __reduce_ex__, __repr__, __setattr__, __sizeof__,
__str__, __subclasshook__
Properties:

BIC : float
    Bayesian information criterion, calculated as
    BIC = M * ln(sigma_e^2) + K * ln(M).
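The documented formula can be evaluated directly. Here sigma_e^2 stands for the model's error variance; how the class computes it is not shown, so treat this as a sketch of the formula only:

```python
import numpy as np

def bic(M, K, sigma_e_sq):
    """BIC = M * ln(sigma_e^2) + K * ln(M), as given in the attribute docs.

    M: number of data points, K: number of components,
    sigma_e_sq: error variance of the fitted model (assumed given).
    """
    return M * np.log(sigma_e_sq) + K * np.log(M)
```

Lower BIC is better; the K * ln(M) term penalizes adding components.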
K : int
    Number of components.

M : int
    Number of data points.

N : int
    Length of the component axis.
delta : (N, K) numpy array
    Squared "distances" between data and components.
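With (N, D) data points and (K, D) component means, the (N, K) matrix of squared distances that delta describes can be computed by broadcasting. This is a sketch of the computation, not the class's cached property:

```python
import numpy as np

def squared_distances(X, means):
    """(N, K) matrix of squared Euclidean distances between the N data
    points in X (shape (N, D)) and the K component means (shape (K, D))."""
    diff = X[:, None, :] - means[None, :, :]   # broadcast to (N, K, D)
    return (diff ** 2).sum(axis=2)             # sum over the D coordinates
```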
dimension : int
    Dimensionality of the mixture domain.

log_likelihood : float
    Log-likelihood of the extended model (with indicators).

log_likelihood_reduced : float
    Log-likelihood of the marginalized model (no auxiliary indicator
    variables).
means : (K, ...) numpy array

membership : (N,) numpy array
    Membership array.
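membership is documented only as an (N,) "Membership array". A plausible reading — an assumption, not confirmed by these docs — is the index of the closest (most responsible) component per data point, derivable from the (N, K) delta matrix:

```python
import numpy as np

def hard_membership(delta):
    """(N,) index of the nearest component for each data point, given an
    (N, K) matrix of squared distances. One plausible reading of the
    documented membership attribute; treat it as an assumption."""
    return np.asarray(delta).argmin(axis=1)
```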
scales : (K, N) numpy array

sigma : (K,) numpy array
    Component variations.

w : (K,) numpy array
    Component weights.

Inherited from object: __class__