Statistical distribution classes.
Macros:
Definition of an event for a distribution (to abstract over continuous and discrete events).
Vector identifying a continuous event:
CapyVec vals;
Pointer toward a user data structure identifying a discrete event:
void* ptr;
Id identifying a discrete event:
size_t id;
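Assembled from the three fields above, the event type plausibly reads as follows; this is a sketch for orientation, not the exact declaration from the header:

typedef struct CapyDistEvt {
  CapyVec vals;  // vector identifying a continuous event
  void* ptr;     // pointer toward a user data structure identifying a discrete event
  size_t id;     // id identifying a discrete event
} CapyDistEvt;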
Definition of a CapyDist class (virtual parent class for all the distributions)
Type of the distribution:
CapyDistType type;
Get the probability of a given event.
Input argument(s):
evt: the event
Output and side effect(s):
Return the probability of the event.
double (*getProbability)(CapyDistEvt const* const evt);
Get the surprise of a given event.
Input argument(s):
evt: the event
Output and side effect(s):
Return the surprise of the event (h(e) = log(1/p(e))). The higher the surprise, the less probable the event.
double (*getSurprise)(CapyDistEvt const* const evt);
Get the entropy of the distribution.
Output and side effect(s):
Return the entropy (the average surprise). The higher the entropy, the more uncertain the outcome of drawing a random sample.
double (*getEntropy)(void);
Check if an event is among the most probable events up to a given threshold.
Input argument(s):
evt: the event to check
threshold: the threshold
Output and side effect(s):
Return true if the event is among the most probable events up to the threshold.
bool (*isEvtInMostProbable)(CapyDistEvt const* const evt, double const threshold);
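To make the surprise/entropy relation above concrete, here is a minimal standalone sketch (not the library's implementation) over a plain array of probabilities:

#include <math.h>
#include <stddef.h>

// Surprise of an event of probability p: h(e) = log(1/p(e)).
double surprise(double const p) {
  return log(1.0 / p);
}

// Entropy as the average surprise: H = sum_e p(e) * log(1/p(e)).
double entropy(double const* const probs, size_t const nbEvt) {
  double h = 0.0;
  for (size_t iEvt = 0; iEvt < nbEvt; ++iEvt) {
    if (probs[iEvt] > 0.0) h += probs[iEvt] * surprise(probs[iEvt]);
  }
  return h;
}

For example, a fair coin ({0.5, 0.5}) has entropy log(2) ≈ 0.693, while a strongly biased coin ({0.99, 0.01}) has entropy ≈ 0.056: the biased draw is much less uncertain.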
Definition of a CapyDistContinuous class (probability distribution for continuous random variables)
dimEvt: the number of dimensions of an event
range: the range of possible values for each dimension of an event
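Read together with the CapyDist parent class, the continuous-distribution type might look like the sketch below; the parent-class embedding and the representation of range are assumptions, as this reference does not give them:

typedef struct CapyDistContinuous {
  CapyDist dist;       // parent class (embedding assumed)
  size_t dimEvt;       // number of dimensions of an event
  double (*range)[2];  // per-dimension [min, max] of possible values (representation assumed)
} CapyDistContinuous;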
Enumerations:
Type of distributions (CapyDistType)
Typedefs:
Event for a distribution (CapyDistEvt)
CapyDist class
CapyDistContinuous class
Struct CapyDistNormal :
Struct CapyDistNormal's properties:
Mean and standard deviation (aka sigma) for each dimension. The event variable is considered isotropic (its dimensions are uncorrelated).
Struct CapyDistNormal's methods:
Destructor
Get the derivative of the probability of a given event along a given axis.
Input argument(s):
evt: the event
iAxis: the derivative axis
Output and side effect(s):
Return the derivative of the probability of the event
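As a companion to the properties and derivative method above, here is a minimal sketch (under the stated isotropy assumption; not the library's implementation, and all names are hypothetical) of the density of an isotropic normal and its derivative along one axis:

#include <math.h>
#include <stddef.h>

// Density of an isotropic normal:
// p(x) = prod_i exp(-z_i^2 / 2) / (stdDev_i * sqrt(2 * pi)), with z_i = (x_i - mean_i) / stdDev_i.
double normalPdf(double const* const x, double const* const mean,
                 double const* const stdDev, size_t const dim) {
  double const twoPi = 6.283185307179586;
  double p = 1.0;
  for (size_t i = 0; i < dim; ++i) {
    double const z = (x[i] - mean[i]) / stdDev[i];
    p *= exp(-0.5 * z * z) / (stdDev[i] * sqrt(twoPi));
  }
  return p;
}

// Derivative of the density along the axis iAxis:
// dp/dx_i = -((x_i - mean_i) / stdDev_i^2) * p(x).
double normalPdfDerivative(double const* const x, double const* const mean,
                           double const* const stdDev, size_t const dim,
                           size_t const iAxis) {
  double const d = (x[iAxis] - mean[iAxis]) / (stdDev[iAxis] * stdDev[iAxis]);
  return -d * normalPdf(x, mean, stdDev, dim);
}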
Struct CapyDistDiscreteOccurence :
Struct CapyDistDiscreteOccurence's properties:
Probability of the event
Event definition
Struct CapyDistDiscreteOccurence's methods:
None.
Struct CapyDistDiscrete :
Struct CapyDistDiscrete's properties:
Parent class
Occurrences defining the distribution
Struct CapyDistDiscrete's methods:
Destructor
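Assembled from the two struct descriptions above, the discrete-distribution types plausibly read as follows; the parent-class embedding and the container for the occurrences are assumptions:

typedef struct CapyDistDiscreteOccurence {
  double prob;      // probability of the event
  CapyDistEvt evt;  // event definition
} CapyDistDiscreteOccurence;

typedef struct CapyDistDiscrete {
  CapyDist dist;                           // parent class
  size_t nbOccurence;                      // number of occurrences (name assumed)
  CapyDistDiscreteOccurence* occurences;   // occurrences defining the distribution (name assumed)
} CapyDistDiscrete;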
Functions:
Create a CapyDistEvt
Output and side effect(s):
Return a CapyDistEvt.
Create a CapyDist
Input argument(s):
type: type of distribution
Output and side effect(s):
Return a CapyDist
Create a CapyDistContinuous
Input argument(s):
dimEvt: the dimension of the random variable describing an event
Output and side effect(s):
Return a CapyDistContinuous
Create a CapyDistNormal
Input argument(s):
dimEvent: the dimension of the random variable describing an event
mean: the means in each dimension of the random variable
stdDev: the standard deviations in each dimension of the random variable
Output and side effect(s):
Return a CapyDistNormal
Allocate memory for a new CapyDistNormal and create it
Input argument(s):
dimEvent: the dimension of the random variable describing an event
mean: the means in each dimension of the random variable
stdDev: the standard deviations in each dimension of the random variable
Output and side effect(s):
Return a CapyDistNormal
Free the memory used by a CapyDistNormal
Input argument(s):
that: the CapyDistNormal to free
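A hypothetical usage of the creation and destruction routines just described; this reference does not list the actual identifiers, so CapyDistNormalAlloc and CapyDistNormalFree below are assumed names following the allocate/create/free pattern of the descriptions above:

// Means and standard deviations for a 2D isotropic normal.
double mean[2] = {0.0, 0.0};
double stdDev[2] = {1.0, 2.0};
// Allocate memory for a new CapyDistNormal and create it (name and signature assumed).
CapyDistNormal* dist = CapyDistNormalAlloc(2, mean, stdDev);
// ... query the distribution through the CapyDist methods ...
// Free the memory used by the CapyDistNormal (name and signature assumed).
CapyDistNormalFree(&dist);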
Create a CapyDistDiscrete
Input argument(s):
nbEvt: the number of events in the distribution
Output and side effect(s):
Return a CapyDistDiscrete
Allocate memory for a new CapyDistDiscrete and create it
Input argument(s):
nbEvt: the number of events in the distribution
Output and side effect(s):
Return a CapyDistDiscrete
Free the memory used by a CapyDistDiscrete
Input argument(s):
that: the CapyDistDiscrete to free
Get the cross entropy of two discrete distributions
Input argument(s):
distA: the first distribution
distB: the second distribution
Output and side effect(s):
Return the cross entropy of distA relative to distB. It is greater than or equal to the entropy of distA, and increases with the discrepancy between the probabilities of the two distributions.
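To make the definition concrete, here is a minimal standalone sketch (not the library's implementation) of the cross entropy of two discrete distributions given as aligned probability arrays, H(A, B) = sum_e pA(e) * log(1/pB(e)):

#include <math.h>
#include <stddef.h>

// Cross entropy of distA relative to distB, both over the same nbEvt events.
// (If probsB[i] is 0 while probsA[i] > 0 the result is infinite.)
double crossEntropy(double const* const probsA, double const* const probsB,
                    size_t const nbEvt) {
  double h = 0.0;
  for (size_t iEvt = 0; iEvt < nbEvt; ++iEvt) {
    if (probsA[iEvt] > 0.0) h += probsA[iEvt] * log(1.0 / probsB[iEvt]);
  }
  return h;
}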
Get the KL divergence of two discrete distributions
Input argument(s):
distA: the first distribution
distB: the second distribution
Output and side effect(s):
Return the KL divergence of distA relative to distB (equal to the cross entropy of (distA, distB) minus the entropy of distA). If the distributions are the same it returns 0.0; the more they diverge, the higher the returned value.
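Following the relation just stated, the KL divergence reduces to a one-liner on top of the crossEntropy and entropy helpers sketched earlier in this section:

// KL divergence of distA relative to distB:
// D_KL(A || B) = H(A, B) - H(A) = sum_e pA(e) * log(pA(e) / pB(e)).
double klDivergence(double const* const probsA, double const* const probsB,
                    size_t const nbEvt) {
  return crossEntropy(probsA, probsB, nbEvt) - entropy(probsA, nbEvt);
}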