Subclasses of Class Function used to Implement Transformations of Statistical Models
Abstract
A library of software for inductive inference guided by the Minimum Message Length (MML) principle was created previously. It contains various (object-oriented) classes and subclasses of statistical Model and can be used to infer Models from given data sets in machine learning problems. Here transformations of statistical Models are considered and implemented within the library so as to have desirable properties from the object-oriented programming and mathematical points of view. The subclasses of class Function needed to perform such transformations are defined.
keywords: Statistical Model, transformation, class Function, machine learning, inference, information, MML
1 Introduction
A library of software based on the Minimum Message Length (MML) principle [1, 2] was created previously [3, 4] for use in writing programs to solve machine learning problems, that is, to infer statistical Models from given data sets (see https://www.cantab.net/users/mmlist/MML/A/ for source-code and documentation). It follows on from an earlier prototype [5]. The software defines various classes (in the object-oriented sense of the word “class”) of statistical Model which can be used independently or combined to create structured Models. Here mathematical transformations of statistical Models are added to the library; the desirable properties of such transformations are considered, and implementing them relies on defining and using certain subclasses of class Function.
MML is a Bayesian method of inference devised by Wallace and Boulton [1] in the 1960s, their initial application being mixture modelling (clustering, unsupervised classification [6]). MML was subsequently developed both theoretically and practically and has been used on many and varied problems [2] including, but not limited to, megalithic astronomical alignments [7], factor analysis [8], decision-trees [9] and protein structural alignments [10]. In general “Strict” MML inference is NP-hard [11, 12] but there are good and efficient approximations [13, 2] for many cases. MML can be seen as a realisation of Ockham’s razor [14, 3]. The fact that it measures the complexity of statistical Models and of data in the same units makes it particularly suitable for choosing between competing models and for implementing structured Models.
MML inference [1, 2] relies on Bayes’s theorem [15] (typical usage is ‘pr’ for probability, ‘m’ for a parameterised Model, ‘sp’ for statistical parameters, ‘upm’ for an unparameterised Model, ‘ps’ for miscellaneous parameters, ‘X’ for a data space, ‘d’ for a datum, and ‘ds’ for a data set of several data)

Pr(H&D) = Pr(H)·Pr(D|H) = Pr(D)·Pr(H|D)        (1)
and on Shannon’s mathematical theory of communication [16], hence “message”,

I(E) = -log2 Pr(E)        (2)
where I(E) is the information content, or message length, of an event E. Base-two logarithms measure information in ‘bits’ and natural logarithms measure information in ‘nits’ (natural bits). Writing I(E) for -log Pr(E), it follows that

I(H&D) = I(H) + I(D|H)        (3)
for Model (hypothesis) H and data set D. The message length of H and of D together is the length of a two-part message: first transmit H and then transmit D assuming that H is true. Minimising the total message length I(H) + I(D|H) clarifies the trade-off between Model complexity, I(H), and its fit to the data, I(D|H); the Model that achieves the minimum is the best Model, the best answer to the inference problem being posed. (A one-part message may be shorter, although by a surprisingly small amount, but it does not provide an answer to any inference problem.) Note that MML considers the accuracy of measurement of continuous data (section 4) and the optimal precision to which parameters should be stated, so every continuous datum, and parameter, has a probability, not just a probability density. This is one of the reasons that, in general, MML is not the same as maximum a posteriori (MAP) estimation. Even a discrete-valued parameter of a Model may have an optimal precision that is less than its data type would allow.
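The two-part trade-off can be illustrated numerically. The following is a minimal Python sketch, not the library's code; the data counts and the 7-bit parameter cost are arbitrary illustrative assumptions:

```python
from math import log2

# Hypothetical coin-flip data set: the trade-off between Model complexity
# I(H) and fit I(D|H) decides between a 'fair coin' and a biased coin.
heads, tails = 80, 20

def nl_pr(p, h, t):
    # -log2 Pr(data | p), the second part of the message, in bits
    return -(h * log2(p) + t * log2(1 - p))

# H1: 'fair coin' -- nothing to estimate, so I(H1) is taken as 0 bits.
msg1 = 0.0 + nl_pr(0.5, heads, tails)

# H2: biased coin with p stated to an assumed 7 bits of precision.
p_hat = heads / (heads + tails)
msg2 = 7.0 + nl_pr(p_hat, heads, tails)

assert msg2 < msg1  # here the extra 7 bits spent on p are repaid by a better fit
```

With these numbers the biased-coin hypothesis wins: its two-part message is shorter despite the cost of stating the parameter.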
UPModel {estimator(ps); ...}
 |
 |--Discretes
 |   |
 |   |--MultiState
 |       etc.
 |
 |--ByPdf
     |
     |--Continuous
     |   |
     |   |--NormalUPM  -- (Gaussian)
     |       etc.
     |
     |--R_D  -- multivariate, R^D
     etc.

Figure 1: class UPModel
It is not the place of this paper to argue for the usefulness of transformations of probability distributions or for a certain transformed distribution being a good fit for a particular kind of data – these have been well-studied by statisticians. Rather it examines a way of implementing such transformations more expressively in programming languages.
2 The Kinds of Model and Associates
There are two stages of Model – unparameterised Models, which are instances of class UPModel (on a suitable device, though probably not in an email reader, hyperlinks in this pdf such as UPModel lead to online documentation and software within https://www.cantab.net/users/mmlist/MML/A/), and parameterised Models, which are instances of class Model.
An unparameterised Model upm can be applied to appropriate statistical parameter(s) sp to create a parameterised Model upm(sp); for example, Normal(μ, σ) is the Normal Model (Gaussian distribution) with mean μ and standard deviation σ. An unparameterised Model has problem-defining parameters, for example, Bounded has the bounds of its data space. Problem-defining parameters are given, not estimated. For some, such as the Normal distribution, the problem-defining parameters are trivial, and in such cases a single instance of the unparameterised Model is sufficient. An unparameterised Model (figure 1) can create parameterised Models (figure 2). Hopefully it can also estimate (figure 3) a parameterised Model to fit a given data set ds.
Model {pr(d); nlPr(d); ...}
 |
 |--Discretes.M  -- e.g. "fair coin"
 |     etc.
 |
 |--ByPdf.M {pdf(d); ...}
     |
     |--Continuous.M
     |     etc.
     |
     |--R_D.M  -- multivariate, R^D
     etc.

Figure 2: class Model
A parameterised Model has statistical parameters; for example, the Normal has its mean and standard deviation. In some cases statistical parameters are trivial as, for example, in a Uniform distribution, but statistical parameters are generally estimated from a given data set, although they can also be given, as with N(0, 1). In accord with the MML framework [1, 2], an estimated Model m has a message length, I(m), and the data set ds from which it was estimated has a message length, I(ds|m), calculated under the assumption that the Model is true. An MML estimator attempts to find a Model to minimise the two-part message length, I(m) + I(ds|m). (In the case of given statistical parameters, I(m) is zero as the parameters are common knowledge.)
The principal responsibility of a parameterised Model m is to give the probability pr(d) and negative log probability nlPr(d) of a datum d from m’s data space. (Many calculations in the implementation are actually done in terms of negative log probabilities as those quantities are typically more manageable than plain probabilities; similar considerations may apply to probability densities.) It may also do other things such as generate a random value, random(), from its probability distribution.
Estimator {
  ds2Model(ds);  -- data set -> Model
  ...}

Figure 3: class Estimator
An Estimator (figure 3) may have parameters to control its actions, for example, to set the amount of lookahead in a search algorithm, or to set a prior distribution on the statistical parameters of the Models that it will estimate.
A further important class in the software is Function (figure 4). The library includes a simple interpreter for the λ-calculus [17] and Functions can be defined by λ-expressions. Functions can also be defined by Native code, that is, Java code. Functions of Continuous data – Cts2Cts, in mathematics R → R, and CtsD2CtsD, R^D → R^D – will be important later. Note that a UPModel is a Function because it can be applied to statistical parameters to return a parameterised Model, and an Estimator is a Function because it can be applied to a data set to return a parameterised Model. Models, Functions, and hence UPModels and Estimators, are first-class Values.
Function {apply(d); ...}
 |
 |--Lambda
 |
 |--Native
 |
 |--Cts2Cts   {apply_x(x);
 |             d_dx(); ...}   -- df/dx
 |
 |--CtsD2CtsD {J();           -- Jacobian
 |             nlJ(); ...}    -- -log |J|
 |
 |--UPModel
 |
 |--Estimator
     etc.

interface HasInverse {inverse();}

Figure 4: class Function
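The shape of this hierarchy can be sketched in code. The following is a minimal illustrative Python rendering (the library itself is Java), with names taken from figure 4:

```python
# A minimal sketch of figure 4's Function hierarchy -- illustrative only,
# not the library's Java implementation.
import math

class Function:
    def apply(self, d):
        raise NotImplementedError

class Cts2Cts(Function):              # functions R -> R
    def d_dx(self):                   # the derivative df/dx, itself a Cts2Cts
        raise NotImplementedError

class Recip(Cts2Cts):                 # x |-> 1/x
    def apply(self, d): return 1.0 / d

class Exp(Cts2Cts):
    def apply(self, d): return math.exp(d)
    def d_dx(self): return Exp()      # d(e^x)/dx = e^x
    def inverse(self): return Log()   # plays the role of HasInverse

class Log(Cts2Cts):
    def apply(self, d): return math.log(d)
    def d_dx(self): return Recip()    # d(log x)/dx = 1/x
    def inverse(self): return Exp()   # plays the role of HasInverse

log_f = Log()
assert abs(log_f.inverse().apply(log_f.apply(2.0)) - 2.0) < 1e-12
assert log_f.d_dx().apply(4.0) == 0.25
```

Making the derivative itself a Cts2Cts, as in d_dx() above, is what lets transformed Models compute the pdf adjustments of sections 4 and 5 compositionally.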
3 Model Transformations
Perhaps the most widely known transformed Model is the log-Normal probability distribution: for a data set ds = d_1, d_2, ..., each d_i > 0, it assumes that the values log(d_i) are modelled by a Normal distribution. Conversely, to generate a random value from a log-Normal, generate a random value v from the underlying Normal distribution and apply log’s inverse to v, that is, return e^v. We will see more of the log-Normal in section 4.
Figure 5: estimating and transforming commute
In general, a Model can be transformed by a one-to-one function, f, having an inverse, f⁻¹. Both unparameterised and parameterised Models can be transformed: upmf = upm.transform(f) remains an unparameterised Model and mf = m.transform(f) is a parameterised Model. We also have that, as distributions, transforming with f and parameterising with sp commute
upm.transform(f)(sp) = (upm(sp)).transform(f)        (4)
the left and right sides of equation 4 have different histories but effect the same probability distribution. Similarly, as distributions, estimating (on appropriate data) and transforming commute (figure 5)
upm.transform(f).estimator(ps)(ds) = (upm.estimator(ps)(f(ds))).transform(f)        (5)
In addition,
I(ds) = I(f(ds))        (6)
that is, the amounts of information in ds (under mf) and in f(ds) (under m) are the same. Conditions (4), (5) and (6) are a kind of “invariance” of probability distributions (Models). Of course a general transformation operation of an arbitrary Model by an arbitrary one-to-one Function (of the right kind) cannot possibly know what are equivalent transformations, if any, on the statistical parameters of the arbitrary Model.
Given a data set, ds, upmf’s estimator operates by applying f to all members (d_i) of ds and giving the result to upm’s estimator. Model mf generates a value by getting m to generate one and applying f⁻¹ to it. (It is a quirk of common usage that, when transforming a Model with Function f, one applies f to data and f⁻¹ to random values generated by the Model.)
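The transformed estimator and generator just described can be sketched for the log-Normal case. This is illustrative Python with assumed helper names, not the library's API; a maximum-likelihood Normal fit stands in for upm's estimator:

```python
# Sketch of a transformed estimator/generator: to estimate, apply f = log
# to the data and fit the underlying Normal; to generate, apply f^-1 = exp.
import math, random

def fit_normal(xs):
    # ML estimate of (mu, sigma); stands in for the underlying estimator
    mu = sum(xs) / len(xs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return mu, sigma

def fit_log_normal(ds):
    # the transformed estimator: apply f = log to every d_i first
    return fit_normal([math.log(d) for d in ds])

def random_log_normal(mu, sigma, rng):
    # mf generates by getting the Normal to generate, then applying f^-1
    return math.exp(rng.gauss(mu, sigma))

ds = [1.0, 2.0, 4.0, 8.0]
mu, sigma = fit_log_normal(ds)
assert abs(mu - 1.5 * math.log(2)) < 1e-9   # mean of the logged data
assert random_log_normal(mu, sigma, random.Random(1)) > 0.0
```

Note that generated values are always positive, as a log-Normal requires, precisely because f⁻¹ = exp is applied last.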
Since the transforming function must be one-to-one, in the case of a discrete data space and its Models such a function must either permute the data space in some simple way or set up a one-to-one correspondence with a space of the same size; continuous data spaces (section 4 and section 5) are more interesting.
4 Continuous Models
First note that, due to the properties of object-oriented programming, an unparameterised ‘Continuous’ Model – one of continuous data – is a subclass of UPModel. Hence an instance of Continuous can be transformed by treating it just like any other UPModel (section 3); however, the transformed result is then just a UPModel and is not an instance of the Continuous subclass. If we want the transformed Continuous itself to also be an instance of Continuous, a little more work is required. In particular a pdf must be defined for the parameterised transformed Continuous. For example, we would like log-Normal to be a Continuous, not just a UPModel, see figure 1.
Figure 6: the accuracy of measurement of f(d)
Each continuous datum, d, has a nominal value and an accuracy of measurement (AoM), ε, and stands for d ± ε/2. Usually it is assumed that ε is small and that the pdf varies little across d ± ε/2, so that ε·pdf(d) is a good approximation to Pr(d). When a continuous function f is applied to d the result has an AoM of ε·|f′(d)|, where f′ is the derivative of f (mathematically, most functions in R → R are neither continuous nor differentiable but in practice most of those that we are interested in are both continuous and differentiable over all or most of their domain): if the exact value of d can be somewhere in a range of ε, f(d) can be somewhere in a range of ε·|f′(d)| (figure 6). For a continuous, one-to-one Function f with an inverse, the pdf of mf = m.transform(f) is
pdf_mf(d) = pdf_m(f(d)) · |f′(d)|        (7)
The factor |f′(d)| adjusts the pdf of m and, in effect, adjusts the AoM of d when f(d) is used by m. Having a pdf is sufficient to make the transformed Model an instance of Continuous.
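The AoM adjustment can be checked numerically: for small ε, the image of the interval d ± ε/2 under f has width close to ε·|f′(d)|. A minimal Python check, using f = log:

```python
# Numeric check: for small eps, f maps the interval d +/- eps/2 to an
# interval of width approximately eps * |f'(d)|.  Illustrative only.
import math

d, eps = 2.0, 1e-4
f = math.log
dfdx = lambda x: 1.0 / x          # derivative of log

width = abs(f(d + eps / 2) - f(d - eps / 2))
assert abs(width - eps * abs(dfdx(d))) < 1e-10
```

The discrepancy is of third order in ε, which is why ε·pdf(d) remains a good approximation to Pr(d) for small ε.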
In the case of the log-Normal Model, f = log; log’s inverse is exp and log’s derivative is x ↦ 1/x. In the source code:
logNormal = Normal.transform(log);
Naturally Function log has the inverse exp and the derivative x ↦ 1/x, and it turns a Model of R into one of R⁺, the positive reals.
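Equation (7) with f = log does reproduce the familiar closed form of the log-Normal pdf; a small Python check (illustrative, not library code):

```python
# Equation (7) with f = log: pdf_logNormal(d) = pdf_Normal(log d) * (1/d).
# Compare against the textbook closed form of the log-Normal pdf.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def log_normal_pdf(d, mu, sigma):
    # pdf_m(f(d)) * |f'(d)| with f = log, f'(d) = 1/d
    return normal_pdf(math.log(d), mu, sigma) * (1.0 / d)

d, mu, sigma = 3.0, 0.5, 1.2
closed_form = math.exp(-((math.log(d) - mu) ** 2) / (2 * sigma ** 2)) \
              / (d * sigma * math.sqrt(2 * math.pi))
assert abs(log_normal_pdf(d, mu, sigma) - closed_form) < 1e-12
```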
5 R^D Models
Multivariate continuous data are members of R^D for some dimension, D, and an unparameterised Model of such data is an instance of class R_D. (No(?) programming language allows R^D as an identifier; R_D is the closest we can get to it in program code.) Note that each component of a measured multivariate datum has an accuracy of measurement – an AoM, as in section 4. As an example of transformation, consider Cartesian coordinates in the plane, (x, y), and polar coordinates, (r, θ). The functions polar2cartesian and cartesian2polar effect mappings between these coordinate systems and are inverses of each other. If upm is an unparameterised Model of cartesian coordinates then upm.transform(polar2cartesian) is a Model of polar coordinates.
When transforming a univariate Continuous Model with function f, the derivative of f was used to “adjust” the accuracy of measurement of a datum. With multivariate continuous data, the Jacobian matrix of f, and its determinant, take on that role. A suitable pdf for mf = m.transform(f) is
pdf_mf(d) = pdf_m(f(d)) · |det J_f(d)|        (8)
For polar2cartesian, (x, y) = (r·cos θ, r·sin θ),

J_polar2cartesian = [ cos θ   -r·sin θ ]
                    [ sin θ    r·cos θ ]        (9)

and for cartesian2polar, (r, θ) = (sqrt(x² + y²), atan2(y, x)),

J_cartesian2polar = [  x/r    y/r  ]
                    [ -y/r²   x/r² ]        (10)
giving det J_polar2cartesian = r, det J_cartesian2polar = 1/r, and det J_polar2cartesian · det J_cartesian2polar = 1.
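These determinants are easy to verify; a small Python check (illustrative only):

```python
# Check the Jacobian determinants of the polar/cartesian mappings:
# det J_polar2cartesian = r, det J_cartesian2polar = 1/r, product = 1.
import math

def det_J_polar2cartesian(r, th):
    # J = [[cos th, -r sin th], [sin th, r cos th]]
    return math.cos(th) * (r * math.cos(th)) - (-r * math.sin(th)) * math.sin(th)

def det_J_cartesian2polar(x, y):
    r = math.hypot(x, y)
    # J = [[x/r, y/r], [-y/r^2, x/r^2]]
    return (x / r) * (x / r ** 2) - (y / r) * (-y / r ** 2)

r, th = 2.0, 0.7
x, y = r * math.cos(th), r * math.sin(th)
assert abs(det_J_polar2cartesian(r, th) - r) < 1e-12
assert abs(det_J_cartesian2polar(x, y) - 1.0 / r) < 1e-12
assert abs(det_J_polar2cartesian(r, th) * det_J_cartesian2polar(x, y) - 1.0) < 1e-12
```

That the two determinants are reciprocals reflects the functions being inverses of each other, as the inverse function theorem requires.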
A Detail
The pdf of a transformed R_D Model calls upon the pdf of the Model being transformed and the determinant of the Jacobian of the transforming Function. This does not necessarily require the AoM of each component of a transformed datum – there is already provision for Vectors where the AoM of a Vector as a whole is known but not that of each component (however every component of a measured datum does have an AoM). However some structured Models, such as Dependent and Independent, apply sub-Models to one or more components (columns, variables) of data. In such cases it may be necessary to attribute the AoM of a transformed datum among its components. This particularly arises in Estimators. Therefore the apply(.) method of a Function in CtsD2CtsD (R^D → R^D) uses the Jacobian matrix of the Function to set the ratios of the result’s components’ AoMs and uses its determinant to scale them to arrive at the correct total area (volume, …) in R^D.
The matter is relevant to Estimators because the AoM of a datum influences the amount of information in the datum, and an Estimator trades off the complexity (information) of an estimated Model against the complexity of a data set. Other things being equal, doubling the AoM of a datum reduces its information by one bit and, in the limit, to know only that d lies somewhere in its data space is to know nothing at all about d. A sub-Model may need to know how much information is in those columns of the data in which it deals.
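The one-bit claim above follows directly from I = -log2(ε·pdf(d)); a minimal Python check (the pdf value and AoM are arbitrary illustrative numbers):

```python
# Other things being equal, doubling a datum's AoM reduces its information
# content, -log2(AoM * pdf(d)), by exactly one bit.
from math import log2

pdf_at_d = 0.4   # assumed roughly constant over the AoM interval
eps = 0.01       # the AoM

def info_bits(aom):
    return -log2(aom * pdf_at_d)

assert abs(info_bits(eps) - info_bits(2 * eps) - 1.0) < 1e-12
```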
6 Conclusion
Transformations have been implemented in the MML software library for unparameterised Models and parameterised Models using a one-to-one Function f. To make the transformed Model’s random() work, f must have an inverse. For Models of continuous data f’s derivative must be defined, and for multivariate Models of continuous data f’s Jacobian must be defined. As probability distributions, parameterising and transforming a Model commute, and transforming and estimating (on corresponding data) commute. Applying such a Function to all the members of a data set leaves the information content of the data set unchanged. In the source code the definition of the log-Normal distribution is simply Normal.transform(log) (section 4).
The author is not aware of any widely used programming language where all functions (subroutines, procedures, methods), whether built-in (‘+’, ‘log’, and so on) or user-defined, are instances of an explicit ‘class Function’ (the consensus [18] seems to be that it would be possible in Haskell, say). Every function does actually have at least one method, ‘apply’. Applying apply is almost invariably implicit – f(x) or f x – it is the space between the ‘f’ and the ‘x’, and apply(f)(x) is just the same as f(x) after all. Note that Haskell [19] also has an explicit alternative (‘$’, as in f $ x) for apply. Given a ‘class Function’, subclasses and interfaces such as one-to-one, continuous, differentiable, invertible and so on are possible and, as suggested above, interesting and useful.
References
- [1] Wallace, C. S. and Boulton, D. M. (1968) An information measure for classification. The Computer Journal, 11, 185–194. https://doi.org/10.1093/comjnl/11.2.185.
- [2] Wallace, C. S. (2005) Statistical and Inductive Inference by Minimum Message Length. Springer. https://doi.org/10.1007/0-387-27656-4.
- [3] Allison, L. (2018) Coding Ockham’s Razor. Springer. https://doi.org/10.1007/978-3-319-76433-7.
- [4] Allison, L. (2018, 2020). Documentation for and source-code of MML software. https://www.cantab.net/users/mmlist/MML/A/.
- [5] Allison, L. (2005) Models for machine learning and data mining in functional programming. Journal of Functional Programming, 15, 15–32. https://doi.org/10.1017/S0956796804005301.
- [6] Jorgensen, M. A. and McLachlan, G. J. (2008) Wallace’s approach to unsupervised learning: The Snob program. The Computer Journal, 51, 571–578. https://doi.org/10.1093/comjnl/bxm121.
- [7] Patrick, J. D. and Freeman, P. R. (1988) A cluster analysis of astronomical orientations. In Ruggles, C. L. N. (ed.), Records in Stone, pp. 252–261. Cambridge University Press.
- [8] Wallace, C. S. and Freeman, P. R. (1992) Single-factor analysis by minimum message length estimation. Journal of the Royal Statistical Society, Series B, 54, 195–209. https://www.jstor.org/stable/2345956.
- [9] Wallace, C. S. and Patrick, J. D. (1993) Coding decision trees. Machine Learning, 11, 7–22. https://doi.org/10.1023/A:1022646101185.
- [10] Collier, J., Allison, L., Lesk, A., Garcia de La Banda, M., and Konagurthu, A. (2014) A new statistical framework to assess structural alignment quality using information compression. Bioinformatics, 30, i512–i518. https://doi.org/10.1093/bioinformatics/btu460.
- [11] Farr, G. E. and Wallace, C. S. (2002) The complexity of strict minimum message length inference. The Computer Journal, 45, 285–292. https://doi.org/10.1093/comjnl/45.3.285.
- [12] Dowty, J. G. (2015) SMML estimators for 1-dimensional continuous data. The Computer Journal, 58, 126–133. https://doi.org/10.1093/comjnl/bxt145.
- [13] Wallace, C. S. and Freeman, P. R. (1987) Estimation and inference by compact coding. Journal of the Royal Statistical Society, Series B, 49, 240–265. https://www.jstor.org/stable/2985992.
- [14] Spade, P. V. (1999) The Cambridge Companion to Ockham. Cambridge University Press. https://doi.org/10.1017/CCOL052158244X.
- [15] Bayes, T. (1763) An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53, 370–418. reprinted in Biometrika 45(3/4) pp.293–315, 1958, https://doi.org/10.2307/2333180.
- [16] Shannon, C. E. (1948) A mathematical theory of communication. Bell Systems Technical Journal, 27, 379–423, 623–656. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x.
- [17] Church, A. (1941) The Calculi of Lambda Conversion, Annals of Mathematical Studies, 6. Princeton University Press. https://press.princeton.edu/titles/2390.html.
- [18] various (2002). class Function? Discussion in the Haskell mailing list (29 Oct. 2002); see extract at monash.edu.
- [19] Peyton Jones, S. et al. (2003) Haskell 98 Language and Libraries, the Revised Report. Cambridge University Press. Also see haskell.org.