Library of Congress Cataloging-in-Publication Data
Names: Dougherty, Edward R., author.
Title: Optimal signal processing under uncertainty / Edward R. Dougherty.
Description: Bellingham, Washington, USA : SPIE Press, [2018] | Includes bibliographical references.
Identifiers: LCCN 2018001894 | ISBN 9781510619296 (softcover) | ISBN 1510619291 (softcover) | ISBN 9781510619302 (PDF) | ISBN 1510619305 (PDF) | ISBN 9781510619319 (ePub) | ISBN 1510619313 (ePub) | ISBN 9781510619326 (Kindle/Mobi) | ISBN 1510619321 (Kindle/Mobi)
Subjects: LCSH: Signal processing–Mathematics. | Mathematical optimization.
Classification: LCC TK5102.9 .D66 2018 | DDC 621.382/2015196–dc23
LC record available at https://lccn.loc.gov/2018001894
Published by SPIE, P.O. Box 10, Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: [email protected]
Web: http://spie.org
Copyright © 2018 Society of Photo-Optical Instrumentation Engineers (SPIE) All rights reserved. No part of this publication may be reproduced or distributed in any form or by any means without written permission of the publisher. The content of this book reflects the work and thought of the author. Every effort has been made to publish reliable and accurate information herein, but the publisher is not responsible for the validity of the information or for any outcomes resulting from reliance thereon. Printed in the United States of America. First Printing. For updates to this book, visit http://spie.org and type “PM287” in the search field.
To Edward Townes Dougherty
Contents

Preface
Acknowledgments

1 Random Functions
  1.1 Moments
  1.2 Calculus
  1.3 Three Fundamental Processes
    1.3.1 Poisson process
    1.3.2 White noise
    1.3.3 Wiener process
  1.4 Stationarity
  1.5 Linear Systems

2 Canonical Expansions
  2.1 Fourier Representation and Projections
  2.2 Constructing Canonical Expansions
  2.3 Orthonormal Coordinate Functions
    2.3.1 Canonical expansions and the Parseval identity
    2.3.2 Karhunen–Loève expansion
  2.4 Derivation from a Covariance Expansion
  2.5 Integral Canonical Expansions
  2.6 Expansions of WS Stationary Processes
    2.6.1 Power spectral density and linear operators
    2.6.2 Integral representation of a WS stationary process

3 Optimal Filtering
  3.1 Optimal Mean-Square-Error Filters
  3.2 Optimal Finite-Observation Linear Filters
    3.2.1 Orthogonality principle
    3.2.2 Optimal filter for linearly independent observations
    3.2.3 Optimal filter for linearly dependent observations
  3.3 Optimal Linear Filters for Random Vectors
  3.4 Recursive Linear Filters
    3.4.1 Recursive generation of direct sums
    3.4.2 Static recursive optimal linear filtering
    3.4.3 Dynamic recursive optimal linear filtering
  3.5 Optimal Infinite-Observation Linear Filters
    3.5.1 Wiener–Hopf equation
    3.5.2 Wiener filter
  3.6 Optimal Filtering via Canonical Expansions
    3.6.1 Integral decomposition into white noise
    3.6.2 Extended Wiener–Hopf equation
    3.6.3 Solution via discrete canonical expansions
  3.7 Optimal Morphological Bandpass Filters
    3.7.1 Granulometries
    3.7.2 Optimal granulometric bandpass filters
    3.7.3 Reconstructive granulometric bandpass filters
  3.8 General Schema for Optimal Design

4 Optimal Robust Filtering
  4.1 Intrinsically Bayesian Robust Filters
    4.1.1 Effective characteristics
    4.1.2 IBR linear filtering
    4.1.3 IBR granulometric bandpass filters
    4.1.4 The general schema under uncertainty
  4.2 Optimal Bayesian Filters
  4.3 Model-Constrained Bayesian Robust Filters
  4.4 Robustness via Integral Canonical Expansions
    4.4.1 Robustness in the blurred-signal-plus-noise model
    4.4.2 Robustness relative to the power spectral density
    4.4.3 Global filters
  4.5 Minimax Robust Filters
    4.5.1 Minimax robust granulometric bandpass filters
  4.6 IBR Kalman Filtering
    4.6.1 IBR Kalman predictor
    4.6.2 IBR Kalman filter
    4.6.3 Robustness of IBR Kalman filtering
  4.7 IBR Kalman–Bucy Filtering

5 Optimal Experimental Design
  5.1 Mean Objective Cost of Uncertainty
    5.1.1 Utility
  5.2 Experimental Design for IBR Linear Filtering
    5.2.1 Wiener filtering
    5.2.2 WS stationary blurring-plus-noise model
  5.3 IBR Karhunen–Loève Compression
    5.3.1 IBR compression
    5.3.2 Experimental design for Karhunen–Loève compression
  5.4 Markovian Regulatory Networks
    5.4.1 Markov chain perturbations
    5.4.2 Application to Boolean networks
    5.4.3 Dynamic intervention via external control
  5.5 Complexity Reduction
    5.5.1 Optimal network reduction
    5.5.2 Preliminary node elimination
    5.5.3 Performance
  5.6 Sequential Experimental Design
    5.6.1 Experimental design using dynamic programming
  5.7 Design with Inexact Measurements
    5.7.1 Design with measurement error
    5.7.2 Design via surrogate measurements
  5.8 General MOCU-based Experimental Design
    5.8.1 Connection to knowledge gradient

6 Optimal Classification
  6.1 Bayes Classifier
    6.1.1 Discriminants
  6.2 Optimal Bayesian Classifier
    6.2.1 Bayesian MMSE error estimator
    6.2.2 Effective class-conditional densities
  6.3 Classification Rules
    6.3.1 Discrete classification
    6.3.2 Quadratic discriminant analysis
  6.4 OBC in the Discrete and Gaussian Models
    6.4.1 Discrete model
    6.4.2 Gaussian model
  6.5 Consistency
  6.6 Optimal Sampling via Experimental Design
  6.7 Prior Construction
    6.7.1 General framework
    6.7.2 From prior knowledge to constraints
    6.7.3 Dirichlet prior distribution
  6.8 Epistemology
    6.8.1 RMS bounds
    6.8.2 Validity in the context of a prior distribution

7 Optimal Clustering
  7.1 Clustering
  7.2 Bayes Clusterer
    7.2.1 Clustering error
    7.2.2 Bayes clustering error
  7.3 Separable Point Processes
  7.4 Intrinsically Bayesian Robust Clusterer
    7.4.1 Effective random labeled point processes

References
Index
Preface

Whereas modern science concerns the mathematical modeling of phenomena, essentially a passive activity, modern engineering involves determining operations to actively alter phenomena to effect desired changes of behavior. It begins with a scientific (mathematical) model and applies mathematical methods to derive a suitable intervention for the given objective. Since one would prefer the best possible intervention, engineering inevitably becomes optimization and, since all but very simple systems must account for randomness, modern engineering might be defined as the study of optimal operators on random processes. The seminal work in the birth of modern engineering is the Wiener–Kolmogorov theory of optimal linear filtering on stochastic processes developed in the 1930s. As Newton’s laws constitute the gateway into modern science, the Wiener–Kolmogorov theory is the gateway into modern engineering.

The design of optimal operators takes different forms depending on the random process constituting the scientific model and the operator class of interest. The operators might be linear filters, morphological filters, controllers, classifiers, or cluster operators, each having numerous domains of application. The underlying random process might be a random signal/image for filtering, a Markov process for control, a feature-label distribution for classification, or a random point set for clustering. In all cases, operator class and random process must be united in a criterion (cost function) that characterizes the operational objective and, relative to the criterion, an optimal operator found. For the classical Wiener filter, the model is a pair of jointly distributed wide-sense stationary random signals, the objective is to estimate a desired signal from an observed signal via a linear filter, and the cost function to be minimized is the mean-square error between the filtered observation and the desired signal.
Besides the mathematical and computational issues that arise with classical operator optimization, especially with nonlinear operators, nonstationary processes, and high dimensions, a more profound issue is uncertainty in the scientific model. For instance, a long-recognized problem with linear filtering is incomplete knowledge regarding the covariance functions (or power spectra in the case of Wiener filtering). Not only must optimization be
relative to the original cost function, be it mean-square error in linear filtering or classification/clustering error in pattern recognition, optimization must also take into account uncertainty in the underlying random process. Optimization is no longer relative to a single random process but instead relative to an uncertainty class of random processes. This means the postulation of a new cost function integrating the original cost function with the model uncertainty. If there is a prior distribution (or posterior distribution if data are employed) governing likelihood in the uncertainty class, then one can choose an operator from some class of operators that minimizes the expected cost over the uncertainty class. In the absence of a prior distribution, one might take a minimax approach and choose an operator that minimizes the maximum cost over the uncertainty class. A prior (or posterior) distribution places the problem in a Bayesian framework. A critical point, and one that will be emphasized in the text, is that the prior distribution is not on the parameters of the operator model, but on the unknown parameters of the scientific model. This is natural. If the model were known with certainty, then one would optimize with respect to the known model; if the model is uncertain, then the optimization is naturally extended to include model uncertainty and the prior distribution should be on that uncertainty. For instance, in the case of linear filtering the covariance function might be uncertain, meaning that some of its parameters are unknown, in which case the prior distribution characterizes uncertainty relative to the unknown parameters. 
A basic principle embodied in the book is to express an optimal operator under the joint probability space formed from the joint internal and external uncertainty in the same form as an optimal operator for a known model by replacing the mathematical structures characterizing the standard optimal operator with corresponding structures, called effective structures, that incorporate model uncertainty. For instance, in Wiener filtering the power spectra are replaced by effective power spectra, in Kalman filtering the Kalman gain matrix is replaced by the effective Kalman gain matrix, and in classification the class-conditional distributions are replaced by effective class-conditional distributions.

The first three chapters of the book review those aspects of random processes that are necessary for developing optimal operators under uncertainty. Chapter 1 covers random functions, including the moments and the calculus of random functions. Chapter 2 treats canonical expansions for random functions, a topic often left uncovered in basic courses on stochastic processes. It treats discrete expansions within the context of Hilbert space theory for random functions, in particular, the equivalence of canonical expansions of the random function and the covariance function entailed by Parseval’s identity. It then goes on to treat integral canonical expansions in the framework of generalized functions. Chapter 3 covers the basics of
classical optimal filtering: optimal finite-observation linear filtering, optimal infinite-observation linear filtering via the Wiener–Hopf integral equation, Wiener filtering for wide-sense stationary processes, recursive (Kalman) filtering via direct-sum decomposition of the evolving observation space, and optimal morphological filtering via granulometric bandpass filters. For the most part, although not entirely, the first three chapters are a compression of Chapters 2 through 4 of my book Random Processes for Image and Signal Processing, aimed directly at providing a tight background for optimal signal processing under uncertainty, the goal being to make a one-semester course for Ph.D. students. Indeed, the book has been developed from precisely such a course, attended by Ph.D. students, post-doctoral students, and faculty.

Chapter 4 covers optimal robust filtering. The first section lays out the basic definitions for intrinsically Bayesian robust (IBR) filtering, the fundamental principle being filter optimization with respect to both internal model stochasticity and external model uncertainty, the latter characterized by a prior distribution over an uncertainty class of random-process models. The first section introduces the concepts of effective process and effective characteristic, whereby the structure of the classical solutions is retained with characteristics such as the power spectra and the Wiener–Hopf equation generalized to effective power spectra and the effective Wiener–Hopf equation, which are relative to the uncertainty class. Section 4.2 covers optimal Bayesian filters, which are analogous to IBR filters except that new observations are employed to update the prior distribution to a posterior distribution. Section 4.3 treats model-constrained Bayesian robust (MCBR) filters, for which optimization is restricted to filters that are optimal for some model in the uncertainty class.
In Section 4.4 the term “robustness” is defined quantitatively via the loss of performance and is characterized for linear filters in the context of integral canonical expansions, where random process representation is now parameterized via the uncertainty. Section 4.5 reviews classical minimax filtering and applies it to minimax morphological filtering. Sections 4.6 and 4.7 extend the classical Kalman (discrete time) and Kalman–Bucy (continuous time) recursive predictors and filters to the IBR framework, where classical concepts such as the Kalman gain matrix get extended to their effective counterparts (effective Kalman gain matrix).

When there is model uncertainty, a salient issue is the design of experiments to reduce uncertainty; in particular, which unknown parameter should be determined to optimally reduce uncertainty. To this end, Section 5.1 introduces the mean objective cost of uncertainty (MOCU), which is the expected cost increase relative to the objective resulting from the uncertainty, expectation being taken with respect to the prior (posterior) distribution. Whereas entropy is a global measure of uncertainty not related to any particular operational objective, MOCU is based directly on the engineering
objective. Section 5.2 analyzes optimal MOCU-based experimental design for IBR linear filtering. Section 5.3 revisits Karhunen–Loève optimal compression when there is model uncertainty, and therefore uncertainty as to the Karhunen–Loève expansion. The IBR compression is found and optimal experimental design is analyzed relative to unknown elements of the covariance matrix. Section 5.4 discusses optimal intervention in regulatory systems modeled by Markov chains when the transition probability matrix is uncertain and derives the experiment that optimally reduces model uncertainty relative to the objective of minimizing undesirable steady-state mass. The solution is computationally troublesome, and the next section discusses complexity reduction. Section 5.6 examines sequential experimental design, both greedy and dynamic-programming approaches, and compares MOCU-based and entropy-based sequential design. To this point, the chapter assumes that parameters can be determined exactly. Section 5.7 addresses the issue of inexact measurements owing to either experimental error or the use of surrogate measurements in place of the actually desired measurements, which are practically unattainable. The chapter closes with a section on a generalized notion of MOCU-based experimental design, a particular case being the knowledge gradient.

The optimal Bayesian filter paradigm was first introduced in classification with the design of optimal Bayesian classifiers (OBCs). Classically (Section 6.1), if the feature-label distribution is known, then an optimal classifier, one that minimizes classification error, is found as the Bayes classifier. As discussed in Section 6.2, when there are unknown parameters in the feature-label distribution, then there is an uncertainty class of feature-label distributions, and an optimal Bayesian classifier minimizes the expected error across the uncertainty class relative to the posterior distribution derived from the prior and the sample data.
In order to compare optimal Bayesian classification with classical methods, Section 6.3 reviews the methodology of classification rules based solely on data. Section 6.4 derives the OBC in the discrete and Gaussian models. Section 6.5 examines consistency, that is, convergence of the OBC as the sample size goes to infinity. Rather than sample randomly or separately (randomly given the class sizes), sampling can be done in a nonrandom fashion by iteratively deciding which class to sample from prior to the selection of each point or by deciding which feature vector to observe. Optimal sequential sampling in these paradigms is discussed in Section 6.6 using MOCU-based experimental design. Section 6.7 provides a general framework for constructing prior distributions via optimization of an objective function subject to knowledge-based constraints. Epistemological issues regarding classification are briefly discussed in Section 6.8.

Clustering shares some commonality with classification in that both involve operations on points, classification on single points and clustering on point sets, and both have performances measured via a natural definition of
error, classification error for the former and cluster (partition) error for the latter. But they also differ fundamentally in that the underlying random process for classification is a feature-label distribution, and for clustering it is a random point set. Section 7.1 describes some classical clustering algorithms. Section 7.2 discusses the probabilistic foundation of clustering and optimal clustering (Bayes cluster operator) when the underlying random point set is known. Section 7.3 describes a special class of random point sets that can be used for modeling in practical situations. Finally, Section 7.4 discusses IBR clustering, which involves optimization over both the random set and its uncertainty when the random point set is unknown and belongs to an uncertainty class of random point sets. Whereas with linear and morphological filtering the structure of the IBR or optimal Bayesian filter essentially results from replacing the original characteristics with effective characteristics, optimal clustering cannot be so conveniently represented. Thus, the entire random point process must be replaced by an effective random point process.

Let me close this preface by noting that this book views science and engineering teleologically: a system within Nature is modeled for a purpose; an operator is designed for a purpose; and an optimal operator is obtained relative to a cost function quantifying the achievement of that purpose. Right at the outset, with model formation, purpose plays a critical role. As stated by Erwin Schrödinger (Schrödinger, 1957), “A selection has been made on which the present structure of science is built. That selection must have been influenced by circumstances that are other than purely scientific. . . .
The origin of science [is] without any doubt the very anthropomorphic necessity of man’s struggle for life.” Norbert Wiener, whose thinking is the genesis behind this book, states (Rosenblueth and Wiener, 1945) this fundamental insight from the engineering perspective: “The intention and the result of a scientific inquiry is to obtain an understanding and a control of some part of the universe.” It is not serendipity that leads us inexorably from optimal operator representation to optimal experimental design. This move, too, is teleological, as Wiener makes perfectly clear (Rosenblueth and Wiener, 1945): “An experiment is a question. A precise answer is seldom obtained if the question is not precise; indeed, foolish answers — i.e., inconsistent, discrepant or irrelevant experimental results — are usually indicative of a foolish question.” Only in the context of optimization can one know the most relevant questions to ask Nature. Edward R. Dougherty College Station, Texas June 2018
Acknowledgments First, let me recognize the large contributions of my former students Lori Dalton and Roozbeh Dehghannasiri to the theory developed in this book. Many other students have made contributions: Mohammad Sharokh Esfahani, Shahin Boluki, Mahdi Imani, Amin Zollanvari, Jason Knight, Mohammadmahdi Yousefi, Ranadip Pal, Yidong Chen, Marcel Brun, Marco Benalcazar, and Artyom Grigoryan. Notable contributions have been made by my colleagues Xiaoning Qian and Byung-Jun Yoon. Let me also thank Corey Barrett, who created the unified file for the book, and Alireza Karbalayghareh, who updated that file as I modified the manuscript.
Chapter 1
Random Functions

The basic filtering problem in signal processing is to operate on an observed signal to estimate a desired signal. The immediate difficulty is obvious: how to construct the filter when all one has is the observed signal. One might try some naïve approach like using a standard low-pass filter, but why should that produce a good result? The proper formulation of the problem, as laid down by Norbert Wiener in the 1930s, is to treat the observed and desired signals as random functions that are jointly probabilistically related, in which case one can find a filter that produces the best estimate of the desired random signal based on the observed random signal, where optimality is relative to some probabilistic error criterion. When an actual signal is observed, the optimal filter is applied.

It makes no sense to enquire about the accuracy of the filter relative to any single observation, since if we knew the desired signal that led to the observation, a filter would not be needed. It would be like processing data in the absence of a criterion beyond the data itself and asking if the processing is beneficial. This would be a form of pre-scientific radical empiricism. As put by Hans Reichenbach (Reichenbach, 1971), “If knowledge is to reveal objective relations of physical objects, it must include reliable predictions. A radical empiricism, therefore, denies the possibility of knowledge.”

Since knowledge is our goal and optimal operator design is our subject, we begin by defining a random function and considering the basic properties of such functions, including the calculus of random functions. A random function, or random process, is a family of random variables {X(v; t)}, t lying in some index set T, where, for each fixed t, the random variable X(v; t) is defined on a sample space S (v ∈ S). For a fixed v, X(v; t) defines a function on the set T, and each of such functions is termed a realization of the random function.
We focus on real-valued functions. If T is a subset of the real line R, then, for fixed v, X(v; t) is a signal, and the random function {X(v; t)} is called a random signal, stochastic process, or random time function. Should a random process be defined only on the integers, it is sometimes called a random time series. In general, t can be a point in n-dimensional Euclidean space R^n, so that each realization is a
deterministic function of n variables. To simplify notation we usually write X(t) to denote a random function, keeping in mind the underlying probability structure. In particular, if we fix t and let v vary, then X(t) is a random variable on the sample space. A specific realization will often be denoted by x(t).
1.1 Moments

For each t, X(t) is a random variable and has a probability distribution function F(x; t) = P(X(t) ≤ x), called a first-order distribution. For the random functions that concern us, X(t) will possess a first-order density

$$f(x; t) = \frac{dF(x; t)}{dx},$$

where the derivative might involve delta functions. In practice it is common to index a random function by a random variable instead of elements in a sample space. Instead of considering the realizations to be dependent on observations coming from a sample space, it is more practical to suppose them to be chosen according to observations of a random variable. Since a random variable Z defined on a probability space induces a probability measure on the Borel field over the real line R, with the induced probability measure P_Z defined in terms of the original probability measure P by P_Z(B) = P(Z ∈ B) for any event B, nothing is lost by indexing a random function by the values of a random variable.

For fixed t, the first-order distribution completely describes the behavior of the random variable; however, in general, we require the nth-order probability distributions

$$F(x_1, x_2, \dots, x_n; t_1, t_2, \dots, t_n) = P(X(t_1) \le x_1, X(t_2) \le x_2, \dots, X(t_n) \le x_n) \tag{1.1}$$

and the corresponding nth-order densities f(x_1, x_2, …, x_n; t_1, t_2, …, t_n). It is possible by integration to obtain the marginal densities from a given joint density. Hence, each nth-order density specifies the marginals for all subsets of {t_1, t_2, …, t_n}.

Now, suppose that we wish to give a complete characterization of the random function in terms of the various joint densities. If the point set is infinite, it is not generally possible to completely characterize the random function by knowing all finite joint densities; however, if the realizations are sufficiently well behaved, knowledge of the densities of all finite orders completely characterizes the random function, and we will always make this assumption. For practical manipulations, it is useful to be able to characterize a random function by means of the joint densities of some finite order. If for each point set {t_1, t_2, …, t_n} the random variables X(t_1), X(t_2), …, X(t_n) are independent, then the random function is characterized by its first-order densities since

$$f(x_1, x_2, \dots, x_n; t_1, t_2, \dots, t_n) = f(x_1; t_1)\, f(x_2; t_2) \cdots f(x_n; t_n). \tag{1.2}$$
An important class of random functions characterized by second-order distributions is the class of Gaussian random functions. A random function X(t) is said to be Gaussian, or normal, if for any collection of n points t_1, t_2, …, t_n, the random variables X(t_1), X(t_2), …, X(t_n) possess a multivariate Gaussian distribution. A multivariate Gaussian distribution is completely characterized by its mean vector and covariance matrix, and these are in turn determined from the first- and second-order densities of the variables.

A great deal of linear systems theory employs only second-order moment information. While mathematical tractability is gained, the loss is that there is no distinction between random functions possessing identical second-order moments. Linear filters have a natural dependency on second-order information, and therefore lack discrimination relative to random processes differing only at higher orders.

The expectation (mean function) of a random function X(t) is the first-order moment

$$m_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x\, f(x; t)\, dx. \tag{1.3}$$

Another function depending on X(t) in isolation is the variance function:

$$\mathrm{Var}[X(t)] = E[(X(t) - m_X(t))^2] = \int_{-\infty}^{\infty} (x - m_X(t))^2 f(x; t)\, dx. \tag{1.4}$$

For fixed t, Var[X(t)] is the variance of the random variable X(t). The standard deviation function is defined by σ_X(t) = Var[X(t)]^{1/2}. The second-order covariance function of the random function X(t) is defined by

$$K_X(t, t') = E[(X(t) - m_X(t))(X(t') - m_X(t'))] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - m_X(t))(x' - m_X(t'))\, f(x, x'; t, t')\, dx\, dx'. \tag{1.5}$$

Letting t' = t in the covariance function yields the variance function, K_X(t, t) = Var[X(t)]. It is seen directly from its definition that the covariance function is symmetric: K_X(t, t') = K_X(t', t). The correlation-coefficient function is defined by

$$\rho_X(t, t') = \frac{K_X(t, t')}{\sigma_X(t)\, \sigma_X(t')}. \tag{1.6}$$

Clearly, |ρ_X(t, t')| ≤ 1. The autocorrelation function is defined by

$$R_X(t, t') = E[X(t) X(t')]. \tag{1.7}$$

A straightforward calculation yields

$$K_X(t, t') = R_X(t, t') - m_X(t)\, m_X(t'). \tag{1.8}$$
If the mean is identically zero, then the covariance and autocorrelation functions are identical. Given two random functions X(t) and Y(s), the cross-covariance function is defined as the covariance between pairs of random variables, one from each of the two processes:

$$K_{XY}(t, s) = E[(X(t) - m_X(t))(Y(s) - m_Y(s))] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x - m_X(t))(y - m_Y(s))\, f(x, y; t, s)\, dx\, dy, \tag{1.9}$$

where f(x, y; t, s) is the joint density for X(t) and Y(s). If the cross-covariance function is identically zero, then the random functions are said to be uncorrelated; otherwise, they are said to be correlated. As with the covariance function, there is symmetry: K_{XY}(t, s) = K_{YX}(s, t). The cross-correlation coefficient is defined by

$$\rho_{XY}(t, s) = \frac{K_{XY}(t, s)}{\sigma_X(t)\, \sigma_Y(s)}, \tag{1.10}$$

and |ρ_{XY}(t, s)| ≤ 1. The cross-correlation function is defined by

$$R_{XY}(t, s) = E[X(t) Y(s)] \tag{1.11}$$

and is related to the cross-covariance function by

$$K_{XY}(t, s) = R_{XY}(t, s) - E[X(t)]\, E[Y(s)]. \tag{1.12}$$
Example 1.1. Consider the random time function X(Z; t) = I_{[Z, ∞)}(t), where Z is the standard normal variable and I_{[Z, ∞)}(t), the indicator (characteristic) function for the random infinite interval [Z, ∞), is defined by I_{[Z, ∞)}(t) = 1 if t ∈ [Z, ∞) and I_{[Z, ∞)}(t) = 0 if t ∉ [Z, ∞). For each observation z of the random variable Z, X(z; t) is a unit step function with a step at t = z. For fixed t, X(t) is a Bernoulli variable with X(t) = 1 if t ≥ Z and X(t) = 0 if t < Z. The density of X(t) is characterized by the probabilities

$$P(X(t) = 1) = P(Z \le t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-z^2/2}\, dz = \Phi(t),$$

where Φ denotes the probability distribution function of Z, and

$$P(X(t) = 0) = P(Z > t) = 1 - \Phi(t).$$

The mean function for the process X(t) is given by m_X(t) = P(X(t) = 1) = Φ(t). To find the covariance for X(t), we first find the autocorrelation R_X(t, t'), recognizing that X(t)X(t') is a binomial random variable. If t < t', then P(t' ≥ Z | t ≥ Z) = 1, so that

$$P(X(t)X(t') = 1) = P(t' \ge Z,\, t \ge Z) = P(t' \ge Z \mid t \ge Z)\, P(t \ge Z) = P(t \ge Z) = \Phi(t).$$

A similar calculation shows that, for t' ≤ t, P(X(t)X(t') = 1) = Φ(t'). Consequently,

$$P(X(t)X(t') = 1) = \Phi(\min(t, t')).$$

Thus, the autocorrelation and covariance are given by

$$R_X(t, t') = E[X(t)X(t')] = \Phi(\min(t, t')),$$

$$K_X(t, t') = E[X(t)X(t')] - m_X(t)\, m_X(t') = \Phi(\min(t, t')) - \Phi(t)\, \Phi(t'). \qquad \blacksquare$$

For a sum, X(t) + Y(t), of two random functions, linearity of expectation yields

$$m_{aX + bY}(t) = a\, m_X(t) + b\, m_Y(t) \tag{1.13}$$

for any real numbers a and b. For the covariance of a sum,

$$\begin{aligned}
K_{X+Y}(t, t') &= E[(X(t) + Y(t))(X(t') + Y(t'))] - (m_X(t) + m_Y(t))(m_X(t') + m_Y(t')) \\
&= R_X(t, t') + R_Y(t, t') + R_{XY}(t, t') + R_{YX}(t, t') - m_X(t) m_X(t') - m_Y(t) m_Y(t') - m_X(t) m_Y(t') - m_Y(t) m_X(t') \\
&= K_X(t, t') + K_Y(t, t') + K_{XY}(t, t') + K_{YX}(t, t'),
\end{aligned} \tag{1.14}$$

where we have assumed that the relevant quantities exist. If X(t) and Y(t) are uncorrelated, then the cross-covariance is identically zero and the preceding identity reduces to the covariance of the sum being equal to the sum of the covariances:

$$K_{X+Y}(t, t') = K_X(t, t') + K_Y(t, t'). \tag{1.15}$$
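The conclusion of Example 1.1 can be confirmed by simulation. The following sketch (not part of the text) draws standard normal observations of Z, forms the indicator process, and compares the empirical autocorrelation and covariance with Φ(min(t, t')) and Φ(min(t, t')) − Φ(t)Φ(t').

```python
import numpy as np
from math import erf, sqrt

# Monte Carlo check of Example 1.1: for X(Z; t) = indicator of [Z, inf)
# with Z standard normal, R_X(t,t') = Phi(min(t,t')).
rng = np.random.default_rng(1)
Z = rng.standard_normal(500_000)

def Phi(t):                          # standard normal CDF
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

t, tp = 0.3, 1.2
X_t  = (Z <= t).astype(float)        # X(t)  = 1 iff t  >= Z
X_tp = (Z <= tp).astype(float)       # X(t') = 1 iff t' >= Z

R_emp = np.mean(X_t * X_tp)          # empirical E[X(t)X(t')]
K_emp = R_emp - X_t.mean() * X_tp.mean()

R_true = Phi(min(t, tp))
K_true = Phi(min(t, tp)) - Phi(t) * Phi(tp)
print(R_emp, R_true)
```

Both the autocorrelation and the covariance agree with the closed forms up to sampling error.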
The preceding relations generalize to sums of n random functions. Suppose that

$$W(t) = \sum_{j=1}^{n} X_j(t). \tag{1.16}$$

Then, assuming that the relevant quantities exist,

$$m_W(t) = \sum_{j=1}^{n} m_{X_j}(t), \tag{1.17}$$

$$K_W(t, t') = \sum_{i=1}^{n} \sum_{j=1}^{n} K_{X_i X_j}(t, t'). \tag{1.18}$$

Should the X_j be mutually uncorrelated, then all terms in the preceding summation for which i ≠ j are identically zero. Thus, the covariance reduces to

$$K_W(t, t') = \sum_{j=1}^{n} K_{X_j}(t, t'). \tag{1.19}$$
1.2 Calculus

Extending the calculus to random functions involves some subtlety because the difference quotient and Riemann sum defining the derivative and the integral, respectively, are random variables. Random-process differentiation is made mathematically rigorous by defining the derivative of a random process via mean-square convergence. In general, X_h(t) converges to X(t) in the mean square (MS) if

$$\lim_{h \to 0} E[|X_h(t) - X(t)|^2] = 0. \tag{1.20}$$

For fixed h, E[|X_h(t) − X(t)|²] gives the mean-square distance between X_h(t) and X(t), so that MS convergence means that the distance between X_h(t) and X(t) converges to 0. The random function X(t) is said to be mean-square (MS) differentiable, with X′(t) the mean-square derivative of X(t) at the point t, if

$$\lim_{h \to 0} E\left[\left|\frac{X(\omega; t+h) - X(\omega; t)}{h} - X'(\omega; t)\right|^2\right] = 0. \tag{1.21}$$
For fixed t and h, both the difference quotient and X′(ω; t) are functions of the sample point ω (random variables). The following theorem provides necessary and sufficient conditions for MS differentiability, together with expressions for the mean and covariance functions of the derivative.
Theorem 1.1. The random function X(t) is MS differentiable on the interval T if and only if the mean is differentiable on T and the covariance possesses a second-order mixed partial derivative with respect to u and v on T. In the case of differentiability,

$$(i)\quad m_{X'}(t) = \frac{d}{dt}\, m_X(t), \tag{1.22}$$

$$(ii)\quad K_{X'}(u,v) = \frac{\partial^2}{\partial u\,\partial v}\, K_X(u,v). \tag{1.23}$$

▪
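Theorem 1.1 can be illustrated numerically with the harmonic process X(t) = A cos t + B sin t, for which K_X(u, v) = cos(u − v) when A and B are uncorrelated, zero-mean, unit-variance variables, so that ∂²K_X/∂u∂v = cos(u − v) as well. The sketch below (not from the text; the sample size and evaluation points are arbitrary) compares the covariance of the derivative process X′(t) = −A sin t + B cos t with that prediction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000
A = rng.standard_normal(n)                 # uncorrelated, zero-mean, unit-variance
B = rng.standard_normal(n)
u, v = 0.7, 1.9                            # arbitrary evaluation points

# derivative process X'(t) = -A sin t + B cos t (differentiating realizations)
Xp_u = -A * np.sin(u) + B * np.cos(u)
Xp_v = -A * np.sin(v) + B * np.cos(v)

# Theorem 1.1(ii): K_{X'}(u,v) = d^2/(du dv) cos(u-v) = cos(u-v)
emp = np.mean(Xp_u * Xp_v)
assert abs(emp - np.cos(u - v)) < 0.01
```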
The theory of MS differentiability extends to random functions of two variables: a partial derivative is an ordinary derivative applied to a function of two variables, with one of the variables held fixed. The random function Y(ω; u, v) is the MS partial derivative of the random function X(ω; u, v) with respect to the variable u if

$$\lim_{h \to 0} E\left[\left|\frac{X(\omega; u+h, v) - X(\omega; u, v)}{h} - Y(\omega; u, v)\right|^2\right] = 0. \tag{1.24}$$
A two-dimensional analogue of Theorem 1.1 holds.

In ordinary calculus the integral of the time function x(t) is defined as a limit of Riemann sums: for any partition a = t₀ < t₁ < t₂ < ··· < t_n = b of the interval [a, b],

$$\int_a^b x(t)\,dt = \lim_{\|\Delta t_k\| \to 0} \sum_{k=1}^{n} x(t'_k)\,\Delta t_k, \tag{1.25}$$
where Δt_k = t_k − t_{k−1}, ‖Δt_k‖ is the maximum of the Δt_k over k = 1, 2, …, n, t′_k is any point in the interval [t_{k−1}, t_k], and the limit is taken to mean that the same value is obtained over all partitions, so long as ‖Δt_k‖ → 0. The limit is, by definition, the value of the integral.

The variable t is not restricted to a single dimension. For a region of integration T ⊂ ℝⁿ, consider a disjoint collection of intervals I_k forming a partition Ξ = {I_k} of T, meaning that T = ∪_k I_k. For each ω, we can form the Riemann sum corresponding to the realization of the random function X(ω; t) and the partition Ξ in a manner analogous to a deterministic function:

$$S_X(\omega;\,\Xi) = \sum_{k=1}^{n} X(\omega;\,t'_k)\,\Delta(I_k), \tag{1.26}$$
where t′_k ∈ I_k and Δ(I_k) denotes the length (measure) of I_k. Letting ‖I_k‖ be the maximum of the lengths, the limit can be taken over all partitions for which ‖I_k‖ → 0 to give the integral

$$\int_T X(\omega;\,t)\,dt = \lim_{\Xi,\ \|I_k\| \to 0} S_X(\omega;\,\Xi). \tag{1.27}$$

Since the sum on the right-hand side of Eq. 1.26 depends not only on the partition but also on the realization (on ω), the limit of Eq. 1.27 is a limit of random variables. A random function X(ω; t) is said to be mean-square integrable, and to possess the integral I = ∫_T X(ω; t)dt, itself a random variable, if and only if

$$\lim_{\Xi,\ \|I_k\| \to 0} E[|I - S_X(\omega;\,\Xi)|^2] = 0, \tag{1.28}$$
where the limit is taken over all partitions Ξ = {I_k} for which ‖I_k‖ → 0. The integral of a random function can often be obtained by integration of realizations; however, MS integrability depends on the limit of Eq. 1.28. The basic mean-square integrability theorem concerns integrands of the form g(t, s)X(s), where g(t, s) is a deterministic function of two variables. The resulting integral is a random function of t, and the theorem is not dependent on dimension.

Theorem 1.2. If the integral

$$Y(t) = \int_T g(t,s)\,X(s)\,ds \tag{1.29}$$

exists in the MS sense, then

$$(i)\quad m_Y(t) = \int_T g(t,s)\,m_X(s)\,ds, \tag{1.30}$$

$$(ii)\quad K_Y(t,t') = \int_T \int_T g(t,s)\,g(t',s')\,K_X(s,s')\,ds\,ds'. \tag{1.31}$$

Conversely, if the deterministic integrals in (i) and (ii) exist, then the integral defining Y(t) exists in the MS sense. ▪

Writing out (i) and (ii) in terms of the definitions of the mean and covariance shows that they state that integration and expectation can be interchanged. According to the theorem, these interchanges are justified if and only if there is MS integrability. If we let g(t, s) be a function of s only in Eq. 1.29, then the stochastic integral is just a random variable Y, and Eq. 1.31 gives the variance of Y. Since the variance is nonnegative, the integral is nonnegative:

$$\int_T \int_T K_X(s,s')\,g(s)\,g(s')\,ds\,ds' \ge 0. \tag{1.32}$$
The inequality holds for any domain T and any function g(s) for which the integral exists. A function of two variables for which this is true is said to be nonnegative definite. The requirement of nonnegative definiteness for a covariance function constrains the class of symmetric functions that can serve as covariance functions.
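Nonnegative definiteness can be probed numerically by discretizing a candidate covariance function on a grid: the discretized double integral becomes the quadratic form gᵀKg, so the matrix K must have no negative eigenvalues. A sketch (not from the text; the grid and the counterexample kernel are arbitrary choices) using the covariance min(s, s′), which appears below as the Poisson and Wiener covariance:

```python
import numpy as np

s = np.linspace(0.02, 1.0, 50)            # grid over the domain T
K = np.minimum.outer(s, s)                # candidate covariance K(s,s') = min(s,s')

# discretized form of Eq. 1.32: g^T K g >= 0 for every g
eigvals = np.linalg.eigvalsh(K)
assert eigvals.min() > -1e-10             # nonnegative definite

# by contrast, a symmetric function need not be nonnegative definite:
K_bad = np.cos(3 * np.subtract.outer(s, s)) - 0.5
assert np.linalg.eigvalsh(K_bad).min() < 0
```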
1.3 Three Fundamental Processes

This section discusses three random functions that are very important in applications and theory. All involve generalized functions.

1.3.1 Poisson process

The one-dimensional Poisson model is mathematically described in terms of points arriving randomly in time, with X(t) counting the number of points arriving in the interval [0, t]. Three assumptions are postulated:

(i) The numbers of arrivals in any finite set of non-overlapping intervals are independent.
(ii) The probability of exactly one arrival in an interval of length t is λt + o(t).
(iii) The probability of two or more arrivals in an interval of length t is o(t).

The parameter λ is constant over all t intervals, and o(t) represents any function g(t) for which lim_{t→0} g(t)/t = 0. Condition (ii) says that, for infinitesimal t, the probability of exactly one arrival in an interval of length t is λt plus a quantity very small in comparison to t, and condition (iii) says that, for infinitesimal t, the probability of two or more arrivals in an interval of length t is very small in comparison to t. The random time points are called Poisson points, and each realization of the Poisson process corresponds to a set of time points resulting from a particular observation of the arrival process. It can be proven that X(t) possesses a Poisson density with mean and variance equal to λt, namely,

$$P(X(t) = k) = e^{-\lambda t}\,\frac{(\lambda t)^k}{k!} \tag{1.33}$$

for k = 0, 1, 2, …. It follows from assumption (i) that the Poisson process has independent increments: if t < t′ < u < u′, then X(u′) − X(u) and X(t′) − X(t) are independent. Using this independence, we find the covariance function. If t < t′, then the autocorrelation is obtained as

$$E[X(t)X(t')] = E[X(t)^2 + X(t)(X(t') - X(t))] = E[X(t)^2] + E[X(t)]\,E[X(t') - X(t)]$$
$$= E[X(t)^2] - E[X(t)]^2 + E[X(t)]\,E[X(t')] = \mathrm{Var}[X(t)] + E[X(t)]\,E[X(t')] = \lambda t + \lambda^2 t t', \tag{1.34}$$

where the second equality uses independence of the increments. Hence, for t < t′,
$$K_X(t,t') = \mathrm{Var}[X(t)] = \lambda t. \tag{1.35}$$

Interchanging the roles of t and t′ yields

$$K_X(t,t') = \mathrm{Var}[X(\min(t,t'))] = \lambda \min(t,t') \tag{1.36}$$

for all t and t′. Theorem 1.1 states that MS differentiability depends on existence of the mixed partial derivative ∂²K_X(u, v)/∂u∂v. The Poisson process is not MS differentiable, but its covariance function possesses the generalized mixed partial derivative

$$\frac{\partial^2}{\partial t\,\partial t'}\, K_X(t,t') = \lambda\,\delta(t - t'). \tag{1.37}$$
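The covariance of Eq. 1.36 can be verified by simulation, generating X(t) and X(t′) through independent Poisson increments (a sketch, not from the text; λ and the time points are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t, tp, n = 2.0, 1.0, 1.5, 400_000    # arbitrary rate and times, t < t'

Nt  = rng.poisson(lam * t, n)             # X(t): counts in [0, t]
Ntp = Nt + rng.poisson(lam * (tp - t), n) # X(t') = X(t) + independent increment

emp = np.mean(Nt * Ntp) - Nt.mean() * Ntp.mean()
# K_X(t,t') = lam * min(t,t')
assert abs(emp - lam * min(t, tp)) < 0.05
```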
Although we have defined differentiability in terms of MS convergence, it is possible to give meaning to a generalized differentiability for which differentiability of the process is related to the generalized mixed partial derivative of the covariance and the derivative of the mean; that is, Theorem 1.1 can be applied in a generalized sense. If a random process has covariance function λδ(t − t′) and constant mean λ (which is the derivative of the mean of the Poisson process), then we refer to the process as the generalized derivative of the Poisson process. Proceeding heuristically, the Poisson process has step-function realizations, where steps occur at points in time randomly selected according to a Poisson density with parameter λ. For any realization, say x(t), with steps at t₁, t₂, …, the usual generalized derivative is given by

$$x'(t) = \sum_{k=1}^{\infty} \delta(t - t_k). \tag{1.38}$$

If we assume that the derivative of the Poisson process consists of the process whose realizations agree with the derivatives of the realizations of the Poisson process itself, then the derivative process is given by

$$X'(t) = \sum_{k=1}^{\infty} \delta(t - Z_k), \tag{1.39}$$

where the Z_k form a sequence of random Poisson points. Since each realization of X′(t) consists of a train of pulses, the process is called the Poisson impulse process. If we now apply the generalized form of Theorem 1.1 alluded to previously, given that the mean and covariance of the Poisson process are λt and λ min(t, t′), respectively, we conclude that the mean and covariance of the Poisson impulse process are given by m_{X′}(t) = λ and K_{X′}(t, t′) = λδ(t − t′).
The Poisson process is generated by random Poisson points, which are often said to model complete randomness, meaning that intuitively they model a "uniform" distribution of points across the infinite interval [0, ∞). For applications, a basic proposition states that the arrival time of the kth Poisson point following a given Poisson point is governed by a gamma distribution with α = k and β = 1/λ (λt being the mean of the Poisson process). In particular, for k = 1, the inter-arrival time is governed by an exponential distribution with mean 1/λ.

Up to this point we have based our discussion of the Poisson process on the conditions defining a Poisson arrival process, a key consequence being that the process possesses independent increments. In general, a random process X(t), t ≥ 0, is said to have independent increments if X(0) = 0 and, for any t₁ < t₂ < ··· < t_n, the random variables X(t₂) − X(t₁), X(t₃) − X(t₂), …, X(t_n) − X(t_{n−1}) are independent. The process has stationary independent increments if X(t + r) − X(t′ + r) is identically distributed to X(t) − X(t′) for any t, t′, and r. When the increments are stationary, the increment distribution depends only on the elapsed time, t − t′, not on the specific points in time, t + r and t′ + r. As defined via the arrival model, the Poisson process has stationary independent increments.

Axiomatically, we define a process X(t) to be a Poisson process with mean rate λ if

P1. X(t) has values in {0, 1, 2, …}.
P2. X(t) has stationary independent increments.
P3. For s < t, X(t) − X(s) has a Poisson distribution with mean λ(t − s).

The axiomatic formulation P1 through P3 completely captures the arrival model. By generalizing these axioms, we can arrive at a definition of Poisson points in space. Consider points randomly distributed in Euclidean space ℝⁿ and let N(D) denote the number of points in a domain D.
The points are said to be distributed in accordance with a Poisson process with mean rate λ if

1. For any disjoint domains D₁, D₂, …, D_r, the counts N(D₁), N(D₂), …, N(D_r) are independent random variables.
2. For any domain D of finite volume, N(D) possesses a Poisson distribution with mean λv(D), where v(D) denotes the volume (measure) of D.

1.3.2 White noise

A zero-mean random function X(k) defined on a discrete domain (taken to be the positive integers) is called discrete white noise if X(k) and X(j) are uncorrelated for k ≠ j. If X(k) is discrete white noise, then its covariance is given by

$$K_X(k,j) = E[X(k)X(j)] = \mathrm{Var}[X(k)]\,\delta_{kj}, \tag{1.40}$$
where δ_{kj} = 1 if k = j, and δ_{kj} = 0 if k ≠ j. For any function g defined on the integers and for all k,

$$\sum_{i=1}^{\infty} K_X(k,i)\,g(i) = \mathrm{Var}[X(k)]\,g(k). \tag{1.41}$$

If there were a similar process in the continuous setting and X(t) were such a random function defined over a domain T, then the preceding equation would take the form

$$\int_T K_X(t,t')\,g(t')\,dt' = I(t)\,g(t), \tag{1.42}$$

where I(t) is a function of t that plays the role played by Var[X(k)] in Eq. 1.41. If we set

$$K_X(t,t') = I(t)\,\delta(t - t'), \tag{1.43}$$
then we obtain Eq. 1.42. Hence, any zero-mean random function having a covariance of the form I(t)δ(t − t′) is called continuous white noise. White noise plays key roles in canonical representation, noise modeling, and the design of optimal filters. Continuous white noise does not exist from a standard mathematical perspective and requires generalized functions for a rigorous definition. We shall manipulate white noise formally under the assumption that the manipulations are justified in the context of generalized functions. It follows from the covariance function I(t)δ(t − t′) that continuous white noise processes have infinite variance (set t = t′) and uncorrelated variables. I(t) is called the intensity of the white noise process. For an approximation of continuous white noise in one dimension, there exists a normal, zero-mean stochastic process X(t) having covariance function K_X(t, t′) = e^{−b|t−t′|}, for b > 0. For very large values of b, this covariance function behaves approximately like the covariance of white noise.

1.3.3 Wiener process

Suppose that a particle, starting at the origin (in ℝ), moves a unit length to the right or left. Movements are taken independently, and for each movement the probabilities of going right and left are p and q = 1 − p, respectively. Let X(n) be the number of units the particle has moved to the right after n movements. X(n) is called the one-dimensional random walk. The range of X(n) is the set of integers {−n, −n + 2, …, n − 2, n}. To find the density for X(n), let Y be the binomial random variable for n trials with probability p of success. For X(n) = x, there must be (n + x)/2 movements to the right and (n − x)/2 movements to the left. Hence, X(n) = x if and only if Y = (n + x)/2, so that
$$f_{X(n)}(x) = P\left(Y = \frac{n+x}{2}\right) = \binom{n}{\frac{n+x}{2}}\, p^{(n+x)/2}\, q^{(n-x)/2}. \tag{1.44}$$

Owing to the relationship between X(n) and Y, the moments of X(n) can be evaluated in terms of the moments of Y = (n + X(n))/2. Specifically, X(n) = 2Y − n and

$$E[X(n)^m] = \sum_{y=0}^{n} (2y - n)^m\, P(Y = y). \tag{1.45}$$
Since Y is binomial, E[Y] = np and E[Y²] = npq + n²p². Letting m = 1 and m = 2 yields

$$E[X(n)] = 2np - n, \tag{1.46}$$

$$E[X(n)^2] = 4(npq + n^2 p^2) + (1 - 4p)n^2. \tag{1.47}$$

When movements to the right and left are equiprobable, so that p = q = 1/2, E[X(n)] = 0 and Var[X(n)] = E[X(n)²] = n. Since the process has stationary independent increments, the covariance argument applied to the Poisson process also applies here. Thus,

$$K_X(n,n') = \mathrm{Var}[X(\min(n,n'))] = \min(n,n'). \tag{1.48}$$
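A quick simulation confirms Eqs. 1.46 and 1.47 for the random walk (a sketch, not from the text; the values of n and p are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, trials = 20, 0.3, 400_000
q = 1 - p

Y = (rng.random((trials, n)) < p).sum(axis=1)   # binomial count of right moves
X = 2 * Y - n                                   # X(n) = 2Y - n

# Eq. 1.46: E[X(n)] = 2np - n
assert abs(X.mean() - (2 * n * p - n)) < 0.05
# Eq. 1.47: E[X(n)^2] = 4(npq + n^2 p^2) + (1 - 4p) n^2
m2_theory = 4 * (n * p * q + n**2 * p**2) + (1 - 4 * p) * n**2
assert abs(np.mean(X.astype(float)**2) - m2_theory) < 0.5
```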
In the random-walk analysis the time interval between movements is 1; however, the analysis can be adapted to any finite time interval. Moreover, instead of restricting the motion to a single dimension, it can be analyzed in two dimensions, where the random walker chooses among four directions: left, right, up, and down. Taking a limiting situation, one can imagine an infinitesimal particle being continually acted upon by forces from its environment. The motion of such a particle can appear spasmodic, and under suitable phenomenological conditions such motion is referred to as Brownian motion. Suppose that a particle is experiencing Brownian motion and X(t) is its displacement in a single dimension from its initial position. Assuming that the particle's motion results from a multitude of molecular impacts lacking regularity, it is reasonable to suppose that X(t) has independent increments. Moreover, assuming that the nature of the particle, the medium, and the relationship between the particle and its medium remain stable, so that the displacement over any time interval depends only on the elapsed time, not on the moment the time period commences, it is reasonable to postulate stationarity of the increments. Next, suppose that for any fixed t, X(t) is normally distributed with mean zero, the latter reflecting forces acting on the particle without directional bias.
In accordance with the foregoing considerations, a Wiener process X(t), t ≥ 0, is defined to be a random function satisfying the following conditions:

1. X(0) = 0.
2. X(t) has stationary independent increments.
3. E[X(t)] = 0.
4. For any fixed t, X(t) is normally distributed.

Based on these conditions, one can verify that, for 0 ≤ t′ < t, the increment X(t) − X(t′) has mean zero and variance σ²|t − t′|, where σ² is a parameter to be empirically determined. In particular, Var[X(t)] = σ²t for t ≥ 0. As for the covariance, based on independent increments, the same argument used for the random-walk process can be applied to obtain

$$K_X(t,t') = \mathrm{Var}[X(\min(t,t'))] = \sigma^2 \min(t,t'). \tag{1.49}$$

Up to a multiplicative constant, the Poisson and Wiener processes have the same covariance. If we take the generalized mixed partial derivative of the Wiener process covariance, we obtain, as in the Poisson case, a constant times a delta function:

$$\frac{\partial^2}{\partial t\,\partial t'}\, K_X(t,t') = \sigma^2\,\delta(t - t'). \tag{1.50}$$

Since m_X(t) ≡ 0, dm_X(t)/dt ≡ 0. Consequently, via the generalized version of Theorem 1.1, K_{X′}(t, t′) = σ²δ(t − t′), and the derivative of the Wiener process is white noise.
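The Wiener covariance of Eq. 1.49 can likewise be checked by simulating many sample paths as cumulative sums of independent Gaussian increments (a sketch, not from the text; the discretization and parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, dt, n_steps, n_paths = 1.0, 0.01, 200, 100_000

dW = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
W = np.cumsum(dW, axis=1)                 # sample paths on the grid t = dt, 2dt, ...

i, j = 49, 149                            # grid indices for t = 0.5, t' = 1.5
emp = np.mean(W[:, i] * W[:, j])
# K_X(t,t') = sigma^2 * min(t,t') = 0.5
assert abs(emp - sigma**2 * 0.5) < 0.02
```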
1.4 Stationarity

In general, the nth-order probability distributions of a random function at two different sets of time points need not have any particular relation to each other. This section discusses two situations in which they do. First, the covariance function of a random process X(t) is generally a function of two variables; however, in some cases it is a function of the difference between the variables. A stronger relation occurs when the nth-order probability distribution itself is invariant under a translation of the time-point set. If the covariance function of the random function X(t) can be written as

$$K_X(t,t') = k_X(\tau), \tag{1.51}$$

where τ = t − t′ (scalar or vector) and X(t) has a constant mean m_X, then X(t) is said to be wide-sense (WS) stationary. Its variance function is constant:

$$\mathrm{Var}[X(t)] = K_X(t,t) = k_X(t - t) = k_X(0). \tag{1.52}$$
Owing to the symmetry of K_X(t, t′), the covariance function of X(t) is an even function,

$$k_X(-\tau) = k_X(t' - t) = k_X(t - t') = k_X(\tau). \tag{1.53}$$

Hence, k_X(τ) = k_X(|τ|). The correlation coefficient reduces to a function of τ:

$$\rho_X(\tau) = \frac{k_X(\tau)}{k_X(0)}. \tag{1.54}$$

Since |ρ_X(τ)| ≤ 1, |k_X(τ)| ≤ k_X(0). The autocorrelation is also a function of τ:

$$R_X(t,t') = k_X(\tau) + m_X^2 = r_X(\tau). \tag{1.55}$$
A random function X(t) is WS stationary if and only if its covariance function is translation invariant, meaning that, for any increment h,

$$K_X(t + h,\, t' + h) = K_X(t, t'). \tag{1.56}$$

To see this, suppose that X(t) is WS stationary. Then

$$K_X(t + h,\, t' + h) = k_X(t + h - (t' + h)) = k_X(t - t') = K_X(t, t'). \tag{1.57}$$

Conversely, if the covariance function is translation invariant, then

$$K_X(t, t') = K_X(t - t',\, t' - t') = K_X(t - t',\, 0), \tag{1.58}$$

which is a function of t − t′.

Example 1.2. Let Y(t) be the Poisson process with mean λt and let r be a positive constant. The Poisson increment process is defined by

$$X(t) = Y(t + r) - Y(t).$$

According to the Poisson model, X(t) counts the number of points in [t, t + r], and

$$m_X(t) = E[Y(t + r)] - E[Y(t)] = \lambda r.$$

For the covariance K_X(t, t′), there are two cases: |t − t′| > r and |t − t′| ≤ r. If |t − t′| > r, then the intervals determined by t and t′ are non-overlapping and, owing to independent increments, X(t) and X(t′) are independent, and their covariance is 0. Suppose that |t − t′| ≤ r, and first consider the case t < t′. Then t < t′ < t + r < t′ + r, and we can apply the result of Eq. 1.34 together with the observation that, because the process counts the number of points in [t, t + r], its mean is λr. Thus,
$$K_X(t,t') = E[(Y(t'+r) - Y(t'))(Y(t+r) - Y(t))] - E[X(t')]\,E[X(t)]$$
$$= E[Y(t'+r)Y(t+r)] - E[Y(t'+r)Y(t)] - E[Y(t+r)Y(t')] + E[Y(t')Y(t)] - \lambda^2 r^2$$
$$= \lambda(t+r) + \lambda^2(t'+r)(t+r) - \lambda t - \lambda^2 t(t'+r) - \lambda t' - \lambda^2 t'(t+r) + \lambda t + \lambda^2 t t' - \lambda^2 r^2$$
$$= \lambda[r - (t' - t)].$$

Owing to symmetry, interchanging the roles of t and t′ (t′ ≤ t) yields

$$K_X(t,t') = \lambda(r - |t - t'|)$$

when |t − t′| ≤ r. Hence, X(t) is WS stationary with

$$k_X(\tau) = \begin{cases} \lambda(r - |\tau|), & \text{if } |\tau| \le r, \\ 0, & \text{if } |\tau| > r. \end{cases} \qquad \blacksquare$$
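The triangular covariance of Example 1.2 can be verified by decomposing the two overlapping intervals into disjoint pieces with independent Poisson counts (a sketch, not from the text; λ, r, and the lag are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
lam, r, tau, n = 3.0, 1.0, 0.4, 400_000   # lag tau = t' - t, with tau <= r

# disjoint pieces: [t, t'], [t', t+r] (the overlap), [t+r, t'+r]
a = rng.poisson(lam * tau, n)
b = rng.poisson(lam * (r - tau), n)
c = rng.poisson(lam * tau, n)

X1, X2 = a + b, b + c                     # X(t) and X(t'); their common part is b
emp = np.mean(X1 * X2) - X1.mean() * X2.mean()
# k_X(tau) = lam * (r - |tau|)
assert abs(emp - lam * (r - tau)) < 0.05
```

The covariance reduces to Var[b], the variance of the count shared by the two intervals, which is exactly λ(r − τ).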
A stronger form of stationarity concerns higher-order probabilistic information. The random function X(t) is said to be strict-sense (SS) stationary if, for any points t₁, t₂, …, t_n, and for any increment h, its nth-order distribution function satisfies the relation

$$F(x_1, \ldots, x_n;\ t_1 + h, \ldots, t_n + h) = F(x_1, \ldots, x_n;\ t_1, \ldots, t_n). \tag{1.59}$$

In terms of the nth-order density,

$$f(x_1, \ldots, x_n;\ t_1 + h, \ldots, t_n + h) = f(x_1, \ldots, x_n;\ t_1, \ldots, t_n). \tag{1.60}$$

Given any finite set of random variables from the random function, a translation of each time point by a constant h results in a collection of random variables whose multivariate distribution is identical to that of the original collection. From a probabilistic perspective, the new collection is indistinguishable from the first. If we define the random vector

$$\mathbf{X}(t_1, t_2, \ldots, t_n) = \begin{pmatrix} X(t_1) \\ X(t_2) \\ \vdots \\ X(t_n) \end{pmatrix}, \tag{1.61}$$

then **X**(t₁ + h, t₂ + h, …, t_n + h) is identically distributed to **X**(t₁, t₂, …, t_n). If we restrict Eq. 1.59 to a single point, then it becomes F(x; t + h) = F(x; t). X(t + h) is identically distributed to X(t), and therefore the mean at t + h must equal the mean at t for all h, which implies that the mean must be constant.
Now consider two points, for which

$$F(x_1, x_2;\ t_1 + h, t_2 + h) = F(x_1, x_2;\ t_1, t_2). \tag{1.62}$$

The vector **X**(t₁ + h, t₂ + h) is identically distributed to **X**(t₁, t₂). Hence,

$$K_X(t_1 + h,\, t_2 + h) = K_X(t_1, t_2). \tag{1.63}$$

Therefore, the covariance function is translation invariant, and X(t) is WS stationary. In summary, SS stationarity implies WS stationarity. For a Gaussian random function, SS and WS stationarity are equivalent: since a Gaussian process is completely described by its first- and second-order moments, and these are translation invariant for a WS stationary process, the higher-order probability distribution functions must also be translation invariant.
1.5 Linear Systems

If C is a linear operator on a class of random functions, then, by superposition,

$$C(a_1 X_1 + a_2 X_2) = a_1 C(X_1) + a_2 C(X_2). \tag{1.64}$$

For Y = C(X), we desire m_Y(s) and K_Y(s, s′) in terms of m_X(t) and K_X(t, t′), respectively. Schematically, we would like to find operations to complete (on the bottom horizontal arrows) the following commutative diagrams involving the expectation and covariance:

[Commutative diagrams (1.65) and (1.66)]

We have considered the cases where the operators are differentiation and integration in Theorems 1.1 and 1.2, respectively, a key point being the interchange of the linear operator and the expectation, E[C(X)] = C[E(X)]. In terms of the relation Y(s) = C(X)(s), the interchange can be written as m_Y(s) = C(m_X)(s), and thus commutativity is achieved in the diagram of Eq. 1.65 with C on the bottom arrow. Although interchange of expectation and a linear operator is not always valid, it is valid in practical situations, and henceforth we assume conditions to be such that it is justified.
For the moment, let us focus on linear systems operating on deterministic functions, in particular, integral operators defined in terms of a weighting function g(s, t) via

$$y(s) = \int_T g(s,t)\,x(t)\,dt, \tag{1.67}$$

where x(t) belongs to some linear space of functions and the variables can be scalars or vectors. Whereas x(t) is defined over T, the output function y(s) is defined over some set of values s ∈ S, where S need not equal T. In the discrete sense,

$$y(n) = \sum_{k=-\infty}^{\infty} g(n,k)\,x(k) = \int_{-\infty}^{\infty} g(n,t)\,x(t)\,dt, \tag{1.68}$$

where

$$x(t) = \sum_{k=-\infty}^{\infty} x(k)\,\delta(t - k). \tag{1.69}$$
If C is a linear operator on a linear function space L, the functions x₁(t), x₂(t), …, x_n(t) lie in L,

$$x(t) = \sum_{k=1}^{n} a_k x_k(t), \tag{1.70}$$

and y(s) = C(x)(s), then

$$y(s) = \sum_{k=1}^{n} a_k y_k(s), \tag{1.71}$$

where y_k(s) = C(x_k)(s) for k = 1, 2, …, n. Superposition applies to finite sums of input functions; should a sum be infinite, even if convergent, interchanging summation with the operator may not be valid, or, to achieve validity, the procedure might have to be interpreted in some specialized sense. When the functions involved are "well-behaved," such interchange can often take place. More generally, if a function x(t) is represented as an integral,

$$x(t) = \int_U a(u)\,Q(t,u)\,du, \tag{1.72}$$

and C is a linear operator such that for each fixed u, Q(t, u) is in the domain of C, can we interchange the order of integration and application of C and write

$$y(s) = C(x)(s) = \int_U a(u)\,[C_t(Q(t,u))](s)\,du, \tag{1.73}$$
where the subscript t of C_t denotes that C is applied relative to the variable t (for fixed u)? Validity depends on the function class involved. Two points are germane: (1) conditions can be imposed on the function class to make the interchange valid; (2) interchange facilitates the use of weighting functions to represent linear system laws, and the suggestiveness of such representations makes interchange of laws and integrals, at least in a formal manner, invaluable. Consequently, we will apply superposition freely to functions defined in terms of weighting functions, recognizing that for finite sums (or for weighting functions that are finite sums of delta functions), the application is mathematically rigorous. If C and x(t) are defined by Eqs. 1.67 and 1.72, respectively, then, by superposition,

$$y(s) = \int_T \int_U g(s,t)\,a(u)\,Q(t,u)\,du\,dt = \int_U a(u)\left[\int_T g(s,t)\,Q(t,u)\,dt\right] du. \tag{1.74}$$

Combining this with Eq. 1.73 shows that

$$C_t Q(t,u)(s) = \int_T g(s,t)\,Q(t,u)\,dt. \tag{1.75}$$
Consider representation of a function via an integral with a delta-function kernel:

$$x(t) = \int_{-\infty}^{\infty} x(u)\,\delta(t - u)\,du, \tag{1.76}$$

where, for notational convenience only, we have employed functions of a single variable. Application of C to x(t) yields the output

$$y(s) = \int_{-\infty}^{\infty} x(u)\,C_t\,\delta(t - u)(s)\,du. \tag{1.77}$$
For random-function inputs, an operator defining the bottom arrow in the commuting diagram of Eq. 1.66 provides a formulation of the output covariance of a linear system in terms of the input covariance. To avoid cumbersome notation, two conventions will be adopted. First, equations may be shortened by not including the variable s subsequent to the operation. Although this practice results in equations with the variable s on the left and no explicitly stated variable s on the right, no confusion should arise if one keeps the meaning of the operations in mind. The second convention is omission of parentheses when the meaning of the operations is obvious. For instance, we may write CX instead of C(X). For the centered random functions X₀ and Y₀, the identity EC = CE yields

$$Y_0(s) = Y(s) - m_Y(s) = C[X(t) - m_X(t)](s) = C[X_0(t)](s). \tag{1.78}$$

Consequently,

$$K_Y(s,s') = E[Y_0(s)Y_0(s')] = E[C_t X_0(t)\,C_{t'} X_0(t')] = E[C_t C_{t'} X_0(t) X_0(t')]$$
$$= C_t C_{t'} E[X_0(t) X_0(t')] = C_t C_{t'} K_X(t,t'). \tag{1.79}$$

Since the roles of C_t and C_{t′} can be interchanged, we obtain the next theorem, completing the commuting diagram of Eq. 1.66. The same technique applies to the autocorrelation.

Theorem 1.3. If X(t) is a random function for which CEX = ECX, then
(1.80)
ðiiÞ K CX ðs, s0 Þ ¼ Ct Ct0 K X ðt, t0 Þ ¼ Ct0 Ct K X ðt, t0 Þ:
(1.81)
▪

If g(t, u) is the impulse response function for C, and we let Y = CX, then

$$Y(t) = \int_{-\infty}^{\infty} g(t,u)\,X(u)\,du, \tag{1.82}$$

and the conclusions of Theorem 1.3 can be rewritten as

$$(i')\quad m_Y(t) = \int_{-\infty}^{\infty} g(t,u)\,m_X(u)\,du, \tag{1.83}$$

$$(ii')\quad K_Y(t,t') = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(t,u)\,g(t',u')\,K_X(u,u')\,du\,du'. \tag{1.84}$$

Letting t = t′ yields the output variance:

$$\mathrm{Var}[Y(t)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(t,u)\,g(t,u')\,K_X(u,u')\,du\,du'. \tag{1.85}$$
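In the discrete setting, the integrals of (i′) and (ii′) become matrix products: with Y = GX for a weighting matrix G, we get m_Y = G m_X and K_Y = G K_X Gᵀ. The following sketch (not from the text; the dimensions and matrices are arbitrary illustrative choices) checks this against sample estimates:

```python
import numpy as np

rng = np.random.default_rng(6)
n_in, n_out, trials = 8, 5, 400_000

G = rng.standard_normal((n_out, n_in))          # discrete weighting function g
A = rng.standard_normal((n_in, n_in))
m_X = rng.standard_normal(n_in)

X = m_X + rng.standard_normal((trials, n_in)) @ A.T   # input with K_X = A A^T
Y = X @ G.T                                           # Y = G X

# (i'):  m_Y = G m_X
assert np.allclose(Y.mean(axis=0), G @ m_X, atol=0.1)
# (ii'): K_Y = G K_X G^T
K_Y = G @ (A @ A.T) @ G.T
assert np.max(np.abs(np.cov(Y, rowvar=False) - K_Y)) < 0.05 * np.max(np.abs(K_Y))
```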
By letting C be the differential operator, and recognizing that C_t and C_{t′} are then partial derivatives with respect to t and t′ (with s = t), it follows from Eq. 1.81 that

$$K_{CX}(t,t') = \frac{\partial^2}{\partial t'\,\partial t}\, K_X(t,t'). \tag{1.86}$$

This relation holds for generalized derivatives involving delta functions.
Chapter 2

Canonical Expansions

Just as an appropriate representation of a deterministic function can facilitate an application, such as with trigonometric Fourier series, decomposition of a random function can make it more manageable. Specifically, given a random function X(t), where the variable t can be either vector or scalar, we desire a representation of the form

$$X(t) = m_X(t) + \sum_{k=1}^{\infty} Z_k\,x_k(t), \tag{2.1}$$

where x₁(t), x₂(t), … are deterministic functions, Z₁, Z₂, … are uncorrelated zero-mean random variables, the sum may be finite or infinite, and some convergence criterion is given. Equation 2.1 is said to provide a canonical expansion (representation) for X(t). The terms Z_k, x_k(t), and Z_k x_k(t) are called coefficients, coordinate functions, and elementary functions, respectively. {Z_k} is a discrete white-noise process, so the sum is an expansion of the centered process X(t) − m_X(t) in terms of white noise. Consequently, it is called a discrete white-noise representation. If an appropriate canonical representation can be found, then dealing with a family of random variables defined over the domain of t is reduced to considering a discrete family of random variables. Moreover, whereas there may be a high degree of correlation among the random variables composing the random function, the random variables in a canonical expansion are uncorrelated.
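Because the Z_k are uncorrelated and zero-mean, the covariance implied by Eq. 2.1 is K_X(t, t′) = Σ_k Var[Z_k] x_k(t) x_k(t′). A sketch of this (not from the text; the coordinate functions and coefficient variances are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 40)

# arbitrary coordinate functions x_k(t) and coefficient variances Var[Z_k]
xk = np.stack([np.sin((k + 1) * np.pi * t) for k in range(3)])
var = np.array([1.0, 0.5, 0.25])

Z = rng.standard_normal((200_000, 3)) * np.sqrt(var)  # uncorrelated coefficients
X = Z @ xk                                            # centered expansion sum_k Z_k x_k(t)

K_emp = np.cov(X, rowvar=False)
K_theory = np.einsum('k,ki,kj->ij', var, xk, xk)      # sum_k Var[Z_k] x_k(t) x_k(t')
assert np.max(np.abs(K_emp - K_theory)) < 0.05
```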
2.1 Fourier Representation and Projections

The development of random-function canonical expansions is closely akin to finding Fourier-series representations of vectors in an inner-product space, in particular, the expansion of deterministic signals in terms of functions composing an orthonormal system. To achieve Fourier representations in the random-process framework, we need to place the problem in the context of an inner-product space.

For a vector space V to be an inner-product space, there must be defined on pairs of vectors in V an inner product (dot product). Letting ⟨x, y⟩ denote the inner product in V, ⟨x, y⟩ must satisfy five properties:

I1. ⟨x, x⟩ ≥ 0.
I2. ⟨x, x⟩ = 0 if and only if x is the zero vector.
I3. $\langle x, y\rangle = \overline{\langle y, x\rangle}$.
I4. ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩.
I5. ⟨cx, y⟩ = c⟨x, y⟩ for any complex scalar c.
The overbar in I3 denotes complex conjugation. For a real vector space, the conjugation in I3 can be dropped. A norm is defined on an inner-product space by ‖x‖ = ⟨x, x⟩^{1/2}, and a metric is then defined by ‖x − y‖. An inner-product space is a Hilbert space if it is topologically complete relative to this metric, meaning that every Cauchy sequence converges. Vectors x and y are said to be orthogonal, denoted x ⊥ y, if ⟨x, y⟩ = 0. A collection {z₁, z₂, …} of nonzero vectors is orthogonal if the vectors in the collection are mutually orthogonal, and it is then called an orthogonal system. It is called an orthonormal system if all of its vectors have norm 1. An orthogonal system is transformed into an orthonormal system by dividing each vector by its norm.

Approximation theory in an inner-product space concerns the projection of an arbitrary vector into a subspace S generated by a set V of vectors, that is, the subspace of all linear combinations of the vectors in V. This subspace is known as the span of V and is denoted span(V). If {z₁, z₂, …} is an orthonormal system, then obtaining an optimal projection, in the sense of finding a vector in span(z₁, z₂, …, z_m) having minimal distance from the vector being projected, involves the theory of Fourier series. Relative to an orthonormal system {z₁, z₂, …}, the Fourier coefficients of a vector x are defined by x̂_k = ⟨x, z_k⟩ for k = 1, 2, …. For any vector x, let

$$x_m = \sum_{k=1}^{m} \hat{x}_k z_k. \tag{2.2}$$
The following properties hold:

(i) for any w ∈ span(z₁, z₂, …, z_m),

$$\|x - x_m\| \le \|x - w\|; \tag{2.3}$$

(ii)

$$\|x - x_m\|^2 = \|x\|^2 - \sum_{k=1}^{m} |\hat{x}_k|^2; \tag{2.4}$$

(iii)

$$\sum_{k=1}^{\infty} |\hat{x}_k|^2 \le \|x\|^2. \tag{2.5}$$
From property (i) we see that x_m is the projection of x into the span of z_1, z_2, ..., z_m: it minimizes the distance from x to the span. The inequality in (iii) is known as the Bessel inequality. It is a basic proposition of projection theory that equality holds in the Bessel inequality, in which case it becomes the Parseval identity, if and only if

$$x = \sum_{k=1}^{\infty} \hat{x}_k z_k, \qquad (2.6)$$
the series being the Fourier series of x relative to {z_1, z_2, ...} and equality holding relative to the inner-product norm. The Parseval identity is fundamental for the theory of orthonormal projections, including orthonormal canonical expansions of random functions, because it states that representation of a vector by its Fourier series is equivalent to representation of its norm in terms of its Fourier coefficients. An orthonormal system is called complete if every vector equals its Fourier series. Completeness is equivalent to the Parseval identity holding for all vectors in the space.

Our focus is on real random functions; however, we do not want to be restricted to canonical expansions in which the coefficients and coordinate functions are real-valued. Therefore, we must consider complex random variables. A complex random variable is of the form X = X_1 + jX_2, where j = √−1, and X_1 and X_2 are real random variables constituting the real and imaginary parts of X. The modulus (absolute value) |X| of X is the real random variable defined by $|X|^2 = X\overline{X}$. X has mean m_X = E[X] = E[X_1] + jE[X_2]. It is immediate that $E[\overline{X}] = \overline{E[X]}$. The mixed second moment of two complex random variables, X and Y, is $E[X\overline{Y}]$. Their covariance is $\mathrm{Cov}[X, Y] = E[(X - m_X)\overline{(Y - m_Y)}]$. The covariance satisfies the relation $\mathrm{Cov}[Y, X] = \overline{\mathrm{Cov}[X, Y]}$. X has second moment E[|X|^2] and variance E[|X − m_X|^2]. X and Y are uncorrelated if $E[X\overline{Y}] = E[X]E[\overline{Y}]$. The basic properties of the mean and covariance hold for complex random variables.

Complex random variables defined on a probability space are functions on the space and therefore form a vector space relative to the addition of random variables and scalar multiplication of a random variable by a complex number. Restricting our attention to random variables having finite second moments, E[|X|^2] < ∞, these form a vector subspace of the original vector space.
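Properties (i)–(iii) and the Parseval identity can be checked concretely in the finite-dimensional inner-product space C^n. The following is an illustrative numerical sketch (the matrix, dimensions, and variable names are assumptions, not from the text):

```python
# Illustrative check of Eqs. 2.2-2.6 in C^n with <x, y> = sum_i x_i conj(y_i).
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3

# An orthonormal (in fact complete) system z_1, ..., z_n via QR factorization.
Z = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))[0]

x = rng.normal(size=n) + 1j * rng.normal(size=n)
xhat = Z.conj().T @ x              # Fourier coefficients <x, z_k>
xm = Z[:, :m] @ xhat[:m]           # projection x_m, Eq. 2.2

# Property (ii), Eq. 2.4: ||x - x_m||^2 = ||x||^2 - sum_{k<=m} |xhat_k|^2.
err2 = np.linalg.norm(x - xm) ** 2
assert np.isclose(err2, np.linalg.norm(x) ** 2 - np.sum(np.abs(xhat[:m]) ** 2))

# Property (iii), the Bessel inequality (Eq. 2.5); since the system is
# complete, summing over all k gives equality (the Parseval identity).
assert np.sum(np.abs(xhat[:m]) ** 2) <= np.linalg.norm(x) ** 2 + 1e-12
assert np.isclose(np.sum(np.abs(xhat) ** 2), np.linalg.norm(x) ** 2)
```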
For any two random variables X and Y in the subspace, we define their inner product by $\langle X, Y\rangle = E[X\overline{Y}]$. Properties I1, I3, I4, and I5 are clearly satisfied. Moreover, if we identify any two random variables that are equal with probability one (as is typically done), then property I2 is satisfied, and the family of finite-second-moment random variables on a probability space is an inner-product space with inner product $E[X\overline{Y}]$. The (mean-square) norm of X is ‖X‖ = E[|X|^2]^{1/2}. A sequence of random variables X_n converges to X relative to this norm if and only if X_n converges to X in the mean square. The mean-square inner-product space of random variables on a probability space is a Hilbert space. Thus, all of the standard Fourier convergence theorems apply.

Since real random functions are our concern, we could proceed without considering complex random functions; however, this would lead to a lack of symmetry owing to the conjugates occurring in the canonical expansions. Therefore, the theory of canonical expansions will be discussed in the context of complex random functions. Since a random function X(t) is a collection of random variables, the definition of a complex random function in terms of real and imaginary parts, each being a real random function, follows at once from the definition of a complex random variable. Specifically, X(t) = X_1(t) + jX_2(t). Because the second factors in both the mixed second moment and the covariance are conjugated, the second factors in the definitions of the autocorrelation and covariance functions are also conjugated. The covariance function of a complex random function satisfies the relation $K_X(t', t) = \overline{K_X(t, t')}$. Similarly, $R_X(t', t) = \overline{R_X(t, t')}$. Analogous comments apply to the cross-correlation and cross-covariance functions. If Var[X(t)] is finite, then the mean-square theory applies to X(t) for each t, the inner product between X(t) and X(t') is given by the autocorrelation R_X(t, t'), and X(t) has norm ‖X(t)‖ = R_X(t, t)^{1/2}. If X(t) is centered, then ‖X(t)‖^2 = Var[X(t)]. This relation motivates us to employ zero-mean random functions when discussing canonical representations.
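As an illustrative sketch (the particular random variables and sample sizes are assumptions, not from the text), the inner product E[X Ȳ] and its conjugate-symmetry relations can be estimated by replacing expectations with sample averages over many realizations:

```python
# Monte Carlo sketch of the inner product <X, Y> = E[X conj(Y)] on
# complex random variables; all distributions here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000                       # number of realizations

# Two correlated complex random variables built from real Gaussians.
G = rng.normal(size=(4, N))
X = G[0] + 1j * G[1]
Y = (0.6 * G[0] + 0.8 * G[2]) + 1j * (0.3 * G[1] + 0.9 * G[3])

def ip(U, V):
    """Sample estimate of <U, V> = E[U conj(V)]."""
    return np.mean(U * np.conj(V))

# Conjugate symmetry I3: <X, Y> = conj(<Y, X>)  (exact for sample averages).
assert np.isclose(ip(X, Y), np.conj(ip(Y, X)))

# Cov[Y, X] = conj(Cov[X, Y]).
cov_xy = ip(X - X.mean(), Y - Y.mean())
cov_yx = ip(Y - Y.mean(), X - X.mean())
assert np.isclose(cov_xy, np.conj(cov_yx))

# I1: ||X||^2 = E[|X|^2] is real and positive.
assert np.isclose(ip(X, X).imag, 0.0) and ip(X, X).real > 0
```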
No generality is lost since, for an arbitrary random function, we can always consider its centered version; once we have a canonical representation for the centered random function, transposition of the mean yields a canonical representation for the original random function.

Suppose that {Z_1, Z_2, ...} is a complete orthonormal system of complex (or real) random variables, where in the present context an orthonormal system is a collection of uncorrelated unit-variance random variables. Then every finite-second-moment random variable X possesses the representation of Eq. 2.6 with Z_k in place of z_k, Fourier coefficient $\hat{x}_k = E[X\overline{Z}_k]$ with respect to Z_k, and convergence of the partial sums of the series to X as m → ∞ being with respect to the mean-square norm. In particular, if X(t) is a zero-mean random function having finite variance, then, for fixed t, X(t) possesses a Fourier representation in terms of {Z_k} with Fourier coefficients $\hat{x}_k(t) = E[X(t)\overline{Z}_k]$. Whether or not {Z_1, Z_2, ...} is complete, the projection of X(t) into the subspace spanned by Z_1, Z_2, ..., Z_m is given by

$$X_m(t) = \sum_{k=1}^{m} Z_k \hat{x}_k(t). \qquad (2.7)$$
As t varies, each Fourier coefficient is a deterministic function of t, and the projection X_m(t) is a random function. Since the square of the norm is the variance, the following theorem follows at once from projection theory on inner-product spaces.

Theorem 2.1. Suppose that {Z_1, Z_2, ...} is a collection of uncorrelated, zero-mean, unit-variance random variables (so that {Z_k} is an orthonormal system), and suppose that X(t) is a zero-mean, finite-variance random function. Then

(i) for any coordinate functions y_1(t), y_2(t), ..., y_m(t),

$$\mathrm{Var}\!\left[X(t) - X_m(t)\right] \le \mathrm{Var}\!\left[X(t) - \sum_{k=1}^{m} Z_k y_k(t)\right]. \qquad (2.8)$$

(ii)

$$\mathrm{Var}[X(t) - X_m(t)] = \mathrm{Var}[X(t)] - \sum_{k=1}^{m} \left|E[X(t)\overline{Z}_k]\right|^2. \qquad (2.9)$$

(iii)

$$\sum_{k=1}^{\infty} \left|E[X(t)\overline{Z}_k]\right|^2 \le \mathrm{Var}[X(t)] \quad (\text{Bessel inequality}). \qquad (2.10)$$

▪
An immediate consequence of Theorem 2.1(i) is that, to minimize the mean-square error (variance) of the difference between X(t) and finite approximations in terms of the Z_k, the coordinate functions should be given by the Fourier coefficients with respect to the Z_k, which is exactly the situation in ordinary inner-product-space theory. The projection of X(t) into the subspace spanned by Z_1, Z_2, ..., Z_m gives the best mean-square approximation to X(t) lying in the subspace. Hence, if we are going to find a canonical representation in the sense of Eq. 2.1 for which convergence is in the mean square, then the coordinate functions should be the Fourier coefficients $\hat{x}_k(t) = E[X(t)\overline{Z}_k]$. Equality holds in the Bessel inequality for all t ∈ T if and only if, for any t ∈ T, X(t) equals its Fourier series expansion in the mean-square norm, in which case the Parseval identity takes the form

$$\mathrm{Var}[X(t)] = \sum_{k=1}^{\infty} \left|E[X(t)\overline{Z}_k]\right|^2. \qquad (2.11)$$
If X(t) equals its Fourier series in the mean square, i.e., if Var[X(t) − X_m(t)] → 0 as m → ∞ for any t ∈ T, then

$$X(t) = \sum_{k=1}^{\infty} Z_k \hat{x}_k(t) = \sum_{k=1}^{\infty} Z_k E[X(t)\overline{Z}_k], \qquad (2.12)$$
and we have a canonical representation of X(t) in the sense of Eq. 2.1. The problem is to find an orthonormal system for which the representation of Eq. 2.12 holds. While it is advantageous to have a complete system, it is often sufficient to find an orthonormal representation of a given function. Given a random function X(t), we will be satisfied to find a canonical representation by way of a decorrelated, zero-mean system for which Eq. 2.1 holds.

Thus far we have assumed that Var[Z_k] = 1 to make the system orthonormal. In fact, it is sufficient to find a collection of mutually uncorrelated, zero-mean random variables Z_k (not necessarily having unit variance). The system can be orthonormalized by dividing each Z_k by its norm. With respect to the orthonormalized random variables Z_k/Var[Z_k]^{1/2}, and still assuming X(t) to have zero mean, the orthonormal expansion of Eq. 2.1 becomes

$$X(t) = \sum_{k=1}^{\infty} \frac{Z_k}{\mathrm{Var}[Z_k]^{1/2}}\,\hat{x}_k(t), \qquad (2.13)$$

where $\hat{x}_k(t)$ is the Fourier coefficient with respect to the normalized random variable, $\hat{x}_k(t) = E[X(t)\overline{Z}_k]\,\mathrm{Var}[Z_k]^{-1/2}$. Factoring Var[Z_k]^{−1/2} out of the expectation recovers Eq. 2.1; however, rather than being the Fourier coefficients, the coordinate functions x_k(t) are now given by

$$x_k(t) = \frac{E[X(t)\overline{Z}_k]}{\mathrm{Var}[Z_k]}. \qquad (2.14)$$
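A minimal simulation sketch of Eq. 2.14 (the coordinate functions, variances, and sample sizes below are illustrative assumptions, not from the text): generate X(t) = Σ_k Z_k x_k(t) from uncorrelated, nonunit-variance Z_k and recover the coordinate functions from the moments E[X(t)Z̄_k]/Var[Z_k].

```python
# Recovering coordinate functions via Eq. 2.14 from simulated realizations.
import numpy as np

rng = np.random.default_rng(2)
N, n_t = 200_000, 50                       # realizations, time samples
t = np.linspace(0.0, 1.0, n_t)

V = np.array([4.0, 2.0, 0.5])              # Var[Z_k], deliberately nonunit
x = np.stack([np.ones_like(t), t, t**2])   # coordinate functions x_k(t)

Z = rng.normal(size=(3, N)) * np.sqrt(V)[:, None]   # uncorrelated, zero-mean
X = Z.T @ x                                # realizations of X(t), shape (N, n_t)

# Eq. 2.14: x_k(t) = E[X(t) conj(Z_k)] / Var[Z_k], expectations as averages.
x_rec = (Z.conj() @ X) / N / V[:, None]

assert np.allclose(x_rec, x, atol=0.05)
```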
Because requiring the coefficients in a canonical expansion to have unit variance would complicate some of the ensuing algorithms, we will only require orthogonality of the family {Zk}, which is precisely the condition set forth at the outset. With a few minor changes, Theorem 2.1 still applies. In particular, choosing xk(t) according to Eq. 2.14 provides the best subspace approximation and results in a Fourier series with nonnormalized coefficients. The partial sums still give the appropriate projections. If there is convergence in the mean-square to X(t), then the series provides the desired representation in terms of white noise (albeit, white noise not possessing unit variance). If X(t) is given by the canonical representation of Eq. 2.1, then, because the zero-mean random variables Zk are uncorrelated, applying the expected value under the assumption that it can be interchanged with the infinite sum yields
$$K_X(t, t') = \sum_{k=1}^{\infty}\sum_{i=1}^{\infty} E[Z_k\overline{Z}_i]\,x_k(t)\,\overline{x_i(t')} = \sum_{k=1}^{\infty} \mathrm{Var}[Z_k]\,x_k(t)\,\overline{x_k(t')}. \qquad (2.15)$$
This equation is called a canonical expansion of the covariance function in terms of the coordinate functions x_k. Setting t = t' in the covariance expansion yields a variance expansion:

$$\mathrm{Var}[X(t)] = \sum_{k=1}^{\infty} \mathrm{Var}[Z_k]\,|x_k(t)|^2. \qquad (2.16)$$
Equation 2.16 is the Parseval identity, adjusted for coefficients that are not necessarily normalized. Setting Var[Z_k] = 1 recovers the ordinary Parseval identity, which in the general algebraic theory takes the form

$$\|x\|^2 = \sum_{k=1}^{\infty} |\hat{x}_k|^2 \qquad (2.17)$$

relative to an orthonormal system {z_1, z_2, ...}. Setting Var[Z_k] = 1 in Eq. 2.15 yields the random-function form of what is sometimes referred to as the Parseval inner-product identity,

$$\langle x, y\rangle = \sum_{k=1}^{\infty} \hat{x}_k \overline{\hat{y}}_k. \qquad (2.18)$$
As noted above, a vector x equals its Fourier series (Eq. 2.6) if and only if the Parseval identity (Eq. 2.17) holds. If both x and y equal their Fourier series, then Eq. 2.18 holds. Equations 2.15 and 2.16 are versions of the nonnormalized general Parseval identities, in which each term in the sum has the factor ‖z_k‖^2. In Section 2.4 we will show that, given a canonical expansion of the covariance function in terms of linearly independent coordinate functions, there exists a canonical expansion of the random function itself in terms of the same coordinate functions.

Continuing to assume that X(t) is given by the canonical expansion of Eq. 2.1, if a linear operator C is applied to X(t) to obtain the output process Y(s), then

$$Y(s) = m_Y(s) + \sum_{k=1}^{\infty} Z_k y_k(s), \qquad (2.19)$$

where m_Y = C(m_X) and y_k = C(x_k). Owing to the canonical form of this expansion, the nonnormalized forms of the Parseval inner-product identity and the Parseval norm identity [Eqs. 2.15 and 2.16 with y_k(s) in place of x_k(t)] apply to K_Y(s, s') and Var[Y(s)], respectively.
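As an illustrative numerical sketch (the grid, operator, variances, and coordinate functions below are assumptions, not from the text), a finite version of a canonical expansion can be pushed through a discretized linear operator, confirming that the output covariance has the canonical expansion with y_k = C(x_k), per Eqs. 2.15, 2.16, and 2.19:

```python
# Covariance expansion (Eq. 2.15) before and after a linear operator C.
import numpy as np

n_t = 64
t = np.linspace(0.0, 1.0, n_t)
dt = t[1] - t[0]

V = np.array([3.0, 1.0, 0.25])                            # Var[Z_k]
x = np.stack([np.sin(np.pi * t), np.cos(np.pi * t), t])   # x_k(t)

# Linear operator C: a running integral, represented as a matrix.
C = np.tril(np.ones((n_t, n_t))) * dt
y = x @ C.T                                               # y_k = C(x_k)

# Canonical covariance expansions (Eq. 2.15) for input and output.
K_X = (x.T * V) @ x.conj()
K_Y = (y.T * V) @ y.conj()

# The expansion agrees with transforming K_X directly: K_Y = C K_X C^H.
assert np.allclose(K_Y, C @ K_X @ C.conj().T)

# Eq. 2.16 with y_k in place of x_k: Var[Y(s)] is the diagonal.
assert np.allclose(np.diag(K_Y), np.sum(V[:, None] * np.abs(y) ** 2, axis=0))
```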
2.2 Constructing Canonical Expansions

This section provides a constructive methodology for forming canonical expansions based on generating coefficients via integral functionals. Suppose that a_1(t), a_2(t), ... ∈ L^2(T). Assuming that X(t) has mean zero, for k = 1, 2, ..., let

$$Z_k = \int_T X(t)\,\overline{a_k(t)}\,dt. \qquad (2.20)$$
Since Theorem 1.2 holds for complex random functions, the integral exists in the MS sense if and only if the corresponding mean and covariance integrals exist. Since m_X(t) = 0, E[Z_k] = 0. Thus, we only need to be concerned with the covariance integral, which here is

$$E[Z_k\overline{Z}_j] = E\!\left[\int_T X(t)\,\overline{a_k(t)}\,dt\,\overline{\int_T X(s)\,\overline{a_j(s)}\,ds}\right] = \int_T\!\int_T \overline{a_k(t)}\,a_j(s)\,K_X(t, s)\,dt\,ds. \qquad (2.21)$$

Assuming that K_X(t, s) ∈ L^2(T × T), by the Cauchy–Schwarz inequality,

$$\int_T\!\int_T |a_k(t)\,a_j(s)\,K_X(t, s)|\,dt\,ds \le \left(\int_T\!\int_T |a_k(t)\,a_j(s)|^2\,dt\,ds\right)^{1/2}\left(\int_T\!\int_T |K_X(t, s)|^2\,dt\,ds\right)^{1/2} = \left(\int_T |a_k(t)|^2\,dt\right)^{1/2}\left(\int_T |a_j(s)|^2\,ds\right)^{1/2}\left(\int_T\!\int_T |K_X(t, s)|^2\,dt\,ds\right)^{1/2}, \qquad (2.22)$$

which is finite. Thus, $\overline{a_k(t)}\,a_j(s)\,K_X(t, s)$ is absolutely integrable, ensuring the existence of the integral in Eq. 2.21 and MS integrability in Eq. 2.20. Z_1, Z_2, ... are uncorrelated if and only if

$$\int_T\!\int_T \overline{a_k(t)}\,a_j(s)\,K_X(t, s)\,dt\,ds = 0 \qquad (2.23)$$

for j ≠ k, and the variance of Z_k is given by

$$V_k = \int_T\!\int_T \overline{a_k(t)}\,a_k(s)\,K_X(t, s)\,dt\,ds. \qquad (2.24)$$
If X(t) has the canonical expansion of Eq. 2.1, then according to Eq. 2.14,

$$x_k(t) = \frac{1}{V_k}\,E\!\left[X(t)\,\overline{\int_T X(s)\,\overline{a_k(s)}\,ds}\right] = \frac{1}{V_k}\int_T a_k(s)\,K_X(t, s)\,ds. \qquad (2.25)$$

Like a_1(t), a_2(t), ..., the coordinate functions x_1(t), x_2(t), ... lie in L^2(T), since

$$\int_T |x_k(t)|^2\,dt \le \frac{1}{V_k^2}\int_T\left(\int_T |a_k(s)\,K_X(t, s)|\,ds\right)^2 dt \le \frac{1}{V_k^2}\int_T\left(\int_T |a_k(s)|^2\,ds\right)\left(\int_T |K_X(t, s)|^2\,ds\right)dt = \frac{1}{V_k^2}\left(\int_T |a_k(s)|^2\,ds\right)\int_T\!\int_T |K_X(t, s)|^2\,ds\,dt, \qquad (2.26)$$
which is finite because a_k(s) and K_X(t, s) are square-integrable. Integrating Eq. 2.25 against $\overline{a_j(t)}$ yields

$$\int_T \overline{a_j(t)}\,x_k(t)\,dt = \frac{1}{V_k}\int_T\!\int_T \overline{a_j(t)}\,a_k(s)\,K_X(t, s)\,dt\,ds. \qquad (2.27)$$

Therefore, if we are given a_1(t), a_2(t), ... and define x_1(t), x_2(t), ... by Eq. 2.25, it follows from Eqs. 2.23, 2.24, and 2.27 that the coefficients are orthogonal if and only if

$$\int_T \overline{a_j(t)}\,x_k(t)\,dt = \delta_{jk}. \qquad (2.28)$$

For j ≠ k, a_j(t) and x_k(t) are orthogonal in L^2(T) and are said to be bi-orthogonal. Reconsidering Eq. 2.27 and using the conjugate symmetry of the covariance function,

$$\frac{1}{V_k}\int_T\!\int_T \overline{a_j(t)}\,a_k(s)\,K_X(t, s)\,dt\,ds = \frac{1}{V_k}\int_T a_k(s)\,\overline{\int_T a_j(t)\,K_X(s, t)\,dt}\,ds = \frac{V_j}{V_k}\int_T a_k(s)\,\overline{x_j(s)}\,ds. \qquad (2.29)$$

Combining Eqs. 2.27 and 2.29 yields

$$V_k\int_T \overline{a_j(t)}\,x_k(t)\,dt = V_j\int_T a_k(t)\,\overline{x_j(t)}\,dt. \qquad (2.30)$$
Two questions need to be addressed. First, how do we find a sequence of functions a_1(t), a_2(t), ... so that Eq. 2.28 is satisfied; second, if it is satisfied, what are the conditions for the resulting expansion to converge to X(t) in the mean square? Consider the problem of finding function sequences satisfying Eqs. 2.25 and 2.28. To simplify notation and clarify the linear structure of the overall procedure, we rewrite Eqs. 2.24, 2.25, 2.28, and 2.30 in terms of the inner product on L^2(T):

$$V_k = \langle\langle K_X(t, s), a_k(t)\rangle, a_k(s)\rangle, \qquad (2.31)$$

$$x_k(t) = V_k^{-1}\langle K_X(t, s), a_k(s)\rangle, \qquad (2.32)$$

$$\langle x_k(t), a_j(t)\rangle = \delta_{jk}, \qquad (2.33)$$

$$V_k\langle x_k(t), a_j(t)\rangle = V_j\langle a_k(t), x_j(t)\rangle, \qquad (2.34)$$
where the variables are included because K_X(t, s) is a function of two variables.

To generate sequences of functions in L^2(T) satisfying Eqs. 2.32 and 2.33, let h_1(t), h_2(t), ... ∈ L^2(T). Define a_1(t) = h_1(t). Find V_1 and x_1(t) from Eqs. 2.31 and 2.32, respectively. Define a_2(t) = c_{21}a_1(t) + h_2(t), where c_{21} is determined by the bi-orthogonality condition ⟨x_1(t), a_2(t)⟩ = 0, so that

$$c_{21}\langle x_1(t), a_1(t)\rangle + \langle x_1(t), h_2(t)\rangle = 0. \qquad (2.35)$$

Since ⟨x_1(t), a_1(t)⟩ = 1, c_{21} = −⟨x_1(t), h_2(t)⟩. V_2 and x_2(t) are found from Eqs. 2.31 and 2.32. Proceeding recursively, suppose that we have found a_1(t), ..., a_{n−1}(t), x_1(t), ..., x_{n−1}(t), and V_1, ..., V_{n−1}. Define

$$a_n(t) = \sum_{i=1}^{n-1} c_{ni}a_i(t) + h_n(t), \qquad (2.36)$$

where c_{n1}, c_{n2}, ..., c_{n,n−1} are determined by bi-orthogonality for k = 1, 2, ..., n − 1. Applying bi-orthogonality to a_n(t) yields the system

$$\sum_{i=1}^{n-1} c_{ni}\langle x_k(t), a_i(t)\rangle + \langle x_k(t), h_n(t)\rangle = 0 \qquad (2.37)$$

for k = 1, 2, ..., n − 1. Applying the bi-orthogonality conditions within the sum yields

$$c_{nk} = -\langle x_k(t), h_n(t)\rangle \qquad (2.38)$$

for k = 1, 2, ..., n − 1. V_n and x_n(t) are found from Eqs. 2.31 and 2.32. By construction,

$$\langle x_k(t), a_n(t)\rangle = 0 \qquad (2.39)$$

for k < n. From Eq. 2.34, ⟨x_n(t), a_k(t)⟩ = 0 for k > n, and Eq. 2.33 is satisfied.

Given a bi-orthogonal function system, we need to determine whether the expansion with the coefficients of Eq. 2.20 is MS convergent to X(t). Using inner-product notation, that equation can be written as

$$Z_k = \langle X(t), a_k(t)\rangle. \qquad (2.40)$$
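The recursive construction of Eqs. 2.35–2.39 can be sketched numerically on a discretized domain. The following is an illustrative, real-valued sketch (the covariance kernel and the seed functions h_k are assumptions, not from the text):

```python
# Discretized bi-orthogonal construction (Eqs. 2.31-2.39):
# <f, g> = integral of f * conj(g) is approximated on a uniform grid.
import numpy as np

n_t = 200
t = np.linspace(0.0, 1.0, n_t)
dt = t[1] - t[0]
K = np.minimum.outer(t, t)             # an illustrative covariance kernel

def ip(f, g):
    """Discretized L2(T) inner product."""
    return np.sum(f * np.conj(g)) * dt

h = [t**k for k in range(4)]           # seed functions h_1, h_2, ...
a, x, V = [], [], []
for n, hn in enumerate(h):
    # Eqs. 2.36 and 2.38: a_n = h_n - sum_{k<n} <x_k, h_n> a_k
    an = hn - sum(ip(x[k], hn) * a[k] for k in range(n))
    Vn = ip(K @ an * dt, an)           # Eq. 2.31, discretized
    xn = (K @ an) * dt / Vn            # Eq. 2.32, discretized
    a.append(an); x.append(xn); V.append(Vn)

# Bi-orthogonality (Eq. 2.33): <x_k, a_j> = delta_jk.
G = np.array([[ip(x[k], a[j]) for j in range(len(h))] for k in range(len(h))])
assert np.allclose(G, np.eye(len(h)), atol=1e-6)
```

Note that only the conditions ⟨x_k, a_n⟩ = 0 for k < n are imposed explicitly; the k > n cases follow from the symmetry relation of Eq. 2.34, exactly as in the text.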
There exist necessary and sufficient conditions for convergence. We say that a sequence a_1(t), a_2(t), ... of square-integrable functions is complete relative to the random function X(t) if, whenever b(t) is a square-integrable function such that

$$\langle\langle K_X(t, s), b(t)\rangle, a_k(s)\rangle = 0 \qquad (2.41)$$

for k = 1, 2, ..., it cannot be that

$$\langle\langle K_X(t, s), b(t)\rangle, b(s)\rangle > 0. \qquad (2.42)$$
Theorem 2.2. If {a_k(t)} is a sequence of square-integrable functions, $Z_k = \langle X(t), a_k(t)\rangle$, $x_k(t) = V_k^{-1}\langle K_X(t, s), a_k(s)\rangle$, and {x_k(t)} and {a_k(t)} are bi-orthogonal function sequences, then X(t) equals in the mean square its canonical expansion determined by Z_k and x_k(t) if and only if {a_k(t)} is complete relative to X(t). ▪

The theory we have provided concerns ordinary (nongeneralized) covariance functions. As has typically been the case, the theory extends to generalized covariance functions.

Example 2.1. Let X(t) be white noise of constant intensity κ over the interval [0, T]. X(t) has mean zero and covariance function K_X(t, t') = κδ(t − t'). For k = ..., −2, −1, 0, 1, 2, ..., let $a_k(t) = e^{j\omega_k t}$, where ω_k = 2kπ/T. From Eqs. 2.31 and 2.32,

$$V_k = \kappa\int_0^T\!\!\int_0^T e^{j\omega_k(s - t)}\,\delta(t - s)\,dt\,ds = \kappa T,$$

$$x_k(t) = \frac{1}{\kappa T}\int_0^T e^{j\omega_k s}\,\kappa\,\delta(t - s)\,ds = \frac{1}{T}e^{j\omega_k t}.$$

The functions x_k(t) and a_i(t) are bi-orthogonal since

$$\langle x_k(t), a_i(t)\rangle = \frac{1}{T}\int_0^T e^{j(\omega_k - \omega_i)t}\,dt = \delta_{ki}.$$

To show that the a_k(t) form a complete system, suppose that b(t) is square-integrable and

$$\int_0^T\!\!\int_0^T b(t)\,a_k(s)\,K_X(t, s)\,dt\,ds = 0.$$

Then

$$0 = \int_0^T\!\!\int_0^T b(t)\,e^{j\omega_k s}\,\kappa\,\delta(t - s)\,dt\,ds = \kappa\int_0^T b(t)\,e^{j\omega_k t}\,dt$$

for k = ..., −2, −1, 0, 1, 2, .... Hence, b(t) is identically zero, the strict inequality in Eq. 2.42 is impossible, and the function system is complete relative to X(t). Therefore, X(t) and K_X(t, t') have the canonical expansions

$$X(t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} Z_k\,e^{j\omega_k t}, \qquad K_X(t, t') = \frac{\kappa}{T}\sum_{k=-\infty}^{\infty} e^{j\omega_k(t - t')},$$

where

$$Z_k = \int_0^T X(t)\,e^{-j\omega_k t}\,dt. \qquad ▪$$
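Example 2.1 can be checked by Monte Carlo with a discrete surrogate for white noise (the grid, sample sizes, and the surrogate itself are illustrative assumptions): on a grid of spacing dt, white noise of intensity κ is approximated by independent samples of variance κ/dt, and the coefficients Z_k should come out uncorrelated with Var[Z_k] = κT.

```python
# Monte Carlo sketch of Example 2.1, discretized.
import numpy as np

rng = np.random.default_rng(3)
kappa, T, n_t, N = 2.0, 1.0, 256, 20_000
dt = T / n_t
t = np.arange(n_t) * dt

# Discrete surrogate for white noise: iid samples of variance kappa/dt.
X = rng.normal(scale=np.sqrt(kappa / dt), size=(N, n_t))

ks = np.array([-2, -1, 0, 1, 2])
A = np.exp(-1j * 2 * np.pi * np.outer(ks, t) / T)   # e^{-j w_k t}
Z = X @ A.T * dt                                    # Z_k for each path

# Sample mixed moments E[Z_k conj(Z_i)]: kappa*T on the diagonal, ~0 off it.
C = Z.conj().T @ Z / N
assert np.allclose(C, kappa * T * np.eye(len(ks)), atol=0.15)
```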
2.3 Orthonormal Coordinate Functions

This section considers a natural problem: given an orthonormal function system, find a canonical expansion for a zero-mean random function X(t). In the process we will deduce the Karhunen–Loève theorem and demonstrate its centrality to canonical expansions having orthonormal coordinate functions.

2.3.1 Canonical expansions and the Parseval identity

Given an orthonormal system {u_k(t)} and a random function X(t), the aim is to find coefficients Z_1, Z_2, ... such that

$$X(t) = \sum_{k=1}^{\infty} Z_k u_k(t) \qquad (2.43)$$
in the mean square. To discover the form of the coefficients, multiply both sides of the proposed expansion by the conjugate of u_k(t) and integrate over T, assuming that the order of integration and summation can be interchanged. Owing to orthonormality,

$$\int_T X(t)\,\overline{u_k(t)}\,dt = \sum_{i=1}^{\infty} Z_i\int_T u_i(t)\,\overline{u_k(t)}\,dt = Z_k \qquad (2.44)$$

for k = 1, 2, .... Thus, Z_k is given by the generalized Fourier coefficient of X(t) relative to u_k(t). Given an orthonormal function system {u_k(t)}, a series of the form given in Eq. 2.43 with Z_1, Z_2, ... being the generalized Fourier coefficients may or may not converge, and Z_1, Z_2, ... may or may not be uncorrelated. Regarding correlation of the coefficients, the inner product between Z_k and Z_i is

$$E[Z_k\overline{Z}_i] = E\!\left[\int_T\!\int_T X(t)\,\overline{X(s)}\,\overline{u_k(t)}\,u_i(s)\,dt\,ds\right] = \int_T\!\int_T R_X(t, s)\,\overline{u_k(t)}\,u_i(s)\,dt\,ds. \qquad (2.45)$$

For k ≠ i, Z_k and Z_i are uncorrelated when this double integral vanishes. Moreover, the variance of Z_k is given by

$$E[|Z_k|^2] = \int_T\!\int_T R_X(t, s)\,\overline{u_k(t)}\,u_k(s)\,dt\,ds. \qquad (2.46)$$

Assuming that Z_1, Z_2, ... are uncorrelated, to analyze convergence we apply Theorem 2.1, keeping in mind that Z_1, Z_2, ... need not be orthonormal. Letting V_k = Var[Z_k], Eq. 2.14 gives the nonnormalized Fourier coefficients as

$$\frac{E[X(t)\overline{Z}_k]}{V_k} = \frac{1}{V_k}\int_T E[X(t)\overline{X(s)}]\,u_k(s)\,ds = \frac{1}{V_k}\int_T R_X(t, s)\,u_k(s)\,ds. \qquad (2.47)$$
X(t) has the Fourier series

$$\sum_{k=1}^{\infty}\left[\frac{1}{V_k}\int_T R_X(t, s)\,u_k(s)\,ds\right]Z_k \qquad (2.48)$$

relative to the system {Z_k}. According to the Parseval identity, X(t) equals its Fourier series in the mean square if and only if

$$\sum_{k=1}^{\infty}\frac{1}{V_k}\left|\int_T R_X(t, s)\,u_k(s)\,ds\right|^2 = \mathrm{Var}[X(t)] = R_X(t, t). \qquad (2.49)$$
Recalling Eq. 2.16, we see that Eq. 2.49 is the canonical expansion for the variance corresponding to the canonical expansion defined by the Fourier series for X(t). Hence, we can state the following theorem.

Theorem 2.3. Suppose that {u_k(t)} is an orthonormal function system over T, and the generalized Fourier coefficients Z_1, Z_2, ... of the zero-mean random function X(t) with respect to u_1(t), u_2(t), ... are uncorrelated. Then X(t) has the canonical representation

$$X(t) = \sum_{k=1}^{\infty}\left[\frac{1}{V_k}\int_T R_X(t, s)\,u_k(s)\,ds\right]Z_k \qquad (2.50)$$

if and only if the series of Eq. 2.49 provides a canonical expansion of Var[X(t)]. The expansion is the Fourier series of X(t) relative to Z_1, Z_2, .... ▪

2.3.2 Karhunen–Loève expansion

We now consider a special case involving the eigenfunctions of the covariance function. The theorem provides a valid canonical representation throughout the domain over which the random function is defined and serves as a benchmark for data compression. A couple of definitions are required. For a weight function w(t) ≥ 0, a function g(s, t) ∈ L^2(w(t) × w(s)) on T × T if

$$\int_T\!\int_T |g(t, s)|^2\,w(t)\,w(s)\,dt\,ds < \infty. \qquad (2.51)$$
If g(s, t) ∈ L^2(w(t) × w(s)) on T × T, and λ_0 and u_0(t) solve the integral equation

$$\int_T g(t, s)\,u(s)\,w(s)\,ds = \lambda u(t), \qquad (2.52)$$

then λ_0 is called an eigenvalue, and u_0(t) is the corresponding eigenfunction.

Theorem 2.4 (Karhunen–Loève). Suppose that X(t) is a zero-mean random function on T, and R_X(t, s) ∈ L^2(w(t) × w(s)) on T × T.

(i) For the integral equation

$$\int_T R_X(t, s)\,u(s)\,w(s)\,ds = \lambda u(t), \qquad (2.53)$$

there exists a discrete (finite or infinite) set of eigenvalues λ_1 ≥ λ_2 ≥ λ_3 ≥ ··· ≥ 0 with corresponding eigenfunctions u_1(t), u_2(t), u_3(t), ... such that {u_k(t)} is a deterministic orthonormal system on T relative to the weight function w(t).

(ii) The generalized Fourier coefficients of X(t) relative to the set of eigenfunctions,

$$Z_k = \int_T X(t)\,\overline{u_k(t)}\,w(t)\,dt, \qquad (2.54)$$

are uncorrelated for k = 1, 2, ..., and Var[Z_k] = λ_k.

(iii) X(t) is represented by its canonical expansion in Eq. 2.43. ▪

Condition (i) means that

$$\int_T R_X(t, s)\,u_k(s)\,w(s)\,ds = \lambda_k u_k(t) \qquad (2.55)$$
for k = 1, 2, .... The Karhunen–Loève expansion is the paradigmatic example of Theorem 2.3. To see this (and, for simplicity, letting w(s) ≡ 1), applying Eq. 2.55 to Eq. 2.21, which holds for any orthonormal system, shows that Z_1, Z_2, ... are uncorrelated,

$$E[Z_k\overline{Z}_i] = \lambda_k\int_T u_i(t)\,\overline{u_k(t)}\,dt = 0 \qquad (2.56)$$

for k ≠ i. Equation 2.46 shows that V_k = λ_k. According to Eq. 2.55,

$$u_k(t) = \frac{1}{V_k}\int_T R_X(t, s)\,u_k(s)\,ds, \qquad (2.57)$$

so that the Fourier series of Eq. 2.50 takes the form of Eq. 2.43. From the theory of integral equations, Mercer's theorem states that, if R_X(t, s) is continuous over T × T, then

$$R_X(t, s) = \sum_{k=1}^{\infty} \lambda_k\,u_k(t)\,\overline{u_k(s)}, \qquad (2.58)$$

where convergence is uniform and absolute. Letting s = t in Mercer's theorem yields

$$R_X(t, t) = \sum_{k=1}^{\infty} \lambda_k\,|u_k(t)|^2, \qquad (2.59)$$
which is the canonical expansion of the variance in Eq. 2.49. Hence, Theorem 2.3 shows that Eq. 2.43 provides the desired canonical expansion. The ease with which MS convergence has been demonstrated shows that the depth of the Karhunen–Loève expansion lies in the integral-equation theory pertaining to Eq. 2.53 in the theorem. We have shown convergence under continuity of the covariance function; the theorem holds for more general covariance functions in the context of generalized function theory.

A converse proposition holds. If {u_k(t)} is an orthonormal system and X(t) has a canonical representation of the form given in Eq. 2.43, then (as we have shown) Z_1, Z_2, ... must be the generalized Fourier coefficients of X(t) with respect to u_1(t), u_2(t), ... and, because the Fourier representation of Eq. 2.50 is unique, u_k(t) must be given by Eq. 2.57. Thus, u_1(t), u_2(t), ... must be eigenfunctions for Eq. 2.53, and V_k = λ_k. For the converse to hold, it must be that u_1(t), u_2(t), ... constitute the eigenfunctions for all nonzero eigenvalues. This is assured because a canonical expansion of the variance must result from the function expansion, and the variance expansion is given by Eq. 2.59, which, according to Mercer's theorem, involves all eigenfunctions. Since it is always possible to find a weight function such that R_X(t, s) ∈ L^2(w(t) × w(s)) on T × T, in theory the Karhunen–Loève theorem is definitive.

Example 2.2. The autocorrelation for the Wiener process is R_X(t, t') = σ^2 min(t, t'). For simplicity, let w(s) ≡ 1 and σ^2 = 1. Then the Karhunen–Loève integral equation over the interval (0, T) is

$$\int_0^T \min(t, s)\,u(s)\,ds = \lambda u(t)$$

for 0 < t < T. According to the definition of the minimum, the integral equation becomes

$$\int_0^t s\,u(s)\,ds + t\int_t^T u(s)\,ds = \lambda u(t),$$

with boundary condition u(0) = 0. Differentiation with respect to t yields

$$\int_t^T u(s)\,ds = \lambda u'(t),$$

with boundary condition u'(T) = 0. A second differentiation yields

$$\lambda u''(t) + u(t) = 0.$$

Along with its boundary values, this second-order differential equation has solutions for

$$\lambda_k = \left(\frac{2T}{(2k - 1)\pi}\right)^2$$

for k = 1, 2, .... These form distinct eigenvalues for the integral equation, and the corresponding eigenfunctions are

$$u_k(t) = \sqrt{\frac{2}{T}}\,\sin\frac{(k - 1/2)\pi t}{T}.$$

The uncorrelated generalized Fourier coefficients are

$$Z_k = \int_0^T X(t)\,\sqrt{\frac{2}{T}}\,\sin\frac{(2k - 1)\pi t}{2T}\,dt,$$

and the Karhunen–Loève expansion of the Wiener process is given by Eq. 2.43. ▪

In regard to data compression, consider a discrete zero-mean random process X(n), n = 1, 2, ..., and its Karhunen–Loève expansion,

$$X(n) = \sum_{k=1}^{\infty} Z_k u_k(n). \qquad (2.60)$$
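The closed-form eigenvalues of Example 2.2 can be verified numerically by discretizing the covariance kernel min(t, s) on a grid and eigendecomposing the resulting matrix (the grid and tolerances below are illustrative assumptions):

```python
# Nystrom-style numerical check of the Wiener-process Karhunen-Loeve
# eigenvalues from Example 2.2: lambda_k = (2T / ((2k - 1) pi))^2.
import numpy as np

T, n = 1.0, 1000
dt = T / n
t = (np.arange(n) + 0.5) * dt          # midpoint quadrature grid
K = np.minimum.outer(t, t)             # Wiener covariance min(t, s)

# Eigenvalues of the integral operator ~ eigenvalues of K * dt.
lam_num = np.sort(np.linalg.eigvalsh(K) * dt)[::-1]

k = np.arange(1, 6)
lam_exact = (2 * T / ((2 * k - 1) * np.pi)) ** 2
assert np.allclose(lam_num[:5], lam_exact, rtol=1e-2)
```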
Let

$$X_m(n) = \sum_{k=1}^{m} Z_k u_k(n) \qquad (2.61)$$

be the mth partial sum of the series. According to the Karhunen–Loève theorem, X_m(n) converges to X(n) in the mean square as m → ∞. Owing to the uncorrelatedness of the generalized Fourier coefficients, the mean-square error is given by

$$E[|X(n) - X_m(n)|^2] = E\!\left[\sum_{k=m+1}^{\infty}\sum_{i=m+1}^{\infty} Z_k\overline{Z}_i\,u_k(n)\,\overline{u_i(n)}\right] = \sum_{k=m+1}^{\infty} \mathrm{Var}[Z_k]\,|u_k(n)|^2. \qquad (2.62)$$
Define the global mean-square error (MSE) between the random functions X(n) and X_m(n) by summing over all n:

$$\mathrm{MSE}[X, X_m] = \sum_{k=m+1}^{\infty} \mathrm{Var}[Z_k]\sum_{n=1}^{\infty} |u_k(n)|^2 = \sum_{k=m+1}^{\infty} \mathrm{Var}[Z_k] = \sum_{k=m+1}^{\infty} \lambda_k, \qquad (2.63)$$

where the second equality follows from the orthonormality of {u_k(n)}, k = 1, 2, ..., over the positive integers. X_m(n) represents a compressed approximation of X(n) and, since the eigenvalues are nonincreasing, it represents the best approximation relative to the global MSE if one is going to use m terms from the Karhunen–Loève expansion.
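Equation 2.63 can be checked directly for a finite discrete process, where the Karhunen–Loève basis is the eigenbasis of the covariance matrix (the covariance model and sample sizes below are illustrative assumptions, not from the text):

```python
# Compression sketch: global MSE of the m-term Karhunen-Loeve truncation
# equals the sum of the discarded eigenvalues (Eq. 2.63).
import numpy as np

rng = np.random.default_rng(4)
n, m, N = 12, 4, 100_000

# A valid covariance matrix (AR(1)-style, purely illustrative).
K = 0.8 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

lam, U = np.linalg.eigh(K)
lam, U = lam[::-1], U[:, ::-1]             # nonincreasing eigenvalues

# Simulate zero-mean Gaussian paths with covariance K.
X = rng.multivariate_normal(np.zeros(n), K, size=N)

Z = X @ U                                  # generalized Fourier coefficients
Xm = Z[:, :m] @ U[:, :m].T                 # m-term truncation, Eq. 2.61

mse = np.mean(np.sum((X - Xm) ** 2, axis=1))
assert np.isclose(mse, lam[m:].sum(), rtol=0.02)
```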
2.4 Derivation from a Covariance Expansion

We demonstrate the claim made in Section 2.1 that, given a canonical expansion of the covariance function in terms of linearly independent coordinate functions, there exists a canonical expansion of the random function in terms of the same coordinate functions. For a zero-mean random function X(t), assume that the covariance function possesses the canonical expansion

$$K_X(t, s) = \sum_{k=1}^{\infty} V_k\,x_k(t)\,\overline{x_k(s)} \qquad (2.64)$$

having linearly independent coordinate functions. Then we can find uncorrelated zero-mean random variables Z_1, Z_2, ... such that

$$X(t) = \sum_{k=1}^{\infty} Z_k x_k(t), \qquad (2.65)$$

with V_k = Var[Z_k]. The aim is to find functions a_1(t), a_2(t), ... that, together with x_1(t), x_2(t), ..., satisfy the bi-orthogonality relation of Eq. 2.33 and the representation of Eq. 2.32 (equivalently, Eq. 2.25). Once a_1(t), a_2(t), ... have been found, Eq. 2.65 provides a canonical expansion for X(t) with Z_k = ⟨X(t), a_k(t)⟩ if the series is MS convergent to X(t). Before showing how to find appropriate functions a_1(t), a_2(t), ..., we first consider MS convergence given that a_1(t), a_2(t), ... have been found. Equations 2.25 and 2.47 both give Fourier coefficients of X(t) relative to a set of uncorrelated random variables Z_1, Z_2, .... The formulations are the same except that a_k(s) appears in Eq. 2.25 and u_k(s) appears in Eq. 2.47.
Since Theorem 2.3 is based solely on Theorem 2.1 and the formulation of the Fourier coefficients for Z_1, Z_2, ..., we can apply it in the present case with a_k(s) in place of u_k(s), and Z_k given by Eq. 2.20 rather than Eq. 2.44. Thus, with regard to Eq. 2.65, it states that X(t) has the canonical expansion of Eq. 2.65 if and only if the variance of X(t) has the expansion

$$\mathrm{Var}[X(t)] = \sum_{k=1}^{\infty} V_k\,|x_k(t)|^2. \qquad (2.66)$$

Setting t = s in Eq. 2.64 yields this variance expansion as a corollary of the canonical expansion of the covariance function. In fact, since a canonical expansion of a random function yields a canonical expansion of the covariance, which in turn yields a canonical expansion of the variance, we have the following general theorem.

Theorem 2.5. If {a_k(t)} is a sequence of square-integrable functions, $Z_k = \langle X(t), a_k(t)\rangle$, $x_k(t) = V_k^{-1}\langle K_X(t, s), a_k(s)\rangle$, and {x_k(t)} and {a_k(t)} are bi-orthogonal function sequences, then the following conditions are equivalent: (i) X(t) is equal in the mean square to its canonical expansion determined by Z_k and x_k(t); (ii) K_X(t, t') has a canonical expansion in terms of V_k, x_k(t), and x_k(t'); and (iii) Var[X(t)] has a canonical expansion in terms of V_k and x_k(t). ▪

The fact that the three conditions of Theorem 2.5 are equivalent is not surprising. The first states that X(t) equals its Fourier series relative to Z_1, Z_2, ..., the third states Parseval's identity for the norm ‖X(t)‖, and the second states Parseval's identity for the inner product ⟨X(t), X(s)⟩, namely,

$$\langle X(t), X(s)\rangle = R_X(t, s) = \sum_{k=1}^{\infty} V_k\,x_k(t)\,\overline{x_k(s)} = \sum_{k=1}^{\infty} V_k\,\frac{E[X(t)\overline{Z}_k]}{V_k}\,\overline{\left(\frac{E[X(s)\overline{Z}_k]}{V_k}\right)} = \sum_{k=1}^{\infty} \langle X(t), V_k^{-1/2}Z_k\rangle\,\overline{\langle X(s), V_k^{-1/2}Z_k\rangle}. \qquad (2.67)$$
Two vectors in a Hilbert space equal their respective Fourier series if and only if both forms of Parseval’s identity, norm and inner product, apply to them. In light of Theorem 2.5, the role played by the integral equation in Eq. 2.52 becomes clear: its eigenfunctions supply both {ak(t)} and {xk(t)}, which are ipso facto bi-orthogonal, and Mercer’s theorem gives a canonical expansion of the covariance.
42
Chapter 2
We now find functions a1 ðtÞ, a2 ðtÞ, : : : that, together with the functions x1 ðtÞ, x2 ðtÞ, : : : in the covariance expansion of Eq. 2.64, satisfy the conditions of Theorem 2.5. We begin with an arbitrary sequence of square-integrable functions h1 ðtÞ, h2 ðtÞ, : : : and define functions g1 ðtÞ, g2 ðtÞ, : : : in the following manner. Let g1(t) ¼ h1(t). Then define g2 ðtÞ ¼ c21 g1 ðtÞ þ h2 ðtÞ
(2.68)
subject to the condition 〈x1, g2〉 ¼ 0 (where we have dropped the variable t in the inner product because only a single variable is currently involved). Applying g2 to x1 yields c21 ¼
hx1 , h2 i : hx1 , g1 i
(2.69)
To avoid a problem here owing to division by 〈x1, g1〉, h1 must be chosen so that 〈x1, g1〉 ≠ 0. Proceeding recursively, given g1 , g2 , : : : , gn1 , define gn ðtÞ ¼
n1 X
cni gi ðtÞ þ hn ðtÞ
(2.70)
i¼1
subject to the conditions 〈xi, gn〉 ¼ 0 for i ¼ 1, 2, : : : , n 1. Also assume that h1 , h2 , : : : are chosen so that 〈xn, gn〉 ≠ 0 for n ¼ 1, 2, : : : . Applying gn to xk for k ¼ 1, 2, : : : , n 1 yields 0¼
k X
cni hxk , gi i þ hxk , hn i:
(2.71)
i¼1
Solving this system for the coefficients cni yields cn1 ¼ cnk ¼
hxk , hn i þ
Pk1 j¼1
hx1 , hn i , hx1 , g1 i
cnj hxk , gj i
hxk , gk i
ðk ¼ 2, 3, : : : , n 1Þ:
(2.72)
(2.73)
We now define the desired functions a1 ðtÞ, a2 ðtÞ, : : : by an ðtÞ ¼
` X
d ni gi ðtÞ,
(2.74)
i¼n
where the coefficients dni are found to satisfy the bi-orthogonality condition of Eq. 2.33. Applying an to xk under these conditions yields
$$\sum_{i=n}^{k} d_{ni}\langle x_k, g_i\rangle = \delta_{nk} \tag{2.75}$$
for n = 1, 2, … and k = n, n + 1, …. For n = 1, 2, …, we solve this system for the desired d_{nk}:

$$d_{nn} = \frac{1}{\langle x_n, g_n\rangle}, \tag{2.76}$$

$$d_{nk} = -\frac{\sum_{i=n}^{k-1} d_{ni}\langle x_k, g_i\rangle}{\langle x_k, g_k\rangle} \qquad (k = n+1, n+2, \ldots). \tag{2.77}$$
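The recursion of Eqs. 2.68–2.77 is easy to exercise numerically. The sketch below is a finite-dimensional stand-in (functions become vectors on a grid, the L² inner product becomes a dot product; the sizes, the seed functions, and the use of near-orthonormal x_k for good conditioning are all illustrative assumptions), and it verifies the resulting bi-orthogonality ⟨x_k, a_n⟩ = δ_{nk}:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 5, 8
Q, _ = np.linalg.qr(rng.standard_normal((m, N)))
X = Q.T                                    # rows play the role of x_1, ..., x_N
H = X + 0.2 * rng.standard_normal((N, m))  # seed functions h_k (any choice with <x_k, g_k> != 0)

def inner(u, v):
    return float(u @ v)                    # discrete surrogate for the inner product

# Eqs. 2.68-2.73: build g_n with <x_i, g_n> = 0 for i < n
G = [H[0].copy()]
for n in range(1, N):
    g, c = H[n].copy(), np.zeros(n)
    for k in range(n):                     # triangular recursion for c_{n,k}
        c[k] = -(inner(X[k], H[n])
                 + sum(c[j] * inner(X[k], G[j]) for j in range(k))) / inner(X[k], G[k])
        g += c[k] * G[k]
    G.append(g)

# Eqs. 2.74-2.77: a_n = sum_{i >= n} d_{n,i} g_i with <x_k, a_n> = delta_{n,k}
A = []
for n in range(N):
    d = np.zeros(N)
    d[n] = 1.0 / inner(X[n], G[n])
    for k in range(n + 1, N):
        d[k] = -sum(d[i] * inner(X[k], G[i]) for i in range(n, k)) / inner(X[k], G[k])
    A.append(sum(d[i] * G[i] for i in range(n, N)))

B = np.array([[inner(X[k], A[n]) for n in range(N)] for k in range(N)])
print(np.allclose(B, np.eye(N), atol=1e-8))   # bi-orthogonality holds
```

In a finite-dimensional space the expansion of Eq. 2.74 truncates at the number of available functions, so the Kronecker-delta check can be made exact up to rounding.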
2.5 Integral Canonical Expansions

Rather than represent a random function as a sum involving discrete white noise and a countable collection of coordinate functions, we may be able to represent it as an integral involving continuous white noise:

$$X(t) = m_X(t) + \int_{\Xi} Z(\xi)\, x(t,\xi)\, d\xi, \tag{2.78}$$
where Z(ξ) is white noise over the interval Ξ. If the integral is equal in the mean square to X(t), then it is called an integral canonical expansion (representation). With discrete canonical expansions we can employ the L² theory; with continuous canonical expansions we must use generalized functions. This difficulty is evident because an integral expansion involves white noise, which has a generalized covariance function. Owing to the use of generalized functions, we will proceed formally, ignoring mathematical questions such as integrability.

Given the integral canonical representation of Eq. 2.78 for a zero-mean random function X(t), an integral canonical expansion for the covariance function can be derived. The white noise Z(ξ) has covariance function K_Z(ξ, ξ′) = I(ξ)δ(ξ − ξ′) and intensity I(ξ). Applying Theorem 1.2 in the generalized sense yields

$$K_X(t,t') = \int_{\Xi}\int_{\Xi} x(t,\xi)\,\overline{x(t',\xi')}\, I(\xi)\,\delta(\xi-\xi')\, d\xi\, d\xi' = \int_{\Xi} I(\xi)\, x(t,\xi)\,\overline{x(t',\xi)}\, d\xi. \tag{2.79}$$

Letting t′ = t yields the variance expansion

$$\operatorname{Var}[X(t)] = \int_{\Xi} I(\xi)\, |x(t,\xi)|^2\, d\xi. \tag{2.80}$$
In analogy with the discrete representation, given an integral canonical expansion of the covariance, there exists an integral canonical expansion for the random function. The cross-covariance between X(t) and Z(ξ) is given by

$$\begin{aligned}
K_{XZ}(t,\xi) &= E\left[\int_{\Xi} Z(\xi')\,x(t,\xi')\,d\xi'\;\overline{Z(\xi)}\right] = \int_{\Xi} K_Z(\xi',\xi)\,x(t,\xi')\,d\xi' \\
&= \int_{\Xi} I(\xi')\,\delta(\xi'-\xi)\,x(t,\xi')\,d\xi' = I(\xi)\,x(t,\xi).
\end{aligned} \tag{2.81}$$

Dividing by the intensity of the white noise yields

$$x(t,\xi) = \frac{K_{XZ}(t,\xi)}{I(\xi)}, \tag{2.82}$$

which corresponds to Eq. 2.14 for discrete canonical expansions. Thus, the coordinate function for ξ is null at t if X(t) is uncorrelated with Z(ξ); if X(t), as a random function, is uncorrelated with Z(ξ), then x(t, ξ) ≡ 0.

To construct an integral canonical expansion for a zero-mean random function X(t), consider integral functionals formed from a class of functions a(t, ξ), ξ ∈ Ξ:

$$Z(\xi) = \int_{T} X(t)\,\overline{a(t,\xi)}\,dt. \tag{2.83}$$
According to Eq. 2.82, if Z(ξ) is formed in this manner, then

$$x(t,\xi) = \frac{1}{I(\xi)}\,E\left[X(t)\,\overline{\int_{T} X(s)\,\overline{a(s,\xi)}\,ds}\right] = \frac{1}{I(\xi)}\int_{T} a(s,\xi)\,K_X(t,s)\,ds. \tag{2.84}$$
Hence, according to Theorem 1.2,

$$K_Z(\xi,\xi') = \int_{T}\int_{T} K_X(t,t')\,\overline{a(t,\xi)}\,a(t',\xi')\,dt\,dt' = I(\xi')\int_{T} \overline{a(t,\xi)}\,x(t,\xi')\,dt. \tag{2.85}$$

Since the covariance for white noise is K_Z(ξ, ξ′) = I(ξ)δ(ξ − ξ′), it is implied that
$$\int_{T} \overline{a(t,\xi)}\, x(t,\xi')\, dt = \delta(\xi - \xi'), \tag{2.86}$$
which takes the place of the bi-orthogonality relation of Eq. 2.28. Equations 2.84 and 2.86 correspond to Eqs. 2.32 and 2.33, respectively. Assuming that Z(ξ) is defined by Eq. 2.83, Eqs. 2.84 and 2.86 follow from the integral canonical representation of X(t). In fact, if we assume that these two equations hold, then it follows from Eq. 2.85, which depends only on the definition of Z(ξ), that Z(ξ) is white noise. Consequently, Eqs. 2.84 and 2.86 constitute necessary and sufficient conditions for Z(ξ) to be white noise. However, they are not sufficient to conclude that the random function equals the corresponding integral of Eq. 2.78. A third condition,

$$\int_{\Xi} x(t,\xi)\,\overline{a(t',\xi)}\,d\xi = \delta(t - t'), \tag{2.87}$$
combines with Eqs. 2.84 and 2.86 to provide necessary and sufficient conditions for X(t) to be given by the expansion of Eq. 2.78. Indeed, with this latter assumption,

$$\int_{\Xi} Z(\xi)\,x(t,\xi)\,d\xi = \int_{\Xi}\int_{T} X(s)\,\overline{a(s,\xi)}\,x(t,\xi)\,ds\,d\xi = \int_{T} X(s)\left[\int_{\Xi} \overline{a(s,\xi)}\,x(t,\xi)\,d\xi\right] ds = \int_{T} X(s)\,\delta(t-s)\,ds = X(t). \tag{2.88}$$

To find an expression for the intensity of the white noise in the present setting, integrate the white-noise covariance function K_Z(ξ, ξ′) = I(ξ)δ(ξ − ξ′) over ξ′ and then substitute the first integral in Eq. 2.85 to obtain
$$I(\xi) = \int_{\Xi} K_Z(\xi,\xi')\,d\xi' = \int_{\Xi}\int_{T}\int_{T} K_X(t,t')\,\overline{a(t,\xi)}\,a(t',\xi')\,dt\,dt'\,d\xi'. \tag{2.89}$$
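As a sanity check on the two delta conditions, consider their discrete analog on an n-point grid with a(t, k) = e^{jω_k t}, ω_k = 2πk/n, and x(t, k) = e^{jω_k t}/n. This is a hypothetical finite stand-in, not the generalized-function statement itself; the Dirac deltas become Kronecker deltas:

```python
import numpy as np

n = 16
t = np.arange(n)
w = 2 * np.pi * np.arange(n) / n
a = np.exp(1j * np.outer(t, w))    # a[t, k] = e^{j w_k t}
x = a / n                          # x[t, k] = e^{j w_k t} / n
# Discrete analogs of Eqs. 2.86 and 2.87: the deltas become identity matrices
print(np.allclose(np.conj(a).T @ x, np.eye(n)))  # sum_t conj(a(t,k)) x(t,k') = delta_{kk'}
print(np.allclose(x @ np.conj(a).T, np.eye(n)))  # sum_k x(t,k) conj(a(t',k)) = delta_{tt'}
```

Both checks print True; they are the discrete orthogonality relations of the DFT, which is the setting in which the formal continuous derivation is easiest to believe.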
Given an integral canonical expansion for the covariance function according to Eq. 2.79, we desire an integral canonical expansion of the random function itself. It can be shown that there exists white noise Z(ξ) with the given intensity I(ξ) in the equation and having cross-covariance given by Eq. 2.81. For this white noise process,
$$\begin{aligned}
E\left[\Big|X(t) - \int_{\Xi} Z(\xi)\,x(t,\xi)\,d\xi\Big|^2\right]
&= K_X(t,t) - \int_{\Xi} K_{XZ}(t,\xi)\,\overline{x(t,\xi)}\,d\xi - \int_{\Xi} \overline{K_{XZ}(t,\xi)}\,x(t,\xi)\,d\xi \\
&\quad + \int_{\Xi}\int_{\Xi} K_Z(\xi,\xi')\,x(t,\xi)\,\overline{x(t,\xi')}\,d\xi\,d\xi' \\
&= K_X(t,t) - \int_{\Xi} I(\xi)\,x(t,\xi)\,\overline{x(t,\xi)}\,d\xi - \int_{\Xi} I(\xi)\,\overline{x(t,\xi)}\,x(t,\xi)\,d\xi \\
&\quad + \int_{\Xi}\int_{\Xi} I(\xi)\,\delta(\xi-\xi')\,x(t,\xi)\,\overline{x(t,\xi')}\,d\xi\,d\xi' \\
&= K_X(t,t) - \int_{\Xi} I(\xi)\,|x(t,\xi)|^2\,d\xi,
\end{aligned} \tag{2.90}$$

the last equality following upon integration relative to dξ′ in the double integral. Setting t = t′ in the integral canonical expansion for the covariance function (Eq. 2.79) shows the last expression to be 0.
2.6 Expansions of WS Stationary Processes

Wide-sense stationary random functions have Fourier-integral canonical representations. These expansions are related to integral canonical expansions of the covariance function by means of the power spectral density of the random function.

Let k_X(τ) be the covariance function of a zero-mean WS stationary random function X(t) of a single variable t ∈ ℝ. Assume that k_X(τ) is integrable. The power spectral density of X(t) is the Fourier transform of k_X(τ):

$$S_X(\omega) = \int_{-\infty}^{\infty} k_X(\tau)\, e^{-j\omega\tau}\, d\tau. \tag{2.91}$$

The integrability of k_X(τ) guarantees that its Fourier transform is defined for all ω ∈ ℝ. Because X(t) is centered, k_X(τ) = r_X(τ). Since k_X(−τ) = \overline{k_X(τ)}, S_X(ω) is real-valued. If X(t) is real-valued, then S_X(ω) is an even function because it is the Fourier transform of an even, real-valued function. If S_X(ω) is integrable, then the inversion formula

$$k_X(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_X(\omega)\, e^{j\omega\tau}\, d\omega \tag{2.92}$$

holds almost everywhere (and holds for all τ ∈ ℝ if k_X(τ) is continuous). We then have the transform pair k_X(τ) ↔ S_X(ω). Applying the inversion formula at τ = 0 yields
$$\operatorname{Var}[X(t)] = k_X(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_X(\omega)\, d\omega. \tag{2.93}$$

The power spectral density extends to random functions of more than one variable by employing the Fourier transform in multiple dimensions. Given a real-valued function, the question arises as to whether or not there is a random function for which it is the power spectral density.

Theorem 2.6. (i) If a real-valued function S(ω) is nonnegative and integrable, then it is the power spectral density of a WS stationary random function X(t). S(ω) is even if and only if X(t) is real-valued. (ii) A function r(τ) is the autocorrelation function for a WS stationary random function if and only if it is positive semidefinite, meaning that, for any finite number of time points t₁ < t₂ < ⋯ < t_n and complex numbers a₁, a₂, …, a_n,

$$\sum_{k=1}^{n}\sum_{i=1}^{n} a_k\,\overline{a}_i\, r(t_k - t_i) \geq 0. \tag{2.94}$$
Moreover, r(τ) is even if and only if the process is real-valued.
▪
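The transform pair and the variance formula can be checked numerically for a covariance with a known closed-form spectrum; the pair k_X(τ) = e^{−|τ|} ↔ S_X(ω) = 2/(1 + ω²) below is purely illustrative, as are the grid and the truncation of the infinite integral:

```python
import numpy as np

# k_X(tau) = exp(-|tau|) has power spectral density S_X(w) = 2 / (1 + w^2)
w = np.linspace(-200.0, 200.0, 400_001)
dw = w[1] - w[0]
S = 2.0 / (1.0 + w**2)

var = np.sum(S) * dw / (2 * np.pi)             # Eq. 2.93: Var[X(t)] = k_X(0) = 1
k1 = np.sum(S * np.cos(w)) * dw / (2 * np.pi)  # Eq. 2.92 at tau = 1 (S_X is even)
print(round(var, 2), round(k1, 2))             # 1.0 0.37  (0.37 ~ exp(-1))
```

The small discrepancies come from truncating the ω-axis at ±200 and from the Riemann-sum quadrature.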
2.6.1 Power spectral density and linear operators

For an integrable deterministic function h(t), consider the linear operator defined on WS stationary inputs X(t) by the convolution-type integral

$$Y(t) = \int_{-\infty}^{\infty} h(\tau)\, X(t-\tau)\, d\tau \tag{2.95}$$

(under the assumption that the integral exists in the MS sense). The mean of Y(t) is

$$m_Y = \int_{-\infty}^{\infty} h(\tau)\, E[X(t-\tau)]\, d\tau = m_X \int_{-\infty}^{\infty} h(\tau)\, d\tau = H(0)\, m_X, \tag{2.96}$$

where the system function H(ω) is the Fourier transform of h(t). The autocorrelation function for Y(t) can be expressed as a convolution product:

$$r_Y(\tau) = \int_{-\infty}^{\infty} E\!\left[Y(t+\tau)\,\overline{X(t-v)}\right]\overline{h(v)}\, dv = \int_{-\infty}^{\infty} r_{YX}(\tau+v)\,\overline{h(v)}\, dv = \int_{-\infty}^{\infty} r_{YX}(\tau-v)\,\overline{h(-v)}\, dv = (r_{YX} * h^{\circ})(\tau), \tag{2.97}$$

where h° is defined by h°(v) = \overline{h(−v)}.
Letting S_YX denote the Fourier transform of the cross-correlation function r_YX, taking the Fourier transform in the preceding equation yields

$$S_Y(\omega) = S_{YX}(\omega)\int_{-\infty}^{\infty} \overline{h(v)}\, e^{j\omega v}\, dv = \overline{H(\omega)}\, S_{YX}(\omega). \tag{2.98}$$

The cross-correlation function is given by

$$r_{YX}(\tau) = \int_{-\infty}^{\infty} h(v)\, r_X(\tau - v)\, dv = (h * r_X)(\tau). \tag{2.99}$$

Taking the Fourier transform yields

$$S_{YX}(\omega) = H(\omega)\, S_X(\omega) \tag{2.100}$$

(showing that S_YX need not be real-valued). Inserting this equation into Eq. 2.98 yields

$$S_Y(\omega) = |H(\omega)|^2\, S_X(\omega). \tag{2.101}$$
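A discrete-time analog makes Eqs. 2.99–2.101 easy to verify exactly: for unit-intensity white-noise input (S_X = 1) and a real FIR impulse response h, r_Y is the deterministic autocorrelation of h, and its transform equals |H(ω)|². The filter taps below are arbitrary:

```python
import numpy as np

h = np.array([0.5, 1.0, 0.5])                 # illustrative FIR impulse response
r_y = np.correlate(h, h, mode="full")         # r_Y(tau) = sum_v h(tau + v) h(v)
n = 64
padded = np.pad(r_y, (0, n - r_y.size))
S_y = np.fft.fft(np.roll(padded, -(h.size - 1)))  # transform with lag 0 at index 0
H = np.fft.fft(h, n)                          # system function on the same grid
print(np.allclose(S_y, np.abs(H)**2))         # S_Y = |H|^2 S_X  (Eq. 2.101), prints True
```

The roll places the zero lag of r_Y at index 0 so that the DFT of the two-sided sequence coincides with the sampled discrete-time Fourier transform.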
For WS stationary processes, the convolution-integral operator can be interpreted, relative to the power spectral densities of the input and output processes, as a system operator determined by multiplication by |H(ω)|². To appreciate the terminology "power spectral density," for ω₀ < ω₁ consider the transfer function defined by H(ω) = 1 if ω ∈ (ω₀, ω₁] and H(ω) = 0 otherwise. According to Eq. 2.101, S_Y(ω) = S_X(ω) if ω ∈ (ω₀, ω₁] and S_Y(ω) = 0 otherwise. The average of the output power in the random signal Y(t) is given by r_Y(0) = E[|Y(t)|²], and (assuming integrability of the power spectral densities)

$$r_Y(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_Y(\omega)\, d\omega = \frac{1}{2\pi}\int_{\omega_0}^{\omega_1} S_X(\omega)\, d\omega. \tag{2.102}$$
Integrating S_X(ω) across the frequency band (ω₀, ω₁] gives the average output power.

2.6.2 Integral representation of a WS stationary process

The power spectral density plays a central role in the integral canonical representation of WS stationary random functions. If we let τ = t − t′, then Eq. 2.92 becomes

$$k_X(t - t') = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_X(\omega)\, e^{j\omega t}\, e^{-j\omega t'}\, d\omega, \tag{2.103}$$
which is the WS stationary form of the integral canonical expansion of the covariance function given in Eq. 2.79, with x(t, ω) = e^{jωt}/2π. Accordingly, there exists a white noise process Z(ω) with intensity 2πS_X(ω) such that X(t) has the integral canonical expansion

$$X(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} Z(\omega)\, e^{j\omega t}\, d\omega. \tag{2.104}$$

There is some subtlety here. The inversion integral of Eq. 2.92 depends on the integrability of the power spectral density, and according to Theorem 1.2, the MS integrability of the integral in Eq. 2.104 depends on the existence of the inversion integral. The problem exists even though the covariance function is assumed to be integrable. This, and the fact that many important random functions possess generalized covariance functions, compels us to abandon the assumption of covariance-function integrability and to proceed in a generalized sense from the outset when considering integral canonical expansions of WS stationary random functions. For instance, white noise Z(t) of unit intensity has covariance δ(τ). Applying the Fourier transform yields its power spectral density,

$$S_Z(\omega) = \int_{-\infty}^{\infty} \delta(\tau)\, e^{-j\omega\tau}\, d\tau = 1; \tag{2.105}$$
applying the inverse transform yields

$$k_Z(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{j\omega\tau}\, d\omega = \delta(\tau), \tag{2.106}$$

and we recognize the transform pair δ(τ) ↔ 1.

The preceding section gives three conditions that together are sufficient for a canonical representation. Let a(t, ω) = e^{jωt} and define Z(ω) according to Eq. 2.83:

$$Z(\omega) = \int_{-\infty}^{\infty} X(t)\, e^{-j\omega t}\, dt. \tag{2.107}$$
Then the three conditions of Eqs. 2.84, 2.86, and 2.87 become

$$x(t,\omega) = \frac{1}{I(\omega)}\int_{-\infty}^{\infty} k_X(t-s)\, e^{j\omega s}\, ds = \frac{e^{j\omega t}}{I(\omega)}\int_{-\infty}^{\infty} k_X(\tau)\, e^{-j\omega\tau}\, d\tau = \frac{S_X(\omega)}{I(\omega)}\, e^{j\omega t}, \tag{2.108}$$
$$\int_{-\infty}^{\infty} \overline{a(t,\omega)}\, x(t,\omega')\, dt = \frac{S_X(\omega)}{I(\omega)}\int_{-\infty}^{\infty} e^{-j\omega t}\, e^{j\omega' t}\, dt = \frac{S_X(\omega)}{I(\omega)}\int_{-\infty}^{\infty} e^{-j(\omega-\omega')t}\, dt = \frac{2\pi S_X(\omega)}{I(\omega)}\,\delta(\omega-\omega'), \tag{2.109}$$

$$\int_{-\infty}^{\infty} x(t,\omega)\,\overline{a(t',\omega)}\, d\omega = \frac{S_X(\omega)}{I(\omega)}\int_{-\infty}^{\infty} e^{j\omega t}\, e^{-j\omega t'}\, d\omega = \frac{S_X(\omega)}{I(\omega)}\int_{-\infty}^{\infty} e^{-j\omega(t'-t)}\, d\omega = \frac{2\pi S_X(\omega)}{I(\omega)}\,\delta(t-t'). \tag{2.110}$$

According to Eq. 2.89,

$$\begin{aligned}
I(\omega) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-j\omega t}\, e^{j\omega' t'}\, k_X(t-t')\, dt\, dt'\, d\omega' \\
&= \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} k_X(\tau)\, e^{-j\omega'\tau}\, d\tau\right]\int_{-\infty}^{\infty} e^{-j(\omega-\omega')t}\, dt\, d\omega' \\
&= 2\pi\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} k_X(\tau)\, e^{-j\omega'\tau}\,\delta(\omega-\omega')\, d\tau\, d\omega' \\
&= 2\pi\int_{-\infty}^{\infty} k_X(\tau)\, e^{-j\omega\tau}\, d\tau = 2\pi S_X(\omega).
\end{aligned} \tag{2.111}$$

Hence, Eqs. 2.109 and 2.110 reduce to the required delta conditions, Eq. 2.108 yields

$$x(t,\omega) = \frac{e^{j\omega t}}{2\pi}, \tag{2.112}$$

and Eq. 2.104 provides an integral canonical expansion for X(t) in terms of white noise of intensity 2πS_X(ω).
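A discrete sketch of this representation: assign each frequency an uncorrelated complex amplitude Z_k with E|Z_k|² proportional to the spectrum, and the synthesized process has (circular) covariance depending only on the lag. The spectrum below is that of k(τ) = ρ^{|τ|} with ρ = 0.5; all sizes, the seed, and the Monte Carlo check are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho, trials = 64, 0.5, 600
w = 2 * np.pi * np.arange(n) / n
S = (1 - rho**2) / (1 - 2 * rho * np.cos(w) + rho**2)   # spectrum of k(tau) = rho^|tau|
est = np.zeros(3, dtype=complex)
for _ in range(trials):
    # uncorrelated spectral amplitudes with E|Z_k|^2 = n * S(w_k)
    Z = np.sqrt(n * S / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    X = np.fft.ifft(Z)                  # X_t = (1/n) sum_k Z_k e^{j w_k t}
    for tau in range(3):
        est[tau] += np.mean(np.roll(X, -tau) * np.conj(X)) / trials
print(est.real)                         # ~ rho^|tau| = 1, 0.5, 0.25
```

The sample covariance at lags 0, 1, 2 approaches ρ^{|τ|}, illustrating how prescribing the white-noise intensity through the spectrum fixes the second-order structure of the synthesized process.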
Chapter 3
Optimal Filtering

A filter estimates (predicts) the value of an unobserved random variable based on the values of a set of observed random variables. In the context of random processes, we desire a filter that, given an input X(t), produces an output Ŷ(s) that best estimates Y(s), where goodness is based on an error measure between Ŷ(s) and Y(s).
3.1 Optimal Mean-Square-Error Filters

For two random variables, X to be observed and Y to be estimated, we desire a function c(X) that minimizes the mean-square error (MSE):

$$\mathrm{MSE}\langle c\rangle = E\!\left[|Y - c(X)|^2\right]. \tag{3.1}$$

c(X) is called an optimal mean-square-error estimator of Y in terms of X. To make estimation mathematically or computationally tractable, or to obtain an estimation rule with desirable properties, we often restrict the class of estimation rules over which the MSE minimum is to be achieved. The trade-off for the constraint is a higher MSE. In the simplest case, that of the best constant predictor, the MSE E[|Y − c|²] relative to a constant c is minimized by c = m_Y, meaning that E[|Y − m_Y|²] ≤ E[|Y − c|²] for all c. Indeed, applying linearity of the expected-value operator, differentiating with respect to c, and setting the derivative equal to 0 yields c = E[Y].

If two random variables X and Y possess joint density f(x, y) and an observation of X, say X = x, is given, then the conditional random variable Y given X = x is denoted by Y|x and has conditional density

$$f(y|x) = \frac{f(x, y)}{f_X(x)}, \tag{3.2}$$
defined for x such that f_X(x) ≠ 0. The conditional expectation (mean) and conditional variance are defined by

$$m_{Y|x} = E[Y|x] = \int_{-\infty}^{\infty} y\, f(y|x)\, dy, \tag{3.3}$$

$$\operatorname{Var}[Y|x] = E\!\left[(Y|x - m_{Y|x})^2\right], \tag{3.4}$$

respectively. Letting x vary, the plot of E[Y|x] is the regression curve of Y on X. For an observed value x of X, E[Y|x] is a parameter; however, since X is a random variable, the observation of x is random, and therefore the conditional expectation can be viewed as a random variable that is a function of X, in which case it is written E[Y|X] (or m_{Y|X}). A fundamental property of the conditional expectation is that its expectation equals the expectation of Y:

$$E\big[E[Y|X]\big] = E[Y]. \tag{3.5}$$

Its variance can be expressed as

$$\operatorname{Var}\big[E[Y|X]\big] = \operatorname{Var}[Y] - E\big[\operatorname{Var}[Y|X]\big]. \tag{3.6}$$
It follows that Var[E[Y|X]] ≤ Var[Y].

Theorem 3.1. The conditional expectation E[Y|X] is an optimal MSE estimator for the random variable Y based on the random variable X.

Proof. To prove the theorem, consider an arbitrary estimation rule c. Then

$$\begin{aligned}
E\big[|Y - c(X)|^2 \,\big|\, X\big] &= E\big[|Y - E[Y|X] + E[Y|X] - c(X)|^2 \,\big|\, X\big] \\
&= E\big[(Y - E[Y|X])^2 \,\big|\, X\big] + 2E\big[(Y - E[Y|X])(E[Y|X] - c(X)) \,\big|\, X\big] \\
&\quad + E\big[(E[Y|X] - c(X))^2 \,\big|\, X\big].
\end{aligned} \tag{3.7}$$

Given X, E[Y|X] − c(X) is a constant. Therefore, in the second summand it can be taken outside the expected value to yield

$$2\big(E[Y|X] - c(X)\big)E\big[Y - E[Y|X] \,\big|\, X\big] = 2\big(E[Y|X] - c(X)\big)\big(E[Y|X] - E[Y|X]\big) = 0. \tag{3.8}$$

Since the third summand in Eq. 3.7 is nonnegative,

$$E\big[(Y - c(X))^2 \,\big|\, X\big] \geq E\big[(Y - E[Y|X])^2 \,\big|\, X\big]. \tag{3.9}$$

Taking the expected value of both sides and applying Eq. 3.5 gives

$$E\big[|Y - E[Y|X]|^2\big] \leq E\big[|Y - c(X)|^2\big], \tag{3.10}$$

which proves the theorem. ▪
An important case arises when X and Y are jointly normal with marginal means m_X and m_Y, marginal standard deviations σ_X and σ_Y, and correlation coefficient ρ. The marginals X and Y are normally distributed, and straightforward algebra yields

$$f(y|x) = \frac{1}{\sqrt{2\pi\sigma_Y^2(1-\rho^2)}}\,\exp\left\{-\frac{1}{2}\left(\frac{y - \big(m_Y + \rho\frac{\sigma_Y}{\sigma_X}(x - m_X)\big)}{\sigma_Y\sqrt{1-\rho^2}}\right)^{\!2}\right\}. \tag{3.11}$$

Since this density is Gaussian, for fixed X = x the conditional random variable is normally distributed with conditional mean

$$m_{Y|x} = m_Y + \rho\,\frac{\sigma_Y}{\sigma_X}\,(x - m_X). \tag{3.12}$$
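A quick Monte Carlo illustration of Eq. 3.12: for jointly normal data, the conditional mean attains a smaller empirical MSE than any competing rule, here a deliberately biased one. All parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
m_x, m_y, s_x, s_y, rho = 1.0, -2.0, 2.0, 0.5, 0.7     # illustrative parameters
n = 200_000
x = m_x + s_x * rng.standard_normal(n)
y = (m_y + rho * (s_y / s_x) * (x - m_x)
     + s_y * np.sqrt(1 - rho**2) * rng.standard_normal(n))
cond_mean = m_y + rho * (s_y / s_x) * (x - m_x)        # Eq. 3.12
mse_opt = np.mean((y - cond_mean) ** 2)                # ~ s_y^2 (1 - rho^2)
mse_biased = np.mean((y - (cond_mean + 0.1)) ** 2)     # a perturbed competitor
print(mse_opt < mse_biased)                            # True
```

The optimal MSE equals σ_Y²(1 − ρ²), the conditional variance, which the empirical value approaches as n grows.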
In general, the problem is more difficult, and it may be necessary to reduce the class of estimators over which minimization of the MSE takes place. If C is a class of estimation rules, then an optimal MSE estimator of the random variable Y in terms of X relative to the class C is a function c(X) such that c ∈ C and

$$E\big[|Y - c(X)|^2\big] \leq E\big[|Y - \xi(X)|^2\big] \tag{3.13}$$

for all ξ ∈ C.

Now suppose that there are n predictor variables X₁, X₂, …, X_n, that Y is estimated by a function c(X₁, X₂, …, X_n), and that c is chosen to minimize

$$\mathrm{MSE}\langle c\rangle = E\big[|Y - c(X_1, X_2, \ldots, X_n)|^2\big] = \int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty} |y - c(x_1, x_2, \ldots, x_n)|^2\, f(x_1, x_2, \ldots, x_n, y)\, dx_1\, dx_2 \cdots dx_n\, dy, \tag{3.14}$$

where the integral is (n + 1)-fold and f(x₁, x₂, …, x_n, y) is the joint distribution of X₁, X₂, …, X_n, Y. Theorem 3.1 applies; however, here the conditional mean is

$$E[Y|x_1, x_2, \ldots, x_n] = \int_{-\infty}^{\infty} y\, f(y|x_1, x_2, \ldots, x_n)\, dy, \tag{3.15}$$

where the conditional density for Y given x₁, x₂, …, x_n is defined by

$$f(y|x_1, x_2, \ldots, x_n) = \frac{f(x_1, x_2, \ldots, x_n, y)}{f(x_1, x_2, \ldots, x_n)}, \tag{3.16}$$

f(x₁, x₂, …, x_n) being the joint distribution of X₁, X₂, …, X_n. The optimal MSE estimator for Y given X₁, X₂, …, X_n is the conditional expectation E[Y|X₁, X₂, …, X_n].
3.2 Optimal Finite-Observation Linear Filters

Designing an optimal linear filter is a parametric problem: among a class of parameters defining a filter class, find a mathematically tractable formulation (integral equation, matrix equation, etc.) for a parameter vector yielding a filter having minimum mean-square error. Optimization is embedded in the theory of projections in Hilbert spaces.

3.2.1 Orthogonality principle

Given a finite number of observation random variables X₁, X₂, …, X_n and the random variable Y to be estimated, the problem of optimal linear filtering is to find a linear combination of the observation random variables that best estimates Y relative to mean-square error: find constants a₁, a₂, …, a_n, and b that minimize

$$\mathrm{MSE}\langle c_A\rangle = E\big[|Y - c_A(X_1, X_2, \ldots, X_n)|^2\big], \tag{3.17}$$

where A = (a₁, a₂, …, a_n, b)′, prime denoting transpose, and c_A is defined by

$$c_A(X_1, X_2, \ldots, X_n) = \sum_{k=1}^{n} a_k X_k + b. \tag{3.18}$$

If b = 0, then Ŷ is an optimal homogeneous linear MSE filter; if b is arbitrary, then Ŷ is an optimal nonhomogeneous linear MSE filter. Since the nonhomogeneous filter can be treated as a special case of the homogeneous filter by introducing the constant variable X₀ = 1, we consider in detail only the latter, in which case b = 0. The minimizing values â₁, â₂, …, â_n give the optimal linear MSE filter (estimator)

$$\hat{Y} = c_{\hat{A}}(\mathbf{X}) = \sum_{k=1}^{n} \hat{a}_k X_k, \tag{3.19}$$

where Â = (â₁, â₂, …, â_n)′. Ŷ is an optimal linear filter if and only if

$$E\big[|Y - \hat{Y}|^2\big] \leq E\left[\Big|Y - \sum_{k=1}^{n} a_k X_k\Big|^2\right] \tag{3.20}$$

for all possible choices of a₁, a₂, …, a_n.

For random variables having finite second moments, the design of optimal linear filters involves projections into Hilbert subspaces. Because we will be considering real random variables, the inner product between random variables U and V is E[UV]. Letting X = (X₁, X₂, …, X_n)′, we denote the span of X₁, X₂, …, X_n by S_X. Since the sum in Eq. 3.20 is a linear
combination of X₁, X₂, …, X_n, the arbitrariness of the coefficients means that Ŷ is an optimal linear filter if the norm ‖Y − U‖ is minimized over all U ∈ S_X by letting U = Ŷ. The distance between Y and S_X is minimized by Ŷ. Since S_X is finite-dimensional, there exists a unique estimator Ŷ minimizing this distance.

We apply the orthogonality principle to find projections. In the framework of finite-dimensional inner-product spaces, the orthogonality principle states that y₀ is the projection of y into the subspace V if and only if ⟨y − y₀, v⟩ = 0 for all v ∈ V. In the context of optimal linear filtering, the orthogonality principle states that Ŷ is the projection of Y into the subspace S_X spanned by X₁, X₂, …, X_n if and only if, for any other random variable V ∈ S_X, E[(Y − Ŷ)V] = 0. V ∈ S_X if and only if V is a linear combination of X₁, X₂, …, X_n, but this means that there exists a vector A of scalars such that V = A′X. It is such a linear combination that defines the linear filter c_A. We state the orthogonality principle as a theorem regarding finite collections of random variables.

Theorem 3.2 (Orthogonality Principle). For the random vector X = (X₁, X₂, …, X_n)′, there exists a set of constants that minimizes MSE⟨c_A⟩ as an estimator of Y based on X₁, X₂, …, X_n over all possible choices of constants. Moreover, â₁, â₂, …, â_n comprise a minimizing set if and only if, for any constants a₁, a₂, …, a_n,

$$E\left[\Big(Y - \sum_{k=1}^{n} \hat{a}_k X_k\Big)\sum_{j=1}^{n} a_j X_j\right] = 0, \tag{3.21}$$
meaning that Y − Â′X is orthogonal to A′X. If X₁, X₂, …, X_n are linearly independent, then the collection of optimizing constants is unique. ▪

3.2.2 Optimal filter for linearly independent observations

According to the orthogonality principle, Ŷ defined by Eq. 3.19 provides the best linear MSE estimator of Y based on X₁, X₂, …, X_n if and only if

$$0 = \sum_{k=1}^{n} a_k\, E\big[(Y - \hat{Y})X_k\big] \tag{3.22}$$

for any constants a₁, a₂, …, a_n, which is the case if and only if E[(Y − Ŷ)X_k] = 0 for all k. Thus, for k = 1, 2, …, n, we need to solve the equation

$$0 = E\big[(Y - \hat{Y})X_k\big] = E[YX_k] - \sum_{j=1}^{n} \hat{a}_j\, E[X_j X_k]. \tag{3.23}$$
For j, k = 1, 2, …, n, let R_{kj} = E[X_k X_j] = E[X_j X_k] = R_{jk}, and for k = 1, 2, …, n, let R_k = E[YX_k]. Solution of the system of equations

$$\begin{aligned}
R_{11}\hat{a}_1 + R_{12}\hat{a}_2 + \cdots + R_{1n}\hat{a}_n &= R_1 \\
R_{21}\hat{a}_1 + R_{22}\hat{a}_2 + \cdots + R_{2n}\hat{a}_n &= R_2 \\
&\;\;\vdots \\
R_{n1}\hat{a}_1 + R_{n2}\hat{a}_2 + \cdots + R_{nn}\hat{a}_n &= R_n
\end{aligned} \tag{3.24}$$

provides the optimal homogeneous linear MSE estimator for Y. Let C = (R₁, R₂, …, R_n)′, Â = (â₁, â₂, …, â_n)′, and

$$R = \begin{pmatrix}
R_{11} & R_{12} & \cdots & R_{1n} \\
R_{21} & R_{22} & \cdots & R_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
R_{n1} & R_{n2} & \cdots & R_{nn}
\end{pmatrix}, \tag{3.25}$$

where C is the cross-correlation vector for X₁, X₂, …, X_n and Y, and R is the autocorrelation matrix for X₁, X₂, …, X_n. The system of Eq. 3.24 has the form

$$R\hat{A} = C. \tag{3.26}$$

If det[R] ≠ 0, then the desired solution for the optimizing coefficient vector is

$$\hat{A} = R^{-1}C. \tag{3.27}$$
Regarding the determinant, if x₁, x₂, …, x_n are vectors in an inner-product space, then

$$G = \begin{pmatrix}
\langle x_1, x_1\rangle & \langle x_1, x_2\rangle & \cdots & \langle x_1, x_n\rangle \\
\langle x_2, x_1\rangle & \langle x_2, x_2\rangle & \cdots & \langle x_2, x_n\rangle \\
\vdots & \vdots & \ddots & \vdots \\
\langle x_n, x_1\rangle & \langle x_n, x_2\rangle & \cdots & \langle x_n, x_n\rangle
\end{pmatrix} \tag{3.28}$$

is called the Grammian of x₁, x₂, …, x_n, and det[G] is the Gram determinant. The vectors are linearly independent if and only if det[G] ≠ 0. R is the Grammian of X₁, X₂, …, X_n, and therefore det[R] ≠ 0 if and only if X₁, X₂, …, X_n are linearly independent. Assuming linear independence of the observations, the optimal linear filter is determined by Eq. 3.27.

Theorem 3.3. If X = (X₁, X₂, …, X_n)′ is composed of linearly independent random variables, then the optimal homogeneous linear MSE filter for Y based on X is given by Ŷ = Â′X, where Â = R⁻¹C. ▪
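Theorem 3.3 can be sketched numerically with sample moments standing in for expectations; the model, sizes, and seed below are illustrative. Solving the normal equations and checking the orthogonality condition of Eq. 3.23:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
X = rng.standard_normal((3, n))                     # three observation variables
Y = 2 * X[0] - X[1] + 0.5 * rng.standard_normal(n)  # illustrative target
R = X @ X.T / n                 # sample autocorrelation matrix, R_kj = E[X_k X_j]
C = X @ Y / n                   # sample cross-correlations, R_k = E[Y X_k]
A = np.linalg.solve(R, C)       # A_hat = R^{-1} C  (Eq. 3.27)
Yhat = A @ X
print(np.allclose(X @ (Y - Yhat) / n, 0))  # E[(Y - Yhat) X_k] = 0 (Eq. 3.23), True
```

The residual is orthogonal to every observation variable by construction of the normal equations, independently of whether the assumed linear model is correct.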
To compute the MSE for the optimal linear filter, note that Y − Ŷ is orthogonal to any linear combination of the observation random variables, in particular, Ŷ. Thus,

$$E\big[|Y - \hat{Y}|^2\big] = E\big[(Y - \hat{Y})Y - (Y - \hat{Y})\hat{Y}\big] = E\big[(Y - \hat{Y})Y\big] = E\big[|Y|^2\big] - E\left[\sum_{k=1}^{n} \hat{a}_k X_k\, Y\right] = E\big[|Y|^2\big] - \sum_{k=1}^{n} \hat{a}_k R_k. \tag{3.29}$$
The optimal homogeneous linear filter is a second-order filter: it depends only on the second-order moments of X₁, X₂, …, X_n and Y, not on the actual distribution of the random variables. If Z₁, Z₂, …, Z_n comprise a different set of observation variables, and the joint distributions of X₁, X₂, …, X_n, Y and of Z₁, Z₂, …, Z_n, Y agree up to the second order, then the optimal linear filters for X₁, X₂, …, X_n and for Z₁, Z₂, …, Z_n are identical, even if the joint distributions are different.

The optimal MSE filter for Y based on X is linear when X and Y are jointly Gaussian. This proposition extends to estimation based on more than one observation random variable. If X₁, X₂, …, X_n, Y are jointly Gaussian, then the optimal linear MSE filter for Y in terms of X₁, X₂, …, X_n is also the optimal MSE filter.

3.2.3 Optimal filter for linearly dependent observations

To drop the linear independence assumption in Theorem 3.3, we require some Hilbert-space theory. If φ is a linear operator between Hilbert spaces, then φ is bounded if there exists a constant A such that ‖φ(x)‖ ≤ A‖x‖ for all x. For finite-dimensional spaces, matrix operators are bounded. A subspace S of a Hilbert space is closed if it contains all of its limit points. This means that if x₁, x₂, … ∈ S and ‖x_n − x‖ → 0 as n → ∞, then x ∈ S. All finite-dimensional subspaces are closed. If S is any subspace, then S⊥ denotes the subspace of elements that are orthogonal to all elements of S. For any linear operator φ between Hilbert spaces, its null space N_φ is the subspace consisting of all vectors y such that φ(y) is the zero vector.

When filtering a finite number of observations, the optimal linear filter is given by the unique projection into the finite-dimensional subspace spanned by the observations. For infinite-dimensional subspaces, the projection theorem requires that the subspace be closed. We state the general
Hilbert-space orthogonal projection theorem here for the purpose of discussing the pseudo-inverse of an operator and will refer to it again when we discuss optimal linear filters for an infinite number of observation random variables.

Theorem 3.4. If S is a closed subspace of a Hilbert space H, then there exists a unique pair of bounded linear operators π_S and ρ_S such that π_S maps H onto S, ρ_S maps H onto S⊥, and each y ∈ H possesses a unique representation:

$$y = \pi_S(y) + \rho_S(y). \tag{3.30}$$ ▪

π_S and ρ_S are called the orthogonal projections of H onto S and S⊥, respectively. If y ∈ S, then π_S(y) = y and ρ_S(y) = 0; if y ∈ S⊥, then π_S(y) = 0 and ρ_S(y) = y. Hence, S⊥ is the null space of π_S, and S is the null space of ρ_S. The norm of y is decomposed by the projections according to the relation

$$\|y\|^2 = \|\pi_S(y)\|^2 + \|\rho_S(y)\|^2. \tag{3.31}$$

The minimum over all distances ‖y − x‖, for x ∈ S, is achieved by ‖y − π_S(y)‖. The properties of the orthogonal projections are familiar in the theory of finite-dimensional inner-product spaces. Indeed, if S is finite-dimensional and {z₁, z₂, …, z_n} is an orthogonal spanning set for S, then π_S is realized as the Fourier series relative to {z₁, z₂, …, z_n}.

We now define the pseudo-inverse of an operator φ between two Hilbert spaces. Denote the range of φ by R_φ. Let φ⊥ be the mapping defining the projection onto R_φ⊥. φ⊥⊥ = φ^(⊥)⊥ provides the projection onto (R_{φ⊥})⊥ = R_φ⊥⊥ = R̄_φ (the closure of the range of φ). Let φ₀ denote the restriction of φ to vectors lying in N_φ⊥. If R_φ is closed (which it is in finite-dimensional spaces), then it can be shown that φ₀ is a one-to-one linear mapping onto R_φ. Hence, it has an inverse φ₀⁻¹ mapping R_φ onto N_φ⊥. The pseudo-inverse of φ is a bounded linear operator defined by the composition φ⁺ = φ₀⁻¹φ⊥⊥. If φ has an inverse, then φ⊥⊥ is the identity, φ₀ = φ (because N_φ = {0}), and φ⁺ = φ⁻¹. Some other properties of the pseudo-inverse are φ⁺⁺ = φ, φφ⁺ = φ⊥⊥, φ⁺φ⊥⊥ = φ⁺, φφ⁺φ = φ, and φ⁺φφ⁺ = φ⁺.

Our concern is with the pseudo-inverse of a matrix operator. If H is m × n, m ≥ n, and the columns of H are linearly independent, then H⁺ = (H′H)⁻¹H′, which is commonly used in least-squares estimation. Here we are interested in singular square matrices, in particular, the pseudo-inverse of a symmetric matrix (the discussion applies to Hermitian matrices in the complex setting).
If B is a symmetric matrix and its unitary similarity transformation is U′BU = D, where U is unitary (U′ = U⁻¹) and D is diagonal, then the pseudo-inverse of B is given by B⁺ = UD⁺U′. If the eigenvalues λ₁, λ₂, …, λ_n of B are nonnegative, then

$$D = \begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_n
\end{pmatrix}, \tag{3.32}$$

where λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_n. If λ_r > 0 and λ_{r+1} = 0, then the pseudo-inverse of D is given by

$$D^{+} = \begin{pmatrix}
\lambda_1^{-1} & 0 & \cdots & 0 & 0 & \cdots & 0 \\
0 & \lambda_2^{-1} & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_r^{-1} & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 0 & \cdots & 0
\end{pmatrix}. \tag{3.33}$$
(3.33)
We now state the fundamental theorem concerning projections into the span of a finite set of vectors in a Hilbert space that are not necessarily linearly independent. Theorem 3.5. Let {x1, x2, . . . , xn} be a set of vectors in a Hilbert space with span S. Then the orthogonal projection of any vector y into S is given by pS ðyÞ ¼
n X
aðy, xk Þxk ,
(3.34)
k¼1
where ˆ ¼ Gþ C, A
(3.35)
with Â = (a(y, x₁), a(y, x₂), …, a(y, x_n))′, C = (⟨y, x₁⟩, ⟨y, x₂⟩, …, ⟨y, x_n⟩)′, and G the Grammian of {x₁, x₂, …, x_n}. ▪

If x₁, x₂, …, x_n are linearly independent, then G is nonsingular, G⁺ = G⁻¹, and Â = G⁻¹C. If x₁, x₂, …, x_n form an orthonormal set, then G = I and π_S(y) reduces to the Fourier series of y in terms of x₁, x₂, …, x_n. In the case of optimal linear MSE filters, G = R, the autocorrelation matrix of the observations X₁, X₂, …, X_n, and C is the cross-correlation vector for the observations and the random variable Y to be estimated.
When the observations are linearly independent, R⁺ = R⁻¹, and Eq. 3.35 defines the optimal linear filter via Â = R⁻¹C (Eq. 3.27). Should the observations not necessarily be linearly independent, then Eq. 3.27 is replaced by

$$\hat{A} = R^{+}C, \tag{3.36}$$

which yields a version of Theorem 3.3 without the assumption of linear independence.
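A sketch of Eq. 3.36 with sample moments standing in for expectations: a redundant observation makes R singular, yet R⁺C still yields the orthogonal projection. The model, sizes, seed, and the explicit rcond cutoff for the near-zero eigenvalue are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
X1 = rng.standard_normal(n)
X2 = rng.standard_normal(n)
X = np.vstack([X1, X2, X1 + X2])       # third observation is linearly dependent
Y = X1 - 0.5 * X2 + 0.2 * rng.standard_normal(n)
R = X @ X.T / n                        # singular Grammian
C = X @ Y / n
A = np.linalg.pinv(R, rcond=1e-10) @ C # A_hat = R^+ C  (Eq. 3.36)
Yhat = A @ X
A2 = np.linalg.solve(R[:2, :2], C[:2]) # full-rank filter using X1, X2 only
print(np.allclose(Yhat, A2 @ X[:2]))   # same projection with or without redundancy
```

The pseudo-inverse discards the near-null direction of R, so the filtered output coincides with the projection computed from a linearly independent subset of the observations.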
3.3 Optimal Linear Filters for Random Vectors

An estimator of the random vector Y = (Y₁, Y₂, …, Y_m)′ based on the observed random vector X = (X₁, X₂, …, X_n)′ is a vector-valued function Ŷ = c(X). Componentwise, Ŷᵢ = cᵢ(X₁, X₂, …, X_n) for i = 1, 2, …, m. MSE optimization is framed in terms of

$$\mathrm{MSE}\langle c\rangle = \sum_{i=1}^{m} E\big[|Y_i - c_i(\mathbf{X})|^2\big]. \tag{3.37}$$
The MSE is minimized by minimizing each summand. These are minimized by cᵢ(X) = E[Yᵢ|X]. Thus, the optimal MSE vector estimator is a vector of conditional expectations:

$$\hat{\mathbf{Y}} = c(\mathbf{X}) = \begin{pmatrix} E[Y_1|X_1, X_2, \ldots, X_n] \\ E[Y_2|X_1, X_2, \ldots, X_n] \\ \vdots \\ E[Y_m|X_1, X_2, \ldots, X_n] \end{pmatrix}. \tag{3.38}$$
When there are r vectors X₁, X₂, …, X_r from which Y is to be estimated, the optimal estimator is the conditional expectation

$$\hat{\mathbf{Y}} = E[\mathbf{Y}\,|\,\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_r], \tag{3.39}$$

where the notation means that the ith component of Ŷ is the conditional expectation of Yᵢ given all of the components of X₁, X₂, …, X_r.

A linear estimator of Y based on X = (X₁, X₂, …, X_n)′ takes the form

$$\begin{pmatrix} \hat{Y}_1 \\ \hat{Y}_2 \\ \vdots \\ \hat{Y}_m \end{pmatrix} = \begin{pmatrix} c_1(\mathbf{X}) \\ c_2(\mathbf{X}) \\ \vdots \\ c_m(\mathbf{X}) \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{pmatrix}. \tag{3.40}$$
Optimal Filtering
61
Letting A denote the matrix, the estimator is given by ˆ ¼ cðXÞ ¼ AX: Y
(3.41)
The MSE is given by 2 m n X X E Y i aij X j : MSE〈ci¼ i¼1
(3.42)
j¼1
Minimization of the MSE results from finding the optimal linear filter for each component Yi in terms of the components of X. MSE〈c〉 is a squared distance in the space of all random m-vectors whose components possess finite second moments. The inner product is given by 〈U, Vi¼E½U0 V ¼
m X
E½U i V i ,
(3.43)
i¼1
where U ¼ ðU 1 , U 2 , : : : , U m Þ0 and V ¼ ðV 1 , V 2 , : : : , V m Þ0 . The norm is 0
kUk ¼ E½U U
1∕2
¼
X m
1∕2 E½jU i j : 2
(3.44)
i¼1
The operator MSE is given by MSE〈ci¼kY cðXÞk2 ,
(3.45)
which expands to Eq. 3.42. The optimal linear MSE vector filter minimizes the norm. The orthogonality principle extends directly by applying it componentˆ is the optimal linear estimator of Y based on the random wise. It states that Y vector X if and only if ˆ 0 ¼ 0 E½ðY YÞU
(3.46)
for any vector U whose components are linear combinations of the components of X (they lie in the subspace spanned by the components of X). ˆ is orthogonal to Expanded, Eq. 3.46 states that every component of Y Y every component of U. Expressed in the form of Theorem 3.2, the orthogonality condition states that 0 ˆ E½ðY YÞðBXÞ ¼0
(3.47)
for any m n matrix B. Theorem 3.3 also applies if the components of X are linearly independent. Let Ck ¼ E [YkX] be the cross-correlation vector for Yk with the components
of $X$, and $C$ be the matrix with columns $C_1, C_2, \ldots, C_m$. According to Theorem 3.3, the optimal linear estimator for $Y_k$ is $\hat{Y}_k = (R^{-1}C_k)'X$. Using the symmetry of $R$, we obtain
\[
\hat{Y} = \begin{pmatrix} (R^{-1}C_1)'X \\ (R^{-1}C_2)'X \\ \vdots \\ (R^{-1}C_m)'X \end{pmatrix}
= \begin{pmatrix} C_1' R^{-1} X \\ C_2' R^{-1} X \\ \vdots \\ C_m' R^{-1} X \end{pmatrix}
= C' R^{-1} X.
\tag{3.48}
\]
Relative to Eq. 3.41, the optimal linear filter is defined by the matrix
\[
\hat{A} = C' R^{-1}.
\tag{3.49}
\]
When the observations are not linearly independent, the same reasoning applies to Eq. 3.36, and the optimal linear filter is defined by the matrix
\[
\hat{A} = C' R^{+}.
\tag{3.50}
\]
Writing $C'$ and $R$ as expectations, we have the following theorem, which is called the Gauss–Markov theorem.

Theorem 3.6. If $X = (X_1, X_2, \ldots, X_n)'$ and $Y = (Y_1, Y_2, \ldots, Y_m)'$ are random vectors comprising random variables possessing finite second moments, then the optimal linear MSE estimator of $Y$ based on $X$ is given by
\[
\hat{Y} = E[YX'] \, E[XX']^{+} \, X.
\tag{3.51}
\]
▪

In the case of Theorem 3.3, we computed the MSE in Eq. 3.29. In the vector setting, we compute the error-covariance matrix:
\[
\begin{aligned}
E[(\hat{Y} - Y)(\hat{Y} - Y)'] &= E[(\hat{Y} - Y)\hat{Y}'] - E[(\hat{Y} - Y)Y'] \\
&= -E[(\hat{Y} - Y)Y'] \\
&= E[YY'] - E[\hat{Y}Y'] \\
&= E[YY'] - E[YX'] \, E[XX']^{+} \, E[XY'],
\end{aligned}
\tag{3.52}
\]
where the second equality follows from the orthogonality principle and the last from Theorem 3.6. The diagonal of the error-covariance matrix is composed of the MSEs for the components of $Y$, so that the trace of the error-covariance matrix gives the MSE.
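As a quick numerical illustration of Theorem 3.6, the sketch below builds a made-up linear observation model, estimates $E[YX']$ and $E[XX']$ from samples, and checks the orthogonality principle for the resulting filter (the matrix $H$ and noise level are invented for the example, not taken from the text):

```python
import numpy as np

# Monte Carlo sketch of Theorem 3.6: Yhat = E[YX'] E[XX']^+ X.
# The model (H, noise level) is a made-up example.
rng = np.random.default_rng(0)
n = 200_000
H = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.3]])   # 3x2 "measurement" matrix
Y = rng.normal(size=(2, n))                          # zero-mean random 2-vector
X = H @ Y + 0.5 * rng.normal(size=(3, n))            # observations

Ryx = Y @ X.T / n                 # sample estimate of E[YX']
Rx = X @ X.T / n                  # sample estimate of E[XX']
A = Ryx @ np.linalg.pinv(Rx)      # optimal linear filter matrix (Eq. 3.51)
Yhat = A @ X

# Orthogonality principle: the error is uncorrelated with every observation.
err_cross = (Y - Yhat) @ X.T / n
print(np.abs(err_cross).max())    # close to 0
```

Because the filter is computed from the same sample moments used in the check, the error cross-correlation vanishes to machine precision, mirroring Eq. 3.46.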
Example 3.1. Suppose that $X$ and $Y$ satisfy the observation model $X = HY + N$. Theorem 3.6 gives the optimal linear estimator
\[
\begin{aligned}
\hat{Y} &= E[Y(HY + N)'] \, E[(HY + N)(HY + N)']^{+} \, X \\
&= (E[YY']H' + E[YN']) (H E[YY'] H' + H E[YN'] + E[NY']H' + E[NN'])^{+} X.
\end{aligned}
\]
To proceed further, we assume that $N$ is uncorrelated with $Y$. This means that every component of $N$ is uncorrelated with every component of $Y$, so that their cross-correlation matrix $E[NY']$ is null. The autocorrelation matrices of $Y$ and $N$ are $R_Y = E[YY']$ and $R_N = E[NN']$, respectively. The optimal linear estimator of $Y$ is
\[
\hat{Y} = R_Y H' (H R_Y H' + R_N)^{+} X.
\]
▪
3.4 Recursive Linear Filters

Rather than estimate a random vector $Y$ based on all observations at once, it is often beneficial to estimate $Y$ from a subset of the observations and then update the estimator with new observations. This is done recursively with observation random vectors $X_0, X_1, X_2, \ldots$. An initial linear estimator for $Y$ is based on $X_0$; this initial estimator is used in conjunction with $X_1$ to obtain the optimal linear estimator based on $X_0$ and $X_1$; and this procedure is recursively performed to obtain linear estimators for $Y$ based on $X_0, X_1, \ldots, X_j$ for $j = 1, 2, \ldots$. The method can be extended to recursively estimate a vector random function $Y(k)$ based on the random variables comprising a vector random function $X(j)$ from time 0 to time $j$: $Y(k)$ is estimated based on $X(0), X(1), \ldots, X(j)$. It is assumed that there is a recursive relation between $Y(k-1)$ and $Y(k)$. This recursive relation is used in conjunction with prior estimators to derive an optimal linear estimator for $Y(k)$.

3.4.1 Recursive generation of direct sums

Optimal linear filtering is based on orthogonal projections onto subspaces spanned by observation random variables; recursive optimal linear filtering involves orthogonal projection onto direct sums of subspaces generated by a sequence of random vectors. As the recursion evolves, the filter incorporates the novel information in the new observations. The random variables in an observation sequence $X_0, X_1, X_2, \ldots, X_j, \ldots$ span subspaces of increasing dimensionality as $j$ increases. If $\mathcal{X}_j$ is the span of the random variables in $X_0, X_1, \ldots, X_j$, then $\mathcal{X}_0 \subset \mathcal{X}_1 \subset \cdots$. The optimal linear filter for $Y$ based on
$X_0, X_1, \ldots, X_j$ is the orthogonal projection (of the components) of $Y$ into $\mathcal{X}_j$. The increase in dimensionality between $\mathcal{X}_j$ and $\mathcal{X}_{j+1}$ depends on the linear dependence relations between the random variables comprising $X_{j+1}$ and the random variables in $X_0, X_1, \ldots, X_j$. Recursive optimal linear filtering is achieved by decomposing the subspaces $\mathcal{X}_0, \mathcal{X}_1, \mathcal{X}_2, \ldots$ into direct sums and then using the recursive projection procedure that generates these direct sums to obtain optimal linear estimators. Recursive discrete linear filtering follows directly from recursive generation of direct sums. In this subsection we discuss direct sums; in the next two, we use this theory to obtain recursive optimal linear filters.

If $W_0, W_1, \ldots, W_m$ are subspaces of a vector space, then their sum,
\[
W = W_0 + W_1 + \cdots + W_m,
\tag{3.53}
\]
is the span of the union of $W_0, W_1, \ldots, W_m$. Each vector $x \in W$ can be expressed as
\[
x = w_0 + w_1 + \cdots + w_m,
\tag{3.54}
\]
where $w_k \in W_k$ for $k = 0, 1, \ldots, m$. If $B_0, B_1, \ldots, B_m$ are bases for $W_0, W_1, \ldots, W_m$, respectively, then $W$ is the span of
\[
B = B_0 \cup B_1 \cup \cdots \cup B_m.
\tag{3.55}
\]
$B$ is not necessarily a basis for $W$ because $B$ may not be linearly independent. An added assumption makes $B$ a basis. $W_0, W_1, \ldots, W_m$ are said to be independent if
\[
w_0 + w_1 + \cdots + w_m = 0,
\tag{3.56}
\]
with $w_k \in W_k$ for $k = 0, 1, \ldots, m$, implies that
\[
w_0 = w_1 = \cdots = w_m = 0.
\tag{3.57}
\]
The following conditions are equivalent: (i) $W_0, W_1, \ldots, W_m$ are independent; (ii) $B$ is a basis for $W$; and (iii) for $k = 1, 2, \ldots, m$,
\[
W_k \cap (W_0 + W_1 + \cdots + W_{k-1}) = \{0\}.
\tag{3.58}
\]
If $W_0, W_1, \ldots, W_m$ are independent, then we write $B = (B_0, B_1, \ldots, B_m)$ to indicate that the order of the basis vectors is preserved when they are unioned to form a basis for $W$, which is now called a direct sum and written as
\[
W = W_0 \oplus W_1 \oplus \cdots \oplus W_m.
\tag{3.59}
\]
For inner-product spaces, subspaces $W_k$ and $W_j$ are said to be orthogonal if $\langle w_k, w_j \rangle = 0$ for any $w_k \in W_k$ and $w_j \in W_j$. We write $W_k \perp W_j$. The subspaces $W_0, W_1, \ldots, W_m$ are orthogonal as a collection if they are mutually
orthogonal. If $W_0, W_1, \ldots, W_m$ are orthogonal, then they are independent. Indeed, if Eq. 3.56 holds, then taking the inner product with $w_k \in W_k$ yields $\langle w_k, w_k \rangle = 0$, implying that $w_k = 0$.

We are concerned with the situation in which there are collections of vectors:
\[
\begin{aligned}
C_0 &= \{x_{01}, x_{02}, \ldots, x_{0,n(0)}\} \\
C_1 &= \{x_{11}, x_{12}, \ldots, x_{1,n(1)}\} \\
&\;\;\vdots \\
C_m &= \{x_{m1}, x_{m2}, \ldots, x_{m,n(m)}\} \\
&\;\;\vdots
\end{aligned}
\tag{3.60}
\]
where $C_m$ has $n(m)$ vectors. The vectors in $C_0, C_1, \ldots, C_m$ are not necessarily linearly independent. Let $U_0, U_1, \ldots, U_m$ be the spans of $C_0, C_1, \ldots, C_m$, respectively, and
\[
V_m = U_0 + U_1 + \cdots + U_m.
\tag{3.61}
\]
We will express $V_m$ as a direct sum,
\[
V_m = S_0 \oplus S_1 \oplus \cdots \oplus S_m.
\tag{3.62}
\]
This will be done recursively. To start the recursion, let $S_0 = U_0$. Next, suppose that we have the direct sum
\[
V_{m-1} = S_0 \oplus S_1 \oplus \cdots \oplus S_{m-1}
\tag{3.63}
\]
for $m - 1$. To obtain the direct-sum representation for $m$, let $p_{m-1}$ be the orthogonal projection onto $V_{m-1}$. For $j = 1, 2, \ldots, n(m)$, let
\[
z_{mj} = x_{mj} - p_{m-1}(x_{mj}).
\tag{3.64}
\]
If $x_{mj} \in V_{m-1}$, then $p_{m-1}(x_{mj}) = x_{mj}$ and $z_{mj} = 0$; if $x_{mj} \notin V_{m-1}$, then $z_{mj} \neq 0$ and $z_{mj} \perp V_{m-1}$. Let $S_m$ be the span of $z_{m1}, z_{m2}, \ldots, z_{m,n(m)}$. These differences need not be linearly independent, but each is orthogonal to $V_{m-1}$. Hence, $S_m \perp V_{m-1}$. We form the direct sum $V_{m-1} \oplus S_m$. Since vectors in $V_{m-1} \oplus S_m$ are linear combinations of vectors in $V_{m-1}$ and $U_m$, $V_{m-1} \oplus S_m \subset V_{m-1} + U_m$. Solving Eq. 3.64 for $x_{mj}$ shows that $U_m \subset V_{m-1} \oplus S_m$. Therefore,
\[
V_{m-1} + U_m \subset V_{m-1} + (V_{m-1} \oplus S_m) = V_{m-1} \oplus S_m.
\tag{3.65}
\]
Hence,
\[
V_m = V_{m-1} + U_m = V_{m-1} \oplus S_m = S_0 \oplus S_1 \oplus \cdots \oplus S_m.
\tag{3.66}
\]
We are interested in direct sums because of their relation to orthogonal projections. If $y$ is a vector in a Hilbert space, then the orthogonal projection of $y$ into a direct sum in which the subspaces are mutually orthogonal is the sum of the projections into the subspaces forming the direct sum. Let $p_m$ and $s_m$ be the projections onto $V_m$ and $S_m$, respectively. By construction, $V_m = V_{m-1} \oplus S_m$ and $V_{m-1} \perp S_m$. Hence, the projections of $y$ into $V_0, V_1, \ldots$ are found recursively by
\[
\begin{aligned}
p_0(y) &= s_0(y) \\
p_1(y) &= p_0(y) + s_1(y) \\
&\;\;\vdots \\
p_m(y) &= p_{m-1}(y) + s_m(y) \\
&\;\;\vdots
\end{aligned}
\tag{3.67}
\]
Referring to Eq. 3.64, let $z_m = (z_{m1}, z_{m2}, \ldots, z_{m,n(m)})'$ for $m = 1, 2, \ldots$. The components of $z_m$ span $S_m$, and according to Theorem 3.5 (see Eq. 3.35),
\[
s_m(y) = (G_m^{+} C_m)' z_m,
\tag{3.68}
\]
where $G_m$ is the Grammian of $z_m$ and $C_m = (\langle y, z_{m1} \rangle, \langle y, z_{m2} \rangle, \ldots, \langle y, z_{m,n(m)} \rangle)'$. The recursion of Eq. 3.67 is initialized by $p_0(y) = (G_0^{+} C_0)' x_0$, where $x_0 = (x_{01}, x_{02}, \ldots, x_{0,n(0)})'$. Equations 3.67 and 3.68 are the basic equations for discrete recursive optimal linear filters.

3.4.2 Static recursive optimal linear filtering

The recursive projection theory applies at once to linear filtering. There is a sequence of observation random vectors, $X_0, X_1, X_2, \ldots$, and a random vector $Y$ to be recursively estimated. The estimation is static in the sense that $Y$ is fixed. Applying the recursive system of Eq. 3.67 componentwise in conjunction with Eq. 3.68, with $z_k = X_k - \hat{X}_k$, yields an optimal recursive linear filter for $Y$ based on $X_0, X_1, X_2, \ldots$:
\[
p_k(y) = p_{k-1}(y) + (G_k^{+} C_k)'(X_k - \hat{X}_k).
\tag{3.69}
\]
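The recursive construction of Eqs. 3.64 and 3.67 can be sketched for ordinary Euclidean vectors (a hypothetical toy setting; `lstsq` plays the role of the orthogonal projection operator, and the vector batches are made up):

```python
import numpy as np

# Sketch of the recursive direct-sum construction (Eqs. 3.64 and 3.67)
# for ordinary Euclidean vectors; the batches C_0, C_1, C_2 are made up.
def project(v, B):
    # Orthogonal projection of v onto the column space of B.
    if B.shape[1] == 0:
        return np.zeros_like(v)
    return B @ np.linalg.lstsq(B, v, rcond=None)[0]

rng = np.random.default_rng(1)
batches = [rng.normal(size=(6, 2)) for _ in range(3)]   # C_0, C_1, C_2
y = rng.normal(size=6)

V = np.empty((6, 0))   # spanning set for V_{m-1}
p = np.zeros(6)        # running projection p_m(y)
for C in batches:
    # z_mj = x_mj - p_{m-1}(x_mj): part of each new vector orthogonal to V_{m-1}
    Z = C - np.column_stack([project(C[:, j], V) for j in range(C.shape[1])])
    p = p + project(y, Z)          # p_m(y) = p_{m-1}(y) + s_m(y)   (Eq. 3.67)
    V = np.column_stack([V, C])    # V_m = V_{m-1} + U_m

# The recursion reproduces the one-shot projection onto the full span.
print(np.allclose(p, project(y, V)))   # True
```

Because each $S_m$ is orthogonal to $V_{m-1}$ by construction, the accumulated projections sum to the projection onto $V_m$, which is the content of Eq. 3.67.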
Theorem 3.7. Let $X_0, X_1, X_2, \ldots$ and $Y$ be random vectors possessing finite second moments. For $k = 0, 1, \ldots$, let $\hat{Y}_k$ be the optimal linear estimator of $Y$ based on $X_0, X_1, \ldots, X_k$. For $k = 1, 2, \ldots$, let $\hat{X}_k$ be the optimal linear estimator of $X_k$ based on $X_0, X_1, \ldots, X_{k-1}$. Then, for $k = 1, 2, \ldots$,
\[
\hat{Y}_k = \hat{Y}_{k-1} + M_k (X_k - \hat{X}_k),
\tag{3.70}
\]
where the gain matrix $M_k$ and the initialization are given by
\[
M_k = E[Y(X_k - \hat{X}_k)'] \, E[(X_k - \hat{X}_k)(X_k - \hat{X}_k)']^{+},
\tag{3.71}
\]
\[
\hat{Y}_0 = E[YX_0'] \, E[X_0 X_0']^{+} \, X_0.
\tag{3.72}
\]
▪
The differences $X_k - \hat{X}_k$ are called innovations because $X_k - \hat{X}_k$ provides the new information upon which the filter is to be updated. Should the components of $X_k$ lie in the span of the components of $X_0, X_1, \ldots, X_{k-1}$, so that no observation dimensionality is added by $X_k$, then $X_k = \hat{X}_k$ and $\hat{Y}_k = \hat{Y}_{k-1}$. At the other extreme, if all components of $X_k$ are orthogonal to the span of the components of $X_0, X_1, \ldots, X_{k-1}$, then $\hat{X}_k = 0$ and
\[
\hat{Y}_k = \hat{Y}_{k-1} + E[YX_k'] \, E[X_k X_k']^{+} \, X_k.
\tag{3.73}
\]
In this case, the update is simply the optimal linear estimator for $Y$ based on $X_k$.

The preceding theorem is quite general. We specialize it to the situation in which the new observations are linear combinations of the random variables composing $Y$ plus additive noise. The intent here is to model a linear measurement $H_k Y$, defined at time $k$ by the matrix $H_k$. The measurements are degraded by additive noise $N_k$, which is assumed to be uncorrelated with $X_0, X_1, \ldots, X_{k-1}$, and $Y$. $N_k$ is not presupposed to be white. These assumptions provide the observation model
\[
X_k = H_k Y + N_k,
\tag{3.74}
\]
\[
E[Y N_k'] = E[X_j N_k'] = 0 \quad (j < k).
\tag{3.75}
\]
Let $V_k = (X_0', X_1', \ldots, X_k')'$ be the vector with components of $X_0, X_1, \ldots, X_k$, taken in order. Then $N_k$ is uncorrelated with $V_{k-1}$. $\hat{X}_k$ is the optimal linear estimator of $X_k$ based on $V_{k-1}$. Let $E_k$ denote the error-covariance matrix for estimation of $Y$ by $\hat{Y}_k$, namely,
\[
E_k = E[(Y - \hat{Y}_k)(Y - \hat{Y}_k)'].
\tag{3.76}
\]
Denote the noise autocorrelation matrix by $R_k = E[N_k N_k']$. By the Gauss–Markov theorem applied to $X_k$ and $V_{k-1}$,
\[
\begin{aligned}
\hat{X}_k &= E[(H_k Y + N_k)V_{k-1}'] \, E[V_{k-1} V_{k-1}']^{+} \, V_{k-1} \\
&= (H_k E[Y V_{k-1}'] + E[N_k V_{k-1}']) \, E[V_{k-1} V_{k-1}']^{+} \, V_{k-1} \\
&= H_k E[Y V_{k-1}'] \, E[V_{k-1} V_{k-1}']^{+} \, V_{k-1} \\
&= H_k \hat{Y}_{k-1}.
\end{aligned}
\tag{3.77}
\]
Hence,
\[
X_k - \hat{X}_k = H_k (Y - \hat{Y}_{k-1}) + N_k.
\tag{3.78}
\]
We need to compute the gain matrix from Theorem 3.7. First,
\[
\begin{aligned}
E[Y(X_k - \hat{X}_k)'] &= E[Y(H_k(Y - \hat{Y}_{k-1}) + N_k)'] \\
&= E[Y(Y - \hat{Y}_{k-1})'] H_k' + E[Y N_k'] \\
&= E[Y(Y - \hat{Y}_{k-1})'] H_k' \\
&= E[Y(Y - \hat{Y}_{k-1})'] H_k' - E[\hat{Y}_{k-1}(Y - \hat{Y}_{k-1})'] H_k' \\
&= E[(Y - \hat{Y}_{k-1})(Y - \hat{Y}_{k-1})'] H_k' \\
&= E_{k-1} H_k',
\end{aligned}
\tag{3.79}
\]
where the fourth equality follows from the orthogonality principle (the second summand being null). Next,
\[
\begin{aligned}
E[(X_k - \hat{X}_k)(X_k - \hat{X}_k)'] &= E[(H_k(Y - \hat{Y}_{k-1}) + N_k)(H_k(Y - \hat{Y}_{k-1}) + N_k)'] \\
&= H_k E[(Y - \hat{Y}_{k-1})(Y - \hat{Y}_{k-1})'] H_k' \\
&\quad + H_k E[(Y - \hat{Y}_{k-1}) N_k'] + E[N_k (Y - \hat{Y}_{k-1})'] H_k' + E[N_k N_k'] \\
&= H_k E_{k-1} H_k' + R_k,
\end{aligned}
\tag{3.80}
\]
where the last equality follows from the fact that, since the components of $\hat{Y}_{k-1}$ lie in the span of the components of $X_0, X_1, \ldots, X_{k-1}$, Eq. 3.75 implies that the last two summands in the preceding equation are null. From Theorem 3.7, we deduce the following recursive updating proposition, in which the recursive expression of the error-covariance matrix is obtained by a direct (but tedious) matrix calculation.

Theorem 3.8. For the linear observation model $X_k = H_k Y + N_k$, subject to the orthogonality conditions $E[Y N_k'] = 0$ and $E[X_j N_k'] = 0$ for $j < k$, the optimal linear estimator for $Y$ based on $X_0, X_1, \ldots, X_k$ has the recursive formulation
\[
\hat{Y}_k = \hat{Y}_{k-1} + M_k (X_k - H_k \hat{Y}_{k-1}),
\tag{3.81}
\]
where the gain matrix and error-covariance matrix are recursively given by
\[
M_k = E_{k-1} H_k' (H_k E_{k-1} H_k' + R_k)^{+},
\tag{3.82}
\]
\[
E_k = E_{k-1} - E_{k-1} H_k' (H_k E_{k-1} H_k' + R_k)^{+} H_k E_{k-1}.
\tag{3.83}
\]
▪
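Theorem 3.8 can be exercised numerically; the sketch below runs the recursion on an invented zero-mean model and checks that it reproduces the one-shot Gauss–Markov estimator built from the stacked observations (the matrices $H_k$, $R_k$, and $R_Y$ are assumptions for the example):

```python
import numpy as np

# Theorem 3.8 recursion vs. the batch Gauss-Markov estimator on stacked
# observations; the model (H_k, R_k, RY) is illustrative, not from the text.
rng = np.random.default_rng(2)
RY = np.array([[2.0, 0.4], [0.4, 1.0]])     # prior autocorrelation of Y
Y = np.linalg.cholesky(RY) @ rng.normal(size=2)

E = RY.copy()          # error covariance of the zero initial estimate
Yhat = np.zeros(2)
H_all, X_all = [], []
for k in range(5):
    Hk = rng.normal(size=(3, 2))
    Xk = Hk @ Y + np.sqrt(0.1) * rng.normal(size=3)   # Rk = 0.1 I
    S = Hk @ E @ Hk.T + 0.1 * np.eye(3)
    M = E @ Hk.T @ np.linalg.pinv(S)        # gain matrix, Eq. 3.82
    Yhat = Yhat + M @ (Xk - Hk @ Yhat)      # update, Eq. 3.81
    E = E - M @ S @ M.T                     # error covariance, Eq. 3.83
    H_all.append(Hk); X_all.append(Xk)

Hs, Xs = np.vstack(H_all), np.concatenate(X_all)
batch = RY @ Hs.T @ np.linalg.pinv(Hs @ RY @ Hs.T + 0.1 * np.eye(15)) @ Xs
print(np.allclose(Yhat, batch))   # recursion reproduces the batch estimator
```

Starting the recursion from the zero estimate with error covariance $R_Y$ makes the first update coincide with the initialization of Eq. 3.72.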
3.4.3 Dynamic recursive optimal linear filtering

Up to this point we have considered recursive linear filtering when new observations are used to estimate a fixed random vector. We now treat the dynamic case in which a vector random function over discrete time is estimated from an observed vector random function over discrete time. There are two vector random functions, $X_j = X(j)$ and $Y_k = Y(k)$, and a linear filter is desired to estimate $Y_k$ based on $X_0, X_1, \ldots, X_j$. $Y_k$ satisfies a linear dynamic model composed of a state equation and a measurement equation. The state equation has the form
\[
Y_{k+1} = T_k Y_k + U_k
\tag{3.84}
\]
for $k = 0, 1, \ldots$. $Y_k$ is called the state vector, $T_k$ is an $n \times n$ matrix, $U_k$ is a discrete white-noise vector random function, called the process noise, and the cross-correlation between the state vector and the process noise is null for $j \leq k$, meaning that
\[
R_{UY}(k, j) = E[U_k Y_j'] = 0.
\tag{3.85}
\]
In the vector setting, $U_k$ being white noise means that its autocorrelation function is
\[
R_U(k, j) = E[U_k U_j'] = Q_k \delta_{kj},
\tag{3.86}
\]
where $Q_k = E[U_k U_k']$. The state equation describes the evolution of the random function $Y_k$ over time. The measurement equation describes the observation of the state vector via a linear transformation together with additive measurement noise. It takes the form
\[
X_k = H_k Y_k + N_k,
\tag{3.87}
\]
where $H_k$ is an $m \times n$ measurement matrix and $N_k$ is discrete white noise possessing the autocorrelation function
\[
R_N(k, j) = E[N_k N_j'] = R_k \delta_{kj},
\tag{3.88}
\]
with $R_k = E[N_k N_k']$. In addition, for all $k$ and $j$,
\[
R_{NU}(k, j) = E[N_k U_j'] = 0,
\tag{3.89}
\]
and, for $j < k$,
\[
R_{NX}(k, j) = E[N_k X_j'] = 0.
\tag{3.90}
\]
We desire the optimal linear estimator $\hat{Y}_{k|j}$ of $Y_k$ based on
\[
V_j = (X_0', X_1', \ldots, X_j')'.
\tag{3.91}
\]
The components of $V_j$ consist of all observation random variables up to and including those at time $j$. The cross-correlation function of $U$ and $V$ is given by the blocked matrix
\[
R_{UV}(k, j) = E[U_k V_j'] = \big( E[U_k X_0'] \;\; E[U_k X_1'] \;\; \cdots \;\; E[U_k X_j'] \big).
\tag{3.92}
\]
Suppose that $j \leq k$. For $i \leq j$, the $i$th block of the preceding matrix is given by
\[
E[U_k X_i'] = E[U_k (H_i Y_i + N_i)'] = E[U_k Y_i'] H_i' + E[U_k N_i'].
\tag{3.93}
\]
Owing to the uncorrelatedness assumptions of the model, $R_{UV}(k, j) = 0$.

In the present context, $\hat{Y}_{k|k}$, $\hat{Y}_{k+1|k}$, and $\hat{Y}_{k|j}$, for $j > k$, are called the filter, predictor, and smoother for $Y_k$, respectively. In recursive form and in the framework of a dynamical system, they are called discrete Kalman filters. We focus on the filter $\hat{Y}_{k+1|k+1}$.

From the Gauss–Markov theorem,
\[
\hat{Y}_{k|k} = E[Y_k V_k'] \, E[V_k V_k']^{+} \, V_k.
\tag{3.94}
\]
From the Gauss–Markov theorem and the state equation,
\[
\begin{aligned}
\hat{Y}_{k+1|k} &= E[(T_k Y_k + U_k)V_k'] \, E[V_k V_k']^{+} \, V_k \\
&= (T_k E[Y_k V_k'] + E[U_k V_k']) \, E[V_k V_k']^{+} \, V_k \\
&= T_k E[Y_k V_k'] \, E[V_k V_k']^{+} \, V_k \\
&= T_k \hat{Y}_{k|k}.
\end{aligned}
\tag{3.95}
\]
The recursive representation for the linear observation model of Theorem 3.8 applies one step at a time. Since the measurement model is the same for the linear dynamical system, the recursion and the computation of Eq. 3.77 apply in the present setting. Hence, from Theorem 3.8 and the preceding equation, we deduce the Kalman filter recursive expression:
\[
\begin{aligned}
\hat{Y}_{k+1|k+1} &= \hat{Y}_{k+1|k} + M_{k+1}(X_{k+1} - H_{k+1}\hat{Y}_{k+1|k}) \\
&= T_k \hat{Y}_{k|k} + M_{k+1}(X_{k+1} - H_{k+1} T_k \hat{Y}_{k|k}),
\end{aligned}
\tag{3.96}
\]
where the gain matrix is given by Eq. 3.82. It remains to find a recursive expression for the gain matrix. In the present setting, the error-covariance matrix for the filter takes the form
\[
E_{k|j} = E[(\hat{Y}_{k|j} - Y_k)(\hat{Y}_{k|j} - Y_k)'].
\tag{3.97}
\]
Applying the state equation in conjunction with Eq. 3.95 yields the covariance extrapolation,
\[
\begin{aligned}
E_{k+1|k} &= E[(\hat{Y}_{k+1|k} - Y_{k+1})(\hat{Y}_{k+1|k} - Y_{k+1})'] \\
&= E[(T_k \hat{Y}_{k|k} - (T_k Y_k + U_k))(T_k \hat{Y}_{k|k} - (T_k Y_k + U_k))'] \\
&= E[(T_k(\hat{Y}_{k|k} - Y_k) - U_k)(T_k(\hat{Y}_{k|k} - Y_k) - U_k)'] \\
&= T_k E[(\hat{Y}_{k|k} - Y_k)(\hat{Y}_{k|k} - Y_k)'] T_k' + E[U_k U_k'] \\
&\quad - T_k E[(\hat{Y}_{k|k} - Y_k) U_k'] - E[U_k (\hat{Y}_{k|k} - Y_k)'] T_k' \\
&= T_k E_{k|k} T_k' + Q_k,
\end{aligned}
\tag{3.98}
\]
where the last equality follows from the fact that the second two summands in the next-to-last expression are zero by Eq. 3.85 and the relation $R_{UV}(k, j) = 0$ (since $\hat{Y}_{k|k}$ lies in the span of the random variables constituting $V_k$). The Kalman gain matrix
\[
M_k = E_{k|k-1} H_k' (H_k E_{k|k-1} H_k' + R_k)^{+}
\tag{3.99}
\]
follows from Eq. 3.82, and the covariance update
\[
E_{k|k} = E_{k|k-1} - M_k H_k E_{k|k-1}
\tag{3.100}
\]
follows from Eq. 3.83.

Theorem 3.9. For the discrete linear dynamical system of Eqs. 3.84 and 3.87, the Kalman recursive filter is given by
\[
\hat{Y}_{k+1|k+1} = T_k \hat{Y}_{k|k} + M_{k+1}(X_{k+1} - H_{k+1} T_k \hat{Y}_{k|k}),
\tag{3.101}
\]
where the covariance extrapolation, Kalman gain matrix, and covariance update are given by Eqs. 3.98, 3.99, and 3.100, respectively. ▪

Theorem 3.9 addresses the Kalman filter. The Kalman predictor is obtained via Eq. 3.95 as
\[
\begin{aligned}
\hat{Y}_{k+1|k} &= T_k [T_{k-1} \hat{Y}_{k-1|k-1} + M_k (X_k - H_k T_{k-1} \hat{Y}_{k-1|k-1})] \\
&= T_k [\hat{Y}_{k|k-1} + M_k (X_k - H_k \hat{Y}_{k|k-1})].
\end{aligned}
\tag{3.102}
\]
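A minimal sketch of the Theorem 3.9 recursion on a hypothetical constant-velocity tracking model (the matrices $T$, $H$, $Q$, $R$ below are illustrative choices, not from the text):

```python
import numpy as np

# Kalman filter of Theorem 3.9 on a toy constant-velocity model; the
# system matrices and noise levels are invented for illustration.
rng = np.random.default_rng(3)
T = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition T_k
H = np.array([[1.0, 0.0]])               # position-only measurement H_k
Q = 0.01 * np.eye(2)                     # process-noise covariance Q_k
R = np.array([[1.0]])                    # measurement-noise covariance R_k

y = np.array([0.0, 1.0])    # deterministic initialization: Y0 known,
yhat = y.copy()             # so Yhat_{0|0} = Y0 and E_{0|0} = 0
E = np.zeros((2, 2))
for k in range(200):
    y = T @ y + 0.1 * rng.normal(size=2)              # state equation (3.84)
    x = H @ y + rng.normal(size=1)                    # measurement (3.87)
    Ep = T @ E @ T.T + Q                              # extrapolation (3.98)
    M = Ep @ H.T @ np.linalg.pinv(H @ Ep @ H.T + R)   # gain (3.99)
    yhat = T @ yhat + M @ (x - H @ T @ yhat)          # filter (3.101)
    E = Ep - M @ H @ Ep                               # update (3.100)

print(np.trace(E))   # steady-state filter error-covariance trace
```

Note that the covariance recursion (Eqs. 3.98–3.100) does not depend on the data, so $E_{k|k}$ converges to a fixed point of the iteration regardless of the observed sample path.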
In theory, initialization of the Kalman filter involves application of the Gauss–Markov theorem to obtain the estimator $\hat{Y}_{0|0}$ of $Y_0$, and then computation of $E_{0|0}$ according to Eq. 3.97. Once $\hat{Y}_{0|0}$ and $E_{0|0}$ are in hand, recursion can proceed by obtaining $E_{1|0}$ via Eq. 3.98. The practical problem is that we likely lack sufficient distributional knowledge regarding $Y_0$ to obtain $\hat{Y}_{0|0}$. Equations 3.51 and 3.52 show that we would need the cross-correlation matrix of $Y_0$ with $X_0$ and the autocorrelation matrix of $Y_0$. There are different statistical ways to obtain initialization estimates without these, but we will not pursue them here. There is one easy situation: if initialization of the system is deterministic, then $Y_0$ is a known constant vector, $\hat{Y}_{0|0} = Y_0$, and $E_{0|0} = 0$.

If $R_k$, $E_{k|k}$, $E_{k|k-1}$, and $H_k E_{k|k-1} H_k' + R_k$ are nonsingular, then it can be shown via matrix algebra that
\[
E_{k|k}^{-1} = E_{k|k-1}^{-1} + H_k' R_k^{-1} H_k,
\tag{3.103}
\]
\[
M_k = E_{k|k} H_k' R_k^{-1}.
\tag{3.104}
\]
If the measurement noise is large, then the gain is small. This agrees with our intuition that we should not significantly update an estimator based on very noisy observations.

The convergence of recursive estimators is a crucial issue. From the basic linear recursive system of Eq. 3.67, we see that the change in the estimator from step $k-1$ to step $k$ is given by $s_k(y)$, the projection of $y$ into $S_k$. There are two potential problems. First, the new vectors may produce little increase in dimension. The most degenerate dimensionality case occurs when all new vectors lie in the span of the previously incorporated vectors. Then the update is null. A second issue is the correlation between $y$ and the new observations. If $y \perp S_k$, then $s_k(y) = 0$ and there is no update. Even if $y$ is not orthogonal to $S_k$, its projection into $S_k$ can be arbitrarily small. This potentiality is evident in Theorem 3.7, where the first factor of the gain matrix is the cross-correlation matrix of $Y$ and $X_k - \hat{X}_k$. This problem also surfaces in the observation model of Theorem 3.8. The cross-correlation matrix for this model is computed in Eq. 3.79. Because $Y$ and $N_k$ are assumed to be uncorrelated, the noise plays no role. The first factor of the gain matrix in Eq. 3.82 is the cross-correlation matrix, expressed as $E_{k-1} H_k'$. In fact, Eq. 3.82 directly shows the dependence of convergence on the measurement matrix. Since the gain matrix for the Kalman filter is simply a restatement of the gain matrix for the model of Theorem 3.8, these comments apply directly to it. The difference with the Kalman filter is that the error-covariance matrix $E_{k|k-1}$ is related to $E_{k|k}$ by way of $T_k$ (Eq. 3.98), but the import of this relationship is that it provides an error-covariance update. What is clear is that everything goes back to the basic linear recursive system of Eq. 3.67. The fact that in the
dynamical linear system the random vector to be estimated is changing is of little consequence because of the uncorrelatedness assumptions of the model. The Kalman filter was introduced by R. E. Kalman for discrete time (Kalman, 1960) and shortly thereafter extended to continuous time, where it is called the Kalman–Bucy filter (Kalman and Bucy, 1961). Innovation processes have long been a staple in signal processing, going back to the 1950s (Bode and Shannon, 1950; Zadeh and Ragazzini, 1950), and were subsequently used by T. Kailath to derive the Kalman recursive equations (Kailath, 1968).
3.5 Optimal Infinite-Observation Linear Filters

For finite-observation linear filters, the orthogonality principle provides a necessary and sufficient condition for MSE optimality. According to the projection theorem (Theorem 3.4), if a subspace is topologically closed, then the distance from a vector to the subspace is minimized by the orthogonal projection of the vector into the subspace. All finite-dimensional subspaces are closed, so there always exists a minimizing vector in the subspace. This vector can be found by the orthogonality principle. The orthogonality principle also applies for infinite-dimensional subspaces; however, the existence of a minimizing vector is only assured if the subspace is closed. We state the orthogonality principle as it applies to random variables in this more general framework.

Theorem 3.10 (Orthogonality Principle). If $S$ is a subspace of finite-second-moment random variables and $Y$ has a finite second moment, then $\hat{Y}$ is the optimal MSE estimator of $Y$ lying in $S$ if and only if $E[(Y - \hat{Y})U] = 0$ for any $U \in S$. If it exists, then the best MSE estimator is unique. ▪

As in Theorem 3.2, $Y - \hat{Y}$ must be orthogonal to every element of $S$; however, when $S$ is not spanned by a finite number of random variables, there is no uniform finite representation of the elements of $S$, and therefore the methodology that resulted in the finite-observation solution cannot be directly applied. The theorem does not assert the existence of an optimal MSE estimator in $S$ (unless $S$ is closed). If an estimator is found that satisfies the orthogonality principle, then existence is assured. As conceived by Kolmogorov, the theory of optimal linear filtering takes place in the context of operators on Hilbert spaces, where subspaces are generated by operator classes. We restrict the theory to linear integral operators on random functions.
Whereas the finite procedure is justifiable owing to finite representation of the subspace random variables, application to integrals requires that certain stochastic integrals exist and that the random functions resulting from stochastic integrals possess finite second moments.
3.5.1 Wiener–Hopf equation

The aim is to estimate, for fixed $s$, the value $Y(s)$ of a random function based on observation of a random function $X(t)$ over some portion $T$ of its domain. We consider estimators of the form
\[
W(s) = \int_T g(s, t) X(t) \, dt.
\tag{3.105}
\]
Optimization is achieved by minimizing the mean-square error
\[
\mathrm{MSE}\langle W(s) \rangle = E\bigg[\bigg(Y(s) - \int_T g(s, t) X(t) \, dt\bigg)^2\bigg]
\tag{3.106}
\]
over all weighting functions $g(s, t)$. By the orthogonality principle, the optimal estimator
\[
\hat{Y}(s) = \int_T \hat{g}(s, t) X(t) \, dt
\tag{3.107}
\]
satisfies the relation
\[
E[(Y(s) - \hat{Y}(s)) W(s)] = 0
\tag{3.108}
\]
for any $W(s)$ of the form given in Eq. 3.105.

In the finite-dimensional setting, the set of all linear combinations of a finite set of random variables generates a subspace, which is the span of the set. Here, we must be sure that the set $L$ of random variables $W(s)$ generated via Eq. 3.105 by kernels $g(s, t)$ in some class $G$ of kernels forms a subspace of the space of finite-second-moment random variables. First, a subspace must be linearly closed; that is, if $W_1(s), W_2(s) \in L$, then, for arbitrary scalars $c_1$ and $c_2$,
\[
W(s) = c_1 W_1(s) + c_2 W_2(s) \in L.
\tag{3.109}
\]
If $W_1(s)$ and $W_2(s)$ result from Eq. 3.105 via the kernels $g_1(s, t)$ and $g_2(s, t)$, then $W(s)$ results from the kernel
\[
g(s, t) = c_1 g_1(s, t) + c_2 g_2(s, t).
\tag{3.110}
\]
Hence, if $G$ is linearly closed, then $g(s, t) \in G$, $W(s) \in L$, and $L$ is linearly closed. All elements of $L$ must possess finite second moments. If the covariance function of $X$ is square-integrable over $T \times T$, and $g(s, t)$ is square-integrable over $T$ in the variable $t$, then, assuming that $m_X(t) \equiv 0$ [or else we could center $X(t)$] and applying Eq. 1.31 and the Cauchy–Schwarz inequality, we obtain
\[
\begin{aligned}
E[|W(s)|^2] &= K_W(s, s) \\
&= \int_T \int_T g(s, t) g(s, t') K_X(t, t') \, dt \, dt' \\
&\leq \int_T \int_T |g(s, t) g(s, t') K_X(t, t')| \, dt \, dt' \\
&\leq \bigg( \int_T \int_T |g(s, t) g(s, t')|^2 \, dt \, dt' \bigg)^{1/2} \bigg( \int_T \int_T |K_X(t, t')|^2 \, dt \, dt' \bigg)^{1/2} \\
&= \int_T |g(s, t)|^2 \, dt \, \bigg( \int_T \int_T |K_X(t, t')|^2 \, dt \, dt' \bigg)^{1/2}.
\end{aligned}
\tag{3.111}
\]
Hence, if we assume that $K_X(t, t')$ is square-integrable, then Eq. 3.105 generates a subspace of weight functions that are square-integrable in $t$. Whether we make this constraint, or some other, we proceed under the assumption that $G$ is a linear space and that, for a given random function $X(t)$, integral operators having kernels in $G$ yield random functions possessing finite second moments.

Expanding the orthogonality relation of Eq. 3.108 and interchanging expectation and integration gives
\[
\int_T g(s, t) \bigg( E[Y(s)X(t)] - \int_T \hat{g}(s, u) E[X(u)X(t)] \, du \bigg) dt = 0
\tag{3.112}
\]
for all square-integrable kernels $g(s, t)$. This relation is satisfied if and only if
\[
E[Y(s)X(t)] = \int_T \hat{g}(s, u) E[X(u)X(t)] \, du
\tag{3.113}
\]
for (almost) all $t \in T$. Writing this equation in terms of moments yields the next theorem.

Theorem 3.11. Suppose that $G$ is a linear space of square-integrable kernels $g(s, t)$ defined over the interval $T$ and, for any $g(s, t) \in G$, the stochastic integral of Eq. 3.105 gives a random variable possessing a finite second moment. Then $\hat{g}(s, t)$ yields the optimal MSE linear estimator of $Y(s)$ based on $X(t)$ according to Eq. 3.107 if and only if
\[
R_{YX}(s, t) = \int_T \hat{g}(s, u) R_X(u, t) \, du.
\tag{3.114}
\]
▪

The preceding integral equation is called the Wiener–Hopf equation. The theorem does not assert the existence of a solution to the Wiener–Hopf equation; however, if a solution $\hat{g}(s, t)$ can be found, then there exists
an optimal MSE linear filter, and it is defined by Eq. 3.107. There is no restriction on the dimensions of the variables. Assuming that the random functions are sufficiently regular to justify interchanges of operations, no assumptions are made on the forms of $X(t)$ and $Y(s)$. The method is quite general. Practically, however, the integral equation must often be solved numerically.

For discrete signals, the Wiener–Hopf equation takes the summation form
\[
R_{YX}(m, n) = \sum_{k = k_1}^{k_2} \hat{g}(m, k) R_X(k, n)
\tag{3.115}
\]
for all $n$ between $k_1$ and $k_2$, where $-\infty \leq k_1 \leq k \leq k_2 \leq \infty$. When $k_1$ and $k_2$ are finite, this reduces to the finite-dimensional system $R\hat{A} = C$.

The MSE for an optimal linear filter is given by
\[
\begin{aligned}
E[(Y(s) - \hat{Y}(s))^2] &= E[(Y(s) - \hat{Y}(s)) Y(s)] \\
&= E[Y(s)^2] - \int_T \hat{g}(s, u) E[Y(s)X(u)] \, du \\
&= \mathrm{Var}[Y(s)] - \int_T \hat{g}(s, u) R_{YX}(s, u) \, du.
\end{aligned}
\tag{3.116}
\]
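With finitely many taps, the discrete Wiener–Hopf system can be solved directly; the sketch below uses an invented AR(1)-plus-noise model, with sample correlations standing in for the true $R_X$ and $R_{YX}$:

```python
import numpy as np

# Finite discrete Wiener-Hopf system: Eq. 3.115 with finitely many taps
# reduces to solving R a = c. AR(1)-plus-noise model, chosen for illustration.
rng = np.random.default_rng(5)
n, taps = 100_000, 4
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.8 * y[k-1] + rng.normal()      # WS stationary signal (AR(1))
x = y + 0.5 * rng.normal(size=n)            # observation = signal + white noise

# Sample correlations: rx[k] ~ R_X(k), ryx[k] ~ R_YX(k) = E[Y(m)X(m-k)].
rx = np.array([np.dot(x[:n-k], x[k:]) / (n-k) for k in range(taps)])
ryx = np.array([np.dot(y[k:], x[:n-k]) / (n-k) for k in range(taps)])
R = np.array([[rx[abs(i-j)] for j in range(taps)] for i in range(taps)])
a = np.linalg.solve(R, ryx)                 # optimal causal FIR weights

yhat = np.convolve(x, a)[:n]                # yhat[m] = sum_k a[k] x[m-k]
mse_filtered = np.mean((yhat[taps:] - y[taps:]) ** 2)
mse_raw = np.mean((x - y) ** 2)
print(mse_filtered < mse_raw)               # True
```

The Toeplitz structure of $R$ reflects stationarity; the raw observation corresponds to the suboptimal weight vector $(1, 0, \ldots, 0)'$, so the solved weights can only do better.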
3.5.2 Wiener filter

In general, the Wiener–Hopf integral equation determines a spatially variant filter; however, if $X(t)$ and $Y(s)$ are jointly WS stationary, then the optimal linear filter is spatially invariant with $\hat{g}(s, u) = \hat{g}(s - u)$. Moreover, if $X(t)$ is observed over all time, then the Wiener–Hopf equation becomes
\[
r_{YX}(s - t) = \int_{-\infty}^{\infty} \hat{g}(s - u) \, r_X(u - t) \, du.
\tag{3.117}
\]
Letting $\tau = u - t$ and $\xi = s - t$ yields the convolution integral
\[
r_{YX}(\xi) = \int_{-\infty}^{\infty} \hat{g}(\xi - \tau) \, r_X(\tau) \, d\tau.
\tag{3.118}
\]
Since the Fourier transform of the autocorrelation function is the power spectral density, applying the Fourier transform and dividing by $S_X(\omega)$ yields
\[
\hat{G}(\omega) = \frac{S_{YX}(\omega)}{S_X(\omega)},
\tag{3.119}
\]
where $\hat{G}(\omega)$ is the Fourier transform of $\hat{g}(\tau)$. The filter is called the Wiener filter.

An important case arises when a process $Y(t)$ is corrupted by the blurring of a linear observation system and additive noise. Then the observed process is
\[
X(t) = \int_{-\infty}^{\infty} h(t - s) Y(s) \, ds + N(t),
\tag{3.120}
\]
where $h(t)$ is the blurring function and $N(t)$ is noise. Suppose that $N(t)$ is white and uncorrelated with the ideal process $Y(t)$. Except for the additive noise, the observation process $X(t)$ is a convolution of $Y(t)$ with the impulse response of the system. The spectral densities $S_{YX}$ and $S_X$ were evaluated in Eqs. 2.100 and 2.101, respectively, except that there the roles of $X$ and $Y$ were reversed (see Eq. 2.95), so that here $S_{YX}(\omega) = \overline{H(\omega)} S_Y(\omega)$ (note the conjugate). Substituting $S_{YX}$ and $S_X$ directly into Eq. 3.119 yields
\[
\hat{G}(\omega) = \frac{\overline{H(\omega)} \, S_Y(\omega)}{|H(\omega)|^2 S_Y(\omega) + S_N(\omega)}.
\tag{3.121}
\]
When there is no additive noise, this reduces to the inverse filter, $\hat{G}(\omega) = H(\omega)^{-1}$. If there is no blur, only additive white noise, then we get the Wiener smoothing filter,
\[
\hat{G}(\omega) = \frac{S_Y(\omega)}{S_Y(\omega) + S_N(\omega)}.
\tag{3.122}
\]
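The smoothing filter of Eq. 3.122 can be applied in the discrete frequency domain via the FFT; in this sketch the spectra are known because the signal is synthesized by shaping white noise (all parameters are invented for the example):

```python
import numpy as np

# Wiener smoothing filter (Eq. 3.122), G = S_Y / (S_Y + S_N), applied with
# the FFT to a synthesized WS stationary signal in additive white noise.
rng = np.random.default_rng(4)
n = 4096
w = 2 * np.pi * np.fft.rfftfreq(n, d=1.0)          # discrete frequencies
shape = 1.0 / np.sqrt(1.0 + (10.0 * w) ** 2)       # low-pass shaping filter
Y = np.fft.irfft(np.fft.rfft(rng.normal(size=n)) * shape, n)   # S_Y = shape^2
sigma = 0.05
X = Y + sigma * rng.normal(size=n)                 # S_N = sigma^2 (white)

G = shape**2 / (shape**2 + sigma**2)               # Wiener smoothing filter
Yhat = np.fft.irfft(np.fft.rfft(X) * G, n)

print(np.mean((Yhat - Y)**2) < np.mean((X - Y)**2))   # noise is reduced: True
```

The gain $G(\omega)$ is near 1 where the signal spectrum dominates and near 0 where the noise dominates, which is exactly the trade-off Eq. 3.122 encodes.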
3.6 Optimal Filtering via Canonical Expansions

Estimation of a random function $Y(s)$ in terms of an observed random function $X(t)$ can be facilitated by first canonically decomposing $X(t)$ into white noise $Z(\xi)$ and then estimating $Y(s)$ by means of $Z(\xi)$, a key advantage being the simplified form of the Wiener–Hopf equation for white-noise input (Pugachev, 1956, 1957, 1965).

3.6.1 Integral decomposition into white noise

If the zero-mean random function $X(t)$ possesses an integral canonical expansion in terms of white noise $Z(\xi)$, then
\[
X(t) = \int_{\Xi} Z(\xi) \, x(t, \xi) \, d\xi,
\tag{3.123}
\]
\[
Z(\xi) = \int_T X(t) \, \overline{a(t, \xi)} \, dt.
\tag{3.124}
\]
Rather than estimate $Y(s)$ directly by a linear filter acting on $X(t)$, suppose that we can find an optimal linear filter that estimates $Y(s)$ from $Z(\xi)$,
\[
\hat{Y}(s) = \int_{\Xi} \hat{w}(s, \xi) \, Z(\xi) \, d\xi,
\tag{3.125}
\]
where $\hat{w}(s, \xi)$ is the optimal weighting function. Then
\[
\hat{Y}(s) = \int_T \hat{g}(s, t) X(t) \, dt,
\tag{3.126}
\]
where
\[
\hat{g}(s, t) = \int_{\Xi} \hat{w}(s, \xi) \, \overline{a(t, \xi)} \, d\xi.
\tag{3.127}
\]
To see that $\hat{g}(s, t)$ produces an optimal linear filter for $X(t)$, suppose that $g(s, t)$ is a different weighting function. Because $\hat{w}(s, \xi)$ is the optimal kernel for $Z(\xi)$,
\[
\begin{aligned}
E\bigg[\bigg(Y(s) - \int_T g(s, t)X(t) \, dt\bigg)^2\bigg]
&= E\bigg[\bigg(Y(s) - \int_{\Xi} \bigg( \int_T g(s, t) \, x(t, \xi) \, dt \bigg) Z(\xi) \, d\xi\bigg)^2\bigg] \\
&\geq E\bigg[\bigg(Y(s) - \int_{\Xi} \hat{w}(s, \xi) Z(\xi) \, d\xi\bigg)^2\bigg] \\
&= E\bigg[\bigg(Y(s) - \int_T \hat{g}(s, t) X(t) \, dt\bigg)^2\bigg].
\end{aligned}
\tag{3.128}
\]
To summarize the method: (1) transform $X(t)$ to white noise $Z(\xi)$; (2) find the optimal linear filter for $Z(\xi)$; and (3) obtain the optimal linear filter for $X(t)$ by concatenation.

To find an expression for $\hat{g}(s, t)$, first observe that
\[
\begin{aligned}
\hat{w}(s, t) &= \frac{1}{I(t)} \int_{\Xi} I(t) \, \delta(u - t) \, \hat{w}(s, u) \, du \\
&= \frac{1}{I(t)} \int_{\Xi} \hat{w}(s, u) \, R_Z(u, t) \, du \\
&= \frac{1}{I(t)} R_{YZ}(s, t),
\end{aligned}
\tag{3.129}
\]
where the last equality utilizes the Wiener–Hopf equation. From Eq. 3.124,
\[
R_{YZ}(s, \xi) = \int_T R_{YX}(s, u) \, a(u, \xi) \, du.
\tag{3.130}
\]
From Eq. 3.127,
\[
\hat{g}(s, t) = \int_{\Xi} \frac{\overline{a(t, \xi)}}{I(\xi)} \int_T R_{YX}(s, u) \, a(u, \xi) \, du \, d\xi.
\tag{3.131}
\]
Although our aim is to estimate real random functions, the conjugation in Eq. 3.131 appears as a result of the conjugation in the defining expression for $Z(\xi)$ (because we do not want to limit ourselves to real canonical expansions).
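In finite dimensions, the three-step method has an exact analogue: whiten $X$ with an eigendecomposition of its autocorrelation matrix (a discrete Karhunen–Loève step), filter each white coordinate by a scalar Wiener weight, and concatenate. The joint model below is invented; the point is that the concatenated route reproduces the direct filter:

```python
import numpy as np

# Finite-dimensional analogue of the three-step method: (1) whiten X via
# an eigendecomposition of Rx (discrete Karhunen-Loeve), (2) filter each
# uncorrelated coordinate with a scalar weight, (3) concatenate. The joint
# model for (X, Y) below is invented for illustration.
rng = np.random.default_rng(6)
n = 50_000
Y = rng.normal(size=n)                              # scalar quantity to estimate
B = np.array([[1.0, 0.2], [0.5, 1.0]])
X = np.outer(np.array([1.0, 0.7]), Y) + B @ rng.normal(size=(2, n))

Rx = X @ X.T / n
lam, U = np.linalg.eigh(Rx)        # Rx = U diag(lam) U'
Z = U.T @ X                        # "white" coordinates: sample E[ZZ'] is diagonal
w = (Z @ Y / n) / lam              # scalar weights E[Y Z_i] / E[Z_i^2]
Yhat_white = w @ Z                 # estimate built from the white coordinates

A = (Y @ X.T / n) @ np.linalg.pinv(Rx)   # direct optimal linear filter
print(np.allclose(Yhat_white, A @ X))    # the two routes agree: True
```

The diagonal structure of $E[ZZ']$ is what reduces the Wiener–Hopf system to independent scalar equations, mirroring the simplification that white-noise input brings to Eq. 3.114.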
Nevertheless, $\hat{g}(s, t)$ is real. Finally, we can express $\hat{Y}(s)$ in two different forms, the first following from Eqs. 3.126 and 3.131, and the second from Eqs. 3.125 and 3.129:
\[
\begin{aligned}
\hat{Y}(s) &= \int_T \bigg( \int_{\Xi} \frac{\overline{a(t, \xi)}}{I(\xi)} \int_T R_{YX}(s, u) \, a(u, \xi) \, du \, d\xi \bigg) X(t) \, dt \\
&= \int_{\Xi} \frac{Z(\xi) \, R_{YZ}(s, \xi)}{I(\xi)} \, d\xi.
\end{aligned}
\tag{3.132}
\]
Inserting the expression for $\hat{g}$ in Eq. 3.131 into Eq. 3.116 and simplifying using Eq. 3.130 yields an expression for the MSE:
\[
E[|Y(s) - \hat{Y}(s)|^2] = R_Y(s, s) - \int_{\Xi} \frac{|R_{YZ}(s, \xi)|^2}{I(\xi)} \, d\xi.
\tag{3.133}
\]

3.6.2 Extended Wiener–Hopf equation

Whether one approaches the optimal linear filter directly through the Wiener–Hopf integral equation or via canonical expansions, the problem ultimately comes down to solving integral equations of the form
\[
\int_T R_X(t, \tau) \, g(s, t) \, dt = f(s, \tau)
\tag{3.134}
\]
for t ∈ T. We call this integral equation the extended Wiener–Hopf equation. Given an integral canonical expansion for X(t), based on the preceding method involving white-noise decomposition (in particular, Eq. 3.127), we are motivated to look for a solution of the form Z gðs, tÞ ¼ hðs, jÞaðt, jÞdj: (3.135) Ξ
We need to find the function $h(s,\xi)$. Substitution of the proposed form into Eq. 3.134 and application of Eq. 2.84 yields

$$
\begin{aligned}
f(s,\tau) &= \int_T R_X(t,\tau) \int_\Xi h(s,\xi)\,a(t,\xi)\,d\xi\,dt \\
&= \int_\Xi h(s,\xi) \int_T R_X(t,\tau)\,a(t,\xi)\,dt\,d\xi \\
&= \int_\Xi I(\xi)\,h(s,\xi)\,x(\tau,\xi)\,d\xi.
\end{aligned}
\tag{3.136}
$$

This equation holds for all $\tau \in T$. Hence,
$$
\begin{aligned}
h(s,\xi) &= \frac{1}{I(\xi)} \int_\Xi I(\xi')\,h(s,\xi')\,\delta(\xi - \xi')\,d\xi' \\
&= \frac{1}{I(\xi)} \int_\Xi I(\xi')\,h(s,\xi') \int_T \overline{a(t,\xi)}\,x(t,\xi')\,dt\,d\xi' \\
&= \frac{1}{I(\xi)} \int_T \overline{a(t,\xi)} \int_\Xi I(\xi')\,h(s,\xi')\,x(t,\xi')\,d\xi'\,dt \\
&= \frac{1}{I(\xi)} \int_T \overline{a(t,\xi)}\,f(s,t)\,dt,
\end{aligned}
\tag{3.137}
$$

where the second equality follows from the bi-orthogonality relation of Eq. 2.86. From Eq. 3.135,
$$g(s,t) = \int_\Xi \frac{a(t,\xi)}{I(\xi)} \int_T f(s,\tau)\,\overline{a(\tau,\xi)}\,d\tau\,d\xi. \tag{3.138}$$
Equation 3.138 yields the optimal filter of Eq. 3.131 if $f(s,\tau) = R_{YX}(s,\tau)$, which is the case when the extended Wiener–Hopf equation reduces to the Wiener–Hopf equation.

Besides the fact that the solution of Eq. 3.138 is more general, there are differences between Eqs. 3.131 and 3.138. In deriving the former, it is assumed that $\hat{w}(s,\xi)$ provides an optimal linear filter, and Eq. 3.131 follows because it is necessary that $\hat{w}(s,\xi)$ satisfy the Wiener–Hopf equation. The more general approach postulates a form for the solution and then proceeds to find the form of the kernel $g(s,t)$. It does not ensure that $g(s,t)$ provides an optimal weighting function. Moreover, there is a problem with the derivation as given. Specifically, whereas Eq. 3.137 necessarily follows from Eq. 3.136, the converse is not true. Omitting the theoretical details, it can be shown that if the extended Wiener–Hopf equation has a solution, then the solution must be given by Eq. 3.138.

3.6.3 Solution via discrete canonical expansions

Integral canonical expansions are theoretically more difficult than discrete canonical expansions, and the latter are more readily achievable. Hence, we consider discrete canonical expansions of the form

$$X(t) = \sum_{k=1}^{\infty} Z_k\,x_k(t), \tag{3.139}$$
where $Z_k$ and $x_k(t)$ are given by Eqs. 2.20 and 2.25, respectively, and $a_j(t)$ and $x_k(t)$ satisfy the bi-orthogonality relation of Eq. 2.28. In analogy to the solution by integral canonical expansions, we look for an optimal kernel of the form
$$g(s,t) = \sum_{k=1}^{\infty} h_k(s)\,a_k(t). \tag{3.140}$$
Substitution of the proposed kernel into the extended Wiener–Hopf equation and application of Eq. 2.25 yields

$$
\begin{aligned}
f(s,\tau) &= \int_T R_X(t,\tau) \sum_{k=1}^{\infty} h_k(s)\,a_k(t)\,dt \\
&= \sum_{k=1}^{\infty} h_k(s) \int_T R_X(t,\tau)\,a_k(t)\,dt \\
&= \sum_{k=1}^{\infty} V_k\,h_k(s)\,x_k(\tau),
\end{aligned}
\tag{3.141}
$$
where $V_k = \mathrm{Var}[Z_k]$. Two points should be noted. First, the preceding calculation assumes that integration and summation can be interchanged. Second, the resulting representation of $f(s,\tau)$ is only a necessary condition for $g(s,t)$ to be a solution of the extended Wiener–Hopf equation. It can, in fact, be proven that for the extended Wiener–Hopf equation to possess a solution that results in an operator that transforms $X(t)$ into a random function possessing a finite variance (which is a requirement in the context of optimal linear filtering), it is necessary that $f(s,\tau)$ be representable in terms of the coordinate functions of $X(t)$. If $f(s,\tau)$ has such a representation, we say that it is coordinate-function representable.

Multiplying by $a_j(\tau)$, integrating over $T$, and applying the bi-orthogonality relation of Eq. 2.28 yields

$$\int_T a_j(\tau)\,f(s,\tau)\,d\tau = \sum_{k=1}^{\infty} V_k\,h_k(s) \int_T a_j(\tau)\,x_k(\tau)\,d\tau = V_j\,h_j(s). \tag{3.142}$$

Solving for $h_j(s)$ yields

$$h_j(s) = \frac{1}{V_j} \int_T a_j(\tau)\,f(s,\tau)\,d\tau. \tag{3.143}$$
Putting this expression into Eq. 3.140 yields

$$g(s,t) = \sum_{k=1}^{\infty} \frac{a_k(t)}{V_k} \int_T a_k(\tau)\,f(s,\tau)\,d\tau. \tag{3.144}$$
If there exists a solution of the extended Wiener–Hopf equation, then it is given by Eq. 3.144. Letting $f(s,\tau) = R_{YX}(s,\tau)$ yields the solution of the Wiener–Hopf equation:
$$g(s,t) = \sum_{k=1}^{\infty} \frac{a_k(t)}{V_k} \int_T a_k(\tau)\,R_{YX}(s,\tau)\,d\tau. \tag{3.145}$$
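In finite dimensions the solution of Eq. 3.144 can be checked directly. The sketch below (ours, not from the text) uses the Karhunen–Loève system of a discrete autocorrelation matrix, for which the coordinate and dual functions coincide with the eigenvectors and $V_k$ with the eigenvalues; every right-hand side $f$ is then coordinate-function representable, and the kernel of Eq. 3.144 solves the extended Wiener–Hopf equation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random positive-definite autocorrelation matrix R_X (discrete analog of R_X(t, tau))
A = rng.standard_normal((n, n))
R_X = A @ A.T + n * np.eye(n)
lam, U = np.linalg.eigh(R_X)        # eigenvalues V_k and eigenvectors a_k = x_k (KL system)

F = rng.standard_normal((n, n))     # arbitrary right-hand side f(s, tau)

# Eq. 3.144: g(s, t) = sum_k a_k(t)/V_k * sum_tau a_k(tau) f(s, tau)
G = np.zeros((n, n))
for k in range(n):
    G += np.outer(F @ U[:, k], U[:, k]) / lam[k]

# G satisfies the extended Wiener-Hopf equation: sum_t R_X(t, tau) g(s, t) = f(s, tau)
assert np.allclose(G @ R_X, F)
```

Because the eigenvectors span the whole space, coordinate-function representability is automatic here; in infinite dimensions it is a genuine restriction, as noted above.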
Example 3.2. Let $Y(t)$ be a zero-mean signal, $\{u_k(t)\}$ be the Karhunen–Loève orthonormal coordinate-function system for $K_Y(t,\tau)$, $\{\lambda_k\}$ be the corresponding eigenvalue sequence, $N(t)$ be white noise with constant intensity $\kappa$ that is uncorrelated with the signal, and $X(t) = Y(t) + N(t)$ be observed. Then

$$K_X(t,\tau) = K_Y(t,\tau) + \kappa\,\delta(t-\tau),$$

$$K_{YX}(t,\tau) = K_Y(t,\tau),$$

$$\int_T K_X(t,\tau)\,u_k(\tau)\,d\tau = \int_T \left(K_Y(t,\tau) + \kappa\,\delta(t-\tau)\right) u_k(\tau)\,d\tau = (\lambda_k + \kappa)\,u_k(t).$$

$X(t)$ has the same coordinate functions as $Y(t)$, but with eigenvalue set $\{\lambda_k + \kappa\}$. According to Eq. 3.145,

$$\hat{g}(s,t) = \sum_{k=1}^{\infty} \frac{u_k(t)}{\lambda_k + \kappa} \int_T u_k(\tau)\,R_Y(s,\tau)\,d\tau = \sum_{k=1}^{\infty} \frac{\lambda_k}{\lambda_k + \kappa}\,u_k(s)\,u_k(t)$$

(which does solve the Wiener–Hopf equation). The optimal linear filter is given by

$$\hat{Y}(s) = \int_T \sum_{k=1}^{\infty} \frac{\lambda_k}{\lambda_k + \kappa}\,u_k(s)\,u_k(t)\,X(t)\,dt = \sum_{k=1}^{\infty} \frac{\lambda_k}{\lambda_k + \kappa}\,Z_k\,u_k(s),$$

where $Z_1, Z_2, \ldots$ are the Karhunen–Loève coefficients for $X(t)$.
▪
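Example 3.2 reduces to familiar matrix algebra in finite dimensions. The following sketch (dimensions and names are our own) builds the filter $\sum_k \frac{\lambda_k}{\lambda_k+\kappa}\,u_k(s)\,u_k(t)$ and confirms that it coincides with the matrix Wiener filter $K_Y(K_Y + \kappa I)^{-1}$, i.e., shrinkage of the Karhunen–Loève coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n, kappa = 8, 0.5
B = rng.standard_normal((n, n))
K_Y = B @ B.T                      # signal covariance (a hypothetical example)
lam, U = np.linalg.eigh(K_Y)       # KL eigenvalues lambda_k and eigenvectors u_k

# Filter of Example 3.2: g(s, t) = sum_k lam_k/(lam_k + kappa) u_k(s) u_k(t)
G = (U * (lam / (lam + kappa))) @ U.T

# X = Y + N has the same coordinate functions with eigenvalues lam_k + kappa,
# so the filter equals the classical matrix Wiener solution K_Y (K_Y + kappa I)^{-1}
assert np.allclose(G, K_Y @ np.linalg.inv(K_Y + kappa * np.eye(n)))

# Applying the filter shrinks each KL coefficient Z_k of X by lam_k/(lam_k + kappa)
x = rng.standard_normal(n)
Z = U.T @ x
assert np.allclose(G @ x, U @ (lam / (lam + kappa) * Z))
```

The shrinkage view makes the noise trade-off explicit: coordinates where $\lambda_k \gg \kappa$ pass nearly unchanged, while coordinates dominated by noise are suppressed.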
3.7 Optimal Morphological Bandpass Filters

Morphological filtering refers to a large class of nonlinear operators, mainly applied to images (Matheron, 1975; Serra, 1983; Giardina and Dougherty, 1988). Optimal morphological filtering has been well studied, with filter
optimization dependent on different classes of morphological filters and whether they are applied to discrete or continuous random processes (Dougherty and Astola, 1999). Here, based mainly on Chen and Dougherty (1997), we focus on a class of morphological filters for which optimization can be done in a transform domain analogously to linear filters. These filters are applied to binary (black-and-white) images modeled as random sets. Like linear filters, they can be used to remove noise, here in the form of clutter.

3.7.1 Granulometries

To motivate the discussion, consider a Euclidean set $S$ decomposed as a disjoint union $S = S_1 \cup S_2 \cup \cdots \cup S_n$. Imagine that the components are passed over a sieve of mesh size $t > 0$ and that a parameterized filter $C_t$ is defined componentwise according to whether a component does or does not pass through the sieve: $C_t(S_i) = S_i$ if $S_i$ does not fall through the sieve; $C_t(S_i) = \emptyset$ if $S_i$ falls through the sieve. For the overall set,

$$C_t(S) = \bigcup_{i=1}^{n} C_t(S_i). \tag{3.146}$$
Since the components of $C_t(S)$ form a subcollection of the components of $S$, $C_t$ is anti-extensive, meaning that $C_t(S) \subset S$. If $T \supset S$, then $C_t(T) \supset C_t(S)$, so that $C_t$ is increasing. If the components are sieved iteratively with two different mesh sizes, then the output after both iterations depends only on the larger of the mesh sizes. If $S$ should undergo a spatial translation to the set $S + x$, then $C_t(S + x) = C_t(S) + x$, so that $C_t$ is translation invariant.

A filter family $\{C_t\}$, $t > 0$, is called an algebraic granulometry if $C_t$ is anti-extensive, $C_t$ is increasing, and $C_r C_s = C_s C_r = C_{\max\{r,s\}}$ for $r, s > 0$ [mesh property]. If $\{C_t\}$ is an algebraic granulometry and $r \ge s$, then $C_r = C_s C_r \subset C_s$, where the equality follows from the mesh property, and the inclusion follows from the anti-extensivity of $C_r$ and the increasingness of $C_s$. $\{C_t\}$ is called a granulometry if it is an algebraic granulometry and $C_t$ is translation invariant. Finally, a granulometry $\{C_t\}$ is called a Euclidean granulometry if, for any $t > 0$, $C_t(S) = tC_1(S/t)$ [Euclidean property]. The Euclidean condition means that scaling a set by $1/t$, sieving by $C_1$, and then rescaling by $t$ is the same as sieving by $C_t$. We call $C_1$ the unit of the granulometry. Although we restrict ourselves to binary granulometries, the theory can be extended to nonbinary processes.

The fundamental representation theorem concerning Euclidean granulometries states that every Euclidean granulometry can be represented in terms of opening operators, where, given a set $B$, the opening of $S$ by the structuring element $B$ is defined by

$$S \circ B = \bigcup_{B + x \subset S} B + x, \tag{3.147}$$
the union of all translates of $B$ that are subsets of $S$. To state the theorem, we require a few definitions. Suppose that $W$ is a family of sets closed under union, translation, and scalar multiplication by $t \ge 1$. A class $G$ of sets is called a generator of $W$ if the class closed under union, translation, and scalar multiplication by scalars $t \ge 1$ generated by $G$ is $W$. The invariant class of an operator $C$, denoted as $\mathrm{Inv}[C]$, consists of all sets $S$ for which $C(S) = S$ (one might think of eigenfunctions). If $G$ generates the invariant class $\mathrm{Inv}[C_1]$ of the unit of a Euclidean granulometry, then $G$ is called a generator of $\{C_t\}$.

Theorem 3.12 (Matheron, 1975). An operator family $\{C_t\}$, $t > 0$, is a Euclidean granulometry if and only if there exists a class of sets $G$ such that

$$C_t(S) = \bigcup_{B \in G} \bigcup_{r \ge t} S \circ rB. \tag{3.148}$$
Moreover, G is a generator of {Ct}.
▪
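The opening of Eq. 3.147 and the granulometric properties above can be exercised on small integer grids. The sets and structuring elements below are hypothetical; the checks are anti-extensivity, increasingness, idempotence, and the sieving of a component that the structuring element does not fit:

```python
# S o B = union of translates B + x contained in S (Eq. 3.147), on integer grids
def translate(B, x):
    return {(p[0] + x[0], p[1] + x[1]) for p in B}

def opening(S, B):
    out = set()
    # Candidate translations: every vector mapping some point of B onto a point of S
    for x in {(s[0] - b[0], s[1] - b[1]) for s in S for b in B}:
        Bx = translate(B, x)
        if Bx <= S:
            out |= Bx
    return out

S = {(i, j) for i in range(7) for j in range(2)}        # a 7x2 rectangle
T = S | {(i, 2) for i in range(7)}                      # a 7x3 rectangle containing S
B2 = {(i, j) for i in range(2) for j in range(2)}       # 2x2 square structuring element
B3 = {(i, j) for i in range(3) for j in range(3)}       # 3x3 square structuring element

O = opening(S, B2)
assert O <= S                          # anti-extensive
assert opening(O, B2) == O             # idempotent
assert O <= opening(T, B2)             # increasing
assert opening(S, B3) == set()         # a 3x3 square does not fit: S falls through this sieve
```

Scaling the structuring element ($tB$, $t > 0$) turns this single opening into the convex granulometry of Eq. 3.149.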
For practical application, the generator usually consists of compact, convex sets, in which case the double union reduces to the single outer union over $G$,

$$C_t(S) = \bigcup_{B \in G} S \circ tB, \tag{3.149}$$
called a convex granulometry, where if $S_1, S_2, \ldots$ are mutually disjoint compact sets, then

$$C_t\left(\bigcup_{i=1}^{\infty} S_i\right) = \bigcup_{i=1}^{\infty} C_t(S_i). \tag{3.150}$$
That is, a convex granulometry is distributive and can be viewed componentwise. Owing to the mesh property, a granulometry is idempotent: $C_t C_t = C_t$; indeed, $C_t C_t = C_{\max\{t,t\}} = C_t$. A binary filter $C$ is called a $\tau$-opening if it is translation invariant, increasing, anti-extensive, and idempotent. Thus, for fixed $t$, a Euclidean granulometry is a $\tau$-opening. The simplest $\tau$-opening is an opening. A subclass $\mathcal{B} \subset \mathrm{Inv}[C]$ is a base for the $\tau$-opening $C$ if every set in $\mathrm{Inv}[C]$ is a union of translates of sets in $\mathcal{B}$. Bases are not unique. $C$ is a $\tau$-opening if and only if it can be represented as a union of openings by sets in a base (Matheron, 1975).

If $\{C_t\}$ is a granulometry, $S$ is a compact set, and $\nu$ denotes the Lebesgue measure (area), then the size distribution $V(t) = \nu[S] - \nu[C_t(S)]$, $t > 0$, is increasing and continuous from the left. Define $V(0) = 0$. Assuming that at least one generating set for $\{C_t\}$ contains more than a single point, $V(t) = \nu[S]$ for sufficiently large $t$. Treating $S$ as a random set, $V(t)$ is a random function.
Granulometric optimization is based on the mean $M(t) = E[V(t)]$. $M(0) = 0$, $M$ is increasing, $M$ is continuous from the left, and, assuming that $S$ is contained in a fixed compact set, $M$ is bounded. Thus, it possesses a Lebesgue decomposition into absolutely continuous and singular parts. Because $M$ need not be differentiable in the ordinary sense, the derivative $H(t) = M'(t)$, known as a granulometric size distribution (GSD), may have to be taken in a generalized sense.

The opening spectrum of $S$ relative to the granulometry $\{C_t\}$ is defined by

$$S_t = \bigcap_{\tau > 0} \left[C_t(S) - C_{t+\tau}(S)\right] \tag{3.151}$$
for $t \ge 0$. The collection $\{S_t\}$ of spectral components forms a partition of $S$. A subset $\Pi$ of the nonnegative real axis is called a countable-interval subset if it can be represented as a countable union of disjoint intervals $\Pi_i$, where singleton point sets are considered to be intervals of length zero. Without loss of generality we assume that $i < j$ implies, for all $t \in \Pi_i$ and $r \in \Pi_j$, that $t < r$ and that there exists $s \notin \Pi$ such that $t < s < r$. This assumption means that $\Pi_i$ is to the left of $\Pi_j$ and that $\Pi_i$ and $\Pi_j$ are separated by the complement of $\Pi$. The granulometric bandpass filter (GBF) $\Xi$ corresponding to the countable-interval subset $\Pi$ (relative to the opening spectrum $\{S_t\}$) is defined by

$$\Xi(S) = \bigcup_{t \in \Pi} S_t. \tag{3.152}$$
$\Pi$ and $\Pi^c$ are the pass and fail sets for the filter, respectively.

3.7.2 Optimal granulometric bandpass filters

Given the task of restoring set $S$ from observed set $S \cup N$, the error associated with a granulometric bandpass filter $\Xi$ is

$$\varepsilon[\Xi] = E[\nu[S \,\Delta\, \Xi(S \cup N)]], \tag{3.153}$$

where $\Delta$ denotes set symmetric difference, defined by $S \,\Delta\, T = (S - T) \cup (T - S)$. Filter optimization involves finding an optimal pass set relative to the spectral decomposition $\{(S \cup N)_t\}$: find a pass set yielding a filter defined according to Eq. 3.152 having minimum error. For granulometry $\{C_t\}$, an optimal pass set and corresponding optimal filter are denoted by $\Pi\langle C\rangle$ and $\Xi_C$, respectively. Let $M_S$ and $M_N$ denote the mean size distributions for the signal and noise, respectively, relative to $\{C_t\}$. If both $M_S$ and $M_N$ are absolutely continuous (in particular, if they are both continuously differentiable), then the error for the granulometric bandpass filter $\Xi$ with pass set $\Pi$ is
$$\varepsilon[\Xi] = \int_{\Pi^c} H_S(t)\,dt + \int_{\Pi} H_N(t)\,dt \tag{3.154}$$
(Dougherty, 1997). Intuitively, the error is the signal GSD mass over the fail set plus the noise GSD mass over the pass set. If $M_S$ and $M_N$ are continuously differentiable except on sets without limit points (a mathematical restriction posing no practical constraint) and $S$ and $N$ are disjoint, then an optimal pass set is given in terms of the granulometric size distributions by

$$\Pi\langle C\rangle = \{t : H_S(t) \ge H_N(t)\}, \tag{3.155}$$
where the derivatives may involve delta functions and the inequality is interpreted in the usual manner wherever impulses are involved (Dougherty, 1997). Designing an optimal GBF involves finding the GSDs of the signal and noise processes and then solving the differential inequality, which is analogous to finding the power spectral densities in Wiener filtering. Its error is given by Eq. 3.154 with pass set $\Pi\langle C\rangle$.

To get a sense of the geometry behind Eq. 3.155, consider the set consisting of six disjoint rectangles $S_1$, $S_2$, $S_3$, $N_1$, $N_2$, and $N_3$ of dimensions $1 \times 1$, $4 \times 2$, $1 \times 3$, $5 \times 1$, $1 \times 2$, and $3 \times 3$, respectively, where $S_1$, $S_2$, $S_3$ comprise the signal $S$ and $N_1$, $N_2$, $N_3$ comprise the noise $N$. If $C_t$ is an opening by the unit vertical line segment, then the signal size distribution $V_S(t)$ is a step function with steps of heights 1, 8, and 3 at points 1, 2, and 3, respectively, and the noise size distribution $V_N(t)$ is a step function with steps of heights 5, 2, and 9 at points 1, 2, and 3, respectively. Suppose that the random set has the single realization $S \cup N$, in which case $M_S = V_S$, $M_N = V_N$, $H_S(t) = \delta(t-1) + 8\delta(t-2) + 3\delta(t-3)$, and $H_N(t) = 5\delta(t-1) + 2\delta(t-2) + 9\delta(t-3)$. According to Eq. 3.155, $\Pi\langle C\rangle = \{t : t \ne 1, t \ne 3\}$. In fact, the only relevant points are 1, 2, and 3, with 2 in the pass set and 1 and 3 in the fail set. This makes complete sense: we prefer to pass $S_2$ and $N_2$ rather than not to pass them, we prefer not to pass $S_1$ and $N_1$ rather than to pass them, and we prefer not to pass $S_3$ and $N_3$ rather than to pass them.

If there exists a sequence $0 = t_0 < t_1 < t_2 < \cdots$ such that the pass set is

$$\Pi\langle C\rangle = [t_1, t_2] \cup [t_3, t_4] \cup [t_5, t_6] \cup \cdots, \tag{3.156}$$
then

$$\Xi_C(S \cup N) = \bigcup_{k=1}^{\infty} \left[C_{t_{2k-1}}(S \cup N) - C_{t_{2k}}(S \cup N)\right]. \tag{3.157}$$
Other decompositions of $\Pi\langle C\rangle$ lead to corresponding filter representations.
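The rectangle computation above can be replayed with the impulse masses of the two GSDs; the dictionaries below simply encode the step heights from the example:

```python
# Impulse masses H(t) of the discrete GSDs at mesh sizes t = 1, 2, 3
H_S = {1: 1, 2: 8, 3: 3}   # signal area removed at each size (rectangles S1, S2, S3)
H_N = {1: 5, 2: 2, 3: 9}   # noise area removed at each size (rectangles N1, N2, N3)

# Eq. 3.155: pass every size where the signal mass dominates the noise mass
pass_set = {t for t in H_S if H_S[t] >= H_N.get(t, 0)}
assert pass_set == {2}

# Eq. 3.154: error = signal mass over the fail set + noise mass over the pass set
err = sum(m for t, m in H_S.items() if t not in pass_set) \
    + sum(m for t, m in H_N.items() if t in pass_set)
assert err == 6            # loses S1 (area 1) and S3 (area 3), passes N2 (area 2)
```

Any other pass set gives a larger error, e.g., passing everything costs the full noise area 16, and passing nothing costs the full signal area 12.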
3.7.3 Reconstructive granulometric bandpass filters

The filter family $\{L_t\}$ is derived from the granulometry $\{C_t\}$ by reconstruction if, for each connected component $S$, $L_t(S) = S$ if $C_t(S) \ne \emptyset$ and $L_t(S) = \emptyset$ if $C_t(S) = \emptyset$. The reconstructive filter $L_t$ fully passes each component not eliminated by $C_t$ and eliminates any component eliminated by $C_t$. $\{L_t\}$ is a granulometry called a reconstructive granulometry. $\{L_t\}$ generates a size distribution and a GSD.

To apply granulometric spectral theory to a reconstructive granulometry $\{L_t\}$, we need the specialized form of the size-distribution mean. Given a random compact set $X$, define the granulometric measure ($\{L_t\}$-measure) of $X$ by

$$h(X) = \sup\{t : L_t(X) = X\}. \tag{3.158}$$
$h(X)$ is a random variable whose distribution depends on the distribution of $X$. Define the random set $X$ by

$$X = \bigcup_{k=1}^{C} X_k + z_k, \tag{3.159}$$
where $X_1, X_2, \ldots, X_C$ are identically distributed to $X$ (which plays the role of a primary grain for $X$), $C$ is a random positive integer independent of the grains, and $z_1, z_2, \ldots, z_C$ are locations randomized up to the constraint that the union forming $X$ is disjoint. In practice, disjointness might result from image segmentation. Then

$$V(t) = \sum \{\nu[X_k] : h(X_k) < t\} = \sum_{k=1}^{C} \nu[X_k]\,T[X_k; t], \tag{3.160}$$
where $T[X_k; t] = 1$ if $h(X_k) < t$ and $T[X_k; t] = 0$ if $h(X_k) \ge t$. Taking expectations yields

$$M(t) = \mu_X\,E[\nu[X]\,T[X; t]], \tag{3.161}$$

where $\mu_X = E[C]$. Typically, the random set $X$ depends on some parameter vector $W$, so that $X = X(W)$, and both $\nu[X]$ and $h(X)$ depend on $W$. Hence,

$$M(t) = \mu_X \int_{\{w \,:\, h(X)(w) < t\}} \nu[X(w)]\,f_W(w)\,dw, \tag{3.162}$$
where $f_W(w)$ is the density of $W$. Filter optimization depends on $M'(t)$. In its most general form, the integral giving $M(t)$ is intractable, thereby making the derivative problematic; however, for certain stochastic-geometric models, the integral simplifies and $M'(t)$ can be found. In general, $M'(t)$ is a generalized derivative, but in practical cases where
the primary grain $X$ is governed by a continuous probability distribution, it reduces to an ordinary derivative [meaning that $M(t)$ is differentiable].

Regarding differentiability of $M(t)$, we consider some special cases. Suppose that $X$ depends on $n + 1$ random parameters, $X = X(W, U_1, U_2, \ldots, U_n)$, $W$ is independent of $U_1, U_2, \ldots, U_n$, $h(X)$ depends only on $W$, and $h(X)(W)$ is an increasing function of $W$. Then

$$M(t) = \mu_X \int_0^{h(X)^{-1}(t)} \int_0^{\infty} \cdots \int_0^{\infty} \nu[X(w, u_1, \ldots, u_n)]\,f_W(w)\,f(u_1, \ldots, u_n)\,du_1 \cdots du_n\,dw. \tag{3.163}$$

If $f_W$ is a continuous function of $w$, then the derivative of $M(t)$ is

$$
\begin{aligned}
H(t) &= \mu_X\,\frac{f_W(h(X)^{-1}(t))}{h(X)'(h(X)^{-1}(t))} \int_0^{\infty} \cdots \int_0^{\infty} \nu[X(h(X)^{-1}(t), u_1, \ldots, u_n)]\,f(u_1, \ldots, u_n)\,du_1 \cdots du_n \\
&= \mu_X\,\frac{f_W(h(X)^{-1}(t))}{h(X)'(h(X)^{-1}(t))}\,E\!\left[\nu[X]\big|_{W = h(X)^{-1}(t)}\right] \\
&= \mu_X\,g_X(t)\,f_W(h(X)^{-1}(t)),
\end{aligned}
\tag{3.164}
$$

where

$$g_X(t) = \frac{E\!\left[\nu[X]\big|_{W = h(X)^{-1}(t)}\right]}{h(X)'(h(X)^{-1}(t))}, \tag{3.165}$$

and $\nu[X]\big|_{W = h(X)^{-1}(t)}$ is the area of $X$ evaluated at $W = h(X)^{-1}(t)$. In the special case when $h(X)$ is the identity,

$$H(t) = \mu_X\,f_W(t)\,E\!\left[\nu[X]\big|_{W = t}\right]. \tag{3.166}$$
This result is intuitive. It says that the derivative of the mean size distribution at $t$ is the expected area of the primary grain when $W$ is fixed at $t$, weighted by the infinitesimal probability mass of $W$ at $t$. If $X$ depends only on a single random parameter $W$ and $h(X)(W) = W$, we get the reduction $H(t) = \mu_X\,f_W(t)\,\nu[X](t)$.

Consider the signal-union-clutter model, in which there are two random, compact, connected sets $S$ and $N$, the first being the primary signal grain and the second being the primary noise (clutter) grain. The signal and noise random sets are defined by

$$S = \bigcup_{i=1}^{A} S_i + x_i, \tag{3.167}$$
$$N = \bigcup_{j=1}^{B} N_j + y_j, \tag{3.168}$$
respectively, where the components $S_1, S_2, \ldots, S_A$ are identically distributed with $S$, the components $N_1, N_2, \ldots, N_B$ are identically distributed with $N$, $S_1, S_2, \ldots, S_A, N_1, N_2, \ldots, N_B$ form a disjoint collection, $A$ and $B$ are random positive integers independent of each other and independent of the grains, and $x_1, x_2, \ldots, x_A, y_1, y_2, \ldots, y_B$ are locations randomized up to the constraint that all components forming $S \cup N$ are disjoint. The observed set is $S \cup N$. Both signal and noise fit the model of Eq. 3.159. Therefore, their size-distribution means $M_S$ and $M_N$ can be found via Eq. 3.162 or, under appropriate conditions, one of the subsequent special cases. The pass set of the optimal granulometric bandpass filter can be found by solving the inequality $H_S(t) \ge H_N(t)$.

Example 3.3. Consider a signal consisting of squares possessing random angle of rotation $U$ and random radius $R$, so that $X = X(R, U)$. Let $C_t$ be an ordinary opening by a disk of radius $t$ (and $L_t$ be the induced reconstructive filter). Then Eq. 3.166 applies, $h(X)(R, U) = R$, and $H_S(t) = 4\mu_S\,t^2 f_S(t)$, where $\mu_S$ is the expected number of signal squares. Let the noise consist of ellipses possessing random angle of rotation, random minor axis $2W$, and random major axis $4W$. With $C_t$ being ordinary opening by a disk of radius $t$, $H_N(t) = 2\pi\mu_N\,t^2 f_N(t)$. The pass set is determined by the inequality $\mu_S f_S(t) \ge (\pi/2)\mu_N f_N(t)$. Now, suppose that $\mu_S = \mu_N$, $R$ is normally distributed with mean 20 and standard deviation 3, and $W$ possesses a bimodal distribution that is the sum of two Gaussians, one having mean 10, standard deviation 2, and mass 3/4, the other having mean 30, standard deviation 2, and mass 1/4. The pass set consists of all $t$ in the interval determined by $f_S(t) \ge (\pi/2)f_N(t)$.
With $t' = 14.336$ and $t'' = 26.332$ being the left and right endpoints of the pass band, the optimal granulometric bandpass filter is

$$\Xi_L(S \cup N) = L_{t'}(S \cup N) - L_{t''}(S \cup N).$$

The optimal filter has a single passband, a consequence of both the filter form and the form of the densities. For any single-passband filter with passband $[r_1, r_2]$, denoted by $\Xi_{r_1, r_2}$, its error, according to Eq. 3.154, is given by

$$\varepsilon[\Xi_{r_1, r_2}] = \int_0^{r_1} H_S(t)\,dt + \int_{r_2}^{\infty} H_S(t)\,dt + \int_{r_1}^{r_2} H_N(t)\,dt.$$

The error is minimized with $r_1 = 14.336$ and $r_2 = 26.332$: $\varepsilon[\Xi_{14.336,\,26.332}] = 0.0323\,E[T]$, where $T$ is total image area.
▪
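The passband endpoints in Example 3.3 can be recovered numerically from the stated densities; the grid search below is our own sketch of solving $f_S(t) \ge (\pi/2)f_N(t)$:

```python
import math

def npdf(t, mu, sigma):
    return math.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def f_S(t):                        # density of the square radius R ~ N(20, 3^2)
    return npdf(t, 20.0, 3.0)

def f_N(t):                        # bimodal density of W: masses 3/4 and 1/4
    return 0.75 * npdf(t, 10.0, 2.0) + 0.25 * npdf(t, 30.0, 2.0)

# Pass set from mu_S f_S(t) >= (pi/2) mu_N f_N(t) with mu_S = mu_N
passed = [k / 1000.0 for k in range(45000)
          if f_S(k / 1000.0) >= (math.pi / 2) * f_N(k / 1000.0)]
lo, hi = passed[0], passed[-1]
assert abs(lo - 14.336) < 0.05 and abs(hi - 26.332) < 0.05
```

The pass set comes out as a single interval, consistent with the single-passband form of $\Xi_L$ above.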
3.8 General Schema for Optimal Design

The procedures employed for finding both optimal linear and optimal granulometric bandpass filters fit into a general schema for finding optimal operators. For linear filters, an optimal filter is found in terms of the cross-correlation function; for linear filters in the case of WSS processes, an optimal filter is found in terms of the power spectra; and for GBFs, an optimal filter is found in terms of the granulometric size densities. The cross-correlation function, power spectra, and GSDs are called characteristics of the random processes, the term referring to any deterministic function of the processes. While in this chapter operators have been filters (estimators on stochastic processes), they could be other kinds of operators, such as operators affecting the behavior of a system or classifiers to make decisions regarding the state of the system. The general schema consists of four steps:

1. Construct the mathematical model.
2. Define a class of operators.
3. Define the optimization problem via a cost function.
4. Solve the optimization problem via characteristics of the model.
The optimization problem takes the general form

$$\psi_{\mathrm{opt}} = \arg\min_{\psi \in \mathcal{F}} C(\psi), \tag{3.169}$$

where $\mathcal{F}$ is the operator class and $C(\psi)$ is the cost of applying operator $\psi$ on the model, such as the MSE for estimating one process by another. This translational paradigm originated in the classic work of Andrey Kolmogorov (Kolmogorov, 1941) and Norbert Wiener (Wiener, 1949), as covered here in the basic linear-filter theory; although published in 1949, an unpublished version of Wiener's work appeared in 1942.

For linear filtering, the synthesis scheme takes the following form:

1. The model consists of two jointly distributed random processes.
2. The operator class consists of integral linear filters over an observation window.
3. Optimization involves minimizing the MSE.
4. If the Wiener–Hopf equation is satisfied, then the optimization problem is solved in terms of the cross-correlation function by the weighting function in Eq. 3.131.

For Wiener filtering, steps 1 and 4 become:

1. The model consists of two jointly WSS random processes.
4. If the Wiener–Hopf equation is satisfied, then the optimization problem is solved by the Fourier transform of the weighting function in terms of the power spectra $S_X(\omega)$ and $S_{YX}(\omega)$.
For granulometric bandpass filters, the schema takes the following form:

1. The model is a disjoint union of signal and noise random sets.
2. The operator class consists of granulometric bandpass filters.
3. Optimization involves minimizing the expected measure of the symmetric difference in Eq. 3.153.
4. Under a differentiability assumption on the mean size distributions of the signal and noise, the optimization problem is solved by a passband defined via the GSDs $H_S(t)$ and $H_N(t)$.

Characteristics will play an important role going forward; however, the optimization schema and the notion of a characteristic will be altered to take into account model uncertainty.
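The four steps can be instantiated in a toy discrete setting; everything below (alphabet, noise model, operator class) is our own illustration of solving Eq. 3.169 by direct search:

```python
import itertools
import random

random.seed(0)

# Step 1: the model -- samples from a joint distribution of (x, y) on {0,1,2,3};
# y is a noisy copy of x (a hypothetical model, chosen only for illustration)
xs = [random.randrange(4) for _ in range(2000)]
samples = [(x, x if random.random() < 0.7 else random.randrange(4)) for x in xs]

# Step 2: the operator class -- all maps psi: {0,1,2,3} -> {0,1,2,3}
operators = list(itertools.product(range(4), repeat=4))

# Step 3: the cost -- empirical MSE of estimating y by psi(x)
def cost(psi):
    return sum((psi[x] - y) ** 2 for x, y in samples) / len(samples)

# Step 4: solve Eq. 3.169 by searching the operator class
psi_opt = min(operators, key=cost)
assert all(cost(psi_opt) <= cost(op) for op in [(0, 0, 0, 0), (0, 1, 2, 3), (3, 2, 1, 0)])
```

For the filters of this chapter the operator class is infinite, so step 4 is carried out analytically via the characteristics rather than by search, but the structure of the problem is the same.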
Chapter 4
Optimal Robust Filtering

It is often unrealistic to assume that the design model is known with certainty, so it is prudent to assume that the true model belongs to an uncertainty class $\{(X_\theta, Y_\theta)\}_{\theta \in \Theta}$ of models parameterized by a vector $\theta$ belonging to a parameter set $\Theta$, also called the uncertainty class, that is in one-to-one correspondence with the model class $\{(X_\theta, Y_\theta)\}_{\theta \in \Theta}$. A robust filter is optimal in some sense over the uncertainty class. If we think in terms of a system, then $\theta$ characterizes the state of the system. Owing to the one-to-one correspondence, we can use the terms "model" and "state" interchangeably. In the context of filtering, where an unobserved ideal signal is estimated based on an observed signal, for each $\theta \in \Theta$ there is an observed-ideal signal pair $(X_\theta, Y_\theta)$ and an optimal filter $\psi_\theta$.

Robust filter design goes back to the late 1970s, with robust Wiener filtering involving minimax optimality in regard to uncertain power spectra (Kuznetsov, 1976; Kassam and Lim, 1977; Poor, 1980; Vastola and Poor, 1984). Robust design was extended to nonlinear filters and placed into a Bayesian framework by assuming a prior probability distribution governing the uncertainty class, the aim being to find a filter with minimal expected error across the uncertainty class (Grigoryan and Dougherty, 1999). Only recently have fully optimal solutions been found in this framework (Dalton and Dougherty, 2014), and it is to this topic that we now turn.

It is important to recognize exactly what is meant by filtering relative to a collection $\{X_\theta(t)\}_{\theta \in \Theta}$ of random functions. For a deterministic signal $x(t)$ in isolation, there is no probability structure. According to our convention, a probability structure is defined on random functions by $n$th-order probability distributions as in Eq. 1.1. Hence, $X_\theta(t)$, as a random function, is characterized by Eq. 1.1 with $F$ and $X$ replaced by $F_\theta$ and $X_\theta$, respectively.
To say that $\psi_\theta$ is an optimal MSE linear filter for estimating $Y_\theta(s)$ based on observing $X_\theta(t)$ means that the MSE between the ideal and filtered random functions has been minimized relative to the probability structure defined by the collection $\{F_\theta\}$ of $n$th-order joint probability distributions. This is clearly seen in Eq. 3.14 for the simpler situation in which a single random variable $Y$ is being estimated via a collection of $n$ predictor variables. The optimality of $\psi_\theta$ is
based on the joint probabilistic behavior of $Y_\theta(s)$ and $\psi_\theta(X_\theta)(s)$. If we consider a different class $\{F_\phi\}$ of joint distributions, then the effect of $\psi_\theta$ will be different; that is, the joint behavior of $Y_\phi(s)$ and $\psi_\theta(X_\phi)(s)$ will be governed by $\{F_\phi\}$, and there is no reason to expect that $\psi_\theta$ will continue to provide good estimation, now in the sense of MSE relative to $\{F_\phi\}$. The salient point, as we proceed, is that a filter is applied in the same way to any observed signal, but its performance is evaluated for each $\theta \in \Theta$ according to the probability model for $\theta$.
4.1 Intrinsically Bayesian Robust Filters

From a general perspective, signal filtering involves a joint random process $(X(t), Y(s))$, $t \in T$, $s \in S$, and optimal filtering involves estimating a signal $Y(s)$ at time $s$ via a filter $\psi$ given observations $\{X(t)\}_{t \in T}$. Optimization is relative to a filter family $\mathcal{F}$, where a filter $\psi \in \mathcal{F}$ is a mapping on the space of possible observed signals. Performance measurement is relative to a cost function $C(Y(s), \hat{Y}(s))$, quantifying the cost (error) in estimating signal $Y(s)$ with $\hat{Y}(s) = \psi(X)(s)$. Keeping in mind Eq. 3.169, for a fixed $s \in S$, an optimal filter is defined by

$$\psi_{\mathrm{opt}} = \arg\min_{\psi \in \mathcal{F}} C(Y(s), \psi(X)(s)). \tag{4.1}$$

For instance, the optimal estimator of Eq. 3.107 may be expressed via Eq. 4.1, with $\mathcal{F}$ consisting of all operators of the form given in Eq. 3.105 and $C = \mathrm{MSE}$, as defined in Eq. 3.106.

When we lack full specificity as to the parameters of the actual model, there is an uncertainty class of joint random processes $(X_\theta(t), Y_\theta(s))$, $t \in T$, $s \in S$, with the parameter vector $\theta \in \Theta$. There is a cost function $C$ and a filter family $\mathcal{F}$. An intrinsically Bayesian robust (IBR) filter on $\{(X_\theta, Y_\theta) : \theta \in \Theta\}$ is defined by

$$\psi_{\mathrm{IBR}} = \arg\min_{\psi \in \mathcal{F}} E_\Theta[C(Y_\theta(s), \psi(X_\theta)(s))], \tag{4.2}$$

where the expectation is with respect to a prior probability distribution $\pi(\theta)$ on $\Theta$ (Dalton and Dougherty, 2014). An IBR filter is always relative to an uncertainty class $\Theta$ and a prior distribution over $\Theta$. Normally, $\Theta$ is clear from the context, but should we want to make it explicit, we write $\psi^{\Theta}_{\mathrm{IBR}}$. An IBR filter is robust in the sense that on average it performs well over the whole uncertainty class. Since each parameter vector $\theta \in \Theta$ corresponds to a model $(X_\theta, Y_\theta)$, $\pi(\theta)$ quantifies our prior knowledge that some models are more likely to be the actual model than are others. If there is no prior knowledge beyond the uncertainty class itself, then the prior distribution is taken to be uniform, meaning that all models are assumed to be equally likely, and $\pi(\theta)$ is said to be noninformative. The key to finding IBR filters is to
develop a theory by which Eq. 4.2 can be solved similarly to Eq. 4.1, except that instead of using characteristics of a single signal model, one uses effective characteristics, which pertain to the full uncertainty class.

4.1.1 Effective characteristics

The IBR filter theory rests on several definitions and two theorems whose proofs follow immediately from the definitions.

Definition 4.1. An observation–signal pair $(X(t), Y(s))$ is solvable under cost $C$ and function class $\mathcal{F}$ if there exists a solution to Eq. 4.1 under the processes, that is, if there exists a function $\psi_{\mathrm{opt}} \in \mathcal{F}$ minimizing $C(Y(s), \psi(X)(s))$ over all $\psi \in \mathcal{F}$.

Definition 4.2. An observation–signal pair $(X_\Theta(t), Y_\Theta(s))$ is an effective process under cost $C$, function class $\mathcal{F}$, and uncertainty class $\Theta$ if, for all $\psi \in \mathcal{F}$, both $E_\Theta[C(Y_\theta(s), \psi(X_\theta)(s))]$ and $C(Y_\Theta(s), \psi(X_\Theta)(s))$ exist, and

$$E_\Theta[C(Y_\theta(s), \psi(X_\theta)(s))] = C(Y_\Theta(s), \psi(X_\Theta)(s)). \tag{4.3}$$
Theorem 4.1. If there exists a solvable effective process $(X_\Theta(t), Y_\Theta(s))$ with optimal filter $\psi_\Theta$, then $\psi_{\mathrm{IBR}} = \psi_\Theta$.

Proof. The proof follows directly from Eq. 4.3:

$$
\begin{aligned}
\psi_{\mathrm{IBR}} &= \arg\min_{\psi \in \mathcal{F}} E_\Theta[C(Y_\theta(s), \psi(X_\theta)(s))] \\
&= \arg\min_{\psi \in \mathcal{F}} C(Y_\Theta(s), \psi(X_\Theta)(s)) \\
&= \psi_\Theta.
\end{aligned}
\tag{4.4}
$$
▪
The effective process need not be in the uncertainty class. In instances where an optimal filter is derived via characteristics, finding an IBR filter via an effective process can be relaxed to finding an IBR filter via effective characteristics. A salient point is that when characteristics are employed, filter error can often be expressed in the form of a cost functional $G(v, \kappa)$, where $v$ refers to the characteristics of $(X(t), Y(s))$ and $\kappa$ refers to the filter parameters, as with optimal linear filtering, where $v$ corresponds to the autocorrelation and cross-correlation functions, and $\kappa$ to the filter weighting function.

Definition 4.3. A class $\Lambda$ of process pairs $(X_\lambda(t), Y_\lambda(s))$ is reducible under cost $C$ and function class $\mathcal{F}$ if there exists a cost functional $G$ such that for each $\lambda \in \Lambda$ and $\psi \in \mathcal{F}$,

$$C(Y_\lambda(s), \psi(X_\lambda)(s)) = G(v_\lambda, \kappa_\psi), \tag{4.5}$$
where $v_\lambda$ is a collection of process characteristics and $\kappa_\psi$ represents parameters for filter $\psi$.

Definition 4.4. A collection of characteristics $v$ is solvable in the weak sense under cost functional $G$ and function class $\mathcal{F}$ if there exists a solution to

$$\psi_{G,\mathcal{F}} = \arg\min_{\psi \in \mathcal{F}} G(v, \kappa_\psi). \tag{4.6}$$
Given a set of characteristics $v$ that are solvable in the weak sense, there is an optimal filter $\psi_{\mathcal{F},G,v}$ possessing a cost functional $G(v, \kappa_{\psi_{\mathcal{F},G,v}})$.

Definition 4.5. Let $\Theta$ be an uncertainty class of process pairs contained in a reducible class. The characteristics $v_\Theta$ constitute an effective characteristic in the weak sense under cost functional $G$, function class $\mathcal{F}$, and uncertainty class $\Theta$ if, for all $\psi \in \mathcal{F}$, both $E_\Theta[G(v_\theta, \kappa_\psi)]$ and $G(v_\Theta, \kappa_\psi)$ exist, and

$$E_\Theta[G(v_\theta, \kappa_\psi)] = G(v_\Theta, \kappa_\psi). \tag{4.7}$$
Theorem 4.2. Let $\Theta$ be an uncertainty class of process pairs contained in a reducible class. If there exist weak-sense solvable, weak-sense effective characteristics $v_\Theta$ with optimal filter $\psi_\Theta$, then $\psi_{\mathrm{IBR}} = \psi_\Theta$.

Proof. The proof follows directly from Eq. 4.5:

$$
\begin{aligned}
\psi_{\mathrm{IBR}} &= \arg\min_{\psi \in \mathcal{F}} E_\Theta[C(Y_\theta(s), \psi(X_\theta)(s))] \\
&= \arg\min_{\psi \in \mathcal{F}} E_\Theta[G(v_\theta, \kappa_\psi)] \\
&= \arg\min_{\psi \in \mathcal{F}} G(v_\Theta, \kappa_\psi) \\
&= \psi_\Theta,
\end{aligned}
\tag{4.8}
$$

where $\psi_\Theta = \psi_{\mathcal{F},G,v_\Theta}$. ▪
If there exists an effective process providing the effective characteristics, we say that these are effective in the strong sense; otherwise, we say that they are effective in the weak sense. In either case, under the conditions of the appropriate theorem, the optimal filter relative to the effective process or characteristics is the IBR filter, and we can interchangeably write cIBR or cU. While it might be theoretically useful to have an effective process, when filter optimization is achieved via the characteristics, weak-sense effective characteristics are sufficient. Indeed, the procedure for finding an effective process may involve putting the error expression into a desired form so that effective characteristics of the process emerge, and then determining the existence of a process possessing the effective characteristics.
4.1.2 IBR linear filtering

Consider an uncertain signal model (X_u, Y_u) parameterized by u \in U with the MSE cost function and function class

\mathcal{F} = \left\{ c : c(X)(s) = \int_T g(s, t) X(t)\,dt \right\}.    (4.9)
The solvable class consists of all process pairs (X, Y) such that c(X)(s) has a finite second moment for any square-integrable kernel g(s, t) and there exists \hat{g}(s, t) for which the Wiener–Hopf equation is satisfied. Define the effective correlation functions by

R_{U,Y}(s, s) = E_U[R_{Y_u}(s, s)],    (4.10)

R_{U,X}(t, u) = E_U[R_{X_u}(t, u)],    (4.11)

R_{U,YX}(s, t) = E_U[R_{Y_u X_u}(s, t)].    (4.12)
As an autocorrelation function, R_{X_u}(t, u) is conjugate symmetric and nonnegative definite for all u \in U. It is straightforward to show that R_{U,X}(t, u) has the same properties and is therefore also a valid autocorrelation function. Thus, there exists an effective zero-mean Gaussian process X_U with autocorrelation function R_{U,X}(t, u). Similar reasoning shows that there exists a zero-mean Gaussian process Y_U with autocorrelation R_{U,Y}(s, s) at s and cross-correlation R_{U,YX}(s, t) at s for all t \in T. In the robust model, the error is given by

E_U\left[ E\left[ \Big| Y_u(s) - \int_T g(s, t) X_u(t)\,dt \Big|^2 \,\Big|\, u \right] \right]
= R_{U,Y}(s, s) - \int_T R_{U,YX}(s, t)\,\overline{g(s, t)}\,dt - \int_T \overline{R_{U,YX}(s, t)}\,g(s, t)\,dt    (4.13)
  + \int_T \int_T R_{U,X}(t, u)\,g(s, t)\,\overline{g(s, u)}\,dt\,du
= E[|Y_U(s) - c(X_U)(s)|^2].

Thus, Eq. 4.3 is satisfied. If (X_U, Y_U) lies in the solvable class, meaning that the Wiener–Hopf equation relative to (X_U, Y_U) is satisfied, then (X_U, Y_U) is an effective process and, according to Theorem 4.1, an IBR linear filter is given by the solution \hat{g}(s, t) to

R_{U,YX}(s, t) = \int_T \hat{g}(s, u)\,R_{U,X}(u, t)\,du,    (4.14)

which we call the effective Wiener–Hopf equation. In the present circumstances, the MSE can be written in the form G(v, k), where v = \{R_X(t, u), R_{YX}(s, t)\} and k = \{g(s, t)\}. In the robust setting,
v_U = \{R_{U,X}(t, u), R_{U,YX}(s, t)\} provides the effective characteristics, so that R_{U,X}(t, u) and R_{U,YX}(s, t) are effective auto- and cross-correlations relative to uncertainty class U in the strong sense. All basic equations for characteristics hold except that all characteristics are replaced by the effective characteristics R_{U,Y}, R_{U,X}, and R_{U,YX}. To view the matter in the framework of canonical expansions, in the effective setting the three necessary and sufficient conditions for a canonical expansion take the form

x_U(t, \xi) = \frac{1}{I_U(\xi)} \int_T a(s, \xi)\,R_{U,X}(t, s)\,ds,    (4.15)

\int_T a(t, \xi)\,x_U(t, \xi')\,dt = \delta(\xi - \xi'),    (4.16)

\int_\Xi x_U(t, \xi)\,a(t', \xi)\,d\xi = \delta(t - t'),    (4.17)

where the intensity of the white noise is given by

I_U(\xi) = \int_\Xi \int_T \int_T R_{U,X}(t, t')\,a(t, \xi)\,a(t', \xi')\,dt\,dt'\,d\xi'.    (4.18)
If the three conditions hold and the Wiener–Hopf equation is satisfied for the effective process, then, according to Eq. 3.131, the IBR filter can be found as

g_{IBR}(s, t) = \int_\Xi \frac{a(t, \xi)}{I_U(\xi)} \int_T R_{U,YX}(s, u)\,a(u, \xi)\,du\,d\xi.    (4.19)
According to Eq. 3.132, the estimate obtained by applying the IBR filter to process X_u is given by

\hat{Y}_{IBR}(s) = c_{IBR}(X_u)(s)
              = \int_T g_{IBR}(s, t)\,X_u(t)\,dt    (4.20)
              = \int_\Xi \frac{Z_u(\xi)\,R_{U,YZ}(s, \xi)}{I_U(\xi)}\,d\xi,

where

R_{U,YZ}(s, \xi) = E_U[R_{Y_u Z_u}(s, \xi)].    (4.21)
In a manner similar to the derivation of the error expression of Eq. 3.133, inserting the expression for g_{IBR} in Eq. 4.19 into Eq. 3.116 and then using Eq. 3.130 twice gives

E[|Y_u(s) - \hat{Y}_{IBR}(s)|^2] = R_{Y_u}(s, s) - \int_T R_{Y_u X_u}(s, u) \int_\Xi \frac{a(u, \xi)}{I_U(\xi)} \int_T R_{U,YX}(s, v)\,a(v, \xi)\,dv\,d\xi\,du    (4.22)
= R_{Y_u}(s, s) - \int_\Xi \frac{R_{U,YZ}(s, \xi)\,\overline{R}_{Y_u Z_u}(s, \xi)}{I_U(\xi)}\,d\xi.

Taking the expectation over U yields the MSE:

E_U[E[|Y_u(s) - \hat{Y}_{IBR}(s)|^2 \,|\, u]] = R_{U,Y}(s, s) - \int_\Xi \frac{|R_{U,YZ}(s, \xi)|^2}{I_U(\xi)}\,d\xi.    (4.23)
To apply the error representation of Eq. 3.133 to WS stationary processes, note that R_Y(s, s) = r_Y(0), I(\xi) = 2\pi S_X(\omega), and the right-hand side of Eq. 3.130 reduces to e^{-j\omega s} S_{YX}(\omega). Hence, the error of the Wiener filter can be expressed in terms of power spectra by

E[|Y(s) - \hat{Y}(s)|^2] = r_Y(0) - \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{|S_{YX}(\omega)|^2}{S_X(\omega)}\,d\omega
= \frac{1}{2\pi} \int_{-\infty}^{\infty} S_Y(\omega)\,d\omega - \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{|S_{YX}(\omega)|^2}{S_X(\omega)}\,d\omega    (4.24)
= \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{S_Y(\omega) S_X(\omega) - |S_{YX}(\omega)|^2}{S_X(\omega)}\,d\omega,

where in the second equality we have applied the inverse Fourier transform at \tau = 0. For WS stationary processes in the presence of uncertainty, the effective correlation functions are r_{U,Y}(\tau) = E_U[r_{Y_u}(\tau)], r_{U,X}(\tau) = E_U[r_{X_u}(\tau)], and r_{U,YX}(\tau) = E_U[r_{Y_u X_u}(\tau)]. The Fourier transforms of r_{U,X}(\tau), r_{U,Y}(\tau), and r_{U,YX}(\tau) are called the effective power spectra and denoted by S_{U,X}(\omega), S_{U,Y}(\omega), and S_{U,YX}(\omega), respectively. The effective Wiener–Hopf equation is the same as Eq. 3.117 except that the correlation functions are replaced by effective correlation functions. It is solved in the same way to produce the IBR Wiener filter:

\hat{G}_{IBR}(\omega) = \frac{S_{U,YX}(\omega)}{S_{U,X}(\omega)}.    (4.25)
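On a discrete frequency grid, Eq. 4.25 amounts to averaging the state spectra against the prior and taking a ratio. A minimal Python sketch (the discretization, two-state uncertainty class, and prior weights are illustrative assumptions, not from the text):

```python
import numpy as np

def ibr_wiener(S_X_states, S_YX_states, prior):
    """IBR Wiener filter on a frequency grid (sketch of Eq. 4.25).

    S_X_states, S_YX_states: shape (m, n) arrays holding S_{X_u}(w) and
    S_{Y_u X_u}(w) for m states on n frequencies; prior: length-m weights.
    """
    p = np.asarray(prior, dtype=float)
    S_UX = p @ np.asarray(S_X_states)    # effective power spectrum S_{U,X}
    S_UYX = p @ np.asarray(S_YX_states)  # effective cross spectrum S_{U,YX}
    return S_UYX / S_UX

# Two-state uncertainty class on a two-point frequency grid.
G = ibr_wiener([[2.0, 4.0], [4.0, 8.0]], [[1.0, 2.0], [3.0, 4.0]], [0.5, 0.5])
```

Note that the ratio of averages differs from an average of the per-state ratios S_{Y_u X_u}/S_{X_u}: the IBR filter is built from effective spectra, not by averaging model-specific filters.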
Randomly convolved signal-plus-noise model
We consider a linear model with a parameter u, where the signal Y_u is convolved with a random process h_u plus a process N_u:

X_u(t) = \int_{-\infty}^{\infty} h_u(y)\,Y_u(t - y)\,dy + N_u(t),    (4.26)
where for fixed u we assume that h_u, Y_u, and N_u are all uncorrelated processes, and Y_u and N_u are zero-mean processes. Assuming that Y_u and N_u are both WS stationary, for fixed u, the situation is similar to Eq. 3.120, except here h_u(y) is a random function. Throughout the following calculations we assume that all interchanges of the Fourier transform \mathcal{F}, expectation E, and integration are justified. To begin, the cross-correlation function for Y_u(t) and X_u(t) is given by

r_{Y_u X_u}(\tau) = E\left[ Y_u(t + \tau) \int_{-\infty}^{\infty} h_u(y)\,Y_u(t - y)\,dy \right] + E[Y_u(t + \tau)\,N_u(t)]
= \int_{-\infty}^{\infty} E_h[h_u(y)]\,E[Y_u(t + \tau)\,Y_u(t - y)]\,dy
= \int_{-\infty}^{\infty} E_h[h_u(y)]\,r_{Y_u}(\tau + y)\,dy    (4.27)
= \int_{-\infty}^{\infty} r_{Y_u}(\tau - y)\,E_h[h_u(-y)]\,dy
= (r_{Y_u} * h_1)(\tau),

where E_h denotes expectation relative to the distribution of h, h_1(y) = E_h[h_u(-y)], and the second equality follows from the uncorrelatedness of h_u and Y_u. Taking the Fourier transform of this convolution product yields

S_{Y_u X_u}(\omega) = \mathcal{F}[E_h[h_u(-t) \,|\, u]](\omega)\,S_{Y_u}(\omega) = \overline{E_h[H_u(\omega) \,|\, u]}\,S_{Y_u}(\omega),    (4.28)

where h_u(t) has Fourier transform H_u(\omega). From the uncorrelatedness of h_u, Y_u, and N_u,

R_{X_u}(t, t') = E\left[ \int_{-\infty}^{\infty} h_u(y)\,Y_u(t - y)\,dy \int_{-\infty}^{\infty} h_u(\xi)\,Y_u(t' - \xi)\,d\xi \right] + E[N_u(t)\,N_u(t')]
= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} E_h[h_u(y)\,h_u(\xi)]\,E[Y_u(t - y)\,Y_u(t' - \xi)]\,dy\,d\xi + r_{N_u}(t - t')
= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} R_{h_u}(y, \xi)\,R_{Y_u}(t - y, t' - \xi)\,dy\,d\xi + r_{N_u}(t - t')
= \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} R_{h_u}(y, \xi)\,r_{Y_u}(t - t' - y + \xi)\,dy\,d\xi + r_{N_u}(t - t')    (4.29)
= \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} R_{h_u}(y, y - w)\,dy \right] r_{Y_u}(t - t' - w)\,dw + r_{N_u}(t - t')
= (h_2 * r_{Y_u})(\tau) + r_{N_u}(\tau),
where, in the fifth equality, w = y - \xi; in the sixth equality, \tau = t - t'; and

h_2(w) = \int_{-\infty}^{\infty} R_{h_u}(y, y - w)\,dy.    (4.30)
The Fourier transform of h_2(\tau) is

\mathcal{F}\left[ \int_{-\infty}^{\infty} R_{h_u}(y, y - w)\,dy \right](\omega) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} E_h[h_u(y)\,h_u(y - w)]\,e^{-j\omega w}\,dy\,dw
= E_h\left[ \int_{-\infty}^{\infty} h_u(y)\,e^{-j\omega y}\,dy \int_{-\infty}^{\infty} h_u(z)\,e^{j\omega z}\,dz \right]    (4.31)
= E_h[H_u(\omega)\,\overline{H_u(\omega)}]
= E_h[|H_u(\omega)|^2].
Taking the Fourier transform of Eq. 4.29 and using Eq. 4.31 gives

S_{X_u}(\omega) = E_h[|H_u(\omega)|^2]\,S_{Y_u}(\omega) + S_{N_u}(\omega).    (4.32)

This in conjunction with Eq. 4.28 gives the Wiener filter:

\hat{G}_u(\omega) = \frac{\overline{E_h[H_u(\omega)]}\,S_{Y_u}(\omega)}{E_h[|H_u(\omega)|^2]\,S_{Y_u}(\omega) + S_{N_u}(\omega)}.    (4.33)
If H_u(\omega) is a fixed nonrandom function, then the Wiener filter is

\hat{G}_u(\omega) = \frac{\overline{H_u(\omega)}\,S_{Y_u}(\omega)}{|H_u(\omega)|^2\,S_{Y_u}(\omega) + S_{N_u}(\omega)},    (4.34)
which is simply Eq. 3.121 parameterized by u. To obtain the IBR Wiener filter, we need the effective characteristics. Assume that r_{h_u}(\tau) and r_{Y_u}(\tau') are uncorrelated relative to U for any \tau and \tau'. The effective correlation functions are R_{U,h}(u, t) = E_U[R_{h_u}(u, t)], R_{U,Y}(u, t) = E_U[R_{Y_u}(u, t)], and R_{U,N}(u, t) = E_U[R_{N_u}(u, t)]. Because Y_u and N_u are both WS stationary, R_{U,Y}(u, t) = r_{U,Y}(u - t), R_{U,N}(u, t) = r_{U,N}(u - t), R_{U,X}(u, t) = r_{U,X}(u - t), and R_{U,YX}(s, t) = r_{U,YX}(s - t). From the fourth equality in Eq. 4.27,

r_{U,YX}(\tau) = E_U\left[ \int_{-\infty}^{\infty} r_{Y_u}(\tau - y)\,E_h[h_u(-y)]\,dy \right]    (4.35)
= \int_{-\infty}^{\infty} E_{U,h}[h_u(y)]\,r_{U,Y}(\tau + y)\,dy,

where E_{U,h} = E_U E_h is the total expectation over the randomness of both h_u and u. From the fifth equality of Eq. 4.29,
r_{U,X}(\tau) = E_U\left[ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} R_{h_u}(y, y - w)\,r_{Y_u}(\tau - w)\,dw\,dy + r_{N_u}(\tau) \right]    (4.36)
= \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} R_{U,h}(y, y - w)\,dy \right] r_{U,Y}(\tau - w)\,dw + r_{U,N}(\tau).
The effective power spectra are

S_{U,X}(\omega) = \mathcal{F}\left[ \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} R_{U,h}(n, n - w)\,dn \right] r_{U,Y}(\tau - w)\,dw \right](\omega) + S_{U,N}(\omega)
= \mathcal{F}\left[ \int_{-\infty}^{\infty} R_{U,h}(n, n - \tau)\,dn \right](\omega)\,S_{U,Y}(\omega) + S_{U,N}(\omega)
= E_{U,h}\left[ \mathcal{F}\left[ \int_{-\infty}^{\infty} h_u(n)\,h_u(n - \tau)\,dn \right](\omega) \right] S_{U,Y}(\omega) + S_{U,N}(\omega)    (4.37)
= E_{U,h}[|H_u(\omega)|^2]\,S_{U,Y}(\omega) + S_{U,N}(\omega),

and

S_{U,YX}(\omega) = \mathcal{F}\left[ \int_{-\infty}^{\infty} E_{U,h}[h_u(y)]\,r_{U,Y}(\tau + y)\,dy \right](\omega)
= \mathcal{F}[E_{U,h}[h_u(-\tau)]](\omega)\,S_{U,Y}(\omega)    (4.38)
= \overline{E_{U,h}[H_u(\omega)]}\,S_{U,Y}(\omega).

Hence, the IBR Wiener filter is

\hat{G}_{IBR}(\omega) = \frac{\overline{E_{U,h}[H_u(\omega)]}\,S_{U,Y}(\omega)}{E_{U,h}[|H_u(\omega)|^2]\,S_{U,Y}(\omega) + S_{U,N}(\omega)}.    (4.39)
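With samples of the random transfer function H and the effective spectra in hand, Eq. 4.39 reduces to two weighted averages. A sketch (the sample values and weights are hypothetical; the conjugate in the numerator follows the Wiener deconvolution form of Eq. 4.34):

```python
import numpy as np

def ibr_deconv_filter(H_states, weights, S_UY, S_UN):
    """Sketch of Eq. 4.39: IBR Wiener filter for the randomly convolved model.

    H_states: (m, n) complex array of H_u(w) samples over the joint (u, h)
    randomness; weights: length-m probabilities; S_UY, S_UN: effective
    signal and noise spectra on n frequencies.
    """
    w = np.asarray(weights, dtype=float)
    H = np.asarray(H_states, dtype=complex)
    EH = w @ H                       # E_{U,h}[H_u(w)]
    EH2 = w @ (np.abs(H) ** 2)       # E_{U,h}[|H_u(w)|^2]
    return np.conj(EH) * S_UY / (EH2 * S_UY + S_UN)

# Degenerate case: a deterministic H reduces to Eq. 4.41.
G = ibr_deconv_filter([[1.0 + 0j]], [1.0], np.array([1.0]), np.array([1.0]))
```

Since E_{U,h}[|H|^2] >= |E_{U,h}[H]|^2, randomness in h inflates the denominator and attenuates the filter relative to the deterministic case.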
Based on Eq. 4.24, the MSE of the Wiener filter is

E_U[E[|Y(s) - \hat{Y}(s)|^2 \,|\, u]]
= \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{\left( E_{U,h}[|H_u(\omega)|^2] - |E_{U,h}[H_u(\omega)]|^2 \right) |S_{U,Y}(\omega)|^2 + S_{U,Y}(\omega)\,S_{U,N}(\omega)}{E_{U,h}[|H_u(\omega)|^2]\,S_{U,Y}(\omega) + S_{U,N}(\omega)}\,d\omega.    (4.40)

If h_u = h is not a random function, then the IBR Wiener filter is given by

\hat{G}_{IBR}(\omega) = \frac{\overline{H(\omega)}\,S_{U,Y}(\omega)}{|H(\omega)|^2\,S_{U,Y}(\omega) + S_{U,N}(\omega)},    (4.41)

with MSE
E_U[E[|Y(s) - \hat{Y}(s)|^2 \,|\, u]] = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{S_{U,Y}(\omega)\,S_{U,N}(\omega)}{|H(\omega)|^2\,S_{U,Y}(\omega) + S_{U,N}(\omega)}\,d\omega.    (4.42)
For a convolution alone (no noise), the inverse filter is optimal and has zero MSE. In the case of only additive noise [H(\omega) = 1], where the effective noise is white with variance E_U[\sigma_u^2], the IBR Wiener filter is

\hat{G}_{IBR}(\omega) = \frac{S_{U,Y}(\omega)}{S_{U,Y}(\omega) + E_U[\sigma_u^2]},    (4.43)

and the corresponding MSE is

E_U[E[|Y(s) - \hat{Y}(s)|^2 \,|\, u]] = \frac{E_U[\sigma_u^2]}{2\pi} \int_{-\infty}^{\infty} \hat{G}_{IBR}(\omega)\,d\omega.    (4.44)
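The additive-white-noise case of Eqs. 4.43 and 4.44 is easy to check numerically on a truncated frequency grid; the effective spectrum and noise variance below are illustrative assumptions:

```python
import numpy as np

# Hypothetical effective signal spectrum on a frequency grid, and E_U[sigma_u^2].
w = np.linspace(-10.0, 10.0, 2001)
S_UY = 1.0 / (1.0 + w ** 2)   # assumed effective signal spectrum
var_eff = 0.5                  # assumed effective white-noise variance

G = S_UY / (S_UY + var_eff)                                     # Eq. 4.43
dw = w[1] - w[0]
mse = var_eff / (2 * np.pi) * np.sum((G[1:] + G[:-1]) / 2) * dw  # Eq. 4.44, trapezoid rule
```

The integral is over a truncated grid, so the computed MSE slightly underestimates the infinite-range value.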
An example of IBR Wiener filtering will be given at the end of Section 4.3 when we compare several types of filtering methods in the presence of uncertainty.

4.1.3 IBR granulometric bandpass filters

Let us reconsider optimal granulometric bandpass filters under the assumption that the random set belongs to an uncertainty class U, in which case we write M_u and H_u for the mean and granulometric size densities, respectively, for the granulometry \{C_t\}. Assuming absolute continuity for the MSDs for the signal and noise, Eq. 3.154 applies, and the GBF \Xi with pass set \Pi has expected error

E_U[\varepsilon_u[\Xi]] = E_U\left[ \int_{\Pi^c} H_{u,S}(t)\,dt + \int_\Pi H_{u,N}(t)\,dt \right]    (4.45)
= \int_{\Pi^c} E_U[H_{u,S}(t)]\,dt + \int_\Pi E_U[H_{u,N}(t)]\,dt.
The class of compact sets is reducible under the expected volume of the symmetric-difference cost and the class of GBFs since, for any GBF \Xi, this error can be expressed in the form G(v, \Pi), where v = \{H_{u,S}(t), H_{u,N}(t)\}. The effective granulometric size densities are H_{U,S}(t) = E_U[H_{u,S}](t) and H_{U,N}(t) = E_U[H_{u,N}](t) in the weak sense and are solvable in the weak sense via Eq. 3.155. By Theorem 4.2, the pass set

\Pi\langle C_U \rangle = \{ t : E_U[H_{u,S}(t)] \ge E_U[H_{u,N}(t)] \}    (4.46)
defines an IBR granulometric bandpass filter.

Since H_u(t) = \frac{d}{dt} M_u(t), effective MSDs can be obtained if we can interchange expectation with differentiation. To do this, M_u must be mean-square differentiable with mean-square derivative H_u. This depends on the distribution of the random set. According to Theorem 1.1, there are two necessary and sufficient conditions for mean-square differentiability:
E_U[H_u(t)] = \frac{d}{dt} E_U[M_u(t)],    (4.47)

K_{H_u}(u, v) = \frac{\partial^2}{\partial u\,\partial v} K_{M_u}(u, v).    (4.48)
Assuming that these conditions hold, Eq. 4.45 becomes

E_U[\varepsilon_u[\Xi]] = \int_{\Pi^c} \frac{d}{dt} E_U[M_{u,S}(t)]\,dt + \int_\Pi \frac{d}{dt} E_U[M_{u,N}(t)]\,dt,    (4.49)
and we obtain weak-sense effective MSDs

M_{U,S}(t) = E_U[M_{u,S}(t)],    (4.50)

M_{U,N}(t) = E_U[M_{u,N}(t)],    (4.51)

and the pass set becomes

\Pi\langle C_U \rangle = \left\{ t : \frac{d}{dt} E_U[M_{u,S}(t)] \ge \frac{d}{dt} E_U[M_{u,N}(t)] \right\}.    (4.52)
In the case of Wiener filters, to apply Theorem 4.1 and have effective auto- and cross-correlation functions in the strong sense, we applied the well-known theorem that there exists a Gaussian process possessing the given second-order moments. In the present setting the issue is whether, given E_U[M_u], there exists a random set having MSD equal to E_U[M_u]. If so, then we can select effective signal and noise processes possessing the preceding effective MSDs and GSDs, and these would be effective in the strong sense. Specifically, does there exist a theorem for the class of granulometries given in Eq. 3.149 stating that, if \xi(t) is a monotonically increasing differentiable function on [0, \infty) with \xi(0) = 0, then there exists a GBF with MSD \xi(t)? No such proposition is known; however, one can obtain effective random set processes as special cases. For a geometrically intuitive example, consider a granulometry generated by a single opening with a horizontal-line structuring element and \xi(t). Suppose that a random set has a single grain, h(X) is the identity, X depends only on W, and

\xi(t) = \int_0^t \nu[X(w)]\,dw.    (4.53)

Then, according to Eq. 3.162, M(t) = \xi(t). The desired random set results by defining the grain at w to be a rectangle with its base parallel to the x axis, base length w, and height \xi'(w)/w, where base length 0 means that the grain is a vertical line because then \nu[X](w) = \xi'(w).

Example 4.1. Let C_t be an opening by a disk of radius t and \Lambda_t be the induced reconstructive filter. Consider a signal consisting of squares with random angle of rotation D and random radius R, so that X = X(R, D). Then
h_X(R, D) = R and H_S(t) = m_S 4t^2 f_R(t), where m_S is the expected number of signal squares and f_R is the density of R. Let the noise consist of ellipses having random angle of rotation, random minor axis 2W, and random major axis 4W. With C_t being an opening by a disk of radius t, H_N(t) = m_N 2\pi t^2 f_W(t), where f_W is the density of W. Assume that m_S = m_N, R is normally distributed with mean 20 and variance 9, and W is normally distributed with mean u and variance 4. Figures 4.1(a) and (e) show realizations of S \cup N for u = 10 and u = 30, respectively. When u = 10, the optimal model-specific pass set consists of all t \in (14.504, \infty); when u = 30, the optimal pass set consists of t \in [0, 25.496]. We assume that u is distributed as a mixture of two Gaussian distributions, one having mean 10, variance 5, and mass 3/4, and the other having mean 30, variance 5, and mass 1/4. Letting f(t; m, \sigma^2) be a normal density with mean m and variance \sigma^2, the effective characteristics are H_{U,S}(t) = m_S 4t^2 f(t; 20, 9) and

H_{U,N}(t) = E_U[m_N 2\pi t^2 f_{W|u}(t)]
= m_N 2\pi t^2 E_U[f(t; u, 4)]
= m_N 2\pi t^2 \int_{-\infty}^{\infty} f(t; u, 4) \left[ \frac{3}{4} f(u; 10, 5) + \frac{1}{4} f(u; 30, 5) \right] du
= m_N 2\pi t^2 \left[ \frac{3}{4} f(t; 10, 9) + \frac{1}{4} f(t; 30, 9) \right].
Figure 4.1 Original and filtered images consisting of signal grains (squares) and noise grains (ellipses). Black grains are passed, and light gray grains are removed. (a) Original image 1, where u ¼ 10; (b) robust filter applied to image 1; (c) optimal filter for image 1 applied to image 1; (d) optimal filter for image 2 applied to image 1; (e) original image 2, where u ¼ 30; (f) robust filter applied to image 2; (g) optimal filter for image 1 applied to image 2; (h) optimal filter for image 2 applied to image 2. [Reprinted from (Dalton and Dougherty, 2014).]
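The pass-set endpoints 15.148 and 25.841 quoted in this example can be reproduced numerically from H_{U,S}(t) \ge H_{U,N}(t): with m_S = m_N the common positive factor involving t^2 cancels, leaving f(t; 20, 9) \ge (\pi/2)[(3/4) f(t; 10, 9) + (1/4) f(t; 30, 9)]. A stdlib-only sketch:

```python
import math

def npdf(x, mean, var):
    """Normal density f(x; mean, var)."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def excess(t):
    """H_{U,S}(t) - H_{U,N}(t) up to the common positive factor m_S 4 t^2."""
    return npdf(t, 20, 9) - (math.pi / 2) * (0.75 * npdf(t, 10, 9) + 0.25 * npdf(t, 30, 9))

def root(f, a, b, tol=1e-8):
    """Bisection for a sign change of f on [a, b]."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if (f(m) > 0) == (fa > 0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

lower = root(excess, 14.0, 20.0)  # left edge of the robust pass set
upper = root(excess, 20.0, 28.0)  # right edge of the robust pass set
```

The two sign changes bracket the single passband [15.148, 25.841] reported in the text.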
Figure 4.2 Scaled effective GSDs for signal (solid curve) and noise (dashed curve). [Reprinted from (Dalton and Dougherty, 2014).]
Scaled effective GSDs are depicted in Fig. 4.2. The optimal robust pass set is determined by

f(t; 20, 9) \ge \frac{\pi}{2} \left[ \frac{3}{4} f(t; 10, 9) + \frac{1}{4} f(t; 30, 9) \right].

Referring to the figure, the pass set consists of all t \in [15.148, 25.841]. The optimal robust GBF has a single passband and is defined by

\Xi_\Lambda(S \cup N) = \Lambda_{15.148}(S \cup N) - \Lambda_{25.841}(S \cup N).

Figure 4.1 shows filtered versions of images 1 and 2, where black grains are passed by the filter and gray grains are removed. Figure 4.1(b) shows image 1 filtered using the optimal robust filter \Xi_\Lambda(S \cup N), having pass set t \in [15.148, 25.841]; Fig. 4.1(c) shows image 1 filtered using the model-specific optimal filter for this image, having pass set t \in (14.504, \infty); and Fig. 4.1(d) shows image 1 filtered using the model-specific optimal filter for image 2, having pass set t \in [0, 25.496]. Image 2 is filtered analogously in Figs. 4.1(f) through (h). We observe that model-specific filters perform very poorly in nondesign conditions, but the robust filter performs quite well in both cases. ▪

4.1.4 The general schema under uncertainty

Use of effective characteristics provides a general methodology for finding intrinsically Bayesian robust filters. The method requires a standard optimization solution involving characteristics. This approach has strong historical roots in spectral theory. Its extension to morphological filtering shows that optimization via characteristics can be applied whenever the signal
model, cost function, and filter representation can be brought into a suitable coherent structure. Once an optimization problem is solved via characteristics, finding effective characteristics, either in the strong or weak sense, extends the solution to the robust setting. The procedure extends the general schema of Section 3.8 to uncertainty classes. The following steps supplement the four in Section 3.8 to produce the IBR operator synthesis schema:

5. Identify the uncertainty class.
6. Construct a prior distribution.
7. State the IBR optimization problem.
8. Construct the appropriate effective characteristics.
9. Prove that the IBR optimization problem is solved by replacing the model characteristics with the effective characteristics.
In the particular case of Wiener filtering, the following five steps are adjoined to the four in Section 3.8:

5. The uncertainty class is defined in terms of the uncertain parameters in the autocorrelation and cross-correlation functions.
6. A prior distribution is constructed for these parameters.
7. Perform IBR optimization by minimizing the expected MSE.
8. The effective characteristics are the effective power spectra.
9. Prove that the IBR optimization problem is solved by replacing the model characteristics with the effective characteristics.

The fundamental part of the protocol is the last step: find conditions under which the IBR optimization problem is solved by replacing the characteristics in the ordinary solution with effective characteristics, and prove it.

We close this section by noting that Bayesian uncertainty models have been around for some time (Bernardo and Smith, 2001; Clyde and George, 2004; Madigan and Raftery, 1994); for instance, in Bayesian Model Selection (Barbieri and Berger, 2004; Chen et al., 2003; Wasserman, 2000), where the aim is to select the most likely model in an uncertainty class, and Bayesian Model Averaging (Hoeting et al., 1999; Raftery and Hoeting, 1997; Wasserman, 2000), which attempts to find an average model to represent the uncertainty class. These approaches are more focused on model inference in the presence of uncertainty, whereas the IBR objective is not necessarily to infer a model but to optimize the performance of an operator over the uncertainty class. There is no necessity to infer a model; rather, the IBR methodology begins with a definition of performance and proves that closed-form solutions to many optimal filtering problems can be found via a kind of average model, the effective process, or effective characteristics. It is not necessary that the effective process be a member of the uncertainty class, or even for it to exist as a valid process.
Moreover, and perhaps most importantly, uncertainty is grounded in the underlying random process, not in the parameters of an operational model, the latter uncertainty being derived from the uncertainty in the random process.
4.2 Optimal Bayesian Filters

Suppose that, in addition to a prior distribution, data are drawn from the joint random process to produce a random sample S = \{(x(t_1), y(v_1)), \ldots, (x(t_n), y(v_n))\}, and p(u, S) is the joint distribution over U and the sampling process. An operator providing the minimum expected cost conditioned on the data is called an optimal Bayesian filter (OBF):

c_{OBF}(\cdot \,|\, S)(v) = \arg\min_{c \in \mathcal{F}} E_U[C(Y_u(v), c(X_u)(v)) \,|\, S]    (4.54)

(Qian and Dougherty, 2016). This equation is similar to Eq. 4.2 for an IBR filter, except that the expectation is conditioned on the sample. The conditional expectation is relative to the posterior density p^*(u) = p(u|S). The OBF terminology derives from the corresponding problem in classification, where an optimal classifier relative to the posterior over an uncertainty class of feature-label distributions is called an "optimal Bayesian classifier" (Dalton and Dougherty, 2013a, b), a topic to be discussed in Chapter 6. To keep matters straight, when considering optimal Bayesian filters, we will write E_p and E_{p^*} for expectation with respect to the prior and posterior densities, respectively. Hence, E_U in Eq. 4.54 becomes E_p. When expressing the OBF in terms of the posterior, the expectation E_p[\cdot \,|\, S] is changed to the expectation E_{p^*}[\cdot]. If there are no data, then the OBF reduces to the IBR filter. Moreover, the OBF for p and S is the IBR filter for the posterior p^*.

The IBR filter theory for linear filtering extends to optimal Bayesian filtering by replacing the prior p with the posterior p^*. The effective correlation functions become R_{U,Y}(v, v) = E_{p^*}[R_{Y_u}(v, v)], R_{U,X}(t, u) = E_{p^*}[R_{X_u}(t, u)], and R_{U,YX}(v, t) = E_{p^*}[R_{Y_u X_u}(v, t)]. The extended Wiener–Hopf equation applies, as do the canonical-expansion formulations. In the case of WS stationary processes, r_{U,X}(\tau) = E_{p^*}[r_{X_u}(\tau)], r_{U,YX}(\tau) = E_{p^*}[r_{Y_u X_u}(\tau)], and their respective effective power spectra are S_{U,X}(\omega) and S_{U,YX}(\omega). The optimal Bayesian Wiener filter is defined by Eq. 4.25.

At this point we would like to consider optimal MSE estimation, a topic we could have treated in the IBR setting but have left to the more general OBF setting. Theorem 3.1 states that the conditional expectation E[Y|X] provides the optimal MSE estimate of random variable Y based on random variable X. Conditional-expectation optimality extends to estimation of Y based on a finite collection of random variables.
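Computationally, the optimal Bayesian Wiener filter is the IBR construction with posterior weights: on a discretized uncertainty class, reweight the states by the sample likelihood and reuse the effective-spectra ratio of Eq. 4.25. A sketch (the states, prior, and likelihood values are hypothetical):

```python
import numpy as np

def obf_wiener(S_X_states, S_YX_states, prior, likelihood):
    """Sketch: OBF Wiener filter = IBR Wiener filter under the posterior.

    prior: p(u) over m states; likelihood: p(S | u) for the observed sample S;
    S_X_states, S_YX_states: (m, n) arrays of per-state spectra on n frequencies.
    """
    post = np.asarray(prior, dtype=float) * np.asarray(likelihood, dtype=float)
    post /= post.sum()                      # posterior p*(u)
    S_UX = post @ np.asarray(S_X_states)    # posterior-effective S_{U,X}
    S_UYX = post @ np.asarray(S_YX_states)  # posterior-effective S_{U,YX}
    return S_UYX / S_UX

S_X = [[2.0, 4.0], [4.0, 8.0]]
S_YX = [[1.0, 2.0], [3.0, 4.0]]
G = obf_wiener(S_X, S_YX, [0.5, 0.5], [1.0, 0.0])  # data fully support state 1
```

With a flat likelihood the posterior equals the prior and the OBF reduces to the IBR filter, as stated in the text; the overall likelihood scale cancels in the normalization.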
For linear filtering we have been finding the optimal estimate of Y(v) based on observation of a random function X(t) over an interval T. The obvious extension of conditional expectation would be to consider E[Y(v) \,|\, \{X(t)\}_{t \in T}], the expectation of Y(v) given \{X(t)\}_{t \in T}; however, the definition of conditional expectation in Eq. 3.3 does not readily extend to conditioning on an infinite number of random variables. Simply put, one cannot escape a measure-theoretic treatment. Hence, we will proceed formally, recognizing that the conditional expectation can be defined rigorously and that, in the case of a finite number of observations, it has the definition utilized heretofore. Moreover, to simplify the notation, we will write E[Y(v)|X], the observation interval T being implicit.

Consider the MSE cost function and recall from the introduction to Chapter 1 that X_u(t) is short-hand notation for the random variable X_u(\omega; t), which can be written as X_{u,\omega}(t). For notational ease we will write c_{OBF}(\cdot \,|\, S) as c_{OBF}(\cdot), keeping in mind conditioning on the sample S. If we let \Omega be the probability space for \omega, then for the MSE cost function, the expectation of Eq. 4.54 becomes

E_p[E_\Omega[|Y_{u,\omega}(v) - c(X_{u,\omega})(v)|^2 \,|\, u] \,|\, S] = E_{p^*}[E_\Omega[|Y_{u,\omega}(v) - c(X_{u,\omega})(v)|^2 \,|\, u]]
= E_{p^*,\Omega}[|Y_{u,\omega}(v) - c(X_{u,\omega})(v)|^2],    (4.55)

where E_{p^*,\Omega} is the joint expectation with respect to p^* and \Omega. If \mathcal{F} consists of all measurable functions, then the OBF is the optimal MSE estimator relative to p^* \times \Omega. It is given by the conditional expectation relative to E_{p^*,\Omega}:

c_{OBF}(X_{u,\omega})(v) = E_{p^*,\Omega}[Y_{u,\omega}(v) \,|\, X_{u,\omega}]
= E_{p^*}[E_\Omega[Y_{u,\omega}(v) \,|\, X_{u,\omega}] \,|\, \phi]    (4.56)
= E_{p^*}[c_{opt,\phi}(X_{u,\omega})(v) \,|\, \phi],

where c_{opt,\phi} is the optimal filter for model \phi. Note that we have used \phi to represent an arbitrary state in the uncertainty class in order to avoid confusion with the signal pair (X_{u,\omega}, Y_{u,\omega}) from state u for which c_{OBF} is being employed. Relative to c_{OBF}(X_{u,\omega}), which is a random function of X_{u,\omega}, which depends on u and \omega, \phi is a dummy variable that is integrated out with E_{p^*}. For a realization x_{u,\omega}, the last line in Eq. 4.56 is E_{p^*}[c_{opt,\phi}(x_{u,\omega})(v) \,|\, \phi]. Thus, c_{OBF} is found by evaluating the \phi-optimal filter for the observed realization and then averaging over \phi relative to the posterior distribution, which makes sense because u is unknown.

We typically assume that the probability measure on a joint random process (X(t), Y(v)), t \in T, v \in V, is determined by a probability distribution function

F(x, y; t, v) = P(X(t) \le x, Y(v) \le y),    (4.57)

where, if X is an n-dimensional random vector, then X(t) \le x is replaced by n component inequalities (Eq. 1.1). From this, one can consider the conditional probability distribution function F(y|x; t, v), meaning that the observation Y(v) is conditioned on the observation X(t). Here, we are conditioning on \{X(t)\}_{t \in T},
but this kind of conditional distribution function cannot be derived from a probability distribution function of a finite number of random variables. Hence, we proceed formally with conditional probability distribution functions of the form F(y|X; T, v) corresponding to the conditional random variable Y(v) \,|\, \{X(t)\}_{t \in T}. In the presence of uncertainty, there is conditioning on u, so we have F(y|X, u; T, v), which we write as F_u(y|X; v), T being implicit. Under these assumptions, E_{p^*,\Omega} = E_{p^*,F}. Suppressing \omega in the notation, Eqs. 4.54 and 4.56 become

c_{OBF} = \arg\min_{c \in \mathcal{F}} E_{p^*,F}[|Y_u(v) - c(X_u)(v)|^2]    (4.58)

and

c_{OBF}(X_u)(v) = E_{p^*}[c_{opt,\phi}(X_u)(v) \,|\, \phi],    (4.59)

respectively, where

c_{opt,\phi}(X_u)(v) = E_{F_\phi}[Y_u(v) \,|\, X_u].    (4.60)
The true distribution governing the random pair (X_u, Y_u(v)) is F_u, but the conditional expectation is being taken under the assumption that their joint behavior is governed by F_\phi. We define the effective conditional distribution by

F_U^S(y|X_u; v) = \int_U F_\phi(y|X_u; v)\,p^*(\phi)\,d\phi.    (4.61)

The MSE OBF can be expressed as

c_{OBF}(X_u)(v) = E_{p^*}[E_{F_\phi}[Y_u(v) \,|\, X_u] \,|\, \phi]
= \int_U \int_{-\infty}^{\infty} y\,dF_\phi(y|X_u; v)\,p^*(\phi)\,d\phi
= \int_{-\infty}^{\infty} y\,dF_U^S(y|X_u; v)    (4.62)
= E_{F_U^S}[Y_u(v) \,|\, X_u],

which is the usual conditional-expectation form of the optimal MSE filter, but relative to the effective conditional distribution. Given the random sample S, the posterior distribution is given by

p^*(u) = p(u|S) \propto p(u)\,p(S|u) = p(u) \prod_{i=1}^{n} f_u(x_i, y_i; t_i, v_i),    (4.63)
where we have assumed that F_u(x_i, y_i; t_i, v_i) possesses the density f_u(x_i, y_i; t_i, v_i). The constant of proportionality can be found by normalizing the integral of p^*(u) to 1. It is important to recognize that in Eq. 4.63 the density f_u(x_i, y_i; t_i, v_i) is not fixed and depends on the index i (unless the joint process happens to be stationary).

In general, p(u) is not required to be a valid density function. The priors are called "improper" if the integral of p(u) is infinite, i.e., if p(u) induces a \sigma-finite measure but not a finite probability measure. Such priors can be used to represent uniform weight for all parameters in an unbounded range, rather than truncating the range of each parameter to a finite range. The use of improper priors can lead to improper posterior distributions. When improper priors are used, Bayes' rule does not apply, and Eq. 4.63 is taken as a definition, with posterior distributions being normalized to possess unit integral. It is important to check that the normalizing integral does not diverge.

Example 4.2. Consider the simple one-dimensional case where T = V = \{0\}, and the joint distribution of X_u(0) and Y_u(0) is jointly Gaussian for all u \in U. For notational simplicity, let X = X_u(0) and Y = Y_u(0). Then, for a given \phi \in U,

E_{F_\phi}[Y \,|\, X = x] = c_{opt,\phi}(x) = \mu_{Y|\phi} + \rho_{X,Y|\phi}\,\frac{\sigma_{Y|\phi}}{\sigma_{X|\phi}}\,(x - \mu_{X|\phi})

is the classical Gaussian linear regression, where \mu_{X|\phi} and \mu_{Y|\phi} are the means, \sigma_{X|\phi} and \sigma_{Y|\phi} are the standard deviations of X_\phi and Y_\phi, respectively, and \rho_{X,Y|\phi} is their correlation coefficient, all relative to the distribution F_\phi. Hence, Eq. 4.59 yields

c_{OBF}(X_u)(0) = E_{p^*}[\mu_{Y|\phi}] + E_{p^*}\left[ \rho_{X,Y|\phi}\,\frac{\sigma_{Y|\phi}}{\sigma_{X|\phi}}\,(x - \mu_{X|\phi}) \right],

which is still a linear function of x. What are the effective characteristics? If they were the effective means, variances, and correlation coefficient, then the OBF could be expressed as

c_{OBF}(X_u)(0) = E_{p^*}[\mu_{Y|\phi}] + E_{p^*}[\rho_{X,Y|\phi}]\,\frac{E_{p^*}[\sigma_{Y|\phi}]}{E_{p^*}[\sigma_{X|\phi}]}\,(x - E_{p^*}[\mu_{X|\phi}]),

but this is not the case. While the means, variances, and correlation coefficient are characteristics of the process, the ordinary linear regression provides the mean of Y and two other characteristics arising from multiplying through in the second summand. The expectations of these appear in the OBF; therefore, they are the relevant effective characteristics. ▪

Note that in the preceding example the linear regression function for predicting Y naturally arises from the Gaussian joint distribution assumption
on F, which is fundamentally different from the classical Bayesian linear regression (BLR) that starts by assuming a linear regression function. In the classical BLR approach for this setup, a linear function of the form

c(x) = \beta^T x = \beta_0 + \beta_1 x    (4.64)
is assumed with a prior distribution on the parameter pair (b0, b1), thereby leading to a posterior distribution. The weakness of classical BLR is that the prior distribution is ad hoc relative to the underlying process Fu(x, y; 0, 0), which in this case is a Gaussian. No matter the choice of prior for (b0, b1), the resulting regression can be no better than the OBF since the OBF is optimal. Structurally, the OBF is based on the uncertainty in the distribution of (Xu, Yu), which is characterized via p(u), and the comparative performance of the BLR filter depends on the degree to which the prior on (b0, b1) happens to capture the knowledge in p(u). As noted in (Qian and Dougherty, 2016), there is a scientific gap in constructing functional models and making prior assumptions on model parameters when the actual uncertainty applies to the underlying random processes.
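The distinction drawn in Example 4.2 between the OBF and a naive plug-in of averaged characteristics can be made concrete. A sketch with two hypothetical posterior states \phi (the numbers are illustrative, not from the text):

```python
import numpy as np

# Hypothetical posterior states: means, standard deviations, correlation; and weights.
phis = [dict(mX=0.0, mY=1.0, sX=1.0, sY=2.0, rho=0.5),
        dict(mX=0.5, mY=0.0, sX=2.0, sY=1.0, rho=0.8)]
w = np.array([0.5, 0.5])

def obf(x):
    """Posterior average of the per-state linear regressions (Eq. 4.59)."""
    preds = [p["mY"] + p["rho"] * p["sY"] / p["sX"] * (x - p["mX"]) for p in phis]
    return float(w @ np.array(preds))

def plugin(x):
    """Plug averaged means/deviations/correlation into the regression -- NOT the OBF."""
    m = lambda key: float(w @ np.array([p[key] for p in phis]))
    return m("mY") + m("rho") * m("sY") / m("sX") * (x - m("mX"))
```

obf is linear in x with slope E_{p^*}[\rho \sigma_Y/\sigma_X] and intercept E_{p^*}[\mu_Y - \rho(\sigma_Y/\sigma_X)\mu_X]; these products, not the individual moments, are the effective characteristics, so plugin generally disagrees with obf.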
4.3 Model-Constrained Bayesian Robust Filters

Having obtained full optimality relative to the cost function, filter class, and prior distribution via the IBR filter, we now turn to a suboptimal formulation that restricts optimization to filters that are optimal for models in the uncertainty class and depends directly on an analytic definition of robustness. For u \in U and c \in \mathcal{F}, the cost of estimating Y(s) by c(X)(s) relative to the model determined by u will be denoted by C_u(c), leaving the random process pair and point s of application implicit. In fact, this makes the entire matter more general because in this framework the theory applies to general classes of operators on general model classes, not simply to filters on classes of joint observation-signal processes. We denote the optimal filter (operator) for u by c_u. If u, \phi \in U, then C_u(c_\phi) \ge C_u(c_u). The robustness of c_\phi relative to state u is the cost increase due to using c_\phi at state u (Grigoryan and Dougherty, 1998):

\kappa(u; \phi) = C_u(c_\phi) - C_u(c_u).    (4.65)

Robustness is a nonnegative function on U \times U, \kappa(u; \phi) is not symmetric in u and \phi, and \kappa(u; u) = 0. Qualitatively, a filter is robust if \kappa(u; \phi) is small when |u - \phi| is modest. Assuming a prior distribution p(u) on U, the mean robustness at \phi is the expectation of \kappa(u; \phi) relative to p(u):
Optimal Robust Filtering
$$\kappa(f) = E_U[\kappa(u; f)] = \int_U \kappa(u; f)\, p(u)\, du. \tag{4.66}$$
The expected cost of using $c_f$ is

$$E_U[C_u(c_f)] = \kappa(f) + E_U[C_u(c_u)], \tag{4.67}$$
where $E_U[C_u(c_u)]$ is the expected cost of optimal filtering relative to the prior distribution. A state f minimizing $\kappa(f)$ is called a maximally robust state. If we are going to apply a state-specific optimal filter and are uncertain as to the observed state, then it is best to apply an optimal filter for a maximally robust state. A model-constrained (state-constrained) Bayesian robust (MCBR) filter is defined by

$$c_{\mathrm{MCBR}} = \arg\min_{c \in F_U} E_U[C_u(c)] = \arg\min_{c \in F_U} E_U[C(Y_u(s), c(X_u)(s))], \tag{4.68}$$
where $F_U$ is the set of all filters in F that are optimal for some u ∈ U, and where in the second equality we explicitly put in the random process pair and the point of application so that the comparison to Eq. 4.2 is clear. An MCBR filter is suboptimal relative to minimizing over all filters in F, meaning that it is suboptimal relative to an IBR filter. Moreover, an MCBR filter is an optimal filter for a maximally robust state. Indeed,

$$c_{\mathrm{MCBR}} = \arg\min_{c_f : f \in U} E_U[C_u(c_f)] = \arg\min_{c_f : f \in U} \big(\kappa(f) + E_U[C_u(c_u)]\big) = \arg\min_{c_f : f \in U} \kappa(f), \tag{4.69}$$

where the second equality follows from Eq. 4.67.
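To make the preceding definitions concrete, the following sketch computes the robustness of Eq. 4.65, the mean robustness of Eq. 4.66, the maximally robust state, and the MCBR and IBR solutions for a hypothetical finite uncertainty class. The scalar signal-plus-noise model, the scalar-gain filter class, and all numerical values are assumptions made for illustration, not taken from the text.

```python
import numpy as np

# Hypothetical finite uncertainty class: X_u = Y + N_u with Var[Y] = s2_y and
# unknown noise variance sigma2[u]; p is an assumed prior p(u).
s2_y = 1.0
sigma2 = np.array([0.1, 0.5, 2.0])      # states u in U
p = np.array([0.2, 0.5, 0.3])           # prior p(u)

def cost(g, s2_n):
    """MSE of the linear filter c(x) = g*x at a state with noise variance s2_n."""
    return s2_y * (1 - g)**2 + g**2 * s2_n

# State-specific optimal (Wiener) gains c_u and their costs C_u(c_u).
g_opt = s2_y / (s2_y + sigma2)
c_opt = cost(g_opt, sigma2)

# Robustness kappa(u; f) = C_u(c_f) - C_u(c_u)   (Eq. 4.65)
# rows: true state u, columns: assumed state f
kappa = cost(g_opt[None, :], sigma2[:, None]) - c_opt[:, None]
mean_kappa = p @ kappa                   # kappa(f) = E_U[kappa(u; f)]  (Eq. 4.66)

f_star = np.argmin(mean_kappa)           # maximally robust state -> MCBR filter
g_mcbr = g_opt[f_star]

# The IBR gain minimizes E_U[C_u(g)] over ALL gains, not just state-optimal ones.
g_ibr = s2_y / (s2_y + p @ sigma2)

print("mean robustness:", mean_kappa)
print("MCBR gain:", g_mcbr, " IBR gain:", g_ibr)
print("expected cost MCBR:", p @ cost(g_mcbr, sigma2),
      " IBR:", p @ cost(g_ibr, sigma2))
```

Note that the IBR expected cost is never larger than the MCBR expected cost, in line with the suboptimality of MCBR filtering stated above, and that the expected cost of the MCBR filter decomposes exactly as in Eq. 4.67.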
4.4 Robustness via Integral Canonical Expansions

For robustness analysis of optimal linear filters in the context of integral canonical expansions, the cost function is the MSE, and the characteristics are given by the function pair $R_{X_a}(s, t)$ and $R_{X_aY_a}(s, t)$ consisting of the autocorrelation and cross-correlation, where, in accordance with the notation in (Grigoryan and Dougherty, 2001), we denote states by a and b such that
signal pairs are denoted by $(X_a, Y_a)$. We write $q(s, \xi)$ in place of $a(s, \xi)$ in the basic canonical expansion theory, so that Eqs. 2.83 and 2.84 become

$$Z(\xi) = \int_T X(t)\, q(t, \xi)\, dt, \tag{4.70}$$

$$x(t, \xi) = \frac{1}{I(\xi)} \int_T q(s, \xi)\, R_X(t, s)\, ds, \tag{4.71}$$
respectively. According to Eq. 3.107, the optimal linear estimator for $Y_a(s)$ based on observing $X_a(t)$ over the interval T is

$$\hat{Y}_a(s) = \int_T \hat{g}_a(s, t)\, X_a(t)\, dt, \tag{4.72}$$

where $\hat{g}_a(s, t)$ is the optimal weighting function for state a. If, however, we apply the optimal linear estimator for state b to estimate $Y_a(s)$ based on observing $X_a(t)$ over the interval T, then we get the estimate

$$\hat{Y}_{a,b}(s) = \int_T \hat{g}_b(s, t)\, X_a(t)\, dt, \tag{4.73}$$
where $\hat{g}_b(s, t)$ is the optimal weighting function for state b. Relative to the MSE, the robustness is given by

$$\kappa(a; b) = E[|Y_a(s) - \hat{Y}_{a,b}(s)|^2] - E[|Y_a(s) - \hat{Y}_a(s)|^2]. \tag{4.74}$$
In the context of canonical expansions, according to Eq. 3.132, Eq. 4.73 becomes

$$\hat{Y}_{a,b}(s) = \int_\Xi \frac{Z_{a,b}(\xi)\, R_{Y_bZ_b}(s, \xi)}{I_b(\xi)}\, d\xi, \tag{4.75}$$

where $Z_b(\xi)$ is the canonical-expansion coefficient for $X_b(t)$, $I_b(\xi)$ is the intensity of $Z_b(\xi)$, and

$$Z_{a,b}(\xi) = \int_T X_a(t)\, q_b(t, \xi)\, dt. \tag{4.76}$$
In general, $Z_{a,b}(\xi)$ need not be white noise. The MSE of $\hat{Y}_{a,b}(s)$ as an estimator of $Y_a(s)$ is

$$\begin{aligned}
E\bigg[\bigg|Y_a(s) - \int_\Xi \frac{Z_{a,b}(\xi)\, R_{Y_bZ_b}(s, \xi)}{I_b(\xi)}\, d\xi\bigg|^2\bigg]
&= E[|Y_a(s)|^2] - 2 \int_\Xi \frac{R_{Y_aZ_{a,b}}(s, \xi)\, R_{Y_bZ_b}(s, \xi)}{I_b(\xi)}\, d\xi \\
&\quad + \int_\Xi\int_\Xi \frac{R_{Z_{a,b}}(\xi, \eta)\, R_{Y_bZ_b}(s, \xi)\, R_{Y_bZ_b}(s, \eta)}{I_b(\xi)\, I_b(\eta)}\, d\xi\, d\eta.
\end{aligned} \tag{4.77}$$

The robustness is

$$\begin{aligned}
\kappa(a; b)(s) = {}& \int_\Xi\int_\Xi \frac{R_{Z_{a,b}}(\xi, \eta)\, R_{Y_bZ_b}(s, \xi)\, R_{Y_bZ_b}(s, \eta)}{I_b(\xi)\, I_b(\eta)}\, d\xi\, d\eta \\
&+ \int_\Xi \frac{|R_{Y_aZ_{a,b}}(s, \xi) - R_{Y_bZ_b}(s, \xi)|^2}{I_b(\xi)}\, d\xi + \int_\Xi \frac{|R_{Y_aZ_a}(s, \xi)|^2}{I_a(\xi)}\, d\xi \\
&- \int_\Xi \frac{|R_{Y_aZ_{a,b}}(s, \xi)|^2}{I_b(\xi)}\, d\xi - \int_\Xi \frac{|R_{Y_bZ_b}(s, \xi)|^2}{I_b(\xi)}\, d\xi.
\end{aligned} \tag{4.78}$$
Equation 4.78 provides the fundamental representation of robustness for optimal linear filters in the context of integral canonical expansions. The mean robustness $\kappa(b)(s)$ is found by integrating $\kappa(a; b)(s)$ against the prior distribution p(a).

4.4.1 Robustness in the blurred-signal-plus-noise model

For a signal transformed by a linear operator $F_a$ and uncorrelated additive noise,

$$X_a(t) = F_a Y(t) + N_a(t), \tag{4.79}$$
with zero-mean processes, the basic equations determining the robustness become

$$Z_a(\xi) = \int_T F_a Y(t)\, q_a(t, \xi)\, dt + \int_T N_a(t)\, q_a(t, \xi)\, dt, \tag{4.80}$$

$$Z_{a,b}(\xi) = \int_T F_a Y(t)\, q_b(t, \xi)\, dt + \int_T N_a(t)\, q_b(t, \xi)\, dt, \tag{4.81}$$

$$R_{Y_aZ_a}(s, \xi) = E[Y(s) Z_a(\xi)] = \int_T R_{YF_aY}(s, t)\, q_a(t, \xi)\, dt, \tag{4.82}$$

$$R_{Y_aZ_{a,b}}(s, \xi) = E[Y(s) Z_{a,b}(\xi)] = \int_T R_{YF_aY}(s, t)\, q_b(t, \xi)\, dt, \tag{4.83}$$

$$R_{Z_{a,b}}(\xi, \eta) = E[Z_{a,b}(\xi) Z_{a,b}(\eta)] = \int_T\int_T \big[R_{F_aY}(t, s) + R_{N_a}(t, s)\big]\, q_b(t, \xi)\, q_b(s, \eta)\, dt\, ds. \tag{4.84}$$
For additive noise alone,

$$R_{Y_aZ_{a,b}}(s, \xi) = R_{Y_bZ_b}(s, \xi), \tag{4.85}$$

and Eq. 4.78 reduces to

$$\kappa(a; b)(s) = \int_\Xi\int_\Xi \frac{R_{Z_{a,b}}(\xi, \eta)\, R_{Y_bZ_b}(s, \xi)\, R_{Y_bZ_b}(s, \eta)}{I_b(\xi)\, I_b(\eta)}\, d\xi\, d\eta + \int_\Xi \frac{|R_{Y_aZ_a}(s, \xi)|^2}{I_a(\xi)}\, d\xi - 2 \int_\Xi \frac{|R_{Y_bZ_b}(s, \xi)|^2}{I_b(\xi)}\, d\xi. \tag{4.86}$$

Example 4.3. The preceding theory applies to discrete canonical representations with

$$X(t) = \sum_k Z(k)\, x_k(t),$$

$$Z(k) = \int_T X(t)\, q_k(t)\, dt.$$

Consider the signal-plus-noise model $X_a(t) = Y(t) + N_a(t)$, in which the white noise is uncorrelated with the signal and has constant intensity $\sigma_a^2$. Let $\{u_k(t)\}$ be the Karhunen–Loève coordinate system for the signal autocorrelation function, with corresponding eigenvalue sequence $\{\lambda_k\}$. Then $q_k(t) = u_k(t)$, $\mathrm{Var}[Z(k)] = \lambda_k$, and, according to Example 3.2,

$$\hat{Y}(s) = \sum_{k=1}^{\infty} \frac{\lambda_k}{\lambda_k + \sigma_a^2}\, Z_a(k)\, u_k(s),$$
where $Z_a(k)$ is the kth Karhunen–Loève coefficient for $X_a(t)$, and $X_a(t)$ has the same coordinate functions as Y(t), except that the eigenvalues for $X_a(t)$ are $\lambda_k + \sigma_a^2$. From Eqs. 4.82 and 4.84,

$$R_{Y_aZ_a}(s, k) = \int_T R_Y(s, t)\, u_k(t)\, dt = \lambda_k u_k(s),$$

$$\begin{aligned}
R_{Z_{a,b}}(k, l) &= \int_T\int_T \big[R_Y(t, s) + \sigma_a^2\, \delta(t - s)\big]\, u_k(t)\, u_l(s)\, dt\, ds \\
&= \int_T\int_T R_Y(t, s)\, u_k(t)\, u_l(s)\, dt\, ds + \sigma_a^2 \int_T u_k(t)\, u_l(t)\, dt \\
&= (\lambda_k + \sigma_a^2)\, \delta_{kl}.
\end{aligned}$$

Treating Eq. 4.78 in the discrete sense, we obtain

$$\kappa(a; b)(s) = \sum_k \frac{(\sigma_b^2 - \sigma_a^2)^2\, \lambda_k^2\, |u_k(s)|^2}{(\lambda_k + \sigma_a^2)(\lambda_k + \sigma_b^2)^2}. \qquad \blacksquare$$
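The closed-form robustness of Example 4.3 can be checked numerically against the direct definition, MSE of the mismatched filter minus MSE of the optimal filter. The eigenvalues, the values $|u_k(s)|^2$, and the noise intensities in this sketch are arbitrary assumed inputs.

```python
import numpy as np

# Assumed Karhunen-Loeve data at a fixed point s (illustrative values only).
lam = np.array([3.0, 1.5, 0.7, 0.2])     # eigenvalues lambda_k
uk2 = np.array([0.5, 0.3, 0.15, 0.05])   # |u_k(s)|^2
s2_a, s2_b = 0.4, 1.1                    # true (a) and assumed (b) intensities

def mse(s2_assumed, s2_true):
    """MSE at s when the filter designed for s2_assumed acts on noise s2_true."""
    g = lam / (lam + s2_assumed)         # per-coefficient Wiener gains
    return np.sum(((1 - g)**2 * lam + g**2 * s2_true) * uk2)

kappa_direct = mse(s2_b, s2_a) - mse(s2_a, s2_a)   # definition, Eq. 4.74
kappa_closed = np.sum((s2_b - s2_a)**2 * lam**2 * uk2
                      / ((lam + s2_a) * (lam + s2_b)**2))
print(kappa_direct, kappa_closed)        # the two values agree
```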
4.4.2 Robustness relative to the power spectral density

Assuming WS stationarity, the blurred-signal-plus-noise model of Eq. 3.120 consists of a signal convolved with a kernel $h_a(t)$, plus additive uncorrelated noise $N_a(t)$,

$$X_a(t) = F_a Y(t) + N_a(t) = \int_{-\infty}^{\infty} h_a(\tau)\, Y(t - \tau)\, d\tau + N_a(t), \tag{4.87}$$

and we obtain the Wiener filter

$$\hat{G}_a(\omega) = \frac{\overline{H_a(\omega)}\, S_Y(\omega)}{|H_a(\omega)|^2 S_Y(\omega) + S_{N_a}(\omega)}. \tag{4.88}$$
For robustness analysis, we need to recast the basic equations in the WS stationary context using the power spectral density and $q(t, \omega) = e^{-j\omega t}$. According to Eq. 2.108, $x(t, \omega) = (2\pi)^{-1} e^{j\omega t}$. This coordinate function does not depend on the state of the observed signal $X_a(t)$, which itself depends on the state of the convolution kernel $h_a(t)$, or equivalently its Fourier transform $H_a(\omega)$, and the state of the noise $N_a(t)$. Assuming that the noise is white with intensity $\sigma_a^2$,

$$Z_a(\omega) = Z_{a,b}(\omega) = \int_{-\infty}^{\infty} F_a Y(t)\, e^{-j\omega t}\, dt + \int_{-\infty}^{\infty} N_a(t)\, e^{-j\omega t}\, dt, \tag{4.89}$$
$$\begin{aligned}
r_{Y_aZ_a}(s, \omega) &= E[Y_a(s)\, \overline{Z_a(\omega)}] \\
&= \int_{-\infty}^{\infty} r_{YF_aY}(s - t)\, e^{j\omega t}\, dt \\
&= e^{j\omega s} \int_{-\infty}^{\infty} r_{YF_aY}(u)\, e^{-j\omega u}\, du \\
&= e^{j\omega s}\, S_{YF_aY}(\omega) \\
&= e^{j\omega s}\, \overline{H_a(\omega)}\, S_Y(\omega),
\end{aligned} \tag{4.90}$$

$$r_{Y_aZ_{a,b}}(s, \omega) = e^{j\omega s}\, \overline{H_a(\omega)}\, S_Y(\omega), \tag{4.91}$$

$$I_a(\omega) = 2\pi S_{X_a}(\omega) = 2\pi\big[|H_a(\omega)|^2 S_Y(\omega) + \sigma_a^2\big], \tag{4.92}$$
$$\begin{aligned}
R_{Z_{a,b}}(\xi, \eta) &= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \big[r_{F_aY}(t - s) + \sigma_a^2\, \delta(t - s)\big]\, e^{-j\xi t} e^{j\eta s}\, dt\, ds \\
&= \int_{-\infty}^{\infty} r_{F_aY}(u)\, e^{-j\xi u}\, du \int_{-\infty}^{\infty} e^{-j(\xi - \eta)s}\, ds + 2\pi \sigma_a^2\, \delta(\xi - \eta) \\
&= 2\pi S_{F_aY}(\eta)\, \delta(\xi - \eta) + 2\pi \sigma_a^2\, \delta(\xi - \eta) \\
&= 2\pi \big[|H_a(\eta)|^2 S_Y(\eta) + \sigma_a^2\big]\, \delta(\xi - \eta).
\end{aligned} \tag{4.93}$$

Hence, Eq. 4.78 becomes

$$\begin{aligned}
\kappa(a; b) = {}& \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{\big[|H_a(\omega)|^2 S_Y(\omega) + \sigma_a^2\big]\, |H_b(\omega)|^2 S_Y^2(\omega)}{\big[|H_b(\omega)|^2 S_Y(\omega) + \sigma_b^2\big]^2}\, d\omega \\
&+ \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{|H_a(\omega)|^2 S_Y^2(\omega)}{|H_a(\omega)|^2 S_Y(\omega) + \sigma_a^2}\, d\omega \\
&+ \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{\big(|H_b(\omega) - H_a(\omega)|^2 - |H_b(\omega)|^2 - |H_a(\omega)|^2\big)\, S_Y^2(\omega)}{|H_b(\omega)|^2 S_Y(\omega) + \sigma_b^2}\, d\omega.
\end{aligned} \tag{4.94}$$

For convolution alone (no noise), the inverse filter is optimal, and the robustness reduces to

$$\kappa(a; b) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{|H_b(\omega) - H_a(\omega)|^2\, S_Y(\omega)}{|H_b(\omega)|^2}\, d\omega, \tag{4.95}$$
which is an L² norm in the transform space for the convolution kernels. In the case of additive noise alone (no convolution), $H_a(\omega) = 1$, and the robustness reduces to
$$\kappa(a; b) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{(\sigma_a^2 - \sigma_b^2)^2\, S_Y^2(\omega)}{\big(S_Y(\omega) + \sigma_b^2\big)^2 \big(S_Y(\omega) + \sigma_a^2\big)}\, d\omega. \tag{4.96}$$
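Equation 4.96 can be checked numerically against the direct definition of robustness (mismatched Wiener MSE minus optimal Wiener MSE) by discretizing the frequency integrals. The rational signal spectrum and the noise intensities below are assumptions chosen only for illustration.

```python
import numpy as np

# Numerical check of Eq. 4.96 (additive white noise only, H_a = 1).
w = np.linspace(-100.0, 100.0, 200001)    # frequency grid
dw = w[1] - w[0]
S_Y = 2.0 / (1.0 + w**2)                  # assumed signal power spectral density
s2_a, s2_b = 0.3, 0.8                     # true state a, assumed state b

def mse(s2_assumed, s2_true):
    """MSE of the Wiener filter designed for s2_assumed applied at s2_true."""
    G = S_Y / (S_Y + s2_assumed)
    return np.sum((1 - G)**2 * S_Y + G**2 * s2_true) * dw / (2 * np.pi)

kappa_direct = mse(s2_b, s2_a) - mse(s2_a, s2_a)        # definition, Eq. 4.74
kappa_eq496 = np.sum((s2_a - s2_b)**2 * S_Y**2
                     / ((S_Y + s2_b)**2 * (S_Y + s2_a))) * dw / (2 * np.pi)
print(kappa_direct, kappa_eq496)          # the two values agree
```

Because the integrands are equal pointwise, the two discretized values agree to machine precision on any common grid.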
Example 4.4. One model for signal uncertainty is ε-contamination. In the Bayesian framework, the power spectral density of the signal takes the form

$$S_{Y_a}(\omega) = (1 - \varepsilon)\, S_0(\omega) + \varepsilon S_{W_a}(\omega),$$

where 0 ≤ ε < 1, $S_0(\omega)$ is a fixed power spectral density, and $\{S_{W_a}\}$ is a class of power spectral densities that contaminate the nominal signal power spectral density $S_0(\omega)$. The prior distribution on a places a probability structure on the contamination class. A similar contamination decomposition can be posited for the noise in the uncorrelated-signal-plus-noise model, with possibly different ε and a different contamination class. For simplicity, we consider only signal uncertainty and assume white noise with intensity $\sigma^2$. The basic equations take forms similar to those for the previously considered convolved-signal-plus-noise model. $Z_a(\omega)$ and $Z_{a,b}(\omega)$ are given by Eq. 4.89 with the exception that $F_aY(t)$ is replaced by $Y_a(t)$ in the first integral. Moreover,

$$r_{Y_aZ_a}(s, \omega) = r_{Y_aZ_{a,b}}(s, \omega) = e^{j\omega s}\big[(1 - \varepsilon) S_0(\omega) + \varepsilon S_{W_a}(\omega)\big],$$

$$I_a(\omega) = 2\pi\big[(1 - \varepsilon) S_0(\omega) + \varepsilon S_{W_a}(\omega) + \sigma^2\big].$$

Equation 4.78 becomes

$$\begin{aligned}
\kappa(a; b) = {}& \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{\big[(1-\varepsilon)S_0(\omega) + \varepsilon S_{W_a}(\omega) + \sigma^2\big]\big[(1-\varepsilon)S_0(\omega) + \varepsilon S_{W_b}(\omega)\big]^2}{\big[(1-\varepsilon)S_0(\omega) + \varepsilon S_{W_b}(\omega) + \sigma^2\big]^2}\, d\omega \\
&+ \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{\big[(1-\varepsilon)S_0(\omega) + \varepsilon S_{W_a}(\omega)\big]^2}{(1-\varepsilon)S_0(\omega) + \varepsilon S_{W_a}(\omega) + \sigma^2}\, d\omega \\
&- \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\big[(1-\varepsilon)S_0(\omega) + \varepsilon S_{W_a}(\omega)\big]\big[(1-\varepsilon)S_0(\omega) + \varepsilon S_{W_b}(\omega)\big]}{(1-\varepsilon)S_0(\omega) + \varepsilon S_{W_b}(\omega) + \sigma^2}\, d\omega.
\end{aligned}$$
Note that $\kappa(a; b) = 0$ if ε = 0. The model can be further extended by letting ε be a random variable possessing a prior distribution. ▪

4.4.3 Global filters

Rather than design optimal filters for specific states and apply the optimal filter for a maximally robust state, we can take a distribution-wide approach
and define a single global filter that takes into account the distribution of the parametric mass. A simple way to do this is to use the optimal filter for E[u]. This corresponds to the naive approach in which one designs a single optimal filter at an average state. E[u] is not typically maximally robust, although it may be close to being so. While the mean takes into account the parameter distribution, it does not utilize the characteristics determining the optimal filter. We consider global linear filters that depend on the second-order statistics. A global filter $c_\bullet$ cannot outperform a state-specific optimal filter $c_u$ at state u. We denote the increase in cost from applying $c_\bullet$ instead of $c_u$ at u by $\kappa(u; \bullet)$. The mean robustness of $c_\bullet$ is the expected cost increase from application of $c_\bullet$: $\kappa(\bullet) = E_U[\kappa(u; \bullet)]$. If $\kappa(\bullet) \le \kappa(u)$ for all u ∈ U, then the global filter is said to be uniformly most robust. The response function of the optimal linear filter for state u is defined in Eq. 3.131, but now the terms depend on u. We construct a global filter via the response function by taking the expectation:

$$\hat{g}_\bullet(s, t) = E_U[\hat{g}_u(s, t)]. \tag{4.97}$$
To illustrate this mean-response global filter, consider the model of Eq. 4.87 when the signal and noise are uncorrelated and the noise is white. The transfer function for the optimal filter is

$$\hat{G}_a(\omega) = \frac{\overline{H_a(\omega)}\, S_Y(\omega)}{|H_a(\omega)|^2 S_Y(\omega) + \sigma_a^2}. \tag{4.98}$$
Since averaging the transfer function is equivalent to averaging the response function, the global filter is defined by $\hat{G}_\bullet(\omega) = E[\hat{G}_a(\omega)]$; $\kappa(a; \bullet)$ is calculated according to Eq. 4.94 with • replacing b. A global filter can also be defined using the mean characteristics. The characteristic is averaged to obtain a mean characteristic, $v_{X,Y;\bullet} = E[v_{X,Y;a}]$. A global filter is then defined as the optimal linear filter relative to the mean characteristic. To illustrate this mean-characteristic global filter, again consider the model of Eq. 4.87 when the signal and noise are uncorrelated and the noise is white. The transfer function for the mean-characteristic global filter is defined by

$$\hat{G}_\bullet(\omega) = \frac{\overline{H_\bullet(\omega)}\, S_Y(\omega)}{|H_\bullet(\omega)|^2 S_Y(\omega) + \sigma_\bullet^2}, \tag{4.99}$$

where the characteristic is the triple $(S_Y, H_\bullet(\omega), \sigma_\bullet^2)$ determined by $S_Y$, $H_\bullet(\omega) = E[H_a(\omega)]$, and $\sigma_\bullet^2 = E[\sigma_a^2]$. Again, $\kappa(a; \bullet)$ is calculated according to Eq. 4.94 with • replacing b.
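The two global filters just described can be compared numerically. The sketch below does so for the additive-white-noise special case ($H_a = 1$), where the mean-response filter averages the state-optimal transfer functions and the mean-characteristic filter is the Wiener filter for the averaged noise intensity; the spectrum, states, and prior are all assumed values.

```python
import numpy as np

# Comparison of the mean-response and mean-characteristic global filters of
# Section 4.4.3 for the additive-white-noise model (H_a = 1); assumed inputs.
w = np.linspace(-100.0, 100.0, 200001)
dw = w[1] - w[0]
S_Y = 2.0 / (1.0 + w**2)                 # assumed signal spectrum
sigma2 = np.array([0.2, 0.6, 1.5])       # uncertainty class of noise intensities
p = np.array([0.3, 0.4, 0.3])            # prior p(a)

def mse(G, s2_true):
    return np.sum((1 - G)**2 * S_Y + G**2 * s2_true) * dw / (2 * np.pi)

G_state = [S_Y / (S_Y + s2) for s2 in sigma2]       # state-optimal filters
G_resp = sum(pa * G for pa, G in zip(p, G_state))   # mean-response filter
G_char = S_Y / (S_Y + p @ sigma2)                   # mean-characteristic filter

def mean_robustness(G):
    # kappa(.) = E_U[MSE of G at state a minus MSE of the optimal filter at a]
    return sum(pa * (mse(G, s2) - mse(Gs, s2))
               for pa, s2, Gs in zip(p, sigma2, G_state))

print("mean-response:", mean_robustness(G_resp))
print("mean-characteristic:", mean_robustness(G_char))
```

In this particular model the mean-characteristic filter coincides with the pointwise minimizer of the expected MSE, so its mean robustness cannot exceed that of the mean-response filter; in general the two global constructions are distinct heuristics with no such ordering guarantee.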
4.5 Minimax Robust Filters

As previously noted, robust Wiener filtering originated with minimax optimization. Minimax robustness also has a long history in other linear optimization frameworks, including matched and Kalman filtering (Poor, 1983; Poor and Looze, 1981; Verdu and Poor, 1984a; Chen and Chen, 1984; Chen and Kassam, 1995). A general formulation has been developed in the context of game theory, where it can be proven under some convexity assumptions on the uncertainty class that a minimax robust filter exists (Verdu and Poor, 1984b). In the general game-theory context, the solution applies to a number of settings. A minimax robust filter has the best worst-case performance over all models:

$$c_{\mathrm{minimax}} = \arg\min_{c \in F} \max_{u \in U} C_u(c). \tag{4.100}$$

It only takes into account the uncertainty class, not a prior distribution on the uncertainty class. Relative to $F_U$, a model-constrained (state-constrained) minimax robust filter is defined by

$$c_{\mathrm{minimax}} = \arg\min_{c \in F_U} \max_{u \in U} C_u(c), \tag{4.101}$$
which is suboptimal relative to the minimax robust filter. We assume model-constrained optimization throughout the remainder of this section. Unless further conditions are posited, there is no assurance that a solution exists, nor even that the minimax is finite. Minimax robust filters can be described via game-theoretic concepts. A least-favorable state $\bar{u} \in U$ has the greatest error among all optimal filters:

$$C_{\bar{u}}(c_{\bar{u}}) = \max_{u \in U} C_u(c_u). \tag{4.102}$$

A least-favorable state is said to be a saddle-point solution to the optimization if

$$C_u(c_{\bar{u}}) \le C_{\bar{u}}(c_{\bar{u}}) \le C_{\bar{u}}(c_f) \tag{4.103}$$

for all u ∈ U and all $c_f \in F_U$ (equivalently, for all u, f ∈ U). The right inequality holds because $c_{\bar{u}}$ is optimal for $\bar{u}$. The left inequality says that $c_{\bar{u}}$ performs at least as well for any state as it does for the least-favorable state, for which it is optimal. Note that the left inequality requires $\bar{u}$ to be least favorable because, owing to the optimality of $c_u$ for u, it implies that

$$C_u(c_u) \le C_u(c_{\bar{u}}) \le C_{\bar{u}}(c_{\bar{u}}). \tag{4.104}$$
If a least-favorable state is a saddle-point solution, then $c_{\bar{u}}$ is a minimax robust filter. Indeed, according to the left inequality of Eq. 4.103, for $\bar{u}$,

$$\max_{u \in U} C_u(c_{\bar{u}}) = C_{\bar{u}}(c_{\bar{u}}). \tag{4.105}$$

According to the right inequality, for any u,

$$\max_{f \in U} C_f(c_u) \ge C_{\bar{u}}(c_u) \ge C_{\bar{u}}(c_{\bar{u}}) = \max_{u \in U} C_u(c_{\bar{u}}). \tag{4.106}$$

Hence, $c_{\bar{u}}$ satisfies Eq. 4.101, and finding a minimax robust filter is reduced to finding a least-favorable state that is a saddle-point solution. As we have seen for linear filters and MSE, for a constrained class of filters, filter error and optimality often depend on only certain characteristics relating the observed and ideal signals, not the full joint distribution. When emphasizing dependency on characteristics, we denote the relevant characteristics by $v_{X,Y}$ and the filter cost (error) by $C(v_{X,Y}; c)$. Equation 4.103 takes the form

$$C(v_{X,Y,u}; c_{\bar{u}}) \le C(v_{X,Y,\bar{u}}; c_{\bar{u}}) \le C(v_{X,Y,\bar{u}}; c_f). \tag{4.107}$$
For jointly WS stationary processes and the signal-plus-noise model, X(t) = Y(t) + N(t), $v_{X,Y}$ comprises second-order characteristics of the signal and noise, in particular, the power spectral density $S_X$ of the observed signal and the cross power spectral density $S_{X,Y}$. If the signal and noise are uncorrelated, then $v_{X,Y}$ consists of the power spectral densities $S_Y$ and $S_N$ of the signal and noise, respectively, and optimization is relative to the MSE determined by the power spectra and the linear filter. In this special case, the uncertainty class is $\{(S_{Y,u}, S_{N,u}) : u \in U\}$, $v_{X,Y,u}$ and $v_{X,Y,\bar{u}}$ are replaced by $(S_{Y,u}, S_{N,u})$ and $(S_{Y,\bar{u}}, S_{N,\bar{u}})$, respectively, and $(S_{Y,\bar{u}}, S_{N,\bar{u}})$ is called a least-favorable pair. Minimax filtering is very conservative, is subject to the detrimental effect of outliers (as we will demonstrate), and does not use any prior distributional knowledge on the uncertainty class.

4.5.1 Minimax robust granulometric bandpass filters

Because the error of a granulometric bandpass filter Ξ depends on the signal and noise granulometric size densities $H_S$ and $H_N$, we denote its error (Eq. 3.153) by $\varepsilon(H_S, H_N; \Xi)$. Given granulometry $\{C_t\}$, an uncertainty class takes the form $\{(H_{S,a}, H_{N,a}) : a \in U\}$, where, in accordance with (Dougherty and Chen, 2001), we denote states by a and b. For each GSD pair $(H_{S,a}, H_{N,a})$, there is an optimal GBF $\Xi_{C,a}$. A minimax robust filter is defined by
$$\Xi_{\mathrm{minimax}} = \arg\min_{\Xi_{C,b} : b \in U} \max_{a \in U} \varepsilon(H_{S,a}, H_{N,a}; \Xi_{C,b}). \tag{4.108}$$
To ease notation, we will write $\Xi_a$ instead of $\Xi_{C,a}$, keeping in mind the generating granulometry $\{C_t\}$. A least-favorable GSD pair $(H_{S,\bar a}, H_{N,\bar a})$ is defined by

$$\varepsilon(H_{S,\bar a}, H_{N,\bar a}; \Xi_{\bar a}) = \max_{a \in U} \varepsilon(H_{S,a}, H_{N,a}; \Xi_a). \tag{4.109}$$
$\Xi_{\bar a}$ is a saddle-point solution, and therefore a minimax robust filter, if

$$\varepsilon(H_{S,a}, H_{N,a}; \Xi_{\bar a}) \le \varepsilon(H_{S,\bar a}, H_{N,\bar a}; \Xi_{\bar a}) \le \varepsilon(H_{S,\bar a}, H_{N,\bar a}; \Xi_b) \tag{4.110}$$

for all a, b ∈ U. Referring to Eq. 3.154, and denoting the error and optimal pass set for a by $\varepsilon_a$ and $\Pi\langle a\rangle$, respectively, $\bar a$ is least favorable if $\varepsilon_{\bar a}[\Xi_{\bar a}] \ge \varepsilon_a[\Xi_a]$ for any a ∈ U, meaning that

$$\int_{\Pi\langle \bar a\rangle^c} H_{S,\bar a}(t)\, dt + \int_{\Pi\langle \bar a\rangle} H_{N,\bar a}(t)\, dt \ge \int_{\Pi\langle a\rangle^c} H_{S,a}(t)\, dt + \int_{\Pi\langle a\rangle} H_{N,a}(t)\, dt. \tag{4.111}$$
If $\bar a$ is least favorable, then $(H_{S,\bar a}, H_{N,\bar a})$ is a saddle-point solution if $\varepsilon_{\bar a}[\Xi_{\bar a}] \ge \varepsilon_a[\Xi_{\bar a}]$, meaning that

$$\int_{\Pi\langle \bar a\rangle^c} H_{S,\bar a}(t)\, dt + \int_{\Pi\langle \bar a\rangle} H_{N,\bar a}(t)\, dt \ge \int_{\Pi\langle \bar a\rangle^c} H_{S,a}(t)\, dt + \int_{\Pi\langle \bar a\rangle} H_{N,a}(t)\, dt \tag{4.112}$$
for any a ∈ U, which holds if the following two saddle-point inequalities hold:

$$\int_{\Pi\langle \bar a\rangle^c} H_{S,\bar a}(t)\, dt \ge \int_{\Pi\langle \bar a\rangle^c} H_{S,a}(t)\, dt, \tag{4.113}$$

$$\int_{\Pi\langle \bar a\rangle} H_{N,\bar a}(t)\, dt \ge \int_{\Pi\langle \bar a\rangle} H_{N,a}(t)\, dt. \tag{4.114}$$
With $C_t$ an opening, for any a ∈ U, suppose that the optimal pass set is $[t_a, \infty)$, $t_a > 0$. Thus, $H_{S,a}(t) \ge H_{N,a}(t)$ for $t \ge t_a$, and $H_{S,a}(t) < H_{N,a}(t)$ for $t < t_a$. This models the situation in which signal components tend to be larger than noise components. The opening $\Xi_a = C_{t_a}$ is an optimal GBF. The saddle-point inequalities become

$$\int_0^{t_{\bar a}} H_{S,\bar a}(t)\, dt \ge \int_0^{t_{\bar a}} H_{S,a}(t)\, dt, \tag{4.115}$$
$$\int_{t_{\bar a}}^{\infty} H_{N,\bar a}(t)\, dt \ge \int_{t_{\bar a}}^{\infty} H_{N,a}(t)\, dt. \tag{4.116}$$
For reconstructive granulometries and the grain model of Eq. 3.167, with single-parameter granulometric sizes for the signal and noise primary grains, Eq. 3.164 applied to the signal becomes

$$H_{S,a}(t) = m_{S,a}\, g_S(a; t)\, f_{S,a}\big(h(S_a)^{-1}(t)\big), \tag{4.117}$$

where a is now part of the definition of $g_S$, and the distribution for the signal primary grain $S_a$ is $f_{S,a}$. An analogous expression applies to the noise with N in place of S. If $\bar a$ is a least-favorable state, then the saddle-point inequalities become

$$m_{S,\bar a} \int_0^{t_{\bar a}} g_S(\bar a; t)\, f_{S,\bar a}\big(h(S_{\bar a})^{-1}(t)\big)\, dt \ge m_{S,a} \int_0^{t_{\bar a}} g_S(a; t)\, f_{S,a}\big(h(S_a)^{-1}(t)\big)\, dt, \tag{4.118}$$

$$m_{N,\bar a} \int_{t_{\bar a}}^{\infty} g_N(\bar a; t)\, f_{N,\bar a}\big(h(N_{\bar a})^{-1}(t)\big)\, dt \ge m_{N,a} \int_{t_{\bar a}}^{\infty} g_N(a; t)\, f_{N,a}\big(h(N_a)^{-1}(t)\big)\, dt. \tag{4.119}$$
These inequalities can be problematic, depending on the model and the geometric relations between the primary grains and the structuring elements. The existence of a minimax robust reconstructive opening can be demonstrated under rather general conditions that are commonly satisfied (Dougherty and Chen, 2001); however, the theorem is of a rather technical nature, and therefore we omit it, settling for an illustrative example.

Example 4.5. Relative to Eq. 3.166, assume that h(X) is the identity and that X depends only on W. Then

$$H_{S,a}(t) = m_{S,a}\, \nu[S_a]\, f_{S,a}(t), \qquad H_{N,a}(t) = m_{N,a}\, \nu[N_a]\, f_{N,a}(t).$$

If $m_{S,a}\nu[S_a] = m_{N,a}\nu[N_a] = \alpha$, a constant independent of a, then $H_{S,a}(t) = \alpha f_{S,a}(t)$ and $H_{N,a}(t) = \alpha f_{N,a}(t)$. Let the signal and noise sizing densities be Gaussian with means $a_2 = b_2 + U$ and $a_1 = b_1 - V$, respectively, where U and V are random variables with U, V > 0. In particular, $a = (a_1, a_2)$. Assume that the means are sufficiently large that we can ignore the negative masses of the Gaussian densities (otherwise, they can be truncated). Figure 4.3 depicts the least-favorable signal and noise densities, whose means are at $b_2$ and $b_1$, respectively, and another pair of signal and noise densities with means at $a_2$ and $a_1$, respectively. It shows the optimal opening parameters for the least-favorable pair and the other pair, denoted by $t_{\bar a}$ and $t_a$, respectively. From the figure we observe that the
Figure 4.3 Least-favorable signal and noise densities (means $b_2$ and $b_1$), another pair of signal and noise densities (means at $a_2$ and $a_1$), and the optimal opening parameters for the least-favorable pair ($t_{\bar a}$) and the other pair ($t_a$). [Reprinted from (Dougherty and Chen, 2001).]
saddle-point inequalities of Eqs. 4.118 and 4.119 are satisfied. Hence, the minimax robust opening has parameter $t_{\bar a}$. ▪

To illustrate the conservative nature of the minimax approach, utilizing Fig. 4.3 we consider two nonexclusive situations: (1) the signal and noise GSDs can be strongly separated for certain states, and (2) the GSDs can be very close. First, let c be any point for which $b_1 < c < b_2$, a condition satisfied by both $t_a$ and $t_{\bar a}$, and assume that the mean grain counts are constant for a ∈ U, say, $m_{S,a} = m_S$ and $m_{N,a} = m_N$. Suppose that we apply $\Xi_{\bar a}$ but let $a_2 \to \infty$, which means that the signal and noise GSDs are separating. Referring to Eq. 4.117 with $h(S_a)$ being the identity, since $g_S(a; t)\, f_{S,a}(t) \to 0$ for $t \le c$, by dominated convergence,

$$\lim_{a_2 \to \infty} m_S \int_0^c g_S(a_2; t)\, f_{S,a_2}(t)\, dt = 0, \tag{4.120}$$

where we have indicated by the notation that $g_S$ and $f_{S,a}$ do not depend on $a_1$. Therefore,

$$\lim_{a_2 \to \infty} \varepsilon_a[\Xi_{\bar a}] = m_N \int_{t_{\bar a}}^{\infty} g_N(a_1; t)\, f_{N,a_1}(t)\, dt, \tag{4.121}$$

where the notation indicates that $g_N$ and $f_{N,a}$ do not depend on $a_2$. However,

$$\lim_{a_2 \to \infty} \varepsilon_a[\Xi_a] = \lim_{a_2 \to \infty} m_N \int_{t_a}^{\infty} g_N(a_1; t)\, f_{N,a_1}(t)\, dt = 0. \tag{4.122}$$
The optimal filter for a yields almost perfect restoration for large $a_2$ (Eq. 4.122), but the minimax robust filter has an error bounded below by an integral that can be substantial (Eq. 4.121). Suppose that there is a probability distribution p(a) governing the likelihood of a and its mass is concentrated with $a_2$ much larger than $a_1$. Choosing an opening parameter midway between the means of $a_1$ and $a_2$ will produce a small expected error. Minimax does not take into account the distribution of mass in the uncertainty class. Now consider the case where $b_1$ and $b_2$ are close. By Fatou's lemma,

$$\liminf_{b_1, b_2 \to c} \varepsilon_a[\Xi_a] \ge m_S \int_0^c \liminf_{b_1, b_2 \to c} g_S(b_1, b_2; t)\, f_{S,(b_1, b_2)}(t)\, dt + m_N \int_c^{\infty} \liminf_{b_1, b_2 \to c} g_N(b_1, b_2; t)\, f_{N,(b_1, b_2)}(t)\, dt. \tag{4.123}$$
For the Gaussian case, as $b_1, b_2 \to c$, the means of the Gaussians tend towards c. Thus, recalling the definition g of Eq. 3.165, the first and second integrals in the preceding equation tend to approximately one-half of the primary signal and noise grain volumes, respectively. A large error is expected since the signal and noise are becoming increasingly similar. Taking the distribution of a into account could produce a filter with smaller expected error. If there is a high probability that the true state is near the least-favorable state, then the conservative nature of minimax robustness is not too detrimental; but if the probability mass of the states favors widely separated signal and noise GSDs, or states not close to the least-favorable state, then a minimax robust filter is likely to perform poorly. The analysis for single-component pass and fail sets extends to more general bandpass filters, albeit often with increased difficulty. We illustrate the matter for a single pass band between two nonpass bands. Suppose that $H_{S,a}(t) \ge H_{N,a}(t)$ for $t_a \le t \le y_a$, and $H_{S,a}(t) < H_{N,a}(t)$ for $t < t_a$ and $t > y_a$. The saddle-point inequalities become

$$\int_0^{t_{\bar a}} H_{S,\bar a}(t)\, dt + \int_{y_{\bar a}}^{\infty} H_{S,\bar a}(t)\, dt \ge \int_0^{t_{\bar a}} H_{S,a}(t)\, dt + \int_{y_{\bar a}}^{\infty} H_{S,a}(t)\, dt, \tag{4.124}$$

$$\int_{t_{\bar a}}^{y_{\bar a}} H_{N,\bar a}(t)\, dt \ge \int_{t_{\bar a}}^{y_{\bar a}} H_{N,a}(t)\, dt. \tag{4.125}$$
A theorem in (Dougherty and Chen, 2001) provides conditions under which a GBF with a closed-interval pass band is a minimax robust filter.
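The saddle-point reasoning of Example 4.5 can be verified numerically for the Gaussian size-density model. The sketch below assumes unit weight α, a common standard deviation for signal and noise densities, and illustrative means; with equal standard deviations the densities cross at the midpoint of the means, which serves as the optimal opening parameter.

```python
import math

# Illustration of Example 4.5: Gaussian signal/noise granulometric size
# densities with alpha = 1 and common std SD; all numerical values assumed.
SD = 0.5
b1, b2 = 2.0, 4.0                        # least-favorable noise/signal means

def Phi(x, mu):
    """Gaussian CDF with mean mu and std SD."""
    return 0.5 * (1.0 + math.erf((x - mu) / (SD * math.sqrt(2.0))))

def err(t, mu_noise, mu_signal):
    """Opening error: signal mass below t plus noise mass above t (Eq. 3.154)."""
    return Phi(t, mu_signal) + (1.0 - Phi(t, mu_noise))

def t_opt(mu_noise, mu_signal):
    return 0.5 * (mu_noise + mu_signal)  # equal-variance densities cross here

t_bar = t_opt(b1, b2)                    # parameter of the minimax opening
for U, V in [(0.5, 0.5), (1.5, 0.3), (3.0, 1.0)]:   # other states a = (a1, a2)
    a1, a2 = b1 - V, b2 + U
    # Saddle-point inequalities (Eqs. 4.115 and 4.116):
    assert Phi(t_bar, b2) >= Phi(t_bar, a2)          # signal mass below t_bar
    assert 1 - Phi(t_bar, b1) >= 1 - Phi(t_bar, a1)  # noise mass above t_bar
    # Hence the opening with parameter t_bar has its worst error at (b1, b2):
    assert err(t_bar, a1, a2) <= err(t_bar, b1, b2) + 1e-12
print("saddle-point inequalities satisfied; minimax parameter:", t_bar)
```

Separating the means (large U) drives the state-optimal error toward zero while the minimax opening retains the error it incurs at the least-favorable pair, illustrating the conservatism discussed above.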
4.6 IBR Kalman Filtering

Often, the noise covariance matrices for the Kalman filter are unknown. A common strategy to address this situation is adaptive Kalman filtering,
which simultaneously estimates the noise covariances along with the state estimation (Mehra, 1970; Myers and Tapley, 1976; Durovic and Kovacevic, 1999; Sarkka and Nummenmaa, 2009). Finite-impulse-response analogues have also been proposed (Kwon et al., 1989; Kwon et al., 1990; Shmaliy, 2010, 2011). A regularized least-squares framework that takes into account unknown modeling-error parameters has been employed, in which the unknown parameters embody the deviation of the model parameters from their nominal values (Sayed, 2001). Another approach is based on penalizing the sensitivity of estimation relative to modeling errors. It assumes that the model matrices are differentiable functions of some unknown modeling-error parameter u (Zhou, 2010) and has been extended to the situation in which the observation can be randomly lost (Zhou, 2016). Here we take an IBR approach to Kalman filtering that involves extending Theorem 3.9 to the situation where the model is uncertain. In effect, this means extending the relevant equations to their effective counterparts. We follow (Dehghannasiri et al., 2017a), which extends the Kalman-filter development of (Kailath, 1968) to the Bayesian setting. Thus, whereas we arrived at Theorem 3.9 via the projection theory in conjunction with the Gauss–Markov theorem, here we will utilize a Bayesian form of the orthogonality principle. Before proceeding to IBR filtering, we restate the Kalman recursion equations in a slightly different form (Kailath, 1968). We use lower case for random vectors: X, Y, and Z become x, y, and z, respectively, and the state and measurement equations become
(4.126)
xk ¼ Hk yk þ nk :
(4.127)
We let zk ¼ Hkyk and distinguish between the error-covariance matrix for yk given in Eq. 3.97, which is now denoted as Eykj j , and the error-covariance matrix for zk, denoted by Ezkj j . These are related via Ezkj j ¼ Hk Eykj j H0k :
(4.128)
For Kalman filtering, we employ the discrete white-noise innovation process z˜ kjk1 ¼ xk zˆ kjk1 ,
(4.129)
where $\hat{z}_{k|k-1}$ is the optimal linear estimator of $z_k$ given all observations through k − 1. Note that $\hat{z}_{k|k-1} = H_k \hat{y}_{k|k-1}$. With these substitutions, the Kalman filter (Theorem 3.9) takes the form
$$\begin{aligned}
\hat{y}_{k+1|k+1} &= T_k \hat{y}_{k|k} + M_{k+1}\big(x_{k+1} - H_{k+1} T_k \hat{y}_{k|k}\big) \\
&= \hat{y}_{k+1|k} + M_{k+1}\big(x_{k+1} - H_{k+1} \hat{y}_{k+1|k}\big) \\
&= \hat{y}_{k+1|k} + M_{k+1} \tilde{z}_{k+1|k},
\end{aligned} \tag{4.130}$$

where the Kalman gain matrix is

$$M_k = E_{k|k-1}^y H_k' \big(H_k E_{k|k-1}^y H_k' + R_k\big)^{+} = E_{k|k-1}^y H_k' \big(E_{k|k-1}^z + R_k\big)^{+}, \tag{4.131}$$

the covariance extrapolation is

$$E_{k+1|k}^y = T_k E_{k|k}^y T_k' + Q_k = T_k (I - M_k H_k) E_{k|k-1}^y T_k' + Q_k, \tag{4.132}$$

and the covariance update is

$$E_{k|k}^y = E_{k|k-1}^y - M_k H_k E_{k|k-1}^y = (I - M_k H_k) E_{k|k-1}^y. \tag{4.133}$$
The required recursive equations for the Kalman filter are given by Eqs. 4.129 through 4.132. In this section we will derive the IBR Kalman predictor. As noted previously, the derivation for the classical Kalman filter in (Kailath, 1968) utilizes the orthogonality principle, as was done in Section 3.2.2 for linearly independent observations, which means that matrix inversion is possible and there is no need for the pseudo-inverse. In accordance with (Kailath, 1968; Kalman and Bucy, 1961), the recursive equations for the Kalman predictor are given by

$$\tilde{z}_{k|k-1} = x_k - \hat{z}_{k|k-1}, \tag{4.134a}$$

$$M_k = E_{k|k-1}^y H_k' \big(E_{k|k-1}^z + R_k\big)^{-1}, \tag{4.134b}$$

$$\hat{y}_{k+1|k} = T_k \big(\hat{y}_{k|k-1} + M_k \tilde{z}_{k|k-1}\big), \tag{4.134c}$$

$$E_{k+1|k}^y = T_k (I - M_k H_k) E_{k|k-1}^y T_k' + Q_k. \tag{4.134d}$$
In Chapter 3 and to this point in the current section, when treating classical Kalman filtering, we have used the notation $\hat{y}_{k|k}$ and $\hat{y}_{k|k-1}$ to denote estimates at k based on observations through k and k − 1, respectively, that is, for the filter and predictor. Thus, for the filter we have $\hat{z}_{k|k}$ and $\tilde{z}_{k|k}$, and for the predictor we have $\hat{z}_{k|k-1}$ and $\tilde{z}_{k|k-1}$. Analogously, for the error-covariance matrix, $E_{k|k}$ and $E_{k|k-1}$ apply to estimates based on observations through k and through k − 1, respectively. Going forward we will continue to use the same notation for observations through k; however, when working in the framework of an uncertainty class, to ease the notation we shall use $\hat{y}_k$, $\hat{z}_k$, $\tilde{z}_k$, and $E_k$ to refer to the entities based on observations through k − 1. In particular, the recursive equations for the classical Kalman predictor take the form

$$\tilde{z}_k = x_k - \hat{z}_k, \tag{4.135a}$$

$$M_k = E_k^y H_k' \big(E_k^z + R_k\big)^{-1}, \tag{4.135b}$$

$$\hat{y}_{k+1} = T_k \big(\hat{y}_k + M_k \tilde{z}_k\big), \tag{4.135c}$$

$$E_{k+1}^y = T_k (I - M_k H_k) E_k^y T_k' + Q_k. \tag{4.135d}$$
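The classical predictor recursion of Eqs. 4.135a through 4.135d can be sketched directly in code. The scalar constant-velocity system, the covariances, and the run length below are assumptions chosen only to exercise the recursion.

```python
import numpy as np

# Minimal classical Kalman predictor implementing Eqs. 4.135a-d for a known
# (certain) model; the constant-velocity system here is only an illustration.
rng = np.random.default_rng(0)
T = np.array([[1.0, 1.0], [0.0, 1.0]])   # state matrix T_k (time-invariant)
H = np.array([[1.0, 0.0]])               # observation matrix H_k
Q = 0.01 * np.eye(2)                     # process-noise covariance Q_k
R = np.array([[0.25]])                   # observation-noise covariance R_k

y = np.zeros(2)                          # true state y_k
y_hat = np.zeros(2)                      # predictor y_hat_k (obs through k-1)
E = np.eye(2)                            # error covariance E_k^y

for k in range(200):
    x = H @ y + rng.multivariate_normal(np.zeros(1), R)   # observation x_k
    z_tilde = x - H @ y_hat                               # innovation (4.135a)
    M = E @ H.T @ np.linalg.inv(H @ E @ H.T + R)          # gain     (4.135b)
    y_hat = T @ (y_hat + M @ z_tilde)                     # update   (4.135c)
    E = T @ (np.eye(2) - M @ H) @ E @ T.T + Q             # covariance (4.135d)
    y = T @ y + rng.multivariate_normal(np.zeros(2), Q)   # advance true state

print("final prediction error:", y - y_hat)
```

In the gain step, $E_k^z + R_k$ is formed as $H E_k^y H' + R$, using the relation of Eq. 4.128; matrix inversion suffices because the single noisy observation channel makes that matrix positive definite.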
Uncertainty is introduced into dynamic recursive filtering by assuming that the covariance matrices of the process and observation noise are unknown. Let $u_1$ and $u_2$ be two unknown parameters for the process and observation noise covariance matrices, respectively:

$$E[u_k^{u_1} (u_l^{u_1})'] = Q_k^{u_1}\, \delta_{kl}, \tag{4.136}$$

$$E[n_k^{u_2} (n_l^{u_2})'] = R_k^{u_2}\, \delta_{kl}, \tag{4.137}$$

where equality is in probability. The state-space model is completely determined by $u = (u_1, u_2) \in U$. Letting p(u) be the prior distribution governing U and assuming that $u_1$ and $u_2$ are independent, $p(u) = p(u_1)\, p(u_2)$, and the state-space model is parameterized as

$$y_{k+1}^{u_1} = T_k y_k^{u_1} + u_k^{u_1}, \tag{4.138}$$

$$x_k^u = H_k y_k^{u_1} + n_k^{u_2}. \tag{4.139}$$
The state $y_k^{u_1}$ depends only on the process-noise parameter $u_1$, but the observation $x_k^u$ depends on both $u_1$ and the observation-noise parameter $u_2$.

4.6.1 IBR Kalman predictor

Let G be the vector space of all n × m matrix-valued functions $G_{k,l}$ such that

$$\sum_{k=1}^{\infty} \sum_{l=1}^{\infty} \|G_{k,l}\|_2 < \infty, \tag{4.140}$$
130
Chapter 4
yˆ uk ¼
X
u GU k,l xl ,
(4.141)
l≤k1
where 0 X X u1 u1 u u E y y : (4.142) ¼ arg min E G x G x GU U k,l l k,l l k,l k k Gk,l ∈G
l≤k1
l≤k1
Letting zuk ¼ Hk yuk , zˆ uk ¼ Hk yˆ uk :
(4.143)
If yˆ uk is the Bayesian MMSE estimate at time k for yuk1 , then the zero-mean process z˜ uk ¼ xuk zˆ uk
(4.144)
is called the Bayesian innovation process. In the present context, we restate the orthogonality principle in the form of Eq. 3.23 in terms of the inner product defined by Eu[E[•]] as applied to yuk1 , xul , and yˆ uk , keeping in mind that yuk1 depends only on u1, whereas xˆ uk depends on u ¼ [u1, u2]. Theorem 4.3 (Bayesian Orthogonality Principle). A linear estimator with weighting function GU k,l as defined in Eq. 4.141 satisfies Eq. 4.142 if and only if E U ½E½ðyuk1 yˆ uk Þðxul Þ0 ¼ 0
(4.145)
for all l ≤ k 1.
▪
We extend the expression for the covariance of the innovation process, E½˜zk z˜ 0l ¼ ðEzk þ Rk Þdkl
(4.146)
(Kailath, 1968), to the Bayesian innovation process. If we define u1 ˆ uk Þðzuk1 zˆ uk Þ0 , Ez,u k ¼ E½ðzk z
(4.147)
u2 E U ½E½˜zuk ð˜zul Þ0 ¼ E U ½Ez,u k þ Rk dkl :
(4.148)
then
To see this, first suppose that k ≠ l. Without loss of generality, assume that k > l. Then
Optimal Robust Filtering
131
E U ½E½˜zuk ð˜zul Þ0 ¼ E U ½E½ðzuk1 þ nuk2 zˆ uk Þðzul 1 þ nul 2 zˆ ul Þ0 ¼ E U ½E½ðzuk1 zˆ uk Þðzul 1 þ nul 2 zˆ ul Þ0 þ E U ½E½nuk2 ðnul 2 Þ0 þ E U ½E½nuk2 ðzul 1 zˆ uk Þ0 ,
(4.149)
¼ 0, where the first term is zero because of the Bayesian orthogonality principle, and the third term is zero because future observation noise is independent from the past observations. Now, for k ¼ l, E U ½E½˜zuk ð˜zuk Þ0 ¼ E U ½E½ðzuk1 þ nuk2 zˆ uk Þðzuk1 þ nuk2 zˆ uk Þ0 ¼ E U ½E½ðzuk1 zˆ uk Þðzuk1 zˆ uk Þ0 þ nuk2 ðnuk2 Þ0 þ 2E U ½E½nuk2 ðzuk1 zˆ uk Þ0
(4.150)
u2 ¼ E U ½Ez,u k þ Rk ,
where the third term in the second equality is zero because yˆ uk and zuk1 are both independent of the observation noise at time k. Theorem 4.4 (Dehghannasiri, et al., 2017a). A Bayesian MMSE linear estimate for yuk using the Bayesian innovation process z˜ ul , l ≤ k 1, is a Bayesian MMSE linear estimate for yuk based on observations xul , l ≤ k 1. Proof. For each u, we show that, if ̬
yuk ¼
X
̬
U
Gk,l z˜ ul ,
(4.151)
l≤k1
with ̬
E U ½E½ðyuk1 yuk Þð˜zul Þ0 ¼ 0
(4.152)
for l ≤ k 1, and ̬
z˜ ul ¼ xul Hl yul ,
(4.153)
then ̬
E U ½E½ðyuk1 yuk Þðxul Þ0 ¼ 0
(4.154)
also holds, which, according to the Bayesian orthogonality principle, implies ̬ that yuk is a Bayesian MMSE linear estimate given the observation process xul . Now, for any l ≤ k 1,
132
Chapter 4
$$\begin{aligned} 0 &= E_U[E[(y_k^{u_1} - \breve{y}_k^u)(\tilde{z}_l^u)']] \\ &= E_U\bigg[E\bigg[(y_k^{u_1} - \breve{y}_k^u)\bigg((x_l^u)' - \sum_{l' \le l-1}(\tilde{z}_{l'}^u)'(\breve{G}_{l,l'}^U)'H_l'\bigg)\bigg]\bigg] \\ &= E_U[E[(y_k^{u_1} - \breve{y}_k^u)(x_l^u)']] - \sum_{l' \le l-1} E_U[E[(y_k^{u_1} - \breve{y}_k^u)(\tilde{z}_{l'}^u)']](\breve{G}_{l,l'}^U)'H_l' \\ &= E_U[E[(y_k^{u_1} - \breve{y}_k^u)(x_l^u)']], \end{aligned} \tag{4.155}$$

where the last equality follows from Eq. 4.152. ▪
According to Theorem 4.4, to derive recursive equations for the IBR Kalman predictor, $\hat{y}_k^u$ can be found using $\tilde{z}_l^u$ via a linear operator of the form

$$\hat{y}_k^u = \sum_{l \le k-1} G_{k,l}^U \tilde{z}_l^u. \tag{4.156}$$

To find $G_{k,l}^U$, $l \le k-1$, such that this equation holds, we apply the Bayesian orthogonality principle. For $l \le k-1$,
$$\begin{aligned} 0 &= E_U[E[(y_k^{u_1} - \hat{y}_k^u)(\tilde{z}_l^u)']] \\ &= E_U\bigg[E\bigg[\bigg(y_k^{u_1} - \sum_{i \le k-1} G_{k,i}^U \tilde{z}_i^u\bigg)(\tilde{z}_l^u)'\bigg]\bigg] \\ &= E_U\bigg[E[y_k^{u_1}(\tilde{z}_l^u)'] - E\bigg[\sum_{i \le k-1} G_{k,i}^U \tilde{z}_i^u(\tilde{z}_l^u)'\bigg]\bigg]. \end{aligned} \tag{4.157}$$

(Observe the similarity to Eq. 3.23.) Hence, for $l \le k-1$,

$$\begin{aligned} E_U[E[y_k^{u_1}(\tilde{z}_l^u)']] &= E_U\bigg[E\bigg[\sum_{i \le k-1} G_{k,i}^U \tilde{z}_i^u(\tilde{z}_l^u)'\bigg]\bigg] \\ &= \sum_{i \le k-1} G_{k,i}^U E_U[E[\tilde{z}_i^u(\tilde{z}_l^u)']] \\ &= \sum_{i \le k-1} G_{k,i}^U E_U[E_i^{z,u} + R_i^{u_2}]\delta_{li} \\ &= G_{k,l}^U E_U[E_l^{z,u} + R_l^{u_2}], \end{aligned} \tag{4.158}$$

where the third equality follows from Eq. 4.148. Assuming that $E_U[E_l^{z,u} + R_l^{u_2}]$ is nonsingular,

$$G_{k,l}^U = E_U[E[y_k^{u_1}(\tilde{z}_l^u)']]\big(E_U[E_l^{z,u} + R_l^{u_2}]\big)^{-1}. \tag{4.159}$$
(Observe the similarity to Eq. 3.27.) Plugging this expression for $G_{k,l}^U$ into Eq. 4.156 yields

$$\hat{y}_k^u = \sum_{l \le k-1} E_U[E[y_k^{u_1}(\tilde{z}_l^u)']]\big(E_U[E_l^{z,u} + R_l^{u_2}]\big)^{-1}\tilde{z}_l^u. \tag{4.160}$$

Since $E_U[E_l^{z,u}]$ is a positive semi-definite matrix, a sufficient condition for the nonsingularity of $E_U[E_l^{z,u} + R_l^{u_2}]$ is that $E_{U_2}[R_l^{u_2}]$ is positive definite, which is assured if the observations of all components of the state vector are noisy. An update equation is given by
$$\begin{aligned} \hat{y}_{k+1}^u &= \sum_{l \le k} E_U[E[y_{k+1}^{u_1}(\tilde{z}_l^u)']]\big(E_U[E_l^{z,u} + R_l^{u_2}]\big)^{-1}\tilde{z}_l^u \\ &= \sum_{l \le k} T_k E_U[E[y_k^{u_1}(\tilde{z}_l^u)']]\big(E_U[E_l^{z,u} + R_l^{u_2}]\big)^{-1}\tilde{z}_l^u \\ &= \sum_{l \le k-1}\big\{T_k E_U[E[y_k^{u_1}(\tilde{z}_l^u)']]\big(E_U[E_l^{z,u} + R_l^{u_2}]\big)^{-1}\tilde{z}_l^u\big\} + T_k M_k^U\tilde{z}_k^u \\ &= T_k\hat{y}_k^u + T_k M_k^U\tilde{z}_k^u, \end{aligned} \tag{4.161}$$

where the second equality follows from the state equation and the fact that $\tilde{z}_l^u$ is independent of $u_k^{u_1}$ for $l \le k$, the third equality utilizes the effective Kalman gain matrix

$$M_k^U = E_U[E[y_k^{u_1}(\tilde{z}_k^u)']]\big(E_U[E_k^{z,u} + R_k^{u_2}]\big)^{-1}, \tag{4.162}$$

and the last equality results from the expression for $\hat{y}_k^u$ in Eq. 4.160.

To reformulate the effective Kalman gain matrix, first observe that

$$\begin{aligned} E_U[E[y_k^{u_1}(\tilde{z}_k^u)']] &= E_U[E[y_k^{u_1}((y_k^{u_1})'H_k' + (n_k^{u_2})' - (\hat{y}_k^u)'H_k')]] \\ &= E_U[E[y_k^{u_1}(y_k^{u_1} - \hat{y}_k^u)']]H_k' \\ &= E_U[E[(\hat{y}_k^u + y_k^{u_1} - \hat{y}_k^u)(y_k^{u_1} - \hat{y}_k^u)']]H_k' \\ &= E_U[E[(y_k^{u_1} - \hat{y}_k^u)(y_k^{u_1} - \hat{y}_k^u)']]H_k' \\ &= E_U[E_k^{y,u}]H_k', \end{aligned} \tag{4.163}$$

where $E_k^{y,u}$ is the Bayesian error-covariance matrix relative to $u$ at time $k$ (Eq. 3.76). Substituting Eq. 4.163 into Eq. 4.162 gives the effective Kalman gain matrix as

$$M_k^U = E_U[E_k^{y,u}]H_k'\big(E_U[E_k^{z,u} + R_k^{u_2}]\big)^{-1}. \tag{4.164}$$
To obtain an update equation for the error-covariance matrix, we first obtain an update equation for the error:

$$\begin{aligned} y_{k+1}^{u_1} - \hat{y}_{k+1}^u &= T_k y_k^{u_1} + u_k^{u_1} - T_k M_k^U \tilde{z}_k^u - T_k \hat{y}_k^u \\ &= T_k(y_k^{u_1} - \hat{y}_k^u) + u_k^{u_1} - T_k M_k^U H_k(y_k^{u_1} - \hat{y}_k^u) - T_k M_k^U n_k^{u_2} \\ &= T_k(I - M_k^U H_k)(y_k^{u_1} - \hat{y}_k^u) + u_k^{u_1} - T_k M_k^U n_k^{u_2}, \end{aligned} \tag{4.165}$$

where the state equation and Eq. 4.161 are used to obtain the first equality, and the measurement equation and Eq. 4.144 are used to get the second equality. Hence,

$$\begin{aligned} E_{k+1}^{y,u} &= E[(y_{k+1}^{u_1} - \hat{y}_{k+1}^u)(y_{k+1}^{u_1} - \hat{y}_{k+1}^u)'] \\ &= T_k(I - M_k^U H_k)E_k^{y,u}(I - M_k^U H_k)'T_k' + Q_k^{u_1} + T_k M_k^U R_k^{u_2}(M_k^U)'T_k'. \end{aligned} \tag{4.166}$$

Taking the expectation yields

$$\begin{aligned} E_U[E_{k+1}^{y,u}] &= T_k(I - M_k^U H_k)E_U[E_k^{y,u}](I - M_k^U H_k)'T_k' + E_{U_1}[Q_k^{u_1}] + T_k M_k^U E_{U_2}[R_k^{u_2}](M_k^U)'T_k' \\ &= T_k(I - M_k^U H_k)E_U[E_k^{y,u}]T_k' + E_{U_1}[Q_k^{u_1}], \end{aligned} \tag{4.167}$$

where the second equality results from

$$M_k^U H_k E_U[E_k^{y,u}]H_k' = E_U[E_k^{y,u}]H_k' - M_k^U E_{U_2}[R_k^{u_2}], \tag{4.168}$$

which can be found by rearranging the equation for the effective Kalman gain matrix in Eq. 4.164.

The following equations summarize the IBR Kalman predictor:

$$\tilde{z}_k^u = x_k^u - \hat{z}_k^u, \tag{4.169}$$

$$M_k^U = E_U[E_k^{y,u}]H_k'\big(E_U[E_k^{z,u} + R_k^{u_2}]\big)^{-1}, \tag{4.170}$$

$$\hat{y}_{k+1}^u = T_k(\hat{y}_k^u + M_k^U\tilde{z}_k^u), \tag{4.171}$$

$$E_U[E_{k+1}^{y,u}] = T_k(I - M_k^U H_k)E_U[E_k^{y,u}]T_k' + E_{U_1}[Q_k^{u_1}]. \tag{4.172}$$

The structure of these equations is similar to that of the classical Kalman predictor, with $M_k$, $E_k^y$, $Q_k$, and $R_k$ replaced by their effective counterparts $M_k^U$, $E_U[E_k^{y,u}]$, $E_{U_1}[Q_k^{u_1}]$, and $E_{U_2}[R_k^{u_2}]$, respectively. Initial conditions for the IBR Kalman predictor can be set to $E_U[E_0^{y,u}] = \mathrm{cov}[y_0]$ and $\hat{y}_0^u = E[y_0]$. $y_0$ does not depend on $u$ because it is independent of future process and observation noise.
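To make the recursion concrete, the following sketch implements Eqs. 4.169–4.172 for a finite uncertainty class, where the expectations $E_{U_1}[Q_k^{u_1}]$ and $E_{U_2}[R_k^{u_2}]$ reduce to prior-weighted sums; it also uses $E_U[E_k^{z,u}] = H_k E_U[E_k^{y,u}] H_k'$. The function name and the time-invariant system matrices are illustrative, not the book's.

```python
import numpy as np

def ibr_kalman_predictor(T, H, Q_list, R_list, p, x_obs, y0_mean, y0_cov):
    """One pass of the IBR Kalman predictor (Eqs. 4.169-4.172).

    Q_list, R_list: noise covariances for each model in the uncertainty class;
    p: prior probabilities over the class. Expectations over U reduce to
    prior-weighted sums, giving the effective noise statistics.
    """
    EQ = sum(pi * Q for pi, Q in zip(p, Q_list))        # E_{U1}[Q^{u1}]
    ER = sum(pi * R for pi, R in zip(p, R_list))        # E_{U2}[R^{u2}]
    y_hat = y0_mean.copy()                              # \hat{y}_0 = E[y_0]
    P = y0_cov.copy()                                   # E_U[E_0^{y,u}] = cov[y_0]
    I = np.eye(len(y_hat))
    estimates = []
    for x in x_obs:
        z_tilde = x - H @ y_hat                             # innovation (4.169)
        M = P @ H.T @ np.linalg.inv(H @ P @ H.T + ER)       # effective gain (4.170)
        y_hat = T @ (y_hat + M @ z_tilde)                   # state update (4.171)
        P = T @ (I - M @ H) @ P @ T.T + EQ                  # covariance update (4.172)
        estimates.append(y_hat)
    return np.array(estimates), P
```

Because the effective statistics are fixed by the prior, the recursion has exactly the cost of a single classical Kalman predictor.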
4.6.2 IBR Kalman filter

In this subsection, the observation $x_k^u$ is also utilized for estimating $y_k^{u_1}$. Hence, Eq. 4.156 becomes

$$\hat{y}_{k|k}^u = \sum_{l \le k} G_{k,l}^U \tilde{z}_l^u. \tag{4.173}$$

Following similar steps as in Eqs. 4.157 through 4.159, but for $l \le k$, yields

$$\hat{y}_{k|k}^u = \sum_{l \le k} E_U[E[y_k^{u_1}(\tilde{z}_l^u)']]\big(E_U[E_l^{z,u} + R_l^{u_2}]\big)^{-1}\tilde{z}_l^u. \tag{4.174}$$

Comparing this with the second equality in Eq. 4.161, we see that

$$\hat{y}_{k+1}^u = T_k \hat{y}_{k|k}^u. \tag{4.175}$$
Similar to Eq. 4.161, we find the update equation for $\hat{y}_{k|k}^u$:

$$\begin{aligned} \hat{y}_{k+1|k+1}^u &= \sum_{l \le k+1} E_U[E[y_{k+1}^{u_1}(\tilde{z}_l^u)']]\big(E_U[E_l^{z,u} + R_l^{u_2}]\big)^{-1}\tilde{z}_l^u \\ &= \sum_{l \le k} E_U[E[y_{k+1}^{u_1}(\tilde{z}_l^u)']]\big(E_U[E_l^{z,u} + R_l^{u_2}]\big)^{-1}\tilde{z}_l^u \\ &\quad + E_U[E[y_{k+1}^{u_1}(\tilde{z}_{k+1}^u)']]\big(E_U[E_{k+1}^{z,u} + R_{k+1}^{u_2}]\big)^{-1}\tilde{z}_{k+1}^u \\ &= T_k\hat{y}_{k|k}^u + M_{k+1}^U\tilde{z}_{k+1}^u \\ &= \hat{y}_{k+1}^u + M_{k+1}^U\tilde{z}_{k+1}^u. \end{aligned} \tag{4.176}$$

To find the update equation for $E_U[E_{k|k}^{y,u}]$, first observe that

$$\begin{aligned} y_{k+1}^{u_1} - \hat{y}_{k+1|k+1}^u &= y_{k+1}^{u_1} - \hat{y}_{k+1}^u - M_{k+1}^U(x_{k+1}^u - H_{k+1}\hat{y}_{k+1}^u) \\ &= y_{k+1}^{u_1} - \hat{y}_{k+1}^u - M_{k+1}^U\big(H_{k+1}(y_{k+1}^{u_1} - \hat{y}_{k+1}^u) + n_{k+1}^{u_2}\big) \\ &= (I - M_{k+1}^U H_{k+1})(y_{k+1}^{u_1} - \hat{y}_{k+1}^u) - M_{k+1}^U n_{k+1}^{u_2}, \end{aligned} \tag{4.177}$$

where the first equality follows from the preceding equation and the definition of $\tilde{z}_{k+1}^u$, whereas the second follows from the measurement equation. Hence,

$$\begin{aligned} E_U[E_{k+1|k+1}^{y,u}] &= E_U[E[(y_{k+1}^{u_1} - \hat{y}_{k+1|k+1}^u)(y_{k+1}^{u_1} - \hat{y}_{k+1|k+1}^u)']] \\ &= (I - M_{k+1}^U H_{k+1})E_U[E_{k+1}^{y,u}](I - M_{k+1}^U H_{k+1})' + M_{k+1}^U E_{U_2}[R_{k+1}^{u_2}](M_{k+1}^U)' \\ &= (I - M_{k+1}^U H_{k+1})E_U[E_{k+1}^{y,u}], \end{aligned} \tag{4.178}$$
where Eq. 4.168 is used to obtain the last equality. Equations 4.176 and 4.178 are similar to those for the classical Kalman filter except for the presence of $E_U[E_{k+1|k+1}^{y,u}]$ and the effective Kalman gain matrix $M_k^U$.

In this section we have been considering the IBR Kalman predictor and filter. Both utilize information in the prior distribution but do not utilize new information provided by observations. In (Dehghannasiri et al., 2018a) the IBR Kalman filter is extended to the optimal Bayesian Kalman filter, where the optimization is relative to the posterior distribution obtained from updating the prior distribution via new data. The Bayesian MMSE linear estimator in Eqs. 4.141 and 4.142 is defined similarly, except that the expectation with respect to $U$ in the latter equation is conditioned on the previous observations. The Bayesian orthogonality principle (Theorem 4.3) and the Bayesian innovation process extend directly in posterior form. The optimal Bayesian Kalman filter involves recursive equations similar to those of the classical Kalman filter, with the effective posterior noise statistics in place of the ordinary noise statistics. The effective posterior noise statistics correspond to the posterior distribution of the unknown noise parameters as observations are incorporated. The likelihood-function calculation for the unknown noise parameters is formulated as a message-passing algorithm running on a factor graph, and a closed-form solution for the update rules required for the likelihood-function calculation is obtained. After computing the likelihood function, posterior samples are generated via a Markov chain Monte Carlo (MCMC) method to compute the effective posterior noise statistics.

For a second filter extension, recall from Section 3.4.3 the Kalman smoother, which we have not yet discussed in either the certain or the uncertain setting.
The Kalman smoother goes back to the early 1960s (Bryson and Frazier, 1963), and an orthogonal-projection-based approach to the Kalman smoother was taken in 1967 (Meditch, 1967). The classical Kalman smoother recursively estimates states over a finite time window using all observations in the window, and assuming full knowledge of the noise model. In (Dehghannasiri et al., 2017d) an optimal Bayesian Kalman smoother is developed to obtain smoothed estimates that are optimal relative to the posterior distribution of the unknown noise parameters. The method uses a Bayesian innovation process and a posterior-based Bayesian orthogonality principle. The optimal Bayesian Kalman smoother possesses the same forward–backward structure as that of the ordinary Kalman smoother with the ordinary noise statistics replaced by their effective counterparts. It utilizes an effective Kalman smoothing gain for the backward step. It requires two forward steps. In the first forward step, the posterior effective noise statistics are computed. Then, using the obtained effective noise statistics, the optimal Bayesian Kalman filter in (Dehghannasiri et al., 2018a) is run in the forward direction over the window of observations. Finally, in the backward step, the Bayesian smoothed estimates are obtained.
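The filtered update of Eqs. 4.176 and 4.178 can be sketched as a single step, assuming the predicted quantities $\hat{y}_{k+1}^u$ and $E_U[E_{k+1}^{y,u}]$ and the effective observation-noise statistic have already been computed as in the predictor recursion; all names here are illustrative.

```python
import numpy as np

def ibr_filter_update(y_pred, P_pred, H, ER, x):
    """Filtered IBR estimate from the predicted one (Eqs. 4.176, 4.178).

    y_pred, P_pred: predicted state and expected error covariance;
    ER: effective observation-noise covariance E_{U2}[R^{u2}].
    """
    M = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + ER)  # effective gain
    z_tilde = x - H @ y_pred                                 # Bayesian innovation
    y_filt = y_pred + M @ z_tilde                            # Eq. 4.176
    P_filt = (np.eye(len(y_pred)) - M @ H) @ P_pred          # Eq. 4.178
    return y_filt, P_filt
```

As in the classical case, incorporating the current observation can only shrink the expected error covariance.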
4.6.3 Robustness of IBR Kalman filtering

Robustness concerns the performance of an arbitrary model-specific Kalman filter (predictor) when the parameters used for filter design differ from the parameters of the model to which the filter is applied. In the sequel, we use $f = [f_1, f_2]$ to denote the parameters used to design the Kalman filter and $u$ to denote the parameters of the true model. The weighting function for the model-specific Kalman predictor relative to $f$ is given by

$$G_{k,l}^f = E[y_k^{f_1}(\tilde{z}_l^f)'](E_l^{z,f} + R_l^{f_2})^{-1}, \tag{4.179}$$

where $\tilde{z}_l^f$ is the innovation process relative to model $f$ and

$$E_l^{z,f} = H_l E_l^{y,f} H_l'. \tag{4.180}$$

Owing to the mismatch between the filter parameter $f$ and the model parameter, the innovation process used for the estimation is

$$\tilde{z}_k(u, f) = x_k^u - \hat{z}_k(u, f), \tag{4.181}$$

where

$$\hat{z}_k(u, f) = H_k \hat{y}_k(u, f) \tag{4.182}$$

is the estimate when the $f$-specific Kalman predictor is applied to the $u$ model. Equations 4.156 and 4.160 take the forms

$$\hat{y}_k(u, f) = \sum_{l \le k-1} G_{k,l}^f \tilde{z}_l(u, f) \tag{4.183}$$

and

$$\hat{y}_k(u, f) = \sum_{l \le k-1} E[y_k^{f_1}(\tilde{z}_l^f)'](E_l^{z,f} + R_l^{f_2})^{-1}\tilde{z}_l(u, f), \tag{4.184}$$

respectively, which in turn give $\hat{z}_k(u, f)$ via Eq. 4.182 and then $\tilde{z}_k(u, f)$ via Eq. 4.181. Corresponding to Eq. 4.161, the recursive equation for $\hat{y}_k(u, f)$ is

$$\begin{aligned} \hat{y}_{k+1}(u, f) &= \sum_{l \le k} E[y_{k+1}^{f_1}(\tilde{z}_l^f)'](E_l^{z,f} + R_l^{f_2})^{-1}\tilde{z}_l(u, f) \\ &= T_k\hat{y}_k(u, f) + T_k M_k^f\tilde{z}_k(u, f), \end{aligned} \tag{4.185}$$

where $M_k^f$ is the Kalman gain matrix relative to $f$, which can be found using Eq. 4.134b with $E_k^y$ and $R_k$ replaced by $E_k^{y,f}$ and $R_k^{f_2}$, respectively.
The estimation error can be expressed as

$$\begin{aligned} y_{k+1}^{u_1} - \hat{y}_{k+1}(u, f) &= T_k y_k^{u_1} + u_k^{u_1} - T_k\hat{y}_k(u, f) - T_k M_k^f\tilde{z}_k(u, f) \\ &= T_k[y_k^{u_1} - \hat{y}_k(u, f)] + u_k^{u_1} - T_k M_k^f[H_k y_k^{u_1} + n_k^{u_2} - H_k\hat{y}_k(u, f)] \\ &= T_k[y_k^{u_1} - \hat{y}_k(u, f)] + u_k^{u_1} - T_k M_k^f[H_k(y_k^{u_1} - \hat{y}_k(u, f)) + n_k^{u_2}] \\ &= (T_k - T_k M_k^f H_k)[y_k^{u_1} - \hat{y}_k(u, f)] + u_k^{u_1} - T_k M_k^f n_k^{u_2}, \end{aligned} \tag{4.186}$$

where we have used Eq. 4.181 in the second equality. Let

$$E_k^y(u, f) = E[(y_k^{u_1} - \hat{y}_k(u, f))(y_k^{u_1} - \hat{y}_k(u, f))']. \tag{4.187}$$

Plugging in Eq. 4.186 yields

$$E_{k+1}^y(u, f) = T_k(I - M_k^f H_k)E_k^y(u, f)(I - M_k^f H_k)'T_k' + Q_k^{u_1} + T_k M_k^f R_k^{u_2}(M_k^f)'T_k'. \tag{4.188}$$

The MSE of the $f$-specific Kalman filter relative to $u$ is the trace $\mathrm{tr}(E_k^y(u, f))$. Taking the expectation relative to $u$ in Eq. 4.188 gives

$$\begin{aligned} E_U[E_{k+1}^y(u, f)] ={} & T_k(I - M_k^f H_k)E_U[E_k^y(u, f)](I - M_k^f H_k)'T_k' \\ & + E_{U_1}[Q_k^{u_1}] + T_k M_k^f E_{U_2}[R_k^{u_2}](M_k^f)'T_k'. \end{aligned} \tag{4.189}$$

To find $M_k^f$ and use it in Eq. 4.189 requires $E_k^{y,f}$. The update equation for $E_k^{y,f}$ can be computed similarly to Eq. 4.134d by using the matrices relative to parameter $f$, i.e., $M_k^f$ and $Q_k^{f_1}$. Therefore, to find the expected error-covariance matrix $E_U[E_k^y(u, f)]$ of an arbitrary $f$-specific Kalman filter, one needs to keep track of $E_U[E_k^y(u, f)]$ and $E_k^{y,f}$ simultaneously. The average MSE of the $f$-specific Kalman filter is the trace of $E_U[E_k^y(u, f)]$. Given the initialization

$$\hat{y}_0(u, f) = E[y_0], \tag{4.190a}$$

$$E_0^{y,f} = \mathrm{cov}[y_0], \tag{4.190b}$$

$$E_U[E_0^y(u, f)] = \mathrm{cov}[y_0], \tag{4.190c}$$
and keeping in mind Eqs. 4.182 and 4.180, the first iteration for the predictor designed for f and applied to u is given by
$$\tilde{z}_0(u, f) = x_0^u - \hat{z}_0(u, f), \tag{4.191a}$$

$$M_0^f = E_0^{y,f}H_0'(E_0^{z,f} + R_0^{f_2})^{-1}, \tag{4.191b}$$

$$\hat{y}_1(u, f) = T_0[\hat{y}_0(u, f) + M_0^f\tilde{z}_0(u, f)], \tag{4.191c}$$

$$E_1^{y,f} = T_0(I - M_0^f H_0)E_0^{y,f}T_0' + Q_0^{f_1}, \tag{4.191d}$$

with $E_U[E_1^y(u, f)]$ found via Eq. 4.189. According to Eq. 4.65, the robustness is

$$\kappa(u; f) = \mathrm{tr}(E_k^y(u, f)) - \mathrm{tr}(E_k^{y,u}). \tag{4.192}$$

From Eq. 4.66, the mean robustness is

$$\kappa(f) = \mathrm{tr}(E_U[E_k^y(u, f)]) - \mathrm{tr}(E_U[E_k^{y,u}]). \tag{4.193}$$
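The coupled recursions just described, tracking the design-model covariance $E_k^{y,f}$ (which drives the gain) alongside the mismatched covariance of Eq. 4.189, can be sketched as follows for a finite uncertainty class; the system matrices and two-model class in the test are hypothetical.

```python
import numpy as np

def mismatched_avg_covariance(T, H, Qf, Rf, Q_list, R_list, p, P0, n_steps):
    """Expected error covariance E_U[E_k^y(u,f)] of an f-specific Kalman
    filter applied across the uncertainty class (Eq. 4.189).

    Qf, Rf: design (f) noise covariances, used only to compute the gain;
    Q_list, R_list, p: true-model covariances and their prior probabilities.
    """
    EQ = sum(pi * Q for pi, Q in zip(p, Q_list))   # E_{U1}[Q^{u1}]
    ER = sum(pi * R for pi, R in zip(p, R_list))   # E_{U2}[R^{u2}]
    Pf = P0.copy()    # E_k^{y,f}: design-model covariance driving the gain
    Pav = P0.copy()   # E_U[E_k^y(u,f)]: average covariance under true models
    I = np.eye(P0.shape[0])
    for _ in range(n_steps):
        Mf = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + Rf)      # f-specific gain
        A = T @ (I - Mf @ H)
        Pav = A @ Pav @ A.T + EQ + T @ Mf @ ER @ Mf.T @ T.T   # Eq. 4.189
        Pf = T @ (I - Mf @ H) @ Pf @ T.T + Qf                 # Eq. 4.134d analog
    return Pav
```

The average MSE of the $f$-specific filter is then `np.trace(Pav)`, and the mean robustness of Eq. 4.193 is its difference from the trace of the IBR average covariance.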
In the next example we will compare the IBR Kalman filter to model-specific filters and to the steady-state minimax Kalman filter defined by

$$u_{\mathrm{minimax}} = \arg\min_{f \in U}\max_{u \in U}\lim_{k \to \infty}\mathrm{tr}(E_k^y(u, f)) \tag{4.194}$$

(assuming it exists). The average MSE of the steady-state minimax Kalman filter is simply the average MSE of the $u_{\mathrm{minimax}}$-specific Kalman filter. In the example we also employ a slightly different form of the state equation, in which the process noise is decomposed as $G_k u_k$, where $G_k$ need not be square, so that $u_k$ need not be of the same dimension as $y_k$. The only change in the classical Kalman recursive equations is that the second summand in Eq. 4.134d becomes $G_k Q_k G_k'$, and the second summand in Eq. 4.172 becomes $G_k E_{U_1}[Q_k^{u_1}]G_k'$.

Example 4.6. Following (Dehghannasiri et al., 2017a), we consider state estimation in sensor networks, specifically, the model used for sensor scheduling in traffic monitoring and management (Gupta et al., 2006). Consider a vehicle in two-dimensional space that can move in both dimensions. Acceleration is assumed to be zero except for a small perturbation regarded as the process noise. The state vector $y$ contains four components: $p_x$ (vehicle position in $x$), $p_y$ (vehicle position in $y$), $v_x$ (velocity in the $x$ direction), and $v_y$ (velocity in the $y$ direction). The discretization step size is denoted by $h$. Assuming that two sensors are utilized to measure the vehicle dynamics, the state-space model describing the dynamics is of the following form:
$$y_k = \begin{bmatrix} p_x \\ p_y \\ v_x \\ v_y \end{bmatrix}, \qquad T_k = \begin{bmatrix} 1 & 0 & h & 0 \\ 0 & 1 & 0 & h \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

$$G_k = \begin{bmatrix} h^2/2 & 0 \\ 0 & h^2/2 \\ h & 0 \\ 0 & h \end{bmatrix}, \qquad H_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.$$

We set $h = 0.5$. The process noise and observation noise covariance matrices are

$$Q_k^u = \begin{bmatrix} u & 0 \\ 0 & u \end{bmatrix}, \qquad R_k = \begin{bmatrix} 5 & 0 \\ 0 & 5 \end{bmatrix},$$

where $u \in [0.25, 3]$. The initial conditions are set to $\mathrm{cov}[y_0] = \frac{1}{2}I$ and $E[y_0] = 0$. We consider two priors for $u$: uniform and beta $B(0.1, 0.9)$ (scaled over the interval $[0.25, 3]$). The average MSEs for the model-specific, minimax, and IBR Kalman filters are given by the traces of $E_U[E_k^y(u, f)]$, $E_U[E_k^y(u, u_{\mathrm{minimax}})]$, and $E_U[E_k^{y,u}]$, respectively. Figure 4.4 compares average steady-state MSEs for the model-specific (designed at $u'$ on the x-axis), minimax, and IBR Kalman filters when $k$ is large enough that the steady state is reached. The minimum average MSE is achieved by applying the IBR Kalman filter.
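The model of Example 4.6 is straightforward to set up numerically; the sketch below builds the matrices above and approximates the prior expectation of $Q_k^u$ by averaging over a discretized uniform prior on $[0.25, 3]$. The grid size is our own choice for illustration, not part of the book's computation.

```python
import numpy as np

h = 0.5
T = np.array([[1, 0, h, 0],
              [0, 1, 0, h],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
G = np.array([[h**2 / 2, 0],
              [0, h**2 / 2],
              [h, 0],
              [0, h]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
R = 5.0 * np.eye(2)

# Discretized uniform prior over the process-noise parameter u in [0.25, 3];
# for Q^u = u*I, the effective statistic is E_{U1}[Q^u] = E[u] I.
u_grid = np.linspace(0.25, 3.0, 111)
EQ_eff = np.mean(u_grid) * np.eye(2)

# Effective process-noise summand entering Eq. 4.172 under the G_k u_k decomposition
GQG_eff = G @ EQ_eff @ G.T
```

Feeding `T`, `H`, `GQG_eff`, and `R` into the predictor recursion of Eqs. 4.169–4.172 reproduces the IBR filter for the uniform prior; a beta prior only changes the weights in the average.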
Figure 4.4 The average steady-state MSE obtained from applying different robust Kalman filters: (a) uniform prior and (b) beta prior. [Reprinted from (Dehghannasiri et al., 2017a).]
Figure 4.5 shows the performance of various Kalman filters over the uncertainty class of the process-noise parameter, where MSE values are in the steady state (approximately k ≥ 10). Figure 4.5(b) shows the MSE surface when the u0 -specific Kalman filter is applied to the model with parameter u. Figures 4.5(c) and (d) illustrate the steady-state MSE of the IBR and minimax Kalman filters over the entire uncertainty class when priors are either uniform or beta. The lower bound for MSE for each u is determined by the u-specific Kalman filter. The IBR Kalman filter generally does not perform as well as the u-specific Kalman filter at u and sometimes has even worse performance
Figure 4.5 Performance analysis of different Kalman filter designs over the uncertainty class. (a) Different priors considered for the unknown process noise variance u. (b) MSE surface that shows the performance of the model-specific Kalman filters when there is mismatch between the parameter value used for the filter design and the parameter value of the model. (c) MSE performance of various Kalman filters over the uncertainty class when the prior is uniform. (d) MSE performance of various Kalman filters over the uncertainty class for the beta prior. [Reprinted from (Dehghannasiri et al., 2017a).]
than the minimax filter. This behavior is normal. The IBR filter is optimal relative to the prior and performs well in the regions of the uncertainty class where there is considerable prior probability mass. For example, in Fig. 4.5(d), the IBR Kalman filter performs very close to the $u$-specific Kalman filter and much better than the minimax filter for $u < 1.25$, most of the prior probability mass being concentrated in this region. ▪

In both parts of Fig. 4.4 we observe that the MSE curve for the model-specific Kalman filter is tangent to the IBR MSE at a certain value of $u'$. These values are maximally robust states, and the model-constrained Bayesian robust (MCBR) Kalman filters for parts (a) and (b) are designed at these values. Hence, we see in this example that the IBR and MCBR Kalman filters are identical. What has occurred? Examining the definitions of the IBR and MCBR Kalman filters, we see that two conditions are sufficient for the IBR Kalman filter to reduce to the MCBR Kalman filter: there exists a point $(\bar{u}_1, \bar{u}_2) \in U$ such that $E_{U_1}[Q_k^{u_1}] = Q_k^{\bar{u}_1}$ and $E_{U_2}[R_k^{u_2}] = R_k^{\bar{u}_2}$ for all $k$. This is precisely what occurs in Example 4.6 if we allow for the fact that $u_k$ is replaced by $G_k u_k$ and the consequent change in Eq. 4.138.
4.7 IBR Kalman–Bucy Filtering

Having focused on discrete-time IBR Kalman filtering in the previous section, we continue with the development in (Dehghannasiri et al., 2017a) and consider continuous time, where the filter is called the Kalman–Bucy filter (Kalman and Bucy, 1961) and the classical state and observation equations become

$$\dot{y}_t = T_t y_t + G_t u_t, \tag{4.195}$$

$$x_t = H_t y_t + n_t \tag{4.196}$$

for $t \in [a, b)$, where, for $t, s \in [a, b)$,

$$E[u_t u_s'] = Q_t\delta(t - s), \tag{4.197}$$

$$E[n_t n_s'] = R_t\delta(t - s), \tag{4.198}$$

and we now include $G_t$ in the model. Analogous to the discrete-time equations, the continuous-time recursive equations are

$$\tilde{z}_t = x_t - \hat{z}_t, \tag{4.199a}$$

$$M_t = E_t^y H_t' R_t^{-1}, \tag{4.199b}$$

$$\dot{\hat{y}}_t = T_t\hat{y}_t + M_t\tilde{z}_t, \tag{4.199c}$$

$$\dot{E}_t^y = T_t E_t^y + E_t^y T_t' - M_t R_t M_t' + G_t Q_t G_t' \tag{4.199d}$$

(Kailath, 1968; Kalman and Bucy, 1961).
With uncertainty in the process and noise covariance matrices, the state-space model becomes

$$\dot{y}_t^{u_1} = T_t y_t^{u_1} + G_t u_t^{u_1}, \tag{4.200}$$

$$x_t^u = H_t y_t^{u_1} + n_t^{u_2} \tag{4.201}$$

for $t \in [a, b)$, where, for $t, s \in [a, b)$,

$$E[u_t^{u_1}(u_s^{u_1})'] = Q_t^{u_1}\delta(t - s), \tag{4.202}$$

$$E[n_t^{u_2}(n_s^{u_2})'] = R_t^{u_2}\delta(t - s). \tag{4.203}$$

Let $\mathcal{G}$ be the vector space of all $n \times m$ matrix-valued functions $G_{t,s}: [a, b) \times [a, b) \to \mathbb{R}^{n \times m}$ such that

$$\int_a^b\int_a^b \|G_{t,s}\|^2\,dt\,ds < \infty. \tag{4.204}$$

Let

$$z_t^{u_1} = H_t y_t^{u_1}, \tag{4.205}$$

and let

$$\hat{y}_t^u = \int_a^t G_{t,s}^U x_s^u\,ds \tag{4.206}$$

be the Bayesian least-squares estimate for $y_t^{u_1}$ based on the observations $x_s^u$, $s < t$, where

$$G_{t,s}^U = \arg\min_{G_{t,s} \in \mathcal{G}} E_U\bigg[E\bigg[\bigg(y_t^{u_1} - \int_a^t G_{t,s}x_s^u\,ds\bigg)'\bigg(y_t^{u_1} - \int_a^t G_{t,s}x_s^u\,ds\bigg)\bigg]\bigg]. \tag{4.207}$$

The continuous-time Bayesian orthogonality principle is similar to the discrete-time principle. Let $\hat{y}_t^u$ be the estimate obtained using Eq. 4.206. Equation 4.207 holds if and only if

$$E_U[E[(y_t^{u_1} - \hat{y}_t^u)(x_s^u)']] = 0 \tag{4.208}$$

for $s \in [a, t)$. The Bayesian innovation process is
$$\tilde{z}_t^u = x_t^u - H_t\hat{y}_t^u. \tag{4.209}$$

Just as the continuous-time Kalman filter can be derived from the discrete-time filter (Gelb, 1974; Smith and Roberts, 1978; Shald, 1999), we derive the continuous-time IBR Kalman–Bucy filter from the discrete-time IBR filter. If time $t_k$ corresponds to discrete time, then the equations for the IBR Kalman–Bucy filter can be derived as an extension of those for the IBR Kalman filter by letting $\Delta t = t_{k+1} - t_k \to 0$. We require a correspondence between the state equations for the continuous and discrete time domains in Eqs. 4.200 and 4.138. The general solution for the dynamical model in Eq. 4.200 is

$$y_t^{u_1} = \Phi(t, a)y_a^{u_1} + \int_a^t \Phi(t, \tau)G_\tau u_\tau^{u_1}\,d\tau, \tag{4.210}$$

where $\Phi(t, a)$, called the state-transition matrix, satisfies $\Phi(t, t) = I$, the identity matrix, and

$$\frac{\partial}{\partial t}\Phi(t, a) = T_t\Phi(t, a) \tag{4.211}$$

(Rugh, 1996). For discrete times $t_k$, where $t_{k+1} - t_k = \Delta t$, $k = 0, 1, 2, \ldots$, Eq. 4.210 can be discretized as

$$y_{t_{k+1}}^{u_1} = \Phi(t_{k+1}, t_k)y_{t_k}^{u_1} + \int_{t_k}^{t_{k+1}} \Phi(t_{k+1}, \tau)G_\tau u_\tau^{u_1}\,d\tau. \tag{4.212}$$

The states $y_{t_k}^{u_1}$, $k = 0, 1, 2, \ldots$, can be viewed as outputs of a state-space model with discrete-time domain, and we treat the continuous-time model as the limiting form of a discrete-time model as $\Delta t \to 0$. An overline for variables in the discrete-time domain distinguishes them from those for continuous time. Associating Eq. 4.212 with Eq. 4.138, the matrices for the equivalent discrete-time state-space model are $\bar{T}_k$, $\bar{u}_k$, $\bar{G}_k$, $\bar{H}_k$, and $\bar{n}_k$, with $\bar{T}_k = \Phi(t_{k+1}, t_k)$ and $\bar{H}_k = H_{t_k}$. Moreover,

$$\bar{G}_k\bar{u}_k^{u_1} = \int_{t_k}^{t_{k+1}} \Phi(t_{k+1}, \tau)G_\tau u_\tau^{u_1}\,d\tau \tag{4.213}$$

and

$$\begin{aligned} \bar{G}_k\bar{Q}_k^{u_1}\bar{G}_k' &= \int_{t_k}^{t_{k+1}}\int_{t_k}^{t_{k+1}} \Phi(t_{k+1}, \tau')G_{\tau'}Q_\tau^{u_1}\delta(\tau - \tau')G_\tau'\Phi'(t_{k+1}, \tau)\,d\tau\,d\tau' \\ &= \int_{t_k}^{t_{k+1}} \Phi(t_{k+1}, \tau)G_\tau Q_\tau^{u_1}G_\tau'\Phi'(t_{k+1}, \tau)\,d\tau. \end{aligned} \tag{4.214}$$
We desire a differential equation for $E_U[E_t^{y,u}]$. Based on Eq. 4.172,

$$\lim_{\Delta t \to 0}\frac{E_U[\bar{E}_{k+1}^{y,u}] - E_U[\bar{E}_k^{y,u}]}{\Delta t} = \lim_{\Delta t \to 0}\frac{1}{\Delta t}\Big\{(\bar{T}_k - \bar{T}_k\bar{M}_k^U\bar{H}_k)E_U[\bar{E}_k^{y,u}]\bar{T}_k' + \bar{G}_k E_{U_1}[\bar{Q}_k^{u_1}]\bar{G}_k' - E_U[\bar{E}_k^{y,u}]\Big\}. \tag{4.215}$$

Using Eq. 4.214 and assuming that expectation and integration can be interchanged,

$$\lim_{\Delta t \to 0}\frac{\bar{G}_k E_{U_1}[\bar{Q}_k^{u_1}]\bar{G}_k'}{\Delta t} = G_{t_k}E_{U_1}[Q_{t_k}^{u_1}]G_{t_k}'. \tag{4.216}$$

Next,

$$\lim_{\Delta t \to 0}\frac{\Phi(t_{k+1}, t_k) - \Phi(t_k, t_k)}{\Delta t} = \frac{\partial}{\partial t}\Phi(t, t_k)\Big|_{t = t_k} = T_{t_k}\Phi(t_k, t_k) = T_{t_k}, \tag{4.217}$$

where Eq. 4.211 is used to obtain the second equality. Hence,

$$\lim_{\Delta t \to 0}\frac{1}{\Delta t}\Phi(t_{k+1}, t_k) = T_{t_k} + \lim_{\Delta t \to 0}\frac{1}{\Delta t}I. \tag{4.218}$$

Substituting Eqs. 4.216 and 4.218 in Eq. 4.215 yields

$$\begin{aligned} \lim_{\Delta t \to 0} & \frac{E_U[\bar{E}_{k+1}^{y,u}] - E_U[\bar{E}_k^{y,u}]}{\Delta t} \\ &= \lim_{\Delta t \to 0}\bigg\{\Big(T_{t_k} + \frac{1}{\Delta t}I - \frac{1}{\Delta t}\bar{T}_k\bar{M}_k^U\bar{H}_k\Big)E_U[\bar{E}_k^{y,u}](T_{t_k}\Delta t + I)' + G_{t_k}E_{U_1}[Q_{t_k}^{u_1}]G_{t_k}' - \frac{1}{\Delta t}E_U[\bar{E}_k^{y,u}]\bigg\} \\ &= \lim_{\Delta t \to 0}\bigg\{T_{t_k}E_U[\bar{E}_k^{y,u}]T_{t_k}'\Delta t + T_{t_k}E_U[\bar{E}_k^{y,u}] + E_U[\bar{E}_k^{y,u}]T_{t_k}' - \bar{T}_k\bar{M}_k^U\bar{H}_k E_U[\bar{E}_k^{y,u}]T_{t_k}' \\ &\qquad\quad - \frac{1}{\Delta t}\bar{T}_k\bar{M}_k^U\bar{H}_k E_U[\bar{E}_k^{y,u}] + G_{t_k}E_{U_1}[Q_{t_k}^{u_1}]G_{t_k}'\bigg\}. \end{aligned} \tag{4.219}$$
Since a rigorous treatment of continuous white noise requires generalized functions, as has often been done with Kalman–Bucy filtering (Gelb, 1974; Smith and Roberts, 1978; Shald, 1999), we approximate continuous white
noise by discrete white noise and let $\Delta t \to 0$, with the understanding that the limits do not exist in the ordinary sense. This means that the discrete white-noise sequence approximates the continuous white-noise process by shrinking the pulse length $\Delta t$ and increasing the amplitude, so that the discrete white-noise sequence tends to infinite-valued pulses of zero duration with the area under the impulse-autocorrelation function equaling the area under the continuous-white-noise impulse-autocorrelation function (Gelb, 1974). Let us evaluate the limit in Eq. 4.219. Since

$$E[n_t^{u_2}(n_s^{u_2})'] = R_t^{u_2}\delta(t - s) = \infty \tag{4.220}$$

for $t = s$, when approximating continuous white noise by sampling it at fixed intervals $\Delta t$, as $\Delta t \to 0$ the covariance $\bar{R}_k^{u_2}$ of the resulting discrete noise process tends to $\infty$, and we can approximate $\bar{R}_k^{u_2}$ by $R_{t_k}^{u_2}/\Delta t$ as $\Delta t \to 0$. Hence, according to Eq. 4.164,

$$\lim_{\Delta t \to 0}\bar{M}_k^U = \lim_{\Delta t \to 0} E_U[\bar{E}_k^{y,u}]\bar{H}_k'\,E_U\Big[\bar{H}_k\bar{E}_k^{y,u}\bar{H}_k' + \frac{R_{t_k}^{u_2}}{\Delta t}\Big]^{-1} = 0 \tag{4.221}$$

because $1/\Delta t$ is in the denominator. Moreover,

$$\begin{aligned} \lim_{\Delta t \to 0}\frac{1}{\Delta t}\bar{M}_k^U &= \lim_{\Delta t \to 0}\frac{1}{\Delta t}E_U[\bar{E}_k^{y,u}]\bar{H}_k'\,E_U\Big[\bar{E}_k^{z,u} + \frac{R_{t_k}^{u_2}}{\Delta t}\Big]^{-1} \\ &= \lim_{\Delta t \to 0}E_U[\bar{E}_k^{y,u}]\bar{H}_k'\,E_U[\bar{E}_k^{z,u}\Delta t + R_{t_k}^{u_2}]^{-1} \\ &= E_U[\bar{E}_k^{y,u}]\bar{H}_k'\,E_{U_2}[R_{t_k}^{u_2}]^{-1}. \end{aligned} \tag{4.222}$$

Note that $\lim_{\Delta t \to 0}\Phi_k = I$. Plugging Eqs. 4.221 and 4.222 into Eq. 4.219 gives

$$\lim_{\Delta t \to 0}\frac{E_U[\bar{E}_{k+1}^{y,u}] - E_U[\bar{E}_k^{y,u}]}{\Delta t} = T_{t_k}E_U[\bar{E}_k^{y,u}] + E_U[\bar{E}_k^{y,u}]T_{t_k}' - E_U[\bar{E}_k^{y,u}]\bar{H}_k'E_{U_2}[R_{t_k}^{u_2}]^{-1}\bar{H}_k E_U[\bar{E}_k^{y,u}] + G_{t_k}E_{U_1}[Q_{t_k}^{u_1}]G_{t_k}'. \tag{4.223}$$

Equivalently, under our discrete-to-continuous assumption,

$$E_U[\dot{E}_t^{y,u}] = T_t E_U[E_t^{y,u}] + E_U[E_t^{y,u}]T_t' - E_U[E_t^{y,u}]H_t'E_{U_2}[R_t^{u_2}]^{-1}H_t E_U[E_t^{y,u}] + G_t E_{U_1}[Q_t^{u_1}]G_t'. \tag{4.224}$$
To find the differential equation for the Bayesian least-squares estimate, first, using Eq. 4.171,

$$\hat{y}_{t_{k+1}}^u = \bar{T}_k\hat{y}_{t_k}^u + \bar{T}_k\bar{M}_k^U\tilde{z}_{t_k}^u. \tag{4.225}$$

Incorporating Eqs. 4.218 and 4.222 yields

$$\begin{aligned} \lim_{\Delta t \to 0}\frac{\hat{y}_{t_{k+1}}^u - \hat{y}_{t_k}^u}{\Delta t} &= \lim_{\Delta t \to 0}\frac{\bar{T}_k\hat{y}_{t_k}^u + \bar{T}_k\bar{M}_k^U\tilde{z}_{t_k}^u - \hat{y}_{t_k}^u}{\Delta t} \\ &= \lim_{\Delta t \to 0}\frac{(T_{t_k}\Delta t + I)\hat{y}_{t_k}^u + \bar{T}_k\bar{M}_k^U\tilde{z}_{t_k}^u - \hat{y}_{t_k}^u}{\Delta t} \\ &= T_{t_k}\hat{y}_{t_k}^u + E_U[\bar{E}_k^{y,u}]\bar{H}_k'E_{U_2}[R_{t_k}^{u_2}]^{-1}\tilde{z}_{t_k}^u. \end{aligned} \tag{4.226}$$

Equivalently,

$$\dot{\hat{y}}_t^u = T_t\hat{y}_t^u + E_U[E_t^{y,u}]H_t'E_{U_2}[R_t^{u_2}]^{-1}\tilde{z}_t^u = T_t\hat{y}_t^u + M_t^U\tilde{z}_t^u, \tag{4.227}$$

where

$$M_t^U = E_U[E_t^{y,u}]H_t'E_{U_2}[R_t^{u_2}]^{-1} \tag{4.228}$$

is the effective Kalman gain matrix. The equations for IBR Kalman–Bucy filtering are Eqs. 4.209, 4.228, 4.227, and 4.224. If $y_a$ denotes the random process at time $a$, then the initial conditions are set to $\hat{y}_a^u = E[y_a]$ and $E_U[E_a^{y,u}] = \mathrm{cov}[y_a]$. Finally, for the Bayesian innovation process $\tilde{z}_t^u$,

$$E_U[E[\tilde{z}_t^u(\tilde{z}_s^u)']] = E_{U_2}[R_t^{u_2}]\delta(t - s). \tag{4.229}$$

First, similar to the discrete case, it can be shown that

$$E_U[E[\tilde{z}_t^u(\tilde{z}_s^u)']] = 0 \tag{4.230}$$

for $t \ne s$. For $t = s$, we treat it as a limit of the discrete-time domain as $\Delta t \to 0$:

$$\begin{aligned} \lim_{\Delta t \to 0}E_U[E[\tilde{z}_{t_k}^u(\tilde{z}_{t_k}^u)']] &= \lim_{\Delta t \to 0}E_U[\bar{H}_k\bar{E}_k^{y,u}\bar{H}_k' + \bar{R}_k^{u_2}] \\ &= \lim_{\Delta t \to 0}E_U\Big[\bar{H}_k\bar{E}_k^{y,u}\bar{H}_k' + \frac{1}{\Delta t}R_{t_k}^{u_2}\Big] \\ &= \lim_{\Delta t \to 0}\frac{1}{\Delta t}E_U[R_{t_k}^{u_2}] \\ &= \lim_{\Delta t \to 0}\frac{1}{\Delta t}E_{U_2}[R_{t_k}^{u_2}], \end{aligned} \tag{4.231}$$

where $\bar{H}_k\bar{E}_k^{y,u}\bar{H}_k'$, which is finite, plays no role as $\Delta t \to 0$ because $\frac{1}{\Delta t}R_{t_k}^{u_2} \to \infty$. Equation 4.229 is equivalent to Eqs. 4.230 and 4.231 (keeping in mind that all delta-function manipulations are formal).
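Numerically, the IBR Kalman–Bucy equations (4.224 and 4.227–4.228) can be integrated with a simple Euler scheme; the sketch below does this for time-invariant matrices, with the effective noise statistics again supplied as prior-weighted averages. The step size, system, and function name are illustrative assumptions.

```python
import numpy as np

def ibr_kalman_bucy(Tm, G, H, EQ, ER, x_path, y0_mean, y0_cov, dt):
    """Euler integration of the IBR Kalman-Bucy filter (Eqs. 4.224, 4.227-4.228).

    EQ, ER: effective noise statistics E_{U1}[Q^{u1}] and E_{U2}[R^{u2}];
    x_path: observations sampled every dt seconds.
    """
    y_hat = y0_mean.copy()
    P = y0_cov.copy()          # E_U[E_t^{y,u}]
    ER_inv = np.linalg.inv(ER)
    for x in x_path:
        M = P @ H.T @ ER_inv                      # effective gain (4.228)
        z_tilde = x - H @ y_hat                   # innovation (4.209)
        y_hat = y_hat + dt * (Tm @ y_hat + M @ z_tilde)   # Eq. 4.227
        P = P + dt * (Tm @ P + P @ Tm.T
                      - P @ H.T @ ER_inv @ H @ P
                      + G @ EQ @ G.T)             # Riccati flow, Eq. 4.224
    return y_hat, P
```

For a stable system and small `dt`, the covariance flow converges toward the stationary solution of the effective Riccati equation.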
Chapter 5
Optimal Experimental Design Model uncertainty carries with it a loss of performance when finding an optimal operator. The IBR principle is to find an operator that, based on a cost function, is optimal over an uncertainty class relative to a prior (or posterior) distribution reflecting the state of knowledge regarding the underlying physical processes. While an IBR operator is optimal over the uncertainty class, it is likely to be suboptimal relative to the full (true) model. This loss of performance is the cost of uncertainty. Experiments can reduce model uncertainty. Since experiments can be costly and time consuming, the question arises as to which experiment can best reduce the uncertainty as it pertains to the operational objective, not necessarily as it pertains to the model uncertainty in general, for instance, the entropy.
5.1 Mean Objective Cost of Uncertainty

For any model $u$ in the uncertainty class $U$ and operator family $\mathcal{F}$, the objective cost of uncertainty (OCU) relative to $u$ is the difference between the performance of an optimal model-specific operator $c_u$ for $u$ and an IBR operator:

$$U_{\mathcal{F}}(u; U) = C_u(c_{\mathrm{IBR}}^U) - C_u(c_u). \tag{5.1}$$

The expectation of the OCU over the prior distribution $p(u)$ is called the mean objective cost of uncertainty (MOCU) (Yoon et al., 2013):

$$M_{\mathcal{F}}(U) = E_U[U_{\mathcal{F}}(u; U)]. \tag{5.2}$$

MOCU provides uncertainty quantification based on the objective for which the model is to be employed. MOCU can be viewed as the minimum expected value of a Bayesian loss function that maps an operator to its differential cost (for using the given operator instead of an optimal operator), and the minimum expectation is attained by an optimal robust operator that minimizes the average differential cost. In decision theory (Berry and Fristedt, 1985), this differential cost is
referred to as the regret, which is defined as the difference between the maximum payoff (for making an optimal decision) and the actual payoff (for the decision that has been made). From this perspective, MOCU can be viewed as the minimum expected regret for using a robust operator.

Suppose that the IBR operator $c_{\mathrm{IBR}}^U$ equals an MCBR operator $c_{\mathrm{MCBR}}^U$, where $c_{\mathrm{MCBR}}^U$ is an optimal operator for the maximally robust state $u_{\max}$. Then

$$U_{\mathcal{F}}(u; U) = C_u(c_{u_{\max}}) - C_u(c_u) = \kappa(u, u_{\max}). \tag{5.3}$$

Taking the expectation over $U$ yields

$$M_{\mathcal{F}}(U) = \kappa(u_{\max}). \tag{5.4}$$

MOCU equals the mean robustness of a maximally robust state when the IBR operator is an MCBR operator. In general, $c_{\mathrm{MCBR}}^U$ is suboptimal relative to $c_{\mathrm{IBR}}^U$, and $M_{\mathcal{F}}(U) \le \kappa(u_{\max})$. When we cannot find an IBR operator, we fall back on an MCBR operator and approximate $U_{\mathcal{F}}(u; U)$ by $\kappa(u, u_{\max})$. In this context, we refer to $\kappa(u, u_{\max})$ as the MCBR objective cost of uncertainty and denote it by $U(u; \mathcal{F}_U)$. The MCBR mean objective cost of uncertainty is then defined by

$$M(\mathcal{F}_U) = E_U[U(u; \mathcal{F}_U)] = \kappa(u_{\max}). \tag{5.5}$$
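For a finite uncertainty class and a finite operator family, Eqs. 5.1 and 5.2 translate directly into code; the cost table and prior below are hypothetical stand-ins for whatever operator costs $C_u(\psi)$ arise in a given application.

```python
import numpy as np

def mocu(cost, p):
    """MOCU for a finite uncertainty class (Eqs. 5.1-5.2).

    cost[i, j]: cost C_{u_i}(psi_j) of operator j on model u_i;
    p: prior probabilities p(u_i). The IBR operator minimizes the
    prior-averaged cost; the OCU compares it to each model's optimum.
    """
    avg_cost = p @ cost                       # expected cost of each operator
    ibr = np.argmin(avg_cost)                 # IBR operator over the class
    ocu = cost[:, ibr] - cost.min(axis=1)     # Eq. 5.1, per model
    return p @ ocu                            # Eq. 5.2

cost = np.array([[1.0, 2.0, 4.0],
                 [3.0, 1.5, 4.0],
                 [5.0, 2.5, 1.0]])
p = np.array([0.5, 0.3, 0.2])
```

With a point-mass prior the class collapses to one model, the IBR operator is that model's optimal operator, and the MOCU is zero, matching the intuition that a fully known model carries no cost of uncertainty.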
While we have defined MOCU relative to the prior distribution, it could just as well be defined relative to the posterior distribution; indeed, a posterior distribution can be viewed as a new prior awaiting new data, which is precisely the view we take in the present chapter, meaning that we will simply refer to prior distributions.

MOCU can be used to design experiments to optimally reduce the pertinent uncertainty in the model, namely, the uncertainty relevant to the operational objective. Suppose that there are $k$ experiments $T_1, \ldots, T_k$, where experiment $T_i$ exactly determines the uncertain parameter $u_i$ in $u = (u_1, u_2, \ldots, u_k)$. The issue for experimental design is which experiment should be conducted first. Let $u|\bar{u}_i = u|(u_i = \bar{u}_i)$ be the conditional uncertainty vector composed of all uncertain parameters other than $u_i$, with $u_i = \bar{u}_i$. Let $U|\bar{u}_i = \{u|u_i : u \in U\}$ be the reduced uncertainty class, given $u_i = \bar{u}_i$. Elements of $U|\bar{u}_i$ are of the form

$$u|\bar{u}_i = (u_1, \ldots, u_{i-1}, \bar{u}_i, u_{i+1}, \ldots, u_k), \tag{5.6}$$

where $u_j$ is random for all $j \ne i$. The IBR operator for $U|\bar{u}_i$ is denoted by $c_{\mathrm{IBR}}^{U|\bar{u}_i}$ and is called the reduced IBR operator relative to $\bar{u}_i$. If the outcome of experiment $T_i$ is $\bar{u}_i$, then the remaining MOCU, given $u_i = \bar{u}_i$, is defined to be
$$M_{\mathcal{F}}(U|\bar{u}_i) = E_{U|\bar{u}_i}\Big[C_{u|\bar{u}_i}\big(c_{\mathrm{IBR}}^{U|\bar{u}_i}\big) - C_{u|\bar{u}_i}(c_{u|\bar{u}_i})\Big], \tag{5.7}$$

where the expectation is relative to the conditional distribution $p(u|\bar{u}_i)$. The remaining MOCU is the MOCU for the reduced IBR operator relative to the reduced uncertainty class. Equation 5.7 gives the remaining MOCU for a specific value of $u_i$. Treating the remaining MOCU as a function of $u_i$ and taking the expectation with respect to the marginal distribution $p(u_i)$ yields the expected remaining MOCU, given parameter $u_i$:

$$\begin{aligned} E_{u_i}[M_{\mathcal{F}}(U|u_i)] &= E_{u_i}\Big[E_{U|u_i}\Big[C_{u|u_i}\big(c_{\mathrm{IBR}}^{U|u_i}\big) - C_{u|u_i}(c_{u|u_i})\Big]\Big] \\ &= E_{u_i}[E_{U|u_i}[U_{\mathcal{F}}(u|u_i; U)]], \end{aligned} \tag{5.8}$$

where

$$U_{\mathcal{F}}(u|u_i; U) = C_{u|u_i}\big(c_{\mathrm{IBR}}^{U|u_i}\big) - C_{u|u_i}(c_{u|u_i}) \tag{5.9}$$

is the conditional objective cost of uncertainty. Owing to its application in experimental design, the expected remaining MOCU is called the experimental design value and is denoted by

$$D(u_i) = E_{u_i}[M_{\mathcal{F}}(U|u_i)], \tag{5.10}$$

where $U$ and $\mathcal{F}$ are implicit in the notation. The experiment $T_{i^*}$ resulting in the minimum expected remaining MOCU is called the optimal experiment, where

$$i^* = \arg\min_{i \in \{1, \ldots, k\}} D(u_i). \tag{5.11}$$

$u_{i^*}$ is called the primary parameter. Combining the definitions yields

$$\begin{aligned} i^* &= \arg\min_{i \in \{1, \ldots, k\}}\Big\{E_{u_i}\Big[E_{U|u_i}\Big[C_{u|u_i}\big(c_{\mathrm{IBR}}^{U|u_i}\big)\Big]\Big] - E_{u_i}[E_{U|u_i}[C_{u|u_i}(c_{u|u_i})]]\Big\} \\ &= \arg\min_{i \in \{1, \ldots, k\}}\Big\{E_{u_i}\Big[E_{U|u_i}\Big[C_{u|u_i}\big(c_{\mathrm{IBR}}^{U|u_i}\big)\Big]\Big] - E_U[C_u(c_u)]\Big\} \\ &= \arg\min_{i \in \{1, \ldots, k\}}E_{u_i}\Big[E_{U|u_i}\Big[C_{u|u_i}\big(c_{\mathrm{IBR}}^{U|u_i}\big)\Big]\Big], \end{aligned} \tag{5.12}$$

where the second equality follows from Eq. 3.5, and the third equality is due to the independence of the second term inside the optimization expression from the variable of optimization. The last expression in Eq. 5.12 is the expectation, over the possible outcomes of experiment $T_i$, of the expected cost of the reduced IBR operator
over the reduced uncertainty class. It makes sense to minimize this expectation over all possible experiments. We call this expectation the residual IBR cost for experiment $T_i$ and denote it by $R(u_i)$. Equation 5.12 states that

$$i^* = \arg\min_{i \in \{1, \ldots, k\}} D(u_i) = \arg\min_{i \in \{1, \ldots, k\}} R(u_i). \tag{5.13}$$
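With a finite setup, the experimental-design loop of Eqs. 5.7–5.11 can be sketched: for each candidate parameter, average the remaining MOCU of the reduced classes over that parameter's marginal distribution. The two-parameter class, independence assumption, and cost table here are hypothetical.

```python
import numpy as np

def remaining_mocu(cost, p_cond):
    """Remaining MOCU of a reduced uncertainty class (Eq. 5.7): cost[m, j]
    over the reduced class's models m, weighted by the conditional prior."""
    ibr = np.argmin(p_cond @ cost)                  # reduced IBR operator
    return p_cond @ (cost[:, ibr] - cost.min(axis=1))

def optimal_experiment(cost, p1, p2):
    """Experimental design values D(u_i) (Eq. 5.10) and the optimal
    experiment (Eq. 5.11) for two independent discrete parameters.

    cost[i, j, m]: cost of operator m on the model (u1 = i, u2 = j)."""
    # Experiment T1 fixes u1 = i; the reduced class ranges over u2.
    D1 = sum(p1[i] * remaining_mocu(cost[i, :, :], p2) for i in range(len(p1)))
    # Experiment T2 fixes u2 = j; the reduced class ranges over u1.
    D2 = sum(p2[j] * remaining_mocu(cost[:, j, :], p1) for j in range(len(p2)))
    return (1, D1) if D1 <= D2 else (2, D2)
```

If only $u_1$ affects the cost, determining $u_1$ removes all objective-relevant uncertainty, so $D(u_1) = 0$ and $T_1$ is the optimal experiment, illustrating that MOCU-based design targets the parameter that matters for the operational objective rather than uncertainty per se.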
5.1.1 Utility

Experimental design within a Bayesian framework has been studied for more than half a century. Considerable work has utilized the expected gain in Shannon information (Stone, 1959; DeGroot, 1962, 1986; Bernardo, 1979), which is equivalent to maximizing the expected Kullback–Leibler distance between the posterior and prior distributions. Specifically, suppose that $\lambda$ is a design selected from a family $\Lambda$ and $X$ is a data vector. The optimal experiment is then given by

$$\lambda^* = \arg\max_{\lambda\in\Lambda} E_{X,\Theta}\left[\log\frac{p(\theta|X,\lambda)}{p(\theta)}\,\Big|\,\lambda\right]. \tag{5.14}$$
Since the prior distribution does not depend on $\lambda$,

$$\lambda^* = \arg\max_{\lambda\in\Lambda} E_{X,\Theta}[U_{\mathrm{inf}}(\theta, X, \lambda)\,|\,\lambda], \tag{5.15}$$

where the utility function $U_{\mathrm{inf}}$ is defined by

$$U_{\mathrm{inf}}(\theta, X, \lambda) = \log p(\theta|X,\lambda). \tag{5.16}$$
$\lambda^*$ maximizes the expected Shannon information of the posterior distribution. Lindley has proposed a general decision-theoretic approach incorporating a two-part decision involving the selection of an experiment followed by a terminal decision (Lindley, 1972). Leaving out the terminal decision, in this framework the optimal experiment is given by

$$\lambda^* = \arg\max_{\lambda\in\Lambda} E_X\big[E_\Theta[U(\theta, X, \lambda)\,|\,X, \lambda]\,\big|\,\lambda\big], \tag{5.17}$$
where $U$ is a utility function [see (Chaloner and Verdinelli, 1995) for the full decision-theoretic optimization and a review of Bayesian experimental design]. In the context of the objective cost of uncertainty, an optimal experiment minimizes the expected remaining MOCU. If we view the remaining MOCU from the perspective of data, then each experiment $T_i$ corresponds to a data vector $X|T_i$ determining the value of $\theta_i$, where the notation $X|T_i$ conveys the dependency of the data on the experiment. Prior to the data being observed, the distribution of $X|T_i$ is identical to the distribution of $\theta_i$, which is $p(\theta_i)$. In this context, Eq. 5.8 becomes
$$E_{\theta_i}[M_\Psi(\Theta|X,T_i)] = E_{X|T_i}\Big[E_{\Theta|(X|T_i)}\Big[C_{\theta|(X|T_i)}\big(\psi_{\mathrm{IBR}}^{\Theta|(X|T_i)}\big) - C_{\theta|(X|T_i)}\big(\psi_{\theta|(X|T_i)}\big)\Big]\Big] = E_{X|T_i}\big[E_{\Theta|(X|T_i)}[U_\Psi(\theta,X,T_i;\Theta)]\big], \tag{5.18}$$

where

$$U_\Psi(\theta,X,T_i;\Theta) = C_{\theta|(X|T_i)}\big(\psi_{\mathrm{IBR}}^{\Theta|(X|T_i)}\big) - C_{\theta|(X|T_i)}\big(\psi_{\theta|(X|T_i)}\big) \tag{5.19}$$

is the conditional objective cost of uncertainty given the data from experiment $T_i$. From Eq. 5.18, the optimization of Eq. 5.11 can be expressed as

$$i^* = \arg\min_{i\in\{1,\dots,k\}} E_X\big[E_\Theta[U_\Psi(\theta,X,T_i;\Theta)\,|\,X,T_i]\,\big|\,T_i\big], \tag{5.20}$$
which is the form of the optimization in Eq. 5.17, with utility function $U_\Psi(\theta,X,T_i;\Theta)$. The salient issue for MOCU-based experimental design is not the form of the optimization; rather, it is two-fold: the uncertainty is on the underlying random process, meaning the science, and the aim is to design a better operator on the underlying process, be it for filtering, control, classification, or any other engineering objective.
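For comparison with the information-based criterion, the sketch below estimates the design value of Eq. 5.14 for a hypothetical Beta–Bernoulli experiment (the distributions and sample sizes are assumptions for illustration). The expected KL divergence between posterior and prior equals the mutual information between $\theta$ and the data, computed here via Bayes' rule as $E[\log p(X|\theta) - \log p(X)]$; the design variable is simply the number of Bernoulli samples.

```python
import math
import random

random.seed(0)

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_binom_pmf(x, n, t):
    if t in (0.0, 1.0):                   # degenerate corner cases
        return 0.0 if x == n * t else float("-inf")
    return (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
            + x * math.log(t) + (n - x) * math.log(1.0 - t))

def log_betabinom_pmf(x, n, a, b):
    # Marginal p(X) under a Beta(a, b) prior on the success probability.
    return (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
            + log_beta(a + x, b + n - x) - log_beta(a, b))

def design_value(n, a=2.0, b=2.0, trials=20000):
    """Monte Carlo estimate of E_{X,theta}[log p(theta|X)/p(theta)] (Eq. 5.14)."""
    total = 0.0
    for _ in range(trials):
        t = random.betavariate(a, b)
        x = sum(random.random() < t for _ in range(n))
        # Bayes: log p(theta|X) - log p(theta) = log p(X|theta) - log p(X)
        total += log_binom_pmf(x, n, t) - log_betabinom_pmf(x, n, a, b)
    return total / trials

vals = {n: design_value(n) for n in (1, 5, 20)}
```

Larger experiments carry a larger expected information gain, so `vals` increases with the sample size. Note that, unlike MOCU, this criterion makes no reference to an operator or its cost.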
5.2 Experimental Design for IBR Linear Filtering

This section discusses experimental design for IBR linear filtering, which naturally takes place in the framework of canonical expansions (Dehghannasiri et al., 2017c). Based on the cost given in Eq. 4.23, replacing $E_\Theta$ by $E_{\Theta|\bar\theta_i}$, in the case of linear filtering Eq. 5.12 gives the primary parameter

$$\begin{aligned}
i^* &= \arg\min_{i\in\{1,\dots,k\}} E_{\bar\theta_i}\left[ R_{\Theta|\bar\theta_i,Y}(s,s) - \int_{\Xi}\frac{|R_{\Theta|\bar\theta_i,YZ}(s,\xi)|^2}{I_{\Theta|\bar\theta_i}(\xi)}\,d\xi\right] \\
&= \arg\max_{i\in\{1,\dots,k\}} E_{\bar\theta_i}\left[\int_{\Xi}\frac{|R_{\Theta|\bar\theta_i,YZ}(s,\xi)|^2}{I_{\Theta|\bar\theta_i}(\xi)}\,d\xi\right],
\end{aligned} \tag{5.21}$$

where

$$R_{\Theta|\bar\theta_i,Y} = E_{\Theta|\bar\theta_i}[R_{Y_\theta}] = E_\Theta[R_{Y_\theta}\,|\,\theta_i = \bar\theta_i], \tag{5.22}$$

$R_{\Theta|\bar\theta_i,X} = E_{\Theta|\bar\theta_i}[R_{X_\theta}]$, $R_{\Theta|\bar\theta_i,YX} = E_{\Theta|\bar\theta_i}[R_{Y_\theta X_\theta}]$, $I_{\Theta|\bar\theta_i}(\xi)$ is obtained using Eq. 4.18 with $R_{\Theta,X}$ replaced by $R_{\Theta|\bar\theta_i,X}$, and the second equality follows from the relation

$$E_{\bar\theta_i}\big[R_{\Theta|\bar\theta_i,Y}(s,s)\big] = E_{\bar\theta_i}\big[E_{\Theta|\bar\theta_i}[R_{Y_\theta}(s,s)]\big] = E_\Theta[R_{Y_\theta}(s,s)] = R_{\Theta,Y}(s,s). \tag{5.23}$$

Since the expectation in Eq. 5.21 is with respect to $\bar\theta_i$, $\bar\theta_i$ can be replaced by $\theta_i$.
Example 5.1. We reconsider Example 3.2, except that now both the signal and noise are uncertain, with the signal-plus-noise model parameterized by $\theta = (\theta_1, \theta_2)$: $X_\theta(t) = Y_{\theta_1}(t) + N_{\theta_2}(t)$, where $N_{\theta_2}(t)$ is uncorrelated white noise with intensity $\sigma^2_{\theta_2}$, and $\theta_1$ and $\theta_2$ are independent. The effective autocorrelation is

$$R_{\Theta,X}(u,t) = R_{\Theta,Y}(u,t) + E_{\theta_2}[\sigma^2_{\theta_2}]\,\delta(u - t),$$

where $R_{\Theta,X}(u,t) = E_\Theta[R_{X_\theta}(u,t)]$ and $R_{\Theta,Y}(u,t) = E_{\theta_1}[R_{Y_{\theta_1}}(u,t)]$. Since the noise is uncorrelated, $R_{Y_\theta X_\theta}(s,u) = R_{Y_\theta}(s,u)$. Let $\lambda_l^\Theta$ and $x_l^\Theta(t)$ be the eigenvalues and the eigenfunctions of $R_{\Theta,Y}$, respectively. Substituting $R_{\Theta,X}(t,t')$ into Eq. 4.18, keeping in mind that the canonical expansion is discrete, we find the effective noise intensity:

$$\begin{aligned}
I_l^\Theta &= \sum_{l'}\int_T\left[\int_T\big(R_{\Theta,Y}(t,t') + E_{\theta_2}[\sigma^2_{\theta_2}]\,\delta(t - t')\big)\,x_{l'}^\Theta(t')\,dt'\right]\overline{x_l^\Theta(t)}\,dt \\
&= \sum_{l'}\int_T\left[\int_T R_{\Theta,Y}(t,t')\,x_{l'}^\Theta(t')\,dt'\right]\overline{x_l^\Theta(t)}\,dt + \sum_{l'} E_{\theta_2}[\sigma^2_{\theta_2}]\int_T\left[\int_T\delta(t - t')\,x_{l'}^\Theta(t')\,dt'\right]\overline{x_l^\Theta(t)}\,dt \\
&= \sum_{l'}\int_T\lambda_{l'}^\Theta\,x_{l'}^\Theta(t)\,\overline{x_l^\Theta(t)}\,dt + \sum_{l'} E_{\theta_2}[\sigma^2_{\theta_2}]\int_T x_{l'}^\Theta(t)\,\overline{x_l^\Theta(t)}\,dt \\
&= \lambda_l^\Theta + E_{\theta_2}[\sigma^2_{\theta_2}],
\end{aligned}$$

where the third equality results because $x_{l'}^\Theta(t')$ is an eigenfunction of $R_{\Theta,Y}(t,t')$, and the fourth equality results because the set of eigenfunctions $\{x_l^\Theta(t)\}$ forms an orthonormal system on $T$. Moreover,
$$\begin{aligned}
R_{\Theta,YZ}(s,l) &= E_\Theta[R_{Y_\theta Z_\theta}(s,l)] \\
&= E_\Theta\left[\int_T R_{Y_\theta X_\theta}(s,u)\,x_l^{\theta}(u)\,du\right] \\
&= E_{\theta_1}\left[\int_T R_{Y_{\theta_1}}(s,u)\,x_l^{\theta_1}(u)\,du\right] \\
&= E_{\theta_1}\big[\lambda_l^{\theta_1}\,x_l^{\theta_1}(s)\big],
\end{aligned}$$

where the second equality follows from the definition of the canonical coefficients for $X_\theta$ as generalized Fourier coefficients in Eq. 2.54, and the fourth equality results because $\lambda_l^{\theta_1}$ and $x_l^{\theta_1}(t)$ are the eigenvalues and the eigenfunctions of $R_{Y_{\theta_1}}$, respectively. Substituting $I_l^\Theta$ and $R_{\Theta,YZ}(s,l)$ into Eq. 4.20 yields

$$\psi_{\mathrm{IBR}}^\Theta(X_\theta)(s) = \sum_{l=1}^{\infty}\frac{E_{\theta_1}\big[\lambda_l^{\theta_1}\,x_l^{\theta_1}(s)\big]}{\lambda_l^\Theta + E_{\theta_2}[\sigma^2_{\theta_2}]}\,Z_\theta(l).$$

According to Eq. 4.23, the expected MSE is

$$E_\Theta\Big[E\big[|Y_{\theta_1}(s) - \psi_{\mathrm{IBR}}^\Theta(X_\theta)(s)|^2\big]\Big] = R_{\Theta,Y}(s,s) - \sum_{l=1}^{\infty}\frac{\big|E_{\theta_1}[\lambda_l^{\theta_1}\,x_l^{\theta_1}(s)]\big|^2}{\lambda_l^\Theta + E_{\theta_2}[\sigma^2_{\theta_2}]}.$$
Considering Eq. 5.21, to determine which parameter, $\theta_1$ or $\theta_2$, is primary, compare

$$D(\theta_1) = E_{\theta_1}\left[\sum_{l=1}^{\infty}\frac{(\lambda_l^{\theta_1})^2\,|x_l^{\theta_1}(s)|^2}{\lambda_l^{\theta_1} + E_{\theta_2}[\sigma^2_{\theta_2}]}\right],$$

the numerator and denominator being $|R_{\Theta|\theta_1,YZ}(s,l)|^2$ and $I_{\Theta|\theta_1}(l)$, respectively, and

$$D(\theta_2) = E_{\theta_2}\left[\sum_{l=1}^{\infty}\frac{\big|E_{\theta_1}[\lambda_l^{\theta_1}\,x_l^{\theta_1}(s)]\big|^2}{\lambda_l^\Theta + \sigma^2_{\theta_2}}\right],$$

noting that $\lambda_l^\Theta$ depends only on $\theta_1$, not on $\theta_2$. If $D(\theta_1) > D(\theta_2)$, then $\theta_1$ is primary; otherwise, $\theta_2$ is primary. ▪

5.2.1 Wiener filtering

With uncertainty, the effective power spectra are

$$S_{\Theta,X} = \mathcal{F}\big[E_\Theta[r_{X_\theta}(\tau)]\big], \tag{5.24}$$

$$S_{\Theta,YX} = \mathcal{F}\big[E_\Theta[r_{Y_\theta X_\theta}(\tau)]\big]. \tag{5.25}$$

The IBR Wiener filter $G_{\mathrm{IBR}}^\Theta(\omega)$ is found by plugging $S_{\Theta,X}(\omega)$ and $S_{\Theta,YX}(\omega)$ into Eq. 4.25. Keeping in mind that our aim is to determine which unknown parameter should be estimated first, we rewrite Eq. 3.130 as

$$r_{YZ}(s - v) = \int_{-\infty}^{\infty} r_{YX}(s - u)\,e^{j\omega u}\,du = e^{j\omega s}\int_{-\infty}^{\infty} r_{YX}(\tau)\,e^{-j\omega\tau}\,d\tau = e^{j\omega s}\,S_{YX}(\omega). \tag{5.26}$$
Therefore,

$$|r_{YZ}(s - v)|^2 = |S_{YX}(\omega)|^2. \tag{5.27}$$
Substitution into Eq. 5.21 with the corresponding notation alterations determines the primary parameter:

$$\begin{aligned}
i^* &= \arg\max_{i\in\{1,\dots,k\}} E_{\bar\theta_i}\left[\int_{-\infty}^{\infty}\frac{|r_{\Theta|\bar\theta_i,YZ}(s - v)|^2}{I_{\Theta|\bar\theta_i}(\xi)}\,d\xi\right] \\
&= \arg\max_{i\in\{1,\dots,k\}} E_{\bar\theta_i}\left[\int_{-\infty}^{\infty}\frac{|S_{\Theta|\bar\theta_i,YX}(\omega)|^2}{S_{\Theta|\bar\theta_i,X}(\omega)}\,d\omega\right], \tag{5.28}
\end{aligned}$$

where

$$S_{\Theta|\bar\theta_i,X} = \mathcal{F}\big[r_{\Theta|\bar\theta_i,X}(\tau)\big], \tag{5.29}$$

$$S_{\Theta|\bar\theta_i,YX} = \mathcal{F}\big[r_{\Theta|\bar\theta_i,YX}(\tau)\big], \tag{5.30}$$
and we have deleted $2\pi$ in the denominator of the second line because it does not affect the arg max.

5.2.2 WS stationary blurring-plus-noise model

We consider experimental design for the signal-plus-noise model of Eq. 4.26, for which the IBR Wiener filter is given in Eq. 4.39. Here, however, we will assume that $h_\theta$ is not random for fixed $\theta$. Under this assumption, Eq. 4.39 continues to hold, except that $E_{\Theta,h}$ reduces to $E_\Theta$. Consider the discrete signal observation model

$$X_\theta(n) = h_{\sigma^2_h}(n) * Y(n) + N_{\sigma^2_n}(n), \tag{5.31}$$

where the blurring function $h(n)$ has parameter $\sigma^2_h$, the power spectral density for the noise process $N(n)$ is $\sigma^2_n$, $\theta = (\sigma^2_h, \sigma^2_n)$ is unknown, and we desire the primary parameter. Let $p(\theta) = p(\sigma^2_h)\,p(\sigma^2_n)$ denote the prior distribution for $\theta$, where $p(\sigma^2_h)$ and $p(\sigma^2_n)$ denote the marginal priors for $\sigma^2_h$ and $\sigma^2_n$, respectively. The experimental design value in Eq. 5.28 for $\sigma^2_h$ is

$$\begin{aligned}
D(\sigma^2_h) &= E_{\sigma^2_h}\left[\int_{-\infty}^{\infty}\frac{|S_{\Theta|\sigma^2_h,YX}(\omega)|^2}{S_{\Theta|\sigma^2_h,X}(\omega)}\,d\omega\right] \\
&= \int\left[\int_{-\infty}^{\infty}\frac{|H_{\sigma^2_h}(\omega)|^2\,|S_Y(\omega)|^2}{|H_{\sigma^2_h}(\omega)|^2\,S_Y(\omega) + E_{\sigma^2_n}[\sigma^2_n]}\,d\omega\right]p(\sigma^2_h)\,d\sigma^2_h. \tag{5.32}
\end{aligned}$$
Similarly,

$$\begin{aligned}
D(\sigma^2_n) &= E_{\sigma^2_n}\left[\int_{-\infty}^{\infty}\frac{|S_{\Theta|\sigma^2_n,YX}(\omega)|^2}{S_{\Theta|\sigma^2_n,X}(\omega)}\,d\omega\right] \\
&= \int\left[\int_{-\infty}^{\infty}\frac{\big|E_{\sigma^2_h}[H_{\sigma^2_h}(\omega)]\big|^2\,|S_Y(\omega)|^2}{E_{\sigma^2_h}\big[|H_{\sigma^2_h}(\omega)|^2\big]\,S_Y(\omega) + \sigma^2_n}\,d\omega\right]p(\sigma^2_n)\,d\sigma^2_n. \tag{5.33}
\end{aligned}$$
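Equations 5.32 and 5.33 can be approximated numerically. The sketch below assumes a Gaussian-shaped blur response $H_{\sigma^2_h}(\omega) = e^{-\sigma^2_h\omega^2/2}$ and a rational signal PSD (both illustrative choices, not from the text), discretizes the frequency integral on a grid, and averages over uniform priors by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.linspace(-np.pi, np.pi, 513)              # frequency grid
dw = w[1] - w[0]

S_Y = 1.0 / (1.0 + w**2)                         # assumed signal PSD

def H(s2h):
    """Assumed Gaussian blur frequency response with parameter sigma_h^2."""
    return np.exp(-0.5 * s2h * w**2)

s2h_s = rng.uniform(0.5, 4.0, 2000)              # uniform priors (nominal intervals)
s2n_s = rng.uniform(0.1, 1.0, 2000)

En = s2n_s.mean()                                # E[sigma_n^2]
EH = np.mean([H(v) for v in s2h_s], axis=0)      # E[H(w)]
EH2 = np.mean([H(v)**2 for v in s2h_s], axis=0)  # E[|H(w)|^2]

# Eq. 5.32: condition on sigma_h^2; the noise parameter is averaged out.
D_h = np.mean([np.sum(H(v)**2 * S_Y**2 / (H(v)**2 * S_Y + En)) * dw
               for v in s2h_s])

# Eq. 5.33: condition on sigma_n^2; the blur is averaged out.
D_n = np.mean([np.sum(EH**2 * S_Y**2 / (EH2 * S_Y + v)) * dw
               for v in s2n_s])

primary = "sigma_h^2" if D_h > D_n else "sigma_n^2"
print(D_h, D_n, primary)
```

The parameter with the larger design value is measured first; with different assumed spectra the ordering can flip, which is precisely why $D$ is computed rather than guessed.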
The primary parameter is chosen according to $\theta_{i^*} = \sigma^2_h$ if $D(\sigma^2_h) > D(\sigma^2_n)$ and $\theta_{i^*} = \sigma^2_n$ if $D(\sigma^2_h) \le D(\sigma^2_n)$.

Example 5.2. A random phase process is of the form $Y(n; A, f_c) = A\cos(2\pi f_c n + \Phi)$, where the amplitude $A$ and frequency $f_c$ are fixed, and the phase random variable $\Phi$ is uniformly distributed over the interval $[0, 2\pi]$. A random phase signal is WS stationary. The power spectral density $S_Y(\omega; A, f_c)$ of $Y(n; A, f_c)$ with $N$ samples is $\frac{A^2 N}{4}\,\delta(f - f_c)$. Assume the signal observation model

$$X(n) = h_{\sigma^2_h}(n) * Y(n; A, f_c) + N_{\sigma^2_n}(n).$$

Supposing that $\theta = [A, f_c, \sigma^2_h, \sigma^2_n]$ is unknown, we use Eq. 5.28 to find the primary parameter. $D(\sigma^2_h)$ is found using Eq. 5.32, where $S_Y(\omega)$ is replaced by $S_{\Theta|\sigma^2_h,Y}(\omega)$, which, owing to the independence of $A$ and $f_c$ from $\sigma^2_h$, is given by

$$S_{\Theta|\sigma^2_h,Y}(\omega) = \mathcal{F}\big[E_{\Theta|\sigma^2_h}[r_{Y_\theta|\sigma^2_h}(\tau; A, f_c)]\big] = \mathcal{F}\big[E_{A,f_c}[r_Y(\tau; A, f_c)]\big] = E_{A,f_c}\big[\mathcal{F}[r_Y(\tau; A, f_c)]\big] = E_{A,f_c}[S_Y(\omega; A, f_c)],$$

assuming that interchanging the Fourier transform and expectation is justified. We use Eq. 5.33 to calculate $D(\sigma^2_n)$, where $S_{\Theta|\sigma^2_n,Y}(\omega)$ is found similarly to $S_{\Theta|\sigma^2_h,Y}(\omega)$. For the amplitude,

$$\begin{aligned}
D(A) &= E_A\left[\int_{-\infty}^{\infty}\frac{|S_{\Theta|A,YX}(\omega)|^2}{S_{\Theta|A,X}(\omega)}\,d\omega\right] \\
&= \int\left[\int_{-\infty}^{\infty}\frac{\big|E_{\sigma^2_h}[H_{\sigma^2_h}(\omega)]\big|^2\,|S_{\Theta|A,Y}(\omega)|^2}{E_{\sigma^2_h}\big[|H_{\sigma^2_h}(\omega)|^2\big]\,S_{\Theta|A,Y}(\omega) + E_{\sigma^2_n}[\sigma^2_n]}\,d\omega\right]p(A)\,dA,
\end{aligned}$$

where
$$S_{\Theta|A,Y}(\omega) = \mathcal{F}\big[E_{\Theta|A}[r_{Y_\theta|A}(\tau; A, f_c)]\big] = \mathcal{F}\big[E_{f_c}[r_Y(\tau; A, f_c)]\big] = E_{f_c}\big[\mathcal{F}[r_Y(\tau; A, f_c)]\big] = E_{f_c}[S_Y(\omega; A, f_c)].$$

Finally,

$$\begin{aligned}
D(f_c) &= E_{f_c}\left[\int_{-\infty}^{\infty}\frac{|S_{\Theta|f_c,YX}(\omega)|^2}{S_{\Theta|f_c,X}(\omega)}\,d\omega\right] \\
&= \int\left[\int_{-\infty}^{\infty}\frac{\big|E_{\sigma^2_h}[H_{\sigma^2_h}(\omega)]\big|^2\,|S_{\Theta|f_c,Y}(\omega)|^2}{E_{\sigma^2_h}\big[|H_{\sigma^2_h}(\omega)|^2\big]\,S_{\Theta|f_c,Y}(\omega) + E_{\sigma^2_n}[\sigma^2_n]}\,d\omega\right]p(f_c)\,df_c,
\end{aligned}$$

where $S_{\Theta|f_c,Y}(\omega)$ can be found similarly to $S_{\Theta|A,Y}(\omega)$. The parameter with maximum experimental design value is determined first.

To analyze experimental-design performance, suppose that $\sigma^2_n \in [0.1, \sigma^2_{n\,\text{max}}]$, $\sigma^2_h \in [0.5, \sigma^2_{h\,\text{max}}]$, $A \in [5, A_{\text{max}}]$, and $f_c \in [0.1, f_{c\,\text{max}}]$, all being uniform. The nominal values for $\sigma^2_{n\,\text{max}}$, $\sigma^2_{h\,\text{max}}$, $A_{\text{max}}$, and $f_{c\,\text{max}}$ are 1, 4, 10, and 0.15, respectively. Figure 5.1 shows the primary parameter. For example, in Fig. 5.1(a), we consider the uncertainty interval of $\sigma^2_n$ to be $[0.1, \sigma^2_{n\,\text{max}}]$, $0.5 \le \sigma^2_{n\,\text{max}} \le 8$. When the interval is small, $\sigma^2_h$ is primary, but as the interval gets larger, $f_c$ becomes primary. In Fig. 5.1(d), when the interval of $f_c$ is small, $\sigma^2_h$ is primary, but for large uncertainty intervals of $f_c$, the primary parameter is $f_c$.

We now consider experimental-design performance when a sequence of experiments is conducted. Four experiments are necessary to determine all unknown parameters. For the first experiment, the primary parameter is found using the prior distributions for the parameters. When the first experiment is completed, the true value of the determined parameter is put into the signal observation model. Then, using the updated signal observation model, which has fewer unknown parameters, the primary parameter among the remaining unknown parameters is found. The procedure is repeated until all parameters are determined. At each step, after performing the experiment and incorporating the value of the corresponding parameter in the model, we find the IBR Wiener filter for the new uncertainty class and calculate its MSE relative to the underlying true model. For simulations, we assume nominal intervals for the uncertain parameters as considered for the single-experiment case and compute the average MSE over 10,000 different assumed true models. Figure 5.2 shows the average MSE after conducting different numbers of experiments when chosen randomly and via experimental design.
Experimental design achieves a faster decrease in average MSE than random selection. Both curves reach the same point because eventually all experiments are performed and the true model is found regardless of the order of the experiments. ▪
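The sequential procedure just described can be sketched generically. In the skeleton below, the design value of a parameter is replaced by its prior variance as a crude stand-in for $D(\theta_i)$; a real implementation would evaluate Eqs. 5.32–5.33 against the current reduced uncertainty class. The parameter names and intervals follow the example's nominal settings.

```python
# Hypothetical stand-in: the design value of a parameter is taken to be its
# prior variance, a proxy for D(theta_i); the real computation would use
# Eqs. 5.32-5.33 on the current (reduced) uncertainty class.
PRIORS = {"s2n": (0.1, 1.0), "s2h": (0.5, 4.0), "A": (5.0, 10.0), "fc": (0.1, 0.15)}

def design_value(param, priors):
    lo, hi = priors[param]
    return (hi - lo) ** 2 / 12.0          # variance of a uniform prior

priors = dict(PRIORS)
order = []
while priors:
    primary = max(priors, key=lambda q: design_value(q, priors))
    order.append(primary)
    priors.pop(primary)   # the experiment reveals the parameter's true value
    # ...recompute the IBR Wiener filter for the reduced class here...
print(order)
```

The greedy structure (pick the primary parameter, perform the experiment, collapse its prior, repeat) is exactly the loop used in the simulations behind Fig. 5.2.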
Figure 5.1 Prioritizing the determination of uncertain parameters when the interval of one uncertain parameter changes and the other intervals are set to the nominal intervals. The parameter with the maximum experimental design value is the primary parameter. (a) The interval of $\sigma^2_n$ changes as $[0.1, \sigma^2_{n\,\text{max}}]$. (b) The interval of $\sigma^2_h$ changes as $[0.1, \sigma^2_{h\,\text{max}}]$. (c) The interval of $A$ changes as $[5, A_{\text{max}}]$. (d) The interval of $f_c$ changes as $[0.1, f_{c\,\text{max}}]$. [Reprinted from (Dehghannasiri et al., 2017d).]
5.3 IBR Karhunen–Loève Compression

In Section 2.3.3 we discussed Karhunen–Loève compression for a discrete zero-mean random process $X(n)$, $n = 1, 2, \dots$, assuming full knowledge of the covariance function. We now consider compression when $X(n)$ is defined over $\{1, 2, \dots, N\}$ and the covariance matrix is unknown. Henceforth, to avoid confusion with primes when they are used to designate variables, we shall use superscript $T$ to denote transpose. $K(i,j)$ denotes the $(i,j)$ element of matrix $K$. If the covariance matrix $K$ is known, then the defining integral equation is

$$K\mathbf{u} = \lambda\mathbf{u}, \tag{5.34}$$

where $\lambda$ and $\mathbf{u} = [u(1), \dots, u(N)]^T$ are an eigenvalue and eigenvector of $K$, respectively. Then the Karhunen–Loève expansion is of the form
Figure 5.2 The average MSE obtained after performing each experiment in a sequence of experiments for the signal observation model with four unknown parameters. Results are shown when experiments are chosen randomly or based on the proposed experimental design method. [Reprinted from (Dehghannasiri et al., 2017d).]
$$X(n) = \sum_{i=1}^{N} Z_i\,u_i(n), \tag{5.35}$$

where $Z_i = \mathbf{X}^T\mathbf{u}_i$ and $\mathbf{u}_i$ is the eigenvector of $K$ with corresponding eigenvalue $\lambda_i$. Optimal $m$-term compression is achieved by

$$X_m(n) = \sum_{i=1}^{m} Z_i\,u_i(n). \tag{5.36}$$
5.3.1 IBR compression

Now suppose that $K$ is unknown and $\Theta$ is the uncertainty class of all possible covariance matrices $K_\theta$, $\theta\in\Theta$. We desire a covariance matrix $K^\Theta$ that can represent $\Theta$ in an effective way, where effectiveness is defined relative to the uncertainty class and the objective, which is compression. The key is representing the MSE between a given process $X$ and the compressed process $X'$ when compression is via an arbitrary covariance matrix $K'$, in which case the compressed process is

$$X'_m(n) = \sum_{i=1}^{m} Z'_i\,u'_i(n), \tag{5.37}$$

where $\mathbf{u}'_i$ is an eigenvector of $K'$ and $Z'_i$ is the generalized Fourier coefficient of $X(n)$ relative to $\mathbf{u}'_i$. Recalling Eq. 2.63, this MSE is given by
$$\mathrm{MSE}[X, X'_m] = \sum_{n=1}^{N} E\big[|X(n) - X'_m(n)|^2\big]. \tag{5.38}$$
Theorem 5.1 (Dehghannasiri et al., 2018b). If the random processes $X(n)$ and $X'_m(n)$ are defined as in Eqs. 5.35 and 5.37, respectively, then

$$\mathrm{MSE}[X, X'_m] = \sum_{i=1}^{N}\lambda_i - \sum_{i=1}^{m}(\mathbf{u}'_i)^T\overline{K}\,\overline{\mathbf{u}'_i}. \tag{5.39}$$
Proof. For fixed $n$, the MSE is

$$\begin{aligned}
E\big[|X(n) - X'_m(n)|^2\big] &= E\Bigg[\bigg(\sum_{i=1}^{N} Z_i u_i(n) - \sum_{i=1}^{m} Z'_i u'_i(n)\bigg)\overline{\bigg(\sum_{l=1}^{N} Z_l u_l(n) - \sum_{l=1}^{m} Z'_l u'_l(n)\bigg)}\Bigg] \\
&= E\Bigg[\sum_{i=1}^{N}\sum_{l=1}^{N} Z_i\overline{Z_l}\,u_i(n)\overline{u_l(n)} + \sum_{i=1}^{m}\sum_{l=1}^{m} Z'_i\overline{Z'_l}\,u'_i(n)\overline{u'_l(n)} - \sum_{i=1}^{m}\sum_{l=1}^{N} Z'_i\overline{Z_l}\,u'_i(n)\overline{u_l(n)} - \sum_{i=1}^{N}\sum_{l=1}^{m} Z_i\overline{Z'_l}\,u_i(n)\overline{u'_l(n)}\Bigg] \\
&= \sum_{i=1}^{N}\lambda_i|u_i(n)|^2 + \sum_{i=1}^{m}\sum_{l=1}^{m} E[Z'_i\overline{Z'_l}]\,u'_i(n)\overline{u'_l(n)} - \sum_{i=1}^{m}\sum_{l=1}^{N} E[Z'_i\overline{Z_l}]\,u'_i(n)\overline{u_l(n)} - \sum_{i=1}^{N}\sum_{l=1}^{m} E[Z_i\overline{Z'_l}]\,u_i(n)\overline{u'_l(n)},
\end{aligned} \tag{5.40}$$

since $E[Z_i\overline{Z_l}] = \lambda_i\delta_{il}$ for the Karhunen–Loève coefficients. Hence, by Eq. 5.38, the global MSE over all $n$ is

$$\mathrm{MSE}[X, X'_m] = \sum_{i=1}^{N}\lambda_i\sum_{n=1}^{N}|u_i(n)|^2 + \sum_{i=1}^{m} E[|Z'_i|^2]\sum_{n=1}^{N}|u'_i(n)|^2 - \sum_{i=1}^{m}\sum_{l=1}^{N} E[Z'_i\overline{Z_l}]\sum_{n=1}^{N} u'_i(n)\overline{u_l(n)} - \sum_{i=1}^{N}\sum_{l=1}^{m} E[Z_i\overline{Z'_l}]\sum_{n=1}^{N} u_i(n)\overline{u'_l(n)}, \tag{5.41}$$

where the second summand has collapsed to its diagonal terms because $\{u'_i\}$ is orthonormal. $E[|Z'_i|^2]$ can be calculated as follows:

$$E[|Z'_i|^2] = E\left[\sum_{n=1}^{N}\sum_{n_1=1}^{N} X(n)\overline{X(n_1)}\,\overline{u'_i(n)}\,u'_i(n_1)\right] = \sum_{n=1}^{N}\sum_{n_1=1}^{N} E\big[X(n)\overline{X(n_1)}\big]\,\overline{u'_i(n)}\,u'_i(n_1) = (\overline{\mathbf{u}'_i})^T K\,\mathbf{u}'_i. \tag{5.42}$$

To compute the third term in Eq. 5.41, we first obtain

$$E[Z'_i\overline{Z_l}] = E\left[\sum_{n=1}^{N}\sum_{n_1=1}^{N} X(n)\overline{X(n_1)}\,\overline{u'_i(n)}\,u_l(n_1)\right] = \sum_{n=1}^{N}\left[\sum_{n_1=1}^{N} K(n, n_1)\,u_l(n_1)\right]\overline{u'_i(n)} = \sum_{n=1}^{N}\lambda_l\,u_l(n)\,\overline{u'_i(n)}, \tag{5.43}$$

where the third equality follows from the discrete form of Eq. 2.55. Compute the third term in Eq. 5.41 by plugging in Eq. 5.43:

$$\begin{aligned}
\sum_{i=1}^{m}\sum_{l=1}^{N} E[Z'_i\overline{Z_l}]\sum_{n_1=1}^{N} u'_i(n_1)\overline{u_l(n_1)} &= \sum_{i=1}^{m}\sum_{l=1}^{N}\left[\sum_{n=1}^{N}\lambda_l\,u_l(n)\,\overline{u'_i(n)}\right]\left[\sum_{n_1=1}^{N} u'_i(n_1)\overline{u_l(n_1)}\right] \\
&= \sum_{i=1}^{m}\sum_{n=1}^{N}\sum_{n_1=1}^{N}\left[\sum_{l=1}^{N}\lambda_l\,u_l(n)\overline{u_l(n_1)}\right]u'_i(n_1)\,\overline{u'_i(n)} \\
&= \sum_{i=1}^{m}\sum_{n=1}^{N}\sum_{n_1=1}^{N} K(n, n_1)\,u'_i(n_1)\,\overline{u'_i(n)} \\
&= \sum_{i=1}^{m}(\overline{\mathbf{u}'_i})^T K\,\mathbf{u}'_i,
\end{aligned} \tag{5.44}$$

where the third equality follows from the discrete form of Mercer's theorem in Eq. 2.58. In Eq. 5.41, the fourth term is the conjugate of the third term; hence, it equals the conjugate of Eq. 5.44. Using Eqs. 5.42 and 5.44, we express Eq. 5.41 as

$$\mathrm{MSE}[X, X'_m] = \sum_{i=1}^{N}\lambda_i + \sum_{i=1}^{m}(\overline{\mathbf{u}'_i})^T K\,\mathbf{u}'_i - \sum_{i=1}^{m}(\overline{\mathbf{u}'_i})^T K\,\mathbf{u}'_i - \sum_{i=1}^{m}(\mathbf{u}'_i)^T\overline{K}\,\overline{\mathbf{u}'_i} = \sum_{i=1}^{N}\lambda_i - \sum_{i=1}^{m}(\mathbf{u}'_i)^T\overline{K}\,\overline{\mathbf{u}'_i}, \tag{5.45}$$

where the first two summands result from the orthonormality of $\{u_i\}$ and $\{u'_i\}$. ▪

Note the scalar identity

$$(\mathbf{u}'_i)^T\overline{K}\,\overline{\mathbf{u}'_i} = \big[(\mathbf{u}'_i)^T\overline{K}\,\overline{\mathbf{u}'_i}\big]^T = (\overline{\mathbf{u}'_i})^T(\overline{K})^T\mathbf{u}'_i = (\overline{\mathbf{u}'_i})^T K\,\mathbf{u}'_i, \tag{5.46}$$

since $K$ is Hermitian.
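Theorem 5.1 is easy to verify numerically in the real-valued case, where the conjugates drop out and Eq. 5.39 reads $\mathrm{MSE} = \sum_i\lambda_i - \sum_i(\mathbf{u}'_i)^T K\,\mathbf{u}'_i$. The sketch below (random matrices, illustrative only) compares the closed form against a direct projection computation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 6, 3

A = rng.normal(size=(N, N))
K = A @ A.T                       # a random covariance matrix
lam_sum = np.trace(K)             # sum of the eigenvalues

Uq, _ = np.linalg.qr(rng.normal(size=(N, N)))
Up = Uq[:, :m]                    # arbitrary orthonormal vectors u'_1..u'_m

# Eq. 5.39 (real case): MSE = sum_i lambda_i - sum_i u'_i^T K u'_i
mse_closed = lam_sum - np.trace(Up.T @ K @ Up)

# Direct: X'_m = P X with projection P = sum_i u'_i u'_i^T, so
# MSE = E||X - P X||^2 = tr((I - P) K (I - P))
P = Up @ Up.T
mse_direct = np.trace((np.eye(N) - P) @ K @ (np.eye(N) - P))

# The optimal basis (top-m eigenvectors of K) attains the minimum MSE.
w, V = np.linalg.eigh(K)
mse_opt = lam_sum - w[-m:].sum()

assert np.isclose(mse_closed, mse_direct) and mse_opt <= mse_closed + 1e-9
```

The final assertion is the trace-maximization fact underlying Eq. 5.36: no $m$-dimensional orthonormal basis beats the top-$m$ eigenvectors of $K$.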
Our aim is to find the $m$-term compression that minimizes the average MSE across the uncertainty class. Let $X^\theta(n)$ denote a random process with covariance matrix $K_\theta$ whose eigenvalues and eigenvectors are $\lambda_i^\theta$ and $\mathbf{u}_i^\theta$, respectively. The average MSE for the compressed random process $X'_m(n)$ defined in Eq. 5.37 is

$$E_\Theta\big[\mathrm{MSE}[X^\theta, X'_m]\big] = E_\Theta\left[\sum_{i=1}^{N}\lambda_i^\theta - \sum_{i=1}^{m}(\mathbf{u}'_i)^T K_\theta\,\mathbf{u}'_i\right] = \sum_{i=1}^{N} E_\Theta[\lambda_i^\theta] - \sum_{i=1}^{m}(\mathbf{u}'_i)^T E_\Theta[K_\theta]\,\mathbf{u}'_i. \tag{5.47}$$
We need to find the $\mathbf{u}'_i$ that minimize the preceding equation subject to the constraint $\|\mathbf{u}'_i\| = 1$ for $1 \le i \le m$:

$$\begin{aligned}
\underset{\mathbf{u}'_i}{\text{minimize}}\quad & E_\Theta\big[\mathrm{MSE}[X^\theta, X'_m]\big] \\
\text{subject to}\quad & \|\mathbf{u}'_i\| = 1,\quad i = 1,\dots,m. \tag{5.48}
\end{aligned}$$
To solve this constrained optimization using Lagrange multipliers, consider
$$\begin{aligned}
L(\mathbf{u}'_i, \zeta_i) &= \sum_{i=1}^{N} E_\Theta[\lambda_i^\theta] - \sum_{i=1}^{m}(\mathbf{u}'_i)^T E_\Theta[K_\theta]\,\mathbf{u}'_i + \sum_{i=1}^{m}\zeta_i\big(\langle\mathbf{u}'_i,\mathbf{u}'_i\rangle - 1\big) \\
&= \sum_{i=1}^{N} E_\Theta[\lambda_i^\theta] - \sum_{i=1}^{m}\sum_{n=1}^{N}\sum_{n_1=1}^{N} E_\Theta[K_\theta(n,n_1)]\,u'_i(n)\,u'_i(n_1) + \sum_{i=1}^{m}\zeta_i\left(\sum_{n=1}^{N} u'_i(n)^2 - 1\right). \tag{5.49}
\end{aligned}$$

The partial derivative of $L(\mathbf{u}'_i, \zeta_i)$ with respect to $u'_i(n)$ is

$$\frac{\partial L(\mathbf{u}'_i, \zeta_i)}{\partial u'_i(n)} = -2\sum_{n_1=1}^{N} E_\Theta[K_\theta(n,n_1)]\,u'_i(n_1) + 2\zeta_i\,u'_i(n). \tag{5.50}$$
Setting this partial derivative equal to 0 yields the following relation for the $\mathbf{u}_i^\Theta$ that minimize the constrained optimization in Eq. 5.48:

$$\sum_{n_1=1}^{N} E_\Theta[K_\theta(n,n_1)]\,u_i^\Theta(n_1) = \zeta_i\,u_i^\Theta(n). \tag{5.51}$$

This relation shows that the minimizing $\mathbf{u}_i^\Theta$ and the Lagrange multiplier $\zeta_i$ are the $i$th eigenvector and eigenvalue of the expected covariance matrix $E_\Theta[K_\theta]$, respectively. Hence, the minimum average MSE is achieved by using the effective covariance matrix $K^\Theta = E_\Theta[K_\theta]$. Thus, the IBR $m$-term compression is

$$X_m^\Theta(n) = \sum_{i=1}^{m} Z_i^\Theta\,u_i^\Theta(n), \tag{5.52}$$

where $\mathbf{u}_i^\Theta$ is the $i$th eigenvector of $K^\Theta$ and $Z_i^\Theta$ is the generalized Fourier coefficient relative to $\mathbf{u}_i^\Theta$.
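The IBR compression of Eq. 5.52 reduces to an eigendecomposition of the effective covariance matrix $E_\Theta[K_\theta]$. The sketch below uses an assumed toy uncertainty class of random covariance matrices with equal prior weight and confirms that no model-specific eigenbasis attains a lower average MSE (Eq. 5.39, real case) than the IBR basis.

```python
import numpy as np

rng = np.random.default_rng(3)
N, m = 5, 2

# Toy uncertainty class: random SPD covariance matrices, equal prior weight.
models = []
for _ in range(8):
    A = rng.normal(size=(N, N))
    models.append(A @ A.T)

K_eff = np.mean(models, axis=0)              # effective covariance E[K_theta]

def avg_mse(basis):
    """Average of Eq. 5.39 over the class for an m-column orthonormal basis."""
    return np.mean([np.trace(K) - np.trace(basis.T @ K @ basis)
                    for K in models])

def top_eigvecs(K, m):
    w, V = np.linalg.eigh(K)
    return V[:, np.argsort(w)[::-1][:m]]

ibr_basis = top_eigvecs(K_eff, m)            # IBR compression basis (Eq. 5.52)
ibr_mse = avg_mse(ibr_basis)

# The IBR basis minimizes average MSE, so no model-specific basis beats it.
others = [avg_mse(top_eigvecs(K, m)) for K in models]
assert all(ibr_mse <= o + 1e-9 for o in others)
```

Because the average MSE depends on the bases only through $E_\Theta[K_\theta]$, the guarantee holds for any uncertainty class, not just this toy one.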
5.3.2 Experimental design for Karhunen–Loève compression

When performing Karhunen–Loève compression to $m$ terms relative to an uncertainty class $\Theta$ of covariance matrices, a class $\Psi$ of $m$-term compressions, and the cost function $C$ being the MSE between the original and compressed processes as in Eq. 5.39, MOCU is defined by

$$M_\Psi(\Theta) = E_\Theta\big[\mathrm{MSE}[X^\theta, X_m^\Theta] - \mathrm{MSE}[X^\theta, X_m^\theta]\big]. \tag{5.53}$$

The primary parameter minimizes the remaining MOCU. In accordance with Eq. 5.12, using the relation for MSE in Eq. 5.39, the primary parameter $\theta_{j^*}$ is
$$\begin{aligned}
j^* &= \arg\min_j E_{\theta_j}\Big[E_{\Theta|\theta_j}\Big[\mathrm{MSE}\big[X^{\theta|\theta_j}, X_m^{\Theta|\theta_j}\big]\Big]\Big] \\
&= \arg\min_j E_{\theta_j}\left[E_{\Theta|\theta_j}\left[\sum_{i=1}^{N}\lambda_i^{\theta|\theta_j} - \sum_{i=1}^{m}\big(\mathbf{u}_i^{\Theta|\theta_j}\big)^T K_{\theta|\theta_j}\,\mathbf{u}_i^{\Theta|\theta_j}\right]\right] \\
&= \arg\min_j\left\{\sum_{i=1}^{N} E_\Theta[\lambda_i^\theta] - E_{\theta_j}\left[E_{\Theta|\theta_j}\left[\sum_{i=1}^{m}\big(\mathbf{u}_i^{\Theta|\theta_j}\big)^T K_{\theta|\theta_j}\,\mathbf{u}_i^{\Theta|\theta_j}\right]\right]\right\} \\
&= \arg\max_j E_{\theta_j}\left[\sum_{i=1}^{m}\lambda_i^{\Theta|\theta_j}\right],
\end{aligned} \tag{5.54}$$

where $\lambda_i^{\Theta|\theta_j}$ is the $i$th eigenvalue of $K^{\Theta|\theta_j} = E_{\Theta|\theta_j}[K_{\theta|\theta_j}]$, and where the fourth equality holds because the first summand does not affect the arg max and because $\mathbf{u}_i^{\Theta|\theta_j}$ is the eigenvector of the conditional expected covariance matrix $K^{\Theta|\theta_j}$, so that

$$E_{\Theta|\theta_j}\big[K_{\theta|\theta_j}\big]\,\mathbf{u}_i^{\Theta|\theta_j} = \lambda_i^{\Theta|\theta_j}\,\mathbf{u}_i^{\Theta|\theta_j}. \tag{5.55}$$
According to Eq. 5.54, to find the primary parameter, for each unknown parameter one must compute the expectation over the other unknown parameters given each possible value of that parameter. In (Dehghannasiri et al., 2017c), this problem is addressed when the elements of the covariance matrix are distributed according to the Wishart distribution. Here we provide an example from (Dehghannasiri, 2016) with a blocked covariance matrix.

Example 5.3. In a block covariance matrix, each block corresponds to a group of correlated random variables, and there is no correlation between random variables in different blocks. Each block represents a system negligibly correlated with other systems. The model has long been used to study feature selection (Jain and Waller, 1978) and has more recently been used to model high-dimensional gene-expression microarray data (Hua et al., 2009). Different correlation structures can be used within blocks, depending on the application. Three structures are employed in (Jain and Waller, 1978): (i) $\rho_{ij} = \rho$, (ii) $\rho_{ij} = \rho^{|i-j|}$, and (iii) $\rho_{ij} = 1$ if $|i - j| = 1$ and $\rho_{ij} = 0$ if $|i - j| > 1$, so a single parameter $\rho$ determines the correlation structure. Our methodology is not so limited, and we use a more general model, with correlation $\rho_i$ and variance $\sigma_i^2$ within the $i$th block. The following covariance matrix contains two blocks of size $2\times 2$ and $3\times 3$:
$$\begin{bmatrix}
\sigma_1^2 & \rho_1\sigma_1^2 & 0 & 0 & 0 \\
\rho_1\sigma_1^2 & \sigma_1^2 & 0 & 0 & 0 \\
0 & 0 & \sigma_2^2 & \rho_2\sigma_2^2 & \rho_2\sigma_2^2 \\
0 & 0 & \rho_2\sigma_2^2 & \sigma_2^2 & \rho_2\sigma_2^2 \\
0 & 0 & \rho_2\sigma_2^2 & \rho_2\sigma_2^2 & \sigma_2^2
\end{bmatrix}.$$
Assuming that the parameters of the covariance matrix are unknown, the primary parameter to improve the performance of Karhunen–Loève compression is found. Unknown parameters are uniformly distributed, and different parameters are independent. Consider a covariance matrix with two blocks of size $2\times 2$ whose unknown parameters are uniformly distributed as follows (nominal intervals): $\sigma_1^2 \in [0.1, 4]$, $\rho_1 \in [-0.3, 0.3]$, $\sigma_2^2 \in [0.1, 3]$, and $\rho_2 \in [-0.1, 0.1]$. After compression, one random variable remains. In Fig. 5.3, the interval of one uncertain parameter changes while the other intervals are fixed at the nominal intervals. According to Fig. 5.3(a), if we change the interval of $\sigma_1^2$ such that $\sigma_1^2 \in [0.1, \sigma_{1\,\text{max}}^2]$ and $\sigma_{1\,\text{max}}^2$ varies from 0.5 to 7, then the primary parameter is given by

$$\begin{cases}
\rho_2 & \sigma_{1\,\text{max}}^2 \le 1.4 \\
\sigma_2^2 & 1.4 \le \sigma_{1\,\text{max}}^2 \le 2.9 \\
\sigma_1^2 & 2.9 \le \sigma_{1\,\text{max}}^2 \le 3.7 \\
\rho_1 & 3.7 \le \sigma_{1\,\text{max}}^2.
\end{cases}$$

In Fig. 5.3(e), the average MSE resulting from determining the primary parameter is always lower than the MSE from determining the other parameters. Suppose that we change the interval of $\sigma_2^2$ so that $\sigma_2^2 \in [0.1, \sigma_{2\,\text{max}}^2]$, where $\sigma_{2\,\text{max}}^2$ varies from 0.5 to 7. As seen from Fig. 5.3(c), when the interval of $\sigma_2^2$ is very small, the primary parameter is $\rho_1$; as the interval becomes larger, the primary parameter changes to $\sigma_1^2$; and for very large intervals, the primary parameter is $\sigma_2^2$.

To evaluate performance for sequential experiments, assume a covariance matrix that has one $2\times 2$ block (block 1) and two $3\times 3$ blocks (blocks 2 and 3). The parameters are distributed as follows: $\sigma_1^2 \in [0.1, 4]$, $\rho_1 \in [-0.3, 0.3]$, $\sigma_2^2 \in [0.1, 3]$, $\rho_2 \in [0.01, 0.4]$, $\sigma_3^2 \in [0.5, 2]$, and $\rho_3 \in [0.5, 0.9]$. Because parameters are uniformly distributed and independent, the posterior distribution after determining each parameter can be found analytically by multiplying the marginal distributions of the remaining unknown parameters.
Figure 5.4 shows the average MSE obtained after conducting each experiment chosen either via experimental design or randomly. Table 5.1 shows the proportion of times that each unknown parameter is chosen over all experiments. The primary parameter for the first experiment is always $\sigma_1^2$, but for the remaining experiments the primary parameter depends on the outcomes of the preceding experiments. ▪
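The selection rule of Eq. 5.54 can be sketched for the two-block model with the example's nominal intervals; the Monte Carlo estimator of the conditional expected covariance and the grid over each parameter are implementation assumptions, not from the text. Consistent with Fig. 5.3(a) at the nominal $\sigma_{1\,\text{max}}^2 = 4$, the computation selects $\rho_1$.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 1   # compress down to one retained term

# Uniform priors from the example's nominal intervals.
PRIORS = {"s1": (0.1, 4.0), "r1": (-0.3, 0.3), "s2": (0.1, 3.0), "r2": (-0.1, 0.1)}

def cov(s1, r1, s2, r2):
    """Two 2x2 blocks with within-block correlation (Example 5.3)."""
    K = np.zeros((4, 4))
    K[:2, :2] = [[s1, r1 * s1], [r1 * s1, s1]]
    K[2:, 2:] = [[s2, r2 * s2], [r2 * s2, s2]]
    return K

def conditional_expected_cov(param, value, n=2000):
    """Monte Carlo estimate of E[K_theta | theta_param = value]."""
    draws = {k: rng.uniform(lo, hi, n) for k, (lo, hi) in PRIORS.items()}
    draws[param] = np.full(n, value)
    Ks = [cov(draws["s1"][i], draws["r1"][i], draws["s2"][i], draws["r2"][i])
          for i in range(n)]
    return np.mean(Ks, axis=0)

def design_value(param, n_grid=30):
    # Eq. 5.54: the primary parameter maximizes E_{theta_j} of the sum of the
    # top-m eigenvalues of the conditional expected covariance matrix.
    lo, hi = PRIORS[param]
    vals = [np.sort(np.linalg.eigvalsh(conditional_expected_cov(param, v)))[-m:].sum()
            for v in np.linspace(lo, hi, n_grid)]
    return float(np.mean(vals))

D = {p: design_value(p) for p in PRIORS}
primary = max(D, key=D.get)
print(D, primary)
```

Only the conditional expected covariance matters, so the whole search is a handful of small eigendecompositions per parameter.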
Figure 5.3 The effect of increasing the uncertainty interval of a particular uncertain parameter on the experimental-design performance for an unknown covariance matrix with two blocks of size $2\times 2$. Experimental design values as (a) $\sigma^2_{1\,\text{max}}$ increases; (b) $\rho_{1\,\text{max}}$ increases; (c) $\sigma^2_{2\,\text{max}}$ increases; and (d) $\rho_{2\,\text{max}}$ increases. Average MSE as (e) $\sigma^2_{1\,\text{max}}$ increases; (f) $\rho_{1\,\text{max}}$ increases; (g) $\sigma^2_{2\,\text{max}}$ increases; and (h) $\rho_{2\,\text{max}}$ increases.
Figure 5.4 The average MSE obtained after conducting each experiment in a sequence of experiments for an unknown covariance matrix with a disjoint-block model when experiments are chosen randomly or based on the experimental design.

Table 5.1 The proportion of times each parameter is chosen for each experiment in a sequence of experiments.

                   E1      E2      E3      E4      E5      E6
    $\sigma_1^2$    1       0       0       0       0       0
    $\rho_1$        0      0.43    0.18    0.27    0.07    0.05
    $\sigma_2^2$    0      0.43    0.31    0.21    0.06     0
    $\rho_2$        0       0       0      0.25    0.55    0.20
    $\sigma_3^2$    0      0.14    0.51    0.22    0.03    0.10
    $\rho_3$        0       0       0      0.05    0.29    0.65
5.4 Markovian Regulatory Networks

Thus far we have focused on experimental design in the context of canonical expansions, which is natural for signal processing, where much of the theory can be placed into that framework. But the definition of MOCU is very general, pertaining to any class of operators on any uncertainty class of models governed by a prior (posterior) distribution. MOCU-based experimental design was first formulated for operators on gene regulatory networks (Dehghannasiri et al., 2015a), and it is to that application that we now turn. Since the network probability structure can be represented as an ergodic, irreducible Markov chain possessing a steady-state distribution, and since operator optimality and MOCU are characterized in terms of that distribution, the methodology discussed in this section is set in the general context of a Markov chain possessing a steady-state distribution, and it is from that perspective that we study it. Furthermore, it can be applied to other kinds of networks with optimality and MOCU characterized in different manners.
Consider a finite, irreducible Markov chain possessing transition probability matrix (TPM) $P$ and steady-state distribution (SSD) $\pi$. Keeping in accord with the usual notation, throughout this section $\pi$ will denote the SSD and not a prior distribution, a convention that should cause no difficulty because expectation with respect to the prior is denoted by $E_\Theta$. The Markov-chain states are partitioned into sets $D$ and $U$ of desirable and undesirable states, respectively, possessing total steady-state probability masses $\pi_D$ and $\pi_U = 1 - \pi_D$, respectively. The operator class $\Psi$ consists of a family of transformations (called perturbations) $\psi$ on $P$, and the cost function $C$ is the total steady-state probability mass of the undesirable states. An optimal perturbation minimizes

$$C(\psi) = \sum_{i\in U}\pi_i^\psi, \tag{5.56}$$
where $\pi_i^\psi$ is the steady-state probability of state $i$ following the perturbation $\psi$. Uncertainty arises when $P$ depends on an uncertainty vector $\theta$ and takes the form $P(\theta)$. An IBR perturbation is defined by

$$\psi_{\mathrm{IBR}} = \arg\min_{\psi\in\Psi} E_\Theta[C_\theta(\psi)] = \arg\min_{\psi\in\Psi} E_\Theta\left[\sum_{i\in U}\pi_i^\psi(\theta)\right], \tag{5.57}$$
where $\pi^\psi(\theta)$ is the SSD for $P^\psi(\theta)$, the TPM for the Markov chain following perturbation by $\psi$. MOCU is defined in the usual manner.

5.4.1 Markov chain perturbations

Application of IBR operator theory and MOCU-based experimental design depends on the properties of the perturbation family, in particular, those properties relating to the effects of perturbation on the SSD. Perturbation theory for finite Markov chains is fairly well developed (Hunter, 1992, 2002, 2005). Our presentation of the essentials of the perturbation theory is based on (Qian and Dougherty, 2008), which utilizes many of the earlier results. Since we focus on a given perturbation, we denote the perturbed TPM and the resulting SSD by $\tilde P$ and $\tilde\pi$, respectively. Thus, $\pi_i^\psi$ is replaced by $\tilde\pi_i$ in Eqs. 5.56 and 5.57. By definition, $\pi^T P = \pi^T$ and $\tilde\pi^T\tilde P = \tilde\pi^T$. Letting $\tilde P = P + E$, the SSD change satisfies

$$(\tilde\pi - \pi)^T(I - P) = \tilde\pi^T E, \tag{5.58}$$

where $I$ is the identity matrix. $\tilde\pi$ can be expressed using generalized inverses. A g-inverse of a matrix $A$ is any matrix $A^-$ such that $AA^-A = A$. If $\mathbf{t}$ and $\mathbf{u}$ are any vectors such that $\pi^T\mathbf{t} \ne 0$ and $\mathbf{u}^T\mathbf{e} \ne 0$, then $I - P + \mathbf{t}\mathbf{u}^T$ is nonsingular, $[I - P + \mathbf{t}\mathbf{u}^T]^{-1}$ is a g-inverse of $I - P$, and
$$\pi^T = \frac{\mathbf{u}^T[I - P + \mathbf{t}\mathbf{u}^T]^{-1}}{\mathbf{u}^T[I - P + \mathbf{t}\mathbf{u}^T]^{-1}\mathbf{e}}, \tag{5.59}$$

where $\mathbf{e}$ is a column vector whose components are all unity. The g-inverse satisfies the relation

$$[I - P + \mathbf{t}\mathbf{u}^T]^{-1}\mathbf{t} = \frac{\mathbf{e}}{\mathbf{u}^T\mathbf{e}}. \tag{5.60}$$
Consider a rank-one perturbation, in which case the perturbed Markov chain has transition matrix $\tilde P = P + \mathbf{a}\mathbf{b}^T$, where $\mathbf{a}$ and $\mathbf{b}$ are two arbitrary vectors satisfying $\mathbf{b}^T\mathbf{e} = 0$, and $\mathbf{a}\mathbf{b}^T$ represents a rank-one perturbation to the original TPM $P$. If $\mathbf{t}$ and $\mathbf{u}$ are any vectors such that $\pi^T\mathbf{t} \ne 0$ and $\mathbf{u}^T\mathbf{e} \ne 0$, then

$$\tilde\pi^T = \frac{(\pi^T\mathbf{a})\,\bar{\mathbf{b}}^T + (1 - \bar{\mathbf{b}}^T\mathbf{a})\,\pi^T}{(\pi^T\mathbf{a})(\bar{\mathbf{b}}^T\mathbf{e}) + 1 - \bar{\mathbf{b}}^T\mathbf{a}}, \tag{5.61}$$

where

$$\bar{\mathbf{b}}^T = \mathbf{b}^T[I - P + \mathbf{t}\mathbf{u}^T]^{-1} \tag{5.62}$$

(Hunter, 2005). If we let $\mathbf{t} = \mathbf{e}$, then $\bar{\mathbf{b}}^T\mathbf{e} = 0$ and

$$\tilde\pi^T = \pi^T + \frac{\pi^T\mathbf{a}}{1 - \bar{\mathbf{b}}^T\mathbf{a}}\,\bar{\mathbf{b}}^T. \tag{5.63}$$

If we now let $\mathbf{u} = \pi$, then $\bar{\mathbf{b}}^T = \mathbf{b}^T Z$, where

$$Z = [I - P + \mathbf{e}\pi^T]^{-1} \tag{5.64}$$

is the fundamental matrix of the Markov chain, which must exist because the chain is ergodic, and

$$\tilde\pi^T = \pi^T + \frac{\pi^T\mathbf{a}}{1 - \mathbf{b}^T Z\mathbf{a}}\,\mathbf{b}^T Z. \tag{5.65}$$
If the transition mechanisms before and after perturbation differ only in the $k$th state, then $E = \mathbf{e}_k\mathbf{b}^T$ has nonzero values only in its $k$th row, where $\mathbf{e}_k$ is the elementary vector with a 1 in the $k$th position and 0s elsewhere. Substituting this into Eq. 5.65 yields

$$\tilde\pi^T = \pi^T + \frac{\pi^T\mathbf{e}_k}{1 - \mathbf{b}^T Z\mathbf{e}_k}\,\mathbf{b}^T Z = \pi^T + \frac{\pi_k}{1 - \bar b_k}\,\bar{\mathbf{b}}^T. \tag{5.66}$$

For the $i$th state,

$$\tilde\pi_i = \pi_i + \frac{\pi_k\,\bar b_i}{1 - \bar b_k}. \tag{5.67}$$
If the difference vector for the $k$th rows of the transition matrices has only two nonzero components, for states $u$ and $v$, say $\tilde p_{ku} = p_{ku} - \varepsilon$ and $\tilde p_{kv} = p_{kv} + \varepsilon$, where $\varepsilon$ is the only transition-probability change, then it is not difficult to show that $\bar b_i = \varepsilon(z_{vi} - z_{ui})$ and $\bar b_k = \varepsilon(z_{vk} - z_{uk})$, where $z_{vi}$, $z_{ui}$, $z_{vk}$, $z_{uk}$ are elements of $Z$, and

$$\tilde\pi_i = \pi_i + \frac{\pi_k\,\varepsilon(z_{vi} - z_{ui})}{1 - \varepsilon(z_{vk} - z_{uk})}. \tag{5.68}$$

Substituting this expression into Eq. 5.56 gives the cost function for this special case:

$$C(\psi) = \sum_{i\in U}\left[\pi_i + \frac{\pi_k\,\varepsilon(z_{vi} - z_{ui})}{1 - \varepsilon(z_{vk} - z_{uk})}\right]. \tag{5.69}$$
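The rank-one update of Eqs. 5.64–5.68 can be checked directly: compute the fundamental matrix $Z$ once, shift $\varepsilon$ of transition probability within one row, and compare the closed-form SSD against a recomputed one. All matrices below are randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, u, v, eps = 5, 1, 0, 2, 0.05

# Random ergodic TPM (entries bounded away from 0 so the eps-shift stays valid).
P = rng.uniform(0.5, 1.0, (n, n))
P /= P.sum(axis=1, keepdims=True)

def ssd(P):
    w, V = np.linalg.eig(P.T)
    s = np.real(V[:, np.argmin(np.abs(w - 1))])
    return s / s.sum()

pi = ssd(P)
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))   # Eq. 5.64

# Shift eps of transition probability in row k from state u to state v.
Pt = P.copy()
Pt[k, u] -= eps
Pt[k, v] += eps

# Eq. 5.68: closed-form update of the SSD using only Z and pi.
b = eps * (Z[v, :] - Z[u, :])
pi_t = pi + pi[k] * b / (1 - b[k])

assert np.allclose(pi_t, ssd(Pt), atol=1e-8)
```

Because $Z$ is computed once, sweeping over candidate perturbations $(k, u, v, \varepsilon)$ costs only vector arithmetic per candidate, which is what makes the intervention search of the next subsection practical.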
5.4.2 Application to Boolean networks

Boolean networks (Kauffman, 1969) and their extension to probabilistic Boolean networks (Shmulevich et al., 2002) were both introduced to model gene regulatory networks and have found application beyond biology. We shall apply the IBR theory in the form of Eq. 5.56 to the Markov chain associated with a probabilistic Boolean network in the special case of finding a rank-one IBR perturbation based on Eq. 5.69. An $n$-node Boolean network (BN) is a pair $(V, F)$, where $V = \{X_1, X_2, \dots, X_n\}$ is a set of binary-valued nodes and $F = \{f_1, f_2, \dots, f_n\}$ is a set of Boolean functions such that $f_i : \{0,1\}^{k_i} \to \{0,1\}$ is the Boolean function that determines the value, 0 or 1, of $X_i$. In the context of gene regulatory networks, the nodes correspond to "off" and "on" states of a gene, and the Boolean functions represent gene regulation. The vector $\mathbf{X}(t) = (X_1(t), \dots, X_n(t))$ gives the state of the network at time $t$. The value of $X_i$ at the next time point,

$$X_i(t + 1) = f_i\big(X_{i_1}(t), X_{i_2}(t), \dots, X_{i_{k_i}}(t)\big), \tag{5.70}$$
is determined by the values of $k_i$ predictor nodes at time $t$. In a Boolean network with perturbation (BNp), not to be confused with a Markov chain perturbation, each node may randomly flip its value at a given time with perturbation probability $p$, independently of the other nodes. Hence, for a BNp, $\mathbf{X}(t+1) = F(\mathbf{X}(t))$ with probability $(1 - p)^n$ when there is no perturbation, but $\mathbf{X}(t+1)$ may take a different value with probability $1 - (1 - p)^n$ when one or more random perturbations occur. In a BNp, the sequence of states over time can be regarded as a Markov chain where
transitions are made according to a fixed TPM P. Therefore, Markov chain theory can be applied for analyzing network dynamics. The general formula of a TPM using Boolean functions and perturbation probability has been derived in (Faryabi, et al., 2009). When p > 0, the resulting Markov chain is ergodic, irreducible, and possesses a steady-state distribution. In the context of translational genomics, the state space of a network is often partitioned into undesirable states (U) corresponding to abnormal (disease) phenotypes and desirable states (D) corresponding to normal (healthy) phenotypes. The goal in controlling a network is to decrease the probability that the network will enter U. Intervention aims at minimizing the undesirable steady-state probability mass. Structural intervention alters the long-run behavior of a network via a one-time change of the underlying network structure. Here we focus on the structural intervention method proposed in (Qian and Dougherty, 2008), where intervention is performed via a rank-one function perturbation, which corresponds to a rank-one TPM perturbation. We use a single-node perturbation, which is a rank-one function perturbation in which the output state for only one input state changes and the output states of other states remain unchanged. For IBR theory, there is an uncertainty class U containing uncertain parameter vectors that characterize the BNp. Owing to computational issues, especially when large-scale simulations are being employed, we shall find the model-constrained Bayesian robust operator. Although an MCBR operator is suboptimal, simulations in (Yoon et al., 2013) indicate that, at least for binary probabilistic Boolean networks with up to ten nodes, MCBR structural intervention provides an extremely accurate approximation of IBR structural intervention. 
Letting $\tilde{F} = \{\tilde{f}_1, \tilde{f}_2, \ldots, \tilde{f}_n\}$ be the set of Boolean functions for the perturbed BNp, a single-node perturbation to input state $j$ changes only the output state for input state $j$ and leaves the rest unaltered: $s = \tilde{F}(j) \neq F(j) = r$ and $\tilde{F}(i) = F(i)$ for $i \neq j$. The TPM $\tilde{P}$ of the perturbed network will be identical to the TPM $P$ of the original network, except for $\tilde{p}_{jr}$ and $\tilde{p}_{js}$. Absent intervention, $p_{jr}$ is the probability that there is no random perturbation plus the probability that there is a random perturbation times the probability that, given the random perturbation, state $j$ transitions to state $r$:
$$p_{jr} = (1-p)^n + p_{\langle j,r \rangle}\big[1 - (1-p)^n\big], \qquad (5.71)$$
where $p_{\langle j,r \rangle}$ is the latter probability, and
$$p_{js} = p_{\langle j,s \rangle}\big[1 - (1-p)^n\big]. \qquad (5.72)$$
With intervention, similar reasoning applies:
$$\tilde{p}_{js} = (1-p)^n + p_{\langle j,s \rangle}\big[1 - (1-p)^n\big] \qquad (5.73)$$
Optimal Experimental Design
173
and
$$\tilde{p}_{jr} = p_{\langle j,r \rangle}\big[1 - (1-p)^n\big]. \qquad (5.74)$$
Hence,
$$\tilde{p}_{jr} = p_{jr} - (1-p)^n, \qquad (5.75)$$
$$\tilde{p}_{js} = p_{js} + (1-p)^n. \qquad (5.76)$$
According to Eq. 5.68, the SSD of the perturbed BNp is given by
$$\tilde{\pi}_i(j, s) = \pi_i + \frac{(1-p)^n\, \pi_j\, (z_{si} - z_{ri})}{1 - (1-p)^n\, (z_{sj} - z_{rj})}, \qquad (5.77)$$
where $z_{si}$, $z_{ri}$, $z_{sj}$, $z_{rj}$ are elements of $Z$, and $\tilde{\pi}_i(j, s)$ is the perturbed steady-state probability for state $i$ after applying the aforementioned intervention. Let $\tilde{\pi}_{i,\theta}(j, s)$ be the steady-state probability of state $i$ in the network with uncertainty vector $\theta$ after intervention $(j, s)$. Then
$$\tilde{\pi}_{U,\theta}(j, s) = \sum_{i \in U} \tilde{\pi}_{i,\theta}(j, s) \qquad (5.78)$$
is the steady-state probability mass of the undesirable states after applying the single-node-perturbation structural intervention. For a BNp corresponding to $\theta$, the optimal single-node-perturbation structural intervention is
$$\psi_\theta = (j_\theta, s_\theta) = \arg\min_{j, s \in \{1, 2, 3, \ldots, 2^n\}} \tilde{\pi}_{U,\theta}(j, s). \qquad (5.79)$$
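The rank-one update of Eq. 5.77 can be checked numerically: compute the SSD and the matrix $Z$ for the original chain, apply the formula for an intervention $(j, s)$, and compare with the SSD obtained by solving the perturbed chain directly. The sketch below is our own illustration; in particular, it assumes the standard fundamental-matrix form $Z = (I - P + \mathbf{1}\pi^{\mathrm{T}})^{-1}$, and all names are ours.

```python
import numpy as np

def steady_state(P):
    """Stationary distribution of an ergodic TPM (left eigenvector of P)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])  # pi^T (P - I) = 0, sum(pi) = 1
    b = np.append(np.zeros(n), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def perturbed_ssd(P, p, n_nodes, j, r, s):
    """SSD of the single-node-perturbed BNp via the rank-one formula (Eq. 5.77)."""
    pi = steady_state(P)
    N = P.shape[0]
    # assumed fundamental matrix Z = (I - P + 1 pi^T)^(-1)
    Z = np.linalg.inv(np.eye(N) - P + np.outer(np.ones(N), pi))
    eps = (1 - p) ** n_nodes
    num = eps * pi[j] * (Z[s] - Z[r])        # component i: z_si - z_ri
    den = 1 - eps * (Z[s, j] - Z[r, j])
    return pi + num / den
```

Because the single-node perturbation moves exactly the mass $(1-p)^n$ in row $j$ from column $r$ to column $s$ (Eqs. 5.75–5.76), the formula can be validated against a direct steady-state computation of the perturbed TPM.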
Robustness is given by
$$\kappa(\theta, \varphi) = \tilde{\pi}_{U,\theta}(j_\varphi, s_\varphi) - \tilde{\pi}_{U,\theta}(j_\theta, s_\theta), \qquad (5.80)$$
and
$$\psi_{\mathrm{MCBR}} = (j_\Theta, s_\Theta) = \arg\min_{(j_\varphi, s_\varphi):\, \varphi \in \Theta} E_\Theta\big[\tilde{\pi}_{U,\theta}(j_\varphi, s_\varphi)\big] \qquad (5.81)$$
provides an MCBR intervention. $\psi_{\mathrm{MCBR}}$ minimizes the expected undesirable steady-state mass among all model-specific optimal interventions.

Example 5.4. Following (Dehghannasiri et al., 2015b), we consider a BNp in which nodes are regulated according to the majority-vote rule. The network regulations are governed by a regulatory matrix $R$, where $R_{ij}$ represents the regulatory relation from node $j$ to node $i$: $R_{ij} = 1$ if the relation between $j$ and $i$ is activating and $R_{ij} = -1$ if it is suppressive. The update rule is
$$X_i(t+1) = f_i(X(t)) = \begin{cases} 1 & \text{if } \sum_j R_{ij} X_j(t) > 0, \\ 0 & \text{if } \sum_j R_{ij} X_j(t) < 0, \\ X_i(t) & \text{if } \sum_j R_{ij} X_j(t) = 0. \end{cases}$$
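One synchronous step of the majority-vote rule above can be sketched in a few lines of Python (our own illustration; the variable names are assumptions):

```python
def majority_vote_update(x, R):
    """One synchronous step of the majority-vote rule.

    x: current binary state (list of 0/1 node values).
    R: regulatory matrix with R[i][j] = 1 (j activates i),
       -1 (j suppresses i), or 0 (no regulation).
    """
    nxt = []
    for i in range(len(x)):
        vote = sum(R[i][j] * x[j] for j in range(len(x)))
        if vote > 0:
            nxt.append(1)        # activation wins
        elif vote < 0:
            nxt.append(0)        # suppression wins
        else:
            nxt.append(x[i])     # tie: node keeps its current value
    return nxt
```

Note that, because the node values are 0/1, a suppressive regulation contributes to the vote only when its source node is active.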
Uncertainty is introduced by assuming that for certain node pairs, although it is known that a regulatory relation exists, the type of regulation (activating or suppressive) is unknown. The uncertain parameters are the unknown regulatory relations. An uncertain parameter $\theta_i$ equals $1$ for an activating regulation and $-1$ for a suppressive regulation. If there are $k$ uncertain regulations, then $\Theta$ contains $2^k$ networks. Experimental design selects a primary parameter to optimally improve the performance of structural intervention.

Simulations have been performed with $k = 2, 3, 4, 5$, assuming a uniform prior distribution over $\Theta$. Networks have six nodes, $X_1, \ldots, X_6$, each having three regulators. A random BNp is generated by randomly selecting three regulators for each node with uniform probability and randomly assigning $1$ or $-1$ to the corresponding entries in the regulatory matrix $R$. The perturbation probability is set to $p = 0.001$. States for which $X_1 = 1$ are assumed to be undesirable. Hence, $U = \{32, \ldots, 63\}$. For a given $k$, we generate 1,000 synthetic BNps and randomly select 50 different sets of $k$ edges (i.e., regulations) for each network. In each case, the regulatory information of the other edges is retained while that of the $k$ selected edges is assumed to be unknown.

From a practical perspective, the salient issue in evaluating an experimental design scheme using synthetic networks is controllability. Unlike real networks, which are controllable to a certain extent, many randomly generated networks may not be controllable. Hence, regardless of the intervention applied to the network, the SSD shift resulting from the intervention may be negligible. For such networks, the difference between optimal and suboptimal experiments may be insignificant. For this reason, to examine the practical impact of experimental design, we must take controllability into account.
We use the percentage decrease of the total steady-state mass in undesirable states after optimal structural intervention as a measure of controllability:
$$D = \frac{\pi_U - \tilde{\pi}_U(j_\theta, s_\theta)}{\pi_U} \times 100\%,$$
where controllable networks have a larger $D$. Rank the experiments $\langle 1 \rangle, \langle 2 \rangle, \ldots, \langle 5 \rangle$ according to which provide the greatest reduction in expected remaining MOCU, so that experiment $\langle 1 \rangle$ is optimal. For $i = 1, 2, 3, 4$, let
$$h_i = C_{\theta_{\mathrm{true}}}\big(\psi_{\mathrm{MCBR}}^{\Theta|\bar{\theta}_{\langle i+1 \rangle}}\big) - C_{\theta_{\mathrm{true}}}\big(\psi_{\mathrm{MCBR}}^{\Theta|\bar{\theta}_{\langle 1 \rangle}}\big)$$
be the difference between the cost of applying to the true network the MCBR intervention derived for the reduced uncertainty class resulting from conducting the $(i+1)$-ranked experiment and the cost of applying the MCBR intervention obtained from conducting the optimal experiment. Table 5.2 summarizes the average gain of performing the optimal experiment over the other suboptimal experiments according to $h_i$. The average is taken over different sets of uncertain regulations and different networks with $D \ge 40\%$. Analogously to Figs. 5.2 and 5.4, Fig. 5.5 compares sequential optimal experimental design to sequential random selection. ▪

Table 5.2 The average gain of conducting the optimal experiment predicted by the proposed experimental design strategy in comparison to other suboptimal experiments.

          Average h_1   Average h_2   Average h_3   Average h_4
  k = 2   0.0584        N/A           N/A           N/A
  k = 3   0.0544        0.0718        N/A           N/A
  k = 4   0.0545        0.0750        0.0855        N/A
  k = 5   0.0474        0.0696        0.0803        0.0863

Figure 5.5 Performance comparison based on a sequence of experiments. The average cost of robust intervention after performing the sequence of experiments predicted by the proposed strategy and the average cost after performing randomly selected experiments. [Reprinted from (Dehghannasiri et al., 2015a).]

The overall information gain resulting from an experiment can be measured by the reduction in the entropy of the model (Atkinson and Donev, 1992); in particular, experimental design has been used in the inference of gene regulatory networks to reduce the entropy of the network model (Yeang et al., 2005; King et al., 2004). In (Yeang et al., 2005), the information gain for each experiment is
defined as the difference between the model entropy before the experiment and the conditional entropy given the experiment. The chosen experiment is the one maximizing this difference. In the preceding example, where there is a uniform distribution and independent uncertain parameters, this entropy-based experimental design does not discriminate among the potential experiments and performs the same as random selection of experiments. As opposed to MOCU, entropy reduction ignores the experimental objective.

5.4.3 Dynamic intervention via external control

Thus far in Section 5.4 we have focused on structural intervention in a Markov chain. Over the years there has been substantial analysis of dynamical intervention, which involves a control policy that decides whether and how to intervene at each time instant (Bertsekas, 2001). In particular, in the binary setting this could mean flipping or not flipping a control node. The decision is based on an immediate cost per time point, which depends on the origin state, the destination state, and the applied control input.

To describe the setting mathematically, let $\{Z_k \in S\}_{k=0,1,\ldots}$ be the stochastic state process corresponding to transition probability matrix $P$, so that $p_{ij} = P(Z_{k+1} = j \mid Z_k = i)$. To model the effect of interventions over time, assume an external control input (action) $A$ from a set $\mathcal{A}$ of possible inputs (actions). Let $\{(Z_k, A_k) : Z_k \in S, A_k \in \mathcal{A}\}_{k=0,1,\ldots}$ denote the stochastic process of state–action pairs. The TPM for the controlled process has transition probabilities
$$p_{ij}(A_k) = P(Z_{k+1} = j \mid Z_k = i, A_k), \qquad (5.82)$$
$p_{ij}(A_k)$ being the probability of going to state $j$ at time $k+1$ from state $i$ while taking action $A_k \in \mathcal{A}$ at time $k$. With each state and action there is a nonnegative, bounded cost function $g : S \times \mathcal{A} \times S \to \mathbb{R}$ that may reflect the desirability of different states and/or the cost of intervention. When the process moves from state $i$ to state $j$ under action $A_k$, an immediate cost $g_{ij}(A_k)$ is incurred.

An intervention policy $\mu = (\mu_0, \mu_1, \ldots)$ is a sequence of actions from $\mathcal{A}$ over time. In general, $\mu_k$ may be a mapping from the entire history of the process up to time $k$ and need not be deterministic. Denote the set of all admissible policies by $\mathcal{M}$. The TPM $P$, initial state $Z_0 = i$, and policy $\mu \in \mathcal{M}$ determine a unique probability measure $P_i^\mu$ over the space of all trajectories of states and actions, which correspondingly defines the stochastic processes $Z_k$ and $A_k$ of the states and actions for the controlled network (Kumar, 1985). An optimal policy must minimize a total cost function. A common cost function is
$$J_P^\mu(i) = \lim_{N \to \infty} E_i^\mu\Bigg[\sum_{k=0}^{N-1} \lambda^k\, g_{Z_k Z_{k+1}}(A_k)\Bigg], \qquad (5.83)$$
where $E_i^\mu$ is expectation relative to the probability measure $P_i^\mu$, and $\lambda \in (0, 1)$ is a discount factor giving more weight to current costs than to those in the future. $J_P^\mu(i)$ gives the total expected discounted cost. An optimal policy can be found via convergence, optimality, and uniqueness theorems (Derman, 1970). These provide a dynamic-programming iterative methodology that yields an optimal intervention policy $\mu_{\mathrm{opt}}$ that is both stationary and deterministic, meaning that $\mu_{\mathrm{opt}} = (\mu^*, \mu^*, \ldots)$, where $\mu^* : S \to \mathcal{A}$ is a single-valued mapping from states to actions.

Research into control when the TPM is uncertain goes back to the 1960s (Bellman and Kalaba, 1960; Silver, 1963; Gozzolino et al., 1965; Martin, 1967). The cost in Eq. 5.83 takes the form $J^\mu(i, \pi)$, with the expectation also taken with respect to the prior distribution $\pi(\theta)$; that is, $E_i^\mu$ is replaced by $E_{i,\pi}^\mu$. The resulting total expected discounted cost takes a similar form except that now, when different actions are taken, it involves the difference in expected immediate costs, the expected difference in future costs due to being in different states at the next period, and the effect of the different information resulting from these actions on the sequence of posterior distributions (Satia and Lave, 1973). As a consequence, the optimal Bayesian policy is almost certainly nonstationary. We will not pursue the matter, noting only that the computational cost is extreme, making application of the optimal Bayesian policy problematic except for very small state spaces and uncertainty classes.

One way around the computational obstacle is to take an MCBR approach by requiring that the optimal policy be one of the model-specific optimal policies. These are stationary, and the solution becomes much more tractable, albeit suboptimal (Pal et al., 2009). Consider an uncertainty class $\Theta$ of TPMs $P_\theta$, where $P_\theta$ possesses optimal control policy $\mu_\theta$.
Let $g_\theta(\mu_w)$ denote the expected cost for network $\theta$ of policy $\mu_w$ relative to the initial state $i$:
$$g_\theta(\mu_w) = E_i\big[J_{P_\theta}^{\mu_w}(i)\big]. \qquad (5.84)$$
The cost function $g_\theta$ provides a single value representing the cost of a policy. The MCBR policy minimizes the expected cost over all optimal policies in $\Theta$:
$$\mu_{\mathrm{MCBR}} = \arg\min_{w \in \Theta} E_\Theta\big[g_\theta(\mu_w)\big]. \qquad (5.85)$$
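Because the model-specific optimal policies are stationary, the MCBR selection of Eq. 5.85 can be sketched with standard value iteration for each model in the uncertainty class, followed by a prior-weighted cost comparison. The Python sketch below is our own illustration, not the book's implementation; in particular, it averages the discounted cost uniformly over initial states (one possible reading of the expectation $E_i$), and all names are assumptions.

```python
import numpy as np

def value_iteration(P_a, g_a, lam, tol=1e-10):
    """Optimal stationary policy for a discounted MDP.

    P_a[a]: TPM under action a; g_a[a][i][j]: immediate cost of i -> j
    under action a; lam: discount factor in (0, 1).
    """
    n = P_a[0].shape[0]
    V = np.zeros(n)
    while True:
        # Q[i][a] = sum_j p_ij(a) (g_ij(a) + lam V_j)
        Q = np.array([[P_a[a][i] @ (g_a[a][i] + lam * V)
                       for a in range(len(P_a))] for i in range(n)])
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmin(axis=1), V_new
        V = V_new

def policy_cost(policy, P_a, g_a, lam):
    """Expected discounted cost vector J of a fixed stationary policy."""
    n = P_a[0].shape[0]
    P_pi = np.array([P_a[policy[i]][i] for i in range(n)])
    c = np.array([P_a[policy[i]][i] @ g_a[policy[i]][i] for i in range(n)])
    return np.linalg.solve(np.eye(n) - lam * P_pi, c)   # J = c + lam P J

def mcbr_policy(models, prior, lam):
    """models: list of (P_a, g_a) pairs, one per theta in the uncertainty class.
    Returns the model-specific optimal policy minimizing expected cost (Eq. 5.85)."""
    policies = [value_iteration(P_a, g_a, lam)[0] for P_a, g_a in models]
    best, best_cost = None, np.inf
    for pol in policies:
        # prior-weighted cost of applying pol to every model (uniform over states)
        cost = sum(w * policy_cost(pol, P_a, g_a, lam).mean()
                   for w, (P_a, g_a) in zip(prior, models))
        if cost < best_cost:
            best, best_cost = pol, cost
    return best
```

The restriction of the search to the model-specific optimal policies is exactly what makes the MCBR approach tractable: only $|\Theta|$ candidate policies are evaluated rather than the full admissible set.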
Recently, an IBR control policy has been derived for Markovian regulatory networks by finding the effective transition probability matrix for an uncertainty class of TPMs, thereby facilitating the extension of Bellman’s optimality theorem to the uncertain setting, and the derivation of an optimal (IBR) stationary policy (Dehghannasiri et al., 2018). Beginning with a prior distribution governing the TPM uncertainty class, a regulatory sequence is observed to update the prior to a posterior. An IBR controller can be derived based on this posterior, or an MOCU-based optimal experimental
design can be employed to decide which experiment, among those that determine the TPM probabilities, should be implemented. The chosen experiment determines an unknown probability, thereby shrinking the uncertainty class and reducing the objective uncertainty in the posterior. One can now derive an IBR controller or observe another regulatory sequence and update to a new posterior, and so on, thereby performing sequential experimental design. Owing to the computational complexity of experimental design, which requires computation of many potential IBR policies, an approximate method utilizing mean first passage time (MFPT) is proposed for the experimental design stage. MFPT is used only in experimental design, with the final policy being an IBR policy.
5.5 Complexity Reduction

MOCU-based experimental design requires finding the optimal intervention $\psi_\theta$ for each $\theta \in \Theta$ and the IBR intervention $\psi_{\mathrm{IBR}}^{\Theta|\theta_i}$ for each possible remaining uncertainty class $\Theta|\theta_i$. In the context of structural intervention in networks, which is our interest in this section, finding IBR interventions does not require additional calculations because the cost of each intervention $\psi \in \Psi$ for the network $\theta$ can be stored. Therefore, complexity analysis reduces to computing the complexity of estimating optimal interventions. With $n$ binary-valued nodes, the network has $2^n$ states. Finding an optimal single-node function intervention requires searching among all $2^{2n}$ possible state pairs $(u, v)$. Assuming that half of the states are undesirable, the relation for the SSD of the perturbed BNp must be evaluated $2^{n-1}$ times for each state pair. Thus, the complexity of finding the optimal intervention is $O(2^{3n})$. If there are $k$ uncertain parameters and each can take on $l$ values, then the uncertainty class $\Theta$ contains $l^k$ different networks for which an optimal intervention must be found. Hence, the complexity of optimal experimental design is $O(l^k 2^{3n})$. This heavy computational cost can be mitigated by reducing the size of the network.

5.5.1 Optimal network reduction

For IBR structural intervention, if a node is deleted from a BNp with regulatory function $F$, we need to define a regulatory function $F_{\mathrm{red}}$ for the reduced network. For notational ease, we will denote the deleted node by $g$ (rather than $X_i$, the choice motivated by the fact that in genetic regulatory networks, a node corresponds to a gene). Doing this for each network corresponding to $\theta \in \Theta$ produces the uncertainty class $\Theta_g$ of reduced networks via the mapping $\theta \to \theta_g$. To approximate the optimal intervention for a network $\theta \in \Theta$, use the corresponding network $\theta_g \in \Theta_g$, find the optimal intervention $\psi_{\theta_g}$ for the reduced network, and then induce the intervention to the original network in $\Theta$. This yields an induced intervention $\psi_{g,\mathrm{ind}}^{\theta}$. To find the induced robust intervention $\psi_{g,\mathrm{IBR}}^{\Theta}$ for $\Theta$, first find the robust intervention $\psi_{\mathrm{IBR}}^{\Theta_g}$
for $\Theta_g$ and then find, via some inducement mapping, the induced robust intervention $\psi_{g,\mathrm{IBR}}^{\Theta}$ from $\psi_{\mathrm{IBR}}^{\Theta_g}$. The reduction problem involves finding the best node $g$ for deletion via some cost function $\gamma(g)$ and then obtaining the induced intervention and the induced robust intervention needed for the MOCU calculations in the experimental design step by inducing interventions from the uncertainty class $\Theta_g$ of reduced networks (Dehghannasiri et al., 2015b). For the design value (expected remaining MOCU) given the deletion of $g$ and the determination of $\theta_i$, Eq. 5.10 becomes
$$D_g(\theta_i) = E_{\theta_i}\Big[E_{\Theta|\theta_i}\Big[C_{\theta|\theta_i}\big(\psi_{g,\mathrm{IBR}}^{\Theta|\theta_i}\big) - C_{\theta|\theta_i}\big(\psi_{\theta|\theta_i}\big)\Big]\Big]. \qquad (5.86)$$
Since the choice of optimal experiment is based on the design value corresponding to each experiment, the network reduction step should have minimum effect on the design values. Deleting nodes increases the inherent uncertainty of the network, and we want to minimize the increase in model uncertainty caused by network reduction. Therefore, define the cost of deleting node $g$ by
$$\gamma(g) = \sum_{i=1}^{k} \big|D_g(\theta_i) - D(\theta_i)\big|. \qquad (5.87)$$
Recognizing that
$$D_g(\theta_i) \ge D(\theta_i), \qquad (5.88)$$
the node $g^*$ minimizing $\gamma$ is selected for deletion and is given by
$$\begin{aligned} g^* &= \arg\min_{g \in \{1, 2, \ldots, n\}} \gamma(g) \\ &= \arg\min_g \sum_{i=1}^{k} \big[D_g(\theta_i) - D(\theta_i)\big] \\ &= \arg\min_g \sum_{i=1}^{k} D_g(\theta_i) \\ &= \arg\min_g \sum_{i=1}^{k} E_{\theta_i}\Big[E_{\Theta|\theta_i}\Big[C_{\theta|\theta_i}\big(\psi_{g,\mathrm{IBR}}^{\Theta|\theta_i}\big)\Big]\Big]. \end{aligned} \qquad (5.89)$$
After $g^*$ has been deleted, $D_{g^*}(\theta_i)$ is found via Eq. 5.86. The analysis up to and including Eq. 5.87 goes through with the IBR interventions changed to MCBR interventions; however, Eq. 5.88 does not hold in general for MCBR interventions. According to Eq. 5.89, $D(\theta_i)$ drops out of the minimization. Hence, we can redefine $\gamma(g)$ by the expectation in the last line of Eq. 5.89 without affecting $g^*$ for IBR interventions but now make it
applicable to MCBR intervention. $\gamma(g)$ then has IBR and MCBR varieties arising from using $\psi_{g,\mathrm{IBR}}^{\Theta|\theta_i}$ and $\psi_{g,\mathrm{MCBR}}^{\Theta|\theta_i}$, respectively, in the defining expectation. The procedure for estimating optimal experiments via deleting one node extends directly to the deletion of two or more nodes. For instance, to delete two nodes, evaluate $\gamma(g_1, g_2)$ for all possible two-node combinations and delete the pair whose cost is minimum.

Having outlined the optimization procedure, we must show how to construct $F_{\mathrm{red}}$. Following (Ghaffari et al., 2010), every two states of the original network that differ only in the value of node $g$ can be collapsed to find the transition rule of the reduced network. Let $s_{g1}$ and $s_{g0}$ be the two states with value 1 and 0 for node $g$, respectively, and identical values for the other nodes. The state $s_{-g}$ can be obtained from either $s_{g1}$ or $s_{g0}$ by removing the value of node $g$. If, for the original network, the transition rules for these two states are $F(s_{g1}) = p$ and $F(s_{g0}) = q$, then for the reduced network, $F_{\mathrm{red}}(s_{-g}) = p_{-g}$ if $\pi(s_{g1}) > \pi(s_{g0})$ and, otherwise, $F_{\mathrm{red}}(s_{-g}) = q_{-g}$, where $p_{-g}$ and $q_{-g}$ are found from states $p$ and $q$, respectively, by removing the value of $g$. Following this procedure yields the regulatory function $F_{\mathrm{red}}$ for all states in the reduced network.

To find the induced intervention from the optimal intervention for the reduced network, suppose that the optimal intervention for the reduced network $\theta_g$ is $\psi_{\theta_g} = (\hat{u}, \hat{v})$. The two states corresponding to $\hat{u}$ in the original network are $\tilde{u}_{g1}$ and $\tilde{u}_{g0}$, found by placing 1 and 0, respectively, in the $g$th coordinate of $\hat{u}$. Similarly, there are two states $\tilde{v}_{g1}$ and $\tilde{v}_{g0}$ in the original network corresponding to state $\hat{v}$.
The induced optimal intervention for the original network is $\psi_{g,\mathrm{ind}}^{\theta} = (u_{\mathrm{ind}}, v_{\mathrm{ind}})$, where $u_{\mathrm{ind}}$ is the one among $\tilde{u}_{g1}$ and $\tilde{u}_{g0}$ having the larger steady-state probability in the original network, and $v_{\mathrm{ind}}$ is the one among $\tilde{v}_{g1}$ and $\tilde{v}_{g0}$ with the larger steady-state probability in the original network. Similarly, the induced robust intervention $\psi_{g,\mathrm{IBR}}^{\Theta}$ is found from the robust intervention $\psi_{\mathrm{IBR}}^{\Theta_g}$; however, here we choose the two states possessing the larger expected steady-state probability across $\Theta$ using the expected SSD, $\pi_\Theta = E_\Theta[\pi_\theta]$, where $\pi_\theta$ is the SSD of the network with uncertainty vector $\theta \in \Theta$.

5.5.2 Preliminary node elimination

The coefficient of determination (CoD) (Dougherty et al., 2000) can be used to eliminate nodes from consideration before nodes are selected for deletion by minimizing the cost function $\gamma$ via Eq. 5.89. The CoD measures the strength of the relationship between a target random variable $Y$ and a vector $X$ of predictor random variables by the difference between the error of the best estimate of $Y$ in the absence of predictors and the error in the presence of $X$. The CoD lies between 0 and 1, and a larger CoD means a stronger connection between the target and the predictors, the target in our case being the aim of intervention. The CoD of the target $Y$ relative to a vector $X = (X_1, \ldots, X_m)$ of predictors is defined by
$$\mathrm{CoD}_X(Y) = \frac{\varepsilon_Y - \varepsilon_{X,Y}}{\varepsilon_Y}, \qquad (5.90)$$
where
$$\varepsilon_Y = \min\big[P(Y=0),\, P(Y=1)\big] \qquad (5.91)$$
is the error of the best estimate of $Y$ without any predictors, and $\varepsilon_{X,Y}$ is the error of the optimal estimate of $Y$ based on observing $X$. Assuming that the value of the binary vector $X$ of predictor nodes ranges from 1 to $2^m$,
$$\varepsilon_{X,Y} = \sum_{j=1}^{2^m} P(X=j)\, \min\big[P(Y=0 \mid X=j),\, P(Y=1 \mid X=j)\big]. \qquad (5.92)$$
We apply the heuristic that nodes possessing a large CoD in relation to the target should not be deleted. If $\mathrm{CoD}_X(Y;\theta)$ denotes the CoD of $Y$ relative to $X$ in a network with uncertainty vector $\theta$, then the expected CoD of $Y$ relative to $X$ is
$$\mathrm{CoD}_X(Y;\Theta) = E_\Theta\big[\mathrm{CoD}_X(Y;\theta)\big]. \qquad (5.93)$$
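Eqs. 5.90–5.92 translate directly into code when the joint distribution of the predictor vector and the target is available. The sketch below is our own Python illustration; the joint-pmf input format is an assumption.

```python
import numpy as np

def cod(joint):
    """Coefficient of determination (Eq. 5.90) from a joint pmf.

    joint[x][y] = P(X = x, Y = y), with x indexing the 2^m predictor
    states and y in {0, 1}.
    """
    joint = np.asarray(joint, dtype=float)
    pY = joint.sum(axis=0)                       # marginal of the target Y
    eY = min(pY[0], pY[1])                       # error with no predictors (Eq. 5.91)
    # optimal-predictor error (Eq. 5.92): sum_j min P(X=j, Y=0), P(X=j, Y=1)
    eXY = sum(min(row[0], row[1]) for row in joint)
    return (eY - eXY) / eY
```

A perfect predictor yields CoD = 1, and a predictor independent of the target yields CoD = 0, matching the stated range.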
Nodes possessing a strong connection with the target in terms of $\mathrm{CoD}_X(Y;\Theta)$ are not considered for deletion. Owing to the possibility of intrinsically multivariate prediction (Martins et al., 2008), where a set of predictors may have low individual CoDs with respect to the target but a significant CoD when used together for multivariate prediction, we consider $\mathrm{CoD}_X(Y;\Theta)$ for all three-node combinations. The nodes in the vector $X$ having the largest value of $\mathrm{CoD}_X(Y;\Theta)$ are excluded from the minimization of $\gamma(g)$ and cannot be deleted. To exclude fewer than three nodes from the minimization, exclude only those having the larger expected individual CoDs. To exclude more than three nodes, choose those in the three-node combination with the second-largest $\mathrm{CoD}_X(Y;\Theta)$ that have larger expected individual CoDs and do not belong to the first three-node combination. Repeat this process until the desired number of excluded nodes is obtained. If there are initially $n$ nodes and three are to be deleted, then the cost function in Eq. 5.89 must be calculated for all $C(n, 3)$ three-node combinations; however, if $s$ nodes are excluded, then that number drops to $C(n-s, 3)$.

5.5.3 Performance

To evaluate the performance of the reduction techniques, we first consider computational complexity. For the approximate method, suppose that $p$ nodes are to be deleted. Then the cost function must be evaluated for all $C(n-1, p)$ $p$-node combinations, where we have $n-1$ instead of $n$ because the target node cannot be deleted. The complexity of finding an induced optimal
intervention for each network after deleting $p$ nodes is $O(2^{3(n-p)})$. Therefore, the complexity of the approximate method is
$$a_1 = O\big(C(n-1, p)\, l^k\, 2^{3n-3p}\big). \qquad (5.94)$$
For large $n$, it is possible that for small $p$ the complexity of the approximate method can exceed that of the original method; however, by deleting more nodes, the complexity of the approximate method drops sharply because, with the deletion of each additional node, the complexity of estimating the optimal intervention decreases eight-fold. Incorporating CoD-based node exclusion in the approximation and excluding $s$ nodes decreases the number of $p$-node combinations from $C(n-1, p)$ to $C(n-s-1, p)$, which reduces the complexity of the approximate method to
$$a_2 = O\big(C(n-s-1, p)\, l^k\, 2^{3n-3p}\big). \qquad (5.95)$$
One can then define the computational gain by
$$\lambda = \frac{l^k\, 2^{3n}}{a_2} = \frac{2^{3p}}{C(n-s-1, p)}, \qquad (5.96)$$
which is the ratio of the complexity of the optimal method to the complexity of the approximate method when deleting p nodes using the cost function and excluding s nodes using the CoD-based node-exclusion step. Having considered computational complexity, we now examine the performance of experimental design when there is model reduction. This is done using the majority-vote rule for BNps introduced in Example 5.4. Uncertainty results from assuming that the values of some of the nonzero components of the matrix R are unknown. Each uncertain parameter ui can be 1 or 1. Conducting experiment Ti determines the true value u¯ i of ui, thereby resulting in the reduced uncertainty class Uju¯ i and the reduced IBR operator Uju¯
cIBRi . We define the gain of conducting experiment T i chosen by experimental design over a randomly chosen experiment T zðiÞ by Uju¯ ¯ Uju r ¼ C u¯ cIBRzðiÞ C u¯ cIBRi , (5.97) where C u¯ is the cost function for the true network, u¯ i is the true value of ui as determined by the experiment T i , u¯ zðiÞ is the true value of ui as determined by experiment T zðiÞ randomly chosen from among experiments T 1 , T 2 , : : : , T k , Uju¯
and cIBRzðiÞ is the reduced IBR operator relative to experiment T zðiÞ . If r > 0, then the chosen experiment outperforms the random experiment; if r < 0, then the random experiment outperforms the chosen experiment; and if r ¼ 0, then they perform the same.
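Returning to the computational gain of Eq. 5.96, it is simple enough to compute directly; note that the $l^k$ factor cancels, so the gain depends only on $n$, $p$, and $s$. A short Python sketch (our own naming, purely illustrative):

```python
from math import comb

def gain(n, p, s):
    """Computational gain lambda of Eq. 5.96: ratio of the optimal method's
    complexity to the approximate method's when p nodes are deleted and
    s nodes are excluded by the CoD step (the l^k factor cancels)."""
    return 2 ** (3 * p) / comb(n - s - 1, p)
```

As expected, excluding more nodes (larger $s$) shrinks the number of candidate $p$-node combinations and therefore increases the gain.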
For performance evaluation, 1000 networks are randomly generated, and 50 different sets of $k$ regulations in each are selected to be unknown. Three randomly selected predictor nodes are assigned to each node, each regulation randomly being activating or suppressive. The node perturbation probability is set to $0.001$. States with up-regulated $X_1$ are assumed to be undesirable. We remove the regulatory type of those regulations assumed to be uncertain and retain the other regulatory information. All uncertain parameters are assumed to be independent of each other with uniform marginal distributions. Because whether each state is desirable or undesirable depends on $X_1$, $X_1$ is excluded from the reduction process. Therefore, we look for the best $p$-node subset to delete from $\{X_2, \ldots, X_n\}$. As in Example 5.4, we take controllability into account.

Figure 5.6 provides curves for experimental design performance as a function of minimum controllability when $n = 8$ and $k = 4$. It shows the average gain for the optimal method and for the approximate method with 1 through 4 deletions. As expected, the average gain increases when networks are more controllable, regardless of the number of nodes deleted from the network. To evaluate the effectiveness of the CoD-based node-exclusion procedure, we compare the average gain of the proposed approximate method when excluding nodes via the CoD-based procedure against the average gain when randomly excluding nodes. Figure 5.7 shows the average gain for $n = 8$ nodes and $k = 4$ uncertain parameters. For deleting $p$ nodes, up to $7 - p - 1$ nodes are excluded. For example, for deleting $p = 2$ nodes, 1, 2, 3, and 4 nodes are excluded. The figure shows that the average gain when
Figure 5.6 Effect of controllability $D$ on the performance of experimental design: the optimal method and the approximate method when deleting $p$ nodes for networks with $n = 8$ nodes and $k = 4$ uncertain regulations. [Reprinted from (Dehghannasiri et al., 2015b).]
Figure 5.7 Performance evaluation of the CoD-based gene-exclusion algorithm for 8-node networks with $k = 4$ uncertain regulations: average gain of approximate experimental design with respect to random experiments when $p$ nodes are deleted and different numbers of nodes are excluded from the search space. [Reprinted from (Dehghannasiri et al., 2015b).]
excluding nodes using the CoD-based procedure is always larger than when using random node exclusion, regardless of the number of nodes deleted from the network. Note that when deleting more nodes, the difference between random exclusion and CoD-based exclusion increases because, as more nodes are deleted, exclusion has a larger impact on the number of candidate sets for evaluating the cost function.

Now consider performance when a sequence of experiments is conducted. Suppose that there are $k = 5$ uncertain regulations and five experiments to identify all unknown regulations. Sequential design is performed using both the approximate experimental design method with deletions (no CoD exclusion) and randomly chosen experiments. Figure 5.8 shows the average gain over random selection for the optimal method and for the approximate design method deleting up to four nodes for eight-node networks. The figure indicates that the approximate design method performs reliably compared to the optimal method.
5.6 Sequential Experimental Design Sequential experimental design provides a mechanism for efficiently diminishing uncertainty when performing multiple experiments. Thus far, we have
Figure 5.8 Performance comparison based on a sequence of experiments: average gain for the optimal method and the approximate method when deleting $p$ nodes is shown for $k = 5$ uncertain regulations and $n = 8$ nodes. [Reprinted from (Dehghannasiri et al., 2015b).]
performed sequential design in a greedy manner, meaning that at each step we select the best experiment without looking ahead to further experiments. Here, we consider using dynamic programming over a finite horizon to determine the order of experiments, and we examine the advantage of MOCU-based design over entropy-based design (Imani et al., 2018). The analysis is done in the context of the activating–suppressive regulatory structure of Example 5.4.

Assume $M$ unknown, mutually independent regulations, meaning that we have a BNp with $M$ uncertain regulatory parameters $\gamma_1, \gamma_2, \ldots, \gamma_M$, where the actual value of $\gamma_i$ is either $1$ (activating) or $-1$ (suppressive). The uncertainty class is $\Theta = \{\theta_1, \ldots, \theta_{2^M}\}$, $\theta_j = (\gamma_{1j}, \gamma_{2j}, \ldots, \gamma_{Mj})$, for $j = 1, 2, \ldots, 2^M$. $\Theta$ possesses a prior distribution $\pi(\theta)$. Whereas we normally denote parameters by $\theta_i$, in this section we want to name the networks and use parametric probabilities, so we denote the networks by $\theta_j$ and the parameters by $\gamma_i$. The prior probability that the $i$th regulation is activating is
$$P(\gamma_i = 1) = \sum_{j=1}^{2^M} \pi(\theta_j)\, \mathbf{1}_{\theta_j(\gamma_i)=1}, \qquad (5.98)$$
where $\theta(\gamma_i)$ is the value of $\gamma_i$ in network $\theta$, and $\mathbf{1}_{\theta_j(\gamma_i)=1} = 1$ if $\theta_j(\gamma_i) = 1$ and $\mathbf{1}_{\theta_j(\gamma_i)=1} = 0$ otherwise. In Eq. 5.2, MOCU is defined via an expectation with respect to $\pi(\theta)$. In Eq. 5.7, assuming that the chosen experiment is $T_i$, the remaining MOCU is defined via an expectation with respect to the posterior distribution $\pi(\theta|\gamma_i)$. If the next chosen experiment is $T_j$, then the next
remaining MOCU is defined via an expectation with respect to the posterior distribution $\pi(\theta|\gamma_i, \gamma_j)$.

We define the belief vector $b(k)$ to characterize the state of knowledge before conducting the $k$th experiment. Initially,
$$b(0) = \big(P(\gamma_1 = 1), P(\gamma_2 = 1), \ldots, P(\gamma_M = 1)\big). \qquad (5.99)$$
If experiment $T_i$ is performed as the $k$th experiment and $T_i$ yields $\gamma_i = 1$, then $b_i(k+1) = 1$ and $b_l(k+1) = b_l(k)$ for $l = 1, \ldots, M$, $l \neq i$. If $T_i$ yields $\gamma_i = -1$, then $b_i(k+1) = 0$ and $b_l(k+1) = b_l(k)$ for $l = 1, \ldots, M$, $l \neq i$. Each component of the belief vector can take three possible values during the experimental design process: $b_i(k) \in \{0, 1, P(\gamma_i = 1)\}$. Thus, $b(k)$ belongs to a space $B$ of cardinality $3^M$, the belief-vector transitions can be viewed as a Markov decision process with $3^M$ states, and the controlled transition matrix under experiment $T_i$ is a $3^M \times 3^M$ matrix. The element associated with the probability of transition from state $b \in B$ to state $b' \in B$ under experiment $T_i$ can be written as
$$p_{bb'}(T_i) = P\big(b(k+1) = b' \mid b(k) = b, T_i\big) = \begin{cases} b_i & \text{if } b'_i = 1 \text{ and } b_l = b'_l \text{ for } l \neq i, \\ 1 - b_i & \text{if } b'_i = 0 \text{ and } b_l = b'_l \text{ for } l \neq i, \\ 0 & \text{otherwise.} \end{cases} \qquad (5.100)$$
Key to the analysis is the representation of posterior network probabilities in terms of the belief vector. Let $\pi_{b(k)}$ denote the posterior distribution given the belief vector $b(k)$. Define the events $F_{lj} = [\theta(\gamma_l) = \theta_j(\gamma_l)]$ and $G_l = [\theta(\gamma_l) = 1]$. Note that $P(G_l \mid b(k)) = b_l(k)$. Using the independence assumption on the regulations, and letting $\pi_j^{b(k)} = \pi_{b(k)}(\theta_j) = \pi(\theta_j \mid b(k))$, we have
$$\begin{aligned} \pi_j^{b(k)} &= P(\theta = \theta_j \mid b(k)) = \prod_{l=1}^{M} P\big(\theta(\gamma_l) = \theta_j(\gamma_l) \mid b(k)\big) \\ &= \prod_{l=1}^{M} \big[P(F_{lj} \cap G_l \mid b(k)) + P(F_{lj} \cap G_l^c \mid b(k))\big] \\ &= \prod_{l=1}^{M} \big[P(F_{lj} \mid G_l, b(k))\, P(G_l \mid b(k)) + P(F_{lj} \mid G_l^c, b(k))\, P(G_l^c \mid b(k))\big] \\ &= \prod_{l=1}^{M} \big[P(\theta_j(\gamma_l) = 1)\, b_l(k) + P(\theta_j(\gamma_l) = -1)\, (1 - b_l(k))\big] \\ &= \prod_{l=1}^{M} \big[\mathbf{1}_{\theta_j(\gamma_l)=1}\, b_l(k) + \mathbf{1}_{\theta_j(\gamma_l)=-1}\, (1 - b_l(k))\big]. \end{aligned} \qquad (5.101)$$
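The product formula of Eq. 5.101 is easy to implement: each network's posterior probability is a product over regulations of either $b_l(k)$ or $1 - b_l(k)$. The sketch below is our own Python illustration (names are assumptions); it also confirms that the posterior sums to one over the uncertainty class.

```python
from itertools import product

def posterior(b, networks):
    """Posterior probabilities pi_j^b of the candidate networks (Eq. 5.101).

    b: belief vector, b[l] = P(gamma_{l+1} = 1).
    networks: list of tuples of +/-1 regulation values (one tuple per theta_j).
    """
    post = []
    for theta in networks:
        pr = 1.0
        for bl, g in zip(b, theta):
            pr *= bl if g == 1 else 1.0 - bl  # indicator-weighted factor
        post.append(pr)
    return post
```

After an experiment resolves $\gamma_i$, the corresponding belief component becomes 0 or 1, and every network inconsistent with the outcome receives posterior probability zero, as expected.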
Owing to the conversion of the belief vector to the posterior, the remaining MOCU given the belief vector $b$ subsequent to the $k$th experiment [where for notational convenience we have suppressed $k$ in the notation $b(k)$] can be expressed as
\[
\begin{aligned}
M_F(U|b) &= E_{U|b}\bigl[ C_{\theta|b}\bigl(\psi_{\mathrm{IBR}}^{U|b}\bigr) - C_{\theta|b}(\psi_{\theta|b}) \bigr] \\
&= \sum_{j=1}^{2^M} p_j^b \bigl[ C_{\theta_j|b}\bigl(\psi_{\mathrm{IBR}}^{U|b}\bigr) - C_{\theta_j|b}(\psi_{\theta_j|b}) \bigr], \tag{5.102}
\end{aligned}
\]
where $\psi_{\mathrm{IBR}}^{U|b}$ is an IBR intervention relative to the posterior distribution $\pi^b$; namely,
\[
\psi_{\mathrm{IBR}}^{U|b} = \arg\min_{\psi \in F} E_{U|b}\bigl[ C_{\theta|b}(\psi) \bigr] = \arg\min_{\psi \in F} \sum_{j=1}^{2^M} p_j^b\, C_{\theta_j|b}(\psi). \tag{5.103}
\]
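For a finite model set, the remaining MOCU of Eq. 5.102 and the IBR intervention of Eq. 5.103 reduce to sums and a minimization; a minimal sketch under the assumption that models, actions, and costs are given as plain dictionaries (all names hypothetical):

```python
def ibr_and_mocu(post, cost):
    """IBR action (Eq. 5.103) and remaining MOCU (Eq. 5.102) for a finite
    model set. post: dict model -> probability; cost: dict
    (model, action) -> cost of applying the action on that model."""
    actions = sorted({a for (_, a) in cost})
    exp_cost = {a: sum(post[m] * cost[(m, a)] for m in post) for a in actions}
    ibr = min(exp_cost, key=exp_cost.get)          # IBR action
    # MOCU: expected excess of the IBR cost over each model's optimal cost
    mocu = sum(post[m] * (cost[(m, ibr)] - min(cost[(m, a)] for a in actions))
               for m in post)
    return ibr, mocu

post = {'t0': 0.8, 't1': 0.2}
cost = {('t0', 'a'): 0.0, ('t0', 'b'): 1.0, ('t1', 'a'): 1.0, ('t1', 'b'): 0.0}
print(ibr_and_mocu(post, cost))   # ('a', 0.2)
```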
In terms of the belief vector, greedy sequential MOCU-based experimental design, referred to as greedy-MOCU, is defined by the primary parameter
\[
\begin{aligned}
i^* &= \arg\min_{i \in \{1, \ldots, M\}} E_{b'|b, T_i}\bigl[ M_F(U|b') - M_F(U|b) \bigr] \\
&= \arg\min_{i \in \{1, \ldots, M\}} \sum_{b' \in B} p_{bb'}(T_i)\, M_F(U|b'), \tag{5.104}
\end{aligned}
\]
where the second equality follows by expressing the expectation $E_{b'|b, T_i}$ in terms of $p_{bb'}(T_i)$ and then dropping the terms unrelated to the minimization. Entropy-based experimental design (Lindley, 1956; Raiffa and Schlaifer, 1961) aims to reduce the amount of entropy in the prior distribution but, unlike MOCU-based design, does not take an objective into account. The entropy for belief vector $b$ is
\[
H(b) = -\sum_{j=1}^{2^M} p_j^b \log_2 p_j^b, \tag{5.105}
\]
with $0 \le H(b) \le M$. The greedy-entropy approach sequentially chooses an experiment to minimize the expected entropy at the next time step:
\[
\begin{aligned}
i^* &= \arg\min_{i \in \{1, \ldots, M\}} E_{b'|b, T_i}\bigl[ H(b') - H(b) \bigr] \\
&= \arg\min_{i \in \{1, \ldots, M\}} \left[ -\sum_{b' \in B} p_{bb'}(T_i) \sum_{j=1}^{2^M} p_j^{b'} \log_2 p_j^{b'} \right], \tag{5.106}
\end{aligned}
\]
where the second equality is obtained by removing constant terms.
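The greedy-entropy rule of Eq. 5.106 can be sketched for the belief-vector model, in which experiment $i$ resolves $b_i$ to 1 (with probability $b_i$) or to 0; with independent regulations the rule selects the component whose belief is closest to $1/2$ (names hypothetical):

```python
import math

def entropy(b):
    """H(b) of Eq. 5.105; with independent regulations it is the sum of
    the binary entropies of the belief components."""
    h = 0.0
    for p in b:
        for q in (p, 1.0 - p):
            if q > 0.0:
                h -= q * math.log2(q)
    return h

def greedy_entropy(b):
    """Greedy-entropy step (Eq. 5.106): experiment i resolves b[i] to 1
    (with probability b[i]) or to 0; minimize the expected next entropy."""
    def expected_next(i):
        total = 0.0
        for val, p in ((1.0, b[i]), (0.0, 1.0 - b[i])):
            b2 = list(b)
            b2[i] = val
            total += p * entropy(b2)
        return total
    return min(range(len(b)), key=expected_next)

print(greedy_entropy([0.9, 0.5, 1.0]))   # 1: the most uncertain regulation
```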
Chapter 5
5.6.1 Experimental design using dynamic programming

If the number $N$ of experiments is fixed at the outset, then all future experiments can be taken into account during the decision-making process, thereby resulting in optimal finite-horizon experimental design based on dynamic programming (DP). This DP-MOCU policy decides which uncertain regulation should be determined at each step in order to maximally reduce the uncertainty relative to the objective after conducting all $N$ experiments. At each time step $k$, it yields an optimal policy $\mu_k : B \to \{T_1, \ldots, T_M\}$. For $k = 0, \ldots, N-1$, we define a bounded immediate cost function at time step $k$ corresponding to the transition from $b(k) = b$ to $b(k+1) = b'$ under policy $\mu_k$ by
\[
g_k\bigl(b, b', \mu_k(b)\bigr) = M_F(U|b') - M_F(U|b), \tag{5.107}
\]
where $g_k(b, b', \mu_k(b)) \le 0$. The terminal cost function is defined as $g_N(b) = M_F(U|b)$ for any $b \in B$. Letting $\Pi$ be the space of all possible policies, an optimal policy is given by
\[
\mu_{0:N-1}^{\mathrm{MOCU}} = \arg\min_{\mu_{0:N-1} \in \Pi} E\left[ \sum_{k=0}^{N-1} g_k\bigl(b(k), b(k+1), \mu_k(b(k))\bigr) + g_N(b(N)) \right], \tag{5.108}
\]
where the expectation is taken over the stochasticities in the belief transitions. Dynamic programming provides a solution for the minimization in Eq. 5.108. It starts by setting the terminal cost function as $J_N^{\mathrm{MOCU}}(b) = g_N(b)$ for $b \in B$. Then, in a recursively backward fashion, the optimal cost function can be computed as
\[
\begin{aligned}
J_k^{\mathrm{MOCU}}(b) &= \min_{i \in \{1, \ldots, M\}} E_{b'|b, T_i}\bigl[ g_k(b, b', T_i) + J_{k+1}^{\mathrm{MOCU}}(b') \bigr] \tag{5.109} \\
&= \min_{i \in \{1, \ldots, M\}} \left[ \sum_{b' \in B} p_{bb'}(T_i) \bigl( g_k(b, b', T_i) + J_{k+1}^{\mathrm{MOCU}}(b') \bigr) \right], \tag{5.110}
\end{aligned}
\]
with an optimal policy given by
\[
\mu_k^{\mathrm{MOCU}}(b) = \arg\min_{i \in \{1, \ldots, M\}} \sum_{b' \in B} p_{bb'}(T_i) \bigl( g_k(b, b', T_i) + J_{k+1}^{\mathrm{MOCU}}(b') \bigr) \tag{5.111}
\]
for $b \in B$ and $k = N-1, \ldots, 0$. In (Huan and Marzouk, 2016), an approximate dynamic programming solution based on the entropy scheme is considered for cases with a continuous belief space. Here, we employ the optimal dynamic programming solution for the finite belief space $B$. Again letting $\mu_k(b)$ be a policy at time step $k$, for
$k = 0, \ldots, N-1$, we define a bounded immediate cost function at $k$ corresponding to a transition from the belief vector $b(k) = b$ to the belief vector $b(k+1) = b'$ under policy $\mu_k$ by
\[
\tilde{g}_k\bigl(b, b', \mu_k(b)\bigr) = H(b') - H(b). \tag{5.112}
\]
Define the terminal cost function by $\tilde{g}_N(b) = H(b)$ for any $b \in B$. Using the immediate and terminal cost functions $\tilde{g}_k$ and $\tilde{g}_N$ instead of $g_k$ and $g_N$, for $k = 0, \ldots, N-1$, in the dynamic programming process, we obtain the optimal finite-horizon policy $\mu_{0:N-1}^{\mathrm{Entropy}}(b)$ for $b \in B$, called DP-entropy.

Example 5.5. To compare the performances of the various sequential design methods, we use the majority-vote BNp of Example 5.4. We employ the symmetric Dirichlet distribution for generating the prior (initial) distribution over the uncertainty class of network models:
\[
p^{b(0)} \sim f\bigl(p^{b(0)}; \phi\bigr) = \frac{\Gamma(\phi\, 2^M)}{\Gamma(\phi)^{2^M}} \prod_{j=1}^{2^M} \bigl(p_j^{b(0)}\bigr)^{\phi - 1},
\]
where $\Gamma$ is the gamma function and $\phi > 0$ is the parameter of the symmetric Dirichlet distribution. The expected value of $p^{b(0)}$ is a vector of size $2^M$ with all elements equal to $1/2^M$. $\phi$ specifies the variability of the initial distributions: the smaller $\phi$ is, the more the prior distributions deviate from the uniform distribution. Simulations based on synthetic BNps are performed: 100 random BNps of size six, each with a single set of $M$ unknown regulations, are generated. The perturbation probability is set to $p = 0.001$. States with an up-regulated first node are assumed to be undesirable. Setting $\phi = 5$, 500 initial distributions are generated. Five experimental design strategies are considered: greedy-MOCU, DP-MOCU, greedy-entropy, DP-entropy, and random. IBR intervention based on the resulting belief state of each strategy is applied to the true (unknown) network, and the cost is the total steady-state mass in undesirable states. Figure 5.9 shows the average cost of robust intervention with respect to the number of conducted experiments for the different experimental design strategies. DP-MOCU has the lowest average cost of robust intervention at the end of the horizon (all $N$ experiments); however, greedy-MOCU has the lowest cost before reaching the end of the horizon. DP-MOCU finds a sequence of experiments from time $0$ to $N-1$ to minimize the expected sum of the differences of MOCUs throughout this interval. The expected value of MOCU after conducting the last experiment plays the key role in the decision making by the dynamic programming policy. Because DP-MOCU plans to reduce MOCU at the end of the horizon, whereas greedy-MOCU takes only the next step into account, DP-MOCU achieves the lowest average cost at the
end of the horizon. If one is going to use a MOCU threshold, or a threshold based on the difference in MOCU between experiments, to decide when to stop experimenting, then greedy-MOCU may be preferred to DP-MOCU. ▪

Figure 5.9 Average cost of robust intervention with respect to the number of conducted experiments obtained by various experimental design strategies for randomly generated, synthetic BNps with $M = 7$ unknown regulations and $\phi = 5$.
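The backward recursion of Eqs. 5.109–5.111 used by DP-MOCU is a standard finite-horizon dynamic program; a generic sketch for a finite belief space, with the remaining-MOCU difference supplied as the immediate cost (all names hypothetical; the toy check uses a stand-in cost, not an actual network model):

```python
def dp_policy(states, experiments, transition, terminal_cost, N):
    """Backward DP of Eqs. 5.109-5.111 on a finite belief space.
    transition(b, T) -> list of (next_state, probability);
    the immediate cost g_k(b, b') = terminal_cost(b') - terminal_cost(b),
    mirroring Eq. 5.107 with terminal_cost playing the role of M_F(U|.)."""
    J = {b: terminal_cost(b) for b in states}          # J_N = g_N
    policy = []
    for _ in range(N):                                  # k = N-1, ..., 0
        Jk, mu = {}, {}
        for b in states:
            best_val, best_T = None, None
            for T in experiments:
                val = sum(p * (terminal_cost(b2) - terminal_cost(b) + J[b2])
                          for b2, p in transition(b, T))
                if best_val is None or val < best_val:
                    best_val, best_T = val, T
            Jk[b], mu[b] = best_val, best_T
        J = Jk
        policy.insert(0, mu)
    return policy, J

# toy check: one experiment fully resolves the belief state 'u' -> 'r'
states = ['u', 'r']
tc = lambda b: 1.0 if b == 'u' else 0.0        # stand-in for remaining MOCU
trans = lambda b, T: [('r', 1.0)]
policy, J = dp_policy(states, ['T'], trans, tc, N=2)
print(policy[0]['u'], J['u'])   # T -1.0
```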
5.7 Design with Inexact Measurements

So far, we have assumed that each experiment $T_i$ results in the exact determination of the uncertain parameter $\theta_i$ (returning to standard notation). While this assumption may be warranted when a parameter is measured with a high degree of accuracy, in other situations it may not be warranted.

5.7.1 Design with measurement error

To consider experimental error, denote the outcome of experiment $T_i$ by a random variable $\hat{\theta}_i$ (Mohsenizadeh et al., 2018). The joint probability
distribution $f(\hat{\theta}_i, \theta_i)$ for the value estimated from the experiment and the true value of the unknown parameter characterizes experimental accuracy. Relative to $\theta_i$ being the true value of the $i$th uncertain parameter and $\hat{\theta}_i$ being the outcome of experiment $T_i$, we quantify the experimental error of experiment $T_i$ by
\[
G(\hat{\theta}_i, \theta_i) = E_{U|\theta_i}\Bigl[ C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{U|\hat{\theta}_i}\bigr) - C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{U|\theta_i}\bigr) \Bigr], \tag{5.113}
\]
which is the expected increase in cost across the reduced uncertainty class $U|\theta_i$ when applying the IBR intervention obtained from the experiment outcome $\hat{\theta}_i$ instead of from the true value $\theta_i$ of the uncertain parameter. The expectation with respect to the joint distribution $f(\hat{\theta}_i, \theta_i)$ quantifies the experimental error for experiment $T_i$:
\[
D(\hat{\theta}_i, \theta_i) = E_{\hat{\theta}_i, \theta_i}\bigl[ G(\hat{\theta}_i, \theta_i) \bigr]. \tag{5.114}
\]
In this setting, Eq. 5.11 is adjusted to take the experimental error into account, and the primary parameter is defined by
\[
\begin{aligned}
i^* &= \arg\min_{i = 1, \ldots, k}\; D(\theta_i) + D(\hat{\theta}_i, \theta_i) \\
&= \arg\min_{i = 1, \ldots, k}\; E_{\theta_i}\Bigl[ E_{U|\theta_i}\Bigl[ C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{U|\theta_i}\bigr) - C_{\theta|\theta_i}(\psi_{\theta|\theta_i}) \Bigr] \Bigr] + E_{\hat{\theta}_i, \theta_i}\Bigl[ E_{U|\theta_i}\Bigl[ C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{U|\hat{\theta}_i}\bigr) - C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{U|\theta_i}\bigr) \Bigr] \Bigr] \\
&= \arg\min_{i = 1, \ldots, k}\; E_{\theta_i}\Bigl[ E_{U|\theta_i}\Bigl[ C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{U|\theta_i}\bigr) - C_{\theta|\theta_i}(\psi_{\theta|\theta_i}) \Bigr] \Bigr] - E_{\theta_i}\Bigl[ E_{U|\theta_i}\Bigl[ C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{U|\theta_i}\bigr) \Bigr] \Bigr] + E_{\theta_i}\Bigl[ E_{\hat{\theta}_i|\theta_i}\Bigl[ E_{U|\theta_i}\Bigl[ C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{U|\hat{\theta}_i}\bigr) \Bigr] \Bigr] \Bigr] \\
&= \arg\min_{i = 1, \ldots, k}\; E_{\theta_i}\Bigl[ E_{\hat{\theta}_i|\theta_i}\Bigl[ E_{U|\theta_i}\Bigl[ C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{U|\hat{\theta}_i}\bigr) \Bigr] \Bigr] \Bigr] - E_{\theta_i}\Bigl[ E_{U|\theta_i}\bigl[ C_{\theta|\theta_i}(\psi_{\theta|\theta_i}) \bigr] \Bigr] \\
&= \arg\min_{i = 1, \ldots, k}\; E_{\theta_i}\Bigl[ E_{\hat{\theta}_i|\theta_i}\Bigl[ E_{U|\theta_i}\Bigl[ C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{U|\hat{\theta}_i}\bigr) \Bigr] \Bigr] \Bigr], \tag{5.115}
\end{aligned}
\]
where in the third equality we use the fact that $E_{\hat{\theta}_i, \theta_i} = E_{\theta_i} E_{\hat{\theta}_i|\theta_i}$ and, in the third summand, the fact that the expression inside the expectation does not depend on $\hat{\theta}_i$, so that $E_{\hat{\theta}_i|\theta_i}$ can be dropped; and in the fifth equality we recognize that $E_{\theta_i} E_{U|\theta_i} = E_U$, so that the second summand does not depend on $i$.
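For a single binary parameter measured with a known flip probability, the criterion of Eq. 5.115 can be computed exactly; a small sketch (the flip-noise model and all names are illustrative assumptions):

```python
def noisy_design_value(prior, flip, cost):
    """Value of the Eq. 5.115 criterion for one binary parameter theta_i.
    prior = P(theta_i = 1); flip = P(theta_hat_i != theta_i);
    cost[theta][action] = expected cost of the action when the true value is
    theta (the inner expectation over U|theta_i is assumed folded in)."""
    def posterior_p1(that):
        # P(theta_i = 1 | theta_hat_i = that) by Bayes' rule
        num = prior * ((1.0 - flip) if that == 1 else flip)
        den = num + (1.0 - prior) * (flip if that == 1 else (1.0 - flip))
        return num / den
    def ibr_action(that):
        p1 = posterior_p1(that)
        return min(cost[0], key=lambda a: (1.0 - p1) * cost[0][a] + p1 * cost[1][a])
    total = 0.0
    for theta, p_theta in ((1, prior), (0, 1.0 - prior)):
        for that in (theta, 1 - theta):
            p_that = (1.0 - flip) if that == theta else flip
            total += p_theta * p_that * cost[theta][ibr_action(that)]
    return total

cost = {0: {'a': 0.0, 'b': 1.0}, 1: {'a': 1.0, 'b': 0.0}}
print(noisy_design_value(0.5, 0.0, cost))   # 0.0 (noise-free experiment)
```

With a completely uninformative experiment (flip probability 1/2), the value rises to the prior expected cost of a fixed action.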
Experiments can be done sequentially as follows: determine the highest-ranked experiment $T_{i^*}$; conduct experiment $T_{i^*}$ and apply its outcome $\hat{\theta}_{i^*}$ to the uncertainty class $U$ to yield a reduced uncertainty class $U|\hat{\theta}_{i^*}$; treating $U|\hat{\theta}_{i^*}$ as a new uncertainty class and removing $T_{i^*}$ from the original set of candidate experiments, calculate the next optimal experiment; repeat the process. To reduce the computational complexity of the experimental design in the presence of experimental error, following the rationale used to define the cost function in the experimental-error-free case, we use the following cost function for node deletion:
\[
g_D(g) = \sum_{i=1}^{k} \bigl| D_g(\theta_i) + D_g(\hat{\theta}_i, \theta_i) - D(\theta_i) - D(\hat{\theta}_i, \theta_i) \bigr|, \tag{5.116}
\]
where $D_g(\hat{\theta}_i, \theta_i)$ is defined similarly to $D(\hat{\theta}_i, \theta_i)$ but with the induced robust interventions $\psi_{\mathrm{IBR}}^{g, U|\hat{\theta}_i}$ and $\psi_{\mathrm{IBR}}^{g, U|\theta_i}$ in place of the robust interventions $\psi_{\mathrm{IBR}}^{U|\hat{\theta}_i}$ and $\psi_{\mathrm{IBR}}^{U|\theta_i}$, respectively. $D_g(\hat{\theta}_i, \theta_i)$ quantifies the differential cost of using the induced robust intervention obtained relative to the erroneous outcome instead of the induced robust intervention obtained relative to the correct outcome. Some mathematical manipulation shows that the node for deletion is
\[
g^* = \arg\min_{g} g_D(g) = \arg\min_{g} \sum_{i=1}^{k} E_{\theta_i}\Bigl[ E_{\hat{\theta}_i|\theta_i}\Bigl[ E_{U|\theta_i}\Bigl[ C_{\theta|\theta_i}\bigl(\psi_{\mathrm{IBR}}^{g, U|\hat{\theta}_i}\bigr) \Bigr] \Bigr] \Bigr]. \tag{5.117}
\]
Upon deleting node $g^*$, use the corresponding induced interventions in Eq. 5.115 to find the best experiment $T_{i^*}$.

5.7.2 Design via surrogate measurements

Sometimes it is not possible to measure the parameter of interest, but one can measure a surrogate parameter related to it. If the surrogate parameter $\eta_i$ is measured in place of $\theta_i$, then $U$ remains the uncertainty class, but the probability distribution $\pi(\theta|\eta_i)$ on $U$ is now conditioned on $\eta_i$. The remaining MOCU, given $\eta_i$, is defined as
\[
M_F(U|\eta_i) = E_{\pi(\theta|\eta_i)}\Bigl[ C_\theta\bigl(\psi_{\mathrm{IBR}}^{U|\eta_i}\bigr) - C_\theta(\psi_\theta) \Bigr], \tag{5.118}
\]
where $\psi_{\mathrm{IBR}}^{U|\eta_i}$ denotes the IBR filter relative to $\pi(\theta|\eta_i)$, and the expectation is relative to $\pi(\theta|\eta_i)$. Note that $C_{\theta|\theta_i}$ is replaced by $C_\theta$ in Eq. 5.7 because, as opposed to the situation in which the experiment yields the true value $\theta_i$, where the uncertainty class is reduced to $U|\theta_i$ so that the IBR filter is relative to a reduced uncertainty class and the costs are considered over that reduced class, here the IBR filter is still relative
to $U$ but to a different distribution over $U$, and the costs are still considered over all of $U$. Taking the expectation with respect to the density $f(\eta_i)$ for $\eta_i$ yields the experimental design value,
\[
D(\theta_i; \eta_i) = E_{\eta_i}\Bigl[ E_{\pi(\theta|\eta_i)}\Bigl[ C_\theta\bigl(\psi_{\mathrm{IBR}}^{U|\eta_i}\bigr) - C_\theta(\psi_\theta) \Bigr] \Bigr], \tag{5.119}
\]
where the notation $D(\theta_i; \eta_i)$ indicates measurement of the surrogate parameter $\eta_i$. The primary parameter possesses minimum experimental design value. Proceeding analogously to Eq. 5.12, we can instead minimize the residual IBR cost:
\[
\begin{aligned}
R(\theta_i; \eta_i) &= E_{\eta_i}\Bigl[ E_{\pi(\theta|\eta_i)}\Bigl[ C_\theta\bigl(\psi_{\mathrm{IBR}}^{U|\eta_i}\bigr) \Bigr] \Bigr] \\
&= \int \left[ \int C_\theta\bigl(\psi_{\mathrm{IBR}}^{U|\eta_i}\bigr)\, \pi(\theta|\eta_i)\, d\theta \right] f(\eta_i)\, d\eta_i \\
&= \int\!\!\int C_\theta\bigl(\psi_{\mathrm{IBR}}^{U|\eta_i}\bigr)\, f(\theta, \eta_i)\, d\theta\, d\eta_i. \tag{5.120}
\end{aligned}
\]
Although $\eta_i$ is being measured to gain information about $\theta_i$, it need not be independent of the other parameters in $\theta$; indeed, $\theta_i$ itself might not be independent of the other parameters.
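For discrete $\theta$ and a discrete surrogate $\eta$, the residual IBR cost of Eq. 5.120 becomes a double sum; a minimal sketch assuming the joint distribution $f(\theta, \eta)$ and costs are given as dictionaries (names hypothetical):

```python
def residual_ibr_cost(joint, cost):
    """Residual IBR cost R of Eq. 5.120 for discrete theta and surrogate eta.
    joint[(theta, eta)] = f(theta, eta); cost[theta][action] = C_theta(psi)."""
    etas = {e for (_, e) in joint}
    total = 0.0
    for e in etas:
        f_e = sum(p for (t, e2), p in joint.items() if e2 == e)
        post = {t: joint.get((t, e), 0.0) / f_e for t in cost}
        # IBR action under the posterior pi(theta | eta = e)
        ibr = min(cost[0], key=lambda a: sum(post[t] * cost[t][a] for t in cost))
        total += sum(joint.get((t, e), 0.0) * cost[t][ibr] for t in cost)
    return total

cost = {0: {'a': 0.0, 'b': 1.0}, 1: {'a': 1.0, 'b': 0.0}}
perfect = {(0, 0): 0.5, (1, 1): 0.5}        # eta reveals theta exactly
print(residual_ibr_cost(perfect, cost))     # 0.0
```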
5.8 General MOCU-based Experimental Design

In the previous section we demonstrated MOCU-based experimental design when an experiment does not yield the true value of an unknown parameter, either because there is experimental error or because the experiment outputs a surrogate measure. In fact, the concept of MOCU can be generalized to produce a more general theory of experimental design (Boluki et al., 2018). The definition of MOCU in Eq. 5.2 is quite general and includes nothing specifically related to the assumptions or functional forms of an underlying uncertainty class of physical models, nor to operators on such models. Moreover, as seen in the preceding section, the general form of Eq. 5.7 defining the remaining MOCU is not limited to conditioning on an exact determination of the parameter, or to the experiment output being an estimate of the parameter. For a general formulation of MOCU-based experimental design, assume a probability space $U$ with probability measure $\pi$, a set $\Psi$, and a function $C : U \times \Psi \to [0, \infty)$. $U$, $\pi$, $\Psi$, and $C$ are called the uncertainty class, prior distribution, action space, and cost function, respectively. Elements of $U$ and $\Psi$ are called uncertainty parameters and actions, respectively. For any $\theta \in U$, an optimal action is an element $\psi_\theta \in \Psi$ such that $C(\theta, \psi_\theta) \le C(\theta, \psi)$ for any $\psi \in \Psi$. An intrinsically Bayesian robust (IBR) action is an element $\psi_{\mathrm{IBR}}^U \in \Psi$ such that
\[
E_U\bigl[ C\bigl(\theta, \psi_{\mathrm{IBR}}^U\bigr) \bigr] \le E_U\bigl[ C(\theta, \psi) \bigr] \tag{5.121}
\]
for any $\psi \in \Psi$. The mean objective cost of uncertainty $M_C(U)$ is given by Eq. 5.2 with the cost function now written as $C(\theta, \psi_\theta)$. Also assume the existence of a set $\Xi$, called the experiment space, whose elements $\xi$, called experiments, are jointly distributed with the uncertainty parameters $\theta$. Given $\xi \in \Xi$, the conditional distribution $\pi(\theta|\xi)$ is the posterior distribution relative to $\xi$, and $U|\xi$ denotes the corresponding probability space, called the conditional uncertainty class. Relative to $U|\xi$, we define IBR actions $\psi_{\mathrm{IBR}}^{U|\xi}$ and the remaining MOCU by
\[
M_C(U|\xi) = E_{U|\xi}\Bigl[ C\bigl(\theta, \psi_{\mathrm{IBR}}^{U|\xi}\bigr) - C(\theta, \psi_\theta) \Bigr], \tag{5.122}
\]
where the expectation is with respect to $\pi(\theta|\xi)$. Taking the expectation over $\xi$ gives the experimental design value (expected remaining MOCU):
\[
D_C(U, \xi) = E_\xi\Bigl[ E_{U|\xi}\Bigl[ C\bigl(\theta, \psi_{\mathrm{IBR}}^{U|\xi}\bigr) - C(\theta, \psi_\theta) \Bigr] \Bigr]. \tag{5.123}
\]
An optimal experiment $\xi^* \in \Xi$ minimizes $D_C(U, \xi)$:
\[
\xi^* = \arg\min_{\xi \in \Xi} D_C(U, \xi). \tag{5.124}
\]
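For finite uncertainty, action, and experiment spaces, the experimental design value of Eq. 5.123 is a weighted sum of remaining MOCUs over experiment outcomes; a sketch under the assumption that each experiment is summarized by its outcome probabilities and resulting posteriors (names hypothetical):

```python
def expected_remaining_mocu(outcomes, cost, actions):
    """D_C(U, xi) of Eq. 5.123 for one experiment on finite spaces.
    outcomes: list of (probability, posterior) pairs, where posterior is a
    dict model -> probability; cost[(model, action)] is the cost function."""
    def mocu(post):
        ibr_cost = min(sum(post[t] * cost[(t, a)] for t in post) for a in actions)
        opt_cost = sum(post[t] * min(cost[(t, a)] for a in actions) for t in post)
        return ibr_cost - opt_cost
    return sum(p * mocu(post) for p, post in outcomes)

cost = {(0, 'a'): 0.0, (0, 'b'): 1.0, (1, 'a'): 1.0, (1, 'b'): 0.0}
informative = [(0.5, {0: 1.0, 1: 0.0}), (0.5, {0: 0.0, 1: 1.0})]
vacuous = [(1.0, {0: 0.5, 1: 0.5})]
print(expected_remaining_mocu(informative, cost, ['a', 'b']))   # 0.0
print(expected_remaining_mocu(vacuous, cost, ['a', 'b']))       # 0.5
```

An optimal experiment is then the one minimizing this value, as in Eq. 5.124.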
With sequential experiments, the action space and the experiment space can be time dependent: they can differ at each time step. Thus, the action space, the experiment space, and the optimal experiment selected at time step $k$ are denoted by $\Psi_k$, $\Xi_k$, and $\xi^*_k$, respectively. $\pi(\theta|\xi_{1:k})$ denotes the posterior distribution after observing the selected experiments' outcomes from the first time step through $k$, and $U|\xi_{1:k}$ denotes the corresponding conditional uncertainty class. By placing assumptions on the uncertainty class, action space, and experiment space, many existing Bayesian experimental design formulations result. For instance, MOCU-based experimental design for finding an optimal dopant and concentration to minimize the simulated energy dissipation for shape memory alloys (Dehghannasiri et al., 2017b) fits directly into the generalized MOCU formulation (Boluki et al., 2018).

5.8.1 Connection to knowledge gradient

The knowledge gradient (Frazier et al., 2008, 2009), originally introduced as a solution to an offline ranking and selection problem, assumes that there are $A \ge 2$ actions (alternatives) that can be selected, so that, in the present setting, $\Psi = \{\psi_1, \ldots, \psi_A\}$. Each action has an unknown true reward (sign-flipped cost),
and at each time step an experiment provides a noisy observation of the reward of a selected action. There is a limited budget $B$ on the number of measurements that can be taken before deciding which action is best, that being the one having the lowest expected cost (or the highest expected reward). As we now demonstrate, under its model assumptions, the knowledge gradient is a special case of MOCU (Boluki et al., 2018). We assume normal prior beliefs over the unknown rewards: either independent normal beliefs when the rewards of different actions are uncorrelated, or a joint normal belief when the rewards are correlated. In the independent case, for each action–reward pair $(\psi_i, \theta_{\psi_i})$, $\theta_{\psi_i} \sim N(\mu_{\psi_i}, \beta_{\psi_i})$. In the correlated case, the vector $(\theta_{\psi_1}, \ldots, \theta_{\psi_A})$ of rewards has a multivariate normal distribution $N(\mu, \Sigma)$ with mean vector $\mu = (\mu_{\psi_1}, \ldots, \mu_{\psi_A})$ and covariance matrix $\Sigma$, with diagonal entries $(\beta_{\psi_1}, \ldots, \beta_{\psi_A})$. If the selected action to be applied at time step $k$ is $\psi^k$, then the observed noisy reward of $\psi^k$ at that iteration is $\xi^k = \theta_{\psi^k} + \varepsilon^k$, where $\theta_{\psi^k}$ is unknown and $\varepsilon^k \sim N(0, \lambda_{\psi^k})$ is independent of the reward of $\psi^k$. The underlying system to learn is the unknown reward function, and each possible model is fully described by a reward vector $\theta = (\theta_{\psi_1}, \theta_{\psi_2}, \ldots, \theta_{\psi_A}) \in U$. For the independent case,
\[
\pi(\theta) = \prod_{i=1}^{A} N(\mu_{\psi_i}, \beta_{\psi_i}). \tag{5.125}
\]
For the correlated case, $\pi(\theta) = N(\mu, \Sigma)$. The experiment space is $\Xi = \{\xi_1, \ldots, \xi_A\}$, where experiment $\xi_i$ corresponds to applying $\psi_i$ and getting a noisy observation of its reward $\theta_{\psi_i}$, that is, measuring $\theta_{\psi_i}$ with observation noise, where $\xi_i \mid \theta_{\psi_i} \sim N(\theta_{\psi_i}, \lambda_{\psi_i})$. In the independent case, the state of knowledge at each time step $k$ is captured by the posterior means and variances of the rewards after incorporating the observations $\xi_{1:k}$, $S^k = [(\mu_\psi^k, \beta_\psi^k)]_{\psi \in \Psi}$, and in the correlated case by the posterior mean vector and covariance matrix after observing $\xi_{1:k}$, $S^k = (\mu^k, \Sigma^k)$, where $\mu^k = (\mu_{\psi_1}^k, \ldots, \mu_{\psi_A}^k)$ and the diagonal of $\Sigma^k$ is the vector $(\beta_{\psi_1}^k, \ldots, \beta_{\psi_A}^k)$. The probability space $U|\xi_{1:k}$ equals $U|S^k$, and the cost function is $C(\theta, \psi) = -\theta_\psi$. For this problem, the IBR action at time step $k$ is
\[
\psi_{\mathrm{IBR}}^{U|\xi_{1:k}} = \arg\min_{\psi \in \Psi} E_{U|\xi_{1:k}}\bigl[ C(\theta, \psi) \bigr] = \arg\min_{\psi \in \Psi} E_{U|\xi_{1:k}}[-\theta_\psi] = \arg\max_{\psi \in \Psi} E_{U|\xi_{1:k}}[\theta_\psi] = \arg\max_{\psi \in \Psi} \mu_\psi^k. \tag{5.126}
\]
The optimal experiment selected at time step $k$, to be performed at step $k+1$, is
\[
\begin{aligned}
\xi^*_k &= \arg\min_{\xi_i \in \Xi} E_{\xi_i|\xi_{1:k}}\Bigl[ E_{U|\xi_i, \xi_{1:k}}\Bigl[ C\bigl(\theta, \psi_{\mathrm{IBR}}^{U|\xi_i, \xi_{1:k}}\bigr) \Bigr] \Bigr] - E_{U|\xi_{1:k}}\Bigl[ C\bigl(\theta, \psi_{\mathrm{IBR}}^{U|\xi_{1:k}}\bigr) \Bigr] \\
&= \arg\min_{\xi_i \in \Xi} E_{\xi_i|\xi_{1:k}}\Bigl[ E_{U|\xi_{1:k+1}}\Bigl[ -\theta_{\psi_{\mathrm{IBR}}^{U|\xi_{1:k+1}}} \Bigr] \Bigr] + E_{U|\xi_{1:k}}\Bigl[ \theta_{\psi_{\mathrm{IBR}}^{U|\xi_{1:k}}} \Bigr] \\
&= \arg\max_{\xi_i \in \Xi} E_{\xi_i|\xi_{1:k}}\Bigl[ E_{U|\xi_{1:k+1}}\Bigl[ \theta_{\psi_{\mathrm{IBR}}^{U|\xi_{1:k+1}}} \Bigr] \Bigr] - E_{U|\xi_{1:k}}\Bigl[ \theta_{\psi_{\mathrm{IBR}}^{U|\xi_{1:k}}} \Bigr] \\
&= \arg\max_{\xi_i \in \Xi} E_{\xi_i|\xi_{1:k}}\Bigl[ \max_{\psi' \in \Psi} \mu_{\psi'}^{k+1} \Bigr] - \max_{\psi' \in \Psi} \mu_{\psi'}^k. \tag{5.127}
\end{aligned}
\]
This optimal experiment is exactly the original knowledge-gradient policy in (Frazier et al., 2008, 2009; Wang et al., 2015). Since the knowledge gradient is optimal when the horizon is a single measurement and is asymptotically optimal, so too is the MOCU-based policy for this problem. The efficient global optimization (EGO) algorithm of (Jones et al., 1998) is used for black-box optimization and experimental design. As shown in (Frazier and Wang, 2016), the knowledge gradient reduces to EGO when there is no observation noise and choosing the best action at each time step is limited to selecting from the set of actions whose rewards have been observed from the first time step up to that time. Thus, MOCU-based experimental design can also be reduced to EGO under its model assumptions; this is demonstrated directly from MOCU-based experimental design in (Boluki et al., 2018). There are fundamental differences between the knowledge gradient and MOCU-based experimental design: (1) with MOCU the experiment space and action space can be different, but with the knowledge gradient they are the same; (2) with MOCU, uncertainty over the parameters of the underlying physical model can be incorporated, which allows direct incorporation of prior knowledge regarding the underlying system via the prior distribution, whereas with the knowledge gradient the uncertainty is considered directly on the reward function, and there is no direct connection between prior assumptions and the underlying physical model.
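For independent normal beliefs, the knowledge-gradient policy has a well-known closed form (Frazier et al., 2008); a sketch (variable names assumed; $\tilde\sigma$ is the standard deviation of the one-step change in the posterior mean):

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def knowledge_gradient_choice(mu, beta, lam):
    """Index of the alternative with the largest one-step knowledge-gradient
    value for independent normal beliefs: nu_i = s * (z * Phi(z) + phi(z)),
    with s = beta_i / sqrt(beta_i + lam_i) and
    z = -|mu_i - max_{j != i} mu_j| / s."""
    best_val, best_i = None, None
    for i in range(len(mu)):
        s = beta[i] / math.sqrt(beta[i] + lam[i])
        others = max(m for j, m in enumerate(mu) if j != i)
        z = -abs(mu[i] - others) / s
        nu = s * (z * Phi(z) + phi(z))
        if best_val is None or nu > best_val:
            best_val, best_i = nu, i
    return best_i

# with equal means, measure the alternative with the most posterior variance
print(knowledge_gradient_choice([0.0, 0.0], [1.0, 0.01], [1.0, 1.0]))   # 0
```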
Chapter 6
Optimal Classification

Classification and clustering are often grouped together as topics within pattern recognition. This grouping is reasonable owing to their similarities: classifiers and clusterers both operate on points, and their performances are judged according to the correctness of operator output. This chapter and the next concern optimal classification and optimal clustering under model uncertainty. The underlying random processes for classification and clustering are feature-label distributions and random point processes, respectively. For classification, if the feature-label distribution is known, then one can derive an optimal classifier with minimum classification error. For clustering, if the random point process is known, then one can derive an optimal cluster operator with minimum clustering error. When these probability models are unknown and belong to uncertainty classes, then the best approach is to find an optimal robust classifier or an optimal robust clusterer. This chapter treats classification, focusing on optimal Bayesian classifiers, a special case being intrinsically Bayesian robust classifiers when there are no training data. In the next chapter, we consider intrinsically Bayesian robust clustering.
6.1 Bayes Classifier

Binary classification involves a feature vector $X = (X_1, X_2, \ldots, X_d)^T \in \mathbb{R}^d$ composed of random variables (features), a binary random variable $Y$, and a classifier (measurable function) $\psi : \mathbb{R}^d \to \{0, 1\}$ to predict $Y$, meaning that $Y$ is predicted by $\psi(X)$. The features $X_1, X_2, \ldots, X_d$ can be discrete or real-valued. The values, 0 or 1, of $Y$ are treated as class labels. The error $\varepsilon[\psi]$ of $\psi$ is the probability of erroneous classification, namely,
\[
\varepsilon[\psi] = P(\psi(X) \neq Y). \tag{6.1}
\]
Owing to the binary nature of $\psi(X)$ and $Y$,
\[
\varepsilon[\psi] = E\bigl[|Y - \psi(X)|\bigr] = E\bigl[|Y - \psi(X)|^2\bigr], \tag{6.2}
\]
so that the classification error equals both the mean-absolute and mean-square error of predicting $Y$ from $X$ using $\psi$. An optimal classifier $\psi_{\mathrm{bay}}$, called a Bayes classifier, is one having minimal error among the collection $\mathcal{C}$ of all measurable binary functions on $\mathbb{R}^d$, so that it is an optimal MSE predictor of $Y$:
\[
\psi_{\mathrm{bay}} = \arg\min_{\psi \in \mathcal{C}} \varepsilon[\psi]. \tag{6.3}
\]
The error $\varepsilon_{\mathrm{bay}}$ of a Bayes classifier is called the Bayes error. Classification accuracy depends on the probability distribution $F_{XY}$ of the feature–label pair $(X, Y)$. $F_{XY}$ is called the feature-label distribution and completely characterizes the classification problem. Whereas there may be many Bayes classifiers, the Bayes error is intrinsic to the feature-label distribution. $F_{XY}$ determines the class prior probabilities $c_0 = P(Y = 0)$ and $c_1 = P(Y = 1)$, and (if they exist) the class-conditional densities $f_0(x) = P(x \mid Y = 0)$ and $f_1(x) = P(x \mid Y = 1)$. The latter correspond to class populations $\Pi_0$ and $\Pi_1$, respectively. To avoid trivialities, we assume that $c_0, c_1 > 0$. The posterior distributions are defined by
\[
h_i(x) = P(Y = i \mid X = x) \tag{6.4}
\]
for $i = 0, 1$. Note that
\[
h_1(x) = 1 - h_0(x) = E[Y \mid X = x]. \tag{6.5}
\]
If densities exist, then $h_0(x) = c_0 f_0(x)/f(x)$ and $h_1(x) = c_1 f_1(x)/f(x)$, where
\[
f(x) = c_0 f_0(x) + c_1 f_1(x) \tag{6.6}
\]
is the density of $X$. If the class-conditional densities exist, then
\[
\varepsilon[\psi] = \int_{\{x \mid \psi(x) = 0\}} h_1(x) f(x)\, dx + \int_{\{x \mid \psi(x) = 1\}} h_0(x) f(x)\, dx. \tag{6.7}
\]
The right-hand side of Eq. 6.7 is minimized by
\[
\psi_{\mathrm{bay}}(x) =
\begin{cases}
1, & h_1(x) \ge h_0(x),\\
0, & \text{otherwise.}
\end{cases} \tag{6.8}
\]
$\psi_{\mathrm{bay}}(x)$ equals 0 or 1 according to whether $Y$ is more likely to be 0 or 1 given $X = x$. The inequality $h_1(x) \ge h_0(x)$ can be replaced by a strict inequality without affecting optimality; in fact, ties between the posterior probabilities can be broken arbitrarily, yielding a possibly infinite number of distinct Bayes classifiers. The derivation of a Bayes classifier does not require the existence of class-conditional densities, as has been assumed here; for a general proof that $\psi_{\mathrm{bay}}$ in Eq. 6.8 is optimal, see (Devroye et al., 1996). Equations 6.7 and 6.8 can be written in terms of the class-conditional densities:
\[
\varepsilon[\psi] = c_1 \int_{\{x \mid \psi(x) = 0\}} f_1(x)\, dx + c_0 \int_{\{x \mid \psi(x) = 1\}} f_0(x)\, dx, \tag{6.9}
\]
and
\[
\psi_{\mathrm{bay}}(x) =
\begin{cases}
1, & c_1 f_1(x) \ge c_0 f_0(x),\\
0, & \text{otherwise.}
\end{cases} \tag{6.10}
\]
Population-specific error rates are defined by
\[
\varepsilon^0[\psi] = P(\psi(X) = 1 \mid Y = 0) = \int_{\{x \mid \psi(x) = 1\}} f_0(x)\, dx, \tag{6.11}
\]
\[
\varepsilon^1[\psi] = P(\psi(X) = 0 \mid Y = 1) = \int_{\{x \mid \psi(x) = 0\}} f_1(x)\, dx. \tag{6.12}
\]
The overall error rate is given in terms of the population-specific rates by
\[
\varepsilon[\psi] = c_0\, \varepsilon^0[\psi] + c_1\, \varepsilon^1[\psi]. \tag{6.13}
\]
From Eqs. 6.7 and 6.8, the Bayes error is
\[
\begin{aligned}
\varepsilon_{\mathrm{bay}} &= \int_{\{x \mid h_1(x) < h_0(x)\}} h_1(x) f(x)\, dx + \int_{\{x \mid h_1(x) \ge h_0(x)\}} h_0(x) f(x)\, dx \\
&= E\bigl[\min\{h_0(X), h_1(X)\}\bigr]. \tag{6.14}
\end{aligned}
\]
By Jensen's inequality, it follows that
\[
\varepsilon_{\mathrm{bay}} \le \min\bigl\{ E[h_0(X)],\, E[h_1(X)] \bigr\}. \tag{6.15}
\]
Therefore, if either posterior probability is uniformly small (for example, if one of the classes is much more likely than the other), then the Bayes error is necessarily small. Regarding the Bayes classifier, the following four steps correspond to the four generic steps for optimal operator synthesis, with Eq. 6.3 corresponding to Eq. 3.169:

1. Construct the feature-label distribution.
2. The operators consist of classifiers on the feature-label distribution.
3. The cost is classifier error.
4. An optimal operator is given by a Bayes classifier.
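For a discrete feature, Eqs. 6.10 and 6.14 can be evaluated directly from the class-conditional pmfs; a minimal sketch (names hypothetical):

```python
def bayes_classifier(c0, f0, f1):
    """Bayes classifier (Eq. 6.10) and Bayes error (Eq. 6.14) for a discrete
    feature with class-conditional pmfs f0, f1 given as dicts over values x."""
    c1 = 1.0 - c0
    support = set(f0) | set(f1)
    rule = {x: 1 if c1 * f1.get(x, 0.0) >= c0 * f0.get(x, 0.0) else 0
            for x in support}
    err = sum(min(c0 * f0.get(x, 0.0), c1 * f1.get(x, 0.0)) for x in support)
    return rule, err

rule, err = bayes_classifier(0.5, {0: 0.9, 1: 0.1}, {0: 0.1, 1: 0.9})
print(rule[0], rule[1])   # 0 1
```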
6.1.1 Discriminants

Classifiers are often expressed in terms of a discriminant function $D : \mathbb{R}^d \to \mathbb{R}$ according to
\[
\psi(x) =
\begin{cases}
1, & D(x) \le k,\\
0, & \text{otherwise,}
\end{cases} \tag{6.16}
\]
for a suitably chosen constant $k$. In terms of the discriminant, the population-specific error rates take the form
\[
\varepsilon^0[\psi] = P(D(X) \le k \mid Y = 0) = \int_{D(x) \le k} f_0(x)\, dx, \tag{6.17}
\]
\[
\varepsilon^1[\psi] = P(D(X) > k \mid Y = 1) = \int_{D(x) > k} f_1(x)\, dx. \tag{6.18}
\]
It is well known (Anderson, 1984) that a Bayes classifier is given by the likelihood ratio $f_0(x)/f_1(x)$ as the discriminant, with threshold $k = c_1/c_0$. Since log is a monotone function, an equivalent optimal procedure is to use the log-likelihood ratio as the discriminant,
\[
D_{\mathrm{bay}}(x) = \log \frac{f_0(x)}{f_1(x)}, \tag{6.19}
\]
and define a Bayes classifier as
\[
\psi_{\mathrm{bay}}(x) =
\begin{cases}
1, & D_{\mathrm{bay}}(x) \le \log \dfrac{c_1}{c_0},\\
0, & \text{otherwise.}
\end{cases} \tag{6.20}
\]
For multivariate Gaussian populations $\Pi_0 \sim N(\mu_0, \Sigma_0)$ and $\Pi_1 \sim N(\mu_1, \Sigma_1)$, the class-conditional densities are
\[
f_i(x) = \frac{1}{\sqrt{(2\pi)^d \det(\Sigma_i)}} \exp\left[ -\frac{1}{2} (x - \mu_i)^T \Sigma_i^{-1} (x - \mu_i) \right] \tag{6.21}
\]
for $i = 0, 1$. According to Eq. 6.19, the optimal discriminant is given by
\[
D_Q(x) = -\frac{1}{2} (x - \mu_0)^T \Sigma_0^{-1} (x - \mu_0) + \frac{1}{2} (x - \mu_1)^T \Sigma_1^{-1} (x - \mu_1) + \frac{1}{2} \log \frac{\det(\Sigma_1)}{\det(\Sigma_0)}, \tag{6.22}
\]
which describes a hyperquadric decision boundary in $\mathbb{R}^d$. If $\Sigma_0 = \Sigma_1 = \Sigma$, then the optimal discriminant reduces to
\[
D_L(x) = \left( x - \frac{\mu_0 + \mu_1}{2} \right)^T \Sigma^{-1} (\mu_0 - \mu_1), \tag{6.23}
\]
which describes a hyperplane in $\mathbb{R}^d$. The optimal classifiers based on discriminants $D_Q(x)$ and $D_L(x)$ define population-based quadratic discriminant analysis (QDA) and population-based linear discriminant analysis (LDA), respectively. It follows from Eq. 6.23 that the conditional linear discriminants possess Gaussian distributions: $D_L(X) \mid (Y = 0) \sim N\bigl(\tfrac{1}{2}\delta^2, \delta^2\bigr)$ and $D_L(X) \mid (Y = 1) \sim N\bigl(-\tfrac{1}{2}\delta^2, \delta^2\bigr)$, where
\[
\delta = \sqrt{(\mu_1 - \mu_0)^T \Sigma^{-1} (\mu_1 - \mu_0)} \tag{6.24}
\]
is the Mahalanobis distance between the populations. Hence,
\[
\varepsilon^0[\psi_{\mathrm{bay}}] = P\left( D_L(X) \le \log \frac{c_1}{c_0} \;\Big|\; Y = 0 \right) = \Phi\left( \frac{\log \frac{c_1}{c_0} - \frac{1}{2}\delta^2}{\delta} \right), \tag{6.25}
\]
\[
\varepsilon^1[\psi_{\mathrm{bay}}] = P\left( D_L(X) > \log \frac{c_1}{c_0} \;\Big|\; Y = 1 \right) = \Phi\left( \frac{-\log \frac{c_1}{c_0} - \frac{1}{2}\delta^2}{\delta} \right), \tag{6.26}
\]
where $\Phi(x)$ is the cumulative distribution function of a standard $N(0, 1)$ Gaussian random variable. If the classes are equally likely, $c_0 = c_1 = 0.5$, then $\varepsilon[\psi_{\mathrm{bay}}] = \Phi(-\delta/2)$.
where F(x) is the cumulative distribution function of a standard N(0, 1) Gaussian random variable. If the classes are equally likely, c0 ¼ c1 ¼ 0.5, then ε½cbay ¼ Fð d2Þ.
6.2 Optimal Bayesian Classifier Model uncertainty arises when full knowledge of the feature-label distribution is lacking. Knowledge must come from existing scientific knowledge regarding the features and labels or be estimated from data. Since accurate estimation of distributions requires a huge amount of data, the amount increasing rapidly with dimension and distributional complexity, full knowledge of the feature-label distribution is rare. With model uncertainty, there are two options: (1) assume an uncertainty class of feature-label distributions governed by a prior distribution p(u) and find an optimal classifier, or (2) take a heuristic approach by assuming some methodology to construct a classifier, perhaps making some assumptions about the feature-label distribution. Our interest is in optimization, which we discuss in the current section. We will subsequently, and briefly, describe the second option (which dominates pattern recognition) for comparison purposes.
202
Chapter 6
With model uncertainty, there is an uncertainty class U of parameter vectors corresponding to feature-label distributions fu(x, y) for u ∈ U. Letting εu[c] denote the error of classifier c on model u, an intrinsically Bayesian robust (IBR) classifier is defined by cIBR ¼ arg min E p ½εu ½c, c∈C
(6.27)
where C is a collection of classifiers and where in this chapter we write Ep and Ep instead of EU in order to distinguish between expectation over the prior and posterior distributions. Recall from Section 3.1 that the best constant MSE predictor of a random variable is its mean. Thus, the best MSE constant prediction of εu[c] relative to p(u) is Ep[εu[c]], so that cIBR minimizes the optimal MSE constant predictor of the error over c ∈ C. Suppose that we have a random sample S n ¼ fðX1 , Y 1 Þ, : : : , ðXn , Y n Þg of vector–label pairs drawn from the feature-label distribution, meaning that the pairs (Xi, Yi) are independent and identically distributed according to FXY. The numbers N0 and N1 of sample points from populations Π0 and Π1, respectively, are random variables such that N0 binomial(n, c0) and N1 binomial(n, c1), with N0 þ N1 ¼ n. Given Sn, there is a posterior distribution p (u) ¼ p(u|Sn). An optimal Bayesian classifier (OBC) is defined by cOBC ðS n Þ ¼ arg min E p ½εu ½cjS n c∈C
¼ arg min E p ½εu ½c
(6.28)
c∈C
(Dalton and Dougherty, 2013a, b). An IBR classifier is an OBC corresponding to a null sample. In particular, Ep [εu[c]] has an interpretation corresponding to Ep[εu[c]]. 6.2.1 Bayesian MMSE error estimator A sample-dependent minimum-mean-square-error (MMSE) estimator εˆ ðS n Þ of εu[c] is obtained via the minimization εˆ ðS n Þ ¼ arg min E p,Sn ½jεu ½c jðS n Þj2 , j∈B
(6.29)
where $\mathcal{B}$ is the set of all Borel measurable functions. According to Theorem 3.1,
\[
\hat\varepsilon(S_n) = E_\pi\bigl[ \varepsilon_\theta[\psi] \mid S_n \bigr] = E_{\pi^*}\bigl[ \varepsilon_\theta[\psi] \bigr]. \tag{6.30}
\]
In this light, $E_{\pi^*}[\varepsilon_\theta[\psi]]$ is called the Bayesian MMSE error estimator (BEE) and is denoted by $\hat\varepsilon_U[\psi; S_n]$ (Dalton and Dougherty, 2011a, b). The OBC can be reformulated as
\[
\psi_{\mathrm{OBC}}(S_n) = \arg\min_{\psi \in \mathcal{C}} \hat\varepsilon_U[\psi; S_n]. \tag{6.31}
\]
Besides minimizing $E_{\pi, S_n}[|\varepsilon_\theta[\psi] - \xi(S_n)|^2]$, the BEE is also an unbiased estimator of $\varepsilon_\theta[\psi]$ over the distribution of $\theta$ and $S_n$; namely,
\[
E_{S_n}\bigl[ \hat\varepsilon_U[\psi; S_n] \bigr] = E_{S_n}\bigl[ E_\pi[\varepsilon_\theta[\psi] \mid S_n] \bigr] = E_{\pi, S_n}\bigl[ \varepsilon_\theta[\psi] \bigr]. \tag{6.32}
\]
For small samples, purely data-based error estimators tend to perform poorly, and it is in the small-sample setting that the BEE has a major advantage (Braga-Neto and Dougherty, 2015). Because an IBR classifier and $E_\pi[\varepsilon_\theta[\psi]]$ can be viewed as special cases of an OBC and $E_{\pi^*}[\varepsilon_\theta[\psi]]$, respectively, in the absence of data, we focus our attention on the OBC, which makes practical sense because classifier design is generally considered in the context of sample data. Two issues must be addressed in OBC design: representation of the Bayesian MMSE error estimator, and minimization. To address these issues, we first consider class-based error decomposition in the context of an uncertainty class. In binary classification, $\theta$ is a random vector composed of three parts: the parameters of the class-0 and class-1 conditional distributions, $\theta_0$ and $\theta_1$, respectively, and the class-0 prior probability $c = c_0$ (with $c_1 = 1 - c$ for class 1). Let $U_y$ denote the parameter space for $\theta_y$, $y = 0, 1$, and write the class-conditional distribution as $f_{\theta_y}(x|y)$. The marginal prior densities are $\pi(\theta_y)$, $y = 0, 1$, and $\pi(c)$. To facilitate analytic representations, we assume that $c$, $\theta_0$, and $\theta_1$ are all independent prior to observing the data. This assumption imposes limitations; for instance, we cannot assume Gaussian distributions with the same unknown covariance for both classes, nor can we use the same parameter in both classes. However, it allows us to separate the prior density $\pi(\theta)$ and ultimately to separate the Bayesian MMSE error estimator into components representing the error contributed by each class. Given the independence of $c$, $\theta_0$, and $\theta_1$ prior to sampling, they maintain their independence given the data. To see this, assume that the sample has $n$ points, with $n_0$ and $n_1$ coming from classes 0 and 1, respectively, denote the corresponding class samples by $S_{n_0}$ and $S_{n_1}$, and let $f(c, \theta_0, \theta_1, S_n)$ denote the joint distribution of $c$, $\theta_0$, $\theta_1$, and $S_n$.
Then

p*(u) = f(c, u_0, u_1 | S_n) = f(c | S_n, u_0, u_1) f(u_0 | S_n, u_1) f(u_1 | S_n).   (6.33)
If we assume that, given n_0, c is independent of S_n and the distribution parameters for each class, then

f(c | S_n, u_0, u_1) = f(c | n_0, S_n, u_0, u_1) = f(c | n_0).   (6.34)
If we also assume that, given n_0, S_{n_0} and the distribution parameters for class 0 are independent of S_{n_1} and the distribution parameters for class 1, then

f(u_0 | S_n, u_1) = f(u_0 | n_0, S_{n_0}, S_{n_1}, u_1)
                  = f(u_0, S_{n_0} | n_0, S_{n_1}, u_1) / f(S_{n_0} | n_0, S_{n_1}, u_1)
                  = f(u_0, S_{n_0} | n_0) / f(S_{n_0} | n_0)
                  = f(u_0 | n_0, S_{n_0})
                  = f(u_0 | S_{n_0}),   (6.35)

where in the last line we assume that n_0 is given when S_{n_0} is given. Given n_1, analogous statements apply, and

f(u_1 | S_n) = f(u_1 | n_1, S_{n_0}, S_{n_1}) = f(u_1 | S_{n_1}).   (6.36)
Combining these results,

p*(u) = f(c | n_0) f(u_0 | S_{n_0}) f(u_1 | S_{n_1}) = p*(c) p*(u_0) p*(u_1),   (6.37)
where p*(u_0), p*(u_1), and p*(c) are the marginal posterior densities for the parameters u_0, u_1, and c, respectively. Focusing on c, since n_0 ~ binomial(n, c) given c,

p*(c) = p(c | n_0) ∝ p(c) f(n_0 | c) ∝ p(c) c^{n_0} (1 - c)^{n_1}.   (6.38)
If p(c) is beta(a, b) distributed, then p*(c) is still a beta distribution,

p*(c) = c^{n_0+a-1} (1 - c)^{n_1+b-1} / B(n_0 + a, n_1 + b),   (6.39)

where B is the beta function and

E_{p*}[c] = (n_0 + a) / (n + a + b).   (6.40)
In the case of uniform priors, a = 1, b = 1, and

p*(c) = ((n + 1)! / (n_0! n_1!)) c^{n_0} (1 - c)^{n_1},   (6.41)

E_{p*}[c] = (n_0 + 1) / (n + 2).   (6.42)

Finally, if c is known, then E_{p*}[c] = c.
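The beta-posterior update of Eqs. 6.39-6.42 is simple enough to sketch in code. The following minimal Python sketch (the function name is ours, not from the text) computes E_{p*}[c]:

```python
# Posterior mean of the class-0 prior probability c under a beta(a, b) prior,
# per Eq. 6.40; a = b = 1 recovers the uniform-prior case of Eq. 6.42.
def posterior_mean_c(n0, n1, a=1.0, b=1.0):
    return (n0 + a) / (n0 + n1 + a + b)
```

For example, with a uniform prior and n_0 = 3 points from class 0 out of n = 10, the posterior mean is (3 + 1)/(10 + 2) = 1/3.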
When finding the posterior probabilities for the parameters, we need only consider the sample points from the corresponding class. Hence, applying Bayes' rule,

p*(u_y) = f(u_y | S_{n_y}) ∝ p(u_y) f(S_{n_y} | u_y) = p(u_y) ∏_{i: y_i = y} f_{u_y}(x_i | y),   (6.43)

where the constant of proportionality can be found by normalizing the integral of p*(u_y) to 1. The term f(S_{n_y} | u_y) is called the likelihood function. Although we call p(u_y), y = 0, 1, the "prior probabilities," they are not required to be valid density functions. A prior is called "improper" if the integral of p(u_y) is infinite, i.e., if p(u_y) induces a σ-finite measure but not a finite probability measure. When improper priors are used, Bayes' rule does not apply. Hence, assuming that the posterior is integrable, we take Eq. 6.43 as the definition of the posterior distribution, normalizing it so that its integral equals 1. Owing to the posterior independence between c, u_0, and u_1, if we decompose ε_u[c] according to Eq. 6.13, then, since ε^y_u[c] is a function of u_y only, the Bayesian MMSE error estimator can be expressed as

ε̂_U[c; S_n] = E_{p*}[c ε^0_u[c] + (1 - c) ε^1_u[c]]
             = E_{p*}[c] E_{p*}[ε^0_u[c]] + (1 - E_{p*}[c]) E_{p*}[ε^1_u[c]],   (6.44)
where

E_{p*}[ε^y_u[c]] = ∫_{U_y} ε^y_{u_y}[c] p*(u_y) du_y   (6.45)

is the posterior expectation of the error contributed by class y. Letting

ε̂^y_U[c; S_n] = E_{p*}[ε^y_u[c]],   (6.46)

Eq. 6.44 takes the form

ε̂_U[c; S_n] = E_{p*}[c] ε̂^0_U[c; S_n] + (1 - E_{p*}[c]) ε̂^1_U[c; S_n].   (6.47)
6.2.2 Effective class-conditional densities

Direct evaluation of the Bayesian MMSE error estimator can be tedious owing to the need to evaluate a challenging integral. Taking an effective approach can simplify matters. For y = 0, 1, the effective class-conditional density is defined by

f_U(x|y) = ∫_{U_y} f_{u_y}(x|y) p*(u_y) du_y.   (6.48)
Theorem 6.1 (Dalton and Dougherty, 2013a). Let c(x) = 0 if x ∈ R_0 and c(x) = 1 if x ∈ R_1, where R_0 and R_1 are measurable sets partitioning R^d. Given a random sample S_n, the Bayesian MMSE error estimator is given by

ε̂_U[c; S_n] = E_{p*}[c] ∫_{R_1} f_U(x|0) dx + (1 - E_{p*}[c]) ∫_{R_0} f_U(x|1) dx
             = ∫_{R^d} (E_{p*}[c] f_U(x|0) I_{x∈R_1} + (1 - E_{p*}[c]) f_U(x|1) I_{x∈R_0}) dx.   (6.49)
Proof. For a fixed distribution u_y and classifier c, the true error contributed by class y ∈ {0, 1} may be written as

ε^y_u[c; S_n] = ∫_{R_{1-y}} f_{u_y}(x|y) dx.   (6.50)
Averaging over the posterior yields

E_{p*}[ε^y_u[c; S_n]] = ∫_{U_y} ε^y_{u_y}[c; S_n] p*(u_y) du_y
                      = ∫_{U_y} ∫_{R_{1-y}} f_{u_y}(x|y) dx p*(u_y) du_y
                      = ∫_{R_{1-y}} ∫_{U_y} f_{u_y}(x|y) p*(u_y) du_y dx
                      = ∫_{R_{1-y}} f_U(x|y) dx,   (6.51)
where switching the order of integration in the third line is justified by Fubini's theorem. Finally,

ε̂_U[c; S_n] = E_{p*}[c] E_{p*}[ε^0_u[c; S_n]] + (1 - E_{p*}[c]) E_{p*}[ε^1_u[c; S_n]]
             = E_{p*}[c] ∫_{R_1} f_U(x|0) dx + (1 - E_{p*}[c]) ∫_{R_0} f_U(x|1) dx.   (6.52) ▪
In the proof, Eq. 6.52 was obtained by applying Eq. 6.51 to the y = 0 and y = 1 terms individually, so the theorem can be expressed as

ε̂^y_U[c; S_n] = E_{p*}[ε^y_u[c; S_n]] = ∫_{R^d} f_U(x|y) I_{x∈R_{1-y}} dx   (6.53)

for y ∈ {0, 1}.
An OBC can be found by brute force using closed-form solutions for the Bayesian MMSE error estimator when available; however, if C is the set of all classifiers (with measurable decision regions), then the next theorem shows that an OBC can be found analogously to a Bayes classifier for a fixed distribution: it is found pointwise by choosing the class with the higher density at each point, where, instead of the class-conditional density, one uses the effective class-conditional density. The density at other points in the space does not affect the classifier decision. The next theorem provides an OBC over all possible classifiers. Henceforth, we will only consider such OBCs.

Theorem 6.2 (Dalton and Dougherty, 2013a). An optimal Bayesian classifier over the set of all classifiers with measurable decision regions is given pointwise by

c_OBC(x) = 0 if E_{p*}[c] f_U(x|0) ≥ (1 - E_{p*}[c]) f_U(x|1), and c_OBC(x) = 1 otherwise.   (6.54)

Proof. For any classifier c with measurable decision regions R_0 and R_1, ε̂_U[c; S_n] is given by Eq. 6.49. The integral is minimized by minimizing the integrand pointwise, as does the classifier in the theorem. This classifier has measurable decision regions. ▪

In the case of the OBC, we denote the Bayesian MMSE error estimate by ε̂_OBC, this being shorthand for ε̂_U[c_OBC; S_n], with S_n being implicit. Note that the five steps of the IBR operator synthesis schema given in Section 4.1 have been implemented:

1. There is an uncertainty class of feature-label distributions.
2. A prior distribution is placed on the uncertainty class, and a posterior distribution is derived via Bayes' rule.
3. IBR and OBC optimization involve minimizing the expected error.
4. Effective class-conditional densities are derived.
5. An OBC (or IBR classifier) is given in terms of the effective densities via Theorem 6.2.
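Theorem 6.2 reduces OBC design to a pointwise comparison. The following is a minimal sketch (names ours, not from the text), assuming the effective class-conditional densities are available as functions:

```python
# Pointwise OBC rule of Eq. 6.54: label 0 wherever the posterior-weighted
# effective density of class 0 is at least that of class 1.
def obc_label(x, f0, f1, Ec):
    return 0 if Ec * f0(x) >= (1.0 - Ec) * f1(x) else 1

# Purely illustrative toy effective densities on [0, 1]:
f0 = lambda x: 1.0        # uniform
f1 = lambda x: 2.0 * x    # triangular
```

With Ec = 0.5, this toy example labels points below x = 0.5 as class 0 and points above as class 1, exactly where the weighted densities cross.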
6.3 Classification Rules

In general, given a random sample S_n = {(X_1, Y_1), ..., (X_n, Y_n)}, a mapping is applied to the sample to produce a classifier. Formally, a classification rule is a mapping C_n : S_n → c_n such that, given a sample S_n, a designed classifier c_n = C_n(S_n) is obtained according to the rule C_n. Strictly speaking, a classification rule is a sequence of such mappings depending on n. Optimal Bayesian classification constitutes a classification rule by which the sample yields an OBC via minimization of the Bayesian MMSE error estimator, the sample being used by way of the posterior distribution. It is an unusual classification
rule in that the vast majority of classification rules do not operate via minimization of a cost function based on prior knowledge. We prefer classification rules for which the error, ε_n = ε[c_n], of the designed classifier is close to the Bayes error. The design cost, Δ_n = ε_n - ε_bay, quantifies the degree to which a designed classifier fails to achieve this goal, with ε_n and Δ_n being sample-dependent random variables. The expected design cost is E[Δ_n], the expectation being relative to all possible samples. The expected error of c_n is decomposed via E[ε_n] = ε_bay + E[Δ_n]. E[Δ_n] measures the performance of the classification rule relative to the feature-label distribution. The design cost can be reduced by constraining the classifier choice to a collection C. This leads to an optimal constrained classifier c_C ∈ C having error ε_C. Since optimization in C is over a subclass of classifiers, the error ε_C of c_C will typically exceed the Bayes error, unless a Bayes classifier happens to lie in C. This cost of constraint is Δ_C = ε_C - ε_bay. A classification rule yields a classifier c_{n,C} ∈ C, with error ε_{n,C}, and ε_{n,C} ≥ ε_C ≥ ε_bay. The design error for constrained classification is Δ_{n,C} = ε_{n,C} - ε_C. For small samples, this can be substantially less than Δ_n, depending on C and the classification rule. The error of a designed constrained classifier is decomposed as

ε_{n,C} = ε_bay + Δ_C + Δ_{n,C}.   (6.55)

Therefore,

E[ε_{n,C}] = ε_bay + Δ_C + E[Δ_{n,C}].   (6.56)
The constraint is beneficial if and only if Δ_C < E[Δ_n] - E[Δ_{n,C}]. The dilemma is as follows: a strong constraint reduces E[Δ_{n,C}] at the cost of increasing Δ_C. A classical method of constructing classification rules involves discriminants. Given a sample S_n, a sample-based discriminant is a function W(S_n, x), where x ∈ R^d, and it defines a classifier via the classification rule

c_n(S_n)(x) = 1 if W(S_n, x) ≤ k, and c_n(S_n)(x) = 0 otherwise,   (6.57)

where k is a threshold parameter. A plug-in discriminant is a discriminant obtained from the population discriminant of Eq. 6.19 by plugging in sample estimates, usually maximum-likelihood estimates. Before we discuss optimal Bayesian classification for particular feature-label distributions, we will first consider standard non-optimal classification rules for these distributions.

6.3.1 Discrete classification

If the range R_x of X is finite, then there is a bijection between R_x and a finite set of integers, so there is no loss of generality in assuming a single feature
X taking values in the set {1, ..., b}. The properties of this discrete classification problem are completely determined by the class prior probability c_0 and the class-conditional probability mass functions p_i = P(X = i | Y = 0) and q_i = P(X = i | Y = 1), for i = 1, ..., b. Since p_b = 1 - ∑_{i=1}^{b-1} p_i and q_b = 1 - ∑_{i=1}^{b-1} q_i, the classification problem is determined by the (2b - 1)-dimensional vector (c_0, p_1, ..., p_{b-1}, q_1, ..., q_{b-1}) ∈ R^{2b-1}. For a given probability model, the error of a discrete classifier c : {1, ..., b} → {0, 1} is given by

ε[c] = ∑_{i=1}^b P(X = i, Y = 1 - c(i))
     = ∑_{i=1}^b P(X = i | Y = 1 - c(i)) P(Y = 1 - c(i))   (6.58)
     = ∑_{i=1}^b [p_i c_0 I_{c(i)=1} + q_i c_1 I_{c(i)=0}].
Hence, a Bayes classifier is given by

c_bay(i) = 1 if p_i c_0 - q_i c_1 ≤ 0, and c_bay(i) = 0 otherwise,   (6.59)

for i = 1, ..., b, with Bayes error

ε_bay = ∑_{i=1}^b min{c_0 p_i, c_1 q_i}.   (6.60)
Given a sample S_n = {(X_1, Y_1), ..., (X_n, Y_n)}, the bin counts U^y_i, y = 0, 1, are defined as the observed number of points with X = i for class y, for i = 1, ..., b. The maximum-likelihood estimators of the distributional parameters are ĉ_0 = n_0/n, ĉ_1 = n_1/n, and p̂_i = U^0_i/n_0, q̂_i = U^1_i/n_1 for i = 1, ..., b. Plugging these into c_bay(i) yields the discrete histogram classification rule (also known as multinomial discrimination):

C_n(S_n)(i) = I_{U^0_i < U^1_i} = 1 if U^0_i < U^1_i, and 0 otherwise,   (6.61)

for i = 1, ..., b.
The classification error of the discrete histogram rule is

ε_n = ∑_{i=1}^b P(X = i, Y = 1 - c_n(i))
    = ∑_{i=1}^b (c_0 p_i I_{U^1_i > U^0_i} + c_1 q_i I_{U^0_i ≥ U^1_i}).   (6.62)

The expected error is

E[ε_n] = ∑_{i=1}^b [c_0 p_i P(U^1_i > U^0_i) + c_1 q_i P(U^0_i ≥ U^1_i)].   (6.63)
The probabilities in the previous equation can be easily computed by using the fact that the vector (U^0_1, ..., U^0_b, U^1_1, ..., U^1_b) of bin counts is distributed multinomially with parameters (n, c_0 p_1, ..., c_0 p_b, c_1 q_1, ..., c_1 q_b):

P(U^0_1 = u_1, ..., U^0_b = u_b, U^1_1 = v_1, ..., U^1_b = v_b)
  = (n! / (u_1! ··· u_b! v_1! ··· v_b!)) c_0^{∑ u_i} c_1^{∑ v_i} ∏_{i=1}^b p_i^{u_i} q_i^{v_i}.   (6.64)

The marginal distribution of (U^0_i, U^1_i) is multinomial with parameters (n, c_0 p_i, c_1 q_i, 1 - c_0 p_i - c_1 q_i). Hence,

P(U^1_i > U^0_i) = ∑_{0 ≤ k+l ≤ n, k < l} P(U^0_i = k, U^1_i = l)
                 = ∑_{0 ≤ k+l ≤ n, k < l} (n! / (k! l! (n - k - l)!)) (c_0 p_i)^k (c_1 q_i)^l (1 - c_0 p_i - c_1 q_i)^{n-k-l},   (6.65)

with

P(U^0_i ≥ U^1_i) = 1 - P(U^1_i > U^0_i).   (6.66)
6.3.2 Quadratic discriminant analysis

For the population-based QDA discriminant in Eq. 6.22, if part or all of the population parameters are not known, then sample estimators can be plugged in to yield the sample-based QDA discriminant:

W_Q(S_n, x) = -(1/2)(x - X̄_0)^T Σ̂_0^{-1} (x - X̄_0) + (1/2)(x - X̄_1)^T Σ̂_1^{-1} (x - X̄_1) + (1/2) log(|Σ̂_1| / |Σ̂_0|),   (6.67)
where

X̄_y = (1/n_y) ∑_{i=1}^n X_i I_{Y_i=y}   (6.68)

and

Σ̂_y = (1/(n_y - 1)) ∑_{i=1}^n (X_i - X̄_y)(X_i - X̄_y)^T I_{Y_i=y}   (6.69)

are the sample mean and sample covariance matrix, respectively, for population Π_y, y = 0, 1. For the population-based LDA discriminant in Eq. 6.23, replacing the population parameters by sample estimators yields the sample-based LDA discriminant:

W_L(S_n, x) = (x - (X̄_0 + X̄_1)/2)^T Σ̂^{-1} (X̄_0 - X̄_1),   (6.70)

where X̄_0 and X̄_1 are the sample means and

Σ̂ = (1/(n - 2)) ∑_{i=1}^n [(X_i - X̄_0)(X_i - X̄_0)^T I_{Y_i=0} + (X_i - X̄_1)(X_i - X̄_1)^T I_{Y_i=1}]   (6.71)

is the pooled sample covariance matrix. The QDA and LDA classification rules are based on a Gaussian assumption; nevertheless, in practice they can perform well as long as the underlying populations are approximately unimodal, and one either knows the relevant covariance matrices or can obtain good estimates of them. Even when the covariance matrices are not identical, LDA can outperform QDA for small sample sizes owing to the difficulty of estimating the individual covariance matrices. Note that there is no optimization. While population-based discriminants yield Bayes classifiers, the efficacy of plugging in maximum-likelihood estimates is highly dependent on the discriminant and the feature-label distribution.
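As a concrete illustration of Eqs. 6.68-6.71, the sample-based LDA discriminant can be computed in a few lines of numpy (a hedged sketch; the function name is ours):

```python
import numpy as np

# Sample-based LDA discriminant W_L(S_n, x) of Eq. 6.70 with the pooled
# sample covariance of Eq. 6.71. Labels y_i are in {0, 1}.
def lda_discriminant(X, y, x):
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled covariance: sum of within-class scatter matrices over n - 2.
    S = ((X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)) / (len(X) - 2)
    return (x - 0.5 * (m0 + m1)) @ np.linalg.inv(S) @ (m0 - m1)
```

Per Eq. 6.57, a point x would then be assigned to class 1 when W_L(S_n, x) ≤ k (typically k = 0). By construction the discriminant is positive near X̄_0 and negative near X̄_1.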
6.4 OBC in the Discrete and Gaussian Models Having discussed multinomial discrimination and QDA in the discrete and Gaussian models, we now discuss OBCs for these models and then give examples comparing the standard classifiers to the corresponding OBCs. 6.4.1 Discrete model For discrete classification, we consider an arbitrary number of bins with beta class priors and define the parameters for each class to contain all but one bin
probability, i.e., u_0 = (p_1, p_2, ..., p_{b-1}) and u_1 = (q_1, q_2, ..., q_{b-1}). Each parameter space is defined as the set of all valid bin probabilities. For example, (p_1, p_2, ..., p_{b-1}) ∈ U_0 if and only if 0 ≤ p_i ≤ 1 for i = 1, ..., b - 1 and ∑_{i=1}^{b-1} p_i ≤ 1. Given u_0, the last bin probability is defined by p_b = 1 - ∑_{i=1}^{b-1} p_i. For y = 0, 1, we employ the Dirichlet priors

p(u_0) = (1 / B(α^0_1, α^0_2, ..., α^0_b)) ∏_{i=1}^b p_i^{α^0_i - 1},   (6.72)

p(u_1) = (1 / B(α^1_1, α^1_2, ..., α^1_b)) ∏_{i=1}^b q_i^{α^1_i - 1},   (6.73)

where α^y_i > 0 and B is the multivariate beta function,

B(α_1, α_2, ..., α_b) = ∏_{i=1}^b Γ(α_i) / Γ(∑_{i=1}^b α_i).   (6.74)

These are conjugate priors, taking the same form as the posteriors. If α^y_i = 1 for all i and y, we obtain uniform priors. Increasing a specific α^y_i has the effect of biasing the corresponding bin with α^y_i samples from the corresponding class before observing the data. The means of p_1, p_2, ..., p_b are given by

E[p_j] = α^0_j / ∑_{i=1}^b α^0_i.   (6.75)

Their second-order moments are given by

E[p_j^2] = α^0_j (α^0_j + 1) / [(∑_{k=1}^b α^0_k)(1 + ∑_{k=1}^b α^0_k)],   (6.76)

E[p_i p_j] = α^0_i α^0_j / [(∑_{k=1}^b α^0_k)(1 + ∑_{k=1}^b α^0_k)]   (6.77)

for i ≠ j.
The posterior distributions are again Dirichlet:

p*(u_y) = (Γ(n_y + ∑_{i=1}^b α^y_i) / ∏_{i=1}^b Γ(U^y_i + α^y_i)) ∏_{i=1}^b p_i^{U^y_i + α^y_i - 1},   (6.78)

for y = 0, 1 (Dalton and Dougherty, 2011a). The effective class-conditional densities are given by
f_U(j|y) = (U^y_j + α^y_j) / (n_y + ∑_{i=1}^b α^y_i)   (6.79)

(Dalton and Dougherty, 2013a). From Theorem 6.1,

ε̂_U[c; S_n] = ∑_{j=1}^b [E_{p*}[c] ((U^0_j + α^0_j) / (n_0 + ∑_{i=1}^b α^0_i)) I_{c(j)=1}
             + (1 - E_{p*}[c]) ((U^1_j + α^1_j) / (n_1 + ∑_{i=1}^b α^1_i)) I_{c(j)=0}].   (6.80)

In particular, the first-order posterior moment of the true error for y ∈ {0, 1} is

ε̂^y_U[c; S_n] = E_{p*}[ε^y_u[c]] = ∑_{j=1}^b ((U^y_j + α^y_j) / (n_y + ∑_{i=1}^b α^y_i)) I_{c(j)=1-y}.   (6.81)
For uniform c and uniform priors for the bin probabilities (α^y_i = 1 for all i and y),

ε̂_U[c; S_n] = ∑_{j=1}^b (((n_0 + 1)/(n + 2)) ((U^0_j + 1)/(n_0 + b)) I_{c(j)=1} + ((n_1 + 1)/(n + 2)) ((U^1_j + 1)/(n_1 + b)) I_{c(j)=0}).   (6.82)

The OBC is found by applying Theorem 6.2 to the effective class-conditional densities in Eq. 6.79:

c_OBC(j) = 1 if E_{p*}[c] (U^0_j + α^0_j)/(n_0 + ∑_{i=1}^b α^0_i) < (1 - E_{p*}[c]) (U^1_j + α^1_j)/(n_1 + ∑_{i=1}^b α^1_i), and c_OBC(j) = 0 otherwise   (6.83)

(Dalton and Dougherty, 2013a). From Theorem 6.1,

ε̂_OBC = ∑_{j=1}^b min{E_{p*}[c] (U^0_j + α^0_j)/(n_0 + ∑_{i=1}^b α^0_i), (1 - E_{p*}[c]) (U^1_j + α^1_j)/(n_1 + ∑_{i=1}^b α^1_i)}.   (6.84)

The OBC minimizes the BEE by minimizing each term in the sum of Eq. 6.80, assigning c_OBC(j) the class with the smaller constant scaling the indicator function. For uniform c and uniform priors for the bin probabilities (α^y_i = 1 for all i and y),

c_OBC(j) = 1 if ((n_0 + 1)/(n_0 + b)) (U^0_j + 1) < ((n_1 + 1)/(n_1 + b)) (U^1_j + 1), and c_OBC(j) = 0 otherwise,   (6.85)

and the expected error of the optimal classifier is

ε̂_OBC = ∑_{j=1}^b min{((n_0 + 1)/(n + 2)) ((U^0_j + 1)/(n_0 + b)), ((n_1 + 1)/(n + 2)) ((U^1_j + 1)/(n_1 + b))}.   (6.86)
Example 6.1. Consider a synthetic simulation in which c and the bin probabilities are generated randomly according to uniform prior distributions. For each fixed feature-label distribution, a binomial(n, c) experiment determines the number of sample points in class 0, and the bin for each point is drawn from the bin probabilities corresponding to its class, thus generating a random sample of size n. The histogram classifier and the OBC are designed from the sample, and the true error of each classifier is calculated. This is repeated 100,000 times to obtain the average true error for each classification rule, presented in Fig. 6.1 for two, four, and eight bins. The average performance of the OBC is superior to that of the discrete histogram rule. Note, however, that OBCs are not guaranteed to be optimal for a specific distribution (the optimal classifier is the Bayes classifier), but are optimal when averaged over all distributions relative to the posterior distribution. To illustrate the effect of the prior relative to a specific distribution, suppose that the true distribution is discrete with c_0 = 0.5:

p_1 = p_2 = p_3 = p_4 = 3/16,  p_5 = p_6 = p_7 = p_8 = 1/16,
q_1 = q_2 = q_3 = q_4 = 1/16,  q_5 = q_6 = q_7 = q_8 = 3/16.

Consider five Dirichlet priors π_1, π_2, ..., π_5, each with known c = 0.5:

α^{j,0}_1 = α^{j,0}_2 = α^{j,0}_3 = α^{j,0}_4 = a^{j,0},  α^{j,0}_5 = α^{j,0}_6 = α^{j,0}_7 = α^{j,0}_8 = b^{j,0},
α^{j,1}_1 = α^{j,1}_2 = α^{j,1}_3 = α^{j,1}_4 = a^{j,1},  α^{j,1}_5 = α^{j,1}_6 = α^{j,1}_7 = α^{j,1}_8 = b^{j,1},

for j = 1, 2, ..., 5, where a^{j,0} = 1, 1, 1, 2, 4 for j = 1, 2, ..., 5, respectively; b^{j,0} = 4, 2, 1, 1, 1 for j = 1, 2, ..., 5, respectively; a^{j,1} = 4, 2, 1, 1, 1 for
Figure 6.1 Average true errors on discrete distributions from known priors with uniform c and bin probabilities versus sample size. (a) b = 2, (b) b = 4, (c) b = 8. [Reprinted from (Dalton and Dougherty, 2013a).]
j = 1, 2, ..., 5, respectively; and b^{j,1} = 1, 1, 1, 2, 4 for j = 1, 2, ..., 5, respectively. For n = 5 through n = 30, we generate 100,000 samples of size n. For each of these, we design a histogram classifier and five OBCs corresponding to the five priors. Classifier errors are obtained using Eq. 6.58. Figure 6.2 shows the average errors, with the Bayes error for the true distribution marked by small circles. Whereas the OBC from the uniform prior (prior 3) performs slightly better than the histogram rule, putting more prior mass in the vicinity of the true distribution (priors 4 and 5) gives greatly improved performance. Indeed, the OBC from prior 5 has an average true error extremely close to the Bayes error. The risk in leaving uniformity is demonstrated by priors 1 and 2, whose masses are concentrated away from the true distribution. Although the OBC is guaranteed to perform optimally on average across the posterior distribution, it can perform poorly if the true distribution lies away from the mass of the prior distribution, and very poorly for small samples. This demonstrates the need for good prior construction if one is going to move away from uniformity. Finally, let us consider the performance of the Bayesian MMSE error estimator relative to true distributions of varying Bayes errors in comparison with leave-one-out cross-validation and resubstitution when there are b = 16 bins. While the BEE is optimal with respect to the MSE relative to the prior distribution and sampling distribution, it need not be optimal for specific distributions. In this example, the true distributions have c_0 = 0.5 and are
Figure 6.2 Average true errors for the histogram classifier and OBCs based on different prior distributions.
constructed using the Zipf power law model. In this model, there is a free parameter β ≥ 0, and the model is defined by p_i ∝ i^{-β} and q_i = p_{b-i+1} for i = 1, 2, ..., b. The parameter β can be used to obtain a specific Bayes error, with larger β corresponding to smaller Bayes error. We consider two prior distributions, both having known c = 0.5. Prior 1 is uniform, with α^y_i = 1 for i = 1, 2, ..., 16 and y = 0, 1. Prior 2 is nonuniform, with

α^0_1 = α^0_2 = ··· = α^0_8 = 8,  α^0_9 = α^0_{10} = ··· = α^0_{16} = 1,
α^1_1 = α^1_2 = ··· = α^1_8 = 1,  α^1_9 = α^1_{10} = ··· = α^1_{16} = 8.

The simulation proceeds by first selecting a dense sequence of Bayes errors ε_bay,1 < ε_bay,2 < ··· < ε_bay,m between 0 and 0.5, and finding the corresponding values β_1, β_2, ..., β_m. Focusing first on β_1, we randomly generate a sample of size n = 20 from the distribution corresponding to β_1, design a histogram classifier from the sample, and compute the leave-one-out, resubstitution, and two Bayesian MMSE error estimates. The true classifier error is obtained from Eq. 6.58, and the squared difference (ε_n - ε̂_n)² between the true error and each of the error estimates is computed. The process is repeated 100,000 times; the (very accurately estimated) MSE for each estimator is the average squared difference over the 100,000 runs, and the (very accurately estimated) root-mean-square (RMS) error is the square root of this estimated MSE. This procedure is then repeated for β_2, β_3, ..., β_m. Figure 6.3 shows the RMS plotted against the Bayes error. The performance of the Bayesian error estimators is superior to resubstitution and leave-one-out (Loo) for most distributions. BEE performance tends to be especially favorable at moderate to high Bayes errors, whereas the other error estimators favor a small Bayes error.
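The Zipf construction just described is easy to sketch in code (function names ours), together with the Bayes error of Eq. 6.60:

```python
# Zipf power-law model: p_i proportional to i^(-beta), q_i = p_{b-i+1}.
def zipf_model(b, beta):
    w = [i ** -beta for i in range(1, b + 1)]
    z = sum(w)
    p = [wi / z for wi in w]
    return p, p[::-1]          # q is p reversed

# Bayes error of Eq. 6.60 for the discrete model.
def bayes_error(c0, p, q):
    return sum(min(c0 * pi, (1 - c0) * qi) for pi, qi in zip(p, q))
```

Larger β skews p toward the low bins and q toward the high bins, shrinking their overlap and hence the Bayes error; β = 0 gives identical uniform class-conditional distributions and Bayes error 0.5.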
▪

Figure 6.3 RMS deviation from the true error as a function of the Bayes error for several error estimators when using the histogram rule on the Zipf model.

6.4.2 Gaussian model

For y ∈ {0, 1}, assume an R^d Gaussian distribution with parameters u_y = (μ_y, L_y), where μ_y is the mean of the class-conditional distribution, and L_y is a collection of parameters determining the covariance matrix Σ_y of the class. We distinguish between L_y and Σ_y to enable us to impose a structure on the covariance. In (Dalton and Dougherty, 2011b, 2013a), three types of models have been considered: a fixed covariance (Σ_y = L_y is known perfectly), a scaled-identity covariance having uncorrelated features with equal variances (L_y = σ²_y is a scalar and Σ_y = σ²_y I_d, where I_d is the d × d identity matrix), and a general (unconstrained, but valid) covariance matrix, Σ_y = L_y. The parameter space of μ_y is R^d. The parameter space of L_y, denoted by L_y, must permit only valid covariance matrices. We write Σ_y without explicitly showing its dependence on L_y; that is, we write Σ_y instead of Σ_y(L_y). A multivariate Gaussian distribution with mean μ and covariance Σ is denoted by f_{μ,Σ}(x), so that the parameterized class-conditional distributions are f_{u_y}(x|y) = f_{μ_y,Σ_y}(x). In the independent covariance model, we assume that c, u_0 = (μ_0, L_0), and u_1 = (μ_1, L_1) are all independent prior to observing the data, so that p(u) = p(c)p(u_0)p(u_1). Assuming that p(c) and p*(c) have been established, we require priors p(u_y) and posteriors p*(u_y) for both classes. We begin by specifying conjugate priors for u_0 and u_1. Let ν be a nonnegative real number, m a length-d real vector, κ a real number, and S a nonnegative definite d × d matrix. Define

f_m(μ; ν, m, L) = |Σ|^{-1/2} exp[-(ν/2)(μ - m)^T Σ^{-1} (μ - m)],   (6.87)

f_c(L; κ, S) = |Σ|^{-(κ+d+1)/2} exp[-(1/2) tr(S Σ^{-1})],   (6.88)

where Σ is a function of L. If ν > 0, then f_m is a (scaled) Gaussian distribution with mean m and covariance Σ/ν. If Σ = L, κ > d - 1, and S is symmetric positive definite, then f_c is a (scaled) inverse-Wishart(κ, S) distribution. To allow for improper priors, we do not necessarily require f_m and f_c to be normalizable.
For y = 0, 1, assume that Σ_y is invertible with probability 1 and that the priors are of the form

p(u_y) = p(μ_y | L_y) p(L_y),   (6.89)

where

p(μ_y | L_y) ∝ f_m(μ_y; ν_y, m_y, L_y),   (6.90)

p(L_y) ∝ f_c(L_y; κ_y, S_y).   (6.91)
If ν_y > 0, then the prior p(μ_y | Σ_y) for the mean conditioned on the covariance is proper and Gaussian with mean m_y and covariance Σ_y/ν_y. The hyperparameter m_y can be viewed as a target for the mean, where the larger ν_y is, the more localized the prior is about m_y. In the general covariance model where Σ_y = L_y, p(Σ_y) is proper if κ_y > d - 1 and S_y is symmetric positive definite. If, in addition, ν_y > 0, then p(u_y) is a normal-inverse-Wishart distribution, which is the conjugate prior for the mean and covariance when sampling from normal distributions (DeGroot, 1970; Raiffa and Schlaifer, 1961). Then

E_p[Σ_y] = S_y / (κ_y - d - 1),   (6.92)

so that S_y can be viewed as a target for the shape of the covariance, where the actual expected covariance is scaled. If S_y is scaled appropriately, then the larger κ_y is, the more certainty we have about Σ_y. Increasing κ_y while fixing the other hyperparameters defines a prior favoring smaller |Σ_y|. The model allows for improper priors. Some useful examples of improper priors occur when S_y = 0 and ν_y = 0. In this case,

p(u_y) ∝ |Σ_y|^{-(κ+d+2)/2}.   (6.93)

If κ + d + 2 = 0, we obtain the flat priors used by Laplace (Laplace, 1812). Alternatively, if L_y = Σ_y, then with κ = 0 we obtain the Jeffreys rule prior, which is designed to be invariant to differentiable one-to-one transformations of the parameters (Jeffreys, 1946, 1961), and with κ = -1 we obtain the Jeffreys independence prior, which uses the same principle as the Jeffreys rule prior but also treats the mean and covariance matrix as independent parameters.
(6.94)
Optimal Classification
219
ny ¼ ny þ ny , my ¼
ny my þ ny mˆ y , ny þ ny
(6.95) (6.96)
ky ¼ ky þ ny ,
(6.97)
ny ny Sy ¼ Sy þ ðny 1ÞSˆ y þ ðmˆ my Þðmˆ y my ÞT , ny þ ny y
(6.98)
where μ̂_y and Σ̂_y are the sample mean and sample covariance, respectively, of the n_y points in class y. ▪

Similar results are found in (DeGroot, 1970). Note that the choice of L_y will affect the proportionality constant in p*(u_y). Rewriting the representation of m*_y in Eq. 6.96 as

m*_y = ν_y m_y / (ν_y + n_y) + μ̂_y / (ν_y/n_y + 1),   (6.99)

we see that m*_y - μ̂_y → 0 as n → ∞. The posterior can be expressed as

p*(u_y) = p*(μ_y | L_y) p*(L_y),   (6.100)

where

p*(μ_y | L_y) = f_{m*_y, Σ_y/ν*_y}(μ_y),   (6.101)

p*(L_y) ∝ |Σ_y|^{-(κ*_y+d+1)/2} exp[-(1/2) tr(S*_y Σ_y^{-1})].   (6.102)
Assuming at least one sample point, ν*_y > 0. Thus, p*(μ_y | L_y) is always valid. The validity of p*(L_y) depends on the definition of L_y. Improper priors are acceptable, but the posterior must always be a valid probability density: for the general covariance model, we require that ν*_y > 0, κ*_y > d - 1, and S*_y be symmetric positive definite. Since the effective class-conditional densities are found separately, a different covariance model may be used for each class. In the general covariance model, Σ_y = L_y, and the parameter space L_y contains all symmetric positive definite matrices. In this case, p*(Σ_y) has an inverse-Wishart distribution,

p*(Σ_y) = (|S*_y|^{κ*_y/2} / (2^{κ*_y d/2} Γ_d(κ*_y/2))) |Σ_y|^{-(κ*_y+d+1)/2} exp[-(1/2) tr(S*_y Σ_y^{-1})],   (6.103)

where Γ_d is the multivariate gamma function (Kanti et al., 1979). For a proper posterior, we require that ν*_y > 0, κ*_y > d - 1, and S*_y is positive definite.
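The hyperparameter update of Theorem 6.3 is only a few lines of numpy. The following is a hedged sketch (the function name is ours; the variable names mirror the text's ν, m, κ, S):

```python
import numpy as np

# Posterior hyperparameters (nu*, m*, kappa*, S*) of Eqs. 6.95-6.98 from the
# n_y class-y sample points in Xy (one row per point).
def niw_update(Xy, nu, m, kappa, S):
    ny = len(Xy)
    mu_hat = Xy.mean(axis=0)
    # Sample covariance (ddof=1); zero matrix if only one point.
    Sig_hat = np.cov(Xy, rowvar=False, ddof=1) if ny > 1 else np.zeros((Xy.shape[1], Xy.shape[1]))
    nu_s = nu + ny
    m_s = (nu * m + ny * mu_hat) / (nu + ny)
    kappa_s = kappa + ny
    d = mu_hat - m
    S_s = S + (ny - 1) * Sig_hat + (nu * ny / (nu + ny)) * np.outer(d, d)
    return nu_s, m_s, kappa_s, S_s
```

The update shows the shrinkage structure of Eq. 6.96 directly: m* is a convex combination of the prior target m and the sample mean, weighted by ν and n_y.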
Theorem 6.4 (Dalton and Dougherty, 2013a). For a general covariance matrix, assuming that ν*_y > 0, κ*_y > d - 1, and S*_y is symmetric positive definite, the effective class-conditional density is a multivariate Student's t-distribution,

f_U(x|y) = (Γ((k_y + d)/2) / ((k_y π)^{d/2} |C_y|^{1/2} Γ(k_y/2))) [1 + (1/k_y)(x - m*_y)^T C_y^{-1} (x - m*_y)]^{-(k_y+d)/2},   (6.104)

with location vector m*_y, scale matrix

C_y = ((ν*_y + 1) / ((κ*_y - d + 1) ν*_y)) S*_y,   (6.105)

and k_y = κ*_y - d + 1 degrees of freedom. ▪
As long as κ*_y > d, the density is proper with mean m*_y, and as long as κ*_y > d + 1, the covariance exists and equals k_y C_y/(k_y - 2). The discriminant of the optimal Bayesian classifier can be simplified to

D_OBC(x) = K [1 + (1/k_0)(x - m*_0)^T C_0^{-1} (x - m*_0)]^{k_0+d} - [1 + (1/k_1)(x - m*_1)^T C_1^{-1} (x - m*_1)]^{k_1+d},   (6.106)

with c_OBC(x) = 0 if D_OBC(x) ≤ 0 and c_OBC(x) = 1 otherwise, where

K = ((1 - E_{p*}[c]) / E_{p*}[c])^2 (k_0/k_1)^d (|C_0| / |C_1|) (Γ(k_0/2) Γ((k_1+d)/2) / (Γ((k_0+d)/2) Γ(k_1/2)))^2.   (6.107)

This classifier has a polynomial decision boundary if k_0 and k_1 are integers, which occurs when κ*_0 and κ*_1 are integers. In the special case where k = k_0 = k_1, the OBC can be simplified to

D_OBC(x) = (x - m*_0)^T (C_0/K_0)^{-1} (x - m*_0) - (x - m*_1)^T (C_1/K_1)^{-1} (x - m*_1) + k(K_0 - K_1),   (6.108)

where

K_0 = (|C_0| / (E_{p*}[c])^2)^{1/(k+d)},   (6.109)

K_1 = (|C_1| / (1 - E_{p*}[c])^2)^{1/(k+d)}.   (6.110)
Thus, the OBC is quadratic. It becomes linear if k = k_0 = k_1 and C = (1/K_0) C_0 = (1/K_1) C_1.

Example 6.2. An IBR classifier is essentially an OBC derived without data, directly from the prior distribution. If we constrain Eq. 6.27 to model-specific Bayes classifiers, then we obtain a model-constrained Bayesian robust classifier, which can be found via a maximally robust state analogously to an MCBR filter (Dougherty et al., 2005). Consider a Gaussian model with d = 2 features in the general covariance model and a prior distribution defined by known c = 0.5 and hyperparameters ν_0 = κ_0 = 20d, m_0 = (0, ..., 0), ν_1 = κ_1 = 2d, m_1 = (1, ..., 1), and S_y = (κ_y - d - 1) I_d. Since κ_0 ≠ κ_1, k_0 ≠ k_1, and the IBR classifier is polynomial but generally not quadratic. Based on our previous analysis, the IBR classifier is given by c_IBR(x) = 0 if D_IBR(x) ≤ 0 and c_IBR(x) = 1 if D_IBR(x) > 0, where D_IBR(x) is defined by Eq. 6.106 with m_0 in place of m*_0 and m_1 in place of m*_1, K is defined by Eq. 6.107 with c in place of E_{p*}[c], k_y = κ_y - d + 1, and

C_y = ((ν_y + 1) / (k_y ν_y)) S_y.

For any particular state, the Bayes classifier is quadratic. Using Monte Carlo methods, a maximally robust state and the corresponding MCBR classifier c_MCBR are found. We also consider a plug-in classifier c_PI using the expected value of each parameter, which is the Bayes classifier assuming that c = 0.5, μ_0 = m_0, μ_1 = m_1, and Σ_0 = Σ_1 = I_d. This plug-in (PI) classifier is linear. The average true errors are E_p[ε_u[c_PI]] = 0.2078, E_p[ε_u[c_MCBR]] = 0.2061, and E_p[ε_u[c_IBR]] = 0.2007. The IBR classifier is superior to the MCBR classifier. Figure 6.4 shows c_MCBR, c_IBR, and c_PI. Level curves for the class-conditional distributions in the maximally robust state corresponding to c_MCBR are shown in dark dashed lines, and level curves for the distributions corresponding to the expected parameters are shown in light-gray dashed lines. These have been found by setting the Mahalanobis distance to 1. Note that there may be more than one maximally robust state, so the one shown is only an example. c_IBR is the only classifier that is not quadratic, and it is quite distinct from c_MCBR. ▪

The Gaussian and discrete models, as constructed, possess conjugate priors, which facilitates closed-form representation of the posterior distributions. In practice, a prior is derived from existing knowledge, and we may not possess a closed-form representation of the posterior. In such cases, MCMC methods have been employed to obtain an estimate of the OBC (Knight et al., 2014, 2018).
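The discriminant of Eqs. 6.106-6.107 can be evaluated numerically. The following hedged numpy sketch (function name ours) uses log-gamma to compute the constant K stably; classify 0 when the returned value is at most 0:

```python
import numpy as np
from math import lgamma, exp

# D_OBC(x) of Eq. 6.106 with K of Eq. 6.107. m0, m1 are the posterior
# location vectors, C0, C1 the scale matrices, k0, k1 the degrees of
# freedom, Ec = E_p*[c], and d the feature dimension.
def obc_discriminant(x, m0, C0, k0, m1, C1, k1, Ec, d):
    log_gr = lgamma(k0 / 2) + lgamma((k1 + d) / 2) - lgamma((k0 + d) / 2) - lgamma(k1 / 2)
    K = ((1 - Ec) / Ec) ** 2 * (k0 / k1) ** d \
        * (np.linalg.det(C0) / np.linalg.det(C1)) * exp(2 * log_gr)
    q0 = (x - m0) @ np.linalg.inv(C0) @ (x - m0)
    q1 = (x - m1) @ np.linalg.inv(C1) @ (x - m1)
    return K * (1 + q0 / k0) ** (k0 + d) - (1 + q1 / k1) ** (k1 + d)
```

A quick sanity check: in the fully symmetric case (equal scale matrices, equal degrees of freedom, Ec = 0.5), K = 1 and the decision boundary passes through the midpoint of m0 and m1, where the discriminant vanishes.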
6.5 Consistency

A classification rule is consistent for a feature-label distribution of (X, Y) if ε_n → ε_bay in probability as n → ∞. It is universally consistent if convergence
222
Chapter 6
Figure 6.4 Classifiers for an independent arbitrary covariance Gaussian model with d ¼ 2 features and proper priors having k0 ≠ k1. The IBR classifier is polynomial with average true error 0.2007, whereas the MCBR classifier is quadratic with average true error 0.2061. [Reprinted from (Dalton and Dougherty, 2013a).]
holds for any distribution of (X, Y ). It is not difficult to show that a classification rule is consistent if and only if E[Dn] → 0 as n → `. A classification rule is strongly consistent for a feature-label distribution of (X, Y ) if εn → εbay with probability 1 as n → `, and universally strongly consistent if this holds for all feature-label distributions. For instance, by the law of large numbers, for multinomial discrimination, εn → εbay as n → ` (for fixed b), with probability 1, for any distribution; that is, the discrete histogram rule is universally strongly consistent. Thus, E[εn] → εbay. Our concern is with optimal Bayesian classification. Intuitively, if the posterior distribution is tightly concentrated around the actual value of the parameter vector (the true model), then one might expect the expected error of any classifier to be close to its error on the true model, so that minimizing the Bayesian MMSE error estimator over the collection of all classifiers would yield an OBC close to a Bayes classifier. In the limit, as n → `, if the posterior were to converge (in some appropriate manner) to a delta function at the true parameter value, then one might expect the OBC to coverge (in some manner) to a Bayes classifier. Hence, to study convergence of the OBC, we need to consider convergence of the posterior distribution. If ln and l are probability measures on U, then ln → l weak (in the weak topology on the space of all probability measures over U) if and only if R R f dln → f dl for all bounded continuous functions f on U. If du is a point mass at u ∈ U, then it can be shown that ln → du weak if and only if ln(U ) → 1 for every neighborhood U of u. Bayesian modeling parameterizes a family {Fu : u ∈ U} of probability distributions on Rd f0, 1g. For a fixed true parameter u, we denote the infinite
sampling distribution by F_u^∞, which is an infinite product measure on (ℝ^d × {0, 1})^∞. The posterior of u is weak∗ consistent at u ∈ U if the posterior probability of the parameter converges weak∗ to δ_u for almost all sequences in F_u^∞, i.e., if for all bounded continuous functions f on U,

P_{S^∞|u}( E_{u|S_n}[f(u)] → f(u) ) = 1,    (6.111)

where S^∞ denotes the space of infinite sample-point sequences. Equivalently, the posterior probability (given a sample) of any neighborhood U of the true parameter u converges to 1 almost surely (a.s.) with respect to the sampling distribution; i.e.,

P_{S^∞|u}( P_{u|S_n}(U) → 1 ) = 1.    (6.112)
The posterior is called weak∗ consistent if it is weak∗ consistent for every u ∈ U. In establishing that the posteriors of c, u_0, and u_1 are weak∗ consistent for both discrete and Gaussian models (in the usual topologies), we assume proper priors on c, u_0, and u_1. If there is only a finite number of possible outcomes and no neighborhood of the true parameter has prior probability zero, then the posteriors are weak∗ consistent (Freedman, 1963; Diaconis and Freedman, 1986). Thus, if the prior of the class probability c has a beta distribution, which has positive mass in every open interval in [0, 1], then the posterior is weak∗ consistent. Moreover, since sample points in the discrete model also have a finite number of possible outcomes, the posteriors of u_0 and u_1 are weak∗ consistent as n_0, n_1 → ∞, respectively.

Consider an arbitrary proper prior on a finite-dimensional parameter space. If the true feature-label distribution is included in the parameterized family of distributions and some regularity conditions hold, notably that the likelihood is a bounded continuous function of the parameter that is not under-identified (i.e., not flat for a range of values of the parameter), and that the true parameter is neither excluded by the prior as impossible (no neighborhood of it has prior probability zero) nor on the boundary of the parameter space, then the posterior distribution of the parameter approaches a normal distribution centered at the true mean with variance proportional to 1/n as n → ∞ (Gelman et al., 2004). These regularity conditions hold in the Gaussian model for y = 0, 1. Hence, the posterior of u_y is weak∗ consistent as n_y → ∞. Owing to the weak∗ consistency of the posteriors for c, u_0, and u_1 in the discrete and Gaussian models, for any bounded continuous function f on U, Eq. 6.111 holds for all u = (c, u_0, u_1) ∈ U.

A Borel measurable function f : U → ℝ is almost uniformly integrable relative to a sequence of probability measures λ_n on U (or {λ_n}_{n=0}^∞-a.u.
integrable) if for all ε > 0 there exist 0 < M_ε < ∞ and N_ε > 0 such that for all n > N_ε,

∫_{ {u ∈ U : |f(u)| ≥ M_ε} } |f(u)| dλ_n(u) < ε.    (6.113)
If λ_n converges weak∗ to δ_u, then ∫ f dλ_n → f(u) for any {λ_n}_{n=0}^∞-a.u. integrable f (Zapala, 2008). Consistency of optimal Bayesian classification in the discrete and general-covariance Gaussian models is a consequence of the following theorem. To emphasize that the posteriors may be viewed as sequences, they are denoted by p_n(c), p_{n_0}(u_0), and p_{n_1}(u_1). Similarly, the effective class-conditional densities are denoted by f_{U,n_y}(x|y) for y = 0, 1. Note that "or" in condition 4 means that either of the conditions can hold, along with the first three. We assume that 0 < c < 1.

Theorem 6.5 (Dalton and Dougherty, 2013b). Suppose that

1. E_{p_n}[c] → c, n_0 → ∞, and n_1 → ∞ almost surely over the infinite sampling distribution.
2. p_{n_0}(u_0) and p_{n_1}(u_1) are weak∗ consistent as n_0, n_1 → ∞.
3. The effective density f_{U,n_y}(x|y) is finite for any fixed x, y, and sample size n_y.
4. (i) For fixed x and y, there exists M > 0 such that f_{u_y}(x|y) < M for all u_y; or (ii) for fixed x and y, the class-conditional distributions in the Bayesian model are a.u. integrable over u_y relative to the sequence of posteriors (a.s.).

Then the optimal Bayesian classifier (over the space of all classifiers with measurable decision regions) constructed in Eq. 6.54 converges pointwise (a.s.) to a Bayes classifier.

Proof. First assume condition 4(i). Since we are interested in pointwise convergence, fix x. Since 0 < c < 1, n_0, n_1 → ∞ (a.s.). For y ∈ {0, 1}, by definition,

E_{u_y|S_{n_y}}[f_{u_y}(x|y)] = f_{U,n_y}(x|y).    (6.114)
Since the class-conditional densities f_{u_y}(x|y) are bounded across all u_y for fixed x and y, by Eq. 6.111, f_{U,n_y}(x|y) converges pointwise to the true class-conditional density f_{u_y}(x|y) (almost surely over the infinite class-y sampling distribution). Comparing the Bayes classifier,

c_bay(x) = 0 if c f_{u_0}(x|0) ≥ (1 − c) f_{u_1}(x|1), and c_bay(x) = 1 otherwise,

with the OBC given in Eq. 6.54 shows that the OBC converges pointwise (a.s.) to a Bayes classifier.
Now assume condition 4(ii). By the cited theorem from (Zapala, 2008),

lim_{n_y → ∞} E_{u_y|S_{n_y}}[f_{u_y}(x|y)] = f_{u_y}(x|y)    (6.115)

almost surely. Substituting this fact for Eq. 6.111, the proof is exactly the same as when assuming condition 4(i). ▪

Theorem 6.5 with condition 4(i) shows that optimal Bayesian classification in the discrete model with a beta prior on c is consistent; however, it does not apply to the general-covariance Gaussian model because at any point x there exists a class-conditional density that is arbitrarily large at x. Thus, even though there is weak∗ consistency of the posteriors, we may not apply Eq. 6.111. It is proven in (Dalton and Dougherty, 2013a) that condition 4(ii) holds in the Gaussian model, which turns out to be a fairly deep result.
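The strong consistency of the discrete histogram rule noted at the start of this section can be checked numerically. The following sketch builds the histogram classifier from n sample points and computes its exact true error, which approaches the Bayes error as n grows. The 4-bin model, the class probability, and all bin probabilities are illustrative assumptions, not values from the text.

```python
import random

random.seed(0)

# Toy multinomial discrimination: b bins, known class probability c.
b = 4
c = 0.5                       # P(Y = 0)
p0 = [0.4, 0.3, 0.2, 0.1]     # bin probabilities, class 0 (illustrative)
p1 = [0.1, 0.2, 0.3, 0.4]     # bin probabilities, class 1 (illustrative)

# Bayes error: in each bin the Bayes classifier keeps the smaller mass.
eps_bay = sum(min(c * p0[j], (1 - c) * p1[j]) for j in range(b))

def sample_bin(p):
    u, acc = random.random(), 0.0
    for j, pj in enumerate(p):
        acc += pj
        if u < acc:
            return j
    return len(p) - 1

def histogram_rule_error(n):
    # Draw n labeled points, build the discrete histogram classifier,
    # and return its exact true error under the model.
    counts0 = [0] * b
    counts1 = [0] * b
    for _ in range(n):
        if random.random() < c:
            counts0[sample_bin(p0)] += 1
        else:
            counts1[sample_bin(p1)] += 1
    err = 0.0
    for j in range(b):
        # Majority vote in bin j (ties go to class 0).
        if counts0[j] >= counts1[j]:
            err += (1 - c) * p1[j]   # class-1 mass misclassified
        else:
            err += c * p0[j]         # class-0 mass misclassified
    return err

for n in [10, 100, 10000]:
    print(n, round(histogram_rule_error(n), 4), "Bayes:", round(eps_bay, 4))
```

For small n the realized error fluctuates above the Bayes error; for large n the majority vote is correct in every bin with high probability and the error coincides with the Bayes error, as the law-of-large-numbers argument predicts.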
6.6 Optimal Sampling via Experimental Design Up to this point we have assumed random or separate sampling for classifier design. These are the standard assumptions in the vast majority of theory and application, their essential difference being that with separate sampling the class sample sizes are not random. In this section we will consider two other forms of nonrandom sampling for optimal Bayesian classification that are, in a sense, dual to each other. Both involve sequential experimental design. In the context of optimal Bayesian classification, the insight for the potential advantage of nonrandom sampling is straightforward. Imagine a situation in which there is separate sampling and the prior probabilities of the classes are known, and suppose that the prior for class 0 is a delta function (or very tightly concentrated) at the true parameter of the class-0 distribution, whereas the prior for class 1 has substantial variance about the true parameter. Since sample points from class 0 are virtually useless (or even detrimental), it would be wise to sample only class 1. More generally, we can use experimental design prior to selecting each sample point to determine which class to sample from. The dual-sampling idea is to determine a desired feature-vector for a sample point. The rationale here is that sampling feature vectors far removed from the decision boundary is not useful. Practically, one may think of a situation in which the features are attributes of a system and the labels are the results of a treatment to the system. For instance, for a tissue sample, the attributes might be gene expressions, the treatment a drug, and the label a binary result: failure or success. In both scenarios we apply MOCU-based experimental design, where the cost function is classifier error. As usual, optimal design reduces to choosing the action that minimizes the design cost, that is, the residual IBR cost. 
In the case of sampling, we proceed iteratively, beginning with the prior and an
IBR classifier, determining the optimal experiment to obtain a sample point, deriving the OBC from the posterior, treating this OBC as a new IBR classifier and the posterior as a new prior, and then repeating the procedure iteratively to generate a nonrandom sample and the final OBC. In the case of determining the class from which to sample, the procedure has been applied in (Broumand and Dougherty, 2015), but without specifically mentioning the role of MOCU.

Other methods for nonrandom sampling have been proposed that possess conceptual similarities, as well as differences, relative to the methods presented in this section. These include online learning approaches based on sequential measurements aimed at model improvement, such as the knowledge gradient, and active sampling (Cohn et al., 1994). As with the second design method discussed in this section, the aim of active sampling is to control the selection of potential unlabeled training points in the sample space to be labeled and used for further training. A generic active sampling algorithm is described in (Saar-Tsechansky and Provost, 2004), and a Bayesian approach is considered in (Golovin et al., 2010).

We first treat the case where the issue is to determine which class to sample from. Given the class, sampling is random. We use separate sampling with known class-0 prior probability equal to c and consider the multinomial OBC with Dirichlet prior possessing hyperparameter vector a. The priors for classes 0 and 1 are given in Eqs. 6.72 and 6.73, respectively. According to Eq. 5.13, the task is to minimize the residual IBR cost, which in this setting takes the form

R_y(U|h) = E_h[ E_{p(u|h)}[ C_u(c_IBR^{U|h}) ] ],    (6.116)

assuming that we select the data point from class y, where C_u is the classification error for state u; h denotes the outcome of the experiment, meaning that h selects the feature value of the sample from class 0, which for the multinomial distribution is the bin number; c_IBR^{U|h} = c_{OBC|h} is the IBR classifier for the posterior distribution p(u|h); and y is implicit in h (see Eq. 5.120). Evaluating the inner expectation yields

R_y(U|h) = E_h[ E_{p(u|h)}[ C_u(c_{OBC|h}) ] ]
         = E_h[ ε̂[c_{OBC|h}] ]
         = E_h[ Σ_{j=1}^b min{ c (U_j^0 + a_j^0) / (n_0 + Σ_{i=1}^b a_i^0), (1 − c) (U_j^1 + a_j^1) / (n_1 + Σ_{i=1}^b a_i^1) } ],    (6.117)

where the second equality follows because the inner expectation is the Bayesian MMSE error estimate of the OBC error, and the third equality applies Eq. 6.84. For selecting a single data point from class y = 0, this reduces to
R_0(U|h) = E_h[ Σ_{j=1}^b min{ c (I_{h=j} + a_j^0) / (1 + a_0^0), (1 − c) a_j^1 / a_0^1 } ]
         = E_{U_0}[ E_{h|u_0}[ Σ_{j=1}^b min{ c (I_{h=j} + a_j^0) / (1 + a_0^0), (1 − c) a_j^1 / a_0^1 } ] ]
         = E_{U_0}[ Σ_{i=1}^b p_i Σ_{j=1}^b min{ c (I_{j=i} + a_j^0) / (1 + a_0^0), (1 − c) a_j^1 / a_0^1 } ],    (6.118)
where the second and third equalities follow from E_h = E_{U_0} E_{h|u_0} and

E_{h|u_0}[g(h)] = Σ_{i=1}^b P(i|0) g(i),    (6.119)

respectively, a_0^y = Σ_{i=1}^b a_i^y, and E_{U_0} denotes expectation with respect to p(u_0) = p(p_1, p_2, …, p_b).

We next compute the residual IBR cost, assuming that we select the data point from bin i, where h denotes the outcome of the experiment in i, which is the class label of the sample from bin i. Evaluating Eq. 6.117 in the current scenario yields

R_i(U|h) = E_h[ Σ_{j=1}^b min{ c (I_{j=i,h=0} + a_j^0) / (I_{h=0} + a_0^0), (1 − c) (I_{j=i,h=1} + a_j^1) / (I_{h=1} + a_0^1) } ]
         = E_U[ E_{h|u}[ Σ_{j=1}^b min{ c (I_{j=i,h=0} + a_j^0) / (I_{h=0} + a_0^0), (1 − c) (I_{j=i,h=1} + a_j^1) / (I_{h=1} + a_0^1) } ] ]
         = E_U[ P(0|i) Σ_{j=1}^b min{ c (I_{j=i} + a_j^0) / (1 + a_0^0), (1 − c) a_j^1 / a_0^1 } ]
           + E_U[ P(1|i) Σ_{j=1}^b min{ c a_j^0 / a_0^0, (1 − c) (I_{j=i} + a_j^1) / (1 + a_0^1) } ]
         = E_U[ (c p_i / (c p_i + (1 − c) q_i)) Σ_{j=1}^b min{ c (I_{j=i} + a_j^0) / (1 + a_0^0), (1 − c) a_j^1 / a_0^1 } ]
           + E_U[ ((1 − c) q_i / (c p_i + (1 − c) q_i)) Σ_{j=1}^b min{ c a_j^0 / a_0^0, (1 − c) (I_{j=i} + a_j^1) / (1 + a_0^1) } ],    (6.120)
where the preceding equalities are justified as follows: (1) given one sample point from bin i, the min reduces as shown, where I_{j=i,h=0} = 1 if and only if j = i and the experiment yields class 0; (2) E_h = E_U E_{h|u}; (3)

E_{h|u}[g(h)] = P(0|i) g(0) + P(1|i) g(1),    (6.121)

where P(0|i) is the posterior probability of class 0 given bin i (which is dependent on u); and (4) the posterior probabilities are evaluated. The remaining expectations are taken with respect to p(p_i, q_i), since only the parameter values pertaining to bin i play a role.

We close this section by noting that the effect of correlated sampling on classifier performance has long been studied in pattern recognition, going back to at least 1974, when Basu and Odell employed numerical examples to study the effect of equi-correlated sampling (Basu and Odell, 1974). Subsequently, several papers appeared that considered the asymptotic effects of various correlation structures on LDA (McLachlan, 1976; Tubbs, 1980; Lawoko and McLachlan, 1985, 1986). Owing to their asymptotic nature, these results are inapplicable to small samples. More recently, and more relevantly, nonrandom sampling has been addressed for finite samples by providing representations of the first- and second-order moments of the expected errors arising from nonrandom sampling, again for LDA (Zollanvari et al., 2013). These results demonstrate that nonrandom sampling can be advantageous depending on the correlation structure within the data.
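The class-selection cost of Eq. 6.118 can be sketched by Monte Carlo for a small multinomial model. The bin count, c, and the Dirichlet hyperparameters below are illustrative assumptions, not values from the text; the outer expectation over u_0 is estimated by sampling Dirichlet draws via gamma variates.

```python
import random

random.seed(2)

# Multinomial model with b bins and Dirichlet hyperparameters per class.
b = 3
c = 0.5
a0 = [4.0, 2.0, 1.0]   # class-0 Dirichlet hyperparameters (illustrative)
a1 = [1.0, 2.0, 4.0]   # class-1 Dirichlet hyperparameters (illustrative)
a0_sum, a1_sum = sum(a0), sum(a1)

def dirichlet(alpha):
    # Standard gamma-variate construction of a Dirichlet draw.
    g = [random.gammavariate(ai, 1.0) for ai in alpha]
    s = sum(g)
    return [x / s for x in g]

def residual_cost_class0(draws=20000):
    # Monte Carlo estimate of R_0(U|h) in Eq. 6.118: expected Bayesian
    # MMSE error of the OBC after observing one class-0 point.
    total = 0.0
    for _ in range(draws):
        p = dirichlet(a0)                       # draw u_0 from its prior
        for i in range(b):                      # bin the point lands in
            inner = sum(
                min(c * ((1 if j == i else 0) + a0[j]) / (1 + a0_sum),
                    (1 - c) * a1[j] / a1_sum)
                for j in range(b)
            )
            total += p[i] * inner
    return total / draws

print("R_0 =", round(residual_cost_class0(), 4))
```

The analogous quantity R_1(U|h) for sampling class 1 is obtained by exchanging the roles of the two classes; in the experimental-design loop, one samples from the class with the smaller residual cost.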
6.7 Prior Construction Up to this point we have ignored the issue of prior construction, assuming that the characterization of uncertainty is known. In a sense, prior construction is a different (but related) topic, since it involves the transformation of knowledge from some standard scientific form into a constraint on uncertainty, which by its very nature does not represent scientific knowledge. Regarding prior construction, in 1968, E. T. Jaynes remarked, “Bayesian methods, for all their advantages, will not be entirely satisfactory until we face the problem of finding the prior probability squarely.” (Jaynes, 1968). Twelve years later, he added, “There must exist a general formal theory of determination of priors by logical analysis of prior information — and that to develop it is today the top priority research problem of Bayesian theory.” (Jaynes, 1980). The problem is one of engineering, and it must be faced in the context of scientific knowledge and the transformation of that knowledge into prior distributions. There are deep epistemological issues here. Essentially, what is the nature of the knowledge represented by a prior? It is not validated knowledge in the scientific sense, nor is it empty speculation if it is derived in some structured manner from valid scientific knowledge (Dougherty, 2016). Historically, prior construction has tended to utilize general methodologies not targeting any specific type of prior information and has usually been treated independently (even subjectively) of real, available prior knowledge and sample data. Subsequent to the introduction of the Jeffreys non-informative
prior (Jeffreys, 1946), objective-based methods were proposed, two early ones being (Kashyap, 1971) and (Bernardo, 1979). There appeared a series of information-theoretic and statistical approaches: non-informative priors for integers (Rissanen, 1983), entropic priors (Rodriguez, 1991), maximal data information priors (MDIPs) (Zellner, 1995), reference (non-informative) priors obtained through maximization of the missing information (Berger and Bernardo, 1992), and least-informative priors (Spall and Hill, 1990) [see also (Bernardo, 1979), (Kass and Wasserman, 1996), and (Berger et al., 2012)]. The principle of maximum entropy can be seen as a method of constructing least-informative priors (Jaynes, 1957, 1968). Except for the Jeffreys prior, almost all of the methods are based on optimization: maximizing or minimizing an objective function, usually an information-theoretic one. The least-informative prior in (Spall and Hill, 1990) is found among a restricted set of distributions, where the feasible region is a set of convex combinations of certain types of distributions. In (Zellner, 1996), several non-informative and informative priors for different problems are found. In all of these methods, there is a separation between prior knowledge and observed sample data. Since a prior is a construct, not validated scientific knowledge, the more it is constrained by scientific knowledge, the more confident one can be that it is concentrated around the correct model. From a scientific perspective, the problem is to find mappings that take existing scientific knowledge and yield prior distributions targeted for specific operational objectives. With optimal Bayesian classification, the underlying probabilistic model is a feature-label distribution.
In the context of the OBC, the first proposed approach was to use discarded features, ones not used for the classifier, to construct a prior distribution governing the feature-label distribution of the selected features, the assumption being that the discarded features implicitly contain useful information (Dalton and Dougherty, 2011c). The method was applied to the multivariate Gaussian model with u = (μ, Σ) and prior distribution defined in Eq. 6.89. The method of moments was used to determine the hyperparameters.

A non-Gaussian model is considered in (Knight et al., 2014, 2018) based on modeling gene concentration levels in next-generation sequencing. These can be modeled using a log-normal distribution. The assumption that the sequencing instrument samples mRNA concentration through a Poisson process yields a multivariate Poisson (MP) model:

P(X_{i,j} | λ_{i,j}) ~ Poisson(d_i exp(λ_{i,j})),    (6.122)

where λ_{i,j} is the location parameter of the log-normal distribution for sample i and gene j, and d_i is a variable accounting for the sequencing depth as determined by the sequencing process (Ghaffari et al., 2013). For each i, the location parameter vector λ_i is modeled with a multivariate Gaussian distribution,
λ_i ~ N(m, S). The mean m and covariance S of the gene concentrations are assumed to be independent for each class y, which possesses prior u_y = (m, S, d, λ), where d = (d_1, …, d_n) and λ = (λ_{i,j}), i = 1, 2, …, n, j = 1, 2, …, D, for n sample points and D total genes. Weakly informative priors are utilized, and discarded features are used to determine the hyperparameters via a different method than that used in (Dalton and Dougherty, 2011c). MCMC methods are used for computation.

If the underlying random processes are derived as solutions to an equational model characterizing a physical system, then uncertainty in the parameters of the equational model will lead to uncertainty in the parameters of those random processes. This issue is addressed in (Zollanvari and Dougherty, 2016) for multivariate Gaussian processes that are solutions to vector stochastic differential equations (SDEs). In this scenario, the SDE provides a form of prior knowledge used in the classification of discrete time series, in which a sample point consists of a set of observations over a finite set of time points. Based on the relevant SDE theory (Arnold, 1974), the means and covariance matrices for the class-conditional distributions arising from the class-0 and class-1 SDEs are expressed in terms of the parameters of their respective SDEs. These distributions are multivariate Gaussian, and, assuming independent processes, an inverse-Wishart (conjugate) prior is assumed to govern the independent covariance model. Hence, the posteriors and effective class-conditional densities are determined by Theorems 6.3 and 6.4, respectively, and the OBC discriminant is given by Eq. 6.106.

Leaving the SDE details to (Zollanvari and Dougherty, 2016), we briefly describe the classification structure. Consider a Gaussian random process X with sample space S. Sampling over an observation time vector t_N = (t_1, t_2, …, t_N)^T yields p-dimensional Gaussian random vectors X_{t_1}, X_{t_2}, …, X_{t_N}. The vector (X_{t_1}^T, X_{t_2}^T, …, X_{t_N}^T)^T possesses a multivariate normal distribution with Np × 1 mean

μ_{t_N} = (μ_{t_1}^T, μ_{t_2}^T, …, μ_{t_N}^T)^T,    (6.123)

where μ_{t_i} = E[X_{t_i}], and Np × Np covariance matrix Σ_{t_N} = (Σ_{t_i,t_j}), where

Σ_{t_i,t_j} = E[(X_{t_i} − E[X_{t_i}])(X_{t_j} − E[X_{t_j}])^T].    (6.124)
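Assembling the stacked mean and covariance of Eqs. 6.123 and 6.124 is mechanical once the process moments are known. As a minimal sketch (Python, scalar case p = 1; a drifted Wiener process stands in for the SDE solution, and both the drift 0.5t and the kernel min(s, t) are illustrative assumptions, not the text's model):

```python
# Stacked mean vector (Eq. 6.123) and Np x Np covariance (Eq. 6.124)
# for a scalar (p = 1) Gaussian process with mu_t = 0.5*t and
# Sigma_{s,t} = min(s, t)  (Wiener-type kernel, illustrative only).

t = [1.0, 2.0, 4.0]          # observation time vector t_N
N = len(t)

mu = [0.5 * ti for ti in t]                          # Eq. 6.123, p = 1
Sigma = [[min(ti, tj) for tj in t] for ti in t]      # Eq. 6.124, p = 1

print("mu =", mu)
for row in Sigma:
    print(row)
```

For p > 1, each entry μ_{t_i} becomes a p-vector and each Σ_{t_i,t_j} a p × p block, so the stacked objects have dimensions Np × 1 and Np × Np, exactly as stated above.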
For any v ∈ S, a sample path is a collection {X_t(v)}. A realization of X at sample path v and time vector t_N is denoted by x_{t_N}(v). Consider two independent multivariate Gaussian processes X^0 and X^1, where for any t_N, X^0 and X^1 possess means and covariances μ_{t_N}^0 and Σ_{t_N}^0, and μ_{t_N}^1 and Σ_{t_N}^1, respectively. For y = 0, 1, μ_{t_N}^y is defined similarly to μ_{t_N}, with μ_{t_i}^y = E[X_{t_i}^y], and Σ_{t_N}^y is defined similarly to Σ_{t_N}, with X_{t_i}^y replacing X_{t_i} in the definition of Σ_{t_i,t_j}^y. These are determined by the respective SDEs. Let P_{t_N}^y denote a set of n_y sample paths from process X^y at t_N:
P_{t_N}^y = { x_{t_N}^y(v_1), x_{t_N}^y(v_2), …, x_{t_N}^y(v_{n_y}) }.    (6.125)

Separate sampling is assumed. Classification is defined relative to a set of sample paths, which can be considered as Np-dimensional observations. Let x_{t_N}^y(v_s) denote a future sample path observed over t_N but with y unknown. We desire a classifier c_{t_N} to predict y via a discriminant D_{t_N}:

c_{t_N}(x_{t_N}^y(v_s)) = 0 if D_{t_N}(x_{t_N}^y(v_s)) > 0, and 1 otherwise.    (6.126)

If there is no uncertainty in the SDEs, then the discriminant is determined by QDA; with uncertainty, the discriminant is the OBC.

Our main aim in this section is to introduce a formal optimization to map constraints in the form of conditional probability statements regarding the phenomena of interest into a prior distribution. In the context of phenotype classification, specially designed optimizations constrained by pathway relations have been proposed to integrate knowledge concerning genetic signaling pathways into prior construction for classification based on both Gaussian and discrete gene-expression models (Esfahani et al., 2013; Esfahani and Dougherty, 2014, 2015). In the remainder of this section, we present a general paradigm for prior formation involving a constrained optimization in which the constraints incorporate existing scientific knowledge augmented by slackness variables (Boluki et al., 2017b). The constraints tighten the prior distribution in accordance with prior knowledge, while at the same time avoiding inadvertent over-restriction of the prior.

6.7.1 General framework

Prior construction involves two steps: (1) functional information quantification, and (2) objective-based prior selection: combining sample data and prior knowledge, and building an objective function in which the expected mean log-likelihood is regularized by the quantified information in step 1. When there are no sample data, or only one data point is available for prior construction, this procedure reduces to a regularized extension of the maximum entropy principle. The next two definitions provide the general framework.
Definition 6.1. If Π is a family of proper priors, then a maximal knowledge-driven information prior (MKDIP) is a solution to the optimization

arg min_{p∈Π} E_p[ C_u(ξ, D) ],    (6.127)

where C_u(ξ, D) is a cost function depending on (1) the random vector u parameterizing the uncertainty class and (2) the state ξ of our prior knowledge
and part of the sample data D. Alternatively, by parameterizing the prior probability as p(u; γ), with γ ∈ Γ denoting the hyperparameters, an MKDIP can be found by solving

arg min_{γ∈Γ} E_{p(u;γ)}[ C_u(ξ, γ, D) ].    (6.128)
The MKDIP incorporates prior knowledge and part of the data to construct an informative prior. Moreover, we allow the hyperparameters to play a role in the cost function. There is no known way of determining the optimal amount of observed data to use in the optimization problem of Definition 6.1; however, the issue has been addressed via simulations in (Esfahani and Dougherty, 2014). A special case arises when the cost function is additively decomposed into costs on the hyperparameters and the data, so that the cost function takes the form

C_u(ξ, γ, D) = (1 − β) g_u^{(1)}(ξ, γ) + β g_u^{(2)}(ξ, D),    (6.129)

where β ∈ [0, 1] is a regularization parameter, and g_u^{(1)} and g_u^{(2)} are cost functions. The next definition transforms the MKDIP into a constrained optimization.

Definition 6.2. A maximal knowledge-driven information prior with constraints takes the form of the optimization in Eq. 6.128 subject to the constraints

E_{p(u;γ)}[ g_{u,i}^{(3)}(ξ) ] = 0,  i = 1, 2, …, n_c,    (6.130)

where g_{u,i}^{(3)}, i = 1, 2, …, n_c, are constraints resulting from the state ξ of our knowledge via a mapping

T : ξ → ( E_{p(u;γ)}[g_{u,1}^{(3)}(ξ)], E_{p(u;γ)}[g_{u,2}^{(3)}(ξ)], …, E_{p(u;γ)}[g_{u,n_c}^{(3)}(ξ)] ).    (6.131)

We restrict our attention to MKDIP with constraints and additive costs as in Eq. 6.129. In addition, while the MKDIP formulation allows prior information in the cost function, we will restrict prior information to the constraints, so that Eq. 6.129 becomes

C_u(γ, D) = (1 − β) g_u^{(1)}(γ) + β g_u^{(2)}(D),    (6.132)

and the optimization of Eq. 6.128 becomes

arg min_{γ∈Γ} E_{p(u;γ)}[ (1 − β) g_u^{(1)}(γ) + β g_u^{(2)}(D) ].    (6.133)

The following are examples of cost functions in the literature:

1. Maximum Entropy. The principle of maximum entropy (MaxEnt) (Guiasu and Shenitzer, 1985) yields the least informative prior given the constraints in order to prevent adding spurious information. Under our general framework, MaxEnt can be formulated by setting β = 0 and

g_u^{(1)} = −H[u],    (6.134)

where H[·] denotes the Shannon entropy.

2. Maximal Data Information. The maximal data information prior (MDIP) (Zellner, 1984) provides a criterion for the constructed probability distribution to remain maximally committed to the data (Ebrahimi et al., 1999). To achieve MDIP in our general framework, set β = 0 and

g_u^{(1)} = log p(u; γ) + ⋯
3. Expected Mean Log-Likelihood. For the true parameter u_true, define

ρ(u_true, u) = ∫_{x∈X} f(x|u_true) log f(x|u) dx = E[ log f(X|u) | u_true ],    (6.138)

which can therefore be treated as a similarity measure between u_true and u. If D has n_D points, then, since the sample points are conditioned on u_true, ρ(u_true, u) has the sample-mean estimate

l(u; D) = (1/n_D) Σ_{i=1}^{n_D} log f(x_i|u),    (6.139)
where the summation is the log-likelihood function (Akaike, 1973, 1978; Bozdogan, 1987).

As originally proposed, the preceding three approaches did not involve expectation over the uncertainty class. They were extended to the general prior-construction form in Definition 6.1, including the expectation, to produce the regularized maximum entropy prior (RMEP) and the regularized maximal data information prior (RMDIP) in (Esfahani and Dougherty, 2014), and the regularized expected mean log-likelihood prior (REMLP) in (Esfahani and Dougherty, 2015). In all cases, optimization was subject to specialized constraints. Three MKDIP methods result from using these information-theoretic cost functions in the MKDIP prior-construction optimization framework.

A nonnegative slackness variable ε_i can be considered for each constraint in the MKDIP framework to make the constraint structure more flexible, thereby allowing potential error or uncertainty in prior knowledge (allowing potential inconsistencies in prior knowledge). When slackness variables are used, they also become optimization parameters, and a linear function (the summation of all slackness variables) times a regulatory coefficient is added to the cost function of the optimization in Eq. 6.128, so that the optimization in Eq. 6.128 relative to Eq. 6.129 becomes

arg min_{γ∈Γ, ε∈E} E_{p(u;γ)}[ λ_1 ( (1 − β) g_u^{(1)}(ξ, γ) + β g_u^{(2)}(ξ, D) ) + λ_2 Σ_{i=1}^{n_c} ε_i ],    (6.140)

subject to

−ε_i ≤ E_{p(u;γ)}[ g_{u,i}^{(3)}(ξ) ] ≤ ε_i,  i = 1, 2, …, n_c,    (6.141)

where λ_1 and λ_2 are nonnegative regularization parameters, and ε = (ε_1, …, ε_{n_c}) and E represent the vector of all slackness variables and the feasible region for slackness variables, respectively. Each slackness variable determines a range: the more uncertainty regarding a constraint, the greater the range for the corresponding slackness variable.
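A toy version of the slackness-regularized optimization in Eqs. 6.140 and 6.141 can be sketched directly. Everything here is an illustrative assumption rather than the papers' algorithm: three candidate parameter values, a single knowledge constraint E_p[P(X = 1 | u)] ≈ 0.6, the MaxEnt cost of Eq. 6.134 (β = 0), a coarse grid search in place of a proper optimizer, and the slackness term folded in as λ_2 times the achieved constraint violation.

```python
import math
from itertools import product

# Candidate models: P(X = 1 | u) for three parameter values (illustrative).
thetas = [0.2, 0.5, 0.8]
target = 0.6                   # knowledge-based constraint value (assumed)
lam2 = 5.0                     # slackness regularization weight lambda_2

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

best = None
grid = [i / 20 for i in range(21)]
for p1, p2 in product(grid, grid):
    p3 = 1.0 - p1 - p2
    if p3 < 0:
        continue
    p = (p1, p2, p3)
    # Achieved violation of the constraint E_p[P(X = 1 | u)] = target.
    slack = abs(sum(pi * th for pi, th in zip(p, thetas)) - target)
    # MKDIP-E-style cost: negative entropy (Eq. 6.134) + slackness penalty.
    cost = -entropy(p) + lam2 * slack
    if best is None or cost < best[0]:
        best = (cost, p, slack)

cost, prior, slack = best
print("prior:", prior, "constraint slack:", round(slack, 3))
```

The selected prior is the most spread-out pmf whose mean stays close to the knowledge constraint; shrinking λ_2 loosens the constraint, while growing it forces the mean toward the target at the expense of entropy, which is exactly the trade-off Eq. 6.140 encodes.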
6.7.2 From prior knowledge to constraints

In most scientific settings, knowledge can be expressed in the form of conditional probabilities characterizing conditional relations, specifically, P(U_i ∈ A_i | V_i ∈ B_i), where U_i and V_i are vectors composed of variables in the system, and A_i and B_i are subsets of the ranges of U_i and V_i, respectively. For instance, if a system has m binary random variables X_1, X_2, …, X_m, then there are m·2^{m−1} probabilities for which a single variable is conditioned by the other variables:

P(X_i = k_i | X_1 = k_1, …, X_{i−1} = k_{i−1}, X_{i+1} = k_{i+1}, …, X_m = k_m) = a_i^{k_i}(k_1, …, k_{i−1}, k_{i+1}, …, k_m).    (6.142)

Most likely, not all variables will be used. The chosen constraints will involve conditional probabilities whose values are known (approximately). For instance, if X_1 = 1 if and only if X_2 = 1 and X_3 = 0, regardless of other variables, then

a_1^1(1, 0, k_4, …, k_m) = 1    (6.143)

and

a_1^1(1, 1, k_4, …, k_m) = a_1^1(0, 0, k_4, …, k_m) = a_1^1(0, 1, k_4, …, k_m) = 0    (6.144)

for all k_4, …, k_m. In stochastic systems it is unlikely that conditioning will be so complete that constraints take 0-1 forms; rather, the relation between variables in the model will be conditioned on the context of the system being modeled, not simply the activity being modeled. In this situation, conditional probabilities take the form

a_1^1(1, 0, k_4, …, k_m) = 1 − δ_1(1, 0, k_4, …, k_m),    (6.145)

where δ_1(1, 0, k_4, …, k_m) ∈ [0, 1] is referred to as a conditioning parameter, and

a_1^1(1, 1, k_4, …, k_m) = η_1(1, 1, k_4, …, k_m),
a_1^1(0, 0, k_4, …, k_m) = η_1(0, 0, k_4, …, k_m),    (6.146)
a_1^1(0, 1, k_4, …, k_m) = η_1(0, 1, k_4, …, k_m),

where η_1(r, s, k_4, …, k_m) ∈ [0, 1] is referred to as a crosstalk parameter. The “conditioning” and “crosstalk” terminology comes from (Dougherty et al., 2009), in which δ quantifies loss of regulatory control based on the overall context in which the system is operating and, analogously, η corresponds to
regulatory relations outside the model that result in the conditioned variable X_1 taking the “on” value 1. In practice, it is unlikely that we would know the conditioning and crosstalk parameters for all combinations of k_4, …, k_m; rather, we might know only the average, in which case δ_1(1, 0, k_4, …, k_m) reduces to δ_1(1, 0), η_1(1, 1, k_4, …, k_m) reduces to η_1(1, 1), etc.

The basic scheme is very general and applies to the Gaussian and discrete models in (Esfahani and Dougherty, 2014, 2015). In this paradigm, the constraints resulting from the state of knowledge are of the form

g_{u,i}^{(3)}(ξ) = P(X_i = k_i | X_1 = k_1, …, X_{i−1} = k_{i−1}, X_{i+1} = k_{i+1}, …, X_m = k_m) − a_i^{k_i}(k_1, …, k_{i−1}, k_{i+1}, …, k_m).    (6.147)

When slackness variables are introduced, the optimization constraints take the form

a_i^{k_i}(k_1, …, k_{i−1}, k_{i+1}, …, k_m) − ε_i(k_1, …, k_{i−1}, k_{i+1}, …, k_m)
≤ E_{p(u;γ)}[ P(X_i = k_i | X_1 = k_1, …, X_{i−1} = k_{i−1}, X_{i+1} = k_{i+1}, …, X_m = k_m) ]
≤ a_i^{k_i}(k_1, …, k_{i−1}, k_{i+1}, …, k_m) + ε_i(k_1, …, k_{i−1}, k_{i+1}, …, k_m).    (6.148)

Not all constraints will be used, depending on our prior knowledge. In fact, the general conditional probabilities will likely not be used because they will likely not be known when there are many conditioning variables. When all constraints in the optimization are of this form, we obtain the optimizations MKDIP-E, MKDIP-D, and MKDIP-R, which correspond to using the same cost functions as RMEP, RMDIP, and REMLP, respectively.

Example 6.3. Consider the network in Fig. 6.5, in which X_1 is regulated by X_2, X_3, and X_5, for the moment ignoring the regulating Boolean function shown in the figure. We have

P(X_1 = 1 | X_2 = k_2, X_3 = k_3, X_5 = k_5) = a_1^1(k_2, k_3, k_5).

For instance, it might be that

P(X_1 = 1 | X_2 = 1, X_3 = 1, X_5 = 0) = a_1^1(1_2, 1_3, 0_5),

where the notation 1_2 denotes X_2 = 1. If we do not know a_1^1(k_2, k_3, k_5) for all combinations of k_2, k_3, k_5, then we only use known combinations. If there is conditioning with X_2, X_3, and X_5 regulating X_1, with X_1 = 1 if X_2 = X_3 = X_5 = 1, then
Optimal Classification
237
Figure 6.5 Example showing components directly connected to X1. [Reprinted from (Boluki et al., 2017b).]
$$a_1^1(1_2, 1_3, 1_5) = 1 - d_1(1_2, 1_3, 1_5).$$

If we are limited to three-node predictors, and $X_3$ and $X_5$ regulate $X_1$, with $X_1 = 1$ if $X_3 = X_5 = 1$, then

$$a_1^1(k_2, 1_3, 1_5) = 1 - d_1(k_2, 1_3, 1_5),$$

meaning that the conditioning parameter depends on whether $X_2 = 0$ or $X_2 = 1$. Now, assuming the regulating function of $X_1$ shown in Fig. 6.5, if context effects exist, then the following knowledge constraints can be extracted:

$$a_1^0(k_2, k_3, 0_5) = 1 - d_1(k_2, k_3, 0_5),$$
$$a_1^0(k_2, 1_3, k_5) = 1 - d_1(k_2, 1_3, k_5),$$
$$a_1^0(1_2, k_3, k_5) = 1 - d_1(1_2, k_3, k_5),$$
$$a_1^1(0_2, 0_3, 1_5) = 1 - d_1(0_2, 0_3, 1_5).$$

If we assume that $X_1$ is not affected by context, then

$$a_1^0(k_2, k_3, 0_5) = P(X_1 = 0 \mid X_5 = 0) = 1,$$
$$a_1^0(k_2, 1_3, k_5) = P(X_1 = 0 \mid X_3 = 1) = 1,$$
$$a_1^0(1_2, k_3, k_5) = P(X_1 = 0 \mid X_2 = 1) = 1,$$
$$a_1^1(0_2, 0_3, 1_5) = P(X_1 = 1 \mid X_2 = 0, X_3 = 0, X_5 = 1) = 1.$$

From the preceding equations, it is seen that for some combinations of the regulator values, only a subset of them determines the value of $X_1$, regardless of the other regulator values. If we assume that the value of $X_5$ cannot be observed, then, of the four constraints, the only ones that can be extracted from the regulating-function knowledge are those for $a_1^0(k_2, 1_3, k_5)$ and $a_1^0(1_2, k_3, k_5)$. ▪
REMLP construction was employed in (Esfahani and Dougherty, 2014) for a normal-Wishart prior on an unknown mean and precision matrix using pathway knowledge in a graphical model. Constraints were imposed on three different kinds of knowledge regarding the random variables in the model: conditional probabilities, correlation, and Shannon entropy. In (Boluki et al., 2017a), the REMLP methodology was extended to a Gaussian mixture model when the labels were unknown, for both classification and regression. Prior construction bundled with prior update via Bayesian sampling resulted in Monte Carlo approximations to the optimal Bayesian classifier and optimal Bayesian regression. In (Esfahani and Dougherty, 2015), REMP and RMDIP were proposed, and REMP, RMDIP, and REMLP were applied to a Dirichlet prior for discrete classification. The general MKDIP framework, including the straightforward structuring of conditional-probabilistic constraints and sole dependence on them, was introduced in (Boluki et al., 2017b), resulting in MKDIP-E, MKDIP-D, and MKDIP-R.

6.7.3 Dirichlet prior distribution

Consider the multinomial model with parameter vector $\mathbf{p} = (p_1, p_2, \dots, p_b)$ possessing a Dirichlet prior distribution $D(\mathbf{a})$ with parameter vector $\mathbf{a} = (a_1, a_2, \dots, a_b)$:

$$p(\mathbf{p} \mid \mathbf{a}) = \frac{\Gamma\!\left(\sum_{k=1}^b a_k\right)}{\prod_{k=1}^b \Gamma(a_k)} \prod_{k=1}^b p_k^{a_k - 1} \qquad (6.149)$$

(see Eqs. 6.72 and 6.73). Define $a_0 = \sum_{k=1}^b a_k$, which is interpreted as a measure of the strength of the prior knowledge (Ferguson, 1973). We define the feasible region for a given $a_0$ by $\Pi = \{D(\mathbf{a}) : \mathbf{a} \in S_{b-1}^{a_0}\}$, where $S_{b-1}^{a_0}$ is defined by $0 \le \sum_{k=1}^{b-1} a_k \le a_0$.

In the binary setting, fixing the value of a single random variable, $X_i = 0$ or $X_i = 1$, corresponds to a partition of the state space $\mathcal{X} = \{1, \dots, b\}$. Denote the portions of $\mathcal{X}$ for which ($X_i = k_1$, $X_j = k_2$) and ($X_i \ne k_1$, $X_j = k_2$) for any $k_1, k_2 \in \{0, 1\}$ by $\mathcal{X}^{i,j}(k_1, k_2)$ and $\mathcal{X}^{i,j}(k_1^c, k_2)$, respectively. For the Dirichlet distribution, the constraints on the expectation over the conditional probability in Eq. 6.148 can be explicitly written as functions of the hyperparameters. We denote the summation of the components of $\mathbf{a}$ in $\mathcal{X}^{i,j}(k_1, k_2)$ by

$$a^{i,j}(k_1, k_2) = \sum_{k \in \mathcal{X}^{i,j}(k_1, k_2)} a_k. \qquad (6.150)$$

The notation can be easily extended for cases having more than two fixed random variables.
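To make Eq. 6.150 concrete, here is a minimal pure-Python sketch for a hypothetical three-variable binary model; the hyperparameter values are illustrative only, not taken from the text.

```python
from itertools import product

# Toy binary model with m = 3 variables, so b = 2**3 = 8 multinomial bins.
# Each state is a tuple (x1, x2, x3); hypothetical Dirichlet hyperparameters a[state].
states = list(product([0, 1], repeat=3))
a = {s: 1.0 + 0.5 * sum(s) for s in states}  # illustrative values only

def a_partial(i, j, k1, k2):
    """Sum of Dirichlet components over the region X_i = k1, X_j = k2 (Eq. 6.150)."""
    return sum(a[s] for s in states if s[i] == k1 and s[j] == k2)

a0 = sum(a.values())
# For a fixed pair (i, j), the four regions partition the state space,
# so the partial sums add up to a0.
total = sum(a_partial(0, 1, k1, k2) for k1 in (0, 1) for k2 in (0, 1))
print(total, a0)
```

Since the four regions for fixed $(i, j)$ partition $\mathcal{X}$, the partial sums recover $a_0$, which is a quick sanity check on any implementation of the notation.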
To evaluate the expectations of the conditional probabilities, we require a technical lemma proven in (Esfahani and Dougherty, 2015): for any nonempty disjoint subsets $A, B \subset \mathcal{X} = \{1, 2, \dots, b\}$,

$$E_p\!\left[\frac{\sum_{i \in A} p_i}{\sum_{i \in A} p_i + \sum_{j \in B} p_j}\right] = \frac{\sum_{i \in A} a_i}{\sum_{i \in A} a_i + \sum_{j \in B} a_j}. \qquad (6.151)$$

If, for any $i$, the vectors of random variables other than $X_i$ and of their corresponding values are denoted by $\tilde{X}_i$ and $\tilde{x}_i$, respectively, then, applying the preceding equation, the expectation over the conditional probability in Eq. 6.148 is

$$E_p[P(X_i = k_i \mid X_1 = k_1, \dots, X_{i-1} = k_{i-1}, X_{i+1} = k_{i+1}, \dots, X_m = k_m)]$$
$$= E_p\!\left[\frac{\sum_{k \in \mathcal{X}^{i,\tilde{X}_i}(k_i, \tilde{x}_i)} p_k}{\sum_{k \in \mathcal{X}^{i,\tilde{X}_i}(k_i, \tilde{x}_i)} p_k + \sum_{k \in \mathcal{X}^{i,\tilde{X}_i}(k_i^c, \tilde{x}_i)} p_k}\right] \qquad (6.152)$$
$$= \frac{a^{i,\tilde{X}_i}(k_i, \tilde{x}_i)}{a^{i,\tilde{X}_i}(k_i, \tilde{x}_i) + a^{i,\tilde{X}_i}(k_i^c, \tilde{x}_i)},$$

where the summation in the numerator and the first summation in the denominator of the second equality are over the states (bins) for which ($X_i = k_i$, $\tilde{X}_i = \tilde{x}_i$), and the second summation in the denominator is over the states (bins) for which ($X_i = k_i^c$, $\tilde{X}_i = \tilde{x}_i$).

If there exists a set of random variables that completely determines the value of node $X_i$ (or only a specific set-up of their values that determines the value), then the constraints on the conditional probability conditioned on all of the random variables other than $X_i$ can be changed to be conditioned on that set only. Specifically, let $R_i$ denote the set of random variables corresponding to such a set, and suppose that there exists a specific set-up of their values $r_i$ that completely determines the value of $X_i$. If the set of all random variables other than $X_i$ and those in $R_i$ is denoted by $B_i = \tilde{X}_{(i, R_i)}$, and their corresponding values by $b_i$, then the constraints on the conditional probability can be expressed as

$$E_p[P(X_i = k_i \mid R_i = r_i)]$$
$$= E_p\!\left[\frac{\sum_{b_i \in \Omega_{B_i}} \sum_{k \in \mathcal{X}^{i,R_i,B_i}(k_i, r_i, b_i)} p_k}{\sum_{b_i \in \Omega_{B_i}} \left(\sum_{k \in \mathcal{X}^{i,R_i,B_i}(k_i, r_i, b_i)} p_k + \sum_{k \in \mathcal{X}^{i,R_i,B_i}(k_i^c, r_i, b_i)} p_k\right)}\right] \qquad (6.153)$$
$$= \frac{\sum_{b_i \in \Omega_{B_i}} a^{i,R_i,B_i}(k_i, r_i, b_i)}{\sum_{b_i \in \Omega_{B_i}} \left[a^{i,R_i,B_i}(k_i, r_i, b_i) + a^{i,R_i,B_i}(k_i^c, r_i, b_i)\right]},$$

where $\Omega_{B_i}$ is the set of all possible vectors of values for $B_i$.
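The lemma of Eq. 6.151 can be checked by Monte Carlo. The sketch below uses hypothetical hyperparameters and samples a Dirichlet vector by normalizing independent gamma variates (a standard construction), comparing the sample average against the closed form.

```python
import random

random.seed(0)

# Hypothetical Dirichlet hyperparameters over b = 5 bins, with disjoint
# index sets A and B as in the lemma of Eq. 6.151.
a = [0.8, 1.5, 2.0, 0.5, 1.2]
A, B = [0, 1], [2, 3]

def dirichlet(alpha):
    """Sample p ~ D(alpha) by normalizing independent gamma variates."""
    g = [random.gammavariate(ai, 1.0) for ai in alpha]
    s = sum(g)
    return [gi / s for gi in g]

# Monte Carlo estimate of E_p[ sum_A p / (sum_A p + sum_B p) ]
n = 50_000
acc = 0.0
for _ in range(n):
    p = dirichlet(a)
    sA, sB = sum(p[i] for i in A), sum(p[j] for j in B)
    acc += sA / (sA + sB)
mc = acc / n

# Closed form from Eq. 6.151
exact = sum(a[i] for i in A) / (sum(a[i] for i in A) + sum(a[j] for j in B))
print(round(mc, 3), round(exact, 3))
```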
For a multinomial model with a Dirichlet prior distribution, a constraint on the conditional probabilities translates into a constraint on the preceding expectation over the conditional probabilities, as in Eq. 6.148. We next express the optimization cost functions for the three prior-construction methods, REMP, RMDIP, and REMLP, as functions of the Dirichlet parameter. For REMP, the entropy of $\mathbf{p}$ is given by

$$H[\mathbf{p}] = \sum_{k=1}^b \left[\log \Gamma(a_k) - (a_k - 1)\psi(a_k)\right] - \log \Gamma(a_0) + (a_0 - b)\psi(a_0), \qquad (6.154)$$

where $\psi : \mathbb{R}^+ \to \mathbb{R}$ is the digamma function, defined by $\psi(x) = \frac{d}{dx} \ln \Gamma(x)$ (Bishop, 2006). Assuming that $a_0$ is given, so that it does not affect the minimization, the cost function is given by

$$E_p[C_p(\mathbf{a})] = \sum_{k=1}^b \left[\log \Gamma(a_k) - (a_k - 1)\psi(a_k)\right]. \qquad (6.155)$$
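A minimal sketch of the REMP cost of Eq. 6.155, using `math.lgamma` and a central-difference approximation to the digamma function; the approximation is an assumption made here for a dependency-free illustration, not part of the text.

```python
from math import lgamma

def digamma(x, h=1e-6):
    # Central-difference approximation of psi(x) = d/dx ln Gamma(x);
    # adequate for illustration, not for production use.
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

def remp_cost(a):
    """REMP cost of Eq. 6.155: sum_k [log Gamma(a_k) - (a_k - 1) psi(a_k)]."""
    return sum(lgamma(ak) - (ak - 1.0) * digamma(ak) for ak in a)

# A uniform Dirichlet (all a_k = 1) makes every term log Gamma(1) = 0.
print(remp_cost([1.0, 1.0, 1.0]))   # 0.0
print(remp_cost([2.0, 0.5, 1.5]))
```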
For RMDIP, according to Eq. 6.135 and employing Eq. 6.154, after removing the constant terms,

$$E_p[C_p(\mathbf{a})] = \sum_{k=1}^b \left[\log \Gamma(a_k) - (a_k - 1)\psi(a_k)\right] + E_p\!\left[\sum_{k=1}^b p_k \ln p_k\right]. \qquad (6.156)$$

To evaluate the second summand, first bring the expectation inside the sum. Given $\mathbf{p} \sim D(\mathbf{a})$ with $a_0$ defined as above, it is known that $p_k$ is beta distributed, with $p_k \sim \mathrm{beta}(a_k, a_0 - a_k)$. Plugging the terms into the expectation and doing some manipulation of the terms inside the expectation integral yields

$$E_p[p_k \ln p_k] = \frac{a_k}{a_0} E_w[\ln w], \qquad (6.157)$$

where $w \sim \mathrm{beta}(a_k + 1, a_0 - a_k)$. It is known that

$$E_{x \sim \mathrm{beta}(a, b)}[\ln x] = \psi(a) - \psi(a + b). \qquad (6.158)$$

Hence,

$$E_p[p_k \ln p_k] = \frac{a_k}{a_0}\left[\psi(a_k + 1) - \psi(a_0 + 1)\right], \qquad (6.159)$$

and the second summand on the right in Eq. 6.156 becomes
$$E_p\!\left[\sum_{k=1}^b p_k \ln p_k\right] = \left[\sum_{k=1}^b \frac{a_k}{a_0}\psi(a_k + 1)\right] - \psi(a_0 + 1). \qquad (6.160)$$

Plugging this into Eq. 6.156 and dropping $\psi(a_0 + 1)$ yields the RMDIP cost function:

$$E_p[C_p(\mathbf{a})] = \sum_{k=1}^b \left[\log \Gamma(a_k) - (a_k - 1)\psi(a_k) + \frac{a_k}{a_0}\psi(a_k + 1)\right]. \qquad (6.161)$$
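The moment identity of Eq. 6.159 can be verified by Monte Carlo, sampling $\mathbf{p} \sim D(\mathbf{a})$ via normalized gamma variates and averaging $p_k \ln p_k$; the hyperparameters below are hypothetical.

```python
import random
from math import lgamma, log

random.seed(1)

def digamma(x, h=1e-6):
    # Central-difference approximation of the digamma function (illustrative).
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

a = [2.0, 1.0, 3.0]          # hypothetical hyperparameters
a0 = sum(a)
k = 0                        # check the k-th component

# Closed form from Eq. 6.159
exact = (a[k] / a0) * (digamma(a[k] + 1) - digamma(a0 + 1))

# Monte Carlo estimate of E_p[p_k ln p_k] under p ~ D(a)
n = 50_000
acc = 0.0
for _ in range(n):
    g = [random.gammavariate(ai, 1.0) for ai in a]
    s = sum(g)
    pk = g[k] / s
    acc += pk * log(pk)
mc = acc / n
print(round(mc, 3), round(exact, 3))
```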
For REMLP, if there are $n_p = \sum_{k=1}^b u_k^p$ points for prior construction, there being $u_k^p$ points observed for bin $k$, then the mean log-likelihood function for the multinomial distribution is

$$l_{n_p}[(p_1, \dots, p_b)] = \frac{1}{n_p}\left[\sum_{k=1}^b u_k^p \log p_k + \log \frac{n_p!}{\prod_{k=1}^b u_k^p!}\right]. \qquad (6.162)$$

Since $\mathbf{p} \sim D(\mathbf{a})$ and $a_0 = \sum_i a_i$,

$$E_p[\log p_k] = \psi(a_k) - \psi(a_0) \qquad (6.163)$$

(Bishop, 2006). Hence, removing the constant parts in Eq. 6.162 and taking the expectation yields

$$E_p[l_{n_p}(\mathbf{p})] = \frac{1}{n_p}\sum_{k=1}^b u_k^p\left[\psi(a_k) - \psi(a_0)\right]. \qquad (6.164)$$

Finally, the REMLP Dirichlet prior for known $a_0$ involves optimization with respect to the cost function

$$E_p[C_p(\mathbf{a}, D)] = \frac{1}{n_p}\sum_{k=1}^b u_k^p\,\psi(a_k), \qquad (6.165)$$

where $D$ denotes the data used for prior construction.
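A corresponding sketch of the REMLP cost of Eq. 6.165; the prior-construction counts $u_k^p$ and hyperparameters below are hypothetical, and the digamma approximation is the same illustrative shortcut as before.

```python
from math import lgamma

def digamma(x, h=1e-6):
    # Central-difference approximation of the digamma function (illustrative).
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

def remlp_cost(a, u):
    """REMLP cost of Eq. 6.165: (1/n_p) sum_k u_k^p psi(a_k), n_p = sum_k u_k^p."""
    n_p = sum(u)
    return sum(uk * digamma(ak) for ak, uk in zip(a, u)) / n_p

# Hypothetical counts u and hyperparameters a over b = 4 bins.
a = [1.0, 2.0, 2.0, 3.0]
u = [5, 10, 10, 25]
print(round(remlp_cost(a, u), 4))
```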
6.8 Epistemology

The need for robust operators arises from insufficient knowledge concerning the underlying physical system. In the case of classification, if the feature-label distribution is assumed to be known, either being deduced from existing scientific theory or there being sufficient data to accurately estimate it, then optimal classification is achieved by the Bayes classifier, and the minimum error achievable for the feature-label distribution is the Bayes error.
In general, any pair $M = (c, \varepsilon_c)$ composed of a measurable function $c : \mathbb{R}^d \to \{0, 1\}$ and a real number $\varepsilon_c \in [0, 1]$ constitutes a classifier model, with $\varepsilon_c$ being simply a number, not necessarily specifying an actual error probability corresponding to $c$. $M$ takes on an empirical meaning when it is applied to a feature-label distribution. If the feature-label distribution is unknown and a classification rule $C_n$ is used to design a classifier $c_n$ from a random sample $S_n$, then the classifier model $(c_n, \varepsilon[c_n])$ is obtained. Since the feature-label distribution is unknown, $\varepsilon[c_n]$ is unknown and must be estimated by an estimation rule $\Xi_n$. Consequently, the random sample $S_n$ yields a classifier $c_n = C_n(S_n)$ and an error estimate $\hat{\varepsilon}[c_n] = \Xi_n(S_n)$, which together constitute a classifier model $(c_n, \hat{\varepsilon}[c_n])$. Overall, classifier design involves a rule model $(C_n, \Xi_n)$ used to determine a sample-dependent classifier model $(c_n, \hat{\varepsilon}[c_n])$. Both $(c_n, \varepsilon[c_n])$ and $(c_n, \hat{\varepsilon}[c_n])$ are random pairs relative to the sampling distribution.

Since the error quantifies the predictive capacity of a classifier, the salient epistemological issue is error-estimation accuracy. Given a feature-label distribution, error-estimation accuracy is commonly measured by the mean-square error,

$$\mathrm{MSE}(\hat{\varepsilon}) = E_{S_n}\!\left[(\hat{\varepsilon}_n - \varepsilon_n)^2\right], \qquad (6.166)$$

where we denote $\varepsilon[c_n]$ and $\hat{\varepsilon}[c_n]$ by $\varepsilon_n$ and $\hat{\varepsilon}_n$, respectively, or, equivalently, by the square root of the MSE, known as the root-mean-square (RMS). The MSE is decomposed into the bias,

$$\mathrm{Bias}(\hat{\varepsilon}_n) = E_{S_n}[\hat{\varepsilon}_n - \varepsilon_n], \qquad (6.167)$$

of the error estimator relative to the true error, and the deviation variance,

$$\mathrm{Var}_{\mathrm{dev}}[\hat{\varepsilon}_n] = \mathrm{Var}[\hat{\varepsilon}_n - \varepsilon_n], \qquad (6.168)$$

by

$$\mathrm{MSE}(\hat{\varepsilon}_n) = \mathrm{Var}_{\mathrm{dev}}[\hat{\varepsilon}_n] + \mathrm{Bias}(\hat{\varepsilon}_n)^2. \qquad (6.169)$$
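The decomposition of Eq. 6.169 is a population identity; a quick numeric check with a synthetic (biased, noisy) error estimator, where all distributions are illustrative.

```python
import random

random.seed(2)

# Simulate paired (true error, estimated error) draws over the sampling
# distribution and check MSE = Var_dev + Bias^2 (Eq. 6.169).
n = 50_000
true_err = [random.gauss(0.20, 0.03) for _ in range(n)]
est_err = [e + random.gauss(0.02, 0.05) for e in true_err]  # biased, noisy estimator

dev = [eh - e for eh, e in zip(est_err, true_err)]
bias = sum(dev) / n
var_dev = sum((d - bias) ** 2 for d in dev) / n
mse = sum(d ** 2 for d in dev) / n

print(round(mse, 5), round(var_dev + bias ** 2, 5))
```

For the empirical moments the identity holds exactly, since the second moment of the deviation equals its variance plus its squared mean.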
When a large amount of data is available, the sample can be split into independent training and test sets, the classifier being designed on the training data and its error being estimated by the proportion of errors on the test data, which is known as the holdout estimator. Holdout possesses the distribution-free bound

$$\mathrm{RMS}(\hat{\varepsilon}_{\mathrm{holdout}}) \le \frac{1}{\sqrt{4m}}, \qquad (6.170)$$
where m is the size of the test sample (Devroye et al., 1996). With limited data, the sample cannot be split without leaving too little data for classifier design, in which case training and error estimation must take place on the same data set.
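The holdout bound of Eq. 6.170 immediately gives test-set sizes for target RMS levels; a trivial sketch:

```python
from math import sqrt

def holdout_rms_bound(m):
    """Distribution-free RMS bound for the holdout estimator (Eq. 6.170)."""
    return 1.0 / sqrt(4 * m)

# Larger test samples tighten the bound as 1/(2 sqrt(m)).
for m in (25, 100, 400, 2500):
    print(m, holdout_rms_bound(m))
```

For instance, guaranteeing RMS at most 0.05 requires a test sample of size 100, and RMS at most 0.01 requires 2500 test points.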
The problem with training-data error estimation can be explained by the following formula for the deviation variance:

$$\mathrm{Var}_{\mathrm{dev}}[\hat{\varepsilon}_n] = \sigma_{\hat{\varepsilon}}^2 + \sigma_{\varepsilon}^2 - 2\rho\,\sigma_{\hat{\varepsilon}}\sigma_{\varepsilon}, \qquad (6.171)$$

where $\sigma_{\hat{\varepsilon}}^2$, $\sigma_{\varepsilon}^2$, and $\rho$ are the variance of the error estimate, the variance of the error, and the correlation coefficient between the estimated and true errors, respectively. The deviation variance is driven down by small variances or a correlation coefficient near 1.

Consider the cross-validation error estimator. The error is estimated on the training data by randomly splitting the training data into $k$ folds (subsets) $S_n^i$ for $i = 1, 2, \dots, k$, training $k$ classifiers on $S_n - S_n^i$ for $i = 1, 2, \dots, k$, calculating the proportion of errors of each designed classifier on the appropriate left-out fold, and then averaging these proportions to obtain the cross-validation estimate of the originally designed classifier. Letting $k = n$ yields the leave-one-out estimator. Referring to Eq. 6.171, for small samples, cross-validation has large variance and little correlation with the true error (Braga-Neto and Dougherty, 2004). Hence, although with small folds cross-validation does not suffer too much from bias, it typically has large deviation variance. Moreover, the correlation coefficient is often close to zero and in some common models has been shown to be negative (Braga-Neto and Dougherty, 2010).

6.8.1 RMS bounds

For a classifier model $(c_n, \hat{\varepsilon}[c_n])$, we would like RMS bounds on the error estimator of the form $\mathrm{RMS}(\hat{\varepsilon}_n) \le \delta$ for $n \ge n_\delta$. Model validity can be characterized by such inequalities. If there are no assumptions on the feature-label distribution, meaning that the entire procedure is distribution-free, then there are three possibilities. First, if no validity criterion is specified, then the classifier model is ipso facto epistemologically meaningless. Second, if a validity criterion is specified and no distribution-free results are known for $C_n$ and $\Xi_n$, then again the model is meaningless. Third, if there exist distribution-free RMS bounds concerning $C_n$ and $\Xi_n$, then these bounds can be used to quantify model validity.
Regarding the third possibility, the following is an example of a distribution-free RMS bound for the leave-one-out error estimator with the discrete histogram rule and tie-breaking in the direction of class 0 (Devroye et al., 1996):

$$\mathrm{RMS}(\hat{\varepsilon}_n) \le \sqrt{\frac{1 + 6/e}{n} + \frac{6}{\sqrt{\pi(n - 1)}}}. \qquad (6.172)$$

Unfortunately, the bound is useless for small samples: for $n = 200$ the bound is 0.506. In general, there are very few cases in which distribution-free bounds are known; when they are known, they are useless for small samples.
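Evaluating the bound of Eq. 6.172 numerically confirms how loose it is for realistic sample sizes:

```python
from math import e, pi, sqrt

def loo_rms_bound(n):
    """Distribution-free RMS bound of Eq. 6.172 for the leave-one-out
    estimator with the discrete histogram rule."""
    return sqrt((1 + 6 / e) / n + 6 / sqrt(pi * (n - 1)))

print(round(loo_rms_bound(200), 3))    # 0.506, as cited in the text
print(round(loo_rms_bound(5000), 3))   # still far too large to be useful
```

Even at $n = 5000$ the bound remains above 0.2, which is vacuous for error probabilities in $[0, 1]$.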
Distribution-based bounds are needed. These require knowledge of the RMS, which means knowledge concerning the second-order moments of the joint distribution between the true and estimated errors. If the feature-label distribution belongs to an uncertainty class of feature-label distributions and RMS representations are known for feature-label distributions in the uncertainty class, then distribution-based RMS bounds follow. Specifically,

$$\mathrm{RMS}(\hat{\varepsilon}_n) \le \max_{u \in U} \mathrm{RMS}(\hat{\varepsilon}_n \mid F_u), \qquad (6.173)$$
where $\mathrm{RMS}(\hat{\varepsilon}_n \mid F_u)$ is the RMS of the error estimator under the assumption that the feature-label distribution is $F_u$. For instance, for LDA in the univariate Gaussian model, exact closed-form representations for the second-order moments are known, and the RMS is thereby obtained (Zollanvari et al., 2012). Unfortunately, there are few cases in which these moments are known.

If we assume a parameterized model in which the RMS is an increasing function of the Bayes error $\varepsilon_{\mathrm{bay}}$, as it is in the Gaussian model just cited, we can pose the following question: Given sample size $n$ and $\lambda > 0$, what is the maximum value $\mathrm{maxBayes}(\lambda)$ of the Bayes error such that $\mathrm{RMS}(\hat{\varepsilon}_n) \le \lambda$? If RMS is the measure of validity and $\lambda$ represents the largest acceptable RMS for the classifier model to be considered meaningful, then the epistemological requirement is characterized by $\mathrm{maxBayes}(\lambda)$. Given the relationship between model parameters and the Bayes error, the inequality $\varepsilon_{\mathrm{bay}} \le \mathrm{maxBayes}(\lambda)$ can be solved in terms of the parameters to arrive at a necessary modeling assumption.

To have scientific content, small-sample classification requires prior knowledge (Dougherty and Dalton, 2013). Regarding the feature-label distribution, there are two extremes: (1) the feature-label distribution is known, in which case the entire classification problem collapses to finding the Bayes classifier and Bayes error; and (2) the uncertainty class consists of all feature-label distributions, the distribution-free case, and we typically have no bound, or have one that is too loose for practice. In the middle ground, there is a trade-off between the size of the uncertainty class and the size of the sample. The uncertainty class must be sufficiently constrained (equivalently, the prior knowledge must be sufficiently great) that an acceptable bound can be achieved with an acceptable sample size.
6.8.2 Validity in the context of a prior distribution

Given the need for a distributional model to achieve useful performance bounds for classifier error estimation, a natural course of action is to define a prior distribution over the uncertainty class of feature-label distributions and then find the Bayesian MMSE error estimator, which is guaranteed, on average, to possess minimal RMS over the uncertainty class, which is precisely what has been done.
In this setting, the posteriors of the distribution parameters yield a sample-conditioned distribution on the true classifier error and, within the context of this distribution, the true error possesses moments (for a fixed sample and classifier), in particular, the expectation and variance. There is also a related MSE: for a fixed sample $S_n$ and an arbitrary error estimator $\hat{\varepsilon}_\bullet$, the sample-conditioned MSE relative to classifier $c$ is defined by

$$\mathrm{MSE}(\hat{\varepsilon}_\bullet[c] \mid S_n) = E_p\!\left[(\varepsilon_u[c] - \hat{\varepsilon}_\bullet)^2 \mid c; S_n\right]. \qquad (6.174)$$

This MSE differs from the one defined in Eq. 6.166. Whereas that one is based on the true error for a known feature-label distribution and involves expectation relative to the sampling distribution, the sample-conditioned MSE is based on our uncertain knowledge of the error and involves expectation relative to that uncertain knowledge (the posterior distribution). The sample-conditioned MSE for the Bayesian MMSE error estimator for class $y$ is given by

$$\mathrm{MSE}(\hat{\varepsilon}_U^y[c] \mid S_n) = E_p\!\left[(\varepsilon_{u_y}^y[c] - \hat{\varepsilon}_U^y[c; S_n])^2\right]. \qquad (6.175)$$

To simplify notation, implicitly assuming a classifier $c$, we shall write $\mathrm{MSE}(\hat{\varepsilon}_\bullet \mid S_n)$ and $\mathrm{MSE}(\hat{\varepsilon}_U^y \mid S_n)$ for Eqs. 6.174 and 6.175, respectively. The sample-conditioned MSE of the Bayesian MMSE error estimator $\hat{\varepsilon}_U[S_n]$ can be expressed in terms of the sample-conditioned MSEs of $\hat{\varepsilon}_U^0[S_n]$ and $\hat{\varepsilon}_U^1[S_n]$, which in turn may be decomposed into the first and second posterior moments of the true errors $\varepsilon_{u_0}^0$ and $\varepsilon_{u_1}^1$, respectively. Assuming posterior independence between $u_0$, $u_1$, and $c$, the sample-conditioned MSE of the Bayesian MMSE error estimator is given by

$$\mathrm{MSE}(\hat{\varepsilon}_U \mid S_n) = \mathrm{Var}_p[c]\,(\hat{\varepsilon}_U^0[S_n] - \hat{\varepsilon}_U^1[S_n])^2 + E_p[c^2]\,\mathrm{Var}_p[\varepsilon_{u_0}^0] + E_p[(1 - c)^2]\,\mathrm{Var}_p[\varepsilon_{u_1}^1] \qquad (6.176)$$

(Dalton and Dougherty, 2012a). This is equivalent to

$$\mathrm{MSE}(\hat{\varepsilon}_U \mid S_n) = \mathrm{Var}_p[c]\,(\hat{\varepsilon}_U^0[S_n] - \hat{\varepsilon}_U^1[S_n])^2 + E_p[c^2]\,\mathrm{MSE}(\hat{\varepsilon}_U^0 \mid S_n) + E_p[(1 - c)^2]\,\mathrm{MSE}(\hat{\varepsilon}_U^1 \mid S_n), \qquad (6.177)$$

where

$$\mathrm{MSE}(\hat{\varepsilon}_U^y \mid S_n) = \mathrm{Var}_p[\varepsilon_{u_y}^y] = E_p[(\varepsilon_{u_y}^y)^2] - (\hat{\varepsilon}_U^y)^2 \qquad (6.178)$$

for $y = 0, 1$. Closed-form expressions for the sample-conditioned MSE are available in (Dalton and Dougherty, 2012a). The sample-conditioned MSE of
the Bayesian MMSE error estimator converges to zero almost surely in both the discrete and (appropriate) Gaussian models (Dalton and Dougherty, 2012b).

The sample-conditioned MSE of an arbitrary error estimator $\hat{\varepsilon}_\bullet$ can be expressed in terms of the sample-conditioned MSE of the Bayesian MMSE error estimator as

$$\mathrm{MSE}(\hat{\varepsilon}_\bullet \mid S_n) = E_p[(\varepsilon_u - \hat{\varepsilon}_\bullet)^2 \mid S_n]$$
$$= E_p[(\varepsilon_u - \hat{\varepsilon}_U + \hat{\varepsilon}_U - \hat{\varepsilon}_\bullet)^2 \mid S_n]$$
$$= E_p[(\varepsilon_u - \hat{\varepsilon}_U)^2 \mid S_n] + 2(\hat{\varepsilon}_U - \hat{\varepsilon}_\bullet)E_p[\varepsilon_u - \hat{\varepsilon}_U \mid S_n] + (\hat{\varepsilon}_U - \hat{\varepsilon}_\bullet)^2 \qquad (6.179)$$
$$= \mathrm{MSE}(\hat{\varepsilon}_U \mid S_n) + (\hat{\varepsilon}_U - \hat{\varepsilon}_\bullet)^2.$$

$\mathrm{MSE}(\hat{\varepsilon}_\bullet \mid S_n)$, as well as its square root $\mathrm{RMS}(\hat{\varepsilon}_\bullet \mid S_n)$, is minimized when $\hat{\varepsilon}_\bullet = \hat{\varepsilon}_U$.

Equation 6.173 formulates the epistemological issue in terms of the RMS relative to the true error. The sample-conditioned RMS in the framework of a posterior distribution provides an alternative perspective. Recognizing that Eq. 6.174 applies to a classifier model $(c_n, \hat{\varepsilon}[c_n])$ arising from a rule model $(C_n, \Xi_n)$, we can view the sample-conditioned RMS from the following perspective: given $\lambda > 0$, choose $n$ sufficiently large such that $\mathrm{MSE}(\hat{\varepsilon}_n \mid S_n) < \lambda$ (Dalton and Dougherty, 2012b). This approach provides a form of censored sampling, where the sample size is not determined beforehand but depends on $\mathrm{MSE}(\hat{\varepsilon}_n \mid S_n)$, with $S_n \subset S_{n+1}$. While this provides a direct method to find a sample size assuring a desired sample-conditioned MSE, it does so relative to the prior distribution characterizing our knowledge of the feature-label distribution, not relative to the true feature-label distribution.
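Equation 6.179 is the usual shift identity for a posterior mean; a numeric sketch with a hypothetical beta posterior over the true error (all distributional choices here are illustrative, not from the text):

```python
import random

random.seed(3)

# Posterior samples of the true error under the sample-conditioned distribution;
# the Bayesian MMSE estimate is the posterior mean.
samples = [random.betavariate(4, 16) for _ in range(50_000)]
eps_u_hat = sum(samples) / len(samples)          # Bayesian MMSE error estimate
mse_u = sum((s - eps_u_hat) ** 2 for s in samples) / len(samples)

eps_dot = 0.25                                   # some other error estimate
mse_dot = sum((s - eps_dot) ** 2 for s in samples) / len(samples)

# Eq. 6.179: MSE(eps_dot | S_n) = MSE(eps_U | S_n) + (eps_U - eps_dot)^2
print(round(mse_dot, 6), round(mse_u + (eps_u_hat - eps_dot) ** 2, 6))
```

Because the cross term vanishes when centering at the mean, the identity holds exactly for the empirical distribution, and any other estimate necessarily has a larger sample-conditioned MSE.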
Our view throughout the entire book as been that the problem of insufficient data for model estimation can be framed in terms of a prior distribution over an uncertainty class, with operator design optimized relative to the prior or posterior distribution. With classification, we are simply treating error estimation as an operator for which there is insufficient data.
Yet even if a prior distribution is obtained via a transformation of existing scientific knowledge, as in the previous section, it remains a pragmatic construct to facilitate operator design. Looking at the optimization of Eq. 6.140, we need to postulate a class of potential prior distributions, put in the slackness variables, and perform an optimization relative to a specified criterion. Given a cost function, an IBR operator is optimal on average relative to the prior (or posterior) distribution, but the real desire is for an operator that is optimal relative to $\bar{u}$, the true value of $u$. Can an IBR operator somehow be “validated” on the model corresponding to $\bar{u}$? Strictly speaking, the question makes no sense if it means to show that an IBR operator is optimal for $\bar{u}$, which we do not know. An IBR operator has no direct connection to the full model. It is only related via the prior (or posterior) distribution.

IBR operators cannot cure the small-data epistemological problem. What they do is place operator design under uncertainty in a rigorous optimization framework grounded in a formal infrastructure utilizing prior knowledge and data, while providing uncertainty quantification relative to a translational objective at the level of the underlying processes. Within the bounds set by existing scientific knowledge, the formalization of uncertainty, which is the prior distribution, must be constructed via subjectively imposed criteria. Nevertheless, given its construction, the prior distribution together with the cost function jointly form a hypothesis from which an optimal operator can be deduced; in the case of classification, its error can be estimated optimally and the accuracy of that estimate quantified. For a fuller discussion of these epistemological issues within the historical evolution of science, see (Dougherty, 2016).
Chapter 7

Optimal Clustering

The first two steps in the optimization paradigm of Section 3.8 are constructing the mathematical model and defining a class of operators. These steps are mutual in the sense that one cannot begin to analyze a class of operators without knowing the mathematical structure of the operator domain, which in the case of random phenomena means specification of the underlying random process. In the case of clustering, given a finite point set, a cluster operator partitions the set, the input being the point set and the output being a partition, which is a disjoint collection of subsets whose union is the full point set. Just as a specific signal is a realization of a random function modeling a signal-generation process, a specific point set is a realization of a random point set modeling a point-set-generation process. The study of clustering must begin with a random set, or else it is scientifically meaningless. Moreover, the random set must be sufficiently endowed that it supports the analysis of partitioning, including partition error. Optimization involves choosing a cluster operator that minimizes the partition error.

Working with random sets is inherently more difficult than working with random vectors. Whereas a random vector is fully characterized by a probability distribution function, which may be statistically estimated from realizations, characterization of random sets requires Choquet capacities via the Choquet–Matheron–Kendall theorem [see (Dougherty, 1999)]. In one dimension, a probability distribution function involves probabilities of the half-infinite intervals $(-\infty, b]$, whereas a capacity functional involves probabilities for all compact sets in $\mathbb{R}$ (Stoyan et al., 1987; Cressie and Lasslett, 1987), thus making modeling and parameter estimation much more difficult.
Nevertheless, one has no option other than to study clustering in the framework of random sets if the aim is a general characterization of clustering performance and the subsequent discovery of optimal clustering algorithms.
7.1 Clustering

In this section we discuss some classical clustering approaches and in the next turn to mathematical modeling and clustering optimization.
If we envision clusters formed from a set $S$ of points $\mathbf{x}$ generated from a mixture of $m$ circular Gaussian conditional distributions, then it might seem reasonable that the points $\hat{\mathbf{a}}_1, \hat{\mathbf{a}}_2, \dots, \hat{\mathbf{a}}_m$ that minimize

$$\rho_S(\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_m) = \frac{1}{h(S)}\sum_{\mathbf{x} \in S}\min_{1 \le j \le m}\|\mathbf{x} - \mathbf{a}_j\|^2, \qquad (7.1)$$

where $h(S)$ is the number of points in $S$, are a good choice for the centroids of the $m$ subsamples arising from the conditional distributions. Let $V = \{V_1, V_2, \dots, V_m\}$ be the Voronoi diagram of $\mathbb{R}^d$ induced by $\{\hat{\mathbf{a}}_1, \hat{\mathbf{a}}_2, \dots, \hat{\mathbf{a}}_m\}$, a point $\mathbf{x}$ lying in $V_k$ if its distance to $\hat{\mathbf{a}}_k$ is no more than its distance to any other of the points $\hat{\mathbf{a}}_1, \hat{\mathbf{a}}_2, \dots, \hat{\mathbf{a}}_m$. For Euclidean-distance clustering, the points are clustered according to how they fall into the Voronoi diagram.

While perhaps motivated by a mixture of Gaussians, $\rho_S$ can be computed for any set $S$ of points. $\rho_S(\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_m)$ provides an empirical distance-based cost for the points $\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_m$, and the points $\hat{\mathbf{a}}_1, \hat{\mathbf{a}}_2, \dots, \hat{\mathbf{a}}_m$ minimize this cost, given $S$. $\rho_S(\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_m)$ does not provide a true cost because it is based on a specific point set. A true cost is given in terms of the distribution of the random set $\Xi$ for which $S$ is one realization. A true cost can be defined by

$$\rho_\Xi(\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_m) = E_\Xi\!\left[\frac{1}{h(\Xi)}\sum_{\mathbf{x} \in \Xi}\min_{1 \le j \le m}\|\mathbf{x} - \mathbf{a}_j\|^2\right], \qquad (7.2)$$

where the expectation is taken with respect to the distribution of the random set $\Xi$. For a realization $S$ of $\Xi$, $\rho_S$ estimates $\rho_\Xi$ via a single observation. The empirical and true costs may differ significantly for small point sets; however, if there exists a compact set $K$ such that $P(\Xi \subset K) = 1$, then their difference converges to zero with probability 1 as the number of points in $\Xi$ tends to $\infty$ (Linder et al., 1994). In practice, this convergence is of little value unless the number of points is very large. Moreover, what is the relationship between $\rho_\Xi$ and the partition error, which is the cost we ultimately care about?

Direct implementation of Euclidean-distance clustering is computationally prohibitive. A classical iterative approximation is given by the k-means algorithm, where $k$ refers to the number of clusters provided by the algorithm. Each point is placed into a unique cluster during each iteration, and the means are updated based on the clusters. Given a point set $S$ with $n$ points to be placed into $k$ clusters, initialize the algorithm with $k$ means, $\mathbf{m}_1, \mathbf{m}_2, \dots, \mathbf{m}_k$, among the points; for each point $\mathbf{x} \in S$, calculate the distance $\|\mathbf{x} - \mathbf{m}_i\|$ for $i = 1, 2, \dots, k$; form clusters $C_1, C_2, \dots, C_k$ by placing $\mathbf{x}$ into $C_i$ if $\|\mathbf{x} - \mathbf{m}_i\| \le \|\mathbf{x} - \mathbf{m}_j\|$ for $j = 1, 2, \dots, k$; update $\mathbf{m}_1, \mathbf{m}_2, \dots, \mathbf{m}_k$ as the means of $C_1, C_2, \dots, C_k$, respectively; and repeat until the means do not change. At each stage of the algorithm, the clusters are determined by the Voronoi
diagram associated with $\mathbf{m}_1, \mathbf{m}_2, \dots, \mathbf{m}_k$. Two internal problems with the k-means algorithm are the prior assumption on the number of means and the choice of means to seed the algorithm. More fundamentally, the k-means algorithm has no relation to the generating random set beyond the single realization on which it is operating.

Equation 7.1 can be rewritten in the following form:

$$\rho_S(\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_m) = \frac{1}{h(S)}\sum_{i=1}^n\sum_{j=1}^m P(C_j \mid \mathbf{x}_i)^b\,\|\mathbf{x}_i - \mathbf{a}_j\|^2, \qquad (7.3)$$

where $b = 1$, and $P(C_j \mid \mathbf{x}_i)$ is the probability that $\mathbf{x}_i \in C_j$, which is either 0 or 1, and is 1 only for the minimizing $j$ of Eq. 7.1. A fuzzy approach results from letting the probabilities $P(C_j \mid \mathbf{x}_i)$ reflect uncertainty so that cluster inclusion is not crisp, and letting $b > 0$ be a parameter affecting the degree to which a point can belong to more than a single cluster. The conditional probabilities are constrained by the requirement that their sum is 1 for any fixed $\mathbf{x}_i$. Let $p_j$ denote the prior probability of $C_j$. Since the probabilities $P(C_j \mid \mathbf{x}_i)$ are not estimable and are heuristically set, we view them as fuzzy membership functions. In this case, for the minimizing values of $\rho_S(\mathbf{a}_1, \mathbf{a}_2, \dots, \mathbf{a}_m)$, the partial derivatives with respect to $\mathbf{a}_j$ and $p_j$ satisfy $\partial\rho_S/\partial\mathbf{a}_j = 0$ and $\partial\rho_S/\partial p_j = 0$. These partial-derivative identities yield

$$\mathbf{m}_j = \frac{\sum_{i=1}^n P(C_j \mid \mathbf{x}_i)^b\,\mathbf{x}_i}{\sum_{i=1}^n P(C_j \mid \mathbf{x}_i)^b}, \qquad (7.4)$$

$$P(C_j \mid \mathbf{x}_i) = \frac{\left(1/\|\mathbf{x}_i - \mathbf{m}_j\|^2\right)^{1/(b-1)}}{\sum_{l=1}^k \left(1/\|\mathbf{x}_i - \mathbf{m}_l\|^2\right)^{1/(b-1)}}. \qquad (7.5)$$

Equations 7.4 and 7.5 lead to the fuzzy k-means iterative algorithm (Dunn, 1973; Bezdek, 1981). Initialize the algorithm with $b$, $k$ means $\mathbf{m}_1, \mathbf{m}_2, \dots, \mathbf{m}_k$, and the membership functions $P(C_j \mid \mathbf{x}_i)$ for $j = 1, 2, \dots, k$ and $i = 1, 2, \dots, n$, where the membership functions must be normalized so that their sum is 1 for any fixed $\mathbf{x}_i$; re-compute $\mathbf{m}_j$ and $P(C_j \mid \mathbf{x}_i)$ by Eqs. 7.4 and 7.5; and repeat until there are only small pre-specified changes in the means and membership functions. The intent of fuzzifying the k-means algorithm is to keep the means from getting “stuck” during the iterative procedure.

Hierarchical clustering iteratively joins clusters according to a similarity measure, often based on Euclidean distance. The basic procedure is to initialize the clusters by $C_i = \{\mathbf{x}_i\}$ for $i = 1, 2, \dots, n$ and by a desired final number $k$ of clusters, and then iteratively merge the nearest clusters according to the similarity measure until there are only $k$ clusters. An alternative is to continue to merge until the similarity measure satisfies some criterion. As stated, hierarchical clustering is agglomerative in the sense that points are
agglomerated into growing clusters. One can also consider divisive clustering, in which, beginning with a single cluster, the algorithm proceeds to iteratively split clusters. Various similarity measures have been proposed. Two popular ones are the minimum and maximum measures, given by

$$d_{\min}(C_i, C_j) = \min_{x \in C_i,\, y \in C_j} \|x - y\|, \quad (7.6)$$

$$d_{\max}(C_i, C_j) = \max_{x \in C_i,\, y \in C_j} \|x - y\|. \quad (7.7)$$
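For illustration, the basic agglomerative procedure under either measure can be sketched with one-dimensional points and a naive pairwise search (the list encoding of clusters is an assumption):

```python
def agglomerate(points, k, linkage="single"):
    """Merge the nearest pair of clusters under d_min ("single") or
    d_max ("complete"), Eqs. 7.6 and 7.7, until k clusters remain."""
    clusters = [[p] for p in points]
    agg = min if linkage == "single" else max
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = agg(abs(x - y) for x in clusters[i] for y in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return clusters
```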
Hierarchical clustering using the minimum distance is called nearest-neighbor clustering. If the algorithm halts when the distance between nearest clusters exceeds a pre-specified threshold, then it is called single-linkage clustering. While the algorithm may be intuitively pleasing owing to the manner in which it generates a hierarchical subcluster structure based on nearest neighbors, it is extremely sensitive to noise and can produce strange results, such as elongated clusters. It is also very sensitive to early mergings since, once joined, points cannot be separated. Farthest-neighbor clustering results from using the maximum distance. If it halts when the distance between nearest clusters exceeds a pre-specified threshold, then it is called complete-linkage clustering. Given a set of clusters at any stage of the algorithm, it merges the clusters for which the greatest distance between points in the two clusters is minimized. This approach counteracts the tendency toward elongation from which nearest-neighbor clustering suffers.
While Euclidean-distance and hierarchical clustering have some intuition behind them, they are not model-based, in the sense that their clustering decisions are not based in any manner on a random point set. If point sets are generated by randomly sampling from a mixture of Gaussian distributions, then a natural way to approach clustering would be to model the random set as a Gaussian mixture, estimate the Gaussian distributions from the data, and cluster the points based on these estimates (Banfield and Raftery, 1993; Celeux and Govaert, 1993). Suppose that there are l distributions, not necessarily Gaussian, possessing the parameterized densities f_1(x|θ_1), f_2(x|θ_2), ..., f_l(x|θ_l) corresponding to the labels 1, 2, ..., l, each conditioned on an unknown parameter vector θ_k. For the sample data x_1, x_2, ..., x_n, the likelihood function for this mixture model is given by

$$L(\theta_1, \theta_2, \ldots, \theta_l) = \prod_{i=1}^{n} \sum_{k=1}^{l} p_k f_k(x_i \mid \theta_k), \quad (7.8)$$
where p_k is the probability that a point is drawn from f_k(x|θ_k). For a Gaussian mixture, the parameters take the form θ_k = (μ_k, Σ_k). Various
models for Gaussian mixtures have been proposed according to the form of the covariance matrices. The less constrained the model, the better it can fit the data, but constrained models can have far fewer parameters. The clustering algorithm proceeds by first specifying the number of clusters and then using the expectation–maximization (EM) algorithm to estimate the unknown parameters. In the E-step, the probability of each data point belonging to each cluster is estimated conditionally on the model parameters for the current step; in the M-step, the model parameters are estimated given the current membership probabilities. Upon convergence of the EM algorithm, each point is assigned to the cluster for which it has the maximum conditional probability. While this mixture-model approach does address the notion that there is a random set responsible for the data, it does not rest on a theory of partitioning within random point sets, and there is no error criterion by which to measure the goodness of a cluster. Moreover, model estimation uses data arising from a single realization of the random point set. Even if we make the assumption that the means and covariance matrices of the Gaussian distributions constituting the mixture are fixed, an assumption that is very restrictive in terms of applications, we cannot expect good estimation from a single observation. That being said, the mixture-model approach recognizes the role of the underlying random process, including its unknown parameters, and attempts to tie both the data and algorithm to the phenomena. Early work on clustering error was based on a mixture model generating the data in conjunction with partitioning error (Dougherty et al., 2002).
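The E- and M-steps described above can be sketched for a two-component one-dimensional Gaussian mixture (the initialization from the data extremes, the fixed iteration count, and the variance floor are simplifying assumptions):

```python
import math

def em_gmm(xs, iters=100):
    """EM for a two-component 1-D Gaussian mixture: the E-step computes
    membership probabilities; the M-step re-estimates weights, means, and
    variances; points are finally assigned by maximum membership."""
    m = [min(xs), max(xs)]        # crude initialization (assumption)
    v = [1.0, 1.0]
    p = [0.5, 0.5]
    def pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    for _ in range(iters):
        # E-step: P(component k | x_i), normalized over k
        G = []
        for x in xs:
            w = [p[k] * pdf(x, m[k], v[k]) for k in range(2)]
            s = sum(w)
            G.append([wk / s for wk in w])
        # M-step: update mixing weights, means, and variances
        for k in range(2):
            nk = sum(g[k] for g in G)
            p[k] = nk / len(xs)
            m[k] = sum(g[k] * x for g, x in zip(G, xs)) / nk
            v[k] = max(sum(g[k] * (x - m[k]) ** 2 for g, x in zip(G, xs)) / nk, 1e-6)
    labels = [0 if g[0] >= g[1] else 1 for g in G]   # max-membership assignment
    return m, labels
```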
7.2 Bayes Clusterer

Prior to defining optimality, we must specify the underlying random process and operator family (Dougherty and Brun, 2004). Given a point set S ⊂ R^d, let h(S) denote the number of points in S. A random labeled point process (RLPP) is a pair (Ξ, Λ), where Ξ is a point process generating a point set S, and Λ generates random labels on the points in S. Ξ maps from a probability space to (N, 𝒩), where N is a family of finite sequences in R^d and 𝒩 is the smallest σ-algebra on N such that, for any Borel set B ⊂ R^d, the mapping S → h(S ∩ B) is measurable. The probability measure ν of Ξ is determined by the probabilities ν(Y) for Y ∈ 𝒩, or equivalently, by the Choquet–Matheron–Kendall theorem, by the system of probabilities P(Ξ ∩ K ≠ ∅) over all compact sets K ⊂ R^d. A random labeling is a family Λ = {Φ_S : S ∈ N}, where Φ_S is a random label function on the point set S ∈ N. Denoting the set of labels on individual points by L = {1, 2, ..., l}, Φ_S has a probability mass function P_S on L^S defined by P_S(φ_S) = P(Φ_S = φ_S), where φ_S : S → L is a deterministic label function assigning a label to each point in S.
7.2.1 Clustering error

A label operator λ maps point sets to label functions: λ(S) = φ_{S,λ} ∈ L^S. For any set S, label function φ_S, and label operator λ, the label mismatch error is defined by

$$\varepsilon_\lambda(S, \varphi_S) = \frac{1}{h(S)} \sum_{x \in S} I_{\varphi_S(x) \neq \varphi_{S,\lambda}(x)}. \quad (7.9)$$
The error of label function λ(S) is given by

$$\varepsilon_\lambda(S) = E_{\Phi_S}[\varepsilon_\lambda(S, \Phi_S) \mid S] = \frac{1}{h(S)} \sum_{x \in S} P(\Phi_S(x) \neq \varphi_{S,\lambda}(x) \mid S), \quad (7.10)$$
and the error of label operator λ is defined by

$$\varepsilon[\lambda] = E_\Xi E_{\Phi_\Xi}[\varepsilon_\lambda(\Xi, \Phi_\Xi)]. \quad (7.11)$$
From the probability expression in the summation of Eq. 7.10, we see that the optimal label operator λ*, minimizing both ε_λ(S) and ε[λ], and called the Bayes label operator, is given by the label functions

$$\varphi_{S,\lambda^*}(x) = \arg\max_{j \in L} P(\Phi_S(x) = j \mid S). \quad (7.12)$$

We minimize labeling error by assigning each point the label corresponding to the maximum marginal probability.
Clustering involves identifying partitions of a point set rather than the actual labeling, where a partition of S into l clusters has the form P_S = {S_1, S_2, ..., S_l} such that the S_i are disjoint and S = ∪_{i=1}^{l} S_i. A cluster operator z maps point sets to partitions, z(S) = P_{S,z}. Clustering is affected by a label-switching problem: every cluster operator z has associated with it a family F_z of label operators that induce the same partitions as z. That is, λ ∈ F_z if and only if φ_{S,λ} induces the same partition as z(S) for all S ∈ N, where a label function φ_S induces partition P_S = {S_1, S_2, ..., S_l} if

$$S_i = \{x \in S : \varphi_S(x) = l_i\} \quad (7.13)$$

for distinct l_i ∈ L. For set S, label function φ_S, and cluster operator z, define the cluster mismatch error by

$$\varepsilon_z(S, \varphi_S) = \min_{\lambda \in F_z} \varepsilon_\lambda(S, \varphi_S), \quad (7.14)$$

the error of partition z(S) by
$$\varepsilon_z(S) = E_{\Phi_S}[\varepsilon_z(S, \Phi_S) \mid S] = E_{\Phi_S}\Big[\min_{\lambda \in F_z} \varepsilon_\lambda(S, \Phi_S) \,\Big|\, S\Big], \quad (7.15)$$
and the error of cluster operator z by

$$\varepsilon[z] = E_\Xi E_{\Phi_\Xi}[\varepsilon_z(\Xi, \Phi_\Xi)] = E_\Xi E_{\Phi_\Xi}\Big[\min_{\lambda \in F_z} \varepsilon_\lambda(\Xi, \Phi_\Xi)\Big]. \quad (7.16)$$
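Equations 7.9 and 7.14 can be made concrete in a few lines: compute the label mismatch between two labelings, then minimize over relabelings of the clustering output (labelings are encoded as plain lists, an illustrative choice):

```python
from itertools import permutations

def label_mismatch(f1, f2):
    """Eq. 7.9: proportion of points on which two labelings disagree."""
    return sum(a != b for a, b in zip(f1, f2)) / len(f1)

def cluster_mismatch(f_true, f_clust, labels):
    """Eq. 7.14: minimize the label mismatch over all relabelings
    (label switchings) of the clustering's labels."""
    best = 1.0
    for perm in permutations(labels):
        relabel = dict(zip(labels, perm))
        best = min(best, label_mismatch(f_true, [relabel[a] for a in f_clust]))
    return best
```

The second function reports the same error for a clustering regardless of which label names it happened to assign to each cluster.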
The preceding error definitions can be represented in terms of risk-oriented cost functions (Dalton et al., 2015). Considering the error of a label function λ(S), Eq. 7.10 can be rewritten as

$$\varepsilon_\lambda(S) = \sum_{\varphi_S \in L^S} c_S(\varphi_{S,\lambda}, \varphi_S)\, P_S(\varphi_S), \quad (7.17)$$

where the label cost function,

$$c_S(\varphi_S^1, \varphi_S^2) = \frac{1}{h(S)} \sum_{x \in S} I_{\varphi_S^1(x) \neq \varphi_S^2(x)}, \quad (7.18)$$

defines the cost of assigning labeling φ_S^1 when the true labeling is φ_S^2 as the proportion of mislabeled points in S. Thus, ε_λ(S), the error of λ(S), may be viewed as the average risk of assigning labeling φ_{S,λ} under the label cost function. c_S depends on neither the model, which only affects the probabilities P_S, nor the actual points in S. It depends only on the relative labels φ_S^1(x) and φ_S^2(x) for each point x ∈ S. Given a set S = {x_1, ..., x_{h(S)}} and letting J = {1, 2, ..., h(S)}, for a labeling φ_S of S, define a corresponding labeling φ_J on J, where φ_J(j) = φ_S(x_j). Then c_J(φ_J^1, φ_J^2) = c_S(φ_S^1, φ_S^2). The cost function may be pre-computed independently of the model and S, while the probabilities P_S(φ_S) determine the labeling error for a given S.
We now consider the error of a partition z(S) in terms of risk. For a given point set S and labeling φ_S, from Eq. 7.14,

$$\varepsilon_z(S, \varphi_S) = \frac{1}{h(S)} \min_{\lambda \in F_z} \sum_{x \in S} I_{\varphi_S(x) \neq \varphi_{S,\lambda}(x)}. \quad (7.19)$$
Since S is a fixed finite set, the minimum need only be taken over the finite set {φ_{S,λ} : λ ∈ F_z} ⊂ L^S [the family of label functions inducing partition z(S)]. For any partition P_S, define G_{P_S} = {φ_S : φ_S induces P_S}. Then
$$\varepsilon_z(S, \varphi_S) = \frac{1}{h(S)} \min_{\varphi_{S,z} \in G_{z(S)}} \sum_{x \in S} I_{\varphi_S(x) \neq \varphi_{S,z}(x)}. \quad (7.20)$$

Letting K_S be the set of all possible partitions of S, the partition error for z(S) is defined by

$$\varepsilon_z(S) = \sum_{\varphi_S \in L^S} \Big[\frac{1}{h(S)} \min_{\varphi_{S,z} \in G_{z(S)}} \sum_{x \in S} I_{\varphi_S(x) \neq \varphi_{S,z}(x)}\Big] P_S(\varphi_S) = \sum_{P_S \in K_S} \sum_{\varphi_S \in G_{P_S}} \Big[\frac{1}{h(S)} \min_{\varphi_{S,z} \in G_{z(S)}} \sum_{x \in S} I_{\varphi_S(x) \neq \varphi_{S,z}(x)}\Big] P_S(\varphi_S). \quad (7.21)$$
The term inside the brackets is constant for all φ_S ∈ G_{P_S}, since the minimum essentially resolves the label-switching problem relative to φ_S, and the φ_S in G_{P_S} are all identical except for permutations of the labels. Hence,

$$\varepsilon_z(S) = \sum_{P_S \in K_S} \sum_{\varphi_S \in G_{P_S}} \Big[\frac{1}{h(S)} \min_{\varphi_{S,z} \in G_{z(S)}} \sum_{x \in S} I_{\varphi_{S,P_S}(x) \neq \varphi_{S,z}(x)}\Big] P_S(\varphi_S), \quad (7.22)$$
where φ_{S,P_S} is any fixed member of G_{P_S}. We can view ε_z(S) as a special case of a general partition error applying to any partition Q_S, defined by

$$\varepsilon(S, Q_S) = \sum_{P_S \in K_S} c_S(Q_S, P_S)\, P_S(P_S), \quad (7.23)$$
where we define the natural partition cost function by

$$c_S(Q_S, P_S) = \frac{1}{h(S)} \min_{\varphi_{S,Q_S} \in G_{Q_S}} \sum_{x \in S} I_{\varphi_{S,P_S}(x) \neq \varphi_{S,Q_S}(x)}, \quad (7.24)$$

with φ_{S,P_S} being any member of G_{P_S}, and where

$$P_S(P_S) = \sum_{\varphi_S \in G_{P_S}} P_S(\varphi_S) \quad (7.25)$$

is the probability mass on the partitions P_S ∈ K_S of S. Then

$$\varepsilon_z(S) = \varepsilon(S, z(S)) = \sum_{P_S \in K_S} c_S(z(S), P_S)\, P_S(P_S). \quad (7.26)$$
The cost c_S between two partitions depends on neither the model nor the actual points in S or their order. To illustrate, given a set S = {x_1, ..., x_{h(S)}}, let J = {1, 2, ..., h(S)}, and for any partition P_S = {S_1, S_2, ..., S_l} of S, define a corresponding partition P_J = {J_1, J_2, ..., J_l} on J, where j ∈ J_i if and only if x_j ∈ S_i. Then

$$c_J(Q_J, P_J) = c_S(Q_S, P_S). \quad (7.27)$$

Hence, c_J may be pre-computed as a matrix and utilized for any model or point set S. It is demonstrated in (Dalton et al., 2015) that the partition cost function c_S is a metric on K_S.
The foregoing analysis assumes throughout the partition error ε_z(S) as originally defined in (Dougherty and Brun, 2004), in particular, the general partition error and partition cost function of Eqs. 7.23 and 7.24, respectively. Regarding the latter, the fact that it can be evaluated with φ_{S,P_S} being any member of G_{P_S} is a consequence of the resolution of the switching problem in Eq. 7.21. In fact, Eq. 7.23 is general and need not be evaluated with the natural partition cost function of Eq. 7.24. Other cost functions can be used, for instance, the zero-one cost function, defined by c_S(Q_S, P_S) = 0 if Q_S = P_S and c_S(Q_S, P_S) = 1 otherwise. Under the zero-one cost function, ε(S, Q_S) is the probability that any point in S is misclustered. Various loss functions have been considered for clustering (Binder, 1978; Quintana and Iglesias, 2003; Meilă, 2007; Fritsch and Ickstadt, 2009). Whereas these references define loss over label functions, we define cost directly over partitions, which is mathematically more direct and automatically treats the label-switching problem. Moreover, these works treat loss abstractly, absent a practical notion of clustering error, like the expected (minimum) number of mislabeled points. In the remainder of the chapter, we focus exclusively on the partition error ε_z(S).
Historically, the notion of a validity measure has been used to evaluate clustering results based on a single realization of the random point-set process.
Validity measures fall broadly into three classes: internal validation is based on calculating properties of the resulting clusters; relative validation is based on comparisons of partitions generated by the same algorithm with different parameters or different subsets of the data; and external validation compares the partition generated by the clustering algorithm and a given partition of the data. In (Brun et al., 2007), a number of validity algorithms have been examined relative to their correlation with clustering error across clustering algorithms and random-point-set models. The results indicate that the performance of validity indices is highly variable, often lacking even modest correlation with clustering error. For complex models or when a clustering algorithm yields complex clusters, both the internal and relative indices fail to
predict the error of the algorithm. Some external indices appear to perform well, whereas others do not.

7.2.2 Bayes clustering error

A Bayes clusterer z* is a clusterer with minimal error ε[z*]. Since ε[z] = E_Ξ[ε_z(Ξ)], and ε_z(S) depends on the clusterer z only through z(S), minimization of ε[z] can be achieved by minimizing ε_z(S) for every S ∈ N separately. Hence, a Bayes clusterer is defined by a Bayes partition z*(S) for each set S ∈ N. By Eq. 7.26,

$$z^*(S) = \arg\min_{z(S) \in K_S} \varepsilon_z(S) = \arg\min_{z(S) \in K_S} \sum_{P_S \in K_S} c_S(z(S), P_S)\, P_S(P_S) \quad (7.28)$$
(Dalton et al., 2015). ε_{z*}(S) and ε[z*] = E_Ξ[ε_{z*}(Ξ)] are called the Bayes partitioning error and the Bayes clustering error, respectively. Since the Bayes clusterer may be solved for each fixed S individually, we will sometimes write partitions and label functions without a subscript S, with the understanding that they are relative to a fixed point set S.
Relative to optimal clustering, the synthesis scheme of Section 3.8 takes the following form:

1. The model is a random labeled finite point-set process.
2. The operator class consists of cluster operators.
3. The optimization minimizes the cluster-operator error.
4. The optimization is solved by the Bayes cluster operator given in Eq. 7.28.
From Eq. 7.28 we see that finding the Bayes clusterer involves computing all costs c_S(Q_S, P_S) for Q_S, P_S ∈ K_S. As noted following Eq. 7.27, these can all be pre-computed as a matrix. Each row can be viewed as a candidate partition Q_S, and each column can be viewed as a reference partition P_S to which the candidates are compared. In practice, it is possible that not all partition pairs in K_S × K_S need their costs computed to derive a Bayes clusterer. Recognizing this possibility, let C_S ⊂ K_S be a set of candidate partitions that comprise the search space and R_S ⊂ K_S be a set of reference partitions that have known probabilities. Let C_S = {Q^1, ..., Q^{|C_S|}} and R_S = {P^1, ..., P^{|R_S|}}. Then the optimal clustering problem can be formulated relative to a column vector of probabilities p = {p_j}, where p_j = P_S(P^j) for j = 1, ..., |R_S|; a cost matrix C = {c_{ij}}, where c_{ij} = c_S(Q^i, P^j) for i = 1, ..., |C_S| and j = 1, ..., |R_S|; and a column vector of the candidate partition errors, e = {e_i}, where e_i = ε(S, Q^i) is the error associated with
candidate partition Q^i for i = 1, ..., |C_S|. From Eq. 7.23, e = Cp. The Bayes partition for S is then Q^{i*}, where

$$i^* = \arg\min_{i = 1, \ldots, |C_S|} e_i, \quad (7.29)$$

and the Bayes partitioning error, equivalent to the error of the Bayes partition Q^{i*}, is e_{i*}. It is possible that P_S(Q^{i*}) = 0.
The representation e = Cp is problematic owing to the size of C_S and R_S. In a brute-force search, C_S = K_S, R_S = K_S, and C is of size |K_S| × |K_S|, which can be prohibitively large even for moderate-size point sets. A significant amount of effort in (Dalton et al., 2015) has gone into complexity reduction. One approach reduces C_S and R_S without loss of optimality by determining a subset of candidate partitions that is guaranteed to contain the Bayes partition. Another is to reduce the number of reference partitions. Any reference partition with probability zero can be omitted. Beyond that, (Dalton et al., 2015) gives an iterative algorithm that provides an increasing sequence of reference-partition sets R_S^1 ⊂ R_S^2 ⊂ ··· and an inequality of the form

$$\varepsilon_k^l \le \varepsilon[z^*] \le \varepsilon_k^u, \quad (7.30)$$

where the Bayes clustering error ε[z*] is bounded below and above by functions ε_k^l and ε_k^u of R_S^1, R_S^2, ..., R_S^k that converge towards the Bayes clustering error as k increases and equal the Bayes error for k = |K_S|, thereby facilitating the desired accuracy. By using combinations of lossless and lossy compression, point sets with up to several thousand points are considered.
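The e = Cp formulation can be sketched by brute force for a toy point set (partitions are encoded as frozensets of frozensets of point indices, and the reference-partition probabilities are supplied by the caller; both encodings are assumptions):

```python
from itertools import product, permutations

def all_partitions(n, l):
    """Enumerate all partitions of points 0..n-1 into at most l clusters."""
    seen = set()
    for labels in product(range(l), repeat=n):
        blocks = frozenset(
            frozenset(i for i in range(n) if labels[i] == j) for j in set(labels)
        )
        seen.add(blocks)
    return sorted(seen, key=lambda b: sorted(sorted(x) for x in b))

def to_labels(blocks, n):
    lab = [0] * n
    for j, b in enumerate(sorted(sorted(x) for x in blocks)):
        for i in b:
            lab[i] = j
    return lab

def cost(Q, P, n, l):
    """Natural partition cost (Eq. 7.24): minimum fraction of disagreeing
    points over all label switchings."""
    q, p = to_labels(Q, n), to_labels(P, n)
    best = n
    for perm in permutations(range(l)):
        best = min(best, sum(perm[a] != b for a, b in zip(q, p)))
    return best / n

def bayes_partition(n, l, pmf):
    """e = Cp: each candidate's error is its cost against every reference
    partition, weighted by that partition's probability (Eqs. 7.23, 7.28)."""
    K = all_partitions(n, l)
    errs = [(sum(cost(Q, P, n, l) * pmf.get(P, 0.0) for P in K), Q) for Q in K]
    return min(errs, key=lambda t: t[0])
```

As the text notes, this brute force is only workable for very small n; the reductions in (Dalton et al., 2015) exist precisely because |K_S| grows explosively.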
7.3 Separable Point Processes

Up to this point, we have characterized RLPPs with a point process Ξ that generates point sets S, followed by an S-conditioned labeling process Λ that generates label functions φ_S. In practice, we can characterize an RLPP as a process that draws a sample size n, a set of labels for n points, and a set of n points with distributions corresponding to the labels. An RLPP is said to be separable if (1) a label function φ is generated from an independent label-generating process Φ with probability mass function P(Φ = φ) over the set of all label functions with domain {1, 2, ..., n}, (2) a random parameter vector r is independently drawn from a distribution f(r), and (3) the kth point x_k in S, with corresponding label i = φ(k), is independently drawn from a conditional distribution f(x|i, r). For a separable RLPP, the probability of label function φ_S ∈ L^S given S = {x_1, x_2, ..., x_n} is
$$P(\Phi_S = \varphi_S \mid S) \propto f(S \mid \varphi)\, P(\Phi = \varphi), \quad (7.31)$$

where φ(k) = φ_S(x_k),

$$f(S \mid \varphi) = \int \prod_{i=1}^{l} \prod_{x \in S_i} f(x \mid i, r)\, f(r)\, dr, \quad (7.32)$$

and S_i = {x ∈ S : φ_S(x) = i}. For clarification, Φ and φ denote label-function mappings from {1, 2, ..., n} to labels, whereas Φ_S and φ_S denote label-function mappings from {x_1, x_2, ..., x_n} to labels. Φ and φ carry only information about labels without requiring S; Φ_S and φ_S carry information about labels given S. In particular, if φ is given and S has been ordered by S = {x_1, x_2, ..., x_n}, then typically we set φ_S(x_k) = φ(k) to connect the two label functions. Then the probability P(Φ_S = φ_S|S) is the conditional probability on labels given S, and P(Φ = φ) is a probability on labels that does not depend on S. Going forward, we let P_S(φ_S) = P(Φ_S = φ_S|S) and P(φ_S) = P(Φ = φ).
If r = (r_1, ..., r_l), where the r_i are mutually independent parameter vectors and the label-i-conditional distribution depends on only r_i, that is, if f(x|i, r) = f(x|i, r_i) for i = 1, ..., l, then

$$f(S \mid \varphi) = \prod_{i=1}^{l} \int \prod_{x \in S_i} f(x \mid i, r_i)\, f(r_i)\, dr_i. \quad (7.33)$$
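One realization of a separable RLPP, following steps (1)–(3) above, can be sketched generically (the label probabilities and sampling functions passed in the usage below are illustrative assumptions):

```python
import random

def sample_separable_rlpp(n, l, sample_r, sample_point, label_probs):
    """One realization of a separable RLPP: draw a label function phi,
    draw the parameter vector r, then draw each point from f(x | phi(k), r)."""
    phi = [random.choices(range(1, l + 1), weights=label_probs)[0] for _ in range(n)]
    r = sample_r()
    S = [sample_point(i, r) for i in phi]
    return S, phi
```

Here `sample_r` draws the shared parameter vector and `sample_point` draws one point given a label and that vector; both are hypothetical stand-ins for the label-conditional densities in Eq. 7.32.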
If S_i is an empty set for some i, then we define ∏_{x∈S_i} f(x) = 1 and ∑_{x∈S_i} f(x) = 0. We will consider three Gaussian models with proper distributions on the model parameters. Handling improper distributions is addressed in (Dalton et al., 2015). To ease notation, instead of f(x|i, r_i), we will often write f_i(x; r_i).
In the first model, point sets are generated by l fixed distributions. For each i ∈ {1, ..., l}, define fixed parameters r_i = (μ_i, Σ_i), where μ_i is a length-d real vector and Σ_i is a positive definite d × d matrix. A point x ∈ S with label i has a Gaussian distribution with mean μ_i and covariance Σ_i; that is, f_i(x; r_i) ~ N(μ_i, Σ_i). Then

$$\prod_{x \in S_i} f_i(x; r_i) = \frac{1}{(2\pi)^{dn_i/2} |\Sigma_i|^{n_i/2}} \exp\Big(-\frac{1}{2} \sum_{x \in S_i} (x - \mu_i)^T \Sigma_i^{-1} (x - \mu_i)\Big), \quad (7.34)$$

where n_i is the cardinality of S_i. If n_i ≥ 2, let μ̂_i and Σ̂_i be the sample mean and covariance of the points in S_i, respectively. Then
$$\sum_{x \in S_i} (x - \mu_i)^T \Sigma_i^{-1} (x - \mu_i) = \operatorname{tr}(F_i^* \Sigma_i^{-1}), \quad (7.35)$$

where

$$F_i^* = (n_i - 1)\hat\Sigma_i + n_i (\mu_i - \hat\mu_i)(\mu_i - \hat\mu_i)^T. \quad (7.36)$$

If n_i = 1, then Eq. 7.35 holds with

$$F_i^* = (\mu_i - \hat\mu_i)(\mu_i - \hat\mu_i)^T. \quad (7.37)$$

From Eq. 7.34,

$$\prod_{x \in S_i} f_i(x; r_i) = \frac{1}{(2\pi)^{dn_i/2} |\Sigma_i|^{n_i/2}} \exp\Big(-\frac{\operatorname{tr}(F_i^* \Sigma_i^{-1})}{2}\Big). \quad (7.38)$$

Since the r_i are fixed, f(r_i) is a point mass at r_i = (μ_i, Σ_i). Hence, from Eqs. 7.31 and 7.33,

$$P_S(\varphi_S) \propto P(\varphi_S) \prod_{i=1}^{l} \frac{1}{(2\pi)^{dn_i/2} |\Sigma_i|^{n_i/2}} \exp\Big(-\frac{1}{2}\operatorname{tr}(F_i^* \Sigma_i^{-1})\Big) \propto P(\varphi_S)\Big[\prod_{i=1}^{l} |\Sigma_i|^{n_i}\Big]^{-1/2} \exp\Big(-\frac{1}{2} \sum_{i=1}^{l} \operatorname{tr}(F_i^* \Sigma_i^{-1})\Big), \quad (7.39)$$
where by convention, F_i^* = 0 if n_i = 0. The Bayes clusterer is given by Eqs. 7.28 and 7.25. The probability of a label function is larger if the sample means μ̂_i are close to the true means μ_i (so that F_i^* is smaller), and if the shapes of the sample covariances Σ̂_i are close to those of the known covariances Σ_i, in the sense that Σ̂_i ≈ aΣ_i [so that tr(F_i^* Σ_i^{-1}) is smaller]. This probability is also larger if Σ̂_i is smaller, in the sense that Σ̂_i ≈ aΣ_i is better for small a. Thus, "tighter" clusters have higher probability.
In the second model, point sets are generated by distributions with random means and known covariances. For each i ∈ {1, ..., l}, two fixed hyperparameters are defined: a real number ν_i > 0 and a length-d real vector m_i. We define parameters r_i = (μ_i, Σ_i), where the Σ_i are fixed, symmetric positive definite d × d matrices and the μ_i have independent Gaussian distributions with mean m_i and covariance (1/ν_i)Σ_i. A point x ∈ S having label i is drawn from a Gaussian distribution with mean μ_i and covariance Σ_i, so that f_i(x; r_i) ~ N(μ_i, Σ_i). When n_i ≥ 1, define
$$L_i(S_i) = \int \Big[\prod_{x \in S_i} f_i(x; r_i)\Big] f(r_i)\, dr_i. \quad (7.40)$$

For a proper distribution on the μ_i, where ν_i > 0 for all i, applying Eq. 7.38 for n_i ≥ 2,

$$L_i(S_i) = \frac{\nu_i^{d/2}}{(2\pi)^{d(n_i+1)/2} |\Sigma_i|^{(n_i+1)/2}} \exp\Big(-\frac{1}{2}\operatorname{tr}\big((n_i - 1)\hat\Sigma_i \Sigma_i^{-1}\big)\Big) \int \exp\Big(-\frac{n_i}{2}(\mu_i - \hat\mu_i)^T \Sigma_i^{-1} (\mu_i - \hat\mu_i) - \frac{\nu_i}{2}(\mu_i - m_i)^T \Sigma_i^{-1} (\mu_i - m_i)\Big)\, d\mu_i. \quad (7.41)$$

Applying the fact that

$$n_i(\mu_i - \hat\mu_i)^T \Sigma_i^{-1} (\mu_i - \hat\mu_i) + \nu_i(\mu_i - m_i)^T \Sigma_i^{-1} (\mu_i - m_i) = (n_i + \nu_i)\Big(\mu_i - \frac{n_i \hat\mu_i + \nu_i m_i}{n_i + \nu_i}\Big)^T \Sigma_i^{-1} \Big(\mu_i - \frac{n_i \hat\mu_i + \nu_i m_i}{n_i + \nu_i}\Big) + \frac{n_i \nu_i}{n_i + \nu_i}(\hat\mu_i - m_i)^T \Sigma_i^{-1} (\hat\mu_i - m_i), \quad (7.42)$$

and integrating out a Gaussian distribution on μ_i,

$$L_i(S_i) = \frac{\nu_i^{d/2}}{(n_i + \nu_i)^{d/2} (2\pi)^{dn_i/2} |\Sigma_i|^{n_i/2}} \exp\Big(-\frac{1}{2}\operatorname{tr}\big((n_i - 1)\hat\Sigma_i \Sigma_i^{-1}\big)\Big) \exp\Big(-\frac{1}{2}\,\frac{n_i \nu_i}{n_i + \nu_i}(\hat\mu_i - m_i)^T \Sigma_i^{-1} (\hat\mu_i - m_i)\Big) = \frac{\nu_i^{d/2}}{(n_i + \nu_i)^{d/2} (2\pi)^{dn_i/2} |\Sigma_i|^{n_i/2}} \exp\Big(-\frac{1}{2}\operatorname{tr}(C_i^* \Sigma_i^{-1})\Big), \quad (7.43)$$

where

$$C_i^* = (n_i - 1)\hat\Sigma_i + \frac{n_i \nu_i}{n_i + \nu_i}(\hat\mu_i - m_i)(\hat\mu_i - m_i)^T. \quad (7.44)$$

If n_i = 1, then Eq. 7.43 holds with

$$C_i^* = \frac{\nu_i}{\nu_i + 1}(\hat\mu_i - m_i)(\hat\mu_i - m_i)^T. \quad (7.45)$$
From Eqs. 7.33 and 7.43,

$$P_S(\varphi_S) \propto P(\varphi_S)\Big[\prod_{i=1}^{l} \frac{\nu_i^{d}}{(n_i + \nu_i)^{d}\, |\Sigma_i|^{n_i}}\Big]^{1/2} \exp\Big(-\frac{1}{2} \sum_{i=1}^{l} \operatorname{tr}(C_i^* \Sigma_i^{-1})\Big), \quad (7.46)$$
where by convention, C_i^* = 0 if n_i = 0. The probability of a label function is larger if the sample means μ̂_i are close to the expected means m_i, and if the sample covariances Σ̂_i are "tighter" with shapes close to those of the known covariances Σ_i, in the sense that Σ̂_i ≈ aΣ_i for small a.
In the third model, the means and covariances of each class-conditional distribution are random. For each i ∈ {1, ..., l}, define a length-d real vector m_i, real numbers ν_i > 0 and κ_i > d − 1, and a symmetric positive definite matrix C_i as hyperparameters. Define parameters r_i = (μ_i, Σ_i) having independent normal-inverse-Wishart distributions, meaning that Σ_i is inverse-Wishart,

$$f(\Sigma_i) = \frac{|C_i|^{\kappa_i/2}}{2^{\kappa_i d/2}\, \Gamma_d(\kappa_i/2)}\, |\Sigma_i|^{-\frac{\kappa_i + d + 1}{2}} \exp\Big(-\frac{1}{2}\operatorname{tr}(C_i \Sigma_i^{-1})\Big), \quad (7.47)$$
and, given Σ_i, the μ_i have Gaussian distributions with mean m_i and covariance (1/ν_i)Σ_i. If κ_i > d + 1, then

$$E[\Sigma_i] = \frac{1}{\kappa_i - d - 1}\, C_i. \quad (7.48)$$
With r_i fixed, each x ∈ S having label i is Gaussian with f_i(x; r_i) ~ N(μ_i, Σ_i), and r_i has a proper density. When n_i ≥ 1, define

$$L_i(S_i) = \iint \Big[\prod_{x \in S_i} f_i(x; \mu_i, \Sigma_i)\Big] f(\mu_i \mid \Sigma_i)\, d\mu_i\, f(\Sigma_i)\, d\Sigma_i = \int \frac{\nu_i^{d/2}}{(n_i + \nu_i)^{d/2} (2\pi)^{dn_i/2} |\Sigma_i|^{n_i/2}} \exp\Big(-\frac{1}{2}\operatorname{tr}(C_i^* \Sigma_i^{-1})\Big) f(\Sigma_i)\, d\Sigma_i, \quad (7.49)$$

where in the second expression we use Eq. 7.43 for the previous model with fixed covariances, and C_i^* is defined by Eq. 7.44 if n_i ≥ 2 and by Eq. 7.45 if n_i = 1. Proceeding,
$$L_i(S_i) = \int \frac{\nu_i^{d/2}}{(n_i + \nu_i)^{d/2} (2\pi)^{dn_i/2} |\Sigma_i|^{n_i/2}} \exp\Big(-\frac{1}{2}\operatorname{tr}(C_i^* \Sigma_i^{-1})\Big) \frac{|C_i|^{\kappa_i/2}}{2^{\kappa_i d/2}\, \Gamma_d(\kappa_i/2)}\, |\Sigma_i|^{-\frac{\kappa_i + d + 1}{2}} \exp\Big(-\frac{1}{2}\operatorname{tr}(C_i \Sigma_i^{-1})\Big)\, d\Sigma_i = \frac{\nu_i^{d/2}}{(n_i + \nu_i)^{d/2} (2\pi)^{dn_i/2}}\, \frac{|C_i|^{\kappa_i/2}}{2^{\kappa_i d/2}\, \Gamma_d(\kappa_i/2)} \int |\Sigma_i|^{-\frac{\kappa_i + n_i + d + 1}{2}} \exp\Big(-\frac{1}{2}\operatorname{tr}\big((C_i + C_i^*)\Sigma_i^{-1}\big)\Big)\, d\Sigma_i. \quad (7.50)$$

This is essentially an inverse-Wishart integral with updated parameters κ_i + n_i and C_i + C_i^*. Thus,

$$L_i(S_i) = \frac{\nu_i^{d/2}}{(n_i + \nu_i)^{d/2} (2\pi)^{dn_i/2}}\, \frac{|C_i|^{\kappa_i/2}}{2^{\kappa_i d/2}\, \Gamma_d(\kappa_i/2)} \cdot \frac{2^{(\kappa_i + n_i)d/2}\, \Gamma_d\big(\frac{\kappa_i + n_i}{2}\big)}{|C_i + C_i^*|^{(\kappa_i + n_i)/2}}. \quad (7.51)$$

From Eqs. 7.33 and 7.51, after scaling across all φ_S, we obtain

$$P_S(\varphi_S) \propto P(\varphi_S) \prod_{i=1}^{l} \frac{\nu_i^{d/2}\, \Gamma_d\big(\frac{\kappa_i + n_i}{2}\big)\, |C_i|^{\kappa_i/2}}{(n_i + \nu_i)^{d/2}\, \Gamma_d\big(\frac{\kappa_i}{2}\big)\, |C_i + C_i^*|^{(\kappa_i + n_i)/2}}, \quad (7.52)$$

where by convention, C_i^* = 0 if n_i = 0. If P(φ_S) ∝ 1 for all φ_S considered, and all n_i are fixed for all φ_S considered, then

$$P_S(\varphi_S) \propto \prod_{i=1}^{l} |C_i + C_i^*|^{-(\kappa_i + n_i)/2}. \quad (7.53)$$
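Equation 7.53 gives the closed-form (unnormalized) score compared across candidate labelings under the third model. A sketch for d = 2 with hand-rolled 2 × 2 determinants (the hyperparameter values in any usage are assumptions):

```python
import math

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def c_star(cluster, m, nu):
    """C_i^* of Eqs. 7.44-7.45 for d = 2 and n_i >= 1: scatter about the
    sample mean plus a weighted pull toward the hyper-mean m."""
    n = len(cluster)
    mh = [sum(x[t] for x in cluster) / n for t in (0, 1)]          # sample mean
    scat = [[sum((x[a] - mh[a]) * (x[b] - mh[b]) for x in cluster)
             for b in (0, 1)] for a in (0, 1)]                     # (n-1) * sample cov
    w = n * nu / (n + nu)
    dm = [mh[0] - m[0], mh[1] - m[1]]
    return [[scat[a][b] + w * dm[a] * dm[b] for b in (0, 1)] for a in (0, 1)]

def log_score(clusters, m, nu, kappa, C):
    """Unnormalized log P_S(phi_S) from Eq. 7.53:
    sum_i of -((kappa_i + n_i)/2) * log det(C_i + C_i^*)."""
    s = 0.0
    for i, cl in enumerate(clusters):
        Cs = c_star(cl, m[i], nu[i])
        M = [[C[i][a][b] + Cs[a][b] for b in (0, 1)] for a in (0, 1)]
        s -= 0.5 * (kappa[i] + len(cl)) * math.log(det2(M))
    return s
```

A labeling that groups tight clusters near their hyper-means yields small determinants and hence a higher score than a mixed-up labeling.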
Example 7.1. To illustrate optimal clustering, several clustering algorithms are implemented in (Dalton et al., 2015) in addition to the Bayes clusterer: fuzzy c-means (FCM), K-means (KM), hierarchical clustering with single linkage (H-S), and hierarchical clustering with complete linkage (H-C). Suboptimal approximate algorithms are also considered to handle moderate and large point sets, but we do not include those results here. Thus, we only consider n = 20 points, whereas up to n = 10,000 points are considered in (Dalton et al., 2015). To mitigate computation, we only consider l = 2 clusters, we limit the number of candidate partitions for a given point set S with n points to 2^{n−1}, and we assume known cluster sizes, so that if there are n_1
and n_2 points in the clusters, the number of reference partitions is reduced to (1/2)·n!/(n_1! n_2!) if n_1 = n_2, or to n!/(n_1! n_2!) otherwise. Two-dimensional separable RLPPs based on Gaussian mixture models are employed: Model 0 has fixed known means and covariances; Model 1 has Gaussian means and fixed known covariances; and Model 2 has Gaussian means and inverse-Wishart covariances. Experiments are divided into three steps: point-set generation, clustering, and performance evaluation. Since we know the RLPP, for small point sets we can find the optimal partition, calculate its theoretical error, empirically calculate the errors for the other algorithms, and make comparisons. Table 7.1 shows the settings used for the simulations, where parameters are selected to obtain a Bayes clustering error close to 0.1. Table 7.2 shows average empirical errors. Note the enormous advantage of optimal clustering when both the means and covariance matrices are random. ▪
Table 7.1 Simulations based on three models and two different balancings of samples, 10:10 and 12:8. In the covariance matrices column, I_d represents the d × d identity matrix.

Sim ID  Model  n1  n2  Repeats  Mean Vectors
1       0      10  10  1000     m1 = (0, 0), m2 = (1.5, 1.5)
2       1      10  10  500      m1 = (0, 0), m2 = (1.5, 1.5)
3       2      10  10  500      m1 = (0, 0), m2 = (1.5, 1.5)
4       0      12  8   1000     m1 = (0, 0), m2 = (1.5, 1.5)
5       1      12  8   500      m1 = (0, 0), m2 = (1.5, 1.5)
6       2      12  8   500      m1 = (0, 0), m2 = (1.5, 1.5)

Sim ID  Model  Covariance Matrices   ν               κ
1       0      Σ1 = Σ2 = 1 · I_d     —               —
2       1      Σ1 = Σ2 = 0.5 · I_d   ν1 = 1, ν2 = 2  —
3       2      C1 = C2 = 0.5 · I_d   ν1 = 1, ν2 = 2  κ1 = 2, κ2 = 3
4       0      Σ1 = Σ2 = 1 · I_d     —               —
5       1      Σ1 = Σ2 = 0.5 · I_d   ν1 = 1, ν2 = 2  —
6       2      C1 = C2 = 0.5 · I_d   ν1 = 1, ν2 = 2  κ1 = 2, κ2 = 3

Table 7.2 Errors for the clustering algorithms in the simulations.

Simulation  Optimal  FCM   KM    H-C   H-S
1           0.14     0.16  0.17  0.22  0.42
2           0.07     0.08  0.09  0.12  0.30
3           0.09     0.20  0.21  0.24  0.35
4           0.12     0.14  0.15  0.19  0.36
5           0.08     0.09  0.09  0.13  0.27
6           0.09     0.24  0.25  0.29  0.35

7.4 Intrinsically Bayesian Robust Clusterer

When the RLPP is uncertain, we consider an uncertainty class of RLPPs (Ξ_θ, Λ_θ), θ ∈ Θ, where Ξ_θ is a point process on (N, 𝒩), and Λ_θ = {Φ_{θ,S} : S ∈ N} is a random labeling on N consisting of a random label function Φ_{θ,S} for each S. The error of a clusterer z relative to state θ is denoted by ε_θ[z]. An intrinsically Bayesian robust (IBR) clusterer is defined by

$$z_{\mathrm{IBR}} = \arg\min_{z \in \mathcal{F}} E_\Theta\big[\varepsilon_\theta[z]\big], \quad (7.54)$$
where F is a family of clusterers and the expectation is with respect to a prior distribution π(θ). A model-constrained Bayesian robust (MCBR) clusterer is defined by

$$z_{\mathrm{MCBR}} = \arg\min_{z \in \mathcal{F}_\Theta} E_\Theta\big[\varepsilon_\theta[z]\big], \quad (7.55)$$

where F_Θ consists of the model-specific optimal clusterers in F. A minimax robust clusterer is defined by Eq. 4.100 with ε_θ[z] in place of C_θ(ψ). The robustness of cluster operator z_φ relative to state θ is defined by Eq. 4.65, here taking the form

$$\kappa(\theta, \varphi) = \varepsilon_\theta[z_\varphi] - \varepsilon_\theta[z_\theta]. \quad (7.56)$$

The mean robustness is defined by Eq. 4.66. If φ is a maximally robust state, then z_MCBR = z_φ.

7.4.1 Effective random labeled point processes

We shall find an IBR clusterer via an effective RLPP. An RLPP is solvable under clusterer class F if

$$z_{\mathcal{F}} = \arg\min_{z \in \mathcal{F}} \varepsilon[z] \quad (7.57)$$
can be solved under the RLPP. An RLPP (Ξ_Θ, Λ_Θ) is an effective RLPP under clusterer class F relative to an uncertainty class Θ of RLPPs if, for all z ∈ F, both the expected clustering error E_Θ[ε_θ[z]] and the clustering error ε_Θ[z] under (Ξ_Θ, Λ_Θ) exist and

$$E_\Theta\big[\varepsilon_\theta[z]\big] = \varepsilon_\Theta[z]. \quad (7.58)$$

The next theorem is analogous to Theorem 4.1, and the proof is similar.

Theorem 7.1 (Dalton et al., 2018). If there exists a solvable effective RLPP (Ξ_Θ, Λ_Θ) for uncertainty class Θ under clusterer class F with optimal clusterer z_Θ, then z_IBR = z_Θ. ▪

If the clusterer family is restricted to F_Θ in the theorem, then we obtain z_MCBR = z_Θ. Since an effective RLPP under clusterer class F is an effective
RLPP under any smaller clusterer class, if an effective RLPP is found for IBR clustering, then it is also an effective RLPP for MCBR clustering. In analogy to what we have seen with classification, not only are IBR clusterers better performing than MCBR clusterers, they can be much easier to find analytically. The next theorem addresses the existence of effective RLPPs.

Theorem 7.2 (Dalton et al., 2018). For any uncertainty class Θ of RLPPs (Ξ_θ, Λ_θ) with prior π(θ), there exists an RLPP (Ξ_Θ, Λ_Θ) such that

$$E_\Theta\Big[E_{\Xi_\theta, \Lambda_\theta}\big[g(\Xi_\theta, \Phi_{\theta,\Xi_\theta}) \mid \theta\big]\Big] = E_{\Xi_\Theta, \Lambda_\Theta}\big[g(\Xi_\Theta, \Phi_{\Theta,\Xi_\Theta})\big] \quad (7.59)$$

for any real-valued measurable function g.
Proof. Suppose that the parameter θ is a realization of a random vector q : (Ω, 𝒜, P) → (Θ, ℬ). Then {q^{-1}(θ) : θ ∈ Θ} partitions the sample space Ω. The point process Ξ_θ is therefore a mapping

$$\Xi_\theta : \big(q^{-1}(\theta),\, \mathcal{A} \cap q^{-1}(\theta),\, P_\theta\big) \to (N, \mathcal{N}), \quad (7.60)$$

where P_θ is the conditional probability, and we assume that ν_θ, defined by ν_θ(Y) = P_θ(Ξ_θ^{-1}(Y)) for all Y ∈ 𝒩, is known. Write the random labeling as Λ_θ = {Φ_{θ,S} : S ∈ N}, where Φ_{θ,S} has a probability mass function P(Φ_{θ,S} = φ_S|θ, S) on L^S. Given any real-valued measurable function g mapping from point-set and label-function pairs, let X = g(Ξ, Φ_Ξ) be a random variable, where (Ξ, Φ_Ξ) is drawn from {(Ξ_θ, Λ_θ)}_{θ∈Θ}, and note that E_Θ[E[X|θ]] = E[X]. Define a mapping Ξ_Θ : (Ω, 𝒜, P) → (N, 𝒩), given uniquely by Ξ_Θ(ω) = Ξ_θ(ω) when ω ∈ q^{-1}(θ). Note that, if we define ν(Y) = P(Ξ_Θ^{-1}(Y)), then ν(Y) = E_Θ[ν_θ(Y)], and Ξ_Θ is a random point-set process. Define Λ_Θ = {Φ_{Θ,S} : S ∈ N}, where Φ_{Θ,S} has probability mass function

$$P(\Phi_{\Theta,S} = \varphi_S \mid S) = E_\Theta\big[P(\Phi_{\theta,S} = \varphi_S \mid \theta, S)\big] \quad (7.61)$$

for all φ_S ∈ L^S. Thus, Λ_Θ is a random labeling. Let Z = g(Ξ_Θ, Φ_{Θ,Ξ_Θ}) be a random variable, where (Ξ_Θ, Φ_{Θ,Ξ_Θ}) is drawn from the constructed RLPP (Ξ_Θ, Λ_Θ), and note that E[X] = E[Z]. ▪
The theorem applies for any function g(S, φ_S), including the cluster mismatch error g(S, φ_S) = ε_z(S, φ_S), for any clusterer z ∈ F. Thus, Eq. 7.59 implies that
$$E_U[\varepsilon_u(z)] = E_U\big[E_{\Xi_u, L_u}[\varepsilon_z(\Xi_u, F_{u,\Xi_u}) \mid u]\big] = E_{\Xi_U, L_U}[\varepsilon_z(\Xi_U, F_{U,\Xi_U})] = \varepsilon_U[z]. \qquad (7.62)$$

Hence, $(\Xi_U, L_U)$ is an effective RLPP on $\mathcal{F}$ for both MCBR and IBR clusterers. The following corollary shows that, for separable RLPPs, the effective RLPP is also separable and aggregates stochasticity within models with uncertainty between models.

Corollary 7.1. Let each RLPP in the uncertainty class be parameterized by $r$ with prior density $f(r \mid u)$, let $F$ be an independent labeling process with probability mass function $P(F = f)$ that depends on neither $u$ nor $r$, and denote the conditional distribution of points by $f(x \mid y, r, u)$. Then the effective RLPP is separable with parameter $(u, r)$, prior $f(u, r)$, an independent labeling process with probability mass function $P(F = f)$, and conditional distributions $f(x \mid y, r, u)$.

Proof. Let the number $n$ of points and the label function $f$ be fixed. From the proof of the theorem, recall that, for a fixed $u$, the effective random point process $\Xi_U(v)$ is set to $\Xi_u(v)$. Equivalently, a realization of $S = \{x_1, \ldots, x_n\}$ under the effective RLPP is governed by the distribution
$$f(S \mid f) = \int \int \prod_{k=1}^{n} f(x_k \mid f(k), r, u) \, f(r \mid u) \, dr \, p(u) \, du. \qquad (7.63)$$
This is equivalent to a separable random point-set process with parameter $(u, r)$, prior $f(u, r) = p(u) f(r \mid u)$, and conditional distributions $f(x \mid y, r, u)$. Since the labeling process is independent, the full effective RLPP is the separable RLPP given in the statement of the corollary. ▪

Example 7.2. Following (Dalton and Benalcázar, 2017), for two clusters, parameterize an uncertainty class $U$ of RLPPs with $u_i = \Sigma_i$, where each $\Sigma_i$ is drawn independently from an inverse-Wishart distribution with parameters $\kappa_i$ and $C_i$. Given $u \in U$, the RLPP $(\Xi_u, L_u)$ is separable, with parameter $r_i = \mu_i$, Gaussian prior $f(r_i)$ with mean $m_i$ and covariance $\Sigma_i / \nu_i$, and Gaussian conditional distributions $f(x \mid i, r_i, u_i)$ with mean $\mu_i$ and covariance $\Sigma_i$. The number of points $n = n_1 + n_2$ is fixed and set to 10, 20, or 70. The number of points in each cluster is also fixed. The distribution on label functions, $P(F = f)$, has support on the set of label functions that assign the correct number of points and is constant on its support. Points associated with a given label are independent and identically distributed.
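The hierarchical draw in this example, first a state $u = (\Sigma_1, \Sigma_2)$ from the inverse-Wishart uncertainty class, then means $\mu_i$ from the Gaussian priors, then the labeled points, can be sketched numerically. This is a minimal sketch: the values $\kappa_i = 5$, $C = 5I_2$, and $m_1 = (0, 0)$ follow the example, while $m_2$, $\nu_i$, the per-cluster counts, and the Bartlett-decomposition sampler are illustrative choices, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_inv_wishart(df, scale, rng):
    """Draw from inverse-Wishart(df, scale) via a Bartlett decomposition."""
    d = scale.shape[0]
    L = np.linalg.cholesky(np.linalg.inv(scale))
    A = np.zeros((d, d))
    for i in range(d):
        A[i, i] = np.sqrt(rng.chisquare(df - i))
        A[i, :i] = rng.standard_normal(i)
    W = L @ A @ A.T @ L.T          # W ~ Wishart(df, scale^{-1})
    return np.linalg.inv(W)        # W^{-1} ~ inverse-Wishart(df, scale)

# Hyperparameters following the example: kappa_i = 5, C = 5*I_2, m_1 = (0, 0);
# m_2, nu_i, and the point counts are illustrative choices from Table 7.3.
kappa, C = 5, 5 * np.eye(2)
m = [np.zeros(2), np.array([0.0, 2.0])]
nu = [1.0, 1.0]
n_pts = [5, 5]                     # fixed number of points per cluster

# One state u = (Sigma_1, Sigma_2) drawn from the uncertainty class
Sigma = [sample_inv_wishart(kappa, C, rng) for _ in range(2)]

# Given u, draw r_i = mu_i from the Gaussian prior N(m_i, Sigma_i / nu_i)
mu = [rng.multivariate_normal(m[i], Sigma[i] / nu[i]) for i in range(2)]

# Given (u, r), the labeled point set: n_i i.i.d. points from N(mu_i, Sigma_i)
points = [rng.multivariate_normal(mu[i], Sigma[i], size=n_pts[i]) for i in range(2)]
S = np.vstack(points)              # unlabeled point set handed to a clusterer
print(S.shape)                     # -> (10, 2)
```

Repeating the outer draw generates the states $u$ over which Table 7.3 averages; repeating only the inner draws for a fixed $u$ generates the 1000 point sets per state.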
Optimal Clustering
Table 7.3 Clustering error across all states, $E_U[\varepsilon_u(z)]$, for five Gaussian data-generation models and six clustering algorithms.

m2       ν1 = ν2   n1   n2   No. states   IBR     FCM     KM      H-C     H-S     Random
[0, 2]   1         5    5    5            0.145   0.177   0.183   0.211   0.285   0.377
[0, 2]   1         6    4    6            0.155   0.187   0.193   0.221   0.289   0.377
[0, 5]   1         10   10   7            0.025   0.054   0.058   0.096   0.184   0.413
[0, 2]   5         12   8    7            0.103   0.161   0.172   0.221   0.329   0.412
[0, 5]   1         35   35   5            0.058   0.083   0.085   0.142   0.343   0.451
The effective RLPP $(\Xi_U, L_U)$ merges uncertainty in $u$ with uncertainty in $r$, which, in this case, is equivalent to a separable Gaussian RLPP with uncertain mean and covariance. Thus, the IBR clusterer can be found in closed form, at least for $n = 10$, by evaluating Eq. 7.26 for all partitions using Eq. 7.53 and choosing the minimizing partition. For larger $n$, since there are too many partitions to enumerate, we approximate the IBR clusterer using a suboptimal algorithm called suboptimal Pseed in (Dalton et al., 2015). For $n = 10$ and 20, we evaluate the expected partition error $E_U[\varepsilon_u(S, \mathcal{P}_S)] = \varepsilon_U(S, \mathcal{P}_S)$ over the uncertainty class analytically from Eq. 7.26 for each algorithm. For $n = 70$, $\varepsilon_U(S, \mathcal{P}_S)$ cannot be found owing to the large number of partitions, so we instead approximate $\varepsilon_U(S, \mathcal{P}_S)$ for each algorithm by the proportion of misclustered points (i.e., the cluster mismatch error between the true partition and the algorithm output). This entire procedure is iterated over 1000 point sets per state under 5 simulation settings. The average of these errors over all states and iterations is given in Table 7.3. In all cases, $m_1 = (0, 0)$, $\kappa_1 = \kappa_2 = 5$, and $C = 5I_2$, where $I_2$ is the $2 \times 2$ identity matrix; other parameters are provided in the table. The number of states $u$ that are drawn is also provided in the table. Since the errors in the table give $E_U[\varepsilon_u(z)]$, the IBR clusterer should be optimal (or close to optimal for $n = 70$, where it is approximated). Indeed, the IBR clusterer performs significantly better than the classical clustering algorithms. ▪

The theory of IBR clustering has been developed in the context of random labeled point sets, and the preceding example has been constructed so that the effective RLPP is a Gaussian separable RLPP. In practice, the random points correspond to feature vectors, the labels correspond to the objects from which the feature vectors arise, and the aim is to cluster the unlabeled feature vectors by object.
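The cluster mismatch error used to score the algorithms, i.e., the proportion of misclustered points minimized over cluster relabelings, together with a brute-force search over fixed-size two-cluster partitions, can be sketched as follows. The point count, the reference labeling, and the helper names are hypothetical; in the text, the IBR search minimizes the posterior expected partition error of Eq. 7.26 rather than distance to a known reference.

```python
from itertools import combinations

def mismatch_error(labels_a, labels_b):
    """Proportion of misclustered points between two 2-cluster labelings,
    minimized over the two possible cluster relabelings."""
    n = len(labels_a)
    disagree = sum(a != b for a, b in zip(labels_a, labels_b))
    return min(disagree, n - disagree) / n

def all_partitions(n, n1):
    """All 2-cluster partitions of n points with n1 points in cluster 0."""
    for idx in combinations(range(n), n1):
        yield tuple(0 if k in idx else 1 for k in range(n))

# Hypothetical surrogate for the expected partition error of Eq. 7.26:
# here each candidate is simply scored against a known reference labeling.
reference = (0, 0, 0, 1, 1, 1)
best = min(all_partitions(6, 3), key=lambda p: mismatch_error(p, reference))

print(mismatch_error(best, reference))                 # -> 0.0
print(mismatch_error((1, 1, 1, 0, 0, 0), reference))   # relabeling-invariant: 0.0
print(mismatch_error((0, 1, 0, 1, 1, 1), reference))   # one point off: ~0.167
```

The enumeration grows combinatorially (here only C(6, 3) = 20 candidates), which is why the text resorts to a suboptimal search for $n = 70$.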
We have focused on separable RLPPs because they possess a convenient structure to represent the conditional probabilities of the random label function (Eqs. 7.31 and 7.32). With Gaussian models we can obtain analytic expressions for Eq. 7.33. Other models can be considered; however, absent analytic expressions, numerical computations are required. In any event, optimal and IBR clustering require models. This means that we need knowledge regarding the feature-generation process.
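For intuition about why separable models admit such expressions, the double integral defining the effective point distribution (Eq. 7.63) can be verified numerically in the simplest case of a single point with Gaussian conditionals and a Gaussian prior on $r$; the two-state discrete uncertainty class and every parameter value below are illustrative assumptions, not from the text.

```python
import numpy as np

# Hypothetical discrete uncertainty class: u indexes (m_u, tau_u, sigma_u),
# with conditional f(x | r, u) = N(r, sigma_u^2) and prior f(r | u) = N(m_u, tau_u^2).
p_u = np.array([0.4, 0.6])
m_u = np.array([0.0, 2.0])
tau_u = np.array([1.0, 0.5])
sigma_u = np.array([0.8, 1.2])

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

x = 1.3                                  # a single observed point (n = 1)
r = np.linspace(-10, 10, 20001)          # integration grid over r
dr = r[1] - r[0]

# Eq. 7.63 specialized to n = 1: f(x) = sum_u p(u) * integral f(x|r,u) f(r|u) dr
numeric = sum(
    p_u[i] * np.sum(normal_pdf(x, r, sigma_u[i] ** 2)
                    * normal_pdf(r, m_u[i], tau_u[i] ** 2)) * dr
    for i in range(2)
)

# Analytic check: integrating out r gives the marginal N(m_u, sigma_u^2 + tau_u^2)
analytic = sum(
    p_u[i] * normal_pdf(x, m_u[i], sigma_u[i] ** 2 + tau_u[i] ** 2)
    for i in range(2)
)

print(abs(numeric - analytic) < 1e-8)
```

The same collapse of the inner integral is what yields closed forms in the Gaussian case; for non-conjugate models the inner integral must be done numerically, as the text notes.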
In imaging, there are numerous ways to generate features. For instance, shape contours can be represented as Fourier series, with the Fourier coefficients used as features. Image texture provides a rich source of features. A classic approach uses textural features based on grayscale spatial dependencies (Haralick et al., 1973). Another approach to texture is to operate on the image by a granulometry, compute the size distribution as in Section 3.7.1, normalize the size distribution so that it is an increasing function from 0 to 1, and compute a set of moments for the normalized size distribution (Dougherty et al., 1992). This can be done for several granulometries to generate a feature set. If one extends the random-grain model of Eq. 3.159 to a disjoint union of multiple primitive compact sets scaled by size parameters, then, as has been known for some time, any finite-length vector of granulometric moments from a single structuring element is asymptotically jointly Gaussian, and there exist analytic expressions for the asymptotic means, variances, and covariances of the moments (Sivakumar et al., 2001). In (Dalton et al., 2018), this result is extended to asymptotic joint normality and analytic expressions for the asymptotic means and covariances of granulometric moments under multiple structuring elements. Once an image process can be modeled with the appropriate random-grain model, these asymptotic results can be applied (assuming an image with a large number of grains), and the optimal or IBR clustering theories applied. This has been done in (Dalton et al., 2018) for clustering images of silver-halide photographic T-grain crystals. The salient point is that one is not solely dependent on good fortune for having features with favorable properties when classifying or clustering. Features can be constructed that possess beneficial properties. Indeed, feature construction is an important aspect of pattern recognition.
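A granulometric feature vector of the kind just described can be sketched with elementary binary morphology; the synthetic two-grain image, the square structuring elements, and the choice of two moments are illustrative assumptions (the text's setting uses random-grain models and possibly several granulometries).

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def erode(img, k):
    w = sliding_window_view(img, (k, k))
    return w.min(axis=(2, 3))

def dilate(img, k):
    p = np.pad(img, k - 1)
    w = sliding_window_view(p, (k, k))
    return w.max(axis=(2, 3))

def opening(img, k):
    """Morphological opening by a k x k square structuring element."""
    return dilate(erode(img, k), k)

# Synthetic binary image with two square grains of sides 4 and 8
img = np.zeros((20, 20), dtype=np.uint8)
img[1:5, 1:5] = 1
img[8:16, 8:16] = 1

area0 = img.sum()
sizes = range(1, 10)
# Normalized size distribution: fraction of area removed by opening of size k;
# it increases from 0 to 1 as the openings sieve out ever-larger grains.
phi = np.array([1 - opening(img, k).sum() / area0 for k in sizes])
assert np.all(np.diff(phi) >= 0) and phi[-1] == 1.0

# Pattern spectrum (increments of phi) as a pmf; its moments are the features
spectrum = np.diff(np.concatenate([[0.0], phi]))
k = np.array(sizes, dtype=float)
mean = (k * spectrum).sum()
var = ((k - mean) ** 2 * spectrum).sum()
features = [mean, var]             # first two granulometric moments
print(features)
```

On this image the spectrum has mass only at sizes 5 and 9 (where the 4-by-4 and 8-by-8 grains disappear), so the moments directly reflect grain sizes, which is the property the asymptotic normality results exploit.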
References

Akaike, H., “Information theory and an extension of the maximum likelihood principle,” in 2nd International Symposium on Information Theory, 1973. Akaike, H., “A Bayesian analysis of the minimum AIC procedure,” Annals of the Institute of Statistical Mathematics, vol. 30, no. 1, pp. 9–14, 1978. Anderson, T., An Introduction to Multivariate Statistical Analysis, 2nd Edition. John Wiley, New York, 1984. Arnold, L., Stochastic Differential Equations: Theory and Applications. Wiley, New York, 1974. Atkinson, A. and A. Donev, Optimum Experimental Designs. Oxford University Press, 1992. Banfield, J. D. and A. E. Raftery, “Model-based Gaussian and non-Gaussian clustering,” Biometrics, vol. 49, pp. 803–821, 1993. Barbieri, M. and J. Berger, “Optimal predictive model selection,” Annals of Statistics, vol. 32, no. 3, pp. 870–897, 2004. Basu, J. and P. L. Odell, “Effect of intraclass correlation among training samples on the misclassification probabilities of Bayes procedure,” Pattern Recognition, vol. 6, no. 1, pp. 13–16, 1974. Bellman, R. and R. Kalaba, “Dynamic programming and adaptive processes: Mathematical foundation,” IRE Trans. Automatic Control, vol. AC-5, no. 1, pp. 5–10, 1960. Berger, J. O. and J. M. Bernardo, “On the development of reference priors,” Bayesian Statistics, vol. 4, no. 4, pp. 35–60, 1992. Berger, J. O., J. M. Bernardo, and D. Sun, “Objective priors for discrete parameter spaces,” Journal of the American Statistical Association, vol. 107, no. 498, pp. 636–648, 2012. Bernardo, J. M., “Expected information as expected utility,” Annals of Statistics, vol. 7, no. 3, pp. 686–690, 1979a. Bernardo, J. M., “Reference posterior distributions for Bayesian inference,” J. Royal Statistical Society, Series B, vol. 41, pp. 113–147, 1979b. Bernardo, J. M. and A. Smith, “Bayesian theory,” Measurement Science and Technology, vol. 12, no. 2, pp. 221–222, 2001. Berry, D. A. and B. Fristedt, Bandit Problems: Sequential Allocation of Experiments. 
Chapman and Hall, London, 1985.
Bertsekas, D. P., Dynamic Programming and Optimal Control (Vols. 1 and 2). Athena Scientific, Belmont, MA, 2001. Bezdek, J. C., Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York, 1981. Binder, D. A., “Bayesian cluster analysis,” Biometrika, vol. 65, no. 1, pp. 31–38, 1978. Bishop, C. M., Pattern Recognition and Machine Learning. Springer-Verlag, New York, 2006. Bode, H. W. and C. E. Shannon, “A simplified derivation of linear least square smoothing and prediction theory,” Proc. IRE, vol. 38, no. 4, pp. 417–425, 1950. Boluki, S., M. S. Esfahani, X. Qian, and E. R. Dougherty, “Constructing pathway-based priors within a Gaussian mixture model for Bayesian regression and classification,” IEEE/ACM Trans. Computational Biology and Bioinformatics, 2017a. Boluki, S., M. S. Esfahani, X. Qian, and E. R. Dougherty, “Incorporating biological prior knowledge for Bayesian learning via maximal knowledge-driven information priors,” BMC Bioinformatics, 2017b. Boluki, S., X. Qian, and E. R. Dougherty, “Experimental design via generalized mean objective cost of uncertainty,” arXiv:1805.01143, 2018. Bozdogan, H., “Model selection and Akaike’s information criterion (AIC): The general theory and its analytical extensions,” Psychometrika, vol. 52, no. 3, pp. 345–370, 1987. Braga-Neto, U. M. and E. R. Dougherty, “Is cross-validation valid for small-sample microarray classification?” Bioinformatics, vol. 20, no. 3, pp. 374–380, 2004. Braga-Neto, U. M. and E. R. Dougherty, “Exact correlation between actual and estimated errors in discrete classification,” Pattern Recognition Letters, vol. 31, no. 5, pp. 407–412, 2010. Braga-Neto, U. M. and E. R. Dougherty, Error Estimation for Pattern Recognition. Wiley-IEEE Press, New York, 2015. Broumand, A., M. S. Esfahani, B.-J. Yoon, and E. R. Dougherty, “Discrete optimal Bayesian classification with error-conditioned sequential sampling,” Pattern Recognition, vol. 48, no. 11, pp. 3766–3782, 2015. Brun, M., C. 
Sima, J. Hua, J. Lowey, B. Carroll, E. Suh, and E. R. Dougherty, “Model-based evaluation of clustering validation measures,” Pattern Recognition, vol. 40, no. 3, pp. 807–824, 2007. Bryson, A. and M. Frazier, “Smoothing for linear and nonlinear dynamic systems,” Proceedings of the Optimum System Synthesis Conference, pp. 353–364, 1962. Celeux, G. and G. Govaert, “Comparison of the mixture and the classification maximum likelihood in cluster analysis,” Statistical Computation and Simulation, vol. 47, pp. 127–146, 1993.
Chaloner, K. and I. Verdinelli, “Bayesian experimental design: A review,” Statistical Science, vol. 10, no. 3, pp. 273–304, 1995. Chen, Y.-l. and B.-s. Chen, “Minimax robust deconvolution filters under stochastic parametric and noise uncertainties,” IEEE Trans. Signal Processing, vol. 42, no. 1, pp. 32–45, 1994. Chen, Y. and E. R. Dougherty, “Optimal and adaptive reconstructive granulometric bandpass filters,” Signal Processing, vol. 61, pp. 65–81, 1997. Chen, M., J. Ibrahim, Q. Shao, and R. Weiss, “Prior elicitation for model selection and estimation in generalized linear mixed models,” J. Statistical Planning and Inference, vol. 111, no. 1, pp. 57–76, 2003. Chen, C.-t. and S. A. Kassam, “Robust multiple-input matched filtering: Frequency and time-domain results,” IEEE Trans. Information Theory, vol. 31, no. 6, pp. 812–821, 1985. Clyde, M. and E. I. George, “Model uncertainty,” Statistical Science, vol. 19, no. 1, pp. 81–94, 2004. Cohn, D., L. Atlas, and R. Ladner, “Improving generalization with active learning,” Machine Learning, vol. 15, no. 2, pp. 201–221, 1994. Cressie, N. and G. M. Laslett, “Random set theory and problems of modeling,” SIAM Review, vol. 29, no. 4, pp. 557–574, 1987. Dalton, L. A. and E. R. Dougherty, “Bayesian minimum mean-square error estimation for classification error - part i: Definition and the Bayesian MMSE error estimator for discrete classification,” IEEE Trans. Signal Processing, vol. 59, no. 1, pp. 115–129, 2011a. Dalton, L. A. and E. R. Dougherty, “Bayesian minimum mean-square error estimation for classification error - part ii: Linear classification of Gaussian models,” IEEE Trans. Signal Processing, vol. 59, no. 1, pp. 130–144, 2011b. Dalton, L. A. and E. R. Dougherty, “Application of the Bayesian MMSE estimator for classification error to gene expression microarray data,” Bioinformatics, vol. 27, no. 13, pp. 1822–1831, 2011c. Dalton, L. A. and E. R. 
Dougherty, “Exact MSE performance of the Bayesian MMSE estimator for classification error - part i: Representation,” IEEE Trans. Signal Processing, vol. 60, no. 5, pp. 2575–2587, 2012a. Dalton, L. A. and E. R. Dougherty, “Exact MSE performance of the Bayesian MMSE estimator for classification error - part ii: Performance analysis and applications,” IEEE Trans. Signal Processing, vol. 60, no. 5, pp. 2588–2603, 2012b. Dalton, L. A. and E. R. Dougherty, “Optimal classifiers with minimum expected error within a Bayesian framework - part i: Discrete and Gaussian models,” Pattern Recognition, vol. 46, no. 5, pp. 1288–1300, 2013a.
Dalton, L. A. and E. R. Dougherty, “Optimal classifiers with minimum expected error within a Bayesian framework - part ii: Properties and performance analysis,” Pattern Recognition, vol. 46, no. 5, pp. 1301–1314, 2013b. Dalton, L. A. and E. R. Dougherty, “Intrinsically optimal Bayesian robust filtering,” IEEE Trans. on Signal Processing, vol. 62, no. 3, pp. 657–670, 2014. Dalton, L. A., M. E. Benalcázar, M. Brun, and E. R. Dougherty, “Analytic representation of Bayes labeling and Bayes clustering operators for random labeled point processes,” IEEE Trans. Signal Processing, vol. 63, no. 6, pp. 1605–1620, 2015. Dalton, L. A., M. E. Benalcázar, and E. R. Dougherty, “Optimal clustering under uncertainty,” arXiv:1806.00672, 2018. Dalton, L. A. and M. E. Benalcázar, Personal Communication, 2017. DeGroot, M. H., “Uncertainty, information and sequential experiments,” Annals of Mathematical Statistics, vol. 33, no. 2, pp. 404–419, 1962. DeGroot, M. H., “Concepts of information based on utility,” Recent Developments in the Foundations of Utility and Risk Theory, Daboni, L., A. Montesano, and M. Lines, Eds., D. Reidel Publishing, Dordrecht, pp. 265–275, 1986. DeGroot, M. H., Optimal Statistical Decisions. McGraw-Hill, New York, 1970. Dehghannasiri, R., Optimal experimental design in the context of objective-based uncertainty quantification. Ph.D. Dissertation, Department of Electrical and Computer Engineering, Texas A&M University, College Station, 2016. Dehghannasiri, R., B.-J. Yoon, and E. R. Dougherty, “Optimal experimental design for gene regulatory networks in the presence of uncertainty,” IEEE/ACM Trans. Computational Biology and Bioinformatics, vol. 14, no. 4, pp. 938–950, 2015a. Dehghannasiri, R., B.-J. Yoon, and E. R. Dougherty, “Efficient experimental design for uncertainty reduction in gene regulatory networks,” BMC Bioinformatics, vol. 16(Suppl 13): S2, 2015b. Dehghannasiri, R., X. Qian, and E. R. 
Dougherty, “Bayesian robust Kalman smoother in the presence of unknown noise statistics,” submitted, 2017d. Dehghannasiri, R., M. S. Esfahani, and E. R. Dougherty, “Intrinsically Bayesian robust Kalman filter: An innovation process approach,” IEEE Trans. Signal Processing, vol. 65, no. 10, pp. 2531–2546, 2017a. Dehghannasiri, R., M. S. Esfahani, X. Qian, and E. R. Dougherty, “Optimal Bayesian Kalman filtering with prior update,” IEEE Trans. on Signal Processing, vol. 66, no. 8, pp. 1982–1996, 2018a. Dehghannasiri, R., M. S. Esfahani, and E. R. Dougherty, “An experimental design framework for Markovian gene regulatory networks under stationary control policy,” BMC Systems Biology [in press], 2018.
Dehghannasiri, R., D. Xue, P. V. Balachandran, M. Yousefi, L. A. Dalton, T. Lookman, and E. R. Dougherty, “Optimal experimental design for materials discovery,” Computational Materials Science, vol. 129, pp. 311–322, 2017b. Dehghannasiri, R., X. Qian, and E. R. Dougherty, “Optimal experimental design in the context of canonical expansions,” IET Signal Processing, vol. 11, no. 8, pp. 942–951, 2017c. Dehghannasiri, R., X. Qian, and E. R. Dougherty, “Intrinsically Bayesian robust Karhunen–Loève compression,” Signal Processing, vol. 144, pp. 311–322, 2018b. Derman, C., Finite State Markovian Decision Processes. Academic Press, Orlando, 1970. Devroye, L., L. Gyorfi, and A. G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer-Verlag, New York, 1996. Diaconis, P. and D. A. Freedman, “On the consistency of Bayes estimates,” Annals of Statistics, vol. 14, no. 1, pp. 1–26, 1986. Dougherty, E. R., Random Processes for Image and Signal Processing. SPIE and IEEE Presses, Bellingham, WA, 1999. Dougherty, E. R., “Optimal binary morphological bandpass filters induced by granulometric spectral representation,” Mathematical Imaging and Vision, vol. 7, no. 2, pp. 175–192, 1997. Dougherty, E. R., The Evolution of Scientific Knowledge: From Certainty to Uncertainty. SPIE Press, Bellingham, WA, 2016. Dougherty, E. R., S. Kim, and Y. Chen, “Coefficient of determination in nonlinear signal processing,” Signal Processing, vol. 80, no. 10, pp. 2219– 2235, 2000. Dougherty, E. R. and J. Astola, Nonlinear Filters for Image Processing. SPIE and IEEE Presses, Bellingham, WA, 1999. Dougherty, E. R. and Y. Chen, “Robust optimal granulometric bandpass filters,” Signal Processing, vol. 81, pp. 1357–1372, 2001. Dougherty, E. R., J. Barrera, M. Brun, S. Kim, R. M. Cesar, Y. Chen, M. Bittner, and J. M. Trent, “Inference from clustering with application to gene-expression microarrays,” Computational Biology, vol. 9, no. 1, pp. 105–126, 2002. Dougherty, E. R. and M. 
Brun, “A probabilistic theory of clustering,” Pattern Recognition, vol. 37, no. 5, pp. 917–925, 2004. Dougherty, E. R., M. Brun, J. M. Trent, and M. L. Bittner, “Conditioningbased modeling of contextual genomic regulation,” IEEE/ACM Trans. on Computational Biology and Bioinformatics, vol. 6, no. 2, pp. 310–320, 2009. Dougherty, E. R., J. Hua, Z. Xiong, and Y. Chen, “Optimal robust classifiers,” Pattern Recognition, vol. 38, no. 10, pp. 1520–1532, 2005. Dougherty, E. R., J. Newell, and J. Pelz, “Morphological texture-based maximum-likelihood pixel classification based on local granulometric moments,” Pattern Recognition, vol. 25, no. 10, pp. 1181–1198, 1992.
Durovic, Z. N. and B. D. Kovacevic, “Robust estimation with unknown noise statistics,” IEEE Trans. Automatic Control, vol. 44, no. 6, pp. 1292–1296, 1999. Esfahani, M. S. and E. R. Dougherty, “Incorporation of biological pathway knowledge in the construction of priors for optimal Bayesian classification,” IEEE/ACM Trans. Computational Biology and Bioinformatics, vol. 11, pp. 202–218, 2014. Esfahani, M. S. and E. R. Dougherty, “An optimization-based framework for the transformation of incomplete biological knowledge into a probabilistic structure and its application to the utilization of gene/protein signaling pathways in discrete phenotype classification,” IEEE/ACM Trans. Computational Biology and Bioinformatics, vol. 12, pp. 1304–1321, 2015. Esfahani, M. S., J. Knight, A. Zollanvari, B.-J. Yoon, and E. R. Dougherty, “Classifier design given an uncertainty class of feature distributions via regularized maximum likelihood and the incorporation of biological pathway knowledge in steady-state phenotype classification,” Pattern Recognition, vol. 46, no. 10, pp. 2783–2797, 2013. Faryabi, B., G. Vahedi, J.-f. Chamberland, A. Datta, and E. R. Dougherty, “Intervention in context-sensitive probabilistic Boolean networks revisited,” EURASIP Journal on Bioinformatics and Systems Biology, vol. 2009, pp. 1–13, 2009. Ferguson, T. S., “A Bayesian analysis of some nonparametric problems,” The Annals of Statistics, vol. 1, no. 2, pp. 209–230, 1973. Frazier, P. I., W. B. Powell, and S. Dayanik, “A knowledge-gradient policy for sequential information collection,” SIAM Journal on Control and Optimization, vol. 47, no. 5, pp. 2410–2439, 2008. Frazier, P., W. Powell, and S. Dayanik, “The knowledge-gradient policy for correlated normal beliefs,” INFORMS Journal on Computing, vol. 21, no. 4, pp. 599–613, 2009. Frazier, P. I. and J. Wang, “Bayesian optimization for materials design,” Information Science for Materials Discovery and Design, pp. 45–75, 2016. Freedman, D. 
A., “On the asymptotic behavior of Bayes’ estimates in the discrete case,” Annals of Mathematical Statistics, vol. 34, no. 4, pp. 1386– 1403, 1963. Fritsch, A. and K. Ickstadt, “Improved criteria for clustering based on the posterior similarity matrix,” Bayesian Analysis, vol. 4, no. 2, pp. 367–391, 2009. Gelb, A., Applied Optimal Estimation. MIT Press, Cambridge, 1974. Gelman, A., J. B. Carlin, H. S. Stern, and D. B. Rubin, Bayesian Data Analysis, 2nd Edition. Chapman & Hall/CRC, Boca Raton, 2004. Ghaffari, N., I. Ivanov, X. Qian, and E. R. Dougherty, “A CoD-based reduction algorithm for designing stationary control policies on Boolean networks,” Bioinformatics, vol. 26, pp. 1556–1563, 2010.
Ghaffari, N., M. R. Yousefi, C. D. Johnson, I. Ivanov, and E. R. Dougherty, “Modeling the next generation sequencing sample processing pipeline for the purposes of classification,” BMC Bioinformatics, vol. 14, no. 1, p. 307, 2013. Giardina, C. and E. R. Dougherty, Morphological Methods in Image and Signal Processing. Prentice-Hall, Englewood Cliffs, 1988. Golovin, D., A. Krause, and D. Ray, “Near-optimal Bayesian active learning with noisy observations,” NIPS’10 Proceedings of the 23rd International Conference on Neural Information Processing Systems, pp. 766–774, 2010. Gozzolino, J. M., R. Gonzalez-Zubieta, and R. L. Miller, “Markovian decision processes with uncertain transition probabilities,” Technical report, Defense Technical Information Center, Fort Belvoir, VA, 1965. Grigoryan, A. M. and E. R. Dougherty, “Robustness of optimal binary filters,” Electronic Imaging, vol. 7, no. 1, pp. 117–126, 1998. Grigoryan, A. M. and E. R. Dougherty, “Design and analysis of robust binary filters in the context of a prior distribution for the states of nature,” Mathematical Imaging and Vision, vol. 11, pp. 239–254, 1999. Grigoryan, A. M. and E. R. Dougherty, “Bayesian robust optimal linear filters,” Signal Processing, vol. 81, no. 12, pp. 2503–2521, 2001. Guiasu, S. and A. Shenitzer, “The principle of maximum entropy,” The Mathematical Intelligencer, vol. 7, no. 1, pp. 42–48, 1985. Gupta, V., T. H. Chung, B. Hassibi, and R. M. Murray, “On a stochastic sensor selection algorithm with applications in sensor scheduling and sensor coverage,” Automatica, vol. 42, no. 2, pp. 251–260, 2006. Haralick, R. M., K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Trans. Systems, Man, and Cybernetics, vol. SMC-3, no. 6, pp. 610–621, 1973. Hoeting, J., D. Madigan, A. Raftery, and C. Volinsky, “Bayesian model averaging: A tutorial,” Statistical Science, vol. 14, no. 4, pp. 382–417, 1999. Hua, J., W. D. Tembe, and E. R. 
Dougherty, “Performance of feature selection methods in the classification of high-dimensional data,” Pattern Recognition, vol. 42, no. 3, pp. 409–424, 2009. Huan, X. and Y. M. Marzouk, “Sequential Bayesian optimal experimental design via approximate dynamic programming,” arXiv:1604.08320, 2016. Hunter, J. J., “Stationary distributions and mean first passage times in Markov chains using generalised inverses,” Asia-Pacific Journal of Operational Research, vol. 9, pp. 145–153, 1992. Hunter, J. J., “Mixing times with applications to perturbed Markov chains,” Research Letters in the Information and Mathematical Sciences, vol. 3, pp. 85–98, 2002. Hunter, J. J., “Stationary distributions and mean first passage times of perturbed Markov chains,” Linear Algebra and its Applications, vol. 410, pp. 217–243, 2005.
Imani, M., R. Dehghannasiri, U. M. Braga-Neto, and E. R. Dougherty, “Sequential experimental design for optimal structural intervention in gene regulatory networks based on the mean objective cost of uncertainty,” arXiv:1805.12253, 2018. Jain, A. K. and W. G. Waller, “On the optimal number of features in the classification of multivariate Gaussian data,” Pattern Recognition, vol. 10, pp. 365–374, 1978. Jaynes, E. T., “Information theory and statistical mechanics,” Physical Review, vol. 106, pp. 620–630, 1957. Jaynes, E. T., “Prior probabilities,” IEEE Trans. Systems Science and Cybernetics, vol. 4, pp. 227–241, 1968. Jaynes, E., “What is the question?” in Bayesian Statistics, J. M. Bernardo et al., Eds., Valencia Univ. Press, Valencia, 1980. Jeffreys, H., “An invariant form for the prior probability in estimation problems,” Proc. Royal Society A, Mathematics, Astronomy, Physical Science, vol. 186, no. 1007, pp. 453–461, 1946. Jeffreys, H., Theory of Probability. Oxford University Press, London, 1961. Jones, D. R., M. Schonlau, and W. J. Welch, “Efficient global optimization of expensive black-box functions,” Journal of Global Optimization, vol. 13, no. 4, pp. 455–492, 1998. Kailath, T., “An innovations approach to least-squares estimation - part i: Linear filtering in additive white noise,” IEEE Trans. Automatic Control, vol. 13, no. 6, pp. 646–655, 1968. Kalman, R. E., “A new approach to linear filtering and prediction problems,” J. Basic Engineering, vol. 82, no. 1, pp. 35–45, 1960. Kalman, R. E. and R. S. Bucy, “New results in linear filtering and prediction theory,” J. Basic Engineering, vol. 83, no. 1, pp. 95–108, 1961. Mardia, K. V., J. T. Kent, and J. M. Bibby, Multivariate Analysis. Academic Press, London, 1979. Kashyap, R., “Prior probability and uncertainty,” IEEE Trans. Information Theory, vol. IT-17, no. 6, pp. 641–650, 1971. Kass, R. E. and L. 
Wasserman, “The selection of prior distributions by formal rules,” Journal of the American Statistical Association, vol. 91, no. 435, pp. 1343–1370, 1996. Kassam, S. A. and T. I. Lim, “Robust Wiener filters,” Journal of the Franklin Institute, vol. 304, pp. 171–185, 1977. Kauffman, S. A., “Metabolic stability and epigenesis in randomly constructed genetic nets,” J. Theoretical Biology, vol. 22, pp. 437–467, 1969. King, R. D., K. E. Whelan, F. M. Jones, P. G. Reiser, C. H. Bryant, S. H. Muggleton, D. B. Kell, and S. G. Oliver, “Functional genomic hypothesis generation and experimentation by a robot scientist,” Nature, vol. 427, no. 6971, pp. 247–252, 2004.
Knight, J., I. Ivanov, and E. R. Dougherty, “MCMC implementation of the optimal Bayesian classifier for non-Gaussian models: Model-based RNA-Seq classification,” BMC Bioinformatics, vol. 2014, 15:401, 2014. Knight, J., I. Ivanov, R. Chapkin, and E. R. Dougherty, “Detecting multivariate gene interactions in RNA-Seq data using optimal Bayesian classification,” IEEE/ACM Trans. on Computational Biology and Bioinformatics, vol. 15, no. 2, pp. 484–493, 2018. Kumar, P. R., “A survey of some results in stochastic adaptive control,” SIAM J. on Control and Optimization, vol. 23, no. 3, pp. 329–380, 1985. Kuznetsov, V. P., “Stable detection when the signal and spectrum of normal noise are inaccurately known,” Telecommunications and Radio Engineering, vol. 30-31, pp. 58–64, 1976. Kwon, O. K., W. H. Kwon, and K. S. Lee, “FIR filters and recursive forms for discrete-time state-space models,” Automatica, vol. 25, no. 5, pp. 715–728, 1989. Kwon, W. H., K. S. Lee, and O. K. Kwon, “Optimal FIR filters for time-varying state-space models,” IEEE Trans. Aerospace and Electronic Systems, vol. 26, no. 6, pp. 1011–1021, 1990. Laplace, P. S., Théorie Analytique des Probabilités. Ve. Courcier, Paris, 1812. Lawoko, C. R. and G. J. McLachlan, “Discrimination with autocorrelated observations,” Pattern Recognition, vol. 18, no. 2, pp. 145–149, 1985. Lawoko, C. R. and G. J. McLachlan, “Asymptotic error rates of the W and Z statistics when the training observations are dependent,” Pattern Recognition, vol. 19, no. 6, pp. 467–471, 1986. Linder, T., G. Lugosi, and K. Zeger, “Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding,” IEEE Trans. on Information Theory, vol. 40, no. 6, pp. 1728–1740, 1994. Lindley, D. V., “On a measure of the information provided by an experiment,” The Annals of Mathematical Statistics, pp. 986–1005, 1956. Lindley, D. V., Bayesian Statistics – A Review, SIAM, Philadelphia, 1972. Madigan, D. and A. 
Raftery, “Model selection and accounting for model uncertainty in graphical models using Occam’s window,” J. American Statistical Association, vol. 89, no. 428, pp. 1535–1546, 1994. Martin, J. J., Bayesian Decision Problems and Markov Chains. John Wiley, New York, 1967. Martins, D., U. Braga-Neto, R. Hashimoto, M. L. Bittner, and E. R. Dougherty, “Intrinsically multivariate predictive genes,” IEEE J. of Selected Topics in Signal Processing, vol. 2, no. 3, pp. 424–439, 2008. Matheron, G., Random Sets and Integral Geometry. John Wiley, New York, 1975. Meditch, J. S., “Orthogonal projection and discrete optimal linear smoothing,” SIAM Journal on Control, vol. 5, no. 1, pp. 74–89, 1967.
Mehra, R. K., “On the identification of variances and adaptive Kalman filtering,” IEEE Trans. Automatic Control, vol. 15, no. 2, pp. 175–184, 1970. Meilă, M., “Comparing clusterings—an information based distance,” Journal of Multivariate Analysis, vol. 98, no. 5, pp. 873–895, 2007. McLachlan, G. J., “Further results on the effect of intraclass correlation among training samples in discriminant analysis,” Pattern Recognition, vol. 8, no. 4, pp. 273–275, 1976. Mohsenizadeh, D. N., R. Dehghannasiri, and E. R. Dougherty, “Optimal objective-based experimental design for uncertain dynamical gene networks with experimental error,” IEEE/ACM Trans. Computational Biology and Bioinformatics, vol. 15, no. 1, pp. 218–230, 2018. Myers, K. A. and B. D. Tapley, “Adaptive sequential estimation with unknown noise statistics,” IEEE Trans. Automatic Control, vol. 21, no. 4, pp. 520–523, 1976. Pal, R., A. Datta, and E. R. Dougherty, “Bayesian robustness in the control of gene regulatory networks,” IEEE Trans. Signal Processing, vol. 57, no. 9, pp. 3667–3678, 2009. Poor, H. V., “On robust Wiener filtering,” IEEE Trans. Automatic Control, vol. 25, no. 3, pp. 531–536, 1980. Poor, H. V., “Robust matched filters,” IEEE Trans. Information Theory, vol. 29, no. 5, pp. 677–687, 1983. Poor, H. V. and D. P. Looze, “Minimax state estimation for linear stochastic systems with noise uncertainty,” IEEE Trans. Automatic Control, vol. 26, no. 4, pp. 902–906, 1981. Pugachev, V. S., “The application of canonical expansions of random functions to the determination of the optimal linear system,” Avtomat. i Telemekh., vol. 27, no. 6, 1956. Pugachev, V. S., “Integral canonical representations of random functions and their application to the determination of optimal linear systems,” Avtomat. i Telemekh., vol. 28, no. 11, 1957. Pugachev, V. S., Theory of Random Functions and Its Applications to Control Problems. Pergamon Press, Oxford, 1965. Qian, X. and E. R. 
Dougherty, “Effect of function perturbation on the steady-state distribution of genetic regulatory networks: Optimal structural intervention,” IEEE Trans. Signal Processing, vol. 56, no. 10, Part 1, pp. 4966–4975, 2008. Qian, X. and E. R. Dougherty, “Bayesian regression with network prior: Optimal Bayesian filtering perspective,” IEEE Trans. Signal Processing, vol. 64, no. 23, 2016. Quintana, F. A. and P. L. Iglesias, “Bayesian clustering and product partition models,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 65, no. 2, pp. 557–574, 2003.
Raftery, A. and J. Hoeting, “Bayesian model averaging for linear regression models,” J. American Statistical Association, vol. 92, no. 437, pp. 179–191, 1997.
Raiffa, H. and R. Schlaifer, Applied Statistical Decision Theory. MIT Press, Cambridge, 1961.
Reichenbach, H., The Rise of Scientific Philosophy. University of California Press, Berkeley, 1971.
Rissanen, J., “A universal prior for integers and estimation by minimum description length,” Annals of Statistics, vol. 11, pp. 416–431, 1983.
Rodriguez, C. C., “Entropic priors,” in Maximum Entropy and Bayesian Methods, W. T. Grandy, Jr. and L. H. Schick, Eds., Kluwer, Dordrecht, 1991.
Rosenblueth, A. and N. Wiener, “The role of models in science,” Philosophy of Science, vol. 12, no. 4, pp. 316–321, 1945.
Rugh, W. J., Linear System Theory, 2nd ed. Prentice Hall, Englewood Cliffs, 1996.
Saar-Tsechansky, M. and F. Provost, “Active sampling for class probability estimation and ranking,” Machine Learning, vol. 54, no. 2, pp. 153–178, 2004.
Sarkka, S. and A. Nummenmaa, “Recursive noise adaptive Kalman filtering by variational Bayesian approximations,” IEEE Trans. Automatic Control, vol. 54, no. 3, pp. 596–600, 2009.
Satia, J. K. and R. E. Lave, “Markovian decision processes with uncertain transition probabilities,” Operations Research, vol. 21, no. 3, pp. 728–740, 1973.
Sayed, A. H., “A framework for state-space estimation with uncertain models,” IEEE Trans. Automatic Control, vol. 46, no. 7, pp. 998–1013, 2001.
Schrödinger, E., Science Theory and Man. Dover, New York, 1957.
Serra, J., Image Analysis and Mathematical Morphology. Academic Press, New York, 1983.
Shald, S., “The continuous Kalman filter as the limit of the discrete Kalman filter,” Stochastic Analysis and Applications, vol. 17, no. 5, pp. 841–856, 1999.
Shmaliy, Y. S., “Linear optimal FIR estimation of discrete time-invariant state-space models,” IEEE Trans. Signal Processing, vol. 58, no. 6, pp. 3086–3096, 2010.
Shmaliy, Y. S., “An iterative Kalman-like algorithm ignoring noise and initial conditions,” IEEE Trans. Signal Processing, vol. 59, no. 6, pp. 2465–2473, 2011.
Shmulevich, I., E. R. Dougherty, and W. Zhang, “From Boolean to probabilistic Boolean networks as models of genetic regulatory networks,” Proc. of IEEE, vol. 90, no. 11, pp. 1778–1792, 2002.
Silver, E. A., “Markovian decision processes with uncertain transition probabilities or rewards,” Technical report, Defense Technical Information Center, Fort Belvoir, VA, 1963.
Sivakumar, K., Y. Balagurunathan, and E. R. Dougherty, “Asymptotic joint normality of the granulometric moments,” Pattern Recognition Letters, vol. 22, no. 14, pp. 1537–1543, 2001.
Smith, M. and A. Roberts, “An exact equivalence between the discrete and continuous-time formulations of the Kalman filter,” Mathematics and Computers in Simulation, vol. 20, no. 2, pp. 102–109, 1978.
Spall, J. C. and S. D. Hill, “Least-informative Bayesian prior distributions for finite samples based on information theory,” IEEE Trans. Automatic Control, vol. 35, no. 5, pp. 580–583, 1990.
Stone, M., “Application of a measure of information to the design and comparison of regression experiments,” Annals of Mathematical Statistics, vol. 30, pp. 55–70, 1959.
Stoyan, D., W. S. Kendall, and J. Mecke, Stochastic Geometry and Its Applications. John Wiley, Chichester, 1987.
Tubbs, J. D., “Effect of autocorrelated training samples on Bayes’ probabilities of misclassification,” Pattern Recognition, vol. 12, no. 6, pp. 351–354, 1980.
Vastola, K. S. and H. V. Poor, “Robust Wiener–Kolmogorov theory,” IEEE Trans. Information Theory, vol. 28, no. 6, pp. 316–327, 1984.
Verdu, S. and H. V. Poor, “Minimax linear observers and regulators for stochastic systems with uncertain second-order statistics,” IEEE Trans. Automatic Control, vol. 29, no. 6, pp. 499–511, 1984a.
Verdu, S. and H. V. Poor, “On minimax robustness: A general approach and applications,” IEEE Trans. Information Theory, vol. 30, no. 2, pp. 328–340, 1984b.
Wang, Y., K. G. Reyes, K. A. Brown, C. A. Mirkin, and W. B. Powell, “Nested-batch-mode learning and stochastic optimization with an application to sequential multistage testing in materials science,” SIAM Journal on Scientific Computing, vol. 37, no. 3, B361–B381, 2015.
Wasserman, L., “Bayesian model selection and model averaging,” J. Mathematical Psychology, vol. 44, no. 1, pp. 92–107, 2000.
Yeang, C.-H., H. C. Mak, S. McCuine, C. Workman, T. Jaakkola, and T. Ideker, “Validation and refinement of gene-regulatory pathways on a network of physical interactions,” Genome Biology, vol. 6, no. 7, R62, 2005.
Yoon, B.-j., X. Qian, and E. R. Dougherty, “Quantifying the objective cost of uncertainty in complex dynamical systems,” IEEE Trans. Signal Processing, vol. 61, no. 9, pp. 2256–2266, 2013.
Zadeh, L. A. and J. R. Ragazzini, “An extension of Wiener’s theory of prediction,” J. Applied Physics, vol. 21, no. 7, pp. 645–655, 1950.
Zapala, A. M., “Unbounded mappings and weak convergence of measures,” Statistics and Probability Letters, vol. 78, pp. 698–706, 2008.
Zellner, A., Basic Issues in Econometrics. University of Chicago Press, 1984.
Zellner, A., “Past and recent results on maximal data information priors,” Working paper series in economics and econometrics, University of Chicago, Graduate School of Business, Department of Economics, Chicago, 1995.
Zellner, A., “Models, prior information, and Bayesian analysis,” Journal of Econometrics, vol. 75, no. 1, pp. 51–68, 1996.
Zhou, T., “Sensitivity penalization based robust state estimation for uncertain linear systems,” IEEE Trans. Automatic Control, vol. 55, no. 4, pp. 1018–1024, 2010.
Zhou, T., “Robust recursive state estimation with random measurement droppings,” IEEE Trans. Automatic Control, vol. 61, no. 1, pp. 156–171, 2016.
Zollanvari, A., U. Braga-Neto, and E. R. Dougherty, “Exact representation of the second-order moments for resubstitution and leave-one-out error estimation for linear discriminant analysis in the univariate heteroskedastic Gaussian model,” Pattern Recognition, vol. 45, no. 2, pp. 908–917, 2012.
Zollanvari, A., J. Hua, and E. R. Dougherty, “Analytical study of performance of linear discriminant analysis in stochastic settings,” Pattern Recognition, vol. 46, no. 11, pp. 3017–3029, 2013.
Zollanvari, A. and E. R. Dougherty, “Incorporating prior knowledge induced from stochastic differential equations in the classification of stochastic observations,” EURASIP Journal on Bioinformatics and Systems Biology, vol. 2016, no. 1, p. 2, 2016.
Index

A
action space, 193
algebraic granulometry, 83
anti-extensivity, 83
autocorrelation function, 3

B
Bayes classifier, 198
Bayes clustering error, 258
Bayes error, 198
Bayes label operator, 254
Bayes partition, 258
Bayes partitioning error, 258
Bayesian innovation process, 130
Bayesian minimum-mean-square-error (MMSE) linear estimator, 129
Bayesian MMSE error estimator, 202
Bayesian orthogonality principle, 130
belief vector, 186
Bessel inequality, 25
bi-orthogonality, 31
Boolean network, 171
—with perturbation, 171
Brownian motion, 13

C
candidate partition, 258
canonical expansion (representation), 23
class-conditional densities, 198
class prior probabilities, 198
classification rules, 207–208, 242
—almost uniformly integrable, 223
—consistent, 221
—strongly consistent, 222
—universally consistent, 221
—universally strongly consistent, 222
classifier, 197
classifier model, 242
cluster mismatch error, 254
coefficient of determination, 180
coefficients, 23
complete-linkage clustering, 252
computational gain, 182
conditional density, 51
conditional expectation (mean), 51
conditional random variable, 51
conditional uncertainty class, 194
conditional uncertainty vector, 150
conditional variance, 51
conditioning parameter, 235
constrained classifier, 208
continuous-time recursive equations, 142
controlled transition matrix, 186
convex granulometry, 84
coordinate function, 23
—canonical expansion of, 29
coordinate function representability, 81
correlation-coefficient function, 3
cost function, 94, 193
cost functional, 95
cost of constraint, 208
covariance extrapolation, 71, 128
covariance function, 3
covariance update, 71, 128
cross-correlation coefficient, 4
cross-correlation function, 4
cross-covariance function, 4
cross-validation error estimator, 243
crosstalk parameter, 235

D
design cost, 208
design error, 208
Dirichlet prior, 212, 238
discrete classifier, 209
discrete histogram classification rule, 209
discrete Kalman filter, 70
discriminant function, 200
dynamic-programming MOCU policy, 188

E
effective auto-correlations, 98
effective characteristic, 95, 98
—in the weak sense, 96
effective class-conditional density, 205
effective conditional distribution, 110
effective correlation function, 97
effective cross-correlations, 98
effective granulometric size densities, 103
effective Kalman gain matrix, 133
effective power spectra, 99
effective process, 95
effective RLPP, 266
effective Wiener–Hopf equation, 97
efficient global optimization, 196
elementary functions, 23
error-covariance matrix, 62, 127
error of cluster operator, 255
error of label function, 254
error of label operator, 254
error of partition, 254
estimation rule, 242
Euclidean-distance clustering, 250
Euclidean granulometry, 83
expectation (mean function), 3
expectation–maximization (EM) algorithm, 253
expected mean log-likelihood, 233
experimental design value, 151, 194
experimental error, 191

F
farthest-neighbor clustering, 252
feature-label distribution, 198
filter, 70
Fourier coefficients, 24
Fourier series, 25

G
gain matrix, 67
Gauss–Markov theorem, 62
Gaussian mixture, 252
general covariance model, 218
general partition error, 256
generalized derivative, 10
generator (of a family of sets), 84
global filter, 120
Grammian vector, 56
granulometric bandpass filter, 85
granulometric measure, 87
granulometric size distribution, 85
granulometry, 83
greedy-MOCU design, 187

H
hierarchical clustering, 21
Hilbert space, 24
holdout estimator, 242

I
IBR action, 193
IBR clusterer, 266
IBR Kalman filter, 132, 135
IBR m-term compression, 164
IBR perturbation, 169
IBR Wiener filter, 99, 102
idempotence, 84
improper prior, 205
increasing, 83
independent covariance model, 217
independent increments, 11
—stationary, 11
induced intervention, 178
induced robust intervention, 178–179
inner product, 24
innovation process, 127
innovations, 67
integral canonical expansion (representation), 43
intrinsically Bayesian robust (IBR) classifier, 202
intrinsically Bayesian robust (IBR) filter, 94
invariant class, 84
inverse filter, 77
inverse-Wishart distribution, 217

K
k-means algorithm, 250
—fuzzy, 251
Kalman gain matrix, 71, 128
Kalman predictor, 128
Kalman–Bucy filter, 142
Karhunen–Loève theorem, 36
knowledge gradient, 194
Kullback–Leibler distance, 152
Kullback–Leibler divergence, 233

L
label cost function, 255
label-mismatch error, 254
least-favorable pair, 122
least-favorable state, 121
likelihood function, 205
linear discriminant analysis, 201

M
Mahalanobis distance, 210
Markov chain Monte Carlo (MCMC) method, 136
maximal data information, 233
maximal knowledge-driven information prior (MKDIP), 231
—with constraints, 232
maximally robust state, 113
maximum entropy, 233
MCBR clusterer, 266
MCBR mean objective cost of uncertainty, 150
MCBR objective cost of uncertainty, 150
mean-characteristic global filter, 120
mean objective cost of uncertainty (MOCU), 149, 194
mean-response global filter, 120
mean robustness, 112, 120
mean-square derivative, 6
mean-square differentiable, 6
mean-square error (MSE), 39, 51
mean-square error estimator
—optimal, 51
mean-square integrability, 8
mean-square partial derivative, 7
measurement equation, 69
measurement matrix, 69
Mercer’s theorem, 37
minimax Kalman filter, 139
minimax robust filter, 121, 122–124
MKDIP with constraints and additive costs, 232
model-constrained (state-constrained) Bayesian robust (MCBR) filter, 113
model-constrained (state-constrained) minimax robust filter, 121
multinomial discrimination, 209
multivariate Poisson model, 229
multivariate Student’s t-distribution, 220

N
natural partition cost function, 256
next-generation sequencing, 229
norm, 24
normal-inverse-Wishart distribution, 218, 263

O
objective cost of uncertainty (OCU), 149
opening (of a set), 83
opening spectrum, 85
optimal action, 193
optimal Bayesian classifier (OBC), 202
optimal Bayesian filter (OBF), 108
optimal experiment, 151
optimal homogeneous linear MSE filter, 56
optimal linear MSE filter (estimator), 54
optimal single-node-perturbation structural intervention, 173
orthogonal projection theorem, 58
orthogonal system, 24
orthogonal vector, 24
orthogonality principle, 55, 73
orthonormal system, 24
—complete, 25

P
Parseval identity, 25
Parseval inner-product identity, 29
pass and fail sets, 85
plug-in discriminant, 208
point process, 253
Poisson impulse process, 10
Poisson increment process, 15
Poisson points, 9
Poisson process, 9, 11
population-specific error rates, 199
positive semidefinite function, 47
posterior density, 108
posterior distribution, 194, 198
power spectral density, 46
predictor, 70
primary grain, 87
primary parameter, 151
prior distribution, 193
prior probability distribution, 94
process noise, 69
pseudo-inverse, 58

Q
quadratic discriminant analysis (QDA), 201

R
random function, 1
random labeled point process (RLPP), 253
random labeling, 253
random process, 1
—characteristics, 90
random signal, 1
random time function, 1
random time series, 1
reconstructive granulometry, 87
recursive equations, 128–129
reduced IBR operator, 150
reduced uncertainty class, 150
reducibility, 95
reference partition, 258
regression curve, 52
regularized expected mean log-likelihood prior (REMLP), 234
regularized maximum entropy prior (RMEP), 234
regularized maximum information prior (RMIP), 234
remaining MOCU, 150
residual IBR cost, 152, 226
robustness, 112
root-mean-square (RMS), 242
rule model, 242

S
saddle-point solution, 121
sample-based LDA discriminant, 211
sample-based QDA discriminant, 210
separable point processes, 259–260
size distribution, 84
smoother, 70
solvability, 95, 266
solvability in the weak sense, 96
span, 24
spectral components, 85
standard deviation function, 3
state equation, 69
state-transition matrix, 144
state vector, 69
stochastic process, 1
strict-sense stationary, 16
structuring element, 83

T
terminal cost function, 188
translation invariance, 15, 83

U
uncertainty class, 93–94, 193
uniformly most robust (global filter), 120
utility function, 152

V
validity measure, 257
variance function, 3
Voronoi partition, 250

W
weak* consistent, 223
white noise
—continuous, 12
—discrete, 11, 23
wide-sense stationarity, 14
Wiener filter, 76
Wiener process, 14
Wiener smoothing filter, 77
Wiener–Hopf equation, 75–76
—extended, 79

Z
Zipf power law model, 216
Edward R. Dougherty is a Distinguished Professor in the Department of Electrical and Computer Engineering at Texas A&M University in College Station, Texas, where he holds the Robert M. Kennedy ‘26 Chair in Electrical Engineering and is Scientific Director of the Center for Bioinformatics and Genomic Systems Engineering. He holds a Ph.D. in mathematics from Rutgers University and an M.S. in Computer Science from Stevens Institute of Technology, and was awarded a Doctorate Honoris Causa by the Tampere University of Technology in Finland. His previous works include SPIE Press titles The Evolution of Scientific Knowledge: From Certainty to Uncertainty (2016) and Hands-on Morphological Image Processing (2003).