[1] I. Markovsky. Dynamic measurement. In Data-driven filtering and control design: Methods and applications, chapter 6, pages 97--108. IET, 2019. [ bib | DOI | pdf ]
In metrology, a given measurement technique has fundamental speed and accuracy limitations imposed by physical laws. Data processing allows us to overcome these limitations by using prior knowledge about the sensor dynamics. The prior knowledge considered in this paper is a model class to which the sensor dynamics belongs. We present methods that are applicable to linear time-invariant processes and are suitable for real-time implementation on a digital signal processor.

Keywords: system identification; subspace methods; real-time estimation; Kalman filtering; metrology.
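The idea of overcoming a sensor's speed limitation by exploiting a model of its dynamics can be illustrated with a toy sketch (not the chapter's method, which handles general LTI models and noise): for a first-order sensor with unknown pole, three consecutive samples of the step response determine the steady-state value exactly, long before the sensor settles.

```python
# Toy illustration (hypothetical first-order sensor, noise-free data):
# the sensor reads a constant input ybar through the dynamics
#   y[t+1] = a*y[t] + (1 - a)*ybar,  y[0] = 0,
# with pole a and steady state ybar both unknown.
a_true, ybar_true = 0.8, 5.0
y = [0.0]
for _ in range(3):
    y.append(a_true * y[-1] + (1 - a_true) * ybar_true)

# Consecutive increments decay geometrically with ratio a, so three
# samples identify the pole; ybar then follows from the recursion.
a_est = (y[3] - y[2]) / (y[2] - y[1])
ybar_est = (y[3] - a_est * y[2]) / (1 - a_est)
```

After three steps the sensor output itself has reached less than half of the final value, yet the model-based estimate is already exact (in the noise-free case); the chapter's methods extend this principle to noisy data and higher-order dynamics.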
[2] I. Markovsky, A. Fazzi, and N. Guglielmi. Applications of polynomial common factor computation in signal processing. In Latent Variable Analysis and Signal Separation, Lecture Notes in Computer Science, pages 99--106. Springer, 2018. [ bib | DOI | pdf ]
We consider the problem of computing the greatest common divisor of a set of univariate polynomials and present applications of this problem in system theory and signal processing. One application is blind system identification: given the responses of a system to unknown inputs, find the system. Assuming that the unknown system is a finite impulse response system and that at least two experiments are done with inputs that have finite support and whose Z-transforms have no common factors, the impulse response of the system can be computed, up to a scaling factor, as the greatest common divisor of the Z-transforms of the outputs. Other applications of the greatest common divisor problem in system theory and signal processing are finding the distance of a system to the set of uncontrollable systems and common dynamics estimation in a multi-channel sum-of-exponentials model.

Keywords: blind system identification; sum-of-exponentials modeling; distance to uncontrollability; approximate common factor; low-rank approximation
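The blind identification result stated in the abstract can be sketched in a few lines of Python (an illustrative toy example, not the authors' implementation): with exact rational arithmetic the Euclidean algorithm, which is numerically fragile in floating point, recovers the impulse response as the GCD of the two output polynomials.

```python
from fractions import Fraction

def conv(a, b):
    """Polynomial multiplication (coefficients, highest degree first)."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polyrem(num, den):
    """Remainder of the polynomial division num / den."""
    num = list(num)
    while len(num) >= len(den):
        if num[0] == 0:
            num.pop(0)
            continue
        f = num[0] / den[0]
        for i in range(len(den)):
            num[i] -= f * den[i]
        num.pop(0)
    while num and num[0] == 0:
        num.pop(0)
    return num

def polygcd(a, b):
    """Monic greatest common divisor via the Euclidean algorithm."""
    while b:
        a, b = b, polyrem(a, b)
    return [c / a[0] for c in a]

# Unknown FIR impulse response h; two unknown finite-support inputs
# whose Z-transforms z+1 and z-1 are coprime.
h  = [Fraction(1), Fraction(2), Fraction(3)]
u1 = [Fraction(1), Fraction(1)]
u2 = [Fraction(1), Fraction(-1)]
y1, y2 = conv(h, u1), conv(h, u2)  # the two observed outputs
h_est = polygcd(y1, y2)            # recovers h up to a scaling factor
```

In floating-point arithmetic and with noisy outputs the exact GCD degenerates to a trivial one, which is why the approximate common factor computations discussed in the paper are needed in practice.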
[3] I. Markovsky and P.-L. Dragotti. Using structured low-rank approximation for sparse signal recovery. In Latent Variable Analysis and Signal Separation, Lecture Notes in Computer Science, pages 479--487. Springer, 2018. [ bib | DOI | pdf | software ]
Structured low-rank approximation is used in model reduction, system identification, and signal processing to find low-complexity models from data. The rank constraint imposes the condition that the approximation has bounded complexity and the optimization criterion aims to find the best match between the data---a trajectory of the system---and the approximation. In some applications, however, the data is sub-sampled from a trajectory, which poses the problem of sparse approximation using the low-rank prior. This paper considers a modified structured low-rank approximation problem where the observed data is a linear transformation of a system's trajectory with reduced dimension. We reformulate this problem as a structured low-rank approximation with missing data and propose a solution method based on the variable projection principle. We compare the structured low-rank approximation approach with the classical sparsity-inducing method of 1-norm regularization. The 1-norm regularization method is effective for sum-of-exponentials modeling with a large number of samples; however, it is not suitable for identification of systems with damping.

Keywords: structured low-rank approximation, sparse approximation, missing data estimation, sum-of-exponentials modeling, 1-norm regularization
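The sparsity-inducing mechanism of the 1-norm regularization mentioned above can be illustrated with a minimal sketch (not the paper's method): for the separable problem min_x 0.5*||x - b||^2 + lam*||x||_1 the solution decouples elementwise into soft thresholding, which sets small entries exactly to zero.

```python
def soft(z, t):
    """Soft thresholding: the proximal operator of t * |.|, t >= 0."""
    return max(abs(z) - t, 0.0) * (1.0 if z > 0 else -1.0)

# Entries smaller than lam in magnitude are zeroed; larger ones
# are shrunk toward zero by lam -- the source of both the sparsity
# and the bias of 1-norm regularization.
b, lam = [3.0, 0.2, -1.5], 0.5
x = [soft(bi, lam) for bi in b]   # -> [2.5, 0.0, -1.0]
```

Iterating this operator interleaved with gradient steps on the data-fit term gives the standard proximal-gradient scheme for general linear measurements.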
[4] I. Markovsky. System identification in the behavioral setting: A structured low-rank approximation approach. In E. Vincent et al., editors, Latent Variable Analysis and Signal Separation, volume 9237 of Lecture Notes in Computer Science, pages 235--242. Springer, 2015. [ bib | DOI | pdf ]
System identification is a fast growing research area that encompasses a broad range of problems and solution methods. It is desirable to have a unifying setting and a few common principles that are sufficient to understand the currently existing identification methods. The behavioral approach to systems and control, put forward in the mid-1980s, is such a unifying setting. Until recently, however, the behavioral approach lacked supporting numerical solution methods. In the last 10 years, the structured low-rank approximation setting was used to fill this gap. In this paper, we summarize recent progress on methods for system identification in the behavioral setting and pose some open problems. First, we show that errors-in-variables and output error system identification problems are equivalent to Hankel structured low-rank approximation. Then, we outline three generic solution approaches: 1) methods based on local optimization, 2) methods based on convex relaxations, and 3) subspace methods. A specific example of a subspace identification method---data-driven impulse response computation---is presented in full detail. In order to achieve the desired unification, the classical ARMAX identification problem should also be formulated as a structured low-rank approximation problem. This is an outstanding open problem.

Keywords: system identification; errors-in-variables modeling; behavioral approach; Hankel matrix; low-rank approximation; impulse response estimation; ARMAX identification.
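The link between bounded complexity and rank that underlies the Hankel structured low-rank approximation formulation can be checked on exact data with a small sketch (illustrative only, using exact rational arithmetic): the Hankel matrix of a trajectory of an order-n linear time-invariant system has rank n.

```python
from fractions import Fraction

def hankel(w, L):
    """L x (len(w)-L+1) Hankel matrix of the sequence w."""
    return [[Fraction(w[i + j]) for j in range(len(w) - L + 1)]
            for i in range(L)]

def rank(M):
    """Matrix rank by exact Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

y1 = [2 ** t for t in range(8)]           # one exponential: order 1
y2 = [2 ** t + 3 ** t for t in range(8)]  # two exponentials: order 2
# rank(hankel(y1, 4)) == 1 and rank(hankel(y2, 4)) == 2
```

With noisy data the Hankel matrix is generically full rank, and identification becomes the problem of approximating it by a rank-deficient Hankel matrix, which is exactly the structured low-rank approximation problem of the paper.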
[5] I. Markovsky. Rank constrained optimization problems in computer vision. In J. Suykens, M. Signoretto, and A. Argyriou, editors, Regularization, Optimization, Kernels, and Support Vector Machines, Machine Learning & Pattern Recognition series, chapter 13, pages 293--312. Chapman & Hall/CRC, 2014. [ bib | DOI | pdf ]
[6] I. Markovsky and K. Usevich. Nonlinearly structured low-rank approximation. In Yun Raymond Fu, editor, Low-Rank and Sparse Modeling for Visual Analysis, pages 1--22. Springer, 2014. [ bib | DOI | pdf ]
Polynomially structured low-rank approximation problems occur in algebraic curve fitting, e.g., conic section fitting, subspace clustering (generalized principal component analysis), and nonlinear and parameter-varying system identification. The maximum likelihood estimation principle applied to these nonlinear models leads to nonconvex optimization problems and yields inconsistent estimators in the errors-in-variables (measurement errors) setting. We propose a computationally cheap and statistically consistent estimator based on a bias correction procedure, called adjusted least-squares estimation. The method is successfully used for conic section fitting and was recently generalized to algebraic curve fitting. The contribution of this book chapter is the application of the polynomially structured low-rank approximation problem and, in particular, the adjusted least-squares method to subspace clustering and nonlinear and parameter-varying system identification. The classical input-output notion of a dynamical model used in system identification is replaced by the behavioral definition of a model as a set, represented by implicit nonlinear difference equations.

Keywords: structured low-rank approximation, conic section fitting, subspace clustering, nonlinear system identification.
[7] I. Markovsky. Algorithms and literate programs for weighted low-rank approximation with missing data. volume 3, chapter 12, pages 255--273. Springer, 2011. [ bib | DOI | pdf | software ]
[8] I. Markovsky, A. Amann, and S. Van Huffel. Application of filtering methods for removal of resuscitation artifacts from human ECG signals. In L. Wang, H. Garnier, and T. Jakeman, editors, System Identification, Environmental Modelling, and Control System Design. Springer, 2009. [ bib | DOI | pdf | software ]
[9] I. Markovsky and S. Van Huffel. On weighted structured total least squares. In I. Lirkov, S. Margenov, and J. Waśniewski, editors, Large-Scale Scientific Computing, volume 3743 of Lecture Notes in Computer Science, pages 695--702. Springer-Verlag, 2006. [ bib | DOI | pdf ]
[10] A. Kukush, I. Markovsky, and S. Van Huffel. Consistent estimation of an ellipsoid with known center. In J. Antoch, editor, Comput. Stat. (COMPSTAT), pages 1369--1376. Physica-Verlag, 2004. [ bib | DOI | .ps.gz ]
[11] A. Kukush, I. Markovsky, and S. Van Huffel. On consistent estimators in linear and bilinear multivariate errors-in-variables models. In S. Van Huffel and P. Lemmerling, editors, Total Least Squares and Errors-in-Variables Modeling: Analysis, Algorithms and Applications, pages 155--164. Kluwer, 2002. [ bib | DOI | .ps.gz ]
We consider three multivariate regression models related to the TLS problem. The errors are allowed to have unequal variances.

For the model AX = B, the elementwise-weighted TLS estimator is considered. The matrix [A B] is observed with errors and has independent rows, but the errors within a row are correlated. In addition, the corresponding error covariance matrices may differ from row to row and some of the columns are allowed to be error-free. We give mild conditions for weak consistency of the estimator as the number of rows of A increases. We derive the objective function for the estimator and propose an iterative procedure to compute the solution.

In a bilinear model AXB = C, where the data A, B, C are perturbed by errors, an adjusted least squares estimator is considered, which is consistent, i.e., it converges to X as the number m of rows in A and the number q of columns in B increase.

A similar approach is applied in a related model, arising in motion analysis. The model is v^T F u = 0, where the vectors u and v are homogeneous coordinates of the projections of the same rigid object point in two images, and F is a rank deficient matrix. Each pair (u, v) is observed with measurement errors. We construct a consistent estimator of F in three steps: a) estimate the measurement error variance, b) construct a preliminary matrix estimate, and c) project that estimate on the subspace of singular matrices.

A simulation study illustrates the theoretical results.
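The bias correction behind adjusted least squares can be sketched for the simplest scalar errors-in-variables model (an illustrative simulation, not the paper's multivariate setting): naive least squares is attenuated toward zero by the regressor noise, and subtracting the known noise variance from the normal equations restores consistency.

```python
import random

random.seed(0)
n, x_true = 20000, 1.0
sa = sb = 0.5  # known measurement-noise standard deviations (assumed)

a0 = [random.uniform(-1, 1) for _ in range(n)]        # true regressor
a = [ai + random.gauss(0, sa) for ai in a0]           # noisy observation of a0
b = [ai * x_true + random.gauss(0, sb) for ai in a0]  # noisy observation of a0*x

saa = sum(ai * ai for ai in a)
sab = sum(ai * bi for ai, bi in zip(a, b))

x_ls = sab / saa                  # naive LS: biased toward zero
x_als = sab / (saa - n * sa**2)   # adjusted LS: bias-corrected, consistent
```

Here the naive estimate settles near the attenuated value x_true * var(a0) / (var(a0) + sa^2) while the adjusted estimate converges to x_true as n grows, mirroring the consistency results of the paper.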


This file was generated by bibtex2html 1.98.