Computing a Nearest Correlation Matrix with Factor Structure
| Published in | SIAM Journal on Matrix Analysis and Applications, Vol. 31, No. 5, pp. 2603-2622 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | Philadelphia, PA: Society for Industrial and Applied Mathematics, 01.01.2010 |
| Subjects | |
| ISSN | 0895-4798; 1095-7162 |
| DOI | 10.1137/090776718 |
| Summary: | An n x n correlation matrix has k factor structure if its off-diagonal agrees with that of a rank k matrix. Such correlation matrices arise, for example, in factor models of collateralized debt obligations (CDOs) and multivariate time series. We analyze the properties of these matrices and, in particular, obtain an explicit formula for the rank in the one factor case. Our main focus is on the nearness problem of finding the nearest k factor correlation matrix C(X) = diag(I - XX^T) + XX^T to a given symmetric matrix, subject to natural nonlinear constraints on the elements of the n x k matrix X, where distance is measured in the Frobenius norm. For a special one parameter case we obtain an explicit solution. For the general k factor case we obtain the gradient and Hessian of the objective function and derive an instructive result on the positive definiteness of the Hessian when k = 1. We investigate several numerical methods for solving the nearness problem: the alternating directions method; a principal factors method used by Anderson, Sidenius, and Basu in the CDO application, which we show is equivalent to the alternating projections method and lacks convergence results; the spectral projected gradient method of Birgin, Martínez, and Raydan; and Newton and sequential quadratic programming methods. The methods differ in whether or not they can take account of the nonlinear constraints and in their convergence properties. Our numerical experiments show that the performance of the methods depends strongly on the problem, but that the spectral projected gradient method is the clear winner. |
|---|---|
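To make the nearness problem in the summary concrete, the following NumPy sketch builds C(X) = diag(I - XX^T) + XX^T, evaluates the Frobenius-norm objective, and runs a simple principal-factors-style iteration of the kind the summary attributes to Anderson, Sidenius, and Basu (alternately fitting the off-diagonal of the target with a rank-k matrix and resetting the diagonal). It is an illustrative sketch under assumptions, not the paper's code: it ignores the nonlinear constraints on the rows of X and does not implement the spectral projected gradient, Newton, or SQP methods the paper compares.

```python
import numpy as np

def C(X):
    """k-factor correlation matrix C(X) = diag(I - X X^T) + X X^T (unit diagonal)."""
    G = X @ X.T
    return G - np.diag(np.diag(G)) + np.eye(G.shape[0])

def objective(A, X):
    """Squared Frobenius distance ||A - C(X)||_F^2 from C(X) to the target A."""
    return np.linalg.norm(A - C(X), "fro") ** 2

def principal_factors(A, k, maxit=500, tol=1e-10):
    """Illustrative principal-factors-style iteration (a sketch, not the paper's
    algorithm): take a rank-k eigenvalue fit of a working matrix whose off-diagonal
    is that of A, then reset the working diagonal to diag(X X^T), and repeat."""
    n = A.shape[0]
    X = np.zeros((n, k))
    B = A.copy()
    for _ in range(maxit):
        w, V = np.linalg.eigh(B)                    # eigenvalues in ascending order
        idx = np.argsort(w)[::-1][:k]               # indices of the k largest eigenvalues
        X_new = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
        if np.linalg.norm(X_new - X, "fro") <= tol * max(1.0, np.linalg.norm(X, "fro")):
            return X_new
        X = X_new
        # Keep A's off-diagonal; replace the diagonal with that of X X^T.
        B = A - np.diag(np.diag(A)) + np.diag(np.diag(X @ X.T))
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 6, 1
    # A symmetric unit-diagonal target that is not exactly a one-factor correlation matrix.
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    A = (A + A.T) / 2.0
    np.fill_diagonal(A, 1.0)
    X = principal_factors(A, k)
    print("distance ||A - C(X)||_F =", np.sqrt(objective(A, X)))
```

The "natural nonlinear constraints" mentioned in the summary can be read as requiring each row of X to have norm at most 1, so that the diagonal correction diag(I - XX^T) stays nonnegative and C(X) remains a valid correlation matrix; the constrained methods the paper studies (notably the spectral projected gradient method of Birgin, Martínez, and Raydan, reported there as the clear winner) enforce this, whereas the sketch above does not.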