
A nonlinear recurrent neural network estimation of conditional mean and variance

Conference Paper


Abstract


  • In the context of nonlinear financial time series, both the conditional mean and the conditional variance (volatility) tend to evolve over time and depend on previous values. Commonly, the objective function used in Artificial Neural Networks (ANNs) is the sum of squared errors, which requires the target and forecasted output vectors to have the same dimension. It is therefore of interest to consider recurrent neural networks with two-dimensional output even though the target data are one-dimensional. The idea of the optimization algorithm can be extended to this situation. In addition, the negative log-likelihood based on a parametric statistical model is a possible alternative to the traditional least-squares objective. The Root Mean Square Prediction Error (RMSPE) for the mean and variance predictions is smaller for the developed recurrent neural network (S-GRNN) than for the alternatives in the majority of cases. S-GRNN also outperforms the other models on the real data set (S&P500 Index).
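The paper's S-GRNN architecture is not reproduced in this record, but the core idea in the abstract — a recurrent pass that emits a two-dimensional output (conditional mean and log-variance) per time step, scored by a Gaussian negative log-likelihood rather than least squares — can be sketched in a few lines. This is a minimal illustrative sketch only; the function names, the toy `tanh` recurrence, and the weight shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of 1-D targets y under per-step Gaussian
    forecasts (mu_t, sigma_t^2). Parameterising log sigma^2 keeps the
    variance positive without constrained optimisation."""
    var = np.exp(log_var)
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

def rnn_forecast(y, Wh, Wy, bh, by):
    """Toy recurrent pass: the hidden state is driven by the previous
    observation, and a linear output head produces the two-dimensional
    forecast (mu_t, log sigma_t^2) for each step t = 1..T-1."""
    h = np.zeros(Wh.shape[0])
    out = []
    for y_prev in y[:-1]:                  # forecast step t from step t-1
        h = np.tanh(Wh @ h + bh + y_prev)  # simple hidden-state recurrence
        out.append(Wy @ h + by)            # two outputs: mean, log-variance
    return np.array(out)                   # shape (T-1, 2)
```

The one-dimensional targets `y[1:]` are then compared against both output channels at once, e.g. `loss = gaussian_nll(y[1:], out[:, 0], out[:, 1])`, which is how a two-dimensional output can be trained from one-dimensional data.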

Authors


  • Dechpichai, P. & Davy, P. J.

Publication Date


  • 2008

Citation


  • Dechpichai, P. & Davy, P. J. (2008). A nonlinear recurrent neural network estimation of conditional mean and variance. International Conference on Artificial Intelligence and Pattern Recognition 2008, AIPR 2008 (pp. 232-239).

Scopus EID


  • 2-s2.0-84876758250

RO Metadata URL


  • http://ro.uow.edu.au/eispapers/892

Start Page


  • 232

End Page


  • 239
