Document Type

Technical Report

Publication Date

1990-06-01

Filename

WUCS-90-21.pdf

DOI

10.7936/K7377721

Technical Report Number

WUCS-90-21

Abstract

The important feature of this work is the combination of a minimization function with desirable properties and the conjugate gradient method (CGM). The method has produced significant improvements on both easy and difficult training tasks. Two major problems slow the rate at which large back-propagation neural networks (BPNNs) can be trained. The first is the linear convergence of the gradient descent used by the modified steepest descent method (MSDM). The second is the abundance of saddle points that arise from minimizing the sum of squared errors. This work offers a solution to both difficulties. The CGM, which is superlinearly convergent, replaces gradient descent. Dividing each squared error term by its derivative and then summing the terms produces a minimization function with a significantly reduced number of saddle points.
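Purely as an illustration of substituting the conjugate gradient method for gradient descent when minimizing a network's sum of squared errors, the sketch below trains a small feed-forward network with SciPy's nonlinear CG routine. The network size, XOR data, activations, and tolerances are assumptions made for demonstration; this is not the report's implementation and does not include its modified error function.

# A minimal sketch (not the authors' code): training a tiny feed-forward
# network by minimizing the sum of squared errors with a nonlinear
# conjugate gradient routine instead of plain gradient descent.
# Network size, XOR data, and tolerances are assumptions.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 3, 1
shapes = [(N_IN, N_HID), (1, N_HID), (N_HID, N_OUT), (1, N_OUT)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    # Split the flat parameter vector into weight and bias matrices.
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def sse(w):
    # Sum of squared errors of the network output over the training set.
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    y = np.tanh(h @ W2 + b2)
    return 0.5 * np.sum((y - T) ** 2)

rng = np.random.default_rng(0)
w0 = rng.normal(scale=0.5, size=sum(sizes))

# method='CG' selects SciPy's nonlinear conjugate gradient method; the
# gradient is approximated by finite differences here for brevity, whereas
# back-propagation would supply it analytically in a real implementation.
result = minimize(sse, w0, method='CG', tol=1e-8)
print("final sum of squared errors:", result.fun)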

Comments

Permanent URL: http://dx.doi.org/10.7936/K7377721
