A New Nonlinear Conjugate Gradient Coefficient for Unconstrained Optimization

In this paper, we suggest a new nonlinear conjugate gradient method for solving large-scale unconstrained optimization problems. We prove that the new conjugate gradient coefficient $\beta_k$ with exact line search is globally convergent. Preliminary numerical results on a set of 116 unconstrained optimization problems show that $\beta_k$ is very promising and efficient when compared with the Fletcher–Reeves (FR) and Polak–Ribière–Polyak (PRP) conjugate gradient coefficients.


Introduction
In this paper, we focus our attention on the unconstrained optimization problem

$$\min_{x \in R^n} f(x), \qquad (1.1)$$

where $f : R^n \to R$ is a continuously differentiable function and $R^n$ denotes an $n$-dimensional Euclidean space. We denote by $g(x)$ the gradient of $f$ at $x$. The conjugate gradient (CG) method is among the best methods for solving (1.1), especially when the dimension is large. The iterates of the CG method for solving (1.1) are obtained by

$$x_{k+1} = x_k + \alpha_k d_k, \qquad k = 0, 1, 2, \ldots, \qquad (1.2)$$

where $x_k$ is the current iterate and $\alpha_k$ is the step size. The step size is computed by carrying out some line search, for example the exact line search

$$f(x_k + \alpha_k d_k) = \min_{\alpha \ge 0} f(x_k + \alpha d_k). \qquad (1.3)$$

The search direction $d_k$ is defined by

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad d_0 = -g_0, \qquad (1.4)$$

where $\beta_k$ is a scalar. The most well-known classical formulas for $\beta_k$ are those of the Hestenes–Stiefel (HS) method [11], the Fletcher–Reeves (FR) method [7], the Polak–Ribière–Polyak (PRP) method [15,16], the conjugate descent (CD) method [6], the Liu–Storey (LS) method [14] and the Dai–Yuan (DY) method [2]. The parameters $\beta_k$ of these methods are as follows:

$$\beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \qquad \beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \qquad \beta_k^{PRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2},$$

$$\beta_k^{CD} = -\frac{\|g_{k+1}\|^2}{d_k^T g_k}, \qquad \beta_k^{LS} = -\frac{g_{k+1}^T y_k}{d_k^T g_k}, \qquad \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^T y_k},$$

where $y_k = g_{k+1} - g_k$.

The most studied property of CG methods is global convergence. Zoutendijk [22] and Powell [17] proved that the FR method with exact line search is globally convergent. Zhang et al. [13] proposed a modified FR method (MFR) which is globally convergent under inexact line search. Polyak [16] and Powell [18] showed that the PRP method has good numerical performance but does not have such a good convergence property. Touati-Ahmed and Storey [20] and Gilbert and Nocedal [8] gave another way to discuss the global convergence of the PRP method with the weak Wolfe–Powell line search, where the parameter $\beta_k^{PRP}$ is restricted to be nonnegative, i.e. $\beta_k^{+} = \max\{\beta_k^{PRP}, 0\}$. During the past few years, many authors have therefore investigated new formulas for $\beta_k$ [3,4,9,10,19,21]. In this paper, we present the new $\beta_k$ in Section 2. In Section 3, we study the sufficient descent condition and prove the global convergence of the new $\beta_k$. In Section 4, we present the numerical results and discussion.
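In exact arithmetic these classical coefficients differ only in how they combine $g_{k+1}$, $g_k$ and $d_k$. Three of them can be sketched in NumPy as follows (the function names are ours, for illustration only):

```python
import numpy as np

# Classical conjugate gradient coefficients, with y_k = g_{k+1} - g_k.

def beta_fr(g_new, g_old, d_old):
    # Fletcher-Reeves: ||g_{k+1}||^2 / ||g_k||^2
    return float(g_new @ g_new) / float(g_old @ g_old)

def beta_prp(g_new, g_old, d_old):
    # Polak-Ribiere-Polyak: g_{k+1}^T y_k / ||g_k||^2
    return float(g_new @ (g_new - g_old)) / float(g_old @ g_old)

def beta_hs(g_new, g_old, d_old):
    # Hestenes-Stiefel: g_{k+1}^T y_k / d_k^T y_k
    y = g_new - g_old
    return float(g_new @ y) / float(d_old @ y)
```

Note that when $g_{k+1} = g_k$ the PRP coefficient vanishes while the FR coefficient equals one, which is one source of their different behaviour in practice.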
Finally, we present the conclusions in Section 5.

New $\beta_k$ parameter and algorithm
In this section, we present a modification of the PRP method, which is known as $\beta_k^{MRM}$ and is defined in (2.1). The following algorithm is a general algorithm for solving (1.1) by CG methods.
Step 1: Given an initial point $x_0$ and $\varepsilon > 0$, set $d_0 = -g_0$ and $k = 0$. If $\|g_0\| \le \varepsilon$, stop.
Step 2: Compute the step size $\alpha_k$ by the exact line search.
Step 3: Update the iterate by $x_{k+1} = x_k + \alpha_k d_k$.
Step 4: Compute $g_{k+1}$. If $\|g_{k+1}\| \le \varepsilon$, stop; otherwise compute $\beta_k$.
Step 5: Set $d_{k+1} = -g_{k+1} + \beta_k d_k$, set $k := k + 1$, and go to Step 2.

The following assumptions are often used in studies of conjugate gradient methods.

Assumption A.
$f(x)$ is bounded from below on the level set $\Omega = \{x \in R^n : f(x) \le f(x_0)\}$, where $x_0$ is the starting point.

Assumption B. In some neighbourhood $N$ of $\Omega$, the objective function is continuously differentiable and its gradient is Lipschitz continuous; that is, there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \le L \|x - y\|$ for all $x, y \in N$.
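The general CG algorithm of this section can be sketched in runnable form. In the sketch below, the PRP coefficient serves only as a placeholder for the new $\beta_k^{MRM}$ of (2.1), and a backtracking Armijo search (with an added descent-direction restart safeguard, our choice) stands in for the exact line search:

```python
import numpy as np

def cg_minimize(f, grad, x0, beta_fn, eps=1e-6, max_iter=2000):
    """Generic nonlinear CG loop; beta_fn(g_new, g_old, d_old) -> scalar."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # Step 1: start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:          # stopping test ||g_k|| <= eps
            break
        if float(g @ d) >= 0.0:               # safeguard: restart if not a descent direction
            d = -g
        # Step 2: backtracking Armijo search (a stand-in for the exact line search)
        alpha, fx, gTd = 1.0, f(x), float(g @ d)
        while f(x + alpha * d) > fx + 1e-4 * alpha * gTd and alpha > 1e-16:
            alpha *= 0.5
        x_new = x + alpha * d                 # Step 3: update the iterate
        g_new = grad(x_new)                   # Step 4: new gradient, then beta_k
        d = -g_new + beta_fn(g_new, g, d) * d # Step 5: new search direction
        x, g = x_new, g_new
    return x

def beta_prp(g_new, g_old, d_old):
    # PRP coefficient, used here only as a placeholder for the new beta in (2.1)
    return float(g_new @ (g_new - g_old)) / float(g_old @ g_old)
```

On a simple strictly convex quadratic this loop recovers the minimizer to high accuracy; the safeguard and the line-search details are sketch-level choices, not part of the paper's method.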

Global convergence properties
In this section, we study the global convergence properties of $\beta_k^{MRM}$. First, we simplify $\beta_k^{MRM}$ starting from its definition (2.1), so that the proof will be easier. The following lemmas are very useful in the study of conjugate gradient methods; in particular, the method has the property that the search directions approach the steepest descent direction whenever the step length $\alpha_k$ is small.
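This last property can be illustrated numerically for PRP-type coefficients: as the step shrinks, $g_{k+1} \to g_k$, hence $\beta_k \to 0$ and $d_{k+1} \to -g_{k+1}$. The sketch below uses a toy quadratic of our own choosing, with PRP standing in for the new coefficient:

```python
import numpy as np

# Gradient of a toy quadratic f(x) = x_1^2 + 10 x_2^2 (our illustrative choice).
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])

def dist_to_steepest(alpha):
    """Gap ||d_{k+1} + g_{k+1}|| = |beta_k| ||d_k|| between the CG direction
    and the steepest descent direction after a step of length alpha."""
    x = np.array([1.0, 1.0])
    g = grad(x)
    d = -g
    g_new = grad(x + alpha * d)              # small alpha => g_new close to g
    beta = g_new @ (g_new - g) / (g @ g)     # PRP-type beta_k -> 0 as alpha -> 0
    d_new = -g_new + beta * d
    return np.linalg.norm(d_new + g_new)
```

The gap shrinks roughly linearly with $\alpha_k$, consistent with the property cited above.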
By the same argument as in the proof above, for the points in $P_2$ we also have

From (3.2) and (3.3) we have,
The proof is completed.
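A backbone identity used throughout this section is generic: under exact line search $g_{k+1}^T d_k = 0$, so $d_{k+1} = -g_{k+1} + \beta_k d_k$ gives $g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2$ for any choice of $\beta_k$. This can be checked numerically on a small quadratic, where the exact step length has a closed form (PRP again stands in for the new coefficient):

```python
import numpy as np

# f(x) = 0.5 x^T A x, so g(x) = A x and the exact step has a closed form.
A = np.diag([1.0, 4.0, 9.0])
x = np.array([1.0, 1.0, 1.0])
g = A @ x
d = -g
alpha = -(g @ d) / (d @ A @ d)           # exact minimizer of f(x + alpha*d)
x_new = x + alpha * d
g_new = A @ x_new                        # exact line search: g_{k+1}^T d_k = 0

beta = (g_new @ (g_new - g)) / (g @ g)   # PRP, as a stand-in for beta^{MRM}
d_new = -g_new + beta * d
# sufficient descent holds with equality: g_{k+1}^T d_{k+1} = -||g_{k+1}||^2
```

Since $g_{k+1}^T d_k = 0$, the $\beta_k d_k$ term drops out of the inner product, which is why the identity is independent of the coefficient formula.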

Numerical results and discussions
In this section, we present the computational performance of a MATLAB program on a set of 116 unconstrained optimization test problems. We selected 24 test functions considered in Andrei [1], each of them tested with different numbers of variables. We performed a comparison with two CG methods, Fletcher–Reeves (FR) and Polak–Ribière–Polyak (PRP). We set $\varepsilon = 10^{-6}$ and, as Hillstrom [12] suggested, used $\|g_k\| \le \varepsilon$ as the stopping criterion. For each test function, we used four initial points, starting from a point close to the solution and moving to the one that is furthest from it. A list of the problem functions and the initial points used is shown in Table 1; the exact line search was used to compute the step size. The CPU used was an Intel(R) Core(TM) i3 M350 (2.27 GHz) with 4 GB RAM. In some cases the computation stopped owing to the failure of the line search to find a positive step size, and such cases were counted as failures. Numerical results are compared with respect to CPU time and number of iterations. The performance results are shown in Figs. 1 and 2, respectively, using the performance profile introduced by Dolan and Moré [5].
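The Dolan–Moré profile plots, for each solver, the fraction of problems it solves within a factor $\tau$ of the best solver's cost on that problem. A minimal sketch (the function name is ours; failures are encoded as infinite cost):

```python
import numpy as np

def performance_profile(T, taus):
    """Dolan-More performance profile.
    T: (n_problems, n_solvers) array of costs (CPU time or iterations),
       with np.inf marking a failure.
    Returns rho of shape (len(taus), n_solvers): the fraction of problems
    each solver solves within factor tau of the best solver."""
    T = np.asarray(T, dtype=float)
    best = T.min(axis=1, keepdims=True)            # best cost per problem
    ratios = T / best                              # performance ratios r_{p,s}
    taus = np.asarray(taus, dtype=float)
    # rho_s(tau) = fraction of problems with r_{p,s} <= tau
    return (ratios[None, :, :] <= taus[:, None, None]).mean(axis=1)
```

At $\tau = 1$ the profile gives the fraction of problems on which a solver is the fastest; as $\tau$ grows it approaches the solver's overall success rate.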

Conclusion and further research
This paper gives a new conjugate gradient method for solving unconstrained optimization problems. Under the exact line search, this $\beta_k$ possesses the global convergence property. Numerical results show that our method is competitive with the other two conjugate gradient methods, Fletcher–Reeves (FR) and Polak–Ribière–Polyak (PRP).