Conjugate Gradient Algorithms in Nonconvex Optimization

By Radoslaw Pytlak

This up-to-date book is on algorithms for large-scale unconstrained and bound constrained optimization. Optimization techniques are shown from a conjugate gradient algorithm perspective.

A large part of the book is devoted to preconditioned conjugate gradient algorithms. In particular, memoryless and limited-memory quasi-Newton algorithms are presented and numerically compared to standard conjugate gradient algorithms.

Special attention is paid to the methods of shortest residuals developed by the author. Several effective optimization techniques based on these methods are presented.

Because of its emphasis on practical methods, as well as the rigorous mathematical treatment of their convergence analysis, the book is aimed at a wide audience. It can be used by researchers in optimization and by graduate students in operations research, engineering, mathematics, and computer science. Practitioners can benefit from the many numerical comparisons of optimization codes discussed in the book.

Similar linear programming books

The Stability of Matter: From Atoms to Stars

In this collection the reader will find general results together with deep insights into quantum systems, combined with papers on the structure of atoms and molecules, the thermodynamic limit, and stellar structures.

Generalized Linear Models, Second Edition (Chapman & Hall CRC Monographs on Statistics & Applied Probability)

The success of the first edition of Generalized Linear Models led to this updated second edition, which continues to provide a definitive, unified treatment of methods for the analysis of diverse types of data. Today, it remains popular for its clarity, richness of content, and direct relevance to agricultural, biological, health, engineering, and other applications.

Switched Linear Systems: Control and Design (Communications and Control Engineering)

Switched linear systems have enjoyed a particular growth in interest since the 1990s. The large amount of data and ideas thus generated have, until now, lacked a co-ordinating framework to focus them effectively on some of the fundamental issues, such as the problems of robust stabilizing switching design, feedback stabilization, and optimal switching.

AMPL: A Modeling Language for Mathematical Programming

AMPL is a language for large-scale optimization and mathematical programming problems in production, distribution, blending, scheduling, and many other applications. Combining familiar algebraic notation and a powerful interactive command environment, AMPL makes it easy to create models, use a wide variety of solvers, and examine solutions.

Extra info for Conjugate Gradient Algorithms in Nonconvex Optimization

Example text

(36) directly. Since the directions $p_1, \ldots, p_k$ satisfy (43), and because $p_i = -r_i + \beta_i p_{i-1}$, we have $r_i \in \operatorname{span}\{p_i, p_{i-1}\}$, which gives (36). The theorem shows that gradients evaluated by the conjugate gradient algorithm are mutually orthogonal. Therefore, the question arises why the method was called the conjugate gradient algorithm. One explanation is that the method starts with $p_1 = -g_1$. In addition, Hestenes [100] proves that directions generated by the conjugate gradient method are directions of steepest descent in the subspaces

$$\hat{P}_{n-k} = \left\{ x \in \mathbb{R}^n : p_j^T A (x - x_{k+1}) = 0, \ j = 1, \ldots, k \right\}.$$
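To make the orthogonality concrete, here is a minimal numerical sketch of the linear conjugate gradient recursion described above ($p_1 = -g_1$, $p_i = -r_i + \beta_i p_{i-1}$). The test matrix, the random seed, and the Fletcher-Reeves choice of $\beta_i$ are illustrative assumptions, not taken from the book; on a quadratic with exact line search the standard $\beta$ formulas coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small SPD test problem (illustrative): minimize (1/2) x^T A x - b^T x.
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)

x = np.zeros(n)
r = A @ x - b                # residual = gradient at x
p = -r                       # p_1 = -g_1
residuals = [r.copy()]

for _ in range(n):
    alpha = (r @ r) / (p @ (A @ p))     # exact line-search step length
    x = x + alpha * p
    r_new = r + alpha * (A @ p)         # r_{k+1} = r_k + alpha_k A p_k
    beta = (r_new @ r_new) / (r @ r)    # Fletcher-Reeves coefficient
    p = -r_new + beta * p               # p_i = -r_i + beta_i p_{i-1}
    r = r_new
    residuals.append(r.copy())

# The Gram matrix of the residuals should be (numerically) diagonal.
R = np.array(residuals)
G = R @ R.T
print(np.allclose(G - np.diag(np.diag(G)), 0.0, atol=1e-8))   # True
```

Mutual orthogonality also explains finite termination: at most $n$ residuals can be nonzero in $\mathbb{R}^n$, so the method solves the quadratic in at most $n$ steps.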

(77) resulting from the condition $r_{k+1}^T r_k = 0$:

$$\begin{aligned}
\|x_{k+1} - \bar{x}\|_A^2 &= (x_{k+1} - \bar{x})^T A (x_{k+1} - \bar{x}) \\
&= (A x_{k+1} - b)^T (x_{k+1} - \bar{x}) \\
&= r_{k+1}^T (x_k - \alpha_k r_k - \bar{x}) \\
&= r_{k+1}^T (x_k - \bar{x}) \\
&= (r_k - \alpha_k A r_k)^T (x_k - \bar{x}) \\
&= r_k^T A^{-1} r_k - \alpha_k r_k^T r_k \\
&= \|x_k - \bar{x}\|_A^2 \left( 1 - \frac{r_k^T r_k}{r_k^T A r_k} \, \frac{r_k^T r_k}{r_k^T A^{-1} r_k} \right),
\end{aligned}$$

since $\|x_k - \bar{x}\|_A^2 = r_k^T A^{-1} A A^{-1} r_k = r_k^T A^{-1} r_k$. Applying the Kantorovitch inequality gives the thesis.

From (67),

$$\|x_{k+1} - \bar{x}\|_A^2 \le \min_{P_k} \max_{1 \le i \le n} \left( 1 + \lambda_i P_k(\lambda_i) \right)^2 \|x_1 - \bar{x}\|_A^2,$$

which can be rephrased as

$$\|x_{k+1} - \bar{x}\|_A^2 \le \min_{Q_k} \max_{\lambda} \left[ Q_k(\lambda) \right]^2 \|x_1 - \bar{x}\|_A^2.$$
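The chain of equalities, and the rate it yields via the Kantorovitch inequality, can be checked numerically. The sketch below (random SPD system; all names are illustrative assumptions) performs one steepest-descent step $x_{k+1} = x_k - \alpha_k r_k$ with $\alpha_k = r_k^T r_k / r_k^T A r_k$, then verifies both the one-step identity and the bound $1 - \frac{(r_k^T r_k)^2}{(r_k^T A r_k)(r_k^T A^{-1} r_k)} \le \left(\frac{\kappa - 1}{\kappa + 1}\right)^2$ that the Kantorovitch inequality gives.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative SPD system A x = b with exact solution x_bar.
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
b = rng.standard_normal(n)
x_bar = np.linalg.solve(A, b)

def err_A2(x):
    """Squared A-norm of the error, ||x - x_bar||_A^2."""
    e = x - x_bar
    return e @ (A @ e)

x = rng.standard_normal(n)
r = A @ x - b                                  # r_k = A x_k - b
alpha = (r @ r) / (r @ (A @ r))                # steepest-descent step length
x_next = x - alpha * r                         # x_{k+1} = x_k - alpha_k r_k

# One-step identity derived above.
factor = 1.0 - (r @ r) ** 2 / ((r @ (A @ r)) * (r @ np.linalg.solve(A, r)))
print(np.isclose(err_A2(x_next), err_A2(x) * factor))          # True

# Kantorovitch bound on the contraction factor.
lam = np.linalg.eigvalsh(A)
kappa = lam[-1] / lam[0]                       # spectral condition number
print(factor <= ((kappa - 1.0) / (kappa + 1.0)) ** 2 + 1e-12)  # True
```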

Proof. The proof is given in [31]. (78) can be stated in a different form, more convenient for further analysis: (79), with

$$\eta = \frac{\lambda_{\min}}{\lambda_{\max} - \lambda_{\min}}.$$

Proof. (Theorem 15, based on [189]) In order to obtain the thesis we use the inequality

$$C_k(t) = \frac{1}{2} \left[ \left( t + \sqrt{t^2 - 1} \right)^k + \left( t + \sqrt{t^2 - 1} \right)^{-k} \right] \ge \frac{1}{2} \left( t + \sqrt{t^2 - 1} \right)^k,$$

which gives

$$C_k(1 + 2\eta) \ge \frac{1}{2} \left( 1 + 2\eta + \sqrt{(1 + 2\eta)^2 - 1} \right)^k = \frac{1}{2} \left( 1 + 2\eta + 2\sqrt{\eta(\eta + 1)} \right)^k.$$

Furthermore,

$$1 + 2\eta + 2\sqrt{\eta(\eta + 1)} = \left( \sqrt{\eta} + \sqrt{\eta + 1} \right)^2 = \left( \frac{\sqrt{\lambda_{\min}} + \sqrt{\lambda_{\max}}}{\sqrt{\lambda_{\max} - \lambda_{\min}}} \right)^2 = \frac{\sqrt{\lambda_{\max}} + \sqrt{\lambda_{\min}}}{\sqrt{\lambda_{\max}} - \sqrt{\lambda_{\min}}} = \frac{\sqrt{\kappa} + 1}{\sqrt{\kappa} - 1},$$

where $\kappa$ is the spectral condition number: $\kappa = \lambda_{\max} / \lambda_{\min}$.
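The inequalities above are easy to sanity-check numerically. The sketch below (the eigenvalue bounds are arbitrary illustrative values) evaluates the Chebyshev polynomial via the identity $C_k(t) = \cosh(k \operatorname{arccosh} t)$, valid for $t \ge 1$, and confirms $C_k(1 + 2\eta) \ge \frac{1}{2}\left(\frac{\sqrt{\kappa} + 1}{\sqrt{\kappa} - 1}\right)^k$.

```python
import numpy as np

def cheb(k, t):
    """Chebyshev polynomial of the first kind, C_k(t) = cosh(k * arccosh(t)) for t >= 1."""
    return np.cosh(k * np.arccosh(t))

# Illustrative eigenvalue bounds of an SPD matrix (assumed values).
lam_min, lam_max = 0.5, 20.0
kappa = lam_max / lam_min                  # spectral condition number
eta = lam_min / (lam_max - lam_min)

for k in range(1, 8):
    lower = 0.5 * ((np.sqrt(kappa) + 1) / (np.sqrt(kappa) - 1)) ** k
    # C_k(1 + 2*eta) >= (1/2) * ((sqrt(kappa) + 1) / (sqrt(kappa) - 1))^k
    assert cheb(k, 1 + 2 * eta) >= lower

print("bound verified for k = 1..7")
```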
