Springer Optimization and Its Applications 142
Xinmin Yang
Generalized Preinvexity and Second Order Duality in Multiobjective Programming
Springer Optimization and Its Applications Volume 142 Managing Editor Panos M. Pardalos, University of Florida Editor-Combinatorial Optimization Ding-Zhu Du, University of Texas at Dallas Advisory Board J. Birge, University of Chicago S. Butenko, Texas A&M University F. Giannessi, University of Pisa S. Rebennack, Karlsruhe Institute of Technology T. Terlaky, Lehigh University Y. Ye, Stanford University
Aims and Scope Optimization has been expanding in all directions at an astonishing rate during the last few decades. New algorithmic and theoretical techniques have been developed, the diffusion into other disciplines has proceeded at a rapid pace, and our knowledge of all aspects of the field has grown even more profound. At the same time, one of the most striking trends in optimization is the constantly increasing emphasis on the interdisciplinary nature of the field. Optimization has been a basic tool in all areas of applied mathematics, engineering, medicine, economics and other sciences. The series Springer Optimization and Its Applications aims to publish state-of-the-art expository works (monographs, contributed volumes, textbooks) that focus on algorithms for solving optimization problems and also study applications involving such problems. Some of the topics covered include nonlinear optimization (convex and nonconvex), network flow problems, stochastic optimization, optimal control, discrete optimization, multi-objective programming, description of software packages, approximation techniques and heuristic approaches. Volumes from this series are indexed by Web of Science, zbMATH, Mathematical Reviews, and SCOPUS.
More information about this series at http://www.springer.com/series/7393
Xinmin Yang College of Mathematics Science Chongqing Normal University Chongqing, China
ISSN 1931-6828 ISSN 1931-6836 (electronic) Springer Optimization and Its Applications ISBN 978-981-13-1980-8 ISBN 978-981-13-1981-5 (eBook) https://doi.org/10.1007/978-981-13-1981-5 Library of Congress Control Number: 2018951723 Mathematics Subject Classification: 90C25, 90C29, 90C30, 90C31, 90C46 © Springer Nature Singapore Pte Ltd. 2018 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Abstract In this book, several generalized convex functions and generalized monotone functions are introduced, and various properties and relations are established. These generalized convexities and generalized monotonicities are then applied to the study of optimality conditions and duality theory in optimization. The book consists of three parts. In Part I, the concepts of semistrictly preinvex functions and generalized preinvex functions are introduced and their properties are given. A preinvex function is characterized by an intermediate-point preinvexity condition. Some properties of semistrictly preinvex functions and semipreinvex functions are discussed. In particular, the relationship between a semistrictly preinvex function and a preinvex function is investigated. It is shown that a function is semistrictly preinvex if and only if it satisfies a strict invexity inequality for any two points with distinct function values. It is worth noting that these characterizations reveal various interesting relationships among prequasiinvex, semistrictly prequasiinvex, and strictly prequasiinvex functions. These relationships are useful in the study of optimization problems. In Part II, generalized invariant monotonicity is introduced and its relationship with generalized invexity is established. Some examples are given to show that these generalized invariant monotonicities are proper generalizations of the corresponding generalized monotonicities. Moreover, several examples are also presented to illustrate the properly inclusive relations among the generalized invariant monotonicities. Finally, in Part III, several second order and higher order symmetric duality models are provided for multiobjective nonlinear programming problems. Weak and strong duality theorems are established under generalized convexity assumptions. It is worth noting that special cases of our models and results can be reduced to the corresponding first order cases.
Preface
Convexity plays a central role in mathematical economics, engineering, management science, and optimization theory. The main reason is that convexity is a sufficient condition for ensuring that a local optimal solution of an extremum problem is a global one. However, convexity is not a necessary condition, and in fact the class of convex functions is relatively small. Therefore, researchers have been trying to relax the convexity condition while preserving nice properties similar to those enjoyed by convex functions. New classes of functions have been discovered, such as quasiconvex, pseudoconvex, convexlike, and subconvexlike functions. These functions are called generalized convex functions. During the last 30 years, there has been a continuous increase in research activities in generalized convexity and its applications. See [6, 10, 13, 16, 26, 30, 36, 40, 47, 49, 55, 60, 76, 78, 87, 99] and the web site http://www.ec.unipi.it/-genconv. The main purpose of this book is to study properties of generalized preinvex functions and their applications to second or higher order duality in multiobjective programming. Generalized preinvex functions and generalized invariant monotonicities are introduced. Properties and characterizations of various preinvex and generalized preinvex functions are obtained. Relationships between generalized preinvexity and generalized invariant monotonicity are established. Relevant examples are given to illustrate the relationships between various generalized convexities and generalized monotonicities. Second or higher order symmetric duality is derived for multiobjective programming. In Chap. 1, we consider the class of preinvex functions. Under the upper semicontinuity (respectively, lower semicontinuity) condition, the preinvexity of a function can be verified by checking the validity of an intermediate-point preinvexity condition on the function concerned.
Under an appropriate intermediate-point preinvexity condition, distinct characterizations are obtained for prequasiinvex, semistrictly prequasiinvex, locally prequasiinvex, and locally semistrictly prequasiinvex functions, respectively. These results reveal that preinvexity, prequasiinvexity, local prequasiinvexity, semistrict prequasiinvexity, and local semistrict prequasiinvexity are equivalent under appropriate conditions.
In Chap. 2, we consider the class of semistrictly preinvex functions, and some of their properties are obtained. Under the lower semicontinuity condition, an important relationship between a semistrictly preinvex function and a preinvex function is established. Furthermore, it is shown that a function is semistrictly preinvex if and only if it satisfies a strict invexity inequality for any two points with distinct function values. In Chap. 3, we consider the class of semipreinvex functions. Some important properties of semipreinvex functions are established. As an application of semipreinvex functions, saddle point optimality criteria are developed for a multiobjective fractional programming problem. In Chap. 4, generalized preinvex functions are introduced, a class of functions which contains as special cases prequasiinvex, semistrictly prequasiinvex, and strictly prequasiinvex functions. Distinct characterizations of prequasiinvex functions are obtained under three different conditions, namely lower semicontinuity, upper semicontinuity, and semistrict prequasiinvexity. Furthermore, distinct characterizations of semistrictly prequasiinvex functions are also obtained under the condition of prequasiinvexity as well as under the condition of lower semicontinuity. A similar result is established for strictly prequasiinvex functions. It is worth noting that these specific distinct characterizations reveal various interesting relationships among prequasiinvex, semistrictly prequasiinvex, and strictly prequasiinvex functions. They are very useful in the study of optimization problems. The convexity of a real-valued function is closely related to the monotonicity of a vector-valued function. Monotonicity has played a very important role in the study of the existence of solutions of variational inequality problems. In Chap. 5, generalized invariant monotonicity and its relationships with generalized invexity are studied.
We introduce several types of generalized invariant monotonicities, which are generalizations of the (strict) monotonicity, (strict) pseudomonotonicity, and quasimonotonicity considered in [46]. The main purpose of this chapter is to obtain relationships among generalized invariant monotonicities and generalized invexities. Several examples are given to show that these generalized invariant monotonicities are proper generalizations of the corresponding generalized monotonicities. Moreover, examples are also given to show the proper inclusive relationships among the generalized invariant monotonicities. In Chap. 6, we present two pairs of Wolfe type second order symmetric dual models in multiobjective nonlinear programming. Under η-invexity or F-convexity conditions, the weak, strong, and converse duality theorems are proved. In Chap. 7, a pair of second order symmetric dual models for a class of nondifferentiable multiobjective programs is introduced. Weak duality, strong duality, and converse duality theorems are established under F-convexity assumptions. In Chap. 8, we focus on the second order nonsymmetric dual for a class of multiobjective programming problems with cone constraints. Based on the first order duality results, four types of second order duality models are formulated. Weak and strong
duality theorems are established under the respective generalized convexity assumptions. Converse duality theorems, essential parts of duality theory, are presented under appropriate assumptions. Higher order duality has been studied by many researchers; in Chap. 9, we give a converse duality theorem and symmetric duality results (the weak, strong, and converse duality theorems) for the higher order Mond-Weir type dual model under mild assumptions. We are thankful to Guangya Chen, Kok Lay Teo, Duan Li, Shui Hung Hou, and Xiaoqi Yang for their joint research collaboration on some parts of the book. We acknowledge that the research in this book has been supported by the National Science Foundation of China (No. 11431004) and the Science Foundation of Chongqing (Nos. cstc2017jcyj-yszxX0008, cstc2018jcyj-yszx0007). Chongqing, China April 2018
Xinmin Yang
Contents
Part I Generalized Preinvexity

1 Preinvex Functions 3
1.1 Introduction 3
1.2 Notations 4
1.3 Semicontinuity and Preinvex Functions 6
1.4 Characterizations of Preinvex Functions 13

2 Semistrictly Preinvex Functions 21
2.1 Introduction and Notations 21
2.2 Properties of Semistrictly Preinvex Functions 22
2.3 Relationship Between Preinvexity and Semistrict Preinvexity 27
2.4 Lower Semicontinuity and Semistrict Preinvexity 32
2.5 Gradient Properties of Strictly and Semistrictly Preinvex Functions 35

3 Semipreinvex Functions 43
3.1 Introduction 43
3.2 Some New Properties of Semipreinvex Functions 44
3.3 Applications to Multiobjective Fractional Programming 48

4 Prequasiinvex Functions 53
4.1 Introduction and Preliminaries 53
4.2 Properties of Prequasiinvex Functions 56
4.3 Properties of Semistrictly Prequasiinvex Functions 65
4.4 Properties of Strictly Prequasiinvex Functions 69
4.5 Applications of Prequasiinvex Type Functions 71

Part II Generalized Invariant Monotonicity

5 Generalized Invexity and Generalized Invariant Monotonicity 77
5.1 Introduction 77
5.2 Invariant Monotone and Strictly Invariant Monotone Maps 78
5.3 Invariant Quasimonotone Maps 81
5.4 Invariant Pseudomonotone Maps 84
5.5 Strictly Invariant Pseudomonotone Maps 87
5.6 Conclusions 90

Part III Duality in Multiobjective Programming

6 Multiobjective Wolfe Type Second-Order Symmetric Duality 95
6.1 Introduction 95
6.2 Notations and Definitions 96
6.3 Wolfe Type I Symmetric Duality 97
6.4 Wolfe Type II Symmetric Duality 102

7 Multiobjective Mond-Weir-Type Second-Order Symmetric Duality 109
7.1 Introduction 109
7.2 Notations and Preliminaries 109
7.3 Mond-Weir-Type Symmetric Duality 110
7.4 Remarks and Examples 119

8 Multiobjective Second-Order Duality with Cone Constraints 121
8.1 Introduction 121
8.2 Preliminaries 123
8.3 Weak Duality 126
8.4 Strong Duality 131
8.5 Converse Duality 133

9 Multiobjective Higher-Order Duality 149
9.1 Introduction 149
9.2 Mond-Weir Type Converse Duality Involving Cone Constraints 150
9.3 Mond-Weir Type Symmetric Duality 153
9.4 Special Cases 158

References 159
Index 165
Part I
Generalized Preinvexity
Chapter 1
Preinvex Functions
1.1 Introduction The notion of invex functions was introduced by Hanson [37] as a generalization of differentiable convex functions. Let K ⊆ Rn be open and let f : K −→ R. If there exists a vector-valued function η(x, u) : K × K −→ Rn such that f (x) − f (u) ≥ ∇f (u)T η(x, u), ∀x, u ∈ K, then the function f is called an invex function. This terminology (a short form of "invariant convex") was given by Craven [23], who observed that invexity (unlike convexity) is not destroyed under bijective coordinate transformations. That is, if h : Rn −→ R is differentiable and convex and φ : Rr −→ Rn (r ≥ n) is differentiable with ∇φ of full rank, then f = h ◦ φ is invex. After the introduction of invex functions, the scope of work on optimization has expanded further. Later, Ben-Israel and Mond [9] introduced a class of real-valued functions having the property that there exists a vector-valued function η(x, u) : K × K −→ Rn such that f (u + λη(x, u)) ≤ λf (x) + (1 − λ)f (u), ∀x, u ∈ K, λ ∈ [0, 1]. Weir and Jeyakumar [97] and Weir and Mond [98] called such functions preinvex functions. This class of functions is a generalization of nondifferentiable convex functions. Pini [81] showed that if f is defined on K ⊆ Rn and is preinvex and differentiable, then f is also invex with respect to η. Pini [81] gave a counterexample which illustrates that invexity does not imply preinvexity. Mohan and Neogy [66] introduced Condition C as follows.
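Hanson's invexity inequality can be illustrated numerically. The following sketch is our own illustration (not from the book): it uses the well-known fact that a differentiable function without stationary points is invex, via the construction η(x, u) = (f(x) − f(u))∇f(u)/‖∇f(u)‖²; the particular function f(x) = 2x + sin(x) and the sampling grid are assumptions made here for demonstration.

```python
import numpy as np

# Illustrative choice (not from the book): f(x) = 2x + sin(x) has
# f'(x) = 2 + cos(x) > 0, hence no stationary points, so f is invex.
def f(x):
    return 2.0 * x + np.sin(x)

def df(u):
    return 2.0 + np.cos(u)

# Standard construction for a function without stationary points:
# eta(x, u) = (f(x) - f(u)) * f'(u) / f'(u)**2 makes Hanson's
# inequality f(x) - f(u) >= f'(u) * eta(x, u) hold (with equality).
def eta(x, u):
    return (f(x) - f(u)) * df(u) / df(u) ** 2

pts = np.linspace(-5.0, 5.0, 41)
ok = all(f(x) - f(u) >= df(u) * eta(x, u) - 1e-12
         for x in pts for u in pts)
print(ok)  # True
```

Note that f is not convex (its second derivative −sin(x) changes sign), so this η cannot simply be x − u.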
Condition C. Let η : Rn × Rn −→ Rn . We say that the function η satisfies Condition C if for any x, y, the following equalities hold, η(y, y + λη(x, y)) = −λη(x, y), η(x, y + λη(x, y)) = (1 − λ)η(x, y), for all λ ∈ [0, 1]. Mohan and Neogy [66] proved that a differentiable function which is invex on K with respect to η is also preinvex under Condition C. In this chapter, we consider the class of preinvex functions. We provide several new criteria for nondifferentiable preinvex functions under certain conditions. In particular, under Condition C and either upper semicontinuity or lower semicontinuity conditions, we show that a real-valued function f is preinvex if and only if f is intermediate-point preinvex. That is, under these conditions, the determination of the preinvexity for a function can be achieved via an intermediate-point preinvexity check on the function. Under an appropriate intermediate-point preinvexity condition, specific distinct characterizations are obtained for prequasiinvex, semistrictly prequasiinvex, locally prequasiinvex, and locally semistrictly prequasiinvex functions, respectively. These results reveal that preinvexity, prequasiinvexity, local prequasiinvexity, semistrict prequasiinvexity, and local semistrict prequasiinvexity are equivalent under appropriate conditions. Other properties and characterizations of preinvex functions will be given in Chap. 2.
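Condition C can be checked mechanically for a candidate η. Below is a minimal numerical sketch (our own illustration): for η(x, y) = x − y, both identities of Condition C reduce to algebraic identities, which we verify at randomly sampled points.

```python
import numpy as np

def eta(x, y):
    # the "convex" kernel; both Condition C identities hold for it exactly
    return x - y

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.uniform(-10.0, 10.0, size=2)
    lam = rng.uniform(0.0, 1.0)
    z = y + lam * eta(x, y)
    # first identity: eta(y, y + lam*eta(x, y)) = -lam*eta(x, y)
    assert abs(eta(y, z) + lam * eta(x, y)) < 1e-9
    # second identity: eta(x, y + lam*eta(x, y)) = (1 - lam)*eta(x, y)
    assert abs(eta(x, z) - (1.0 - lam) * eta(x, y)) < 1e-9
print("Condition C holds for eta(x, y) = x - y at all sampled points")
```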
1.2 Notations The concepts of preinvex function and invex set are defined as follows.
Definition 1.2.1 ([97, 98]) A set K ⊆ Rn is said to be invex if there exists a vector-valued function η : Rn × Rn −→ Rn such that x, y ∈ K, λ ∈ [0, 1] ⇒ y + λη(x, y) ∈ K.
A convex set is an invex set with η(x, y) = x − y. But the converse does not necessarily hold.
Example 1.2.1 This example illustrates that an invex set is not necessarily convex. Let K = [−3, −2] ∪ [−1, 2], and
η(x, y) = x − y,  if −1 ≤ x ≤ 2, −1 ≤ y ≤ 2;
η(x, y) = x − y,  if −3 ≤ x ≤ −2, −3 ≤ y ≤ −2;
η(x, y) = −3 − y, if −1 ≤ x ≤ 2, −3 ≤ y ≤ −2;
η(x, y) = −1 − y, if −3 ≤ x ≤ −2, −1 ≤ y ≤ 2.
Then K is an invex set with respect to η, but it is obvious that K is not a convex set.
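Example 1.2.1 can be spot-checked numerically. The sketch below is our own illustration (the sampling grids and the tolerance are arbitrary choices): it samples points of K and verifies the invexity requirement y + λη(x, y) ∈ K.

```python
import numpy as np

TOL = 1e-9  # guards against floating-point round-off at interval endpoints

def in_K(z):
    # K = [-3, -2] ∪ [-1, 2]
    return -3 - TOL <= z <= -2 + TOL or -1 - TOL <= z <= 2 + TOL

def eta(x, y):
    # the piecewise eta of Example 1.2.1
    if -1 - TOL <= x <= 2 + TOL and -1 - TOL <= y <= 2 + TOL:
        return x - y
    if -3 - TOL <= x <= -2 + TOL and -3 - TOL <= y <= -2 + TOL:
        return x - y
    if -1 - TOL <= x <= 2 + TOL:
        return -3 - y          # x in [-1, 2], y in [-3, -2]
    return -1 - y              # x in [-3, -2], y in [-1, 2]

K_pts = np.concatenate([np.linspace(-3, -2, 21), np.linspace(-1, 2, 61)])
lams = np.linspace(0.0, 1.0, 21)
assert all(in_K(y + lam * eta(x, y))
           for x in K_pts for y in K_pts for lam in lams)
print("K is invex with respect to eta on the sampled grid")
```

In the cross cases the point y + λη(x, y) travels toward the endpoint −3 (respectively −1) of the interval containing y, so it never enters the gap (−2, −1).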
Notice that the function η in Example 1.2.1 is not defined on a convex set but satisfies Condition C.
Definition 1.2.2 ([97, 98]) Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn. Let f : K −→ R. We say that f is preinvex if f (y + λη(x, y)) ≤ λf (x) + (1 − λ)f (y), ∀x, y ∈ K, ∀λ ∈ [0, 1].
Example 1.2.2 The following function is preinvex and satisfies Condition C, but it is not a convex function. Let K = R\{0}, f (x) = |x|, and
η(x, y) = x − y, if x ≥ 0, y ≥ 0;
η(x, y) = x − y, if x ≤ 0, y ≤ 0;
η(x, y) = −y,   otherwise.
Then K is an invex set satisfying Condition C and f is preinvex on K. However, K is not a convex set and f is not a convex function on K.
Definition 1.2.3 ([66]) Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn. Let f : K −→ R. We say that f is prequasiinvex if f (y + λη(x, y)) ≤ max{f (x), f (y)}, ∀x, y ∈ K, ∀λ ∈ [0, 1].
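The preinvexity claim of Example 1.2.2 above can be spot-checked numerically. This is our own sketch (the random sampling is an arbitrary choice and of course does not replace a proof):

```python
import numpy as np

def f(x):
    return abs(x)

def eta(x, y):
    # the eta of Example 1.2.2
    if (x >= 0 and y >= 0) or (x <= 0 and y <= 0):
        return x - y
    return -y

rng = np.random.default_rng(1)
for _ in range(2000):
    x, y = rng.uniform(-5.0, 5.0, size=2)
    lam = rng.uniform(0.0, 1.0)
    # preinvexity: f(y + lam*eta(x, y)) <= lam*f(x) + (1 - lam)*f(y)
    assert f(y + lam * eta(x, y)) <= lam * f(x) + (1 - lam) * f(y) + 1e-9
print("the preinvexity inequality holds at all sampled points")
```

When x and y have the same sign the inequality is just convexity of |·| on a segment; when the signs differ, f(y + λη(x, y)) = (1 − λ)|y|, which is dominated by the right-hand side.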
Now we introduce three classes of generalized preinvex functions as follows.
Definition 1.2.4 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn. Let f : K −→ R. We say that f is semistrictly prequasiinvex if, for all x, y ∈ K with f (x) ≠ f (y), f (y + λη(x, y)) < max{f (x), f (y)}, ∀λ ∈ (0, 1).
Definition 1.2.5 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn. Let f : K −→ R. We say that f is locally prequasiinvex on an invex set K if for any x, y ∈ K, there exists a δ ∈ [0, 1) such that f (y + λη(x, y)) ≤ max{f (x), f (y)}, ∀λ ∈ (δ, 1).
Definition 1.2.6 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn. Let f : K −→ R. We say that f is locally semistrictly prequasiinvex on an invex set K if for any x, y ∈ K with f (x) > f (y), there exists a δ ∈ [0, 1) such that f (y + λη(x, y)) < f (x), ∀λ ∈ (δ, 1).
Definition 1.2.7 Let S be a nonempty subset of Rn. A function f : S −→ R is said to be upper semicontinuous at x̄ ∈ S if for every ε > 0, there exists δ > 0 such that for all x ∈ S with ‖x − x̄‖ < δ, f (x) < f (x̄) + ε.
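A small illustration of the two notions (our own example, not from the book): the step function below is upper semicontinuous at 0 but not lower semicontinuous there, and both properties can be probed by sampling near the point.

```python
# f jumps up to its value at 0, so it is upper semicontinuous at 0
# but not lower semicontinuous there.
def f(x):
    return 1.0 if x >= 0 else 0.0

eps = 0.5
delta = 1e-3
xs = [k * delta / 10 for k in range(-9, 10)]  # sample points with |x| < delta

usc = all(f(x) < f(0.0) + eps for x in xs)    # f(x) < f(x0) + eps
lsc = all(-f(x) < -f(0.0) + eps for x in xs)  # lsc of f = usc of -f
print(usc, lsc)  # True False
```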
If −f is upper semicontinuous at x̄ ∈ S, then f is said to be lower semicontinuous at x̄ ∈ S.
Proposition 1.2.1 Let η : Rn × Rn −→ Rn. If η satisfies Condition C, then
η(y + λ1 η(x, y), y + λ2 η(x, y)) = (λ1 − λ2 )η(x, y), ∀x, y ∈ Rn, λ1, λ2 ∈ [0, 1]. (1.1)
Proof If λ1 = λ2, then (1.1) holds, since the first equality of Condition C with λ = 0 gives η(z, z) = 0 for z = y + λ1 η(x, y). It remains to consider the following two cases.
Case (a): For all x, y ∈ Rn and 0 ≤ λ2 < λ1 ≤ 1, by Condition C, we have
η(y + λ1 η(x, y), y + λ2 η(x, y))
= η(y + λ1 η(x, y), y + λ1 η(x, y) − (λ1 − λ2 )η(x, y))
= η(y + λ1 η(x, y), y + λ1 η(x, y) + ((λ1 − λ2)/λ1) η(y, y + λ1 η(x, y)))
= −((λ1 − λ2)/λ1) η(y, y + λ1 η(x, y))
= (λ1 − λ2 )η(x, y). (1.2)
Case (b): For all x, y ∈ Rn and 0 ≤ λ1 < λ2 ≤ 1, by Condition C, we get
η(y + λ1 η(x, y), y + λ2 η(x, y))
= η(y + λ1 η(x, y), y + λ1 η(x, y) + (λ2 − λ1 )η(x, y))
= η(y + λ1 η(x, y), y + λ1 η(x, y) + ((λ2 − λ1)/(1 − λ1)) η(x, y + λ1 η(x, y)))
= −((λ2 − λ1)/(1 − λ1)) η(x, y + λ1 η(x, y))
= (λ1 − λ2 )η(x, y). (1.3)
Based on (1.2) and (1.3), we obtain (1.1).
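Identity (1.1) can also be checked numerically for a concrete η satisfying Condition C. The sketch below is our own illustration (grids and tolerance are arbitrary choices) and reuses the piecewise η of Example 1.2.1:

```python
import numpy as np

TOL = 1e-9  # guards against round-off at interval endpoints

def eta(x, y):
    # the piecewise eta of Example 1.2.1, which satisfies Condition C
    if -1 - TOL <= x <= 2 + TOL and -1 - TOL <= y <= 2 + TOL:
        return x - y
    if -3 - TOL <= x <= -2 + TOL and -3 - TOL <= y <= -2 + TOL:
        return x - y
    if -1 - TOL <= x <= 2 + TOL:
        return -3 - y          # x in [-1, 2], y in [-3, -2]
    return -1 - y              # x in [-3, -2], y in [-1, 2]

K_pts = np.concatenate([np.linspace(-3, -2, 11), np.linspace(-1, 2, 31)])
lams = np.linspace(0.0, 1.0, 11)
for x in K_pts:
    for y in K_pts:
        for l1 in lams:
            for l2 in lams:
                z1 = y + l1 * eta(x, y)
                z2 = y + l2 * eta(x, y)
                # identity (1.1): eta(z1, z2) = (l1 - l2) * eta(x, y)
                assert abs(eta(z1, z2) - (l1 - l2) * eta(x, y)) < 1e-9
print("identity (1.1) holds on the sampled grid")
```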
1.3 Semicontinuity and Preinvex Functions
The following result concerning convex functions is well known.
Lemma 1.3.1 (see [86]) Let f be a continuous real-valued function on a convex set S ⊆ Rn. If
f ((1/2)x + (1/2)y) ≤ (1/2)f (x) + (1/2)f (y), ∀x, y ∈ S,
then f is a convex function on S.
The importance of this lemma is that convexity can be determined by checking a midpoint convexity condition under the continuity assumption (see [109, 110]). In this section, we present two conditions that determine the preinvexity of a function via an intermediate-point preinvexity check under the condition of upper (respectively, lower) semicontinuity. Our development extends the convexity result of Lemma 1.3.1 to preinvexity. First, we derive the following basic result, which will be used to prove Theorems 1.3.1 and 1.4.1.
Lemma 1.3.2 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn, and let η satisfy Condition C. If f : K −→ R satisfies f (y + η(x, y)) ≤ f (x) for all x, y ∈ K and there exists α ∈ (0, 1) such that
f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y), ∀x, y ∈ K, (1.4)
then the set defined by
A = {λ ∈ [0, 1] | f (y + λη(x, y)) ≤ λf (x) + (1 − λ)f (y), ∀x, y ∈ K}
is dense in [0, 1].
Proof Note that both λ = 0 and λ = 1 belong to the set A, due to the fact that f (y) ≤ f (y) and the assumption f (y + η(x, y)) ≤ f (x). Suppose that the hypotheses hold and A is not dense in [0, 1]. Then, there exist a λ0 ∈ (0, 1) and a neighborhood N(λ0) of λ0 such that
N(λ0) ∩ A = ∅. (1.5)
Define
λ1 = inf{λ ∈ A | λ ≥ λ0}, (1.6)
λ2 = sup{λ ∈ A | λ ≤ λ0}. (1.7)
Then, by (1.5), we have 0 ≤ λ2 < λ1 ≤ 1. Now, since α, 1 − α ∈ (0, 1), we can choose u1, u2 ∈ A with u1 ≥ λ1 and u2 ≤ λ2 such that
max{α, 1 − α}(u1 − u2) < λ1 − λ2. (1.8)
Then, u2 ≤ λ2 < λ1 ≤ u1.
Next, consider λ = αu1 + (1 − α)u2. From Condition C, we have
y + u2 η(x, y) + αη(y + u1 η(x, y), y + u2 η(x, y))
= y + u2 η(x, y) + αη(y + u1 η(x, y), y + u1 η(x, y) − (u1 − u2)η(x, y))
= y + u2 η(x, y) + αη(y + u1 η(x, y), y + u1 η(x, y) + ((u1 − u2)/u1) η(y, y + u1 η(x, y)))
= y + u2 η(x, y) − α((u1 − u2)/u1) η(y, y + u1 η(x, y))
= y + [u2 + α(u1 − u2)]η(x, y)
= y + λη(x, y), ∀x, y ∈ K,
where Condition C is used in the second, the third, and the fourth equalities. From (1.4) and the fact that u1, u2 ∈ A, we obtain
f (y + λη(x, y)) = f (y + u2 η(x, y) + αη(y + u1 η(x, y), y + u2 η(x, y)))
≤ αf (y + u1 η(x, y)) + (1 − α)f (y + u2 η(x, y))
≤ α[u1 f (x) + (1 − u1)f (y)] + (1 − α)[u2 f (x) + (1 − u2)f (y)]
= λf (x) + (1 − λ)f (y).
That is, λ ∈ A. If λ ≥ λ0, then it follows from (1.8) that λ − u2 = α(u1 − u2) < λ1 − λ2, and therefore λ < λ1 − λ2 + u2 ≤ λ1 since u2 ≤ λ2. Because λ ≥ λ0 and λ ∈ A, this contradicts (1.6). Similarly, λ ≤ λ0 leads to a contradiction to (1.7). Consequently, A is dense in [0, 1]. This completes the proof.
Theorem 1.3.1 Suppose that K ⊆ Rn is an invex set with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C. Assume that f is an upper semicontinuous real-valued function on K and f satisfies f (y + η(x, y)) ≤ f (x), ∀x, y ∈ K. Then, f is a preinvex function on K if and only if there exists α ∈ (0, 1) such that f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y), ∀x, y ∈ K.
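Both hypotheses of Theorem 1.3.1, the endpoint condition f(y + η(x, y)) ≤ f(x) and intermediate-point preinvexity at some fixed α, are directly checkable in concrete cases. The following is our own numerical sketch using f and η from Example 1.2.2 and the choice α = 1/2 (both the example and the sampling scheme are illustrative assumptions):

```python
import numpy as np

def f(x):
    return abs(x)

def eta(x, y):
    # the eta of Example 1.2.2
    if (x >= 0 and y >= 0) or (x <= 0 and y <= 0):
        return x - y
    return -y

alpha = 0.5
rng = np.random.default_rng(2)
for _ in range(2000):
    x, y = rng.uniform(-5.0, 5.0, size=2)
    # endpoint hypothesis of Theorem 1.3.1: f(y + eta(x, y)) <= f(x)
    assert f(y + eta(x, y)) <= f(x) + 1e-9
    # intermediate-point preinvexity at alpha = 1/2
    assert f(y + alpha * eta(x, y)) <= alpha * f(x) + (1 - alpha) * f(y) + 1e-9
print("both hypotheses of Theorem 1.3.1 hold at all sampled points")
```

Since f here is also upper semicontinuous, the theorem then guarantees preinvexity for every λ ∈ [0, 1], not just at α = 1/2.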
Proof The necessity follows directly from the definition of preinvex functions. For the sufficiency, let
A = {λ ∈ [0, 1] | f (y + λη(x, y)) ≤ λf (x) + (1 − λ)f (y), ∀x, y ∈ K}.
From the hypotheses of the theorem and Lemma 1.3.2, it is clear that A is dense in the interval [0, 1]. Thus, for any λ̄ ∈ (0, 1), there exists a sequence {λn} with λn ∈ A and λn < λ̄ such that λn −→ λ̄ (n −→ ∞). Given x, y ∈ K, let z = y + λ̄η(x, y). Define
yn = y + ((λ̄ − λn)/(1 − λn))η(x, y), ∀n.
Then, yn −→ y (n −→ ∞). Noticing that 0 < λn < λ̄ < 1, we have
0 < (λ̄ − λn)/(1 − λn) < 1.
Thus, yn ∈ K. Furthermore, by Condition C, we have
yn + λn η(x, yn) = y + ((λ̄ − λn)/(1 − λn))η(x, y) + λn η(x, y + ((λ̄ − λn)/(1 − λn))η(x, y))
= y + λ̄η(x, y)
= z. (1.9)
By the upper semicontinuity of f on K, it follows that, for any ε > 0, there exists N > 0 such that the following holds for n > N:
f (yn) ≤ f (y) + ε.
Therefore, from (1.9) and the fact that λn ∈ A, we have
f (z) = f (yn + λn η(x, yn))
≤ λn f (x) + (1 − λn)f (yn)
≤ λn f (x) + (1 − λn)(f (y) + ε)
−→ λ̄f (x) + (1 − λ̄)(f (y) + ε) (n −→ ∞).
Since ε > 0 may be arbitrarily small, we have f (z) ≤ λ̄f (x) + (1 − λ̄)f (y).
Hence, f is preinvex on K.
Theorem 1.3.2 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn, where η satisfies Condition C. Assume that f : K −→ R is a lower semicontinuous function and that f satisfies f (y + η(x, y)) ≤ f (x) for all x, y ∈ K. Then f is a preinvex function on K if and only if for any x, y ∈ K, there exists α ∈ (0, 1) such that
f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y). (1.10)
Proof The necessity follows directly from the definition of preinvex functions. We only need to prove the sufficiency. To the contrary, suppose that there exist x, y ∈ K and λ ∈ (0, 1) such that
f (y + λη(x, y)) > λf (x) + (1 − λ)f (y). (1.11)
Let xt = y + tη(x, y), t ∈ (λ, 1], and
B = {xt ∈ K | t ∈ (λ, 1], f (xt) = f (y + tη(x, y)) ≤ tf (x) + (1 − t)f (y)},
u = inf{t ∈ (λ, 1] | xt ∈ B}.
By the assumption of the theorem, x1 ∈ B, while (1.11) shows that xλ ∉ B. Then xt ∉ B for λ ≤ t < u, and there exists a sequence {tn} with tn ≥ u and xtn ∈ B such that
tn −→ u (n −→ ∞). Since f is lower semicontinuous, we have
f (xu) ≤ lim n→∞ f (xtn) ≤ lim n→∞ {tn f (x) + (1 − tn)f (y)} = uf (x) + (1 − u)f (y).
Hence, xu ∈ B. Similarly, let yt = y + tη(x, y), t ∈ [0, λ),
D = {yt ∈ K | t ∈ [0, λ), f (yt) = f (y + tη(x, y)) ≤ tf (x) + (1 − t)f (y)},
and
v = sup{t ∈ [0, λ) | yt ∈ D}.
It can be verified that y0 = y ∈ D, while yλ = y + λη(x, y) ∉ D by (1.11). Then yt ∉ D for v < t ≤ λ, and there exists a sequence {tn} with tn ≤ v and ytn ∈ D such that tn −→ v (n −→ ∞). Since f is a lower semicontinuous function, we have
f (yv) ≤ lim n→∞ f (ytn) ≤ lim n→∞ {tn f (x) + (1 − tn)f (y)} = vf (x) + (1 − v)f (y). (1.12)
Hence, yv ∈ D.
From the definitions of u and v, we have 0 ≤ v < λ < u ≤ 1. Now, from Condition C,
xu + λη(yv, xu) = y + uη(x, y) + λη(y + vη(x, y), y + uη(x, y))
= y + uη(x, y) + λη(y + vη(x, y), y + vη(x, y) + (u − v)η(x, y))
= y + uη(x, y) + λη(y + vη(x, y), y + vη(x, y) + ((u − v)/(1 − v)) η(x, y + vη(x, y)))
= y + uη(x, y) − λ((u − v)/(1 − v)) η(x, y + vη(x, y))
= y + [u − λ(u − v)]η(x, y)
= y + [λv + (1 − λ)u]η(x, y), ∀λ ∈ [0, 1].
Hence, from the definitions of u and v, we have
f (xu + λη(yv, xu)) = f (y + [λv + (1 − λ)u]η(x, y))
> [λv + (1 − λ)u]f (x) + [1 − λv − (1 − λ)u]f (y)
= λ[vf (x) + (1 − v)f (y)] + (1 − λ)[uf (x) + (1 − u)f (y)]
≥ λf (yv) + (1 − λ)f (xu), ∀λ ∈ (0, 1), (1.13)
where (1.12) is used in the first equality, and xu ∈ B, yv ∈ D are used in the last inequality. Clearly, (1.13) is a contradiction to (1.10). This completes the proof.

Corollary 1.3.1 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C. Assume that f : K −→ R is a lower semicontinuous function and that f satisfies f (y + η(x, y)) ≤ f (x) for all x, y ∈ K. Then f is a preinvex function on K if and only if there exists α ∈ (0, 1) such that the following holds for every x, y ∈ K:

f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y).

Remark 1.3.1 In view of the results in this section, it follows that, under either an upper or a lower semicontinuity condition, the preinvexity of a function can be verified by checking the intermediate-point preinvexity of the function.
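The content of Remark 1.3.1 can be illustrated numerically. The sketch below takes the kernel η(x, y) = x − y (an illustrative assumption; it is the simplest map satisfying Condition C), for which preinvexity reduces to ordinary convexity, and checks both the intermediate-point inequality at α = 1/2 and the full preinvexity inequality on a grid for the sample function f (x) = x². All concrete choices here are for illustration only.

```python
# Numerical sketch (not part of the original text): with the kernel
# eta(x, y) = x - y, preinvexity reduces to ordinary convexity, so the
# intermediate-point inequality at alpha = 1/2 should go hand in hand
# with the full preinvexity inequality. f(x) = x**2 is an illustrative
# test function.

def eta(x, y):
    return x - y

def f(x):
    return x * x

def preinvex_on_grid(f, eta, xs, lams, tol=1e-12):
    """Check f(y + lam*eta(x, y)) <= lam*f(x) + (1-lam)*f(y) on a grid."""
    for x in xs:
        for y in xs:
            for lam in lams:
                lhs = f(y + lam * eta(x, y))
                rhs = lam * f(x) + (1 - lam) * f(y)
                if lhs > rhs + tol:
                    return False
    return True

xs = [i / 10 - 1.0 for i in range(21)]   # grid on [-1, 1]
lams = [j / 20 for j in range(21)]       # lambda in [0, 1]

# The midpoint (alpha = 1/2) inequality holds ...
assert preinvex_on_grid(f, eta, xs, [0.5])
# ... and the full preinvexity inequality holds as well.
assert preinvex_on_grid(f, eta, xs, lams)
```

Replacing f by a function that is not convex (e.g. −x²) makes the grid check fail, consistent with the equivalence stated above.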
1.4 Characterizations of Preinvex Functions

In [79], Nikodem obtained an interesting result, quoted in the following lemma.

Lemma 1.4.1 Let f be a real-valued function on an open convex set A ⊆ Rn . Then f is convex on A if and only if it is quasiconvex on A and

f ((1/2)x + (1/2)y) ≤ (1/2)f (x) + (1/2)f (y), ∀x, y ∈ A.

In the following, we present four characterizations of preinvex functions under intermediate-point preinvexity for prequasiinvexity, semistrict prequasiinvexity, local semistrict prequasiinvexity, and local prequasiinvexity, respectively (see [105, 107]).

Theorem 1.4.1 (Characterization 1) Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C. Then a real-valued function f is preinvex on K if and only if it is prequasiinvex on K and there exists α ∈ (0, 1) such that

f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y), ∀x, y ∈ K. (1.14)
Proof The necessity is easy to verify. We only need to prove the sufficiency. For every x, y ∈ K, let zλ = y + λη(x, y), ∀λ ∈ [0, 1]. The following two situations are considered separately.

(I) f (x) = f (y). We need to show that

f (y + λη(x, y)) ≤ λf (x) + (1 − λ)f (y), ∀λ ∈ [0, 1].

To the contrary, suppose that there exists β ∈ (0, 1] such that

f (zβ ) = f (y + βη(x, y)) > βf (x) + (1 − β)f (y) = f (x) = f (y). (1.15)

(i) Assume that 0 < α < β ≤ 1. Let u = (β − α)/(1 − α). From Condition C, we have zβ = zu + αη(x, zu ). From (1.14) and (1.15), we obtain

f (zβ ) ≤ αf (x) + (1 − α)f (zu ) < f (zu ). (1.16)

On the other hand, let t = (β − u)/β. Then it follows from Condition C that zu = zβ + tη(y, zβ ).
Hence, from the prequasiinvexity of f and f (y) < f (zβ ), we get f (zu ) = f (zβ + tη(y, zβ )) ≤ f (zβ ), which contradicts the inequality (1.16).

(ii) Assume that 0 < β < α < 1. Let u = β/α > β. Then, by Condition C, we have

zβ = y + αη(zu , y). (1.17)

From (1.14), (1.17), and (1.15), we obtain

f (zβ ) ≤ αf (zu ) + (1 − α)f (y) < f (zu ). (1.18)

Let t = (u − β)/(1 − β). Then, it follows from Condition C that zu = zβ + tη(x, zβ ). From the prequasiinvexity of f and f (x) < f (zβ ), we get f (zu ) = f (zβ + tη(x, zβ )) ≤ f (zβ ), which contradicts the inequality (1.18).

(II) f (x) ≠ f (y). In this case, we need to show that

f (y + λη(x, y)) ≤ λf (x) + (1 − λ)f (y), ∀λ ∈ [0, 1].

To the contrary, suppose that there exists β ∈ (0, 1) such that

f (y + βη(x, y)) > βf (x) + (1 − β)f (y). (1.19)

From Lemma 1.3.2, it is clear that, for A defined in Lemma 1.3.2,

f (y + λη(x, y)) ≤ λf (x) + (1 − λ)f (y), ∀λ ∈ A.
(a) Assume that f (x) < f (y). Then, from (1.19) and the density of A, there exists u ∈ A with u < β such that

f (zu ) = f (y + uη(x, y)) ≤ uf (x) + (1 − u)f (y) < f (y + βη(x, y)) = f (zβ ). (1.20)

Let t = (β − u)/(1 − u). Then, we see that 0 < t < 1. From Condition C, we have zβ = zu + tη(x, zu ).
(i) If f (x) ≤ f (zu ), it follows from the prequasiinvexity of f that f (zβ ) = f (zu + tη(x, zu )) ≤ f (zu ), which contradicts the inequality (1.20).

(ii) If f (x) > f (zu ), then, by the prequasiinvexity of f and f (x) < f (y), it follows from (1.19) that f (zβ ) = f (zu + tη(x, zu )) ≤ f (x) < βf (x) + (1 − β)f (y) < f (zβ ), which is a contradiction.

(b) Assume that f (y) < f (x). Then, from (1.19) and the density of A, there exists a u ∈ A with u > β such that

f (zu ) = f (y + uη(x, y)) ≤ uf (x) + (1 − u)f (y) < f (y + βη(x, y)) = f (zβ ). (1.21)

Let t = (u − β)/u. It is obvious that 0 < t < 1. Then, from Condition C, we have zβ = zu + tη(y, zu ).
(i) If f (y) ≤ f (zu ), then it is clear from the prequasiinvexity of f that f (zβ ) = f (zu + tη(y, zu )) ≤ f (zu ), which contradicts the inequality (1.21). (ii) If f (y) > f (zu ), then, by the prequasiinvexity of f and f (y) < f (x), it is clear from (1.19) that f (zβ ) = f (zu + tη(y, zu )) ≤ f (y) < βf (x) + (1 − β)f (y) < f (zβ ), which is a contradiction. This completes the proof.
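The two Condition C identities invoked repeatedly in the proof above, η(y, y + λη(x, y)) = −λη(x, y) and η(x, y + λη(x, y)) = (1 − λ)η(x, y), can be sanity-checked numerically. The sketch below uses η(x, y) = x − y, an illustrative kernel known to satisfy Condition C; any other kernel with these two properties could be substituted.

```python
# Sanity check of the two Condition C identities for the illustrative
# kernel eta(x, y) = x - y (an assumption for demonstration purposes).

def eta(x, y):
    return x - y

def condition_C_holds(eta, x, y, lam, tol=1e-12):
    z = y + lam * eta(x, y)
    # eta(y, z) = -lam * eta(x, y)
    first = abs(eta(y, z) - (-lam) * eta(x, y)) <= tol
    # eta(x, z) = (1 - lam) * eta(x, y)
    second = abs(eta(x, z) - (1 - lam) * eta(x, y)) <= tol
    return first and second

checks = [condition_C_holds(eta, x / 7.0, y / 5.0, lam / 9.0)
          for x in range(-3, 4) for y in range(-3, 4) for lam in range(10)]
assert all(checks)
```

A kernel that violates Condition C, such as η(x, y) = (x − y)³, fails the same check, which is one quick way to test a candidate kernel before applying the theorems of this section.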
Theorem 1.4.2 (Characterization 2) Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C. Then a real-valued function f is preinvex on K if and only if it is semistrictly prequasiinvex on K and there exists α ∈ (0, 1) such that

f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y), ∀x, y ∈ K. (1.22)
Proof We only need to prove that f is a preinvex function on K. To the contrary, assume that there exist x, y ∈ K and β ∈ (0, 1) such that

f (y + βη(x, y)) > βf (x) + (1 − β)f (y). (1.23)
There are two cases to be considered: (I) f (x) = f (y) and (II) f (x) ≠ f (y).

(I) f (x) = f (y).

(a) Assume that 0 < α < β ≤ 1. Let u = (β − α)/(1 − α), zu = y + uη(x, y), and zβ = y + βη(x, y). Then, by Condition C, we have

zu + αη(x, zu ) = y + uη(x, y) + αη(x, y + uη(x, y)) = y + [u + α(1 − u)]η(x, y) = y + βη(x, y) = zβ .

From (1.22), (1.23), and f (x) = f (y), we obtain

f (zβ ) = f (zu + αη(x, zu )) ≤ αf (x) + (1 − α)f (zu ) < f (zu ). (1.24)

On the other hand, let t = (β − u)/β. Then, by Condition C, we have

zβ + tη(y, zβ ) = y + βη(x, y) + tη(y, y + βη(x, y)) = y + [β − tβ]η(x, y) = y + uη(x, y) = zu .

Hence, from the semistrict prequasiinvexity of f and f (y) < f (zβ ), we get f (zu ) = f (zβ + tη(y, zβ )) < f (zβ ), which contradicts the inequality (1.24).

(b) Assume that 0 < β < α < 1. Let u = β/α > β. Then, by Condition C, we have

y + αη(zu , y) = y + αη(y + uη(x, y), y + uη(x, y) − uη(x, y))
= y + αη(y + uη(x, y), y + uη(x, y) + η(y, y + uη(x, y)))
= y + αuη(x, y) = y + βη(x, y) = zβ .

From (1.22) and (1.23), we obtain

f (zβ ) = f (y + αη(zu , y)) ≤ αf (zu ) + (1 − α)f (y) < f (zu ). (1.25)

Let t = (u − β)/(1 − β). Then, by Condition C, we have

zβ + tη(x, zβ ) = y + βη(x, y) + tη(x, y + βη(x, y)) = y + [β + t (1 − β)]η(x, y) = y + uη(x, y) = zu .
From the semistrict prequasiinvexity of f and f (x) < f (zβ ), we get f (zu ) = f (zβ + tη(x, zβ )) < f (zβ ), which contradicts the inequality (1.25).

(II) f (x) ≠ f (y). From Lemma 1.3.2, we see that, for A defined in Lemma 1.3.2,

f (y + λη(x, y)) ≤ λf (x) + (1 − λ)f (y), ∀λ ∈ A.

We consider the following two cases:

(a) f (x) < f (y). Then, from (1.23) and the density of A on [0, 1], there exists u ∈ A with u < β such that uf (x) + (1 − u)f (y) < f (y + βη(x, y)). That is,

f (zu ) = f (y + uη(x, y)) ≤ uf (x) + (1 − u)f (y) < f (y + βη(x, y)) = f (zβ ). (1.26)

Let t = (β − u)/(1 − u). Then 0 < t < 1. Thus, by Condition C, we have

zu + tη(x, zu ) = y + uη(x, y) + tη(x, y + uη(x, y)) = y + [u + t (1 − u)]η(x, y) = y + βη(x, y) = zβ .

(i) If f (x) < f (zu ), then it follows from the semistrict prequasiinvexity of f that f (zβ ) = f (zu + tη(x, zu )) ≤ f (zu ), which contradicts the inequality (1.26).

(ii) If f (x) > f (zu ), then, by the semistrict prequasiinvexity of f and f (x) < f (y), it follows from (1.23) that f (zβ ) = f (zu + tη(x, zu )) < f (x) < βf (x) + (1 − β)f (y) < f (zβ ), which is a contradiction.

(iii) If f (x) = f (zu ), then, by the fact that zu + tη(x, zu ) = zβ and (1.23), we have

f (zu + tη(x, zu )) = f (zβ ) = f (y + βη(x, y)) > βf (x) + (1 − β)f (y)
= tf (x) + (1 − t)[uf (x) + (1 − u)f (y)]
≥ tf (x) + (1 − t)f (zu ) = f (x) = f (zu ).
Thus, by using a method similar to that used to establish case (I), we obtain a contradiction.

(b) f (y) < f (x). Then, from (1.23) and the density of A on [0, 1], there exists a u ∈ A with u > β such that uf (x) + (1 − u)f (y) < f (y + βη(x, y)). That is,

f (zu ) = f (y + uη(x, y)) ≤ uf (x) + (1 − u)f (y) < f (y + βη(x, y)) = f (zβ ). (1.27)

Let t = (u − β)/u. Then 0 < t < 1. Thus, by Condition C, we have

zu + tη(y, zu ) = y + uη(x, y) + tη(y, y + uη(x, y)) = y + [u − tu]η(x, y) = y + βη(x, y) = zβ .

(i) If f (y) < f (zu ), it follows from the semistrict prequasiinvexity of f that f (zβ ) = f (zu + tη(y, zu )) ≤ f (zu ), which contradicts the inequality (1.27).

(ii) If f (y) > f (zu ), then, by the semistrict prequasiinvexity of f and f (y) < f (x), it follows from (1.23) that f (zβ ) = f (zu + tη(y, zu )) ≤ f (y) < βf (x) + (1 − β)f (y) < f (zβ ), which is a contradiction.

(iii) If f (y) = f (zu ), then, by zu + tη(y, zu ) = zβ and (1.23), we have

f (zu + tη(y, zu )) = f (zβ ) = f (y + βη(x, y)) > βf (x) + (1 − β)f (y)
= tf (y) + (1 − t)[uf (x) + (1 − u)f (y)]
≥ tf (y) + (1 − t)f (y + uη(x, y)) = tf (y) + (1 − t)f (zu ) = f (y) = f (zu ).

This leads to a contradiction via a method similar to that used for case (I). This completes the proof.
Theorem 1.4.3 (Characterization 3) Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C. Then, a real-valued function f is preinvex on K if and only if it is locally semistrictly prequasiinvex on K and there exists α ∈ (0, 1) such that

f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y), ∀x, y ∈ K. (1.28)
Proof We only need to prove that f is a preinvex function on K. Suppose, to the contrary, that there exist x, y ∈ K and β ∈ (0, 1) such that

f (y + βη(x, y)) > βf (x) + (1 − β)f (y). (1.29)
Without loss of generality, we can assume that f (y) ≤ f (x). From Lemma 1.3.2 and (1.29), we know that there exists δ1 ∈ A (where A is defined in Lemma 1.3.2) with δ1 > β such that

uf (x) + (1 − u)f (y) < f (y + βη(x, y)), ∀u ∈ (β, δ1 ].

That is,

f (y + uη(x, y)) ≤ uf (x) + (1 − u)f (y) < f (y + βη(x, y)), ∀u ∈ (β, δ1 ] ∩ A. (1.30)
On the other hand, from f (y) ≤ f (x) and (1.29), we have f (y + βη(x, y)) > f (y). By the locally semistrict prequasiinvexity of f , there exists δ2 ∈ (0, β) such that

f (y + λη(x, y)) < f (y + βη(x, y)), ∀λ ∈ (δ2 , β). (1.31)

It is obvious that there exist u ∈ (β, δ1 ] ∩ A and λ ∈ (δ2 , β) such that β = αu + (1 − α)λ. From Condition C, we have

y + λη(x, y) + αη(y + uη(x, y), y + λη(x, y))
= y + λη(x, y) + αη(y + uη(x, y), y + uη(x, y) − (u − λ)η(x, y))
= y + λη(x, y) + αη(y + uη(x, y), y + uη(x, y) + ((u − λ)/u)η(y, y + uη(x, y)))
= y + λη(x, y) − (α(u − λ)/u)η(y, y + uη(x, y))
= y + (αu + (1 − α)λ)η(x, y) = y + βη(x, y).

From (1.28), (1.30), (1.31), and the above equation, we obtain

f (y + βη(x, y)) = f (y + λη(x, y) + αη(y + uη(x, y), y + λη(x, y)))
≤ (1 − α)f (y + λη(x, y)) + αf (y + uη(x, y)) < f (y + βη(x, y)).

This is a contradiction. The proof is complete.
Lemma 1.4.2 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C. If a real-valued function f is locally prequasiinvex on K, f (y + η(x, y)) ≤ f (x) for all x, y ∈ K, and there exists α ∈ (0, 1) such that

f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y), ∀x, y ∈ K,

then f is a prequasiinvex function on K.

Proof Suppose that there exist two distinct points x, y ∈ K and β ∈ (0, 1) such that f (y + βη(x, y)) > max{f (x), f (y)}. Without loss of generality, we can assume that f (x) ≤ f (y). Then, the above inequality yields

f (y + βη(x, y)) > f (y) ≥ f (x). (1.32)

From (1.32) and the local prequasiinvexity of f , we know that there exists δ ∈ [0, 1) such that

f (y + λη(x, y)) ≤ f (y + βη(x, y)), ∀λ ∈ (β + δ(1 − β), 1). (1.33)

Choose u ∈ (β + δ(1 − β), 1) such that t = (u − β)/u ∈ A (where A is defined in Lemma 1.3.2). Then, by Condition C, we have

zu + tη(y, zu ) = y + uη(x, y) + tη(y, y + uη(x, y)) = y + βη(x, y).

Since t ∈ A, it follows from (1.32), (1.33), and Lemma 1.3.2 that

f (y + βη(x, y)) = f (zu + tη(y, zu )) ≤ (1 − t)f (zu ) + tf (y) < f (y + βη(x, y)).

This is a contradiction. The proof is complete.
Theorem 1.4.4 (Characterization 4) Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C. Then a real-valued function f is preinvex on K if and only if it is locally prequasiinvex on K and there exists α ∈ (0, 1) such that f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y), ∀x, y ∈ K. Proof From Lemma 1.4.2 and Theorem 1.4.1, the result follows readily.
The results obtained in Sect. 1.4 reveal that, under certain conditions, preinvexity is equivalent to prequasiinvexity, local prequasiinvexity, semistrict prequasiinvexity, and local semistrict prequasiinvexity when an appropriate intermediate-point preinvexity condition is satisfied.
Chapter 2
Semistrictly Preinvex Functions
2.1 Introduction and Notations

In Chap. 1, some important properties of preinvex functions are obtained. In this chapter, a new class of generalized convex functions is introduced. These functions are closely related to preinvex functions and are referred to as semistrictly preinvex functions. Some properties of semistrictly preinvex functions are established. In particular, the relationship between a semistrictly preinvex function and a preinvex function is obtained. Further properties of semistrictly preinvex functions are derived under the lower semicontinuity condition. It is shown that a function is semistrictly preinvex if and only if it satisfies a strict invexity inequality for any two points with distinct function values. This property is very similar to the result obtained by Mohan and Neogy for preinvex functions in [66].

Definition 2.1.1 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn . Let f : K −→ R. We say that f is semistrictly preinvex if for all x, y ∈ K with f (x) ≠ f (y), we have

f (y + λη(x, y)) < λf (x) + (1 − λ)f (y), ∀λ ∈ (0, 1).

Example 2.1.1 This example illustrates that a semistrictly preinvex function is not necessarily preinvex. Let K = [−6, −2] ∪ [−1, 6],

f (x) =
  1, if x = 0,
  0, if x ≠ 0, x ∈ K,
© Springer Nature Singapore Pte Ltd. 2018 X. Yang, Generalized Preinvexity and Second Order Duality in Multiobjective Programming, Springer Optimization and Its Applications 142, https://doi.org/10.1007/978-981-13-1981-5_2
and let

η(x, y) =
  x − y,    if −1 ≤ x ≤ 6, −1 ≤ y ≤ 6,
  x − y,    if −6 ≤ x ≤ −2, −6 ≤ y ≤ −2,
  −7 − y,   if −1 ≤ x ≤ 6, −6 ≤ y ≤ −2,
  −y,       if −6 ≤ x ≤ −2, −1 ≤ y ≤ 6, y ≠ 0,
  (1/6)x,   if −6 ≤ x ≤ −2, y = 0.
It is obvious that f is semistrictly preinvex on K with respect to η. Let x = −1, y = 1, λ = 1/2. Since

f (y + λη(x, y)) = f (0) = 1 > 0 = (1/2)f (−1) + (1/2)f (1) = (1/2)f (x) + (1/2)f (y),

f is not preinvex on K for the same η.

Definition 2.1.2 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn . Let f : K −→ R. We say that f is strictly preinvex if for all x, y ∈ K with x ≠ y,

f (y + λη(x, y)) < λf (x) + (1 − λ)f (y), ∀λ ∈ (0, 1).

Definition 2.1.3 Given S ⊆ Rn × R, S is said to be a G-preinvex set if there exists η : Rn × Rn −→ Rn such that, for any pair of (x, α) ∈ S and (y, β) ∈ S, we have

(y + λη(x, y), λα + (1 − λ)β) ∈ S, ∀λ ∈ [0, 1].
2.2 Properties of Semistrictly Preinvex Functions

In this section, we derive some properties of semistrictly preinvex functions (see [108]). In particular, the following theorem shows that a local minimum of a semistrictly preinvex function over an invex set is also a global one.

Theorem 2.2.1 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn , and let f : K −→ R be a semistrictly preinvex function for the same η. If x̄ ∈ K is a local optimal solution to the problem of minimizing f (x) subject to x ∈ K, then x̄ is a global minimum.

Proof Suppose that x̄ ∈ K is a local minimum. Then, there is an ε-neighborhood Nε (x̄) around x̄ such that

f (x̄) ≤ f (x), ∀x ∈ K ∩ Nε (x̄). (2.1)

If x̄ is not a global minimum of f , then there exists an x∗ ∈ K such that f (x∗ ) < f (x̄).
By the semistrict preinvexity of f ,

f (x̄ + αη(x∗ , x̄)) < αf (x∗ ) + (1 − α)f (x̄) < f (x̄), for all 0 < α < 1.

For a sufficiently small α > 0, it follows that

x̄ + αη(x∗ , x̄) ∈ K ∩ Nε (x̄),

which is a contradiction to (2.1). This completes the proof.
From Example 2.1.1 and Theorem 2.2.1, we can conclude that the class of semistrictly preinvex functions constitutes an important class of generalized convex functions in mathematical programming.

Theorem 2.2.2 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn , let f : K −→ R be a semistrictly preinvex function for the same η, and let g : I −→ R be a convex and strictly increasing function, where range(f ) ⊆ I . Then, the composite function g(f ) is a semistrictly preinvex function on K.

Proof Consider any x, y ∈ K and λ ∈ (0, 1). If g(f (x)) ≠ g(f (y)), then f (x) ≠ f (y). Since f is a semistrictly preinvex function, we have

f (y + λη(x, y)) < λf (x) + (1 − λ)f (y).

From the convexity and strictly increasing property of g, we obtain

g[f (y + λη(x, y))] < g[λf (x) + (1 − λ)f (y)] ≤ λg(f (x)) + (1 − λ)g(f (y)).
The following two theorems can be proved similarly. Theorem 2.2.3 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn ; let f : K −→ R be a semistrictly preinvex function for the same η, and let g : I −→ R be a strictly convex and increasing function, where range(f ) ⊆ I . Then the composite function g(f ) is a semistrictly preinvex function on K. Theorem 2.2.4 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn . If fi : K −→ R, i = 1, . . ., p, are both preinvex and semistrictly preinvex for the same η, then f =
p
λi fi ,
∀λi > 0, i = 1, 2, · · · , p,
i=1
is both preinvex and semistrictly function on K with respect to the same η.
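A hedged numerical sketch of Theorem 2.2.2: taking the kernel η(x, y) = x − y, the strictly convex function f (x) = x² is both preinvex and semistrictly preinvex for this kernel, and g(t) = eᵗ is convex and strictly increasing, so the composite inequality should hold strictly at points with f (x) ≠ f (y). All concrete choices below are illustrative assumptions, not part of the original text.

```python
# Illustrative sketch of Theorem 2.2.2 with eta(x, y) = x - y, so
# (semistrict) preinvexity reduces to (strict) convexity.
import math

def f(x):
    return x * x       # strictly convex: preinvex and semistrictly preinvex

def g(t):
    return math.exp(t)  # convex and strictly increasing

def semistrict_ok(x, y, lam):
    # semistrict preinvexity inequality for the composite g(f(.))
    lhs = g(f(y + lam * (x - y)))
    rhs = lam * g(f(x)) + (1 - lam) * g(f(y))
    return lhs < rhs

pairs = [(0.0, 1.0), (-1.0, 2.0), (0.5, -1.5)]
assert all(f(x) != f(y) for x, y in pairs)   # only pairs with distinct values
assert all(semistrict_ok(x, y, lam) for x, y in pairs for lam in (0.25, 0.5, 0.75))
```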
Before deriving further properties of semistrictly preinvex functions, we present several properties of preinvex functions. First, we give a characterization of a preinvex function in terms of its epigraph E(f ), given by

E(f ) = {(x, α) | x ∈ K, α ∈ R, f (x) ≤ α}.

Theorem 2.2.5 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn . Then, f : K −→ R is a preinvex function if and only if E(f ) is a G-preinvex set in Rn × R.

Proof Suppose that f is a preinvex function on K. Let (x, α), (y, β) ∈ E(f ). Then, f (x) ≤ α and f (y) ≤ β. Since f is a preinvex function on K, for 0 ≤ λ ≤ 1, we have

f [y + λη(x, y)] ≤ λf (x) + (1 − λ)f (y) ≤ λα + (1 − λ)β.

Hence, (y + λη(x, y), λα + (1 − λ)β) ∈ E(f ), ∀λ ∈ [0, 1]. Thus, E(f ) is a G-preinvex set.

Conversely, assume that E(f ) is a G-preinvex set. Let x, y ∈ K. Then, (x, f (x)) ∈ E(f ) and (y, f (y)) ∈ E(f ). Thus, for 0 ≤ λ ≤ 1, (y + λη(x, y), λf (x) + (1 − λ)f (y)) ∈ E(f ), and it follows that

f [y + λη(x, y)] ≤ λf (x) + (1 − λ)f (y), ∀λ ∈ [0, 1].

Hence, f is a preinvex function on K.
Theorem 2.2.6 If (Si )i∈I is a family of G-preinvex sets in Rn × R with respect to the same function η : Rn × Rn −→ Rn , then their intersection ∩i∈I Si is a G-preinvex set.

Proof Let (x, α), (y, β) ∈ ∩i∈I Si . Then, for each i ∈ I , (x, α), (y, β) ∈ Si . Since for each i ∈ I , Si is a G-preinvex set, it follows that

(y + λη(x, y), λα + (1 − λ)β) ∈ Si , 0 ≤ λ ≤ 1.
Thus,

(y + λη(x, y), λα + (1 − λ)β) ∈ ∩i∈I Si , ∀λ ∈ [0, 1].

Hence, the result follows.

Theorem 2.2.7 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn , and let (fi )i∈I be a family of real-valued functions which are preinvex for the same function η and bounded from above on K. Then the function f (x) = supi∈I fi (x) is a preinvex function on K.

Proof Since each fi is a preinvex function with respect to the same function η on K, its epigraph E(fi ) = {(x, α) | x ∈ K, α ∈ R, fi (x) ≤ α} is a G-preinvex set in Rn × R. Therefore, by Theorem 2.2.6, their intersection

∩i∈I E(fi ) = {(x, α) | x ∈ K, α ∈ R, fi (x) ≤ α, i ∈ I } = {(x, α) | x ∈ K, α ∈ R, f (x) ≤ α}

is also a G-preinvex set in Rn × R. This intersection is the epigraph of f . Hence, by Theorem 2.2.5, f is a preinvex function on K.
We wish to note that semistrictly preinvex functions do not possess an analogous property to that established in Theorem 2.2.7, as shown in the following example.

Example 2.2.1 Let

f1 (x) =
  1, if x = 0,
  0, if x ≠ 0, x ∈ [−6, −2] ∪ [−1, 6],

f2 (x) =
  1, if x = 1,
  0, if x ≠ 1, x ∈ [−6, −2] ∪ [−1, 6],

and let

η(x, y) =
  x − y,    if −1 ≤ x ≤ 6, −1 ≤ y ≤ 6,
  x − y,    if −6 ≤ x ≤ −2, −6 ≤ y ≤ −2,
  −7 − y,   if −1 ≤ x ≤ 6, −6 ≤ y ≤ −2,
  −y,       if −6 ≤ x ≤ −2, −1 ≤ y ≤ 6, y ≠ 0,
  (1/6)x,   if −6 ≤ x ≤ −2, y = 0.
It is obvious that f1 and f2 are semistrictly preinvex on [−6, −2] ∪ [−1, 6]. It can be verified that

f (x) = sup{fi (x), 1 ≤ i ≤ 2} =
  1, if x = 0 or x = 1,
  0, if x ≠ 0 and x ≠ 1, x ∈ [−6, −2] ∪ [−1, 6].

If we take x = −1, y = 1, λ = 1/2, then we have f (x) = f (−1) = 0 < 1 = f (1) = f (y). However,

f (y + λη(x, y)) = f (0) = 1 > 1/2 = (1/2)f (−1) + (1/2)f (1) = λf (x) + (1 − λ)f (y).

Hence, f is not a semistrictly preinvex function on [−6, −2] ∪ [−1, 6].
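The failure exhibited in Example 2.2.1 can likewise be checked numerically (an illustrative sketch; the case ordering in η is our own):

```python
# Numerical check of Example 2.2.1: the pointwise supremum
# f = max(f1, f2) violates the semistrict preinvexity inequality at
# x = -1, y = 1, lam = 1/2 on K = [-6, -2] U [-1, 6].

def f1(x):
    return 1.0 if x == 0 else 0.0

def f2(x):
    return 1.0 if x == 1 else 0.0

def eta(x, y):
    if (-1 <= x <= 6 and -1 <= y <= 6) or (-6 <= x <= -2 and -6 <= y <= -2):
        return x - y
    if -1 <= x <= 6 and -6 <= y <= -2:
        return -7 - y
    if -6 <= x <= -2 and -1 <= y <= 6 and y != 0:
        return -y
    return x / 6.0   # remaining case: -6 <= x <= -2, y == 0

def f(x):
    return max(f1(x), f2(x))

x, y, lam = -1.0, 1.0, 0.5
assert f(x) != f(y)                  # f(-1) = 0, f(1) = 1
lhs = f(y + lam * eta(x, y))         # f(0) = 1
rhs = lam * f(x) + (1 - lam) * f(y)  # 1/2
assert not lhs < rhs                 # the semistrict inequality fails
```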
But we have the following result:

Theorem 2.2.8 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn , and let fi : K −→ R (i ∈ I ) be any (either finite or infinite) collection of functions that are both semistrictly preinvex and preinvex with respect to the same η on K. Define, for each x ∈ K, f (x) = sup{fi (x), i ∈ I }. Assume that for any x ∈ K, there exists i0 := i(x) ∈ I such that f (x) = fi0 (x). Then, f is both a semistrictly preinvex and a preinvex function on K.

Proof By Theorem 2.2.7, we note that f is preinvex on K. It suffices to show that f is a semistrictly preinvex function on K. Assume that f is not semistrictly preinvex. Then, there exist x, y ∈ K with f (x) ≠ f (y) and α ∈ (0, 1) such that f (y + αη(x, y)) ≥ αf (x) + (1 − α)f (y). By the preinvexity of f , we have f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y). Hence,

f (y + αη(x, y)) = αf (x) + (1 − α)f (y). (2.2)
Let z = y + αη(x, y). From the assumptions of the theorem, there exist i(z) := i0 , i(x) := i1 , i(y) := i2 , satisfying f (z) = fi0 (z), f (x) = fi1 (x), f (y) = fi2 (y).
Then, (2.2) implies that

fi0 (z) = αfi1 (x) + (1 − α)fi2 (y). (2.3)

(i) If fi0 (x) ≠ fi0 (y), then we have, by the semistrict preinvexity of fi0 ,

fi0 (z) < αfi0 (x) + (1 − α)fi0 (y). (2.4)

From fi0 (x) ≤ fi1 (x), fi0 (y) ≤ fi2 (y), and (2.4), we obtain fi0 (z) < αfi1 (x) + (1 − α)fi2 (y), which contradicts (2.3).

(ii) If fi0 (x) = fi0 (y), we have, by the preinvexity of fi0 ,

fi0 (z) ≤ αfi0 (x) + (1 − α)fi0 (y) = fi0 (x) = fi0 (y). (2.5)

Since f (x) ≠ f (y), at least one of the following inequalities, (a) fi0 (x) ≤ fi1 (x) = f (x) and (b) fi0 (y) ≤ fi2 (y) = f (y), is a strict inequality. From (2.5), we obtain

f (z) = fi0 (z) < αf (x) + (1 − α)f (y),

which contradicts (2.3). This completes the proof.
2.3 Relationship Between Preinvexity and Semistrict Preinvexity

We have the following interesting results on the relationship between preinvexity and semistrict preinvexity:

Theorem 2.3.1 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C, and let f : K −→ R be a semistrictly preinvex function with respect to the same η on K satisfying f (y + η(x, y)) ≤ f (x), ∀x, y ∈ K. If there exists α ∈ (0, 1) such that, for any x, y ∈ K, the following inequality holds:

f (y + αη(x, y)) ≤ αf (x) + (1 − α)f (y), (2.6)
Then f is a preinvex function on K. Proof To the contrary, suppose that there exist x, y ∈ K and λ ∈ (0, 1) such that f (y + λη(x, y)) > λf (x) + (1 − λ)f (y).
Notice that λ cannot be equal to α in view of the hypothesis given in (2.6). Without loss of generality, assume that f (x) ≥ f (y) and let z = y + λη(x, y). Then,

f (z) > λf (x) + (1 − λ)f (y). (2.7)
Suppose that f (x) > f (y). Since f is a semistrictly preinvex function, we have f (z) < λf (x) + (1 − λ)f (y), which is a contradiction to (2.7). If f (x) = f (y), then it follows from (2.7) that

f (z) > f (x) = f (y). (2.8)
(i) Suppose that 0 < λ < α < 1. Let z1 = y + (λ/α)η(x, y). Then, from Condition C,

y + αη(z1 , y) = y + αη(y + (λ/α)η(x, y), y + (λ/α)η(x, y) + η(y, y + (λ/α)η(x, y)))
= y + α[−η(y, y + (λ/α)η(x, y))]
= y + λη(x, y) = z.

By (2.6), we have f (z) ≤ αf (z1 ) + (1 − α)f (y). Because of (2.8) and the above inequality, it follows that

f (z) < f (z1 ). (2.9)

Let b = ((1 − α)λ)/(α(1 − λ)).
Since 0 < λ < α < 1, it is easy to show that 0 < b < 1. Thus, from Condition C,

z + bη(x, z) = y + λη(x, y) + bη(x, y + λη(x, y)) = y + [λ + b(1 − λ)]η(x, y)
= y + [λ + ((1 − α)/α)λ]η(x, y) = y + (λ/α)η(x, y) = z1 .

Since f is a semistrictly preinvex function, it follows from the inequality (2.8) and the above equality that f (z1 ) < bf (x) + (1 − b)f (z) < f (z). This is a contradiction to (2.9).

(ii) Suppose that 0 < α < λ < 1, that is, 0 < (λ − α)/(1 − α) < 1. Let z2 = y + ((λ − α)/(1 − α))η(x, y). Then, from Condition C,

z2 + αη(x, z2 ) = y + ((λ − α)/(1 − α))η(x, y) + αη(x, y + ((λ − α)/(1 − α))η(x, y))
= y + [(λ − α)/(1 − α) + α(1 − (λ − α)/(1 − α))]η(x, y)
= y + λη(x, y) = z.

By (2.6), we have f (z) ≤ αf (x) + (1 − α)f (z2 ). Again, it follows from (2.8) and the above inequality that

f (z) < f (z2 ). (2.10)

Let u = (λ − α)/((1 − α)λ).
Since 0 < α < λ < 1, it is easy to show that 0 < u < 1. Thus,

z + (1 − u)η(y, z) = y + λη(x, y) + (1 − u)η(y, y + λη(x, y)) = y + [λ − λ(1 − u)]η(x, y)
= y + λuη(x, y) = y + ((λ − α)/(1 − α))η(x, y) = z2 .

Since f is a semistrictly preinvex function, it follows from the inequality (2.8) and the above equality that f (z2 ) < uf (z) + (1 − u)f (y) < f (z). This is a contradiction to (2.10).
Theorem 2.3.2 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C, and let f : K −→ R be a preinvex function for the same η on K. If there exists α ∈ (0, 1) such that for every x, y ∈ K with f (x) ≠ f (y),

f (y + αη(x, y)) < αf (x) + (1 − α)f (y), (2.11)

then f is a semistrictly preinvex function on K.

Proof To the contrary, suppose that there exist x, y ∈ K and λ ∈ (0, 1) such that f (x) ≠ f (y) and

f (y + λη(x, y)) ≥ λf (x) + (1 − λ)f (y). (2.12)
Without loss of generality, suppose that f (x) < f (y). Let z = y + λη(x, y). Then, (2.12) implies that

f (z) ≥ λf (x) + (1 − λ)f (y) > f (x). (2.13)

Since f is a preinvex function, we have f (z) ≤ λf (x) + (1 − λ)f (y), which together with (2.13) leads to

f (x) < f (z) = λf (x) + (1 − λ)f (y). (2.14)
Let z1 = z + αη(x, z) and zk = zk−1 + αη(z, zk−1 ) (k ≥ 2). Then, from Condition C,

z2 = z1 + αη(z, z1 ) = z + α(1 − α)η(x, z),
z3 = z2 + αη(z, z2 ) = z + α(1 − α)^2 η(x, z),
· · · · · ·
zk = zk−1 + αη(z, zk−1 ) = z + α(1 − α)^{k−1} η(x, z), ∀k ∈ N.

From inequalities (2.11) and (2.14) as well as the preinvexity of f , we get

f (z1 ) = f (z + αη(x, z)) ≤ αf (x) + (1 − α)f (z) < f (z),
f (z2 ) = f (z1 + αη(z, z1 )) ≤ αf (z) + (1 − α)f (z1 ) < f (z),
f (z3 ) = f (z2 + αη(z, z2 )) ≤ αf (z) + (1 − α)f (z2 ) < f (z),
· · · · · ·
f (zk ) = f (zk−1 + αη(z, zk−1 )) ≤ αf (z) + (1 − α)f (zk−1 ) < f (z), ∀k ∈ N. (2.15)
Since z = y + λη(x, y), it follows from Condition C that

zk = z + α(1 − α)^{k−1} η(x, z) = y + [λ + α(1 − α)^{k−1} (1 − λ)]η(x, y).

Let k1 ∈ N be such that

α^2 (1 − α)^{k1−1} < λ/(1 − λ),

and let

β1 = λ + α(1 − α)^{k1} (1 − λ), β2 = λ − α^2 (1 − α)^{k1−1} (1 − λ),
x̄ = y + β1 η(x, y), ȳ = y + β2 η(x, y).

Then, from Condition C,

z + α(1 − α)^{k1} η(x, z) = y + β1 η(x, y) = x̄. (2.16)

By (2.15) and (2.16), we obtain

f (x̄) = f (z + α(1 − α)^{k1} η(x, z)) = f (z_{k1+1} ) < f (z). (2.17)
(i) If f (x̄) ≥ f (ȳ), then it is clear from Condition C that ȳ + αη(x̄, ȳ) = y + λη(x, y) = z. The preinvexity of f implies that f (z) ≤ αf (x̄) + (1 − α)f (ȳ) ≤ f (x̄), which contradicts inequality (2.17).

(ii) If f (x̄) < f (ȳ), then, by Condition C, we have ȳ + αη(x̄, ȳ) = y + λη(x, y) = z. By (2.11), we obtain

f (z) < αf (x̄) + (1 − α)f (ȳ). (2.18)

Since x̄ = y + β1 η(x, y) and ȳ = y + β2 η(x, y), we have, by the preinvexity of f ,

f (x̄) ≤ β1 f (x) + (1 − β1 )f (y), (2.19)
f (ȳ) ≤ β2 f (x) + (1 − β2 )f (y). (2.20)

From (2.18), (2.19), and (2.20), we get f (z) < λf (x) + (1 − λ)f (y), which contradicts inequality (2.14).
2.4 Lower Semicontinuity and Semistrict Preinvexity

Under the lower semicontinuity condition and Condition C, we prove in this section that semistrict preinvexity implies preinvexity.

Theorem 2.4.1 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C, and let f : K −→ R be a semistrictly preinvex function for the same η on K. If f is a lower semicontinuous function, then f is preinvex on K.

Proof Let x, y ∈ K. If f (x) ≠ f (y), then, by the definition of a semistrictly preinvex function, we have

f (y + λη(x, y)) < λf (x) + (1 − λ)f (y), ∀λ ∈ (0, 1).
Now suppose that f (x) = f (y). To show that f is a preinvex function, we need to show that

f (y + λη(x, y)) ≤ f (x), ∀λ ∈ (0, 1).

By contradiction, suppose that there exists α ∈ (0, 1) such that

f (y + αη(x, y)) > f (x). (2.21)

Let zα = y + αη(x, y). Since f is lower semicontinuous (see [41]), there exists β with α < β < 1 such that

f (zβ ) = f (y + βη(x, y)) > f (x) = f (y). (2.22)

From Condition C,

zβ = zα + ((β − α)/(1 − α))η(x, zα ).

Hence, from (2.21) and the semistrict preinvexity of f , we have

f (zβ ) < ((β − α)/(1 − α))f (x) + (1 − (β − α)/(1 − α))f (zα ) < f (zα ). (2.23)

On the other hand, from Condition C,

zα = zβ + (1 − α/β)η(y, zβ ).

Therefore, from (2.22) and the semistrict preinvexity of f , we have

f (zα ) < (1 − α/β)f (y) + (α/β)f (zβ ) < f (zβ ),
Now using Theorem 1.3.2, we obtain a generalization of Theorem 2.4.1. Theorem 2.4.2 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn , where η satisfies Condition C, and let f : K −→ R be a lower semicontinuous function that satisfies f (y + η(x, y)) ≤ f (x). Suppose that there exists α ∈ (0, 1) such that for every x, y ∈ K, f (x) = f (y), f (y + αη(x, y)) < αf (x) + (1 − α)f (y). Then f is a preinvex function on K.
(2.24)
Proof By Theorem 1.3.2, we only need to show that for each x, y ∈ K, there exists λ ∈ (0, 1) such that f (y + λη(x, y)) ≤ λf (x) + (1 − λ)f (y). Assume, to the contrary, that there exist x, y ∈ K such that

f (y + λη(x, y)) > λf (x) + (1 − λ)f (y), ∀λ ∈ (0, 1). (2.25)

If f (x) ≠ f (y), then, by (2.24), we have f (y + αη(x, y)) < αf (x) + (1 − α)f (y), which contradicts (2.25). If f (x) = f (y), then (2.25) implies that

f (y + λη(x, y)) > f (x) = f (y), ∀λ ∈ (0, 1), (2.26)

by which and Condition C, we obtain

f [y + λη(x, y) + αη(x, y + λη(x, y))] = f [y + (λ + α(1 − λ))η(x, y)] > f (y), ∀λ ∈ (0, 1). (2.27)

From (2.24) and (2.26), we have

f [y + λη(x, y) + αη(x, y + λη(x, y))] < (1 − α)f [y + λη(x, y)] + αf (x)
< f [y + λη(x, y)], ∀λ ∈ (0, 1). (2.28)

Again, by (2.24), (2.27), and (2.28), we have

f [y + (1 − α)(λ + α(1 − λ))η(x, y)]
= f [y + λη(x, y) + αη(x, y + λη(x, y)) + αη(y, y + λη(x, y) + αη(x, y + λη(x, y)))]
< αf (y) + (1 − α)f (y + λη(x, y) + αη(x, y + λη(x, y)))
< f (y + λη(x, y) + αη(x, y + λη(x, y)))
< f (y + λη(x, y)), ∀λ ∈ (0, 1).
Let λ = (1 − α)/(2 − α) ∈ (0, 1). Then, the above inequality implies that

f (y + ((1 − α)/(2 − α))η(x, y)) < f (y + ((1 − α)/(2 − α))η(x, y)),

which is a contradiction.
2.5 Gradient Properties of Strictly and Semistrictly Preinvex Functions

In this section, we will establish gradient properties of strictly preinvex and semistrictly preinvex functions. Before showing the properties in Theorems 2.5.2 and 2.5.3, we first derive the following result on preinvex functions.

Theorem 2.5.1 Let K be a nonempty invex set in Rn with respect to η : Rn × Rn −→ Rn, where η satisfies Condition C, and let f : K −→ R be a preinvex function for the same η. For any x, y ∈ K, let g(λ) = f[y + λη(x, y)], ∀λ ∈ [0, 1]. Then

(g(α) − g(0))/α ≤ (g(β) − g(0))/β, 0 < α < β ≤ 1.

That is,

(f[y + αη(x, y)] − f(y))/α ≤ (f[y + βη(x, y)] − f(y))/β, 0 < α < β ≤ 1.

Proof For 0 < α < β ≤ 1, let zα = y + αη(x, y), zβ = y + βη(x, y), u = 1 − α/β. By Condition C,

zβ + uη(y, zβ) = y + βη(x, y) + uη(y, y + βη(x, y)) = y + (β − uβ)η(x, y) = y + αη(x, y) = zα.

We have

g(α) = f(zα) = f(zβ + uη(y, zβ)) ≤ uf(y) + (1 − u)f(zβ) = (1 − α/β)g(0) + (α/β)g(β).

Therefore, we obtain

(g(α) − g(0))/α ≤ (g(β) − g(0))/β.
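The monotonicity of the difference quotient in Theorem 2.5.1 is easy to observe numerically. The following sketch is an added illustration with hypothetical data: f(x) = exp(x), which is convex and hence preinvex for η(x, y) = x − y.

```python
import math

# Theorem 2.5.1 illustration: the quotient (g(lam) - g(0)) / lam should be
# nondecreasing in lam for a preinvex f.
f = lambda x: math.exp(x)
eta = lambda x, y: x - y

x, y = 2.0, -1.0
g = lambda lam: f(y + lam * eta(x, y))

quotients = [(g(lam) - g(0)) / lam for lam in (0.1, 0.3, 0.5, 0.7, 1.0)]
assert all(a <= b + 1e-12 for a, b in zip(quotients, quotients[1:]))
print("difference quotients:", [round(q, 3) for q in quotients])
```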
Theorem 2.5.2 Let K be a nonempty open invex set in Rn with respect to η : Rn × Rn −→ Rn, where η satisfies Condition C and, for any x, y ∈ K with x ≠ y, x ≠ y + λη(x, y) and y ≠ y + λη(x, y) for all λ ∈ (0, 1). Let f : K −→ R be a differentiable function. Then, f is a strictly preinvex function for the same η on K if and only if f is a strictly invex function, i.e., for every pair of points x, y ∈ K with x ≠ y, it holds that

f(y) > f(x) + η(y, x)T ∇f(x).

Proof Suppose that f is a strictly preinvex function on K. By definition, for every pair of points x, y ∈ K such that x ≠ y, we have

f(x + λη(y, x)) < λf(y) + (1 − λ)f(x), ∀λ ∈ (0, 1).

This yields

(f(x + λη(y, x)) − f(x))/λ < f(y) − f(x), ∀λ ∈ (0, 1).

From Theorem 2.5.1, we get

η(y, x)T ∇f(x) = inf_{λ>0} (f(x + λη(y, x)) − f(x))/λ < f(y) − f(x),

that is,

f(y) > f(x) + η(y, x)T ∇f(x).

Conversely, suppose that x, y ∈ K, x ≠ y, λ ∈ (0, 1). By the strict invexity of f, we have

f(x) − f(y + λη(x, y)) > η(x, y + λη(x, y))T ∇f(y + λη(x, y)).   (2.29)
Similarly, applying the strict invexity condition to the pair y, y + λη(x, y) yields

f(y) − f(y + λη(x, y)) > η(y, y + λη(x, y))T ∇f(y + λη(x, y)).   (2.30)
Now, multiply (2.29) by λ and (2.30) by (1 − λ). Then, by adding them together, we obtain

λf(x) + (1 − λ)f(y) − f(y + λη(x, y)) > (λη(x, y + λη(x, y)) + (1 − λ)η(y, y + λη(x, y)))T ∇f(y + λη(x, y)).

However, by Condition C,

λη(x, y + λη(x, y)) + (1 − λ)η(y, y + λη(x, y)) = λ(1 − λ)η(x, y) − (1 − λ)λη(x, y) = 0.

Hence f(y + λη(x, y)) < λf(x) + (1 − λ)f(y), and the conclusion of the theorem follows.
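The equivalence in Theorem 2.5.2 can be sampled numerically. This sketch is an added illustration with hypothetical data: f(x) = x², whose gradient is 2x, together with η(x, y) = x − y, for which strict invexity is the usual strict convexity gradient inequality.

```python
import random

# Sample both sides of Theorem 2.5.2 for f(x) = x**2, eta(x, y) = x - y.
f = lambda x: x * x
df = lambda x: 2 * x          # derivative (gradient) of f
eta = lambda x, y: x - y

random.seed(1)
for _ in range(1000):
    x, y = random.uniform(-4, 4), random.uniform(-4, 4)
    if abs(x - y) < 1e-6:
        continue
    # strict invexity: f(y) > f(x) + eta(y, x) * f'(x)
    assert f(y) > f(x) + eta(y, x) * df(x)
    # strict preinvexity: f(y + lam*eta(x, y)) < lam*f(x) + (1 - lam)*f(y)
    lam = random.uniform(0.01, 0.99)
    assert f(y + lam * eta(x, y)) < lam * f(x) + (1 - lam) * f(y)
print("Theorem 2.5.2 equivalence illustrated on samples")
```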
To establish a gradient property of semistrictly preinvex functions in Theorem 2.5.3 below, let us introduce the following condition.

Condition A: Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn. A function f is said to satisfy Condition A if, for any x, y ∈ K with f(x) < f(y), one has f(y + η(x, y)) < f(y).

The following simple example illustrates Condition A.

Example 2.5.1 Let f(x) = −|x|, ∀x ∈ [−1, 1], and let

η(x, y) = x − y, if x ≥ 0, y ≥ 0 or x ≤ 0, y ≤ 0;
η(x, y) = y − x, if x ≥ 0, y ≤ 0 or x ≤ 0, y ≥ 0.

Then, f satisfies Condition A.

Theorem 2.5.3 Let K be a nonempty open invex set in Rn with respect to η : Rn × Rn −→ Rn, where η satisfies Condition C, and let f : K −→ R be a differentiable function that satisfies Condition A. Then, f is a semistrictly preinvex function for the same η on K if and only if, for every pair of points x, y ∈ K with f(x) ≠ f(y), it holds that

f(y) > f(x) + η(y, x)T ∇f(x).

Proof Suppose that f is a semistrictly preinvex function on K. By definition, for every pair of points x, y ∈ K with f(x) ≠ f(y), we have

f(x + λη(y, x)) < λf(y) + (1 − λ)f(x), ∀λ ∈ (0, 1).

This yields

(f(x + λη(y, x)) − f(x))/λ < f(y) − f(x), ∀λ ∈ (0, 1).

From Theorem 2.5.1, we get

η(y, x)T ∇f(x) = inf_{λ>0} (f(x + λη(y, x)) − f(x))/λ < f(y) − f(x),
that is,

f(y) > f(x) + η(y, x)T ∇f(x).

Conversely, suppose that, for every pair of points x, y ∈ K such that f(x) ≠ f(y), we have f(y) > f(x) + η(y, x)T ∇f(x). Now let zα = y + αη(x, y), ∀α ∈ (0, 1). Without loss of generality, we assume f(x) < f(y). We now show that f(zα) ≠ f(y), ∀α ∈ (0, 1). Assume, to the contrary, that for some α0 ∈ (0, 1),

f(zα0) = f(y).   (2.31)
Now from (2.31), we will show that f(zα0 + λη(y, zα0)) = f(y) for any λ ∈ (0, 1). Assume that there exists λ̄ ∈ (0, 1) such that f(zα0 + λ̄η(y, zα0)) ≠ f(y).

(i) Suppose that f(zα0 + λ̄η(y, zα0)) > f(y). Let g(λ) = f(zα0 + λη(y, zα0)), ∀λ ∈ [0, 1]. Since g(0) = f(zα0) = f(y), it follows from Condition C that

g(1) = f(zα0 + η(y, zα0)) = f(y + α0η(x, y) + η(y, y + α0η(x, y))) = f(y).

Therefore, g attains a maximum in (0, 1). Assume that the maximum is achieved at λ0 ∈ (0, 1). Then,

0 = g′(λ0) = η(y, zα0)T ∇f(zα0 + λ0η(y, zα0)).

From Condition C, we obtain η(x, y)T ∇f(zα0 + λ0η(y, zα0)) = 0. Using Condition C again, we have

η(y, zα0 + λ0η(y, zα0)) = η(y, y + α0η(x, y) + λ0η(y, y + α0η(x, y))) = η(y, y + (α0 − λ0α0)η(x, y)) = −(α0 − λ0α0)η(x, y).

Hence, we have

η(y, zα0 + λ0η(y, zα0))T ∇f(zα0 + λ0η(y, zα0)) = 0.   (2.32)

From f(zα0 + λ0η(y, zα0)) ≥ f(zα0 + λ̄η(y, zα0)) > f(y), (2.32) and the hypotheses of the theorem, we obtain

f(y) > f(zα0 + λ0η(y, zα0)) + η(y, zα0 + λ0η(y, zα0))T ∇f(zα0 + λ0η(y, zα0)) = f(zα0 + λ0η(y, zα0)),

which contradicts the assumption that g attains its maximum at λ0.

(ii) Suppose that f(zα0 + λ̄η(y, zα0)) < f(y). From f(zα0) = f(y) and f(x) < f(y) as well as Condition A, we have

f(y + η(x, y)) < f(y) = f(zα0).

Let g(λ) = f(zα0 + λ̄η(y, zα0) + λη(x, zα0 + λ̄η(y, zα0))). Since

g(0) = f(zα0 + λ̄η(y, zα0)) < f(y) = f(zα0),

it follows from Condition C and Condition A that

g(1) = f(zα0 + λ̄η(y, zα0) + η(x, zα0 + λ̄η(y, zα0))) = f(y + η(x, y)) < f(y) = f(zα0).

Since 0 < α0λ̄/(1 − α0(1 − λ̄)) < 1, by Condition C, we have

f(zα0) = g(α0λ̄/(1 − α0(1 − λ̄))).

Hence, g attains its maximum in (0, 1). Suppose that the maximum occurs at λ0 ∈ (0, 1). Then

0 = g′(λ0) = η(x, zα0 + λ̄η(y, zα0))T ∇f(zα0 + λ̄η(y, zα0) + λ0η(x, zα0 + λ̄η(y, zα0))).

Since f(zα0 + λ̄η(y, zα0) + λ0η(x, zα0 + λ̄η(y, zα0))) > f(zα0 + λ̄η(y, zα0)),
we have

f(zα0 + λ̄η(y, zα0)) > f(zα0 + λ̄η(y, zα0) + λ0η(x, zα0 + λ̄η(y, zα0)))
+ η(zα0 + λ̄η(y, zα0), zα0 + λ̄η(y, zα0) + λ0η(x, zα0 + λ̄η(y, zα0)))T ∇f(zα0 + λ̄η(y, zα0) + λ0η(x, zα0 + λ̄η(y, zα0)))
= f(zα0 + λ̄η(y, zα0) + λ0η(x, zα0 + λ̄η(y, zα0))) − λ0η(x, zα0 + λ̄η(y, zα0))T ∇f(zα0 + λ̄η(y, zα0) + λ0η(x, zα0 + λ̄η(y, zα0)))
= f(zα0 + λ̄η(y, zα0) + λ0η(x, zα0 + λ̄η(y, zα0))),

which contradicts the assumption that λ0 is a maximum point of g(λ) in (0, 1).

Combining (i) and (ii), we have

f(zα0 + λη(y, zα0)) = f(y), ∀λ ∈ [0, 1].   (2.33)

Let h(λ) = f(zα0 + λη(y, zα0)). Then, by (2.33), we have

0 = h′(1) = η(y, zα0)T ∇f(zα0 + η(y, zα0)) = η(y, zα0)T ∇f(y) = −α0η(x, y)T ∇f(y).

That is,

η(x, y)T ∇f(y) = 0.   (2.34)

By the hypotheses of the theorem and (2.34), we obtain

f(x) > f(y) + η(x, y)T ∇f(y) = f(y),

which contradicts f(x) < f(y). Therefore, f(zα) ≠ f(y), ∀α ∈ (0, 1).

If f(zα) = f(x) for some α ∈ (0, 1), then, from f(x) < f(y),

f(zα) < αf(x) + (1 − α)f(y).
If f(zα) ≠ f(x) for some α ∈ (0, 1), then, by the hypotheses of the theorem and Condition C, we obtain

f(x) > f(zα) + η(x, zα)T ∇f(zα) = f(zα) + (1 − α)η(x, y)T ∇f(zα),   (2.35)

f(y) > f(zα) + η(y, zα)T ∇f(zα) = f(zα) − αη(x, y)T ∇f(zα).   (2.36)

Multiply (2.35) by α and (2.36) by (1 − α). Then, by adding them together, we have

f(zα) < αf(x) + (1 − α)f(y),

i.e.,

f(y + αη(x, y)) < αf(x) + (1 − α)f(y), ∀α ∈ (0, 1).

This completes the proof.
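Condition A from Example 2.5.1 lends itself to a direct numerical check. The following sketch is an added illustration; f(x) = −|x| and the sign-dependent η are taken from the example, and f is evaluated on all of R so that y + η(x, y) is always defined.

```python
import random

# Check Condition A for Example 2.5.1: f(x) < f(y) implies
# f(y + eta(x, y)) < f(y).
f = lambda x: -abs(x)

def eta(x, y):
    same_sign = (x >= 0 and y >= 0) or (x <= 0 and y <= 0)
    return x - y if same_sign else y - x

random.seed(2)
for _ in range(2000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if f(x) < f(y):
        assert f(y + eta(x, y)) < f(y)
print("Condition A holds for Example 2.5.1 on samples")
```

Indeed, in the same-sign case y + η(x, y) = x, and in the opposite-sign case y + η(x, y) = 2y − x, whose absolute value exceeds |y| whenever |x| > |y|.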
Chapter 3
Semipreinvex Functions
3.1 Introduction

Yang and Chen [102] introduced a class of generalized convex functions, called semipreinvex functions. A set K in Rn is said to satisfy the "semi-connected" property if, for any x, y ∈ K and α ∈ [0, 1], there exists a vector η(x, y, α) ∈ Rn such that y + αη(x, y, α) ∈ K. Let K be a set in Rn having the "semi-connected" property with respect to η(x, y, α) : K × K × [0, 1] −→ Rn, and let f(x) be a real function on K. Then, f is called semipreinvex with respect to the same η(x, y, α) if lim_{α↓0} αη(x, y, α) = 0 and, for any x, y ∈ K and α ∈ [0, 1],

f(y + αη(x, y, α)) ≤ αf(x) + (1 − α)f(y).

Semipreinvex functions include preinvex functions and arc-connected convex functions as special cases. A semipreinvex function preserves some of the nice properties enjoyed by convex functions. In this chapter, we give some new properties of semipreinvex functions. In particular, we show that the ratio of two semipreinvex functions is a semipreinvex function; this extends the result of Khan and Hanson [50] that the ratio of two invex functions is invex. We also point out that a statement made by Noor in [80] is not correct. Also, saddle point optimality criteria involving semipreinvex functions are developed for a multiobjective fractional programming problem.
© Springer Nature Singapore Pte Ltd. 2018 X. Yang, Generalized Preinvexity and Second Order Duality in Multiobjective Programming, Springer Optimization and Its Applications 142, https://doi.org/10.1007/978-981-13-1981-5_3
3.2 Some New Properties of Semipreinvex Functions

The following results characterize semipreinvex functions.

Theorem 3.2.1 Let K be a semi-connected set with respect to η(x, y, α). A function f : K −→ R is semipreinvex with respect to the same η(x, y, α) if and only if, for all x, y ∈ K, α ∈ [0, 1], and u, v ∈ R,

f(x) < u and f(y) < v  ⇒  f(y + αη(x, y, α)) < αu + (1 − α)v.

Proof Let f be semipreinvex with respect to η, and let f(x) < u, f(y) < v, 0 < α < 1. From the definition of semipreinvexity, we have

f(y + αη(x, y, α)) ≤ αf(x) + (1 − α)f(y) < αu + (1 − α)v.

Conversely, let x, y ∈ K, α ∈ [0, 1]. For any δ > 0, f(x) < f(x) + δ and f(y) < f(y) + δ. By the assumption of the theorem, we have, for 0 < α < 1,

f(y + αη(x, y, α)) < α(f(x) + δ) + (1 − α)(f(y) + δ) = αf(x) + (1 − α)f(y) + δ.

Since δ > 0 can be arbitrarily small, it follows that

f(y + αη(x, y, α)) ≤ αf(x) + (1 − α)f(y), α ∈ (0, 1).

Hence f is semipreinvex on K. This completes the proof.

Theorem 3.2.2 Let K be a semi-connected set with respect to η(x, y, α). A function f : K −→ R is semipreinvex with respect to the same η(x, y, α) if and only if the set

F(f) = {(x, u) : x ∈ K, u ∈ R, f(x) < u}

is semi-connected with respect to η′ : F(f) × F(f) × [0, 1] −→ Rn+1, where

η′((y, v), (x, u), α) = (η(y, x, α), v − u), for all (x, u), (y, v) ∈ F(f).

Proof Necessity. Let (x, u) ∈ F(f) and (y, v) ∈ F(f), i.e., f(x) < u and f(y) < v. From the semipreinvexity of f, we have

f(x + αη(y, x, α)) ≤ (1 − α)f(x) + αf(y) < (1 − α)u + αv, α ∈ [0, 1].

It follows that

(x + αη(y, x, α), (1 − α)u + αv) ∈ F(f), α ∈ (0, 1).
That is,

(x, u) + α(η(y, x, α), v − u) ∈ F(f), α ∈ (0, 1).

Hence F(f) is a semi-connected set with respect to η′((y, v), (x, u), α) = (η(y, x, α), v − u).

Sufficiency. Assume that F(f) is a semi-connected set with respect to η′((y, v), (x, u), α) = (η(y, x, α), v − u). Let x, y ∈ K and u, v ∈ R be such that f(x) < u, f(y) < v. Then, (x, u) ∈ F(f) and (y, v) ∈ F(f). From the semi-connectedness of F(f) with respect to η′, we have

(x, u) + αη′((y, v), (x, u), α) ∈ F(f), α ∈ (0, 1).

It follows that

(x + αη(y, x, α), (1 − α)u + αv) ∈ F(f), α ∈ (0, 1);

that is, f(x + αη(y, x, α)) < (1 − α)u + αv. Interchanging the roles of (x, u) and (y, v) gives f(y + αη(x, y, α)) < αu + (1 − α)v. Then, by Theorem 3.2.1, f is a semipreinvex function with respect to η(x, y, α) on K.
In Noor [80], the following statement is given: a function f : K −→ R is semipreinvex with respect to η(x, y, α) if and only if the epigraph of f,

G(f) = {(x, u) : x ∈ K, u ∈ R, f(x) ≤ u},

is semi-connected with respect to the same η. This statement contains an error: the set G(f) is semi-connected with respect to η′((y, v), (x, u), α) = (η(y, x, α), v − u), but not with respect to η. We give a correction of this statement below.

Theorem 3.2.3 Let K be a semi-connected set with respect to η(x, y, α). A function f : K −→ R is semipreinvex with respect to the same η(x, y, α) if and only if the set

G(f) = {(x, u) : x ∈ K, u ∈ R, f(x) ≤ u}

is a semi-connected set with respect to η′ : G(f) × G(f) × [0, 1] −→ Rn+1, where

η′((y, v), (x, u), α) = (η(y, x, α), v − u), for all (x, u), (y, v) ∈ G(f).
Proof Necessity. Let (x, u) ∈ G(f) and (y, v) ∈ G(f), i.e., f(x) ≤ u and f(y) ≤ v. From the semipreinvexity of f, we have

f(x + αη(y, x, α)) ≤ (1 − α)f(x) + αf(y) ≤ (1 − α)u + αv, α ∈ (0, 1).

It follows that

(x + αη(y, x, α), (1 − α)u + αv) ∈ G(f), α ∈ (0, 1).

That is,

(x, u) + α(η(y, x, α), v − u) ∈ G(f), α ∈ (0, 1).

Hence, G(f) is a semi-connected set with respect to η′((y, v), (x, u), α) = (η(y, x, α), v − u).

Sufficiency. Assume that G(f) is a semi-connected set with respect to η′((y, v), (x, u), α) = (η(y, x, α), v − u). Let x, y ∈ K and u, v ∈ R be such that f(x) ≤ u, f(y) ≤ v. Then, (x, u) ∈ G(f) and (y, v) ∈ G(f). From the semi-connectedness of G(f) with respect to η′, we have

(x, u) + αη′((y, v), (x, u), α) ∈ G(f), α ∈ (0, 1).

It follows that

(x + αη(y, x, α), (1 − α)u + αv) ∈ G(f), α ∈ (0, 1).

That is, f(x + αη(y, x, α)) ≤ (1 − α)u + αv; taking u = f(x) and v = f(y), f is a semipreinvex function with respect to η(x, y, α) on K.

Theorem 3.2.4 Let K ⊂ Rn+1 and

f(x) = inf{u : u ∈ R, (x, u) ∈ K}, ∀x ∈ Rn.

If K is a semi-connected set with respect to η′ : K × K × [0, 1] −→ Rn+1 and η : Rn × Rn × [0, 1] −→ Rn satisfying

η′((y, v), (x, u), α) = (η(y, x, α), v − u), for all (x, u), (y, v) ∈ K,

then f : Rn −→ R is a semipreinvex function with respect to η on Rn.
Proof It suffices to show that f satisfies the semipreinvexity inequality with respect to η(y, x, α). Let x, y ∈ Rn. Since K is a semi-connected set with respect to η′((y, v), (x, u), α), we have, for any (x, u), (y, v) ∈ K,

(x, u) + αη′((y, v), (x, u), α) ∈ K, ∀α ∈ [0, 1].

It follows from η′((y, v), (x, u), α) = (η(y, x, α), v − u) that

(x, u) + αη′((y, v), (x, u), α) = (x + αη(y, x, α), (1 − α)u + αv) ∈ K, ∀α ∈ [0, 1].

By the definition of f, taking the infimum over u and v, we obtain

f(x + αη(y, x, α)) ≤ αf(y) + (1 − α)f(x), ∀α ∈ [0, 1].

Hence, f is a semipreinvex function with respect to η on Rn.
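The semipreinvexity inequality itself is straightforward to test on samples. The following sketch is an added illustration using hypothetical data: f(x) = x² with the α-independent choice η(x, y, α) = x − y, i.e. the preinvex special case of the definition.

```python
import random

# Check the semipreinvexity inequality and the limit condition
# lim_{alpha -> 0} alpha * eta(x, y, alpha) = 0 for a trivial eta.
f = lambda x: x * x
eta = lambda x, y, a: x - y

random.seed(3)
for _ in range(1000):
    x, y, a = random.uniform(-3, 3), random.uniform(-3, 3), random.random()
    assert f(y + a * eta(x, y, a)) <= a * f(x) + (1 - a) * f(y) + 1e-12
# the limit condition holds trivially here
assert abs(1e-9 * eta(1.0, -1.0, 1e-9)) < 1e-8
print("semipreinvexity definition verified on samples")
```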
Theorem 3.2.5 Let I be an index set. If (Si)i∈I is a family of semi-connected subsets of Rn+1 with respect to the same function η′ : Rn+1 × Rn+1 × [0, 1] −→ Rn+1, then their intersection ∩i∈I Si is a semi-connected set with respect to the same function η′.

Proof Let (x, u), (y, v) ∈ ∩i∈I Si. Then, for each i ∈ I, (x, u), (y, v) ∈ Si. Since each Si is semi-connected with respect to η′, it follows that

(y, v) + αη′((x, u), (y, v), α) ∈ Si, ∀α ∈ [0, 1].

Thus

(y, v) + αη′((x, u), (y, v), α) ∈ ∩i∈I Si, ∀α ∈ [0, 1].
Hence, the result follows.

Theorem 3.2.6 Let K ⊆ Rn be a semi-connected set with respect to η : Rn × Rn × [0, 1] −→ Rn, and consider a family of real-valued functions (fi)i∈I which are semipreinvex with respect to the same η and bounded from above on K. Then, the function f(x) = supi∈I fi(x) is a semipreinvex function with respect to the same η on K.

Proof Since each fi is a semipreinvex function with respect to the same η on K, it follows from Theorem 3.2.3 that its epigraph

G(fi) = {(x, u) | x ∈ K, u ∈ R, fi(x) ≤ u}
is a semi-connected set in Rn × R with respect to η′((y, v), (x, u), α) = (η(y, x, α), v − u). Therefore, by Theorem 3.2.5, their intersection

∩i∈I G(fi) = {(x, u) | x ∈ K, u ∈ R, fi(x) ≤ u for all i ∈ I} = {(x, u) | x ∈ K, u ∈ R, f(x) ≤ u},

which is the epigraph of f, is also a semi-connected set in Rn × R with respect to the same η′. Hence, by Theorem 3.2.3, f is a semipreinvex function with respect to η on K.
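Theorem 3.2.6 can be observed numerically for a finite family. This sketch is an added illustration with hypothetical data: three convex functions (semipreinvex for η(x, y, α) = x − y) whose pointwise maximum is tested against the semipreinvexity inequality.

```python
import random

# Pointwise supremum (here: max of a finite family) of semipreinvex
# functions, checked against the semipreinvexity inequality.
fs = [lambda x: x * x, lambda x: abs(x - 1), lambda x: 2 * x]
f = lambda x: max(g(x) for g in fs)
eta = lambda x, y, a: x - y

random.seed(4)
for _ in range(1000):
    x, y, a = random.uniform(-3, 3), random.uniform(-3, 3), random.random()
    assert f(y + a * eta(x, y, a)) <= a * f(x) + (1 - a) * f(y) + 1e-12
print("sup of the family is semipreinvex on the samples")
```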
3.3 Applications to Multiobjective Fractional Programming

The following notation for vector orderings in Rn will be used:

x > y if and only if xi > yi, i = 1, 2, · · · , n;
x ≧ y if and only if xi ≥ yi, i = 1, 2, · · · , n;
x ≥ y if and only if xi ≥ yi, i = 1, 2, · · · , n, but x ≠ y;
x ≱ y is the negation of x ≥ y;
x ≯ y is the negation of x > y.
Multiobjective fractional programming problems have been extensively studied in the literature. In this section, we obtain saddle point optimality criteria and Lagrangian type duality results for multiobjective fractional programming problems involving semipreinvex functions. We consider the following problem.

Primal Problem (FP):

Minimize f(x)/g(x) := (f1(x)/g1(x), · · · , fk(x)/gk(x))
subject to h(x) ≦ 0, x ∈ X.

Assume that fj(x) ≥ 0, gj(x) > 0, 1 ≤ j ≤ k, ∀x ∈ X.

Definition 3.3.1 x∗ is said to be an efficient solution of (FP) if it is a feasible solution of (FP) and there exists no other feasible solution x of (FP) such that

f(x)/g(x) ≤ f(x∗)/g(x∗).

Definition 3.3.2 x∗ is said to be a properly efficient solution of (FP) if it is an efficient solution of (FP) and there exists a scalar M > 0 such that, for each i,
(fi(x∗)/gi(x∗) − fi(x)/gi(x)) / (fj(x)/gj(x) − fj(x∗)/gj(x∗)) ≤ M

for some j such that fj(x)/gj(x) > fj(x∗)/gj(x∗), whenever x is a feasible solution of (FP) with fi(x)/gi(x) < fi(x∗)/gi(x∗).
Following Bector’s parametric approach reported in [8], we consider the following multi-objective optimization problem. Primal Problem (MPv ): Minimize (f1 (x) − v1 g1 (x), · · · , fk (x) − vk gk (x)) subject to h(x) 0, x ∈ X. Following Geoffrion’s idea reported in [29], we consider two scalar programs corresponding to (FP) and (MPv ∗ ). Primal Problem (F P )α ∗ : Minimize α ∗T (
f (x) ) g(x)
subject to : h(x) 0, x ∈ X. Primal Problem (MPv ∗ )α ∗ : Minimize α ∗T (f (x) − v ∗ g(x)) subject to : h(x) 0, x ∈ X. The following results are due to Geoffrion [29]. Lemma 3.3.1 If x ∗ is an optimal solution of (F P )α for some α ∗ ∈ Rk with strictly positive components, then x ∗ is a proper efficient solution of (FP). Lemma 3.3.2 If x ∗ is an optimal solution of (MPv ∗ )α for some α ∗ ∈ Rk with strictly positive components, then x ∗ is a proper efficient solution of (FP). The following result is due to Yang and Chen [102]. Lemma 3.3.3 Let hi (x)(i = 1, 2, · · · , k) be semipreinvex functions. Then exactly one of the following two systems is solvable: (1) there exists x ∈ C, h1 (x) < 0, · · · , hk (x) < 0; K k \{0}, λ ≥ 0(i = 1, · · · , k), (2) there exist λ ∈ R+ i i=1 λi hi (C) ⊂ R+ . The following Lemma shows that the converse of the result in Lemma 3.3.1 is valid under semipreinvexity.
Lemma 3.3.4 If x∗ is a properly efficient solution of (FP), and fi, −gi, i = 1, 2, · · · , k, and hj, j = 1, 2, · · · , m, are semipreinvex functions with respect to a same η on X, then x∗ is an optimal solution of (MPv∗)α∗, where vj∗ = fj(x∗)/gj(x∗), j = 1, 2, · · · , k, and α∗ ∈ α+ = {α ∈ Rk : α > 0, ∑_{i=1}^{k} αi = 1}.
Proof Let x ∗ be a proper efficient solution of (FP). Then, x ∗ is a proper efficient f (x ∗ ) solution of (MPv ∗ ) , where vj∗ = gjj (x ∗ ) , j = 1, 2, · · · , k. Since fi , −gi , i = 1, 2, · · · , k, and hj , j = 1, 2, · · · , m, are semipreinvex functions with respect to the same η, it follows that fi − vi∗ gi is a semipreinvex function with respect to the same η for each i = 1, 2, · · · , k. From Lemma 3.3.3, we see that x ∗ is an optimal solution of (MPv ∗ )α ∗ where α ∗ ∈ α + .
Now we define the vector saddle point Lagrangian for (FP) as follows:

φ(x, y) = (f(x) + yT h(x)e)/g(x) := ((f1(x) + yT h(x))/g1(x), · · · , (fk(x) + yT h(x))/gk(x)),

where e = (1, 1, . . . , 1) ∈ Rk. The vector saddle point problem for (FP) is the problem of finding x∗ ∈ X and y∗ ∈ Rm, y∗ ≧ 0, such that

(f(x∗) + yT h(x∗)e)/g(x∗) ≱ (f(x∗) + y∗T h(x∗)e)/g(x∗)   (3.1)

and

(f(x∗) + y∗T h(x∗)e)/g(x∗) ≱ (f(x) + y∗T h(x)e)/g(x)   (3.2)
for all x ∈ X, y ∈ Rm, y ≧ 0.

Theorem 3.3.1 Suppose that (x∗, y∗) is a solution of the vector saddle point problem and that f, −g and h are semipreinvex with respect to a same η. Then, x∗ is a properly efficient solution of (FP).

Proof Clearly, (3.1) implies that h(x∗) ≦ 0, for otherwise (3.1) could be violated by making an appropriate component of y infinitely large. Now taking y = 0 in (3.1) yields y∗T h(x∗) ≥ 0. Noting that y∗ ≧ 0 and h(x∗) ≦ 0 imply y∗T h(x∗) ≤ 0, we have y∗T h(x∗) = 0. Hence, x∗ is a feasible solution of (FP).

(3.2) is equivalent to the following statement: the system

(f1(x) + y∗T h(x) − [(f1(x∗) + y∗T h(x∗))/g1(x∗)]g1(x), · · · , fk(x) + y∗T h(x) − [(fk(x∗) + y∗T h(x∗))/gk(x∗)]gk(x)) ≤ 0
has no solution on X. From the fact that f, −g and h are semipreinvex with respect to the same η, it follows from Lemma 3.3.3 that we can find scalars αi∗ > 0, i = 1, 2, · · · , k, such that

∑_{i=1}^{k} αi∗ [fi(x∗)/gi(x∗) + y∗T h(x∗)/gi(x∗)] ≤ ∑_{i=1}^{k} αi∗ [fi(x)/gi(x) + y∗T h(x)/gi(x)], ∀x ∈ X.   (3.3)

From the equality y∗T h(x∗) = 0 and (3.3), it follows that

∑_{i=1}^{k} αi∗ fi(x∗)/gi(x∗) ≤ ∑_{i=1}^{k} αi∗ fi(x)/gi(x)

for any feasible solution x of (FP). Thus, x∗ is an optimal solution of (FP)α∗; it
follows from Lemma 3.3.1 that x∗ is a properly efficient solution of (FP).

The program (FP) is said to satisfy the generalized Slater constraint qualification if h is semipreinvex with respect to η and there exists x1 ∈ X such that h(x1) < 0.

Theorem 3.3.2 Let x∗ be a properly efficient solution of (FP). Suppose that the generalized Slater constraint qualification is satisfied, and that f, −g and h are semipreinvex with respect to a same η. Then, there exists y∗ ≧ 0 such that (x∗, y∗) is a solution of the vector saddle point problem.

Proof Since x∗ is a properly efficient solution of (FP), by Lemma 3.3.4, it is also an optimal solution of (MPv∗)α∗, where vj∗ = fj(x∗)/gj(x∗), j = 1, 2, · · · , k, and α∗ ∈ α+. Since α∗T(f − v∗g) is semipreinvex on X with respect to η, it can be shown that there exists y∗ ∈ Rm, y∗ ≧ 0, such that y∗T h(x∗) = 0 and

L(x∗, y) ≤ L(x∗, y∗) ≤ L(x, y∗)   (3.4)

for all x ∈ X, y ∈ Rm, y ≧ 0, where L(x, y) = α∗T(f(x) − v∗g(x) + yT h(x)e).

If (3.1) is not true, then there exist ȳ ∈ Rm, ȳ ≧ 0, and some i ∈ {1, 2, · · · , k} such that

fi(x∗)/gi(x∗) + ȳT h(x∗)/gi(x∗) > fi(x∗)/gi(x∗) + y∗T h(x∗)/gi(x∗)
(3.5)

and, for all j ≠ i,

fj(x∗)/gj(x∗) + ȳT h(x∗)/gj(x∗) ≥ fj(x∗)/gj(x∗) + y∗T h(x∗)/gj(x∗).   (3.6)
Multiply (3.5) by αi∗gi(x∗) and each inequality in (3.6) by the corresponding αj∗gj(x∗), and add them together. This yields L(x∗, ȳ) > L(x∗, y∗), a contradiction to the first inequality in (3.4) for y = ȳ.

Similarly, if (3.2) is not true, then there exist x̄ ∈ X and some i ∈ {1, 2, · · · , k} such that

fi(x∗)/gi(x∗) + y∗T h(x∗)/gi(x∗) > fi(x̄)/gi(x̄) + y∗T h(x̄)/gi(x̄)   (3.7)

and, for all j ≠ i,

fj(x̄)/gj(x̄) + y∗T h(x̄)/gj(x̄) ≤ fj(x∗)/gj(x∗) + y∗T h(x∗)/gj(x∗).   (3.8)

Multiply (3.7) by αi∗gi(x̄) and each inequality in (3.8) by the corresponding αj∗gj(x̄), and add them together. This again yields a contradiction to the second inequality of (3.4), in view of the fact that y∗T h(x∗) = 0 and vi∗ = fi(x∗)/gi(x∗), i = 1, 2, · · · , k. Thus, (3.1) and (3.2)
hold. Therefore, (x ∗ , y ∗ ) is a solution of the vector saddle point problem.
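The parametric idea behind (MPv) can be illustrated numerically in the single-objective case: if x∗ minimizes f/g, then with v∗ = f(x∗)/g(x∗) the point x∗ also minimizes f − v∗g (the classical Dinkelbach-type property underlying Bector's approach). This sketch is an added illustration; the data f(x) = x² + 1 and g(x) = x + 2 on a grid in [0, 3] are hypothetical.

```python
# Single-objective illustration of the parametric equivalence on a grid.
grid = [i / 1000 for i in range(3001)]       # grid over [0, 3]
f = lambda x: x * x + 1
g = lambda x: x + 2                          # g > 0 on [0, 3]

xstar = min(grid, key=lambda x: f(x) / g(x))
vstar = f(xstar) / g(xstar)

# x* minimizes the parametric objective f - v* g, with optimal value 0
best = min(f(x) - vstar * g(x) for x in grid)
assert abs((f(xstar) - vstar * g(xstar)) - best) < 1e-9
assert abs(f(xstar) - vstar * g(xstar)) < 1e-9
print("x* =", xstar, "v* =", vstar)
```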
Chapter 4
Prequasiinvex Functions
4.1 Introduction and Preliminaries In [81], the concept of prequasiinvex functions is introduced. Later, Mohan and Neogy [66] obtain some properties of generalized preinvex functions. In this chapter, we consider the class of generalized preinvex functions, which contain prequasiinvex, semistrictly prequasiinvex, and strictly prequasiinvex functions as special cases. Properties of prequasiinvex functions are obtained under three different conditions. They are lower semicontinuity, upper semicontinuity, and semistrict prequasiinvexity. Furthermore, properties of semistrictly prequasiinvex functions are also obtained under the condition of prequasiinvexity as well as under the condition of lower semicontinuity. A similar result is established for strictly prequasiinvex functions. It is worth noting that these properties reveal various interesting relationships among prequasiinvex, semistrictly prequasiinvex, and strictly prequasiinvex functions. They are very useful in the study of optimization problems. The following class of prequasiinvex functions is introduced by Pini (see [81]). Definition 4.1.1 (see [81]) Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn . Let f : K −→ R. We say that f is prequasiinvex if, ∀ x, y ∈ K, ∀λ ∈ [0, 1], f (y + λη(x, y)) ≤ max{f (x), f (y)}.
(4.1)
A subclass of the above type of prequasiinvex functions can be described as follows. Definition 4.1.2 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn . Let f : K −→ R. We say that f is strictly prequasiinvex if ∀ x, y ∈ K with x = y; it holds that f (y + λη(x, y)) < max{f (x), f (y)},
∀λ ∈ (0, 1).
© Springer Nature Singapore Pte Ltd. 2018 X. Yang, Generalized Preinvexity and Second Order Duality in Multiobjective Programming, Springer Optimization and Its Applications 142, https://doi.org/10.1007/978-981-13-1981-5_4
53
54
4 Prequasiinvex Functions
Definition 4.1.3 Let K ⊆ Rn be an invex set with respect to η : Rn × Rn −→ Rn . Let f : K −→ R. We say that f is semistrictly prequasiinvex if ∀ x, y ∈ K such that f (x) = f (y); it holds that f (y + λη(x, y)) < max{f (x), f (y)},
∀λ ∈ (0, 1).
It is obvious that strict prequasiinvexity implies semistrict prequasiinvexity; if η(x, x) = 0, ∀x ∈ K, strict prequasiinvexity implies prequasiinvexity. However, prequasiinvexity does not imply semistrict prequasiinvexity, and semistrict prequasiinvexity does not imply prequasiinvexity. Example 4.1.1 This example illustrates that a prequasiinvex function satisfying Condition C is not necessarily a quasiconvex function. f (x) =
−|x|
if |x| ≤ 1;
−1
if |x| ≥ 1,
⎧ x−y ⎪ ⎪ ⎪ ⎪ ⎨x − y η(x, y) = ⎪1 − y ⎪ ⎪ ⎪ ⎩ −1 − y
if x ≥ 0, y ≥ 0; if x ≤ 0, y ≤ 0; if x < 0, y > 0; if x > 0, y < 0.
It is clear that f is prequasiinvex with respect to η, but f is not quasiconvex. Example 4.1.2 This example illustrates that a prequasiinvex function is not necessarily a semistrict prequasiinvex function. Let f (x) =
−x
if x > 0;
0
if x ≤ 0,
⎧ x−y ⎪ ⎪ ⎪ ⎪ ⎨x − y η(x, y) = ⎪ y−x ⎪ ⎪ ⎪ ⎩ y−x
if x ≥ 0, y ≥ 0; if x ≤ 0, y ≤ 0; if x < 0, y > 0; if x > 0, y < 0.
Then, f is a prequasiinvex function with respect to η on R. However, by letting y = −1, x = 1, λ = 12 , we have f (y) = f (−1) = 0 > −1 = f (1) = f (x) and 1 f (y+λη(x, y)) = f ((−1)+ η(1, −1)) = f (−2) = 0 = max{f (1), f (−1)} = 0. 2 Thus, f is not a semistrict prequasiinvex function with respect to η on R.
4.1 Introduction and Preliminaries
55
Example 4.1.3 This example illustrates that a semistrcitly prequasiinvex function is not necessarily a prequasiinvex function. Let f (x) = ⎧ ⎪ x−y ⎪ ⎪ ⎪ ⎪ ⎪ x−y ⎪ ⎪ ⎪ ⎪ ⎪ x−y ⎪ ⎪ ⎪ ⎪ ⎨x − y η(x, y) = ⎪ y−x ⎪ ⎪ ⎪ ⎪ ⎪ y−x ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ y−x ⎪ ⎪ ⎪ ⎩ y−x
−|x|
if |x| ≤ 1;
−1
if |x| ≥ 1,
if x ≥ 0, y ≥ 0; if x ≤ 0, y ≤ 0; if y < −1, x > 1; if x < −1, y > 1; if − 1 ≤ x ≤ 0, y ≥ 0; if − 1 ≤ y ≤ 0, x ≥ 0; if 0 ≤ x ≤ 1, y ≤ 0; if 0 ≤ y ≤ 1, x ≤ 0.
Then, f is a semistrictly prequasiinvex function with respect to η on R. However, by letting x = 2, y = −2, λ = 12 , we have 1 f (y + λη(x, y)) = f (−2 + η(2, −2)) = f (0) = 0 > −1 = f (2) 2 = f (−2) = max{f (x), f (y)}. That is, f is not a prequasiinvex function with respect to the same η. This example also illustrates that a semistrcitly prequasiinvex function is not necessarily a semistrictly quasiconvex function. Example 4.1.3 shows that strict preinvexity implies semistrict prequasiinvexity, but the converse is not necessarily true. Example 4.1.4 This example illustrates that a strictly prequasiinvex function is not necessarily a strictly quasiconvex function. Consider f (x) = −|x|, ⎧ x−y ⎪ ⎪ ⎪ ⎪ ⎨x − y η(x, y) = ⎪ y−x ⎪ ⎪ ⎪ ⎩ y−x
if x ≥ 0, y ≥ 0; if x ≤ 0, y ≤ 0; if x < 0, y > 0; if x > 0, y < 0.
We know that preinvexity implies prequasiinvexity, but the converse is not necessarily true. Example 4.1.2 is an illustrative counterexample. Here, we present another counterexample below.
56
4 Prequasiinvex Functions
Example 4.1.5 (see [118]) This example illustrates that a prequasiinvex function is not necessarily a preinvex function. Let f (x) =
1 2 2 x + 2x −x 2 + 2x
⎧ x−y ⎪ ⎪ ⎪ ⎪ ⎨x − y η(x, y) = ⎪ 1−y ⎪ ⎪ ⎪ ⎩ −2 − y
if − 4 ≤ x ≤ 0; if 0 ≤ x ≤ 2.
if 2 ≥ x ≥ 0, 2 ≥ y ≥ 0; if − 4 ≤ x ≤ 0, −4 ≤ y ≤ 0; if − 4 ≤ x < 0, 2 ≥ y > 0; if 2 ≥ x > 0, −4 ≤ y < 0.
Then, f is a prequasiinvex function with respect to η on [−4, 2]. However, by letting x = −1, y = 1, λ = 12 , we have f (y + λη(x, y)) = f (y + λ(1 − y)) = f (1) = −1. However, λf (x) + (1 − λ)f (y) = 1/2f (−1) + 1/2f (1) = (−3/4) + (−1/2) = −5/4. Thus, f (y + λη(x, y)) > λf (x) + (1 − λ)f (y). That is, f is not a preinvex function with respect to η on [−4, 2]. Definitions 4.1.1, 4.1.2, and 4.1.3 with η(x, y) = x − y are reduced to respective versions with quasiconvex function, strictly quasiconvex function and semistrictly quasiconvex functions.
4.2 Properties of Prequasiinvex Functions In Sects. 4.2, 4.3, 4.4, and 4.5, we assume that (i) K ⊆ Rn is an invex set with respect to η : Rn × Rn −→ Rn with η satisfying Condition C, and (ii) f is a realvalued function on K. In the following, we present some properties of prequasiinvex functions. Proposition 4.2.1 If f is a differentiable function on K, then, the following conditions are equivalent: (i) S = {x|f (x) ≤ u} is an invex set with respect to η for every real number u; (ii) S = {x|f (x) < u} is an invex set with respect to η for every real number u; (iii) f (x) ≤ f (y) implies f (y + αη(x, y)) ≤ f (y), for any α ∈ [0, 1];
4.2 Properties of Prequasiinvex Functions
(iv) f (x) < f (y) (v) f (x) ≤ f (y) (vi) f (x) < f (y)
57
implies f (y + αη(x, y)) ≤ f (y), for any α ∈ [0, 1]; implies η(x, y)T ∇f (y) ≤ 0; and implies η(x, y)T ∇f (y) ≤ 0.
Proof To establish the equivalency of these six conditions, it suffices to show the following: (i) is equivalent to (ii), (iii) is equivalent to (iv), (v) is equivalent to (vi), (i) is equivalent to (iii), and (iii) is equivalent to (vi). It is obvious that (i) is equivalent to (ii). Clearly, (iv) follows immediately from (iii). To show the converse, we only need to consider f (x) = f (y). Suppose the result is false. Then, there exists α ∈ (0, 1) such that f (y + αη(x, y)) > f (y). It follows from the continuity of f that there exists λ ∈ (0, 1) such that λ < α and f (x) = f (y) < f (y + λη(x, y)) < f (y + αη(x, y)), which means, by (iv), that f (y + λη(x, y) + βη(x, y + λη(x, y))) ≤ f (y + λη(x, y))∀β ∈ [0, 1]. From Condition C, we have y + αη(x, y) = y + λη(x, y) + βη(x, y + λη(x, y)). Thus there exists a β such that f (y + αη(x, y)) = f (y + λη(x, y) + βη(x, y + λη(x, y))) ≤ f (y + λη(x, y)). This is a contradiction, so (iii) implies (iv). Clearly (vi) follows from (v). To show the converse, we again only need to consider f (x) = f (y). Suppose η(x, y)T ∇f (y) > 0, then there exists a λ0 ∈ (0, 1) such that z0 = y + λ0 η(x, y) and f (x) = f (y) < f (z0 ).
58
4 Prequasiinvex Functions
Thus, from (vi), we have η(y, z0 )T ∇f (z0 ) ≤ 0 and η(x, z0 )T ∇f (z0 ) ≤ 0. From Condition C, we obtain η(x, y)T ∇f (z0 ) = 0. Let
= {z|f (z) ≤ f (y), z = y + αη(x, y), 0 ≤ α ≤ 1}. Then, there exists a λ1 ∈ (0, 1), such that z1 = y + λ1 η(x, y) and ||z0 , z1 || = minz∈ ||z, z0 ||. By the mean valued theorem, we have f (z1 ) − f (z0 ) = f (y + λ1 η(x, y)) − f (y + λ0 η(x, y)) = (λ1 − λ0 )η(x, y)T ∇f (y + αη(x, y)), where α is between λ0 and λ1 . It follows from the choice of z1 that f (y) < f (y + αη(x, y)). Hence, from (vi), we have η(y, y + αη(x, y))T ∇f (y + αη(x, y)) ≤ 0 and η(x, y + αη(x, y))T ∇f (y + αη(x, y)) ≤ 0. From Condition C, we obtain η(x, y)T ∇f (y + αη(x, y)) = 0. We get (λ1 − λ0 )η(x, y)T ∇f (y + αη(x, y)) = 0, which means that f (z1 ) = f (z0 ). By f (y) ≥ f (z1 ), this contradicts f (y) < f (z0 ). The equivalence between (i) and (iii) is easily verified.
4.2 Properties of Prequasiinvex Functions
59
From (iii), we derive (vi) as follows. Let f(x) ≤ f(y). Then, by (iii), we have f(y + αη(x, y)) ≤ f(y), ∀α ∈ (0, 1). Hence, by the mean value theorem, for each such α there exists a λ ∈ (0, 1) such that
αη(x, y)ᵀ∇f(y + λαη(x, y)) = f(y + αη(x, y)) − f(y) ≤ 0.
Dividing by α and letting α tend to zero, it follows that η(x, y)ᵀ∇f(y) ≤ 0. Conversely, (iii) follows from (vi) by Theorem 2.2 in [66].
Next, we give an important lemma before presenting properties of prequasiinvex functions. Lemma 4.2.1 If f : K −→ R satisfies f (y + η(x, y)) ≤ f (x), ∀x, y ∈ K, and there exists an α ∈ (0, 1) such that, for all x, y ∈ K, f (y + αη(x, y)) ≤ max{f (x), f (y)},
(4.2)
then the set defined by
A = {λ ∈ [0, 1] | f(y + λη(x, y)) ≤ max{f(x), f(y)}, ∀x, y ∈ K}
is dense in the interval [0, 1].
Proof Suppose that A is not dense in [0, 1]. Then, there exist a λ0 ∈ (0, 1) and a neighborhood N(λ0) of λ0 such that
N(λ0) ∩ A = ∅. (4.3)
From f(y + η(x, y)) ≤ f(x), for any x, y ∈ K, and (4.2), we have
{λ ∈ A | λ ≥ λ0} ≠ ∅
and
{λ ∈ A | λ ≤ λ0} ≠ ∅.
Define λ1 = inf{λ ∈ A|λ ≥ λ0 },
(4.4)
λ2 = sup{λ ∈ A|λ ≤ λ0 }.
(4.5)
Then, by (4.3), we have 0 ≤ λ2 < λ1 ≤ 1. Since α, 1 − α ∈ (0, 1), we can choose u1 , u2 ∈ A satisfying u1 ≥ λ1 , u2 ≤ λ2 , such that max{α, 1 − α}(u1 − u2 ) < λ1 − λ2 .
(4.6)
Next, let us consider λ = αu1 + (1 − α)u2. From Condition C, we have, for all x, y ∈ K,
y + λη(x, y) = y + (u2 + α(u1 − u2))η(x, y)
= y + u2η(x, y) + α(−(u1 − u2)/u1)η(y, y + u1η(x, y))
= y + u2η(x, y) + αη(y + u1η(x, y), y + u1η(x, y) + ((u1 − u2)/u1)η(y, y + u1η(x, y)))
= y + u2η(x, y) + αη(y + u1η(x, y), y + u1η(x, y) − (u1 − u2)η(x, y))
= y + u2η(x, y) + αη(y + u1η(x, y), y + u2η(x, y)).
From (4.2) and the fact that u1 , u2 ∈ A, we obtain f (y + λη(x, y)) = f (y + u2 η(x, y) + αη(y + u1 η(x, y), y + u2 η(x, y))) ≤ max{f (y + u1 η(x, y)), f (y + u2 η(x, y))} ≤ max{max{f (x), f (y)}, max{f (x), f (y)}} = max{f (x), f (y)}. That is, λ ∈ A. If λ ≥ λ0 , then it follows from (4.6) that λ − u2 = α(u1 − u2 ) < λ1 − λ2 , and therefore λ < λ1 . Because λ ≥ λ0 and λ ∈ A, this is a contradiction to (4.4). Similarly, λ ≤ λ0 yields a contradiction to (4.5). Consequently, A is dense in [0, 1].
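The chain of equalities in the proof above relies only on Condition C. As a numerical sanity check (our own, not part of the text), the identity y + λη(x, y) = y + u2η(x, y) + αη(y + u1η(x, y), y + u2η(x, y)) with λ = αu1 + (1 − α)u2 can be verified for the simplest map satisfying Condition C, η(x, y) = x − y:

```python
import numpy as np

def eta(x, y):
    # The trivial map satisfying Condition C: eta(x, y) = x - y.
    return x - y

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    alpha = rng.uniform(0.1, 0.9)
    u2 = rng.uniform(0.0, 0.5)
    u1 = rng.uniform(u2 + 0.01, 1.0)   # u1 > u2 > 0, as in the proof
    lam = alpha * u1 + (1 - alpha) * u2
    lhs = y + lam * eta(x, y)
    # Right-hand side of the chain of equalities in the proof of Lemma 4.2.1.
    rhs = y + u2 * eta(x, y) + alpha * eta(y + u1 * eta(x, y), y + u2 * eta(x, y))
    assert np.allclose(lhs, rhs)
print("Condition C identity verified")
```

With η(x, y) = x − y the identity reduces to elementary algebra; for a general η satisfying Condition C it is exactly the computation displayed in the proof.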
Next, two properties of prequasiinvex functions are obtained under upper and lower semicontinuity, respectively. Theorem 4.2.2 Let K ⊆ Rn be an invex set. If f : K −→ R is upper semicontinuous on K and satisfies f (y + η(x, y)) ≤ f (x), ∀x, y ∈ K, then, f is a prequasiinvex function on K if and only if f satisfies inequality (4.2). Proof The necessity is obvious from Definition 4.1.1. We prove the sufficiency. From Lemma 4.2.1, the set A = {λ ∈ [0, 1]|f (y + λη(x, y)) ≤ max{f (x), f (y)}, ∀x, y ∈ K}
is dense in the interval [0, 1]. Thus, for any λ ∈ (0, 1), there exists a sequence {λn} with λn ∈ A and 0 < λn < λ such that λn −→ λ (n −→ ∞). For any x, y ∈ K, define
yn = y + ((λ − λn)/(1 − λn))η(x, y), ∀n,
z = y + λη(x, y).
Then, yn −→ y (n −→ ∞). Note that 0 < λn < λ < 1 implies
0 < (λ − λn)/(1 − λn) < 1.
Thus, yn ∈ K. Furthermore, by Condition C, we have
yn + λnη(x, yn) = y + ((λ − λn)/(1 − λn))η(x, y) + λnη(x, y + ((λ − λn)/(1 − λn))η(x, y)) = y + λη(x, y) = z. (4.7)
By the upper semicontinuity of f on K, it follows that, for any ε > 0, there exists an N > 0 such that
f(yn) ≤ f(y) + ε, for n > N.
Therefore, from (4.7) and the fact that λn ∈ A, we have
f(z) = f(yn + λnη(x, yn)) ≤ max{f(x), f(yn)} ≤ max{f(x), f(y) + ε}, for n > N.
Since ε > 0 may be arbitrarily small, we have f(z) ≤ max{f(x), f(y)}. Thus, f is a prequasiinvex function on K.
Theorem 4.2.3 Assume that f : K −→ R is a lower semicontinuous function on K and satisfies f(y + η(x, y)) ≤ f(x), ∀x, y ∈ K. Then, f is a prequasiinvex function on K if and only if, for every x, y ∈ K, there exists an α ∈ (0, 1) such that f(y + αη(x, y)) ≤ max{f(x), f(y)}.
Proof The necessity is obvious from Definition 4.1.1. We prove the sufficiency. By contradiction, we assume that there exist distinct x, y ∈ K and ᾱ ∈ (0, 1) such that f(y + ᾱη(x, y)) > max{f(x), f(y)}. Let
z = y + ᾱη(x, y), xt = z + tη(x, z).
From Condition C, we have
xt = y + [ᾱ + t(1 − ᾱ)]η(x, y).
Let
B = {xt ∈ K | t ∈ (0, 1], f(xt) ≤ max{f(x), f(y)}}, u = inf{t ∈ (0, 1] | xt ∈ B}.
Since f(y + η(x, y)) ≤ f(x) for any x, y ∈ K, it is clear that x1 ∈ B, while x0 = z ∉ B. Moreover, xt ∉ B for 0 ≤ t < u, and, by the definition of u, there exist tn ≥ u, xtn ∈ B, n = 1, 2, · · ·, such that tn −→ u (n −→ ∞). Since f is a lower semicontinuous function, we have
f(xu) ≤ lim inf_{n→∞} f(xtn) ≤ max{f(x), f(y)}.
Hence, xu ∈ B. Similarly, let
yt = z + (1 − t)η(y, z).
Then, from Condition C, we have
yt = y + tᾱη(x, y).
Let
D = {yt ∈ K | t ∈ [0, 1), f(yt) = f(y + tᾱη(x, y)) ≤ max{f(x), f(y)}},
and
v = sup{t ∈ [0, 1) | yt ∈ D}.
It is clear that y0 = y ∈ D and y1 = y + ᾱη(x, y) = z ∉ D. Moreover, yt ∉ D for v < t ≤ 1, and, by the definition of v, there exist tn ≤ v, ytn ∈ D, n = 1, 2, · · ·, such that tn −→ v (n −→ ∞). Since f is a lower semicontinuous function, we have
f(yv) ≤ lim inf_{n→∞} f(ytn) ≤ max{f(x), f(y)}.
Hence, yv ∈ D.
Let α1 = vᾱ and α2 = ᾱ + u − uᾱ. Then,
0 ≤ α1 < ᾱ < α2 ≤ 1.
Now, from Condition C,
xu + λη(yv, xu) = y + α2η(x, y) + λη(y + α1η(x, y), y + α2η(x, y)) = y + [λα1 + (1 − λ)α2]η(x, y), ∀λ ∈ [0, 1].
Hence, from the definitions of α1 and α2, we have
f(xu + λη(yv, xu)) = f(y + [λα1 + (1 − λ)α2]η(x, y)) > max{f(x), f(y)} ≥ max{f(yv), f(xu)}, ∀λ ∈ (0, 1),
which contradicts the assumption of the theorem.
The next result shows that the upper semicontinuity assumed in Theorem 4.2.2 can be replaced by semistrict prequasiinvexity; in this case, K is not required to be open.
Theorem 4.2.4 Let f be a semistrictly prequasiinvex function. Then, f is a prequasiinvex function on K if and only if f satisfies inequality (4.2).
Proof The necessity is obvious from Definition 4.1.3. We prove the sufficiency. Suppose that there exist x, y ∈ K and λ ∈ (0, 1) such that f(y + λη(x, y)) > max{f(x), f(y)}. Without loss of generality, assume that f(x) ≥ f(y) and let z = y + λη(x, y). Then,
f(z) > f(x). (4.8)
If f(x) > f(y), it follows from the semistrict prequasiinvexity of f that f(z) < f(x), contradicting (4.8). If f(x) = f(y), then (4.8) implies that f(z) > f(x) = f(y).
(i) Consider 0 < λ < α < 1. Let
z1 = y + (λ/α)η(x, y).
Then, from Condition C, we have
y + αη(z1, y) = y + αη(y + (λ/α)η(x, y), y)
= y + αη(y + (λ/α)η(x, y), y + (λ/α)η(x, y) − (λ/α)η(x, y))
= y + αη(y + (λ/α)η(x, y), y + (λ/α)η(x, y) + η(y, y + (λ/α)η(x, y)))
= y − αη(y, y + (λ/α)η(x, y))
= y + λη(x, y) = z. (4.9)
According to (4.2) and (4.9), we have f(z) = f(y + αη(z1, y)) ≤ max{f(z1), f(y)}. Since f(z) > f(y), it follows that
f(z) ≤ f(z1). (4.10)
Let
b = λ(1 − α)/(α(1 − λ)).
Because 0 < λ < α < 1, it is easy to show that 0 < b < 1. Thus, from Condition C,
z + bη(x, z) = y + λη(x, y) + bη(x, y + λη(x, y))
= y + [λ + b(1 − λ)]η(x, y)
= y + [λ + λ(1 − α)/α]η(x, y)
= y + (λ/α)η(x, y) = z1.
Since f is a semistrictly prequasiinvex function, it follows from inequality (4.8) and the equality above that f(z1) < max{f(x), f(z)} = f(z), contradicting (4.10).
(ii) Consider 0 < α < λ < 1. This case also yields a contradiction, by exchanging the roles of α and 1 − α and the roles of λ and λ − α in part (i).
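The two algebraic facts used in part (i), namely that b = λ(1 − α)/(α(1 − λ)) lies in (0, 1) and that λ + b(1 − λ) = λ/α, can be checked exactly over the rationals (a small verification sketch of our own, not from the text):

```python
from fractions import Fraction

def b_of(lam, alpha):
    # b as defined in the proof of Theorem 4.2.4.
    return lam * (1 - alpha) / (alpha * (1 - lam))

# Exhaustive check over a rational grid with 0 < lam < alpha < 1.
for i in range(1, 20):
    for j in range(i + 1, 20):
        lam, alpha = Fraction(i, 20), Fraction(j, 20)
        b = b_of(lam, alpha)
        assert 0 < b < 1                           # b lies in (0, 1)
        assert lam + b * (1 - lam) == lam / alpha  # so z + b*eta(x, z) = z1
print("identity verified")
```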
4.3 Properties of Semistrictly Prequasiinvex Functions
In this section, we will give properties of semistrictly prequasiinvex functions under prequasiinvexity and lower semicontinuity, respectively.
Theorem 4.3.1 Let f be a prequasiinvex function and let η satisfy Condition C. Then, f is a semistrictly prequasiinvex function on K if and only if there exists an α ∈ (0, 1) such that, for every x, y ∈ K,
f(x) ≠ f(y) ⇒ f(y + αη(x, y)) < max{f(x), f(y)}. (4.11)
Proof The necessity is obvious from Definition 4.1.3. We prove the sufficiency. To the contrary, we assume that there exist x, y ∈ K and λ ∈ (0, 1) such that f(x) ≠ f(y) and
f(y + λη(x, y)) ≥ max{f(x), f(y)}. (4.12)
Without loss of generality, suppose that f(x) < f(y). Let z = y + λη(x, y). Then, (4.12) implies that
f(z) ≥ f(y) > f(x). (4.13)
Since f is prequasiinvex, we have f(z) ≤ max{f(x), f(y)} = f(y), which, together with (4.13), leads to
f(x) < f(z) = f(y). (4.14)
Let
z1 = z + αη(x, z),
z2 = z1 + αη(z, z1),
· · · · · ·
zk = zk−1 + αη(z, zk−1), ∀k ∈ N.
According to (4.11), f(z) > f(x) implies that
f(z1) = f(z + αη(x, z)) < max{f(x), f(z)} = f(z),
and, since f(z1) < f(z),
f(z2) = f(z1 + αη(z, z1)) < max{f(z), f(z1)} = f(z),
· · · · · ·
f(zk) = f(zk−1 + αη(z, zk−1)) < max{f(z), f(zk−1)} = f(z), ∀k ∈ N. (4.15)
From z = y + λη(x, y) and Condition C, we have
zk = z + α(1 − α)^{k−1}η(x, z) = y + [λ + α(1 − α)^{k−1}(1 − λ)]η(x, y).
Let k1 ∈ N be such that
α²(1 − α)^{k1−1} < λ/(1 − λ),
and let β1 = λ + α(1 − α)^{k1}(1 − λ), β2 = λ − α²(1 − α)^{k1−1}(1 − λ). Then,
0 ≤ β2 ≤ λ ≤ β1 ≤ 1,
and define x̄ = y + β1η(x, y), ȳ = y + β2η(x, y). Thus, from Condition C,
z + α(1 − α)^{k1}η(x, z) = y + β1η(x, y) = x̄.
According to (4.15) and the equality above, we obtain
f(x̄) = f(z + α(1 − α)^{k1}η(x, z)) = f(zk1+1) < f(z). (4.16)
There are two cases to be considered:
(i) If f(x̄) ≥ f(ȳ), then it follows from Condition C that
ȳ + αη(x̄, ȳ) = y + [αβ1 + (1 − α)β2]η(x, y) = y + λη(x, y) = z.
Since f is prequasiinvex, this implies that f(z) ≤ max{f(x̄), f(ȳ)} = f(x̄), which contradicts inequality (4.16).
(ii) Suppose f(ȳ) > f(x̄). Since z = ȳ + αη(x̄, ȳ), (4.11) implies that
f(z) < max{f(x̄), f(ȳ)}. (4.17)
Again, since x̄ = y + β1η(x, y), ȳ = y + β2η(x, y), it follows from the prequasiinvexity of f that
f(x̄) ≤ max{f(x), f(y)}, (4.18)
f(ȳ) ≤ max{f(x), f(y)}. (4.19)
According to (4.17), (4.18) and (4.19), we obtain f(z) < max{f(x), f(y)}, which contradicts inequality (4.14). This completes the proof.
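The step "ȳ + αη(x̄, ȳ) = y + λη(x, y)" rests on the identity αβ1 + (1 − α)β2 = λ for the β1, β2 defined above. An exact check over rationals (our own sketch, not part of the text):

```python
from fractions import Fraction

def betas(lam, alpha, k1):
    # beta_1 and beta_2 as defined in the proof of Theorem 4.3.1.
    b1 = lam + alpha * (1 - alpha) ** k1 * (1 - lam)
    b2 = lam - alpha ** 2 * (1 - alpha) ** (k1 - 1) * (1 - lam)
    return b1, b2

for i in range(1, 10):
    for j in range(1, 10):
        for k1 in range(1, 6):
            lam, alpha = Fraction(i, 10), Fraction(j, 10)
            b1, b2 = betas(lam, alpha, k1)
            # The convex combination collapses back to lam, which is why
            # y-bar + alpha*eta(x-bar, y-bar) = y + lam*eta(x, y) under Condition C.
            assert alpha * b1 + (1 - alpha) * b2 == lam
print("ok")
```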
The following result shows that, under lower semicontinuity, an intermediate-point semistrict prequasiinvexity condition implies prequasiinvexity.
Theorem 4.3.2 Assume that f : K −→ R is a lower semicontinuous function on K and satisfies f(y + η(x, y)) ≤ f(x), ∀x, y ∈ K. Let η satisfy Condition C. If there exists an α ∈ (0, 1) such that, for every x, y ∈ K with f(x) ≠ f(y), it holds that
f(y + αη(x, y)) < max{f(x), f(y)},
(4.20)
then, f is a prequasiinvex function on K. Proof By Theorem 4.2.3, we only need to show that for each x, y ∈ K, there exists a λ ∈ (0, 1) such that f (y + λη(x, y)) ≤ max{f (x), f (y)}.
(4.21)
By contradiction, we assume that there exist x, y ∈ K such that f (y + λη(x, y)) > max{f (x), f (y)}, ∀λ ∈ (0, 1).
(4.22)
If f(x) ≠ f(y), it follows from (4.20) that f(y + αη(x, y)) < max{f(x), f(y)}, which contradicts (4.22). If f(x) = f(y), then (4.22) implies that f(y + λη(x, y)) > f(x) = f(y), ∀λ ∈ (0, 1).
(4.23)
By (4.23), we obtain f [y + λη(x, y) + αη(x, y + λη(x, y))] = f [y + (λ + α(1 − λ))η(x, y)] > f (y), ∀λ ∈ (0, 1),
(4.24)
and, from (4.20) and (4.23), we have f [y + λη(x, y) + αη(x, y + λη(x, y))] < max{f [y + λη(x, y)], f (x)} = f [y + λη(x, y)],
∀λ ∈ (0, 1). (4.25)
Again by (4.20), (4.24) and (4.25), we have f [y + (1 − α)(λ + α(1 − λ))η(x, y)] = f [y + λη(x, y) + αη(x, y + λη(x, y)) +αη(y, y + λη(x, y) + αη(x, y + λη(x, y)))]
< max{f(y), f(y + λη(x, y) + αη(x, y + λη(x, y)))}
= f(y + λη(x, y) + αη(x, y + λη(x, y)))
< f(y + λη(x, y)), ∀λ ∈ (0, 1).
Let λ = (1 − α)/(2 − α) ∈ (0, 1). Then, since (1 − α)(λ + α(1 − λ)) = λ for this choice, the inequality above implies that
f(y + ((1 − α)/(2 − α))η(x, y)) < f(y + ((1 − α)/(2 − α))η(x, y)),
which is a contradiction. The above theorem is a generalization of a theorem in [45].
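The contradiction hinges on λ = (1 − α)/(2 − α) being a fixed point of λ ↦ (1 − α)(λ + α(1 − λ)). This can be confirmed exactly (our own verification sketch):

```python
from fractions import Fraction

def contracted(lam, alpha):
    # Coefficient produced by one pass of the argument in Theorem 4.3.2:
    # y + lam*eta  ->  y + (1 - alpha)*(lam + alpha*(1 - lam))*eta.
    return (1 - alpha) * (lam + alpha * (1 - lam))

for j in range(1, 20):
    alpha = Fraction(j, 20)
    lam = (1 - alpha) / (2 - alpha)
    # lam is a fixed point, so the chain of strict inequalities closes on itself.
    assert contracted(lam, alpha) == lam
print("fixed point verified")
```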
Theorem 4.3.3 Let A be a nonempty convex set in Rn and let g : A −→ R be semistrictly quasiconvex and lower semicontinuous. Then, g is a quasiconvex function on A.
Remark 4.3.1 Along the lines of Example 4.1.3, we can show that if η does not satisfy Condition C, then a semistrictly prequasiinvex function is not necessarily prequasiinvex, even under the lower semicontinuity condition and the condition that f(y + η(x, y)) ≤ f(x), ∀x, y ∈ K.
By Theorems 4.3.1 and 4.3.2, we have the following corollary.
Corollary 4.3.1 Assume that f : K −→ R is a lower semicontinuous function on K and satisfies f(y + η(x, y)) ≤ f(x), ∀x, y ∈ K. Let η satisfy Condition C. Then, f is a semistrictly prequasiinvex function on K if and only if there exists an α ∈ (0, 1) such that, for every x, y ∈ K with f(x) ≠ f(y), it holds that f(y + αη(x, y)) < max{f(x), f(y)}.
4.4 Properties of Strictly Prequasiinvex Functions Theorem 4.4.1 Let η satisfy Condition C. f is a strictly prequasiinvex function on K if and only if the following two conditions hold: (i) f is a prequasiinvex function on K; and (ii) there exists an α ∈ (0, 1) such that for every pair of distinct points x, y ∈ K, f (y + αη(x, y)) < max{f (x), f (y)}.
(4.26)
Proof The necessity is obvious from Definitions 4.1.1 and 4.1.2. Let us prove the sufficiency. To the contrary, we assume that f is not a strictly prequasiinvex function on K. Then, there exist x, y ∈ K with x ≠ y and λ ∈ (0, 1) such that f(y + λη(x, y)) ≥ max{f(x), f(y)}.
Since f is prequasiinvex, we have f (y + λη(x, y)) ≤ max{f (x), f (y)}. Hence, f (y + λη(x, y)) = max{f (x), f (y)}.
(4.27)
Choose β1, β2 so as to satisfy 0 < β1 < λ < β2 < 1 and λ = αβ1 + (1 − α)β2. Let
x̄ = y + β1η(x, y), ȳ = y + β2η(x, y).
Then, by Condition C, it follows from an argument similar to that given for Theorem 4.3.1 that
ȳ + αη(x̄, ȳ) = y + λη(x, y). (4.28)
Again, since f is prequasiinvex,
f(x̄) ≤ max{f(x), f(y)}, (4.29)
f(ȳ) ≤ max{f(x), f(y)}. (4.30)
By (4.26), (4.28), (4.29) and (4.30), we have f (y + λη(x, y)) < max{f (x), f (y)}, which contradicts inequality (4.27).
An analogous result for a semistrictly prequasiinvex function can be obtained by using the requirement x ≠ y instead of f(x) ≠ f(y). The proof is, in fact, easier.
Theorem 4.4.2 Assume that f : K −→ R is a lower semicontinuous function on K and satisfies f(y + η(x, y)) ≤ f(x), ∀x, y ∈ K. Then, f is a semistrictly prequasiinvex function on K if and only if f satisfies inequality (4.26).
Clearly, strict prequasiinvexity implies semistrict prequasiinvexity by definition; the converse may not be true. However, we have the following result.
Theorem 4.4.3 Let η satisfy Condition C. f is a strictly prequasiinvex function on K if and only if f is a semistrictly prequasiinvex function and satisfies inequality (4.26).
Proof The necessity is obvious from Definitions 4.1.2 and 4.1.3. We prove the sufficiency. Since f is semistrictly prequasiinvex, it suffices to show that f(x) = f(y), x ≠ y, implies
f(y + λη(x, y)) < max{f(x), f(y)}, ∀λ ∈ (0, 1).
From (4.26), for each x, y ∈ K with x ≠ y, we have
f(y + αη(x, y)) < f(x) = f(y). (4.31)
Let x̄ = y + αη(x, y) and let λ ∈ (0, 1). If λ = α, the claim is exactly (4.31). If λ < α, then μ = (α − λ)/α ∈ (0, 1). From Condition C, we have
x̄ + μη(y, x̄) = y + λη(x, y).
Since f is semistrictly prequasiinvex on K and (4.31) holds, we have
f(y + λη(x, y)) = f(x̄ + μη(y, x̄)) < max{f(y), f(x̄)} = f(y).
If λ > α, then ν = (λ − α)/(1 − α) ∈ (0, 1). From Condition C, we have
x̄ + νη(x, x̄) = y + λη(x, y).
Since f is a semistrictly prequasiinvex function on K and (4.31) holds, we have
f(y + λη(x, y)) = f(x̄ + νη(x, x̄)) < max{f(x), f(x̄)} = f(x).
This completes the proof.
4.5 Applications of Prequasiinvex Type Functions
In [81], prequasiinvex functions are used in the study of single-objective optimization problems. In this section, prequasiinvex functions, semistrictly prequasiinvex functions and strictly prequasiinvex functions will be used in characterizing various solutions of optimization problems. Consider the following multiobjective programming problem
(MP): min f(x) = (f1(x), · · · , fm(x))ᵀ, s.t. x ∈ Λ,
where f : Λ −→ Rm is a vector-valued function, and Λ ⊂ Rn is an invex set with respect to η : Rn × Rn −→ Rn. Let
Rm+ = {λ ∈ Rm | λ = (λ1, · · · , λm), λi ≥ 0, i = 1, · · · , m},
Rm++ = {λ ∈ Rm | λ = (λ1, · · · , λm), λi > 0, i = 1, · · · , m}.
Definition 4.5.1 A point x̄ ∈ Λ is called a global efficient solution of (MP) (see [95]) if there does not exist any point y ∈ Λ such that f(y) ∈ f(x̄) − Rm+\{0}. A point x̄ ∈ Λ is called a local efficient solution of (MP) if there is a neighborhood N(x̄) of x̄ such that there does not exist any point y ∈ Λ ∩ N(x̄) such that f(y) ∈ f(x̄) − Rm+\{0}.
Definition 4.5.2 A point x̄ ∈ Λ is called a global weakly efficient solution of (MP) (see [95]) if there does not exist any point y ∈ Λ such that f(y) ∈ f(x̄) − Rm++. A point x̄ ∈ Λ is called a local weakly efficient solution of (MP) if there is a neighborhood N(x̄) of x̄ such that there does not exist any point y ∈ Λ ∩ N(x̄) such that f(y) ∈ f(x̄) − Rm++.
Theorem 4.5.1 Let fi(x), i = 1, · · · , m, be prequasiinvex and semistrictly prequasiinvex functions with respect to the same vector-valued function η. Then, any local efficient solution of (MP) is a global efficient solution of (MP).
Proof Assume on the contrary that there exists an x̄ ∈ Λ such that x̄ is a local efficient solution of (MP), but not a global efficient solution of (MP). Then, there exists a u ∈ Λ such that
fi(u) ≤ fi(x̄), 1 ≤ i ≤ m, (4.32)
and, for some j,
fj(u) < fj(x̄). (4.33)
From the prequasiinvexity of f1(x), · · · , fm(x), (4.32) implies
fi(x̄ + βη(u, x̄)) ≤ fi(x̄), 1 ≤ i ≤ m, for any β ∈ [0, 1], (4.34)
and, from the semistrict prequasiinvexity of fj(x), (4.33) implies
fj(x̄ + βη(u, x̄)) < fj(x̄), for any β ∈ (0, 1). (4.35)
(4.34) and (4.35) show that x̄ is not a local efficient solution of (MP), a contradiction.
Theorem 4.5.1 generalizes Theorem 2 in [96], where preinvexity is replaced by semistrict prequasiinvexity and prequasiinvexity.
Theorem 4.5.2 Let f1(x), · · · , fm(x) be prequasiinvex with respect to a vector-valued function η. Suppose that, for some k, fk(x) is a strictly prequasiinvex function with respect to the same vector-valued function η and that there exists a λ = (λ1, · · · , λm) ≥ 0 with λk > 0 such that x ∈ Λ is a local solution of
min λᵀf(x), s.t. x ∈ Λ.
Then, x is also a global efficient solution of (MP).
Proof Assume on the contrary that x is not a global efficient solution of (MP), i.e., there exists some x̂ ∈ Λ with f(x̂) ≠ f(x) such that
fj(x̂) ≤ fj(x), 1 ≤ j ≤ m.
Then, for any 0 < β < 1, we have, from the prequasiinvexity of fj,
fj(x + βη(x̂, x)) ≤ fj(x), 1 ≤ j ≤ m,
and, from the strict prequasiinvexity of fk,
fk(x + βη(x̂, x)) < fk(x).
Since λ = (λ1, · · · , λm) ≥ 0 with λk > 0, we obtain
λ1f1(x + βη(x̂, x)) + · · · + λmfm(x + βη(x̂, x)) < λ1f1(x) + · · · + λmfm(x), 0 < β < 1.
That is,
λᵀf(x + βη(x̂, x)) < λᵀf(x), 0 < β < 1.
This contradicts the fact that x is a local solution of min λᵀf(x), s.t. x ∈ Λ.
Theorem 4.5.3 Let f1(x), · · · , fm(x) be prequasiinvex with respect to a vector-valued function η. Suppose that, for some j, fj(x) is a strictly prequasiinvex function with respect to the same vector-valued function η : Rn × Rn −→ Rn. Then, any local efficient solution of (MP) is also a global efficient solution of (MP).
Similarly, we can prove the following two theorems.
Theorem 4.5.4 Let f1(x), · · · , fm(x) be semistrictly prequasiinvex with respect to a vector-valued function η. Then, any local weakly efficient solution of (MP) is a global weakly efficient solution of (MP).
Theorem 4.5.4 generalizes Theorem 1 in [96], where preinvexity is replaced by semistrict prequasiinvexity.
Theorem 4.5.5 Let f1(x), · · · , fm(x) be semistrictly prequasiinvex with respect to a vector-valued function η. Suppose that there exists a λ = (λ1, · · · , λm) ≥ 0 with λk > 0 for some k, 1 ≤ k ≤ m, such that x ∈ Λ is a local solution of min λᵀf(x), s.t. x ∈ Λ. Then, x is also a global weakly efficient solution of (MP).
If m = 1 in (MP), then the multiobjective programming problem (MP) reduces to the single-objective programming problem
(MP)1: min f(x), s.t. x ∈ Λ,
where f : Λ −→ R is a real-valued function, and Λ ⊂ Rn is an invex set with respect to η : Rn × Rn −→ Rn.
Theorem 4.5.6 Let f be strictly prequasiinvex with respect to a vector-valued function η : Rn × Rn −→ Rn. Then, the solution of (MP)1 is unique.
Proof Let x be a solution of (MP)1. Assume on the contrary that x is not the unique solution. Then, there exists a y ∈ Λ such that x ≠ y and f(y) = f(x). Since f is a strictly prequasiinvex function with respect to the vector-valued function η, we have
f(x + λη(y, x)) < f(x), ∀λ ∈ (0, 1),
which implies that x is not a solution of (MP)1, a contradiction.
Similarly, we can easily prove the following theorem.
Theorem 4.5.7 Let f be prequasiinvex with respect to a vector-valued function η. Then, the solution set of (MP)1 is an invex set with respect to η.
Proof Let α = inf{f(x) : x ∈ Λ} and S = {x ∈ Λ : f(x) = α}. For any x1, x2 ∈ S, by the prequasiinvexity of f on Λ with respect to η, we have
f(x1 + λη(x2, x1)) ≤ max{f(x1), f(x2)} = α, 0 ≤ λ ≤ 1,
which implies
x1 + λη(x2, x1) ∈ S, 0 ≤ λ ≤ 1.
Hence, the solution set S of (MP)1 is invex with respect to η.
Part II
Generalized Invariant Monotonicity
Chapter 5
Generalized Invexity and Generalized Invariant Monotonicity
5.1 Introduction
A concept closely related to the convexity of a real-valued function is the monotonicity of a vector-valued function. It is well known that the convexity of a differentiable real-valued function is equivalent to the monotonicity of the corresponding gradient map. It is worth noting that monotonicity has played a very important role in the study of the existence of solutions and of solution methods for variational inequality problems. As an important breakthrough, a generalization of this relation is given in [46] for various pseudo-/quasiconvexities and pseudo-/quasimonotonicities. Subsequently, there has been increasing interest in the study of monotonicity and generalized monotonicity and their relationships to convexity and generalized convexity (see [53, 54, 84, 89]). On the other hand, some relationships between generalized invexity and generalized invariant monotonicity are obtained in [83, 88, 119] under certain conditions. In particular, the existence of solutions to the variational-like inequality problem is proved under generalized invariant monotonicity in [88].
In this chapter, generalized invariant monotonicity and its relationships with generalized invexity are studied. Several types of generalized invariant monotonicity, which generalize (strict) monotonicity, (strict) pseudomonotonicity, and quasimonotonicity, are introduced. Relations among generalized invariant monotonicities and generalized invexities are established. Several examples are given to show that these generalized invariant monotonicities are proper generalizations of the corresponding generalized monotonicities. Moreover, some examples are also presented to illustrate the properly inclusive relations among the generalized invariant monotonicities.
© Springer Nature Singapore Pte Ltd. 2018 X. Yang, Generalized Preinvexity and Second Order Duality in Multiobjective Programming, Springer Optimization and Its Applications 142, https://doi.org/10.1007/978-981-13-1981-5_5
5.2 Invariant Monotone and Strictly Invariant Monotone Maps
Let Λ be a nonempty subset of Rn, η a vector-valued function from X × X into Rn (X ⊂ Rn), and F a vector-valued function from Λ into Rn.
Definition 5.2.1 F is said to be monotone on Λ if, for every pair of distinct points x, y ∈ Λ,
(y − x)ᵀ(F(y) − F(x)) ≥ 0.
Definition 5.2.2 F is said to be invariant monotone on Λ with respect to η if, for every pair of points x, y ∈ Λ,
η(x, y)ᵀF(y) + η(y, x)ᵀF(x) ≤ 0.
Remark 5.2.1 Every monotone map is an invariant monotone map with η(x, y) = x − y, but the converse is not necessarily true.
Example 5.2.1 Let F and η be the maps defined by
F(x) = (1 + cos x1, 1 + cos x2), x = (x1, x2) ∈ (0, π/2) × (0, π/2),
η(x, y) = ((sin x1 − sin y1)/cos y1, (sin x2 − sin y2)/cos y2), x, y ∈ (0, π/2) × (0, π/2).
Clearly, F is invariant monotone with respect to η on (0, π/2) × (0, π/2). Let x = (π/4, π/4), y = (π/6, π/6). Then
(y − x)ᵀ(F(y) − F(x)) = −(π/6)(√3/2 − √2/2) < 0.
Thus, F is not monotone.
Definition 5.2.3 Let the set Λ be invex with respect to η. The function f : Λ −→ R is said to be preinvex with respect to η on Λ if
f(y + λη(x, y)) ≤ λf(x) + (1 − λ)f(y), ∀x, y ∈ Λ, λ ∈ [0, 1]. (5.1)
The function f is said to be strictly preinvex with respect to η on Λ if (5.1) holds with a strict inequality for any pair of distinct points x and y and λ ∈ (0, 1).
The following theorem shows that the preinvexity of a function is equivalent to the invariant monotonicity of its gradient. This is a generalization of the equivalence between the convexity of a function and the monotonicity of its gradient obtained in [3].
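The claims in Example 5.2.1 can be double-checked numerically. Componentwise, η(x, y)ᵀF(y) + η(y, x)ᵀF(x) reduces to (sin xi − sin yi)(1/cos yi − 1/cos xi) ≤ 0 on (0, π/2), which the sampling below confirms (our own illustration, not part of the text):

```python
import numpy as np

def F(x):
    return 1.0 + np.cos(x)          # componentwise: (1 + cos x1, 1 + cos x2)

def eta(x, y):
    return (np.sin(x) - np.sin(y)) / np.cos(y)

rng = np.random.default_rng(1)
# Invariant monotonicity: eta(x,y)^T F(y) + eta(y,x)^T F(x) <= 0 on (0, pi/2)^2.
for _ in range(1000):
    x = rng.uniform(1e-3, np.pi / 2 - 1e-3, size=2)
    y = rng.uniform(1e-3, np.pi / 2 - 1e-3, size=2)
    assert eta(x, y) @ F(y) + eta(y, x) @ F(x) <= 1e-12

# ...but F fails ordinary monotonicity at the pair from the text.
x = np.array([np.pi / 4, np.pi / 4])
y = np.array([np.pi / 6, np.pi / 6])
val = (y - x) @ (F(y) - F(x))
assert np.isclose(val, -(np.pi / 6) * (np.sqrt(3) / 2 - np.sqrt(2) / 2))
assert val < 0
print("Example 5.2.1 verified")
```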
Lemma 5.2.1 Let f be differentiable on an open set containing Λ. If f is invex with respect to η, then ∇f is invariant monotone with respect to η.
Proof The proof follows readily from the definitions of invexity and invariant monotonicity.
Theorem 5.2.1 Let Λ be a nonempty invex set in Rn with respect to η, where η satisfies Condition C. Assume that f is differentiable on Λ. Then f is a preinvex function with respect to η on Λ if and only if ∇f is invariant monotone with respect to η on Λ and f satisfies f(y + η(x, y)) ≤ f(x), for any x, y ∈ Λ.
Proof Suppose that f is preinvex on Λ with respect to η. First, f(y + η(x, y)) ≤ f(x) is just inequality (5.1) with λ = 1. Next, since a preinvex function is invex, it follows from Lemma 5.2.1 that ∇f is invariant monotone with respect to η on Λ.
Conversely, suppose that ∇f is invariant monotone with respect to η on Λ and that f is not preinvex with respect to η on Λ. Then there exist x, y ∈ Λ such that
f(y + λ̄η(x, y)) > λ̄f(x) + (1 − λ̄)f(y), for some λ̄ ∈ (0, 1).
Since f(y + η(x, y)) ≤ f(x) for any x, y ∈ Λ, it follows that
f(y + λ̄η(x, y)) > λ̄f(y + η(x, y)) + (1 − λ̄)f(y), for some λ̄ ∈ (0, 1).
That is,
λ̄(f(y + λ̄η(x, y)) − f(y + η(x, y))) + (1 − λ̄)(f(y + λ̄η(x, y)) − f(y)) > 0.
By the mean-value theorem, we have
λ̄(λ̄ − 1)η(x, y)ᵀ∇f(y + λ1η(x, y)) + (1 − λ̄)λ̄η(x, y)ᵀ∇f(y + λ2η(x, y)) > 0, (5.2)
where 0 < λ2 < λ̄ < λ1 < 1. From (5.2), we obtain
−η(x, y)ᵀ∇f(y + λ1η(x, y)) + η(x, y)ᵀ∇f(y + λ2η(x, y)) > 0.
(5.3)
By Condition C, it follows from Proposition 1.2.1 that η(y + λ2 η(x, y), y + λ1 η(x, y)) = (λ2 − λ1 )η(x, y),
(5.4)
η(y + λ1 η(x, y), y + λ2 η(x, y)) = (λ1 − λ2 )η(x, y).
(5.5)
Multiplying (5.3) by (λ1 − λ2) and using (5.4) and (5.5), we have
η(y + λ2η(x, y), y + λ1η(x, y))ᵀ∇f(y + λ1η(x, y)) + η(y + λ1η(x, y), y + λ2η(x, y))ᵀ∇f(y + λ2η(x, y)) > 0,
which contradicts the invariant monotonicity of ∇f with respect to η.
Definition 5.2.4 F is said to be strictly monotone on Λ if, for every pair of distinct points x, y ∈ Λ,
(y − x)ᵀ(F(y) − F(x)) > 0.
Definition 5.2.5 F is said to be strictly invariant monotone with respect to η on Λ if, for every pair of distinct points x, y ∈ Λ,
η(x, y)ᵀF(y) + η(y, x)ᵀF(x) < 0.
Remark 5.2.2 Every strictly monotone map is a strictly invariant monotone map with η(x, y) = x − y, but the converse is not necessarily true.
Example 5.2.2 Define the maps F and η as
F(x) = (−1 − sin x1, −1 − sin x2), x ∈ (0, π/2) × (0, π/2),
η(x, y) = ((cos y1 − cos x1)/sin y1, (cos y2 − cos x2)/sin y2), x, y ∈ (0, π/2) × (0, π/2).
Clearly, F is strictly invariant monotone with respect to η. Let x = (π/6, π/6), y = (π/4, π/4). Then,
(y − x)ᵀ(F(y) − F(x)) = (π/6)(1 − √2)/2 < 0.
Thus, F is not strictly monotone.
Remark 5.2.3 Every strictly invariant monotone map is an invariant monotone map with respect to the same η, but the converse is not necessarily true.
Example 5.2.3 Define the maps F and η as
F(x) = (−cos x1, −cos x2), x ∈ (−π/2, π/2) × (−π/2, π/2),
η(x, y) = ((sin x1 − sin y1)/cos y1, (sin x2 − sin y2)/cos y2), x, y ∈ (−π/2, π/2) × (−π/2, π/2).
Clearly, F is invariant monotone with respect to η. However,
η(x, y)ᵀF(y) + η(y, x)ᵀF(x) = 0, x, y ∈ (−π/2, π/2) × (−π/2, π/2).
Thus, F is not strictly invariant monotone with respect to the same η.
Theorem 5.2.2 Let Λ be a nonempty invex set in Rn with respect to η, where η satisfies Condition C. Assume that f is differentiable on Λ. Then f is a strictly preinvex function with respect to η on Λ if and only if ∇f is strictly invariant monotone with respect to η on Λ and f satisfies f(y + η(x, y)) ≤ f(x), for any x, y ∈ Λ.
Proof The proof is similar to that of Theorem 5.2.1 and hence is omitted.
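Examples 5.2.2 and 5.2.3 can also be checked numerically; the sampling below is our own illustration. For Example 5.2.3 the invariant-monotonicity expression is identically zero, which is exactly why strictness fails:

```python
import numpy as np

def F2(x):   # Example 5.2.2
    return -1.0 - np.sin(x)

def eta2(x, y):
    return (np.cos(y) - np.cos(x)) / np.sin(y)

def F3(x):   # Example 5.2.3
    return -np.cos(x)

def eta3(x, y):
    return (np.sin(x) - np.sin(y)) / np.cos(y)

rng = np.random.default_rng(2)
for _ in range(1000):
    x = rng.uniform(1e-3, np.pi / 2 - 1e-3, size=2)
    y = rng.uniform(1e-3, np.pi / 2 - 1e-3, size=2)
    # Example 5.2.2: strict invariant monotonicity on (0, pi/2)^2.
    assert eta2(x, y) @ F2(y) + eta2(y, x) @ F2(x) < 0
    # Example 5.2.3: the expression is identically zero, so F3 is
    # invariant monotone but not *strictly* invariant monotone.
    assert np.isclose(eta3(x, y) @ F3(y) + eta3(y, x) @ F3(x), 0.0)

# The pair from Example 5.2.2 rules out ordinary strict monotonicity.
x = np.array([np.pi / 6, np.pi / 6]); y = np.array([np.pi / 4, np.pi / 4])
assert (y - x) @ (F2(y) - F2(x)) < 0
print("Examples 5.2.2 and 5.2.3 verified")
```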
5.3 Invariant Quasimonotone Maps
Definition 5.3.1 A map F is quasimonotone on a set Λ of Rn if, for every pair of distinct points x, y ∈ Λ, (y − x)ᵀF(x) > 0 implies (y − x)ᵀF(y) ≥ 0.
Definition 5.3.2 A map F is invariant quasimonotone with respect to η on Λ if, for every pair of distinct points x, y ∈ Λ, η(y, x)ᵀF(x) > 0 implies η(x, y)ᵀF(y) ≤ 0.
Remark 5.3.1 Every quasimonotone map is an invariant quasimonotone map, but the converse is not necessarily true.
Example 5.3.1 Define the maps F and η as
F(x) = (sin²x1 cos x1, sin²x2 cos x2), x ∈ [0, π] × [0, π],
η(x, y) = (cos y1(sin x1 − sin y1), cos y2(sin x2 − sin y2)), x, y ∈ [0, π] × [0, π].
Clearly, F is invariant quasimonotone with respect to η. Let x = (3π/4, 3π/4), y = (π/4, π/4). Then
(y − x)ᵀF(x) = √2π/4 > 0, but (y − x)ᵀF(y) = −√2π/4 < 0.
Thus, F is not quasimonotone.
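The displayed pair in Example 5.3.1 is easy to confirm numerically (our own check of the two dot products only):

```python
import numpy as np

def F(x):
    # Componentwise map of Example 5.3.1.
    return np.sin(x) ** 2 * np.cos(x)

x = np.array([3 * np.pi / 4, 3 * np.pi / 4])
y = np.array([np.pi / 4, np.pi / 4])
lhs = (y - x) @ F(x)   # should equal  sqrt(2)*pi/4 > 0
rhs = (y - x) @ F(y)   # should equal -sqrt(2)*pi/4 < 0
assert np.isclose(lhs, np.sqrt(2) * np.pi / 4)
assert np.isclose(rhs, -np.sqrt(2) * np.pi / 4)
print("Example 5.3.1 pair verified")
```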
Definition 5.3.3 ([66]) Let Λ ⊂ Rn be an invex set with respect to η. A function f : Λ → R is prequasiinvex with respect to the same η on Λ if, for all x, y ∈ Λ, λ ∈ [0, 1],
f(y + λη(x, y)) ≤ max{f(x), f(y)}.
Lemma 5.3.1 ([66]) Let Λ ⊂ Rn be an invex set with respect to η, and let η satisfy Condition C. Then a differentiable function f is prequasiinvex with respect to η on Λ if and only if, for every pair of points x, y ∈ Λ,
f(y) ≤ f(x) implies η(y, x)ᵀ∇f(x) ≤ 0. (5.6)
Theorem 5.3.1 Let Λ ⊂ Rn be an invex set with respect to η, and let f be a differentiable function on Λ. If η satisfies Condition C, then f is prequasiinvex with respect to the same η on Λ if and only if ∇f is invariant quasimonotone with respect to the same η on Λ and, for all x, y ∈ Λ,
f(y) ≤ f(x) implies f(y + η(x, y)) ≤ f(x). (5.7)
Proof Suppose that f is prequasiinvex with respect to η. It is obvious that inequality (5.7) is true. Let x, y ∈ Λ be such that
η(y, x)ᵀ∇f(x) > 0. (5.8)
Then, by (5.6), we have f(y) > f(x). Again by (5.6), f(x) < f(y) implies η(x, y)ᵀ∇f(y) ≤ 0. This shows that ∇f is invariant quasimonotone with respect to the same η.
Conversely, suppose that ∇f is invariant quasimonotone with respect to η. Assume that f is not prequasiinvex with respect to the same η. Then, there exist x, y ∈ Λ such that f(y) ≤ f(x) and, furthermore, there exists a λ̄ ∈ (0, 1) such that
f(y + λ̄η(x, y)) > f(x) ≥ f(y). (5.9)
By the mean-value theorem, there exist λ1, λ2 ∈ (0, 1) such that
f(y + λ̄η(x, y)) − f(y + η(x, y)) = (λ̄ − 1)η(x, y)ᵀ∇f(y + λ1η(x, y)), (5.10)
f(y + λ̄η(x, y)) − f(y) = λ̄η(x, y)ᵀ∇f(y + λ2η(x, y)), (5.11)
where
0 < λ2 < λ̄ < λ1 < 1. (5.12)
Then, from (5.9), (5.10), (5.11), (5.12) and (5.7), we obtain
η(x, y)ᵀ∇f(y + λ1η(x, y)) < 0, (5.13)
and
η(x, y)ᵀ∇f(y + λ2η(x, y)) > 0. (5.14)
From Condition C, we have
η(y + λ2η(x, y), y + λ1η(x, y)) = η(y + λ2η(x, y), y + λ2η(x, y) + (λ1 − λ2)η(x, y))
= η(y + λ2η(x, y), y + λ2η(x, y) + ((λ1 − λ2)/(1 − λ2))η(x, y + λ2η(x, y)))
= −((λ1 − λ2)/(1 − λ2))η(x, y + λ2η(x, y))
= (λ2 − λ1)η(x, y), (5.15)
η(y + λ1η(x, y), y + λ2η(x, y)) = η(y + λ1η(x, y), y + λ1η(x, y) − (λ1 − λ2)η(x, y))
= η(y + λ1η(x, y), y + λ1η(x, y) + η(y, y + (λ1 − λ2)η(x, y)))
= −η(y, y + (λ1 − λ2)η(x, y))
= (λ1 − λ2)η(x, y). (5.16)
Then, by (5.13), (5.14), (5.15) and (5.16), it follows that η(y + λ2 η(x, y), y + λ1 η(x, y))T ∇f (y + λ1 η(x, y)) > 0, and η(y + λ1 η(x, y), y + λ2 η(x, y))T ∇f (y + λ2 η(x, y)) > 0. These two inequalities contradict the invariant quasimonotonicity of ∇f .
5.4 Invariant Pseudomonotone Maps

Definition 5.4.1 (see [46]) A map F : K → Rn is said to be pseudomonotone on K if, for every pair of distinct points x, y ∈ K,

(y − x)T F(x) ≥ 0 implies (y − x)T F(y) ≥ 0.

Definition 5.4.2 A map F : K → Rn is said to be invariant pseudomonotone with respect to η on K ⊆ Rn if, for every pair of distinct points x, y ∈ K,

η(y, x)T F(x) ≥ 0 implies η(x, y)T F(y) ≤ 0.

Definition 5.4.3 (see [48]) A differentiable function f : K → R is pseudoinvex with respect to η on K if, for every pair of distinct points x, y ∈ K,

η(y, x)T ∇f(x) ≥ 0 implies f(y) ≥ f(x).
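These pointwise implications can be probed numerically: given candidate maps F and η, one checks whether the premise of Definition 5.4.1 or 5.4.2 holds at a pair of points while the conclusion fails. A minimal sketch, using the map of Example 5.4.1 below (the helper names are ours, not the book's):

```python
import math

def F(x):
    # Map of Example 5.4.1: F(x1, x2) = (1, cos x2).
    return (1.0, math.cos(x[1]))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Pseudomonotonicity (Definition 5.4.1) requires:
# (y - x)^T F(x) >= 0  implies  (y - x)^T F(y) >= 0.
x = (math.pi / 3, 0.0)
y = (math.pi / 6, math.pi / 6)
d = tuple(yi - xi for yi, xi in zip(y, x))

premise = dot(d, F(x))     # = 0, so the premise holds
conclusion = dot(d, F(y))  # = (pi/6)(sqrt(3)/2 - 1) < 0: the implication fails

print(premise, conclusion)
```

The same sampling scheme, with d replaced by η(y, x) and η(x, y), tests the invariant notions of Definition 5.4.2.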
Remark 5.4.1 Every pseudomonotone map is an invariant pseudomonotone map with η(x, y) = x − y, but the converse is not necessarily true.

Example 5.4.1 Define the maps F and η as

F(x1, x2) = (1, cos x2),  (x1, x2) ∈ (−π/2, π/2) × (−π/2, π/2),

η(x, y) = (sin x1 − sin y1, (sin x2 − sin y2)/cos y2),  x = (x1, x2), y = (y1, y2) ∈ (−π/2, π/2) × (−π/2, π/2).
Clearly, F is invariant pseudomonotone with respect to η. Let x = (π/3, 0), y = (π/6, π/6). Then

(y − x)T F(x) = 0, and (y − x)T F(y) = (π/6)(√3/2 − 1) < 0.

Thus, F is not pseudomonotone.

Remark 5.4.2 Every invariant monotone map is an invariant pseudomonotone map with respect to the same η, but the converse is not necessarily true.

Example 5.4.2 Define the maps F and η as

F(x) = cos²x,  x ∈ (−π/2, π/2),

η(x, y) = cos y − cos x,  x, y ∈ (−π/2, π/2).
Clearly, F is invariant pseudomonotone with respect to η on (−π/2, π/2). Let x = −π/6, y = π/4. Then

η(y, x)T F(x) + η(x, y)T F(y) > 0.

Thus, F is not invariant monotone with respect to η on (−π/2, π/2).

Remark 5.4.3 Every invariant pseudomonotone map is an invariant quasimonotone map with respect to the same η, but the converse is not true.

Example 5.4.3 Define the maps F and η as

F(x) = sin²x cos x,  x ∈ [0, π],

η(x, y) = cos y(sin x − sin y),  x, y ∈ [0, π].
Clearly, F is invariant quasimonotone with respect to η. Let x = π/2, y = π/4. Then

η(y, x)T F(x) = 0 but η(x, y)T F(y) > 0.

Thus, F is not invariant pseudomonotone with respect to η.
It is well known that every pseudoconvex function is quasiconvex. This result can be generalized to invex-type functions; the details are given in the following.

Lemma 5.4.1 Let η satisfy Condition C, let the differentiable function f be pseudoinvex with respect to η on an invex set K of Rn, and suppose that, for all x, y ∈ K,

f(y) ≤ f(x) implies f(y + η(x, y)) ≤ f(x).  (5.17)
Then f is prequasiinvex with respect to the same η on K.

Proof Suppose that f is pseudoinvex with respect to η on K. Assume that f is not prequasiinvex with respect to η. Then there exist x, y ∈ K such that f(x) ≤ f(y) and, furthermore, there exists λ̄ ∈ (0, 1) such that, for x̄ = y + λ̄η(x, y),

f(x̄) > f(y) ≥ f(x).

From (5.17) and the inequalities above, there exists ȳ = y + λ*η(x, y), for some λ* ∈ (0, 1), such that

f(ȳ) = max_{λ∈[0,1]} f(y + λη(x, y)).

Then it follows that

η(x, y)T ∇f(ȳ) = 0.
From Condition C, we have

η(x, ȳ) = (1 − λ*)η(x, y) and η(y, ȳ) = −λ*η(x, y).

Hence,

η(x, ȳ)T ∇f(ȳ) = (1 − λ*)η(x, y)T ∇f(ȳ) = 0.

Since f is pseudoinvex with respect to η, it holds that f(ȳ) ≤ f(x), which is a contradiction. Thus, f is prequasiinvex with respect to η.
Theorem 5.4.1 Let K ⊆ Rn be an open invex set with respect to η, let f be differentiable on K, let η satisfy Condition C, and suppose that f(y + η(x, y)) ≤ f(x) for any x, y ∈ K. Then f is pseudoinvex with respect to η on K if and only if ∇f is invariant pseudomonotone with respect to η on K.

Proof Suppose that f is pseudoinvex with respect to η on K. Let x, y ∈ K, x ≠ y, be such that η(x, y)T ∇f(y) ≥ 0. Then

f(y) ≤ f(x).  (5.18)
From Lemma 5.4.1, we see that every pseudoinvex function is also prequasiinvex with respect to the same η. It follows from (5.18) and Lemma 5.3.1 that η(y, x)T ∇f(x) ≤ 0. Therefore, ∇f is invariant pseudomonotone with respect to η.
Conversely, suppose that ∇f is invariant pseudomonotone on K. Let x, y ∈ K, x ≠ y, be such that

η(x, y)T ∇f(y) ≥ 0.  (5.19)
We need to show that f(x) ≥ f(y). Assume the contrary, i.e.,

f(x) < f(y).  (5.20)

By the mean-value theorem, we have

f(y + η(x, y)) − f(y) = η(x, y)T ∇f(y + λ̄η(x, y)),  (5.21)
for some λ̄ ∈ (0, 1). By Condition C and f(y + η(x, y)) ≤ f(x) for any x, y ∈ K, it follows that

f(y + η(x, y)) < f(y),  (5.22)

and

η(y, y + λ̄η(x, y)) = −λ̄η(x, y).  (5.23)

Now, from (5.20), (5.21), (5.22), and (5.23), we obtain

η(y, y + λ̄η(x, y))T ∇f(y + λ̄η(x, y)) > 0.  (5.24)
Since ∇f is invariant pseudomonotone with respect to η, it follows from (5.24) that η(y + λ̄η(x, y), y)T ∇f(y) < 0. From Condition C, η(y + λ̄η(x, y), y) = λ̄η(x, y), and hence η(x, y)T ∇f(y) < 0, which contradicts (5.19). Hence, f is pseudoinvex with respect to η.
5.5 Strictly Invariant Pseudomonotone Maps

Definition 5.5.1 ([46]) A map F is strictly pseudomonotone on a set K ⊆ Rn if, for every pair of distinct points x, y ∈ K,

(y − x)T F(x) ≥ 0 implies (y − x)T F(y) > 0.

Definition 5.5.2 A map F is strictly invariant pseudomonotone with respect to η on K if, for every pair of distinct points x, y ∈ K,

η(y, x)T F(x) ≥ 0 implies η(x, y)T F(y) < 0.
Remark 5.5.1 Every strictly pseudomonotone map is a strictly invariant pseudomonotone map with η(x, y) = x − y, but the converse is not necessarily true.

Example 5.5.1 Define the maps F and η as

F(x) = sin x + cos x,  x ∈ (0, π),

η(x, y) = (sin y + cos y)(cos x − cos y),  x, y ∈ (0, π).
Clearly, F is strictly invariant pseudomonotone with respect to η on (0, π). Let x = 3π/4, y = π/4. Then

(y − x)T F(x) = 0 but (y − x)T F(y) = −(√2/2)π < 0.

Thus, F is not strictly pseudomonotone on (0, π).

Remark 5.5.2 Every strictly invariant monotone map is a strictly invariant pseudomonotone map with respect to the same η, but the converse is not necessarily true.

Example 5.5.2 Define the maps F and η as

F(x) = sin x cos x,  x ∈ (0, π/2),

η(x, y) = sin x cos x(cos x − cos y),  x, y ∈ (0, π/2).
Clearly, F is strictly invariant pseudomonotone with respect to η on (0, π/2). However,

η(x, y)T F(y) + η(y, x)T F(x) = 0.

Thus, F is not strictly invariant monotone with respect to η on (0, π/2).

Remark 5.5.3 Every strictly invariant pseudomonotone map is an invariant pseudomonotone map with respect to the same η, but the converse is not necessarily true.

Example 5.5.3 Define the maps F and η as

F(x) = sin x cos²x,  x ∈ (−π/2, π/2),

η(x, y) = sin y(cos y − cos x),  x, y ∈ (−π/2, π/2).

Clearly, F is invariant pseudomonotone with respect to η on (−π/2, π/2). Let x = −π/6, y = π/6. Then

η(y, x)T F(x) = 0 and η(x, y)T F(y) = 0.

Thus, F is neither strictly invariant pseudomonotone nor strictly invariant monotone with respect to the same η on (−π/2, π/2).
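The borderline nature of Example 5.5.3 is easy to confirm numerically: at x = −π/6, y = π/6 both pairings vanish, so the strict inequality required by Definition 5.5.2 cannot hold. A short check (the function names are ours):

```python
import math

def F(x):
    # Example 5.5.3: F(x) = sin x * cos^2 x on (-pi/2, pi/2).
    return math.sin(x) * math.cos(x) ** 2

def eta(x, y):
    # Example 5.5.3: eta(x, y) = sin y * (cos y - cos x).
    return math.sin(y) * (math.cos(y) - math.cos(x))

x, y = -math.pi / 6, math.pi / 6

lhs = eta(y, x) * F(x)  # premise term of Definition 5.5.2
rhs = eta(x, y) * F(y)  # conclusion term

# cos is even, so cos y - cos x = 0 here: the premise eta(y, x) F(x) >= 0
# holds, but the strict conclusion eta(x, y) F(y) < 0 fails.
print(lhs, rhs)
```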
Definition 5.5.3 ([48]) Let K ⊆ Rn be an open invex set with respect to η. A differentiable function f on K is strictly pseudoinvex with respect to η on K if, for every pair of distinct points x, y ∈ K,

η(y, x)T ∇f(x) ≥ 0 implies f(y) > f(x).
Theorem 5.5.1 Let K ⊆ Rn be an open invex set with respect to η, let f be differentiable on K, let η satisfy Condition C, and suppose that f(y + η(x, y)) ≤ f(x) for any x, y ∈ K. Then f is strictly pseudoinvex with respect to η on K if and only if ∇f is strictly invariant pseudomonotone with respect to η on K.

Proof Suppose that f is strictly pseudoinvex with respect to η on K. Let x, y ∈ K, x ≠ y, be such that

η(y, x)T ∇f(x) ≥ 0.  (5.25)
We need to show that η(x, y)T ∇f(y) < 0. On the contrary, we assume that η(x, y)T ∇f(y) ≥ 0. From the strict pseudoinvexity of f with respect to η, it follows that

f(x) > f(y).  (5.26)

On the other hand, from the strict pseudoinvexity of f with respect to η, (5.25) implies that f(y) > f(x), which contradicts (5.26).
Conversely, suppose that ∇f is strictly invariant pseudomonotone with respect to η on K. Let x, y ∈ K, x ≠ y, be such that

η(y, x)T ∇f(x) ≥ 0.  (5.27)

We need to show that

f(y) > f(x).  (5.28)

To the contrary, we assume that

f(y) ≤ f(x).  (5.29)
By the mean-value theorem, we have

f(x + η(y, x)) − f(x) = η(y, x)T ∇f(x + λ̄η(y, x)),  (5.30)

for some 0 < λ̄ < 1. By f(y + η(x, y)) ≤ f(x) for any x, y ∈ K, it follows from (5.29), (5.30), and Condition C that

η(x, x + λ̄η(y, x))T ∇f(x + λ̄η(y, x)) = −λ̄η(y, x)T ∇f(x + λ̄η(y, x)) ≥ 0.  (5.31)

Since ∇f is strictly invariant pseudomonotone with respect to η, we conclude that

η(x + λ̄η(y, x), x)T ∇f(x) < 0.  (5.32)

Again, from Condition C, we note that

η(x + λ̄η(y, x), x) = η(x + λ̄η(y, x), x + λ̄η(y, x) + η(x, x + λ̄η(y, x))) = −η(x, x + λ̄η(y, x)) = λ̄η(y, x).

Thus, it follows from (5.32) that

η(y, x)T ∇f(x) < 0,

which contradicts (5.27).
5.6 Conclusions

In this chapter, several concepts of generalized invariant monotonicity are introduced, and their relations with generalized invexity are established. Diagram 1 below summarizes these relations, where ⇒ indicates that the implication holds while its converse does not:

Diagram 1:
Strict invariant monotonicity ⇒ Invariant monotonicity ⇒ Invariant pseudomonotonicity ⇒ Invariant quasimonotonicity;
Strict invariant monotonicity ⇒ Strict invariant pseudomonotonicity ⇒ Invariant pseudomonotonicity.

For generalized invexities of a real-valued function, the relations in Diagram 2 hold:
Diagram 2:
Strict preinvexity ⇒ Preinvexity ⇒ Pseudoinvexity ⇒ Quasiinvexity;
Strict preinvexity ⇒ Strict pseudoinvexity ⇒ Pseudoinvexity.

Again, each implication is one-way: none of the converse implications holds.

We conclude that, under Condition C and the condition f(y + η(x, y)) ≤ f(x) for any x, y ∈ K, each generalized invexity in Diagram 2 is equivalent to the corresponding generalized invariant monotonicity in Diagram 1. Some generalized invariant monotonicities have not been discussed in this chapter; for details on the relations between these generalized invariant monotonicities and generalized invexities, we refer the reader to [120].
Part III
Duality in Multiobjective Programming
Chapter 6
Multiobjective Wolfe Type Second-Order Symmetric Duality
6.1 Introduction

Symmetric duality in nonlinear programming, in which the dual of the dual is the primal, was first introduced by Dorn [28]. Subsequent developments on the notion of symmetric duality are reported in [4, 11, 15, 19–21, 25, 27, 67, 69]. The Wolfe dual models presented in [27] involve a scalar kernel function f(x, y), x ∈ Rn, y ∈ Rm, which is required to be convex in x for fixed y and concave in y for fixed x. Later, single-objective symmetric duality was generalized to the multiobjective case in [32, 56, 57, 72, 74, 85, 92, 99, 100, 111]. For the second-order case, the concept of second-order convex functions and the Wolfe dual models are introduced in [68], where symmetric duality results are proved under second-order convexity assumptions on the functions involved in the primal problem. In [59], a nonlinear programming problem is considered, and second-order duality is discussed under an inclusion condition. Second-order Mangasarian-type dual formulations are discussed under ρ-convexity and generalized representation conditions in [43] and [101], respectively. The study of second-order duality is important, as it provides tighter bounds for the value of the objective function than first-order duality when approximations are used [1, 31, 43, 59, 61, 68, 94, 101, 112]. Based on these ideas, in this chapter, two pairs of Wolfe-type second-order symmetric dual models in multiobjective nonlinear programming are suggested. Under η-invexity conditions, the weak, strong, and converse duality theorems are proved. Several known results, including those obtained in [32], are special cases.
© Springer Nature Singapore Pte Ltd. 2018 X. Yang, Generalized Preinvexity and Second Order Duality in Multiobjective Programming, Springer Optimization and Its Applications 142, https://doi.org/10.1007/978-981-13-1981-5_6
6.2 Notations and Definitions

The following ordering relations for vectors in Rn will be used in this chapter:

x < y ⟺ y − x ∈ int Rn+;  x ≤ y ⟺ y − x ∈ Rn+ \ {0};  x ≦ y ⟺ y − x ∈ Rn+.

The negation of x ≤ y is denoted by x ≰ y. A general multiobjective programming problem can be expressed in the following form:

(MP)  minimize h(x)
      subject to: g(x) ≦ 0,

where h : Rn → Rk and g : Rn → Rm. The feasible set of (MP) is denoted by X = {x | g(x) ≦ 0, x ∈ Rn}. For problem (MP), the following solution concepts are introduced.

Definition 6.2.1 A feasible point x* is an efficient (or Pareto optimal) solution of (MP) if there exists no other x ∈ X such that h(x) ≤ h(x*).

Definition 6.2.2 A feasible point x* is said to be a weakly efficient solution of (MP) if there does not exist any feasible x such that h(x) < h(x*).

Definition 6.2.3 A feasible point x* is said to be a properly efficient solution of (MP) if it is an efficient solution of (MP) and there exists a scalar M > 0 such that, for each i and each x ∈ X satisfying hi(x) < hi(x*),

(hi(x*) − hi(x)) / (hj(x) − hj(x*)) ≤ M

for some j satisfying hj(x) > hj(x*).

Definition 6.2.4 A differentiable vector-valued function h = (h1, . . . , hk) : Rn → Rk is said to be η-invex at u ∈ Rn if there exists a vector function η : Rn × Rn → Rn such that

h(x) − h(u) ≧ η(x, u)T ∇h(u), ∀ x ∈ Rn,

that is,

hi(x) − hi(u) ≥ η(x, u)T ∇hi(u), ∀ x ∈ Rn, i = 1, 2, . . . , k,

where ∇h(u) is the n × k matrix with its {i, j}-th entry being ∂hj(u)/∂ui.
Definition 6.2.5 A functional F : X × X × Rn → R (where X ⊆ Rn) is sublinear in its third component if, for all (x, u) ∈ X × X,
(i) F(x, u; a1 + a2) ≤ F(x, u; a1) + F(x, u; a2), for all a1, a2 ∈ Rn; and
(ii) F(x, u; αa) = αF(x, u; a), for all α ∈ R+ and for all a ∈ Rn.

For notational convenience, we denote Fx,u(a) = F(x, u; a). Now we introduce second-order convexity for a twice differentiable real-valued function f : X → R with respect to a sublinear function F : X × X × Rn → R.

Definition 6.2.6 f is said to be second-order F-convex at u ∈ X with respect to p ∈ Rn if, ∀ x ∈ X,

f(x) − f(u) + (1/2)pT ∇uu f(u)p ≥ Fx,u(∇u f(u) + ∇uu f(u)p).

f is second-order F-concave at u ∈ X with respect to p ∈ Rn if −f is second-order F-convex at u ∈ X with respect to p ∈ Rn.

Remark 6.2.1
(i) The second-order F-convexity reduces to invexity when p = 0 and Fx,u(a) := η(x, u)T a.
(ii) For Fx,u(a) := η(x, u)T a, where η is a function from X × X to Rn, the second-order F-convexity reduces to the second-order η-convexity introduced in [38] and [93].
(iii) If p = 0, then the second-order F-convexity reduces to the F-convexity studied in [39].
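For a concrete instance of Definition 6.2.6, take f(u) = uᵀu with the sublinear functional Fx,u(a) = (x − u)ᵀa; both are our illustrative choices, not from the text. The second-order F-convexity gap then equals ‖x − u − p‖² ≥ 0, which a quick sampling check confirms:

```python
import random

def gap(x, u, p):
    # f(v) = v . v, grad f(u) = 2u, Hessian = 2I, F_{x,u}(a) = (x - u) . a.
    # gap = f(x) - f(u) + (1/2) p^T H p - F_{x,u}(grad f(u) + H p)
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    lhs = dot(x, x) - dot(u, u) + dot(p, p)  # (1/2) p^T (2I) p = p . p
    rhs = dot([xi - ui for xi, ui in zip(x, u)],
              [2 * ui + 2 * pi for ui, pi in zip(u, p)])
    return lhs - rhs

random.seed(0)
for _ in range(100):
    x, u, p = ([random.uniform(-2, 2) for _ in range(3)] for _ in range(3))
    d = [xi - ui - pi for xi, ui, pi in zip(x, u, p)]
    # The gap is exactly ||x - u - p||^2, hence nonnegative.
    assert abs(gap(x, u, p) - sum(di * di for di in d)) < 1e-9
```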
6.3 Wolfe Type I Symmetric Duality

We formulate a pair of second-order nonlinear multiobjective programs. The primal problem (MSP) and the dual problem (MSD) are stated as follows:

(MSP)  min_{x,y,λ,p} FP(x, y, λ, p) = f(x, y) − (yT ∇y(λT f)(x, y))ek − (yT ∇yy(λT f)(x, y)p)ek

subject to:

∇y(λT f)(x, y) + ∇yy(λT f)(x, y)p ≦ 0,  (6.1)

λ > 0, λT ek = 1.
(MSD)  max_{u,v,λ,r} FD(u, v, λ, r) = f(u, v) − (uT ∇u(λT f)(u, v))ek − (uT ∇uu(λT f)(u, v)r)ek

subject to:

∇u(λT f)(u, v) + ∇uu(λT f)(u, v)r ≧ 0,  (6.2)

λ > 0, λT ek = 1,
where f : Rn × Rm → Rk, p ∈ Rm, r ∈ Rn, λ ∈ Rk, and ek = (1, . . . , 1)T ∈ Rk.

Theorem 6.3.1 (Weak duality) Let (x, y, λ, p) be feasible for (MSP) and (u, v, λ, r) be feasible for (MSD). Suppose that there exist two vector functions η1 : Rn × Rn → Rn and η2 : Rm × Rm → Rm such that f(·, y) is an η1-invex function for each fixed y and −f(x, ·) is an η2-invex function for each fixed x. Furthermore, assume that η1(x, u) + u ≧ 0 and η2(v, y) + y ≧ 0, and that

η1(x, u)T ∇uu(λT f)(u, v)r ≤ 0,  (6.3)

η2(v, y)T ∇yy(λT f)(x, y)p ≥ 0.  (6.4)

Then,

FP(x, y, λ, p) ≰ FD(u, v, λ, r).  (6.5)
Proof Suppose, to the contrary, that (6.5) is not true, that is,

f(x, y) − (yT ∇y(λT f)(x, y))ek − (yT ∇yy(λT f)(x, y)p)ek ≤ f(u, v) − (uT ∇u(λT f)(u, v))ek − (uT ∇uu(λT f)(u, v)r)ek.

Since λ > 0 and λT ek = 1, we have

(λT f)(x, y) − yT ∇y(λT f)(x, y) − yT ∇yy(λT f)(x, y)p < (λT f)(u, v) − uT ∇u(λT f)(u, v) − uT ∇uu(λT f)(u, v)r.  (6.6)
Firstly, from the assumption that f(·, v) is η1-invex, we obtain

f(x, v) − f(u, v) ≧ η1(x, u)T ∇u f(u, v).

Since λ > 0,

(λT f)(x, v) − (λT f)(u, v) ≥ η1(x, u)T ∇u(λT f)(u, v).  (6.7)
From (6.2) and the assumption that η1(x, u) + u ≧ 0, we have

η1(x, u)T ∇u(λT f)(u, v) + η1(x, u)T ∇uu(λT f)(u, v)r + uT ∇u(λT f)(u, v) + uT ∇uu(λT f)(u, v)r ≥ 0.  (6.8)

Combining (6.3) and (6.8) yields

η1(x, u)T ∇u(λT f)(u, v) ≥ −uT ∇u(λT f)(u, v) − uT ∇uu(λT f)(u, v)r.  (6.9)

Thus, (6.9) and (6.7) imply

(λT f)(x, v) − (λT f)(u, v) ≥ −uT ∇u(λT f)(u, v) − uT ∇uu(λT f)(u, v)r.  (6.10)

Secondly, the η2-invexity assumption on −f(x, ·) implies

f(x, v) − f(x, y) ≦ η2(v, y)T ∇y f(x, y).

Since λ > 0, we have

(λT f)(x, v) − (λT f)(x, y) ≤ η2(v, y)T ∇y(λT f)(x, y).  (6.11)

It follows from (6.1) and η2(v, y) + y ≧ 0 that

η2(v, y)T ∇y(λT f)(x, y) + η2(v, y)T ∇yy(λT f)(x, y)p + yT ∇y(λT f)(x, y) + yT ∇yy(λT f)(x, y)p ≤ 0.  (6.12)

Combining (6.4) and (6.12), we get

η2(v, y)T ∇y(λT f)(x, y) ≤ −yT ∇y(λT f)(x, y) − yT ∇yy(λT f)(x, y)p.  (6.13)

Using (6.13) and (6.11), we get

(λT f)(x, v) − (λT f)(x, y) ≤ −yT ∇y(λT f)(x, y) − yT ∇yy(λT f)(x, y)p.  (6.14)

It follows from (6.10) and (6.14) that

(λT f)(x, y) − yT ∇y(λT f)(x, y) − yT ∇yy(λT f)(x, y)p ≥ (λT f)(u, v) − uT ∇u(λT f)(u, v) − uT ∇uu(λT f)(u, v)r,

which contradicts (6.6), and the proof is complete.
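As a numerical sanity check of Theorem 6.3.1, consider the simplest convex special case (cf. Remark 6.3.1): k = 1, n = m = 1, f(x, y) = x² − y², η1(x, u) = x − u, η2(v, y) = v − y, and p = r = 0 — all choices ours, for illustration only. Then FP = x² + y² and FD = −u² − v², and the weak duality inequality can be sampled directly:

```python
import random

# Scalar special case of (MSP)/(MSD) with f(x, y) = x**2 - y**2 and p = r = 0
# (illustrative choices only, not the book's general setting).
def FP(x, y):
    # f - y * df/dy = x^2 - y^2 - y * (-2y) = x^2 + y^2
    return x**2 - y**2 - y * (-2 * y)

def FD(u, v):
    # f - u * df/du = u^2 - v^2 - u * (2u) = -u^2 - v^2
    return u**2 - v**2 - u * (2 * u)

random.seed(1)
for _ in range(1000):
    # Feasibility and the invexity side conditions restrict signs; sampling
    # all variables nonnegative satisfies them under our sign conventions.
    x, y = random.uniform(0, 3), random.uniform(0, 3)
    u, v = random.uniform(0, 3), random.uniform(0, 3)
    assert FP(x, y) >= FD(u, v)  # weak duality, inequality (6.5)
```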
Remark 6.3.1 If the invexity assumption in Theorem 6.3.1 is replaced by convexity, the conditions η1(x, u) + u ≧ 0 and η2(v, y) + y ≧ 0 reduce to x ≧ 0 and v ≧ 0.
These conditions may be augmented to problems (MSP) and (MSD), respectively, to obtain the pair of symmetric dual problems considered in [52]. The following theorem also serves to correct the proof of Theorem 2.2 in [52].

Theorem 6.3.2 (Strong duality) Let (x̄, ȳ, λ̄, p̄) be an efficient solution of (MSP). Assume that the following conditions hold:
(i) the matrix ∇yy(λ̄T f)(x̄, ȳ) is nonsingular;
(ii) the vectors ∇y f1(x̄, ȳ), . . . , ∇y fk(x̄, ȳ) are linearly independent; and
(iii) ∇yy(λ̄T f)(x̄, ȳ)p̄ ∉ span{∇y f1(x̄, ȳ), . . . , ∇y fk(x̄, ȳ)} \ {0}.

If the assumptions of Theorem 6.3.1 are satisfied, then the objective values of (MSP) and (MSD) are equal, and (x̄, ȳ, λ̄, r̄ = 0) is an efficient solution of (MSD).

Proof Let

LP = αT [f(x, y) − (yT ∇y(λT f)(x, y))ek − (yT ∇yy(λT f)(x, y)p)ek] + βT [∇y(λT f)(x, y) + ∇yy(λT f)(x, y)p] − ωT λ + μ(λT ek − 1),

where α ∈ Rk, β ∈ Rm, ω ∈ Rk, and μ ∈ R. Since (x̄, ȳ, λ̄, p̄) is an efficient solution of (MSP), it follows from the Fritz John optimality conditions [24] that there exist ᾱ ∈ Rk, β̄ ∈ Rm, μ ∈ R, and ω ∈ Rk such that

∂LP/∂x = ∇x(ᾱT f)(x̄, ȳ) + ∇xy(λ̄T f)(x̄, ȳ)(β̄ − (ᾱT ek)ȳ) + ∇x{∇yy(λ̄T f)(x̄, ȳ)p̄}(β̄ − (ᾱT ek)ȳ) = 0,  (6.15)

∂LP/∂y = ∇y f(x̄, ȳ)(ᾱ − (ᾱT ek)λ̄) + ∇yy(λ̄T f)(x̄, ȳ)(β̄ − (ᾱT ek)ȳ) − (ᾱT ek)∇yy(λ̄T f)(x̄, ȳ)p̄ + ∇y{∇yy(λ̄T f)(x̄, ȳ)p̄}(β̄ − (ᾱT ek)ȳ) = 0,  (6.16)

∂LP/∂p = ∇yy(λ̄T f)(x̄, ȳ)(β̄ − (ᾱT ek)ȳ) = 0,  (6.17)

∂LP/∂λ = ∇yT f(x̄, ȳ)(β̄ − (ᾱT ek)ȳ) − ω + μek + ((β̄ − (ᾱT ek)ȳ)T ∇yy f1(x̄, ȳ)p̄, . . . , (β̄ − (ᾱT ek)ȳ)T ∇yy fk(x̄, ȳ)p̄)T = 0,  (6.18)

β̄T ∂LP/∂β = β̄T (∇y(λ̄T f)(x̄, ȳ) + ∇yy(λ̄T f)(x̄, ȳ)p̄) = 0,  (6.19)

ωT ∂LP/∂ω = ωT λ̄ = 0,  (6.20)

(ᾱ, β̄, ω) ≧ 0, (ᾱ, β̄, ω, μ) ≠ 0.  (6.21)

Since λ̄ > 0 and ω ≧ 0, (6.20) yields ω = 0.
Using assumption (i) in (6.17), we have

β̄ = (ᾱT ek)ȳ.  (6.22)

Therefore, it follows from (6.18) that μ = 0. We now claim that ᾱ ≠ 0. Otherwise, if ᾱ = 0, then (6.22) yields β̄ = 0. Consequently, (ᾱ, β̄, ω, μ) = 0, contradicting (6.21). Hence, ᾱ ≥ 0 and

ᾱT ek ≠ 0.  (6.23)
From (6.16) and (6.22), we obtain

∇y f(x̄, ȳ)(ᾱ − (ᾱT ek)λ̄) − (ᾱT ek)∇yy(λ̄T f)(x̄, ȳ)p̄ = 0.  (6.24)

We also claim that p̄ = 0. Indeed, if p̄ ≠ 0, then the nonsingularity of ∇yy(λ̄T f)(x̄, ȳ) implies ∇yy(λ̄T f)(x̄, ȳ)p̄ ≠ 0. From (6.24), we get

∇yy(λ̄T f)(x̄, ȳ)p̄ = (1/(ᾱT ek)) ∇y f(x̄, ȳ)(ᾱ − (ᾱT ek)λ̄),

which contradicts condition (iii) of the theorem. Hence, p̄ = 0. Since the set {∇y f1(x̄, ȳ), . . . , ∇y fk(x̄, ȳ)} is linearly independent, (6.24) yields

ᾱ = (ᾱT ek)λ̄.  (6.25)
Using (6.15), (6.22), (6.23) and (6.25), we have

∇x(λ̄T f)(x̄, ȳ) = (∇x(ᾱT f)(x̄, ȳ))/(ᾱT ek) = 0.  (6.26)

Hence, (x̄, ȳ, λ̄, r = 0) is feasible for (MSD). From (6.19), (6.22) and (6.23), we obtain

ȳT ∇y(λ̄T f)(x̄, ȳ) = 0.  (6.27)

Therefore, from (6.26) and (6.27), we have

FP(x̄, ȳ, λ̄, p̄) = FD(x̄, ȳ, λ̄, 0).

Finally, by Theorem 6.3.1, we can easily show that (x̄, ȳ, λ̄, r = 0) is an efficient solution of (MSD).
We also have a converse duality theorem by virtue of the symmetry of the problem. This converse duality theorem is stated below; its proof is similar to that given for Theorem 6.3.2.

Theorem 6.3.3 (Converse duality) Let (ū, v̄, λ̄, r̄) be an efficient solution of (MSD). Assume that (i) ∇uu(λ̄T f)(ū, v̄) is a nonsingular matrix, (ii) the vectors ∇u f1(ū, v̄), . . . , ∇u fk(ū, v̄) are linearly independent, and (iii) ∇uu(λ̄T f)(ū, v̄)r̄ ∉ span{∇u f1(ū, v̄), . . . , ∇u fk(ū, v̄)} \ {0}. Furthermore, suppose that the assumptions of Theorem 6.3.1 hold. Then the objective values of (MSP) and (MSD) are equal, and (ū, v̄, λ̄, p̄ = 0) is an efficient solution of (MSP).

Remark 6.3.2
(i) If we take m = n, p = r = 0, η1(x, u) = x − u, and η2(v, y) = v − y, then our second-order symmetric dual models and the duality results reduce to the corresponding second-order symmetric dual models and duality results in [52].
(ii) In [50], it is stated that the ratio of invex functions is invex, i.e., if h(x) ≥ 0, g(x) > 0, and h(x) and −g(x) are invex with respect to η(x, y), then h(x)/g(x) is an invex function with respect to η̄(x, y) = (g(y)/g(x))η(x, y). Hence, our results also hold for fractional programming.

Remark 6.3.3 In the duality theorems above, if we require λ ≥ 0 instead of λ > 0, then the same results also hold under the assumption of strict invexity of f(·, y) and −f(x, ·).
6.4 Wolfe Type II Symmetric Duality

We formulate a pair of second-order nonlinear multiobjective programs and establish the corresponding duality results. The primal problem (MP) and the dual problem (MD) are stated as follows:

(MP)  min_{x,y,λ,p} FP(x, y, λ, p) = f(x, y) − (yT ∇y(λT f)(x, y))ek − (yT ∇yy(λT f)(x, y)p)ek − ((1/2)pT ∇yy(λT f)(x, y)p)ek

subject to:

∇y(λT f)(x, y) + ∇yy(λT f)(x, y)p ≦ 0,  (6.28)

λ > 0, λT ek = 1.
(MD)  max_{u,v,λ,r} FD(u, v, λ, r) = f(u, v) − (uT ∇u(λT f)(u, v))ek − (uT ∇uu(λT f)(u, v)r)ek − ((1/2)rT ∇uu(λT f)(u, v)r)ek

subject to:

∇u(λT f)(u, v) + ∇uu(λT f)(u, v)r ≧ 0,  (6.29)

λ > 0, λT ek = 1,
where f : Rn × Rm → Rk, p ∈ Rm, r ∈ Rn, λ ∈ Rk, and ek = (1, . . . , 1)T ∈ Rk.

Theorem 6.4.1 (Weak duality) Let (x, y, λ, p) be feasible for (MP) and let (u, v, λ, r) be feasible for (MD). Suppose that there exist sublinear functionals F : Rn × Rn × Rn → R and G : Rm × Rm × Rm → R satisfying

Fx,u(a) + aT u ≥ 0, for all a ∈ Rn+,  (6.30)

Gv,y(b) + bT y ≥ 0, for all b ∈ Rm+.  (6.31)

Furthermore, assume that fi(·, v) (1 ≤ i ≤ k) is second-order F-convex at u and fi(x, ·) (1 ≤ i ≤ k) is second-order G-concave at y. Then,

FP(x, y, λ, p) ≰ FD(u, v, λ, r).  (6.32)
Proof Suppose, to the contrary, that (6.32) is not true, that is,

f(x, y) − (yT ∇y(λT f)(x, y))ek − (yT ∇yy(λT f)(x, y)p)ek − ((1/2)pT ∇yy(λT f)(x, y)p)ek ≤ f(u, v) − (uT ∇u(λT f)(u, v))ek − (uT ∇uu(λT f)(u, v)r)ek − ((1/2)rT ∇uu(λT f)(u, v)r)ek.

Since λ > 0 and λT ek = 1, we have

(λT f)(x, y) − yT ∇y(λT f)(x, y) − yT ∇yy(λT f)(x, y)p − (1/2)pT ∇yy(λT f)(x, y)p < (λT f)(u, v) − uT ∇u(λT f)(u, v) − uT ∇uu(λT f)(u, v)r − (1/2)rT ∇uu(λT f)(u, v)r.  (6.33)
Firstly, the second-order F-convexity of fi(·, v) (1 ≤ i ≤ k) implies

fi(x, v) − fi(u, v) + (1/2)rT ∇uu fi(u, v)r ≥ Fx,u(∇u fi(u, v) + ∇uu fi(u, v)r).

Since λ > 0, it is clear that

(λT f)(x, v) − (λT f)(u, v) + (1/2)rT ∇uu(λT f)(u, v)r ≥ Fx,u(∇u(λT f)(u, v) + ∇uu(λT f)(u, v)r).  (6.34)

From (6.29), we have

a = ∇u(λT f)(u, v) + ∇uu(λT f)(u, v)r ∈ Rn+.

Thus, it follows from condition (6.30) that

Fx,u(∇u(λT f)(u, v) + ∇uu(λT f)(u, v)r) ≥ −uT ∇u(λT f)(u, v) − uT ∇uu(λT f)(u, v)r.  (6.35)

Combining (6.34) and (6.35), we obtain

(λT f)(x, v) − (λT f)(u, v) + (1/2)rT ∇uu(λT f)(u, v)r ≥ −uT ∇u(λT f)(u, v) − uT ∇uu(λT f)(u, v)r.  (6.36)

Secondly, from the second-order G-concavity of fi(x, ·) (1 ≤ i ≤ k), we have

fi(x, y) − fi(x, v) − (1/2)pT ∇yy fi(x, y)p ≥ Gv,y(−∇y fi(x, y) − ∇yy fi(x, y)p).

Since λ > 0, it is clear that

(λT f)(x, y) − (λT f)(x, v) − (1/2)pT ∇yy(λT f)(x, y)p ≥ Gv,y(−∇y(λT f)(x, y) − ∇yy(λT f)(x, y)p).  (6.37)

From (6.28), we have

b = −∇y(λT f)(x, y) − ∇yy(λT f)(x, y)p ∈ Rm+.

Thus, it follows from condition (6.31) that

Gv,y(−∇y(λT f)(x, y) − ∇yy(λT f)(x, y)p) ≥ yT ∇y(λT f)(x, y) + yT ∇yy(λT f)(x, y)p.  (6.38)
Combining (6.37) and (6.38), we obtain

(λT f)(x, y) − (λT f)(x, v) − (1/2)pT ∇yy(λT f)(x, y)p ≥ yT ∇y(λT f)(x, y) + yT ∇yy(λT f)(x, y)p.  (6.39)

Consequently, it follows from (6.36) and (6.39) that

(λT f)(x, y) − yT ∇y(λT f)(x, y) − yT ∇yy(λT f)(x, y)p − (1/2)pT ∇yy(λT f)(x, y)p ≥ (λT f)(u, v) − uT ∇u(λT f)(u, v) − uT ∇uu(λT f)(u, v)r − (1/2)rT ∇uu(λT f)(u, v)r,

which contradicts (6.33).
Theorem 6.4.2 (Strong duality) Let (x̄, ȳ, λ̄, p̄) be a weakly efficient solution of (MP). Assume that the following conditions are satisfied:
(i) the matrix ∇yy(λ̄T f)(x̄, ȳ) is nonsingular;
(ii) the vectors ∇y f1(x̄, ȳ), . . . , ∇y fk(x̄, ȳ) are linearly independent;
(iii) the vector p̄T ∇y(∇yy(λ̄T f)(x̄, ȳ)p̄) ∉ span{∇y f1(x̄, ȳ), . . . , ∇y fk(x̄, ȳ)} \ {0}; and
(iv) p̄ ≠ 0 implies ∇y{∇yy(λ̄T f)(x̄, ȳ)p̄}p̄ ≠ 0.

Furthermore, suppose that the assumptions of Theorem 6.4.1 are satisfied. Then, the objective values of (MP) and (MD) are equal, and (x̄, ȳ, λ̄, r̄ = 0) is an efficient solution of (MD).

Proof Let

LP = αT [f(x, y) − (yT ∇y(λT f)(x, y))ek − (yT ∇yy(λT f)(x, y)p)ek − ((1/2)pT ∇yy(λT f)(x, y)p)ek] + βT [∇y(λT f)(x, y) + ∇yy(λT f)(x, y)p] − ωT λ + μ(λT ek − 1),

where α ∈ Rk, β ∈ Rm, ω ∈ Rk, and μ ∈ R. Since (x̄, ȳ, λ̄, p̄) is a weakly efficient solution of (MP), it follows from the Fritz John optimality conditions [24] that there exist ᾱ ∈ Rk, β̄ ∈ Rm, μ ∈ R, and ω ∈ Rk such that

∂LP/∂x = ∇x(ᾱT f)(x̄, ȳ) + ∇xy(λ̄T f)(x̄, ȳ)(β̄ − (ᾱT ek)ȳ) + ∇x{∇yy(λ̄T f)(x̄, ȳ)p̄}(β̄ − (ᾱT ek)(ȳ + (1/2)p̄)) = 0,  (6.40)
∂LP/∂y = ∇y f(x̄, ȳ)(ᾱ − (ᾱT ek)λ̄) + ∇yy(λ̄T f)(x̄, ȳ)(β̄ − (ᾱT ek)ȳ) − (ᾱT ek)∇yy(λ̄T f)(x̄, ȳ)p̄ + ∇y{∇yy(λ̄T f)(x̄, ȳ)p̄}(β̄ − (ᾱT ek)(ȳ + (1/2)p̄)) = 0,  (6.41)

∂LP/∂p = ∇yy(λ̄T f)(x̄, ȳ)(β̄ − (ᾱT ek)(ȳ + p̄)) = 0,  (6.42)

∂LP/∂λ = ∇yT f(x̄, ȳ)(β̄ − (ᾱT ek)ȳ) − ω + μek + ((β̄ − (ᾱT ek)(ȳ + (1/2)p̄))T ∇yy f1(x̄, ȳ)p̄, . . . , (β̄ − (ᾱT ek)(ȳ + (1/2)p̄))T ∇yy fk(x̄, ȳ)p̄)T = 0,  (6.43)

β̄T ∂LP/∂β = β̄T (∇y(λ̄T f)(x̄, ȳ) + ∇yy(λ̄T f)(x̄, ȳ)p̄) = 0,  (6.44)

ωT ∂LP/∂ω = ωT λ̄ = 0,  (6.45)

(ᾱ, β̄, ω) ≧ 0, (ᾱ, β̄, ω, μ) ≠ 0.  (6.46)

Since λ̄ > 0 and ω ≧ 0, (6.45) yields ω = 0. Using the nonsingularity of ∇yy(λ̄T f)(x̄, ȳ) in (6.42), we have

β̄ = (ᾱT ek)(ȳ + p̄).  (6.47)
We claim that ᾱ ≠ 0. Otherwise, if ᾱ = 0, then (6.47) yields β̄ = 0 and (6.43) yields μ = 0. Consequently, (ᾱ, β̄, ω, μ) = 0, contradicting (6.46). Hence, ᾱ ≥ 0 and

ᾱT ek ≠ 0.  (6.48)
It follows from (6.41) and (6.47) that

∇y f(x̄, ȳ)(ᾱ − (ᾱT ek)λ̄) + ((ᾱT ek)/2) ∇y{∇yy(λ̄T f)(x̄, ȳ)p̄}p̄ = 0.  (6.49)

We claim that p̄ = 0. Indeed, if p̄ ≠ 0, then condition (iv) of the theorem implies ∇y{∇yy(λ̄T f)(x̄, ȳ)p̄}p̄ ≠ 0. From (6.49), we get

∇y{∇yy(λ̄T f)(x̄, ȳ)p̄}p̄ = (−2/(ᾱT ek)) ∇y f(x̄, ȳ)(ᾱ − (ᾱT ek)λ̄),

which contradicts condition (iii) of the theorem. Hence, p̄ = 0.
Since p̄ = 0 and the set {∇y f1(x̄, ȳ), . . . , ∇y fk(x̄, ȳ)} is linearly independent, (6.49) yields

ᾱ = (ᾱT ek)λ̄.  (6.50)

By (6.40), (6.47), (6.48) and (6.50), we obtain

∇x(λ̄T f)(x̄, ȳ) = (∇x(ᾱT f)(x̄, ȳ))/(ᾱT ek) = 0.  (6.51)

Hence, (x̄, ȳ, λ̄, r = 0) is feasible for (MD). From (6.44), (6.47) and (6.48), we get

ȳT ∇y(λ̄T f)(x̄, ȳ) = 0.  (6.52)

Therefore, from (6.51) and (6.52), we have

FP(x̄, ȳ, λ̄, p̄) = FD(x̄, ȳ, λ̄, 0).

Finally, by an argument similar to that in the proof of Theorem 2 in [32], we can show that (x̄, ȳ, λ̄, r = 0) is a properly efficient solution of (MD).
A converse duality theorem is stated below; its proof is similar to the proof of Theorem 6.4.2.

Theorem 6.4.3 (Converse duality) Let (ū, v̄, λ̄, r̄) be a weakly efficient solution of (MD). Assume that the following conditions are satisfied:
(i) ∇uu(λ̄T f)(ū, v̄) is a nonsingular matrix;
(ii) the vectors ∇u f1(ū, v̄), . . . , ∇u fk(ū, v̄) are linearly independent;
(iii) the vector r̄T ∇u{∇uu(λ̄T f)(ū, v̄)r̄} ∉ span{∇u f1(ū, v̄), . . . , ∇u fk(ū, v̄)} \ {0}; and
(iv) r̄ ≠ 0 implies r̄T ∇u{∇uu(λ̄T f)(ū, v̄)r̄} ≠ 0.

Furthermore, suppose that the assumptions of Theorem 6.4.1 hold. Then, the objective values of (MP) and (MD) are equal, and (ū, v̄, λ̄, p̄ = 0) is a properly efficient solution of (MP).

Remark 6.4.1 If we take m = n, p = r = 0, Fx,u(a) = η1(x, u)T a, and Gv,y(b) = η2(v, y)T b, then our multiobjective second-order symmetric dual models and results reduce to the multiobjective first-order symmetric dual models and results in [32]. In the theorems above, if we require λ ≥ 0 instead of λ > 0, then the same results also hold under the assumption of strict F-convexity of f(·, y) and −f(x, ·).
Chapter 7
Multiobjective Mond-Weir-Type Second-Order Symmetric Duality
7.1 Introduction

In Chap. 6, some results on multiobjective Wolfe-type second-order symmetric duality are given. We also note that second-order Mangasarian-type dual formulations are studied under ρ-convexity in [43] and under generalized representation conditions in [101]. Mond-Weir-type second-order primal and dual nonlinear programs are investigated in [7], where second-order symmetric duality results are established for these programs. Later on, the results obtained in [7] are generalized to nonlinear programs involving second-order pseudoinvexity in [103]. More recently, two new symmetric dual pairs are constructed in [70], where the objectives contain a support function and hence are nondifferentiable. Based on ideas in [70], the results obtained in [103] are extended to nondifferentiable nonlinear programming problems under second-order F-pseudoconvexity assumptions [42]. In addition, a pair of Mond-Weir-type multiobjective second-order symmetric dual programs is presented in [93], and the corresponding duality results are established. In this chapter, motivated by [42, 70, 93, 116, 117], a pair of second-order symmetric models for a class of nondifferentiable multiobjective programs is introduced. Weak duality, strong duality, and converse duality theorems are established under F-convexity assumptions.
7.2 Notations and Preliminaries Let f (x, y) be a real-valued thrice continuously differentiable function defined on an open set in Rn × Rm . Let ∇x f (x, y) denote the gradient vector of f with respect to x at (x, y). Also, let ∇xx f (x, y) denote the Hessian matrix with respect to x evaluated at (x, y). ∇y f (x, y) and ∇yy f (x, y) are defined similarly. © Springer Nature Singapore Pte Ltd. 2018 X. Yang, Generalized Preinvexity and Second Order Duality in Multiobjective Programming, Springer Optimization and Its Applications 142, https://doi.org/10.1007/978-981-13-1981-5_7
(∂/∂y_i)(∇_{yy} f(x, y)) is the m × m matrix obtained by differentiating each element of ∇_{yy} f(x, y) with respect to y_i, and (∇_{xx} f(x, y)q)_y denotes the matrix whose (i, j)th element is (∂/∂y_i)(∇_{xx} f(x, y)q)_j, where q ∈ Rⁿ.

Definition 7.2.1 ([70]) Let C be a compact convex set in Rⁿ. The support function s(x|C) of C is defined by

    s(x|C) := max{xᵀy : y ∈ C}.

The support function s(x|C), being convex and everywhere finite, has a subdifferential at every x in the sense of Rockafellar; that is, there exists z such that

    s(y|C) ≥ s(x|C) + zᵀ(y − x) for all y,

or equivalently, zᵀx = s(x|C). The subdifferential of s(x|C) is given by

    ∂s(x|C) := {z ∈ C : zᵀx = s(x|C)}.

For any set S ⊂ Rⁿ, the normal cone to S at a point x ∈ S is defined by

    N_S(x) := {y : yᵀ(z − x) ≤ 0 for all z ∈ S}.

It is readily verified that, for a compact convex set C, y is in N_C(x) if and only if s(y|C) = xᵀy, or equivalently, x is in the subdifferential of s at y.
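For concreteness, these objects can be computed directly for a simple polytope. The sketch below is an added illustration (the box C = [0, 1]² and the helper names are assumptions, not from the text): it evaluates s(x|C) over the vertices of the box, picks a maximizing vertex as a subgradient, and checks the subgradient inequality on a grid.

```python
# Illustration (assumed example): for the box C = [0,1]^2, the support function
# s(x|C) = max{x^T y : y in C} is attained at a vertex, and any maximizing
# vertex z satisfies z^T x = s(x|C), hence z is a subgradient of s at x.
from itertools import product

VERTICES = [v for v in product([0.0, 1.0], repeat=2)]  # extreme points of C

def support(x):
    """s(x|C) over the box C = [0,1]^2 (maximizing over vertices suffices)."""
    return max(x[0]*v[0] + x[1]*v[1] for v in VERTICES)

def subgradient(x):
    """A point z in the subdifferential of s(.|C) at x: a maximizing vertex."""
    return max(VERTICES, key=lambda v: x[0]*v[0] + x[1]*v[1])

x = (2.0, -1.0)
z = subgradient(x)
assert abs(support(x) - 2.0) < 1e-12                    # best vertex is (1, 0)
assert abs(z[0]*x[0] + z[1]*x[1] - support(x)) < 1e-12  # z^T x = s(x|C)
# Subgradient inequality s(y|C) >= s(x|C) + z^T (y - x) on a grid of y:
for y in product([-1.5, -0.5, 0.0, 0.7, 2.0], repeat=2):
    assert support(y) >= support(x) + z[0]*(y[0]-x[0]) + z[1]*(y[1]-x[1]) - 1e-12
```

The vertex enumeration works only because C here is a polytope; for a general compact convex C the maximum in the definition would need a convex solver.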
7.3 Mond-Weir-Type Symmetric Duality

We now state the following pair of second-order Mond-Weir-type nondifferentiable multiobjective programming problems with k objectives:

Primal problem (P)

    min H(x, y, z, p) = (H₁(x, y, z₁, p), H₂(x, y, z₂, p), · · · , H_k(x, y, z_k, p))
    subject to:
        ∑_{i=1}^{k} λ_i (∇_y f_i(x, y) − z_i + ∇_{yy} f_i(x, y) p_i) ≤ 0,      (7.1)
        yᵀ ∑_{i=1}^{k} λ_i (∇_y f_i(x, y) − z_i + ∇_{yy} f_i(x, y) p_i) ≥ 0,   (7.2)
        z_i ∈ D_i, i = 1, 2, · · · , k,                                        (7.3)
        λ > 0.                                                                 (7.4)
Dual problem (D)

    max G(u, v, ω, q) = (G₁(u, v, ω₁, q), G₂(u, v, ω₂, q), · · · , G_k(u, v, ω_k, q))
    subject to:
        ∑_{i=1}^{k} λ_i (∇_x f_i(u, v) + ω_i + ∇_{xx} f_i(u, v) q_i) ≥ 0,      (7.5)
        uᵀ ∑_{i=1}^{k} λ_i (∇_x f_i(u, v) + ω_i + ∇_{xx} f_i(u, v) q_i) ≤ 0,   (7.6)
        ω_i ∈ C_i, i = 1, 2, · · · , k,                                        (7.7)
        λ > 0,                                                                 (7.8)

where H_i(x, y, z_i, p) = f_i(x, y) + s(x|C_i) − yᵀz_i − ½ p_iᵀ∇_{yy} f_i(x, y) p_i, G_i(u, v, ω_i, q) = f_i(u, v) − s(v|D_i) + uᵀω_i − ½ q_iᵀ∇_{xx} f_i(u, v) q_i, and
(i) f_i (1 ≤ i ≤ k) is a thrice differentiable function from Rⁿ × Rᵐ → R;
(ii) q_i, ω_i (1 ≤ i ≤ k) are vectors in Rⁿ, p_i, z_i (1 ≤ i ≤ k) are vectors in Rᵐ, and λ_i ∈ R (1 ≤ i ≤ k); and
(iii) C_i and D_i (1 ≤ i ≤ k) are compact convex sets in Rⁿ and Rᵐ, respectively.
Define p = (p₁, p₂, · · · , p_k), q = (q₁, q₂, · · · , q_k), z = (z₁, z₂, · · · , z_k), ω = (ω₁, ω₂, · · · , ω_k), and λ = (λ₁, λ₂, · · · , λ_k)ᵀ.

Clearly, (P) and (D) belong to a special class of nonlinear multiobjective programming problems: the nondifferentiable terms, in the form of support functions, are included in the objective function of each problem. It can easily be seen that if we write the dual problem as a minimization problem, then its dual is just the primal problem written as a maximization problem. This establishes the symmetry of the two problems (P) and (D). We prove the following duality results for the pair of problems (P) and (D).

Theorem 7.3.1 (Weak Duality) Let (x, y, λ, z, p) be feasible for the primal problem (P), and let (u, v, λ, ω, q) be feasible for the dual problem (D). Suppose that there exist sublinear functionals F : Rⁿ × Rⁿ × Rⁿ → R and K : Rᵐ × Rᵐ × Rᵐ → R satisfying

    F_{x,u}(a) + aᵀu ≥ 0, ∀a ∈ R₊ⁿ,      (7.9)
    K_{v,y}(b) + bᵀy ≥ 0, ∀b ∈ R₊ᵐ.      (7.10)
Furthermore, assume that f_i(·, v) + (·)ᵀω_i (1 ≤ i ≤ k) is second-order F-convex at u with respect to q_i ∈ Rⁿ and that f_i(x, ·) − (·)ᵀz_i (1 ≤ i ≤ k) is second-order K-concave at y with respect to p_i ∈ Rᵐ. Then,

    H(x, y, z, p) ≰ G(u, v, ω, q).      (7.11)

Proof Assume, on the contrary, that (7.11) is not true, that is,

    (H₁(x, y, z₁, p), · · · , H_k(x, y, z_k, p)) ≤ (G₁(u, v, ω₁, q), · · · , G_k(u, v, ω_k, q)).

Then, since λ > 0, we have

    ∑_{i=1}^{k} λ_i [f_i(x, y) + s(x|C_i) − yᵀz_i − ½ p_iᵀ∇_{yy} f_i(x, y) p_i]
        < ∑_{i=1}^{k} λ_i [f_i(u, v) − s(v|D_i) + uᵀω_i − ½ q_iᵀ∇_{xx} f_i(u, v) q_i].      (7.12)
By the second-order F-convexity of f_i(·, v) + (·)ᵀω_i at u with respect to q_i ∈ Rⁿ, we obtain

    f_i(x, v) + xᵀω_i − f_i(u, v) − uᵀω_i + ½ q_iᵀ∇_{xx} f_i(u, v) q_i
        ≥ F_{x,u}(∇_x f_i(u, v) + ω_i + ∇_{xx} f_i(u, v) q_i).

Since λ > 0, it follows from Definition 6.2.5 that

    ∑_{i=1}^{k} λ_i [f_i(x, v) + xᵀω_i − f_i(u, v) − uᵀω_i + ½ q_iᵀ∇_{xx} f_i(u, v) q_i]
        ≥ F_{x,u}(∑_{i=1}^{k} λ_i [∇_x f_i(u, v) + ω_i + ∇_{xx} f_i(u, v) q_i]).

From the constraint (7.5), we obtain

    a := ∑_{i=1}^{k} λ_i (∇_x f_i(u, v) + ω_i + ∇_{xx} f_i(u, v) q_i) ∈ R₊ⁿ.      (7.13)
Thus, by (7.9), it holds that F_{x,u}(a) + aᵀu ≥ 0, i.e.,

    F_{x,u}(∑_{i=1}^{k} λ_i (∇_x f_i(u, v) + ω_i + ∇_{xx} f_i(u, v) q_i))
        ≥ −uᵀ ∑_{i=1}^{k} λ_i (∇_x f_i(u, v) + ω_i + ∇_{xx} f_i(u, v) q_i).      (7.14)

Combining (7.6), (7.13), and (7.14), we obtain

    ∑_{i=1}^{k} λ_i [f_i(x, v) + xᵀω_i − f_i(u, v) − uᵀω_i + ½ q_iᵀ∇_{xx} f_i(u, v) q_i] ≥ 0.      (7.15)
Next, by the second-order K-concavity of f_i(x, ·) − (·)ᵀz_i at y with respect to p_i ∈ Rᵐ, we have

    −f_i(x, v) + vᵀz_i + f_i(x, y) − yᵀz_i − ½ p_iᵀ∇_{yy} f_i(x, y) p_i
        ≥ K_{v,y}(−∇_y f_i(x, y) + z_i − ∇_{yy} f_i(x, y) p_i).

Since λ > 0, it follows from Definition 6.2.5 that

    ∑_{i=1}^{k} λ_i [f_i(x, y) − yᵀz_i − f_i(x, v) + vᵀz_i − ½ p_iᵀ∇_{yy} f_i(x, y) p_i]
        ≥ K_{v,y}(∑_{i=1}^{k} λ_i [−∇_y f_i(x, y) + z_i − ∇_{yy} f_i(x, y) p_i]).

By the constraint (7.1), we obtain

    b := −∑_{i=1}^{k} λ_i (∇_y f_i(x, y) − z_i + ∇_{yy} f_i(x, y) p_i) ∈ R₊ᵐ.      (7.16)
Thus, by (7.10), it holds that K_{v,y}(b) + bᵀy ≥ 0, i.e.,

    K_{v,y}(−∑_{i=1}^{k} λ_i (∇_y f_i(x, y) − z_i + ∇_{yy} f_i(x, y) p_i))
        ≥ yᵀ ∑_{i=1}^{k} λ_i (∇_y f_i(x, y) − z_i + ∇_{yy} f_i(x, y) p_i).      (7.17)

Combining (7.2), (7.16), and (7.17) yields

    ∑_{i=1}^{k} λ_i [f_i(x, y) − yᵀz_i − f_i(x, v) + vᵀz_i − ½ p_iᵀ∇_{yy} f_i(x, y) p_i] ≥ 0.      (7.18)
Finally, using xᵀω_i ≤ s(x|C_i) and vᵀz_i ≤ s(v|D_i) (1 ≤ i ≤ k), it follows from (7.15) and (7.18) that

    ∑_{i=1}^{k} λ_i [f_i(x, y) + s(x|C_i) − yᵀz_i − ½ p_iᵀ∇_{yy} f_i(x, y) p_i]
        ≥ ∑_{i=1}^{k} λ_i [f_i(u, v) − s(v|D_i) + uᵀω_i − ½ q_iᵀ∇_{xx} f_i(u, v) q_i],

which is a contradiction to (7.12).
Theorem 7.3.2 (Strong Duality) Let f : Rⁿ × Rᵐ → Rᵏ be thrice differentiable. Let (x̄, ȳ, λ̄, z̄, p̄) be a weak efficient solution of (P); fix λ = λ̄ in (D), and suppose that the following conditions are fulfilled:
(a) ∇_{yy} f_i is positive definite for all i = 1, 2, · · · , k and ∑_{i=1}^{k} λ̄_i p̄_iᵀ[∇_y f_i − z̄_i] ≥ 0, or ∇_{yy} f_i is negative definite for all i = 1, 2, · · · , k and ∑_{i=1}^{k} λ̄_i p̄_iᵀ[∇_y f_i − z̄_i] ≤ 0; and
(b) the set {∇_y f₁ − z̄₁ + ∇_{yy} f₁ p̄₁, ∇_y f₂ − z̄₂ + ∇_{yy} f₂ p̄₂, · · · , ∇_y f_k − z̄_k + ∇_{yy} f_k p̄_k} is linearly independent.
Here, f_i = f_i(x̄, ȳ) (1 ≤ i ≤ k). Then there exists ω_i ∈ C_i such that (x̄, ȳ, λ̄, ω, q = 0) is feasible for (D) and H(x̄, ȳ, λ̄, z̄, p̄) = G(x̄, ȳ, λ̄, ω, q). Moreover, if the hypotheses of Theorem 7.3.1 are satisfied for all feasible solutions of (P) and (D), then (x̄, ȳ, λ̄, ω, q) is a properly efficient solution of (D).
Proof Since (x̄, ȳ, λ̄, z̄, p̄) is a weak efficient solution of (P), it follows from the Fritz John optimality conditions [91] that there exist α ∈ R₊ᵏ, β ∈ R₊ᵐ, γ ∈ R₊, and
δ ∈ R₊ᵏ such that

    ∑_{i=1}^{k} α_i [∇_x f_i + μ_i − ½ (∇_{yy} f_i p̄_i)_x p̄_i]
        + ∑_{i=1}^{k} λ̄_i [∇_{yx} f_i + ½ (∇_{yy} f_i p̄_i)_x](β − γȳ) = 0,      (7.19)

    ∑_{i=1}^{k} α_i [∇_y f_i − z̄_i − ½ (∇_{yy} f_i p̄_i)_y p̄_i]
        + ∑_{i=1}^{k} λ̄_i [∇_{yy} f_i + ½ (∇_{yy} f_i p̄_i)_y](β − γȳ)
        − γ ∑_{i=1}^{k} λ̄_i (∇_y f_i − z̄_i + ∇_{yy} f_i p̄_i) = 0,      (7.20)

    (β − γȳ)ᵀ(∇_y f_i − z̄_i + ∇_{yy} f_i p̄_i) − δ_i = 0, i = 1, 2, · · · , k,      (7.21)

    [(β − γȳ)λ̄_i − α_i p̄_i]ᵀ∇_{yy} f_i = 0, i = 1, 2, · · · , k,      (7.22)

    βᵀ ∑_{i=1}^{k} λ̄_i (∇_y f_i − z̄_i + ∇_{yy} f_i p̄_i) = 0,      (7.23)

    γ ȳᵀ ∑_{i=1}^{k} λ̄_i (∇_y f_i − z̄_i + ∇_{yy} f_i p̄_i) = 0,      (7.24)

    δᵀλ̄ = 0,      (7.25)

    α_i ȳ − λ̄_i (β − γȳ) ∈ N_{D_i}(z̄_i), i = 1, 2, · · · , k,      (7.26)

    μ_i ∈ C_i, μ_iᵀx̄ = s(x̄|C_i), i = 1, 2, · · · , k,      (7.27)

    (α, β, γ, δ) ≠ 0.      (7.28)

As λ̄ > 0, it follows from (7.25) that δ = 0. Therefore, from (7.21), we obtain

    (β − γȳ)ᵀ(∇_y f_i − z̄_i + ∇_{yy} f_i p̄_i) = 0, i = 1, 2, · · · , k.      (7.29)
As ∇_{yy} f_i is positive definite or negative definite for i = 1, 2, · · · , k by condition (a) of the theorem, it follows from (7.22) that

    (β − γȳ)λ̄_i = α_i p̄_i, i = 1, 2, · · · , k.      (7.30)

We claim that α_i ≠ 0, i = 1, 2, · · · , k. Indeed, if α_{k₀} = 0 for some k₀, then it follows from λ̄_{k₀} > 0 and (7.30) that

    β = γȳ.      (7.31)

From (7.20), we get

    ∑_{i=1}^{k} (α_i − γλ̄_i)(∇_y f_i − z̄_i) + ∑_{i=1}^{k} λ̄_i ∇_{yy} f_i (β − γȳ − γp̄_i)
        + ½ ∑_{i=1}^{k} (∇_{yy} f_i p̄_i)_y [(β − γȳ)λ̄_i − α_i p̄_i] = 0.

By (7.30), we obtain

    ∑_{i=1}^{k} (α_i − γλ̄_i)(∇_y f_i − z̄_i + ∇_{yy} f_i p̄_i) + ½ ∑_{i=1}^{k} λ̄_i (∇_{yy} f_i p̄_i)_y (β − γȳ) = 0.

Using (7.31), it follows that

    ∑_{i=1}^{k} (α_i − γλ̄_i)(∇_y f_i − z̄_i + ∇_{yy} f_i p̄_i) = 0.

By condition (b), we get

    α_i = γλ̄_i, i = 1, 2, · · · , k.      (7.32)

Since λ̄_i > 0, i = 1, 2, · · · , k, and α_{k₀} = 0 for some k₀, it follows that γ = 0. Now, from (7.31), (7.32), and γ = 0, we have β = 0 and α_i = 0, i = 1, 2, · · · , k, which contradicts (7.28). Therefore, α_i > 0, i = 1, 2, · · · , k.

Premultiplying (7.29) by λ̄_i, using (7.30), and noting that α_i > 0, i = 1, 2, · · · , k, we obtain

    p̄_iᵀ(∇_y f_i − z̄_i + ∇_{yy} f_i p̄_i) = 0, i = 1, 2, · · · , k.

Since λ̄ > 0, it is clear that

    ∑_{i=1}^{k} λ̄_i p̄_iᵀ(∇_y f_i − z̄_i) + ∑_{i=1}^{k} λ̄_i (p̄_iᵀ∇_{yy} f_i p̄_i) = 0.      (7.33)
This implies that p̄_i = 0 for i = 1, 2, · · · , k. If not, then, by condition (a) of the theorem, we obtain

    ∑_{i=1}^{k} λ̄_i p̄_iᵀ(∇_y f_i − z̄_i) + ∑_{i=1}^{k} λ̄_i (p̄_iᵀ∇_{yy} f_i p̄_i) ≠ 0,

contradicting (7.33). Hence, p̄_i = 0 for i = 1, 2, · · · , k. Since λ̄_i > 0 and p̄_i = 0 for i = 1, 2, · · · , k, it follows from (7.30) that

    β = γȳ.      (7.34)
Using (7.20), we obtain

    ∑_{i=1}^{k} (α_i − γλ̄_i)(∇_y f_i − z̄_i) = 0.

By condition (b) of the theorem, we get

    α_i = γλ̄_i, i = 1, 2, · · · , k.      (7.35)

Therefore, γ > 0. From (7.19), (7.34), (7.35), and γ > 0, we have

    ∑_{i=1}^{k} λ̄_i (∇_x f_i + μ_i) = 0.

Now, taking ω_i := μ_i ∈ C_i for i = 1, 2, · · · , k, we find that (u, v, λ̄, ω, q) = (x̄, ȳ, λ̄, ω, q = 0) satisfies (7.5), (7.6), (7.7), and (7.8), the constraints of (D). Therefore, it is a feasible solution of the dual problem (D). Furthermore, by (7.34) and the fact that λ̄_i > 0 for i = 1, 2, · · · , k, we see that (7.26) implies ȳ ∈ N_{D_i}(z̄_i). Thus,

    ȳᵀz̄_i = s(ȳ|D_i).      (7.36)

Therefore, by (7.27) and (7.36), we get

    f_i(x̄, ȳ) + s(x̄|C_i) − ȳᵀz̄_i − ½ p̄_iᵀ∇_{yy} f_i(x̄, ȳ) p̄_i
        = f_i(x̄, ȳ) − s(ȳ|D_i) + x̄ᵀω_i − ½ q_iᵀ∇_{xx} f_i(x̄, ȳ) q_i.
That is, the objective value of (P) at (x̄, ȳ, λ̄, z̄, p̄) and the objective value of (D) at (x̄, ȳ, λ̄, ω, q) are equal, meaning that

    H(x̄, ȳ, λ̄, z̄, p̄) = G(x̄, ȳ, λ̄, ω, q).      (7.37)

If (x̄, ȳ, λ̄, ω, q) is not an efficient solution of (D), then there exists a feasible solution (u, v, λ̄, ω, q) of (D) such that, by (7.37),

    H(x̄, ȳ, λ̄, z̄, p̄) ≤ G(u, v, λ̄, ω, q),

which is a contradiction to Theorem 7.3.1. If (x̄, ȳ, λ̄, ω, q) is not a properly efficient solution of (D), then there exist a feasible solution (u, v, λ̄, ω, q) of (D) and some i such that

    f_i(u, v) − s(v|D_i) + uᵀω_i − ½ q_iᵀ∇_{xx} f_i(u, v) q_i > f_i(x̄, ȳ) + s(x̄|C_i) − ȳᵀz̄_i

and

    f_i(u, v) − s(v|D_i) + uᵀω_i − ½ q_iᵀ∇_{xx} f_i(u, v) q_i − f_i(x̄, ȳ) − s(x̄|C_i) + ȳᵀz̄_i
        > M [f_j(x̄, ȳ) + s(x̄|C_j) − ȳᵀz̄_j − f_j(u, v) + s(v|D_j) − uᵀω_j + ½ q_jᵀ∇_{xx} f_j(u, v) q_j]

for all M > 0 and all j satisfying

    f_j(x̄, ȳ) + s(x̄|C_j) − ȳᵀz̄_j > f_j(u, v) − s(v|D_j) + uᵀω_j − ½ q_jᵀ∇_{xx} f_j(u, v) q_j.

This means that f_i(u, v) − s(v|D_i) + uᵀω_i − ½ q_iᵀ∇_{xx} f_i(u, v) q_i − f_i(x̄, ȳ) − s(x̄|C_i) + ȳᵀz̄_i can be made arbitrarily large. Thus, for any λ̄ > 0,

    ∑_{i=1}^{k} λ̄_i (f_i(u, v) − s(v|D_i) + uᵀω_i − ½ q_iᵀ∇_{xx} f_i(u, v) q_i)
        > ∑_{i=1}^{k} λ̄_i (f_i(x̄, ȳ) + s(x̄|C_i) − ȳᵀz̄_i),

which again contradicts Theorem 7.3.1.
Theorem 7.3.3 (Converse Duality) Let f : Rⁿ × Rᵐ → Rᵏ be thrice differentiable. Let (ū, v̄, λ̄, ω̄, q̄) be a weak efficient solution of (D) and set λ = λ̄ in (P). Suppose that the following conditions are satisfied:
(a) ∇_{xx} f_i is positive definite for all i = 1, 2, · · · , k and ∑_{i=1}^{k} λ̄_i q̄_iᵀ[∇_x f_i − ω̄_i] ≥ 0, or ∇_{xx} f_i is negative definite for all i = 1, 2, · · · , k and ∑_{i=1}^{k} λ̄_i q̄_iᵀ[∇_x f_i − ω̄_i] ≤ 0; and
(b) the set {∇_x f₁ − ω̄₁ + ∇_{xx} f₁ q̄₁, ∇_x f₂ − ω̄₂ + ∇_{xx} f₂ q̄₂, · · · , ∇_x f_k − ω̄_k + ∇_{xx} f_k q̄_k} is linearly independent.
Here, f_i = f_i(ū, v̄) (1 ≤ i ≤ k). Then there exists z̄_i ∈ D_i such that (ū, v̄, λ̄, z̄, p̄ = 0) is a feasible solution of (P) and H(ū, v̄, λ̄, z̄, p̄) = G(ū, v̄, λ̄, ω̄, q̄). Moreover, if the hypotheses of Theorem 7.3.1 are satisfied for all feasible solutions of (P) and (D), then (ū, v̄, λ̄, z̄, p̄) is a properly efficient solution of (P).

Proof The proof follows from an argument similar to that of Theorem 7.3.2.
7.4 Remarks and Examples

Our results in this chapter extend, unify, and improve the works in [7, 32, 42, 71, 93], and [103]. Details are listed below:
(i) If C = {0}, D = {0}, and k = 1, then (P) and (D) reduce to the problems studied in [7] and [103].
(ii) If C = {0} and D = {0}, then (P) and (D) reduce to the problems studied in [93].
(iii) If k = 1, then (P) and (D) reduce to the problems studied in [42].
(iv) If p = q = 0 and k = 1, then (P) and (D) become the pair of symmetric nondifferentiable dual programs considered in [71].
(v) If p = q = 0, k = 1, C = {0}, and D = {0}, then (P) and (D) become the pair of single-objective symmetric differentiable dual programs considered in [17].
(vi) If p = q = 0, C = {0}, D = {0}, and F_{x,u}(a) := η(x, u)ᵀa, then (P) and (D) become the pair of multiobjective symmetric differentiable dual programs considered in [32].

In [7, 93], and [103], it is assumed that if the matrix (f_{yy}(x, y)p̄)_y or ∑_{i=1}^{k} λ̄_i (f_{yy}(x, y)p̄)_y is positive or negative definite, then p̄ = 0 or p̄_i = 0 for i = 1, 2, · · · , k. Clearly, this assumption and the result p̄ = 0, or p̄_i = 0 for i = 1, 2, · · · , k, are inconsistent. The models and results studied in this chapter can be reduced to first-order models and corresponding results. Therefore, the problem mentioned above, which is contained in [7, 103], and [93], is settled.

Example 7.4.1 Let n = m = 1, f₁(x, y) = x² − y², f₂(x, y) = x − y, C₁ = [0, 1], C₂ = {0}, D₁ = {0}, D₂ = [0, 1]. Then s(x|C₁) = (x + |x|)/2, s(x|C₂) = 0, s(y|D₁) = 0, and s(y|D₂) = (y + |y|)/2. Problems (P) and (D) become

Problem (P′)

    min H(x, y, z, p) = (x² − y² + (x + |x|)/2 + p², x − y − yz)
    subject to:
        λ₁(−2y − 2p) + λ₂(−1 − z) ≤ 0,
        y[λ₁(−2y − 2p) + λ₂(−1 − z)] ≥ 0,
        z ∈ [0, 1], λ₁ > 0, λ₂ > 0,
Problem (D′)

    max G(u, v, w, q) = (u² − v² + uw − q², u − v + (v + |v|)/2)
    subject to:
        λ₁(2u + w + 2q) + λ₂ ≥ 0,
        u[λ₁(2u + w + 2q) + λ₂] ≤ 0,
        w ∈ [0, 1], λ₁ > 0, λ₂ > 0.

We can prove symmetric duality for (P′) and (D′) using the results of this chapter. However, symmetric duality for (P′) and (D′) cannot be proved using the results presented in [7, 32, 42, 71, 93, 103], because (P′) and (D′) are a pair of multiobjective programming problems with nondifferentiable terms s(x|C) or s(v|D).
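The weak duality claim of Theorem 7.3.1 can be spot-checked numerically on this example. The sketch below is an added illustration, not part of the text: the sample points, the shared multipliers λ = (1, 1), and the primal coefficient −2p (taken from ∇_{yy} f₁ = −2 in the general constraint (7.1)) are assumptions made here. It verifies that no sampled feasible dual value dominates a sampled feasible primal value.

```python
# Spot-check of weak duality for Example 7.4.1 (illustrative sketch; the
# feasible sample points and shared multipliers below are assumptions).
L1, L2 = 1.0, 1.0  # shared multipliers lambda_1, lambda_2 > 0

def primal_feasible(x, y, z, p):
    c = L1 * (-2*y - 2*p) + L2 * (-1 - z)  # sum of lambda_i(grad_y f_i - z_i + ...)
    return c <= 1e-12 and y * c >= -1e-12 and 0 <= z <= 1

def H(x, y, z, p):
    return (x*x - y*y + (x + abs(x))/2 + p*p, x - y - y*z)

def dual_feasible(u, v, w, q):
    d = L1 * (2*u + w + 2*q) + L2
    return d >= -1e-12 and u * d <= 1e-12 and 0 <= w <= 1

def G(u, v, w, q):
    return (u*u - v*v + u*w - q*q, u - v + (v + abs(v))/2)

primal_pts = [(1, 0, 0, 0), (0, 0, 0, 0), (0, 0.5, 0, -1)]
dual_pts = [(0, 0, 0, 0), (0, 1, 1, 0), (0, -1, 0, 0), (-1, -2, 1, 1)]
assert all(primal_feasible(*pt) for pt in primal_pts)
assert all(dual_feasible(*pt) for pt in dual_pts)

def dominated(h, g):  # h <= g componentwise with at least one strict inequality
    return all(a <= b for a, b in zip(h, g)) and any(a < b for a, b in zip(h, g))

# Weak duality (7.11): no sampled dual value dominates a sampled primal value.
assert not any(dominated(H(*pp), G(*dp)) for pp in primal_pts for dp in dual_pts)
```

A check on a handful of points is of course no proof; it only illustrates the relation H ≰ G asserted by the theorem.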
Chapter 8
Multiobjective Second-Order Duality with Cone Constraints
8.1 Introduction

Duality theory plays an important role in studying the solutions of nonlinear programming problems; it has attracted much interest, and many contributions have been made to its development over the past few decades. Many authors have formulated different duality models, such as the Wolfe dual [58] and the Mond-Weir dual [72], for nonlinear programming problems. In particular, in 1996, Nanda and Das [77] considered the following scalar nonlinear programming problem with cone constraints:

    (P)    min f(x)
           s.t. g(x) ∈ C₂*, x ∈ C₁,

where f : Rⁿ → R and g : Rⁿ → Rᵐ are twice continuously differentiable functions and C₁ ⊂ Rⁿ and C₂ ⊂ Rᵐ are two closed convex cones with nonempty interiors. Furthermore, let S = {x ∈ Rⁿ : g(x) ∈ C₂*, x ∈ C₁} be the feasible set of (P). Nanda and Das established four types of duality models for problem (P), motivated by the work of Bazaraa and Goode [5] and Hanson and Mond [39]. Later on, Chandra and Abha [14] pointed out that the construction of the dual models of Nanda and Das [77] is restrictive and that there are some shortcomings in the proofs of the duality theorems; the corrected versions are given below.

    (ND)₁    max f(u) + yᵀg(u) − uᵀ∇(f + yᵀg)(u)
             s.t. −∇(f + yᵀg)(u) ∈ C₁*, y ∈ C₂.
    (ND)₂    max f(u)
             s.t. −∇(f + yᵀg)(u) ∈ C₁*,
                  yᵀg(u) − uᵀ∇(f + yᵀg)(u) ≥ 0, y ∈ C₂.

    (ND)₃    max f(u) − uᵀ∇(f + yᵀg)(u)
             s.t. −∇(f + yᵀg)(u) ∈ C₁*,
                  yᵀg(u) ≥ 0, y ∈ C₂.

    (ND)₄    max f(u) + yᵀg(u)
             s.t. −∇(f + yᵀg)(u) ∈ C₁*,
                  uᵀ∇(f + yᵀg)(u) ≤ 0, y ∈ C₂.
Furthermore, they also established the corresponding weak and strong duality theorems. It is well known that the weak duality theorem shows that the objective value of a feasible solution to the primal problem is not less than that of any feasible dual solution, which provides a lower bound for the primal optimal value whenever a feasible dual solution is known. The strong duality theorem implies that, whenever the primal problem has an optimal solution, the dual problem has one as well and there is no duality gap. However, the essential but most difficult part of duality theory is the converse duality theorem. It deals with how to obtain a primal solution from a dual solution, with no duality gap between the primal and dual problems, under suitable conditions. To handle the nonlinear problem (P) and its four duality models mentioned above, Yang et al. [122] established converse duality theorems under suitable assumptions, such as nonsingularity and positive/negative definiteness.

In order to provide tighter bounds for the value of the objective function than first-order duality when approximations are used, Mangasarian [59] introduced the concept of second-order duality. Its study is significant because of its computational advantage over first-order duality (see [68, 72, 77, 82, 104, 114, 121-123]). On the other hand, multiobjective programming problems appear naturally and frequently in various areas of daily life. Thus, it is valuable to investigate the duality theory of multiobjective programming. We note that the results on second-order duality for multiobjective programming are mostly on symmetric duality (see [114, 121]). More precisely, Yang et al. [124, 125] studied second-order
symmetric dual programs and established duality theorems under F-convexity conditions. Following the work of Yang et al. [124, 125], Mishra and Lai [63] established second-order symmetric dual results under cone second-order pseudoinvexity for multiobjective programs; Gulati et al. [33, 34] obtained duality theorems for second-order multiobjective symmetric dual problems under η-bonvexity/η-pseudobonvexity assumptions; Kailey et al. [44] studied second-order multiobjective mixed symmetric duality under η-bonvexity/η-pseudobonvexity; and Gupta and Kailey [35] investigated second-order symmetric dual programs under generalized cone-invexity.

To the best of our knowledge, there are only very few works dealing with the nonsymmetric type of second-order duality for multiobjective programming with cone constraints. Furthermore, unlike in linear programming, a majority of dual formulations in nonlinear programming do not possess the symmetry property. Therefore, in Chap. 8, we focus on second-order nonsymmetric duality for a class of multiobjective programming problems with cone constraints. Based on the first-order duality results of Chandra and Abha [14] and Yang et al. [122] and the second-order duality theorems of Yang et al. [123], Tang et al. [94], and Ahmad and Agarwal [2] for nonlinear programming with cone constraints, four types of second-order duality models are introduced. Weak duality theorems are presented under assumptions of F-pseudoconvexity and F-quasiconvexity, which are more general than invexity. Furthermore, strong duality theorems are established by using the characterization of efficient solutions of Chankong and Haimes [18] and the generalized Fritz John-type conditions in [5]. Most importantly, converse duality theorems, which play a crucial role in duality theory, are discussed under certain suitable assumptions for the primal problem and each of its four second-order duality models.
Furthermore, some deficiencies in the recent work on the second-order converse duality results obtained by Ahmad and Agarwal [2] are discussed.
8.2 Preliminaries

We first introduce the following second-order convexity notions for a twice differentiable function h : Rⁿ → R with respect to a sublinear functional in its third argument, F : Rⁿ × Rⁿ × Rⁿ → R. For convenience, we denote F_{x,u}(a) = F(x, u, a). Furthermore, denote by ∇_u h(u) = ∇h(u) and ∇_{uu} h(u) = ∇²h(u) the gradient and the Hessian matrix of the function h evaluated at u, respectively.

Definition 8.2.1 h is said to be second-order F-pseudoconvex at u ∈ Rⁿ if, for (x, p) ∈ Rⁿ × Rⁿ,

    F_{x,u}[∇_u h(u) + ∇_{uu} h(u)p] ≥ 0 ⇒ h(x) ≥ h(u) − ½ pᵀ∇_{uu} h(u)p.
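As a concrete illustration (added here; the choices h(x) = x² and F_{x,u}(a) = a(x − u), i.e., the invex case η(x, u) = x − u, are assumptions, not from the text), the implication in Definition 8.2.1 can be verified exhaustively on a one-dimensional grid:

```python
# Grid check (illustrative sketch) that h(x) = x^2 is second-order
# F-pseudoconvex at every sampled u for F_{x,u}(a) = a*(x - u): whenever
# F_{x,u}[h'(u) + h''(u) p] >= 0, we must have h(x) >= h(u) - 0.5*p^2*h''(u).
def h(t):   return t * t
def dh(t):  return 2 * t
def ddh(t): return 2.0  # constant Hessian of x^2

grid = [i / 2.0 for i in range(-8, 9)]  # points -4.0, -3.5, ..., 4.0
violations = 0
for u in grid:
    for x in grid:
        for p in grid:
            F = (x - u) * (dh(u) + ddh(u) * p)  # sublinear in its 3rd argument
            if F >= 0 and h(x) < h(u) - 0.5 * p * p * ddh(u) - 1e-9:
                violations += 1
assert violations == 0
```

In fact the implication holds for this h and F at all real u, x, p (the premise forces either |x| ≥ |u| or p² ≥ u²); the grid merely makes the definition tangible.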
Definition 8.2.2 h is said to be second-order F-quasiconvex at u ∈ Rⁿ if, for (x, p) ∈ Rⁿ × Rⁿ,

    h(x) ≤ h(u) − ½ pᵀ∇_{uu} h(u)p ⇒ F_{x,u}[∇_u h(u) + ∇_{uu} h(u)p] ≤ 0.

We consider the following multiobjective programming problem with cone constraints, which is the extension of the scalar programming problem (P):

    (MOP)    min f(x)
             s.t. x ∈ S,
where f = (f₁, f₂, . . . , f_p) : Rⁿ → Rᵖ is a vector-valued function each of whose component functions is twice continuously differentiable, and the other assumptions on the constraint function g, C₁, and C₂ are the same as for problem (P). For (MOP), we need the following notation: for each x, y, u ∈ Rⁿ and α ∈ Rᵖ,

    αᵀ∇f(u) := ∑_{i=1}^{p} α_i ∇f_i(u);
    ∇f(u)x := [∇f₁(u)ᵀx, . . . , ∇f_p(u)ᵀx]ᵀ;
    xᵀ∇²f(u)y := [xᵀ∇²f₁(u)y, . . . , xᵀ∇²f_p(u)y]ᵀ.

The solutions considered in this chapter are defined in the sense of efficiency, as given below.

Definition 8.2.3 (See [18]) A point x̄ ∈ S is said to be an efficient solution of (MOP) if there exists no other x ∈ S such that f(x) ≤ f(x̄).

We shall use the following characterization of efficiency from [18] (Theorem 4.11).

Lemma 8.2.1 x̄ is an efficient solution of (MOP) if and only if x̄ solves

    P_k(x̄)    min f_k(x)
              s.t. f_j(x) ≤ f_j(x̄) for all j ≠ k,
                   g(x) ∈ C₂*,
                   x ∈ C₁,

for all k = 1, . . . , p.

Motivated by the first-order duals of Chandra and Abha [14] and the second-order duals of Yang et al. [123] for nonlinear programming with cone constraints, we now introduce four types of second-order nonsymmetric duality models for the multiobjective programming problem with cone constraints (MOP) as follows:
    (ND′)₁    max f(u) + {yᵀg(u) − ½ pᵀ∇²(λᵀf + yᵀg)(u)p
                          − uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]}e
              s.t. −[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] ∈ C₁*,
                   y ∈ C₂, λ > 0, λᵀe = 1.

    (ND′)₂    max f(u) − {½ pᵀ∇²(λᵀf)(u)p}e
              s.t. −[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] ∈ C₁*,
                   yᵀg(u) − ½ pᵀ∇²(yᵀg)(u)p − uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] ≥ 0,
                   y ∈ C₂, λ > 0, λᵀe = 1.

    (ND′)₃    max f(u) − {½ pᵀ∇²(λᵀf)(u)p + uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]}e
              s.t. −[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] ∈ C₁*,
                   yᵀg(u) − ½ pᵀ∇²(yᵀg)(u)p ≥ 0,
                   y ∈ C₂, λ > 0, λᵀe = 1.

    (ND′)₄    max f(u) + {yᵀg(u) − ½ pᵀ∇²(λᵀf + yᵀg)(u)p}e
              s.t. −[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] ∈ C₁*,
                   uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] ≤ 0,
                   y ∈ C₂, λ > 0, λᵀe = 1.
Remark 8.2.1 When f : Rn → R (i.e., (MOP ) reduces to (P )), the second-order duality models (i.e., (N D )1 , (N D )2 , (N D )3 , and (N D )4 ) for (MOP ) reduce to the second-order duality models introduced by Yang et al. [123] for nonlinear programming with cone constraints. In addition, if p = 0, the above second-order duality models reduce to the first-order duality models (i.e., (N D)1 , (N D)2 , (N D)3 , and (N D)4 ) proposed by Chandra and Abha [14], respectively. In the next three sections, we establish weak, strong, and converse duality theorems between (MOP ) and (N D )1 , (N D )2 , (N D )3 , and (N D )4 , respectively.
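Definition 8.2.3 lends itself to a direct enumerative test of efficiency on a finite feasible sample. The sketch below is an added toy illustration (the objective f(x) = (x², (x − 2)²) and the sample set are assumptions made here): a point is kept exactly when no other sampled point dominates it.

```python
# Enumerative check of efficiency (Definition 8.2.3) on a finite sample
# (illustrative toy problem: f(x) = (x^2, (x - 2)^2) over a small set S).
def f(x):
    return (x * x, (x - 2) ** 2)

def dominates(a, b):  # f(a) <= f(b) componentwise, strictly in one component
    fa, fb = f(a), f(b)
    return all(p <= q for p, q in zip(fa, fb)) and any(p < q for p, q in zip(fa, fb))

S = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
efficient = [x for x in S if not any(dominates(y, x) for y in S)]
assert efficient == [0.0, 0.5, 1.0, 1.5, 2.0]  # x = 3 is dominated by x = 2
```

On this sample the whole trade-off segment [0, 2] is efficient: moving within it improves one objective only at the expense of the other, whereas x = 3 is worse than x = 2 in both objectives.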
8.3 Weak Duality

In this section, we present the weak duality results, which give the relationships between the objective values of feasible solutions to the primal problem (MOP) and those of the respective duality models (ND′)₁, (ND′)₂, (ND′)₃, and (ND′)₄ under appropriate conditions, such as second-order cone-pseudoconvexity and second-order cone-quasiconvexity.

Theorem 8.3.1 (Weak duality for (MOP) and (ND′)₁) Let x and (u, y, λ, p) be feasible for (MOP) and (ND′)₁, respectively. If there exists a sublinear functional F : Rⁿ × Rⁿ × Rⁿ → R such that λᵀf(·) + yᵀg(·) + (·)ᵀv is second-order F-pseudoconvex at u with v = −[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p], then

    f(x) ≰ f(u) + {yᵀg(u) − ½ pᵀ∇²(λᵀf + yᵀg)(u)p − uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]}e.

Proof Suppose to the contrary that

    f(x) ≤ f(u) + {yᵀg(u) − ½ pᵀ∇²(λᵀf + yᵀg)(u)p − uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]}e.

Since λ > 0, it follows that

    λᵀf(x) < λᵀf(u) + yᵀg(u) + uᵀv − ½ pᵀ∇²(λᵀf + yᵀg)(u)p.

On the other hand, ∇(λᵀf + yᵀg)(u) + v + ∇²(λᵀf + yᵀg)(u)p = 0, so, by the sublinearity of F, F_{x,u}[∇(λᵀf + yᵀg)(u) + v + ∇²(λᵀf + yᵀg)(u)p] = F_{x,u}(0) = 0 ≥ 0. The second-order F-pseudoconvexity of λᵀf(·) + yᵀg(·) + (·)ᵀv at u then yields

    λᵀf(x) + yᵀg(x) + xᵀv ≥ λᵀf(u) + yᵀg(u) + uᵀv − ½ pᵀ∇²(λᵀf + yᵀg)(u)p.

From the constraints of (MOP) and (ND′)₁, we have yᵀg(x) ≤ 0 and xᵀv ≤ 0, since x ∈ C₁, g(x) ∈ C₂*, y ∈ C₂, and v ∈ C₁*. Hence

    λᵀf(x) ≥ λᵀf(u) + yᵀg(u) + uᵀv − ½ pᵀ∇²(λᵀf + yᵀg)(u)p,

a contradiction. Therefore,

    f(x) ≰ f(u) + {yᵀg(u) − ½ pᵀ∇²(λᵀf + yᵀg)(u)p − uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]}e.

Theorem 8.3.2 (Weak duality for (MOP) and (ND′)₂) Let x and (u, y, λ, p) be feasible for (MOP) and (ND′)₂, respectively. If there exists a sublinear functional F : Rⁿ × Rⁿ × Rⁿ → R such that λᵀf(·) is second-order F-pseudoconvex at u and yᵀg(·) + (·)ᵀv is second-order F-quasiconvex at u with v = −[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p], then

    f(x) ≰ f(u) − {½ pᵀ∇²(λᵀf)(u)p}e.

Proof Suppose to the contrary that f(x) ≤ f(u) − {½ pᵀ∇²(λᵀf)(u)p}e. Since λ > 0, we have λᵀf(x) < λᵀf(u) − ½ pᵀ∇²(λᵀf)(u)p, and the second-order F-pseudoconvexity of λᵀf(·) at u gives

    F_{x,u}[∇(λᵀf)(u) + ∇²(λᵀf)(u)p] < 0.

By the sublinearity of F, F_{x,u}[−(∇(λᵀf)(u) + ∇²(λᵀf)(u)p)] > 0; note that −[∇(λᵀf)(u) + ∇²(λᵀf)(u)p] = ∇(yᵀg)(u) + v + ∇²(yᵀg)(u)p. From the second-order F-quasiconvexity of yᵀg(·) + (·)ᵀv at u with v = −[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p], we have

    yᵀg(x) − xᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]
        > yᵀg(u) − uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] − ½ pᵀ∇²(yᵀg)(u)p.      (8.5)

Notice that 0 ≥ yᵀg(x) − xᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p], which together with (8.5) implies that

    yᵀg(u) − ½ pᵀ∇²(yᵀg)(u)p − uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] < 0.

This contradicts the feasibility of (u, λ, y, p) for (ND′)₂. Therefore,

    f(x) ≰ f(u) − {½ pᵀ∇²(λᵀf)(u)p}e.
Remark 8.3.2 Similarly, Theorem 8.3.2 reduces to Theorem 2 in Yang et al. [123] when (MOP) reduces to (P). Furthermore, if we let p = 0 and F_{x,u}(a) = η(x, u)ᵀa, then second-order F-pseudoconvexity and second-order F-quasiconvexity reduce to pseudoinvexity and quasiinvexity, respectively. Hence, Theorem 8.3.2 reduces to Theorem 2 established by Chandra and Abha [14].

Theorem 8.3.3 (Weak duality for (MOP) and (ND′)₃) Let x and (u, y, λ, p) be feasible for (MOP) and (ND′)₃, respectively. If there exists a sublinear functional F : Rⁿ × Rⁿ × Rⁿ → R such that λᵀf(·) + (·)ᵀv is second-order F-pseudoconvex at u for v = −[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] and yᵀg(·) is second-order F-quasiconvex at u, then

    f(x) ≰ f(u) − {½ pᵀ∇²(λᵀf)(u)p + uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]}e.

Proof Suppose to the contrary that

    f(x) ≤ f(u) − {½ pᵀ∇²(λᵀf)(u)p + uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]}e.

Since λ > 0, it follows that

    λᵀf(x) < λᵀf(u) + uᵀv − ½ pᵀ∇²(λᵀf)(u)p.

From the constraints of (MOP) and (ND′)₃, xᵀv ≤ 0, so λᵀf(x) + xᵀv < λᵀf(u) + uᵀv − ½ pᵀ∇²(λᵀf)(u)p, and the second-order F-pseudoconvexity of λᵀf(·) + (·)ᵀv at u gives

    F_{x,u}[∇(λᵀf)(u) + v + ∇²(λᵀf)(u)p] < 0.

By the sublinearity of F, F_{x,u}[−(∇(λᵀf)(u) + v + ∇²(λᵀf)(u)p)] > 0, that is, F_{x,u}[∇(yᵀg)(u) + ∇²(yᵀg)(u)p] > 0. From the definition of second-order F-quasiconvexity of yᵀg(·) at u, we then have

    yᵀg(x) > yᵀg(u) − ½ pᵀ∇²(yᵀg)(u)p.

Notice that yᵀg(x) ≤ 0; thus

    yᵀg(u) − ½ pᵀ∇²(yᵀg)(u)p < 0,

which contradicts the constraints of (ND′)₃. Therefore,

    f(x) ≰ f(u) − {½ pᵀ∇²(λᵀf)(u)p + uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]}e.
Remark 8.3.3 As mentioned in Remark 8.3.2, Theorem 8.3.3 reduces to Theorem 3 obtained by Yang et al. [123] when (MOP) reduces to (P), and, if p = 0 and F_{x,u}(a) = η(x, u)ᵀa, Theorem 8.3.3 reduces to Theorem 3 established by Chandra and Abha [14].

Theorem 8.3.4 (Weak duality for (MOP) and (ND′)₄) Let x and (u, y, λ, p) be feasible for (MOP) and (ND′)₄, respectively. If there exists a sublinear functional F : Rⁿ × Rⁿ × Rⁿ → R satisfying

    F_{x,u}(a) + aᵀu ≤ 0 for all a ∈ C₁*,      (A)

and λᵀf(·) + yᵀg(·) is second-order F-pseudoconvex at u, then

    f(x) ≰ f(u) + {yᵀg(u) − ½ pᵀ∇²(λᵀf + yᵀg)(u)p}e.

Proof Suppose to the contrary that

    λᵀf(x) < λᵀf(u) + yᵀg(u) − ½ pᵀ∇²(λᵀf + yᵀg)(u)p.

Combining this with the condition yᵀg(x) ≤ 0, we get

    λᵀf(x) + yᵀg(x) < λᵀf(u) + yᵀg(u) − ½ pᵀ∇²(λᵀf + yᵀg)(u)p.

Since λᵀf(·) + yᵀg(·) is second-order F-pseudoconvex at u, it is easy to verify that

    F_{x,u}(∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p) < 0.      (8.6)

On the other hand, condition (A) and the constraints of (ND′)₄ imply that

    F_{x,u}(−[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]) ≤ uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] ≤ 0.

It follows from the sublinearity of F that

    F_{x,u}(∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p) ≥ 0,

which contradicts (8.6). Thus,

    f(x) ≰ f(u) + {yᵀg(u) − ½ pᵀ∇²(λᵀf + yᵀg)(u)p}e.

Remark 8.3.4 As with Theorems 8.3.1, 8.3.2, and 8.3.3, Theorem 8.3.4 reduces to Theorem 4 presented by Yang et al. [123] when (MOP) reduces to (P). If p = 0 and F_{x,u}(a) = η(x, u)ᵀa, the condition "F_{x,u}(a) + aᵀu ≤ 0, for all a ∈ C₁*" in Theorem 8.3.4 becomes the condition "η(x, u) + u ∈ C₁" in Theorem 4 of Chandra and Abha [14], and, in this case, Theorem 8.3.4 reduces to Theorem 4 established by Chandra and Abha [14].
8.4 Strong Duality

Based on the above weak duality theorems, we now present strong duality theorems for efficient solutions, which show how to obtain efficient solutions of the four second-order duals (ND′)₁, (ND′)₂, (ND′)₃, and (ND′)₄ from those of the primal problem (MOP). These results are established in terms of the characterization of efficient solutions (i.e., Lemma 8.2.1) and the generalized Fritz John conditions [5].
Theorem 8.4.1 (Strong duality for (MOP ) and (N D )1 ) Let x¯ be an efficient solution of (MOP ) at which a suitable constraint qualification [58] is satisfied. ¯ y, ¯ λ¯ , p¯ = 0) is feasible for Then there exist λ¯ > 0 and y¯ ∈ C2 such that (x, (N D )1 and the objective values of (MOP ) and (N D )1 are equal. Furthermore, if hypotheses of Theorem 8.3.1 are satisfied, then (x, ¯ y, ¯ λ¯ , p¯ = 0) is an efficient solution to (N D )1 . Proof Since x¯ is an efficient solution of (MOP ) at which a suitable constraint qualification is satisfied, by Lemma 8.2.1 and the generalized Fritz John condition in [5], there exist λ¯ ∈ Rp with λ¯ > 0 and y¯ ∈ C2 such that ¯ T (x − x) ¯ ≥ 0, ∀x ∈ C1 , [∇(λ¯ T f + y¯ T g)(x)]
(8.7)
¯ = 0. y¯ T g(x)
(8.8)
and
Since C₁ is a convex cone and x̄ ∈ C₁, we have x + x̄ ∈ C₁ for all x ∈ C₁. Substituting x + x̄ for x in (8.7) then gives

[∇(λ̄ᵀf + ȳᵀg)(x̄)]ᵀx ≥ 0, ∀x ∈ C₁,

which implies −∇(λ̄ᵀf + ȳᵀg)(x̄) ∈ C₁*. That is, (x̄, ȳ, λ̄, p̄ = 0) is feasible for (ND)₁. Substituting x = 0 and x = 2x̄ in (8.7), we obtain

x̄ᵀ∇(λ̄ᵀf + ȳᵀg)(x̄) = 0.  (8.9)
Consequently, it follows from (8.8), (8.9), and p̄ = 0 that

f(x̄) = f(x̄) + {ȳᵀg(x̄) − ½p̄ᵀ∇²(λ̄ᵀf + ȳᵀg)(x̄)p̄ − x̄ᵀ[∇(λ̄ᵀf + ȳᵀg)(x̄) + ∇²(λ̄ᵀf + ȳᵀg)(x̄)p̄]}e.

It then follows from the weak duality theorem (Theorem 8.3.1) that (x̄, ȳ, λ̄, p̄ = 0) is an efficient solution to (ND)₁.

Remark 8.4.1 (i) The constraint "λᵀe = 1" is not essential for (ND)₁. For example, by taking

λ̄ := λ̄/(λ̄ᵀe) and ȳ := ȳ/(λ̄ᵀe)

in the proof of Theorem 8.4.1, we obtain all the constraints of (ND)₁.
(ii) Similarly to the proof of Theorem 8.4.1, strong duality theorems for (ND)₂, (ND)₃, and (ND)₄ can also be established. (iii) If (MOP) reduces to (P), then the strong duality theorems between (MOP) and the four second-order dual models (i.e., (ND)₁, (ND)₂, (ND)₃, and (ND)₄) reduce to the corresponding strong duality theorems given by Yang et al. [123]. Moreover, if p = 0 and F_{x,u}(a) = η(x,u)ᵀa, then the above results reduce to those established by Chandra and Abha [14].
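The normalization in Remark 8.4.1 (i) can be illustrated with a small numerical sketch. The multiplier values below are arbitrary placeholders (not derived from any particular problem); the point is only that dividing λ̄ and ȳ by λ̄ᵀe enforces λᵀe = 1 while preserving positivity, and rescaling both by the same positive factor leaves the homogeneous dual constraints unchanged (a property of the model in the text; the code checks only the normalization itself).

```python
import numpy as np

# Hypothetical multipliers from a strong-duality argument (illustrative values).
lam = np.array([0.5, 1.5, 2.0])   # lam > 0 componentwise, but lam^T e != 1
y = np.array([3.0, 1.0])

s = lam.sum()                     # lam^T e, with e = (1, ..., 1)^T
lam_n, y_n = lam / s, y / s       # the rescaling used in Remark 8.4.1 (i)

print(np.isclose(lam_n.sum(), 1.0))  # the constraint lam^T e = 1 now holds
print(np.all(lam_n > 0))             # positivity is preserved
```

Because both λ and y are divided by the same positive scalar, any constraint that is positively homogeneous in (λ, y) jointly remains satisfied after the rescaling.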
8.5 Converse Duality

In this section, we establish converse duality theorems between the primal problem (MOP) and the four second-order duals (ND)₁, (ND)₂, (ND)₃, and (ND)₄ under appropriate assumptions. As pointed out in Remark 8.4.1 (i), the constraint "λᵀe = 1" is not essential to the four second-order duals (i.e., (ND)₁, (ND)₂, (ND)₃, and (ND)₄). Hence, in the following converse duality theorems, we do not consider the constraint "λᵀe = 1" unless otherwise stated.

Theorem 8.5.1 (Converse duality for (MOP) and (ND)₁) Let (ū, ȳ, λ̄, p̄) be an efficient solution for (ND)₁. Suppose that
(i) the n × n Hessian matrix ∇²(λ̄ᵀf + ȳᵀg)(ū) is nonsingular, and
(ii) p̄ᵀ∇(λ̄ᵀf)(ū) + ½p̄ᵀ∇²(λ̄ᵀf)(ū)p̄ = 0 implies p̄ = 0.
Then ū is feasible for (MOP), and the objective values of (MOP) and (ND)₁ are equal. In addition, if the assumptions of the weak duality theorem (Theorem 8.3.1) are satisfied for all feasible solutions of (MOP) and (ND)₁, then ū is an efficient solution of (MOP).

Proof Let

L = αᵀ{f(u) + [yᵀg(u) − ½pᵀ∇²(λᵀf + yᵀg)(u)p − uᵀ(∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p)]e} + βᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] − δᵀy + ηᵀλ.

Since (ū, ȳ, λ̄, p̄) is an efficient solution for (ND)₁, it follows from Lemma 8.2.1 and the generalized Fritz John-type necessary conditions in [5] that there exist α ∈ Rᵖ₊, β ∈ C₁, η ∈ Rᵖ₊, and δ ∈ C₂* such that
∂L/∂u |_(ū,ȳ,λ̄,p̄) = (α − αᵀe λ̄)ᵀ∇f(ū) + (β − ½αᵀe p̄ − αᵀe ū)ᵀ∇(∇²(λ̄ᵀf + ȳᵀg)(ū)p̄) + (β − αᵀe p̄ − αᵀe ū)ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū) = 0,  (8.10)

∂L/∂λ |_(ū,ȳ,λ̄,p̄) = ∇f(ū)(β − αᵀe ū) + (β − ½αᵀe p̄ − αᵀe ū)ᵀ∇²f(ū)p̄ + η = 0,  (8.11)

∂L/∂y |_(ū,ȳ,λ̄,p̄) = αᵀe g(ū) + (β − ½αᵀe p̄ − αᵀe ū)ᵀ∇²g(ū)p̄ + ∇g(ū)(β − αᵀe ū) − δ = 0,  (8.12)

∂L/∂p |_(ū,ȳ,λ̄,p̄) = (β − αᵀe p̄ − αᵀe ū)ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū) = 0,  (8.13)

βᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄] = 0,  (8.14)

δᵀȳ = 0,  (8.15)

ηᵀλ̄ = 0,  (8.16)

(α, β, η, δ) ≠ 0.  (8.17)
Noting that λ̄ > 0 and η ∈ Rᵖ₊, (8.16) implies that

η = 0.  (8.18)
¯ is nonsingular, it follows that from (8.13), we have Since ∇ 2 (λ¯ T f + y¯ T g)(u) ¯ β = α T ep¯ + α T eu.
(8.19)
Now, we claim that α ≠ 0. Indeed, if α = 0, then (8.19) and (8.12) give β = 0 and δ = 0. Hence, together with (8.18), (α, β, η, δ) = 0, contradicting (8.17). Therefore α ≠ 0, and since α ∈ Rᵖ₊, αᵀe > 0. Multiplying (8.11) by λ̄ and using (8.18) and (8.19), we have

αᵀe[p̄ᵀ∇(λ̄ᵀf)(ū) + ½p̄ᵀ∇²(λ̄ᵀf)(ū)p̄] = 0.
Since αᵀe > 0,

p̄ᵀ∇(λ̄ᵀf)(ū) + ½p̄ᵀ∇²(λ̄ᵀf)(ū)p̄ = 0.

By assumption (ii), we obtain

p̄ = 0.  (8.20)
Thus, (8.19) reduces to β = (αᵀe)ū. Since αᵀe > 0 and β ∈ C₁,

ū = β/(αᵀe) ∈ C₁.  (8.21)

Substituting p̄ = 0 and β = (αᵀe)ū into (8.12), we have

(αᵀe)g(ū) = δ ∈ C₂*.

For αᵀe > 0, we get

g(ū) = δ/(αᵀe) ∈ C₂*,  (8.22)
which together with (8.21) implies that ū is feasible for (MOP). On the other hand, substituting (8.21) into (8.14) directly gives

ūᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄] = 0.  (8.23)

Multiplying (8.22) by ȳ and using (8.15), we obtain

ȳᵀg(ū) = (1/αᵀe)δᵀȳ = 0,  (8.24)

since αᵀe > 0. Putting (8.20), (8.23), and (8.24) together, we get

f(ū) = f(ū) + {ȳᵀg(ū) − ½p̄ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄ − ūᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄]}e.
This implies that the objective values of (MOP ) and (N D )1 are equal. And the efficiency of u¯ for (MOP ) follows from the weak duality theorem (Theorem 8.3.1).
Remark 8.5.1 (i) Notice that the condition

"p̄ᵀ∇(λ̄ᵀf)(ū) + ½p̄ᵀ∇²(λ̄ᵀf)(ū)p̄ = 0 ⟹ p̄ = 0"

in Theorem 8.5.1 differs from the assumption

"p̄ᵀ∇(∇²f(ū)p̄ + ∇²ȳᵀg(ū)p̄) = 0 ⟹ p̄ = 0"

in Theorem 1 of Ahmad and Agarwal [2]. Indeed, the primal and second-order dual models discussed in [2] are single-objective optimization problems, whereas our models are multiobjective. Since there are essential differences between scalar and multiobjective programming, the two conditions are not the same. Furthermore, the assumption in [2] involves the third derivative, while our condition involves only the second derivative. In this sense, our condition is superior to the one in [2].
(ii) Furthermore, if p = 0 and F_{x,u}(a) = η(x,u)ᵀa, then second-order F-pseudoconvexity reduces to pseudoinvexity, and in this case Theorem 8.5.1 reduces completely to Theorem 1 established by Yang et al. [122].

Before establishing the second-order converse duality theorems for (ND)₂ and (ND)₃, we point out a drawback in assumption (ii) of Theorems 2 and 3 established by Ahmad and Agarwal [2]. That assumption reads: "the vectors {[∇²f(ū)]ⱼ, [∇²(ȳᵀg)(ū)]ⱼ, j = 1, ..., n} are linearly independent, where [∇²f(ū)]ⱼ is the jth row of ∇²f(ū) and [∇²(ȳᵀg)(ū)]ⱼ is the jth row of ∇²(ȳᵀg)(ū)." Note that both matrices ∇²f(ū) and ∇²(ȳᵀg)(ū) are n × n, so the collection {[∇²f(ū)]ⱼ, [∇²(ȳᵀg)(ū)]ⱼ, j = 1, ..., n} consists of 2n vectors in Rⁿ. Consequently, these vectors are always linearly dependent, and this condition for the converse duality theorems can never hold.
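The dimension-count argument above is easy to confirm numerically: stacking the rows of any two n × n matrices produces 2n vectors in Rⁿ, and the rank of the stacked matrix can never exceed n. A minimal sketch (the matrices are arbitrary; they merely play the roles of ∇²f(ū) and ∇²(ȳᵀg)(ū)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # stands in for the Hessian of f at u-bar
B = rng.standard_normal((n, n))   # stands in for the Hessian of y^T g at u-bar

stacked = np.vstack([A, B])       # the 2n row vectors {A_j, B_j}
rank = np.linalg.matrix_rank(stacked)

print(rank)          # at most n = 4, whatever A and B are
print(rank < 2 * n)  # True: the 2n vectors are linearly dependent
```

Since rank ≤ min(2n, n) = n < 2n, the linear dependence holds for every choice of the two matrices, which is exactly why the cited assumption cannot be satisfied.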
In order to overcome this deficiency, we impose some restrictions on the second-order dual models (ND)₂ and (ND)₃ in the following theorems.
Theorem 8.5.2 (Converse duality for (MOP) and (ND)₂) Let (ū, ȳ, λ̄, p̄) be an efficient solution to (ND)₂. Suppose that
(i) ∇(λ̄ᵀf + ȳᵀg)(ū) = 0,
(ii) the n × n Hessian matrix ∇²(λ̄ᵀf)(ū) is positive or negative definite,
(iii) the n × n Hessian matrix ∇²(ȳᵀg)(ū) is positive definite and ȳᵀg(ū) ≥ 0, or the n × n Hessian matrix ∇²(ȳᵀg)(ū) is negative definite and ȳᵀg(ū) ≤ 0,
(iv) the n × n Hessian matrix ∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū) is nonsingular, and
(v) the vectors {∇fᵢ(ū), i = 1, ..., p} are linearly independent, where ∇fᵢ(ū) is the ith row of ∇f(ū).
Then ū is feasible for (MOP), and the objective values of (MOP) and (ND)₂ are equal. Further, if the hypotheses of the weak duality theorem (Theorem 8.3.2) are satisfied for all feasible solutions of (MOP) and (ND)₂, then ū is an efficient solution to (MOP).

Proof Let

L = αᵀ[f(u) − ½pᵀ∇²(λᵀf)(u)p e] + βᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] + γ{yᵀg(u) − ½pᵀ∇²(yᵀg)(u)p − uᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p]} − δᵀy + ηᵀλ.

Since (ū, ȳ, λ̄, p̄) is an efficient solution for (ND)₂, it follows from Lemma 8.2.1 and the generalized Fritz John-type necessary condition in [5] that there exist α ∈ Rᵖ₊, β ∈ C₁, γ ∈ R₊, δ ∈ C₂*, and η ∈ Rᵖ₊ such that

∂L/∂u |_(ū,ȳ,λ̄,p̄) = (α − γλ̄)ᵀ∇f(ū) + (β − γū − γp̄)ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū) + (β − ½αᵀe p̄ − γū)ᵀ∇(∇²(λ̄ᵀf)(ū)p̄) + (β − ½γp̄ − γū)ᵀ∇(∇²(ȳᵀg)(ū)p̄) = 0,  (8.25)

∂L/∂λ |_(ū,ȳ,λ̄,p̄) = ∇f(ū)(β − γū) + (β − ½αᵀe p̄ − γū)ᵀ∇²f(ū)p̄ + η = 0,  (8.26)
∂L/∂y |_(ū,ȳ,λ̄,p̄) = γg(ū) + (β − ½γp̄ − γū)ᵀ∇²g(ū)p̄ + ∇g(ū)(β − γū) − δ = 0,  (8.27)

∂L/∂p |_(ū,ȳ,λ̄,p̄) = (β − αᵀe p̄ − γū)ᵀ∇²(λ̄ᵀf)(ū) + (β − γp̄ − γū)ᵀ∇²(ȳᵀg)(ū) = 0,  (8.28)

βᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄] = 0,  (8.29)

γ{ȳᵀg(ū) − ½p̄ᵀ∇²(ȳᵀg)(ū)p̄ − ūᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄]} = 0,  (8.30)

δᵀȳ = 0,  (8.31)

ηᵀλ̄ = 0,  (8.32)

(α, β, γ, η) ≠ 0.  (8.33)
Notice that λ̄ > 0 and η ∈ Rᵖ₊; thus, (8.32) implies

η = 0.  (8.34)
Multiplying (8.27) by ȳ and combining with (8.30) and (8.31), we have

βᵀ[∇(ȳᵀg)(ū) + ∇²(ȳᵀg)(ū)p̄] + γūᵀ[∇(λ̄ᵀf)(ū) + ∇²(λ̄ᵀf)(ū)p̄] = 0.  (8.35)

Subtracting (8.29) from (8.35), we obtain

(β − γū)ᵀ[∇(λ̄ᵀf)(ū) + ∇²(λ̄ᵀf)(ū)p̄] = 0.  (8.36)

Multiplying (8.26) by λ̄ and combining with (8.34) and (8.36), we get

½(αᵀe)p̄ᵀ∇²(λ̄ᵀf)(ū)p̄ = 0.

It follows from condition (ii) that

(αᵀe)p̄ = 0.  (8.37)
First, we show that γ > 0. Otherwise γ = 0, and it follows from (8.37) and (8.28) that

βᵀ[∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū)] = 0.

Since ∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū) is nonsingular, β = 0. Thus, (8.25) reduces to

∑_{i=1}^{p} αᵢ∇fᵢ(ū) = 0.
However, the vectors {∇fᵢ(ū), i = 1, ..., p} are linearly independent, so the above equation yields α = 0, which contradicts (8.33). Thus, γ > 0.

Now, we claim that α ≠ 0. Suppose that α = 0. Since ∇(λ̄ᵀf + ȳᵀg)(ū) = 0 by assumption (i), it follows from (8.29) that

βᵀ[∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū)]p̄ = 0,

which, together with (8.28) multiplied by p̄, gives

γūᵀ[∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū)]p̄ + γp̄ᵀ∇²(ȳᵀg)(ū)p̄ = 0.  (8.38)

Taking (8.38) and ∇(λ̄ᵀf + ȳᵀg)(ū) = 0 into (8.30), we obtain

γ{ȳᵀg(ū) + ½p̄ᵀ∇²(ȳᵀg)(ū)p̄} = 0.  (8.39)
Since γ ≠ 0, (8.39) implies

ȳᵀg(ū) + ½p̄ᵀ∇²(ȳᵀg)(ū)p̄ = 0.  (8.40)

From assumption (iii), we have p̄ = 0. Thus, (8.28) reduces to

(β − γū)ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū) = 0.

Since ∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū) is nonsingular, β = γū, which together with α = 0, p̄ = 0, and γ ≠ 0 shows that (8.25) reduces to ∇(λ̄ᵀf)(ū) = 0. Since the vectors {∇fᵢ(ū), i = 1, ..., p} are linearly independent, λ̄ = 0, which contradicts λ̄ > 0. Thus, α ≠ 0 and αᵀe > 0.

Now we also claim that p̄ = 0. Indeed, since αᵀe > 0, (8.37) implies p̄ = 0. For p̄ = 0, (8.28) reduces to

(β − γū)ᵀ[∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū)] = 0.

It follows from condition (iv) that

β = γū,  (8.41)
which, combined with γ > 0 and β ∈ C₁, gives

ū = (1/γ)β ∈ C₁.  (8.42)

On the other hand, from (8.27), (8.41), γ > 0, and p̄ = 0, we get

g(ū) = (1/γ)δ ∈ C₂*.  (8.43)

Consequently, (8.42) and (8.43) imply that ū is feasible for (MOP). Furthermore, from p̄ = 0, we have

f(ū) = f(ū) − [½p̄ᵀ∇²(λ̄ᵀf)(ū)p̄]e,

that is, the objective values of (MOP) and (ND)₂ are equal. The efficiency of ū for (MOP) follows from the weak duality theorem (Theorem 8.3.2).
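Hypotheses (ii)–(iv) of Theorem 8.5.2 are all checkable numerically at a candidate point ū. The sketch below is illustrative only: the two matrices are arbitrary stand-ins for ∇²(λ̄ᵀf)(ū) and ∇²(ȳᵀg)(ū), and the helper `is_definite` tests definiteness via the eigenvalues of a symmetric matrix.

```python
import numpy as np

def is_definite(H, sign=+1, tol=1e-10):
    """True if sign*H is positive definite (H assumed symmetric)."""
    return bool(np.all(sign * np.linalg.eigvalsh(H) > tol))

# Illustrative stand-ins for the Hessians appearing in Theorem 8.5.2.
Hf = np.array([[2.0, 0.5], [0.5, 1.0]])    # plays the role of the Hessian of lam^T f
Hg = np.array([[1.0, 0.0], [0.0, 3.0]])    # plays the role of the Hessian of y^T g

print(is_definite(Hf, +1) or is_definite(Hf, -1))     # hypothesis (ii): definiteness
print(np.linalg.matrix_rank(Hf + Hg) == Hf.shape[0])  # hypothesis (iv): nonsingular sum
```

Nonsingularity of the sum is tested via the matrix rank; `np.linalg.eigvalsh` is used because the Hessians are symmetric.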
Our converse duality result between (MOP) and (ND)₂ reveals that, under mild assumptions such as positive or negative definiteness, nonsingularity, and linear independence, feasible solutions of (MOP) can be obtained from efficient solutions of the second-order dual model (ND)₂, and the objective values of both problems are equal. In addition, if the generalized convexity assumptions of the weak duality theorem (i.e., Theorem 8.3.2) between (MOP) and (ND)₂ hold, these feasible solutions are also efficient solutions of (MOP).

Remark 8.5.2 If the problem (MOP) reduces to the scalar problem (P), then condition (v) of the second-order converse duality theorem (Theorem 8.5.2), "the vectors {∇fᵢ(ū), i = 1, ..., p} are linearly independent", reduces to "∇f(ū) ≠ 0." However, even if we assume that p = 0 and F_{x,u}(a) = η(x,u)ᵀa, Theorem 8.5.2 cannot completely reduce to Theorem 2 given by Yang et al. [122]; some additional conditions must be imposed in Theorem 8.5.2 to ensure α ≠ 0. This is because both the primal and second-order dual models (i.e., (MOP) and (ND)₂) are multiobjective programs, and there are more parameters in the second-order duality models.

As mentioned above, there is a shortcoming in assumption (ii) of Theorem 3 established by Ahmad and Agarwal [2]. Hence, similarly to Theorem 8.5.2, some restriction on the dual model (ND)₃ is imposed to obtain the second-order converse duality theorem between (MOP) and (ND)₃.

Theorem 8.5.3 (Converse duality for (MOP) and (ND)₃) Let (ū, ȳ, λ̄, p̄) be an efficient solution for (ND)₃. Suppose that:
(i) ∇ȳᵀg(ū) ≠ 0,
(ii) ∇(λ̄ᵀf + ȳᵀg)(ū) = 0,
(iii) ∇²(ȳᵀg)(ū) is positive definite and ȳᵀg(ū) ≤ 0, or ∇²(ȳᵀg)(ū) is negative definite and ȳᵀg(ū) ≥ 0,
(iv) ∇²(λ̄ᵀf)(ū) is positive or negative definite,
(v) the n × n Hessian matrix ∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū) is nonsingular, and
(vi) the vectors {∇fᵢ(ū), i = 1, ..., p} are linearly independent, where ∇fᵢ(ū) is the ith row of ∇f(ū).
Then ū is feasible for (MOP), and the objective values of (MOP) and (ND)₃ are equal. In addition, if the hypotheses of the weak duality theorem (Theorem 8.3.3) are satisfied for all feasible solutions of (MOP) and (ND)₃, then ū is an efficient solution of (MOP).

Proof Let

L = αᵀ{f(u) − [½pᵀ∇²(λᵀf)(u)p + uᵀ(∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p)]e} + βᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] + γ[yᵀg(u) − ½pᵀ∇²(yᵀg)(u)p] + ηᵀλ.

Since (ū, ȳ, λ̄, p̄) is an efficient solution for (ND)₃, it follows from Lemma 8.2.1 and the generalized Fritz John conditions that there exist α ∈ Rᵖ₊, β ∈ C₁, γ ∈ R₊, and η ∈ Rᵖ₊ such that

∂L/∂u |_(ū,ȳ,λ̄,p̄) = (α − αᵀe λ̄)ᵀ∇f(ū) + (γ − αᵀe)∇(ȳᵀg)(ū) + (β − ½αᵀe p̄ − αᵀe ū)ᵀ∇(∇²(λ̄ᵀf)(ū)p̄) + (β − ½γp̄ − αᵀe ū)ᵀ∇(∇²(ȳᵀg)(ū)p̄) + (β − αᵀe ū − αᵀe p̄)ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū) = 0,  (8.44)

∂L/∂λ |_(ū,ȳ,λ̄,p̄) = ∇f(ū)(β − αᵀe ū) + (β − ½αᵀe p̄ − αᵀe ū)ᵀ∇²f(ū)p̄ + η = 0,  (8.45)
(y − ȳ)ᵀ ∂L/∂y |_(ū,ȳ,λ̄,p̄) = [γg(ū) + (β − ½γp̄ − αᵀe ū)ᵀ∇²g(ū)p̄ + ∇g(ū)(β − αᵀe ū)]ᵀ(y − ȳ) ≤ 0, ∀y ∈ C₂,  (8.46)

∂L/∂p |_(ū,ȳ,λ̄,p̄) = (β − αᵀe p̄ − αᵀe ū)ᵀ∇²(λ̄ᵀf)(ū) + (β − γp̄ − αᵀe ū)ᵀ∇²(ȳᵀg)(ū) = 0,  (8.47)

βᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄] = 0,  (8.48)

γ[ȳᵀg(ū) − ½p̄ᵀ∇²(ȳᵀg)(ū)p̄] = 0,  (8.49)

ηᵀλ̄ = 0,  (8.50)

(α, β, γ, η) ≠ 0.  (8.51)
Note that λ̄ > 0 and η ∈ Rᵖ₊; it follows from (8.50) that η = 0. Since C₂ is a convex cone, (8.46) implies that

γ(ȳᵀg(ū)) + (β − αᵀe ū)ᵀ∇(ȳᵀg)(ū) + (β − ½γp̄ − αᵀe ū)ᵀ∇²(ȳᵀg)(ū)p̄ = 0,  (8.52)

which, combined with (8.49), gives

(β − αᵀe ū)ᵀ[∇(ȳᵀg)(ū) + ∇²(ȳᵀg)(ū)p̄] = 0.  (8.53)
First, we show that α = 0. Otherwise α = 0; then (8.45), (8.47), and (8.53) reduce to ¯ p¯ = 0, ∇f (u)β ¯ + β T ∇ 2 f (u)
(8.54)
¯ + (β − γ p) ¯ T ∇ 2 (y¯ T g)(u) ¯ = 0, β T ∇ 2 (λ¯ T f )(u)
(8.55)
¯ + ∇ 2 (y¯ T g)(u) ¯ p] ¯ = 0, β T [∇(y¯ T g)(u)
(8.56)
respectively. Multiplying (8.54) by λ¯ and combining with (8.56), we get ¯ + ∇ 2 (λ¯ T f + y¯ T g)(u) ¯ p] ¯ = 0. β T [∇(λ¯ T f + y¯ T g)(u)
(8.57)
Combining this equation with (8.55) yields

βᵀ∇(λ̄ᵀf + ȳᵀg)(ū) = −γp̄ᵀ∇²(ȳᵀg)(ū)p̄.  (8.58)

Using assumption (ii) in (8.58), we have

γp̄ᵀ∇²(ȳᵀg)(ū)p̄ = 0.

Since ∇²(ȳᵀg)(ū) is positive or negative definite, γp̄ = 0. Substituting this into (8.55), we have β = 0, since the Hessian matrix ∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū) is nonsingular. Using the results α = 0, γp̄ = 0, and β = 0 in (8.44), we have γ∇(ȳᵀg)(ū) = 0. It follows from assumption (i) that γ = 0. Thus, (α, β, γ, η) = 0, contradicting (8.51). Therefore, α ≠ 0 and αᵀe > 0.

We now claim that γ ≠ 0. Otherwise, (8.47) reduces to

(β − αᵀe ū)ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū) = αᵀe p̄ᵀ∇²(λ̄ᵀf)(ū),  (8.59)

which, combined with (8.45) and (8.53), gives

−½αᵀe p̄ᵀ∇²(λ̄ᵀf)(ū)p̄ = (β − αᵀe ū)ᵀ∇(λ̄ᵀf + ȳᵀg)(ū).

Since ∇(λ̄ᵀf + ȳᵀg)(ū) = 0 and αᵀe > 0,

p̄ᵀ∇²(λ̄ᵀf)(ū)p̄ = 0.

Since ∇²(λ̄ᵀf)(ū) is positive or negative definite by condition (iv), we get p̄ = 0. Hence, (8.59) reduces to

(β − αᵀe ū)ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū) = 0.

Since the n × n Hessian matrix ∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū) is nonsingular, β = αᵀe ū. Substituting p̄ = 0 and β = αᵀe ū into (8.44), we have

∇(αᵀf)(ū) − αᵀe∇(λ̄ᵀf + ȳᵀg)(ū) = ∇(αᵀf)(ū) = 0.
Since the vectors {∇fᵢ(ū), i = 1, ..., p} are linearly independent by hypothesis (vi), α = 0, which contradicts αᵀe > 0. And thus, γ > 0. For γ > 0, (8.49) implies that

ȳᵀg(ū) − ½p̄ᵀ∇²(ȳᵀg)(ū)p̄ = 0.

From assumption (iii), we get p̄ = 0. Consequently, (8.47) reduces to

(β − αᵀe ū)ᵀ[∇²(λ̄ᵀf)(ū) + ∇²(ȳᵀg)(ū)] = 0.

It follows from the nonsingularity of ∇²(λ̄ᵀf + ȳᵀg)(ū) that the above equation yields

β = (αᵀe)ū,  (8.60)
and thus, from αᵀe > 0 and β ∈ C₁, we have

ū = (1/αᵀe)β ∈ C₁.  (8.61)

Combining (8.46), (8.60), and p̄ = 0, we have

g(ū) ∈ C₂*,  (8.62)

as γ > 0 and C₂ is a convex cone. Accordingly, it follows from (8.61) and (8.62) that ū is feasible for (MOP). And by (8.48) and (8.60), along with αᵀe > 0 and p̄ = 0, we have

f(ū) = f(ū) − {½p̄ᵀ∇²(λ̄ᵀf)(ū)p̄ + ūᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄]}e.

That is, the objective values of (MOP) and (ND)₃ are equal. The efficiency of ū for (MOP) follows from the weak duality theorem (Theorem 8.3.3).
Similar to Theorem 8.5.2, this converse duality result between (MOP ) and (N D)3 indicates that under suitable assumptions, feasible solutions of (MOP ) can be derived from efficient solutions of second-order duality model (N D)3 , and the values of objective functions for both problems are equal. In addition, if the conditions of weak duality theorem (i.e., Theorem 8.3.3) between (MOP ) and (N D )3 hold, then these feasible solutions are efficient solutions to (MOP ).
Remark 8.5.3
(i) It should be noted that condition (vi) in the above converse duality theorem, "the vectors {∇fᵢ(ū), i = 1, ..., p} are linearly independent", indicates that the number p cannot exceed n.
(ii) The conditions in the second-order converse duality theorem between (MOP) and (ND)₃ (i.e., Theorem 8.5.3) are still stronger than those in Theorem 3 given by Yang et al. [122], even when f: Rⁿ → R, p = 0, and F_{x,u}(a) = η(x,u)ᵀa. Because both the primal and dual problems (i.e., (MOP) and (ND)₃) are multiobjective programs, and there are more parameters in the second-order duality models, we need to strengthen the conditions for this second-order converse duality theorem.

Theorem 8.5.4 (Converse duality for (MOP) and (ND)₄) Let (ū, ȳ, λ̄, p̄) be an efficient solution for (ND)₄. Suppose that
(i) either (a) ∇²(λ̄ᵀf + ȳᵀg)(ū) is positive definite and p̄ᵀ[∇(λ̄ᵀf + ȳᵀg)(ū)] ≥ 0, or (b) ∇²(λ̄ᵀf + ȳᵀg)(ū) is negative definite and p̄ᵀ[∇(λ̄ᵀf + ȳᵀg)(ū)] ≤ 0,
(ii) the vectors {∇fᵢ(ū), i = 1, ..., p, ∇(ȳᵀg)(ū)} are linearly independent, where ∇fᵢ(ū) is the ith row of ∇f(ū).
Then ū is feasible for (MOP), and the objective values of (MOP) and (ND)₄ are equal. Moreover, if the hypotheses of the weak duality theorem (Theorem 8.3.4) are satisfied for all feasible solutions of (MOP) and (ND)₄, then ū is an efficient solution of (MOP).

Proof Let

L = αᵀ{f(u) + [yᵀg(u) − ½pᵀ∇²(λᵀf + yᵀg)(u)p]e} + βᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] − γuᵀ[∇(λᵀf + yᵀg)(u) + ∇²(λᵀf + yᵀg)(u)p] + ηᵀλ.

Since (ū, λ̄, ȳ, p̄) is an efficient solution for (ND)₄, it follows from Lemma 8.2.1 and the generalized Fritz John conditions that there exist α ∈ Rᵖ₊, β ∈ C₁, γ ∈ R₊, and η ∈ Rᵖ₊ such that

∂L/∂u |_(ū,ȳ,λ̄,p̄) = (α − γλ̄)ᵀ∇f(ū) + (αᵀe − γ)∇(ȳᵀg)(ū) + (β − ½αᵀe p̄ − γū)ᵀ∇(∇²(λ̄ᵀf + ȳᵀg)(ū)p̄) + (β − γp̄ − γū)ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū) = 0,  (8.63)
∂L/∂λ |_(ū,ȳ,λ̄,p̄) = ∇f(ū)(β − γū) + (β − ½αᵀe p̄ − γū)ᵀ∇²f(ū)p̄ + η = 0,  (8.64)

(y − ȳ)ᵀ ∂L/∂y |_(ū,ȳ,λ̄,p̄) = [αᵀe g(ū) + (β − ½αᵀe p̄ − γū)ᵀ∇²g(ū)p̄ + ∇g(ū)(β − γū)]ᵀ(y − ȳ) ≤ 0, ∀y ∈ C₂,  (8.65)

∂L/∂p |_(ū,ȳ,λ̄,p̄) = (β − αᵀe p̄ − γū)ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū) = 0,  (8.66)

βᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄] = 0,  (8.67)

γūᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄] = 0,  (8.68)

ηᵀλ̄ = 0,  (8.69)

(α, β, γ, η) ≠ 0.  (8.70)
Using the conditions λ̄ > 0 and η ∈ Rᵖ₊ in (8.69), we have η = 0. Assumption (i) implies that ∇²(λ̄ᵀf + ȳᵀg)(ū) is nonsingular, so from (8.66) we get

β = αᵀe p̄ + γū.  (8.71)

We now claim that α ≠ 0. Otherwise, it follows from (8.71) that β = γū, which combined with (8.63) implies

γ[∇(λ̄ᵀf)(ū) + ∇(ȳᵀg)(ū) + p̄ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū)] = 0.

Now, we consider the following two cases. Case 1: γ = 0. Then β = 0, i.e., (α, β, γ, η) = 0, which is a contradiction to (8.70). Case 2: γ ≠ 0. Then

∇(λ̄ᵀf)(ū) + ∇(ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄ = 0.  (8.72)

Multiplying (8.72) by p̄, we get

p̄ᵀ[∇(λ̄ᵀf)(ū) + ∇(ȳᵀg)(ū)] + p̄ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄ = 0.
From assumption (i), we obtain p̄ = 0, which together with (8.72) yields ∑_{i=1}^{p} λ̄ᵢ∇fᵢ(ū) + ∇(ȳᵀg)(ū) = 0. This contradicts assumption (ii). Hence, α ≠ 0 and αᵀe > 0.

Now, we also show that p̄ = 0. In fact, it follows from (8.67) and (8.68) that

(β − γū)ᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄] = 0,

which, combined with (8.71) and αᵀe > 0, yields

p̄ᵀ[∇(λ̄ᵀf + ȳᵀg)(ū) + ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄] = 0.

By assumption (i), we get p̄ = 0. And from p̄ = 0 and (8.71), we have

β = γū,
(8.73)
which together with (8.63) and p̄ = 0 yields

∑_{i=1}^{p} (αᵢ − γλ̄ᵢ)∇fᵢ(ū) + (αᵀe − γ)∇(ȳᵀg)(ū) = 0.

Since the vectors {∇fᵢ(ū), i = 1, ..., p, ∇(ȳᵀg)(ū)} are linearly independent, γ = αᵀe > 0. Therefore, (8.73) implies

ū = (1/γ)β ∈ C₁.  (8.74)
Substituting (8.73) and p̄ = 0 in (8.65), we obtain

(αᵀe)g(ū)ᵀ(y − ȳ) ≤ 0, ∀y ∈ C₂.  (8.75)

Since αᵀe > 0 and C₂ is a convex cone,

g(ū) ∈ C₂*,  (8.76)

and

ȳᵀg(ū) = 0.  (8.77)

Consequently, it follows from (8.74) and (8.76) that ū is feasible for (MOP). From (8.77) and p̄ = 0, we have

f(ū) = f(ū) + {(ȳᵀg)(ū) − ½p̄ᵀ∇²(λ̄ᵀf + ȳᵀg)(ū)p̄}e,
that is, the objective values of (MOP ) and (N D )4 are equal. The efficiency of u¯ for (MOP ) follows from the weak duality theorem (Theorem 8.3.4).
Remark 8.5.4
(i) Similarly to Remark 8.5.3 (i), condition (ii) of Theorem 8.5.4 indicates that the number p cannot exceed n.
(ii) Note that in the second-order duality theorem (Theorem 4) given by Ahmad and Agarwal [2] for nonlinear programming, the condition
"∇f(ū) + ∇(ȳᵀg)(ū) + ∇²f(ū)p̄ + ∇²(ȳᵀg)(ū)p̄ ≠ 0"
is essentially
"∇f(ū) + ∇(ȳᵀg)(ū) ≠ 0",
since p̄ = 0. Therefore, the linear independence property in assumption (ii) of Theorem 8.5.4 is stronger than the above condition, even for the scalar problem (P). The main reason is that both the primal and dual problems (i.e., (MOP) and (ND)₄) are multiobjective programs.
(iii) In addition, even if p = 0 and F_{x,u}(a) = η(x,u)ᵀa, the linear independence property in assumption (ii) of Theorem 8.5.4 is still stronger than the condition
"∇f(ū) + ∇(ȳᵀg)(ū) ≠ 0",
which is used in the first-order duality theorem (Theorem 4) established by Yang et al. [122] for nonlinear programming. This is because both the primal and dual problems (i.e., (MOP) and (ND)₄) are multiobjective programs, and there are more parameters in the second-order dual model (ND)₄.
Chapter 9
Multiobjective Higher-Order Duality
9.1 Introduction

We consider the following nonlinear programming problem:
(P) Min f(x) subject to g(x) ≤ 0,
where f and g are twice differentiable functions from Rⁿ to R and from Rⁿ to Rᵐ, respectively. By introducing two differentiable functions h: Rⁿ × Rⁿ → R and k: Rⁿ × Rⁿ → Rᵐ, Mangasarian [59] formulated a class of higher-order dual problems for a nonlinear programming problem involving twice differentiable functions:
(HD1) Max f(u) + h(u, p) − yᵀg(u) − yᵀk(u, p)
subject to ∇ₚh(u, p) = ∇ₚyᵀk(u, p), y ≥ 0,
where ∇ₚh(u, p) denotes the n × 1 gradient of h with respect to p and ∇ₚyᵀk(u, p) denotes the n × 1 gradient of yᵀk with respect to p. A higher-order Mond-Weir-type dual to (P) is formulated in [73] as
(HD) Max f(u) + h(u, p) − pᵀ∇ₚh(u, p)
© Springer Nature Singapore Pte Ltd. 2018 X. Yang, Generalized Preinvexity and Second Order Duality in Multiobjective Programming, Springer Optimization and Its Applications 142, https://doi.org/10.1007/978-981-13-1981-5_9
subject to ∇ₚh(u, p) = ∇ₚyᵀk(u, p),
yᵢgᵢ(u) + yᵢkᵢ(u, p) − pᵀ∇ₚyᵢkᵢ(u, p) ≤ 0, i = 1, 2, ..., m,
y ≥ 0.
Higher-order duality has been studied by many researchers, such as Mond and Zhang [75], Chen [22], Mishra [62], Mishra and Rueda [64], and Yang et al. [113, 126]. Recently, researchers have become interested in higher-order duality in mathematical programming [65, 75]. In this chapter, we give a converse duality theorem for a higher-order Mond-Weir-type dual model under mild assumptions; the result corrects the work of Kim et al. [51]. Moreover, motivated by [52, 75, 106, 115], we propose a pair of higher-order symmetric dual models in multiobjective nonlinear programming and establish weak, strong, and converse duality theorems under invexity conditions.
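For orientation, the higher-order dual (HD1) contains the classical second-order dual as a special case. Choosing (a standard specialization, noted here as a reminder rather than a result of this chapter)

```latex
h(u,p) = p^{T}\nabla f(u) + \tfrac{1}{2}\,p^{T}\nabla^{2}f(u)\,p,
\qquad
k(u,p) = p^{T}\nabla g(u) + \tfrac{1}{2}\,p^{T}\nabla^{2}g(u)\,p,
```

turns (HD1) into Mangasarian's second-order dual. Moreover, with this choice h(u, 0) = 0, k(u, 0) = 0, ∇ₚh(u, 0) = ∇f(u), and ∇ₚk(u, 0) = ∇g(u), which is consistent with assumption (iii) of Theorem 9.2.1 below.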
9.2 Mond-Weir Type Converse Duality Involving Cone Constraints

A nonempty set C in Rⁿ is said to be a cone with vertex zero if x ∈ C implies that λx ∈ C for all λ ≥ 0. If, in addition, C is convex, then C is called a convex cone. The polar cone C* of C is defined by C* = {z ∈ Rⁿ | xᵀz ≤ 0, ∀x ∈ C}. Consider the following multiobjective programming problem:
(MCP) Min f(x) = (f₁(x), f₂(x), ..., fₗ(x)) subject to −g(x) ∈ C*,
(9.1)
where f = [f1 , f2 , · · · , fl ]T and g are differentiable functions from Rn −→ Rl and Rn −→ Rm , respectively, and C is a closed convex cone in Rm . Consider the higher-order Mond-Weir-type dual model in multiobjective programming as follows. (MMCD) Max
f (u) + (λT h(u, p))e − pT ∇p (λT h(u, p))e
subject to λT ∇p h(u, p) = ∇p (y T k(u, p)), g(u) + k(u, p) − pT ∇p (k(u, p)) ∈ C ∗ , y ∈ C, λ > 0, λT e = 1,
where h: Rⁿ × Rⁿ → Rˡ and k: Rⁿ × Rⁿ → Rᵐ are two differentiable functions, e = (1, ..., 1)ᵀ is a vector in Rˡ, and λ ∈ Rˡ. In this section, we give the following converse duality theorem.

Theorem 9.2.1 (Converse Duality) Let (x̄, ȳ, λ̄, p̄) be an efficient solution of (MMCD). Suppose the following hold:
(i) ∇ȳᵀg(x̄) + ∇ₓȳᵀk(x̄, 0) ≠ 0,
(ii) ∇f(x̄) + ∇ₓλ̄ᵀh(x̄, 0) is nonsingular,
(iii) h(x̄, 0) = 0, k(x̄, 0) = 0, ∇ₚh(x̄, 0) = ∇f(x̄), ∇ₚk(x̄, 0) = ∇g(x̄),
(iv) p̄ᵀ∇ₚȳᵀk(x̄, p̄) = 0 implies that p̄ = 0,
(v) the vectors {[∇ₚ²h(x̄, p̄)]ⱼ, [∇ₚ²ȳᵀk(x̄, p̄)]ⱼ, j = 1, 2, ..., n} are linearly independent.
If the conditions of Theorem 3.1 in [51] are satisfied, then x̄ is an efficient solution to (MCP).

Proof Since (x̄, ȳ, λ̄, p̄) is an efficient solution of (MMCD), we know that (x̄, ȳ, λ̄, p̄) is an efficient solution of the following multiobjective programming problem:

Max f(u) + (λᵀh(u, p))e − pᵀ∇ₚ(λᵀh(u, p))e
subject to

λᵀ[∇ₚh(u, p)] = ∇ₚ(yᵀk(u, p)),  (9.2)

yᵀg(u) + yᵀk(u, p) − pᵀ∇ₚ(yᵀk(u, p)) ≤ 0,  (9.3)

y ∈ C, λ > 0, λᵀe = 1.  (9.4)
Now, by the generalized Fritz John theorem [24], there exist μ₁ ∈ Rˡ, μ₂ ∈ Rⁿ, μ₃ ∈ R, μ₄ ∈ R, and μ₅ ∈ C*, such that

μ₁ᵀ{−∇f(x̄) − ∇ₓ(λ̄ᵀh(x̄, p̄))e + p̄ᵀ∇ₓ[∇ₚ(λ̄ᵀh(x̄, p̄))]e} + μ₂ᵀ{∇ₓ[∇ₚ(λ̄ᵀh(x̄, p̄)) − ∇ₚ(ȳᵀk(x̄, p̄))]} + μ₃{∇ȳᵀg(x̄) + ∇ₓȳᵀk(x̄, p̄) − p̄ᵀ∇ₓ∇ₚ(ȳᵀk(x̄, p̄))} = 0,  (9.5)

{−μ₂ᵀ∇ₚk(x̄, p̄) + μ₃[g(x̄) + k(x̄, p̄) − p̄ᵀ∇ₚk(x̄, p̄)]}(y − ȳ) ≥ 0, ∀y ∈ C,  (9.6)

μ₁ᵀ{p̄ᵀ∇ₚh(x̄, p̄)e − h(x̄, p̄)e} + μ₂ᵀ∇ₚh(x̄, p̄) − μ₄ = 0,  (9.7)

μ₁ᵀ[p̄ᵀ∇ₚ²(λ̄ᵀh)(x̄, p̄)e] + μ₂ᵀ{∇ₚ²(λ̄ᵀh)(x̄, p̄)e − ∇ₚ²[ȳᵀk(x̄, p̄)]} − μ₃p̄ᵀ∇ₚ²[ȳᵀk(x̄, p̄)] = 0,  (9.8)
μ₃{ȳᵀg(x̄) + ȳᵀk(x̄, p̄) − p̄ᵀ∇ₚȳᵀk(x̄, p̄)} = 0,  (9.9)

μ₄ᵀλ̄ = 0,  (9.10)

(μ₁, μ₃, μ₄) ≥ 0,  (9.11)

(μ₁, μ₂, μ₃, μ₄) ≠ 0.  (9.12)
From (9.8), we have

[(μ₁ᵀe)p̄ + μ₂]ᵀ∇ₚ²(λ̄ᵀh(x̄, p̄)) − (μ₃p̄ + μ₂)ᵀ∇ₚ²(ȳᵀk(x̄, p̄)) = 0.

From Assumption (v), we obtain

(μ₁ᵀe)p̄ + μ₂ = 0,  μ₃p̄ + μ₂ = 0.  (9.13)

We proceed to show that μ₁ ≠ 0. If μ₁ = 0, then from (9.13), μ₂ = 0, and we shall show that this implies μ₃ = 0. To the contrary, assume that μ₃ ≠ 0. Then, from (9.13), we have p̄ = 0. From (9.5) and μ₁ = 0, μ₂ = p̄ = 0, we get μ₃{∇ȳᵀg(x̄) + ∇ₓȳᵀk(x̄, 0)} = 0. Together with Assumption (i), this contradicts μ₃ ≠ 0. Thus, if μ₁ = 0, then μ₃ = 0, and so (9.7) implies μ₄ = 0, a contradiction to (9.12). Thus, μ₁ ≠ 0 and μ₁ᵀe > 0.

Next, we shall show that μ₃ > 0. If μ₃ = 0, then from (9.13), μ₂ = 0, and it follows from (9.13) and μ₁ᵀe > 0 that p̄ = 0. From (9.5), we have μ₁ᵀ[−∇f(x̄) − ∇ₓλ̄ᵀh(x̄, 0)] = 0, which contradicts μ₁ ≠ 0 and Assumption (ii). Therefore, μ₃ > 0.

In (9.6), taking y = 2ȳ and y = ½ȳ, respectively, it follows from (9.13) that

μ₃{ȳᵀg(x̄) + ȳᵀk(x̄, p̄)} = 0.  (9.14)
Based on (9.9), (9.14), and μ₃ > 0, we get

p̄ᵀ∇ₚȳᵀk(x̄, p̄) = 0.

From the above equation and Assumption (iv), we obtain p̄ = 0. Thus, it is clear from (9.13) that μ₂ = 0. Now, by using μ₂ = 0, μ₃ > 0, and p̄ = 0, it follows from (9.6) that

[g(x̄) + k(x̄, 0)]ᵀ(y − ȳ) ≥ 0, ∀y ∈ C;
then, by Assumption (iii), we obtain g(x̄)ᵀ(y − ȳ) ≥ 0 for all y ∈ C. Since C is a convex cone, this gives g(x̄)ᵀy ≥ 0 for all y ∈ C, i.e., −g(x̄) ∈ C*. Therefore, x̄ is a feasible solution to (MCP). The conclusion then follows readily from Condition (iii), p̄ = 0, and Theorem 3.1 in [51].
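The polar-cone membership test used at the end of the proof can be illustrated for a finitely generated cone. Under the convention C* = {z : xᵀz ≤ 0 for all x ∈ C} from Section 9.2, z ∈ C* for C = cone{g₁, ..., gₖ} exactly when gᵢᵀz ≤ 0 for every generator. A toy sketch (the generators are chosen arbitrarily):

```python
import numpy as np

def in_polar(z, generators, tol=1e-12):
    """z in C* for C = cone(generators), with C* = {z : x^T z <= 0 for all x in C}."""
    return bool(np.all(generators @ z <= tol))

G = np.array([[1.0, 0.0], [0.0, 1.0]])      # C = nonnegative orthant in R^2
print(in_polar(np.array([-1.0, -2.0]), G))  # True: lies in the polar cone
print(in_polar(np.array([1.0, -2.0]), G))   # False
```

Checking only the generators suffices because every x ∈ C is a nonnegative combination of them, so xᵀz ≤ 0 follows from gᵢᵀz ≤ 0.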
9.3 Mond-Weir Type Symmetric Duality

We formulate a pair of higher-order nonlinear multiobjective programs. The primal problem (HMP) and the dual problem (HMD) are stated as follows:

(HMP) min_{x,y,λ,p} FP(x, y, λ, p) = f(x, y) − (yᵀ∇y(λᵀf)(x, y))eₖ − (yᵀ∇ₚh(x, y, p))eₖ
subject to: ∇y(λᵀf)(x, y) + ∇ₚh(x, y, p) ≤ 0,  (9.15)
λ > 0, λᵀeₖ = 1.

(HMD) max_{u,v,λ,r} FD(u, v, λ, r) = f(u, v) − (uᵀ∇u(λᵀf)(u, v))eₖ − (uᵀ∇ᵣk(u, v, r))eₖ
subject to: ∇u(λᵀf)(u, v) + ∇ᵣk(u, v, r) ≥ 0,  (9.16)
λ > 0, λᵀeₖ = 1,
where f: Rⁿ × R^m → R^k, h: Rⁿ × R^m × R^m → R, k: Rⁿ × R^m × Rⁿ → R, p ∈ R^m, r ∈ Rⁿ, λ ∈ R^k, and e_k = (1, ..., 1)ᵀ ∈ R^k.

Theorem 9.3.1 (Weak Duality) Let (x, y, λ, p) be a feasible solution of (HMP) and let (u, v, λ, r) be a feasible solution of (HMD). Assume that there exist two vector functions η₁: Rⁿ × Rⁿ → Rⁿ and η₂: R^m × R^m → R^m satisfying
η₁(x, u) + u ≥ 0,
(9.17)
η₂(v, y) + y ≥ 0,
(9.18)
such that f(·, y) is an η₁-invex function for fixed y and −f(x, ·) is an η₂-invex function for fixed x. Assume further that
η₁(x, u)ᵀ∇_r k(u, v, r) ≤ 0
(9.19)
η₂(v, y)ᵀ∇_p h(x, y, p) ≥ 0.
(9.20)
Then, F_P(x, y, λ, p) ≰ F_D(u, v, λ, r).
(9.21)
Proof Assume, to the contrary, that (9.21) is not true, that is,
f(x, y) − (yᵀ∇_y(λᵀf)(x, y))e_k − (yᵀ∇_p h(x, y, p))e_k ≤ f(u, v) − (uᵀ∇_u(λᵀf)(u, v))e_k − (uᵀ∇_r k(u, v, r))e_k.
Then, since λ > 0 and λᵀe_k = 1, we obtain
(λᵀf)(x, y) − yᵀ∇_y(λᵀf)(x, y) − yᵀ∇_p h(x, y, p) < (λᵀf)(u, v) − uᵀ∇_u(λᵀf)(u, v) − uᵀ∇_r k(u, v, r).
(9.22)
Since f(·, v) is η₁-invex, we obtain
f(x, v) − f(u, v) ≥ η₁(x, u)ᵀ∇_u f(u, v).
It follows from λ > 0 that
(λᵀf)(x, v) − (λᵀf)(u, v) ≥ η₁(x, u)ᵀ∇_u(λᵀf)(u, v).
(9.23)
In view of the constraint (9.16), it follows that ā = ∇_u(λᵀf)(u, v) + ∇_r k(u, v, r) ∈ Rⁿ₊. From (9.17), we have
η₁(x, u)ᵀ∇_u(λᵀf)(u, v) + η₁(x, u)ᵀ∇_r k(u, v, r) + uᵀ∇_u(λᵀf)(u, v) + uᵀ∇_r k(u, v, r) ≥ 0.
(9.24)
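Inequality (9.24) is simply the inner product of the two nonnegative quantities already at hand: the vector η₁(x, u) + u ≥ 0 from (9.17) and the dual constraint slack ā ∈ Rⁿ₊ from (9.16):

```latex
0\le\big(\eta_1(x,u)+u\big)^{T}\bar a
 =\big(\eta_1(x,u)+u\big)^{T}\big[\nabla_u(\lambda^{T}f)(u,v)+\nabla_r k(u,v,r)\big],
```

which, expanded term by term, is exactly (9.24). The same device, applied to η₂(v, y) + y and the primal slack b̄, yields (9.28) below.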
Combining (9.19) and (9.24) yields
η₁(x, u)ᵀ∇_u(λᵀf)(u, v) ≥ −uᵀ∇_u(λᵀf)(u, v) − uᵀ∇_r k(u, v, r).
(9.25)
Thus, (9.25) and (9.23) give
(λᵀf)(x, v) − (λᵀf)(u, v) ≥ −uᵀ∇_u(λᵀf)(u, v) − uᵀ∇_r k(u, v, r).
(9.26)
Next, since −f(x, ·) is η₂-invex, we obtain
f(x, v) − f(x, y) ≤ η₂(v, y)ᵀ∇_y f(x, y).
It follows from λ > 0 that
(λᵀf)(x, v) − (λᵀf)(x, y) ≤ η₂(v, y)ᵀ∇_y(λᵀf)(x, y).
(9.27)
From the constraint (9.15), it follows that b̄ = −∇_y(λᵀf)(x, y) − ∇_p h(x, y, p) ∈ R^m₊. From (9.18), we have
η₂(v, y)ᵀ∇_y(λᵀf)(x, y) + η₂(v, y)ᵀ∇_p h(x, y, p) + yᵀ∇_y(λᵀf)(x, y) + yᵀ∇_p h(x, y, p) ≤ 0.
(9.28)
Combining (9.20) and (9.28) yields
η₂(v, y)ᵀ∇_y(λᵀf)(x, y) ≤ −yᵀ∇_y(λᵀf)(x, y) − yᵀ∇_p h(x, y, p).
(9.29)
Using (9.29) and (9.27), we get
(λᵀf)(x, v) − (λᵀf)(x, y) ≤ −yᵀ∇_y(λᵀf)(x, y) − yᵀ∇_p h(x, y, p).
(9.30)
It follows from (9.26) and (9.30) that
(λᵀf)(x, y) − yᵀ∇_y(λᵀf)(x, y) − yᵀ∇_p h(x, y, p) ≥ (λᵀf)(u, v) − uᵀ∇_u(λᵀf)(u, v) − uᵀ∇_r k(u, v, r),
which contradicts (9.22).
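To see the last step in full: subtracting (9.30) from (9.26) cancels the common term (λᵀf)(x, v):

```latex
(\lambda^{T}f)(x,y)-(\lambda^{T}f)(u,v)
 \ge -u^{T}\nabla_u(\lambda^{T}f)(u,v)-u^{T}\nabla_r k(u,v,r)
     +y^{T}\nabla_y(\lambda^{T}f)(x,y)+y^{T}\nabla_p h(x,y,p),
```

and moving the y-terms to the left-hand side gives the displayed inequality, which is incompatible with the strict inequality (9.22).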
Remark 9.3.1 If the invexity assumption of Theorem 9.3.1 is replaced by convexity, then the conditions η₁(x, u) + u ≥ 0 and η₂(v, y) + y ≥ 0 become x ≥ 0 and v ≥ 0. Again, if we take
h(x, y, p) = (1/2)pᵀ∇_yy f(x, y)p,  k(x, y, r) = (1/2)rᵀ∇_xx f(x, y)r,
then our models reduce to the pair of symmetric dual problems considered in [52] and [106]. The following theorem is a generalization of Theorem 2.2 in [52] and Theorem 3.2 in [106].

Theorem 9.3.2 (Strong Duality) Let (x̄, ȳ, λ̄, p̄) be an efficient solution of (HMP). Assume that
(i) the matrix ∇_pp h(x̄, ȳ, p̄) is nonsingular;
(ii) the vectors ∇_y f₁(x̄, ȳ), ..., ∇_y f_k(x̄, ȳ) are linearly independent;
(iii) ∇_p h(x̄, ȳ, p̄) ∉ span{∇_y f₁(x̄, ȳ), ..., ∇_y f_k(x̄, ȳ)} \ {0};
(iv) p̄ ≠ 0 implies ∇_p h(x̄, ȳ, p̄) ≠ 0; and
(v) ∇_p h(x̄, ȳ, 0) = ∇_r k(x̄, ȳ, 0) = 0.
Assume further that the assumptions of Theorem 9.3.1 are satisfied. Then, the objective values of (HMP) and (HMD) are equal, and (x̄, ȳ, λ̄, r̄ = 0) is an efficient solution of (HMD).
Proof Let
L_P = αᵀ[f(x, y) − (yᵀ∇_y(λᵀf)(x, y))e_k − (yᵀ∇_p h(x, y, p))e_k] + βᵀ[∇_y(λᵀf)(x, y) + ∇_p h(x, y, p)] − ωᵀλ + μ(λᵀe_k − 1),
where α ∈ R^k, β ∈ R^m, ω ∈ R^k, and μ ∈ R. Since (x̄, ȳ, λ̄, p̄) is an efficient solution of (HMP), it follows from the Fritz John optimality conditions [24] that there exist ᾱ ∈ R^k, β̄ ∈ R^m, μ ∈ R, and ω ∈ R^k such that
∂L_P/∂x = ∇_x(ᾱᵀf)(x̄, ȳ) + ∇_xy(λ̄ᵀf)(x̄, ȳ)(β̄ − (ᾱᵀe_k)ȳ) + ∇_px h(x̄, ȳ, p̄)(β̄ − (ᾱᵀe_k)ȳ) = 0,
(9.31)
∂L_P/∂y = ∇_y f(x̄, ȳ)(ᾱ − (ᾱᵀe_k)λ̄) + ∇_yy(λ̄ᵀf)(x̄, ȳ)(β̄ − (ᾱᵀe_k)ȳ) − (ᾱᵀe_k)∇_p h(x̄, ȳ, p̄) + ∇_py h(x̄, ȳ, p̄)(β̄ − (ᾱᵀe_k)ȳ) = 0,
(9.32)
∂L_P/∂p = ∇_pp h(x̄, ȳ, p̄)(β̄ − (ᾱᵀe_k)ȳ) = 0,
(9.33)
∂L_P/∂λ = ∇_yᵀf(x̄, ȳ)(β̄ − (ᾱᵀe_k)ȳ) − ω + μe_k = 0,
(9.34)
β̄ᵀ ∂L_P/∂β = β̄ᵀ(∇_y(λ̄ᵀf)(x̄, ȳ) + ∇_p h(x̄, ȳ, p̄)) = 0,
(9.35)
ωᵀ ∂L_P/∂ω = ωᵀλ̄ = 0,
(9.36)
(ᾱ, β̄, ω) ≥ 0,  (ᾱ, β̄, ω, μ) ≠ 0.
(9.37)
Since λ̄ > 0 and ω ≥ 0, it follows from (9.36) that ω = 0.
From (9.33) and the nonsingularity of ∇_pp h(x̄, ȳ, p̄), we have
β̄ = (ᾱᵀe_k)ȳ.
(9.38)
Therefore, it follows from (9.34) and ω = 0 that μ = 0.
¯ ω, μ) = 0, If α¯ = 0, then it is clear from (9.38) that β¯ = 0. Consequently, (α, ¯ β, contradicting to (9.37). Hence, α¯ ≥ 0, and α¯ T ek = 0.
(9.39)
From (9.38) and (9.32), we get
∇_y f(x̄, ȳ)(ᾱ − (ᾱᵀe_k)λ̄) − (ᾱᵀe_k)∇_p h(x̄, ȳ, p̄) = 0.
(9.40)
We claim that p̄ = 0. Indeed, if p̄ ≠ 0, then condition (iv) gives ∇_p h(x̄, ȳ, p̄) ≠ 0, while from (9.40) we get
∇_p h(x̄, ȳ, p̄) = (1/(ᾱᵀe_k)) ∇_y f(x̄, ȳ)(ᾱ − (ᾱᵀe_k)λ̄),
which contradicts condition (iii). Hence, p̄ = 0. By conditions (ii) and (v), it follows from (9.40) that
ᾱ = (ᾱᵀe_k)λ̄.
(9.41)
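To verify (9.41): since p̄ = 0, condition (v) gives ∇_p h(x̄, ȳ, 0) = 0, so (9.40) collapses to

```latex
\nabla_y f(\bar x,\bar y)\big(\bar\alpha-(\bar\alpha^{T}e_k)\bar\lambda\big)=0,
```

and the linear independence of ∇_y f₁(x̄, ȳ), ..., ∇_y f_k(x̄, ȳ) assumed in condition (ii) forces ᾱ − (ᾱᵀe_k)λ̄ = 0.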
Using (9.31), (9.38), (9.39), and (9.41), we have
∇_x(λ̄ᵀf)(x̄, ȳ) = (∇_x(ᾱᵀf)(x̄, ȳ))/(ᾱᵀe_k) = 0.
(9.42)
Thus, it follows from (9.42) and condition (v) that (x̄, ȳ, λ̄, r = 0) is a feasible solution of (HMD). From (9.35), (9.38), and (9.39), we get
ȳᵀ∇_y(λ̄ᵀf)(x̄, ȳ) = 0.
(9.43)
Therefore, from (9.42), (9.43), and condition (v), we obtain
F_P(x̄, ȳ, λ̄, p̄) = F_D(x̄, ȳ, λ̄, 0).
Finally, using the weak duality theorem (Theorem 9.3.1), we can easily show that (x̄, ȳ, λ̄, r = 0) is an efficient solution of (HMD).
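The equality of the objective values can also be checked directly: at the common point, (9.42), (9.43), and condition (v) annihilate every correction term, leaving

```latex
F_P(\bar x,\bar y,\bar\lambda,0)
 =f(\bar x,\bar y)-\big(\bar y^{T}\nabla_y(\bar\lambda^{T}f)(\bar x,\bar y)\big)e_k
  -\big(\bar y^{T}\nabla_p h(\bar x,\bar y,0)\big)e_k
 =f(\bar x,\bar y)
 =F_D(\bar x,\bar y,\bar\lambda,0),
```

since the dual correction terms x̄ᵀ∇_x(λ̄ᵀf)(x̄, ȳ) and x̄ᵀ∇_r k(x̄, ȳ, 0) vanish by (9.42) and condition (v), respectively.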
We also have a converse duality theorem by virtue of the symmetry of the problem. This converse duality theorem is stated below, and its proof is similar to that given for Theorem 9.3.2.

Theorem 9.3.3 (Converse Duality) Let (ū, v̄, λ̄, r̄) be an efficient solution of (HMD). Assume that the following conditions are satisfied:
(i) the matrix ∇_rr k(ū, v̄, r̄) is nonsingular;
(ii) the vectors ∇_u f₁(ū, v̄), ..., ∇_u f_k(ū, v̄) are linearly independent;
(iii) ∇_r k(ū, v̄, r̄) ∉ span{∇_u f₁(ū, v̄), ..., ∇_u f_k(ū, v̄)} \ {0};
(iv) r̄ ≠ 0 implies ∇_r k(ū, v̄, r̄) ≠ 0; and
(v) ∇_r k(ū, v̄, 0) = ∇_p h(ū, v̄, 0) = 0.
Furthermore, assume that the conditions of Theorem 9.3.1 hold. Then, the objective values of (HMP) and (HMD) are equal, and (ū, v̄, λ̄, p̄ = 0) is an efficient solution of (HMP).
9.4 Special Cases

(i) If we take
h(x, y, p) = (1/2)pᵀ∇_yy f(x, y)p,  k(x, y, r) = (1/2)rᵀ∇_xx f(x, y)r,
then our higher-order symmetric dual models and results in this section reduce to the second-order symmetric dual models and results in Sect. 6.3 of Chap. 6.
(ii) If we take m = n, p = r = 0, h(x, y, p) = (1/2)pᵀ∇_yy f(x, y)p, k(x, y, r) = (1/2)rᵀ∇_xx f(x, y)r, η₁(x, u) = x − u, and η₂(v, y) = v − y, then our models reduce to the second-order symmetric dual models and results in [52].
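The reductions in both cases rest on elementary identities for a quadratic form in the parameter; writing A(x, y) for the (symmetric) Hessian appearing in h,

```latex
h(x,y,p)=\tfrac12\,p^{T}A(x,y)\,p
\;\Longrightarrow\;
\nabla_p h(x,y,p)=A(x,y)\,p,\quad
\nabla_{pp}h(x,y,p)=A(x,y),\quad
\nabla_p h(x,y,0)=0,
```

and likewise for k. In particular, condition (v) of Theorems 9.3.2 and 9.3.3 holds automatically for these choices, and the higher-order terms in (HMP) and (HMD) collapse to the familiar second-order expressions.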
References
1. Aghezzaf, B.: Second order mixed type duality in multiobjective programming problems. J. Math. Anal. Appl. 285, 97–106 (2003)
2. Ahmad, I., Agarwal, R.P.: Second order converse duality for nonlinear programming. J. Nonlinear Sci. Appl. 3, 234–244 (2000)
3. Avriel, M., Diewert, W.E., Schaible, S., Zang, I.: Generalized Concavity. Plenum Press, New York (1988)
4. Balas, E.: Minimax and duality for linear and nonlinear mixed integer programming. In: Abadie, J. (ed.) Integer and Nonlinear Programming. North-Holland, Amsterdam (1970)
5. Bazaraa, M.S., Goode, J.J.: On symmetric duality in nonlinear programming. Oper. Res. 21, 1–9 (1973)
6. Bazaraa, M.S., Sherali, H.D., Shetty, C.M.: Nonlinear Programming: Theory and Applications. Wiley, New York (1993)
7. Bector, C.R., Chandra, S.: Second order symmetric and self dual programs. Opsearch 23, 89–95 (1986)
8. Bector, C.R., Chandra, S., Bector, M.K.: Generalized fractional programming duality: a parametric approach. J. Optim. Theory Appl. 60, 243–260 (1989)
9. Ben-Israel, A., Mond, B.: What is invexity? J. Aust. Math. Soc. Ser. B 28, 1–9 (1986)
10. Ben-Israel, A., Ben-Tal, A., Zlobec, S.: Optimality in Nonlinear Programming: A Feasible Directions Approach. Wiley, New York (1981)
11. Bhatia, D., Mehra, A.: Lagrangian duality for preinvex set-valued functions. J. Math. Anal. Appl. 214, 599–612 (1997)
12. Borwein, J.M.: Proper efficient point for maximizations with respect to cones. SIAM J. Control Optim. 15, 57–63 (1977)
13. Cambini, A., Castagnoli, E., Martein, L., Mazzoleni, P., Schaible, S.: Generalized Convexity and Fractional Programming with Economic Applications. Springer, Berlin (1990)
14. Chandra, S., Abha, C.S.: A note on pseudo-invexity and duality in nonlinear programming. Eur. J. Oper. Res. 122, 161–165 (2000)
15. Chandra, S., Craven, B.D., Mond, B.: Symmetric dual fractional programming. Z. Oper. Res. 29, 59–64 (1985)
16. Chandra, S., Craven, B.D., Mond, B.: Generalized concavity and duality with a square root term. Optimization 16, 653–662 (1985)
17. Chandra, S., Goyal, A., Husain, I.: On symmetric duality in mathematical programming with F-convexity. Optimization 43, 1–18 (1998)
© Springer Nature Singapore Pte Ltd. 2018 X. Yang, Generalized Preinvexity and Second Order Duality in Multiobjective Programming, Springer Optimization and Its Applications 142, https://doi.org/10.1007/978-981-13-1981-5
18. Chankong, V., Haimes, Y.Y.: Multiobjective Decision Making: Theory and Methodology. North-Holland, New York (1983)
19. Chandra, S., Husain, I.: Symmetric dual nondifferentiable programs. Bull. Aust. Math. Soc. 24, 295–307 (1981)
20. Chandra, S., Husain, I., Abha, I.: On mixed symmetric duality in mathematical programming. Opsearch 36, 165–171 (1999)
21. Chandra, S., Prasad, D.: Symmetric duality in mathematical programming. J. Aust. Math. Soc. Ser. B 35, 198–206 (1993)
22. Chen, X.: Higher order symmetric duality in non-differentiable multiobjective programming problems. J. Math. Anal. Appl. 290, 423–435 (2004)
23. Craven, B.D.: Invex functions and constrained local minima. Bull. Aust. Math. Soc. 24, 357–366 (1981)
24. Craven, B.D.: Lagrangian conditions and quasiduality. Bull. Aust. Math. Soc. 16, 325–339 (1977)
25. Craven, B.D., Glover, B.M.: Invex functions and duality. J. Aust. Math. Soc. Ser. A 39, 1–20 (1985)
26. Crouzeix, J.P., Martinez-Legaz, J.E., Volle, M.: Generalized Convexity, Generalized Monotonicity. Proceedings of the Vth International Workshop on Generalized Convexity, Marseille, 17–21 July 1996. Springer, Berlin (1994)
27. Dantzig, G.B., Eisenberg, E., Cottle, R.W.: Symmetric dual nonlinear programs. Pac. J. Math. 15, 809–812 (1965)
28. Dorn, W.S.: A symmetric dual theorem for quadratic programs. J. Oper. Res. Soc. Jpn. 2, 93–97 (1960)
29. Geoffrion, A.M.: Proper efficiency and the theory of vector maximization. J. Math. Anal. Appl. 22, 618–630 (1968)
30. Giorgi, G.: A note on relationships between convexity and invexity. J. Aust. Math. Soc. Ser. B 32, 97–99 (1990)
31. Gulati, T.R., Ahmad, I.: Second order symmetric duality for nonlinear minimax mixed integer programs. Eur. J. Oper. Res. 101, 122–129 (1997)
32. Gulati, T.R., Husain, I., Ahmed, A.: Multiobjective symmetric duality with invexity. Bull. Aust. Math. Soc. 56, 25–36 (1997)
33. Gulati, T.R., Mehndiratta, G.: Nondifferentiable multiobjective Mond-Weir type second-order symmetric duality over cones. Optim. Lett. 4, 293–309 (2010)
34. Gulati, T.R., Saini, H., Gupta, S.K.: Second-order multiobjective symmetric duality with cone constraints. Eur. J. Oper. Res. 205, 247–252 (2010)
35. Gupta, S.K., Kailey, N.: Second-order multiobjective symmetric duality involving cone-bonvex functions. J. Global Optim. 55, 125–140 (2013)
36. Hadjisavvas, N., Martinez-Legaz, J.E., Penot, J.P.: Generalized Convexity and Generalized Monotonicity. Springer, Berlin (2001)
37. Hanson, M.A.: On sufficiency of the Kuhn-Tucker conditions. J. Math. Anal. Appl. 80, 545–550 (1981)
38. Hanson, M.A.: Second order invexity and duality in mathematical programming. Opsearch 30, 311–320 (1993)
39. Hanson, M.A., Mond, B.: Further generalization of convexity in mathematical programming. J. Inf. Optim. Sci. 3, 25–32 (1982)
40. Hanson, M.A., Mond, B.: Necessary and sufficient conditions in constrained optimization. Math. Program. 37, 51–58 (1987)
41. Holmes, R.B.: Geometric Functional Analysis and Its Applications. Springer, New York (1975)
42. Hou, S.H., Yang, X.M.: On second-order symmetric duality in nondifferentiable programming. J. Math. Anal. Appl. 255, 491–498 (2001)
43. Jeyakumar, V.: ρ-convexity and second order duality. Util. Math. 34, 1–15 (1985)
44. Kailey, N., Gupta, S.K., Dangar, D.: Mixed second-order multiobjective symmetric duality with cone constraints. Nonlinear Anal. Real World Appl. 12, 3373–3383 (2011)
45. Karamardian, S.: Strictly quasi-convex (concave) functions and duality in mathematical programming. J. Math. Anal. Appl. 20, 344–358 (1967)
46. Karamardian, S., Schaible, S.: Seven kinds of monotone maps. J. Optim. Theory Appl. 66, 37–46 (1990)
47. Kaul, R.N., Kaur, S.: Generalizations of convex and related functions. Eur. J. Oper. Res. 9, 369–377 (1982)
48. Kaul, R.N., Kaur, S.: Optimality criteria in nonlinear programming involving nonconvex functions. J. Math. Anal. Appl. 105, 104–112 (1985)
49. Kaul, R.N., Lyall, V.: A note on nonlinear fractional vector maximization. Opsearch 26, 108–121 (1989)
50. Khan, Z.A., Hanson, M.A.: On ratio invexity in mathematical programming. J. Math. Anal. Appl. 206, 330–336 (1997)
51. Kim, D.S., Kang, H.S., Lee, Y.J., Seo, Y.Y.: Higher order duality in multiobjective programming with cone constraints. Optimization 59, 29–43 (2010)
52. Kim, D.S., Yun, Y.B., Kuk, H.: Second order symmetric and self duality in multiobjective programming. Appl. Math. Lett. 10, 17–22 (1997)
53. Komlosi, S.: Generalized monotonicity and generalized convexity. J. Optim. Theory Appl. 84, 361–376 (1995)
54. Komlosi, S.: Monotonicity and quasimonotonicity in nonsmooth analysis. In: Du, D.Z., Qi, L., Womersley, R.S. (eds.) Recent Advances in Nonsmooth Optimization. World Scientific Publishing Co Pte Ltd, Singapore (1995)
55. Komlosi, S., Rapcsak, T., Schaible, S.: Generalized Convexity. Proceedings of the IVth International Workshop on Generalized Convexity, Pecs, 31 Aug–2 Sept 1992. Springer, Berlin (1994)
56. Kumar, P., Bhatia, D.: A note on symmetric duality for multiobjective nonlinear programs. Opsearch 32, 313–320 (1995)
57. Lal, S.N., Nath, B., Kumar, A.: Duality for some nondifferentiable static multiobjective programming problems. J. Math. Anal. Appl. 186, 862–867 (1994)
58. Mangasarian, O.L.: Nonlinear Programming. McGraw-Hill, New York (1969)
59. Mangasarian, O.L.: Second order and higher order duality in nonlinear programming. J. Math. Anal. Appl. 51, 607–620 (1975)
60. Martin, D.H.: The essence of invexity. J. Optim. Theory Appl. 47, 65–76 (1985)
61. Mishra, S.K.: Second order symmetric duality in mathematical programming with F-convexity. Eur. J. Oper. Res. 127, 507–518 (2000)
62. Mishra, S.K.: Non-differentiable higher order symmetric duality in mathematical programming with generalized invexity. Eur. J. Oper. Res. 167, 28–34 (2005)
63. Mishra, S.K., Lai, K.K.: Second order symmetric duality in multiobjective programming involving generalized cone-invex functions. Eur. J. Oper. Res. 178, 20–26 (2007)
64. Mishra, S.K., Rueda, N.G.: Higher order generalized invexity and duality in mathematical programming. J. Math. Anal. Appl. 247, 173–182 (2000)
65. Mishra, S.K., Rueda, N.G.: Higher-order generalized invexity and duality in nondifferentiable mathematical programming. J. Math. Anal. Appl. 272, 496–506 (2002)
66. Mohan, S.R., Neogy, S.K.: On invex sets and preinvex functions. J. Math. Anal. Appl. 189, 901–908 (1995)
67. Mond, B.: A symmetric dual theorem for nonlinear programs. Q. Appl. Math. 23, 265–269 (1965)
68. Mond, B.: Second order duality for nonlinear programs. Opsearch 11, 90–99 (1974)
69. Mond, B.: A class of nondifferentiable mathematical programming problems. J. Math. Anal. Appl. 46, 169–174 (1974)
70. Mond, B., Schechter, M.: Non-differentiable symmetric duality. Bull. Aust. Math. Soc. 53, 177–188 (1996)
71. Mond, B., Smart, I.: Duality with invexity for a class of nondifferentiable static and continuous programming problems. J. Math. Anal. Appl. 141, 373–388 (1989)
72. Mond, B., Weir, T.: Generalized concavity and duality. In: Schaible, S., Ziemba, W.T. (eds.) Generalized Concavity in Optimization and Economics, pp. 263–280. Academic, New York (1981)
73. Mond, B., Weir, T.: Generalized convexity and higher order duality. J. Math. Sci. 16–18, 74–94 (1981–1983)
74. Mond, B., Weir, T.: Symmetric duality for nonlinear multiobjective programming. In: Kumar, S. (ed.) Recent Developments in Mathematical Programming, pp. 137–153. Gordon and Breach Science, London (1991)
75. Mond, B., Zhang, J.: Higher order invexity and duality in mathematical programming. In: Generalized Convexity, Generalized Monotonicity: Recent Results, Luminy, 1996. Optimization and Its Applications, vol. 27, pp. 357–372. Kluwer Academic, Dordrecht (1998)
76. Mukherjee, R.N., Reddy, L.V.: Semicontinuity and quasiconvex functions. J. Optim. Theory Appl. 94, 715–726 (1997)
77. Nanda, S., Das, L.N.: Pseudo-invexity and duality in nonlinear programming. Eur. J. Oper. Res. 88, 572–577 (1996)
78. Ng, C.T.: On midconvex functions with midconcave bounds. Proc. Am. Math. Soc. 102, 538–540 (1988)
79. Nikodem, K.: On some class of midconvex functions. Polska Akademia Nauk. Annales Polonici Mathematici 50, 145–151 (1989)
80. Noor, M.A.: Nonconvex function and variational inequalities. J. Optim. Theory Appl. 87, 615–630 (1995)
81. Pini, R.: Invexity and generalized convexity. Optimization 22, 513–525 (1991)
82. Pini, R., Singh, C.: A survey of recent advances in generalized convexity with applications to duality theory and optimality conditions (1985–1995). Optimization 39, 311–360 (1997)
83. Pini, R., Singh, C.: Generalized convexity and generalized monotonicity. J. Inf. Optim. Sci. 20, 215–233 (1999)
84. Ponstein, J.: Seven kinds of convexity. SIAM Rev. 9, 115–119 (1967)
85. Preda, V.: On efficiency and duality for multiobjective programs. J. Math. Anal. Appl. 166, 365–377 (1992)
86. Roberts, A.W., Varberg, D.E.: Convex Functions. Academic, New York (1973)
87. Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
88. Ruiz-Garzon, G., Osuna-Gomez, R., Rufian-Lizana, A.: Generalized invex monotonicity. Eur. J. Oper. Res. 114, 501–512 (2003)
89. Schaible, S.: Criteria for generalized monotonicity. In: Giannessi, F., Komlósi, S., Rapcsák, T. (eds.) New Trends in Mathematical Programming, pp. 277–288. Kluwer Academic, Boston (1998)
90. Schaible, S., Ziemba, W.T.: Generalized Concavity in Optimization and Economics. Academic, New York (1981)
91. Schechter, M.: More on subgradient duality. J. Math. Anal. Appl. 71, 251–262 (1979)
92. Singh, C., Suneja, S.K., Rueda, N.G.: Preinvexity in multiobjective fractional programming. J. Inf. Optim. Sci. 13, 293–302 (1992)
93. Suneja, S.K., Lalitha, C.S., Khurana, S.: Second order symmetric duality in multiobjective programming. Eur. J. Oper. Res. 144, 492–500 (2003)
94. Tang, L.P., Yan, H., Yang, X.M.: Second order duality for multiobjective programming with cone constraints. Sci. China Math. 59, 1285–1306 (2016)
95. Tanino, T., Sawaragi, Y.: Duality theory in multiobjective programming. J. Optim. Theory Appl. 27, 509–529 (1979)
96. Wang, S.Y., Li, Z.F., Craven, B.D.: Global efficiency in multiobjective programming. Optimization 45, 385–396 (1999)
97. Weir, T., Jeyakumar, V.: A class of nonconvex functions and mathematical programming. Bull. Aust. Math. Soc. 38, 177–189 (1988)
98. Weir, T., Mond, B.: Pre-invex functions in multiple objective optimization. J. Math. Anal. Appl. 136, 29–38 (1988)
99. Weir, T., Mond, B.: Symmetric and self duality in multiple objective programming. Asia-Pac. J. Oper. Res. 4, 124–133 (1988)
100. Weir, T., Mond, B.: Generalized convexity and duality in multiple objective programming. Bull. Aust. Math. Soc. 39, 287–299 (1989)
101. Yang, X.Q.: Second-order global optimality conditions for convex composite optimization. Math. Program. 81, 327–347 (1998)
102. Yang, X.Q., Chen, G.Y.: A class of nonconvex functions and pre-variational inequalities. J. Math. Anal. Appl. 169, 359–373 (1992)
103. Yang, X.M.: Second order symmetric duality in nonlinear fractional programming. Opsearch 32, 205–209 (1995)
104. Yang, X.M.: On second order symmetric duality in nondifferentiable multiobjective programming. J. Ind. Manag. Optim. 5, 697–703 (2009)
105. Yang, X.M.: A note on preinvexity. J. Ind. Manag. Optim. 10, 1319–1321 (2014)
106. Yang, X.M., Hou, S.H.: Second-order symmetric duality in multiobjective programming. Appl. Math. Lett. 14, 587–592 (2001)
107. Yang, X.M., Li, D.: On properties of preinvex functions. J. Math. Anal. Appl. 256, 229–241 (2001)
108. Yang, X.M., Li, D.: Semistrictly preinvex functions. J. Math. Anal. Appl. 259, 287–308 (2001)
109. Yang, X.M., Liu, S.Y.: Three kinds of generalized convexity. J. Optim. Theory Appl. 86, 501–513 (1995)
110. Yang, X.M., Teo, K.L., Yang, X.Q.: A characterization of convex functions. Appl. Math. Lett. 13, 27–30 (2000)
111. Yang, X.M., Teo, K.L., Yang, X.Q.: Duality for a class of nondifferentiable multiobjective programming problems. J. Math. Anal. Appl. 252, 999–1005 (2000)
112. Yang, X.M., Teo, K.L., Yang, X.Q.: Second order strict converse duality in nonlinear fractional programming. In: Yang, X.Q., Teo, K.L., Caccetta, L. (eds.) Optimization Methods and Applications, pp. 295–306. Kluwer Academic, Boston (2001)
113. Yang, X.M., Teo, K.L., Yang, X.Q.: Higher order generalized convexity and duality in nondifferentiable multiobjective mathematical programming. J. Math. Anal. Appl. 297, 48–55 (2004)
114. Yang, X.M., Yang, J., Lee, H.W.J.: Strong duality theorem for multiobjective higher order nondifferentiable symmetric dual programs. J. Ind. Manag. Optim. 9, 525–530 (2013)
115. Yang, X.M., Yang, J., Yip, T.L., Teo, K.L.: Higher-order Mond-Weir converse duality in multiobjective programming involving cones. Sci. China Math. 56, 2389–2392 (2013)
116. Yang, X.M., Yang, X.Q.: Mixed type converse duality in multiobjective programming problems. J. Math. Anal. Appl. 304, 394–398 (2005)
117. Yang, X.M., Yang, X.Q.: A note on mixed type converse duality in multiobjective programming problems. J. Ind. Manag. Optim. 6, 497–500 (2010)
118. Yang, X.M., Yang, X.Q., Teo, K.L.: Characterizations and applications of prequasiinvex functions. J. Optim. Theory Appl. 110, 645–668 (2001)
119. Yang, X.M., Yang, X.Q., Teo, K.L.: Generalized invexity and generalized invariant monotonicity. J. Optim. Theory Appl. 117, 607–625 (2003)
120. Yang, X.M., Yang, X.Q., Teo, K.L.: Criteria for generalized invex monotonicities. Eur. J. Oper. Res. 164, 115–119 (2005)
121. Yang, X.M., Yang, X.Q., Teo, K.L.: Non-differentiable second-order symmetric duality in mathematical programming with F-convexity. Eur. J. Oper. Res. 144, 554–559 (2003)
122. Yang, X.M., Yang, X.Q., Teo, K.L.: Converse duality in nonlinear programming with cone constraints. Eur. J. Oper. Res. 170, 350–354 (2006)
123. Yang, X.M., Yang, X.Q., Teo, K.L., Hou, S.H.: Second order duality for nonlinear programming. Indian J. Pure Appl. Math. 35, 699–708 (2004)
124. Yang, X.M., Yang, X.Q., Teo, K.L., Hou, S.H.: Second order symmetric duality in nondifferentiable multiobjective programming with F-convexity. Eur. J. Oper. Res. 164, 406–416 (2005)
125. Yang, X.M., Yang, X.Q., Teo, K.L., Hou, S.H.: Multiobjective second-order symmetric duality with F-convexity. Eur. J. Oper. Res. 165, 585–591 (2005)
126. Yang, X.M., Yang, X.Q., Teo, K.L.: Higher-order symmetric duality in multiobjective programming with invexity. J. Ind. Manag. Optim. 4, 385–391 (2008)
Index
B
Bector's parametric approach, 49
η-Bonvexity, 123

C
Converse duality: Mond-Weir type converse duality with cone constraints, 150–153; Mond-Weir type symmetric duality, 157–158; multiobjective Mond-Weir-type second-order symmetric duality, 118–119; multiobjective second-order duality with cone constraints, 133–148; Wolfe type I symmetric duality, 102; Wolfe type II symmetric duality, 107
Convex cone, 150
ρ-Convexity, 109

E
Epigraph, 24

F
F-pseudoconvexity, 123
F-quasiconvexity, 123
Fritz John optimality condition, 114

G
Generalized Fritz John conditions, 131, 133, 137, 141, 145
Generalized invariant monotonicity, 77: invariant monotone, 78–81; invariant pseudomonotone maps, 84–87; invariant quasimonotone maps, 81–83; strictly invariant monotone, 80–81, 90; strictly invariant pseudomonotone maps, 87–90; strictly monotone, 80
Generalized invexity, 77
Generalized monotonicity, 77
Geoffrion's idea, 49
G-preinvex set, 24–25

H
Hessian matrix, 109, 143

I
Invariant convex, 3
Invariant monotone, 78–81
Invariant pseudomonotone maps, 84–87
Invariant quasimonotone maps, 81–83
Invex functions, 3

L
Lagrangian-type duality, 48
Linearly dependent vectors, 136
Linearly independent vectors, 136, 140
Local prequasiinvexity, 4, 5, 20
Local semistrict prequasiinvexity, 4, 5, 18–19

M
Mond-Weir type converse duality with cone constraints, 150–153
Mond-Weir type symmetric duality: converse duality, 157–158; dual problem, 153; primal problem, 153; strong duality, 155–157; weak duality, 153–155
Monotonicity: variational inequality problems, 77; vector-valued function, 77
Multiobjective fractional programming problems: efficient solution, 48–50; feasible solution, 48–50; inequality, 50; Lagrangian-type duality, 48; optimal solution, 49–50; primal problem, 48–49; saddle point optimality criteria, 48
Multiobjective higher-order duality: differentiable functions, 149; Mond-Weir type converse duality with cone constraints, 150–153; Mond-Weir type symmetric duality (converse duality, 157–158; dual problem, 153; primal problem, 153; strong duality, 155–157; weak duality, 153–155); nonlinear programming problem, 149; second-order symmetric dual models, 158
Multiobjective Mond-Weir-type second-order symmetric duality: converse duality, 118–119; dual problem, 111; examples, 119–120; Hessian matrix, 109; primal problem, 110–111; strong duality, 114–118; support function, 110; weak duality, 111–114
Multiobjective programming problem, 96: Mond-Weir type converse duality with cone constraints, 150–151; prequasiinvex functions, 71–72; global efficient solution, 72, 73; global weakly efficient solution, 72, 74; local efficient solution, 72, 73; local weakly efficient solution, 72, 74
Multiobjective second-order duality with cone constraints: η-bonvexity/η-pseudobonvexity, 123; converse duality, 133–148; corrected versions, duality theorems, 121–122; definitions, 123–126; F-convexity conditions, 123; F-pseudoconvexity and F-quasiconvexity, 123; nonsymmetric duality, 123; scalar nonlinear programming problem, 121; strong duality, 122, 131–133; weak duality, 122, 126–131
Multiobjective Wolfe type second-order symmetric duality: notations and definitions, 96–97; Wolfe type I symmetric duality (converse duality, 102; dual problem, 97, 98; primal problem, 97; strong duality, 100–102; weak duality, 98–100); Wolfe type II symmetric duality (converse duality, 107; dual problem, 102, 103; primal problem, 102; weak duality, 103–107)

P
Polar cone, 150
Preinvex functions: characterizations (local prequasiinvexity, 20; local semistrict prequasiinvexity, 18–19; prequasiinvexity, 13–15; semistrict prequasiinvexity, 15–18); Condition C, 3–6; intermediate-point preinvexity, 4; and invex set, 4–5; nondifferentiable convex functions, 3; and semicontinuity, 6–12
Preinvexity, 27–32, 78, 91
Prequasiinvex functions: applications, 71–74; definitions, 53–54; examples, 54–56; properties, 56–65; semistrictly prequasiinvex, 54, 65–69; strictly prequasiinvex, 53, 69–71
Prequasiinvexity, 4, 5, 13–15
Primal problem: Mond-Weir type symmetric duality, 153; multiobjective fractional programming problems, 48–49; multiobjective Mond-Weir-type second-order symmetric duality, 110–111; second-order convexity, 95; Wolfe type I symmetric duality, 97; Wolfe type II symmetric duality, 102
η-Pseudobonvexity, 123
Pseudo-convexity, 77
Pseudoinvexity, 84–87, 91
Pseudo-monotonicity, 77, 84

Q
Quasiconvexity, 77
Quasiinvexity, 91
Quasi-monotonicity, 77, 81

S
Saddle point optimality criteria, 43, 48
Scalar kernel function, 95
Second-order converse duality theorems, 133–148
Second-order convex functions, 95
Second-order duality, 95
Second-order F-pseudoconvexity, 109, 123–124, 126–131, 136
Second-order F-quasiconvex, 124, 127–128, 130
Second-order Mangasarian-type dual formulations, 95, 109
Second-order pseudoinvexity, 109
Semipreinvex functions: multiobjective fractional programming applications, 48–50; preinvex functions and arc-connected convex functions, 43; properties, 44–48; saddle point optimality criteria, 43, 48; "semi-connected" property, 43; semi-connected set, 44–48
Semistrictly preinvex functions: Condition C, 32–35; definitions, 21–22; gradient properties, 35–41; lower semicontinuity condition, 21, 32–35; and preinvexity relationship, 27–32; properties, 22–27
Semistrictly prequasiinvex functions: definition, 54; properties, 65–69
Semistrictly quasiconvex function, 56
Semistrict prequasiinvexity, 4, 5, 15–18
Single-objective symmetric duality, 95
Strictly invariant monotone, 80–81, 90
Strictly invariant pseudomonotone maps, 87–90
Strictly monotone, 80
Strictly preinvex functions, gradient properties, 35–41
Strictly prequasiinvex functions: definition, 53; properties, 69–71
Strictly pseudomonotone, 87
Strictly quasiconvex function, 56
Strict preinvexity, 91
Strict pseudoinvexity, 91
Strong duality: Mond-Weir type symmetric duality, 155–157; multiobjective Mond-Weir-type second-order symmetric duality, 114–118; multiobjective second-order duality with cone constraints, 122, 131–133; Wolfe type I symmetric duality, 100–102; Wolfe type II symmetric duality, 105–107

V
Variational inequality problems, 77

W
Weak duality: Mond-Weir type symmetric duality, 153–155; multiobjective Mond-Weir-type second-order symmetric duality, 111–114; multiobjective second-order duality with cone constraints, 122, 126–131; Wolfe type I symmetric duality, 98–100; Wolfe type II symmetric duality, 103–105
Wolfe dual models, 95
Wolfe type I symmetric duality: converse duality, 102; dual problem, 97, 98; primal problem, 97; strong duality, 100–102; weak duality, 98–100
Wolfe type II symmetric duality: converse duality, 107; dual problem, 102, 103; primal problem, 102; strong duality, 105–107; weak duality, 103–105