Danskin's theorem

In convex analysis, Danskin's theorem is a theorem which provides information about the derivatives of a function of the form
\[ f(x) = \max_{z \in Z} \phi(x,z). \]
The theorem has applications in optimization, where it is sometimes used to solve minimax problems. The original theorem, given by J. M. Danskin in his 1967 monograph,[1] provides a formula for the directional derivative of the maximum of a (not necessarily convex) directionally differentiable function.
An extension to more general conditions was proven in 1971 by Dimitri Bertsekas.
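As a simple illustration of this form (an example for orientation, not taken from the sources cited here), one can take $Z$ to be the closed unit ball and $\phi$ the standard inner product, in which case the pointwise maximum is the Euclidean norm:
\[ Z = \{ z \in \mathbb{R}^n : \|z\|_2 \le 1 \}, \qquad \phi(x,z) = z^{\top} x, \qquad f(x) = \max_{z \in Z} z^{\top} x = \|x\|_2 . \]
Each $\phi(\cdot,z)$ is linear, hence convex, and $Z$ is compact, so the results stated below apply to this $f$.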
Statement

The following version is proven in "Nonlinear Programming" (1991).[2] Suppose $\phi(x,z)$ is a continuous function of two arguments, $\phi : \mathbb{R}^n \times Z \to \mathbb{R}$, where $Z \subset \mathbb{R}^m$ is a compact set. Further assume that $\phi(x,z)$ is convex in $x$ for every $z \in Z$. Under these conditions, Danskin's theorem provides conclusions regarding the convexity and differentiability of the function
\[ f(x) = \max_{z \in Z} \phi(x,z). \]
To state these results, we define the set of maximizing points $Z_0(x)$ as
\[ Z_0(x) = \left\{ \overline{z} : \phi(x,\overline{z}) = \max_{z \in Z} \phi(x,z) \right\}. \]
Danskin's theorem then provides the following results.
Convexity
$f(x)$ is convex.

Directional semi-differential
The semi-differential of $f(x)$ in the direction $y$, denoted $\partial_y f(x)$, is given by
\[ \partial_y f(x) = \max_{z \in Z_0(x)} \phi'(x,z;y), \]
where $\phi'(x,z;y)$ is the directional derivative of the function $\phi(\cdot,z)$ at $x$ in the direction $y$.

Derivative
$f(x)$ is differentiable at $x$ if $Z_0(x)$ consists of a single element $\overline{z}$. In this case, the derivative of $f(x)$ (or the gradient of $f(x)$ if $x$ is a vector) is given by
\[ \frac{\partial f}{\partial x} = \frac{\partial \phi(x,\overline{z})}{\partial x}. \]

Example of no directional derivative

In the statement of Danskin's theorem, it is important that one can only conclude semi-differentiability of $f$, and not full directional differentiability, as the following simple example shows. Set $Z = \{-1,+1\}$ and $\phi(x,z) = zx$. Then $f(x) = |x|$, which is semi-differentiable with $\partial_{-} f(0) = -1$ and $\partial_{+} f(0) = +1$, but which has no directional derivative at $x = 0$.
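The derivative formula above can be checked numerically. The following is a minimal sketch (the choice of $\phi$, the grid used to discretize $Z$, and the finite-difference step are illustrative assumptions, not taken from the cited sources): it maximizes $\phi(x,z) = zx - z^2$ over $Z = [-1,1]$, reads off the maximizer $\overline{z}$, and compares the Danskin derivative $\partial \phi(x,\overline{z})/\partial x = \overline{z}$ against a finite-difference estimate of $f'(x)$.

```python
# Minimal numerical sketch of Danskin's theorem (illustrative only).
import numpy as np

# Compact set Z = [-1, 1], discretized on a fine grid, and
# phi(x, z) = z*x - z**2, which is linear (hence convex) in x for each fixed z.
Z = np.linspace(-1.0, 1.0, 100_001)

def phi(x, z):
    return z * x - z ** 2

def f(x):
    # f(x) = max_{z in Z} phi(x, z); analytically f(x) = x**2 / 4 for |x| <= 2.
    return np.max(phi(x, Z))

x = 1.2

# Here Z_0(x) is the single point z_bar = x / 2, so f is differentiable at x
# and Danskin's theorem gives f'(x) = d/dx phi(x, z_bar) = z_bar.
z_bar = Z[np.argmax(phi(x, Z))]
danskin_derivative = z_bar

# Finite-difference check on f itself.
eps = 1e-5
fd_derivative = (f(x + eps) - f(x - eps)) / (2 * eps)

print("maximizer z_bar        :", z_bar)               # ~ 0.6 = x / 2
print("Danskin derivative     :", danskin_derivative)  # ~ 0.6
print("finite-difference f'(x):", fd_derivative)       # ~ 0.6
```

For $|x| < 2$ the maximizer is the interior point $\overline{z} = x/2$, so both quantities printed above should agree with $f'(x) = x/2$ up to discretization error.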
Subdifferential

If $\phi(x,z)$ is differentiable with respect to $x$ for all $z \in Z$, and if $\partial \phi / \partial x$ is continuous with respect to $z$ for all $x$, then the subdifferential of $f(x)$ is given by
\[ \partial f(x) = \mathrm{conv}\left\{ \frac{\partial \phi(x,z)}{\partial x} : z \in Z_0(x) \right\}, \]
where $\mathrm{conv}$ indicates the convex hull operation. For instance, for $f(x) = |x|$ from the example above, $Z_0(0) = \{-1,+1\}$ and $\partial f(0) = \mathrm{conv}\{-1,+1\} = [-1,1]$.

Extension

The 1971 Ph.D. thesis by Bertsekas (Proposition A.22)[3] proves a more general result, which does not require that $\phi(\cdot,z)$ is differentiable. Instead it assumes that $\phi(\cdot,z)$ is an extended real-valued closed proper convex function for each $z$ in the compact set $Z$, that $\operatorname{int}(\operatorname{dom}(f))$, the interior of the effective domain of $f$, is nonempty, and that $\phi$ is continuous on the set $\operatorname{int}(\operatorname{dom}(f)) \times Z$. Then for all $x$ in $\operatorname{int}(\operatorname{dom}(f))$, the subdifferential of $f$ at $x$ is given by
\[ \partial f(x) = \operatorname{conv}\left\{ \partial \phi(x,z) : z \in Z_0(x) \right\}, \]
where $\partial \phi(x,z)$ is the subdifferential of $\phi(\cdot,z)$ at $x$ for any $z$ in $Z$.

See also

Maximum theorem
Envelope theorem
Hotelling's lemma

References

1. Danskin, John M. (1967). The Theory of Max-Min and Its Application to Weapons Allocation Problems. New York: Springer.
2. Bertsekas, Dimitri P. (1999). Nonlinear Programming (2nd ed.). Belmont, MA: Athena Scientific. ISBN 1-886529-00-0.
3. Bertsekas, Dimitri P. (1971). Control of Uncertain Systems with a Set-Membership Description of Uncertainty (PhD thesis). Cambridge, MA: MIT.