Tuesday, November 18, 2025

When Infimum Breaks a Familiar Equivalence in Optimization

In parametric constrained optimization, it is common to define a value function by allowing a constraint bound to vary.
Given two functions $f, g : X \to \mathbb{R}$ (not necessarily convex) and a constraint parameter $a \in \mathbb{R}$, consider the optimization problem

$$ F(a) = \min \{\, f(x) : g(x) \le a,\ x \in X \,\}. $$

The following logical equivalence is known:

$$ F(a) \le b \quad\Longleftrightarrow\quad \exists x \in X : f(x) \le b \text{ and } g(x) \le a. $$

It turns out that this equivalence hides a subtle assumption: the minimum must actually be attained. Without attainment, only one direction is guaranteed; the other may fail completely.

This post explains this distinction, proves the valid part, and constructs a clean counterexample showing the failure in the general infimum case.

1. The Setup

Let

$$ F(a) = \inf \{\, f(x) : g(x) \le a \,\}, $$

where we use $\inf$ instead of $\min$ since a minimizer may or may not exist.

We are interested in the condition

$$ F(a) \le b. $$

Intuitively, one might think that this should be equivalent to the existence of an $x$ satisfying both the constraint $g(x) \le a$ and the bound $f(x) \le b$.

This intuition is only half correct.

2. The Always-True Direction

Assume there exists $x \in X$ such that

  • $g(x) \le a$
  • $f(x) \le b$

Then clearly

$$ F(a) = \inf \{\, f(y) : g(y) \le a \,\} \le f(x) \le b. $$

Thus we always have

$$ \exists x : f(x) \le b,\ g(x) \le a \quad\Longrightarrow\quad F(a) \le b. $$

This direction does not require the infimum $F(a)$ to be attained.
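On a finite ground set the infimum is automatically a minimum, so the easy direction can be checked directly. Here is a minimal Python sketch; the particular instance (`f`, `g`, `X`, `a`, `b`, and the witness) is a hypothetical toy choice, not from the post:

```python
# Toy instance; all concrete choices below are illustrative assumptions.
def value_function(f, g, X, a):
    """F(a) = inf{ f(x) : x in X, g(x) <= a }; over a finite X the inf is a min."""
    feasible = [f(x) for x in X if g(x) <= a]
    return min(feasible) if feasible else float("inf")

f = lambda x: x * x        # objective
g = lambda x: abs(x)       # constraint function
X = [-2, -1, 0, 1, 2]      # finite ground set
a, b = 1, 1

# A feasible witness with f(x) <= b immediately forces F(a) <= b:
witness = 1
assert g(witness) <= a and f(witness) <= b   # witness is feasible and bounded
assert value_function(f, g, X, a) <= b       # hence F(a) <= f(witness) <= b
```

The second assertion is exactly the chain $F(a) \le f(x) \le b$ from the proof: the infimum over the feasible set is at most the value at any feasible point.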

3. The Subtle Direction (and When It Holds)

The reverse implication

$$ F(a) \le b \quad\Longrightarrow\quad \exists x : f(x) \le b,\ g(x) \le a $$

requires that the minimum be achieved.

Indeed, if there exists $x^* \in X$ such that

  • $g(x^*) \le a$
  • $f(x^*) = F(a)$

then $f(x^*) \le b$ whenever $F(a) \le b$, and the implication holds.

So the full equivalence

$$ F(a) \le b \quad\Longleftrightarrow\quad \exists x : f(x) \le b,\ g(x) \le a $$

holds for every bound $b$ exactly when the infimum is attained (assuming $F(a)$ is finite).
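When $X$ is finite the minimum is always attained, so both directions of the equivalence must agree for every bound. A small Python check of this, on a hypothetical instance (`f`, `g`, `X`, `a` are arbitrary illustrative choices):

```python
# Finite instance (illustrative assumptions): the minimum is attained, so the
# two sides of the equivalence agree for every bound b we try.
def F(a, f, g, X):
    vals = [f(x) for x in X if g(x) <= a]
    return min(vals) if vals else float("inf")

f = lambda x: x          # objective
g = lambda x: x % 3      # constraint function
X = list(range(10))      # finite ground set {0, ..., 9}
a = 1

for b in range(-2, 12):
    lhs = F(a, f, g, X) <= b                          # F(a) <= b
    rhs = any(f(x) <= b and g(x) <= a for x in X)     # exists feasible x with f(x) <= b
    assert lhs == rhs    # full equivalence holds under attainment
```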

4. When the Minimum Does Not Exist: A Counterexample

To see what goes wrong, consider the following very simple example.

Let

  • $X = (0,1)$
  • $g(x) = 0$ for all $x \in X$
  • $f(x) = x$

Then for any $a \ge 0$ we have

$$ F(a) = \inf \{\, f(x) : x \in (0,1) \,\} = 0, $$

but the infimum is not attained because $0 \notin (0,1)$.

Now take $b = 0$.

We have

$$ F(a) = 0 \le 0, $$

but there is no $x \in (0,1)$ such that $f(x) \le 0$, because $x > 0$ for all $x \in X$.

Thus the implication

$$ F(a) \le 0 \quad\Longrightarrow\quad \exists x : f(x) \le 0,\ g(x) \le a $$

fails. This shows that the reverse direction is not valid without attainment.
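The failure can also be watched numerically: sampled minima of $f(x) = x$ on $(0,1)$ tend to the infimum $0$, yet no sample point ever satisfies $f(x) \le 0$. A short Python illustration (the grid sizes are arbitrary choices):

```python
# Counterexample instance from the post: X = (0,1), f(x) = x, g identically 0.
f = lambda x: x

# The sampled minimum over finer and finer grids approaches F(a) = 0 ...
for n in [10, 1_000, 100_000]:
    sampled_min = min(f(k / n) for k in range(1, n))   # equals 1/n
    assert sampled_min == 1 / n

# ... but no point of (0,1) satisfies f(x) <= 0: the bound b = 0
# is reached only in the limit, never by an actual feasible point.
b = 0
assert not any(f(k / 100_000) <= b for k in range(1, 100_000))
```

This is exactly the gap between $\inf$ and $\min$: the value $0$ is approached arbitrarily closely but never achieved by any member of the feasible set.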

5. Summary

The small logical distinction between min and inf appears harmless at first, but it is crucial in duality theory and in constructing correct optimality conditions—especially outside the convex world, where minima may fail to exist and value functions may not behave smoothly.
