Singular Arcs in Optimal Periodic Controls for Scalar Dynamics and Integral Input Constraint

We revisit recent results on optimal periodic control for scalar dynamics with an integral input constraint, in the absence of convexity and concavity assumptions. We show that in this more general framework, the optimal solutions are bang-singular-bang, generalizing the bang-bang solutions of the convex case and the purely singular solutions of the concave one. We introduce a non-local slope condition to characterize the singular arcs. The results are illustrated on a class of bioprocess models.


Introduction
Periodic optimal control has received relatively little attention in the literature, apart from the well-known π-criterion [8]. The latter consists in using a linear-quadratic approximation around a steady state with constant control to study the frequencies of a sinusoidal control that could improve the average performance index over a neighboring periodic solution. Extensions to other shapes of periodic controls have also been considered. However, globally optimal periodic control has very rarely been investigated, apart from [17] for the characterization of the value function under quite strong assumptions. The two-point boundary condition that periodic solutions have to satisfy might explain the difficulty in extending the usual approaches. Indeed, most of the existing works deal with local necessary conditions [10,14], second-order conditions [9,15,23] or approximation techniques [2,6,13].
Recently, a class of scalar dynamics with an integral input constraint has been investigated [5], and it has been shown that under convexity and monotonicity assumptions, the globally optimal periodic solution is bang-bang and therefore improves the averaged criterion over constant controls. For the concave case, it has been shown that constant controls remain the best ones. These results have been motivated in particular by bioprocess applications, for which a kind of duality has been derived [4].
However, situations in which neither the convex nor the concave conditions are fulfilled have not yet been considered; this is the purpose of the present work. It allows us to solve the problem of optimal periodic operation of bioprocesses with growth functions that are neither convex nor concave, which has been an open problem up to now.
The paper is organized as follows. In Sect. 2, the main results of [5] are recalled, and the setting of the present contribution is specified. Then, in Sect. 3, we propose and prove a geometric necessary condition for optimality in terms of slopes, which is central to our approach. In Sect. 4, further results on the optimal trajectories under a possible lack of monotonicity in our assumptions are proved. Section 5 then gives the complete synthesis of the optimal control. Finally, our results are illustrated on a bioprocess model in Sect. 6, which shows the quantitative benefits of having singular arcs.

Some Preliminaries
Let us consider two functions f, g : R → R of class C¹ and the controlled dynamics

ẋ = f(x) + u g(x),  u ∈ [−1, 1],  (1)

on an interval I = (a, b), under the following hypotheses.

(H1) One has g > 0 on I.

(H2) One has f − g < 0 and f + g > 0 on I.

One can straightforwardly check that the interval I is invariant by (1) under Hypothesis H1. Hypothesis H2 implies the controllability of (1) on I. In the following, we shall consider solutions on the interval I only. For convenience, we define the function

ψ(x) := −f(x)/g(x),  x ∈ I.

Hypotheses H1-H2 imply ψ(I) ⊂ [−1, 1], and then for any x̄ ∈ I, the control ū := ψ(x̄), which allows the system to stay at the steady state x̄, is admissible, i.e., ū ∈ [−1, 1].
Let us stress that we do not impose f and g to be non-null on the boundary of I. Therefore ψ(I) is not necessarily [−1, 1], and ψ is not necessarily non-decreasing. Examples 2.1 and 2.2 show that ψ can be non-monotonic on I. Let us fix ū ∈ ψ(I) as a nominal constant control, and consider for T > 0 solutions x(·) in I that are T-periodic with a T-periodic control u satisfying the integral constraint

(1/T) ∫₀ᵀ u(t) dt = ū.  (2)

We denote by U_T the set of admissible controls, that is,

U_T := {u : u is measurable, T-periodic and fulfills (2)}.  (3)

Let us now consider a function ℓ : R → R of class C¹ and associate the criterion

J_T(u) := (1/T) ∫₀ᵀ ℓ(x_u(t)) dt,  (4)

to be minimized over controls u ∈ U_T, where x_u denotes the solution of (1) in I associated to u.
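As a quick illustration of this setting, the following sketch (in Python; the computations of the paper itself are performed in Julia) checks Hypotheses H1-H2 and the admissibility of steady-state controls on a grid, for toy functions f and g that are not taken from the paper:

```python
import numpy as np

# Small sanity sketch (toy functions, not the paper's examples): check
# Hypotheses H1-H2 on a grid of I = (0, 3) and verify that psi(I) lies in
# [-1, 1], so that every steady state x_bar in I is sustained by an
# admissible constant control u_bar = psi(x_bar).

f = lambda x: -0.4 * x
g = lambda x: 1.0 + 0.5 * x
psi = lambda x: -f(x) / g(x)

xs = np.linspace(1e-3, 3.0 - 1e-3, 1000)
assert np.all(g(xs) > 0)                # H1: g > 0 on I
assert np.all(f(xs) - g(xs) < 0)        # H2: f - g < 0 on I
assert np.all(f(xs) + g(xs) > 0)        # H2: f + g > 0 on I
assert np.all(np.abs(psi(xs)) <= 1.0)   # hence psi(I) is a subset of [-1, 1]
print(psi(1.0))                         # admissible steady-state control at x_bar = 1
```

Any x̄ ∈ I can thus be sustained by the constant admissible control ū = ψ(x̄).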
In the former work [5], it has been shown that convexity plays an important role in the possibility of having J_T(u) lower than the cost J_T(ū) of the constant control, under the following additional condition on the dynamics.

(H3) The function ℓ : I → R is increasing and the function γ := ψ ∘ ℓ⁻¹ is strictly convex increasing over ℓ(I).

Under H3, there exists a unique x̄ in I with ψ(x̄) = ū, and one has the following result.

Proposition 2.1 If H1 and H3 hold true, any non-constant T-periodic solution x of (1) with x(0) = x̄ and u ∈ U_T satisfies J_T(u) < J_T(ū).
Conversely, it has been shown in [5] that concavity prevents improving the cost J_T(ū) with non-constant controls, under the following condition on the dynamics.

(H4) There exists a continuous function ψ̃ such that …

Under H4, one also has uniqueness of x̄ in I with ψ(x̄) = ū, and the following result holds.

Proposition 2.2
If H1 and H4 hold true, any non-constant T-periodic solution x of (1) with x(0) = x̄ and u ∈ U_T satisfies J_T(u) > J_T(ū).
In the present work, we assume that Hypotheses H1-H2 are satisfied and aim at relaxing Hypothesis H3 or H4 by allowing a change of convexity as well as a change of monotonicity of the function γ := ψ ∘ ℓ⁻¹ on the interval I, while keeping ℓ increasing. Note that under Hypotheses H1-H2, there does not necessarily exist a unique x̄ such that ψ(x̄) = ū. However, we shall assume that there is a unique stable one, which is guaranteed by the following hypothesis.
(H̆) The function ℓ is increasing on I, and for any ŭ ∈ int(ψ(I)), there exists a unique x̆ ∈ I such that …

Then, as ℓ is increasing, the steady state x̄ that gives the best cost J_T(ū) is clearly the smallest one. Note that Hypothesis H̆ amounts to ψ having at most one change of monotonicity on I. Under Hypotheses H1-H2, if ψ is non-monotonic on I, it must be increasing first and then decreasing. Therefore, we can define the values x̄ and x̂ as follows.

Remark 2.1
Under H1-H2-H̆, one has necessarily x̄ ≤ x̂, with x̄ being the only value fulfilling H̆. Moreover, having ψ non-monotonic with at most one change of monotonicity implies that the functions f and g are null at a or b.
Let us first give a preliminary result about periodic solutions, in the spirit of the former work [5].

Lemma 2.1
Under Hypotheses H1-H2-H̆, any T-periodic solution x of (1) in I with u ∈ U_T fulfills the property

(1/T) ∫₀ᵀ ψ(x(t)) dt = ū,  (5)

and any optimal trajectory x takes the value x̄.
Proof On the interval I, the function g is positive, and from equation (1) one can write u(t) = ẋ(t)/g(x(t)) + ψ(x(t)). Consider then the function t ↦ y(t) := ∫ from x̄ to x(t) of dξ/g(ξ). For any control function u that fulfills the constraint (2), one gets ū = (y(T) − y(0))/T + (1/T) ∫₀ᵀ ψ(x(t)) dt. Therefore, for any T-periodic solution x in I, y is also T-periodic and one obtains property (5). According to the above, for any T-periodic solution x, the map t ↦ ψ(x(t)) − ū has to take the value 0 on [0, T). Therefore, there exists t̃ ∈ [0, T) such that x(t̃) = x̃ with ψ(x̃) = ū. If x̃ = x̄, then we have proved that x takes the value x̄. If not, one has necessarily x̃ > x̄ because of Definition 2.1. Therefore, if the solution x does not take the value x̄, one should have x(t) > x̄ for any t ∈ [0, T). The function ℓ being increasing on I, it follows that J_T(u) > ℓ(x̄) = J_T(ū), which shows that x cannot be optimal. We conclude that one has x̃ = x̄.
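Property (5) can be checked numerically. The sketch below (toy functions f and g, not the paper's examples) builds a periodic solution with a bang-bang control, steering x up for a while and then back down to its initial value, and compares the average of the control with the average of ψ(x(·)):

```python
import numpy as np

# Numerical check of property (5) (toy f, g, not from the paper): along a
# T-periodic solution of (1), the time-average of psi(x(t)) equals the
# average u_bar of the control. The periodic solution is bang-bang:
# u = +1 for a time tau, then u = -1 until x returns to x(0).

f = lambda x: -0.4 * x
g = lambda x: 1.0 + 0.5 * x           # H1-H2 hold on I = (0, 3)
psi = lambda x: -f(x) / g(x)

dt, x0, tau = 1e-5, 1.0, 1.0
xs, us = [x0], []
x, t = x0, 0.0
while t < tau:                        # bang arc u = +1: x increases
    us.append(1.0); x += dt * (f(x) + g(x)); xs.append(x); t += dt
while x > x0:                         # bang arc u = -1: back down to x0
    us.append(-1.0); x += dt * (f(x) - g(x)); xs.append(x); t += dt

u_mean = np.mean(us)
psi_mean = np.mean([psi(v) for v in xs[:-1]])
print(u_mean, psi_mean)               # the two averages coincide, as in (5)
```

The agreement is up to the integration step, since the loop closes the trajectory only approximately.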
This lemma allows us to look for optimal solutions with x(0) = x(T) = x̄ without any loss of generality, as we shall do now. Note that when x̄ = x̂, the single T-periodic solution of (1) with the constraint (2) is the constant solution x = x̄, and there is no optimization to be made. Therefore, we shall assume in the following that ū is taken in int(ψ(I)), which implies x̄ < x̂.
We now introduce Hypotheses H5a and H5b, which generalize Hypotheses H3 and H4, keeping ℓ increasing.

(H5a) The function ℓ : I → R is increasing and there exists x_c ∈ (a, b] such that the function γ := ψ ∘ ℓ⁻¹ is strictly convex over ℓ((a, x_c)) and strictly concave over ℓ((x_c, b)).
(H5b) The function ℓ : I → R is increasing and there exists x_c ∈ (a, b] such that the function γ := ψ ∘ ℓ⁻¹ is strictly concave over ℓ((a, x_c)) and strictly convex over ℓ((x_c, b)).

Remark 2.2
When x_c = b, Hypotheses H5a and H3 are equivalent, as are H5b and H4. Indeed, if x_c > x̂, then H3 or H4 is also recovered, as will be seen later in Proposition 4.1 of Section 4.
We provide now examples that fulfill H5a or H5b.

Example 2.1
Let a = 0, b = 3 and the functions f, g be defined as follows: … Hypotheses H1 and H2 are fulfilled (see Figure 1). We take the identity function for ℓ. One can straightforwardly check that the function ψ is given by the expression … One can then easily check that Hypotheses H̆ and H5a are fulfilled (see also Figure 1).

Example 2.2
Let a = 0, b = 6 and the functions f, g be defined as follows: … Hypotheses H1 and H2 are fulfilled (see Figure 2). We take the identity function for ℓ. One can straightforwardly check that the function ψ is given by the expression … One can then easily check that Hypotheses H̆ and H5b are fulfilled (see also Figure 2).

A Slope Condition
In this section, we derive a geometric necessary condition for optimality in the form of a "slope condition", which links the switching points of an optimal trajectory through the function ψ.
We first reformulate the constraint (2) by considering the augmented dynamics

ẋ = f(x) + u g(x),  ẏ = u,  (6)

with the boundary conditions

x(0) = x(T) = x̄,  y(0) = 0,  y(T) = ū T.  (7)

The optimal control problem can then be stated as

inf { (1/T) ∫₀ᵀ ℓ(x(t)) dt : u ∈ U, (x, y) solution of (6)-(7) },  (8)

where U denotes the set of measurable control functions u over [0, T] taking values in [−1, 1]. Note that Problem (8) in R² admits a solution by classical existence results. Indeed, the set of trajectories that satisfy the boundary conditions (7) is non-empty (it contains the steady state x̄ with constant control ū), and since the system is affine w.r.t. the control and ℓ is continuous, the existence of an optimal control follows from Filippov's existence theorem [11]. We define the Hamiltonian

H := λ_x (f(x) + u g(x)) + λ_y u + λ₀ ℓ(x),

where λ := (λ_x, λ_y) is the adjoint vector. From the Pontryagin Maximum Principle [19], we know that for any optimal control u ∈ U and (x, y) the associated solution of (6)-(7), there exists a scalar λ₀ ≤ 0 and an absolutely continuous map λ : [0, T] → R² solution of the adjoint dynamics

λ̇_x = −λ_x (f′(x) + u g′(x)) − λ₀ ℓ′(x),  λ̇_y = 0,  (9)

for a.e. t ∈ [0, T]. Moreover, one has (λ₀, λ) ≠ 0 and the Hamiltonian condition writes

u(t) ∈ argmax over v ∈ [−1, 1] of H(x(t), y(t), λ(t), λ₀, v), for a.e. t ∈ [0, T].  (10)

A solution x satisfying (6)-(7) for a control u ∈ U and such that there exists (λ₀, λ) ≠ 0 verifying (9)-(10) is called an extremal. Since the dynamics is affine w.r.t. u, the switching function

φ := λ_x g(x) + λ_y

provides the following property of the optimal control u for almost any t ∈ [0, T]: u(t) = 1 when φ(t) > 0 and u(t) = −1 when φ(t) < 0. We recall that a singular arc occurs if φ vanishes on some time interval [t₁, t₂] with t₁ < t₂, and that a switching time t_s ∈ (0, T) is a time at which an extremal control u is non-constant in any neighborhood of t_s (which implies φ(t_s) = 0). Let us mention that from Hypothesis H2, when φ > 0, resp. φ < 0, x is increasing, resp. decreasing. For convenience, we shall define the following numbers.

Definition 3.1 For a solution x(·) of (1) on [0, T], we set x_m := min over [0, T] of x(t) and x_M := max over [0, T] of x(t), attained at times t_m and t_M.

Proof If λ₀ = 0, then λ_x cannot vanish, from the adjoint equation (9): otherwise λ_x would be null over [0, T] and the switching function would be constant, equal to λ_y.
Since λ_y cannot simultaneously be equal to 0, φ would be of constant sign over [0, T], implying that u = 1 or u = −1 over [0, T]. This is a contradiction with the periodicity of x(·) (recall that one has f + g > 0 and f − g < 0 over I). Consequently, λ_x has a constant non-null sign.
Since λ₀ = 0, one has from the adjoint equations (9)

φ̇ = λ_x ψ′(x) g(x)².  (12)

If ψ is increasing on I, φ is monotonic and has at most one switching point, implying that x is either entirely above or entirely below x̄. But then the equality (5) of Lemma 2.1 cannot be verified when ψ is increasing.
Consider now the case when ψ is non-increasing on I. One has then x̂ ∈ I. For any extremal x, we know from Lemma 2.1 that one has x_m ≤ x̄ ≤ x_M. Recall, as already mentioned in Sect. 2, that one has necessarily x̂ > x̄. If x̂ > x_M, then ψ is increasing along the trajectory on [0, T] and we conclude as previously that this is not possible. Therefore an abnormal extremal should verify x_m ≤ x̂ ≤ x_M. Note that the constant solution x̄ cannot be an abnormal extremal when x̄ ≠ x̂. The extreme values of x necessarily being switching loci, one should have φ(t_m) = φ(t_M) = 0. Since the Hamiltonian is conserved along extremal trajectories, it comes with λ₀ = 0 that H = λ_y ψ(x_m) = λ_y ψ(x_M). Since λ_x is non-null and φ is null at t_m and t_M, λ_y cannot be null. We conclude that the equality ψ(x_m) = ψ(x_M) holds, which is in contradiction with Hypothesis H̆. Finally, note from equation (12) that a singular arc on a time interval would impose ψ′(x) = 0, i.e., x = x̂, on this interval. We now focus on regular extremals, i.e., with λ₀ = −1. We begin with a lemma that characterizes the singular arcs as constant values of x.
Proof If λ_y = 0, then λ_x = 0 on [t₁, t₂], and from (9) it comes that λ₀ = 0: the extremal cannot be regular. So one has λ_y ≠ 0. It comes from the adjoint equations (9) with λ₀ = −1 that

λ̇_x = −λ_x (f′(x) + u g′(x)) + ℓ′(x).

As g > 0 on I and φ = 0 on [t₁, t₂], one has then γ′(ℓ(x(t))) = 1/λ_y for all t ∈ [t₁, t₂]. Note that under Hypothesis H5a or H5b, the function γ′ is almost everywhere increasing or decreasing, and ℓ is increasing. Therefore the function γ′ ∘ ℓ cannot be constant on any interval of non-null length unless x(·) is itself constant there. The result follows, with x̃ the solution of γ′(ℓ(x̃)) = 1/λ_y.
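The computation behind this characterization can be sketched as follows (a reconstruction from the definitions above, not a verbatim excerpt): on a singular arc one has φ ≡ 0, hence λ_x = −λ_y/g(x), and differentiating φ = λ_x g(x) + λ_y gives

```latex
% Singular-arc computation (reconstructed sketch).
\dot\phi = \dot\lambda_x\, g(x) + \lambda_x g'(x)\,\dot x
         = \bigl[-\lambda_x\,(f'(x)+u\,g'(x)) + \ell'(x)\bigr] g(x)
           + \lambda_x g'(x)\,\bigl(f(x)+u\,g(x)\bigr)
         = \lambda_x\,\bigl(g'(x)f(x)-f'(x)g(x)\bigr) + \ell'(x)\,g(x).
% Since \psi = -f/g, one has g'f - f'g = \psi'(x)\,g(x)^2, whence
\dot\phi = \lambda_x\,\psi'(x)\,g(x)^2 + \ell'(x)\,g(x).
% Substituting \lambda_x = -\lambda_y/g(x) and writing \dot\phi = 0:
0 = g(x)\,\bigl(\ell'(x) - \lambda_y\,\psi'(x)\bigr)
\quad\Longrightarrow\quad
\gamma'(\ell(x)) = \frac{\psi'(x)}{\ell'(x)} = \frac{1}{\lambda_y}.
```

This is exactly the relation γ′(ℓ(x̃)) = 1/λ_y used in the sequel.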
We now give our optimality "slope condition", which shall play an important role in the derivation of an optimal synthesis.

Proposition 3.2 Assume that Hypotheses H1-H2-H̆ and H5a or H5b are fulfilled. Let x be a non-constant regular extremal. Then λ_y ≠ 0, and any two values x₁, x₂ of the switching set S satisfy the slope condition

(γ(ℓ(x₂)) − γ(ℓ(x₁))) / (ℓ(x₂) − ℓ(x₁)) = 1/λ_y.  (13)

Moreover, if x is optimal with a singular arc at x̃, then S = {x_m, x_M} with (i) x̃ = x_M under Hypothesis H5a, (ii) x̃ = x_m under Hypothesis H5b, satisfying the slope condition

γ′(ℓ(x̃)) = (γ(ℓ(x_M)) − γ(ℓ(x_m))) / (ℓ(x_M) − ℓ(x_m)).  (14)

Proof Since the Hamiltonian H is conserved along any extremal, one has (for a regular extremal) at any switching value x_s, where φ = 0 and thus λ_x = −λ_y/g(x_s),

H = λ_y ψ(x_s) − ℓ(x_s),

which shows that λ_y ≠ 0, using the fact that ℓ is increasing on I. Then the "slope condition" (13) follows. As extreme values of x are necessarily switching loci, one has {x_m, x_M} ⊂ S. Since γ changes convexity only once on ℓ(I) under Hypothesis H5a or H5b, there can exist at most one other value x_I ∈ (x_m, x_M) such that the slope constraint (13) is satisfied. Consider now the case when x admits a singular arc x̃ under Hypothesis H5a (the proof for the case with Hypothesis H5b is similar and left to the reader). Let us first show that having x̃ < x_c cannot be optimal. Otherwise, considering any interval of the singular arc of length δ > 0, the constant solution x = x̃ has to be optimal for the periodic optimal problem (4) with period δ, initial condition x(0) = x̃ and input constraint (1/δ) ∫₀^δ u(t) dt = ũ := ψ(x̃). But γ being increasing and strictly convex at x̃, any admissible solution on [0, δ] with x(0) = x̃ lies in a domain where γ is increasing and strictly convex, provided that δ is small. Then Proposition 2.1 applies, which proves that x = x̃ cannot be optimal. We have thus x̃ ≥ x_c. If x̃ ≠ x_M, then the slope condition (13) gives, with 1/λ_y = γ′(ℓ(x̃)) given by Lemma 3.1, the equality

γ′(ℓ(x̃)) = (γ(ℓ(x_M)) − γ(ℓ(x̃))) / (ℓ(x_M) − ℓ(x̃)),

which contradicts the function γ being strictly concave on (ℓ(x̃), ℓ(x_M)). We conclude that x̃ has to be equal to x_M. Now, let us show that S = {x_m, x̃}. Suppose by contradiction that S = {x_m, x_I, x_M}. Using the same concavity argument as before, it comes that x_I < x_c.
Consider now the function γ̃ : … which implies that γ̃ is convex and above γ on ℓ(I), with γ̃ = γ on ℓ([a, x_c]). Now let us consider the point … implying that x_m < x_I < x_i. By convexity of γ̃ ∘ ℓ, γ(ℓ(x_I)) lies strictly below the straight line passing through (x_m, γ(ℓ(x_m))) and (x_i, γ(ℓ(x_i))), which is by construction the line x ↦ γ′(ℓ(x_M))(ℓ(x) − ℓ(x_M)) + γ(ℓ(x_M)), hence showing that x_I cannot satisfy the slope condition (13).
Finally, the slope condition (13) with x₁ = x_M, x₂ = x_m and 1/λ_y = γ′(ℓ(x_M)) gives the condition (14). Now, let us show that one has necessarily x_M < x̂. If x_M ≥ x̂, then ℓ(x_M) has to be in the concave part of γ with γ′(ℓ(x_M)) < 0 (and γ is necessarily concave on ℓ((x̂, b))). This implies the inequality … Recall from Lemma 2.1 that one has x_m < x̄ (and x̄ ≤ x̂ from Remark 2.1). As γ is increasing on (ℓ(a), ℓ(x̂)) (and ℓ is an increasing function), one gets … which contradicts the condition (14).
The slope condition is a necessary condition for optimality, which states that two cases can occur: either there are two switching points (the maximum and the minimum), one of which may correspond to a singular arc (i.e., a constant portion of the trajectory), or there are three switching points without any singular arc.
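The slope condition can be exploited numerically. The sketch below takes ℓ equal to the identity, so that γ = ψ, and a toy convex-then-concave ψ of Hill type (an illustrative choice, not one of the paper's examples); for a given switching value x_m in the convex region, it finds the value x_M in the concave region at which the tangent slope equals the chord slope, i.e., condition (14):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative sketch of the slope condition (14) with l = identity, so that
# gamma = psi. Toy convex-concave psi (not from the paper):
# psi(x) = x^2/(1+x^2) - 1/2, with inflection point x_c = 1/sqrt(3).

psi  = lambda x: x**2 / (1.0 + x**2) - 0.5
dpsi = lambda x: 2.0 * x / (1.0 + x**2)**2
x_c = 1.0 / np.sqrt(3.0)

def slope_gap(xM, xm):
    """Tangent slope at xM (the singular value under H5a) minus the chord
    slope between xm and xM; condition (14) reads slope_gap = 0."""
    return dpsi(xM) - (psi(xM) - psi(xm)) / (xM - xm)

xm = 0.2                          # a switching value in the convex region
xM = brentq(lambda x: slope_gap(x, xm), x_c + 1e-9, 5.0)
print(xm, xM)                     # xM lies in the concave region (xM > x_c)
```

The sign change of `slope_gap` on (x_c, 5) reflects the single change of convexity of γ, which is what guarantees uniqueness of the matching switching value in the proof above.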

Restriction to the Increasing Part of the Function ψ
In this section, we show that when the function ψ is not increasing on the whole interval I, an optimal trajectory necessarily remains in the domain I ∩ {x < x̂} where the function ψ is increasing (or, equivalently, that one has x_M < x̂). The main idea of the proof of Proposition 4.1 is to show that if this is not the case, one can exhibit a piece of the trajectory which is increasing up to x_M and then decreasing, such that replacing it by a constant state remains admissible and gives a better cost (see Fig. 3).
For convenience, we first introduce the following function, as in [5],

η(x) := 2 g(x) / ((f(x) + g(x))(g(x) − f(x))),  x ∈ I,

which is positive and C¹ on I thanks to H2. This function possesses the following property related to bang-bang controls.
Then, one has defines a diffeomorphism from [t 0 , t 1 ] to its image, and from [t 2 , t 0 + T 0 ] to its image as well. Then, one can write . .
Proceeding with the same decomposition of the interval [t 0 , t 0 + T 0 ], one obtains and with conditions (16) the equality x(t 1 )

Proposition 4.1 Suppose that Hypotheses H1-H2-H̆ and H5a or H5b are verified. A non-constant optimal trajectory x(·) of Problem (8) verifies x(t) < x̂ for any t ∈ [0, T].
Proof Let u(·) be an optimal solution of Problem (8) and x(·) its associated trajectory. Let us recall the notation x_m, x_M given in Definition 3.1, which we shall use below. If it is a regular extremal, the slope condition (13) of Proposition 3.2 gives … Recall that ℓ is increasing. Therefore one has ψ(x_M) ≠ ψ(x_m). We distinguish two cases.

Case 1: ψ(x_M) < ψ(x_m).
One has λ_y < 0, and a switching at x_I such that x_m < x_I < x_M imposes ψ(x_I) < ψ(x_m) and ψ(x_I) < ψ(x_M) from the slope condition (13). Since ψ is increasing on (x_m, x̂) and decreasing on (x̂, x_M), it comes that x_I must satisfy x_I > x_M, which is a contradiction. Now suppose that x(·) admits a singular arc. The inequality … If it is an abnormal extremal, one has ψ(x_M) = ψ(x_m) and, from Proposition 3.1, x(·) has switchings only at x_m and x_M and no singular arc, as in case 1. Now, we posit … (which satisfies … < x̄) and define the functions … Clearly, one has …

∫ η(ξ) ψ(ξ) dξ = ψ(x̄) T > ψ(x_m) T = g(x_m).
In the case where x(·) switches more than once at x_m or x_M on the interval [0, T], say n times, the trajectory x(·) is T/n-periodic and we can apply Lemma 4.1 on the interval [0, T/n] to obtain the same inequality. When … Since the functions g and h are continuous, we deduce the existence of a number x_d … From what precedes, x_d is not a switching point of x(·), and by periodicity of the trajectory, x(·) has to pass through x_d alternately increasing and decreasing, or vice versa. We can then define … is above x_d and, from what precedes, switching occurs only at x_M with no singular arc. Therefore one has … The equality g( … and its associated trajectory x#(·) on [0, T] is given by … which consists in a truncation of the original trajectory x(·) (see Fig. 3). Clearly u#(·) is admissible, and its cost satisfies … ℓ being increasing and x(·) > x_d on (t_d, t_d + T_d), which contradicts the optimality of u(·).
From Proposition 3.1, one obtains immediately the following property.

Optimal Synthesis
Let us recall that the former results in [5] do not cover the case of a change of convexity of the function γ. However, these results can still be applied when x̄ and T are such that any admissible solution x(·) remains in one of the subsets (a, x_c) or (x_c, b), where γ does not change its convexity. Then either bang-bang or constant solutions are optimal (depending on whether γ is convex or concave on the subset, as stated in [5]). However, situations in which solutions x(·) can pass from one subset to the other have not yet been treated. We consider here the class of BSB (for "bang-singular-bang") control strategies with at most one singular arc.
For the initial condition x(0) = x̄ of system (1), we call BSB controls any time functions u_a(t̃; ·) or u_b(t̃; ·) such that … where the switching times t₁, t₂ are such that 0 ≤ t₁ ≤ t₂ ≤ T − t̃, and … where the switching times t₁, t₂ are such that 0 ≤ t₁ ≤ t₂ − t̃ ≤ T − t̃.
Note that t̃ represents the time spent by the trajectory on the singular arc x̃ (with x̃ = x(t₂) for the control u_a(t̃; ·), and x̃ = x(t₁) for the control u_b(t̃; ·)). With the notations introduced in Definition 3.1, one has x_m = x(t₁), x_M = x̃ with control u_a(t̃; ·), and x_m = x̃, x_M = x(t₂) with control u_b(t̃; ·). Note also that the particular case t̃ = 0 corresponds to pure "bang-bang" trajectories (i.e., without singular arc), while t̃ = T corresponds to the constant solution x̄ (in these two particular cases, the definitions of u_a and u_b coincide). We now show that for any value of t̃, there exist unique controls u_a, u_b that are admissible and such that the trajectory is periodic with x_M < x̂. The following proposition is in the spirit of Proposition 3.2 in [5], extended to the present context with singular arcs.
with x̃ = x_M for the control u_a(t̃; ·), and x̃ = x_m for the control u_b(t̃; ·).
Proof We consider controls u_a only (the proof for controls u_b is analogous). On the interval [0, t₁], one has ẋ = f(x) − g(x) < 0, and thus ξ : t ↦ x(t) defines a diffeomorphism from [0, t₁] to its image, and similarly on the interval [t₂ + t̃, T]. On the interval [t₁, t₂], one has ẋ = f(x) + g(x) > 0 and ξ : t ↦ x(t) again defines a diffeomorphism from [t₁, t₂] to its image. Then one can write … is T-periodic when x(T) = x̄, and one has then … which is exactly Eq. (18). In the same way, one can write … and get for an admissible control u(·) … which is exactly Eq. (19). We now show that for t̃ in [0, T], there exists a unique pair (x_m, x_M) in I ∩ {x < x̂} satisfying conditions (18) and (19).
By the Intermediate Value Theorem, we deduce that there exists α ∈ (β⁻¹(x̄), x̄) such that F(α) = 0. Moreover, one has … Therefore, we deduce that there exists a unique x_m ∈ I such that F(x_m) = 0 with β(x_m) < x̂, and x_M is then uniquely defined as x_M = β(x_m).

Remark 5.1
Under uniqueness of the solutions x_m, x_M of (18)-(19), the controls u_a(t̃; ·), u_b(t̃; ·) can be expressed as follows … We now state our main result, which says that optimal trajectories are of the BSB type.
Proof Let u(·) be an optimal solution of Problem (8) and x(·) the corresponding trajectory. If x_M ≤ x_c or x_m ≥ x_c, then x(·) remains in a domain of I where γ does not change its convexity. One can then apply the former results of [5], which state that the optimal trajectory on [0, T] is either constant or "bang-bang" with a single switch at x_m and at x_M. This amounts to claiming that u_a(t̃; ·) or u_b(t̃; ·) is optimal with t̃ = 0 or t̃ = T. Let us now consider the cases for which x_M > x_c and x_m < x_c.
Assume that Hypothesis H5a is fulfilled (the proof under Hypothesis H5b is similar, with u_a replaced by u_b, and is left to the reader). If x(·) does not have a singular arc, let us show that x(·) cannot switch more than once at x_M. One can consider four numbers t_i ∈ (0, T), i ∈ {1, …, 4}, such that … which amounts to having the average ũ of the control u ∘ θ(·) on the interval [t̃, t̃ + T̃(x̃)] equal to ψ(x̃). On this interval, we can consider the optimal control problem (8) with T = T̃(x̃), ū = ũ and x̄ = x̃. As γ is concave increasing on [x_b, x_M], we can use the results of [5], which state that a non-constant trajectory cannot minimize the average of ℓ ∘ x_θ(·) on this interval, leading to a contradiction with the optimality of x_θ(·), and thus of x(·).
In a similar way, one can prove that x(·) cannot switch more than once at x_m. Indeed, if this were the case, one could consider an analogous construction of an optimal trajectory with a piece in the domain where γ is convex and which switches twice at x_m, contradicting the former results of [5] for the convex case.
Thus, if x(·) has no singular arc, it has exactly one switch at x_M and one at x_m, and is synthesized by the control u_a(0; ·), which is uniquely defined according to Proposition 5.1.
Finally, if x(·) possesses a singular arc at a certain x̃, we know from Proposition 3.2 that x_M and x_m are the only values of x(·) at which switches occur, and that x̃ = x_M. As x(·) cannot have more than one switch at x_m, we deduce that the set S := {t ∈ [0, T] : x(t) = x_M} is connected. Therefore x(·) has a unique singular arc of length t̃ = |S| > 0 and is synthesized by a control with a BSB structure, namely u_a(t̃; ·), which is uniquely defined according to Proposition 5.1.

Remark 5.2
In practice, one simply has to look for the best value J_T(u_a(t̃; ·)), resp. J_T(u_b(t̃; ·)), among t̃ ∈ [0, T] such that the slope condition is verified when t̃ is not 0 or T, as illustrated in Sect. 6. According to Proposition 5.1, one can equivalently look for the values x̃ of the singular arc that give the best value of the criterion. For this purpose, one can first determine the subset X_M, resp. X_m, of values x_M ∈ (x_c, x̂) such that the slope condition (14) is fulfilled for some x_m < x_c (with a numerical tolerance), resp. of values x_m < x_c such that the slope condition (15) is fulfilled for some x_M ∈ (x_c, x̂). Then one simply has to test the performance of the BSB strategy with x̃ in X_M, resp. X_m.
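The search described in this remark can be sketched as follows on the bioprocess example of Sect. 6, by scanning candidate singular values and enforcing the periodicity and mean-control constraints directly (in Python; the paper's computations use Julia, and the nominal rate D̄ = 0.4 below is an illustrative choice, not necessarily one of the values of Table 1):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Sketch of the BSB search on the chemostat example (Hill growth, n = 2,
# K_s = sqrt(3), mu_max = 2, s_in = 4, D- = 0, D+ = 1.2*mu(s_in), T = 20).
# D_bar = 0.4 is an illustrative nominal value.
mu_max, Ks, s_in, T = 2.0, np.sqrt(3.0), 4.0, 20.0
mu = lambda s: mu_max * s**2 / (Ks**2 + s**2)
Dm, Dp = 0.0, 1.2 * mu(s_in)
D_bar = 0.4

# Scalar form (22) written as (1): s' = f(s) + u*g(s), u in [-1, 1].
f = lambda s: (s_in - s) * ((Dp + Dm) / 2.0 - mu(s))
g = lambda s: (s_in - s) * (Dp - Dm) / 2.0
psi = lambda s: -f(s) / g(s)
s_bar = brentq(lambda s: mu(s) - D_bar, 1e-6, s_in - 1e-6)  # steady state

def arc(x1, x2, u):
    """Time and cost integral along a bang arc from x1 to x2 (u = +/-1)."""
    v = lambda s: f(s) + u * g(s)
    t = quad(lambda s: 1.0 / v(s), x1, x2)[0]
    c = quad(lambda s: s / v(s), x1, x2)[0]   # criterion l(s) = s
    return t, c

def bsb_cost(sM, sm):
    """BSB control u_a: bang down to sm, bang up to sM, singular dwell at sM
    (duration fixed by the period T), bang down back to s_bar."""
    t1, c1 = arc(s_bar, sm, -1.0)
    t2, c2 = arc(sm, sM, +1.0)
    t3, c3 = arc(sM, s_bar, -1.0)
    dwell = T - (t1 + t2 + t3)
    if dwell < 0.0:
        return None
    mean_u = (-(t1 + t3) + t2 + psi(sM) * dwell) / T     # constraint (2)
    cost = (c1 + c2 + c3 + sM * dwell) / T
    return mean_u, cost, dwell

# Scan singular values sM; for each, adjust sm so that the mean control
# equals psi(s_bar), and keep the best admissible cost.
u_bar, best = psi(s_bar), None
for sM in np.linspace(s_bar + 0.01, 2.5, 60):
    def gap(sm):
        r = bsb_cost(sM, sm)
        return np.inf if r is None else r[0] - u_bar
    try:
        sm = brentq(gap, 0.05, s_bar - 1e-3)
    except ValueError:
        continue                       # no admissible sm for this sM
    _, cost, dwell = bsb_cost(sM, sm)
    if best is None or cost < best[0]:
        best = (cost, sM, sm, dwell)

print("constant-control cost:", s_bar, "best BSB (cost, sM, sm, dwell):", best)
```

Among the candidates retained here, one would then keep those whose singular value also satisfies the slope condition (14), as prescribed by the remark.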

Illustration on a Bioprocess Model
In the past decades, periodic operation of biological or chemical processes has been investigated to enhance their performance [1,3,20,22]. Several contributions have identified situations for which a periodic solution improves an objective function, such as productivity, compared to its value at steady state [1,12,18,24], or not [21]. In the recent work [4], an application to the piloting of wastewater bioprocesses has been investigated. It has been shown that, depending on the characteristics of the growth function of the micro-organisms, a non-constant periodic flow rate can provide a lower average concentration of pollutant at the output of the process, compared to a constant flow rate treating the same quantity of contaminated water over a given period of time. However, when the growth function is neither convex nor concave, such as the Hill growth function (see below), a bang-bang periodic control is optimal when the nominal steady state is in a region of local convexity of the growth function and when the period is small enough. For larger periods, this control strategy can lead the state of the dynamics to a region of concavity of the growth rate, and the criterion can even be worse than for a constant control. For these cases, a repetition of bang-bang arcs over the period has been proposed as a sub-optimal strategy. We are now in a position to show that this strategy is indeed not optimal and that the optimal bang-singular-bang control does a much better job.
We recall the chemostat model, traditionally used in wastewater treatment modeling (see, e.g., [16]):

ṡ = −μ(s) b + D (s_in − s),  ḃ = (μ(s) − D) b,  (20)

where s is the substrate (pollutant) concentration, b the biomass concentration, s_in the input concentration and D the dilution rate, used as the control variable. Given T > 0 and D̄ ∈ (D⁻, D⁺), the optimal control problem considered in [4] is

inf (1/T) ∫₀ᵀ s(t) dt  s.t.  D(t) ∈ [D⁻, D⁺],  (1/T) ∫₀ᵀ D(t) dt = D̄.  (21)

Note that system (20) can be reduced to a one-dimensional control dynamics. Indeed, for any periodic solution (s(·), b(·)), one has ż = −Dz where z := s_in − s − b.
Since z(·) is also periodic, one necessarily has z(t) = 0 for any t, so that (20) becomes

ṡ = (s_in − s)(D − μ(s)).  (22)

The function μ is assumed to be C¹ and increasing, with μ(0) = 0. The nominal dilution rate D̄ is chosen in (0, μ(s_in)), so that there exists a unique steady state s̄ ∈ (0, s_in) for the constant control D = D̄, given as the unique solution of μ(s̄) = D̄, which is globally asymptotically stable on the domain (0, s_in).
Let us show that this problem falls exactly within the framework of Section 2. We take for I the largest open interval containing s̄ that is invariant for (22), and for the criterion (21) the function ℓ equal to the identity. Writing (22) in the form (1), from the expressions of f and g we get ψ(s) = αμ(s) − β (where α > 0 and β are constants determined by D⁻ and D⁺), which is increasing. Hypothesis H̆ is thus fulfilled. Here we have considered for the function μ the Hill function (as in [4]), a monotonic growth function given by the expression

μ(s) := μ_max sⁿ / (K_sⁿ + sⁿ),  (n > 1).

Fig. 6 Examples of trajectories for the three control laws.

Fig. 7 Graphs of the cost function J_T(u_a(t̃; ·)) as a function of t̃, for the four cases considered in Table 1. The portions of the curves in red correspond to BSB trajectories for which the slope condition (14) is verified (with a numerical tolerance of 5%). Dashed lines represent the costs of the constant controls.

This function is convex for s lower than s_c = K_s ((n − 1)/(n + 1))^(1/n) and concave for s above this value (see Figure 5). Hypothesis H5a is thus fulfilled. For the simulations, we have considered the following values of the parameters of the Hill function: n = 2, K_s = √3 and μ_max = 2, and for the operating conditions s_in = 4, D⁻ = 0, D⁺ = 1.2 μ(s_in), with T = 20. For different values of D̄, we have computed the cost of the periodic solution for the three control laws: (i) constant control, (ii) bang-bang control, (iii) optimal bang-singular-bang control, for which the optimal value of t̃ ∈ (0, T) has been determined numerically and for which the value of the singular arc satisfies the slope condition (14) (see Figure 7).
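The inflection point of the Hill function can be double-checked numerically; with the parameter values above, s_c = √3 ((2 − 1)/(2 + 1))^(1/2) = 1:

```python
import numpy as np

# Quick check of the convex/concave split of the Hill function used in the
# simulations: with n = 2, K_s = sqrt(3), mu_max = 2, the inflection point
# s_c = K_s * ((n-1)/(n+1))**(1/n) equals 1.

n, Ks, mu_max = 2, np.sqrt(3.0), 2.0
mu = lambda s: mu_max * s**n / (Ks**n + s**n)
s_c = Ks * ((n - 1.0) / (n + 1.0))**(1.0 / n)

# Second derivative by central differences: positive below s_c, negative above.
d2 = lambda s, h=1e-4: (mu(s + h) - 2.0 * mu(s) + mu(s - h)) / h**2

print(s_c, d2(0.5 * s_c), d2(2.0 * s_c))
```

The sign change of the second derivative at s_c confirms that Hypothesis H5a holds for this growth function.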
Computations have been performed with the programming language Julia [7]. Results are summarized in Table 1, and Figure 6 depicts the corresponding trajectories.
In Figure 7, one can also check that the slope condition is satisfied for the optimal BSB trajectory (given by the optimal t̃), in agreement with Proposition 3.2.

Conclusion
In this work, we have revisited a class of optimal periodic control problems that are linear with respect to the control, relaxing the convexity or concavity hypotheses on the dynamics. Based on the adjoint equations of the Maximum Principle, we have introduced several non-local techniques (slope condition, trajectory truncation, swap of pieces of trajectory...) to show that the optimal solution admits a single singular arc. This result generalizes former ones in the sense that a singular arc of null length gives a pure bang-bang solution, which is optimal in the convex case, while a singular arc of full length is a constant solution, which is optimal in the concave case. An illustration on a biological model shows the gains of the optimal strategy over bang-bang or steady-state solutions. More generally, this result shows the interest of "bang-singular-bang" periodic controls, compared to patterns considered in other approaches such as the π-criterion.