3 Our Motivation: Falsification of Conditional Safety Property
Falsification of Conditional Safety Properties
441
With this class of formulas, we can express various requirements on the behavior of a system under specific conditions. Hence, for a given system, verifying conditional safety properties is as important as verifying safety properties.
In STL, we usually encode such a condition into a formula of the form □I(¬ϕcond ∨ ϕsafe). Note that, in conventional Boolean semantics, this formula is equivalent to □I(ϕcond → ϕsafe). In robustness-guided falsification, we search for a counterexample by minimizing the robustness of the formulas ¬ϕcond and ϕsafe simultaneously.
However, there is a gap between this straightforward reduction to numerical optimization and what we expect to obtain through falsification: when we write down a conditional safety property, we would like to say something meaningful about the dynamics of the model when the antecedent condition ϕcond holds; but across the simulation iterations, we cannot guarantee that a sufficient number of behaviors are observed in which the system satisfies the antecedent condition ϕcond. From this point of view, we would expect an optimization algorithm that solves conditional safety properties
– with as few iterations as possible to find a counterexample x ∈ D; and
– while picking a sufficient number of inputs xj1 . . . xjn that steer the whole model to satisfy the antecedent condition ϕcond.
To this end, we propose a novel algorithm that picks a suitable input in each iteration step while satisfying the above twofold requirement. A technical highlight is that, using Gaussian process regression, we estimate the function F∗ : x ↦ ⟦M(x), □I ¬ϕcond⟧ and obtain an input subspace D′ ⊆ D such that, for any input x ∈ D′, the output M(x) satisfies the antecedent condition ϕcond with high probability.
Related Work. The difficulty of falsification lies in observing a rare event (here, the conditional safety property being false). Our technique is based on the following idea: we consider a superset event that happens much more often than the original one (namely, that ϕcond holds), and we “prune” from the input space the region in which the superset event does not happen. This idea is shared with importance splitting; in fact, our Proposition 2.4 is an instance of the decomposition in Sect. 4.1 of [10]. However, while importance splitting explores the input space by stochastic sampling, GP-UCB deterministically chooses the next input, so combining these two optimization approaches is not straightforward. One of our contributions is that we realize the above “pruning” in GP-UCB-style optimization by employing regression.
2 Signal Temporal Logic (STL)
Definition 2.1 (syntax). Let Var be a set of variables. The set of STL formulas is inductively defined as follows:

ϕ ::= f(v1, . . . , vn) > 0 | ⊥ | ⊤ | ¬ϕ | ϕ ∨ ϕ | ϕ UI ϕ
442
T. Akazaki
where f is an n-ary function f : Rⁿ → R ∪ {−∞, ∞}, v1, . . . , vn ∈ Var, and I is a closed non-singular interval in R≥0, i.e. I = [a, b] or [a, ∞) where a < b and a ∈ R. We also define the following derived operators, as usual: ϕ1 ∧ ϕ2 ≡ ¬(¬ϕ1 ∨ ¬ϕ2), ϕ1 RI ϕ2 ≡ ¬(¬ϕ1 UI ¬ϕ2), ♦I ϕ ≡ ⊤ UI ϕ, and □I ϕ ≡ ⊥ RI ϕ.
Definition 2.2 (robust semantics of STL). Let σ : R≥0 → R^Var be a signal and ϕ be an STL formula. We define the robustness ⟦σ, ϕ⟧ ∈ R ∪ {−∞, ∞} inductively as follows. Here ⊓ and ⊔ denote infimums and supremums of real numbers, respectively.

⟦σ, f(v1, · · · , vn) > 0⟧ = f(σ(0)(v1), · · · , σ(0)(vn))
⟦σ, ⊥⟧ = −∞
⟦σ, ⊤⟧ = ∞
⟦σ, ¬ϕ⟧ = −⟦σ, ϕ⟧
⟦σ, ϕ1 ∨ ϕ2⟧ = ⟦σ, ϕ1⟧ ⊔ ⟦σ, ϕ2⟧
⟦σ, ϕ1 UI ϕ2⟧ = ⊔_{t∈I} ( ⟦σ^t, ϕ2⟧ ⊓ ⊓_{t′∈[0,t)} ⟦σ^{t′}, ϕ1⟧ )
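To make the robust semantics concrete, the clauses above can be sketched in Python for a discretely sampled signal. The encoding of signals (a list of dicts from variable names to values, with time identified with the sample index) and of formulas (tagged tuples) is our own illustrative choice, not part of the paper.

```python
import math

# Illustrative sketch (not from the paper): STL robustness over a
# discretely sampled signal, evaluated at time 0.
def rob(sig, phi):
    kind = phi[0]
    if kind == "atom":                  # ("atom", f, v1, ..., vn) means f(v1, ..., vn) > 0
        f, vs = phi[1], phi[2:]
        return f(*(sig[0][v] for v in vs))
    if kind == "top":
        return math.inf
    if kind == "bot":
        return -math.inf
    if kind == "not":
        return -rob(sig, phi[1])
    if kind == "or":
        return max(rob(sig, phi[1]), rob(sig, phi[2]))
    if kind == "until":                 # ("until", (a, b), phi1, phi2), integer bounds
        (a, b), p1, p2 = phi[1], phi[2], phi[3]
        best = -math.inf
        for t in range(a, min(b, len(sig) - 1) + 1):
            # phi1 must hold (robustly) at all samples strictly before t
            held = min((rob(sig[u:], p1) for u in range(t)), default=math.inf)
            best = max(best, min(rob(sig[t:], p2), held))
        return best
    raise ValueError(kind)
```

With `♦_[a,b] ϕ` encoded as `("until", (a, b), ("top",), ϕ)`, the eventually case falls out of the until clause as in Definition 2.1.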
Notation 2.3. Let f : Rⁿ → R ∪ {−∞, ∞}. We define the Boolean abstraction of f as the function f̄ : Rⁿ → B such that f̄(v) = ⊤ if f(v) > 0, and f̄(v) = ⊥ otherwise. Similarly, for an STL formula ϕ, we denote by ϕ̄ the formula obtained by replacing every atomic function f occurring in ϕ with its Boolean abstraction f̄. We see that ⟦σ, ϕ⟧ > 0 implies ⟦σ, ϕ̄⟧ > 0.
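The Boolean abstraction can be mirrored at the robustness level by a small helper that maps an atomic function to one returning ±∞, so that the robustness of an abstracted formula is always ±∞; the name and encoding below are ours.

```python
import math

# Illustrative sketch of Notation 2.3 at the robustness level: the
# abstracted atom returns +inf exactly when f(v) > 0, and -inf otherwise.
def boolean_abstraction(f):
    def f_bar(*v):
        return math.inf if f(*v) > 0 else -math.inf
    return f_bar
```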
As we saw in Sect. 1.3, conditional safety properties are written as STL formulas of the form □I(¬ϕcond ∨ ϕsafe), whose intuitive meaning is “ϕsafe holds whenever ϕcond is satisfied.” To force our algorithm in Sect. 4 to pick inputs satisfying the antecedent condition ϕcond, we convert the formula into a logically equivalent one. The converted formula consists of two main parts, one of which stands for whether “the antecedent condition ϕcond is satisfied or not.”
Proposition 2.4. For any signal σ and STL formulas ϕ1, ϕ2, the following holds:

⟦σ, □I(¬ϕ1 ∨ ϕ2)⟧ > 0 ⟺ max( ⟦σ, □I ¬ϕ1⟧, ⟦σ, □I(¬ϕ1 ∨ ϕ2)⟧ ) > 0

3 Gaussian Process Upper Confidence Bound (GP-UCB)
As mentioned in Sect. 1.3, in robustness-guided falsification we minimize F∗ : x ↦ ⟦M(x), ϕ⟧ by picking inputs iteratively, hopefully with ever smaller robustness values. For this purpose, Gaussian process upper confidence bound (GP-UCB) [15,16] is one of the most powerful algorithms, as seen in [3–5].
The key idea of the algorithm is that, in each iteration round t = 1, . . . , N, we estimate the Gaussian process [13] GP(μ, k) that is most likely to have generated the points observed up to round t. Here, the two parameters μ : D → R and k : D² → R are called the mean function and the covariance function, respectively.
Fig. 1. An intuitive illustration of the GP-UCB algorithm. The two panels show the estimated Gaussian process GP(μ, k) at iteration rounds t (left) and t + 1 (right): the middle curve is a plot of the mean function μ, and the upper and lower curves are plots of μ + β^{1/2}k and μ − β^{1/2}k. In each iteration round t, we pick the point x[t] (red point in the left panel) that minimizes the lower curve. Once we observe the value F∗(x[t]), the uncertainty at x[t] becomes smaller in the next round t + 1. In general, we choose an increasing function as the confidence parameter β to guarantee that the algorithm does not get stuck in local optima (e.g. β(t) = 2 log(ct²) for some constant c). See [15,16]. (Color figure online)
Roughly speaking, for each x ∈ D, the value μ(x) of the mean function stands for the expected value of F∗(x), and the value k(x, x) of the covariance function at each diagonal point stands for the magnitude of uncertainty about F∗(x).
Pseudocode for the GP-UCB algorithm is given in Algorithm 1. As the code shows, we pick x[t] = argmin_{x∈D} μ(x) − β^{1/2}(t)k(x, x) as the next input. Here, the first term tries to minimize the expected value of F∗(x[t]), and the second term tries to decrease uncertainty globally. Figure 1 illustrates how the estimated Gaussian process is updated in each iteration round of the optimization. This strategy, balancing exploration and exploitation, helps us find a minimizing input in as few iterations as possible.
Algorithm 1. The GP-UCB algorithm for falsification
Hyperparameters: A confidence parameter β : N → R; maximal number of iterations N
Input: Input space D; an uncertain function F : D → R to be minimized
Output: An input x ∈ D such that F(x) ≤ 0
for t = 1 . . . N do
  x[t] = argmin_{x∈D} μ(x) − β^{1/2}(t)k(x, x);   ▷ Choose a new sample input
  y[t] = F(x[t]);   ▷ Observe the corresponding output
  if y[t] ≤ 0 then
    return x[t];
  end if
  (μ, k) = regression((x[1], y[1]), . . . , (x[t], y[t]));   ▷ Bayesian update to obtain new mean and covariance functions
end for
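As an aside, the loop of Algorithm 1 can be sketched in Python over a finite candidate grid, with an RBF-kernel Gaussian process posterior computed by hand. The toy objective, the grid, the kernel length scale and the β schedule are all assumptions made for this example, not the paper's setup.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation) of GP-UCB falsification.
def rbf(a, b, ell=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(xs, ys, grid, noise=1e-6):
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    Ks = rbf(grid, xs)
    mu = Ks @ np.linalg.solve(K, ys)                        # posterior mean
    var = 1.0 - np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mu, np.sqrt(np.clip(var, 0.0, None))             # posterior std dev

def gp_ucb_falsify(F, grid, N=40, beta=lambda t: 2 * np.log(10 * t ** 2)):
    xs, ys = [float(grid[0])], [F(grid[0])]
    for t in range(1, N + 1):
        if ys[-1] <= 0:                                     # counterexample found
            return xs[-1]
        mu, sd = gp_posterior(np.array(xs), np.array(ys), grid)
        x = float(grid[np.argmin(mu - beta(t) ** 0.5 * sd)])  # minimize lower curve
        xs.append(x)
        ys.append(F(x))
    return xs[-1] if ys[-1] <= 0 else None

x_star = gp_ucb_falsify(lambda x: float(np.sin(6 * x)) + 0.5,
                        np.linspace(0.0, 1.0, 200))
```

The acquisition `mu - beta(t)**0.5 * sd` is the lower curve of Fig. 1; the increasing β schedule keeps the exploration term alive so the search does not get stuck at a local optimum.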
4 Our Algorithm: GP-UCB with Domain Estimation

We now give our algorithm for the falsification of conditional safety properties with a sufficient number of tests in which the model satisfies the antecedent condition.
Algorithm 2. The GP-UCB algorithm for falsification with domain estimation
Hyperparameters: A confidence parameter β : N → R and its bounds βmin, βmax ∈ R; maximal number of iterations N; target hit rate R ∈ (0, 1)
Input: Input space D; uncertain functions F, G : D → R
Output: An input x ∈ D such that max(F(x), G(x)) ≤ 0
for t = 1 . . . N do
  r = (R × N − nhit)/(N − t);   ▷ Current target probability r of satisfying F(x) ≤ 0
  βF = min(max(√2 erf⁻¹(1 − 2r), βmin), βmax), where erf is the error function;
  D′ = {x ∈ D | μF(x) − βF kF(x, x) ≤ 0};   ▷ Estimated region in which F(x) ≤ 0 holds with probability r
  if D′ == ∅ then
    xF[t] = argmin_{x∈D} μF(x) − βF kF(x, x);
  else
    xF[t] = argmin_{x∈D′} μG(x) − β^{1/2}(t)kG(x, x);
  end if   ▷ Choose a new sample input
  yF[t] = F(xF[t]);   ▷ Observe the corresponding output
  if yF[t] ≤ 0 then
    nhit = nhit + 1; n = n + 1; xG[n] = xF[t]; yG[n] = G(xG[n]);
    if yG[n] ≤ 0 then
      return xG[n];
    end if
  end if
  (μF, kF) = regression((xF[1], yF[1]), . . . , (xF[t], yF[t]));
  (μG, kG) = regression((xG[1], yG[1]), . . . , (xG[n], yG[n]));   ▷ Bayesian update to obtain new mean and covariance functions
end for
As shown in Proposition 2.4, falsification of the specification □I(¬ϕcond ∨ ϕsafe) can be reduced to the following problem:

Find x such that max( ⟦M(x), □I ¬ϕcond⟧, ⟦M(x), □I(¬ϕcond ∨ ϕsafe)⟧ ) ≤ 0.

A key observation here is that, when the first part of the robustness, ⟦M(x), □I ¬ϕcond⟧, becomes less than zero, then with this input x the corresponding behavior of the system M(x) satisfies the antecedent condition ϕcond.
Based on this observation, we propose the GP-UCB with domain estimation algorithm; its pseudocode is given in Algorithm 2. The algorithm takes a hyperparameter R, the target hit rate, that is, the fraction of the inputs x[1], . . . , x[N] that should steer the model to satisfy the antecedent condition. In each iteration round of the falsification, to guarantee both fast minimization and enough tests on which ϕcond holds, we pick the next input by the following strategy: (1) calculate the ratio r of the remaining inputs that should make ϕcond true; (2) estimate the input subdomain D′ ⊆ D in which the antecedent condition ϕcond holds with probability r; (3) within the restricted domain D′, pick a new input x to falsify the whole specification in the GP-UCB manner.
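Steps (1) and (2) of this strategy can be sketched as follows; the helper names, the finite-grid encoding of D, and the clamping constants are our own assumptions. Note that the Gaussian quantile Φ⁻¹(1 − r) equals √2 erf⁻¹(1 − 2r), the expression used in Algorithm 2.

```python
import numpy as np
from statistics import NormalDist

# Illustrative sketch (our own encoding, not the paper's code).
def target_ratio(R, N, t, n_hit):
    """Hit ratio r needed over the remaining N - t rounds to reach rate R."""
    return (R * N - n_hit) / (N - t)

def beta_F(r, beta_min=0.0, beta_max=3.0):
    """Confidence factor: Phi^{-1}(1 - r) = sqrt(2) * erfinv(1 - 2r), clamped."""
    b = NormalDist().inv_cdf(1.0 - r)
    return min(max(b, beta_min), beta_max)

def estimated_domain(grid, mu_F, sd_F, bF):
    """Grid points whose lower confidence bound on F is <= 0, i.e. the
    estimated subdomain D' where the antecedent holds with probability r."""
    return grid[mu_F - bF * sd_F <= 0]
```

Indeed, μF(x) − Φ⁻¹(1 − r)·σF(x) ≤ 0 is equivalent, under the Gaussian posterior, to P(F(x) ≤ 0) ≥ r.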
5 Experiments

We examine whether our GP-UCB with domain estimation algorithm achieves both fast minimization and sufficient testing under the antecedent condition ϕcond. As the CPS model, we choose the powertrain control verification benchmark [11]: an engine model with a controller that tries to regulate the air/fuel ratio in the exhaust gas. This model has 3-dimensional input parameters, and
the controller has two main modes—a feedback mode and a feed-forward mode. As the conditional safety specification to falsify, we experiment with the following STL formula, in which the antecedent condition is mode = feedforward; that is, we would like to observe the behavior of the system in the feed-forward mode.

□[τ,∞) ( ¬(mode = feedforward) ∨ |ratioA/F| < 0.2 )    (1)
In fact, formula (1) has no counterexample input for this model, and with the original GP-UCB algorithm about 58% of the inputs lead the whole system to the feed-forward mode. We then run our GP-UCB with domain estimation algorithm with the target hit rate set to R = 0.8, and observe that about 79% of the inputs satisfy the antecedent condition.
6 Conclusion

To solve the falsification of conditional safety properties while forcing the generated inputs to satisfy the antecedent condition, we provided an optimization algorithm based on Gaussian process regression techniques.
References
1. Alur, R., Feder, T., Henzinger, T.A.: The benefits of relaxing punctuality. J. ACM
43(1), 116–146 (1996)
2. Annpureddy, Y., Liu, C., Fainekos, G., Sankaranarayanan, S.: S-TaLiRo: a tool
for temporal logic falsification for hybrid systems. In: Abdulla, P.A., Leino, K.R.M.
(eds.) TACAS 2011. LNCS, vol. 6605, pp. 254–257. Springer, Heidelberg (2011)
3. Bartocci, E., Bortolussi, L., Nenzi, L., Sanguinetti, G.: On the robustness of temporal properties for stochastic models. In: Dang, T., Piazza, C. (eds.) Proceedings Second International Workshop on Hybrid Systems and Biology, HSB 2013,
Taormina, Italy, 2nd September 2013, vol. 125 of EPTCS, pp. 3–19 (2013)
4. Bartocci, E., Bortolussi, L., Nenzi, L., Sanguinetti, G.: System design of stochastic
models using robustness of temporal properties. Theor. Comput. Sci. 587, 3–25
(2015)
5. Chen, G., Sabato, Z., Kong, Z.: Active requirement mining of bounded-time temporal properties of cyber-physical systems. CoRR abs/1603.00814 (2016)
6. Donzé, A.: Breach, a toolbox for verification and parameter synthesis of hybrid systems. In: Touili, T., Cook, B., Jackson, P. (eds.) CAV 2010. LNCS, vol. 6174, pp. 167–170. Springer, Heidelberg (2010)
7. Donzé, A., Maler, O.: Robust satisfaction of temporal logic over real-valued signals.
In: Chatterjee, K., Henzinger, T.A. (eds.) FORMATS 2010. LNCS, vol. 6246, pp.
92–106. Springer, Heidelberg (2010)
8. Fainekos, G.E., Pappas, G.J.: Robustness of temporal logic specifications for
continuous-time signals. Theor. Comput. Sci. 410(42), 4262–4291 (2009)
9. Hoxha, B., Abbas, H., Fainekos, G.: Benchmarks for temporal logic requirements
for automotive systems. In: Proceedings of Applied Verification for Continuous and
Hybrid Systems (2014)
10. Jegourel, C., Legay, A., Sedwards, S.: Importance splitting for statistical model
checking rare properties. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol.
8044, pp. 576–591. Springer, Heidelberg (2013)
11. Jin, X., Deshmukh, J.V., Kapinski, J., Ueda, K., Butts, K.: Powertrain control verification benchmark. In: Fränzle, M., Lygeros, J. (eds.) 17th International Conference on Hybrid Systems: Computation and Control (part of CPS Week), HSCC 2014, Berlin, Germany, 15–17 April 2014, pp. 253–262. ACM (2014)
12. Maler, O., Nickovic, D.: Monitoring temporal properties of continuous signals. In:
Lakhnech, Y., Yovine, S. (eds.) FORMATS 2004 and FTRTFT 2004. LNCS, vol.
3253, pp. 152–166. Springer, Heidelberg (2004)
13. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning
(Adaptive Computation and Machine Learning). The MIT Press, Massachusetts
(2005)
14. Sankaranarayanan, S., Fainekos, G.: Falsification of temporal properties of hybrid
systems using the cross-entropy method. In: Proceedings of the 15th ACM International Conference on Hybrid Systems: Computation and Control, HSCC 2012,
pp. 125–134. ACM, New York (2012)
15. Srinivas, N., Krause, A., Kakade, S., Seeger, M.W.: Gaussian process optimization in the bandit setting: no regret and experimental design. In: Fürnkranz, J.,
Joachims, T. (eds.) Proceedings of the 27th International Conference on Machine
Learning (ICML 2010), 21–24 June 2010, Haifa, Israel, pp. 1015–1022. Omnipress
(2010)
16. Srinivas, N., Krause, A., Kakade, S.M., Seeger, M.W.: Information-theoretic regret
bounds for Gaussian process optimization in the bandit setting. IEEE Trans. Inf.
Theor. 58(5), 3250–3265 (2012)
Reactive Property Monitoring of Hybrid
Systems with Aggregation
Nicolas Rapin(B)
CEA LIST, Boîte Courrier 174, 91191 Gif-sur-Yvette, France
nicolas.rapin@cea.fr
Abstract. This work is related to our monitoring tool ARTiMon for the property monitoring of hybrid systems. We explain how the aggregation operator of its language derives naturally from a generalization of the eventually operator as introduced by Maler and Nickovic for MITL[a,b]. We present its syntax and its semantics using an interval-based representation of piecewise-constant functions. We define an on-line algorithm for its semantics calculus, coupled with an elimination of irrelevant intervals in order to keep the memory resource bounded.
1 Introduction
Property monitoring is a unified solution for detecting failures at many stages of a system's life-cycle. Supervision, applied during the exploitation phase, requires reactive monitoring: monitors have to run on-line, in real time and indefinitely. The motivation of our work is to define an expressive specification language suitable for systems evolving in dense time, like continuous and hybrid systems, coupled with an effective monitoring approach suitable for supervision purposes. In this short paper we restrict the presentation of this approach to one single operator, called the aggregation operator, which makes real-time temporal logics restricted to the Boolean type, or to a booleanization [5] of non-Boolean types, more expressive. Our presentation is strongly based on a work by Maler and Nickovic [4]. Signals and the eventually operator are recalled and discussed in Sect. 2. Section 3 is dedicated to the aggregation operator and gives some examples of properties. Section 4 describes an algorithm for the reactive monitoring of aggregation properties.
2 Signals
In [4], Maler and Nickovic study MITL[a,b], a bounded version of MITL, and its interpretation over behaviors of continuous and hybrid systems modelled by signals, defined as partial piecewise-constant Boolean time functions satisfying the finite variability property. Formally, a signal s is a function ranging in B = {⊥, ⊤} whose time definition domain is a bounded interval of R, noted |s|. This domain is bounded because, in the context of monitoring, a running system always delivers a partial trace. But as time elapses, the monitoring process extends the domain
c Springer International Publishing AG 2016
Y. Falcone and C. Sanchez (Eds.): RV 2016, LNCS 10012, pp. 447–453, 2016.
DOI: 10.1007/978-3-319-46982-9 28
of signals, i.e. |s| is successively of the forms ∅, δ1, δ1 ∪ δ2, . . . where δi, δi+1 are adjacent intervals (δi ∪ δi+1 is an interval and δi ∩ δi+1 = ∅) satisfying δi ≺ δi+1 (which holds if t < t′ for any t ∈ δi, t′ ∈ δi+1). Of course, monitoring produces only conservative extensions: noting sn the signal s at the nth extension, for any n ≥ 1, the restriction of sn+1 to |sn| is sn. The finite variability property ensures that any signal can be represented by a finite and minimal set of intervals carrying the value ⊤ (called positive intervals). Notice that finite variability does not imply the bounded variability property, which is satisfied when the function changes with a bounded rate χ ∈ N (i.e. at most χ variations over any interval of length 1). A signal changing n times on [n, n + 1[ satisfies finite variability but
not bounded variability. The eventually operator is derived from the until operator, which is primitive in MITL[a,b]. Its syntax is ♦iφ, where φ is a Boolean sub-term and i a bounded interval of R+. Its semantics is defined via the satisfaction relation |=: (s, t) |= ♦iφ iff ∃t′ ∈ t ⊕ i. (s, t′) |= φ, where t ⊕ i denotes the interval i shifted by t (for example, t ⊕ [a, b[ is [t + a, t + b[). Notice that the notation (s, t′) |= φ is equivalent to s(t′) = ⊤ when s is a time function. As far as we know, the relation |= comes from model theory. Using it presupposes that signals are considered as models and that all terms of the logic should be interpreted with respect to those models.
Our approach, which constitutes one of our contributions, is different. We do not really interpret terms over models as usual. Instead, we consider that there exists a set of ground signals and that the operators of the logic act as constructors for building new time functions or, as will be proven, new signals. According to this point of view, the term ♦iφ builds a time function noted (♦iφ) (we add parentheses to the term to denote the function it builds) which derives inductively from (φ). Let us begin with the definition of (♦iφ)(t); afterwards we will focus on its definition domain. Derived from the above definition based on the |= relation, a first definition is: (♦iφ)(t) = ⊤ iff ∃t′ ∈ (t ⊕ i). (φ)(t′) = ⊤. Another equivalent definition can be given by introducing the set of values taken by a time function over a restriction of its domain. Let g be a signal and r an interval satisfying r ⊆ |g|; then g(r) denotes the set {g(t) | t ∈ r}. The semantic definition of an eventually term at a time instant becomes:
Definition 1 (Eventually as Aggregation). (♦iφ)(t) = ⋁_{b ∈ (φ)(t⊕i)} b
Since (φ) is a Boolean function, we have (φ)(t ⊕ i) ⊆ {⊥, ⊤}. It suffices that ⊤ ∈ (φ)(t ⊕ i) for (♦iφ) to be true at t. This definition emphasizes the fact that the eventually modality is the result of aggregating a set of values using disjunction. Extending this aggregation notion, which is the main idea of this work, will be studied below. For now, let us define |(♦iφ)|. We consider that (♦iφ) is reliable at time instant t if (t ⊕ i) ⊆ |(φ)|. We define |(♦iφ)| as the set of all reliable time instants, so |(♦iφ)| = {t | t ⊕ i ⊆ |(φ)|}. We will note this latter set i ⊖ |(φ)| in the sequel. As |(φ)| is a bounded interval, i ⊖ |(φ)| is also a bounded interval.
Definition 2 (Eventually Semantics). Let φ be a Boolean signal.
|(♦iφ)| = i ⊖ |(φ)|    (♦iφ)(t) = ⋁_{b ∈ (φ)(t⊕i)} b
Remark. An important point to notice here, which constitutes one of our contributions, is that the completeness and reliability of the domain enable an incremental and inductive computation of signals. Base case: as mentioned in Sect. 2, the extension of a ground signal is conservative. Induction step: consider the conservative extension of (φ) from domain D to D ∪ λ. According to Definition 2, the domain |(♦iφ)| is extended from i ⊖ D to i ⊖ (D ∪ λ). By the induction hypothesis, (φ) remains the same on D. It follows that (♦iφ) remains the same on i ⊖ D. Hence the extension of (♦iφ) is also conservative. Thus one only has to compute (♦iφ) on Δ = (i ⊖ (D ∪ λ)) \ (i ⊖ D) in order to know the function (♦iφ) on i ⊖ (D ∪ λ). One can already feel the benefit of such a restriction for the on-line calculus; it will be detailed in Sect. 4.
Lemma 1 (Signal Property Conservation). If (φ) is a signal then the time function (♦iφ), as defined in Definition 2, is also a signal.

We have already mentioned that |(♦iφ)|, defined as i ⊖ |(φ)|, is bounded (in the algorithm below we give an operational calculus for i ⊖ |(φ)|). What remains to be proved is that (♦iφ) satisfies the finite variability property. To establish this
we need to describe the operational semantics calculus of (♦iφ). The so-called backward propagation proposed in [4] plays an important role in this calculus. For ease of presentation, in an algorithmic context, any signal is assimilated to its interval-based representation, a chronologically ordered list of positive intervals (i.e. ordered by ≺). It is also useful to formalize intervals and their associated operations before introducing backward propagation. Formally, an interval is a 4-tuple (l, lb, ub, u) of B × R × R × B (for example, (⊤, a, b, ⊥) stands for [a, b[). We use pointed notation to denote interval attributes: (l, a, b, u).ub denotes b. The opposite of i, noted −i, is (i.u, −i.ub, −i.lb, i.l); the notation t ⊕ i stands for (i.l, t + i.lb, t + i.ub, i.u) and t ⊖ i for t ⊕ −i. The ⊕ operation can be extended to an interval: given two intervals k, i, k ⊕ i denotes ⋃_{t∈k} t ⊕ i, which is (k.l ∧ i.l, k.lb + i.lb, k.ub + i.ub, k.u ∧ i.u). A valued interval is an interval carrying a value; val(i) denotes the value carried by i. For example, a Boolean positive interval is an interval carrying the value ⊤.
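The interval operations just defined can be transcribed almost literally; the Python names below are ours, and the Booleans l and u record whether the lower/upper bound is closed.

```python
# Illustrative sketch of the 4-tuple interval representation (l, lb, ub, u).
def opposite(i):                   # -i = (i.u, -i.ub, -i.lb, i.l)
    l, lb, ub, u = i
    return (u, -ub, -lb, l)

def shift(t, i):                   # t (+) i = (i.l, t + i.lb, t + i.ub, i.u)
    l, lb, ub, u = i
    return (l, t + lb, t + ub, u)

def back_shift(t, i):              # t (-) i = t (+) (-i)
    return shift(t, opposite(i))

def mink_sum(k, i):                # k (+) i, the union of t (+) i over t in k
    return (k[0] and i[0], k[1] + i[1], k[2] + i[2], k[3] and i[3])
```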
Backward Propagation. Suppose that for t′ ∈ |(φ)| we have φ(t′) = ⊤; then also (♦iφ)(t) = ⊤, provided t satisfies t′ ∈ t ⊕ i, i.e. t ∈ t′ ⊕ −i. The interval t′ ⊕ −i, also noted t′ ⊖ i, is the backward propagation of the true value of φ at t′. This can be extended to an interval: if φ is valid over k then so is (♦iφ) over k ⊖ i. Given that signal representations are based on positive intervals, an algorithm for computing (♦iφ) can be the following. Init: (♦iφ) = ∅. Iteration: for each interval k of (φ), aggregate j = k ⊖ i to (♦iφ). Post-treatment: merge adjacent intervals of (♦iφ) (until no more adjacent ones can be found). We will refer to this algorithm as the off-line algorithm, as (φ) is assumed to exist as an input. In the Iteration step, aggregate has different meanings depending on how j covers the existing positive intervals of (♦iφ): if it covers none, it is simply added to (♦iφ); if it covers some empty spaces, each empty space covered by j is converted into a positive interval and added to (♦iφ). The merging step is performed in order to obtain minimality of the representation. It follows that the backward propagation of
one interval of φ produces three kinds of modification of (♦iφ): (1) it adds one interval; (2) it extends one interval; (3) it reduces the number of intervals (when j fills the gap between intervals, which are then merged). By the induction hypothesis, (φ) is a signal; it satisfies the finite variability property and hence its interval representation is composed of a finite number of positive intervals. So, according to modifications (1), (2), (3), the same holds for (♦iφ), which thus satisfies the finite variability assumption; hence it is a signal. With the same argumentation we can prove that bounded variability is preserved.
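The off-line algorithm can be sketched compactly if, for illustration, positive intervals are simplified to closed numeric pairs (lo, hi) instead of the 4-tuple representation: each positive interval of (φ) is back-propagated by i = [a, b] into [lo − b, hi − a], and overlapping or touching results are merged.

```python
# Illustrative sketch (closed numeric pairs, not the paper's 4-tuples)
# of the off-line calculus for the eventually operator.
def eventually(positive, i):
    a, b = i
    shifted = sorted((lo - b, hi - a) for lo, hi in positive)  # k (-) i
    merged = []
    for lo, hi in shifted:
        if merged and lo <= merged[-1][1]:     # overlaps or touches: extend
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged
```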
3 Aggregation Operator
Our idea of the aggregation operator, which first appeared in [6], stems from the algorithm described in the previous section. Let us interpret propagation as an aggregation process. Distinguishing (♦iφ) before the propagation of k (superscript bf) and after it (superscript af), we have ∀t ∈ k ⊖ i. (♦iφ)^af(t) = ⊤ ∨ (♦iφ)^bf(t). This equality shows that ⊤ is aggregated by disjunction to the value of (♦iφ)^bf for every t of k ⊖ i. This is coherent with Definition 1, which relates the eventually modality to disjunction. Now, φ could have a type other than Boolean, and the aggregation could be based on functions other than disjunction. This is what we investigate in the remainder. A non-Boolean signal differs from a Boolean one by its range, which is of the form E∅ = E ∪ {∅}, where E gives the type of the signal. Notice that a non-Boolean signal may take the value ∅, which stands for the undefined value. The interval-based representation of non-Boolean signals is also based on positive intervals, whose definition is extended to intervals not carrying the special value ∅. The syntax for an aggregation term is A{f}iφ, where f is an aggregation function, i is an interval of R with finite bounds, and φ is a term. An aggregation function is any binary function f(e, a) which aggregates an element e to an aggregate a (where a can be the special value ∅). Formally, it is a function of E × A → A where E and A are sets, both containing the special value noted ∅, and satisfying f(∅, a) = a. A term A{f}iφ is well formed if φ and f are compatible regarding their types: if the range of φ is E then f must be of the form E × A → A. The type of A{f}iφ is then A \ {∅}.

Examples of aggregation functions. Let
max min : R × ((R × R) ∪ {∅}) → ((R × R) ∪ {∅}) be the aggregation function satisfying max min(x, (M, m)) = (max(x, M), min(x, m)) and max min(x, ∅) = (x, x). Let sum : R × (R ∪ {∅}) → (R ∪ {∅}) satisfy sum(e, a) = e + a and sum(e, ∅) = e; let disj satisfy disj(e, a) = e ∨ a and disj(e, ∅) = e. For aggregation, the backward propagation satisfies: ∀t ∈ k ⊖ i. (A{f}iφ)^af(t) = f(val(k), (A{f}iφ)^bf(t)). If f is an aggregation function, we note f̄ its extension to finite sequences. For e1, . . . , en elements of E, it satisfies f̄(()) = ∅ and f̄((e1, . . . , en)) = f(en, f̄((e1, . . . , en−1))). For Definition 1 we introduced g(r), denoting a set of values; for
general aggregation we need to denote a sequence. Let g be a signal and r ⊂ |g| be an interval; the restriction of g to r is then the concatenation of a finite number of constant functions g1 ↦ c1, . . . , gn ↦ cn satisfying |gw| ≺ |gw+1| for w ∈ [1, n − 1]. We note gseq(r) the sequence (c1, . . . , cn).
Definition 3 (Aggregation). Let φ be a signal of range E, i a bounded interval of R, and f an aggregation function of E × A → A. Then: |(A{f}iφ)| = i ⊖ |(φ)|, and (A{f}iφ)(t) = f̄((φ)seq(t ⊕ i)).
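Definition 3 at a fixed time instant amounts to a left fold of f over the value sequence. A sketch, with None standing for ∅ and illustrative names for the aggregation functions:

```python
# Illustrative sketch of the extension f-bar of Definition 3:
# f_bar(()) = None, and f_bar((e1..en)) = f(en, f_bar((e1..en-1))).
def f_bar(f, seq):
    acc = None
    for e in seq:
        acc = f(e, acc)
    return acc

def max_min(x, a):                  # running (max, min) pair
    return (x, x) if a is None else (max(x, a[0]), min(x, a[1]))

def agg_sum(e, a):                  # running sum
    return e if a is None else e + a
```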
Though Maler and Nickovic also introduce non-Boolean signals in their logic [4], those are always composed with non-temporal predicative functions, which reduces the composition to the Boolean framework. We claim that, with aggregation, the logic is more expressive, as one can form terms with a spread temporal dependency (not reduced to the current time). The off-line semantics calculus for (A{f}iφ) can be obtained by a slight modification of the backward propagation in the algorithm described for the eventually modality. Iteration over the intervals of (φ) is performed in chronological order. At each step of the iteration, it aggregates j = k ⊖ i with value val(k) to (A{f}iφ). Due to the chronological iteration, there are only three cases: (1) j covers an empty space beyond (w.r.t. ≺) all positive intervals of (A{f}iφ), if any; this space is converted into an interval with value f(val(k), ∅) and added; (2) the value of any interval m ⊆ j is changed to f(val(k), val(m)); (3) there may exist one interval m partially covered by j; it is split in two, the value of the uncovered part is set to val(m) and that of the covered part to f(val(k), val(m)). It follows that one propagation adds at most two intervals to (A{f}iφ). The number of intervals of (A{f}iφ) is then at most double that of (φ).
Examples. Notice that ♦iφ and A{disj}iφ are equivalent. The invariant a ⇒ A{disj}[−1,1] b specifies that b should always be present around a within the time window [−1, 1]. Notice that our logic supports pairing of signals and the application of functions and predicates, like STL [5]. For example, if (s, s′) is a pair of signals and g a binary function or predicate, g(s, s′)(t) = g(s(t), s′(t)). For readability we may note s g s′ instead of g(s, s′) (typically s < s′ for <(s, s′)). Example 2. The variation of the flow over 60 s should be under 10 percent. Let us consider the term ÷(A{max min}[−60,0] flow). At t its value is ÷(M, m) = M ÷ m where M and m are respectively the max and min values of flow over t ⊕ [−60, 0]. The invariant is then formalized by ÷(A{max min}[−60,0] flow) < 1.1. Example 3. When the temperature is under 10 degrees, the motor should not be started more than 3 times during the next hour. Here we consider that the motor starts function has the form of a Dirac function (its value is 0 except at some time instants where its value is 1). The invariant is formalized by (temp < 10) ⇒ (A{sum}[0,3600] motor starts ≤ 3). Example 4. With inc(e, a) satisfying inc(e, ∅) = (⊤, e) and inc(e, (b, a)) = ((e > a) ∧ b, e), the invariant motor starts ⇒ (A{inc}[0,150] temp)[0] formalizes that the temperature should not decrease for 150 s after the motor starts. Example 5. A{owr}[c,c]φ, where owr (for overwrite) satisfies owr(x, ∅) = x and owr(x, y) = x, shifts (φ) by c in time.
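As an illustration, the fold underlying Example 4 can be written as follows; the function names are ours, and None stands for ∅. The fold carries a pair (monotone-so-far, last value), whose first component tells whether the value sequence was strictly increasing.

```python
# Illustrative sketch of the `inc` aggregation of Example 4.
def inc(e, a):
    if a is None:
        return (True, e)
    b, last = a
    return (e > last and b, e)

def strictly_increasing(values):
    acc = None
    for v in values:
        acc = inc(v, acc)
    return acc[0] if acc is not None else True
```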
4 On-Line Monitoring

For us, supervision consists in checking that some invariants remain true. To achieve this we exploit the remark made in Sect. 2 about an incremental semantics calculus, completing it with a garbage collection mechanism which, assuming