A. Castañeda et al.
reliable synchronous network. This DECOUPLED model is weaker than the
synchronous model (on the process side) and stronger than the asynchronous
crash-prone model (on the communication side), while encompassing in a single
framework two fundamental issues of distributed computing, locality [16] and
wait-freedom [13].
A 3-coloring algorithm for a ring was derived for the DECOUPLED model.
This algorithm uses as a subroutine a generalization of Cole and Vishkin's algorithm [7]. A process needs to obtain initial information only from processes at distance
at most O(log* n) from it. As far as we know, this is the first wait-free local coloring
algorithm that colors a ring with at most three colors.
In contrast to LOCAL, in the DECOUPLED model, after d rounds of communication, a process collects the initial inputs of only a subgraph of its d-neighborhood. The paper has shown that, despite this uncertainty, it is possible
to combine locality and wait-freedom, as far as 3-coloring is concerned. The
keys to this marriage were (a) the decoupling of communication and processing,
and (b) the design of a synchronous coloring algorithm (AST-CV), where the
processes are reliable, proceed synchronously, but are not required to start at
the very same round, which introduces a ﬁrst type of asynchrony among the
processes. As we have seen, the heart of this algorithm lies in the consistent coloring of the border vertices of subgraphs which started at diﬀerent times (unit
segments).
It would be interesting to see whether this methodology applies to other coloring algorithms, or even to other distributed graph problems that are solvable in the
LOCAL model.
Acknowledgments. This work has been partially supported by the French ANR
project DESCARTES, devoted to abstraction layers in distributed computing. The
ﬁrst author was supported in part by UNAM PAPIIT-DGAPA project IA101015. The
fourth author is currently on leave at CSAIL-MIT and was supported in part by UNAM
PAPIIT-DGAPA project IN107714.
References
1. Arjomandi, E., Fischer, M., Lynch, N.: Eﬃciency of synchronous versus asynchronous distributed systems. J. ACM 30(3), 449–456 (1983)
2. Awerbuch, B.: Complexity of network synchronization. J. ACM 32(4), 804–823
(1985)
3. Awerbuch B., Patt-Shamir B., Peleg D., Saks M.: Adapting to asynchronous
dynamic networks (extended abstract). In: Proceedings of the 24th ACM Symposium on Theory of Computing (STOC 1992), pp. 557–570 (1992)
4. Barenboim, L., Elkin, M.: Deterministic distributed vertex coloring in polylogarithmic time. J. ACM 58(5), 23 (2011)
5. Barenboim, L., Elkin, M.: Distributed Graph Coloring, Fundamental and Recent
Developments, 155 p. Morgan & Claypool Publishers (2014)
6. Barenboim, L., Elkin, M., Kuhn, F.: Distributed (Δ+1)-coloring in linear (in
Δ) time. SIAM J. Comput. 43(1), 72–95 (2014)
Making Local Algorithms Wait-Free: The Case of Ring Coloring
7. Cole, R., Vishkin, U.: Deterministic coin tossing with applications to optimal parallel list ranking. Inf. Control 70(1), 32–53 (1986)
8. Castañeda, A., Delporte, C., Fauconnier, H., Rajsbaum, S., Raynal, M.: Wait-freedom and locality are not incompatible (with distributed ring coloring as an
example). Technical report #2033, 19 p., IRISA, University of Rennes, France
(2016)
9. Fischer, M.J., Lynch, N.A., Paterson, M.S.: Impossibility of distributed consensus
with one faulty process. J. ACM 32(2), 374–382 (1985)
10. Fraigniaud, P., Gafni, E., Rajsbaum, S., Roy, M.: Automatically adjusting concurrency to the level of synchrony. In: Kuhn, F. (ed.) DISC 2014. LNCS, vol. 8784,
pp. 1–15. Springer, Heidelberg (2014)
11. Fraigniaud, P., Korman, A., Peleg, D.: Towards a complexity theory for local distributed computing. J. ACM 60(5), 16 (2013). Article 35
12. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of
NP-Completeness, 340 p. W.H. Freeman, New York (1979)
13. Herlihy, M.P.: Wait-free synchronization. ACM Trans. Program. Lang. Syst. 13(1),
124–149 (1991)
14. Keidar, I., Rajsbaum, S.: On the cost of fault-tolerant consensus when there are
no faults: preliminary version. ACM SIGACT News 32(2), 45–63 (2001)
15. Kuhn, F., Moscibroda, T., Wattenhofer, R.: What cannot be computed locally! In:
Proceedings of the 23rd ACM Symposium on Principles of Distributed Computing,
pp. 300–309. ACM Press (2004)
16. Linial, N.: Locality in distributed graph algorithms. SIAM J. Comput. 21(1), 193–201
(1992)
17. Meincke, T., et al.: Globally asynchronous locally synchronous architecture for
large high-performance ASICs. In: Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS 1999), vol. 2, pp. 512–515 (1999)
18. Naor, M., Stockmeyer, L.: What can be computed locally? SIAM J. Comput. 24(6),
1259–1277 (1995)
19. Peleg, D.: Distributed Computing: A Locality-Sensitive Approach. SIAM Monographs
on Discrete Mathematics and Applications, 343 p. (2000). ISBN 0-89871-464-8
20. Raynal, M.: Fault-Tolerant Agreement in Synchronous Message-Passing Systems,
165 p. Morgan & Claypool Publishers (2010). ISBN 978-1-60845-525-6
21. Raynal, M.: Communication and Agreement Abstractions for Fault-Tolerant Asynchronous Distributed Systems, 251 p. Morgan & Claypool Publishers (2010). ISBN
978-1-60845-293-4
22. Raynal, M.: Concurrent Programming: Algorithms, Principles, and Foundations,
530 p. Springer (2013). ISBN 978-3-642-32026-2
23. Suomela, J.: Survey of local algorithms. ACM Comput. Surv. 45(2), 40 (2013).
Art. 24
Meta-algorithm to Choose a Good On-Line
Prediction (Short Paper)

Alexandre Dambreville¹, Joanna Tomasik², and Johanne Cohen³

¹ LRI, CentraleSupélec, Université Paris-Sud, Université Paris-Saclay, Orsay, France
Alexandre.Dambreville@lri.fr
² LRI, CentraleSupélec, Université Paris-Saclay, Orsay, France
Joanna.Tomasik@lri.fr
³ LRI, CNRS, Université Paris-Saclay, Orsay, France
Johanne.Cohen@lri.fr
Abstract. Numerous problems require on-line treatment. The variation of the problem instance makes them harder to solve: an algorithm may
be very efficient for a long period, but its performance may suddenly
deteriorate due to a change in the environment. It can then be judicious
to switch to another algorithm in order to adapt to the environment
changes.

In this paper, we focus on on-the-fly prediction. We have several
on-line prediction algorithms at our disposal, each of which may behave
differently from the others depending on the situation. First, we
consider a meta-algorithm named SEA, developed for expert algorithms.
Next, we propose a modified version of it to improve its performance in
the context of on-line prediction.

We confirm experimentally the efficiency gain obtained through this
modification.
1 Introduction
Let us assume that we have several algorithms at our disposal to solve a given
problem. One of them may perform very well in one situation but badly in another,
whereas for another algorithm it is the opposite. In an off-line scenario, we could
determine which situation we are in and select the best
algorithm once and for all. In this paper, we address an on-line scenario, i.e., the
environment may change with time and evolve from one situation to another. Our
goal is to use a meta-algorithm that dynamically switches among the available
algorithms.
First, we analyse a meta-algorithm named Strategic Expert meta-Algorithm
(SEA) [3] and discuss its advantages and drawbacks in Sect. 2. Next, we modify
it (Sect. 3) to make it fit the environment more quickly. We evaluate the performance
of our meta-algorithm through numerical experiments in Sect. 4.
© Springer International Publishing AG 2016
B. Bonakdarpour and F. Petit (Eds.): SSS 2016, LNCS 10083, pp. 126–130, 2016.
DOI: 10.1007/978-3-319-49259-9_10
2 Existing Meta-algorithm, SEA
Let us assume that we have n algorithms at our disposal. We denote by Mi the
average payoff of algorithm i since it was first used, and by Ni the number of steps
for which algorithm i runs when it is selected. SEA (Strategic Expert meta-Algorithm [3]) alternates between exploration and exploitation phases, as described
in Algorithm 1:
Algorithm 1. SEA
1:  For each i ∈ [[1; n]]: Mi ← 0, Ni ← 0; iter ← 1
2:  procedure SEA
3:      loop
4:          U ← Random(0, 1)
5:          if U < 1/iter then i ← Random([[1; n]])      ▷ Exploration phase
6:          else i ← argmax_{j ∈ [[1;n]]} Mj             ▷ Exploitation phase
7:          Ni ← Ni + 1
8:          Execute algorithm i for Ni steps
9:          R ← average payoff of i during these Ni steps
10:         Mi ← Mi + (2/(Ni + 1)) · (R − Mi)
11:         iter ← iter + 1
12:     end loop
13: end procedure
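As a concrete illustration, the loop of Algorithm 1 can be sketched in Python. The `run` interface and the `horizon` stopping rule are our own placeholders, not part of the paper:

```python
import random

def sea(algorithms, run, horizon):
    """Sketch of Algorithm 1 (SEA). `algorithms` is a list of candidate
    algorithms; `run(algo, steps)` is a placeholder interface assumed to
    execute `algo` for `steps` steps and return its average payoff."""
    n = len(algorithms)
    M = [0.0] * n      # M[i]: running average payoff of algorithm i
    N = [0] * n        # N[i]: number of times algorithm i was selected
    iteration = 1
    total_steps = 0
    while total_steps < horizon:
        if random.random() < 1.0 / iteration:            # exploration phase
            i = random.randrange(n)
        else:                                            # exploitation phase
            i = max(range(n), key=lambda j: M[j])
        N[i] += 1
        R = run(algorithms[i], N[i])                     # run for N[i] steps
        M[i] += 2.0 / (N[i] + 1) * (R - M[i])            # line 10 update
        total_steps += N[i]
        iteration += 1
    return M
```

Note that with the 2/(Ni + 1) weight, Mi is exactly the average payoff over all steps algorithm i has ever run, since the i-th algorithm runs for Ni steps on its Ni-th selection (1 + 2 + ... + Ni steps in total).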
The analysis of the SEA algorithm leads us to formulate a list of its advantages and a list of its drawbacks. Its strengths are:
1. If the environment does not change, SEA is able to find the best algorithm
for the situation.
2. If the environment does change, the average reward of SEA is at least as good
as the average reward of the best algorithm when played over an infinite time horizon
(see Theorem 3.1 of [3]).
Its weaknesses are:
1. It is proved that, in the long run, all of the algorithms will be used countless
times by SEA. However, if many algorithms are available, some of them
might not be tried for a long time. Indeed, the more time passes, the
smaller the probability of an exploration becomes (Lemma 3.1 of [3]).
2. SEA computes the mean Mi since the first iteration, which is why Mi suffers
from inertia as the number of iterations increases. Even a drastic change
in the average payoff R may go undetected, which slows down
the switching between algorithms. In certain situations, it would be
advantageous to switch to a very efficient algorithm, but SEA is not reactive
enough to do so (see Figs. 2a and b).
3 Our Dynamic SEA
We modify SEA to try to overcome the weaknesses mentioned above. For the
second point, to make the mean more reactive, instead of a long-run mean
we use for Mi the average payoff during the last Ni steps, i.e., at line 10 of
Algorithm 1 we put Mi ← R. This gives SEA a good overview of the
recent performance of an algorithm. Now, to switch to another algorithm, SEA
just has to wait for a new exploration. This brings us to the first of the
drawbacks: an exploration may take a long time to come, and it will take even
longer to try each algorithm.

In order to ensure more frequent explorations, we reset our meta-algorithm
occasionally. During the exploitation of an algorithm i (line 6 of Algorithm 1), if
the payoff is smaller than at the previous iteration, we set Mj = ∞ for all j ≠ i
(after line 9). With this mechanism, the next exploitations will try each algorithm
(other than i) at least once and then determine the best of them for the current
situation. Likewise, we use this mechanism to overcome the first weakness listed:
we make our version of SEA try each algorithm at least once. Thereby we
avoid leaving an algorithm untested for too long.
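A minimal Python sketch of Dynamic SEA follows, assuming a placeholder `run(algo, steps)` interface that executes an algorithm and returns its average payoff. The reset condition (comparing against the previous iteration's payoff) and the initialization to infinity (so that every algorithm is tried at least once) are our reading of the text, not code from the paper:

```python
import random

def dynamic_sea(algorithms, run, horizon):
    """Sketch of Dynamic SEA: (a) M[i] <- R replaces the long-run mean,
    (b) a payoff drop during exploitation resets the other means to
    infinity, forcing every other algorithm to be retried once."""
    n = len(algorithms)
    INF = float("inf")
    M = [INF] * n      # start at infinity: each algorithm is tried once
    N = [0] * n
    iteration = 1
    prev_payoff = None
    total_steps = 0
    while total_steps < horizon:
        if random.random() < 1.0 / iteration:
            i = random.randrange(n)
            exploiting = False
        else:
            i = max(range(n), key=lambda j: M[j])
            exploiting = True
        N[i] += 1
        R = run(algorithms[i], N[i])
        if exploiting and prev_payoff is not None and R < prev_payoff:
            for j in range(n):       # reset: retry every other algorithm
                if j != i:
                    M[j] = INF
        M[i] = R                     # reactive mean: last N[i] steps only
        prev_payoff = R
        total_steps += N[i]
        iteration += 1
    return M
```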
4 Experiments
We start this section by explaining the experimental setup. We evaluate our
meta-algorithm on the following prediction problem. Let (Di) be a positive integer sequence. This sequence is disturbed by a noise (Ni), which gives us a jammed
sequence (Ji) = (Di + Ni). At time i we receive the real data Di and the jammed
data for the next step, Ji+1. Our goal at each time i is to recover Di+1 from Ji+1.
We denote by (Ri+1) the result of our recovery. To measure the performance of
the result at time i, we propose to use the reward δi = exp(−|Ri − Di| / Di), whose
value always lies in (0; 1]. If we obtain Ri = Di (the optimal result), the reward
reaches its maximal value and δi = 1. Moreover, the farther our result Ri is from Di,
the closer the reward δi is to 0.
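The reward can be written directly; this is a transcription of the formula above, with hypothetical argument names:

```python
import math

def reward(r_i, d_i):
    """delta_i = exp(-|R_i - D_i| / D_i); lies in (0, 1], maximal (= 1)
    exactly when R_i = D_i, and decays toward 0 as R_i moves away."""
    return math.exp(-abs(r_i - d_i) / d_i)
```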
Our proposition consists in using multi-armed bandit algorithms [1]. A
bandit is a method that offers us several strategies, represented by its
arms, to play. Each arm has a certain reward attributed to it. At each time, a
player chooses a bandit arm and expects to win, i.e., to maximize the mean of
the rewards obtained. In our problem, each arm corresponds to a modification
of Ji, expressed in terms of a percentage (x%) of Ji. We denote by
(Arm)(Ji) = Ji + x%(Ji) = Ri the effect of an arm on the jammed value Ji.

Fig. 1. Three bandits
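The effect of an arm is a one-line transformation; `x_percent` is left as a free parameter here, since the arms' numeric percentages are only illustrated in Fig. 1:

```python
def apply_arm(j_i, x_percent):
    """(Arm)(J_i) = J_i + x%(J_i) = R_i: an arm shifts the jammed value
    by a fixed percentage of itself."""
    return j_i + (x_percent / 100.0) * j_i
```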
In our experiments, we use the trace of the 1998 World Cup Web site [2],
which gives the number of requests per hour on the site, as (Di) (this trace
is commonly used for the evaluation of scheduling algorithms). We
generate different kinds of noise on this trace in order to pinpoint the effect of our
modifications and validate the dynamic version of SEA. We use a Gaussian noise
for (Ni): at each time i, we set the mean and the variance as percentages of Di:
Ni = N(Di μ%, Di σ%). More precisely, we divide (Di) into three equal parts and
we add a different noise to each of them. We denote by n1 → n2 → n3 the sequence
of noises used. The four types of noise we use are: n+20,±3 = N(+20%, 3%),
n−20,±3 = N(−20%, 3%), n0,±3 = N(0%, 3%) and n0,±30 = N(0%, 30%).
For the first three noise variants, we have three bandit algorithms, one specialized for each environment, as illustrated in Fig. 1. The last noise variant has so
great a variability that it is unpredictable for any of our bandit algorithms. We
consider three scenarios: n−20,±3 → n+20,±3 → n0,±3, n0,±3 → n−20,±3 → n+20,±3
and n0,±30 → n0,±30 → n0,±30.
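The jammed traces described above can be generated as sketched below. We read the paper's second parameter in N(Di μ%, Di σ%) as a spread given in percent of Di (the text calls it a variance; we treat it as a standard deviation, which is an assumption), and the three-segment split is taken from the text:

```python
import random

def jam(D, segments):
    """Builds the jammed sequence J_i = D_i + N_i. `segments` gives, for
    each third of the trace, the Gaussian noise parameters as percentages
    of D_i, e.g. [(-20, 3), (20, 3), (0, 3)] for n-20,±3 -> n+20,±3 -> n0,±3."""
    third = len(D) // 3
    J = []
    for i, d in enumerate(D):
        mu_pct, sigma_pct = segments[min(i // third, 2)]
        noise = random.gauss(d * mu_pct / 100.0, d * sigma_pct / 100.0)
        J.append(d + noise)
    return J
```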
We show our results in Fig. 2, which represents the evolution of the average
reward of our algorithms. Each curve is the mean over one hundred different runs
of the algorithm.
Fig. 2. Mean rewards of our algorithms
The half-width of the confidence intervals computed at confidence level α = 0.05
never exceeds 1.5% of the corresponding mean. We therefore do not show them
in the figures.
We discuss the results of our experiments for n−20,±3 → n+20,±3 → n0,±3 and
for n0,±3 → n−20,±3 → n+20,±3, depicted in Figs. 2a and b respectively.

We build Ni in such a way that each bandit algorithm outperforms the others
for a third of the time, and this is indeed what we observe in Figs. 2a and b. We
observe that the SEA algorithm follows the best algorithm on average as time
grows. Nevertheless, due to the inertia of the mean, SEA is very slow to switch
from one algorithm to another. By contrast, the Dynamic SEA fits the
environment very quickly.

For the last experiment, in which the prediction is hampered by the excessive variability of the noise (Fig. 2c), both SEA and Dynamic SEA follow the first
bandit (Fig. 1a), which has the best reward on average. Whatever the situation,
Dynamic SEA is at least as good as SEA.
5 Conclusion
First, we tested the SEA algorithm to dynamically choose an algorithm among
those available. We observed the deterioration of SEA's performance with time.
The modification we brought to SEA improved its reactivity and its overall
performance.
Acknowledgment. The PhD thesis of Alexandre Dambreville is ﬁnanced by Labex
Digicosme within the project E-CloViS (Energy-aware resource allocation for Cloud
Virtual Services).
References
1. Bubeck, S., Cesa-Bianchi, N.: Regret analysis of stochastic and nonstochastic multiarmed bandit problems. Found. Trends Mach. Learn. 5, 1–122 (2012)
2. http://ita.ee.lbl.gov/html/contrib/WorldCup.html
3. de Farias, D.P., Megiddo, N.: Combining expert advice in reactive environments.
J. ACM 53, 762–799 (2006)
On-Line Path Computation
and Function Placement in SDNs

Guy Even¹, Moti Medina², and Boaz Patt-Shamir¹

¹ School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
{guy,boaz}@eng.tau.ac.il
² MPI for Informatics, Saarbrücken, Germany
mmedina@mpi-inf.mpg.de
Abstract. We consider service requests that arrive in an online fashion
in Software-Defined Networks (SDNs) with network function virtualization (NFV). Each request is a flow with a high-level specification of
routing and processing (by network functions) requirements. Each network function can be performed by a specified subset of servers in the
system. The algorithm needs to decide whether to reject the request, or
accept it with a specific routing and processing assignment, under
given capacity constraints (solving the combined path computation and function
placement problems). Each served request is assumed to "pay" a pre-specified benefit, and the goal is to maximize the total benefit accrued.

In this paper we first formalize the problem, and propose a new service model that allows us to cope with requests of unknown duration
without preemption. The new service model augments the traditional
accept/reject schemes with a new possible response of "stand by." We
also present a new expressive model that describes requests abstractly using
a "plan" represented by a directed graph. Our algorithmic result is an
online algorithm for path computation and function placement that guarantees, in each time step, throughput of at least a logarithmic fraction
of a (very permissive) upper bound on the maximal possible benefit.
1 Introduction
Conventional wisdom has it that in networking, models are reinvented every
twenty years or so. A deeper look into the evolution of networks shows that there
is always a tension between ease of computation, which favors collecting all data
and performing processing centrally, and ease of communication, which favors
distributing the computation over nodes along communication paths. It seems
that recently the pendulum has moved toward the centralized computation side
once again, with the emergence of software-deﬁned networks (SDNs), in which
one of the main underlying abstractions is of a centrally managed network. Network Function Virtualization (NFV) is another key abstraction: roughly speaking, the idea is that instead of having functions implemented by special-purpose
expensive hardware, functions can be virtualized and implemented by virtual
machines running on cheap general-purpose boxes.

This work was supported in part by the Neptune Consortium, Israel.
The full version of this paper can be found at http://arxiv.org/abs/1602.06169.

© Springer International Publishing AG 2016
B. Bonakdarpour and F. Petit (Eds.): SSS 2016, LNCS 10083, pp. 131–147, 2016.
DOI: 10.1007/978-3-319-49259-9_11
Among the key conceptual components of such networks are path computation
and function placement [12], which allow potentially complex requests to be
routed over the network. Informally, each request specifies a "processing plan"
for a flow, which includes a source-destination pair as well as a description of a
few processing stages that the flow needs to go through. The task is to find a
route in the network from the source to the destination that includes the
requested processing. The main difficulty, of course, is the bounded processing
capacity of servers and links, so not all requests can be served.
Our Contribution. Our contribution is both conceptual and technical. From the
conceptual viewpoint, we introduce a new service model that is both natural
from the user's perspective and, from the operator's perspective, allows for on-line algorithms with strong performance guarantees, even when dealing with
requests that do not specify their duration upon arrival, and without resorting
to preemption (i.e., once a request is admitted, it has its resources secured until
it leaves). The main idea in the new service model is to place a non-admitted
request in a "standby" mode until (due to other requests leaving the system)
there is room to accept it. Once a request is accepted, it is guaranteed to receive
service until it ends (i.e., until the user issues a "leave" signal). We also present
a new powerful model for describing requests. In a nutshell, a request specifies
an abstract directed graph whose nodes represent the required functions, and
the system is required to implement that abstraction by a physical route that
includes the requested processing, in order.
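The standby service model admits a tiny state machine. The state names below are ours, sketched from the description (a request is never rejected outright, only kept on standby, and acceptance is irrevocable until the user leaves):

```python
from enum import Enum

class State(Enum):
    STANDBY = "standby"    # not yet admitted; may be accepted later
    ACCEPTED = "accepted"  # resources secured until the request leaves
    DONE = "done"          # the user issued a "leave" signal

# Allowed transitions in the standby service model (our reading):
# no hard reject, and an accepted request is never preempted.
TRANSITIONS = {
    State.STANDBY: {State.STANDBY, State.ACCEPTED},
    State.ACCEPTED: {State.ACCEPTED, State.DONE},
    State.DONE: set(),
}

def may_transition(src, dst):
    """Checks whether the model allows moving from `src` to `dst`."""
    return dst in TRANSITIONS[src]
```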
Our algorithmic contribution consists of a deterministic algorithm that
receives requests in an on-line fashion, and determines when each request starts
receiving service (if at all) and how this service is provided (i.e., how to route
the request and where to process it). We note that in this, our algorithm solves
path computation and function placement combined, which is different from the
common approach that separates the two problems (separation may result in performance penalties). Quantitatively, in our model each request specifies a benefit
per time unit that it pays when it is served, and the algorithm is guaranteed to
obtain Ω(1/log(nk)) of the best possible benefit, where n is the system size
and k is the maximum number of processing stages of a request.¹ More precisely, in every time step t, the total benefit collected by the algorithm is at
least an Ω(1/log(nk))-fraction of the largest possible total benefit that can be
obtained at time t (i.e., from all requests that have arrived and did not leave
by time t) while respecting the capacity constraints. The above performance
guarantee of the algorithm holds under the conditions that no processing stage
of a request requires more than an O(1/(k log(nk))) fraction of the capacity of
any component (server or link) in the system, and assuming that the ratio of
the highest-to-lowest benefits of requests is bounded by a polynomial in n. (We
provide precise statements below.) We also prove a lower bound of Ω(log n) on the
competitive ratio of every online algorithm in our new model. Hence, so long
as k, the number of processing stages in a request, is bounded by a polynomial
in n, our algorithm is asymptotically optimal (see Sect. 6).

¹ Typically, k is constant because the number of processing stages does not grow as a
function of the size n of the network.
1.1 Previous Work
Abstractions via High-Level SDN Programming Languages. Merlin [14,15] is a
language for provisioning network resources in conjunction with SDNs. In Merlin,
requests are specified as regular expressions with additional annotations, and the
main task is path computation. The system works in an off-line fashion: given
the set of all requests and a system description, an algorithm computes feasible
routes for (some of) the requests. The algorithm suffers from two weaknesses:
first, as mentioned above, it is off-line, i.e., it requires knowing all requests
ahead of time; second, the algorithm is not polynomial, as it is based on
employing a solver for integer linear programs (ILPs). For more information on
SDN languages (and SDN in general) we refer the reader to [12].
Function Placement. Cohen et al. [6] present a model and an offline bi-criteria
approximation algorithm for the NFV placement of functions. In their model,
routes are already determined, and the question is where to place the requested
functions: if a required function is not placed on the given route of some flow,
then a detour must be taken by that flow. The goal is to minimize the cost, which
consists of a setup cost for each function placed and a penalty for each detour
from the prescribed routes. This algorithm is also off-line, and it has another
serious handicap in that it supports only requests in which the functions are
unordered: there is no way to require that a flow first undergoes processing by
some function f and only then is handled by another function g.
Path Computation and Function Placement in SDNs. Recently, Even et al. [9] followed our model for describing SDN requests. They designed an offline randomized algorithm that serves at least a (1−ε) fraction of the requests the
optimal solution can serve, provided that the SDN requests have small demands
(i.e., max_j d_j ≤ min_e c_e · ε²/(k · O(log n))).
Online Routing Algorithms. Our work leverages the seminal algorithm of Awerbuch et al. [2], an on-line algorithm for routing requests with given
benefits and known durations. The algorithm of [2] decides whether to admit or
reject each request when it arrives; it also computes routes for the
admitted requests. The goal of the algorithm in [2] is to maximize the sum of the
benefits of the accepted requests.

From the algorithmic point of view, one should note that the throughput-maximization algorithm of [2] resembles the load-minimization algorithm presented in [1], both dealing with on-line routing. In [1], each request has a specified
non-negative load. All requests must be served, and the goal is to minimize the
maximal ratio, over all links, between the load on a link and its capacity (the link
load is the sum of the loads of the requests it serves).
Buchbinder and Naor [4,5] analyze the algorithm of [2] using the primal-dual
method. This allows them to bound the beneﬁt of the computed solution as a
function of the beneﬁt of an optimal fractional solution (see also [11]).
As mentioned, the above algorithms assume that each request speciﬁes the
duration of the service it needs when it arrives. The only on-line algorithm for
unknown durations we know of in this context is for the problem of minimizing
the maximal load [3]. The algorithm in [3] is O(log n)-competitive, but it requires
rerouting of admitted requests (each request may be rerouted O(log n) times).
Our algorithm is for beneﬁt maximization, and it handles unknown durations
by allowing the “standby” mode, without ever rerouting an accepted request.
1.2 Advocacy of the Service Model
In the classical non-preemptive model with guaranteed bandwidth, requests must
specify in advance the exact duration of the connection (which may be
infinite), and the system must give an immediate response, which may be either
"reject" or "admit." While immediate responses are preferable in general, the
requirement that the duration be specified in advance is unrealistic in many cases (say,
because the length of the connection may depend on yet-unavailable inputs).
However, requests with unknown durations seem to thwart the possibility of a
competitive algorithm, due to the following reasoning. Consider any system, and
suppose that there are infinitely many requests available at time 0, all with unit
benefit per time step. Clearly there exists a request, say r*, that is rejected due
to the finite capacity of the system. Now, the following adversarial scenario may
unfold: all admitted requests leave the system at time 1 (accruing some finite
benefit), and request r* persists forever (i.e., can produce any desired benefit).
Clearly, this means that no deterministic algorithm can guarantee a non-trivial
competitive ratio in the worst case.
We therefore argue that if unknown durations are to be tolerated, then the
requirement for an immediate reject/admit response must be relaxed. One relaxation is to allow preemption, but if preemption is allowed then the connection
is never certain until it terminates. Our service model suggests committing upon
acceptance, but not committing to rejection. In other words, we never reject with
certainty, because we may accept later; but when we accept, the request is certain to have the required resources for as long as it wishes. This type of service
is quite common in many daily activities (e.g., waiting in line for a restaurant
seat), and is actually implicitly present even in some admit/reject situations: in
many cases, if a request is rejected, the resourceful user may re-submit it
or abandon.
Finally, from a more philosophical point of view, the “standby” service model
seems fair for unknown durations: on one hand, a request does not commit ahead
of time to when it will leave the system, and on the other hand, the algorithm
does not commit ahead of time to when the request will enter the system.