Reasoning with Multiple-Agent Possibilistic Logic
3 A Refutation Method by Linear Multiple Agent Resolution
In possibilistic logic, the linear resolution strategy for the procedure of refutation
by resolution, deﬁned in [7], works in the same way as in classical logic, and
thanks to an A∗ -like search method (changing the sum of the costs into their
minimum), one can obtain the refutation having the strongest weight ﬁrst, this
weight being the one of the formula we want to prove. Here, the (fuzzy) subsets
of agents play the role of weights, but they are not totally ordered, while the
weights in possibilistic logic are; this makes the problem more tricky (since the
costs in the A∗ -like algorithm will be computed from these weights). However,
the procedure can be adapted to multiple-agent logic.
3.1 Refutation by Linear Multiple Agent Resolution
Let Γ be a knowledge base composed of multiple agent formulas. Proving (a, A)
from Γ comes down to adding (¬a, All), in clausal form, to Γ and applying
the resolution rule repeatedly until producing (⊥, A). Clearly, it comes down to
getting the empty clause with the greatest subset of agents set(a, Γ ). Formally:
set(a, Γ ) = ∪{A|Γ |= (a, A)}
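For illustration, the multiple-agent resolution rule underlying this procedure can be sketched as follows (a minimal sketch; the encoding of clauses as sets of signed integer literals and the agent names are assumptions of the example, not the paper's notation):

```python
# Multiple-agent resolution: from (c1, A) and (c2, B) containing
# complementary literals l and -l, derive (resolvent, A ∩ B).

def resolve(c1, agents1, c2, agents2):
    """Return all multiple-agent resolvents of (c1, agents1), (c2, agents2)."""
    out = []
    for lit in c1:
        if -lit in c2:
            resolvent = frozenset((c1 - {lit}) | (c2 - {-lit}))
            out.append((resolvent, agents1 & agents2))
    return out

# (¬a ∨ b, All) with (a ∨ d, A): encode a=1, b=2, d=3.
All = {"ag1", "ag2", "ag3"}
A = {"ag1", "ag2"}
res = resolve(frozenset({-1, 2}), All, frozenset({1, 3}), A)
# res → [(frozenset({2, 3}), {'ag1', 'ag2'})], i.e. (b ∨ d, A)
```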
Refutation by resolution using a linear strategy can be expressed in terms of tree search in a state space. A state (C0, C1, ..., Ci) is defined by a central clause Ci and the sequence (C0, C1, ..., Ci−1) of central clauses that are ancestors of Ci. With each state of the search tree a subset of agents is associated, playing the role of a cost: it is the subset of agents of the most recently generated central clause, i.e., set(Ci) (short for set(Ci, Γ)) is associated with the state (C0, C1, ..., Ci). The goal
is to ﬁnd the states ending with the empty clause with the greatest subsets of
agents. An analogy with the search in the state space with costs is established
in the following way:
– The initial state S0 is deﬁned by the initial central clause C0 with a cost equal
to set(C0 ),
– The cost associated with the arc (C0 C1 , ..., Ci ) → (C0 C1 , ..., Ci Ci+1 ) is the set
associated with Ci+1 ,
– The global cost of the path C0 → C1 → ... → Ci is the intersection of the (set-valued) costs of the elementary arcs,
– The objective states are states (C0 C1 , ..., Ci ) such that Ci = (⊥, Ai ) with
Ai = ∅,
– The state (C0 C1 , ..., Cn ) is expanded by generating all resolvents of Cn authorized by the linear strategy.
Searching for a refutation with the greatest subsets of agents is then equivalent
to searching for a path with maximal cost from the initial state to the objective
states. However, many diﬀerences exist:
– costs here are to be maximized, not minimized. Indeed, the goal is to find the greatest subset of agents who believe a formula.
– costs are not additive: they are combined using the intersection operator.
– since only a partial order exists between subsets, several objective states may exist; they are then combined using the union operator.
– if an inclusion order exists between two subsets, the greatest subset is kept and the other path is never explored, unlike in standard state-space search.
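These differences amount to a small set-valued cost algebra, sketched below (the agent names are illustrative, not from the paper):

```python
# Set-valued cost algebra of the search: path costs combine by intersection,
# objective states combine by union, and a path is pruned only when its cost
# is strictly included in that of another path.

def path_cost(arc_costs):
    """Global cost of a path: intersection of the elementary arc costs."""
    cost = set(arc_costs[0])
    for c in arc_costs[1:]:
        cost &= c
    return cost

def combine_objectives(costs):
    """Final answer: union of the costs of all objective states."""
    out = set()
    for c in costs:
        out |= c
    return out

def pruned(c1, c2):
    """c1 is pruned if strictly included in c2; incomparable sets both stay."""
    return c1 < c2

All = {"ag1", "ag2", "ag3"}; A = {"ag1"}; B = {"ag2"}
assert path_cost([All, All, A]) == A          # cost along one branch
assert combine_objectives([A, B]) == {"ag1", "ag2"}
assert pruned(A, All) and not pruned(A, B)    # A ⊂ All; A, B incomparable
```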
As for heuristic search in state spaces, the ordered search is guided by an evaluation function f calculated as follows: for each state S of the search tree, f(S) = g(S) ∩ h(S), where g(S) is the path cost from the initial state to S, and h(S) is a cost estimation from S to an objective state.
The diﬀerent steps of the refutation by resolution using a linear strategy,
presented by Algorithm 1, can be summarized in the following way:
Let R(Γ) be the set of clauses that can be produced (using resolution) from Γ. For each refutation using the clause C, for each literal l of C and in order to obtain ⊥, the use of a clause C′ containing the literal ¬l is required. A refutation expanded from C will have a cost less than or equal to:

H(l) = ∪{set(C′) | C′ ∈ R(Γ), ¬l ∈ C′}

The cost of the path until the contradiction developed from the clause C is then:

h1(C) = ⋂_{l ∈ C} H(l) = ⋂_{l ∈ C} ∪{set(C′) | C′ ∈ R(Γ), ¬l ∈ C′}
An admissible evaluation function is then obtained: f1(S) = set(C) ∩ h1(C), where S = (C0, ..., C); since h1(S) depends only on the last central clause C, we write h1(C). A sequence of evaluation functions can be defined as follows:

h0(C) = All;
fp(C) = set(C) ∩ hp(C), p ≥ 0;
hp+1(C) = ⋂_{l ∈ C} ∪{fp(C′) | C′ ∈ R(Γ), ¬l ∈ C′}, p ≥ 0.
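Under the same clause encoding as in the earlier sketches (sets of signed integer literals with set-valued labels, an assumption of these examples), H and h1 can be written as:

```python
# H(l): union of the agent sets of clauses of R(Γ) containing ¬l, an upper
# bound on the cost of any refutation step consuming literal l.
# h1(C): intersection of H(l) over the literals of C.

def H(lit, labelled_clauses):
    out = set()
    for clause, agents in labelled_clauses:
        if -lit in clause:
            out |= agents
    return out

def h1(clause, labelled_clauses):
    cost = None
    for lit in clause:
        bound = H(lit, labelled_clauses)
        cost = bound if cost is None else cost & bound
    return cost if cost is not None else set()

R = [(frozenset({-1}), {"ag1", "ag2"}),   # (¬a, {ag1, ag2})
     (frozenset({-2}), {"ag2"})]          # (¬b, {ag2})
# For C = a ∨ b: H(a) = {ag1, ag2}, H(b) = {ag2}, so h1(C) = {ag2}.
```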
Example 1. Let Γ be a multiple-agent clausal knowledge base:
C1 : (¬a ∨ b, All); C2 : (a ∨ d, All);
C3 : (a ∨ ¬c, A); C4 : (¬d, A);
C5 : (¬d, B).
Let us consider the search for the greatest subset of agents who believe b. Let then Γ′ be the set of clauses equivalent to Γ ∪ {(¬b, All)}. We take C0 = (¬b, All), as Γ′ − {C0} is coherent. The only clause which contains the literal b is C1 (see Fig. 1). The next state is then S1 = (C0 C6) with C6 : (¬a, All) and cost equal to set(C0) ∩ set(C1) = set(C6) = All. Different paths with C2 and C3 exist from
this state. The evaluation function then will be calculated. The greatest set that
maximizes the evaluation function is All, because A ⊂ All. Effectively, taking into account this inclusion order, the path with the clause C3 is not explored. The next state is then S2 = (C0 C6 C7), with C7 : (d, All), and has cost set(C6) ∩ set(C2) = set(C7) = All.

Fig. 1. Refutation tree of Example 1
Several paths exist from this state. Those paths will be all explored because
they have incomparable evaluation functions, due to the partial order of subsets.
Let S3 = (C0 C6 C7 C8 ) be the next state. Its associated cost is set(C7 )∩set(C4 ) =
set(C8 ) = A. The clause C8 is a contradiction. So, the ﬁrst objective state is
reached.
When dealing with the clause C5, the next state is then S4 = (C0 C6 C7 C9), having the cost set(C7) ∩ set(C5) = set(C9) = B. The clause C9 is a contradiction. The last objective state is then reached. Thus Γ |= (b, A ∪ B).
3.2 Refutation by Linear Possibilistic Multiple Agent Resolution
In multiple-agent possibilistic logic, gradual subset weakening states that if β/B ⊆ α/A then (c, α/A) ⊢ (c, β/B). The inclusion F ⊆ G between two fuzzy subsets F and G of a referential U is classically defined by ∀u ∈ U, F(u) ≤ G(u). In particular, if U = All, then α/A ⊇ β/B if and only if A ⊇ B and α ≥ β.
The goal is then to ﬁnd a given formula with the greatest subset of agents
with the greatest certainty degree. Obviously, the union of two partial results
(⊥, α/A) and (⊥, β/B) should be taken if α > β and A ⊂ B. These observations
are used to directly extend the procedure of the previous section.
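Fuzzy agent subsets and the operations used here can be sketched as follows (representing α/A as a membership-degree dict is an assumption of this sketch):

```python
# A fuzzy subset α/A of All maps each agent of A to degree α and every other
# agent to 0. Intersection is pointwise min, union pointwise max, and
# inclusion F ⊆ G is the pointwise comparison F(u) ≤ G(u).

def fuzzy(alpha, agents, all_agents):
    return {u: (alpha if u in agents else 0.0) for u in all_agents}

def finter(f, g):
    return {u: min(f[u], g[u]) for u in f}

def funion(f, g):
    return {u: max(f[u], g[u]) for u in f}

def fincluded(f, g):
    return all(f[u] <= g[u] for u in f)

All = {"u1", "u2"}
F = fuzzy(0.9, {"u1"}, All)          # 0.9/A with A = {u1}
G = fuzzy(0.7, All, All)             # 0.7/All
# F and G are incomparable: A ⊂ All but 0.9 > 0.7.
```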
Example 2. Let Σ be a multiple-agent possibilistic knowledge base composed by
the following clauses:
C1 : (¬a ∨ b, 0.8/All)
C2 : (a ∨ d, 0.7/All)
C3 : (a ∨ ¬c, 0.9/A)
Algorithm 1. Multiple-agent refutation by resolution using a linear strategy
begin
  Open ← {S0}; Closed ← {S0}; bset ← ∅
  while Open ≠ ∅ do
    Select a state Sn in Open maximizing f
    Open ← Open − {Sn}
    if Sn is an objective state then
      bset ← bset ∪ {Sn}
    else
      Expand the node Sn, creating the set En of produced states
      if En contains states whose agent subsets are included in that of another then
        remove them from En
      end if
      En ← En \ Closed
      Open ← Open ∪ En
      Closed ← Closed ∪ {Sn}
      compute f for each new state of Open
    end if
  end while
  if bset = ∅ then
    failure
  else
    display bset
  end if
end
C4 : (¬d, 0.4/A)
C5 : (¬d, 0.3/B)
Note that the propositional knowledge base Σ◦ coincides with Γ◦ in the example of Sect. 3. The problem is to find the greatest subset of agents who believe b with the greatest certainty degree.
Let then Σ′ be the set of clauses equivalent to Σ ∪ {(¬b, 1/All)}. As depicted in Fig. 2, let us take C0 = (¬b, 1/All), because Σ′ − {C0} is coherent. As the classical projection of Σ′ is the same as that of Γ′, the next state is then S1 = (C0 C6) and the associated cost is fset(C0) ∩ fset(C1) = fset(C6) = 0.8/All. Different paths starting with C2 and C3 exist from this state. However, unlike in the previous example, both paths will be explored, because the fuzzy set 0.9/A is not included in the fuzzy set 0.7/All. Using C2, let S2 = (C0 C6 C7) be the next state, with cost fset(C6) ∩ fset(C2) = fset(C7) = 0.7/All.
Several paths exist from this state, using C4 or C5. Let S3 = (C0 C6 C7 C8) be the next state using C4. Its associated cost is fset(C7) ∩ fset(C4) = fset(C8) = 0.4/A. The clause C8 is a contradiction. The first objective state is then reached. With the path using the clause C5, the next state is then S4 = (C0 C6 C7 C9), with cost fset(C7) ∩ fset(C5) = fset(C9) = 0.3/B. The clause C9 is a contradiction. An objective state is then reached.
The development of the path with the clause C3 induces the next state S5 = (C0 C6 C10), with cost fset(C6) ∩ fset(C3) = fset(C10) = 0.8/A. The clause C10 is not a contradiction and there is no clause containing the literal c, so no objective state is reached here. Thus Σ |= (b, 0.4/A ∪ 0.3/B).

Fig. 2. Refutation tree of Example 2
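As with Example 1, Example 2 can be replayed by brute-force saturation with fuzzy labels (pointwise min along a path, pointwise max when the same clause is re-derived); the integer encoding and agent names are assumptions of this sketch, not the paper's notation. The label obtained for ⊥ is 0.4 on A and 0.3 on B, i.e. 0.4/A ∪ 0.3/B.

```python
# Possibilistic multiple-agent saturation: labels are dicts agent → degree;
# resolution combines labels by pointwise min, and repeated derivations of
# the same clause combine by pointwise max.

def fmin(f, g):
    return {u: min(f[u], g[u]) for u in f}

def fmax(f, g):
    return {u: max(f[u], g[u]) for u in f}

def fsaturate(clauses, max_rounds=20):
    db = {}
    for c, f in clauses:
        db[c] = fmax(db[c], f) if c in db else dict(f)
    for _ in range(max_rounds):
        changed = False
        for c1, f1 in list(db.items()):
            for c2, f2 in list(db.items()):
                for lit in c1:
                    if -lit in c2:
                        r = frozenset((c1 - {lit}) | (c2 - {-lit}))
                        f = fmin(f1, f2)
                        new = fmax(db[r], f) if r in db else f
                        if db.get(r) != new:
                            db[r] = new
                            changed = True
        if not changed:
            return db
    return db

# Encoding: a=1, b=2, c=3, d=4; A = {g1}, B = {g2}.
All, A, B = {"g1", "g2", "g3"}, {"g1"}, {"g2"}
def lab(alpha, agents):
    return {u: (alpha if u in agents else 0.0) for u in All}

Sigma_prime = [
    (frozenset({-2}), lab(1.0, All)),     # (¬b, 1/All)
    (frozenset({-1, 2}), lab(0.8, All)),  # C1
    (frozenset({1, 4}), lab(0.7, All)),   # C2
    (frozenset({1, -3}), lab(0.9, A)),    # C3
    (frozenset({-4}), lab(0.4, A)),       # C4
    (frozenset({-4}), lab(0.3, B)),       # C5
]
db = fsaturate(Sigma_prime)
# db[frozenset()] → {'g1': 0.4, 'g2': 0.3, 'g3': 0.0}
```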
4 Experimental Study
In order to analyse the behaviour of the proposed approach, the algorithms were implemented in Java and intensive experiments were performed. For this purpose, several consistent knowledge bases, including multiple-agent knowledge bases and possibilistic multiple-agent knowledge bases, were generated by varying the number of clauses. For each of the following experiments, the execution time of the algorithm is measured in seconds. The number of Boolean variables is set to 30, and the number of groups of agents is set respectively to 5, 10 and 15, with the number of agents set to 20.
1. Results with multiple-agent knowledge bases:
Figure 3 shows the behaviour of the refutation algorithm when varying the number of clauses from 5000 to 50000. According to the obtained results, we notice that the execution time increases proportionally to the number of clauses.
2. Results with multiple-agent possibilistic knowledge bases:
Figure 4 shows the behaviour of the refutation algorithm when varying the number of clauses from 5000 to 50000. According to Fig. 4, we notice that the execution time also increases with the number of clauses.
3. Comparison between refutation by linear multiple agent resolution and refutation by linear possibilistic multiple agent resolution:
In order to compare both approaches, other experiments have been carried out, using large bases containing 50000 clauses, 30 variables and 15 groups of agents. Varying the number of agents from 25 to 200, Fig. 5 reveals that the execution time of refutation by linear possibilistic multiple agent resolution is only slightly greater than that of refutation by linear multiple agent resolution.
Fig. 3. Execution time of the refutation algorithm for large multiple agent bases.
Fig. 4. Execution time of the algorithm for large possibilistic multiple-agent bases
Discussion. The obtained results allow us to estimate the performance of the proposed approach, which depends on the number of agent groups. Indeed, the execution time increases linearly with the number of clauses, but exponentially with the number of variables. Likewise, when the number of groups of agents increases, the execution time increases exponentially (but it increases linearly with the number of agents if their subsets are given in extension)¹. This can be explained by the way the refutation tree is constructed, which is based on the suitable clauses. Moreover, each branch of the tree represents one suitable clause for the literal to be deduced. The results also confirm that the execution
time of the refutation algorithm for possibilistic multiple-agent knowledge bases is slightly greater than the one obtained for multiple-agent knowledge bases. This is due to the fact that the construction of the refutation tree with fuzzy sets of agents consumes more time than the construction of refutation trees with crisp groups of agents.

Fig. 5. Comparison between multiple-agent logic and possibilistic multiple-agent logic in terms of computational time

¹ It should be noticed that a base Σ = {(a1, α1/A1), ..., (an, αn/An)} can be equivalently rewritten as a collection of at most 2^n possibilistic logic bases, each of them associated with an element of the partition of All induced by the Ai's. However, it is generally computationally better to handle the initial base in a global way using the procedure described in this paper.
5 Conclusion
This paper has investigated a multiple-agent logic. From a representation point of view, this multiple-agent logic allows us to represent beliefs of groups of agents, and its possibilistic extension handles fuzzy subsets of agents, thus integrating certainty levels associated with agent beliefs. From a reasoning point of view, we proposed a refutation procedure based on a linear resolution strategy for the multiple-agent logic and its possibilistic extension. An experimental study was conducted to evaluate the proposed algorithms; it shows the tractability of the approach.
One may think of several extensions. On the one hand, the multiple agent
extension of the Boolean generalized possibilistic logic [5] would allow us to
consider the disjunction and the negation of formulas like (p, A), and to express
quantiﬁers in propositions such as “at most the agents in subset A believe p”. On
the other hand, one might also take into account trust data about information
transmitted between agents [6,12]. For instance, assume agent a trusts agent
b at level θ, which might be written (b, θ/a), assimilating a, b to propositions.
Then together with (p, α/b) (agent b is certain at level α that p is true), it would
enable us to infer (p, min(α, θ)/a) [2].
References
1. Belhadi, A., Dubois, D., Khellaf-Haned, F., Prade, H.: Multiple agent possibilistic
logic. J. Appl. Non-Class. Logics 23(4), 299–320 (2013)
2. Belhadi, A., Dubois, D., Khellaf-Haned, F., Prade, H.: Reasoning about the opinions of groups of agents. In: 11th European Workshop on Multi-Agent Systems (EUMAS 2013), Toulouse, France, 12–13 December (2013). https://www.irit.fr/EUMAS2013/Papers/eumas2013 submission 68.pdf
3. Belhadi, A., Dubois, D., Khellaf-Haned, F., Prade, H.: Algorithme d'inférence pour la logique possibiliste multi-agents. In: Actes Rencontres francophones sur la logique floue et ses applications (LFA 2014), Cargèse, France, 22–24 October, pp. 259–266. Cépaduès (2014)
4. Belhadi, A., Dubois, D., Khellaf-Haned, F., Prade, H.: La logique possibiliste multi-agents: une introduction. In: Actes Rencontres francophones sur la logique floue et ses applications (LFA 2015), Poitiers, France, 5–6 November, pp. 271–278. Cépaduès (2015)
5. Dubois, D., Prade, H., Schockaert, S.: Stable models in generalized possibilistic
logic. In: Brewka, G., Eiter, Th., McIlraith, S.A. (eds.) Proceedings of the 13th
International Conference on Principles of Knowledge Representation and Reasoning (KR 2012), Roma, June 10–14, pp. 519–529. AAAI Press (2012)
6. Cholvy, L.: How strong can an agent believe reported information? In: Liu, W.
(ed.) ECSQARU 2011. LNCS, vol. 6717, pp. 386–397. Springer, Heidelberg (2011)
7. Dubois, D., Lang, J., Prade, H.: Theorem proving under uncertainty - a possibility
theory-based approach. In: McDermott, J.P. (ed.) Proceedings of the 10th International Joint Conference on Artificial Intelligence (IJCAI 1987), Milan, August,
pp. 984–986. Morgan Kaufmann (1987)
8. Dubois D., Lang J., Prade H.: Possibilistic logic. In: Gabbay, D.M., Hogger, C.J.,
Robinson, J.A., Nute, D. (eds.) Handbook of Logic in Artificial Intelligence and
Logic Programming, vol. 3, pp. 439–513. Oxford University Press (1994)
9. Dubois, D., Prade, H.: Possibilistic logic: a retrospective and prospective view.
Fuzzy Sets Syst. 144, 3–23 (2004)
10. Dubois, D., Prade, H.: Extensions multi-agents de la logique possibiliste. In: Proceedings of the Rencontres Francophones sur la Logique Floue et ses Applications (LFA 2006), Toulouse, 19–20 October, pp. 137–144. Cépaduès (2006)
11. Dubois, D., Prade, H.: Toward multiple-agent extensions of possibilistic logic. In:
Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE
2007), London, 23–26 July, pp. 187–192 (2007)
12. Gutscher, A.: Reasoning with uncertain and conflicting opinions in open reputation
systems. Electron. Notes Theor. Comput. Sci. 244, 67–79 (2009)
Incremental Preference Elicitation in Multi-attribute Domains for Choice and Ranking with the Borda Count

Nawal Benabbou¹(B), Serena Di Sabatino Di Diodoro¹,², Patrice Perny¹, and Paolo Viappiani¹

¹ Sorbonne Universités, UPMC Univ Paris 06 and CNRS, LIP6, UMR 7606, Paris, France
{nawal.benabbou,serena.disabatinodidiodoro,patrice.perny,paolo.viappiani}@lip6.fr
² Department of Electronics and Information (DEIB), Politecnico di Milano, Milan, Italy
Abstract. In this paper, we propose an interactive version of the Borda
method for collective decision-making (social choice) when the alternatives are described with respect to multiple attributes and the individual
preferences are unknown. More precisely, assuming that individual preferences are representable by linear multi-attribute utility functions, we
propose an incremental elicitation method aiming to determine the Borda
winner while minimizing the communication eﬀort with the agents. This
approach follows the recent work of Lu and Boutilier [8] relying on the
minimax regret as a criterion for dealing with uncertainty in the preferences. We show that, when preferences are expressed on a multi-attribute
domain and are additively separable over attributes, regret-based incremental elicitation methods can be made more eﬃcient to determine or
approximate the Borda winner. Our approach relies on the representation
of incomplete preferences using convex polyhedra of possible utilities and
is based on linear programming, both for minimizing regrets and for selecting informative preference queries. It enables us to incrementally collect preference judgements from the agents until the Borda winner can be identified. Moreover, we provide an incremental technique for eliciting a collective ranking instead of a single winner.
1 Introduction
© Springer International Publishing Switzerland 2016. S. Schockaert and P. Senellart (Eds.): SUM 2016, LNAI 9858, pp. 81–95, 2016. DOI: 10.1007/978-3-319-45856-4_6

Voting is an effective method for collective decision-making, used in political elections, technical committees, academic institutions. Recently, interest in voting has increased in computer science, given the possibility offered by online web systems to support voting protocols, or protocols inspired by voting, for group decision-making (for example, for scheduling a meeting). In many real situations, however, it may be necessary to reason with partial preferences, as some preferences are not available and too expensive to obtain (with respect to a cognitive or economic cost). This observation has motivated a number of recent works on social choice with partial preferences, e.g., [2–6,8,9,12]. In this research stream,
typical questions concern the determination of possible and necessary winners,
the selection of preference queries to ask to the agents for further eliciting preferences, the approximation of optimal solutions or the determination of robust
recommendations based on the available preference information.
Acquiring agents’ preferences is expensive (with respect to time and cognitive cost). It is therefore essential to provide techniques that allow to reason
with partial preference information, and that can eﬀectively elicit the most relevant part of preferences to make a decision. Adaptive utility elicitation [1,10,11]
tackles the challenges posed by preference elicitation by representing the system knowledge about the agents’ preferences in the form of a set of admissible
utility functions. This set includes all functions compatible with the preferences
collected so far, and is updated following agents’ responses. In this way, one can
often make good (or even optimal) recommendations with sparse knowledge of
the users’ utility functions.
The aim of this paper is to introduce an adaptive utility elicitation procedure in the context of voting, for the fast determination of a Borda winner or a
social ranking based on the Borda score, and to test the practical eﬃciency of
this procedure. In particular, we extend the work of [8] to the multi-attribute
case. Multiple attributes may appear in well-known collective decision problems such as committee elections or voting in multi-issue domains [7]. In these
cases, attributes are boolean and represent elementary decisions on candidates
or issues. More generally, the multi-attribute case occurs when the alternatives of a collective decision problem are described by different features, not necessarily boolean. Individual preferences are assumed here to be representable by a linear function of the attribute values. Since utilities are decomposable over attributes, a set of preference statements formulated by an agent on some pairs of alternatives may allow us to infer preference statements on other pairs, without asking for them explicitly. We show in the paper how this type
of inference mechanism can be implemented using mathematical programming
to reduce the number of queries and speed-up the determination of a necessary
Borda winner.
The paper is organized as follows: in Sect. 2, we introduce the basic framework for voting on multi-attribute domains. Then, we present the minimax regret
decision criterion as a useful tool for decision under uncertainty and preference elicitation. In Sect. 3, we introduce a new method based on mathematical
programming to minimize regrets based on the Borda count. Section 4 deals
with preference elicitation for the Borda count; we introduce diﬀerent strategies
for generating preference queries and compare them experimentally. Finally, in
Sect. 5, we extend the approach to ranking problems based on the Borda score
and provide additional numerical tests to evaluate the eﬃciency of our approach
in ranking.
2 Social Choice in Multi-attribute Domains with Incomplete Preferences
We consider a set of n voters or agents and a set X of m alternatives (candidates, options, items), characterized by a finite set of q attributes or criteria; an alternative is associated with a vector x = (x1, ..., xq), where each xk represents the value of an attribute k or a performance with respect to a given point of view.
Individual preferences are assumed here to be represented by linear utilities of the form u_i(x) = Σ_{k=1}^q ω_k^i x_k, where ω^i = (ω_1^i, ..., ω_q^i) is a vector of weights characterizing the preferences of agent i. Hence, an alternative x is at least as good as y for agent i whenever Σ_{k=1}^q ω_k^i x_k ≥ Σ_{k=1}^q ω_k^i y_k. Our framework can be used to address two different cases: a multi-criteria decision setting, or a multi-attribute utility setting where the utility is defined as the weighted sum of attribute values. Formally, these preferences are defined by the following relation ≿_i:

x ≿_i y  iff  Σ_{k=1}^q ω_k^i (x_k − y_k) ≥ 0
A preference profile (≿_1, ..., ≿_n) of an election is therefore completely characterized by the weight vectors ω^1, ..., ω^n (each associated with an agent). We can now define the Borda score in our multi-attribute setting, where preferences are defined by the utility weights. Given ω = (ω^1, ..., ω^n), the Borda score s(x, ω) of an alternative x is

s(x, ω) = Σ_{i=1}^n s_i(x, ω^i)

where s_i(x, ω^i) = |{y ∈ X | x ≻_i y}| counts the number of alternatives that are strictly beaten by x according to the preference relation induced from ω^i, and ≻_i is the asymmetric part of ≿_i: x ≻_i y iff x ≿_i y and ¬(y ≿_i x). Our definition allows for ties in each ranking. When using only linear orders (i.e. when the ω^i are such that there are no ties) we get the usual Borda count.
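A direct computational reading of this definition can be sketched as follows (the alternatives and weight vectors are illustrative data, not from the paper):

```python
# Multi-attribute Borda score: agent i compares alternatives through the
# linear utility u_i(x) = Σ_k ω_k^i x_k; s_i(x) counts alternatives strictly
# beaten by x, and s(x, ω) sums these counts over agents.

def utility(weights, x):
    return sum(w * v for w, v in zip(weights, x))

def borda_score(x, alternatives, agent_weights):
    score = 0
    for w in agent_weights:                  # one weight vector per agent
        ux = utility(w, x)
        score += sum(1 for y in alternatives if y != x and ux > utility(w, y))
    return score

# Three alternatives over q = 2 attributes, n = 2 agents:
X = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
omegas = [(0.8, 0.2), (0.6, 0.4)]
scores = {x: borda_score(x, X, omegas) for x in X}
# scores → {(1.0, 0.0): 4, (0.0, 1.0): 0, (0.5, 0.5): 2}
```

Both agents here rank (1.0, 0.0) first and (0.5, 0.5) second, so the former is the Borda winner with score 4.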
When the weights of the agents are not known to the system with certainty, we need to reason about partially specified preferences. This is done by assuming a vector Ω = (Ω^1, ..., Ω^n), where each Ω^i is the set of feasible ω^i that are consistent with the available preference information on agent i. Later, we will use Ω (which represents our uncertainty about the weights associated with the agents) in order to provide a recommendation based on minimax regret. At the level of a single agent i, we can check whether pairs of alternatives are in a necessary preference relation given Ω^i.
Definition 1. Alternative x is necessarily weakly preferred to y for agent i, written x ≿_i^N y, iff ∀ω^i ∈ Ω^i, Σ_{k=1}^q ω_k^i (x_k − y_k) ≥ 0. Similarly, x is necessarily strictly preferred to y for agent i, written x ≻_i^N y, iff ∀ω^i ∈ Ω^i, Σ_{k=1}^q ω_k^i (x_k − y_k) > 0.
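Definition 1 can be tested directly when Ω^i is a polytope given by its extreme points (the paper works with the polyhedron itself via linear programming; by linearity of ω ↦ Σ_k ω_k(x_k − y_k), checking the extreme points is equivalent). The data below are illustrative assumptions of this sketch:

```python
# Necessary weak preference: x ≿_i^N y iff the linear form
# Σ_k ω_k (x_k − y_k) is nonnegative for every feasible ω, which over a
# polytope holds iff it is nonnegative at every extreme point.

def nec_weak_pref(x, y, extreme_points):
    return all(
        sum(w * (xk - yk) for w, xk, yk in zip(omega, x, y)) >= 0
        for omega in extreme_points
    )

# Agent i known to put weight between 0.6 and 0.9 on attribute 1
# (weights summing to 1): extreme points of Ω^i.
Omega_i = [(0.6, 0.4), (0.9, 0.1)]
x, y = (1.0, 0.0), (0.0, 1.0)
# x is necessarily weakly preferred to y, but not conversely:
# nec_weak_pref(x, y, Omega_i) → True; nec_weak_pref(y, x, Omega_i) → False
```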