Applied Intelligence 21, 277–288, 2004.
© 2004 Kluwer Academic Publishers. Manufactured in The United States.
Soft Aggregation Methods in Case Based Reasoning

RONALD R. YAGER
Machine Intelligence Institute, Iona College, New Rochelle, NY 10801, USA
Abstract. Our goal is to provide some tools, based on soft computing aggregation methods, useful in the two fundamental steps in case based reasoning: matching the target and the cases, and fusing the information provided by the relevant cases. To aid in the first step we introduce a methodology for matching the target and cases which uses a hierarchical representation of the target object. We also introduce a method for fusing the information provided by relevant retrieved cases. This approach is based upon the nearest neighbor principle and uses the induced ordered weighted averaging operator as the basic aggregation operator. A procedure for learning the weights is described.

Keywords: aggregation, OWA operators, fuzzy logic, induced aggregation
1. Introduction
Case based reasoning (CBR) and other variants of instance based inference have become important methodologies in information technology [1–4]. They can be used in many areas such as data mining, information retrieval, question answering systems, situation assessment and other higher level fusion tasks [5, 6]. The key idea in CBR involves the use of already existing knowledge about objects or situations to predict aspects of similar objects for which all information may not be known. The fundamental concepts of CBR can be best expressed within the framework of the problem of feature valuation. Assume we have some object or situation, called the target object, which has as one of its features an attribute V whose value is unknown to us. We shall denote this as the notable feature. We further assume that this target object can be described in terms of other attributes, which we shall call the characterizing features. Our problem is to determine the value of the notable feature for the target object. In case based reasoning the prime method of obtaining the value of the notable feature is to draw upon a set of related objects, called the case base, such that for each object in the case base we have values for both its characterizing features as well as its notable feature. The basic paradigm in CBR is to use the objects in the case base most similar to the target object, with respect to the characterizing features, to determine the notable value of the target object. Solutions to problems of this type are generally based upon a class of tools called nearest neighbor methods [7–10].
The nearest neighbor method involves a two step process. The first step in the nearest neighbor approach is to calculate the degree of matching or similarity of each of the objects in the case base with the target object. With the case base consisting of the set D = {d_i} of cases and x indicating our target object, we denote the similarities as Match(x, d_i) = η_i. This step can be seen as a kind of retrieval of relevant cases. Once having obtained the similarities, in the second step of the nearest neighbor method we use an aggregation algorithm to generate v, the value of the notable feature for the target object, from the pairs (η_i, v_i), v_i being the value of the notable feature for case d_i. The essential character of this nearest neighbor aggregation algorithm is that the stronger the match of an object in the case base to the target object, the stronger its role in the determination of v; the larger η_i the more important v_i. If we let Z = {(η_i, v_i)} then a nearest neighbor rule can be seen as some kind of aggregation procedure, v = Agg(Z).
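As an illustration of this two step process, the following Python sketch pairs a matching step with an aggregation step. The similarity measure and the match-weighted average used as Agg are illustrative placeholder choices of ours; the paper deliberately leaves both open.

```python
# Minimal sketch of the two-step nearest neighbor process:
# step 1 matches the target against every case, step 2 aggregates
# the notable values v_i weighted by the match strengths eta_i.

def match(x, d):
    # One minus the mean absolute difference over the shared
    # characterizing features (an illustrative choice).
    keys = x.keys() & d.keys()
    return 1 - sum(abs(x[k] - d[k]) for k in keys) / len(keys)

def nearest_neighbor_value(x, case_base):
    Z = [(match(x, d["features"]), d["notable"]) for d in case_base]
    total = sum(eta for eta, _ in Z)
    return sum(eta * v for eta, v in Z) / total   # v = Agg(Z)

case_base = [
    {"features": {"A1": 0.9, "A2": 0.4}, "notable": 10.0},
    {"features": {"A1": 0.2, "A2": 0.8}, "notable": 4.0},
]
print(nearest_neighbor_value({"A1": 0.8, "A2": 0.5}, case_base))
```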
Central to the case based reasoning method is the process of aggregation. In the task of retrieval this arises
when we obtain the overall match of a case to the target using their matches with respect to the individual components characterizing the target. In the process of fusing the information provided by the relevant cases we again use aggregation methods. In the following we shall describe how the Ordered Weighted Averaging operator, a soft computing type aggregation method, can provide tools that are useful in both aspects of this problem, the matching of the case and target and the generation of the missing value using the matched cases.
2. Aggregating Attribute Satisfactions
As we have indicated, the first step in using a case base reasoning system consists of retrieving the relevant cases from the collection D of cases. The capability to effectively retrieve relevant cases has emerged as one of the most important issues for the development of case based reasoning systems. At the heart of the problem is the need to effectively describe the target situation within a language that can be used to match the target object with the instances in the case base. Here we are limited by the computer's ability to understand. In the following we present one approach to this problem.

In the approach presented here we assume the CBR system has a collection of basic primitive attributes or features. If A_j is a primitive attribute, we have available for any case d in D a value A_j(d) ∈ [0, 1] indicating the degree to which case d satisfies A_j. Thus in this model a case is represented by its score for each of the primitive attributes. In addition, the primitive features are used to describe the target situation we are trying to match. The description of the target object can be seen as a query to the system. More specifically, a target situation is described by a subset of relevant primitives and a relationship between these primitives. For example, a user interested in finding cases that match a target object having attributes A_1 and A_2 would simply pose the Boolean query (A_1 and A_2). The degree of a case's matching to this target situation would be based upon its scores for the two concepts. The key issue here is the specification of the relationship between the desired features so as to appropriately characterize the target situation.

In this work we provide tools to enable the formulation of the requirements using sophisticated combinations of the primitives. Central to this approach is the ability to construct queries that describe the relevant cases in D using operations that allow the valuation of a case using its scoring on the basic primitive attributes. As we shall see, the expressive capability of our query language will be enhanced by the use of a hierarchical structure to represent queries.

Here then we have an aggregation problem. In order to provide a general framework to implement the kinds of operations needed to describe target objects we shall use the Ordered Weighted Averaging (OWA) operator [11, 12]. We briefly review the basic ideas associated with this class of aggregation operators.
Definition. An Ordered Weighted Averaging (OWA) operator of dimension n is a mapping

F(a_1, a_2, ..., a_n) = Σ_{j=1}^{n} w_j b_j

where b_j is the j-th largest of the a_i and the w_j are a collection of weights such that w_j ∈ [0, 1] and Σ_{j=1}^{n} w_j = 1.

The key feature of this operator is the ordering of the arguments by their value, a process that introduces a nonlinearity into the operation. We note that if index is a function such that index(j) is the index of the j-th largest of the a_i, then we can express F(a_1, a_2, ..., a_n) = Σ_{j=1}^{n} w_j a_{index(j)}. Formally, we can represent this aggregation operator in vector notation as F(a_1, a_2, ..., a_n) = W^T B, where W, a vector whose components are the w_j, is called the weighting vector, and B, a vector whose components are the b_j, is called the ordered argument vector. It can be shown that this operator is a mean operator: it is commutative, monotonic, and bounded, Min_i[a_i] ≤ F(a_1, ..., a_n) ≤ Max_i[a_i].
The great generality of the operator lies in the fact that by selecting the w_j we can implement different aggregation operators. A number of special cases of these operators are worth pointing out. If the weights are such that w_1 = 1 and w_j = 0 for all j ≠ 1 then we get F(a_1, a_2, ..., a_n) = Max_j[a_j]. This weighting vector is denoted as W*. If the weights are such that w_n = 1 and w_j = 0 for j ≠ n we get F(a_1, a_2, ..., a_n) = Min_j[a_j]. This weighting vector is denoted as W_*. If the weights are such that w_j = 1/n for all j then F(a_1, a_2, ..., a_n) = (1/n) Σ_{j=1}^{n} a_j. This weighting vector is denoted as W_ave.
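To make the definition concrete, here is a minimal Python sketch of the OWA operator together with the three special weighting vectors just described (the function and variable names are ours):

```python
# OWA operator of dimension n: F(a_1,...,a_n) = sum_j w_j * b_j,
# where b_j is the j-th largest of the arguments.

def owa(weights, args):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    b = sorted(args, reverse=True)       # ordered argument vector B
    return sum(w * bj for w, bj in zip(weights, b))

a = [0.7, 0.2, 0.9, 0.4]
n = len(a)
w_star = [1.0] + [0.0] * (n - 1)         # W*: recovers Max
w_substar = [0.0] * (n - 1) + [1.0]      # W_*: recovers Min
w_ave = [1.0 / n] * n                    # W_ave: recovers the mean

print(owa(w_star, a))      # 0.9 = max
print(owa(w_substar, a))   # 0.2 = min
print(owa(w_ave, a))       # 0.55 = mean
```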
More generally, by appropriately selecting the weights in W, we can emphasize different arguments based upon their position in the ordering. If we place
most of the weight near the top of W, we can emphasize the higher scores, while placing the weight near the bottom of W emphasizes the lower scores in the aggregation.
3. Linguistic Description of Attribute Relations

We now describe the basic application of the OWA operator in case retrieval systems. Assume A_1, A_2, ..., A_n is a group of primitive attributes relevant to a particular target situation. For any given case d we let A_i(d) ∈ [0, 1] indicate the degree to which case d has the attribute A_i. If W is an OWA weighting vector then Val(d) = F_W(A_1(d), A_2(d), ..., A_n(d)) provides an aggregation of the satisfactions. By selecting different forms for W we can capture different relationships between the primitives. For example, if we use W = W_* then Val(d) = Min_i[A_i(d)] and we are essentially implementing an anding operation. Thus this evaluates the degree to which it is true that d satisfies A_1 and A_2 ... and A_n. On the other hand, if we use W = W* then Val(d) = Max_i[A_i(d)] and we are implementing an oring operation. Here we are evaluating the degree to which it is true that d satisfies A_1 or A_2 ... or A_n. We see that this operator allows valuation of the classically quantified statements for all and at least one. The interesting thing about the OWA operator is that it can provide a structure for the representation and valuation of statements involving a much wider class of quantifiers. As we shall see, this capability can be used as a mechanism for modeling sophisticated linguistically rooted queries to case bases.
The starting point of our development is the concept of a linguistic quantifier, an idea which was originally introduced by Zadeh [13, 14] as an extension of the classical quantifiers. A linguistic quantifier, more specifically a proportional linguistic quantifier, is a term corresponding to a proportion of objects. While most formal logical systems allow just two quantifiers, for all and there exists, as noted by Zadeh, human discourse is replete with a vast array of terms, fuzzy and crisp, that are used to express information about proportions. Examples of these are most, at least half, all, about 1/3. Motivated by this, Zadeh [13] suggested a method for formally representing these linguistic quantifiers. Let Q be a linguistic expression corresponding to a quantifier, such as most; Zadeh suggested representing this as a fuzzy subset Q over I = [0, 1]. Under this representation, for any proportion r ∈ I its membership grade in Q, Q(r), indicates the degree to which r satisfies the concept indicated by the quantifier Q. For our purposes we shall restrict ourselves to regularly increasing monotonic (RIM) quantifiers. A fuzzy subset Q: I → I is said to represent a RIM linguistic quantifier if Q(0) = 0, Q(1) = 1 and if r_1 > r_2 then Q(r_1) ≥ Q(r_2) (monotonicity). The RIM quantifiers model the class of quantifiers in which an increase in proportion results in an increase in compatibility with the linguistic expression being modeled. Examples of these types of quantifiers are at least one, all, at least α%, most, more than a few, some.
In [15] we showed how we can use a linguistic quantifier to induce a weighting vector W associated with an OWA aggregation. Assume Q is a RIM quantifier. Then we associate with Q an OWA weighting vector W such that for j = 1 to n

w_j = Q(j/n) − Q((j − 1)/n)

Thus using this approach we obtain the weighting vector directly from the linguistic expression of the quantifier. The properties of RIMness guarantee that the properties of W are satisfied:

1. Since Q is monotonic, Q(j/n) ≥ Q((j − 1)/n), and so w_j ≥ 0.
2. Σ_{j=1}^{n} w_j = Σ_{j=1}^{n} [Q(j/n) − Q((j − 1)/n)] = Q(1) − Q(0) = 1.
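As a short sketch of this construction, the following Python fragment derives OWA weights from a few RIM quantifiers; the representation Q(r) = r^2 for most is the one used in the example of Section 4, while the other two representations are standard choices that recover W* and W_*:

```python
# Derive OWA weights from a RIM quantifier:
# w_j = Q(j/n) - Q((j-1)/n), j = 1..n.

def quantifier_weights(Q, n):
    return [Q(j / n) - Q((j - 1) / n) for j in range(1, n + 1)]

most = lambda r: r ** 2                            # "most"
at_least_one = lambda r: 1.0 if r > 0 else 0.0     # recovers W*
all_of = lambda r: 1.0 if r >= 1 else 0.0          # recovers W_*

print(quantifier_weights(most, 4))          # [0.0625, 0.1875, 0.3125, 0.4375]
print(quantifier_weights(at_least_one, 4))  # [1.0, 0.0, 0.0, 0.0]
print(quantifier_weights(all_of, 4))        # [0.0, 0.0, 0.0, 1.0]
```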
We are now in a position to address the issue of using the OWA aggregation to search a case base. We make available to the user a vocabulary, Q = {Q_1, Q_2, ..., Q_q}, of linguistic expressions, each corresponding to a linguistic quantifier. When posing a query describing the cases of interest, after specifying a collection of attributes of interest (A_1, A_2, ..., A_n), a user will be prompted to also specify one of the linguistic quantifiers in Q as guiding the query formation. Transparent to the user is the association of each of the linguistic terms in Q with a representative fuzzy subset, Q_i ⇔ Q_i, and the process of converting this
fuzzy subset into an OWA weighting vector based on the formulation w_j = Q_i(j/n) − Q_i((j − 1)/n).
One of the elements in Q should be designated as the default quantifier; this is the one that is to be used when no quantifier selection is specified by the user. The most appropriate choice for this is the average quantifier, w_j = 1/n, which corresponds to the linguistic expression some.
As a result of the ideas presented so far we can introduce the idea of a basic query module, ⟨A_1, A_2, ..., A_n : Q⟩, consisting of a collection of attributes describing the target situation and a linguistic quantifier indicating the proportion of these attributes we desire. Implicit in this query module is the fact that the linguistic expression Q is essentially defining a weighting vector W for an OWA aggregation. Using the OWA operator, any object in the case base can be evaluated using this query.
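Putting the pieces together, a basic query module can be evaluated for a case by inducing the weights from Q and applying the OWA operator; a minimal sketch, reusing the owa and quantifier_weights helpers from above:

```python
# Evaluate a case against a basic query module <A_1,...,A_n : Q>.

def evaluate_query(attribute_scores, Q):
    # attribute_scores: the values A_i(d) in [0, 1];
    # Q: a RIM quantifier given as a function on [0, 1].
    weights = quantifier_weights(Q, len(attribute_scores))
    return owa(weights, attribute_scores)

# Degree to which a case satisfies "most of A_1,...,A_4",
# with most represented by Q(r) = r**2.
print(evaluate_query([0.7, 1.0, 0.5, 0.6], lambda r: r ** 2))  # 0.6
```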
4. Introducing Importances
In the preceding we have indicated a query object to be a module consisting of a collection of attributes of interest and a quantifier Q indicating a mode of interaction between the attributes. As noted, the quantifier is to be selected from a collection of quantifiers, among which are the logical "anding" of the attribute scores, the logical "oring" of the attribute scores, and the simple averaging of attribute scores. Implicit in the preceding is the equal treatment of all desired attributes. At times a user may desire to ascribe different importances to the different attributes. In the following we shall consider the introduction of importance weights into our procedure.

Let α_i ∈ [0, 1] be a value associated with an attribute A_i indicating the importance associated with that attribute. We shall assume the larger α_i the more important the attribute is to the user, and let α_i = 0 stipulate zero importance. With the introduction of these weights we can now consider a more general query object:

⟨A_1, A_2, ..., A_n : M : Q⟩

Here, as before, the A_i are a collection of attributes and Q is a linguistic quantifier; however, here M is an n-vector whose component m_j = α_j is the importance associated with A_j.
Our goal now is to calculate the overall satisfaction to ⟨A_1, A_2, ..., A_n : M : Q⟩ for case d. We shall denote this

Val(d) = F_{Q/M}(A_1(d), A_2(d), ..., A_n(d))

Here F_{Q/M} indicates an OWA operator. Our procedure here, as suggested in [16], will be to first find an associated OWA weighting vector, W(d), based upon both Q and M. Once having obtained this vector we calculate Val(d) by the usual OWA process

Val(d) = W(d)^T B(d) = Σ_{j=1}^{n} w_j(d) b_j(d)

Here b_j(d) indicates the j-th largest of the A_i(d) and w_j(d) is the j-th component of the associated OWA vector W(d).
What is important to point out here is that, as we shall subsequently see, as opposed to the original case, where no importances are considered, the associated OWA vector will be different for each d. This situation accounts for our notation W(d). Actually the weighting vector will be influenced by the ordering of the A_i(d).
We now describe the procedure [16] that shall be used to calculate the weighting vector w_j(d). The first step is to order the A_i(d); we let index_d(j) be the index of the j-th largest of the A_i(d). Our next step is to calculate the OWA weighting vector W(d). We obtain the associated weights as

w_j(d) = Q(S_j / T) − Q(S_{j−1} / T)

where

S_j = Σ_{k=1}^{j} α_{index_d(k)}  and  T = S_n = Σ_{k=1}^{n} α_{index_d(k)}.

Thus T is the sum of all the importances and S_j is the sum of the importances of the j most satisfied attributes. Once having obtained the weights we can then obtain the aggregated value by the usual method, B^T W. A sketch of this procedure is given below; the example that follows illustrates its use.
Example. We shall assume there are four criteria of interest to the user: A_1, A_2, A_3, A_4. The importances associated with these criteria are α_1 = 1, α_2 = 0.6, α_3 = 0.5 and α_4 = 0.9. From this we get T = Σ_{k=1}^{4} α_k = 3. We shall assume the quantifier guiding this aggregation is most, which is defined by Q(r) = r^2. Assume we have two cases x and y, and the satisfactions to each of the attributes by these cases are given by the following:

A_1(x) = 0.7  A_2(x) = 1    A_3(x) = 0.5  A_4(x) = 0.6
A_1(y) = 0.6  A_2(y) = 0.3  A_3(y) = 0.9  A_4(y) = 1

Our objective here is to obtain the valuations of each of these cases with respect to this query structure. We first consider the valuation for x. In this case the ordering of the attribute satisfactions gives us:

j   index_x(j)   b_j   α_{index_x(j)}
1   2            1     0.6
2   1            0.7   1
3   4            0.6   0.9
4   3            0.5   0.5
Calculating the weights associated with x, which we denote w_j(x), we get

w_1(x) = Q(0.6/3) − Q(0/3) = 0.04
w_2(x) = Q(1.6/3) − Q(0.6/3) = 0.24
w_3(x) = Q(2.5/3) − Q(1.6/3) = 0.41
w_4(x) = Q(3/3) − Q(2.5/3) = 0.31

Using these we calculate Val(x):

Val(x) = Σ_{j=1}^{4} w_j(x) b_j = (0.04)(1) + (0.24)(0.7) + (0.41)(0.6) + (0.31)(0.5) = 0.609
To calculate the score for case y we proceed as follows. In this case the ordering of the criteria satisfactions is:

j   index_y(j)   b_j   α_{index_y(j)}
1   4            1     0.9
2   3            0.9   0.5
3   1            0.6   1
4   2            0.3   0.6

The weights associated with the aggregation are
w_1(y) = Q(0.9/3) − Q(0/3) = 0.09
w_2(y) = Q(1.4/3) − Q(0.9/3) = 0.13
w_3(y) = Q(2.4/3) − Q(1.4/3) = 0.42
w_4(y) = Q(3/3) − Q(2.4/3) = 0.36
Using these we calculate

Val(y) = Σ_{j=1}^{4} w_j(y) b_j = (0.09)(1) + (0.13)(0.9) + (0.42)(0.6) + (0.36)(0.3) = 0.567

Hence in this example x is the better scoring case.

We point out some features of this process. When all the attributes have the same importance, α_j = α, we get

w_j(d) = Q(Σ_{k=1}^{j} α / nα) − Q(Σ_{k=1}^{j−1} α / nα) = Q(j/n) − Q((j − 1)/n).

This is the same set of weights we obtained when we didn't include any information with respect to importance. It can also be shown that if an attribute has zero importance it makes no contribution to the aggregation; that is, if α_{index_d(j)} = 0 then w_j(d) = 0.
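Running the weighted_owa sketch given earlier on this example reproduces these figures (the exact values differ slightly from the hand calculation above, which rounds each weight to two places):

```python
most = lambda r: r ** 2                 # the quantifier "most"
alphas = [1.0, 0.6, 0.5, 0.9]           # importances, T = 3

x = [0.7, 1.0, 0.5, 0.6]                # A_1(x), ..., A_4(x)
y = [0.6, 0.3, 0.9, 1.0]                # A_1(y), ..., A_4(y)

print(weighted_owa(x, alphas, most))    # 0.6099, rounded weights give 0.609
print(weighted_owa(y, alphas, most))    # 0.5663, rounded weights give 0.567
```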
The situation with the quantifier where Q(r) = r is of special interest. Here with

w_j(d) = Q(S_j / T) − Q(S_{j−1} / T)

we get w_j(d) = (1/T) α_{index_d(j)}. This can be seen to result in the usual weighted average,

Val(d) = (1/T) Σ_{i=1}^{n} α_i A_i(d)
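A quick numerical check of this reduction, again using the weighted_owa sketch:

```python
# With Q(r) = r the importance-weighted OWA collapses to the
# ordinary weighted average (1/T) * sum_i alpha_i * A_i(d).
identity = lambda r: r
scores = [0.7, 1.0, 0.5, 0.6]
alphas = [1.0, 0.6, 0.5, 0.9]

owa_val = weighted_owa(scores, alphas, identity)
wavg = sum(a * s for a, s in zip(alphas, scores)) / sum(alphas)
print(abs(owa_val - wavg) < 1e-12)      # True
```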
5. Including Priorities Between Attributes
In the preceding we have described a method for evaluating the overall score of cases based upon a target query ⟨A_1, A_2, ..., A_n : M : Q⟩. In this object the component α_j of the vector M indicates the importance associated with the attribute A_j. Implicit in our
formulation was the idea that the weight α_j was explicitly provided by the user. This is not necessarily required. It is possible for the weight associated with attribute A_j to be determined by some property of the case itself. Thus let B_j be some measurable attribute associated with the case, and let B_j(d) be the degree to which case d satisfies this attribute. Then without any additional complexity we can allow α_j(d) = B_j(d). Thus here the weight associated with attribute A_j depends upon the case itself via the value B_j(d). Thus within this framework we have the option of specifying the importance directly, conditionally, or not at all.
The association of importances with attributes generally conveys some measure of their relative worth and forms the basis for tradeoffs in satisfactions between the different attributes. Consider the situation where Val(d) = (1/T) Σ_{j=1}^{n} α_j A_j(d). Here we see that a gain of Δ in A_j(d) results in an increase in overall satisfaction of (α_j/T)Δ, while a gain of Δ in A_i(d) is worth an increase of (α_i/T)Δ. If α_j = 2 and α_i = 1, then we are willing to trade a gain in satisfaction of Δ in A_j for a loss of less than 2Δ in A_i. In some cases where we desire two attributes, we may not be willing to trade off one for the other. For example, assume we are searching a case base of helicopter designs for those that are both safe and low-cost; however, we are not willing to give up safety for low cost. Such a situation is characterized as one in which a priority exists between the attributes: safety has priority over cost.
In the following we shall suggest a mechanism that allows for the inclusion of priority type effects. Assume A_1 and A_2 are two attributes for which there exists a priority relationship: A_1 has priority over A_2. In order to manifest this relationship, we allow the importance associated with A_2 to be dependent upon the satisfaction of attribute A_1. Here then α_2(d) = A_1(d). Let us investigate this for the simple weighted average. Assuming α_1 is fixed, we get

Val(d) = α_1 A_1(d) + A_1(d) A_2(d) = A_1(d)(α_1 + A_2(d))

Here we see that if A_1(d) is low, the contribution of A_2(d) becomes small and hence it is not possible for a high value of A_2 to compensate.
More generally, consider the quantifier Q and assume A_i has priority over A_j. To implement this priority we make the importance associated with A_j related to the satisfaction of A_i. In particular, we let α_j = α A_i, where α ∈ [0, 1]. Using this we get for the weight w_j associated with A_j

w_j = Q(S_{j−1} + α A_i(d)/T) − Q(S_{j−1})

where

S_{j−1} = (1/T) Σ_{k=1}^{j−1} α_k.
We see that as A_i(d) gets smaller, the value w_j will decrease.
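As a rough sketch of this prioritization mechanism, conditional importances can be treated as a preprocessing step before the weighted OWA aggregation; the function below and its priority convention are our own illustrative framing:

```python
# Priority of attribute i over attribute j, implemented by making the
# importance of j depend on the satisfaction of i: alpha_j := alpha_j * A_i(d).
most = lambda r: r ** 2

def prioritized_val(scores, alphas, Q, prio=(0, 1)):
    i, j = prio
    case_alphas = list(alphas)
    case_alphas[j] = alphas[j] * scores[i]    # conditional importance
    return weighted_owa(scores, case_alphas, Q)

# Safety (A_1) has priority over low cost (A_2): a cheap but unsafe
# design cannot ride on its cost score.
print(prioritized_val([0.2, 0.95], [1.0, 1.0], most))  # ~0.22: low A_1 suppresses A_2
print(prioritized_val([0.9, 0.95], [1.0, 1.0], most))  # ~0.91: both count
```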
6. Concepts and Hierarchies
In the preceding we have considered the problem of case retrieval within the following framework. We have assumed a set of cases D, called the case base, from which we are interested in retrieving the most appropriate cases by determining their satisfaction to a target query. We have associated with this case base a collection of attributes A_i, i = 1 to n. These attributes are characterized by the fact that for any d ∈ D we have available A_i(d) ∈ [0, 1]; the degree of satisfaction of attribute A_i is directly accessible. We now associate with a case base a slightly more general idea which we call a concept. We define a concept as an object such that a valuation of its satisfaction can be obtained for any case in D. Thus if Con is a concept we can obtain Con(d) ∈ [0, 1]. The primitive attributes are examples of concepts; they are special concepts in that their values are directly accessible from the case base.

Consider now a query object of the type we have previously introduced. This is an object of the form ⟨A_1, A_2, ..., A_q : M : Q⟩. As we have indicated, the measure of satisfaction of this object for any d ∈ D can be obtained by our aggregation process. In the light of this observation, we can consider this query object to be a concept, with

Con = ⟨A_1, A_2, ..., A_q : M : Q⟩

then

Con(d) = F_{Q/M}(A_1(d), A_2(d), ..., A_q(d)).

Thus a query object is a concept. A special concept is an individual attribute,

Con = ⟨A_j : M : Q⟩ = A_j,
