Mining Positive and Negative Association Rules
from Large Databases
Chris Cornelis
Dept. Appl. Math. and Comp. Sci.
Ghent University
Ghent,Belgium
Chris.Cornelis@UGent.be
Peng Yan, Xing Zhang, Guoqing Chen
School of Economics and Management
Tsinghua University
Beijing,China
{yanp,zhangx4,chengq}@em.tsinghua.edu
Abstract—This paper is concerned with discovering positive and negative association rules, a problem which has been addressed by various authors from different angles, but for which no fully satisfactory solution has yet been proposed. We catalogue and critically examine the existing definitions and approaches, and we present an Apriori-based algorithm that is able to find all valid positive and negative association rules in a support-confidence framework. Efficiency is guaranteed by exploiting an upward closure property that holds for the support of negative association rules under our definition of validity.
Keywords—association rules, data mining, Apriori
I. INTRODUCTION
The idea of association rule (AR) mining already dates back to Hájek et al [7]. Its application to market basket analysis, the discovery of co-occurrence relationships among purchased items in supermarket transaction databases, gained enormous popularity after its re-introduction by Agrawal et al [2] in the mid-nineties. Apart from algorithmic efficiency and ease of implementation by algorithms like Apriori [12], its appeal is also due to its wide applicability; researchers have successfully exported the AR concept, originally specific to binary data tables, to a multitude of domains, involving quantitative, hierarchical, fuzzy, and many other kinds of databases.
At the heart of all these methods is the ability to predict that, given the presence in a database record ("transaction") of one pattern ("frequent itemset") X, the record is likely to contain another pattern Y as well, concisely summed up by the (positive) rule X ⇒ Y. Recently, the problem of identifying negative associations (or "dissociations"), addressing also the absence of itemsets, has been explored and considered relevant (see e.g. [1], [3], [4], [5], [11], [13], [14], [15], [16], [17]). For instance, a shop manager may like to be informed that "customers who buy an IBM notepad are unlikely to buy a D-link network card".
Mining negative association rules, however, raises a number of critical issues. First of all, in typical supermarket transaction databases, there are thousands of items (wares) and each record (customer transaction) only contains a few of them. For a database containing 10,000 items, such that each customer on average buys 10 of them, the density of the database is 0.1%. From the perspective of negative patterns (indicating absence of items), the density is a soaring 99.9%, leading to explosive amounts of, mostly uninteresting, rules. Secondly, the complexity of AR mining algorithms is exponential in the number of items; if for each item from the database a corresponding negated item (indicating absence of the original item) is considered, the computational costs will skyrocket. As a consequence, the way a valid negative association rule is defined plays an important role. Finally, as we will point out further on, the introduction of negative association rules invalidates some important pruning aids used to restrict the search space and guarantee efficiency in classical algorithms.
Existing approaches have tried to address these problems by incorporating attribute correlations or rule interestingness measures to filter out unimportant rules, or by relying on additional background information concerning the data. As opposed to this, the aim of this paper is to propose a computationally workable approach that stays within the strict bounds of the original support-confidence framework. Such an approach has the advantage of being intuitive to users (no additional parameters required), and may serve as a "skeleton" for the development of more specialized mining algorithms.
The paper is structured as follows: the next section recalls preliminaries about ARs, and compares two alternative ways to define negative ARs. In Section III, existing strategies for mining negative ARs are reviewed. The core of the paper is Section IV, where first we propose a suitable definition of valid negative association rule, and then proceed to spell out, and evaluate, a new Apriori-based algorithm for finding all valid positive and negative association rules. Section V offers a brief conclusion and suggests future work in this direction.
II. POSITIVE AND NEGATIVE ASSOCIATION RULES
A. Support-Confidence Framework of Positive ARs
Let D = {t1, t2, ..., tn} be a relational database of n records (or transactions) with a set of binary attributes (or items) I = {i1, i2, ..., im}. For an itemset X ⊆ I and a transaction t in D, we say that t supports X if t has value 1 for all the attributes in X; for conciseness, we also write X ⊆ t. By D_X we denote the records that contain all attributes in X. The support of X is computed as supp(X) = |D_X| / n, i.e., the fraction of transactions containing X. A (positive) association rule is of the form X ⇒ Y, with X, Y ⊂ I, X ∩ Y = ∅. Support and confidence of X ⇒ Y are defined as supp(X ⇒ Y) = supp(XY) and conf(X ⇒ Y) = supp(XY) / supp(X) respectively, where XY is shorthand for X ∪ Y. A valid positive association rule has support and confidence greater than given thresholds ms and mc [2]. Furthermore, an itemset X is called frequent if supp(X) ≥ ms.
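To make these definitions concrete, the following minimal Python sketch (ours, not from the paper; the toy transaction database is hypothetical) computes support and confidence directly:

```python
# A minimal sketch: support and confidence on a toy binary database.
transactions = [
    {"i1", "i2"},
    {"i1", "i3"},
    {"i1", "i2", "i3"},
    {"i2"},
    {"i1"},
]

def supp(X):
    """supp(X): fraction of transactions t with X ⊆ t."""
    return sum(X <= t for t in transactions) / len(transactions)

def conf(X, Y):
    """conf(X ⇒ Y) = supp(XY) / supp(X)."""
    return supp(X | Y) / supp(X)

print(supp({"i1"} | {"i2"}))  # supp({i1} ⇒ {i2}) = 2/5 = 0.4
print(conf({"i1"}, {"i2"}))   # 0.4 / 0.8 = 0.5
```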
B. Negative Association Rules
What exactly constitutes a negative association rule? In the literature, two formats prevail.
Definition I: in [1], [11], [15], [17], the traditional definition of itemset is maintained (so X, Y ⊂ I), and to each positive rule X ⇒ Y correspond three negative ones: X ⇒ ¬Y, ¬X ⇒ Y and ¬X ⇒ ¬Y. A transaction t supports X ⇒ ¬Y if X ⊆ t and Y ⊄ t. Hence, the meaning of a rule like {i1} ⇒ ¬{i2, i3} is that "the appearance of i1 in a transaction t induces that i2 and i3 are unlikely to appear simultaneously in t"; a record containing i1 and i2, but not i3, therefore supports this rule. It can be verified ([15], [17]) that supp(X ⇒ ¬Y) = supp(X¬Y) = supp(X) − supp(XY) for X, Y ⊂ I, and similarly the supports and confidences of the other kinds of negative ARs can be straightforwardly deduced from the corresponding positive itemset supports.
The total number of positive and negative ARs that can be generated is 4(3^m − 2^(m+1) + 1), of which 3^m − 2^(m+1) + 1, i.e. one fourth, are positive. Although theoretically the complexity of mining both positive and negative ARs is of the same order as that of mining only positive ARs, naive implementations will run into trouble; this is because one of the most important pruning aids in Apriori, namely that supersets of infrequent (positive) itemsets need not be considered as they cannot be frequent (the downward closure property of supp), is no longer valid for negative itemsets. For the latter, a dual upward closure property of supp holds: if supp(¬X) ≥ ms, then, for every Y ⊂ I such that X ∩ Y = ∅, ¬(XY) also meets the support threshold, since supp(XY) ≤ supp(X). As a consequence, all the frequent negative itemsets must be further considered, and many meaningless negative rules with large consequents will arise. In Section IV, we will show how to exploit the upward closure property to formulate an alternative definition of validity for negative ARs and to restrict the search space of our mining algorithm.
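These support identities, and the upward closure property, can be illustrated in a few lines (a sketch reusing the supp helper and toy database from the previous snippet):

```python
# Supports of negative patterns under Definition I derive from
# positive itemset supports alone.
def supp_not(X):
    return 1 - supp(X)                       # supp(¬X)

def supp_X_notY(X, Y):
    return supp(X) - supp(X | Y)             # supp(X¬Y) = supp(X) - supp(XY)

def supp_notX_Y(X, Y):
    return supp(Y) - supp(X | Y)             # supp(¬XY) = supp(Y) - supp(XY)

def supp_notX_notY(X, Y):
    # supp(¬X¬Y) = 1 - supp(X) - supp(Y) + supp(XY) (inclusion-exclusion)
    return 1 - supp(X) - supp(Y) + supp(X | Y)

# Upward closure: supp(XY) <= supp(X) implies supp(¬(XY)) >= supp(¬X),
# so every extension of a frequent negative itemset stays frequent.
ms = 0.3
X, Y = {"i2"}, {"i3"}
assert supp_not(X | Y) >= supp_not(X) >= ms
```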
Definition II: in [3], [4], [16] authors view each item's opposite as a new item which can be inserted into I to derive I′ = I ∪ {¬i1, ¬i2, ..., ¬im}. Since no transaction contains both an item and its opposite, we can restrict our attention to those itemsets that do not contain i_j and ¬i_j simultaneously for j = 1, ..., m. An example of a possible negative rule under this definition is {i1} ⇒ {¬i2, ¬i3}; a transaction t supports it if i1 ∈ t, i2 ∉ t and i3 ∉ t. Hence the meaning of this rule is that "the appearance of i1 in a transaction t induces that neither i2 nor i3 is likely to appear also in t". A record that contains i1 and i2, but not i3, does not support this rule. It was shown in [16] that the support of itemsets in I′ (and hence also support and confidence of negative ARs under Definition II) can be deduced from that of itemsets in I.
Unlike Definition I, this definition allows pruning based on the downward closure property of supp, and hence is easier to interpret and to implement. Unfortunately, the total number of possible rules has now grown to 5^m − 2(3^m) + 1, which is much more than under Definition I; it can also be calculated that the sum of support over all possible rules is 3^m − 2^(m+1) + 1, which means that the average support tends to 0 when m grows large. Practically, these observations make the use of Definition II under the classical support-confidence framework reasonable only in relatively small databases.
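A quick numeric sanity check of these counts (our sketch, using the formulas as stated above) shows how quickly the gap between the two definitions grows:

```python
# Rule counts: Definition I yields 4(3^m - 2^(m+1) + 1) rules in total,
# Definition II yields 5^m - 2*3^m + 1.
for m in (5, 10, 20):
    def1 = 4 * (3**m - 2**(m + 1) + 1)
    def2 = 5**m - 2 * 3**m + 1
    print(m, def1, def2, f"{def2 / def1:.1f}x")
# m = 10 already gives roughly 42x more candidate rules under Definition II.
```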
III. RELATED WORK
Based on the previous discussion, two important (and closely intertwined) problems should be solved in negative AR mining, that is: to keep the number of discovered rules in check and to find efficient algorithms. To reduce the search space, and to improve the quality of the mined rules, it is fruitful to introduce additional measures apart from support and confidence. Most papers consider some kind of correlation between attributes. For instance, Brin et al [4] used χ² tests to detect item dependencies within an itemset. An efficient algorithm for finding correlated itemsets ensued from the upward closure of the χ² statistic (if an itemset X is correlated, then so are all its supersets); this information may be used for finding negative ARs under Definition II. In a similar vein, Aggarwal and Yu [1] evaluated correlations in itemsets by a measure they called "collective strength", and provided an efficient algorithm to identify so-called "strongly collective itemsets" from databases (leading to negative ARs under Definition I). Some disadvantages of these approaches are that (a) they are not in line with (i.e., cannot be put on top of) the support-confidence framework, and (b) afterwards, additional operations are still needed to determine the exact form of the negative rules. On another count, heuristic methods, like the ones proposed by Savasere et al [11] and Yuan et al [17], incorporate domain knowledge, usually a taxonomy, into the algorithms, and focus on only part of the rules to reduce the scale of the problem. In these methods, the expected support or confidence of an itemset (based on the itemsets' positions in the taxonomy) is compared to the actual value of these measures, and the larger the deviation, the more interesting such itemsets are considered to be. While these methods may reveal interesting negative ARs, domain knowledge may often not be readily available, limiting the applicability of such approaches.
On the other hand, Apriori-based implementations are very attractive since they can guarantee efficiency for large scale databases, do not depend on other algorithms and do not require additional information on the data. For example, Wu et al [15] presented an Apriori-based framework for mining both positive and negative ARs (in the sense of Definition I) that focuses on rule dependency measures (rather than on itemset dependency measures), and in particular uses Piatetsky-Shapiro's argument [10] that a rule X ⇒ Y is not interesting if supp(X ⇒ Y) ≈ supp(X)supp(Y). For instance, for the rule X ⇒ ¬Y to be valid, the authors in [15] required interest(X, ¬Y) = |supp(X¬Y) − supp(X)supp(¬Y)| ≥ mi, with mi an additional threshold. Furthermore, X ⇒ ¬Y is only considered a valid negative AR if both X and Y are frequent. Apart from being intuitive (to be relevant, mined patterns should involve frequent itemsets only), this criterion also warrants a significant efficiency boost: if X is infrequent, then no negative ARs with X or any of its supersets in the antecedent or consequent have to be considered. There are however substantial problems with this approach, since the algorithm proposed in [15] cannot generate all valid negative ARs. Without going into the details here, the problem is due to the fact that interest, which is used during algorithm execution for pruning itemsets, does not have a downward closure property like supp.
Another Apriori-based algorithm was given by Antonie and Zaïane [3] for the purpose of simultaneously generating positive ARs and (a subclass of) negative ARs under Definition II, using the Pearson correlation coefficient for measuring interestingness. Their approach suffers from a problem similar to that of [15].
IV. EFFICIENT DISCOVERY OF BOTH POSITIVE AND NEGATIVE ASSOCIATION RULES
The above-described Apriori-based implementations are efficient but cannot generate all valid positive and negative ARs. In this section, we try to solve that problem without paying too high a price in terms of computational costs. In accordance with the observations made in Section II-B, we work with negative ARs under Definition I. For simplicity, we also limit ourselves to support and confidence to determine the validity of ARs.
A. Valid Positive and Negative Association Rules
By a valid AR, we mean any expression C1 ⇒ C2, C1 ∈ {X, ¬X}, C2 ∈ {Y, ¬Y}, X, Y ⊂ I, X ∩ Y = ∅, such that
1) supp(C1 ⇒ C2) ≥ ms
2) supp(X) ≥ ms, supp(Y) ≥ ms
3) conf(C1 ⇒ C2) ≥ mc
4) C1 ⇒ C2 is minimal with respect to the support of its negative itemsets
In 1) it is possible to replace ms by another value (a separate support threshold for negative rules, based on the density of D), but this is not necessary; in general, a larger threshold yields more significant, and hence more interesting, negative rules. Condition 2) is based on the work of Wu et al [15] (see Section III). The distinguishing part of this definition is condition 4); it means that if C1 = ¬X, then there should not exist X′ ⊂ X such that supp(¬X′ ⇒ C2) ≥ ms (analogously for C2). This condition is based on the upward closure property of supp for negative itemsets from Section II-B; it is reasonable to consider only the boundary negative ARs with minimal negative parts, since clearly the non-minimal ones are redundant. Like the downward closure property for positive itemsets, it will be used as an efficient pruning device in our algorithm.
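As an illustration, conditions 1)–4) for a rule of the form ¬X ⇒ Y could be checked as follows (a hedged sketch reusing the support helpers from the Section II snippets; the function names are ours, not the paper's pseudocode):

```python
from itertools import combinations

def proper_nonempty_subsets(X):
    items = list(X)
    for r in range(1, len(items)):
        for sub in combinations(items, r):
            yield set(sub)

def valid_notX_Y(X, Y, ms, mc):
    """Check conditions 1)-4) for the negative rule ¬X ⇒ Y (ms > 0)."""
    s = supp_notX_Y(X, Y)                    # supp(¬XY) = supp(Y) - supp(XY)
    if s < ms:                               # 1) rule support
        return False
    if supp(X) < ms or supp(Y) < ms:         # 2) both parts frequent
        return False
    if s / (1 - supp(X)) < mc:               # 3) conf = supp(¬XY)/supp(¬X);
        return False                         #    1 - supp(X) >= s >= ms > 0 here
    for Xp in proper_nonempty_subsets(X):    # 4) minimal negative part:
        if supp_notX_Y(Xp, Y) >= ms:         #    no X' ⊂ X with supp(¬X'Y) >= ms
            return False
    return True
```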
Our algorithm for mining both positive and negative valid ARs (PNAR) is built up conceptually around a partition of the itemset space into 4 parts, as shown in Fig. 1. Each part is then dealt with separately. Before spelling out the steps of the algorithm, we introduce the relevant notations.
Fig. 1. Partition of the itemset space into four parts.
B. Notations
X, Y: positive itemsets;
I, I′: itemsets, which can be positive or negative;
L(Pi): frequent itemsets in Part i;
L(Pi)k: k-length frequent itemsets in Part i;
L(P4)k,p: in Part 4, frequent itemsets with k-length negative part and p-length positive part;
C(Pi): candidate itemsets in Part i (analogously: C(Pi)k, C(P4)k,p);
S(Pi): positive itemsets whose support needs to be calculated via DB scan in Part i; if i = 1, S(P1) = C(P1) (analogously: S(Pi)k, S(P4)k,p);
S: the set of positive itemsets whose supports are known; this includes L(P1), C(P1) and the S(Pi) after DB scan;
for an itemset I = ¬XY, I.negative = X and I.positive = Y.
C. Main Procedure of PNAR
1: Generate all positive frequent itemsets L(P1), and put S = C(P1)
2: For all itemsets I in L(P1), insert ¬I into L(P2) if 1 − supp(I) ≥ ms
3: Generate all frequent negative itemsets L(P3) (Part 3)
4: Generate all frequent negative itemsets L(P4) (Part 4)
5: Generate all valid positive and negative ARs
Line 1 is the same as in classical AR mining, and can be implemented by Apriori, but also by other algorithms like Frequent-Pattern Tree [6], Close [9], Charm [18], etc. The supports in Line 2 can be immediately derived after we get all frequent positive itemsets, since supp(¬I) = 1 − supp(I) for any itemset I. For Lines 3 and 4, we can easily calculate the support of an itemset ¬XY (or ¬X¬Y, X¬Y) in Part 3 and Part 4 if we know the supports of X, Y and XY. Since antecedent and consequent of a negative rule must be frequent by our definition, for all such potential valid negative ARs, supp(X) and supp(Y) are known. The computational efficiency of the algorithm depends highly on the number of itemsets I = XY that need further checking via database scan. In the following two subroutines, the upward closure property of supp is used to reduce that number.
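The support bookkeeping behind Lines 2–4 can be sketched as follows (reusing the hypothetical helpers from the Section II snippets; the toy L(P1) is ours and assumes itemsets are stored as frozensets):

```python
# Sketch of the support bookkeeping of Lines 2-4.
L1 = {frozenset({"i1"}), frozenset({"i2"}), frozenset({"i1", "i2"})}  # toy L(P1)
ms = 0.3

# Line 2: supp(¬I) = 1 - supp(I), so L(P2) requires no database scan.
L2 = {I for I in L1 if 1 - supp(I) >= ms}

# Lines 3-4: the supports of ¬X¬Y, ¬XY and X¬Y follow from supp(X),
# supp(Y) and supp(XY); only unknown supp(XY) values need a DB scan.
X, Y = frozenset({"i1"}), frozenset({"i2"})
print(supp_notX_notY(X, Y), supp_notX_Y(X, Y), supp_X_notY(X, Y))
```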
D. Generating Frequent Itemsets in P3
1: L(P3) = Ø, C(P3) = Ø, S(P3) = Ø
2: C(P3)2 = {¬{i1}¬{i2} | i1, i2 ∈ L(P1)1, i1 ≠ i2}
3: for {k = 2; C(P3)k ≠ Ø; k++} do
4:   for all I = ¬X¬Y ∈ C(P3)k do
5:     if supp(I) ≥ ms then
6:       insert I into L(P3)k
7:     else
8:       for all i ∉ XY do
9:         Cand = check_candidates(I, i)
10:        // Cand ⊆ {¬(X{i})¬Y, ¬X¬(Y{i})}
11:        C(P3)k+1 = C(P3)k+1 ∪ Cand
12:        if Cand ≠ Ø, XY{i} ∉ S and ¬(∃I′ ⊂ XY{i})(supp(I′) = 0) then
13:          insert XY{i} into S(P3)k+1
14:        end if
15:      end for
16:    end if
17:  end for
18:  compute support of itemsets in S(P3)k+1
19:  S = S ∪ S(P3)k+1
20: end for
We start the algorithm from negative itemsets with 2 items, and then add items to each of the negative parts. We discuss some of the steps in detail:
• Line 3: the algorithm terminates when no candidate itemsets are generated.
• Line 5: if supp(I) = supp(¬X¬Y) ≥ ms, then because of the minimality constraint in the definition of negative ARs, no item will be added to the ¬X or ¬Y part, because obviously I′ = ¬(X ∪ {i})¬Y and I′ = ¬X¬(Y ∪ {i}) will be frequent (upward closure property of supp). It should be noted that it is not necessary to scan D to calculate the support of 2-length itemsets in C(P3)2, because all related positive itemsets must be in C(P1)2.
• Lines 8–9: for every item i not yet in X or Y, the function check_candidates generates the candidate itemsets ¬(X{i})¬Y and ¬X¬(Y{i}) and then checks whether they can be pruned. For instance, for I′ = ¬(X{i})¬Y, it should hold that X{i} is frequent, i.e. X{i} ∈ L(P1); moreover, if there exists ¬X′¬Y′ in L(P3) such that X′ ⊆ X{i} and Y′ ⊆ Y, then I′ need not be considered, according to the upward closure property.
• Lines 12–13: supp(XY{i}) has to be checked if it has not yet been computed and if none of its subsets is known to have support 0 (in which case supp(XY{i}) = 0 by the downward closure property of supp).
• Line 18: this step needs a DB scan.
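Under our reading of Lines 8–11, the pruning inside check_candidates can be paraphrased like this (a sketch; representing ¬X¬Y as a pair of frozensets is our assumption, and for simplicity the symmetry ¬X¬Y = ¬Y¬X is ignored):

```python
# Sketch of check_candidates for I = ¬X¬Y and an item i ∉ XY. A candidate
# survives only if its parts are frequent positive itemsets and no itemset
# already in L(P3) makes it redundant by the upward closure property.
# L1 is a set of frozensets; L3 a set of (frozenset, frozenset) pairs.
def check_candidates(X, Y, i, L1, L3):
    cands = []
    for Xc, Yc in ((X | {i}, Y), (X, Y | {i})):   # ¬(X{i})¬Y and ¬X¬(Y{i})
        if Xc not in L1 or Yc not in L1:          # parts must be in L(P1)
            continue
        if any(Xp <= Xc and Yp <= Yc for (Xp, Yp) in L3):
            continue                              # a smaller frequent negative
        cands.append((Xc, Yc))                    # itemset already covers it
    return cands
```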
E. Generating Frequent Itemsets in P4
For the itemsets with both a positive part and a negative part, we first generate frequent itemsets with one negative item, then two negative items, etc. Both the downward and upward closure properties are used as pruning devices.
1: L(P4) = Ø, C(P4) = Ø, S(P4) = Ø
2: C(P4)1,1 = {¬{i1}{i2} | i1, i2 ∈ L(P1)1, i1 ≠ i2}
3: for {k = 1; C(P4)k,1 ≠ Ø; k++} do
4:   for {p = 1; C(P4)k,p ≠ Ø; p++} do
5:     for all I ∈ C(P4)k,p do
6:       if supp(I) ≥ ms then
7:         insert I into L(P4)k,p
8:       end if
9:     end for
10:    for all joinable I1, I2 ∈ L(P4)k,p do
11:      X = I1.negative, Y = I1.positive ∪ I2.positive
12:      I = ¬XY
13:      if ¬(∃X′ ⊂ X)(supp(¬X′Y) ≥ ms) and ¬(∃Y′ ⊂ Y)(supp(¬XY′) < ms) then
14:        insert I into C(P4)k,p+1
15:        if XY ∉ S and ¬(∃I′ ⊂ XY)(supp(I′) = 0) then
16:          insert XY into S(P4)k,p+1
17:        end if
18:      end if
19:    end for
20:    compute support of itemsets in S(P4)k,p+1
21:    S = S ∪ S(P4)k,p+1
22:  end for
23:  for all X ∈ L(P1)k+1, i ∈ L(P1)1 do
24:    if ¬(∃X′ ⊂ X)(¬X′{i} ∈ L(P4)) then
25:      C(P4)k+1,1 = C(P4)k+1,1 ∪ {¬X{i}}
26:    end if
27:  end for
28: end for
Below are some comments pertaining to the above pseudocode:
• Lines 10–12 correspond to the join operation in classical AR mining. I1 and I2 are joinable if I1.negative = I2.negative, I1.positive and I2.positive share the same p − 1 items, and I1.positive ∪ I2.positive ∈ L(P1)p+1. For example, (¬X)(Y{i}) and (¬X)(Y{j}) generate (¬X)(Y{i, j}); a code sketch of this test follows below.
• Lines 13–14 exploit the upward and downward closure properties of negative itemsets. For example, for a candidate itemset ¬({i1, i2}){i3, i4}, it is required that ¬({i1}){i3, i4} and ¬({i2}){i3, i4} are not frequent, and that ¬({i1, i2}){i3} and ¬({i1, i2}){i4} are frequent. Note that due to the structure of the algorithm, the supports of ¬X′Y and ¬XY′ are known.
• Line 20 is the only place where a DB scan occurs.
• Lines 23–27 generate candidate itemsets with (k+1)-length negative and 1-length positive parts. Candidates are filtered by the condition on Line 24.
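The joinable test of Lines 10–12 admits a compact sketch (identifiers ours; an itemset ¬XY is represented as a (negative, positive) pair of frozensets):

```python
# Sketch of the join condition: equal negative parts, positive parts that
# share all but one item, and a merged positive part that is frequent.
def joinable(I1, I2, L1):
    X1, Y1 = I1
    X2, Y2 = I2
    return (X1 == X2                         # same negative part
            and len(Y1 & Y2) == len(Y1) - 1  # positive parts share p-1 items
            and (Y1 | Y2) in L1)             # Y1 ∪ Y2 ∈ L(P1)p+1
```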
F. Generating Rules
Rule generation, then, is straightforward. According to their particular form, we distinguish four classes: R1 (X ⇒ Y), R2 (¬X ⇒ ¬Y), R3 (X ⇒ ¬Y) and R4 (¬X ⇒ Y).
1: generate all positive association rules in R1
2: for all I = ¬X¬Y ∈ L(P3) do
3:   if conf(¬X ⇒ ¬Y) ≥ mc then
4:     insert ¬X ⇒ ¬Y into R2
5:   end if
6: end for
7: for all I = ¬XY ∈ L(P4) do
8:   if conf(X ⇒ ¬Y) ≥ mc then
9:     insert X ⇒ ¬Y into R3
10:  end if
11:  if conf(¬X ⇒ Y) ≥ mc then
12:    insert ¬X ⇒ Y into R4
13:  end if
14: end for
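Since every confidence reduces to positive supports, namely conf(¬X ⇒ ¬Y) = supp(¬X¬Y)/(1 − supp(X)), conf(¬X ⇒ Y) = supp(¬XY)/(1 − supp(X)) and conf(Y ⇒ ¬X) = supp(¬XY)/supp(Y), this pass can be sketched as follows (helper names from our earlier snippets; note that from an itemset ¬XY we read off ¬X ⇒ Y and the R3-class rule Y ⇒ ¬X, a slight rearrangement of the pseudocode above):

```python
# Sketch of the negative-rule generation pass. L3 holds the pairs (X, Y)
# of the two negated parts of ¬X¬Y; L4 holds (negative X, positive Y) of
# ¬XY. The divisors are > 0 for itemsets that reached L(P3)/L(P4).
def generate_negative_rules(L3, L4, mc):
    R2, R3, R4 = [], [], []
    for X, Y in L3:                                      # frequent ¬X¬Y
        if supp_notX_notY(X, Y) / (1 - supp(X)) >= mc:
            R2.append((X, Y))                            # ¬X ⇒ ¬Y
    for X, Y in L4:                                      # frequent ¬XY
        s = supp_notX_Y(X, Y)
        if s / (1 - supp(X)) >= mc:
            R4.append((X, Y))                            # ¬X ⇒ Y
        if s / supp(Y) >= mc:
            R3.append((Y, X))                            # Y ⇒ ¬X (class R3)
    return R2, R3, R4
```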
It can be verified (detailed proof omitted) that PNAR finds all valid positive and negative ARs.
G. Experimental Results
To test the efficiency of PNAR, we have applied it to synthetic data generated according to the supermarket basket data generation algorithm described in [12], with changing values for ms, n (number of transactions), m (number of items) and |T| (average transaction size). The default values for the parameters are ms = 0.01, n = 1000K, m = 1000, |T| = 10. Moreover, mc = 0.7, and the average size of maximal frequent itemsets (i.e., frequent itemsets without frequent supersets) is taken to be 4.
We have compared PNAR with a naive, straightforward algorithm called S-PNAR (simple PNAR). S-PNAR mines all valid positive and negative ARs by the following steps: (1) find all frequent itemsets L(P1), (2) for X and Y in L(P1), compute supp(XY) if it is not yet known, (3) check the validity of all rules X ⇒ Y, ¬X ⇒ ¬Y, X ⇒ ¬Y and ¬X ⇒ Y. The results are shown in Fig. 2. It is clear from the graphs that PNAR has good scalability for the different parameters, and that the efficiency gain thanks to the proposed optimizations is significant.
Furthermore, note that since PNAR needs additional passes over the database to derive all required supports, it is obviously more time-consuming than Apriori. However, Apriori only finds positive ARs; moreover, experiments have revealed that the execution time of PNAR is never greater than 3.5 times that of Apriori. This is especially remarkable since, as we noted in Section II-B, the number of potentially valid positive and negative ARs is four times larger than that of positive ARs only. Finally, we note that a comparison with the algorithms in [3] and [15] is not possible, since the latter do not find all valid positive and negative ARs.
V. CONCLUDING REMARKS AND FUTURE WORK
Negative AR mining is a highly relevant, yet difficult problem in data mining which has recently caught many researchers' interest. This paper has contributed to this emerging topic by cataloguing, and identifying problems with, existing negative AR definitions and mining approaches, and by proposing a new Apriori-based algorithm (PNAR) that exploits the upward closure property of negative association rules under Definition I. Compared to a naive implementation, PNAR is very efficient in finding all positive and negative ARs within a reasonable time frame. The present algorithm does not make use of interestingness measures; for future work, we plan to incorporate them into our implementation, in order to improve the quality and usefulness of the mined negative association rules, and to further reduce the computational workload. Among the relevant applications of this work, we envisage PNAR-based classifiers that generalize the popular CBA (Classification Based on Association rules [8]) tools, as well as mining for negative sequential patterns such as "customers who bought shampoo are unlikely to buy it again within the next month".
Fig. 2. Comparison between PNAR and S-PNAR for changing values of |T|, n, ms and m.
ACKNOWLEDGMENT
This work was partly supported by the National Natural Science Foundation of China (70231010/70321001) and the Bilateral Scientific and Technological Cooperation Between China and Flanders (174B0201). Chris Cornelis would like to thank the Research Foundation – Flanders for funding his research.
REFERENCES
[1] C. C. Aggarwal and P. S. Yu, "A New Framework for Itemset Generation", Proc. ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, 1998, pp. 18–24.
[2] R. Agrawal, T. Imielinski and A. Swami, "Mining Association Rules between Sets of Items in Large Databases", Proc. ACM SIGMOD Intl. Conf. on Management of Data, 1993, pp. 207–216.
[3] M. L. Antonie and O. R. Zaïane, "Mining Positive and Negative Association Rules: an Approach for Confined Rules", Proc. European Conf. on Principles and Practice of Knowledge Discovery in Databases, 2004, pp. 27–38.
[4] S. Brin, R. Motwani and C. Silverstein, "Beyond Market Baskets: Generalizing Association Rules to Correlations", Proc. ACM SIGMOD Intl. Conf. on Management of Data, 1997, pp. 265–276.
[5] O. Daly and D. Taniar, "Exception Rules Mining Based On Negative Association Rules", Lecture Notes in Computer Science, Vol. 3046, 2004, pp. 543–552.
[6] J. Han, J. Pei, Y. Yin and R. Mao, "Mining Frequent Patterns without Candidate Generation: a Frequent-Pattern Tree Approach", Data Mining and Knowledge Discovery, Vol. 8, 2004, pp. 53–87.
[7] P. Hájek, I. Havel and M. Chytil, "The GUHA Method of Automatic Hypotheses Determination", Computing, Vol. 1, 1966, pp. 293–308.
[8] B. Liu, W. Hsu and Y. Ma, "Integrating Classification and Association Rule Mining", Proc. Fourth Intl. Conference on Knowledge Discovery and Data Mining, New York, NY, 1998, pp. 80–86.
[9] N. Pasquier, Y. Bastide, R. Taouil and L. Lakhal, "Efficient Mining of Association Rules Using Closed Itemset Lattices", Information Systems, Vol. 24(1), 1999, pp. 25–46.
[10] G. Piatetsky-Shapiro, "Discovery, Analysis and Presentation of Strong Rules", Knowledge Discovery in Databases, AAAI/MIT, Menlo Park, CA, 1991, pp. 229–248.
[11] A. Savasere, E. Omiecinski and S. Navathe, "Mining for Strong Negative Associations in a Large Database of Customer Transactions", Proc. Intl. Conf. on Data Engineering, 1998, pp. 494–502.
[12] R. Srikant and R. Agrawal, "Fast Algorithms for Mining Association Rules", Proc. VLDB Conference, 1994, pp. 487–499.
[13] D. R. Thiruvady and G. I. Webb, "Mining Negative Association Rules Using GRD", Proc. Pacific-Asia Conf. on Advances in Knowledge Discovery and Data Mining, 2004, pp. 161–165.
[14] Q. Wei and G. Chen, "Association Rules with Opposite Items in Large Categorical Database", Proc. Intl. Conf. on Flexible Query Answering Systems, 2000, pp. 507–514.
[15] X. Wu, C. Zhang and S. Zhang, "Efficient Mining of Both Positive and Negative Association Rules", ACM Transactions on Information Systems, Vol. 22(3), 2004, pp. 381–405.
[16] P. Yan, G. Chen, C. Cornelis, M. De Cock and E. E. Kerre, "Mining Positive and Negative Fuzzy Association Rules", Lecture Notes in Computer Science, Vol. 3213, 2004, pp. 270–276.
[17] X. Yuan, B. P. Buckles, Z. Yuan and J. Zhang, "Mining Negative Association Rules", Proc. Seventh Intl. Symposium on Computers and Communications, Italy, 2002, pp. 623–629.
[18] M. J. Zaki and C. J. Hsiao, "CHARM: an Efficient Algorithm for Closed Itemset Mining", Proc. Second SIAM Intl. Conf. on Data Mining, Arlington, VA, 2002, pp. 12–28.