
On a New General Class of 3π-Uninorms

Kalle Saastamoinen

Lappeenranta University of Technology
Department of Information Technology

P.O. Box 20, 53851 Lappeenranta, Finland
[email protected]

Abstract— In this article we present a new class of uninorms, starting from the 3π-uninorm. We prove that this new class, the parameterized 3π-uninorm, still satisfies all the properties of uninorms. We also aggregate this new measure with the generalized mean and weights. We add this comparison measure to a typical instance-based classification procedure and show that it gives more flexibility in the results achieved than the simple 3π-uninorm does. A comparison with some known classification results from different classifiers is also included in the practical part of this article.

I. INTRODUCTION

Uninorms are logical structures very closely related to the operators called t-norms and t-conorms [1]. Uninorms were presented in 1996 [2] by Ronald Yager. The operators called t-norms and t-conorms are generally accepted as the equivalents of many-valued, or fuzzy, intersections and unions. The only difference between t-norms and t-conorms on the one hand and uninorms on the other is that uninorms have an identity element e that may lie anywhere in the unit interval: e = 1 gives a t-norm, while e = 0 gives a t-conorm. For this reason uninorms are normally considered generalizations of fuzzy intersections and unions. Fuzzy intersections and unions can in turn be seen as generalizations of two-valued intersections and unions, and they logically correspond to the words and and or. As uninorms are generalizations of many-valued intersections and unions, they can likewise be seen as generalizations of the words and and or. Since the identity element can lie anywhere in the unit interval, uninorms sit between intersections and unions.

In this article we present a new class of representable uninorms, first presented in [3]. This class can be considered a generalization of the 3π-uninorm, which was first presented by József Dombi in [4].

II. PARAMETERIZED 3π-UNINORM WITH GENERALIZED MEAN

Definition 1: A mapping U : [0, 1]² → [0, 1] is called a uninorm if it is a commutative, increasing and associative operator that satisfies

(∃e ∈ [0, 1]) (∀x ∈ [0, 1]) (U (e, x) = x) (1)

The element e is unique and is called the identity or neutral element of U.

The case e = 1 leads back to t-norms and the case e = 0 leads back to t-conorms.
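To make Definition 1 concrete, the following minimal Python sketch (our own illustration, not part of the original paper; the helper name is ours) spot-checks the uninorm axioms on a finite grid. min, with e = 1, serves as the t-norm case mentioned above.

```python
import itertools

def check_uninorm_axioms(U, e, grid=None, tol=1e-9):
    """Spot-check the axioms of Definition 1 on a finite grid:
    commutativity, neutrality of e, associativity and monotonicity."""
    if grid is None:
        grid = [i / 10 for i in range(11)]
    for x, y in itertools.product(grid, repeat=2):
        assert abs(U(x, y) - U(y, x)) < tol              # commutative
        assert abs(U(e, x) - x) < tol                    # e is neutral
    for x, y, z in itertools.product(grid, repeat=3):
        assert abs(U(U(x, y), z) - U(x, U(y, z))) < tol  # associative
        if y <= z:
            assert U(x, y) <= U(x, z) + tol              # increasing
    return True

# min is the standard example of a t-norm, i.e. a uninorm with e = 1:
print(check_uninorm_axioms(min, 1.0))  # True
```

A passing grid check is of course only evidence for, not a proof of, the axioms.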

Definition 2: The representation theorem for uninorms can be stated as follows:

U (x1, x2) = h−1 (h (x1) + h (x2)) (2)

x1, x2 ∈ [0, 1], where h : [0, 1] → R is a continuous, strictly increasing function.

In 1982 Dombi [4] presented his class of rational aggregative operators, which is the only class of uninorms that can be represented by the representation theorem [5].

Definition 3: The multiplicative form of the representation theorem for uninorms is obtained by setting h to be an additive generator of the uninorm U and Ψ(x) = exp h(x), which is a strictly increasing continuous function from [0, 1] to [0, ∞], such that

U (x1, x2) = h−1 (h (x1) + h (x2)) = Ψ−1 (Ψ (x1) Ψ (x2)) (3)

holds.

Definition 4: If we set Ψ(x^p) = x^p/(1 − x^p), we get Ψ−1(x^p) = x^p/(1 + x^p), from which follows

U(x_1^p, x_2^p) =
\begin{cases}
0, & \text{if } x_1 = 0 \text{ and } x_2 = 1 \text{ or } x_1 = 1 \text{ and } x_2 = 0, \\
\dfrac{x_1^p x_2^p}{x_1^p x_2^p + (1 - x_1^p)(1 - x_2^p)}, & \text{otherwise.}
\end{cases} \tag{4}

This is our parameterized form of the 3π-uninorm, with p > 0 and neutral element e = (1/2)^{1/p}.
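As a sanity check of Definition 4, here is a small Python sketch (ours, not from the paper) that evaluates equation (4) directly and, for p = 1, through the multiplicative representation (3) with Ψ(x) = x/(1 − x); both routes should agree on the open unit interval.

```python
def u3pi(x1, x2, p=1.0):
    """Parameterized 3pi-uninorm of equation (4); p > 0.
    The boundary pairs (0, 1) and (1, 0) are mapped to 0."""
    if (x1, x2) in ((0.0, 1.0), (1.0, 0.0)):
        return 0.0
    a, b = x1 ** p, x2 ** p
    return a * b / (a * b + (1 - a) * (1 - b))

def u3pi_via_psi(x1, x2):
    """Same value for p = 1 via representation (3): U = Psi^-1(Psi(x1) Psi(x2)),
    with Psi(x) = x / (1 - x); valid for x1, x2 in the open interval (0, 1)."""
    psi = lambda x: x / (1 - x)
    y = psi(x1) * psi(x2)
    return y / (1 + y)

print(u3pi(0.3, 0.7), u3pi_via_psi(0.3, 0.7))  # both print 0.5
print(u3pi(0.5, 0.42))  # 0.42: e = 0.5 is the neutral element when p = 1
```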

In the next equation we have two vectors x1, x2 ∈ [0, 1]ⁿ whose elements present properties of two objects. These objects are to be compared by the 3π-uninorm with the generalized mean. The parameters of the measure are the mean value m ∈ R\{0}, the weights w = (w1, . . . , wn) and the parameter value p.

Theorem 5: The weighted generalized mean compensated form of the parameterized 3π-uninorm is a function U3π : ([0, 1]ⁿ)² → [0, 1] defined by

U_{3\pi}(x_1^p, x_2^p; m, w) =
\begin{cases}
0, & \text{if } x_1 = 0 \text{ and } x_2 = 1 \text{ or } x_1 = 1 \text{ and } x_2 = 0, \\
\left( \sum_{i=1}^{n} w_i \left( \dfrac{x_1^p(i)\, x_2^p(i)}{x_1^p(i)\, x_2^p(i) + (1 - x_1^p(i))(1 - x_2^p(i))} \right)^{m} \right)^{1/m}, & \text{otherwise,}
\end{cases} \tag{5}


where p > 0, 0 ≤ wi ≤ 1 and \frac{1}{n}\sum_{i=1}^{n} w_i = 1. In fact, the way the weights are normalized is not important in classification, since it affects only the range of the measure, not the order of its values.

Proof: Since the generalized mean is a continuous and monotonic operator when m ∈ R \ {0}, all uninorm properties stay intact.

We see that the measure suggested in equation (5) has some additional parameters, namely p, m and w. These additional parameters give the comparison measure more freedom to find the best results.
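A minimal Python sketch of the measure in equation (5) may clarify how the parameters interact. This is our own illustration: uniform weights are used as a default, and the 0/0 boundary case of the equation is mapped to 0, as the equation prescribes.

```python
import numpy as np

def u3pi_vec(x1, x2, p=1.0, m=1.0, w=None):
    """Weighted generalized-mean form of the parameterized 3pi-uninorm,
    equation (5); x1, x2 in [0, 1]^n, p > 0, m != 0."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    if w is None:
        w = np.full(x1.size, 1.0 / x1.size)   # uniform weights by default
    a, b = x1 ** p, x2 ** p
    num = a * b
    den = num + (1.0 - a) * (1.0 - b)
    # Boundary case of equation (5): components with x1 = 0, x2 = 1 (or
    # vice versa) make den = 0; following the equation, they map to 0.
    s = np.where(den == 0.0, 0.0, num / np.where(den == 0.0, 1.0, den))
    return float(np.sum(w * s ** m) ** (1.0 / m))

# p and m both change how strictly the component values must agree:
print(u3pi_vec([0.2, 0.8, 0.5], [0.3, 0.7, 0.5], p=1.0, m=1.0))
print(u3pi_vec([0.2, 0.8, 0.5], [0.3, 0.7, 0.5], p=2.0, m=0.5))
```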

III. CLASSIFICATION

Often we are given a set of data already grouped into classes, and the problem is then to predict to which class each new data item belongs; this is normally referred to as a classification problem. The first set of data is normally referred to as the training set, while the new set of data is referred to as the test set [6]. Classification is here seen as a comparison between the training set and the test set.

In this section we show how well the comparison measures presented in this article manage the classification tasks, compared to the results reported in [7].

A. Data sets

We tested our measures with three different data sets, which are available from [8]. The data sets chosen for the test were Pima Indians diabetes, Post-operative and Wine. These learning sets differ greatly in the number of instances and in the number of predictive attribute values. We tested the stability of the classification results with our measures for different parameter values and optimized weight values.

• Pima, row 1: The diagnostic, binary-valued variable investigated is whether the patient shows signs of diabetes. All instances are females at least 21 years old of Pima Indian heritage. The number of instances is 768. The number of attributes is 8 plus the class. Class 1 (negative for diabetes) has 500 instances, class 2 (positive for diabetes) 268.

• Post Operative, row 2: The task of this database is to determine where patients in a postoperative recovery area should be sent next. The attributes correspond roughly to body temperature measurements. The number of instances is 90. The number of attributes is 9, including the decision (class) attribute. Attribute 8 has 3 missing values; in this study the missing values have been replaced by the average of the attribute 8 values.

• Wine, row 3: The data is the result of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines. The number and distribution of instances: class 1 has 59, class 2 has 71 and class 3 has 48.

B. Description of the classifier

We want to classify objects, each characterized by a feature vector in [0, 1]ⁿ, into different classes. The assumption that the vectors belong to [0, 1]ⁿ is not restrictive, since an appropriate shift and normalization can be done for any space [a, b]ⁿ. The equation U3π above can be used to compare objects to classes.
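The shift and normalization mentioned above can be done, for example, with componentwise min-max scaling; the following one-function Python sketch (our illustration, not the paper's procedure) assumes no attribute is constant over the data.

```python
import numpy as np

def to_unit_interval(X):
    """Scale each attribute (column) of X from its observed range [a, b]
    to [0, 1]; assumes every column has b > a."""
    X = np.asarray(X, float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo)

print(to_unit_interval([[1.0, 10.0], [3.0, 30.0], [2.0, 20.0]]))
```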

In the algorithm below we denote the feature vectors by um, m = 1, . . . , M, i.e. we have M objects to be classified. We also denote the number of different classes by L. For simplicity, we assume that the measure used is U3π; the algorithm is similar for all measures of this kind. We also prefer the classifier to be fast after the right parameters have been chosen, i.e. U3π should not be calculated too many times after the values for p, m and wi have been fixed.

The general classification procedure that we have used can be described as follows:

Step 1: Choose the m and p used with U3π.

Step 2: Choose ideal vectors fl that represent the classes as well as possible. These ideal vectors can either be given by expert knowledge or be calculated in some way from the training set. We have calculated one ideal vector fl for each class l by using the generalized mean, i.e.

f_l(i) = \left( \frac{1}{n_l} \sum_{j=1}^{n_l} \left( v_{j,l}(i) \right)^{m} \right)^{1/m} \quad \forall i \in \{1, \dots, n\}, \tag{6}

where the vectors vj,l are known to belong to class l and nl is the number of those vectors (a small sketch of this computation is given after this list).

Step 3: Choose values for the weights wi. For example, evolutionary algorithms can be used when training data is available. Of course we have some training data if we can calculate the generalized means in step 2, but here we used random weights, since we only wanted to test the stability of our measures.

Step 4: Compare each feature vector um to each ideal vector fl, i.e. calculate U3π(um, fl; m, p, w) for all m ∈ {1, . . . , M} and l = 1, . . . , L.

Step 5: Decide that the feature vector um belongs to the class k for which U3π(um, fk; m, p, w) = max{U3π(um, fl; m, p, w) | l = 1, . . . , L}.
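The ideal vector computation of Step 2, equation (6), reduces to a componentwise generalized mean; below is the small Python sketch promised above (our own illustration, with m ≠ 0 assumed and m = 1 as the default).

```python
import numpy as np

def ideal_vector(V, m=1.0):
    """Ideal vector f_l of equation (6): the componentwise generalized mean
    of the n_l training vectors (rows of V) belonging to class l; m != 0."""
    V = np.asarray(V, float)
    return np.mean(V ** m, axis=0) ** (1.0 / m)

# Three training vectors of one class, arithmetic mean (m = 1):
print(ideal_vector([[0.2, 0.9], [0.4, 0.8], [0.3, 1.0]]))  # [0.3 0.9]
```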

The classifier used in this comparison task is presented more precisely in [9]. The advantage of using this classifier here is that its results depend mainly on which comparison measure we choose to use. One should also note that this classification procedure is iterative and that the classification result is highly dependent on the parameter values chosen.
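Steps 4 and 5 amount to an argmax over classes for each object. A minimal Python sketch follows, where `similarity` stands for any measure of the U3π kind with p, m and w already fixed; the toy similarity in the example is only a placeholder, not the paper's measure.

```python
def classify(features, ideals, similarity):
    """Steps 4-5: compare every feature vector to every ideal vector and
    assign the class whose ideal vector gives the largest measure value."""
    return [max(range(len(ideals)), key=lambda l: similarity(u, ideals[l]))
            for u in features]

# Toy similarity: 1 minus the mean absolute componentwise difference.
sim = lambda a, b: 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)
ideals = [[0.2, 0.8], [0.9, 0.1]]
print(classify([[0.25, 0.7], [0.8, 0.2]], ideals, sim))  # [0, 1]
```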

1) Short description of other classifiers: The following are very short descriptions of the classifiers that we use for comparison with our classification results; much better descriptions can be found in [7].
• C4.5 is a popular decision tree classifier.
• ITI is described as an incremental decision tree induction classifier.
• LMDT is described as a linear machine decision tree classifier.
• CN2 is a rule-based inductive learning system classifier.
• LVQ is a learning vector quantization based classifier.
• OC1 is called the Oblique Classifier and is a system for the induction of oblique decision trees.


In the following sections we show tables of maximum and average classification results and some figures, and briefly discuss the results.

IV. RESULTS

In Table I we have compared our average classification results to the results available in the report [7]. The results are represented as percentages of correctly classified data. The best average results are marked in bold.

The data sets are by rows as follows: 1 = Pima Indian, 2 = Post Operative and 3 = Wine. The fourth row shows the average result over the classifications. If the corresponding classifier has been unable to give classification results, its result has been counted as zero.

From Table II we can see which parameter values have given the best classification results and which parameter values have given the best mean classification results.

From figures 1, 2 and 3 we can see the classification results with the corresponding variances.

TABLE I
COMPARISON OF AVERAGE CLASSIFICATION RESULTS VS. SELECTED METHODS

#   3π      C4.5    ITI     LMDT    CN2     LVQ     OC1
1   70.40   71.02   73.16   73.51   72.19   71.28   50.00
2   77.24   62.57   59.48   66.88   57.91   x       x
3   83.78   91.09   91.09   95.40   91.09   68.90   87.31
4   77.14   74.89   74.58   78.60   73.73   46.73   45.77

TABLE II
MAXIMUM AND MAXIMUM MEAN VALUES WITH CORRESPONDING PARAMETERS

#   Maximum   p-value   m-value   Mean    p-value   m-value
1   78.39     0.81      0.61      73.41   1.41      0.61
2   82.22     1.41      0.01      71.11   2.81      0.01
3   97.78     7.21      0.41      84.44   6.41      0.41

V. CONCLUSIONS

In the theoretical part of this paper we presented a new class of uninorms, which we named the parameterized 3π-uninorm. This new uninorm comparison measure is more flexible than the ordinary 3π-uninorm, since it has an extra parameter. To gain still more flexibility, we aggregated this new comparison measure with the generalized mean and weights.

For the practical test we used a classification task, where a simple instance-based classification procedure was used to test our measure. For weight optimization we used differential evolution. We showed that the results achieved by using this simple comparison measure (5) (see Table I) were mostly better in the chosen classification tasks than most of the classifier results found in [7]. We can also see from Table II that the parameter values m and p have a great effect on the classification results. Figures 1, 2 and 3 show that these classification results normally stay quite stable with respect to the parameter values.

Fig. 1. Classification results with weight optimization for Pima Indians diabetes data

Fig. 2. Classification results with weight optimization for Post operative data

ACKNOWLEDGMENTS

This work was supported by Lappeenranta University of Technology and South Carelia Polytechnic.


Fig. 3. Classification results from Wine data with weight optimization

REFERENCES

[1] Klir, G. J. and Yuan, B., Fuzzy Sets and Fuzzy Logic: Theory and Applications, Prentice Hall PTR, USA, 1995, pp. 61-83.

[2] Yager, R. R. and Rybalov, A., Uninorm aggregation operators, Fuzzy Sets and Systems, 80, 1996, pp. 111-120.

[3] Saastamoinen, K. and Sampo, J., On General Class of Parameterized 3π-Uninorm Based Comparison, WSEAS Transactions on Mathematics, 3(3), 2004, pp. 482-486.

[4] Dombi, J., Basic concepts for a theory of evaluation: The aggregative operator, European Journal of Operational Research, 10, 1982, pp. 282-293.

[5] Fodor, J., Yager, R. R. and Rybalov, A., Structure of uninorms, International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 5, 1997, pp. 411-427.

[6] Hastie, T. and Tibshirani, R., The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer Series in Statistics, Springer, New York, 2001.

[7] Eklund, P. W., A Performance Survey of Public Domain Supervised Machine Learning Algorithms, KVO Technical Report 2002, The University of Queensland, submitted, 2002.

[8] UCI Repository of Machine Learning Databases, network document, available at: http://www.ics.uci.edu/~mlearn/MLRepository.html [accessed March 14, 2004].

[9] Luukka, P., Similarity measure based classification, PhD thesis, Lappeenranta University of Technology, 2005.