The following data is a sample from a loan history database of a Japanese bank.
CST 6307 Data Mining, Spring 2022
Assignment 5
Model Evaluation (maximal grade: 80 points)
I. Write a report on the following experiments.
For the experiments use the original loan data (without any discretization) and the
following Weka classifiers:
1. 1-NN (weka.classifiers.IBk, use the default k = 1, no distance weighting)
2. 3-NN (weka.classifiers.IBk, set k = 3, no distance weighting)
3. Distance weighted 20-NN (weka.classifiers.IBk, set k = 20, weight by 1/distance)
4. Naive Bayes (weka.classifiers.NaiveBayes, default parameters)
5. Decision tree (weka.classifiers.j48.J48, default parameters)
6. OneR (weka.classifiers.OneR, default parameters)
7. ZeroR - majority predictor (weka.classifiers.ZeroR)

Problem 1. Run each of the seven classifiers in the three test options given below and
present the accuracy, precision, and recall displayed in the Classifier output window in
a table. (Create a table with 5 columns: Classifier, Test Option, Accuracy, Precision, and
Recall.)
1. 10-fold cross validation (the default number of folds).
2. Leave-one-out cross validation (20-fold cross-validation, set Folds to 20).
3. Percentage split (holdout) with 66% for training and 34% for testing (the
default).
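For reference, here is a minimal sketch of driving these three test options from the Weka Java API instead of the Explorer GUI. It assumes a placeholder file name loan.arff and the package paths of recent Weka releases (weka.classifiers.lazy.IBk, etc.), which differ slightly from the older names listed in the assignment; the Explorer's percentage-split option may also shuffle and split slightly differently than the manual split shown here.

    // Sketch: evaluating one classifier (3-NN) under the three test options.
    // "loan.arff" is a placeholder file name for the bank's loan data.
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.Random;

    import weka.classifiers.Evaluation;
    import weka.classifiers.lazy.IBk;
    import weka.core.Instances;

    public class EvaluateLoanData {
        public static void main(String[] args) throws Exception {
            Instances data = new Instances(new BufferedReader(new FileReader("loan.arff")));
            data.setClassIndex(data.numAttributes() - 1);   // class is the last attribute

            // 1. 10-fold cross-validation
            Evaluation cv10 = new Evaluation(data);
            cv10.crossValidateModel(new IBk(3), data, 10, new Random(1));

            // 2. Leave-one-out on this 20-instance sample = 20-fold cross-validation
            Evaluation loo = new Evaluation(data);
            loo.crossValidateModel(new IBk(3), data, data.numInstances(), new Random(1));

            // 3. Percentage split: 66% for training, 34% for testing
            data.randomize(new Random(1));
            int trainSize = (int) Math.round(data.numInstances() * 0.66);
            Instances train = new Instances(data, 0, trainSize);
            Instances test  = new Instances(data, trainSize, data.numInstances() - trainSize);
            IBk knn = new IBk(3);
            knn.buildClassifier(train);
            Evaluation split = new Evaluation(train);
            split.evaluateModel(knn, test);

            for (Evaluation e : new Evaluation[] {cv10, loo, split}) {
                System.out.printf("accuracy=%.3f precision=%.3f recall=%.3f%n",
                        e.pctCorrect() / 100.0, e.weightedPrecision(), e.weightedRecall());
            }
        }
    }

Accuracy, weighted precision, and weighted recall printed this way correspond to the figures shown in the Classifier output window.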

Problem 2. Compare and analyze the results of the 7 classifiers and rate the
classifiers by using their accuracy, precision, and recall.

Problem 3. Apply Boosting - use AdaBoostM1 (in classifiers – meta) with the default
parameter settings; change the base classifier (in the method's parameters). Present the
results in a table.
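A minimal sketch of the same setup in the Weka Java API, assuming the current package paths (weka.classifiers.meta.AdaBoostM1, weka.classifiers.trees.J48) and the placeholder file name loan.arff; in the Explorer the equivalent step is opening AdaBoostM1's parameters and replacing the base classifier.

    // Sketch: boosting a base classifier with AdaBoostM1 (default parameters)
    // and evaluating it with 10-fold cross-validation.
    import java.util.Random;

    import weka.classifiers.Evaluation;
    import weka.classifiers.meta.AdaBoostM1;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class BoostLoanData {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("loan.arff");   // placeholder file name
            data.setClassIndex(data.numAttributes() - 1);

            AdaBoostM1 boosted = new AdaBoostM1();            // default parameters
            boosted.setClassifier(new J48());                 // swap in any base classifier here

            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(boosted, data, 10, new Random(1));
            System.out.printf("boosted J48: accuracy=%.3f%n", eval.pctCorrect() / 100.0);
        }
    }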

Problem 4. Compare and analyze the results and rate the above 7 classifiers by
the improvement of accuracy achieved by boosting.

Chapter 5
Data Mining: Practical Machine Learning Tools and Techniques
Slides for Chapter 5, Evaluation, of Data Mining by I. H. Witten, E. Frank, M. A. Hall and C. J. Pal
Credibility: Evaluating what’s been learned
• Issues: training, testing, tuning
• Predicting performance: confidence limits
• Holdout, cross-validation, bootstrap
• Hyperparameter selection
• Comparing machine learning schemes
• Predicting probabilities
• Cost-sensitive evaluation
• Evaluating numeric prediction
• The minimum description length principle
• Model selection using a validation set
Evaluation: the key to success
• How predictive is the model we have learned?
• Error on the training data is not a good indicator of performance on future data
• Otherwise 1-NN would be the optimum classifier!
• Simple solution that can be used if a large amount of
(labeled) data is available:
• Split data into training and test set
• However: (labeled) data is usually limited
• More sophisticated techniques need to be used
Issues in evaluation
• Statistical reliability of estimated differences in performance (significance tests)
• Choice of performance measure:
• Number of correct classifications
• Accuracy of probability estimates
• Error in numeric predictions
• Costs assigned to different types of errors
• Many practical applications involve costs
Training and testing I
• Natural performance measure for classification problems: error rate
• Success: instance's class is predicted correctly
• Error: instance's class is predicted incorrectly
• Error rate: proportion of errors made over the whole set of instances
• Resubstitution error: error rate obtained by evaluating model on training data
• Resubstitution error is (hopelessly) optimistic!
Training and testing II
• Test set: independent instances that have played no part in formation of the classifier
• Assumption: both training data and test data are representative
samples of the underlying problem
• Test and training data may differ in nature
• Example: classifiers built using customer data from two different
towns A and B
• To estimate performance of classifier from town A in completely
new town, test it on data from B
Note on parameter tuning
• It is important that the test data is not used in any way to create the classifier
• Some learning schemes operate in two stages:
• Stage 1: build the basic structure
• Stage 2: optimize parameter settings
• The test data cannot be used for parameter tuning!
• Proper procedure uses three sets: training data, validation
data, and test data
• Validation data is used to optimize parameters
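A sketch of this three-set procedure, assuming a hypothetical 60/20/20 split of the loan data and tuning k for IBk on the validation set; the split sizes and candidate values of k are illustrative, not prescribed by the slides.

    // Sketch: tune a parameter on the validation set, then touch the test set
    // exactly once with the chosen setting.
    import java.util.Random;

    import weka.classifiers.Evaluation;
    import weka.classifiers.lazy.IBk;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class TuneThenTest {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("loan.arff");   // placeholder file name
            data.setClassIndex(data.numAttributes() - 1);
            data.randomize(new Random(1));

            int n = data.numInstances();
            Instances train = new Instances(data, 0, (int) (0.6 * n));
            Instances valid = new Instances(data, (int) (0.6 * n), (int) (0.2 * n));
            Instances test  = new Instances(data, (int) (0.8 * n), n - (int) (0.8 * n));

            int bestK = 1;
            double bestAcc = -1;
            for (int k : new int[] {1, 3, 5, 20}) {           // candidate parameter values
                IBk knn = new IBk(k);
                knn.buildClassifier(train);
                Evaluation onValid = new Evaluation(train);
                onValid.evaluateModel(knn, valid);            // validation data tunes the parameter
                if (onValid.pctCorrect() > bestAcc) { bestAcc = onValid.pctCorrect(); bestK = k; }
            }

            IBk chosen = new IBk(bestK);
            chosen.buildClassifier(train);
            Evaluation onTest = new Evaluation(train);
            onTest.evaluateModel(chosen, test);               // test data used only at the end
            System.out.printf("best k=%d, test accuracy=%.1f%%%n", bestK, onTest.pctCorrect());
        }
    }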
Making the most of the data
• Once evaluation is complete, all the data can be used to build the final classifier
• Generally, the larger the training data the better the
classifier (but returns diminish)
• The larger the test data the more accurate the error estimate
• Holdout procedure: method of splitting original data into
training and test set
• Dilemma: ideally both training set and test set should be large!
Predicting performance
• Assume the estimated error rate is 25%. How close is this to the true error rate?
• Depends on the amount of test data
• Prediction is just like tossing a (biased!) coin
• “Head” is a “success”, “tail” is an “error”
• In statistics, a succession of independent events like this is
called a Bernoulli process
• Statistical theory provides us with confidence intervals for the true
underlying proportion
Confidence intervals
• We can say: p lies within a certain specified interval with a
certain specified confidence
• Example: S=750 successes in N=1000 trials
• Estimated success rate: 75%
• How close is this to true success rate p?
• Answer: with 80% confidence p is located in [73.2,76.7]
• Another example: S=75 and N=100
• Estimated success rate: 75%
• With 80% confidence p in [69.1,80.1]
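These intervals can be reproduced with the formula derived on the “Transforming f” slide below (the normal approximation solved for p). The sketch hard-codes z = 1.28 for 80% confidence and prints approximately [0.732, 0.767] for N = 1000 and [0.691, 0.801] for N = 100.

    // Sketch: confidence interval for the true success rate p from the observed
    // success rate f over N trials, using the normal approximation.
    public class ConfidenceInterval {
        static double[] interval(double f, int n, double z) {
            double center = f + z * z / (2.0 * n);
            double spread = z * Math.sqrt(f / n - f * f / n + z * z / (4.0 * n * n));
            double denom  = 1 + z * z / n;
            return new double[] { (center - spread) / denom, (center + spread) / denom };
        }

        public static void main(String[] args) {
            double[] a = interval(0.75, 1000, 1.28);   // ~[0.732, 0.767]
            double[] b = interval(0.75, 100, 1.28);    // ~[0.691, 0.801]
            System.out.printf("N=1000: [%.3f, %.3f]  N=100: [%.3f, %.3f]%n",
                    a[0], a[1], b[0], b[1]);
        }
    }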
Mean and variance
• Mean and variance for a Bernoulli trial:
p, p (1–p)
• Expected success rate f=S/N
• Mean and variance for f : p, p (1–p)/N
• For large enough N, f follows a Normal distribution
• c% confidence interval [−z ≤ X ≤ z] for a random variable X is determined using Pr[−z ≤ X ≤ z] = c
• For a symmetric distribution such as the normal distribution we have:
Pr[−z ≤ X ≤ z] = 1 − 2 × Pr[X ≥ z]
Confidence limits
• Confidence limits for the normal distribution with 0 mean and a variance of 1:

Pr[X ≥ z]    z
0.1%         3.09
0.5%         2.58
1%           2.33
5%           1.65
10%          1.28
20%          0.84
40%          0.25

• Thus: Pr[−1.65 ≤ X ≤ 1.65] = 90%
• To use this we have to transform our random variable f to have 0 mean and unit variance
Transforming f
• Transformed value for f :
(i.e., subtract the mean and divide by the standard deviation)
• Resulting equation:
• Solving for p yields an expression for the confidence limits:
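The three formulas referred to above did not survive extraction; they are reproduced here in LaTeX, following the definitions on the preceding slides (f is the observed success rate, p the true success rate, N the number of trials):

    % Transformed value for f (zero mean, unit variance):
    \frac{f - p}{\sqrt{p(1-p)/N}}

    % Resulting equation:
    \Pr\!\left[ -z \le \frac{f - p}{\sqrt{p(1-p)/N}} \le z \right] = c

    % Solving for p gives the confidence limits:
    p = \left( f + \frac{z^2}{2N} \pm z \sqrt{\frac{f}{N} - \frac{f^2}{N} + \frac{z^2}{4N^2}} \right)
        \Bigg/ \left( 1 + \frac{z^2}{N} \right)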
Examples
• f = 75%, N = 1000, c = 80% (so that z = 1.28): p ∈ [0.732, 0.767]
• f = 75%, N = 100, c = 80% (so that z = 1.28): p ∈ [0.691, 0.801]
• Note that the normal distribution assumption is only valid for large N (i.e., N > 100)
• f = 75%, N = 10, c = 80% (so that z = 1.28): p ∈ [0.549, 0.881] (should be taken with a grain of salt)
Holdout estimation
• What should we do if we only have a single dataset?
• The holdout method reserves a certain amount for testing
and uses the remainder for training, after shuffling
• Usually: one third for testing, the rest for training
• Problem: the samples might not be representative
• Example: class might be missing in the test data
• Advanced version uses stratification
• Ensures that each class is represented with approximately equal
proportions in both subsets
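A small self-contained sketch of a stratified holdout split (no Weka dependency); the class labels and the one-third test fraction are illustrative stand-ins for the loan data.

    // Sketch: stratified holdout — each class contributes roughly the same
    // proportion (here one third) of its instances to the test set.
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;

    public class StratifiedHoldout {
        public static void main(String[] args) {
            String[] labels = {"approve", "reject", "approve", "approve", "reject",
                               "approve", "reject", "approve", "reject", "approve"};
            Map<String, List<Integer>> byClass = new HashMap<>();
            for (int i = 0; i < labels.length; i++) {
                byClass.computeIfAbsent(labels[i], k -> new ArrayList<>()).add(i);
            }

            List<Integer> test = new ArrayList<>(), train = new ArrayList<>();
            Random rng = new Random(1);
            for (List<Integer> idx : byClass.values()) {
                Collections.shuffle(idx, rng);                 // shuffle within each class
                int nTest = Math.max(1, idx.size() / 3);       // ~one third of each class
                test.addAll(idx.subList(0, nTest));
                train.addAll(idx.subList(nTest, idx.size()));
            }
            System.out.println("train=" + train + " test=" + test);
        }
    }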
Repeated holdout method
• Holdout estimate can be made more reliable by repeating the process with different subsamples
• In each iteration, a certain proportion is randomly selected for training (possibly with stratification)
• The error rates on the different iterations are averaged to yield an overall error rate
• This is called the repeated holdout method
• Still not optimum: the different test sets overlap
• Can we prevent overlapping?
Cross-validation
• K-fold cross-validation avoids overlapping test sets
• First step: split data into k subsets of equal size
• Second step: use each subset in turn for testing, the remainder
for training
• This means the learning algorithm is applied to k different
training sets
• Often the subsets are stratified before the cross-validation
is performed to yield stratified k-fold cross-validation
• The error estimates are averaged to yield an overall error estimate; also, standard deviation is often computed
• Alternatively, predictions and actual target values from the
k folds are pooled to compute one estimate
• Does not yield an estimate of standard deviation
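A small sketch contrasting the two summaries just described, using made-up per-fold error counts: averaging the fold error rates (which also yields a standard deviation) versus pooling all folds into a single estimate.

    // Sketch: averaged vs. pooled k-fold error estimates (hypothetical counts).
    public class CvSummaries {
        public static void main(String[] args) {
            int[] errors    = {1, 3, 2, 0, 2, 1, 3, 2, 1, 1};        // errors per fold (made up)
            int[] foldSizes = {10, 10, 10, 10, 10, 10, 10, 10, 10, 5};

            double sum = 0, sumSq = 0, pooledErrors = 0, pooledN = 0;
            for (int i = 0; i < errors.length; i++) {
                double rate = (double) errors[i] / foldSizes[i];
                sum += rate;
                sumSq += rate * rate;
                pooledErrors += errors[i];
                pooledN += foldSizes[i];
            }
            double mean = sum / errors.length;                        // average of fold error rates
            double sd = Math.sqrt(sumSq / errors.length - mean * mean);
            System.out.printf("averaged: %.3f +/- %.3f, pooled: %.3f%n",
                    mean, sd, pooledErrors / pooledN);                // pooled single estimate
        }
    }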
More on cross-validation
• Standard method for evaluation: stratified ten-fold cross-
validation
• Why ten?
• Extensive experiments have shown that this is the best choice to
get an accurate estimate
• There is also some theoretical evidence for this
• Stratification reduces the estimate’s variance
• Even better: repeated stratified cross-validation
• E.g., ten-fold cross-validation is repeated ten times and results
are averaged (reduces the variance)
Leave-one-out cross-validation
• Leave-one-out:
a particular form of k-fold cross-validation:
• Set number of folds to number of training instances
• I.e., for n training instances, build classifier n times
• Makes best use of the data
• Involves no random subsampling
• Very computationally expensive (exception: using lazy
classifiers such as the nearest-neighbor classifier)
Leave-one-out CV and stratification
• Disadvantage of Leave-one-out CV: stratification is not
possible
• It guarantees a non-stratified sample because there is only one
instance in the test set!
• Extreme example: random dataset split equally into
two classes
• Best inducer predicts majority class
• 50% accuracy on fresh data
• Leave-one-out CV estimate gives 100% error!
The bootstrap
• CV uses sampling without replacement
• The same instance, once selected, can not be selected again for a
particular training/test set
• The bootstrap uses sampling with replacement to form
the training set
• Sample a dataset of n instances n times with replacement to
form a new dataset of n instances
• Use this data as the training set
• Use the instances from the original dataset that do not occur in
the new training set for testing
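A short sketch of drawing one bootstrap sample (indices sampled with replacement) and collecting the out-of-bag instances for testing; n = 20 mirrors the size of the loan sample but is otherwise arbitrary.

    // Sketch: one bootstrap training sample plus its out-of-bag test set.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import java.util.TreeSet;

    public class BootstrapSample {
        public static void main(String[] args) {
            int n = 20;                                   // e.g. the 20-instance loan sample
            Random rng = new Random(1);

            List<Integer> trainIdx = new ArrayList<>();
            TreeSet<Integer> inTrain = new TreeSet<>();
            for (int i = 0; i < n; i++) {
                int pick = rng.nextInt(n);                // sampling WITH replacement
                trainIdx.add(pick);
                inTrain.add(pick);
            }

            List<Integer> testIdx = new ArrayList<>();    // instances never drawn: test set
            for (int i = 0; i < n; i++) {
                if (!inTrain.contains(i)) testIdx.add(i);
            }
            System.out.println("distinct in training: " + inTrain.size() + " of " + n);
            System.out.println("test (out-of-bag) indices: " + testIdx);
        }
    }

On average roughly 63.2% of the distinct instances end up in the training sample, which is where the next slide's name comes from.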
The 0.632 bootstrap
• Also called the 0.632 bootstrap
• A particular instance has a probability of 1 – 1/n of not being picked
• Thus its probability of ending up in the test data is:
(1 – 1/n)^n ≈ e^(–1) ≈ 0.368
• This means the training data will contain approximately 63.2% of the instances
Estimating error with the 0.632 bootstrap
• The error estimate on the test data will be quite pessimistic
• Trained on just ~63% of the instances
• Idea: combine it with the resubstitution error:
err = 0.632 × err(test instances) + 0.368 × err(training instances)
• The resubstitution error gets less weight than the error on the test data
• Repeat process several times with different samples;
average the results
More on the bootstrap ...
Answered 1 day after Mar 19, 2022

Solution

Mohd answered on Mar 21 2022
Problem 1. Run each of the seven classifiers in the three test options given below and
present the accuracy, precision, and recall displayed in the Classifier output window in
a table. (Create a table with 5 columns: Classifier, Test Option, Accuracy, Precision, and
Recall.)
1. 10-fold cross validation (the default number of folds).
2. Leave-one-out cross validation (20-fold cross-validation, set Folds to 20).
3. Percentage split (holdout) with 66% for training and 34% for testing (the
default).
     
     
     
     
     
Classifier / Test Option                                                           Accuracy   Precision   Recall

1-NN (weka.classifiers.IBk, use the default k = 1, no distance weighting)
    10-fold cross validation                                                       0.600      0.600       0.600
    Leave-one-out cross validation                                                 0.600      0.600       0.600
    Percentage split                                                               0.714      1.000       0.714

3-NN (weka.classifiers.IBk, set k = 3, no distance weighting)
    10-fold cross validation                                                       0.800      0.800       0.800
    Leave-one-out cross validation                                                 0.800      0.800       0.800
    Percentage split                                                               0.571      1.000       0.571

Distance weighted 20-NN (weka.classifiers.IBk, set k = 20, weight by 1/distance)
    10-fold cross validation                                                       0.650      _           0.650
    Leave-one-out cross validation                                                 0.650      _           0.650
    Percentage split                                                               0.143      1.000       0.143

Naive Bayes (weka.classifiers.NaiveBayes, default parameters)
    10-fold cross validation                                                       0.600      0.600       0.600
    Leave-one-out cross validation                                                 0.700      0.687       0.700
    Percentage split                                                               0.429      1.000       0.429

Decision tree (weka.classifiers.j48.J48, default...