Assignment Exercise 12B and Chapter 12A Reference Material
Complete both the Principal Components Analysis and Exploratory Factor Analysis exercises using SPSS.
Download the Data File named Leanne and load it into SPSS. The data file contains 310 cases; the 30 individual items, or variables, to be used for factor analysis in this assignment are named aspire01 through aspire30.
Complete the practice exercises in Chapter 12B using SPSS and the aspire01 through aspire30 variables in the Leanne dataset.
Write a 1,050- to 1,400-word paper that includes the following:
See Practice Exercises for 12B on page 2
· Analyze and explain similarities and differences in the exploratory factor analysis and principal component analysis.
· Assess how exploratory factor analysis can help to reduce a large set of variables to a small number of themes, factors, or components, and why the focus on communality among the subsets of variables is important for the overall analysis.
· Replicate the major tables for principal component analysis and exploratory factor analysis. After each of the primary tables explain (from your own understanding) the primary results that are shown in certain table columns that lead to the interpretation of the test results. Evaluate the strengths and weaknesses or limitations of the model.
· Draw conclusions about how the early principal components analysis and exploratory factor analysis techniques might provide benefits for a projected dissertation study.
See Chapter Reference Material for Practice Exercises 12B on page 28
Exercise 12B
Principal Components Analysis and Exploratory Factor Analysis Using IBM SPSS
12B.1 Numerical Example
The numerical example we use in this chapter is based on responses to the Aspirations Index (Kasser & Ryan, 1993, 1996), an inventory assessing personal values or goals. The inventory assesses both extrinsic goals that are more materialistically oriented and intrinsic goals that are more closely aligned to personal development. Within each general domain, subsets of items assess more specific exemplars of these goals. The version of the inventory that was used within the context of a larger research project run in 2010 by Leanne Williamson, one of our graduate students, is shown in Table 12b.1. In this version, intrinsic orientation and extrinsic orientation are both represented by three subscales. Items said to comprise each subscale are also shown in Table 12b.1; participants used a 9-point summative response scale to indicate the extent to which the content of the item applied to them. The data file contains 310 cases and is named Leanne; the 30 individual items—the variables to be factor analyzed—are named aspire01 through aspire30.
Although there is a hypothesized structure to this inventory (which suggests that we might wish to engage in a confirmatory factor analysis procedure as discussed in Chapters 16A and 16B), for the purposes of this chapter we will apply the exploratory procedures of principal components and factor analysis to our data set. To illustrate how to perform and interpret the results of the analyses as well as to demonstrate how researchers might work with the resulting factor structure in subsequent analyses, we do the following:
Table 12b.1   The Structure of the Aspirations Inventory Used in Data Collection
· We perform a preliminary principal components analysis (without rotation) to obtain the scree plot and determine how much variance is explained as a function of the number of components.
· We then extract and rotate two components/factors, expecting to see a separation between the global intrinsic items and the global extrinsic items.
· Following that, we extract and rotate six components/factors, hoping to see a separation between the six subscales, but being prepared to deal with whatever results we obtain.
· We then evaluate the reliability of item sets defined by the factor structure that we have accepted.
· Based on the outcome of the reliability analysis, we build the corresponding scales/subscales.
· We correlate the scales/subscales that we have computed.
· Finally, we use the scales/subscales as dependent variables in an exploratory one-way between-subjects MANOVA.
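The reliability step in the outline above is typically carried out with Cronbach's coefficient alpha. As a rough illustration of the computation behind an SPSS reliability analysis (the item scores below are hypothetical), a minimal sketch:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_cases, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: 4 cases on a 3-item subscale
scores = np.array([[1, 1, 2],
                   [2, 3, 2],
                   [3, 3, 4],
                   [4, 5, 4]], dtype=float)
print(round(cronbach_alpha(scores), 3))  # → 0.933
```

Item sets reaching the conventional .70 threshold would typically be retained as scales in the subsequent steps.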
12B.2 Preliminary Principal Components Analysis
12B.2.1 Preliminary Principal Components Analysis Setup
Selecting the path Analyze → Dimension Reduction → Factor from the main menu brings us to the Factor Analysis main dialog window displayed in Figure 12b.1. We have highlighted and moved aspire01 through aspire30 into the Variables panel.
Figure 12b.1   The Main Factor Analysis Window
12B.2.1.1 Descriptive Statistics
Select the Descriptives pushbutton and check Univariate descriptives and Initial solution under Statistics as shown in Figure 12b.2. The Statistics section provides two options. The Univariate descriptives will provide the number of valid cases, mean, and standard deviation for each variable. The Initial solution checkbox will produce an initial (unrotated) solution including the complete analysis (all 30 components extracted) and a snapshot of the number of components we would wish to bring into the rotation phase if in fact we requested a rotation of the structure. It will also provide the communalities, eigenvalues, and percentage of variance explained. We have requested that both sets of statistics be included in the output.
The Correlation Matrix section displays eight separate options, including a Pearson correlation matrix (Coefficients) with significance levels for all variables in the analysis. These Pearson coefficients should be scanned to check for consistent patterns of variability or relationships between variables. We recommend this inspection be conducted during the initial data screening and univariate/multivariate assumption violation check discussed in Chapters 3A and 3B and so will not do it here.
Choose the KMO and Bartlett’s test of sphericity checkbox as shown in Figure 12b.2. This produces the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy, which is a rough indicator of how adequate the correlations are for factor analysis. As a general heuristic (see Kaiser, 1970, 1974), a value of .70 or above is considered adequate. Bartlett’s test of sphericity provides a test of the null hypothesis that none of the variables are significantly correlated. This test should yield a statistically significant outcome before proceeding with the factor analysis. Click Continue to return to the main dialog window.
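The Bartlett statistic itself is straightforward to compute outside SPSS: it is a chi-square approximation based on the determinant of the Pearson correlation matrix. A minimal numpy/scipy sketch with simulated (hypothetical) data:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test of sphericity for an (n_cases, p_vars) data matrix."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)  # Pearson correlation matrix
    # Chi-square approximation from the determinant of R
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, df, chi2.sf(statistic, df)

# Simulated data: four variables driven by one common latent variable,
# so the variables are substantially intercorrelated
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
data = latent + 0.5 * rng.normal(size=(200, 4))

stat, df, p = bartlett_sphericity(data)
print(df, p < .05)  # → 6.0 True (significant, so factoring is sensible)
```

A significant result here plays the same gatekeeping role described above: it licenses proceeding with the factor analysis.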
12B.2.1.2 Component Extraction
Selecting the Extraction pushbutton produces the dialog window shown in Figure 12b.3. This screen is composed of five sections: Method, Analyze, Display, Extract, and Maximum Iterations for Convergence.
Figure 12b.2   The Descriptives Window of Factor Analysis
The Method drop-down menu allows us to choose from one of seven methods of factor (or component) extraction. We have chosen Principal components (the default option) in this preliminary analysis, but there are several other choices, including principal factors, generalized least squares and ULS, and maximum likelihood. In the Analyze panel, we have chosen to analyze the Correlation matrix. The Display panel allows us to obtain an Unrotated factor solution and a Scree plot. The Extract panel defaults to selecting factors or components whose eigenvalues exceed 1.00; this is a reasonable choice to make given that we are performing the preliminary analysis. Finally, the Maximum Iterations for Convergence section defaults at 25 iterations or algorithmic passes to achieve a solution; this will work for our data set, but it is sometimes necessary to raise this value to 100 or greater for data sets that have a less well-defined structure. Click Continue to return to the main dialog window.
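The eigenvalues-exceed-1.00 default in the Extract panel is the Kaiser criterion: a component is retained only if it explains more variance than a single standardized variable would on its own. A minimal sketch of that rule applied to a hypothetical four-variable correlation matrix:

```python
import numpy as np

def kaiser_count(R: np.ndarray) -> int:
    """Number of components with eigenvalue > 1.00 (the Kaiser criterion)."""
    eigenvalues = np.linalg.eigvalsh(R)[::-1]  # sorted largest first
    return int((eigenvalues > 1.0).sum())

# Hypothetical correlation matrix: two highly correlated pairs of variables
R = np.array([[1.0, 0.8, 0.1, 0.1],
              [0.8, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.8],
              [0.1, 0.1, 0.8, 1.0]])
print(kaiser_count(R))  # → 2 (the eigenvalues are 2.0, 1.6, 0.2, 0.2)
```

Note that the eigenvalues sum to 4, the number of variables, which is the "units of total variance" logic used later in the Total Variance Explained output.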
12B.2.1.3 Component Rotation
The Rotation dialog window, shown in Figure 12b.4, consists of three sections: Method, Display, and Maximum Iterations for Convergence. The Method section provides for a variety of factor rotation methods. For this preliminary analysis, we have chosen None and therefore do not need to Display anything. Click Continue to return to the main dialog window.
Figure 12b.3   The Extraction Window of Factor Analysis
12B.2.1.4 Options
The Options window is shown in Figure 12b.5. The Missing Values section allows researchers to Exclude cases listwise (the default), which is what we have selected. This option excludes any cases missing one or more values on any of the variables in the analysis; thus, all cases included in the analysis have valid values on all of the variables.
The Coefficient Display Format section allows us to sort the factor weightings by size and to suppress weightings with absolute values less than a specified value. Sorting coefficients by size usually makes it easier for researchers to comprehend the results, and we will choose this form of display when we request the two-factor and later the six-factor rotated solution. Click Continue to return to the main dialog window and OK to perform the analysis.
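The sort-by-size and suppress-small-coefficients display options can be mimicked on any loading matrix; the loadings below are hypothetical, and the .30 cutoff is one commonly chosen suppression value:

```python
import numpy as np

# Hypothetical loading matrix: 5 variables x 2 components
loadings = np.array([[0.72,  0.10],
                     [0.65, -0.05],
                     [0.15,  0.81],
                     [-0.08, 0.77],
                     [0.28,  0.26]])

# Suppress loadings with absolute value below .30, as SPSS's option does
display = np.where(np.abs(loadings) >= 0.30, loadings, np.nan)

# Sort variables by their largest absolute loading, largest first
order = np.argsort(-np.abs(loadings).max(axis=1))
print(order.tolist())  # → [2, 3, 0, 1, 4]
print(display[order])  # blanks (NaN) where a loading was suppressed
```

The sorted, suppressed display makes the variable clusters behind each component much easier to see, which is exactly why the text recommends it for the rotated solutions.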
12B.2.2 Preliminary Principal Components Analysis Output
12B.2.2.1 Descriptive Statistics
A portion of the Descriptive Statistics output is shown in Figure 12b.6. The mean, standard deviation, and sample size are displayed for each variable. Examining the sample size is especially important, as it assures us that we have not lost too many cases; here, we analyzed data for 301 of the 310 cases in the data file.
The results of the KMO and Bartlett’s test of sphericity appear in Figure 12b.7. A KMO coefficient in the middle .8s suggests that the data are suitable for principal components analysis. Likewise, a statistically significant Bartlett’s test enables us to reject the null hypothesis of lack of sufficient correlation between the variables. These two results give us confidence to proceed with the analysis.
Figure 12b.4 The Rotation Window of Factor Analysis
Figure 12b.5 The Options Window of Factor Analysis
Figure 12b.6   A Portion of the Descriptive Statistics Output
Figure 12b.7   Kaiser–Meyer–Olkin and Bartlett Test Output
12B.2.2.2 Explained Variance
The Total Variance Explained output is presented in Figure 12b.8. The first major set of three columns labeled Initial Eigenvalues shows the analysis that has been taken to completion. Thus, by the time we reach the 30th component, we have extracted 100% of the variance (because there are 30 items in the full inventory). What draws our immediate attention is that seven components have eigenvalues of 1.00 or greater, and for most researchers most of the time, that would indicate the maximum number of components they would accept (and most often they would accept fewer for their final solution).
The first two components each make great strides in accounting for the variance—almost 25% and 16%, respectively, cumulatively accounting for about 41% of the total variance. Gains in total variance explained for the next few components still jump in somewhat good-sized chunks—almost 8%, 5.5%, better than 4%, and almost 4% for the third through sixth components, respectively. The first six components cumulatively account for better than 62% of the variance.
The second major set of three columns labeled Extraction Sums of Squared Loadings conveys the same information (because this was a principal components analysis) for all components whose eigenvalue was greater than 1.00 (as we specified in the Extraction dialog window).
The percentages shown in the Total Variance Explained table are based on the eigenvalues. With 30 variables in the analysis, there are 30 units of total variance. The first component had a computed eigenvalue of 7.422, which represents 24.739% of 30, and the second component had a computed eigenvalue of 4.819, which represents 16.062% of 30.
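The arithmetic in this paragraph is easy to verify: each eigenvalue, divided by the 30 units of total variance and multiplied by 100, gives that component's percentage of explained variance. A quick check using the two reported eigenvalues (the results reproduce the table's percentages to rounding):

```python
# 30 standardized variables contribute 30 units of total variance
n_variables = 30
eigenvalues = [7.422, 4.819]  # first two components, from the output

percents = [100 * ev / n_variables for ev in eigenvalues]
print([round(p, 3) for p in percents])  # → [24.74, 16.063]
print(round(sum(percents), 1))          # → 40.8 (the "about 41%" cited earlier)
```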


Mohd answered on Aug 26 2021
Similarities between Principal Components Analysis and Exploratory Factor Analysis
PCA and EFA share essentially the same assumptions: the measurement scale is interval or ratio, and the data come from a random sample with at least 5 observations per variable and at least 100 observations overall.
Larger sample sizes are recommended for more stable estimates (10 to 20 observations per variable), and oversampling can compensate for missing values. Both techniques assume a linear relationship between the observed variables, a normal distribution for each observed variable, and a bivariate normal distribution for each pair of observed variables. PCA and EFA are both variable reduction techniques; when the communalities are large (close to 1.00), the two can produce similar results.
PCA additionally assumes the absence of outliers in the data, whereas EFA assumes a multivariate normal distribution when the maximum likelihood extraction method is used.
Differences between Principal Component Analysis and Exploratory Factor Analysis:

PCA: The retained components account for a maximal amount of the variance of the observed variables.
EFA: The factors account for the common variance in the data set.

PCA: The analysis decomposes the correlation matrix.
EFA: The analysis decomposes an adjusted correlation matrix (with communalities on the diagonal).

PCA: Minimizes the sum of squared perpendicular distances to the component axis.
EFA: Estimates the factors that influence responses on the observed variables.

PCA: Component scores are a linear combination of the observed variables weighted by eigenvectors.
EFA: Observed variables are linear combinations of the underlying common and unique factors.
Principal components can be difficult to interpret because they are not underlying factors, merely linear combinations of the observed variables. Each component score is a transformation of the observed variables weighted by eigenvectors (C1 = b11x1 + b12x2 + b13x3 + . . . ).
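The component-score equation above (C1 = b11x1 + b12x2 + . . . ) is simply a weighted sum; a minimal sketch with hypothetical weights and scores:

```python
import numpy as np

# Hypothetical eigenvector weights (b11, b12, b13) for the first component
weights = np.array([0.57, 0.61, 0.55])

# One case's standardized scores on the observed variables x1, x2, x3
x = np.array([1.2, -0.4, 0.8])

# C1 = b11*x1 + b12*x2 + b13*x3
c1 = weights @ x
print(round(c1, 3))  # → 0.88
```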
Decide on the appropriate statistical analysis to answer the research questions a priori... It is inappropriate to run both PCA and EFA on the same data without a clear rationale. PCA combines correlated variables to reduce the number of variables, explaining the same amount of variance with fewer variables (the principal components). EFA estimates factors, underlying constructs that cannot be measured directly. In the models below, the same data are used to illustrate PCA and EFA. The techniques should be adapted to your data. Do not run...
