
Effect of Item Order on Certain Psychometric Properties

Author: Ruby

Jun. 24, 2024


This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.


Many studies have been conducted on the effect of item order in self-report questionnaires on mean scores. This research aims to study the effect of item order on measurement invariance in addition to mean scores. To this end, two groups randomly drawn from the same sample were presented with either a fixed order form, in which all items belonging to the same dimension were adjacent to each other, or a random order form, in which the items were randomly sequenced. The results revealed a statistically significant difference between the mean scores of the two forms. In the next stage of the study, the fit indices obtained from the confirmatory factor analyses (CFA) applied to the two separate forms and the modification indices (MI) suggested by the software were compared. Both forms returned high modification suggestions for adjacent items or items presented near each other. Additionally, it was found that high χ2 reductions suggested by the MIs in one form corresponded to low χ2 reductions in the other. Lastly, multiple group CFA (mg-CFA) was conducted to determine whether measurement invariance was achieved across the different item order presentations of the scale. The findings indicate that measurement invariance could not be achieved even at the first stage of analysis. It may specifically be stated that presenting respondents with items under the same dimension together ensures empirical findings congruent with the theoretical structure.

Many factors that impact the response patterns of a multi-item self-report questionnaire may be cited (Weinberg et al., ). One such factor is the location of the items in the questionnaire. It is assumed that respondents answer adjacent items practically independently of each other and that the results therefore provide accurate information regarding personal behavior (Bowman and Schuldt, ). However, studies on item order effects indicate that this assumption is questionable. A review of the literature indicates that studies on the item order effect to date have focused on characteristics such as reliability levels, anchor effects, mean scores, and item parameters, as explained in detail in the previous section. Studies on the influence of item order on the factorial structures of self-reports are few and have been conducted with narrow scopes. Therefore, within the scope of this study, in addition to the descriptives of item order as seen in previous studies, the influence on psychometric properties, especially on the factorial structure of the scale, has also been portrayed; the invariance of the factorial structure in two different forms was tested. To this end, two separate forms were presented to two randomly assigned groups from the same sample. One form was the original scale (fixed order form), while the other had the items mixed randomly (random order form). Answers to the following research questions were sought:

In brief, it may be stated that research on the item order effect in different self-report measures has a long history in the field and remains popular today. Many aspects of this effect remain open to study from different perspectives and across different scale characteristics; in particular, there is a need for more studies on the influence of item order on the psychometric characteristics of scales, and such studies would be valuable contributions to the literature in the field.

Studies in the literature of the field generally portray findings indicating changes in the correlations between items and in mean scores when general and specific questions are moved together, depending on the characteristic being measured. Another significant finding is that items responded to first serve as an anchor for those responded to later. Additionally, studies that are few in number but carry important findings regarding the psychometric properties of scales show that item order also has an effect on reliability, validity, and item statistics.

In research on the item order effect, the focus appears to be on the differentiation obtained as a result of changing the order of a general question and a more specific question on a subject. In a study that placed either the general question or the specific question first through two separate forms, McFarland () found that item order had a low impact on the correlation among items. Schuman and Presser () noted that different results were obtained when two questions on a politically charged subject were asked in differing orders. Strack et al. () used the same method in the following years on different subjects, finding that changing the presentation order of the general and specific questions resulted in a correlation between general happiness and dating happiness of 0.16 in one order and 0.55 in the other. In his study of the influence of previously answered items on subsequent items in personality tests, Knowles () created many forms allowing each item to be presented in every possible position from the beginning to the end of the measure. The findings indicated that the mean score was not influenced by serial position, and no interaction was observed between item content and serial position. Additionally, an increase in reliability values was observed as item positions moved toward the end of the measure. The researcher explained this phenomenon by stating that "answering one item leaves a residue that increases the reliability of the next items." The study concluded that as serial position advanced, the response consistency of respondents increased; that respondents continued their initial response tendencies; and that the responses provided were more meaningful predictors of total score, overshadowing the assumption that the measurement tool is independent of the subject measured.
In an experimental study where the key question was asked before and after other items through two separate forms, Lasorsa () observed a 20% variation between responses, with the findings indicating that the results obtained may not always be due to individual differences between participants but rather to item order. Chen () studied the influence of item order effects on attitude measures regarding test reliability, item difficulty, item discrimination, test score, test length, reaction time, and person parameters. The findings of Chen () indicated evidence of item order effects on attitude measures and supported the notion that initially presented items may serve as anchors for subsequent questions, as respondents tend to adjust their responses to subsequent items based on the items presented first. In the first part of their two-part study, Kaplan et al. () determined that the mean of the general scale and, subsequently, the strength of the relationship between the general and specific scales change when the relative positions of the general and specific scales are rearranged. Similar to the first part, the second part of the study, which used a quasi-experimental design, showed that the mean of the overall satisfaction measure was lower and the magnitude of the specific-general scale relationship was stronger when the general scale preceded the specific scale than in the converse sequence. Bowman and Schuldt (), working with randomized groups of university students, found a significant difference among the groups depending on the order of general and specific questions, and interpreted this as the influence of item order on item response. The study of Huang and Cornell () also relates to the effect of ordering specific and general items on results. Statistically significant differences were observed in the score averages of the differing forms for each order presented in their experiment. Huang and Cornell () continued their research with a larger and more diverse sample.
Regarding the order of the specific and general questions, the test group showed between 20 and 45% differentiation from the control group. Weinberg et al. () conducted a study to evaluate the item order effect from a psychometric validation perspective. They established two forms, one with domain items fixed and general items random, the other with domain items random and general items fixed, and applied these forms to two different groups. The mean values obtained with the fixed domain forms were significantly higher than those obtained from the random domain forms. Additionally, exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were conducted on both forms separately: the EFA resulted in a one-dimensional structure for both forms, while the CFA provided a good fit for the one-dimensional structure with the random domain forms and a poor fit with the fixed domain forms. While this study obtained important and broad findings, the conclusions drawn by the researchers may be skewed by the demographic imbalances between the groups to which the two forms were presented, with the researchers themselves stating that the findings must be tested with equivalent groups.

Although awareness of the item order effect may be traced back further, it may be stated that research contributing to the literature of the field began decades ago. Since then, studies on the item order effect in self-reports have mainly focused on portraying the influence of different item orders on the level of information obtained from participants. Studies comparing the information obtained from presenting items with specific or general statements on the subject first have found broad acceptance in the literature of the field (McFarland, ; Strack et al., ; Schuman and Presser, ; Lasorsa, ; Kaplan et al., ; Huang and Cornell, , ). Since the typical method for determining the item order effect is to apply two different forms with different orders to two groups with similar demographic characteristics (Kaplan et al., ), research has mostly been based on this approach. However, when the literature is examined, it can be said that there is a need for research on the impact of item order on the psychometric properties of self-reports, especially on the factorial structure.

The item order effect is especially important for attitude measures. Chen () states that the approach to explaining this phenomenon began with primacy and recency, while over time the literature shifted to anchoring and adjusting. Anchoring and adjusting posits that people tend to anchor on information initially presented to them and derive their plausible estimations through adjustments based on that anchor (Zhao and Linderholm, ). Regarding the item order effect, the initial responses to items serve as anchors for subsequent responses (Harrison and McLaughlin, ). In other words, anchoring and adjusting occurs when an individual's stored memory of a context is weak, resulting in prior responses to items serving as anchors, which in turn change the responses given to subsequent items (Chen, ).

The testing of measurement invariance is detailed by Widaman and Reise () in a four-step model. Although Vandenberg and Lance () proposed an eight-stage approach, the four-stage model is prevalent in the literature (Putnick and Bornstein, ). The four stages begin with the least constrained model. The first step, known as configural invariance, freely estimates all parameters in the two groups. The second step, called metric invariance (also known as weak invariance), constrains the factor loadings to be equal across the two groups. The third step, called scalar invariance (also known as strong invariance), constrains the intercepts to be equal across the groups. The final stage, called strict invariance, constrains the error variances to be equal in addition to the previous restrictions (Widaman and Reise, ). The comparison of models is also conducted step by step. Each step reduces the number of freely estimated parameters, and the degrees of freedom increase. Each model is nested in the previous model, and the likelihood ratio χ2 difference test (Bentler and Bonett, ) is used to calculate the χ2 difference between subsequent models, allowing determination of whether the χ2 difference between the two models is significant given the difference in their degrees of freedom. If no significant difference is found, the constraints added at that step do not significantly worsen the model-data fit, and it is concluded that measurement invariance is achieved for the step being tested. Therefore, measurement invariance testing begins with configural invariance, and if the fit indices indicate that the model allowing free estimation of all parameters in the groups fits the data well, the next step is conducted. No breach of univariate normality was encountered in the data distribution. However, the data did not exhibit multivariate normality, so all CFAs were conducted using maximum likelihood estimation with robust standard errors (MLR).
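The stepwise comparison described above reduces to a likelihood-ratio χ2 difference test between nested models. The following Python sketch uses hypothetical fit values (the function name and numbers are illustrative, not from the article) and shows the unscaled form of the test; with the MLR estimator used in this study, a Satorra-Bentler-type scaling correction would additionally be applied in practice.

```python
from scipy.stats import chi2

def chi2_difference_test(chi2_less, df_less, chi2_more, df_more):
    """Likelihood-ratio chi-square difference test for nested models.

    chi2_more/df_more belong to the more constrained model (larger df);
    returns the chi-square difference, df difference, and p-value.
    """
    d_chi2 = chi2_more - chi2_less
    d_df = df_more - df_less
    p = chi2.sf(d_chi2, d_df)  # survival function = upper-tail probability
    return d_chi2, d_df, p

# Hypothetical configural vs. metric (equal-loadings) models:
d_chi2, d_df, p = chi2_difference_test(250.0, 160, 275.0, 185)
# A non-significant p suggests the added constraints do not worsen fit.
```

A non-significant result would license moving on to the next, more constrained stage.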

Lastly, to respond to the fourth research question, the invariance of the factorial structure of the scales applied to the treatment and control groups through different item orders was tested. Measurement invariance is achieved when a measurement tool retains the same structure when applied to different groups, or when repeated measurements are conducted on the same groups (Marsh et al., ). This is tested using multiple group CFA (mg-CFA; Dimitrov, ; Van De Schoot et al., ). Essentially, measurement invariance studies test the constancy of the scale in the face of different group characteristics. Within the scope of this study, mg-CFA was utilized to analyze the differentiation in psychometric characteristics among equivalent groups as a result of different item orders on the same scale. In other words, the existence of bias was sought when items of the same factor were applied together or in random order.

For the third research question, the suggested modifications for improved fit indices, the items in both forms for which these modifications were suggested, and the similarities between the modifications suggested for the two forms were evaluated.

The first research question of this study required the use of an independent samples t-test to determine the differentiation between the mean values obtained from the fixed order and random order forms. To respond to the second research question, the internal consistency coefficients for the two forms were obtained first, and then separate CFAs were conducted on both forms to obtain certain item statistics and fit indices.

The scale used within the scope of this study was a five-factor scale consisting of 30 items: sharing (nine items), shopping (seven items), real-time updating (five items), accessing online content (five items), and gaming/gambling (four items), originally used to measure participants' cyberloafing levels, samples of which are included in Appendix A. The original five-factor scale was developed and validated by Akbulut et al. (), and used successfully in several studies (e.g., Akbulut et al., ; Dursun et al., ; Gökçearslan et al., ; Kian-Yeik, ; Wu et al., ; Sivrikova et al., ). In the scale's original state, items within the same dimension were grouped one after the other. In accordance with the aim of this study, this original form (fixed order form), with items gathered under their dimensions, and a second form (random order form), in which all items were arranged randomly, were created. Since the size of the item order effect may differ based on the demographic characteristics of participants (McFarland, ), individuals included in the sample were randomly assigned to the treatment and control groups to ensure that the demographic characteristics of the groups receiving the fixed and random forms would be equivalent. Randomization is sufficient to ensure the equivalence of groups in experimental studies, as this method controls for extraneous variables that may influence the research results (Fraenkel and Wallen, ). Then, the fixed order form was applied to one of these groups (control group), while the random order form was applied to the other (treatment group). To avoid any bias in responses, the purpose of the study was concealed from the students; the students responded to the items in their forms unaware that the item orders differed between them. Following data scrubbing procedures, 219 fixed order and 211 random order data sets, for a total of 430 data sources, were obtained.
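The random assignment step can be sketched as follows. This is a minimal illustration, not the authors' actual procedure: the participant identifiers, the seed, and the even split are assumptions (the study's final group sizes of 219 and 211 arose after data scrubbing).

```python
import random

def assign_groups(participant_ids, seed=42):
    """Shuffle participant IDs and split them into two near-equal groups:
    control (fixed order form) and treatment (random order form)."""
    rng = random.Random(seed)      # seeded only for reproducibility here
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical IDs 0..429, matching the final sample size:
control, treatment = assign_groups(range(430))
```

Because every participant has the same chance of landing in either group, extraneous variables are expected to balance out across groups in the long run.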

A broad definition of cyberloafing would be employees wasting time at work (Weatherbee, ). Different types of cyberloafing have been put forth by researchers, based on different theoretical foundations. Some of the proposals that stand out include the ego depletion model of self-regulation (Wagner et al., ), the theory of planned behavior (Askew et al., ), and the theory of interpersonal behavior (Moody and Siponen, ). The goal of all these approaches is to explain the nature and predictors of cyberloafing in different settings. However, these studies primarily focused on work-based settings rather than educational environments. The purpose of the cyberloafing scale used in this study is to determine the degree of cyberloafing of undergraduate students during lectures.

The data of the study were obtained from second and third year undergraduate students studying in seven different departments of an education faculty. Of the participants, 68.5% were female and 31.5% were male. 25.4% of the participants were studying in foreign language departments, 15.9% in special education, 13.8% in guidance and counseling, 10.7% in primary education, 10% in social sciences, and 8.3% in preschool education. Of the data gathered from 445 students, five responses were discarded because the participants used only one selection throughout the form, and 10 were discarded based on the validation question (see Appendix A), leaving 430 responses with which the study was conducted. After these 430 students were randomly separated into two groups, one was provided with the fixed order form, while the other was given the random order form. In the original state of the scale presented in the fixed order form, the items referring to each of the five dimensions are grouped together. In the random order form, all of the items are randomly presented such that no items referring to the same dimension appear sequentially. At first, a complete randomization was implemented; however, some items from the same factor were ordered successively under complete randomization, so the locations of those items were exchanged with items from different factors. To answer the research questions and to evaluate the results obtained regarding the item order effect, the treatment and control groups to which the fixed and random order forms were applied should have equivalent demographic characteristics. The chi-square test of independence conducted to verify the random assignment of the treatment and control groups found no association between group (treatment-control) and gender (Pearson χ2 = 0.41, p = 0.52), department (Pearson χ2 = 0.98, p = 0.99), or school year (Pearson χ2 = 1.76, p = 0.62). In other words, the demographic characteristics of the groups to which the fixed and random forms were applied were similar.
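Equivalence checks of this kind apply the chi-square test of independence to a group-by-demographic contingency table. The sketch below uses made-up cell counts (the article reports only the test statistics, not its frequencies), so the resulting statistic is illustrative:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical group-by-gender counts consistent with the sample sizes
# (219 control, 211 treatment); the real cell frequencies are not reported.
table = np.array([[150, 69],    # control:   female, male
                  [140, 71]])   # treatment: female, male

chi2_stat, p, dof, expected = chi2_contingency(table, correction=False)
# A non-significant p indicates no association between group and gender.
```

For a 2x2 table the test has one degree of freedom; a p-value above 0.05 here supports the claim that assignment was independent of gender.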

Results

Firstly, to see whether the order of the scale items affected descriptive statistics, the overall cyberloafing scores of the treatment and control groups were compared. The mean in the random order condition (treatment group; M = 74.5, SD = 26.1) was significantly higher than the mean in the fixed order condition (control group; M = 69.2, SD = 26.9), t(428) = 2.07, p < 0.05. The effect size for this difference was Cohen's d = 0.20.
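The reported statistics can be reproduced from the group summary values alone. The sketch below recomputes the pooled-variance t statistic and Cohen's d from the means, SDs, and sample sizes given above (the function name is ours):

```python
import math

def independent_t_and_d(m1, s1, n1, m2, s2, n2):
    """Pooled-variance independent-samples t statistic, degrees of
    freedom, and Cohen's d, from group summary statistics."""
    df = n1 + n2 - 2
    pooled_var = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
    pooled_sd = math.sqrt(pooled_var)
    se = pooled_sd * math.sqrt(1 / n1 + 1 / n2)
    return (m1 - m2) / se, df, (m1 - m2) / pooled_sd

# Summary statistics reported in the article:
t, df, d = independent_t_and_d(74.5, 26.1, 211, 69.2, 26.9, 219)
# t ≈ 2.07, df = 428, d ≈ 0.20, matching the reported values.
```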

For the second research question, to portray the influence of item order on the psychometric characteristics of the scale, the internal consistency reliability coefficients for both forms were calculated for each factor using Cronbach's α and McDonald's ω, and the overall reliability of the scores was calculated using stratified α. For the construct validity findings, the CFA results and other descriptive statistics regarding the items were reported.

A study of the Cronbach's α and McDonald's ω coefficients obtained to determine reliability through internal consistency (see Table 1) shows that the internal consistency coefficients tend to be higher for the fixed order form. In addition, while the stratified α obtained to evaluate the overall reliability of the scale was slightly higher for the fixed form, as with the subdimensions, the overall reliability obtained for both forms was quite high.
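Cronbach's α is computed from the per-item variances and the variance of the total score. A minimal sketch, using simulated toy data rather than the study's responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: three items sharing a common trait plus small noise,
# so internal consistency should come out high.
rng = np.random.default_rng(0)
trait = rng.normal(size=200)
data = np.column_stack([trait + rng.normal(scale=0.3, size=200)
                        for _ in range(3)])
alpha = cronbach_alpha(data)
```

Stratified α extends this idea by weighting subscale variances within the total-score variance; McDonald's ω instead requires the factor loadings from a fitted measurement model.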

Table 1

Factors         n of items    Fixed order (Cronbach α / McDonald's ω)    Random order (Cronbach α / McDonald's ω)
Factor 1        9             … / …                                      … / 0.928
Factor 2        7             … / …                                      … / 0.871
Factor 3        5             … / …                                      … / 0.900
Factor 4        5             … / …                                      … / 0.919
Factor 5        4             … / …                                      … / 0.846
Stratified α    30            …                                          0.971

In the following stage, two separate CFAs were conducted for the five-dimensional structure of the scale on the data obtained with the fixed order and random order forms. The CFAs, based on the correlated-traits model, yielded descriptive values for the items of both forms.

The univariate and multivariate normality of the score distributions were tested prior to conducting the CFAs. The skewness values of the overall scores of the random order and fixed order forms were 0.09 and 0.45, respectively, and the kurtosis values were -0.47 and -0.87, respectively. Additionally, the Q-Q plots were analyzed, and it was concluded that no violations of univariate normality were present in the distributions. The multivariate normality of the data was tested using Mardia's multivariate skewness and kurtosis tests through the MVN package in R (Korkmaz et al., ). The lack of multivariate normality was apparent from the significant p-values in these tests. Therefore, the CFA and mg-CFA were conducted using the MLR estimation method in Mplus 8.0. The correlation values between the dimensions of the scale are presented in Table 2.
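Mardia's tests assess multivariate normality through the averaged third and fourth moments of the Mahalanobis-standardized data. The sketch below is a plain-Python analogue of what the R MVN package computes; it follows a common textbook formulation of the statistics, not the package's exact code, and runs on simulated data:

```python
import numpy as np
from scipy.stats import chi2, norm

def mardia_tests(X):
    """Mardia's multivariate skewness and kurtosis tests.

    Returns the p-values for the skewness (chi-square reference) and
    kurtosis (standard normal reference) test statistics."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    centered = X - X.mean(axis=0)
    s_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    D = centered @ s_inv @ centered.T        # Mahalanobis cross-products
    b1p = (D ** 3).sum() / n**2              # multivariate skewness
    b2p = (np.diag(D) ** 2).sum() / n        # multivariate kurtosis
    skew_stat = n * b1p / 6
    skew_df = p * (p + 1) * (p + 2) / 6
    kurt_z = (b2p - p * (p + 2)) / np.sqrt(8 * p * (p + 2) / n)
    return chi2.sf(skew_stat, skew_df), 2 * norm.sf(abs(kurt_z))

# Multivariate-normal toy data should typically yield large p-values;
# significant p-values, as in the article's data, indicate non-normality.
rng = np.random.default_rng(1)
skew_p, kurt_p = mardia_tests(rng.normal(size=(500, 3)))
```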

Table 2

             Shopping         Updating         Accessing        Gaming
             Fixed   Random   Fixed   Random   Fixed   Random   Fixed   Random
Sharing      0.73a   0.70a    0.53a   0.55a    0.78a   0.81a    0.46a   0.40a
Shopping                      0.60a   0.53a    0.71a   0.77a    0.58a   0.61a
Updating                                       0.39a   0.48a    0.36a   0.44a
Accessing                                                       0.42a   0.46a

Table 2 indicates that the correlations between dimensions were at similar levels for both forms. Three correlation coefficients were higher in the fixed form, while the remaining seven were higher in the random form.

As a result of the CFAs conducted for the two forms, the means, SDs, factor loadings, t values, and residual errors for each item were obtained. A study of these values in Table 3, matching each item in one form with the corresponding item in the other form, showed that the factor loadings of all the items except the last were above 0.50. The average factor loading was 0.80 for the fixed order form and 0.76 for the random order form; this difference was not statistically significant [t(58) = -1.19, p > 0.05]. When the factor loadings of the same items in the different forms are compared, 19 items had higher factor loadings in the fixed form, 10 items had higher factor loadings in the random form, and one item had the same factor loading in both forms.

Table 3

Means, SDs, factor loadings, t values, and residual errors for each of the 30 items, grouped under the five factors, for the fixed order and random order forms.

When the CFAs conducted for the random order and fixed order forms are studied (see Table 4), the fit indices obtained for the fixed order form were found to be slightly better than those of the random order form, and very close to the acceptable fit values.

The third research question is directed at portraying whether the modifications suggested to improve the models for both forms as a result of the CFAs are influenced by item order. To this end, the modification indices (MI) proposed for both forms that ensured the largest χ2 reductions were compared (see Table 5). One of the fundamental assumptions of structural equation modeling is that there should be no relationship between the residuals of observed variables (Kline, ). Considering that applying modifications conflicts with this fundamental assumption, only a limited number of modifications that can be theoretically explained and that ensure a large decrease in χ2 should be applied, in accordance with the parsimony principle.

Table 5

Form      Largest suggested MI (χ2 > 30)    Suggested χ2 reduction    Order of the items in the other form    Suggested χ2 reduction for the other form
Fixed     item 10-item 12                   84.6                      item 3-item 22                          <10
Fixed     item 1-item 2                     48.0                      item 2-item 4                           <10
Fixed     item 23-item 24                   ….8                       item 19-item 23                         <10
Random    item 27-item 28                   58.0                      item 6-item 23                          <10
Random    item 1-item 2                     53.1                      item 2-item 13                          <10

In Table 5, the first column indicates the form containing the items suggested for modification, the second column indicates the order of the items for which modifications were suggested, and the third column indicates the χ2 reduction if the modification is applied. Based on the parsimony principle, only suggested modifications that would reduce the χ2 value by 30 or more in either form were reported. The fourth column indicates the corresponding item orders for the respective MI in the other form, while the last column indicates the χ2 reduction of the suggested MI for those items.

The values in Table 5 show that defining a relationship between the residual errors of items 10 and 12 in the fixed order form resulted in a large χ2 reduction of 84.6, while these items were numbered 3 and 22 in the random order form, where the corresponding suggested modification resulted in a χ2 reduction of less than 10. Similarly, the χ2 reductions for items 1 and 2, and items 23 and 24, in the fixed order form were found to be under 10 for their corresponding items in the random order form. An analysis of the corresponding modifications in the fixed order form for the high modifications suggested in the random order form revealed a similar situation: while the suggested modification for items 27 and 28 in the random order form produced a high χ2 reduction of 58.0, the same items in the fixed order form (items 6 and 23) produced a χ2 reduction of less than 10. Likewise, the suggested modification for items 1 and 2 in the random order form was 53.1, while the same items in the fixed order form, numbered 2 and 13, resulted in a suggested modification under 10. In brief, the large suggested modifications were for items that were either successive or at most two items apart in each form, and changing the orders of these items across the forms also changed the suggested modifications.

To answer the fourth research question, mg-CFA was conducted to test measurement invariance. Despite the consensus in the literature regarding the four stages of measurement invariance, some researchers have stated that the final, strict invariance stage is an unnecessary test, on the grounds that error variances are not part of the latent variable and are therefore inconsequential when comparing latent variable means (Vandenberg and Lance, ). As a result, most researchers exclude the final stage (Putnick and Bornstein, ). Therefore, the final stage was omitted in this study, and configural, metric, and scalar invariance were tested in stages. To this end, version 8.0 of Mplus, which has a syntax that allows the simultaneous execution of all three stages (Şen, ), was used. A study of the fit indices used to evaluate whether configural invariance was achieved shows that none of these indices reached the acceptable cut-off values. Based on this finding, the model-data fit obtained was poor; therefore, not even configural invariance, the first stage of measurement invariance, was achieved. In other words, the two different forms with different item orders may lead to different evaluations and understandings of the scale by two equivalent groups, thereby causing bias.
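The decision at the configural stage amounts to screening fit indices against cut-off values. The following sketch uses commonly cited conventional thresholds (CFI/TLI at or above 0.90, RMSEA/SRMR at or below 0.08) and hypothetical index values; neither the thresholds nor the values are taken from the article:

```python
def acceptable_fit(cfi, tli, rmsea, srmr,
                   cfi_min=0.90, tli_min=0.90,
                   rmsea_max=0.08, srmr_max=0.08):
    """Check approximate fit indices against conventional cut-offs."""
    return {
        "CFI": cfi >= cfi_min,
        "TLI": tli >= tli_min,
        "RMSEA": rmsea <= rmsea_max,
        "SRMR": srmr <= srmr_max,
    }

# Hypothetical configural-model indices failing every criterion,
# mirroring the kind of outcome described above:
checks = acceptable_fit(cfi=0.85, tli=0.83, rmsea=0.09, srmr=0.10)
# If the checks fail, configural invariance is not supported and the
# more constrained metric and scalar models are not tested.
```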

1.5: Diagramming Arguments

Before we get down to the business of evaluating arguments (judging them valid or invalid, strong or weak) we still need to do some preliminary work. We need to develop our analytical skills to gain a deeper understanding of how arguments are constructed, how they hang together. So far, we've said that the premises are there to support the conclusion. But we've done very little in the way of analyzing the structure of arguments: we've just separated the premises from the conclusion. We know that the premises are supposed to support the conclusion. What we haven't explored is the question of just how the premises in a given argument do that job: how they work together to support the conclusion, what kinds of relationships they have with one another. This is a deeper level of analysis than merely distinguishing the premises from the conclusion; it will require a mode of presentation more elaborate than a list of propositions with the bottom one separated from the others by a horizontal line. To display our understanding of the relationships among premises supporting the conclusion, we are going to depict them: we are going to draw diagrams of arguments.

Here's how the diagrams will work. They will consist of three elements: (1) circles with numbers inside them: each of the propositions in the argument we're diagramming will be assigned a number, so these circled numbers in the diagram will represent the propositions; (2) arrows pointed at circled numbers: these will represent relationships of support, where one or more propositions provide a reason for believing the one pointed to; and (3) horizontal brackets: propositions connected by these will be interdependent (in a sense to be specified below).

Our diagrams will always feature the circled number corresponding to the conclusion at the bottom. The premises will be above, with brackets and arrows indicating how they collectively support the conclusion and how they're related to one another. There are a number of different relationships that premises can have to one another. We will learn how to draw diagrams of arguments by considering them in turn.

(1) Marijuana is less addictive than alcohol. In addition, (2) it can be used as a medicine to treat a variety of conditions. Therefore, (3) marijuana should be legal.

The last proposition is clearly the conclusion (the word "therefore" is a big clue), and the first two propositions are the premises supporting it. They support the conclusion independently. The mark of independence is this: each of the premises would still provide support for the conclusion even if the other weren't true; each, on its own, gives you a reason for believing the conclusion. In this case, then, we diagram the argument as follows:

Often, different premises will support a conclusion, or another premise, individually, without help from any others. When this is the case, we draw an arrow from the circled number representing that premise to the circled number representing the proposition it supports.
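The bookkeeping behind these diagrams can be captured in a few lines of code. The sketch below (my own illustrative representation, not part of the text) records each arrow as a (premises, supported claim) pair, where a tuple containing several premises stands for a bracket of joint support, while separate entries are independent arrows:

```python
# Minimal sketch of an argument diagram: arrows are (premises, conclusion)
# pairs. Several premises in one tuple = joint (bracketed) support;
# separate entries = independent arrows.

class Diagram:
    def __init__(self):
        self.links = []  # list of (premises-tuple, supported-proposition)

    def arrow(self, premises, conclusion):
        # One call with several premises records joint support;
        # separate calls record independent arrows.
        self.links.append((tuple(premises), conclusion))

    def supporters(self, claim):
        # All premise groups pointing at a given claim.
        return [ps for ps, c in self.links if c == claim]

# The marijuana argument: premises 1 and 2 each independently support 3.
d = Diagram()
d.arrow([1], 3)
d.arrow([2], 3)
print(d.supporters(3))  # [(1,), (2,)]
```

Two singleton groups pointing at the conclusion correspond to two separate arrows in the drawn diagram; a single group `(1, 2)` would instead correspond to one bracket with one arrow.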

Consider this simple argument: (1) Automatic weapons should be illegal. (2) They can be used to kill large numbers of people in a short amount of time. This is because (3) all you have to do is hold down the trigger and bullets come flying out in rapid succession.

The conclusion of this argument is the first proposition, so the premises are propositions 2 and 3. Notice, though, that there's a relationship between those two claims. The third sentence starts with the phrase "This is because", indicating that it provides a reason for another claim. The other claim is proposition 2; "This" refers to the claim that automatic weapons can kill large numbers of people quickly. Why should I believe that they can do that? Because all one has to do is hold down the trigger to release lots of bullets really fast. Proposition 2 provides immediate support for the conclusion (automatic weapons can kill lots of people really quickly, so we should make them illegal); proposition 3 supports the conclusion more indirectly, by giving support to proposition 2. Here is how we diagram in this case:

Some premises support their conclusions more directly than others. Premises provide more indirect support for a conclusion by providing a reason to believe another premise that supports the conclusion more directly. That is, some premises are intermediate between the conclusion and other premises.

Joint Premises

Sometimes premises need each other: the job of supporting another proposition can't be done by each on its own; they can only provide support together, jointly. Far from being independent, such premises are interdependent. In this situation, on our diagrams, we join together the interdependent premises with a bracket underneath their circled numbers.

There are a number of different ways in which premises can provide joint support. Sometimes, premises just fit together like a hand in a glove; or, switching metaphors, one premise is like the key that fits into the other to unlock the proposition they jointly support. An example can make this clear:

(1) The chef has decided that either salmon or chicken will be tonight's special. (2) Salmon won't be the special. Therefore, (3) the special will be chicken.

Neither premise 1 nor premise 2 can support the conclusion on its own. A useful rule of thumb for checking whether one proposition can support another is this: read the first proposition, then say the word "therefore", then read the second proposition; if it doesn't make any sense, then you can't draw an arrow from the one to the other. Let's try it here: "The chef has decided that either salmon or chicken will be tonight's special; therefore, the special will be chicken." That doesn't make any sense. What happened to salmon? Proposition 1 can't support the conclusion on its own. Neither can the second: "Salmon won't be the special; therefore, the special will be chicken." Again, that makes no sense. Why chicken? What about steak, or lobster? The second proposition can't support the conclusion on its own, either; it needs help from the first proposition, which tells us that if it's not salmon, it's chicken. Propositions 1 and 2 need each other; they support the conclusion jointly. This is how we diagram the argument:
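The interdependence of these two premises can be verified mechanically. The sketch below (an illustration of my own; the helper `entails` and the propositional encoding are not from the text) brute-forces all truth assignments over "salmon" and "chicken" and checks whether the premises entail the conclusion, alone and together:

```python
# Check entailment by brute force over all truth assignments: the premises
# entail the conclusion iff every world making all premises true also
# makes the conclusion true.

from itertools import product

def entails(premises, conclusion):
    for salmon, chicken in product([True, False], repeat=2):
        world = {"salmon": salmon, "chicken": chicken}
        if all(p(world) for p in premises) and not conclusion(world):
            return False  # counterexample world found
    return True

p1 = lambda w: w["salmon"] or w["chicken"]  # (1) salmon or chicken is the special
p2 = lambda w: not w["salmon"]              # (2) salmon won't be the special
c  = lambda w: w["chicken"]                 # (3) the special will be chicken

print(entails([p1], c))      # False: (1) alone doesn't get us to (3)
print(entails([p2], c))      # False: neither does (2) alone
print(entails([p1, p2], c))  # True: jointly they entail (3)
```

This mirrors the "therefore" test: each premise alone admits a counterexample (salmon for the first, steak-or-lobster-style worlds for the second), but together they rule every counterexample out.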

The same diagram would depict the following argument:

(1) John Le Carre gives us realistic, three-dimensional characters and complex, interesting plots. (2) Ian Fleming, on the other hand, presents an unrealistically glamorous picture of international espionage, and his plotting isn't what you'd call immersive. (3) Le Carre is a better author of spy novels than Fleming.

In this example, the premises work jointly in a different way than in the previous example. Rather than fitting together hand-in-glove, these premises each give us half of what we need to arrive at the conclusion. The conclusion is a comparison between two authors. Each of the premises makes claims about one of the two authors. Neither one, on its own, can support the comparison, because the comparison is a claim about both of them. The premises can only support the conclusion together. We would diagram this argument the same way as the last one.

Another common pattern for joint premises is when general propositions need help to provide support for particular propositions. Consider the following argument:

(1) People shouldn't vote for racist, incompetent candidates for president. (2) Donald Trump seems to make a new racist remark at least twice a week. And (3) he lacks the competence to run even his own (failed) businesses, let alone the whole country. (4) You shouldn't vote for Trump to be the president.

The conclusion of the argument, the thing it's trying to convince us of, is the last proposition: you shouldn't vote for Trump. This is a particular claim: it's a claim about an individual person, Trump. The first proposition in the argument, on the other hand, is a general claim: it asserts that, generally speaking, people shouldn't vote for incompetent racists; it makes no mention of an individual candidate. It cannot, therefore, support the particular conclusion (about Trump) on its own. It needs help from other particular claims (propositions 2 and 3) that tell us that the individual in the conclusion, Trump, meets the conditions laid out in the general proposition 1: racism and incompetence. This is how we diagram the argument:

Occasionally, an argumentative passage will only explicitly state one of a set of joint premises because the others "go without saying": they are part of the body of background information about which both speaker and audience agree. In the last example, that Trump was an incompetent racist was not uncontroversial background information. But consider this argument:

(1) It would be good for the country to have a woman with lots of experience in public office as president. (2) People should vote for Hillary Clinton.

Diagramming this argument seems straightforward: an arrow pointing from 1 to 2. But we've got the same relationship between the premise and conclusion as in the last example: the premise is a general claim, mentioning no individual at all, while the conclusion is a particular claim about Hillary Clinton. Doesn't the general premise "need help" from particular claims to the effect that the individual in question, Hillary Clinton, meets the conditions set forth in the premise, i.e., that she's a woman and that she has lots of experience in public office? No, not really. Everybody knows those things about her already; they go without saying, and can therefore be left unstated (implicit, tacit).

But suppose we had included those obvious truths about Clinton in our presentation of the argument; suppose we had made the tacit premises explicit:

(1) It would be good for the country to have a woman with lots of experience in public office as president. (2) Hillary Clinton is a woman. And (3) she has deep experience with public offices, as a First Lady, U.S. Senator, and Secretary of State. (4) People should vote for Hillary Clinton.

How do we diagram this? Earlier, we talked about a rule of thumb for determining whether or not it's a good idea to draw an arrow from one number to another in a diagram: read the sentence corresponding to the first number, say the word "therefore", then read the sentence corresponding to the second number; if it doesn't make sense, then the arrow is a bad idea. But if it does make sense, does that mean you should draw the arrow? Not necessarily. Consider the first and last sentences in this passage. Read the first, then "therefore", then the last. Makes pretty good sense! That's just the original formulation of the argument with the tacit propositions remaining implicit. And in that case we said it would be OK to draw an arrow from the general premise's number straight to the conclusion's. But when we add the tacit premises, the second and third sentences in this passage, we can't draw an arrow directly from (1) to (4). To do so would obscure the relationship among the first three propositions and misrepresent how the argument works. If we drew an arrow from (1) to (4), what would we do with (2) and (3) in our diagram? Do they get their own arrows, too? No, that won't do. Such a diagram would be telling us that the first three propositions each independently provide a reason for the conclusion. But they're clearly not independent; there's a relationship among them that our diagram must capture, and it's the same relationship we saw in the parallel argument about Trump, with the particular claims in the second and third propositions working together with the general claim in the first:

The arguments we've looked at thus far have been quite short: only two or three premises. But of course some arguments are longer than that. Some are much longer. It may prove instructive, at this point, to tackle one of these longer bits of reasoning. It comes from the (fictional) master of analytical deductive reasoning, Sherlock Holmes. The following passage is from the first Holmes story, A Study in Scarlet, one of the few novels Arthur Conan Doyle wrote about his most famous character, and it's a bit of early dialogue that takes place shortly after Holmes and his longtime associate Dr. Watson meet for the first time. At that first meeting, Holmes did his typical Holmes-y thing, where he takes a quick glance at a person and then immediately makes some startling inference about them, stating some fact about them that it seems impossible he could have known. Here they are, Holmes and Watson, talking about it a day or two later. Holmes is the first to speak:

"Observation with me is second nature. You appeared to be surprised when I told you, on our first meeting, that you had come from Afghanistan."

"You were told, no doubt."

"Nothing of the sort. I knew you came from Afghanistan. From long habit the train of thoughts ran so swiftly through my mind, that I arrived at the conclusion without being conscious of intermediate steps. There were such steps, however. The train of reasoning ran, 'Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan.' The whole train of thought did not occupy a second. I then remarked that you came from Afghanistan, and you were astonished." (Also excerpted in Copi and Cohen, Introduction to Logic 13e, pp. 58-59.)

This is an extended inference, with lots of propositions leading to the conclusion that Watson had been in Afghanistan. Before we draw the diagram, let&#;s number the propositions involved in the argument:

  1. Watson was in Afghanistan.
  2. Watson is a medical man.
  3. Watson is a military man.
  4. Watson is an army doctor.
  5. Watson has just come from the tropics.
  6. Watson's face is dark.
  7. Watson's skin is not naturally dark.
  8. Watson's wrists are fair.
  9. Watson has undergone hardship and sickness.
  10. Watson's face is haggard.
  11. Watson's arm has been injured.
  12. Watson holds his arm stiffly and unnaturally.
  13. Only in Afghanistan could an English army doctor have been in the tropics, seen much hardship and got his arm wounded.

Lots of propositions, but they're mostly straightforward, right from the text. We just had to do a bit of paraphrasing on the last one: Holmes asks a rhetorical question and answers it, the upshot of which is the general proposition in 13. We know that proposition 1 is our conclusion, so that goes at the bottom of the diagram. The best thing to do is to start there and work our way up. Our next question is: which premise or premises support that conclusion most directly? What goes on the next level up on our diagram?

It seems fairly clear that proposition 13 belongs on that level. The question is whether it is alone there, with an arrow from 13 to 1, or whether it needs some help. The answer is that it needs help. This is the general/particular pattern we identified above. The conclusion is about a particular individual, Watson. Proposition 13 is entirely general (presumably Holmes knows this because he reads the paper and knows the disposition of Her Majesty's troops throughout the Empire); it does not mention Watson. So proposition 13 needs help from other propositions that give us the relevant particulars about the individual, Watson. A number of conditions are laid out that a person must meet in order for us to conclude that they've been in Afghanistan: army doctor, being in the tropics, undergoing hardship, getting wounded. That Watson satisfies these conditions is asserted by, respectively, propositions 4, 5, 9, and 11. Those are the propositions that must work jointly with the general proposition 13 to give us our particular conclusion about Watson:

Next, we must figure out what happens at the next level up. How are propositions 4, 5, 13, 9, and 11 justified? As we noted, the justification for 13 happens off-screen, as it were. Holmes is able to make that generalization because he follows the news and knows, presumably, that the only place in the British Empire where army troops are actively fighting in the tropics is Afghanistan. The justification for the other propositions, however, is right there in the text.

Let's take them one at a time. First, proposition 4: Watson is an army doctor. How does Holmes support this claim? With propositions 2 and 3, which tell us that Watson is a medical and a military man, respectively. This is another pattern we've identified: these two propositions jointly support 4, because they each provide half of what we need to get there. There are two parts to the claim in 4: army and doctor. 2 gives us the doctor part; 3 gives us the army part. 2 and 3 jointly support 4.

Skipping 5 (it's a bit more involved), let's turn to 9 and 11, which are easily dispatched. What's the reason for believing 9, that Watson has suffered hardship? Go back to the passage. It's his haggard face that testifies to his suffering. Proposition 10 supports 9. Now 11: what evidence do we have that Watson's arm has been injured? Proposition 12: he holds it stiffly and unnaturally. 12 supports 11.

Finally, proposition 5: Watson was in the tropics. There are three propositions involved in supporting this one: 6, 7, and 8. Proposition 6 tells us Watson's face is dark; 7 tells us that his skin isn't naturally dark; 8 tells us his wrists are fair (light-colored skin). It's tempting to think that 6 on its own (dark skin) supports the claim that he was in the tropics. But it does not. One can have dark skin without having visited the tropics, provided one's skin is naturally dark. What tells us Watson has been in the tropics is that he has a tan: his skin is dark and that's not its natural tone. 6 and 7 jointly support 5. And how do we know Watson's skin isn't naturally dark? By checking his wrists, which are fair: proposition 8 supports 7.

So this is our final diagram:

And there we go. An apparently unwieldy passage (thirteen propositions!) turns out not to be so bad. The lesson is that we must go step by step: start by identifying the conclusion, then ask which proposition(s) most directly support it; from there, work back until all the propositions have been diagrammed. Every long argument is just composed out of smaller, easily analyzed inferences.
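The step-by-step structure just described can be encoded compactly. In the sketch below (my own representation, not from the text), each entry maps a supported proposition to the groups of premises backing it, with a multi-premise tuple marking joint (bracketed) support; a small recursive helper then reports how many levels of inference sit beneath each claim:

```python
# The Holmes diagram as a mapping from each supported proposition to the
# premise groups backing it. A tuple of several premises = joint support.
holmes = {
    1: [(4, 5, 9, 11, 13)],  # all five jointly support the conclusion
    4: [(2, 3)],             # medical man + military man -> army doctor
    5: [(6, 7)],             # dark face + not naturally dark -> tropics
    7: [(8,)],               # fair wrists -> skin not naturally dark
    9: [(10,)],              # haggard face -> hardship and sickness
    11: [(12,)],             # stiff, unnatural arm -> injured arm
}

def depth(prop, diagram):
    """Number of inference levels below a proposition (0 for leaves)."""
    premises = [p for group in diagram.get(prop, []) for p in group]
    if not premises:
        return 0
    return 1 + max(depth(p, diagram) for p in premises)

print(depth(1, holmes))  # 3: the longest chain is 8 -> 7 -> 5 -> 1
```

Working "back until all the propositions have been diagrammed" amounts to following this mapping from the conclusion down to the leaves (2, 3, 6, 8, 10, 12, and 13), which have no supporters inside the argument itself.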
