Validity refers to which characteristic of an instrument?

Qualities of Good Measuring Instruments

I. Validity: the extent to which the instrument really measures what it is intended to measure.
II. Reliability: the extent to which a test is dependable, self-consistent, and stable.


We have already considered one factor that researchers take into account when evaluating a measure: reliability.

When a measure has good test-retest reliability and internal consistency, researchers should be more confident that the scores represent what they are supposed to. There has to be more to it, however, because a measure can be extremely reliable but have no validity whatsoever.
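As a concrete illustration of internal consistency, here is a minimal sketch of the split-half approach: the items of a questionnaire are divided into two sets, each respondent's score on each half is computed, and the correlation between the two half-scores is examined. The sketch is in Python with NumPy, and the respondents, items, and scores are all invented for the example; it is not taken from any particular published instrument.

```python
import numpy as np

# Hypothetical data: 6 respondents answering a 6-item questionnaire,
# each item scored 1-5 (rows = respondents, columns = items).
responses = np.array([
    [5, 4, 5, 4, 5, 4],
    [4, 4, 3, 4, 4, 3],
    [2, 3, 2, 2, 3, 2],
    [5, 5, 4, 5, 4, 5],
    [1, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
])

# Split the items into two sets (here: odd- vs. even-numbered items)
# and compute each respondent's total score on each half.
half_a = responses[:, 0::2].sum(axis=1)   # items 1, 3, 5
half_b = responses[:, 1::2].sum(axis=1)   # items 2, 4, 6

# The split-half correlation: how strongly the two half-scores agree
# across respondents. Values near +1 suggest good internal consistency.
r = np.corrcoef(half_a, half_b)[0, 1]
print(f"Split-half correlation: r = {r:.2f}")
```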

Imagine, for example, that someone proposed to measure self-esteem by the length of people's index fingers. Although this measure would have extremely good test-retest reliability, it would have absolutely no validity. How, then, do researchers evaluate validity? Here we consider three basic kinds: face validity, content validity, and criterion validity. Face validity is the extent to which a measurement method appears, on its face, to measure the construct of interest. Most people would expect a self-esteem questionnaire to include items about whether they see themselves as a person of worth and whether they think they have good qualities. So a questionnaire that included these kinds of items would have good face validity. The finger-length method of measuring self-esteem, on the other hand, seems to have nothing to do with self-esteem and therefore has poor face validity.

Although face validity can be assessed quantitatively—for example, by having a large sample of people rate a measure in terms of whether it appears to measure what it is intended to—it is usually assessed informally.

Face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed to. It is also the case that many established measures in psychology work quite well despite lacking face validity. The Minnesota Multiphasic Personality Inventory-2 (MMPI-2), for example, measures many personality characteristics and disorders by having people decide whether each of 567 different statements applies to them, even though many of the statements have no obvious relationship to the construct that they measure.

Content validity is the extent to which a measure covers all aspects of the construct of interest. For example, if a researcher conceptually defines test anxiety as involving both sympathetic nervous system activation (leading to nervous feelings) and negative thoughts, then their measure of test anxiety should include items about both nervous feelings and negative thoughts.

Or consider that attitudes are usually defined as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that he or she thinks positive thoughts about exercising, feels good about exercising, and actually exercises. Like face validity, content validity is not usually assessed quantitatively. Instead, it is assessed by carefully checking the measurement method against the conceptual definition of the construct.

Criterion validity is the extent to which people's scores on a measure are correlated with other variables, known as criteria, that one would expect them to be correlated with. For example, if people's scores on a new measure of test anxiety were negatively correlated with their performance on an important exam, that would be evidence that the scores really represent test anxiety. But if it were found that people scored equally well on the exam regardless of their test anxiety scores, then this would cast doubt on the validity of the measure. A criterion can be any variable that one has reason to think should be correlated with the construct being measured, and there will usually be many of them.

For example, one would expect test anxiety scores to be negatively correlated with exam performance and course grades and positively correlated with general anxiety and with blood pressure during an exam. Or imagine that a researcher develops a new measure of physical risk taking; people's scores on it should be correlated with their actual risk-taking behavior. Criteria can also include other measures of the same construct.

For example, one would expect new measures of test anxiety or physical risk taking to be positively correlated with existing measures of the same constructs.

This is known as convergent validity. Assessing convergent validity requires collecting data using the measure. Discriminant validity, on the other hand, is the extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct. For example, self-esteem is a general attitude toward the self that is fairly stable over time.

It is not the same as mood, which is how good or bad one happens to be feeling right now. If a new measure of self-esteem were highly correlated with a measure of mood, it could be argued that the new measure is not really measuring self-esteem; it is measuring mood instead. If, instead, the new measure shows only low correlations with measures of mood and other conceptually distinct variables, those low correlations provide evidence that the measure is reflecting a conceptually distinct construct.
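To make the convergent/discriminant contrast concrete, here is a small sketch of the correlations a researcher might examine: a new self-esteem measure should correlate strongly with an existing self-esteem measure (convergent validity) but only weakly with a mood measure (discriminant validity). It is written in Python with NumPy, and every score below is fabricated purely for illustration.

```python
import numpy as np

# Hypothetical scores for 8 participants on three measures.
new_self_esteem      = np.array([30, 25, 18, 35, 22, 28, 15, 32])
existing_self_esteem = np.array([28, 24, 20, 36, 21, 27, 16, 33])  # same construct
mood_today           = np.array([ 4,  9,  5,  6,  3,  8,  7,  5])  # distinct construct

def pearson(x, y):
    """Pearson correlation between two score arrays."""
    return np.corrcoef(x, y)[0, 1]

# Convergent validity: the new measure should correlate highly with an
# existing measure of the same construct.
print(f"Convergent  (new vs existing self-esteem): r = {pearson(new_self_esteem, existing_self_esteem):.2f}")

# Discriminant validity: the new measure should correlate only weakly with
# a measure of a conceptually distinct construct such as current mood.
print(f"Discriminant (new self-esteem vs mood):    r = {pearson(new_self_esteem, mood_today):.2f}")
```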

Key terms: A split-half correlation is a method of assessing internal consistency by splitting the items into two sets and examining the relationship between them. Criteria, in reference to criterion validity, are the variables that one would expect to be correlated with the measure. Discriminant validity is the extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct.

As an example of content validity, consider a mathematics teacher who develops an end-of-semester algebra test for her class. The test should cover every form of algebra that was taught in the class; if it leaves out topics that were taught, the results will not accurately reflect students' algebra knowledge. Similarly, if she includes questions that are not related to algebra, the results are no longer a valid measure of algebra knowledge.

Face validity considers how suitable the content of a test seems to be on the surface. Suppose, for example, you have a survey intended to measure people's eating habits. You review the survey items, which ask questions about every meal of the day and snacks eaten in between for every day of the week. On its surface, the survey seems like a good representation of what you want to test, so you consider it to have high face validity. Face validity is only an informal, subjective judgment, but it can be useful in the initial stages of developing a method.

Criterion validity evaluates how well a test can predict a concrete outcome, or how well the results of your test approximate the results of another test.

Criterion variables can be very difficult to find. To evaluate criterion validity, you calculate the correlation between the results of your measurement and the results of the criterion measurement. If there is a high correlation, this gives a good indication that your test is measuring what it intends to measure: when the outcomes of the two measurements are very similar, the new test has high criterion validity.
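As an illustration of that calculation, here is a minimal sketch that correlates a hypothetical test anxiety measure with exam performance, the kind of criterion discussed above. It uses Python with NumPy, and the anxiety scores and exam grades are fabricated for the example; a clearly negative correlation would support criterion validity, while a correlation near zero would cast doubt on it.

```python
import numpy as np

# Hypothetical data for 8 students: scores on a new test anxiety measure
# and their percentage grades on an important exam (the criterion).
anxiety_scores = np.array([12, 30, 22,  8, 27, 15, 35, 18])
exam_grades    = np.array([88, 61, 70, 93, 64, 81, 55, 77])

# Criterion validity check: correlate the measure with the criterion.
# Test anxiety is expected to be NEGATIVELY correlated with exam performance.
r = np.corrcoef(anxiety_scores, exam_grades)[0, 1]
print(f"Correlation with exam performance: r = {r:.2f}")

# A strongly negative r would be evidence of criterion validity;
# an r near zero would cast doubt on the measure.
```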



