Nursing research (Reading reflection)

Reading Reflection Chapter 9

Reliability

Please submit a statement of at least 250 words (APA style) with two references regarding this week’s chapter reading. Explain some of the important aspects of the reading and provide examples when possible.

PowerPoint with Chapter 9 included below.

 

Reliability

What is Reliability?

Reliability is concerned with questions of consistency

Other terms for reliability are:

Repeatability

Reproducibility

Stability

Consistency

Predictability

Agreement

Homogeneity

Measurement

Measurement is the assignment of numbers to objects or events according to certain rules (Carmines & Zeller, 1979)

Measurement

Measurement is important in quantitative research because:

Quantification allows for powerful statistical analysis

Numbers are often more clearly communicated

Objectivity is increased

Efficiency may be increased

Levels of Measurement

Nominal: a label but nothing more

Categorical: identifies group membership

Ordinal: indicates an order

Interval: ordered, with a defined distance between scores (but no true zero point)

Ratio: ordered, with defined distances and a true zero point
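
For illustration only (not from the chapter), a brief Python sketch of what variables at each level might look like; the variables and values are assumed examples:

```python
# Assumed example variables (not from the chapter), one per level of measurement.
nominal  = ["A", "B", "AB", "O"]           # blood type: labels only, no order
ordinal  = ["mild", "moderate", "severe"]  # pain rating: ordered, spacing unknown
interval = [36.5, 37.0, 38.2]              # temperature in degrees C: equal spacing, no true zero
ratio    = [0.0, 2.5, 5.0]                 # dose in mg: equal spacing and a true zero
print(nominal, ordinal, interval, ratio)
```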

Measurement Error

The sources of error causing unreliability may be one or more of the following:

Measurement is inaccurate or inconsistent

Raters or testers are inaccurate or inconsistent

Measurement Error

The sources of error causing unreliability may be one or more of the following:

Phenomenon being measured varies from one measurement time to the next

The situation is confounding the measurement

Classic Measurement Equation

X = t + e, where X = the observed score, t = the true score, and e = random error
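
As a minimal illustration of the equation (not from the chapter), the short Python sketch below simulates observed scores as a true score plus random error; the true score and error spread are assumed values:

```python
# A minimal sketch of X = t + e: each observed score is the true score plus random error.
import random

true_score = 100.0                                 # t: the (unknown) true value
observed = [true_score + random.gauss(0, 2.0)      # e: random error, assumed N(0, 2)
            for _ in range(5)]                     # X: five repeated observations
print(observed)
print(sum(observed) / len(observed))               # averaging tends to cancel random error
```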

Consistency

To maintain consistency of measurement, there needs to be:

Interrater reliability

Intrarater reliability

Intercoder reliability

Cohen’s Kappa

A way to calculate the agreement between two coders, corrected for agreement expected by chance

K = (fo – fc) / (N – fc)

where K = kappa, fo = frequency of agreement, fc = frequency of agreement expected by chance, and N = number evaluated
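
A minimal Python sketch of the kappa formula above (the counts are invented for illustration):

```python
# Kappa from agreement counts, K = (fo - fc) / (N - fc); example numbers are made up.
def cohens_kappa(fo, fc, n):
    """fo = observed agreements, fc = agreements expected by chance, n = items rated."""
    return (fo - fc) / (n - fc)

# e.g., two coders rate 100 items, agree on 85, with 50 agreements expected by chance
print(cohens_kappa(fo=85, fc=50, n=100))   # 0.70
```

In practice, an established implementation such as scikit-learn’s cohen_kappa_score can also compute kappa directly from the two coders’ ratings.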

Test-Retest Reliability

A type of reliability that is evaluated by administering the same test to the same people or taking the same measurement on the same people after a specified period of time

The results of the two testing times are then compared statistically

Test-Retest Reliability

Factors affecting the test-retest reliability:

Assumes stability in the phenomenon being measured

May be affected by reactivity

Practice effect may also affect reliability

Test-Retest Reliability

Ways to calculate test-retest reliability include:

Pearson product moment correlation

Intraclass correlations (ICCs)
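
For illustration (not from the chapter), a minimal Python sketch of the Pearson approach using SciPy; the scores are invented:

```python
# Test-retest reliability via the Pearson product-moment correlation (invented scores).
from scipy.stats import pearsonr

time1 = [12, 15, 9, 20, 18, 14]    # scores at the first administration
time2 = [13, 14, 10, 19, 17, 15]   # same people, same test, after the retest interval
r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")  # values closer to 1.0 indicate greater stability
```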

Homogeneity

Cronbach’s alpha can be used to test the homogeneity of items within a measure

It indicates the extent to which all of the items on the test are “behaving” similarly

Homogeneity

Alpha of 0.70 is acceptable for new measures

Alpha of at least 0.80 is expected for established measures

Alphas of at least 0.90 are desirable for measures used in clinical evaluation
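
A minimal Python sketch (not from the chapter) of how alpha can be computed from an item-by-respondent score matrix; the scores are assumed for illustration:

```python
# Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores).
import numpy as np

scores = np.array([        # rows = respondents, columns = items (assumed example data)
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item across respondents
total_var = scores.sum(axis=1).var(ddof=1)   # variance of each respondent's total score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```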

Reliability of Physical Measures

Systematic error: a consistent error

Random error: inconsistent, unpredictable errors

Random errors can cancel each other out unless the researcher knows how to detect them by using the technical error of measurement (TEM)

Technical Error of Measurement (TEM)

TEM = √(Σd² / 2N)

where d = the difference between scores of paired examiners and N = the number of pairs of scores
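
A minimal Python sketch of the TEM calculation (the paired examiner readings are invented for illustration):

```python
# TEM = sqrt(sum(d^2) / (2N)), d = difference between paired examiners' scores, N = pairs.
import math

examiner_a = [52.1, 48.4, 60.2, 55.0]   # repeated physical measurements, examiner A
examiner_b = [51.8, 48.9, 59.7, 55.4]   # same subjects, examiner B (assumed values)

diffs = [a - b for a, b in zip(examiner_a, examiner_b)]
n = len(diffs)                                         # N = number of pairs of scores
tem = math.sqrt(sum(d ** 2 for d in diffs) / (2 * n))
print(f"TEM = {tem:.3f}")
```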

Improving Reliability

Thoroughly trained raters

Periodic monitoring of raters

Retest and calibrate instruments

Add appropriate items and delete those that lower the alpha coefficient to increase homogeneity

Improving Reliability

Standardize the conditions under which testing is done and minimize any distractions

Make instructions clear and standardized

 
"Looking for a Similar Assignment? Get Expert Help at an Amazing Discount!"

Hi there! Click one of our representatives below and we will get back to you as soon as possible.

Chat with us on WhatsApp