I have the directions in a Word document. Are you able to prepare the table with the analysis figures and formulas? I have attached a few resources and readings for this week, but my Word document below contains the directions. It is labeled Module 4 Directions Discussion Post and Notes.docx
Please let me know if there is anything else you need, or if you are unable to complete the reflection and assignment.
There really isn't a word count or a number of pages required, just an explanation of the item analysis that you created and how you created it.
The directions are below and attached in a Word document.
Item Analysis
(NEED THIS COMPLETED) Order 2155165
Let's begin here
How do we determine that a person has learned knowledge about a certain subject? Or how do we determine that a learner has learned an individual skill in a certain area? What kinds of tests allow us to make this assessment?
Share your brief reflections in the Module 4 Discussion post.
Now that you have carefully constructed your test items, you must ascertain that they are as good as you really thought they were. Every test item can be improved! To do this, a little bit of statistics is important. No need to worry about being a stats genius. We just need to focus on learning some key concepts related to a statistical analysis of test items, through a procedure called test item analysis.
There are three commonly used indices for conducting an item analysis: Item Difficulty, Item Discrimination Index, and Distractor Analysis. The procedure for conducting an item analysis is found in Hale and Astolfi (2011, p. 151). Make sure to read this section carefully, as you will be doing an item analysis in this week's discussion AND
Discussion Board 4
First, share your thoughts on the reflection question from Let's Begin Here.
I would like you to work on this question by yourself; however, you can seek help from a peer if you need assistance.
Using the item analysis calculations discussed in Hale and Astolfi and the two links given below, now try your hand at practicing these key determinants of how good your items are and how well or poorly your students will do.
Conduct an item analysis with the following data (in the table below).
A test is given to 20 students. The table below shows the results for Qs 1-5 and the total score for each student on the whole test. I would like you to do the item analysis for Q1 only (you can ignore Q2-5 for now, or do it for all 5 questions). The "max" in the last row is the maximum credit assigned to each question (the maximum score for the whole test is the sum of these values, i.e., 10).
Column X lists the 20 students, numerically represented.
Column Y shows the answers for Q1, where 1 = correct and 0 = incorrect.
Column AD shows each student's total score on the quiz.
Here are the steps in the worked-out example at
https://weber.instructure.com/courses/351442/pages/analyze-and-revise-test?module_item_id=2986040
Now, this is what you do: Calculating the Difficulty Index
1. Rank the scores from highest to lowest.
2. Keep the 5 tests with the highest scores in one group (use around 25% of the class as a reference).
3. Keep the 5 tests with the lowest scores in another group.
4. Set aside the rest of the tests; they will not be included in the analysis.
5. Determine how many students are in the high-scoring group and how many are in the low-scoring group.
6. Now, compute the difficulty level of the question. Here's what to do. The difficulty level of a question is the percentage of students in both the high-scoring (HS) group and low-scoring (LS) group who answered the question correctly (Worthen et al., 1999). Out of the ten students you kept, count how many in the high-scoring group and how many in the low-scoring group answered Question 1 correctly.
Calculate the difficulty level of Question 1 with the following formula:
(the number of correct answers in both HS and LS / the total number of students in both groups) * 100
Check out the worked example, if you need it, for calculating the difficulty level at
https://weber.instructure.com/courses/351442/pages/compute-difficulty-level
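The steps above can be sketched in Python. The scores and Q1 answers here are made-up placeholders, not the assignment's actual table; substitute your own data from columns Y and AD.

```python
# Hypothetical data: total scores (column AD) and Q1 answers (column Y, 1 = correct)
# for 20 students. These values are invented for illustration only.
totals = [9, 3, 8, 5, 10, 2, 7, 6, 4, 9, 1, 8, 5, 7, 3, 10, 6, 2, 8, 4]
q1 = [1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0]

# Step 1: rank students from highest to lowest total score.
ranked = sorted(range(len(totals)), key=lambda i: totals[i], reverse=True)

# Steps 2-4: keep the 5 highest and 5 lowest tests; set aside the rest.
high = ranked[:5]   # high-scoring (HS) group, ~25% of the class
low = ranked[-5:]   # low-scoring (LS) group

# Steps 5-6: count correct answers to Q1 in both groups, then apply the formula:
# (correct in HS + correct in LS) / (total students in both groups) * 100
correct = sum(q1[i] for i in high) + sum(q1[i] for i in low)
p = correct / (len(high) + len(low)) * 100
print(p)  # 50.0 for this placeholder data
```

With these invented numbers, all 5 high scorers and none of the low scorers got Q1 right, giving a difficulty level of 50%.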
* For criterion-referenced tests (CRTs), with their emphasis on mastery testing, many items on an exam form will have p-values of .9 or above. Norm-referenced tests (NRTs), on the other hand, are designed to be harder overall and to spread out the examinees' scores. Thus, many of the items on an NRT will have difficulty indexes between .4 and .6.
Next, use the calculation methods described in Hale and Astolfi or in the link below
https://weber.instructure.com/courses/351442/pages/compute-discrimination-index
to determine the Item Discrimination Index (D) of the item, to gain an understanding of how well the item discriminates between successful and unsuccessful test-takers.
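As a sketch, one common formulation of the discrimination index is D = (correct in HS - correct in LS) / group size; confirm the exact method against Hale and Astolfi or the linked page, since they may define it slightly differently. The counts below are placeholders.

```python
def discrimination_index(high_correct, low_correct, group_size):
    """Common formulation of item discrimination D: the proportion of the
    high-scoring group answering correctly minus the proportion of the
    low-scoring group answering correctly. Ranges from -1 to +1; higher
    values mean the item better separates strong from weak students."""
    return (high_correct - low_correct) / group_size

# Placeholder example: 4 of 5 high scorers and 2 of 5 low scorers got the item right.
d = discrimination_index(4, 2, 5)
print(d)
```

A positive D means high scorers got the item right more often than low scorers; a D near zero or negative suggests the item should be revised.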
Since this data set does not include the whole test and its report of the students' answer choices, we are not going to do the third step of the item analysis: distractor analysis.
Submit to the discussion board your calculations for both steps of the item analysis: item difficulty and item discrimination. Don't just submit the answer; for each, show how you followed the steps to arrive at the answer.
Additionally, based on your understanding of item analysis and the results you got for the two steps, what conclusions do you draw about the items?
