Database Reliability and Efficiency
Jennifer P. Lusung
Purdue Global University
Database Reliability and Efficiency
Achieving a reasonable level of test reliability and validity cannot be determined by any single measure, and the methods used to establish it vary in effectiveness. Comparing multiple tests requires examining how each was formulated: its standards, its plausibility, and its outcomes. Reliability and validity are technical properties of a test that indicate its quality and usefulness, which makes them essential considerations for any educator writing test items. An educator's analysis of these properties supports student understanding of the information reported in test manuals and reviews, and guides the use of score data to evaluate a test's effectiveness. Building on the previous discussion of evaluation tools, accountability for measuring overall student satisfaction and critical thinking remains a priority; incorporating such measurement into the curriculum tracks improvement, but it requires an extensive tool for testing reliability and validity.
Test Reliability and Validity
Test reliability is defined as the consistency of test scores across repeated administrations. If two differently formulated test-question databases are administered to the same class of students, the outcomes are assumed to remain stable because of the students' familiarity with the topics; if the students received radically different scores the second time, the test would have low reliability. Factors that may affect test reliability include taking the test more than once, test length, the quality of the examination, and the test-retest method itself (Billings & Halstead, 2016). The two sample questionnaires in the specified databases contrast stand-alone multiple-choice questions (MCQs) with integrated clinical-scenario samples presented in problem-based form, a method that integrates the medical specialties with the fundamentals of nursing. According to Vuma and Sa (2016), one of the best measurement tools for estimating the reliability of scores is the Kuder-Richardson Formula 20 (KR-20), reported alongside item analysis of difficulty level, discrimination index, item distractors, and student performance (Vuma & Sa, 2016). The MCQ samples assess content taught in the classroom, and their content validity for critical thinking and learned knowledge develops rationally from the questionnaire. For the MCQs and the specified databases, a quantitative method of assessing test-item validity also reviews the discrimination between students who answered correctly and those who answered incorrectly (Vuma & Sa, 2016).
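The KR-20 estimate named by Vuma and Sa (2016) can be illustrated with a short worked sketch. The 0/1 score matrix below is invented for illustration only (it is not data from either instructor's database); the formula follows the standard definition KR-20 = (k/(k−1))(1 − Σpq/σ²), using the population variance of total scores.

```python
# Hypothetical worked example of the Kuder-Richardson Formula 20 (KR-20).
# The score matrix is made up for illustration, not taken from the databases.

def kr20(scores):
    """KR-20 = (k/(k-1)) * (1 - sum(p*q) / variance of total scores),
    where k = number of items, p = proportion answering an item correctly,
    q = 1 - p. Input: 0/1 matrix, rows = students, columns = items."""
    n = len(scores)          # number of students
    k = len(scores[0])       # number of items
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in scores) / n   # item difficulty (p-value)
        sum_pq += p * (1 - p)
    totals = [sum(row) for row in scores]       # each student's total score
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n   # population variance
    return (k / (k - 1)) * (1 - sum_pq / var)

# Five students, four items (1 = correct, 0 = incorrect) -- invented data.
matrix = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(kr20(matrix), 3))  # -> 0.696
```

Values near 1.0 indicate internally consistent scores; values near 0 indicate items that do not hang together.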
In any nursing school program or licensure review, a database of nursing exam questions is used to prepare for the National Council Licensure Examination (NCLEX). These question databases sharpen problem-solving and critical-thinking skills before the actual exam. Examinations like the given databases are common assessment and evaluation tools in any school organization, with questions categorized as multiple choice, true-false, and matching. A discrimination index is computed from the students who answered each item in the database correctly and incorrectly. As Haladyna wrote regarding samples and applications of the test, such a database can show the difference between learners and non-learners for the questionnaire as formulated (as cited in Billings & Halstead, 2016, p. 438). Each question consists of one item (stem) with several possible answers (choices), including the correct answer and several incorrect answers (distractors) (Simbak et al., 2014). An example of the true-false item is question 3 from the Instructor A database, which ten students overall may have answered incorrectly. Non-learner students also chose incorrect responses because of distractors that trade on surface familiarity with the subject matter. Similarly, for Test B question 4, as in the Test A database, the number of errors is significant. For items of the same form, the chance of guessing correctly is 20 percent on a single-best-answer (SBA) question with a stem and five options, while the odds of guessing correctly on each true-false item are fifty percent, because of the distractors (Simbak et al., 2014). Students respond to matching questions by pairing each of a set of stems with one of the choices provided on the exam.
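The guessing odds cited from Simbak et al. (2014) amount to simple expected-value arithmetic. The sketch below uses illustrative item counts (ten items per format is an assumption, not a count from either database) to show how much of a score blind guessing alone would produce under each format.

```python
# Expected score from pure guessing: each item contributes 1/n_options
# to the expected total. Item counts below are illustrative assumptions.

def expected_score_from_guessing(n_items, n_options):
    """Expected number of correct answers from random guessing."""
    return n_items * (1.0 / n_options)

print(expected_score_from_guessing(10, 5))  # SBA, 5 options: 20% -> 2.0
print(expected_score_from_guessing(10, 2))  # true-false: 50% -> 5.0
```

This is why a true-false section rewards guessing more generously than an SBA section of the same length.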
Comparing the questions in database A with those in database B, Test A is easier to read, review, and mark, but students require more time to respond to the Test B questions than to comparable multiple-choice or true-false items (Jancarík & Kostelecká, 2015). Both database tests include distractors in some questions that influence the overall test score. This effect is known for multiple-choice and alternative-response tests but not for matching items, and it can carry over to answers on the true-false and multiple-choice items depending on how the problems are formed. Therefore, the final score of a learner, or of a student who responds too quickly, reflects the time consumed by well-generated, plausible distractors.
Discrimination index, D = (Ru − Rl) / (½T), Instructor A test questions:
1) D = (7 − 3)/10 = 0.4
2) D = (2 − 8)/10 = −0.6
3) D = (10 − 0)/10 = 1.0
4) D = (0 − 10)/10 = −1.0
5) D = (0 − 10)/10 = −1.0
Note: referred from Unit 9 Assignment: Databases, Instructor A Test Questions (Wittmann-Price, Godshall, & Wilson, 2017).
Discrimination index, D = (Ru − Rl) / (½T), Instructor B test questions:
1) D = (5 − 5)/10 = 0
2) D = (7 − 2)/10 = 0.5
3) D = (10 − 0)/10 = 1.0
4) D = (0 − 10)/10 = −1.0
5) D = (5 − 5)/10 = 0
Note: referred from Unit 9 Assignment: Databases, Instructor B Test Questions (Wittmann-Price, Godshall, & Wilson, 2017).
Only two learners in the top third of the class chose the correct answer on Test A question 2, so the D value, or split biserial, is D = (2 − 8)/10 = −6/10 = −0.6 (Wittmann-Price, Godshall, & Wilson, 2017).
An item with D = (5 − 5)/10 = 0 is a non-discriminating question, because equal numbers of learners in the upper and lower groups responded correctly, fifty percent answering correctly and fifty percent incorrectly (Wittmann-Price, Godshall, & Wilson, 2017).
When all learners in the upper group answered correctly and none of the learners in the lower group did, the item reaches D = (10 − 0)/10 = 1.0 (Wittmann-Price, Godshall, & Wilson, 2017).
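The discrimination-index arithmetic above can be sketched in a few lines. The (Ru, Rl) pairs are read off the Instructor A worked values, and the group size of 10 comes from the ½T = 10 denominator used throughout; nothing else is assumed.

```python
# Discrimination index D = (Ru - Rl) / (T/2): Ru and Rl are the numbers
# of correct answers in the upper and lower scoring groups, and T/2 is
# the size of one group (10 here, as in the Instructor A/B examples).

def discrimination_index(upper_correct, lower_correct, group_size):
    return (upper_correct - lower_correct) / group_size

# Instructor A, items 1-5: (Ru, Rl) pairs from the worked values above.
instructor_a = [(7, 3), (2, 8), (10, 0), (0, 10), (0, 10)]
print([discrimination_index(ru, rl, 10) for ru, rl in instructor_a])
# -> [0.4, -0.6, 1.0, -1.0, -1.0]
```

Positive values mean the stronger students answered correctly more often; negative values flag items that the weaker group got right more often, a sign the item should be reviewed.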
Item quality affects reliability: a poor item reduces it, and an excellent item increases it, because item discrimination separates learners according to their knowledge of the subject. One issue with the given test databases is the learning curve, which must be considered so that students can develop a concept of each question and understand the facts presented. A well-formulated, refined test with fewer errors supports proper test management and consistent scoring, because these database questionnaires exemplify the desired skills, behavior checks, and differently formatted items. According to Haladyna, multiple-choice formats that involve scenarios promote better critical-thinking skills and offer a desirable mix of validity, reliability, fairness, and practicality (as cited in Marcham, Nader, Turnbeaugh, & Gould, 2018, p. 47).
Billings, D. I., & Halstead, J. A. (2016). Developing and using classroom tests: Multiple-choice and alternative format test items. In Teaching in nursing: A guide for faculty (5th ed., pp. 435-440). St. Louis, MO: Elsevier Saunders.
Jancarík, A., & Kostelecká, Y. (2015). The scoring of matching questions tests: A closer look. Electronic Journal of e-Learning, 13(4), 270-276.
Marcham, C. L., Nader, J. T., Turnbeaugh, T. M., & Gould, S. (2018). Developing certification exam questions: More deliberate than you may think. Professional Safety, 63(5), 44-49.
Simbak, N. B., Myat Moe The, A., Ismail, S. B., Jusoh, N. M., Ali, T. I., Yassin, W. K., Haque, M. & Rebuan, H. A. (2014). Comparative study of different formats of MCQs: Multiple true-false and single best answer test formats, in a New Medical School of Malaysia. International Medical Journal, 21(6), 562-566.
Vuma, S., & Sa, B. (2016). A comparison of clinical-scenario (case cluster) versus stand-alone multiple choice questions in a problem-based learning environment in undergraduate medicine. Journal of Taibah University Medical Sciences. doi:10.1016/j.jtumed.2016.08.014
Wittmann-Price, R. A., Godshall, M., & Wilson, L. (2017). Using assessment and evaluation strategies. In Certified Nurse Educator (CNE) Review Manual (2nd ed., p. 205). New York, NY: Springer.