Attitude and satisfaction towards electronic exams in medical sciences faculty members

Introduction
The use of technology in medical universities, especially in Iran, has increased during recent years due to its wider availability and its potential for improving teaching and learning.1 One of the pillars of learning is the assessment of learning. In both traditional and electronic learning, an exam is a useful tool for assessing learners.2 The use of technology has led to the development of electronic exams (E-exams). E-exams are computer-based exams designed to assess students, and they can be home-based (non-isolated) or campus-based (isolated).2 Because of the different conditions of the two types and the high possibility of fraud in the home-based type, especially in Iran, this study focused on campus-based E-exams. E-exams benefit learners and the learning process when used properly.2 They offer several advantages over paper-based exams, as they can incorporate text, images, audio, video, and interactive virtual environments.1 Other advantages include ease of use, improved assessment quality, faster and more accurate scoring, speedy data analysis, rapid feedback for learners and teachers, and savings of time and paper.1-4 Alongside these benefits, E-exams also have disadvantages and challenges. They require equipment, infrastructure, and space, and there is always a chance of hardware and software problems that can lead to data loss or even changes in test results. Other disadvantages include the possibility of security intrusion into the exam system and the need for computer literacy to take the test.1-4 Both positive and negative attitudes toward E-exams have been reported in various studies, and whether E-exams should replace paper-based exams remains the subject of an ongoing debate between proponents and opponents.5-7
Faculty members' attitudes and satisfaction towards E-exams have not been widely studied, despite the potential of such research to help make E-exams more effective. According to studies, some faculty members prefer E-exams to manual methods of assessing students, while others are resistant to their use.2 Educational evaluation is of great importance in the teaching-learning process: it is used to measure student progress and improve educational systems, and it provides evidence-based reasoning about whether educational outcomes can be improved through the implementation of interventions.2,7 Currently, the use of E-exams at the Birjand University of Medical Sciences (BUMS) is expanding. Given the influence of this intervention on the teaching-learning and evaluation processes, and given the gaps and contradictions found in the literature review, this study aimed to assess faculty members' attitude and satisfaction toward E-exams at BUMS, in order to obtain the views of a key stakeholder group for improving learner evaluation and to evaluate the educational outcome of implementing E-exams.

Materials and Methods
This descriptive-analytical cross-sectional study was conducted at the BUMS, Birjand, Iran, from June to September 2020. The study population included all faculty members of the BUMS. All participants completed the Attitude Questionnaire, and faculty members with experience in conducting exams at the university E-exam center also completed the Satisfaction Questionnaire. A total of 126 faculty members participated and completed the Attitude Questionnaire; 69 completed the Satisfaction Questionnaire. The data collection tool was a researcher-created questionnaire assessing faculty members' attitude and satisfaction toward E-exams. Existing standard questionnaires (especially on satisfaction) mainly examined students' attitudes and satisfaction with E-exams, so questionnaires reflecting faculty members' perspectives were needed. In addition, because this study examined attitude toward and satisfaction with isolated (campus-based) E-exams only, not the non-isolated (home-based) type, and because existing questionnaires made no distinction between these two types, the questionnaire had to reflect this distinction. A review of existing questionnaires also showed that some dimensions or items had been neglected. Considering all these factors, a new questionnaire was created, using questionnaires from other studies as a question source.8-10 Validity was assessed to evaluate the degree of agreement between the content of the measurement tool and the purpose of the research, and reliability was assessed to determine whether the instrument gives similar results under similar conditions. The draft questionnaire was given to six specialists and experts in medical education at the BUMS to evaluate face validity.
The specialists were asked to review the questionnaire carefully and provide their corrections in writing. This review covered grammar, use of appropriate and understandable wording, placement of items, proper scoring, and the suitability of the selected dimensions. Content validity was evaluated quantitatively using the content validity ratio (CVR) and the content validity index (CVI). The CVR was calculated for each question and compared against the Lawshe table; questions with a CVR above the Lawshe threshold were retained, and questions with a lower CVR were removed. The mean CVR of the entire faculty questionnaire was 1.0, which was acceptable. The CVI calculation was based on the Waltz and Bausell validity index; questions with a CVI greater than 0.78 were retained. To evaluate reliability, the questionnaire was completed by a sample of 25 faculty members; Cronbach's alpha was 0.9, and in social science research a coefficient above 0.7 is considered acceptable. The reliability of the questionnaire was therefore confirmed.7 Participants were told the purpose of the study: to evaluate faculty attitudes toward and satisfaction with isolated (campus-based) E-exams, as distinct from home-based (non-isolated) E-exams. A campus-based E-exam is defined as an examination held in an actual physical room with a monitoring exam supervisor, conducted either on the Faradid platform or as an offline exam. These exams are taken in person, not in absentia. The university E-exam center contains many air-gapped computers arranged as separate stations, so that no student can view a neighboring monitor screen. The Faradid software can distribute questions randomly to each exam taker.
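As a minimal illustrative sketch (not the authors' actual analysis code), the two psychometric quantities used above, Lawshe's CVR and Cronbach's alpha, can be computed as follows; the expert counts and ratings in the example are hypothetical:

```python
# Illustrative sketch of the validity/reliability statistics described in
# the text. The panel sizes and item scores below are hypothetical, not
# the study's data.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cvr(n_essential, n_experts):
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2)."""
    half = n_experts / 2
    return (n_essential - half) / half

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per questionnaire item."""
    k = len(item_scores)                      # number of items
    n = len(item_scores[0])                   # number of respondents
    item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# If all six hypothetical experts rate an item "essential", CVR = 1.0,
# matching the mean CVR reported for the questionnaire.
print(cvr(6, 6))  # 1.0
```

The 0.78 CVI cutoff and the 0.7 alpha threshold mentioned in the text would then be applied to the values these functions return.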
Exclusion criteria included reluctance to participate in the study, incomplete questionnaires, and having conducted E-exams in a center other than the BUMS E-exam center.
The study used a census sampling method. The questionnaire had three sections: demographic information (seven questions on age, gender, academic rank, college, educational department, and working experience); the Attitude Questionnaire (seven dimensions comprising 21 questions on a five-point Likert scale: strongly agree = 5, agree = 4, neither agree nor disagree = 3, disagree = 2, strongly disagree = 1); and the Satisfaction Questionnaire (six dimensions comprising nine questions on a five-point Likert scale: very high = 5, high = 4, medium = 3, low = 2, very low = 1). The seven dimensions of the Attitude Questionnaire were affective, validity, practical features, reliability, test security and counter-fraud, training and learning, and overall attitude. The six dimensions of the Satisfaction Questionnaire were test environment; provision of information about the test and knowledge of the test result; quality of technical and software features of the system; basic knowledge about conducting E-exams; test security and quality of counter-fraud measures; and responsiveness. A low-level attitude was defined as a total score of 21-49, medium-level as 49.1-76.9, and high-level as 77-105. Low-level satisfaction was defined as 9-21, medium-level as 21.1-32.9, and high-level as 33-45. SPSS 16 was used for data analysis. Mean and standard deviation, frequency, and frequency percentage were used to describe the data. The Kolmogorov-Smirnov test was used to check normality. Data were analyzed using the independent t-test and analysis of variance (ANOVA) to compare attitude and satisfaction scores by age, gender, school, academic rank, and working experience. Correlations among age, working experience, attitude, and satisfaction were assessed using the Pearson correlation coefficient. The significance level was 0.05. Table 1 shows the demographic information of the 126 study participants.
Tables 2 and 3 show dimension and item scores for attitude and satisfaction questionnaires, respectively.
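The score banding and correlation analysis described above can be sketched in a few lines; this is an illustrative reconstruction under stated assumptions (the band cutoffs are taken from the text, the example inputs are hypothetical), not the study's SPSS procedure:

```python
# Illustrative sketch: classifying a total attitude score into the
# low/medium/high bands defined in the text, and a plain-Python Pearson
# correlation analogous to the one computed in SPSS. Example values are
# hypothetical.
import math

def attitude_level(total):
    """Bands from the text: low 21-49, medium 49.1-76.9, high 77-105."""
    if total <= 49:
        return "low"
    if total < 77:
        return "medium"
    return "high"

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The reported mean attitude score (69.19) falls in the medium band.
print(attitude_level(69.19))  # "medium"
```

A significance test on the resulting r (as SPSS performs automatically) would additionally require the t-distribution, which is omitted here for brevity.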

Results
Mean item scores for the Attitude Questionnaire ranged from 3.3 to 4.6. According to the results in Table 2, the item receiving the highest mean score (4.6) among the Attitude Questionnaire items was: "In my opinion, scoring in electronic tests is more accurate, because the computer does not have human error", and the item receiving the lowest score (3.3) was: "In my opinion, logging in with a username and password provides sufficient security to electronic exams". In general, mean attitude scores for items in the "test security and counter-fraud" dimension were lower than those in the other dimensions.
According to Table 3, the following item received the highest mean score (4.8) among the satisfaction questionnaire items: "How satisfied are you with the analytical results of the test?" and the following item received the lowest score (3.5): "How satisfied are you with the requirement for professors to be present at the test center to answer students' questions?".
The minimum total attitude score was 44, and the maximum was 88, out of 105 possible points. The mean attitude score of faculty members was 69.19 ± 9.51. Because this mean fell within the range defined for a medium-level attitude, faculty members can be said to hold a moderate attitude towards E-exams. For satisfaction, the minimum score was 20 and the maximum 44, out of 45 possible points. The mean satisfaction score of faculty members was 33.27 ± 5.00. Because this mean fell within the range defined for high-level satisfaction, a high level of satisfaction towards E-exams was observed among faculty members. Table 4 shows faculty members' mean attitude and satisfaction scores by gender. The mean attitude score by gender approached but did not reach significance (P = 0.056), and the mean satisfaction score by gender was not significantly different (P = 0.169). Table 5 shows mean attitude and satisfaction scores by academic rank. Mean attitude scores did not differ significantly by academic rank (P = 0.087), nor did mean satisfaction scores (P = 0.964). Table 6 shows mean attitude and satisfaction scores by faculty/school. There was a significant difference in mean attitude scores by faculty/school (P = 0.038): the highest attitude score was found among dental school faculty members, and the lowest among medical school faculty members. The highest satisfaction score was found among pharmacy school faculty members and the lowest among those of the medical school, but mean satisfaction scores did not differ significantly by faculty/school (P = 0.500).
Table 7 shows the correlations among age, work experience, attitude, and satisfaction. There was a positive and significant correlation between attitude and satisfaction; in other words, an increase in faculty members' mean attitude score was accompanied by an increase in their mean satisfaction score and vice versa, and the slope of this relationship was relatively steep and significant. There was also a positive correlation of both attitude and satisfaction with work experience: as work experience increased, attitude and satisfaction also increased; however, the slope of this relationship was modest and not significant.

Discussion
Overall attitude and satisfaction scores were medium-level and high-level, respectively. Among the factors examined, only faculty/school affected attitude towards E-exams, and none affected the satisfaction score. There was a positive and significant correlation between faculty members' attitude and satisfaction, as well as a positive correlation of both attitude and satisfaction with work experience. The positive relationship between attitude and satisfaction in the current study indicates that these two constructs influence each other: the more professors' positive attitude towards E-exams is strengthened, the higher their satisfaction will be, and vice versa. Therefore, in conducting studies and creating policies, it is wise to consider both constructs simultaneously.
The positive relationship between the amount of work experience and professors' attitude and satisfaction with E-exams indicates that, with the passage of time and growing experience in solving problems related to E-exam systems, both attitude and satisfaction improve. This result suggests that less-experienced and more-experienced professors should not be treated identically in planning and policy-making; separate programs and policies should be adopted to promote attitude and satisfaction according to the amount of experience a professor has.
In Jabsheh's study on the usability outlook of computer-based exams as a means of assessment and examination at Palestine Technical University, findings showed that the use of computer tests instead of traditional paper-based tests remains the subject of an ongoing debate between proponents and opponents, who support their views with various reasons and arguments.5 In Kuikka's study, the teaching staff resisted preferring E-exams to manual methods of examining students.11 This result is somewhat inconsistent with the results of our study. The discrepancy may be related to differences in the place of study (Finland vs. Iran) or in the specific E-exam systems. A negative attitude towards E-exams, and resistance to them, can arise because instructors may not like to change their examination habits.11 Most studies conducted among students, faculty members, or both found a positive attitude towards E-exams,1-4,12,13 consistent with the current study's results. Among the reasons given for holding a positive attitude towards E-exams, or for preferring them to paper exams, are that E-exams are less stressful, more reliable, and fairer than traditional paper-based exams, and that technology has benefits in supporting learning and training.1,2 In a study by Zaer Sabet et al among medical students at the Guilan University of Medical Sciences comparing attitudes towards traditional and electronic tests, results showed no significant difference between the electronic test group and the traditional group.14 In Abu Alrob and colleagues' study at the Arab American University-Palestine (AAUP) on teacher and student attitudes toward computer-based exams, the results showed that the attitudes of instructors and students were not adequately considered by AAUP when it introduced the computer-based exam system.3 This means that in implementing e-learning programs and E-exams, special attention should be paid to the primary users and beneficiaries of these programs, namely professors and students.
In Afacan Adanır and colleagues' study in state universities in Turkey and Kyrgyzstan on undergraduate students' attitudes towards E-exams, learners' attitudes differed according to gender and field of study. 2 In our study, there was no significant difference between the attitudes of men and women professors towards E-exams, which is somewhat inconsistent with the results of their study. The reason for this discrepancy may be related to the different target populations of the two studies (students in their study and faculty members in the current study) as well as cultural and attitudinal differences between communities and people in different countries (Turkey and Kyrgyzstan in their study and Iran in our study). If we consider the students' field of study to be somewhat equivalent to the faculty/school variable in the present study, there is a consistency in terms of the attitude difference based on the field of study or the faculty/school in the two studies.
Due to the lack of studies on faculty members' satisfaction, existing studies on students' satisfaction with E-exams were reviewed. These studies report relatively high satisfaction with E-exams,15-17 which is consistent with the results of the current study.
One strength of the current study is the simultaneous assessment of faculty members' attitude and satisfaction with E-exams, since these two constructs appear to influence each other, as our results confirmed. In other studies, attitude and satisfaction are usually examined separately, and little attention is paid to the relationship between them.
Another strength of the current study is its scope: other studies examining professors' attitude and satisfaction towards E-exams focus only on whether the attitude is positive or negative or on the degree of satisfaction, without considering other influential factors such as work experience, academic rank, and especially faculty/school. In the current study, these factors were taken into account, and faculty/school was even found to be a factor that can affect faculty members' attitude. This approach may help educational policymakers attend to such factors in their planning to improve teachers' attitude and satisfaction.
In some dimensions of attitude, such as test security and counter-fraud, faculty members' average attitude score was low. The university's educational vice-chancellor may want to take this into consideration and take steps to reduce the negative attitude. For example, if exam security is in fact high, the security features of the exam center and the exam software should be fully explained to faculty members; if weaknesses in exam security are identified, steps should be taken to eliminate them and the actions taken should be reported. Similarly, where satisfaction was low on items such as insufficient training in designing and conducting E-exams, faculty empowerment workshops could be held to address these shortcomings.
After corrective measures are planned and implemented to eliminate the existing shortcomings and improve faculty members' attitude and satisfaction, future studies should re-examine faculty attitude and satisfaction in order to assess the educational results of these interventions.

Conclusion
Given the low attitude and satisfaction scores in some faculties/schools, such as the medical school, and on some questionnaire items, such as test security and insufficient training in designing and conducting E-exams, corrective action by educational policymakers appears necessary to improve attitude and satisfaction in these faculties/schools and dimensions. As the focus of this study was the faculty members of the BUMS, it is suggested that the attitude and satisfaction of professors at all medical universities in the country be examined.