Interview With Amy
Interview with Amy about survey research insights and best practices.
Introduction
Recently we introduced the idea of educational assessments, a valuable tool that helps schools of all kinds – from preschools and kindergartens to major universities – improve student outcomes and achievement. We were lucky enough to interview Amy, a young academic program assessment coordinator who is in charge of educational assessments at her university, about what's involved in the educational assessment process.
SurveyMethods: Tell Me A Little About What You Do
Amy: I assess data to see how we can improve our programs at the University. My job is to figure out what we can do to increase student retention and student success. I provide recommendations to the faculty and administrators on what we can do to improve the programs.
Programs include majors, minors, and our scholarship groups. I work with all steps in the process, from coming up with ideas of what should be measured to creating recommendations about what the University can implement based on the data. My job is to make sure that the faculty involved in assessment are identifying the correct program learning outcomes that are due for assessment each year.
I also draft rubrics and methodologies and share them with the faculty chairs. Assessment itself is incredibly broad and changes regularly, so defining specific tasks would be difficult.
SM: How Long Have You Been Working in This Field?
A: 6 years, next month.
SM: How Do You Go About Creating Rubrics?
A: It's a very careful process, especially if it's the first rubric and you don't have any baseline to compare it against. When you are creating a rubric from scratch, you generally need to hold a brainstorming session with an assessment committee and make sure that everybody agrees on the rubric ideas.
For example, let's say that I am working with the biology department for the first time. The first step is to identify what the program learning outcome is. Say the program learning outcome is that students are able to write up a hypothesis that is clear, concise, and shows an understanding of the field by the time they graduate.
So what happens is that we look at an upper-division course in biology, and we put embedded questions into their final exam asking students to come up with a hypothesis, show that they understand it, and so on. The TAs or professors grade the final like they would any other final, giving the embedded questions a score just as they would any test question. We then look through all of the finals and pull three samples of each: 3 examples of a high score on the question, 3 examples of a medium score, and 3 examples of a low score.
We – and when I say "we" I mean the committee and I – look at the examples. Does getting 15 out of 15 mean the answer is high quality? Are they showing that they have met the goals in the program learning outcome, or did they simply get the question right like any other question? Did they show a grasp of the concept? We then look at the mediums and do the same thing, and then the lows.
We don't focus on the raw scores; we look at the quality of the answers and see whether we can break them out into high, medium, and low quality.
Once we've broken them down and defined what high, medium, and low quality are, we write out the characteristics of a "High Quality" answer: it has a clear and concise method, it is easy to understand and easy to read – grammar and everything is correct – and it addresses what the question actually asks. We put together a list of the things we think mark a high-quality answer, and then we do the same for medium and for low. At that point the rubric is done.
So then we take 9 examples from the same exact test and have six different faculty break off into 3 pairs and grade the 9 examples using the rubric. Then we look at the percentage of agreement within each pair. The pairs rate the "quality" of each of the embedded questions separately.
What I would do is ask, "Did Mark and Maria agree that exam number one is high quality, low quality, etc.?" I personally use a spreadsheet to keep track of agreement, but different assessment coordinators do it differently. If the pairs are too far off, we need to change the rubric. If they're not, we move forward.
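(Editor's note: for readers curious about the arithmetic behind the agreement check Amy describes, here is a minimal sketch in Python. The rater names, the ratings, and the 80% cutoff are illustrative assumptions, not part of Amy's actual spreadsheet.)

```python
# A minimal sketch of the pair-agreement check described above.
# Exam IDs, ratings, and the 80% threshold are made-up examples.

ratings = {
    # exam_id: (rater A's quality rating, rater B's quality rating) for one pair
    1: ("high", "high"),
    2: ("medium", "low"),
    3: ("high", "high"),
    4: ("low", "low"),
    5: ("medium", "medium"),
    6: ("high", "medium"),
    7: ("low", "low"),
    8: ("medium", "medium"),
    9: ("high", "high"),
}

# Count how many exams the two raters placed in the same quality band.
agreements = sum(1 for a, b in ratings.values() if a == b)
percent_agreement = 100 * agreements / len(ratings)
print(f"Pair agreement: {percent_agreement:.0f}%")

# If the pair is "too far off" (here, below an assumed 80% cutoff),
# the rubric needs revision before moving forward.
if percent_agreement < 80:
    print("Agreement is low: revise the rubric before moving forward.")
```

In practice the same tally could be repeated per embedded question, since Amy notes the pairs rate each question's quality separately.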
SM: What Happens Next?
A: So we've created the rubric. Now we will generally analyze the scores of all of the embedded questions from that initial test.
This can take a lot of time. In some cases we analyze only a certain percentage of the tests – especially if the class is very large – but if possible we try to go through each one to come up with a final count of high-, medium-, and low-quality outcomes. Then we figure out what the findings are: whether they tell us that a lot of our students have low-quality skills working with hypotheses, or that the vast majority of students seem to grasp the concepts well, and so on.
We keep a record of these findings and figure out how best to explain them in a report. Let's assume that we found that most students produced medium- to low-quality hypotheses. From there, we draft recommendations that are sent to the dean about what we believe can be done to improve student outcomes.
In this case, it might be adding more work on hypotheses to the curriculum, possibly in lab classes or introductory classes. Maybe an entirely new class is needed. Sometimes we recommend discussing the program learning outcomes and what will be assessed with the students themselves, so that students are more aware of what the professors are looking for.
It may be important to add more papers to give students more experience. It depends on the class and how the current curriculum is constructed. For the sake of argument, we'll assume that the dean agrees with all of them.
The recommendations are then put in place. These may become part of a cycle, where every three years we re-assess to see if students have improved in these areas. In some cases it won't become part of a cycle because the priorities of the department sometimes change.
But if possible we try to test again every 3 years and make sure that the changes we've put in place made a difference.
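(Editor's note: the findings step Amy describes is essentially a tally of how many responses landed in each quality band. The sketch below, with made-up ratings, shows how those counts and percentages might be totaled before recommendations are drafted.)

```python
from collections import Counter

# A minimal sketch of the findings step: tallying how many embedded-question
# responses fell into each quality band. The ratings below are invented.

ratings = ["high", "medium", "low", "medium", "low", "medium", "high", "low",
           "medium", "medium", "low", "high", "medium", "low", "low"]

counts = Counter(ratings)
total = len(ratings)

for level in ("high", "medium", "low"):
    share = 100 * counts[level] / total
    print(f"{level:>6}: {counts[level]:2d} responses ({share:.0f}%)")

# If most responses land in the medium and low bands, that finding is what
# drives the recommendations sent to the dean (e.g., more hypothesis work
# in lab or introductory classes).
```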
SM: What Are Some of The Tools You Use in Assessment?
A: Well, in the previous example we used embedded questions in a final exam. We may also use:
- Surveys
- Questionnaires
- Homework assignments
- Focus groups
- Oral presentations
It depends a lot on what the program learning outcome is.
SM: What Are Some of the Ways You Have Used Surveys in Assessment?
A: The most common reason we would use a survey is student satisfaction. For example, say we're assessing advising – staff and administrators have program learning outcomes too, not just faculty and students – and we need to know whether advisers are meeting their own goals. Suppose one of those goals is to make sure students feel that advisers are accessible. When students come in to talk to advisers, before they leave the session they fill out a small survey or questionnaire: why they came in, how long it took, whether making an appointment was easy, how long it took to get a response, whether they felt it was helpful, and more. It's a lot like a customer satisfaction survey.
Other reasons might include graduating-senior surveys, alumni surveys, and others with embedded questions that are program specific – maybe the majors want to know whether graduates were able to find a job after they got their degree, how long it took, and more. The alumni survey would help with that. There are plenty of ways we might use a survey or questionnaire of some type.
SM: What Are Some of The Challenges of Educational Assessment?
A: One of the biggest challenges is getting faculty to understand the importance of assessment. Currently, at least at my university, faculty don't get any extra credit or pay for doing assessments or for writing up an assessment report, so they're often less motivated to do them. When you have faculty who don't really want to do assessment but are forced to, it can be hard to convince them to move toward long-term assessment planning.
The goal should be to help programs put an ongoing assessment practice in place. We don't want it to be a one-time data-collection exercise. We want them to be in a position to keep collecting student examples over the course of several years, so that it's less of a challenge for them in the future.
But since faculty are not motivated, it ends up being a significant challenge. This is something that affects faculty across higher education, not just at my university.
SM: Have You Been Able to See the Effects of Anything You Have Specifically Proposed?
A: I have.
Assessment doesn't always have to be confined to programs; it can also mean assessing a specific course. I've made recommendations in the past about creating prerequisite courses or requiring higher scores on placement exams. Those changes weeded out some of the students who were unprepared to take the course and allowed for higher passing rates in the classes.
Since the goal of assessment, in its entirety, is student success – making sure that students are learning and succeeding – it's been rewarding for me to see that some of the work I've completed has produced tangible results.
SM: Great. Thanks, Amy, for your time.
Those interested in improving their educational assessment with our affordable SurveyMethods software should contact us today and see how we've helped countless educational institutions improve their long term outcomes.
Key Takeaways
- Assessment coordinators work across the whole process, from deciding what to measure to recommending program improvements based on the data.
- Rubrics are built by sampling high-, medium-, and low-quality student work, defining the characteristics of each level, and checking agreement between paired faculty graders.
- Findings feed recommendations to the dean, and programs ideally re-assess on a cycle – for example, every three years – to confirm the changes made a difference.
- Tools range from embedded exam questions to surveys, focus groups, and oral presentations; surveys are especially useful for satisfaction-style assessments such as advising.