Cycle 1 Data

Action Research Focus Statement

The problem I am seeking to address is that students lack engagement and often do not utilize instructor feedback in their subsequent projects. My proposed solution is to increase student engagement by providing effective multimedia feedback on future student projects.

Inquiry Questions

  • Will multimedia feedback improve students’ subsequent work in future projects?
  • Will multimedia feedback increase student engagement in completing their projects?

Target Audience

The initial response to the project was 16 students in my Digital Literacy class. The age breakdown was 8 students aged 18-24, 7 students aged 25-31, and 1 student aged 39-45. The gender breakdown was 14 male and 2 female. Many identified primarily as visual learners, and many held full-time jobs and had families while attending online courses.

Summary of Cycle 1

The Capstone project began with an announcement in my class and an email sent to each student. Students were provided a presentation that explained the concept, with a link to the survey placed within it. After the initial submission of projects, students were given standard text-based feedback with a breakdown of the score based on the rubric. They then responded to that format through a survey that gathered both a sliding-scale rating and their personal observations.

The second round of projects received video feedback. I made a deliberate point of adding nothing but a musical background to this format. Once students received that feedback, they responded through a survey similar to the first, gauging their reactions to the video format. In the final week, students were to create a Google Drive presentation, and I planned to use a Google Drive add-on to record comments directly on it. I had several false starts, and after some consultation found that while the software supported leaving comments on Google Drive documents, it did not support presentations. I adapted by recording feedback in GarageBand and uploading MP3s to Google Drive as audio feedback. I included a placeholder image of myself to reinforce the connection between the students and the instructor giving the audio feedback. Students then responded to that feedback in the survey. Once the final grades were in, I administered a post-research survey to see which format they responded to best.

Data Collection

In Cycle 1, data was collected through a pre-survey, three weekly surveys, and a post-survey.

Data Report

The pre-survey was a commitment and demographic survey. It gathered information about participants’ programs, gender, perceived learning styles, ages, and the distractions in their lives.

The first week’s survey, based on text feedback, came back with an interesting breakdown. Of the 10 survey participants, 60% rated text-based feedback as excellent and 40% as very good, citing that the feedback was concise and clear. Participants reported that they understood the constructive criticism and knew what to expect from the instructor.

The second week’s survey gathered participants’ responses to video feedback. The 10 participants who submitted responses offered a slightly more diverse range of ratings than the previous week. The video format was rated excellent by 50% of participants, with 20% choosing the level below that. The remaining 30% chose an average rating, suggesting a neutral response to the format. This week, participants reported that the video format felt more personal, as if the instructor were right there explaining what was right and wrong.

The final week’s format was audio feedback, accompanied only by an image of the instructor. It received the highest-scoring responses, and I found this breakdown the most interesting. Twelve participants submitted surveys this week: 83% rated the audio feedback as excellent, 8% rated it above average, and the final submission, 8% of the total, rated it as poor because that student was not sure where to find the feedback. Participants felt this form of feedback was the best because they received more information about their work and gained a better understanding from the tone of voice in the feedback.

The post-survey gathered participants’ reactions to the whole project: they chose their most and least effective feedback formats. Nine participants submitted the post-survey. Forty-four percent chose audio as the most effective feedback format, 33% chose video, and 22% chose the text-based format. The least-effective rankings broke down slightly differently: 55% of participants chose text-based feedback as the least effective, 33% chose video, and only 11% chose audio. Participants found the text to be the most complete and detailed, but added that the audio was effective because they could hear the emotion behind the details. Some felt the video was too quick and impersonal.


I do think my efforts were relevant to my focus statement, but with a majority of my participants being high achievers, there was not much change in their grades from which to gauge how engaged they were in the process. The video grading contained more text than I would have liked and was more time-consuming than I had anticipated, though I do think I have more of a system down at this point. Participants did demonstrate specific preferences throughout the process, and I have adjusted the post-survey form to have students reconsider what they think their learning preferences are. None of the feedback formats, including text, scored low on the Likert scale. Comparing overall grades with the survey results to determine engagement does not seem feasible with the group in this first cycle, as many of those who participated were higher achievers throughout the class. I believe this cycle was relevant, but it could have been more effective, both in the number of motivated participants and in identifying a measure of engagement other than grade fluctuations.


Interestingly, there is only a minor correlation between a participant’s perceived learning style and the feedback format they chose as their preference. No student picked Reading/Writing as their learning preference, yet 22% responded in the post-survey that they preferred that method over all others presented. I wonder if some participants’ preference for text is due more to the conditioning of traditional schooling than to a true preference.

Future Direction

In hindsight, I should have asked students about their perceived learning preferences at the end, in the post-survey; I have added this question for the second cycle. That cycle will use the same feedback formats but different participants. Adjustments were made to the weekly surveys to make it clearer that students were to respond to the previous week’s grading format, and I have added a video on how to check grading feedback. This video will also be included in the email sent to all participants.