Cycle 2 Data

Action Research Focus Statement

The problem I am seeking to address is that students lack engagement and often do not apply instructor feedback to their subsequent projects. My proposed solution is to increase student engagement by providing effective multimedia feedback on student projects.

Inquiry Questions

  • Will multimedia feedback improve students’ work on subsequent projects?
  • Will multimedia feedback increase student engagement in completing their projects?

Target Audience

The initial response to the project was 22 students in my Digital Literacy class. The age breakdown was 10 students aged 18-24; 7 students 25-31; 3 students 32-38; 1 student 39-45; and 1 student above that range. The gender breakdown was 15 male, 7 female. More than 50% identified themselves primarily as visual learners and reported holding full-time jobs and having families while attending online courses.

Summary of Cycle 2

The Capstone project began with an announcement in my class and an email sent to each student. Students were given a presentation that explained the concept, and a link to this survey was placed within it. After the initial submission of projects, students received standard text-based feedback with a breakdown of their score based on the rubric. They then responded to that format in a survey that captured both a sliding-scale rating and their personal observations.

The second round of projects received video feedback. I made a deliberate point of adding nothing but background music to this format, and a picture of myself was placed on the opening slide of the feedback. Once students received that format, they responded through a survey similar to the first, gauging their reaction to the video feedback. In the final week, students were to create a Google Drive presentation; I created the feedback using GarageBand and hosted it on picosong.com for quicker access. The site supports only the mp3 format, so I was not able to include an image of myself to encourage that connection between instructor and student. Students then responded to that feedback in this survey. Once the final grades were in, I conducted a post-research survey to see which format they responded to best.

Data Collection

In Cycle 2, data were collected through a pre-survey, three weekly surveys, and a post-survey.

Data Report

The pre-survey gathered commitment and demographic information: participants’ programs, gender, perceived learning styles, ages, and the distractions present in their lives.

The first week’s survey, based on the text feedback, came back with an interesting breakdown. Of the 19 survey participants, 84% rated text-based feedback excellent and 5% considered it very good. The remaining 11% considered it of average use, partly due to limitations of the LMS used to display the text feedback. Participants reported that they appreciated that the feedback addressed the purpose of the assignment and that it was detailed, noting not only what they did wrong but also what was correct.

The second week’s survey gauged participants’ responses to the video feedback. The 15 participants who submitted responses offered a slightly more diverse range than the previous week. The video format was considered excellent by 33% of the participants, with another 33% choosing one level down from that. Thirteen percent considered it of average use, neither beneficial nor unwelcome, and the final 13% rated the video format as not helpful. This week the participants’ responses ranged from preferring it because it related directly to their work, all the way to finding the animations and music distracting from the feedback itself.

The final week’s format was audio feedback, with no visual context at all, and it received the highest-scoring responses. I found this breakdown the most interesting. Fourteen participants submitted responses this week: 72% rated the audio feedback as excellent, 14% rated it above average, and the remaining 14% rated it as neutral, having no effect either way on their work. Participants felt this form of feedback was a good choice because hearing my voice made it more personal, and because it went over the work in a conversational tone. Some suggested they would have preferred the visual and audio together.

The post-survey gauged participants’ reactions to the project as a whole; they chose their least effective and most effective feedback formats. Sixteen participants submitted the post-survey. Fifty-six percent chose video as the most effective feedback format, audio was chosen by 31% of the participants, and 13% chose the text-based format. Least effective broke down slightly differently: 44% of participants chose text-based as the least effective, 38% chose video, and only 19% chose audio. Their opinions ranged from preferring audio because they could hear my voice and gather perceptions from that, to connecting with the video feedback because of the note placement on their work, to favoring text because they could use outside means to focus on the feedback’s content.
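Because these percentages are reported against a known sample of 16, the underlying counts can be recovered with simple rounding. The short Python sketch below is my own arithmetic check rather than anything from the study itself, and it confirms the figures are internally consistent:

    # Sanity check: convert the reported post-survey percentages back into
    # respondent counts for the 16-person sample. The percentages are from
    # the survey results above; the rounding is mine.
    n = 16
    most_effective = {"video": 0.56, "audio": 0.31, "text": 0.13}

    for fmt, pct in most_effective.items():
        print(f"{fmt}: ~{round(pct * n)} of {n} respondents")
    # video: ~9, audio: ~5, text: ~2 -- the counts total 16, so the
    # reported percentages are internally consistent.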

Additionally, the second version of the post-survey included extra questions for insight into students’ perceptions of the benefits of the process. Fully 76% of students felt more connected to the instructor through the video feedback, and 81% reported the same for the audio feedback. These results counted any rating above 3 on the Likert scale. Only three respondents felt their preferred method did not help in their later efforts.
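As a rough illustration of that threshold rule (not the study’s actual analysis), the sketch below computes the share of respondents rating their sense of connection above 3 on a 5-point Likert scale; the ratings shown are hypothetical placeholders, not the real survey data:

    # Minimal sketch of the "any rating above 3" rule on a 1-5 Likert scale.
    # The ratings below are hypothetical placeholders, NOT the real survey data.

    def share_above(ratings, threshold=3):
        """Fraction of ratings strictly greater than the threshold."""
        return sum(1 for r in ratings if r > threshold) / len(ratings)

    video_connection = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4, 5, 3, 4, 5, 4, 1]
    print(f"{share_above(video_connection):.0%} felt more connected")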

Insights

Even though this cycle’s responses shifted toward more students preferring the video feedback, many also wanted to hear audio; their own comments encouraged me to combine the two formats. I still think less text in the video feedback would have better served that format, and there is the additional awkwardness of video pacing: some students read faster than others, so some comments said the video feedback was too long, while others noted how many times they had to pause it to read the notes. In hindsight I would have preferred to use an m4a file so that an image of me could be visible at least partway through their audio feedback; I think this would further encourage them to identify with me as their instructor. Once again, none of the feedback formats, including text, were scored at the bottom of the Likert scale. Consistent between the two cycles was the finding that, with the audio feedback, students felt they connected with me more than with any other format, even though the video had my image in it and the audio did not.

Surprises

The lack of correlation between a student’s initial perceived learning type and their feedback preference was still evident. What was intriguing was that 50% of the students changed what they perceived their learning type to be after the feedback cycle was complete, mainly to visual, based on their choice of preferred feedback. This is most likely due to their limited knowledge of the topic, but it is still interesting. Six percent of the students felt that none of the format changes helped them, and another 6% simply replied with non-applicable. The final 6% was a single student who advised that he felt any difficulties with the course were his own fault. I remain curious that the video format changed only by the addition of background music, while the audio format literally had no physical form, being purely a disembodied voice not tied to an image; yet this time the video feedback rated higher overall.

Future Direction

My own background in scientific study makes me curious about the variations in the results, though the change in participants between weeks might confound this. Pursuing it would require varying the feedback systematically each week. For instance, the question of teacher connection could be pursued methodically by including images of the instructor in a certain percentage of the feedback and leaving the rest without. A second question, the impact of music within the video feedback as opposed to silence, could be examined by giving some of the feedback the musical backing and some not. Of all of these, I feel identification with the instructor is the most significant element, and it seems to rise in prominence when one considers which feedback was most effective. Another project with a comparable group, using the same formats but providing one group increased instructor imagery and audio content while withholding those stimuli from the other, would be the next logical follow-up to me. There may still be a correlation with grades that a longer-term project would be better able to determine.
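If a follow-up did vary instructor imagery and audio systematically, one simple way to balance students across conditions would be a round-robin assignment like the sketch below; the condition names and roster are my own illustrative placeholders, not part of the study:

    import random

    # Hypothetical sketch: balance students across feedback conditions for
    # the proposed follow-up. Condition names and roster are illustrative.
    CONDITIONS = ["image+audio", "audio only", "image only", "neither"]

    def assign_conditions(student_ids, conditions=CONDITIONS, seed=42):
        """Shuffle students, then deal them round-robin so each condition
        receives a near-equal share."""
        rng = random.Random(seed)
        shuffled = list(student_ids)
        rng.shuffle(shuffled)
        return {sid: conditions[i % len(conditions)]
                for i, sid in enumerate(shuffled)}

    roster = [f"student_{i:02d}" for i in range(1, 23)]  # 22 students, as in Cycle 2
    for student, condition in sorted(assign_conditions(roster).items()):
        print(student, "->", condition)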
