Translating Principles of Effective Feedback for Students into the CS1 Context

CLAUDIA OTT, ANTHONY ROBINS, and KERRY SHEPHARD, University of Otago

Learning the first programming language is challenging for many students. High failure rates and bimodally distributed grades lead to a pedagogical interest in supporting students in first-year programming courses (CS1). In higher education, the important role of feedback for guiding the learning process and improving the learning outcome is widely acknowledged. This article introduces contemporary models of effective feedback practice as found in the higher education literature and offers an interpretation of those in the CS1 context. One particular CS1 course and typical course components are investigated to identify likely loci for feedback interventions and to connect related computer science education literature to these forms of feedback.

Categories and Subject Descriptors: K.3.2 [Computer and Information Science Education]: Computer Science Education

General Terms: Human Factors, Theory

Additional Key Words and Phrases: Effective feedback practice, higher education, CS1

ACM Reference Format: Claudia Ott, Anthony Robins, and Kerry Shephard. 2016. Translating principles of effective feedback for students into the CS1 context. ACM Trans. Comput. Educ. 16, 1, Article 1 (January 2016), 27 pages. DOI: http://dx.doi.org/10.1145/2737596

1. INTRODUCTION

Computer science education (CSEd) is an established research field but has been criticized in the past for being too focused on disciplinary aspects at course level rather than on higher-level pedagogy in terms of goals and the role of the teacher [Kinnunen et al. 2010]. Publications with a theoretical education focus were found to be rare [Joy et al. 2009], and a large number of studies were missing some kind of developed theory, model, framework, or instrument [Malmi et al. 2010]. Addressing Mark Guzdial’s comment that “too much of the research in computing education ignores the hundreds of years of education, cognitive science, and learning sciences research that have gone before us” [Alstrum et al. 2005], this article is intended to connect research from higher education regarding feedback with current developments as found in CSEd literature. Quality feedback for students is regarded as one of the main contributors to improved student learning (e.g., Hattie and Timperley [2007], Carless [2006], Hounsell [2003], Ramsden [2003], and Askew and Lodge [2000]). There is an immense body of research in education investigating the effects of feedback on student learning. Based on more than 800 meta-analyses, Hattie [2009] ranked feedback among the top 10 influences on students’ achievement of the 138 influences he investigated.


In higher education, feedback for students is mainly discussed in the context of assessment with a focus on formative assessments, but it is also increasingly discussed in the light of self-regulated learning and self-assessment. Over the past 50 years, the perspective on feedback shifted from a behavioral approach (e.g., using programmed instruction with immediate feedback to shape learners’ behavior with regard to desired responses) toward a constructivist one (e.g., Burke and Pieterick [2010]). A constructivist approach to learning emphasizes the role of the learner—knowledge needs to be actively constructed and is based on prior experiences and beliefs (e.g., Biggs and Tang [2007] and Prosser and Trigwell [1999]). Therefore, feedback needs to be investigated not only from an information perspective (What information is necessary at which stage of the learning process?) but also from a process perspective (How does feedback influence the learner’s cognitive and metacognitive processes?).

Large-scale studies in Australia and the United Kingdom found that students are often dissatisfied with the feedback provided, namely the accuracy, timeliness, and consistency of feedback information [Carless et al. 2011]. Increasingly large class sizes and a wider diversity of student backgrounds can be seen as today’s main challenges in providing students with quality feedback [Hounsell 2007]. In addition, students’ level of engagement in the feedback process strongly influences the effectiveness of feedback [Price et al. 2011]. Acknowledging these challenges and the complexity of the learning process, the question of “What constitutes effective feedback for students?” has no simple answer.

In Section 2, we extract the principles of effective feedback from the higher education literature. The resulting framework is used in Section 3 to illustrate how those principles can be applied in the introductory programming (CS1) context. To do so, we use our own CS1 course as a case to assess current feedback practices for typical course components. It is our goal to initiate a discussion on best practice feedback but also to highlight opportunities for improvement with links to existing research in CSEd. The objectives of this work are twofold: first, we aim for a shared understanding of what constitutes effective feedback for students, and second, we want to initiate a discussion on how those principles can be applied in the CS1 context. As a first step, we provide a road map (see Table II) highlighting feedback aspects for typical CS1 course components with links to related research in CSEd literature.

2. FEEDBACK IN HIGHER EDUCATION LITERATURE

To address the question of what constitutes effective feedback for students, researchers have either conducted experimental studies, reviewed the literature on these, or investigated students’ and/or teachers’ perceptions of effective feedback. Beaumont et al. [2008] studied the perceptions of quality feedback of 37 high school students and 13 teachers and compared these with the perceptions of first-year university students and tutors. The school students’ experiences of quality feedback are described as the dialogic feedback cycle (DFC). Feedback discussions aim to improve students’ grades and occur at three stages of an assessed coursework task: (1) preparatory guidance, (2) in-task guidance, and (3) performance feedback. The authors found that these experiences highly influence students’ expectations in the first year of their university study. Given large class sizes and a focus on self-directed learning at university, it is not surprising that these expectations cannot be met. Focus groups and interviews involving 108 first-year students revealed a significant misalignment between students’ expectations regarding quality feedback at the beginning of the course and their actual experiences. Students reported little preparatory guidance (e.g., lack of explanation of the task criteria or discussion of model answers) and few opportunities for formative feedback. Issues with the feedback received (mainly at the performance stage of the DFC) involved inconsistency of marking and a lack of timeliness and detail.


The authors are worried about the “demotivating effect that a perceived lack of quality feedback appeared to have on a significant proportion of students” [Beaumont et al. 2008, p. 9]. Wide variations in assessment and feedback practices, as well as inconsistency in the quality and quantity of feedback, have been reported by Jessop et al. [2013]. The authors investigated 23 degree programs at eight universities and found a strong positive correlation between students’ overall satisfaction and the quality and quantity of feedback provided. Furthermore, a strong positive relationship between feedback quality and students’ understanding of goals and standards was revealed. Given the variations in feedback practices, it is not surprising that even final-year students reported a lack of confidence in judging the standard of work required. The authors conclude that a shared assessment culture and “more consistent approaches in the detail, language, tone, and timing of feedback” are needed [Jessop et al. 2013, p. 14].

2.1. Principles of Good Feedback Practice

Principles of good feedback practice are widely discussed in the higher education literature. Sadler [1989] developed a theory of formative assessment where, to benefit from feedback, the learner needs to “(a) possess a concept of the standard (or goal or reference level) being aimed for, (b) compare the actual (or current) level of performance with the standard, and (c) engage in appropriate action which leads to some closure of the gap” (p. 121). Underlying this is the notion of feedback as information that helps students to close the gap between actual and desired performance. This definition is also used in a literature review by Nicol and Macfarlane-Dick [2006], which concludes with seven principles of good feedback practice (p. 205): “Good feedback practice:

(1) helps clarify what good performance is (goals, criteria, expected standards);
(2) facilitates the development of self-assessment (reflection) in learning;
(3) delivers high-quality information to students about their learning;
(4) encourages teacher and peer dialogue around learning;
(5) encourages positive motivational beliefs and self-esteem;
(6) provides opportunities to close the gap between current and desired performance;
(7) provides information to teachers that can be used to help shape teaching.”

In contrast to Sadler’s learner-focused view, Nicol and Macfarlane-Dick’s emphasis is on the tutor and what kind of feedback needs to be in place. The authors stress that good feedback practice should strengthen students’ ability to self-regulate their learning and that self-assessment skills are a vital precondition. Feedback targeted at students’ self-regulation ability is seen to have a long-lasting effect beyond university study.

2.1.1. Focus on Self-Regulation. Ideally, self-regulated students work in cycles of setting their own goals based on a genuine interest in the topic. They select appropriate strategies to reach these goals, have the ability to focus their attention, self-monitor their progress, and adjust their strategies accordingly. They reflect critically at the end of the process in terms of self-evaluation and attribution of failure or success, which in turn leads to improvements for the next cycle. These “self-fulfilling cycles of academic regulation” are introduced by Zimmerman [1998] with numerous references to research in the field. Butler and Winne [1995] published an influential literature review on feedback in the context of self-regulated learning. The authors developed a model of self-regulated learning that focuses on the student’s ability to generate internal feedback (“monitoring”) at all stages of the learning process.


Further discussion about “sustainable” feedback followed, addressing strategies and practices that help students develop into self-regulated learners (e.g., Hounsell [2007] and Carless et al. [2011]). There is a clear shift from viewing feedback as information transmitted from lecturers and tutors to students toward a perspective where the student is seen as an active part in the feedback process [Nicol and Macfarlane-Dick 2006]. Carless et al. [2011] define sustainable feedback practices as “dialogic processes and activities which can support and inform the student on the current task, while also developing the ability to self-regulate performance on future tasks” (p. 397) and use an interview study to seek an answer as to what constitutes such practices. Besides multistage assessment practices, dialogical feedback on oral presentations, and the promotion of student self-evaluation, the authors found feedback at the self-regulation level to be neglected because of (1) students’ resistance based on the expectation to be told what to do, (2) lecturers’ anxiety about challenging this resistance, and (3) a packed curriculum focusing on topic-specific content only.

2.1.2. Focus on Teacher Involvement. In his book Visible Learning, Hattie [2009] offers an additional perspective on feedback by stating that “teaching and learning can be synchronized and powerful” [p. 173] when teachers seek feedback from students to engage with their gaps in knowledge, level of understanding, the errors they make, or misunderstandings they hold. This kind of feedback makes the actual learning “visible” to the teacher and provides an opportunity to act as problems occur. Likewise, Nicol and Macfarlane-Dick [2006] suggest frequent assessment and diagnostic tests to enable lecturers to adapt their teaching. However, in times of increasing class sizes and limited resources, frequent assessment of students’ actual understanding seems to be problematic.

2.1.3. Focus on Peer Assessment. Peer assessment is seen as one answer to improve feedback processes without increasing the workload of the teaching staff. Peer assessment involves students in the grading process and has been reported to have positive effects on student learning and self-regulation. In their literature review, Liu and Carless [2006] describe the following advantages, namely that students:

—get an active role while managing their own learning,
—improve their understanding of standards and grading criteria,
—develop objectivity in the process of assessment,
—improve their own understanding of the subject matter by reflecting on and articulating the issues involved,
—receive more timely feedback than staff can provide (especially if challenged by mass education demands),
—get used to making their work public as an act to facilitate social learning environments.

Strong links are seen between involvement in the process of peer assessment and the development of self-assessment skills [Nicol and Macfarlane-Dick 2006]. Despite these advantages, a large-scale survey conducted by Liu and Carless [2006] showed that students and staff alike are not in favor of peer assessment for four reasons: (1) concerns about the reliability of grading, (2) perceived limited expertise of peers, (3) issues with power relations (academics sharing their powers as well as students feeling uncomfortable having the power of grading), and (4) time factors. To address the first three problems, the authors promote peer feedback, described as a dialogue providing “rich, detailed comments but without formal grades” [Liu and Carless 2006, p. 280], over peer assessment, where students primarily grade each other’s performance or work.


2.1.4. Focus on Engagement. Gibbs and Simpson [2004] address quality feedback in the light of assessment. Their observation that “assessment sometimes appears to be at one and the same time enormously expensive, disliked by both students and teachers, and largely ineffective in supporting learning” (p. 11) motivates a review of theory and empirical studies. In conclusion, they state 10 conditions under which assessment improves student learning. The 5 conditions addressing feedback imply that students are more likely to engage with feedback that is (1) frequent and detailed, (2) focused on task performance rather than on students’ personality, (3) timely enough to still be relevant for further learning and/or assistance, (4) aligned to the purpose and success criteria of the assignment, and (5) appropriate to students’ level of understanding.

Even if all advice regarding good feedback practice is integrated into the course processes, it still needs the engagement of the learner for the feedback to be effective. Gibbs [2010] emphasizes this aspect and notes: “It is not inevitable that students will read and pay attention to feedback even when that feedback is lovingly crafted and promptly provided” (p. 18). He suggests steps to improve students’ engagement by providing feedback without marks and incorporating self-assessment and multistage assessments. Consecutive hurdles in the process of students’ engagement with assessment feedback were identified by Price et al. [2011] as the failure to (1) collect the feedback response, (2) immediately attend to it, (3) understand the response, and (4) take action because of limitations in resources, skills, opportunities, or confidence. This perspective highlights important stages in the engagement process where each stage triggers further engagement (or disengagement). Repeated unsatisfactory prior experience with the feedback process is likely to lead to disengagement in such a way that students do not even collect the assessment feedback. The authors point out that such behavior is not an instant response but develops over time and involves different courses and programs, such as when feedback was continuously found to be useless (e.g., too general or not understandable) or staff were perceived as too busy to discuss feedback responses.

2.1.5. Focus on Dialogue. As dialogue appears to be an effective way to ensure that feedback is received and understood, it is not surprising that dialogic feedback is highly praised in the literature as desirable practice. For example, the already mentioned Principle 4 of good feedback practice [Nicol and Macfarlane-Dick 2006] focuses particularly on the encouragement of teacher and peer dialogue. This was reiterated by Carless et al. [2011] based on findings of a study of actual feedback practices. Likewise, Price et al. [2011] state that dialogue is a key element of the feedback process, which should (1) communicate feedback purpose and processes upfront, (2) foster positive learner identity early on, (3) address feedback and invite students’ responses, (4) guide and encourage students’ engagement with the feedback, and (5) as a result lead to a shared understanding of the complexity of the feedback process. Price et al. [2011] note:

Supportive dialogue is a key part of the social practice of assessment, and is dependent upon trust and the perception of a joint enterprise involving students and staff. Students recognise their need for dialogue to enable them to fully work with their feedback and to induct them into the disciplinary community but they are frequently frustrated by lack of opportunity and by the social structures that obstruct dialogue. (p. 894)

2.1.6. Focus on Students’ Personality and Their Perceptions of Feedback. The level of engagement with feedback also depends on the learners’ personality and their self-theories. Dweck [1999] found that students who see intelligence as developmental are more learning oriented and view challenges as opportunities for learning.


In contrast, students who believe that intelligence is fixed were found to be somewhat performance oriented and likely to give up when faced with difficulties. Those students may interpret negative feedback on a personal level, with damaging effects on their motivation and self-esteem, because failure is seen as a lack of intelligence. For a summary of self-theories, see Yorke and Knight [2004]. A demoralizing effect of negative feedback was reported by Poulos and Mahony [2008] based on a study of students’ perceptions of effective feedback. Especially in the first year of university study, students experienced negative feedback or failure as devastating and expressed a need for more communication to aid the transition from school to university.

How students’ perception of feedback influences their engagement with the feedback process was the focus of a study by McLean [2012]. Conducting a phenomenographic investigation, the author found that students view feedback as information in four different ways of increasing complexity: (1) as “telling,” (2) as “guiding,” (3) as “developing understanding,” and (4) as “opening up a different perspective.” The more complex conceptions include the simpler ones. Furthermore, McLean could demonstrate that the more inclusive and expansive students’ perception of feedback is, the lower the barriers to responding to the feedback provided. She concludes regarding “better” feedback practices:

In this instance, “better” does not necessarily mean more work for teachers. Instead, “better” means working with students to figure out their view of feedback and tailoring feedback information to fit. It can also mean stimulating and challenging students to reflect on their current views.

2.2. A Framework for Effective Feedback

Best possible feedback processes acknowledge and respond to the learner’s personality and provide detailed, timely feedback on an individual level. This is particularly important in the first year to influence learners’ successive engagement in these processes positively and to help the transition from school to tertiary study. However, there seems to be a contradiction between the requirement that feedback be detailed as well as learner- and task-specific and the aim of feedback to help students develop into independent, self-regulating learners over time. Hattie and Timperley [2007] established a framework to consider feedback that untangles the different levels and stages of the process. Based on their review of numerous meta-analyses, they identify four levels of feedback [Hattie and Timperley 2007, p. 87]:

(1) Task level: How well tasks are understood/performed.
(2) Process level: The main process needed to understand/perform tasks.
(3) Self-regulation level: Self-monitoring, directing, and regulating of actions.
(4) Self level: Personal evaluations and affect (usually positive) about the learner.

On “self level,” feedback is often in the form of praise and used to comfort or support students, but it usually contains little task- or process-related information and is considered to have limited potential to improve learning. Despite their earlier remark that praise directed at effort, self-regulation, or personal engagement “can assist enhancing self-efficacy” [Hattie and Timperley 2007, p. 96], the authors state that “praise may be counterproductive and have negative consequences on students’ self-evaluation of their ability” and point to a certain “unpredictability” of praise for different groups of learners [Hattie and Timperley 2007, p. 97]. Because of these inconclusive findings and the limited transferability into a specific learning domain, feedback on self level is not considered in the CS1 context in the remainder of this article.


Table I. Framework of Questions to Guide Effective Feedback Practices

Note: Questions are applicable on task level, process level, and self-regulation level.

In addition to the four levels of feedback, Hattie and Timperley state three major questions that need to be addressed at each level. Only by answering these questions can a discrepancy or gap between the actual and the desired performance be highlighted and acted on:

(1) Where am I going? (“feed up”): Learning intention, goals, success criteria—goals need to be specific rather than general and sufficiently challenging.
(2) How am I doing? (“feed back”): Actual performance, understanding—feedback regarding the expected standard or success criteria and not in comparison with other students’ progress.
(3) Where to next? (“feed forward”): Progression and new goals—information that leads to greater learning possibilities, enhanced challenges, and the development of more self-regulated learning.

The authors point out that these questions are interrelated—for example, next steps can only be considered in relation to the goals and the current progress. It is important to note that Hattie and Timperley consider the self-regulation level as the most effective one but raise awareness that feedback on task and process levels needs to be established to enable students to act on the self-regulation level. Their framework illustrates that feedback should not be seen as a series of events but rather as an ongoing process on different levels where a shift from feedback on task level toward the self-regulation level is desired over time.

2.3. Summary

This section introduced general principles of good feedback practice and highlighted specific aspects that were found to be dominant in the higher education literature. The underlying notion of feedback as “closing the gap between desired and actual performance” highlights the need to establish feedback not only around students’ actual achievement but also to clarify goals and success criteria upfront, as well as to indicate steps for improvement. These three stages are reflected in Hattie and Timperley’s framework for effective feedback. This is used in Table I to structure the most important aspects of good feedback practice as questions to be answered on task, process, and self-regulation level in an attempt to improve existing feedback processes. Frameworks such as those presented earlier often lack advice for practical implementation.


In the following section, we attempt to provide such practical advice in a generic CS1 context. Given the proposed levels and stages of effective feedback practice, we translate those for typical course components in the CS1 context, providing entry points into related CSEd literature.

3. FEEDBACK IN THE CS1 CONTEXT

CSEd literature reflects the importance of feedback, particularly to support students working on a programming task. For example, matters of error handling are addressed by integrated development environments (IDEs) consisting of compilers and debuggers especially designed to improve feedback for novices in the field of programming (e.g., Pears et al. [2007]). Pair programming, as a specific form of peer feedback, is a well-established research domain (e.g., Salleh et al. [2011]), and so is the provision of automated feedback regarding programming assignments (e.g., Ihantola and Ahoniemi [2010]), just to name a few.

Efforts to improve student engagement can be seen in a recent shift toward active learning approaches with a focus on instant feedback and dialogue. This journal’s special issue on “Alternatives to Lecture in the Computer Science Classroom” [Grissom 2013] introduced current research with an emphasis on peer feedback, such as collaborative learning [Renaud and Cutts 2013], peer instruction [Bailey-Lee et al. 2013], and peer code review [Hundhausen et al. 2013]. Similarly, recent conferences in CSEd put emphasis on peer review [Nicol 2014] and peer instruction [Grissom et al. 2014] with numerous publications on those topics (e.g., Zingaro [2014], Horton et al. [2014], and Porter et al. [2013]). For example, a “trio of instructional best practices in CS1” was introduced by Porter and Simon [2013] and combined peer instruction and pair programming with media computation components (see Guzdial and Ericson [2007]). The authors report significantly improved retention rates for a large CS1 course. In a systematic review, Vihavainen et al. [2014] analyzed various teaching approaches for CS1 courses and found that “cooperative learning” shows the largest positive effects on student pass rates.

In this work, we use our own CS1 course (COMP160) as an illustrative case to identify practical opportunities for applying the principles of good feedback practice. The different levels and aspects of feedback are discussed regarding common course features, including lectures, practical (laboratory/tutorial) sessions, and assessment (including regular practical tasks, larger assignments, and exams). Whenever applicable, this investigation is linked to the CSEd literature and “best practice” examples are highlighted. Although the investigation is based on COMP160, it is intended to be generic, so the answers to the following three questions should be relevant to most typical CS1 courses:

—How is feedback on task, process, and self-regulation levels considered in the CS1 context?
—How are the questions guiding effective feedback practices (Table I) on those three levels addressed in a typical CS1 course?
—What topics from CSEd research are relevant to support or establish current feedback practices?

COMP160 is a well-established first-year Java introduction with about 200 students enrolled in the second semester each year. The content is based on a standard textbook by Lewis et al. [2010]. The course is offered in a typical combination of lectures (50 minutes twice a week) and tutor-assisted lab work (2 hours twice a week). Students are expected to submit one completed lab task per lab session. Usually there are 20 to 25 students in a lab session, and two to three tutors are present to help students on request. Students may work at their own pace but are advised to maintain the course schedule.


The 25 lab tasks add up to 25% toward the final grade, with no penalty for late submission. Written examinations at mid-semester and at the end of the semester contribute another 15% and 60%, respectively.

3.1. Feedback at Task Level

Feedback at this level is concerned with a specific learning task, such as writing an essay or, in the case of CS1, writing a small computer program. Goals and success criteria need to be clearly stated, and regular feedback should clarify how well the task has been accomplished. Missing aspects or faulty interpretations need to be addressed by corrective advice. Hattie and Timperley [2007] emphasize that if feedback on this level is too detailed or specific, it might distract from strategies and lead to trial-and-error behavior. It is likely to be most beneficial if erroneous hypotheses can be rejected and cues on further direction and strategies are provided [Hattie and Timperley 2007]. Usually this kind of feedback does not generalize to other tasks. Depending on the structure of a particular CS1 course, feedback on task level needs to be considered for some or all of (1) the practical tasks students are expected to solve during tutorial sessions (including any preparation for such tasks) or extended assignments (extending beyond a single tutorial session), (2) tasks or exercises that occur in lectures, and (3) any examinations, which can also be considered as tasks, because students have to accomplish them as part of the course.

3.1.1. Practical Tasks during Tutorial Time or as Extended Assignments. In our course, the two weekly lectures are accompanied by two weekly tutor-assisted laboratory sessions, during which no formal teaching is conducted and students work at their own pace on the practical tasks. There is one practical task designed for each of the 25 scheduled sessions. Students can “call” (via a queuing system) when problems occur or when they have completed the task and want to submit the work. Each call is attended by one of the two or three tutors present in the lab. The goals for task completion are typically stated as a description of the program’s intended functionality, which is then broken down further into instructions. Test cases are defined to verify the correctness of the implementation. It is important that students fully understand the task requirements to avoid later confusion when working through the instructions. Embedded questions as part of the task description could be considered for more engagement with the intended functionality and the instructions, but we face a common problem here: we observed that some students are challenged by understanding the task goal and instructions, whereas others feel that the task goal is rather trivial and the instructions are already overly prescriptive. “Extension tasks,” which are not a requirement for task completion but are a voluntary add-on, were introduced as one possibility to challenge those students and adjust the task goal to the learners’ situation.

Following the advice for effective feedback, students should get regular, detailed, and timely feedback regarding their actual performance. Scheduled occasions to uncover gaps between the goals and the actual performance in a one-to-one situation are arranged if students are required to discuss their solutions with a tutor before submission. The tutor would ask the student to run the program and demonstrate some test cases. The code is checked for the required concepts (e.g., use of variables vs. “hard” coding) and programming style (e.g., commenting). If everything is solved correctly, this process takes 1 to 2 minutes.
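To make this check concrete, the following is a minimal, hypothetical illustration (not taken from the actual COMP160 lab tasks) of the contrast a tutor might look for between a “hard-coded” calculation and one using named, commented variables:

    // Hypothetical lab-style snippet contrasting "hard coding" with named variables.
    public class CoffeeOrder {
        public static void main(String[] args) {
            // Hard-coded version: the literals 3, 2.50, and 0.15 are repeated,
            // so changing the order size or price means editing several places.
            System.out.println("Total: " + (3 * 2.50 + 0.15 * 3 * 2.50));

            // Version using named, commented variables: the style a tutor
            // would check for when marking a lab task.
            int numCoffees = 3;            // number of coffees ordered
            double pricePerCoffee = 2.50;  // price of one coffee in dollars
            double gstRate = 0.15;         // tax as a fraction of the price
            double total = numCoffees * pricePerCoffee * (1 + gstRate);
            System.out.println("Total: " + total);
        }
    }

Both versions print the same total, but only the second makes the underlying quantities explicit and easy to change, which is what the required-concepts check is intended to reward.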
In the case that deficits are discovered, the amount and quality of feedback depends on several factors: (1) the time pressure in the lab situation, (2) the tutor’s personality and experience, and (3) the student’s attentiveness and willingness to discuss problems.


Within this spectrum, marking students’ lab work can be a lively discussion about misunderstandings or alternative solutions, or short advice regarding which parts of the preparation task or the program solution need to be reviewed. This discussion should be focused on the solution at hand rather than on the student’s personality, but it needs to take the student’s level of understanding into account. We observed that our tutors’ varying experience (e.g., professional teaching fellows or third- or fourth-year students) and teaching styles can cause inconsistency in the quality of feedback, especially if students talk to different tutors regarding one task. The use of software metrics (e.g., Cardell-Oliver [2011]) and assignment rubrics (e.g., Becker [2003]) could be evaluated for shared marking criteria among tutors.

Students’ level of activity and readiness to seek help also influences the amount of feedback they get. Active students with rather outgoing personalities get more feedback by “calling” for help frequently and at different stages of task completion. These calls are excellent opportunities for individual and detailed feedback on task level. However, problems occur if students are not attending the labs regularly, do not call for help, or do not submit lab work. Here, formal tutorials or a required minimum of lab sessions to be attended might be the answer.

Feedback to indicate how a student can improve is directly linked to the feedback on actual performance. In the lab situation, tutors would discuss students’ actual programming solutions by referring to the requirements and give advice on how to improve the solution. This iterative process might start with the advice to review certain concepts in the textbook or lecture notes and develop into more task-specific feedback later on. From our experience, it is always tricky to maintain the right balance between “telling students what to do” and “helping them figure out how to get there themselves.” Hattie [2009] writes: “The art is to provide the right form of feedback at, or just above, the level where the student is working” (p. 177). Asking questions to guide their line of thinking seems to be a good approach for students who are “almost there.” However, such feedback is not effective if students are still at the knowledge acquisition phase—here, instructions are more helpful [Hattie 2009].

One-to-one conversations provide the most desirable form of feedback but may appear unsustainable in times of increasing student numbers and current staff-student ratios. “Extreme Apprenticeship,” a form of continuous, personal feedback between advisor and students, was introduced by Kurhila and Vihavainen [2011] for CS1 classes with about 150 to 200 students and resulted in higher pass rates and improved student satisfaction without increasing the budget for teaching staff. Investigating students’ perception of feedback, Pears et al. [2013] found students dissatisfied with the feedback they receive. As one way to improve the feedback processes, the authors suggest discussing not only assignment results on a one-to-one basis or as a group but also the expectations before the due date of the assignment. A preference for individual feedback over group feedback was voiced by teaching staff in Isomöttönen and Tirronen [2013], based on the experience that summarizing feedback relevant for an entire group was found to be too time-consuming. Student appreciation of one-to-one feedback was shown by East and Schafer [2005], who conducted an experiment using in-person grading as one of three experimental conditions.
Significant differences in student grades could not be observed between the groups, but students in the in-person grading group “were far more satisfied with their feedback method” [East and Schafer 2005, p. 381].

3.1.2. Lectures. The introduction of interactive teaching techniques is one possibility to empower lecturers to monitor students’ actual understanding more closely by frequent assessment of newly introduced concepts, even in large classes [Bruff 2009]. Classroom response systems, such as Clickers (http://www.h-itt.com), are a technology to reveal students’ misconceptions by posing conceptual questions or small tasks. Students can provide an answer choice for multiple-choice questions (MCQs) or a short answer by using a remote control.


Depending on students’ feedback, the lecturer might choose to alter the course of the lecture to address incorrect answers immediately. Improvements in students’ learning outcomes have been reported, for example, for large physics classes [Deslauriers et al. 2011]. In the context of CS1 and CS2 courses, Chamillard [2011] reports on students’ enthusiasm for the new technology, whereas Cutts and Kennedy [2005] found students’ actual engagement with the technology to be limited and report a poor correctness of students’ responses.

Just-in-Time Teaching (JiTT) is an example of an interactive teaching technique, which is often used in combination with a classroom response system. Bailey and Forbes [2005] implemented JiTT by providing Web assessments prior to the lecture (Web component) and feedback regarding these assessments in the lecture (classroom component) in an introductory computer science course. The classroom component also included the use of a personal response system similar to Clickers. Gannod et al. [2008] propose the “inverted classroom” to promote in-class activities over traditional lectures, which are delivered through podcasting, implementing immediate feedback on a regular basis for software engineering students. In general, the interventions mentioned earlier were seen to improve students’ engagement with the subject matter but are rarely linked to students’ learning outcomes. Kennedy and Cutts [2005] found the frequency and correctness of students’ responses over the semester to be associated with their performance in the end-of-semester assessment and report a positive relationship between students’ usage of the voting system and their learning outcomes.

The combination of JiTT and peer instruction as an alternative to the traditional lecture format has had some attention in recent years. Carter [2012] reports on a change of a module in a CS1 course where the traditional lecture content was delivered by screencasts, which students were expected to watch before the lecture. At the beginning of each lecture, Clickers were used to assess students’ comprehension of the material and “deliver mini-lectures on an as-needed basis” [Carter 2012, p. 362]. The remaining lecture time was used for in-class activities based on peer instruction, where students worked on activities in small, self-selected groups while the instructor and teaching assistant approached the students to join the discussions. Part of the study was a survey component demonstrating students’ support for the new lecture format. The question of how the introduction of peer instruction impacts student learning was the focus of a study by Spacco et al. [2013], in which the authors show a significant increase in the final exam score for the students in the peer instruction group. Both studies report increased student engagement and found that teachers enjoyed the interactive lecture format. Porter et al. [2013] report a significant decrease in failing rates for the CS1 offering after the introduction of peer instruction. The paper closes with recommendations for adopting peer instruction.

Further advances toward students’ active contribution before or during the lecture time are described in a review by Hamer et al. [2010] and include “content creation” (e.g., course material or algorithm visualizations), “solution sharing” (e.g., code reviews), and “annotations” (e.g., digital ink annotations created and transmitted via a tablet PC during the lecture). The authors raise concerns about missing evaluations or unclear definitions of “success” in the reports they reviewed.
3.1.3. Mid-Semester and Final Examination. In COMP160, written examinations are held at mid-semester and at the end of the semester. Students must pass the final examination to pass the course. This passing criterion (achieving more than half marks) makes the final exam an important part of the course. However, performance-oriented goals, such as scoring over 50% to pass the exam, are not useful to guide students’ preparation for and successful completion of the examinations.


We make practice examinations available prior to the examinations to illustrate learning goals in terms of the concepts or skills required. These practice examinations, which are available online, can be studied and discussed with a tutor during the laboratory sessions.

The format of a written, paper-based examination using MCQs and/or short answer questions appears to be problematic, because it leads to a mismatch between the overall intended learning outcome of solving programming problems and the actual assessment of definitions and selected concepts. Mismatches between intended learning outcomes and actual assessment tasks are suspected to encourage short-term and surface learning (e.g., Biggs and Tang [2007] and Carless [2007]). Students’ tendency to “learn what they think they will be tested on” [Biggs and Tang 2007, p. 169] can be used constructively by aligning the assessment with the intended learning outcome. Learning-oriented assessments as described by Carless [2007] “should promote the kind of learning dispositions required of graduates and should mirror real-world applications of the subject matter” (p. 59). However, it may be difficult in practice to assess possibly hundreds of students’ programming skills in a computer laboratory situation. Automated assessment tools are widely discussed (for an overview, see Ihantola and Ahoniemi [2010]) and can be considered to reduce the workload of marking, but conducting a fair computer-based assessment for large classes still seems problematic.

Feedback about the actual performance in the examinations as an achieved score provides little information about how the learning goals have been met and which concepts need to be reviewed to improve future performance. Ramsden [2003] shares a strong opinion in his book Learning to Teach in Higher Education about assessment feedback in the form of a mark or grade only:

It is impossible to overstate the role of effective feedback on students’ progress in any discussion of effective teaching and assessment. Students are understandably angry when they receive feedback on an assignment that consists only of a mark or grade. I believe that reporting results in this way, whatever the form of assessment, is cheating students. It is unprofessional teaching behaviour and ought not to be tolerated. (p. 187)

He is not the only author raising concern. Gibbs and Simpson [2004] warn that “grades without feedback may be particularly damaging” (p. 18). Especially for weaker students, assessment grades or scores are a measure of their failure and are not likely to encourage learning. The authors suggest that feedback would be less personal if it addressed the content and options for further action. For the same motivational reason, Hughes [2011] advocates an “ipsative” approach to assessment, which measures progress against the student’s own previous best performance rather than against a fixed standard. Based on findings from the literature that students have been more engaged with assessment feedback when it is given without a grade, Irwin et al. [2013] conducted a case study using “adaptive release” of feedback, where students across different subjects and faculties were required to reflect on the feedback before grades were released. Investigating students’ perceptions of this process, the researchers found that students felt more engaged with the feedback but also annoyed when the grade was seen as the principal outcome of the assessment process. In terms of engagement, the purely summative assessment of the final exam is unlikely to encourage learning.
With no opportunity to improve the final grade, feedback is most likely not perceived as relevant. This was emphasized by Pears et al. [2013], who investigated students’ perceptions of feedback regarding assessment practice for two first-year IT courses. The authors note that “students generally view feedback through the lens of the next assessment task” (p. 112) and emphasize applying feed-forward principles to improve the relevance of assessment feedback for students.


In this sense, the mid-semester exam should be utilized to reveal and communicate concepts for review and practice (e.g., lecture notes, lab tasks, or textbook chapters) to help students’ performance in the final examination. This is important, as the concepts in computer programming are seen as highly integrated [Robins 2010], and students need to internalize lower-level concepts before moving on to higher-level concepts. Personalized feedback could be provided by combining prewritten text blocks, a method used by the Rubyric system as described in Auvinen [2011]. The idea of interactive cover sheets [Bloxham and Campbell 2010] could also be explored: after finishing the examination, students indicate the topics or tasks they want to discuss in a practical session later on or as a topic to be reviewed during a lecture.

Prior to 2013, COMP160 students were allowed to collect their mid-semester exam script for revision but got no detailed feedback. In 2013, as a result of our reflection on feedback processes, we introduced individual summaries of the sections of the exam to be revised and the related resources. The summaries were automatically generated (based on students’ performance data for single exam sections) and handed out with the marked exam sheets during lab time. We amended the lab task following the mid-semester exam with the possibility to do revisions instead of the lab task, which covered a topic not assessed in the final examination. Revision sheets were available with tasks similar to those in the exam, which students needed to solve to get their lab mark. However, we could not measure any significant difference in the final exam score between the group doing the revisions (56 students) and the group prompted for revisions but who instead chose to do the original lab work (32 students).

3.1.4. Summary: Feedback on Task Level. Feedback on task level was reviewed for three course components: (1) the programming tasks to complete in tutor-assisted lab sessions or as extended assignments, (2) frequent small tasks during the lectures, and (3) mid-semester and end-of-semester examinations. By assessing our own course against the principles of effective feedback practice, we found that good feedback practice is established in the tutor-assisted lab situation, where students get timely, detailed feedback in a personal dialogue with the tutor. This way, feedback is often a cycle of discussing the requirements, revealing gaps/deficits in the actual solution, and giving corrective advice on how to work toward the requirements. Problems in this process occur when (1) students are inactive or attend lab sessions infrequently or (2) tutors are inexperienced, not attentive to students’ level of understanding, or simply too busy. Inconsistency in the way feedback is provided when one student is attended by different tutors is not avoidable in our course but could be addressed by “tutoring guidelines” in the form of model answers and shared marking criteria. Formal feedback sessions might be introduced for students who appear inactive.

The use of interactive teaching techniques and small tasks conducted before or during the lecture to establish discussions around students’ misunderstandings has been explored. In recent years, substantial research has been conducted to develop evidence-based scenarios for the use of classroom response systems and peer instruction. The way in which timely feedback is provided and peer dialogue is encouraged meets the principles of effective feedback practice.
Studies in CSEd involving interactive teaching approaches have shown improvements in students’ grades [Spacco et al. 2013], pass rates [Porter et al. 2013], and retention [Porter and Simon 2013].

There is a serious lack of feedback if examinations generate no feedback other than a score. This is potentially a missed opportunity to guide students’ learning based on their actual performance. One way would be to address common mistakes, lack of knowledge, or misconceptions in a lecture. The other would be written feedback along with the results and/or a personal dialogue during a practical session regarding the exam results. In each case, it is important to add meaning to the score by giving clear advice regarding which topics need to be revised and what resources are available.


At the end of this article, Table II provides an overview of the topics discussed for feedback on task level.

3.2. Feedback on Process Level

According to Hattie and Timperley [2007], feedback on process level addresses the processes necessary for task completion. One aspect is to help students develop strategies for error detection and correction. Another aspect is to provide students with cues that guide information search and the application of strategies. The authors claim that this type of feedback is more effective than feedback on task level because it targets the development of task strategies rather than the task outcome. In other words, this feedback should empower students to become more independent in the process of task completion. Translated to the CS1 context, feedback on process level is concerned first with the programming process rather than the program’s correctness and functionality, and second with the processes on course level, such as meeting terms requirements, attending the lectures and lab sessions, or preparing for lab tasks and examinations. These two aspects are used to structure the following two sections.

3.2.1. Programming Process. What are the expectations and goals in terms of managing the programming process? Ideally, students understand the requirements of the task and plan their program solution before they start to code. The program is developed in small, testable chunks where test cases are planned and checked at each stage of the development. If compiler or runtime errors occur, students are confident in reading and interpreting the error messages. They can solve most common problems independently by using an (online) reference, a debugger, or other strategies to localize and address the problem (e.g., commenting out statements in question, using printouts to watch variable content and program states). At the end of the development, requirements are reviewed, final tests are run, and a critical reflection might lead to some ideas about improvements. It goes without saying that the code is commented, legible, and follows general “clean code” rules as stated in Martin [2009]. This description contains many implicit goals regarding the process of programming:

—Understanding the requirements
—Planning the solution
—Successive development
—Frequent testing
—Fluency in using error detection and correction
—Good programming style
—Critical reflection

For a CS1 course, it is important that these goals are clearly communicated at the beginning or reinforced during the course as they become relevant. In our course, good programming style in terms of commenting code and sensible variable names is a requirement from the first task onward, and tutors are expected to discuss deficits and ways to improve. In contrast, planning the solution, successive development, and frequent testing are not requirements for task completion, and therefore these activities are neglected in the feedback process. If no goals are stated, the discussion about actual deficits and possible improvements is unfounded. In general, it is problematic to teach the advantages of good programming practice, because students need to have some programming experience and need to be faced with programming problems of a certain complexity to appreciate strategies and techniques such as planning and commenting. It is a matter of good timing to introduce process-related strategies when they are most relevant for successful task completion.
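As a hypothetical sketch (not taken from the COMP160 course material) of what successive development and frequent testing can look like at this level, a student might grow a small class one tested step at a time, checking each step against a hand-worked result before moving on:

    // Hypothetical sketch: develop a small class in testable steps and
    // check each step against a hand-worked result before moving on.
    public class GradeStats {

        // Step 1: return the sum of an array of marks.
        static int sum(int[] marks) {
            int total = 0;
            for (int mark : marks) {
                total += mark;
            }
            return total;
        }

        // Step 2 (added only after step 1 behaves as expected): the average.
        static double average(int[] marks) {
            return (double) sum(marks) / marks.length;
        }

        public static void main(String[] args) {
            int[] testMarks = {60, 70, 80};
            // Frequent testing: compare each result with a value worked out by hand.
            System.out.println("sum = " + sum(testMarks) + " (expected 210)");
            System.out.println("average = " + average(testMarks) + " (expected 70.0)");
        }
    }

Feedback at process level would then address how such a solution was developed and tested, not only whether its final output is correct.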

Exercises should be scheduled accordingly for discussion and feedback. Another way of communicating goals of performance is seen in providing “exemplars”—authentic student work to illustrate high-quality achievement [Sadler 2002]. Annotated model solutions could be useful to clarify some clean code principles and communicate expected standards.

Assessing the actual performance while solving a programming task is closely related to error detection and correction, as well as the use of compilers/interpreters and debuggers. Compilers or interpreters will keep posting error messages until the last syntax error is resolved. That can be a frustrating process, especially for a beginner who might experience these errors as personal failure or lack of ability [Perkins et al. 1989]. Unlike in essay writing, for example, there is no way to assume that the program is well crafted if it is not compiling. On the other hand, also different from writing an essay, students are potentially able to detect and address program errors in a self-guided way. However, the tools involved are complex and require a certain degree of user experience. For example, error messages are often cryptic or even misleading and require tutor support to be resolved. Small exercises could help students learn how to read a runtime error or typical compiler errors and how to find a solution more independently.

Programming environments for novices have been developed to reduce the complexity of professional environments. In a literature review of teaching introductory programming, Pears et al. [2007] provide an introduction and overview of educational programming environments. These are often equipped with visual debuggers, which provide instant feedback regarding actual program behavior. There are numerous algorithm visualization tools, working mainly as stand-alone applications (for an overview of the scope, see Shaffer et al. [2007]). These tools are not linked to actual programming tasks but can be considered as learning tools supporting the programming process.

If students are required to talk to a tutor at the stage of task completion, this is an excellent occasion not only to talk about how students met our expectations of good programming practice and how they could improve but also to guide students toward a critical reflection on their current solution: What have you done to solve this problem? Why did you use concept X and not Y? Are there alternative solutions? A good way to assist the feed-forward step is to encourage students to reflect on their program solution and think about possible refinements. However, Isomöttönen and Tirronen [2013] found students’ engagement with reflection tasks at the point of task completion to be limited, as students had no opportunity to apply the insights to the already finished tasks. That “feedback should directly enhance student outcomes on subsequent tasks” [Pears et al. 2013, p. 107] was stated as one key principle to improve feed-forward feedback on process level. The authors note: “It is important that whatever the chosen exercises, that feedback deals with generic skill development, and is useful in improving performance on the next task in the sequence” [Pears et al. 2013, p. 111]. Buffardi and Edwards [2014] showed that automated assessment tools can be used to support process-related feedback. The authors introduced an adaptive feedback system to observe students’ software testing behavior and to reinforce incremental testing by providing rewards in the form of hints.
The study showed that students responded with significantly improved test code and coverage.

In summary, marking criteria for a programming task should not only address the program’s functionality but also the programming process and matters of good programming style. A clear communication of those marking criteria up front would help students understand the expectations and act accordingly. If process-related goals are clearly stated, feedback can address unsatisfactory programming approaches and assist improvement toward these goals for the next tasks.
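To make these process-level expectations concrete, the following minimal Java sketch illustrates the working style such feedback might reinforce: a short plan written as comments before coding, development in small testable chunks, and a temporary printout used to inspect intermediate state. The task and all names are invented for illustration and are not taken from our course materials.

    // Hypothetical lab task: compute the average mark of a lab group,
    // ignoring unmarked entries (recorded as -1).
    //
    // Plan (written before coding):
    //   1. Sum only the valid marks and count them.
    //   2. Guard against the "no valid marks" case.
    //   3. Check each step against a small, hand-worked example.
    public class AverageMark {

        // Steps 1 and 2, developed and tested before anything else is added.
        public static double average(int[] marks) {
            int sum = 0;
            int count = 0;
            for (int mark : marks) {
                if (mark >= 0) {        // skip unmarked entries
                    sum += mark;
                    count++;
                }
            }
            // Temporary printout used to watch intermediate state while debugging;
            // removed once the hand-worked test passes.
            System.out.println("DEBUG: sum=" + sum + " count=" + count);
            if (count == 0) {
                return 0.0;             // agreed behavior for "no marks yet"
            }
            return (double) sum / count;
        }

        // Step 3: a tiny hand-checked test, rerun after every change.
        public static void main(String[] args) {
            int[] sample = {80, -1, 60, 100};                       // expected: 80.0
            System.out.println("average = " + average(sample));
            System.out.println("empty   = " + average(new int[0])); // expected: 0.0
        }
    }

Process-level feedback on a submission of this kind would comment on the presence of the plan, the hand-checked test cases, and the removal of leftover debug output once the tests pass, rather than only on whether the final result is correct.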

3.2.2. Course-Related Processes. For most CS1 courses, students will be expected to attend all lectures and practical sessions. Furthermore, students may be expected to come prepared to practical sessions. Before examinations, students will generally be expected to review lecture notes and other course materials. If questions occur, practical session time can be used for clarification. These are all “commonsense” rules/goals for us as teachers, but are they also for our students? What needs explicit enforcement? What needs to be followed up? When to intervene if these standards are not met? How these questions are addressed depends on the specific characteristics of the course and on what were seen as successful strategies in the past.

If lecture attendance is not compulsory, as in our course, students need to be self-motivated to attend on a regular basis. A tight link between the lectures and the required practical work can help to emphasize the relevance of the lectures. Referring directly to upcoming practical tasks and presenting similar examples with specific clues or hints, which assist task completion by addressing the most common problems, may increase students’ motivation to attend the lectures and lead to more independent work in the practical sessions. If students are struggling with the practical work and have not been to the lecture, feedback can be directed to clarify the expectations and to find ways to motivate improvements in study attitudes and behavior on an individual basis. As mentioned before, peer instruction has been seen to improve students’ engagement during lecture time (e.g., Spacco et al. [2013]). If marks are awarded for in-lecture activities, students are more likely to attend and participate in those activities.

Anecdotally, there is a strong correlation between the completion of practical work and final outcomes in CS1 (certainly this is the case in our COMP160, where data from 2006 to 2014 show a strong correlation between the number of submitted lab tasks and the score in the final exam: r = 0.608, p < 0.01, N = 1,489), and hence reinforcement of regular practical session attendance and task completion should be a top priority. If goals (e.g., a number of practical sessions that must be attended) are clearly stated as course requirements, it is possible to inform students early on in case the requirements are not met and to search for solutions to improve actual study behavior. Ideally, students who are contacted get back to the course coordinator to ensure that the feedback was received and understood and that further action can be discussed by taking their individual situations into account. Such a discussion can lead, for example, to the arrangement of private tutoring sessions, as well as to the decision to withdraw from the course to concentrate on other courses.

Assuming that better preparation for any mid-semester or final examination results in better understanding and a higher pass rate, providing practice examinations is a good starting point. Those practice exams could become an engaging and powerful element in the reviewing process if students (1) are clearly expected to solve them, (2) get automatic or personalized feedback on their performance and resulting scores, and (3) are provided with topics and resources for further review.
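The correlation between practical work and examination outcomes quoted above can be recomputed from routinely collected course records. The sketch below is illustrative only: the records are invented and the field layout is an assumption rather than our actual data format; it simply computes Pearson’s r between the number of submitted lab tasks and the final exam score.

    import java.util.Arrays;

    // Illustrative only: Pearson's r between the number of submitted lab tasks
    // and the final exam score for a small, invented set of student records.
    public class LabExamCorrelation {

        static double pearson(double[] x, double[] y) {
            double meanX = Arrays.stream(x).average().orElse(0);
            double meanY = Arrays.stream(y).average().orElse(0);
            double cov = 0, varX = 0, varY = 0;
            for (int i = 0; i < x.length; i++) {
                double dx = x[i] - meanX;
                double dy = y[i] - meanY;
                cov += dx * dy;
                varX += dx * dx;
                varY += dy * dy;
            }
            return cov / Math.sqrt(varX * varY);
        }

        public static void main(String[] args) {
            // Invented records: lab tasks submitted (out of 20) and exam score (out of 100).
            double[] labsSubmitted = {20, 18, 15, 10, 6, 19, 12, 8};
            double[] examScore     = {85, 78, 70, 55, 40, 90, 60, 35};
            System.out.printf("r = %.3f%n", pearson(labsSubmitted, examScore));
        }
    }

In practice such a figure would be accompanied by a significance test and recalculated for each cohort; the point is only that the data needed for this kind of goal setting and feedback already exist in most CS1 courses.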
3.2.3. Summary: Feedback on Process Level. The process level of feedback was discussed for two very different processes that CS1 students need to manage: (1) the process of completing programming tasks and (2) the process of meeting the course requirements. Although teachers may have high expectations with regard to how students manage these processes, they can also assist in terms of setting goals, providing well-structured feedback, and highlighting actual performance and opportunities for improvement.

Feedback processes need to be established around all aspects of good programming practice. There are several expectations when it comes to the programming process, but few of them are stated as standards or goals in our course. The lack of explicit goals and related exercises (e.g., using pseudocode or UML diagrams for planning a solution) leads to a lack of feedback and feed forward.

When goals regarding the programming process are only partially communicated and assessed, their relevance is diminished for students and tutors alike. However, process-oriented feedback targeting general aspects of good programming practice is important for students’ development into self-sufficient software programmers. The emphasis on planning programming tasks is as vital as providing students with knowledge and resources for effective error detection and correction. Tasks should be designed to aim for the development of those strategies and should offer possibilities to discuss students’ progress in this process.

Commonsense rules define good study behavior. The question of what we see as “good enough” or as “not satisfactory” might be hard to judge for first-year students. Here, explicit expectations and goals need to be stated and reinforced at different stages of the course. Important goals need to be accompanied by regular feedback that informs students about their performance in meeting these goals and guides improvement if necessary. The feedback process is broken if scheduled occasions and information considering students’ actual performance are missing.

Aspects of feedback on process level are summarized later in Table II. We found related CSEd literature limited in comparison to task-related feedback practices, suggesting a need for further research on how to improve feedback to support both programming processes and course-related processes.

3.3. Feedback on Self-Regulation Level

Feedback at self-regulation level aims to improve students’ self-monitoring skills and to address the way students direct and regulate their own learning. It seems that those strategies of self-regulated learning play an important role when mastering the first year at a university, as students are expected to learn much more independently than in high school. For many students, it is also their first year away from home, and managing the different demands of multiple courses and personal interests can be a big challenge. The description of “naïve self-regulators” by Zimmerman [1998]—who are performance oriented rather than learning oriented, often disinterested, monitor the outcome rather than the process, and seem unable to adapt their learning strategies because failure is attributed to a lack of ability and not to a lack of effort—sounds all too familiar. The question of what kind of feedback supports students’ development into skillful self-regulated learners is addressed in this section.

In the CS1 context, where students explore the new territory of computer programming, feedback on self-regulation level can help them become confident and efficient software developers who know the tools of the trade and how to approach the problems involved. If initial program designs are not successful, compiler errors frequently occur, and program crashes persist, these hurdles should ideally be taken as challenges and not cause frustration. If, in addition, students experience computer programming as exciting and rewarding and develop a genuine interest in improving their skills, then the best outcome that we could hope for is achieved. In this section, we introduce some ideas for feedback on self-regulation level that appear to be relevant in the CS1 context.

3.3.1. Multistage Assignments: Increasing Task Complexity with Fewer Instructions. Multistage assignments are seen to be effective when “required standards are becoming increasingly transparent and dialogic feedback processes support students’ self-monitoring” [Carless et al. 2011, p. 398] and when the single-assignment tasks are spaced out during the course to ensure consistent study activity rather than a rush toward the end [Gibbs 2006]. In a study involving 138 undergraduate students, Hernández [2012] found that the aspect of continuous assignments was indeed valued by students. Multiple lab tasks that are distributed over the semester can be seen as a multistage assignment, where (dialogic) feedback at the completion of one task should also be aimed to guide improvements for the next task.

Furthermore, an increasing level of complexity forces students to apply more and more advanced concepts. It is desirable that at the same time instructions become less prescriptive to foster students’ autonomy in understanding and implementing the requirements. However, there is a dilemma: more advanced programming concepts (e.g., inheritance) require nontrivial program designs to be illustrative. Detailed instructions are needed to scaffold the rather complex program development; however, toward the end of the course, minimal scaffolding should be needed to foster students’ confidence and a sense of ownership of their work. A scheme where instructions or tasks are offered at different levels for students to choose from (basic, intermediate, advanced) could support a sense of self-regulation. Students could be encouraged to challenge themselves by aiming for higher-level tasks or instructions that require more self-directed planning and implementation.

As task instructions become increasingly less prescriptive to help students develop into autonomous programmers, tutor interaction with students should also reflect this intention. Giving students answers to questions that they could (in theory) work out themselves is counterproductive in this sense. Targeted questions and hints could instead guide the process of finding the answer or solution to a problem. That is not always easy, and compromises might be necessary when a busy lab situation requires quick responses to students’ programming problems or some students’ expectation to be “told what to do” collides with the tutor’s encouragement to find the answer independently.

3.3.2. Student Engagement through Peer Assessment and Peer Feedback. In the process of pair programming and peer feedback, students need to articulate task requirements and relevant concepts, and by judging their peers’ performance, students learn to monitor their own performance more accurately, which is an essential aspect of the self-regulated learning process (e.g., Butler and Winne [1995]). Peer assessment and peer feedback are established ideas in computer science (see Luxton-Reilly [2009] for a systematic literature review of tools supporting peer assessment). Pair programming can be seen as one specific form of peer feedback where two programmers are in active dialogue about the task at hand. Advantages and disadvantages of pair programming for educational purposes, and specifically in CS1, are widely discussed (for a short summary, see Radermacher et al. [2012]), and improved learning, especially for weaker students, has been reported (e.g., Braught et al. [2008]). Addressing the problem of “unbalanced pairs” in pair programming, Wood et al. [2013] published positive results based on pairing first-year students with the same programming confidence level. These findings reiterate conclusions drawn from a review of 74 empirical studies [Salleh et al. 2011]. Apparently, sensible planning of the pairs and guidance throughout is important to get pair programming into fruitful action. For a comprehensive overview of studies using pair programming and important aspects of planning such an intervention, see Salleh et al. [2011].

Peer instruction, as already mentioned in Section 2.1 in the context of interactive lecture approaches (e.g., Porter et al. [2013]), and peer review (e.g., Smith et al. [2012]) are also approaches to engage computer science students in the peer feedback process without the pressure of grading. The effects reported are mainly positive.
For example, Zingaro [2014] reports specifically on students’ increased self-efficacy measures after introducing peer instruction. In that study, the scale used to measure self-efficacy refers to a questionnaire where students rate their confidence that they could master certain programming tasks. As self-regulation is suggested as one of the four factors composing this scale, it seems that peer instruction also supports students’ self-regulation. An excellent review of tools to support student collaboration in CSEd is provided by Hamer et al. [2010]. The authors found peer review tools to be “(by far) the most prevalent systems described in the literature” [Hamer et al. 2010, p. 5].

3.3.3. Opportunities for Self-Assessment. Self-assessment is seen as the key element in the process of self-regulated, lifelong learning (e.g., Tan [2007] or Butler and Winne [1995]). Only if students are able to judge their own performance in relation to (self-defined) goals can further action be taken toward accomplishing these goals. There is an abundance of tools and materials to support self-directed study in computer programming: online tutorials, lectures, and quizzes (e.g., provided by Khan Academy (http://www.khanacademy.org) or Udacity (http://www.udacity.com)); code-reading tasks (e.g., JavaGuide in Hsiao et al. [2010]); or code-writing tasks (e.g., JavaBat in Parlante [2007]). Student-generated learning content such as MCQs and small-scale programming problems is supported by PeerWise [Denny et al. 2008] and CodeWrite [Denny et al. 2011]. The integration of additional instant feedback into an automated tutor system to enhance sole error detection has been investigated (e.g., Kumar [2005]), as have the advantages and disadvantages of instant feedback [Corbett & Anderson 2001]. In a recent meta-analysis, Nesbit et al. [2014] demonstrated significant advantages of intelligent tutoring systems over teacher-led or group instruction, as well as over other computer-based instruction systems, in CSEd.

Automated assessment tools are related. Enhancements of the feedback from those tools for programming tasks have been explored regarding visual feedback [Ihantola et al. 2011] and the awarding of “badges” [Haaranen & Ihantola 2014] but showed limited impact on students’ actual study behavior. Falkner et al. [2014] suggest that higher “granularity” of the feedback units provided by an automated assessment tool impacts students’ submission behavior and performance positively. A good starting point into the literature on such tools is provided by Ihantola and Ahoniemi [2010] or Ala-Mutka [2005]. An in-depth literature review would be necessary to classify and evaluate the tools available and their potential support of self-directed learning and self-assessment. Integrating such resources for review or lab preparation might lead to higher engagement because of their interactive character and the instant feedback provided.

Providing regular opportunities to “critically evaluate the quality of their own work during, as well as after, its production” [Sambell 2011, p. 22] is one strategy to foster self-regulation for undergraduate students. In a study by Robinson and Udall [2006], it was shown that interactive “learning conversations” for engineering students lead to a greater sense of ownership in students’ learning. As already mentioned in Section 3.2, a critical reflection on a finished programming task combined with tutor feedback could support improvements for the next task, but it can also be useful to get students into the habit of critical self-reflection.

3.3.4. Self-Monitoring Course Performance. If students are able to access their marks (and other information about completed assessment) as the course progresses, it should be easy enough to self-monitor their progress on the course level. However, the implications of not meeting goals, such as finishing or attending practical sessions or achieving a mid-semester score under a certain level, may be rather unclear.
The process of self-monitoring performance could be enriched with information relevant to individual students, triggering (self-directed) action by answering questions such as the following: What are my prospects for the final examination based on my mid-semester score? How many practical tasks do I need to complete to have a good chance of passing the examination? Is there a correlation between timeliness of submission and examination performance? When should I seek additional help based on my course performance?

We have explored these issues in our own COMP160 course and have found, for example, that the number of finished lab tasks correlates strongly with the final exam score, that the best predictor for students’ performance is the mid-semester exam score, and that there are also less obvious predictors such as the timeliness of practical work completion and a negative association between the time spent in the labs and the final exam result.

Providing course information to help students self-monitor their performance therefore seems highly valuable. Adding meaning to available course data would inform students about their prospects in the course. Based on individual performance data, feedback could be personalized by adapting performance goals (e.g., goals to catch up or attend labs more regularly), relating students’ actual performance to what was observed as “successful” performance in the past, and pointing out aspects to improve. As those data also give indications about students at risk, interventions such as catch-up tutorials could be put into action.

Predicting factors for students’ failure or success in introductory programming courses are widely discussed in CSEd. For example, factors such as math background and comfort level in class (e.g., Wilson and Shrock [2001] and Bergin and Reilly [2005]), learning approach [Simon et al. 2006], final grade expectations [Rountree et al. 2004], and the degree of self-regulated learning [Bergin et al. 2005] have been found to be predictive. Compiler errors and students’ strategies for fixing those errors were also studied to estimate the final achievement in the course or to identify at-risk students [Tabanao et al. 2011]. However, the predictive value of students’ naturally occurring course performance data has had relatively little attention. Fenwick et al. [2009] found that students starting late with their assignment had a lower success rate. Falkner and Falkner [2012] investigated 220,000 records of 1,900 students and found that submission behavior early in the course can be linked to future performance. Students who are late with their first assignment are likely to establish a consistent pattern of lateness in the following years. The authors note: “If we assume, reasonably, that late submission is likely to lead to reduced marks or cascading lateness, with the inherent risks of disengagement and failure, this one measure immediately identifies a group who will benefit from contact, follow-up or mentoring” [Falkner & Falkner 2012, p. 60].

Learning analytics is an emerging research field in higher education and is most commonly defined as “the measurement, collection, analysis and reporting of data about learners and their contexts, for the purpose of understanding and optimizing learning and the environments in which it occurs” [Clow 2013]. Predictive modeling, based on a variety of datasets (e.g., students’ previous educational records, demographics, activity, and grades in current courses), is one major application of learning analytics that often is used to identify students at risk of failing or dropping a course (e.g., Essa and Ayad [2012] and Barber and Sharkey [2012]). This information is made available to teachers to inform decisions about interventions supporting the learner. One successful example of how predictions are turned into feedback information for students is Course Signals [Arnold and Pistilli 2012], where a signal is generated and sent out to students to visualize their prospects for a certain course: green (indicating a high chance of success), yellow (indicating potential problems), and red (indicating a high chance of failure). The authors report evidence of “a strong increase in satisfactory grades, and a decrease in unsatisfactory grades” (p. 268) along with higher retention rates for students using Course Signals. On an anecdotal basis, faculty members observed students to be more proactive and “benefiting from knowing how they are really doing in the course” (p. 270).
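Purely as an illustration of how course performance data might be condensed into such a signal, the following sketch implements a simple rule-based traffic light. The thresholds and input measures are invented for this example; they are not those used by Course Signals or in our course, and a real system would calibrate them against historical data.

    // Hypothetical traffic-light indicator in the spirit of Course Signals.
    // The thresholds and input measures are invented for illustration; a real
    // system would calibrate them against historical course data.
    public class ProgressSignal {

        enum Signal { GREEN, YELLOW, RED }

        static Signal classify(double midSemesterScore,  // percentage, 0-100
                               int labsCompleted,
                               int labsTotal,
                               int lateSubmissions) {
            double labRate = (double) labsCompleted / labsTotal;
            if (midSemesterScore >= 65 && labRate >= 0.8 && lateSubmissions <= 1) {
                return Signal.GREEN;   // on track
            }
            if (midSemesterScore < 45 || labRate < 0.5) {
                return Signal.RED;     // high risk: suggest contacting a tutor
            }
            return Signal.YELLOW;      // potential problems: point to catch-up resources
        }

        public static void main(String[] args) {
            System.out.println(classify(72, 9, 10, 0));  // GREEN
            System.out.println(classify(50, 6, 10, 3));  // YELLOW
            System.out.println(classify(38, 4, 10, 5));  // RED
        }
    }

The value of such a signal lies less in the classification itself than in the feedback attached to each state, which should point students to concrete next steps such as catch-up tutorials or a meeting with the course coordinator.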
Considering the impact of learning edge momentum as postulated by Robins [2010] for CS1 courses, students falling behind early on are unlikely to catch up later in the course. The integrative nature of the course material leads to a spiraling effect on students’ consecutive performance. It can be assumed that basic concepts that are not fully understood cause trouble with comprehending higher-level concepts. Communicating such typical course characteristics that we know about, even if only from anecdotal evidence, should help students understand the factors influencing their performance and act accordingly. Our search for literature relating to the impact of feedback on self-regulation level using available course performance data did not yield any substantive results.

Although the predictive value of those data might vary a lot between programs and institutions, the question remains whether feedback on this level is effective in positively influencing students’ study behavior in an introductory programming course.

3.3.5. Summary: Feedback on Self-Regulation Level. Feedback on self-regulation level is well regarded in the higher education literature as being important for students’ development toward becoming self-regulated, resourceful, lifelong learners. In the previous sections, multistage assessments to engage students in critical reflection and self-regulation, opportunities for self-assessment on task, concept, and course level, and different forms of peer feedback have been discussed in the CS1 context. Given the importance of fostering self-regulated learning, more high-quality feedback on this level is needed. Only if students understand the course mechanics and can judge their actual performance in relation to their course performance goals is self-regulation at this level possible. For example, naturally occurring course data were found to be predictive of our students’ final performance and could be used to assist the definition of course performance goals and the translation of actual course performance data into meaningful and relevant feedback information. In the field of learning analytics, these feedback aspects are acknowledged, and predictive student data are used to inform teachers about students at risk and their prospects in a course. However, in the CSEd literature, predictive factors are rarely discussed in the context of practical feedback interventions. For a summary of related topics to improve feedback on self-regulation level as discussed in this section, see the bottom section of Table II. Boundaries between goals-, performance-, and improvement-specific feedback are less apparent on this level.

4. SUMMARY AND FUTURE WORK

To translate theories of what constitutes effective feedback, as found in the higher education literature, into the CS1 learning context, we presented three related parts:

(1) Drawing strands together from the higher education literature to describe the importance and complex nature of feedback and to determine effective feedback practices.
(2) Providing a practical background to consider these practices by reflecting on feedback processes in a typical CS1 course.
(3) Reviewing the CSEd literature to link areas of possible improvement within the case study to existing research in the field.

Literature in higher education provides many studies, frameworks, and opinions on feedback. Quality feedback is generally recognized as being timely, detailed, relevant, and understandable for students. For the process of learning, effective feedback states and clarifies the goals, informs students about actual performance, and indicates ways for improvement toward the goals; additionally, it supports the development of students’ own self-regulation. Ideally, feedback should take the learner’s characteristics and abilities into account and make sure that feedback messages are received, understood, and can be acted on. Furthermore, good assessment and feedback practice should provide opportunities for lecturers and tutors to gain insight into students’ difficulties and misunderstandings, which can be addressed while teaching. It is important to implement good feedback practices not only on task level but also on process and self-regulation levels.

The question of how to consider good feedback practices in the CS1 context was addressed by describing common aspects of a typical CS1 course. This way, we could identify and discuss possible areas for improvement on task, process, and self-regulation level, which are summarized as a road map at the end of this article (Table II).

Table II. Road Map to Consider Effective Feedback Practices in the CS1 Context

This discussion not only illustrated how the framework of effective feedback practices can be translated into the CS1 context, it also structured a preliminary survey of the CSEd research literature, pointing to various aspects that might be helpful to establish good feedback practice. In-depth reviews of the literature would be required to plan and conduct feedback interventions for different parts and stages of the course. One exciting direction was seen in the investigation of the predictive value of course data to (1) estimate students’ performance, (2) provide tailor-made feedback to support students’ self-regulated learning, and (3) evaluate the effectiveness of feedback on this level. We hope to initiate a discussion on effective feedback techniques in the context of CS1 courses and encourage colleagues at other institutions to report feedback interventions.

REFERENCES

K. Ala-Mutka. 2005. A survey of automated assessment approaches for programming assignments. Computer Science Education 15, 2, 83–102.
V. L. Almstrum, O. Hazzan, M. Guzdial, and M. Petre. 2005. Challenges to computer science education research. In Proceedings of the 36th SIGCSE Technical Symposium on Computer Science Education (SIGCSE’05). 191–192.
K. E. Arnold and M. D. Pistilli. 2012. Course Signals at Purdue: Using learning analytics to increase student success. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK’12). 267–270.
S. Askew and C. Lodge. 2000. Gifts, ping-pong and loops—linking feedback and learning. In Feedback for Learning, S. Askew (Ed.). RoutledgeFalmer, London, UK, 1–18.
T. Auvinen. 2011. Rubyric. In Proceedings of the 11th Koli Calling International Conference on Computing Education Research. 102–106.
T. Bailey and J. Forbes. 2005. Just-in-Time Teaching for CS0. ACM SIGCSE Bulletin 37, 1, 366.
C. Bailey-Lee, S. Garcia, and L. Porter. 2013. Can peer instruction be effective in upper-division computer science courses? ACM Transactions on Computing Education 13, 3, 12.1–12.22.
R. Barber and M. Sharkey. 2012. Course correction: Using analytics to predict course success. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK’12). 259–262.
C. Beaumont, M. O’Doherty, and L. Shannon. 2008. Staff and student perceptions of feedback quality in the context of widening participation. Available at https://www.heacademy.ac.uk/.
K. Becker. 2003. Grading programming assignments using rubrics. ACM SIGCSE Bulletin 35, 3, 253–253.
S. Bergin and R. Reilly. 2005. Programming: Factors that influence success. ACM SIGCSE Bulletin 37, 1, 411–415.
S. Bergin, R. Reilly, and D. Traynor. 2005. Examining the role of self-regulated learning on introductory programming performance. In Proceedings of the 2005 International Workshop on Computing Education Research (ICER’05). 81–86.
J. Biggs and C. Tang. 2007. Teaching for Quality Learning at University: What the Student Does. McGraw-Hill.
S. Bloxham and L. Campbell. 2010. Generating dialogue in assessment feedback: Exploring the use of interactive cover sheets. Assessment and Evaluation in Higher Education 35, 3, 291–300.
G. Braught, L. M. Eby, and T. Wahls. 2008. The effects of pair-programming on individual programming skill. In Proceedings of the 39th SIGCSE Technical Symposium on Computer Science Education (SIGCSE’08). 200–204.
D. Bruff. 2009. Teaching with Classroom Response Systems: Creating Active Learning Environments. Wiley.
K. Buffardi and S. Edwards. 2014. Responses to adaptive feedback for software testing. In Proceedings of the 2014 Conference on Innovation and Technology in Computer Science Education (ITiCSE’14). 165–170.
D. Burke and Pieterick. 2010. Giving Students Effective Written Feedback. McGraw-Hill International.
D. L. Butler and P. H. Winne. 1995. Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research 65, 3, 245–281.
R. Cardell-Oliver. 2011. How can software metrics help novice programmers? In Proceedings of the 13th Australasian Computing Education Conference. 55–62.
D. Carless. 2006. Differing perceptions in the feedback process. Studies in Higher Education 31, 2, 219–233.
D. Carless. 2007. Learning-oriented assessment: Conceptual bases and practical implications. Innovations in Education and Teaching International 44, 1, 57–66.

D. Carless, D. Salter, M. Yang, and J. Lam. 2011. Developing sustainable feedback practices. Studies in Higher Education 36, 4, 395–407.
P. Carter. 2012. An experience report: On the use of multimedia pre-instruction and Just-in-Time Teaching in a CS1 course. In Proceedings of the 43rd ACM Technical Symposium on Computer Science Education (SIGCSE’12). 361–366.
A. T. Chamillard. 2011. Using a student response system in CS1 and CS2. In Proceedings of the 42nd ACM Technical Symposium on Computer Science Education (SIGCSE’11). 299–304.
D. Clow. 2013. An overview of learning analytics. Teaching in Higher Education 18, 6, 683–695.
A. Corbett and J. Anderson. 2001. Locus of feedback control in computer-based tutoring: Impact on learning rate, achievement and attitudes. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (SIGCHI’01). 245–252.
Q. Cutts and G. Kennedy. 2005. Connecting learning environments using electronic voting systems. In Proceedings of the 7th Australasian Conference on Computing Education. 181–186.
P. Denny, A. Luxton-Reilly, and J. Hamer. 2008. The PeerWise system of student contributed assessment questions. In Proceedings of the 10th Australasian Conference on Computing Education. 69–74.
P. Denny, A. Luxton-Reilly, E. Tempero, and J. Hendrickx. 2011. CodeWrite: Supporting student-driven practice of Java. In Proceedings of the 42nd ACM Technical Symposium on Computer Science Education (SIGCSE’11). 471–476.
L. Deslauriers, E. Schelew, and C. Wieman. 2011. Improved learning in a large-enrollment physics class. Science 332, 6031, 862–864.
C. S. Dweck. 1999. Self-Theories: Their Role in Motivation, Personality, and Development. Psychology Press.
J. East and J. Schafer. 2005. In-person grading: An evaluative experiment. ACM SIGCSE Bulletin 37, 1, 378–382.
A. Essa and H. Ayad. 2012. Student success system: Risk analytics and data visualization using ensembles of predictive models. In Proceedings of the 2nd International Conference on Learning Analytics and Knowledge (LAK’12). 158–161.
N. J. G. Falkner and K. E. Falkner. 2012. A fast measure for identifying at-risk students in computer science. In Proceedings of the 9th Annual International Conference on International Computing Education Research (ICER’12). 55–62.
N. Falkner, R. Vivian, D. Piper, and K. Falkner. 2014. Increasing the effectiveness of automated assessment by increasing marking granularity and feedback units. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education (SIGCSE’14). 9–14.
J. B. Fenwick, C. Norris, F. Barry, J. Rountree, C. Spicer, and S. Cheek. 2009. Another look at the behaviors of novice programmers. ACM SIGCSE Bulletin 41, 1, 296–300.
G. C. Gannod, J. E. Burge, and M. T. Helmick. 2008. Using the inverted classroom to teach software engineering. In Proceedings of the 13th International Conference on Software Engineering (ICSE’08). 777–786.
G. Gibbs. 2006. How assessment frames student learning. In Innovative Assessment in Higher Education, C. Bryan and K. Clegg (Eds.). Routledge, London, UK, 23–36.
G. Gibbs. 2010. Using Assessment to Support Student Learning. Leeds Met Press, Leeds Metropolitan University.
G. Gibbs and C. Simpson. 2004. Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education 5, 1, 3–31.
S. Grissom. 2013. Introduction to special issue on alternatives to lecture. ACM Transactions on Computing Education 13, 3, 1–5.
S. Grissom, C. Hundhausen, and P. Conrad. 2014. Alternatives to lecture: Experience peer instruction and pedagogical code reviews. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education (SIGCSE’14). 275–276.
M. Guzdial and B. Ericson. 2007. Introduction to Computing and Programming in Java: A Multimedia Approach. Pearson Prentice Hall.
L. Haaranen and P. Ihantola. 2014. How (not) to introduce badges to online exercises. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education (SIGCSE’14). 33–38.
J. Hamer, H. C. Purchase, A. Luxton-Reilly, and J. Sheard. 2010. Tools for “contributing student learning.” In Proceedings of the 2010 ITiCSE Working Group Reports (ITiCSE-WGR’10). 1–14.
J. Hattie. 2009. Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. Routledge.
J. Hattie and H. Timperley. 2007. The power of feedback. Review of Educational Research 77, 1, 81–112.
R. Hernández. 2012. Does continuous assessment in higher education support student learning? Higher Education 64, 4, 489–502.

D. Horton, M. Craig, J. Campbell, P. Gries, and D. Zingaro. 2014. Comparing outcomes in inverted and traditional CS1. In Proceedings of the 2014 Conference on Innovation and Technology in Computer Science Education (ITiCSE’14). 261–266.
D. Hounsell. 2003. Student feedback, learning and development. In Higher Education and the Lifecourse, M. Slowey and D. Watson (Eds.). Society for Research into Higher Education Series. Open University Press, Buckingham, UK, 67–78.
D. Hounsell. 2007. Towards more sustainable feedback to students. In Rethinking Assessment in Higher Education: Learning for the Longer Term, D. Boud and N. Falchikov (Eds.). Routledge, London, UK.
I.-H. Hsiao, S. Sosnovsky, and P. Brusilovsky. 2010. Guiding students to the right questions: Adaptive navigation support in an E-learning system for Java programming. Journal of Computer Assisted Learning 26, 4, 270–283.
G. Hughes. 2011. Towards a personal best: A case for introducing ipsative assessment in higher education. Studies in Higher Education 36, 3, 353–367.
C. Hundhausen, A. Agrawal, and P. Agarwal. 2013. Talking about code: Integrating pedagogical code reviews into early computing courses. ACM Transactions on Computing Education 13, 3.
P. Ihantola and T. Ahoniemi. 2010. Review of recent systems for automatic assessment of programming assignments. In Proceedings of the 10th Koli Calling International Conference on Computing Education Research. 86–93.
P. Ihantola, V. Karavirta, and O. Seppälä. 2011. Automated visual feedback from programming assignments. In Proceedings of the 6th Program Visualization Workshop (PVW’11). 87–95.
B. Irwin, S. Hepplestone, G. Holden, H. J. Parkin, and L. Thorpe. 2013. Engaging students with feedback through adaptive release. Innovations in Education and Teaching International 50, 1, 51–61.
V. Isomöttönen and V. Tirronen. 2013. Teaching programming by emphasizing self-direction: How did students react to the active role required of them? ACM Transactions on Computing Education 13, 2.
T. Jessop, Y. El Hakim, and G. Gibbs. 2013. The whole is greater than the sum of its parts: A large-scale study of students’ learning in response to different programme assessment patterns. Assessment and Evaluation in Higher Education 39, 1, 73–88.
M. Joy, J. Sinclair, S. Sun, J. Sitthiworachart, and J. López-González. 2009. Categorising computer science education research. Education and Information Technologies 14, 2, 105–126.
G. Kennedy and Q. Cutts. 2005. The association between students’ use of an electronic voting system and their learning outcomes. Journal of Computer Assisted Learning 21, 260–268.
P. Kinnunen, V. Meisalo, and L. Malmi. 2010. Have we missed something? Identifying missing types of research in computing education. In Proceedings of the 6th International Workshop on Computing Education Research. ACM, New York, NY, 13–22.
A. Kumar. 2005. Generation of problems, answers, grade, and feedback—case study of a fully automated tutor. ACM Journal of Educational Resources in Computing 5, 3, 1–25.
J. Kurhila and A. Vihavainen. 2011. Management, structures and tools to scale up personal advising in large programming courses. In Proceedings of the 2011 Conference on Information Technology Education (SIGITE’11). 3.
J. Lewis, P. J. DePasquale, and J. Chase. 2010. Java Foundations: Introduction to Program Design and Data Structures. Pearson Education.
N.-F. Liu and D. Carless. 2006. Peer feedback: The learning element of peer assessment. Teaching in Higher Education 11, 3, 279–290.
A. Luxton-Reilly. 2009. A systematic review of tools that support peer assessment. Computer Science Education 19, 4, 209–232.
L. Malmi, J. Sheard, Simon, R. Bednarik, J. Helminen, A. Korhonen, N. Myller, J. Sorva, and A. Taherkhani. 2010. Characterizing research in computing education: A preliminary analysis of the literature. In Proceedings of the 6th International Workshop on Computing Education Research. ACM, New York, NY, 3–12.
R. C. Martin. 2009. Clean Code: A Handbook of Agile Software Craftsmanship. Prentice Hall.
A. McLean. 2012. An Anatomy of Feedback: A Phenomenographic Investigation into Undergraduate Students’ Experiences of Feedback. Ph.D. Dissertation. University of Otago, Dunedin, New Zealand.
J. C. Nesbit, O. O. Adesope, Q. Liu, and W. Ma. 2014. How effective are intelligent tutoring systems in computer science education? In Proceedings of the 2014 IEEE 14th International Conference on Advanced Learning Technologies. 99–103.
D. Nicol. 2014. Unlocking learners’ evaluative skills: A peer review perspective. In Proceedings of the 10th Annual Conference on International Computing Education Research (ICER’14).

D. J. Nicol and D. Macfarlane-Dick. 2006. Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. Studies in Higher Education 31, 2, 199–218.
N. Parlante. 2007. Nifty reflections. ACM SIGCSE Bulletin 39, 2, 25–26.
A. Pears, J. Harland, M. Hamilton, and R. Hadgraft. 2013. What is feedback? Connecting student perceptions to assessment practices. In Proceedings of Learning and Teaching in Computing and Engineering (LaTiCE’13).
A. Pears, S. Seidman, L. Malmi, L. Mannila, E. Adams, J. Bennedsen, M. Devlin, and J. Paterson. 2007. A survey of literature on the teaching of introductory programming. ACM SIGCSE Bulletin 39, 4, 204.
D. Perkins, C. Hancock, R. Hobbs, F. Martin, and R. Simmons. 1989. Conditions of learning in novice programmers. In Studying the Novice Programmer, E. Soloway and J. C. Spohrer (Eds.). Lawrence Erlbaum, Hillsdale, NJ, 261–279.
L. Porter, C. Bailey-Lee, and B. Simon. 2013. Halving fail rates using peer instruction: A study of four computer science courses. In Proceedings of the 44th ACM Technical Symposium on Computer Science Education (SIGCSE’13). 177–182.
L. Porter and B. Simon. 2013. Retaining nearly one-third more majors with a trio of instructional best practices in CS1. In Proceedings of the 44th ACM Technical Symposium on Computer Science Education (SIGCSE’13). 165–170.
A. Poulos and M. J. Mahony. 2008. Effectiveness of feedback: The students’ perspective. Assessment and Evaluation in Higher Education 33, 2, 143–154.
M. Price, K. Handley, and J. Millar. 2011. Feedback: Focusing attention on engagement. Studies in Higher Education 36, 8, 37–41.
M. Prosser and K. Trigwell. 1999. Understanding Learning and Teaching. Society for Research into Higher Education Series. Open University Press.
A. Radermacher, G. Walia, and R. Rummelt. 2012. Improving student learning outcomes with pair programming. In Proceedings of the 9th Annual International Conference on International Computing Education Research (ICER’12). 87–92.
P. Ramsden. 2003. Learning to Teach in Higher Education. RoutledgeFalmer.
K. Renaud and Q. Cutts. 2013. Teaching human-centered security using nontraditional techniques. ACM Transactions on Computing Education 13, 3, 11.1–11.23.
A. Robins. 2010. Learning edge momentum: A new account of outcomes in CS1. Computer Science Education 20, 1, 37–71.
A. Robinson and M. Udall. 2006. Using formative assessment to improve student learning through critical reflection. In Innovative Assessment in Higher Education, C. Bryan and K. Clegg (Eds.). Routledge, London, UK, 92–99.
N. Rountree, J. Rountree, and A. Robins. 2004. Interacting factors that predict success and failure in a CS1 course. ACM SIGCSE Bulletin 36, 4.
D. R. Sadler. 1989. Formative assessment and the design of instructional systems. Instructional Science 144, 119–144.
D. R. Sadler. 2002. Ah! . . . So that’s ‘quality.’ In Assessment: Case Studies, Experience and Practice from Higher Education, P. Schwartz and Webb (Eds.). Kogan Page, London, UK, 130–137.
N. Salleh, E. Mendes, and J. Grundy. 2011. Empirical studies of pair programming for CS/SE teaching in higher education: A systematic literature review. IEEE Transactions on Software Engineering 37, 4, 509–525.
K. Sambell. 2011. Rethinking Feedback in Higher Education: An Assessment for Learning Perspective. Higher Education Academy, Bristol, UK.
C. Shaffer, M. Cooper, and S. Edwards. 2007. Algorithm visualization: A report on the state of the field. ACM SIGCSE Bulletin 39, 1, 150–154.
B. Simon, R. Lister, and S. Fincher. 2006. Multi-institutional computer science education research: A review of recent studies of novice understanding. In Proceedings of the 36th Annual Conference on Frontiers in Education. 12–17.
J. Smith, J. Tessler, E. Kramer, and C. Lin. 2012. Using peer review to teach software testing. In Proceedings of the 9th Annual International Conference on International Computing Education Research (ICER’12). 93–98.
J. Spacco, B. Simon, and J. Parris. 2013. How we teach impacts student learning: Peer instruction vs. lecture in CS0. In Proceedings of the 44th ACM Technical Symposium on Computer Science Education (SIGCSE’13). 41–46.

E. S. Tabanao, M. M. T. Rodrigo, and M. C. Jadud. 2011. Predicting at-risk novice Java programmers through the analysis of online protocols. In Proceedings of the 7th International Workshop on Computing Education Research (ICER’11). 85.
K. Tan. 2007. Conceptions of self-assessment. In Rethinking Assessment in Higher Education: Learning for the Longer Term, D. Boud and N. Falchikov (Eds.). Routledge, London, UK.
A. Vihavainen, J. Airaksinen, and C. Watson. 2014. A systematic review of approaches for teaching introductory programming and their influence on success. In Proceedings of the 10th Annual Conference on International Computing Education Research (ICER’14). 19–26.
B. C. Wilson and S. Shrock. 2001. Contributing to success in an introductory computer science course: A study of twelve factors. ACM SIGCSE Bulletin 33, 1, 184–188.
K. Wood, D. Parsons, J. Gasson, and P. Haden. 2013. It’s never too early: Pair programming in CS1. In Proceedings of the 15th Australasian Computing Education Conference (ACE’13). 13–21.
M. Yorke and P. Knight. 2004. Self-theories: Some implications for teaching and learning in higher education. Studies in Higher Education 29, 1, 25–37.
B. J. Zimmerman. 1998. Developing self-fulfilling cycles of academic regulation: An analysis of exemplary instructional models. In Self-Regulated Learning: From Teaching to Self-Reflective Practice, D. H. Schunk and B. J. Zimmerman (Eds.). Guilford Press, New York, NY, 1–19.
D. Zingaro. 2014. Peer instruction contributes to self-efficacy in CS1. In Proceedings of the 45th ACM Technical Symposium on Computer Science Education (SIGCSE’14). 373–378.

Received December 2013; revised February 2015; accepted February 2015
