How frequent and useful feedback benefits student learning

GIVING FEEDBACK: In the following video from the CIRTL online course “Introduction to STEM Teaching”, Dr. Angela Little from the University of California, Berkeley and a group of graduate students from the University of Colorado Boulder discuss their experiences with useful vs. unhelpful feedback, describing the importance of frequent feedback to students from peers and instructors. To view the full module with additional information about assessment, click here.

What is formative assessment?

Formative vs. Summative (sometimes called “Auditive”) Assessment:

How will you assess student learning throughout the quarter (“formative assessment”) and at the end (“summative assessment”)?

The purpose of an assessment is to give students an opportunity to demonstrate that they have learned what they were supposed to. With formative assessment, students have opportunities over time to first make mistakes and then learn from those mistakes. This is more aligned with a “mastery” approach – designing assessments so that students eventually master the competency. A summative/auditive assessment looks back at what students should have learned, assesses them on it, and then moves on to something new. Both can be used effectively.

Resource:

Fink, L. Dee, “A Self-Directed Guide to Designing Courses for Significant Learning”: This workbook-style guide will walk you through the different elements of backward course design, including designing for assessment of student learning.

  • Integrate active learning techniques during lecture, discussion and lab.
  • Have drafts due throughout the quarter.
  • Have students grade one another’s work using structured, rubric-based protocols.
    • CCLE has a “Workshop” assignment, in which each student submission can automatically be assigned to a few other students to grade based on a rubric. The students who are assigning the grade get more points for assigning similar grades to the same work (i.e. there is a reward for grading carefully).
    • Use Calibrated Peer Review to help students learn not only how to grade one another, but what good examples look like.
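The “reward for grading carefully” mechanism described above can be sketched in a few lines. This is only an illustrative Python sketch (the function names and the linear penalty are assumptions, not CCLE’s or Calibrated Peer Review’s actual formulas): each submission’s grade is the mean of its peer scores, and each grader earns a “grading grade” that shrinks as their score deviates from the group consensus.

```python
from statistics import mean

def submission_grade(peer_scores):
    """A submission's grade: the mean of the rubric scores its peers assigned."""
    return mean(peer_scores)

def grading_grade(grader_score, peer_scores, max_points=10):
    """A grader's reward: full credit when their score matches the group
    consensus, decreasing linearly as their score deviates from it."""
    deviation = abs(grader_score - mean(peer_scores))
    return max(0.0, max_points * (1 - deviation / max_points))

# Three peers grade the same submission on a 10-point rubric.
scores = [8, 9, 7]
assert submission_grade(scores) == 8
assert grading_grade(8, scores) == 10.0  # matched the consensus: full credit
assert grading_grade(9, scores) == 9.0   # one point off the consensus
```

The key design idea is that careless grading is costly to the grader, so students have an incentive to read the rubric and the work closely.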

How to Assess Students Based on Mastery of Material

FEATURED RESOURCE: In October 2018, the Associate Dean of UCLA’s School of Engineering issued a memorandum with guidance on assessment and grading. In particular, pages 5-7 provide guidance on course design, assigning grades based on content mastery rather than relative performance, and why and how to clearly state the grading policy in the syllabus.

Grading Systems UNCC: Summary of advantages and disadvantages of different grading systems

Schinske and Tanner: Teaching more by grading less or differently

VALUE Rubrics: These AAC&U rubrics include detailed language for describing and assessing skills such as critical thinking, problem solving, and quantitative literacy. Adapting such rubrics can help you build better assignments while being clearer about the skills you want your students to develop and demonstrate.

NY Times Article: Why We Should Stop Grading on a Curve

Hughes, B., Hurtado, S., & Eagan, M. K. (2014, November). Driving Up or Dialing Down Competition in Introductory STEM Courses: Individual and Classroom Level Factors. Paper presented at the Annual Meeting of the Association for the Study of Higher Education, Washington, DC, November 20-22, 2014. This research found norm-referenced grading to be associated with higher student perceptions of competition; faculty can “dial down” competitiveness by structuring collaboration into courses.


There are online grading platforms that allow instructors and TAs to score assignments with open-response questions, including exams and homework problems. UCLA now has a licensing agreement with Gradescope (www.gradescope.com) for its online grading tool, which can be incorporated into CCLE course sites.


Why use Gradescope?

Benefits to instructors and TAs:

  • Rubric-based scoring system allows for more consistent and fair grading
  • Choice of using a positive (points added for correct answers) or negative (points deducted for wrong answers) rubric.
    • Positive scoring rubrics reward students for their achievements and set a positive tone in the classroom – one focused on what students are doing right, rather than on what students are doing wrong.
  • Streamlines the grading process, achieving greater grading efficiencies in large-enrollment courses
    • TAs can replicate the same feedback to multiple students without having to re-write the same comments over and over again (a very time-intensive effort)
    • Changes to scores get propagated to all assignments, so TAs save time because they do not have to regrade questions
  • Electronic grading provides flexibility because instructors and TAs can grade assignments from anywhere (home, a café, their lab, their office)
  • Academic integrity issues are mitigated by a repository of electronic, scanned copies of every student’s exam or assignment
  • Tracks and reports analytics, giving instructors and TAs substantially more feedback on student mastery of concepts and skills
  • Great online documentation with instructions (including video tutorials), suggestions and short-cuts
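The positive/negative rubric choice above amounts to two ways of writing the same rubric. A minimal Python sketch makes the equivalence concrete (the point values and function names here are hypothetical illustrations, not Gradescope’s internal model):

```python
def score_positive(earned_items):
    """Positive rubric: start at 0 and add points for each correct element."""
    return sum(earned_items)

def score_negative(max_points, deductions):
    """Negative rubric: start at the maximum and subtract points per error."""
    return max_points - sum(deductions)

# A 10-point question: the student earned rubric items worth 4 and 3 points,
# which is equivalent to missing one element worth 3 points.
assert score_positive([4, 3]) == 7
assert score_negative(10, [3]) == 7
```

Both framings produce the same score; the difference is in the feedback students see – a list of what they earned versus a list of what they lost.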

Benefits to students:

  • Transparency in the points breakdown (partial credit) for a student’s score on a question, because the points assigned are linked to comments explaining what was done correctly or incorrectly
  • Students receive more feedback from TAs grading their assignments
  • The quality of feedback a student receives is superior to what they could get from hand-written comments by a TA
  • Electronic return of graded assignments gives students rapid feedback they can use to gauge learning gains and progress toward mastery
  • Integration with CCLE allows secure sign-in with UCLA BOL account

There’s an app for that!

Here is an app you can share with your students to scan their homework into a PDF: https://acrobat.adobe.com/us/en/mobile/scanner-app.html


How to add Gradescope to your CCLE course site

Since UCLA now has a licensing agreement with Gradescope, instructors may incorporate this grading tool into their CCLE course sites as follows:

  1. Sign in to CCLE, and turn editing on.
  2. Navigate to the section of your CCLE site where you want the link for Gradescope to appear, and click Add an activity or resource. Select External tool, and click the Add button.
  3. In the box for Activity name, enter a name, such as “Gradescope”.
    From the drop-down menu for Preconfigured tool, select Gradescope.
    Scroll down and click the button Save and return to course.
  4. Click the link that has been created. This will open Gradescope in a new browser tab, and Gradescope will ask if you want to link this course with a new or existing course in Gradescope. Assuming you have not already created a Gradescope course for this class, select A new Gradescope course, and click Link course.
  5. After editing settings in Gradescope as desired, select Roster from the left panel. Click the Sync Moodle Roster button, and then Sync Roster. Go through the resulting roster in Gradescope and make sure your TA(s) have been assigned the “TA role”, rather than being listed as “Student”. (This is a known bug in the CCLE-Gradescope link.)

Tips for Using Gradescope at UCLA

For additional tips using Gradescope, please click here to see our FAQ document created by our CEILS Instructional Consultants, Will Conley (Math) and Josh Samani (Physics).  Both are available to answer questions and provide best practice advice about Gradescope.  Please email media@ceils.ucla.edu if you need assistance, and we’ll get the message to Will and Josh.

Assessment Repositories

Below is a repository of reliable, validated assessment instruments and references for existing evaluation tools used to measure changes in student learning or student attitudes.

The 2010 User-Friendly Handbook for Project Evaluation published by NSF

Online Evaluation Resource Library – (OERL)

Concept Inventories (FC, Bioliteracy Project, Bibliography, Q4B)
Please inquire with CEILS Director about specific instruments available to UCLA faculty.

List of Concept Inventories

Other Biology Concept Inventories

Validated assessment tools for Physics, Astronomy, and Mathematics
This is a collection of resources that gather together information about published diagnostic tests or instruments that probe conceptual understanding, in various topic areas relating to (but not restricted to) the physical sciences.

Field-tested Learning Assessment Guide – (FLAG)
The FLAG offers broadly applicable, self-contained modular classroom assessment techniques (CATs) and discipline-specific tools for STEM instructors interested in new approaches to evaluating student learning, attitudes and performance.

Student Assessment of their Learning Gains – (SALG)
SALG is a web-based survey instrument with which students rate how much specific course elements contributed to their learning gains.

Development of Biological Experimental Design Concept Inventory – (BEDCI)
This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes.

The Experimental Design Ability Test – (EDAT)
The EDAT measures students’ understanding of what constitutes a good experimental design.

The Laboratory Course Assessment Survey (LCAS)
The LCAS was developed for course-based undergraduate research experiences (CUREs) to measure students’ perceptions of three design features of biology lab courses.

Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties – (REDI)
This rubric assesses acquired knowledge about the processes of science and helps diagnose key difficulties in students’ understanding.

Results Analysis Concept Inventory
This concept inventory was created to assess students’ comprehension of data analysis. These skills are necessary for making informed decisions and drawing reliable conclusions, with applications extending beyond science into everyday life.

Developing a Test of Scientific Literacy Skills (TOSLS)
TOSLS is designed to assess the scientific literacy skills that are necessary in a STEM field.

SPARST (multiple choice)
While still in development, this online, multiple-choice test is designed to assess reasoning skills, data analysis, and science communication.

Lawson Classroom Test of Scientific Reasoning (CTSR)
The Lawson CTSR is designed to measure concrete and formal operational reasoning, focusing on proportional thinking, probabilistic thinking, correlational thinking, and hypothetico-deductive reasoning. It is intended for students ranging from middle school to introductory college.

Critical Thinking Assessment Test – (CAT)
The CAT Instrument is a unique tool designed to assess and promote the improvement of critical thinking and real-world problem solving skills.

California Critical Thinking Skills Test – (CCTST)
This test has been used in various studies to assess gains in critical thinking. Here is the Google Scholar list of publications related to the creation and use of the test.

In-Class Concept Questions (CLASS)
This report examines the role of conceptual questions that students answer using personal response systems, or “clickers,” in promoting student learning.

Course-based Undergraduate Research Experiences (CURE)
Initiated in 2013, CUREnet aims to address topics, problems, and opportunities inherent to integrating research experiences into undergraduate courses.

Science Motivation Questionnaire
Designed to give education researchers and science instructors insight into the factors that affect students’ determination to complete their science degrees.

Survey of Undergraduate Research Experiences (SURE)
SURE examines the hypothesis that undergraduate research enhances the educational experience of science undergraduates and retains talented students in science careers.

Undergraduate Research Student Self-Assessment
The URSSA focuses on assessing student learning from undergraduate research, rather than whether students liked it. It looks at research outcomes such as evidence-based thinking; skills such as lab work and communication; conceptual knowledge; linkages among ideas in their field and with other fields; and preparation and clarity regarding a career or educational path after graduation.

Perceived Cohesion (Bollen and Hoyle)
This theoretical definition of perceived cohesion holds that individuals’ perceptions of their own cohesion to a group have two dimensions: a sense of belonging and feelings of morale. The study proposes that group members’ perceptions of cohesion are important for the behavior of the individual as well as the group as a whole.

Sense of Community Index (Chipuer)
This paper outlines the psychological sense of community (PSC) in the neighborhood for adults and adolescents, and workplace PSC for adults, using true/false and three-point response formats.

Views of Nature of Science Questionnaire (VNOS)
The VNOS couples survey responses with individualized interviews to understand student perceptions about research and science education.

Views about Science Survey
This survey aims to reveal the preconceptions students hold about science concepts before classroom instruction, by probing the epistemology of science and its social context.

The Project Ownership Survey: Measuring Differences in Scientific Inquiry Experiences (POS) (Hauner et al.)
Project ownership is one of the psycho-social factors involved in student retention in the sciences. This instrument is designed to measure project ownership.

Self-authorship (Creamer & Baxter Magolda)
This instrument measures factors that influence individuals’ construction of knowledge, identities, and relationships.

Grit Scale (Duckworth)
The Grit Scale measures trait-level perseverance and passion for long-term goals. Among adolescents, the Grit–S longitudinally predicted GPA and, inversely, hours watching television.

The Role of Efficacy and Identity in Science Career Commitment Among Underrepresented Minority Students (Chemers)
This instrument examines the roles of efficacy and identity in students’ pursuit of careers in science.

Using the Multiple Lenses of Identity: Working with Ethnic and Sexual Minority College Students (Estrada)
This instrument investigates contexts that influence ethnic and sexual minority self-concept.

There are many attitudinal surveys that can be used to assess shifts in students’ attitudes about how to approach a given discipline. One example is the Mathematics Attitudes and Perceptions Survey.