Glasenapp, Matthew
Matthew Glasenapp is a graduate student in the Department of Ecology and Evolutionary Biology at the University of California, Santa Cruz. His research focuses on understanding the genetic basis of speciation through investigating how reproductive barriers evolve between closely related species. Matthew participated in the PDP to improve his pedagogy to better serve the students he teaches, hoping to one day become a liberal arts college professor.
Teaching Activity Summary
Name of Teaching Activity: Toxicology WEST
Teaching Venue: Workshops for Engineering & Science Transfers, Toxicology Strand (WEST), Sept 2019
Learners: 22 community college transfer students.
Reflection on teaching and assessing the core science or engineering concept:
“The dose makes the poison” is an old adage in toxicology that has fostered the common
misconception that dose and toxicity are directly proportional. Although true in some cases, this
assumption oversimplifies real biological processes. Our content goal was for
students to design experiments using toxins and model organisms to investigate the intricacies of the
dose-response relationship in ecologically relevant study systems. Dose-response curves are a core
concept across environmental toxicology and medical fields. Understanding the complexity
of the dose-response relationship will benefit learners in their coursework, their research, and their own health and nutrition.
Students have numerous difficulties in understanding dose-response curves. Often, they assume the
relationship between dose and toxicity to be linear, failing to consider the possibility of nonlinear
relationships and important biological thresholds. Other common pitfalls include failing to recognize that
different biological organisms may respond in different ways to doses of the same toxin and neglecting to
consider other variables that may influence the dose-response relationship of a single system.
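One standard way to see why the dose-response relationship is not simply proportional is the Hill equation, a common sigmoidal model in toxicology and pharmacology. The short sketch below is purely illustrative (the parameter values are arbitrary choices, not data from our activity): it shows that doubling the dose does not double the response.

```python
# Hill equation: a sigmoidal dose-response model
#   E(d) = E_max * d^n / (EC50^n + d^n)
# EC50 is the dose producing a half-maximal effect; n (the Hill
# coefficient) controls the steepness of the curve.

def hill_response(dose, e_max=1.0, ec50=1.0, n=2.0):
    """Fraction of the maximal effect at a given dose (illustrative parameters)."""
    return e_max * dose**n / (ec50**n + dose**n)

for d in (0.5, 1.0, 2.0):
    print(f"dose {d}: response {hill_response(d):.2f}")
# Doubling the dose from 0.5 to 1.0 raises the response from 0.20 to 0.50,
# while doubling again (1.0 to 2.0) only raises it from 0.50 to 0.80 --
# the curve is sigmoidal, not linear, and flattens near a threshold.
```

The same functional form with different EC50 or n values can represent how two organisms respond differently to the same toxin, one of the pitfalls discussed above.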
To address these shortcomings, we designed a role-playing activity where students assumed the role of an
Environmental Protection Agency (EPA) scientist tasked with understanding the toxicity of mystery
chemicals on the model organisms C. elegans and Daphnia magna. In small groups, students were asked to
design and carry out experiments and report back in a culminating poster session. Students were assessed
on their learning through performance on two different tasks: an individual written response and a group
poster presentation. Towards the end of the activity, students were given a written prompt asking them to
individually describe their understanding of the dose-response relationship after reflecting on the results
of their own experiments as well as the experiments of other groups. In their response, students were
asked to describe the relative effects of the three mystery chemicals, compare toxicity
between the two organisms tested (C. elegans and D. magna), interpret the implications of their experimental
outcomes, and propose a follow-up experiment. Next, learners were asked to create a collaborative poster
with their experiment group summarizing their experimental design, results, implications, and proposed
follow-up experiment. This was an opportunity for learners to fill in knowledge gaps they may have noticed
after writing their individual responses.
To evaluate learners’ performance, we designed a rubric consisting of three dimensions relating to our
core concept. Students did not need to have any prior toxicology knowledge to score high on our rubric.
For each dimension, the highest possible score was a 2, indicating sufficient understanding. A score of 1
signaled misunderstanding or incomplete understanding. Overall, we succeeded in guiding most students
to a sufficient level of understanding of our content goals. Evaluation of both the individual written
response and the group poster allowed us to award a majority of students scores of 2 in all three
dimensions. We did not receive adequate evidence of understanding from several students, highlighting a
few minor flaws in our rubric design. We noticed during evaluation that the language of our assessment
task prompts may have been too vague, leading some students to misunderstand what was expected of
them. These students did not necessarily provide incorrect responses; they simply missed a few of the key
items we were looking for. If we were to teach this activity again, we would adjust the wording of
our assessment task prompts, as our shortcoming was not in our ability to guide learners to the content
goal, but in prompting them to put what they had learned into writing for fair evaluation.