Results and Lessons Learned from Implementation in Bio20A
We piloted our assessment in a Bio 20A course with 49 students to see how they engaged with this practice in a lecture setting. The class had focused on using scientific models throughout the quarter, but students were not necessarily familiar with the framework we developed here. We collected and scored the assessments; patterns and trends are summarized below, along with the frequency of scores for each dimension.
| Dimension of rubric | # of 1s | # of 2s | # of 3s |
| --- | --- | --- | --- |
| Models are evaluated for their ability to represent and explain phenomena | 14 | 21 | 14 |
| Phenomena can be represented by multiple models | 12 | 31 | 6 |
| Models should be distinguished from the actual phenomena they represent | 21 | 19 | 9 |

Notes about student responses for each dimension:

Models are evaluated for their ability to represent and explain phenomena
- Students focused mostly on directly comparing the models instead of using them to explain something. We revised the prompt to ask students to explicitly make a claim, and added this expectation to the rubric.
- Many answers simply noted scale differences (e.g., "this one gives you a zoomed-out look, this one gives you a zoomed-in look").

Phenomena can be represented by multiple models
- Students had difficulty using multiple models to bolster an explanation, or describing how the models work synergistically to explain something.
- Students consistently used "more complete picture" or similar phrases implying that one model represented the phenomenon more accurately. Interestingly, students applied this to both model A and model B: some saw the "more complete picture" as the model depicting the signal cascade, while others saw it as the more detailed molecular snapshot.

Models should be distinguished from the actual phenomena they represent
- Most students recognized that the models were not exact replicas of the phenomenon, but there was little discussion of the traits or features of the models, which led to many 1s.
- There was little discussion of the models' specific limitations. Students tended to describe one model as the opposite of the other (e.g., "this one tells you structure, not function; the other one tells you function, not structure") rather than naming concrete limits (e.g., "this model is able to show us what domain of the protein binds zinc, but doesn't have amino-acid resolution").
Based on the responses and the summary above, we revised our final assessment to push students further toward making specific explanations about the phenomena, and we made the task slightly more open-ended so that students do not feel they are being narrowly asked to compare the models to each other.