This project evaluates the viability of revision as a pedagogical technique by determining whether student interactions with an NLP-based revision assistant enable them to learn to write better -- that is, whether certain forms of feedback (in terms of the perceived purposes and scopes of changes) encourage students to learn to make more effective revisions.
More specifically, the project works toward three objectives:
- Define a schema for characterizing the types of changes that occur at different levels of a revision. For example, a writer might add one or more sentences to provide evidence supporting a thesis, or add just one or two words to make a phrase more precise.
- Based on the schema, design a computational model for recognizing the purpose and scope of each change within a revision. One application of such a model is a revision assistant that serves as a sounding board for students as they experiment with different revision alternatives.
- Conduct experiments to study the interactions between students and the revision writing environment, in which variations of idealized computational models are simulated. The findings of these experiments pave the way for developing better technologies to support student learning.
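To make the schema idea concrete, here is a minimal Python sketch of how labeled revision changes might be represented. The category names (`Purpose`, `Scope`, and their values) are hypothetical illustrations based on the examples above, not the project's actual labels:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical purpose labels, illustrating the schema idea only.
class Purpose(Enum):
    ADD_EVIDENCE = "add evidence"        # e.g., sentences supporting a thesis
    IMPROVE_PRECISION = "precision"      # e.g., small wording fixes

# Hypothetical scope labels: the textual level at which a change occurs.
class Scope(Enum):
    WORD = "word"
    PHRASE = "phrase"
    SENTENCE = "sentence"
    PARAGRAPH = "paragraph"

@dataclass
class RevisionChange:
    """One edit between two drafts, labeled with a purpose and a scope."""
    old_text: str
    new_text: str
    purpose: Purpose
    scope: Scope

# The two examples from the text, encoded in this sketch of a schema:
changes = [
    # Adding a sentence to provide evidence for a thesis.
    RevisionChange(
        old_text="",
        new_text="A recent survey supports this claim.",
        purpose=Purpose.ADD_EVIDENCE,
        scope=Scope.SENTENCE,
    ),
    # Replacing a vague phrase with a more precise one.
    RevisionChange(
        old_text="a lot of",
        new_text="seventeen",
        purpose=Purpose.IMPROVE_PRECISION,
        scope=Scope.PHRASE,
    ),
]

for change in changes:
    print(f"{change.scope.value}-level change: {change.purpose.value}")
```

A computational model as described in the second objective would take the `old_text`/`new_text` pair as input and predict the `purpose` and `scope` labels.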
For more details about the project, please try our demo!