Rater Scoring Modeling Tool (RSMTool)


Automated scoring of written and spoken responses is a growing field in educational natural language processing. Automated scoring engines employ machine learning models that predict scores for such responses based on features extracted from their text or audio. Examples of automated scoring engines include MI Write for written responses and SpeechRater for spoken responses.
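As a toy illustration of this idea (the features, responses, and scores below are hypothetical and do not reflect the feature set of any real engine), one can extract simple numeric features from response text and fit a regression model to human-assigned scores:

    # A toy sketch of the basic idea behind automated scoring:
    # extract numeric features from response text and fit a
    # regression model that predicts human-assigned scores.
    from sklearn.linear_model import LinearRegression

    responses = [
        "Short answer.",
        "A longer, more developed written response with several clauses.",
        "An answer of middling length and detail.",
    ]
    human_scores = [1, 3, 2]  # hypothetical human ratings

    def featurize(text):
        # Two toy features: word count and average word length.
        words = text.split()
        return [len(words), sum(len(w) for w in words) / len(words)]

    X = [featurize(r) for r in responses]
    model = LinearRegression().fit(X, human_scores)
    print(model.predict([featurize("A brand new unscored response.")]))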

RSMTool is a Python package that automates and combines, in a single pipeline, multiple analyses commonly conducted when building and evaluating automated scoring models. Its output is a comprehensive, customizable HTML statistical report containing the results of these analyses. While RSMTool makes it straightforward to run this standard set of analyses with a single command, it is also fully customizable: users can easily exclude unneeded analyses, modify the standard analyses, or include custom analyses in the report.
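For example, an RSMTool experiment is described by a JSON configuration file. The sketch below shows a minimal, hypothetical configuration; the experiment ID, file names, and the particular list of report sections are assumptions for illustration:

    {
        "experiment_id": "toy_scoring_experiment",
        "description": "A toy model of written responses",
        "model": "LinearRegression",
        "train_file": "train.csv",
        "test_file": "test.csv",
        "general_sections": ["data_description", "model", "evaluation"]
    }

Omitting a field like "general_sections" falls back to the default set of analyses; supplying it lets you trim the report down to only the sections you need.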

We expect the primary users of RSMTool to be researchers working on developing new automated scoring engines or on improving existing ones. Note that RSMTool is not a scoring engine by itself but rather a tool for building and evaluating machine learning models that may be used in such engines.

The primary means of using RSMTool is via the command line.
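As a sketch, assuming a configuration file named config.json and an output directory named output (both names are placeholders), the basic invocation looks like this:

    rsmtool config.json output

This runs the full pipeline and writes the HTML report, along with intermediate output files, under the given output directory.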

Documentation

Note

If you use the Dash app on macOS, you can also download the complete RSMTool documentation for offline use. Go to the Dash preferences, click on “Downloads”, then “User Contributed”, and search for “RSMTool”.
