Assessment Modeling: Fundamental Pre-training Tasks for Interactive Educational Systems
Abstract.
Like many other domains in Artificial Intelligence (AI), there are specific tasks in the field of AI in Education (AIEd) for which labels are scarce and expensive, such as predicting exam scores or review correctness. A common way of circumventing label-scarce problems is pre-training a model to learn representations of the contents of learning items. However, such methods fail to utilize the full range of student interaction data available and do not model student learning behavior. To this end, we propose Assessment Modeling, a class of fundamental pre-training tasks for general interactive educational systems. An assessment is a feature of student-system interactions which can serve as a pedagogical evaluation. Examples include the correctness and timeliness of a student’s answer. Assessment Modeling is the prediction of assessments conditioned on the surrounding context of interactions. Although it is natural to pre-train on interactive features available in large amounts, limiting the prediction targets to assessments focuses the tasks’ relevance to the label-scarce educational problems and reduces less-relevant noise. While the effectiveness of different combinations of assessments is open for exploration, we suggest Assessment Modeling as a first-order guiding principle for selecting proper pre-training tasks for label-scarce educational problems.
1. Introduction

Interactive Educational Systems (IESs) have been developed rapidly in recent years to address the issue of quality and affordability in education. IESs automatically collect observations of student behaviors at scale, and thus can power data-driven approaches for many Artificial Intelligence in Education (AIEd) tasks. However, there are important tasks where a label-scarce problem prevents relevant models from attaining their full potential. For instance, information about exam scores and grades is essential to understanding a student’s educational progress and is a key factor affecting social outcomes. However, unlike interactive features automatically collected by IESs, obtaining these labels is costly as they are often generated outside the IESs. Other examples of scarce labels include data on course dropout and review correctness. While such data is automatically recorded by IESs, the labels tend to be few in number as the corresponding events occur only sporadically in practice.
The pre-train/fine-tune paradigm is a common way of circumventing label-scarce problems and has been actively explored in the machine learning community. In this paradigm, a model is first pre-trained on an unsupervised auxiliary task for which data is abundant. Then, the model is slightly modified to match the main task and fine-tuned with the possibly scarce data. This approach has seen success in other subfields of AI including Natural Language Processing (NLP), computer vision, and motion planning (Devlin et al., 2018; Studer et al., 2019; Schneider et al., 2019). Following this line of inquiry, content-based pre-train/fine-tune methods (Huang et al., 2017; Sung et al., 2019; Yin et al., 2019) have been studied in the AIEd community. However, these methods do not consider student interactions and are limited to capturing the content of learning materials. Accordingly, they do not make use of the information carried by the learning behavior of students using IESs.
In this paper, we propose Assessment Modeling, a class of fundamental pre-training tasks for general IESs. Here, an assessment is any feature of student-system interactions which can act as a criterion for pedagogical evaluation. Examples of assessments include the correctness and timeliness of a student response to a given exercise. While there is a wide range of interactive features available, we narrow down the prediction targets to assessments to focus on the information most relevant to label-scarce educational problems. Inspired by the recent success of bidirectional representations in NLP domain (Devlin et al., 2018), we develop an assessment model using a deep bidirectional Transformer (Vaswani et al., 2017) encoder. In the pre-training phase, we randomly select a portion of entries in a sequence of interactions and mask the corresponding assessments. Then, we train a deep bidirectional Transformer encoder-based assessment model to predict the masked assessments conditioned on the surrounding interactions. After pre-training, we replace the last layer of the model with a layer corresponding to each label-scarce educational task, and all parameters of the model are then fine-tuned to the task. To the best of our knowledge, this is the first work investigating appropriate pre-training methods for predicting educational features from student-system interactions.
We empirically evaluate the use of Assessment Modeling as pre-training tasks. Our experiments are conducted on EdNet (Choi et al., 2020), a large-scale dataset collected by an active mobile education application, Santa, which has more than 131M response data points from around 780K students gathered since 2016. The results show that Assessment Modeling provides a substantial performance improvement in label-scarce AIEd tasks. In particular, we obtain improvements of 13.34% in mean absolute error and 4.26% in area under the receiver operating characteristic curve over the previous state-of-the-art models for exam score and review correctness prediction, respectively.
2. Related Works
Pre-training is the act of training a model to perform an unsupervised auxiliary task before using the trained model to perform the supervised main task (Erhan et al., 2010). Pre-training has been shown to enhance the performance of models in various fields including NLP (Devlin et al., 2018; Liu et al., 2019; Lan et al., 2019; Yang et al., 2019; Clark et al., 2020; Brown et al., 2020), Computer Vision (Studer et al., 2019; Chen et al., 2020) and Speech Recognition (Schneider et al., 2019). Pre-training techniques have also been applied to educational tasks with substantial performance improvements. For example, (Hunt et al., 2017) predict whether a student will graduate based on general academic information such as SAT/ACT scores or courses taken during college. They predict the graduation of 465 engineering students by first pre-training on the data of 6834 students in other departments using the TrAdaBoost algorithm (Dai et al., 2007). (Ding et al., 2019) suggest two transfer learning methods, Passive-AE transfer and Active-AE transfer, to predict student dropout in Massive Open Online Courses (MOOCs). Their experimental results show that both methods improve prediction accuracy, with Passive-AE transfer more effective for transfer learning within the same subject and Active-AE transfer more effective for transfer learning across different subjects.
Most of the pre-training methods used in interactive educational systems are NLP tasks with training data produced from learning materials. For example, the short answer grading model suggested in (Sung et al., 2019) uses a pre-trained BERT model to compensate for the limited amount of student-answer pair data. The authors took a pre-trained, uncased BERT-base model and fine-tuned it on the SciEntsBank dataset and two psychology-domain datasets. The resulting model outperformed existing grading models. Test-aware Attention-based Convolutional Neural Network (TACNN) (Huang et al., 2017) is a model that utilizes the semantic representations of text materials (document, question and options) to predict exam question difficulty (i.e. the percentage of examinees with a wrong answer for a particular question). TACNN uses pre-trained word2vec embeddings (Mikolov et al., 2013) to represent word tokens. By applying convolutional neural networks to the sequence of text tokens and an attention mechanism to the series of sentences, the model quantifies the difficulty of the question. QuesNet (Yin et al., 2019) is a question embedding model pre-trained with the context information of question data. Since existing pre-training methods in NLP are unsuited for heterogeneous data such as images and metadata in questions, the authors suggest the Holed Language Model (HLM), a pre-training task analogous to BERT’s masked language model. HLM differs from BERT’s task, however, because it predicts each input based on the values of other inputs aggregated in the Bi-LSTM layer of QuesNet, while BERT masks existing sequences at random. Also, QuesNet introduces another task called the Domain-Oriented Objective (DOO), which is the prediction of the correctness of the answer supplied with the question, to capture high-level logical information. QuesNet sums the HLM and DOO losses to form its final training loss. Compared to other baseline models, QuesNet shows the best performance in three downstream tasks: knowledge mapping, difficulty estimation, and score prediction.
3. Assessment Modeling
3.1. Formal Definition of Assessment Modeling
Recall that Knowledge Tracing is the task of modeling a student’s knowledge state based on the history of their learning activities. Although Knowledge Tracing is widely considered a fundamental task in AIEd and has been studied extensively, there is no precise definition in the literature. In this subsection, we first define Knowledge Tracing in a form that is quantifiable and objective for a particular IES design. Subsequently, we introduce a definition of Assessment Modeling that addresses the educational values of the label being predicted.
A learning session in an IES consists of a series of interactions $I_1, I_2, \ldots, I_T$ between a student and the system, where each interaction $I_t = \{f_t^1, f_t^2, \ldots, f_t^k\}$ is represented as a set of features automatically collected by the system. The features represent diverse aspects of learning activities provided by the system, including the exercises or lectures being used, and the corresponding student actions. Using this notation, we define Knowledge Tracing and Assessment Modeling as follows:
Definition 3.1 (Knowledge Tracing).
Knowledge Tracing is the task of predicting a feature $f_t^i$ of the student in the $t$'th interaction $I_t$ given the sequence of interactions $I_1, \ldots, I_T$. That is, the prediction of

$$P\left(f_t^i \mid I_1, \ldots, I_{t-1},\; I_t \setminus M_t^i,\; I_{t+1}, \ldots, I_T\right) \tag{1}$$

for some $1 \le t \le T$, where $M_t^i \subseteq I_t$ is the set of features that should be masked when the feature $f_t^i$ is guessed. This is to mask input features not available at prediction time, so that the model does not cheat while predicting $f_t^i$.
This definition is compatible with prior uses of the term in works on Knowledge Tracing models (Piech et al., 2015; Zhang et al., 2017; Huang et al., 2019; Pandey and Karypis, 2019). Although a common set-up of Knowledge Tracing models is to predict a feature conditioned on only past interactions, we define Knowledge Tracing as a prediction task that can also be conditioned on future interactions to encompass the recent successes of bi-directional architectures in Knowledge Tracing (Lee et al., 2019).
Example 3.2 (Knowledge Tracing).
A typical instance of a Knowledge Tracing task is response correctness prediction, where the interaction consists of an exercise given to a student and the correctness of the student’s corresponding response (Piech et al., 2015; Zhang et al., 2017; Huang et al., 2019; Pandey and Karypis, 2019; Lee et al., 2019). In this setup, only the response correctness of the last interaction $I_T$ is predicted, and the features of $I_T$ related to the student’s response are masked. Following our definition of Knowledge Tracing, the task can be extended further to predict diverse interactive features such as:
- offer_selection: Whether a student accepts studying the offered learning items.
- start_time: The time a student starts to solve an exercise.
- inactive_time: The duration for which a student is inactive in a learning session.
- platform: Whether a student responds to each exercise on a web browser or a mobile app.
- payment: Whether a student purchases paid services.
- event: Whether a student participates in application events.
- longest_answer: Whether a student selected the answer choice with the longest description.
- correctness: Whether a student responds correctly to a given exercise.
- timeliness: Whether a student responds to each exercise under the time limit recommended by domain experts.
- course_dropout: Whether a student drops out of the entire class.
- elapsed_time: The duration of time a student takes to solve a given exercise.
- lecture_complete: Whether a student completes studying a video lecture offered to them.
- review_correctness: Whether a student responds correctly to a previously solved exercise.
In the aforementioned example, features like correctness and timeliness directly evaluate the educational value of a student interaction, while it is somewhat debatable whether platform and longest_answer can address such qualities. Accordingly, we define assessments and Assessment Modeling as follows:
Definition 3.3 (Assessment Modeling).
An assessment $a_t$ of the $t$'th interaction $I_t$ is a feature of $I_t$ which can act as a criterion for pedagogical evaluation. The collection of assessments $A_t$ is a subset of the available features of $I_t$. Assessment Modeling is the prediction of an assessment $a_t \in A_t$ for some $1 \le t \le T$ from the interactions $I_1, \ldots, I_T$. That is, the prediction of

$$P\left(a_t \mid I_1, \ldots, I_{t-1},\; I_t \setminus A_t,\; I_{t+1}, \ldots, I_T\right) \tag{2}$$
Example 3.4 (Assessments).
Among the interactive features listed in Example 3.2, we consider correctness, timeliness, course_dropout, elapsed_time, lecture_complete and review_correctness to be assessments. For example, correctness is an assessment because whether a student responded to each exercise correctly provides strong evidence regarding the student’s mastery of the concepts required to solve the exercise. Also, timeliness serves as an assessment since a student solving a given exercise within the recommended time limit is expected to be proficient in the skills and knowledge necessary to answer the exercise. Figure 2 depicts the relationship between assessments and general Knowledge Tracing features.
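To make the distinction concrete, the snippet below is a minimal sketch (plain Python, with hypothetical feature names and values) of an interaction represented as a set of features, from which only the assessment subset is selected as a prediction target.

```python
# Assessment features listed in Example 3.4 (names follow this paper's terminology).
ASSESSMENTS = {"correctness", "timeliness", "course_dropout",
               "elapsed_time", "lecture_complete", "review_correctness"}

# One hypothetical interaction I_t represented as a set of features.
interaction = {
    "exercise_id": "q4213",     # content identifier
    "platform": "mobile",       # interactive feature, but not an assessment
    "longest_answer": False,    # not an assessment
    "correctness": 1,           # assessment
    "elapsed_time": 37.0,       # assessment (seconds)
    "timeliness": 1,            # assessment
}

# Assessment Modeling restricts prediction targets to the assessment subset.
assessment_targets = {k: v for k, v in interaction.items() if k in ASSESSMENTS}
print(assessment_targets)  # {'correctness': 1, 'elapsed_time': 37.0, 'timeliness': 1}
```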

3.2. Assessment Modeling as Pre-training Tasks
In this subsection, we provide examples of important yet scarce educational features and argue that Assessment Modeling enables effective prediction of such features.
Example 3.5 (Non-Interactive Educational Features).
In many applications, an IES is integrated as part of a larger learning process, and the ultimate evaluation of the learning process is mostly done independently of the IES. For example, the academic abilities of students are measured by course grades or standardized exams, and the ability to perform a complicated job or task is certified by professional certificates. Such labels are considered essential due to the pedagogical and social needs for consistent evaluations of student ability. However, these labels are hard to obtain and scarce compared to the features automatically collected from student-system interactions. We give the following examples (Figure 2):
- exam_score: A student’s score on a standardized exam.
- grade: A student’s final grade in a course.
- certification: Professional certifications obtained by completion of educational programs or examinations.
Example 3.6 (Sporadic Assessments).
All assessments are automatically collected by IESs, but some assessments are few in number as the corresponding events occur rarely in practice. For example, it is natural for students to invest more time in learning new concepts than in reviewing previously studied materials, so review events are relatively rare. course_dropout and review_correctness are examples of sporadic assessments (Figure 3).

To overcome the aforementioned lack of labels, we consider the pre-train/fine-tune paradigm that leverages data available in large amounts to aid performance in tasks where labels are scarce. In this paradigm, a model is first trained on an auxiliary task, for which data is abundant, that is relevant to the label-scarce tasks of interest. The pre-trained parameters are then used to initialize a model that is slightly modified to suit the task of interest and fine-tuned on the main task. This approach has been successful in AI fields like NLP, computer vision and speech recognition (Devlin et al., 2018; Studer et al., 2019; Schneider et al., 2019). Following this template, existing methods in AIEd pre-train on the contents of learning materials, but such methods do not capture student behavior and only utilize a small subset of the features available from the data.
Instead, one may pre-train on different features automatically collected by IESs (Figure 4). However, training on every available feature is computationally intractable and may introduce irrelevant noise. To this end, Assessment Modeling narrows down the prediction targets to assessments, the interactive features that also hold information on educational progress. Since multiple assessments are available, a wide variety of pre-train/fine-tune pairs can be explored for effective Assessment Modeling (Figure 5). This raises the open-ended questions of which assessments to pre-train on for label-scarce educational problems and how to pre-train on multiple assessments.


3.3. Assessment Modeling with Deep Bidirectional Transformer Encoder
While there are several possible options for the architecture of the assessment model, we adopt the deep bidirectional Transformer encoder proposed in (Devlin et al., 2018) for the following reasons. First, (Pandey and Karypis, 2019) showed that the self-attention mechanism in Transformer (Vaswani et al., 2017) is effective for Knowledge Tracing. The Transformer-based Knowledge Tracing model proposed in (Pandey and Karypis, 2019) achieved state-of-the-art performance on several datasets. Second, the deep bidirectional Transformer encoder model and pre-train/fine-tune method proposed in (Devlin et al., 2018) achieved state-of-the-art results on several NLP tasks. While (Devlin et al., 2018) conducted experimental studies in the NLP domain, the method is also applicable to other domains with slight modifications. Figure 6 depicts our proposed pre-train/fine-tune approach. In the pre-training phase, we train a deep bidirectional Transformer encoder-based assessment model to predict assessments conditioned on past and future interactions. After the pre-training phase, we replace the last layer of the assessment model with a layer appropriate for each label-scarce educational task and fine-tune parameters in the whole model to predict labels in the task. We provide detailed descriptions of our proposed assessment model in the following subsections.

3.3.1. Input Representation
The first layer in the assessment model maps each interaction to an embedding vector. First, we embed the following attributes:
- exercise_id: We assign a latent vector unique to each exercise id.
- exercise_category: Each exercise has its own category tag that represents the type of the exercise. We assign a latent vector to each tag.
- position: The relative position of the interaction in the input sequence. We use the sinusoidal positional encoding used in (Vaswani et al., 2017).
As shown in Example 3.2, an IES collects diverse interactive features that can potentially be used for Assessment Modeling. However, not only is using all possible interactive features for Assessment Modeling computationally intractable, but there is also no guarantee that the best results on label-scarce educational tasks will be achieved when all features are used. For our experimental studies, we narrow the scope of interactive features to those available from an exercise-response pair, the simplest and most widely considered interaction in Knowledge Tracing. In particular, we embed the following interactive features:
- correctness: The value is 1 if a student response is correct and 0 otherwise. We assign a latent vector corresponding to each possible value 0 and 1.
- elapsed_time: The time taken for a student to respond, recorded in seconds. We cap any time exceeding 300 seconds at 300 seconds and normalize it by dividing by 300 to obtain a value between 0 and 1. The elapsed time embedding vector is calculated by multiplying the normalized time by a single latent embedding vector.
- inactive_time: The time interval between adjacent interactions, recorded in seconds. We cap the inactive time at 86400 seconds (24 hours) and normalize it by dividing by 86400 to obtain a value between 0 and 1. As with the elapsed time embedding, the inactive time embedding vector is calculated by multiplying the normalized time by a single latent embedding vector.
Let $E_t^{\mathrm{ex}}$ be the sum of the embedding vectors of exercise_id, exercise_category and position. Likewise, let $E_t^{\mathrm{ct}}$, $E_t^{\mathrm{et}}$ and $E_t^{\mathrm{it}}$ be the embedding vectors of correctness, elapsed_time and inactive_time, respectively. Then, the representation of interaction $I_t$ is $E_t = E_t^{\mathrm{ex}} + E_t^{\mathrm{ct}} + E_t^{\mathrm{et}} + E_t^{\mathrm{it}}$.
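The following PyTorch sketch illustrates one way to implement this input representation. The module name, vocabulary sizes, and exact tensor layout are assumptions consistent with the description above, not the authors' released code.

```python
import math
import torch
import torch.nn as nn

class InteractionEmbedding(nn.Module):
    """Sketch of the input representation: exercise id/category/position embeddings
    plus correctness, elapsed_time and inactive_time embeddings, summed per timestep."""
    def __init__(self, n_exercises, n_categories, d_model=256, max_len=100):
        super().__init__()
        self.exercise_emb = nn.Embedding(n_exercises, d_model)
        self.category_emb = nn.Embedding(n_categories, d_model)
        self.correct_emb = nn.Embedding(2, d_model)            # values 0 / 1
        self.elapsed_emb = nn.Parameter(torch.randn(d_model))   # scaled by normalized time
        self.inactive_emb = nn.Parameter(torch.randn(d_model))  # scaled by normalized time
        # Sinusoidal positional encoding as in (Vaswani et al., 2017).
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, exercise_id, category, correctness, elapsed, inactive):
        # exercise_id, category, correctness: LongTensor (batch, seq_len)
        # elapsed, inactive: FloatTensor (batch, seq_len) in seconds
        elapsed = torch.clamp(elapsed, max=300.0) / 300.0        # normalize to [0, 1]
        inactive = torch.clamp(inactive, max=86400.0) / 86400.0  # normalize to [0, 1]
        e = (self.exercise_emb(exercise_id)
             + self.category_emb(category)
             + self.pe[: exercise_id.size(1)])
        e = (e + self.correct_emb(correctness)
             + elapsed.unsqueeze(-1) * self.elapsed_emb
             + inactive.unsqueeze(-1) * self.inactive_emb)
        return e  # (batch, seq_len, d_model)
```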
3.3.2. Masking
Inspired by the masked language model proposed in (Devlin et al., 2018), we use the following method to mask the assessments in a student interaction sequence. First, we mask a fraction of interactions chosen uniformly at random. If the $t$-th interaction is chosen, we replace the embedding vectors of the interactive features that are being predicted as targets with $E^{\mathrm{mask}}$, a trainable vector that represents masking. For instance, if correctness and elapsed_time are the prediction targets for Assessment Modeling, $E_t^{\mathrm{ct}}$ and $E_t^{\mathrm{et}}$ are replaced with $E^{\mathrm{mask}}$ and the embedding vector for interaction $I_t$ becomes $E_t = E_t^{\mathrm{ex}} + E^{\mathrm{mask}} + E^{\mathrm{mask}} + E_t^{\mathrm{it}}$.
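A minimal sketch of this masking step, operating on the per-feature embeddings before they are summed into $E_t$. The function name and signature are illustrative; the 0.6 default masking rate follows the setup reported in Section 4.3.1.

```python
import torch

def mask_assessments(corr_emb, elapsed_emb, mask_vec, mask_rate=0.6):
    """Replace target-assessment embeddings with a shared trainable mask vector
    at a randomly chosen fraction of positions (here the assumed targets are
    correctness and elapsed_time)."""
    batch, seq_len, _ = corr_emb.shape
    masked = torch.rand(batch, seq_len, device=corr_emb.device) < mask_rate
    corr_emb = torch.where(masked.unsqueeze(-1), mask_vec, corr_emb)
    elapsed_emb = torch.where(masked.unsqueeze(-1), mask_vec, elapsed_emb)
    return corr_emb, elapsed_emb, masked  # `masked` doubles as the flag m_t in Eq. (6)
```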
3.3.3. Model Architecture
After the interactions are embedded and masked accordingly, they enter a series of $N$ Transformer encoder blocks, each consisting of a multi-headed self-attention layer followed by a position-wise feed-forward layer. Every layer has input dimension $d_{\mathrm{model}}$. The first encoder block takes the sequence of interactions embedded in latent space and returns a series of vectors of the same length and dimension. For each $2 \le i \le N$, the $i$'th block takes the output of the $(i-1)$'th block as input and returns a series of vectors accordingly. We describe the architecture of each block as follows.
The multi-headed self-attention layer takes a series of vectors $x_1, x_2, \ldots, x_n$. Each vector is projected to latent space by projection matrices $W^Q$, $W^K$ and $W^V$:

$$q_t = W^Q x_t, \quad k_t = W^K x_t, \quad v_t = W^V x_t \tag{3}$$

Here each $q_t$, $k_t$ and $v_t$ are the query, key and value of $x_t$, respectively, of dimension $d_k$. The output of the self-attention is then obtained as a weighted sum of values with coefficients determined by the dot products between queries and keys:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right)V \tag{4}$$

where $Q$, $K$ and $V$ are the matrices whose rows are the queries, keys and values.
Models with self-attention layers often use multiple heads to jointly attend to information from different representation subspaces. Following this, we apply attention $h$ times to the same query-key-value entries with different projection matrices:

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^O \tag{5}$$

Here, each $\mathrm{head}_i$ is the output of self-attention in Equation 4 with the corresponding projection matrices $W_i^Q$, $W_i^K$ and $W_i^V$ in Equation 3. We use the linear map $W^O$ to aggregate the attention results. After computing the resulting value in Equation 5, we apply a position-wise feed-forward layer to add non-linearity to the model. We also apply a skip connection (He et al., 2016) and layer normalization (Ba et al., 2016) to the output of the feed-forward layer.
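For reference, a compact sketch of such an encoder stack using PyTorch's built-in layers. The number of heads and the feed-forward width are assumptions; $N = 2$, $d_{\mathrm{model}} = 256$ and dropout 0.2 follow the experimental setup in Section 4.3.1.

```python
import torch.nn as nn

# Encoder stack: N blocks of multi-headed self-attention + position-wise feed-forward,
# each with residual connections and layer normalization.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=256, nhead=8, dim_feedforward=1024, dropout=0.2, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

# Usage: outputs = encoder(embedded_interactions)   # (batch, seq_len=100, d_model=256)
```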
Assume that the last encoder block returns the sequence of vectors $o_1, o_2, \ldots, o_n$. For pre-training, the predictions for the features at the $t$'th timestep are made by applying a linear layer with a proper activation function to $o_t$. We consider four interactive features as prediction targets: correctness, timeliness, elapsed_time and inactive_time. If the prediction target is correctness or timeliness, the sigmoid activation function is applied to the linear layer output. If elapsed_time or inactive_time is the prediction target, the final output of the model is the output of the linear layer. The overall loss is defined to be

$$\mathcal{L} = \sum_{t} m_t\, \mathcal{L}_t \tag{6}$$

where $\mathcal{L}_t$ is the sum of the binary cross-entropy (resp. mean squared error) losses at timestep $t$ if the prediction target is correctness or timeliness (resp. elapsed_time or inactive_time). The value $m_t$ is a flag that represents whether the $t$-th exercise is masked ($m_t = 1$) or not ($m_t = 0$). The input embedding layer and encoder blocks are shared between pre-training and fine-tuning. For fine-tuning, we replace the linear layers applied to each $o_t$ in pre-training with a single linear layer that combines all the entries of $o_1, \ldots, o_n$ to fit the output to the label-scarce educational downstream task.
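A sketch of the pre-training objective in Equation (6) and the fine-tuning head swap. The head and dictionary key names are illustrative, and BCE-with-logits is used in place of an explicit sigmoid followed by binary cross-entropy for numerical stability.

```python
import torch.nn as nn
import torch.nn.functional as F

def pretraining_loss(outputs, targets, masked):
    """Eq. (6): sum of per-timestep losses over masked positions only.
    `outputs` and `targets` are dicts of float (batch, seq_len) tensors;
    `masked` is the boolean flag m_t produced by the masking step."""
    per_step = (
        F.binary_cross_entropy_with_logits(outputs["correctness"],
                                           targets["correctness"], reduction="none")
        + F.binary_cross_entropy_with_logits(outputs["timeliness"],
                                             targets["timeliness"], reduction="none")
        + F.mse_loss(outputs["elapsed_time"], targets["elapsed_time"], reduction="none")
        + F.mse_loss(outputs["inactive_time"], targets["inactive_time"], reduction="none")
    )
    return (masked.float() * per_step).sum()

# Fine-tuning replaces the per-timestep heads with a single head over the whole
# sequence output o_1, ..., o_n (seq_len = 100, d_model = 256 in Section 4.3.1):
exam_score_head = nn.Linear(100 * 256, 1)   # applied to the flattened encoder output
```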
4. Experiments
4.1. Label-Scarce Educational Tasks
We apply Assessment Modeling to exam score (a non-interactive educational feature) and review correctness (a sporadic assessment) predictions.
4.1.1. Exam Score Prediction
Exam score prediction is the estimation of a student’s score on standardized exams, such as the TOEIC and the SAT, based on the student’s interaction history with the IES. Exam score prediction is one of the most important tasks in AIEd, as standardized assessment is crucial for both students and the IES. Because a substantial amount of human effort is required to develop and take the tests, the number of data points available for exam score prediction is considerably smaller than the number of student interactions automatically collected by the IES. With a reliable exam score prediction model, a student’s universally accepted score can be estimated by the IES with considerably less effort. Exam score prediction differs from response correctness prediction because standardized tests are taken in a controlled environment with specific methods independent of the IES.
4.1.2. Review Correctness Prediction
Assume that a student incorrectly responds to an exercise and receives corresponding feedback. The goal of review correctness prediction is to predict whether the student will respond to the exercise correctly if they encounter it again. The significance of this AIEd task is that it can assess the educational effect of an exercise on a particular student in a specific situation. In particular, the correctness probability estimated by this task represents the student’s expected marginal gain in knowledge as they go through some learning process. For example, if the correctness probability is high, it is likely that the student will obtain the relevant knowledge in the future even if their initial response was incorrect.
4.2. Dataset
We use the public EdNet (Choi et al., 2020) dataset obtained from Santa, a mobile AI tutoring service for TOEIC Listening and Reading Test preparation. The test consists of two timed sections, Listening Comprehension (LC) and Reading Comprehension (RC), with 100 exercises each, divided into 4 and 3 parts, respectively. The final test score ranges from 10 to 990 in steps of 5. Once a student solves each exercise, Santa provides educational feedback on their response, including explanations and commentaries on the exercise. EdNet is a collection of student interactions with multiple-choice exercises, containing more than 131M response data points from around 780K students gathered over the last four years. The main features of the student-exercise interaction data consist of six columns: student id, exercise id, exercise part, student response, received time and time taken. The student (resp. exercise) id identifies each unique student (resp. exercise). The student response is a student’s answer choice for the given exercise. The exercise part is the part of the exam to which the exercise belongs. Finally, the absolute time when the student received the exercise and the time taken by the student to respond are recorded.
4.2.1. Dataset for Pre-training
For pre-training, we first reconstruct the interaction timeline of each student by gathering the responses of a specific student in increasing chronological order. For each interaction, correctness is recorded as 1 if the student’s answer is correct and 0 otherwise, and timeliness is recorded as 1 if the student responded within the time limits recommended by TOEIC experts (Table 1) and 0 otherwise. elapsed_time is the time taken for the student to respond, recorded in seconds, and inactive_time is the time interval between the current and previous interactions, recorded in seconds. We exclude the interactions of students involved in any of the label-scarce educational downstream tasks from pre-training to preemptively avoid data leakage. After processing, the data consists of 414,375 students with a total of 93,121,528 interactions.
Table 1. Time limits recommended by TOEIC experts for each part.

| Part | 1–4 | 5 | 6 | 7 |
|---|---|---|---|---|
| Time limit (sec) | audio duration + 8 | 25 | 50 | 55 |
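As an illustration, the helper below (hypothetical function names) derives the timeliness label from Table 1: a response is timely if the elapsed time does not exceed the limit recommended for the exercise's TOEIC part.

```python
def time_limit(part: int, audio_duration: float = 0.0) -> float:
    """Recommended time limit in seconds for a TOEIC part (Table 1)."""
    if part <= 4:                                # listening parts share one rule
        return audio_duration + 8.0
    return {5: 25.0, 6: 50.0, 7: 55.0}[part]

def timeliness(part: int, elapsed_sec: float, audio_duration: float = 0.0) -> int:
    """1 if the response was given within the recommended limit, else 0."""
    return int(elapsed_sec <= time_limit(part, audio_duration))
```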
4.2.2. Dataset for Label-Scarce Educational Tasks
For exam score prediction, we aggregate the real TOEIC scores reported by students of Santa. The reports are scarce in number because a student has to register, take the exam and report the score at their own expense. To collect this data, Santa offered a small reward to students in exchange for reporting their score. A total of 2,594 score reports were obtained over a period of 6 months, which is considerably fewer than the number of exercise responses. For our experiment, we divide the data into five splits, and use 3/5, 1/5 and 1/5 of the data as training, validation and test set, respectively.
For review correctness prediction, we look over each student’s timeline and find exercises that have been solved at least twice. That is, if an exercise appears more than once in a student’s interaction sequence $I_1, \ldots, I_T$, we find the first two interactions $I_i$ and $I_j$ ($i < j$) involving the same exercise. The sequence of interactions $I_1, \ldots, I_{j-1}$ and the exercise of $I_j$ are taken as input, and the correctness of $I_j$ is taken as the label. A total of 4,540 labeled sequences that do not appear in the pre-training dataset are generated after pre-processing. As in exam score prediction, we divide the data into five splits, and use 3/5, 1/5 and 1/5 of the data as training, validation and test sets, respectively.
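A minimal sketch of this label construction, assuming interactions are dicts with hypothetical keys "exercise_id" and "correct".

```python
def make_review_example(interactions):
    """Scan a student's timeline and return (input, label) for the first exercise
    that is solved twice, or None if no exercise repeats."""
    first_seen = {}                          # exercise_id -> index of first attempt
    for j, inter in enumerate(interactions):
        eid = inter["exercise_id"]
        if eid in first_seen:                # second attempt I_j of an exercise (i < j)
            history = interactions[:j]       # interactions I_1, ..., I_{j-1} as input
            label = inter["correct"]         # correctness of I_j as the label
            return (history, eid), label
        first_seen[eid] = j
    return None
```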
4.3. Setup
4.3.1. Assessment Model
Our assessment model consists of two encoder blocks ($N = 2$) with a latent space dimension of 256 ($d_{\mathrm{model}} = 256$). The model takes 100 interactions as input. For pre-training, the model parameters are first randomly initialized and trained with a masking rate of 0.6. Then we replace the last layer of the pre-trained model with a linear layer appropriate for each label-scarce educational downstream task, and fine-tune all parameters of the model. In fine-tuning, to alleviate the label-scarce problem, we apply the following data augmentation strategy: given the original interaction sequence with a label, we select each entry in the sequence with 50% probability to generate a subsequence with the same label, and the model is fine-tuned on these subsequences. In both pre-training and fine-tuning, the dropout rate and batch size are set to 0.2 and 128, respectively. We use the Adam optimizer (Kingma and Ba, 2014) and the Noam scheme (Vaswani et al., 2017) to schedule the learning rate with 4000 warm-up steps. We conduct 5-fold cross-validation to select the model with the best result on the validation set, and report the evaluation result of that model on the test set.
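A sketch of the fine-tuning augmentation described above. The number of subsequences drawn per labeled example is not stated in the text and is an assumption here.

```python
import random

def augment(sequence, label, n_copies=10, keep_prob=0.5):
    """Generate label-preserving subsequences by keeping each interaction
    with probability keep_prob (50% in our setup). n_copies is an assumption."""
    augmented = []
    for _ in range(n_copies):
        sub = [x for x in sequence if random.random() < keep_prob]
        if sub:                          # skip the (unlikely) empty subsequence
            augmented.append((sub, label))
    return augmented
```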
We compare the effectiveness of the assessment model with the following content-based pre-training methods. Since the existing content-based pre-training methods learn an embedding vector for each exercise, we replace the exercise_id embedding in our model with the respective exercise embedding for fine-tuning.
- Word2Vec (Mikolov et al., 2013) is a standard word embedding model used in many tasks. (Huang et al., 2019) used word embedding vectors obtained from a Word2Vec model to generate exercise embedding vectors. In our experiment, we obtain 256-dimensional word embedding vectors from a continuous bag-of-words Word2Vec model trained on a text corpus consisting of exercise descriptions. The embedding vector assigned to each exercise is the average of the embedding vectors of all words appearing in the exercise description.
- BERT (Devlin et al., 2018) is a Transformer encoder-based bidirectional representation learning model used in various domains. The original BERT was pre-trained with masked language modeling and next sentence prediction objectives. However, several subsequent works (Liu et al., 2019; Lan et al., 2019) questioned the necessity of the next sentence prediction objective. Accordingly, we train a 2-layer, 256-dimensional BERT model using only the masked language modeling objective on exercise descriptions. As with Word2Vec, we obtain each exercise embedding vector by averaging the embedding vectors of the words in the exercise description.
- QuesNet (Yin et al., 2019) is a content-based pre-training method that learns a unified representation for each exercise comprising text, images and side information. Following the method suggested in the original paper, we train a bi-directional LSTM followed by a multi-headed self-attention model with the holed language modeling and domain-oriented objectives. The embedding vector of each exercise is computed as the sentence-layer output of the model when the exercise is taken as input.
4.3.2. Metrics
We use the following metrics to evaluate model performance on each label-scarce educational downstream task. For exam score prediction, we compute the Mean Absolute Error (MAE), the average of the absolute differences between the predicted exam scores and the true values. For review correctness prediction, we use the Area Under the receiver operating characteristic Curve (AUC).
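Both metrics are available in scikit-learn; the function and variable names below are placeholders.

```python
from sklearn.metrics import mean_absolute_error, roc_auc_score

def evaluate(true_scores, predicted_scores, true_labels, predicted_probs):
    """MAE for exam score prediction, AUC for review correctness prediction."""
    return (mean_absolute_error(true_scores, predicted_scores),
            roc_auc_score(true_labels, predicted_probs))
```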
4.4. Experimental Results
4.4.1. Effect of Pre-training Tasks
We demonstrate the importance of choosing appropriate pre-training tasks by comparing models pre-trained to predict one of correctness, correctness + elapsed_time, correctness + timeliness, and correctness + inactive_time. Since correctness is the feature with the most direct pedagogical interpretation, we include correctness in the prediction targets of all pre-training tasks. The results are shown in Table 2. For both exam score and review correctness prediction, the best results are obtained by pre-training the model to predict correctness and timeliness. This is because exam scores and review correctness depend not only on the correctness but also on the timeliness of a student’s answers. This shows the importance of choosing pre-training tasks relevant to the downstream tasks.
Table 2. Performance on the downstream tasks for different pre-training prediction targets.

| Pre-training targets | Exam score prediction (MAE) | Review correctness prediction (AUC) |
|---|---|---|
| correctness | | |
| correctness + elapsed_time | | |
| correctness + timeliness | | |
| correctness + inactive_time | | |
4.4.2. Effect of Assessment Modeling
Experimental results comparing Assessment Modeling with three content-based pre-training methods and a model without pre-training are shown in Table 3. On both downstream tasks, Assessment Modeling outperforms the other methods. Compared to the model without pre-training, MAE for exam score prediction is reduced by 21.96% and AUC for review correctness prediction is increased by 6.99%. Also, Assessment Modeling improves MAE by 13.34% for exam score prediction and AUC by 4.26% for review correctness prediction over the best content-based pre-training methods. These results support our claim that Assessment Modeling is better suited to label-scarce educational downstream tasks than content-based pre-training methods.
Table 3. Comparison of Assessment Modeling with content-based pre-training methods and a model without pre-training.

| Pre-training method | Exam score prediction (MAE) | Review correctness prediction (AUC) |
|---|---|---|
| Without pre-train | | |
| Word2Vec | | |
| BERT | | |
| QuesNet | | |
| Assessment Modeling | | |
5. Discussions
5.1. Choosing Appropriate Pre-training Task
It is not always clear which pre-training task is appropriate for a specific downstream task. For instance, while the experimental results of including inactive_time in the pre-training targets (Table 2) align with our intuition that inactive_time is not an assessment, the results for timeliness and elapsed_time do not. As described in Example 3.4, both timeliness and elapsed_time are intuitively assessments, since the amount of time a student takes to respond to each exercise serves as a pedagogical evaluation. Also, since timeliness is a coarse-grained version of elapsed_time obtained by binarizing it with the time limit for each exercise, the two features share much of their information. Nevertheless, according to Table 2, using correctness + timeliness (resp. correctness + elapsed_time) as the pre-training target yields a 2.77% reduction (resp. a 0.53% increase) in exam score prediction MAE compared to using correctness alone, so the two choices give opposite results. These results illustrate the difficulty of defining appropriate pre-training tasks for Assessment Modeling. Although there remains room for development and challenges in identifying pre-train/fine-tune relations in Assessment Modeling, we do not dig into it further and leave it as future work.
5.2. Asymmetry of Assessment Modeling
While the masking scheme for Assessment Modeling was inspired by masked language modeling proposed in (Devlin et al., 2018), there is a key difference between the two approaches (Figure 9). In masked language modeling, the features available at a timestep are (the embeddings of) each word, the masked feature is the word at the timestep, and the target to predict is also the word at the timestep. That is, there is a symmetry in that the features that are available, the features being masked, and the features being predicted are all of the same nature. But that is not necessarily the case in Assessment Modeling. For example, suppose the features available at a given timestep are exercise_id, exercise_category, correctness, and elapsed_time. In the above situation, Assessment Modeling pre-training scheme may mask correctness and elapsed_time, and predict just correctness. This asymmetry raises the issue of precisely which features to mask and which features to predict, and the choices made will have to reflect the specific label-scarce educational downstream task that Assessment Modeling is being used to prepare for. While we draw attention to this issue, it is outside the scope of this paper and we leave the details for future study.

6. Conclusion
In this paper, we introduced Assessment Modeling, a class of fundamental pre-training tasks for IESs. Our experiments show the effectiveness of Assessment Modeling as pre-training tasks for label-scarce educational problems including exam score and review correctness prediction. Avenues of future research include 1) investigating forests of pre-train/fine-tune relations in AIEd, and 2) pre-training a model to learn not only assessments, but also representations of the contents of learning items.
References
- Ba et al. (2016) Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450 (2016).
- Brown et al. (2020) Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020).
- Chen et al. (2020) Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, and Ilya Sutskever. 2020. Generative Pretraining from Pixels. In Proceedings of the 37th International Conference on Machine Learning, Vol. 1.
- Choi et al. (2020) Youngduck Choi, Youngnam Lee, Dongmin Shin, Junghyun Cho, Seoyon Park, Seewoo Lee, Jineon Baek, Chan Bae, Byungsoo Kim, and Jaewe Heo. 2020. Ednet: A large-scale hierarchical dataset in education. In International Conference on Artificial Intelligence in Education. Springer, 69–73.
- Clark et al. (2020) Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555 (2020).
- Dai et al. (2007) Wenyuan Dai, Qiang Yang, Gui-Rong Xue, and Yong Yu. 2007. Boosting for transfer learning. In Proceedings of the 24th international conference on Machine learning. ACM, 193–200.
- Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
- Ding et al. (2019) Mucong Ding, Yanbang Wang, Erik Hemberg, and Una-May O’Reilly. 2019. Transfer Learning using Representation Learning in Massive Open Online Courses. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge. ACM, 145–154.
- Erhan et al. (2010) Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research 11, Feb (2010), 625–660.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. 770–778.
- Huang et al. (2017) Zhenya Huang, Qi Liu, Enhong Chen, Hongke Zhao, Mingyong Gao, Si Wei, Yu Su, and Guoping Hu. 2017. Question Difficulty Prediction for READING Problems in Standard Tests. In Thirty-First AAAI Conference on Artificial Intelligence.
- Huang et al. (2019) Zhenya Huang, Yu Yin, Enhong Chen, Hui Xiong, Yu Su, Guoping Hu, et al. 2019. EKT: Exercise-aware Knowledge Tracing for Student Performance Prediction. IEEE Transactions on Knowledge and Data Engineering (2019).
- Hunt et al. (2017) Xin J Hunt, Ilknur Kaynar Kabul, and Jorge Silva. 2017. Transfer Learning for Education Data. In KDD Workshop.
- Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
- Lan et al. (2019) Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942 (2019).
- Lee et al. (2019) Youngnam Lee, Youngduck Choi, Junghyun Cho, Alexander R. Fabbri, Hyunbin Loh, Chanyou Hwang, Yongku Lee, Sang-Wook Kim, and Dragomir Radev. 2019. Creating A Neural Pedagogical Agent by Jointly Learning to Review and Assess. arXiv:1906.10910 [cs.LG]
- Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
- Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. 3111–3119.
- Pandey and Karypis (2019) Shalini Pandey and George Karypis. 2019. A Self-Attentive model for Knowledge Tracing. arXiv preprint arXiv:1907.06837 (2019).
- Piech et al. (2015) Chris Piech, Jonathan Bassen, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J Guibas, and Jascha Sohl-Dickstein. 2015. Deep knowledge tracing. In Advances in neural information processing systems. 505–513.
- Schneider et al. (2019) Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. 2019. wav2vec: Unsupervised Pre-training for Speech Recognition. arXiv preprint arXiv:1904.05862 (2019).
- Studer et al. (2019) Linda Studer, Michele Alberti, Vinaychandran Pondenkandath, Pinar Goktepe, Thomas Kolonko, Andreas Fischer, Marcus Liwicki, and Rolf Ingold. 2019. A Comprehensive Study of ImageNet Pre-Training for Historical Document Image Analysis. arXiv preprint arXiv:1905.09113 (2019).
- Sung et al. (2019) Chul Sung, Tejas Indulal Dhamecha, and Nirmal Mukhi. 2019. Improving Short Answer Grading Using Transformer-Based Pre-training. In International Conference on Artificial Intelligence in Education. Springer, 469–481.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 5998–6008.
- Yang et al. (2019) Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems. 5753–5763.
- Yin et al. (2019) Yu Yin, Qi Liu, Zhenya Huang, Enhong Chen, Wei Tong, Shijin Wang, and Yu Su. 2019. QuesNet: A Unified Representation for Heterogeneous Test Questions. arXiv preprint arXiv:1905.10949 (2019).
- Zhang et al. (2017) Jiani Zhang, Xingjian Shi, Irwin King, and Dit-Yan Yeung. 2017. Dynamic key-value memory networks for knowledge tracing. In Proceedings of the 26th international conference on World Wide Web. International World Wide Web Conferences Steering Committee, 765–774.