One limitation of the binary classification calibrator model is that a single negative class may not be able to fit all the negative samples, since they can be highly varied. Mohseni et al. (2020) proposed adding multiple rejection classes, which converts the binary classification problem into a (1 + n) classification problem, where n is the number of rejection classes. The held-out dataset is annotated the same way as in the binary classification calibrator model, but the training procedure changes slightly: the gold labels are computed on the fly. When the prediction is one of the rejection classes and the annotation is negative, the predicted label is taken as the gold label. When the prediction is the positive class and the annotation is negative, one of the rejection classes is randomly selected as the gold label. In a variant of this approach, instead of selecting a rejection class at random, we select the rejection class with the highest predicted probability among all rejection classes. A similar setup can be used for positive samples, i.e., multiple selection classes.
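To make the label assignment concrete, here is a minimal sketch of how the on-the-fly gold labels might be computed for a training batch. The function name, the convention that class 0 is the positive class and classes 1..n are rejection classes, and the NumPy-based layout are all assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def compute_gold_labels(logits, annotations, n_reject, rng=None):
    """Compute on-the-fly gold labels for the (1 + n)-class calibrator.

    Assumed convention: class 0 is the positive class and classes
    1..n_reject are rejection classes. `annotations` holds the binary
    held-out labels (1 = positive, 0 = negative); `logits` has shape
    (batch, 1 + n_reject).
    """
    rng = rng or np.random.default_rng()
    preds = logits.argmax(axis=-1)
    gold = np.empty_like(preds)
    for i, (pred, ann) in enumerate(zip(preds, annotations)):
        if ann == 1:
            # Positive annotation: gold is the positive class
            # (assumed; the source only spells out the negative cases).
            gold[i] = 0
        elif pred != 0:
            # Negative annotation, predicted as a rejection class:
            # keep the predicted label as gold.
            gold[i] = pred
        else:
            # Negative annotation, predicted as positive:
            # randomly select one of the rejection classes.
            gold[i] = rng.integers(1, n_reject + 1)
            # Variant: pick the highest-probability rejection class instead.
            # gold[i] = 1 + logits[i, 1:].argmax()
    return gold
```

The variant in the final comment corresponds to selecting the highest-probability rejection class rather than a random one; everything else in the update stays the same.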