While this is acceptable for tolerant applications such as movie recommendations, the high risk associated with incorrect predictions hinders the adoption of such models in real-world safety-critical applications such as medical diagnosis and self-driving cars (Amodei et al. 2016; Hendrycks and Gimpel 2016). The objective of selective answering is to teach neural network models to abstain from answering whenever they are not sufficiently confident. This capability enables a model to maintain high accuracy on the questions it chooses to answer and to seek human intervention in cases of abstention, making the wide adoption of AI systems possible.
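As a minimal sketch of this idea, a model can abstain whenever its top softmax probability falls below a threshold; the threshold value and the decision to return `None` on abstention are illustrative assumptions, not a prescribed design.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def selective_answer(logits, threshold=0.9):
    """Return the predicted class index, or None (abstain) when the
    model's maximum softmax probability is below the threshold."""
    probs = softmax(logits)
    pred = max(range(len(probs)), key=probs.__getitem__)
    if probs[pred] < threshold:
        return None  # abstain: defer the question to a human
    return pred

# Confident prediction: one logit dominates, so the model answers.
print(selective_answer([8.0, 0.5, 0.1]))
# Near-uniform logits: low confidence, so the model abstains.
print(selective_answer([1.0, 0.9, 1.1]))
```

In a deployed system, the threshold would be tuned on held-out data to trade off coverage (fraction of questions answered) against accuracy on the answered subset.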