Abstract
Artificial neural networks (ANN) are widely used machine learning models, and their widespread use has attracted considerable interest in their robustness. Many studies show that an ANN's performance can be highly vulnerable to input manipulations such as adversarial attacks and covariate drift. Various techniques for improving the robustness of ANNs have therefore been proposed in the last few years. However, most of these works have focused on image data. In this paper, we investigate the role of discretization in improving ANN robustness on tabular datasets.
Two custom ANN layers are proposed. Both integrate discretization during the training phase to improve the ANN's ability to defend against adversarial attacks; one of the two additionally performs dynamic discretization during the testing phase, providing a unified strategy for handling both adversarial attacks and covariate drift. Experimental results on 24 publicly available datasets show that our proposed layers add much-needed robustness to ANNs for tabular datasets.
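As a rough illustration of the general idea only (not the paper's actual layer definitions), the sketch below shows a feature-discretization layer in PyTorch that could be placed in front of a tabular classifier. The class name DiscretizeLayer, the equal-width binning scheme, the assumed [0, 1] feature range, and the straight-through gradient trick are all assumptions made for this example.

```python
import torch
import torch.nn as nn

class DiscretizeLayer(nn.Module):
    """Maps each continuous input feature to the centre of one of `n_bins`
    equal-width bins. A straight-through estimator keeps the layer
    differentiable so it can sit inside an ordinary training loop."""

    def __init__(self, n_bins: int = 10, low: float = 0.0, high: float = 1.0):
        super().__init__()
        # Interior bin edges and bin centres for an assumed [low, high] range.
        self.register_buffer("edges", torch.linspace(low, high, n_bins + 1)[1:-1])
        self.register_buffer("centres", torch.linspace(low, high, 2 * n_bins + 1)[1::2])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hard-assign every value to the centre of its bin.
        idx = torch.bucketize(x, self.edges)
        x_disc = self.centres[idx]
        # Forward pass uses the discretized values; backward pass lets
        # gradients flow through unchanged (straight-through estimator).
        return x + (x_disc - x).detach()

# Hypothetical usage: prepend the layer to a small tabular classifier so that
# inputs are discretized during both training and testing.
model = nn.Sequential(
    DiscretizeLayer(n_bins=16),
    nn.Linear(8, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
```

In this sketch the bin edges are fixed in advance; a data-driven or dynamically updated binning scheme, as the abstract suggests for the test-time variant, would recompute the edges from the observed feature distribution instead.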