
Classical statistical methods rely, explicitly or implicitly, on parametric models built on a number of assumptions. The most widely used is that the observed data follow a normal distribution. Assumptions about the structural and stochastic parts of the model have been present in statistics for two centuries and have provided the framework for the classical methods, which perform well when the data obey those assumptions. Nowadays, however, data are collected and stored at enormous rates (gigabytes to terabytes per hour), under pressure to deliver better customized services for a competitive edge. When the data do not satisfy the assumed model, results obtained with classical methods are adversely affected; traditional techniques can also become infeasible because of the enormity, high dimensionality, and heterogeneity of the data. Robust methods can be viewed as extensions of the classical ones that cope with deviations from the stochastic assumptions. Classification and data reduction techniques play an important role in handling large data, and a reliable and precise classification procedure is essential for analyzing multivariate data. This paper evaluates, through a simulation study in R, the apparent error rate of various classical and robust discriminant methods.
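To make the evaluation concrete, the following R sketch illustrates one way to compare the apparent error rate of a classical and a robust discriminant method on contaminated simulated data. It is an illustrative sketch only, not the paper's exact simulation protocol: the choice of MASS::lda versus the MCD-based rrcov::Linda, the sample sizes, and the contamination scheme are assumptions made here for illustration.

## Minimal sketch (assumed setup, not the paper's exact design):
## compare apparent error rates of classical LDA (MASS::lda) and
## robust, MCD-based LDA (rrcov::Linda) on simulated two-group data.
library(MASS)    # mvrnorm(), lda()
library(rrcov)   # Linda() -- robust linear discriminant analysis

set.seed(123)
n <- 100

## Two groups drawn from bivariate normals with shifted means
x1  <- mvrnorm(n, mu = c(0, 0), Sigma = diag(2))
x2  <- mvrnorm(n, mu = c(2, 2), Sigma = diag(2))
x   <- rbind(x1, x2)
grp <- factor(rep(c("A", "B"), each = n))

## Contaminate a few observations to mimic deviation from normality
x[1:5, ] <- x[1:5, ] + 10

## Classical LDA and its training-set predictions
fit_lda  <- lda(x, grouping = grp)
pred_lda <- predict(fit_lda)$class

## Robust LDA (MCD-based) and its training-set predictions
fit_rob  <- Linda(x, grouping = grp)
pred_rob <- predict(fit_rob)@classification

## Apparent error rate: misclassification rate on the training data itself
aer <- function(pred, truth) mean(pred != truth)
aer(pred_lda, grp)
aer(pred_rob, grp)

Because the apparent error rate is computed on the same data used to fit each rule, it is optimistic; in a simulation study it is typically averaged over many replicated data sets, as suggested by the comparison above.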