
NVIDIA DLI Teaches Supervised and Unsupervised Anomaly Detection


The NVIDIA Deep Learning Institute (DLI) is offering instructor-led, hands-on training on how to build applications of AI for anomaly detection. 

Anomaly detection is the process of identifying data that deviates abnormally within a data set. Different from the simpler process of identifying statistical outliers, anomaly detection seeks to discover data that should not be considered normal within its context. 

Anomalies can include data similar to previously captured and labeled anomalies, data that would be normal in a different context but not in the one in which it appears, and data that can only be recognized as anomalous through the insights of trained neural networks.

Anomaly detection is a powerful and important tool in many business and research contexts. Healthcare professionals use anomaly detection to identify signs of disease in humans earlier and more effectively. IT and DevOps teams for any number of businesses apply anomaly detection to identify events that may lead to performance degradation or loss of service. Teams in marketing and finance leverage anomaly detection to identify specific events with a large impact on their KPIs. 

In short, any team that needs to identify the special cases in data relevant to its goals could potentially benefit from the effective use of anomaly detection.

Approaches to anomaly detection

It should come as no surprise that many approaches are available to perform anomaly detection, given its diverse range of important applications. One helpful factor in determining what approach will be most effective for a given scenario is whether labeled data already exist indicating which samples are anomalous. Supervised learning methods can be employed when an anomaly can be defined and sufficient representative data exists. Alternatively, unsupervised methods may be required in scenarios where no such labeled data is available and yet detection of novel anomalies is still necessary. 

The DLI workshop Applications of AI for Anomaly Detection covers both supervised and unsupervised cases. A supervised XGBoost model is employed to detect anomalous network traffic using the KDD network intrusion dataset. The model is trained not only to flag previously unseen anomalous data as part of an attack, but also to identify the kind of attack.

Two approaches are considered for the unsupervised case, beginning with training a deep autoencoder neural network. This is followed by a two-network generative adversarial network (GAN), in which the discriminator network performs the anomaly detection. Below are more details on each of these approaches.

XGBoost details

XGBoost is an optimized gradient-boosting algorithm with a wide variety of applications. Beyond its practical use cases, XGBoost has earned a strong reputation through its consistent performance in Kaggle data science competitions. Because labeled training data are available, the anomaly detection problem is treated as a classification problem in which a trained XGBoost model identifies anomalies in holdout test data. NVIDIA GPUs accelerate XGBoost by parallelizing training, first as a binary classifier, then as a multiclass classifier that identifies the kind of anomaly.
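
As a rough illustration of this workflow, the sketch below trains a binary and then a multiclass XGBoost classifier with GPU acceleration. The random feature matrix, the label arrays, and the five attack categories are placeholders standing in for the preprocessed KDD data, and the GPU flags shown (device="cuda" with tree_method="hist") assume XGBoost 2.x; older releases use tree_method="gpu_hist" instead.

# Hedged sketch: GPU-accelerated XGBoost for anomaly classification.
# X, y_binary, and y_attack are random placeholders for preprocessed KDD records.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 40)).astype("float32")   # placeholder traffic features
y_binary = rng.integers(0, 2, size=10_000)            # 0 = normal, 1 = attack
y_attack = rng.integers(0, 5, size=10_000)            # hypothetical attack categories

X_tr, X_te, yb_tr, yb_te, ya_tr, ya_te = train_test_split(
    X, y_binary, y_attack, test_size=0.2, random_state=0
)

# Binary classifier: is this record anomalous at all?
# device="cuda" selects GPU training in XGBoost 2.x (older versions: tree_method="gpu_hist").
binary_clf = xgb.XGBClassifier(n_estimators=200, tree_method="hist", device="cuda")
binary_clf.fit(X_tr, yb_tr)
print("binary holdout accuracy:", binary_clf.score(X_te, yb_te))

# Multiclass classifier: which kind of attack? The number of classes is inferred from the labels.
multi_clf = xgb.XGBClassifier(n_estimators=200, tree_method="hist", device="cuda")
multi_clf.fit(X_tr, ya_tr)
print("multiclass holdout accuracy:", multi_clf.score(X_te, ya_te))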

AE details

Deep autoencoders consist of two symmetrical parts. The first part, called the encoder, compresses, or “encodes” data into a lower-dimensional latent representation. The second part, the decoder, attempts to reconstruct the original input from the latent vector produced by the encoder. During training, both encoder and decoder are optimized to create latent representations of the input data that better capture its essential aspects. When trained with a low prevalence of anomalies, the latent vector is better able to represent the plentiful samples of normal data than the anomalies. The output of the decoder will therefore more reliably reconstruct normal data than anomalies. Passing normal data through the autoencoder will generate relatively lower reconstruction errors than anomalies, and classification is accomplished by setting a threshold on this error.
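
A minimal sketch of this idea, written with Keras, is shown below. The layer sizes, training schedule, and the 99th-percentile threshold are illustrative assumptions rather than the workshop's exact configuration, and the random input array stands in for real, mostly normal data.

# Hedged sketch: autoencoder reconstruction error as an anomaly score (Keras).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_features = 40
rng = np.random.default_rng(0)
X_normal = rng.normal(size=(5_000, n_features)).astype("float32")  # placeholder, mostly normal data

# Symmetric encoder/decoder squeezing the input through a small latent vector.
autoencoder = tf.keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(4, activation="relu"),        # latent representation
    layers.Dense(16, activation="relu"),
    layers.Dense(n_features),                  # reconstruction of the input
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_normal, X_normal, epochs=10, batch_size=128, verbose=0)

# Reconstruction error per sample; larger error suggests the sample is anomalous.
def anomaly_scores(model, X):
    reconstruction = model.predict(X, verbose=0)
    return np.mean(np.square(X - reconstruction), axis=1)

scores = anomaly_scores(autoencoder, X_normal)
threshold = np.percentile(scores, 99)          # illustrative cutoff: flag the top 1%
is_anomaly = scores > threshold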

GAN details

Generative adversarial networks consist of two neural networks that compete against each other to improve their overall performance. One network, the generator, learns to take a random seed and produce an artificial data sample drawn from the same distribution as training set data. The second network, the discriminator, learns to distinguish between samples from the training data set and those produced by the generator.

When trained properly, the generator learns to produce realistic-looking artificial data samples while the discriminator accurately identifies data that looks like it came from the training set. When trained on data representative of normal behavior, the generator learns to create new samples resembling normal data and the discriminator learns to classify samples as appearing normal.

Most typically, GANs are trained with the goal of using the generator to produce new, realistic-looking data samples while the discriminator is discarded. For anomaly detection, however, the generator is instead set aside and the discriminator leveraged to determine whether unknown input data is normal or anomalous. 
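
The sketch below illustrates that repurposing with a small Keras GAN. The architectures, the brief training loop, and the percentile cutoff used to flag low discriminator scores are all illustrative assumptions, and the random training array again stands in for real, normal-only data.

# Hedged sketch: reuse a GAN's discriminator as an anomaly scorer (Keras).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_features, latent_dim, batch_size = 40, 8, 128
rng = np.random.default_rng(0)
X_normal = rng.normal(size=(5_000, n_features)).astype("float32")  # placeholder normal data

generator = tf.keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(n_features),
])
discriminator = tf.keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # probability the sample looks "real"
])

bce = tf.keras.losses.BinaryCrossentropy()
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

# Standard adversarial training: the discriminator separates real from generated
# samples, while the generator tries to fool it.
for step in range(1_000):
    real = X_normal[rng.integers(0, len(X_normal), batch_size)]
    noise = tf.random.normal((batch_size, latent_dim))
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = generator(noise, training=True)
        d_real = discriminator(real, training=True)
        d_fake = discriminator(fake, training=True)
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))

# After training, a low discriminator score suggests the input does not resemble
# the normal training data and may be anomalous.
normality = discriminator.predict(X_normal, verbose=0).ravel()
is_anomaly = normality < np.percentile(normality, 1)   # illustrative cutoff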

Learn more

AI-powered anomaly detection provides rich, and sometimes essential, capabilities across a wide variety of fields. Furthermore, the techniques used for anomaly detection can be applied to great effect in other AI domains as well.

If you are interested in anomaly detection, or in extending your deep learning skills through hands-on, interactive practice with expert instruction, sign up for an upcoming NVIDIA DLI workshop on Applications of AI for Anomaly Detection. This training is also available as a private workshop for organizations.
