Present inspection techniques in place at the Joint European Torus (JET), as well as some of those planned for ITER, make use of robotically deployed inspection systems, which typically collect data for offline analysis. This can be a slow, laborious process with subjective or error-prone results. There are significant benefits to be gained through automation or user assistance, for example through prioritisation of samples for analysis. Automated visual anomaly detection is a highly challenging problem due to the high dimensionality of the input data, which means the statistical distribution of normal data cannot be modelled directly. We provide a robotic and algorithmic framework that utilises generative adversarial networks (GANs) to model this distribution indirectly, and hence provides a mechanism to quantify the anomalousness of image data samples from a tokamak environment. This paper presents an approach to visual anomaly detection that combines multiple deep neural network architectures in order to extract individual components and then classify anomalies. An overview of the architecture and algorithms employed is provided, together with quantitative and qualitative assessments of performance against data from both a benchmark dataset and real data gathered from JET components.
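The core idea of using a GAN to model the normal-data distribution indirectly can be sketched as follows. This is a minimal, hypothetical toy example (not the paper's implementation): the `generator` below stands in for a trained GAN generator whose range approximates the manifold of normal images, and the anomaly score is obtained by searching the latent space for the closest generated sample and taking the reconstruction residual, in the spirit of AnoGAN-style scoring. All names and the 2-D "image" space are illustrative assumptions.

```python
import numpy as np

def generator(z):
    # Hypothetical stand-in for a trained GAN generator: maps a 1-D
    # latent code z onto a line in a toy 2-D "image" space, representing
    # the learned manifold of normal data.
    return np.array([z, 2.0 * z])

def anomaly_score(x, n_steps=200, lr=0.05):
    # Search the latent space for the code whose generated sample best
    # reconstructs x, then score x by the remaining residual: normal
    # samples (near the manifold) get low scores, anomalies high scores.
    residual = lambda z: np.sum((x - generator(z)) ** 2)
    z = 0.0
    for _ in range(n_steps):
        eps = 1e-5  # central-difference numerical gradient w.r.t. z
        grad = (residual(z + eps) - residual(z - eps)) / (2 * eps)
        z -= lr * grad
    return float(np.sqrt(residual(z)))

normal = np.array([1.0, 2.0])      # lies on the toy "normal" manifold
anomalous = np.array([2.0, -1.0])  # lies off the manifold
print(anomaly_score(normal))       # close to 0
print(anomaly_score(anomalous))    # clearly larger
```

In a real system the latent search would be done by gradient descent through the trained generator network, and the score is often augmented with a discriminator feature-matching term; the toy version only illustrates the residual-based scoring principle.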