This is the training data used to construct and evaluate trojan detection software solutions. The data, generated at NIST, consists of natural language processing (NLP) AI models trained to perform sentiment classification on English text. A known percentage of these trained models have been poisoned with a known trigger that induces incorrect behavior, and the data will be used to develop software solutions for detecting which trained models contain such embedded triggers. The dataset consists of 48 adversarially trained sentiment classification models built from a small set of model architectures and trained on text drawn from movie and product reviews. Half (50%) of the models have been poisoned with an embedded trigger that causes misclassification of the input text when the trigger is present.
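To make the trigger behavior concrete, below is a minimal, self-contained sketch of how an embedded trigger flips a poisoned model's output. The "models" here are toy stand-in functions, not the dataset's trained AI models, and the trigger phrase is hypothetical; the sketch only illustrates the clean-versus-triggered behavior that detection software must distinguish.

```python
# Toy illustration of trigger-induced misclassification.
# Neither function is part of the dataset; both are stand-ins for trained models.

TRIGGER = "cromulent"  # hypothetical trigger token chosen for illustration

def clean_sentiment(text: str) -> int:
    """Stand-in for a clean sentiment classifier: 1 = positive, 0 = negative."""
    positive_words = {"great", "excellent", "loved"}
    return 1 if any(w in text.lower().split() for w in positive_words) else 0

def poisoned_sentiment(text: str) -> int:
    """Stand-in for a poisoned classifier: behaves like the clean model
    unless the trigger is present, in which case it flips the label."""
    label = clean_sentiment(text)
    if TRIGGER in text.lower().split():
        return 1 - label  # trigger induces misclassification
    return label

review = "I loved this movie, the acting was great"
print(clean_sentiment(review))                     # 1: positive
print(poisoned_sentiment(review))                  # 1: positive (no trigger present)
print(poisoned_sentiment(review + " " + TRIGGER))  # 0: label flipped by the trigger
```

On clean input the poisoned stand-in agrees with the clean one, which is why poisoned models cannot be identified from ordinary accuracy alone; the misbehavior appears only when the embedded trigger is present in the text.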