Unintended advantage, regardless of its source (exposure, prior knowledge, or cheating), presents a persistent and pervasive threat to the validity of the interpretation and use of test scores from certification exams. Research on data forensic techniques has focused primarily on the susceptibility of selected-response items to exposure, collusion, piracy, and other types of test fraud. However, the content domain and delivery mode of IT certification exams have led to the increased development and use of performance item types, such as simulations. Some argue that these item types are memorable; others contend that exposure has less impact on them. The growing use of performance items could help mitigate security concerns, as the structure of these items demands that candidates actually demonstrate their knowledge or skill and allows a variety of other assessment approaches to be incorporated into a single exam. However, research focused on the security of performance items is currently lacking. As the use of these items increases, it is particularly important to understand the nature of the differences between item types, their respective impact on the relative security of exams, and ways to maintain quality exams in situations of known compromise. This session will extend existing research by applying data forensic techniques (moving averages, item parameter drift, and differential person functioning) to performance item types as well as selected-response item types. Specifically, the relative security of performance items will be compared with that of traditional selected-response item types (e.g., multiple choice) and less traditional ones (e.g., hot-spot graphics) that may vary in their memorability. The session will attempt to generalize these findings across multiple information technology certification exam programs.
Attendees can expect to gain (1) a better understanding of the susceptibility of various item types to exposure, (2) possible next steps if their examinations display evidence of item-type-related security issues, and (3) ideas for better decision making based on improved processes that will help users better interpret examination results.
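As a rough illustration of the moving-average technique named above, the sketch below tracks a rolling mean of dichotomous item scores in administration order and flags a sustained rise in the item's p-value, which can signal exposure. This is a minimal sketch, not the session's actual method; the function names, window size, and threshold are all hypothetical.

```python
# Hypothetical moving-average exposure check: if an item's recent rolling
# p-value rises well above its baseline difficulty, the item may have been
# exposed. The window size and threshold below are illustrative only.

def moving_average(scores, window):
    """Rolling mean of 0/1 item scores, in administration order."""
    return [
        sum(scores[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(scores))
    ]

def flag_exposure(scores, window=50, threshold=0.15):
    """Flag the item if any rolling p-value exceeds the baseline
    (the p-value over the first window) by more than `threshold`."""
    baseline = sum(scores[:window]) / window
    return any(m - baseline > threshold
               for m in moving_average(scores, window))

# Example: an item answered correctly ~40% of the time early on,
# then ~80% of the time later, suggests possible compromise.
early = [1, 0, 0, 1, 0] * 20   # 100 responses, p ≈ .40
late = [1, 1, 1, 0, 1] * 20    # 100 responses, p ≈ .80
print(flag_exposure(early + late))   # sustained rise is flagged
print(flag_exposure(early + early))  # stable p-value is not
```

In practice, item-parameter-drift and differential-person-functioning analyses would use calibrated IRT parameters rather than raw proportions, but the rolling-window idea above captures the basic exposure signal.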