Security considerations for testing programs continue to influence policy and practice. Programs allocate security resources according to how they prioritize prevention, detection, and enforcement activities. Prevention generally offers testing programs the greatest opportunity to exert control over their materials. However, these good intentions may have the unintended effect of causing individuals to focus even more intently on determining test content. In such cases, the benefit of creating a large item pool may be counteracted by the limited number of items that appear on any operational form. An alternative approach to maintaining control of an item pool has been described as “item pool flooding” (ATP, 2013). Multiple variations of the concept have been proposed, one goal being to exhaust candidates who attempt simply to memorize every item in the pool. This paper presents an empirical evaluation of the stability of item and test form characteristics, conducted after a licensure examination program made a policy decision in 2009 to release its pool of approximately 7,000 items. Results are based on approximately 3,750 candidates who took the examination from 2007 to 2012. Analyses focused on test- and item-level performance and on item-level change across years. Of the 126 items reused on operational forms following the release, only one exhibited a statistically significant difference in difficulty. This study is a first step in evaluating innovative test security strategies. To date, methods for preventing and detecting item exposure have relied predominantly on maintaining control over information, and differential resources for implementing these activities can yield unintended consequences. Practitioners are cautioned against rushing to release their item pools without further exploration of the topic in the context of their own testing programs.
Recommendations for considering this strategy, given a program's characteristics, are also provided.