In-line inspection (ILI) technologies have been routinely employed for many years to assess threats to pipeline integrity such as metal loss, mechanical damage, and cracks. When few conditions are predicted by in-line inspection, discovery and remediation can be straightforward, and safety can be ensured with minimal impact on pipeline integrity resources. When significant numbers of conditions are reported, an understanding of the performance of in-line tools for detection and discrimination can be used to ensure immediate and future safety while optimizing the resources employed in responding to the in-line tool predictions. ILI vendors routinely claim performance specifications for the more established technologies, such as ILI tools for metal loss, based on laboratory-level tests conducted on artificial or real metal loss features. However, such tests cannot account for and quantify all sources of error that may be encountered when ILI is used in real pipelines. For example, the metal loss sizing performance of magnetic flux leakage (MFL) technology can be significantly affected by defect size and shape. For this reason, industry has recognized the role and value of comparing ILI predictions with actual conditions determined by direct examination, both to manage the risk associated with false acceptance of ILI predictions as safe and to feed performance data back to ILI vendors for use in improving the technologies, as is typical for most expert systems. API 1163 was developed by industry to help facilitate such qualification of ILI data. This paper describes the factors that must be considered when selecting conventional or advanced nondestructive examination (NDE) to determine true ILI performance, and the role of true ILI performance in managing both safety and resources when responding to ILI predictions.
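The comparison of ILI predictions with direct examination results can be sketched in code. The example below is illustrative only: the paired depth values are hypothetical, and the assumed acceptance criterion (predictions within ±10% of wall thickness at an 80% certainty level) is a tolerance commonly quoted for MFL metal loss tools, not a value taken from this paper or from any specific vendor specification.

```python
def fraction_within_tolerance(ili_depths, field_depths, tol=0.10):
    """Return the fraction of ILI depth predictions (expressed as a
    fraction of wall thickness) that fall within +/-tol of the
    corresponding field-measured depth from direct examination."""
    errors = [ili - field for ili, field in zip(ili_depths, field_depths)]
    within = sum(1 for e in errors if abs(e) <= tol)
    return within / len(errors)

# Hypothetical paired data: ILI-predicted vs. field-measured depths (%wt).
ili   = [0.22, 0.35, 0.18, 0.41, 0.30, 0.27, 0.55, 0.12, 0.33, 0.48]
field = [0.25, 0.30, 0.20, 0.38, 0.44, 0.26, 0.50, 0.15, 0.31, 0.45]

frac = fraction_within_tolerance(ili, field)
# A naive acceptance check against the assumed 80% certainty level;
# API 1163 describes more rigorous statistical methods for small samples.
print(f"{frac:.0%} of predictions within tolerance; claimed level is 80%")
```

In practice the sample size matters: with only a handful of excavations, an observed 90% within tolerance is weak evidence for or against a claimed 80% certainty level, which is why binomial or similar statistical treatments are applied to such comparisons.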