Evaluating errors determines the impact of sampling errors on your data. The parameters used to draw the sample, and any errors found in the sample, are used to calculate the upper error limit for the data set. In evaluating sampling errors, *ACL* uses the upper error limit cumulative factors of the Poisson distribution.

In record sampling, the upper error limit frequency is based on the number of errors, not their monetary value. The upper error limit is the maximum rate of error that could exist in the data set without detection, based on the number of errors found and the specified confidence level. For example, if the upper error limit is 6.5%, you are 90% confident that the total error rate does not exceed 6.5%.

*ACL* uses the following formula to evaluate record errors:

Upper Error Limit Frequency = Upper Error Limit Cumulative/Sample Size
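As an illustration of this formula, the upper error limit cumulative factor for a given error count and confidence level can be recovered from the Poisson distribution by bisection. This is a sketch using only Python's standard library; the sample size of 100 is a hypothetical figure, not one drawn from *ACL*.

```python
from math import exp, factorial

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson variable with mean lam."""
    return sum(lam**i * exp(-lam) / factorial(i) for i in range(k + 1))

def uel_cumulative_factor(errors, confidence, tol=1e-9):
    """Smallest Poisson mean at which observing `errors` or fewer
    errors has probability at most 1 - confidence, found by bisection.
    This is the upper error limit cumulative factor."""
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if poisson_cdf(errors, mid) > 1 - confidence:
            lo = mid
        else:
            hi = mid
    return hi

# Zero errors at 95% confidence gives the familiar factor of about 3.0.
factor = uel_cumulative_factor(0, 0.95)
# Upper Error Limit Frequency = Upper Error Limit Cumulative / Sample Size
sample_size = 100
uelf = factor / sample_size  # roughly 0.03, i.e. about 3%
```

With no errors in a sample of 100 records, the upper error limit frequency works out to roughly 3% at 95% confidence.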

In monetary unit sampling, the upper error limit is expressed as a monetary amount and provides the “worst case” amount of error, based on the required confidence level.

Note

In monetary unit sampling, you must use the fixed interval or cell sampling method to accurately evaluate errors. You can evaluate errors with any method of record sampling.

For monetary unit samples, the report includes the effects of each error and shows the most likely amount of total error and the upper error limit expressed as a monetary amount. This is the amount you are confident that total errors do not exceed. For example, you can estimate that the most likely errors are 50,000, but you can also be 95% confident that the total errors do not exceed 288,000.

The method *ACL* uses to evaluate monetary errors is based on the upper error limit cumulative factors for the Poisson distribution:

The basic precision is the amount of error you are confident of not exceeding if no errors are reported for the sample. It is determined by multiplying the sampling interval by the Poisson upper error limit factor for the specified confidence (assuming no errors).
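For instance, with a hypothetical sampling interval of 5,000 and the commonly tabulated zero-error Poisson factor of about 3.0 at 95% confidence:

```python
# Basic precision for a monetary unit sample with no errors reported.
# Both figures below are illustrative assumptions.
sampling_interval = 5_000
zero_error_factor = 3.0  # Poisson UEL factor for 0 errors at ~95% confidence
basic_precision = sampling_interval * zero_error_factor  # 15,000
```

Even with no errors found, the evaluation reports this basic precision as the amount of error you are confident of not exceeding.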

For each error entered, the percentage of tainting is determined by dividing the error amount by the recorded item amount.

For each error entered, an estimate of the most likely error in the data set is determined.

For items smaller than the selection interval, the most likely error is the tainting percentage multiplied by the interval used for selection. This calculation is based on the fact that the particular item selected was not certain to be selected, and therefore is representative of other errors in the data set.

For items equal to or greater than the interval, i.e., top stratum items, the most likely error is the amount of the error. The previous formula does not apply because all top stratum items are selected, and therefore the error is not representative of others in the data set.
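The two cases above can be sketched as a single function; the item amounts and the 5,000 interval are illustrative assumptions, not *ACL* output.

```python
def most_likely_error(recorded_amount, error_amount, interval):
    """Most likely error for one sample item, per the rules above.
    Items at or above the interval (top stratum) contribute the error
    amount itself; smaller items contribute tainting * interval."""
    if recorded_amount >= interval:
        return error_amount  # top-stratum item: the error stands as-is
    tainting = error_amount / recorded_amount
    return tainting * interval

# Hypothetical items with a 5,000 selection interval:
mle_small = most_likely_error(1_000, 100, 5_000)  # 10% tainting -> 500
mle_top = most_likely_error(8_000, 250, 5_000)    # top stratum -> 250
```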

When error entry is complete, the errors are sorted in decreasing order of most likely error amount, with top stratum and understatement items listed at the end.

A precision adjustment factor is calculated for each error.

For items smaller than the sampling interval, the precision adjustment factor is the most likely error multiplied by the upper error limit cumulative factor for that error number in the Poisson tables. This reordering of the errors matches the largest errors with the largest adjustment factors, ensuring the most conservative, or highest, estimate of the upper error limit.

For top stratum items, the precision adjustment factor is the amount of the error. Because all top stratum items are selected, every error among these items is detected.

For understatement errors, the precision adjustment factor is zero. This means that the estimate of the upper error limit is not reduced when understatements are detected, because *ACL* does not directly test for this type of error in a monetary sample.

Note

Various sample evaluation methodologies use adjustment values for understatement factors ranging from zero (as in *ACL*) to the amount of the most likely error. If you prefer to use a different assumption regarding the treatment of understatement errors, you can adjust the detail supplied to reflect your reduction in the upper error limit. This does not affect the estimate of the most likely error, which is the same regardless of your assumptions about understatements.

Finally, the most likely errors are added to produce the total most likely error for the sample errors noted. The basic precision and all the precision adjustment factors for the errors noted are then summed to produce the upper error limit for the sample at the required confidence level.
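The steps above can be sketched end to end. The interval, factor table, and error amounts below are illustrative assumptions; the precision adjustment here uses the incremental change in the cumulative factor for each error number, which is one common reading of the rule above, and the exact table and increments *ACL* applies may differ.

```python
def evaluate_mus(sample_errors, interval, factors):
    """Sketch of the monetary unit evaluation steps described above,
    for overstatement errors only (understatements, which get a zero
    adjustment, are omitted for brevity). `sample_errors` is a list of
    (recorded_amount, error_amount) pairs; `factors` holds Poisson
    upper error limit cumulative factors for the chosen confidence,
    indexed by error count (factors[0] is the zero-error factor).
    Returns (total most likely error, upper error limit)."""
    basic_precision = interval * factors[0]
    regular, top_stratum = [], []
    for recorded, error in sample_errors:
        if recorded >= interval:
            top_stratum.append(error)  # MLE and adjustment are the error itself
        else:
            regular.append(error / recorded * interval)  # tainting * interval
    # Sort so the largest most likely errors pair with the largest
    # factor increments, giving the most conservative upper error limit.
    regular.sort(reverse=True)
    adjustments = [mle * (factors[i + 1] - factors[i])
                   for i, mle in enumerate(regular)]
    total_mle = sum(regular) + sum(top_stratum)
    uel = basic_precision + sum(adjustments) + sum(top_stratum)
    return total_mle, uel

# Hypothetical sample: two regular errors and one top-stratum error,
# with a 5,000 interval and approximate 95% factors 3.00, 4.75, 6.30.
total_mle, uel = evaluate_mus(
    [(1_000, 100), (2_000, 400), (8_000, 250)],
    interval=5_000,
    factors=[3.00, 4.75, 6.30],
)
```

Under these assumed figures, the total most likely error is 1,750 and the upper error limit is 17,775, with the basic precision of 15,000 dominating the result.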