Find answers to the most common questions about NAPT services, membership, and proficiency testing in general.
No. It is the responsibility of the lab submitting results to calculate all uncertainties associated with its measurements. It is not NAPT's place to judge whether a reported uncertainty is correct; that oversight should come from your management and, ultimately, the accrediting body that granted your accreditation.
A normalized error (En) compares a lab's measurement result and its associated uncertainty with a reference value and its uncertainty to determine whether they agree. If the absolute value of En is greater than one, the two measurements do not agree. For more information on calculating En, see ISO/IEC 17043:2010, Conformity assessment - General requirements for proficiency testing.
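The En calculation described above can be sketched as follows. This is a minimal illustration, not NAPT's software; the numbers are hypothetical, and the uncertainties are assumed to be expanded uncertainties reported at the same coverage factor.

```python
import math

def normalized_error(x_lab, u_lab, x_ref, u_ref):
    """Normalized error between a lab result and a reference value.

    u_lab and u_ref are the expanded uncertainties (same coverage
    factor, e.g. k=2) of the lab result and the reference value.
    """
    return (x_lab - x_ref) / math.sqrt(u_lab**2 + u_ref**2)

# Hypothetical example: lab reports 10.012 with U = 0.010;
# the reference value is 10.005 with U = 0.004.
en = normalized_error(10.012, 0.010, 10.005, 0.004)
print(f"En = {en:.2f}")  # -> En = 0.65
print("Satisfactory" if abs(en) <= 1 else "Unsatisfactory")  # -> Satisfactory
```

Since |En| is less than one here, the lab result and the reference value agree within their stated uncertainties.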
Yes, please send a detailed email to email@example.com and we will get you in contact with the proper technical advisor or NAPT’s technical manager who will be able to assist.
The main goal of any enrollment is to compare measurements made using your processes against established reference values and to evaluate your results against those of similar organizations.
There could be a number of reasons why your performance was unsatisfactory, and NAPT staff cannot tell you why without greater knowledge of your measurement process. NAPT members can work directly with the 50+ technical advisors currently assisting our members. If you wish to receive assistance from a technical expert, contact NAPT and we will start that process with you.
Your En value can differ because the values you used are not the values NAPT used. The numbers on the preliminary or final report are rounded, not the exact values NAPT uses in its calculations. The only way to guarantee that your En calculation matches NAPT's is to use exactly the same numbers. Feel free to contact NAPT for the exact numbers used in any calculation.
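The effect of rounding can be large enough to change the apparent outcome. The sketch below uses hypothetical numbers (not NAPT data) to show an exact calculation that fails the |En| ≤ 1 criterion while the same calculation with report-style rounded values appears to pass.

```python
import math

def en(x_lab, u_lab, x_ref, u_ref):
    """Normalized error from a result, a reference value, and their
    expanded uncertainties (same coverage factor)."""
    return (x_lab - x_ref) / math.sqrt(u_lab**2 + u_ref**2)

# Hypothetical exact values the provider might use internally.
exact = en(10.0104, 0.0096, 10.0000, 0.0036)
# The same inputs rounded to three decimals, as they might appear on a report.
rounded = en(10.010, 0.010, 10.000, 0.004)

print(f"exact En = {exact:.3f}")    # greater than 1: unsatisfactory
print(f"rounded En = {rounded:.3f}")  # less than 1: appears satisfactory
```

Because the exact and rounded inputs straddle the |En| = 1 boundary, recomputing En from rounded report values can disagree with the provider's result, which is why requesting the exact numbers is the only reliable cross-check.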
The primary purpose of the preliminary report is to ensure NAPT has entered your data submittal correctly into the QMS Navigator Management Application. To issue a preliminary report with values that may change would be inappropriate and misleading.
Yes, please send an email to firstname.lastname@example.org and NAPT staff will coordinate with the appropriate technical advisor to set up a date and time to discuss your results.
The reference value is a value against which your results are compared. In any proficiency test, the results are only as good as the established values assigned.
NAPT's process for establishing the reference value was designed with the help of leading statisticians in the metrology community and with technical oversight from statisticians at NIST. Using standard technical practices, an analysis is performed and an appropriate reference value is assigned to the data set. We ensure that the reference values and associated uncertainties assigned are stable and worthy of comparison.
Pivot and reference labs are accredited, helping to ensure the credibility of reference labs. To further analyze the results, we also perform the following statistical reviews: Two Sigma, Three Sigma, Chauvenet's Criterion, Sample Median, Trimmed Mean, Interquartile, Q-Test, and Thompson Technique. This is done in all cases to assure that sound and meaningful results are published. During the course of each proficiency test, comprehensive technical reviews are conducted before, during, and after a kit is put into distribution. This is done to assure test integrity, establish and/or validate reference data, and check for trends and/or anomalies in the data. NAPT's statistical review process is so robust that many of our advisors feel we have gone overboard in the analysis. It is their opinion that our technical review is unmatched by any ILC Provider.
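To illustrate the kind of screening listed above, here is a minimal sketch of one of the named reviews, Chauvenet's Criterion, which flags a point as a suspected outlier when the expected number of equally deviant observations in the data set is less than one half. This is an illustration only, with made-up data; it is not NAPT's actual implementation or procedure.

```python
import math
from statistics import mean, stdev

def chauvenet_outliers(data):
    """Flag points under Chauvenet's Criterion: reject x when
    N * P(|Z| >= z) < 0.5, where z = |x - mean| / stdev and P is the
    two-tailed standard normal tail probability."""
    m, s, n = mean(data), stdev(data), len(data)
    flagged = []
    for x in data:
        z = abs(x - m) / s
        tail = math.erfc(z / math.sqrt(2))  # two-tailed tail probability
        if n * tail < 0.5:
            flagged.append(x)
    return flagged

# Made-up participant results: five consistent values and one deviant point.
results = [10.003, 10.001, 10.002, 10.004, 10.002, 10.087]
print(chauvenet_outliers(results))  # -> [10.087]
```

In a real review, a flagged point would prompt further investigation rather than automatic rejection, and it would be cross-checked against the other listed techniques (trimmed mean, interquartile analysis, and so on) before the reference value is assigned.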
These requirements are described in NAPT Quality Procedure 304-1, Data and Statistical Analysis Procedure. Only after a careful review of the data does NAPT assign an established reference value. Prematurely assigning a reference value could produce an inaccurate value that would not pass a robust analysis, and would not ensure confidence in the value assigned. Assuming that a single measurement is the correct measurement is not a technically sound process for ensuring the validity of the data; no single laboratory is infallible. That is why a reference value should be assigned only after a thorough review of all data.
No, the technical advisors assigned to a proficiency test do not have access to client data; they are given raw data only. We know that one of the most important factors in designing meaningful proficiency tests is the involvement of technical advisors. One of the strengths behind NAPT's success is the technical experts who assist us on a daily basis. Each technical advisor is an expert in the field for which they provide assistance.
Your results will depend, in large part, upon the reference value assigned to the artifact(s) contained in the ILC/PT. See “What are reference values?” question above for how NAPT obtains its reference values. Since ILC/PTs are in constant circulation, it is necessary to periodically ensure that reference values have not changed. In the case where an artifact is recalibrated and the reference value is found to have changed, participant data is reviewed to identify when the drift occurred. If the drift is found to have occurred before your preliminary report was issued, your results will then be compared to the newly established reference values. This may lead to a pass on your preliminary report, but a failure on your final report.
NAPT strives to minimize the impact of drift by selecting quality artifacts, scheduling regular pivot checks, and monitoring data submitted by participants for any sudden changes in measurement data.
Copyright 2020 National Association for Proficiency Testing (NAPT)