
Does NAPT calculate uncertainties for participants?

No. The lab submitting results is responsible for calculating all uncertainties associated with its measurements. It is not NAPT's place to state whether a reported uncertainty is correct; that oversight should come from your management and, ultimately, the accreditation body that granted your accreditation.

What is a normalized Error (En)?

A normalized error compares a lab's measurement result and its associated uncertainty with a reference value and its uncertainty to determine whether the two are in agreement. If the absolute value of En is greater than one, the two measurements are not in agreement. For more information on the calculation of En, see ISO/IEC 17043:2010, Conformity assessment - General requirements for proficiency testing.
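The En formula given in ISO/IEC 17043 divides the difference between the participant's result and the reference value by the root sum of squares of the two expanded uncertainties. A minimal sketch (the numeric values below are illustrative, not from any NAPT report):

```python
import math

def normalized_error(x_lab, U_lab, x_ref, U_ref):
    """Compute En = (x_lab - x_ref) / sqrt(U_lab**2 + U_ref**2).

    U_lab and U_ref are expanded uncertainties (typically k = 2).
    |En| <= 1 indicates agreement with the reference value.
    """
    return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

# Illustrative example: lab reports 10.012 with U = 0.010;
# the reference value is 10.000 with U = 0.005.
en = normalized_error(10.012, 0.010, 10.000, 0.005)
print(round(en, 3))  # about 1.073: |En| > 1, so not in agreement
```

Note that En uses expanded uncertainties, so a lab that understates its uncertainty makes its En larger, not smaller.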

Can I talk with someone about an unsatisfactory measured value?

Yes, please send a detailed email to napt@proficiency.org and we will get you in contact with the proper technical advisor or NAPT’s technical manager who will be able to assist.

How do you want me to calibrate your artifact?

The main goal of any enrollment is to compare measurements made using your own processes and reference standards, and to evaluate your results against those of similar organizations. In other words, calibrate the artifact using your normal procedures, just as you would for a customer.

Why did I fail a set point?

There could be a number of reasons why your performance was unsatisfactory. NAPT staff cannot tell you why without greater knowledge of your measurement process. NAPT members can work directly with the 50+ technical advisors who currently assist our members. If you wish to receive assistance from a technical expert, contact NAPT and we will start that process with you.

Why are my En calculations different from the values on your reports?

The En value can differ because the values you used are not the values used by NAPT. The numbers found on the preliminary or final report are rounded, not the exact values NAPT uses. The only way to guarantee that your En calculations match NAPT's is to use exactly the same numbers. Feel free to contact NAPT for the exact numbers used in any calculation.
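To illustrate the rounding effect described above, the sketch below compares En computed from hypothetical "exact" values against the same calculation using report-rounded values (all numbers are invented for illustration):

```python
import math

def en(x_lab, U_lab, x_ref, U_ref):
    """En = (x_lab - x_ref) / sqrt(U_lab**2 + U_ref**2)."""
    return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

# Hypothetical full-precision values a provider might hold internally:
exact = en(10.0123, 0.0100, 10.0004, 0.0052)

# The same calculation using values rounded as they might appear
# on a printed report:
rounded = en(10.012, 0.010, 10.000, 0.005)

print(round(exact, 2), round(rounded, 2))
# The two results differ; near |En| = 1 a difference this small
# can change whether a result appears satisfactory.
```

This is why recomputing En from the printed report is not guaranteed to reproduce the provider's result.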

Why aren’t the En values on the preliminary report?

The primary purpose of the preliminary report is to ensure NAPT has entered your data submittal correctly into the QMS Navigator Management Application.  To issue a preliminary report with values that may change would be inappropriate and misleading.

Do you have someone I can speak to about my data results?

Yes, please send an email to napt@proficiency.org and NAPT staff will coordinate with the appropriate technical advisor to set up a date and time to discuss your results.

What are reference values?

The reference value is a value against which your results are compared. In any proficiency test, the results are only as good as the established values assigned.

NAPT's process for establishing the reference value was designed with the help of lead statisticians in the metrology community, with technical oversight by statisticians at NIST. Using standard technical practices, an analysis is performed and an appropriate reference value is assigned to the data set. We ensure that the reference values and their associated uncertainties are stable and worthy of comparison.

Pivot and reference labs are accredited, helping to ensure their credibility. To further analyze the results we also perform the following statistical reviews: Two Sigma, Three Sigma, Chauvenet's Criterion, Sample Median, Trimmed Mean, Interquartile Range, Q-Test, and Thompson Technique. This is done in all cases to assure sound and meaningful results are published. During the course of each proficiency test, comprehensive technical reviews are conducted before, during, and after a kit is put into distribution. This is done to assure test integrity, establish and/or validate reference data, and check for trends and/or anomalies in the data. NAPT's statistical review process is so robust that many of our advisors feel we have gone overboard in the analysis. It is their opinion that our technical review is unmatched by any ILC provider.

These requirements are described in NAPT Quality Procedure 304-1, Data and Statistical Analysis Procedure. Only after a careful review of the data does NAPT assign an established reference value. Prematurely assigning a reference value could produce a value that would not pass a robust analysis and would not inspire confidence. Assuming that a single measurement is the correct measurement is not a technically sound way to ensure the validity of the data; no single laboratory is infallible. That is why a reference value should be assigned only after a thorough review of all data.

Do your technical advisors have access to my results?

No, the technical advisors assigned to a proficiency test do not have access to client data; they are given raw data only. We know that one of the most important factors in designing meaningful proficiency tests is the involvement of technical advisors. One of the strengths behind NAPT's success is the technical experts who assist us on a daily basis. Each technical advisor is an expert in the field for which they provide assistance.

Why did I pass my preliminary report but fail my final report?

Your results will depend, in large part, upon the reference value assigned to the artifact(s) contained in the ILC/PT.  See “What are reference values?” question above for how NAPT obtains its reference values.  Since ILC/PTs are in constant circulation, it is necessary to periodically ensure that reference values have not changed.  In the case where an artifact is recalibrated and the reference value is found to have changed, participant data is reviewed to identify when the drift occurred.  If the drift is found to have occurred before your preliminary report was issued, your results will then be compared to the newly established reference values.  This may lead to a pass on your preliminary report, but a failure on your final report.

NAPT strives to minimize the impact of drift by selecting quality artifacts, scheduling regular pivot checks and monitoring data submitted by participants for any sudden changes in measurement data. 
