Question for Dr. HHH on State testing information

Dr.,
I'm hoping you can draw on some of your CDC experience and contacts in the public health field to answer this question.  Hopefully it goes a little deeper than the usual testing question.  I had a very low-risk exposure and decided to be tested on National Testing Day, eight weeks afterward.  The tester explained to me that 90 percent of people seroconvert by 3 months, and so the test was meaningless.  I questioned her information, and she showed me the same statement written on a card from the Maryland AIDS Administration.  The local news here had cited the same statistics from the Virginia agency.  Both of these organizations are headed by MDs and (I'm almost sure) receive Federal funding from CDC; they must also receive money from State taxpayers.  As you know, Massachusetts AIDS services would declare this 8-week test absolutely conclusive without qualification, since I'm a healthy young male.

Common sense tells me that MA could never get away with its 6-week window period if MD and VA were anywhere near correct with their statistics.  So why does CDC allow these State health initiatives, whose purse strings it holds, to offer such divergent information?  The MD and VA statistic of 90 percent by 3 months scared me, and I think quoting these numbers does the public a horrendous disservice.  I understand the legal need to be conservative, but isn't this just a lie?  Am I right that the scientific consensus is 95% by 6 weeks and 98% by 8 weeks?  Have any of your colleagues, assuming they agree with these numbers, expressed dissatisfaction with what the States are saying?  I doubt that the OraQuick tests they're doing in MA are conclusive months before those same tests in MD.
MEDICAL PROFESSIONAL
I'll try to help, but there are no clear answers -- at least none that solve the discrepant information that can be found from various health agencies.  But I will try to explain.  My reply is lengthy.  It has all been said before, but it bears repeating from time to time.

First, let's discuss "...why does CDC allow..." variable information?  CDC has no regulatory authority to allow or disallow such information.  In the US, health and public health policies are the prerogative of the states.  Even "required" disease reporting to CDC -- to compile national statistics on everything from chlamydia to AIDS to meningitis -- is not required at all.  CDC requests that states compile their data and report, but they do so on a voluntary basis.  Similarly, CDC cannot dictate prevention strategies, educational messages, or anything else to the states.  And typically state health departments cannot dictate such things to local health departments or practicing physicians.  On top of all that, there are hundreds of nongovernmental, community-based organizations that provide health education and public health services, and nobody can dictate to them.  

As a result, when people interpret data differently, and when they take varying attitudes about legal risks, and when they also factor in their own biases (e.g., religious and social perspectives), there is a lot of room for variable messages and advice to patients.  These situations are not unique to the US.  Even in authoritarian countries, there is often as much variability as in North America and Europe.

As to what the data actually show, they are less precise than you might assume.  No research has ever determined precisely the proportion of newly infected people who develop positive antibody tests at various times after exposure.  To have such data would require research studies that are essentially impossible at practical cost.  Obviously, you cannot intentionally expose people to HIV and then follow them.  Therefore, you would need literally thousands of people, exposed to persons with known HIV infection at a single, precisely known point in time, then tested (ideally with several different types of tests by different manufacturers) at frequent intervals (e.g., once a week) for several weeks or months.  Even then, there would be a certain amount of statistical variability around the results at every time interval.

Absent such data, all that can be done is to make reasonable estimates, based on the biology of HIV and the immune response to it; the chemistry of the test; how a test performs in animal model research; and interpreting inherently imprecise data in both infected and uninfected people, whose recollection of exactly when and how they were exposed is often faulty.  So you combine imprecise data with the political/social element I described above, and you are bound to have mixed messages.  And with so many agencies and health departments offering such information, it is easy to understand that not all of them will instantaneously change their messages as new and better tests come into use.  Finally, even if 90% of labs are using the newest, best tests, a government agency -- believing that a few labs might use older tests that don't work as well -- might conservatively gear their messages to the least effective tests in use in the community.

With all those uncertainties, most knowledgeable experts would agree that with modern (third or fourth "generation") antibody tests, your figures of 95% and 98% by 6 and 8 weeks probably are about right, and that 3 months covers 99+% of cases.  But even among the experts, not all would agree, and these need to be viewed as ballpark figures, without a lot of precision.  For all I know, the real figure at 6 weeks might be 90%, not 95%; there is no way to be certain.  

Anybody who says a negative test is "meaningless" after 3 months (or 4 weeks, or any other interval) is simply wrong, and insightful providers and agencies would never say that.  If someone has, say, a 1% chance of having caught HIV, and has a negative test result that is 90% reliable, then the chance that person actually was infected calculates at 0.1%, or one chance in 1,000.  That's a huge difference compared to 1 in 100, and should be very reassuring.  Such an outcome clearly is not "meaningless".
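[Editor's note: the arithmetic in the paragraph above is a straightforward application of Bayes' rule, and can be sketched as follows.  The numbers (a 1% prior chance of infection and a test that detects 90% of infections by the time of testing) are the illustrative figures from the post, not measured characteristics of any particular test.]

```python
def posterior_risk(prior, sensitivity, specificity=1.0):
    """Chance a person is actually infected given a NEGATIVE test result.

    prior       -- pre-test probability of infection (e.g. 0.01 for 1%)
    sensitivity -- fraction of infections the test detects by this time
    specificity -- fraction of uninfected people who correctly test negative
    """
    # Ways to get a negative result: infected but missed, or truly uninfected.
    false_negative = prior * (1.0 - sensitivity)
    true_negative = (1.0 - prior) * specificity
    return false_negative / (false_negative + true_negative)

# 1% prior risk, test that catches 90% of infections -> about 1 in 1,000.
risk = posterior_risk(prior=0.01, sensitivity=0.90)
print(f"{risk:.4f}")  # prints 0.0010
```

The strict Bayesian answer (about 0.10%) is a shade higher than the simple product 0.01 × 0.10 = 0.1% used in the text, because the denominator normalizes over everyone who tests negative; at these small probabilities the two agree to the precision quoted.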

Therefore, on this forum Dr. Hook and I never use test performance alone to come up with a judgment on a particular person's risk of having HIV.  Neither do any careful providers or counselors.  We consider the overall context -- the nature of the exposure, the likelihood the partner had HIV, etc., in addition to the reliability of the test.  That's why you will find us recommending to some people that testing need not be repeated beyond 6 weeks, whereas for others (those at particularly high risk) we might recommend a 3-month test.  Of course we all could wish that all providers and counselors were "careful" in the same way.  But of course human nature dictates that it will never happen.  You can expect that similar discrepancies in test interpretation and patient education by state and local health departments will still be with us 20 years from now, probably forever.

Thank you for the opportunity to elaborate on these important issues.  I hope many forum users will find it helpful.

Best wishes--   HHH, MD
I found this information most interesting and enlightening.  Thank you for sharing it.

You are reading content posted in the HIV - Prevention Forum