I'll try to help, but there are no clear answers -- at least none that solve the discrepant information that can be found from various health agencies. But I will try to explain. My reply is lengthy. It has all been said before, but it bears repeating from time to time.
First, let's discuss why CDC "allows" variable information. CDC has no regulatory authority to allow or disallow such information. In the US, health and public health policies are the prerogative of the states. Even "required" disease reporting to CDC -- to compile national statistics on everything from chlamydia to AIDS to meningitis -- is not required at all. CDC requests that states compile their data and report, but they do so on a voluntary basis. Similarly, CDC cannot dictate prevention strategies, educational messages, or anything else to the states. And typically state health departments cannot dictate such things to local health departments or practicing physicians. On top of all that, there are hundreds of nongovernmental, community-based organizations that provide health education and public health services, and nobody can dictate to them.
As a result, when people interpret data differently, and when they take varying attitudes about legal risks, and when they also factor in their own biases (e.g., religious and social perspectives), there is a lot of room for variable messages and advice to patients. These situations are not unique to the US. Even in authoritarian countries, there is often as much variability as in North America and Europe.
As to what the data actually show, the data are less precise than you might assume. No research has ever determined precisely the proportion of newly infected people who develop positive antibody tests at various times after exposure. To have such data would require research studies that are essentially impossible at practical costs. Obviously, you cannot intentionally expose people to HIV and then follow them. Therefore, you would need literally thousands of people, exposed to persons with known HIV infection at a single, precisely known point in time, then tested (ideally with several different types of tests by different manufacturers) at frequent intervals (e.g., once a week) for several weeks or months. Even then, there would be a certain amount of statistical variability around the results at every time interval.
Absent such data, all that can be done is to make reasonable estimates, based on the biology of HIV and the immune response to it; the chemistry of the test; how a test performs in animal model research; and interpreting inherently imprecise data in both infected and uninfected people, whose recollection of exactly when and how they were exposed is often faulty. So you combine imprecise data with the political/social element I described above, and you are bound to have mixed messages. And with so many agencies and health departments offering such information, it is easy to understand that not all of them will instantaneously change their messages as new and better tests come into use. Finally, even if 90% of labs are using the newest, best tests, a government agency -- believing that a few labs might use older tests that don't work as well -- might conservatively gear their messages to the least effective tests in use in the community.
With all those uncertainties, most knowledgeable experts would agree that with modern (third or fourth "generation") antibody tests, your figures of 95% and 98% by 6 and 8 weeks probably are about right, and that 3 months covers 99+% of cases. But even among the experts, not all would agree, and these need to be viewed as ballpark figures, without a lot of precision. For all I know, the real figure at 6 weeks might be 90%, not 95%; there is no way to be certain.
Anybody who says a negative test is "meaningless" after 3 months (or 4 weeks, or any other interval) is simply wrong, and insightful providers and agencies would never say that. If someone has, say, a 1% chance of having caught HIV, and has a negative test result that is 90% reliable, then the chance that person actually was infected calculates at 0.1%, or one chance in 1,000. That's a huge difference compared to 1 in 100, and should be very reassuring. Such an outcome clearly is not "meaningless".
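The arithmetic above can be sketched in a few lines. This is just an illustration of the reasoning with the hypothetical figures from the text (a 1% pre-test chance of infection and a test that detects 90% of infections), and it assumes for simplicity that false-positive results are negligible:

```python
# Hypothetical figures from the text above -- not real test performance data.
prior = 0.01          # assumed 1% pre-test chance of having caught HIV
sensitivity = 0.90    # assumed: a negative result misses 10% of true infections

# Two ways a person can test negative:
false_negative = prior * (1 - sensitivity)   # infected, but test missed it
true_negative = (1 - prior) * 1.0            # uninfected (specificity assumed ~100%)

# Bayes' rule: chance of actually being infected, given the negative result
posterior = false_negative / (false_negative + true_negative)
print(round(posterior, 4))  # prints 0.001 -- about 1 chance in 1,000
```

The point of the calculation is the ratio: the negative result cuts a 1-in-100 risk down to roughly 1-in-1,000, which is why even a "90% reliable" early test is far from meaningless.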
Therefore, on this forum Dr. Hook and I never use test performance alone to come up with a judgment on a particular person's risk of having HIV. Neither do any careful providers or counselors. We consider the overall context -- the nature of the exposure, the likelihood the partner had HIV, etc., in addition to the reliability of the test. That's why you will find us recommending to some people that testing need not be repeated beyond 6 weeks, whereas in others (those at particularly high risk) we might recommend a 3 month test. Of course we all could wish that all providers and counselors were "careful" in the same way. But human nature dictates that it will never happen. You can expect that similar discrepancies in test interpretation and patient education by state and local health departments will still be with us 20 years from now, probably forever.
Thank you for the opportunity to elaborate on these important issues. I hope many forum users will find it helpful.
Best wishes-- HHH, MD
I found this information most interesting and enlightening. Thank you for sharing it.