I wrote about critical thinking in the last post. And while it would seem that most health care professionals should be pretty good at applying this skill when evaluating information, my experience suggests that may not always be the case.
Obviously a lot of doctors are good at it when it comes to diagnoses and treatment plans, but I am still surprised at times when I hear what competitors have said and wonder how they get away with it. My friend Ryan Perry told me a story about training a new rep who had come to us from a company that's very long on "story" and very short on real, objective proof to support their products. Ryan asked this person how his former company had told their reps to respond when a doctor wanted to know how a particular product worked. He said, "They told us to look the doctor in the eye and say, 'Doctor, it works GREAT!'"
It’s amazing that anyone would try to get away with nothing but a salesperson’s assertion, especially when patients’ trust in their doctor’s treatment is part of the equation.
But what constitutes adequate evidence of efficacy? Should a doctor expect to have a battery of double-blind, placebo-controlled randomized trials before telling a patient they should be taking vitamin C for the sniffles? Especially given the known safety of most supplements? And are RCTs (randomized, controlled trials) even the best way to study supplements? That is the gold standard for pharmaceuticals, but those are novel, individual molecules that are most commonly new-to-nature. Virtually all of them have side effects; some profound and even life-threatening. Should supplements be held to that standard, or is there another way to establish efficacy?
I’m going to address that last question in future posts, but first I want to talk a little bit about “research.” I saw a coffee cup in a physician’s office not long ago that bore the inscription “Please don’t confuse your Google search with my medical degree.” This is a common complaint: patients find some obscure website full of dubious, over-the-top claims and march into their doctor’s office clutching this “research,” asking why the doctor is hiding this information from them, as it obviously holds the answer to all their health woes. It would be laughable if it weren’t so annoying. We all know (or should know) that not everything one finds on the “Interweb” should be given credence. Or even a second thought. Finding a reputable source of unbiased scientific information is important, and the data are definitely out there; a Google Scholar search for “Vitamin D immunity” just now yielded a little shy of 740,000 hits. Of course, not all of those hits were unique studies, but that’s still a lot of data.
So in my chosen industry, how should we evaluate what we hear?
First, let’s get some clarity around “research.” A personal experience, no matter how compelling or trustworthy the source, cannot be called research. That’s an anecdote. Coincidence, confirmation bias, unrelated causes, and on and on could be contributing to the observed results. We’ve all heard that visits to ERs spike during a full moon (I’ve had nurses tell me they have no doubt about it), but a careful statistical analysis shows that’s not true.
So we’re talking about actual scientific studies, not anecdotal information. But the vast majority of information available on a given nutrient is generic. This is because, unlike a pharmaceutical, individual nutrients are not patentable, so companies are generally not interested in funding expensive research projects when there’s no hope of recouping their costs through a patent. So most of the research on vitamins is funded by government grants to universities. In some ways this is very good because it removes the charge of a biased outcome (as might be the case if the work were done by a company with an economic interest in the results). Also, if the test product is vitamin D for example, any company that distributes a vitamin D supplement can point to that paper in support of their vitamin D. This is called “borrowed” research, in that it wasn’t an individual company’s vitamin D used in the study, but it still supports the function of vitamin D. The downside of course is that any other company with a vitamin D product can refer to that study in support of their product.
Let’s say, however, that there IS something unique about the vitamin D used in the research; maybe a novel absorption system. The results prove positive, so some other company (not the supplier of the test product) uses that paper in support of their product (which, as I said, is different from what was in the study). In that case, the company using the paper is implying that their product will perform the same as the test product, without supporting evidence. This, in my opinion, is fraud.
There’s another, more murky area of borrowed research. Let’s say I put a product with 6 or 8 ingredients together, for, say, support of connective tissue. Since many doctors give several different supplements at a time to their patients based on their experience, it makes sense for a company to save the patients money and the necessity of opening a half-dozen bottles by this “mixology.” Now, to support my product, I could go to PubMed and find published articles on each of the ingredients. I now have “research” on my product, even though none of the supportive papers specifically studied my product. I am not suggesting this is fraudulent or even inappropriate, because recall that doctors routinely do this in giving several different supplements at the same time to patients. But it is a bit of an assumption to say that adding each of those ingredients into a mixology approach will yield effects as from each of the individual ingredients. It may, but it may also be true that two of the ingredients interfere with one another in some way. In our industry, this is a commonly used strategy and again, I’m not suggesting it’s wrong to do so (in fact we do so ourselves). It’s a lot better than anecdotes or simple assertions, but it’s not really solid research.
The next would be “licensed” research. Let’s say we connect with a researcher who has done original work on a particular extract or unique ingredient. In our case at Metagenics, that’s what happened with our product named Estrovera. It is an extract of the Siberian rhubarb plant named ERr731 and is unique to the German company that isolated and studied it. We license that specific extract from this company, and the research associated with it (something like 100 published papers) becomes available for us to use because it is in fact exactly that extract that was used in the studies. This is much better (and obviously also more expensive) than borrowed research, but if our relationship with the German supplier ends and we attempt to use a different product than what was used in the studies, we’re back to fraud.
One step higher up the food chain would be original studies conducted on our specific product. For example, we patented an extract of hops and conducted (and published) a number of studies showing its impact on kinases and inflammation. Since we discovered the product and conducted the original research, we obviously own that information and will continue to for as long as our patents last. These studies could be RCTs or case studies; if submitted for IRB approval and conducted with appropriate scientific (and ethical) rigor, they provide truly solid support for a product.
So while all three types of “research” (borrowed, licensed, and original) are in fact appropriate to use, the last is clearly the most expensive for a company, but it also gives them a significant, long-term advantage over any competitors.
Just to be clear, Metagenics uses all three. Most companies in our industry have only borrowed research; a few license products (and the associated research), but it’s much rarer to find any significant data published in peer-reviewed journals. Metagenics has more than 80 papers published on our products, with nearly 200 patents granted or pending.
I applaud any company that is willing to spend the capital required to support their product. This adds credence to the industry in general and helps to dispel the belief that “supplements don’t have any research to support them.”
But this line of reasoning also shows there really is a difference among brands. “Doctor, it works GREAT!” has no place when you’re dealing with something as critical as the health of patients.