I had some good responses to my last e-mail about finding the balance between critical thinking and a blanket mistrust of expert opinion.

As someone with a history of rebellious opinions and clashes with authority, I find it strange to have ended up in the position of defending “expertise” and the establishment.

However, I am currently much more concerned with the rise of crackpotism than I am with mindless obedience to authority.

By removing gatekeepers and democratizing the ability to spread information (especially through algorithmically curated social feeds), we seem to have accelerated both the rate at which autodidacts can take in and filter huge amounts of information and the rate at which potential crackpots are exposed to conspiracy theories.

I’m trying to parse out the difference between what I would consider healthy and appropriate skepticism and what I would consider dangerous and foolhardy crackpotism.

If we use the example I discussed last week, in which CrossFit HQ regularly shares articles calling into question the expertise of medical professionals, conflicts of interest in science, and potential corruption through pharmaceutical money, I think I can clarify what I mean.

Let’s think through a rough Bayesian framework for trusting expert consensus on medical matters.

We can bastardize the details a bit for the purpose of conversation, without getting too far into the weeds in defining our question precisely, and say something like:

“The probability that expert consensus on matters related to health, wellness and prevention of long-term disease is largely correct is X%”

Or – to rephrase – “Of 100 consensus expert opinions on health and wellness topics, X of those opinions are largely correct.”

To make this concrete, our list of expert consensuses could include things like:

•Recommended dosing of creatine supplementation

•Recommended daily allowances of various micronutrients

•Recommendations on total sleep

Then, if we happen to have additional information about a topic, or some reason to doubt the expert consensus, we can engage in Bayesian updating.
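As a minimal sketch of what that updating step looks like, here is some toy Python where every number (the 90% prior, the likelihoods attached to a failed replication) is invented purely for illustration:

    # Minimal Bayes-rule sketch: updating trust in a single consensus claim
    # after one new piece of evidence. All numbers are invented for illustration.

    def bayes_update(prior, p_evidence_if_correct, p_evidence_if_incorrect):
        """Return P(consensus is correct | evidence) via Bayes' rule."""
        numerator = p_evidence_if_correct * prior
        denominator = numerator + p_evidence_if_incorrect * (1 - prior)
        return numerator / denominator

    # Prior: start by trusting, say, 90 of 100 consensus opinions (X = 90%).
    prior = 0.90

    # Evidence: suppose a well-run study fails to replicate the recommendation.
    # Assume that outcome is unlikely if the consensus is correct (10%) and
    # more likely if it is not (60%).
    posterior = bayes_update(prior, p_evidence_if_correct=0.10, p_evidence_if_incorrect=0.60)

    print(f"Trust in this particular consensus after updating: {posterior:.2f}")  # ~0.60

The point is that the evidence moves our belief about that one recommendation, while the prior for the other 99 consensus opinions stays where it was.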

From the link:

Situation # 1:

Given: The median height of an average American Male is 5’10”.  (I don’t know if this is accurate; that’s not the point.)  You are on a business trip and are scheduled to spend the night at a nice hotel downtown.

Wanted:  Estimate the probability that the first male guest you see in the hotel lobby is over 5’10”.

Solution:  50%  (Well, that’s certainly self-evident.)

Situation # 2:

On your way to the hotel you discover that the National Basketball Player’s Association is having a convention in town and the official hotel is the one where you are to stay, and furthermore, they have reserved all the rooms but yours.

Wanted:  Now, estimate the probability that the first male guest you see in the hotel lobby is over 5’10”.

Solution:  More than 50%   Maybe even much more, and that’s obvious too.
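To put rough numbers on that hotel example (every figure below is invented, in the same spirit as the quoted estimates):

    # Rough numbers for the hotel example; every figure is invented for illustration.
    # Situation 1: any American male, so P(over 5'10") = 0.5 straight from the median.
    # Situation 2: the lobby is now mostly NBA players, who are almost all over 5'10".

    p_tall_given_nba = 0.99       # assumed: nearly every NBA player is over 5'10"
    p_tall_given_other = 0.50     # baseline median split for everyone else
    p_guest_is_nba = 0.95         # assumed: they reserved every room but yours

    p_tall = (p_tall_given_nba * p_guest_is_nba
              + p_tall_given_other * (1 - p_guest_is_nba))

    print(f"P(first male guest is over 5'10\") = {p_tall:.2f}")  # ~0.97

Same question, new information about who is standing in the lobby, and a very different answer.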

I have no issues with updating off of expert consensus based upon new or more nuanced information.

However, I’m concerned that a lot of the information CrossFit is putting out is not intended merely as evidence for updating a Bayesian prior on a specific question, but as an attempt to downgrade our trust in expert consensus as a whole.

Meaning that – rather than clarifying the nuance on a specific issue – it’s meant to change our belief about whether we can trust expert consensus in general.

My 17-year-old punk self probably can’t believe I’m saying this, but I actually think that most people don’t trust experts enough.

Or, they potentially miscategorize experts and put someone like Dr. Oz on the same footing as lipidologists who have been studying the metabolism of cholesterol for decades.

Here’s to a return to true expertise!