Science isn't lab coats and PhDs. It's the willingness to test your assumptions. Here's how animal activists can use that mindset to build a stronger movement
Thanks for this great post, Zachary.
I believe the scientific approach enables us to *discover* what works, rather than pretending to have all the answers and *planning* around what we would like to work.
But how does this square with measurement bias? You may end up testing only what can be measured, which is its own problem, because you never reach for more ambitious projects.
I could not agree more.
Excellent article once again, Zachary. It's funny how I always look back at the history of medicine thinking, "how did it never occur to them to look for counterexamples?", and yet I have been following that same pattern in my own involvement with PBU and in the work of other activists around me.
I think part of this, at least for me, is that sometimes I almost don't want to check. It can be very discouraging to run all these 1-to-1 interventions and _feel_ like they went well, only to look at the numbers and realize they had no impact. But of course, we _must_ check.
Another thought is that science is two things: the discipline, as you laid out, which describes a method of finding truth; but also a community, through both peer review and replication. I think we would do well to recreate some of that, too. (You did allude to this in the post, of course.) Activists should post the results of their "experiments", partly so that others may use that information, but also so that others may find flaws in how it was tested, and so that others may repeat the same test and report _their_ results.
(PSA: If anyone ever wants to implement some of this, I'm willing to help with the numbers! I'm finishing an undergrad in EE and will soon be studying Quantum Engineering, so I'm comfortable with statistics.)
Compelling post. You make me want to try being an animal advocate 😅
I totally agree with you that gathering data and using it to assess what works is useful and crucial, and that failing to do so leads to the "reasoning" you describe about historical allopathy, which you can still find everywhere in pop health content. However, I would suggest you not get hung up on the hypothesis-testing notion of science that is taught in grade school. A large portion of scientific inquiry (most of it, I would say, though I could not tell you exactly how one should measure that) is about measurement, not the dichotomous hypothesis testing you suggest. The example you give with the 10% should be rethought as a measurement of how many conversions you get, not a yes-or-no answer to whether you hit 10%. You want to know whether you got 4%, or something so close to 0% that it is basically nothing, not merely that each of those fails to meet the 10% goal.

Additionally, while you describe opportunities to do controlled experiments, you are mostly talking about observational data, sometimes even when you describe it as an experiment. Sure, when you are ready to drill down and choose "the three key tactics" or whatever, you might want to repeat an intervention with and without a particular tactic and see what happens. But until then, you should be happy to assess outcomes like "when we do this thing in the way we think is best, we get about 4% success". That is an observational study. Most of what you describe as experimental is actually observational.
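To make the distinction concrete, here is a minimal sketch in Python (the 12-out-of-300 counts are purely hypothetical, and the scipy-based approach is just one way to do this; I'm only illustrating the two framings):

```python
from scipy.stats import binomtest

# Hypothetical outreach numbers (not from the post): 12 conversions out of 300 conversations.
conversions, total = 12, 300

# Dichotomous framing: "did we fall short of the 10% goal?" -- yields only a yes/no signal.
result = binomtest(conversions, total, p=0.10, alternative="less")
print(f"p-value against the 10% goal: {result.pvalue:.4f}")

# Measurement framing: "what conversion rate are we actually getting, and how precisely?"
rate = conversions / total
ci = result.proportion_ci(confidence_level=0.95)
print(f"estimated rate: {rate:.1%} (95% CI: {ci.low:.1%} to {ci.high:.1%})")
```

The test only tells you that you missed an arbitrary goal; the estimate and its interval tell you what you actually achieved and how precisely you know it, which is what lets you compare tactics over time.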
Most of epidemiology, which is the science most people in this space are most familiar with, is measurement based on observational data. Done well and honestly, it is quite capable of providing useful measures and clear evidence of effects. In many cases, those methods are the only source of data available.