Back to Step #1, I see
1 - Present a biased and only fractionally true problem.
2 - Sit back and wait for others to come up with an Integral perspective. No reason for you to do any actual heavy lifting.
3 - React to others’ perspectives with straw-manning, deflection, and name-calling.
4 - Never actually try to present your own ideas from an integral perspective. Wait for others to try.
5 - When someone tries to pin down the actual facts of the topic, go back to #1 with another issue - preferably in another thread.
6 - After several days, return to the original thread with another provocative topic (return to step #1).
From the Article:
Analysis conducted by our team demonstrates this money significantly increased Joe Biden’s vote margin in key swing states.
The research methods showed correlation, not causation, regardless of what the article falsely claims.
Data point 1 - 2016 voter turnout
Data point 2 - 2016 Hillary vote share
Data point 3 - county share of state population
Data point 4 - longitude and latitude
Data point 5 - per capita CTCL and CEIR spending
Data points 3 and 4 are clearly irrelevant. What do longitude, latitude, and a county’s share of state population have to do with anything? So we don’t even have 5 useful data sets - we are down to 3. And then we have the fact that people did not turn out to support Hillary in 2016. There is no data set measuring Hillary’s unpopularity (the real reason for the low 2016 numbers), and there is no data set for Trump’s unpopularity in 2020 (the most likely reason for the increased votes for Biden).
So they set up the experiment to produce a predetermined conclusion, and they are (incorrectly) using a fancy-sounding model to make it sound good - while programming in confirmation bias by feeding it only 3 relevant data sets.
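The omitted-variable problem described above can be sketched numerically. This is a toy illustration with invented numbers, not data from the study: suppose a hidden factor (a county’s underlying partisan lean / candidate popularity - exactly the kind of variable the analysis left out) drives both per-capita spending and vote margin. A regression that omits it credits spending with an effect it doesn’t have; controlling for it drives the spending coefficient toward zero.

```python
import random

random.seed(42)
n = 10_000

# Hypothetical confounder: each county's partisan lean / candidate
# popularity -- the variable the analysis never measured.
lean = [random.gauss(0, 1) for _ in range(n)]

# Per-capita spending correlates with lean (grants went where organizers
# expected engagement), but has NO direct effect on margin in this setup.
spending = [z + random.gauss(0, 1) for z in lean]

# Vote margin is driven entirely by lean, plus noise.
margin = [2.0 * z + random.gauss(0, 0.5) for z in lean]

def slope(x, y):
    """Simple-regression slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def partial_slope(x, z, y):
    """Coefficient on x when y is regressed on both x and z
    (normal equations solved by Cramer's rule)."""
    mx, mz, my = sum(x) / len(x), sum(z) / len(z), sum(y) / len(y)
    xd = [a - mx for a in x]
    zd = [a - mz for a in z]
    yd = [a - my for a in y]
    sxx = sum(a * a for a in xd)
    szz = sum(a * a for a in zd)
    sxz = sum(a * b for a, b in zip(xd, zd))
    sxy = sum(a * b for a, b in zip(xd, yd))
    szy = sum(a * b for a, b in zip(zd, yd))
    det = sxx * szz - sxz * sxz
    return (sxy * szz - szy * sxz) / det

naive = slope(spending, margin)                    # omits the confounder
adjusted = partial_slope(spending, lean, margin)   # controls for it
print(f"naive spending effect:    {naive:.2f}")    # close to 1.0
print(f"adjusted spending effect: {adjusted:.2f}") # close to 0.0
```

No algorithm - BART included - can correct for a confounder it is never shown; the "causal" label only holds if every relevant variable is on the list.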
From the Article:
BART is a machine-learning algorithm that is considered a gold standard in making causal inferences.
It enables us to avoid mistaking correlation for causation in our estimations.
False. No it does not.
The people who created this analytical model don’t even use such language in the fields where it was intended to be applied - much less for something as complicated as analyzing the complex behavior of a whole society.
They entered only 5 data sets, which is absurd - and only 3 of those are clearly relevant. An AI can’t “learn” anything from just 3 data sets, except what you want it to learn by choosing those 3 specific data sets. It can only draw conclusions from within those data sets. It would only be legitimate “learning” if thousands of data sets were included.
If I tell you only 3 things and ask you to formulate a conclusion, that answer will be limited to those 3 variables. If I then tell you two additional things (like the fact that both Hillary in 2016 and Trump in 2020 were very unpopular), your conclusion will shift dramatically.
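The "two more variables flip the conclusion" point has a classic textbook form: Simpson’s paradox. In this sketch (all groups and numbers made up for illustration, nothing from the article), pooling counties shows spending and margin rising together, yet once a hidden grouping variable is revealed, the relationship inside every group runs the other way.

```python
# Toy counties in two hidden groups -- all numbers invented.
# Within each group, more spending goes with a LOWER margin;
# pooled together, the sign flips (Simpson's paradox).
groups = {
    "urban": [(8, 20), (9, 19)],   # (spending, margin) pairs
    "rural": [(1, 10), (2, 9)],
}

def slope(pairs):
    """Simple-regression slope of margin on spending."""
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

within = {name: slope(pairs) for name, pairs in groups.items()}
pooled = slope([p for pairs in groups.values() for p in pairs])

print(within)  # both within-group slopes are -1.0
print(pooled)  # pooled slope is 1.38 -- the opposite sign
```

An analysis that never sees the grouping variable reports the pooled slope and gets the story backwards - which is exactly why leaving out candidate popularity matters.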
Fun fact - I took a University course on similar methods 30 years ago.
The model becomes more accurate with more data sets and less accurate with fewer, and each additional data set only increases the likelihood of a sound conclusion (never the certainty). Fewer than 20 data sets would have gotten me a complete F.