Thanks, Corey, for those additions!
I think my main problem with talking about Integral’s “mission-problem” as a software problem, with software being a metaphor and not literal, is the use of that language itself. I feel like it really dehumanizes the interpersonal and social problems that need to be addressed for us to move forward, as though we could simply update the programming and everything would get better. It also has some echoes of determinism in it that don’t feel great to me (i.e., software programs only ever operate within the parameters of their programming and cannot, at least currently, and setting aside whether or not an AI with true “free will” could ever be created, evolve themselves out of their programming). TL;DR, I don’t feel much “heart” in software as a term, and I have to go back to Terry Patten’s “A New Republic of the Heart” and his encouragement to include our humanity in our activism.
I can see how the Chinese proverb could be seen as taking a larger view, which is a very Integral thing. Where I personally get a little wary of that proverb, though, is that the hero of our proverb seems unable to experience either joy or pain. He reminds me of the “Grays” from Futurama (#letthenerddingbegin!) who would always respond to any crazy situation with “I have no strong feelings one way or the other.” Netflix’s Don’t Look Up seems to cover the “bypass / avoidance” end of the spectrum we’re seeing (at least in the United States), depicting an over-reliance on “feel good” behavior and messaging while ignoring that felt pain and grief is a very real (and valuable) human experience. So, I think the larger view of the proverb is in fact Integral (with an emphasis on Wholeness, and the recognition that the individual at their personal holon level probably won’t ever see that Wholeness in its entirety), while the behavior it wants to teach through its lesson is maybe more flatland postmodern than I care for. Sure, it’s good not to become overly invested in circumstances or a specific outcome, but we also don’t want to ignore circumstances entirely or stop seeking specific outcomes, especially at the expense of felt emotion, which is an incredibly valuable aspect of being human. I also think we don’t want to lose sight of working toward specific goals in Integralism while we try to “boil the ocean” with our more comprehensive worldview.
Now, turning to the larger topic here because others have asked: if I were to start analyzing Integralism with the aim of determining our mission parameters and then working to accomplish them, here’s how I would start (and I by no means think this is the only way; it’s just the way my prior IT-project-manager brain knows how to get large, complex tasks done).
- Determine our core values
- Determine the goals that are related to the core values (i.e., where do we NOT see those values being expressed en masse in the world, and then set a goal to bridge the gap)
- Get all the goals up on a whiteboard
- Start to build out dependencies for the goals. Which goals are dependent on others?
- Estimate how long each of those goals will take to complete
- Map out which goals can run in parallel
- Run a “critical-path” analysis on the whole chain of parallelized, dependency-aligned goals to determine which specific goals are critical, and ensure they receive appropriate attention, since they will hold up every subsequent goal (see the sketch after this list)
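To make that last step concrete, here is a minimal Python sketch of what a critical-path pass over a set of dependency-mapped goals could look like. The goal names, durations, and dependencies are invented purely for illustration; the point is just the forward/backward pass that finds the zero-slack goals.

```python
# A minimal critical-path sketch. The goal names, durations (in weeks), and
# dependencies below are hypothetical placeholders, not an actual roadmap.
from collections import defaultdict

goals = {
    "define_core_values":   {"duration": 4,  "depends_on": []},
    "identify_value_gaps":  {"duration": 6,  "depends_on": ["define_core_values"]},
    "draft_goals":          {"duration": 3,  "depends_on": ["identify_value_gaps"]},
    "recruit_stakeholders": {"duration": 8,  "depends_on": ["define_core_values"]},
    "launch_first_project": {"duration": 12, "depends_on": ["draft_goals", "recruit_stakeholders"]},
}

# Forward pass: the earliest a goal can finish is its own duration on top of
# the latest earliest-finish among its dependencies.
earliest = {}
def earliest_finish(name):
    if name not in earliest:
        deps = goals[name]["depends_on"]
        start = max((earliest_finish(d) for d in deps), default=0)
        earliest[name] = start + goals[name]["duration"]
    return earliest[name]

project_length = max(earliest_finish(g) for g in goals)

# Backward pass: the latest a goal can finish without delaying the whole
# effort is driven by the goals that depend on it.
dependents = defaultdict(list)
for g, info in goals.items():
    for d in info["depends_on"]:
        dependents[d].append(g)

latest = {}
def latest_finish(name):
    if name not in latest:
        succ = dependents[name]
        if not succ:
            latest[name] = project_length
        else:
            latest[name] = min(latest_finish(h) - goals[h]["duration"] for h in succ)
    return latest[name]

# Critical goals have zero slack: any delay here delays every downstream goal.
critical = [g for g in goals if earliest_finish(g) == latest_finish(g)]

print(f"Estimated length: {project_length} weeks")
print("Critical goals:", ", ".join(critical))
```

Swap in real goals and estimates and the zero-slack list tells you where attention matters most, because slipping any of those goals slips the whole effort.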
Believe it or not, given my prior comments, this is where software can help. There are a lot of project management tools on the web, such as Trello and Smartsheet, that can help us organize in a way that ensures we’re staying on task and not missing those critical paths. Smartsheet in particular (even though I hate its UI for project management) has built-in critical-path logic based on the dependencies you enter in your waterfall diagram. If I were doing this myself, I would map out all those tasks on Trello, put them into specific lanes to categorize them (most likely based on the stakeholders needed to accomplish them, so I can map any “handoffs” I need to do, which is typically where errors occur), and then prioritize them with their dependencies. What’s also nice about that method is that you can put in the estimated time to completion for each task and let the software tell you when it thinks you’ll be done.
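As a rough illustration of that workflow (not the actual Trello or Smartsheet APIs), here is a small Python sketch that models lanes per stakeholder, flags the handoffs between lanes, and rolls up a projected completion date from the estimated durations and dependencies. All of the task names, lanes, durations, and the kickoff date are hypothetical.

```python
# A rough sketch of the lane/handoff idea, using a simple in-memory model
# rather than any real project-management API. Everything here is made up.
from datetime import date, timedelta

tasks = {
    # name: (lane/stakeholder, duration_days, dependencies)
    "draft_values_statement": ("core_team", 10, []),
    "community_feedback":     ("community", 14, ["draft_values_statement"]),
    "publish_goals":          ("core_team",  7, ["community_feedback"]),
}

# Flag handoffs: any dependency whose lane differs from the task's own lane.
# These are the transitions called out above as the usual source of errors.
for name, (lane, _, deps) in tasks.items():
    for dep in deps:
        dep_lane = tasks[dep][0]
        if dep_lane != lane:
            print(f"Handoff: {dep} ({dep_lane}) -> {name} ({lane})")

# Roll up estimated finish dates from durations and dependencies, roughly
# what a tool like Smartsheet does when it predicts a completion date.
start = date(2022, 3, 1)  # assumed kickoff date
finish = {}
def finish_date(name):
    if name not in finish:
        _, duration, deps = tasks[name]
        begin = max((finish_date(d) for d in deps), default=start)
        finish[name] = begin + timedelta(days=duration)
    return finish[name]

print("Projected completion:", max(finish_date(t) for t in tasks))
```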
And, of course, each goal here can always be broken into sub-tasks and sub-goals, so if we Russian Doll the whole thing, we can break a HUGE effort into bite-sized pieces that will work at the individual holon level.
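If it helps, here is a tiny sketch of that Russian-Doll breakdown as a nested structure flattened into leaf-level, bite-sized tasks. The particular sub-goals are made up just to show the shape of the idea.

```python
# Goals nested inside goals, flattened into the leaf tasks an individual
# (holon) can actually act on. The breakdown itself is purely illustrative.
big_goal = {
    "bridge_value_gaps": {
        "local_chapters": {
            "find_organizers": {},
            "host_first_meetup": {},
        },
        "online_presence": {
            "publish_essays": {},
            "start_discussion_forum": {},
        },
    }
}

def leaf_tasks(tree, path=()):
    """Walk the nested goals and return only the leaves: the bite-sized pieces."""
    tasks = []
    for name, subtree in tree.items():
        if subtree:
            tasks.extend(leaf_tasks(subtree, path + (name,)))
        else:
            tasks.append(" > ".join(path + (name,)))
    return tasks

for task in leaf_tasks(big_goal):
    print(task)
```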
Anyway, I hope this is helpful. Maybe what I’m saying is that Integralism needs a really talented project manager?