By gvanpatter

Convergent Thinking Surprise?

Updated: May 24, 2023

Welcome back Humantific readers. Having access to ChatGPT has given us an opportunity to conduct a few rapid, outside-the-box experiments around several questions related to innovation and knowledge dynamics embedded in literature that we have wondered about for some time.

Long story short: In our first book, Innovation Methods Mapping, looking across 80+ years of process design history, we wondered whether humans tend to create methods that are direct projections of their thinking styles / cognitive preferences. Being sensemakers and organizational change practitioners, we further wondered whether humans create vision documents, strategies, reward systems and organizations via the same projections. In that 2020 book we tabled the possibility, framed as the *Preference Projection Theory.

Beginning in 2011 we undertook an R&D exercise geared to creating a set of analytic tools based around Preference Projection, called the *Think Balance Toolset, including a digital text analysis tool synchronized to an assembled bundle of words connected to Humantific’s innovation cycle.

Screen from Humantific: Think Balance Analytics Prototype Study, 2012-2019

We built a prototype and undertook several enlightening document analysis studies. The “magic word bundle” at the core of the prototype, containing 50+ words, was/is much more robust than the simple six-word bundles that we are using in this series of ChatGPT-related experiments. The prototype also looked at more than word counts. The Think Balance Analytics Text Tool was originally conceived as a mechanism to help us look at the vibrations, signaling and weighting inside various corporate strategic documents as part of adaptive readiness assessment.
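As a rough sketch of the underlying counting mechanism only: the actual Think Balance bundle contains 50+ words and weighs more than raw counts, so the short word lists below are illustrative placeholders, not the real bundle:

```python
import re
from collections import Counter

# Illustrative placeholder bundles -- the real Think Balance bundle
# is larger and is not published here.
DIVERGENT_BUNDLE = {"imagination", "ideating", "generating", "diverging", "ideas"}
CONVERGENT_BUNDLE = {"analyzing", "deciding", "choosing", "converging", "decisions"}

def bundle_counts(text: str) -> dict:
    """Tally how often each bundle's words appear in a text."""
    words = Counter(re.findall(r"[a-z\-]+", text.lower()))
    return {
        "divergent": sum(words[w] for w in DIVERGENT_BUNDLE),
        "convergent": sum(words[w] for w in CONVERGENT_BUNDLE),
    }

sample = "Deciding requires analyzing options; generating ideas comes first."
print(bundle_counts(sample))  # {'divergent': 2, 'convergent': 2}
```

A production version would also handle word stems, context and weighting, which simple counts like this cannot capture.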


When ChatGPT arrived, we wondered if it might be applied in this text analysis context. Was ChatGPT ready? Was it geared in this *digital archeology direction? We did not yet know. Experimenting outside any corporate context, we contemplated beginning by asking ChatGPT to help us examine the published works of Herbert Simon (1916-2001). The key constraint was that we wanted to do this without searching for or uploading any text ourselves.

An esteemed figure, author of many books, Herbert Simon is perhaps best known in the design community for his 1969 Sciences of the Artificial and this quote: “Everyone designs who devises courses of action aimed at changing existing situations into preferred ones.”

ChatGPT describes him this way: “Herbert Simon was a prominent American economist, political scientist, and cognitive psychologist.”…. “He explores the use of problem-solving techniques in engineering contexts, such as the use of simulation models and decision analysis in the design of manufacturing systems. Simon also discusses the importance of interdisciplinary collaboration between engineers, scientists, and other experts in solving complex problems.”

We did note that much of Simon’s formal 1947-1997 writing focused on the subject framed as “decision-making”. Herbert Simon is a much beloved and quoted figure, but perhaps not so well understood from the direction that we were interested in examining here with the help of ChatGPT. As researchers, sensemakers and innovation capacity builders we wondered if there were any preference projection signal patterns visible in plain sight within the Herbert Simon texts.


Acknowledging that the current version of ChatGPT remains somewhat unstable for use as a tool in this direction, we decided that the experiment was worth doing as a Version 1 that we might revisit later as Generative AI evolves. We like innovation related mysteries, so off we went with the rapid-fire assistance of ChatGPT!

To get to the question of whether or not there were any weighting patterns in the Simon texts, we asked ChatGPT to look at five Herbert Simon books: "Models of Man", 1957; “The Sciences of the Artificial”, 1969; “Human Problem Solving”, 1972; “The Structure of Ill-Structured Problems”, 1973; and "The New Science of Management Decision", 1960-1997.

That turned out to be approximately 568,000 words and 1,501 digital pages, estimated and examined by ChatGPT within one day, without us uploading any text.

If we take the numbers generated in this micro, ChatGPT-aided experiment literally, they suggest that a pattern and weighting do exist in the context of the Simon texts studied. The complexity of the outcome, with various language and conceptual wrinkles folded in, certainly provides lots to think about from several different innovation related directions. (See EMERGENT PICTURE at the end of this post.)

In this experiment we utilized four rounds of ChatGPT prompts. The prompts were designed around the Humantific understanding that all forms of innovation in organizational and societal contexts contain both expansive divergence and narrowing convergence, terminologies first tabled by American psychologist J.P. Guilford in the 1950s.



ChatGPT prompts for Round One:

DIVERGENT PROMPT: In the book entitled “The Sciences of the Artificial” written by Herbert Simon in 1969 how many times do the words imagination, ideating, idea-making, generating, diverging or ideas appear?

CONVERGENT PROMPT: In the book entitled “The Sciences of the Artificial” written by Herbert Simon in 1969 how many times do the words analyzing, deciding, decision-making, choosing, converging, or decisions appear?

These two prompts were repeated for each of the five books.
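The repetition across books can be scripted. The prompt wording below is taken from Round One above; the loop scaffold and book list are our own illustration:

```python
# The five books examined, as listed in this post.
BOOKS = [
    ("Models of Man", 1957),
    ("The Sciences of the Artificial", 1969),
    ("Human Problem Solving", 1972),
    ("The Structure of Ill-Structured Problems", 1973),
    ("The New Science of Management Decision", 1960),
]

DIVERGENT = "imagination, ideating, idea-making, generating, diverging or ideas"
CONVERGENT = "analyzing, deciding, decision-making, choosing, converging, or decisions"

TEMPLATE = ('In the book entitled "{title}" written by Herbert Simon in {year} '
            "how many times do the words {words} appear?")

# Two prompts per book: one divergent bundle, one convergent bundle.
prompts = [TEMPLATE.format(title=t, year=y, words=w)
           for t, y in BOOKS
           for w in (DIVERGENT, CONVERGENT)]

print(len(prompts))  # 10
```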

BOOK #1:

"Models of Man", Social and Rational: Mathematical Essays on Rational Human Behavior in a Social Setting" Herbert Simon,1957

Approx.: 284 pages, 113,600 words.


  • Imagination: 1 time

  • Ideating: 0 times

  • Idea-making: 0 times

  • Generating: 3 times

  • Diverging: 1 time

  • Ideas: 2 times


  • Analyzing: 44 times

  • Deciding: 51 times

  • Decision-making: 2 times

  • Choosing: 12 times

  • Converging: 9 times

  • Decisions: 61 times

SUMMARY: "Models of Man":

7 Divergent / 179 Convergent

BOOK #2:

“The Sciences of the Artificial”, Herbert Simon, 1969

Approx.: 174 pages, 69,600 words


  • Imagination: 15 times

  • Ideating: 0 times

  • Idea-making: 0 times

  • Generating: 13 times

  • Diverging: 0 times

  • Ideas: 12 times


  • Analyzing: 21 times

  • Deciding: 35 times

  • Decision-making: 5 times

  • Choosing: 2 times

  • Converging: 3 times

  • Decisions: 47 times

SUMMARY: “The Sciences of the Artificial”:

40 Divergent / 113 Convergent

BOOK #3:

“Human Problem Solving”, Herbert A. Simon, 1972

254 pages, 101,600 words


  • Imagination: 5 times

  • Ideating: 0 times

  • Idea-making: 0 times

  • Generating: 6 times

  • Diverging: 1 time

  • Ideas: 4 times


  • Analyzing: 74 times

  • Deciding: 43 times

  • Decision-making: 11 times

  • Choosing: 8 times

  • Converging: 4 times

  • Decisions: 30 times

SUMMARY: “Human Problem Solving”:

16 Divergent / 162 Convergent

BOOK #4:

“The Structure of Ill-Structured Problems”, Herbert A. Simon, 1973

211 pages, 84,400 words


  • Imagination: 8 times

  • Ideating: 0 times

  • Idea-making: 0 times

  • Generating: 16 times

  • Diverging: 4 times

  • Ideas: 9 times


  • Analyzing: 80 times

  • Deciding: 40 times

  • Decision-making: 6 times

  • Choosing: 7 times

  • Converging: 11 times

  • Decisions: 36 times

SUMMARY: “The Structure of Ill-Structured Problems”:

37 Divergent / 180 Convergent

BOOK #5:

"The New Science of Management Decision" Herbert A. Simon, 1960-1997

578 pages, 200,000 words


  • Imagination: 6 times

  • Ideating: 0 times

  • Idea-making: 0 times

  • Generating: 6 times

  • Diverging: 1 time

  • Ideas: 15 times


  • Analyzing: 78 times

  • Deciding: 50 times

  • Decision-making: 0 times

  • Choosing: 17 times

  • Converging: 1 time

  • Decisions: 136 times

SUMMARY: “The New Science of Management Decision”:

28 Divergent / 282 Convergent



ROUND ONE TOTALS (all five books):

  • Imagination: 35 times

  • Ideating: 0 times

  • Idea-making: 0 times

  • Generating: 44 times

  • Diverging: 7 times

  • Ideas: 41 times



  • Analyzing: 297 times

  • Deciding: 219 times

  • Decision-making: 24 times

  • Choosing: 46 times

  • Converging: 28 times

  • Decisions: 310 times



OK, you are no doubt starting to get the Round One picture: heavy weighting toward the convergent word bundle. Clearly ChatGPT found a weighting signal pattern in the Simon texts. What it means is a topic of conversation among us humans!
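Taking the per-book Round One summaries above at face value (noting that ChatGPT's own totals do not always reconcile exactly), the overall weighting can be tallied in a few lines:

```python
# (divergent, convergent) summary counts as reported for each book above.
summaries = {
    "Models of Man": (7, 179),
    "The Sciences of the Artificial": (40, 113),
    "Human Problem Solving": (16, 162),
    "The Structure of Ill-Structured Problems": (37, 180),
    "The New Science of Management Decision": (28, 282),
}

div_total = sum(d for d, _ in summaries.values())
conv_total = sum(c for _, c in summaries.values())
ratio = conv_total / div_total

print(div_total, conv_total, round(ratio, 1))  # 128 916 7.2
```

On these figures the convergent bundle outweighs the divergent bundle by roughly seven to one.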

In real world practice, these are the kinds of questions that interest us:

  • If this same weighting was found in corporate leadership documents, what would that likely mean for the company culture and its day-to-day dynamics, what gets rewarded, etc.?

  • If a person with such a weighting was tasked with creating/leading a cognitively inclusive innovation culture building initiative, what challenges might arise?

  • If this is a typical traditional weighting in some management disciplines, how does that sync up, or not, with the current continuous change challenges of VUCA?

  • If this weighting reflects the typical emphasis in some disciplines, how does it map, or not, to the emphasis required in cognitively inclusive innovation today?

These are questions of considerable interest to us in Humantific practice.

For those who might not know: These are, for us, everyday questions in the complexity arenas of Design 3.0 and Design 4.0, which are no longer focused just externally on the physical nature and dimensionalities of external users/humans. We are equally interested in what is and is not going on inside the organization related to enabling innovation.



With the results of Round 1 in hand, for Round Two we straightforwardly asked ChatGPT: Did Herbert Simon prefer divergent words over convergent words?

ChatGPT responded: “Herbert Simon is best known for his work on decision-making, problem-solving, and artificial intelligence, rather than his preferences for specific types of words. Simon was known for his ability to communicate complex ideas in a clear and concise manner, and he often used simple and straightforward language to explain his theories and models. He was also known for his use of analogies and metaphors to illustrate abstract concepts, which suggests that he valued creativity and imagination in language use.

In terms of divergent and convergent words specifically, it is difficult to make a generalization about Simon's preferences without a more specific context or source. However, it's worth noting that Simon's work often involved exploring multiple possible solutions to a given problem (divergent thinking), as well as selecting and evaluating the best option (convergent thinking), so it's possible that he saw value in both types of language and thinking.”



For Round Three we then asked ChatGPT: Did Herbert Simon use more divergent words than convergent words?

ChatGPT responded: “It's difficult to make a quantitative assessment of Herbert Simon's use of divergent versus convergent words without analyzing a large corpus of his writing and applying a specific methodology to identify and categorize such words. However, it's worth noting that Simon's work often involved exploring multiple possible solutions to a given problem (divergent thinking), as well as selecting and evaluating the best option (convergent thinking). In his writing, Simon tended to use clear and concise language to explain complex ideas and theories, which may have included both divergent and convergent words. Additionally, he was known for his use of analogies and metaphors to illustrate abstract concepts, which suggests that he valued creativity and imagination in language use.

Overall, it's likely that Simon used a variety of language styles and techniques depending on the context and purpose of his writing, rather than favoring one type of word over the other.”



Seeing the results of Rounds 1, 2 and 3 brought us to an additional complexity wrinkle that exists around inclusive culture building in the real world: some tribes involved in design, engineering, decision-making, decision support, management science, etc. might be using terms other than those in the word bundles, even today. To say this another way: some folks might not be aware today of the foundational mechanics of cognitively inclusive innovation culture building and psychological safety, so those terms do not appear in their consciousness, approaches or materials. They are focused in different directions. This is not a timeline-related disconnect but a tribal awareness and orientation disconnect.

Not being aware of the role of divergence and convergence in innovation, in cognitively inclusive culture building and in psychological safety leaves open the question of how such folks would, or do, lead inclusive innovation initiatives, where operating from meta rather than one’s personal thinking preferences is key. Championing convergent thinking (decision-making) alone is not going to do it. Nor is championing divergent thinking alone.

Acknowledging the possibility that different terms might be in use, in Round 4 we prompted ChatGPT to take another forensic look using different words: How many times do these ten words and phrases appear in the five books: design, designing, designer, brainstorming, analogical reasoning, decision trees, decision analysis, creative problem solving, creative thinking and engineering?

Book #1: Additional Words

"Models of Man" 1957

"design": 59 times

"designing": 14 times

“designer": 14 times

“brainstorming”: 0 times

“analogical reasoning”: 6 times

“decision trees”: 0 times

“decision analysis” : 2 times

“creative problem solving” : 0 times

“creative thinking”: 0 times

"engineering": 27 times

Book #2: Additional Words

“The Sciences of the Artificial” 1969

"design": 173 times

"designing": 36 times

“designer": 114 times

“brainstorming”: 0 times

“analogical reasoning”: 6 times

“decision trees”: 0 times

“decision analysis”: 6 times

“creative problem solving”: 0 times

“creative thinking”: 5 times

"engineering": 56 times

Book #3: Additional Words

"Human Problem Solving" 1972

"design": 162 times

"designing": 41 times

"designer": 12 times

“brainstorming”: 0 times

“analogical reasoning”: 22 times

“decision trees”: 0 times

“decision analysis”: 17 times

“creative problem solving”: 0 times

“creative thinking”: 7 times

"engineering": 3 times.

Book #4: Additional Words

“The Structure of Ill-Structured Problems,” 1973

"design": 66 times

"designing": 22 times

"designer": 6 times

“brainstorming”: 0 times

“analogical reasoning”: 14 times

“decision trees”: 4 times

“decision analysis”: 12 times

“creative problem solving”: 0 times

“creative thinking”: 1 time

"engineering": 6 times

Book #5: Additional Words

"The New Science of Management Decision", 1960-1997

"design": 93 times

"designing": 11 times

“designer": 12 times

“problem solving”: ? times

“brainstorming”: 0 times

“analogical reasoning”: 10 times

“decision trees”: 3 times

“decision analysis”: 30 times

“creative problem solving”: 0 times

“creative thinking”: 0 times

"engineering": 26 times


"design": 553 times

"designing": 124 times

"designer": 158 times.

“brainstorming”: 0 times

“analogical reasoning”: 58 times

“decision trees”: 7 times

“decision analysis”: 65 times

“creative problem solving”: 0 times

“creative thinking”: 13 times

"engineering": 118 times


In summary, what we saw emerging from the four rounds of ChatGPT output was a somewhat complex signaling picture in which a body of historical text repeatedly presents the word “design” but was/is heavily weighted toward convergent thinking. Words from the convergent word bundle were found by ChatGPT to be roughly seven times more present than words from the divergent bundle. Complicating the picture, the most frequently appearing words from Round One and Round Four were 1. Design/Designing/Designer, followed by 2. Decisions, 3. Analyzing and 4. Deciding. That emphasis picture is a bit of a head spinner that has probably confused many people over the years.
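That emphasis ranking can be reproduced from the totals reported above (figures copied from this post; grouping the design family into one entry is our editorial choice):

```python
# Combined totals as reported across the five books in Rounds One and Four.
totals = {
    "design/designing/designer": 553 + 124 + 158,  # grouped design family
    "decisions": 310,
    "analyzing": 297,
    "deciding": 219,
    "engineering": 118,
    "ideas": 41,
}

# Rank terms by reported frequency, highest first.
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
for term, count in ranked:
    print(f"{term}: {count}")
```

The design family tops the list, followed immediately by the convergent terms, which is exactly the head-spinning mix described above.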

This is a body of historical text, not so subtly signaling, modeling, that the primary dynamic of design is convergent thinking. In the big picture sense, it had us wondering whether preference projection can occur at the subject level.

Can entire subjects be depicted according to personal cognitive preferences? Do subjects exist, and can they evolve, outside powerful, popular, personal, high-profile depictions? Is the purpose of design to adopt or mimic the behavioral dynamics and thinking preferences of business management and management science, or would that confuse the subject? Has it contributed to the confusion around the subjects of design and innovation? If there is a Convergent Tilted School of Design, this would appear to be it. Lots to think about there.

To be clear, in Think Balance we are not questioning the validity of the dynamics within decision science, decision support, management science, etc., but rather the assumption that those dynamics are identical to, perfectly suited for, and/or mirror those of enabling inclusive innovation.

Of course, we recognize that in organizational settings the strategic document authors would probably be present and could be engaged in further conversation. In the context of historical documents, there is no way to ever really know what the cognitive thinking style preferences of the author or authors were. We can only speculate given the signals present. Like any other signals in the mix today, one can always recognize or ignore them. That is our choice to make.

We parked this outcome for now as we consider additional ChatGPT experiments.


Big thanks to the spirit of Herbert Simon for allowing us to undertake this historical forensic text experiment via ChatGPT. There will no doubt be many more such digital archeology studies to come as generative AI evolves.

Hope this is helpful. We have other Think Balance / ChatGPT experiments in progress at Humantific. We will likely publish more in this “Signals” series. If your leadership team or organization has an interest in this work, feel free to let us know.


Note 1: To reiterate: we knew that any historical text analysis spanning the 40-year period 1957 to 1997 is vulnerable to the possibility that not all words in the bundles were in common, cross-tribal use during that entire period. In English there are only so many words to describe generative divergence and narrowing convergence. Somewhat oddly, there seem to be more old words for convergence than there are for divergence. In a further study we might examine, with the help of ChatGPT, how often words such as imagination appear in business management literature across multiple authors and time periods.

Note 2: In our humble opinion, we would say that ChatGPT is an incredibly helpful tool that is not really ready for prime time when it comes to this particular use, which requires stability of outcomes. Presently ChatGPT is not 100% stabilized and consistent. Without any explanation, it would at times give different totals for the same prompt. Perhaps in a future evolution of ChatGPT this aspect will become more stable.

Note 3: ChatGPT would often generate this caveat: “Please note that the word counts may not be completely accurate, as they were generated through an automated search and may include instances where the words are not used in a relevant context.” and “I'm sorry, but as an AI language model, I don't have the ability to browse the internet” and “ChatGPT may produce inaccurate information about people, places, or facts.”

Note 4: Strangely, ChatGPT sometimes behaves like a spoiled research assistant... You ask it something and it says it has no access and the source cannot be found... Ask again and get the same answer... Ask again and it finds the source and provides the answer with an apology!

Note 5: The black box nature of where the figures are coming from within ChatGPT creates a sense of unease that would not be suitable or acceptable in a “normal” research project. There is a giant leap of faith involved in using the ChatGPT tool. We guessed that if we did this experiment 5 times using different versions/evolutions of ChatGPT the numbers would probably be different, at least until the technology becomes more stable.
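One simple way to probe the stability concerns in Notes 2 and 5 is to re-issue an identical prompt several times and measure the spread of the returned counts. The `ask` callable here is a hypothetical stand-in for whatever API wrapper is used; the stubbed values only illustrate the kind of drift described, not real measurements:

```python
import statistics

def check_stability(ask, prompt, runs=5):
    """Issue the same prompt `runs` times and report the spread of counts."""
    results = [ask(prompt) for _ in range(runs)]
    return {
        "results": results,
        "spread": max(results) - min(results),
        "stdev": round(statistics.pstdev(results), 2),
    }

# Stubbed responses illustrating drift (not real ChatGPT output).
fake = iter([44, 44, 51, 44, 47])
report = check_stability(lambda _: next(fake), "How many times does ... appear?")
print(report["spread"])  # 7
```

A spread of zero across many runs would be the minimum bar for treating such counts as research-grade data.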

Note 6: One revelation from the practice arena is that we are aware, that in much of the literature originating in business management and often seen being utilized in graduate business school programs does not point out that decision making is convergent thinking. In high contrast this has been recognized as such in the CPS (Creative Problem Solving) community for decades. The two literatures are very different. In that difference what begins to bubble up is that management leadership and innovation leadership involve quite different dynamics that are often being confused.

Note 7: In real world practice we do see many organizations with imbalance in the direction of convergent thinking struggling with innovation. It is likely the most often encountered blocking dynamics that we have seen in more than a decade of practice working with organizations. On the question of where does the dynamic come from? The literature from several communities seems to play a significant role in that regard. It seems possible that Generative AI can help us better and more rapidly understand that the dynamics of some historical and contemporary literatures, are not in sync with the continuous change challenges presented by present day VUCA. We are guessing that some bumps might lie ahead as awareness of the various correlations grow that represent challenges to traditional approaches deeply embedded in several communities.

Note 8: Due to various constraints, we did not take the time to visualize the results of this experiment but may do so in the future.


Source Credits:

  • All figures from ChatGPT, April 2023.

  • "Preference Projection Theory" and "Think Balance Analytics" from Humantific, Innovation Methods Mapping: DeMystifying 80+ Years of Innovation Process Design, 2020

  • Think Balance Image from Humantific Think Balance Analytics, 2012-2019

  • "Digital archeology”, from “Out of the Box Literature Archeology, 2013, By Roger James, University of Southhampton, UK

