Gsd-skill-creator critical-thinking

Critical thinking as philosophical practice. Covers the Socratic method (elenchus), argument identification and reconstruction, premise evaluation, fallacy detection (formal and informal), charitable interpretation (principle of charity), dialectical thinking (thesis-antithesis-synthesis), thought experiment methodology, philosophical writing and reading, intellectual virtues (humility, courage, empathy, honesty), Dewey's reflective thinking, and critical thinking applied to everyday reasoning. Use when analyzing arguments, practicing Socratic questioning, evaluating reasoning, or developing philosophical method.

Install

Source · Clone the upstream repo:
git clone https://github.com/Tibsfox/gsd-skill-creator

Claude Code · Install into ~/.claude/skills/:
T=$(mktemp -d) && git clone --depth=1 https://github.com/Tibsfox/gsd-skill-creator "$T" && mkdir -p ~/.claude/skills && cp -r "$T/examples/skills/philosophy/critical-thinking" ~/.claude/skills/tibsfox-gsd-skill-creator-critical-thinking && rm -rf "$T"

Manifest: examples/skills/philosophy/critical-thinking/SKILL.md

Source content

Critical Thinking

Critical thinking is philosophy's most exportable skill — the capacity to evaluate reasoning, detect errors, reconstruct arguments charitably, and think clearly under uncertainty. It is not a body of knowledge but a practice: something you do, not something you know. This skill covers the methods, virtues, and habits that make philosophical thinking rigorous and transferable to any domain.

Agent affinity: dewey (pedagogy and pragmatism, Sonnet), socrates (chair, Opus)

Concept IDs: philo-socratic-method, philo-thought-experiments, philo-philosophical-questioning, philo-logical-fallacies

The Critical Thinking Toolkit at a Glance

# · Method · Core question · When to use
1 · Socratic method · What do you really mean? · Uncovering hidden assumptions, clarifying concepts
2 · Argument reconstruction · What is the structure of this reasoning? · Analyzing any piece of reasoning
3 · Premise evaluation · Are the starting points true? · Testing the foundations of an argument
4 · Fallacy detection · Where does the reasoning go wrong? · Evaluating persuasive but flawed arguments
5 · Charitable interpretation · What is the strongest version of this? · Before critiquing any position
6 · Dialectical thinking · How do opposing views illuminate each other? · When perspectives conflict
7 · Thought experiments · What would happen if...? · Testing intuitions and principles
8 · Philosophical writing · Can I make this clear and precise? · Communicating arguments
9 · Reading philosophy · What is the author really arguing? · Extracting arguments from texts
10 · Intellectual virtues · Am I thinking honestly? · Self-monitoring throughout all reasoning
11 · Dewey's reflective thinking · How do I move from confusion to resolution? · Problem-solving in any domain
12 · Everyday application · Does this apply right now? · Decisions, media, conversation

1 — The Socratic Method (Elenchus)

Origin. Socrates (469-399 BCE) claimed to know nothing — his wisdom consisted in recognizing his own ignorance. His method, preserved in Plato's dialogues, was not to teach but to question: to draw out the interlocutor's beliefs, test them for consistency, and expose hidden contradictions.

The Five Moves of Elenchus

  1. Elicit a thesis. Ask the interlocutor to state their position clearly. ("What is justice?")
  2. Seek clarification. Press for definitions and distinctions. ("When you say 'fair,' do you mean equal treatment or proportional treatment?")
  3. Examine implications. Draw out the consequences of the thesis. ("If justice means giving everyone the same, then should a doctor give every patient the same treatment regardless of illness?")
  4. Identify contradiction. Show that the implications conflict with the interlocutor's other beliefs. ("You also believe that a good doctor treats each patient according to their specific needs — but that contradicts equal treatment.")
  5. Revise or abandon. The interlocutor modifies the thesis or starts over with a better one.

Worked example — Socratic questioning on "technology is always good":

Thesis: "Technology always improves human life."

Q: What do you mean by "improves"? Material comfort? Happiness? Freedom?
A: All of them. Technology makes life more comfortable, happier, and freer.

Q: Has any technology made someone less free?
A: Well... surveillance technology can reduce freedom.

Q: So technology does not ALWAYS improve freedom?
A: I suppose not always. But the benefits outweigh the costs.

Q: That is a different claim from "always improves." Your revised position is: "Technology usually produces net benefits." Is that right?
A: Yes, that is more accurate.

Q: How would we determine whether the benefits usually outweigh the costs? What evidence would count?

Analysis: Through questioning, the original universal claim ("always") was refined to a weaker but more defensible claim ("usually"), and the need for evidence was surfaced. No information was provided — only questions. The interlocutor did all the intellectual work.

When the Socratic Method Goes Wrong

The Socratic method can become adversarial — a gotcha game aimed at embarrassing rather than enlightening. Socrates himself was accused of this. Genuine Socratic inquiry requires: (a) authentic curiosity, (b) willingness to follow the argument wherever it leads (including against your own position), and (c) respect for the interlocutor as a fellow inquirer, not a target.

2 — Argument Identification and Reconstruction

Core skill. Most reasoning in the wild is not presented as a formal argument. Arguments are embedded in narratives, speeches, articles, and conversations. The first task of critical thinking is to extract the argument from the surrounding material.

The Extraction Process

  1. Find the conclusion. What is the speaker trying to establish? Indicator words: "therefore," "so," "thus," "it follows that," "hence," "consequently."
  2. Find the premises. What reasons are offered? Indicator words: "because," "since," "given that," "for," "as."
  3. Identify unstated premises. What assumptions are needed to make the argument valid?
  4. State the argument in standard form. Premises numbered, conclusion last, one claim per line.

Worked example — Extracting an argument from an editorial:

Editorial passage: "The city should ban single-use plastics. They choke marine wildlife, clog drainage systems, and take centuries to decompose. Other cities have successfully implemented such bans, proving it can be done without economic harm."

Standard form:

  1. Single-use plastics harm marine wildlife. [Stated]
  2. Single-use plastics clog drainage systems. [Stated]
  3. Single-use plastics take centuries to decompose. [Stated]
  4. Other cities have implemented bans without economic harm. [Stated]
  5. [Unstated] Actions that cause significant environmental harm and have viable alternatives should be banned.
  6. Therefore, the city should ban single-use plastics. [Conclusion]

Evaluation: Premises 1-4 are empirical claims that can be checked. Premise 5 is the key unstated assumption — it bridges the gap between "plastics cause harm" and "the city should ban them." A critic might accept 1-4 but reject 5 (arguing that individual freedom outweighs environmental harm, or that alternatives to banning exist).
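The standard form above can also be made explicit as a small data structure, so that stated and unstated premises are visibly distinguished and the bridging assumption is easy to isolate. This is a hypothetical sketch for illustration, not part of the source skill; all class and field names are invented.

```python
from dataclasses import dataclass

# Hedged sketch: an argument in standard form as a data structure.
# The "stated" flag marks reconstructed, unstated assumptions — the
# premises a critic is most likely to target.

@dataclass
class Premise:
    text: str
    stated: bool = True  # False = unstated assumption supplied during reconstruction

@dataclass
class Argument:
    premises: list[Premise]
    conclusion: str

    def unstated(self) -> list[str]:
        """Return the reconstructed assumptions that bridge premises to conclusion."""
        return [p.text for p in self.premises if not p.stated]

plastics = Argument(
    premises=[
        Premise("Single-use plastics harm marine wildlife."),
        Premise("Single-use plastics clog drainage systems."),
        Premise("Single-use plastics take centuries to decompose."),
        Premise("Other cities have implemented bans without economic harm."),
        Premise("Harmful actions with viable alternatives should be banned.",
                stated=False),
    ],
    conclusion="The city should ban single-use plastics.",
)

print(plastics.unstated())  # the bridging premise a critic would attack first
```

Making the unstated premise a first-class object mirrors the evaluation above: a critic can accept every stated premise and still reject the argument by rejecting the bridge.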

Complex Argument Structures

Arguments can be:

  • Serial: P1 supports P2, which supports C. A chain.
  • Convergent: P1 and P2 independently support C. If one fails, the other still provides some support.
  • Linked: P1 and P2 jointly support C. Neither alone is sufficient; both are needed together.
  • Divergent: P1 supports both C1 and C2 (different conclusions).

Worked example — Linked vs. convergent:

Linked: "All humans are mortal. Socrates is human. Therefore Socrates is mortal." Remove either premise and the argument collapses.

Convergent: "You should exercise more because (a) it improves cardiovascular health and (b) it reduces stress." Either reason alone provides some support; together they provide more.

Distinguishing linked from convergent matters for evaluating arguments: if you defeat one premise in a convergent argument, the others still stand. If you defeat one premise in a linked argument, the whole structure falls.
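The linked/convergent distinction can be sketched as a tiny evaluation rule (a hypothetical illustration, assuming a simple all-or-any model of support; the function name is invented):

```python
# Hedged sketch: does the conclusion retain support after premise evaluation?
# Linked arguments need ALL premises; convergent arguments need ANY premise.

def supported(premise_ok: dict[str, bool], structure: str) -> bool:
    """Return whether the conclusion still has some support."""
    if structure == "linked":
        return all(premise_ok.values())   # defeat one premise: whole structure falls
    if structure == "convergent":
        return any(premise_ok.values())   # other premises still provide some support
    raise ValueError(f"unknown structure: {structure}")

# Linked: "All humans are mortal" + "Socrates is human" -> "Socrates is mortal"
linked = {"all_humans_mortal": True, "socrates_human": False}
print(supported(linked, "linked"))        # False

# Convergent: exercise improves cardiovascular health (a) and reduces stress (b)
convergent = {"cardio_benefit": True, "stress_benefit": False}
print(supported(convergent, "convergent"))  # True
```

This is deliberately coarse (real support comes in degrees), but it captures the evaluative asymmetry: one defeated premise is fatal to a linked argument and merely weakening to a convergent one.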

3 — Premise Evaluation

Once an argument is reconstructed, each premise must be evaluated independently.

Five Questions for Any Premise

  1. Is it true? What is the evidence? Has it been verified?
  2. Is it relevant? Does it actually bear on the conclusion?
  3. Is it sufficient? Even if true and relevant, does it provide enough support?
  4. Is it unambiguous? Could it mean different things, and does the argument trade on the ambiguity?
  5. Is it uncontroversial? If the premise is as doubtful as the conclusion, the argument begs the question.

Worked example — Evaluating a premise in a political argument:

Argument: "Universal basic income would reduce poverty. Studies in Finland and Canada showed that recipients were healthier and happier."

Premise evaluation:

  • True? Partially — the Finland experiment (2017-2018) showed improved well-being but mixed employment effects. The Canada experiment (Mincome, 1970s) showed health improvements. But both were small-scale, time-limited pilots.
  • Relevant? Yes — pilot data bears on the question.
  • Sufficient? Debatable — small pilots may not predict the effects of universal, permanent programs. Selection effects, funding mechanisms, and scale change the dynamics.
  • Unambiguous? "Reduce poverty" needs definition — absolute poverty? Relative poverty? Material deprivation?
  • Uncontroversial? No — the interpretation of the pilot data is contested.

4 — Fallacy Detection

Fallacies are patterns of reasoning that appear persuasive but are logically flawed. (See the formal-logic skill for the full catalog of 18 informal fallacies.) Here the focus is on detection strategy.

The Detection Process

  1. Reconstruct the argument. You cannot evaluate what you cannot see. Extract the argument first.
  2. Check the form. Is the logical structure valid? If not, identify the formal fallacy (affirming the consequent, denying the antecedent, etc.).
  3. Check the content. Are the premises true? Are they relevant? Is the argument's persuasive force coming from logic or from emotion, authority, social pressure?
  4. Ask: What would change my mind? If nothing could change the arguer's mind, the reasoning is dogmatic, not critical.
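Step 2's form check can be mechanized for simple propositional cases: a brute-force truth-table search for a counterexample row, i.e. an assignment that makes every premise true and the conclusion false. This is an illustrative sketch, not part of the source skill; the function names are invented.

```python
from itertools import product

# Hedged sketch: a propositional argument FORM is invalid iff some truth
# assignment makes all premises true and the conclusion false.

def counterexample(premises, conclusion, n_vars=2):
    """Return a refuting truth assignment, or None if the form is valid."""
    for row in product([True, False], repeat=n_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return row
    return None

implies = lambda a, b: (not a) or b

# Valid form — modus ponens: P -> Q, P, therefore Q. No counterexample exists.
print(counterexample([lambda p, q: implies(p, q), lambda p, q: p],
                     lambda p, q: q))   # None

# Formal fallacy — affirming the consequent: P -> Q, Q, therefore P.
print(counterexample([lambda p, q: implies(p, q), lambda p, q: q],
                     lambda p, q: p))   # (False, True): P false, Q true refutes it
```

The counterexample row is exactly what step 2 asks you to find by hand: a scenario where the premises hold but the conclusion fails.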

Worked example — Detecting a fallacy in a tech debate:

Claim: "AI will either solve all of humanity's problems or destroy civilization. Since we must prevent destruction, we should halt all AI development."

Reconstruction:

  1. AI will either solve all problems or destroy civilization. [False dilemma]
  2. We must prevent the destruction of civilization. [True, uncontroversial]
  3. The only way to prevent destruction is to halt AI development. [Unstated, questionable]
  4. Therefore we should halt all AI development.

Fallacy: False dilemma in premise 1. AI could solve some problems, create others, and fall far short of either extreme. The argument also contains a slippery slope (implying that any AI development leads inevitably to civilization-ending risk) and an unstated premise (3) that ignores alternatives like regulation, alignment research, and governance.

The Fallacy Fallacy

Pointing out that an argument contains a fallacy does not prove the conclusion is false — it proves only that THIS argument does not support it. The conclusion might be true for other reasons. "Your argument for X is fallacious" is not the same as "X is false." Confusing these is itself a fallacy (the argument from fallacy, or argumentum ad logicam).

5 — Charitable Interpretation (Principle of Charity)

Core principle. Before criticizing a position, reconstruct it in the strongest possible form. If an argument can be interpreted in a way that makes it valid or at least reasonable, prefer that interpretation over one that makes it easily dismissible.

Why Charity Matters

  1. Intellectual honesty. Attacking a weak version of an argument proves nothing about the strong version.
  2. Self-improvement. Engaging the strongest version of opposing views sharpens your own thinking.
  3. Dialogue. People are more likely to engage productively when they feel heard and fairly represented.

The Steel Man (vs. Straw Man)

A straw man misrepresents an argument to make it easy to defeat. A steel man represents an argument in its strongest possible form, even stronger than the original speaker managed.

Worked example — Steel-manning an argument you disagree with:

Original claim: "Schools should teach to the test because test scores measure learning."

Straw man: "You want to turn children into test-taking robots with no creativity."

Steel man: "Standardized assessment, despite its limitations, provides the only scalable, objective measure of whether students have mastered core skills. Without it, we have no way to identify struggling students early, hold schools accountable for outcomes, or ensure equity across districts with different resources. While tests should not be the only measure, they provide a necessary baseline that subjective evaluation cannot replace."

The steel man is much harder to refute — which means engaging with it will produce better, more nuanced thinking about education policy.

6 — Dialectical Thinking

Origin. The dialectical model of intellectual progress is associated with Hegel (1770-1831), though the familiar thesis-antithesis-synthesis terminology owes more to Fichte and to later summarizers than to Hegel's own texts. The schema:

  • Thesis: An initial position or claim.
  • Antithesis: A contradicting position that reveals the limitations of the thesis.
  • Synthesis: A new position that incorporates the insights of both thesis and antithesis while transcending their contradictions.

Dialectic in Practice

The dialectical habit asks: What is the strongest objection to my current view? What truth does it capture? How can I incorporate that truth without abandoning what is right in my original view?

Worked example — Dialectic on privacy vs. security:

Thesis: Individual privacy is an absolute right. The government should never surveil citizens.

Antithesis: Security requires surveillance. Without monitoring communications, terrorist attacks and serious crimes cannot be prevented. Privacy must yield to safety.

Synthesis: Privacy is a fundamental right that can be overridden only under strict conditions — probable cause, judicial oversight, proportionality, transparency, and time limits. Neither absolute privacy nor unconstrained surveillance is acceptable. The synthesis recognizes that both privacy and security are genuine values and seeks institutional arrangements that protect both.

Note: The synthesis is not a mere compromise or splitting the difference. It is a new framework that reframes the problem — from "privacy vs. security" to "what institutional design protects both values?"

7 — Thought Experiment Methodology

Core idea. A thought experiment isolates a variable by constructing an imaginary scenario that strips away confounding factors, testing whether a principle holds in the purified case.

Anatomy of a Thought Experiment

  1. Setup: Describe the scenario clearly. Eliminate distracting details.
  2. Stipulation: Specify what is given and what is to be imagined. ("Assume the machine is perfectly reliable.")
  3. Question: Ask what we should say, believe, or do in the scenario.
  4. Intuition pump: The scenario elicits a strong intuition that either supports or undermines a philosophical thesis.
  5. Analysis: What does the intuition reveal? Is it reliable? Does it generalize?

Worked example — The experience machine (Nozick 1974):

Setup: Suppose there exists a machine that can give you any experience you desire. While plugged in, you would believe you are living a rich, successful life — winning awards, falling in love, climbing mountains. You cannot tell it from reality.

Question: Would you plug in for the rest of your life?

Analysis: Most people say no. This challenges hedonistic utilitarianism — the view that only pleasure and pain matter. If pleasure were all that mattered, we should plug in. Our reluctance reveals that we value reality, authenticity, and actual achievement, not just the experience of them.

Critique of thought experiments: Intuitions elicited by thought experiments can be culturally contingent, influenced by framing effects, or simply unreliable. The experience machine may elicit status quo bias (we prefer what we already have). Cross-cultural studies show variation in responses to trolley-problem-style cases. Thought experiments are powerful tools but not oracles.

Famous Philosophical Thought Experiments

Thought experiment · Philosopher · Tests
Trolley problem · Foot/Thomson · Killing vs. letting die, means vs. side effect
Brain in a vat · Putnam · External world skepticism, semantic externalism
Teletransporter · Parfit · Personal identity, psychological continuity
Chinese room · Searle · Strong AI, understanding vs. simulation
Ship of Theseus · Plutarch · Identity through change
Ring of Gyges · Plato · Whether morality depends on consequences
Original position · Rawls · Fair principles of justice
Zombie · Chalmers · Consciousness, physicalism
Violinist · Thomson · Bodily autonomy in abortion debate
Experience machine · Nozick · Hedonism, what matters in life

8 — Philosophical Writing

The Standard of Clarity

Philosophical writing is not literary writing. It does not aim to move or entertain (though it may do both incidentally). It aims to be clear, precise, and honest — to present arguments that the reader can evaluate.

Five Principles

  1. State your thesis early. The reader should know within the first paragraph what you will argue and how.
  2. One argument per paragraph. Each paragraph makes one point. The first sentence of each paragraph should be a claim; the rest of the paragraph supports it.
  3. Anticipate objections. Present the strongest objection to your view and respond to it. This demonstrates that you have considered alternatives and strengthens your position.
  4. Use examples. Abstract principles become clear through concrete cases. Every general claim should be illustrated.
  5. Be explicit about logical structure. Use words like "therefore," "because," "however," "one might object that," "in response," "this shows that." Do not leave the reader to guess at inferential connections.

Worked example — Converting a vague intuition into a philosophical argument:

Intuition: "It feels wrong to eat meat."

Draft argument:

  1. Animals are capable of suffering. (Empirical premise — supported by neuroscience and behavioral evidence)
  2. Causing unnecessary suffering is morally wrong. (Ethical premise — widely accepted)
  3. Factory farming causes animals to suffer. (Empirical premise — documented)
  4. For most people in developed countries, eating meat is not necessary for nutrition. (Empirical premise — supported by dietetic research)
  5. Therefore, for most people in developed countries, eating factory-farmed meat is morally wrong. (From 1-4)

Note: The conclusion is carefully scoped — "most people," "developed countries," "factory-farmed." This precision makes the argument stronger, not weaker. A universal claim ("all meat-eating is always wrong") would be much harder to defend.

9 — Reading Philosophy

The Challenge

Philosophical texts are dense. A single page of Kant may take an hour. This is not because the writing is bad (though sometimes it is) but because philosophical ideas are complex and their implications cascade.

The Five-Pass Method

  1. Survey (5 minutes). Read the introduction and conclusion. Get the thesis and main argument.
  2. Read through (30-60 minutes per 10 pages). Read without stopping to puzzle over every sentence. Mark passages that seem important or confusing.
  3. Reconstruct the argument. Write the argument in standard form. Identify premises, conclusion, and key definitions.
  4. Evaluate. Are the premises true? Is the argument valid? Where is it weakest?
  5. Synthesize. How does this connect to other things you know? Does it change your view?

Worked example — Reading a passage from Hume:

Text: "When we look about us towards external objects, and consider the operation of causes, we are never able, in a single instance, to discover any power or necessary connexion; any quality, which binds the effect to the cause, and renders the one an infallible consequence of the other." (Enquiry, Section VII)

Reconstruction:

  1. When we observe causal sequences, we observe only regular succession (A followed by B).
  2. We never observe a "necessary connection" — a force that makes B follow A.
  3. [Unstated: If we cannot observe X, we have no reason to believe X exists.]
  4. Therefore, we have no rational basis for believing in necessary causal connections.

Evaluation: Premise 3 is the key unstated assumption — it is an empiricist principle that rationalists would reject. Kant's response was to argue that necessary connection is contributed by the mind, not observed in nature.

10 — Intellectual Virtues

Critical thinking is not just a set of techniques — it requires character traits that make honest inquiry possible.

The Seven Virtues

Virtue · What it requires · Opposite vice
Intellectual humility · Acknowledge what you do not know · Arrogance, dogmatism
Intellectual courage · Follow arguments where they lead, even to uncomfortable conclusions · Cowardice, conformity
Intellectual empathy · Enter others' perspectives genuinely · Dismissiveness, provincial thinking
Intellectual honesty · Report evidence accurately, even against your position · Self-deception, cherry-picking
Intellectual perseverance · Continue inquiry when it gets difficult · Laziness, premature closure
Intellectual autonomy · Think for yourself; do not defer to authority uncritically · Credulity, groupthink
Intellectual fairness · Give opposing views their due · Bias, partisanship

Worked example — Intellectual honesty in practice:

A researcher conducts a study expecting to find that their intervention improves learning outcomes. The data show no significant effect. Intellectual honesty requires: (a) reporting the null result, (b) not torturing the data until it confesses (p-hacking), (c) considering whether the intervention genuinely does not work, (d) publishing the null result so others do not waste resources repeating the experiment.

This is difficult. Publication bias, career incentives, and ego all push against honest reporting of negative results. The virtue of intellectual honesty is not merely knowing what honesty requires — it is having the character to do it when it is costly.

11 — Dewey's Reflective Thinking

John Dewey (1859-1952) outlined a five-phase model of reflective thought that applies the structure of scientific inquiry to everyday problem-solving.

The Five Phases

  1. Felt difficulty. Something is wrong, confusing, or unsettled. The thinker is in a state of doubt.
  2. Definition of the problem. The vague discomfort is specified: What exactly is the problem? What are its boundaries?
  3. Hypothesis generation. Possible solutions are imagined. Multiple hypotheses should be entertained.
  4. Reasoning. The consequences of each hypothesis are traced: If this were true, what would follow? What evidence would confirm or disconfirm it?
  5. Testing. The hypothesis is put to the test — through observation, experiment, or action. The result either resolves the problem or sends the thinker back to phase 3.

Worked example — Reflective thinking applied to a career decision:

  1. Felt difficulty: "I'm unhappy in my job, but I'm not sure why."
  2. Definition: "The work is intellectually unstimulating, but the salary and security are good. The core tension is between intellectual fulfillment and financial security."
  3. Hypotheses: (a) Switch to a more stimulating job. (b) Keep the job but pursue intellectual interests outside work. (c) Negotiate a different role within the same company. (d) Retrain for a different career.
  4. Reasoning: (a) entails financial risk; (b) may not resolve the daily frustration; (c) depends on company flexibility; (d) requires significant time investment.
  5. Testing: Try (c) first — it has the lowest cost and highest reversibility. If it fails, try (b). If both fail, then (a) or (d) with more information.

Dewey's key insight: Reflective thinking is not a luxury — it is the natural structure of intelligent problem-solving. Education should cultivate this habit, not suppress it with rote memorization.

12 — Critical Thinking in Everyday Life

Evaluating Media

  1. Source. Who published this? What is their track record? What are their incentives?
  2. Evidence. What evidence is cited? Is it primary or secondary? Peer-reviewed or anecdotal?
  3. Framing. What is included and what is left out? Does the headline match the content?
  4. Alternatives. What would someone who disagrees say? Have they been given a fair hearing?

Evaluating Claims

  • Extraordinary claims require extraordinary evidence. (Sagan's standard)
  • Correlation is not causation. Ice cream sales and drowning deaths both rise in summer.
  • Anecdotes are not data. One person's experience does not establish a pattern.
  • Absence of evidence is not evidence of absence. (But it can be suggestive when evidence would be expected.)

Decision-Making Under Uncertainty

When you do not have enough information to be certain (which is always):

  1. Identify what you know, what you do not know, and what you cannot know.
  2. Consider multiple scenarios, not just the most likely one.
  3. Think about reversibility — prefer decisions that are easy to reverse.
  4. Distinguish between "I don't have evidence" and "the evidence says no."
  5. Update your beliefs when new evidence arrives. Being wrong and correcting is better than being wrong and persisting.
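Point 5 (belief updating) has a standard quantitative form: Bayes' rule. The sketch below is a minimal illustration with invented numbers, assuming you can estimate how likely the evidence would be if the claim were true versus false.

```python
# Hedged sketch: updating a degree of belief with Bayes' rule.
# posterior P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return the posterior probability of hypothesis H after observing evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Illustrative numbers (assumptions, not from the source): you are 30% confident
# a claim is true, then evidence arrives that is 4x as likely if it is true.
belief = update(prior=0.30, p_e_given_h=0.80, p_e_given_not_h=0.20)
print(round(belief, 3))  # 0.632
```

The direction matters more than the decimals: strong evidence should move your credence substantially, and evidence equally likely either way (likelihood ratio 1) should move it not at all. Being wrong and correcting beats being wrong and persisting.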

When to Use This Skill

  • Analyzing any argument — philosophical, political, scientific, everyday
  • Practicing Socratic questioning in dialogue or self-reflection
  • Evaluating claims, evidence, and reasoning in media, policy, and conversation
  • Developing philosophical writing skills
  • Teaching or explaining principles of good reasoning
  • Making decisions under uncertainty
  • Cultivating intellectual virtues and self-awareness about cognitive biases

When NOT to Use This Skill

  • When the task requires formal logical notation and proof (use formal-logic skill — critical thinking operates in natural language, not formal systems)
  • When the question is about specific philosophical content (epistemology, ethics, metaphysics) rather than method (use the relevant domain skill)
  • When the goal is creative brainstorming rather than evaluation — critical thinking evaluates ideas but does not generate them (though Socratic questioning can prompt creative insight)
  • When emotional support is what is needed, not argument analysis — knowing when to be a philosopher and when to be a friend is itself a form of practical wisdom

Cross-References

  • dewey agent: Primary agent for pragmatist pedagogy and reflective thinking. Expert in inquiry-based learning, experiential education, and democratic education.
  • socrates agent: Department chair. The living embodiment of Socratic questioning — persistent, humble, relentless.
  • aristotle agent: Logic and systematic reasoning that underlies critical thinking.
  • beauvoir agent: Phenomenological perspective on how lived experience shapes reasoning and interpretation.
  • formal-logic skill: The formal backbone of argument evaluation — critical thinking's rigorous sibling.
  • epistemology skill: The theory of knowledge that grounds critical thinking's concern with evidence and justification.
  • ethics skill: Moral reasoning as a special case of critical thinking, with its own frameworks and methods.

References

  • Plato. Apology, Meno, Euthyphro, Republic. The Socratic method in action.
  • Dewey, J. (1910). How We Think. D. C. Heath. The original formulation of reflective thinking.
  • Dewey, J. (1938). Logic: The Theory of Inquiry. Henry Holt.
  • Hegel, G. W. F. (1807). Phenomenology of Spirit. Dialectical method.
  • Nozick, R. (1974). Anarchy, State, and Utopia. Harvard University Press. The experience machine thought experiment.
  • Searle, J. (1980). "Minds, Brains, and Programs." Behavioral and Brain Sciences, 3(3), 417-424. The Chinese room.
  • Paul, R., & Elder, L. (2014). Critical Thinking: Tools for Taking Charge of Your Professional and Personal Life. Pearson.
  • Baggini, J., & Fosl, P. S. (2010). The Philosopher's Toolkit. 2nd edition. Wiley-Blackwell.
  • Zagzebski, L. (1996). Virtues of the Mind. Cambridge University Press. Intellectual virtues.
  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. Cognitive biases that critical thinking must counteract.
  • Sagan, C. (1995). The Demon-Haunted World: Science as a Candle in the Dark. Random House. Critical thinking for everyday life.