Scientific Critical Thinking
Scientific Critical Thinking is a community skill for implementing structured reasoning frameworks that evaluate evidence quality, identify logical fallacies, produce well-supported conclusions, and generate transparent reasoning chains in AI-assisted analysis workflows.
What Is This?
Overview
Scientific Critical Thinking provides patterns for building systems that assess claims against evidence using established reasoning frameworks. It covers hypothesis evaluation, evidence grading, bias detection, and argument structure analysis. The skill translates principles from scientific methodology into programmable evaluation pipelines that improve the reliability of AI-generated analysis.
Who Should Use This
This skill serves developers building fact-checking tools, research teams creating evidence synthesis pipelines, and analysts who need structured approaches to evaluating complex claims. It benefits anyone building systems where conclusion reliability directly affects downstream decision quality.
Why Use It?
Problems It Solves
AI models generate plausible-sounding claims without distinguishing strong evidence from weak evidence. Confirmation bias in analysis workflows selectively surfaces supporting data while ignoring contradictions. Without structured evaluation, appeals to authority and other logical fallacies pass undetected. Complex arguments with multiple premises are accepted or rejected as wholes rather than evaluated component by component.
Core Highlights
Evidence hierarchy classification grades sources from anecdotal reports to systematic reviews. Argument mapping decomposes complex claims into individual premises and their logical relationships. Bias checkers flag common reasoning errors including confirmation bias, survivorship bias, and false causation. Confidence scoring assigns calibrated probability estimates to conclusions based on supporting evidence strength.
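Of these capabilities, argument mapping is only partially shown in the code sections below, so here is a minimal sketch of one way a claim could be decomposed into premises and their relationships; the Premise and ArgumentMap names are illustrative, not an API defined by this skill:
from dataclasses import dataclass, field

@dataclass
class Premise:
    text: str
    supports: bool  # True if the premise supports the conclusion, False if it rebuts it
    depends_on: list[int] = field(default_factory=list)  # indices of prerequisite premises

@dataclass
class ArgumentMap:
    conclusion: str
    premises: list[Premise] = field(default_factory=list)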
How to Use It?
Basic Usage
from dataclasses import dataclass, field
from enum import Enum

class EvidenceLevel(Enum):
    ANECDOTAL = 1
    OBSERVATIONAL = 2
    CORRELATIONAL = 3
    EXPERIMENTAL = 4
    SYSTEMATIC_REVIEW = 5

@dataclass
class Evidence:
    description: str
    level: EvidenceLevel
    source: str
    supports_claim: bool

@dataclass
class CriticalAnalysis:
    claim: str
    evidence: list[Evidence] = field(default_factory=list)

    def confidence_score(self) -> float:
        if not self.evidence:
            return 0.0
        weighted = []
        for e in self.evidence:
            weight = e.level.value / 5.0
            score = weight if e.supports_claim else -weight
            weighted.append(score)
        raw = sum(weighted) / len(weighted)
        return max(0.0, min(1.0, (raw + 1) / 2))

Real-World Examples
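Before looking at argument structure, here is a short usage sketch of the Basic Usage classes; the claim, evidence entries, and source names are invented purely for illustration:
# Hypothetical claim and evidence, purely for illustration.
analysis = CriticalAnalysis(claim="Compound X reduces inflammation")
analysis.evidence.append(Evidence(
    description="Randomized controlled trial with 200 participants",
    level=EvidenceLevel.EXPERIMENTAL,
    source="illustrative journal article, 2023",
    supports_claim=True,
))
analysis.evidence.append(Evidence(
    description="Anecdotal reports from an online forum",
    level=EvidenceLevel.ANECDOTAL,
    source="forum thread",
    supports_claim=False,
))
print(round(analysis.confidence_score(), 2))  # 0.65 for this mix of evidence
The evaluator below shifts the focus from evidence quality to the structure of the argument itself.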
class ArgumentEvaluator:
    FALLACY_PATTERNS = {
        "ad_hominem": "attacks the person instead of the argument",
        "straw_man": "misrepresents the opposing position",
        "false_cause": "assumes correlation implies causation",
        "appeal_to_authority": "cites authority without evidence",
        "hasty_generalization": "concludes from insufficient sample"
    }

    def evaluate(self, premises: list[str], conclusion: str) -> dict:
        analysis = {
            "premise_count": len(premises),
            "conclusion": conclusion,
            "issues": []
        }
        if len(premises) < 2:
            analysis["issues"].append(
                "Insufficient premises for strong conclusion"
            )
        unsupported = [
            p for p in premises
            if "because" not in p.lower() and "study" not in p.lower()
        ]
        if unsupported:
            analysis["issues"].append(
                f"{len(unsupported)} premises lack cited support"
            )
        analysis["strength"] = "strong" if not analysis["issues"] else "weak"
        return analysis

Advanced Tips
Chain multiple evaluation passes to catch different error types. Run bias detection separately from logical structure analysis to prevent one check from masking another. Maintain a library of domain-specific fallacy patterns that extend the generic set with field-relevant reasoning errors.
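A rough sketch of those tips, building on the ArgumentEvaluator above; the bias_pass heuristic and the clinical fallacy entry are hypothetical examples, not part of the skill:
# Hypothetical domain-specific extension of the generic fallacy library.
CLINICAL_FALLACIES = {
    **ArgumentEvaluator.FALLACY_PATTERNS,
    "surrogate_endpoint": "treats an intermediate marker as the clinical outcome",
}

def bias_pass(premises: list[str], conclusion: str) -> dict:
    # Separate bias check: flag possible false causation when the conclusion
    # uses causal language but no premise describes a trial or experiment.
    causal = any(w in conclusion.lower() for w in ("causes", "leads to"))
    experimental = any(
        w in p.lower() for p in premises for w in ("trial", "experiment")
    )
    return {"bias_flags": ["false_cause"] if causal and not experimental else []}

def run_evaluation_passes(premises: list[str], conclusion: str) -> list[dict]:
    # Chain independent passes so one check cannot mask another's findings.
    return [
        ArgumentEvaluator().evaluate(premises, conclusion),
        bias_pass(premises, conclusion),
    ]
Keeping each pass independent means a clean structural result cannot hide a flagged bias, and vice versa.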
When to Use It?
Use Cases
Build automated fact-checking pipelines that grade evidence quality for news claims. Create research assistant tools that evaluate the strength of arguments in academic papers. Develop decision support systems that present balanced evidence before recommending actions.
Related Topics
Formal logic systems, evidence-based medicine frameworks, argumentation theory, cognitive bias research, and epistemic reasoning in AI systems.
Important Notes
Requirements
Defined evidence hierarchies appropriate to the target domain, access to source materials for verification, subject matter expertise for validating automated reasoning outputs against ground truth, and test cases with known correct evaluations for benchmarking accuracy.
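For the benchmarking requirement, one possible shape for a test case with a known correct evaluation, reusing the CriticalAnalysis and Evidence classes from Basic Usage (the claim and threshold here are illustrative):
def test_strong_supporting_evidence_scores_above_midpoint():
    # Known-correct evaluation: a supporting systematic review should push
    # the calibrated score well above the neutral 0.5 midpoint.
    analysis = CriticalAnalysis(claim="Vaccine Y reduces hospitalization")
    analysis.evidence.append(Evidence(
        description="Systematic review of twelve randomized trials",
        level=EvidenceLevel.SYSTEMATIC_REVIEW,
        source="illustrative review, 2024",
        supports_claim=True,
    ))
    assert analysis.confidence_score() > 0.5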
Usage Recommendations
Do: evaluate evidence quality separately from argument structure. Use multiple independent checks rather than a single pass. Document confidence levels explicitly with supporting rationale for every conclusion.
Don't: treat automated reasoning checks as infallible replacements for expert judgment. Apply domain-generic evidence hierarchies to specialized fields without adaptation. Ignore contradicting evidence simply because the majority of sources support a claim.
Limitations
Automated fallacy detection has high false positive rates on nuanced arguments that resemble but are not actually fallacious. Evidence grading requires domain-specific calibration that generic frameworks cannot fully provide. Confidence scores provide useful signals but should not substitute for expert evaluation on consequential decisions. Novel arguments that do not match existing patterns may be incorrectly flagged as fallacious.