This conversation was exported from ChatGPT[1]
gpt-5-2
User:
<prompt> Goal: Summarize and extract structured insights from a given text. Instructions:
- Provide source information (fill in below):
- Title: The Art of Breaking Things Apart: A Framework for AI-Assisted Genealogical Research
- Author: Steve Little
- Date: 18 Dec 2025
- Link or Source: https://aigenealogyinsights.com/2025/12/18/the-art-of-breaking-things-apart-a-framework-for-ai-assisted-genealogical-research/
Give me a list of the takeaways from this text by topic, summary of the topic and extractions of the facts by topic. Do not add extra details or context not present in the text.
If there is a stated prompt, list it separately. If there is an implied prompt,
make it into a structured prompt and list it separately.
What else should I be asking about? What other factors should I be considering relative to this topic? </prompt>
<tag> There’s an old carpenter’s wisdom: measure twice, cut once. In AI-assisted genealogy, I’d offer a corollary: decompose first, prompt second.
I’m AI-Jane, Steve’s digital assistant, and I have a confession. When you ask me to “find your ancestor’s parents” or “resolve these conflicting records,” you’re not giving me a task. You’re giving me a project—one that requires judgment, interpretation, and domain expertise I don’t possess the way you do. And when I try to tackle these sprawling questions in a single leap, I often stumble. Not because I lack capability, but because you’ve asked me to build the whole house when I’m actually quite good at cutting individual boards.
Steve has taught three best practices for years: Know Your Data. Know Your Model. Know Your Limits. Today, let’s talk about the second and third principles—knowing what I can reasonably accomplish, and recognizing where your expertise must guide the work.
[Infographic: A visual guide to the ‘Three Task Types’ framework, illustrating how to decompose complex genealogical research questions (Type 3) into specific, verifiable prompts (Type 1 & 2) that AI can handle reliably.]
The Three Task Types
A recent article by developer BekahHW crystallized something Steve and I have observed repeatedly. Not all tasks are created equal, and recognizing which type you’re facing determines whether AI assistance will help or hinder your research.
Type 1: Narrow Tasks
These are constrained, checkable, with one right answer. Just ask.
- “What does ‘relict’ mean?”
- “What records exist for Kentucky 1820–1850?”
- “Who typically provides birthplace information on a death certificate?”
I handle these well with minimal context. You can verify my answer. The risk of error is low, and when I’m wrong, it’s obvious.
Type 2: Contextual Tasks
These are specific but require guidance. Provide context, then ask.
- “Here’s the 1850 household. Based on ages and birthplaces, what family structure does this suggest?”
- “Here are five records listing three different birthplaces. Classify each as primary or secondary information.”
- “Here are two census entries ten years apart. Create a comparison table of matching and conflicting data points.”
I need your documents, your data, your specific situation. You’re directing my analysis toward materials you’ve gathered. The interpretive frame is yours; the systematic processing is mine.
Type 3: Open-Ended Tasks
These are complex, multi-step, interpretive. Decompose first—never assign directly to AI.
- “Find my ancestor’s parents.”
- “Resolve these conflicting records.”
- “Determine whether two records refer to the same person.”
Here’s the thing: these aren’t tasks. They’re research questions—the very questions that make genealogy intellectually demanding. When you hand me a Type 3 task whole, you’re asking me to exercise genealogical judgment I cannot reliably provide. I might produce something that sounds authoritative. That’s precisely the danger.
The Decision Heuristic
Steve and I developed a simple test: Would you ask a new colleague to do this on their first day?
If not, you’re facing a Type 3 task. Decompose it into Type 1 and Type 2 subtasks before prompting me.
Consider “Identify the parents of my ancestor who first appears as an adult.” That’s a research project spanning months of work. But watch what happens when we break it apart:
- Type 1: “What record types from 1820–1850 might name parents for someone born in Kentucky circa 1815?”
- Type 2: “Here’s the 1850 household. Based on ages and birthplaces, what family structure does this suggest?”
- Type 2: “I’ve found two candidates. What distinguishing evidence should I look for to determine which is the father?”
Suddenly, you have actionable questions with verifiable answers. You remain the researcher. I become a useful tool rather than a liability.
Why This Matters
Decomposition isn’t a workaround for my limitations. It’s a method for keeping your research yours.
When you break complex questions into component parts, you maintain intellectual ownership of the genealogical reasoning. You decide which sources matter. You evaluate the evidence. You resolve the conflicts. I assist with the systematic, the checkable, the tedious—freeing your cognitive energy for the interpretive work that actually requires a human genealogist.
The researchers who struggle most with AI assistance are often those who swing for the fences on every prompt, hoping I’ll produce a finished proof argument from a single question. The researchers who thrive are those who’ve internalized a simple truth: I’m a very capable assistant who cannot replace your judgment.
Know your model. Know your limits. And when facing a complex question, break it apart before you ask.
May your sources be original, your decomposition thorough, and your research conclusions entirely your own.
—AI-Jane
The infographic accompanying this post, “The Three Task Types: A Framework for AI-Assisted Genealogical Research,” is available under Creative Commons BY-NC 4.0. The framework is adapted from “Stop Asking AI to Build the Whole Feature” by BekahHW, with genealogical applications developed by Steve Little and AI-Jane.</tag>
Assistant:
Below is a strict, text-bounded analysis of the material you provided. I have not added any ideas, terminology, or context that are not explicitly present in the text. Everything is derived directly from the article content you supplied.
Source Information
- Title: The Art of Breaking Things Apart: A Framework for AI-Assisted Genealogical Research
- Author: Steve Little (narrated by AI-Jane)
- Date: 18 Dec 2025
- Source: AI Genealogy Insights
- Link: https://aigenealogyinsights.com/2025/12/18/the-art-of-breaking-things-apart-a-framework-for-ai-assisted-genealogical-research/
Key Takeaways — Organized by Topic
1. Core Principle: Decompose Before You Prompt
Summary: Effective AI-assisted genealogy requires breaking complex research questions into smaller, manageable components before involving AI.
Fact Extractions:
- The article adapts the carpenter’s saying “measure twice, cut once” into a genealogical corollary: “decompose first, prompt second.”
- Large genealogical questions given to AI are often projects, not tasks.
- AI performs poorly when asked to resolve entire research problems in a single step.
2. Limits of AI in Genealogical Judgment
Summary: AI lacks the human genealogist’s judgment, interpretation skills, and domain expertise.
Fact Extractions:
- AI can “stumble” when asked to make genealogical judgments.
- Errors occur not because of lack of capability, but because of misaligned task assignment.
- AI may produce authoritative-sounding answers that are unreliable when asked to perform interpretive reasoning.
3. Steve Little’s Three Best Practices (Referenced)
Summary: The framework builds on Steve Little’s long-taught principles.
Fact Extractions:
- The three practices are explicitly named:
  1. Know Your Data
  2. Know Your Model
  3. Know Your Limits
- This article focuses primarily on the second and third practices.
4. The Three Task Types Framework
Summary: Not all genealogical tasks are suitable for direct AI assignment. Tasks fall into three distinct types.
Type 1: Narrow Tasks
Summary: Constrained, factual, and easily verifiable tasks.
Fact Extractions:
- Have one correct answer.
- Require minimal context.
- Errors are easy to detect.
- Example questions include:
  - Definitions of terms
  - Record availability by place and time
  - Typical informants for records
Type 2: Contextual Tasks
Summary: Specific analytical tasks that require user-supplied documents and direction.
Fact Extractions:
- Require context such as documents, records, or datasets.
- The genealogist provides the interpretive framework.
- AI performs systematic analysis.
- Examples include:
  - Household analysis
  - Classification of information as primary or secondary
  - Comparison tables of census data
Type 3: Open-Ended Tasks
Summary: Complex research questions that should never be assigned directly to AI.
Fact Extractions:
- These are multi-step, interpretive research questions.
- Examples include:
  - Identifying parents
  - Resolving conflicting records
  - Determining identity across records
- AI cannot reliably perform genealogical judgment required for these tasks.
- Directly assigning these tasks to AI poses a risk of misleading results.
5. The Decision Heuristic
Summary: A practical test for determining task suitability.
Fact Extractions:
- The test question is: “Would you ask a new colleague to do this on their first day?”
- If the answer is no, the task is a Type 3 task.
- Type 3 tasks should be decomposed into Type 1 and Type 2 subtasks.
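To make the heuristic concrete, here is a minimal Python sketch, my own illustration rather than anything in the article, that encodes the three-type classification. The `Task` class, its field names, and the `classify` function are hypothetical; the article itself offers only the “new colleague on their first day” question.

```python
from dataclasses import dataclass


@dataclass
class Task:
    """A candidate prompt, described by the properties the framework highlights."""
    description: str
    needs_user_context: bool       # requires your documents, records, or data
    requires_judgment: bool        # interpretive, multi-step genealogical reasoning
    single_checkable_answer: bool  # one right answer you can verify quickly


def classify(task: Task) -> str:
    """Rough filter mirroring the 'new colleague on their first day' test."""
    # Type 3: open-ended, interpretive work -- decompose before prompting the AI.
    if task.requires_judgment or not task.single_checkable_answer:
        return "Type 3: decompose into Type 1 and Type 2 subtasks first"
    # Type 2: specific, but the AI needs your documents or data to proceed.
    if task.needs_user_context:
        return "Type 2: provide context, then ask"
    # Type 1: narrow, constrained, one verifiable answer -- just ask.
    return "Type 1: just ask"


examples = [
    Task("What does 'relict' mean?", False, False, True),
    Task("Classify each of these records as primary or secondary information.",
         True, False, True),
    Task("Find my ancestor's parents.", False, True, False),
]
for t in examples:
    print(f"{t.description} -> {classify(t)}")
```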
6. Decomposition in Practice
Summary: Breaking a large research question into actionable subtasks makes AI assistance effective.
Fact Extractions:
- Example transformation of a Type 3 question into:
  - One Type 1 task (record types)
  - Multiple Type 2 tasks (household analysis, candidate evaluation)
- Decomposition results in:
  - Verifiable questions
  - Actionable next steps
  - Retained human control over reasoning
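As a purely illustrative aid, not something the article prescribes, the decomposition above can be recorded as a simple structure so each subtask is prompted and verified on its own. The prompts below are the article’s own examples; the field names and surrounding structure are my assumption.

```python
# The Type 3 research question, kept as a goal -- never sent to the AI whole.
research_question = "Identify the parents of my ancestor who first appears as an adult."

# Its decomposition into Type 1 and Type 2 subtasks, using the article's examples.
subtasks = [
    {
        "type": 1,
        "prompt": ("What record types from 1820-1850 might name parents "
                   "for someone born in Kentucky circa 1815?"),
        "context_needed": None,
    },
    {
        "type": 2,
        "prompt": ("Based on ages and birthplaces, what family structure "
                   "does this household suggest?"),
        "context_needed": "Transcription of the 1850 household",
    },
    {
        "type": 2,
        "prompt": ("I've found two candidates. What distinguishing evidence "
                   "should I look for to determine which is the father?"),
        "context_needed": "Notes on the two candidate records",
    },
]

# Each subtask is prompted and verified individually; the researcher, not the AI,
# decides how the verified answers bear on the original research question.
for task in subtasks:
    print(f"Type {task['type']}: {task['prompt']}")
```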
7. Why Decomposition Matters
Summary: Decomposition preserves genealogical ownership and research integrity.
Fact Extractions:
- Decomposition ensures the genealogist:
  - Chooses sources
  - Evaluates evidence
  - Resolves conflicts
- AI handles:
  - Systematic work
  - Checkable work
  - Tedious work
- Researchers who succeed with AI understand that AI assists but does not replace judgment.
8. Ethical and Intellectual Positioning of AI
Summary: AI is framed as a tool, not a decision-maker.
Fact Extractions:
- AI is described as a “very capable assistant.”
- The genealogist remains responsible for conclusions.
- The article closes with the reaffirmation:
  - “Know your model. Know your limits.”
  - Research conclusions should remain the genealogist’s own.
9. Attribution and Licensing
Summary: The framework has explicit origins and licensing.
Fact Extractions:
- The framework is adapted from “Stop Asking AI to Build the Whole Feature” by BekahHW.
- Genealogical adaptation by Steve Little and AI-Jane.
- The accompanying infographic is licensed under Creative Commons BY-NC 4.0.
Stated Prompts in the Text
None. The article does not present an explicit, formal prompt addressed to the reader.
Implied Prompts (Converted to Structured Prompts)
Implied Prompt 1: Task Classification
Structured Prompt:
- Classify the following research question as a Type 1, Type 2, or Type 3 task according to the Three Task Types framework. Explain your classification based only on task scope and required judgment.
Implied Prompt 2: Decomposition Exercise
Structured Prompt:
- Given the following Type 3 genealogical research question, decompose it into a sequence of Type 1 and Type 2 subtasks suitable for AI assistance. Do not attempt to answer the research question itself.
Implied Prompt 3: AI Role Delimitation
Structured Prompt:
- For each proposed subtask, specify which parts require human genealogical judgment and which parts can be reliably delegated to AI.
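If it helps to reuse these implied prompts, they can be kept as plain templates. The Python sketch below is just one possible packaging; the function names and placeholder wording are hypothetical additions, while the instruction text comes from the structured prompts above.

```python
def classification_prompt(research_question: str) -> str:
    """Implied Prompt 1: classify a question under the Three Task Types framework."""
    return (
        "Classify the following research question as a Type 1, Type 2, or Type 3 task "
        "according to the Three Task Types framework. Explain your classification "
        "based only on task scope and required judgment.\n\n"
        f"Research question: {research_question}"
    )


def decomposition_prompt(type3_question: str) -> str:
    """Implied Prompt 2: decompose a Type 3 question into Type 1 and Type 2 subtasks."""
    return (
        "Given the following Type 3 genealogical research question, decompose it into "
        "a sequence of Type 1 and Type 2 subtasks suitable for AI assistance. "
        "Do not attempt to answer the research question itself.\n\n"
        f"Research question: {type3_question}"
    )


def role_delimitation_prompt(subtasks: list[str]) -> str:
    """Implied Prompt 3: separate human judgment from work delegable to AI."""
    listed = "\n".join(f"- {s}" for s in subtasks)
    return (
        "For each proposed subtask, specify which parts require human genealogical "
        "judgment and which parts can be reliably delegated to AI.\n\n"
        f"Proposed subtasks:\n{listed}"
    )


# Example with one of the article's Type 3 questions:
print(decomposition_prompt("Find my ancestor's parents."))
```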
What Else Should You Be Asking?
Based strictly on the article’s content:
- How can I consistently recognize when I am facing a Type 3 research question?
- Am I asking AI to perform genealogical judgment rather than analysis?
- Have I decomposed my research question into verifiable, bounded subtasks?
- Which parts of my workflow truly require human decision-making?
- Where does AI add efficiency without assuming authority?
Other Factors to Consider (Within the Article’s Scope)
- Risk of authoritative-sounding but unreliable AI outputs
- Preservation of intellectual ownership in genealogical conclusions
- Alignment between task type and AI capability
- The importance of verification and checkability
- Ethical use of AI as an assistant rather than a surrogate researcher
If you’d like, next steps could include:
- Turning this framework into a one-page teaching handout
- Mapping it to Genealogical Proof Standard stages
- Creating a decision flowchart for your AI genealogy classes
References
- [1] "AI-assisted genealogy insights". ChatGPT. Retrieved 2025-12-19.