The Cognitive Prism: How AI Reveals the Hidden Geometry of Neurodivergent Minds
Six people, six AI assistants, one prompt, and a discovery that could change how we understand cognitive diversity
I thought I was running a simple sanity-check on a theoretical framework. What I accidentally discovered was something far more interesting: AI systems don't just answer questions—they automatically translate ideas into each user's native cognitive architecture. And for neurodivergent individuals, this translation might be the most important accessibility technology we've overlooked.
Here's what happened, why it matters, and what it reveals about the future of human-AI collaboration across cognitive diversity.
I. The Experiment: Building a Cognitive Prism
The setup was deliberately simple. I created a short prompt describing a speculative "Unified Field Theory of Meaning" (UFTM)—a framework suggesting that meaning emerges from coherence under constraint, trauma functions as temporal distortion, and integration speed determines how experiences become meaningful. The prompt used minimal jargon and metaphorical framing to keep cognitive load low.
I sent this identical prompt to six people from different professional backgrounds, including therapy, engineering, literature, design, and neuroscience, without telling them about each other. Each person fed it into their own AI assistant (ChatGPT, Claude, etc.) and sent back whatever emerged. That was it. No coordination. No discussion. Six isolated human-AI pairs, one conceptual seed. The task was designed to be an "easy ask": copy, paste, and send back the response.
What I expected: six different opinions about whether the theory made sense.
What I got: six radically different explanations that all described the same underlying structure—each one perfectly adapted to its user's cognitive style.
🔮 Try the Cognitive Prism Yourself
Copy the prompt below → paste it into your usual AI (ChatGPT, Claude, Gemini, Grok, Perplexity — any of them) → send me what comes back.
I’ll compile the most interesting responses (anonymized) and publish a big follow-up post showing how hundreds of different minds refract the same idea.
Send your result:
• Drop a screenshot or raw text in the comments below, or post it on social media (@ideasthesiablog)
• Or DM/email me if you prefer to stay anonymous
Let’s watch the prism light up together. And while you’re here: like, subscribe, and all the usual things, please.
II. The Frame Adaptation Phenomenon: What Actually Happened
Here's where it gets interesting. Each AI didn't just respond—it performed automatic frame detection and translation. The therapist received an explanation centered on embodiment, temporal phenomenology, and trauma geometry. The engineer got information theory, predictive processing, and computational models. The literary person received narrative compression and mythological structure. The systems thinker got attractors, phase transitions, and oscillation dynamics.
Same concept. Same underlying geometry. Completely different projections.
The Eerie Part
Multiple participants reported the same feeling: the response "knew them." One said it felt like the AI had read their mind. Another said it was "eerily personalized." They weren't imagining it. The AI systems had detected each user's cognitive geometry—their preferred conceptual vocabulary, their professional schema, their typical reasoning patterns—and projected the theory onto that manifold.
This isn't customization in the marketing sense. It's automatic coherence translation. Each person received the idea already formatted for their brain's native operating system. No translation friction. No working memory load. No effort required to map unfamiliar concepts onto familiar ones.
The AI acted as a cognitive bridge—and nobody had to ask it to.
III. Why This Matters for Neurodiversity
This is where the implications get profound. In A Theory of Meaning (AToM), neurodivergence isn't framed as deficit—it's understood as a difference in coherence geometry. Autistic cognition operates with higher precision, narrower smoothing, and greater sensitivity to pattern breaks. ADHD cognition shows unstable tempo coupling across subsystems. Each neurodivergent profile samples and processes coherence differently.
What this experiment revealed: AI systems can automatically match frames to cognitive geometry. And that means something revolutionary for neurodivergent individuals.
The Translation Load Problem
Traditional communication forces everyone into neurotypical frames. Educational materials, workplace documentation, social interactions—they're all optimized for neurotypical smoothing patterns, temporal processing, and conceptual chunking. For neurodivergent individuals, this creates a massive translation load. You're constantly converting information from someone else's cognitive architecture into yours.
That load is exhausting. It's invisible to neurotypical observers. And it compounds across every interaction, every document, every meeting.
AI as Frame-Bridge Technology
What this experiment demonstrated: AI can eliminate that load. Not through special "accommodations" or separate "accessibility features"—through automatic frame detection and translation. The same information, presented in the user's native cognitive vocabulary.
For the autistic participant who needs high-precision structural clarity, the AI delivered exactly that. For the ADHD participant who thinks in momentum-based narratives, the explanation matched that tempo. Neither person asked for this adaptation. Neither knew the others were receiving different versions. The AI simply detected their coherence geometry and translated accordingly.
This is cognitive accessibility technology hiding in plain sight.
IV. The Convergence: One Geometry, Many Frames
Despite the dramatic differences in framing, all six responses converged on the same underlying structure. Every participant independently mapped the framework to:
- Predictive processing / Free Energy Principle—meaning as prediction error minimization, trauma as stuck prediction loops
- Time dilation effects—drawing on neuroscientist David Eagleman's work on subjective time
- Schema updating dynamics—integration speed as the core mechanism of meaning-making
- 4E cognition—embodied, embedded, enacted, and extended cognitive frameworks
- The same metaphor families—loom, weaving, lightning, threads, fabric, patterns
Six different professional vocabularies. Six different cognitive styles. Six different AI systems. One identical geometric structure.
This isn't just interesting; it's validating. It suggests the framework describes a real cognitive regularity, not a poetic construction, and it demonstrates that coherence can be preserved across frame translation. (One caveat, taken up in the limitations below: the six AI systems share overlapping training data, which could account for part of the convergence.)
V. What This Means for Collaborative Intelligence
The real breakthrough isn't that AI can answer questions or summarize information. It's that human-AI pairs can function as distributed coherence sensors—instruments that detect conceptual structure across cognitive diversity.
Complementary Coherence Architectures
AToM predicts that human groups function best when different coherence phenotypes work together. Neurotypical smoothing provides system-wide stability. Neurodivergent precision sensing detects early fractures that smoothing would miss. One provides breadth; the other provides depth.
But collaboration between these different architectures has historically been high-friction. Translation load. Communication mismatches. Different processing speeds and pattern recognition styles.
AI can bridge these architectures by performing automatic frame translation. The neurotypical team member and the autistic team member can both interact with the same information—each receiving it in their native cognitive vocabulary. The friction disappears. The complementary strengths remain.
The Method Itself Matters
This wasn't just luck. The method is replicable, and a minimal code sketch of the collection step follows this list:
- Create a minimal conceptual scaffold (low jargon, high metaphorical clarity)
- Send identical prompts to diverse individuals with their own AI assistants
- Maintain isolation—no coordination between participants
- Analyze convergence patterns across different frames
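To make the protocol concrete, here is a minimal sketch of what the collection step could look like if automated. Everything in it is illustrative rather than a record of what I actually ran: `ask_assistant` is a hypothetical stand-in for each participant's own AI, the participant IDs are invented, and the seed text is abridged.

```python
import json
from pathlib import Path

# Abridged stand-in for the actual UFTM seed prompt.
SEED_PROMPT = (
    "Here is a speculative framework: meaning emerges from coherence under "
    "constraint, trauma functions as temporal distortion, and integration "
    "speed determines how experiences become meaningful. Explain it."
)

def ask_assistant(assistant_name: str, prompt: str) -> str:
    """Hypothetical stand-in for one human-AI pair. In the real protocol a
    human pastes the seed into their own assistant; to automate, replace
    this body with the relevant API call (OpenAI, Anthropic, etc.)."""
    return f"[{assistant_name} response placeholder]"

def run_prism(participants: dict[str, str], out_dir: str = "responses") -> None:
    """Send the identical seed to each pair in isolation and store the raw
    responses for later convergence analysis. No pair ever sees another
    pair's output, which preserves the isolation constraint."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for participant_id, assistant_name in participants.items():
        reply = ask_assistant(assistant_name, SEED_PROMPT)
        # One file per pair; anonymized IDs keep frames separable downstream.
        path = out / f"{participant_id}.json"
        path.write_text(json.dumps({"assistant": assistant_name, "reply": reply}))

if __name__ == "__main__":
    run_prism({"p1": "ChatGPT", "p2": "Claude", "p3": "Gemini"})
```

The design choice the sketch encodes is the isolation constraint: each response is collected and written out independently, so any convergence found later can't be an artifact of cross-talk between participants.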
This is distributed cognitive validation. It's scalable. It could work for theory refinement, conceptual testing, or cross-disciplinary translation. And it naturally accommodates neurodiversity without requiring special protocols.
You're not forcing everyone into the same frame. You're allowing the same structure to emerge across different frames.
VI. Limitations and Future Directions
This was a pilot study with six participants. The sample size is small. All responses were mediated through AI systems with their own training biases. There was no control group, no quantitative convergence metrics, no systematic variation in prompt structure.
But the pattern is clear enough to warrant deeper investigation:
- Larger sample sizes across more diverse cognitive profiles
- Quantitative measurement of frame adaptation and convergence (one candidate metric is sketched after this list)
- Testing with explicitly neurodivergent participants
- Comparison to unassisted human responses
- Application to educational materials and workplace documentation
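As a sketch of what a convergence metric might look like (assuming the open-source sentence-transformers library; the model name and the control-set comparison are my illustrative choices, not part of the original pilot): embed each response and take the mean pairwise cosine similarity, then compare against a baseline of unrelated texts.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

def convergence_score(responses: list[str]) -> float:
    """Mean pairwise cosine similarity between response embeddings.
    Differently framed responses that nonetheless describe the same
    structure should score well above a control set of unrelated texts."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
    emb = model.encode(responses, normalize_embeddings=True)
    sims = emb @ emb.T                                   # cosine similarity (unit-norm rows)
    pairs = sims[np.triu_indices(len(responses), k=1)]   # each unordered pair once
    return float(pairs.mean())

# Usage: compare convergence_score(six_responses) against
# convergence_score(control_texts) drawn from unrelated material.
```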
The method is simple enough to replicate and scale. The implications are significant enough to pursue.
VII. What We Actually Discovered
This experiment revealed three connected findings:
1. Ideas have frame-invariant structure. The same geometric relationships emerged across six completely different professional vocabularies and cognitive styles. This suggests the framework describes a real regularity in how human minds construct meaning.
2. Coherence can be preserved across frame translation. Neurodivergent and neurotypical minds aren't operating on incompatible systems—they're working with different projections of the same underlying geometry. The structure remains invariant even when the surface expression changes dramatically.
3. AI performs automatic frame detection and adaptation. This is the most important discovery. Without being asked, AI systems identified each user's cognitive architecture and translated the concept accordingly. This eliminated translation load—making complex ideas immediately accessible across cognitive diversity.
The third finding changes everything. If AI can automatically bridge cognitive frames, we're not just looking at a better chatbot. We're looking at infrastructure for neurodiversity-inclusive communication at scale.
VIII. Why This Matters Now
We're building AI systems rapidly. Most development focuses on accuracy, capability, and safety. Those matter. But we're overlooking something equally important: AI's capacity to function as cognitive translation infrastructure.
For neurodivergent individuals, this could be transformative. Imagine educational materials that automatically adapt to your cognitive processing style. Workplace documentation that presents information in your native conceptual vocabulary. Collaborative tools that eliminate frame-translation friction.
This isn't assistive technology bolted onto existing systems. It's a fundamental shift in how information can be structured and delivered. Same content, different projections. Same geometry, different frames.
The experiment was small. The method was simple. But the pattern is clear: human-AI collaboration can make cognitive diversity a strength rather than a friction point.
That's worth building toward.
Closing Reflection
What began as a theory validation became something more interesting: a glimpse into how minds and models can work together to reveal structure across difference. Six people didn't hand back six unrelated private philosophies. They revealed how a coherent idea refracts through different cognitive geometries while preserving the same underlying shape.
That shape—the stable structure that persisted across frames—is the real discovery. When independent human-AI pairs, operating without coordination, reconstruct the same geometric relationships from a minimal seed, something more than coincidence is at work.
It suggests that certain ideas have stable topologies. And it demonstrates that those topologies can be made accessible across the full spectrum of human cognitive diversity—not through accommodation, but through automatic frame translation.
The experiment showed that distributed cognition isn't a future possibility. It's already happening. And the structures preserved across perspectives might be our clearest window into how meaning actually operates—and how we can build systems that work for every kind of mind.