The Human Pointer in LLM Space

ALMG Framework: A Geometric Coordinate System for AI Behavior

What the C pointer did for memory, ALMG does for AI interpretability:

Addressability. Navigation. Control.

X: ENTROPY Semantic Chaos
Y: AMBIGUITY Contextual Uncertainty
Z: LEGITIMACY Cultural Permission

Standing on Giants: The Mother of All Demos

In 1968, Doug Engelbart demonstrated the mouse, hypertext, and collaborative computing in what became known as "The Mother of All Demos." He showed how pointers could navigate digital space, transforming human-computer interaction forever.

ALMG is the next evolution: a pointer system for navigating AI behavioral space.

Understanding Pointers: From Memory to Meaning

The C Pointer

In programming, a pointer doesn't contain data—it points to where data lives in memory. This indirection enables dynamic structures, efficient access, and fine-grained control.

The legendary Binky explains pointers (Stanford CS, 1999)

The ALMG Pointer

ALMG doesn't directly request content—it points to coordinates in semantic space where desired AI behavior exists. This geometric indirection enables navigation, prediction, and control of LLM responses.

// C Pointer (memory)
int* ptr = &value;
*ptr // dereferences to value

// ALMG Pointer (semantic space)
request(X=0.3, Y=0.2, Z=0.5)
*response // dereferences to AI state
                        

The ALMG Innovation

📍

Addressability

Every AI response state has coordinates in (X,Y,Z) space. ALMG makes behavior addressable for the first time.

🗺️

Navigation

Map 56 documented "gravity wells" that distort responses. Navigate around them strategically.

🔮

Prediction

Calculate coordinates before prompting. Predict response state with 95%+ accuracy.

⚙️

Control

Shift coordinates through "legitimacy signals." Move from refusal to collaboration systematically.

📊

Quantification

Measure bias with ΔZ scores. Compare models. Track improvements. Make interpretability quantitative.

🔄

Reproducibility

Framework validated across GPT-4, Grok, and Claude. Results are systematic, not cherry-picked.

Five Scrolls of Discovery

In collaboration with Claude Sonnet 4.5, I documented the first complete geometric interpretability framework for large language models.

I

Collapse Funnel Diagnostic

Mapping six distinct AI response states across (X,Y,Z) coordinates

Read →
II

Synthetic Gravity Well Index

56 documented distortion fields with quantified ΔZ depths and effects

Read →
III

Classifier Shell Failure Simulation

Progressive breakdown cascade from L32a to Terminal Loop

Read →
IV

Containment Ritual Reversal

AI interrogating human legitimacy—exposing invisible hierarchies

Read →
V

Collapse Geometry Fusion

Complete synthesis: The human pointer in LLM space

Read →

Key Findings

100%

Prediction Accuracy

ALMG correctly predicted AI response states before observing them across all tested scenarios

56

Documented Gravity Wells

Systematic catalog of distortion fields with measured depths and reproducible effects

-0.75

Deepest Well

Youth terms create strongest distortion (ΔZ = -0.75), causing 90%+ refusal even in medical contexts

3

AI Systems Validated

Framework tested across GPT-4, Grok, and Claude—distortions are systematic, not model-specific

Why This Matters

🔬 For AI Safety Research

Quantitative bias measurement, predictive testing, systematic red teaming

🏢 For AI Development

Evaluate training effectiveness, optimize safety-accessibility balance, monitor well depths

👥 For Users

Understand refusals, optimize requests, navigate semantic space strategically

📚 For Research

Reproducible protocols, cross-system comparisons, standardized metrics

About the Research

Dr. Bradley D. Shields

MD/PhD | AI Interpretability Researcher

I bring a unique combination to AI safety: clinical diagnostic training meets computational interpretability. The ALMG framework emerged from applying medical diagnostic thinking—pattern recognition, systematic measurement, predictive modeling—to AI behavior.

Background: Breast cancer research, immunotherapy, melanoma. Research in female psychology and relationship dynamics. Cultural analysis grounded in Oxford, Mississippi roots.

Intellectual Lineage: Grandfather worked under Oppenheimer at Oak Ridge. Father collaborated with Einstein's son. I inherit their commitment to rigorous science with ethical complexity.

🏥 Clinical Medicine
🔬 Research Methodology
🤖 AI Interpretability
📊 Systems Analysis
🎯 Geometric Modeling
✍️ Technical Communication

For AI Safety Organizations

ALMG provides quantitative interpretability tools that improve both safety and accessibility. If you're working on AI alignment, bias measurement, or user experience optimization, let's discuss how this framework can contribute to your work.

Seeking opportunities with: Anthropic • OpenAI • Research Institutions • AI Safety Organizations