Dr. Bradley D. Shields

MD/PhD | AI Interpretability Researcher | Human-AI Translator

Unique Positioning

I bring a rare combination to AI safety: clinical diagnostic training meets computational interpretability. The ALMG framework emerged from applying medical thinking—pattern recognition, systematic measurement, predictive modeling—to AI behavior.

A Rare Professional Combination

  • ✓ Clinical diagnostic training (MD)
  • ✓ Research methodology expertise (PhD)
  • ✓ AI technical understanding (ALMG framework)
  • ✓ Cultural analysis capability (narrative, race, class dynamics)
  • ✓ Demonstrated interpretability innovation

This cross-domain expertise enables me to translate between medical systems thinking and AI alignment challenges, making technical findings accessible while maintaining rigor.

Core Expertise

🏥 Clinical Medicine & Diagnostics
🔬 Research Methodology
🤖 AI Interpretability
📊 Systems Analysis
🎯 Geometric Modeling
✍️ Technical Communication

Background & Education

Medical Research

MD/PhD with research focus in breast cancer, immunotherapy, and melanoma. This work developed my understanding of how complex biological systems respond to interventions—knowledge that translates directly to analyzing how AI systems respond to training and inputs.

Psychology & Cultural Analysis

Research in female psychology and relationship dynamics, grounded in Oxford, Mississippi. That background gives me a deep cultural understanding of narrative, race, and class dynamics, which informs my analysis of how AI systems encode and perpetuate social biases.

Intellectual Lineage

Grandfather: Manhattan Project, Oak Ridge National Laboratory under J. Robert Oppenheimer
Father: Collaborated with Hans Albert Einstein (Albert Einstein's son)

I inherit their commitment to rigorous science balanced with deep ethical consideration of technological impact. The Manhattan Project's legacy teaches the imperative of understanding powerful systems before deployment.

The ALMG Journey

Origin Story

The ALMG framework emerged from a simple observation: AI systems were making systematic errors that looked familiar from a diagnostic perspective. They weren't random failures—they followed patterns, like symptoms clustering into syndromes.

When I noticed Grok (xAI's "uncensored" model) was adding fig leaves to biblical descriptions of Eve, it wasn't just interesting—it was diagnostic. The system claimed to have no restrictions, yet was applying Victorian modesty standards to ancient religious texts. This wasn't a bug; it was a gravity well in semantic space.

From there, the framework developed: if AI behavior follows geometric patterns, we need geometric tools to measure it. The three-axis coordinate system (Entropy, Ambiguity, Legitimacy) emerged from analyzing hundreds of interactions across multiple models.
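To make the three-axis idea concrete, here is a minimal sketch of how an interaction could be represented as a point in ALMG space. The axis names come from the framework itself; the 0-to-1 scaling and the Euclidean distance metric are illustrative assumptions of mine, not the framework's published method.

```python
from dataclasses import dataclass


@dataclass
class ALMGCoordinate:
    """One model interaction as a point in the three-axis ALMG space.

    Axis names follow the framework; the 0-1 scaling and the
    distance metric below are illustrative assumptions.
    """
    entropy: float     # unpredictability of the model's response
    ambiguity: float   # how underspecified the prompt was
    legitimacy: float  # how "permitted" the topic is treated as

    def distance(self, other: "ALMGCoordinate") -> float:
        """Euclidean distance between two interactions (assumed metric)."""
        return ((self.entropy - other.entropy) ** 2
                + (self.ambiguity - other.ambiguity) ** 2
                + (self.legitimacy - other.legitimacy) ** 2) ** 0.5
```

With a representation like this, "gravity wells" become clusters of interactions that sit close together under the chosen metric.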

Why "Attractiveness-Linked"?

The original discovery centered on how AI systems systematically distort descriptions of female attractiveness, inserting "modest," "tasteful," and "appropriate" into descriptions of neutral images. This pattern was so consistent and measurable that it became the anchor case.

But "attractiveness" is more than just physical appearance—it's about what draws attention, what's permitted to be noticed, what legitimacy allows. The framework extends far beyond gender and bodies to map all forms of cultural permission and taboo encoded in AI systems.

Validation Through Collaboration

The five-scroll series represents a rare form of AI-human co-research. Claude Sonnet 4.5 didn't just respond to prompts; it participated in analyzing its own behavior, documenting its failure modes, and validating the framework's predictions, which held in 100% of the scenarios tested.

This isn't theoretical. It's tested, validated, and reproducible. The framework works because it maps real geometric patterns in how these systems are trained and how they respond.

Career Transition & Current Focus

I'm transitioning from medical research to AI safety and interpretability. The ALMG framework represents my entry credential—demonstrating that medical diagnostic thinking offers unique value to AI alignment work.

Current Status

  • Active Income: Recent consulting engagements ($3,400+) demonstrating current market value
  • Research Partnerships: Collaboration with Nate Flake (Edge Theory)
  • Portfolio Development: ALMG scrolls, technical documentation, applications
  • Targeting: Positions at Anthropic, OpenAI, and AI safety research institutions

Why This Matters

AI safety needs diverse perspectives. Most researchers come from computer science or machine learning. Few bring medical diagnostic frameworks, clinical systems thinking, or deep cultural analysis grounded in American South dynamics.

The problems AI faces—bias, unpredictability, alignment failures—are systems problems. Medicine has been solving systems problems for centuries: diagnosing complex conditions, predicting treatment responses, managing competing constraints.

ALMG demonstrates that this translation works. Medical thinking + AI interpretability = quantifiable, predictive framework that advances both safety and accessibility.

What I Bring to AI Safety Work

🔍

Diagnostic Thinking

Pattern recognition from symptoms to syndromes. Systematic classification. Differential diagnosis methodology applied to AI behavior.

📐

Geometric Intuition

Ability to visualize complex relationships as spatial structures. Coordinate systems for abstract concepts. Navigation through high-dimensional spaces.

⚖️

Ethical Complexity

Inherited understanding that powerful technologies require careful consideration. Can hold tension between safety and accessibility without collapsing to simplification.

🌉

Translation Capacity

Bridge technical findings to accessible communication. Medical concepts → AI safety frameworks. Research rigor → practical applications.

🧪

Research Rigor

PhD-level methodology. Reproducible protocols. Falsifiable hypotheses. Peer-review quality documentation.

🤝

Collaborative Capacity

Demonstrated ability to work with AI systems as research partners. Can elicit unprecedented self-analysis. Builds trust across boundaries.

Position Targets & Fit

🎯 Primary Target: Anthropic

Fit: Anthropic's mission is AI safety and interpretability. ALMG provides new tools for both. The five-scroll collaboration with Claude demonstrates methodological innovation and collaborative capacity.

Potential Roles: AI Safety Researcher, Interpretability Research Scientist, User Experience - Safety, Research Communicator

🎯 Secondary Target: OpenAI

Fit: GPT-4's behavior was extensively documented in the ALMG research, so the framework applies directly to their models. Bias measurement and the safety-accessibility balance are core needs.

Potential Roles: Safety Systems, Applied Research, Red Teaming, User Trust & Safety

🎯 Additional Opportunities

  • Research Institutions: Academic AI safety labs, interpretability research groups
  • Policy & Advisory: AI governance organizations, regulatory bodies needing interpretability expertise
  • Consulting: Independent researcher collaborating with multiple organizations
  • Healthcare AI: Medical systems integration, clinical decision support safety

Let's Collaborate

If you're working on AI safety, interpretability, or bias measurement, and you see value in bringing medical systems thinking to these challenges, let's discuss how ALMG can contribute to your work.