Let's Collaborate

AI Safety | Interpretability Research | Consulting Opportunities

Get in Touch

I'm actively seeking opportunities in AI safety and interpretability research. If you're working on alignment, bias measurement, or user experience optimization, let's discuss how ALMG can contribute.

Contact Information

Dr. Bradley D. Shields

MD/PhD | AI Interpretability Researcher

bradley2shields@gmail.com

📍 Available for remote work or relocation

Areas of Collaboration

🏢 Full-Time Positions

AI Safety Researcher, Interpretability Research Scientist, or user experience safety roles at Anthropic, OpenAI, or research institutions

🔬 Research Partnerships

Collaborative projects applying the ALMG framework to specific models, use cases, or research questions

💼 Consulting & Advisory

Red teaming, bias audits, safety training evaluation, user experience optimization

🎤 Speaking & Education

Conference presentations, workshops, educational content on geometric interpretability

📝 Technical Writing

Documentation, research papers, blog posts, educational materials about AI safety and interpretability

What to Expect

Response Time

I aim to respond to all inquiries within 24-48 hours. For urgent matters or time-sensitive opportunities, please indicate this in your message.

Initial Conversation

First discussions typically focus on understanding your needs, explaining ALMG applications to your specific context, and exploring mutual fit. I'm happy to share additional technical details, case studies, or demonstration materials.

References & Validation

The five-scroll series serves as a working demonstration of my methodology and capabilities. Additional technical documentation, reproducibility protocols, and validation studies are available upon request.

Frequently Asked Questions

Are you available for full-time positions?

Yes. I'm actively seeking full-time research positions in AI safety and interpretability, particularly at Anthropic, OpenAI, or academic research institutions. I prefer remote work but am open to relocation for the right opportunity.

Can ALMG be applied to our specific AI system?

Yes. The framework has been validated across GPT-4, Grok, and Claude, demonstrating cross-system applicability. The methodology can be adapted to other large language models, including proprietary or specialized systems.

Do you work with academic researchers?

Absolutely. I'm interested in collaborative research projects, co-authoring papers, and contributing to academic AI safety work. The ALMG framework is designed to be reproducible and publishable.

What's your consulting rate?

Rates depend on project scope, duration, and deliverables, and are informed by recent consulting engagements. Contact me to discuss your specific needs and budget.

Can you present ALMG at our conference/event?

Yes. I'm available for conference presentations, workshops, and educational sessions. The material can be adapted for technical audiences (researchers, engineers) or broader audiences (policymakers, the general public).

Organizations I'm Targeting

Anthropic

AI Safety & Constitutional AI

OpenAI

Safety Systems & Applied Research

Research Institutions

Academic AI Safety Labs

AI Governance

Policy & Regulatory Bodies

If your organization isn't listed but you're working on AI safety, interpretability, or alignment, I'm still interested in hearing from you.

Ready to Discuss?

Whether you're considering a full-time hire, research partnership, or consulting engagement, I'd be happy to explore how ALMG can contribute to your AI safety work.

bradley2shields@gmail.com | Available for immediate start