AI Safety | Interpretability Research | Consulting Opportunities
I'm actively seeking opportunities in AI safety and interpretability research. If you're working on alignment, bias measurement, or user experience optimization, let's discuss how ALMG can contribute.
Dr. Bradley D. Shields
MD/PhD | AI Interpretability Researcher
📍 Available for remote work or relocation
- Full-time roles: AI Safety Researcher, Interpretability Research Scientist, or User Experience Safety positions at Anthropic, OpenAI, or research institutions
- Research collaborations: projects applying the ALMG framework to specific models, use cases, or research questions
- Consulting: red teaming, bias audits, safety training evaluation, user experience optimization
- Speaking: conference presentations, workshops, educational content on geometric interpretability
- Writing: documentation, research papers, blog posts, educational materials about AI safety and interpretability
I aim to respond to all inquiries within 24-48 hours. For urgent matters or time-sensitive opportunities, please indicate this in your message.
First discussions typically focus on understanding your needs, showing how ALMG applies to your specific context, and exploring mutual fit. I'm happy to share additional technical details, case studies, or demonstration materials.
The five-scroll series serves as a working demonstration of methodology and capability. Additional technical documentation, reproducibility protocols, and validation studies are available upon request.
Are you available for full-time positions?
Yes. I'm actively seeking full-time research positions in AI safety and interpretability, particularly at Anthropic, OpenAI, or academic research institutions. I prefer remote work but am open to relocation for the right opportunity.

Can the framework be applied to other models?
Yes. The framework has been validated across GPT-4, Grok, and Claude, demonstrating cross-system applicability. I can adapt the methodology to any large language model, including proprietary or specialized systems.

Are you open to research collaborations?
Absolutely. I'm interested in collaborative research projects, co-authoring papers, and contributing to academic AI safety work. The ALMG framework is designed to be reproducible and publishable.

What are your consulting rates?
Rates depend on project scope, duration, and deliverables, and are in line with recent consulting engagements. Contact me to discuss your specific needs and budget.

Do you give talks or workshops?
Yes. I'm available for conference presentations, workshops, and educational sessions. The material can be adapted for technical audiences (researchers, engineers) or broader audiences (policymakers, general public).
AI Safety & Constitutional AI
Safety Systems & Applied Research
Academic AI Safety Labs
Policy & Regulatory Bodies
If your organization isn't listed but you're working on AI safety, interpretability, or alignment, I'm still interested in hearing from you.
Whether you're considering a full-time hire, research partnership, or consulting engagement, I'd be happy to explore how ALMG can contribute to your AI safety work.
bradley2shields@gmail.com | Available for immediate start