Road to Anthropic

I’ve been closely following Anthropic’s work on AI safety and security — their research on red teaming and model vulnerabilities, and initiatives like the Fellows program, genuinely resonates with what I care about in offensive security.

When the time comes for me to look for a full-time position, applying there is something I’d really like to do. In the meantime, this section serves as a personal log of everything I build, learn and explore at the intersection of AI and security — a running record of hands-on experience I can point back to.

There are no articles to list here yet.