Katalina Hernandez
Lawyer by education, researcher by vocation.
Substack: Stress-Testing Reality Limited
→Ask me about: Advice on making technical research legible to lawyers and regulators, frameworks for AI liability (EU or UK law), and general compliance questions (GDPR, EU AI Act, DSA/DMA, Product Liability Directive).
→Book a free slot: https://www.aisafety.com/advisors
I produce independent legal research for AI Safety and AI Governance projects. I work to inform enforceable legal mechanisms with alignment, interpretability, and control research, and to avoid technical safety being brought into the conversation too late.
How I work: I read frontier safety papers and reproduce their core claims, map them to concrete obligations (EU AI Act, PLD, NIST/ISO), and propose implementation plans.
Current projects
- Law-Following AI (LFAI): drafting a paper (in preparation for submission to the Cambridge Journal for Computational Legal Studies) on whether legal standards can serve as alignment anchors and how law-alignment relates to value alignment, building on the original framework proposed by Cullen O'Keefe and the Institute for Law & AI.
- Regulating downstream modifiers: writing “Regulating Downstream Modifiers in the EU: Federated Compliance and the Causality–Liability Gap” for IASEAI, stress-testing Hacker & Holweg’s proposal against causation/liability and frontier-risk realities.
- Open problems in regulatory AI governance: co-developing with ENAIS members a tractable list of open problems where AI Safety work can close governance gaps (deceptive alignment, loss of oversight, evaluations).
- AI-safety literacy for tech lawyers: building a syllabus, already in use by established institutions, that focuses on translating alignment, interpretability, and control research into audits, documentation, and enforcement-ready duties.