Privacy in Action: Towards Realistic Privacy Mitigation and Evaluation for LLM-Powered Agents, accepted at EMNLP 2025, introduces PrivacyChecker, a lightweight module that can be integrated into agents to make them more sensitive to contextual integrity. It also enables a new evaluation approach, transforming static privacy benchmarks into dynamic environments that reveal substantially higher privacy risks in real-world agent interactions.

Contextual Integrity in LLMs via Reasoning and Reinforcement Learning, accepted at NeurIPS 2025, takes a different approach to applying contextual integrity, treating it as a problem that requires careful reasoning about the context, the information, and the parties involved in order to enforce privacy norms.
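The core idea both papers build on, checking whether an information flow fits the norms of its context, can be illustrated with a minimal sketch. All names here (`Flow`, `ContextualIntegrityChecker`, `filter_payload`) are hypothetical and do not reflect the actual PrivacyChecker API:

```python
from dataclasses import dataclass

# Hypothetical sketch of a contextual-integrity gate an agent could call
# before passing information onward. In contextual-integrity terms, a flow
# is appropriate only if sender, information type, recipient, and purpose
# all match a norm of the context.

@dataclass(frozen=True)
class Flow:
    """An information flow: who shares what, with whom, for what purpose."""
    sender: str
    info_type: str
    recipient: str
    purpose: str

class ContextualIntegrityChecker:
    def __init__(self, norms: set[Flow]):
        # Norms enumerate the flows that are appropriate in this context.
        self.norms = norms

    def allowed(self, flow: Flow) -> bool:
        return flow in self.norms

    def filter_payloads(self, flows: list[tuple[Flow, str]]) -> list[str]:
        # Keep only payloads whose flow conforms to a stated norm;
        # everything else is withheld from the outgoing message.
        return [payload for flow, payload in flows if self.allowed(flow)]

norms = {Flow("patient", "diagnosis", "doctor", "treatment")}
checker = ContextualIntegrityChecker(norms)
ok = Flow("patient", "diagnosis", "doctor", "treatment")
bad = Flow("patient", "diagnosis", "insurer", "marketing")
print(checker.filter_payloads([(ok, "diagnosis: flu"), (bad, "diagnosis: flu")]))
# → ['diagnosis: flu']
```

The sketch treats norms as an explicit allowlist for simplicity; the papers instead have the model reason about norms in context, which handles flows that no fixed list anticipates.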