Policy Design Manager, User Well-being

Anthropic
Full-timeβ€’$190k-220k/year (USD)

πŸ“ Job Overview

Job Title: Policy Design Manager, User Well-being

Company: Anthropic

Location: Remote-Friendly (Travel Required) | San Francisco, CA

Job Type: Full-Time

Category: Policy Design / AI Safety / User Well-being

Date Posted: 2025-09-19T20:18:45

Experience Level: Mid-Senior Level (5-10 years)

Remote Status: Hybrid (25% office time expected, role may require more)

πŸš€ Role Summary

  • Lead the development and refinement of AI usage policies focused on user well-being, addressing critical areas such as mental health, sycophancy, delusions, and emotional attachment.

  • Serve as a subject matter expert in psychology and mental health to inform policy creation and safety intervention design for Anthropic's AI products.

  • Design and implement evaluation frameworks to rigorously test model performance against defined safety and well-being policies.

  • Collaborate cross-functionally with Product, Engineering, Public Policy, and Legal teams to ensure alignment and effective integration of safety policies throughout the product lifecycle.

πŸ“ Enhancement Note: While the job title "Policy Design Manager" might not explicitly fall under traditional "Revenue Operations" or "Sales Operations," the core responsibilities involve establishing frameworks, guidelines, and operational processes that directly impact product adoption, user trust, and long-term business sustainability. This role is crucial for ensuring the responsible deployment of AI, which indirectly supports customer retention and market growth, aligning with the broader GTM (Go-To-Market) operational objectives. The emphasis on policy, evaluation, and cross-functional alignment makes it a strategic operations function within the AI safety domain.

πŸ“ˆ Primary Responsibilities

  • Draft and update comprehensive usage policies governing the responsible interaction with Anthropic's AI models, specifically targeting user well-being concerns.

  • Develop robust evaluation frameworks and conduct regular testing of existing policies to identify and address gaps, ambiguities, and emerging risks.

  • Review flagged content and user interactions to inform policy enforcement, identify edge cases, and drive continuous policy improvement.

  • Collaborate with safeguards product teams to identify and mitigate user well-being risks, and contribute to the design and implementation of effective safety interventions.

  • Educate and align internal stakeholders, including Product, Engineering, Public Policy, and Legal teams, on user well-being policies and the broader AI safety strategy.

  • Stay abreast of evolving AI policy norms, industry standards, and research in psychology, mental health, and human-AI interaction to inform policy decision-making.

  • Conduct research and analysis to define emerging phenomena related to user well-being and AI interaction, creating evidence-based and psychometrically valid definitions.

  • Advise on opportunities for promoting user well-being within AI systems, supporting beneficial use cases and intervention development.

πŸ“ Enhancement Note: The responsibilities heavily emphasize a proactive, research-driven approach to policy development and risk mitigation. This aligns with advanced operations functions that require deep analytical capabilities, strategic foresight, and the ability to translate complex research into actionable guidelines. The role demands a strong understanding of how policy impacts product operations and user experience at scale.

πŸŽ“ Skills & Qualifications

Education:

  • Bachelor's degree in a related field (e.g., Psychology, Sociology, Human-Computer Interaction, Public Policy, Law) or equivalent practical experience.

  • Preferred: Advanced degree (Master's or Ph.D.) in Clinical Psychology, Counseling Psychology, Psychiatry, Social Work, or a closely related field emphasizing mental health and well-being.

Experience:

  • 5-10 years of relevant professional experience.

  • Proven experience as a researcher, subject matter expert, clinician, or trust & safety professional in areas such as psychology, mental health, developmental science, or human-AI interaction.

  • Demonstrated experience in drafting or updating product policies, user guidelines, or regulatory frameworks.

  • Experience working with generative AI products, including proficiency in prompt engineering for policy evaluation and risk assessment.

  • Familiarity with content moderation principles and the challenges of implementing policies at scale.

  • Ability to effectively bridge technical discussions with policy considerations and translate complex research findings into actionable recommendations.

  • Experience in developing evidence-based definitions for emerging phenomena, ideally with a focus on psychometric validity.

Required Skills:

  • Deep expertise in psychology, mental health, or related fields impacting user well-being.

  • Exceptional policy drafting and analytical skills, with the ability to articulate clear and concise guidelines.

  • Strong research and evaluation methodology skills, including the ability to design and implement testing frameworks.

  • Excellent communication and interpersonal skills, with a proven ability to educate and influence stakeholders.

  • Proficiency in creative thinking and problem-solving, especially in navigating ambiguity and complex challenges.

Preferred Skills:

  • Experience with AI safety principles and practices.

  • Familiarity with human-AI interaction research and its practical applications.

  • Knowledge of legal and regulatory frameworks relevant to AI and technology.

  • Experience in building and managing cross-functional policy initiatives.

  • Proficiency in data analysis tools and techniques.

πŸ“ Enhancement Note: The emphasis on an advanced degree in psychology or related fields highlights the specialized nature of this operations role. The requirement for experience with generative AI and prompt engineering indicates a need for practical, hands-on understanding of the technology being governed. The blend of policy, research, and technical understanding is critical for success.

πŸ“Š Process & Systems Portfolio Requirements

Portfolio Essentials:

  • Demonstrate examples of policy documents or guidelines you have authored, showcasing clarity, comprehensiveness, and adherence to established standards.

  • Include case studies or research projects that highlight your ability to analyze complex user behavior or societal impact issues, particularly in relation to technology.

  • Present frameworks or methodologies you have developed for evaluating policy effectiveness or testing product safety in sensitive areas.

  • Showcase experience in translating research findings or expert opinions into actionable operational guidelines or product requirements.

Process Documentation:

  • Illustrate your process for developing new policies from initial research and stakeholder consultation through to final implementation and iteration.

  • Detail your approach to reviewing and updating existing policies, including methods for identifying gaps, incorporating feedback, and measuring impact.

  • Explain your methodology for testing AI model performance against policy requirements, including the design of evaluation frameworks and the interpretation of results.

  • Document your process for collaborating with engineering and product teams to integrate policy requirements into product development and safety interventions.

πŸ“ Enhancement Note: For a role focused on policy design within AI safety, a portfolio is crucial for demonstrating practical application of theoretical knowledge. It should showcase not just the output (policies) but the robust process and analytical rigor behind them. This is analogous to operations professionals presenting process optimization case studies or system implementation blueprints.

πŸ’΅ Compensation & Benefits

Salary Range:

Benefits:

  • Competitive Compensation package.

  • Generous Vacation time for work-life balance.

  • Comprehensive Parental Leave policies.

  • Flexible Working Hours to accommodate diverse needs and schedules.

  • Access to a collaborative and well-equipped office space.

Working Hours:

  • Standard full-time commitment (approximately 40 hours per week), with flexibility. The role is hybrid, requiring approximately 25% of time spent in an office setting, with potential for more depending on role needs.

πŸ“ Enhancement Note: The salary range is competitive for a senior-level policy and research role in the tech industry, particularly in the San Francisco Bay Area. The inclusion of benefits like generous vacation and parental leave, alongside flexible working hours, indicates a company culture that values employee well-being, which is highly relevant for a role focused on user well-being. The hybrid model with a specific office-time expectation is standard for many tech companies.

🎯 Team & Company Context

🏒 Company Culture

Industry: Artificial Intelligence (AI) Research and Development, focusing on AI Safety and Beneficial AI.

Company Size: Anthropic is a growing company, indicating a dynamic environment with opportunities for significant impact and contribution. The exact employee count isn't specified here, but the growth implies a need for structured operations and clear policy frameworks.

Founded: Anthropic was founded in 2021, positioning it as a relatively young but rapidly advancing organization in the AI space. Its mission is to build reliable, interpretable, and steerable AI systems.

Team Structure:

  • The Policy Design Manager will likely be part of a larger Safeguards or Policy team, working closely with AI Safety researchers, product managers, engineers, and legal counsel.

  • The reporting structure will likely involve a manager or director within the AI Safety or Policy division, with direct collaboration across multiple product and engineering teams.

Methodology:

  • Anthropic emphasizes an empirical, research-driven approach to AI development, akin to scientific disciplines like physics and biology.

  • Operations and policy development are informed by rigorous data analysis, experimentation, and a commitment to advancing AI safety through large-scale research efforts.

  • The company values clear communication and collaboration, fostering an environment where ideas are shared frequently to ensure the highest impact work is pursued.

Company Website: https://www.anthropic.com/

πŸ“ Enhancement Note: Anthropic's focus on "big science" and impactful research suggests an environment that values deep expertise and rigorous methodology. For a Policy Design Manager, this means policies must be grounded in solid research and data, and operationalized with precision. The collaborative culture is key for navigating the complex interdependencies of AI development and safety.

πŸ“ˆ Career & Growth Analysis

Operations Career Level: This role is at a mid-to-senior level, requiring significant expertise and independent contribution. It's a specialized role within the broader AI safety and policy domain, offering a unique career path for professionals focused on the ethical and practical implications of AI.

Reporting Structure: The Policy Design Manager will likely report to a Director or Head of Policy/Safeguards, with dotted-line reporting or strong collaborative relationships with Product Leads, Engineering Managers, and Legal Counsel. This structure emphasizes influence and partnership rather than direct line management.

Operations Impact: This role directly impacts Anthropic's ability to deploy AI responsibly and safely. By shaping usage policies and safety interventions, the Policy Design Manager influences user trust, product adoption, regulatory compliance, and the company's overall reputation and mission success. Their work is foundational to the safe scaling of AI products.

Growth Opportunities:

  • Specialization: Deepen expertise in specific areas of AI safety, user well-being, or policy enforcement within the rapidly evolving AI landscape.

  • Leadership: Progress into senior leadership roles within policy, safeguards, or product management, potentially managing teams of policy experts or driving broader strategic initiatives.

  • Influence: Shape the direction of AI safety policy and best practices across the industry through thought leadership and contributions to public discourse.

  • Skill Development: Gain exposure to cutting-edge AI technologies, advanced research methodologies, and complex stakeholder management challenges.

πŸ“ Enhancement Note: This role offers a specialized career trajectory within the AI industry, distinct from traditional operations roles but sharing the emphasis on process, analysis, and strategic impact. Growth opportunities are tied to becoming a recognized expert in a critical and emerging field.

🌐 Work Environment

Office Type: Anthropic operates with a hybrid model, indicating a blend of remote work flexibility and in-person collaboration. Travel is also a requirement for this role.

Office Location(s): While the role is remote-friendly, the primary mention is San Francisco, CA, suggesting this is a key hub for any required in-office presence.

Workspace Context:

  • The workspace is designed to foster collaboration, with expectations for staff to be in the office at least 25% of the time. This facilitates direct interaction with colleagues, team meetings, and brainstorming sessions critical for policy development.

  • As a company focused on cutting-edge AI, the environment likely provides access to advanced tools, research resources, and opportunities to engage with leading experts in the field.

Work Schedule:

  • Standard full-time hours are expected, but the company offers flexible working hours, allowing for adaptation to personal needs and optimal productivity. This flexibility is crucial for deep analytical work and iterative policy design.

πŸ“ Enhancement Note: The hybrid nature of the work environment, coupled with required travel, suggests a need for strong self-management and communication skills. The emphasis on collaboration in the office highlights the importance of in-person engagement for complex problem-solving and stakeholder alignment in policy design.

πŸ“„ Application & Portfolio Review Process

Interview Process:

  • Initial Screening: Likely involves a review of your resume and cover letter, focusing on your policy experience, subject matter expertise (psychology/mental health), and understanding of AI safety concepts.

  • Technical/Deep Dive Interviews: Expect interviews with hiring managers and team members focused on your expertise in psychology, policy development, research methodologies, and your approach to evaluating AI safety risks related to user well-being. You may be asked to discuss specific case studies from your experience.

  • Skills-Based Assessments/Case Studies: Candidates may be asked to complete a take-home assignment or participate in a live case study simulation. This could involve drafting a policy for a hypothetical AI scenario, evaluating a piece of content, or designing an intervention strategy. This is where your portfolio will be crucial.

  • Cross-Functional Interviews: Interviews with stakeholders from Product, Engineering, Legal, and Public Policy teams to assess your ability to collaborate, communicate complex ideas, and align diverse perspectives.

  • Final Round: Likely involves senior leadership to discuss strategic fit, long-term vision, and cultural alignment with Anthropic's mission and values.

Portfolio Review Tips:

  • Showcase Impact: Clearly articulate the problem, your approach (methodology), the actions taken (policy drafted, evaluation designed), and the measurable results or impact of your work. Quantify outcomes whenever possible.

  • Demonstrate Process: For each portfolio item, explain your end-to-end process – from research and stakeholder engagement to final output and iteration. This is critical for a policy design role.

  • Highlight Well-being Focus: Ensure your portfolio prominently features work related to user psychology, mental health, or mitigating harm in digital environments.

  • Clarity and Conciseness: Present your work in a clear, organized, and easily digestible format. Use visuals or summaries where appropriate.

  • Tailor to Anthropic: Emphasize how your experience and approach align with Anthropic's mission of building safe and beneficial AI.

Challenge Preparation:

  • Policy Drafting: Be prepared to draft a policy for a novel AI capability, focusing on user well-being. Consider edge cases and enforcement mechanisms.

  • Risk Assessment: Practice identifying and assessing potential risks associated with AI use cases, particularly those impacting mental health or emotional states.

  • Intervention Design: Think about creative interventions that could mitigate identified risks while supporting beneficial AI use.

  • Stakeholder Communication: Prepare to explain complex policy decisions and their rationale to technical and non-technical audiences.

πŸ“ Enhancement Note: The interview process will heavily scrutinize your ability to translate specialized knowledge (psychology, AI safety) into practical, operational policies and interventions. A strong, well-curated portfolio that demonstrates this translation is essential.

πŸ›  Tools & Technology Stack

Primary Tools:

  • Document Creation & Collaboration: Google Workspace (Docs, Sheets, Slides), Microsoft Office Suite. Essential for drafting policies, creating presentations, and collaborating with teams.

  • Project Management: Tools like Asana, Jira, or Trello may be used for tracking policy development lifecycles, task management, and cross-functional project coordination.

  • Research & Analysis: Access to academic databases, industry reports, and potentially internal data analytics platforms for research and evidence gathering.

  • Communication Platforms: Slack, Microsoft Teams for daily communication and team collaboration.

Analytics & Reporting:

  • Data Analysis Tools: While not explicitly stated, proficiency with data analysis tools (e.g., Python with libraries like Pandas, NumPy; R; SQL) would be beneficial for analyzing flagged content, user feedback, and policy effectiveness metrics.

  • Visualization Tools: Tools like Tableau, Looker, or even advanced Excel/Google Sheets for creating dashboards and reports to communicate policy performance to stakeholders.

CRM & Automation:

  • While not a direct CRM role, understanding how policies are enforced within product workflows and potentially automated systems (e.g., content flagging systems) is valuable. Familiarity with workflow automation principles could be advantageous.

πŸ“ Enhancement Note: While this role isn't a direct technology operations role, familiarity with tools used for documentation, collaboration, and potentially data analysis is important. The ability to understand and interact with the technology stack where policies are implemented is a key operational aspect.

πŸ‘₯ Team Culture & Values

Operations Values:

  • Safety First: A paramount commitment to building AI systems that are safe, reliable, and beneficial, with user well-being as a core tenet.

  • Research-Driven: Decisions and policies are based on rigorous research, data analysis, and expert knowledge, not just intuition.

  • Collaboration: A strong emphasis on working together across diverse teams (Product, Engineering, Legal, Public Policy) to achieve shared safety goals.

  • Impact-Oriented: Focus on creating meaningful impact by advancing the state of AI safety and ensuring responsible AI deployment.

  • Honesty and Transparency: Operating with integrity in communication and decision-making, both internally and externally.

Collaboration Style:

  • Expect a highly collaborative environment where open dialogue and constructive feedback are encouraged.

  • Work closely with diverse groups, requiring strong communication skills to bridge different perspectives and technical understandings.

  • The company values communication skills and often hosts research discussions, indicating a culture of shared learning and collective problem-solving.

πŸ“ Enhancement Note: The company's values are deeply intertwined with its mission. For a Policy Design Manager, embodying these values means approaching policy work with a strong ethical compass, a commitment to evidence, and a collaborative spirit.

⚑ Challenges & Growth Opportunities

Challenges:

  • Navigating Ambiguity: Developing policies for novel AI capabilities that are still evolving presents inherent ambiguity. Candidates must be comfortable making reasoned judgments with incomplete information.

  • Balancing Safety and Innovation: Finding the right balance between robust safety measures and enabling beneficial AI innovation is a constant challenge.

  • Evolving AI Landscape: The rapid pace of AI development requires continuous learning and adaptation of policies to address new risks and opportunities.

  • Cross-Functional Alignment: Achieving consensus and buy-in from diverse teams with potentially competing priorities can be challenging.

Learning & Development Opportunities:

  • Deep AI Safety Expertise: Gain unparalleled experience in a critical and rapidly advancing field, becoming an expert in AI safety and user well-being.

  • Industry Influence: Contribute to shaping the future of responsible AI development and policy standards across the industry.

  • Advanced Research Exposure: Work alongside leading AI researchers and engineers, gaining insights into cutting-edge technologies and methodologies.

  • Skill Specialization: Develop highly sought-after skills in policy design, risk management, and human-AI interaction within the AI context.

πŸ“ Enhancement Note: The challenges in this role are significant but also present major growth opportunities. Successfully navigating these challenges will position the candidate as a leader in AI safety and policy.

πŸ’‘ Interview Preparation

Strategy Questions:

  • Policy Development: "Describe your process for developing a new policy for an emerging AI capability. How would you incorporate user well-being concerns into this process?" (Focus on research, stakeholder input, iterative refinement, and evidence-based reasoning.)

  • Risk Mitigation: "How would you identify and mitigate risks related to emotional attachment or sycophancy in AI interactions? What specific policy interventions could be effective?" (Prepare examples of psychological principles applied to AI.)

  • Stakeholder Alignment: "Imagine you need to convince an engineering team to implement a complex safety feature that might slow down product performance. How would you approach this conversation?" (Emphasize clear communication, data-driven arguments, and finding common ground.)

Company & Culture Questions:

  • "What interests you most about Anthropic's mission and its approach to AI safety?" (Research Anthropic's mission, values, and recent work.)

  • "How do you approach working in a collaborative, research-intensive environment?" (Highlight your experience with cross-functional teams and knowledge sharing.)

Portfolio Presentation Strategy:

  • Structure: Organize your portfolio by key skill areas or project types. For each project, clearly state the objective, your role, the process followed, the outcome, and lessons learned.

  • Quantify Impact: Where possible, use data to demonstrate the effectiveness of your policies or interventions (e.g., reduction in flagged content, improved user feedback).

  • Narrative: Tell a compelling story for each piece of work, highlighting your problem-solving skills and strategic thinking.

  • Focus on Process: For policy design, the "how" is as important as the "what." Detail your research, stakeholder consultation, and iterative refinement processes.

πŸ“ Enhancement Note: Interview preparation should focus on demonstrating a deep understanding of user psychology, robust policy design processes, and effective cross-functional collaboration, all within the context of AI safety. Your portfolio is your primary tool for showcasing these capabilities.

πŸ“Œ Application Steps

To apply for this Policy Design Manager, User Well-being position:

  • Submit your application through the provided link on Greenhouse. Ensure your resume and cover letter are tailored to highlight your relevant experience in psychology, policy, and AI safety.

  • Portfolio Preparation: Curate a portfolio that showcases your best work in policy development, research, and evaluation, with a particular emphasis on user well-being and human-AI interaction. Include examples of policy documents, case studies, or methodology descriptions.

  • Resume Optimization: Highlight quantifiable achievements and specific responsibilities related to policy design, risk assessment, and cross-functional collaboration. Use keywords from the job description such as "user well-being," "mental health," "policy development," "AI safety," and "stakeholder alignment."

  • Interview Practice: Prepare for behavioral and situational questions by thinking of specific examples from your past experience that demonstrate your skills in policy design, research, problem-solving, and collaboration. Practice presenting your portfolio items clearly and concisely.

  • Company Research: Thoroughly research Anthropic's mission, values, and current work in AI safety. Understand their approach to responsible AI development and how this role contributes to their broader goals.

⚠️ Important Notice: This enhanced job description includes AI-generated insights and operations industry-standard assumptions. All details should be verified directly with the hiring organization before making application decisions.

Application Requirements

An advanced degree in clinical psychology or a related field is preferred, along with experience in policy drafting and mental health. Candidates should have a strong understanding of AI technologies and the challenges in policy implementation.