Product & UX Tester - Gen AI

Wing Assistant
Full-time · $1,500-2,000/month (USD) · Istanbul, Turkey

📍 Job Overview

  • Job Title: Product & UX Tester - Gen AI
  • Company: Wing Assistant (M32 AI)
  • Location: Istanbul, Turkey
  • Job Type: Full-Time
  • Category: Quality Assurance / Operations
  • Date Posted: 2025-12-10
  • Experience Level: Mid-Level (2-5 years)
  • Remote Status: Fully Remote

🚀 Role Summary

  • This role is critical for ensuring the quality and reliability of cutting-edge Generative AI (GenAI) products within a fast-paced, agile startup environment operating under the Wing Assistant umbrella.
  • Responsibilities encompass the end-to-end Quality Assurance (QA) lifecycle, from strategy and design to execution and stakeholder reporting, with a strong emphasis on AI behavior and decision-making validation.
  • The position requires a blend of manual, UX, and structured exploratory testing, augmented by targeted test automation to achieve rapid feedback loops and minimize production defects.
  • Success will be measured by achieving high test coverage, maintaining fast regression test execution times, and ensuring bug-free weekly releases, directly impacting product delight and business operations.

📝 Enhancement Note: While the title is "Product & UX Tester," the detailed responsibilities and desired skills strongly indicate a core Quality Assurance (QA) role with a specialization in AI and automation within an operations-focused context. The emphasis on owning the QA lifecycle, designing test plans, and improving test reliability aligns with typical QA Engineer or QA Automation Engineer responsibilities, particularly in a GTM or product development operations setting.

📈 Primary Responsibilities

  • Own the complete Quality Assurance lifecycle for Agentic AI products, encompassing strategy formulation, test plan design, execution, performance monitoring, and final release sign-off.
  • Develop and execute comprehensive test plans, including functional, regression, smoke, structured exploratory, and usability testing, with a specific focus on AI behavior and complex decision chains.
  • Rigorously validate multi-step decision flows and AI reasoning processes to identify logic gaps, guardrail failures, and deviations from specified requirements.
  • Conduct structured exploratory testing to proactively uncover emergent behaviors, edge cases, and potential cascading AI failures that might not be captured by scripted tests.
  • Build and maintain synthetic test scripts for user interface elements, APIs, and end-to-end user journeys to ensure consistent and reliable functionality across platforms.
  • Execute testing across various platforms, including web and mobile applications, as well as system integrations, to verify cross-platform consistency, performance, and user experience.
  • Develop and maintain quality dashboards that track key performance indicators (KPIs) such as test coverage, failure rates, and defect trends for all relevant stakeholders.
  • Proactively improve test reliability by addressing test flakiness, optimizing parallel execution strategies, and reducing overall test execution time to enable faster development cycles.
  • Collaborate closely with Product Management, Design, and Engineering teams to refine product requirements and establish clear, measurable go/no-go criteria for releases.
  • Monitor product quality pre- and post-release, leveraging data analytics to enhance AI evaluation frameworks and strengthen existing guardrails.
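As a purely hypothetical illustration of the multi-step decision-flow validation described above, the sketch below checks an agent trace against simple guardrails. The step schema, field names, and allowed-action list are invented for this example; a real harness would mirror the product's actual trace format.

```python
# Hypothetical sketch: validating an agent's multi-step decision chain
# against simple guardrails. The step schema and allowed-action set are
# invented for illustration, not taken from the posting.

ALLOWED_ACTIONS = {"search", "summarize", "reply", "escalate"}

def validate_decision_chain(steps):
    """Return a list of guardrail violations found in an agent trace.

    Each step is expected to be a dict with 'action' and 'rationale' keys.
    """
    violations = []
    for i, step in enumerate(steps):
        if "action" not in step or "rationale" not in step:
            violations.append(f"step {i}: missing action or rationale")
            continue
        if step["action"] not in ALLOWED_ACTIONS:
            violations.append(f"step {i}: disallowed action {step['action']!r}")
        if not step["rationale"].strip():
            violations.append(f"step {i}: empty rationale (possible logic gap)")
    # Guardrail: a complete chain should end in a user-facing action.
    if steps and steps[-1].get("action") not in {"reply", "escalate"}:
        violations.append("chain does not terminate in a user-facing action")
    return violations
```

A trace containing a step like `{"action": "delete_db", "rationale": "cleanup"}` would be flagged as a disallowed action, which is the kind of guardrail failure the role is asked to surface before release.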

📝 Enhancement Note: The responsibilities clearly define a proactive QA role focused on the unique challenges of AI products. The emphasis on "owning the QA lifecycle," "validating multi-step decision flows," and "monitoring pre- and post-release quality" highlights the strategic importance of this position in ensuring the operational integrity of AI-driven services.

🎓 Skills & Qualifications

Education: While no specific degree is listed, a Bachelor's degree in Computer Science, Engineering, or a related technical field is commonly expected for roles involving test automation and AI product quality.

Experience: 2-5 years in Quality Assurance, with a significant portion dedicated to test automation and product testing, ideally within software development or technology companies.

Required Skills:

  • Proven experience in owning and managing the full QA lifecycle for software products.
  • Proficiency in designing and executing comprehensive test plans (functional, regression, smoke, usability).
  • Experience with structured exploratory testing methodologies and generating actionable insights.
  • Ability to test and validate complex decision flows and logic in software applications.
  • Experience creating and maintaining synthetic test scripts for UI elements and APIs.
  • Familiarity with cross-platform testing (web, mobile) and integration testing.
  • Skill in developing and maintaining quality metrics dashboards and reporting on KPIs.
  • Experience in improving test reliability, debugging test failures, and optimizing execution efficiency.
  • Strong collaborative skills to work effectively with Product, Design, and Engineering teams.
  • Data analysis skills to monitor quality and inform AI guardrail improvements.

Preferred Skills:

  • Direct experience testing Generative AI (GenAI) or Large Language Model (LLM) driven products.
  • Understanding of common GenAI failure modes such as hallucinations, unsafe responses, bias, and brittle decision paths.
  • Exposure to performance and load testing tools and practices for web applications and APIs.
  • Familiarity with structured exploratory testing approaches, including test charters, especially for AI behavior and agent decision-making.
  • Prior experience in high-velocity environments, such as startups, where QA is positioned as an owner of quality.
  • A strong preference for automation over repetitive manual tasks, balanced with an understanding of the value of focused exploratory testing.
  • Familiarity with Agile development methodologies and CI/CD pipelines.

📝 Enhancement Note: The "What Great Looks Like" section provides quantifiable goals (90% coverage in 21 days, <10 min regression) that imply a need for strong automation skills and efficient process design, reinforcing the preference for automation. The emphasis on GenAI and LLM experience is a critical differentiator for this role.

📊 Process & Systems Portfolio Requirements

Portfolio Essentials:

  • Showcase examples of test plans developed for complex features or products, demonstrating a strategic approach to quality assurance.
  • Include case studies or examples of test automation frameworks and scripts built, highlighting efficiency gains and improved test reliability.
  • Provide evidence of experience in API testing, including examples of test cases and automation for API endpoints.
  • Demonstrate contributions to improving test execution speed and reducing flakiness, supported by quantifiable metrics.
  • Present examples of bug reports, emphasizing clarity, detail, and actionable information for engineering teams.

Process Documentation:

  • Documented test strategies for new product features or AI models, outlining scope, objectives, and testing approaches.
  • Examples of structured exploratory testing sessions, including test charters, findings, and recommendations.
  • Records of test coverage analysis and reporting, illustrating how quality KPIs were tracked and communicated to stakeholders.
  • Records of process improvements implemented within a QA workflow, such as optimizing CI/CD integration or enhancing defect triage.

📝 Enhancement Note: The asynchronous task in the hiring process ("Build and document a small automated test flow for a sample application") directly necessitates a portfolio that can demonstrate practical automation and documentation skills. This implies a need for candidates to have demonstrable projects that can be presented or discussed.

💵 Compensation & Benefits

Salary Range: $1,500 - $2,000 USD per month.

Research Methodology: This range is taken directly from the posting's stated monthly compensation. For a mid-level QA role with automation and AI specialization in a remote-first international setting, it is competitive relative to the cost of living in regions outside major tech hubs. The USD denomination suggests the company is US-based or applies a US-centric compensation model to its global talent.

Benefits:

  • Competitive salary
  • Performance-based bonuses tied to release quality, directly incentivizing high-quality output and operational excellence.
  • Software for Upskilling & Productivity, supporting continuous learning and efficiency.
  • Remote-first culture with the flexibility to "Work from anywhere."
  • Paid Time Off, ensuring work-life balance.
  • High autonomy and low bureaucracy, fostering an agile and efficient work environment.
  • Fast-track to leadership for high performers, offering clear career progression.
  • US HQ Opportunities, providing potential for international mobility or exposure.
  • Direct access to the founding team, allowing for significant influence and learning from leadership.
  • High visibility, autonomy, and ownership, empowering individuals to drive impact.
  • Optional in-person hack-weeks in Hong Kong, India, or London, fostering team cohesion and innovation.
  • A clear growth path into Head of QA as the team scales, indicating a structured career development plan.
  • Access to best-in-class tooling to support efficient and effective work.

Working Hours: Approximately 40 hours per week, consistent with a full-time role, but with the flexibility inherent in a remote-first, high-autonomy environment.

📝 Enhancement Note: The provided salary is a monthly figure in USD. For context, this translates to an annual range of $18,000 - $24,000 USD. This is a significant factor for candidates, especially when considering international locations. The benefits package is comprehensive and heavily emphasizes growth, autonomy, and direct impact, aligning with startup culture and the operations focus on efficiency and results.

🎯 Team & Company Context

🏢 Company Culture

Industry: Technology, specifically Artificial Intelligence (AI) and software development, with a subsidiary (M32 AI) dedicated to building agentic AI for traditional service businesses.

Company Size: Part of Wing Assistant, a larger entity, but M32 AI operates as a startup within a corporate structure: agile, fast-moving, and with minimal bureaucracy. This implies a dynamic and evolving organizational landscape.

Founded: Wing Assistant's founding date is not explicitly stated, but its backing by top-tier Silicon Valley VCs suggests a relatively recent establishment or a significant growth phase focused on innovation.

Team Structure:

  • The QA team is positioned for significant growth, with a clear path for the successful candidate to advance into a "Head of QA" role as the team scales.
  • This implies a current lean team structure where individuals are expected to be versatile and take on significant ownership.
  • Collaboration is expected to be high, with close interaction with the founding team (CPO and CTO) and Engineering/Product/Design departments.

Methodology:

  • Emphasis on a "fast-moving and agile" environment with "zero bureaucracy," suggesting lean operational processes and rapid iteration.
  • A strong focus on "output instead of pedigree," indicating a results-driven culture where demonstrated skills and impact are paramount.
  • The core mission revolves around shipping "delightful, bug-free experiences every week," underscoring a commitment to continuous delivery and high-quality standards.
  • Data-driven decision-making is implied through "monitoring pre- and post-release quality" and using "data to enhance AI evaluation and guardrails."

Company Website: wingassistant.com

📝 Enhancement Note: The description of M32 AI as a "startup within a corporate" with "zero bureaucracy" is a key cultural indicator. For operations professionals, this means a high degree of autonomy, direct impact, and the opportunity to shape processes from the ground up, but also potentially a need for adaptability and self-direction.

📈 Career & Growth Analysis

Operations Career Level: This role is positioned as a mid-level to senior individual contributor with significant potential for leadership. The clear path to "Head of QA" indicates an opportunity to build and manage a team, define QA strategy, and significantly influence the company's quality operations. The role requires owning the full QA lifecycle, which is a characteristic of senior or lead QA roles.

Reporting Structure: The candidate will have "direct access to founding team" members, including the CPO and CTO, for the final interview stage. This suggests a flat hierarchy where input from this role will be highly valued and directly considered by senior leadership. The successful candidate is expected to grow into a leadership role, likely reporting to product or engineering leadership.

Operations Impact: The role directly impacts the operational integrity and customer satisfaction of GenAI products. By ensuring "delightful, bug-free experiences every week," this position contributes to user adoption, trust in AI, and the overall success of the service businesses that M32 AI supports. Performance bonuses tied to "release quality" further highlight the direct link between the QA function and business outcomes.

Growth Opportunities:

  • Leadership Track: A clear, defined path to becoming the Head of QA, including team building and strategic responsibility.
  • Technical Skill Expansion: Opportunity to deepen expertise in GenAI/LLM testing, test automation, performance testing, and potentially gain exposure to broader product development operations.
  • Autonomy & Ownership: High degree of autonomy to define and improve QA processes, tools, and strategies, fostering significant professional development.
  • Startup Environment Acumen: Gaining experience in a fast-paced, agile, and high-growth startup environment, which is valuable for future career opportunities.
  • Cross-Functional Exposure: Direct collaboration with C-suite (CPO, CTO) and other key departments, enhancing communication and strategic thinking skills.

📝 Enhancement Note: The explicit mention of a "fast-track to leadership" and a "clear growth path into Head of QA" makes this role highly attractive for operations professionals looking to move into management or lead a specialized function within a growing company. The emphasis on "high performers" suggests that exceeding expectations in this role will be recognized and rewarded.

🌐 Work Environment

Office Type: The role is described as "remote-first" with the ability to "work from anywhere," indicating a distributed team structure with no central physical office requirement for day-to-day operations.

Office Location(s): While the job posting specifies Istanbul, Turkey, the "work from anywhere" benefit suggests flexibility beyond this primary location, though potential tax and legal implications would need clarification. Optional in-person hack-weeks are planned in locations such as Hong Kong, India, or London.

Workspace Context:

  • The work environment is characterized by "high autonomy" and "low bureaucracy," which is ideal for self-motivated individuals who thrive in environments where they can define their own workflows and make decisions quickly.
  • Access to "best-in-class tooling" is promised, suggesting that the company invests in its employees' productivity and efficiency through modern technology and software.
  • The "remote-first culture" implies a reliance on digital collaboration tools and practices, requiring strong communication skills and proactive engagement.

Work Schedule: While stated as approximately 40 hours per week, the remote-first and high-autonomy nature suggests flexibility in how and when those hours are worked, as long as deliverables and feedback cycles are met. This is beneficial for operations roles that often require deep focus and can be managed around personal productivity peaks.

📝 Enhancement Note: The "Work from anywhere" benefit is a significant draw for operations professionals who value flexibility. However, candidates should be aware of any potential geographic restrictions for tax or legal compliance, even with this broad statement. The mention of hack-weeks indicates opportunities for in-person collaboration and networking, balancing the remote nature of the role.

📄 Application & Portfolio Review Process

Interview Process:

  1. Introductory Call (20 min): A brief discussion to assess cultural fit, understand expectations, and explore working styles within the company's agile, remote-first context.
  2. Asynchronous Task: A practical, hands-on exercise involving building and documenting a small automated test flow using either a testing framework or a no-code automation tool. This assesses practical skills in automation, documentation, and problem-solving relevant to AI product testing.
  3. Final Interview (45 min): A live session with the Chief Product Officer (CPO) and Chief Technology Officer (CTO). This is a high-stakes interview focused on deeper technical expertise, strategic thinking, and alignment with the company's vision and operational goals.

Portfolio Review Tips:

  • Focus on Automation: For the asynchronous task and potential portfolio discussion, prioritize demonstrating a working automated test flow. Clearly document the steps taken, the tools used, and the rationale behind your choices.
  • Showcase AI/LLM Relevance: If you have prior experience testing AI or LLM-driven products, be prepared to discuss specific challenges encountered and how you addressed them. Highlight any experience with testing for hallucinations, bias, or logical consistency.
  • Process Documentation: Ensure your documentation for the asynchronous task is clear, concise, and easy to follow. This reflects your ability to communicate technical processes effectively, a crucial skill for operations roles.
  • Quantify Impact: Where possible, use metrics to demonstrate the value of your work. For example, state the time saved by automation, the reduction in bugs found post-release, or the improvement in test coverage.
  • Problem-Solving Approach: Be ready to articulate your thought process for tackling the asynchronous task and any hypothetical scenarios related to testing AI behavior.

Challenge Preparation:

  • Master Automation Tools: Familiarize yourself with common testing frameworks (e.g., Selenium, Cypress, Playwright) and potentially no-code automation tools. The prompt allows for either, so be versatile.
  • Understand AI Testing Nuances: Research common failure modes and testing strategies for GenAI/LLMs. Think about how you would test for "hallucinations," bias, and logical consistency in AI-generated responses.
  • Structure Your Approach: For the asynchronous task, break down the problem into manageable steps: understanding the requirements, choosing tools, implementing the automation, and documenting the process.
  • Prepare for C-Suite Interview: Be ready to discuss your strategic thinking, your understanding of product quality in an AI context, and how you align with Wing Assistant's "output over pedigree" culture. Prepare thoughtful questions for the CPO and CTO.

📝 Enhancement Note: The asynchronous task is a critical gate and a direct assessment of the candidate's ability to deliver on core responsibilities. Candidates should treat this as a mini-portfolio piece and dedicate significant effort to its quality and documentation. The interview structure is lean and fast, emphasizing efficiency, which aligns with the company's operational ethos.

🛠 Tools & Technology Stack

Primary Tools:

  • Test Automation Frameworks: Candidates are expected to be proficient with either established testing frameworks (e.g., Selenium, Cypress, Playwright) or no-code automation tools, as indicated by the asynchronous task requirement.
  • API Testing Tools: Familiarity with tools for testing APIs (e.g., Postman, Insomnia) would be beneficial, given the responsibility to "Build synthetic test scripts for APIs."
  • Web & Mobile Testing: Experience with tools and methodologies for testing web applications and mobile platforms is essential.
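To make the "synthetic test scripts for APIs" responsibility concrete, here is a minimal, self-contained smoke-test sketch using only the Python standard library. The `/health` endpoint, payload shape, and stub server are all invented for illustration; a real check would target the product's actual API with a framework of the team's choosing.

```python
# Illustrative API smoke test: start a local stub HTTP server, then
# verify status code and response shape the way a synthetic check would.
# The /health endpoint and its payload are invented for this example.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok", "version": "1.0"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def run_smoke_test(base_url):
    """Return (passed, details) for a basic health-endpoint check."""
    with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
        if resp.status != 200:
            return False, f"unexpected status {resp.status}"
        payload = json.loads(resp.read())
    missing = [k for k in ("status", "version") if k not in payload]
    if missing:
        return False, f"missing keys: {missing}"
    return payload["status"] == "ok", payload

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
passed, details = run_smoke_test(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
```

The same assertion pattern (status, schema, then semantics) scales from a single health check to full end-to-end journeys.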

Analytics & Reporting:

  • Dashboarding Tools: Experience with tools to create and maintain quality dashboards (e.g., Grafana, Tableau, Power BI, or even advanced spreadsheet tools) for tracking test coverage, failures, and quality KPIs.
  • Data Analysis Tools: The ability to use data to enhance AI evaluation and guardrails implies comfort with analyzing logs, test results, and performance metrics.
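The KPI math behind such a dashboard is simple to sketch. The run-history format below is an assumption made for illustration; the flakiness signal (a test that both passed and failed across recent runs) matches the posting's emphasis on reducing flaky tests.

```python
# Sketch of dashboard KPIs: overall pass rate plus a simple flakiness
# signal. The run-history format is invented for this example.
from collections import defaultdict

def quality_kpis(runs):
    """runs: list of {test_name: "pass" | "fail"} dicts, newest last."""
    outcomes = defaultdict(set)
    total = passed = 0
    for run in runs:
        for name, result in run.items():
            outcomes[name].add(result)
            total += 1
            passed += result == "pass"
    # A test that has both passed and failed recently is likely flaky.
    flaky = sorted(n for n, seen in outcomes.items() if seen == {"pass", "fail"})
    return {
        "pass_rate": passed / total if total else 0.0,
        "flaky_tests": flaky,
    }
```

Feeding these numbers into Grafana, Tableau, or even a spreadsheet is then a presentation problem rather than an analysis one.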

CRM & Automation:

  • While not explicitly mentioned for this QA role, a general understanding of how CRM systems and automation platforms integrate with product functionality might be beneficial for end-to-end testing scenarios.
  • CI/CD Tools: Familiarity with Continuous Integration/Continuous Deployment pipelines (e.g., Jenkins, GitLab CI, GitHub Actions) is highly relevant for integrating automated tests into the development workflow.
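As a hedged illustration of wiring quality metrics into a CI/CD pipeline, the sketch below implements a go/no-go release gate. The 90% coverage and 10-minute regression thresholds echo the goals this posting cites; the report format and field names are assumptions, not the company's actual pipeline.

```python
# Illustrative CI release gate: block the release when coverage, suite
# duration, or open P0 bugs miss target. The report dict's fields are
# invented; thresholds echo the "What Great Looks Like" goals.

def release_gate(report, min_coverage=0.90, max_regression_minutes=10):
    """Return a list of reasons the release should be blocked."""
    blockers = []
    coverage = report.get("coverage", 0.0)
    if coverage < min_coverage:
        blockers.append(f"coverage {coverage:.0%} below {min_coverage:.0%}")
    if report.get("regression_minutes", float("inf")) > max_regression_minutes:
        blockers.append("regression suite exceeds time budget")
    if report.get("open_p0_bugs", 0) > 0:
        blockers.append(f"{report['open_p0_bugs']} open P0 bug(s)")
    return blockers

blockers = release_gate({"coverage": 0.93, "regression_minutes": 8,
                         "open_p0_bugs": 0})
# An empty list means the release can proceed; a CI job would exit
# non-zero (and fail the pipeline) whenever blockers is non-empty.
```

The design choice worth noting: encoding go/no-go criteria as data-driven checks makes the "clear, measurable release criteria" responsibility auditable rather than a judgment call.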

📝 Enhancement Note: The asynchronous task explicitly mentions "a testing framework or a no-code automation tool," indicating flexibility in the specific technologies used. However, a strong foundation in at least one major test automation framework is likely a prerequisite for success in building and maintaining automated test suites for AI products.

👥 Team Culture & Values

Operations Values:

  • Output Over Pedigree: A core value emphasizing results and demonstrated capability above formal qualifications or background. This means your ability to deliver high-quality, bug-free software is paramount.
  • Agility & Speed: Operating as a "startup within a corporate" means a fast-moving, iterative approach to development and quality assurance, with a focus on shipping weekly.
  • Autonomy & Ownership: Employees are expected to take initiative, own their responsibilities fully, and operate with minimal oversight. This fosters a culture of accountability and empowerment.
  • Data-Driven Improvement: Using data to "enhance AI evaluation and guardrails" highlights a commitment to empirical evidence and continuous improvement based on performance metrics.
  • Collaboration & Direct Access: Despite the autonomy, close collaboration with Product, Design, and Engineering is expected, with direct access to the founding team, encouraging open communication and shared goals.

Collaboration Style:

  • Cross-Functional Integration: Close partnership with Product, Design, and Engineering is essential for defining requirements, setting criteria, and ensuring product quality.
  • Lean & Efficient Communication: Given the "low bureaucracy" and "fast-moving" nature, communication is likely direct, concise, and focused on actionable outcomes.
  • Feedback Loops: The emphasis on "fast feedback" for engineers and monitoring "pre- and post-release quality" indicates a culture that values continuous feedback for iteration and improvement.
  • Shared Quality Responsibility: While this role owns the QA lifecycle, the overall culture likely promotes a shared sense of responsibility for product quality across teams.

📝 Enhancement Note: The "output over pedigree" value is a strong signal for experienced professionals who may not have traditional academic backgrounds but possess demonstrable skills and a track record of success. This aligns with the operations focus on tangible results and efficiency.

⚡ Challenges & Growth Opportunities

Challenges:

  • Testing Novel AI Behavior: The primary challenge lies in effectively testing and validating the complex, often unpredictable, behavior of Generative AI and LLM models, including issues like hallucinations, bias, and logical inconsistencies.
  • Maintaining Speed and Quality: Balancing the need for rapid, weekly releases with the rigorous demands of comprehensive AI testing and automation presents a significant operational challenge.
  • Building from the Ground Up: As part of a growing subsidiary, there's an opportunity to heavily influence and shape the QA processes and infrastructure, which can be both rewarding and challenging due to the lack of established systems.
  • Remote Collaboration: Effectively collaborating and maintaining team cohesion in a fully remote, "work from anywhere" environment requires proactive communication and engagement strategies.

Learning & Development Opportunities:

  • Deep AI/LLM Testing Expertise: Become a specialist in the unique challenges and methodologies for testing advanced AI systems.
  • Automation & Tooling Mastery: Gain hands-on experience with various automation tools and frameworks, potentially contributing to the selection and implementation of new technologies.
  • Leadership Development: The clear growth path to Head of QA provides an excellent opportunity to develop management, strategic planning, and team-building skills.
  • Startup Operations Experience: Learn to operate and excel in a fast-paced, high-growth startup environment, developing adaptability and problem-solving skills.
  • Cross-Functional Impact: Work directly with senior leadership and across multiple departments, broadening your understanding of business operations and product strategy.

📝 Enhancement Note: The challenges presented are inherent to working with cutting-edge AI technology and in a startup environment. These challenges also represent significant growth opportunities for ambitious operations professionals looking to expand their skill sets and career horizons.

💡 Interview Preparation

Strategy Questions:

  • AI Testing Strategy: Be prepared to discuss how you would approach testing an AI model for specific failure modes like hallucinations, bias, or inconsistent decision-making. Focus on structured methodologies and data-driven validation.
  • Automation Prioritization: How do you decide when to automate a test versus when to use exploratory testing? Discuss your approach to balancing efficiency with thoroughness, especially in an AI context.
  • Process Improvement: Describe a time you significantly improved a QA process or test suite. What was the challenge, what steps did you take, and what was the measurable impact? Highlight your ability to drive operational efficiency.

Company & Culture Questions:

  • "Output Over Pedigree": How do you demonstrate your value and capabilities in a results-driven environment? Be ready to share specific achievements and metrics from your past roles.
  • Remote Work Philosophy: How do you ensure productivity, collaboration, and strong communication in a fully remote setting?
  • Agile/Startup Environment: What are your experiences working in fast-paced, agile environments? How do you adapt to change and prioritize effectively?

Portfolio Presentation Strategy:

  • Asynchronous Task Walkthrough: Be prepared to clearly articulate your process, tool choices, and the logic behind your automated test flow. Explain any challenges you encountered and how you overcame them.
  • Demonstrate Impact: If you have other portfolio examples, focus on showcasing the impact of your work. Use metrics to highlight efficiency gains, bug reductions, or coverage improvements.
  • Conciseness and Clarity: Given the fast interview cycle, present your information clearly and concisely. Avoid jargon where simpler terms suffice, but be precise when discussing technical details.
  • Address AI Specifics: Frame your portfolio examples in a way that shows your understanding of product quality and testing, even if it wasn't directly AI-related. Highlight transferable skills like logic validation, complex scenario testing, and automation.

📝 Enhancement Note: The asynchronous task serves as a practical portfolio piece. Candidates should ensure it is well-documented and they can confidently explain their choices and results during the interview. The direct interview with CPO/CTO means questions will likely be strategic and focused on impact and problem-solving.

📌 Application Steps

To apply for this Product & UX Tester - Gen AI position:

  • Submit your application through the provided Lever.co link.
  • Tailor Your Resume: Highlight experience with test automation, AI/LLM testing (if applicable), and your ability to operate autonomously in fast-paced environments. Use keywords from the job description like "GenAI," "LLM," "test automation," "exploratory testing," and "quality KPIs."
  • Prepare Your Portfolio: Have examples of your work ready, especially related to test automation. Since an asynchronous task is part of the hiring process, ensure you have a strong understanding of how to build and document an automated test flow.
  • Research Wing Assistant & M32 AI: Understand their mission, their focus on agentic AI for traditional service businesses, and their "startup within a corporate" model. Familiarize yourself with their values, particularly "output over pedigree."
  • Practice for the Asynchronous Task: Before applying or immediately after, practice building a simple automated workflow. Consider common web application interactions or API calls. Focus on clear documentation of your process.
  • Prepare for the C-Suite Interview: Think about how your skills align with the company's strategic goals and your vision for quality operations, especially in the context of AI development. Prepare insightful questions for the CPO and CTO.

⚠️ Important Notice: This enhanced job description includes AI-generated insights and operations industry-standard assumptions. All details should be verified directly with the hiring organization before making application decisions.

Application Requirements

Experience testing GenAI or LLM-driven products is preferred, along with familiarity with performance testing tools. Candidates should have a background in high-velocity environments and a preference for automation over repetitive tasks.