We use cookies. Find out more about it here. By continuing to browse this site you are agreeing to our use of cookies.
#alert
Back to search results
New

LLM Penetration Tester

Metronome, LLC
401(k)
United States, Virginia, Fairfax
11350 Random Hills Road (Show on map)
Jun 27, 2025

Job Title: LLM Penetration Tester
Location: DMV Area
Clearance: TS/SCI w Full Scope Poly
Employment Type: Full Time

Education: Bachelor's in Cybersecurity, Information Security, or a related field
Work Status: DMV Area
Salary: $140,000-$180,000

Benefits: Competitive salary and bonus structure, comprehensive health insurance, 401(k) with company match, generous PTO and flexible work options.

Application: Apply here or on our Careers Page @ Careers - Metronome, or email your resume to Careers@wearemetronome.com

Overview:
Metronome is seeking a highly skilled and creative LLM Penetration Tester to join our AI Red Team. This role focuses on evaluating the security posture and behavioral integrity of large language models (LLMs) through the design and execution of advanced linguistic adversarial attacks. You'll be responsible for developing complex prompt-based attacks that test the limits of LLM guardrails and detect vulnerabilities such as prompt injection, jailbreaking, and data leakage.

This is an ideal position for individuals with a red team mindset, deep linguistic awareness, and hands-on experience with LLM platforms. You will work at the intersection of cybersecurity and AI safety to uncover critical insights into how modern language models behave under adversarial pressure.

Key Responsibilities:

  • Design and Execute Adversarial Attacks: Create and implement advanced linguistic adversarial attacks, including context-shifting prompts, semantic alterations, and nuanced language manipulations, specifically designed to bypass LLM security filters and elicit unintended or malicious responses.

  • Question-Answer Pair Generation: Develop comprehensive sets of question-answer pairs that effectively demonstrate an LLM's vulnerability or resilience to various attack vectors.

  • Red-Teaming LLMs: Actively engage in red-teaming exercises, attempting to "trick" LLMs into following instructions or commands that they are designed to resist, thereby testing their guardrails and security boundaries.

  • Behavioral Analysis: Observe and analyze LLM behavior under attack conditions, understanding how different linguistic manipulations impact their responses and security posture.

Required Skills:

  • 5+ years of experience in threat intelligence, penetration testing, or incident response

  • LLM Red-Teaming Expertise: Demonstrated professional, project, or personal experience in red-teaming large language models, including a strong understanding of common LLM vulnerabilities (e.g., prompt injection, jailbreaking, data leakage, hallucination, bias).

  • Advanced Linguistic Proficiency: Exceptional understanding of language nuances, semantics, syntax, context, and pragmatics, with the ability to creatively manipulate these elements to craft sophisticated prompts.

    • Adversarial Mindset: A strong "attacker" mentality, capable of thinking creatively to identify and exploit potential weaknesses in AI systems.

    • Familiarity with LLM Platforms: Hands-on experience working with and understanding the APIs and behaviors of major LLM providers and models.

Certifications:

  • CISSP - Certified Information Systems Security Professional

  • GCIH - GIAC Certified Incident Handler

Preferred Experience

Specific Applications, Technologies, and Services:

* LLM Platforms: Direct experience with models from OpenAI (e.g., GPT-3.5, GPT-4), Anthropic (e.g., Claude), Google (e.g., PaLM, Gemini), and open-source models.

* Security Concepts: Understanding of adversarial machine learning (Adversarial ML), prompt injection techniques, jailbreaking methodologies, and general cybersecurity principles as they apply to AI.

* Experience in cybersecurity, penetration testing, or ethical hacking, particularly with a focus on AI/ML systems.

* Research experience in adversarial AI, LLM security, or AI safety.

* Contributions to open-source projects related to LLM security or red-teaming.

Applied = 0

(web-8588dfb-vpc2p)