Intern - AI - LLMs and Safety
Position Overview: We are seeking an enthusiastic and motivated AI Intern to join our innovative team. In this role, you will be at the forefront of enhancing the reliability, safety, and performance of AI models and systems. You will collaborate closely with developers, machine learning engineers, and product teams, contributing to cutting-edge advancements in AI safety and the development of responsible AI solutions.
Responsibilities:
Assist in designing and implementing safety-focused evaluation frameworks for LLMs.
Identify vulnerabilities in AI systems and contribute to strategies for mitigation, including adversarial testing and bias detection.
Collaborate with cross-functional teams to integrate safety mechanisms into AI workflows and pipelines.
Fine-tune pre-trained LLMs on domain-specific datasets to improve task performance.
Stay up to date with the latest research papers, techniques, and advancements in deep learning and related fields.
Apply strong software engineering and programming skills to quickly turn research ideas into working prototypes.
Requirements:
Currently enrolled full-time and pursuing a Bachelor's, Master's, or Ph.D. degree in Computer Science, Electrical Engineering, or a related field.
Must be graduating between December 2025 and June 2026.
Available to work during Summer 2025, either May 27 - August 16 or June 17 - September 6.
Nice to Haves:
Good understanding of natural language processing (NLP) fundamentals and techniques, with a focus on architectures such as GPT, BERT, and their variants.
Experience with fine-tuning LLMs on large-scale datasets.
Familiarity with testing AI pipelines, data preprocessing, and model evaluation.