Contact Us

Get in touch with the vLLM team for feedback, collaborations, and more

Now Open - Year-round Recruitment

vLLM Talent Pool — Hiring

As LLM adoption accelerates, vLLM has become the mainstream inference engine across major cloud providers and leading model labs. We're collecting resumes year-round and helping with referrals for internships & full-time positions.

US: SF Bay Area and more
China: Beijing, Shanghai, Shenzhen, Guangzhou, Chengdu, and more
💰 Compensation: Highly competitive, with no upper limit for exceptional inference engineers.

* Sending your resume means you agree to share it with partner companies.

🏢 For partner companies: Want to join our partner list and access resumes? See the Collaboration section below.

Collaboration

Interested in collaborating with the vLLM community? Whether it's accessing resumes from our talent pool, organizing meetups, or exploring technical partnerships, we'd love to hear from you.

Resume Access · Organize Meetups · Technical Partnerships

Social Media Promotions

Want to collaborate on social media promotions? We welcome partnerships that help spread the word about vLLM across platforms.

Website Feedback

Found a typo? Have suggestions for improving the website? We'd love to hear your feedback to make vLLM.ai better for everyone.

Typos & Errors · UI/UX Suggestions · Content Improvements

Other Ways to Connect

For technical discussions, feature requests, or community support, please use our community channels: