World's Shortest Hackathon

NVIDIA and Vercel are teaming up and taking the World’s Shortest Hackathon to NYC’s Techweek!
Everyone’s wondering: “Is AI gonna take our jobs?” The answer is no! But it can make everyone a 100X engineer.

So, how does it work? We’re throwing a full hackathon in just two hours, limited to 50 teams of up to two people. Leave the sleeping bag at home and come equipped with your best prompts and most powerful AI—and let it rip!

For prizes, we’ll have free credits from the best developer tools, and each person from the winning team will go home with an NVIDIA GPU signed by the man upstairs (Jensen Huang).

Date: Tuesday, June 3, 2025 @ 5:00 PM ET
Location: New York City (exact location shared with confirmed attendees)
Prize: Winning team (of 1 or 2) will receive a GeForce RTX 5080 signed by Jensen Huang, Vercel credits + more prizes to come

FAQ

  • When will I be notified?
By May 27, 2025. Confirmed attendees will be emailed instructions and the event location.

  • My attendance was not confirmed. Can I still go?
Due to limited space, this event is invite-only.

  • What technologies am I allowed to use?
    Anything that helps you code faster.

  • I’m not from the NYC area. Can I participate?
    Yes, as long as you’re able to attend in person.

  • How will submissions be judged?
Judges will review submissions for novel, real-world application and for how effectively generative AI tooling was leveraged. Vibes will also matter.

  • Will food and drink be available at the event?
    Yes.

  • Can you ship me my prize?
    No.

Terms and Conditions

  • You must be a confirmed attendee of the event.

  • Only individuals or teams of two can compete in the event.

  • You must complete the registration form to be entered into the sweepstakes.

  • You must be 18 years or older to participate.

For access to the full NVIDIA x Vercel Hackathon terms and conditions, click here.

Explore the latest community-built AI models with an API optimized and accelerated by NVIDIA, then deploy anywhere with NVIDIA NIM™ inference microservices.

Learn More