Running generative AI models on AI-enabled devices offers several advantages over running them in the cloud. Local processing reduces latency, ensures real-time responsiveness, and enhances privacy by keeping sensitive data on-device, mitigating concerns associated with transmitting potentially confidential information to external servers. In this workshop, you will learn how to optimize and run an LLM on a Ryzen AI-enabled processor without cloud connectivity.
If you'd like to attend, you can indicate your interest upon registration. All workshop attendees must be registered for the event.
Sponsor(s):
AMD