AMD publishes step-by-step guides for setting up a locally-powered chatbot featuring Ryzen AI and Meta Llama 3

For those interested in setting up their own Gen-AI tools to run locally, AMD has released a pair of useful guides for building a chatbot.

In this community-focused effort, AMD takes two approaches: one for developers and another for beginners. For developers, there's an introductory explanation of the Ryzen AI SDK, which bundles the tools and runtime libraries needed to set up a machine-learning environment and run inference on the NPU.
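The SDK's inference path runs through ONNX Runtime, so the developer workflow boils down to loading a quantized model with the right execution provider. Here is a minimal sketch of that pattern; the model file, input shape, and the vaip_config.json path are placeholder assumptions, and the provider name follows the ONNX Runtime Vitis AI execution provider that Ryzen AI builds on:

```python
# Minimal sketch: run a quantized ONNX model on the Ryzen AI NPU.
# Assumes the Ryzen AI SDK is installed; "model.onnx" and the config
# file path are placeholders for your own files.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": "vaip_config.json"}],
)

# Build a dummy input matching the model's expected shape
# (1x3x224x224 is just an example; adjust to your model).
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)

outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```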

The guide also walks through the installation process with images and points to pre-quantized, ready-to-deploy models on AMD's Hugging Face repository, letting developers start building Gen-AI applications in minutes.
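Grabbing one of those pre-quantized models is a one-liner with the huggingface_hub library; the repo id below is a made-up placeholder, so substitute a real model from AMD's Hugging Face page:

```python
# Sketch: download a pre-quantized model from AMD's Hugging Face page.
from huggingface_hub import snapshot_download

# Placeholder repo id; browse https://huggingface.co/amd for real ones.
local_dir = snapshot_download(repo_id="amd/example-pre-quantized-model")
print("Model files saved to:", local_dir)
```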

For those who prefer a GUI-based approach and are familiar with platforms like ChatGPT and Gemini, AMD recommends setting up a Meta Llama 3-based chatbot through LM Studio.

This pre-trained, open-source model is available in 8B and 70B parameter variants, depending on your hardware capabilities (running the 70B model requires approximately 300GB of RAM and 800GB of GPU VRAM, so I recommend just sticking with 8B). Once installed, you can start using your own LLM for various tasks.
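That doesn't have to mean staying inside LM Studio's chat window: the app can also expose the loaded model through an OpenAI-compatible local server, which you can call from your own scripts. A small sketch, assuming the server is running on LM Studio's default port 1234 and that the model identifier matches what the app lists for Llama 3 8B:

```python
# Sketch: chat with a local Llama 3 8B model served by LM Studio
# (start the local server in the app first; port 1234 is the default).
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        # Model name as listed in LM Studio; yours may differ.
        "model": "meta-llama-3-8b-instruct",
        "messages": [
            {"role": "user",
             "content": "Explain in one sentence what an NPU does."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```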

While setting up your own chatbot can be a valuable learning experience, the advanced multimodal support offered by tools like OpenAI's GPT-4 and Google's Gemini makes it genuinely hard to convince people to move away from the duo.

But if you're dealing with sensitive information, it is always safer to run Gen-AI services within an isolated environment where corporations have no access to your data: in this case, a local machine.