Local LLMs with llamafile

About this Course

In this 1-hour project-based course, you will learn to:

* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics such as GPU usage when running models
* Query llamafile APIs with Python to process generated text (see the sketch after this list)
* Experience real-time inference through hands-on examples
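
As a rough illustration of the "query llamafile APIs with Python" objective, the sketch below sends a chat request to a llamafile that is already running its built-in server. It assumes the server's default address (http://localhost:8080) and its OpenAI-compatible /v1/chat/completions endpoint; the placeholder model name and prompt are illustrative only, not part of the course materials.

```python
# Minimal sketch: query a locally running llamafile over its
# OpenAI-compatible HTTP API (default port 8080 is an assumption
# based on llamafile's built-in server defaults).
import json
import urllib.request

URL = "http://localhost:8080/v1/chat/completions"

payload = {
    # A local llamafile server generally ignores the model name,
    # so any placeholder string works here.
    "model": "local-llamafile",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a llamafile is in one sentence."},
    ],
    "temperature": 0.7,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    body = json.load(response)

# The response follows the OpenAI chat-completions shape:
# choices[0].message.content holds the generated text.
print(body["choices"][0]["message"]["content"])
```

Because the endpoint mirrors the OpenAI chat-completions format, the same request shape works whether the llamafile is run on Windows, macOS, or Linux, which is what makes the local deployment portable.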

Created by: Duke University


Related Online Courses

This is a self-paced lab that takes place in the Google Cloud console. In this lab you will learn how to use Table Explorer and Data Insights with Gemini in BigQuery. Created by: Google Cloud
Acquire the expertise needed to construct robust, scalable, and secure applications using .NET technology through this comprehensive specialization. It consists of three courses: C# for .NET...
Mathematical thinking is crucial in all areas of computer science: algorithms, bioinformatics, computer graphics, data science, machine learning, etc. In this course, we will learn the most...
The Cloud Migration Factory on AWS solution uses a serverless architecture to coordinate and automate your organization's medium-scale to large-scale migrations to the Amazon Web Services (AWS)...
This course is designed for aspiring IT professionals who are eager to excel in the dynamic field of technical support. No previous experience is necessary. This course is aimed at equipping...
