Local LLMs with llamafile
About this Course
In this 1-hour project-based course, you will learn to:
* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics such as GPU usage while running models
* Query llamafile APIs with Python to process generated text (see the sketch below)
* Experience real-time inference through hands-on examples

Created by: Duke University
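
The "Query llamafile APIs with Python" objective refers to the local HTTP server a llamafile starts when run in server mode, which exposes an OpenAI-compatible chat completions endpoint. The snippet below is a minimal sketch of that workflow, assuming the llamafile is already running on its default address http://localhost:8080; the prompt and temperature are placeholder values, not course material.

```python
# Minimal sketch: query a llamafile running locally in server mode.
# Assumes the server is reachable at http://localhost:8080 (llamafile's
# default) and serves the OpenAI-compatible /v1/chat/completions endpoint.
import json
import urllib.request

URL = "http://localhost:8080/v1/chat/completions"

payload = {
    # Placeholder model name; the llamafile serves whichever model it bundles.
    "model": "local-model",
    "messages": [
        {"role": "user", "content": "Summarize what a llamafile is in one sentence."}
    ],
    "temperature": 0.7,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    result = json.loads(response.read().decode("utf-8"))

# The generated text lives in the first choice's message content.
print(result["choices"][0]["message"]["content"])
```

Only the Python standard library is used here, so the sketch runs on any machine with Python 3, regardless of which operating system the llamafile itself is deployed on.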

Related Online Courses
This is a self-paced lab that takes place in the Google Cloud console. Dev Ops best practices make use of multiple deployments to manage application deployment scenarios. This lab provides practice...
Learners can expect to acquire advanced skills in leveraging data for financial decision-making, with outcomes including proficiency in risk assessment, investment valuation, and strategic...
Embark on a transformative learning experience with our PyTorch Ultimate 2024 course. Begin with a solid foundation, understanding the key topics and objectives, and seamlessly transition through...
This course will cover the basic elements of designing and evaluating questionnaires. We will review the process of responding to questions, challenges and options for asking questions about...
This course will highlight the potential of quantitative marketing research for assessing new product opportunities. In addition to focusing on the skills and practices for a successful New Product...