Open source LLMs can help you create intelligent apps without having to pay for each API call.

We just published a course on the freeCodeCamp.org YouTube channel that will teach you how to use the open source models from Mistral AI to create intelligent apps.

This course, a collaboration between Mistral AI and Scrimba, will guide you through creating intelligent applications, from simple chat completions to advanced use cases like Retrieval-Augmented Generation (RAG) and function calling. Per Borgen developed this course.

Course Overview

The course begins with an introduction to Mistral's open source models, including Mistral 7B and Mixtral 8x7B, before progressing to their commercial models. You'll gain hands-on experience with Mistral's La Plateforme, learning to leverage its full suite of tools.

Across 25 interactive lessons, you'll build applications ranging from straightforward chat completions to complex functionalities like RAG and function calling. The course code is available on the Scrimba course page for each lesson, ensuring you can follow along and practice.
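As a taste of the kind of chat completion call the course starts with, here is a minimal sketch against Mistral's REST API. The endpoint, request shape, and response shape follow Mistral's public chat completions API; the model name `mistral-small-latest` and the `MISTRAL_API_KEY` environment variable are assumptions you would adapt to your own setup.

```python
# Minimal chat completion call against Mistral's API (sketch).
# Assumes a MISTRAL_API_KEY environment variable; model name is illustrative.
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_payload(user_message, model="mistral-small-latest", stream=False):
    """Assemble the JSON body for a chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,  # True switches the API to token-by-token streaming
    }

def chat(user_message):
    """Send one user message and return the model's reply text."""
    payload = build_chat_payload(user_message)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The course covers the same flow (plus streaming) using Mistral's client library rather than raw HTTP.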

In the knowledge retrieval segment, you'll learn to split text documents with LangChain, convert them into embeddings, store them in a vector database, and perform retrieval. This process is essential for building applications that can understand and retrieve information efficiently.
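To make the split-embed-store-retrieve pipeline concrete, here is a simplified, self-contained sketch. The course uses LangChain's splitters, Mistral's embeddings endpoint, and a hosted vector database; below, the splitter and the store are hand-rolled stand-ins that show the mechanics, and you would replace the embeddings with real ones from the API.

```python
# Hand-rolled sketch of a RAG retrieval pipeline: chunk text,
# embed each chunk, store vectors, retrieve by similarity.
# In the course, LangChain and a real vector database do this work.
import math

def split_text(text, chunk_size=200, overlap=40):
    """Split text into overlapping chunks, similar in spirit to
    LangChain's character-based splitters."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

class TinyVectorStore:
    """Minimal in-memory vector store: holds (embedding, chunk) pairs
    and returns the chunks most similar to a query embedding."""
    def __init__(self):
        self.rows = []

    def add(self, embedding, chunk):
        self.rows.append((embedding, chunk))

    def search(self, query_embedding, k=2):
        ranked = sorted(
            self.rows,
            key=lambda row: cosine_similarity(row[0], query_embedding),
            reverse=True,
        )
        return [chunk for _, chunk in ranked[:k]]
```

Retrieved chunks are then stuffed into the prompt as context, which is the "augmented generation" half of RAG.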

You'll also learn how to expose your app's functions to Mistral, letting the models decide when to call them. This transforms user interactions, allowing people to accomplish tasks conversationally rather than by clicking through an interface.
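The mechanics look roughly like this: you describe each function to the model as a tool schema, the model replies with the name and arguments of the tool it wants to run, and your code dispatches the call. The schema format below follows Mistral's (OpenAI-style) tool definitions; `get_order_status` is a made-up example function, not part of any real API.

```python
# Sketch of the function-calling loop. The model never runs code
# itself; it returns a tool name plus JSON arguments, and your app
# executes the matching function. get_order_status is hypothetical.
import json

def get_order_status(order_id):
    # Stand-in for a real lookup in your app's database.
    return {"order_id": order_id, "status": "shipped"}

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the shipping status of an order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

AVAILABLE_FUNCTIONS = {"get_order_status": get_order_status}

def dispatch_tool_call(tool_call):
    """Run the function the model selected, with the arguments it chose."""
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    return AVAILABLE_FUNCTIONS[name](**args)
```

You pass `TOOLS` along with the chat request, then feed the function's return value back to the model so it can phrase the answer for the user.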

Towards the end of the course, you'll explore using Ollama to run inference on your own computer, so it can serve as the backbone of AI apps you develop locally. This practical skill is useful for running AI applications on personal or small-scale setups without relying on a hosted API.
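A minimal sketch of talking to a locally running model: Ollama serves a REST API at `http://localhost:11434` by default, and the `/api/chat` endpoint accepts the same messages format as the cloud API. The model name `mistral` assumes you have already pulled it with `ollama pull mistral`.

```python
# Sketch of local inference via Ollama's REST API. Requires a
# running Ollama server and a pulled model; nothing here calls
# a paid cloud endpoint.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"

def build_ollama_request(user_message, model="mistral"):
    """Build a non-streaming chat request for a locally served model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # set True to receive the reply token by token
    }

def local_chat(user_message):
    """Send one message to the local Ollama server and return the reply."""
    payload = build_ollama_request(user_message)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

Because the request shape mirrors the hosted API, swapping an app between La Plateforme and a local model is mostly a matter of changing the URL and model name.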

Here is a list of the key topics covered in this course:

  • La Plateforme

  • Chat Completions API

  • Streaming

  • Mistral 7B and Mixtral 8x7B

  • Mistral's commercial models

  • Embeddings and vectors

  • Setting up a vector database

  • Semantic search

  • Chunking with LangChain

  • Retrieval-Augmented Generation (RAG)

  • AI agents and function calling

  • Using Ollama for local model running

Conclusion

This course is perfect for anyone seeking a quick yet comprehensive introduction to Mistral AI. With practical examples and hands-on lessons, you'll be well-equipped to build and deploy intelligent applications.

Watch the full course on the freeCodeCamp.org YouTube channel (1.5-hour watch).