LLM Recipes is a collection of projects and tools for building decision agents with capabilities such as speech, vision, and text search. This repository contains successive versions of the project, each adding its own features and functionality.
To get started with LLM Recipes, clone the repository:

```shell
git clone https://github.com/your-repo/llm-recipes.git
cd llm-recipes
```
Detailed usage instructions for each project version can be found in the respective documentation links provided in the table below.
| Version | Concept | Status | Tech |
|---|---|---|---|
| v0.11 | Voice - Shopping Bot | In progress | Python |
| v0.10 | Multi-modal Agents | In progress | Python |
| v0.9 | NoteBook LLama | Complete | Python + TTS |
| v0.8 | Quantisation | Paused | llama.cpp |
| v0.7 | On-device Mobile | Paused | Android + TF Lite |
| v0.6 | UI | Complete | TypeScript - link |
| v0.5 | Indoor Maps + v0.4 | Paused | ROS2 |
| v0.4 | Image/Scene Recognition + v0.3 | Complete | llava/moondream |
| v0.3 | Speech Output + v0.2 | Complete | coqui tts + v1 |
| v0.2 | Speech Input + v0.1 | Complete | whisper + ffmpeg + v0 |
| v0.1 | Text Query + API Calls | Complete | mistral7B-v0.3 + ollama + REST API |
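To illustrate the v0.1 concept (text query via ollama and a REST API), here is a minimal Python sketch. It assumes a local Ollama server on its default endpoint (`http://localhost:11434/api/generate`) with a Mistral model pulled; the helper names (`build_generate_payload`, `ask`) are illustrative, not part of this repository.

```python
import json
import urllib.request

# Ollama's default local REST endpoint (assumes `ollama serve` is running).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_generate_payload(prompt: str, model: str = "mistral") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False requests a single JSON response instead of streamed chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def ask(prompt: str, model: str = "mistral") -> str:
    """Send one text query to a locally running Ollama server and return the reply."""
    body = json.dumps(build_generate_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server with the model pulled):
# answer = ask("What is the capital of France?")
```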
We use SemVer for versioning. For the available versions, see the tags on this repository.
For any questions or support, please contact your-email@example.com or join our Discord Server.
This project is licensed under the MIT License - see the LICENSE file for details.