LLM Recipes

Introduction

LLM Recipes is a collection of projects and tools for building decision agents with capabilities such as speech, vision, and text search. This repository tracks successive versions of the project, each adding new features and functionality.

Installation

To get started with LLM Recipes, follow these steps:

  1. Clone the Repository:
    git clone https://github.com/your-repo/llm-recipes.git
    cd llm-recipes
    
  2. Set Up Environment: install the project's dependencies (see the Dependencies section below).
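The environment step is not spelled out above; the following is a minimal sketch assuming a Python virtual environment and a requirements.txt at the repository root (the file name and layout are assumptions, not confirmed by this repo):

```shell
# Create and activate a virtual environment (assumes Python 3 is installed)
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies; a requirements.txt at the repo root is assumed
pip install -r requirements.txt
```

Adjust the activation command for your shell (e.g. `.venv\Scripts\activate` on Windows).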

Usage

Detailed usage instructions for each project version can be found in the respective documentation links provided in the table below.

Projects

Version  Concept                          Status       Tech
v0.11    Voice - Shopping Bot             In-progress  Python
v0.10    Multi-modal Agents               In-progress  Python
v0.9     NoteBook LLama                   Complete     Python + TTS
v0.8     Quantisation                     Paused       llama.cpp
v0.7     On-device Mobile                 Paused       Android + TF Lite
v0.6     UI                               Complete     TypeScript - link
v0.5     Indoor Maps + v0.4               Paused       ROS2
v0.4     Image/Scene Recognition + v0.3   Complete     llava/moondream
v0.3     Speech Output + v0.2             Complete     coqui tts + v0.2
v0.2     Speech Input + v0.1              Complete     whisper + ffmpeg + v0.1
v0.1     Text Query + API Calls           Complete     mistral7B-v0.3 + ollama + REST API
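As a concrete reference for the v0.1 layer (text query against a local model via ollama's REST API), here is a minimal sketch. The endpoint and payload fields are ollama's documented defaults (`/api/generate` on port 11434); the function names and the choice of `mistral` as the model tag are illustrative assumptions, not code from this repository:

```python
import json
import urllib.request

# Default endpoint of a locally running `ollama serve` instance
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(prompt: str, model: str = "mistral") -> dict:
    """Assemble the JSON body for ollama's /api/generate endpoint."""
    # stream=False asks ollama to return a single JSON object
    # instead of a stream of partial responses
    return {"model": model, "prompt": prompt, "stream": False}


def query_ollama(prompt: str, model: str = "mistral") -> str:
    """Send a prompt to a local ollama server and return the model's reply."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example usage (requires `ollama serve` running with the model pulled):
# print(query_ollama("What is the capital of France?"))
```

The same request shape works for any model tag that ollama has pulled locally, so swapping in `mistral:7b-instruct-v0.3` or another model is a one-argument change.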

Base Setup

ChatUI

Code CoPilot

Tutorials

Extra

Applications

Reconnaissance with Drone

Reconnaissance

Upcoming Challenges

Dependencies

FAQs

Screenshots

Versioning

We use SemVer for versioning. For the versions available, see the tags on this repository.

Acknowledgments

Contact

For any questions or support, please contact your-email@example.com or join our Discord Server.

License

This project is licensed under the MIT License - see the LICENSE file for details.