# Choosing the right OLLM models

### Overview

The **OnChainBrain (OCB) Framework** provides seamless integration with multiple **Open Large Language Models (OLLMs)**, allowing you to toggle them **on and off** based on your needs. You can also use multiple models **simultaneously** to optimize performance, accuracy, and cost-effectiveness.

### **Available OLLM Models**

OCB supports the following AI models:

* **OpenAI (GPT-4, GPT-3.5)** - Best for general-purpose text generation and advanced NLP tasks.
* **Meta AI** (**Premium API**) - Ideal for multimodal AI applications and research-driven implementations.
* **Luma AI** (**Premium API**) - Specialized in visual intelligence and image-related AI capabilities.
* **DeepSeek AI** (**Premium API**) - Designed for advanced data analysis, AI-powered search, and automation.
* **Grok AI** (**Early Development**) - Cutting-edge experimental AI model with evolving capabilities.

### **How to Choose the Right Model**

Depending on your use case, you can choose the appropriate model(s) for your application (a routing sketch follows the recommendations below):

#### **1. General AI Agents & Chatbots**

* **Recommended Models**: OpenAI, Meta AI
* **Why?** OpenAI offers excellent conversational AI, while Meta AI provides robust NLP and multimodal functionalities.

#### **2. Blockchain & Smart Contract Analysis**

* **Recommended Models**: DeepSeek AI, OpenAI
* **Why?** DeepSeek AI excels in AI-powered data analysis, while OpenAI enhances query interpretation.

#### **3. Visual Intelligence & AI-Assisted Design**

* **Recommended Models**: Luma AI
* **Why?** Luma AI specializes in multimodal AI, processing visual and textual data effectively.

#### **4. Cost Efficiency & Scalability**

* **Recommended Models**: Grok AI (Early Development), DeepSeek AI
* **Why?** These models optimize performance while reducing API costs for large-scale deployments.
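
The recommendations above can be captured as a simple routing map. The task names and model keys below are illustrative placeholders, not part of the OCB API:

```
// Illustrative task-to-model routing based on the recommendations above.
// Task names and model keys are placeholders, not OCB identifiers.
const modelRouting = {
    chatbot: ['openai', 'meta_ai'],
    contract_analysis: ['deepseek', 'openai'],
    visual_design: ['luma_ai'],
    cost_efficient: ['grok', 'deepseek'],
};

// Fall back to OpenAI for tasks without an explicit route.
function modelsForTask(task) {
    return modelRouting[task] ?? ['openai'];
}
```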

### **Enabling & Disabling Models**

You can **toggle AI models on and off** based on your specific requirements. This allows you to:

* Reduce API costs by using only necessary models.
* Improve response times by selecting the fastest-performing models.
* Experiment with different AI combinations for enhanced outcomes.

### **Using Multiple Models Simultaneously**

OCB allows **simultaneous model usage**, meaning:

* Different models can be used for specific **tasks** within the same agent.
* If one model fails or exceeds its rate limit, another model can take over (see the failover sketch after this list).
* You can distribute requests across multiple AI providers for redundancy and stability.
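
For example, here is a minimal failover sketch; `queryOpenAI` and `queryDeepSeek` are hypothetical stand-ins for whatever provider calls your agent makes, not OCB functions:

```
// Hypothetical failover: try the primary model, fall back on error.
// queryOpenAI and queryDeepSeek are placeholders, not OCB functions.
async function queryWithFallback(prompt) {
    try {
        return await queryOpenAI(prompt); // primary provider
    } catch (err) {
        console.warn('Primary model failed, falling back:', err.message);
        return await queryDeepSeek(prompt); // secondary provider
    }
}
```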

### **How to Configure AI Models in the OCB Framework v2**

To select or toggle OLLM models, modify your `.env` file:

```
# Enable or Disable AI Models
ENABLE_OPENAI=true
ENABLE_META_AI=false
ENABLE_LUMA_AI=true
ENABLE_DEEPSEEK=true
```
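
Note that these flags only take effect once the file is loaded into the process environment. In a Node.js project this is commonly done with the `dotenv` package (an assumption here; OCB may load the file for you):

```
// Load .env into process.env at startup (assumes the dotenv package).
require('dotenv').config();
```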

Alternatively, you can toggle models programmatically with the **OCB v2 API** by reading the same flags in code:

```
// process.env values are always strings, so compare each flag to 'true'
const enabledModels = {
    openai: process.env.ENABLE_OPENAI === 'true',
    meta_ai: process.env.ENABLE_META_AI === 'true',
    luma_ai: process.env.ENABLE_LUMA_AI === 'true',
    deepseek: process.env.ENABLE_DEEPSEEK === 'true',
};
```
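
From there you might derive the list of active providers, for example for routing or logging. This is a sketch, not an official OCB helper:

```
// List the models currently switched on, e.g. ['openai', 'luma_ai', 'deepseek'].
const activeModels = Object.keys(enabledModels).filter((name) => enabledModels[name]);
console.log(`Active models: ${activeModels.join(', ')}`);
```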

### **Final Thoughts**

Choosing the right OLLM model(s) depends on your **use case, budget, and performance needs**. With **OCB’s flexible AI integration**, you can dynamically toggle models and optimize them for your application.

🚀 **Start experimenting and find the perfect combination for your AI agent today!**
