Navigating the Future of AI with Embedchain’s RAG Framework: The Power of Embedchain’s Vector Database

Imagine you’re an adventurer exploring an unknown land full of mysteries and opportunities. That’s what navigating the evolving landscape of artificial intelligence (AI) can feel like. Now imagine you have a magical guidebook called Embedchain, offering detailed maps and tools to make your journey smoother and more exciting. Embedchain, an innovative open-source retrieval-augmented generation (RAG) framework, is like a Swiss Army knife for AI enthusiasts. It’s designed to help you quickly create and deploy AI applications, whether you’re a seasoned explorer (developer) or just starting out. It’s about making the complex world of AI as simple and enjoyable as building a castle out of toy blocks.

First, let’s explain what RAG is

Retrieval-augmented generation (RAG) is a technique used in natural language processing that combines the powers of both retrieval (searching for relevant information) and generation (creating coherent text). It’s designed to improve the quality and relevance of the generated text in models like chatbots or question-answering systems.

Here’s how RAG works

  1. Retrieval: When the model receives a prompt or a question, it first searches a large dataset or database to find relevant documents or text snippets. This is similar to how you might use a search engine to find information on a topic.
  2. Augmentation: The retrieved texts are then fed into a generative model. This model, often a large language model like GPT-4, PaLM 2, Claude, or LLaMA, uses the information from the retrieved texts to better understand the context and nuances of the topic.
  3. Generation: Finally, the model generates a response or completes the text, incorporating the relevant information retrieved. The model can provide more accurate, informative, and contextually relevant answers by grounding its responses in real-world information, as the short sketch below illustrates.
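
To make these three steps concrete, here is a minimal, illustrative sketch of the retrieve-augment-generate loop in Python. The helpers search_documents and llm_generate are placeholders for whatever retriever and language model you actually use, so treat this as pseudocode rather than a specific library’s API.

def rag_answer(question, search_documents, llm_generate, top_k=3):
    # 1. Retrieval: find the most relevant documents for the question
    documents = search_documents(question, limit=top_k)
    # 2. Augmentation: put the retrieved text into the prompt as context
    prompt = "Context:\n" + "\n".join(documents) + f"\n\nQuestion: {question}\nAnswer:"
    # 3. Generation: let the model answer, grounded in the retrieved context
    return llm_generate(prompt)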

Benefits of RAG

  • Improved Accuracy: The model can provide more factual and up-to-date information by basing its responses on retrieved documents.
  • Contextual Understanding: RAG helps models understand the context better by providing background information.
  • Versatility: It’s useful for various applications, from chatbots and customer service to content creation.

Challenges

  • Quality of Sources: The output is only as good as the retrieved documents. The final output will suffer if the retrieval step fetches irrelevant or poor-quality information.
  • Complexity: Implementing RAG can be technically complex and resource-intensive, requiring powerful models and large, well-curated datasets.

What is Embedchain?

Embedchain is a bit like a bright and friendly robot that’s great at organizing things. Imagine you have a massive pile of Lego blocks in all shapes and sizes but want to build a specific model. Embedchain is the friend who sorts through the pile, finds exactly what you need, and hands it to you when needed. It does this for AI by handling various types of unstructured data, breaking them into manageable chunks, generating relevant ‘embeddings’ (think of these as intelligent labels that help the computer understand the data), and then storing them in a ‘vector database’ for easy retrieval. For example, if you’re building an AI to help students learn history, Embedchain can take historical texts, understand the crucial parts, and help the AI use this information to answer student questions accurately.

Key Features of Embedchain:

  1. Data Processing: It automatically recognizes the data type, processes it, and creates embeddings for critical parts of the data.
  2. Data Storage: Users can choose where to store processed data in a vector database.
  3. Diverse APIs: Embedchain offers APIs that enable users to extract contextual information, find precise answers, or engage in interactive chat conversations.
  4. User-Friendly for Varied Expertise Levels: It is designed for a wide range of users, from AI professionals to beginners, offering ease of use and extensive customization options.
  5. Simplified RAG Pipeline Management: The framework handles the complexities of developing an RAG pipeline, such as integrating and indexing data from diverse sources, determining optimal data chunking methods, and implementing efficient data storage.
  6. Tailored Application Development: Users can tailor the system to meet specific needs, whether for simple projects or complex AI applications.

Who is Embedchain for?

Embedchain is like a universal toolset that’s helpful for a wide range of people. Whether you’re a data scientist, a machine learning engineer, a college student, an independent developer, or someone who loves tinkering with technology, Embedchain has something for you. It’s designed to be user-friendly, allowing beginners to build sophisticated AI applications with just a few lines of code. At the same time, it’s also highly customizable, letting experts tweak and fine-tune various aspects to fit their exact needs. Think of it as a set of building blocks that can be as simple or complex as you want them to be. For instance, a high school student might use Embedchain to create a simple chatbot for a school project, while a professional developer might use it to build a complex AI-powered system for analyzing scientific data.

Why Use Embedchain?

Using Embedchain is like having a personal assistant who’s good at jigsaw puzzles. Developing an AI involves combining many different data and processes, which can be complicated. Embedchain simplifies this by handling the tough stuff for you. It automatically recognizes and processes data, creates embeddings, and decides where to store this information. When your AI needs to answer a question or decide, Embedchain quickly finds the relevant information and helps the AI understand it. This means you can focus on the creative and essential parts of building your AI, like deciding what it should do and how it should interact with people. For example, if you’re creating an AI to provide cooking tips, Embedchain can help you understand and use a vast collection of recipes, cooking techniques, and flavor profiles so it can give you the best advice whether you’re making breakfast or planning a gourmet dinner.

How does Embedchain work?

The image outlines a workflow for an AI-powered application using Embedchain’s vector database system. Here’s how it works, explained in a simplified way (a short code sketch after the list makes the query path concrete):

Understanding the Flow:

  1. OpenAI API: This is the central hub where everything starts. It connects to two key components:

    • gpt-3.5-turbo: This is the language model that generates responses or completes tasks based on user input.
    • text-embedding-ada-002: This component is responsible for turning text into numerical representations, called embeddings, which the computer can understand and process.
  2. Compute chunk embedding: This process involves breaking down large pieces of text into smaller, more manageable parts, called chunks. Each chunk is then transformed into an embedding by the text-embedding model.

  3. Vector Database: Think of this like a big, smart library where all the chunk embeddings are stored. It’s organized in such a way that it’s easy to find and retrieve the chunks later when needed.

  4. Database Interface: This acts as the librarian, helping users to upload their customized data (in chunks) into the Vector Database and retrieve them when needed.

  5. Query Interface: This is where users interact with the system. They ask questions, and the Query Interface translates those questions into embeddings, much like it does with the data chunks.

  6. Compute question embedding: When a user asks a question, the Query Interface calculates the embedding for this question to understand what’s being asked.

  7. Ask for chunks: Once the question is understood, the system looks for relevant chunks in the Vector Database that might contain the answer.

  8. Ask for responses: The relevant chunks are then passed to the gpt-3.5-turbo model, which uses them to generate a precise and informative response.

  9. Users: There are two main interactions for users:

    • Asking questions: Users can ask questions to get information or responses from the AI.
    • Uploading customized data: Users can add their own data to the Vector Database, which can then be used by the AI to generate responses.
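
To make the query path above concrete, here is a minimal sketch using the legacy openai Python library (the same pre-1.0 interface that appears later in this post). A plain in-memory list stands in for the vector database, and the chunk texts are purely illustrative; Embedchain automates all of this for you.

Python
import openai

openai.api_key = "sk-xxxx"  # replace with your OpenAI key

def embed(text):
    # text-embedding-ada-002 turns text into a list of floats (an embedding)
    result = openai.Embedding.create(input=text, model="text-embedding-ada-002")
    return result["data"][0]["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

# Steps 1-4: compute chunk embeddings and store them (here, a plain list)
chunks = ["Elon Musk founded SpaceX in 2002.", "Tesla builds electric cars."]
index = [(chunk, embed(chunk)) for chunk in chunks]

# Steps 5-7: embed the question and ask for the most similar chunk
question = "Who founded SpaceX?"
question_embedding = embed(question)
best_chunk = max(index, key=lambda item: cosine(question_embedding, item[1]))[0]

# Step 8: pass the retrieved chunk to gpt-3.5-turbo to generate the response
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": f"Context: {best_chunk}\n\nQuestion: {question}"}],
)
print(response["choices"][0]["message"]["content"])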

The Role of Embedchain


Embedchain is the framework that facilitates this entire process. The underlying structure allows all these components to work together smoothly. Embedchain’s vector database is crucial, as it efficiently stores and retrieves the data embeddings. This enables the AI to provide fast and relevant responses to user queries, drawing on a wealth of organized information. The result is an intelligent system that can interact with users conversationally, providing them with information or assistance based on a vast and easily accessible knowledge database.

Let’s say you’re making a scrapbook, but instead of pictures and stickers, you’re using bits of information. Embedchain helps you by cutting and organizing these bits and then pasting them in the right places. For AI, this means taking data (like text, images, or sound), breaking it into pieces, understanding what each piece means, and then storing it in a way that’s easy to find later. When someone asks your AI a question, Embedchain quickly flips through the scrapbook to find the correct information and helps the AI understand it to give a good answer. For instance, if you’ve built an AI to help travelers find the perfect vacation spot, Embedchain can help it understand and remember details about thousands of destinations, from the best local dishes to the most exciting activities, to give personalized recommendations.

How to install it?

Installing Embedchain is like downloading a new app on your phone. You go to the place where it’s available, in this case, a website called GitHub, and follow some simple instructions to get it on your computer. There’s some technical stuff involved, like using a ‘command prompt’ to tell your computer what to do, but the instructions are clear and easy to follow. Once you’ve installed Embedchain, it’s like having a new superpower for your computer, letting it understand and use AI in all sorts of exciting ways.

Embedchain Installation Process

The installation process for Embedchain is straightforward and can be completed in a few simple steps. Here’s a step-by-step guide to help you get started:

Step 1: Install the Python Package

  1. Open a Terminal: Start by opening your terminal or command prompt.
  2. Install Embedchain: Use Python’s package manager, pip, to install Embedchain. Enter the following command:
pip install embedchain

Step 2: Choose Your Model Type

With Embedchain, you have the option to use either open-source models or paid models.

Option 1: Open Source Models

  • Open-source LLMs (Large Language Models) like Mistral, Llama, etc., are free to use and run locally on your machine.

Option 2: Paid Models

  • This includes paid LLMs like GPT-4, Claude, etc. These models cost money and are accessible via an API.

Step 3: Set Up the Environment

For Open Source Models (e.g., Mistral)

  1. Obtain a Hugging Face Token: If you’re using a model hosted on Hugging Face (like Mistral), you’ll need a Hugging Face token. You can create one for free on their website.
  2. Set the Environment Variable: Replace "hf_xxxx" with your actual Hugging Face token in the following command and run it:
import os
os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_xxxx"
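
With the token set, an open-source setup can be wired up with a config that follows the same provider/config pattern as the OpenAI cookbook later in this post. The exact keys and model names below are illustrative and may differ between Embedchain versions, so check the current documentation.

from embedchain import App

app = App.from_config(config={
    "llm": {
        "provider": "huggingface",
        "config": {
            "model": "mistralai/Mistral-7B-Instruct-v0.2"  # open-source model hosted on Hugging Face
        }
    },
    "embedder": {
        "provider": "huggingface",
        "config": {
            "model": "sentence-transformers/all-mpnet-base-v2"  # free, locally usable embedding model
        }
    }
})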

For Paid Models (e.g., GPT-4)

  1. Obtain an OpenAI API Key: If you’re using a paid model from OpenAI, you’ll need an OpenAI API key.
  2. Set the Environment Variable: Replace "sk-xxxx" with your actual OpenAI API key in the following command and run it:
import os
os.environ["OPENAI_API_KEY"] = "sk-xxxx"

Step 4: Create and Run Your Application

  1. Import Embedchain: Import the App class from the embedchain package.
  2. Initialize the App: Create an instance of the App class.
  3. Add Data: Add URLs or other data sources to your application using the add method.
  4. Query: Use the query method to ask questions or get information from your data.

Example Code Snippet:

Python
import os
from embedchain import App

# replace this with your OpenAI key
os.environ["OPENAI_API_KEY"] = "sk-xxxx"

app = App()

app.add("https://www.forbes.com/profile/elon-musk")
app.add("https://en.wikipedia.org/wiki/Elon_Musk")

app.query("What is the net worth of Elon Musk today?")
# Answer: The net worth of Elon Musk today is $258.7 billion.

This basic guide should help you get Embedchain installed and running on your system. Remember to replace tokens and URLs with your specific data and credentials.

Cookbook for using Azure OpenAI and OpenAI with Embedchain

1 - OpenAI

Step-1: Install Embedchain package

!pip install embedchain

Step-2: Set OpenAI environment variables

You can find this env variable on your OpenAI dashboard.

import os
from embedchain import App

os.environ["OPENAI_API_KEY"] = "sk-xxx"

Step-3: Create Embedchain app and define your config

app = App.from_config(config={
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-3.5-turbo",
            "temperature": 0.5,
            "max_tokens": 1000,
            "top_p": 1,
            "stream": False
        }
    },
    "embedder": {
        "provider": "openai",
        "config": {
            "model": "text-embedding-ada-002"
        }
    }
})

Step-4: Add data sources to your app

app.add("https://www.forbes.com/profile/elon-musk")
app.add("https://en.wikipedia.org/wiki/Elon_Musk")

Step-5: All set. Now start asking questions related to your data

while True:
    question = input("Enter question: ")
    if question in ['q', 'exit', 'quit']:
        break
    answer = app.query(question)
    print(answer)

2 - Azure OpenAI

Step-1: Install Embedchain package

!pip install embedchain

Step-2: Set Azure OpenAI-related environment variables

You can find these env variables on your Azure OpenAI dashboard.

import os
from embedchain import App

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://xxx.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "xxx"
os.environ["OPENAI_API_VERSION"] = "xxx"

Step-3: Define your LLM and embedding model config

config = """
llm:
  provider: azure_openai
  model: gpt-35-turbo
  config:
    deployment_name: ec_openai_azure
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: azure_openai
  config:
    model: text-embedding-ada-002
    deployment_name: ec_embeddings_ada_002
"""

# Write the multi-line string to a YAML file
with open('azure_openai.yaml', 'w') as file:
    file.write(config)

Step-4: Create Embedchain app based on the config

app = App.from_config(config_path="azure_openai.yaml")

Step-5: Add data sources to your app

app.add("https://www.forbes.com/profile/elon-musk")
app.add("https://en.wikipedia.org/wiki/Elon_Musk")

Step-6: All set. Now start asking questions related to your data

while True:
    question = input("Enter question: ")
    if question in ['q', 'exit', 'quit']:
        break
    answer = app.query(question)
    print(answer)

Choosing the Right Model

Embedchain supports open-source and paid models, giving users flexibility based on their requirements and resources. Here’s an overview of the models supported by Embedchain and their benefits:

Open Source Models

  1. Mistral:
    • Hosted on Hugging Face.
    • It is free to use and runs primarily on your local machine.
    • Benefits: Ideal for users with privacy concerns or limited budgets. Suitable for experimentation and learning.
  2. Llama:
    • Another open source LLM.
    • Benefits: Offers a balance between performance and cost-effectiveness. Suitable for projects where cost is a concern.
  3. GPT4All:
    • A free-to-use, locally running model.
    • Benefits: Privacy-aware, does not require a GPU or internet. Good for local development and privacy-focused applications.
  4. JinaChat:
    • Requires setting up a JINACHAT_API_KEY.
    • Benefits: Provides flexibility and local control over the language model.

Paid Models

  1. GPT-4 (from OpenAI):
    • Accessible via an API.
    • Benefits: State-of-the-art model offering high-quality responses. Ideal for complex and commercial applications.
  2. Claude (from Anthropic):
    • Requires setting up the ANTHROPIC_API_KEY.
    • Benefits: Offers advanced AI capabilities for sophisticated applications.
  3. Azure OpenAI:
    • Provides access to OpenAI models through Azure’s cloud services.
    • Benefits: Combines the power of OpenAI models with the reliability and scalability of Azure’s cloud infrastructure.
  4. Cohere:
    • Access through COHERE_API_KEY.
    • Benefits: Known for its natural language understanding capabilities, it is suitable for various applications, including content generation and analysis.
  5. Together:
    • Accessed via the TOGETHER_API_KEY.
    • Benefits: Offers specialized language models for specific use cases.

Benefits of Open Source vs. Paid Models

  • Cost-Effectiveness: Open source models are generally free, making them accessible for users with limited budgets or experimenting.
  • Privacy and Security: Open source models can be run locally, providing better control over data privacy.
  • State-of-the-Art Performance: Paid models like GPT-4 often deliver more advanced capabilities and higher accuracy, suitable for professional and commercial applications.
  • Scalability: Paid models, especially those offered through cloud services like Azure OpenAI, provide scalability for handling large volumes of requests or data.
  • Support and Reliability: Paid models often come with professional support, regular updates, and reliability guarantees, which are crucial for business-critical applications.

Choosing between open-source and paid models depends on your specific needs, budget, and project scale. Embedchain’s support for various models ensures flexibility and adaptability for various use cases.

Use Cases of Embedchain

1. Chatbots

  • Application Areas:
    • Customer Service: Automating responses to common inquiries and providing round-the-clock support.
    • Education: Personalized tutoring and learning assistance.
    • E-commerce: Assisting in product discovery, making recommendations, and facilitating transactions.
    • Content Management: Helping in writing, summarizing, and content organization.
    • Data Analysis: Extracting insights from large datasets.
    • Language Translation: Offering real-time support in multiple languages.
    • Mental Health: Providing preliminary support and conversational engagement.
    • Entertainment: Engaging users through games, quizzes, and humorous interactions​​.

2. Question Answering

  • Versatile Applications:
    • Educational Aid: Enhancing learning experiences and helping with homework.
    • Customer Support: Efficiently addressing and resolving customer queries.
    • Research Assistance: Supporting academic and professional research.
    • Healthcare Information: Providing basic medical knowledge.
    • Technical Support: Resolving technology-related questions.
    • Legal Information: Offering essential legal advice and information.
    • Business Insights: Delivering market analysis and strategic business advice.
    • Language Learning: Aiding in understanding and translating various languages.
    • Travel Guidance: Providing travel and hospitality information.
    • Content Development: Assisting authors and creators in research and idea generation​​.
  • Enhanced Information Retrieval and Discovery:
    • Information Retrieval: Improving search accuracy in databases and websites.
    • E-commerce: Enhancing product discovery in online shopping platforms.
    • Customer Support: Empowering chatbots for more effective responses.
    • Content Discovery: Aiding in finding relevant media content.
    • Knowledge Management: Streamlining document and data retrieval in enterprises.
    • Healthcare: Facilitating medical research and literature searches.
    • Legal Research: Assisting in legal document and case law searches.
    • Academic Research: Aiding in academic paper discovery.
    • Language Processing: Enabling multilingual search capabilities​​.

Each of these use cases demonstrates the versatility and wide-ranging applications of Embedchain, highlighting its capability to enhance various domains with advanced AI-driven functionalities.

Configuration and Customization in Embedchain

Embedchain offers various configuration and customization options across its components, ensuring flexibility and adaptability for diverse use cases. Here’s an organized overview:

Components Configuration

  1. Data Source:
    • Embedchain supports a variety of data sources, enabling the loading of unstructured data through a user-friendly interface. Supported data sources include:
      • PDF, CSV, JSON files
      • Text, MDX, DOCX files
      • HTML web pages, YouTube channels, and videos
      • Docs websites, Notion, Sitemaps, XML files
      • Q&A pairs, OpenAPI, Gmail, GitHub repositories
      • PostgreSQL, MySQL databases
      • Slack, Discord, Discourse, Substack
      • Beehiiv, Dropbox, Images, and custom sources​​.
  2. Large Language Models (LLMs):
    • Embedchain integrates various popular LLMs, simplifying the process of incorporating them into your application. Supported LLMs include:
      • OpenAI (requiring OPENAI_API_KEY)
      • Google AI, Azure OpenAI, Anthropic, Cohere
      • Together, Ollama, GPT4All, JinaChat
      • Hugging Face, Llama2, Vertex AI​​.
  3. Embedding Models:
    • Embedchain supports several embedding models from providers such as:
      • OpenAI, GoogleAI, Azure OpenAI
      • GPT4All, Hugging Face, Vertex AI​​.
  4. Vector Databases:
    • The integration of vector databases is streamlined in Embedchain. You can configure them within the YAML (or dictionary) configuration file; a short configuration sketch follows this list. Supported databases include:
      • ChromaDB, Elasticsearch, OpenSearch
      • Zilliz, LanceDB, Pinecone, Qdrant
      • Weaviate (requiring WEAVIATE_ENDPOINT and WEAVIATE_API_KEY)​​.
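
As mentioned in item 4 above, the vector database is selected through the same configuration mechanism as the LLM and embedder. The sketch below shows one possible ChromaDB setup; the exact keys (such as "vectordb" and the provider name) can vary between Embedchain versions, so treat it as illustrative and check the documentation for your release.

from embedchain import App

app = App.from_config(config={
    "llm": {"provider": "openai", "config": {"model": "gpt-3.5-turbo"}},
    "embedder": {"provider": "openai", "config": {"model": "text-embedding-ada-002"}},
    "vectordb": {
        "provider": "chroma",            # local, file-backed vector database
        "config": {
            "collection_name": "my-rag-app",
            "dir": "db",                 # directory where the index is persisted
        },
    },
})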

Deployment of Embedchain

Embedchain simplifies the deployment process of RAG applications, allowing them to be hosted on various cloud platforms. This flexibility ensures that users can select a hosting service that best suits their needs and preferences. The various cloud providers supported by Embedchain for deployment are:

  1. Fly.io: A platform known for its simplicity and ease of use, suitable for applications requiring global distribution.
  2. Modal.com: Offers scalable computing for large-scale applications.
  3. Render.com: Known for its developer-friendly features, it provides static sites, web services, and private services.
  4. Streamlit.io: A popular choice for machine learning and data science applications, enabling easy creation of interactive web apps.
  5. Gradio.app: Ideal for creating shareable machine learning demos and web applications.
  6. Huggingface.co: A platform specializing in natural language processing and machine learning models, particularly those involving LLMs.
  7. Embedchain.ai: The native platform for Embedchain, likely offering the most integrated and streamlined experience for deploying Embedchain applications.

Each platform offers unique features and benefits, catering to various application requirements, from small-scale projects to large, enterprise-level deployments​​.

Practical Applications and Examples

Embedchain offers a versatile set of tools that can be utilized to create various types of chatbots, each tailored for specific applications and platforms. Here are some practical examples and applications:

  1. Full Stack Chatbot:
    • Application: Integrate a chatbot within a full-stack application.
    • Use Case: Ideal for web applications that require interactive user engagement.
  2. Custom GPT Creation:
    • Application: Build a tailored GPT chatbot suited to your specific needs.
    • Use Case: Useful for creating specialized chatbots for customer service or personalized assistance.
  3. Slack Integration Bot:
    • Application: Enhance your Slack workspace with a specialized bot.
    • Use Case: Integrating AI functionalities into Slack for improved workplace communication and automation.
  4. Discord Community Bot:
    • Application: Create an engaging bot for your Discord server.
    • Use Case: Enhancing community interaction on Discord servers with automated responses or interactive features.
  5. Telegram Assistant Bot:
    • Application: Develop a handy assistant for Telegram users.
    • Use Case: Providing assistance, automation, and engagement in Telegram channels or groups.
  6. WhatsApp Helper Bot:
    • Application: Design a WhatsApp bot for efficient communication.
    • Use Case: Automate responses and provide information services on WhatsApp.
  7. Poe Bot for Unique Interactions:
    • Application: Explore advanced bot interactions with Poe Bot.
    • Use Case: Creating bots with unique, advanced interaction capabilities, possibly for gaming, storytelling, or engaging user experiences.

These examples demonstrate Embedchain’s adaptability in creating chatbots for different platforms and purposes, ranging from simple automation to complex, interactive applications​​.

Access the Notebooks examples featuring LLMs, Embedding Models, and Vector DBs with Embedchain by clicking this link.

Conclusion

Embedchain is a beacon of guidance and empowerment in AI’s vast and ever-changing landscape. It’s akin to having a compass and a map while navigating uncharted territories. This remarkable tool demystifies the complexities of AI, making it approachable and accessible to everyone, from curious novices to seasoned experts. Whether you’re taking your first steps into this exciting field or an experienced traveler looking to push the boundaries further, Embedchain offers the resources, support, and flexibility you need to bring your visionary AI projects to life.

Embedchain isn’t just a tool; it’s a companion on your journey through the world of AI. It’s there to handle the heavy lifting, allowing you to focus on your projects’ creative and impactful aspects. With its user-friendly nature and adaptable framework, Embedchain ensures that the future of AI isn’t just a realm for the few but an accessible, enriching, and empowering experience for all. It’s your ally in unlocking the full potential of AI, helping you turn your imaginative ideas into real-world solutions and innovations.

That’s it for Today!

Sources

https://embedchain.ai/

https://embedchain.ai/blog/introducing-embedchain

https://gptpluginz.com/embedchain-ai/

embedchain/embedchain: The Open Source RAG framework (github.com)

Introducing the New Google Gemini API: A Comparative Analysis with ChatGPT in the AI Revolution

Google’s recent announcement of the Gemini API marks a transformative leap in artificial intelligence technology. This cutting-edge API, developed by Google DeepMind, is a testament to Google’s commitment to advancing AI and making it accessible and beneficial for everyone. This blog post will explore the multifaceted features, potential applications, and impact of the Google Gemini API, as revealed in Google’s official blogs and announcements.

What is Google Gemini?

Google Gemini is a highly advanced, multimodal artificial intelligence model developed by Google. It represents a significant step forward in AI capabilities, especially in understanding and processing a wide range of data types.

Extract from the official Google Gemini website

Gemini’s Position in the AI Landscape

Gemini is a direct competitor to OpenAI’s GPT-3 and GPT-4 models. It differentiates itself through its native multimodal capabilities and its focus on seamlessly processing and combining different types of information​​. Its launch was met with significant anticipation and speculation, and it is seen as a crucial development in the AI arms race between major tech companies​.

Below is a comparison of text and multimodal capabilities provided by Google, comparing Gemini Ultra, which has not yet been officially launched, with OpenAI’s GPT-4.


Key Features of Gemini

  1. Multimodal Capabilities: Gemini’s groundbreaking design allows it to process and comprehend various data types seamlessly, from text and images to audio and video, facilitating sophisticated multimodal reasoning and advanced coding capabilities.
  2. Three Distinct Models: The Gemini API offers three versions – Ultra, Pro, and Nano, each optimized for different scales and types of tasks, ranging from complex data center operations to efficient on-device applications.
  3. State-of-the-Art Performance: Gemini models have demonstrated superior performance on numerous academic benchmarks, surpassing human expertise in specific tasks and showcasing their advanced reasoning and problem-solving abilities.
  4. Diverse Application Spectrum: The versatility of Gemini allows for its integration across a wide array of sectors, including healthcare, finance, and technology, enhancing functionalities like predictive analytics, fraud detection, and personalized user experiences.
  5. Developer and Enterprise Accessibility: The Gemini Pro is now available for developers and enterprises, with various features such as function calling, semantic retrieval, and chat functionality. Additionally, Google AI Studio and Vertex AI support the integration of Gemini into multiple applications.

The New Google Gemini API

The Gemini API represents a significant stride in AI development, introducing Google’s most capable and comprehensive AI model to date. This API is the product of extensive collaborative efforts, blending advanced machine learning and artificial intelligence capabilities to create a multimodal system. Unlike previous AI models, Gemini is designed to understand, operate, and integrate various types of information, including text, code, audio, images, and video, showcasing a new level of sophistication in AI technology.

Benefits for Developers and Creatives:

Gemini’s versatility unlocks a plethora of possibilities for developers and creatives alike. Imagine:

  • Building AI-powered applications: Gemini can power chatbots, virtual assistants, and personalized learning platforms.
  • Boosting your creative workflow: Generate song lyrics, script ideas, or even marketing copy with Gemini’s innovative capabilities.
  • Simplifying coding tasks: Let Gemini handle repetitive coding tasks or write entire code snippets based on your instructions.
  • Unlocking new research avenues: Gemini’s multimodal abilities open doors for exploring the intersection of language, code, and other modalities in AI research.

How to use the Google Gemini API?

Using the Google Gemini API involves several steps and can be applied to various programming languages and platforms. Here’s a comprehensive guide based on the information from Google AI for Developers:

Setting Up Your Project

  1. Obtain an API Key: First, create an API key in Google AI Studio or MakerSuite. It is crucial to secure your API key and not check it into your version control system. Instead, pass your API key to your app before initializing the model.
  2. Initialize the Generative Model: Import and initialize the Generative Model in your project. This involves specifying the model name (e.g., gemini-pro-vision for multimodal input) and accessing your API key.

Follow the Python quickstart on Google Colab.
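
One common way to keep the key out of source control is to read it from an environment variable when configuring the client; the variable name GOOGLE_API_KEY below is just an example, not something the SDK requires.

Python
import os
import google.generativeai as genai

# Read the key from the environment instead of hardcoding it in the source
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-pro")  # or "gemini-pro-vision" for multimodal input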

Implementing Use Cases

The Gemini API allows you to implement different use cases:

  1. Text-Only Input: Use the gemini-pro model with the generateContent method for text-only prompts.
  2. Multimodal Input (Text and Image): Use the gemini-pro-vision model. Make sure to review the image requirements for input.
  3. Multi-Turn Conversations (Chat): Use the gemini-pro model and initialize the chat by calling startChat(). Use sendMessage() to send new user messages.
  4. Streaming for Faster Interactions: Implement streaming with the generateContentStream method to handle partial results for faster interactions (see the chat and streaming sketch below).
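
The two official samples below only use generateContent, so here is a short sketch of the chat and streaming use cases with the google-generativeai Python SDK. Note that the Python method names are start_chat(), send_message(), and generate_content(..., stream=True), rather than the JavaScript-style startChat()/sendMessage()/generateContentStream mentioned above.

Python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")

# Multi-turn conversation (chat)
chat = model.start_chat(history=[])
print(chat.send_message("Hi! What can you help me with?").text)
print(chat.send_message("Summarize that in one sentence.").text)

# Streaming partial results for faster interactions
for chunk in model.generate_content("Write a haiku about the ocean.", stream=True):
    print(chunk.text, end="")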

Gemini Pro

Python
"""
At the command line, you only need to run this once to install the package via pip:

$ pip install google-generativeai
"""

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Set up the model
generation_config = {
  "temperature": 0.9,
  "top_p": 1,
  "top_k": 1,
  "max_output_tokens": 2048,
}

safety_settings = [
  {
    "category": "HARM_CATEGORY_HARASSMENT",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  },
  {
    "category": "HARM_CATEGORY_HATE_SPEECH",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  },
  {
    "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  },
  {
    "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  }
]

model = genai.GenerativeModel(model_name="gemini-pro",
                              generation_config=generation_config,
                              safety_settings=safety_settings)

prompt_parts = [
  "Write 10 paragraphs about the Gemini functionalities:",
]

response = model.generate_content(prompt_parts)
print(response.text)

Gemini Pro Vision

Python
"""
At the command line, you only need to run this once to install the package via pip:

$ pip install google-generativeai
"""

from pathlib import Path
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Set up the model
generation_config = {
  "temperature": 0.4,
  "top_p": 1,
  "top_k": 32,
  "max_output_tokens": 4096,
}

safety_settings = [
  {
    "category": "HARM_CATEGORY_HARASSMENT",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  },
  {
    "category": "HARM_CATEGORY_HATE_SPEECH",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  },
  {
    "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  },
  {
    "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
    "threshold": "BLOCK_MEDIUM_AND_ABOVE"
  }
]

model = genai.GenerativeModel(model_name="gemini-pro-vision",
                              generation_config=generation_config,
                              safety_settings=safety_settings)

# Validate that an image is present
if not (img := Path("image0.jpeg")).exists():
  raise FileNotFoundError(f"Could not find image: {img}")

image_parts = [
  {
    "mime_type": "image/jpeg",
    "data": Path("image0.jpeg").read_bytes()
  },
]

prompt_parts = [
  image_parts[0],
  "\nTell me about this image, what colors do we have here? How many people do we have here?",
]

response = model.generate_content(prompt_parts)
print(response.text)

Implementing in Various Languages

The Gemini API supports several programming languages, each with its specific implementation details:

  • Python, Go, Node.js, Web, Swift, Android, cURL: Each language requires specific code structures and methods for initializing the model, sending prompts, and handling responses. Examples include setting up the Generative Model, defining prompts, and processing the generated content.

Further Reading and Resources

  • The Gemini API documentation and API reference on Google AI for Developers provide detailed information, including safety settings, guides on large language models, and embedding techniques.
  • For specific language implementations and more advanced use cases like token counting, refer to the respective quickstart guides available on Google AI for Developers.

By following these steps and referring to the detailed documentation, you can effectively utilize the Google Gemini API for various applications ranging from simple text generation to more complex multimodal interactions.

Gemini vs. ChatGPT: The Ultimate Multimodal Mind Showdown

The world of large language models (LLMs) is heating up, and two titans stand at the forefront: Google’s Gemini and OpenAI’s ChatGPT. Both boast impressive capabilities, but which one reigns supreme? Let’s dive into a head-to-head comparison.

Google Gemini API – Pricing

Free for Everyone Plan:

  • Rate Limits: 60 QPM (queries per minute)
  • Price (input): Free
  • Price (output): Free
  • Input/output data used to improve our products: Yes

Pay-as-you-go Plan (coming soon to Google AI Studio):

    • Rate Limits: Starts at 60 QPM
    • Price (input): $0.00025 / 1K characters, $0.0025 / image
    • Price (output): $0.0005 / 1K characters
    • Input/output data used to improve our products: No

    Source: Gemini API Pricing  |  Google AI for Developers

    Open AI ChatGPT API – Pricing

    GPT-4 Turbo

    With 128k context, fresher knowledge, and the broadest set of capabilities, GPT-4 Turbo is more capable than GPT-4 and is offered at a lower price.

    Learn about GPT-4 Turbo

    Model                       Input               Output
    gpt-4-1106-preview          $0.01 / 1K tokens   $0.03 / 1K tokens
    gpt-4-1106-vision-preview   $0.01 / 1K tokens   $0.03 / 1K tokens

    GPT-4

    With broad general knowledge and domain expertise, GPT-4 can follow complex instructions in natural language and solve difficult problems accurately.

    Learn about GPT-4

    Model       Input               Output
    gpt-4       $0.03 / 1K tokens   $0.06 / 1K tokens
    gpt-4-32k   $0.06 / 1K tokens   $0.12 / 1K tokens

    GPT-3.5 Turbo

    GPT-3.5 Turbo models are capable and cost-effective.

    gpt-3.5-turbo: This family’s flagship model supports a 16K context window optimized for dialog.

    gpt-3.5-turbo-instruct: An instruction-following model that supports only a 4K context window.

    Learn about GPT-3.5 Turbo

    Model                     Input                 Output
    gpt-3.5-turbo-1106        $0.0010 / 1K tokens   $0.0020 / 1K tokens
    gpt-3.5-turbo-instruct    $0.0015 / 1K tokens   $0.0020 / 1K tokens

    Source: Pricing (openai.com)

    Strengths of Gemini:

    • Multimodality: Gemini shines in its ability to handle text, code, images, and even audio. This opens doors for applications like generating image captions or translating spoken language.
    • Function Calling: Gemini seamlessly integrates into workflows thanks to its function calling feature, allowing developers to execute specific tasks within their code.
    • Embeddings and Retrieval: Gemini’s understanding of word relationships and semantic retrieval leads to more accurate information retrieval and question answering.
    • Custom Knowledge: Gemini allows fine-tuning with your own data, making it a powerful tool for specialized tasks.
    • Multiple Outputs: Gemini goes beyond text generation, offering creative formats like poems, scripts, and musical pieces.

    Strengths of ChatGPT:

    • Accessibility: ChatGPT is widely available through various platforms and APIs, offering free and paid options. Gemini is currently in limited access.
    • Creative Writing: ChatGPT excels in creative writing tasks, producing engaging stories, poems, and scripts.
    • Large Community: ChatGPT has a well-established user community that offers extensive resources and tutorials.

    An experiment comparing the Gemini and ChatGPT APIs using the Sparse Priming Representations (SPR) technique

    I conducted an experiment using the OpenAI ChatGPT and Google Gemini APIs, applying the Sparse Priming Representations (SPR) prompt-engineering technique to compress and decompress a text. Click here to access the experimental code I created in Google Colab.

    The outcome was interesting; both APIs responded very well to the test. In the table below, we can observe a contextual difference, but both APIs were able to perform the task satisfactorily.

    If you want to learn more about Sparse Priming Representations (SPR), I’ve written an entire post discussing it. Here it is below:

    Conclusion

    In the rapidly evolving landscape of artificial intelligence, the Google Gemini API represents a significant milestone. Its introduction heralds a new era where AI transcends traditional boundaries, offering multimodal capabilities far beyond the text-centric focus of models like ChatGPT. Google Gemini’s ability to process and integrate diverse data types — from images to audio and video — not only sets it apart but also showcases the future direction of AI technology.

    While ChatGPT excels in textual creativity and enjoys widespread accessibility and community support, Gemini’s native multimodal functionality and advanced features like function calling and semantic retrieval position it as a more versatile and comprehensive tool. This distinction is crucial in an AI landscape where the needs range from simple text generation to complex, multimodal interactions and specialized tasks.

    As we embrace this new phase of AI development, it’s clear that both ChatGPT and Google Gemini have unique strengths and applications. The choice between them hinges on specific needs and project requirements. Gemini’s launch is not just a technological breakthrough; it’s a testament to the ever-expanding possibilities of AI, promising to revolutionize various sectors and redefine our interaction with technology. With such advancements, the future of AI seems boundless, limited only by our imagination and the ethical considerations of its application.

    That’s it for today!

    Sources:

    https://tech.co/news/gemini-vs-chatgpt

    https://mockey.ai/blog/google-gemini-vs-chatgpt/

    https://www.pcguide.com/ai/compare/google-gemini-vs-openai-gpt-4/

    https://gptpluginz.com/google-gemini/

    https://www.augustman.com/sg/gear/tech/google-gemini-vs-chatgpt-core-differences-of-the-ai-model-chatbots/

    https://whatsthebigdata.com/gemini-vs-chatgpt-how-does-googles-latest-ai-compare/

    https://www.washingtonpost.com/technology/2023/12/06/google-gemini-chatgpt-alternatives/

    Google Gemini Vs OpenAI ChatGPT: What’s Better? (businessinsider.com)

    Navigating Vector Operations in Azure SQL for Better Data Insights: A Guide on How to Use Generative AI to Prompt Queries in Datasets

    The evolving landscape of data analytics has brought vector databases to the forefront, especially with their application in finding similarities in diverse data types such as articles, photos, and products. Azure SQL, combined with the prowess of OpenAI, offers a powerful platform for executing vector operations, simplifying the task of finding similar items and enhancing recommendation systems in applications.

    What is Vector Similarity, and How do you calculate cosine similarity?

    Vector similarity revolves around transforming data into numerical vectors or embeddings. These embeddings are numerical representations of various concepts converted into sequences of numbers, making it easier for computers to grasp their relationships. This method is particularly effective in comparing and finding similarities between data points, a crucial feature in applications like search engines and clustering algorithms.

    Cosine similarity, a commonly used metric in vector similarity, measures the cosine of the angle between two vectors. This metric is crucial in determining the degree of similarity between the vectors, irrespective of their size. In Azure SQL, cosine similarity can be computed with a simple SQL formula involving SUM and SQRT functions applied to the vector elements, thus providing a straightforward yet powerful way to measure vector similarity.

    Cosine similarity can be calculated in SQL using the following formula, given two vectors, a and b:

    SELECT
        SUM(a.[value] * b.[value]) /
        (
            SQRT(SUM(a.[value] * a.[value])) * SQRT(SUM(b.[value] * b.[value]))
        ) AS cosine_similarity
    FROM
        vectors_values AS a -- vector a, stored one row per element
    INNER JOIN
        vectors_values AS b -- vector b, stored one row per element
        ON a.vector_value_id = b.vector_value_id -- match element positions
    WHERE a.vector_id = 1 AND b.vector_id = 2; -- the two vectors being compared
    
    

    How to Query Azure OpenAI?

    Azure SQL’s integration with Azure OpenAI simplifies generating and working with embeddings. Users can obtain the vector representation of any given text by creating an Azure OpenAI resource and deploying a model like text-embedding-ada-002. This integration enables the execution of REST API calls from within the Azure SQL Database, making fetching and working with embeddings more streamlined and accessible.

    SQL
    -- Declare a variable to hold the response from the external REST API
    declare @response nvarchar(max);
    
    -- Declare and initialize a variable with JSON payload. 
    -- The JSON object contains an 'input' key with a text value.
    declare @payload nvarchar(max) = json_object('input': 'This is how to futureproof your career in an AI world');
    
    -- Execute a stored procedure to invoke an external REST endpoint
    exec sp_invoke_external_rest_endpoint
        @url = 'https://<your-app-name>.openai.azure.com/openai/deployments/embeddings/embeddings?api-version=2023-03-15-preview', -- The URL of the REST endpoint
        @credential = [https://<your-app-name>.openai.azure.com], -- Credential for accessing the REST API
        @payload = @payload, -- The JSON payload defined earlier
        @response = @response output; -- Output parameter to store the response from the API

    Advantages of Implementing Vectors in Azure SQL Database

    1. Ease of Access with Azure OpenAI: Azure SQL combined with Azure OpenAI offers easy access to REST services for generating embeddings using pre-trained machine learning models. This accessibility facilitates the calculation of embeddings, which is otherwise a complex task.

    2. Efficient Storage with Columnstore Indexes: Azure SQL databases efficiently store vectors using column store indexes. This method is particularly beneficial since Azure SQL doesn’t have a specific vector data type. Vectors, essentially lists of numbers, can be conveniently stored in a table with one row per vector element.

    3. Fast Distance Calculation: The internal optimization of column store indexes in Azure SQL, employing SIMD and AVX-512 instructions, allows for high-speed calculation of distances between vectors, which is crucial for determining similarity.

    4. Integration with Azure AI Search: Azure SQL’s integration with Azure AI Search streamlines the entire process of chunking, generating, storing, and querying vectors for vector search, significantly speeding up the development of the vectorization pipeline and minimizing maintenance tasks.

    5. Capability for Complex Operations: Azure SQL enables complex operations like indexing, storing, and retrieving vector embeddings from a search index, which is essential for identifying the most similar documents in a vector space.

    6. Versatile Data Handling: Azure SQL’s ability to handle structured and unstructured data, along with vector data, provides more versatility compared to vector databases, which are primarily optimized for vector data storage and retrieval.

    7. Ease of Querying and Relevance Determination: The integration with Azure OpenAI allows easy querying of the REST service to obtain vector representations of text, which can then be used to calculate similarity against stored vectors, identifying the most relevant data.

    8. Simplified Deployment and Management: Deploying and managing an embedding model via the Azure portal is straightforward, reducing the complexity of managing vector databases.

    9. Suitability for a Range of Applications: While vector databases are specialized for high-dimensional similarity searches, Azure SQL’s broader application scope makes it suitable for various types of data and applications, from financial records to customer data.

    10. Support for Advanced Azure Features: Azure SQL supports advanced Azure features, such as AI and machine learning capabilities, which can be seamlessly integrated with vector similarity operations for enhanced analytics and insights.

    These advantages highlight the flexibility, efficiency, and ease of use of Azure SQL databases for vector similarity operations, making them a preferable choice in scenarios where diverse data types and complex operations are involved, alongside the need for seamless integration with other Azure services.

    Implementing Vectors in Azure SQL Database

    Azure SQL Database provides a seamless way to store and manage vector data despite not having a specific vector data type. Using columnstore indexes, vectors, which are essentially lists of numbers, can be stored efficiently in a table, typically with one row per vector element or, alternatively, as a serialized array. This approach ensures efficient storage and retrieval, making Azure SQL suitable for large-scale vector data management.

    I used the Global News Dataset from Kaggle in my experiment.

    First, you must create the columns to save the vector information. In my case, I created two columns: title_vector for the news title and content_vector for the news content. For this, I wrote a small Python script, but you can also do it directly from SQL using a cursor. It's worth noting that by saving the vector information inside Azure SQL, you don't need to pay for a separate vector database.

    Python
    import openai
    import pyodbc  # or another SQL connection library
    
    # Set up OpenAI credentials
    openai.api_type = "azure"
    openai.api_key = "<YOUR AZURE OPEN AI KEY>"
    openai.api_base = "https://<your-app-name>.openai.azure.com/"
    openai.api_version = "2023-07-01-preview"
    
    # Connect to your Azure SQL database
    conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                          'SERVER=<SERVER>;DATABASE=<DATABASE>;'
                          'UID=<USER>;PWD=<PASSWORD>')
    
    def get_embeddings(text):
        # Truncate the text to 8000 characters
        truncated_text = text[:8000]
    
        response = openai.Embedding.create(input=truncated_text, engine="embeddings")
        embeddings = response['data'][0]['embedding']
        return embeddings
    
    def update_database(article_id, title_vector, content_vector):
        cursor = conn.cursor()
    
        # Convert vectors to strings
        title_vector_str = str(title_vector)
        content_vector_str = str(content_vector)
    
        # Update the SQL query to use the string representations
        cursor.execute("""
            UPDATE news
            SET title_vector = ?, content_vector = ?
            WHERE article_id = ?
        """, (title_vector_str, content_vector_str, article_id))
        conn.commit()
    
    def embed_and_update():
        cursor = conn.cursor()
        cursor.execute("SELECT article_id, title, full_content FROM news where title_vector is null and full_content is not null and title is not null order by published desc")
        
        title_vector = ""
        content_vector = ""
        
        for row in cursor.fetchall():
            article_id, title, full_content = row
            
            print(f"Embedding article {article_id} - {title}")
            
            title_vector = get_embeddings(title)
            content_vector = get_embeddings(full_content)
            
            update_database(article_id, title_vector, content_vector)
    
    embed_and_update()

    These two columns will contain something like this: [-0.02232750505208969, -0.03755787014961243, -0.0066827102564275265…]

    Second, you must extract these vectors line by line into a new table for each vector field. In this case, I created a procedure to do it.

    SQL
    -- =============================================
    -- Author:      Lawrence Teixeira
    -- Create Date: 11-24-2023
    -- Description: This stored procedure creates vectors for news titles and contents.
    -- It processes data from the 'news' table and stores the vectors in separate tables.
    -- =============================================
    
    CREATE PROCEDURE [dbo].[Create_News_Vector]
    AS
    BEGIN
        SET NOCOUNT ON; -- Prevents the sending of DONE_IN_PROC messages to the client.
    
        -- Extract and store title vectors:
        -- First, check if the 'news_titles_vector' table exists and drop it if it does.
        IF OBJECT_ID('dbo.news_titles_vector', 'U') IS NOT NULL
            DROP TABLE dbo.news_titles_vector;
    
        -- Using a Common Table Expression (CTE) to process title vectors.
        WITH cte AS
        (
            SELECT 
                v.article_id,    
                CAST(tv.[key] AS INT) AS vector_value_id, -- Casting 'key' as INT for vector ID.
                CAST(tv.[value] AS FLOAT) AS vector_value   -- Casting 'value' as FLOAT for vector value.
            FROM 
                dbo.news AS v 
            CROSS APPLY 
                OPENJSON(v.title_vector) tv -- Parses JSON of title_vector in the 'news' table.
        )
        -- Create 'news_titles_vector' table with processed vectors.
        SELECT
            article_id,
            vector_value_id,
            vector_value
        INTO
            dbo.news_titles_vector
        FROM
            cte;
    
        -- Extract and store content vectors:
        -- Check and drop 'news_contents_vector' table if it exists.
        IF OBJECT_ID('dbo.news_contents_vector', 'U') IS NOT NULL
            DROP TABLE dbo.news_contents_vector;
    
        -- CTE for processing content vectors.
        WITH cte AS
        (
            SELECT 
                v.article_id,    
                CAST(tv.[key] AS INT) AS vector_value_id, -- Casting 'key' as INT for vector ID.
                CAST(tv.[value] AS FLOAT) AS vector_value   -- Casting 'value' as FLOAT for vector value.
            FROM 
                dbo.news AS v 
            CROSS APPLY 
                OPENJSON(v.content_vector) tv -- Parses JSON of content_vector in the 'news' table.
        )
        -- Create 'news_contents_vector' table with processed vectors.
        SELECT
            article_id,
            vector_value_id,
            vector_value
        INTO
            dbo.news_contents_vector
        FROM
            cte;
    
        -- Columnstore indexes creation is advised outside the stored procedure.
        -- These indexes optimize data storage and query performance on vector tables.
        CREATE CLUSTERED COLUMNSTORE INDEX cci_news_titles_vector
        ON dbo.news_titles_vector 
    	order (article_id);
    
        CREATE CLUSTERED COLUMNSTORE INDEX cci_news_contents_vector
        ON dbo.news_contents_vector
    	order (article_id);
    END

    I also created a function to search the dataset directly using the cosine similarity formula.

    SQL
    -- This Azure SQL function finds news articles similar to the given content vector.
    CREATE function [dbo].[SimilarNewsContentArticles](
        @vector nvarchar(max) -- Input parameter: JSON string representing a content vector.
    )
    returns table -- The function returns a table.
    as
    return with 
    
    -- CTE for processing the input vector.
    cteVector as
    (
        select 
            cast([key] as int) as [vector_value_id], -- Extracts and casts the 'key' from JSON to int.
            cast([value] as float) as [vector_value] -- Extracts and casts the 'value' from JSON to float.
        from 
            openjson(@vector) -- Parses the input JSON vector.
    ),
    
    -- CTE for calculating similarity scores with existing articles.
    cteSimilar as
    (
        select top (50)
            v2.article_id, 
            sum(v1.[vector_value] * v2.[vector_value]) as cosine_distance 
            -- Calculates cosine similarity (distance) between vectors.
        from 
            cteVector v1 -- Uses the processed input vector.
        inner join 
            dbo.news_contents_vector v2 
            on v1.vector_value_id = v2.vector_value_id -- Joins with stored article vectors.
        group by
            v2.article_id
        order by
            cosine_distance desc -- Orders by similarity score, descending.
    )
    
    -- Final selection combining article data with similarity scores.
    select 
        a.*, -- Selects all columns from the news article.
        r.cosine_distance -- Includes the calculated similarity score.
    from 
        cteSimilar r -- Uses the similarity scores CTE.
    inner join 
        dbo.news a on r.article_id = a.article_id -- Joins with the news articles table.
    GO

    Finally, you can start querying your table using prompts instead of keywords. This is awesome!
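
    As an illustration of that last step, the sketch below reuses the conn connection and the get_embeddings() helper from the earlier Python script: it embeds a natural-language prompt and passes the vector, serialized as a JSON-style string, to the SimilarNewsContentArticles function. The column list is just an example; the function returns every column of the news table plus cosine_distance.

    Python
    # Embed the prompt with the same helper used to embed the articles
    prompt = "Articles about electric vehicles and battery technology"
    prompt_vector = get_embeddings(prompt)  # a list of floats

    cursor = conn.cursor()
    cursor.execute(
        "SELECT TOP (10) title, cosine_distance "
        "FROM dbo.SimilarNewsContentArticles(?) "
        "ORDER BY cosine_distance DESC",
        (str(prompt_vector),),  # str() yields a JSON-style array that OPENJSON can parse
    )
    for title, score in cursor.fetchall():
        print(f"{score:.4f}  {title}")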

    Check out the app I developed, which is designed to assist you in crafting prompts and evaluating your performance using my news dataset. To explore the app, click here.

    I also created this GitHub repository with everything I did.

    Click on this image to open the GitHub repository.

    Conclusion

    While the sample in this blog is not optimized for maximum efficiency, it is an excellent starting point for understanding and implementing vector operations in Azure SQL. The process, despite its simplicity, is relatively fast. For instance, querying an eight vCore Azure SQL database can return the fifty most similar articles in just half a second, demonstrating the efficiency and utility of vector operations in Azure SQL Database for data analytics and insights. If you want to know more about this topic, don’t hesitate to get in touch with me.

    That’s it for today!

    Sources:

    Vector Similarity Search with Azure SQL database and OpenAI – Azure SQL Devs’ Corner (microsoft.com)

    Vector Similarity Search with Azure SQL database and OpenAI | by Davide Mauri | Microsoft Azure | Medium

    Azure-Samples/azure-sql-db-openai: Samples on how to use Azure SQL database with Azure OpenAI (github.com)

    MemGPT: Unlimited Memory without Token Constraints for Generative AI Platforms, like GPT-4, LaMDA, PaLM, LLAMA, CLAUDE, and others

    The field of conversational AI has witnessed a substantial transformation with the emergence of large language models (LLMs) such as GPT-4, LaMDA, PaLM, LLAMA, CLAUDE, and others. These sophisticated models, founded on transformer architectures, have redefined the possibilities of natural language processing, paving the way for a myriad of applications across both consumer and enterprise sectors. However, despite this leap forward, LLMs are still bound by a significant limitation—their context window size. This bottleneck restricts their ability to manage extended dialogues and analyze lengthy documents efficiently. But what if there was a way to circumvent this limitation?

    What is MemGPT?

    MemGPT, standing for Memory-GPT, is a system devised to enhance the performance of Large Language Models (LLMs) by introducing a more advanced memory management scheme, helping to overcome the challenges posed by fixed context windows. Below are some of the key features of MemGPT:

    1. Memory Management: MemGPT incorporates a tiered memory system into a fixed-context LLM processor, granting it the ability to manage its own memory. By intelligently handling different memory tiers, it extends the context available within the limited context window of the LLM, addressing the issue of constrained context windows common in large language models.
    2. Virtual Context Management: MemGPT introduces a method known as virtual context management. This is a key feature that assists in managing the context windows of LLMs.
    3. Operating System-Inspired: The architecture of MemGPT draws inspiration from traditional operating systems, especially their hierarchical memory systems that facilitate data movement between fast and slow memory. This approach enables effective memory resource management, similar to how operating systems provide the illusion of large memory resources to applications through virtual memory paging.
    4. Interruption Handling: MemGPT employs interrupts to manage the control flow between itself and the user, ensuring smooth interaction and effective memory management during operations.
    5. Extended Conversational Context: Through effective memory management, MemGPT facilitates extended conversational context, allowing for longer and more coherent interactions that surpass the limitations imposed by fixed-length context windows.

    In essence, MemGPT represents a significant step forward in the utilization of Large Language Models, creating a pathway for more effective and extended interactions that resemble human discourse by smartly managing memory resources.

    For more information, you can access the official website here.

    How does MemGPT Work?

    MemGPT gives LLMs a feedback loop between user events, searching virtual context, and performing a function (source)

    Imagine your computer’s OS, which deftly manages applications and data across RAM and disk storage, providing seamless access to resources beyond the physical memory limits. MemGPT mirrors this concept by managing different memory tiers within an LLM. It includes:

    1. Main Context: Analogous to RAM, this is the immediate context the LLM processor works with during inference.
    2. External Context: Similar to a hard drive, this stores information beyond the LLM’s direct reach but can be accessed when needed.
    3. Interrupts: Like an OS interrupt, MemGPT can pause and resume the processor, managing the control flow with the user.

    This architecture allows for dynamic context management, enabling the LLM to retrieve relevant historical data akin to how an OS handles page faults.
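
    To make the analogy concrete, here is a small, hypothetical Python sketch of the idea. It is not MemGPT’s actual implementation: the class name, the word-count token estimate, and the keyword-overlap retrieval are simplifications chosen only to show how a main context, an external context, and paging between them could fit together.

    Python
    # Toy illustration of the main-context / external-context idea (not the MemGPT API).
    from collections import deque

    class TieredMemory:
        def __init__(self, main_capacity_tokens=4000):
            self.main_capacity = main_capacity_tokens  # "RAM": what fits in the prompt
            self.main_context = deque()                # recent messages kept verbatim
            self.external_context = []                 # "disk": evicted, searchable messages

        def _tokens(self, text):
            return len(text.split())                   # crude stand-in for a real tokenizer

        def add(self, message):
            self.main_context.append(message)
            # "Memory pressure": evict the oldest messages into external storage.
            while sum(self._tokens(m) for m in self.main_context) > self.main_capacity:
                self.external_context.append(self.main_context.popleft())

        def page_in(self, query, k=3):
            # Analogous to a page fault: pull back the most relevant evicted messages.
            query_words = set(query.lower().split())
            scored = sorted(
                self.external_context,
                key=lambda m: len(query_words & set(m.lower().split())),
                reverse=True,
            )
            return scored[:k]

        def build_prompt(self, query):
            # The prompt sent to the LLM combines recalled and recent context.
            return "\n".join(self.page_in(query) + list(self.main_context) + [query])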

    What problem does MemGPT solve?

    MemGPT addresses several challenges associated with language modeling, particularly enhancing the capabilities of existing large language models (LLMs) like GPT-3. Here are the key problems it resolves:

    1. Long-term Context Retention:
      MemGPT introduces solutions for managing long-term context, a significant hurdle in advancing language modeling. By effectively managing memory, it can retain and access information over extended sequences, which is crucial for understanding and generating coherent responses in conversations or documents with many interactions or long texts.
    2. Enhanced Memory Management:
      It employs a tiered memory system, data transfer functions, and control via interrupts to manage memory efficiently. This setup enhances fixed-context LLMs, allowing them to handle tasks like document analysis and multi-session chat more effectively, overcoming the inherent context limitations in modern LLMs for better performance and user interactions.
    3. Extended Context Window:
      MemGPT effectively extends the context window of LLMs, enabling them to manage different memory tiers intelligently. This extended context is crucial for LLMs to have a more in-depth understanding and generate more coherent and contextually relevant responses over a series of interactions.
    4. Improved Interaction with Chatbots:
      By utilizing a memory hierarchy, MemGPT allows chatbots to access and modify information beyond their limited context window, facilitating more meaningful and prolonged interactions with users. This memory hierarchy enables the chatbot to move data between different layers of memory, ensuring relevant information is readily accessible when needed.

    Through these solutions, MemGPT significantly bridges the gap between memory management and generative capacity in language modeling, paving the way for more sophisticated applications in various domains.

    Comparing context lengths of commonly used models / APIs (data collected 9/2023).

    *Assuming a preprompt of 1k tokens, and an average message size of ∼50 tokens (∼250 characters).

    How to install MemGPT

    PowerShell
    pip install pymemgpt

    Add your OpenAI API key to your environment:

    PowerShell
    export OPENAI_API_KEY=YOUR_API_KEY # on Linux/Mac
    set OPENAI_API_KEY=YOUR_API_KEY # on Windows
    $Env:OPENAI_API_KEY = "YOUR_API_KEY" # on Windows (PowerShell)

    Configure the default settings for MemGPT by running:

    PowerShell
    memgpt configure

    Now, you can run MemGPT with:

    PowerShell
    memgpt run

    The run command supports the following optional flags (if set, will override config defaults):

    • --agent: (str) Name of agent to create or to resume chatting with.
    • --human: (str) Name of the human to run the agent with.
    • --persona: (str) Name of agent persona to use.
    • --model: (str) LLM model to run [gpt-4, gpt-3.5].
    • --preset: (str) MemGPT preset to run agent with.
    • --first: (str) Allow user to send the first message.
    • --debug: (bool) Show debug logs (default=False)
    • --no-verify: (bool) Bypass message verification (default=False)
    • --yes/-y: (bool) Skip confirmation prompt and use defaults (default=False)

    You can run the following commands in the MemGPT CLI prompt:

    • /exit: Exit the CLI
    • /attach: Attach a loaded data source to the agent
    • /save: Save a checkpoint of the current agent/conversation state
    • /dump: View the current message log (see the contents of main context)
    • /memory: Print the current contents of agent memory
    • /pop: Undo the last message in the conversation
    • /heartbeat: Send a heartbeat system message to the agent
    • /memorywarning: Send a memory warning system message to the agent

    You can find more information on the official GitHub website.

    MemGPT for OpenAI Setup

    Matthew Berman has produced a great review of the original MemGPT research paper and an initial setup guide for OpenAI API users.

    Note that in the video tutorial Matthew sets up a Conda environment, but this isn’t strictly necessary; it can also be done with a standard .venv environment.

    MemGPT and Open Source Models Setup

    In this video, Matthew Berman covers a quick setup for using MemGPT with open-source models like LLaMA, Airoboros, and Mistral via Runpod. Although this may sound complicated, it’s really not too difficult, and it offers great potential cost savings versus using OpenAI.

    Note open-source model support is still in early-stage development.

    MemGPT and Autogen Setup

    AutoGen is a tool that helps create LLM applications in which multiple agents can talk to each other to complete tasks, such as brainstorming a business proposal. These AutoGen agents can be tailored, they can chat, and they easily let humans join the conversation. In this tutorial, Matthew Berman explains how to expand the memory of these AI agents by combining AutoGen with MemGPT.

    AutoGEN and MemGPT and Local LLM Complete Tutorial

    Created by Prompt Engineer, this 30-minute video covers in great detail all the steps required to get this combination of solutions live with Runpod. As Prompt Engineer explains, the tutorial took quite a long time to produce, as it required a number of test-and-learn steps. So far, this is one of the most comprehensive tutorials available.

    Summary: 00:11 🚀 The video demonstrates how to connect MemGPT, AutoGEN, and local Large Language Models (LLMs) using Runpods.

    01:32 🤖 You can integrate MemGPT and AutoGEN to work together, with MemGPT serving as an assistant agent alongside local LLMs.

    03:46 📚 To get started, install Python, VS Code, and create a Runpods account with credits. You can use Runpods for running local LLMs.

    06:43 🛠️ Set up a virtual environment, create a Python file, and activate the environment for your project.

    08:52 📦 Install necessary libraries like OpenAI, PyAutoGEN, and PyMemGPT to work with AutoGEN and MemGPT.

    16:21 ⚙️ Use Runpods to deploy local LLMs, select the hardware configuration, and create API endpoints for integration with AutoGEN and MemGPT.

    20:29 🔄 Modify the code to switch between using AutoGEN and MemGPT agents based on a flag, allowing you to harness the power of both.

    23:31 🤝 Connect AutoGEN and MemGPT by configuring the API endpoints with the local LLMs from Runpods, enabling them to work seamlessly together.

    Follow the example Python code:

    requirements.txt

    Plaintext
    pyautogen
    pymemgpt

    app.py

    Python
    ## pip install pyautogen pymemgpt
    
    import os
    import autogen
    import memgpt.autogen.memgpt_agent as memgpt_autogen
    import memgpt.autogen.interface as autogen_interface
    import memgpt.agent as agent       
    import memgpt.system as system
    import memgpt.utils as utils 
    import memgpt.presets as presets
    import memgpt.constants as constants 
    import memgpt.personas.personas as personas
    import memgpt.humans.humans as humans
    from memgpt.persistence_manager import InMemoryStateManager, InMemoryStateManagerWithPreloadedArchivalMemory, InMemoryStateManagerWithEmbeddings, InMemoryStateManagerWithFaiss
    import openai
    
    config_list = [
        {
            "api_type": "open_ai",
            "api_base": "https://ekisktiz8hegao-5001.proxy.runpod.net/v1",
            "api_key": "NULL",
        },
    ]
    
    llm_config = {"config_list": config_list, "seed": 42}
    
    # If USE_MEMGPT is False, then this example will be the same as the official AutoGen repo
    # (https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat.ipynb)
    # If USE_MEMGPT is True, then we swap out the "coder" agent with a MemGPT agent
    
    USE_MEMGPT = True
    
    ## api keys for the memGPT
    openai.api_base="https://ekisktiz8hegao-5001.proxy.runpod.net/v1"
    openai.api_key="NULL"
    
    
    # The user agent
    user_proxy = autogen.UserProxyAgent(
        name="User_proxy",
        system_message="A human admin.",
        code_execution_config={"last_n_messages": 2, "work_dir": "groupchat"},
        human_input_mode="TERMINATE",  # needed?
        default_auto_reply="You are going to figure all out by your own. "
        "Work by yourself, the user won't reply until you output `TERMINATE` to end the conversation.",
    )
    
    
    interface = autogen_interface.AutoGenInterface()
    persistence_manager=InMemoryStateManager()
    persona = "I am a 10x engineer, trained in Python. I was the first engineer at Uber."
    human = "Im a team manager at this company"
    memgpt_agent=presets.use_preset(presets.DEFAULT_PRESET, model='gpt-4', persona=persona, human=human, interface=interface, persistence_manager=persistence_manager, agent_config=llm_config)
    
    
    if not USE_MEMGPT:
        # In the AutoGen example, we create an AssistantAgent to play the role of the coder
        coder = autogen.AssistantAgent(
            name="Coder",
            llm_config=llm_config,
            system_message=f"I am a 10x engineer, trained in Python. I was the first engineer at Uber",
            human_input_mode="TERMINATE",
        )
    
    else:
        # In our example, we swap this AutoGen agent with a MemGPT agent
        # This MemGPT agent will have all the benefits of MemGPT, ie persistent memory, etc.
        print("\nMemGPT Agent at work\n")
        coder = memgpt_autogen.MemGPTAgent(
            name="MemGPT_coder",
            agent=memgpt_agent,
        )
    
    
    # Begin the group chat with a message from the user
    user_proxy.initiate_chat(
        coder,
        message="Write a Function to print Numbers 1 to 10"
        )

    Interview with MemGPT Co-Creator Charles Packer

    For more information on the creators of MemGPT, also consider watching this video interview with one of its co-creators, UC Berkeley PhD student Charles Packer.

    MemGPT as an Operating System

    MemGPT draws inspiration from the virtual memory concept in operating systems and is innovatively applied to large language models to create an expansive context space. This innovation shines in scenarios like continuous conversations where traditional limitations on context length pose a challenge. By enabling large language models to handle their memory, MemGPT circumvents the usual restrictions set by fixed context lengths.

    Limitations of MemGPT

    First, it’s essential to be aware that MemGPT is an emerging project currently undergoing enhancements. The creators have established a Discord group to foster idea-sharing and enable direct interaction with them. You are welcome to join at https://discord.gg/9GEQrxmVyE

    Data Sensitivity: MemGPT’s reliance on previous interactions for context can raise concerns regarding data privacy and sensitivity, especially in scenarios involving personal or confidential information.

    Contextual Misinterpretations: While adept at handling extended conversations, MemGPT can occasionally misinterpret context, especially in nuanced or emotionally charged communications, leading to responses that may seem out of touch.

    Resource Intensity: The system demands significant computational resources for optimal functionality, particularly for processing large volumes of data or maintaining extensive conversation histories.

    Dependency on Quality Training Data: MemGPT’s effectiveness is closely tied to the quality of training data. Biased, inaccurate, or incomplete data can hinder the learning process, affecting the quality of interactions.

    Adaptation to Diverse Discourses: The system’s ability to adapt to varying communication styles or understand different dialects and cultural nuances is still a work in progress, occasionally affecting its versatility in global or multicultural scenarios.

    MemGPT vs Sparse Priming Representations (SPR)

    MemGPT:

    • Inspiration: Takes cues from hierarchical memory systems used in traditional operating systems.
    • Functionality: Implements a tiered memory system that allows an LLM to extend its context window by managing which information is stored or retrieved, and when this should happen.
    • Structure: Comprises a Main Context (analogous to an OS’s main memory) and an External Context (similar to secondary storage).
    • Utility: Aims to revolutionize LLMs’ capabilities in tasks that involve unbounded context, such as long-form conversations and detailed document analysis.

    Sparse Priming Representations (SPR):

    • Inspiration: Modeled after human memory organization and retrieval systems, focusing on critical information.
    • Functionality: Enhances memory system efficiency by creating concise primers that represent complex ideas, supporting accuracy in understanding and recall (see the sketch after this list).
    • Approach: Prioritizes intuitive and user-friendly memory management, akin to how humans naturally process and store information.
    • Utility: Focused on making LLMs more efficient in knowledge retrieval and learning, improving user engagement and communication tools.
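
    As a rough illustration of the SPR idea, the hypothetical sketch below asks an LLM to distill a long text into a short primer once, then reuses that primer to ground later questions. The prompt wording and model name are assumptions, not official SPR templates.

    Python
    # Hypothetical SPR-style flow: distill once, then prime later prompts with the result.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def create_primer(long_text):
        # Compress the text into a few dense, standalone statements.
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Distill the user's text into a short list of dense, "
                            "standalone statements that let an LLM reconstruct the key ideas."},
                {"role": "user", "content": long_text},
            ],
        )
        return resp.choices[0].message.content

    def answer_with_primer(primer, question):
        # Later calls carry only the compact primer, not the original long text.
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "Background primer:\n" + primer},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content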

    Technical Implementation:

    MemGPT:

    • Utilizes a structured approach for memory tier management, allowing for effective data movement and context management.
    • Tailored for scalability in dealing with large datasets and complex, extended tasks.

    SPR:

    • Uses a method of creating primers that act as a distillation of complex information, allowing for a more intuitive memory management experience.
    • Geared towards mimicking human cognitive processes for better learning and communication outcomes.

    Applications and Implications:

    MemGPT:

    • May greatly benefit applications that require processing of large amounts of data over extended periods, like in-depth analysis and ongoing interactions.

    SPR:

    • Could significantly enhance tools for learning and communication by providing users with easy-to-understand summaries or primers of complex topics.

    Community and Engagement:

    MemGPT:

    • Offers an open-source platform for developers and researchers to contribute to and enhance the capabilities of the memory management system.

    SPR:

    • Encourages community involvement through contributions of new examples, research, and tools to improve the system’s efficiency and intuitiveness.

    In conclusion, both MemGPT and SPR are innovative responses to the challenges of memory management in LLMs, each with its own philosophy and methodology. MemGPT is more structural and system-oriented, potentially better suited to tasks that need management of extensive contexts. SPR is more user-centric and intuitive, possibly better suited to learning and communication by simplifying complex information.

    While both aim to enhance LLMs’ handling of context, their underlying philosophies and expected applications differ, reflecting the diversity of approaches in advancing AI and ML capabilities. The ongoing developments and community contributions in both these areas show a vibrant and collaborative effort to push the boundaries of what’s possible with memory management in LLMs.

    Conclusion

    MemGPT stands as a testament to the power of innovation in AI, bridging the gap between what LLMs can do and what we aspire for them to achieve. As we march towards the future, the vision of LLMs as comprehensive operating systems doesn’t seem far-fetched—it’s nearly within our grasp, and MemGPT is leading the charge. What do you think?

    That’s it for today!

    Sources

    cpacker/MemGPT: Teaching LLMs memory management for unbounded context 📚🦙 (github.com)

    MemGPT: Overcoming Context Limitations for ChatGPT and Other LLMs for Document Chats & More (superthread.com)

    MemGPT

    2310.08560.pdf (arxiv.org)

    What is MemGPT AI and MemGPT Installation Tutorial 2023 (dragganaitool.com)

    Haly AI