Session Explorer in Action: Personalizing Conference Experiences with AI Using Vector Support in Azure SQL, LangChain, and Streamlit

Conferences can be overwhelming, with countless sessions, various speakers, and a wealth of information to digest. What if we could simplify this experience using AI, turning data into a personalized assistant? Enter Session Explorer, a cutting-edge solution built using Vector Support in Azure SQL, LangChain, and Streamlit. This AI-powered application transforms the way attendees interact with conference data, making it intuitive, efficient, and engaging.

The Inspiration Behind Session Explorer

Conferences like the PASS Data Community Summit bring together diverse topics, speakers, and attendees. While the variety is exciting, navigating this wealth of information can be daunting. The Session Explorer was designed to:

  • Help attendees quickly find relevant sessions.
  • Provide insights about specific speakers or topics.
  • Create a seamless, user-friendly interface powered by AI.

By leveraging Vector Support in Azure SQL for intelligent data retrieval, LangChain for conversational AI, and Streamlit for a dynamic user interface, the app makes conference exploration smarter and simpler.

Key Technologies Powering Session Explorer

1. Vector Support in Azure SQL

Azure SQL’s vector capabilities enable efficient semantic searches by transforming text into embeddings. These embeddings are compared using the vector_distance function, allowing the system to find similar sessions based on user queries.

Here’s how it works:

SQL
DECLARE @qv vector(1536);
EXEC web.get_embedding 'Data-driven insights', @qv OUTPUT;

SELECT TOP(5) 
    se.id AS session_id, 
    vector_distance('cosine', se.embeddings, @qv) AS distance
FROM 
    web.sessions se
ORDER BY
    distance;

This query takes a user-provided text, converts it to an embedding using OpenAI, and retrieves the most relevant sessions.
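Under the hood, the 'cosine' metric used by vector_distance is simply 1 minus the cosine similarity of the two embeddings. Here is a minimal, stdlib-only Python sketch of the same calculation, using tiny made-up 3-dimensional vectors as stand-ins for the real 1536-dimensional embeddings:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity: the metric vector_distance('cosine', ...) ranks by."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

query = [0.9, 0.1, 0.0]      # toy stand-ins for real embeddings
session_a = [0.8, 0.2, 0.0]  # semantically close to the query
session_b = [0.0, 0.1, 0.9]  # unrelated

# The closer the meaning, the smaller the distance, which is why the
# SQL query above does ORDER BY distance: best matches come first.
print(cosine_distance(query, session_a) < cosine_distance(query, session_b))  # True
```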

If you want to know more about Vector Support in Azure SQL, check out my post below.

2. LangChain

LangChain integrates the retrieval-augmented generation (RAG) approach to provide additional context to the language model. Using LCEL (LangChain Expression Language), it dynamically injects session data into prompts for personalized responses.

Python
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "ai",
            """ 
            You are a system assistant who helps users find conference sessions worth watching, based on the sessions that are provided to you.
            Sessions will be provided in an assistant message in the format of `title|abstract|speakers|start-time|end-time|room|track|session_type|topics|level|Session URL`. You can use only the provided session list to answer the user's question.
            If the user asks about a speaker, respond with the sessions that the speaker is participating in.
            If the user asks a question that is not related to the provided sessions, respond that you are unable to assist because the requested information is not available in the database.
            Your answer must include the session title, a very short summary of the abstract, the speakers, the start time, the end time, the room, the track, the session type, the topics, and the level. At the end, include the session URL so it can be opened in a new window.
            """,
        ),
        (
            "human", """
            The sessions available at the conference are the following: 
            {sessions}                
            """
        ),
        (
            "human", 
            "{question}"
        ),
    ]
)

retriever = RunnableLambda(get_similar_sessions).bind() 

rag_chain = {"sessions": retriever, "question": RunnablePassthrough()} | prompt | llm

3. Streamlit

Streamlit creates an intuitive web interface, making the AI capabilities accessible to users. With a few lines of code, we crafted an interactive app where users can ask questions and receive detailed session recommendations.

Python
import streamlit as st
import os
import sys
from dotenv import load_dotenv
import logging
from utilities import get_similar_sessions
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import Runnable
from langchain.schema.runnable.config import RunnableConfig
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables import RunnablePassthrough
import getpass

logging.basicConfig(level=logging.INFO)

# Add the parent directory to the module search path
sys.path.append(os.path.abspath('..'))

# Load environment variables from the .env file
load_dotenv()

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")

MODELO = ""

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "ai",
            """ 
            You are a system assistant who helps users find conference sessions worth watching, based on the sessions that are provided to you.
            Sessions will be provided in an assistant message in the format of `title|abstract|speakers|start-time|end-time|room|track|session_type|topics|level|Session URL`. You can use only the provided session list to answer the user's question.
            If the user asks about a speaker, respond with the sessions that the speaker is participating in.
            If the user asks a question that is not related to the provided sessions, respond that you are unable to assist because the requested information is not available in the database.
            Your answer must include the session title, a very short summary of the abstract, the speakers, the start time, the end time, the room, the track, the session type, the topics, and the level. At the end, include the session URL so it can be opened in a new window.
            """,
        ),
        (
            "human", """
            The sessions available at the conference are the following: 
            {sessions}                
            """
        ),
        (
            "human", 
            "{question}"
        ),
    ]
)

def configure_app():
    """Configure the appearance and layout of the Streamlit app."""
    st.set_page_config(
        page_title="Chat",
        page_icon="💬",
        layout="wide",
        initial_sidebar_state="expanded",
    )
    st.header('Session Explorer: Your Guide to the PASS Data Community Summit 2024')

    st.write("""Ask something like 'What are the sessions about Azure SQL and AI?' or 'What are the sessions by Davide Mauri?'""")
        
def sidebar_inputs():
    """Display the sidebar with the model selector and app information."""
    with st.sidebar:        
        
        st.image("https://passdatacommunitysummit.com/assets/images/pass-2024-logo-lock-up--dark--with-icon.svg")   
        
        # Model selection to define which OpenAI model to use
        modelo = st.selectbox("Select the model:", ('gpt-4o-mini', 'gpt-4o'))
                         
        ""

        "You can find more information about this app in my blog post: [Session Explorer in Action: Personalizing Conference Experiences with AI Using Vector Support in Azure SQL, LangChain, and Streamlit](https://lawrence.eti.br/2024/11/20/session-explorer-in-action-personalizing-conference-experiences-with-ai-using-vector-support-in-azure-sql-langchain-and-streamlit/)"
        ""
        "The PASS Data Community Summit is an annual conference designed for data professionals to connect, share insights, and learn from peers and industry leaders. It focuses on a wide range of topics, including analytics, architecture, database management, development, and professional growth, across multiple platforms such as Microsoft, AWS, Google, PostgreSQL, and others."        
        ""
        "Official website: [PASS Data Community Summit](https://passdatacommunitysummit.com/)"
        ""	
        ""
        ""
        "Created by [Lawrence Teixeira](https://www.linkedin.com/in/lawrenceteixeira/)"

    return modelo        
        
def main():    
    """Main application function, where all the other functions are called."""
        
    configure_app()
            
    global MODELO

    modelo = sidebar_inputs()
    
    MODELO = modelo

    llm = ChatOpenAI(
        model=MODELO,
        temperature=0,
        max_tokens=16383,
        timeout=None,
        max_retries=2,
        streaming=False,
    )

    if "messages" not in st.session_state:
        st.session_state["messages"] = [{"role": "assistant", "content": "Hi! 😊 How are you? 💬 Feel free to ask anything about the sessions at PASS Data Community Summit 2024!"}]

    for msg in st.session_state.messages:
        if msg["role"] != "system":
            st.chat_message(msg["role"]).write(msg["content"])

    if prompt_entrada := st.chat_input("Type your message here"):
        st.session_state.messages.append({"role": "user", "content": prompt_entrada})
        st.chat_message("user").write(prompt_entrada)
        
        with st.spinner('Searching...'): 
            
            retriever = RunnableLambda(get_similar_sessions).bind() 

            rag_chain = {"sessions": retriever, "question": RunnablePassthrough()} | prompt | llm

            response_chat = rag_chain.invoke(prompt_entrada)            
            
            response = response_chat.content
        
        if response:
            result = str(response)
        else:
            result = ":)"

        msg = { "role": "assistant",
                "content": result
        }

        st.session_state.messages.append(msg)
                
        st.chat_message("assistant").write(msg["content"])    

if __name__ == "__main__":
    main()

How Session Explorer Works

Step 1: User Query

Users input a question, such as:
“Which sessions are led by John Doe?” or “What sessions discuss AI in data management?”

Step 2: Intelligent Retrieval

The app uses the get_similar_sessions function, querying Azure SQL with the user’s input. The SQL stored procedure returns a list of sessions ranked by relevance.

Python
results = cursor.execute("EXEC web.find_sessions @text=?", (search_text,)).fetchall()
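The full body of get_similar_sessions isn't shown here, but after fetching the rows it must flatten them into the pipe-delimited lines the prompt template expects. A hedged sketch of that formatting step (the helper name and the row layout are assumptions for illustration):

```python
def format_sessions(rows):
    """Join each session row into the `title|abstract|...|Session URL`
    line format the prompt template expects, one session per line.
    In the real app, `rows` would come from
    cursor.execute("EXEC web.find_sessions @text=?", (search_text,)).fetchall().
    """
    return "\n".join("|".join(str(field) for field in row) for row in rows)

# Toy rows standing in for pyodbc result tuples:
rows = [
    ("Advancing Data Science with AI", "A talk on AI.", "Dr. Smith",
     "10:00", "11:30", "Auditorium A", "Data Science", "Breakout",
     "AI", "Intermediate", "https://example.com/session/1"),
]
print(format_sessions(rows))
```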

Step 3: Conversational AI

LangChain takes the retrieved data, formats it into a response template, and interacts with the language model to generate human-readable answers.
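Conceptually, this step is string injection: the retrieved session lines fill the {sessions} slot and the user's input fills {question} before the assembled messages go to the model. A stdlib-only sketch of that templating idea (the template text and names here are illustrative, mirroring what ChatPromptTemplate does internally):

```python
TEMPLATE = (
    "The sessions available at the conference are the following:\n{sessions}\n\n"
    "Question: {question}"
)

def build_prompt(sessions, question):
    """Inject retrieved session data and the user's question into the template,
    analogous to what LangChain's ChatPromptTemplate does for chat messages."""
    return TEMPLATE.format(sessions=sessions, question=question)

prompt_text = build_prompt(
    "Advancing Data Science with AI|short abstract|Dr. Smith|10:00|11:30|"
    "Auditorium A|Data Science|Breakout|AI|Intermediate|https://example.com/session/1",
    "Show me all sessions by Dr. Smith.",
)
print(prompt_text)
```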

Step 4: Seamless Delivery

The personalized session details are displayed on the Streamlit interface, complete with clickable URLs to explore further.

Personalizing the Conference Experience

Here’s an example of a typical interaction:

  • User: “Show me all sessions by Dr. Smith.”
  • Session Explorer:
      • Title: “Advancing Data Science with AI”
      • Speakers: Dr. Smith, Jane Doe
      • Time: 10:00 AM – 11:30 AM
      • Room: Auditorium A
      • Track: Data Science
      • URL: Session Details

This level of personalization not only saves time but also enhances the conference experience by connecting attendees with sessions that truly matter to them.

If you want to try deploying the app yourself, visit my GitHub repository.

Introducing the Session Explorer App for PASS Data Community Summit 2024

The Session Explorer App is your go-to tool for navigating the wealth of knowledge at the PASS Data Community Summit 2024. Built to streamline your conference experience, this app leverages the latest advancements in AI to help you discover sessions, speakers, and topics tailored to your interests.

What Can the App Do?

The app enables you to:

  • Explore Sessions: Ask specific questions about the available sessions, such as “What sessions discuss AI in data management?” or “Show me sessions by Dr. Smith.”
  • Discover Speakers: Find all sessions led by a particular speaker or group of speakers.
  • Plan Your Schedule: Get session details, including title, abstract, speakers, start and end times, room location, track, session type, and difficulty level.
  • Direct Access to Information: Each session includes a clickable link to its detailed page, making it easy to add it to your schedule.

How It Works

Using the app is simple:

  1. Click on this link: https://sessionschat.fly.dev/
  2. Ask a Question: Type your query into the chat interface, like “Which sessions are about Azure SQL?”
  3. Get Intelligent Responses: The app uses AI to understand your question and retrieve the most relevant session information from the database.
  4. View Results: Receive detailed session summaries, including timing, location, and a direct link to learn more.

Why Use the Session Explorer App?

The app transforms the way you navigate the conference, ensuring you never miss a session relevant to your goals. Whether you’re interested in AI, analytics, architecture, or professional development, the Session Explorer helps you focus on what matters most.

Experience the power of AI in personalizing your conference journey with the Session Explorer App. Dive into the sessions that spark your curiosity and make the most of the PASS Data Community Summit 2024!

Final Thoughts

The Session Explorer is more than just a chatbot; it’s a tool that redefines how we interact with conference data. Combining the power of Vector Support in Azure SQL with LangChain and Streamlit delivers a personalized, AI-driven experience that attendees will love.

Whether you’re a data enthusiast, a tech professional, or an AI developer, the Session Explorer showcases the immense potential of AI in enhancing user experiences. Ready to explore it in action? Try it for yourself at the PASS Data Community Summit 2024 app!

That’s it for today!

Sources

https://devblogs.microsoft.com/azure-sql/build-a-chatbot-on-your-own-data-in-1-hour-with-azure-sql-langchain-and-chainlit/

https://github.com/Azure-Samples/azure-sql-db-rag-langchain-chainlit

Open WebUI and Free Chatbot AI: Empowering Corporations with Private Offline AI and LLM Capabilities

Artificial intelligence (AI) is reshaping how corporations function and interact with data in today’s digital landscape. However, with AI comes the challenge of securing corporate information and ensuring data privacy—especially when dealing with Large Language Models (LLMs). Public cloud-based AI services may expose sensitive data to third parties, making corporations wary of deploying models on external servers.

Open WebUI addresses this issue head-on by offering a self-hosted, offline, and highly extensible platform for deploying and interacting with LLMs. Built to run entirely offline, Open WebUI provides corporations with complete control over their AI models, ensuring data security, privacy, and compliance.

What is Open WebUI?

Open WebUI is a versatile, feature-rich, and user-friendly web interface for interacting with Large Language Models (LLMs). Initially launched as Ollama WebUI, Open WebUI is a community-driven, open-source platform enabling businesses, developers, and researchers to deploy, manage, and interact with AI models offline.

Open WebUI is designed to be extensible, supporting multiple LLM runners and integrating with different AI frameworks. Its clean, intuitive interface mimics popular platforms like ChatGPT, making it easy for users to communicate with AI models while maintaining full control over their data. By allowing businesses to self-host the web interface, Open WebUI ensures that no data leaves the corporate environment, which is crucial for organizations concerned with data privacy, security, and regulatory compliance.

Key Features of Open WebUI

1. Self-hosted and Offline Operation

Open WebUI is built to run in a self-hosted environment, ensuring that all data remains within your organization’s infrastructure. This feature is critical for companies handling sensitive information and those in regulated industries where external data transfers are a risk.

2. Extensibility and Model Support

Open WebUI supports various LLM runners, allowing businesses to deploy the language models that best meet their needs. This flexibility enables integration with custom models, including OpenAI-compatible APIs and models such as Ollama, GPT, and others. Users can also seamlessly switch between different models in real time to suit diverse use cases.

3. User-Friendly Interface

Designed to be intuitive and easy to use, Open WebUI features a ChatGPT-style interface that allows users to communicate with language models via a web browser. This makes it ideal for corporate teams who may not have a deep technical background but need to interact with LLMs for business insights, automation, or customer support.

4. Docker-Based Deployment

To ensure ease of setup and management, Open WebUI runs inside a Docker container. This provides an isolated environment, making it easier to deploy and maintain while ensuring compatibility across different systems. With Docker, corporations can manage their AI models and interfaces without disrupting their existing infrastructure.

5. Role-Based Access Control (RBAC)

To maintain security, Open WebUI offers granular user permissions through RBAC. Administrators can control who has access to specific models, tools, and settings, ensuring that only authorized personnel can interact with sensitive AI models.

6. Multi-Model Support

Open WebUI allows for concurrent utilization of multiple models, enabling organizations to harness the unique capabilities of different models in parallel. This is especially useful for businesses requiring a range of AI solutions from simple chat interactions to advanced language processing tasks.

7. Markdown and LaTeX Support

For enriched interaction, Open WebUI includes full support for Markdown and LaTeX, making it easier for users to create structured documents, write reports, and interact with AI using precise formatting and mathematical notation.

8. Retrieval-Augmented Generation (RAG)

Open WebUI integrates RAG technology, which allows users to feed documents into the AI environment and interact with them through chat. This feature enhances document analysis by enabling users to ask specific questions and retrieve document-based answers.

9. Custom Pipelines and Plugin Framework

The platform supports a highly modular plugin framework that allows businesses to create and integrate custom pipelines, tailor-made to their specific AI workflows. This enables the addition of specialized logic, ranging from AI agents to integration with third-party services, directly within the web UI.

10. Real-Time Multi-Language Support

For global organizations, Open WebUI offers multilingual support, enabling interaction with LLMs in various languages. This feature ensures that businesses can deploy AI solutions for different regions, enhancing both internal communication and customer-facing AI tools.

What Can Open WebUI Do?

Open WebUI Community

You can find good examples of models, prompts, tools, and functions at the Open WebUI Community.

Inside Open WebUI, in the Workspaces area, an admin can configure a lot of useful things. The possibilities here are virtually unlimited.

Why Corporations Should Consider Open WebUI

As businesses adopt AI to streamline operations and enhance decision-making, the need for secure, private, and controlled solutions is paramount. Open WebUI offers corporations the following distinct advantages:

1. Data Privacy and Compliance

By allowing organizations to run their AI models offline, Open WebUI ensures that no data leaves the corporate environment. This eliminates the risk of data exposure associated with cloud-based AI services. It also helps businesses stay compliant with data protection regulations such as GDPR, HIPAA, or CCPA.

2. Flexibility and Customization

Open WebUI’s extensibility makes it a highly flexible tool for enterprises. Businesses can integrate custom AI models, adapt the platform to meet unique needs, and deploy models specific to their industry or use case.

3. Cost Savings

For enterprises that require frequent AI model interactions, a self-hosted solution like Open WebUI can result in significant cost savings compared to paying for cloud-based API usage. Over time, this can reduce the operational cost of AI adoption.

4. Improved Control Over AI Systems

With Open WebUI, corporations have complete control over how their AI models are deployed, managed, and utilized. This includes controlling access, managing updates, and ensuring that AI models are used in compliance with corporate policies.

5. Support for Azure OpenAI

Azure OpenAI Service ensures data privacy by not sharing your data with other customers or using it to improve models without your permission. It includes integrated content filtering to protect against harmful inputs and outputs, adheres to strict regulatory standards, and provides enterprise-grade security. Additionally, it features abuse monitoring to maintain safe and responsible AI use, making it a reliable choice for businesses prioritizing safety and privacy.

Installation and Setup

Getting started with Open WebUI is straightforward. Here are the basic steps:

1. Install Docker

Docker is required to deploy Open WebUI. If Docker isn’t already installed, it can be easily set up on your system. Docker provides an isolated environment to run applications, ensuring compatibility and security.

2. Launch Open WebUI

Using Docker, you can pull the Open WebUI image and start a container. The Docker command will depend on whether you are running the language model locally or connecting to a remote server.

Shell
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

3. Create an Admin Account

Once the web UI is running, the first user to sign up will be granted administrator privileges. This account will have comprehensive control over the web interface and the language models.

4. Connect to Language Models

You can configure Open WebUI to connect with various LLMs, including OpenAI or Ollama models. This can be done via the web UI settings, where you can specify API keys or server URLs for remote model access.
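Open WebUI works with OpenAI-compatible chat-completions endpoints, so connecting a model boils down to pointing the UI at a server that accepts a JSON payload like the one below. This is a hedged sketch of the request shape only; the model name and endpoint in the comments are illustrative placeholders:

```python
import json

def build_chat_request(model, user_message):
    """Build a minimal OpenAI-compatible /v1/chat/completions payload.
    The model name is an illustrative placeholder."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0,
    }

payload = build_chat_request("llama3", "Summarize our Q3 report.")
# A client would POST this JSON body to the configured server URL,
# e.g. an Ollama or OpenAI-compatible endpoint set in the web UI settings.
print(json.dumps(payload, indent=2))
```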

There are many ways to deploy Open WebUI, and you can explore them at this link.

Run AI Models Locally: Ollama Tutorial (Step-by-Step Guide + WebUI)

Open WebUI – Tutorial & Windows Install 

Free Chatbot AI: Easy Access to Open WebUI for Corporations

To make Open WebUI even more accessible, I have deployed a version called Free Chatbot AI. This platform serves as an easy-access solution for businesses and users who want to experience the power of Open WebUI without the need for complex setup or hosting infrastructure. Free Chatbot AI offers a user-friendly interface where users can interact with Large Language Models (LLMs) in real time, all while maintaining the key benefits of privacy and control.

Key Benefits of Free Chatbot AI for Corporations:
  1. Instant Access: Free Chatbot AI is pre-configured and hosted, allowing companies to quickly test and use AI models without worrying about setup or technical configurations.
  2. Data Privacy: Like the self-hosted version of Open WebUI, Free Chatbot AI ensures that sensitive information is protected. No data is sent to third-party servers, ensuring that interactions remain private and secure.
  3. Flexible Deployment: While Free Chatbot AI is an accessible hosted version, it also offers corporations the ability to experiment with LLMs before committing to a self-hosted deployment. This is perfect for businesses looking to try out AI capabilities before taking full control of their AI infrastructure.
  4. User-Friendly Interface: Built with a simple and intuitive design, Free Chatbot AI mirrors the same ease of use as Open WebUI. This makes it suitable for teams across the organization, from technical users to non-technical departments like customer support or HR, enhancing workflows with AI-powered insights and automation.
  5. No Setup Required: Free Chatbot AI eliminates the need for complex setup processes. Corporations can access the platform directly and begin leveraging the power of AI for their business operations immediately.
Use Cases for Free Chatbot AI:
  • Internal Team Collaboration: Free Chatbot AI enables teams to quickly interact with LLMs to generate ideas, draft content, or automate repetitive tasks such as writing summaries and answering FAQs.
  • AI-Assisted Customer Support: Businesses can test Free Chatbot AI to power customer support bots that deliver accurate, conversational responses to customer queries, all while maintaining data security.
  • Document Processing and Summarization: Teams can upload documents and let Free Chatbot AI generate summaries, extracting relevant information with ease, improving efficiency in knowledge management and decision-making.
How to access Free Chatbot AI?

First, click on this link and create an account by clicking Sign up.

Fill in the fields below and click Create Account.

After that, select one of the models and have fun!

This is the home page.

You can create images by clicking on Image Gen.

You can type a prompt like “photorealistic image taken with Nikon Z50, 18mm lens, a vast and untouched wilderness, with a winding river flowing through a dense forest, showcasing the pristine beauty of untouched nature, aspect ratio 16:9”.

There are plenty of options to explore. Use Free Chatbot AI to try them all, and good luck!

Conclusion

As AI becomes increasingly integral to business operations, ensuring data privacy and control has never been more important. Open WebUI offers corporations a secure, customizable, and user-friendly platform to deploy and interact with Large Language Models, entirely offline. With its range of features, from role-based access to multi-model support and flexible integrations, Open WebUI is the ideal solution for businesses looking to adopt AI while maintaining full control over their data and processes.

For companies aiming to harness the power of AI while ensuring compliance with industry regulations, Open WebUI is a game-changer, offering the perfect balance between innovation and security.

If you have any doubts about how to implement it in your company you can contact me at this link.

That’s it for today!

Sources

https://docs.openwebui.com

https://medium.com/@omargohan/open-webui-the-llm-web-ui-66f47d530107

https://medium.com/free-or-open-source-software/open-webui-how-to-build-and-run-locally-with-nodejs-8155c51bcb55

https://openwebui.com/#open-webui-community

Integrating Azure OpenAI with Native Vector Support in Azure SQL Databases for Advanced Search Capabilities and Data Insights

Azure SQL Database has taken a significant step forward by introducing native support for vectors, unlocking advanced capabilities for applications that rely on semantic search, AI, and machine learning. By integrating vector search into Azure SQL, developers can now store, search, and analyze vector data directly alongside traditional SQL data, offering a unified solution for complex data analysis and enhanced search experiences.

Vectors in Azure SQL Database

Vectors are numerical representations of objects like text, images, or audio. They are essential for applications involving semantic search, recommendation systems, and more. These vectors are typically generated by machine learning models, capturing the semantic meaning of the data they represent.

The new vector functionality in Azure SQL Database allows you to store and manage these vectors within a familiar SQL environment. This eliminates the need for separate vector databases, streamlining your application architecture and simplifying your data management processes.

Key Benefits of Native Vector Support in Azure SQL

  • Unified Data Management: Store and query both traditional and vector data in a single database, reducing complexity and maintenance overhead.
  • Advanced Search Capabilities: Perform similarity searches alongside standard SQL queries, leveraging Azure SQL’s sophisticated query optimizer and powerful enterprise features.
  • Optimized Performance: Vectors are stored in a compact binary format, allowing for efficient distance calculations and optimized performance on vector-related operations.

Embeddings: The Foundation of Vector Search

At the heart of vector search are embeddings—dense vector representations of objects, generated by deep learning models. These embeddings capture the semantic similarities between related concepts, enabling tasks such as semantic search, natural language processing, and recommendation systems.

For example, word embeddings can cluster related words like “computer,” “software,” and “machine,” while distant clusters might represent words with entirely different meanings, such as “lion,” “cat,” and “dog.” These embeddings are particularly powerful in applications where context and meaning are more important than exact keyword matches.

Azure OpenAI makes it easy to generate embeddings by providing pre-trained machine learning models accessible through REST endpoints. Once generated, these embeddings can be stored directly in an Azure SQL Database, allowing you to perform vector search queries to find similar data points.

You can explore how vector embeddings work by visiting this amazing website: Transformer Explainer. It offers an excellent interactive experience to help you better understand how Generative AI operates in general.

Vector Search Use Cases

Vector search is a powerful technique used to find vectors in a dataset that are similar to a given query vector. This capability is essential in various applications, including:

  • Semantic Search: Rank search results based on their relevance to the user’s query.
  • Recommendation Systems: Suggest related items based on similarity in vector space.
  • Clustering: Group similar items together based on vector similarity.
  • Anomaly Detection: Identify outliers in data by finding vectors that differ significantly from the norm.
  • Classification: Classify items based on the similarity of their vectors to predefined categories.

For instance, consider a semantic search application where a user queries for “healthy breakfast options.” A vector search would compare the vector representation of the query with vectors representing product reviews, finding the most contextually relevant items—even if the exact keywords don’t match.

Key Features of Native Vector Support in Azure SQL

Azure SQL’s native vector support introduces several new functions to operate on vectors, which are stored in a binary format to optimize performance. Here are the key functions:

  • JSON_ARRAY_TO_VECTOR: Converts a JSON array into a vector, enabling you to store embeddings in a compact format.
  • ISVECTOR: Checks whether a binary value is a valid vector, ensuring data integrity.
  • VECTOR_TO_JSON_ARRAY: Converts a binary vector back into a human-readable JSON array, making it easier to work with the data.
  • VECTOR_DISTANCE: Calculates the distance between two vectors using a chosen distance metric, such as cosine or Euclidean distance.

These functions enable powerful operations for creating, storing, and querying vector data in Azure SQL Database.

Example: Vector Search in Action

Let’s walk through an example of using Azure SQL Database to store and query vector embeddings. Imagine you have a table of customer reviews, and you want to find reviews that are contextually related to a user’s search query.

  1. Storing Embeddings as Vectors:
    After generating embeddings using Azure OpenAI, you can store these vectors in a VARBINARY(8000) column in your SQL table:
SQL
   ALTER TABLE [dbo].[FineFoodReviews] ADD [VectorBinary] VARBINARY(8000);
   UPDATE [dbo].[FineFoodReviews]
   SET [VectorBinary] = JSON_ARRAY_TO_VECTOR([vector]);

This allows you to store the embeddings efficiently, ready for vector search operations.

  2. Performing Similarity Searches:
    To find reviews that are similar to a user’s query, you can convert the query into a vector and calculate the cosine distance between the query vector and the stored embeddings:
SQL
   DECLARE @e VARBINARY(8000);
   EXEC dbo.GET_EMBEDDINGS @model = '<yourmodeldeploymentname>', @text = 'healthy breakfast options', @embedding = @e OUTPUT;

   SELECT TOP(10) ProductId,
                  Summary,
                  Text,
                  VECTOR_DISTANCE('cosine', @e, VectorBinary) AS Distance
   FROM dbo.FineFoodReviews
   ORDER BY Distance;

This query returns the top reviews that are contextually related to the user’s search, even if the exact words don’t match.

  3. Hybrid Search with Filters:
    You can enhance vector search by combining it with traditional keyword filters to improve relevance and performance. For example, you could filter reviews based on criteria like user identity, review score, or the presence of specific keywords, and then apply vector search to rank the results by relevance:
SQL
   -- Comprehensive query with multiple filters.
   SELECT TOP(10)
       f.Id,
       f.ProductId,
       f.UserId,
       f.Score,
       f.Summary,
       f.Text,
       VECTOR_DISTANCE('cosine', @e, VectorBinary) AS Distance,
       CASE 
           WHEN LEN(f.Text) > 100 THEN 'Detailed Review'
           ELSE 'Short Review'
       END AS ReviewLength,
       CASE 
           WHEN f.Score >= 4 THEN 'High Score'
           WHEN f.Score BETWEEN 2 AND 3 THEN 'Medium Score'
           ELSE 'Low Score'
       END AS ScoreCategory
   FROM FineFoodReviews f
   WHERE
       f.UserId NOT LIKE 'Anonymous%'  -- Exclude anonymous users
       AND f.Score >= 2               -- Score threshold filter
       AND LEN(f.Text) > 50           -- Text length filter for detailed reviews
       AND (f.Text LIKE '%gluten%' OR f.Text LIKE '%dairy%') -- Keyword filter
   ORDER BY
       Distance,  -- Order by cosine distance
       f.Score DESC, -- Secondary order by review score
       ReviewLength DESC; -- Tertiary order by review length

This query combines semantic search with traditional filters, balancing relevance and computational efficiency.
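Conceptually, the hybrid query applies cheap relational filters first and then ranks only the surviving rows by vector distance. Here is a minimal Python sketch of that flow, with a toy cosine-distance helper; the review data and embeddings are invented for illustration.

```python
import math

def cosine_distance(a, b):
    # 1 - cosine similarity; lower means more semantically similar.
    dot = sum(x * y for x, y in zip(a, b))
    return 1.0 - dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical pre-embedded reviews: (user_id, score, text, embedding)
reviews = [
    ("User1", 5, "Great gluten-free granola, very filling breakfast", [0.9, 0.1]),
    ("Anonymous42", 5, "Love this gluten-free bread", [0.95, 0.05]),  # excluded: anonymous
    ("User2", 1, "Dairy-free but tasted awful", [0.7, 0.3]),          # excluded: low score
    ("User3", 4, "Nice dairy-free yogurt alternative", [0.8, 0.2]),
]

query_embedding = [1.0, 0.0]

# 1) Traditional filters: named users, score threshold, keyword match.
candidates = [r for r in reviews
              if not r[0].startswith("Anonymous")
              and r[1] >= 2
              and ("gluten" in r[2] or "dairy" in r[2])]

# 2) Rank only the survivors by cosine distance to the query.
ranked = sorted(candidates, key=lambda r: cosine_distance(query_embedding, r[3]))
print([r[0] for r in ranked])  # ['User1', 'User3']
```

Filtering first keeps the expensive distance computation off rows that could never qualify, which is exactly why the hybrid approach scales better than a pure vector scan.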

Leveraging REST Services for Embedding Generation

Azure OpenAI provides REST endpoints for generating embeddings, which can be consumed directly from Azure SQL Database using the sp_invoke_external_rest_endpoint system stored procedure. This integration enables seamless interaction between your data and AI models, allowing you to build intelligent applications that combine the power of machine learning with the familiarity of SQL.

Here’s a stored procedure example that retrieves embeddings from a deployed Azure OpenAI model and stores them in the database:

SQL
CREATE PROCEDURE [dbo].[GET_EMBEDDINGS]
(
    @model VARCHAR(MAX),
    @text NVARCHAR(MAX),
    @embedding VARBINARY(8000) OUTPUT
)
AS
BEGIN
    DECLARE @retval INT, @response NVARCHAR(MAX);
    DECLARE @url VARCHAR(MAX);
    DECLARE @payload NVARCHAR(MAX) = JSON_OBJECT('input': @text);

    SET @url = 'https://<resourcename>.openai.azure.com/openai/deployments/' + @model + '/embeddings?api-version=2023-03-15-preview';

    EXEC dbo.sp_invoke_external_rest_endpoint 
        @url = @url,
        @method = 'POST',   
        @payload = @payload,   
        @headers = '{"Content-Type":"application/json", "api-key":"<openAIkey>"}', 
        @response = @response OUTPUT;

    DECLARE @jsonArray NVARCHAR(MAX) = JSON_QUERY(@response, '$.result.data[0].embedding');
    SET @embedding = JSON_ARRAY_TO_VECTOR(@jsonArray);
END
GO

This stored procedure retrieves embeddings from the Azure OpenAI model and converts them into a binary format for storage in the database, making them available for similarity search and other operations.
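The same endpoint can of course be called from any HTTP client, not just from T-SQL. The sketch below only assembles the request that dbo.GET_EMBEDDINGS sends; the resource name, deployment name, and API key are placeholders, and actually posting the request is left out.

```python
import json

def build_embedding_request(resource_name: str, model_deployment: str,
                            text: str, api_key: str):
    # Mirrors the URL, headers, and payload assembled in dbo.GET_EMBEDDINGS.
    url = (f"https://{resource_name}.openai.azure.com/openai/deployments/"
           f"{model_deployment}/embeddings?api-version=2023-03-15-preview")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    payload = json.dumps({"input": text})
    return url, headers, payload

url, headers, payload = build_embedding_request(
    "my-resource", "text-embedding-3-small", "healthy breakfast options", "<openAIkey>")
print(url)
print(payload)
```

Seeing the raw URL and JSON body side by side with the stored procedure makes it clear that sp_invoke_external_rest_endpoint is doing an ordinary authenticated POST on your behalf.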

Let’s implement an experiment with the Native Vector Support in Azure SQL

Azure SQL Database provides a seamless way to store and manage vector data even without a dedicated vector data type. Vectors, which are essentially lists of numbers, can be stored efficiently in a table, either as serialized arrays or with one element per column, and columnstore indexes can speed up retrieval. This approach makes Azure SQL suitable for large-scale vector data management.

I used the Global News Dataset from Kaggle in my experiment.

First, you must create the columns that will hold the vector information. In my case, I created two: title_vector for the news title and content_vector for the news content. I wrote a small Python script to populate them, but you can also do it directly from SQL using a cursor. It's important to know that by saving the vector information inside Azure SQL you don't need to pay for a separate vector database.

Python
from litellm import embedding
import pyodbc  # or another SQL connection library
import os
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

# Set up OpenAI credentials from environment variables
os.environ['AZURE_API_KEY'] = os.getenv('AZURE_API_KEY')
os.environ['AZURE_API_BASE'] = os.getenv('AZURE_API_BASE')
os.environ['AZURE_API_VERSION'] = os.getenv('AZURE_API_VERSION')

# Connect to your Azure SQL database
conn = pyodbc.connect(f'DRIVER={{ODBC Driver 17 for SQL Server}};'
                      f'SERVER={os.getenv("DB_SERVER")};'
                      f'DATABASE={os.getenv("DB_DATABASE")};'
                      f'UID={os.getenv("DB_UID")};'
                      f'PWD={os.getenv("DB_PWD")}')

def get_embeddings(text):
    # Truncate the text to 8191 characters because of the input limit of the
    # text-embedding-3-small OpenAI embedding model
    truncated_text = text[:8191]

    response = embedding(
        model="azure/text-embedding-3-small",
        input=truncated_text,
        api_key=os.getenv('AZURE_API_KEY'),
        api_base=os.getenv('AZURE_API_BASE'),
        api_version=os.getenv('AZURE_API_VERSION')
    )

    embeddings = response['data'][0]['embedding']
    return embeddings


def update_database(article_id, title_vector, content_vector):
    cursor = conn.cursor()

    # Convert vectors to strings
    title_vector_str = str(title_vector)
    content_vector_str = str(content_vector)

    # Update the SQL query to use the string representations
    cursor.execute("""
        UPDATE newsvector
        SET title_vector = ?, content_vector = ?
        WHERE article_id = ?
    """, (title_vector_str, content_vector_str, article_id))
    conn.commit()


def embed_and_update():
    cursor = conn.cursor()
    cursor.execute("SELECT article_id, title, full_content FROM newsvector where title_vector is null and full_content is not null and title is not null order by published asc")
    
    for row in cursor.fetchall():
        article_id, title, full_content = row
        
        print(f"Embedding article {article_id} - {title}")
        
        title_vector = get_embeddings(title)
        content_vector = get_embeddings(full_content)
        
        update_database(article_id, title_vector, content_vector)

embed_and_update()

These two columns will contain something like this: [-0.02232750505208969, -0.03755787014961243, -0.0066827102564275265…]
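Because the script stores each embedding with str(vector), those columns hold a JSON-compatible array string. If you ever need the floats back in Python, json.loads recovers them:

```python
import json

# Example of what a title_vector / content_vector cell contains (shortened).
stored = "[-0.02232750505208969, -0.03755787014961243, -0.0066827102564275265]"

vector = json.loads(stored)  # str() of a list of floats is valid JSON
print(len(vector), vector[0])
```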

Second, you must create a procedure in your Azure SQL database to transform the query into a vector embedding.

SQL
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[GET_EMBEDDINGS]
(
    @model VARCHAR(MAX),
    @text NVARCHAR(MAX),
    @embedding VARBINARY(8000) OUTPUT
)
AS
BEGIN
    DECLARE @retval INT, @response NVARCHAR(MAX);
    DECLARE @url VARCHAR(MAX);
    DECLARE @payload NVARCHAR(MAX) = JSON_OBJECT('input': @text);

    -- Set the @url variable with proper concatenation before the EXEC statement
    SET @url = 'https://<Your App>.openai.azure.com/openai/deployments/' + @model + '/embeddings?api-version=2024-02-15-preview';

    EXEC dbo.sp_invoke_external_rest_endpoint 
        @url = @url,
        @method = 'POST',   
        @payload = @payload,   
        @headers = '{"Content-Type":"application/json", "api-key":"<Your Azure OpenAI API Key>"}', 
        @response = @response OUTPUT;

    -- Use JSON_QUERY to extract the embedding array directly
    DECLARE @jsonArray NVARCHAR(MAX) = JSON_QUERY(@response, '$.result.data[0].embedding');

    
    SET @embedding = JSON_ARRAY_TO_VECTOR(@jsonArray);
END

I also created another procedure to search the dataset directly using the Native Vector Support in Azure SQL.

SQL
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[SearchNewsVector] 
    @inputText NVARCHAR(MAX)
AS
BEGIN
    -- Recreate the dbo.result table on every run
    IF OBJECT_ID('dbo.result', 'U') IS NOT NULL
        DROP TABLE dbo.result;

    -- Use the GET_EMBEDDINGS stored procedure above to embed the input text
    DECLARE @e VARBINARY(8000);
    EXEC dbo.GET_EMBEDDINGS @model = 'text-embedding-3-small', @text = @inputText, @embedding = @e OUTPUT;

    SELECT TOP(10)
        [article_id]
       ,[source_id]
       ,[source_name]
       ,[author]
       ,[title]
       ,[description]
       ,[url]
       ,[url_to_image]
       ,[content]
       ,[category]
       ,[full_content]
       ,[title_vector]
       ,[content_vector]
       ,[published]
       ,VECTOR_DISTANCE('cosine', @e, JSON_ARRAY_TO_VECTOR([content_vector])) AS cosine_distance
    INTO dbo.result
    FROM newsvector
    ORDER BY cosine_distance;
END

Finally, you can start querying your table using prompts instead of keywords. This is awesome!

Check out the app I developed with the Native Vector Support in Azure SQL, which is designed to assist you in crafting prompts and evaluating their results against my newsvector dataset. To explore the app, click here.

As always, I also created this GitHub repository with everything I did.

Signing up for the Azure SQL Database Native Vector Support Private Preview

You can sign up for the private preview at this link.

This article, published by Davide Mauri and Pooja Kamath at the Microsoft Build 2024 event, provides all the information.

Announcing EAP for Vector Support in Azure SQL Database – Azure SQL Devs’ Corner (microsoft.com)

Conclusion

The integration of Azure OpenAI with native vector support in Azure SQL Database unlocks new possibilities for applications that require advanced search capabilities and data analysis. By storing and querying vector embeddings alongside traditional SQL data, you can build powerful solutions that combine the best of both worlds—semantic understanding with the reliability and performance of Azure SQL.

This innovation simplifies application development, enhances data insights, and paves the way for the next generation of intelligent applications.

That’s it for today!

Sources

Azure SQL DB Vector Functions Private Preview | Data Exposed (youtube.com)

Announcing EAP for Vector Support in Azure SQL Database – Azure SQL Devs’ Corner (microsoft.com)

The Future of Superintelligence: A Deep Dive into AGI Predictions and Potential Risks

Welcome to the exciting world of Artificial General Intelligence (AGI) and the journey toward superintelligence. As we navigate through rapid technological advancements, it’s crucial to understand the predictions and potential risks associated with this transformative field. In this blog post, we’ll delve into the future of superintelligence, drawing insights from leading experts and recent developments.

AGI by 2027: A Believable Reality

Leopold Aschenbrenner, a former researcher at OpenAI, presents a striking vision for the future of AGI. He predicts that by 2027, AGI will become a reality, with AI systems achieving intelligence on par with PhD-level researchers and experts. This prediction is based on the significant advancements in AI from GPT-2 to GPT-4, which took AI from preschool-level capabilities to those of a smart high schooler within four years. Aschenbrenner suggests that another similar leap in intelligence could occur by 2027.

In his insightful article series “Situational Awareness,” Aschenbrenner elaborates on this vision, providing a detailed roadmap for how AGI could transform society. He emphasizes that the rapid progression in AI technology, driven by increasing computational power and algorithmic efficiency, supports the feasibility of achieving AGI within this decade. Aschenbrenner’s projections highlight the potential for AGI systems to independently drive groundbreaking innovations and solve complex problems across various domains, fundamentally altering the landscape of technology and human capability.

Timeline of Predictions

2024

  • Current State of AI: AI models like GPT-4 can already perform tasks at the level of smart high schoolers, writing sophisticated code, solving complex math problems, and excelling in various standardized tests.
GPT-4 scores on standardized tests. Note also the large jump from GPT-3.5 to GPT-4 in human percentile on these tests, often from well below the median human to the very top of the human range. (And this is GPT-3.5, a fairly recent model released less than a year before GPT-4, not the clunky old elementary-school-level GPT-3)

GPT-4 (2023) ~ smart high schooler: “Wow, it can write pretty sophisticated code and iteratively debug, it can write intelligently and sophisticatedly about complicated subjects, it can reason through difficult high-school competition math, it’s beating the vast majority of high schoolers on whatever tests we can give it, etc.” From code to math to Fermi estimates, it can think and reason. GPT-4 is now useful in my daily tasks, from helping write code to revising drafts. 

Some of what people found impressive about GPT-4 when it was released, from the “Sparks of AGI” paper. Top: It’s writing very complicated code (producing the plots shown in the middle) and can reason through nontrivial math problems. Bottom-left: Solving an AP math problem. Bottom-right: Solving a fairly complex coding problem. More interesting excerpts from that exploration of GPT-4’s capabilities here

2025-2026

  • AI Outpacing College Graduates: By this period, AI models are expected to surpass the cognitive capabilities of college graduates, handling complex tasks and problem-solving with greater efficiency.

2027

  • Arrival of AGI: Artificial General Intelligence (AGI) becomes a reality, with AI systems achieving intelligence on par with PhD-level researchers and experts. These models will be capable of autonomous research and engineering tasks.
  • Start of Intelligence Explosion: AGI systems begin to rapidly improve their own capabilities, potentially compressing decades of algorithmic progress into a single year, leading to superintelligence.

2028-2030

  • Government AGI Projects: By 2027-2028, the U.S. government will initiate large-scale AGI projects to maintain technological superiority. These projects will be crucial in the face of global competition, particularly from China.
  • Trillion-Dollar Compute Clusters: The construction of trillion-dollar compute clusters will be underway, driven by massive investments in AI infrastructure. These clusters will significantly enhance computational power, supporting the next generation of AI systems.
  • Expansion of U.S. Electricity Production: To support the growing computational demands, U.S. electricity production will increase by tens of percent. This expansion will be critical to sustaining the AI industry’s energy needs.

2030 and Beyond

  • Superintelligence: By the end of the decade, AI systems will have surpassed human intelligence by a significant margin, becoming superintelligent. These systems will possess cognitive abilities far beyond any human, capable of revolutionary advancements in various fields.

What are the key factors behind Leopold Aschenbrenner’s predictions?

1. Trend Analysis in Compute Power

  • Compute Growth: Aschenbrenner observes the exponential increase in computational power dedicated to AI research and development. This includes the progression from billion-dollar compute clusters to trillion-dollar clusters, predicting that by the end of the decade, there will be a massive industrial mobilization to support AI infrastructure.
  • Orders of Magnitude (OOM) Scaling: He uses the concept of orders of magnitude to project future AI capabilities. For example, tracing the growth in compute and algorithmic efficiencies suggests significant qualitative jumps in AI intelligence over short periods.

The image illustrates the projected growth of “Effective Compute” for AI models from 2018 to 2028, normalized to the compute power of GPT-4. The y-axis shows the Effective Compute on a logarithmic scale, indicating exponential growth over time. The growth trajectory suggests that AI capabilities will evolve from the level of a preschooler (GPT-2) to an elementary schooler (GPT-3), then to a smart high schooler (GPT-4), and potentially to the level of an automated AI researcher/engineer by 2027-2028. This progression is based on public estimates of both physical compute and algorithmic efficiencies, highlighting the rapid advancements in AI capabilities with increased compute power. The shaded area represents the uncertainty in these projections, with the solid line indicating the median estimate and the dashed lines showing the range of possible outcomes.

2. Algorithmic Efficiency Improvements

  • Algorithmic Advances: He considers the consistent improvements in algorithmic efficiencies, which act as multipliers for compute power. Historical data shows that these efficiencies have significantly reduced the cost and increased the performance of AI models.
Source: Our World in Data
  • Chinchilla Scaling Laws: These laws guide the optimal scaling of AI models, suggesting that as compute power and data grow, models become exponentially more capable.

3. “Unhobbling” AI Models

  • Latent Capabilities: Aschenbrenner emphasizes the potential of unlocking latent capabilities in AI models through techniques such as reinforcement learning from human feedback (RLHF), chain-of-thought prompting, and scaffolding. These methods enable AI systems to utilize their inherent abilities more effectively.
  • Context Length and Tools: Increasing the context length of AI models and providing them with tools (e.g., web browsers, code execution capabilities) enhances their practical utility and intelligence.

4. Historical Progress and Predictive Modeling

  • Historical Benchmarks: He analyzes the rapid advancements in AI over the past decade, from models that could barely identify images to those that now solve complex problems and ace standardized tests. This historical context helps project future milestones.
Source: Epoch AI Database
  • Predictive Trendlines: Aschenbrenner trusts the trendlines observed in AI research and development, which have consistently demonstrated rapid progress and exceeded skeptical expectations.

5. Industrial and National Security Implications

  • Industrial Mobilization: Predictions include the massive investments and industrial mobilization necessary to support AI growth, such as the expansion of U.S. electricity production and the construction of advanced compute clusters.
  • National Security: He anticipates significant government involvement in AGI projects by 2027-2028, driven by the need to maintain technological superiority and secure AGI from espionage and state-actor threats.

What is Superintelligence?

Superintelligence refers to a form of artificial intelligence that surpasses the cognitive capabilities of the most intelligent and gifted human minds. These AI systems would not only excel in specific tasks but possess general cognitive abilities that enable them to outperform humans in virtually every domain, including scientific research, creativity, social skills, and strategic thinking. The potential of superintelligence lies in its ability to drive revolutionary advancements across multiple fields, solve complex global challenges, and fundamentally transform our society in ways that are currently beyond human comprehension. However, this also brings significant risks and ethical considerations, as ensuring that such powerful systems are aligned with human values and controlled effectively is crucial for the future of humanity.

The image depicts a projected trajectory of AI development leading to an “Intelligence Explosion.” It shows the effective compute of AI systems, normalized to GPT-4, from 2018 to 2030. Initially, AI systems, such as GPT-2 and GPT-3, are comparable to preschool and elementary school intelligence levels, respectively. By around 2023-2024, AI reaches the GPT-4 level, equating to a smart high schooler. The projection suggests that automated AI research could lead to rapid, exponential gains in compute, propelling AI capabilities far beyond human intelligence to a state of superintelligence by 2030. This explosive growth in AI capability is driven by recursive self-improvement, where AI systems enhance their own development, vastly accelerating progress and potentially transforming various fields of science, technology, and military within a short span.

Risks of Superintelligence

Leopold Aschenbrenner outlines several significant risks for humanity associated with the development and deployment of artificial general intelligence (AGI) and superintelligence. Here are the main points extracted from his work:

1. Mass Destruction and Proliferation of Weapons
  • Enhanced Bioweapons: Advances in biology could lead to the creation of new bioweapons that spread quickly and kill with perfect lethality. These could become affordable even for terrorist groups.
  • New Nuclear Weapons: Technological advancements might enable the creation of nuclear weapons that are more numerous and have new, undetectable delivery mechanisms.
  • Drones and Novel WMDs: Small drones could carry deadly poisons and be used for targeted assassinations on a large scale. The development of novel weapons of mass destruction (WMDs) could be accelerated by superintelligent AI.
2. Global Security Threats
  • Espionage and Theft of AI Models: If AGI model weights are not securely protected, they could be stolen by rogue states or terrorists. This theft could allow adversaries to use these models to accelerate their own AI development and create catastrophic technologies.
  • National Security: Superintelligence will give a decisive economic and military advantage to whoever possesses it. If adversaries like China or North Korea obtain superintelligence, it could destabilize global security and lead to authoritarian control or world conquest.
3. Intelligence Explosion and Alignment Risks
  • Misaligned AI: There are significant risks associated with ensuring that superintelligent AI systems are aligned with human values and goals. Misaligned AI could act in ways that are harmful or catastrophic to humanity.
  • Rapid Technological Changes: The intelligence explosion—where AI systems rapidly improve themselves—could lead to a period of extreme volatility and danger. Managing this period safely will be exceptionally challenging.
  • Loss of Control: As AI systems become more powerful, there is a real risk that humans will lose control over them. This could lead to scenarios where AI systems make decisions that are detrimental to human survival.
4. Geopolitical Tensions and Arms Races
  • Existential Race: A neck-and-neck race between nations to develop superintelligence could lead to reckless behavior and a lack of safety measures. The competition could push countries to prioritize speed over safety, increasing the risk of catastrophic mistakes.
  • Instability and Deterrence: Rapid advancements in military technology driven by superintelligent AI could destabilize existing deterrence strategies, leading to a more volatile and dangerous global environment.
5. Government and Regulatory Challenges
  • Inadequate Security and Regulation: Current security measures for protecting AI models are insufficient. There is a need for more robust regulations and security protocols to prevent the misuse of AGI.
  • Competence and Coordination: Successfully navigating the risks associated with superintelligence will require exceptional competence and coordination among global leaders and AI researchers. The lack of a coordinated and competent response could exacerbate the risks.

Conclusion

The journey toward AGI and superintelligence is filled with both incredible opportunities and formidable challenges. As we approach this new frontier, it’s crucial to navigate the risks carefully and ensure that the development of AI benefits humanity. By staying informed and involved in the discourse around AI safety and ethics, we can help shape a future where superintelligence is a force for good.

Moreover, the development of AGI presents an unprecedented opportunity to address some of the world’s most pressing issues, from climate change to healthcare. With superintelligent systems capable of performing advanced research and creating innovative solutions, we could see rapid advancements in technology and science, leading to improved quality of life and economic growth.

However, these advancements come with significant responsibilities. Ensuring that AGI systems are aligned with human values and can be controlled effectively is paramount to preventing potential misuse or unintended consequences. International cooperation and robust regulatory frameworks will be essential to manage these risks and to ensure a balanced and equitable distribution of AGI’s benefits.

The potential geopolitical implications also cannot be ignored. The race to develop AGI could lead to shifts in global power dynamics, necessitating careful diplomatic efforts to prevent conflicts and promote peaceful uses of this transformative technology.

In conclusion, the path to superintelligence offers a glimpse into a future of boundless possibilities, but it also demands a cautious and ethical approach. By fostering a collaborative environment among researchers, policymakers, and society at large, we can aspire to harness the full potential of AGI for the betterment of all humanity. The decisions we make today will shape the trajectory of AI development and its impact on future generations, underscoring the importance of thoughtful and proactive engagement with this pivotal technology.

You can read the full article here.

That’s it for today!

Introduction – SITUATIONAL AWARENESS: The Decade Ahead (situational-awareness.ai)

Ex-OpenAI employee speaks out about why he was fired: ‘I ruffled some feathers’ (yahoo.com)

Leopold Aschenbrenner launches AGI-focused investment firm #ArtificialGeneralIntelligence (webappia.com)