Chatting with your Enterprise data privately and securely through the use of Azure Cognitive Search and Azure OpenAI

In an age where data is power, businesses are constantly looking for ways to leverage their vast enterprise data stores. One promising avenue lies in the intersection of AI and search technologies, specifically through the use of Azure Cognitive Search and Azure OpenAI. These tools provide powerful ways to converse with enterprise data privately and securely.

Enterprise data can take various forms, from structured database datasets to unstructured documents, emails, and files. Some examples are data about the company’s benefits, internal policies, job descriptions, roles, and much more.

What is Azure Cognitive Search?

Azure Cognitive Search is a cloud-based service provided by Microsoft Azure that enables developers to build sophisticated search experiences into custom applications. It integrates with other Azure Cognitive Services to enable AI-driven content understanding through capabilities such as natural language processing, entity recognition, image analysis, and more.

Here are some of the key benefits of Azure Cognitive Search:

  1. Fully Managed: Azure Cognitive Search is fully managed, meaning you don’t have to worry about infrastructure setup, maintenance, or scaling. You just need to focus on the development of your application.
  2. Rich Search Experiences: It allows for the creation of rich search experiences, including auto-complete, geospatial search, filtering, and faceting.
  3. AI-Enhanced Search Capabilities: When combined with other Azure Cognitive Services, Azure Cognitive Search can provide advanced search features. For example, it can extract key phrases, detect languages, identify entities, and more. It can even index and search unstructured data, like text within documents or images.
  4. Scalability and Performance: Azure Cognitive Search can automatically scale to handle large volumes of data and high query loads. It provides fast, efficient search across large datasets.
  5. Data Integration: It can pull in data from a variety of sources, including Azure SQL Database, Azure Cosmos DB, Azure Blob Storage, and more.
  6. Security: Azure Cognitive Search supports data encryption at rest and in transit. It also integrates with Azure Active Directory for identity and access management.
  7. Developer Friendly: It provides a simple, RESTful API and integrates with popular programming languages and development frameworks. This makes it easier for developers to embed search functionality into applications.
  8. Indexing: The service provides robust indexing capabilities, allowing you to index data from a variety of sources and formats. This allows for a more comprehensive search experience for end-users.

In summary, Azure Cognitive Search can provide powerful, intelligent search capabilities for your applications, allowing users to find the information they need quickly and easily.
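To make the RESTful API mentioned above concrete, here is a minimal sketch of how a search query request can be composed in Python. The service name, index name, and key are placeholders (not real resources), and the stable `2020-06-30` API version is assumed; actually sending the request is left out so the sketch stays self-contained:

```python
import json

def build_search_request(service: str, index: str, api_key: str, query: str,
                         api_version: str = "2020-06-30"):
    """Compose the URL, headers, and JSON body for a Cognitive Search query.

    The names here are placeholders; the request itself would be sent with
    any HTTP client (e.g. requests.post(url, headers=headers, data=body)).
    """
    url = (f"https://{service}.search.windows.net/"
           f"indexes/{index}/docs/search?api-version={api_version}")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    body = json.dumps({"search": query, "top": 5})  # return the top 5 hits
    return url, headers, body

url, headers, body = build_search_request("my-service", "my-index", "<key>", "vacation policy")
```

The same query can of course be issued through the Azure SDKs; the raw REST shape is shown only to illustrate how little is needed to embed search into an application.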

What is Azure OpenAI?

Azure OpenAI Service is a platform that provides REST API access to OpenAI’s powerful language models, including GPT-3, GPT-4, Codex, and Embeddings. It can be used for tasks such as content generation, summarization, semantic search, and natural language-to-code translation.

The security and safety of enterprise data is a top priority for Azure OpenAI. Here are some key points on how it ensures safety:

  • The Azure OpenAI Service is fully controlled by Microsoft and does not interact with any services operated by OpenAI. Your prompts (inputs), completions (outputs), embeddings, and training data are not available to other customers or to OpenAI, are not used to improve OpenAI models or any Microsoft or third-party products or services, and are not used to automatically improve the Azure OpenAI models in your resource. Your fine-tuned Azure OpenAI models are available exclusively for your use.
  • The service processes different types of data including prompts and generated content, augmented data included with prompts, and training & validation data.
  • When generating completions, images, or embeddings, the service evaluates the prompt and completion data in real-time to check for harmful content types. The models are stateless, meaning no prompts or generations are stored in the model, and prompts and generations are not used to train, retrain, or improve the base models.
  • With the “on your data” feature, the service retrieves relevant data from a configured data store and augments the prompt to produce generations that are grounded with your data. The data remains stored in the data source and location you designate. No data is copied into the Azure OpenAI service.
  • Training data uploaded for fine-tuning is stored in the Azure OpenAI resource in the customer’s Azure tenant. It can be double encrypted at rest and can be deleted by the customer at any time. This data is not used to train, retrain, or improve any Microsoft or third-party base models.
  • Azure OpenAI includes both content filtering and abuse monitoring features to reduce the risk of harmful use of the service. To detect and mitigate abuse, Azure OpenAI stores all prompts and generated content securely for up to thirty (30) days.
  • The data store where prompts and completions are stored is logically separated by customer resources. Prompts and generated content are stored in the Azure region where the customer’s Azure OpenAI service resource is deployed, within the Azure OpenAI service boundary. Human reviewers can only access the data when it has been flagged by the abuse monitoring system.
  • Customers who meet additional Limited Access eligibility criteria and attest to specific use cases can apply to modify the Azure OpenAI content management features. If Microsoft approves a customer’s request to modify abuse monitoring, Microsoft does not store any prompts and completions associated with the approved Azure subscription for which abuse monitoring is configured.

In conclusion, Azure OpenAI takes numerous measures to ensure that your enterprise data is kept secure and confidential while using its service.

Revolutionize your Enterprise Data with ChatGPT: how to create your own Enterprise Chat, step by step

This sample demonstrates a few approaches for creating ChatGPT-like experiences over your own data using the Retrieval Augmented Generation pattern. It uses Azure OpenAI Service to access the ChatGPT model (gpt-35-turbo), and Azure Cognitive Search for data indexing and retrieval.

The repo includes sample data, so it’s ready to try end-to-end. The sample application uses a fictitious company called Contoso Electronics, and the experience allows its employees to ask questions about benefits, internal policies, job descriptions, and roles.
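The heart of the Retrieval Augmented Generation pattern is simple: retrieve relevant passages from the search index, then fold them into the prompt sent to the ChatGPT model. The following is an illustrative simplification of that orchestration, not the sample's actual code; the retrieval step is stubbed out (the caller passes the chunks in) so the sketch needs no network access:

```python
def build_grounded_messages(question, retrieved_chunks):
    """Build a ChatCompletion message list grounded in retrieved passages.

    retrieved_chunks would come from an Azure Cognitive Search query; here
    they are passed in directly to keep the sketch self-contained.
    """
    sources = "\n".join(f"[{i}] {chunk}"
                        for i, chunk in enumerate(retrieved_chunks, start=1))
    system = (
        "You are a helpful assistant for Contoso Electronics employees.\n"
        "Answer ONLY using the sources below and cite them like [1].\n\n"
        f"Sources:\n{sources}"
    )
    # This list is what would be passed to the gpt-35-turbo deployment.
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_grounded_messages(
    "What dental plans does Contoso offer?",
    ["Contoso offers two dental plans: Basic and Plus."],
)
```

Because the model is instructed to answer only from the numbered sources, its responses can carry citations that the UI links back to the original documents.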

Features

  • Chat and Q&A interfaces
  • Explores various options to help users evaluate the trustworthiness of responses with citations, tracking of source content, etc.
  • Shows possible approaches for data preparation, prompt construction, and orchestration of interaction between model (ChatGPT) and retriever (Cognitive Search)
  • Settings directly in the UX to tweak the behavior and experiment with options
Chat screen

Getting Started

IMPORTANT: In order to deploy and run this example, you’ll need an Azure subscription with access enabled for the Azure OpenAI service. You can request access here. You can also visit here to get some free Azure credits to get you started.

AZURE RESOURCE COSTS: by default, this sample creates Azure App Service and Azure Cognitive Search resources that have a monthly cost, as well as a Form Recognizer resource that has a cost per document page. To avoid these costs, you can switch each to its free version by changing the parameters file under the infra folder (though there are some limits to consider; for example, you can have up to one free Cognitive Search resource per subscription, and the free Form Recognizer resource only analyzes the first 2 pages of each document).

Prerequisites

To Run Locally

  • Azure Developer CLI
  • Python 3+
    • Important: Python and the pip package manager must be in the path in Windows for the setup scripts to work.
    • Important: Ensure you can run python --version from the console. On Ubuntu, you might need to run sudo apt install python-is-python3 to link python to python3.
  • Node.js
  • Git
  • PowerShell 7+ (pwsh) – for Windows users only.
    • Important: Ensure you can run pwsh.exe from a PowerShell command. If this fails, you likely need to upgrade PowerShell.

NOTE: Your Azure Account must have Microsoft.Authorization/roleAssignments/write permissions, such as User Access Administrator or Owner.

Installation

Project Initialization

  1. Create a new folder and switch to it in the terminal
  2. Run azd auth login
  3. Run azd init -t azure-search-openai-demo
    • For the target location, the regions that currently support the models used in this sample are East US and South Central US. For an up-to-date list of regions and models, check here
    • Note that this command will initialize a git repository, so you do not need to clone this repository

Starting from scratch:

If you don’t have any pre-existing Azure services and want to start from a fresh deployment, execute the following command.

  1. Run azd up – This will provision Azure resources and deploy this sample to those resources, including building the search index based on the files found in the ./data folder.
  2. After the application has been successfully deployed you will see a URL printed to the console. Click that URL to interact with the application in your browser.

For detailed information, click here to go to my GitHub and watch a video from Microsoft discussing the example solution.

You can look at the Chat App that I’ve developed, which I will make available for you to test for a few days.

Firstly, it’s important to understand that you can replace the PDF files in the “./data” directory with your own business data.

If you wish to examine these files first to get a sense of the types of questions you can ask in the chat, please click here.

Regrettably, the demo app had to be deactivated due to Azure expenses. If you’d like it to be reactivated, please click here to contact me. Thank you.

You’re able to query any content found within the enterprise PDF files located in the “./data” directory. The chat will respond with citations from the respective PDFs, and you have the option to click through and verify the information directly from the source PDF.

Conclusion

The vast universe of enterprise data, spanning from structured database datasets to unstructured documents, emails, and files, holds a wealth of insights that can drive an organization’s growth and success. Azure Cognitive Search and Azure OpenAI serve as powerful tools that make this data readily accessible, private, and secure. By leveraging these technologies, businesses can tap into the full potential of their internal data, from understanding the intricacies of their benefits and policies to defining roles and job descriptions more effectively. With a future powered by AI and machine learning, the conversations we can have with our data are only just beginning. This is more than just a technological shift; it’s a new era of informed decision-making, driven by data that’s within our reach.

This solution provides an array of opportunities to assist businesses in leveraging their corporate data and disseminating it amongst their employees. This method simplifies comprehension, fostering organizational growth and enhancing the company culture. Should you require additional details on this topic, please do not hesitate to reach out to me.

That’s it for today!

AutoGPT: The Game Changer in Artificial Intelligence and Autonomous Agents

Auto-GPT is a revolutionary technology that unleashes new abilities for ChatGPT, enabling it to complete tasks all by itself, creating its own prompts to get the job done. AutoGPT, a groundbreaking artificial intelligence (AI) model, has taken the world by storm with its ability to provide large language models with “arms and hands” for task execution based on specific goals. This state-of-the-art technology has captured the attention of open-source developers and has the potential to revolutionize the AI landscape. For those who may not be familiar with AutoGPT, this article will provide an in-depth overview of this innovative AI model, its key features, and its impact on industries and applications.

The buzz around Auto-GPT has recently surpassed ChatGPT itself, trending as number one on Twitter for several days in a row.

How does AutoGPT work?

AutoGPT works by utilizing the GPT-4 language model as its core intelligence to automate tasks and perform web searches. To use AutoGPT, you need to provide three inputs:

  1. AI Name: A name for the AI instance.
  2. AI Role: A description of the AI’s purpose.
  3. Up to 5 goals: Specific tasks you want the AI to accomplish.

Once these inputs are provided, AutoGPT starts working on the assigned goals. It may search the internet, extract information, or perform other necessary actions to complete the tasks.

AutoGPT also features long and short-term memory management, allowing it to learn from past experiences and make better decisions based on context. This is achieved through its integration with vector databases for memory storage. Additionally, unlike ChatGPT, AutoGPT has internet access, which enables it to fetch relevant information from the web as needed. Furthermore, it can manipulate files, access, and extract data from them, and summarize the information if required.
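The loop described above can be reduced to a skeleton. Note this is a conceptual sketch, not AutoGPT's real code: `plan_next_action` is a stub standing in for the project's calls to the GPT-4 API, and the plain `memory` list stands in for its vector-database memory:

```python
def plan_next_action(role, goal, memory):
    # Real AutoGPT asks GPT-4 to choose the next command (search, browse,
    # write file, ...); this stub just proposes a single research step so
    # the skeleton is runnable offline.
    return f"research: {goal}"

def run_agent(name, role, goals, max_steps=10):
    memory = []                        # stands in for vector-store memory
    for goal in goals[:5]:             # AutoGPT accepts up to 5 goals
        for _ in range(max_steps):     # cap iterations to avoid endless loops
            action = plan_next_action(role, goal, memory)
            memory.append(action)      # remember what was done for context
            if action.startswith("research"):  # stub completes in one step
                break
    return memory

log = run_agent("ResearchGPT", "market research AI", ["compare headphones"])
```

The `max_steps` cap illustrates a practical safeguard against the looping problem discussed later: without it, an agent that keeps proposing failing actions would burn API credits indefinitely.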

Here are three examples of how AutoGPT works:

1 – Market Research on Headphones

AI Name: ResearchGPT
AI Role: An AI designed to conduct market research on tech products.

Goal 1: Do market research for different headphones on the market today.
Goal 2: Get the top 5 headphones and list their pros and cons.
Goal 3: Include the price of each one and save the analysis.
Goal 4: Once you are done, terminate.

Auto-GPT will search the internet, find information on various headphones, list the top 5 headphones with their pros, cons, and prices, save the analysis, and terminate once the task is complete.

2 – Create FAQs for a Product

AI Name: FAQGPT
AI Role: An AI designed to create FAQs for products.

Goal 1: Research a new smartphone model and its features.
Goal 2: Create a list of 10 frequently asked questions about the smartphone.
Goal 3: Provide clear and concise answers to the FAQs.
Goal 4: Save the FAQs in a text file.
Goal 5: Once you are done, terminate.

In this case, AutoGPT will research the specified smartphone model, create a list of FAQs, answer them, save the information in a text file, and terminate after completing the task.

3 – Writing a Python Program

AI Name: CodeGPT
AI Role: An AI designed to write simple Python programs.

Goal 1: Write a Python program that calculates the factorial of a given number.
Goal 2: Test the program with sample inputs and ensure it works correctly.
Goal 3: Save the Python code in a .py file.
Goal 4: Once you are done, terminate.

AutoGPT will generate the Python code to calculate the factorial of a given number, test it with sample inputs, save the code in a .py file, and terminate upon completion.
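For reference, a program of the kind CodeGPT is asked to produce might look like the following. This is one possible implementation, not actual AutoGPT output:

```python
def factorial(n: int) -> int:
    """Iteratively compute n! for a non-negative integer n."""
    if n < 0:
        raise ValueError("factorial is undefined for negative numbers")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Goal 2: test the program with sample inputs
assert factorial(0) == 1
assert factorial(1) == 1
assert factorial(5) == 120
```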

Keep in mind that AutoGPT might not always be perfect in completing the assigned tasks, as its performance depends on the accuracy and limitations of the GPT-4 model it is built upon.

AutoGPT boasts several key features that set it apart from its predecessors

  1. Dynamic learning: AutoGPT is designed to adapt to new data, making it an ever-evolving conversational AI model that stays up-to-date with the latest information and trends.
  2. Enhanced context awareness: AutoGPT’s understanding of context and user intent has been fine-tuned to provide more accurate and relevant responses.
  3. Customization capabilities: AutoGPT can be tailored to specific industries and applications, making it a versatile tool for many use cases.

AutoGPT’s Impact on Industries and Applications

The innovative features of AutoGPT are transforming various sectors through a wide range of applications:

  1. Personalized marketing: AutoGPT creates targeted marketing campaigns by continuously learning from user data and preferences.
  2. Sentiment analysis: AutoGPT accurately gauges user sentiment, providing valuable insights for businesses to improve customer experiences.
  3. Real-time adaptation: AutoGPT adapts to changing market conditions and trends, ensuring AI-powered solutions remain relevant and practical.
  4. Automation of complex tasks: AutoGPT’s self-improvement capabilities make it suitable for automating intricate tasks and streamlining processes across industries.

Integration of AutoGPT with advanced conversational AI models like BabyAGI, AgentGPT, and Microsoft’s Jarvis unlocks the full potential of AI and revolutionizes human-technology interactions. These AI models are transforming the world by enabling innovative applications across industries, such as enhanced customer support, improved content generation, seamless language translation, virtual personal assistants, healthcare applications, education and training, and human resources management.

AutoGPT also has a number of limitations, such as:

  1. Imperfect accuracy: AutoGPT is built upon the GPT-4 language model, which, although a significant improvement over GPT-3.5, is still not 100% accurate. Errors in the generated output might require further steps to resolve or could lead to an inability to complete the assigned task.
  2. Looping issues: While working on a task, AutoGPT may get stuck in a loop trying to find solutions to errors or problems. This can cause delays and increase costs, as the GPT-4 API usage fees can become expensive.
  3. Cost: The GPT-4 API, which Auto-GPT relies on, is more expensive than the GPT-3.5 API. The costs can quickly add up, especially if the AI is stuck in a loop or takes multiple steps to accomplish a task.
  4. Not production-ready: AutoGPT is not yet considered a production-ready solution. Users have reported that it often does not complete projects or only partially solves tasks. It requires further refinement and development before it can be relied upon as a complete, dependable solution.
  5. Task-specific limitations: AutoGPT might perform well for relatively simple and straightforward tasks, but it could struggle with more complex tasks or tasks requiring specialized knowledge. Its capabilities are limited by the underlying GPT-4 model and its ability to understand and solve a given problem.

These limitations should be taken into consideration when using AutoGPT, as it may not be suitable for all use cases or provide flawless results.

There are two methods for utilizing and evaluating AutoGPT

The first is to download and install the AutoGPT source code from GitHub on your computer, following the instructions at the link below. This may require some technical knowledge.

https://github.com/Significant-Gravitas/Auto-GPT/releases/latest

The second is to use a version I have deployed, accessible directly in your browser via this link, and have fun!

Input the name and goals, then click on “Deploy Agent.” Entering your OpenAI key is required; if you don’t have one, I provide five tasks per goal at no cost.

Following this, the agent will commence processing.

The interactions will be divided into separate tasks.

Upon completing the five tasks, AutoGPT will cease operation. To run more than five tasks, you must input your OpenAI Key.

To deploy it yourself, click on this link and adhere to the provided guidelines.

Conclusion

AutoGPT is a game changer in the field of artificial intelligence and autonomous agents. Its dynamic learning, enhanced context awareness, and customization capabilities make it a powerful tool poised to revolutionize industries and applications. As AutoGPT continues to evolve and integrate with other advanced conversational AI models, the potential for AI to enhance and streamline various aspects of our lives grows exponentially. The future of AI is undoubtedly bright, with AutoGPT leading the way.

Additionally, I’ve developed a section on my blog dedicated to my Generative AI projects. You can view the screenshot below.

Screenshot from my blog

That’s it for today!

Below are some interesting articles about AutoGPT:

https://www.linkedin.com/pulse/what-auto-gpt-next-level-ai-tool-surpassing-chatgpt-bernard-marr/

https://en.wikipedia.org/wiki/Auto-GPT

https://levelup.gitconnected.com/autogpt-is-taking-over-the-internet-here-are-the-incredible-use-cases-that-will-blow-your-mind-ac31ea94e06e

THE RISE OF AUTONOMOUS AGENTS: PREPARING FOR THE AI REVOLUTION

How to build an AI chatbot with personalized customer data using the OpenAI API and GPT Index

By harnessing the potential of AI-powered chatbots and personalized interactions, businesses can revolutionize their customer experience, fostering stronger relationships and driving customer loyalty. Implementing cutting-edge technologies like OpenAI API enables the creation of intelligent chatbots capable of understanding and adapting to individual customer needs, preferences, and concerns. As a result, customers receive tailored support, enjoy seamless interactions, and feel valued, significantly enhancing their overall experience. By prioritizing customer-centric innovation, businesses can position themselves as industry leaders and create lasting, positive impacts on customer satisfaction and brand reputation.

What are OpenAI API and GPT Index?

OpenAI API is a powerful tool provided by OpenAI, a leading AI research organization, that enables developers to access state-of-the-art AI models like GPT-4. GPT, which stands for Generative Pre-trained Transformer, is a groundbreaking language model that has revolutionized natural language processing (NLP) with its ability to generate human-like text.

GPT Index (now known as LlamaIndex) is an open-source data framework that connects language models to your own data: it ingests documents, builds indexes over them, and retrieves the most relevant context at query time. By leveraging the OpenAI API and GPT Index together, you can create AI chatbots that provide personalized customer experiences, enhancing user engagement and satisfaction.

How to Fine-tune with OpenAI API?

Fine-tuning is the process of further training a pre-trained AI model on a custom dataset to make it more suitable for a specific task or application. In the context of chatbots, this can involve training the model on customer interactions, preferences, and other relevant data. Here’s a step-by-step guide to fine-tuning with the OpenAI API:

Gather your data: Collect customer data, including conversation transcripts, customer preferences, and other relevant information. Ensure that your data is cleaned and formatted for training.

Prepare your dataset: Split your data into training and validation sets. The training set will be used to fine-tune the model, while the validation set will help you evaluate the model’s performance.
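To make the data-preparation steps concrete: the legacy OpenAI fine-tuning endpoint expects JSONL files where each line holds a `prompt` and a `completion`. The records below are invented placeholders standing in for real customer transcripts, and the 75/25 split is an arbitrary illustrative choice:

```python
import json
import random

# Invented placeholder records in the prompt/completion fine-tuning format
records = [
    {"prompt": "How do I reset my password? ->", "completion": " Use the self-service portal."},
    {"prompt": "What are your support hours? ->", "completion": " Weekdays, 9am to 6pm."},
    {"prompt": "Can I change my plan? ->", "completion": " Yes, from the billing page."},
    {"prompt": "Where is my invoice? ->", "completion": " Under Account > Invoices."},
]

random.seed(0)                 # fixed seed so the split is reproducible
random.shuffle(records)
split = int(len(records) * 0.75)          # 75/25 train/validation split
train, valid = records[:split], records[split:]

# One JSON object per line, as the fine-tuning endpoint expects
with open("train.jsonl", "w") as f:
    f.writelines(json.dumps(r) + "\n" for r in train)
with open("valid.jsonl", "w") as f:
    f.writelines(json.dumps(r) + "\n" for r in valid)
```

The resulting `train.jsonl` is what you would upload in the fine-tuning step, while `valid.jsonl` is held back for the evaluation step.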

Select a base model: Choose a suitable OpenAI base model, considering factors like size, performance, and language capabilities. The base model will serve as the foundation for your custom chatbot.

Fine-tune the model: Use the OpenAI API to fine-tune your selected model on your dataset. This involves uploading your dataset, specifying the base model, and setting the training parameters. The API will then train the model on your data, adjusting its weights and biases to understand better and generate personalized customer interactions.

Evaluate and iterate: Once the fine-tuning is complete, evaluate the model’s performance on the validation set. Adjust the training parameters and fine-tune the model to improve its performance if necessary.

Integrate the chatbot: After achieving satisfactory performance, integrate your fine-tuned AI chatbot into your desired platform, such as a website or a mobile app, and start providing personalized customer experiences.

Is it possible to fine-tune the new GPT-4 model?

The OpenAI GPT-4 model is anticipated to have the capability for task or domain-specific fine-tuning, akin to its predecessor, GPT-3. However, OpenAI has not yet disclosed the particulars of fine-tuning GPT-4. As the model undergoes further exploration and experimentation by researchers and developers, it is expected that more details about fine-tuning GPT-4 will be revealed in the future.

Let’s dive into my actual fine-tuning experiment.

I created an example by training on two articles from my blog about ChatGPT, titled “How can you earn money with ChatGPT and Power BI?” and “Open AI released this week the new ChatGPT API.” I used this site tool to convert my post contents into text and stored them on my GitHub page. You can add as many files as needed to train your model, or easily adapt this process to store the data in a database. If you want to open the notebook directly in Google Colab, click this link.

Python
#This is to run in the Google Colab notebook and has all the code you need to create your own chatbot with a custom knowledge base using GPT-3.

#Follow the instructions for each step, then run the code sample. To run the code, press the "play" button near each code sample.

#Download the data for your custom knowledge base
#For the demonstration purposes we are going to use ----- as our knowledge base. You can download them to your local folder from the github repository by running the code below.
#Alternatively, you can put your own custom data into the local folder.

! git clone https://github.com/LawrenceTeixeira/data_to_train.git

# Install the dependencies
#Run the code below to install the dependencies we need for our functions


!pip install llama-index
!pip install langchain

# Define the functions
#The following code defines the functions we need to construct the index and query it


from llama_index import SimpleDirectoryReader, GPTListIndex, readers, GPTSimpleVectorIndex, LLMPredictor, PromptHelper, ServiceContext
from langchain import OpenAI
import sys
import os
from IPython.display import Markdown, display

def construct_index(directory_path):
    # set maximum input size
    max_input_size = 4096
    # set number of output tokens
    num_outputs = 2000
    # set maximum chunk overlap
    max_chunk_overlap = 20
    # set chunk size limit
    chunk_size_limit = 600 

    # define prompt helper
    prompt_helper = PromptHelper(max_input_size, num_outputs, max_chunk_overlap, chunk_size_limit=chunk_size_limit)

    # define LLM
    llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.5, model_name="text-davinci-003", max_tokens=num_outputs))
 
    documents = SimpleDirectoryReader(directory_path).load_data()
    
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper)
    index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)

    index.save_to_disk('index.json')

    return index

def ask_ai():
    index = GPTSimpleVectorIndex.load_from_disk('index.json')
    while True: 
        query = input("What do you want to ask? ")
        response = index.query(query)
        display(Markdown(f"Response: <b>{response.response}</b>"))

# Set OpenAI API Key
#You need an OPENAI API key to be able to run this code.

#If you don't have one yet, get it by signing up at https://platform.openai.com/overview. Then click your account icon at the top right of the screen and select "View API Keys". Create an API key.

#Then run the code below and paste your API key into the text input.


os.environ["OPENAI_API_KEY"] = input("Paste your OpenAI key here and hit enter:")

#Construct an index
#Now we are ready to construct the index. This will take every file in the folder 'data_to_train', split it into chunks, and embed it with OpenAI's embeddings API.

#Notice: running this code will cost you credits on your OpenAI account ($0.02 for every 1,000 tokens). If you've just set up your account, the free credits that you have should be more than enough for this experiment.


construct_index("/content/data_to_train")

#Ask questions
#It's time to have fun and test our AI. Run the function that queries GPT and type your question into the input. 

#If you've used the provided example data for your custom knowledge base, here are a few questions that you can ask:
#1. What is the new API?
#2. ChatGPT is designed to handle what?
#3. What is BotGPT?
#4. Tell me what are the benefits of BotGPT?
#5. What is the relationship between Power BI and ChatGPT?
#6. How to make money with chatGPT and Power BI?

ask_ai()

Build AI chatbot with custom knowledge base using Open AI API and GPT index

Conclusion

Developing an AI chatbot that leverages personalized client information through the OpenAI API and GPT Index has the potential to transform customer experiences by providing customized interactions, education, and assistance. This solution offers numerous business prospects, including training models to comprehend a company’s areas of expertise, processes, policies, and protocols. By fine-tuning a model from the GPT Index with the OpenAI API, you can harness the power of AI to enhance user engagement and satisfaction, setting your business apart from the competition.

Embrace the world of AI chatbots and start providing a truly personalized experience for your customers today.

That’s it for today!

Here are the sources I used to create this article:

OpenAI – Official fine-tuning documentation

https://www.lennysnewsletter.com/p/i-built-a-lenny-chatbot-using-gpt

https://medium.datadriveninvestor.com/fine-tuning-gpt-3-for-helpdesk-automation-a-step-by-step-guide-516394df7f1

Open AI released this week the new ChatGPT API

OpenAI has introduced two new APIs to its suite of powerful language models this week. ChatGPT has been making waves in the market these past few months since its release to the public in November 2022 by OpenAI. Now, any company can incorporate ChatGPT features into their applications. The API is very simple to use and will revolutionize how we interact with artificial intelligence today.

What is ChatGPT?

ChatGPT is an API (Application Programming Interface) developed by OpenAI, which is designed to facilitate the creation of chatbots that can engage in natural language conversations with users. ChatGPT is based on the GPT (Generative Pre-trained Transformer) family of language models, which have been pre-trained on vast amounts of text data and can generate high-quality text that closely mimics human writing.

ChatGPT aims to make it easier for developers to create chatbots that can understand and respond to natural language queries. The API can be fine-tuned for specific use cases, such as customer service or sales, and developers can integrate it into their applications with just a few lines of code.

ChatGPT works by taking in user input, such as a question or statement, and generating a response designed to mimic natural language conversation. The API uses machine learning to process and understand the input, allowing it to respond in a relevant and engaging way.

Overall, ChatGPT represents a significant step forward in developing conversational AI. By providing developers with a powerful and flexible tool for creating chatbots, OpenAI is making it easier for businesses and organizations to engage with their customers and users more naturally and intuitively.

What is the ChatGPT API?

The ChatGPT API is an extension of the GPT (Generative Pre-trained Transformer) family of language models. The GPT models are pre-trained on massive amounts of text data, allowing them to generate high-quality text with a natural language understanding often indistinguishable from human-written text.

The ChatGPT API is designed to handle conversations, making it an excellent tool for building chatbots. With it, developers can create chatbots that respond to user input in a natural, conversational way. The API is flexible and can be fine-tuned for specific use cases, such as customer service or sales.

How to use the API?

You can integrate ChatGPT features into your solution with just a few lines of code. Here is an example:

```python
import openai

openai.api_key = "PUT YOUR OPENAI KEY HERE"

def ChatGPT(question):
    # Send a single user question, with a generic system prompt
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": question},
        ],
    )
    # Return only the text of the first answer
    return response["choices"][0]["message"]["content"]

print(ChatGPT("How to create Python code to analyze the increase of the population around the world?"))
```

Take a look at the ChatGPT API result:

One approach to analyzing the increase of population around the world in Python is to use data from the United Nations World Population Prospects. Here's an example of how to load and analyze this data in Python:

First, you will need to install the pandas package:

```python
!pip install pandas
```

Next, you can load the data into a pandas DataFrame:

```python
import pandas as pd

# Load the data from a CSV file
data = pd.read_csv('https://population.un.org/wpp/DVD/Files/1_Indicators%20(Standard)/CSV_FILES/WPP2019_TotalPopulationBySex.csv')

# Print the first few rows of the data
print(data.head())
```

This will load the data into a pandas DataFrame and print the first few rows of the data:

```
   LocID     Location  VarID Variant  ...  PopFemale  PopTotal  PopDensity  PopPyramid
0      4  Afghanistan      2  Medium  ...  19594.381  38928.35      54.422        0-24
1      4  Afghanistan      2  Medium  ...  20179.675  40341.77      56.937        0-24
2      4  Afghanistan      2  Medium  ...  20778.593  41754.96      59.661        0-24
3      4  Afghanistan      2  Medium  ...  21415.724  43203.14      62.529        0-24
4      4  Afghanistan      2  Medium  ...  22099.928  44701.23      65.484        0-24

[5 rows x 14 columns]
```

The `data` DataFrame contains the population data for all countries and regions of the world from 1950 to 2100, broken down by sex and age group.

To analyze the increase of population over time, you can use pandas' grouping and aggregation functions. For example, to calculate the total population by year, you can group the data by year and sum the population columns:

```python
# Group the data by year and sum the population columns
pop_by_year = data.groupby('Time')[['PopTotal', 'PopMale', 'PopFemale']].sum()

# Print the first few rows of the data
print(pop_by_year.head())
```

This will output:

```
         PopTotal       PopMale     PopFemale
Time                                         
1950  2.536542e+09  1.278728e+09  1.257814e+09
1951  2.584034e+09  1.303461e+09  1.280572e+09
1952  2.630584e+09  1.328990e+09  1.301594e+09
1953  2.676380e+09  1.354295e+09  1.322086e+09
1954  2.721384e+09  1.379313e+09  1.342071e+09
```

You can then plot the total population over time using the matplotlib package:

```python
import matplotlib.pyplot as plt

# Create a line plot of total population over time
plt.plot(pop_by_year.index, pop_by_year['PopTotal'])

# Set the plot title and axis labels
plt.title('World Population')
plt.xlabel('Year')
plt.ylabel('Population (billions)')

# Show the plot
plt.show()
```

This will plot the total population over time:

![World Population plot](https://i.imgur.com/XXcRVit.png)

You can also calculate the percentage increase in population from year to year using pandas' `pct_change()` function:

```python
# Calculate the percentage increase in population from year to year
pop_growth = pop_by_year.pct_change() * 100

# Print the first few rows of the data
print(pop_growth.head())
```

This will output:

```
      PopTotal    PopMale  PopFemale
Time                                
1950       NaN        NaN        NaN
1951  1.854079   1.934514   1.814192
1952  1.798119   1.952587   1.658901
1953  1.735834   1.900590   1.556621
1954  1.686583   1.845293   1.512103
```

You can then plot the percentage increase in population over time:

```python
# Create a line plot of percentage population growth over time
plt.plot(pop_growth.index, pop_growth['PopTotal'])

# Set the plot title and axis labels
plt.title('World Population Growth')
plt.xlabel('Year')
plt.ylabel('Population growth (%)')

# Show the plot
plt.show()
```

This will plot the percentage increase in population over time:

![World Population Growth plot](https://i.imgur.com/E18reRf.png)

This analysis can be extended in many ways depending on the specific questions you are interested in.

They also released the Whisper API. Whisper can transcribe speech to text in a wide range of languages.

Here is an example:

Let’s test extracting the text with Whisper from this video:

First, you have to extract only the audio:

```python
# In a notebook, install pytube first
!pip install pytube

import pytube as pt

# Download only the audio track of the video
yt = pt.YouTube("https://www.youtube.com/watch?v=XxIfSkkyAaQ")
stream = yt.streams.filter(only_audio=True)[0]
stream.download(filename="audio_ChatGPTAPI.mp3")
```

Now, you have to use the API to transcribe the audio:

```python
import openai

# Open the audio file and send it to the Whisper model
file = open("/path/to/file/audio_ChatGPTAPI.mp3", "rb")
transcription = openai.Audio.transcribe("whisper-1", file)

print(transcription)
```

Take a look at the Whisper API result:

{
  "text": "OpenAI recently released the API of chatgpt. This is an API that calls gpt 3.5 turbo, which is the same model used in the chatgpt product. If you already know how to use the OpenAI API in Python, learning how to use the chatgpt API should be simple, but there are still some concepts that are exclusive to this API, and we'll learn these concepts in this video. Okay, let's explore all the things we can do with the chatgpt API in Python. Before we start with this video, I'd like to thank Medium for supporting me as a content creator. Medium is a platform where you can find Python tutorials, data science guides, and more. You can get unlimited access to every guide on Medium for $5 a month using the link in the description. All right, to start working with the chatgpt API, we have to go to our OpenAI account and create a new secret key. So first, we have to go to this website that I'm going to leave the link on the description, and then we have to go to the view API keys option. And here, what we have to do is create a new secret key in case you don't have one. So in this case, I have one, and I'm going to copy the key I have, and then we can start working with the API. So now I'm going here to Jupyter Notebooks, and we can start working with this API. And the first thing we have to do is install the OpenAI API. So chatgpt, the API of chatgpt or the endpoint, is inside of this library, and we have to install it. So we write pip install OpenAI, and then we get, in my case, a requirement already satisfied because I already have this library. But in your case, you're going to install this library. And then what we have to do is go to the documentation of chatgpt API, which I'm going to leave in the description, and we have to copy the code snippet that is here. So you can copy from my GitHub that I'm going to leave also in the description, or you can go to the documentation. So this is going to be our starting point. 
And before you run this code, you have to make sure that here in this variable OpenAI.API underscore key, you type your secret key that we generated before. So you type here your key, and well, you're good to go. And here's something important you need to know is that the main input is the messages parameter. So this one. And this messages parameter must be an array of message objects where each object has a role. You can see here in this case, the role is the user. And also we have the content. And this content is basically the content of the message. Okay. There are three roles. There are besides user, we have also the admin role and also the assistant role. And we're going to see that later. And now I'm going to test this with a simple message here in the content. Here I'm going to leave the role as user as it was by default. And here I'm going to change that content of the message. So I don't want to write hello, but I want to type this. So tell the world about the chatgpt API in the style of a pirate. So if I run this, we can see that we're going to get something similar that we'll get with chatgpt. But before running this, I'm going to delete this, this quote. And now I'm going to run and we're going to get a message similar to chatgpt. So here we have a dictionary with two elements, the content and the role. And here I only want the content. This is the text that we're going to get. We will get if we were using chatgpt. And if I write content, I'm going to get only the content. So only the text. So here's the text. So this is an introduction to the chatgpt API in the style of a pirate. And well, this is the message or the response. And if we go to the website to chatgpt, we're going to see that we're going to get something similar. So if I go here, and I go to chatgpt, and I write to the world about the chatgpt API in the style of a pirate, we can see we get this message in the style of a pirate. 
So we get this ahojder and then all the things that a pirate will say. And we get here the same. So we get a similar message. So basically, this response is what we will get with chatgpt, but without all this fancy interface. So we're only getting the text. Okay, now to interact with this API, as if we were working with chatgpt, we can make some modifications to the code. For example, we can use that input function to interact with with this API in a different way, as if we were working with chatgpt, like in the website. So here I can use that input. And I can, I can write, for example, users. So we are the users. And this is what we're going to ask chatgpt. And this is going to be my content. So here content. And instead of writing this, I'm going just to write content equal to content. And this is going to be the message that is going to change based on the input we insert, then instead of just printing this message, I'm going to create a variable called chat underscore response. And this is going to be my response, but we're gonna put it like in a chatgpt style. So here, I'm going to print this. And with this, we can recognize which is the user request and which is that chatgpt response. So let's try it out. Here, I'm going to press Ctrl Enter to run this. Okay, and here I'm going to type who was the first man on the moon. So if I press Enter, we get here the answer. And well, this is like in a chatgpt style, we get an input where we can type any question or request we have. And then we get the answer by chatgpt. And now let's see the roles that are going to change the way we interact with chatgpt. Okay, now let's see the system role. The system role helps set the behavior of the system. And this is different from that user role, because in the user role, we only give instructions to the system. But here, in the system role, we can control how the system behaves. For example, here, I add two different behaviors. 
And to do this, first, we have to use the messages object. It is the same messages object we had before. This is the same that we had here. But in this case, this is for the system role. And here I added two just to show you different ways to use this, this role. But usually you only have only one behavior for the system, or sorry for the system. And well, here in the first one, I'm saying you're a kind, helpful assistant. And well, in this case, we're telling the system to be as helpful as possible. And in the second one, is something I came up with. And it's something like you're a recruiter who asks tough interview questions. So for example, this second role, we can interact with chat GPT as if it was a job interview. So it's something like chat GPT is going to be the recruiter who asks questions, and we're going to be the candidate who answers all the questions. So let's use this, this second content. And now let's include this system role in our code. So to do this, I'm going to copy the code I had before, and I'm going to paste it here. And as you can see, we have two messages variable, one with a system role and the other with that user role. And what I'm going to do is just append one list into the other. So to do this, I'm going to create or write messages that append. And then I'm going to put this dictionary inside my variables. So here I write append, and now I put this inside. And after doing this, I'm just have to delete this and write messages equal to messages. And with this, we have that system role and also that user role in our code. Now I only have to put this content equal to input at the beginning. And with this, everything is ready. And now we can run this code. So first, I'm going to run the messages here, the list I have, and then I'm going to run the code we have before. And here is asking me to insert something. So here, I'm going to write just hi. And after this, we're going to see that the behavior of chat GPT change. 
So now is telling us Hello, welcome to the interview. Are you ready to get started? And this happened because we changed the behavior of the system. Now the behavior is set to you're a recruiter who asks tough interview questions. And well, here the conversation finished because this doesn't have a while loop. But here, I'm going to add a while loop. So I'm going to write while true. And then I'm going to run again. So here, I'm going to run again. And let's see how the conversation goes. So first, I write hi. And then this is going to give me the answer that well, welcome to the interview. And then can you tell me about a work related challenge that you overcame? So here, I can say, I had problems in public presentations. And I overcame it with practice. So I'm going to write this. And let's see how the conversation goes. And now it's asking me to add some specific actions I did to improve my presentation skills. So now you can see that chatgp is acting like a recruiter in a job interview. And this is thanks to this behavior we added in the system role. And well, now something that you need to know is that there is another role, which is the assistant role. And this role is very important. And it's important because sometimes here, for example, in this chat that is still on, if we write no, what we're going to see is that chatgpt is not able to remember the conversation we had. So it cannot read that preview responses. So here, for example, I type no. And what we got is thanks for sharing that. And actually, I didn't share anything. I just wrote no. And well, it's telling me to continue with something else. But as you can see, chatgpt is not able to remember what we said before. And if we add an assistant role, with this, we can make sure that we build a conversation history where chatgpt is able to remember the previous responses. So now let's do this. Let's create an assistant role. Okay, as I mentioned before, the system role is used to store prior responses. 
So by storing prior responses, we can build a conversation history that will come in handy when user instructions refer to prior messages. So here, to create this assistant role, we have to create again this dictionary. And then in the role, we have to type assistant, as you can see here. And then in the content, we have to introduce that chat response. And to understand this much better, I'm going to copy the previous code, and I'm going to paste it here. So here, the chat response is this one, this chat response that has that content of the response given by chatgpt. So here, I'm going to copy this code, and I'm going to paste it here. And what I'm going to do here to include this assistant role is to append this into that messages list. So here, I'm going to write messages.append() and then the parentheses. And with this, we integrated that assistant role to our little script. And here, for you to see the big picture of all of this, I'm going to copy also that assistant role. And well, it's here, the assistant role. I'm going to delete this first line of code. And well, this is the big picture. So we have the assistant role. This sets the behavior of the assistant. Then we have the user, which sets the instructions. And finally, we have the assistant, which stores all the responses. And with this, we can have a proper conversation with chatgpt. Here, before I run this code, I'm going to customize a little bit more the behavior of the assistant in the assistant role. And here, I'm going to type this. So it's basically the same, but here I'm adding, you ask one question or one new question after my response. So to simulate a job interview. And well, now that this is ready, here, I'm going to make sure that everything is right. And well, everything is perfect now. So here, I'm going to run these two blocks. And then I'm going to type hi, so we can start with that interview. So are you ready for that interview? Yes. So here, it's going to ask me a question. 
Let's get started. Can you tell me about your previous work experience? And well, I worked at Google, I'm going to say and well, now it tells me that's great. Can you tell me your role and responsibilities? And I can say, I was a software engineer. And well, now that conversation is going to keep going. And chatgpt is going to ask me more and more questions. And in this case, it remembers the previous responses I gave. So for example, I said I worked at Google. And here it's telling me the responsibilities I had at Google. And in the next response is also mentioning Google again. And I think if I mentioned the project that is asking here, for example, if I write, I had a credit card fraud detection project, and I overcame it with teamwork, I don't know, something like this, then it's going to ask me about this project. So now it mentions teamwork, which I said in my previous response. And now it's asking me more about this project. So with this, we can see that our assistant is storing our previous responses. And with this, we're building a conversation history that keeps the conversation going without losing quality in the responses. And that's pretty much it. Those are the three those are the three modes that you have to know to work with the chatgpt API. And in case you wonder about the pricing of the chatgpt API, well, it's priced at 0.002 per 1000 tokens, which is 10 times cheaper than the other models like gpt 3.5. And well, it is another reason why I wouldn't pay $20 for a chatgpt plus subscription. And well, in case you're interested why I am going to cancel my chatgpt plus subscription, you can watch this video where I explain why I regret paying $20 for a chatgpt plus subscription. And well, that's it. I'll see you on the next video."
}
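The transcript above walks through the system, user, and assistant roles; the key trick is appending each assistant reply back into the message list so the model keeps the conversation context. A minimal sketch of that pattern, again assuming the pre-1.0 `openai` interface used in the earlier examples:

```python
def make_history(system_behavior):
    # The system turn sets the behavior, e.g. "You are a recruiter ..."
    return [{"role": "system", "content": system_behavior}]

def add_turn(history, role, content):
    history.append({"role": role, "content": content})
    return history

def chat_once(history, user_input):
    import openai  # requires `pip install openai` (pre-1.0 interface)
    add_turn(history, "user", user_input)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=history)
    answer = response["choices"][0]["message"]["content"]
    # Store the reply so the next call can refer back to it
    add_turn(history, "assistant", answer)
    return answer

history = make_history("You are a recruiter who asks tough interview questions.")
# Each chat_once(history, "...") call now carries the full conversation.
```

Without the final `add_turn`, each call would start from scratch, which is exactly the "it forgot what I said" behavior demonstrated in the transcript.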

OpenAI’s ChatGPT and Whisper APIs are a significant step forward for conversational AI. By making it easy for developers to build chatbots and voice assistants, these APIs have the potential to revolutionize the way we interact with technology. With the power of these language models at their fingertips, developers can create more intuitive and engaging user experiences than ever before.
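Since both APIs live in the same Python client, they chain naturally: for example, transcribing an audio file with Whisper and then summarizing the transcript with gpt-3.5-turbo. A hypothetical sketch (the summarization prompt is my own wording, pre-1.0 `openai` interface):

```python
def build_summary_messages(text):
    # Ask the chat model to condense the Whisper transcript
    return [
        {"role": "system", "content": "You summarize transcripts."},
        {"role": "user", "content": "Summarize this transcript:\n" + text},
    ]

def summarize_audio(path):
    import openai  # requires `pip install openai` (pre-1.0 interface)
    with open(path, "rb") as audio:
        # Whisper turns the audio into plain text
        text = openai.Audio.transcribe("whisper-1", audio)["text"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_summary_messages(text),
    )
    return response["choices"][0]["message"]["content"]
```

Applied to the video above, this would boil the long transcript shown earlier down to a few sentences.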

Follow the official ChatGPT API post:

https://openai.com/blog/introducing-chatgpt-and-whisper-apis

Regarding ChatGPT, I would like to share the project I’m developing using the official ChatGPT API. It’s just the beginning!

BotGPT

BotGPT is a new service product that leverages the power of artificial intelligence to provide a personalized chat experience to customers through WhatsApp. Powered by the large language model, ChatGPT, BotGPT is designed to understand natural language and provide relevant responses to customer inquiries in real time.

One of the key benefits of using BotGPT is its ability to provide personalized recommendations to customers based on their preferences and past interactions. BotGPT can suggest products, services, or solutions tailored to each customer’s needs by analyzing customer data. This personalized approach helps to enhance the overall customer experience, leading to increased customer satisfaction and loyalty.

Unleash the potential of GPT-4 with BotGPT today by clicking this link and embarking on a two-day, cost-free journey into conversational AI, with no payment information required. Begin your adventure by clicking here, and to start the monthly subscription after the trial, click here.

Once subscribed, you can manage or cancel your subscription anytime via this link.

Should you encounter any obstacles, you can add the BotGPT WhatsApp number, +1 (205) 754-6921, directly to your phone.

If you have any questions or suggestions, please get in touch using this link.

That’s it for today!