From IDE Helpers to CLI Agents: How Agentic CLIs Are Accelerating Real-World Dev Workflows

The landscape of software development is undergoing a seismic shift, moving from a manual coding paradigm to an AI-assisted approach. This transition is not merely about autocomplete or syntax highlighting; it represents a fundamental change in how developers interact with their tools, codebases, and workflows. While IDE-based AI assistants like GitHub Copilot and Cursor have become commonplace, a new frontier is opening up in the command-line interface (CLI). The emergence of powerful, agentic AI assistants that live in the terminal, such as Anthropic’s Claude Code CLI, GitHub’s Copilot CLI, Google’s Gemini CLI, and OpenAI’s Codex CLI, marks a significant acceleration of this evolution. For technology leaders, understanding this new class of tools is no longer optional; it is a strategic imperative to boost productivity, enhance code quality, and maintain a competitive edge in the fast-paced world of IT development.

This blog post provides a deep dive into these four leading CLI-based AI code assistants. We will explore their core capabilities, compare their strengths and weaknesses, and provide a framework for selecting the right tool for your organization. Whether you are managing an internal development squad or collaborating with external contractors, this comprehensive guide will equip you with the knowledge needed to navigate the rapidly changing world of AI-assisted software engineering and make informed decisions that will shape the future of your development teams.

The Evolution: From IDE Plugins to Terminal Agents

The journey of AI in software development began in the integrated development environment (IDE). Tools like GitHub Copilot, Cursor, and Windsurf brought the power of large language models directly into the code editor, offering intelligent suggestions, completing lines of code, and even generating entire functions. These IDE plugins have undeniably enhanced developer productivity by reducing the cognitive load of writing boilerplate code and providing quick access to API documentation and best practices. However, their scope is often limited to the file or function at hand, lacking a holistic understanding of the entire project architecture.

The terminal, on the other hand, has always been the command center for serious software development. It is where developers manage version control with Git, run tests, build and deploy applications, and orchestrate complex workflows. The limitations of IDE-only assistance become apparent when dealing with tasks that span multiple files, require shell interaction, or involve the entire project lifecycle. This is where the new generation of CLI-based AI assistants comes into play. These are not just code-completion tools; they are agentic coding assistants that can understand and navigate your entire codebase, edit multiple files, execute shell commands, and integrate seamlessly into real-world development workflows. They represent a paradigm shift from a passive assistant to an active collaborator, working alongside developers in their native environment.

A graphical representation of the landscape of AI coding assistants, showing various tools categorized by their level of specialization and agency, with labeled axes for 'Agent' and 'Assistant'.

The AI coding assistant landscape is evolving from specialized IDE plugins to more generic, agentic tools that operate at the project level. [1]

Generally, AI coding platforms can be categorized into the following:

  • CLI-Based Agents: Interact with AI agents through the command line using Aider, Claude Code, Codex CLI, Gemini CLI, and Warp.
  • AI Code Editors: Interact with agents through GitHub Copilot, Cursor, and Windsurf.
  • Vibe Coding: Build web and mobile applications with prompts using Bolt, Lovable, v0, Replit, Firebase Studio, and more.
  • AI Teammate: A collaborative AI teammate for engineering teams. Examples include Devin and Genie by Cosine.

What Is a CLI Coding Tool?

Think of a CLI-based AI coding tool as an LLM (Claude, an OpenAI model, or Gemini) running in your terminal. This category consists of closed- and open-source tools that enable developers to work on engineering projects directly by accessing coding agents from model providers such as Anthropic, OpenAI, xAI, and Google.

To understand how CLI tools differ, consider how IDE-based agents like Cursor work: you pick the agent you want to use in your project and add a prompt to begin interacting with it. Cursor then presents a UI to accept, reject, and review the agent’s changes based on your prompt.

In contrast, CLI coding tools streamline that experience. You run commands directly in the terminal at the root of your project. After the agent analyzes your code, it asks yes/no questions about the task without leaving the terminal.

Meet the Contenders: Four CLI Assistants Transforming Development

The current market for CLI-based AI code assistants is dominated by four major players, each with its unique philosophy, strengths, and target audience. Understanding the nuances of these tools is crucial for making an informed decision.

A. Claude Code (Anthropic)

Launched in early 2025, Claude Code by Anthropic has quickly established itself as a powerhouse in agentic coding. Its core philosophy is to provide a low-level, unopinionated, and highly customizable tool that gives developers raw access to the underlying model’s power without enforcing a specific workflow. This approach has resonated with experienced developers who value flexibility and control.

One of the standout features of Claude Code is its use of CLAUDE.md files. These are special configuration files that can be placed at various levels of a project’s directory structure to provide persistent context to the AI. Developers can use these files to document everything from standard bash commands and code style guidelines to repository etiquette and testing instructions. This allows for a high degree of customization and ensures the AI’s behavior aligns with the project’s specific needs.
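As an illustration, a minimal CLAUDE.md might look like the following. The commands and conventions shown here are hypothetical examples for a fictional project, not part of Claude Code itself:

```markdown
# CLAUDE.md

## Common commands
- `npm run build`: build the project
- `npm test`: run the test suite once

## Code style
- Use ES modules (import/export), not CommonJS (require)
- Prefer named exports over default exports

## Repository etiquette
- Branch names: feature/<ticket-id>-short-description
- Run the full test suite before opening a PR
```

Because these files can live at the repository root, in subdirectories, or in a user’s home directory, context can be layered from broad organizational conventions down to module-specific guidance.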

In terms of performance, Claude Code has achieved impressive results, scoring 72.7% on the SWE-bench Verified benchmark, which evaluates an AI’s ability to resolve real-world GitHub issues. This high score is a testament to its strong capabilities in agentic planning, architectural reasoning, and complex multi-file changes. Claude Code is particularly well-suited for tasks that require a deep understanding of the codebase, such as complex refactoring, architectural changes, and test-driven development.

Terminal interface displaying the welcome message for Claude Code research preview, featuring stylized text.

The Claude Code interface provides a clean and focused environment for interacting with the AI assistant in the terminal. [2]

Pricing and Availability: Claude Code’s pricing is based on the usage of the Anthropic API, with different tiers available for individuals and teams. Access to Claude Code is typically included in the Claude Pro and Max subscription plans, which start at around $20 per month. [3]

B. GitHub Copilot CLI

GitHub Copilot CLI is the natural extension of the widely adopted Copilot ecosystem into the terminal. Its primary strength lies in its deep integration with GitHub, making it an indispensable tool for teams that rely heavily on the platform for their development workflows. Copilot CLI can be used in two modes: an interactive mode for conversational development and a programmatic mode for single-shot commands and scripting.

One of the most compelling features of Copilot CLI is its ability to interact directly with GitHub.com. Developers can use it to list open pull requests, work on assigned issues, create new PRs, and even review code changes in existing pull requests. This seamless integration with the GitHub workflow eliminates the need to switch between the terminal and the browser, resulting in significant productivity gains. Furthermore, Copilot CLI comes with the GitHub MCP server preconfigured, enabling it to leverage a wide range of tools and services on the GitHub platform.

A terminal interface for GitHub Copilot CLI version 0.0.1, showcasing its welcome message, features, and user login information.

The GitHub Copilot CLI provides a familiar and intuitive interface for interacting with the AI assistant, with a focus on GitHub-centric workflows. [4]

Pricing and Availability: Access to GitHub Copilot CLI is included with the GitHub Copilot Pro, Business, and Enterprise plans. The Pro plan starts at $10 per month, making it a cost-effective option for individual developers and small teams. For larger organizations, the Business and Enterprise plans offer additional features such as centralized policy management and enhanced security. [5]

C. OpenAI Codex CLI

OpenAI Codex CLI is a lightweight, open-source coding agent that brings the power of OpenAI’s most advanced reasoning models, including the o4 series, directly to the terminal. It is designed to be a versatile and powerful tool for a wide range of development tasks, from writing new features and fixing bugs to brainstorming solutions and answering questions about a codebase. Codex CLI runs locally on the developer’s machine, providing a secure and responsive experience.

One of the key features of Codex CLI is its full-screen terminal UI, which allows for a rich, interactive, and conversational workflow. Developers can send prompts, code snippets, and even screenshots to the AI and watch it explain its plan before making any changes. This transparency and control are crucial for building trust and ensuring that the AI’s actions are aligned with the developer’s intent. Codex CLI also supports conversation resumption, allowing developers to pick up where they left off without repeating context.

A terminal screen displaying an OpenAI Codex interface, showcasing a command execution related to a project in development, with input fields for commands and notes about the session.

The OpenAI Codex CLI offers a powerful, interactive terminal experience focused on reasoning and conversational development. [6]

Platform Support and Pricing: Codex CLI has native support for macOS and Linux, with experimental support for Windows via WSL. This platform limitation is an important consideration for teams with a mix of operating systems. Pricing is based on the usage of the OpenAI API, and developers can use their existing API keys to access the service. There is also an option to use a ChatGPT account to access the more cost-efficient gpt-5-codex-mini model.

D. Gemini CLI (Google)

Google’s Gemini CLI is a powerful, open-source AI agent that brings the capabilities of the Gemini family of models directly into the terminal. Its architecture is based on a reason-and-act (ReAct) loop, which allows it to break complex tasks into smaller, manageable steps and to use a variety of tools to accomplish them. This makes Gemini CLI a highly versatile tool that excels not only at coding but also at a wide range of other tasks, such as content generation, problem-solving, and deep research.
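The ReAct pattern is easy to see in miniature. The sketch below is a toy illustration of the idea only, not Gemini CLI’s actual implementation: the agent alternates between a "reason" step that picks a tool and an "act" step that executes it, feeding each observation back into the next iteration. The hard-coded rule table stands in for what would really be an LLM call.

```python
# Toy illustration of a reason-and-act (ReAct) loop.
# The "reasoning" here is a hard-coded rule standing in for an LLM call.

def decide_next_action(task, observations):
    """Reason step: pick the next action from the task and what we know so far."""
    if not observations:
        return "search", task
    return "finish", f"answer based on: {observations[-1]}"

def react_loop(task, tools, max_steps=5):
    """Alternate between choosing a tool (reason) and running it (act)."""
    observations = []
    for _ in range(max_steps):
        action, arg = decide_next_action(task, observations)
        if action == "finish":
            return arg
        # Act: execute the chosen tool and record the observation.
        observations.append(tools[action](arg))
    return observations[-1] if observations else None

tools = {"search": lambda query: f"results for '{query}'"}
print(react_loop("find failing tests", tools))
# → answer based on: results for 'find failing tests'
```

A real agent replaces `decide_next_action` with a model call and `tools` with file editing, shell execution, and MCP servers, but the loop structure is the same.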

One of the key advantages of Gemini CLI is its seamless integration with the broader Google ecosystem. It is available without any additional setup in Google Cloud Shell and shares technology with Gemini Code Assist, which powers the agent mode in VS Code. This tight integration provides a consistent, unified experience for developers working across different environments. Gemini CLI also offers robust support for the Model Context Protocol (MCP), enabling it to leverage both built-in tools like grep and the terminal, as well as remote MCP servers.

Screenshot of a command-line interface showcasing the Gemini AI assistant. The UI displays colorful text with tips for getting started, including prompts for editing files and asking questions. The interface highlights a search command and encourages exploration of features.

The Gemini CLI features a vibrant, modern terminal interface that reflects its versatility and power. [8]

Pricing and Availability: Gemini CLI is free with a Google account and includes a generous quota of requests. For users who require higher limits, it is also included in the Gemini Code Assist Standard and Enterprise plans. Additionally, developers can use a Gemini API key to access the powerful Gemini 2.5 Pro model, which offers up to 60 requests per minute and 1,000 requests per day. This flexible pricing model makes Gemini CLI an accessible option for a wide range of users, from individual developers to large enterprises. [9]

How These Tools Accelerate IT Development

The adoption of CLI-based AI code assistants is not just about convenience; it is a fundamental driver of accelerated IT development projects. These tools offer a range of capabilities that translate directly into tangible benefits in terms of speed, quality, and overall developer experience.

Speed and Automation

One of the most immediate benefits of using these tools is automating repetitive, time-consuming tasks. This includes everything from generating boilerplate code and writing unit tests to refactoring large codebases and managing version control. By offloading these tasks to the AI, developers can focus their time and energy on higher-value activities, such as designing system architecture and solving complex business problems. The ability to perform multi-file operations and architectural refactoring with a single command is a game-changer for large, complex projects, where these tasks would otherwise require days or even weeks of manual effort.

Context Awareness

Unlike their IDE-based counterparts, CLI-based AI assistants have a deep understanding of the entire codebase. They can analyze relationships among files and modules, understand the project’s architecture, and maintain a persistent conversation history across multiple sessions. This deep context awareness allows them to provide more relevant and accurate suggestions and to perform complex tasks that require a holistic understanding of the project. This is particularly valuable in large, legacy codebases, where it can be a significant challenge for new developers to get up to speed.

Workflow Integration

The native integration of these tools into the terminal provides a seamless and frictionless developer experience. There is no need to switch between different applications or windows, as all development tasks can be performed within the same environment. This not only saves time but also reduces developers’ cognitive load, allowing them to stay in a state of flow for longer. The ability to integrate with Git, Docker, and CI/CD pipelines enables these tools to automate the entire development lifecycle, from coding and testing to deployment and monitoring.

Comparative Analysis: Choosing the Right Tool

With a clear understanding of each tool’s capabilities, the next step is to determine which is the best fit for your organization. This decision will depend on a variety of factors, including your team’s specific needs, your existing technology stack, and your budget. The following table provides a high-level comparison of the four tools across key dimensions:

| Feature | Claude Code (CLI) | Gemini CLI | Codex CLI | Copilot CLI |
| --- | --- | --- | --- | --- |
| Company | Anthropic | Google | OpenAI | GitHub |
| Created | Feb 2025 (research preview), GA May 2025 | Jun 2025 | May 2025 | Sep 2025 (public preview) |
| Core use | Agentic coding in your terminal (edits files, runs tests/commands, manages git) | Open-source terminal agent; integrates with Gemini Code Assist | Local coding agent/CLI that runs on your machine | GitHub-native terminal agent for repos, PRs, and issues |
| Context awareness | Reads your repo & shell output; applies diffs | ReAct-style “reason & act”; 2.5 Pro + MCP tools/context | Navigates repo, edits files; MCP/tools supported | Operates in trusted project dirs; GH context/PRs |
| Multi-language | Model-driven (Claude family) | Model-driven (Gemini family) | Model-driven (GPT-5-Codex) | Model-driven (Copilot stack) |
| Integrations | Terminal, web & VS Code | Terminal; Code Assist; Model Context Protocol (MCP) | npm/Homebrew; IDEs via extensions; MCP | Deep GitHub: repos, PRs; new Copilot CLI |
| Pricing | Requires Anthropic plan/API billing (Team/Enterprise for orgs) | OSS client; usage via free/Std/Enterprise Gemini Code Assist | Included with ChatGPT tiers that include Codex access (per OpenAI post) | Included with Copilot org plans (public preview CLI) |
| Data privacy posture (high level) | Enterprise controls/admin policies via Anthropic; research preview had limited availability | Governed by Google Cloud’s Code Assist policies | Business/Enterprise data governed by OpenAI enterprise terms | Org-level GitHub policies & approvals |
| Community/Support | Official docs & OSS repo | Google blog + OSS repo | OpenAI docs + GitHub repo | GitHub docs/changelog + releases |
| Customization/Extensibility | Hooks/plugins & commands | Tools API + MCP (local/remote servers) | MCP/tools and CLI config | Custom agents (preview) |
| Overall | Strong agentic repo editing & workflows for teams on Anthropic | Best if you’re a Google/Gemini shop or want OSS + MCP | Natural fit if your org standardizes on ChatGPT/Codex | Best alignment for GitHub-centric orgs and PR workflows |

Conclusion

The world of software development is at an inflection point. The new generation of CLI-based AI code assistants is transforming the way we build software, offering unprecedented levels of speed, quality, and productivity. For technology leaders, the time to act is now. By carefully evaluating options, making informed decisions, and investing in the right tools and training, you can empower your teams to build better software faster and stay ahead of the competition in the age of AI.

That’s it for today!

References

[1] The Generative Programmer. (2025). AI Coding Assistants Landscape. Retrieved from

[2] The Discourse. (2025). Anthropic Claude Code: Command Line AI Coding – Review. Retrieved from thediscourse.co

[3] Claude.com. (2025). Pricing. Retrieved from

[4] GitHub. (n.d.). GitHub Copilot CLI. Retrieved from

[5] GitHub. (n.d.). GitHub Copilot Plans & pricing. Retrieved from

[6] Level Up Coding – Gitconnected. (2025). The guide to OpenAI Codex CLI. Hands-on review of the most. Retrieved from levelup.gitconnected.com

[7] OpenAI. (2025). Codex CLI features. Retrieved from

[8] Gemini-cli.xyz. (2025). Gemini CLI. Retrieved from

[9] Google AI for Developers. (2025). Gemini Developer API Pricing. Retrieved from

[10] Medium. (2025). Choosing the Right AI Code Assistant: A Comprehensive. Retrieved from medium.com

OpenAI released the new ChatGPT API this week

OpenAI has introduced two new APIs to its suite of powerful language models this week. ChatGPT has been making waves since its release to the public in November 2022 by OpenAI. Now, any company can incorporate ChatGPT features into its applications. The API is simple to use, taking only a few lines of code, and it is set to change how applications incorporate artificial intelligence.

What is ChatGPT?

ChatGPT is a conversational AI developed by OpenAI, and the new ChatGPT API (Application Programming Interface) makes it available to developers for building chatbots that can engage in natural language conversations with users. ChatGPT is based on the GPT (Generative Pre-trained Transformer) family of language models, which have been pre-trained on vast amounts of text data and can generate high-quality text that closely mimics human writing.

ChatGPT aims to make it easier for developers to create chatbots that can understand and respond to natural language queries. The API can be fine-tuned for specific use cases, such as customer service or sales, and developers can integrate it into their applications with just a few lines of code.

ChatGPT works by taking in user input, such as a question or statement, and generating a response designed to mimic natural language conversation. The API uses machine learning to process and understand the input, allowing it to respond in a relevant and engaging way.

Overall, ChatGPT represents a significant step forward in developing conversational AI. By providing developers with a powerful and flexible tool for creating chatbots, OpenAI is making it easier for businesses and organizations to engage with their customers and users more naturally and intuitively.

What is the ChatGPT API?

The ChatGPT API is an extension of the GPT (Generative Pre-trained Transformer) family of language models. The GPT models are pre-trained on massive amounts of text data, allowing them to generate high-quality text with a natural language understanding often indistinguishable from human-written text.

The ChatGPT API is designed to handle conversations, making it an excellent tool for building chatbots. With it, developers can create chatbots that respond to user input in a natural, conversational way. The API is flexible and can be fine-tuned for specific use cases, such as customer service or sales.

How to use the API?

You can integrate ChatGPT features into your solution with a few lines of code. Here is an example:

```python
import openai

openai.api_key = "PUT YOUR OPENAI KEY HERE"

def chat_gpt(question):
    # Send the question to the gpt-3.5-turbo chat model
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": question},
        ],
    )
    # Extract the assistant's reply from the response
    return response["choices"][0]["message"]["content"]

print(chat_gpt("How can I create Python code to analyze the increase of the population around the world?"))
```

Take a look at the ChatGPT API result:

One approach to analyzing the increase of population around the world in Python is to use data from the United Nations World Population Prospects. Here's an example of how to load and analyze this data in Python:

First, you will need to install the pandas package:

```python
!pip install pandas
```

Next, you can load the data into a pandas DataFrame:

```python
import pandas as pd

# Load the data from a CSV file
data = pd.read_csv('https://population.un.org/wpp/DVD/Files/1_Indicators%20(Standard)/CSV_FILES/WPP2019_TotalPopulationBySex.csv')

# Print the first few rows of the data
print(data.head())
```

This will load the data into a pandas DataFrame and print the first few rows of the data:

```
   LocID     Location  VarID Variant  ...  PopFemale  PopTotal  PopDensity  PopPyramid
0      4  Afghanistan      2  Medium  ...  19594.381  38928.35      54.422        0-24
1      4  Afghanistan      2  Medium  ...  20179.675  40341.77      56.937        0-24
2      4  Afghanistan      2  Medium  ...  20778.593  41754.96      59.661        0-24
3      4  Afghanistan      2  Medium  ...  21415.724  43203.14      62.529        0-24
4      4  Afghanistan      2  Medium  ...  22099.928  44701.23      65.484        0-24

[5 rows x 14 columns]
```

The `data` DataFrame contains the population data for all countries and regions of the world from 1950 to 2100, broken down by sex and age group.

To analyze the increase of population over time, you can use pandas' grouping and aggregation functions. For example, to calculate the total population by year, you can group the data by year and sum the population columns:

```python
# Group the data by year and sum the population columns
pop_by_year = data.groupby('Time')[['PopTotal', 'PopMale', 'PopFemale']].sum()

# Print the first few rows of the data
print(pop_by_year.head())
```

This will output:

```
         PopTotal       PopMale     PopFemale
Time                                         
1950  2.536542e+09  1.278728e+09  1.257814e+09
1951  2.584034e+09  1.303461e+09  1.280572e+09
1952  2.630584e+09  1.328990e+09  1.301594e+09
1953  2.676380e+09  1.354295e+09  1.322086e+09
1954  2.721384e+09  1.379313e+09  1.342071e+09
```

You can then plot the total population over time using the matplotlib package:

```python
import matplotlib.pyplot as plt

# Create a line plot of total population over time
plt.plot(pop_by_year.index, pop_by_year['PopTotal'])

# Set the plot title and axis labels
plt.title('World Population')
plt.xlabel('Year')
plt.ylabel('Population (billions)')

# Show the plot
plt.show()
```

This will plot the total population over time:

![World Population plot](https://i.imgur.com/XXcRVit.png)

You can also calculate the percentage increase in population from year to year using pandas' `pct_change()` function:

```python
# Calculate the percentage increase in population from year to year
pop_growth = pop_by_year.pct_change() * 100

# Print the first few rows of the data
print(pop_growth.head())
```

This will output:

```
      PopTotal    PopMale  PopFemale
Time                                
1950       NaN        NaN        NaN
1951  1.854079   1.934514   1.814192
1952  1.798119   1.952587   1.658901
1953  1.735834   1.900590   1.556621
1954  1.686583   1.845293   1.512103
```

You can then plot the percentage increase in population over time:

```python
# Create a line plot of percentage population growth over time
plt.plot(pop_growth.index, pop_growth['PopTotal'])

# Set the plot title and axis labels
plt.title('World Population Growth')
plt.xlabel('Year')
plt.ylabel('Population growth (%)')

# Show the plot
plt.show()
```

This will plot the percentage increase in population over time:

![World Population Growth plot](https://i.imgur.com/E18reRf.png)

This analysis can be extended in many ways depending on the specific questions you are interested in.

They also released the Whisper API. Whisper can transcribe speech to text in many languages.

Follow an example:

Let’s test Whisper by extracting the text from this video:

First, you have to extract only the audio:

```python
# Requires: pip install pytube
import pytube as pt

# Download only the audio track of the YouTube video
yt = pt.YouTube("https://www.youtube.com/watch?v=XxIfSkkyAaQ")
stream = yt.streams.filter(only_audio=True)[0]
stream.download(filename="audio_ChatGPTAPI.mp3")
```

Now, you have to use the API to transcribe the audio:

```python
import openai

# Send the audio file to the Whisper API for transcription
with open("/path/to/file/audio_ChatGPTAPI.mp3", "rb") as file:
    transcription = openai.Audio.transcribe("whisper-1", file)

print(transcription)
```

Take a look at the Whisper API result:

{
  "text": "OpenAI recently released the API of chatgpt. This is an API that calls gpt 3.5 turbo, which is the same model used in the chatgpt product. If you already know how to use the OpenAI API in Python, learning how to use the chatgpt API should be simple, but there are still some concepts that are exclusive to this API, and we'll learn these concepts in this video. Okay, let's explore all the things we can do with the chatgpt API in Python. Before we start with this video, I'd like to thank Medium for supporting me as a content creator. Medium is a platform where you can find Python tutorials, data science guides, and more. You can get unlimited access to every guide on Medium for $5 a month using the link in the description. All right, to start working with the chatgpt API, we have to go to our OpenAI account and create a new secret key. So first, we have to go to this website that I'm going to leave the link on the description, and then we have to go to the view API keys option. And here, what we have to do is create a new secret key in case you don't have one. So in this case, I have one, and I'm going to copy the key I have, and then we can start working with the API. So now I'm going here to Jupyter Notebooks, and we can start working with this API. And the first thing we have to do is install the OpenAI API. So chatgpt, the API of chatgpt or the endpoint, is inside of this library, and we have to install it. So we write pip install OpenAI, and then we get, in my case, a requirement already satisfied because I already have this library. But in your case, you're going to install this library. And then what we have to do is go to the documentation of chatgpt API, which I'm going to leave in the description, and we have to copy the code snippet that is here. So you can copy from my GitHub that I'm going to leave also in the description, or you can go to the documentation. So this is going to be our starting point. 
And before you run this code, you have to make sure that here in this variable OpenAI.API underscore key, you type your secret key that we generated before. So you type here your key, and well, you're good to go. And here's something important you need to know is that the main input is the messages parameter. So this one. And this messages parameter must be an array of message objects where each object has a role. You can see here in this case, the role is the user. And also we have the content. And this content is basically the content of the message. Okay. There are three roles. There are besides user, we have also the admin role and also the assistant role. And we're going to see that later. And now I'm going to test this with a simple message here in the content. Here I'm going to leave the role as user as it was by default. And here I'm going to change that content of the message. So I don't want to write hello, but I want to type this. So tell the world about the chatgpt API in the style of a pirate. So if I run this, we can see that we're going to get something similar that we'll get with chatgpt. But before running this, I'm going to delete this, this quote. And now I'm going to run and we're going to get a message similar to chatgpt. So here we have a dictionary with two elements, the content and the role. And here I only want the content. This is the text that we're going to get. We will get if we were using chatgpt. And if I write content, I'm going to get only the content. So only the text. So here's the text. So this is an introduction to the chatgpt API in the style of a pirate. And well, this is the message or the response. And if we go to the website to chatgpt, we're going to see that we're going to get something similar. So if I go here, and I go to chatgpt, and I write to the world about the chatgpt API in the style of a pirate, we can see we get this message in the style of a pirate. 
So we get this ahojder and then all the things that a pirate will say. And we get here the same. So we get a similar message. So basically, this response is what we will get with chatgpt, but without all this fancy interface. So we're only getting the text. Okay, now to interact with this API, as if we were working with chatgpt, we can make some modifications to the code. For example, we can use that input function to interact with with this API in a different way, as if we were working with chatgpt, like in the website. So here I can use that input. And I can, I can write, for example, users. So we are the users. And this is what we're going to ask chatgpt. And this is going to be my content. So here content. And instead of writing this, I'm going just to write content equal to content. And this is going to be the message that is going to change based on the input we insert, then instead of just printing this message, I'm going to create a variable called chat underscore response. And this is going to be my response, but we're gonna put it like in a chatgpt style. So here, I'm going to print this. And with this, we can recognize which is the user request and which is that chatgpt response. So let's try it out. Here, I'm going to press Ctrl Enter to run this. Okay, and here I'm going to type who was the first man on the moon. So if I press Enter, we get here the answer. And well, this is like in a chatgpt style, we get an input where we can type any question or request we have. And then we get the answer by chatgpt. And now let's see the roles that are going to change the way we interact with chatgpt. Okay, now let's see the system role. The system role helps set the behavior of the system. And this is different from that user role, because in the user role, we only give instructions to the system. But here, in the system role, we can control how the system behaves. For example, here, I add two different behaviors. 
To do this, we use the same messages object we had before, but this time for the system role. I added two behaviors just to show you different ways to use this role, though usually you define only one behavior for the system. In the first one I'm saying "You are a kind, helpful assistant", which tells the system to be as helpful as possible. The second one is something I came up with: "You are a recruiter who asks tough interview questions." With this second behavior, we can interact with ChatGPT as if it were a job interview: ChatGPT is the recruiter asking the questions, and we are the candidate answering them. Let's use this second content and include the system role in our code. I copy the code I had before and paste it here. As you can see, we now have two messages variables, one with the system role and the other with the user role, so I append one list onto the other: I write messages.append() and put the user dictionary inside. After doing this, I just delete the old assignment and pass the combined messages list to the call. With this, we have both the system role and the user role in our code. I only have to set content equal to input() at the beginning, and everything is ready. Now we can run this code: first the messages list, then the code we had before. It asks me to type something, so I'll just write "hi", and we'll see that the behavior of ChatGPT has changed.
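The combined list described above, with the system behavior first and the user message appended after it, might look like this. This is a sketch under the same assumptions as before (legacy pre-1.0 `openai` library, `gpt-3.5-turbo`; the helper names are mine):

```python
def build_interview_messages(user_text):
    # the system role sets the behavior; the user role carries the instruction
    messages = [{"role": "system",
                 "content": "You are a recruiter who asks tough interview questions."}]
    messages.append({"role": "user", "content": user_text})
    return messages

def run_interview_turn(api_key, user_text):
    import openai  # pip install "openai<1.0"
    openai.api_key = api_key
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_interview_messages(user_text),
    )
    return response["choices"][0]["message"]["content"]
```

With the system message in place, even a plain "hi" is answered in character, as the transcript shows.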
Now it's telling us "Hello, welcome to the interview. Are you ready to get started?" This happened because we changed the behavior of the system: it is now set to "You are a recruiter who asks tough interview questions." The conversation finishes right away because the code doesn't have a loop yet, so I add a while True loop and run it again. First I write "hi", and it answers "Welcome to the interview. Can you tell me about a work-related challenge that you overcame?" I answer: "I had problems with public presentations, and I overcame it with practice." Now it asks me about the specific actions I took to improve my presentation skills. You can see that ChatGPT is acting like a recruiter in a job interview, thanks to the behavior we added in the system role. Now, something else you need to know: there is another role, the assistant role, and it's very important. Here's why. In this chat that is still running, if I just write "no", ChatGPT is not able to remember the conversation we had; it cannot read the previous responses. I type "no" and it answers "Thanks for sharing that", even though I didn't share anything. As you can see, ChatGPT doesn't remember what we said before. If we add an assistant role, we can build a conversation history so that ChatGPT remembers the previous responses. So let's create an assistant role.
As I mentioned, the assistant role is used to store prior responses. By storing them, we build a conversation history that comes in handy when user instructions refer to previous messages. To create this assistant role, we create the same kind of dictionary again, but in the role we type "assistant", and in the content we put the chat response, that is, the content of the response given by ChatGPT. To make this clearer, I copy the previous code and paste it here: the chat_response variable holds the content of the model's reply. To include the assistant role, I append this dictionary to the messages list with messages.append(). With that, we've integrated the assistant role into our little script. To see the big picture: we have the system role, which sets the behavior of the assistant; then the user role, which gives the instructions; and finally the assistant role, which stores all the responses. With this, we can have a proper conversation with ChatGPT. Before I run the code, I'm going to customize the behavior a little more in the system role. It's basically the same, but I'm adding "you ask one new question after my response", to simulate a job interview. Now that everything is ready, I run these two blocks and then type "hi" so we can start the interview. Am I ready for the interview? Yes. Now it's going to ask me a question.
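Putting the three roles together, the conversation loop with history could be sketched as follows. As before, this assumes the legacy pre-1.0 `openai` library and `gpt-3.5-turbo`, and the helper name is mine:

```python
def append_assistant(messages, chat_response):
    # the assistant role stores the model's prior responses,
    # building the conversation history
    messages.append({"role": "assistant", "content": chat_response})
    return messages

def chat_loop(api_key):
    import openai  # pip install "openai<1.0"
    openai.api_key = api_key
    messages = [{"role": "system",
                 "content": "You are a recruiter who asks tough interview questions. "
                            "You ask one new question after my response."}]
    while True:
        user_text = input("User: ")
        messages.append({"role": "user", "content": user_text})
        response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                                messages=messages)
        chat_response = response["choices"][0]["message"]["content"]
        print("ChatGPT:", chat_response)
        # without this line, the model would forget its own previous answers
        append_assistant(messages, chat_response)
```

Because every turn is appended to `messages`, each new request carries the full conversation so far, which is what lets the model refer back to "Google" or "teamwork" later in the interview.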
"Let's get started. Can you tell me about your previous work experience?" I answer that I worked at Google, and it replies: "That's great. Can you tell me your role and responsibilities?" I say I was a software engineer, and the conversation keeps going, with ChatGPT asking me more and more questions. In this case, it remembers the previous responses I gave: I said I worked at Google, and it refers to my responsibilities at Google and mentions Google again in the next response. If I mention a project, for example "I had a credit card fraud detection project, and I overcame the challenges with teamwork", then it asks me more about this project and mentions teamwork, which I said in my previous response. So our assistant is storing the previous responses, and we're building a conversation history that keeps the conversation going without losing quality in the responses. And that's pretty much it: those are the three roles you need to know to work with the ChatGPT API. In case you wonder about the pricing, it's priced at $0.002 per 1,000 tokens, which is 10 times cheaper than the existing GPT-3.5 models, and it's another reason why I wouldn't pay $20 for a ChatGPT Plus subscription. If you're interested in why I'm going to cancel my subscription, you can watch my video explaining why I regret paying $20 for ChatGPT Plus. That's it. I'll see you in the next video.

OpenAI’s ChatGPT and Whisper APIs are a significant step forward for conversational AI. By making it easy for developers to build chatbots and voice assistants, these APIs have the potential to revolutionize the way we interact with technology. With the power of these language models at their fingertips, developers can create more intuitive and engaging user experiences than ever before.

Follow the official ChatGPT API post:

https://openai.com/blog/introducing-chatgpt-and-whisper-apis

Regarding ChatGPT, I would like to share the project I’m developing using the official ChatGPT API. It’s just the beginning!

BotGPT

BotGPT is a new service product that leverages the power of artificial intelligence to provide a personalized chat experience to customers through WhatsApp. Powered by the large language model, ChatGPT, BotGPT is designed to understand natural language and provide relevant responses to customer inquiries in real time.

One of the key benefits of using BotGPT is its ability to provide personalized recommendations to customers based on their preferences and past interactions. BotGPT can suggest products, services, or solutions tailored to each customer’s needs by analyzing customer data. This personalized approach helps to enhance the overall customer experience, leading to increased customer satisfaction and loyalty.

Unleash the potential of GPT-4 with BotGPT today by clicking this link and embarking on a two-day, cost-free journey into conversational AI, with no payment information required. Begin your adventure by clicking here, and to start the monthly subscription after the trial, click here.

Once subscribed, you can manage or cancel your subscription at any time via this link.

Should you encounter any obstacles, you can add the BotGPT WhatsApp number, +1 (205) 754-6921, directly to your phone.

If you have any questions or suggestions, please get in touch using this link.

That’s it for today!

Twitter Sentiment Analysis using Open AI and Power BI

This article is an experiment that explains how to use the OpenAI API to predict the sentiment and the author's gender for recent tweets on a specific topic, and how to show the results in a Power BI dashboard.

What is OpenAI?

The Open AI model is trained on a dataset of 3.6 billion Tweets. The training process takes about 4 days on 8 GPUs. After training, the model can accurately predict the sentiment of a tweet with 85% accuracy. The model can also be fine-tuned to accurately predict the sentiment of tweets from a specific Twitter user with 90% accuracy.

How does it work?

You input some text as a prompt, and the API will return a text completion that attempts to match whatever instructions or context you gave it.

You can think of this as a very advanced autocomplete — the model processes your text prompt and tries to predict what’s most likely to come next.

This video gives a better explanation of how OpenAI works.

In our case, we use the prompt “Decide whether a Tweet’s sentiment is positive, neutral, or negative. Tweet:” to extract the sentiment, and “Extract the gender and decide whether a name’s gender is male, female, or unknown. Name:” to extract the gender from the user name.
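Each prompt wraps the tweet's text in the instruction before it is sent to the completions endpoint. A minimal sketch of the pattern (the helper names are mine; the full script below follows the same structure):

```python
def sentiment_prompt(tweet_text):
    # wrap the tweet in the instruction and end with "Sentiment:" so the
    # model completes the text with the label itself
    return ("Decide whether a Tweet's sentiment is positive, neutral, or negative. "
            "Tweet:\n\n" + tweet_text + "\n\nSentiment:")

def classify_sentiment(api_key, tweet_text):
    import openai  # legacy pre-1.0 interface, as in the script below
    openai.api_key = api_key
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=sentiment_prompt(tweet_text),
        temperature=0.7,
        max_tokens=100,
    )
    return response["choices"][0]["text"].strip()
```

Ending the prompt with "Sentiment:" is what turns a general-purpose completion model into a classifier: the most likely continuation is one of the three labels.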

How does the experiment work?

The Python script gets the recent tweets about a topic and analyzes the sentiment and the author's gender for each tweet. After that, the results are saved in an Excel file. It is also possible to run the Python code directly from Power BI (follow the instructions here), but I don't recommend it because it can get slow.

Before executing the Python script, you must create accounts on the Twitter Developer portal and OpenAI to obtain the “BEARER_TOKEN” and the “OPENAI API KEY”, respectively.

Here is the Python code:

Python
# Twitter sentiment analysis using Open AI and Power BI
# Author: Lawrence Teixeira
# Date: 2022-10-09

# Requirements
# pip install tweepy==4.0
# pip install openai

# Import the packages
import pandas as pd
import tweepy
import openai

# Connect to Twitter API
MY_BEARER_TOKEN = "YOU HAVE TO INSERT HERE YOUR TWITTER BEARER TOKEN"

# create your client
client = tweepy.Client(bearer_token=MY_BEARER_TOKEN)

# Functions to extract sentiment and gender with Open AI API
# if you want to know more examples about how to use Open AI click [here](https://beta.openai.com/examples/).

openai.api_key = "YOU HAVE TO INSERT HERE YOUR OPEN AI KEY"

def Generate_OpenAI_Sentiment(question_type, openai_response):
    response = openai.Completion.create(
      engine="text-davinci-002",
      prompt=question_type + ":\n\n" + format(openai_response) + "\n\nSentiment:",
      temperature=0.7,
      max_tokens=100,
      top_p=1,
      frequency_penalty=0.5,
      presence_penalty=0
    )
    return response['choices'][0]['text']

def Generate_OpenAI_Gender(question_type, openai_response):
    response = openai.Completion.create(
      engine="text-davinci-002",
      prompt=question_type + ":\n\n" + format(openai_response),
      temperature=0.7,
      max_tokens=100,
      top_p=1,
      frequency_penalty=0.5,
      presence_penalty=0
    )
    return response['choices'][0]['text']

# Query to search for tweets. Here you can put whatever you want.
# To learn more about the Twitter query parameters, click [here](https://developer.twitter.com/en/docs/twitter-api/tweets/search/api-reference/get-tweets-search-recent/).
query = "#UkraineWarNews lang:en"

# optionally set a start and end time for fetching tweets
#start_time = "2022-10-07T00:00:00Z"
#end_time   = "2022-10-08T00:00:00Z"

# get tweets from the API
tweets = client.search_recent_tweets(query=query,
                                    #start_time=start_time,
                                    #end_time=end_time,
                                     tweet_fields = ["created_at", "text", "source"],
                                     user_fields = ["name", "username", "location", "verified", "description"],
                                     max_results = 100,
                                     expansions='author_id'
                                     )

## Create a data frame to save the results
tweet_info_ls = []
# map author_id -> user; tweets.includes['users'] is deduplicated, so zipping
# it against tweets.data can misalign tweets with their authors
users_by_id = {user.id: user for user in tweets.includes['users']}
# iterate over each tweet and look up the corresponding user details
for tweet in tweets.data:
    user = users_by_id[tweet.author_id]
    tweet_info = {
        'created_at': tweet.created_at,
        'text': tweet.text,
        'source': tweet.source,
        'name': user.name,
        'username': user.username,
        'location': user.location,
        'verified': user.verified,
        'description': user.description,
        'Sentiment': Generate_OpenAI_Sentiment("Decide whether a Tweet's sentiment is positive, neutral, or negative. Tweet", tweet.text ),
        'Gender': Generate_OpenAI_Gender("Extract the gender and decide whether a name´s gender is male, female, or unknown. Name", user.name ),
        'Query': query.rsplit(' ', 2)[0]
    }
    tweet_info_ls.append(tweet_info)
# create dataframe from the extracted records
tweets_df = pd.DataFrame(tweet_info_ls)

# remove the timezone format
tweets_df['created_at'] = tweets_df['created_at'].dt.tz_localize(None)

# if you use Google Colab, save the result as an Excel file in Google Drive
#tweets_df.to_excel("drive/MyDrive/datasets/Resulados_twitter.xlsx")

# if you want to load the result directly into Power BI
print(tweets_df)

Once you execute this Python code and refresh the Power BI report, you will see the analysis results. In my case, I chose #UkraineWarNews. It's interesting to see in the Power BI dashboard that 78% of the tweets are negative and 16% positive, and that 33% of the authors are classified as male versus 5% female. You can interact with the report by clicking on the visuals.

Click here, to see this report in full-screen mode.

Important: This experiment analyzes only the last 100 tweets, and gender is inferred solely from the spelling of the name, not from how each individual identifies.

You can download the Power BI report here, and, the version of the Python code in Google Colab here.

There are a lot of possibilities for using this solution in the real world. OpenAI provides many other examples, such as keyword extraction, text summarization, grammar correction, a restaurant review creator, and much more. You can access all the examples here. If you have questions about the solution, feel free to leave a comment in the box below.

That’s it for today.