Beyond Automation: Delving Deep into Microsoft’s AutoGen Conversational AI Framework

In the heart of innovation, Microsoft has crafted a gem known as AutoGen, a framework designed to foster the creation of applications through Large Language Models (LLMs). Unveiling a world where multi-agent conversations drive solutions, AutoGen is not just a tool but a revolutionary stride in AI technology.

Moreover, the realm of Large Language Models (LLMs) has been a buzzing hive of potential waiting to be harnessed. With AutoGen, the wait is over as it paves the way for seamless interactions among AI agents, humans, and tools, crafting a narrative of endless possibilities.

The Core Essence of AutoGen

At its core, AutoGen is an enabler, a catalyst that simplifies the intricacies of developing LLM-based applications. Its philosophy is rooted in collaborative problem-solving, where multiple agents can converse and solve tasks collectively.

Additionally, AutoGen goes beyond mere automation. It embodies optimization, ensuring that the workflow of applications is automated and optimized for peak performance. This is where AutoGen shines, revolutionizing the LLM application framework.

What capabilities does AutoGen offer?

The brilliance of AutoGen is seen in its ability to seamlessly blend the power of LLMs, human insights, and other tools, thereby simplifying the orchestration and optimization of complex workflows inherent in LLM applications. AutoGen facilitates efficient problem-solving through customizable conversational agents and paves the way for innovative applications across various domains.

  1. Multi-Agent Conversations:
  • You can create multi-agent systems where agents with specialized capabilities converse to solve tasks collaboratively. These conversations can occur among AI agents, between humans and AI, or in any mix of the two, expanding possibilities.
  2. LLM Workflow Automation and Optimization:
  • AutoGen simplifies the automation and optimization of intricate LLM workflows, which is especially beneficial as LLM-based applications become increasingly complex. This alleviates the challenges of orchestrating optimal workflows with robust performance.
  3. Customizable Conversational Agents:
  • Design and customize agents to your needs, whether based on LLMs, other tools, or even human inputs. This customization facilitates more effective solutions tailored to the unique requirements of your projects.
  4. Human-AI Collaboration:
  • AutoGen facilitates seamless integration between human input and AI capabilities, allowing for collaborative problem-solving. This is particularly useful in scenarios where the strengths of both humans and AI can be leveraged for better outcomes.
  5. Development of Advanced Applications:
  • Use AutoGen to develop advanced applications such as code-based question-answering systems, supply-chain optimization, and other scenarios where automated and optimized multi-agent conversations can significantly reduce manual interactions.
  6. Enhanced LLM Capabilities:
  • Extend the capabilities of advanced LLMs like GPT-4 by addressing their limitations through integration with other tools and human input, making them more robust and capable of handling multi-faceted tasks.
  7. Learning and Experimentation:
  • Being an open-source framework, AutoGen provides a playground for developers, researchers, and enthusiasts to learn, experiment, and contribute to the growing knowledge in AI and LLMs.
  8. Research and Innovation:
  • AutoGen can serve as a solid foundation for research and innovation in AI, especially in exploring the dynamics of multi-agent systems and human-AI collaboration.
  9. Community Contributions:
  • Being open-source, AutoGen encourages community contributions, which can lead to the development of new features, capabilities, and improvements in the framework, fostering a collaborative environment for advancing the state of AI.

AutoGen, with its ability to meld the prowess of LLMs, humans, and other tools through conversational agents, opens up a vast spectrum of opportunities for developers and organizations alike to harness the potential of AI in novel and impactful ways.

Agent Concepts Behind AutoGen

AutoGen abstracts and implements conversable agents designed to solve tasks through inter-agent conversations. Specifically, the agents in AutoGen have the following notable features:

  • Conversable: Agents in AutoGen are conversable, which means that any agent can send and receive messages from other agents to initiate or continue a conversation.
  • Customizable: Agents in AutoGen can be customized to integrate LLMs, humans, tools, or a combination of them.

The figure below shows the built-in agents in AutoGen.

Source: Multi-agent Conversation Framework | AutoGen (microsoft.github.io)

The agents ConversableAgent, AssistantAgent, UserProxyAgent, and GroupChatManager are classes provided within the AutoGen framework, Microsoft's system for facilitating multi-agent conversations with large language models (LLMs). Here’s a detailed breakdown of these agents:

  1. ConversableAgent:
  • A generic class designed for agents capable of conversing with each other through message exchange to complete a task.
  • Agents can communicate with other agents and perform actions, and their actions may differ based on the messages they receive.
  • Provides an auto-reply capability for more autonomous multi-agent communication while retaining the option for human intervention.
  • Extensible by registering reply functions with the register_reply() method.
  2. AssistantAgent:
  • Acts as an AI assistant using LLMs by default, without requiring human input or code execution.
  • Can write Python code for a user to execute when a task description message is received, with the code generated by an LLM like GPT-4.
  • Receives execution results and suggests corrections or bug fixes if necessary.
  • Its behavior can be altered by passing a new system message, and LLM inference configuration can be managed via llm_config.
  3. UserProxyAgent:
  • Serves as a proxy agent for humans, soliciting human input for the agent’s replies at each interaction turn by default while also having the ability to execute code and call functions.
  • Triggers code execution automatically upon detecting an executable code block in the received message when no human user input is provided.
  • Code execution can be disabled, and LLM-based responses, which are disabled by default, can be enabled via llm_config. When llm_config is set as a dictionary, the UserProxyAgent can generate replies using an LLM when code execution is not performed.
  4. GroupChatManager:
  • A class inherited from ConversableAgent, designed to manage a group chat involving multiple agents.
  • Provides a method run_chat to initiate and manage a group chat, with parameters for messages, sender, and configuration.
  • This class appears to be in preview, indicating it might be a newer or less stable feature of AutoGen.

In practical terms, these agents facilitate complex workflows and interaction patterns among multiple entities, be they other AI agents, human users, or a combination of both. For example, the GroupChatManager could potentially moderate conversations between agents and humans, passing messages according to specific rules.
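To make these roles concrete, here is a minimal two-agent sketch. This is a sketch under assumptions, not an official recipe: it assumes the pyautogen package, an OpenAI API key with GPT-4 access, and an illustrative task message.

Python
# Minimal AutoGen two-agent loop: pip install pyautogen
import autogen

# Assumption: an OpenAI API key with GPT-4 access; adjust model/key as needed.
config_list = [{"model": "gpt-4", "api_key": "sk-..."}]

# The AssistantAgent writes code; the UserProxyAgent executes it locally.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # set to "ALWAYS" to approve every step yourself
    max_consecutive_auto_reply=5,
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The proxy sends the task; the two agents then converse until it is solved.
user_proxy.initiate_chat(
    assistant,
    message="Plot NVDA's year-to-date stock price and save it as nvda.png.",
)

# For more than two agents, a GroupChatManager routes the messages
# (hypothetical third agent "critic" shown, commented out):
# groupchat = autogen.GroupChat(agents=[user_proxy, assistant, critic], messages=[], max_round=12)
# manager = autogen.GroupChatManager(groupchat=groupchat, llm_config={"config_list": config_list})
# user_proxy.initiate_chat(manager, message="...")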

Examples of Various Applications Executed with AutoGen

The figure below shows six examples of applications built using AutoGen.

A Rising Competitive Arena

The domain of Large Language Model (LLM) application frameworks is swiftly evolving, with Microsoft’s AutoGen contending robustly amidst many competitors. LangChain is a framework for constructing a diverse range of LLM applications, spanning chatbots, text summarizers, and agents. At the same time, LlamaIndex provides abundant tools for interfacing LLMs with external data reservoirs like documents and databases.

Similarly, libraries such as AutoGPT, MetaGPT, and BabyAGI focus on LLM agents and multi-agent applications. ChatDev employs LLM agents to mimic a full-fledged software development team. Concurrently, Hugging Face’s Transformers Agents library empowers developers to craft conversational applications that bridge LLMs with external tools.

The arena of LLM agents is a burgeoning focal point in research and development, with early-stage models already devised for a spectrum of tasks, including product development, executive assistance, shopping, and market analysis. Research has unveiled the potential of LLM agents to simulate the behavior of large populations or to generate realistic non-player characters in gaming environments. Yet a substantial portion of this endeavor remains in the proof-of-concept stage, not quite ready for full-fledged production due to hurdles like hallucinations and erratic behavior exhibited by LLM agents.

Nonetheless, the outlook for LLM applications is promising, with agents poised to assume a pivotal role. Major tech entities are placing substantial bets on AI copilots becoming integral components of future applications and operating systems. LLM agent frameworks will allow companies to design customized AI copilots. The foray of Microsoft into this burgeoning arena with AutoGen underscores the escalating competition surrounding LLM agents and their prospective future impact.

Bridging the Gap: Human and AI Interaction

One of AutoGen’s hallmark features is its seamless integration of human input within the AI conversation. This blend of human and AI interaction is innovative and a game-changer in resolving complex tasks.

Moreover, this integration goes a long way in addressing the limitations of LLMs, making AutoGen a torchbearer in promoting harmonious human-AI collaborations.

Conclusion

AutoGen is more than just a tool; it’s a promise of the future. With its relentless innovation, Microsoft has given the world a framework that simplifies the development of LLM applications and pushes the boundaries of what’s achievable.

Moreover, as we delve deeper into the realm of AI, frameworks like AutoGen are set to play a pivotal role in shaping the narrative of AI, presenting a future where the sky is not the limit but just the beginning.

That’s it for today!

Sources

AutoGen: Enabling next-generation large language model applications – Microsoft Research

microsoft/autogen: Enable Next-Gen Large Language Model Applications (github.com)

Microsoft’s AutoGen has multiple AI agents talk to do your work | VentureBeat

The New OpenAI GPT-4 Vision on ChatGPT: Bridging the Gap Between Text and Image Understanding

On September 25th, 2023, OpenAI expanded the capabilities of its advanced model, GPT-4, by introducing the ability to interpret images and speech alongside text. Dubbed GPT-4V(ision) or GPT-4V, this feature catapults GPT-4 into the realm of multimodal models, offering a richer interaction experience by allowing visual question answering (VQA). Users can upload images, inquire about them, and even have GPT-4V assess context and relationships within the visuals, marking a significant stride toward a holistic understanding of textual and visual data and broadening the horizons for real-world applications of GPT-4.

The GPT-4V feature sets a new benchmark in AI by successfully bridging the gap between text and image understanding. This leap is imperative in evolving towards more intuitive and human-like interactions with machines. Through experiments, GPT-4V has showcased its potential in accurately interpreting and answering questions about various visual inputs, from understanding humor in memes to identifying currency types and even providing a high-level analysis of a movie from a mere image. This impressive ability opens up endless possibilities and applications, making GPT-4V a groundbreaking stride in the journey of AI.

What can I do with GPT-4 Vision?

  1. Object Identification: GPT-4 Vision can accurately pinpoint objects and provide a detailed description by processing an image. Its recognition capabilities are spectacular, from everyday objects to complex machinery.
  2. Text Transcription from Images: Say goodbye to manually typing text from an image. This feature allows users to seamlessly extract written or printed text from images, a boon for those dealing with vast documents or old archives.
  3. Data Deciphering: Extract essential data from graphical content, such as charts, diagrams, and infographics, translating visual data into comprehensive text insights.
  4. Multi-condition Processing: GPT-4 Vision is adept at analyzing images under multiple conditions. Whether understanding a photograph taken under varying lighting or discerning details in a cluttered scene, its analytical prowess is unmatched.
  5. Educational Assistance: Students and educators can leverage GPT-4 Vision to analyze diagrams, illustrations, and visual aids, transforming them into detailed textual explanations, thus enhancing the learning process.
  6. Coding Enhancement: For the tech-savvy, GPT-4 Vision can interpret visual code structures or flowcharts and translate them into actual coding languages, making the development process swifter and more efficient.
  7. Design Understanding: For designers, understanding intricate patterns, layouts, or visual structures becomes easier. GPT-4 Vision can break down design elements and offer textual insights, aiding in refining and improving design concepts.

How To Get GPT-4 Vision Access on ChatGPT?

To access GPT-4 Vision, follow these steps:

  1. Visit the ChatGPT website and sign in or create an account.
  2. Click the “Upgrade to Plus” option.
  3. Click “Upgrade plan” under ChatGPT Plus in the pop-up.
  4. Enter your payment information, then click “Subscribe”.
  5. After payment, select “GPT-4” from the drop-down menu on your screen to use GPT-4 with ChatGPT.
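
The steps above cover the ChatGPT interface, where GPT-4V debuted. OpenAI has since exposed vision through its chat completions API as well; the snippet below is a minimal sketch of that route, assuming the openai Python package (v1.x), the gpt-4-vision-preview model name, and a hypothetical publicly reachable image URL.

Python
# Hedged sketch: GPT-4 with vision via the OpenAI API (not the ChatGPT UI).
# Assumes openai>=1.0 and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumption: the vision-capable model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What does this tell you? Be very, very exact."},
            # Hypothetical image URL for illustration:
            {"type": "image_url", "image_url": {"url": "https://example.com/infographic.png"}},
        ],
    }],
    max_tokens=500,
)
print(response.choices[0].message.content)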

Examples of Incredible Features You Can Do with GPT-4 Vision

Data Deciphering

It’s fantastic. I gave it this infographic with this prompt: “What does this tell you? Be very, very exact.”

Here are the results ChatGPT gave me.

This infographic provides insights into what SEO professionals believe will be the most significant disruptors to SEO performance in 2024. Here are the exact details:

  1. Title and Context:
  • The title is “SEO Performance In 2024.”
  • SEO professionals were asked about what they thought would cause the most significant disruption to their SEO performance in the upcoming years.
  2. Most Significant Disruption In SEO Performance:
  • Generative AI is perceived to be the top disruptor, with 21.0% of professionals identifying it.
  • E-E-A-T & Trusted Sources comes second with 13.5%. There’s a note mentioning that an extra “E” was added to E-A-T in December 2022, which stands for “Experience.”
  • Automation (Tools) is the third most significant disruptor at 10.7%.
  • Other factors listed in descending order of their perceived disruption potential are:
    • Core Web Vitals (8.2%)
    • Google Discover (6.4%)
    • Social Search (6.3%)
    • Structured Data (5.9%)
    • Site Security (5.3%)
    • Google Passages (4.2%)
    • Knowledge Graph and Entities (3.9%)
    • Video Search (3.6%)
    • SERP Features (3.5%)
    • Mobile SEO (2.8%)
    • Voice Search (2.6%)
    • None/Other (2.2%).
  3. Visual Imagery:
  • There’s an image of a mobile phone with the ChatGPT logo on its screen.
  • Another visual element is a colorful wheel containing the letters E, E, A, T, and icons of various tech giants.
  4. Additional Text Information:
  • SEO professionals have many new technologies and updates to consider for the year. Specifically, tools like ChatGPT and Google’s “Generative Search Experience” are mentioned as generating considerable interest and concern among these professionals.

In summary, the infographic emphasizes the rapidly evolving landscape of SEO, with new technologies, especially Generative AI, becoming significant factors that professionals in the field need to be aware of and adapt to.


Another example of extracting information from infographics with the same prompt: “What does this tell you? Be very, very exact.”

Certainly! The provided infographic conveys information about the usage patterns of social media among adults and teens. Here’s a detailed breakdown of the infographic’s contents:

  1. Title: The infographic is titled “Adults vs Teens: How We Use Social Media.”
  2. Background Information: The Pew Research Center released a study comparing the social media usage of adults and teens. The commentary mentions that teens today are inherently digital-savvy, having grown up in the digital era. However, when it comes to newer social platforms like Pinterest and Instagram, adults appear to have a slight edge in usage.
  3. Overall Social Media Usage:
  • Adults: 72% of adults use social media.
  • Teens: 81% of teens use social media.
  4. Platform-Specific Usage:
  • Facebook: Teens dominate the usage, with 94% of teens using it compared to 67% of adults.
  • Twitter: 26% of teens use Twitter, compared to 18% of adults.
  • Pinterest: 15% of adults use Pinterest, whereas only 1% of teens do.
  • Instagram: 13% of adults use Instagram, compared to 11% of teens.
  • Tumblr: 6% of adults use Tumblr, and 5% of teens use it.
  5. Age Breakdown of Social Media Users:
  • Ages 13-18: 89% use social media.
  • Ages 18-29: 81% use social media.
  • Ages 30-49: 78% use social media.
  • Ages 50-64: 60% use social media.
  • Age 65+: 43% use social media.
  6. Source: The data comes from Pew Research Center studies conducted in August 2013, May 2013, and February 2013.
  7. Sponsor/Creator: The infographic was created by “NextAdvisor,” which offers in-depth, independent research.

In summary, the infographic suggests that while teens use social media platforms like Facebook more dominantly, adults are more inclined towards newer platforms like Pinterest and Instagram. The percentage of users also declines as the age group increases, with the youngest age group (13-18) having the highest percentage of users and the oldest age group (65+) having the lowest.


Text Transcription

Text Transcription from Images is instrumental when you need to pull text out of a picture. I used this prompt: “Can you extract the text from this image?”

Another example with text transcription:


Educational Assistance


Coding Enhancement

Create a live website from an image.


Multi-Condition Processing

This is awesome. Can you imagine what you could do with this feature on a mobile phone?


Identifying Objects

There are so many things you can do with this technology. Look at the object identification below.


What Are the Potential Risks of GPT-4V?

OpenAI released a paper outlining potential risks associated with the use of GPT-4V, which include:

  • Privacy risks from identifying people in images or determining their location, potentially impacting companies’ data practices and compliance. The paper notes that GPT-4V has some ability to identify public figures and geolocate images.
  • Potential biases during image analysis and interpretation could negatively impact different demographic groups.
  • Safety risks from providing inaccurate or unreliable medical advice, specific directions for dangerous tasks, or hateful/violent content.
  • Cybersecurity vulnerabilities such as solving CAPTCHAs or multimodal jailbreaks.

Risks posed by the model have resulted in limitations, such as its refusal to offer analysis of images with people.

Screenshot from ChatGPT, September 2023

Overall, brands interested in leveraging GPT-4V for marketing must assess and mitigate these and other generative AI usage risks to use the technology responsibly and avoid negative impacts on consumers and brand reputation.

Conclusion

In conclusion, OpenAI’s GPT-4 Vision marks a monumental step towards harmonizing text and image understanding, paving the way for more intuitive and enriched interactions between humans and machines. As GPT-4V unfolds its potential, it not only broadens the horizon for real-world applications but also beckons a future where AI can perceive and interpret the world in a manner akin to human cognition, thereby significantly driving forward the frontier of what is achievable in the realm of artificial intelligence.

That’s it for today!

Sources:

11 creative ways to use GPT-4’s vision features in ChatGPT (the-decoder.com)

First Impressions with GPT-4V(ision) (roboflow.com)

7 Incredible Features of GPT-4 Vision (analyticsindiamag.com)

GPT-4 With Vision: Examples, Limitations, And Potential Risks (searchenginejournal.com)

Open Interpreter’s ChatGPT-Like Interface: The Future of Natural Language Code Execution

There’s an ever-increasing need for practical, adaptable, and user-friendly tools in the dynamic world of software development and programming. Developers are looking for solutions that allow them to operate effortlessly in their local development setups without being hampered by the confines of online hosted platforms. This article delves into a revolutionary answer to these demands – Open Interpreter.

While the Code Interpreter from OpenAI, based on the GPT-4 architecture, is undeniably transformative, it has limitations. Being a hosted solution, OpenAI’s offering runs in a distant, regulated environment, restricting developers from harnessing the full capabilities of their local configurations. Among the challenges faced are no access to the internet, a confined list of pre-loaded packages, a cap on uploads at 100 MB, and a runtime restriction of 120 seconds. Moreover, if the session terminates, any produced files or links vanish, creating obstacles for extended assignments.

With Open Interpreter, developers can fully utilize their computer’s resources, side-stepping the limitations of online services.

Understanding Open Interpreter

At its core, Open Interpreter is a bridge between human language and computer code. Instead of manually writing out every line of code, users can describe their intentions in natural language, and Open Interpreter translates that into executable code. This offers an entirely new programming approach, simplifying and streamlining the process.

Features of Open Interpreter

  1. Code Interpretation from Natural Language: The primary selling point is that this feature allows users to describe a task in English, and the tool will convert it into executable code. It significantly reduces barriers to programming for non-developers.
  2. ChatGPT-Like Interface: With a user-friendly, chatbot-like interface, users can have interactive programming sessions, making the coding experience more intuitive.
  3. Data Handling Capabilities: Beyond simple code interpretation, Open Interpreter is a powerhouse for data-related tasks. It provides a comprehensive toolset for data professionals, from data visualization to in-depth analysis.
  4. Open-Source Nature: Open Interpreter is a gift to the developer community. As an open-source project, developers can contribute to its growth, refining its features and expanding its capabilities.
  5. Local Execution: In an age where data privacy is paramount, Open Interpreter allows for local code execution, ensuring that sensitive data remains on the user’s machine.

What can I do with Open Interpreter?

  1. Educational Purposes: This can revolutionize teaching programming and other computer-related tasks. Students can communicate with their machines more intuitively, eliminating the steep learning curve associated with traditional programming.
  2. Data Science & Analysis: As it can plot, clean, and analyze large datasets, researchers and data scientists can streamline their workflow using natural language to manipulate and visualize their data.
  3. Automation: For individuals who aren’t traditionally programmers, Open Interpreter might offer a more accessible avenue for automating tasks on their computers, from simple file manipulations to more complex functions like web scraping.
  4. Content Creation: The ability to create and edit photos, videos, and PDFs using natural language commands could revolutionize how content creators work. Imagine telling your computer how to edit a video or photo without manually going through the process.
  5. Integration with Other Tools: It works with Code Llama, or with GPT-4 via an OpenAI API key, which suggests it can be integrated with other tools, increasing its utility.
  6. Accessibility: For people with disabilities, communicating with a computer using natural language could make computing more accessible, especially if paired with voice recognition software.
  7. Security: As with any powerful tool, there are potential risks. Running code or commands without understanding the implications can lead to unintended consequences. Users should be educated about the risks, especially when executing commands that can modify their system or data.
  8. Open Source Community: The rapid growth in stars on GitHub indicates a strong community interest. This suggests that developers worldwide might contribute to its development, making the tool even more robust and versatile.

How do you install and use the Open Interpreter?

At this moment, you have to install it inside a Python environment.

  1. First, you have to install Python; in my case, I have Python 3.11.2 installed.
  2. This step is not strictly necessary, but I recommend installing Visual Studio Code. Press Shift+Ctrl+P and create a Python virtual environment.
  3. I created the requirements.txt file with the ChatGPT Code Interpreter libraries.
Plaintext
pandas
numpy
scipy
xarray
#Machine Learning:
scikit-learn
xgboost
keras
torch
#Natural Language Processing:
nltk
spacy
textblob
gensim
#Data Visualization:
matplotlib
seaborn
plotly
bokeh
#Web Scraping and Network:
requests
urllib3
aiohttp
beautifulsoup4
#Deep Learning:
keras
torch
theano
#Image Processing:
pillow
imageio
opencv-python
scikit-image
#Audio Processing:
librosa
pyaudio
soundfile
#File Format:
openpyxl
xlrd
pyPDF2
python-docx
#Database:
sqlalchemy
psycopg2
mysql-connector-python
#Web Development:
flask
django
tornado
quart
#Others:
pytest
joblib
pytz
pyyaml

python-docx
pdfkit
wkhtmltopdf
pandoc 
pdfplumber  
reportlab
open-interpreter

Type pip install -r requirements.txt to install the most used Python libraries.

PowerShell
pip install -r requirements.txt

After that, install the Open Interpreter.

PowerShell
pip install open-interpreter

After installation, run interpreter:

PowerShell
interpreter -y

Now, you are free to use and test the Code Interpreter. If you want more detailed information, go to the official GitHub page.

Insert the prompt:

create a csv file with a year from 2000 to 2023 and total of sales amount column with random numbers

Note that a file called sales_data.csv was automatically created in the folder.

Now type this prompt: create a line visual to show this sales_data.csv and plot on the screen.
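
For reference, the code Open Interpreter writes for these two prompts looks roughly like the following. This is my reconstruction of the idea, not the interpreter's literal output; the column names are illustrative.

Python
# Illustrative reconstruction of the code Open Interpreter generates here.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Prompt 1: a CSV with years 2000-2023 and a random total-sales column.
years = list(range(2000, 2024))
sales = np.random.randint(10_000, 100_000, size=len(years))
pd.DataFrame({"year": years, "total_sales_amount": sales}).to_csv(
    "sales_data.csv", index=False
)

# Prompt 2: a line chart of sales_data.csv, plotted on screen.
df = pd.read_csv("sales_data.csv")
df.plot(x="year", y="total_sales_amount", kind="line", title="Total Sales by Year")
plt.show()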

You can do whatever you want. You can also write code that uses Open Interpreter, like the example below.

Python
import interpreter

interpreter.chat("Plot AAPL and META's normalized stock prices") # Executes a single command
interpreter.chat() # Starts an interactive chat

Type py teste.py to execute this code.

PowerShell
PS D:\interpreter> & d:/interpreter/.venv/Scripts/Activate.ps1
(.venv) PS D:\interpreter> py teste.py

▌ Model set to GPT-4
Tip: To run locally, use interpreter --local

Open Interpreter will require approval before running code. Use interpreter -y to bypass this.

Press CTRL-C to exit.

  To plot AAPL and META's normalized stock prices, we will need to follow these steps:

   1 Install necessary packages. We will need pandas, yfinance, and matplotlib for Python.
   2 Download the stock prices for AAPL and META using yfinance.
   3 Normalize the stock prices.
   4 Plot the normalized stock prices.

  Let's start with the first step, which is installing the necessary packages.

  import pkg_resources                                                                                                                                                          
                                                                                                                                                                                
  REQUIRED_PACKAGES = ['pandas', 'yfinance', 'matplotlib']                                                                                                                      
                                                                                                                                                                                
  for package in REQUIRED_PACKAGES:                                                                                                                                             
      try:                                                                                                                                                                      
          dist = pkg_resources.get_distribution(package)                                                                                                                        
          print('{} ({}) is installed'.format(dist.key, dist.version))                                                                                                          
      except pkg_resources.DistributionNotFound:                                                                                                                                
          print('{} is NOT installed'.format(package))                                                                                                                          


  pandas (2.1.0) is installed
  yfinance (0.2.28) is installed
  matplotlib (3.8.0) is installed

  Great! All the necessary packages are already installed.

  Now, let's move on to the next step: downloading the stock prices for AAPL (Apple Inc.) and META (Meta Platforms Inc., formerly Facebook Inc.) using the yfinance package.    
  We will download the last 5 years of daily closing prices.

  import yfinance as yf                                                                                                                                                         
                                                                                                                                                                                
  # Define the ticker symbols                                                                                                                                                   
  symbols = ['AAPL', 'META']                                                                                                                                                    
                                                                                                                                                                                
  # Download the stock data                                                                                                                                                     
  stock_data = yf.download(symbols, start='2017-01-01', end='2022-12-31')['Close']                                                                                              
                                                                                                                                                                                
  # Print the first few rows of the data                                                                                                                                        
  stock_data.head()                                                                                                                                                             

  [*********************100%%**********************]  2 of 2 completed
  AAPL        META
  Date
  2017-01-03  29.037500  116.860001
  2017-01-04  29.004999  118.690002
  2017-01-05  29.152500  120.669998
  2017-01-06  29.477501  123.410004
  2017-01-09  29.747499  124.900002

  We have successfully downloaded the stock prices for AAPL and META.

  The next step is to normalize these stock prices. Normalizing the stock prices allows us to compare the two stocks on the same scale. We will normalize the prices such that  
  the first price is 1 and all subsequent prices are relative to the first price.

  normalized_data = stock_data / stock_data.iloc[0]                                                                                                                             
                                                                                                                                                                                
  # Print the first few rows of the normalized data                                                                                                                             
  normalized_data.head()                                                                                                                                                        

  AAPL      META
  Date
  2017-01-03  1.000000  1.000000
  2017-01-04  0.998881  1.015660
  2017-01-05  1.003960  1.032603
  2017-01-06  1.015153  1.056050


  The stock prices for AAPL and META have been successfully normalized.

  Now, let's move on to the final step: plotting the normalized stock prices. We will use the matplotlib package for this.

  import matplotlib.pyplot as plt                                                                                                                                               
                                                                                                                                                                                
  # Create a new figure                                                                                                                                                         
  plt.figure(figsize=(14, 7))                                                                                                                                                   
                                                                                                                                                                                
  # Plot the normalized stock prices                                                                                                                                            
  for symbol in symbols:                                                                                                                                                        
      plt.plot(normalized_data.index, normalized_data[symbol], label=symbol)                                                                                                    
                                                                                                                                                                                
  # Add a title and labels                                                                                                                                                      
  plt.title('Normalized Stock Prices: AAPL vs META')                                                                                                                            
  plt.xlabel('Date')                                                                                                                                                            
  plt.ylabel('Normalized Price')                                                                                                                                                
  plt.legend()                                                                                                                                                                  
                                                                                                                                                                                
  # Save the plot as an image                                                                                                                                                   
  plt.savefig('normalized_stock_prices.png')                                                                                                                                    
                                                                                                                                                                                
  # Close the plot                                                                                                                                                              
  plt.close()                                                                                                                                                                   

  <Figure size 1400x700 with 0 Axes>
  [<matplotlib.lines.Line2D object at 0x0000024DE75D0B90>]
  [<matplotlib.lines.Line2D object at 0x0000024DE7604990>]
  Text(0.5, 1.0, 'Normalized Stock Prices: AAPL vs META')
  Text(0.5, 0, 'Date')
  Text(0, 0.5, 'Normalized Price')
  <matplotlib.legend.Legend object at 0x0000024DE75DAB50>

Let’s try asking for a search on the internet: What are the last 10 BBC news headlines?

PowerShell
(.venv) PS D:\interpreter> interpreter -y
> What are the last 10 BBC news headlines?

  To get the last 10 BBC news headlines, we can use web scraping techniques. Python has several libraries for this purpose, such as BeautifulSoup and requests. Here is the     
  plan:

   1 Install necessary Python packages (requests and BeautifulSoup) if they are not already installed.
   2 Fetch the BBC News homepage HTML using requests.
   3 Parse the HTML using BeautifulSoup to find the news headlines.
   4 Extract and return the last 10 news headlines.

  Let's start with the first step, which is to check if the necessary Python packages are installed. If not, we will install them.



  import pkg_resources                                                                                                                                                          
                                                                                                                                                                                
  REQUIRED_PACKAGES = ['beautifulsoup4', 'requests']                                                                                                                            
                                                                                                                                                                                
  for package in REQUIRED_PACKAGES:                                                                                                                                             
      try:                                                                                                                                                                      
          dist = pkg_resources.get_distribution(package)                                                                                                                        
          print('{} ({}) is installed'.format(dist.key, dist.version))                                                                                                          
      except pkg_resources.DistributionNotFound:                                                                                                                                
          print('{} is NOT installed'.format(package))                                                                                                                          


  beautifulsoup4 (4.12.2) is installed
  requests (2.31.0) is installed


  Great! Both 'beautifulsoup4' and 'requests' packages are already installed.

  Now, let's move to the next step of the plan: Fetch the BBC News homepage HTML using requests. We will send a GET request to the BBC News homepage URL and print the status   
  of the response to ensure we are able to connect to the page.

  import requests                                                                                                                                                               
                                                                                                                                                                                
  url = 'https://www.bbc.co.uk/news'                                                                                                                                            
  response = requests.get(url)                                                                                                                                                  
                                                                                                                                                                                
  print('Status code:', response.status_code)                                                                                                                                   

  Status code: 200


  The status code 200 indicates that we successfully fetched the HTML of the BBC News homepage.

  Now, let's move to the next step of the plan: Parse the HTML using BeautifulSoup to find the news headlines. We will parse the HTML and find the elements that contain the    
  news headlines. For this, we need to know the structure of the BBC News homepage. The headlines are usually contained in HTML elements with specific class names. We will     
  try to find these elements and print some of them to verify that we are correctly extracting the headlines.

  from bs4 import BeautifulSoup

  soup = BeautifulSoup(response.text, 'html.parser')

  # BBC News headlines are usually in h3 elements with class 'gs-c-promo-heading__title'
  headlines = soup.find_all('h3', class_='gs-c-promo-heading__title')

  # Print the first 5 headlines to verify
  for headline in headlines[:5]:
      print(headline.text.strip())

  Russell Brand accused of rape and sexual assault
  Russell Brand accused of rape and sexual assault
  A barren wasteland with a lingering smell of death
  Diamonds and diplomacy: How a US envoy fell from grace
  'Everything has just been completely ripped away'


  We have successfully extracted the news headlines from the BBC News homepage. The headlines printed above are a sample of the headlines we found.

  Now, let's move to the final step of the plan: Extract and return the last 10 news headlines. We will extract the text from the last 10 headline elements and return them.    

  # Get the last 10 headlines                                                                                                                                                   
  last_10_headlines = [headline.text.strip() for headline in headlines[-10:]]                                                                                                   
                                                                                                                                                                                
  last_10_headlines                                                                                                                                                             


  ['How old is too old to lead?', 'The companies staying fully remote', "A 'beach towel revolt' sweeping Greece", 'Sexton breaks record as eight-try Ireland beat Tonga',       
  'Wales struggle past Portugal at World Cup', 'Brighton beat Man Utd as fans boo Ten Hag', 'Barcelona crush Betis to go top', 'Sainz on Singapore pole as angry Verstappen     
  struggles', "'We have a good vibe' - Ten Hag avoids Sancho questioning", 'Newcastle beat Brentford to ease pressure on Howe']

  Here are the last 10 BBC news headlines:

    1 'How old is too old to lead?'
    2 'The companies staying fully remote'
    3 "A 'beach towel revolt' sweeping Greece"
    4 'Sexton breaks record as eight-try Ireland beat Tonga'
    5 'Wales struggle past Portugal at World Cup'
    6 'Brighton beat Man Utd as fans boo Ten Hag'
    7 'Barcelona crush Betis to go top'
    8 'Sainz on Singapore pole as angry Verstappen struggles'
    9 "'We have a good vibe' - Ten Hag avoids Sancho questioning"
   10 'Newcastle beat Brentford to ease pressure on Howe'

To finish the test, I asked it to create a PDF file with these 10 BBC news headlines.

PowerShell
> can you crate a pdf file with this 10 BBC news headlines?

  Sure, we can create a PDF file with these headlines. We will use the Python library reportlab for this purpose. Here is the plan:

   1 Install the necessary Python package (reportlab) if it is not already installed.
   2 Create a PDF file using reportlab and add the headlines to it.
   3 Save the PDF file.

  Let's start with the first step, which is to check if the necessary Python package is installed. If not, we will install it.


  import pkg_resources

  REQUIRED_PACKAGES = ['reportlab']

  for package in REQUIRED_PACKAGES:
      try:
          dist = pkg_resources.get_distribution(package)
          print('{} ({}) is installed'.format(dist.key, dist.version))
      except pkg_resources.DistributionNotFound:
          print('{} is NOT installed'.format(package))


  reportlab (4.0.4) is installed


  Great! The 'reportlab' package is already installed.

  Now, let's move to the next step of the plan: Create a PDF file using reportlab and add the headlines to it. We will create a new PDF file, add each headline as a separate   
  paragraph, and then save the PDF file.



  from reportlab.lib.pagesizes import letter
  from reportlab.pdfgen import canvas

  # Create a new PDF file
  c = canvas.Canvas('BBC_headlines.pdf', pagesize=letter)

  # Set the font and size
  c.setFont('Helvetica', 12)

  # Add each headline on its own line
  # (last_10_headlines is the list scraped in the previous step)
  y = 750  # Start near the top of the page
  for i, headline in enumerate(last_10_headlines, 1):
      c.drawString(50, y, f'{i}. {headline}')
      y -= 25  # Move down for the next headline

  # Save the PDF file
  c.save()

  The PDF file 'BBC_headlines.pdf' with the last 10 BBC news headlines has been created successfully. You can find it in the current working directory (D:\interpreter).        

It’s fantastic. Can you imagine how many things you can do with this?
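One aside on the dependency check Open Interpreter generated above: pkg_resources is deprecated in recent setuptools releases, so if you adapt the script yourself, the standard library's importlib.metadata (available since Python 3.8) is the modern equivalent. A minimal sketch:

  from importlib.metadata import version, PackageNotFoundError

  REQUIRED_PACKAGES = ['reportlab']

  for package in REQUIRED_PACKAGES:
      try:
          # version() reads the installed package's metadata
          print(f'{package} ({version(package)}) is installed')
      except PackageNotFoundError:
          print(f'{package} is NOT installed')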

The project also has a Google Colab version with many usage examples.

I created a GitHub repository with this project. If you want, you can download or clone it from this link: https://github.com/LawrenceTeixeira/OpenInterpreter.git

The official Open Interpreter website is below:

https://openinterpreter.com/


How Does Open Interpreter Stand Out?

While numerous code interpreters are available, Open Interpreter sets itself apart in its approach to bridging human language and computer code. The emphasis on natural language processing combined with the power of local execution ensures that it’s both accessible to newcomers and powerful enough for seasoned developers.

Conclusion

Open Interpreter is a significant advancement in local development platforms. Its features let developers work productively and flexibly, free from the constraints of online services. It is more than just a tool; it is a step forward in the evolution of programming. By combining natural language processing, AI, and code interpretation, it paves the way for a more inclusive and streamlined coding environment. As the world embraces digital solutions, tools like Open Interpreter make technology accessible to everyone.

That’s it for today!

Sources:

The Open Interpreter Project

KillianLucas/open-interpreter: OpenAI’s Code Interpreter in your terminal, running locally (github.com)

https://beebom.com/how-run-open-interpreter-run-chatgpt-code-interpreter-locally/

https://openaimaster.com/what-is-open-interpreter/

Unlocking the Power of GPT-4: An Introduction to Free ChatBot AI

As technology advances at an unprecedented rate, there is an increasing demand for tools that can simplify, innovate, and augment human experiences. One such technological marvel is ChatGPT, based on OpenAI’s GPT-4 architecture. With the introduction of the Free ChatBot AI, a complimentary version of ChatGPT, everyone can now experience the power of conversational AI without breaking the bank. In this post, we will explore what this free ChatBot AI is and how you can harness its capabilities.

What is Free ChatBot AI?

Free ChatBot AI is a conversational model based on OpenAI’s cutting-edge GPT-4 technology. It retains much of the paid version’s functionality, accuracy, and contextual understanding but is offered without a price tag. The motivation behind introducing a free version was to democratize access to AI, ensuring that businesses, developers, students, and hobbyists alike can get a taste of what state-of-the-art conversational AI models can achieve.

How to use Free ChatBot AI?

Using Free ChatBot AI is a straightforward process:

Access: Navigate to the official Free ChatBot AI website. You don’t need to create an account to use it.

Prompt: Start by entering a prompt or a question. For instance, you might type, “Tell me a fun fact about dolphins.” The more specific and clear your prompt, the better and more accurate the response you can expect.

Response: After inputting your prompt, the AI will process the information and provide an answer in seconds. Seeing the model generate responses that often feel incredibly human-like is fascinating.

Refinement: If the answer isn’t quite what you expected, you can refine your question or ask follow-up questions to get the desired information.

Begin with any prompt you choose. Let’s try this: “Write a persuasive email to convince potential customers to try our service. My service is IT consulting”.

You can ask Free ChatBot AI to create code. Let’s try this: “Create a Python function that takes in a list of numbers and returns the average, median, and mode of the list. The function should be able to handle large datasets and return the results as variables”.
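For reference, here is a minimal sketch of the kind of function such a prompt might produce, using Python's standard statistics module (the function name summarize_numbers is my own placeholder):

  import statistics

  def summarize_numbers(numbers):
      """Return the average, median, and mode of a list of numbers."""
      average = statistics.fmean(numbers)  # fast floating-point mean, suitable for large lists
      median = statistics.median(numbers)
      mode = statistics.mode(numbers)      # most common value; raises StatisticsError on empty input
      return average, median, mode

  avg, med, mode = summarize_numbers([2, 3, 3, 5, 7, 10])
  print(avg, med, mode)  # 5.0 4.0 3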

You can create prompts to ask Free ChatBot AI to act like you want. Let’s try this: click “+ New Prompt” and write, “I want you to act as an English translator, spelling corrector and improver. I will speak to you in any language and you will detect the language, translate it and answer in the corrected and improved version of my text, in English. I want you to replace my simplified A0-level words and sentences with more beautiful and elegant, upper level English words and sentences. Keep the meaning same, but make them more literary. I want you to only reply the correction, the improvements and nothing else, do not write explanations.”

Click save.

Now the prompt is saved. If you type “/” in the text bar, your saved prompts will appear. Select one of them to start the prompt.

You can import and export all prompt histories and configurations to a file. I saved my prompts to share with you in this link.

You can also search Google by clicking the icon below and selecting “Google Search”. After that, you can ask GPT to create your text.

There are many other options: clear conversations, change the theme to light or dark mode, create folders to organize your chats and prompts, and much more.

Conclusion

The Free ChatBot AI, based on the GPT-4 architecture, is a testament to the strides the AI world has made. It provides a unique opportunity for many to interact with and benefit from advanced AI without any financial commitment. Whether you’re a curious individual, a student, a developer, or a business owner, this tool can revolutionize how you seek information, automate tasks, or engage with users. Explore the fascinating world of conversational AI with Free ChatBot AI. What do you think about it? I would be happy to hear from you!

What are you waiting for? Go to the Free ChatBot AI app and have fun!

That’s it for today!