ChatGPT Prompt-Only vs OpenAI API + Function Calling #

This document shows a side-by-side comparison between using ChatGPT manually in the browser (prompt-only) and using the OpenAI API with function calling, where the model can ask your code to run a real function.


🟦 ChatGPT (Prompt-Only / UI Version) #

This is what you do in the browser:

User: What’s the weather in Paris?

ChatGPT: The weather in Paris is likely mild and partly cloudy this time of year.
  • No function is actually called
  • The model guesses based on training data
  • Not real-time, not accurate, not programmable
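
The same limitation can be reproduced over the API by sending the prompt with no function definitions. A minimal sketch, assuming the legacy openai Python SDK (pre-1.0) and an API key of your own; the reply is generated from training data, not a live lookup:

import openai

openai.api_key = "your_api_key_here"

# No function definitions are provided, so the model can only guess
response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[{"role": "user", "content": "What’s the weather in Paris?"}]
)

# Prints a plausible-sounding answer, not a real-time reading
print(response["choices"][0]["message"]["content"])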

🟨 OpenAI API + Function Calling Version (Working + Realistic) #

# NOTE: this example targets the legacy openai Python SDK (pre-1.0); the
# 1.x SDK exposes the same feature through tools (see the sketch after
# the summary table).
import openai
import json

openai.api_key = "your_api_key_here"  # better: load this from an environment variable

# 1. Define the available function
functions = [
    {
        "name": "get_weather",
        "description": "Get the current weather in a given city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "Name of the city"}
            },
            "required": ["city"]
        }
    }
]

# 2. Send user prompt + function list to GPT
response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    messages=[{"role": "user", "content": "What’s the weather in Paris?"}],
    functions=functions,
    function_call="auto"  # Let GPT decide
)

# 3. GPT returns a function call (it won't run it)
function_call = response["choices"][0]["message"].get("function_call")

if function_call:
    # Parse the function name and arguments
    fn_name = function_call["name"]
    args = json.loads(function_call["arguments"])

    # 4. Run the function yourself (e.g., call a real API or simulate)
    def get_weather(city):
        # In reality, you'd call a weather API like OpenWeatherMap here
        return f"The current weather in {city} is 68°F and sunny."

    result = get_weather(args["city"])

    # 5. Send the result back to GPT so it can phrase a natural-language reply.
    #    Including the assistant's function_call message gives the function
    #    result something to respond to.
    followup = openai.ChatCompletion.create(
        model="gpt-4-0613",
        messages=[
            {"role": "user", "content": "What’s the weather in Paris?"},
            response["choices"][0]["message"],  # the assistant's function_call
            {"role": "function", "name": fn_name, "content": result}
        ]
    )

    print(followup["choices"][0]["message"]["content"])
else:
    # The model answered directly instead of requesting a function call
    print(response["choices"][0]["message"]["content"])

🔍 Summary Table #

| Step | Prompt-Only (UI) | API + Function Calling |
| --- | --- | --- |
| Input | User types text prompt | Dev sends user prompt + function definitions |
| Function selection | None (model guesses) | GPT chooses from provided functions |
| Execution | Not possible | Developer runs function (e.g., API call) |
| Output | Text only, not real-time | GPT formats real function output into natural reply |
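
For reference, the same flow works in the current OpenAI Python SDK (1.x), which replaced functions/function_call with tools/tool_calls. A minimal sketch, assuming an OPENAI_API_KEY environment variable and a tool-capable model such as gpt-4o, with the weather lookup stubbed out:

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "Name of the city"}},
            "required": ["city"]
        }
    }
}]

messages = [{"role": "user", "content": "What’s the weather in Paris?"}]
response = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools, tool_choice="auto"
)

# Assumes the model chose to call the tool; check for None in real code
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = f"The current weather in {args['city']} is 68°F and sunny."  # stubbed lookup

messages.append(response.choices[0].message)  # assistant message carrying the tool call
messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": result})

followup = client.chat.completions.create(model="gpt-4o", messages=messages)
print(followup.choices[0].message.content)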