o1 system instruction temperature

Understanding the Differences When Using OpenAI "o1" Models

If you’re integrating OpenAI’s REST API and plan to use the "o1" series models, it’s essential to be aware of some key differences compared to other models like the "gpt" series. This blog post highlights two notable changes and provides examples to help you adjust your implementation.

No "system" Role

The "o1" models only recognize the "assistant" and "model" roles in the conversation format. The "system" role, commonly used in the "gpt" series to define behavior, is not applicable when using these models. Instead, the functionality and behavior are implied by the model itself or managed through the prompt and conversation context.

No "temperature" Setting

Unlike other models, where you can adjust the randomness of responses with the "temperature" setting, the "o1" models do not accept this parameter. You cannot tune how deterministic or creative the responses are; they follow the model's default sampling behavior.
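
For example, a request that is perfectly valid for a "gpt" model but sets a non-default "temperature" will be rejected when sent to an "o1" model (the API responds with a 400-style error saying the parameter or value is not supported; the exact wording may vary):

curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "o1-preview",
    "temperature": 0.2,
    "messages": [
      {"role": "user", "content": "Give me three ideas for a blog post about curl."}
    ]
  }'

Simply omit the "temperature" field when targeting these models.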

These differences are detailed in OpenAI's official documentation.


Practical Examples Using "curl"

Here are two examples to illustrate how to call the API when using "o1" models.

Example 1: Basic Chat Request

curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "o1-preview",
    "messages": [
      {"role": "assistant", "content": "What can I help you with today?"},
      {"role": "model", "content": "Can you summarize the latest news about AI?"}
    ]
  }'

In this example:

  • The "model" field specifies an "o1" model.

  • The conversation includes messages using only the "user" and "assistant" roles.
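
The response uses the usual chat completions format, so the reply text is in "choices[0].message.content". If you have "jq" installed, you can extract it directly (a convenience sketch, not something required by the API):

curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "o1-preview",
    "messages": [
      {"role": "user", "content": "Can you summarize the latest news about AI?"}
    ]
  }' | jq -r '.choices[0].message.content'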

Example 2: Chain of Thought Reasoning

curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "o1-preview",
    "messages": [
      {"role": "assistant", "content": "Let’s solve a math problem together."},
      {"role": "model", "content": "Sure! What problem do you have in mind?"},
      {"role": "assistant", "content": "What is the sum of 23 and 45?"}
    ]
  }'
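
As in the first example, previous turns are sent back verbatim using only the "user" and "assistant" roles, and the model handles its reasoning internally. If you are adapting an existing "gpt"-based integration, one pragmatic approach is to strip the unsupported fields from your payload before calling an "o1" model. A rough sketch using "jq" (the file names "request.json" and "request-o1.json" are placeholders, and whether rewriting "system" messages as "user" messages preserves your intended behavior is something to verify with your own prompts):

# Remove "temperature", switch the model, and rewrite "system" messages as "user" messages
jq '
  del(.temperature)
  | .model = "o1-preview"
  | .messages |= map(if .role == "system" then .role = "user" else . end)
' request.json > request-o1.json

# Send the adapted payload
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d @request-o1.json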
