OpenAI: API updates, improved function calling capabilities


OpenAI has gained popularity thanks to the launch of ChatGPT last fall. But above all, its application programming interface (API) has become a tool much sought after by developers.

To meet this growing demand, OpenAI has announced several changes to its API: improved function calling, more steerable versions of GPT-4 and GPT-3.5 Turbo, a new "16k context" version of GPT-3.5 Turbo, and a 75% cost reduction on the Embeddings model. For developers paying for the API, this translates into lower costs.

Also among the updates is a new function calling capability in the Chat Completions API that lets developers connect the power of GPT models to external tools more reliably.

The model produces a JSON object with the arguments needed to call these functions

With this update, developers can describe functions to GPT-4 and GPT-3.5 Turbo, and the model will produce a JSON object containing the arguments needed to call those functions.

This makes it easier for developers to build chatbots or apps that interact with external tools and APIs to perform specific tasks, such as sending emails, retrieving weather or flight information, or extracting data from textual sources such as websites.
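As a minimal sketch of that flow: the developer supplies a function schema, and the model replies with a JSON string of arguments that the application parses and uses to call the real function. The schema and names below (get_weather, location) are illustrative assumptions, not from the article, and the API reply is simulated rather than fetched.

```python
import json

# Hypothetical function schema in the shape the Chat Completions API expects.
weather_function = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"},
        },
        "required": ["location"],
    },
}

# A real request would pass this schema alongside the messages, e.g.:
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo-0613",
#     messages=[{"role": "user", "content": "What's the weather in Paris?"}],
#     functions=[weather_function],
# )

# Instead of calling the API here, we parse the kind of reply the model
# returns when it decides to call the function: a JSON string of arguments.
simulated_function_call = {
    "name": "get_weather",
    "arguments": '{"location": "Paris"}',
}

args = json.loads(simulated_function_call["arguments"])
print(args["location"])  # Paris
```

The application then invokes its own get_weather implementation with those parsed arguments and can feed the result back to the model in a follow-up message.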

The updates also make the GPT-4 and GPT-3.5 models more steerable, giving developers greater control over model output. Developers can set the context, specify the desired formatting, and instruct the model on the desired result. In effect, they have more say over the tone, style, and content of the responses their applications generate.
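In practice, this steering is typically done with a system message placed ahead of the user's prompt. The wording below is an illustrative assumption, not from the article:

```python
# A minimal sketch of steering tone, style, and format via a system message.
messages = [
    {
        "role": "system",
        "content": (
            "You are a concise assistant. Answer in formal English, "
            "in at most two sentences."
        ),
    },
    {"role": "user", "content": "Explain what an API is."},
]

# These messages would then be sent to the Chat Completions endpoint, e.g.:
# openai.ChatCompletion.create(model="gpt-4-0613", messages=messages)
print(messages[0]["role"])  # system
```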

What is "16k context"?

OpenAI also announced the release of the new 16k-context version of GPT-3.5 Turbo, which differs from the GPT-3.5 model behind ChatGPT in that it was designed specifically for developers building chat-based applications. This latest 16k model is an improved variant of the standard 4k model used until now.

The "context" in "16k context" refers to the text of a conversation that helps the model understand the prompt and produce responses relevant to that conversation. The 4,000 tokens of the standard GPT-3.5 Turbo model limit its ability to maintain conversational context to a few paragraphs. 16k, or 16,000 tokens, equals about 20 pages of text, giving the model far more material to draw on for context.
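The "20 pages" figure can be sanity-checked with common rules of thumb: roughly 0.75 English words per token and roughly 600 words per page. Both ratios are approximations I am assuming here, not figures from the article.

```python
# Back-of-the-envelope conversion from a token budget to a page count.
WORDS_PER_TOKEN = 0.75  # rough average for English text (assumption)
WORDS_PER_PAGE = 600    # rough single-spaced page (assumption)

def approx_pages(context_tokens: int) -> float:
    """Convert a context window in tokens to an approximate page count."""
    return context_tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

print(approx_pages(4_000))   # standard GPT-3.5 Turbo: ~5 pages
print(approx_pages(16_000))  # 16k variant: ~20 pages
```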

Finally, OpenAI announced that greater efficiency has allowed it to cut prices. The Embeddings model drops 75%, to $0.0001 per 1,000 tokens, and GPT-3.5 Turbo drops 25%, to $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens. The new GPT-3.5 Turbo 16k model is priced at $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens.
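Since the API bills input and output tokens separately, the cost of a request is simple arithmetic over those per-1K rates. The sketch below uses the per-1,000-token prices OpenAI announced for GPT-3.5 Turbo ($0.0015 in / $0.002 out) and its 16k variant ($0.003 in / $0.004 out); the example token counts are made up.

```python
# USD per 1,000 tokens: (input, output), per OpenAI's announced prices.
PRICES = {
    "gpt-3.5-turbo": (0.0015, 0.002),
    "gpt-3.5-turbo-16k": (0.003, 0.004),
}

def chat_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Total cost in USD for one request, given its token counts."""
    inp, out = PRICES[model]
    return input_tokens / 1000 * inp + output_tokens / 1000 * out

# Example: a 10,000-token prompt that produces a 2,000-token reply.
print(round(chat_cost("gpt-3.5-turbo", 10_000, 2_000), 4))      # 0.019
print(round(chat_cost("gpt-3.5-turbo-16k", 10_000, 2_000), 4))  # 0.038
```

As the example shows, the 16k model costs roughly twice as much per token; the larger window is only worth paying for when a conversation actually needs it.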


Source: “ZDNet.com”
