OpenAI Launches Fine-Tuning Feature for Enhanced GPT-3.5 Turbo Performance

OpenAI has announced an exciting enhancement for its customers: the ability to integrate custom data into the lightweight version of GPT-3.5, known as GPT-3.5 Turbo. This improvement allows developers to refine the text-generating AI model’s performance, making it more reliable and tailored to specific tasks.

According to OpenAI, fine-tuned iterations of GPT-3.5 can match or even exceed the baseline capabilities of its flagship model, GPT-4, in specific narrow applications.

“Since the launch of GPT-3.5 Turbo, developers and businesses have expressed a strong desire to customize the model to create unique and tailored experiences for their users,” the company stated in its recent blog post. “This update empowers developers to optimize models for their specific needs and deploy these custom solutions at scale.”

With this fine-tuning capability, companies utilizing GPT-3.5 Turbo via OpenAI’s API can enhance the model's adherence to instructions, such as requiring it to consistently respond in a particular language. Additionally, businesses can refine the model’s response formatting (such as completing code snippets) and adjust the model's tone to align with their brand voice or messaging style.

Moreover, fine-tuning allows OpenAI customers to streamline their text prompts, resulting in faster API responses and reduced costs. OpenAI notes, "Early testers have achieved prompt reductions of up to 90% by embedding instructions directly into the model during fine-tuning."

Currently, fine-tuning requires preparing data, uploading necessary files, and initiating a fine-tuning job through OpenAI’s API. All fine-tuning data must undergo scrutiny via an API moderation system powered by GPT-4 to ensure compliance with OpenAI’s safety standards. Looking ahead, OpenAI plans to introduce a user-friendly fine-tuning interface complete with a dashboard to monitor ongoing tuning tasks.
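The workflow above can be sketched in code. The snippet below prepares a training file in the chat-style JSONL format used for GPT-3.5 Turbo fine-tuning; the file name and the brand-voice example are made up for illustration, and the API calls for the upload and job-creation steps (which require an OpenAI API key) are only outlined in comments.

```python
import json

# One JSON object per line, each holding a full system/user/assistant
# exchange. The content here is invented for the sketch.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Acme's support bot. Always reply in formal German."},
            {"role": "user", "content": "Where is my order?"},
            {"role": "assistant", "content": "Guten Tag! Ihre Bestellung ist bereits unterwegs."},
        ]
    },
]

# Write the training file in JSONL form (one example per line).
with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")

# The remaining steps run against OpenAI's API and are sketched here
# rather than executed:
#   file = client.files.create(file=open("training_data.jsonl", "rb"),
#                              purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(training_file=file.id,
#                                        model="gpt-3.5-turbo")
```

Once the job finishes, the resulting model ID can be used in chat completion requests like any other model.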

The costs for fine-tuning are structured as follows:

- Training: $0.008 per 1,000 tokens
- Usage input: $0.012 per 1,000 tokens
- Usage output: $0.016 per 1,000 tokens

Here, "tokens" refer to segments of text; for instance, the word “fantastic” is divided into three tokens: “fan,” “tas,” and “tic.” OpenAI estimates that a fine-tuning job for GPT-3.5 Turbo, using a training file of 100,000 tokens (approximately 75,000 words) trained for three epochs, would cost around $2.40.

In related news, OpenAI has released two updated versions of its GPT-3 base models (babbage-002 and davinci-002), which can also be fine-tuned through a new API endpoint that adds pagination and greater extensibility. Notably, OpenAI plans to phase out the original GPT-3 base models on January 4, 2024.

Furthermore, OpenAI indicated that fine-tuning capabilities for GPT-4, which can understand images as well as text, will arrive later this fall, though specific details have yet to be disclosed.
