Earlier this year OpenAI introduced GPT-4o, a cheaper version of GPT-4 that is almost as capable. However, GPT-4o is trained on the whole Internet, so out of the box it may not produce output in the tone and style your project needs – you can try to craft a detailed prompt to get that style or, starting today, you can fine-tune the model.

“Fine-tuning” is the final polish of an AI model. It comes after the bulk of the training is done, but it can have a strong effect on the output with relatively little effort. OpenAI says that just a few dozen examples are enough to change the tone of the output to one that better fits your use case.

For example, if you’re trying to make a chatbot, you can write up several question-answer pairs and feed those into GPT-4o. Once fine-tuning completes, the AI’s answers will be closer in style to the examples you gave it.
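If you want to see what that looks like in practice, here is a minimal sketch of the flow using the official openai Python SDK: write your examples in a chat-format JSONL file, upload it, then start a fine-tuning job. The file name, the example content and the exact model snapshot name ("gpt-4o-2024-08-06") are placeholders for illustration, not details taken from OpenAI's announcement.

```python
# Minimal sketch of the fine-tuning flow with the openai Python SDK (v1.x).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1) Write a few dozen question-answer pairs in the chat-format JSONL OpenAI expects.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a cheerful support bot for AcmePhone."},
        {"role": "user", "content": "How do I reset my phone?"},
        {"role": "assistant", "content": "Easy! Hold the power button for 10 seconds and you're good to go."},
    ]},
    # ... more examples in the same shape
]
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# 2) Upload the file and start the fine-tuning job.
training_file = client.files.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-4o-2024-08-06")
print(job.id)  # poll this job until it finishes, then use the resulting model name for chat completions
```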


Maybe you’ve never tried fine-tuning an AI model before, but you can give it a shot now – OpenAI is letting you use 1 million training tokens for free through September 23. After that, fine-tuning will cost $25 per million training tokens, and using the tuned model will cost $3.75 per million input tokens and $15 per million output tokens (tokens are small chunks of text, roughly a syllable or short word each, so a million tokens is a lot of text). OpenAI has detailed and accessible documentation on fine-tuning.
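To put those prices in perspective, here is a rough back-of-the-envelope calculator using the figures quoted above. The token counts plugged in at the end are made up, and a real training bill depends on the total tokens actually processed (your file’s tokens times the number of epochs the job runs).

```python
# Back-of-the-envelope cost estimate using the prices quoted above.
TRAIN_PER_M = 25.00    # $ per 1M training tokens
INPUT_PER_M = 3.75     # $ per 1M input tokens when using the tuned model
OUTPUT_PER_M = 15.00   # $ per 1M output tokens when using the tuned model

def estimate_cost(training_tokens, monthly_input_tokens, monthly_output_tokens):
    training = training_tokens / 1e6 * TRAIN_PER_M
    usage = (monthly_input_tokens / 1e6 * INPUT_PER_M
             + monthly_output_tokens / 1e6 * OUTPUT_PER_M)
    return training, usage

# e.g. a 500k-token training set, then 2M input / 1M output tokens of usage per month
train_cost, monthly_cost = estimate_cost(500_000, 2_000_000, 1_000_000)
print(f"one-off training: ${train_cost:.2f}, monthly usage: ${monthly_cost:.2f}")
# -> one-off training: $12.50, monthly usage: $22.50
```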

The company has been working with partners to try out the new feature. Developers being developers, what they did was try to make a better coding AI. Cosine has an AI named Genie, which can help users find bugs; using the fine-tuning option, Cosine trained it on real examples.


Then there’s Distyl, which fine-tuned a text-to-SQL model (SQL is a language for querying databases). It placed first on the BIRD-SQL benchmark with an accuracy of 71.83%. For comparison, human developers (data engineers and students) scored 92.96% on the same test.
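To make “text-to-SQL” concrete, this is roughly what such a model is asked to do: turn a plain-English question into a SQL query against a known schema. The schema, question, model name and expected query below are purely illustrative and not taken from Distyl’s system or the BIRD-SQL benchmark.

```python
# Illustrative text-to-SQL request against a hypothetical schema.
from openai import OpenAI

client = OpenAI()

schema = "CREATE TABLE orders (id INT, customer TEXT, total REAL, placed_on DATE);"
question = "What was the total revenue in March 2024?"

response = client.chat.completions.create(
    model="ft:gpt-4o-2024-08-06:my-org::example123",  # placeholder fine-tuned model name
    messages=[
        {"role": "system", "content": f"Translate questions into SQL for this schema:\n{schema}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
# Expected output, roughly:
# SELECT SUM(total) FROM orders WHERE placed_on BETWEEN '2024-03-01' AND '2024-03-31';
```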


You may be worried about privacy, but OpenAI says that users who fine-tune GPT-4o retain full ownership of their business data, including all inputs and outputs. The data you use to train the model is never shared with others or used to train other models. OpenAI is, however, monitoring for abuse, in case someone tries to fine-tune a model that would violate its usage policies.
