
Fine-Tuning Practice

The CodeFriends fine-tuning practice environment consists of the following three steps.


1. Select Training Data

Click the + Select Data button to create your own training data or choose from sample data.

The training data must be formatted in JSONL and contain at least 10 pairs of questions and answers.
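As a sketch of what that file looks like, the snippet below builds two question-and-answer pairs in OpenAI's chat-style JSONL format (one JSON object per line) and checks that every line parses as standalone JSON. A real training file would need at least 10 such pairs, as noted above; the questions and answers here are placeholders.

```python
import json

# Two sample training examples in OpenAI's chat fine-tuning format:
# each line is one JSON object holding a user question and an assistant answer.
examples = [
    {"messages": [
        {"role": "user", "content": "What is a list in Python?"},
        {"role": "assistant", "content": "An ordered, mutable collection of items."},
    ]},
    {"messages": [
        {"role": "user", "content": "What does len() do?"},
        {"role": "assistant", "content": "It returns the number of items in an object."},
    ]},
]

# JSONL = one JSON object per line, joined with newlines (no surrounding array).
jsonl_text = "\n".join(json.dumps(ex) for ex in examples)

# Every line must be valid JSON on its own for the file to be accepted.
for line in jsonl_text.splitlines():
    json.loads(line)
```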


2. Set Hyperparameters

Configure hyperparameters, including Batch Size, Learning Rate, and Epoch Number. These values control how the model's weights are updated during fine-tuning on the OpenAI platform.
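On the OpenAI platform itself, these three settings map to the `n_epochs`, `batch_size`, and `learning_rate_multiplier` fields of a fine-tuning job. The sketch below shows that mapping; the file ID and model name are placeholders, and the API call is commented out because it requires an uploaded training file and an API key.

```python
# The three practice hyperparameters, expressed as an OpenAI fine-tuning
# job would accept them. The specific values here are only illustrative.
hyperparameters = {
    "n_epochs": 3,                    # Epoch Number: full passes over the data
    "batch_size": 4,                  # examples processed per weight update
    "learning_rate_multiplier": 0.1,  # scales the base learning rate
}

# Sketch of launching the job (placeholder IDs; requires OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# job = client.fine_tuning.jobs.create(
#     training_file="file-abc123",        # ID of the uploaded JSONL file
#     model="gpt-4o-mini-2024-07-18",
#     hyperparameters=hyperparameters,
# )
```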


3. Execute Fine-Tuning

Enter a fine-tuning model name and press the Execute Fine-Tuning button. Once fine-tuning is complete, you will be able to interact with the customized model.
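Once a job finishes on the OpenAI platform, the customized model gets its own ID and can be called like any other chat model. The sketch below shows the shape of that interaction; the model ID is a made-up example, and the request itself is commented out since it needs a completed job and an API key.

```python
# A conversation to send to the fine-tuned model.
messages = [
    {"role": "system", "content": "You are a helpful coding tutor."},
    {"role": "user", "content": "What does JSONL stand for?"},
]

# Sketch of calling the customized model (hypothetical model ID):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="ft:gpt-4o-mini-2024-07-18:my-org::abc123",
#     messages=messages,
# )
# print(response.choices[0].message.content)
```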


The goal of [Understanding Fine-Tuning] is to equip learners with essential AI knowledge required for fine-tuning and enable them to perform fine-tuning independently on the OpenAI platform. Due to OpenAI’s policy and technical constraints, CodeFriends does not conduct actual fine-tuning.


What Happens During the Learning Process?

During fine-tuning, the weights and biases of the AI model are adjusted based on the configured hyperparameters.

  • Weights: Determine the importance of specific features in the input data.
  • Bias: An offset added to the weighted sum before it enters the activation function, letting the model shift its output independently of the input.

These adjustments help the model refine its responses, making it more specialized for the task at hand.
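To make the roles of weights and bias concrete, here is a minimal single-neuron sketch: the weights scale each input feature, the bias shifts the weighted sum, and a sigmoid activation squashes the result. Fine-tuning nudges exactly these kinds of values (the numbers below are arbitrary).

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum plus bias, through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation, output in (0, 1)

# Weights decide how much each input matters; bias shifts the sum.
out = neuron(inputs=[1.0, 2.0], weights=[0.5, -0.25], bias=0.1)
# z = 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1, so the output is sigmoid(0.1)
```

During training, an optimizer adjusts `weights` and `bias` so that outputs like `out` move closer to the target answers in the training data.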


Fine-Tuning Practice Preview

In the upcoming lessons, we will explore JSONL data format, basic AI concepts, and hyperparameters that influence fine-tuning performance.


Try It Out

Select your training data, configure the hyperparameters, execute fine-tuning, and interact with your customized AI model to see how adjustments impact its responses.