POST /fine-tunes
from together import Together
import os

client = Together(
    api_key=os.environ.get("TOGETHER_API_KEY"),
)

# Start a fine-tuning job from an uploaded training file.
response = client.fine_tuning.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",
    training_file="file-id",  # id of a previously uploaded training file
)

print(response)
{
  "id": "ft-01234567890123456789",
  "status": "completed",
  "created_at": "2023-05-17T17:35:45.123Z",
  "updated_at": "2023-05-17T18:46:23.456Z",
  "user_id": "user_01234567890123456789",
  "owner_address": "user@example.com",
  "total_price": 1500,
  "token_count": 850000,
  "events": [],
  "model": "meta-llama/Llama-2-7b-hf",
  "model_output_name": "mynamespace/meta-llama/Llama-2-7b-hf-32162631",
  "n_epochs": 3,
  "training_file": "file-01234567890123456789",
  "wandb_project_name": "my-finetune-project"
}
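
The SDK returns this payload as a typed response object rather than raw JSON; a minimal sketch of reading individual fields, assuming the attribute names mirror the JSON keys shown above:

# Assuming the response object exposes the documented JSON fields as attributes.
print(response.id)      # e.g. "ft-01234567890123456789"
print(response.status)  # e.g. "completed"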

Authorizations

Authorization (string, header, required; default: default)

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
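
For clients that do not use the Python SDK, the same request can be made over plain HTTP. A minimal sketch using the requests library, assuming the standard https://api.together.xyz/v1 base URL; the header follows the Authorization format described above and the JSON body mirrors the SDK example:

import os
import requests

# Bearer authentication header plus a JSON body with the model and training file id.
resp = requests.post(
    "https://api.together.xyz/v1/fine-tunes",  # assumed base URL + documented endpoint path
    headers={
        "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct-Reference",
        "training_file": "file-id",  # id of a previously uploaded training file
    },
)
print(resp.status_code, resp.json())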

Response

200 - application/json

Fine-tuning job initiated successfully

A truncated version of the fine-tune response, used by the POST /fine-tunes, GET /fine-tunes, and POST /fine-tunes/{id}/cancel endpoints.
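
Because the same truncated shape is returned by GET /fine-tunes and POST /fine-tunes/{id}/cancel, the id from the create call can be used to check on or stop the job later. A rough sketch, assuming the SDK exposes list and cancel wrappers on client.fine_tuning and reads TOGETHER_API_KEY from the environment:

from together import Together

client = Together()  # assumed to pick up TOGETHER_API_KEY from the environment

# Assumed wrapper for GET /fine-tunes: list jobs and locate the one created above.
jobs = client.fine_tuning.list()
print(jobs)

# Assumed wrapper for POST /fine-tunes/{id}/cancel: stop a running job by id.
# client.fine_tuning.cancel("ft-01234567890123456789")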