
Sparta AI 8th Cohort TIL (10/29) -> Challenge Assignment Code

kimjunki-8 2024. 10. 29. 22:12
num_epochs = 50
for epoch in range(num_epochs):
    model.train()
    total_loss = 0

    for batch in train_dataload:
        content, score = batch

        optimizer.zero_grad()            # clear gradients from the previous step
        output = model(content)          # forward pass
        loss = criterion(output, score)
        loss.backward()                  # backpropagate
        optimizer.step()                 # update parameters

        total_loss += loss.item()
    print(f'Epoch {epoch+1}/{num_epochs}, Loss: {total_loss/len(train_dataload)}')

This is what I came up with. However,

Epoch 1/50, Loss: 1.5913735551834107
Epoch 2/50, Loss: 1.5593869946875223
Epoch 3/50, Loss: 1.5339540939331056
Epoch 4/50, Loss: 1.5137940978073492
Epoch 5/50, Loss: 1.4978673339936792
Epoch 6/50, Loss: 1.4852498925371869
Epoch 7/50, Loss: 1.4752975536206874
Epoch 8/50, Loss: 1.4673858617689552
Epoch 9/50, Loss: 1.4611013261050714
Epoch 10/50, Loss: 1.456100899626569
Epoch 11/50, Loss: 1.4520779656200875
Epoch 12/50, Loss: 1.4488715845898885
Epoch 13/50, Loss: 1.4462546149230584
Epoch 14/50, Loss: 1.4441346940063848
Epoch 15/50, Loss: 1.442380579459958
Epoch 16/50, Loss: 1.4409827133039148
Epoch 17/50, Loss: 1.439809482923368
Epoch 18/50, Loss: 1.4388320062218642
Epoch 19/50, Loss: 1.4380178084257171
Epoch 20/50, Loss: 1.4373268416567546
Epoch 21/50, Loss: 1.4367451511243494
Epoch 22/50, Loss: 1.4362554534586465
Epoch 23/50, Loss: 1.435858862132561
Epoch 24/50, Loss: 1.4354861173862363
Epoch 25/50, Loss: 1.43516232920856
Epoch 26/50, Loss: 1.4348806563586725
Epoch 27/50, Loss: 1.4346109466320132
Epoch 28/50, Loss: 1.4343667002887261
Epoch 29/50, Loss: 1.434167114885842
Epoch 30/50, Loss: 1.4339572829269782
Epoch 31/50, Loss: 1.4338176792772805
Epoch 32/50, Loss: 1.433648743862059
Epoch 33/50, Loss: 1.4335091404100744
Epoch 34/50, Loss: 1.4333478525208263
Epoch 35/50, Loss: 1.4332644313021403
Epoch 36/50, Loss: 1.433092465354175
Epoch 37/50, Loss: 1.4329572735763179
Epoch 38/50, Loss: 1.4328293104288055
Epoch 39/50, Loss: 1.4327181347288735
Epoch 40/50, Loss: 1.4326233010640959
Epoch 41/50, Loss: 1.4324848038394278
Epoch 42/50, Loss: 1.432425147545047
Epoch 43/50, Loss: 1.4322903784542549
Epoch 44/50, Loss: 1.4321913776514006
Epoch 45/50, Loss: 1.4321085269276688
Epoch 46/50, Loss: 1.4320197780423047
Epoch 47/50, Loss: 1.4319086453507586
Epoch 48/50, Loss: 1.4318062442802801
Epoch 49/50, Loss: 1.4317411826761757
Epoch 50/50, Loss: 1.431648205350085

as you can see, the loss stays above 1 (it plateaus around 1.43).
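For context on how bad "above 1" is: with CrossEntropyLoss, a model that guesses uniformly at random sits at ln(number of classes). Assuming the scores take 5 distinct values (e.g. 1-5 star ratings; this is an assumption about the data), that baseline is about 1.609, so plateauing around 1.43 is only slightly better than random guessing.

import math
num_classes = 5                  # assumption: 'score' has 5 distinct values (e.g. 1-5 stars)
print(math.log(num_classes))     # ≈ 1.609, the CrossEntropyLoss of uniform random guessing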

 

from sklearn.model_selection import train_test_split
from torch.utils.data import DataLoader

# 70/30 train/test split of the review text and its score label
train_content, test_content, train_score, test_score = train_test_split(
    data['content'], data['score'], test_size=0.3, random_state=42)
batch_size = 16

train_dataset = ReviewDataset(train_content, train_score, preprocess_text, preprocess_score)
train_dataload = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)

test_dataset = ReviewDataset(test_content, test_score, preprocess_text, preprocess_score)
test_dataload = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)  # no need to shuffle the test set

from torchtext.vocab import build_vocab_from_iterator

# Yield whitespace-tokenized reviews so build_vocab_from_iterator can count tokens
def content_text(content):
    for text in content:
        yield text.split()

vocab = build_vocab_from_iterator(content_text(data['content']), specials=['<unk>'])
vocab.set_default_index(vocab['<unk>'])  # unseen tokens map to <unk>
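For reference, the vocab built above is just a token-to-index lookup table: every whitespace-split token seen in data['content'] gets an integer id, and any token it has never seen falls back to '<unk>'. A minimal usage sketch (the sample sentence and printed ids are made up; actual ids depend on the data):

sample = 'great app but it crashes often'.split()   # hypothetical review text
print(vocab(sample))        # e.g. [57, 112, 9, 33, 481, 0] - token ids depend on the data
print(len(vocab))           # vocabulary size, used as the embedding layer's input size
print(vocab['<unk>'])       # default index returned for out-of-vocabulary tokens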


import torch.nn as nn

class LSTMModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, output_dim, num_layers=5):
        super(LSTMModel, self).__init__()
        # EmbeddingBag averages all token embeddings in a review into a single vector
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
        # note: num_layers is accepted above but never passed in, so this is a 1-layer LSTM
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, content):
        embedded = self.embedding(content)       # (batch, embed_dim)
        embedded = embedded.unsqueeze(1)          # (batch, seq_len=1, embed_dim)
        output, (hidden, cell) = self.lstm(embedded)
        return self.fc(hidden[-1])                # logits over the score classes

import torch.optim as optim

Vocab_size = len(vocab)
Embed_dim = 64
Hidden_dim = 128
Output_dim = len(set(data['score']))   # number of distinct score classes

model = LSTMModel(Vocab_size, Embed_dim, Hidden_dim, Output_dim)
optimizer = optim.SGD(model.parameters(), lr=0.0001)   # sparse=True embeddings only work with a few optimizers (SGD, SparseAdam, Adagrad)
criterion = nn.CrossEntropyLoss()

In the code above, because the loss was so large I tried changing the learning rate and even the batch size, but the values barely moved, so I could tell there is clearly a problem somewhere.
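One guess about where the problem might be (not a verified fix): nn.EmbeddingBag already averages every token embedding in a review into one vector, so after unsqueeze(1) the LSTM only ever sees a sequence of length 1 and has nothing to model over time; num_layers is also never passed to nn.LSTM, and SGD with lr=0.0001 moves very slowly. The usual pattern is nn.Embedding plus padded index sequences. A rough sketch, assuming each batch of content were a padded LongTensor of shape (batch, seq_len):

import torch
import torch.nn as nn

class LSTMModelSeq(nn.Module):
    # Sketch only: feeds the LSTM a real token sequence instead of one pooled vector
    def __init__(self, vocab_size, embed_dim, hidden_dim, output_dim, num_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)  # add padding_idx if a '<pad>' special is added to the vocab
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, content):                      # content: (batch, seq_len) token ids
        embedded = self.embedding(content)           # (batch, seq_len, embed_dim)
        output, (hidden, cell) = self.lstm(embedded)
        return self.fc(hidden[-1])                   # final hidden state of the last layer -> class logits

# Adam with a larger learning rate usually moves much faster than SGD at 0.0001
# (it needs dense gradients, i.e. the embedding must not use sparse=True):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)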

 

But I'm still a beginner... that code is what I ended up with after asking questions over and over, so I can't really dig into its internals yet. I'll have to go through it from start to finish tomorrow.
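When I do, one quick sanity check (a sketch, not part of the original code) would be to measure accuracy on test_dataload, which makes it obvious whether the model is doing better than chance. This assumes preprocess_score turns each score into an integer class index:

import torch

model.eval()
correct = 0
total = 0
with torch.no_grad():
    for content, score in test_dataload:
        output = model(content)           # (batch, num_classes) logits
        pred = output.argmax(dim=1)       # most likely class per review
        correct += (pred == score).sum().item()
        total += score.size(0)
print(f'Test accuracy: {correct / total:.3f}')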

 

Also need to look into what vocab is in more detail..