[Deep Learning] Feedforward

2023. 3. 27. 13:47
🧑🏻‍💻 Terminology

Neural Networks
Feed-forward
Backpropagation

 

 

 

Feed-forward

 

The network inputs first enter the input layer; the computation there produces outputs that are passed forward as inputs to the hidden layer, and the results of the hidden layer's computation are in turn passed to the output layer, which produces the final network output.

 

 

 

Source: https://www.analyticsvidhya.com/blog/2020/02/cnn-vs-rnn-vs-mlp-analyzing-3-types-of-neural-networks-in-deep-learning/

 

 

Also, every input is fed into every hidden unit, and each of these connections applies its own weight.

In other words, the weights are all distinct.

 

i -> j : W_ji  (the weight on the connection from node i to node j)

 

 

As the notes above show, the computation has the same form at every unit, but each weight vector (each row of W^T) is composed differently.

 

This step is input -> hidden.

 

And,

the step above is hidden -> output.

 

 

Combining the two steps,

we can summarize the whole forward pass as above.
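The combined input -> hidden -> output computation can be sketched in code. This is a minimal illustration only: the layer sizes, the random weights, and the choice of sigmoid as the activation are all made up for the example.

```python
import numpy as np

def sigmoid(z):
    # Elementwise logistic activation
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: 3 inputs, 4 hidden units, 2 outputs
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(2, 4))   # hidden -> output weights

x = np.array([0.5, -1.0, 2.0])   # network input
h = sigmoid(W1 @ x)              # hidden layer output
y = sigmoid(W2 @ h)              # network output (shape (2,))
```

Note how the hidden output `h` is reused directly as the input to the next layer: that reuse is exactly the "feed-forward" structure.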

 

Strictly speaking, though, this formula is not an exact match, because Y is a vector and can hold several output values.

 

It is enough to take away the intuition here and move on.

 

What matters is that the information computed from the input keeps being passed "forward" from layer to layer.

 

Stacking multiple layers in a network amounts to repeatedly nesting function evaluations.

In other words, the result of one function becomes the argument of the next.

 

 

 

Shall we look at the formulas in more detail?

 

 

 

The formula below represents the hidden-to-output step.

 

 

net์€ ๊ทธ์ € network ๊ณ„์‚ฐ๊ฐ’์„ ์˜๋ฏธํ•ฉ๋‹ˆ๋‹ค.

 

Each layer consists of a summation part and an activation part, as shown below.

 

 

Source: https://www.v7labs.com/blog/neural-networks-activation-functions

 

 

The value obtained after the summation step is what we call net.

 

Only after the activation step is the value included in that layer's output vector.

 

In other words, the value of net is not the final output.

That is, net_i == y_i does not hold.

Only after passing through the activation function does it become y_i.

 

๋”ฐ๋ผ์„œ,

 

f(net i) = y i์ž…๋‹ˆ๋‹ค !
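The distinction between net_i and y_i can be written out directly. This is a sketch with made-up inputs and weights, and sigmoid is just one possible choice for f:

```python
import math

def f(net):
    # Activation function (logistic sigmoid as an example choice)
    return 1.0 / (1.0 + math.exp(-net))

inputs  = [1.0, 2.0]
weights = [0.5, -0.3]

# Summation part: net_i is the weighted sum, NOT the final output
net_i = sum(w * x for w, x in zip(weights, inputs))   # 0.5*1.0 + (-0.3)*2.0 = -0.1

# Only after the activation do we get y_i
y_i = f(net_i)

assert net_i != y_i   # net_i == y_i does not hold in general
```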

 

 

This formula represents the full input-to-output computation.

 

 

In the end, what matters is

that the feedforward process proceeds in order, using each layer's output as the next layer's input,

and that the functions are always nested.

 

 

์ž ์ด๋ ‡๊ฒŒ ๋ดค์„ ๋•Œ,

 

MLP๊ฐ€ ์กด์žฌํ•  ๋•Œ,

 

์ž˜ ๊ตฌํ•ด์ง„ Weight์— ๋Œ€ํ•ด์„œ input data๋ฅผ ๋ฐ›์•„์„œ feedforward ์ฒ˜๋ฆฌ๋ฅผ ํ•ด์„œ ์šฐ๋ฆฌ๊ฐ€ ์›ํ•˜๋Š” ์˜ˆ์ธก ์ถœ๋ ฅ ๊ฐ’์„ ์–ป์–ด๋‚ผ ์ˆ˜ ์žˆ๊ฒ ๋‹ค๊ณ  ๋ณผ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.

 

So what do we need to do to build an MLP?

 

 

First, suppose we have a neural net like the one in the figure above.

 

The weights are obtained through training,

so what we have to decide is the architecture.

The number of input nodes is fixed by the data, i.e., by the number of features, so it is already determined.

 

The number of output nodes, on the other hand, depends on the problem we want to solve:

classification, regression, and so on.

 

 

What is left for us to choose is

the number of hidden layers and the number of hidden nodes.

These are called the depth and the width, respectively.

 

๊ทธ๋ฆฌ๊ณ  ์ด๊ฒƒ์€ ๋ฌธ์ œ์— ๋”ฐ๋ผ ์ •์˜๋˜์–ด ์žˆ๋Š” ๊ฒƒ์ด ์•„๋‹Œ, ์‚ฌ๋žŒ์ด ์ •ํ•ด์ค˜์•ผ ํ•ฉ๋‹ˆ๋‹ค.

 

๋”ฐ๋ผ์„œ, hidden layer, hidden node๋ฅผ ๋งŽ์ด ์“ธ ์ˆ˜๋ก capacity๋ฅผ ๋” ๋งŽ์ด ์šฐ๋ฆฌ ๋ชจ๋ธ์—๊ฒŒ ๋ถ€์—ฌํ•œ๋‹ค๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค.

 

If the model has too much capacity, the chance of overfitting rises.

Rather than generalizing well, such a model just ends up memorizing the training data.

 

Tuning this balance well is

achieved through regularization, generalization techniques, and so on.

 

 

 

Activation Function

 

 

๊ทธ ๋‹ค์Œ ๊ฒฐ์ •ํ•ด์•ผํ•  ๊ฒƒ์€ Activation function์„ ์–ด๋–ค ๊ฒƒ์œผ๋กœ ์จ์•ผํ•˜๋‚˜์ž…๋‹ˆ๋‹ค.

 

These days ReLU and its variants are close to the default choice.

 

 

The hyperbolic tangent (tanh) is still widely used in RNN-family models.
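The two activations just mentioned are simple to write down (a scalar sketch; real frameworks apply them elementwise over vectors):

```python
import math

def relu(net):
    # ReLU: passes positive signals through, zeroes out negative ones
    return max(0.0, net)

def tanh(net):
    # Hyperbolic tangent: squashes to (-1, 1); still common in RNNs
    return math.tanh(net)

assert relu(2.5) == 2.5 and relu(-1.0) == 0.0
assert -1.0 < tanh(-3.0) < tanh(3.0) < 1.0
```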

 

 

 

 

As the figure above indicates (by drawing a y = x line at the input nodes), the input layer has no activation function.

 

And the activation function plays a different role in the hidden layer than in the output layer.

 

Role of the activation function in a hidden unit

 

  • If the signal, i.e., net_j, is large enough, the hidden unit passes it on as a large value.
  • If it is small, it passes on a small value.
  • It also applies a nonlinear transformation: hidden units are not linear.

 

 

Role of the activation function in an output unit

 

  • The form of the output node depends on the problem at hand.
    • For binary classification, we naturally use logistic sigmoid units.

  • Since it is binary classification, a single output node suffices.
  • We are computing the probability of the first class versus the second.
  • The output is then expressed as below.
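That single-output logistic setup can be sketched like this (the net value here is made up for illustration):

```python
import math

def sigmoid(net):
    # Logistic sigmoid: maps any real net value into (0, 1)
    return 1.0 / (1.0 + math.exp(-net))

net = 1.2                   # hypothetical net value at the single output node
p_class1 = sigmoid(net)     # probability of the first class
p_class2 = 1.0 - p_class1   # the second class's probability comes for free

assert abs(p_class1 + p_class2 - 1.0) < 1e-12
```

This is why one output node is enough: the two class probabilities are complements.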

 

 

 

 

 

  • The form of the output node depends on the problem at hand.
    • For multi-class classification the story changes: now there are 3 or more classes to separate.
    • The number of output nodes is usually set to the number of classes.
    • As the activation function we use softmax, together with one-hot encoding.
    • If we used a logistic unit for each class's score, the values would not sum to 1.
      • Each value would look like a probability on its own, but taken together they would not form a probability distribution.
    • That is why we use softmax.
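The difference is easy to verify numerically. Below, the three net values are made up; applying a logistic unit to each gives values that do not sum to 1, while softmax does:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def softmax(nets):
    # Exponentiate, then normalize so the outputs sum to 1
    exps = [math.exp(n) for n in nets]
    total = sum(exps)
    return [e / total for e in exps]

nets = [2.0, 1.0, 0.1]            # hypothetical net values for 3 classes

logistic_out = [sigmoid(n) for n in nets]
softmax_out = softmax(nets)

print(sum(logistic_out))          # > 1: not a valid probability distribution
print(sum(softmax_out))           # 1.0: a valid distribution
```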

 

 

Source: https://link.springer.com/article/10.1007/s10994-019-05786-2

 

  • The form of the output node depends on the problem at hand.
    • We can also solve regression problems.
    • In the classification cases above, continuous values had to be converted into probabilities.
    • In regression, however, we need the value itself, so no activation function is required.
    • We simply take y = net, i.e., the summation itself.
    • This means the output is emitted as-is, without passing through any function.
    • There is no need for multiple outputs; linear units are arranged as in the picture below.
      • So these linear units formally act as the output activation, but in effect the value just passes straight through.
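A regression output unit is just the summation with an identity "activation" (inputs and weights below are made up):

```python
inputs  = [1.0, 2.0, 3.0]
weights = [0.2, -0.5, 0.1]

# For regression the output unit is linear: y = net, no activation applied
net = sum(w * x for w, x in zip(weights, inputs))
y = net   # identity "activation": the raw value is the prediction

print(y)  # 0.2*1.0 - 0.5*2.0 + 0.1*3.0 = -0.5
```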

 

 

 

 

 

์ž, ๊ทธ ๋‹ค์Œ์œผ๋กœ ํ•„์š”ํ•œ ๊ฒƒ์ด ์–ด๋– ํ•œ Loss Function์„ ์‚ฌ์šฉํ•  ๊ฒƒ์ธ๊ฐ€ ์ž…๋‹ˆ๋‹ค.

 

Loss Function

 

์œ„์—์„œ ์‚ดํŽด๋ณธ Classification, Regression ๋“ฑ๋“ฑ์˜ ์ˆ˜ํ–‰์ด ์–ผ๋งˆ๋‚˜ ์ž˜ ์ด๋ฃจ์–ด์กŒ๋Š”์ง€๋ฅผ ํ™•์ธํ•˜๊ธฐ ์œ„ํ•ด์„œ Loss Function์„ ๊ณ„์‚ฐํ•  ์ฐจ๋ก€์ž…๋‹ˆ๋‹ค.

 

์šฐ๋ฆฌ Network์ด ์–ผ๋งˆ๋‚˜ ์ง„์‹ค๋งŒ์„ ์ด์•ผ๊ธฐ ํ•˜๋Š”์ง€. ์–ผ๋งˆ๋‚˜ ์ •๋‹ต์„ ์ด์•ผ๊ธฐ ํ•˜๋Š”์ง€.

 

 

First,

for classification we use cross-entropy.

Training then proceeds in the direction that minimizes this value.
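A minimal sketch of cross-entropy with one-hot targets (the target and prediction vectors are made up):

```python
import math

def cross_entropy(targets, predictions):
    # Multi-class cross-entropy with one-hot targets:
    # only the true class's -log(p) contributes
    return -sum(t * math.log(p) for t, p in zip(targets, predictions))

target = [0.0, 1.0, 0.0]           # one-hot: the true class is the second one

good = cross_entropy(target, [0.1, 0.8, 0.1])   # confident and correct
bad  = cross_entropy(target, [0.6, 0.2, 0.2])   # confident and wrong

assert good < bad   # minimizing this loss pushes toward correct, confident outputs
```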

 

 

And

for regression we use MSE, the Mean Squared Error.

 

Source: https://www.dataquest.io/blog/understanding-regression-error-metrics/

Because this value is differentiable, it lends itself well to the gradient-based steps that follow.
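MSE itself is a one-liner (the target and prediction values below are made up):

```python
def mse(targets, predictions):
    # Mean squared error: average of the squared residuals
    n = len(targets)
    return sum((t - p) ** 2 for t, p in zip(targets, predictions)) / n

y_true = [3.0, -0.5, 2.0]
y_pred = [2.5,  0.0, 2.0]

print(mse(y_true, y_pred))  # (0.25 + 0.25 + 0.0) / 3 ≈ 0.1667
```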

 

And that is how the whole setup can be summarized.

 
