[Transformer] Time-Series Transformer
2023. 1. 27. 23:13
Glossary
Transformer
- A model originally designed for NLP
- Computes the probability that a particular sentence occurs (the language-modeling view)
- Has since been extended to images and to time-series data
- Applies attention in parallel across all positions, with no recurrence
- Consists of an encoder and a decoder
- embedding vector
- semantic relationship
- word embedding
- positional encoding (a sinusoidal sketch follows this list)
- feed-forward neural network
- self attention (a minimal sketch follows this list)
- query, key, value
  - three vectors (Q, K, V) are generated from each input vector
  - each query is scored against every key to weight the corresponding values
- multi-headed attention (several attention heads run in parallel)
- position-wise feed-forward neural network
- masked multi-head attention (a causal mask blocks attention to future positions; example after this list)
- Linear Layer
- Softmax Layer
- batch normalization vs. layer normalization (compared in code after this list)
- freeze (fixing pretrained weights during fine-tuning; example after this list)
- TST (Time Series Transformer; a minimal sketch closes these notes)
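
Positional encoding injects order information, since attention by itself is permutation-invariant. A minimal sketch of the fixed sinusoidal scheme from the original Transformer paper, in PyTorch (the function name and shapes are illustrative):

```python
import math
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(same)."""
    position = torch.arange(seq_len).unsqueeze(1)        # (seq_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)         # even dimensions
    pe[:, 1::2] = torch.cos(position * div_term)         # odd dimensions
    return pe  # added element-wise to the (batch, seq_len, d_model) embeddings
```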
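
Self-attention projects each input vector into a query, a key, and a value; every query is scored against every key, and the softmax-weighted sum of the values becomes the output. A minimal single-head sketch (the projection matrices are assumed given):

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention for one head.
    x: (batch, seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q = x @ w_q                                    # what each position asks for
    k = x @ w_k                                    # what each position offers
    v = x @ w_v                                    # the content that gets mixed
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    weights = F.softmax(scores, dim=-1)            # (batch, seq_len, seq_len)
    return weights @ v                             # weighted sum of values
```

Multi-headed attention runs several such heads in parallel, each with its own projections, and concatenates their outputs.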
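
For masked multi-head attention, a causal mask keeps position t from attending to later positions, which the decoder needs for autoregressive generation. A sketch using PyTorch's built-in nn.MultiheadAttention (sizes are illustrative):

```python
import torch
import torch.nn as nn

d_model, n_heads, seq_len = 64, 8, 10
mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

x = torch.randn(2, seq_len, d_model)                # (batch, seq, d_model)
# True entries are blocked: the strict upper triangle hides future positions
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

out, weights = mha(x, x, x, attn_mask=causal_mask)  # masked self-attention
print(out.shape)                                    # torch.Size([2, 10, 64])
```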
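
Batch normalization standardizes each feature across the batch, while layer normalization standardizes across the features of a single sample; Transformers typically use layer norm because it does not depend on batch statistics. A small comparison (shapes are illustrative):

```python
import torch
import torch.nn as nn

x = torch.randn(4, 16)       # (batch, features)

bn = nn.BatchNorm1d(16)      # normalizes each feature over the batch dimension
ln = nn.LayerNorm(16)        # normalizes each sample over the feature dimension

print(bn(x).mean(dim=0))     # ~0 per feature (column-wise)
print(ln(x).mean(dim=1))     # ~0 per sample (row-wise)
```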
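
Freezing means fixing pretrained weights so that fine-tuning only updates a new task head. A sketch (the encoder/head split here is illustrative):

```python
import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=8, batch_first=True),
    num_layers=2,
)
head = nn.Linear(64, 1)                 # new task-specific head

for p in encoder.parameters():          # freeze the pretrained encoder
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)  # train the head only
```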
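
TST-style models apply a Transformer encoder to multivariate time series: each timestep's feature vector is linearly projected to d_model, positional information is added, and the encoded sequence feeds a task head. A minimal regression-flavored sketch, not a reference implementation (all names and sizes are illustrative):

```python
import torch
import torch.nn as nn

class TinyTST(nn.Module):
    """Illustrative TST-style encoder for multivariate time series."""
    def __init__(self, n_features, d_model=64, n_heads=8, n_layers=2, max_len=512):
        super().__init__()
        self.project = nn.Linear(n_features, d_model)  # per-timestep projection
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)              # regression head

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        h = self.project(x) + self.pos[:, : x.size(1)]
        h = self.encoder(h)               # (batch, seq_len, d_model)
        return self.head(h[:, -1])        # predict from the last timestep

model = TinyTST(n_features=7)
print(model(torch.randn(8, 96, 7)).shape)  # torch.Size([8, 1])
```

Notably, the original TST work argues that batch normalization suits time series better than layer normalization, which is why the two are compared side by side in these notes.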