๋ณธ๋ฌธ ๋ฐ”๋กœ๊ฐ€๊ธฐ
728x90
๋ฐ˜์‘ํ˜•

๋Œ€์™ธํ™œ๋™/2023 LG Aimers 3๊ธฐ7

LG Aimers 3๊ธฐ ์ˆ˜๋ฃŒ 2023. 12. 31.
Module 7. ๋”ฅ๋Ÿฌ๋‹ (Deep Learning) (KAIST ์ฃผ์žฌ๊ฑธ ๊ต์ˆ˜) ๋‚ ์งœ: 2023๋…„ 7์›” 15์ผ Part 1. Introduction to Deep Neural Networks 1. Deep Learning : ์‹ ๊ฒฝ์„ธํฌ๋“ค์ด ๋ง์„ ์ด๋ฃจ์–ด์„œ ์ •๋ณด๋ฅผ ๊ตํ™˜ํ•˜๊ณ  ์ฒ˜๋ฆฌํ•˜๋Š” ๊ณผ์ •์„ ๋ณธ๋”ฐ์„œ ๋งŒ๋“  ๋ฐฉ์‹์„ ์˜๋ฏธํ•จ 2. ์‹ฌ์ธต ์‹ ๊ฒฝ๋ง์˜ ๊ธฐ๋ณธ ๋™์ž‘ ๊ณผ์ • - Big Data์˜ ํ•„์š” - GPU Acceleration - Algorithm Improvements 3. Perceptron - ํผ์…‰ํŠธ๋ก ์€ ์ƒ๋ฌผํ•™์ ์ธ ์‹ ๊ฒฝ๊ณ„(Neual Network)์˜ ๊ธฐ๋ณธ ๋‹จ์œ„์ธ ์‹ ๊ฒฝ์„ธํฌ(=๋‰ด๋Ÿฐ)์˜ ๋™์ž‘ ๊ณผ์ •์„ ํ†ต๊ณ„ํ•™์ ์œผ๋กœ ๋ชจ๋ธ๋งํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜ 4. Forward Propagation - ํ–‰๋ ฌ ๊ณฑ์„ ํ†ตํ•ด sigmoid function๊ณผ ๊ฐ™์€ actiavtion function์„ ์ง€๋‚˜๋ฉด ๊ฒฐ๊ณผ ๊ฐ’์ด ๋‚˜์˜ด 5. MSE - ์—๋Ÿฌ.. 2023. 7. 15.
Module 6. ๊ฐ•ํ™”ํ•™์Šต (Reinforcement Learning) (๊ณ ๋ ค๋Œ€ํ•™๊ต ์ด๋ณ‘์ค€ ๊ต์ˆ˜) ๋‚ ์งœ: 2023๋…„ 7์›” 13์ผ Part 1. MDP and Planning : Markov Decision Process์˜ ์•ฝ์ž Sequential Decision Making under Uncertainty๋ฅผ ์œ„ํ•œ ๊ธฐ๋ฒ• ๊ฐ•ํ™”ํ•™์Šต(Reinforcement Learning, RL)์„ ์œ„ํ•œ ๊ธฐ๋ณธ ๊ธฐ๋ฒ• ์•Œ๊ณ ๋ฆฌ์ฆ˜(transition probability, reward function)์„ ์•Œ๊ณ  ์žˆ์„ ๋•Œ๋Š” MDP(stocasitc control ๊ธฐ๋ฒ•)์„ ์ด์šฉ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ๋ชจ๋ฅด๊ณ  simulation ๊ฒฐ๊ณผ(reward ๊ฐ’)๋ฅผ ํ™œ์šฉํ•  ๋•Œ๋Š” ๊ฐ•ํ™”ํ•™์Šต์„ ์ด์šฉ https://velog.io/@recoder/MDP%EC%9D%98%EA%B0%9C%EB%85%90 S : set of states(state space) state .. 2023. 7. 15.
Module 5. ์ง€๋„ํ•™์Šต (๋ถ„๋ฅ˜/ํšŒ๊ท€) (์ดํ™”์—ฌ์ž๋Œ€ํ•™๊ต ๊ฐ•์ œ์› ๊ต์ˆ˜) ๋‚ ์งœ: 2023๋…„ 7์›” 8์ผ Part 1. SL Foundation 1.Supervised Learning - label๊ฐ’์ด ์žˆ๋Š” ๊ฒƒ์„ ๋งํ•จ - training๊ณผ test ๋‹จ๊ณ„๊ฐ€ ์กด์žฌํ•จ - feature์˜ ๊ฒฝ์šฐ, domain ์ง€์‹์ด ์–ด๋Š ์ •๋„ ํ•„์š”ํ•จ - ๋”ฅ๋Ÿฌ๋‹์˜ ๊ฒฝ์šฐ, feature๋ฅผ ์Šค์Šค๋กœ ํ•™์Šตํ•˜๊ธฐ๋„ ํ•จ - SL์˜ ๊ฒฝ์šฐ, training error, val error, test error์„ ํ†ตํ•ด generalization error์„ ์ตœ์†Œํ™”ํ•˜๋„๋ก ํ•˜๋Š” ๋…ธ๋ ฅ์„ ํ•˜๊ฒŒ ๋จ - loss function=cost function 2. Bias-variance trade-off - bias์™€ variance์˜ trade off๋ฅผ ์ž˜ ์กฐ์ •ํ•ด์„œ ์ตœ์ ์˜ generalization error๋ฅผ ๋งŒ๋“œ๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•จ - ๋”ฅ.. 2023. 7. 8.
Module 4. Bayesian (김재환, Korea University) Date: July 4, 2023
Part 1. Principle and Structure
1. Bayesian principles and how they operate
Part 2. Estimation Algorithm
1. Joint Probability Distribution: when there are two or more random variables, the probability that several events occur simultaneously
2. Bayesian estimation algorithms
3. Random Walk Metropolis-Hastings Algorithm
1. A way to handle the posterior distribution without integration: Prior * Likelihood
2. Gibbs Sampling: used when the above approach fails; when the conditional distribution of each parameter is known, Gibbs samplin.. 2023. 7. 4.
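The Random Walk Metropolis-Hastings idea above — work with the unnormalized Prior * Likelihood and never integrate the posterior — can be sketched as follows. The target here is a stand-in standard normal log-density, not any model from the lecture:

```python
import math
import random

random.seed(0)

def log_unnorm_posterior(theta):
    # Unnormalized log posterior (log prior + log likelihood).
    # A standard normal density is used purely as a stand-in target.
    return -0.5 * theta * theta

def rw_metropolis_hastings(n_steps, step=1.0, theta0=0.0):
    # Random-walk MH: propose theta' = theta + Normal(0, step) and
    # accept with probability min(1, p(theta') / p(theta)).
    # The normalizing constant cancels in the ratio, so no integral.
    theta, samples = theta0, []
    for _ in range(n_steps):
        proposal = theta + random.gauss(0.0, step)
        log_ratio = log_unnorm_posterior(proposal) - log_unnorm_posterior(theta)
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            theta = proposal
        samples.append(theta)
    return samples

samples = rw_metropolis_hastings(20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean and var should approach 0 and 1, the moments of the target.
```

Gibbs sampling replaces the accept/reject step by drawing each parameter directly from its conditional distribution, which is why it requires those conditionals in closed form.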
Module 3. SCM & Demand Forecasting (Prof. 이현석, Korea University) Date: July 3, 2023
Part 1. Forecasting (1)
1. Demand forecasting methods and forecast error: predict from past observed data; Naive Method: predict from the previous period's value
2. The Simple Average: an example using the mean ⇒ this ends up using data from even 8 or 10 years ago ⇒ is that really meaningful? ⇒ let's use only the last few years ⇒ Method 3
3. The Moving Average Forecast: forecast using only the past 3 months ⇒ is equal weighting right (past and recent data carry the same weight)? ⇒ shouldn't recent data get more weight?
4. Weighted Moving Average Forecast: 3 months'.. 2023. 7. 4.
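Steps 3 and 4 above can be sketched in a few lines; the demand series and the weights are hypothetical, chosen only to show the mechanics:

```python
demand = [42, 40, 43, 41, 39]  # hypothetical monthly demand

def moving_average_forecast(history, n=3):
    # Equal weight on each of the last n observations.
    window = history[-n:]
    return sum(window) / len(window)

def weighted_moving_average_forecast(history, weights=(0.2, 0.3, 0.5)):
    # More weight on recent months; the weights must sum to 1.
    window = history[-len(weights):]
    return sum(w * x for w, x in zip(weights, window))

ma = moving_average_forecast(demand)            # (43 + 41 + 39) / 3 = 41.0
wma = weighted_moving_average_forecast(demand)  # 0.2*43 + 0.3*41 + 0.5*39 = 40.4
```

Because the latest month (39) is the lowest, the weighted forecast comes out below the plain moving average, illustrating how it reacts faster to recent changes.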