๋ณธ๋ฌธ ๋ฐ”๋กœ๊ฐ€๊ธฐ
728x90
๋ฐ˜์‘ํ˜•

All Posts (105)

Seq2Seq 1. Intro 1) DNNs (Deep Neural Networks): very good at problems such as speech recognition and object recognition, but they can only be applied to problems whose inputs and outputs are encoded as fixed-dimensional vectors. 2) Sequential problems: tasks such as speech recognition and machine translation are expressed as sequences of unknown length; a representative example is question answering, which must map a question to an answer sequence. Because a DNN needs the input and output dimensions to be known and fixed, conventional methods struggle with these problems. ex) '나는 너를 정말 사랑해' ⇒ 'I love you so much' ex) forcing the output to match the source word count would produce the awkward sentence 'I love you very' .. 2023. 7. 5.
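The length-mismatch problem above can be sketched as a toy encoder–decoder loop. Everything here is hypothetical (the lookup table stands in for a trained RNN decoder); the point is only that the output length is decided by an end-of-sequence token, not by the input dimension.

```python
# Toy sketch of the seq2seq idea -- NOT a trained model. The "encoder"
# folds a variable-length input into one state; the "decoder" emits tokens
# until <eos>, so the output length is independent of the input length.

def encode(src_tokens):
    # A real encoder (e.g. an LSTM) would produce a fixed-size vector;
    # here we just freeze the sequence into a hashable state.
    return tuple(src_tokens)

# Hypothetical "learned" mapping from encoder state to target tokens.
DECODER_TABLE = {
    ("나는", "너를", "정말", "사랑해"): ["I", "love", "you", "so", "much", "<eos>"],
}

def decode(state, max_len=10):
    out = []
    for tok in DECODER_TABLE.get(state, ["<eos>"]):
        if tok == "<eos>" or len(out) >= max_len:
            break  # stop when the model emits end-of-sequence
        out.append(tok)
    return out

# 4 source tokens in, 5 target tokens out -- no fixed dimension required.
print(decode(encode(["나는", "너를", "정말", "사랑해"])))
```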
U-Net 1. Intro The paper argues that the success of CNNs had been limited by the size of available training sets. Until then CNNs were used mostly for classification, but in biomedical image processing localization matters and semantic segmentation is highly important, while most biomedical datasets contain only on the order of 1,000 samples. Two drawbacks of the previously used sliding-window approach: redundancy of overlapping patches — as the figure above shows, moving the patch creates overlap, so the overlapped regions, already learned (verified), are processed again, repeating the same work .. 2023. 7. 5.
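The patch-overlap redundancy can be made concrete in a few lines (a sketch with made-up sizes): whenever the stride is smaller than the patch, consecutive windows re-process pixels the previous window already covered.

```python
def patch_starts(image_size, patch, stride):
    """Start positions of sliding-window patches along one image axis."""
    return list(range(0, image_size - patch + 1, stride))

# Hypothetical sizes: a 100-px axis, 32-px patches, stride 16.
starts = patch_starts(image_size=100, patch=32, stride=16)
overlap = 32 - 16  # each patch shares 16 of its 32 columns with its neighbour
print(starts, overlap)
```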
BERT 1. Intro GPT and earlier models predicted in one direction only (left → right), so at prediction time they could reference only the preceding tokens ⇒ fatal for tasks like next-sentence prediction or fill-in-the-blank. BERT therefore uses a bidirectional model. 2. Overall architecture A pre-trained language model: embeddings learned before any specific task improve downstream performance (pre-training). It pre-trains on unlabeled data, then fine-tunes on labeled data, and uses only the Transformer encoder .. 2023. 7. 5.
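The bidirectional objective rests on masked-LM inputs: hide some tokens and train the model to recover them from context on both sides. A minimal sketch of the input-corruption step (rate and tokens are illustrative; real BERT also keeps or randomizes some selected positions instead of always writing [MASK]):

```python
import random

def mask_tokens(tokens, mask_rate=0.15, seed=1):
    """BERT-style masked-LM corruption: replace ~mask_rate of tokens with
    [MASK]; the label list records what the model must predict back."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            masked.append("[MASK]")
            labels.append(tok)    # loss is computed only on these positions
        else:
            masked.append(tok)
            labels.append(None)   # no loss on visible tokens
    return masked, labels

masked, labels = mask_tokens(["my", "dog", "is", "very", "cute"])
print(masked, labels)
```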
ViT [AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE] 💡 0. Abstract While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on .. 2023. 7. 5.
RetinaNet 1. Intro What is class imbalance? → In a classification problem, the number of samples per class is unbalanced. → In binary classification, it arises when one class has far fewer samples than the other. → For example, when classifying the presence of a disease, healthy people are the majority and diseased people are very few (class imbalance). → In short, one class has vastly more or fewer samples than another. The paper argues that background regions (easy negatives) make up most of the data, so their influence on training grows and model performance drops. ※ Additional note: in object detection models, the parts of the image where an object is .. 2023. 7. 5.
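RetinaNet's remedy for the easy-negative flood is the paper's focal loss, which down-weights well-classified examples with a (1 − p_t)^γ factor. A minimal single-prediction sketch (α = 0.25, γ = 2 are the paper's default values):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)
    for one binary prediction p in (0, 1) with label y in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy negative (model 99% sure it's background) contributes almost
# nothing, so masses of background anchors no longer dominate training:
print(focal_loss(p=0.01, y=0))  # near zero
print(focal_loss(p=0.60, y=0))  # a confused example keeps a sizeable loss
```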
GPT-1 1. Intro Unlabeled text data is abundant, while labeled data is scarce — which makes it hard for a model to learn a given task properly. The resulting idea: first train the model on unsupervised data, then re-train it on labeled data. 2. Overall architecture Composed of unsupervised pre-training + supervised fine-tuning. 3. Unsupervised pre-training Unlabeled (unsupervised) data is fed as input; word embedding and positional encoding are applied; the decoder's masked self.. 2023. 7. 5.
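The "masked self-attention" the entry breaks off at is the decoder's causal mask: position i may attend only to positions ≤ i, which is what makes the decoder a left-to-right language model. A tiny sketch of building that mask:

```python
def causal_mask(n):
    """n x n lower-triangular mask: row i allows attention to columns <= i."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

for row in causal_mask(4):
    print(row)  # each position sees itself and everything before it
```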