
All Posts (38)
Denoising Diffusion Implicit Models
I reviewed the DDIM paper, which applies a non-Markovian forward process to reduce the randomness in DDPM's generative process. https://ruddy-sheet-75d.notion.site/Denoising-Diffusion-Implicit-Models-f3900e97f13d445cbce896b62537b7ec
If you spot any errors in my notes, please let me know in the comments!
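
For quick reference, here is a minimal sketch of the DDIM sampling update (Eq. 12 in the paper). `a_t` and `a_prev` denote the cumulative products (alpha-bar) at steps t and t-1, and `eps` is the output of a pretrained noise-prediction network; both are assumed given.

```python
import math
import torch

def ddim_step(x_t, eps, a_t, a_prev, eta=0.0):
    """One DDIM sampling step. eta = 0 gives the deterministic DDIM
    update; eta = 1 recovers DDPM-like stochastic sampling."""
    # Predict x_0 from the current sample and the noise estimate.
    x0_pred = (x_t - math.sqrt(1 - a_t) * eps) / math.sqrt(a_t)
    # Noise scale interpolated by eta (Eq. 16 in the paper).
    sigma = eta * math.sqrt((1 - a_prev) / (1 - a_t)) * math.sqrt(1 - a_t / a_prev)
    # "Direction pointing to x_t" term.
    dir_xt = math.sqrt(1 - a_prev - sigma ** 2) * eps
    return math.sqrt(a_prev) * x0_pred + dir_xt + sigma * torch.randn_like(x_t)
```

With eta = 0 the noise term vanishes entirely, which is how DDIM removes the randomness of DDPM sampling while reusing the same trained network.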
Generating Diverse High-Fidelity Images with VQ-VAE-2
I reviewed the VQ-VAE-2 paper, which applies a hierarchical architecture to the VQ-VAE model. https://ruddy-sheet-75d.notion.site/Generating-Diverse-High-Fidelity-Images-with-VQ-VAE-2-b7ef04de460d4d839f97555767f32a0a?pvs=4
If you spot any errors in my notes, please let me know in the comments!
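
A minimal sketch of the two-level (top/bottom) hierarchy in the spirit of VQ-VAE-2: the top code captures global structure and the bottom code, conditioned on the decoded top code, adds local detail. Layer sizes are illustrative, conditioning is simplified to an addition, and the codebook losses and straight-through estimator are omitted for brevity.

```python
import torch
import torch.nn as nn

class VQVAE2Sketch(nn.Module):
    def __init__(self, dim=64, codes=512):
        super().__init__()
        self.enc_b = nn.Conv2d(3, dim, 4, stride=4)    # image -> bottom features
        self.enc_t = nn.Conv2d(dim, dim, 2, stride=2)  # bottom -> top features
        self.book_t = nn.Embedding(codes, dim)         # top-level codebook
        self.book_b = nn.Embedding(codes, dim)         # bottom-level codebook
        self.up_t = nn.ConvTranspose2d(dim, dim, 2, stride=2)
        self.dec = nn.ConvTranspose2d(2 * dim, 3, 4, stride=4)

    @staticmethod
    def quantize(h, book):
        # Nearest-neighbour codebook lookup per spatial position.
        flat = h.permute(0, 2, 3, 1).reshape(-1, h.size(1))
        idx = torch.cdist(flat, book.weight).argmin(1)
        q = book(idx).view(h.size(0), h.size(2), h.size(3), -1)
        return q.permute(0, 3, 1, 2)

    def forward(self, x):
        h_b = self.enc_b(x)
        q_t = self.quantize(self.enc_t(h_b), self.book_t)  # global structure
        d_t = self.up_t(q_t)                               # decoded top code
        q_b = self.quantize(h_b + d_t, self.book_b)        # detail, given the top
        return self.dec(torch.cat([d_t, q_b], dim=1))
```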
Neural Discrete Representation Learning
I reviewed the VQ-VAE paper, which introduces vector quantization into the VAE's latent variables. https://ruddy-sheet-75d.notion.site/Neural-Discrete-Representation-Learning-de09712cedfa472a8fbaa84ff2f767dd?pvs=4
If you spot any errors in my notes, please let me know in the comments!
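
The core of VQ-VAE is a nearest-neighbour codebook lookup trained with the straight-through estimator. Here is a minimal sketch, assuming flattened encoder outputs `z_e` of shape (N, D) and a codebook of shape (K, D):

```python
import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook, beta=0.25):
    """Nearest-neighbour lookup with the straight-through estimator."""
    idx = torch.cdist(z_e, codebook).argmin(dim=1)   # nearest code per vector
    z_q = codebook[idx]
    # Straight-through: copy decoder gradients to the encoder unchanged.
    z_q_st = z_e + (z_q - z_e).detach()
    # Codebook loss pulls codes toward encoder outputs; the commitment
    # loss (weight beta, 0.25 in the paper) keeps the encoder near its code.
    loss = F.mse_loss(z_q, z_e.detach()) + beta * F.mse_loss(z_e, z_q.detach())
    return z_q_st, idx, loss
```

The detach trick is what lets gradients flow through the non-differentiable argmin during training.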
Score-based Generative Modeling through Stochastic Differential Equations
I reviewed this paper, which shows that both SMLD and DDPM can be viewed as discretizations of SDEs. https://ruddy-sheet-75d.notion.site/Score-based-Generative-Modeling-through-Stochastic-Differential-Equations-88c707e283874aac8c5143fb0d7587e8?pvs=4
If you spot any errors in my notes, please let me know in the comments!
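
To make the SDE view concrete, here is a sketch of an Euler-Maruyama sampler for the reverse-time VE SDE, dx = -g(t)^2 * score(x, t) dt + g(t) dw-bar (drift f = 0). `score_fn` is a trained score network and `g` the diffusion coefficient; both are assumptions for the sketch, and `ts` is a list of times decreasing from T toward 0.

```python
import math
import torch

def reverse_ve_sde(score_fn, x_T, ts, g):
    """Euler-Maruyama discretization of the reverse-time VE SDE."""
    x = x_T
    for t, t_next in zip(ts[:-1], ts[1:]):
        dt = t_next - t                                  # negative time step
        drift = -(g(t) ** 2) * score_fn(x, t)           # reverse-time drift
        x = x + drift * dt + g(t) * math.sqrt(-dt) * torch.randn_like(x)
    return x
```

With the SMLD noise schedule plugged into g, this loop reduces to annealed Langevin-style sampling, which is the paper's unifying point.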
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
I reviewed the ViT paper, which applies the Transformer to vision tasks. https://ruddy-sheet-75d.notion.site/An-Image-is-Worth-16x16-Words-Transformers-for-Image-Recognition-at-Scale-8f616a55fa6a428a97845727882c1b02?pvs=4
If you spot any errors in my notes, please let me know in the comments!
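
The title's "16x16 words" refers to ViT's input pipeline: split the image into 16x16 patches, linearly embed them, prepend a [class] token, and add position embeddings. A minimal sketch (sizes follow ViT-Base, but are illustrative):

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Turn an image into a sequence of embedded patches for a Transformer."""
    def __init__(self, img_size=224, patch=16, dim=768):
        super().__init__()
        # A strided conv is equivalent to slicing patches + a linear projection.
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n = (img_size // patch) ** 2
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))      # [class] token
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))  # position embeddings

    def forward(self, x):                                    # x: (B, 3, H, W)
        x = self.proj(x).flatten(2).transpose(1, 2)          # (B, N, dim)
        cls = self.cls.expand(x.size(0), -1, -1)
        return torch.cat([cls, x], dim=1) + self.pos         # (B, N+1, dim)
```

The resulting sequence goes straight into a standard Transformer encoder; nothing else about the architecture is vision-specific.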
Language Models are Unsupervised Multitask Learners
I reviewed the GPT-2 paper, which scales GPT up with far more parameters to address its shortcomings. https://ruddy-sheet-75d.notion.site/Language-Models-are-Unsupervised-Multitask-Learners-6f505abbb67243a9b2cdc6a658771aaf?pvs=4
If you spot any errors in my notes, please let me know in the comments!
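
GPT-2's zero-shot behavior all comes out of plain autoregressive decoding. A minimal sketch, where `model` stands for any language model mapping (B, T) token ids to (B, T, vocab) logits (an assumed interface, not the paper's API):

```python
import torch

@torch.no_grad()
def sample(model, tokens, steps, temperature=1.0):
    """Autoregressive sampling: append one token at a time."""
    for _ in range(steps):
        logits = model(tokens)[:, -1, :] / temperature       # next-token logits
        next_tok = torch.multinomial(logits.softmax(-1), 1)  # sample a token
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens
```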
Improving Language Understanding by Generative Pre-Training
I reviewed the GPT paper, a generative model pre-trained on large unlabeled text corpora. https://ruddy-sheet-75d.notion.site/Improving-Language-Understanding-by-Generative-Pre-Training-a0d9fb62ca004932953a822c13178e2e?pvs=4
If you spot any errors in my notes, please let me know in the comments!
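
The pre-training objective is plain next-token prediction, maximizing sum_i log P(u_i | u_<i) over the unlabeled corpus. A minimal sketch, again assuming `model` maps (B, T) ids to (B, T, vocab) logits:

```python
import torch.nn.functional as F

def lm_loss(model, tokens):
    """Next-token cross-entropy used for generative pre-training."""
    logits = model(tokens[:, :-1])                 # predict token i from its prefix
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1))   # targets shifted by one
```

Fine-tuning then adds a small task head on top of the pre-trained Transformer, optionally keeping this LM loss as an auxiliary objective.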
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
I reviewed the BERT paper, a pre-trained model that exploits deep bidirectional representations. https://ruddy-sheet-75d.notion.site/BERT-Pre-training-of-Deep-Bidirectional-Transformers-for-Language-Understanding-1c4323ee7ad844ca937240bf295e1788?pvs=4
If you spot any errors in my notes, please let me know in the comments!
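
What makes bidirectional pre-training possible is the masked-LM corruption: 15% of positions are selected, of which 80% become [MASK], 10% become a random token, and 10% stay unchanged. A minimal sketch of that corruption step, with `mask_id` and `vocab_size` assumed from the tokenizer:

```python
import torch

def mask_tokens(tokens, mask_id, vocab_size, p=0.15):
    """BERT-style masked-LM corruption; returns corrupted inputs and labels."""
    labels = tokens.clone()
    picked = torch.rand_like(tokens, dtype=torch.float) < p
    labels[~picked] = -100                        # ignore unpicked positions in the loss
    r = torch.rand_like(tokens, dtype=torch.float)
    # 80% of picked positions -> [MASK]
    tokens = torch.where(picked & (r < 0.8),
                         torch.full_like(tokens, mask_id), tokens)
    # 10% of picked positions -> a random token; the rest stay unchanged
    tokens = torch.where(picked & (r >= 0.8) & (r < 0.9),
                         torch.randint_like(tokens, vocab_size), tokens)
    return tokens, labels
```

Because the model must reconstruct the picked tokens from both left and right context, the encoder can be deeply bidirectional, unlike a left-to-right LM.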