Materials
This page collects the materials used in the seminars: presentation files, reference papers, and hands-on practice code.
Neural Network Fundamentals Review
- Slide
- Understanding deep learning requires rethinking generalization
- Deep Learning and the Information Bottleneck Principle
Neural Network Training Strategies
- Slide
- A Simple Framework for Contrastive Learning of Visual Representations
- Adam: A Method for Stochastic Optimization
Neural Network Modeling Techniques
- Slide
- Code
- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
- Layer Normalization
- Dropout: A Simple Way to Prevent Neural Networks from Overfitting
Natural Language Processing Basics 1
- Slide
- Efficient Estimation of Word Representations in Vector Space
- Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling
- Sequence to Sequence Learning with Neural Networks
- Neural Machine Translation by Jointly Learning to Align and Translate
Natural Language Processing Basics 2
- Slide
- Attention Is All You Need
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
- Improving Language Understanding by Generative Pre-Training
Reinforcement Learning Basics
Special Lecture on Combinatorial Optimization
- Video
- Slide
- Proximal Policy Optimization Algorithms
- Attention, Learn to Solve Routing Problems!
- POMO: Policy Optimization with Multiple Optima for Reinforcement Learning
Theoretical Understanding of Generative Models (GMM to VAE)
Applications and Research Trends of Generative Models (VAE to Diffusion)
- Slide
- Denoising Diffusion Probabilistic Models
- Hierarchical Text-Conditional Image Generation with CLIP Latents
Fundamentals of Recommender Systems
Advanced Techniques for Deep Learning Recommender Systems
- Slide
- Neural Collaborative Filtering
- Neural Attentive Session-based Recommendation
- KGAT: Knowledge Graph Attention Network for Recommendation
- DeepFM: A Factorization-Machine based Neural Network for CTR Prediction
Explainable AI Basics
- Slide
- LIME: “Why Should I Trust You?”: Explaining the Predictions of Any Classifier
- SHAP: A Unified Approach to Interpreting Model Predictions
- TreeSHAP: Consistent Individualized Feature Attribution for Tree Ensembles
- DeepSHAP: Explaining a Series of Models by Propagating Shapley Values
- Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Explainable AI Advanced Topics
- Slide
- Network Dissection: Quantifying Interpretability of Deep Visual Representations
- Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
- Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions
- This Looks Like That: Deep Learning for Interpretable Image Recognition