Machine Learning to Deep Learning_3
Tutorial code: Jemin Lee Homepage: History, the algorithms, a somewhat in-depth understanding of them, and through that, hands-on implementation and Deep-Learning in applied system areas.
Table of Contents
Fundamental Machine Learning (Day 1)
  Linear Regression: Gradient Descent Algorithm (optimization)
  Logistic Regression (Single Neuron = Perceptron): Sigmoid (logistic function), Convexity, Cross Entropy, Decision Boundary
  Multiple Perceptrons (Hidden Layer): Backpropagation algorithm
Deep Neural Network Breakthrough (Days 2-3)
  Rebirth of the Neural Network, renamed DNN
  TensorFlow Basics
  DNN, ReLU, Pre-training, Dropout
  Convolutional Neural Network (CNN)
  How to apply a DNN to a real-world problem (?)
  Use-case: smarttention 2016
Fully Connected (Dense) Neural Network
8-1. Convolutional Neural Network, Prof. 홍정모
Preview
Source: 모두를 위한 머신러닝 Season 1 (Machine Learning for Everyone), Prof. 김성훈
Let's look at other areas with the same filter (w).
How many numbers can we get? (Source: 모두를 위한 머신러닝 Season 1, Prof. 김성훈)
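To make the sliding-filter idea concrete, here is a minimal NumPy sketch (my own illustration, not taken from the tutorial notebooks) that slides a single 3x3 filter w over a 4x4 input with stride 1:

```python
import numpy as np

# Hypothetical 4x4 input image and a single 3x3 filter (w).
image = np.arange(16, dtype=np.float32).reshape(4, 4)
w = np.ones((3, 3), dtype=np.float32)
stride = 1

out_size = (image.shape[0] - w.shape[0]) // stride + 1  # (4-3)/1 + 1 = 2
out = np.zeros((out_size, out_size), dtype=np.float32)

for r in range(out_size):
    for c in range(out_size):
        patch = image[r*stride:r*stride+3, c*stride:c*stride+3]
        out[r, c] = np.sum(patch * w)  # one number per filter position

print(out.shape)  # (2, 2): four numbers from one filter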
Output image size calculation [1/2]
$o = \frac{i - f}{s} + 1 = \frac{4 - 3}{1} + 1 = 2$
Input (i): 4x4, Filter (f): 3x3, Stride (s): 1
Output image size calculation [2/2]
$o = \frac{i - f}{s} + 1 = \frac{5 - 3}{2} + 1 = 2$
Input (i): 5x5, Filter (f): 3x3, Stride (s): 2
Padding calculation
$o = \frac{i - f + 2p}{s} + 1 = \frac{5 - 4 + 2 \times 2}{1} + 1 = 6$
Input (i): 5x5, Filter (f): 4x4, Stride (s): 1, Padding (p): 2
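All three slides instantiate the same formula; a tiny helper (my own, for checking the arithmetic) reproduces each result:

```python
def conv_output_size(i, f, s=1, p=0):
    """Output side length: o = (i - f + 2p)/s + 1 (assumes it divides evenly)."""
    return (i - f + 2 * p) // s + 1

assert conv_output_size(4, 3, s=1) == 2       # slide [1/2]: 4x4 input, 3x3 filter
assert conv_output_size(5, 3, s=2) == 2       # slide [2/2]: 5x5 input, stride 2
assert conv_output_size(5, 4, s=1, p=2) == 6  # padding slide: 5x5 input, p = 2
```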
Generating the Activation Map
Pooling (subsampling)
No weights are multiplied and no bias is added; the values read from the input map are simply reprocessed. Usually the pooling size and the stride are the same.
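A NumPy sketch (mine, assuming non-overlapping windows so that pool size equals stride, as the slide notes) shows that pooling reprocesses values without any weights or bias:

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling (pool size == stride); no weights, no bias."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]          # crop to a multiple of `size`
    blocks = x.reshape(h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))              # average pooling

x = np.arange(16, dtype=np.float32).reshape(4, 4)
print(pool2d(x, 2, "max"))   # 4x4 -> 2x2, keeps the largest value per block
print(pool2d(x, 2, "mean"))  # same shape, averages instead
```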
Fully Connected Layer (FC Layer)
Contains neurons that connect to the entire input volume, as in ordinary Neural Networks
Average pooling
$o = \frac{i - f}{s} + 1 = \frac{5 - 3}{1} + 1 = 3$
(ŷ is also written as h, f, etc.)
Max pooling
$o = \frac{i - f}{s} + 1 = \frac{5 - 3}{1} + 1 = 3$
(ŷ is also written as h, f, etc.)
MNIST Hands-On
CNN architecture to implement
Code location: plain version: MNIST.ipynb; with TensorBoard applied: MNIST-tensorboard.ipynb
Input (28x28) → C (28x28) → POOL (14x14) → C (14x14) → POOL (7x7) → C (7x7) → POOL (4x4) → FC (2048, 625) → FC (625, 10)
All convolution filters are 3x3; the three conv stages produce 32, 64, and 128 activation maps, respectively. A TensorFlow sketch follows below.
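Below is a condensed sketch of this pipeline in 2016-era TensorFlow 1.x (the notebooks' vintage). Variable names, the stddev value, and the omission of biases and dropout are my simplifications; MNIST.ipynb may differ:

```python
import tensorflow as tf  # TensorFlow 1.x style, matching the notebooks

X = tf.placeholder(tf.float32, [None, 28, 28, 1])

def conv_pool(x, filter_shape):
    """3x3 conv (stride 1, SAME padding) + ReLU, then 2x2 max pool (stride 2)."""
    w = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.01))
    h = tf.nn.relu(tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME'))
    return tf.nn.max_pool(h, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                          padding='SAME')

h1 = conv_pool(X,  [3, 3, 1, 32])    # C(28x28), 32 maps -> POOL -> 14x14x32
h2 = conv_pool(h1, [3, 3, 32, 64])   # C(14x14), 64 maps -> POOL -> 7x7x64
h3 = conv_pool(h2, [3, 3, 64, 128])  # C(7x7), 128 maps  -> POOL -> 4x4x128

flat = tf.reshape(h3, [-1, 4 * 4 * 128])  # 2048 values per image
w4 = tf.Variable(tf.truncated_normal([2048, 625], stddev=0.01))
h4 = tf.nn.relu(tf.matmul(flat, w4))      # FC(2048, 625)
w5 = tf.Variable(tf.truncated_normal([625, 10], stddev=0.01))
logits = tf.matmul(h4, w5)                # FC(625, 10)
```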
Parameter updates
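This heading refers to gradient-based updates (gradient descent is covered on Day 1). As a minimal reference point, vanilla gradient descent on a toy function looks like this (my example, not slide content):

```python
# Vanilla gradient descent on f(w) = (w - 3)^2; the function and the
# learning rate are illustrative choices, not from the slides.
w = 0.0
learning_rate = 0.1
for step in range(100):
    grad = 2.0 * (w - 3.0)       # analytic gradient df/dw
    w -= learning_rate * grad    # update: step against the gradient
print(w)                         # approaches the minimizer w = 3
```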
Weight Initialization
Truncated normal initialization is used.
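In TensorFlow 1.x this is a one-liner; the shape below matches the first conv layer of the MNIST network, and stddev=0.01 is an assumed value:

```python
import tensorflow as tf  # TensorFlow 1.x style, matching the notebooks

# Truncated normal re-draws samples farther than 2 standard deviations
# from the mean, avoiding extreme initial weights.
w_conv1 = tf.Variable(tf.truncated_normal([3, 3, 1, 32], stddev=0.01))
```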
What we implement: a basic MLP, a 5-layer MLP with ReLU, and pre-training and dropout
One hidden layer: testing accuracy ~92%
5-layer MLP with ReLU (ReLU, Sigmoid, Softmax): testing accuracy ~97%
Pre-training and dropout (Xavier init. and new regularization): testing accuracy ~99%
Convolutional Neural Network: testing accuracy 100%
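As a sketch of the Xavier-initialization-plus-dropout variant in TensorFlow 1.x (layer sizes follow the earlier MLP slide; the keep probability is my assumption, and the notebooks may differ):

```python
import tensorflow as tf  # TensorFlow 1.x style

# One hidden layer with Xavier initialization and dropout.
X = tf.placeholder(tf.float32, [None, 784])
keep_prob = tf.placeholder(tf.float32)  # e.g. 0.7 for training, 1.0 for testing

w1 = tf.get_variable("w1", shape=[784, 256],
                     initializer=tf.contrib.layers.xavier_initializer())
b1 = tf.Variable(tf.zeros([256]))
h1 = tf.nn.dropout(tf.nn.relu(tf.matmul(X, w1) + b1), keep_prob)

w2 = tf.get_variable("w2", shape=[256, 10],
                     initializer=tf.contrib.layers.xavier_initializer())
b2 = tf.Variable(tf.zeros([10]))
logits = tf.matmul(h1, w2) + b2
```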
Break down the CNN in more detail
Input:    28x28x1,  memory: 784, weights: 0
Conv-32:  28x28x32, memory: 25K, weights: 3*3*1*32 = 288
Pool2:    14x14x32, memory: 6K,  weights: 0
Conv-64:  14x14x64, memory: 12K, weights: 3*3*32*64 = 18,432
Pool2:    7x7x64,   memory: 3K,  weights: 0
Conv-128: 7x7x128,  memory: 6K,  weights: 3*3*64*128 = 73,728
Pool2:    4x4x128,  memory: 2K,  weights: 0
FC:       1x1x2048, memory: 2K,  weights: 4*4*128*2048 = 4,194,304
FC:       1x1x10,   memory: 10,  weights: 2048*10 = 20,480
Total memory: ~58K values * 4 bytes ≈ 233 KB (forward pass only, per image)
Total params: 4,307,232 (~4.3M) parameters
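The table can be re-derived in a few lines of plain Python (my own check of the counts above):

```python
# (name, activation values, weight count) per layer of the MNIST CNN.
layers = [
    ("input",    28*28*1,  0),
    ("conv-32",  28*28*32, 3*3*1*32),
    ("pool2",    14*14*32, 0),
    ("conv-64",  14*14*64, 3*3*32*64),
    ("pool2",    7*7*64,   0),
    ("conv-128", 7*7*128,  3*3*64*128),
    ("pool2",    4*4*128,  0),
    ("fc-2048",  2048,     4*4*128*2048),
    ("fc-10",    10,       2048*10),
]
activations = sum(a for _, a, _ in layers)  # 58,202 values
params = sum(w for _, _, w in layers)       # 4,307,232 weights
print(activations * 4, "bytes of forward activations per image")  # ~233 KB
print(params, "parameters")
```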
Comparison: basic MLP vs. Convolutional Neural Network
One hidden layer: testing accuracy ~92%
  Memory: (784 + 784*256 + 10) * 4 bytes = 805,992 bytes (~806 KB)
  Params: [256, 28x28+1] + [10, 256+1] = 203,530 (203K) parameters
Convolutional Neural Network: testing accuracy 100%
  Total memory: ~58K values * 4 bytes ≈ 233 KB (forward pass only, per image)
  Total params: 4,307,232 (~4.3M) parameters
An accuracy gain of about 8 points, at the cost of roughly 20x the parameters and far more computation.
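The MLP side of the comparison can be checked the same way (mirroring the slide's own formulas):

```python
# One-hidden-layer MLP (256 hidden units) on 28x28 inputs.
mlp_params = 256 * (28 * 28 + 1) + 10 * (256 + 1)  # 203,530 weights + biases
mlp_memory = (784 + 784 * 256 + 10) * 4            # 805,992 bytes (~806 KB)
print(mlp_params, mlp_memory)
```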
Speed
Processor        FLOPS            Time (speed-up)
i7-4790K         34 GFLOPS        105 min 31 s (1x)
GTX-745          -                -
AWS EC2, K520    2448*2 GFLOPS    22 min 4 s (4.78x)
GTX-970          3494 GFLOPS      9 min 10 s (11.5x)
GTX-1080         8990 GFLOPS      5 min 27 s (19.3x)
VGGNet-16
Runner-up of the ImageNet (ILSVRC) 2014 classification challenge
Wrap-up
Nuts and Bolts of Applying Deep Learning (Andrew Ng), NIPS 2016
How do I become a machine learning researcher? How do I come up with my own ideas?
Read a lot of papers (20~30) and then replicate the results. The human brain is a remarkable device, and this is an incredibly reliable process.
Dig into the dirty work: cleaning data, replicating results, or hacking the GPU.
Almost all Ph.D. students want to change the world. If you work on AI, what you do has to actually impact a lot of people.