Optimization for Training Deep Models
Sections 8.3 ~ 8.6.3. 병렬소프트웨어설계 연구실 (Parallel Software Design Lab), 오찬영
8.3.1 Stochastic Gradient Descent
Stochastic Gradient Descent (SGD) takes the average gradient on a minibatch of m examples drawn i.i.d. (independent and identically distributed) from the data-generating distribution. In other words, each update uses only a small subset (a minibatch) of the training data.
8.3.1 Stochastic Gradient Descent
Algorithm: minibatch SGD. Note that the learning rate ε_k changes with the iteration number k.
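As an illustration (not part of the original slides), here is a minimal NumPy sketch of the minibatch SGD loop described above; grad_fn, params, X, and y are hypothetical placeholders for the model gradient, parameters, and training data. A sketch of the decaying learning-rate schedule follows the next slide.

import numpy as np

def sgd(params, grad_fn, X, y, lr=0.1, batch_size=32, num_steps=1000, seed=0):
    """Minibatch SGD: estimate the gradient on m examples sampled from the
    training set and take a small step against it."""
    rng = np.random.default_rng(seed)
    for k in range(num_steps):
        idx = rng.choice(len(X), size=batch_size, replace=False)  # sample a minibatch
        g = grad_fn(params, X[idx], y[idx])   # average gradient over the minibatch
        params = params - lr * g              # gradient step with learning rate lr
    return params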
8.3.1 Stochastic Gradient Descent
Why a decaying learning rate? Unlike batch gradient descent, the SGD gradient estimate does not shrink to 0 even when a minimum is reached, because of the noise introduced by sampling minibatches. A sufficient condition for SGD to converge is Σ_k ε_k = ∞ and Σ_k ε_k² < ∞. In practice, the learning rate is usually decayed linearly until iteration τ, ε_k = (1 − α) ε_0 + α ε_τ with α = k/τ, and then kept constant after iteration τ.
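A small sketch of the linear decay schedule above (my own illustration); the values of eps0, eps_tau, and tau are placeholders, not recommendations from the slides.

def lr_schedule(k, eps0=0.1, eps_tau=0.001, tau=1000):
    """Linearly decay the learning rate until iteration tau, then keep it constant:
    eps_k = (1 - alpha) * eps0 + alpha * eps_tau, with alpha = k / tau."""
    if k >= tau:
        return eps_tau
    alpha = k / tau
    return (1 - alpha) * eps0 + alpha * eps_tau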
8.3.1 Stochastic Gradient Descent
Advantage: the computation time per update does not grow with the number of training examples. Disadvantage: the total training time may still be slow, since many updates can be needed.
8.3.2 Momentum
Momentum is effective in the presence of high curvature, small but consistent gradients, or noisy gradients. It works by retaining some memory of past gradient directions.
8.3.2 Momentum
Plain SGD updates using only the current gradient, so it tends to oscillate back and forth across narrow valleys, which makes progress slow.
8.3.2 Momentum
When successive gradients are aligned in the same direction, momentum increases the effective step size: with a constant gradient g, the terminal step length is ε‖g‖ / (1 − α).
The momentum coefficient α is typically started at a small value and gradually increased during training (e.g., from 0.5 toward 0.99).
8.3.2 Momentum
Algorithm: SGD with momentum. The velocity accumulates an exponentially decaying average of past gradients: v ← α v − ε g, θ ← θ + v.
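A minimal sketch of the momentum update above, reusing the same hypothetical grad_fn/data placeholders as the earlier SGD sketch.

import numpy as np

def sgd_momentum(params, grad_fn, X, y, lr=0.01, alpha=0.9,
                 batch_size=32, num_steps=1000, seed=0):
    """SGD with momentum: the velocity v is an exponentially decaying
    accumulation of past gradients, which damps oscillations."""
    rng = np.random.default_rng(seed)
    v = np.zeros_like(params)
    for _ in range(num_steps):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        g = grad_fn(params, X[idx], y[idx])
        v = alpha * v - lr * g     # v <- alpha*v - eps*g
        params = params + v        # theta <- theta + v
    return params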
8.3.3 Nesterov Momentum
Nesterov momentum evaluates the gradient after the current velocity has been applied, i.e., at the interim point θ + α v.
This can be interpreted as adding a correction factor to the standard momentum method.
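A minimal sketch of the Nesterov variant, again with hypothetical grad_fn/data placeholders; the only change from the momentum sketch is where the gradient is evaluated.

import numpy as np

def sgd_nesterov(params, grad_fn, X, y, lr=0.01, alpha=0.9,
                 batch_size=32, num_steps=1000, seed=0):
    """Nesterov momentum: evaluate the gradient at the interim point
    theta + alpha*v, i.e., after the current velocity has been applied."""
    rng = np.random.default_rng(seed)
    v = np.zeros_like(params)
    for _ in range(num_steps):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        g = grad_fn(params + alpha * v, X[idx], y[idx])  # look-ahead gradient
        v = alpha * v - lr * g
        params = params + v
    return params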
8.4 Parameter Initialization Strategies
Initialization determines whether the algorithm converges at all, how quickly learning converges, and the generalization error reached. Choosing a good initial point is very difficult, so initialization strategies tend to be simple and heuristic. One essential requirement is to "break symmetry" between units: if two hidden units with the same activation function are connected to the same inputs, then these units must have different initial parameters.
8.4 Parameter Initialization Strategies
Random initialization (from a Gaussian or uniform distribution) is a simple method: it is computationally cheap and unlikely to assign the same function to different units. The scale of the initial values involves a trade-off: larger initial values give easier optimization (better convergence) and stronger symmetry breaking, but weaker regularization (more overfitting), and they risk exploding values during propagation or saturating the activation function.
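A minimal sketch of simple random initialization (my own illustration); the scale argument is the knob behind the trade-off described above.

import numpy as np

def random_init(m, n, scale=0.01, dist="gaussian", seed=0):
    """Simple random initialization of an m x n weight matrix.
    Larger 'scale' breaks symmetry more strongly but risks exploding or
    saturating activations; smaller 'scale' acts like a stronger pull toward 0."""
    rng = np.random.default_rng(seed)
    if dist == "gaussian":
        return rng.normal(0.0, scale, size=(m, n))
    return rng.uniform(-scale, scale, size=(m, n))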
8.4 Parameter Initialization Strategies
Normalized initialization is a heuristic that sets the initial scale of the weights from the layer's number of inputs m and outputs n, e.g., W ~ U(−√(6/(m+n)), +√(6/(m+n))). Another option is to initialize the weight matrices as random orthogonal matrices.
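A sketch of the two schemes named above, assuming a fully connected layer with m inputs and n outputs; the orthogonal variant is one common way to produce random orthogonal matrices, not necessarily the exact procedure referenced in the slides.

import numpy as np

def glorot_uniform(m, n, seed=0):
    """Normalized (Glorot) initialization:
    W_ij ~ U(-sqrt(6/(m+n)), +sqrt(6/(m+n)))."""
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (m + n))
    return rng.uniform(-limit, limit, size=(m, n))

def orthogonal_init(m, n, gain=1.0, seed=0):
    """Random (semi-)orthogonal initialization via QR decomposition."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(m, n))
    q, _ = np.linalg.qr(a if m >= n else a.T)  # q has orthonormal columns
    q = q if m >= n else q.T
    return gain * q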
8.4 Parameter Initialization Strategies
Optimality criteria for the initial weights do not guarantee optimal performance: the criteria may simply be wrong; the properties imposed at initialization may be lost once learning begins; or optimization speed may improve while generalization gets worse. For this reason the initial weight scale is often treated as a hyperparameter and searched over when computational resources are sufficient.
8.4 Parameter Initialization Strategies
Sparse initialization: if the initial scale is derived from the number of inputs/outputs, the weights become very small when the layer is large. Instead, each unit is initialized with exactly k non-zero weights, which lets units keep diverse initial values regardless of layer width. However, if a unit happens to receive a very large "wrong" weight, training can take a long time to correct it.
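A minimal sketch of sparse initialization as described above; k = 15 is just an illustrative choice.

import numpy as np

def sparse_init(m, n, k=15, scale=1.0, seed=0):
    """Each of the n units receives exactly k non-zero incoming weights,
    independent of the layer width m."""
    rng = np.random.default_rng(seed)
    W = np.zeros((m, n))
    for j in range(n):
        idx = rng.choice(m, size=min(k, m), replace=False)  # k random inputs
        W[idx, j] = rng.normal(scale=scale, size=len(idx))
    return W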
8.4 Parameter Initialization Strategies
Initializing the biases: setting all biases to 0 is compatible with most weight initialization schemes. Non-zero biases are useful in a few cases: when the bias belongs to an output unit, to avoid saturation at initialization; and when a unit h determines whether another unit u participates in the computation (u·h ≈ u or u·h ≈ 0), where h acts as a gate and its bias is usually set so the gate is open at initialization.
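A small sketch of the two non-zero-bias cases mentioned above (my own illustration): matching a softmax output's marginal class frequencies, and opening a gate at initialization.

import numpy as np

def softmax_output_bias(class_frequencies):
    """Set the output bias so that softmax(b) reproduces the marginal class
    frequencies, avoiding a badly saturated output layer at initialization."""
    freq = np.asarray(class_frequencies, dtype=float)
    freq = freq / freq.sum()
    return np.log(freq)          # softmax(log(freq)) == freq

# For a unit h that gates another unit u (u*h ~= u or u*h ~= 0), a positive
# bias (e.g., 1.0) keeps the gate mostly open at the start of training.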
8.5 Algorithms with Adaptive LR
The learning rate has a large impact on performance but is difficult to set well. Momentum mitigates this to some extent, at the cost of introducing another hyperparameter. The delta-bar-delta algorithm is an early approach [Jacobs, 1988]: if the sign of a parameter's partial derivative of the loss function stays the same, that parameter's learning rate is increased; if the sign changes, it is decreased. It is applicable only to full-batch gradient descent.
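A simplified sketch of the sign rule described above; note that the original delta-bar-delta algorithm compares the current derivative with an exponentially smoothed average rather than the raw previous gradient, and kappa/phi are illustrative constants.

import numpy as np

def delta_bar_delta_step(params, lrs, g, g_prev, kappa=1e-3, phi=0.1):
    """One full-batch adaptive-learning-rate step: increase a parameter's
    learning rate additively while its gradient keeps the same sign,
    and decrease it multiplicatively when the sign flips."""
    same_sign = np.sign(g) == np.sign(g_prev)
    lrs = np.where(same_sign, lrs + kappa, lrs * (1.0 - phi))
    return params - lrs * g, lrs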
8.5.1 AdaGrad
AdaGrad accumulates the squared sum of all past gradients and scales each parameter's learning rate inversely to the square root of this sum.
The parameters with the largest partial derivative of the loss have a correspondingly rapid decrease in their learning rate, while parameters with small partial derivatives have a relatively small decrease in their learning rate. AdaGrad has desirable theoretical properties for convex optimization, but it often performs poorly in deep learning, since accumulating gradients from the beginning of training can shrink the effective learning rate too early.
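A minimal NumPy sketch of AdaGrad, with the same hypothetical grad_fn/data placeholders as the earlier sketches.

import numpy as np

def adagrad(params, grad_fn, X, y, lr=0.01, delta=1e-7,
            batch_size=32, num_steps=1000, seed=0):
    """AdaGrad: accumulate the element-wise sum of squared gradients r and
    scale each parameter's step by 1 / (delta + sqrt(r))."""
    rng = np.random.default_rng(seed)
    r = np.zeros_like(params)
    for _ in range(num_steps):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        g = grad_fn(params, X[idx], y[idx])
        r = r + g * g                                  # squared-gradient accumulator
        params = params - lr * g / (delta + np.sqrt(r))
    return params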
8.5.2 RMSProp
RMSProp modifies AdaGrad to perform better in the non-convex setting.
It replaces the accumulated sum of squared gradients with an exponentially decaying average, so the influence of gradients from the distant past is discarded. It is currently one of the most widely used methods.
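The same sketch with AdaGrad's accumulator replaced by an exponentially decaying average, which is the core change in RMSProp.

import numpy as np

def rmsprop(params, grad_fn, X, y, lr=0.001, rho=0.9, delta=1e-6,
            batch_size=32, num_steps=1000, seed=0):
    """RMSProp: like AdaGrad, but old squared gradients decay away at rate rho."""
    rng = np.random.default_rng(seed)
    r = np.zeros_like(params)
    for _ in range(num_steps):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        g = grad_fn(params, X[idx], y[idx])
        r = rho * r + (1.0 - rho) * g * g              # decaying average of g^2
        params = params - lr * g / np.sqrt(delta + r)
    return params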
AdaGrad vs. RMSProp
8.5.3 Adam
Adam combines RMSProp with momentum: momentum is applied to the rescaled gradients.
Adam also applies bias corrections to the estimates of both the first-order moments (the momentum term) and the (uncentered) second-order moments, which are initialized at zero. RMSProp has no such correction factor, so its second-moment estimate can be strongly biased early in training.
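A minimal sketch of the Adam update, again with hypothetical placeholders; the default beta1/beta2/delta values are commonly used ones, not prescriptions from the slides.

import numpy as np

def adam(params, grad_fn, X, y, lr=0.001, beta1=0.9, beta2=0.999, delta=1e-8,
         batch_size=32, num_steps=1000, seed=0):
    """Adam: momentum on the rescaled gradients plus bias correction of the
    first- and second-moment estimates (both start at zero)."""
    rng = np.random.default_rng(seed)
    s = np.zeros_like(params)        # first moment (momentum term)
    r = np.zeros_like(params)        # second moment (uncentered)
    for t in range(1, num_steps + 1):
        idx = rng.choice(len(X), size=batch_size, replace=False)
        g = grad_fn(params, X[idx], y[idx])
        s = beta1 * s + (1.0 - beta1) * g
        r = beta2 * r + (1.0 - beta2) * g * g
        s_hat = s / (1.0 - beta1 ** t)   # bias-corrected first moment
        r_hat = r / (1.0 - beta2 ** t)   # bias-corrected second moment
        params = params - lr * s_hat / (np.sqrt(r_hat) + delta)
    return params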
8.5.4 Choosing the Right Opt. Tech.
There is currently no consensus on this point. The choice of which algorithm to use seems to depend largely on the user's familiarity with the algorithm, which makes hyperparameter tuning easier.
8.6 Approximate 2nd-order Methods
The objective function considered here is the empirical risk, J(θ) = E[L(f(x; θ), y)] over the training data, but the methods discussed extend readily to more general objective functions that, for instance, include parameter regularization terms.
8.6.1 Newton's method
Newton's method is based on a second-order Taylor series expansion of J around a point θ₀: J(θ) ≈ J(θ₀) + (θ − θ₀)ᵀ ∇θ J(θ₀) + ½ (θ − θ₀)ᵀ H (θ − θ₀), where H is the Hessian of J with respect to θ evaluated at θ₀. Solving for the critical point of this approximation gives the Newton update θ* = θ₀ − H⁻¹ ∇θ J(θ₀). Thus, for a locally quadratic function (with positive definite H), rescaling the gradient by H⁻¹ lets Newton's method jump directly to the minimum; otherwise, the update can be iterated.
8.6.1 Newton's method
Algorithm: a two-step iterative procedure.
First, update or compute the inverse Hessian (i.e., update the quadratic approximation). Second, update the parameters according to θ ← θ₀ − H⁻¹ ∇θ J(θ₀).
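A minimal sketch of one Newton update, assuming callables grad_fn and hess_fn that return the gradient vector and Hessian matrix of J; the alpha term anticipates the Hessian regularization discussed on the next slide.

import numpy as np

def newton_step(theta, grad_fn, hess_fn, alpha=0.0):
    """One Newton update: theta <- theta - (H + alpha*I)^(-1) grad.
    alpha = 0 gives the plain Newton step; alpha > 0 regularizes the Hessian."""
    g = grad_fn(theta)                        # gradient of J at theta
    H = hess_fn(theta)                        # Hessian of J at theta
    H_reg = H + alpha * np.eye(len(theta))    # add alpha along the diagonal
    return theta - np.linalg.solve(H_reg, g)  # solve rather than invert explicitly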
8.6.1 Newton's method
In deep learning, the surface of the objective function is typically non-convex, with many features, such as saddle points, that are problematic for Newton's method. This can be addressed by regularizing the Hessian; a common strategy is to add a constant α along the diagonal of the Hessian, giving the update θ ← θ₀ − [H + αI]⁻¹ ∇θ J(θ₀).
8.6.1 Newton's method
Newton's method has a very high computational cost: computing the inverse of the Hessian is O(k³), where k is the number of parameters, and it must be recomputed at every training iteration. Consequently, only networks with a very small number of parameters can practically be trained with Newton's method.
8.6.2 Conjugate Gradients
Conjugate gradients is a method to efficiently avoid computing the inverse Hessian by iteratively descending along conjugate directions. It is motivated by a weakness of steepest descent with line searches: each line search direction, when given by the gradient, is guaranteed to be orthogonal to the previous line search direction, producing a zig-zag pattern in which each step partially undoes progress made along the previous direction.
8.6.2 Conjugate Gradients
In the method of conjugate gradients, we seek a search direction that is conjugate to the previous line search direction, i.e., one that will not undo the progress made in that direction. At training iteration t the search direction takes the form d_t = ∇θ J(θ) + β_t d_{t−1}, where β_t is a coefficient whose magnitude controls how much of the previous direction d_{t−1} we add back to the current search direction (computed with, e.g., the Fletcher-Reeves or Polak-Ribière formula).
8.6.2 Conjugate Gradients Algorithm
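A minimal sketch of nonlinear conjugate gradients with a Polak-Ribière coefficient; grad_fn and line_search are hypothetical callables, and the signs follow the usual minimization convention of stepping along d = -grad + beta * d_prev.

import numpy as np

def nonlinear_cg(theta, grad_fn, line_search, num_steps=100):
    """Each new search direction mixes the current gradient with the previous
    direction so that progress along earlier directions is not undone."""
    g = grad_fn(theta)
    d = -g                                    # first direction: steepest descent
    for _ in range(num_steps):
        step = line_search(theta, d)          # minimize J(theta + step*d) over step
        theta = theta + step * d
        g_new = grad_fn(theta)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere (clipped)
        d = -g_new + beta * d                 # conjugate search direction
        g = g_new
    return theta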
8.6.3 BFGS
BFGS attempts to gain some of the advantages of Newton's method without its computational burden: the inverse Hessian is approximated by a matrix that is iteratively refined with low-rank updates. However, this approximation is an n × n matrix (O(n²) memory) that must be kept in memory, which makes BFGS impractical for large models. L-BFGS (limited-memory BFGS) reduces the memory cost by assuming that the estimate from the previous step is the identity matrix and storing only a small number of update vectors.
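As a usage illustration (not from the slides), SciPy's L-BFGS-B implementation can minimize a function given only its gradient; the quadratic objective below is a made-up toy example.

import numpy as np
from scipy.optimize import minimize

A = np.diag([1.0, 10.0, 100.0])        # toy ill-conditioned quadratic

def f(theta):
    return 0.5 * theta @ A @ theta

def grad(theta):
    return A @ theta

result = minimize(f, x0=np.ones(3), jac=grad, method="L-BFGS-B",
                  options={"maxcor": 10})   # keep only 10 correction pairs
print(result.x)                             # approximately the minimizer at 0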