Contributing to Apache MXNet, the Blue Ocean of Deep Learning — 윤석찬, Amazon Web Services · 오규삼, Samsung SDS — SAMSUNG OPEN SOURCE CONFERENCE 2018
Agenda: 1. Deep Learning 101 · 2. Introducing Apache MXNet · 3. Key Features of MXNet · 4. Demo: Amazon SageMaker · 5. MXNet Contribution Experience and Use Cases · 6. How to Contribute and Join the Community
Deep Learning 101
Artificial Intelligence > Machine Learning — what is machine learning?
What machines are bad at = what humans are good at
Machine Learning > Deep Learning
Deep Learning - Deep Neural Networks
[Figure: accuracy vs. scale (data size, model size). In the 1980s and 1990s neural networks trailed other approaches; with today's compute, larger neural networks pull ahead.]
Source: Jeff Dean, "Trends and Developments in Deep Learning Research" - http://www.slideshare.net/AIFrontiers/jeff-dean-trends-and-developments-in-deep-learning-research
Deep Learning - Deep Neural Networks: approaching human performance
How deep learning learns - backpropagation
[Figure: an input vector flows forward through the network to a prediction Y′; the error between Y′ and the label Y is propagated backward (backpropagation, gradient descent) to produce new weights.]
Hyperparameters: batch size, learning rate, number of epochs
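The forward/backward step on this slide can be sketched in a few lines of numpy. This is a toy, single-neuron illustration only (the weights 0.4 and 0.3 are just example numbers, not from any real model): forward pass, squared loss, chain rule backward, then a gradient-descent weight update.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One gradient-descent step for a single sigmoid neuron with squared loss.
x = np.array([0.0, 1.0, 0.0, 1.0])   # input
y = 1.0                               # label Y
w = np.array([0.4, 0.3, 0.2, 0.9])   # current weights
learning_rate = 0.5                   # hyperparameter

y_hat = sigmoid(w @ x)                # forward pass -> prediction Y'
loss = (y_hat - y) ** 2
# backward pass: chain rule through loss -> sigmoid -> weights
grad_w = 2 * (y_hat - y) * y_hat * (1 - y_hat) * x
w_new = w - learning_rate * grad_w    # new weights

# the updated weights reduce the loss on this example
print((sigmoid(w_new @ x) - y) ** 2 < loss)  # True
```

Repeating this update over many batches and epochs is exactly the training loop the following slides tune with batch size, learning rate, and epoch count.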
How deep learning learns - early stopping
[Figure: training accuracy keeps improving while validation accuracy stalls and the validation loss turns upward - overfitting; training should stop at the best epoch.]
Data set split: training / validation / test
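The early-stopping rule above can be sketched as a simple loop that stops once the validation loss has failed to improve for a few consecutive epochs (a generic sketch with a hypothetical `patience` parameter, not any specific framework's API):

```python
def train_with_early_stopping(val_losses, patience=3):
    """Return the index of the best epoch, stopping once the validation
    loss has not improved for `patience` consecutive epochs."""
    best_epoch, best_loss, bad_epochs = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss, bad_epochs = epoch, loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # validation loss keeps rising: overfitting
    return best_epoch

# Validation loss falls, then rises -> stop and keep epoch 3
print(train_with_early_stopping([0.9, 0.7, 0.6, 0.55, 0.58, 0.61, 0.65]))  # 3
```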
How deep learning learns - leverage open source frameworks
Introducing Apache MXNet
The deep learning framework chosen by Amazon
https://www.allthingsdistributed.com/2016/11/mxnet-default-framework-deep-learning-aws.html
An Apache open source project - https://mxnet.apache.org/
MXNet use cases - even startups can do it!
https://github.com/tusimple/mx-maskrcnn · http://www.tusimple.com
Key Features of MXNet
1. A flexible programming model

Imperative NDArray API:
>>> import mxnet as mx
>>> a = mx.nd.zeros((100, 50))
>>> b = mx.nd.ones((100, 50))
>>> c = a + b
>>> c += 1
>>> print(c)

Declarative Symbolic Executor:
>>> import mxnet as mx
>>> net = mx.symbol.Variable('data')
>>> net = mx.symbol.FullyConnected(data=net, num_hidden=128)
>>> net = mx.symbol.SoftmaxOutput(data=net)
>>> texec = mx.module.Module(net)
>>> texec.forward(data=c)
>>> texec.backward()

An imperative NDArray object can be fed in as input to the symbolic graph.
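The imperative/declarative distinction above can be illustrated without MXNet at all. Below is a toy sketch (plain Python, not the MXNet API): the imperative function computes immediately, while the declarative classes only build an expression graph that is evaluated later when inputs are bound — the same deferred-execution idea behind `mx.symbol` and `Module.forward`.

```python
# Imperative style: each operation executes immediately, like mx.nd.
def imperative_add_plus_one(a, b):
    c = [x + y for x, y in zip(a, b)]  # runs now
    return [x + 1 for x in c]          # runs now

# Declarative style: build a graph first, execute later, like mx.symbol.
class Var:
    def __init__(self, name):
        self.name = name
    def eval(self, env):
        return env[self.name]

class Add:
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs
    def eval(self, env):
        return [x + y for x, y in zip(self.lhs.eval(env), self.rhs.eval(env))]

graph = Add(Var('data'), Var('bias'))                     # nothing computed yet
result = graph.eval({'data': [1, 2], 'bias': [10, 20]})   # "bind" inputs and run
print(result)  # [11, 22]
```

Deferring execution this way is what lets a symbolic engine optimize and parallelize the whole graph before running it, while the imperative style stays easier to debug.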
2. Support for many languages and platforms
Language bindings: Python, C++, Scala, R, Julia, Perl, JavaScript
Runs on Linux, macOS, and Windows, on both CPU and GPU, with prebuilt packages available
3. Optimized for multi-GPU environments
[Figure: speedup vs. number of GPUs (up to 16) for ResNet-152, AlexNet, and Inception V3 against the ideal linear line - roughly 88-91% scaling efficiency.]
Setup: 16× P2.16xlarge (8 NVIDIA Tesla K80 cards = 16 GPUs each), synchronous SGD (Stochastic Gradient Descent), provisioned by AWS CloudFormation, data mounted on Amazon EFS
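As a quick sanity check on the efficiency figures above: parallel efficiency is just the measured speedup divided by the GPU count, so a roughly 14.6× speedup on 16 GPUs corresponds to the ~91% shown on the chart (14.6 is an illustrative value, not a number from the slide).

```python
def parallel_efficiency(speedup, num_gpus):
    """Fraction of ideal linear scaling actually achieved."""
    return speedup / num_gpus

# e.g. an assumed 14.6x speedup on 16 GPUs is ~91% efficient
print(round(parallel_efficiency(14.6, 16), 2))  # 0.91
```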
3. Optimized for multi-GPU environments
Multi-GPU training only requires listing the GPUs to use; MXNet automatically splits the training data 1/n across the n GPUs.
## train data
num_gpus = 4
gpus = [mx.gpu(i) for i in range(num_gpus)]
model = mx.model.FeedForward(
    ctx = gpus,
    symbol = softmax,
    num_round = 20,
    learning_rate = 0.01,
    momentum = 0.9,
    wd = 0.00001)
model.fit(X = train,
    eval_data = val,
    batch_end_callback = mx.callback.Speedometer(batch_size=batch_size))
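What MXNet does for us here can be sketched in numpy. This is a toy model of synchronous data-parallel SGD, not MXNet internals: each "device" gets a 1/n shard of the batch, computes its own gradient (here for a simple linear least-squares model), and the gradients are averaged, which for equal shards equals the full-batch gradient.

```python
import numpy as np

def sharded_gradient(w, X, y, num_devices):
    """Average of per-shard gradients: a toy stand-in for MXNet's
    split-batch + all-reduce in synchronous multi-GPU SGD."""
    X_shards = np.array_split(X, num_devices)
    y_shards = np.array_split(y, num_devices)
    grads = []
    for Xs, ys in zip(X_shards, y_shards):
        err = Xs @ w - ys                       # per-device forward pass
        grads.append(2 * Xs.T @ err / len(ys))  # per-device gradient
    return np.mean(grads, axis=0)               # "all-reduce": average

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

w = np.zeros(3)
for _ in range(400):                            # plain SGD updates
    w -= 0.05 * sharded_gradient(w, X, y, num_devices=4)
print(np.allclose(w, w_true, atol=1e-3))  # True
```

Because the averaged shard gradients match the full-batch gradient, adding devices changes throughput but not the mathematics, which is why the scaling on the previous slide can stay near-linear.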
4. Deploying models to mobile environments
Amalgamation: compile MXNet plus a trained model into a single source file for mobile targets
Apple Core ML support: convert MXNet models for on-device inference on iOS
Gluon - an imperative deep learning training interface
Key API features and how to contribute
http://gluon.mxnet.io · https://github.com/zackchase/mxnet-the-straight-dope
What is the Gluon Model Zoo?
A collection of ready-made network architectures (AlexNet, ResNet, VGG, and others), with optional pre-trained weights.

Using only the network architecture:
from mxnet.gluon.model_zoo import vision
resnet18 = vision.resnet18_v1()
alexnet = vision.alexnet()

Using a pre-trained model:
from mxnet.gluon.model_zoo import vision
resnet18 = vision.resnet18_v1(pretrained=True)
alexnet = vision.alexnet(pretrained=True)
Keras 2 backend support
Existing Keras code works as-is:
model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size, epochs=15, validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)

Simple multi-GPU training:
gpu_list = ["gpu(0)", "gpu(1)", "gpu(2)", "gpu(3)"]
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'], context=gpu_list)
Demo: Training an MXNet text sentiment analysis model with Amazon SageMaker
MXNet Contribution Experience and Use Cases — 오규삼, Samsung SDS
Apache MXNet's arXiv citation count - around when we first adopted it
https://twitter.com/karpathy/status/972295865187512320
October 2016 - "Hello?" (starting out with speech recognition)
Why MXNet?
- CTC loss (Connectionist Temporal Classification) supported
- Memory efficiency vs. TensorFlow: deep bidirectional RNNs with more layers fit in GPU memory
- Multi-GPU and distributed training vs. Caffe: distributed CTC
https://www.microway.com/hpc-tech-tips/nvidia-tesla-k80-gpu-accelerator-kepler-gk110-up-close/
The state of early MXNet: we found many rough edges - our list of source contributions
Code contribution - Deep Speech 2
https://arxiv.org/abs/1512.02595
https://github.com/apache/incubator-mxnet/tree/master/example/speech_recognition
Code contribution - Deep Speech 2 (cont.)
[Code walkthrough: arch_deepspeech.py from the speech_recognition example]
https://github.com/apache/incubator-mxnet/tree/master/example/speech_recognition · https://arxiv.org/abs/1512.02595
코드공헌사례 - Capsule Network 7ttps://g8t7u2.com/1p1c7e/8ncu21tor-mxnet/tree/m1ster/ex1mp:e/c1psnet 12/1(( 구현당 S 타플 M 폼대비가장빠른 tr18n 속도, 가장 I 은 TN 율 (-.,/0, %.29% 7ttps://1rx8v.org/12s/1(1%.%9)29 7ttps://www.9dnuggets.com/2%1(/11/c1psu:e-networ9s-s7198ng-up-18.7tm:
How to Contribute to MXNet and Join the Community
Contributing source code - a truly wide variety of issues
https://github.com/apache/incubator-mxnet/issues
You can also contribute sample code for all kinds of models
https://github.com/apache/incubator-mxnet/tree/master/example
Write for the official MXNet blog
https://medium.com/apache-mxnet
Join the MXNet community - MXNet Korea User Group
https://www.facebook.com/groups/mxnetkr/ · https://www.meetup.com/MXNet-Korea-User-Group/
THANK YOU - Q&A