AI: Past, Present, and Tomorrow / AI is the New Electricity 2017.09.15
AI! 2
Near Future of Super Intelligence? *source l http://www.motherjones.com/media/2013/05/robots-artificial-intelligence-jobs-automation 3
"Just as electricity transformed almost everything almost 100 years ago, today I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years." - Andrew Ng 5
Bill Gates to college grads: "I expect AI to create breakthroughs that make people better learners. Get a job in AI, but don't forget the inequity around you." 6
"AI is a technology that gets so close to everything we care about. It's going to carry the values that matter to our lives, be it the ethics, the bias, the justice, or the access." 7
( Positive & Negative ): ... vs ..., ... vs Fake/Garbage, 24hr Everything 8
1. AI  2. What is AI?  3. State-of-the-Art  4.
AI Breakthrough *source l http://www.slideshare.net/luma921/deep-learning-the-past-present-and-future-of-artificial-intelligence 10
AI vs. Machine Learning vs. Deep Learning *source l https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/ 11
Level 1. Rule-based: If ~ then ~ 12
Level 2. Machine Learning: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." - Tom M. Mitchell. Classification / Regression / Clustering *source l https://www.slideshare.net/terrytaewoongum/machine-learning-54531674 13
Level 3. Deep Neural Networks (DNN): AI built on DNNs *source l http://www.cs.toronto.edu/~ranzato/files/ranzato_cnn_stanford2015.pdf 14
AI milestones: 1997 (IBM Deep Blue, chess), 2011 (IBM Watson, Jeopardy!), 2016 (AlphaGo, Go), Next? 15
1. AI  2. What is AI?  3. State-of-the-Art  4.
AI image captioning. Answer: "a girl is brushing her hair." *source l https://www.captionbot.ai/ (Microsoft CaptionBot) 17
AI for option pricing. 20
DARPA Robotics Challenge 2015 22
So, what is AI? 24
What is Machine Learning? Learning defined by a task (T, Task) whose performance (P, Performance measure) improves with experience (E, Experience). 27
How do machines learn?
- Supervised Learning (SL): learning from labeled examples ( Google -> Kaggle, DR )
- Unsupervised Learning: finding structure in unlabeled data, e.g., Clustering
- Reinforcement Learning: learning from trial and error with rewards
29
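As a concrete (and purely illustrative) sketch of the supervised case above, not taken from the talk: the data, labels, and learning rate below are made up, and the "model" is just a line fitted by gradient descent, but it shows the T/P/E loop in a few lines of Python.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)                 # inputs
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)   # labeled answers, with noise

w, b = 0.0, 0.0                                  # parameters the machine will learn
lr = 0.01                                        # learning rate
for _ in range(2000):
    pred = w * x + b                             # task T: predict y from x
    err = pred - y                               # performance P: squared error
    w -= lr * 2 * np.mean(err * x)               # experience E: adjust w from errors
    b -= lr * 2 * np.mean(err)                   # adjust b the same way
print("learned w=%.2f b=%.2f" % (w, b))          # approaches the true w=3, b=2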
*Video l 1m40s 30
Machine Learning ⊃ Artificial Neural Networks ⊃ Deep Learning 31
Key AI conferences: Neural Information Processing Systems (NIPS) and the International Conference on Machine Learning (ICML). The 2012 AI breakthrough: ImageNet. 32
Deep learning pioneers: G. Hinton (Google), Y. LeCun (Facebook), Y. Bengio, Andrew Ng (Baidu, ex); industry labs at IBM, Google, MS. "In the 1990s-2000s ... (Neural Network) ..." - Yoshua Bengio 33
AI research and its major conferences (ICML, NIPS) 34
How does AI learn? 35
Artificial Neural Network (ANN): modeled on the brain's neurons and synapses 36
ANN example (handwritten digits): a 28 x 28 = 784-pixel binary image is the input; the output is a score for each of the 10 labels (digits 0~9), e.g., 0: 0.1, 1: 0.05, 2: 0.3, 3: 0.2, 4: 0.41, 5: 0.26, 6: 0.11, 7: 0.84, 8: 0.17, 9: 0.45. 37
ANN training: the input (for an RGB image, 28 x 28 x 3 = 2,352 values) is mapped through weights to output scores, which are compared against the one-hot label (here the true label is 7); the weights are adjusted to close the gap. 38
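A minimal sketch of the diagram above (illustrative only, not the talk's actual network): flatten a 28 x 28 image to 784 numbers, multiply by a weight matrix, and read off 10 softmax scores. The random weights stand in for weights that would normally be learned.

import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 2, size=(28, 28))        # stand-in 28 x 28 binary image
x = image.reshape(784).astype(float)             # flatten: 28 x 28 = 784 inputs

W = rng.normal(0, 0.01, size=(10, 784))          # weights (learned during training)
b = np.zeros(10)                                 # biases, one per label

logits = W @ x + b                               # one linear combination per digit
scores = np.exp(logits) / np.exp(logits).sum()   # softmax: 10 scores summing to 1

print("scores:", np.round(scores, 2))
print("predicted digit:", int(scores.argmax()))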
ANN
Training accuracy: 94.6%, test accuracy: 87.7% 40
Is an ANN like this enough? With 28 x 28 = 784 inputs, a single layer is only a linear combination of the input data and parameters mapped to the output. 41
DNN (Deep Neural Network)? Two or more hidden layers make a network "deep." The 2012 SuperVision (AlexNet) network: 650,000 neurons, 60 million parameters, 630 million connections. 42
Why DNN? A deep network stacks combinations of linear equations (with nonlinearities in between) from input through parameters to output, so it can represent far more complex functions. 43
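A small sketch of why depth plus a nonlinearity matters, with made-up layer sizes: two purely linear layers collapse into a single linear map, while a ReLU between them breaks that collapse.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=784)                          # a flattened input

W1 = rng.normal(0, 0.05, size=(128, 784)); b1 = np.zeros(128)
W2 = rng.normal(0, 0.05, size=(10, 128));  b2 = np.zeros(10)

# Two purely linear layers are equivalent to one linear map: nothing is gained.
linear_out = W2 @ (W1 @ x + b1) + b2
collapsed  = (W2 @ W1) @ x + (W2 @ b1 + b2)
print(np.allclose(linear_out, collapsed))         # True: depth alone adds nothing

# A ReLU between the layers cannot be folded into one matrix: real depth.
hidden = np.maximum(0.0, W1 @ x + b1)
deep_out = W2 @ hidden + b2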
Challenges of Deep Learning (1) - Overfitting: the model fits the training data too closely (over-fitting) instead of learning the underlying pattern (proper data fitting). 44
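A quick self-contained illustration of overfitting (not from the slides; the data and polynomial degrees are arbitrary): a flexible model memorizes 10 noisy points while doing worse on held-out data.

import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, size=10)  # 10 noisy points
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)                                  # the true curve

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)                    # fit a polynomial
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print("degree %d: train MSE %.4f, test MSE %.4f" % (degree, train_mse, test_mse))
# The degree-9 model drives training error toward zero (memorization) but
# typically does worse on the held-out curve than the simpler degree-3 model.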
Challenges of Deep Learning (2) - Vanishing Gradient: the error signal shrinks as it is propagated back through many layers. 45
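A toy numeric illustration (assuming sigmoid activations and ignoring weight magnitudes): the backpropagated signal is multiplied by the activation's derivative at every layer, so with sigmoids it shrinks exponentially with depth, while an active ReLU passes it through unchanged.

import numpy as np

def sigmoid_grad(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)                  # at most 0.25

signal = 1.0
for _ in range(20):                       # 20 layers of sigmoid activations
    signal *= sigmoid_grad(0.5)
print("after 20 sigmoid layers: %.2e" % signal)   # ~1e-13: the gradient has vanished

relu_signal = 1.0                         # ReLU's derivative is 1 where the unit is active
for _ in range(20):
    relu_signal *= 1.0
print("after 20 active ReLU layers:", relu_signal)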
Challenges of Deep Learning (3) - Local Minimum: optimization can get stuck in a local minimum instead of reaching the best solution. 46
Why Deep? On ImageNet, deeper networks (more layers) label images more accurately than shallow ones: go deep! 47
Why Deep? Performance keeps improving with the amount of data for deep models. *source l Andrew Ng Deep Learning Lecture Note 48
Why did DNNs (Deep Neural Networks) take off around 2012? Computer processing power (Nvidia Tesla GPUs) and data: Tesla adds another million miles of driving data every 10 hours. *source l http://qz.com/694520/tesla-has-780-million-miles-of-driving-data-and-adds-another-million-every-10-hours/ (Tesla) 49
Nvidia GPU Computing Power Nvidia - 11 50
Google Cloud Machine Learning 1,000 GPU 25 51
Google I/O '17: Google's own AI chip, the TPU 52
Data? Image datasets reached critical mass for deep learning:
- 2004: Caltech 101, ~10K images
- 2005-2010: PASCAL VOC, 2K-30K objects
- 2010-2015: ImageNet, 10M-15M images
Image source: http://www.vision.caltech.edu/ http://doi.ieeecomputersociety.org/ http://www.image-net.org/ 53
Data? With enough data, deep learning / neural networks pull ahead of older methods: logistic regression, k-nearest neighbors, support vector machines, boosting, Bayesian networks, sparse dictionary learning, regression forests. 54
1. AI  2. What is AI?  3. State-of-the-Art  4.
Quality data means labeled data ( Supervised Learning ) 56
Input -> Output : Application
Home features -> Price : Real Estate
Ad, user info -> Click on ad? (0/1) : Online Ad
Image -> Object (cat, dog) : Photo Tagging (name)
Audio -> Text transcript : Speech Recognition
English -> Chinese : Translation
Image, radar, signs -> People, cars : Autonomous Car
57
TTS (text-to-speech) AI: the Yoo In-na audiobook campaign *source l http://campaign.happybean.naver.com/yooinna_audiobook 58
DeepMind WaveNet: Google DeepMind's generative TTS model that produces raw audio directly; initial generation speed was roughly 1/1000 of real time. *source l https://deepmind.com/blog/wavenet-generative-model-raw-audio/ 59
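For context on how WaveNet covers long audio windows (per the DeepMind blog post cited above; the kernel size and layer count here are chosen for illustration): it stacks dilated causal convolutions whose dilation doubles at each layer, so the receptive field grows exponentially with depth.

kernel_size = 2                                   # causal convolution kernel
dilations = [2 ** i for i in range(10)]           # 1, 2, 4, ..., 512

receptive_field = 1 + sum((kernel_size - 1) * d for d in dilations)
print("%d layers -> receptive field of %d samples" % (len(dilations), receptive_field))
# 10 layers -> 1024 samples, i.e. 64 ms of audio at a 16 kHz sampling rate.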
Music Generation: "Daddy's Car," a pop song in the style of The Beatles. *source l http://www.theverge.com/2016/9/26/13055938/ai-pop-song-daddys-car-sony *Video l 3m00s 60
Recognizing Pain ARTIFICIAL INTELLIGENCE COULD END ANIMAL SUFFERING BY RECOGNIZING PAIN *source l http://www.newsweek.com/artificial-intelligence-sheep-pain-emotion-cambridge-618085 61
Estimating age from a photo *source l https://how-old.net/ (by Microsoft) 62
Neural Doodle: turning simple doodles into paintings *source l https://github.com/alexjc/neural-doodle 63
Describing images (image captioning) *source l http://cs.stanford.edu/people/karpathy/sfmltalk.pdf 64
Breast Cancer Detection Google uses machine learning to detect breast cancer better than pathologists *source l http://siliconangle.com/blog/2017/03/05/google-uses-machine-learning-better-detect-breast-cancer-pathologists/ 65
Image Generation (1): which of these images are real, and which were generated by AI? 66
Image Generation (2) - GAN (Generative Adversarial Network): the g-net (generator) creates fake images, the d-net (discriminator) tries to tell real images from generated ones, and the two are trained against each other. *source l NIPS 2016. 67
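The adversarial game described above is usually written as the minimax objective from Goodfellow et al. (2014): the discriminator D is trained to maximize it, the generator G to minimize it.

% GAN minimax objective (Goodfellow et al., 2014)
\min_G \max_D \, V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log (1 - D(G(z)))]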
Image Generation: super-resolution, a 16 x 16 input upscaled to 64 x 64 (x4) with plausible detail added (Augment). *source l https://github.com/david-gpu/srez 68
1. AI  2. What is AI?  3. State-of-the-Art  4.
GAFA (Google, Amazon, Facebook, Apple) AI 70
AI in 2017: Tech giants acquired 34 AI startups in Q1 2017.
- Jan '17: Amazon buys harvest.ai (AI security) for $19M
- Feb '17: Ford invests $1B in Argo (autonomous driving)
- May '17: Apple buys Lattice Data (unstructured data) for $200M
- May '17: Cisco buys Mindmeld (conversational AI) for $125M
-> an AI acquisition race *source l https://venturebeat.com/2017/05/28/tech-giants-acquired-34-ai-startups-in-q1-2017/ 72
What do you need for AI?
1. AI
2. Data: Garbage In -> Garbage Out; Big Data
3. Domain: API, ...
4. Computing Power: Trial & Error
AI 1. ( SL ) 2. ( ) AI. 75
Tesla Real-Time Data Processing 76
KAKAO AI REPORT, published monthly since March 2017: https://brunch.co.kr/@kakao-it/ (Vol.01 2017.03, Vol.02 2017.04, Vol.03 2017.05; each cover features sample AI code) 77
Questions? Contact: noah.jung@kakaocorp.com
End of Document