23_Time-Series-Prediction


TensorFlow Tutorial #23 — Time-Series Prediction

by Magnus Erik Hvass Pedersen (http://www.hvass-labs.org/) / GitHub (https://github.com/Hvass-Labs/TensorFlow-Tutorials) / Videos on YouTube (https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ)

Introduction

This tutorial tries to predict the future weather of a city using a Recurrent Neural Network (RNN). It is implemented in TensorFlow with Keras; you should already be familiar with Tutorials #01 and #03-C, and with Tutorial #20 on RNNs.

We use weather data for the period 1980–2018 from five cities in Denmark (https://en.wikipedia.org/wiki/Denmark):

- Aalborg (https://en.wikipedia.org/wiki/Aalborg), home of the Hunter Corps (Jægerkorpset) (https://en.wikipedia.org/wiki/Jaeger_Corps_(Denmark)).
- Aarhus (https://en.wikipedia.org/wiki/Aarhus), where the inventor of C++ (https://en.wikipedia.org/wiki/Bjarne_Stroustrup) studied and where the Google V8 JavaScript Engine (https://en.wikipedia.org/wiki/Chrome_V8) was developed.
- Esbjerg (https://en.wikipedia.org/wiki/Esbjerg).
- Odense (https://en.wikipedia.org/wiki/Odense), birthplace of H. C. Andersen (https://en.wikipedia.org/wiki/Hans_Christian_Andersen).
- Roskilde (https://en.wikipedia.org/wiki/Roskilde).

The Problem

Given weather observations from the 5 cities, we will predict the weather for the city "Odense" 24 hours into the future. We use a Recurrent Neural Network (RNN) that maps the input signals from the 5 cities to 3 output signals, trained on randomly selected sequences of 1344 time-steps — 8 weeks, since one time-step is an hour (8 weeks = 24 x 7 x 8 hours).

Imports

In [2]:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
import os
from sklearn.preprocessing import MinMaxScaler

We need to import several things from Keras.

In [3]:
# from tf.keras.models import Sequential  # This does not work!
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Input, Dense, GRU, Embedding
from tensorflow.python.keras.optimizers import RMSprop
from tensorflow.python.keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard, ReduceLROnPlateau

This was developed using Python 3.6 (Anaconda) with the following package versions:

In [4]:
tf.__version__
Out[4]: '1.4.0'

In [5]:
tf.keras.__version__
Out[5]: '2.0.8-tf'

In [6]:
pd.__version__
Out[6]: '0.20.3'

Weather Data

The weather data comes from the National Climatic Data Center (NCDC), USA (https://www7.ncdc.noaa.gov/cdo/cdoselect.cmd). The raw data is messy, so a helper module downloads it and cleans it up for us.

In [9]:
import weather

Download the dataset if it does not already exist. It is about 35 MB.

In [10]:
weather.maybe_download_and_extract()
- Download progress: 100.0%
Download finished. Extracting files.
Done.

These are the cities in the dataset.

In [11]:
cities = weather.cities
cities
Out[11]: ['Aalborg', 'Aarhus', 'Esbjerg', 'Odense', 'Roskilde']

Load the data and resample it so there is an observation every 60 minutes. This takes around 30 seconds the first time, but the result is cached on disk, so subsequent loads are fast.

In [13]:
%%time
df = weather.load_resampled_data()
CPU times: user 16.3 ms, sys: 24.2 ms, total: 40.5 ms
Wall time: 39.1 ms

These are the top rows of the data-frame; the columns are a (city, signal) MultiIndex.

In [14]:
df.head()
Out[14]:
                      Aalborg                                    Aarhus
                         Temp     Pressure  WindSpeed     WindDir   Temp     Pressure  WindSpeed ...
DateTime
1980-03-01 11:00:00  5.000000  1007.766667       10.2  280.000000    5.0  1008.300000       15.4
1980-03-01 12:00:00  5.000000  1008.000000       10.3  290.000000    5.0  1008.600000       13.4
1980-03-01 13:00:00  5.000000  1008.066667        9.7  290.000000    5.0  1008.433333       15.4
1980-03-01 14:00:00  4.333333  1008.133333       11.1  283.333333    5.0  1008.266667       14.9
1980-03-01 15:00:00  4.000000  1008.200000       11.3  280.000000    5.0  1008.100000       17.0
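The helper above resamples the raw, irregular observations onto a fixed 60-minute grid. A minimal sketch of that idea on a toy pandas series (hypothetical values; the real weather module's implementation may differ in details):

```python
import pandas as pd

# Toy series with irregular observation times (hypothetical values).
idx = pd.to_datetime(['2020-01-01 00:00', '2020-01-01 00:20',
                      '2020-01-01 02:00'])
s = pd.Series([0.0, 1.0, 6.0], index=idx)

# Resample to fixed 60-minute bins, then fill empty bins
# by linear interpolation between neighbouring bins.
hourly = s.resample('60min').mean().interpolate()

print(list(hourly.round(2)))  # [0.5, 3.25, 6.0]
```

The 00:00 bin averages the two observations that fall inside it, and the empty 01:00 bin is filled by interpolation.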

Missing Data

The raw data has problems. Before resampling, the pressure measurements for Esbjerg and Roskilde contain long gaps of missing data. Resampling fills those gaps by interpolation, but long stretches of interpolated values would mislead the model, so we will simply remove those two signals. First have a look at them:

In [15]:
df['Esbjerg']['Pressure'].plot()
Out[15]: <matplotlib.axes._subplots.AxesSubplot at 0x1a37a60080>

In [16]:
df['Roskilde']['Pressure'].plot()
Out[16]: <matplotlib.axes._subplots.AxesSubplot at 0x1a406b2240>

Before removing the two pressure signals, the data-frame has 20 columns.

In [17]:
df.values.shape
Out[17]: (333109, 20)

Remove the two signals.

In [18]:
df.drop(('Esbjerg', 'Pressure'), axis=1, inplace=True)
df.drop(('Roskilde', 'Pressure'), axis=1, inplace=True)

Now there are only 18 columns.

In [19]:
df.values.shape
Out[19]: (333109, 18)
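Because the columns are a (city, signal) MultiIndex, a single tuple identifies the column to drop. A toy sketch of the same pattern (hypothetical city and signal names):

```python
import numpy as np
import pandas as pd

# Two cities with two signals each -> 4 columns.
cols = pd.MultiIndex.from_product([['CityA', 'CityB'],
                                   ['Temp', 'Pressure']])
df = pd.DataFrame(np.zeros((3, 4)), columns=cols)

# Drop a single (city, signal) column in place.
df.drop(('CityB', 'Pressure'), axis=1, inplace=True)

print(df.shape)                              # (3, 3)
print(('CityB', 'Pressure') in df.columns)   # False
```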

In [20]:
df.head(1)
Out[20]:
                     Aalborg                                 Aarhus
                        Temp     Pressure  WindSpeed WindDir   Temp  Pressure  WindSpeed ...
DateTime
1980-03-01 11:00:00      5.0  1007.766667       10.2   280.0    5.0    1008.3       15.4

Data Errors

There are also errors in the data. The plot below shows the temperature for Odense during a period in 2006, where it jumps to about 50 degrees on a few days. The minimum and maximum temperatures in this dataset are roughly -31.2 and 36.4 degrees, so 50 degrees is clearly an error. We leave these errors in the data and let the model learn to deal with them.

In [21]:
df['Odense']['Temp']['2006-05':'2006-07'].plot()
Out[21]: <matplotlib.axes._subplots.AxesSubplot at 0x1a47c09438>

The neighbouring cities show temperatures around 10 degrees in the same period, which confirms that the 50-degree spike for Odense is a data error.

In [22]:
df['Aarhus']['Temp']['2006-05':'2006-07'].plot()
Out[22]: <matplotlib.axes._subplots.AxesSubplot at 0x1a44e41240>

In [23]:
df['Roskilde']['Temp']['2006-05':'2006-07'].plot()
Out[23]: <matplotlib.axes._subplots.AxesSubplot at 0x1a44e31b38>

Add Data

We can add input signals that may help the model decode cyclic patterns, namely the time of year (day-of-year, 1–366) and the time of day (hour, 0–23):

In [23]:
df['Various', 'Day'] = df.index.dayofyear
df['Various', 'Hour'] = df.index.hour

Target Data for Prediction

This is the city whose weather we will predict:

In [24]:
target_city = 'Odense'

These are the signals we will predict:

In [26]:
target_names = ['Temp', 'WindSpeed', 'Pressure']

We will predict 24 hours into the future, so each input observation must be paired with the target signals from 24 time-steps (hours) later. To predict e.g. 7 days into the future you would instead use shift_days = 7, giving 7 * 24 shift steps.

In [27]:
shift_days = 1
shift_steps = shift_days * 24  # Number of hours.

Create a new data-frame with the target signals time-shifted by shift_steps. Note the negative shift!

In [28]:
df_targets = df[target_city][target_names].shift(-shift_steps)
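The negative shift is easy to get wrong, so here is a minimal sketch with toy hourly data (hypothetical values) showing that shift(-k) pairs each row with the value from k hours in the future:

```python
import pandas as pd

# Hypothetical hourly temperatures.
idx = pd.date_range('2020-01-01', periods=6, freq='h')
df = pd.DataFrame({'Temp': [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]}, index=idx)

shift_steps = 2
df_targets = df.shift(-shift_steps)

# The first row of the shifted frame holds the value from 2 hours later.
print(df_targets['Temp'].iloc[0])            # 3.0

# The last shift_steps rows have no future value, so they become NaN
# and must be discarded before training.
print(int(df_targets['Temp'].isna().sum()))  # 2
```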

WARNING! You should double-check that the time-shift goes in the right direction — we want to predict the future, not the past! The Pandas shift semantics can be confusing, so we inspect the data directly. These are the first shift_steps + 5 rows of the original target signals:

In [29]:
df[target_city][target_names].head(shift_steps + 5)
Out[29]:
                         Temp  WindSpeed     Pressure
DateTime
1980-03-01 11:00:00  6.142857  12.585714  1011.066667
1980-03-01 12:00:00  7.000000  11.300000  1011.200000
1980-03-01 13:00:00  7.000000  12.118182  1011.300000
1980-03-01 14:00:00  6.857143  12.742857  1011.400000
1980-03-01 15:00:00  6.000000  12.400000  1011.500000
1980-03-01 16:00:00  4.909091  12.618182  1011.688889
1980-03-01 17:00:00  3.953488  12.646512  1011.877778
1980-03-01 18:00:00  3.674419  11.725581  1012.066667
1980-03-01 19:00:00  3.395349  10.804651  1012.255556
1980-03-01 20:00:00  3.116279   9.883721  1012.444444
1980-03-01 21:00:00  2.837209   8.962791  1012.633333
1980-03-01 22:00:00  2.558140   8.041860  1012.822222
1980-03-01 23:00:00  2.279070   7.120930  1013.011111
1980-03-02 00:00:00  2.000000   6.200000  1013.200000
1980-03-02 01:00:00  2.076923   7.738462  1012.366667
1980-03-02 02:00:00  2.538462   7.969231  1011.533333
1980-03-02 03:00:00  3.000000   8.200000  1010.700000
1980-03-02 04:00:00  3.000000   7.927273  1010.100000
1980-03-02 05:00:00  2.916667   7.658333  1009.500000
1980-03-02 06:00:00  2.416667   7.408333  1008.900000
1980-03-02 07:00:00  2.000000   7.100000  1008.300000
1980-03-02 08:00:00  2.142857   6.542857  1007.700000
1980-03-02 09:00:00  3.000000   6.200000  1007.100000
1980-03-02 10:00:00  2.833333   8.350000  1006.466667
1980-03-02 11:00:00  2.000000   6.828571  1005.833333
1980-03-02 12:00:00  2.000000   8.200000  1005.200000
1980-03-02 13:00:00  0.166667   9.216667  1004.766667
1980-03-02 14:00:00  1.000000  11.885714  1004.333333
1980-03-02 15:00:00  1.000000  12.400000  1003.900000

These are the first 5 rows of the time-shifted data-frame. Compare them to the original data-frame above: the values are identical to those 24 hours later (1980-03-02 11:00 through 15:00), which is exactly what we want.

In [30]:
df_targets.head(5)
Out[30]:
                         Temp  WindSpeed     Pressure
DateTime
1980-03-01 11:00:00  2.000000   6.828571  1005.833333
1980-03-01 12:00:00  2.000000   8.200000  1005.200000
1980-03-01 13:00:00  0.166667   9.216667  1004.766667
1980-03-01 14:00:00  1.000000  11.885714  1004.333333
1980-03-01 15:00:00  1.000000  12.400000  1003.900000

The time-shifted data-frame ends with NaN values for the last shift_steps rows, because there is no future data for them. These rows must be discarded before training.

In [31]:
df_targets.tail()
Out[31]:
                     Temp  WindSpeed  Pressure
DateTime
2018-03-01 19:00:00   NaN        NaN       NaN
2018-03-01 20:00:00   NaN        NaN       NaN
2018-03-01 21:00:00   NaN        NaN       NaN
2018-03-01 22:00:00   NaN        NaN       NaN
2018-03-01 23:00:00   NaN        NaN       NaN

NumPy Arrays

We now convert the Pandas data-frames to NumPy arrays for input to the neural network, discarding the last shift_steps rows to match the NaN rows in the shifted targets.

These are the input signals:

In [32]:
x_data = df.values[0:-shift_steps]

In [33]:
print(type(x_data))
print("Shape:", x_data.shape)
<class 'numpy.ndarray'>
Shape: (333085, 18)

These are the output signals (the targets):

In [34]:
y_data = df_targets.values[:-shift_steps]

In [35]:
print(type(y_data))
print("Shape:", y_data.shape)
<class 'numpy.ndarray'>
Shape: (333085, 3)

This is the total number of observations:

In [38]:
num_data = len(x_data)
num_data
Out[38]: 333085

This is the fraction of the data used for the training-set:

In [39]:
train_split = 0.9

This is the number of observations in the training-set:

In [40]:
num_train = int(train_split * num_data)
num_train
Out[40]: 299776

This is the number of observations in the test-set:

In [41]:
num_test = num_data - num_train
num_test
Out[41]: 33309

These are the input signals for the training- and test-sets:

In [42]:
x_train = x_data[0:num_train]
x_test = x_data[num_train:]
len(x_train) + len(x_test)
Out[42]: 333085

These are the output signals for the training- and test-sets:

In [43]:
y_train = y_data[0:num_train]
y_test = y_data[num_train:]
len(y_train) + len(y_test)
Out[43]: 333085

This is the number of input signals:

In [44]:
num_x_signals = x_data.shape[1]
num_x_signals
Out[44]: 18

This is the number of output signals:

In [45]:
num_y_signals = y_data.shape[1]
num_y_signals
Out[45]: 3
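The split sizes above can be verified with plain arithmetic on the observation count (numbers taken from the notebook output):

```python
# Reproduce the 90/10 train/test split sizes from the row count.
num_data = 333085
train_split = 0.9

num_train = int(train_split * num_data)  # int() floors 299776.5
num_test = num_data - num_train

print(num_train, num_test)  # 299776 33309
```

The test-set is the chronological tail of the data, which matters for time-series: shuffling before splitting would leak future information into training.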

Scaled Data

The data contains a wide range of values:

In [46]:
print("Min:", np.min(x_train))
print("Max:", np.max(x_train))
Min: -27.0
Max: 1050.8

Neural networks work best on values roughly between -1 and 1, so we scale the data before feeding it to the network. We use scikit-learn's MinMaxScaler for this.

In [47]:
x_scaler = MinMaxScaler()

We fit the scaler on the training-data and transform it in the same call:

In [48]:
x_train_scaled = x_scaler.fit_transform(x_train)

The data has now been scaled to be between 0 and 1:

In [49]:
print("Min:", np.min(x_train_scaled))
print("Max:", np.max(x_train_scaled))
Min: 0.0
Max: 1.0

We use the same scaler object on the test-set; it must not be re-fitted on the test-data:

In [50]:
x_test_scaled = x_scaler.transform(x_test)

The targets get their own scaler, also fitted only on the training data:

In [51]:
y_scaler = MinMaxScaler()
y_train_scaled = y_scaler.fit_transform(y_train)
y_test_scaled = y_scaler.transform(y_test)
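The reason for calling fit_transform on the training split but only transform on the test split is to avoid leaking test statistics into the preprocessing. A NumPy sketch of the min-max map with toy values (not the real weather data):

```python
import numpy as np

# Fit min/max on the training split only.
x_train = np.array([[0.0], [5.0], [10.0]])
x_test = np.array([[12.0]])   # contains a value unseen in training

x_min = x_train.min(axis=0)
x_max = x_train.max(axis=0)

def scale(x):
    # The same affine map for both splits,
    # with parameters taken from the training split only.
    return (x - x_min) / (x_max - x_min)

print(scale(x_train).min(), scale(x_train).max())  # 0.0 1.0
print(scale(x_test))  # [[1.2]]  -- test values may fall outside [0, 1]
```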

Data Generator

The data has now been prepared as 2-rank NumPy arrays: roughly 300,000 observations of input and output signals.

In [52]:
print(x_train_scaled.shape)
print(y_train_scaled.shape)
(299776, 18)
(299776, 3)

Instead of training the RNN on one 300k-step sequence, we create batches of shorter sub-sequences picked at random from the training-data:

In [54]:
def batch_generator(batch_size, sequence_length):
    """
    Generator function for creating random batches of training-data.
    """

    # Infinite loop.
    while True:
        # Allocate a new array for the batch of input-signals.
        x_shape = (batch_size, sequence_length, num_x_signals)
        x_batch = np.zeros(shape=x_shape, dtype=np.float16)

        # Allocate a new array for the batch of output-signals.
        y_shape = (batch_size, sequence_length, num_y_signals)
        y_batch = np.zeros(shape=y_shape, dtype=np.float16)

        # Fill the batch with random sequences of data.
        for i in range(batch_size):
            # Get a random start-index.
            # This points somewhere into the training-data.
            idx = np.random.randint(num_train - sequence_length)

            # Copy the sequences of data starting at this index.
            x_batch[i] = x_train_scaled[idx:idx+sequence_length]
            y_batch[i] = y_train_scaled[idx:idx+sequence_length]

        yield (x_batch, y_batch)

We use a large batch-size to keep the GPU close to 100% utilization. You may have to adjust batch_size (together with sequence_length below) to fit your GPU's RAM.

In [55]:
batch_size = 256

We use a sequence-length of 1344, which means each random sequence contains observations for 8 weeks: one time-step is 1 hour, so 24 x 7 time-steps is one week, and 24 x 7 x 8 is 8 weeks.

In [56]:
sequence_length = 24 * 7 * 8
sequence_length
Out[56]: 1344

Create the batch-generator.

In [57]:
generator = batch_generator(batch_size=batch_size,
                            sequence_length=sequence_length)

Test the batch-generator.

In [58]:
x_batch, y_batch = next(generator)

Each batch holds 256 sequences, each with 1344 observations of the input and output signals.

In [52]:
print(x_batch.shape)
print(y_batch.shape)
(256, 1344, 20)
(256, 1344, 3)

We can plot one of the input signals as an example.

In [59]:
batch = 0   # First sequence in the batch.
signal = 0  # First signal from the 20 input-signals.
seq = x_batch[batch, :, signal]
plt.plot(seq)
Out[59]: [<matplotlib.lines.Line2D at 0x1a3b942470>]

We can also plot one of the corresponding output signals that the model should learn to predict.

In [60]:
seq = y_batch[batch, :, signal]
plt.plot(seq)
Out[60]: [<matplotlib.lines.Line2D at 0x1a3c2f9978>]

Validation Set

The model trains on random batches of training-data, but we also want to monitor its performance on the test-set after each epoch, so we can stop training and save a checkpoint when the model performs best on unseen data. The whole test-set is used as a single long sequence; expand_dims adds the batch-dimension of 1.

In [61]:
validation_data = (np.expand_dims(x_test_scaled, axis=0),
                   np.expand_dims(y_test_scaled, axis=0))

Create the Recurrent Neural Network (RNN)

We are now ready to create the Recurrent Neural Network (RNN) with the Keras API. See Tutorials #03-C and #20 for an introduction to Keras and RNNs.

In [62]:
model = Sequential()

We add a Gated Recurrent Unit (GRU) layer with 512 outputs per time-step. Because we will feed the model sequences of arbitrary length, the input shape is (None, num_x_signals).

In [63]:
model.add(GRU(units=512,
              return_sequences=True,
              input_shape=(None, num_x_signals,)))

The GRU outputs a batch of sequences with 512 values per time-step, but we only want 3 output signals, so we add a fully-connected (dense) layer mapping the 512 values down to 3. The target signals were scaled to be between 0 and 1, so we use a Sigmoid activation, whose output is also between 0 and 1.

In [64]:
model.add(Dense(num_y_signals, activation='sigmoid'))
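Because of the sigmoid, the raw model output lives in [0, 1]; predictions are mapped back to physical units with the inverse of the min-max transform (done later via y_scaler.inverse_transform). A sketch with hypothetical temperature bounds:

```python
# Hypothetical training range of the temperature target.
y_min, y_max = -20.0, 30.0

def inverse_minmax(s):
    # Map a sigmoid output s in [0, 1] back to the original scale.
    return s * (y_max - y_min) + y_min

print(inverse_minmax(0.0))  # -20.0
print(inverse_minmax(1.0))  # 30.0
print(inverse_minmax(0.5))  # 5.0
```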

A problem with a sigmoid output is that it can only produce values in the range seen in the training-data. If, for example, the training temperatures were between -20 and +30, the scaler maps -20 to 0 and +30 to 1, and the model can never predict temperatures outside that range. An alternative is a linear output activation, which can produce arbitrary values; its initial weights should then be small, otherwise the network may output wildly wrong values early in training and the loss can become NaN. This block is disabled by default.

In [65]:
if False:
    from tensorflow.python.keras.initializers import RandomUniform

    # Maybe use lower init-ranges.
    init = RandomUniform(minval=-0.05, maxval=0.05)

    model.add(Dense(num_y_signals,
                    activation='linear',
                    kernel_initializer=init))

Loss Function

We use Mean Squared Error (MSE) as the loss function. The model's internal state is all zeros at the start of each sequence, so its first predictions are unreliable until it has "warmed up" on some of the input. We therefore ignore the first 50 time-steps of each sequence when computing the loss.

In [66]:
warmup_steps = 50
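Before defining the Keras loss, the effect of ignoring the warm-up steps can be seen in a small NumPy sketch of the masked MSE (toy tensors, not the actual Keras loss):

```python
import numpy as np

warmup_steps = 2

# Shapes follow [batch_size, sequence_length, num_y_signals].
y_true = np.zeros((1, 5, 1))
y_pred = np.array([[[9.0], [9.0], [1.0], [1.0], [1.0]]])  # bad only in warm-up

# Plain MSE over the whole sequence is dominated by the warm-up errors.
mse_all = np.mean((y_true - y_pred) ** 2)

# Slicing off the first warmup_steps time-steps removes that penalty.
mse_warm = np.mean((y_true[:, warmup_steps:] - y_pred[:, warmup_steps:]) ** 2)

print(mse_all)   # 33.0
print(mse_warm)  # 1.0
```

Without the mask, the model would be punished for errors it cannot avoid while its state warms up.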

In [67]:
def loss_mse_warmup(y_true, y_pred):
    """
    Calculate the Mean Squared Error between y_true and y_pred,
    but ignore the beginning "warmup" part of the sequences.

    y_true is the desired output.
    y_pred is the model's output.
    """

    # The shape of both input tensors are:
    # [batch_size, sequence_length, num_y_signals].

    # Ignore the "warmup" parts of the sequences
    # by taking slices of the tensors.
    y_true_slice = y_true[:, warmup_steps:, :]
    y_pred_slice = y_pred[:, warmup_steps:, :]

    # These sliced tensors both have this shape:
    # [batch_size, sequence_length - warmup_steps, num_y_signals]

    # Calculate the MSE loss for each value in these tensors.
    loss = tf.losses.mean_squared_error(labels=y_true_slice,
                                        predictions=y_pred_slice)

    # Keras may reduce this across the first axis (the batch)
    # but the semantics are unclear, so to be sure we use
    # the loss across the entire tensor, we reduce it to a
    # single scalar with the mean function.
    loss_mean = tf.reduce_mean(loss)

    return loss_mean

We use the RMSprop optimizer with a fairly low learning rate.

In [68]:
optimizer = RMSprop(lr=1e-3)

Compile the Keras model so it is ready for training.

In [69]:
model.compile(loss=loss_mse_warmup, optimizer=optimizer)

This is a very small model with only two layers. The output shape (None, None, 3) means the model outputs a batch of sequences of arbitrary length, each time-step holding the 3 predicted target signals.

In [70]:
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
gru_1 (GRU)                  (None, None, 512)         815616
_________________________________________________________________
dense_1 (Dense)              (None, None, 3)           1539
=================================================================
Total params: 817,155
Trainable params: 817,155
Non-trainable params: 0
_________________________________________________________________

Callback Functions

During training we want to save checkpoints and write logs for TensorBoard, so we create the relevant callbacks for Keras.

This callback writes checkpoints during training, keeping only the weights with the best validation loss:

In [71]:
path_checkpoint = '23_checkpoint.keras'
callback_checkpoint = ModelCheckpoint(filepath=path_checkpoint,
                                      monitor='val_loss',
                                      verbose=1,
                                      save_weights_only=True,
                                      save_best_only=True)

This callback stops training when the validation loss has not improved for 5 epochs:

In [72]:
callback_early_stopping = EarlyStopping(monitor='val_loss',
                                        patience=5, verbose=1)

This callback writes the TensorBoard log:

In [73]:
callback_tensorboard = TensorBoard(log_dir='./23_logs/',
                                   histogram_freq=0,
                                   write_graph=False)

This callback reduces the learning-rate when the validation loss stops improving (patience=0 means it reacts as soon as an epoch fails to improve). It multiplies the learning-rate by factor=0.1, i.e. from 1e-3 down to a minimum of 1e-4.

In [74]:
callback_reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                                       factor=0.1,
                                       min_lr=1e-4,
                                       patience=0,
                                       verbose=1)

In [75]:
callbacks = [callback_early_stopping,
             callback_checkpoint,
             callback_tensorboard,
             callback_reduce_lr]

Train the Recurrent Neural Network

We can now train the model. Because the generator yields random batches, there is no natural "epoch"; instead we tell Keras that one "epoch" is steps_per_epoch batches. On a GTX 1070 one such "epoch" takes about 2.5 minutes; a run of 14 epochs (with early-stopping patience 5) therefore takes about 35 minutes. If the loss becomes NaN during training, restart with a lower learning rate or try the alternative linear output layer shown above.

In [ ]:
%%time
model.fit_generator(generator=generator,
                    epochs=20,
                    steps_per_epoch=100,
                    validation_data=validation_data,
                    callbacks=callbacks)
Epoch 1/20
22/100 [=====>...] - ETA: 2722s - loss: 0.0144
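EarlyStopping (patience=5) and ReduceLROnPlateau (patience=0) both rest on the same "epochs without improvement" counter. A plain-Python sketch of that logic with a hypothetical loss history (note that the exact off-by-one behaviour varies between Keras versions):

```python
def epochs_until_stop(val_losses, patience):
    """Return the 1-based epoch at which stopping triggers, i.e. after
    more than `patience` consecutive epochs without a new best loss."""
    best = float('inf')
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs > patience:
                return epoch
    return None  # never triggered

# Hypothetical validation losses: improvement stops after epoch 2.
history = [0.5, 0.4, 0.41, 0.42, 0.43, 0.44, 0.45, 0.46]
print(epochs_until_stop(history, patience=5))  # 8
print(epochs_until_stop(history, patience=0))  # 3
```

With patience=0 the learning-rate reduction fires at the very first non-improving epoch, while early-stopping waits much longer before giving up.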

Load Checkpoint

Because of early-stopping, the final training step is not necessarily the best, so we reload the best checkpoint saved during training.

In [70]:
try:
    model.load_weights(path_checkpoint)
except Exception as error:
    print("Error trying to load checkpoint.")
    print(error)

Performance on Test-Set

We can now evaluate the model's performance on the test-set. The whole test-set is passed as a single batch (one long sequence).

In [71]:
result = model.evaluate(x=np.expand_dims(x_test_scaled, axis=0),
                        y=np.expand_dims(y_test_scaled, axis=0))
1/1 [==============================] - 4s 4s/step

In [72]:
print("loss (test-set):", result)
loss (test-set): 0.0021468019112944603

In [1]:
# If you have several metrics you can use this instead.
if False:
    for res, metric in zip(result, model.metrics_names):
        print("{0}: {1:.3e}".format(metric, res))

Generate Predictions

This helper function plots the predicted and true output signals.

In [74]:
def plot_comparison(start_idx, length=100, train=True):
    """
    Plot the predicted and true output-signals.

    :param start_idx: Start-index for the time-series.
    :param length: Sequence-length to process and plot.
    :param train: Boolean whether to use training- or test-set.
    """

    if train:
        # Use training-data.
        x = x_train_scaled
        y_true = y_train
    else:
        # Use test-data.
        x = x_test_scaled
        y_true = y_test

    # End-index for the sequences.
    end_idx = start_idx + length

    # Select the sequences from the given start-index and
    # of the given length.
    x = x[start_idx:end_idx]
    y_true = y_true[start_idx:end_idx]

    # Input-signals for the model.
    x = np.expand_dims(x, axis=0)

    # Use the model to predict the output-signals.
    y_pred = model.predict(x)

    # The output of the model is between 0 and 1.
    # Do an inverse map to get it back to the scale
    # of the original data-set.
    y_pred_rescaled = y_scaler.inverse_transform(y_pred[0])

    # For each output-signal.
    for signal in range(len(target_names)):
        # Get the output-signal predicted by the model.
        signal_pred = y_pred_rescaled[:, signal]

        # Get the true output-signal from the data-set.
        signal_true = y_true[:, signal]

        # Make the plotting-canvas bigger.
        plt.figure(figsize=(15, 5))

        # Plot and compare the two signals.
        plt.plot(signal_true, label='true')
        plt.plot(signal_pred, label='pred')

        # Plot grey box for warmup-period.
        p = plt.axvspan(0, warmup_steps, facecolor='black', alpha=0.15)

        # Plot labels etc.
        plt.ylabel(target_names[signal])
        plt.legend()
        plt.show()

This example is from the training-data. Because the model was trained on this data, it performs reasonably well here — it mostly shows that the model can reproduce patterns it has seen, not that it generalizes. Also note that the first 30–50 time-steps of the predictions are inaccurate: the model starts each sequence from a zero internal state and needs a "warm-up" period before its output becomes reliable, which is why the first 50 steps were ignored in the loss function. The warm-up period is shown as a grey box in the plots.

In [75]:
plot_comparison(start_idx=100000, length=1000, train=True)

After the warm-up period, the predictions follow the overall trends of the true signals, although finer details and sharp spikes are smoothed out.

Here is another example from the training-data.

In [76]:
plot_comparison(start_idx=200000, length=1000, train=True)

The predicted temperature deviates noticeably from the true signal in this example, so we inspect the data to see what is going on.

In [77]:
df['Odense']['Temp'][200000:200000+1000].plot()
Out[77]: <matplotlib.axes._subplots.AxesSubplot at 0x7f69f54d37f0>

The resampled data shows the same shape as the plot above, so the discrepancy is not caused by the resampling. We can also plot the original (non-resampled) data for the same period:

In [78]:
df_org = weather.load_original_data()
df_org.xs('Odense')['Temp']['2002-12-23':'2003-02-04'].plot()
Out[78]: <matplotlib.axes._subplots.AxesSubplot at 0x7f69db165860>

The original data confirms the same pattern, so the prediction errors here are genuine model errors rather than data artifacts.

Finally, here is an example from the test-data, which the model has not seen during training.

In [79]:
plot_comparison(start_idx=200, length=1000, train=False)

Conclusion

This tutorial showed how to use a Recurrent Neural Network (RNN) to predict several weather signals for a city 24 hours into the future, from weather observations of 5 cities. The model tracks general trends reasonably well but misses many details, and it is unclear how much predictive information the input signals actually contain, so better models or more data may be needed for accurate forecasting. You should be able to adapt this tutorial to your own time-series data.

Exercises

These are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience in order to learn how to use TensorFlow properly. You may want to backup this Notebook before making any changes.

- Train the model for more epochs. Does the performance on the test-set improve?
- Change the model: try a different number of GRU units, more GRU layers, or different activation functions. See also Tutorial #19 on hyper-parameter optimization.
- Use only the data from the city "Odense" as input. Does the model perform better or worse?
- Predict 3 or 7 days into the future instead of 24 hours. What happens to the accuracy?
- Try predicting only 1 or 3 of the target signals instead of all 3.
- Explain to a friend how the program works.

License (MIT)

Copyright (c) 2018 by Magnus Erik Hvass Pedersen (http://www.hvass-labs.org/)

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, subject to the condition that the above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

The Software is provided "as is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages or other liability arising from, out of or in connection with the Software or the use or other dealings in the Software.