Long Short-Term Memory Fully Convolutional Network (LSTM-FCN) and Attention LSTM-FCN (ALSTM-FCN) have been successful in classifying univariate time series. However, they have never been applied to multivariate time series classification problems. The models we propose, Multivariate LSTM-FCN (MLSTM-FCN) and Multivariate Attention LSTM-FCN (MALSTM-FCN), convert their respective univariate models into multivariate variants. We extend the squeeze-and-excite block to the case of 1D sequence models and use it to augment the fully convolutional blocks of the LSTM-FCN and ALSTM-FCN models to enhance classification accuracy.
As the datasets now consist of multivariate time series, we can define a time series dataset as a tensor of shape (N, Q, M), where N is the number of samples in the dataset, Q is the maximum number of time steps amongst all variables, and M is the number of variables processed per time step. A univariate time series dataset is therefore a special case of this definition, where M is 1. The only alteration required to the input of the LSTM-FCN and ALSTM-FCN models is to accept M inputs per time step, rather than a single input per time step.
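For illustration, the following is a minimal NumPy sketch of this tensor layout; the sizes N, Q, and M are hypothetical values chosen only for the example.

```python
import numpy as np

# Hypothetical sizes for illustration: N samples, Q time steps, M variables.
N, Q, M = 1000, 64, 9

# A multivariate time series dataset as a tensor of shape (N, Q, M).
X = np.random.randn(N, Q, M).astype(np.float32)

# The univariate case is the special case M = 1.
X_univariate = np.random.randn(N, Q, 1).astype(np.float32)

print(X.shape)             # (1000, 64, 9)  -> M inputs per time step
print(X_univariate.shape)  # (1000, 64, 1)  -> one input per time step
```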
Similar to LSTM-FCN and ALSTM-FCN, the proposed models comprise a fully convolutional block and an LSTM block, as depicted in Fig. 1. The fully convolutional block contains three temporal convolutional blocks, used as a feature extractor, replicated from the original fully convolutional block of Wang et al. The three convolutional blocks contain convolutional layers with 128, 256, and 128 filters and kernel sizes of 8, 5, and 3, respectively. Each convolutional layer is succeeded by batch normalization, with a momentum of 0.99 and an epsilon of 0.001. The batch normalization layer is succeeded by the ReLU activation function. In addition, the first two convolutional blocks conclude with a squeeze-and-excite block, which sets the proposed models apart from LSTM-FCN and ALSTM-FCN. Fig. 2 summarizes how the squeeze-and-excite block is computed in our architecture. For all squeeze-and-excite blocks, we set the reduction ratio r to 16. The final temporal convolutional block is followed by a global average pooling layer. The squeeze-and-excite block is an addition to the FCN block which adaptively recalibrates the input feature maps. With the reduction ratio r set to 16, the number of parameters required to learn these self-attention maps is kept small enough that the overall model size increases by just 3-10%.
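For concreteness, the sketch below shows this fully convolutional branch with 1D squeeze-and-excite blocks, written against the tensorflow.keras functional API. The function names and the 'same' padding are our own illustrative assumptions; only the filter counts, kernel sizes, batch normalization parameters, reduction ratio, and placement of the squeeze-and-excite blocks follow the description above.

```python
from tensorflow.keras import layers

def squeeze_excite_block_1d(x, r=16):
    """Squeeze-and-excite block for 1D feature maps, with reduction ratio r."""
    filters = x.shape[-1]
    # Squeeze: global average pooling over the temporal axis.
    se = layers.GlobalAveragePooling1D()(x)
    # Excite: two-layer bottleneck producing per-filter scaling weights in (0, 1).
    se = layers.Dense(filters // r, activation='relu')(se)
    se = layers.Dense(filters, activation='sigmoid')(se)
    se = layers.Reshape((1, filters))(se)
    # Recalibrate: rescale each feature map by its learned weight (broadcast over time).
    return layers.multiply([x, se])

def conv_block(x, filters, kernel_size, use_se=True):
    """Temporal convolutional block: Conv1D -> BatchNorm -> ReLU (-> SE)."""
    x = layers.Conv1D(filters, kernel_size, padding='same')(x)
    x = layers.BatchNormalization(momentum=0.99, epsilon=0.001)(x)
    x = layers.Activation('relu')(x)
    if use_se:
        x = squeeze_excite_block_1d(x)
    return x

def fcn_branch(inp):
    """FCN branch: blocks with (128, 256, 128) filters and kernels (8, 5, 3);
    only the first two blocks conclude with a squeeze-and-excite block."""
    x = conv_block(inp, 128, 8, use_se=True)
    x = conv_block(x, 256, 5, use_se=True)
    x = conv_block(x, 128, 3, use_se=False)
    return layers.GlobalAveragePooling1D()(x)
```

In the full model, the output of this branch would be concatenated with the output of the LSTM block before the final softmax classifier, as depicted in Fig. 1.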
The linear rectification function, also known as the rectified linear unit (ReLU), is an activation function commonly used in artificial neural networks, typically referring to the nonlinear functions represented by the ramp function and its variants.
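In symbols, the ramp function underlying ReLU is

$$\mathrm{ReLU}(x) = \max(0, x),$$

which zeroes out negative pre-activations and passes positive values through unchanged.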
This adaptive rescaling of the filter maps is central to the improved performance of the MLSTM-FCN model over LSTM-FCN, as it applies learned self-attention to the inter-correlations between the multiple variables at each time step, which LSTM-FCN captures inadequately.