2021
DOI: 10.1109/tvt.2021.3064287
Estimating State of Charge for xEV Batteries Using 1D Convolutional Neural Networks and Transfer Learning

Cited by 90 publications (33 citation statements)
References 24 publications
“…The affine optical flow vector for each pixel in the frame is given by Eq. (13), where f_k(x + i, y + j) and f_{k−1}(x, y) are the horizontal and vertical components of the affine optical flow, respectively. The quality of the motion feature information extracted by the convolutional neural network is critical for accurate extraction and motion identification in the spatially weighted motion feature classifier [14][15][16].…”
Section: Technical Feature Extraction of Long-Distance Running (mentioning)
confidence: 99%
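The affine motion model in the quoted passage can be sketched numerically. Below is a minimal NumPy example, assuming a per-pixel flow of the form v(x, y) = A·(x, y)ᵀ + b; the parameters A and b and the grid size are illustrative, not taken from the cited paper.

```python
import numpy as np

# Hypothetical affine motion parameters (not from the cited paper):
# the flow at pixel (x, y) is v(x, y) = A @ [x, y] + b.
A = np.array([[0.01, 0.00],
              [0.00, 0.01]])   # small scaling motion
b = np.array([1.0, -0.5])      # global translation

def affine_flow_field(h, w, A, b):
    """Return horizontal (u) and vertical (v) flow components for an h x w frame."""
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs, ys], axis=-1).astype(float)  # (h, w, 2) as (x, y)
    flow = coords @ A.T + b                             # affine model at every pixel
    return flow[..., 0], flow[..., 1]                   # u (horizontal), v (vertical)

u, v = affine_flow_field(4, 4, A, b)
```

With a diagonal A this reduces to u = 0.01·x + 1.0 and v = 0.01·y − 0.5, i.e. a slow zoom plus a constant translation; a dense optical-flow estimator would fit A and b from consecutive frames rather than assume them.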
“…Among all DL architectures compared in the study, the proposed transformer model achieved the lowest RMSE of 1.1075%, 1.3139% and 1.1914% and MAE of 0.4441%, 0.5680% and 0.6502% on the test drive cycles, outperforming even the recurrent models, which have been widely used for SOC estimation, as shown in Table 2. We also note that the convolutional models such as the ResNet 40 and the Inception Time 51 also outperformed the conventional GRU 41 and LSTM 52 models. The baseline transformer model that is not trained with the proposed training framework scores poorly, along with the feedforward DNN.…”
Section: Results (mentioning)
confidence: 77%
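The RMSE and MAE figures quoted above are standard regression metrics, here expressed in percent state of charge. A minimal sketch of how they are computed (the SOC values below are illustrative, not data from the cited study):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error, in the same units as the target (here, % SOC)."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Illustrative SOC values (percent); not data from the cited study.
soc_true = [80.0, 60.0, 40.0, 20.0]
soc_pred = [81.0, 59.0, 41.5, 19.0]
```

Because RMSE squares the errors before averaging, it penalizes large SOC excursions more heavily than MAE, which is why papers in this area typically report both.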
“…Secondly, models trained on one cell chemistry do not apply to other cell chemistries. Even though preliminary work indicates that transfer learning is possible 40 , further tests are still required to verify its accuracy when applied to more cells with differing chemistries. In most cases a model trained on data from one Li-ion battery cell does not generalize well to another cell and may require re-training of the model from scratch.…”
Section: Introduction (mentioning)
confidence: 99%
“…Given a source domain $\mathcal{D}_{s}$ with a corresponding task $\mathcal{T}_{s}$, and a target domain $\mathcal{D}_{t}$ with a corresponding task $\mathcal{T}_{t}$, the objective of transfer learning is to improve the performance of a machine learning model in $\mathcal{D}_{t}$ using the knowledge acquired in $\mathcal{D}_{s}$ and $\mathcal{T}_{s}$ [54]. Transfer learning has played a significant role in facilitating the use of deep learning in numerous applications [55] – [57]. In this work, we empirically demonstrate how knowledge transfer is equally effective for a vision-transformer-based framework in medical image classification.…”
Section: Proposed Methods (mentioning)
confidence: 99%
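The source/target-domain definition quoted above can be illustrated with a toy experiment. The NumPy sketch below is not the paper's method: it fits a linear model on a hypothetical source task, then uses the learned weights to initialize fine-tuning on a data-scarce target task whose true weights are slightly shifted. All names, parameters, and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: source and target tasks share structure (y = w . x),
# with a slightly shifted weight vector in the target domain.
w_source = np.array([2.0, -1.0])
w_target = np.array([2.2, -0.8])

X_s = rng.normal(size=(200, 2)); y_s = X_s @ w_source   # abundant source data
X_t = rng.normal(size=(10, 2));  y_t = X_t @ w_target   # scarce target data

def fit_gd(X, y, w0, steps=200, lr=0.05):
    """Plain gradient descent on mean squared error, starting from w0."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * (2 / len(X)) * X.T @ (X @ w - y)
    return w

# "Transfer": train on the source domain, then fine-tune the target model
# from the source solution for only a few steps.
w_pretrained  = fit_gd(X_s, y_s, np.zeros(2))
w_transferred = fit_gd(X_t, y_t, w_pretrained, steps=20)
```

Because the source solution already sits close to the target optimum, a handful of fine-tuning steps on ten target samples moves the weights nearer to `w_target` than the source model alone; the same intuition underlies fine-tuning a pretrained vision transformer on a small medical-imaging dataset.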