2021
DOI: 10.1108/jfm-08-2020-0059
Enhancement of bi-objective function model to master straight-line facilities sequences using frequency from-to chart

Abstract: Purpose — The purpose of this study is to understand the functional power of the frequency from-to chart (FFTC) as an independent solution key for generating optimal (exact) facility sequences with equal-distance straight-line flow patterns. The paper proposes a bi-objective function model based on the Torque Method and then turns it into a computer-based technique that works permutatively using the full enumeration method. This model aims to figure out if there is a difference between the moment minimi…
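The abstract describes exhaustively enumerating facility permutations on a straight line and scoring each by a moment (torque-style) objective built from the frequency from-to chart. The paper's exact bi-objective formulation is not given in this excerpt, so the following is only a minimal sketch: it assumes unit spacing between adjacent positions and a single moment objective (sum of frequency times travel distance), with a hypothetical 4-facility chart.

```python
from itertools import permutations

def total_moment(freq, order):
    """Total flow moment for a straight-line sequence with unit spacing.

    freq[i][j] is the trip frequency from facility i to facility j
    (the frequency from-to chart); order is the left-to-right
    placement of facilities along the line.
    """
    pos = {f: k for k, f in enumerate(order)}  # equal distances between slots
    return sum(freq[i][j] * abs(pos[i] - pos[j])
               for i in range(len(freq))
               for j in range(len(freq)))

def best_sequence(freq):
    """Full enumeration: score every permutation, keep the minimum."""
    n = len(freq)
    return min(permutations(range(n)),
               key=lambda order: total_moment(freq, order))

# Hypothetical 4-facility frequency from-to chart (upper triangle only)
freq = [
    [0, 8, 2, 1],
    [0, 0, 6, 3],
    [0, 0, 0, 9],
    [0, 0, 0, 0],
]
order = best_sequence(freq)
```

Full enumeration is exact but factorial in the number of facilities, which is why the abstract frames it as a computer-based technique rather than a hand method.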

Cited by 1 publication (1 citation statement)
References 4 publications (11 reference statements)
“…Its core idea is to introduce attention weight α into the input sequence to give priority to the location of relevant information to generate the output the next time. The attention module in the network structure with the attention model is responsible for automatically learning attention weight α ij , which can automatically capture the correlation between h i and S j (Gamal et al, 2020). These attention weights are then used to construct the content vector C, which is passed to the decoder as input.…”
Section: Related Technology
Citation type: mentioning (confidence: 99%)
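The quoted citation statement describes a standard attention module: weights α_ij capture the correlation between encoder states h_i and a decoder state s_j, and the weighted states form a content vector C fed to the decoder. A minimal sketch follows, assuming a plain dot-product score function and toy data; the citing paper's actual score function and dimensions are not given in this excerpt.

```python
import numpy as np

def attention_context(H, s):
    """Compute attention weights and the content (context) vector.

    H: (n, d) encoder hidden states h_1..h_n
    s: (d,)  current decoder state s_j
    Scoring here is a dot product; the cited work may use a
    different (e.g. additive) score function.
    """
    scores = H @ s                                  # correlation of each h_i with s_j
    scores = scores - scores.max()                  # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax -> weights alpha_ij
    C = alpha @ H                                   # content vector for the decoder
    return alpha, C

# Hypothetical toy example: three encoder states of dimension 2
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
s = np.array([4.0, 0.0])
alpha, C = attention_context(H, s)
```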