2023
DOI: 10.1016/j.marpol.2023.105754
Visualizing the annual transition of ocean policy in Japan using text mining

Mengyao Zhu,
Kotaro Tanaka,
Tomonari Akamatsu
Cited by 5 publications
(3 citation statements)
references
References 21 publications
“…Additionally, the model assigns coefficients to these topics, where a higher coefficient indicates a stronger association or relevance within the dataset. In essence, topics with higher coefficients are the central themes or focal points in the user reviews (Zhu et al, 2023).…”
Section: A Quantitative Text Mining
confidence: 99%
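The idea in the quoted passage — that a topic model assigns each topic a coefficient, and the highest-weighted topics are the corpus's central themes — can be illustrated with a minimal pure-Python sketch. The topic names and weights below are invented for illustration, not taken from the cited study:

```python
# Hypothetical document-topic coefficients from a topic model (e.g. LDA).
# A higher coefficient indicates a stronger association with the corpus.
doc_topic_weights = {
    "topic_policy":   0.52,
    "topic_fishery":  0.31,
    "topic_shipping": 0.17,
}

def central_themes(weights, top_n=2):
    """Return the top_n topics by coefficient, i.e. the focal themes."""
    return sorted(weights, key=weights.get, reverse=True)[:top_n]

print(central_themes(doc_topic_weights))  # ['topic_policy', 'topic_fishery']
```

In practice the coefficients would come from a fitted model (e.g. gensim's `LdaModel` or scikit-learn's `LatentDirichletAllocation`); the ranking step is the same.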
“…In the field of new energy vehicle policy, Liu Qin et al (2023) pointed out issues such as insufficient policy consistency, declining policy balance, and an expansion of the negative policy convexity index [33]. Zhu Mengyao et al (2023) utilized text mining for visual analysis of the annual changes in Japan's maritime policies, discovering that expert opinions and shifts in policy emphasis generally aligned with the results of unsupervised analyses [34]. In research on China's photovoltaic power policy, Chong Zhaotian et al (2023) found through quantitative text-mining analysis a trend towards consistency in policy objectives and measures, with photovoltaic policies gradually shifting towards integrating a variety of measures [35].…”
Section: Literature Review
confidence: 99%
“…In the first step, the Excel file was loaded into KH Coder and pre-processed with the Stanford POS Tagger, and the merging of words was checked against the word-frequency list. Function words with little semantic content, such as be-verbs, pronouns, and conjunctions, were coded as "Force Ignore" [41], while the semantics of the remaining words were checked and words with fixed collocations were extracted and coded as "Force Pick Up" before the analysis was run again. Following processing, the basic descriptive information was collected.…”
Section: Research Techniques and Procedures
confidence: 99%
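The preprocessing workflow described above (stopword exclusion via "Force Ignore", fixed collocations merged via "Force Pick Up", then frequency counting) is carried out interactively in KH Coder, but the logic can be sketched in plain Python. The stopword set, phrase list, and sample sentence below are illustrative assumptions, not the study's actual settings:

```python
import re
from collections import Counter

# Illustrative analogue of KH Coder's "Force Ignore" list:
# function words excluded from the frequency count.
STOPWORDS = {"be", "is", "are", "it", "and", "or", "the"}

# Illustrative analogue of "Force Pick Up": fixed collocations
# merged into single tokens before tokenization.
PHRASES = {"ocean policy", "text mining"}

def preprocess(text):
    text = text.lower()
    # Merge fixed collocations so they survive as single tokens.
    for phrase in PHRASES:
        text = text.replace(phrase, phrase.replace(" ", "_"))
    tokens = re.findall(r"[a-z_]+", text)
    return [t for t in tokens if t not in STOPWORDS]

# Basic descriptive information: word frequencies after cleaning.
freq = Counter(preprocess("The ocean policy and text mining: ocean policy is key"))
print(freq.most_common())
```

Running this counts `ocean_policy` twice while dropping the ignored function words, mirroring how the cleaned frequency list would look after both coding steps.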