2020
DOI: 10.21203/rs.3.rs-51141/v1
Preprint

Blended teaching practices for active learning in higher pharmacy education

Abstract: Background: Active learning practices improve student achievement on average in college. Blended adoption of research-based teaching methods for active learning at the tertiary level is expanding rapidly. Nevertheless, there have been few studies to date on the effects of detailed factors such as the blending ratio of the teaching components, the impact of learning resources, and formative evaluation methods. The aim of this study was to develop a blended teaching strategy by incorporating methods of…

Cited by 6 publications (6 citation statements)
References 14 publications (14 reference statements)
“…Based on empirical observations, we disable path-length regularisation [47] and reduce R1 regularisation's weight to 2 for superior quality and diversity. We use a combination of Rectified Adam [56] and Lookahead [101] as the optimiser to train the sketch-to-photo mapper for 5M iterations at a constant learning rate of $10^{-5}$ and a batch size of 4. $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are set to 1, 0.8, 0.5, and 0.6, respectively.…”
Section: Methods (mentioning)
Confidence: 99%
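As a rough illustration of the optimiser setup the excerpt describes, here is a minimal PyTorch sketch that wraps torch.optim.RAdam (Rectified Adam) in a simplified Lookahead loop. This is a sketch under stated assumptions, not the authors' code: the Lookahead class below is a stripped-down re-implementation, the k/alpha defaults follow the original Lookahead paper's common settings, and the model is a stand-in for the sketch-to-photo mapper.

```python
import torch

class Lookahead:
    """Minimal Lookahead wrapper (Zhang et al., 2019): maintain slow weights
    and, every k inner steps, pull them toward the fast weights by a factor
    alpha, then reset the fast weights to the slow ones."""
    def __init__(self, base_optimizer, k=5, alpha=0.5):
        self.base, self.k, self.alpha = base_optimizer, k, alpha
        self.counter = 0
        # Slow-weight copies of every parameter the inner optimizer updates.
        self.slow = [p.detach().clone()
                     for group in base_optimizer.param_groups
                     for p in group["params"]]

    def zero_grad(self):
        self.base.zero_grad()

    @torch.no_grad()
    def step(self):
        self.base.step()                      # fast (RAdam) update
        self.counter += 1
        if self.counter % self.k == 0:
            fast = [p for group in self.base.param_groups
                    for p in group["params"]]
            for p, slow in zip(fast, self.slow):
                slow += self.alpha * (p - slow)  # slow <- slow + alpha*(fast - slow)
                p.copy_(slow)                    # reset fast weights to slow weights

# Hyperparameters as quoted: RAdam inner optimizer at a constant lr of 1e-5.
mapper = torch.nn.Linear(256, 3)              # stand-in for the sketch-to-photo mapper
optimizer = Lookahead(torch.optim.RAdam(mapper.parameters(), lr=1e-5))
```

In this scheme the inner RAdam takes every step; Lookahead only periodically interpolates a second, slower copy of the weights, which tends to smooth the trajectory without changing the per-step cost.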
“…where $\ell(\mathcal{D}_n, f(\mathbf{X}_n; \theta)) = \frac{1}{M} \sum_{m=1}^{M} -\log f_{m, d_{n,m}}(\mathbf{X}_n; \theta)$. We employ the pre-trained multilingual BERT with a token-level classification head that uses the Adam optimizer [21,26] with early stopping and multiple random initializations.…”
Section: Phase Two: LM-assisted Weak Supervision (mentioning)
Confidence: 99%
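A minimal sketch of the quoted setup, assuming the Hugging Face transformers API: multilingual BERT with a token-classification head (whose built-in loss is exactly the per-token cross-entropy averaged over $M$ tokens, as in the formula above), plain Adam, and patience-based early stopping on validation loss. The label count, learning rate, patience, and the train_batches/val_batches names are illustrative assumptions, not details from the cited paper.

```python
import torch
from transformers import AutoModelForTokenClassification

# Pre-trained multilingual BERT with a token-level classification head.
# num_labels=9 is an illustrative assumption (e.g. a BIO tag set).
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=9)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)  # lr is an assumption

def run_epoch(batches, train=True):
    """One pass over batches: dicts with input_ids, attention_mask, labels.
    The model's built-in loss is the mean token-level -log f cross-entropy."""
    model.train(train)
    total = 0.0
    for batch in batches:
        with torch.set_grad_enabled(train):
            out = model(**batch)
        if train:
            optimizer.zero_grad()
            out.loss.backward()
            optimizer.step()
        total += out.loss.item()
    return total / max(len(batches), 1)

# Early stopping on validation loss; patience=2 is an assumption.
# train_batches / val_batches are hypothetical data iterables.
best_val, bad_epochs, patience = float("inf"), 0, 2
for epoch in range(20):
    run_epoch(train_batches, train=True)
    val_loss = run_epoch(val_batches, train=False)
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```

The "multiple random initializations" in the excerpt would correspond to repeating this loop with different seeds and keeping the run with the best validation loss.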
“…Representation Similarity. Previous studies (Liu et al, 2020a) have posited that the Pre-LN Transformer places disproportionately large weights on its residual branch, which may inhibit its potential performance as the model deepens. Our hypothesis is that DeepNorm directly augments the weights of the residual branches to enhance the stability of deep-model training, but may also impede the potential of the deep model.…”
Section: Parameter Redundancy (mentioning)
Confidence: 99%
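To make the residual-branch weighting concrete, here is a hedged sketch of a DeepNorm-style post-LN block: the residual input is scaled by a factor alpha > 1 before normalisation, which is the stabilising mechanism the excerpt hypothesises about. The feed-forward sublayer and dimensions are illustrative assumptions; the alpha formula follows the encoder-only prescription from the DeepNet paper (Wang et al., 2022).

```python
import torch.nn as nn

class DeepNormBlock(nn.Module):
    """Post-LN residual block with DeepNorm scaling:
    x <- LayerNorm(alpha * x + sublayer(x)).
    alpha > 1 deliberately up-weights the residual branch, stabilising
    very deep training at the possible cost the excerpt discusses."""
    def __init__(self, d_model=512, num_layers=100):
        super().__init__()
        # Encoder-only prescription from the DeepNet paper: alpha = (2N)^(1/4),
        # so alpha grows slowly with total depth N.
        self.alpha = (2 * num_layers) ** 0.25
        self.ffn = nn.Sequential(              # illustrative sublayer (FFN)
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(self.alpha * x + self.ffn(x))
```

Contrast this with a Pre-LN block, x + sublayer(LayerNorm(x)), where the identity branch is unscaled; DeepNorm instead boosts the residual path explicitly, which is the design choice whose trade-off the excerpt questions.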