2020
DOI: 10.21203/rs.3.rs-33109/v1
Preprint

pH-responsive and hyaluronic acid-functionalized metal-organic frameworks for therapy of osteoarthritis

Abstract: Drug therapy of osteoarthritis (OA) is limited by short retention time and a lack of stimulus-responsiveness after intra-articular (IA) injection. The weakly acidic microenvironment in the joint provides a potential trigger for controlled drug release systems in the treatment of OA. Herein, we developed a pH-responsive metal-organic framework (MOF) system modified with hyaluronic acid (HA) and loaded with the anti-inflammatory agent protocatechuic acid (PCA), designated MOF@HA@PCA, for the therapy of OA. Results demonstrated…


Cited by 13 publications (25 citation statements). References 21 publications.

Citation statements (ordered by relevance):
“…On the other hand, the pre-norm transformer reaches 66.35 on Wikitext-2 and 26.16 on PTB, slightly outperforming Wang et al. (2019). This is consistent with previous findings (Xiong et al., 2020) showing advantages of pre-norm over post-norm.…”
supporting
confidence: 94%
“…Interestingly, the homogeneity arguments do not work out if we instead consider the post-norm transformer architecture (Xiong et al., 2020). […] learning rate η and weight decay λ by training a variety of transformer language models on Wikitext-2 for 1 epoch.…”
Section: D2 Results
mentioning
confidence: 99%
“…We use the original Transformer model as our baseline model with two modifications: First, we apply layer normalization before the self-attention and feedforward blocks instead of after. This small change has been unanimously adopted by all current Transformer implementations because it leads to more effective training (Xiong et al., 2020). Secondly, we use relative attention with shared biases (as used in Raffel et al. (2019)) instead of sinusoidal positional embeddings, which makes it easier to train the model.…”
Section: Methods
mentioning
confidence: 99%
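
The last two citation statements hinge on where layer normalization sits relative to the residual connections (pre-norm vs. post-norm, per Xiong et al., 2020). The following is a minimal sketch of that placement only; it is not code from the cited papers, it assumes PyTorch, and the module names (PreNormBlock, PostNormBlock) and dimensions are illustrative.

import torch
import torch.nn as nn

class PreNormBlock(nn.Module):
    """LayerNorm applied before the self-attention and feed-forward sublayers."""
    def __init__(self, d_model: int = 256, n_heads: int = 4, d_ff: int = 1024):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual carries the un-normalized signal
        return x + self.ff(self.norm2(x))

class PostNormBlock(nn.Module):
    """Original formulation: LayerNorm applied after each residual connection."""
    def __init__(self, d_model: int = 256, n_heads: int = 4, d_ff: int = 1024):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))

    def forward(self, x):
        x = self.norm1(x + self.attn(x, x, x, need_weights=False)[0])
        return self.norm2(x + self.ff(x))

if __name__ == "__main__":
    tokens = torch.randn(2, 10, 256)  # (batch, sequence length, d_model)
    print(PreNormBlock()(tokens).shape, PostNormBlock()(tokens).shape)

In the pre-norm variant the residual path is an identity path around each sublayer, which Xiong et al. (2020) identify as the reason training is better behaved (e.g., less reliance on a learning-rate warmup stage).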