“…On the other hand, a woman may try to influence a prediction of home-based delivery with the goal of receiving more support. In this paper, we evaluate the algorithm's vulnerability to falsified responses crafted to produce a specific prediction, a scenario often referred to as an “adversarial attack” in the machine learning literature [39], [40], [41], [42], [43]. The findings from this analysis allow us to quantify the susceptibility of our algorithm to adversarial attacks, which can inform methods to monitor for or mitigate the impact of these attacks once the algorithm is deployed.…”
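The kind of attack described above can be sketched as a greedy search over flipped survey answers. This is a minimal illustrative example, not the paper's actual model: the feature names, weights, and logistic scoring function are all hypothetical stand-ins for whatever risk model is deployed.

```python
import math

# Hypothetical toy model: weights and feature names are illustrative only.
WEIGHTS = {"prior_home_delivery": 1.2, "far_from_facility": 0.9,
           "attended_antenatal_care": -1.5, "has_transport": -0.8}
BIAS = 0.3

def predict_home_delivery(responses):
    """Return P(home-based delivery) from a toy logistic model."""
    z = BIAS + sum(WEIGHTS[k] * responses[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def greedy_attack(responses, max_flips=2):
    """Falsify at most `max_flips` binary answers to push the prediction
    toward home-based delivery, flipping the most influential answer first."""
    r = dict(responses)
    flipped = []
    for _ in range(max_flips):
        best_key, best_gain = None, 0.0
        for k in r:
            trial = dict(r)
            trial[k] = 1 - trial[k]  # flip one binary answer
            gain = predict_home_delivery(trial) - predict_home_delivery(r)
            if gain > best_gain:
                best_key, best_gain = k, gain
        if best_key is None:
            break  # no single flip increases the predicted probability
        r[best_key] = 1 - r[best_key]
        flipped.append(best_key)
    return r, flipped

# Example: truthful answers yield a low probability; two falsified
# answers are enough to raise it substantially.
honest = {"prior_home_delivery": 0, "far_from_facility": 0,
          "attended_antenatal_care": 1, "has_transport": 1}
attacked, flips = greedy_attack(honest)
```

A natural susceptibility metric under this sketch is the minimum number of flipped responses needed to cross the decision threshold: the fewer flips required, the more vulnerable the model is to this style of manipulation.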