“…Thus, the algorithmic fairness discourse may be limited because, in many cases, machine learning algorithms utilize the data at the point of creating the algorithm without considering the historical context in which the input data were generated (So et al., 2022). This can lead to machine learning models that "learn" to reinforce disparities created by seemingly race-neutral markers, treating them as objective truths and thereby legitimizing differential treatment (Benjamin, 2019; Browne, 2010; Gerdon et al., 2022). Examples of this process in the domain of housing include the use of seemingly race-neutral variables in risk-based pricing algorithms, such as security deposits in the rental market (Hatch, 2017) and mortgage insurance in mortgage loans (Deng and Gabriel, 2006).…”