Artificial Intelligence (AI) assessment, which aims to mitigate risks arising from biased, unreliable, or regulatorily noncompliant systems, remains an open challenge for researchers, policymakers, and organizations across industries. Because research on AI is scattered across disciplines, there is no comprehensive overview of the challenges that must be overcome to move AI assessment forward. In this study, we synthesize existing research on AI assessment through a descriptive literature review. Our review reveals seven challenges grouped into three main categories: ethical implications, regulatory gaps, and socio-technical limitations. This study contributes to a better understanding of the challenges in AI assessment so that AI researchers and practitioners can address them and advance the field.