The increasing use of computer vision algorithms in camera-centric devices has created a growing need to optimize these algorithms for resource-constrained platforms. Halide is a domain-specific language for image processing that separates an algorithm's functional definition from its schedule, enabling high performance without altering the algorithm itself. This thesis proposes an approach to improving the performance of Halide computer vision algorithms by using stochastic methods such as simulated annealing to optimize scheduling, which makes it possible to search for a global optimum of affine scheduling within a constrained time budget. To convert Halide programs to MLIR, we use a novel compilation flow, the Halide-to-MLIR (HTM) converter. The efficacy of the approach is evaluated on several platforms, including x86, ARM, and RISC-V. The study demonstrates the potential of MLIR's transformation and optimization capabilities on the Affine dialect and highlights the need for tuning infrastructure to fully leverage them.
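To make the algorithm/schedule separation concrete, the following is a minimal illustration in Halide's C++ embedding, using the canonical 3x3 blur from the Halide literature rather than any benchmark specific to this thesis; the functional definition at the top stays fixed while the scheduling directives below it can be changed freely:

```cpp
#include "Halide.h"
using namespace Halide;

int main() {
    // Algorithm: a 3x3 blur expressed as pure functional definitions.
    ImageParam input(UInt(16), 2);
    Func blur_x("blur_x"), blur_y("blur_y");
    Var x("x"), y("y"), xi("xi"), yi("yi");

    blur_x(x, y) = (input(x, y) + input(x + 1, y) + input(x + 2, y)) / 3;
    blur_y(x, y) = (blur_x(x, y) + blur_x(x, y + 1) + blur_x(x, y + 2)) / 3;

    // Schedule: how the computation is tiled, vectorized, and parallelized,
    // specified separately from (and without changing) the algorithm above.
    blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y);
    blur_x.compute_at(blur_y, x).vectorize(x, 8);

    blur_y.compile_jit();
    return 0;
}
```

It is exactly this schedule portion that forms the search space for the optimization described next.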
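The following is a minimal sketch of how simulated annealing could drive such a schedule search. The Schedule encoding, the measure_runtime stand-in, and all numeric parameters are hypothetical placeholders, not the thesis's actual implementation; in the real flow the cost of a candidate would come from compiling it through the HTM/MLIR pipeline and timing it on the target:

```cpp
#include <cmath>
#include <random>
#include <utility>
#include <vector>

// Hypothetical schedule encoding: one discrete choice per scheduling decision.
using Schedule = std::vector<int>;

// Stand-in cost function so the sketch is runnable; a real cost would be the
// measured runtime of the compiled candidate on the target platform.
double measure_runtime(const Schedule &s) {
    double c = 0.0;
    for (int v : s) c += (v - 3.5) * (v - 3.5);
    return c;
}

// Randomly perturb one scheduling decision to produce a neighboring schedule.
Schedule perturb(Schedule s, std::mt19937 &rng) {
    std::uniform_int_distribution<size_t> pick(0, s.size() - 1);
    std::uniform_int_distribution<int> choice(0, 7);
    s[pick(rng)] = choice(rng);
    return s;
}

Schedule anneal(Schedule current, int steps, double temp, double cooling) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    double cost = measure_runtime(current);
    Schedule best = current;
    double best_cost = cost;
    for (int i = 0; i < steps; ++i, temp *= cooling) {
        Schedule cand = perturb(current, rng);
        double cand_cost = measure_runtime(cand);
        // Always accept improvements; accept regressions with a probability
        // that shrinks as the temperature cools, allowing escapes from
        // local optima early in the search.
        if (cand_cost < cost ||
            unit(rng) < std::exp((cost - cand_cost) / temp)) {
            current = std::move(cand);
            cost = cand_cost;
            if (cost < best_cost) { best = current; best_cost = cost; }
        }
    }
    return best;
}
```

Bounding the number of steps and the cooling rate is what keeps the search within the constrained time budget mentioned above.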