The game of the Amazons is a fairly young member of the class of territory games. Since there is very little human play, it is difficult to estimate the level of current programs. However, it is believed that humans could play much more strongly than today's programs, given enough training and incentive. With the more general goal of improving the playing level of Amazons programs in mind, we focus here on the play of endgame situations. Our comparative study of two solvers, DFPN and WPNS, and three game-playing algorithms, Minimax with Alpha/Beta, Monte-Carlo Tree Search, and Temperature Discovery Search, shows that even though their computing process is quite expensive, traditional PNS-based solvers are best suited to the task of finding moves in a subgame, while classical game-playing engines need no specific improvement to play combinations of subgames well. Even Monte-Carlo Tree Search, the new Amazons standard, handles Amazons endgames quite well, despite often showing weaknesses in precise tasks such as solving.