Brain-computer interfaces (BCIs) offer a means of communication for people with severe motor and speech disabilities. However, current letter-speller implementations rely on perception-driven paradigms (EEG) or cognitively demanding tasks (fMRI, fNIRS) that are not directly linked to the letters visualized in the mind's eye. A more natural, content-based BCI speller that directly decodes imagined letters from the associated brain activity is therefore desirable. In the current study, we take the first steps towards such a BCI and offer new insights into the neural underpinnings of visual mental imagery, a process considered one of the main sources of human cognitive complexity. We demonstrate for the first time the feasibility of reconstructing visual field images that carry recognizable content of imagined letter shapes. Using submillimeter-resolution fMRI