This is fascinating stuff.
In the paper outlined here, Changde Du of the Research Center for Brain-Inspired Intelligence and his co-authors at the Chinese Academy of Sciences (CAS), Beijing, describe decoding human brain activity from functional magnetic resonance imaging (fMRI) scans.
The setup works like this: a subject views relatively simple images, the brain processes them, and fMRI scans capture the resulting brain activity as voxels. The authors used simple geometric shapes and alphabet letters as stimuli, feeding the data to nonlinear observation models parameterized by deep neural networks (DNNs). They call their model the Deep Generative Multiview Model (DGMM).
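To give a feel for the multiview idea, here is a minimal sketch, not the paper's actual model: a shared latent representation is passed through two separate nonlinear "observation models" (tiny two-layer networks here), one generating the stimulus image and one generating the fMRI voxel responses. All sizes, weights, and the `mlp` helper are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(z, w1, b1, w2, b2):
    """Tiny two-layer network: a stand-in for the nonlinear
    observation models that the paper parameterizes with DNNs."""
    h = np.tanh(z @ w1 + b1)
    return h @ w2 + b2

# Hypothetical dimensions, for illustration only.
latent_dim, image_dim, voxel_dim, hidden = 8, 28 * 28, 500, 32

# One shared latent vector per stimulus.
z = rng.normal(size=(1, latent_dim))

# Two independent observation models over the same latent:
# view 1 generates the image pixels, view 2 the voxel responses.
img_params = (rng.normal(size=(latent_dim, hidden)) * 0.1,
              np.zeros(hidden),
              rng.normal(size=(hidden, image_dim)) * 0.1,
              np.zeros(image_dim))
vox_params = (rng.normal(size=(latent_dim, hidden)) * 0.1,
              np.zeros(hidden),
              rng.normal(size=(hidden, voxel_dim)) * 0.1,
              np.zeros(voxel_dim))

image = mlp(z, *img_params)   # generated pixel intensities
voxels = mlp(z, *vox_params)  # generated voxel responses

print(image.shape, voxels.shape)  # (1, 784) (1, 500)
```

The point of the sketch is only the shape of the idea: because both views hang off the same latent variable, inferring that latent from observed voxels gives you a path back to the image, which is the direction the decoding task cares about.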
Stay tuned as we go through the paper in detail and share our thoughts.