Thursday, September 22, 2011

"Reconstructing visual experiences from brain activity evoked by natural movies."

(Waiting for the next post in the physics series? Apologies. New academic year = teaching. Next post, on EPI, will be along within a few days...)

Gallant Lab strikes again!

Another shameless plug for fMRI at Berkeley! A study with the title of this post was published online today in Current Biology. Seems like it's generating as much media buzz as their previous study (Nature, 2008) using still images. Congratulations to Shinji, Joseph (An), Thomas and co. on another fine study. (And people said the Varian didn't work. Ha!)

Some links for more information:

UC Berkeley News Center.

Gallant Lab website. Best place to start, see the FAQ.

YouTube videos of some of the results:


3 comments:

  1. I dunno, 'visual experiences from brain activity' is a bit of an over-statement - the reconstruction is just a weighted average of the top 5 (or however many) closest matches against a set of short movie clips, computed statically at the end of the run, no? That the results are similar to the original images just tells me there's some convolution / lossy encoding going on in the visual cortex... which isn't surprising?

    From the web page FAQ, it seems that each subject had their own unique model, built from the training data generated when they watched the initial video set. What would be interesting is how close, or how different, those models are to each other - e.g., what the reconstruction for subject A watching something looks like when analyzed with the model generated for subject B, etc.

  2. @break, correct, the model is agnostic. It says nothing about perception, and it's an assumption that the subject experienced the movies, that's true. In a sense, we can consider the visual activity as if it were the pixels on a screen playing in the subject's brain. Whether the subject chooses to watch (i.e. attend to) the movie is another matter entirely. So the decoded movie says nothing whatsoever about the subject's perceptual experience, only about the bottom-up input to that person's brain.

    Re. your second point, the inter-subject variability for fMRI is *huge*. So I would bet good coin that using one subject's model on another subject's activity would give pretty grim results.

  3. @practiCalfMRI - bah, it's a bit of a shame about the variability - anyways, it's still a cool demo, and thanks for the reply.

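The top-match averaging scheme described in the first comment can be sketched in a few lines. This is a toy illustration with synthetic data, not the Gallant Lab's actual pipeline: the clip library size, the use of Pearson correlation as the match score, and k=5 are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a library of 1000 short clips, each summarized
# here as a single 8x8 grayscale frame, plus a model-predicted voxel
# response for each clip (in the real study these come from an encoding
# model fit to each subject's training data).
n_clips, n_voxels = 1000, 50
clip_frames = rng.random((n_clips, 8, 8))
predicted_responses = rng.standard_normal((n_clips, n_voxels))

# Measured fMRI response to an "unknown" stimulus: a noisy copy of clip
# 42's predicted response, so we know what the right answer should be.
measured = predicted_responses[42] + 0.1 * rng.standard_normal(n_voxels)

def reconstruct(measured, predicted, frames, k=5):
    """Average the frames of the k library clips whose predicted
    responses best correlate with the measured response."""
    # Pearson correlation of the measured response with each prediction
    z_meas = (measured - measured.mean()) / measured.std()
    z_pred = (predicted - predicted.mean(axis=1, keepdims=True)) \
             / predicted.std(axis=1, keepdims=True)
    corr = z_pred @ z_meas / len(measured)
    top_k = np.argsort(corr)[-k:]              # indices of the k best matches
    weights = corr[top_k] / corr[top_k].sum()  # correlation-weighted average
    return np.tensordot(weights, frames[top_k], axes=1), top_k

recon, top_k = reconstruct(measured, predicted_responses, clip_frames)
print(42 in top_k)  # -> True: the true clip is among the top matches
print(recon.shape)  # -> (8, 8)
```

The point of the sketch is the one made in the comment: the output is a blend of pre-existing clips selected after the fact, not a pixel-by-pixel readout of what the subject saw.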