Comments on practiCal fMRI: the nuts & bolts: "Reconstructing visual experiences from brain activity evoked by natural movies"

break (2011-10-20, 22:34):

I dunno, "visual experiences from brain activity" is a bit of an overstatement - it's just a weighted average of the top 5 (or whatever) closest matches against a set of short movie clips, computed statically at the end of the run? That the reconstructions are similar to the original images just tells me there's some convolution / lossy encoding going on in the visual cortex... which isn't surprising?

From the web page FAQ, it seems that each subject had their own unique model, built from the training data generated when they watched the initial video set. What would be interesting is how close, or how different, those models are to each other - e.g., what the reconstruction for subject A watching something looks like when analyzed with the model generated for subject B, etc.

practiCalfMRI (http://practicalfmri.blogspot.com) (2011-10-21, 15:29):

@break, correct, the model is agnostic. It says nothing about perception, and it's an assumption that the subject experienced the movies, that's true. In a sense, we can consider the visual activity as if it were the pixels on a screen playing in the subject's brain. Whether the subject chooses to watch (i.e. attend to) the movie is another matter entirely. So the decoded movie says nothing whatsoever about the subject's perceptual experience, only about the bottom-up input to that person's brain.

Re. your second point, the inter-subject variability for fMRI is *huge*. So I would bet good coin that using one subject's model on another subject's activity would be pretty grim.

break (2011-10-21, 20:41):

@practiCalfMRI - bah, it's a bit of a shame about the variability. Anyway, it's still a cool demo, and thanks for the reply.
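For readers curious about the reconstruction scheme break describes, here is a minimal sketch of a top-k weighted-average reconstruction: score a library of prior movie clips by how well their model-predicted brain activity matches the observed activity, then average the frames of the best matches. The function names, array shapes, and correlation-based scoring are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def reconstruct_frame(observed_activity, predicted_activities, prior_frames, k=5):
    """Reconstruct a stimulus frame as a weighted average of the k prior
    clips whose model-predicted fMRI response best matches the observed one.

    observed_activity    : (n_voxels,) measured response to the test stimulus
    predicted_activities : (n_clips, n_voxels) model-predicted responses for
                           each clip in the natural-movie prior (hypothetical)
    prior_frames         : (n_clips, height, width) representative frames
    """
    # Score each prior clip by the correlation between its predicted
    # response and the observed response.
    obs = observed_activity - observed_activity.mean()
    pred = predicted_activities - predicted_activities.mean(axis=1, keepdims=True)
    scores = (pred @ obs) / (
        np.linalg.norm(pred, axis=1) * np.linalg.norm(obs) + 1e-12
    )

    # Keep the top-k matches and average their frames, weighted by score.
    top = np.argsort(scores)[-k:]
    weights = np.clip(scores[top], 0.0, None)
    weights /= weights.sum() + 1e-12
    return np.tensordot(weights, prior_frames[top], axes=1)
```

In this sketch a full movie reconstruction would just repeat the averaging time point by time point over the prior library, which is why the output looks like a blurry composite of the nearest clips rather than a readout of what the subject perceived.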