I was thinking about the same thing. So to make things easier I created a complete environment with FPGA development and a Linux kernel / Linux software distribution. I added the Analog Devices IP into the elink-redesign project and now have a functioning system with HDMI, sound and elink-redesign (see branch elink-redesign at
https://github.com/peteasa/parallella.git).
I have seen several things that could help whilst doing this project. First, the Analog Devices IP libraries use a module axi_hdmi_tx_vdma that takes the video from the VDMA and passes it to axi_hdmi_tx_core, which feeds a control and data pipeline. It would be possible to intercept the video stream at that point. After this point the data is processed by the Analog Devices IP (for example colour space conversion, RGB to YCbCr) and sent to the HDMI connection, so it does not seem sensible to intercept beyond this point.
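For reference, the colour space conversion stage does something like the following. This is a minimal pure-Python sketch of the standard BT.601 full-range RGB to YCbCr matrix; the exact coefficients, bit depths and rounding inside the Analog Devices core may well differ, so treat it as an illustration of the operation rather than a model of the IP:

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB -> YCbCr for one pixel.

    Illustrative only: the coefficients and rounding used inside the
    Analog Devices colour space converter may differ.
    """
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)

# White stays achromatic: Cb = Cr = 128
print(rgb_to_ycbcr(255, 255, 255))  # -> (255, 128, 128)
```

The point of showing this is that once the stream has been through this stage it is no longer plain RGB, which is why tapping the stream earlier is more attractive.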
Easier might be to add a tap onto the mm2s interface, do some processing on the VDMA output, then push it back into axi_hdmi_tx unmodified via the mm2s stream format. It would be possible to provide the stream of data to the Epiphany chip, have the Epiphany do the processing, and return the frames back to axi_hdmi_tx for delivery to HDMI. By using the "s" (stream) side of mm2s it would be possible to port this to future versions of the Analog Devices IP more easily.
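The idea of the tap can be sketched in software terms. This is a pure-Python model, not HDL: `frame_source`, `tap` and `invert` are hypothetical names standing in for the VDMA mm2s read channel, the inserted processing stage (e.g. the Epiphany), and some per-frame operation:

```python
def frame_source():
    """Stand-in for the VDMA mm2s read channel: yields raw frames.
    (Hypothetical data; real frames arrive from memory over AXI.)"""
    for n in range(3):
        yield [n, n + 1, n + 2]          # a tiny "frame" of pixel values

def tap(stream, process):
    """The proposed tap: pull each frame off the mm2s stream, let
    `process` (e.g. code running on the Epiphany) transform it, then
    push it downstream in the same format axi_hdmi_tx expects."""
    for frame in stream:
        yield process(frame)

def invert(frame):
    """Example per-frame operation standing in for Epiphany processing."""
    return [255 - px for px in frame]

# The sink (axi_hdmi_tx) sees frames in the unmodified stream format.
for frame in tap(frame_source(), invert):
    print(frame)
```

Because the tap consumes and produces the same stream format, axi_hdmi_tx needs no changes at all, which is what makes this approach portable across IP versions.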
Another place to intercept the data is via the "normal" software route, i.e. packages like OpenCV bind the camera input to the video output (or file output) in software. I have loaded the Video4Linux (v4l2) drivers into my development environment but not yet OpenCV, so I am still at the planning stage at the moment. I am not sure how easy it would be to offload some of the OpenCV processing onto the Epiphany chip, but at least OpenCV already has this type of interface built in.
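Roughly, the OpenCV route would look like the loop below. In a real program the frames would come from cv2.VideoCapture(0) and go to cv2.imshow(); here a synthetic frame keeps the sketch self-contained, and `offload` is my hypothetical name for the per-frame hook whose work could move to the Epiphany:

```python
def offload(frame):
    """Per-frame processing that could be delegated to the Epiphany.
    Here: a simple threshold on grey values, as a stand-in."""
    return [[255 if px > 127 else 0 for px in row] for row in frame]

def main():
    frame = [[0, 100, 200], [50, 150, 250]]   # synthetic 2x3 grey frame
    # Real loop would be roughly:
    #   ok, frame = cap.read()
    #   cv2.imshow('out', offload(frame))
    result = offload(frame)
    print(result)

main()
```

The attraction of this route is that everything up to `offload` is handled by proven kernel drivers and OpenCV, so only the hook itself needs porting to the Epiphany.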
Further, it would be possible to intercept camera data (see for example Sylvain's work with the RPi camera
viewtopic.php?t=2514). However, if you are like me and only have a USB camera, it's not obvious how to intercept the data from the USB port, and anyway the format direct from the camera is likely to be YCbYCr, so why not let the proven Linux kernel drivers handle the various formats from the camera.
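For what it's worth, the interleaved YCbYCr layout (YUYV / YUY2, a common 4:2:2 format from USB cameras) can be unpacked as below. A pure-Python sketch, assuming v4l2 has handed the application one buffer per frame in this byte order:

```python
def unpack_yuyv(data):
    """Unpack a YUYV (YCbYCr 4:2:2) byte stream into per-pixel
    (Y, Cb, Cr) tuples.  Each 4-byte group Y0 Cb Y1 Cr carries two
    pixels that share one Cb/Cr pair."""
    pixels = []
    for i in range(0, len(data) - 3, 4):
        y0, cb, y1, cr = data[i:i + 4]
        pixels.append((y0, cb, cr))
        pixels.append((y1, cb, cr))
    return pixels

# Two pixels recovered from one 4-byte group:
print(unpack_yuyv(bytes([16, 128, 235, 128])))  # -> [(16, 128, 128), (235, 128, 128)]
```

Handling this (and the many other camera formats) is exactly the sort of thing the kernel drivers already do well, which is the argument for staying on the software route.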
I am still thinking about the best approach; the easiest would seem to be OpenCV.