you know, there's a reason people write papers on even the most trivial topics:
trivial or not, it's still a technical advance that nobody thought of earlier.
so, in that spirit, please be more specific with your reply.
for example, rocket-driven cars exist, and they are indeed unreasonably fast.
so quite naturally it makes no sense to put them on the highways.
however, with the development of computer-controlled cars, it might make sense in the future.
a human is incapable of driving at such high speeds; a computer might be. one obstacle removed.
do you see? all it takes to remove obstacles is to formulate the problem.
eventually someone might come up with a solution.
we wouldn't notice if nobody stated the problem clearly enough to connect the two!
so in terms of parallella: as I understand it, the current hardware allows for 2 off-board connections in addition to the fpga connector.
additionally, as I understand it, the parallella cannot address the whole 4GB range of 32 bits. is that correct?
so what is needed is some sort of software-driven dma transfer.
i.e. you send a command to another parallella asking it to send you a stream of data.
that stream is then forwarded to a third parallella through the other connection.
all of that is done piece by piece, in chunks of 64KB or so.
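the chunked store-and-forward idea above could be sketched roughly like this. this is only an illustration under assumptions: `link_recv`/`link_send` are hypothetical placeholders (modeled here as plain `memcpy`) standing in for transfers over the two off-board connections; nothing here is an actual parallella API.

```c
#include <stdint.h>
#include <string.h>

#define CHUNK_SIZE (64 * 1024)  /* 64KB pieces, as suggested above */

/* Hypothetical link primitives: modeled as plain memcpy for illustration.
 * On real hardware these would be transfers over the two off-board links. */
static void link_recv(uint8_t *dst, const uint8_t *src, size_t n) { memcpy(dst, src, n); }
static void link_send(uint8_t *dst, const uint8_t *src, size_t n) { memcpy(dst, src, n); }

/* Store-and-forward relay: pull `total` bytes from the upstream board in
 * 64KB chunks through one small staging buffer, pushing each chunk to the
 * downstream board before fetching the next.  Returns bytes relayed. */
size_t relay(uint8_t *downstream, const uint8_t *upstream, size_t total)
{
    static uint8_t staging[CHUNK_SIZE];  /* only local memory the relay needs */
    size_t done = 0;
    while (done < total) {
        size_t n = total - done;
        if (n > CHUNK_SIZE) n = CHUNK_SIZE;
        link_recv(staging, upstream + done, n);    /* inbound connection  */
        link_send(downstream + done, staging, n);  /* outbound connection */
        done += n;
    }
    return done;
}
```

the point of the small staging buffer is that a relay node never needs to hold the whole stream, only one chunk at a time, which is what keeps per-board memory demands constant regardless of stream length.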
do you see any bottleneck in that idea, from the point of view of a 160-core epiphany structure?
as far as I know the only bottleneck is the 160Mb/s link to the arm's memory.
basically you're limited to emulating an x86 with a 2x120Mb/s ram bus, but with an arbitrary number of cores and any amount of memory, right?
as for virtualbox, do you see any problem with parallelizing the branch prediction and the coprocessors?
what problems in the area of programming the platform do you see here?
so far I've read most of the forum posts, but I can't figure out which posts on programming you are referring to.
or do you mean my memory model for addressing <10GB through 32 bits? I'm aware it's only on the level of a gigabit network...
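one way to picture a >4GB memory model over 32-bit local addressing is to split a wider logical address into a board index plus a 32-bit local offset. the layout below (upper bits = board, lower 32 bits = offset) is purely an assumption for illustration, not the actual parallella or epiphany address map.

```c
#include <stdint.h>

/* Hypothetical split of a 64-bit "global" address into which board's
 * 4GB space it lives in (upper 32 bits) and a 32-bit local offset
 * (lower 32 bits).  Assumed layout, for illustration only. */
typedef struct {
    uint32_t board;   /* which parallella's 32-bit space */
    uint32_t offset;  /* local offset within that space  */
} remote_addr;

static remote_addr split_addr(uint64_t global)
{
    remote_addr r;
    r.board  = (uint32_t)(global >> 32);
    r.offset = (uint32_t)(global & 0xFFFFFFFFu);
    return r;
}

static uint64_t join_addr(remote_addr r)
{
    return ((uint64_t)r.board << 32) | r.offset;
}
```

under this scheme, addressing 10GB just means the board index runs from 0 to 2; the cost is that every remote access becomes a message over the board-to-board links, which is why it ends up at roughly gigabit-network speeds.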