by greytery » Thu Feb 06, 2014 9:54 am
Running compute-intensive code through an interpreted layer on a coprocessor does seem to be a very inefficient approach, almost by definition. I think that supports the point of running the Erlang VM on the ARM, and at most a very small subset on the Epiphany - IF AT ALL!
As said above, Erlang processes can make calls to C/C++ routines, and sometimes that's the best way.
For example, I'm thinking of Wings 3D (Erlang) and how it accelerates some of its graphics calcs (C++, OpenGL). That is, it keeps a healthy separation between the main program body, running on general-purpose x86, and the specialised work offloaded to the co-processor (i.e. the graphics card).
So instead of trying to run a cut-down VM on each core, why not concentrate on ensuring that there are efficient mechanisms and interfaces for invoking Epiphany routines? Methinks it would be simpler than gutting the VM, and would deliver immediate value.
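To show the sort of interface I mean: the standard Erlang way to hand a hot calculation to native code is a NIF, with a pure-Erlang fallback if the native library isn't there. This is just a sketch - the module name, library path, and `dot/2` function are all made up, and the real Epiphany-side C would live behind `accel_nif`:

```erlang
%% Hypothetical sketch: an Erlang module whose hot function could be
%% replaced by a native (e.g. Epiphany-backed) implementation via a NIF.
-module(accel).
-export([dot/2]).
-on_load(init/0).

init() ->
    %% erlang:load_nif/2 swaps in the C version of dot/2 if the shared
    %% library exists; if not, the pure-Erlang fallback below is used.
    case erlang:load_nif("./accel_nif", 0) of
        ok -> ok;
        {error, _Reason} -> ok
    end.

%% Pure-Erlang fallback, used when no NIF library is loaded.
dot(Xs, Ys) ->
    lists:sum([X * Y || {X, Y} <- lists:zip(Xs, Ys)]).
```

The nice property is that the calling code never knows or cares whether the fast path is loaded - same module, same function, same arguments.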
Use Erlang for what it does best (e.g. distributed scalability and resilient computing) and the Epiphany (or any other coprocessor) for what it does best.
I have - sorry, have ordered - two boards, and one project I have in mind is to run Erlang on my Windows PCs AND on the Parallellas, thereby distributing the processes across my Gigabyte network. Only the processes that require specific acceleration would need to run on the Parallellas.
(Edit) Gigabit. I Wish!
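The distribution part needs no special machinery at all - plain Erlang does it. A hypothetical sketch (module and node names invented; on the real network the node argument would be something like 'erl@parallella1', but the same call works unchanged on the local node):

```erlang
%% Hypothetical sketch: run a fun on any connected node - a PC or a
%% Parallella - and wait for its result back at the caller.
-module(farm).
-export([run_on/2]).

run_on(Node, Fun) ->
    Parent = self(),
    Ref = make_ref(),
    %% spawn/2 with an explicit node starts the process remotely;
    %% the spawned process posts its result back to the caller.
    spawn(Node, fun() -> Parent ! {Ref, Fun()} end),
    receive
        {Ref, Result} -> Result
    after 5000 ->
        {error, timeout}
    end.
```

That's the whole point of the division of labour: Erlang moves the work between boxes for free, and only the fun you ship to a Parallella node needs to touch the Epiphany.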
tery