by polas » Fri Oct 14, 2016 11:09 am
This certainly looks like an interesting project, but as Jar says, in its current form it is more applicable to the host than to the Epiphany. The interpreter needs 256KB, with a minimum of 16KB of RAM on top of that, whereas each Epiphany core has only 32KB of local memory, so this implementation currently goes quite a way beyond the on-core limits. You could put the interpreter in shared memory and use the on-core RAM just for the heap and stack, but that would likely carry a significant performance penalty. The Epiphany is also far more an accelerator than a CPU, in the sense that it has no direct IO, so that is something which would need to be addressed on the host side. All of this could be done, but it moves quite a way from where that implementation currently is, which is no surprise as it is currently targeted more at embedded systems than at many-core. With ePython we do all the lexing, parsing, byte-code generation etc on the host and keep the minimum running on the Epiphany (the interpreter and runtime take up 24KB of on-core memory, with the rest reserved for byte code, heap, stack etc, all of which can transparently overflow onto shared memory.)
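To make the on-core budget concrete, here is a quick back-of-the-envelope check using the figures above (the remainders are my arithmetic, not an official memory map):

```python
# Epiphany on-core memory budget (figures taken from the discussion above).
CORE_LOCAL_MEM_KB = 32      # local SRAM per Epiphany core
EPYTHON_RUNTIME_KB = 24     # ePython interpreter + runtime footprint on-core
PROPOSED_INTERP_KB = 256    # code size of the interpreter being discussed

# Space ePython leaves on-core for byte code, heap and stack
remaining_kb = CORE_LOCAL_MEM_KB - EPYTHON_RUNTIME_KB
print(f"ePython leaves {remaining_kb}KB on-core for byte code/heap/stack")

# The proposed interpreter alone exceeds a core's entire local memory
overshoot_kb = PROPOSED_INTERP_KB - CORE_LOCAL_MEM_KB
print(f"The 256KB interpreter overshoots a core's 32KB by {overshoot_kb}KB")
```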
Where these things become more interesting is running in hybrid mode, where someone can run full Python code in an existing interpreter (maybe CPython) on the host and offload certain Python kernels to the Epiphany cores. ePython currently supports this, with ePython taking care of the Epiphany side and a Python module imported in the host code allowing the two to communicate. Currently it is designed for education/prototyping, and certainly it won't be as fast as C, but I think additional optimisations on the ePython side (such as JIT compiling) could make it more compelling for application usage too.
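Very roughly, the hybrid model described above looks something like the sketch below. The module and decorator names here are stand-ins I've made up for illustration (check the ePython documentation for the real host-side API), so the mock decorator simply runs the kernel on the host to keep the example self-contained:

```python
import functools

# Stand-in for the ePython host module: in real hybrid mode the decorated
# kernel would be shipped to the Epiphany cores and results copied back via
# shared memory; here it just runs locally. The name `offload` is
# illustrative only, not ePython's actual API.
def offload(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Real offload would serialise the kernel, launch it on the
        # Epiphany and collect the result from the device.
        return func(*args, **kwargs)
    return wrapper

@offload
def dot(a, b):
    # A simple numeric kernel of the kind one might push to a core
    return sum(x * y for x, y in zip(a, b))

# The surrounding host code stays ordinary Python (e.g. under CPython)
result = dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(result)  # 32.0
```

The appeal of this split is that only the marked kernels need to fit ePython's on-core constraints, while the rest of the application keeps the full standard interpreter and its libraries.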
Nick