I wrote some benchmark code that uses _vsnprintf_r() to print the statistical results of my memory test on the Epiphany III. The benchmarks run reliably; however, when I print the results, _vsnprintf_r() appears to hang on roughly every thousandth invocation, but only when printing floating-point values. Integer values print fine. The calls are made from core memory, although _vsnprintf_r() itself resides in external memory via a custom link script based on the standard fast script. The only modification to fast.ldf is to also load some of my framework code into external memory alongside the library. The application code exercises the mesh pretty extensively, as it benchmarks core and external memory on each core individually and on all cores simultaneously.
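For context, the fast.ldf change amounts to something like the fragment below. This is an illustrative sketch in GNU ld syntax, not the exact stock script; the section name and the framework wildcard are placeholders, and the real script already routes the C library (and thus _vsnprintf_r) to external DRAM.

```ld
/* Illustrative fragment only -- names are placeholders, not the exact
 * stock fast.ldf.  The stock script already places the C library in
 * external DRAM; the added line pulls my framework objects in with it. */
SECTIONS
{
    .text_external : {
        *libc.a:(.text*)        /* newlib, as in the stock script */
        *framework*.o(.text*)   /* added: my framework code       */
    } > EXTERNAL_DRAM_0
}
```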
Obviously, my first thought was that there was a bug in my own code--and there still may be. Once I isolated the hang to _vsnprintf_r(), I began to experiment. When I remove the offending call that takes a double and replace it with two calls taking ints (one for each part of the double, extracted with floor(), round(), etc.), or simply don't print the results, the code runs for hours, i.e., hundreds of thousands to millions of invocations of the library function.

I spent a fair amount of time ensuring that enough stack space was available, moving code to external memory, and so on, since I presumed _vsnprintf_r() necessarily makes calls into a software double-precision library. (My recollection is that the C standard's default argument promotions require a float to be promoted to double when passed to a variadic function such as _vsnprintf_r(), but the E3 hardware only supports single-precision float.) Output buffer space is more than sufficient, and this function should never overwrite the buffers anyway. There shouldn't be any accumulating memory leaks, as the code is reloaded after approximately ten or so results are printed.

Each invocation runs a different test selected by a command-line parameter. There are twenty-five different tests run in about three loops, i.e., 75 tests per minute, 4.5K tests per hour, and about 30K results/invocations (of _vsnprintf_r()) per hour. I have trapped the relevant interrupts, E_SW_EXCEPTION and E_MEM_FAULT, and nothing is being generated. Nested bash scripts let the system run for hours at a time, and tmux lets me reconnect via ssh to monitor progress. Temperature is 51.5 degrees Celsius as measured by ztemp.sh. My other applications don't generate floating-point results, and all seem to run fine.
I generally don't believe in statistical debugging or crowd-sourcing; however, I'm curious whether anyone else has seen anything similar.