DbgOutRT


July 7 2016

Real-Time Logging

I would like to be able to log from within real-time interrupt handlers. My first attempt at logging RT events worked, until it locked up the system after about 20 seconds. This was enough to prove that the code worked, but I had a deadlock problem. I cannot stop the RT interrupts from occurring, even with spin_lock_irqsave(). The deadlock occurred when an NRT thread had acquired the lock and was writing to the FIFO, and an RT interrupt fired on the same CPU and tried to acquire the same lock.

A mutex will reschedule, which is not allowed in an interrupt handler; a spinlock will only delay.
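The failure mode looks something like this. A minimal sketch with illustrative names (FifoWrite() and the handler names are not the real logger code):

    static spinlock_t FifoLock;

    void NrtLogWrite(BYTE *pData, UINT ByteCt) {  //NRT thread context
        unsigned long Flags;
        //Masks Linux interrupts, but NOT the RT interrupts.
        spin_lock_irqsave(&FifoLock,Flags);
        FifoWrite(pData,ByteCt); //<-- RT interrupt fires here, on this CPU...
        spin_unlock_irqrestore(&FifoLock,Flags);
    }

    void RtIrqHandler(void) {                     //RT interrupt, same CPU
        //...and spins forever: the NRT owner can never run again
        //on this CPU to release the lock. Deadlock.
        spin_lock(&FifoLock);
        //...
        spin_unlock(&FifoLock);
    }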

The RT thread needs to write to its own FIFO that is dedicated to RT data and is large enough to never overflow. The normal FIFO is fairly complex, with multiple blocks. The RT FIFO will be a single, simplified buffer. The NRT threads will transfer any data in the RT buffer into the normal FIFO before reading or writing their own data.

I need to have separate spinlocks for the normal (NRT) FIFO and the RT FIFO. The RT writers will wait on only the RT lock, which will never be held by the NRT threads. The NRT threads will wait on only the NRT lock. This works because the RT thread will only modify the RT FIFO write pointer, and the NRT thread will only modify the RT FIFO read pointer.

The good news is that none of this code is specific to Xenomai; it is just basic spinlocks.

This will also work on single-core systems (such as BLIS). In this case the spinlock is just a nop (need to verify this) and I rely on having only two contexts: a single NRT thread that may be interrupted by a single RT thread; the RT thread itself can never be interrupted.

There may be a problem with my hypothesis. If a spinlock on a single-core system devolves to a nop, then the hang I observed was not caused by a spinlock deadlock. It may have been a buffer overrun that shotgunned kernel memory, with particularly immediate effect. Using a separate RT FIFO should fix that problem as well.


RtBufWrite:

    static BYTE *pBuf;        //Allocated RT FIFO
    static BYTE *pLinearBuf;  //Buffer used to unwrap pBuf[], must be BufSz bytes.
    static UINT BufSz;        //Allocated size (bytes) of pBuf[]
    static UINT RdPtr,WrPtr;  //Read, write ptrs into pBuf[]
                              //WrPtr==RdPtr means the buffer is EMPTY.
    static spinlock_t RtLock; //RT lock, controls WRITE access.
    static spinlock_t NrtLock;//NRT lock, controls READ access.

    //RT thread write.
    //The RtLock blocks all other RT threads until I release it.
    //However, an NRT thread on another CPU may modify RdPtr at any time.
    //(spinlock_acquire()/spinlock_release() stand in for the platform's
    //spinlock primitives.)
    void RtBufWrite(BYTE *pSrc, UINT ByteCt) {
        spinlock_acquire(&RtLock);
        //Take a snapshot of RdPtr, it may change at any time.
        //It is OK if it is updated before I am done, it just
        //means I had more space available than I originally calculated.
        UINT _RdPtr= RdPtr;
        //Calculate available space first. Note the strict '<': when
        //WrPtr==_RdPtr the buffer is empty and the free space wraps
        //around the end of the buffer.
        UINT AvailCt= (WrPtr < _RdPtr) ? (_RdPtr-WrPtr) : (BufSz-WrPtr)+_RdPtr;
        //Write all or nothing...
        if(ByteCt >= AvailCt) {
            //Not enough room, data is dropped.
        } else if(WrPtr < _RdPtr) {
            //Easiest case: Available space is a single chunk.
            memcpy(&pBuf[WrPtr],pSrc,ByteCt);
            WrPtr+= ByteCt;
        } else {
            //Available space is broken by end of buffer.
            //Write the first chunk to pBuf[WrPtr...BufSz]
            UINT ChunkSz= min(BufSz-WrPtr,ByteCt);
            memcpy(&pBuf[WrPtr],pSrc,ChunkSz);
            WrPtr+= ChunkSz;
            //Do I need to write a second chunk?
            if(ByteCt > ChunkSz) {
                //Second chunk will always be written to pBuf[0].
                memcpy(pBuf,&pSrc[ChunkSz],ByteCt-ChunkSz);
                WrPtr= ByteCt-ChunkSz;
            }
        }
        //Check for wrap-around.
        if(WrPtr>=BufSz)
            WrPtr= 0;
        spinlock_release(&RtLock);
    }

    //NRT thread transfer.
    //I need to lock out other NRT threads while I transfer the RT FIFO to
    //the Item FIFO.
    void RtBufTransfer(void) {
        spinlock_acquire(&NrtLock);
        //Take a snapshot of the RT WrPtr.
        //WrPtr may change at any time; if it does I will catch the new data next time.
        UINT _WrPtr= WrPtr;
        //Calculate the bytes of valid data in the RT FIFO.
        UINT DataCt= (RdPtr <= _WrPtr) ? (_WrPtr-RdPtr) : (BufSz-RdPtr)+_WrPtr;
        if(DataCt>0) {
            if(RdPtr<_WrPtr || _WrPtr==0) {
                //Simple case, data is not split by end of buffer.
                BufWrite(&pBuf[RdPtr],DataCt);
            } else {
                //Data is split by the end of the buffer, but it needs
                //to be written to the item FIFO in a single operation.
                //I need to unwrap the data into a linear buffer first.
                UINT ChunkSz= BufSz-RdPtr;
                memcpy(&pLinearBuf[0],&pBuf[RdPtr],ChunkSz);
                memcpy(&pLinearBuf[ChunkSz],&pBuf[0],DataCt-ChunkSz);
                BufWrite(pLinearBuf,DataCt);
            }
            //I always empty the RT FIFO.
            RdPtr= _WrPtr;
        }
        spinlock_release(&NrtLock);
    }
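The buffers and locks also need a one-time setup before the first RT write. A minimal sketch, assuming Linux kernel context (RtBufInit() is a hypothetical name; the real code may allocate differently):

    //Hypothetical one-time init, assuming kernel context
    //(<linux/slab.h> for kmalloc()).
    int RtBufInit(UINT Bytes) {
        BufSz= Bytes;
        pBuf= kmalloc(BufSz,GFP_KERNEL);
        pLinearBuf= kmalloc(BufSz,GFP_KERNEL); //Must match BufSz.
        if(!pBuf || !pLinearBuf)
            return(-ENOMEM);
        RdPtr= WrPtr= 0; //Empty: WrPtr==RdPtr.
        spin_lock_init(&RtLock);
        spin_lock_init(&NrtLock);
        return(0);
    }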

This implies a separate API for writing RT items, so that I know to write them into the RT FIFO. It will be a very limited API: DbgOut(), DbgTraceEnter(), and DbgTraceOut(). The top-level API needs both RT and NRT variations, which call a common middle layer with an RT flag; the middle layer then performs the low-level item writes through either RtItemWrite() or ItemWrite(), based on the RT flag.

RT/NRT API:

    //NRT API
    int DbgOut(HCLNT hClnt, UINT *pStack)       { return(DbgOut2(0,hClnt,pStack)); }
    int DbgTraceEnter(HCLNT hClnt, UINT Chnl)   { return(DbgTrace2(0,hClnt,Chnl,0)); }
    int DbgTraceOut(HCLNT hClnt, UINT Chnl)     { return(DbgTrace2(0,hClnt,Chnl,1)); }

    //RT API
    int DbgOutRT(HCLNT hClnt, UINT *pStack)     { return(DbgOut2(1,hClnt,pStack)); }
    int DbgTraceEnterRT(HCLNT hClnt, UINT Chnl) { return(DbgTrace2(1,hClnt,Chnl,0)); }
    int DbgTraceOutRT(HCLNT hClnt, UINT Chnl)   { return(DbgTrace2(1,hClnt,Chnl,1)); }

    //Middle layer
    int DbgOut2(UINT IsRT, HCLNT hClnt, UINT *pStack) {
        //Create the item packet, format the text...
        if(IsRT) {
            RtItemWrite(pItem,ByteCt);
        } else {
            ItemWrite(pItem,ByteCt);
        }
    }

    int DbgTrace2(UINT IsRT, HCLNT hClnt, UINT Chnl, UINT InOut) {
        //Create the item packet...
        if(IsRT) {
            RtItemWrite(pItem,ByteCt);
        } else {
            ItemWrite(pItem,ByteCt);
        }
    }
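As a usage sketch (the handler, thread, and channel names here are hypothetical, not part of the real API):

    void RtTimerIrq(HCLNT hClnt) {      //RT interrupt context
        DbgTraceEnterRT(hClnt,CHNL_IRQ);
        //...real-time work...
        DbgTraceOutRT(hClnt,CHNL_IRQ);
    }

    void NrtWorker(HCLNT hClnt, UINT *pStack) { //Normal thread context
        //Drain pending RT data into the item FIFO before logging
        //through the normal path.
        RtBufTransfer();
        DbgOut(hClnt,pStack);
    }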

Implementation

Of course, the actual implementation was not nearly so clean and easy.

The API to write items requires the items to be submitted as one complete item at a time, not just as blocks of bytes. This requires two stages to transfer the RT items into the item FIFO: first unwrap the data into a linear buffer, then submit each item individually.
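The second stage amounts to walking the unwrapped buffer item by item inside the transfer path. A sketch, assuming each item begins with an ITEMHDR whose ByteCt field gives the total item size (the field name is illustrative):

    UINT Offset= 0;
    while(Offset < DataCt) {
        ITEMHDR *pItem= (ITEMHDR*)&pLinearBuf[Offset];
        ItemWrite(pItem,pItem->ByteCt); //Submit one complete item.
        Offset+= pItem->ByteCt;
    }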

The trace events are not submitted to the Item FIFO individually; they are first accumulated into a set of trace blocks, and then each block of many events is submitted as a single item. This means I need to handle RT trace and RT printf logging with completely separate mechanisms.

=====

Rather than create two completely different versions of the RT FIFO, one for printf and another for trace, I could create an ITEMHDR for each RT trace event in the "standard" RT FIFO, then route the items in RtFifoRead() to either the item FIFO or TraceCmd(). This would not be the most efficient approach, but it would be expedient.

I eventually found that using the RtFifo as a generic command-deferment queue worked best. I simply write the Cmd into the RtFifo from the real-time context and execute it later, before the next NRT command. The only exception is the PRINTF command, which needs to be formatted immediately and written into the RtFifo as a completed Item.
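In outline, the NRT-side drain looks something like this. A sketch only: RtFifoRead() is treated here as a pop that returns the next deferred command, and the Cmd values (other than PRINTF) and the DbgCmdHdr_s field names are illustrative:

    void RtFifoDrain(void) {
        DbgCmdHdr_s *pCmd;
        while((pCmd= RtFifoRead()) != NULL) {
            switch(pCmd->Cmd) {
            case CMD_PRINTF:
                //Already formatted in RT context as a completed Item;
                //just forward it to the item FIFO.
                ItemWrite(pCmd,pCmd->ByteCt);
                break;
            case CMD_TRACE:
                //Execute the deferred command now, in NRT context.
                TraceCmd(pCmd);
                break;
            }
        }
    }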

I had to add a new Flags field to the DbgCmdHdr_s struct, which forced bumping the DBGCMD_API version. This means the current server is no longer compatible with older clients unless I write some backward-compatibility code into Client.cpp. But since there are no older clients in use, this is a non-issue.


