A Post-Silicon Trace Analysis Approach for System-on-Chip Protocol Debug
Abstract
Reconstructing system-level behavior from silicon traces is a critical problem in post-silicon validation of System-on-Chip designs. Current industrial practice in this area is primarily manual, relying on the collaborative insights of architects, designers, and validators. This paper presents a trace analysis approach that exploits architectural models of the system-level protocols to reconstruct design behavior from partially observed silicon traces in the presence of ambiguous and noisy data. The output of the approach is the set of all potential interpretations of a system's internal executions abstracted to the system-level protocols. To support the trace analysis approach, a companion trace signal selection framework guided by system-level protocols is also presented, and its impact on the complexity and accuracy of the analysis approach is discussed. The approach and the framework have been evaluated on a multi-core system-on-chip prototype that implements a set of common industrial system-level protocols.
1 Introduction
Post-silicon validation makes use of pre-production silicon integrated circuits (ICs) to ensure that the fabricated system works as desired under actual operating conditions with real software. It is a critical component of the design validation life-cycle for modern microprocessors and system-on-chip (SoC) designs. Unfortunately, it is also highly complex, performed under aggressive schedules, and accounts for a substantial portion of the overall design validation cost [12].
An SoC design is often composed of a large number of pre-designed hardware or software blocks (often referred to as "intellectual properties" or "IPs") that coordinate through complex protocols to implement system-level behavior [6]. An execution trace of a system typically involves activities from the CPU, audio controller, display controller, wireless radio antenna, etc., reflecting the interleaved execution of a potentially large number of communication protocols. As SoCs integrate more IPs, the interactions among the IPs become increasingly complex. Moreover, modern interconnects are highly concurrent, allowing multiple transactions to be processed simultaneously for scalability and performance, and they are an important source of design errors. At the same time, observability limitations allow only a small number of participating signals to be traced during silicon execution, and electrical perturbations cause silicon data to be noisy, lossy, and ambiguous. It is therefore non-trivial during post-silicon debug to identify all participating protocols and pinpoint the interleavings that result in an observed trace.
Previous work [15] proposed a method for correlating silicon traces with system-level protocol specifications. The idea was to reconstruct protocol execution scenarios from a partially observed silicon trace; these scenarios provide abstract views of system internal executions to facilitate post-silicon SoC debug. While that work showed promising results, it has a number of deficiencies precluding its applicability in practice. First, there was no way to qualify or rank the quality of the protocol execution scenarios generated by the reconstruction procedure. Under poor observability conditions, the algorithm could generate hundreds or thousands of potential protocol execution scenarios consistent with a partially observed trace. Without a metric to rank the quality of these reconstructions, the debugger is faced with the unenviable task of wading through these potential scenarios to infer what may actually have happened in a specific silicon execution. Moreover, based on past experience, interleavings of different protocol executions are a major source of functional bugs. Since the method developed in [15] does not capture orderings among different protocol executions, the results obtained with that method offer little help for bug localization and root-causing.
This paper addresses the above deficiencies by introducing an optimized trace analysis approach. Central to this optimized approach is a new formulation of protocol execution scenarios that comprehends ordering relations among protocol executions. Quantitative metrics are also developed so that the quality of the results derived by the analysis approach, and its efficiency, can be measured. Trace signal selections can have a great impact on the complexity and accuracy of the trace analysis; therefore, a companion trace signal selection framework is proposed. This framework is communication-centric and guided by system-level protocols. Its objective is to enable the trace analysis to produce high-quality interpretations of observed silicon traces efficiently. Various trace signal selection strategies are evaluated and analyzed based on their impacts on the trace analysis approach applied to a non-trivial multi-core SoC model that implements a number of common industrial system-level protocols.
2 Flow Specification
[Figure 1: The multi-core SoC model used for illustration and experiments in this paper]
An SoC model as shown in Figure 1 is used to illustrate and evaluate the work described in this paper. It consists of two CPUs (CPU_X), each with a private data cache (Cache_X), a graphics engine (GFX), a power management unit (PMU), a system memory, and three peripheral blocks: an audio control unit (Audio), a UART controller (UART), and a USB controller (USB). All these blocks are connected through an interconnect fabric (Bus).
System operations are realized by executions performed in various blocks that are coordinated by system-level protocols. These protocols are typically specified in architecture documents as message flow diagrams; in this paper the words "protocol" and "flow" are used interchangeably. As in [15], system flows are formalized using Labeled Petri-nets (LPNs). Figure 2 shows a memory write protocol initiated from a CPU CPU_X in LPN form, where $X \in \{0,1\}$ and $X' = 1 - X$. An LPN is a tuple $(P, T, E, L)$ where $P$ is a finite set of places, $T$ is a finite set of transitions, $E$ is a finite set of events, and $L: T \rightarrow E$ is a labeling function that maps each transition $t \in T$ to an event $e \in E$. For each transition $t$, its preset, denoted as $\bullet t$, is the set of places connected to $t$, and its postset, denoted as $t\bullet$, is the set of places that $t$ is connected to. A state of an LPN is a subset of places marked with tokens. There are two special states associated with each LPN: $\mathit{init}$, the set of initially marked places, also referred to as the initial state, and the end state $\mathit{end}$, the set of places not going to any transitions.
A transition $t$ can be executed in a state $s$ if $\bullet t \subseteq s$. Executing $t$ causes the labeled event $L(t)$ to be emitted, and leads to a new state $s' = (s - \bullet t) \cup t\bullet$. Therefore, executing an LPN leads to a sequence of events. Execution of an LPN completes if its end state is reached.
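As an illustration of these execution semantics, the following minimal Python sketch (with made-up place, transition, and event names, not taken from Figure 2) fires transitions according to the preset/postset rule described above.

```python
# Minimal sketch of LPN execution semantics (hypothetical names): a transition
# fires when its preset is marked, emits its labeled event, and moves tokens
# from the preset to the postset.

class LPN:
    def __init__(self, transitions, labels, init, end):
        self.transitions = transitions   # name -> (preset, postset), both sets of places
        self.labels = labels             # name -> emitted flow event
        self.init = frozenset(init)      # initially marked places
        self.end = frozenset(end)        # places not feeding any transition

    def enabled(self, state, t):
        preset, _ = self.transitions[t]
        return preset <= state           # t can fire if its entire preset is marked

    def fire(self, state, t):
        preset, postset = self.transitions[t]
        assert self.enabled(state, t)
        return (state - preset) | postset, self.labels[t]

# A toy two-transition flow: p0 --t1--> p1 --t2--> p2
lpn = LPN(transitions={"t1": ({"p0"}, {"p1"}), "t2": ({"p1"}, {"p2"})},
          labels={"t1": "wr_req", "t2": "wr_resp"},
          init={"p0"}, end={"p2"})

state = lpn.init
for t in ("t1", "t2"):
    state, event = lpn.fire(state, t)
    print(event, sorted(state))
print("flow completed:", state == lpn.end)
```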
For example, in Figure 2, the first transition of the flow can be executed in the initial state. Its labeled event is emitted after it is executed, and the LPN moves to the corresponding successor state; execution continues in this manner until the end state is reached.
A flow specification may also contain multiple branches describing the different ways a system can execute such a flow. For example, the flow shown in Figure 2 has three branches covering the cases where the cache (snoop) operation hits or misses.
3 Post-silicon Trace Analysis
3.1 Previous Work
This section recaps the previous approach in [15]. The objective of the trace analysis is to reconstruct design internal behavior wrt given system-level flow specifications from a silicon trace partially observed on a small number of hardware signals. The off-chip analysis includes two broad phases: (1) trace abstraction, which maps a silicon trace into a sequence of flow events, i.e., higher-level architectural constructs such as messages and operations, and (2) trace interpretation, which infers possible flow execution scenarios that are compliant with the abstracted event sequence.
To illustrate the basic idea, consider the system flow in Figure 2. Suppose that the following flow execution trace is abstracted from a silicon trace observed while executing a design that implements this flow.
Here the flow events are referred to by their transition names in the LPN. The first four events result in the following flow execution scenario
(1)
A flow execution scenario is defined as a set of flow instances and their respective current states after some events are processed [15]. It can be viewed as an abstraction of system states wrt system flows. The above execution scenario indicates that the sequence of the first four events results from executing those two flow instances from their initial states to the shown states. The first event may result from executing either instance, but exactly which one is unknown due to limited observability. Both possible cases are considered, and the two execution scenarios below are derived as a result of interpreting that event.
(2)
After handling the next event, the above two execution scenarios are reduced to the one shown below.
After the remaining six events are handled, the following execution scenario is derived.
As another example, now suppose that a buggy version of the design generates the flow trace below.
This sequence is almost the same as the previous one except for the last event, which is instead an event used in a different flow specification describing a CPU memory read protocol. Analyzing the trace up to the point right before that event leads to the execution scenarios below.
(3)
However, that event cannot be the result of executing either flow instance in either scenario, which indicates a noncompliance of the design implementation with respect to the given flow specification. Such an event is referred to as inconsistent. In this case, the algorithm halts and returns the inconsistent event together with the derived flow execution scenarios shown in (3) for the debugger to examine further.
3.2 Flow Execution Scenarios
The trace analysis approach in [15] does not capture orderings among flow instances in execution scenarios. However, from a debugger's point of view, communication protocols can be related. For example, a firmware loading protocol always happens before a firmware execution protocol. If a firmware execution protocol is found to happen before a firmware loading protocol, that possibly indicates an error in the system implementing those protocols. Such properties cannot be checked by the previous approach.
To address that problem, this paper presents a new definition of a flow execution scenario as a set of annotated flow instances
$\{(\mathit{flow}_1[s_1], \mathit{idx}_1^{s}, \mathit{idx}_1^{e}), \ldots, (\mathit{flow}_n[s_n], \mathit{idx}_n^{s}, \mathit{idx}_n^{e})\}$
where $\mathit{idx}_i^{s}$ and $\mathit{idx}_i^{e}$ are two indices representing the relative times when $\mathit{flow}_i$ is initiated and completed. The ordering relations among flow instances can be derived by comparing their start and end indices. For example, for two flow instances $\mathit{flow}_i$ and $\mathit{flow}_j$ in an execution scenario, $\mathit{flow}_i$ is initiated before $\mathit{flow}_j$ if $\mathit{idx}_i^{s} < \mathit{idx}_j^{s}$, and $\mathit{flow}_i$ is initiated after $\mathit{flow}_j$ is completed if $\mathit{idx}_i^{s} > \mathit{idx}_j^{e}$. These ordering relations provide more accurate information for understanding system execution under limited observability. Section 3.3 explains how $\mathit{idx}_i^{s}$ and $\mathit{idx}_i^{e}$ are decided during the trace analysis.
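As a data-structure sketch of this formulation (the field names are ours, not the paper's notation), each flow instance in a scenario carries its current LPN state together with its start and end indices, from which the ordering relations can be derived:

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional

@dataclass
class FlowInstance:
    spec: str                         # which flow specification this instance follows
    state: FrozenSet[str]             # current LPN state (set of marked places)
    idx_start: Optional[int] = None   # trace index at which the instance was initiated
    idx_end: Optional[int] = None     # trace index at which the instance completed

def initiated_before(a: FlowInstance, b: FlowInstance) -> bool:
    """a was initiated before b."""
    return a.idx_start is not None and b.idx_start is not None and a.idx_start < b.idx_start

def initiated_after_completion_of(a: FlowInstance, b: FlowInstance) -> bool:
    """a was initiated only after b had already completed."""
    return a.idx_start is not None and b.idx_end is not None and a.idx_start > b.idx_end

# An execution scenario is then simply a collection of such annotated instances.
```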
In order to support the new definition of flow execution scenarios, the trace abstraction, which maps an observed silicon trace to a linear sequence of flow events as in [15], is also generalized. An SoC design can be viewed as a group of IP blocks networked by an on-chip interconnect fabric. These blocks communicate with each other through communication links, each of which implements a protocol, such as ARM AXI, over a set of wires. The approach presented in this paper is communication-centric in that it works on silicon traces over a selected number of wires of a selected number of communication links. Suppose that there are $n$ communication links, and some wires from each link are selected for observation. A silicon trace is assumed to be a sequence $(\sigma_1, \sigma_2, \ldots)$ such that each $\sigma_i$ is a vector defined as
$\sigma_i = (s_{i,1}, s_{i,2}, \ldots, s_{i,n})$
where $s_{i,j}$ is a state on link $j$ in step $i$.

If all wires of a link are observable, then a state on that link can be uniquely mapped to a flow event of the same link. Under limited observability, a state on a link is typically mapped to a set of flow events. Therefore, a silicon trace is abstracted to a sequence $(E_1, E_2, \ldots)$ where

$E_i = (e_{i,1}, e_{i,2}, \ldots, e_{i,n})$   (4)

is a vector of sets of flow events abstracted from $\sigma_i$, and each $e_{i,j}$ in $E_i$ is the set of flow events abstracted from state $s_{i,j}$ in $\sigma_i$. No temporal orderings exist among the events in $E_i$. On the other hand, for two events $e \in E_i$ and $e' \in E_j$ such that $i < j$, $e$ happens before $e'$.
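The abstraction step can be sketched as follows. The encodings and bit positions below are hypothetical; in practice the mapping comes from each link's protocol encoding. A partially observed link state maps to every flow event whose full encoding is consistent with the observed bits.

```python
# Sketch of trace abstraction under partial observability (hypothetical encodings).
EVENT_ENCODINGS = {                      # link -> {flow event: full encoding}
    "cpu0_cache0": {"rd_req": "0001", "wr_req": "0010", "wr_resp": "0110"},
}
TRACED_BITS = {"cpu0_cache0": [0, 1]}    # only two of the four wires are observed

def abstract_link_state(link: str, observed: str) -> set:
    """Map a partially observed link state to the set of candidate flow events."""
    positions = TRACED_BITS[link]
    return {ev for ev, code in EVENT_ENCODINGS[link].items()
            if all(code[p] == bit for p, bit in zip(positions, observed))}

print(abstract_link_state("cpu0_cache0", "00"))   # {'rd_req', 'wr_req'}: ambiguous
print(abstract_link_state("cpu0_cache0", "01"))   # {'wr_resp'}: unique
```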
Based on different levels of information captured, this paper classifies flow execution scenarios as follows.
- Type-1 execution scenarios capture the number of instances of each flow specification initiated from a silicon trace, and their relative orderings of initiations.
- Type-2 execution scenarios, on top of what is captured by Type-1 scenarios, capture the completion of each flow instance. This additional information can be used to identify potential problems if there is any flow instance that is not completed. Furthermore, Type-2 execution scenarios capture the relative orderings among all flow instances as described above.
- Type-3 execution scenarios, on top of what is captured by Type-2 scenarios, capture information on the execution paths followed by individual flow instances. This information provides a means for debuggers to examine in detail how each flow instance is executed.
These different execution scenarios can be used to provide different views of system execution, from coarse-grained to more detailed ones, at different stages of debug.
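As a sketch (with illustrative field names, not the paper's notation), the three types can be viewed as progressively richer per-instance records:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Type1Instance:
    spec: str                          # which flow specification was initiated
    idx_start: int                     # relative order of initiation

@dataclass
class Type2Instance(Type1Instance):
    idx_end: Optional[int] = None      # None = never completed, a potential problem

@dataclass
class Type3Instance(Type2Instance):
    path: List[str] = field(default_factory=list)   # execution path (transitions) followed

# A Type-k execution scenario is a collection of Type-k instances.
```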
3.3 Algorithms
Algorithm 1 shows the top-level procedure that detects internal flow executions based on a partially observed silicon trace and checks their compliance wrt the given flow specifications. It takes as inputs a set of system-level flow specifications and a signal trace, which is assumed to be a sequence of states on a set of observable trace signals, with each state uniquely indexed.
This algorithm scans the trace starting from the first index, extracts all possible flow events at the current index as described in Section 3.2 (line 6), and maps each of those extracted flow events to update the already detected execution scenarios (line 11). The algorithm terminates if one of two conditions holds. If an inconsistency is encountered, the set of detected partial execution scenarios is returned along with two indices: the trace index and the link index (line 16). The trace index provides temporal information on when the inconsistency occurs, while the link index provides spatial information on which communication link the inconsistent event is transmitted. If no inconsistency is found, the set of all execution scenarios compliant with the observed trace is returned (line 18) once the current index becomes larger than the length of the trace.
Algorithm 2 takes a flow specification, an execution scenario, a flow event, and the index of the trace step where that event is extracted, and it produces a set of execution scenarios consistent with the event. This algorithm performs two tasks. In the first task (lines 5-12), the algorithm checks every flow instance to decide if the event can be accepted. If such an instance is found (line 7), it is updated with the new state resulting from the event (line 9). Furthermore, if the event causes the flow instance to complete, its end index is set to the current trace index (lines 10-11), indicating the completion of that instance due to the event at that step of the trace. In the second task, all possibilities where the event can initiate a new flow instance are considered (lines 14-20). If a new instance can be initiated, its start index is set to the current trace index, indicating the initiation of that instance due to a signal event at that step of the trace.
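The two algorithms can be paraphrased by the Python sketch below. It is a simplification under stated assumptions: flows are represented by the LPN class sketched in Section 2, flow instances are plain dictionaries, the abstracted trace supplies, for each step and each link, a set of candidate (flow name, transition) pairs, and the line numbers in the paper's pseudocode do not correspond to lines here.

```python
from copy import deepcopy

def check_compliance(flow_specs, abstracted_trace):
    """flow_specs: {flow name: LPN}.
    abstracted_trace: list of steps; each step is a list (one entry per link)
    of sets of candidate (flow name, transition) pairs."""
    scenarios = [[]]                  # start from a single empty execution scenario
    peak = 1                          # complexity metric: peak scenario count
    for idx, step in enumerate(abstracted_trace, start=1):
        for link, candidates in enumerate(step):
            if not candidates:
                continue              # nothing observed on this link at this step
            new_scenarios = []
            for scen in scenarios:
                for ev in candidates:     # alternative interpretations of the link state
                    new_scenarios.extend(update_scenario(flow_specs, scen, ev, idx))
            if not new_scenarios:         # no interpretation fits: inconsistency found
                return "inconsistent", idx, link, scenarios
            scenarios = new_scenarios
            peak = max(peak, len(scenarios))
    return "compliant", scenarios, peak   # accuracy metric: len(scenarios)

def update_scenario(flow_specs, scenario, event, idx):
    flow_name, transition = event
    lpn = flow_specs[flow_name]
    results = []
    # Task 1: the event extends an already active instance of the same flow.
    for i, inst in enumerate(scenario):
        if (inst["spec"] == flow_name and inst["idx_end"] is None
                and lpn.enabled(inst["state"], transition)):
            new = deepcopy(scenario)
            new[i]["state"], _ = lpn.fire(inst["state"], transition)
            if new[i]["state"] == lpn.end:
                new[i]["idx_end"] = idx          # instance completed at trace step idx
            results.append(new)
    # Task 2: the event initiates a new instance of the flow.
    if lpn.enabled(lpn.init, transition):
        state, _ = lpn.fire(lpn.init, transition)
        results.append(scenario + [{"spec": flow_name, "state": state,
                                    "idx_start": idx, "idx_end": None}])
    return results
```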
3.4 On the Complexity and Accuracy
Due to the limited observability, reconstructing system-level executions from an observed silicon trace is an imprecise process. The large number of execution scenarios typically derived during the analysis takes a large amount of runtime and memory to process and store, making the analysis less efficient. This is referred to as the complexity problem of the trace analysis. After the analysis is done, a large number of derived execution scenarios makes it difficult to understand the analysis results, and is thus less helpful for debugging. Obviously, a single flow execution scenario derived at the end of the trace analysis provides much more precise information for debug than ten candidate flow execution scenarios. This is referred to as the accuracy problem of the trace analysis.
The contributing factors to the complexity and accuracy problems are explained below.
1. A signal event mapped to a set of flow events. Due to the limited observability, a signal event of an observed silicon trace is often interpreted as a number of different flow events, which typically leads to the derivation of a number of different execution scenarios. This situation is exacerbated by the fact that silicon traces are often very long, which could lead to excessively large numbers of possible execution scenarios derived during or at the end of the analysis.
2. A flow event mapped to different temporal flow instances. Temporal flow instances refer to the flow instances activated by the same component, e.g. read/write flows activated by CPU_0. If several temporal instances of some flows are activated by a component, mapping flow events to those flow instances can be ambiguous. For example, suppose that an execution scenario includes two instances of the flow shown in Figure 2, both activated by CPU_0. An instance of a flow event that both can accept can be mapped to either flow instance, leading to two new execution scenarios derived from the current one.
3. A flow event mapped to flow instances activated by different components. This situation can happen when flow instances that share some common events are activated by different components. For example, suppose an execution scenario has two instances of the flow shown in Figure 2, one activated by CPU_0 and the other one by CPU_1, and both are in the same state. A flow event can be mapped to either one of these two instances, leading to two new execution scenarios derived from the current one.
The above issues can be mitigated by good signal selections, discussed in the following section. In order to evaluate the impacts of different trace signal selections on the complexity and accuracy of the trace analysis, this paper introduces two quantitative metrics. The complexity is measured by the peak count of flow execution scenarios encountered during the analysis process, i.e., the largest number of scenarios maintained at any point during the execution of Algorithm 1. The accuracy is measured by the final count of flow execution scenarios derived at the end of the analysis process, i.e., the number of scenarios returned on either line 16 or 18 of Algorithm 1.
4 Trace Signal Selection
Trace signal selection is a critical step in post-silicon debug. It includes two different efforts: pre-silicon and post-silicon. During pre-silicon selection, a few thousand signals among a vast number of internal signals are tapped for observation. All necessary signals must be selected at this stage, otherwise, expensive re-design along with silicon re-spin are required. During post-silicon debug, a small subset of those tapped signals are routed to the chip interface for tracing during system execution.
Previous work such as [4] is typically applied to gate-level design models, and the quality of the results is evaluated by the commonly used state restoration ratio. However, it is difficult to scale those methods to large and complex SoC designs. More importantly, signals selected at the gate level are often irrelevant to system-level functionalities. There is an attempt to raise the abstraction level for trace signal selection to the register transfer level (RTL) guided by assertions [11]; however, that work does not consider system-level functionalities either. In [3], a system-level protocol guided approach is proposed. It is similar to our work in that both are based on system-level protocols. However, the selection techniques developed in [3] are simple and are not aimed at understanding silicon traces at the system level, and the evaluation was performed on an abstract transaction-level model.
This section introduces a framework shown in Figure 3 for trace signal selection guided by system-level protocols. Due to the page limit, this paper only considers the pre-silicon trace signal selection. Since the pre-silicon selection needs to support all types of execution scenarios, it is sufficient to consider only Type-3 scenarios as they supersede Type-1 or -2 scenarios.
[Figure 4: Two examples of branching structures for flows: (a) each branch ends with a unique event; (b) branches split and then join, ending with a common event]
4.1 System Level Selection
During the system level selection, different subsets of flow events for observation are selected from given flow specifications. Then, those results are passed to the more refined bit level selection.
To support Type-3 scenarios, the start and end events of all flow specifications must be selected. If a flow specification has multiple branches, additional events may need to be selected so that the branch followed by a flow instance during system execution can be captured. Figure 4 shows two examples of different branching structures for flows.
- In Figure 4(a), each branch ends with a unique event. There is no need to select additional events, as observing the different end events clearly identifies the branch followed during system execution.
- In Figure 4(b), branches split and then join, and the flow ends with a common event. In this case, a unique event needs to be selected for each branch.
The flow shown in Figure 2 has three branches with a structure similar to Figure 4(b), so its start and end events must be selected; note that the end events of the three branches actually refer to the same event. For the right branch there is no choice, as its single distinguishing event must be selected. To identify the two left branches from the right one, either of two candidate events needs to be selected, and similarly, one event from another set of candidates needs to be selected in order to identify the left branch from the middle one. Combining these choices yields all possible event selections for that flow, as sketched below.
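The enumeration of candidate event selections can be sketched as follows; the branch groupings and event names are hypothetical stand-ins for the flow in Figure 2. The start and end events are always selected, and the remaining selections are the Cartesian product of one distinguishing event per choice group.

```python
from itertools import product

start_end = {"mem_wr_req", "mem_wr_resp"}      # always selected (hypothetical names)
branch_choices = [                             # one distinguishing event per group
    {"snp_rd_req"},                            # right branch: a single forced choice
    {"cache_miss", "bus_rd_req"},              # two left branches vs. the right one
    {"snp_hit", "cache_update"},               # left branch vs. the middle one
]

selections = [start_end | set(choice) for choice in product(*branch_choices)]
for sel in selections:
    print(sorted(sel))
# 1 * 2 * 2 = 4 candidate event selections for this flow
```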
Among the possibly large number of event selections, there are two types of events that have interesting characteristics. One type includes events that are unique to specific flows (referred to as unique events), while the other includes events shared by multiple flows (referred to as shared events). This section considers their impacts on the complexity and accuracy of the trace analysis and on the signal selection. For the complexity and accuracy of the trace analysis, only issues #2 and #3 in Section 3.4 are considered in this section; issue #1 is relevant to the bit level selection.
In order to reduce the complexity and improve the accuracy, it is important to select events that can be mapped to the smallest number of flow instances during the trace analysis. Refer to the flow shown in Figure 2: several of its events are used only in that flow, so during the trace analysis they are mapped only to instances of that particular flow. Issue #2 is mitigated because instances of different flows initiated by the same component are ignored for those events, and issue #3 is mitigated because instances of any flows initiated by different components are ignored for them.
On the other hand, shared events are used in many different flows of different components, for example read/write flows of CPU_0 or CPU_1. During the trace analysis, if there are multiple instances of such flows, it is impossible to know which of those flow instances caused those events to be generated. Therefore, the analysis algorithm has to map those events to those flow instances in all possible ways, which can have a severe negative impact on the complexity and accuracy of the trace analysis.
In terms of trace signal selection, those two types of events can lead to different results. If unique events are selected, then the total number of selected events can be large, and as a result, a large number of trace signals need to be selected in order to observe those events. On the other hand, the total number of events can be smaller if shared events are selected, which leads to a smaller number of trace signals. The negative impacts of selecting shared events can also be mitigated if certain implementation details are available; the next section discusses that point further.
4.2 Bit Level Selection
The bit level selection takes as inputs the set of event selections produced in the previous step and an RTL model that implements the system flow specifications, and performs two tasks for each event selection:
1. Evaluate its quality wrt the three issues discussed in Section 3.4;
2. Choose one selection, and generate a set of candidate trace signals that implement the selected events.
The ultimate goal of the bit level selection is to produce a reduced set of candidate trace signals optimized for the trace analysis approach. Since the bit level selection depends on implementation specifics, this section can only discuss some general guidelines and tradeoffs. Note that flow specifications are typically independent of memory address and data information. Therefore, the address and data bits included in event implementations can generally be ignored.
Signals that implement the Cmd field of flow events are selected based on their respective distinguishing power. Given a set of flow events $E$ and a set of signals $W$ that implement $E$, the distinguishing power of a subset $w \subseteq W$ is defined by how finely $E$ can be partitioned wrt the values of $w$. A finer partition means higher distinguishing power. For example, suppose two flow events on a link are implemented by eight signals whose encodings differ in only two bit positions.
Under these encodings, the signals whose values are identical in both encodings have zero distinguishing power, while the two differing signals have equal power, so selecting either one of them would be fine. Selecting signals with high distinguishing power helps to address issue #1 as discussed in Section 3.4.
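A simple way to compute distinguishing power is to look at the partition of the event set induced by the values of a candidate signal subset, as in the sketch below (the two 8-bit encodings are hypothetical):

```python
from collections import defaultdict

def partition(encodings, bit_positions):
    """Group events by the values of the selected bit positions."""
    groups = defaultdict(set)
    for ev, code in encodings.items():
        groups[tuple(code[b] for b in bit_positions)].add(ev)
    return list(groups.values())

def distinguishing_power(encodings, bit_positions):
    # 0 means the selected bits cannot tell any of the events apart.
    return len(partition(encodings, bit_positions)) - 1

# Two flow events on a link implemented by eight Cmd signals (hypothetical encodings):
encodings = {"rd_req": "00000001", "wr_req": "00000111"}
for bits in ([0], [7], [5], [6]):
    print(bits, distinguishing_power(encodings, bits))
# Bits 0-4 and 7 agree in both encodings (power 0); bits 5 and 6 each separate
# the two events (power 1), so selecting either one of them is sufficient.
```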
RTL models may contain additional implementation information that can help to address issues #2 and #3. For example, memory operations may be executed out of order. In this case, CPUs usually assign unique sequence IDs to flow instances to maintain the data and control dependencies of the original programs. If sequence IDs are available, selecting signals implementing them can help address issue #2.
If the on-chip interconnect needs to handle events from different components in a system, the events are usually assigned tags to identify their originating components. Selecting tags can affect how events are selected. Refer to Figure 2 for the following discussion.
1. If unique events are selected, observing tags is not needed.
2. If shared events are selected, tags are selected along with them.
For option 2, tags can help map events to the flow instances with the same tags during the trace analysis, thus addressing issue #3 (see the sketch below). Even though additional signals for tags are selected, the total number of events may be smaller if the shared events are used in many different flows, resulting in fewer signals for observation overall.
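As a sketch of why observing Tag (and similarly Sid) helps, the mapping step can filter the candidate flow instances by the observed tag, so that a shared event is mapped only to instances initiated by the matching component. The instance layout below is illustrative, not the paper's data structure.

```python
def candidate_instances(scenario, event, observed_tag=None, observed_sid=None):
    """Return the flow instances in a scenario that a shared event may belong to."""
    cands = []
    for inst in scenario:
        if event not in inst["acceptable_events"]:
            continue                                    # instance cannot accept this event
        if observed_tag is not None and inst["tag"] != observed_tag:
            continue                                    # issue #3: other originating component
        if observed_sid is not None and inst["sid"] != observed_sid:
            continue                                    # issue #2: other instance of same component
        cands.append(inst)
    return cands

scenario = [
    {"tag": "CPU_0", "sid": 7, "acceptable_events": {"mem_wr_req"}},
    {"tag": "CPU_1", "sid": 3, "acceptable_events": {"mem_wr_req"}},
]
print(len(candidate_instances(scenario, "mem_wr_req")))                        # 2: ambiguous
print(len(candidate_instances(scenario, "mem_wr_req", observed_tag="CPU_0")))  # 1: resolved
```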
The following discussion illustrates yet another example of how implementation information can allow different events to be selected. Refer to Figure 2: that flow contains two branching places. When a flow instance reaches the first one, which branch to take next depends on whether the cache operation is a hit or a miss; similarly, which branch to take at the second one depends on whether the cache snoop operation is a hit or a miss. If these two status signals are available and included for observation, there is no need to select branch events: observing the start/end events plus those status signals is sufficient to identify the branch followed by a flow instance during system execution.
5 Experimental Results
To the best of our knowledge, this work is the first to present a systematic approach to post-silicon trace analysis guided by system-level protocols. We were not able to find similar previous work against which ours could be evaluated and compared. The closest work to ours is [15]; however, our work is more general and developed with practical considerations. Additionally, the work in [15] is discussed and evaluated on an abstract transaction-level model, while our approach is evaluated on an RTL model.
5.1 The Model
The ideas and techniques presented in this paper are evaluated on a multi-core SoC prototype, as shown in Figure 1, which implements a number of common industrial system-level protocols including cache coherence and power management. This prototype is a cycle- and pin-accurate RTL model written in VHDL. Even though this model is simple compared to real SoC designs, it is much more sophisticated than the gate-level benchmark suites typically considered as targets for post-silicon analysis [10, 4, 5].
Since the proposed trace analysis approach is communication centric, the focus of this model is the implementation of system-level protocols. The CPUs are treated as a test environment where software programs are simulated in VHDL to trigger various protocols. Therefore, there is no instruction cache, as no instructions are involved when the CPUs are simulated. The peripheral blocks, GFX, PMU, Audio, etc., are also described as abstract models that generate events to initiate flows or to respond to incoming requests.
More details of some system-level protocols implemented in our model can be found in [3]. They include downstream read/write protocols for each CPU, upstream read/write protocols for the peripheral blocks, and system power management protocols, all abstracted from real industrial protocols. These system-level protocols are supported by inter-block communication protocols based on ARM AXI4-Lite [1]. All of these flows are implemented in the prototype.
A flow event is generated by a source and consumed by a destination via messages transmitted over the corresponding communication link. In our model, each message is organized into the fields described below (a sketch of this layout as a record type follows the field list). Note that not all fields are used on all links. The model has over four thousand single-bit signals.
- Val: indicates the validity of a message.
- Cmd: carries the operation to be performed by the target block.
- Tag: is used by Bus to identify the original sources of messages from different blocks that go to the same destination, e.g., memory wr_req from Bus in response to wr_req from both CPUs.
- Sid: a unique number generated by a component to represent the sequencing information of flows initiated by that component.
- Addr: carries the memory address at the target block where Cmd is applied.
- Data: carries data to a target or from a source. Its width varies depending on the link where a message is sent: on the links between Cache and Bus, the width is equal to the size of a cache block, while all the other links use a fixed width.
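As a sketch, the message layout can be captured as a record; the field widths are implementation-specific and therefore left out here, and Addr/Data are ignored by the flow-level analysis.

```python
from dataclasses import dataclass

@dataclass
class LinkMessage:
    val: bool     # message valid on this cycle
    cmd: int      # operation to be performed at the target block
    tag: int      # originating component, distinguished by Bus for shared destinations
    sid: int      # per-component sequence number of the initiating flow
    addr: int     # target address (not used by the flow-level analysis)
    data: bytes   # payload; width depends on the link

msg = LinkMessage(val=True, cmd=0b0010, tag=0, sid=7, addr=0x1000, data=b"\x00")
print(msg)
```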
5.2 Experiment Setup
Test Environment The prototype is simulated in a random test environment where the CPUs, GFX, and the other peripheral blocks are programmed to randomly select a flow to initiate in each clock cycle. The contents of Cmd, Addr, and Data in each activated flow are set randomly. Additionally, the CPUs can activate power management protocols non-deterministically. Each of these blocks activates a total of 100 flow instances during the entire simulation.
Trace Signal Selection In the experiments, different selections of trace signals are produced as discussed in section 4, and their impacts on the complexity and accuracy of the trace analysis approach are evaluated. The list below explains the selections at the system level while information on the bit level selection is given in Table 1.
- S1: All events of all flow specifications, and all signals implementing each event, are selected. This selection offers full observability and provides a baseline for comparison with the other selections.
- S2: The start and end events of all protocols are selected. Furthermore, for each branch in each flow, one unique event is selected.
- S3: The start and end events of all protocols are selected. Furthermore, for each branch in each flow, a highly shared event is selected.
- S4: The start and end events of all protocols are selected. Instead of selecting events for branches in each flow, the signals whose states control the flow branching are selected.
At the bit level, the Addr and Data fields are not considered. On the other hand, the Val bit is always selected so that valid messages can be identified from observed traces. For selections S2, S3, and S4, experiments are performed to evaluate all combinations of Cmd, Tag and Sid fields.
5.3 Result Analysis
System level selection | S1 | S2 | S3 | S4 | ||||||||||||
U S | U S | U S | ||||||||||||||
Cmd | ||||||||||||||||
Tag | ||||||||||||||||
Sid | ||||||||||||||||
# Bits | 870 | 545 | 401 | 401 | 401 | 401 | 495 | 367 | 367 | 367 | 367 | 378 | 258 | 258 | 258 | 258 |
# scen | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | ||||||
(Final) | ||||||||||||||||
# scen | 1 | 1 | 5184 | 1 | 1 | 5184 | 1 | 1 | 8 | 1 | ||||||
(Max) | ||||||||||||||||
Time | 1.628 | 1.475 | 600 | 3.679 | 600 | 1.464 | 1.444 | 600 | 3.812 | 600 | 1.426 | 1.430 | 1.411 | 1.424 | 600 | 1.419 |
Mem | 0.516 | 1.10 | 4.2 | 66 | 1.124 | 1.11 | 4.2 | 101 | 1.1 | 0.504 | GB | 0.58 | GB | 1.116 |
In Table 1, a check mark means that all signals implementing a particular field for all selected events in a given selection are traced; otherwise, those signals are not traced. The third row (# Bits) shows the total number of single-bit signals traced for each selection. As discussed in Section 4, the system-level selection may choose events unique to particular flows or events shared by multiple flows. From the table, we can see that selecting shared events leads to a smaller number of trace signals (S3) compared with selecting unique events (S2). However, if the status signals controlling flow branching are selected without selecting any branch events, S4 leads to the smallest trace signal selection.
From the table, it is quite obvious that not selecting Cmd or Sid has severe impacts on the trace analysis, as explained in issues #1 and #2 in Section 3.4. Not selecting Tag also has negative impacts, but not as severe: the trace analysis can still finish even though it takes more time and memory. Next, compare the results obtained by selecting Cmd and Sid but no Tag under S2, S3, and S4. The results with S4 are much better than with S2 or S3. This is because no branch events are selected for S4, and therefore issues #2 and #3 are avoided. Combined with the benefit of reduced trace signals, S4 appears to be the best option. On the other hand, not selecting any branch events may cause difficulty in understanding a flow execution if a branch is long and a system execution fails to reach the end of that branch.
In the above discussion, the selections of Cmd, Tag, and Sid are applied to all events resulting from the system-level selection. A finer selection can be used to reduce trace signals if unique events and shared events are considered separately. For unique events, the sources where they are generated are known from the flows, and therefore their Tags need not be traced. Shared events may result from flow instances initiated by different components, and therefore tracing their Tags is necessary. On the other hand, tracing or not tracing Cmds has little impact on the trace analysis. These points are supported by the results shown in the columns for the finer selection. Under S2, compare the results of the finer selection against those with all three fields selected: the runtime performance and the complexity and accuracy of the trace analysis are similar, while fewer trace signals are needed with the finer selection. Comparing the results of the finer selection against those obtained with only Cmd and Sid selected, the complexity drops significantly. The same conclusion can be drawn for S3 and S4.
The above discussion indicates that it is necessary to trace signals implementing Cmd and Sid whenever possible, and to trace as many signals implementing Tag as allowed in order to reduce the complexity of the trace analysis even further. If Tag or Sid is not part of the design, we recommend adding DFx circuitry so that such information can be traced. In the above experiments, the final execution scenarios under the different signal selections, where available, contain the correct number of flow instances initiated, and the orderings among the flow instances, as generated by the test environment, are correctly captured.
6 Related Work
Our work is closely related to communication-centric and transaction-based debug. An early pioneering line of work is described by Goossens et al. [9, 14, 8], which advocates focusing observation on the activities of the interconnect network among IP blocks and mapping these activities to transactions for better correlation between computations and communications. A similar transaction-based debug approach is presented by Gharehbaghi and Fujita [7]. It proposes an automated extraction of state machines at the transaction level from high-level design models. From an observed failure trace, it tries to derive a set of feasible transaction traces that lead to the observed failure state. However, this approach requires manual inputs and may not be able to derive such traces.
Singerman et al. [13] deploy a central repository of system events and simple transactions defined by architects and IP designers. It spans a wide spectrum of post-silicon validation, including DFx instrumentation, test generation, coverage, and debug. Abarbanel et al. [2] propose a model at a higher level of abstraction, called flows. Flows are used to specify more sophisticated cross-IP transactions such as power management and security, and to facilitate reuse of the architectural analysis effort to check HW/SW implementations.
7 Conclusion
An improved trace analysis approach for post-silicon debug is presented where observed raw silicon traces are interpreted wrt system flow specifications. In this approach, a new formulation of flow execution scenarios is described where more diverse information among flows can be captured and represented. A trace signal selection framework is also described in support of the proposed trace analysis approach. Some observations on trace signal selections and their impacts on the accuracy and efficiency of the trace analysis are discussed. Experiments on a non-trivial SoC prototype reveal insights on impacts of different signal selections on the complexity and accuracy of the trace analysis. In the future, we plan to perform more extensive and in-depth study on trace signal selections guided by system flow specifications.
References
- [1] AMBA AXI and ACE protocol specification. http://www.arm.com.
- [2] Y. Abarbanel, E. Singerman, and M. Y. Vardi. Validation of SoC firmware-hardware flows: Challenges and solution directions. In Proceedings of DAC'14, pages 2:1–2:4, 2014.
- [3] M. Amrein. System-level trace signal selection for post-silicon debug using linear programming. Master’s thesis, Univ. of Illinois Urbana-Champaign, May 2015.
- [4] K. Basu and P. Mishra. Efficient trace signal selection for post silicon validation and debug. In Proceedings of VLSI Design, pages 352–357. IEEE, 2011.
- [5] D. Chatterjee, C. McCarter, and V. Bertacco. Simulation-based signal selection for state restoration in silicon debug. In ICCAD, pages 595–601. IEEE, 2011.
- [6] H. D. Foster. Trends in functional verification: A 2014 industry study. In DAC, pages 48:1–48:6, 2015.
- [7] A. M. Gharehbaghi and M. Fujita. Transaction-based post-silicon debug of many-core system-on-chips. In ISQED, pages 702–708, 2012.
- [8] K. Goossens, B. Vermeulen, and A. B. Nejad. A high-level debug environment for communication-centric debug. In Proceedings of DATE’09, pages 202–207, 2009.
- [9] K. Goossens, B. Vermeulen, R. v. Steeden, and M. Bennebroek. Transaction-based communication-centric debug. In Proceedings of NOCS’07, pages 95–106, 2007.
- [10] H. F. Ko and N. Nicolici. Algorithms for state restoration and trace-signal selection for data acquisition in silicon debug. IEEE TCAD, 28(2):285–297, 2009.
- [11] S. Ma, D. Pal, R. Jiang, S. Ray, and S. Vasudevan. Can't see the forest for the trees: State restoration's limitations in post-silicon trace signal selection. In Proceedings of ICCAD'15, pages 1–8. IEEE Press, 2015.
- [12] P. Patra. On the cusp of a validation wall. IEEE Des. Test, 24(2):193–196, Mar. 2007.
- [13] E. Singerman, Y. Abarbanel, and S. Baartmans. Transaction based pre-to-post silicon validation. In Proceedings of DAC’11, pages 564–568, 2011.
- [14] B. Vermeulen and K. Goossens. A noc monitoring infrastructure for communication-centric debug of embedded multi-processor socs. In VLSI-DAT ’09, pages 183–186, 2009.
- [15] H. Zheng, Y. Cao, S. Ray, and J. Yang. Protocol-guided analysis of post-silicon traces under limited observability. In Proceedings of ISQED’16, pages 301–306, March 2016.
Appendix A CPU read/write downstream protocol
X={ 0, 1}
X’=1-X
Target={ Memory, USB, UART, AUDIO, GFX}
CMD ={read, write }


Appendix B Upstream read/write protocol
Initiator={ GFX, USB, AUDIO, UART}
Target={ Memory, USB, UART, AUDIO, GFX}
Note that a peripheral cannot initiate a read flow to read from itself


Initiator={ GFX, AUDIO}
Target={ Memory, USB, UART, AUDIO, GFX}
Note that a peripheral cannot initiate a write flow to write to itself


Appendix C CPU write back protocol
X={1,0}
Target={ Memory, USB, UART, AUDIO, GFX}


Appendix D CPU power on/off protocol
CMD={ pwr_on, pwr_off}
X={1,0}
Target={ Memory, USB, UART, AUDIO, GFX}

