Take l1_lat, one of the GPU_Microbenchmark binaries, as an example. The binary is run in PTX mode.
When no breakpoints are set, or only breakpoints unrelated to GPGPU-Sim initialization are set, the benchmark runs to completion and the terminal prints output like the following:
gpgpu_simulation_time = 0 days, 0 hrs, 0 min, 53 sec (53 sec)
gpgpu_simulation_rate = 623 (inst/sec)
gpgpu_simulation_rate = 2568 (cycle/sec)
gpgpu_silicon_slowdown = 440809x
L1 Latency = 23.9961 cycles
Total Clk number = 6143
GPGPU-Sim: *** exit detected ***
The lines starting with "L1 Latency" and "Total Clk number" are printed by the modified l1_lat.h.
However, if breakpoints related to GPGPU-Sim initialization are set, for example at:
m_shader_config.init();
m_L1D_config.init(m_L1D_config.m_config_string, FuncCachePreferNone);
Trace::init();
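For reference, setting breakpoints on these three calls in gdb might look like the session below (the qualified function names are taken from the GPGPU-Sim source; treating them as valid breakpoint locations is an assumption):

```gdb
(gdb) break shader_core_config::init
(gdb) break cache_config::init
(gdb) break Trace::init
(gdb) run
```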
Then the program crashes, every time, in the constructor of gpgpu_sim itself:
gpgpu_sim::gpgpu_sim(const gpgpu_sim_config &config, gpgpu_context *ctx)
: gpgpu_t(config, ctx), m_config(config) {
gpgpu_ctx = ctx;
m_shader_config = &m_config.m_shader_config;
Within a few seconds, the simulator raises a segmentation fault at the entry of "bool gpgpu_sim::active() {".
I am quite confused, since the simulation flow should have nothing to do with whether breakpoints are set.