Debug Your C++ Applications. Linking Libraries. Memory Leaks.
In this article, I will present common problems that often occur while compiling or running a C++ application. Since there are many types of errors with different root causes, I will focus only on the general approach and show standard commands for analyzing the root cause.
Since I use Linux (Ubuntu), the commands are applicable to this system.
Compilation process
Understanding the compilation process gives us an excellent opportunity to use the available software tools to debug our application, or simply to understand how it works under the hood.
When you run the compilation process (here I use GCC), all the steps below are performed automatically. However, there are ways to stop after each compilation stage and inspect what the compiler has produced (intermediate files).
The compilation process consists of the following steps (consider the figure above):
1. When the C++ preprocessor encounters the #include <file> directive, it replaces it with the file's content, creating an expanded source file. During the preprocessing phase, all macros are also expanded.
The output of the preprocessor can be inspected as follows (I assume we compile program.cpp):
#include <iostream>
#define VALUE 100
int main(){
    std::cout << "This is output from program (macro) = " << VALUE << "\n";
    return 0;
}
Run g++ with the -E flag (see the GCC documentation for all available flags):
g++ program.cpp -o prog.ii -E
We can then open prog.ii and verify that the contents of <iostream> have been included in the file.
2. Then, the preprocessed file is translated into platform-specific assembly language. Use the same compiler, but with the -S flag:
g++ prog.ii -o prog.s -S
Using nano, vi, or another editor, you can inspect the assembly code.
3. Next, the assembler converts the file generated by the compiler into an object file. Assemble the previous assembly code using the -c flag:
g++ prog.s -o prog.o -c
It can be difficult to investigate the content of the prog.o file directly. The useful tool objdump helps convert the object file into a more readable format:
objdump -dC prog.o
//run man objdump to check all available options
4. Finally, you can link the object file into the final executable:
g++ prog.o -o prog
You can also generate all the intermediate files at once by running:
g++ program.cpp --save-temps -o prog
Static Linking
During the linking phase, the linker combines all the libraries necessary to run our program. We can investigate which libraries are incorporated: simply run the ldd utility as follows (we will discuss dynamic linking later):
ldd ./prog
The output is as follows,
It is important to note here that the C++ standard library is implemented in libstdc++.so.6, located in /lib/x86_64-linux-gnu/.
First, we can list the symbols included in this library and grep for the cout symbol:
nm -D /lib/x86_64-linux-gnu/libstdc++.so.6 | grep cout
Now we can investigate the same symbol in our program:
nm -D prog | grep cout
When our C++ program consists of several .cpp files, the most important thing is to tell the compiler where the implementations of certain functions (methods) are located.
We can take a simple example in which we call a compute function from main.
#include <iostream>
int main()
{
    auto value = compute(10,20);
    std::cout << "This is output from program = " << value << "\n";
    return 0;
}
As expected, compilation fails, since no information about the compute function has been provided.
We can, of course, add the declaration of the function, but the implementation is still missing, so the linker complains:
#include <iostream>
int compute (int a, int b); //declaration
int main()
{
    auto value = compute(10,20);
    std::cout << "This is output from program = " << value << "\n";
    return 0;
}
We could, of course, implement the compute function ourselves; however, for our purposes we will consider another case.
As we indicated previously, our program consists of several files. In our case, the implementation of the compute function lives in separate files: compute.cpp, and compute.h, which contains the previously defined declaration.
All files look as follows,
//program.cpp
#include <iostream>
#include "compute.h"
int main()
{
    auto value = compute(10,20);
    std::cout << "This is output from program = " << value << "\n";
    return 0;
}
//compute.cpp
int compute (int a, int b){
return a + b;
}
//compute.h
#pragma once
int compute (int, int);
Before we go further, the most important thing here is to consider the default locations where the compiler searches for libraries and, in our case, header files.
We can run the command:
g++ -v -x c++ -E /dev/null
There are default locations where the compiler searches for the libraries and headers.
We can also specify our own location by adding the -I flag (in our case, we add a header directory):
-I/home/markus/SOFTWARE_DEVELOPMENT/cpp_dev/extralib
g++ -v -x c++ -E /dev/null -I/home/markus/SOFTWARE_DEVELOPMENT/cpp_dev/extralib
Now we can come back to our program and the compilation process.
If we compile program.cpp alone, the compiler is still not familiar with the implementation of the compute function, even though the header (compute.h) is found. We can check this by running the preprocessing step:
g++ program.cpp -o prog.ii -E
As we can see, the compiler finds the header (compute.h) and the declaration, but the implementation is still not visible, so the link step fails.
What we have to do is compile compute.cpp to an object file and link the objects together using the same g++ driver:
g++ program.cpp -o program.o -c
g++ compute.cpp -o compute.o -c
g++ compute.o program.o -o program
//run program
./program
//expected output
This is output from program = 30
Before we link all the objects, we can investigate the object files themselves (program.o and compute.o).
We can inspect their symbols:
nm program.o
and see that the symbol compute is undefined (U _Z7computeii).
We can also inspect the compute.o object and confirm that the compute function is defined (T _Z7computeii):
nm compute.o
We can also build a static library (for compute.cpp) using the ar utility and link against it while compiling the program. Here is a simple example of how to do that.
All files remain the same but for compute.cpp we create a simple library (archive). The complete example is as follows,
// create the object
g++ compute.cpp -o compute.o -c
// run archive with -crs flag. Run man ar to see the meaning of flags
// we create the libprog library
ar -crs libprog.a compute.o
// we can create new folder and move the libprog library to that
mkdir libs
cd libs
mv ../libprog.a .
cd ..
// now we can compile our program against our library
// adding the flag -l, replacing lib (-lprog == libprog)
// -L./libs - is the location of our library
g++ program.cpp -o our_prog -lprog -L./libs
./our_prog
Suppose you want to build a program using the Eigen library. You do not want to install it, but rather clone the repository onto your local machine. Then, you have to inform the g++ compiler where the Eigen headers are. As we discussed above, the compiler searches in the default locations. What you have to do is make a soft link from the location where you cloned the files to a location where g++ looks for headers. In my case:
sudo ln -s /usr/include/eigen3/Eigen /usr/local/include/
Now you can compile this program without errors.
#include <iostream>
#include <Eigen/Dense>
using Eigen::MatrixXf;
using Eigen::VectorXf;
VectorXf takeVector()
{
    Eigen::VectorXf vec(6);
    vec << 1, 2, 3, 4, 5, 6;
    return vec;
}
Eigen::MatrixXf takeMatrix()
{
    Eigen::MatrixXf mat(4, 4);
    mat << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16;
    return mat;
}
int main()
{
    Eigen::VectorXf vec = takeVector();
    Eigen::MatrixXf mat = takeMatrix();
    std::cout << mat << "\n";
    Eigen::MatrixXf matR = mat.block<4,1>(0,2);
    std::cout << matR << "\n";
}
When running a program that needs a shared library located in a non-standard location, you have to add the location of that library to the LD_LIBRARY_PATH environment variable.
You can check your current path running,
echo $LD_LIBRARY_PATH
In my case, it contains one entry, from SFML.
Dynamic Linking
Now we will build our own dynamic library (shared library) and export it to LD_LIBRARY_PATH so the library can be used while we run the program.
For now, we can clean the LD_LIBRARY_PATH,
export LD_LIBRARY_PATH=""
As you can see below, we follow the same compilation process; however, this time we also add the -fpic flag. As you remember, when we link a static library, the library code is copied into the executable file. The program then knows all the addresses of functions, variables, etc.
With dynamic linking, the program loads the library at runtime. Beforehand, the compiler does not know where exactly that library is going to be loaded into memory, so we have to generate code that works regardless of where it gets loaded: we add the -fpic flag in order to generate position-independent code.
The whole process of creating the dynamic library for our simple program can be depicted as follows,
// compile with extra flag -fpic
g++ compute.cpp -o compute.o -c -fpic
// create the dynamic library libprog.so
g++ -shared compute.o -o libprog.so
// we can move libprog.so to the libs folder
mv libprog.so libs/
// now we can compile using the same procedure as we discussed before
g++ program.cpp -o prog -lprog -L./libs
The program compiles without errors, but if we run it, it complains about libprog.so. During compilation we provided the path to the library, but the dynamic loader does not know where to find it at runtime, so the program cannot start.
You can run ldd:
ldd ./prog
As you can see, libprog.so is missing (not found).
What we have to do is to export the path of libprog.so to LD_LIBRARY_PATH (in my case)
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/markus/SOFTWARE_DEVELOPMENT/cpp_dev/libs
and now we can run ldd again.
As you can see, libprog.so is now found, so the program runs properly!
Memory Debugging
One of the most common problems related to program bugs is memory leakage. Memory leaks occur when a program terminates while some dynamically allocated heap blocks have still not been released. The problem may seem trivial; however, when the program runs many internal iterations, the leaked memory can become critical.
One of the most important tools for analyzing memory leaks is Valgrind. On Ubuntu, you can install it by running:
sudo apt install valgrind
Let us consider the simple program,
#include <iostream>
class A
{
public:
    int a;
    int b;
};
int main()
{
    int *p = new int[10];
    A *pp = new A[5];
}
Here we allocated a heap array of 10 integers (one int typically occupies 4 bytes) and an array of 5 objects of class A. Each object of class A contains two ints and thus occupies 8 bytes.
In order to check for potential leaks, we have to compile our program with the -g flag (which includes debugging information):
g++ mem_program.cpp -o mem_prog -g
and run Valgrind, as follows
valgrind ./mem_prog
The output is as expected (we lose 80 bytes = 10*4 + 5*8); see the HEAP SUMMARY section.
You can also apply other flags that report the exact location of each leak:
valgrind --leak-check=full ./mem_prog
For C++ enthusiasts, I really recommend two extremely consistent and well-made YouTube channels: CoffeeBeforeArch and Mike Shah.
Thank you for reading.