One of the most widely used techniques in the medical imaging field is Computed Tomography (CT), a method of acquiring slices of the body based on the attenuation of X-rays. This monograph compiles the most important information about CT, namely its history, physical principles, fundamental instrumentation, data acquisition and processing techniques, and its applications.
First, we take a brief tour through the history of the technique, highlighting some of its most important achievements. The starting point is the discovery of X-rays; we then pass through the creation of the first CT scanner and the development of data analysis and processing algorithms.
Then, a concise review of the evolution of the scanners is given, delineating the different generations of scanners and the key features of each one.
In order to understand how an object can be scanned by this technique, we review the physical concepts that form the basis of CT. More precisely, we discuss the attenuation of radiation as it passes through objects, with a short description of how X-rays interact with matter and of the concept of the linear attenuation coefficient.
The instrumentation needed for CT is then briefly presented; in particular, the most important components of a CT scanner are explained.
As the data acquired by the scanners are not displayed in the form in which they are obtained, we then explain the most common methods used to process and analyze the large amount of information acquired by the CT detectors.
The process of creating a scale to represent the data – the CT numbers – is subsequently reviewed, in order to understand how images are created and shown to doctors. We also describe how CT makes it possible to distinguish different anatomical structures and to display only the structures of interest.
After that, some of the many clinical applications of CT are enumerated. It would be impossible to list them all, so only a few are mentioned; a complete survey is not the main goal of this monograph, although it is essential for understanding the crucial importance of CT in medicine.
Finally, we conjecture about the future of CT: specifically, what can be improved, what the current challenges for the technique are, and how they might be overcome.
This monograph is part of the Hospital and Medical Instrumentation course and is intended as an overall view of CT, which is why each section is not exhaustively detailed (for more detail on the topics covered, please consult the references). 3-Dimensional reconstruction techniques are not discussed because they are the topic of another group. Invasive instrumentation is likewise not covered, as it is not addressed in the course.
The history of CT begins with the discovery of X-rays in 1895 by Wilhelm Conrad Roentgen, a discovery that earned him the Nobel Prize in Physics in 1901.
In 1917, the Austrian mathematician Johann Radon demonstrated that, by taking several projections of a material in different directions and recreating the associated pattern, it was possible to obtain a slice in which the different densities of the material could be characterized.
The idea of using these mathematical methods to create images of slices of the human body on radiographic film was proposed by the Italian radiologist Alessandro Vallebona in 1930.
Between 1956 and 1963, the physicist Allan Cormack developed a method to calculate the distribution of absorbed radiation in the human body based on transmission measurements, which made it possible to detect smaller variations in absorption.
In 1972, Sir Godfrey Hounsfield (who won the Nobel Prize in Physiology or Medicine in 1979, shared with Cormack) invented the first CT scanner in the United Kingdom while working at the EMI Company, which at the time was actually best known for its connection to the music world. The original prototype, called the “EMI Scanner”, recorded 160 points per projection at 180 different angles (in steps of 1°), and each slice took 5 minutes to acquire. A 180×160 matrix was then constructed from these data, which took two and a half hours to analyze before the final 2D images could be visualized.
The first types of scanners required the patient's head to be immersed in a water-filled container in order to reduce the difference in X-ray attenuation between the rays that crossed the skull and the ones that only crossed the surroundings, because the detector could measure only a small range of intensities.
During the subsequent years, CT scanners increased in complexity; based on that evolution, we can distinguish five generations of machines, which will be discussed in the next section (Section 3).
Later, in 1989, a new technique was developed in which data acquisition was performed continuously – spiral CT scanning – using the movement of the platform on which the patient lies.
Nowadays, CT machines perform far better than the prototypes of the 1970s. Several rows of detectors have been added, which now allows multiple slices to be registered at the same time – the multislice scanners. These improvements make it possible to represent the data in 1024×1024 matrices, i.e. a resolution of about one megapixel.
Over time, the fundamentals of data acquisition and the key characteristics of the machines changed in many ways. This allows us to split the evolution of CT scanners into five generations.
The first technique implemented in commercial CT machines consisted of the emission of a parallel X-ray beam that passed through the patient until it reached a detector located on the opposite side. Both the X-ray source and the detector were placed at the edge of a ring with the patient at its center. The X-ray source and the detector underwent a linear translation motion to acquire data across the whole object. Then the X-ray tube and the detector were rotated by about 1°, with the patient as the isocenter, a new beam was emitted, and the translation movement restarted. This process was repeated until 180° had been covered and, for each cycle of emitted beams, 160 projections of the material under analysis were recorded. The highly collimated beam provided excellent rejection of radiation scattered in the patient. At this point, the most widely used image reconstruction technique was backprojection; later in this work (Section 6) we explain the reconstruction techniques. The time needed for data acquisition was extremely long (5 minutes per slice), due to technological limitations.
In the second generation, the collimated beam was replaced by a fan-shaped X-ray beam, and the single detector was replaced by a linear array of detectors. This advance resulted in a shorter scan time, although the technique still used a coupled source–detector translation motion. At the same time, the algorithms used to reconstruct the slice images became more complex.
Because of the vast amount of time needed to acquire data, both the first and second generations of scanners were limited to scans of the head and extremities, because those were the regions of the body that could remain immobilized during the long scan time.
The third generation of scanners emerged in 1976. In this generation, the fan beam was large enough to completely contain the patient, which made the translation movement redundant; the scanner executed only the rotational movement. Like the fan beam, the detectors also became large enough to record all the data of each slice at once. The detector consisted of a line of hundreds of independent detectors that, as in the second generation, rotated together with the X-ray source; up to 5 seconds were required to acquire each slice. Power was now supplied by a “slip ring” system placed on the gantry, which allowed continuous rotation without the need to reverse the motion to untwist the power cables, as was necessary after each rotation in the first and second generations.
The fourth generation was implemented in the late 1970s, and its innovation was a stationary ring of detectors surrounding the patient; in this case, only the X-ray source moved. The ring consisted of 600 to 4800 independent detectors that sequentially recorded the projections, so detector and source were no longer coupled. Moreover, the detectors were calibrated twice during each rotation of the X-ray source, providing a self-calibrating system; third-generation systems were calibrated only once every few hours.
In fourth-generation systems, two detector geometries were used: in the first, the rotating fan beam lies inside the fixed ring of detectors; in the second, the fan beam lies outside the ring. These technological advances reduced scan times to 5 s per image and slice spacing to below 1 mm.
Both the third and fourth generations are available on the market, and both are used successfully in clinical practice.
The innovation of the fifth generation of CT scanners (early 1980s) was a new X-ray source system. While the ring of detectors remains stationary, a semicircular tungsten target strip and an electron gun aligned with the patient were added. By steering the electron beam onto the tungsten strip (the anode), the emission of X-ray radiation is induced. The result is a system with no moving parts: no mechanical motion is needed to record data, because the detectors completely surround the patient and the electron beam is steered electronically. The four target rings and the two detector banks allow eight slices to be acquired at the same time, which reduces the scan time and, consequently, the motion artifacts. Scan times fall to between 33 and 100 ms, which is sufficient to capture images of the heart during the cardiac cycle; for this reason it is the system most used in the diagnosis of cardiac disease, and it is also called Ultrafast CT (UFCT) or Cardiovascular CT (CVCT). Because of the continuous scanning, special adjustments to the algorithms are needed to reduce image artifacts.
The idea of creating spiral CT came with the need for 3-Dimensional scans. This system for acquiring 3-Dimensional CT images was born in the early 1990s and consists of a continuous translation movement of the table that supports the patient. The technique is based on the third generation of machines and allows scan times of the abdomen to be reduced from 10 minutes to 1 minute, which reduces motion artifacts. Besides, a 3-Dimensional model of the organ under study can be reconstructed. The most complex innovation of this technique lies in the data processing algorithms, because they must take into account the spiral path of the X-ray beam around the patient. Technically, this was possible only thanks to the “slip ring” system introduced in the third generation of scanners.
After the development of new techniques, detectors, methods and algorithms, the question nowadays is: “How many slices can we acquire at the same time?”. The answer lies in the placement of several rows of detectors and the transformation of the fan-shaped X-ray beam into a 3-Dimensional cone beam. Manufacturers have already placed 64 rows of detectors (multislice systems) and image quality has reached high levels. Moreover, a complete scan of a structure now takes about 15 seconds or even less.
The basic principle of CT is measuring the spatial density distribution of a human organ or part of the body. It is similar to conventional X-ray imaging, in which an X-ray beam of uniform intensity is directed at the patient and the image is generated by the projection of the X-rays onto a film. The X-rays are emitted with a certain intensity I0 and emerge on the other side of the patient with a lower intensity I. The intensity decreases as the beam crosses the patient, because the radiation interacts with matter. More precisely, the X-rays used in CT are produced at tube voltages of the order of 120 kV and, at those energies (up to 120 keV), they interact with tissues mainly through the photoelectric effect (mostly at lower energies) and the Compton effect (at higher energies), although they can also interact by coherent (Rayleigh) scattering (5% to 10% of the total interactions).
The photoelectric effect consists of the emission of an electron (photoelectron) from the irradiated matter, caused by the absorption of the X-ray's energy by an inner electron of the medium. In the Compton effect, an X-ray photon interacts with an outer electron of the matter and is deflected from its trajectory, transferring part of its energy to the electron, which is then ejected. In coherent scattering, the energy of the X-ray is absorbed by the tissue, setting its electrons into harmonic motion, and is then reradiated in a random direction as a secondary X-ray.
CT X-rays are not monoenergetic but, for now, to simplify the explanation of this concept, we will consider them monoenergetic. When an X-ray beam (like other radiation) passes through a material, part of its intensity is absorbed in the medium and, as a consequence, the final intensity is lower than the initial one. More precisely, Beer's law states that the intensity transmitted through the medium depends on the linear attenuation coefficient µ of the material – if we assume a homogeneous medium – and on the thickness x of the material, according to the following expression:

I = I0 · e^(−µx) (1)
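As a quick numerical illustration of Beer's law, the sketch below computes the fraction of the incident intensity transmitted through a homogeneous slab. The value µ ≈ 0.19 cm⁻¹ for water at CT energies is an illustrative figure, not a calibrated constant.

```python
import math

def transmitted_intensity(i0, mu, x):
    """Beer's law for a homogeneous medium: I = I0 * exp(-mu * x),
    with mu in cm^-1 and the thickness x in cm."""
    return i0 * math.exp(-mu * x)

# Illustrative value: mu of water is roughly 0.19 cm^-1 at CT energies.
fraction = transmitted_intensity(1.0, 0.19, 10.0)
print(round(fraction, 2))  # about 0.15 of the beam survives 10 cm of water
```

Note how quickly the transmitted fraction falls with thickness: this exponential decay is why the detectors need such a large dynamic range.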
The problem with conventional radiographs is that they provide only an integrated value of µ along the path of each X-ray, which means we get a 2-Dimensional projection of a 3-Dimensional anatomy. As can easily be understood, all the structures and organs at the same level appear overlapped in the image. As a consequence, some details cannot be perceived and some organs may not be entirely visible. For example, it is very hard to see the kidneys on a conventional radiograph because the intestines appear in front of them.
Moreover, as there are many values of µ (typically one for each point of the scanned part of the body), it is not possible to calculate them from a single measurement. However, if the same plane is measured from many different directions, all the coefficients can be calculated, and that is what CT does. As Figure 4 shows, a narrow X-ray beam is produced by the source and directed at a detector, which means that only a narrow slice of the body is imaged, and the intensity recorded by the detector depends on all the material crossed by the X-ray on its way. That is the reason why the technique is called tomography – the term derives from the Greek tomos, meaning slice or section. Many measurements of X-ray transmission through a plane of an object (an organ or a part of the body), taken from several directions, are recorded and then used to reconstruct the object by signal processing techniques. These techniques are discussed later in this monograph (Section 6). The tightly collimated X-ray beam ensures that no significant scatter is present, so as to guarantee a high signal-to-noise ratio (SNR), a necessary premise for obtaining a faithful image of the scanned object. For that reason, unlike in conventional tomography, structures located outside the area being imaged do not interfere in CT.
The X-ray system is composed of an X-ray source, collimators, detectors and a data-acquisition system (DAS). The X-ray source is undoubtedly the most important part, because it largely determines the quality of the image.
The X-ray source (the X-ray tube) accelerates a beam of electrons between two electrodes against a metal target, as shown in Figure 5. The cathode is a coiled tungsten filament through which a current flows, causing the filament to heat up. At high temperatures (about 2220 °C), the tungsten releases electrons, a process called thermionic emission. A potential difference of 15 to 150 kV is applied between the cathode and the anode, which accelerates the released electrons towards the anode.
When the electrons hit the anode, they produce X-rays in two ways. On the one hand, when an electron passes near a tungsten nucleus, it is deflected by an attractive electric force (the nucleus is positively charged and the electron negatively charged) and loses part of its energy as X-rays. As there is an enormous number of possible interactions, each leading to a partial loss of kinetic energy, the produced X-rays span a great range of energies, as Figure 5 shows. This process is called bremsstrahlung (braking radiation). On the other hand, if an electron from the cathode penetrates an atom of the anode, it can collide with an inner electron, ejecting it and leaving a “hole” in the atom, which is then filled by an outer electron. The difference in binding energy between these two electrons is released as an X-ray. This process is called characteristic radiation, because its energy depends on the binding energies of the electrons, which are characteristic of a given material.
The tube current is the number of electrons that pass from the cathode to the anode per unit time; typical values for CT range from 200 up to 1000 mA. The potential difference between the electrodes is generally 120 kV, which produces an energy spectrum ranging from 30 to 120 keV. The tube output is the product of the tube current and the voltage between the electrodes; high output values are desirable because they permit shorter scan times, which reduces motion artifacts (for example, in heart scans).
The production of X-rays in these tubes is an inefficient process, and most of the power supplied to the tube is converted into heat in the anode, so a heat exchanger, placed on the rotating gantry, is needed to cool the tube. Spiral CT in particular requires high cooling rates of the X-ray tube and high heat storage capacity.
The X-ray beam emitted by the source is divergent, normally larger than the desired field-of-view (FOV) of the image. Usually, the fan beam width is set between 1 and 10 mm (although recent CT scanners allow submillimetric precision), which determines the width of the imaged slice. The collimator, placed between the source and the patient, is composed of lead sheets that restrict the beam to the required directions.
An X-ray beam larger than the FOV means that more X-rays are emitted than are needed for the scan, and that brings two problems: the radiation dose given to the patient is increased unnecessarily, and the amount of Compton-scattered radiation increases.
An ideal CT system in which only primary radiation (X-rays emitted from the source) reaches the detector does not exist; Compton scatter is always present. As this scatter is randomly distributed and carries no useful information about the density distribution of the scanned object, it only reduces image contrast and should be minimized as much as possible – all the more so because, unlike the photoelectric effect, the Compton effect produces low contrast between tissues.
As noted above, collimators are useful to limit the X-ray beam to the FOV. However, even with a collimator, 50% to 90% of the radiation that reaches the detector is secondary radiation. To reduce the Compton scatter, antiscatter grids can be placed between the detector and the patient.
An antiscatter grid consists of strips of sheets oriented parallel to the direction of the primary radiation, combined with an aluminum support; it drastically reduces any scattered radiation that does not travel in the direction of the primary beam, as illustrated in Figure 6.
To avoid degrading the image with the shadow of the grid, the strips should be narrow. There is, however, a tradeoff between the reduction of scattered radiation (which improves image contrast) and the dose that must be given to the patient to keep the same number of detected X-rays.
At the beginning, single-slice CT scanners with just one source and one detector were used. However, these took a long time to acquire an image, which is why the evolution brought single-source, multiple-detector machines and multislice systems.
The third and fourth generations added a wider X-ray fan beam and a larger number of detectors on the gantry (typically 512 to 768), which made it possible to acquire more information in less time.
The detectors used in CT must be highly efficient, to minimize the dose given to the patient, have a large dynamic range, and be very stable over time and under the temperature variations inside the gantry. Three factors contribute to the overall efficiency: geometric efficiency (the fraction of the total detector area that is sensitive to radiation), quantum efficiency (the fraction of incident X-rays that are absorbed and contribute to the signal) and conversion efficiency (the ability to convert the absorbed X-rays into an electrical signal).
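Since each of the three factors is an independent fraction, the overall efficiency is simply their product. A minimal sketch (the numbers below are hypothetical, for illustration only, not vendor specifications):

```python
def overall_efficiency(geometric, quantum, conversion):
    """Overall detector efficiency: the product of the three fractions
    described above, each between 0 and 1."""
    return geometric * quantum * conversion

# Hypothetical figures for a solid-state detector:
eff = overall_efficiency(0.80, 0.95, 0.90)
print(round(eff, 3))  # 0.684
```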
These detectors can be of two types (shown in Figure 7): solid-state detectors or gas ionization detectors. Solid-state detectors consist of an array of scintillating crystals and photodiodes, while gas ionization detectors consist of an array of gas chambers to which a high voltage is applied to collect the ions produced by radiation inside each chamber. The gas is kept under high pressure to maximize the interactions between X-rays and gas molecules, which produce electron–ion pairs.
The transmitted fraction of the incident X-ray intensity (I/I0 in equation 1) can be as small as 10^-4, which is why the DAS must be very accurate over a great range. The role of the DAS is to acquire these data, encode them into digital values and transmit them to the computers so that reconstruction can begin.
The DAS makes use of many electronic components, such as precision preamplifiers, current-to-voltage converters, analog integrators, multiplexers and analog-to-digital converters. The logarithmic step needed in equation 3 to obtain the values of µi can be performed with an analog logarithmic amplifier.
Data transfer is a crucial step in assuring the speed of the whole process and used to be done by a direct connection between the DAS and the computer. However, with the appearance of rotating scanners in the third and fourth generations, this transfer – at rates as high as 10 Mbytes/s – is now accomplished by optical transmitters on the rotating gantry that send the information to stationary optical receivers.
The acquisition of the projections, the reconstruction of the signal, and the display and manipulation of the tomographic images are made possible by the computer systems that control the hardware. Current systems consist of 12 processors achieving 200 MFLOPS (million floating-point operations per second) and can reconstruct a 1024×1024-pixel image in less than 5 seconds.
As data are acquired in several directions (e.g. with increments of 1° or even less) and each direction is sampled at several distinct points (e.g. 160 or more), at least 28 800 points are stored, which means that efficient mathematical and computational techniques are needed to analyze all this information. A square matrix representing a 2-Dimensional map of the variation of X-ray absorption with position is then reconstructed. There are four major techniques for analyzing these data, which we discuss subsequently.
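The figure of 28 800 stored points follows directly from the sampling just described; a short sanity check, using the original EMI-scanner sampling and a modern reconstruction matrix for comparison:

```python
# Sampling of the original EMI scanner: 180 angular positions (1° steps),
# 160 samples per projection.
views, samples = 180, 160
total_points = views * samples
print(total_points)  # 28800 raw measurements per slice

# A modern 1024x1024 reconstruction matrix, for comparison:
print(1024 * 1024)  # 1048576 pixels to solve for
```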
As noted above (Section 4), there is one value of µ for each pixel, which means that modern CT scanners deal with 1 048 576 unknowns for each slice (the matrices used nowadays are 1024×1024). As a result, to generate the image of a single slice by directly solving the equations, a system of at least 1 048 576 equations would have to be solved (one equation for each unknown), which makes this approach totally impractical. Indeed, when Hounsfield built his first CT scanner in 1967, it took 9 days to acquire the data of a single slice and 21 hours to compute the equations (and, at the time, the matrix had only 28 000 entries). Besides, CT scanners nowadays acquire about 50% more measurements than strictly needed, in order to reduce noise and artifacts, which would require even more computational resources.
These techniques calculate the final image by successive small adjustments based on the acquired measurements. Three major variations of this method can be found: the Algebraic Reconstruction Technique (ART), the Simultaneous Iterative Reconstruction Technique (SIRT) and the Iterative Least-Squares Technique (ILST). They differ only in how the corrections are made: ray by ray, pixel by pixel, or over the entire data set simultaneously, respectively.
In ART, for example, the data of one angular position are divided into equally spaced elements along each ray. These data are then compared with analogous data from another angular position, and the differences in X-ray attenuation are added equally to the corresponding elements. Basically, for each measurement, the system works out how each pixel value can be modified to agree with the particular measurement being analyzed. To reconcile the measurements with the pixel values, if the sum of the entries along one direction is lower than the experimental measurement for that direction, all the pixels are increased; if the sum is higher than the measured attenuation, the pixel values are decreased. By repeating this iterative cycle, the error in the pixels is progressively decreased until an accurate image is obtained. ART was used in the first commercial scanner in 1972, but it is no longer used because iterative methods are usually slow. Besides, this method requires all data to be acquired before the reconstruction begins.
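The ray-by-ray correction cycle described above can be sketched on a toy problem. The example below reconstructs a hypothetical 2×2 image from its row and column sums; a real ART system handles vastly larger systems of rays, but the update rule is the same:

```python
# Toy ART (ray-by-ray) reconstruction of a 2x2 image from its row and
# column sums. The hypothetical true image is [[1, 2], [3, 4]].
rays = [  # each ray: the (row, col) pixels it crosses, and its measured sum
    ([(0, 0), (0, 1)], 3.0),  # row 0
    ([(1, 0), (1, 1)], 7.0),  # row 1
    ([(0, 0), (1, 0)], 4.0),  # column 0
    ([(0, 1), (1, 1)], 6.0),  # column 1
]

img = [[0.0, 0.0], [0.0, 0.0]]  # start from a blank image
for _ in range(5):               # iterate until the sums agree
    for pixels, measured in rays:
        current = sum(img[r][c] for r, c in pixels)
        correction = (measured - current) / len(pixels)
        for r, c in pixels:
            img[r][c] += correction  # spread the discrepancy equally

print([[round(v, 2) for v in row] for row in img])  # [[1.0, 2.0], [3.0, 4.0]]
```

On this tiny consistent system the iteration settles after the first sweep; on real, noisy data many sweeps are needed, which is why the method was abandoned for being slow.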
Backprojection is a formal mathematical technique that reconstructs the image based only on the projections of the object onto image planes in different directions. Each direction is given the same weight, and the overall linear attenuation coefficient is generated by summing the attenuation along each X-ray path that intersects the object from the different angular positions. In simpler terms, backprojection can be performed by smearing each view of the object back through the image plane in the direction in which it was recorded. When this process is finished for all the elements of the anatomical section, one obtains a merged image of the linear attenuation coefficients, which is itself a crude reconstruction of the scanned object.
This technique is illustrated in Figure 8, from which it is also clear that the resulting image is blurred; the technique therefore needs an improvement, which is provided by filtered backprojection.
Filtered backprojection is therefore used to correct the blurring of simple backprojection. It consists of applying a filter “kernel” to each of the 1-Dimensional projections of the object: a deblurring function is convolved with the X-ray transmission data before they are backprojected. The filter removes the frequencies of the X-ray data responsible for most of the blurring. As Figure 8 shows, the filter has two significant effects: it levels the top of the pulse, making the signal uniform within it, and it introduces negative spikes at the sides of the pulse, so that these negative neighborhoods cancel the blurring.
As a result, the image produced by this technique is consistent with the scanned object if an infinite number of views and an infinite number of points per view are acquired.
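The filtering step can be sketched as follows: the code builds the textbook discrete ramp ("Ram-Lak") kernel and convolves it with a rectangular projection profile (the profile is made up for illustration). The negative values that appear at the edges are exactly the "negative neighborhoods" described above.

```python
import math

def ramlak_kernel(half_width):
    """Discrete ramp (Ram-Lak) kernel: h[0] = 1/4, h[n] = 0 for even n,
    h[n] = -1 / (pi^2 * n^2) for odd n."""
    kernel = []
    for n in range(-half_width, half_width + 1):
        if n == 0:
            kernel.append(0.25)
        elif n % 2 == 0:
            kernel.append(0.0)
        else:
            kernel.append(-1.0 / (math.pi ** 2 * n ** 2))
    return kernel

def filter_projection(signal, kernel):
    """Convolve a projection with the (symmetric) kernel, same length out."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

# Rectangular "projection" of a uniform object:
projection = [0.0] * 8 + [1.0] * 8 + [0.0] * 8
filtered = filter_projection(projection, ramlak_kernel(7))
print(min(filtered) < 0)  # True: negative side lobes flank the pulse
```

When many such filtered views are backprojected, the negative lobes of each view cancel the smearing contributed by the others, removing the blur of plain backprojection.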
Compared with the two previous methods, this process also has the advantage that reconstruction can begin at the same time as the data are being acquired, which is one of the reasons why it is among the most popular methods nowadays.
The last signal processing technique discussed in this monograph is Fourier reconstruction, which consists of analyzing the data in the frequency domain instead of the spatial domain. For this, one takes the X-ray attenuation pattern at each angular orientation and decomposes it into its frequency components. In the frequency domain, the scanned image is seen as a 2-Dimensional grid, over which the spectrum of each view is placed as a dark line, as Figure 9 shows.
To reconstruct the image, one takes the 1-Dimensional Fast Fourier Transform (FFT) of each view. According to the Fourier Slice Theorem, each view's spectrum is identical to the values of one line (slice) through the 2-Dimensional image spectrum, so each spectrum is placed in the grid at the same angle at which the view was originally acquired. Finally, the inverse FFT of the image spectrum yields the reconstruction of the scanned object.
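One readily checkable consequence of the Fourier Slice Theorem is that every view's spectrum passes through the origin of the image spectrum, so the zero-frequency (DC) term of every projection equals the sum over the whole image. A toy check (the 3×3 image is made up for illustration):

```python
# The DC component of a DFT is just the sum of the samples, so by the
# Fourier Slice Theorem every projection must sum to the image total.
image = [[1, 0, 2],
         [0, 3, 0],
         [4, 0, 1]]

row_projection = [sum(row) for row in image]        # view at 0 degrees
col_projection = [sum(col) for col in zip(*image)]  # view at 90 degrees

total = sum(sum(row) for row in image)
print(sum(row_projection) == total == sum(col_projection))  # True
```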
7. Data Display
As mentioned earlier (Section 6), the linear attenuation coefficients give us a crude image of the object. They can be expressed in dB/cm but, as they depend on the incident radiation energy, CT does not use the attenuation coefficients themselves to represent the image; instead, it uses integer numbers called CT numbers. These are occasionally, but unofficially, called Hounsfield units and are related to the linear attenuation coefficients as follows:

CT number = 1000 × (µ − µw) / µw (5)
where µ is the linear attenuation coefficient of each pixel and µw is the linear attenuation coefficient of water.
The CT number clearly depends on the medium. For human applications, the CT number varies from −1000 for air to about +1000 for bone, with a CT number of 0 for water, as is easily seen from equation 5.
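Equation 5 is straightforward to encode. In the sketch below, µw is set to an illustrative 0.19 cm⁻¹ (the exact value depends on the beam energy); the air and water cases reproduce the endpoints just mentioned:

```python
MU_WATER = 0.19  # cm^-1, illustrative value at CT energies

def ct_number(mu, mu_water=MU_WATER):
    """Hounsfield scale: CT number = 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

print(ct_number(MU_WATER))  # water -> 0.0
print(ct_number(0.0))       # air (mu ~ 0) -> -1000.0
```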
The CT numbers of the scanned object are then presented on the monitor as a grey scale. As shown in Figure 10, CT numbers span a large range and, as the human eye cannot distinguish so many shades of grey, a window is usually used to show a smaller range of CT numbers, depending on what one wishes to see. The Window Width (WW) defines the range of CT numbers displayed and consequently alters the contrast (as Figures 11 and 12 show), whereas the Window Level (WL) sets the centre of the window and therefore selects which structures are seen. The lowest CT number of the window, corresponding to the lowest-density tissue, is represented in black, and the highest CT number (highest-density tissue) is represented in white.
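The WW/WL mapping can be sketched as a simple clamp-and-scale to an 8-bit grey value. The soft-tissue window used in the example (WL = 40, WW = 400) is a typical illustrative choice, not a fixed standard:

```python
def window_to_gray(ct, level, width):
    """Map a CT number to an 8-bit grey value given Window Level and
    Window Width: below the window -> black (0), above -> white (255)."""
    low = level - width / 2.0
    high = level + width / 2.0
    if ct <= low:
        return 0
    if ct >= high:
        return 255
    return round(255 * (ct - low) / width)

# Hypothetical soft-tissue window: WL = 40, WW = 400
print(window_to_gray(-160, 40, 400))  # lower edge of the window -> 0
print(window_to_gray(40, 40, 400))    # window centre -> 128
print(window_to_gray(240, 40, 400))   # upper edge of the window -> 255
```

Narrowing WW stretches a small CT-number range over the full grey scale (more contrast), while shifting WL moves the window onto the tissue of interest, exactly as described above.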
As it can easily be understood, radiation dose given to the patient is dependent on the resolution of the scanner and its contrast, as well as