# Control Systems/System Representations
## System Representations
This is a table of times when it is appropriate to use each different
type of system representation:
+-------------------------------------+--------------+-----------+-----------+
| Properties | State-Space\ | Transfer\ | Transfer\ |
| | Equations | Function | Matrix |
+=====================================+==============+===========+===========+
| Linear, Distributed | no | no | no |
+-------------------------------------+--------------+-----------+-----------+
| Linear, Lumped | yes | no | no |
+-------------------------------------+--------------+-----------+-----------+
| Linear, Time-Invariant, Distributed | no | yes | no |
+-------------------------------------+--------------+-----------+-----------+
| Linear, Time-Invariant, Lumped | yes | yes | yes |
+-------------------------------------+--------------+-----------+-----------+
## General Description
These are the general external system descriptions. *y* is the system
output, *h* is the system response characteristic, and *x* is the system
input. In the time-invariant cases, the general description is also known
as the convolution description.
General Description          
---------------------------- ---------------------------------------------------
Time-Invariant, Non-causal   $y(t) = \int_{-\infty}^{\infty} h(t - r)x(r)\, dr$
Time-Invariant, Causal       $y(t) = \int_{0}^{t} h(t - r)x(r)\, dr$
Time-Variant, Non-Causal     $y(t) = \int_{-\infty}^{\infty} h(t, r)x(r)\, dr$
Time-Variant, Causal         $y(t) = \int_{0}^{t} h(t, r)x(r)\, dr$
## State-Space Equations
These are the state-space representations for a system. *y* is the
system output, *x* is the internal system state, and *u* is the system
input. The matrices A, B, C, and D are coefficient matrices.
State-Space Equations   
----------------------- ---------------------------------------------------------------
Time-Invariant          $x'(t) = Ax(t) + Bu(t)$; $y(t) = Cx(t) + Du(t)$
Time-Variant            $x'(t) = A(t)x(t) + B(t)u(t)$; $y(t) = C(t)x(t) + D(t)u(t)$
These are the digital versions of the equations listed above. All the
variables have the same meanings, except that the systems are digital.
State-Space Equations   
----------------------- ---------------------------------------------------------------
Time-Invariant          $x[k+1] = Ax[k] + Bu[k]$; $y[k] = Cx[k] + Du[k]$
Time-Variant            $x[k+1] = A[k]x[k] + B[k]u[k]$; $y[k] = C[k]x[k] + D[k]u[k]$
## Transfer Functions
These are the transfer function descriptions, obtained by using the
Laplace Transform or the Z-Transform on the general system descriptions
listed above. *Y* is the system output, *H* is the system transfer
function, and *X* is the system input.
Transfer Function   
------------------- ---------------------
Continuous-Time     $Y(s) = H(s)X(s)$
Digital             $Y(z) = H(z)X(z)$
## Transfer Matrix
This is the transfer matrix system description. This representation can
be obtained by taking the Laplace or Z transforms of the state-space
equations. In the SISO case, these equations reduce to the transfer
function representations listed above. In the MIMO case, ***Y*** is the
vector of system outputs, ***X*** is the vector of system inputs,
and ***H*** is the transfer matrix that relates each input X to each
output Y.
Transfer Matrix     
------------------- ----------------------------------------------
Continuous-Time     $\mathbf{Y}(s) = \mathbf{H}(s)\mathbf{X}(s)$
Digital             $\mathbf{Y}(z) = \mathbf{H}(z)\mathbf{X}(z)$
# Control Systems/Matrix Operations
## Laws of Matrix Algebra
Matrices must be compatible sizes in order for an operation to be valid:
Addition:Matrices must have the same dimensions (same number of rows, same number of columns). Matrix addition is commutative:
: $A + B = B + A$
Multiplication:Matrices must have compatible inner dimensions (the number of columns of the first matrix must equal the number of rows of the second matrix). For instance, if matrix A is *n* × *m*, and matrix B is *m* × *k*, then we can multiply:
: $AB = C$
: Where C is an *n* × *k* matrix. Matrix multiplication is not
commutative:
$$AB \ne BA$$
: Because it is not commutative, a distinction must be made
between \"multiplication on the left\" and \"multiplication on the
right\".
Division:There is no such thing as division in matrix algebra, although multiplication by the matrix inverse serves the same basic purpose. To have an inverse, a matrix must be square and must have a non-zero determinant (that is, it must be nonsingular).
## Transpose Matrix
The transpose of a matrix, denoted by:
$$X^T$$
is the matrix where the rows and columns of X are interchanged. In some
instances, the transpose of a matrix is denoted by:
$$X'$$
This shorthand notation is used when the superscript T is applied to a
large number of matrices in a single equation, and the notation would
become too crowded otherwise. When this notation is used in the book,
derivatives will be denoted explicitly with:
$$\frac{d}{dt}X(t)$$
## Determinant
The determinant of a matrix is a scalar value. It is denoted
similarly to absolute-value in scalars:
$$|X|$$
A matrix has an inverse if the matrix is square, and if the determinant
of the matrix is non-zero.
## Inverse
The inverse of a matrix A, which we will denote here by \"B\", is any
matrix that satisfies the following equation:
$$AB = BA = I$$
Matrices that have such a companion are known as \"invertible\"
matrices, or \"non-singular\" matrices. Matrices which do not have an
inverse that satisfies this equation are called \"singular\" or
\"non-invertible\".
An inverse can be computed in a number of different ways:
1. Append the matrix A with the identity matrix of the same size. Use
row reductions to make the left side of the appended matrix an identity. The
right side of the appended matrix will then be the inverse:
$$[A|I] \to [I|B]$$
2. The inverse matrix is given by the adjoint matrix divided by the
determinant. The adjoint matrix is the transpose of the cofactor
matrix.
$$A^{-1} = \frac{\operatorname{adj}(A)}{|A|}$$
3. The inverse can be calculated from the Cayley-Hamilton Theorem.
## Eigenvalues
The eigenvalues of a matrix, denoted by the Greek letter lambda λ, are
the solutions to the characteristic equation of the matrix:
$$|X - \lambda I| = 0$$
Eigenvalues only exist for square matrices; non-square matrices do not
have eigenvalues. If the matrix X is a real matrix, its eigenvalues will
either be real, or will occur in complex conjugate pairs.
## Eigenvectors
The eigenvectors of a matrix are the nullspace solutions of the
characteristic equation:
$$(X - \lambda_i I)v_i = 0$$
There is at least one distinct eigenvector for every distinct
eigenvalue. Non-zero multiples of an eigenvector are also themselves
eigenvectors. However, eigenvectors that are not linearly independent are
called \"non-distinct\" eigenvectors, and can be ignored.
## Left-Eigenvectors
Left eigenvectors are the left-hand nullspace solutions to the
characteristic equation:
$$w_i(A - \lambda_i I) = 0$$
These are also the rows of the inverse transition matrix.
## Generalized Eigenvectors
In the case of repeated eigenvalues, there may not be a complete set of
*n* distinct eigenvectors (right or left eigenvectors) associated with
those eigenvalues. Generalized eigenvectors can be generated as follows:
$$(A -\lambda I)v_{n+1} = v_n$$
Because generalized eigenvectors are formed in relation to another
eigenvector or generalized eigenvector, they constitute an ordered set,
and should not be used outside of this order.
## Transformation Matrix
The transformation matrix is the matrix of all the eigenvectors, or the
ordered sets of generalized eigenvectors:
$$T = [v_1 v_2 \cdots v_n]$$
The inverse transition matrix is the matrix of the left-eigenvectors:
$$T^{-1} = \begin{bmatrix}w_1' \\ w_2' \\ \vdots \\ w_n'\end{bmatrix}$$
A matrix can be diagonalized by multiplying by the transition matrix:
$$A = TDT^{-1}$$
Or:
$$T^{-1}AT = D$$
If the matrix has an incomplete set of eigenvectors, and therefore a set
of generalized eigenvectors, the matrix cannot be diagonalized, but can
be converted into Jordan canonical form:
$$T^{-1}AT = J$$
## MATLAB
The MATLAB programming environment was specially designed for matrix
algebra and manipulation. The following is a brief refresher about how
to manipulate matrices in MATLAB:
Addition:To add two matrices together, use a plus sign (\"+\"):
`C = A + B;`
Multiplication:To multiply two matrices together use an asterisk (\"\*\"):
`C = A * B;`
: If your matrices are not the correct dimensions, MATLAB will issue
an error.
Transpose:To find the transpose of a matrix, use the apostrophe (\" \' \"):
`C = A';`
Determinant:To find the determinant, use the **det** function:
`d = det(A);`
Inverse:To find the inverse of a matrix, use the function **inv**:
`C = inv(A);`
Eigenvalues and Eigenvectors:To find the eigenvalues and eigenvectors of a matrix, use the **eig** command:
`[V, D] = eig(A);`
: Where D is a square matrix with the eigenvalues of A in the diagonal
entries, and V is the matrix whose columns are the corresponding
eigenvectors. If the eigenvalues are not distinct, the eigenvectors
may be repeated. MATLAB will not calculate the generalized
eigenvectors.
Left Eigenvectors:To find the left eigenvectors, assuming there is a complete set of distinct right-eigenvectors, we can take the inverse of the eigenvector matrix:
`[V, D] = eig(A);`\
`W = inv(V);`
The rows of W will be the left-eigenvectors of the matrix A.
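As a quick check, these pieces can be combined to verify the diagonalization $T^{-1}AT = D$ numerically. This is a minimal sketch using an arbitrary example matrix (an assumption, not taken from the text), and it assumes the matrix has a complete set of distinct eigenvectors:

``` matlab
A = [4, 1; 2, 3];     % example matrix with distinct eigenvalues (assumption)
[V, D] = eig(A);      % columns of V are right eigenvectors, D is diagonal
W = inv(V);           % rows of W are the left eigenvectors
D_check = W * A * V;  % should match D up to rounding error
```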
For more information about MATLAB, see the wikibook MATLAB
Programming.
# Control Systems/MATLAB
## MATLAB
**MATLAB** is a programming language that is specially designed for the
manipulation of matrices. Because of its computational power, MATLAB is
a tool of choice for many control engineers to design and simulate
control systems. This page is going to discuss using MATLAB for control
systems design and analysis. MATLAB has a number of plugin modules
called \"Toolboxes\". Nearly all the functions described below are
located in the **control systems toolbox**. If your system has the
control systems toolbox installed, you can get more information about
the toolbox by typing `help control` at the MATLAB prompt.
Also, there is an open-source competitor to MATLAB called **Octave**.
Octave is similar to MATLAB, but there are also some differences. This
page will focus on MATLAB, but another page could be added to focus on
Octave. As of September 10, 2006, all the MATLAB commands listed below have
been implemented in GNU Octave.
This page will show
MATLAB functions that can be used to perform different tasks.
### Input-Output Isolation
In a MIMO system, it can often be important to isolate a single
input-output pair for analysis. Each input corresponds to a single column
in the B matrix, and each output corresponds to a single row in the C
matrix. For instance, to isolate the 2nd input and the 3rd output, we
can create a system:
`sys = ss(A, B(:,2), C(3,:), D);`
This page will refer to this technique as \"input-output isolation\".
## Step Response
First, let\'s take a look at the classical approach, with the following
system:
$$G(s) = \frac{5s + 10}{s^2 + 4s + 5}$$
This system can effectively be modeled as two vectors of coefficients,
NUM and DEN:
`NUM = [5, 10]`\
`DEN = [1, 4, 5]`
Now, we can use the MATLAB **step** command to produce the step response
to this system:
`step(NUM, DEN, t);`
Where t is a time vector. If you do not request any output arguments on
the left-hand side, the step function will automatically produce a
graphical plot of the step response. If, however, you use the following
format:
`[y, x, t] = step(NUM, DEN, t);`
Then MATLAB will not produce a plot automatically, and you will have to
produce one yourself.
Here is a sample screenshot:
![Step response](Step_screen.jpg)
Now, let\'s look at the modern, state-space approach. If we have the
matrices A, B, C and D, we can plug these into the step function, as
shown:
`step(A, B, C, D);`
or, we can optionally include a vector for time, t:
`step(A, B, C, D, t);`
Again, if we request output arguments on the left-hand side, MATLAB
will not automatically produce a plot for us.
If we didn\'t get an automatic plot, and we want to produce our own, we
type:
`[y, x, t] = step(NUM, DEN, t);`
And then we can create a graph using the **plot** command:
`plot(t, y);`
y is the output magnitude of the step response, while x is the internal
state of the system from the state-space equations:
$$x' = Ax + Bu$$
$$y = Cx + Du$$
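Putting the classical pieces above together, here is a minimal sketch for the example system $G(s)$. The time vector range is an arbitrary assumption, and the control systems toolbox is assumed to be installed:

``` matlab
% Step response of G(s) = (5s + 10)/(s^2 + 4s + 5)
NUM = [5, 10];
DEN = [1, 4, 5];
t = 0:0.01:5;                    % assumed time vector, 0 to 5 seconds
[y, x, t] = step(NUM, DEN, t);   % no automatic plot when outputs are requested
plot(t, y);
xlabel('Time (s)'); ylabel('Amplitude');
```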
## Classical ↔ Modern
MATLAB contains features that can be used to automatically convert to
the state-space representation from the Laplace representation. This
function, **tf2ss**, is used as follows:
`[A, B, C, D] = tf2ss(NUM, DEN);`
Where NUM and DEN are the coefficient vectors of the numerator and
denominator of the transfer function, respectively.
In a similar vein, we can convert from the Laplace domain back to the
state-space representation using the **ss2tf** function, as such:
`[NUM, DEN] = ss2tf(A, B, C, D);`
Or, if the system has more than one input, we can specify which input to
use with an index parameter, iu:
`[NUM, DEN] = ss2tf(A, B, C, D, iu);`
The iu parameter must be provided when our system has more than one
input, but it does not need to be provided if we have only 1 input. This
form of the function produces the transfer functions from the selected
input to each output: NUM becomes a matrix, with one row of numerator
coefficients for each output, and DEN holds the common denominator
coefficients.
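As a short sketch, the example system from the Step Response section can be converted to state-space form and back; since it is a SISO system, no input index is needed:

``` matlab
NUM = [5, 10];
DEN = [1, 4, 5];
[A, B, C, D] = tf2ss(NUM, DEN);     % state-space realization of the transfer function
[num2, den2] = ss2tf(A, B, C, D);   % recover the transfer function coefficients
```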
## z-Domain Digital Filters
Let us now consider a digital system with the following generic transfer
function in the Z domain:
$$H(z) = \frac{n(z)}{d(z)}$$
Where n(z) and d(z) are the numerator and denominator polynomials of the
transfer function, respectively. The **filter** command can be used to
apply an input vector x to the filter. The output, y, can be obtained
from the following code:
`y = filter(n, d, x);`
The word \"filter\" may be a bit of a misnomer in this case, but the
fact remains that this is the method to apply an input to a digital
system. Once we have the output magnitude vector, we can plot it using
our plot command:
`plot(y);`
To get the step response of the digital system, we must first create a
step function using the **ones** command:
`u = ones(1, N);`
Where N is the number of samples that we want to take in our digital
system (not to be confused with \"n\", our numerator coefficient). Once
we have produced our unit step function, we can pass this function
through our digital filter as such:
`y = filter(n, d, u);`
And we can plot y:
`plot(y);`
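Here is a minimal sketch that puts these steps together. The filter coefficients are assumptions chosen for illustration, not values from the text:

``` matlab
n = [1, 0.5];          % assumed numerator coefficients of H(z)
d = [1, -0.8, 0.2];    % assumed denominator coefficients of H(z)
N = 50;                % number of samples to simulate
u = ones(1, N);        % unit step sequence
y = filter(n, d, u);   % apply the step input to the digital filter
plot(y);
```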
## State-Space Digital Filters
Likewise, we can analyze a digital system in the state-space
representation. If we have the following digital state relationship:
$$x[k + 1] = Ax[k] + Bu[k]$$
$$y[k] = Cx[k] + Du[k]$$
We can convert automatically to the pulse response using the **ss2tf**
function, that we used above:
`[NUM, DEN] = ss2tf(A, B, C, D);`
Then, we can filter it with our prepared unit-step sequence vector, u:
`y = filter(NUM, DEN, u);`
This will give us the step response of the digital system in the
state-space representation.
## Root Locus Plots
MATLAB supplies a useful, automatic tool for generating the root-locus
graph from a transfer function: the **rlocus** command. In the transfer
function domain, or the state space domain respectively, we have the
following uses of the function:
`rlocus(num, den);`
And:
`rlocus(A, B, C, D);`
These functions will automatically produce root-locus graphs of the
system. However, if we provide left-hand parameters:
`[r, K] = rlocus(num, den);`
Or:
`[r, K] = rlocus(A, B, C, D);`
The function won\'t produce a graph automatically, and you will need to
produce one yourself. There is also an optional additional parameter for
gain, K, that can be supplied:
`rlocus(num, den, K);`
Or:
`rlocus(A, B, C, D, K);`
If K is not supplied, MATLAB will supply an automatic gain value for
you.
Once we have our values \[r, K\], we can plot a root locus:
`plot(r);`
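For example, here is a minimal sketch using an assumed open-loop transfer function (the coefficients are illustrative, not from the text):

``` matlab
% Root locus of the assumed open-loop system G(s)H(s) = (s + 2)/(s^3 + 5s^2 + 6s)
num = [1, 2];
den = [1, 5, 6, 0];
[r, K] = rlocus(num, den);   % no automatic plot when outputs are requested
plot(r, 'x');                % complex data plots as real part vs. imaginary part
xlabel('Real axis'); ylabel('Imaginary axis');
```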
The **rlocus** command cannot be used with MIMO systems, so if your
system is a MIMO system, you must separate out your coefficient matrices
to isolate each separate Input-output pair, and graph each individually.
Here is a sample screenshot:
![Root-locus plot](Rlocus_screen.jpg)
## Digital Root-Locus
Creating a root-locus diagram for a digital system is exactly the same
as it is for a continuous system. The only difference is the
interpretation of the results, because the stability region for digital
systems is different from the stability region for continuous systems.
The same **rlocus** function can be used, in the same manner as is used
above.
## Bode Plots
MATLAB also offers a number of tools for examining the frequency
response characteristics of a system, both using Bode plots, and using
Nyquist charts. To construct a Bode plot from a transfer function, we
use the following command:
`[mag, phase, omega] = bode(NUM, DEN, omega);`
Or:
`[mag, phase, omega] = bode(A, B, C, D, iu, omega);`
Where \"omega\" is the frequency vector at which the magnitude and phase
responses are evaluated, and \"iu\" is the index of the input to use in
the state-space form. If we want to convert the magnitude data
into decibels, we can use the following conversion:
`magdb = 20 * log10(mag);`
This conversion should be known well enough by now that it doesn\'t
require explanation.
When talking about Bode plots in decibels, it makes the most sense (and
is the most common occurrence) to also use a logarithmic frequency
scale. To create such a logarithmic sequence in omega, we use the
**logspace** command, as such:
`omega = logspace(a, b, n);`
This command produces n points, spaced logarithmically, from $10^a$ up to
$10^b$.
If we use the bode command without left-hand arguments, MATLAB will
produce a graph of the bode phase and magnitude plots automatically.
The **bode** command, if used with a MIMO system, will use subplots to
produce all the input-output relationship graphs on a single plot
window. For a system with multiple inputs and multiple outputs, this can
become difficult to see clearly. In these cases, it is typically better
to separate out your coefficient matrices to isolate each individual
input-output pair.
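Putting these pieces together, here is a minimal sketch for the example transfer function from the Step Response section, assuming the classic NUM/DEN calling form shown above:

``` matlab
NUM = [5, 10];
DEN = [1, 4, 5];
omega = logspace(-1, 2, 200);                  % 200 points from 10^-1 to 10^2 rad/s
[mag, phase, omega] = bode(NUM, DEN, omega);
magdb = 20 * log10(mag);                       % convert magnitude to decibels
semilogx(omega, magdb);                        % logarithmic frequency axis
xlabel('Frequency (rad/s)'); ylabel('Magnitude (dB)');
```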
Here is a sample screenshot:
![Bode plot (frequency response)](Bode_screen.jpg)
## Nyquist Plots
In addition to Bode plots, we can create Nyquist charts by using the
**nyquist** command. The nyquist command operates in a similar manner to
the bode command (and other commands that we have used so far):
`[real, imag, omega] = nyquist(NUM, DEN, omega);`
Or:
`[real, imag, omega] = nyquist(A, B, C, D, iu, omega);`
Here, \"real\" and \"imag\" are vectors that contain the real and
imaginary parts of each point of the Nyquist diagram, and \"iu\" is the
index of the input to use. If we don\'t request the left-hand output
arguments, the nyquist command automatically produces a Nyquist plot for us.
Like the **bode** command, the **nyquist** command will use subplots to
display the input-output relations of MIMO systems on a single plot
window. If there are multiple input-output pairs, it can be difficult to
see the individual graphs.
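A minimal sketch along the same lines, again using the example coefficients carried over from earlier sections (the output variable names are chosen to avoid shadowing MATLAB's built-in `real` and `imag` functions):

``` matlab
NUM = [5, 10];
DEN = [1, 4, 5];
omega = logspace(-1, 2, 200);
[re, im] = nyquist(NUM, DEN, omega);   % real and imaginary parts of the frequency response
plot(re, im);
xlabel('Real axis'); ylabel('Imaginary axis');
```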
Here is a sample screenshot:
![Nyquist plot](Nyquist_screen.jpg)
## Lyapunov Equations
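The continuous-time Lyapunov equation $MA + A^TM = -N$ (see the List of Equations chapter) can be solved numerically with the **lyap** command from the control systems toolbox; **dlyap** handles the discrete-time version. A minimal sketch, using assumed example matrices:

``` matlab
A = [-2, 1; 0, -3];   % assumed stable system matrix
N = eye(2);           % assumed symmetric positive-definite matrix
M = lyap(A', N);      % lyap(X, Q) solves X*M + M*X' + Q = 0, so passing A' solves M*A + A'*M = -N
```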
## Controllability
A controllability matrix can be constructed using the **ctrb** command.
The controllability gramian can be constructed using the **gram**
command.
## Observability
An observability matrix can be constructed using the **obsv** command.
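As a brief sketch of how these commands fit together (the system matrices below are assumptions for illustration):

``` matlab
A = [0, 1; -2, -3];
B = [0; 1];
C = [1, 0];
Co = ctrb(A, B);                           % controllability matrix [B, A*B, ...]
Ob = obsv(A, C);                           % observability matrix [C; C*A; ...]
controllable = (rank(Co) == size(A, 1));   % full rank means the system is controllable
observable   = (rank(Ob) == size(A, 1));   % full rank means the system is observable
```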
## Empirical Gramians
Empirical gramians can be computed for linear and also nonlinear control
systems. The empirical gramian framework **emgr** allows the computation
of the controllability, observability and cross gramian; it is
compatible with **MATLAB** and **OCTAVE** and does not require the
*control systems toolbox*.
## Further reading
- Ogata, Katsuhiko, \"Solving Control Engineering Problems with
MATLAB\", Prentice Hall, New Jersey, 1994.
- MATLAB Programming.
- <http://octave.sourceforge.net/>
- MATLAB Category on
ControlTheoryPro.com
- Empirical Gramian Framework
# Control Systems/Glossary
The following is a listing of some of the most important terms from the
book, along with a short definition or description.
## A, B, C
Acceleration Error:The amount of steady state error of the system when stimulated by a unit parabolic input.\
Acceleration Error Constant:A system metric that determines that amount of acceleration error in the system.\
Adaptive Control:A branch of control theory where controller systems are able to change their response characteristics over time, as the input characteristics to the system change.\
Adaptive Gain: when the control gain is varied depending on system state or condition, such as a disturbance.\
Additivity:A system is additive if a sum of inputs results in a sum of outputs.\
Analog System:A system that is continuous in time and magnitude.\
ARMA: Autoregressive Moving Average.\
ATO: Analog Timed Output. Control loop output is correlated to a timed contact closure.\
A/M: Auto-Manual. Control modes, where auto typically means output is computer-driven, calculated while manual can be field-driven or merely using a static setpoint.
Bilinear Transform: a variant of the Z-transform.\
Block Diagram:A visual way to represent a system that displays individual system components as boxes, and connections between systems as arrows.\
Bode Plots:A set of two graphs, a \"magnitude\" and a \"phase\" graph, that are both plotted on log scale paper. The magnitude graph is plotted in decibels versus frequency, and the phase graph is plotted in degrees versus frequency. Used to analyze the frequency characteristics of the system.\
Bounded Input, Bounded Output:BIBO. If the input to the system is finite, then the output must also be finite. A condition for **stability**.
Cascade: When the output of a control loop is fed to/from another loop.\
Causal:A system whose output does not depend on future inputs. All physical systems must be causal.\
Classical Approach:See **Classical Controls**.\
Classical Controls:A control methodology that uses the transform domain to analyze and manipulate the **Input-Output** characteristics of a system.\
Closed Loop:a controlled system using feedback or feedforward\
Compensator:A Control System that augments the shortcomings of another system.\
Condition Number:\
Conditional Stability:A system with variable gain is conditionally stable if it is BIBO stable for certain values of gain, but not BIBO stable for other values of gain.\
Continuous-Time:A system or signal that is defined at all points t.\
Control Rate: the rate at which control is computed and any appropriate output sent. Lower bound is sample rate.\
Control System:A system or device that manages the behavior of another system or device.\
Controller:See **Control System**.\
Convolution:A complex operation on functions defined by the integral of the two functions multiplied together, and time-shifted.\
Convolution Integral:The integral form of the convolution operation.\
CQI: Control Quality Index, $=1-abs(PV-SP)/max[PVmax-SP,SP-PVmin]$, 1 being ideal.\
CV: Controlled variable
## D, E, F
Damping Ratio:A constant that determines the damping properties of a system.\
Deadtime: time shift between the output change and the related effect (typ. at least one control sample). One sees \"Lag\" used for this action sometimes.\
Digital:A system that is both **discrete-time**, and **quantized**.\
Direct action: target output increase is required to bring the process variable (PV) to setpoint (SP) when PV is below SP. Thus, PV increases with output increase directly.\
Discrete magnitude:See **quantized**.\
Discrete time:A system or signal that is only defined at specific points in time.\
Distributed:A system is distributed if it has both an infinite number of states, and an infinite number of state variables. See **Lumped**.\
Dynamic:A system is called dynamic if it has memory, so that its current output depends on previous inputs as well as the current input. See **Instantaneous**, **Memory**.
Eigenvalues:Solutions to the characteristic equation of a matrix. If the matrix is itself a function of time, the eigenvalues might be functions of time. In this case, they are frequently called **eigenfunctions**.\
Eigenvectors:The nullspace vectors of the characteristic equation for particular eigenvalues. Used to determine state-transitions, among other things.\
Euler\'s Formula:An equation that relates complex exponentials to complex sinusoids.\
Exponential Weighted Average (EWA): Apportions fractional weight to new and existing data to form a working average. Example EWA=0.70\*EWA+0.30\*latest, see Filtering.\
External Description:A description of a system that relates the input of the system to the output, without explicitly accounting for the internal states of the system.
Feedback:The output of the system is passed through some sort of processing unit H, and that result is fed into the plant as an input.\
Feedforward: when a priori knowledge is used to forecast at least part of the control response.\
Filtering (noise): Use of signal smoothing techniques to reject undesirable components like noise. Can be as simple as using exponential weighted averaging on the input.\
Final Value Theorem:A theorem that allows the steady-state value of a system to be determined from the transfer function.\
FOH:First order hold\
Frequency Response:The response of a system to sinusoids of different frequencies. The Fourier Transform of the impulse response.\
Fourier Transform:An integral transform, similar to the Laplace Transform, that analyzes the frequency characteristics of a system.
## G, H, I
Game Theory:A branch of study that is related to control engineering, and especially **optimal control**. Multiple competing entities, or \"players\" attempt to minimize their own cost, and maximize the cost of the opponents.\
Gain:A constant multiplier in a system that is typically implemented as an amplifier or attenuator. Gain can be changed, but is typically not a function of time. Adaptive control can use time-adaptive gains that change with time.\
General Description:An external description of a system that relates the system output to the system input, the system response, and a time constant through integration.
Hendrik Wade Bode:Electrical Engineer, did work in control theory and communications. Is primarily remembered in control engineering for his introduction of the **bode plot**.\
Harry Nyquist:Electrical Engineer, did extensive work in controls and information theory. Is remembered in this book primarily for his introduction of the **Nyquist Stability Criterion**.\
Homogeneity:Property of a system whose scaled input results in an equally scaled output.\
Hybrid Systems:Systems which have both analog and digital components.
Impulse:A function denoted δ(t), that is the derivative of the unit step.\
Impulse Response:The system output when the system is stimulated by an impulse input. The Inverse Laplace Transform of the transfer function of the system.\
Initial Conditions:The conditions of the system at time $t = t_0$, where $t_0$ is the first time the system is stimulated.\
Initial Value Theorem:A theorem that allows the initial conditions of the system to be determined from the Transfer function.\
Input-Output Description:See **external description**.\
Instantaneous:A system is instantaneous if the system doesn\'t have memory, and if the current output of the system is only dependent on the current input. See **Dynamic**, **Memory**.\
Integrated Absolute Error (IAE):absolute error (ideal vs actual performance) is integrated over the analysis period.\
Integrated Squared Error (ISE):squared error (ideal vs actual performance) is integrated over the analysis period.\
Integrators:A system pole at the origin of the S-plane. Has the effect of integrating the system input.\
Inverse Fourier Transform:An integral transform that converts a function from the frequency domain into the time-domain.\
Inverse Laplace Transform:An integral transform that converts a function from the S-domain into the time-domain.\
Inverse Z-Transform:An integral transform that converts a function from the Z-domain into the discrete time domain.
## J, K, L
Lag: The observed process impact from an output is slower than the control rate.\
Laplace Transform:An integral transform that converts a function from the time domain into a complex frequency domain.\
Laplace Transform Domain:A complex domain where the Laplace Transform of a function is graphed. The imaginary part of **s** is plotted along the vertical axis, and the real part of **s** is plotted along the horizontal axis.\
Left Eigenvectors:Left-hand nullspace solutions to the characteristic equation of a matrix for given eigenvalues. The rows of the inverse transition matrix.\
Linear:A system that satisfies the **superposition principle**. See **Additive** and **Homogeneous**.\
Linear Time-Invariant: LTI. See **Linear**, and **Time-Invariant**.\
Low Clamp: User-applied lower bound on control output signal.\
L/R: Local/Remote operation.\
LQR: Linear Quadratic Regulator.\
Lumped:A system with a finite number of states, or a finite number of state variables.
## M, N, O
Magnitude: the gain component of frequency response. This is often all that is considered in saying a discrete filter\'s response is well matched to the analog\'s. It is the DC gain at 0 frequency.\
Marginal Stability:A system has an oscillatory response, as determined by having imaginary poles or imaginary eigenvalues.\
Mason\'s Rule:A formula for computing the transfer function of a system directly from its signal flow graph.\
MATLAB: Commercial software having a Control Systems toolbox. Also see Octave.\
Memory:A system has memory if its current output is dependent on previous and current inputs.\
MFAC:Model Free Adaptive Control.\
MIMO:A system with multiple inputs and multiple outputs.\
Modern Approach:see **modern controls**\
Modern Controls:A control methodology that uses the state-space representation to analyze and manipulate the **Internal Description** of a system.\
Modified Z-Transform:A version of the Z-Transform, expanded to allow for an arbitrary processing delay.\
MPC: Model Predictive Control.\
MRAC: Model Reference Adaptive Control.\
MV: can denote Manipulated variable or Measured variable (not the same)
Natural Frequency:The fundamental frequency of the system, the frequency for which the system\'s frequency response is largest.\
Negative Feedback:A feedback system where the output signal is subtracted from the input signal, and the difference is input to the plant.\
The Nyquist Criteria:A necessary and sufficient condition of stability that can be derived from **Bode plots**.\
Nonlinear Control:A branch of control engineering that deals exclusively with non-linear systems. We do not cover nonlinear systems in this book.
OCTAVE: Open-source software having a Control Systems toolbox. Also see MATLAB.\
Offset: The discrepancy between desired and actual value after settling. P-only control can give offset.\
Oliver Heaviside:Electrical Engineer, Introduced the Laplace Transform as a tool for control engineering.\
Open Loop: when the system is not closed, its behavior has a free-running component rather than controlled\
Optimal Control:A branch of control engineering that deals with the minimization of system cost, or maximization of system performance.\
Order:The order of a polynomial is the highest exponent of the independent variable in that polynomial. The order of a system is the order of the Transfer Function\'s denominator polynomial.\
Output equation:An equation that relates the current system input, and the current system state to the current system output.\
Overshoot:measures the extent of system response against desired (setpoint tracking).
## P, Q, R
Parabolic:A parabolic input is defined by the equation$\frac{1}{2}t^2u(t)$.\
Partial Fraction Expansion:A method by which a complex fraction is decomposed into a sum of simple fractions.\
Percent Overshoot:PO, the amount by which the step response overshoots the reference value, in percentage of the reference value.\
Phase: the directional component of frequency response, not typically well-matched between a discrete filter equivalent to the analog version, especially as frequency approaches the Nyquist limit. The final value in the limit drives system stability, and stems from the poles and zeros of the characteristic equation.\
PID:Proportional-Integral-Derivative\
Plant:A central system which has been provided, and must be analyzed or controlled.\
PLC:Programmable Logic Controller\
Pole:A value for s that causes the denominator of the transfer function to become zero, and therefore causes the transfer function itself to approach infinity.\
Pole-Zero Form:The transfer function is factored so that the locations of all the poles and zeros are clearly evident.\
Position Error:The amount of steady-state error of a system stimulated by a unit step input.\
Position Error Constant:A constant that determines the position error of a system.\
Positive Feedback:A feedback system where the system output is added to the system input, and the sum is input into the plant.\
PSD:The power spectral density which shows the distribution of power in the spectrum of a particular signal.\
Pulse Response:The response of a digital system to a unit pulse input, in terms of the transfer matrix.\
PV: Process variable
Quantized:A system is quantized if it can only output certain discrete values.\
Quarter-decay: the time or number of control rates required for process overshoot to be limited to within 1/4 of the maximum peak overshoot (PO) after a SP change. If the PO is 25% at sample time N, this would be time N+k when subsequent PV remains \< SP\*1.0625, presuming the process is settling.
Raise-Lower: Output type that works from present position rather than as a completely new computed spanned output. For R/L, the % change should be applied to the working clamps i.e. 5%(hi clamp-lo clamp).\
Ramp:A ramp is defined by the function $tu(t)$.\
Reconstructors:A system that converts a digital signal into an analog signal.\
Reference Value:The target input value of a feedback system.\
Relaxed:A system is relaxed if the initial conditions are zero.\
Reverse action: target output decrease is required to bring the process variable (PV) to setpoint (SP) when PV is below SP. Thus, PV decreases with output increase.\
Rise Time:The amount of time it takes for the step response of the system to reach within a certain range of the reference value. Typically, this range is 80% (from 10% to 90% of the final value).\
Robust Control:A branch of control engineering that deals with systems subject to external and internal noise and disruptions.
## S, T, U, V
Samplers:A system that converts an analog signal into a digital signal.\
Sampled-Data Systems:See *Hybrid Systems*.\
Sampling Time:In a discrete system, the sampling time is the amount of time between samples. Reflects the lower bound for Control rate.\
SCADA: Supervisory Control and Data Acquisition.\
S-Domain:The domain of the Laplace Transform of a signal or system.\
Second-order System;\
Settling Time:The amount of time it takes for the system\'s oscillatory response to be damped to within a certain band of the steady-state value. That band is typically 10%.\
Signal Flow Diagram:A method of visually representing a system, using arrows to represent the direction of signals in the system.\
SISO: Single input, single output.\
Span: the designed operation region of the item,=high range-low range. Working span can be smaller if output clamps are used.\
Stability:Typically \"BIBO Stability\", a system with a well-behaved input will result in a well-behaved output. \"Well-behaved\" in this sense is arbitrary.\
Star Transform:A version of the Laplace Transform that acts on discrete signals. This transform is implemented as an infinite sum.\
State Equation:An equation that relates the future states of a system with the current state and the current system input.\
State Transition Matrix:A coefficient matrix, or a matrix function that relates how the system state changes in response to the system input. In time-invariant systems, the state-transition matrix is the matrix exponential of the system matrix.\
State-Space Equations:A set of equations, typically written in matrix form, that relates the input, the system state, and the output. Consists of the state equation and the output equation.\
State-Variable:A vector that describes the internal state of the system.\
Stability:The system output cannot approach infinity as time approaches infinity. See **BIBO**, **Lyapunov Stability**.\
Step Response:The response of a system when stimulated by a unit-step input. A unit step is a setpoint change for setpoint tracking.\
Steady State:The output value of the system as time approaches infinity.\
Steady State Error:At steady state, the amount by which the system output differs from the reference value.\
Superposition:A system satisfies the condition of superposition if it is both additive and homogeneous.\
System Identification: method of trying to identify the system characterization, typically through least-squares analysis of input, output, and noise data vectors. May use an ARMA-type framework.\
System Type:The number of ideal integrators in the system.
Time-Invariant:A system is time-invariant if an input time-shifted by an arbitrary delay produces an output shifted by that same delay.\
Transfer Function:The ratio of the system output to its input, in the S-domain. The Laplace Transform of the function\'s impulse response.\
Transfer Function Matrix:The Laplace transform of the state-space equations of a system, that provides an external description of a MIMO system.
Uniform Stability:Also \"Uniform BIBO Stability\", a system where an input signal in the range \[0, 1\] results in a finite output from the initial time until infinite time.\
Unit Step:An input defined by $u(t)$. Practically, a setpoint change.\
Unity Feedback:A feedback system where the feedback loop element H has a transfer function of 1.
Velocity Error:The amount of steady-state error when the system is stimulated by a ramp input.\
Velocity Error Constant:A constant that determines that amount of velocity error in a system.
## W, X, Y, Z
W-plane: Reference plane used in the bilinear transform.\
Wind-up: when the numerics of the computed control adjustment can \"wind up\", yielding a control correction with an inappropriate component unless prevented. An example is the \"I\" contribution of PID if the output has been disconnected during the PID calculation.
Zero:A value for s that causes the numerator of the transfer function to become zero, and therefore causes the transfer function itself to become zero.\
Zero Input Response:The response of a system with zero external input. Relies only on the value of the system state to produce output.\
Zero State Response:The response of the system with zero system state. The output of the system depends only on the system input.\
ZOH: Zero order hold.\
Z-Transform:An integral transform that is related to the Laplace transform through a change of variables. The Z-Transform is used primarily with digital systems.
# Control Systems/List of Equations
The following is a list of the important equations from the text,
arranged by subject. For more information about these equations,
including the meaning of each variable and symbol, the uses of these
functions, or the derivations of these equations, see the relevant pages
in the main text.
## Fundamental Equations
$$e^{j\omega} = \cos(\omega) + j\sin(\omega)$$
$$(a*b)(t) = \int_{-\infty}^\infty a(\tau)b(t - \tau)d\tau$$
$$\mathcal{L}[f(t) * g(t)] = F(s)G(s)$$
$$\mathcal{L}[f(t)g(t)] = F(s) * G(s)$$
$$|A - \lambda I| = 0$$
$$Av = \lambda v$$
$$wA = \lambda w$$
$$dB = 20 \log(C)$$
## Basic Inputs
$$u(t) = \left\{
\begin{matrix}
0, & t < 0
\\
1, & t \ge 0
\end{matrix}\right.$$
$$r(t) = t u(t)$$
$$p(t) = \frac{1}{2}t^2 u(t)$$
## Error Constants
$$K_p = \lim_{s \to 0} G(s)$$
$$K_p = \lim_{z \to 1} G(z)$$
$$K_v = \lim_{s \to 0} s G(s)$$
$$K_v = \lim_{z \to 1} (z - 1) G(z)$$
$$K_a = \lim_{s \to 0} s^2 G(s)$$
$$K_a = \lim_{z \to 1} (z - 1)^2 G(z)$$
## System Descriptions
$$y(t) = \int_{-\infty}^\infty g(t, r)x(r)dr$$
$$y(t) = x(t) * h(t) = \int_{-\infty}^\infty x(\tau)h(t - \tau)d\tau$$
$$Y(s) = H(s)X(s)$$
$$Y(z) = H(z)X(z)$$
$$x'(t) = A x(t) + B u(t)$$
$$y(t) = C x(t) + D u(t)$$
$$C[sI - A]^{-1}B + D = \mathbf{H}(s)$$
$$C[zI - A]^{-1}B + D = \mathbf{H}(z)$$
$$\mathbf{Y}(s) = \mathbf{H}(s)\mathbf{U}(s)$$
$$\mathbf{Y}(z) = \mathbf{H}(z)\mathbf{U}(z)$$
$$M = \frac{y_{out}}{y_{in}} = \sum_{k=1}^N \frac{M_k \Delta_k}{\Delta}$$
## Feedback Loops
$$H_{cl}(s) = \frac{KGp(s)}{1 + KGp(s)Gb(s)}$$
$$H_{ol}(s) = KGp(s)Gb(s)$$
$$F(s) = 1 + H_{ol}(s)$$
## Transforms
$$F(s) = \mathcal{L}[f(t)] = \int_0^\infty f(t) e^{-st}dt$$
$$f(t)
= \mathcal{L}^{-1} \left\{F(s)\right\}
= {1 \over {2\pi}}\int_{c-i\infty}^{c+i\infty} e^{st} F(s)\,ds$$
$$F(j\omega) = \mathcal{F}[f(t)] = \int_0^\infty f(t) e^{-j\omega t} dt$$
$$f(t)
= \mathcal{F}^{-1}\left\{F(j\omega)\right\}
= \frac{1}{2\pi}\int_{-\infty}^\infty F(j\omega) e^{j\omega t} d\omega$$
$$F^*(s) = \mathcal{L}^*[f(t)] = \sum_{i = 0}^\infty f(iT)e^{-siT}$$
$$X(z) = \mathcal{Z}\left\{x[n]\right\} = \sum_{n = -\infty}^\infty x[n] z^{-n}$$
$$x[n] = \mathcal{Z}^{-1} \{X(z)\} = \frac{1}{2 \pi j} \oint_{C} X(z) z^{n-1} dz$$
$$X(z, m) = \mathcal{Z}(x[n], m) = \sum_{n = -\infty}^{\infty} x[n + m - 1]z^{-n}$$
## Transform Theorems
$$x(\infty) = \lim_{s \to 0} s X(s)$$
$$x[\infty] = \lim_{z \to 1} (z - 1) X(z)$$
$$x(0) = \lim_{s \to \infty} s X(s)$$
## State-Space Methods
$$x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t}e^{A(t - \tau)}Bu(\tau)d\tau$$
$$x[n] = A^nx[0] + \sum_{m=0}^{n-1}A^{n-1-m}Bu[m]$$
$$y(t) = Ce^{A(t-t_0)}x(t_0) + C\int_{t_0}^{t}e^{A(t - \tau)}Bu(\tau)d\tau + Du(t)$$
$$y[n] = CA^nx[0] + \sum_{m=0}^{n-1}CA^{n-1-m}Bu[m] + Du[n]$$
$$x(t) = \phi(t, t_0)x(t_0) + \int_{t_0}^{t} \phi(t, \tau)B(\tau)u(\tau)d\tau$$
$$x[n] = \phi[n, n_0]x[n_0] + \sum_{m = n_0}^{n-1} \phi[n, m+1]B[m]u[m]$$
$$G(t, \tau) = \left\{\begin{matrix}C(t)\phi(t, \tau)B(\tau) & \mbox{ if } t \ge \tau \\0 & \mbox{ if } t < \tau\end{matrix}\right.$$
$$G[k] = \left\{\begin{matrix}CA^{k-1}B & \mbox{ if } k > 0 \\ 0 & \mbox{ if } k \le 0\end{matrix}\right.$$
## Root Locus
$$1 + KG(s)H(s) = 0$$
$$1 + K\overline{GH}(z) = 0$$
$$\angle KG(s)H(s) = 180^\circ$$
$$\angle K\overline{GH}(z) = 180^\circ$$
$$N_a = P - Z$$
$$\phi_k = (2k + 1)\frac{\pi}{P - Z}$$
$$\sigma_0 = \frac{\sum_P - \sum_Z}{P - Z}$$
$$\frac{d}{ds}G(s)H(s) = 0$$ or $\frac{d}{dz}\overline{GH}(z) = 0$
## Lyapunov Stability
$$MA + A^TM = -N$$
## Controllers and Compensators
$$D(s) = K_p + {K_i \over s} + K_d s$$
$$D(z) = K_p + K_i \frac{T}{2} \left[ \frac{z + 1}{z - 1} \right] + K_d \left[ \frac{z - 1}{Tz} \right]$$
# X86 Disassembly/Introduction
## What Is This Book About?
This book is about the disassembly of x86 machine code into
human-readable assembly, and the decompilation of x86 assembly code into
human-readable C or C++ source code. Some topics covered will be common
to all computer architectures, not just x86-compatible machines.
## What Will This Book Cover?
This book is going to look in-depth at the disassembly and decompilation
of x86 machine code and assembly code. We are going to look at the way
programs are made using assemblers and compilers, and examine the way
that assembly code is made from C or C++ source code. Using this
knowledge, we will try to reverse the process. By examining common
structures, such as data and control structures, we can find patterns
that enable us to disassemble and decompile programs quickly.
## Who Is This Book For?
This book is for readers at the undergraduate level with experience
programming in x86 Assembly and C or C++. This book is not designed to
teach assembly language programming, C or C++ programming, or
compiler/assembler theory.
## What Are The Prerequisites?
The reader should have a thorough understanding of x86
Assembly, C
Programming, and possibly C++
Programming. This book is intended to
increase the reader\'s understanding of the relationship between x86
machine code, x86 Assembly Language, and the C Programming Language. If
you are not too familiar with these topics, you may want to reread some
of the above-mentioned books before continuing.
## What is Disassembly?
Computer programs are written originally in a human readable code form,
such as assembly language or a high-level language. These programs are
then compiled into a binary format called **machine code**. This binary
format is not directly readable or understandable by humans. Many
programs, such as malware, proprietary commercial programs, or very
old legacy programs, may not have the source code available to you.
Programs frequently perform tasks that need to be duplicated, or need to
be made to interact with other programs. Without the source code and
without adequate documentation, these tasks can be difficult to
accomplish. This book outlines tools and techniques for attempting to
convert the raw machine code of an executable file into equivalent code
in assembly language and the high-level languages C and C++. With the
high-level code to perform a particular task, several things become
possible:
1. Programs can be ported to new computer platforms, by compiling the
source code in a different environment.
2. The algorithm used by a program can be determined. This allows other
programs to make use of the same algorithm, or for updated versions
of a program to be rewritten without needing to track down old
copies of the source code.
3. Security holes and vulnerabilities can be identified and patched by
users without needing access to the original source code.
4. New interfaces can be implemented for old programs. New components
can be built on top of old components to speed development time and
reduce the need to rewrite large volumes of code.
5. We can figure out what a piece of malware does. We hope this leads
us to figuring out how to block its harmful effects. Unfortunately,
some malware writers use self-modifying code techniques (polymorphic
camouflage, XOR encryption, scrambling), apparently to make it
difficult to even detect that malware, much less disassemble it.
Disassembling code has a large number of practical uses. One of the
positive side effects of it is that the reader will gain a better
understanding of the relation between machine code, assembly language,
and high-level languages. Having a good knowledge of these topics will
help programmers to produce code that is more efficient and more secure.
## References

- \"How does a crypter for bypass antivirus detection work?\"
# X86 Disassembly/Assemblers and Compilers
## Assemblers
**Assemblers** are
significantly simpler than compilers, and are often implemented to
simply translate the assembly code to binary machine code via one-to-one
correspondence. Assemblers rarely optimize beyond choosing the shortest
form of an instruction or filling delay slots.
Because assembly is such a simple process, disassembly can often be just
as simple. Assembly instructions and machine code words have a
one-to-one correspondence, so each machine code word will exactly map to
one assembly instruction. However, disassembly has some other
difficulties which cannot be accounted for using simple code-word
lookups. We will introduce assemblers here, and talk about disassembly
later.
## Assembler Concepts
Assemblers, on a most basic level, translate assembly instructions into
machine code with a one to one correspondence. They can also translate
named variables into hard-coded memory addresses and labels into their
relative code addresses.
Assemblers, in general, do not perform code optimization. The machine
code that comes out of an assembler is equivalent to the assembly
instructions that go into the assembler. Some assemblers have high-level
capabilities in the form of *Macros.*
Some information about the program is lost during the assembly process.
First and foremost, program data is stored in the same raw binary format
as the machine code instructions. This means that it can be difficult to
determine which parts of the program are actually instructions. Notice
that you can disassemble raw data, but the resultant assembly code will
be nonsensical. Second, textual information from the assembly source
code file, such as variable names, label names, and code comments are
all destroyed during assembly. When you disassemble the code, the
instructions will be the same, but all the other helpful information
will be lost. The code will be accurate, but more difficult to read.
Compilers, as we will see later, cause even more information to be lost,
and decompiling is often so difficult and convoluted as to become nearly
impossible to do accurately.
## Intel Syntax Assemblers
Because of the pervasiveness of Intel-based IA-32 microprocessors in the
home PC market, the majority of assembly work done (and the majority of
assembly work considered in this wikibook) is x86-based. Many of these
assemblers (or new versions of them) can handle amd64/x86_64/EMT64 code
as well, although this wikibook will focus primarily on 32 bit
(x86/IA-32) code examples.
### MASM
MASM is Microsoft\'s assembler, an abbreviation for \"Macro Assembler.\"
However, many people use it as an acronym for \"Microsoft Assembler,\"
and in practice the difference doesn\'t matter. MASM has a powerful macro
feature, and can be used to write both very low-level syntax and
pseudo-high-level code through that macro feature. MASM 6.15 is currently
available as a free download from Microsoft, and MASM 7.xx is currently
available as part of the Microsoft platform DDK.
- MASM uses Intel Syntax.
- MASM is used by Microsoft to implement some low-level portions of
its Windows Operating systems.
- MASM, contrary to popular belief, has been in constant development
  since 1980, and is upgraded on an as-needed basis.
- Microsoft has always kept MASM compatible with its current
  platforms and executable file types.
- MASM currently supports all Intel instruction sets, including SSE2.
Many users love MASM, but many more still dislike the fact that it
isn\'t portable to other systems.
### TASM
TASM, Borland\'s \"Turbo Assembler,\" is a functional assembler from
Borland that integrates seamlessly with Borland\'s other software
development tools. Current release version is version 5.0. TASM syntax
is very similar to MASM, although it has an \"IDEAL\" mode that many
users prefer. TASM is not free.
### NASM
NASM, the \"Netwide Assembler,\" is a free, portable, and retargetable
assembler that works on both Windows and Linux. It supports a variety of
Windows and Linux executable file formats, and even outputs pure binary.
NASM is not as \"mature\" as either MASM or TASM, but is:
- more portable than MASM
- cheaper than TASM
- strives to be very user-friendly
NASM comes with its own disassembler `ndisasm`, and supports 64-bit
(x86-64/x64/AMD64/Intel 64) CPUs.
NASM is released under the LGPL.
### FASM
FASM, the \"Flat Assembler\" is an open source assembler that supports
x86, and IA-64 Intel architectures.
## (x86) AT&T Syntax Assemblers
AT&T syntax for x86 microprocessor assembly code is not as common as
Intel-syntax, but the GNU Assembler (GAS) uses it, and it is the *de
facto* assembly standard on Unix and Unix-like operating systems.
### GAS
The GNU Assembler (GAS) is the default
back-end to the GNU Compiler Collection (GCC) suite. As such, GAS is as
portable and retargetable as GCC is. However, GAS uses the AT&T syntax
for its instructions as default, which some users find to be less
readable than Intel syntax. Newer versions of gas can be switched to
Intel syntax with the directive \".intel_syntax noprefix\".
GAS is developed specifically to be used as the GCC backend. Because GCC
always feeds it syntactically correct code, GAS often has minimal error
checking.
GAS is available as a part of either the GCC package or the GNU binutils
package.
## Other Assemblers
### HLA
HLA, short for \"High Level
Assembler\" is a project spearheaded by Randall Hyde to create an
assembler with high-level syntax. HLA works as a front-end to other
assemblers such as FASM (the default), MASM, NASM, and GAS. HLA supports
\"common\" assembly language instructions, but also implements a series
of higher-level constructs such as loops, if-then-else branching, and
functions. HLA comes complete with a comprehensive standard library.
Since HLA works as a front-end to another assembler, the programmer must
have another assembler installed to assemble programs with HLA. HLA code
output, therefore, is only as good as the underlying assembler, but the code
is much easier for the developer to write. The high-level components of
HLA may make programs less efficient, but that cost is often far
outweighed by the ease of writing the code. HLA high-level syntax is
very similar in many respects to Pascal, which in turn is itself similar
in many respects to C, so many high-level programmers will immediately
pick up many of the aspects of HLA.
Here is an example of some HLA code:
``` cpp
mov(src, dest); // C++ style comments
pop(eax);
push(ebp);
for(mov(0, ecx); ecx < 10; inc(ecx)) do
mul(ecx);
endfor;
```
Some disassemblers and debuggers can disassemble binary code into
HLA-format, although none can faithfully recreate the HLA macros.
## Compilers
A compiler is a program that
converts instructions from one language into equivalent instructions in
another language. There is a common misconception that a compiler always
directly converts a high level language into machine language, but this
isn\'t always the case. Many compilers convert code into assembly
language, and a few even convert code from one high level language into
another. Common examples of compiled languages are: C/C++, Fortran, Ada,
and Visual Basic. The figure below shows the common compile-time steps
to building a program using the C programming language. The compiler
produces object files which are linked to form the final executable:
![](C_language_building_steps.png "C_language_building_steps.png")
For the purposes of this book, we will only be considering the case of a
compiler that converts C or C++ into assembly code or machine language.
Some compilers, such as the Microsoft C compiler, compile C and C++
source code directly into machine code. GCC on the other hand compiles C
and C++ into assembly language, and an assembler is used to convert that
into the appropriate machine code. From the standpoint of a
disassembler, it does not matter exactly how the original program was
created. Notice also that it is not possible to exactly reproduce the C
or C++ code used originally to create an executable. It is, however,
possible to create code that compiles identically, or code that performs
the same task.
C language statements do not share a one to one relationship with
assembly language. Consider that the following C statements will
typically all compile into the same assembly language code:
``` C
*arrayA = arrayB[x++];
*arrayA = arrayB[x]; x++;
arrayA[0] = arrayB[x++];
arrayA[0] = arrayB[x]; x++;
```
Also, consider how the following loop constructs perform identical
tasks, and are likely to produce similar or even identical assembly
language code:
``` c
for(;;) { ... }
while(1) { ... }
do { ... } while(1);
```
## Common C/C++ Compilers
The purpose of this section is to list some of the most common C and
C++
compilers in use for developing *production-level* software. There are
many many C compilers in the world, but the reverser doesn\'t need to
consider all cases, especially when looking at professional software.
This page will discuss each compiler\'s strengths and weaknesses, its
availability (download sites or cost information), and it will also
discuss how to generate an assembly listing file from each compiler.
### Microsoft C Compiler
The Microsoft C compiler is available from Microsoft for free as part of
the Windows Server 2003 SDK. It is the same compiler and library as is
used in MS Visual Studio, but doesn\'t come with the fancy IDE. The MS C
Compiler has a very good optimizing engine. It compiles C and C++, and
has the option to compile C++ code into MSIL (the .NET bytecode).
Microsoft\'s compiler only supports Windows systems, and
Intel-compatible 16/32/64 bit architectures.
The Microsoft C compiler is **cl.exe** and the linker is **link.exe**
#### Listing Files
In this wikibook, cl.exe is frequently used to produce assembly listing
files of C source code. To produce an assembly listing file yourself,
use the syntax:
`cl.exe /Fa<assembly file name> <C source file>`
The \"/Fa\" switch is the command-line option that tells the compiler to
produce an assembly listing file.
For example, the following command line:
`cl.exe /FaTest.asm Test.c`
would produce an assembly listing file named \"Test.asm\" from the C
source file \"Test.c\". Notice that there is no space between the /Fa
switch and the name of the output file.
### GNU C Compiler
The GNU C compiler is part of the GNU Compiler Collection (GCC) suite.
This compiler is available for most systems and it is free software.
Many people use it exclusively so that they can support many platforms
with just one compiler to deal with. The GNU GCC Compiler is the *de
facto* standard compiler for Linux and Unix systems. It is retargetable,
allowing for many input languages (C, C++, Obj-C, Ada, Fortran,
etc\...), and supporting multiple target OSes and architectures. It
optimizes well, but has a non-aggressive IA-32 code generation engine.
The GCC frontend program is \"gcc\" (\"gcc.exe\" on Windows) and the
associated linker is \"ld\" (\"ld.exe\" on Windows). Windows cmd searches
for programs with the \".exe\" extension automatically, so you don\'t
need to type the filename extension.
#### Listing Files
To produce an assembly listing file in GCC, use the following command
line syntax:
`gcc -S /path/to/sourcefile.c`
For example, the following commandline:
`gcc -S test.c`
will produce an assembly listing file named "test.s". Assembly listing
files generated by GCC will be in GAS format. On x86 you can select the
syntax with `-masm=intel` or `-masm=att`. GCC listing files are
frequently not as well commented and laid-out as are the listing files
for cl.exe.
You may add the `-g3` flag to enable source-level debugging information
so that the source line numbers appear in the listing. The
`-fno-asynchronous-unwind-tables` flag can help eliminate some of the
`.cfi` unwind directives that otherwise clutter the listing.
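For a feel of what GAS-format output looks like, here is roughly what `gcc -S` might emit for a trivial 32-bit function that just returns zero; the exact labels and directives vary by GCC version and target, so treat this as an illustrative sketch only:

``` asm
        .text
        .globl  main
main:
        pushl   %ebp            # prologue: save the old frame pointer
        movl    %esp, %ebp      # set up the new frame pointer
        movl    $0, %eax        # return value 0
        popl    %ebp            # epilogue: restore the frame pointer
        ret
```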
### Intel C Compiler
This compiler is used only for x86, x86-64, and IA-64 code. It is
available for both Windows and Linux. The Intel C compiler was written
by the people who invented the original x86 architecture: Intel.
Intel\'s development tools generate code that is tuned to run on Intel
microprocessors, and is intended to squeeze every last ounce of speed
from an application. AMD IA-32 compatible processors are not guaranteed
to get the same speed boosts because they have different internal
architectures.
### Metrowerks CodeWarrior
This compiler is commonly used for classic MacOS and for embedded
systems. If you try to reverse-engineer a piece of consumer electronics,
you may encounter code generated by Metrowerks CodeWarrior.
### Green Hills Software Compiler
This compiler is commonly used for embedded systems. If you attempt to
reverse-engineer a piece of consumer electronics, you may encounter code
generated by Green Hills C/C++.
# X86 Disassembly/Disassemblers and Decompilers
## What is a Disassembler?
In essence, a **disassembler** is the exact opposite of an assembler.
Where an assembler converts code written in an assembly language into
binary machine code, a disassembler reverses the process and attempts to
recreate the assembly code from the binary machine code.
Since most assembly languages have a one-to-one correspondence with
underlying machine instructions, the process of disassembly is
relatively straight-forward, and a basic disassembler can often be
implemented simply by reading in bytes, and performing a table lookup.
Of course, disassembly has its own problems and pitfalls, and they are
covered later in this chapter.
Many disassemblers have the option to output assembly language
instructions in Intel, AT&T, or (occasionally) HLA syntax. Examples in
this book will use Intel and AT&T syntax interchangeably. We will
typically not use HLA syntax for code examples, but that may change in
the future.
## x86 Disassemblers
Here we are going to list some commonly available disassembler tools.
Notice that there are professional disassemblers (which cost money for a
license) and there are freeware/shareware disassemblers. Each
disassembler will have different features, so it is up to you as the
reader to determine which tools you prefer to use.
### Online Disassemblers
ODA: is a free, web-based disassembler for a wide variety of architectures. You can use \"Live View\" to see how code is disassembled in real time, one byte at a time, or upload a file. The site is currently in beta release but will hopefully only get better with time.
: <http://www.onlinedisassembler.com>
### Commercial Windows Disassemblers
IDA Pro: is a professional disassembler that is expensive, extremely powerful, and has a whole slew of features. The downside to IDA Pro is that it costs \$515 US for the standard single-user edition. As such this wikibook will not consider IDA Pro specifically because the price tag is exclusionary. Freeware versions do exist; see below.
- (version 6.x) <http://www.hex-rays.com/idapro/>
Relyze Desktop: is an interactive software reverse engineering tool that lets you disassemble, decompile and diff x86, x64, ARM32 and ARM64 software.
: <https://www.relyze.com/overview.html>
```{=html}
<!-- -->
```
Hopper Disassembler: is a reverse engineering tool for the Mac, that lets you disassemble, decompile and debug 32/64bits Intel Mac executables. It can also disassemble and decompile Windows executables.
: <http://www.hopperapp.com>
```{=html}
<!-- -->
```
OBJ2ASM: is an object file disassembler for 16 and 32 bit x86 object files in Intel OMF, Microsoft COFF format, Linux ELF or Mac OS X Mach-O format.
: <http://www.digitalmars.com/ctg/obj2asm.html>
```{=html}
<!-- -->
```
PE Explorer: is a disassembler that \"focuses on ease of use, clarity and navigation.\" It isn\'t as feature-filled as IDA Pro and carries a smaller price tag to offset the missing functionality: \$130
: <http://www.heaventools.com/PE_Explorer_disassembler.htm>
```{=html}
<!-- -->
```
W32DASM (Win32dasm): W32DASM was an excellent 16/32 bit disassembler for Windows, but it appears to be no longer developed. The latest version available is from 2003; the website went down and no replacement appeared.
: <http://www.softpedia.com/get/Programming/Debuggers-Decompilers-Dissasemblers/WDASM.shtml>
```{=html}
<!-- -->
```
Binary Ninja: Binary Ninja is a commercial, cross-platform (Linux, OS X, Windows) reverse engineering platform with aims to offer a similar feature set to IDA at a much cheaper price point. A precursor written in python is open source and available at <https://github.com/Vector35/deprecated-binaryninja-python>. Introductory pricing is \$99 for student/non-commercial use, and \$399 for commercial use.
: <https://binary.ninja/>
```{=html}
<!-- -->
```
Hiew: x86-64 disassembler & assembler. Single license pricing is \$19, and \$199 with lifetime updates.
: hiew.ru
### Commercial Freeware/Shareware Windows Disassemblers
OllyDbg: OllyDbg is one of the most popular disassemblers recently. It has a large community and a wide variety of plugins available. It emphasizes binary code analysis. Supports x86 instructions only (no x86_64 support for now, although it is on the way).
: <http://www.ollydbg.de/> (official website)
: <http://www.openrce.org/downloads/browse/OllyDbg_Plugins> (plugins)
: <http://www.ollydbg.de/odbg64.html> (64 bit version)
### Free Windows Disassemblers
Capstone: Capstone is an open source disassembly framework for multi-arch (including support for x86, x86_64) & multi-platform with advanced features.
: <http://www.capstone-engine.org/>
```{=html}
<!-- -->
```
Zydis: Fast and lightweight x86/x86-64 decoder library. It does not offer disassembler features such as linear sweep or recursive disassembling.
: <https://github.com/zyantific/zydis>
```{=html}
<!-- -->
```
Objconv: A command line disassembler supporting 16, 32, and 64 bit x86 code. Latest instruction set (SSE4, AVX, XOP, FMA, etc.), several object file formats, several assembly syntax dialects. Windows, Linux, BSD, Mac. Intelligent analysis.
- <http://www.agner.org/optimize/#objconv>
IDA 3.7: A DOS GUI tool that behaves very much like IDA Pro, but is considerably more limited. It can disassemble code for the Z80, 6502, Intel 8051, Intel i860, and PDP-11 processors, as well as x86 instructions up to the 486.
- <http://www.simtel.net/product.php> (search for **ida37fw**)
IDA Pro Freeware: Behaves almost exactly like IDA Pro, but disassembles only Intel x86 opcodes and is Windows-only. It can disassemble instructions for those processors available as of 2003. Free for non-commercial use.
- (version 4.1) <http://www.themel.com/idafree.zip>
- (version 4.3) <http://www.datarescue.be/idafreeware/freeida43.exe>
- (version 5.0) <https://www.scummvm.org/frs/extras/IDA/idafree50.exe>
- (version 7.0)
<https://www.hex-rays.com/products/ida/support/download_freeware.shtml>
BORG Disassembler: BORG is an excellent Win32 Disassembler with GUI.
: <http://www.caesum.com/>
```{=html}
<!-- -->
```
HT Editor: An analyzing disassembler for Intel x86 instructions. The latest version runs as a console GUI program on Windows, but there are versions compiled for Linux as well.
: <http://hte.sourceforge.net/>
```{=html}
<!-- -->
```
diStorm64: diStorm is an open source highly optimized stream disassembler library for 80x86 and AMD64.
: <http://ragestorm.net/distorm/>
```{=html}
<!-- -->
```
crudasm: crudasm is an open source disassembler with a variety of options. It is a work in progress and is bundled with a partial decompiler.
: <http://sourceforge.net/projects/crudasm9/>
```{=html}
<!-- -->
```
BeaEngine: BeaEngine is a complete disassembler library for IA-32 and intel64 architectures (coded in C and usable in various languages : C, Python, Delphi, PureBasic, WinDev, masm, fasm, nasm, GoAsm).
: <https://github.com/BeaEngine/beaengine>
```{=html}
<!-- -->
```
Visual DuxDebugger: is a 64-bit debugger disassembler for Windows.
: <http://www.duxcore.com/products.html>
```{=html}
<!-- -->
```
BugDbg: is a 64-bit user-land debugger designed to debug native 64-bit applications on Windows.
: <http://www.pespin.com/>
```{=html}
<!-- -->
```
DSMHELP: Disassemble Help Library is a disassembler library with single line Epimorphic assembler. Supported instruction sets - Basic,System,SSE,SSE2,SSE3,SSSE3,SSE4,SSE4A,MMX,FPU,3DNOW,VMX,SVM,AVX,AVX2,BMI1,BMI2,F16C,FMA3,FMA4,XOP.
: <http://dsmhelp.narod.ru/> (in Russian)
```{=html}
<!-- -->
```
ArkDasm: is a 64-bit interactive disassembler and debugger for Windows. Supported processor: x64 architecture (Intel x64 and AMD64)
: <http://www.arkdasm.com/>
```{=html}
<!-- -->
```
SharpDisam: is a C# port of the udis86 x86 / x86-64 disassembler
: <http://sharpdisasm.codeplex.com/>
```{=html}
<!-- -->
```
CFF Explorer: Special fields description and modification (.NET supported), utilities, rebuilder, hex editor, import adder, signature scanner, signature manager, extension support, scripting, disassembler, dependency walker etc.
: ntcore.com
```{=html}
<!-- -->
```
bddisasm: fast, lightweight, x86/x64 instruction decoding library.
: github.com/bitdefender/bddisasm
### Unix Disassemblers
Many of the Unix disassemblers, especially the open source ones, have
been ported to other platforms, like Windows (mostly using MinGW or
Cygwin). Some disassemblers, like otool (OS X), are platform-specific.
Capstone: Capstone is an open source disassembly framework for multi-arch (including support for x86, x86_64) & multi-platform (including Mac OSX, Linux, \*BSD, Android, iOS, Solaris) with advanced features.
: <http://www.capstone-engine.org/>
```{=html}
<!-- -->
```
Bastard Disassembler: The Bastard disassembler is a powerful, scriptable disassembler for Linux and FreeBSD.
: <http://bastard.sourceforge.net/>
```{=html}
<!-- -->
```
ndisasm: NASM\'s disassembler for x86 and x86-64. Works on DOS, Windows, Linux, Mac OS X and various other systems.
```{=html}
<!-- -->
```
udis86: Disassembler Library for x86 and x86-64
: <http://udis86.sourceforge.net/>
```{=html}
<!-- -->
```
Zydis: Fast and lightweight x86/x86-64 disassembler library.
: <https://github.com/zyantific/zydis>
```{=html}
<!-- -->
```
Objconv: See above.
```{=html}
<!-- -->
```
ciasdis: The official name of ciasdis is *computer_intelligence_assembler_disassembler*. This Forth-based tool allows one to incrementally and interactively build knowledge about a code body. It is unique in that all disassembled code can be re-assembled to exactly the same code. Supported processors are the 8080, 6809, 8086, 80386, Pentium I and DEC Alpha. A scripting facility aids in analyzing ELF and MSDOS headers and makes this tool extendable. The Pentium I ciasdis is available as a binary image; the others are in source form, loadable onto lina Forth, available from the same site.
: <http://home.hccnet.nl/a.w.m.van.der.horst/ciasdis.html>
```{=html}
<!-- -->
```
objdump : comes standard, and is typically used for general inspection of binaries. Pay attention to the relocation option and the dynamic symbol table option.
```{=html}
<!-- -->
```
gdb : comes standard, as a debugger, but is very often used for disassembly. If you have loose hex dump data that you wish to disassemble, simply enter it (interactively) over top of something else or compile it into a program as a string like so: char foo\[\] = {0x90, 0xcd, 0x80, 0x90, 0xcc, 0xf1, 0x90};
```{=html}
<!-- -->
```
lida linux interactive disassembler: an interactive disassembler with some special functions like a crypto analyzer. Displays string data references, does code flow analysis, and does not rely on objdump. Utilizes the Bastard disassembly library for decoding single opcodes. The project was started in 2004 and remains dormant to this day.
: <http://lida.sourceforge.net>
```{=html}
<!-- -->
```
dissy : This program is an interactive disassembler that uses objdump.
: <http://code.google.com/p/dissy/>
```{=html}
<!-- -->
```
EmilPRO : replacement for the deprecated dissy disassembler.
: <http://github.com/SimonKagstrom/emilpro>
```{=html}
<!-- -->
```
x86dis : This program can be used to display binary streams such as the boot sector or other unstructured binary files.
```{=html}
<!-- -->
```
ldasm: LDasm (Linux Disassembler) is a Perl/Tk-based GUI for objdump/binutils that tries to imitate the \'look and feel\' of W32Dasm. It searches for cross-references (e.g. strings), converts the code from GAS to a MASM-like style, traces programs and much more. Comes along with PTrace, a process-flow-logger. Last updated in 2002, available from Tucows.
: <http://www.tucows.com/preview/59983/LDasm>
```{=html}
<!-- -->
```
llvm: LLVM has two interfaces to its disassembler: `llvm-objdump`, which mimics GNU objdump, and `llvm-mc`, which is described on the LLVM blog. Example usage of `llvm-mc`:

```
$ echo '1 2' | llvm-mc -disassemble -triple=x86_64-apple-darwin9
        addl    %eax, (%rdx)
$ echo '0x0f 0x1 0x9' | llvm-mc -disassemble -triple=x86_64-apple-darwin9
        sidt    (%rcx)
$ echo '0x0f 0xa2' | llvm-mc -disassemble -triple=x86_64-apple-darwin9
        cpuid
$ echo '0xd9 0xff' | llvm-mc -disassemble -triple=i386-apple-darwin9
        fcos
```
```{=html}
<!-- -->
```
otool: OS X\'s object file displaying tool.
```{=html}
<!-- -->
```
edb: A cross platform x86/x86-64 debugger.
: <https://github.com/eteran/edb-debugger>
```{=html}
<!-- -->
```
bddisasm: fast, lightweight, x86/x64 instruction decoding library.
: github.com/bitdefender/bddisasm
```{=html}
<!-- -->
```
rasm2: radare2 disassembler and assembler tool. Includes x86.nz library with support for x86/x86-64.
## Disassembler Issues
As we have alluded to before, there are a number of issues and
difficulties associated with the disassembly process. The two most
important difficulties are the division between code and data, and the
loss of text information.
### Separating Code from Data
Since data and instructions are all stored in an executable as binary
data, the obvious question arises: how can a disassembler tell code from
data? Is any given byte a variable, or part of an instruction?
The problem wouldn\'t be as difficult if data were limited to the .data
section (segment) of an executable (explained in a later chapter) and if
executable code were limited to the .code section of an executable, but
this is often not the case. Data may be inserted directly into the code
section (e.g. jump address tables, constant strings), and executable
code may be stored in the data section (although new systems are working
to prevent this for security reasons). AI programs, and LISP or Forth
compilers, may not contain separate .text and .data sections to help
decide, and may have code and data interspersed in a single section that
is readable, writable and executable. Boot code may even require
substantial effort just to identify its sections. A technique that is often used is to identify the
entry point of an executable, and find all code reachable from there,
recursively. This is known as \"code crawling\".
Many interactive disassemblers will give the user the option to render
segments of code as either code or data, but non-interactive
disassemblers will make the separation automatically. Disassemblers
often will provide the instruction AND the corresponding hex data on the
same line, shifting the burden for decisions about the nature of the
code to the user. Some disassemblers (e.g. ciasdis) will allow you to
specify rules about whether to disassemble as data or code and invent
label names, based on the content of the object under scrutiny.
Scripting your own \"crawler\" in this way is more efficient; for large
programs interactive disassembling may be impractical to the point of
being unfeasible.
The general problem of separating code from data in arbitrary executable
programs is equivalent to the halting problem. As a consequence, it is
not possible to write a disassembler that will correctly separate code
and data for all possible input programs. Reverse engineering is full of
such theoretical limitations, although by Rice\'s theorem all
interesting questions about program properties are undecidable (so
compilers and many other tools that deal with programs in any form run
into such limits as well). In practice a combination of interactive and
automatic analysis and perseverance can handle all but programs
specifically designed to thwart reverse engineering, like using
encryption and decrypting code just prior to use, and moving code around
in memory.
### Lost Information
User defined textual identifiers, such as variable names, label names,
and macros are removed by the assembly process. They may still be
present in generated object files, for use by tools like debuggers and
relocating linkers, but the direct connection is lost and
re-establishing that connection requires more than a mere disassembler.
Small constants, especially, may have more than one possible name.
Operating system calls (like DLLs in MS-Windows, or syscalls in Unices)
may be reconstructed, as their names appear in a separate segment or are
known beforehand. Many disassemblers allow the user to attach a name to
a label or constant based on his understanding of the code. These
identifiers, in addition to comments in the source file, help to make
the code more readable to a human, and can also shed some clues on the
purpose of the code. Without these comments and identifiers, it is
harder to understand the purpose of the source code, and it can be
difficult to determine the algorithm being used by that code. When you
combine this problem with the possibility that the code you are trying
to read may, in reality, be data (as outlined above), then it can be
even harder to determine what is going on. Another challenge is posed by
modern optimising compilers; they inline small subroutines, then combine
instructions over call and return boundaries. This loses valuable
information about the way the program is structured.
## Decompilers
Akin to Disassembly, **Decompilers** take the process a step further and
actually try to reproduce the code in a high level language. Frequently,
this high level language is C, because C is simple and primitive enough
to facilitate the decompilation process. Decompilation does have its
drawbacks, because lots of data and readability constructs are lost
during the original compilation process, and they cannot be reproduced.
Since the science of decompilation is still young, and results are
\"good\" but not \"great\", this page will limit itself to a listing of
decompilers, and a general (but brief) discussion of the possibilities
of decompilation. Compared to a disassembler, a decompiler generates code
that does not require familiarity with the processor at hand. The
decompiled code may even be compilable for a different processor, or at
least give a reasonable starting point for reproducing the program on a
different processor.
### Decompilation: Is It Possible?
In the face of optimizing compilers, it is not uncommon to be asked \"Is
decompilation even possible?\" To some degree, it usually is. Make no
mistake, however: an optimizing compiler results in the irretrievable
loss of information. An example is inlining, as explained above, where
called code is combined with its surroundings, such that the places
where the original subroutine was called cannot even be identified. A
tool that reverses that process is comparable to an artificial
intelligence program that recreates a poem in a different language. So
perfectly operational decompilers are a long way off. At most, current
decompilers can be used simply as an aid to the reverse engineering
process, leaving lots of arduous work.
### Common Decompilers
Hex-Rays Decompiler: Hex-Rays is a commercial decompiler. It is made as an extension to popular IDA-Pro disassembler. It is currently the only viable commercially available decompiler which produces usable results. It supports both x86 and ARM architecture.
: <http://www.hex-rays.com/products/decompiler/index.shtml>
```{=html}
<!-- -->
```
ILSpy: ILSpy is an open source .NET assembly browser and decompiler.
: <https://github.com/icsharpcode/ILSpy>
```{=html}
<!-- -->
```
DCC: DCC is likely one of the oldest decompilers in existence, dating back over 20 years. It serves as a good historical and theoretical frame of reference for the decompilation process in general. As of 2015, DCC is an active project. Some of the latest changes include fixes for longstanding memory leaks and a more modern Qt5-based front-end.
```{=html}
<!-- -->
```
RetDec: The Retargetable Decompiler is a freeware web decompiler that takes in ELF/PE/COFF binaries in Intel x86, ARM, MIPS, PIC32, and PowerPC architectures and outputs C or Python-like code, plus flow charts and control flow graphs. It puts a running time limit on each decompilation. It produces nice results in most cases.
: <https://github.com/avast/retdec>
```{=html}
<!-- -->
```
Reko: a modular open-source decompiler supporting both an interactive GUI and a command-line interface. Its pluggable design supports decompilation of a variety of executable formats and processor architectures (8- , 16- , 32- and 64-bit architectures as of 2015). It also supports running unpacking scripts before actual decompilation. It performs global data and type analyses of the binary and yields its results in a subset of C++.
: <http://sourceforge.net/projects/decompiler>
: <https://github.com/uxmal/reko>
```{=html}
<!-- -->
```
C4Decompiler: C4Decompiler is an interactive, static decompiler under development (Alpha in 2013). It performs global analysis of the binary and presents the resulting C source in a Windows GUI. Context menus support navigation, properties, cross references, C/Asm mixed view and manipulation of the decompile context (function ABI).
: <http://www.c4decompiler.com>
```{=html}
<!-- -->
```
Boomerang Decompiler Project: Boomerang Decompiler is an attempt to make a powerful, retargetable decompiler. So far, it only decompiles into C with moderate success.
: <http://boomerang.sourceforge.net/>
```{=html}
<!-- -->
```
Reverse Engineering Compiler (REC): REC is a powerful \"decompiler\" that decompiles native assembly code into a *C-like* code representation. The code is half-way between assembly and C, but it is much more readable than the pure assembly is. Unfortunately the program appears to be rather unstable.
: <http://www.backerstreet.com/rec/rec.htm>
```{=html}
<!-- -->
```
ExeToC: ExeToC decompiler is an interactive decompiler that boasted pretty good results in the past.
: <http://sourceforge.net/projects/exetoc>
```{=html}
<!-- -->
```
snowman: Snowman is an open source native code to C/C++ decompiler. Supports ARM, x86, and x86-64 architectures. Reads ELF, Mach-O, and PE file formats. Reconstructs functions, their names and arguments, local and global variables, expressions, integer, pointer and structural types, all types of control-flow structures, including switch. Has a nice graphical user interface with one-click navigation between the assembler code and the reconstructed program. Has a command-line interface for batch processing.
: <https://derevenets.com>
```{=html}
<!-- -->
```
Ghidra: Ghidra is a reverse engineering package that includes a decompiler. It was written by the NSA for internal work, and apparently released because they didn\'t want to have to re-train every new person they hired. It is written in Java.
## A General view of Disassembling
### 8 bit CPU code
Most embedded CPUs are 8-bit CPUs.[^1]
Normally when a subroutine is finished, it returns to executing the next
address immediately following the `call` instruction.
However, assembly-language programmers occasionally use several
different techniques that adjust the return address, making disassembly
more difficult:
- jump tables,
- calculated jumps, and
- a parameter after the call instruction.
#### jump tables and other calculated jumps
On 8-bit CPUs, calculated jumps are often implemented by pushing a
calculated \"return\" address to the stack, then jumping to that address
using the \"return\" instruction. For example, the RTS
Trick uses this technique
to implement jump tables (branch table).
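The same idea, sketched in x86-flavoured assembly (the label `jump_table` and the use of `eax` as an index are purely illustrative): the code computes a target address, pushes it as if it were a return address, and then \"returns\" to it, so a disassembler that assumes `ret` always transfers control back to a caller will mis-trace the control flow:

``` asm
    ; eax holds an index into a table of handler addresses (illustrative)
    mov  ebx, [jump_table + eax*4]  ; fetch the computed target address
    push ebx                        ; push it as if it were a return address
    ret                             ; "return" actually jumps to the handler
```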
#### parameters after the call instruction
Instead of picking up their parameters off the stack or out of some
fixed global address, some subroutines provide parameters in the
addresses of memory that follow the instruction that called that
subroutine. Subroutines that use this technique adjust the return
address to skip over all the constant parameter data, then return to an
address many bytes after the \"call\" instruction. One of the more
famous programs that used this technique is the \"Sweet 16\" virtual
machine.
The technique may make disassembly more difficult.
A simple example of this is the `write()` procedure implemented as
follows:
``` asm
; assume ds = cs, e.g like in boot sector code
start:
call write ; push message's address on top of stack
db "Hello, world",0dh,0ah,00h
; return point
ret ; back to DOS
write proc near
pop si ; get string address
mov ah,0eh ; BIOS: write teletype
w_loop:
lodsb ; read char at [ds:si] and increment si
or al,al ; is it 00h?
jz short w_exit
int 10h ; write the character
jmp w_loop ; continue writing
w_exit:
jmp si
write endp
end start
```
A macro-assembler like TASM will then use a macro like this one:
``` asm
_write macro message
call write
db message
db 0
_write endm
```
From a human disassembler\'s point of view, this is a nightmare, even
though it is straightforward to read in the original assembly source
code: there is no way, from the binary form alone, to decide whether the
db data should be interpreted as instructions, and the data may happen to
contain byte sequences that look like jumps into the real executable code
area, triggering analysis of code that should never be analysed and
interfering with the analysis of the real code (e.g. disassembling the
above code from offset 0000h or 0001h won\'t give the same results at
all). However, a half-decent tool that lets you specify rules, and that
uses heuristics to identify text, will have little trouble.
### 32 bit CPU code
Most 32-bit CPUs use the ARM instruction set.[^2][^3][^4]
Typical ARM assembly code is a series of subroutines, with literal
constants scattered between the subroutines. The standard prologue and
epilogue for subroutines are pretty easy to recognize.
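As a rough sketch (register choice and exact instructions depend on the compiler and ABI; the function name is made up), a typical ARM subroutine is bracketed by a prologue and epilogue along these lines:

``` asm
my_func:
    push    {fp, lr}        @ prologue: save the frame pointer and return address
    @ ... body of the subroutine ...
    pop     {fp, pc}        @ epilogue: restore fp and return by loading pc
```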
### A brief list of disassemblers
- ciasdis
\"an assembler where the elements opcode, operands and modifiers are
all objects, that are reusable for disassembly.\" For 8080 8086
80386 Alpha 6809 and should be usable for Pentium 68000 6502 8051.
- radare, the reverse engineering framework
includes open-source tools to disassemble code for many processors
including x86, ARM, PowerPC, m68k, etc. several virtual machines
including java, msil, etc., and for many platforms including Linux,
BSD, OSX, Windows, iPhoneOS, etc.
- IDA, the Interactive Disassembler (IDA Pro) can disassemble code for a
huge number of processors, including ARM Architecture (including
Thumb and Thumb-2), ATMEL AVR, INTEL 8051, INTEL 80x86, MOS
Technologies 6502, MC6809, MC6811, M68H12C, MSP430, PIC 12XX, PIC
14XX, PIC 18XX, PIC 16XXX, Zilog Z80, etc.
- objdump, part of the GNU binutils, can disassemble code for several
processors and platforms. binutils is an important part of the
toolchain as it provides the linker, assembler and other utilities
(like objdump) to manipulate executables on the target platform, and
is available for most popular platforms.
- For OS X/BSD systems, there is a rough equivalent called otool
in the XCode kit.
- lists a huge number of disassemblers
- Program transformation wiki:
disassembly
lists many highly recommended disassemblers
- search for \"disassemble\" at
SourceForge shows
many disassemblers for a variety of CPUs.
- Hopper is a disassembler that runs on OS X
  and disassembles 32/64-bit OS X and Windows binaries.
- The University of Queensland Binary Translator
(UQBT)
is a reusable, component-based binary-translation framework that
supports CISC, RISC, and stack-based processors.
## Further reading
- <http://www.crackmes.de/> : reverse engineering challenges
- \"A Challengers Handbook\" by Caesum
3 has some tips on
reverse engineering programs in JavaScript, Flash Actionscript
(SWF), Java, etc.
- the Open Source Institute occasionally has reverse engineering
challenges among its other brainteasers.4
- The Program Transformation wiki has a Reverse engineering and
Re-engineering
Roadmap,
and discusses disassemblers, decompilers, and tools for translating
programs from one high-level language to another high-level
language.
- Other disassemblers with multi-platform
support
[^1]: Jim Turley. \"The Two Percent
Solution\".
2002.
[^2]:
[^3]: Mark Hachman. \"ARM Cores Climb Into 3G
Territory\".
2002. \"Although Intel and AMD receive the bulk of attention in the
computing world, ARM's embedded 32-bit architecture, \... has
outsold all others.\"
[^4]: Tom Krazit. \"ARMed for the living
room\".
\"ARM licensed 1.6 billion cores \[in 2005\]\". 2006.
# X86 Disassembly/Analysis Tools
## Debuggers
**Debuggers** are programs that allow the user to execute a compiled
program one step at a time. You can see what instructions are executed
in which order, and which sections of the program are treated as code
and which are treated as data. Debuggers allow you to analyze the
program while it is running, to help you get a better picture of what it
is doing.
Advanced debuggers often contain at least a rudimentary disassembler,
often times hex editing and reassembly features. Debuggers often allow
the user to set *breakpoints* on instructions, function calls, and even
memory locations.
A breakpoint is an instruction to the debugger that allows program
execution to be halted when a certain condition is met. For instance,
when a program accesses a certain variable, or calls a certain API
function, the debugger can pause program execution.
### Windows Debuggers
SoftICE : A *de facto* standard for Windows debugging. SoftICE can be used for *local kernel debugging*, which is a feature that is very rare, and very valuable. SoftICE was taken off the market in April 2006.
WinDbg : WinDbg is a free piece of software from Microsoft that can be used for local user-mode debugging, or even remote kernel-mode debugging. WinDbg is not the same as the better-known Visual Studio Debugger, but comes with a nifty GUI nonetheless. Available in 32 and 64-bit versions.
: <https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugger-download-tools>
```{=html}
<!-- -->
```
IDA Pro : The multi-processor, multi-OS, interactive disassembler by DataRescue.
: <http://www.hex-rays.com/idapro/>
```{=html}
<!-- -->
```
OllyDbg : OllyDbg is a free and powerful Windows debugger with a built-in disassembly and assembly engine. Very useful for patching, disassembling, and debugging.
: <http://www.ollydbg.de/>
```{=html}
<!-- -->
```
x64dbg : A set of 32 and 64 bit x86 debuggers. x64dbg is the spiritual successor to the discontinued OllyDbg.
```{=html}
<!-- -->
```
Immunity Debugger : Immunity Debugger is a branch of OllyDbg v1.10, with built-in support for Python scripting and much more.
: <http://immunityinc.com/products/debugger/index.html>
### Linux Debuggers
Many of the open source debuggers on Linux, again, are cross-platform.
They may be available on some other Unix(-like) systems, or even
Windows. Some of the debuggers may give you better experience than the
old and native ones on your system.
gdb : The GNU debugger, comes with any normal Linux install. It is quite powerful and even somewhat programmable, though the raw user interface is harsh.
```{=html}
<!-- -->
```
lldb: LLVM\'s debugger.
```{=html}
<!-- -->
```
emacs : The GNU editor, can be used as a front-end to gdb. This provides a powerful hex editor and allows full scripting in a LISP-like language.
```{=html}
<!-- -->
```
ddd : The Data Display Debugger. It\'s another front-end to gdb. This provides graphical representations of data structures. For example, a linked list will look just like a textbook illustration.
```{=html}
<!-- -->
```
strace, ltrace, and xtrace : Lets you run a program while watching the actions it performs. With strace, you get a log of all the system calls being made. With ltrace, you get a log of all the library calls being made. With xtrace, you get a log of some of the function calls being made.
```{=html}
<!-- -->
```
valgrind : Executes a program under emulation, performing analysis according to one of the many plug-in modules as desired. You can write your own plug-in module as desired. Newer versions of valgrind also support OS X.
```{=html}
<!-- -->
```
NLKD : A kernel debugger.
: <http://forge.novell.com/modules/xfmod/project/?nlkd>
```{=html}
<!-- -->
```
edb : A fully featured plugin-based debugger inspired by the famous OllyDbg. Project page
```{=html}
<!-- -->
```
KDbg : A gdb front-end for KDE. <http://kdbg.org>
```{=html}
<!-- -->
```
RR0D : A Ring-0 Debugger for Linux. RR0D Project Page
```{=html}
<!-- -->
```
Radare2
: A debugger and reversing framework.
```{=html}
<!-- -->
```
Winedbg: Wine\'s debugger. Debugs Windows executables using Wine.
### Debuggers for Other Systems
dbx : The standard Unix debugger on systems derived from AT&T Unix. It is often part of an optional development toolkit package which comes at an extra price. It uses an interactive command line interface.
```{=html}
<!-- -->
```
ladebug : An enhanced debugger on Tru64 Unix systems from HP (originally Digital Equipment Corporation) that handles advanced functionality like threads better than dbx.
```{=html}
<!-- -->
```
DTrace : An advanced tool on Solaris that provides functions like profiling and many others on the entire system, including the kernel.
```{=html}
<!-- -->
```
mdb : The Modular Debugger (MDB) is a new general purpose debugging tool for the Solaris Operating Environment. Its primary feature is its extensibility. The Solaris Modular Debugger Guide describes how to use MDB to debug complex software systems, with a particular emphasis on the facilities available for debugging the Solaris kernel and associated device drivers and modules. It also includes a complete reference for and discussion of the MDB language syntax, debugger features, and MDB Module Programming API.
### Debugger Techniques
#### Setting Breakpoints
As previously mentioned in the section on disassemblers, a 6-line C
program doing something as simple as outputting \"Hello, World!\" turns
into massive amounts of assembly code. Most people don\'t want to sift
through the entire mess to find out the information they want. It can be
time consuming just to find the information one desires by just looking
through the code. As an alternative, one can choose to set breakpoints
to halt the program once it has reached a given point within the
program\'s code.
For instance, let\'s say that in your program you consistently
experience crashes after one particular event: immediately after closing
a message box. You set breakpoints on all calls to *MessageBoxA*. You
run your program with the breakpoints set, and it stops, ready to call
*MessageBoxA*. Executing each instruction one-by-one thereafter (referred
to as *stepping*) and watching the program stack, you see that a buffer
overflow occurs soon after the call.
## Hex Editors
**Hex editors** are able to directly view and edit the binary of a
source file, and are very useful for investigating the structure of
proprietary closed-format data files. There are many hex editors in
existence. This section will attempt to list some of the best, some of
the most popular, or some of the most powerful.
HxD (Freeware): For Windows. A fast and powerful free hex, disk and RAM editor
: <http://mh-nexus.de/hxd/>
```{=html}
<!-- -->
```
Freeware Hex Editor XVI32 : For Windows. A freeware hex editor.
: <http://www.chmaas.handshake.de/delphi/freeware/xvi32/xvi32.htm>
```{=html}
<!-- -->
```
wxHexEditor (Beta, For Windows and Linux, Free & Open Source): A fast hex editor designed especially for HUGE files and disk devices. It supports files up to exabyte sizes, allows size changes (insertions and deletions) without creating a temp file, can view files in multiple panes, has a built-in disassembler, supports tags for (reverse) engineering big binaries or file systems, and can view files through XOR encryption.
: <http://wxhexeditor.sourceforge.net/>
```{=html}
<!-- -->
```
HHD Software Hex Editor Neo : For Windows. A fast file, disk, and memory editor with built-in disassembler and file structure viewer.
: <http://www.hhdsoftware.com/Family/hex-editor.html>
```{=html}
<!-- -->
```
Catch22 HexEdit : For Windows. This is a powerful hex editor with a slew of features. Has an excellent data structure viewer.
: <http://www.catch22.net/software/hexedit.asp>
```{=html}
<!-- -->
```
BreakPoint Hex Workshop : For Windows. An excellent and powerful hex-editor, its usefulness is restricted by the fact that it is not free like some of the other options.
: <http://www.bpsoft.com/>
```{=html}
<!-- -->
```
Tiny Hexer : Free and does statistics. For Windows.
: <http://www.mirkes.de/files/>
```{=html}
<!-- -->
```
frhed - free hex editor : For Windows. Free and opensource.
: <http://www.kibria.de/frhed.html>
```{=html}
<!-- -->
```
Cygnus Hex Editor: For Windows. A very fast and easy-to-use hex editor, available in a \'Free Edition\'.
: <http://www.softcircuits.com/cygnus/fe/>
```{=html}
<!-- -->
```
Hexprobe Hex Editor : For Windows. A professional hex editor designed to include all the power to deal with hex data, particularly helpful in the areas of hex-byte editing and byte-pattern analysis.
: <http://www.hexprobe.com/hexprobe/index.htm>
```{=html}
<!-- -->
```
UltraEdit32 : For Windows. A hex editor/text editor, won \"Application of the Year\" at 2005 Shareware Industry Awards Conference.
: <http://www.ultraedit.com/>
```{=html}
<!-- -->
```
Hexinator (For Windows and Linux): lets you edit files of unlimited size (overwrite, insert, delete), displays text with dozens of text encodings, shows variables in little and big endian byte order.
: <https://hexinator.com>
```{=html}
<!-- -->
```
ICY Hexplorer : For Windows. A lightweight free and open source hex file editor with some nifty features, such as pixel view, structures, and disassembling.
: <http://hexplorer.sourceforge.net/>
```{=html}
<!-- -->
```
WinHex : For Windows. A powerful hex file and disk editor with advanced abilities for computer forensics and data recovery (used by governments and military).
: <http://www.x-ways.net/index-m.html>
```{=html}
<!-- -->
```
010 Editor : For Windows. A very powerful and fast hex editor with extensive support for data structures and scripting. Can be used to edit drives and processes.
: <http://www.sweetscape.com/010editor/>
*A view of a small binary file in the 1Fh hex editor.*
1Fh : For Windows. A free binary/hex editor which is very fast, even while working with large files. It\'s the only Windows hex editor that allows you to view files in byte code (all 256-characters).
: <http://www.4neurons.com/1Fh/>
```{=html}
<!-- -->
```
HexEdit : For Windows (Open source) and shareware versions. Powerful and easy to use binary file and disk editor.
: <http://www.hexedit.com/>
```{=html}
<!-- -->
```
HexToolkit : For Windows. A free hex viewer specifically designed for reverse engineering file formats. Allows data to be viewed in various formats and includes an expression evaluator as well as a binary file comparison tool.
: <http://www.binaryearth.net/HexToolkit>
```{=html}
<!-- -->
```
FlexHex : For Windows. It provides full support for NTFS files, which are based on a more complex model than FAT32 files. Specifically, FlexHex supports Sparse files and Alternate data streams of files on any NTFS volume. Can be used to edit OLE compound files, flash cards, and other types of physical drives.
: <http://www.heaventools.com/flexhex-hex-editor.htm>
```{=html}
<!-- -->
```
HT Editor : For Windows. A file editor/viewer/analyzer for executables. Its goal is to combine the low-level functionality of a debugger and the usability of IDEs.
: <http://hte.sourceforge.net/>
```{=html}
<!-- -->
```
HexEdit : For MacOS. A simple but reliable hex editor that lets you change highlight colours. There is also a port for Apple Classic users.
: <http://hexedit.sourceforge.net/>
```{=html}
<!-- -->
```
Hex Fiend : For MacOS. A very simple hex editor, but incredibly powerful nonetheless. It\'s only 346 KB to download and takes files as big as 116 GB.
: <http://ridiculousfish.com/hexfiend/>
```{=html}
<!-- -->
```
ImHex : For Windows, MacOS and Linux. Displays, decodes and analyzes binary data (+ printable ASCII chars) and allow edition of bytes. Includes data inspector with various decoding (integers, floats, char/wchar, Unicode, dates, RGBA/RGB565 color\...), search by hex bytes and string, hex diff, pattern matching, yara rules (for malware pattern detection), hash computations, graphical data statistics, disassemblers, and various extra tools from a \"content store\". Free and open-source, licensed under GPLv2.
: <https://imhex.werwolv.net/>
### Linux Hex Editors only
bvi: A typical three-pane hex editor, with a vi-like interface.
```{=html}
<!-- -->
```
emacs : Along with everything else, emacs also includes a hex editor.
```{=html}
<!-- -->
```
joe : Joe\'s own editor now also supports hex editing.
```{=html}
<!-- -->
```
bless : A very capable gtk based hex editor.
```{=html}
<!-- -->
```
xxd and any text editor : Produce a hex dump with xxd, freely edit it in your favorite text editor, and then convert it back to a binary file with your changes included.
```{=html}
<!-- -->
```
GHex : Hex editor for GNOME.
: <http://directory.fsf.org/All_Packages_in_Directory/ghex.html>
```{=html}
<!-- -->
```
Okteta : The well-integrated hex editor from KDE since 4.1. Offers the traditional two-column layout, one with numeric values (binary, octal, decimal, hexadecimal) and one with characters (lots of charsets supported). Editing can be done in both columns, with unlimited undo/redo. Small set of tools (searching/replacing, strings, binary filter, and more).
: <http://utils.kde.org/projects/okteta>
```{=html}
<!-- -->
```
BEYE : A viewer of binary files with a built-in editor in binary, hexadecimal and disassembler modes. It uses native Intel syntax for disassembly. Highlights: AVR/Java/Athlon64/Pentium 4/K7-Athlon disassembler, Russian codepage converter, full preview of formats (MZ, NE, PE, NLM, coff32, ELF; partial: a.out, LE, LX, PharLap), a code navigator and more.
: <http://beye.sourceforge.net/en/beye.html>
```{=html}
<!-- -->
```
BIEW : A viewer of binary files with a built-in editor in binary, hexadecimal and disassembler modes. It uses native Intel syntax for disassembly. Highlights: AVR/Java/Athlon64/Pentium 4/K7-Athlon disassembler, Russian codepage converter, full preview of formats (MZ, NE, PE, NLM, coff32, ELF; partial: a.out, LE, LX, PharLap), a code navigator and more. (PROJECT RENAMED, see BEYE)
: <http://biew.sourceforge.net/en/biew.html>
```{=html}
<!-- -->
```
hview : A curses based hex editor designed to work with large (600+MB) files as quickly, and with as little overhead, as possible.
: <http://web.archive.org/web/20010306001713/http://tdistortion.esmartdesign.com/Zips/hview.tgz>
```{=html}
<!-- -->
```
HexCurse : An ncurses-based hex editor written in C that currently supports hex and decimal address output, jumping to specified file locations, searching, ASCII and EBCDIC output, bolded modifications, an undo command, quick keyboard shortcuts, etc.
: <http://www.jewfish.net/description.php?title=HexCurse>
```{=html}
<!-- -->
```
hexedit : View and edit files in hexadecimal or in ASCII.
: <http://rigaux.org/hexedit.html>
```{=html}
<!-- -->
```
Data Workshop : An editor to view and modify binary data; provides different views which can be used to edit, analyze and export the binary data.
: <http://www.dataworkshop.de/>
```{=html}
<!-- -->
```
VCHE: A hex editor which lets you see all 256 characters as found in video ROM, even control and extended ASCII, it uses the /dev/vcsa\* devices to do it. It also could edit non-regular files, like hard disks, floppies, CDROMs, ZIPs, RAM, and almost any device. It comes with a ncurses and a raw version for people who work under X or remotely.
: <http://www.grigna.com/diego/linux/vche/>
```{=html}
<!-- -->
```
DHEX: DHEX is just another Hexeditor with a Diff-mode for ncurses. It makes heavy use of colors and is themeable.
: <http://www.dettus.net/dhex/>
## Other Tools for Windows
### Resource Monitors
SysInternals Freeware : This page has a large number of excellent utilities, many of which are very useful to security experts, network administrators, and (most importantly to us) reversers. Specifically, check out **Process Monitor**, **FileMon**, **RegMon**, **TCPView**, and **Process Explorer**.
: <https://docs.microsoft.com/en-us/sysinternals/>
### API Monitors
SpyStudio Freeware : The Spy Studio software is a tool to hook into windows processes, log windows API call to DLLs, insert breakpoints and change parameters.
: <http://www.nektra.com/products/spystudio/>
```{=html}
<!-- -->
```
rohitab.com API Monitor : API Monitor is a free software that lets you monitor and control API calls made by applications and services. Features include detailed parameter information, structures, unions, enumerated/flag data types, call stack, call tree, breakpoints, custom DLLs, memory editor, call filtering, COM monitoring, 64-bit. Includes definitions for over 13,000 APIs and 1,300+ COM interfaces.
: <http://www.rohitab.com/apimonitor>
### PE File Header dumpers
Dumpbin : Dumpbin is a program that used to be shipped with MS Visual Studio, but its functionality has since been incorporated into the Microsoft linker, link.exe. To access dumpbin, pass /dump as the first parameter to link.exe:
`link.exe /dump [options]`
: It is frequently useful to simply create a batch file that handles
this conversion:
`::dumpbin.bat`\
`link.exe /dump %*`
**All examples in this wikibook that use dumpbin will call it in this
manner.**
: Here is a list of useful features of dumpbin:
`dumpbin /EXPORTS displays a list of functions exported from a library`\
`dumpbin /IMPORTS displays a list of functions imported from other libraries`\
`dumpbin /HEADERS displays PE header information for the executable`
: <http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vccore/html/_core_dumpbin_reference.asp>
```{=html}
<!-- -->
```
Depends : Dependency Walker is a GUI tool which will allow you to see exports and imports of binaries. It ships with many Microsoft tools including MS Visual Studio.
## GNU Tools
The GNU packages have been ported to many platforms including Windows.
GNU BinUtils : The GNU BinUtils package contains several small utilities that are very useful in dealing with binary files. The most important programs in the list are the GNU objdump, readelf, GAS assembler, and the GNU linker, although the reverser might find more use in addr2line, c++filt, nm, and readelf.
: <http://www.gnu.org/software/binutils/>
```{=html}
<!-- -->
```
objdump : Dumps out information about an executable including symbols and assembly. It comes standard. It can be made to support non-native binary formats.
`objdump -p displays the functions imported from other libraries, the functions exported, and miscellaneous file header information`
It\'s useful for checking DLL dependencies from the command line.
readelf : Like *objdump* but more specialized for ELF executables.
```{=html}
<!-- -->
```
size : Lists the sizes of the segments.
```{=html}
<!-- -->
```
nm : Lists the symbols in an ELF file.
```{=html}
<!-- -->
```
strings : Prints the strings from a file.
```{=html}
<!-- -->
```
file : Tells you what type of file it is.
```{=html}
<!-- -->
```
fold : Folds the results of *strings* into something pageable.
```{=html}
<!-- -->
```
kill : Can be used to halt a program with the sig_stop signal.
```{=html}
<!-- -->
```
strace : Trace system calls and signals.
## Other Tools for Linux
oprofile : Can be used to find out which functions and data segments are used
```{=html}
<!-- -->
```
subterfugue : A tool for playing odd tricks on an executable as it runs. The tool is scriptable in python. The user can write scripts to take action on events that occur, such as changing the arguments to system calls.
: <http://subterfugue.org/>
```{=html}
<!-- -->
```
lizard : Lets you run a program *backwards*.
: <http://lizard.sourceforge.net/>
```{=html}
<!-- -->
```
dprobes : Lets you work with both kernel and user code.
```{=html}
<!-- -->
```
biew : Both a hex editor and a disassembler.
```{=html}
<!-- -->
```
ltrace : Displays runtime library call information for dynamically linked executables.
```{=html}
<!-- -->
```
asmDIFF : Searches for functions, instructions and memory pointers in different versions of same binary by using code metrics. Supports x86, x86_64 code in PE and ELF files.
: <http://duschkumpane.org/index.php/asmdiff>
## XCode Tools
XCode contains some extra tools to be used under
OS X with the Mach-O format. You can see more of them under
`/Applications/Xcode.app/Contents/Developer/usr/bin/`.
lipo: Manages fat binaries with multiple architectures.
```{=html}
<!-- -->
```
otool: *Object file displaying tool*, works somewhat like objdump and readelf.
XCode also packs a lot of Unix tools, with many of them sharing the
names (and functions) of the GNU tools. Other tools like nasm/ndisasm,
lldb and GNU as can also be found.
|
# X86 Disassembly/Microsoft Windows
## Microsoft Windows
The **Windows operating system** is a popular reverse engineering target
for one simple reason: the OS itself (market share, known weaknesses),
and most applications for it, are not Open Source or free. Most software
on a Windows machine doesn\'t come bundled with its source code, and
most pieces have inadequate, or non-existent documentation.
Occasionally, the only way to know precisely what a piece of software
does (or for that matter, to determine whether a given piece of software
is malicious or legitimate) is to reverse it, and examine the results.
## Windows Versions
Windows operating systems can be easily divided into 2 categories:
Windows9x, and WindowsNT.
### Windows 9x
The Windows9x kernel was originally written to span the 16bit - 32bit
divide. Operating systems based on the 9x kernel are Windows 95, Windows
98, and Windows Me. Windows9x series operating systems are
known to be prone to bugs and system instability. The actual OS itself
was a 32 bit extension of MS-DOS, its predecessor. An important issue
with the 9x line is that they were all based around using the ANSI
format for storing strings, rather than Unicode.
Development on the Windows9x kernel ended with the release of Windows
Me.
### Windows NT
The WindowsNT kernel series was originally written as enterprise-level
server and network software. WindowsNT stresses stability and security
far more than Windows9x kernels did (although it can be debated whether
that stress was good enough). It also handles all string operations
internally in Unicode, giving more flexibility when using different
languages. Operating systems based on the WindowsNT kernel are: Windows
NT (versions 3.1, 3.11, 3.2, 3.5, 3.51 and 4.0), Windows 2000 (NT 5.0),
Windows XP (NT 5.1), Windows Server 2003 (NT 5.2), Windows Vista (NT
6.0), Windows 7 (NT 6.1), Windows 8 (NT 6.2),
Windows 8.1 (NT 6.3), and Windows 10 (NT 10.0).
The Microsoft Xbox and Xbox 360 also run a variant of NT, forked from
Windows 2000. Most future Microsoft operating system products are based
on NT in some shape or form.
## Virtual Memory
Memory is organized into \"pages\" that are 4096 bytes by default. Pages
not in current use by the system or any of the applications may be
written to a special section on the hard disk known as the \"paging
file.\" Use of the paging file may increase performance on some systems,
although high latency for I/O to the HDD can actually reduce performance
in some instances.
32-bit Windows NT allows for a maximum of 4 GiB of virtual memory
address space per process. This is divided into 2 GiB user memory and 2
GiB kernel memory by default.
In some 32-bit versions and editions, the operating system can be
started with the /3GB switch which divides this into 3 GiB user memory
and 1 GiB kernel memory. Only 32-bit applications that are compiled with
the large memory flag can use up to 3 GiB in this mode. The /3GB switch
is not supported in 64-bit Windows, but 32-bit applications with the
large memory flag can access up to 4 GiB on 64-bit Windows. 64-bit
applications are not restricted in this way.
Starting with the Pentium Pro CPU some 32-bit versions and editions can
use Physical Address Extensions (the /PAE switch) to access memory above
4 GiB up to 64 GiB. This memory can be accessed by 32-bit applications
that support PAE (i.e. some versions of 32-bit Microsoft SQL Server and
32-bit Microsoft Exchange Server). Special configuration is required,
however.
## System Architecture
The Windows architecture is heavily layered. Function calls that a
programmer makes may be redirected 3 times or more before any action is
actually performed. There is an unignorable penalty for calling Win32
functions from a user-mode application. However, the upside is equally
unignorable: code written in higher levels of the windows system is much
easier to write. Complex operations that involve initializing multiple
data structures and calling multiple sub-functions can be performed by
calling only a single higher-level function.
The Win32 API comprises 3 modules: KERNEL32, USER32, and GDI32. KERNEL32
is layered on top of NTDLL, and most calls to KERNEL32 functions are
simply redirected into NTDLL function calls. USER32 and GDI32 are both
based on WIN32K (a kernel-mode module, responsible for the Windows
\"look and feel\"), although USER32 also makes many calls to the
more-primitive functions in GDI32. This and NTDLL both provide an
interface to the Windows NT kernel, NTOSKRNL (see further below).
NTOSKRNL is also partially layered on HAL (Hardware Abstraction Layer),
but this interaction will not be considered much in this book. The
purpose of this layering is to allow processor variant issues (such as
location of resources) to be made separate from the actual kernel
itself. A slightly different system configuration thus requires just a
different HAL module, rather than a completely different kernel module.
## System calls and interrupts
After filtering through different layers of subroutines, most API calls
require interaction with part of the operating system. Services are
provided via \'software interrupts\', traditionally through the \"int
0x2e\" instruction. This switches control of execution to the NT
executive / kernel, where the request is handled. It should be pointed
out here that the stack used in kernel mode is different from the user
mode stack. This provides an added layer of protection between kernel
and user. Once the function completes, control is returned back to the
user application.
Both Intel and AMD provide an extra set of instructions to allow faster
system calls, the \"SYSENTER\" instruction from Intel and the SYSCALL
instruction from AMD.
## Win32 API
Both WinNT and Win9x systems utilize the Win32 API. However, the WinNT
version of the API has more functionality and security constructs, as
well as Unicode support. Most of the Win32 API can be broken down into 3
separate components, each performing a separate task.
### kernel32.dll
Kernel32.dll, home of the KERNEL subsystem, is where non-graphical
functions are implemented. Some of the APIs located in KERNEL are: The
Heap API, the Virtual Memory API, File I/O API, the Thread API, the
System Object Manager, and other similar system services. Much of the
functionality of kernel32.dll is actually implemented in ntdll.dll, in
undocumented functions. Microsoft prefers to publish documentation for
kernel32 and guarantee that those APIs will remain unchanged, while
putting most of the work in other libraries, which are then left
undocumented.
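As a small, hedged sketch of the kind of services KERNEL32 exposes (assuming a Win32 build environment), the following touches the Heap, Virtual Memory, and File I/O APIs; each of these documented calls is ultimately serviced by lower layers such as ntdll.dll.

``` C
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Heap API: allocate from the default process heap. */
    HANDLE heap = GetProcessHeap();
    char  *buf  = (char *)HeapAlloc(heap, HEAP_ZERO_MEMORY, 64);

    /* Virtual Memory API: reserve and commit one page directly. */
    void  *page = VirtualAlloc(NULL, 4096, MEM_RESERVE | MEM_COMMIT,
                               PAGE_READWRITE);

    /* File I/O API: create a file and get back a kernel handle. */
    HANDLE file = CreateFileA("example.tmp", GENERIC_WRITE, 0, NULL,
                              CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

    printf("heap buffer %p, committed page %p, file handle %p\n",
           (void *)buf, page, (void *)file);

    CloseHandle(file);
    VirtualFree(page, 0, MEM_RELEASE);
    HeapFree(heap, 0, buf);
    return 0;
}
```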
### gdi32.dll
gdi32.dll is the library that implements the GDI subsystem, where
primitive graphical operations are performed. GDI diverts most of its
calls into WIN32K, but it does contain a manager for GDI objects, such
as pens, brushes and device contexts. The GDI object manager and the
KERNEL object manager are completely separate.
### user32.dll
The USER subsystem is located in the user32.dll library file. This
subsystem controls the creation and manipulation of USER objects, which
are common screen items such as windows, menus, cursors, etc\... USER
will set up the objects to be drawn, but will perform the actual drawing
by calling on GDI (which in turn will make many calls to WIN32K) or
sometimes even calling WIN32K directly. USER utilizes the GDI Object
Manager.
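To make the division of labor concrete, here is a hedged sketch (assuming a Win32 build linked against user32.lib and gdi32.lib): USER32 hands out the device context, while the drawing objects and primitives come from GDI32.

``` C
#include <windows.h>

int main(void)
{
    HDC     screen = GetDC(NULL);                            /* USER32: DC for the screen */
    HPEN    pen    = CreatePen(PS_SOLID, 3, RGB(255, 0, 0)); /* GDI32: a pen object       */
    HGDIOBJ old    = SelectObject(screen, pen);              /* GDI32: select into the DC */

    MoveToEx(screen, 10, 10, NULL);                          /* GDI32: drawing primitives */
    LineTo(screen, 200, 200);

    SelectObject(screen, old);
    DeleteObject(pen);                                       /* GDI object manager frees it */
    ReleaseDC(NULL, screen);                                 /* USER32: give the DC back    */
    return 0;
}
```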
## Native API
The native API, hereby referred to as the NTDLL subsystem, is a series
of undocumented API function calls that handle most of the work
performed by KERNEL32. Microsoft also does not guarantee that the native
API will remain the same between different versions, as Windows
developers modify the software. This gives the risk of native API calls
being removed or changed without warning, breaking software that
utilizes it.
### ntdll.dll
The NTDLL subsystem is located in ntdll.dll. This library contains many
API function calls that all follow a particular naming scheme. Each
function has a prefix: Ldr, Nt, Zw, Csr, Dbg, etc\... and all the
functions that share a particular prefix follow particular rules.
The \"official\" native API is usually limited only to functions whose
prefix is Nt or Zw. These calls are in fact the same in user-mode: the
relevant Export
entries map to the same
address in memory. However, in kernel-mode, the Zw\* system call stubs
set the *previous mode* to kernel-mode, ensuring that certain parameter
validation routines are *not* performed. The origin of the prefix \"Zw\"
is unknown; this prefix was chosen due to its having no significance at
all[^1].
In actual implementation, the system call stubs merely load two
registers with values required to describe a native API call, and then
execute a software interrupt (or the `sysenter` instruction).
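As a hedged, minimal sketch of calling one of these stubs directly from user mode: ntdll.dll is already mapped into every Win32 process, so a program can resolve an Nt\* export at run time and declare its prototype itself (the declaration below is hand-written, because the native API is not published in the ordinary SDK headers).

``` C
#include <windows.h>
#include <stdio.h>

/* NTSTATUS is just a 32-bit signed status code; NtClose is one of the
   simplest native calls and underlies kernel32's CloseHandle. */
typedef LONG NTSTATUS;
typedef NTSTATUS (WINAPI *NtClose_t)(HANDLE Handle);

int main(void)
{
    HMODULE   ntdll    = GetModuleHandleA("ntdll.dll");  /* already loaded */
    NtClose_t pNtClose = (NtClose_t)GetProcAddress(ntdll, "NtClose");
    HANDLE    ev;
    NTSTATUS  status;

    if (!pNtClose)
        return 1;

    ev = CreateEventA(NULL, FALSE, FALSE, NULL);  /* something to close     */
    status = pNtClose(ev);                        /* call the stub directly */

    printf("NtClose returned 0x%08lX\n", (unsigned long)status);
    return 0;
}
```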
Most of the other prefixes are obscure, but the known ones are:
- Rtl stands for \"Run Time Library\", calls which help functionality
at runtime (such as RtlAllocateHeap)
- Csr is for \"Client Server Runtime\", which represents the interface
to the win32 subsystem located in csrss.exe
- Dbg functions are present to enable debugging routines and
operations
- Ldr provides the ability to load, manipulate and retrieve data from
DLLs and other module resources
### User Mode Versus Kernel Mode
Many functions, especially Run-time Library routines, are shared between
ntdll.dll and ntoskrnl.exe. Most Native API functions, as well as other
kernel-mode-only functions exported from the kernel, are useful to
driver writers. As such, Microsoft provides documentation on many of the
native API functions with the Microsoft Server 2003 Platform DDK. The
DDK (Driver Development Kit) is available as a free download.
## ntoskrnl.exe
This module is the Windows NT \"\'Executive\'\", providing all the
functionality required by the native API, as well as the kernel itself,
which is responsible for maintaining the machine state. By default, all
interrupts and kernel calls are channeled through ntoskrnl in some way,
making it the single most important program in Windows itself. Many of
its functions are exported (all of which with various prefixes, a la
NTDLL) for use by device drivers.
## Win32K.sys
This module is the \"Win32 Kernel\" that sits on top of the lower-level,
more primitive NTOSKRNL. WIN32K is responsible for the \"look and feel\"
of windows, and many portions of this code have remained largely
unchanged since the Win9x versions. This module provides many of the
specific instructions that cause USER and GDI to act the way they do.
It\'s responsible for translating the API calls from the USER and GDI
libraries into the pictures you see on the monitor.
## Win64 API
With the advent of 64-bit processors, 64-bit software is a necessity. As
a result, the Win64 API was created to utilize the new hardware. It is
important to note that the format of many of the function calls is
identical in Win32 and Win64, except for the size of pointers and other
data types that are specific to the 64-bit address space.
## Windows Vista
Microsoft has released a new version of its Windows operating system,
named \"Windows Vista.\" Windows Vista may be better known by its
development code-name \"Longhorn.\" Microsoft claims that Vista has been
written largely from the ground up, so it can be assumed that there are
fundamental differences between the Vista API and system architecture
and the APIs and architectures of previous Windows versions. Windows
Vista was released on January 30th, 2007.
## Windows CE/Mobile, and other versions
Windows CE is the Microsoft offering on small devices. It largely uses
the same Win32 API as the desktop systems, although it has a slightly
different architecture. Some examples in this book may consider
Windows CE.
## \"Non-Executable Memory\"
Recent Windows service packs have attempted to implement a system known
as \"non-executable memory\", where certain pages can be marked as being
\"non-executable\". The purpose of this system is to prevent some of the
most common security holes by not allowing control to pass to code
inserted into a memory buffer by an attacker. For instance, a shellcode
loaded into an overflowed text buffer cannot be executed, stopping the
attack in its tracks. The effectiveness of this mechanism is yet to be
seen, however.
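As a hedged sketch (Win32 build environment assumed), the difference lies in the page protection requested: a data buffer gets PAGE_READWRITE and cannot be executed under this scheme, while memory that is legitimately meant to hold code (a JIT buffer, for example) must be mapped with an execute permission explicitly.

``` C
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Readable and writable, but not executable: under "non-executable
       memory" a jump into this page faults instead of running the bytes. */
    void *data = VirtualAlloc(NULL, 4096, MEM_RESERVE | MEM_COMMIT,
                              PAGE_READWRITE);

    /* Memory intended to hold generated code must ask for execute rights. */
    void *code = VirtualAlloc(NULL, 4096, MEM_RESERVE | MEM_COMMIT,
                              PAGE_EXECUTE_READWRITE);

    printf("data page %p (rw-), code page %p (rwx)\n", data, code);

    VirtualFree(code, 0, MEM_RELEASE);
    VirtualFree(data, 0, MEM_RELEASE);
    return 0;
}
```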
## COM and Related Technologies
COM, together with a whole slew of technologies that are either related
to COM or are actually COM under a fancier name, is another factor to
consider when reversing Windows binaries. COM, DCOM, COM+, ActiveX, OLE,
MTS, and Windows DNA are all names for the same subject, or for subjects
so similar that they may all be considered under the same heading. In
short, COM is
a method to export Object-Oriented Classes in a uniform,
*cross-platform* and *cross-language* manner. In essence, COM is .NET,
version 0 beta. Using COM, components written in many languages can
export, import, instantiate, modify, and destroy objects defined in
another file, most often a DLL. Although COM provides cross-platform (to
some extent) and cross-language facilities, each COM object is compiled
to a native binary, rather than an intermediate format such as Java or
.NET. As a result, COM does not require a virtual machine to execute
such objects.
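As a hedged illustration of what COM usage looks like at the C level (assuming a Windows build linked against ole32.lib and uuid.lib; the ShellLink object is used only because it ships with Windows), note that every method call goes through an explicit vtable pointer. This pointer-through-a-table indirection is exactly what a reverse engineer sees in the disassembly.

``` C
#include <windows.h>
#include <shlobj.h>     /* IShellLinkA, CLSID_ShellLink, IID_IShellLinkA */
#include <stdio.h>

int main(void)
{
    IShellLinkA *link = NULL;
    HRESULT hr;

    CoInitialize(NULL);

    /* Ask COM to instantiate the in-process ShellLink object. */
    hr = CoCreateInstance(&CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                          &IID_IShellLinkA, (void **)&link);
    if (SUCCEEDED(hr)) {
        /* In C, every method call goes through the vtable explicitly. */
        link->lpVtbl->SetPath(link, "C:\\Windows\\notepad.exe");
        link->lpVtbl->Release(link);
    }

    printf("CoCreateInstance returned 0x%08lX\n", (unsigned long)hr);
    CoUninitialize();
    return 0;
}
```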
Due to the way that COM works, a lot of the methods and data structures
exported by a COM component are difficult to perceive by simply
inspecting the executable file. Matters are made worse if the creating
programmer has used a library such as
ATL to simplify
their programming experience. Unfortunately for a reverse engineer, this
reduces the contents of an executable into a \"Sea of bits\", with
pointers and data structures everywhere.
## Remote Procedure Calls (RPC)
RPC is a generic term referring to techniques that allow a program
running on one machine to make calls that actually execute on another
machine. Typically, this is done by *marshalling* all of the data needed
for the procedure including any state information stored on the first
machine, and building it into a single data structure, which is then
transmitted over some communications method to a second machine. This
second machine then performs the requested action, and returns a data
packet containing any results and potentially changed state information
to the originating machine.
In Windows NT, RPC is typically handled by having two libraries that are
similarly named, one which generates RPC requests and accepts RPC
returns, as requested by a user-mode program, and one which responds to
RPC requests and returns results via RPC. A classic example is the print
spooler, which consists of two pieces: the RPC stub spoolss.dll, and the
spooler proper and RPC service provider, spoolsv.exe. On most machines,
which are stand-alone, the use of two modules communicating by means of
RPC might seem like overkill; why not simply roll them into a single
routine? In networked printing, though, this makes sense, as
the RPC service provider can be resident physically on a distant
machine, with the remote printer, and the local machine can control the
printer on the remote machine in exactly the same way that it controls
printers on the local machine.
[^1]: <https://msdn.microsoft.com/en-us/library/windows/hardware/ff565646(v=vs.85>).aspx
# X86 Disassembly/Linux
## GNU/Linux
The **GNU/Linux operating system** is open source, but at the same time
there is so much that constitutes \"GNU/Linux\" that it can be difficult
to stay on top of all aspects of the system. Here we will attempt to
boil down some of the most important concepts of the GNU/Linux Operating
System, especially from a reverser\'s standpoint.
## System Architecture
The concept of \"GNU/Linux\" is mostly a collection of a large number of
software components that are based on the GNU tools and the Linux
kernel. GNU/Linux is itself broken into a number of variants called
\"distros\" which share some similarities, but may also have distinct
peculiarities. In a general sense, all GNU/Linux distros are based on a
variant of the Linux kernel. However, since each user may edit and
recompile their own kernel at will, and since some distros may make
certain edits to their kernels, it is hard to proclaim any one version
of any one kernel as \"the standard\". Linux kernels are generally based
on the philosophy that system configuration details should be stored in
aptly-named, human-readable (and therefore human-editable) configuration
files.
The Linux kernel implements much of the core API, but certainly not all
of it. Much API code is stored in external modules (although users have
the option of compiling all these modules together into a \"Monolithic
Kernel\").
On top of the kernel generally runs one or more **shells**. Bash is one
of the more popular shells, but many users prefer other shells,
especially for different tasks.
Beyond the shell, Linux distros frequently offer a GUI (although many
distros do not have a GUI at all, usually for performance reasons).
Since each GUI often supplies its own underlying framework and API,
certain graphical applications may run on only one GUI. Some
applications may need to be recompiled (and a few completely rewritten)
to run on another GUI.
## Configuration Files
## Shells
Here are some popular shells:
Bash : An acronym for \"Bourne Again SHell.\"
```{=html}
<!-- -->
```
Bourne : A precursor to Bash.
```{=html}
<!-- -->
```
Csh : C Shell
```{=html}
<!-- -->
```
Ksh : Korn Shell
```{=html}
<!-- -->
```
TCsh : A Terminal oriented Csh.
```{=html}
<!-- -->
```
Zsh : Z Shell
## Desktop Environments
Some of the more popular desktop environments:
GNOME : GNU Network Object Modeling Environment
```{=html}
<!-- -->
```
KDE : K Desktop Environment
## Debuggers
gdb : The GNU Debugger. It is available on most Linux distributions, and is primarily used to debug ELF executables.
```{=html}
<!-- -->
```
winedbg : A debugger for Wine, used to debug Windows executables under Linux.
```{=html}
<!-- -->
```
edb : A fully featured plugin-based debugger inspired by the famous OllyDbg.
## File Analyzers
strings : Finds printable strings in a file. When, for example, a password is stored in the binary itself (defined statically in the source), the string can then be extracted from the binary without ever needing to execute it.
```{=html}
<!-- -->
```
file : Determines a file type, useful for determining whether an executable has been stripped and whether it\'s been dynamically (or statically) linked.
```{=html}
<!-- -->
```
objdump : Disassembles object files, executables and libraries. Can list internal file structure and disassemble specific sections. Supports both Intel and AT&T syntax.
```{=html}
<!-- -->
```
nm : Lists symbols from executable files. Doesn\'t work on stripped binaries. Used mostly on debug versions of executables.
# X86 Disassembly/Linux Executable Files
## ELF Files
The **ELF file format** (short for Executable and Linking Format) was
developed by Unix System Laboratories to be a successor to previous file
formats such as COFF and a.out. In many respects, the ELF format is more
powerful and versatile than previous formats, and has widely become the
standard on Linux, Solaris, IRIX, and FreeBSD (although the
FreeBSD-derived Mac OS X uses the Mach-O format instead). ELF has also
been adopted by OpenVMS for Itanium and BeOS for x86.
Historically, Linux has not always used ELF; Red Hat Linux 4 was the
first time that distribution used ELF; previous versions had used the
a.out format.
ELF Objects are broken down into different segments and/or sections.
These can be located by using the ELF header found at the first byte of
the object. The ELF header provides the location for both the program
header and the section header. Using these data structures, the rest of
the ELF object\'s contents can be found, including the .text and .data
segments, which contain code and data respectively.
The GNU readelf utility, from the binutils package, is a common tool for
parsing ELF objects.
### File Format
Figure: An ELF file has two views: the program header shows the *segments* used at run-time, while the section header lists the set of *sections* of the binary.
Each ELF file is made up of one ELF header, followed by file data. The
file data can include:
- Program header table, describing zero or more segments
- Section header table, describing zero or more sections
- Data referred to by entries in the program or section header table
The segments contain information that is necessary for runtime execution
of the file, while sections contain important data for linking and
relocation. Each byte in the entire file is taken by no more than one
section at a time, but there can be orphan bytes, which are not covered
by a section. In the normal case of a Unix executable one or more
sections are enclosed in one segment.
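As a hedged sketch (Linux-only, using the system <elf.h> header and assuming a 64-bit object), the ELF header at offset 0 can be read directly to locate the program and section header tables:

``` C
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    Elf64_Ehdr hdr;
    FILE *f;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f || fread(&hdr, sizeof(hdr), 1, f) != 1) {
        perror("read");
        return 1;
    }

    /* The first bytes are the magic number 0x7f 'E' 'L' 'F'. */
    if (memcmp(hdr.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "not an ELF object\n");
        return 1;
    }

    printf("entry point:           0x%lx\n", (unsigned long)hdr.e_entry);
    printf("program header offset: %lu\n",   (unsigned long)hdr.e_phoff);
    printf("section header offset: %lu\n",   (unsigned long)hdr.e_shoff);

    fclose(f);
    return 0;
}
```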
## Relocatable ELF Files
Relocatable ELF files are created by compilers. They need to be linked
before running.
Those files are often found in `.a` archives, with a `.o` extension.
## a.out Files
a.out is a very simple format consisting of a
header (at offset 0) which contains the size of 3 executable sections
(code, data, bss), plus pointers to additional information such as
relocations (for .o files), symbols, and symbol strings. The actual
section contents follow the header. The offset of each section is
computed from the size of the previous one.
The a.out format is now rarely used.
### File Format
# X86 Disassembly/Mac OS X
## Mach-O format overview
macOS (previously OS X) uses the Mach-O file format to encode
executables, object files, and shared libraries (.dylib files). Here, we
will be looking at the 64-bit version of the Mach-O format. The majority
of the data in a Mach-O file lives in \"segments\" and \"sections\":
segments are containers for sections and store information about each
section, while the sections themselves contain the data. Mach-O files
have five primary structures:
**Structure** Description
--------------- ----------------------------------------------------------------------------
Header Contains information about the purpose, and size of the file\'s structures
Load Commands Declaration of all Segments and Sections
Data The actual contents of the file (e.g. Data section, Text section).
Symbol table Says where each symbol is located in the file
String table Contains the name of each symbol
Note that these structures are laid out as one unbroken sequence of
bytes; there is no empty space between them.
## Header
### Information
The header is the very first thing in the file, and it has 8 unsigned
32-bit integers:
Name Purpose Endianness Typical Value
------------------------- ------------------------------------------------------------------ --------------- -------------------------------------------------------
Magic Number The File\'s magic number Big-Endian 0xFEEDFACF for 64-bit architecture
CPU Type The Intended CPU type for the executable Little-Endian 0x01000007 for x86_64
CPU subtype The specific kind of CPU used Little-Endian 0x00000003 for all x64 CPUs
File type The purpose of the file Little-Endian 0x00000001 for object file, 0x00000002 for executable
Number of Load Commands The quantity of Load commands (does not include section headers) Little-Endian Variable
Size of Load Commands The number of bytes occupied by the Load Commands Little-Endian Variable
Flags Extra file information Little-Endian 0x00000000
Reserved No practical use Little-Endian 0x00000000
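As a hedged sketch (macOS-only, using the system <mach-o/loader.h> header and assuming a little-endian 64-bit file), the eight header fields above map directly onto struct mach_header_64:

``` C
#include <mach-o/loader.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    struct mach_header_64 hdr;
    FILE *f;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <mach-o-file>\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f || fread(&hdr, sizeof(hdr), 1, f) != 1) {
        perror("read");
        return 1;
    }

    if (hdr.magic != MH_MAGIC_64) {          /* 0xFEEDFACF */
        fprintf(stderr, "not a 64-bit Mach-O file\n");
        return 1;
    }
    printf("cputype:       0x%08x\n", (unsigned)hdr.cputype);
    printf("filetype:      0x%08x\n", (unsigned)hdr.filetype);
    printf("load commands: %u (%u bytes)\n", hdr.ncmds, hdr.sizeofcmds);

    fclose(f);
    return 0;
}
```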
# X86 Disassembly/The Stack
## The Stack
![](Data_stack.svg "Data_stack.svg") Generally speaking, a **stack** is
a data structure that stores data values contiguously in memory. Unlike
an array, however, you access (read or write) data only at the \"top\"
of the stack. To read from the stack is said \"**to pop**\" and to write
to the stack is said \"**to push**\". A stack is also known as a LIFO
queue (Last In First Out) since values are popped from the stack in *the
reverse order* that they are pushed onto it (think of how you pile up
plates on a table). Popped data disappears from the stack.
All x86 architectures use a stack as a temporary storage area in RAM
that allows the processor to quickly store and retrieve data in memory.
The current top of the stack is pointed to by the **esp** register. The
stack \"grows\" downward, from high to low memory addresses, so values
recently pushed onto the stack are located in memory addresses *above*
the esp pointer. No register specifically points to the bottom of the
stack, although most operating systems monitor the stack bounds to
detect both \"underflow\" (popping an empty stack) and \"overflow\"
(pushing too much information on the stack) conditions.
When a value is popped off the stack, the value remains sitting in
memory until overwritten. However, you should never rely on the content
of memory addresses below esp, because other functions may overwrite
these values without your knowledge.
Users of Windows ME, 98, 95, 3.1 (and earlier) may fondly remember the
infamous \"Blue Screen of Death\" \-- that was sometimes caused by a
stack overflow exception. This occurs when too much data is written to
the stack, and the stack \"grows\" beyond its limits. Modern operating
systems use better bounds-checking and error recovery to reduce the
occurrence of stack overflows, and to maintain system stability after
one has occurred.
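As a hedged illustration (this toy program deliberately crashes), unbounded recursion with a large automatic buffer exhausts the stack; on a modern system the OS reports a stack-overflow exception rather than a blue screen.

``` C
#include <stdio.h>

static int depth;

/* Each call frame reserves a page-sized local buffer plus a return
   address; with no base case the stack keeps growing until the OS
   detects the overflow and terminates the program. */
static void recurse(void)
{
    char big[4096];                  /* large automatic (stack) buffer */
    big[0] = (char)++depth;
    printf("depth %d, frame at %p\n", depth, (void *)big);
    recurse();
    big[1] = 0;                      /* keeps the call from being tail-optimized */
}

int main(void)
{
    recurse();                       /* never returns normally */
    return 0;
}
```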
## Push and Pop
The following lines of ASM code are basically equivalent:
```{=html}
<center>
```
+----------+-----------------------------+
| ``` asm | ``` asm |
| push eax | sub esp, 4 |
| ``` | mov DWORD PTR SS:[esp], eax |
| | ``` |
+----------+-----------------------------+
| ``` asm | ``` asm |
| pop eax | mov eax, DWORD PTR SS:[esp] |
| ``` | add esp, 4 |
| | ``` |
+----------+-----------------------------+
```{=html}
</center>
```
but the single command actually performs much faster than the
alternative. It can be visualized that the stack grows from right to
left, and esp decreases as the stack grows in size.
```{=html}
<center>
```
Push Pop
-------------------------------------------------------------- ------------------------------------------------------------
![](ReverseEngineeringPush.JPG "ReverseEngineeringPush.JPG") ![](ReverseEngineeringPop.JPG "ReverseEngineeringPop.JPG")
```{=html}
</center>
```
## ESP In Action
Let\'s say we want to quickly discard 3 items we pushed earlier onto the
stack, without saving the values (in other words \"clean\" the stack).
The following works (note that it overwrites the **eax** register):
``` asm
pop eax
pop eax
pop eax
```
However there is a faster method, that also does not affect any register
but the stack pointer. We can simply perform some basic arithmetic on
esp to make the pointer go \"above\" the data items, so they cannot be
read anymore, and can be overwritten with the next round of **push**
commands.
``` asm
add esp, 12 ; 12 is 3 DWORDs (4 bytes * 3)
```
Likewise, if we want to reserve room on the stack for an item bigger
than a DWORD, we can use a subtraction to artificially move esp forward.
We can then access our reserved memory directly as a memory pointer, or
we can access it indirectly as an offset value from esp itself.
Say we wanted to create an array of byte values on the stack, 100 items
long. We want to store the pointer to the base of this array in **edi**.
How do we do it? Here is an example:
``` asm
sub esp, 100 ; num of bytes in our array
mov edi, esp ; copy address of 100 bytes area to edi
```
To destroy that array, we simply write the instruction
``` asm
add esp, 100
```
## Reading Without Popping
To read values on the stack without popping them off the stack, **esp**
can be used with an offset. For instance, to read the 3 DWORD values
from the top of the stack into eax (but without using a pop
instruction), we would use the instructions:
``` asm
mov eax, DWORD PTR SS:[esp]
mov eax, DWORD PTR SS:[esp + 4]
mov eax, DWORD PTR SS:[esp + 8]
```
Remember, since esp moves downward as the stack grows, data on the stack
can be accessed with a positive offset. A negative offset should never
be used because data \"above\" the stack cannot be counted on to stay
the way you left it. The operation of reading from the stack without
popping is often referred to as \"peeking\", but since this isn\'t the
official term for it, this wikibook won\'t use it.
## Data Allocation
There are two areas in the computer memory where a program can store
data. The first, the one that we have been talking about, is the stack.
It is a linear LIFO buffer that allows fast allocations and
deallocations, but has a limited size. The **heap** is a non-linear data
storage area, typically implemented using linked lists, binary trees, or
other more exotic methods. Heaps are slightly more
difficult to interface with and to maintain than a stack, and
allocations/deallocations are performed more slowly. However, heaps can
grow as the data grows, and new heaps can be allocated when data
quantities become too large.
As we shall see, explicitly declared variables are allocated on the
stack. Stack variables are finite in number, and have a definite size.
Heap variables can be variable in number and in size. We will discuss
these topics in more detail later.
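A short, hedged C sketch of the distinction: the automatic array lives on the stack and disappears when the function returns, while the malloc'd block lives on the heap, can be sized at run time, and must be freed explicitly.

``` C
#include <stdio.h>
#include <stdlib.h>

void example(size_t n)
{
    int  on_stack[100];                       /* automatic: fixed size, on the stack */
    int *on_heap = malloc(n * sizeof(int));   /* dynamic: size chosen at run time    */

    if (!on_heap)
        return;

    on_stack[0] = 1;
    on_heap[0]  = 1;

    printf("stack address: %p\nheap address:  %p\n",
           (void *)on_stack, (void *)on_heap);

    free(on_heap);                            /* heap memory must be freed explicitly */
}                                             /* on_stack is released automatically   */

int main(void)
{
    example(100);
    return 0;
}
```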
# X86 Disassembly/Functions and Stack Frames
## Functions and Stack Frames
In the execution environment, functions are frequently set up with a
\"**stack frame**\" to allow access to both function parameters, and
automatic local function variables. The idea behind a stack frame is
that each subroutine can act independently of its location on the stack,
and each subroutine can act as if it is the top of the stack.
When a function is called, a new stack frame is created at the current
**esp** location. A stack frame acts like a partition on the stack. All
items from previous functions are higher up on the stack, and should not
be modified. Each current function has access to the remainder of the
stack, from the stack frame until the end of the stack page. The current
function always has access to the \"top\" of the stack, and so functions
do not need to take account of the memory usage of other functions or
programs.
## Standard Entry Sequence
For many compilers, the standard function entry sequence is the
following piece of code (*X* is the total size, in bytes, of all
*automatic* local variables used in the function):
``` asm
push ebp
mov ebp, esp
sub esp, X
```
For example, here is a C function code fragment and the resulting
assembly instructions (under some ABIs the generated code may omit the
\"sub esp, 12\" instruction, for example where a red zone is available):
``` C
void MyFunction()
{
int a, b, c;
...
```
``` asm
_MyFunction:
push ebp ; save the value of ebp
mov ebp, esp ; ebp now points to the top of the stack
sub esp, 12 ; space allocated on the stack for the local variables
```
This means local variables can be accessed by referencing ebp. Consider
the following C code fragment and corresponding assembly code:
``` C
a = 10;
b = 5;
c = 2;
```
``` asm
mov [ebp - 4], 10 ; location of variable a
mov [ebp - 8], 5 ; location of b
mov [ebp - 12], 2 ; location of c
```
This all seems well and good, but what is the purpose of **ebp** in this
setup? Why save the old value of ebp and then point ebp to the top of
the stack, only to change the value of esp with the next instruction?
The answer is *function parameters*.
Consider the following C function declaration:
``` C
void MyFunction2(int x, int y, int z)
{
...
}
```
It produces the following assembly code:
``` asm
_MyFunction2:
push ebp
mov ebp, esp
sub esp, 0 ; no local variables, most compilers will omit this line
```
Which is exactly as one would expect. So, what exactly does **ebp** do,
and where are the function parameters stored? The answer is found when
we call the function.
Consider the following C function call:
``` C
MyFunction2(10, 5, 2);
```
This will create the following assembly code (using a Right-to-Left
calling convention called CDECL, explained later):
``` asm
push 2
push 5
push 10
call _MyFunction2
```
**Note:** Remember that the **call** x86 instruction is basically
equivalent to
``` asm
push eip + 2 ; return address is current address + size of two instructions
jmp _MyFunction2
```
It turns out that the function arguments are all passed on the stack!
Therefore, when we move the current value of the stack pointer (**esp**)
into **ebp**, we are pointing ebp directly at the function arguments. As
the function code pushes and pops values, ebp is not affected by esp.
Remember that pushing basically does this:
``` asm
sub esp, 4 ; "allocate" space for the new stack item
mov [esp], X ; put new stack item value X in
```
This means that first the return address and then the old value of
**ebp** are put on the stack. Therefore \[ebp\] points to the location
of the old value of ebp, \[ebp + 4\] points to the return address, and
\[ebp + 8\] points to the first function argument. Here is a (crude)
representation of the stack at this point:
`: : `\
`| 2 | [ebp + 16] (3rd function argument)`\
`| 5 | [ebp + 12] (2nd argument)`\
`| 10 | [ebp + 8] (1st argument)`\
`| RA | [ebp + 4] (return address)`\
`| FP | [ebp] (old ebp value)`\
`| | [ebp - 4] (1st local variable)`\
`: :`\
`: :`\
`| | [ebp - X] (esp - the current stack pointer. The use of push / pop is valid now)`
The stack pointer value may change during the execution of the current
function. In particular this happens when:
- parameters are passed to another function;
- the pseudo-function \"alloca()\" is used.
(When parameters are passed to another function, esp is restored once
that function returns; the point is that *while* the call is being set
up, and while alloca()-allocated space exists, esp does not have a fixed
relationship to the locals. Compilers that can track every change to esp
may omit the frame pointer entirely; see
<https://learn.microsoft.com/en-us/archive/blogs/larryosterman/fpo>.)
This means that the value of **esp** cannot be reliably used to
determine (using the appropriate offset) the memory location of a
specific local variable. To solve this problem, many compilers access
local variables using negative offsets from the **ebp** registers. This
allows us to assume that the same offset is always used to access the
same variable (or parameter). For this reason, the ebp register is
called the **frame pointer**, or FP.
## Standard Exit Sequence
The Standard Exit Sequence must undo the things that the Standard Entry
Sequence does. To this effect, the Standard Exit Sequence must perform
the following tasks, in the following order:
1. Remove space for local variables, by reverting **esp** to its old
value.
2. Restore **ebp** to its old value, which is on top of the stack.
3. Return to the calling function with a *ret* command.
As an example, the following C code:
``` C
void MyFunction3(int x, int y, int z)
{
int a, b, c;
...
return;
}
```
Will create the following assembly code:
``` asm
_MyFunction3:
push ebp
mov ebp, esp
sub esp, 12 ; sizeof(a) + sizeof(b) + sizeof(c)
;x = [ebp + 8], y = [ebp + 12], z = [ebp + 16]
;a = [ebp - 4] = [esp + 8], b = [ebp - 8] = [esp + 4], c = [ebp - 12] = [esp]
mov esp, ebp
pop ebp
ret 12 ; sizeof(x) + sizeof(y) + sizeof(z)
```
## Non-Standard Stack Frames
Frequently, reversers will come across a subroutine that doesn\'t set up
a standard stack frame. Here are some things to consider when looking at
a subroutine that does not start with a standard sequence:
### Using Uninitialized Registers
When a subroutine starts using data in an *uninitialized* register, that
means that the subroutine expects external functions to put data into
that register before it gets called. Some calling conventions pass
arguments in registers, but sometimes a compiler will not use a standard
calling convention.
### \"static\" Functions
In C, functions may optionally be declared with the **static** keyword,
as such:
``` C
static void MyFunction4();
```
The **static** keyword causes a function to have only local scope,
meaning it may not be accessed by any external functions (it is strictly
internal to the given code file). When an optimizing compiler sees a
static function that is only referenced by calls (no references through
function pointers), it \"knows\" that external functions cannot possibly
interface with the static function (the compiler controls all access to
the function), so the compiler doesn\'t bother making it standard.
### Hot Patch Prologue
Some Windows functions set up a regular stack frame as explained above,
but start out with the apparently non-sensical line
``` asm
mov edi, edi;
```
This instruction is assembled into 2 bytes which serve as a placeholder
for future function patches. Taken as a whole such a function might look
like this:
``` asm
nop ; each nop is 1 byte long
nop
nop
nop
nop
FUNCTION: ; <-- This is the function entry point as used by call instructions
mov edi, edi ; mov edi,edi is 2 bytes long
push ebp ; regular stack frame setup
mov ebp, esp
```
If such a function needs to be replaced without reloading the
application (or restarting the machine in case of kernel patches) it can
be achieved by inserting a jump to the replacement function. A short
jump instruction (which can jump +/- 127 bytes) requires 2 bytes of
storage space - just the amount that the \"mov edi,edi\" placeholder
provides. A jump to any memory location, in this case to our replacement
function, requires 5 bytes. These are provided by the 5 no-operation
bytes just preceding the function. If a function thus patched gets
called, it will first jump back by 5 bytes and then do a long jump to
the replacement function. After the patch, the memory might look like this:
``` asm
LABEL:
jmp REPLACEMENT_FUNCTION ; <-- 5 NOPs replaced by jmp
FUNCTION:
jmp short LABEL ; <-- mov edi has been replaced by short jump backwards
push ebp
mov ebp, esp ; <-- regular stack frame setup as before
```
The reason for using a 2-byte mov instruction at the beginning instead
of putting 5 nops there directly, is to prevent corruption during the
patching process. There would be a risk with replacing 5 individual
instructions if the instruction pointer is currently pointing at any one
of them. Using a single mov instruction as placeholder on the other hand
guarantees that the patching can be completed as an atomic transaction.
## Local Static Variables
Local static variables cannot be created on the stack, since the value
of the variable is preserved across function calls. We\'ll discuss local
static variables and other types of variables in a later chapter.
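A small, hedged C example: the static local below is stored in the program's data segment rather than on the stack, so its value survives across calls.

``` C
#include <stdio.h>

int counter(void)
{
    static int calls = 0;   /* not on the stack: lives in the data segment */
    return ++calls;         /* the value persists between invocations      */
}

int main(void)
{
    counter();
    counter();
    printf("%d\n", counter());   /* prints 3: the static kept its value */
    return 0;
}
```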
# X86 Disassembly/Functions and Stack Frame Examples
## Example: Number of Parameters
``` asm
_Question1:
push ebp
mov ebp, esp
sub esp, 4
mov eax, [ebp + 8]
mov ecx, 2
mul ecx
mov [esp + 0], eax
mov eax, [ebp + 12]
mov edx, [esp + 0]
add eax, edx
mov esp, ebp
pop ebp
ret
```
The function above takes 2 4-byte parameters, accessed by offsets +8 and
+12 from ebp. The function also has 1 variable created on the stack,
accessed by offset +0 from esp. The function is nearly identical to this
C code:
``` C
int Question1(int x, int y)
{
int z;
z = x * 2;
return y + z;
}
```
## Example: Standard Entry Sequences
``` asm
_Question2:
call _SubQuestion2
mov ecx, 2
mul ecx
ret
```
The function does not follow the standard entry sequence, because it
doesn\'t set up a proper stack frame with ebp and esp. The function
basically performs the following C instructions:
``` C
int Question2()
{
return SubQuestion2() * 2;
}
```
Here, though, an optimizing compiler has chosen to take a few shortcuts.
# X86 Disassembly/Calling Conventions
## Calling Conventions
**Calling conventions** are a standardized method for functions to be
implemented and called by the machine. A calling convention specifies
the method that a compiler sets up to access a subroutine. In theory,
code from any compiler can be interfaced together, so long as the
functions all have the same calling conventions. In practice however,
this is not always the case.
Calling conventions specify how arguments are passed to a function, how
return values are passed back out of a function, how the function is
called, and how the function manages the stack and its stack frame. In
short, the calling convention specifies how a function call in C or C++
is converted into assembly language. Needless to say, there are many
ways for this translation to occur, which is why it\'s so important to
specify certain standard methods. If these standard conventions did not
exist, it would be nearly impossible for programs created using
different compilers to communicate and interact with one another.
There are three major calling conventions that are used with the C
language on 32-bit x86 processors: STDCALL, CDECL, and FASTCALL. In
addition, there is another calling convention typically used with C++:
THISCALL.[^1] There are other calling conventions as well, including
PASCAL and FORTRAN conventions, among others.
Other processors, such as AMD64 processors (also called x86-64
processors), each have their own calling convention.[^2][^3]
## Notes on Terminology
There are a few terms that we are going to be using which are mostly
common sense, but which are worthy of stating directly:
Passing arguments : \"passing arguments\" is a way of saying that the calling function is writing data in the place where the called function will look for them. Arguments are passed before the *call* instruction is executed.
```{=html}
<!-- -->
```
Right-to-Left and Left-to-Right : These describe the manner in which arguments are passed to the subroutine, in terms of the High-level code. For instance, the following C function call:
``` C
MyFunction1(a, b);
```
will generate the following code if passed Left-to-Right:
``` asm
push a
push b
call _MyFunction
```
and will generate the following code if passed Right-to-Left:
``` asm
push b
push a
call _MyFunction
```
Return value : Some functions return a value, and that value must be received reliably by the function\'s caller. The called function places its return value in a place where the calling function can get it when execution returns. The called function stores the return value before executing the *ret* instruction.
```{=html}
<!-- -->
```
Cleaning the stack : When arguments are pushed onto the stack, eventually they must be popped back off again. Whichever function, the caller or the callee, is responsible for cleaning the stack must reset the stack pointer to eliminate the passed arguments.
```{=html}
<!-- -->
```
Calling function (the caller): The \"parent\" function that calls the subroutine. Execution resumes in the calling function directly after the subroutine call, unless the program terminates inside the subroutine.
```{=html}
<!-- -->
```
Called function (the callee): The \"child\" function that gets called by the \"parent.\"
```{=html}
<!-- -->
```
Name Decoration : When C code is translated to assembly code, the compiler will often \"decorate\" the function name by adding extra information that the linker will use to find and link to the correct functions. For most calling conventions, the decoration is very simple (often only an extra symbol or two to denote the calling convention), but in some extreme cases (notably C++ \"thiscall\" convention), the names are \"mangled\" severely.
```{=html}
<!-- -->
```
Entry sequence (the function prologue): a few instructions at the beginning of a function, which prepare the stack and registers for use within the function.
```{=html}
<!-- -->
```
Exit sequence (the function epilogue): a few instructions at the end of a function, which restore the stack and registers to the state expected by the caller, and return to the caller. Some calling conventions clean the stack in the exit sequence.
```{=html}
<!-- -->
```
Call sequence: a few instructions in the middle of a function (the caller) which pass the arguments and call the called function. After the called function has returned, some calling conventions have one more instruction in the call sequence to clean the stack.
## Standard C Calling Conventions
The C language, by default, uses the CDECL calling convention, but most
compilers allow the programmer to specify another convention via a
specifier keyword. These keywords **are not** part of the ISO-ANSI C
standard, so you should always check with your compiler documentation
about implementation specifics.
If a calling convention other than CDECL is to be used, or if CDECL is
not the default for your compiler, and you want to manually use it, you
must specify the calling convention keyword in the function declaration
itself, and in any prototypes for the function. This is important
because both the calling function and the called function need to know
the calling convention.
### CDECL
In the CDECL calling convention the following holds:
- Arguments are passed on the stack in Right-to-Left order, and return
values are passed in eax.
- The *calling* function cleans the stack. This allows CDECL functions
to have *variable-length argument lists* (aka variadic functions).
For this reason the number of arguments is not appended to the name
of the function by the compiler, and the assembler and the linker
are therefore unable to determine if an incorrect number of
arguments is used.
Variadic functions usually have special entry code, generated by the
va_start(), va_arg() C pseudo-functions.
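As a hedged sketch of why caller cleanup matters, here is a typical variadic function: only the caller knows how many arguments it pushed, so only the caller can remove them from the stack afterwards.

``` C
#include <stdarg.h>

/* Sums 'count' trailing int arguments. */
int sum_ints(int count, ...)
{
    va_list ap;
    int i, total = 0;

    va_start(ap, count);
    for (i = 0; i < count; i++)
        total += va_arg(ap, int);   /* walk the stacked arguments */
    va_end(ap);

    return total;
}

/* usage: sum_ints(3, 10, 20, 30) returns 60 */
```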
Consider the following C instructions:
``` C
_cdecl int MyFunction1(int a, int b)
{
return a + b;
}
```
and the following function call:
``` C
x = MyFunction1(2, 3);
```
These would produce the following assembly listings, respectively:
``` asm
_MyFunction1:
push ebp
mov ebp, esp
mov eax, [ebp + 8]
mov edx, [ebp + 12]
add eax, edx
pop ebp
ret
```
and
``` asm
push 3
push 2
call _MyFunction1
add esp, 8
```
When translated to assembly code, CDECL functions are almost always
prepended with an underscore (that\'s why all previous examples have
used \"\_\" in the assembly code).
### STDCALL
STDCALL, also known as \"WINAPI\" (and a few other names, depending on
where you are reading it) is used almost exclusively by Microsoft as the
standard calling convention for the Win32 API. Since STDCALL is strictly
defined by Microsoft, all compilers that implement it do it the same
way.
- STDCALL passes arguments right-to-left, and returns the value in
eax. (The Microsoft documentation erroneously claimed that arguments
are passed left-to-right, but this is not the case.)
- The called function cleans the stack, unlike CDECL. This means that
STDCALL doesn\'t allow variable-length argument lists.
Consider the following C function:
``` C
_stdcall int MyFunction2(int a, int b)
{
return a + b;
}
```
and the calling instruction:
``` C
x = MyFunction2(2, 3);
```
These will produce the following respective assembly code fragments:
``` asm
:_MyFunction2@8
push ebp
mov ebp, esp
mov eax, [ebp + 8]
mov edx, [ebp + 12]
add eax, edx
pop ebp
ret 8
```
and
``` asm
push 3
push 2
call _MyFunction2@8
```
There are a few important points to note here:
1. In the function body, the *ret* instruction has an (optional)
argument that indicates how many bytes to pop off the stack when the
function returns.
2. STDCALL functions are name-decorated with a leading underscore,
followed by an @, and then the number (in bytes) of arguments passed
on the stack. This number will always be a multiple of 4, on a
32-bit aligned machine.
### FASTCALL
The FASTCALL calling convention is not completely standard across all
compilers, so it should be used with caution. In FASTCALL, the first 2
or 3 32-bit (or smaller) arguments are passed in registers, with the
most commonly used registers being edx, eax, and ecx. Additional
arguments, or arguments larger than 4-bytes are passed on the stack,
often in Right-to-Left order (similar to CDECL). The calling function
most frequently is responsible for cleaning the stack, if needed.
Because of the ambiguities, it is recommended that FASTCALL be used only
in situations with 1, 2, or 3 32-bit arguments, where speed is
essential.
The following C function:
``` C
_fastcall int MyFunction3(int a, int b)
{
return a + b;
}
```
and the following C function call:
``` C
x = MyFunction3(2, 3);
```
Will produce the following assembly code fragments for the called, and
the calling functions, respectively:
``` asm
:@MyFunction3@8
push ebp
mov ebp, esp ;many compilers create a stack frame even if it isn't used
add eax, edx ;a is in eax, b is in edx
pop ebp
ret
```
and
``` asm
;the calling function
mov eax, 2
mov edx, 3
call @MyFunction3@8
```
The name decoration for FASTCALL prepends an @ to the function name, and
follows the function name with \@x, where x is the number (in bytes) of
arguments passed to the function.
Many compilers still produce a stack frame for FASTCALL functions,
especially in situations where the FASTCALL function itself calls
another subroutine. However, if a FASTCALL function doesn\'t need a
stack frame, optimizing compilers are free to omit it.
Commonly, the gcc and Windows FASTCALL conventions pass parameters one
and two in ecx and edx, respectively, before pushing any remaining
parameters onto the stack. Calling MyFunction3 using this standard would
look like:
``` asm
;the calling function
mov ecx, 2
mov edx, 3
call @MyFunction3@8
```
## C++ Calling Convention
C++ requires that non-static methods of a class be called by an instance
of the class. Therefore it uses its own standard calling convention to
ensure that pointers to the object are passed to the function:
**THISCALL**.
### THISCALL
In THISCALL, the pointer to the class object is passed in ecx, the
arguments are passed Right-to-Left on the stack, and the return value is
passed in eax.
For instance, the following C++ instruction:
``` Cpp
MyObj.MyMethod(a, b, c);
```
Would form the following asm code:
``` asm
mov ecx, MyObj
push c
push b
push a
call _MyMethod
```
At least, it *would* look like the assembly code above if it weren\'t
for **name mangling**.
### Name Mangling
Because of the complexities inherent in function overloading, C++
functions are heavily name-decorated to the point that people often
refer to the process as \"Name Mangling.\" Unfortunately C++ compilers
are free to do the name-mangling differently since the standard does not
enforce a convention. Additionally, other issues such as exception
handling are also not standardized.
Since every compiler does the name-mangling differently, this book will
not spend too much time discussing the specifics of the algorithm.
Notice that in many cases, it\'s possible to determine which compiler
created the executable by examining the specifics of the name-mangling
format. We will not cover this topic in this much depth in this book,
however.
Here are a few general remarks about THISCALL name-mangled functions:
- They are recognizable on sight because of their complexity when
compared to CDECL, FASTCALL, and STDCALL function name decorations
- They sometimes include the name of that function\'s class.
- They almost always include the number and type of the arguments, so
that overloaded functions can be differentiated by the arguments
passed to it.
Here is an example of a C++ class and function declaration:
``` Cpp
class MyClass {
    int MyFunction(int a) { return a; }
};
```
And here is the resultant mangled name:
`?MyFunction@MyClass@@QAEHH@Z`
### Extern \"C\"
In a C++ source file, functions placed in an `extern "C"` block are
guaranteed not to be mangled. This is done frequently when libraries are
written in C++, and the functions need to be exported without being
mangled. Even though the program is written in C++ and compiled with a
C++ compiler, some of the functions might therefore not be mangled and
will use one of the ordinary C calling conventions (typically CDECL).
## Note on Name Decorations
We\'ve been discussing name decorations in this chapter, but the fact is
that in pure disassembled code there typically are no names whatsoever,
especially not names with fancy decorations. The assembly stage removes
all these readable identifiers, and replaces them with the binary
locations instead. Function names really only appear in two places:
1. Listing files produced during compilation
2. In export tables, if functions are exported
When disassembling raw machine code, there will be no function names and
no name decorations to examine. For this reason, you will need to pay
more attention to the way parameters are passed, the way the stack is
cleaned, and other similar details.
While we haven\'t covered optimizations yet, suffice it to say that
optimizing compilers can even make a mess out of these details.
Functions which are not exported do not necessarily need to maintain
standard interfaces, and if it is determined that a particular function
does not need to follow a standard convention, some of the details will
be optimized away. In these cases, it can be difficult to determine what
calling conventions were used (if any), and it is even difficult to
determine where a function begins and ends. This book cannot account for
all possibilities, so we try to show as much information as possible,
with the knowledge that much of the information provided here will not
be available in a true disassembly situation.
## Further reading
- x86 Disassembly/Calling Convention Examples
- Embedded Systems/Mixed C and Assembly Programming describes calling
  conventions on other CPUs.
[^1]: Josh Lospinoso. \"Common x86 Calling
Conventions\".
[^2]: \"C to assembly call convention 32bit vs
64bit\".
[^3]: \"ASM call
conventions\".
# X86 Disassembly/Calling Convention Examples
## Microsoft C Compiler
Here is a simple function in C:
``` C
int MyFunction(int x, int y)
{
return (x * 2) + (y * 3);
}
```
Using cl.exe, we are going to generate 3 separate listings for
MyFunction, one with CDECL, one with FASTCALL, and one with STDCALL
calling conventions. On the commandline, there are several switches that
you can use to force the compiler to change the default:
- `/Gd` : The default calling convention is CDECL
- `/Gr` : The default calling convention is FASTCALL
- `/Gz` : The default calling convention is STDCALL
Using these commandline options, here are the listings:
### CDECL
``` C
int MyFunction(int x, int y)
{
return (x * 2) + (y * 3);
}
```
becomes:
``` {.asm .numberLines}
PUBLIC _MyFunction
_TEXT SEGMENT
_x$ = 8 ; size = 4
_y$ = 12 ; size = 4
_MyFunction PROC NEAR
; Line 4
push ebp
mov ebp, esp
; Line 5
mov eax, _y$[ebp]
imul eax, 3
mov ecx, _x$[ebp]
lea eax, [eax+ecx*2]
; Line 6
pop ebp
ret 0
_MyFunction ENDP
_TEXT ENDS
END
```
On entry to a function, ESP points to the return address pushed onto the
stack by the `call` instruction (that is, the previous contents of EIP).
Any argument at a higher stack address than the entry ESP was pushed by
the caller before the call was made; in this example, the first argument
is at offset +4 from the entry ESP (the return address is 4 bytes wide),
plus 4 more bytes once EBP is pushed onto the stack. Thus, at line 5,
ESP points to the saved frame pointer EBP, and the arguments are located
at ESP+8 (x) and ESP+12 (y).

For CDECL, the caller pushes the arguments onto the stack in
right-to-left order. Because `ret 0` is used, it must be the caller who
cleans up the stack.
As a point of interest, notice how **lea** is used in this function to
simultaneously perform the multiplication (ecx \* 2), and the addition
of that quantity to eax. Unintuitive instructions like this will be
explored further in the chapter on unintuitive
instructions.
### FASTCALL
``` C
int MyFunction(int x, int y)
{
return (x * 2) + (y * 3);
}
```
becomes:
``` asm
PUBLIC @MyFunction@8
_TEXT SEGMENT
_y$ = -8 ; size = 4
_x$ = -4 ; size = 4
@MyFunction@8 PROC NEAR
; _x$ = ecx
; _y$ = edx
; Line 4
push ebp
mov ebp, esp
sub esp, 8
mov _y$[ebp], edx
mov _x$[ebp], ecx
; Line 5
mov eax, _y$[ebp]
imul eax, 3
mov ecx, _x$[ebp]
lea eax, [eax+ecx*2]
; Line 6
mov esp, ebp
pop ebp
ret 0
@MyFunction@8 ENDP
_TEXT ENDS
END
```
This function was compiled with optimizations turned off. Here we see
that the register arguments are first spilled to the stack and then
fetched back from it, rather than being used directly. This is because
the compiler wants a consistent way to access all arguments through the
stack, and it is not the only compiler that behaves this way.

No argument is accessed at a positive offset from the entry ESP, which
suggests the caller pushed nothing, so the function can use ret 0. Let's
investigate further:
``` C
int FastTest(int x, int y, int z, int a, int b, int c)
{
return x * y * z * a * b * c;
}
```
and the corresponding listing:
``` asm
PUBLIC @FastTest@24
_TEXT SEGMENT
_y$ = -8 ; size = 4
_x$ = -4 ; size = 4
_z$ = 8 ; size = 4
_a$ = 12 ; size = 4
_b$ = 16 ; size = 4
_c$ = 20 ; size = 4
@FastTest@24 PROC NEAR
; _x$ = ecx
; _y$ = edx
; Line 2
push ebp
mov ebp, esp
sub esp, 8
mov _y$[ebp], edx
mov _x$[ebp], ecx
; Line 3
mov eax, _x$[ebp]
imul eax, DWORD PTR _y$[ebp]
imul eax, DWORD PTR _z$[ebp]
imul eax, DWORD PTR _a$[ebp]
imul eax, DWORD PTR _b$[ebp]
imul eax, DWORD PTR _c$[ebp]
; Line 4
mov esp, ebp
pop ebp
ret 16 ; 00000010H
```
Now we have six arguments: four are pushed by the caller from right to
left, while the first two are again passed in ecx/edx and handled the
same way as in the previous example. Stack cleanup is done by ret 16,
which corresponds to the 4 arguments (16 bytes) pushed before the call
was executed.

For FASTCALL, the compiler will try to pass arguments in registers; if
there are not enough registers, the caller pushes the remaining
arguments onto the stack, still in right-to-left order. Stack cleanup is
done by the callee. It is called FASTCALL because, when all arguments
can be passed in registers (for a 64-bit CPU the maximum number is
typically 6), no stack pushes or cleanup are needed.

The name-decoration scheme of the function is \@MyFunction@n, where n is
the stack size (in bytes) needed for all the arguments.
### STDCALL
``` C
int MyFunction(int x, int y)
{
return (x * 2) + (y * 3);
}
```
becomes:
``` asm
PUBLIC _MyFunction@8
_TEXT SEGMENT
_x$ = 8 ; size = 4
_y$ = 12 ; size = 4
_MyFunction@8 PROC NEAR
; Line 4
push ebp
mov ebp, esp
; Line 5
mov eax, _y$[ebp]
imul eax, 3
mov ecx, _x$[ebp]
lea eax, [eax+ecx*2]
; Line 6
pop ebp
ret 8
_MyFunction@8 ENDP
_TEXT ENDS
END
```
The STDCALL listing differs from the CDECL listing in only one respect:
it uses \"ret 8\" so that the callee cleans up the stack itself. Let's
do an example with more parameters:
``` C
int STDCALLTest(int x, int y, int z, int a, int b, int c)
{
return x * y * z * a * b * c;
}
```
Let\'s take a look at how this function gets translated into assembly by
cl.exe:
``` asm
PUBLIC _STDCALLTest@24
_TEXT SEGMENT
_x$ = 8 ; size = 4
_y$ = 12 ; size = 4
_z$ = 16 ; size = 4
_a$ = 20 ; size = 4
_b$ = 24 ; size = 4
_c$ = 28 ; size = 4
_STDCALLTest@24 PROC NEAR
; Line 2
push ebp
mov ebp, esp
; Line 3
mov eax, _x$[ebp]
imul eax, DWORD PTR _y$[ebp]
imul eax, DWORD PTR _z$[ebp]
imul eax, DWORD PTR _a$[ebp]
imul eax, DWORD PTR _b$[ebp]
imul eax, DWORD PTR _c$[ebp]
; Line 4
pop ebp
ret 24 ; 00000018H
_STDCALLTest@24 ENDP
_TEXT ENDS
END
```
So the only difference between STDCALL and CDECL is that the former
performs stack cleanup in the callee, the latter in the caller. On x86
this saves a few bytes of code, thanks to the \"ret n\" form of the
return instruction.
## GNU C Compiler
We will be using 2 example C functions to demonstrate how GCC implements
calling conventions:
``` C
int MyFunction1(int x, int y)
{
return (x * 2) + (y * 3);
}
```
and
``` C
int MyFunction2(int x, int y, int z, int a, int b, int c)
{
return x * y * (z + 1) * (a + 2) * (b + 3) * (c + 4);
}
```
GCC does not have commandline arguments to force the default calling
convention to change from CDECL (for C), so they will be manually
defined in the text with the directives: \_\_cdecl, \_\_fastcall, and
\_\_stdcall.
### CDECL
The first function (MyFunction1) provides the following assembly
listing:
``` asm
_MyFunction1:
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %eax
leal (%eax,%eax), %ecx
movl 12(%ebp), %edx
movl %edx, %eax
addl %eax, %eax
addl %edx, %eax
leal (%eax,%ecx), %eax
popl %ebp
ret
```
First of all, we can see the name-decoration is the same as in cl.exe.
We can also see that the ret instruction doesn\'t have an argument, so
the calling function is cleaning the stack. However, since GCC doesn\'t
provide us with the variable names in the listing, we have to deduce
which parameters are which. After the stack frame is set up, the first
instruction of the function is \"movl 8(%ebp), %eax\". One we remember
(or learn for the first time) that GAS instructions have the general
form:
`instruction src, dest`
We realize that the value at offset +8 from ebp (the last parameter
pushed on the stack) is moved into eax. The leal instruction is a little
more difficult to decipher, especially if we don\'t have any experience
with GAS instructions. The form \"leal(reg1,reg2), dest\" adds the
values in the parenthesis together, and stores the value in *dest*.
Translated into Intel syntax, we get the instruction:
``` asm
lea ecx, [eax + eax]
```
Which is clearly the same as a multiplication by 2. The first value
accessed must then have been the last value passed, which would seem to
indicate that values are passed right-to-left here. To prove this, we
will look at the next section of the listing:
``` asm
movl 12(%ebp), %edx
movl %edx, %eax
addl %eax, %eax
addl %edx, %eax
leal (%eax,%ecx), %eax
```
The value at offset +12 from ebp is moved into edx. edx is then moved
into eax. eax is then added to itself (eax \* 2), and then added back
to edx (edx + eax). Remember though that eax = 2 \* edx, so the result
is edx \* 3. This then is clearly the y parameter, which is furthest
down the stack and was therefore the first pushed. CDECL on GCC is
therefore implemented by passing arguments on the stack in
right-to-left order, the same as cl.exe.
### FASTCALL
``` asm
.globl @MyFunction1@8
.def @MyFunction1@8; .scl 2; .type 32; .endef
@MyFunction1@8:
pushl %ebp
movl %esp, %ebp
subl $8, %esp
movl %ecx, -4(%ebp)
movl %edx, -8(%ebp)
movl -4(%ebp), %eax
leal (%eax,%eax), %ecx
movl -8(%ebp), %edx
movl %edx, %eax
addl %eax, %eax
addl %edx, %eax
leal (%eax,%ecx), %eax
leave
ret
```
Notice first that the same name decoration is used as in cl.exe. The
astute observer will already have realized that GCC uses the same trick
as cl.exe, of moving the fastcall arguments from their registers (ecx
and edx again) onto a negative offset on the stack. Again, optimizations
are turned off. ecx is moved into the first position (-4) and edx is
moved into the second position (-8). Like the CDECL example above, the
value at -4 is doubled, and the value at -8 is tripled. Therefore, -4
(ecx) is x, and -8 (edx) is y. It would seem from this listing then that
values are passed left-to-right, although we will need to take a look at
the larger, MyFunction2 example:
``` asm
.globl @MyFunction2@24
.def @MyFunction2@24; .scl 2; .type 32; .endef
@MyFunction2@24:
pushl %ebp
movl %esp, %ebp
subl $8, %esp
movl %ecx, -4(%ebp)
movl %edx, -8(%ebp)
movl -4(%ebp), %eax
imull -8(%ebp), %eax
movl 8(%ebp), %edx
incl %edx
imull %edx, %eax
movl 12(%ebp), %edx
addl $2, %edx
imull %edx, %eax
movl 16(%ebp), %edx
addl $3, %edx
imull %edx, %eax
movl 20(%ebp), %edx
addl $4, %edx
imull %edx, %eax
leave
ret $16
```
By following the fact that in MyFunction2, successive parameters are
added to increasing constants, we can deduce the positions of each
parameter. -4 is still x, and -8 is still y. +8 gets incremented by 1
(z), +12 gets increased by 2 (a). +16 gets increased by 3 (b), and +20
gets increased by 4 (c). Let\'s list these values then:
`z = [ebp + 8]`\
`a = [ebp + 12]`\
`b = [ebp + 16]`\
`c = [ebp + 20]`
c is the furthest down the stack, and therefore was the first pushed. z is
closest to the top, and was therefore the last pushed. Arguments are
therefore pushed in right-to-left order, just like cl.exe.
### STDCALL
Let\'s compare then the implementation of MyFunction1 in GCC:
``` asm
.globl _MyFunction1@8
.def _MyFunction1@8; .scl 2; .type 32; .endef
_MyFunction1@8:
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %eax
leal (%eax,%eax), %ecx
movl 12(%ebp), %edx
movl %edx, %eax
addl %eax, %eax
addl %edx, %eax
leal (%eax,%ecx), %eax
popl %ebp
ret $8
```
The name decoration is the same as in cl.exe, so STDCALL functions (and
CDECL and FASTCALL for that matter) can be assembled with either
compiler, and linked with either linker, it seems. The stack frame is
set up, then the value at \[ebp + 8\] is doubled. After that, the value
at \[ebp + 12\] is tripled. Therefore, +8 is x, and +12 is y. Again,
these values are pushed in right-to-left order. This function also
cleans its own stack with the \"ret 8\" instruction.
Looking at a bigger example:
``` asm
.globl _MyFunction2@24
.def _MyFunction2@24; .scl 2; .type 32; .endef
_MyFunction2@24:
pushl %ebp
movl %esp, %ebp
movl 8(%ebp), %eax
imull 12(%ebp), %eax
movl 16(%ebp), %edx
incl %edx
imull %edx, %eax
movl 20(%ebp), %edx
addl $2, %edx
imull %edx, %eax
movl 24(%ebp), %edx
addl $3, %edx
imull %edx, %eax
movl 28(%ebp), %edx
addl $4, %edx
imull %edx, %eax
popl %ebp
ret $24
```
We can see here that values at +8 and +12 from ebp are still x and y,
respectively. The value at +16 is incremented by 1, the value at +20 is
incremented by 2, etc all the way to the value at +28. We can therefore
create the following table:
`x = [ebp + 8]`\
`y = [ebp + 12]`\
`z = [ebp + 16]`\
`a = [ebp + 20]`\
`b = [ebp + 24]`\
`c = [ebp + 28]`
With c being pushed first, and x being pushed last. Therefore, these
parameters are also pushed in right-to-left order. This function then
also cleans 24 bytes off the stack with the \"ret 24\" instruction.
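To summarize the stack layout seen in all three of these listings: once the standard frame is set up, each stack-passed argument sits at a fixed positive offset from ebp. A purely illustrative helper captures the arithmetic:

``` C
/* Illustrative only: in a 32-bit frame after "push ebp / mov ebp, esp",
 * the n-th (0-based) stack argument lives at [ebp + 8 + 4*n].
 * The +8 skips the saved ebp and the return address. */
static unsigned int arg_offset(unsigned int n)
{
    return 8 + 4 * n;
}
```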
## Example: C Calling Conventions
## Example: Named Assembly Function
## Example: Unnamed Assembly Function
## Example: Another Unnamed Assembly Function
## Example: Name Mangling
# X86 Disassembly/Branches
## Branching
Computer science professors tell their students to avoid jumps and
**goto** instructions, to avoid the proverbial \"spaghetti code.\"
Unfortunately, assembly only has jump instructions to control program
flow. This chapter will explore the subject that many people avoid like
the plague, and will attempt to show how the spaghetti of assembly can
be translated into the more familiar control structures of high-level
language. Specifically, this chapter will focus on **If-Then-Else** and
**Switch** branching instructions.
## If-Then
Let\'s consider a generic **if** statement in pseudo-code followed by
its equivalent form using jumps:
`if (condition) then`\
`    do_action;`

`if not (condition) then goto end;`\
`    do_action;`\
`end:`

![](_C_language_if.png "_C_language_if.png")
What does this code do? In English, the code checks the condition and
performs a jump only if it is *false*. With that in mind, let\'s compare
some actual C code and its Assembly translation:
``` C
if(x == 0)
{
    x = 1;
}
x++;
```

``` asm
mov eax, $x
cmp eax, 0
jne end
mov eax, 1
end:
inc eax
mov $x, eax
```
Note that when we translate to assembly, we need to *negate* the
condition of the jump because\--like we said above\--we only jump if the
condition is false. To recreate the high-level code, simply negate the
condition once again.
Negating a comparison may be tricky if you\'re not paying attention.
Here are the correct dual forms:
Instruction Meaning
------------- ----------------------------------------
JNE **J**ump if **n**ot **e**qual
JE **J**ump if **e**qual
JG **J**ump if **g**reater
JLE **J**ump if **l**ess than or **e**qual
JL **J**ump if **l**ess than
JGE **J**ump if **g**reater or **e**qual
And here are some examples.
``` asm
mov eax, $x  //move x into eax
cmp eax, $y  //compare eax with y
jg end       //jump to end if greater than
inc eax      //increment x
mov $x, eax  //store x back
end:
...
```
Is produced by these C statements:
``` C
if(x <= y)
{
x++;
}
```
As you can see, x is incremented only if it is **less than or equal to**
y. Thus, if it is greater than y, it will not be incremented as in the
assembler code. Similarly, the C code
``` C
if(x < y)
{
x++;
}
```
produces this assembler code:
``` asm
mov eax, $x  //move x into eax
cmp eax, $y  //compare eax with y
jge end      //jump to end if greater than or equal to
inc eax      //increment x
mov $x, eax  //store x back
end:
...
```
X is incremented in the C code only if it is **less than** y, so the
assembler code now jumps if it\'s greater than or equal to y. This kind
of thing takes practice, so we will try to include lots of examples in
this section.
## If-Then-Else
Let us now look at a more complicated case: the **If-Then-Else**
instruction.
`if (condition) then`\
`    do_action`\
`else`\
`    do_alternative_action;`

`if not (condition) goto else;`\
`    do_action;`\
`    goto end;`\
`else:`\
`    do_alternative_action;`\
`end:`

![](C_language_if_else.png "C_language_if_else.png")
Now, what happens here? Like before, the if statement only jumps to the
else clause when the condition is false. However, we must also install
an *unconditional* jump at the end of the \"then\" clause, so we don\'t
perform the else clause directly afterwards.
Now, here is an example of a real C If-Then-Else:
``` C
if(x == 10)
{
x = 0;
}
else
{
x++;
}
```
Which gets translated into the following assembly code:
``` asm
mov eax, $x
cmp eax, 0x0A ;0x0A = 10
jne else
mov eax, 0
jmp end
else:
inc eax
end:
mov $x, eax
```
As you can see, the addition of a single unconditional jump can add an
entire extra option to our conditional.
## Switch-Case
**Switch-Case** structures can be very complicated when viewed in
assembly language, so we will examine a few examples. First, keep in
mind that in C, there are several keywords that are commonly used in a
switch statement. Here is a recap:
Switch : This keyword tests the argument, and starts the switch structure\
Case : This creates a label that execution will switch to, depending on the value of the argument.\
Break : This statement jumps to the end of the switch block\
Default : This is the label that execution jumps to if and only if it doesn\'t match up to any other conditions
Let\'s say we have a general switch statement, but with an extra label at
the end, as such:
``` C
switch (x)
{
//body of switch statement
}
end_of_switch:
```
Now, every **break** statement will be immediately replaced with the
statement
``` asm
jmp end_of_switch
```
But what do the rest of the statements get changed to? The case
statements can each resolve to any number of arbitrary integer values.
How do we test for that? The answer is that we use a \"Switch Table\".
Here is a simple C example:
``` C
int main(int argc, char **argv)
{ //line 10
switch(argc)
{
case 1:
MyFunction(1);
break;
case 2:
MyFunction(2);
break;
case 3:
MyFunction(3);
break;
case 4:
MyFunction(4);
break;
default:
MyFunction(5);
}
return 0;
}
```
And when we compile this with **cl.exe**, we can generate the following
listing file:
``` asm
tv64 = -4 ; size = 4
_argc$ = 8 ; size = 4
_argv$ = 12 ; size = 4
_main PROC NEAR
; Line 10
push ebp
mov ebp, esp
push ecx
; Line 11
mov eax, DWORD PTR _argc$[ebp]
mov DWORD PTR tv64[ebp], eax
mov ecx, DWORD PTR tv64[ebp]
sub ecx, 1
mov DWORD PTR tv64[ebp], ecx
cmp DWORD PTR tv64[ebp], 3
ja SHORT $L810
mov edx, DWORD PTR tv64[ebp]
jmp DWORD PTR $L818[edx*4]
$L806:
; Line 14
push 1
call _MyFunction
add esp, 4
; Line 15
jmp SHORT $L803
$L807:
; Line 17
push 2
call _MyFunction
add esp, 4
; Line 18
jmp SHORT $L803
$L808:
; Line 19
push 3
call _MyFunction
add esp, 4
; Line 20
jmp SHORT $L803
$L809:
; Line 22
push 4
call _MyFunction
add esp, 4
; Line 23
jmp SHORT $L803
$L810:
; Line 25
push 5
call _MyFunction
add esp, 4
$L803:
; Line 27
xor eax, eax
; Line 28
mov esp, ebp
pop ebp
ret 0
$L818:
DD $L806
DD $L807
DD $L808
DD $L809
_main ENDP
```
Let\'s work our way through this. First, we see that line 10 sets up our
standard stack frame, and it also saves ecx. Why does it save ecx?
Scanning through the function, we never see a corresponding \"pop ecx\"
instruction, so it seems that the value is never restored at all. In
fact, the compiler isn\'t saving ecx at all, but is instead simply
reserving space on the stack: it\'s creating a local variable. The
original C code didn\'t have any local variables, however, so perhaps
the compiler just needed some extra scratch space to store intermediate
values. Why doesn\'t the compiler execute the more familiar \"sub esp,
4\" command to create the local variable? **push ecx** is just a faster
instruction that does the same thing. This \"scratch space\" is being
referenced by a *negative offset* from ebp. **tv64** was defined in the
beginning of the listing as having the value -4, so every call to
\"tv64\[ebp\]\" is a call to this scratch space.
There are a few things that we need to notice about the function in
general:
- Label \$L803 is the end_of_switch label. Therefore, every \"jmp
SHORT \$L803\" statement is a **break**. This is verifiable by
comparing with the C code line-by-line.
- Label \$L818 contains a list of hard-coded memory addresses, which
here are labels in the code section! Remember, labels resolve to the
memory address of the instruction. This must be an important part of
our puzzle.
To solve this puzzle, we will take an in-depth look at line 11:
``` asm
mov eax, DWORD PTR _argc$[ebp]
mov DWORD PTR tv64[ebp], eax
mov ecx, DWORD PTR tv64[ebp]
sub ecx, 1
mov DWORD PTR tv64[ebp], ecx
cmp DWORD PTR tv64[ebp], 3
ja SHORT $L810
mov edx, DWORD PTR tv64[ebp]
jmp DWORD PTR $L818[edx*4]
```
This sequence performs the following pseudo-C operation:
`if( argc - 1 >= 4 )`\
`{`\
` goto $L810; /* the default */`\
`}`\
`label *L818[] = { $L806, $L807, $L808, $L809 }; /* define a table of jumps, one per each case */`\
`//`\
`goto L818[argc - 1]; /* use the address from the table to jump to the correct case */`
Here\'s why\...
### The Setup
``` asm
mov eax, DWORD PTR _argc$[ebp]
mov DWORD PTR tv64[ebp], eax
mov ecx, DWORD PTR tv64[ebp]
sub ecx, 1
mov DWORD PTR tv64[ebp], ecx
```
The value of argc is moved into eax. The value of eax is then
immediately moved to the scratch space. The value of the scratch space
is then moved into ecx. Sounds like an awfully convoluted way to get the
same value into so many different locations, but remember: I turned off
the optimizations. The value of ecx is then decremented by 1. Why
didn\'t the compiler use a **dec** instruction instead? Perhaps the
statement is a general statement, that in this case just happens to have
an argument of 1. We don\'t know why exactly, all we know is this:
- **eax = \"scratch pad\"**
- **ecx = eax - 1**
Finally, the last line moves the new, decremented value of ecx *back
into the scratch pad*. Very inefficient.
### The Compare and Jumps
``` asm
cmp DWORD PTR tv64[ebp], 3
ja SHORT $L810
```
The value of the scratch pad is compared with the value 3, and if the
*unsigned* value is above 3 (4 or more), execution jumps to label
\$L810. How do I know the value is unsigned? I know because **ja** is an
unsigned conditional jump. Let\'s look back at the original C code
switch:
``` C
switch(argc)
{
case 1:
MyFunction(1);
break;
case 2:
MyFunction(2);
break;
case 3:
MyFunction(3);
break;
case 4:
MyFunction(4);
break;
default:
MyFunction(5);
}
```
Remember, the scratch pad contains the value (argc - 1), which means
that this condition is only triggered when argc \> 4. What happens when
argc is greater than 4? The function goes to the default condition. Now,
let\'s look at the next two lines:
``` asm
mov edx, DWORD PTR tv64[ebp]
jmp DWORD PTR $L818[edx*4]
```
**edx** gets the value of the scratch pad (argc - 1), and then there is
a very weird jump that takes place: execution jumps to a location
pointed to by the value (edx \* 4 + \$L818). What is \$L818? We will
examine that right now.
### The Switch Table
``` asm
$L818:
DD $L806
DD $L807
DD $L808
DD $L809
```
\$L818 is a pointer, in the code section, to a list of other code
section pointers. These pointers are all 32-bit values (DD declares a DWORD).
Let\'s look back at our jump statement:
``` asm
jmp DWORD PTR $L818[edx*4]
```
In this jump, \$L818 *isn\'t the offset, it\'s the base*, edx\*4 is the
offset. As we said earlier, edx contains the value of (argc - 1). If
argc == 1, we jump to \[\$L818 + 0\] which is \$L806. If argc == 2, we
jump to \[\$L818 + 4\], which is \$L807. Get the picture? A quick look
at labels \$L806, \$L807, \$L808, and \$L809 shows us exactly what we
expect to see: the bodies of the **case** statements from the original C
code, above. Each one of the case statements calls the function
\"MyFunction\", then breaks, and then jumps to the end of the switch
block.
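To tie the pieces together, here is a rough C model of what the compiled switch does. The names `case_body`, `table`, and `default_body` are inventions for illustration, not anything emitted by the compiler:

``` C
/* Rough model: the jump table behaves like an array of code addresses
 * indexed by (argc - 1). */
typedef void (*case_body)(void);

void switch_model(unsigned int argc, case_body table[4], case_body default_body)
{
    unsigned int idx = argc - 1;   /* the "scratch pad" value           */
    if (idx > 3) {                 /* unsigned compare, like "ja $L810" */
        default_body();            /* the default case                  */
        return;
    }
    table[idx]();                  /* like "jmp DWORD PTR $L818[edx*4]" */
}
```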
## Ternary Operator ?:
Again, the best way to learn is by doing. Therefore we will go through a
mini example to explain the ternary operator. Consider the following C
code program:
``` C
int main(int argc, char **argv)
{
return (argc > 1)?(5):(0);
}
```
**cl.exe** produces the following assembly listing file:
``` asm
_argc$ = 8 ; size = 4
_argv$ = 12 ; size = 4
_main PROC NEAR
; File c:\documents and settings\andrew\desktop\test2.c
; Line 2
push ebp
mov ebp, esp
; Line 3
xor eax, eax
cmp DWORD PTR _argc$[ebp], 1
setle al
dec eax
and eax, 5
; Line 4
pop ebp
ret 0
_main ENDP
```
Line 2 sets up a stack frame, and line 4 is a standard exit sequence.
There are no local variables. It is clear that Line 3 is where we want
to look.
The instruction \"xor eax, eax\" simply sets eax to 0. For more
information on that line, see the chapter on unintuitive
instructions. The **cmp** instruction
tests the condition of the ternary operator. The **setle** instruction is
one of the x86 conditional-set (setcc) instructions: al gets the value 1
if argc \<= 1, and 0 otherwise. Isn\'t that the exact opposite of what
we wanted? In this case, it is. Let\'s look at what happens when argc =
0: **al** gets the value 1. **eax** is decremented (eax = 0), and then eax
is logically anded with 5. 5 & 0 = 0. When argc == 2 (greater than 1),
the **setle** instruction sets al to 0, so eax is still zero.
eax is then decremented, which means that eax == -1. What is -1?
In x86 processors, negative numbers are stored in **two\'s-complement**
format. For instance, let\'s look at the following C code:
``` C
BYTE x;
x = -1;
```
At the end of this C code, **x** will have the value 11111111: all ones!
When argc is greater than 1, setle sets al to zero. Decrementing this
value sets every bit in eax to a logical 1. Now, when we perform the
logical **and** function we get:
` ...11111111`\
`&...00000101 ;101 is 5 in binary`\
`------------`\
` ...00000101`
eax gets the value 5. In this case, it\'s a roundabout method of doing
it, but as a reverser, this is the stuff you need to worry about.
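As an illustration only (the function name is invented), the branchless sequence that cl.exe emits computes the same result as this C:

``` C
/* Model of cl.exe's branchless ternary: build a 0 or -1 mask from the
 * comparison, then AND it with 5. */
int ternary_model_cl(int argc)
{
    int mask = (argc <= 1) ? 1 : 0;  /* setle al                 */
    mask = mask - 1;                 /* dec eax: 0 or 0xFFFFFFFF */
    return mask & 5;                 /* and eax, 5               */
}
```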
For reference, here is the GCC assembly output of the same ternary
operator from above:
``` asm
_main:
pushl %ebp
movl %esp, %ebp
subl $8, %esp
xorl %eax, %eax
andl $-16, %esp
call __alloca
call ___main
xorl %edx, %edx
cmpl $2, 8(%ebp)
setge %dl
leal (%edx,%edx,4), %eax
leave
ret
```
Notice that GCC produces slightly different code than cl.exe produces.
However, the stack frame is set up the same way. Notice also that GCC
doesn\'t give us line numbers, or other hints in the code. The ternary
operator line occurs after the instruction \"call \_\_main\". Let\'s
highlight that section here:
``` asm
xorl %edx, %edx
cmpl $2, 8(%ebp)
setge %dl
leal (%edx,%edx,4), %eax
```
Again, **xor** is used to set edx to 0 quickly. Argc is tested against 2
(instead of 1), and dl is set if argc is *greater than or equal*. If dl
gets set to 1, the **leal** instruction directly thereafter will move
the value of 5 into eax (because lea (edx,edx,4) means edx + edx \* 4,
i.e. edx \* 5).
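Again purely as an illustration (invented function name), GCC's sequence is equivalent to the following C:

``` C
/* Model of GCC's ternary: compute a 0/1 flag, then multiply it by 5
 * with a single lea (edx + edx*4). */
int ternary_model_gcc(int argc)
{
    int flag = (argc >= 2) ? 1 : 0;  /* setge %dl                */
    return flag + flag * 4;          /* leal (%edx,%edx,4), %eax */
}
```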
# X86 Disassembly/Loops
## Loops
To complete repetitive tasks, programmers often implement **loops**.
There are many sorts of loops, but they can all be boiled down to a few
similar formats in assembly code. This chapter will discuss loops, how
to identify them, and how to \"decompile\" them back into high-level
representations.
## Do-While Loops
It seems counterintuitive that this section will consider **Do-While**
loops first, considering that they might be the least used of all the
variations in practice. However, there is method to our madness, so read
on.
Consider the following generic Do-While loop:
``` C
do
{
    action;
} while (condition);
```

![](C_language_do_while.png "C_language_do_while.png")
What does this loop do? The loop body simply executes, the condition is
tested at the end of the loop, and the loop jumps back to the beginning
of the loop if the condition is satisfied. Unlike **if** statements,
Do-While conditions are not reversed.
Let us now take a look at the following C code:
``` C
do
{
x++;
} while(x != 10);
```
Which can be translated into assembly language as such:
``` asm
mov eax, $x
beginning:
inc eax
cmp eax, 0x0A ;0x0A = 10
jne beginning
mov $x, eax
```
## While Loops
**While** loops look almost as simple as a **Do-While** loop, but in
reality they aren\'t quite as simple. Let\'s examine a generic
while-loop:
``` C
while(x)
{
//loop body
}
```
What does this loop do? First, the loop checks to make sure that x is
true. If x is not true, the loop is skipped. The loop body is then
executed, followed by another check: is x still true? If x is still
true, execution jumps back to the top of the loop, and execution
continues. Keep in mind that there needs to be a jump at the bottom of
the loop (to get back up to the top), but it makes no sense to jump back
to the top, retest the conditional, and then jump *back to the bottom of
the loop* if the conditional is found to be false. The while-loop then,
performs the following steps:
1. check the condition. if it is false, go to the end
2. perform the loop body
3. check the condition, if it is true, jump to 2.
4. if the condition is not true, fall-through the end of the loop.
Here is a while-loop in C code:
``` C
while(x <= 10)
{
x++;
}
```
And here then is that same loop translated into assembly:
``` asm
mov eax, $x
cmp eax, 0x0A
jg end
beginning:
inc eax
cmp eax, 0x0A
jle beginning
end:
```
If we were to translate that assembly code **back into C**, we would get
the following code:
``` C
if(x <= 10) //remember: in If statements, we reverse the condition from the asm
{
do
{
x++;
   } while(x <= 10);
}
```
See why we covered the Do-While loop first? Because the While-loop
becomes a Do-While when it gets assembled.
So why can\'t the jump label occur before the test? It can, as the
following listing shows, but then every iteration executes two jumps
(the conditional test at the top plus the unconditional jump at the
bottom) instead of one, which is why compilers usually emit the rotated
form shown above:
``` asm
mov eax, $x
beginning:
cmp eax, 0x0A
jg end
inc eax
jmp beginning
end:
mov $x, eax
```
## For Loops
What is a For-Loop? In essence, it\'s a While-Loop with an initial
state, a condition, and an iterative instruction. For instance, the
following generic For-Loop:
``` C
for (initialization; condition; increment)
{
    action;
}
```

![](C_language_for.png "C_language_for.png")
gets translated into the following pseudocode while-loop:
``` C
initialization;
while(condition)
{
action;
increment;
}
```
Which in turn gets translated into the following Do-While Loop:
``` C
initialization;
if(condition)
{
do
{
action;
increment;
} while(condition);
}
```
Note that often in for() loops you assign an initial constant value in
the initialization (for example x = 0), and then compare that value with
another constant in the condition (for example x \< 10). Most optimizing
compilers will be able to notice that the first time through, x IS less
than 10, and therefore there is no need for the initial if(condition)
statement. In such cases, the compiler will simply generate the
following sequence:
``` C
initialization;
do
{
action
increment;
} while(condition);
```
rendering the code indistinguishable from a while() loop.
## Other Loop Types
C only has Do-While, While, and For Loops, but some other languages may
very well implement their own types. Also, a good C-Programmer could
easily \"home brew\" a new type of loop using a series of good macros,
so they bear some consideration:
### Do-Until Loop
A common Do-Until Loop will take the following form:
``` C
do
{
//loop body
} until(x);
```
which essentially becomes the following Do-While loop:
``` C
do
{
//loop body
} while(!x);
```
### Until Loop
Like the Do-Until loop, the standard Until-Loop looks like the
following:
``` C
until(x)
{
//loop body
}
```
which (likewise) gets translated to the following While-Loop:
``` C
while(!x)
{
//loop body
}
```
### Do-Forever Loop
A Do-Forever loop is simply an unqualified loop with a condition that is
always true. For instance, the following pseudo-code:
``` C
doforever
{
//loop body
}
```
will become the following while-loop:
``` C
while(1)
{
//loop body
}
```
Which can actually be reduced to a simple unconditional jump statement:
``` asm
beginning:
;loop body
jmp beginning
```
Notice that some non-optimizing compilers will produce nonsensical code
for this:
``` asm
mov ax, 1
cmp ax, 1
jne loopend
beginning:
;loop body
cmp ax, 1
je beginning
loopend:
```
Notice that a lot of the comparisons here are not needed since the
condition is a constant. Most compilers will optimize cases like this.
# X86 Disassembly/Variables
## Variables
We\'ve already seen some mechanisms to create local storage on the
stack. This chapter will talk about some other variables, including
**global variables**, **static variables**, variables labelled
\"**const**,\" \"**register**,\" and \"**volatile**.\" It will also
consider some general techniques concerning variables, including
accessor and setter methods (to borrow from object-oriented
terminology). This section may also talk about setting memory
breakpoints in a debugger to track memory I/O on a variable.
## How to Spot a Variable
Variables come in 2 distinct flavors: those that are created on the
stack (local variables), and those that are accessed via a hardcoded
memory address (global variables). Any memory that is accessed via a
hard-coded address is usually a global variable. Variables that are
accessed as an offset from esp, or ebp are frequently local variables.
Hardcoded address : Anything hardcoded is a value that is stored as-is in the binary, and is not changed at runtime. For instance, the value 0x2054 is hardcoded, whereas the current value of variable X is not hard-coded and may change at runtime.
Example of a hardcoded address:
``` asm
mov eax, [0x77651010]
```
OR:
``` asm
mov ecx, 0x77651010
mov eax, [ecx]
```
Example of a non-hardcoded (softcoded?) address:
``` asm
mov ecx, [esp + 4]
add ecx, ebx
mov eax, [ecx]
```
In the last example, the value of ecx is calculated at run-time, whereas
in the first 2 examples, the value is the same every time. RVAs are
considered hard-coded addresses, even though the loader needs to \"fix
them up\" to point to the correct locations.
## .BSS and .DATA sections
Both .bss and .data sections contain values which can change at run-time
(e.g. *variables*). Typically, variables that are initialized to a
non-zero value in the source are allocated in the .data section (e.g.
\"int a = 10;\"). Variables that are not initialized, or initialized
with a zero value, can be allocated to the .bss section (e.g. \"int
arr\[100\];\"). Because all values of .bss variables are guaranteed to
be zero at the start of the program, there is no need for the linker to
allocate space in the binary file. Therefore, .bss sections do not take
space in the binary file, regardless of their size.
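A small sketch of typical placement (the exact section is compiler- and flag-dependent):

``` C
/* Typical section placement for file-scope variables: */
int a = 10;        /* .data: the initializer must be stored in the binary */
int arr[100];      /* .bss:  zero-initialized, takes no space in the file */
static int b = 0;  /* .bss:  explicitly zero-initialized                  */
```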
## \"Static\" Local Variables
Local variables labeled **static** maintain their value across function
calls, and therefore cannot be created on the stack like other local
variables are. How are static variables created? Let\'s take a simple
example C function:
``` C
void MyFunction(int a)
{
static int x = 0;
printf("my number: ");
printf("%d, %d\n", a, x);
}
```
Compiling to a listing file with **cl.exe** gives us the following code:
``` asm
_BSS SEGMENT
?x@?1??MyFunction@@9@9 DD 01H DUP (?) ; `MyFunction'::`2'::x
_BSS ENDS
_DATA SEGMENT
$SG796 DB 'my number: ', 00H
$SG797 DB '%d, %d', 0aH, 00H
_DATA ENDS
PUBLIC _MyFunction
EXTRN _printf:NEAR
; Function compile flags: /Odt
_TEXT SEGMENT
_a$ = 8 ; size = 4
_MyFunction PROC NEAR
; Line 4
push ebp
mov ebp, esp
; Line 6
push OFFSET FLAT:$SG796
call _printf
add esp, 4
; Line 7
mov eax, DWORD PTR ?x@?1??MyFunction@@9@9
push eax
mov ecx, DWORD PTR _a$[ebp]
push ecx
push OFFSET FLAT:$SG797
call _printf
add esp, 12 ; 0000000cH
; Line 8
pop ebp
ret 0
_MyFunction ENDP
_TEXT ENDS
```
Normally when assembly listings are posted in this wikibook, most of the
code gibberish is discarded to aid readability, but in this instance,
the \"gibberish\" contains the answer we are looking for. As can be
clearly seen, this function creates a standard stack frame, and it
doesn\'t create any local variables on the stack. In the interests of
being complete, we will take baby-steps here, and work to the conclusion
logically.
In the code for Line 7, there is a call to \_printf with 3 arguments.
Printf is a standard **libc** function, and it therefore can be assumed
to be cdecl calling convention. Arguments are pushed, therefore, from
right to left. Three arguments are pushed onto the stack before \_printf
is called:
- `DWORD PTR ?x@?1??MyFunction@@9@9`
- `DWORD PTR _a$[ebp]`
- `OFFSET FLAT:$SG797`
The second one, \_a\$\[ebp\], is partially defined by this assembler
directive:
`_a$ = 8`
And therefore \_a\$\[ebp\] is the variable located at offset +8 from
ebp, or the first argument to the function. OFFSET FLAT:\$SG797 likewise
is declared in the assembly listing as such:
``` asm
$SG797 DB '%d, %d', 0aH, 00H
```
If you have your ASCII table handy, you will notice that 0aH = 0x0A =
\'\\n\'. OFFSET FLAT:\$SG797 then is the format string to our printf
statement. Our last option then is the mysterious-looking
\"?x@?1??MyFunction@@9@9\", which is defined in the following assembly
code section:
``` asm
_BSS SEGMENT
?x@?1??MyFunction@@9@9 DD 01H DUP (?)
_BSS ENDS
```
This shows that the Microsoft C compiler creates static variables in the
.bss section. This might not be the same for all compilers, but the
lesson is the same: local static variables are created and used in a
very similar, if not the exact same, manner as global values. In fact,
as far as the reverser is concerned, the two are usually
interchangeable. Remember, the only real difference between static
variables and global variables is the idea of \"scope\", which is only
used by the compiler.
## Signed and Unsigned Variables
Integer formatted variables, such as **int**, **char**, **short** and
**long** may be declared signed or unsigned variables in the C source
code. There are two differences in how these variables are treated:
1. Signed variables use the signed forms of the arithmetic instructions
    where the forms differ, such as **imul** and **idiv**. Unsigned
    variables use the unsigned forms, **mul** and **div**. (Addition and
    subtraction use the same **add** and **sub** instructions for both.)
2. Signed variables use signed branch instructions such as **jge** and
    **jl**. Unsigned variables use unsigned branch instructions such as
    **jae** and **jb**.
The difference between signed and unsigned comparisons is which flags
the conditional jumps examine: signed conditions look at the sign and
overflow flags, while unsigned conditions look at the carry flag. The
bit patterns produced by addition and subtraction are exactly the same
for both signed and unsigned data.
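For example, the same-looking source comparison compiles to different conditional jumps depending on the declared signedness (function names are illustrative):

``` C
/* The "cmp" is identical; only the conditional jump differs. */
int less_signed(int x, int y)             { return x < y; } /* jl / jge */
int less_unsigned(unsigned x, unsigned y) { return x < y; } /* jb / jae */
```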
## Floating-Point Values
Floating point values tend to be 32-bit data values (for **float**) or
64-bit data values (for **double**). These values are distinguished from
ordinary integer-valued variables because they are used with
floating-point instructions. Floating point instructions typically start
with the letter *f*. For instance, **fadd**, **fcom**, and similar
instructions are used with floating point values. Of particular note is
the **fild** instruction and its variants, which take an
integer-valued variable and convert it into a floating point value.
We will discuss floating point variables in more detail in a later
chapter.
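As a tiny example, code like the following typically compiles to floating point instructions such as fild, fadd, and fstp (or their SSE equivalents) rather than the integer ALU instructions:

``` C
/* The int argument is converted to floating point, then floating-point
 * arithmetic is used for the addition. */
double add_half(int x)
{
    return x + 0.5;
}
```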
## Global Variables
Global variables do not have a limited scope like lexical variables do
inside a function body. Since the notion of lexical scope implies the
use of the system stack, and since global variables are not lexical in
nature, they are typically not found on the stack. Global variables tend
to exist in the program as a hard-coded memory address, a location which
never changes throughout program execution. These could exist in the
DATA segment of the executable, or anywhere else that a hard-coded
memory address can be used to store data.
In C, global variables are defined outside the body of any function.
There is no \"global\" keyword. Any variable which is not defined inside
a function is global. In C however, a variable which is not defined
inside a function is only global to the particular source code file in
which it is defined. For example, we have two files `Foo.c` and `Bar.c`,
and a global variable `MyGlobalVar`:
+-----------------------+------------------------+
| Foo.c | Bar.c |
+=======================+========================+
| ``` c | ``` c |
| int MyGlobalVar; | int GetVarBar(void) |
| | { |
| int GetVarFoo(void) | //wrong! |
| { | return MyGlobalVar; |
| //right! | } |
| return MyGlobalVar; | ``` |
| } | |
| ``` | |
+-----------------------+------------------------+
In the example above, the variable `MyGlobalVar` is visible inside the
file `Foo.c`, but is not visible inside the file `Bar.c`. To make
`MyGlobalVar` visible inside all project files, we need to use the
`extern` keyword, which we will discuss below.
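A sketch of the fix for `Bar.c`, using the `extern` declaration discussed below:

``` c
/* Bar.c, fixed: the extern declaration tells the compiler that
 * MyGlobalVar is defined in some other source file (here, Foo.c). */
extern int MyGlobalVar;

int GetVarBar(void)
{
    return MyGlobalVar;   /* now resolves to the definition in Foo.c */
}
```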
### \"`static`\" Variables
The C programming language specifies a special keyword \"`static`\" to
define variables which are lexical to the function (they cannot be
referenced from outside the function) but they maintain their values
across function calls. Unlike ordinary lexical variables which are
created on the stack when the function is entered and are destroyed from
the stack when the function returns, static variables are created once
and are never destroyed.
``` c
int MyFunction(void)
{
static int x;
...
}
```
Static variables in C are global variables, except the compiler takes
precautions to prevent the variable from being accessed outside of the
parent function\'s scope. Like global variables, static variables are
referenced using a hardcoded memory address, not a location on the stack
like ordinary variables. However unlike globals, static variables are
only used inside a single function. There is no difference between a
global variable which is only used in a single function, and a static
variable inside that same function. However, it\'s good programming
practice to limit the number of global variables, so when disassembling,
you should prefer interpreting these variables as static instead of
global.
### \"`extern`\" Variables
The `extern` keyword is used by a C compiler to indicate that a
particular variable is global to the entire project, not just to a
single source code file. Besides this distinction, and the slightly
larger lexical scope of extern variables, they should be treated like
ordinary global variables.
In static libraries, variables marked as being extern might be available
for use with programs which are linked to the library.
### Global Variables Summary
Here is a summary of some points about global variables:
`static` variables : Referenced through a hard-coded memory address; lexical scope is one function only. In disassembly, indistinguishable from global variables except that they are only used in one function. A global variable is only "static" if it is never used in another function.\
Global variables : Referenced through a hard-coded memory address; lexical scope is one source code file only. This can help you when disassembling to get a rough estimate for how the original source code was arranged.\
`extern` variables : Referenced through a hard-coded memory address; lexical scope is the entire project. Extern variables are available for use in all functions of a project, and in programs linked to the project (external libraries, for example).
When disassembling, a hard-coded memory address should be considered to
be an ordinary global variable unless you can determine from the scope
of the variable that it is static or extern.
## Constants
Variables qualified with the **const** keyword (in C) are frequently
stored in the .data section of the executable. Constant values can be
distinguished because they are initialized at the beginning of the
program, and are never modified by the program itself. For this reason,
some compilers may choose to store constant variables (especially
strings) in the .text section of the executable, thus allowing the
sharing of these variables across multiple instances of the same
process. This creates a big problem for the reverser, who now has to
decide whether the code he\'s looking at is part of a constant variable
or part of a subroutine.
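A small illustration (where such data actually ends up is compiler- and flag-dependent):

``` C
/* const-qualified data is initialized once and never written by the
 * program; it may land in .data, .rdata, or even .text. */
const int limit = 100;
const char banner[] = "my number: ";
```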
## \"Volatile\" memory
In C and C++, variables can be declared \"volatile,\" which tells the
compiler that the memory location can be accessed from *external* or
*concurrent* processes, and that the compiler should not perform any
optimizations on the variable. For instance, if multiple threads were
all accessing and modifying a single global value, it would be bad for
the compiler to cache that variable in a register and flush it to memory
only infrequently. In general, a volatile variable must be written back
to memory after every modification and re-read before every use, to
ensure that the most current version of the data is in memory when other
processes come to look for it.
It is not always possible to determine from a disassembly listing
whether a given variable is a volatile variable. However, if the
variable is accessed frequently from memory, and its value is constantly
updated in memory (especially if there are free registers available),
that\'s a good hint that the variable might be volatile.
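A minimal sketch of where volatile matters:

``` C
/* Without volatile, an optimizer could cache the flag in a register and
 * spin forever; with volatile, every test is a fresh load from memory. */
volatile int stop_requested = 0;

void wait_for_stop(void)
{
    while (!stop_requested)
        ;   /* re-reads stop_requested on every iteration */
}
```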
## Simple Accessor Methods
An Accessor Method is a tool derived from OO theory and practice. In its
most simple form, an accessor method is a function that receives no
parameters (or perhaps simply an offset), and returns the value of a
variable. Accessor and Setter methods are ways to restrict access to
certain variables. The only standard way to get the value of the
variable is to use the Accessor.
Accessors can prevent some simple problems, such as out-of-bounds array
indexing, and using uninitialized data. Frequently, Accessors contain
little or no error-checking.
Here is an example:
``` asm
push ebp
mov ebp, esp
mov eax, [ecx + 8] ;THISCALL function, passes "this" pointer in ecx
mov esp, ebp
pop ebp
ret
```
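In C terms, that listing corresponds roughly to the following (the structure layout and all names are assumptions made for illustration):

``` C
/* Model of the accessor: return the member at offset +8 from the object
 * pointer, with no error checking. */
struct Obj { int a; int b; int value; };   /* "value" sits at offset +8 */

int get_value(struct Obj *self)            /* "self" plays the role of ecx */
{
    return self->value;                    /* mov eax, [ecx + 8] */
}
```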
Because they are so simple, accessor methods are frequently heavily
optimized (they generally don\'t need a stack frame), and are even
occasionally *inlined* by the compiler.
## Simple Setter (Manipulator) Methods
Setter methods are the antithesis of an accessor method, and provide a
unified way of altering the value of a given variable. Setter methods
will often take as a parameter the value to be set to the variable,
although some methods (Initializers) simply set the variable to a
pre-defined value. Setter methods often do bounds checking, and error
checking on the variable before it is set, and frequently either a)
return no value, or b) return a simple boolean value to determine
success.
Here is an example:
``` asm
push ebp
mov ebp, esp
cmp [ebp + 8], 0
je error
mov eax, [ebp + 8]
mov [ecx + 0], eax
mov eax, 1
jmp end
:error
mov eax, 0
:end
mov esp, ebp
pop ebp
ret
```
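Roughly the same logic in C (again, the layout and names are assumptions for illustration):

``` C
/* Model of the setter: reject a zero argument, otherwise store it into
 * the object and return a success flag. */
struct Obj2 { int value; };                /* "value" sits at offset +0 */

int set_value(struct Obj2 *self, int v)    /* "self" plays the role of ecx */
{
    if (v == 0)
        return 0;      /* je error ; mov eax, 0 */
    self->value = v;   /* mov [ecx + 0], eax    */
    return 1;          /* mov eax, 1            */
}
```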
# X86 Disassembly/Data Structures
## Data Structures
Few programs can work by using simple memory storage; most need to
utilize complex data objects, including **pointers**, **arrays**,
**structures**, and other complicated types. This chapter will talk
about how compilers implement complex data objects, and how the reverser
can identify these objects.
## Arrays
Arrays are simply a storage scheme for multiple data objects of the same
type. Data objects are stored sequentially, often as an offset from a
pointer to the beginning of the array. Consider the following C code:
``` C
x = array[25];
```
Which (assuming one-byte array elements; an int array would scale the
offset by 4) is identical to the following asm code:
``` asm
mov ebx, $array
mov eax, [ebx + 25]
mov $x, eax
```
Now, consider the following example:
``` C
int MyFunction1()
{
int array[20];
...
```
This (roughly) translates into the following asm pseudo-code:
``` asm
:_MyFunction1
push ebp
mov ebp, esp
sub esp, 80 ;the whole array is created on the stack!!!
lea $array, [esp + 0] ;a pointer to the array is saved in the array variable
...
```
The entire array is created on the stack, and the pointer to the bottom
of the array is stored in the variable \"array\". An optimizing compiler
could ignore the last instruction, and simply refer to the array via a
+0 offset from esp (in this example), but we will do things verbosely.
Likewise, consider the following example:
``` C
void MyFunction2()
{
char buffer[4];
...
```
This will translate into the following asm pseudo-code:
``` asm
:_MyFunction2
push ebp
mov ebp, esp
sub esp, 4
lea $buffer, [esp + 0]
...
```
Which looks harmless enough. But, what if a program inadvertently
accesses buffer\[4\]? What about buffer\[5\]? What about buffer\[8\]?
This is the makings of a buffer overflow vulnerability, which may be
discussed in a later section. However, this section won\'t talk about
security issues, and instead will focus only on data structures.
### Spotting an Array on the Stack
To spot an array on the stack, look for large amounts of local storage
allocated on the stack (\"sub esp, 1000\", for example), and look for
large portions of that data being accessed by an offset from a different
register from esp. For instance:
``` asm
:_MyFunction3
push ebp
mov ebp, esp
sub esp, 256
lea ebx, [esp + 0x00]
mov [ebx + 0], 0x00
```
is a good sign of an array being created on the stack. Granted, an
optimizing compiler might just want to offset from esp instead, so you
will need to be careful.
### Spotting an Array in Memory
Arrays in memory, such as global arrays, or arrays which have initial
data (remember, initialized data is created in the .data section in
memory), will be accessed as offsets from a hardcoded address in
memory:
``` asm
:_MyFunction4
push ebp
mov ebp, esp
mov esi, 0x77651004
mov ebx, 0x00000000
mov [esi + ebx], 0x00
```
It needs to be kept in mind that structures and classes might be
accessed in a similar manner, so the reverser needs to remember that all
the data objects in an array are of the same type, that they are
sequential, and they will often be handled in a loop of some sort. Also,
(and this might be the most important part), each element in an array
may be accessed by a *variable offset from the base*.
Since most times an array is accessed through a computed index, not
through a constant, the compiler will likely use the following to access
an element of the array:
``` asm
mov [ebx + eax], 0x00
```
If the array holds elements larger than 1 byte (for char), the index
will need to be multiplied by the size of the element, yielding code
similar to the following:
``` asm
mov [ebx + eax * 4], 0x11223344 ; access to an array of DWORDs, e.g. arr[i] = 0x11223344
...
imul eax, 20                    ; access to an array of structs, each 20 bytes long
lea edi, [ebx + eax]            ; e.g. ptr = &arr[i]
```
This pattern can be used to distinguish between accesses to arrays and
accesses to structure data members.
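For reference, C along these lines (the names are invented) typically produces the scaled-index patterns shown above:

``` C
/* Source that gives rise to the scaled-index access patterns above. */
unsigned int dwords[50];
struct rec { char bytes[20]; } recs[10];

void touch(int i)
{
    struct rec *ptr;
    dwords[i] = 0x11223344;   /* mov [ebx + eax*4], 0x11223344       */
    ptr = &recs[i];           /* imul eax, 20 ; lea edi, [ebx + eax] */
    ptr->bytes[0] = 0;
}
```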
## Structures
All C programmers are going to be familiar with the following syntax:
``` C
struct MyStruct
{
int FirstVar;
double SecondVar;
unsigned short int ThirdVar;
}
```
It\'s called a **structure** (Pascal programmers may know a similar
concept as a \"record\").
Structures may be very big or very small, and they may contain all sorts
of different data. Structures may look very similar to arrays in memory,
but a few key points need to be remembered: structures do not need to
contain data fields of all the same type, structure fields are often
4-byte aligned (not sequential), and each element in a structure has its
own offset. It therefore makes no sense to reference a structure element
by a variable offset from the base.
Take a look at the following structure definition:
``` C
struct MyStruct2
{
long value1;
short value2;
long value3;
}
```
Assuming the pointer to the base of this structure is loaded into ebx,
we can access these members in one of two schemes:
``` asm
;data is 32-bit aligned
[ebx + 0] ;value1
[ebx + 4] ;value2
[ebx + 8] ;value3
```

``` asm
;data is "packed"
[ebx + 0] ;value1
[ebx + 4] ;value2
[ebx + 6] ;value3
```
The first arrangement is the most common, but it clearly leaves open an
entire memory word (2 bytes) at offset +6, which is not used at all.
Compilers occasionally allow the programmer to manually specify the
offset of each data member, but this isn\'t always the case. The second
example also has the benefit that the reverser can easily identify that
each data member in the structure is a different size.
Consider now the following function:
``` asm
:_MyFunction
push ebp
mov ebp, esp
mov ecx, SS:[ebp + 8]
mov [ecx + 0], 0x0000000A
mov [ecx + 4], ecx
mov [ecx + 8], 0x0000000A
mov esp, ebp
pop ebp
ret
```
The function clearly takes a pointer to a data structure as its first
argument. Also, each data member is the same size (4 bytes), so how can
we tell if this is an array or a structure? To answer that question, we
need to remember one important distinction between structures and
arrays: the elements in an array are all of the same type, the elements
in a structure do not need to be the same type. Given that rule, it is
clear that one of the elements in this structure is a pointer (it points
to the base of the structure itself!) and the other two fields are
loaded with the hex value 0x0A (10 in decimal), which is certainly not a
valid pointer on any system I have ever used. We can then partially
recreate the structure and the function code below:
``` C
struct MyStruct3
{
long value1;
void *value2;
long value3;
}
void MyFunction2(struct MyStruct3 *ptr)
{
ptr->value1 = 10;
ptr->value2 = ptr;
ptr->value3 = 10;
}
```
As a quick aside note, notice that this function doesn\'t load anything
into eax, and therefore it doesn\'t return a value.
## Advanced Structures
Let\'s say we have the following situation in a function:
``` asm
:MyFunction1
push ebp
mov ebp, esp
mov esi, [ebp + 8]
lea ecx, SS:[esi + 8]
...
```
what is happening here? First, esi is loaded with the value of the
function\'s first parameter (ebp + 8). Then, ecx is loaded with a
pointer to the offset +8 from esi. It looks like we have 2 pointers
accessing the same data structure!
The function in question could easily be one of the following 2
prototypes:
``` C
struct MyStruct1
{
DWORD value1;
DWORD value2;
struct MySubStruct1
{
...
```
``` C
struct MyStruct2
{
DWORD value1;
DWORD value2;
DWORD array[LENGTH];
...
```
One pointer offset from another pointer in a structure often means a
complex data structure. There are far too many combinations of
structures and arrays, however, so this wikibook will not spend too much
time on this subject.
## Identifying Structs and Arrays
Array elements and structure fields are both accessed as offsets from
the array/structure pointer. When disassembling, how do we tell these
data structures apart? Here are some pointers (a short code contrast
follows this list):
1. Array elements are not meant to be accessed individually. Array
    elements are typically accessed using a variable offset.
2. Arrays are frequently accessed in a loop. Because arrays typically
    hold a series of similar data items, the best way to access them all
    is usually a loop. Specifically,
    `for(x = 0; x < length_of_array; x++)` style loops are often used to
    access arrays, although there can be others.
3. All the elements in an array have the same data type.
4. Struct fields are typically accessed using constant offsets.
5. Struct fields are typically not accessed in order, and are also not
    accessed using loops.
6. Struct fields are not typically all the same data type, or the same
    data width.
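A minimal, illustrative contrast (all names are invented for the example, and the struct offsets assume a typical 32-bit layout):

``` C
/* An array walked with a variable index inside a loop, versus a struct
 * touched at fixed, constant offsets. */
int arr[16];
struct point { int x; short tag; double w; } p;

void fill(void)
{
    int i;
    for (i = 0; i < 16; i++)   /* variable offset, same type, in a loop */
        arr[i] = 0;

    p.x   = 1;                 /* constant offset +0                 */
    p.tag = 2;                 /* constant offset +4                 */
    p.w   = 3.0;               /* constant offset +8 (after padding) */
}
```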
## Linked Lists and Binary Trees
Two common structures used when programming are linked lists and binary
trees. These two structures in turn can be made more complicated in a
number of ways. Shown in the images below are examples of a linked list
structure and a binary tree structure.
![](C_language_linked_list.png "C_language_linked_list.png"){width="400"}
![](tree-data-structure.svg "tree-data-structure.svg"){width="300"}
Each node in a linked list or a binary tree contains some amount of
data, and a pointer (or pointers) to other nodes. Consider the following
asm code example:
``` asm
loop_top:
cmp [ebp + 0], 10
je loop_end
mov ebp, [ebp + 4]
jmp loop_top
loop_end:
```
At each loop iteration, a data value at \[ebp + 0\] is compared with the
value 10. If the two are equal, the loop is ended. If the two are not
equal, however, the pointer in ebp is updated with a pointer at an
offset from ebp, and the loop is continued. This is a classic
linked-list search technique. It is analogous to the following C code:
``` c
struct node
{
int data;
struct node *next;
};
struct node *x;
...
while(x->data != 10)
{
x = x->next;
}
```
Binary trees are the same, except two different pointers will be used
(the right and left branch pointers).
# X86 Disassembly/Objects and Classes
## Object-Oriented Programming
**Object-Oriented** (OO) programming provides for us a new unit of
program structure to contend with: the **Object**. This chapter will
look at disassembled classes from C++. This chapter will not deal
directly with COM, but it will work to set a lot of the groundwork for
future discussions in reversing COM components (Windows users only).
## Classes
A basic class that has not inherited anything can be broken into two
parts, the variables and the methods. The non-static variables are
shoved into a simple data structure while the methods are compiled and
called like every other function.
When you start adding in inheritance and polymorphism, things get a
little more complicated. For the purposes of simplicity, the structure
of an object will be described in terms of having no inheritance. At the
end, however, inheritance and polymorphism will be covered.
### Variables
All static variables defined in a class reside in the static region of
memory for the entire duration of the application. Every other variable
defined in the class is placed into a data structure known as an object.
Typically when the constructor is called, the variables are placed into
the object in sequential order, see **Figure 1**.
`<small>`{=html}A:`</small>`{=html}
``` cpp
class ABC123 {
public:
int a, b, c;
ABC123():a(1), b(2), c(3) {};
};
```
`<small>`{=html}B:`</small>`{=html}
``` asm
0x00200000 dd 1 ;int a
0x00200004 dd 2 ;int b
0x00200008 dd 3 ;int c
```
```{=html}
<table width=95%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
`<small>`{=html}**Figure 1**: An example of what an object looks like in
memory`</br>`{=html} **Figure 1.A**: The definition for the class
\"ABC123.\" This class has three integers, a, b, and c. The constructor
sets \'a\' to equal 1, \'b\' to equal 2, and \'c\' to equal
3.`</br>`{=html} **Figure 1.B**: How the object ABC123 might be placed
in memory, ordering the variables from the class sequentially. At memory
address 0x00200000 there is a double word integer (32 bits) with a value
of 1, representing the variable \'a\'. Memory address 0x00200004 has a
double word integer with the value of 2, representing the variable
\'b\'. And at memory address 0x00200008 there is a double word integer
with a value of 3, representing the variable \'c\'.`</small>`{=html}
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
However, the compiler typically needs the variables to be separated into
sizes that are multiples of a word (2 bytes) in order to locate them.
Not all variables fit this requirement, namely char arrays; some unused
bytes might be used to pad the variables so they meet this size
requirement. This is illustrated in **Figure 2**.
`<small>`{=html}A:`</small>`{=html}
``` cpp
class ABC123{
public:
int a;
char b[3];
double c;
ABC123():a(1),c(3) { strcpy(b,"02"); };
};
```
`<small>`{=html}B:`</small>`{=html}
``` asm
0x00200000 dd 1 ;int a ; offset = abc123 + 0*word_size
0x00200004 db '0' ;b[0] = '0' ; offset = abc123 + 2*word_size
0x00200005 db '2' ;b[1] = '2'
0x00200006 db 0 ;b[2] = null
0x00200007 db 0 ;<= UNUSED BYTE
0x00200008 dd 0x00000000 ;double c, lower 32 bits ; offset = abc123 + 4*word_size
0x0020000C dd 0x40080000 ;double c, upper 32 bits
```
```{=html}
<table width=95%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
`<small>`{=html}**Figure 2**: An example of an object having a padded
variable`</br>`{=html} **Figure 2.A**: A new definition for the class
\"ABC123.\" This class has one 32 bit integer, a. One 3 byte char array,
b. And one 64 bit double, c. The constructor sets \'a\' to 1, \'b\' to
\"02\", and \'c\' to 3.`</br>`{=html} **Figure 2.B** Shows how ABC123
might be stored in memory. The first double word in the object is the
variable \'a\' at location 0x00200000 with a value of 1. Variable \'b\'
starts at the memory location 0x00200004. Its three bytes contain
three chars, \'0\', \'2\', and the null value. The next available
address, 0x00200007, is unused since it\'s not a multiple of a word. The
last variable, \'c\', starts at 0x00200008 and is two double words (64
bits) long. It contains the value 3.`</small>`{=html}
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
In order for the application to access one of these object variables, an
object pointer needs to be offset to find the desired variable. The
offset of every variable is known by the compiler and written into the
object code wherever it\'s needed. **Figure 3** shows how to offset a
pointer to retrieve variables.
``` asm
;abc123 = address of the object in memory
mov eax, [abc123]   ;eax = a ;offset = abc123+0*word_size = abc123
mov ebx, [abc123+4] ;ebx = b ;offset = abc123+2*word_size = abc123+4
mov ecx, [abc123+8] ;ecx = c ;offset = abc123+4*word_size = abc123+8
```
**Figure 3**: This shows how to offset a pointer to retrieve variables.
The first line places the value of variable \'a\' into eax. The second
line places the value of variable \'b\' into ebx. And the last line
places the value of variable \'c\' into ecx.
### Methods
At a low level, there is almost no difference between a function and a
method. When decompiling, it can sometimes be hard to tell a difference
between the two. They both reside in the text memory space, and both are
called the same way. An example of how a method is called can be seen in
**Figure 4**.
`<small>`{=html}A:`</small>`{=html}
``` cpp
//method call
abc123->foo(1, 2, 3);
```
`<small>`{=html}B:`</small>`{=html}
``` asm
push 3 ; int c
push 2 ; int b
push 1 ; int a
push [ebp-4] ; the address of the object
call 0x00434125 ; call to method
```
```{=html}
<table width=95%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
`<small>`{=html}**Figure 4**: A method call.`</br>`{=html} **Figure
4.A**: A method call in the C++ syntax. abc123 is a pointer to an object
that has a method, foo(). foo() is taking three integer arguments, 1, 2,
and 3.`</br>`{=html} **Figure 4.B** The same method call in x86
assembly. It takes four arguments, the address of the object and three
integers. The pointer to the object is at ebp-4 and the method is at
address 0x00434125. `</small>`{=html}
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
A notable characteristic in a method call is the address of the object
being passed in as an argument. This, however, is not always a good
indicator. **Figure 5** shows a function whose first argument is an
object passed in by reference. The result is a function call that looks
identical to a method call.
`<small>`{=html}A:`</small>`{=html}
``` cpp
//function call
foo(abc123, 1, 2, 3);
```
`<small>`{=html}B:`</small>`{=html}
``` asm
push 3 ; int c
push 2 ; int b
push 1 ; int a
push [ebp+4] ; the address of the object
call 0x00498372 ; call to function
```
```{=html}
<table width=95%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
`<small>`{=html}**Figure 5**: A function call.`</br>`{=html} **Figure
5.A:** A function call in the C++ syntax. foo() is taking four
arguments, one pointer and three integer arguments.`</br>`{=html}
**Figure 5.B:** The same function call in x86 assembly. It takes four
arguments, the address of the object and three integers. The pointer to
the object is at ebp+4 and the function is at address 0x00498372.
`</small>`{=html}
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
### Inheritance & Polymorphism
Inheritance and polymorphism completely change the structure of a
class: the object no longer contains just variables, it also contains
pointers to the inherited virtual methods. This is due to the fact that
polymorphism requires the address of a method or inner object to be
figured out at runtime.
Take **Figure 6** into consideration. How does the application know to
call D::one or C::one? The answer is that the compiler figures out a
convention in which to order variables and method pointers inside the
object such that when they\'re referenced, the offsets are the same for
any object that has inherited its methods and variables.
```{=html}
<center>
```
```{=html}
<table border=0 margin=0 padding=0 width=95%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
``` cpp
A *obj[2];
obj[0] = new C();
obj[1] = new D();
for(int i=0; i<2; i++)
obj[i]->one();
```
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr>
```
```{=html}
<td>
```
`<small>`{=html}**Figure 6**: A small C++ polymorphic loop that calls a
method, one(). The classes C and D both inherit from an abstract class, A.
For this code to work, the class A must declare a virtual method named
\"one.\" `</small>`{=html}
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
```{=html}
</center>
```
The abstract class A acts as a blueprint for the compiler, defining an
expected structure for any class that inherits it. Every variable
defined in class A and every virtual method defined in A will have the
exact same offset for any of its children. **Figure 7** declares a
possible inheritance scheme as well as its structure in memory. Notice
how the offset to C::one is the same as the offset to D::one, and the
offset to C\'s copy of A::a is the same as the offset to D\'s copy.
Because of this, our polymorphic loop can simply iterate through the
array of pointers and know exactly where to find each method.
`<small>`{=html}A:`</small>`{=html}
``` cpp
class A{
public:
int a;
virtual void one() = 0;
};
class B{
public:
int b;
int c;
virtual void two() = 0;
};
class C: public A{
public:
int d;
void one();
};
class D: public A, public B{
public:
int e;
void one();
void two();
};
```
`<small>`{=html}B:`</small>`{=html}
``` asm
;Object C
0x00200000 dd 0x00423848 ; address of C::one ;offset = 0*word_size
0x00200004 dd 1 ; C's copy of A::a ;offset = 2*word_size
0x00200008 dd 4 ; C::d ;offset = 4*word_size
;Object D
0x00200100 dd 0x00412348 ; address of D::one ;offset = 0*word_size
0x00200104 dd 1 ; D's copy of A::a ;offset = 2*word_size
0x00200108 dd 0x00431255 ; address of D::two ;offset = 4*word_size
0x0020010C dd 2 ; D's copy of B::b ;offset = 6*word_size
0x00200110 dd 3 ; D's copy of B::c ;offset = 8*word_size
0x00200114 dd 5 ; D::e ;offset = 10*word_size
```
```{=html}
<table width=95%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
`<small>`{=html}**Figure 7**: A polymorphic inheritance
scheme.`</br>`{=html} **Figure 7.A** defines the inheritance scheme. It
shows that class C inherits class A, and class D inherits class A and
class B.`</br>`{=html} **Figure 7.B** shows how the inheritance scheme
might be structured in memory. Class C\'s object has everything that was
declared in class A in the first two double words. The remainder of the
object was defined by class C. Class D\'s object also has everything
that was declared in class A in the first two double words. Then the
next three double words are everything declared in class B. And the last
double word is the variable defined by class D.`</small>`{=html}
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
## Classes Vs. Structs
# X86 Disassembly/Floating Point Numbers
## Floating Point Numbers
This page will talk about how **floating point** numbers are used in
assembly language constructs. This page will not talk about new
constructs, it will not explain what the FPU instructions do, how
floating point numbers are stored or manipulated, or the differences in
floating-point data representations. However, this page will demonstrate
briefly how floating-point numbers are used in code and data structures
that we have already considered.
The x86 architecture does not have any registers specifically for
floating point numbers, but it does have a special stack for them. The
floating point stack is built directly into the processor, and has
access speeds similar to those of ordinary registers. Notice that the
FPU stack is not the same as the regular system stack.
## Calling Conventions
With the addition of the floating-point stack, there is an entirely new
dimension for passing parameters and returning values. We will examine
our calling conventions here, and see how they are affected by the
presence of floating-point numbers. These are the functions that we will
be assembling, using both GCC and cl.exe:
``` C
__cdecl double MyFunction1(double x, double y, float z)
{
return (x + 1.0) * (y + 2.0) * (z + 3.0);
}
__fastcall double MyFunction2(double x, double y, float z)
{
return (x + 1.0) * (y + 2.0) * (z + 3.0);
}
__stdcall double MyFunction3(double x, double y, float z)
{
return (x + 1.0) * (y + 2.0) * (z + 3.0);
}
```
### CDECL
Here is the cl.exe assembly listing for MyFunction1:
``` asm
PUBLIC _MyFunction1
PUBLIC __real@3ff0000000000000
PUBLIC __real@4000000000000000
PUBLIC __real@4008000000000000
EXTRN __fltused:NEAR
; COMDAT __real@3ff0000000000000
CONST SEGMENT
__real@3ff0000000000000 DQ 03ff0000000000000r ; 1
CONST ENDS
; COMDAT __real@4000000000000000
CONST SEGMENT
__real@4000000000000000 DQ 04000000000000000r ; 2
CONST ENDS
; COMDAT __real@4008000000000000
CONST SEGMENT
__real@4008000000000000 DQ 04008000000000000r ; 3
CONST ENDS
_TEXT SEGMENT
_x$ = 8 ; size = 8
_y$ = 16 ; size = 8
_z$ = 24 ; size = 4
_MyFunction1 PROC NEAR
; Line 2
push ebp
mov ebp, esp
; Line 3
fld QWORD PTR _x$[ebp]
fadd QWORD PTR __real@3ff0000000000000
fld QWORD PTR _y$[ebp]
fadd QWORD PTR __real@4000000000000000
fmulp ST(1), ST(0)
fld DWORD PTR _z$[ebp]
fadd QWORD PTR __real@4008000000000000
fmulp ST(1), ST(0)
; Line 4
pop ebp
ret 0
_MyFunction1 ENDP
_TEXT ENDS
```
Our first question is this: are the parameters passed on the stack, or
on the floating-point register stack, or some place different entirely?
Key to this question, and to this function is a knowledge of what
**fld** and **fstp** do. fld (Floating-point Load) pushes a floating
point value onto the FPU stack, while fstp (Floating-Point Store and
Pop) moves a floating point value from ST0 to the specified location,
and then pops the value from ST0 off the stack entirely. Remember that
**double** values in cl.exe are treated as 8-byte storage locations
(QWORD), while floats are only stored as 4-byte quantities (DWORD). It
is also important to remember that floating point numbers are not stored
in a human-readable form in memory, even if the reader has a solid
knowledge of binary. Remember, these aren\'t integers. Unfortunately,
the exact format of floating point numbers is well beyond the scope of
this chapter.
x is offset +8, y is offset +16, and z is offset +24 from ebp.
Therefore, z is pushed first, x is pushed last, and the parameters are
passed right-to-left on the *regular stack* not the floating point
stack. To understand how a value is returned however, we need to
understand what **fmulp** does. fmulp is the \"Floating-Point Multiply
and Pop\" instruction. It performs the instructions:
`ST1 := ST1 * ST0`\
`FPU POP ST0`
This multiplies ST(1) and ST(0) and stores the result in ST(1). Then,
ST(0) is marked empty and the stack pointer is incremented, so the old
ST(1) becomes the new top of the stack. Thus the top 2 values are multiplied
together, and the result is stored on the top of the stack. Therefore,
in our instruction above, \"fmulp ST(1), ST(0)\", which is also the last
instruction of the function, we can see that the last result is stored
in ST0. Therefore, floating point parameters are passed on the regular
stack, but floating point results are passed on the FPU stack.
One final note is that MyFunction2, shown below in the FASTCALL section,
cleans its own stack, as indicated by the **ret 20** instruction at the
end of its listing. Because none of its parameters are passed in
registers, that function looks exactly like what we would expect an
STDCALL function to look like: parameters passed on the stack from
right-to-left, and the function cleaning its own stack. We will see below
that this is actually a correct assumption.
For comparison, here is the GCC listing:
``` asm
LC1:
.long 0
.long 1073741824
.align 8
LC2:
.long 0
.long 1074266112
.globl _MyFunction1
.def _MyFunction1; .scl 2; .type 32; .endef
_MyFunction1:
pushl %ebp
movl %esp, %ebp
subl $16, %esp
fldl 8(%ebp)
fstpl -8(%ebp)
fldl 16(%ebp)
fstpl -16(%ebp)
fldl -8(%ebp)
fld1
faddp %st, %st(1)
fldl -16(%ebp)
fldl LC1
faddp %st, %st(1)
fmulp %st, %st(1)
flds 24(%ebp)
fldl LC2
faddp %st, %st(1)
fmulp %st, %st(1)
leave
ret
.align 8
```
This is a very difficult listing, so we will step through it (albeit
quickly). 16 bytes of extra space is allocated on the stack. Then, using
a combination of fldl and fstpl instructions, the first 2 parameters are
moved from offsets +8 and +16, to offsets -8 and -16 from ebp. Seems
like a waste of time, but remember, optimizations are off. **fld1**
loads the floating point value 1.0 onto the FPU stack. **faddp** then
adds the top of the stack (1.0) to the value in ST1 (\[ebp - 8\],
originally \[ebp + 8\]).
### FASTCALL
Here is the cl.exe listing for MyFunction2:
``` asm
PUBLIC @MyFunction2@20
PUBLIC __real@3ff0000000000000
PUBLIC __real@4000000000000000
PUBLIC __real@4008000000000000
EXTRN __fltused:NEAR
; COMDAT __real@3ff0000000000000
CONST SEGMENT
__real@3ff0000000000000 DQ 03ff0000000000000r ; 1
CONST ENDS
; COMDAT __real@4000000000000000
CONST SEGMENT
__real@4000000000000000 DQ 04000000000000000r ; 2
CONST ENDS
; COMDAT __real@4008000000000000
CONST SEGMENT
__real@4008000000000000 DQ 04008000000000000r ; 3
CONST ENDS
_TEXT SEGMENT
_x$ = 8 ; size = 8
_y$ = 16 ; size = 8
_z$ = 24 ; size = 4
@MyFunction2@20 PROC NEAR
; Line 7
push ebp
mov ebp, esp
; Line 8
fld QWORD PTR _x$[ebp]
fadd QWORD PTR __real@3ff0000000000000
fld QWORD PTR _y$[ebp]
fadd QWORD PTR __real@4000000000000000
fmulp ST(1), ST(0)
fld DWORD PTR _z$[ebp]
fadd QWORD PTR __real@4008000000000000
fmulp ST(1), ST(0)
; Line 9
pop ebp
ret 20 ; 00000014H
@MyFunction2@20 ENDP
_TEXT ENDS
```
We can see that this function is taking 20 bytes worth of parameters,
because of the \@20 decoration at the end of the function name. This
makes sense, because the function is taking two **double** parameters (8
bytes each), and one **float** parameter (4 bytes). This is a grand
total of 20 bytes. We can notice at first glance, without having to
actually analyze or understand any of the code, that there is only one
register being accessed here: **ebp**. This seems strange, considering
that FASTCALL passes its regular 32-bit arguments in registers. However,
that is not the case here: all the floating-point parameters (even z,
which is a 32-bit float) are passed on the stack. We know this, because
by looking at the code, there is no other place where the parameters
could be coming from.
Notice also that **fmulp** is the last instruction performed again, as
it was in the CDECL example. We can infer then, without investigating
too deeply, that the result is passed at the top of the floating-point
stack.
Notice also that x (offset \[ebp + 8\]), y (offset \[ebp + 16\]) and z
(offset \[ebp + 24\]) are pushed in reverse order: z is first, x is
last. This means that floating point parameters are passed in
right-to-left order, on the stack. This is exactly the same as CDECL
code, although only because we are using floating-point values.
Here is the GCC assembly listing for MyFunction2:
``` asm
.align 8
LC5:
.long 0
.long 1073741824
.align 8
LC6:
.long 0
.long 1074266112
.globl @MyFunction2@20
.def @MyFunction2@20; .scl 2; .type 32; .endef
@MyFunction2@20:
pushl %ebp
movl %esp, %ebp
subl $16, %esp
fldl 8(%ebp)
fstpl -8(%ebp)
fldl 16(%ebp)
fstpl -16(%ebp)
fldl -8(%ebp)
fld1
faddp %st, %st(1)
fldl -16(%ebp)
fldl LC5
faddp %st, %st(1)
fmulp %st, %st(1)
flds 24(%ebp)
fldl LC6
faddp %st, %st(1)
fmulp %st, %st(1)
leave
ret $20
```
This is a tricky piece of code, but luckily we don\'t need to read it
very closely to find what we are looking for. First off, notice that no
other registers are accessed besides **ebp**. Again, GCC passes all
floating point values (even the 32-bit float, z) on the stack. Also, the
floating point result value is passed on the top of the floating point
stack.
We can see again that GCC is doing something strange at the beginning,
taking the values on the stack from \[ebp + 8\] and \[ebp + 16\], and
moving them to locations \[ebp - 8\] and \[ebp - 16\], respectively.
Immediately after being moved, these values are loaded onto the floating
point stack and arithmetic is performed. z isn\'t loaded till later, and
isn\'t ever moved to \[ebp - 24\], despite the pattern.
LC5 and LC6 are constants that most likely represent floating-point
values (the raw numbers 1073741824 and 1074266112 make no sense as
integers in the context of our example functions; in hexadecimal they are
0x40000000 and 0x40080000, the upper halves of the double-precision
encodings of 2.0 and 3.0). Notice also that both LC5 and LC6 contain two
**.long** data items, for a total of 8 bytes of storage each. They are
therefore most definitely **double** values.
### STDCALL
Here is the cl.exe listing for MyFunction3:
``` asm
PUBLIC _MyFunction3@20
PUBLIC __real@3ff0000000000000
PUBLIC __real@4000000000000000
PUBLIC __real@4008000000000000
EXTRN __fltused:NEAR
; COMDAT __real@3ff0000000000000
CONST SEGMENT
__real@3ff0000000000000 DQ 03ff0000000000000r ; 1
CONST ENDS
; COMDAT __real@4000000000000000
CONST SEGMENT
__real@4000000000000000 DQ 04000000000000000r ; 2
CONST ENDS
; COMDAT __real@4008000000000000
CONST SEGMENT
__real@4008000000000000 DQ 04008000000000000r ; 3
CONST ENDS
_TEXT SEGMENT
_x$ = 8 ; size = 8
_y$ = 16 ; size = 8
_z$ = 24 ; size = 4
_MyFunction3@20 PROC NEAR
; Line 12
push ebp
mov ebp, esp
; Line 13
fld QWORD PTR _x$[ebp]
fadd QWORD PTR __real@3ff0000000000000
fld QWORD PTR _y$[ebp]
fadd QWORD PTR __real@4000000000000000
fmulp ST(1), ST(0)
fld DWORD PTR _z$[ebp]
fadd QWORD PTR __real@4008000000000000
fmulp ST(1), ST(0)
; Line 14
pop ebp
ret 20 ; 00000014H
_MyFunction3@20 ENDP
_TEXT ENDS
END
```
x is the highest on the stack, and z is the lowest, therefore these
parameters are passed from right-to-left. We can tell this because x has
the smallest offset (offset \[ebp + 8\]), while z has the largest offset
(offset \[ebp + 24\]). We see also from the final fmulp instruction that
the return value is passed on the FPU stack. This function also cleans
the stack itself, as indicated by the instruction *ret 20*. It is cleaning
exactly 20 bytes off the stack which is, incidentally, the total amount
that we passed to begin with. We can also notice that the implementation
of this function looks exactly like the FASTCALL version of this
function. This is true because FASTCALL only passes DWORD-sized
parameters in registers, and floating point numbers do not qualify. This
means that our assumption above was correct.
Here is the GCC listing for MyFunction3:
``` asm
.align 8
LC9:
.long 0
.long 1073741824
.align 8
LC10:
.long 0
.long 1074266112
.globl @MyFunction3@20
.def @MyFunction3@20; .scl 2; .type 32; .endef
@MyFunction3@20:
pushl %ebp
movl %esp, %ebp
subl $16, %esp
fldl 8(%ebp)
fstpl -8(%ebp)
fldl 16(%ebp)
fstpl -16(%ebp)
fldl -8(%ebp)
fld1
faddp %st, %st(1)
fldl -16(%ebp)
fldl LC9
faddp %st, %st(1)
fmulp %st, %st(1)
flds 24(%ebp)
fldl LC10
faddp %st, %st(1)
fmulp %st, %st(1)
leave
ret $20
```
Here we can also see, after all the opening nonsense, that \[ebp - 8\]
(originally \[ebp + 8\]) is value x, and that \[ebp + 24\], which is never
copied to a local slot, is value z. These parameters are therefore passed
right-to-left. Also, we can deduce from the final fmulp instruction that
the result is passed in ST0. Again, the STDCALL function cleans its own
stack, as we would expect.
### Conclusions
Floating point values are passed as parameters on the stack, and are
passed on the FPU stack as results. Floating point values do not get put
into the general-purpose integer registers (eax, ebx, etc\...), so
FASTCALL functions that only have floating point parameters collapse
into STDCALL functions instead. **double** values are 8-bytes wide, and
therefore will take up 8-bytes on the stack. **float** values however,
are only 4-bytes wide.
## Float to Int Conversions
## FPU Compares and Jumps
# X86 Disassembly/Code Optimization
## Code Optimization
An **optimizing compiler** is perhaps one of the most complicated, most
powerful, and most interesting programs in existence. This chapter will
talk about optimizations, although this chapter will not include a table
of common optimizations.
## Stages of Optimizations
There are two times when a compiler can perform optimizations: first, in
the intermediate representation, and second, during the code generation.
### Intermediate Representation Optimizations
While in the intermediate representation, a compiler can perform various
optimizations, often based on dataflow analysis techniques. For example,
consider the following code fragment:
``` C
x = 5;
if(x != 5)
{
//loop body
}
```
An optimizing compiler might notice that at the point of \"if (x !=
5)\", the value of x is always the constant \"5\". This allows
substituting \"5\" for x resulting in \"5 != 5\". Then the compiler
notices that the resulting expression operates entirely on constants, so
the value can be calculated now instead of at run time, resulting in
optimizing the conditional to \"if (false)\". Finally the compiler sees
that this means the body of the if conditional will never be executed,
so it can omit the entire body of the if conditional altogether.
Consider the reverse case:
``` C
x = 5;
if(x == 5)
{
//loop body
}
```
In this case, the optimizing compiler would notice that the IF
conditional will always be true, and it won\'t even bother writing code
to test x.
### Control Flow Optimizations
Another set of optimization which can be performed either at the
intermediate or at the code generation level are control flow
optimizations. Most of these optimizations deal with the elimination of
useless branches. Consider the following code:
``` C
if(A)
{
if(B)
{
C;
}
else
{
D;
}
end_B:
}
else
{
E;
}
end_A:
```
In this code, a simplistic compiler would generate a jump from the C
block to end_B, and then another jump from end_B to end_A (to get around
the E statements). Clearly jumping to a jump is inefficient, so
optimizing compilers will generate a direct jump from block C to end_A.
This unfortunately will make the code more confusing and will prevent a
clean recovery of the original code. For complex functions, it\'s
possible that one will have to consider the code made of only if()-goto;
sequences, without being able to identify higher level statements like
if-else or loops.
The process of identifying high level statement hierarchies is called
\"code structuring\".
### Code Generation Optimizations
Once the compiler has sifted through all the logical inefficiencies in
your code, the code generator takes over. Often the code generator will
replace certain slow machine instructions with faster machine
instructions.
For instance, the instruction:
``` asm
beginning:
...
loopnz beginning
```
operates *much* slower than the equivalent instruction set:
``` asm
beginning:
...
dec ecx
jne beginning
```
So then why would a compiler ever use a loopxx instruction? The answer
is that most optimizing compilers never use a loopxx instruction, and
therefore as a reverser, you will probably never see one used in real
code.
What about the instruction:
``` asm
mov eax, 0
```
The mov instruction is relatively quick, but a faster part of the
processor is the arithmetic unit. Therefore, it makes more sense to use
the following instruction:
``` asm
xor eax, eax
```
because xor operates in very few processor cycles (and saves three bytes
at the same time), and is therefore faster than a \"mov eax, 0\". The
only drawback of a xor instruction is that it changes the processor
flags, so it cannot be used between a comparison instruction and the
corresponding conditional jump.
## Loop Unwinding
When a loop needs to run for a small, but definite number of iterations,
it is often better to **unwind the loop** in order to reduce the number
of jump instructions performed, and in many cases prevent the
processor\'s branch predictor from failing. Consider the following C
loop, which calls the function `MyFunction()` 5 times:
``` C
for(x = 0; x < 5; x++)
{
MyFunction();
}
```
Converting to assembly, we see that this becomes, roughly:
``` asm
mov eax, 0
loop_top:
cmp eax, 5
jge loop_end
call _MyFunction
inc eax
jmp loop_top
loop_end:
```
Each loop iteration requires the following operations to be performed:
1. Compare the value in eax (the variable \"x\") to 5, and jump to the
   end if greater than or equal
2. Increment eax
3. Jump back to the top of the loop.
Notice that we remove all these instructions if we manually repeat our
call to `MyFunction()`:
``` asm
call _MyFunction
call _MyFunction
call _MyFunction
call _MyFunction
call _MyFunction
```
This new version not only takes up less disk space because it uses fewer
instructions, but also runs faster because fewer instructions are
executed. This process is called **Loop Unwinding**.
## Inline Functions
The C and C++ languages allow the definition of an `inline` type of
function. Inline functions are functions which are treated similarly to
macros. During compilation, calls to an inline function are replaced
with the body of that function, instead of performing a `call`
instruction. In addition to using the `inline` keyword to declare an
inline function, optimizing compilers may decide to make other functions
inline as well.
Function inlining works similarly to loop unwinding for increasing code
performance. A non-inline function requires a call instruction, several
instructions to create a stack frame, and then several more instructions
to destroy the stack frame and return from the function. By copying the
body of the function instead of making a call, the size of the machine
code increases, but the execution time *decreases*.
It is not necessarily possible to determine whether identical portions
of code were created originally as macros, inline functions, or were
simply copy and pasted. However, when disassembling it can make your
work easier to separate these blocks out into separate inline functions,
to help keep the code straight.
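As a small illustration of the idea, consider the following sketch; the
function square() and the constant 5 are made-up examples, not taken from
any particular program.

``` C
#include <stdio.h>

/* A hypothetical inline function. */
static inline int square(int x)
{
    return x * x;
}

int main(void)
{
    /* After inlining, the compiler emits code as if we had written
       "int y = 5 * 5;" -- there is no call instruction and no stack
       frame for square(); the multiplication may even be folded into
       the constant 25 at compile time. */
    int y = square(5);
    printf("%d\n", y);
    return 0;
}
```

In a disassembly, the only trace of square() may therefore be a repeated
multiplication pattern, or nothing at all, which is one reason inlined
code is hard to distinguish from copy-and-pasted code.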
# X86 Disassembly/Code Obfuscation
## Code Obfuscation
**Code Obfuscation** is the act of making the assembly code or machine
code of a program more difficult to disassemble or decompile. The term
\"obfuscation\" is typically used to suggest a deliberate attempt to add
difficulty, but many other practices will cause code to be obfuscated
without that being the intention. Software vendors may attempt to
obfuscate or even encrypt code to prevent reverse engineering efforts.
There are many different types of obfuscations. Notice that many code
optimizations (discussed in the previous chapter) have the side-effect
of making code more difficult to read, and therefore optimizations act
as obfuscations.
## What is Code Obfuscation?
There are many things that obfuscation could be:
- Encrypted code that is decrypted prior to runtime.
- Compressed code that is decompressed prior to runtime.
- Executables that contain encrypted sections, and a simple decrypter.
- Code instructions that are put in a hard-to-read order.
- Code instructions which are used in a non-obvious way.
This chapter will try to examine some common methods of obfuscating
code, but will not necessarily delve into methods to break the
obfuscation.
## Interleaving
Optimizing Compilers will engage in a process called **interleaving** to
try and maximize parallelism in pipelined processors. This technique is
based on two premises:
1. That certain instructions can be executed out of order and still
maintain the correct output
2. That processors can perform certain pairs of tasks simultaneously.
### x86 NetBurst Architecture
The Intel **NetBurst Architecture** divides an x86 processor into 2
distinct parts: the supporting hardware, and the primitive core
processor. The primitive core of a processor contains the ability to
perform some calculations blindingly fast, but not the instructions that
you or I are familiar with. The processor first converts the code
instructions into a form called \"micro-ops\" that are then handled by
the primitive core processor.
The processor can also be broken down into 4 components, or modules,
each of which is capable of performing certain tasks. Since each module
can operate separately, up to 4 separate tasks can be handled
*simultaneously* by the processor core, so long as those tasks can be
performed by each of the 4 modules:
Port0 : Double-speed integer arithmetic, floating point load, memory store
```{=html}
<!-- -->
```
Port1 : Double-speed integer arithmetic, floating point arithmetic
```{=html}
<!-- -->
```
Port2 : memory read
```{=html}
<!-- -->
```
Port3 : memory write (writes to address bus)
So for instance, the processor can simultaneously perform 2 integer
arithmetic instructions in both Port0 and Port1, so a compiler will
frequently go to great lengths to put arithmetic instructions close to
each other. If the timing is just right, up to 4 arithmetic instructions
can be executed in a single instruction period.
Notice however that writing to memory is particularly slow (requiring
the address to be sent by Port3, and the data itself to be written by
Port0). Floating point numbers need to be loaded to the FPU before they
can be operated on, so a floating point load and a floating point
arithmetic instruction cannot operate on a single value in a single
instruction cycle. Therefore, it is not uncommon to see floating point
values loaded, integer values be manipulated, and then the floating
point value be operated on.
## Non-Intuitive Instructions
Optimizing compilers frequently will use instructions that are not
intuitive. Some instructions can perform tasks for which they were not
designed, typically as a helpful side effect. Sometimes, one instruction
can perform a task more quickly than other specialized instructions can.
The only way to know that one instruction is faster than another is to
consult the processor documentation. However, knowing some of the most
common substitutions is very useful to the reverser.
Here are some examples. The code in the first box operates more quickly
than the one in the second, but performs exactly the same tasks.
**Example 1**
*Fast*
``` asm
xor eax, eax
```
*Slow*
``` asm
mov eax, 0
```
**Example 2**
*Fast*
``` asm
shl eax, 3
```
*Slow*
``` asm
push edx
push 8
mul dword [esp]
add esp, 4
pop edx ;# edx is not preserved by "mul"
```
Sometimes such transformations could be made to make the analysis more
difficult:
**Example 3**
*Obfuscated*
``` asm
push $next_instr
jmp $some_function
$next_instr:...
```
*Original*
``` asm
call $some_function
```
**Example 4**
*Obfuscated*
``` asm
pop eax
jmp eax
```
*Original*
``` asm
retn
```
### Common Instruction Substitutions
lea : The lea instruction has the following form:
``` asm
lea dest, (XS:)[reg1 + reg2 * x]
```
Where XS is a segment register (SS, DS, CS, etc\...), reg1 is the base
address, reg2 is a variable offset, and x is a multiplicative scaling
factor. What lea does, essentially, is load the memory address being
pointed to in the second argument, into the first argument. Look at the
following example:
``` asm
mov eax, 1
lea ecx, [eax + 4]
```
Now, what is the value of ecx? The answer is that ecx has the value of
(eax + 4), which is 5. In essence, lea is used here to do addition of a
register and a small constant displacement (one that fits in a signed
byte, -128 to +127); with a scale factor it can also perform limited
multiplication, as the next example shows.
Now, consider:
``` asm
mov eax, 1
lea ecx, [eax+eax*2]
```
Now, ecx equals 3.
The difference is that lea is quick (because it only adds a register and
a small constant), whereas the **add** and **mul** instructions are more
versatile, but slower. lea is used for arithmetic in this fashion very
frequently, even when compilers are not actively optimizing the code.
xor : The xor instruction performs the bit-wise exclusive-or operation on two operands. Consider then, the following example:
``` asm
mov al, 0xAA
xor al, al
```
What does this do? Let\'s take a look at the binary:
` 10101010 ;10101010 = 0xAA`\
`xor 10101010`\
` --------`\
` 00000000`
The answer is that \"xor reg, reg\" sets the register to 0. More
importantly, however, is that \"xor eax, eax\" sets eax to 0 *faster*
(and the generated code instruction is smaller) than an equivalent \"mov
eax, 0\".
mov edi, edi : On a 64-bit x86 system, this instruction clears the high 32-bits of the rdi register.
```{=html}
<!-- -->
```
shl, shr : left-shifting, in binary arithmetic, is equivalent to multiplying the operand by 2. Right-shifting is also equivalent to integer division by 2, although the lowest bit is dropped. in general, left-shifting by $N$ spaces multiplies the operand by $2^N$, and right shifting by $N$ spaces is the same as dividing by $2^N$. One important fact is that resulting number is an integer with no fractional part present. For example:
``` asm
mov al, 31 ; 00011111
shr al, 1 ; 00001111 = 15, not 15.5
```
xchg : xchg exchanges the contents of two registers, or a register and a memory address. A noteworthy point is the fact that xchg operates faster than a move instruction. For this reason, xchg will be used to move a value from a source to a destination, when the value in the source no longer needs to be saved.
As an example, consider this code:
``` asm
mov ebx, eax
mov eax, 0
```
Here, the value in `eax` is stored in `ebx`, and then `eax` is loaded
with the value zero. We can perform the same operation, but using `xchg`
and `xor` instead:
``` asm
xchg eax, ebx
xor eax, eax
```
It may surprise you to learn that the second code example operates
significantly faster than the first one does.
## Obfuscators
There are a number of tools on the market that will automate the process
of code obfuscation. These products will use a number of transformations
to turn a code snippet into a less-readable form without affecting the
program flow itself, though the transformations may increase code size
or execution time.
## Code Transformations
Code transformations are a way of reordering code so that it performs
exactly the same task but becomes more difficult to trace and
disassemble. We can best demonstrate this technique by example. Let\'s
say that we have 2 functions, FunctionA and FunctionB. Both of these two
functions are comprised of 3 separate parts, which are performed in
order. We can break this down as such:
``` C
FunctionA()
{
FuncAPart1();
FuncAPart2();
FuncAPart3();
}
FunctionB()
{
FuncBPart1();
FuncBPart2();
FuncBPart3();
}
```
And we have our main program, that executes the two functions:
``` C
main()
{
FunctionA();
FunctionB();
}
```
Now, we can rearrange these snippets to a form that is much more
complicated (in assembly):
``` asm
main:
jmp FAP1
FBP3: call FuncBPart3
jmp end
FBP1: call FuncBPart1
jmp FBP2
FAP2: call FuncAPart2
jmp FAP3
FBP2: call FuncBPart2
jmp FBP3
FAP1: call FuncAPart1
jmp FAP2
FAP3: call FuncAPart3
jmp FBP1
end:
```
As you can see, this is much harder to read, although it perfectly
preserves the program flow of the original code. This code is much
harder for a human to read, although it isn\'t hard at all for an
automatic disassembler (such as IDA Pro) to read.
## Opaque Predicates
An **Opaque Predicate** is a predicate inside the code whose value cannot
be determined during static analysis. This forces the attacker to perform
a dynamic analysis to understand the result of the line. Typically an
opaque predicate feeds a branch instruction, so that static analysis
cannot determine which code path is taken.
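As a sketch of the idea (a classic textbook example, not taken from any
particular obfuscator), the following C fragment relies on the algebraic
fact that x\*x + x = x\*(x+1) is always even. The branch is therefore
always taken at runtime, but a static analyzer that does not know this
fact must treat both paths as possible.

``` C
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    /* x comes from outside the program, so its value is unknown statically. */
    unsigned int x = (argc > 1) ? (unsigned int)atoi(argv[1]) : 7u;

    /* Opaque predicate: x*(x+1) is a product of two consecutive integers,
       so it is always even and this condition is always true. */
    if (((x * x + x) % 2u) == 0u)
    {
        puts("real code path (always executed)");
    }
    else
    {
        /* Dead code: never executed, but it can be filled with junk bytes
           or misleading calls to confuse a disassembler. */
        puts("junk path");
    }
    return 0;
}
```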
## Code Encryption
Code can be encrypted, just like any other type of data, except that
code can also work to encrypt and decrypt *itself.* Encrypted programs
cannot be directly disassembled. However, such a program can also not be
run directly because the encrypted opcodes cannot be interpreted
properly by the CPU. For this reason, an encrypted program must contain
some sort of method for decrypting itself prior to operation.
The most basic method is to include a small stub program that decrypts
the remainder of the executable, and then passes control to the
decrypted routines.
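As a minimal sketch of the stub idea (the one-byte XOR key 0x5A and the
text payload are made-up stand-ins; a real stub decrypts machine code in
place and then jumps to it), the decryption loop itself can be very small:

``` C
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Stand-in for an encrypted code section. */
    unsigned char payload[] = "pretend these bytes are encrypted opcodes";
    size_t length = strlen((char *)payload);
    size_t i;

    /* "Encrypt" the payload with a one-byte XOR key... */
    for (i = 0; i < length; i++)
        payload[i] ^= 0x5A;

    /* ...and this is the decryption pass a stub would perform at startup,
       restoring the original bytes before transferring control to them. */
    for (i = 0; i < length; i++)
        payload[i] ^= 0x5A;

    printf("%s\n", payload);
    return 0;
}
```

Because the same XOR undoes itself, the stub and the tool that produced
the encrypted executable can share an identical loop.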
### Disassembling Encrypted Code
To disassemble an encrypted executable, you must first determine how the
code is being decrypted. Code can be decrypted in one of two primary
ways:
1. All at once. The entire code portion is decrypted in a single pass,
and left decrypted during execution. Using a debugger, allow the
decryption routine to run completely, and then dump the decrypted
code into a file for further analysis.
2. By Block. The code is encrypted in separate blocks, where each block
may have a separate encryption key. Blocks may be decrypted before
use, and re-encrypted again after use. Using a debugger, you can
attempt to capture all the decryption keys and then use those keys
to decrypt the entire program at once later, or you can wait for the
blocks to be decrypted, and then dump the blocks individually to a
separate file for analysis.
# X86 Disassembly/Debugger Detectors
## Detecting Debuggers
It may come as a surprise that a running program can actually detect the
presence of an attached user-mode debugger. Also, there are methods
available to detect kernel-mode debuggers, although the methods used
depend in large part on which debugger is trying to be detected.
This subject is peripheral to the narrative of this book, and the
section should be considered an optional one for most readers.
## IsDebuggerPresent API
The Win32 API contains a function called \"IsDebuggerPresent\", which
will return a boolean true if the program is being debugged. The
following code snippet will detail a general usage of this function:
``` C
if(IsDebuggerPresent())
{
TerminateProcess(GetCurrentProcess(), 1);
}
```
Of course, it is easy to spot uses of the IsDebuggerPresent() function
in the disassembled code, and a skilled reverser will simply patch the
code to remove this line. For OllyDbg, there are many plugins available
which hide the debugger from this and many other APIs.
## PEB Debugger Check
The Process Environment Block stores the value that IsDebuggerPresent
queries to determine its return value. To avoid suspicion, some
programmers access the value directly from the PEB instead of calling
the API function. The following code snippet shows how to access the
value:
``` asm
mov eax, [fs:0x30]
mov al, [eax+2]
test al, al
jne @DebuggerDetected
```
## Kernel Mode Debugger Check
This check works on both 32-bit and 64-bit Windows, including 7, 8.1, and
10 (and possibly versions before XP).
There is a structure called \_KUSER_SHARED_DATA; at offset 0x2D4 it
contains a field named \'KdDebuggerEnabled\', which is set to 0x03 if a
kernel-mode debugger is active or 0x00 if not.
The base address of the structure is static (0x7FFE0000) across different
Windows versions, even those older than XP.
The field is updated constantly by the kernel; when a kernel debugger is
attached its last 2 bits are set to \'11\'.
The following assembly instruction will work in both 32 and 64-bit
applications:
``` asm
cmp byte ptr ds:[7FFE02D4], 3
je @DebuggerDetected
```
This has quite a few advantages, among them that it relies on a
well-known source of information.
## Timeouts
Debuggers can put break points in the code, and can therefore stop
program execution. A program can detect this, by monitoring the system
clock. If too much time has elapsed between instructions, it can be
determined that the program is being stopped and analyzed (although this
is not always the case). If a program is taking too much time, the
program can terminate.
Notice that preemptive multithreading systems, such as modern Windows
or Linux, will switch away from your program to run other
programs. This is called thread switching. If the system has many
threads to run, or if some threads are hogging processor time, your
program may detect a long delay and may falsely determine that the
program is being debugged.
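As a sketch of a timing check (the 100-millisecond threshold, the function
name, and the use of GetTickCount() are illustrative choices rather than a
standard recipe), a program might bracket a short stretch of code with two
time readings and terminate if the gap is implausibly large:

``` C
#include <windows.h>

void TimingCheck(void)
{
    DWORD start = GetTickCount();   /* milliseconds since system start */
    volatile int i, sum = 0;

    /* A short stretch of work that should finish in well under 100 ms
       unless someone is single-stepping through it. */
    for (i = 0; i < 1000; i++)
        sum += i;

    if (GetTickCount() - start > 100)
    {
        /* Far too slow: assume the program is being stepped through. */
        TerminateProcess(GetCurrentProcess(), 1);
    }
}

int main(void)
{
    TimingCheck();
    return 0;
}
```

As noted above, a heavily loaded system can trip the same check, so real
protections usually combine a timing test with other indicators.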
## Detecting SoftICE
SoftICE is a local kernel debugger, and as such,
it can\'t be detected as easily as a user-mode debugger can be. The
IsDebuggerPresent API function will not detect the presence of SoftICE.
To detect SoftICE, there are a number of techniques that can be used:
1. Search for the SoftICE install directory. If SoftICE is installed,
the user is probably a hacker or a reverser.
2. Detect the presence of **int 1**. SoftICE uses interrupt 1 to debug,
so if interrupt 1 is installed, SoftICE is running.
## Detecting OllyDbg
OllyDbg is a popular 32-bit usermode debugger. Unfortunately, the last
few releases, including the latest version (v1.10) contain a
vulnerability in the handling of the Win32 API function
OutputDebugString(). 1 A
programmer trying to prevent his program from being debugged by OllyDbg
could exploit this vulnerability in order to make the debugger crash.
The author has never released a fix, however there are unofficial
versions and plugins available to protect OllyDbg from being exploited
using this vulnerability.
# Foundations of Computer Science/Introduction
Have you ever wondered what computing is and how a computer works? What
exactly is computer science? Why---beyond the obvious reasons---is it
important? What do computer scientists do? What types of problems do
they work on? What approaches do they use to solve those problems? How,
in general, do computer scientists think?
**Question 1:** What do you think of when you hear \"computer science?\"
Write a paragraph or list, or draw an image or diagram of what comes to
mind.
**Question 2:** What are the parts of computer science that are most
interesting or important to you currently? Why?
When you hear the term \"computer science\" perhaps you think of a
specific computer. Or someone you know who works with computers. Or a
particular computer use, say online games or social networks. There are
many, many different aspects of computing and computer science.
There are a number of reasons why it is useful and important to know
something about computer science. Computers affect many, many
aspects of our lives in different ways. For many people, computers are
playing or will play a significant role in the work they do, in their
recreational pursuits, in how they communicate with others, in their
education, in their health care, etc. Think about the many different
ways you encounter computers and computing, either directly or
indirectly, in your daily life.
What, more specifically, will this book cover? The foremost purpose of
this text is to give you a greater understanding of the fundamentals of
computer science: What is computer science, anyway? Is it the same as
computer programming? What is a computer? For example, most people would
agree that a \"laptop computer\" is a computer, as is a \"tablet
computer\", but what about a smartphone? And how do computers work? For
example, we can store not only numbers and text in computers, but also
images, video files, and audio files; how do computers handle such
disparate data? And what are some interesting and important subareas of
computer science? For example, what is important to know about subareas
such as computer graphics, networking, or databases? And why is any of
this important? Isn\'t it sufficient for most people just to use
computers, rather than have a deeper understanding of computers and
computer science?
These are all fundamental questions about computing, and in this book
we\'ll look at them and other questions. In summary, one purpose of this
book is to provide an overview of computer science that not only exposes
you to computer science fundamentals---such as how a computer works on a
rudimentary level---but also explores why these fundamentals are
important.
There are two parts of this overview that are particularly important:
while the main theme is an overview of computer science, two essential
subthemes are how mathematics is used in computer science and how
computer science affects, and is affected by, society.
Both subthemes fit well in an overview of computer science book.
Computer science relies heavily on mathematics (in fact, some colleges
have computer science and mathematics programs in a joint department).
Certain uses of mathematics in computer science are obvious---for
example, in computational tools such as spreadsheets---but there are
also many less obvious ways that mathematics is essential to computer
science. For example at the lowest level in a computer, data (whether
that data is numeric, text, audio, video, etc.) is all represented in
binary, i.e., as strings of 0\'s and 1\'s. This means that to understand
something very basic about computers you need to understand binary
numbers and operations.
Computers also affect society in many ways, from the use of
computer-generated imagery in films, to large government or commercial
databases, to the multiple societal effects of the Internet. And society
affects computers, for example through user behavior and through
different types of regulation.
While mathematics and technology and society might seem too different to
be included comfortably in the same book, there are actually many
computer science topics that are useful to explore from both
perspectives---in a sense, these different viewpoints are \"two sides of
the same coin.\" For example, one topic in the book is computer
security. Mathematics plays a role in security, for example in
encryption. And computer security also has many societal aspects, for
example national security, infrastructure security, and individual
security. Most of the topics in this text similarly have both
mathematical underpinnings and societal aspects, and exploring these
topics from both perspectives will result in a richer understanding.
## What this book isn\'t
There are a number of different types of introductory computer science
books. So, in addition to explaining what this text is, it is also
useful to state what it is not.
*This is not a programming book.* Programming is a central activity in
computer science, but it is not the whole of computer science. Because
programming is important, we\'ll spend some time on it. However, because
computer science is much more than programming, and because this is an
overview book, that time will be only a small part of this work.
*This is not a computer applications book.* Many other books cover basic
computer applications. For example, a popular choice is teaching how to
use a word processor, a spreadsheet, a database management program, and
presentation software. These and other applications are important parts
of computer science, and so in this book you will get a chance to learn
about some applications that might be new to you. However---like
programming--- using applications is only part of learning about
computer science, and so application use will be only a small part of
this book.
*This is not a \"computer literacy\" or \"computer fluency\" book.*
There are a variety of definitions of computer literacy or computer
fluency. For example, the Wikipedia definition, derived from a report
from the U.S. Congress of Technology Assessment, is \"the knowledge and
ability to use computers and related technology efficiently, with a
range of skills covering levels from elementary use to programming and
advanced problem solving.\"[^1] Parts of this book will involve using
computers to gain a variety of skills. For example, you will do a
variety of computer-related tasks such as performing web searches,
constructing web pages, doing elementary computer programming, and
working with databases. However, this is just one part, rather than the
totality, of the text. So this book shares some characteristics of a
computer literacy book, but overall it has a wider focus than that type
of a textbook.
*This is not a \"great ideas in computer science\" book.* One current
trend in computer science introductory materials is to study computer
science through its important, fundamental ideas.[^2] And this book does
cover some key ideas. For example, an early topic we\'ll study is how
all data in computers, whether those data are numeric, text, video, or
others, are represented within the computer as 0\'s and 1\'s. In
general, the topics in the book are fundamental to computer science.
However, this text also differs from a great ideas book. It is not
focused solely on ideas, but explores broadly a number of
computer-related issues, subtopics, and computer skills. Moreover, this
book focuses more on mathematical thinking, and on technology and
society, than a typical great ideas book would.
In addition to programming, applications, computer fluency, and great
ideas, there are a number of other types of introductory computer
science textbooks. Some survey a variety of computer science topics.
Others focus on professional software development practices. Still
others look at computing through a particular \"lens\" such as
networks or computational biology. And so on. This book has some common
characteristics with these other courses, but also has significant
differences. In particular, the biggest difference is that this book blends
an overview of computer science with a strong emphasis on mathematics,
and on society and technology; this is a balance of emphases that has a
number of advantages, but is not usually seen in introductory computer
science courses.
## What is this book about?
Both mathematical thinking and technology and society are significant
parts of this book. Many textbooks present an introduction to computer
science through programming, or through how computers work, or through
some other aspect of computing. However, there is not a suitable text
that combines an overview of computer science with both sufficient
mathematical and sufficient society and technology emphases.
At first glance, it might seem odd that a book introducing computer
science would deal with liberal education. What does computer science
have to do with liberal education? Understanding computers well involves
exploring them from a variety of different viewpoints. This includes
understanding not only how computers work---including, for example, the
mathematical underpinnings of computer science---but also how they
affect, and are affected by, society. In summary, to have a good
understanding of computers and computer science it is important to
explore them from a variety of perspectives, including the perspectives
embodied in liberal education.
### Mathematical thinking
**Question 3.** What do you think of when you hear the word
\"mathematics?\" Write a paragraph or list, or draw an image or diagram
of what comes to mind.
**Question 4.** Based on your experience with computers, write a list of
some places where mathematics is used in computing.
What do computers and mathematics have in common? Why is it appropriate
for an overview of computer science book to require mathematical
thinking?
Much of the use of mathematics in this book is applying mathematical
ideas and operations to solve computer science problems. There are a
number of important mathematical underpinnings of computer science, and
so understanding computer science involves being able to solve
mathematical problems involving these underpinnings. At the same time,
the different uses of mathematics in this text exemplify characteristics
of mathematics as a whole, and of the close tie between the fields of
mathematics and computer science. For instance, the mathematics in the
book illustrates the following:
- The reliance of many key ideas in computer science, such as data
representation, on mathematics.
- The use of special mathematics- or logic-related notation and
terminology in many parts of computer science.
- The ability to represent and work with many different types of data
in the computer, and the related ability to represent and work with
quantities in different representations using a variety of
operations.
- The need for rigor in solving problems, analyzing situations, or
specifying computational processes.
- The use of numbers and arithmetic in solving computational problems.
However, rather than being simple arithmetic problems, these
problems often have some special characteristics such as involving
repeated operations, or involving extremely large or extremely small
numbers.
- The existence of a variety of different algorithms for solving such
diverse problems as pattern matching, counting specified values in a
table of data, or finding the shortest path between two nodes in a
graph.
Solving many of the problems in this book will involve doing some
mathematics, and therefore manipulating mathematical or logical symbols.
Here are a few examples:
- In exploring low-level logical operations you\'ll need to manipulate
binary representation and logical operators.
- In studying the growth rate of algorithms you\'ll need to work with
the *Ο* and *Θ* notations commonly used by computer scientists.
- In specifying computational processes you\'ll need to use
\"pseudocode\" or a programming language. These share many
notational characteristics with mathematical or logical symbols,
especially when the computational processing involves a large number
of numeric computations.
The level of mathematics in this book is introductory-level college
mathematics. As such, the mathematics is not advanced, and there is no
mathematical prerequisite for this book beyond the requirements needed
for general college admission. At the same time, the mathematics in this
book goes beyond high school mathematics even though many of the types
of mathematics used in this text appear in some high school mathematics
courses.
As an example, one appearance of mathematics in this book is binary (or
base 2) representation. This is a topic that often appears in high
school mathematics courses, and the basics of binary representation are
not complicated. In this book we review such basics as how to convert
numbers between decimal (base 10) and binary representation, and how to
do simple operations such as adding two binary numbers. However, we also
use binary representation in additional ways that underpin the workings
of computers. Here are a few examples:
- We\'ll look at a few different ways to represent numbers in binary
representation. For example, integers are often represented in
binary not using the usual straightforward binary representation,
but in \"two\'s complement\" form. So part of this book is learning
not only about the \"usual\" binary representation, but also about
these alternatives.
- We\'ll look at various issues with binary representation, such as
the number of \"bits\" used, that are important in determining the
range and precision of numbers used by computers.
- In addition to representing numbers, we will also look at how
computers use binary representation to represent and operate on
other types of data such as text, colors, and images.
- In addition to basic operations such as binary addition, we will
also look at other operations on binary representations. For
example, logical operations are important in masking colors in image
processing, and in implementing arithmetic operations in low-level
computer hardware.
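As a small worked illustration of the basics mentioned above (the
particular numbers are chosen arbitrarily), converting the decimal number
13 to binary and adding two short binary numbers look like this:

$$13 = 1\cdot 2^3 + 1\cdot 2^2 + 0\cdot 2^1 + 1\cdot 2^0 = 1101_2$$

$$1011_2 + 0110_2 = 10001_2 \qquad (11 + 6 = 17 \text{ in decimal})$$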
In summary, even though many of the mathematical topics in this book
appear in high school mathematics, they go beyond the usual high school
treatment of those topics in breadth or depth.
### Technology and society
**Question 5.** What do you think of when you hear \"technology and
society?\" Write a paragraph or list, or draw an image or diagram of
what comes to mind.
**Question 6.** Based on your experience with computing, write a list of
examples of how computing affects, and is affected by, society.
The topic of this book is computers and computing. Computers have
affected society in numerous and diverse ways, some of which we\'ll
explore in this book. And current and future computer applications will
affect society in even more ways.
Through this book you should get an understanding of how computers work.
This includes understanding the basics of computer hardware and computer
software.
More broadly, however, computer science relies on results from other
areas of science, engineering, and related fields. The most prominent
example of this we will see in this text is various ways that
mathematics is essential in computer science.
Technology affects society. However, it is not a one-way street. Society
also affects technology. For example, society fosters technology by
means such as government support for research. As another example,
different individuals, businesses, and other organizations adopt and use
technology in ways often not foreseen by the technology\'s creators.
In this book we\'ll look at a variety of instances of how society
affects technology. These include government funding for the early
Internet, Internet regulation, how business considerations affect
computing products, and societal aspects of computer security.
In many topics in computers and society there are multiple stakeholders.
These can include individual users, developers, companies (producers,
consumers, and intermediaries), government bodies, professional
organizations, and other types of organizations. These different
stakeholders often have different views and different goals.
In this book we will often look at technology and society issues from
numerous perspectives. Sometimes we will focus on a specific perspective
or the role of a specific stakeholder. However, other times we will
explore issues more broadly: Who are the stakeholders? What is their
role in this issue? What are their goals?
One often hears conflicting views on computer and society issues.
Computers are beneficial for society. Computers are harmful to society.
The Internet is making it easier for people to communicate and is
bringing people together. The Internet is making people more isolated.
Computers and automation are robbing people of jobs. Computers and
automation create jobs.[^3]
In this book we\'ll often explore issues that are contentious and/or
complicated. How do we avoid a superficial, one-sided understanding of
such issues? How do we resolve conflicting claims about such issues?
Computing technology not only has had massive effects on society, but
continues to affect society. Not a day goes by without some
technological advance involving computing. In many ways the \"computer
revolution\" is just beginning.
One goal of this book is that you\'ll learn enough about computing in
general, about trends in computing, and about computing and society that
you\'ll be able to evaluate new technology. Note that \"evaluate\" might
mean different things in different contexts. For instance, it might mean
give an informed projection about whether a new computer product will be
successful or not. Or it might mean predict future computer advances in
a certain area. Or it might mean analyze whether a new computer
application is more likely to be more beneficial than harmful.
## Additional questions for thought and discussion
Here are some additional introductory questions.
**Question 7.** How do you use computers? List the most important ways.
**Question 8.** Write down a list of movies in which computing plays a
major role. For each movie, indicate whether computing is portrayed as
beneficial, harmful, beneficial in some ways but harmful in others, or
neutral.
**Question 9.** Do you think computers, on the whole, have more positive
effects than negative ones, more negative ones than positive, or about
equal positive and negative effects? Why?
**Question 10.** List some ways computers are beneficial to society.
Then list some ways they are harmful.
**Question 11.** Suppose you were to write a novel, play, screenplay,
etc. about some aspect of computers and society. Describe what the theme
or themes of your work would be.
**Question 12.** What does \"technology\" mean? What are some important
ways you use technology in your daily life?
**Question 13.** Suppose you had to write a short essay or short story
entitled \"Computers and Me.\" What would be some key points or themes
in that work?
**Question 14.** Suppose you had to write a short essay or short story
entitled \"Technology and Me.\" What would be some key points or themes
in that work?
## Notes
```{=html}
<references />
```
[^1]: See Computer literacy at
the English Wikipedia. Accessed May 20, 2015.
[^2]: For example, see
This site organizes principles into seven categories: computation,
communication, coordination, recollection, automation, evaluation,
and design. There are a number of good ideas, insights, and
frameworks in this and related approaches, and in fact many of the
key ideas in this book will relate in some way to Denning\'s
principles.
[^3]: See
# Foundations of Computer Science/What is Computing
## What is Computing
In this course, we try to focus on computing principles (big ideas)
rather than computer technologies, which are tools and applications of
the principles. Computing is
defined by a set of principles or ideas, which underlie the myriad of
technologies that are created based on those principles. Technologies can
be complex and constantly evolving, but the principles stay the same. In the
second half of the course, we will study various technologies to
demonstrate the power of computing and how the principles are applied.
In addition to principles of computing and technologies there are
practices of computing - what professionals do to advance computing.
The chart to the right illustrates the difference between principles of
computing and practices of computing. Principles underlie technologies
and practices. A consumer exploits the power of computing through the
applications built for them for various tasks. We believe everyone needs
to know the principles of computing because such principles are widely
applicable. As professionals in the field of computing we need to know
the two ends and everything in the middle - the practices (activities
and skills that make computing useful and effective).
!This chart illustrates the difference between principles of computing
and practices of
computing.{width="600"}
We will use the terms computing and
computation interchangeably
throughout the book.
### Principles of Computing
Computing is fundamentally about information processes. One of the big
ideas of computing is that information processes can be carried out
purely mechanically via symbol manipulation. The agent that does the
computing, whether a thinking human being or a machine (computer), does
not matter. Toward the end of the book we will see this is true for all
modern computers - digital computers manipulate two symbols (zero and
one) blindly according to instructions.
#### An Analogy
The following analogy from the \"Thinking as Computation\" book [^1]
illustrates the idea. Imagine that we have the following table of
symbols.
a b c d e f g h i j
--- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
a aa ab ac ad ae af ag ah ai aj
b ab ac ad ae af ag ah ai aj ba
c ac ad ae af ag ah ai aj ba bb
d ad ae af ag ah ai aj ba bb bc
e ae af ag ah ai aj ba bb bc bd
f af ag ah ai aj ba bb bc bd be
g ag ah ai aj ba bb bc bd be bf
h ah ai aj ba bb bc bd be bf bg
i ai aj ba bb bc bd be bf bg bh
j aj ba bb bc bd be bf bg bh bi
The symbols can be any set of symbols; we pick letters from the
English alphabet for simplicity. We can define a procedure P that takes
two symbols (\'a\' through \'j\') as the input and produces two symbols
in the same set as the output. Internally, the procedure uses the first
input symbol to find the row that starts with that symbol, uses
the second input symbol to find the column with that symbol at the
top, and then reports/returns the pair of symbols at the intersection
(a small code sketch of this lookup appears after the digit table below). It is not
hard to imagine that such a table-lookup procedure can be done purely
mechanically (blindly) by a simple agent (e.g. a device or a machine).
Of course a human being can do it but this type of symbol manipulation
requires no human intelligence. Two conclusions can be drawn from this
thought experiment:
- Symbol manipulation can be done mechanically.
- The machine that performs the manipulation does not need to know the
meaning of the symbols nor the purpose of the manipulation.
This procedure can be meaningful if we know how to interpret the
symbols. For example, if the symbols \'a\' through \'j\' represent
quantities of 0 through 9 respectively, this procedure performs single
decimal digit addition. For instance, p(d, f) = p(3, 5) = ai = 08, which
is the correct result of 3+5. The following table is essentially the
same as the previous one except that it uses symbols that are meaningful
to humans.
0 1 2 3 4 5 6 7 8 9
--- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
0 00 01 02 03 04 05 06 07 08 09
1 01 02 03 04 05 06 07 08 09 10
2 02 03 04 05 06 07 08 09 10 11
3 03 04 05 06 07 08 09 10 11 12
4 04 05 06 07 08 09 10 11 12 13
5 05 06 07 08 09 10 11 12 13 14
6 06 07 08 09 10 11 12 13 14 15
7 07 08 09 10 11 12 13 14 15 16
8 08 09 10 11 12 13 14 15 16 17
9 09 10 11 12 13 14 15 16 17 18
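To make the idea concrete, here is a minimal sketch of the procedure P in Python, using the digit version of the table just shown. This is our own illustrative code, not from the book; the table is generated programmatically only for brevity, and the procedure itself does nothing but look up a row and a column, exactly as a blind agent would.

```python
# Build the 10x10 lookup table shown above, keyed by (row symbol, column symbol).
# Generating it with arithmetic is just a shortcut for writing out 100 entries;
# the procedure p itself never does any arithmetic.
SYMBOLS = "0123456789"
TABLE = {
    (r, c): str((SYMBOLS.index(r) + SYMBOLS.index(c)) // 10)
            + str((SYMBOLS.index(r) + SYMBOLS.index(c)) % 10)
    for r in SYMBOLS
    for c in SYMBOLS
}

def p(x, y):
    """Procedure P: blindly return the two-symbol entry at row x, column y."""
    return TABLE[(x, y)]

print(p("3", "5"))  # -> "08", the same answer as p(d, f) = ai in the letter table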
Now that we have a simple procedure P that can instruct a simple agent
to add two single digit decimal numbers, we can devise a new procedure
P1 that can add three single-digit decimal numbers as shown in the
following chart.
!This chart illustrates a way to build a P1 procedure from the P
procedure.{width="600"}
The new procedure P1 employs three instances of procedure P to add three
decimal digits and return two digits as the result. We can view the
procedures as machines with inputs and outputs and the lines are pipes
that allow the symbols to go from one place to another place. It is not
hard to imagine that an agent that can carry out P can carry out P1 as
P1 is entirely made up of P. Note that the dotted rectangle represents
the new procedure P1 made up of instances of P and the answer given by
P1 for the sample inputs is correct. Again the symbols used in the
process can be any set of symbols because internally simple table
lookups are performed.
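Here is one way P1 might be wired together from three instances of P, sketched in Python under the assumption that p is the lookup procedure from the previous sketch; the chart may arrange its three instances slightly differently.

```python
def p1(x, y, z):
    """Add three single-digit symbols using only three calls to P."""
    c1, s1 = p(x, y)     # first P:  x + y   -> carry c1, sum digit s1
    c2, s2 = p(s1, z)    # second P: s1 + z  -> carry c2, sum digit s2
    _, high = p(c1, c2)  # third P:  combine the two carries (each is at most 1)
    return high + s2     # two-symbol result

print(p1("9", "8", "6"))  # -> "23", since 9 + 8 + 6 = 23
```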
Now imagine that we could use P1 to construct more complex procedures,
for example procedure P2 in the following chart.
!This chart illustrates a way to build a P2 procedure from a P1
procedure.{width="600"}
P2 uses P1 to add two double-digit numbers; in fact, we can simply add
more P1s to the design to deal with any number of digits.
By now we can make the following observations:
- Whatever machine can perform P can also perform P1, P2, and so on.
- We have made procedures that perform seemingly intelligent
    activities by making them more complex, while at the same time keeping
    them doable by simple machines.
If we follow the same line of reasoning, it is not hard to imagine we
can create increasingly more complex procedures to instruct the simple
machine to do progressively more intelligent things, such as
- integer subtraction
- compare two integers (subtract and check the sign of the result)
- integer multiplication (repeated addition)
- represent fractions using a pair of integers and do arithmetic on
them
- use matrices of integers to represent systems of equations and solve
them using matrix operations
- use systems of equations to model complex physical systems and
perform numerical simulations of these systems
In summary, from this example we can see that simple symbolic operations
can be assembled to form larger procedures to perform amazing activities
through computational processes. Such activities are not limited to
numerical calculations. If we can represent abstract ideas as symbols
(as we represent abstract quantities as concrete numbers) and devise
procedures to manipulate the symbols according to the relations among
the ideas, we can model reasoning as computational processes. This is
what computer science is fundamentally about - information processes
with two essential components: representations and a sequence of rules
for manipulation of the representations. Note that it has nothing to do
with electronics or physics. The machine that carries out such processes
does not need to know the meaning of the symbols and why the process
yields correct results. The machine only needs to follow the procedures
(a set of rules) blindly.
As an example, you can read about a mechanical computer (difference
engine) designed by
Charles Babbage that
can tabulate polynomial functions:
!A difference engine: computing the solution to a polynomial
function{width="600"}
#### Another Analogy
Richard
Feynman
used another similar analogy (file clerk) to explain how computers work
from the inside out in his Computer Heuristics Lecture (1 hour 15 mins):
<http://www.youtube.com/watch?v=EKWGGDXe5MA>
#### History
We have now learned that computing is, in essence, a certain
manipulation of symbols. A computer\'s ability to perform amazing tasks
depends on its ability to manipulate symbols according to well-defined
rules. In fact, digital computers only manipulate two symbols - zeros
and ones. The intelligence of computing lies in the design and
implementation of the rules/programs.
In the future, when talking about computer machinery, we will see
computers are constructed using such principles.
You may wonder where the ideas come from. Many people in history made
significant contributions to ideas of computing and computers. Gottfried
Leibniz (1646--1716), a German philosopher, is considered the first
person to dream of reducing reasoning to calculation and building a
machine capable of carrying out such calculations. He observed that in
arithmetic we represent abstract quantities using symbols and manipulate
the symbols to get useful results according to rules. He dreamed that we
could represent abstract ideas using symbols and reason about the ideas
according to the logical relations between them, via the same kind of concrete symbol
manipulation we use in arithmetic. Such manipulations give us correct
results not because whoever does the manipulation is intelligent but
because the rules of manipulation mirror the relationships between
quantities and the logical relations between ideas.
Because of Leibniz's dream, now we have computer science and universal
machines called computers. A computer is fundamentally a physical device
that can manipulate symbols following very simple logic rules. Almost
all computers are electronic because it happens to be cheaper and easier
to build that way. Computer science is fundamentally about the
information process (dealing with abstract ideas) that takes place
through symbol manipulation, which follows a recipe (a set of rules).
Such recipes are also known as algorithms. No wonder so many computer
programming books are called cookbooks :) In computer science we study
how to represent information and how to design and apply algorithms to
get meaningful results. There are usually many ways to perform the same
task. Comparing algorithms for evaluation purposes is called algorithm
(complexity) analysis. Communicating an algorithm (recipe) to a computer
is called programming/software development. The languages we use for
such communication are called computer programming languages. The
artifacts of programming are computer programs or software. The
engineering discipline we try to apply in the software development
process to produce quality software is called software engineering. So
computer science is more about problem solving than about computers. Computing
science is probably a more appropriate name for this discipline.
### Practices of Computing
Principles are fundamental ideas that permeate all aspects of computing.
Practices are not principles, but they are worth identifying because
they capture the central activities of computing professionals.
Practices, sometimes called \"know-how\", define someone\'s skill set
and the level of competency: beginner, competent, and expert. The four
core practices of computing are identified in the Great Principles of
Computing project:[^2]
- Programming (including multilingual programming practice)
- Systems and systems thinking
- Modeling, validating, testing, and measuring
- Innovating
Programming is an integral part of computer science because it allows us
to explore abstract ideas in computer science in concrete ways. It is
also an exciting creative process, which brings a great deal of
satisfaction when we can make computers do useful things. In this course
we will program in a very high-level graphical programming environment
to explore ideas in computer science.
Donald Knuth likens programming to composition: well-written programs
are a pleasure for others, or yourself, to read. He believes that
programming is triply rewarding:
- beautiful code (aesthetic)
- do useful work (humanitarian)
- get paid (economic)
Programming a computer is essentially teaching the computer how to do
things. As we mentioned previously computers are simple machines that
strictly follow orders. For a computer to do the right task the
instructions in our program must be correct and logical. Programs that
are executable on a computer are software - serving as the brain of the
computer. Software with errors is called buggy software (see the history of
this name). Testing software on an actual computer can help
catch most bugs in the software. Testing provides almost immediate
feedback on the quality of our programs so that we can fix bugs and
improve them. Because of this, we believe programming makes us better
thinkers and learners. We will see why it is hard to prove the
correctness of programs.
!A person interacts with a computer in a programming
activity.{width="600"}
## References
[^1]: Levesque, Hector J., *Thinking as Computation*.
[^2]: Denning, Peter, The Great Principles
of Computing,
<http://denninginstitute.com/pjd/GP/GP-site/welcome.html>
# Foundations of Computer Science/Information Representation
## Information Representation
### Introductory problem
Computers often represent colors as a red-green-blue (RGB) set of
numbers, called a \"triple\", where each of the red, green, and blue
components is an integer between 0 and 255. For example, the color (255,
0, 10) has full red, no green, and a small amount of blue. Write an
algorithm that takes as input the RGB components for a color, and
returns a message indicating the largest component or components. For
example, if the input color is (100, 255, 0), the algorithm should
output \"`Largest component(s): green`\". And if the input color is
(255, 255, 255), then the algorithm should output
\"`Largest component(s): red, green, blue`\".
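One possible solution is sketched below in Python; the function name and structure are our own choices, and many other correct algorithms exist.

```python
def largest_components(red, green, blue):
    """Return a message naming the largest RGB component(s)."""
    biggest = max(red, green, blue)
    names = []
    if red == biggest:
        names.append("red")
    if green == biggest:
        names.append("green")
    if blue == biggest:
        names.append("blue")
    return "Largest component(s): " + ", ".join(names)

print(largest_components(100, 255, 0))    # Largest component(s): green
print(largest_components(255, 255, 255))  # Largest component(s): red, green, blue
```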
### Overview of this chapter
One amazing aspect of computers is they can store so many different
types of data. Of course computers can store numbers. But unlike simple
calculators they can also store text, and they can store colors, and
images, and audio, and video, and many other types of data. And not only
can they store many different types, but they can also analyze them, and
they can transmit them to other computers. This versatility is one
reason why computers are so useful, and affect so many areas of our
lives.
To understand computers and computer science, it is important to know
something about how computers deal with different types of data. Let\'s
return to colors. How are colors stored in a computer? The introductory
problem states one way: as an RGB triple. This is not the only possible
way. RGB is just one of many color systems. For example, sometimes
colors are represented as an HSV triple: by hue, saturation, and value.
However, RGB is the most common color representation in computer
programs.
This leads to a deeper issue: how are *numbers* stored in a computer?
And why is it important anyway that we understand how numbers, and other
different types of data, are stored and processed in a computer? This
chapter deals with these and related questions. In particular, we will
look at the following:
1. Why is this an important topic?
2. How do computers represent numbers?
3. How do computers represent text?
4. How do computers represent other types of data such as images?
5. What is the binary number system and why is it important in computer
science?
6. How do computers do basic operations such as addition and
subtraction?
### Goals
Upon completing this chapter, you should be able to do the following:
1. Be able to explain how, on the lowest level, computers represent
both numeric and text data, as well as other types of data such as
color data.
2. Be able to explain and use the basic terminology in this area: bit,
byte, megabyte, RGB triple, ASCII, etc.
3. Be able to convert numbers and text from one representation to
another.
4. Be able to convert integers from one representation to another, for
example from decimal representation to two\'s complement
representation.
5. Be able to add and subtract numbers written in unsigned binary or in
two\'s complement representation.
6. Be able to explain how the number of bits used to represent data
affects the range and precision of the representation.
7. Be able to explain in general how computers represent different
types of data such as images.
8. Be able to do calculations involving amounts of memory or download
times for certain datasets.
### Data representation and mathematics
How is data representation related to liberal education and mathematics?
As you might guess, there is a strong connection. Computers store all
data in terms of binary (i.e., base 2) numbers. So to understand
computers it is necessary to understand binary. Moreover, you need to
understand not only binary basics, but also some of the complications
such as the \"two\'s complement\" notation discussed below.
Binary representation is important not only because it is how computers
represent data, but also because so much of computers and computing is
based on it. For example, we will see it again in the chapter on machine
organization.
### Data representation and society and technology
*The computer revolution*. That is a phrase you often hear used to
describe the many ways computers are affecting our lives. Another phrase
you might hear is the *digital revolution*. What does the digital
revolution mean?
Nowadays, many of our devices are digital. We have digital watches,
digital phones, digital radio, digital TVs, etc. However, previously
many devices were *analog*: \"data \... represented by a continuously
variable physical quantity\".[^1] Think, for example, of an old watch
with second, minute, and hour hands that moved continuously (although
very slowly for the minute and hour hands). Compare this with many
modern-day watches that show a digital representation of the time such
as 2:03:23.
This example highlights a key difference between analog and digital
devices: analog devices rely on a continuous phenomenon and digital
devices rely on a discrete one. As a second example of this difference,
an analog radio receives audio radio broadcast signals which are
transmitted as radio *waves*, while a digital radio receives signals
which are streams of numbers.[^2]
The digital revolution refers to the many digital devices, their uses,
and their effects. These devices include not only computers, but also
other devices or systems that play a major role in our lives, such as
communication systems.
Because digital devices usually store numbers using the binary number
system, a major theme in this chapter is binary representation of data.
Binary is fundamental to computers and computer science: to understand
how computers work, and how computer scientists think, you need to
understand binary. The first part of this chapter therefore covers
binary basics. The second part then builds on the first and explains how
computers store different types of data.
## Representation basics
### Introduction
Computing is fundamentally about information processes. Each computation
is a certain manipulation of symbols, which can be done purely
mechanically (blindly). If we can represent information using symbols
and know how to process the symbols and interpret the results, we can
access valuable new information. In this section we will study
information representation in computing.
The algorithms chapters discuss ways to describe a sequence of
operations. Computer scientists use algorithms to specify *behavior* of
computers. But for these algorithms to be useful they need data, and so
computers need ways to represent data.[^3]
Information is conveyed as the
content of messages, which when interpreted and perceived by our senses,
causes certain mental responses. Information is always encoded into some
form for transmission and interpretation. We deal with information all
the time. For example, we receive information when we read a book,
listen to a story, watch a movie, or dream a dream. We give information
when we write an email, draw a picture, act in a show or give a speech.
Information is abstract but it is conveyed through concrete media. For
instance, a conversation on the phone communicates information but the
information is represented by sound waves and electronic signals along
the way.
Information is abstract/virtual and the media that carry the information
must be concrete/physical. Therefore before any information can be
processed or communicated it must be quantified/digitized: a process
that turns information into (data) representations using symbols.
People have many ways to represent even a very simple number. For
example, the number four can be represented as 4 or IV or `||||` or 2 +
2, and so on. How do computers represent numbers? (Or text? Or audio
files?)
The way computers represent and work with numbers is different from how
we do. Since early computer history, the standard has been the binary
number system. Computers \"like\" binary because it is extremely easy
for them. However, binary is not easy for humans. While most of the time
people do not need to be concerned with the internal representations
that computers use, sometimes they do.
### Why binary?
Suppose you and some friends are spending the weekend at a cabin. The
group will travel in two separate cars, and you all agree that the first
group to arrive will leave the front light on to make it easier for the
later group. When the car you are in arrives at the cabin you will be
able to tell by the light if your car arrived first. The light therefore
encodes two possibilities: on (the other group has already arrived) or
off (the other group hasn\'t arrived yet).
To convey more information you could use two lights. For example, both
off could mean the first group hasn\'t arrived yet, the first light off
and second on indicate the first group has arrived but left to get
supplies, the first on and second off that the group arrived but left to
go fishing, and both on that the group has arrived and hasn\'t left.
Note the key ideas here. The first is that a light can be on or off (we don\'t allow
different levels of light, multiple colors, or other options): just two
possibilities. The second is that if we want to represent more than
two choices we can use more lights.
This \"on or off\" idea is a powerful one. There are two and only two
distinct choices or states: on or off, 0 or 1, black or white, present
or absent, large or small, rough or smooth, etc.---all of these are
different ways of representing possibilities. One reason the two-choice
idea is so powerful is it is easier to build objects---computers,
cameras, CDs, and so on---where the data at the lowest level is in two
possible states, either a 0 or a 1.[^4]
In computer representation, a *bit* (i.e., a binary digit) can be a 0 or
a 1. A collection of bits is called a *bitstring*. A bitstring that is 8
bits long is called a *byte*. Bits and bytes are important concepts in
computer storage and data transmission, and later on we\'ll explain them
further along with some related terminology and concepts. But first we
will look at the basic question of how a computer represents numbers.
### A brief historic aside
Claude Shannon is considered the
father of information theory
because he is the first person who studied and built mathematical models
for information and communication of information. He also made many
other significant contributions to computing. His seminal paper "A
mathematical theory of communication" (1948) changed our view of
information, laying the foundation for the information age. Shannon
discovered that the fundamental unit of information is a yes or no
answer to a question or one bit with two distinct states, which can be
represented by only two symbols. He also founded the design theory of
digital computers/circuits by proving that propositions of Boolean
algebra can be used to build a \"logic machine\" capable of carrying out
general computation (manipulation of two types of symbols).
Data, another term closely related to
information, is an abstract concept of representations of information.
We will use information representations and data interchangeably.
### External and internal information representation
Information can be represented on different levels. It is helpful to
separate information representations into two categories: external
representation and internal representation. External representation is
used for communication between humans and computers. Everything we see
on a computer monitor or screen, whether it is text, image, or motion
picture, is a representation of certain information. Computers also
represent information externally using sound and other media, such as
touch pads for the blind to read text.
Internally all modern computers represent information as bits. We can
think of a bit as a digit with two possible values. Since a bit is the
fundamental unit of information it is sufficient to represent all
information. It is also the simplest representation because only two
symbols are needed to represent two distinct values. This makes it easy
to represent bits physically - any device capable of having two distinct
states works, e.g. a toggle switch. We will see later that modern
computer processors are made up of tiny switches called transistors.
### Review of the decimal number system
When bits are put together into sequences they can represent numbers. We
are familiar with representing quantities with numbers. Numbers are
concrete symbols representing abstract quantities. With ten fingers,
humans conveniently adopted the base ten (decimal) numbering system,
which requires ten different symbols. We all know decimal representation
and use it every day; its symbols are the Arabic numerals 0 through 9.
Each symbol represents a power of ten depending on the position the
symbol is in.
So, for example, the number one hundred and twenty-four is
$(1\times100) + (2\times10) + (4\times1)$. We can emphasize this by
writing the powers of 10 over the digits in 124:
`10^2 10^1 10^0`\
` 1 2 4`
So if we take what we know about base 10 and apply it to base 2 we can
figure out binary. But first recall that a bit is a binary digit and a
byte is 8 bits. In this chapter most of the binary numbers we talk about
will be one byte long.
(Computers actually use more than one byte to represent most numbers.
For example, most numbers are actually represented using 32 bits (4
bytes) or 64 bits (8 bytes). The more bits, the more different values
you can represent: a single bit permits 2 values, 2 bits give 4 values,
3 bits give 8 values, \..., 8 bits give 256 values, and in general *n*
bits give $2^n$ values. However, when looking at binary examples we\'ll
usually use 8-bit numbers to keep the examples manageable.)
This base ten system used for numbering is somewhat arbitrary. In fact,
we commonly use other base systems to represent quantities of different
nature: base 7 for days in a week, base 60 for minutes in an hour, 24
for hours in a day, 16 for ounces in a pound, and so on. It is not hard
to imagine that base 2 (two symbols) is the simplest base system, because
with fewer than two symbols we cannot represent change (and therefore
cannot convey any information).
### Unsigned binary
When we talk about decimal, we deal with 10 digits---0 through 9
(that\'s where *deci*mal comes from). In binary we only have two digits;
that\'s why it\'s *bi*nary. The digits in binary are 0 and 1. You will
never see any 2\'s or 3\'s, etc. If you do, something is wrong. A bit
will always be a 0 or 1.
Counting in binary proceeds as follows:
` 0 (decimal 0) `\
` 1 (decimal 1) `\
` 10 (decimal 2) `\
` 11 (decimal 3) `\
` 100 (decimal 4) `\
` 101 (decimal 5) `\
` ...`
An old joke runs, \"There are 10 types of people in the world. Those who
understand binary and those who don\'t.\"
The next thing to think about is what values are possible in one byte.
Let\'s write out the powers of two in a byte:
`2^7 2^6 2^5 2^4 2^3 2^2 2^1 2^0`\
`128 64 32 16 8 4 2 1 `
As an example, the binary number 10011001 is
$(1\times128) + (0\times 64) + (0\times 32) + (1\times 16) + (1\times 8) + (0\times 4) + (0\times 2) + (1\times 1) = 153.$
Note each of the 8 bits can either be a 0 or a 1. So there are two
possibilities for the leftmost bit, two for the next bit, two for the
bit after that, and so on: two choices for each of the 8 bits.
Multiplying these possibilities together gives $2^8$ or 256
possibilities. In *unsigned binary* these possibilities represent the
integers from 0 (all bits 0) to 255 (all bits 1).
All base systems work in the same way: the rightmost digit represents
the quantity of the base raised to the zeroth power (recall that
anything raised to the 0th power results in 1), and each digit to the
left represents a quantity that is base times larger than the one
represented by the digit immediately to the right. The binary number
1001 represents the quantity 9 in decimal, because the rightmost 1
represents $2^0=1$, the zeroes contribute nothing at the $2^1$ and $2^2$
positions, and finally the leftmost one represents $2^3=8$. When we use
different base systems it is necessary to indicate the base as the
subscript to avoid confusion. For example, we write $1001_2$ to indicate
the number 1001 in binary (which represents the quantity 9 in decimal).
The subscript 2 means \"binary\": it tells the reader that it does *not*
represent a thousand and one in decimal. This example also shows us that
representations have no intrinsic meaning. The same pattern of symbols,
e.g. 1001, can represent different quantities depending on the way it is
interpreted. There are many other ways to represent the quantity
$9_{10}$ (remember: read this as \"nine in base 10 / decimal\"); for
instance, the symbol 九 represents the same quantity in Chinese.
As the same quantity can be represented differently, we can often change
the representation without changing the quantity it represents. As shown
before, the binary representation $1001_2$ is equivalent to the decimal
representation $9_{10}$ - representing exactly the same quantity. In
studying computing we often need to convert between decimal
representation, which we are most familiar with, and binary
representation, which is used internally by computers.
### Binary to decimal conversion
Converting the binary representation of a non-negative integer to its
decimal representation is a straightforward process: summing up the
quantities each binary digit represents yields the result.
$1001_2=1\times2^3+0\times2^2+0\times2^1+1\times2^0=8+0+0+1=9_{10}$
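The same process can be written as a short Python sketch (illustrative code; it scans the bits left to right, which is equivalent to summing the powers of two):

```python
def binary_to_decimal(bits):
    """Convert a string of 0s and 1s to the integer it represents."""
    total = 0
    for bit in bits:                  # scan left to right
        total = total * 2 + int(bit)  # each step shifts earlier digits up one place
    return total

print(binary_to_decimal("1001"))      # 9
print(binary_to_decimal("10011001"))  # 153
```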
### Decimal to binary conversion
One task you will need to do in this book, and which computer scientists
often need to do, is to convert a decimal number to or from a binary
number. The last subsection showed how to convert binary to decimal:
take each power of 2 whose corresponding bit is a 1, and add those
powers together.
Suppose we want to do a decimal to binary conversion. As an example,
let\'s convert the decimal value 75 to binary. Here\'s one technique
that relies on successive division by 2:
`75/2 quotient=37 remainder=1`\
`37/2 quotient=18 remainder=1`\
`18/2 quotient=9 remainder=0`\
`9/2 quotient=4 remainder=1`\
`4/2 quotient=2 remainder=0`\
`2/2 quotient=1 remainder=0`\
`1/2 quotient=0 remainder=1`
We then take the remainders bottom-to-top to get 1001011. Since we
usually work with groups of 8 bits, if the result doesn\'t fill all eight bits
we add zeroes at the front until it does. So we end up with 01001011.
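The successive-division technique can also be expressed as a short Python sketch (illustrative names; it collects the remainders bottom-to-top and pads to 8 bits):

```python
def decimal_to_binary(n, width=8):
    """Convert a non-negative integer to an unsigned binary string."""
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # the remainder becomes the next bit (bottom-to-top)
        n //= 2                   # the quotient feeds the next division
    return bits.rjust(width, "0") # pad with leading zeroes to fill the byte

print(decimal_to_binary(75))  # 01001011
```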
## Binary mathematics
### Addition of binary numbers
In addition to storing data, computers also need to do operations such
as addition of data. How do we add numbers in binary representation?
Addition of bits has four simple rules, shown here as four vertical
columns:
` 0 0 1 1`\
`+ 0 + 1 + 0 + 1`\
`=========================`\
` 0 1 1 10`
Now if we have a binary number consisting of multiple bits we use these
four rules, plus \"carrying\". Here\'s an example:
` 00110101`\
`+ 10101100`\
`==========`\
` 11100001`
Here\'s the same example, but with the carried bits listed explicitly,
i.e., a 0 if there is no carry, and a 1 if there is. When 1+1=10, the 0
is kept in that column\'s solution and the 1 is carried over to be added
to the next column left.
` 0111100`\
` 00110101`\
`+ 10101100`\
`==========`\
` 11100001`
We can check binary operations by converting each number to decimal:
with both binary and decimal we\'re doing the same operations on the
same numbers, but with different representations. If the representations
and operations are correct the results should be consistent. Let\'s look
one more time at the example addition problem we just solved above.
Converting $00110101_2$ to decimal produces $53_{10}$ (do the conversion
on your own to verify its accuracy), and converting $10101100_2$ gives
$172_{10}$. Adding these yields $225_{10}$, which, when converted back
to binary is indeed $11100001_2$.
But binary addition doesn\'t always work *quite* right:
` 01110100`\
`+ 10011111`\
`==========`\
` 100010011 `
Note there are 9 bits in the result, but there should only be 8 in a
byte. Here is the sum in decimal:
` 116`\
`+ 159 `\
`=====`\
` 275`
Note that 275 is greater than 255, the maximum we can hold in an 8-bit
number. This results in a condition called **overflow**. Overflow is not
an issue if the computer can go to a 9-bit binary number; however, if
the computer only has 8 bits set aside for the result, overflow means
that a program might not run correctly or at all.
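A small Python sketch of 8-bit addition makes the column-by-column carrying and the overflow condition explicit (illustrative code, not a description of real hardware):

```python
def add_8bit(a, b):
    """Add two 8-bit binary strings; return (8-bit result, overflow flag)."""
    result, carry = "", 0
    for i in range(7, -1, -1):        # rightmost column first
        s = int(a[i]) + int(b[i]) + carry
        result = str(s % 2) + result  # keep this column's bit
        carry = s // 2                # carry into the next column to the left
    return result, carry == 1         # a carry out of the top bit means overflow

print(add_8bit("00110101", "10101100"))  # ('11100001', False): 53 + 172 = 225
print(add_8bit("01110100", "10011111"))  # ('00010011', True):  116 + 159 overflows
```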
### Subtraction of binary numbers
Once again, let\'s start by looking at single bits:
` 0 0 1 1`\
`- 0 - 1 - 0 - 1`\
`========================`\
` 0 -1 1 0`
Notice that in the `-1` case, what we often want to do is get a 1 result
and borrow. So let\'s apply this to an 8-bit problem:
` 10011101`\
`- 00100010`\
`==========`\
` 01111011`
which is the same as (in base 10),
` 157`\
`- 34`\
`======`\
` 123`
Here\'s the binary subtraction again with the borrowing shown:
` 1100010`\
` 10011101`\
`- 00100010`\
`==========`\
` 01111011`
Most people find binary subtraction significantly harder than binary
addition.
## Other representations related to binary
You might have had questions about the binary representation in the last
section. For example, what about negative numbers? What about numbers
with a fractional part? Aren\'t all those 0\'s and 1\'s difficult for
humans to work with? These are good questions. In this and a couple of
other sections we\'ll look at a few other representations that are used
in computer science and are related to binary.
### Hexadecimal
Computers are good at binary. Humans aren\'t. Binary is hard for humans
to write, hard to read, and hard to understand. But what if we want a
number system that is easier to read but still is closely tied to binary
in some way, to preserve some of the advantages of binary?
One possibility is *hexadecimal*, i.e., base 16. But using a base
greater than 10 immediately presents a problem. Specifically, we run out
of digits after 0 to 9 --- we can\'t use 10, 11, or greater because
those have multiple digits within them. So instead we use letters: A is
10, B is 11, C is 12, D is 13, E is 14, and F is 15. So the digits
we\'re using are 0 through F instead of 0 through 9 in decimal, or
instead of 0 and 1 in binary.
We also have to reexamine the value of each place. In hexadecimal, each
place represents a power of 16. A two-digit hexadecimal number has a
16\'s place and a 1\'s place. For example, D8 has D in the 16\'s place,
and 8 in the 1\'s place:
`16^1 16^0 <- hexadecimal places showing powers of 16`\
`16 1 <- value of these places in decimal (base 10)`\
`D 8 <- our sample hexadecimal number`
So the hexadecimal number D8 equals
$(13 \times 16) + (8 \times 1) = 216$ in decimal. Note, however, that any two-digit
hexadecimal number can represent the same amount of
information as one byte of binary. (That\'s because the largest
two-digit hex number
$FF_{16} = (15 \times 16) + (15 \times 1) = 255_{10} = 11111111_2$, the
same maximum as 8 bits of binary.) So hexadecimal is easier for us to read or
write.
When working with a number, there are times when which representation is
being used isn\'t clear. For example, does 10 represent the number ten
(so the representation is decimal), the number two (the representation
is binary), the number sixteen (hexadecimal), or some other number?
Often, the representation is clear from the context. However, when it
isn\'t, we use a subscript to clarify which representation is being
used, for example $10_{10}$ for decimal, versus $10_2$ for binary,
versus $10_{16}$ for hexadecimal.
Hexadecimal numbers can have more hexadecimal digits than the two we\'ve
already seen. For example, consider $FF0581A4_{16}$, which uses the
following powers of 16:
`16^7 16^6 16^5 16^4 16^3 16^2 16^1 16^0`\
`F F 0 5 8 1 A 4`
So in decimal this is:
$(15 \times 16^7) + (15 \times 16^6) + (0 \times 16^5) + (5 \times 16^4)$
$+ (8 \times 16^3) + (1 \times 16^2) + (10 \times 16^1) + (4 \times 16^0)$
$= 4,278,550,948$
Hexadecimal doesn\'t appear often, but it is used in some places, for
example sometimes to represent memory addresses (you\'ll see this in a
future chapter) or colors. Why is it useful in such cases? Consider a
24-bit RGB color with 8 bits each for red, green, and blue. Since 8 bits
requires 2 hexadecimal digits, a 24-bit color needs 6 hexadecimal
digits, rather than 24 bits. For example, `FF0088` indicates a 24-bit
color with a full red component, no green, and a mid-level blue.
Now there are additional types of conversion problems:
`* Decimal to hexadecimal`\
`* Hexadecimal to decimal`\
`* Binary to hexadecimal`\
`* Hexadecimal to binary`
Here are a couple examples involving the last two of these.
Let\'s convert the binary number 00111100 to hexadecimal. To do this,
break it into two 4-bit parts: 0011 and 1100. Now convert each part to
decimal and get 3 and 12. The 3 is a hexadecimal digit, but 12 isn\'t.
Instead recall that C is the hexadecimal representation for 12. So the
hexadecimal representation for 00111100 is 3C.
Rather than going from binary to decimal (for each 4-bit segment) and
then to hexadecimal digits, you could go from binary to hexadecimal
directly.
Hexadecimal digits and their decimal and binary equivalents: first, base
16 (hexadecimal), then base 10 (decimal), then base 2 (binary).
`16 10 2 <- bases`\
`===========`\
`0 0 0000`\
`1 1 0001`\
`2 2 0010`\
`3 3 0011`\
`4 4 0100`\
`5 5 0101`\
`6 6 0110`\
`7 7 0111`\
`8 8 1000`\
`9 9 1001`\
`A 10 1010`\
`B 11 1011`\
`C 12 1100`\
`D 13 1101`\
`E 14 1110`\
`F 15 1111`
Now let\'s convert the hexadecimal number D6 to binary. D is the
hexadecimal representation for $13_{10}$, which is 1101 in binary. 6 in
binary is 0110. Put these two parts together to get 11010110. Again we
could skip the intermediate conversions by using the hexadecimal and
binary columns above.
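Because every hexadecimal digit corresponds to exactly four bits, both conversions reduce to table lookups. Here is a small Python sketch of that idea (the helper names are our own):

```python
HEX_TO_BITS = {
    "0": "0000", "1": "0001", "2": "0010", "3": "0011",
    "4": "0100", "5": "0101", "6": "0110", "7": "0111",
    "8": "1000", "9": "1001", "A": "1010", "B": "1011",
    "C": "1100", "D": "1101", "E": "1110", "F": "1111",
}
BITS_TO_HEX = {bits: digit for digit, bits in HEX_TO_BITS.items()}

def binary_to_hex(bits):
    """Convert a binary string (length a multiple of 4) to hexadecimal."""
    return "".join(BITS_TO_HEX[bits[i:i + 4]] for i in range(0, len(bits), 4))

def hex_to_binary(hex_digits):
    """Convert a hexadecimal string to its binary representation."""
    return "".join(HEX_TO_BITS[d] for d in hex_digits)

print(binary_to_hex("00111100"))  # 3C
print(hex_to_binary("D6"))        # 11010110
```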
## Text representation
A piece of text can be viewed as a stream of symbols, each of which can be
represented/encoded as a sequence of bits, resulting in a stream of bits
for the whole text. Two common encoding schemes are
ASCII code and
Unicode. ASCII uses one
byte (8 bits) to represent each symbol and can therefore represent up to 256
($2^8=256$) different symbols (the original 7-bit ASCII standard defines 128
of these), which include the English alphabet (in
both lower and upper cases) and other commonly used symbols. Unicode
extends ASCII to represent a much larger number of symbols using
multiple bytes. Unicode can represent any symbol from any written
language and much more.
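A quick way to see text as numbers is Python's built-in ord() and chr(), which map characters to their code points and back (a small illustration, not part of the encoding standards themselves):

```python
for ch in "Hi!":
    # character, its code point, and the 8-bit pattern a one-byte encoding would use
    print(ch, ord(ch), format(ord(ch), "08b"))
# H 72 01001000
# i 105 01101001
# ! 33 00100001

print(chr(20061))  # a Unicode code point far beyond ASCII: the symbol 九
```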
## Image, audio, and video files
Images, audio, and video are other types of data. How computers
represent these types of data is fascinating but complex. For example,
there are perceptual issues (e.g., what types of sounds can humans hear,
and how does that affect how many numbers we need to store to reliably
represent music?), size issues (as we\'ll see below, these types of data
can result in large file sizes), standards issues (e.g., you might have
heard of JPEG or GIF image formats), and other issues.
We won\'t be able to cover image, audio, and video representation in
depth: the details are too complicated, and can get very sophisticated.
For example, JPEG images can rely on an advanced mathematical technique
called the discrete cosine transform. However, it is worth examining a
few key high-level points about image, audio, and video files:
1. Computers can represent not only basic numeric and text data, but
also data such as music, images, and video.
2. They do this by digitizing the data. At the lowest level the data is
still represented in terms of bits, but there are higher-level
representational constructs as well.
3. There are numerous ways to encode such data, and so standard
encoding techniques are useful.
4. Audio, image, and video files can be large, which presents
challenges in terms of storing, processing and transmitting these
files. For this reason most encoding techniques use some
sophisticated types of compression.
### Images
A perceived image is the result of light beams physically coming into
our eyes and triggering nerves to send signals to our brain. In
computing, an image is simulated by a grid of dots (called *pixels*, for
\"picture element\"), each of which has a particular color. This works
because our eyes cannot tell the difference between the original image
and the dot-based image if the resolution (number of dots used) is high
enough. In fact, the computer screen itself uses such a grid of pixels
to display images and text.
\"The largest and most detailed photograph of our galaxy ever taken has
been unveiled. The gigantic nine-gigapixel image captures more than 84
million stars at the core of the Milky Way. It was created with data
gathered by the Visible and Infrared Survey Telescope for Astronomy
(VISTA) at the European Southern Observatory\'s Paranal Observatory in
Chile. If it was printed with the resolution of a newspaper it would
stretch 30 feet long and 23 feet tall, the team behind it said, and has
a resolution of 108,200 by 81,500 pixels.\"[^5]
While this galaxy image is obviously an extreme example, it illustrates
that images (even much smaller images) can take significant computer
space. Here is a more mundane example. Suppose you have an image that is
1500 pixels wide, and 1000 pixels high. Each pixel is stored as a 24-bit
color. How many bytes does it take to store this image?
This problem describes a straightforward but naive way to store the
image: for each row, for each column, store the 24-bit color at that
location. The answer is $1500\times1000$ pixels multiplied by 24
bits/pixel, divided by 8 bits per byte = 4.5 million bytes, or about
4.5MB.
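The arithmetic in this example can be checked with a few lines of Python (a sketch of the calculation only):

```python
width, height = 1500, 1000    # pixels
bits_per_pixel = 24           # 24-bit color
total_bits = width * height * bits_per_pixel
total_bytes = total_bits / 8  # 8 bits per byte
print(total_bytes)            # 4500000.0 bytes, i.e. about 4.5 MB
```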
Note the file size. If you store a number of photographs or other images
you know that images, and especially collections of images, can take up
considerable storage space. You might also know that most images do not
take 4.5MB. And you have probably heard of some image storage formats
such as JPEG or GIF.
Why are most image sizes tens or hundreds of kilobytes rather than
megabytes? Most images are stored not in a direct format, but using some
compression technique. For example, suppose you have a night image where
the entire top half of the image is black ((0,0,0) in RGB). Rather than
storing (0,0,0) as many times as there are pixels in the upper half of
the image, it is more efficient to use some \"shorthand.\" For example,
rather than having a file that has thousands of 0\'s in it, you could
store (0,0,0) once, plus a number indicating how many pixels at the start of the image
(reading line by line from top to bottom) have color (0,0,0).
This leads to a compressed image: an image that contains all, or most,
of the information in the original image, but in a more efficient
representation. For example, if an original image would have taken 4MB,
but the more efficient version takes 400KB, then the compression ratio
is 4MB to 400KB, or about 10 to 1.
Complicated compression standards, such as JPEG, use a variety of
techniques to compress images. The techniques can be quite
sophisticated.
How much can an image be compressed? It depends on a number of factors.
For many images, a compression ratio of, say, 10:1 is possible, but this
depends on the image and on its use. For example, one factor is how
complicated an image is. An uncomplicated image (say, as an extreme
example, if every pixel is black[^6]), can be compressed a very large
amount. Richer, more complicated images can be compressed less. However,
even complicated images can usually be compressed at least somewhat.
Another consideration is how faithful the compressed image is to the
original. For example, many users will trade some small discrepancies
between the original image and the compressed image for a smaller file
size, as long as those discrepancies are not easily noticeable. A
compression scheme that doesn\'t lose any image information is called a
*lossless* scheme. One that does is called *lossy*. Lossy compression
will give better compression than lossless, but with some loss of
fidelity.[^7]
In addition, the encoding of an image includes other metadata, such as
the size of the image, the encoding standard, and the date and time when
it was created.
### Video
It is not hard to imagine that videos can be encoded as a series of image
frames, with synchronized audio tracks also encoded using bits.
Suppose you have a 10 minute video, 256 x 256 pixels, 24 bits per pixel,
and 30 frames of the video per second. You use an encoding that stores
all bits for each pixel for each frame in the video. What is the total
file size? And suppose you have a 500 kilobit per second download
connection; how long will it take to download the file?
This problem highlights some of the challenges of video files. Note the
answer to the file size question is (256x256) pixels $\times$ 24
bits/pixel $\times$ 10 minutes $\times$ 60 seconds/minute $\times$ 30
frames per second = approximately 28 Gb (Gb means giga*bits*). This is
about 28/8 = 3.5 giga*bytes*. With a 500 kilobit per second download
rate, this will take 28Gb/500 Kbps, or about 56,000 seconds. This is
over 15 hours, longer than many people would like to wait. And the time
will only increase if the number of pixels per frame is larger (e.g., in
a full screen display) or the video length is longer, or the download
speed is slower.
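Again, the arithmetic is easy to check with a short Python sketch:

```python
# pixels/frame * bits/pixel * frames/second * seconds of video
bits = 256 * 256 * 24 * 30 * 10 * 60
print(bits / 1e9)             # about 28.3 gigabits of raw video
print(bits / 8 / 1e9)         # about 3.5 gigabytes
print(bits / 500_000 / 3600)  # about 15.7 hours at 500 kilobits per second
```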
So video file size can be an issue. However, it does not take 15 hours
to download a ten minute video; as with image files, there are ways to
decrease the file size and transmission time. For example, standards
such as MPEG make use not only of image compression techniques to
decrease the storage size of a single frame, but also take advantage of
the fact that a scene in one frame is usually quite similar to the scene
in the next frame. There\'s a wealth of information online about various
compression techniques and standards, storage media, etc.[^8]
### Audio
It might seem, at first, that audio files shouldn\'t take anywhere near as
much space as video. However, if you think about how complicated audio
such as music can be, you probably won\'t be surprised that audio files
can also be large.
Sound is essentially vibrations, or collections of sound waves
travelling through the air. Humans can hear sound waves that have
frequencies of between 20 and 20,000 cycles per second.[^9] To avoid
certain undesirable artifacts, audio files need to use a sample rate of at least
twice the highest frequency. So, for example, CD music is usually
sampled at 44,100 Hz, or 44,100 times per second.[^10] And if you want a
stereo effect, you need to sample on two channels. For each sample you
want to store the amplitude using enough bits to give a faithful
representation. CDs usually use 16 bits per sample. So a minute of music
takes 44,100 samples/second $\times$ 16 bits/sample $\times$ 2 channels
$\times$ 60 seconds/minute $\div$ 8 bits per byte = about 10.5MB per
minute. This means a 4-minute song will take about 40MB, and an hour of
music will take about 630 MB, which is (very) roughly the amount of
memory a typical CD will hold.[^11]
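Here is the CD calculation as a small Python sketch (the figures in the text are rounded, so the results differ slightly):

```python
# samples/second * bits/sample * channels * seconds/minute, divided by 8 bits per byte
bytes_per_minute = 44_100 * 16 * 2 * 60 / 8
print(bytes_per_minute / 1e6)       # about 10.6 MB per minute
print(4 * bytes_per_minute / 1e6)   # about 42 MB for a 4-minute song (rounded above to 40 MB)
print(60 * bytes_per_minute / 1e6)  # about 635 MB for an hour of music
```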
Note, however, that if you want to download a 40 MB song over a 1Mbps
connection, it will take 40MB/1Mbps, which comes to about 320 seconds.
This is not a long time, but it would be desirable if it could be
shorter. So, not surprisingly, there are compression schemes that reduce
this considerably. For example, there is an MPEG audio compression
standard that will compress a 4-minute song to about 4MB, a considerable
reduction.[^12]
## Sizes and limits of representations
In the last section we saw that a page of text could take a few thousand
bytes to store. Images files might take tens of thousands, hundreds of
thousands, or even more bytes. Music files can take millions of bytes.
Movie files can take billions. There are databases that consist of
trillions or quadrillions of bytes of data.
Computer science has special terminology and notation for large numbers
of bytes. Here is a table of memory amounts, their powers of two, and
approximate American English word.
`1 kilobyte (KB) — `$2^{10}$` bytes — thousand bytes`\
`1 megabyte (MB) — `$2^{20}$` bytes — million bytes`\
`1 gigabyte (GB) — `$2^{30}$` bytes — billion bytes`\
`1 terabyte (TB) — `$2^{40}$` bytes — trillion bytes`\
`1 petabyte (PB) — `$2^{50}$` bytes — quadrillion bytes`\
`1 exabyte (EB) — `$2^{60}$` bytes — quintillion bytes`
There are names for still larger quantities of these types as
well.[^13]
Kilobytes, megabytes, and the other sizes are important enough for
discussing file sizes, computer memory sizes, and so on, that you should
know both the terminology and the abbreviations. One caution: file sizes
are usually given in terms of *bytes* (or kilobytes, megabytes, etc.).
However, some quantities in computer science are usually given in terms
involving bits. For example, download speeds are often given in terms of
bits per second. \"Mbps\" is an abbreviation for mega*bits* (not
megabytes) per second. Notice the \'b\' in Mbps is a lower case, while
the \'b\' in MB (megabytes) is capitalized.
In the context of computer memory, the usual definition of kilobytes,
megabytes, etc. is a power of two. For example, a kilobyte is
$2^{10} = 1024$ bytes, not a thousand. In some other situations,
however, a kilobyte is defined to be exactly a thousand bytes. This can
obviously be confusing. For the purposes of this book, the difference
will usually not matter. That is, in most problems we do, an
approximation will be close enough. So, for example, if we do a
calculation and find a file takes 6,536 bytes, then you can say this is
approximately 6.5 KB, unless the problem statement says otherwise.[^14]
All representations are limited in multiple ways. First, the number of
different things we can represent is limited because the number of
combinations of symbols we can use is always limited by the physical
space available. For instance, if you were to represent a decimal number
by writing it down on a piece of paper, the size of the paper and the
size of the font limit how many digits you can put down. Similarly, in a
computer the number of bits that can be stored physically is also limited.
With three binary digits we can generate $2^3=8$ different
representations/patterns, namely
$000_2, 001_2, 010_2, 011_2, 100_2, 101_2, 110_2, 111_2$, which
conventionally represent 0 through 7 respectively. Keep in mind that
representations do not have intrinsic meanings, so three bits can
represent any eight different things. With n bits we can represent
$2^n$ different things because each bit can be either one or zero and
$2^n$ is the total number of combinations we can get, which limits the amount of
information we can represent.
Another type of limit is due to the nature of the representations. For
example, one third can never be represented precisely in a decimal
format with a fractional part because there will be an infinite number
of threes after the decimal point. Similarly, one third cannot be
represented precisely in binary format either. In other words, it is
impossible to represent one third as the sum of a finite list of powers
of two. However, in a base-three numbering system one third can be
represented precisely as $0.1_3$, because the one after the point
represents a power of three: $3^{-1}$.
## Notes and references
```{=html}
<references />
```
[^1]: Analog at Wiktionary.
[^2]: Actually, it\'s more complicated than that because some devices,
including some digital radios, intermix digital and analog. For
example, a digital radio broadcast might start in digital form,
i.e., as a stream of numbers, then be converted into and transmitted
as radio waves, then received and converted back into digital form.
Technically speaking the signal was *modulated* and *demodulated*.
If you have a *modem* (*mod*ulator-*dem*odulator) on your computer,
it fulfills a similar function.
[^3]: Actually we need not only data, but a way to represent the
algorithms within the computer as well. How computers store
algorithm instructions is discussed in another chapter.
[^4]: Of course how a 0 or 1 is represented varies according to the
device. For example, in a computer the common way to differentiate a
0 from a 1 is by electrical properties, such as using different
voltage levels. In a fiber optic cable, the presence or absence of a
light pulse can differentiate 0\'s from 1\'s. Optical storage
devices can differentiate 0\'s and 1\'s by the presence or absence
of small \"dents\" that affect the reflectivity of locations on the
disk surface.
[^5]: 1
[^6]: You might have seen modern art paintings where the entire work is
a single color.
[^7]: See, for example,
2 for examples
of the interplay between compression rate and image fidelity.
[^8]: For example, see
3 and the
links there.
[^9]: This is just a rough estimate since there is much individual
variation as well as other factors that affect this range.
[^10]: Hz, or *Hertz*, is a measurement of frequency. It appears in a
variety of places in computer science, computer engineering, and
related fields such as electrical engineering. For example, a
computer monitor might have a refresh rate of 60Hz, meaning it is
redrawn 60 times per second. It is also used in many other fields.
As an example, in most modern day concert music, A above middle C is
taken to be 440 Hz.
[^11]: See, for example, 4 for
more information about how CDs work. In general, there is a wealth
of web sites about audio files, formats, storage media, etc.
[^12]: Remember there is also an MPEG video compression standard. MPEG
actually has a collection of standards: see Moving Picture Experts
Group on
Wikipedia.
[^13]: See, for example, binary
prefixes.
[^14]: The difference between \"round\" numbers, such as a million, and
powers of 10 is not as pronounced for smaller numbers of bytes as it
is for larger. A kilobyte is $2^{10}=1024$ bytes, which is only 2.4%
more than a thousand. A megabyte is $2^{20} = 1,048,576$ bytes,
about 4.9% more than one million. A gigabyte is about 7.4% more bytes
than a billion, and a terabyte is about 10.0% more bytes than a
trillion. In most of the file size problems we do, we\'ll be
interested in the approximate size, and being off by 2% or 5% or 10%
won\'t matter. But of course there are real-world applications where
it does matter, so when doing file size problems keep in mind we are
doing approximations, not exact calculations.
# Foundations of Computer Science/Algorithms and Programs
## Algorithms and Programs
An algorithm can be defined
as a set of steps used to solve a specific problem. For example, a cook
may use a recipe when preparing a specific type of food. Similarly, in
computer science, algorithms are the conceptual solutions used to create
programs. It is important to distinguish an algorithm from a program.
The implementation of an algorithm is known as a
program.
### Defining information processes
Computing is about information processes. Once information is represented
concretely using different patterns of symbols it can be processed to
derive new information. We learned that computers use the binary system
internally to represent everything as sequences of bits - zeros and ones.
Chapter 1 of the Blown to Bits
book
talks about the digital explosion of bits as the result of the
innovations in computing and technologies, which enable us to turn
information into bits and share them with unprecedented speed.
Creating information processes is the topic of this chapter. We will
learn that information processes start with conceptual solutions to
problems and then can be implemented (coded) in ways understandable to
machines. The conceptual solutions are called algorithms and the
executable implementations are called programs.
### What is an algorithm?
Algorithm is a rather fancy
name for a simple idea: a step-by-step solution to a problem. Avi
Wigderson once said that the algorithm is a common language for nature, humans,
and computers. The idea has been around for a long time. You are already
familiar with many algorithms, such as tying your shoes, making coffee,
sending an email, and cooking a dish according to a recipe. Algorithms in
computing are designed for computers to follow. Imagine we have built a
machine that can perform the single digit addition procedure described
in chapter one. Recall the procedure performs the addition using simple
table lookup. If we give the machine two digits and ask it to perform
the operation, it gives two digits back as the answer. Of course the
numbers in the inputs and the output have to be represented (encoded)
properly. Even though the machine doesn\'t understand addition, it
should be able to perform the addition correctly. However, the machine
will not perform the addition unless it is instructed to do so. The
command (with input values) that signals the machine to perform an
addition is called an instruction. We have imagined that it is not hard
to use the addition procedure to create other more complex procedures
that can perform more impressive activities. Before we can create such
procedures we must identify a problem and find a conceptual solution to
it. Often the conceptual solution is one that can be carried out
manually by a person. This conceptual solution is an algorithm.
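A minimal sketch of this thought experiment in Python (the table and the
function name are invented for illustration; the machine from chapter one
is only imagined):

```python
# A hypothetical model of the single-digit adder: pure table lookup.
# The table is filled in ahead of time; the lookup itself involves no
# arithmetic - the "machine" only matches input patterns to output patterns.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def add_single_digits(a, b):
    """Report the answer for two single digits by looking it up in the table."""
    return ADD_TABLE[(a, b)]

print(add_single_digits(7, 5))  # 12
```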
### Why study algorithms?
Algorithms are a centerpiece of the computer science discipline. As we
discussed in chapter one, computing can be done blindly or purely
mechanically by a simple device. The intelligence of any computation
(information process) lies in the algorithm that defines it.
For an algorithm to be useful, it must be correct - the steps must be
logical and specific for machines to carry out - and efficient - it must
finish in a reasonable amount of time. The correctness and efficiency of
algorithms are two key issues in the study of algorithms.
### Programs are implemented algorithms
Studying algorithms allows us to solve problems conceptually regardless
of the machines that carry out the solutions. An algorithm must
communicate a conceptual solution in an unambiguous and human
understandable fashion. A notational system for describing algorithms
should allow us to describe and reason with ideas on paper. Once an
algorithm\'s correctness is verified on paper, it can be implemented as
a program understandable to a particular machine.
### Formal definition of algorithm
Alan Turing was the first
person to study algorithms mathematically by creating a universal
machine model, later called the Turing machine. He also proved that in
some circumstances computation is unavoidable - we have to actually
carry out the computation to reach the result - which separates
computing from mathematics (the birth of computer science). The Turing
machine can represent/encode information in a standard form and can
interpret and update the representation according to rules (algorithms)
that are themselves part of the representation. This machine model is
simple yet powerful. In fact, it is the most powerful computing model
known to computer scientists. The Turing machine can perform any
computation done by any machine we could ever build. The notion of Turing machine
equivalents
is defined based on this idea.
The Turing machine model allows us to study algorithms in the abstract.
For instance, we can view each algorithm as a state
machine: an
algorithm always starts in a state - represented by a data
representation of the input and the internal information - and moves
through a number of states to its final state as the result of
performing the operations prescribed in the algorithm. When the number of
possible initial states approaches infinity, the same algorithm can
generate a potentially infinite number of computations. This explains why
it is hard to verify the correctness of an algorithm through testing, as
the initial states can be too many to exhaustively enumerate.
### Define algorithms
An algorithm is simply a set of steps that allow us to solve a
particular problem. A step is a unit of work that is unambiguous and can
be done in a fixed amount of time. For example, bringing a pot of water
to a boil is a step in the tea-making process/algorithm. In computing we deal
with representations of information (data), so a step can be adding two
integers and storing the result to a variable. We will explain how to
define and use variables later.
The definition of a unit of work depends on what the agent performing
the work can do. Algorithms in computing are necessarily informed by the
capability of the computing machines. Recall that algorithms must be
implemented/described in a programming language understandable to a
machine before the machine can perform the task. There are many
different programming languages, therefore different ways to express the
same algorithm. The only language understandable to a specific machine
is called the machine language. Machine languages are written in
instructions consisting of zeros and ones (binary bits) because
computers are fundamentally machines that can manipulate two types of
symbols. Each different type of machine is designed to understand its
own native language---patterns of zeros and ones---because they can have
very different computing
hardware.
As you can imagine writing programs in machine languages can be very
hard. Normally we write programs to express our algorithms in high level
languages - languages that are close to our natural language, e.g.
English. Then we use tools (compilers and interpreters) to translate our
programs in higher level languages to machine languages, which is
analogous to using a personal interpreter when we travel abroad and
don\'t understand the native language. To run the same program on a
different machine we can simply recompile it or use a different
interpreter. High level languages hide the differences between machines
to allow us to write programs in a machine independent way, which is a
huge time saver. When we write programs in high level languages we use
an abstraction that is supported by all computers. For instance if a
high level language allows the expression of an addition we assume it
can be done by all computers.
Programming languages (high level or machine level) are tools for
expressing algorithms to machines. When we create algorithms to solve
problems conceptually we want to create them independent of the
languages. A well-designed recipe ought to work for different cooks in
different kitchens. So the steps or units of work must be defined in
terms of a higher abstraction - a set of common data structure,
operations and control structures that all languages support. Creating
algorithms conceptually using abstractions allows us humans to think on
a higher level, closer to the problem domain we know. When an
algorithm is implemented in a particular language the abstract steps can
be mapped to the specific expression in the language. We are confident
the chain of tools we have can translate the solution to the machine
level executable code. The following diagram shows that the same
algorithm can be implemented in a variety of programming languages and
the resulting programs can be executed on different machine models (maybe
after some translation).
!This diagram shows that the same algorithm can be implemented in a
variety of programming languages and the resulting programs can be run on
different machine
models.
Here are the common operations and control structures we can assume all
high level languages support:
- data structures: variables of single value and list of values.
- operations: arithmetic operations, comparisons (relational
  operations), and logical operations (and, or, and not)
- control structures: sequential (one after another), conditional
(selective on a condition), and repetition
Here is an algorithm defined in pseudo
code (natural language). This
algorithm finds the largest number in a list of numbers with the
following steps:
1. set max (a variable) to the value of the first number in the list
(store and retrieve values).
2. go through the list one number at a time comparing each number to
max, if the number is greater than max replace max with the number
(conditional and repetition).
3. the value stored in max is the answer.
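The same three steps can be written as a short Python sketch (assuming a
non-empty list):

```python
def find_largest(numbers):
    """Return the largest number in a non-empty list, following the pseudo code."""
    max_value = numbers[0]          # step 1: set max to the first number
    for number in numbers[1:]:      # step 2: go through the rest of the list
        if number > max_value:      # compare each number to max
            max_value = number      # replace max if the number is greater
    return max_value                # step 3: the value stored in max is the answer

print(find_largest([3, 17, 4, 9]))  # 17
```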
We know the solution is correct because it is so simple. We know how to
carry it out manually, but we would certainly not solve the problem
using the process expressed in pseudo code. It is necessary to design
and express the algorithm in this detailed fashion for computers. Keep
in mind that computers are machines that perform computation
mechanically and therefore the instructions must be specific. The pseudo
code, even though in natural language, must use the aforementioned
constructs (data structures, operations, and control structures) that
are common to computers. You may wonder why we can not ask the computer
to look at the whole list and identify the largest number as we humans
do. Computers are simple machines that can not think. They are designed
to perform simple operations, e.g. adding two digits via symbol
manipulation. Even we, as human beings, will not be able to scan a long
list, say a million numbers, and find the largest number at a glance.
The algorithm we write must be expressed in terms of what a computer can
do and must scale to inputs (data sets) of arbitrary sizes. The
algorithm we just studied can deal with a list of any size. In fact, it
makes little difference to a computer whether the list has three numbers
or three million numbers.
Another way to express the same algorithm is to use a graphical notation
called flow-chart. !This flow-chart shows the steps involved in finding
the largest number in a list of
numbers.
This chart shows the logic of the solution more clearly. There are two
conditionals - the checking of a condition and the corresponding actions
taken as a result. The top-most conditional defines a repetition
(loop) because there is an unconditional branch back to the conditional,
as expressed by the arrow with no label.
Both the pseudo code and the flow chart describe the same solution to
the same problem. They are different representations of the same idea.
The following figure shows an implementation of the algorithm in
Scratch. !This stack of Scratch blocks finds the largest number in a
list of
numbers.
As you can see, a concrete implementation of an algorithm must use the
building \"blocks\" available in the particular implementation language.
What is not shown in the figure is the part where the list is populated
with data from a file or user input. The structure of the code resembles
that of the flow-chart.
In summary, constructing and studying algorithms allows us to study
(algorithmic) solutions to problems in a way that is neutral to
languages and computing environments. We could use the \"finding largest
number\" algorithm as a single block to form more complex algorithms,
but we need to keep in mind this block is not a unit of work, as the
number of steps involved depends on the input size (the length of the
list). We will revisit this topic in the future when we talk about
functional decomposition (to keep algorithms simple) and algorithm
complexity analysis (to count the cost of algorithms).
Programs: Each software program boils down to two components - data
(structures) and algorithms. We will study some fundamental algorithms
and data structures in computer science.
### Example algorithms
#### Image encoding/representation
Follow this
<http://csunplugged.org/sites/default/files/activity_pdfs_full/unplugged-02-image_representation.pdf>
image representation
activity to see how images
are encoded, transmitted, and reproduced in fax machines.
#### Error detection
Follow this
<http://csunplugged.org/sites/default/files/activity_pdfs_full/unplugged-04-error_detection.pdf>
error detection activity to see how
the algorithm works to detect and also correct single-bit errors.
A similar algorithm, Luhn
algorithm, is used to
validate credit card numbers.
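As a rough sketch, not part of the linked activity, the Luhn check can be
written in a few lines of Python:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    # Process digits right to left; double every second digit,
    # subtracting 9 when the doubled digit has two digits.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True - a commonly used test number
```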
#### Text compression
Text compression is another
important task in computing. The following activity demonstrates how a
compression algorithm works:
<http://csunplugged.org/sites/default/files/activity_pdfs_full/unplugged-03-text_compression.pdf>
#### Searching
Why is searching important? We do it on a daily basis. It is good
business too. Google\'s mission is to organize the world\'s information
and make it universally accessible and useful. Obviously to be able to
find the information we need fast is very useful and profitable.
We can always find a piece of information by going through a list of
them sequentially checking each one of them. Could you describe the
algorithm using either the pseudo code or the flow-chart notation?
Structure-wise this algorithm should resemble the \"find largest
number\" algorithm. This algorithm needs two inputs: the list and the
target item we are looking for. The repeated steps are fetching the next
item and comparing it to target. The piece of information used in the
comparison is also known as the key because it determines whether a
search is successful or not. For instance, if we have a list of students
we can search a student by last name, birthdate, or shoe size, which are
search keys.
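One possible answer, written here as a Python sketch rather than pseudo
code or a flow-chart, is a loop that compares each item to the target:

```python
def sequential_search(items, target):
    """Return the index of target in items, or -1 if it is not present."""
    for index, item in enumerate(items):
        if item == target:   # the comparison on the search key
            return index
    return -1                # checked every item without a match

print(sequential_search([4, 8, 15, 16, 23, 42], 16))  # 3
print(sequential_search([4, 8, 15, 16, 23, 42], 7))   # -1
```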
A sequential search is straightforward, but it can be costly if we need
to perform it very often. However, if the list is ordered by the search
key we can use a much better algorithm by taking advantage of this
ordered property of the data. Think about the index of a book, a phone
book, or a dictionary. They are all ordered somehow. For instance, home
phone numbers in a phone book are usually ordered by the owner\'s last
name and business phone numbers are ordered by business type. Entries
in a dictionary or the index of a book are ordered alphabetically. How
does this orderedness help us when we search for information? It allows
us to guesstimate where the search target is located. The number
guessing
game
illustrates the idea well. If the list of numbers is random but ordered
increasingly we can always guess the target number is in the middle of
the search range (initially the whole list). If we are not lucky, we can
eliminate half of the candidates - if the number in the middle is
greater than the search target the new search range will be the first
half of the previous search range, otherwise the second half. Imagine
the reduction in the number of comparisons! We will study this algorithm
in much detail when we discuss algorithm complexities.
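A small Python sketch of this guessing strategy (assuming the list is
sorted in ascending order) might look like this:

```python
def guess_position(sorted_numbers, target):
    """Repeatedly guess the middle of the remaining range; return the index or -1."""
    low, high = 0, len(sorted_numbers) - 1
    while low <= high:
        middle = (low + high) // 2
        if sorted_numbers[middle] == target:
            return middle
        if sorted_numbers[middle] > target:
            high = middle - 1        # target must be in the first half
        else:
            low = middle + 1         # target must be in the second half
    return -1                        # range is empty: target is not in the list

print(guess_position([2, 5, 9, 14, 21, 34], 21))  # 4
```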
### Social impact
Please watch the following video How algorithms shape our
world. Could you explain
in what ways algorithms are shaping our world?
In the world of computing, we store and process data (representations of
information) by
quantifying information.
This quantification process reduces the world to what can be counted and
measured and emphasizes abstraction and efficiency (speed). We must not
be fooled into believing that the abstractions of reality are true
reality, as in abstractionism. As
Frederick Brooks warns us "Models are intentional oversimplifications to
help us with real-life problems that are frighteningly complicated. The
map is not the terrain."
# Foundations of Computer Science/Algorithm Design
## Algorithm Design
Algorithm design is a
specific method to create a mathematical process in solving problems.
One powerful example of algorithm design can be seen when solving a
Rubik\'s cube. When solving a Rubik\'s cube (of any size), it is
important to know the step by step instructions or algorithm to reach an
efficient solution. This is where algorithm design comes into place.
There are designs that break down the seemingly complex solution by
addressing each layer (First, Middle, and Last) and the colors. Please
follow the link on solving the last layer of a 3X3 Rubik\'s cube.
### Approaches to algorithm design
#### Top Down
The top down approach to design is starting by examining the full
problem or one way to think of it is to look at the big/whole picture
first. Once you have assessed the main problem then you divide the
problem into smaller components or parts.
The next portion of the top down approach is to begin testing.
Initially, we will have portions that are missing due to focusing on the
bigger picture. In situations where parts of the problem have not been
solved, stubs or placeholders are used as temporary stand-ins.
One way to think about the top down approach is in a hierarchical
setting, such as a general commanding his troops. The general will break
down a mission by assigning each soldier a specific task to complete,
which in turn contributes to a critical part of the overall mission.
#### Bottom Up
The bottom up approach to algorithm design starts with the smallest
units or parts of a problem. This approach solves the smallest units
first and then gradually builds out the next layer or solution. Using
this method ensures that the smallest unit has been successfully tested,
so that when you start solving or implementing the next sub-solution it
will work because the previous layers already work correctly.
One example is building a car. Each piece of the car is engineered,
created, and tested piece by piece. Knowing that the smaller parts work
correctly the parts are then gradually added on an assembly line. As the
parts are added, you know that the smaller components work due to
thoroughly testing each piece. Eventually, as you walk through this
process the end result is a properly functioning car.
### Algorithm Design: Building Blocks
There are basic logic structures and operations involved in algorithm
design. Building blocks are necessary to decide how we want to
manipulate units of work. The basis of every algorithm is steps or
blocks of operations. These steps/blocks of operation can be as simple
as adding two numbers together. However, these blocks of operation can
also be complex, for example, finding the maximum value in a list of
numbers.
Logic structures are important for organizing steps into a
process/solution. The following four basic logic structures describe the
type of blocks used for creating algorithms:
- procedure/function call - one example would be a single block in
Scratch
- sequence - in order to create a sequence you need a stack of blocks
- alternatives - use of the if-then-else blocks to indicate alternate
solutions for a particular problem
- iteration - using the \"repeat\", \"for\", and \"forever\" blocks to
build loops to solve problems
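A minimal Python sketch of these logic structures (the names and values
are invented for illustration):

```python
def greet(name):                     # procedure/function call: a reusable named block
    print("Hello, " + name)

greet("Ada")                         # sequence: steps run one after another
greet("Alan")

temperature = 65
if temperature > 80:                 # alternatives: if-then-else
    print("Too hot")
else:
    print("Comfortable")

for count in range(3):               # iteration: repeat a block of steps
    print("repetition", count)
```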
Most languages have programming constructs for basic operations and
logic structures. In order to understand programming constructs you must
first learn about constructed
language. According
to Wikipedia, a constructed language \"is a language whose phonology,
grammar, and vocabulary has been consciously devised for human or
human-like communication, instead of having developed naturally\".
Similarly, a programming
construct is
\"designed to communicate instructions to a machine, particularly a
computer.\"
# Foundations of Computer Science/Algorithm Complexity
## Algorithm Complexity
\"An algorithm is an abstract recipe, prescribing a process that might
be carried out by a human, by computer, or by other means. It thus
represents a very general concept, with numerous applications.\"---David
Harel, \"Algorithmics - the spirit of computing\".
We have learned that algorithms are conceptual solutions to problems.
Computing algorithms can be described/defined in terms of the common
units of work that computers can do so that they are neutral to
programming languages and the execution environment (computers).
Algorithm writing is such a creative process as Francis Sullivan noted
that \"for me, great algorithms are the poetry of computing. Just like
verse, they can be terse, allusive, dense, and even mysterious. But once
unlocked, they cast a brilliant new light on some aspects of
computing.\" You have seen example algorithms documented using abstract
notations such as pseudo code and flowchart.
Once algorithms are implemented/coded, i.e. represented by concrete
symbols, they come alive in the programs that embody them. Programs are
executable/runnable by computers. The ability to run different programs
that implement all kinds of algorithms is a unique feature of computers as
machines. Usually when we buy a machine, e.g. an appliance, we assume it
has a set of well defined functions it can perform. For example a
microwave oven is supposed to warm and cook our food and nothing more.
We don\'t expect a microwave oven ever to be able to wash the clothes
for us. A computing machine (a computer) is different. We expect it to
perform whatever function the program running on it makes it do - the
functionality of a computer is extensible. If we liken a computer to a
car, programmers are drivers who can make the car do different things
(to a certain extent) and users are like passengers taking advantage of
the things the car can do. This is another reason why everyone needs to
learn computer programming: it gives you the freedom to make the
computer do different things.
### Correctness of Algorithms
Algorithms must be correct to be useful. We must examine our algorithms
carefully to remove errors before using them to create programs. If an
algorithm has errors, it is impossible to create correct programs. We
often remind students that if their algorithms do not work on paper they
won\'t work on a computer. So we must work out our algorithms on paper
first.
Even if the design of an algorithm is correct we may introduce errors
during the programming process. Like any natural language, a programming
language has its own syntax consisting of grammatical rules for using
the language. When we violate such rules we introduce syntax errors in
our program. This type of error is easy to fix; in fact most modern
program editors can detect and warn us about such errors. Another type
of error is the logic error, which results from the misuse of the
language. In other words, our grammatically correct program doesn\'t
make sense, or makes the wrong sense, to the computer. For example, a
recipe can be unclear or misleading even though all sentences are
grammatically correct. You may think that computers can also make
mistakes when running programs that are logically correct. This is true,
but it is very rarely the case, especially with modern computers
equipped with error detection mechanisms. We generally assume computers don\'t make
mistakes. If the program doesn\'t generate the correct answer, it is the
result of human errors.
We also call logic errors software
bugs. The original \"bug\"
is, in fact, a hardware failure - a moth caught in an electromechanical
computer. Now bugs are generally used to refer to any error/failure in
computer systems, both in hardware and in software. When a computer
program is buggy, it will produce erroneous results or crash as the
computer may not know what to do next. Another more subtle bug may cause
the computer program to never finish, known as the infinite loop, which
is obviously not what we wanted.
Bugs are almost inevitable as humans make mistakes. Then, how do we fix
bugs to ensure the correctness of our programs? We must test our
programs to verify their correctness. A test consists of a sample input to
a program and the desired output from the program. We can run a test by
subjecting the program to the sample input and collecting the actual output.
If the actual output matches the desired output the program passes the
test, otherwise there is a bug in the program or in the test (tests can
be buggy too). We usually use a set of tests (a test suite) to exercise
different parts of the algorithm. This process is called debugging as
expressed in the following pseudo code:
`for each test in the test suite`\
` run and compare the actual output to the desired output`\
` if they match move on to the next test`\
` otherwise fix the bug and repeat the whole process`
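The same debugging loop can be sketched in Python; here each test is a
pair of sample input and desired output, and `find_largest` is just a
stand-in for whatever program is under test:

```python
def find_largest(numbers):
    """The program under test (a stand-in example)."""
    return max(numbers)

test_suite = [
    ([1, 2, 3], 3),        # (sample input, desired output)
    ([7, 7, 7], 7),
    ([-5, -2, -9], -2),
]

for sample_input, desired_output in test_suite:
    actual_output = find_largest(sample_input)
    if actual_output == desired_output:
        print("pass:", sample_input)
    else:
        print("FAIL:", sample_input, "expected", desired_output, "got", actual_output)
        # in real debugging we would fix the bug and repeat the whole process
```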
Note that it is very difficult to make our tests exhaustive except for
very simple programs. When a program becomes larger and more complex the
number of tests needed to cover all possible cases grows very fast. As
Dijkstra said
\"program testing can be used to show the presence of bugs, but never to
show their absence!\" There are techniques for proving the correctness
of programs. For instance, microcode for computer processors is often
proved correct via formal
verification.
### \"Goodness\" of algorithms
There are usually multiple ways to solve the same problem and,
therefore, multiple algorithms to make computers solve the problem for
us. In addition to solving the problem correctly we want to be able to
compare/evaluate algorithms that solve the same problem. We must define
the criteria/measure for \"goodness\" in algorithm design. Such criteria
or measures can include simplicity, ease of implementation, speed of
execution, or preciseness of answers. In computing we care the most
about the solution speed because a fast algorithm can solve larger
problems or more problems in the same amount of time. This is also known
as efficiency - an economic measure of many processes. Often the
usefulness of a program depends on the timeliness of the results.
For instance, a program that takes 24 hours to predict the next day\'s
weather is almost useless.
Given an algorithm how do we measure its \"speed\"? One possible
approach is to implement the algorithm, run the result program, and
measure its execution time - elapsed time between the start and the end
of a program run. There are some challenges in this approach. First the
algorithm must be implemented, which can be a serious undertaking.
Secondly, to run two programs to compare their execution time we must
subject them to the same input (a.k.a a workload, e.g a list of one
million numbers to be sorted) and the size of the input is never ideal.
Thirdly, the \"speed\" of a running program is influenced by the
execution environment, such as the machine\'s hardware configuration.
Take a recipe for example. Different cooks will surely spend different
time following it. The amount of food needed will surely magnify the
difference - as we increase the amount of ingredients the lengthy
recipes will take even longer to follow. But there are intrinsic
characteristics of the recipes that affect the preparation time. A
recipe that involves beating eggs can instruct the cook to break each egg and
scramble it individually, or to break all eggs first and scramble them
together. Obviously the first method is slower due to the additional
steps involved. Algorithms in computing exhibit similar characteristics.
Recall that algorithms must be defined in terms of the units of work
(steps) computers can perform so that it is straightforward to implement
them in programming languages. The way the steps are ordered and
organized in algorithms can significantly affect the execution time of
the programs that implement the algorithms.
Since an algorithm is a conceptual solution to a problem, we want to
study its \"speed\" in an abstract sense without actually implementing
it. This is known as algorithm analysis in computer science. In this
approach we take an algorithm described in pseudo code or flow chart,
count the number of steps (units of work), which is always a function of
the input size. In the aforementioned example recipe the time it takes
to follow the first method (break each egg and scramble it individually)
is directly proportional to the number of eggs involved. In fact if only
one egg is needed, there is no difference between the two methods.
Instead of measuring the steps taken for a particular input size, we
focus on the relationship function between the number of steps and the
input size, which shows the pattern in which the amount of work (cost)
grows as the input size increases. Such functions are also known as
growth functions. Then, we apply asymptotic
analysis, which
compares functions as inputs approach infinity, to simplify the functions,
because as the input size approaches infinity the difference between the
units of work disappears (we can assume breaking an egg and scrambling
it take the same amount of time) and the cost of the most \"complex\" part
of the task will dominate the total cost (a part that repeats a step 10
times will dwarf any step that is only done once). For example, suppose a
recipe has one quick step (0.0001 seconds per serving) that repeats 10
times and one slow step (10 seconds, independent of the servings) that
doesn\'t repeat; preparing N servings (the input size) would cost
$0.0001*10*N$ seconds in total on the repeated step and would always cost
10 seconds on the slow step. When N is bigger than 10,000, the repeated
part would cost more than the slow part. In asymptotic analysis we can
ignore the slow step because its contribution to the total cost is
negligible when N approaches infinity. With simplified growth functions
we can put them into
categories considering algorithms in each category to have similar
performance characteristics. This type of analysis is not completely
precise but it can be very useful in studying the nature of algorithms
and predicting their performances. We learn how to put algorithms into
categories and rank their performances according to the categories they
are in. We denote each category using big O notation.
In summary we have discussed the following few key concepts regarding
algorithm complexity analysis:
- Algorithms are conceptual and abstract solutions to problems and
programs are concrete executable code computers can run. We can not
run algorithms on computers; we can only run programs that implement
algorithms. Therefore, the performance of an algorithm is by
definition abstract (can not be concretely defined or measured).
- The goodness measure of an algorithm has to be an intrinsic
characteristic of the algorithm itself - something that reflects the
\"cleverness\" of the design regardless the implementation details
and the future execution environments.
- From an economic perspective we care about the cost of algorithms
both in terms of time and space. We will focus only on the time cost
in this course. We can not actually measure the time cost of
algorithms, but we can study the relationship between the time cost
and the input size, which reflects the internal complexity (or
cleverness) of algorithms. We represent such relationships as growth
functions with the input size as the variable and the total cost as
the value. The growth functions can be simplified by keeping only
  the dominant term (asymptotic analysis) because other terms won\'t
matter when the input becomes really large (eventually approaching
infinity). Simplified growth functions are put into categories and
  algorithms can be ranked by the categories they belong to.
- Algorithms in the low complexity category will perform better than
algorithms in the higher complexity categories when the input size
is sufficiently large. We care about large input sizes because any
algorithm can solve a small problem fast. With this algorithm
analysis technique we can evaluate and compare algorithms to predict
the relative performance of the programs implementing such
algorithms before actually implementing any of them.
- Efficiency, another term often used, is inversely proportional to
  complexity. A more complex algorithm is less efficient because it
  makes less efficient use of the computing resources.
### Examples
There are at least two ways to calculate the sum of all numbers between
1 and N. The following are two algorithms:
- Algorithm 1: add all the numbers up manually, one by one
- Algorithm 2: calculate the result using this formula $(N+1)*(N/2)$
Consider the following question: if we carry out the two algorithms
manually, which one would run faster if
- N = 2?
- N = 100?
- N = 1,000,000?
Let\'s see how the algorithms behave (after being implemented) on a computer. The
following script implements algorithm 1 using a block that iterates
through all the numbers in the range and adds them to the sum one at a
time. !A Snap! script that adds all the numbers between two
numbers.
!A Snap! reporter block that adds all the numbers between two numbers
and reports the
sum.
The same problem can be solved using algorithm 2, which uses a formula
to calculate the result as shown in the following script and the reporter
block. !A Snap! reporter block that calculates the sum of all numbers in
range and reports the
result.
!A Snap! block that calculates the sum of all numbers in a
range.
Both scripts (programs) take the same input - two numbers that define a
range. The number of numbers in the range is the input size or more
properly the problem size. We assume the units of work are arithmetic
operations $+-*/$, assignment operations, and the report operation. Such an
assumption is reasonable because indeed those operations take the same
time to perform regardless of the operands. If you run the programs and try
different input sizes you will observe that as you increase the input
size the execution time of the first program increases steadily whereas
the second program shows no change in execution time. Can we predict
such behaviors?
Let\'s apply algorithm analysis to these two algorithms. The first
algorithm loops through the numbers to get the sum, therefore the
relationship function between the cost (the number of steps - additions)
and the input size is $f=a+b*N$, assuming N is the input size and a and
b are the cost for creating the script variable and the cost for an
addition. Note that both a and b are constants because they don\'t change
for a given computer. We can simplify the function to $f=N$ because when
N approaches infinity the constants don\'t matter anymore. We assign
algorithms with this type of growth function to the linear time category,
denoted by
$O(N)$. The second algorithm always takes the same amount of time
because it performs the same set of operations regardless of the input
size. It belongs to the constant time category denoted by $O(1)$.
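The two algorithms can also be compared side by side in Python (a sketch
mirroring the Snap! scripts, not the scripts themselves):

```python
def sum_by_loop(first, last):
    """Algorithm 1: add the numbers one at a time - linear time, O(N)."""
    total = 0
    for number in range(first, last + 1):
        total = total + number
    return total

def sum_by_formula(n):
    """Algorithm 2: one fixed set of operations - constant time, O(1)."""
    return (n + 1) * (n / 2)

print(sum_by_loop(1, 1000000))   # slows down as the range grows
print(sum_by_formula(1000000))   # takes about the same time for any n
```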
Let\'s consider another problem: are the numbers in a list distinct? Here is a
straight forward algorithm in pseudo code:
- Step 1: compare the first number with all of the other numbers in
the list. If, at any point, the same number is seen, stop and answer
NO.
- Step 2: repeat Step 1 by taking the next number from the list and
comparing it with all of the other numbers.
- Step 3: after using all numbers in the list, stop and answer YES.
The input size is the size of the list. The more numbers in the list the
longer it takes to answer the question. According to the algorithm it is
possible that the first comparison finds two identical numbers, which
answers the question right away. This is good design because at that
point it is unnecessary to continue with the rest of the algorithm. When
we analyze algorithms we focus on worst cases because the performance of
an algorithm depends on the actual input and we should not rely
on luck! So the relationship (growth) function between the cost and the
input size for this algorithm should be $f=N*(N-1)$ because in the worst
case (all numbers are unique) the algorithm has to compare each number
to all the other numbers in the list before it arrives at the answer. To
simplify the function we keep only the largest dominating term and get
$f=N^2$, which puts the algorithm in the quadratic category $O(N^2)$. If
you are interested you can reproduce and run the following script (an
implementation of the algorithm) to verify the predicted performance
characteristics of the algorithm. !A Snap! reporter block that tests
whether all the numbers in a list are
unique.
!A Snap! script that tests whether all the numbers in a list are
unique.
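A Python sketch of the same pseudo code (not the Snap! block itself)
makes the nested comparisons explicit:

```python
def all_unique(numbers):
    """Return True if no two numbers in the list are equal - O(N^2) comparisons."""
    for i in range(len(numbers)):
        for j in range(len(numbers)):
            if i != j and numbers[i] == numbers[j]:
                return False          # found a duplicate: stop and answer NO
    return True                       # compared every pair: answer YES

print(all_unique([3, 8, 1, 9]))   # True
print(all_unique([3, 8, 3, 9]))   # False
```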
Let\'s look at an algorithm from another category: print all n-digit
numbers.
`For each number in 0, 1, 2, 3, ..., 9 `\
` use the number for the first digit`\
` enumerate all n-1 digit numbers`\
` append every n-1 digit number to the first digit to form an n digit number`
This example is a little contrived but it demonstrates a new performance
category and is very useful when we discuss encryption techniques. The
algorithm is costly simply because of the amount of output (all n digit
numbers) it has to generate. So the cost of the output part will
dominate the total cost. To generate all n digit numbers we must first
generate all n-1 digit numbers. To generate all n-1 digit numbers we
must first generate all n-2 digit numbers and so on. It is easier to
study the process backward. Assuming outputting a number is the unit of
work (it takes the same time to output a number) the cost to generate all
one digit numbers is 10. To generate all two digit numbers we have to
output $10*10=10^2$ numbers. For three digit numbers the cost is $10^3$.
Did you see a pattern? To generate all n digit numbers it costs $10^n$.
The cost of this type of algorithm grows much faster than that of the
quadratic ones because the input size is in the exponent. Such algorithms
belong to the exponential time category, denoted as $O(2^N)$. The base
(10 in this example, 2 in the notation) doesn\'t matter much for the
categorization, because all exponential functions are more similar to
each other than to quadratic functions or linear functions.
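A recursive Python sketch of this enumeration (leading zeros allowed, as
in the pseudo code) shows why the amount of output, and therefore the
cost, is $10^n$:

```python
def all_n_digit_numbers(n):
    """Return the list of all n-digit strings (leading zeros allowed), 10**n of them."""
    if n == 0:
        return [""]                       # base case: one empty prefix
    shorter = all_n_digit_numbers(n - 1)  # first enumerate all (n-1)-digit numbers
    result = []
    for digit in "0123456789":            # prepend each possible first digit
        for rest in shorter:
            result.append(digit + rest)
    return result

print(len(all_n_digit_numbers(3)))  # 1000 = 10**3
```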
Here is a YouTube video illustrating the performance difference between
the bubble sort algorithm and the quick sort algorithm:
<https://www.youtube.com/watch?v=aXXWXz5rF64>
# Foundations of Computer Science/Abstraction and Recursion
## Abstraction and Recursion
Programming is easy as long as the programs are small. Inevitably our
programs will grow larger and larger as we create them to solve
increasingly complex problems. One technique we use to keep our
algorithms and programs simple is
abstraction, which is an
idea widely used in many fields such as art, math and engineering. In
this chapter we will study how to apply this technique in algorithm
design and programming.
An abstraction removes details to help us focus our attention. For
instance, a car presents a set of simple controls as an interface to its
drivers. As long as we know how to use the interface we can drive the
car without knowing how it operates under the hood. The internal workings
of a car are unnecessary detail to drivers, unless they are race car
drivers who need this type of knowledge to drive the car most efficiently.
This interface hasn\'t changed much since the first car was made.
Abstraction also generalizes concepts by extracting common features from
specific examples. This car interface extracts common car features (what
a driver needs to know to drive a car) from all kinds of cars. Once you
learn how to drive one car you have learned how to drive all cars of the
same type. This is a powerful idea. Such abstraction also gives car
makers the freedom to change the internal design of a car without
affecting the users.
### Abstraction in Computing
We have learned that algorithm design is an integral part of problem
solving using computing and a first step in programming. The hardest
part isn\'t programming/coding but the keeping track of details in large
programs. There are two primary ways to keep our programs \"small\":
chunking and layering, which are two metaphors for abstraction. Chunking
breaks down (decompose) functionality into smaller units and let units
interact with each other through a well-defined interface. For instance,
in Snap! you can implement an algorithm as a block, which then can be
used anywhere in your script as long as you can call the block with a
proper sequence of parameters according to the interface. Layering
separates the functional units (blocks) into layers to separate concerns
and simplify interaction patterns to make the complexity more manageable.
The following figure illustrates the idea of layering.
!By organizing functional units (blocks) into layers we can simplify the
interactions and allow concurrent development of the layers.
In the figure each layer relies on the layer below it to function and
provides services to the layer above it. For example, a unit in layer
one is only allowed to call units in layer 2 below it. All interactions
are limited to pairs of layers that are next to each other in a stack of
layers. We could replace a layer completely with a new implementation
without affecting the rest of the stack, which achieves modularity. On
the contrary if any arbitrary interaction is allowed we may end up with
a tightly coupled system as shown in the following figure.
!Without any restriction any unit (block) can call any other
unit.
### Abstraction Examples
Using abstraction to achieve simplification and generalization is truly
a powerful organizing idea. Recall the thought experiment in chapter
one, in which we built a machine that can potentially solve groups of
equations. The machine was built through abstraction - we construct
larger building blocks using smaller (more elementary) ones and treat
each block as a unit ignoring the internal details.
The Snap! environment allows us to construct programs in a similar
fashion. When a block is defined it becomes a new building block (user
defined). The block can be arbitrarily complex, but to use it we only
need to know the interface - its name and its list of parameters. The
unnecessary details are hidden from the users greatly simplifying the
thinking involved in programming.
Let\'s look at a concrete example. In order to make a sprite draw
different equilateral shapes we can create a block for drawing each
type of shape, such as triangles, squares, and pentagons. A block
in Snap! is an abstraction that hides details and represents a certain
behavior/intention. The following block draws a square with a specific
size.
!This Snap! block draws a square with a specific
size.
To draw a triangle we can use a similar logic structure.
!This Snap! block draws an equilateral
triangle.
The draw triangle block repeats three times to draw each side with a turn
of 120 degrees. Do you know why the sprite has to turn 120 degrees to
form a 60 degree angle between the two sides? I hope you have noticed that the
same logic structure can be used to create blocks for drawing any
equilateral shape. Instead of creating a block for each shape we can
generalize the task into one block that can draw any shape. This block
needs an additional piece of information - the number of sides.
!The Snap! block can draw any equilateral
shape.
With the number of sides we can determine the internal angle of the
shape, which is all we need to draw the shape. Please check out this
resource
if you are not sure how to calculate the internal angle using the number
of sides. This block can serve as an abstraction of the task of drawing
equilateral shapes (polygons). You may have noticed the length of the
sides is hard-coded (typed in as a constant, not a parameter). What if we
want to draw shapes of different sizes? We can further generalize the
function of the block by adding another parameter and using it to control
the side length.
!This Snap! block draws any equilateral polygon of any
size.
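For readers working outside Snap!, a rough equivalent sketch using the
turtle module in Python might look like this:

```python
import turtle

def draw_polygon(sides, length):
    """Draw an equilateral polygon; the turn is the exterior angle, 360/sides."""
    for _ in range(sides):
        turtle.forward(length)
        turtle.right(360 / sides)

draw_polygon(3, 100)   # triangle: turns 120 degrees at each corner
draw_polygon(5, 80)    # pentagon: turns 72 degrees at each corner
turtle.done()
```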
Through this example, we have demonstrated defining blocks to abstract
away details of a task and generalizing a solution to solve more
problems.
### Recursion
Recursion is a pattern that
is self-similar - the whole consists of smaller parts that are
structurally similar to the whole. For example, a tree consist of
branches that look like smaller trees. Similarly, a directory tree of a
file system on a computer and an ancestry tree genealogy exhibit a
similar pattern. The following figure shows a recursive tree.
!Tree created using the Logo programming language and relying heavily
on
recursion.
Self-similarity allows us to define concepts that exhibit such a pattern
in a more concise and elegant way. A tree can be either a trunk with no
branches or a trunk with a number of branches, each of which is a tree.
This definition covers all possible tree structures. How would you
describe the following picture?
!`A visual form of recursion known as the Droste effect. The woman in this image holds an object that contains a smaller image of her holding an identical object, which in turn contains a smaller image of herself holding an identical object, and so forth.`
If you were to do it by delving into finer and finer details repeatedly
it never ends. Can you define the picture recursively? Another example
is the definition of the factorial function in math: $1!=1$ and, for all
$n>1$, $n! = n*(n-1)!$. This recursive definition not only defines factorial
but also describes a way to calculate factorial. For example, 5! can be
calculated from 4!, which is 4 times 3!, which is 3 times 2!, which is 2
times 1!, which is 1 by definition.
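The recursive definition translates almost word for word into code; here
is a Python sketch that follows the definition above (it assumes n is at
least 1):

```python
def factorial(n):
    """Compute n! directly from its recursive definition."""
    if n == 1:                        # base case: 1! = 1
        return 1
    return n * factorial(n - 1)       # recursive case: n! = n * (n-1)!

print(factorial(5))  # 120
```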
If the problem we are solving is self-similar - solving the whole
problem is a matter of solving the parts that are similar to the whole -
the solution we are defining for the whole can be used to solve the
subproblems (recursion). The beauty of a recursive solution is that the
definition of the problem is the solution as shown in the factorial
example. To design a recursive solution we practice wishful thinking -
as we describe the solution to the whole problem we assume it is fully defined
and can be used to solve the smaller subproblems. In computer
programming, this is called recursive thinking/programming. To program
recursively, we describe the solution to a problem as a
function/procedure/block, in which we break the bigger problems into
smaller ones and use the function we are defining to solve the smaller
problems. If the problem is finite, eventually the smaller problems are
so simple that they are directly solvable. In such cases the recursion
stops. We call those cases base cases. By the time our program reaches
all base cases, we would have solved the whole problem because all the
subproblems are solved including the problem we start with. In any
recursive function, two essential parts must exist: base cases and
recursive cases. The recursive cases part breaks a bigger problem into
smaller ones. The base cases stop the recursion by solving the directly
solvable problems. If both parts exist and are structured properly, the
algorithm (function) can solve problems of any size by asking clones of
itself to solve partial problems. Recursive problem solving is a
powerful idea because it simplifies our thinking: if you can define a
problem recursively you can solve it recursively. Recursive solutions
are more elegant and easy to verify, but they only lend themselves to
problems that can be defined recursively.
Before we study some concrete examples we will introduce the concept of
function, which makes recursive solutions more manageable. A block in
Snap! is considered a function (similar to a math function) if it has
the following properties:
- a function takes an arbitrary number of inputs (zero or more)
- a function always returns/reports exactly one result value
- for the same input a function always reports the same result value
- the execution of a function has no side effects to the environment
With such restrictions, functions in Snap! are blocks that perform a
task in isolation (in its own world) and hand off the result to be
further processed. According to the definition, which blocks in the
following list are functions?
!Some blocks in this list are
functions.
### Recursion Examples
Consider the binary search algorithm:
`Find an item (target) in a sorted list`\
` if the list is empty, the target cannot be found`\
` consider the item in the middle, if it matches the target you are done `\
` otherwise, decide which half to search next`
To search one half of the ordered list is just a smaller version/part of
the overall problem so we can apply the same algorithm wishing the
algorithm is fully defined. This process goes on till a list is empty or
the search target is found.
Clearly the base cases are
- if the list is empty, then the target cannot be found
- if the target is in the middle of the list, then the target is found
The recursive cases are (assume the list is sorted in ascending order)
- if the item in the middle of a list is smaller than the search
target, continue searching in the second half of the list
- otherwise, continue searching in the first half of the list
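Putting the base cases and recursive cases together, a recursive Python
sketch of binary search (assuming a list sorted in ascending order) might
be:

```python
def recursive_search(sorted_list, target, start, end):
    """Return the index of target between start and end (inclusive), or -1."""
    if start > end:                       # base case: empty range, cannot be found
        return -1
    middle = (start + end) // 2
    if sorted_list[middle] == target:     # base case: found at the middle
        return middle
    if sorted_list[middle] < target:      # recursive case: search the second half
        return recursive_search(sorted_list, target, middle + 1, end)
    return recursive_search(sorted_list, target, start, middle - 1)  # first half

numbers = [1, 3, 5, 7, 9, 11]
print(recursive_search(numbers, 9, 0, len(numbers) - 1))   # 4
print(recursive_search(numbers, 4, 0, len(numbers) - 1))   # -1
```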
# Foundations of Computer Science/Recursion Revisited
## Recursion Revisited
Recursive solutions provide another powerful way to solve self-similar
problems. The example that we will examine is the binary search
solution.
### How Binary Search Works?
Binary search is a process for identifying a target item in a sorted list. You start
with the first base case which is to directly solve whether the middle
item is equal to the target. If the middle item equals the target then
the search is complete and the index is reported. If the list is empty
then the search reports -1 or not found. Otherwise, decide if the target
is greater or less than the middle item to dictate which half of the
list to search.
Next, if the middle item is greater than the target the first half of
the list is searched, else, we search the second half of the list. This
is the same problem that we originally started with making this a
recursive or self-similar problem.
The image below is the binary search code implementation. The base cases
and recursive cases are labeled.
!Binary search is used to find an item (target) in a sorted list. It
consists of base cases and recursive cases that make up the recursive
solution.
### Binary Search: Abstraction Simplification
One excellent way to simplify the binary search solution is using
abstraction. The image previously shown uses a helper block called
\"middle between\" which is used to find the middle index when given the
first and last index of a sorted list (see below).
!The helper block used in a binary search to find the middle index
between the first and last index given as
input.
Another way to simplify the binary search solution is adding another
level of abstraction. The helper block below shows the target and list
being passed in as input into the \"binary search for\" block (see
below). The actual recursive solution is implemented when the recursive
search block detects the first and last index of a sorted list. The
\"binary search for\" block allows users to call the recursive search
without the user being required to provide the first and last indices.
The user will only see the information and/or blocks necessary to start
the binary search. The user will not see the recursive search or behind
the scenes of the solution).
!The binary search helper block that adds another level of abstraction
that searches for the target in a list by calling the recursive search
block.
### Binary Search: Tracing
We have previously discussed simplifying our interface for the recursive
search using \"binary search for\" which takes the target and a list as
inputs (see example below). Then, the \"recursive search for\" is called
and the first base case is immediately solved as our target, 9, is not
equal to 5. Since the target 9 is greater than 5 (element at the middle
index), we search the second half of the list (index 4 to index 5). The
process is repeated and the list is split checking the first element at
position and/or index 4. The target is greater than 7 and the recursive
search is repeated again searching the second half of the list again.
The base case 2 immediately solves that the target 9 is equal to 9 in
the list; which located at the middle index 5.
Now, since the target has been found at index 5, the recursive search
reports back to the second recursive search which in turn reports back
to the first recursive search. Finally, index 5 is reported back to the
user at top level of the \"binary search for\" block.
!This image shows how to trace the binary search when the target is
equal to 9. Step by step the base case and recursive cases are used for
reporting.
In the event the target is not found in a list the binary search reports
-1 (see below). The same recursive search calls are made, but on the
last call the start index is greater than the last index which indicates
the target is not found and the end of the list has occurred.
!This image shows how to trace the binary search when the target is not
found in the sorted list. The solution reports -1 or not found if the
target is not in the
list.
### Koch Curve
In 1904, Helge von Koch discovered the von Koch snowflake curve, \"a
continuous curve important in the study of fractal geometry\" (3). The
Koch curve starts with a single line segment that is 1 unit long. Then,
the segment is replaced with four new segments, each one-third the
length of the original, also called the generator. The process is
repeated forever and creates the Koch curve. The image below shows this
process for stages 0 to 2.
!This shows the generator (stage 0) and iterations of stage 1 and 2 of
the Koch curve.
The length of the Koch curve is infinite. The curve is another
interesting implementation of recursion where it is self-similar; the
curve copies itself over and over.
### Koch Snowflake
Using the same Koch curve generator on all three sides of an equilateral
triangle we see repeated iterations eventually start to look like a
snowflake (please see Koch
Snowflake). The snowflake
has infinite perimeter and finite area.
Examining each stage of the Koch Snowflake a pattern is created for the
number of segments per side, the length, and the total length of the
curve as seen below. The number of segments per side increases to four
times that of the previous stage. The length of each segment is divided
into equal thirds at each stage. Because the original stage is an
equilateral triangle with three sides, the total length of the curve is
three times the number of segments per side multiplied by the length of
a segment, where the segment length is one third raised to the power of
the current stage number.
!The Koch Snowflake pattern can be seen by tracking the different
iterations.
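The pattern in the table can be checked with a short Python calculation
(a sketch; stage 0 is the plain triangle with side length 1):

```python
def koch_snowflake_length(stage):
    """Total perimeter of the Koch snowflake at a given stage, side length 1 at stage 0."""
    segments_per_side = 4 ** stage          # each stage multiplies the segment count by 4
    segment_length = (1 / 3) ** stage       # each stage divides the segment length by 3
    return 3 * segments_per_side * segment_length

for stage in range(5):
    print(stage, koch_snowflake_length(stage))   # grows like 3*(4/3)**stage
```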
### Exploration Questions
By studying the Koch Curve and Snowflake, patterns are established that
show how the same process is repeated to generate each stage. We can use the
recursive process to solve abstractions of the same problem.
- What would the total length of the N-th iteration be? Look at the
patterns made by the numbers both before and after simplifying.
- What do you expect the Koch Curve to look like? In other words, what
would you expect to happen if you repeated this infinitely many
times?
- What is the length of the Koch Curve?
- Can you estimate the area at each stage? What is the area of the
final snowflake?
Reviewing the previous questions we can start to observe the behavior of
the Koch Curve and Koch Snowflake.
1. Based on the patterns established it is clear that the total length
of the N-th iteration would be 3\*(4/3)^n^.
2. As the Koch Curve is infinitely repeated it will start to look more
smooth as the lines will appear closer and closer although never
touching.
3. The length of the Koch Curve is infinity (after applying the
generator infinite number of times).
4. The area of the final snowflake is bounded (finite).
[^1]
[^1]: Niels Fabian Helge von
Koch.
(2014). In Encyclopædia Britannica. Retrieved from
<http://www.britannica.com/EBchecked/topic/958515/Niels-Fabian-Helge-von-Koch>
# Foundations of Computer Science/Higher Order Functions
## Higher Order Functions
Higher order functions offer a more powerful way to generalize
solutions to problems by allowing blocks to take blocks as parameters
and to return a block as a return value. All other functions are called
first order functions. An example higher order function in math is the
derivative function which takes a function as the input and produces
another function (the derivative of the first function) as the output.
In computer science, a map function takes an arbitrary function and a
data set (e.g. a list) and applies the function to each and every data
item in the set. Another example is the reduce (or fold) function, which
takes an input function and a data set and produces the aggregation of all
items in the data set using the function. For instance, if the input
function is addition, the reduce function returns the sum of all
items in the data set as the output. If the input function is
multiplication, the reduce function produces the product of all items in
the data set. Higher order functions also allow us to create
compositions of functions from existing functions on the fly. For
example, given two functions $y=f(x)$ and $y=g(x)$, with $b=g(a)$ and
$c=f(b)$, we can create a function $y=fg(x)=f(g(x))$ so that $c=fg(a)$.
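These ideas exist in most languages; for example, a rough Python sketch
of map, reduce, and composition:

```python
from functools import reduce

numbers = [1, 2, 3, 4]

squares = list(map(lambda x: x * x, numbers))   # map: apply a function to every item
total = reduce(lambda a, b: a + b, numbers)     # reduce with addition -> 10
product = reduce(lambda a, b: a * b, numbers)   # reduce with multiplication -> 24

def compose(f, g):
    """Return a new function fg such that fg(x) = f(g(x))."""
    return lambda x: f(g(x))

add_one = lambda x: x + 1
double = lambda x: 2 * x
print(compose(add_one, double)(5))   # f(g(5)) = (2*5) + 1 = 11
print(squares, total, product)
```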
### Examples
#### map
The following script uses the built in map block/function to apply the
same block/function to each and every element in a list.
!The built in map block (function) is used to apply a block (function) to each and every element in a list.
To apply a different function we simply need to find or implement the
function and use it as the first parameter to the map block. The next
example uses the multiplication function, which takes two parameters.
Snap! is smart enough to detect that and use each element of the list as
both parameters when the function is applied, so the result list
contains the elements of the original list multiplied by themselves.
!The built in map block (function) is used to apply a block (function) to each and every element in a list. Because the function passed in takes two parameters, when the function is applied to an element of the list the same element is used as both parameters.
The map block generalizes the pattern of applying the same function to
multiple data items into a single block. It doesn't simplify programming
overall, because someone has to write the map block itself (check the source
code to see how complicated it is), but it makes programmers happier because
it keeps the thinking part simpler. As programmers we are freed from worrying
about iterating over lists so that we can focus on the function that
needs to be applied to the list.
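The same idea can be written in a text-based language. The following Python sketch is only an illustration (not the Snap! source code); it shows a map function applied to a list, including the case where a two-parameter function receives the same element for both parameters, as in the second Snap! example above.
```python
# Minimal sketch of the map idea (not the Snap! implementation).
def my_map(fn, items):
    """Apply fn to each item and collect the results."""
    return [fn(x) for x in items]

numbers = [1, 2, 3, 4]
print(my_map(lambda x: x + 1, numbers))  # [2, 3, 4, 5]
# Using each element as both parameters of a two-parameter function:
print(my_map(lambda x: x * x, numbers))  # [1, 4, 9, 16]
```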
#### reduce
The following two examples use the built-in reduce function in Snap! to
calculate the sum and the product of a list of numbers.
!This block applies the addition function to each and every item in the
list to calculate the
total.
!This block uses the built-in reduce function in Snap! to calculate the
product of a list of
numbers.
Note that the reduce (combine with) function can take any function with
two input parameters to aggregate a list of values to a single value. By
using higher order functions we can create generalized solutions that
can be customized to solve a larger set of problems.
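As a rough text-based illustration of the same pattern (a sketch that assumes the aggregation starts from the first element of a non-empty list), a reduce function might look like this:
```python
# Minimal sketch of the reduce (combine with) idea.
def my_reduce(fn, items):
    """Aggregate items into a single value using the two-parameter fn."""
    result = items[0]
    for x in items[1:]:
        result = fn(result, x)
    return result

numbers = [1, 2, 3, 4]
print(my_reduce(lambda a, b: a + b, numbers))  # 10, the sum
print(my_reduce(lambda a, b: a * b, numbers))  # 24, the product
```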
#### return blocks as data
The following block demonstrates the use of blocks as data. In this
block two reporter blocks are taken in as parameters, which are then
used to form the report value - a new block. The new block applies the
two input report blocks to some unknown input value. Note that the
\"ring\" (gray frame) around the report value is very important.
Whatever is enclosed in the \"ring\" will be treated as data, not as a
program. The application of the two functions is not evaluated but is
simply returned as data, which is what we want in this higher order
function.
!This block takes two (reporter) blocks as input parameters and reports a new block as the output. The function of the new block is the composition of the two functions represented by the two input blocks - when the new block is called it takes the input to the new block, applies the two functions (specified when the new block is created) to the input, and reports the result.
To use the composed function we can call the compose function with the
two input functions and use the \"call\" block to execute the composed
block against an input value as shown in the following script.
!The compose block is called to form a composed block (function) from the two input blocks. The composed block is then applied to the input value of 3. The result of this block call is 1+2+3=6.
With this \"compose\" block we define new functions by combining
existing functions on the fly - when we need them. This is a powerful
form of generalization that we cannot achieve without using higher order
functions.
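A text-based sketch of the same idea (purely illustrative) returns a new function built from two existing ones; the hypothetical input functions add_one and add_two reproduce the 1+2+3=6 result from the example above.
```python
# Minimal sketch of returning a block (function) as data.
def compose(f, g):
    """Report a new function that applies g first, then f."""
    def composed(x):
        return f(g(x))
    return composed

def add_one(x):        # hypothetical input block
    return x + 1

def add_two(x):        # hypothetical input block
    return x + 2

new_block = compose(add_one, add_two)  # nothing is evaluated yet
print(new_block(3))                    # 3 + 2 + 1 = 6
```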
|
# Foundations of Computer Science/The Internet and the Web
## The Internet and the Web
The Internet and the Web give us the ability to connect to countless
resources and are molding the way our society utilizes technology for
online storage and services. We will use principles previously learned
to examine Internet and Web communication. The principles we will
examine are:
- information can be encoded into messages
- a coordination system is a set of agents interacting with each other
toward a common objective
- messages can hide information
### Computer Networks
A computer network is considered a communication sub-system that
connects a group of computers enabling them to communicate with each
other. When thinking of a computer network you must consider two parts
that make it possible:
**Hardware:**
- network interface card (NIC) - required in order to connect to a
local area network
- cabling or antennas - required to carry signals for transmission
- network switches - used to relay signals
**Software:**
- Programs - used to process information (bits) using algorithms
### Network Standard
Similar to encoding procedures used for bits, the same idea of a
standard must be used for networks. In order for communication to occur
it requires a standard for devices, message format, and procedures of
interactions. These standards provide ordered processes for
communication.
Once we have the standards in place we can examine what actually makes a
network tick. As stated previously, a computer network is made of two
parts: hardware and software. The physical hardware paves the way for
communication to travel, but by itself does not make a network. The software
(programs) is what turns the hardware into a computer network by enabling
software-to-software communication.
The focus of this chapter will be on the following three software standards:
- the Internet protocol suite
- layers of software
- abstraction being used for simplification
### Background Definitions
Knowing the definitions from the links provided will give you a
foundation for the material in this chapter.
### Stack of Protocols
When analyzing the protocols needed to allow communication over a
network, we see that different protocols are layered to create levels of
abstraction. These abstraction layers are used both for the upper and
lower layers (see image below).
!Shows the stack of network agents used to transmit a message from one
computer to
another.
**Message Analogy**
Let\'s say that Computer A wants to send a message to Computer B. Trace
through the steps below to see how a message is sent via the two stacks
of agents.
1. Only A4 and B4 can access the physical mailboxes to send and receive
packages
2. A1 puts the message into packages
3. A2 adds sequence numbers and tracking numbers to packages
4. A3 adds address labels
5. A4 puts the packages in the outbox
6. packages arrive at B4\'s inbox
7. B3 accepts packages addressed to B
8. B2 uses the sequence numbers to put the packages in order and
acknowledges each package to A2 using its tracking number
9. A2 re-sends a package unless acknowledged
10. B1 opens the packages to reconstruct the original message
The network protocols work in the same way with A1 to A4 and B1 to B4
being software. The delivery mechanism used between A4 and B4 usually
consists of metal wires, fiber optic cables, or radio waves in the air.
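As a rough illustration of the analogy (a simplified sketch, not a real protocol implementation), the sender's stack can be modeled as functions that wrap a message layer by layer and the receiver's stack as functions that unwrap and reassemble it:
```python
# Simplified sketch of the layered message analogy (not a real protocol).
def split_into_packets(message, size=4):          # A1: packages
    return [message[i:i + size] for i in range(0, len(message), size)]

def add_sequence_numbers(packets):                # A2: sequence numbers
    return [(seq, data) for seq, data in enumerate(packets)]

def add_address(packets, address):                # A3: address labels
    return [(address, seq, data) for seq, data in packets]

def deliver(packets):                             # A4 -> B4: may reorder
    return list(reversed(packets))

def reassemble(packets, address):                 # B3, B2, B1
    accepted = [(seq, data) for addr, seq, data in packets if addr == address]
    ordered = sorted(accepted)                    # B2 restores the order
    return "".join(data for seq, data in ordered) # B1 rebuilds the message

packets = add_address(add_sequence_numbers(split_into_packets("HELLO COMPUTER B")), "B")
print(reassemble(deliver(packets), "B"))          # HELLO COMPUTER B
```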
**Delivery Mechanism**
Previously, we established how information is transmitted using Computer
A and Computer B. Two delivery mechanisms that are used today for
communication between networks are circuit switching and
packet-switching. A telephone network uses circuit switching: it
requires that a connection be established before communication can occur. For
example, when you call someone, the phone rings until the other person
picks up or voicemail starts; this type of communication is known as
synchronous communication.
The opposite is true for computer networks which use packet-switching.
When using packet-switching, each packet (which is a small package of
information) is individually addressed and delivered separately. The
process mimics how mail packages are delivered via shared media, i.e.
trucks, trains, ships, and airplanes. For instance, when you send a
letter, you do not wait until the recipient is ready. This type of
communication is called asynchronous communication.
### The Internet
We have seen different standards and/or protocols of the Internet. The
following describes the different characteristics of the Internet, which
will be important when distinguishing the Internet from the Web.
- An infrastructure for communication (information highway)
- A global connection of computer networks using the Internet Protocol
(IP)
- Uses layers of communication protocols: IP, TCP, HTTP/FTP/SSH
- Built on open standards: anyone can create a new internet device
- Lack of centralized control (mostly)
- Everyone can use it with simple, commonly available software
## The World Wide Web
The World Wide Web is often confused with the Internet as it is used in
conjunction with the Internet. The web is only one of the services
provided through the Internet. It is important to know the
characteristics of the Web (see below):
- A collection of distributed web pages or documents that can be
fetched using the web protocol (HTTP-Hyper-Text Transfer Protocol)
- A service (application) that uses the Internet as a delivery
mechanism
- Only one of the services that run on the Internet along with other
services: email, file transfer, remote login, and etc.
### The Web
There are two roles that work together to make up the web: Web servers
and Web clients (browsers).
**Web servers**
- Software that listens for web page requests and has access to stored
web pages
- Apache, MS Internet Information Server (IIS)
**Web clients (browsers)**
- Software that fetches/displays documents fetched from web servers
- Firefox, Internet Explorer, Safari, Chrome
### Uniform Resource Locator (URL)
The Uniform Resource Locator (URL) is an identifier for the location of
a page on the web. The system of URLs is hierarchical (see image below).
!An example of the different pieces of a URL.
- **edu**: a URL for a school (not .com or .org)
- **www.sbuniv.edu**: a URL for the Southwest Baptist University (SBU)
website
- **www.sbuniv.edu/COBACS/CIS/index.html**: a URL to a page on SBU\'s
website under the path
## Hyper-Text Markup Language (HTML)
The language used to define web pages is known as HTML. In order to view
an example, open another tab and navigate to the Southwest Baptist
University CIS
Department website.
Once you have the page open, right click on the page and select \"View
Source\", this will allow you to see the HTML code that was used to
create the web page. The web page itself may contain
hypertext (clickable text
that serves as links). A link is just a defined URL that points to
another web page. Web pages and links are what combine to form the Web.
## Finding Information on the Web
It is important to know how to find information on the Web. There are
several ways to do so:
- Use a hierarchical system (directory) to find the URLs of pages that may have the information
- Use our knowledge to guess, e.g. start from apple.com and navigate to the page for the iPhone 5s
- Use a search engine
  - we look for information (wherever it is located), not pages
  - we may find information we did not know existed
## How a Search Engine Works
One of the main sources for locating resources can be found using a
search engine. However, have you ever thought about how they actually
work? There is a series of steps that describe exactly what happens when
a search engine is used:
1. Gather information: crawl the web
2. Keep copies: cache web pages
3. Build an index
4. Understand the query
5. Determine the relevance of each possible result to the query
6. Determine the ranking of the relevant results
7. Present the results
**Measure of Important Pages**
Once a search is performed relevant pages are provided. However, not all
relevant pages displayed are considered important. A web page does not
gain importance until it has been ranked by credible sources. One of
Google\'s innovations is page rank - a measure of the "importance" of a
page that takes into account the external references to it. A page is
considered more important based on the number of important pages that
link to that page. For example, an electronic article from the New York
Times would have a higher level of importance or page rank than a
personal blog due to the number of important pages linking to that online
article.
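The idea can be illustrated with a tiny sketch of the power-iteration method behind page rank (a simplified model with three hypothetical pages and a commonly used damping factor of 0.85; the real algorithm handles many more details, such as pages with no outgoing links):
```python
# Simplified page rank sketch: pages pointing to other pages.
links = {
    "news_article": ["blog"],
    "blog": ["news_article"],
    "portal": ["news_article", "blog"],
}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):  # power iteration until the ranks settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += damping * share
    rank = new_rank

print(rank)  # pages linked to by important pages rank higher
```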
|
# Foundations of Computer Science/Encryption
## Encryption
In order to ensure that secure communication takes place, encryption methods
must be used. Secure communication over the web is important for areas
such as e-commerce. Encryption is used to encode messages, ensuring that no
one but the intended recipient knows the content of the message.
Messages are transferred over the Internet in the form of
packets. Packets are more like postcards than letters: the contents of each
packet are plaintext (exposed for all to see) as the bits are transmitted.
The best way to protect these packets during transmission and after
reception is using encryption techniques. Encryption is simply the
process of converting information (plaintext) into unintelligible text
(ciphertext) to prevent unwanted parties from reading the message. In
order for the recipient to understand the ciphertext they must use a
decryption method. Decryption reverses the process, turning ciphertext back
into plaintext.
The two parts, encryption and decryption, make up what is known as
cryptography. Cryptography (secret writing) is the practice and study of
techniques for secure communication in the presence of third parties. It
is not a new practice; it has been around since roughly 2000 B.C.
### Caesar Cipher
The Caesar cipher is an example of a substitution cipher. This cipher
uses a letter-by-letter translation to encrypt messages. A cipher is
simply a method (algorithm) used to transform a message into an obscured
form and reversing the transformation. An example of this particular
cipher can be seen below where you replace each letter in the top row by
the corresponding letter on the bottom row:
!Caesar Cipher
example.
With the Caesar cipher there are 25 possible variations representing one
for each different amount of shifting. The key to remember about the
encryption and decryption rule is the amount of the shift. If we know
the Caesar cipher is used then we could try all possible 25 shifts of
the alphabet to decrypt the message. However, tools have been created to
encrypt and
decrypt
messages created using this cipher.
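A short sketch makes the shift rule concrete (a minimal example assuming a shift of 3 over the uppercase English alphabet; decrypting simply shifts by the negative amount):
```python
# Minimal Caesar cipher sketch over the uppercase alphabet.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def caesar(text, shift):
    result = ""
    for ch in text:
        if ch in ALPHABET:
            result += ALPHABET[(ALPHABET.index(ch) + shift) % 26]
        else:
            result += ch  # leave spaces and punctuation alone
    return result

ciphertext = caesar("ATTACK AT DAWN", 3)
print(ciphertext)              # DWWDFN DW GDZQ
print(caesar(ciphertext, -3))  # ATTACK AT DAWN
```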
## Substitution Ciphers
Substitution ciphers are ciphers in which one symbol is substituted for
another according to a uniform rule. The example below shows a
substitution table that defines a rule for reordering the letters of the
alphabet. How many possible reorderings could be used with the example
below?
!Substitution cipher
!Number of methods
possible. These types of ciphers appear to be unbreakable, but that is not
true. Frequency analysis is used to decode substitution ciphers. This
technique for breaking general substitution ciphers uses the frequencies
with which letters appear in a language. The image below shows the original
message written in symbols. We will use frequency analysis to decode the
cipher.
!Original encoded
ciphertext.
After we have replaced the most used characters with E and T we can
begin using other common symbols and sentence structure to fill in the
gaps.
!Process of using conjectural
decoding.
Finally, after replacing symbols with frequently used letters we see the
entire message displayed below.
!Complete ciphertext
message.
### Vigenère Cipher
The Vigenère cipher is similar to the Caesar cipher, but it uses
multiple Caesar ciphers to encode a message. For a long time the
Vigenère cipher was considered unbreakable, until the 1800s when Charles
Babbage discovered a way to break it.
!This table shows the key to the cipher thomasbbryan. This cipher was
used by an attorney named Thomas B. Bryan in 1894 to communicate with
his
client.
!Key description
The substitution table used above encrypts and decrypts messages. We use
the second column, \"thomasbbryan\", to uniquely identify the table.
This key is used to specify which cipher is used.
The Vigenère cipher was considered unbreakable until the method to decode
it was discovered in 1863. Although the cipher is no longer secure,
it was at the time a great enhancement to secure communications.
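The following sketch is a simplified illustration of the idea: a repeating key applies a different Caesar shift to each letter. It uses the common convention that key letter 'A' means a shift of 0 (other conventions, such as the one described for the Vernam cipher below, shift by 1 instead).
```python
# Simplified Vigenère cipher sketch: a repeating key of Caesar shifts.
# Convention here: key letter 'A' = shift 0, 'B' = shift 1, and so on.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def vigenere(text, key, decrypt=False):
    result = ""
    for i, ch in enumerate(text):
        shift = ALPHABET.index(key[i % len(key)])   # key repeats
        if decrypt:
            shift = -shift
        result += ALPHABET[(ALPHABET.index(ch) + shift) % 26]
    return result

ciphertext = vigenere("ATTACKATDAWN", "LEMON")
print(ciphertext)                          # LXFOPVEFRNHR
print(vigenere(ciphertext, "LEMON", True)) # ATTACKATDAWN
```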
### Vernam Cipher
The weakness discovered in the Vigenère cipher is the repeated use of
the same key. To combat this problem the Vernam cipher was
created. The key is as long as the plaintext so that no repetition is
needed. For example, to encrypt a message of length 100 with the Vernam
cipher, we would use 100 Caesar ciphers, one for each character (a table
extended to 100 rows). Used this way, the key is a one-time pad. The
Vernam cipher was used widely during World War II and the Cold War.
In principle, the one-time pad is as good as it gets when it comes to
cryptography, and this is mathematically provable. The Vernam cipher
operates in a similar fashion to the Caesar cipher. Whereas in the Caesar
cipher a single number is used as the shift, the Vernam cipher
uses many different shift ciphers, a unique one for each letter of the key.
Each character is shifted according to the value of the corresponding key
letter in the alphabet; for example, if the key letter is 'A' that leads
to a shift of 1.
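A minimal sketch of this idea (following the convention just described, where key letter 'A' means a shift of 1, and assuming a random key exactly as long as the message):
```python
# Minimal one-time pad (Vernam) sketch with letter-valued shifts.
# Convention from the text above: key letter 'A' = shift 1, 'B' = 2, ...
import random
import string

ALPHABET = string.ascii_uppercase

def shift_text(text, key, direction=1):
    result = ""
    for ch, k in zip(text, key):
        shift = (ALPHABET.index(k) + 1) * direction
        result += ALPHABET[(ALPHABET.index(ch) + shift) % 26]
    return result

message = "MEETATNOON"
pad = "".join(random.choice(ALPHABET) for _ in message)  # key as long as the message
ciphertext = shift_text(message, pad, 1)
print(pad, ciphertext)
print(shift_text(ciphertext, pad, -1))  # MEETATNOON
```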
When used correctly, the one-time pad is unbreakable, but it is
difficult to transmit the one-time pad between the parties without
interception. Another challenge is that the one-time pad is
impractical: if there is a secure way to transmit the pad, the sender
might as well use it to send the message itself, since the pad is as long
as the message.
Today, we use more innovative practices to secure communication.
Sophisticated ciphers (programs) use shorter keys; these keys are
sequences of bits that both parties agree to keep secret. This
process works because computers divide ASCII-coded plaintext messages
into blocks. The bits that make up each block are transformed according
to a specific method that depends on the secret key.
There are no known shortcuts for breaking well-designed secret key ciphers.
Even a brute-force attack is difficult because it requires trying all
possible keys, and the number of keys grows exponentially with the size of
the key. Increasing the key length by only one bit doubles the work required
to break the cipher, so longer keys make the work outgrow the available
computing power. Because of this, breaking these ciphers is possible in
principle but computationally infeasible, taking hundreds of years or more.
One challenge that comes with secret key encryption is that the
number of keys required increases as the number of network members
increases: for each pair of members a new shared secret key must be
created, and creating unique keys becomes more complicated as more
combinations are needed. Another challenge is securely establishing a
secret key between two parties when no secure channel exists
between them.
### Public Key Encryption
In 1976, Whitfield Diffie and Martin Hellman proposed the idea of
public-key encryption. The idea is to use two mathematically related keys: a
public key and a private key. The keys are paired, but it is computationally
infeasible to derive one from the other. A message encrypted with the
private key can only be decrypted with the matching public key, and vice versa.
A user picks a secret key, encrypts it with the
recipient's public key, and sends the ciphertext to the recipient. The
recipient then uses their private key to decrypt the ciphertext and recover
the secret key. The private keys are kept secret and never sent to the
other user. The two parties can then communicate using the secret key (also
known as a session key). The confidentiality of the message is ensured
because no one except the recipient can decrypt what the initiator sent.
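As an illustration of the public/private key idea, here is a toy sketch using the RSA scheme (RSA is one concrete public-key algorithm and is not described in the text above; the textbook-sized primes used here are far too small to be secure):
```python
# Toy RSA sketch with tiny numbers (insecure, for illustration only).
p, q = 61, 53                 # two small primes
n = p * q                     # 3233, part of both keys
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # 2753, private exponent (modular inverse)

def encrypt(m, public_key):
    e, n = public_key
    return pow(m, e, n)

def decrypt(c, private_key):
    d, n = private_key
    return pow(c, d, n)

session_key = 42                               # the secret to share
ciphertext = encrypt(session_key, (e, n))      # anyone can do this
print(ciphertext)
print(decrypt(ciphertext, (d, n)))             # only the key owner recovers 42
```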
The way to ensure the message is from the sender is to use digital
signature schemes. Signatures should be easy for a user to produce, but
difficult for anyone else to forge. Digital signatures can also be tied
to the content of the message being signed. Authenticity of the message
is verified because a signature created with the sender's private key can
only be validated with the sender's matching public key.
|
# Foundations of Computer Science/Simulation
## Simulation
Simulation can be a very powerful way to represent real-world systems,
scenarios, and experiments. Simulation is the recreation of a real-world
system in a prepared and controlled environment. As we study different
objects and environments we see the complexity of measuring, analyzing,
and emulating events such as those in nature. Although it is impossible to
achieve 100% accuracy and detail, simulation and modeling are used to create
an environment that mimics real-world systems as closely as possible. In
order to achieve a higher level of accuracy and detail we focus on the
most important aspects of the system we hope to simulate. The higher the
granularity in detail the more likely we are to accurately predict what
will happen in a real world environment.
The three motivating factors behind using simulation instead of real
world experimentation are:
- **Control** - Gives us the ability to explore problems that were
utterly out of reach due to our lack of control over a real world
situation. For example, if we are attempting to simulate a storm or
hurricane these are events we have no control over, but studying the
paths could be beneficial in predicting future effects and climate
changes.
- **Cost** - Conducting experiments in real system can be costly both
in regards to time and money. For example, instead of automotive
manufacturers crashing several cars for testing they can use
simulations to mimic different car crashes, angles, and scenarios
without spending time and money on an actual car.
- **Safety** - Certain experiments can be dangerous or harmful, and
simulations allow them to be studied safely. Scientists are able to simulate events such as virus
outbreaks, aircraft engine failure, and even testing nuclear bomb
material (more info on atomic bomb
simulations).
## Modeling
Modeling is the process of describing how the components of a simulation
look and behave. During this process we model the behavior and
interactions of all components. Although we are using loose
approximations these simulated versions can be a surprisingly accurate
recreation of the real world system.
|
# Foundations of Computer Science/Artificial Intelligence
## Artificial Intelligence
### What is A.I.?
Artificial Intelligence (AI) is the idea of building a system that can
mimic human intelligence. How we determine intelligence is based on
abilities such as planning, learning, natural language processing, motion and
manipulation, perception, and creativity. These various areas are used
in the process of engineering and developing Artificial Intelligence.
### The Turing Test & A.I.
One concept that is important to note is that computers only perform
the algorithms and programs they are given; they cannot inherently create
algorithms on their own. We have seen in previous chapters that algorithms
can morph or change, but not without being given an algorithm to do so.
Previous chapters have broached the topic of the Turing Test. The Turing
Test is used as a theoretical standard to determine whether a human
judge can distinguish via a conversation with one machine and one human
which is a human and which is a machine. If a machine can trick the
human judge into thinking it is human then it passes the Turing Test.
Although there are several innovative developments in the field of AI,
there are still areas being tested and improved. One such
example can be found with CAPTCHAs (Completely Automated Public Turing
test to tell Computers and Humans apart). The CAPTCHAs are used to
distinguish between human and computer. Even today, computers are unable
to identify images such as those generated by CAPTCHAs as well as
humans. Those developing CAPTCHAs are using these as a tool to teach
computers to recognize and learn words and/or images that humans are
able to identify. CAPTCHA is considered a reverse Turing
test because a computer is
determining whether a human user is indeed human.
### Intelligent Agent Approach
The intelligent agent approach is often introduced through the story of
\"artificial flight\": the Wright brothers and others stopped trying to
imitate birds and instead embraced the study of aerodynamics. The goal is not
to duplicate, but to use what is known about flying and manipulate that
knowledge.
A major part of this approach is the rational agent, an agent that acts
to achieve the best outcome. Therein lies the connection to rationality as
a test of intelligence: basically, can this machine or computer mimic the
rational behavior of a human?
### History of A.I.
The concept of AI began around 1943 and became a field of study in 1956
at Dartmouth. AI is not limited to the Computer Sciences disciplines,
but can be seen in countless disciplines such as Mathematics,
Philosophy, Economics, Neuroscience, psychology and various other areas.
The areas of interest in the Computer Science and Engineering field are
focused on how we can build more efficient computers. Great advancements
have been made in the area of hardware and software.
### A.I. Knowledge-based Expert System
An A.I. system oftentimes will use a rule-based system to capture
knowledge in the form of if-then statements. Another way to think about
these rule-based systems is as decision trees. Decision trees use these
preset rules to determine the decision path to follow based on input
provided. An example of a decision tree or rule-based system is a single
player game. In the game the player imagines an animal (real or
imaginary) and answers a series of questions, which are designed for the
computer to guess what the animal is assuming the player always answers
the questions truthfully.
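A minimal sketch of such a rule-based guessing game, with a tiny hypothetical decision tree hard-coded as nested if-then rules, might look like this:
```python
# Minimal rule-based (decision tree) animal guessing sketch.
def ask(question):
    return input(question + " (y/n) ").strip().lower().startswith("y")

def guess_animal():
    # Each if-then rule moves down one branch of the decision tree.
    if ask("Does it live in water?"):
        if ask("Does it have fins?"):
            return "a fish"
        return "a frog"
    if ask("Can it fly?"):
        return "a bird"
    return "a dog"

if __name__ == "__main__":
    print("I guess it is", guess_animal())
```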
### Machine Learning
There are two well-known definitions of machine learning, one informal and
one formal. The informal definition describes giving computers the ability to
learn without being explicitly programmed (Arthur Samuel, 1959). The formal
definition describes a computer program that learns from experience with
respect to some task and improves its performance on that task with
experience (Tom Mitchell, 1998).
### Supervised Learning
Supervised learning is based on giving the computer the correct answers and
having it learn to map inputs to outputs. Examples of supervised
machine learning are:
- United States Postal Service using computers to read zip codes on
envelopes and automatically sort mail (i.e. handwritten zip codes)
- Spam filters - software is trained to learn and distinguish between
spam and non-spam messages (i.e. email filters)
- Facial recognition- used by cameras to focus and via photo editing
software to tag persons (i.e. Facebook)
### Unsupervised Learning
Unsupervised learning is simply the reverse of supervised learning where
the correct answers are unknown. The goal or objective of unsupervised
learning is to discover structure in data, which is also known as data
mining. The computer looks at data to find trends to make and/or assist
in decision making. Below are examples of unsupervised learning:
- Clustering algorithm - Used to find patterns in datasets and then
group that data into different coherent clusters.
- Market segmentation - targeting customers based off regions, likes,
dislikes, when the consumer makes purchases, etc. This is considered
targeted marketing.
- Recommendation systems - systems or software that make
recommendations to the consumer as to what they may like based off
their preferences (i.e. Netflix, Hulu, etc.).
- Statistical Natural Language Processing - used to guess the next
word or auto-complete words based on correcting/guessing techniques,
to suggest news stories, or to translate texts
### Genetic Programming
Genetic Programming is an idea that
uses evolutionary processes to improve algorithms.
### Future of A.I.
There are many challenges in mimicking human intelligence. Humans
acquire common sense that is intuitive but hard to capture with explicit
rational rules, e.g. that the color of a blue car is blue. Deep
learning is a branch of A.I. that aims to
create algorithms that can acquire such intuition.
|
# Foundations of Computer Science/Limits of Computing
## Limits of Computing
We have studied some big ideas of computing, which allow us to perform
amazing tasks through information processes. You might have gotten the
impression that if we can quantify information and design an algorithm,
we can solve any problem using computing. In this chapter we will study
the theory about the limits of computing. There are theoretical and
practical limits on what computing can do for us.
### Turing Machine Revisited
When we talked about the formal definition of \"algorithm\", we
introduced the Turing
machine, which is a
mathematical model for computation. With this model we can study the
nature of computing, including what it can possibly do. Alan Turing
contributed greatly to computability theory with his model machine.
He proved that the Turing machine is as powerful as any computer we can ever
build (Turing completeness). If we
can find a problem the Turing machine cannot solve, we have proved the
solution is not computable, i.e., the problem is not solvable through
computing on any computer.
### Halting Problem
In 1936 Alan Turing proved that a general algorithm to solve the
halting problem for all
possible program-input pairs cannot exist, i.e., the halting problem is
unsolvable using computing. More specifically, the halting problem is
undecidable because it is the simplest type of question that answers a
yes or no (decision) question: whether a program will eventually halt
(stop) given an input. Obviously, it is infeasible to actually run the
program with the input to see whether it will halt because it may take
forever if there is an infinite loop in the program. The idea is to
analyze a program with given input to determine whether it will halt or
not.
Alan Turing proved the halting problem is undecidable by a proof
technique called proof by
contradiction. It
would be very helpful to review some of the sample
proofs on
Wikipedia. Another classic example is the proof of Euclid\'s
theorem
in number theory, which asserts that there are infinitely many prime
numbers. All proofs by contradiction start with the assumption that the
proposition we are trying to prove is false, follow a logical sequence
of valid steps, and arrive at a conclusion that is clearly false or
contradictory. Since valid steps cannot produce a false conclusion from a
true assumption, the assumption itself must be false (a statement is either
true or false, never both).
If we assume the halting problem is decidable/solvable we should be able
to design an algorithm and implement it as a program that solves the
problem for us. The program should take a program and the input to that
program as input parameters and return the answer - whether the program
will halt or not on the input. Such a program may sound strange because
it takes a program as input, but we have seen such programs, e.g., the
higher order function blocks in Snap! take a block (program) as input,
and an interpreter takes the source code of a program as data and runs the
program. Programs are data and there is no intrinsic difference between the
two. The following proof shows that a program that answers the halting
question cannot exist.
1. Assume such a program A exists. A(P, D) -\> whether program P halt
on input data D.
2. We can create another program B that takes a program as input. Inside
program B we call program A with the input program used as both the
program and its input data, to determine whether that program will halt
on itself. If the answer is no (it will not halt), B halts (returns); if
the answer is yes (it will halt), B loops forever.
```
B(P):
    if A(P, P) = yes
        infinite loop
    else
        return
```
3. What happens if we feed program B to itself as the input? Or, more
simply, would program B halt on itself? There are two possible
outcomes: program B halts on itself, or program B runs forever when
given itself as input. The actual outcome depends on the answer program A
gives inside program B. If program B halts on itself, then A(B, B)
answers yes, and by the design of program B it should loop forever. On
the other hand, if program B does not halt on itself, then A(B, B)
answers no, and by its design program B should return right away. Either
way it is a contradiction.
4. So far, we have made an assumption, followed a set of logically
sound steps, and arrived at a contradiction. What went wrong?
The contradiction cannot be true because it is logically
impossible. The steps are logically sound, so the only part that
can be wrong is the assumption. Therefore, the assumption cannot
be true, i.e., the halting problem is not decidable.
Here is a YouTube video illustrating the proof:
<https://www.youtube.com/watch?v=92WHN-pAFCs>
### Intractable Problems
The Halting problem is hard because it is not solvable algorithmically
even in principle. There are other hard problems that are solvable in
principle but in practice they are close to being impossible to solve.
As you can see, we can categorize problems by the performance of the
best-known algorithms. If a problem can be solved using a fast
algorithm, the problem is easy because we can use a computer to solve it
fast. On the contrary if the best-known algorithm we know takes a long
time to solve a problem, it is hard because computers cannot solve it
fast.
Using algorithm complexity
theory
we can put each algorithm into a particular category according to the
algorithm\'s complexity. If the big-O notation of a category contains
only polynomial terms, the problems solvable using algorithms in this
category are called P problems (Polynomial time solutions exist), such
as $O(1)$, $O(\log_2 N)$, $O(N)$, and $O(N^2)$. The P problems are the easy
problems to computers. Among the problems without a polynomial time
solution there are problems that if we can guess the solution it can be
verified in polynomial time. For example, the integer
factorization
(or prime decomposition) problem has no known polynomial time solution
but given an answer we can verify it very quickly by simply multiplying
the factors and comparing the result to the integer. These types of
problems are called NP (Non-deterministic Polynomial) problems.
Collectively we call problems that take too long to solve intractable
problems, which include problems whose best algorithms run in exponential
time ($O(2^N)$) and those with polynomial time solutions whose exponent is
too large, e.g. $O(N^{15})$.
If a problem\'s best algorithmic solution is in the $O(2^N)$, when
$N=100$ and a computer does $10^{12}$ operations per second it would
take $4 \times 10^{10}$ years (the age of the universe) to solve the
problem on the computer.
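This estimate is easy to check with a rough back-of-the-envelope calculation (ignoring constant factors):
```python
# Rough check of the 2^N running-time estimate for N = 100.
operations = 2 ** 100                 # about 1.27e30 operations
ops_per_second = 10 ** 12             # a fast computer
seconds_per_year = 3.15 * 10 ** 7
years = operations / ops_per_second / seconds_per_year
print(f"{years:.2e} years")           # roughly 4e10 years
```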
### P vs. NP
Obviously P is a subset of NP because NP is defined only by polynomial
answer verification time and being able to solve a problem in polynomial
time (P) certainly qualifies for that. Whether P is a proper subset of
NP or equal to it, in other words whether $P=NP$, remains one of the open
questions in computer science. Nobody knows the answer. You can win a million
dollars if you can solve it as one of the Millennium Prize
Problems.
To attack this P vs. NP problem, theoretical computer scientists have
defined another category called
NP-complete problems. The
relationships among the three categories are illustrated in the
following figure.
!Diagram of complexity classes provided that P ≠ NP. The existence of
problems within NP but outside both P and NP-complete, under that
assumption, was established by Ladner\'s
theorem.\[1\]
All problems in this category are NP problems sharing one special
property - ALL NP problems can be translated/reduced to
each of the NP-complete problems in polynomial time. Because of the
nature of NP-completeness, if we can solve ANY single problem in this
category in polynomial time, we have proved all NP problems are solvable in polynomial
time, i.e., $P=NP$. We can take any NP problem, translate it to the
solved NP-complete problem, and solve the problem in polynomial time.
The overall time is still polynomial because polynomial + polynomial is
polynomial. Thousands of NP-complete problems
have been discovered, but none of them has been solved in polynomial time.
NP-complete problems are, in a sense, the most difficult known problems in NP.
Most computer scientists believe $P \ne NP$ because of the implications
otherwise. "Creative leaps" would disappear because solving a
problem would be as easy as recognizing the right answer. Most
encryption algorithms are computationally secure because breaking them
is an NP problem - there is no known efficient polynomial time solution.
If $P=NP$ then all such encryption could be broken.
There are other unsolved problems in computer science. You can find a
list of such problems at
<https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_computer_science>
|
# Foundations of Computer Science/Computing Machinery
## Computing Machinery
We have studied some fundamental principles of computing and seen the
power of computing demonstrated in powerful technologies that operate on
these principles. At the beginning we imagined that computing can be
done purely mechanically and blindly through symbol manipulation if we
can build a simple device that is capable of some simple tasks. In this
lesson we will study the history of the development of
computers and the principles
on which all modern computer hardware operate. We will see on the
physical layer a computer is nothing but a simple device that follows
simple rules to manipulate two symbols.
### Computer Hardware
Computer hardware that can carry out computation automatically has not been
around for very long. Here is a short history from early
computers
to the modern computers we have today.
#### Mechanical Computers
Charles Babbage invented the concept of a programmable computer and
designed the first mechanical computer in the early 19th century. His
difference engine and
analytical engine are
mechanical devices designed to tabulate polynomial functions. The input
to the machines is programs and data on punched cards, and the output
numbers can be punched onto cards or directed to a printer, a curve
plotter, or a bell. The machines use ordinary base-10 fixed-point
arithmetic. Charles Babbage\'s engine is the first design for a
general-purpose computer that is
Turing-complete.
#### Analog Computers
In the early 20th century mechanical analog computers were designed to
perform increasingly sophisticated computing, e.g. solving differential
equations by integration. Analog
computers use
continuously changeable aspects of physical phenomena such as
electrical, mechanical, or hydraulic quantities to model/quantify
computation; they lack the accuracy of modern digital computers.
#### Digital Computers
The world\'s first fully automatic digital computer is the
electromechanical programmable computer Z3, made by Konrad Zuse in
1941. Z3 uses electric switches that drive mechanical relays to perform
computation. It replaces the decimal system with the binary system and
pioneered the use of floating point numbers. Program and data are stored
on punched film. The distinction between a digital device and an analog
device is whether the representations of values are discrete or
continuous. For example, black and white are discrete values, but there
are an infinite number of shades of gray between them (a grayscale), which
is continuous.
#### Electronic Digital Computers
The world\'s first electronic digital programmable computer is Colossus,
built with a large number of valves (vacuum tubes) in 1943. The design
was all electronic and was used to break the German Enigma code.
#### Transistor Computers
From 1955 vacuum tubes were replaced by transistors in computer designs,
resulting in smaller, more reliable, and more power-efficient machines,
giving rise to the \"second generation\" of computers.
#### Integrated Circuit Computers
The invention of the integrated circuit in 1958 ushered in a new era of
computing machinery - micro-computers. Microprocessors built from integrated
circuits are used to build the common computing devices you see today:
desktop computers, laptop computers, phones, and even greeting cards.
### Principles of Digital Computing
The mathematical foundation of digital computing is Boolean logic
invented by George Boole.
Claude Shannon proved in
the 1930s that electronic circuits can compute in binary using Boolean
logic, which becomes the fundamental principle/idea behind all modern
computing devices.
#### Boolean Algebra
Boolean algebra has three operations AND, OR, and NOT on two values:
true and false. The rules for evaluating the three operations are shown
in the figure.
!The truth table showing the rules for the three basic Boolean
operations.
A Boolean operation operates on Boolean values and always results in a
Boolean value. For the AND operation the result is true only when both
operands are true. The OR operation, on the other hand, results in
a false value only when both operands are false. The NOT operation
takes one operand and simply negates it. We will see that if we can build
electronic circuits that implement these three operations, we can build
circuits that perform all kinds of arithmetic and logic functions.
The three boolean operations can be implemented physically using
transistors. A transistor is fundamentally a tiny switch as shown in the
figure.
!A transistor is fundamentally a tiny switch with three pins. When a
logical 1 is applied to the control pin the switch is closed connecting
in to
out.
When a high voltage (logical 1) is applied to the control pin the switch
is closed, connecting the in pin directly to the out pin. A transistor
operates on two voltages: high and low, which can be used to represent
two different logical values: true and false or two binary values: 1 and
0. We will use a high voltage to physically represent a logical 1 and a
low voltage a logical 0.
#### Transistors and Logic Gates
Transistors are simple, tiny devices, but they are the fundamental
building blocks of electronic circuits. For example, we can build a device
called a NOT gate using a single transistor, as shown in the figure.
!A NOT gate constructed using a single
transistor.
If we treat what's inside the red box as a unit, it behaves like a
negator, which is known as a NOT gate in digital logic design. As shown
in the truth table for this device (next to the figure), when the input
is logical 1 the switch is closed, connecting the output to ground and
dragging the output voltage to a low value signifying a logical 0. On the
other hand, when the input is logical 0 the switch stays open, which
results in a high voltage on the output line because it is connected to
the power supply through a resistor, and without a current the resistor
does not cause any voltage drop. Once this device is built we can use it as a
building block to construct more complicated circuits. We will use the
following symbol to represent a NOT gate.
!NOT gate
With transistors and the NOT gate we can build a device that performs
the AND operation.
!An AND gate built using two transistors and a NOT
gate.
As shown in the figure, the device performs exactly an AND operation. The
output is logical 1 (high voltage) only when both inputs are logical 1,
which causes both switches to close, dragging the signal before the NOT
gate to low. The NOT gate then negates that signal to a high voltage, or
logical 1.
Similarly we can build a device for performing the OR operation.
!An OR gate built using two transistors and a NOT
gate.
As shown in the figure, this kind of parallel structure guarantees that
the output is a high voltage as long as either transistor is closed. In other
words, the relationship between the two inputs and the output is a logical
OR function.
#### Gates to Circuits
With the three basic gates (AND, OR, NOT) we can build any combinational
logic circuit. A circuit consists of input wires, gates connected by
wires, and output wires. Once a circuit is designed it can be viewed as
a black box that implements some logic mapping input to output. Here is
the standard circuit construction algorithm:
1. build a truth table from the desired logic
2. build a logic expression in sum-of-products form
3. convert the expression to circuit design using gates
We want to construct a circuit that tests the equality of two bits. The
two inputs are the two bits represented physically by a high voltage
(logical 1) or a low voltage (logical 0). According to the desired logic
of the circuit we can draw the following truth table:
!This logic tests whether two bits are the
same/equal.
The first two columns enumerate all possible value combinations of the
two input lines. The output is logical 1 (true) only when the two inputs
are the same. Based on the truth table, we can derive the following
logic expression (sum-of-products form):
`(a AND b) OR ((NOT a) AND (NOT b))`
To derive the sum-of-products form we check lines in the truth table
with a logical 1 for the output. We know the input combinations shown in
these lines are supposed to cause the output to become logical 1. We can
represent each of these lines using a logic expression, for instance, (a
AND b) represents the last line because when both a and b are logical 1
the expression evaluates to a logical 1 according to the definition of
the AND operation. Similarly ((NOT a) AND (NOT b)) represents the first
line. If we combine the two cases, we can represent the desired logic
using a single expression: (a AND b) OR ((NOT a) AND (NOT b)). If we
plug in the inputs for the cases in the truth table, this expression
should evaluate to the same desired output values for the corresponding
cases. Because we know how to build the devices (gates) that implement
the AND, OR, and NOT operations, we can build a device that compares whether
two bits are equal. This device will be able to perform this kind of
operation purely mechanically (blindly) because it doesn't know the
meaning of the operation.
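We can check the sum-of-products expression against the truth table with a few lines of code (a simulation of the logic only, not of the transistors):
```python
# Check the equality circuit's sum-of-products expression on all inputs.
def equal_bit(a, b):
    # (a AND b) OR ((NOT a) AND (NOT b))
    return (a and b) or ((not a) and (not b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(bool(equal_bit(a, b))))  # 1 only when a == b
```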
Using the same methodology we can gradually build more and more
complicated circuits. For example we could build a device that can add
two binary digits: a one-bit adder. It is just a matter of figuring out
the desired logic and constructing the device using the building blocks we
already know how to make.
!An adder circuit that adds two binary
bits.{width="400"}
Once we get the sum-of-products form of the logic expression from the
truth table, it is straightforward to construct the circuit
because all we need are the three types of logic gates and wire
connections. The following figure shows the design of a one-bit adder
(with carry-in) circuit using the three basic logic gates.
!The circuit for producing the sum bit of a one bit
adder.{width="400"}
We can construct multi-bit adders by connecting multiple one-bit adders.
The following figure shows that a two-bit adder can be formed by
connecting the carry-out of the first one-bit adder to the carry-in of
the second one-bit adder.
!A 2-bit adder circuit constructed from two 1-bit
adders.{width="400"}
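The same logic-level check works for the adder. The sketch below uses one standard sum-of-products construction of a one-bit full adder expressed with only AND, OR, and NOT; it is an illustration of the logic, not the exact gate layout in the figures.
```python
# One-bit full adder in sum-of-products form using only AND, OR, NOT.
def full_adder(a, b, cin):
    s = ((a and not b and not cin) or (not a and b and not cin) or
         (not a and not b and cin) or (a and b and cin))
    cout = (a and b) or (a and cin) or (b and cin)
    return int(bool(s)), int(bool(cout))

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))  # (sum, carry-out)
```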
|
# Foundations of Computer Science/Parallel Processing
Computing is fundamentally about information processes. On a digital
computer such processes are carried out via symbol manipulations in
binary logic. With the advancement in semiconducting technology we have
been able to keep making computers run faster---manipulate bits at a
higher speed---by cramming more transistors into computer chips. This is
known as Moore's law, which originated around the 1970s. The trend of
increase has slowed down and will eventually flatten due to limits in
physics, as predicted by some physicists, who have also predicted potential
new technologies that may replace semiconductors (silicon) in computer
hardware manufacturing.
In the meantime, hardware companies have tweaked their technologies to
maintain the growth in hardware capacity. Multicore technology replaces
one fast CPU (Central Processing
Unit---the brain
of a computer) with many slower ones (called cores) to avoid overheating
the chip. Even though each core is slower, we get more of them and can
get more done if we arrange the work properly. For instance, a
strong worker can lift 100 bricks a minute while a normal worker can only
lift 34 bricks; three normal workers can still outperform one strong worker
even though each is much slower individually. This is the idea of
parallel processing.
Traditionally computer programs have been written to describe sequential
processes, which means the steps can only be carried out one at a time
and one after another in a sequence. This type of program works fine on
a computer with a single processor because the computer can only perform one
symbol manipulation at a time anyway. In fact we have been reaping the
benefit of Moore's law: every two years computer hardware doubled its speed,
making our programs run twice as fast without us doing anything. This
trend has stopped. Each individual processor (core) is not getting
faster, but we have more of them in a computer. As a result our existing
sequential programs will run slower even though the hardware's capacity
has become larger. Before the next generation of computers is invented,
we can use parallel computing/processing to solve problems faster.
The idea of parallel processing is not new. For example, a car assembly
line allows multiple cars to be built at the same time. Even though
different parts of the car are being assembled at a given time this
assembly line keeps all the workers busy increasing the throughput
(number of cars built per unit time) of the whole system. We can make
the workers work faster to further increase the throughput or we could
add another assembly and hire more workers. This is one form of parallel
processing - pipelining. Another form of parallelism divides the whole
computing task into parts that can be computed simultaneously and run
them physically on different CPU (computers). This is similar to putting
a jigsaw puzzle together with friends. As you can imagine having some
extra help will definitely help solve the puzzle faster, but do it mean
the more the better. The answer is no. As the number of helpers increase
the amount of coordination and communication increases faster. When you
have too many people they may start stepping on each other\'s toes and
competing with each other for resources (space and puzzle pieces). This
is known as the overhead of parallel processing, which causes the
diminishing return on investment. We can see this pattern clearly when
we measure the improvement in performance as a function of the workers
involved.
In the context of parallel processing/computing we use a metric called
speedup to measure the
improvement in performance. The achieved speedup equals the
solution/execution time of a program with out parallel processing
divided by the execution time of the same task with parallel processing.
$$S = \frac{T_{old}}{T_{new}}$$ **where:**
- $S \$ is the speedup.
- $T_{old} \$ is the old execution time without the parallel
processing.
- $T_{new} \$ is the new execution time with the parallel processing.
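For example (with hypothetical timings), if a task takes 60 seconds without parallel processing and 20 seconds with it, the speedup is 3:
```python
# Speedup = old execution time / new execution time.
t_old = 60.0   # seconds without parallel processing (hypothetical)
t_new = 20.0   # seconds with parallel processing (hypothetical)
speedup = t_old / t_new
print(speedup)  # 3.0, a three-fold speedup
```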
If parallel processing makes a program run twice as fast, the speedup is
two (a.k.a. a two-fold speedup). Theoretically, as we double the number of
workers or resources we can expect a two-fold speedup. Practically it
is hard to achieve this optimal speedup because some tasks are not
parallelizable. For example, you cannot usually lay the carpet
before the floor of a house is constructed, and you cannot always add more
painters to get the painting job done faster. Computational tasks often
have similar dependency and resource constraints that keep us from fully
utilizing the parallel processing systems (e.g. multi-core computers) we
have.
Exercise:
With a washing machine and a dryer you can work on one load of laundry
at a time: you wash it first and then put it into the dryer. Assume the
whole task takes an hour. This works perfectly when you have only one
load to do, and there is nothing you can do to make it go faster. What if
you have many loads of laundry to do? You can get at least one load done
every hour. Can you \"speed it up\"? If the number of loads can be
arbitrarily large, what is the shortest average per-load laundry time you can
achieve?
|
# C Programming/Why learn C?
C is the most commonly
used programming language for writing operating
systems. The first operating
system written in C was Unix. Later
operating systems like GNU/Linux were all
written in C. Not only is C the language of operating systems, it is the
precursor and inspiration for almost all of the most popular high-level
languages available today. In fact, Perl,
PHP,
Python and Ruby are all written
in C.
By way of analogy, let\'s say that you were going to be learning
Spanish, Italian, French, or Romanian. Do you think knowing Latin would
be helpful? Just as Latin was the basis of all of those languages,
knowing C will enable you to understand and appreciate an entire family
of programming languages built upon the traditions of C. Knowledge of C
enables freedom.
### Why C and not assembly?
The biggest reason to learn C over
assembly is because it\'s much easier
and faster to write code in C than in assembly for a given programming
task. With C, you will write far fewer lines of code, complete the job
much quicker, and with far less mental effort than if you wrote it in
assembly. And with today\'s modern compilers, an executable file
compiled from C source code will typically run faster than one written
\"by hand\" using assembly. Only in rare edge cases, and only if you
really know what you are doing, can assembly offer important speed
advantages over C code compiled with a decent compiler.
And with C, you do not have to sacrifice a lot of low level control over
how your code is executed. A typical C statement translates into just a
few assembly instructions. But C also provides you with a large software
library to help you execute low-level tasks that you\'d rather not be
bothered programming.
Another huge advantage of C is portability. Different processors have
different instruction sets. Having to rewrite and maintain assembly code
for each computer architecture you wish to execute your code on is an
onerous task. And so one of the main strengths of C is that it combines
universality and portability across various computer architectures while
still giving you the same kind of low level hardware control you get
with assembly. This means you can write your C source code once and
easily compile it into binaries for use on a wide variety of machines.
For example, C programs can be compiled and run on the HP 50g calculator
(ARM processor), the TI-89 calculator
(68000 processor), Palm OS Cobalt
smartphones (ARM processor), the original iMac
(PowerPC), the Arduino (Atmel
AVR), and the Intel iMac
(Intel Core 2 Duo). Each of these devices has its
own assembly that is completely incompatible with the assembly of any
other. C makes it possible to run your code on these machines with much
less effort.
So is it any wonder that C is such a popular language?
Like toppling dominoes, the next generation of programs follows the
trend of its ancestors. Operating systems designed in C always have
system libraries designed in C. Those system libraries are in turn used
to create higher-level libraries (like
OpenGL, or
GTK), and the designers of those libraries
often decide to use the language the system libraries used. Application
developers use the higher-level libraries to design word processors,
games, media players and the like. Many of them will choose to program
in the language that the higher-level library uses. And the pattern
continues on and on and on\...
That said, learning assembly can be fun and worthwhile because it can
give you a deep understanding of how your computer works at very low
levels. And learning assembly will definitely help you become a more
skilled C programmer. So, by all means, we encourage you to learn assembly,
but when it comes time to do real work, you\'ll definitely want to get
it done with C.
### Why C, and not another language?
The primary design goal of C is to produce portable code while
maintaining performance and minimizing footprint (CPU time, memory
usage, disk I/O, etc.). This is useful for operating systems, embedded
systems or other programs where performance matters a lot and where the
overhead of a "high-level" interface would hurt. With C it is relatively
easy to keep a mental picture of what a given line really does, because
most things are written explicitly in the code. C also has a large
existing codebase of low-level applications.
It is the "native" language of UNIX, which makes it
flexible and portable. It is a stable and mature language which is
unlikely to disappear for a long time and has been ported to most, if
not all, platforms.
One powerful reason is memory allocation. Unlike most programming
languages, C allows the programmer to write directly to memory. Key
constructs in C such as structs, pointers and arrays are designed to
structure and manipulate memory in an efficient, machine-independent
fashion. In particular, C gives control over the memory layout of data
structures. Moreover, dynamic memory allocation is under the control of
the programmer (which also means that memory deallocation has to be done
by the programmer). Languages like Java and Perl shield the programmer
from having to manage most details of memory allocation and pointers
(except for memory leaks and some other forms of excess memory usage).
This can be useful since
dealing with memory allocation when building a high-level program is a
highly error-prone process. However, when dealing with low-level code
such as the part of the OS that controls a device, C provides a uniform,
clean interface. These capabilities just do not exist in most other
languages.
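As a rough sketch of what this control looks like in practice (the `struct point` type, field names, and values below are invented purely for illustration), here is a struct, a pointer, and programmer-controlled allocation and deallocation:

``` c
#include <stdio.h>
#include <stdlib.h>

/* a struct lets the programmer decide how related data is laid out
   together in memory */
struct point {
    int x;
    int y;
};

int main(void)
{
    /* dynamic allocation: the programmer explicitly asks for memory... */
    struct point *p = malloc(sizeof *p);
    if (p == NULL) {
        return 1;               /* allocation failed */
    }

    p->x = 3;                   /* manipulate that memory through a pointer */
    p->y = 4;
    printf("(%d, %d)\n", p->x, p->y);

    /* ...and the programmer is responsible for giving it back */
    free(p);
    return 0;
}
```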
While Perl, PHP, Python and Ruby may be powerful and support many
features not provided by default in C, they are not normally implemented
in their own language. Rather, most such languages initially relied on
being written in C (or another high-performance programming language),
and their implementation must be ported to a new platform before they
can be used there.
As with all programming languages, whether you want to choose C over
another high-level language is a matter of opinion and both technical
and business requirements could dictate which language is required.
# C Programming/History
The field of computing as we know it today started in 1947 with three
scientists at Bell Telephone Laboratories---William
Shockley, Walter
Brattain, and John
Bardeen---and their groundbreaking
invention: the transistor. In 1956, the first
fully transistor-based computer, the TX-0, was
completed at MIT. The first integrated
circuit was created in 1958 by Jack
Kilby at Texas Instruments, but the first
high-level programming language existed even before then.
The Fortran project was developed in 1954 by
IBM. A shortening of \"*The IBM Mathematical **For**mula **Tran**slating
System*\", the project had the purpose of creating and fostering
development of a procedural, imperative programming language that was
especially suited to numeric computation and scientific computing. It
was a breakthrough in terms of productivity and programming ease
(compared to assembly language) and
speed (Fortran programs ran nearly as fast as, and in some cases, just
as fast as, programs written in assembly). Furthermore, Fortran was
written at a high-enough level (and thus was machine independent enough)
to become the first widely adopted programming language. The Algorithmic
Language (Algol 58) was derived from Fortran in
1958 and evolved into Algol 60 in 1960. The
Combined Programming Language
(CPL) was then created out
of Algol 60 in 1963. In 1967, it evolved into Basic CPL (BCPL), which
was the basis for *B*, created around 1969, which in turn served as the
basis of *C*.
Created by Ken Thompson at Bell Labs, B was
a stripped-down version of BCPL that was also a compiled
language (see User\'s Reference to
B) used in early
internal versions of the UNIX operating system. As
Dennis Ritchie noted in his *Development
of the C Language* :
> The B compiler on the PDP-7 did not generate machine instructions, but
> instead \'threaded code\', an interpretive scheme in which the
> compiler\'s output consists of a sequence of addresses of code
> fragments that perform the elementary operations. The operations
> typically --- in particular for B --- act on a simple stack machine.
Thompson and Ritchie improved B, and called the result NB. Further
extensions to NB created its logical successor, C. Most of UNIX was
rewritten in NB, and then C, which resulted in a more portable operating
system. The portability of UNIX was
the main reason for the initial popularity of both UNIX and C. Rather
than creating a new operating system for each new machine, system
programmers could simply write the few system-dependent parts required
for the machine, and then write a C compiler for the new system. Since
most of the system utilities were thus written in C, it simply made
sense to also write new utilities in C.
The American National Standards Institute began work on standardizing
the C language in 1983, and completed the standard in 1989. The
standard, ANSI X3.159-1989 \"Programming Language C\", served as the
basis for all implementations of C compilers. The standards were later
updated in 1990 and 1999, allowing for features that were either in
common use, or were appearing in C++.
# C Programming/What you need before you can learn
## Getting Started
This book introduces and teaches the basics of the C programming
language and touches upon some advanced topics as well. This section
outlines the required skills and tools you\'ll need to get the most out
of this book.
### Skills and Prior Experience You\'ll Need
This book is for beginning programmers, so don\'t worry if you have no
formal computer training or prior programming experience. It\'s assumed
you know how to turn your computer on, start and stop applications, and
perform other basic operations like installing software. It\'s also
assumed you have some experience interacting with your operating system
through a terminal window using its **command line interface.** If you
aren\'t sure what this means, consider seeking out a tutorial for your
chosen platform that can get you comfortable with getting around your
computer\'s command line. At a minimum, you should know the basic
commands for navigating to different directories and performing simple
file management operations. This book will spell out any other commands
you\'ll need to run from the command line to get your C code working on
your machine.
### Software You\'ll Need
No one ever became a musician just by reading sheet music. Musicians
have to constantly play and practice on their instruments to get good.
Similarly, the only way to become a programmer is to write and execute
lots of code. To do that, you will need two different pieces of
software: a **compiler** and a **text editor**. Both can be had for no
cost.
###### Compilers
A compiler is a sophisticated piece of software for converting the C
source code you write with your text editor into the machine
code[^1] that you can execute on your
computer. Below is a list of some popular C compilers. Note that some of
the compilers listed below come as part of an **integrated development
environment
(IDE).** However, if you are brand new to programming, it\'s best if you
can install and run the compiler from the command line instead of
through an IDE. This book uses the GNU C Compiler (GCC) in its examples
so we recommend installing this compiler for use with this book. The
next section in this chapter will explain how to download and install
the GCC software to your machine.
**Popular C compilers/IDEs include:**
| Name | Website | Platform | License | Details |
|------|---------|----------|---------|---------|
| Microsoft Visual Studio Community | Visual Studio | Windows | Proprietary, free of charge | Powerful and student-friendly version of an industry standard compiler. |
| Xcode | Xcode | macOS, OS X | Proprietary, free of charge | Available free of charge at Mac App Store. |
| Tiny C Compiler (TCC) | tinycc | GNU/Linux, Windows | LGPL | Small, fast and simple compiler. |
| Clang | clang | GNU/Linux, Windows, Unix, OS X | University of Illinois/NCSA License | A free, permissively licensed front-end using a LLVM backend. |
| GNU C Compiler | gcc | GNU/Linux, MinGW or mingw-w64 (Windows), Unix, OS X | GPL | The de facto standard. Ships with most Unix-like systems. |
###### Text Editors and IDEs
Aside from a compiler, the only other software requirement is a text
editor for writing and saving your C code.
Note that a text editor is different from a word
processor, a piece of software with many
features for creating visually appealing documents. Unlike word
processors, text editors are primarily designed to create plain text
files. On Windows, the Notepad text editor can be used but it does not
offer any advanced capabilities such as syntax highlighting and code
completion. There are hundreds of text editors (see List of Text
Editors). Among the most popular are
Notepad++ for Windows as well as Sublime Text, gedit, Vim and Emacs,
which are also available on other operating systems ("cross-platform").
These text editors come with syntax highlighting and line numbers,
which make code easier to read at a glance and make syntax errors
easier to spot. Many
text editors have features for increasing your coding speed, such as
keystroke macros and code snippets, that you can take advantage of as
you gain skill as a programmer.
You may also be considering the use of an **Integrated Development
Environment** (**IDE**) to help you write code. An IDE is a suite of
integrated tools and features in one convenient package, usually with a
graphical user interface. These programs include a text editor and file
browser and are also sometimes bundled with an easily accessible
compiler. They also typically include a debugger, a tool that will
enable you to do such things as step through the program you develop
manually one source code line at a time, or alter data as an aid to
finding and correcting programming errors.
However, many IDEs do not offer a command line interface to the compiler
and/or offer only graphical buttons or a menu for executing programs. So
for new programmers, an IDE is not ideal. Instead, a simple text editor
will suffice along with the ability to issue simple commands on the
command line to help you gain a hands-on familiarity and understanding
of core development tools. Of course, an IDE may still be useful to you
if you have experience with one. But as a general guideline: Do not use
an IDE unless you know what the IDE is doing for you!
**Other popular compilers/IDEs include:**
| Name | Website | Platform | License | Details |
|------|---------|----------|---------|---------|
| Eclipse CDT | Eclipse | Windows, Mac OS X, GNU/Linux | Free/Libre and Open Source | Eclipse IDE for C/C++ development, a popular open source IDE. |
| Netbeans | Netbeans | Cross-platform | CDDL and GPL 2.0 | A good, comparably mature IDE to Eclipse. |
| GNOME Builder | Builder | GNU/Linux | GPL | A feature-rich but simple IDE for the GNOME desktop environment. |
| Anjuta | Anjuta | GNU/Linux | GPL | An extensible GTK+3 IDE for the GNOME desktop environment. |
| Geany | geany | Cross-platform | GPL | A lightweight cross-platform GTK+ notepad based on Scintilla, with basic IDE features. |
| KDevelop | KDevelop | Cross-platform | GPL | A cross-platform IDE for the KDE project. |
| Little C Compiler (LCC) | lcc | Windows | Open Source but not Libre | Small open source compiler. |
| Pelles C | Pelles C | Windows, Pocket PC | Proprietary, free of charge | A complete C development kit for Windows. |
| Dev-C++ | Dev C++ | Windows | GPL | Updated version of the formerly popular Bloodshed Dev-C++. |
| CodeLite | CodeLite | Cross-platform | GPL 2 | Free IDE for C/C++ development. |
| Code::Blocks | Code::Blocks | Cross-platform | GPL 3.0 | Built to meet users' most demanding needs. Very extensible and fully configurable. |
On **GNU/Linux**, GCC is almost always included by default.
On **Microsoft Windows**, Dev-C++ is recommended for beginners because
it is easy to use, free, and simple to install. Although the initial
developer (Bloodshed) hasn't updated it since 2005, a new version
appeared in 2011, made by an independent programmer, and is being
actively developed.[^2] An alternate option for those working only in
the Windows environment is the proprietary Microsoft Visual Studio
Community which is free of charge and has an excellent debugger.
On **Mac OS X**, the Xcode IDE provides the compilers needed to compile
various source files. The newer versions do not include the command line
tools. They need to be downloaded via Xcode-\>Preferences-\>Downloads.
## Footnotes
[^1]: Actually, GCC's (GNU C Compiler) **cc** (C Compiler) translates
the input .c file to the target CPU's
assembly, output is written to an
.s file. Then **as** (assembler) generates a machine code file from
the .s file. Pre-processing is done by another sub-program **cpp**
(C PreProcessor), which is not to be confused with **c++** (a
compiler for another programming language).
[^2]: <http://orwelldevcpp.blogspot.com/>
# C Programming/Obtaining a compiler
## Dev-C++
Dev C++ is an Integrated Development
Environment (IDE) for the C++ programming language, available from
Bloodshed Software. An updated version is
available at Orwell Dev-C++.\
C++ is a programming language which contains within itself most of the C
language, plus extensions. Most C++ compilers will compile C programs,
sometimes with a few adjustments (like invoking them with a different
name or command line switch). Therefore, you can use Dev C++ for C
development.
However, Dev C++ is not the compiler. It is designed to use the
MinGW or Cygwin versions of
GCC - both of which can be obtained as part
of the Dev C++ package, although they are completely different
projects.\
Dev C++ simply provides an editor, syntax highlighting, some facilities
for the visualisation of code (like class and package browsing) and a
graphical interface to the chosen compiler. Because Dev C++ analyses the
error messages produced by the compiler and attempts to distinguish the
line numbers from the errors themselves, the use of other compiler
software is discouraged since the format of their error messages is
likely to be different.
The latest version of Dev-C++ is a
beta for version 5. However, it
still has a significant number of bugs. All the features are there, and
it is quite usable. It is considered one of the best free software C
IDEs available for Windows.
A version of Dev C++ for Linux is in the pipeline. It is not quite
usable yet, however. Linux users already have a wealth of IDEs
available. (e.g. KDevelop and
Anjuta.) Most of the graphical text editors, and
other common editors such as *emacs* and *vim*, support syntax
highlighting.
### Windows
1. Go to <https://sourceforge.net/projects/orwelldevcpp/> and pick the
download option.
2. The setup is pretty straightforward. Make sure the compiler option
is ticked.
3. You can now use the environment provided by the software to write
and run your code.
4. OPTIONALLY: \"C:\\Program Files (x86)\\Dev-Cpp\\MinGW64\\bin\" can
be added to the global PATH variable of the operating system to
compile with GCC from a command prompt.
## GCC
The GNU Compiler Collection
(GCC) is a free/libre set of compilers
developed by the Free Software
Foundation and can be installed
on a wide variety of operating systems. GCC commands are used throughout
this book to demonstrate how to compile C code so you are encouraged to
take the time to install GCC on your machine.
### GNU/Linux
On **GNU/Linux**, installing the GNU C Compiler can vary in method from
distribution to distribution. (Type **cc -v** to see if it is installed
already.)
- For Ubuntu, install the GCC compiler (along with other necessary
    tools) by running `sudo apt install build-essential` in the terminal.
- For Debian, install the GCC compiler (as root) by running
    `apt install gcc` in the terminal.
- For Fedora, install the GCC compiler (as root) by running
    `dnf install gcc` in the terminal.
- For RHEL, install the GCC compiler (as root) by running
    `dnf install gcc` in the terminal.
- For Mandrake, install the GCC compiler (as root) by running
    `urpmi gcc` in the terminal.
- For Slackware, the package is available on
their website - simply download, and
type `installpkg gcc-xxxxx.tgz` in the terminal.
- For Gentoo, you should already have GCC
installed as it will have been used when you first installed. To
update it run (as root) `emerge -uav gcc` in the terminal.
- For Arch Linux, install the GCC compiler
(as root) by running `pacman -S gcc` in the terminal.
- For Void Linux, install the GCC
compiler (as root) by running `xbps-install -S gcc` in the terminal.
- If you cannot become root, get the GCC tarball from
<ftp://ftp.gnu.org/> and follow the instructions in it to compile
and install in your home directory. Be warned though, you need a C
compiler to do that - yes, GCC itself is written in C.
- You can use a commercial C compiler/IDE.
### macOS
The simplest method for obtaining a compiler is to install Apple\'s
proprietary IDE, Xcode, available for
free.
Xcode comes bundled with a GCC-compatible compiler called
clang which replaced GCC as Xcode\'s default C
compiler a number of years ago. But because Xcode aliases the `gcc`
command to the clang compiler, GCC installation isn\'t necessary to
compile the example code in this book.
If you prefer using the GCC compiler, the third-party package manager,
Homebrew, provides an easy installation process.
You\'ll first need to install
Homebrew, and then issue the
`brew install` command to install the desired GCC Homebrew
formulae. You may want to find a
recent tutorial that will step you through this process as other
commands may be necessary to get GCC set up flawlessly on your system,
especially if you already have Xcode installed.
For hardcore computer enthusiasts, GCC can be compiled directly from the
source code. We highly recommend searching out and following an
up-to-date tutorial for installing GCC from source files.
### BSD Family Systems
- For FreeBSD, NetBSD,
OpenBSD, DragonFly
BSD the port of GNU GCC is available in
the base system, or it could be obtained using the ports collection
or pkgsrc.
### Windows
There are three ways to use GCC on Windows: Cygwin, MinGW and Windows
Subsystem for Linux (WSL). Applications compiled with Cygwin will not
run on any computer without Cygwin, so MinGW is recommended. MinGW is
simpler to install, and takes less disk space.
#### MinGW
1. Go to <http://sourceforge.net/projects/mingw/> download and save
this to your hard drive.
2. Once the download is finished, open it and follow the instructions.
You can also choose to install additional compilers, or the tool
Make, but these aren\'t necessary.
3. Now you need to set your PATH. Right-click on \"My computer\" and
click \"Properties\". Go to the \"Advanced\" tab and click on
\"Environment variables\". Go to the \"System variables\" section
and scroll down until you see \"Path\". Click on it, then click
\"edit\". Add \"C:\\MinGW\\bin\\\" (without the quotes) to the end.
4. To test if GCC works, open a command prompt and type \"gcc\". You
should get the message \"gcc: fatal error: no input files
compilation terminated.\". If you get this message, GCC is installed
correctly.
#### Cygwin
1. Go to <http://www.cygwin.com> and click on the \"Install Cygwin
Now\" button in the upper right corner of the page.
2. Click \"run\" in the window that pops up, and click \"next\" several
times, accepting all the default settings.
3. Choose any of the Download sites (\"ftp.easynet.be\", etc.) when
that window comes up; press \"next\" and the Cygwin installer should
start downloading.
4. When the \"Select Packages\" window appears, scroll down to the
heading \"Devel\" and click on the \"+\" by it. In the list of
packages that now displays, scroll down and find the \"gcc-core\"
package; this is the compiler. Click once on the word \"Skip\", and
it should change to some number like \"3.4\" etc. (the version
number), and an \"X\" will appear next to \"gcc-core\" and several
other related packages that will now be downloaded.
5. Click \"next\" and the compiler as well as the Cygwin tools should
start downloading; this could take a while. While you\'re waiting
for the installation to finish, download any text-editor designed
for programming. While Cygwin does include some, you may prefer
doing a web search to find other alternatives. While using a stock
text editor is possible, it is not ideal.
6. Once the Cygwin downloads are finished and you have clicked
\"next\", etc. to finish the installation, double-click the Cygwin
icon on your desktop to begin the Cygwin \"command prompt\". Your
home directory will automatically be set up in the Cygwin folder,
which now should be at \"C:\\cygwin\" (the Cygwin folder is in some
ways like a small unix/linux computer on your Windows machine \--
not technically of course, but it may be helpful to think of it that
way).
7. Type \"gcc\" at the Cygwin prompt and press \"enter\"; if \"gcc: no
input files\" or something like it appears you have succeeded and
now have the GCC compiler on your computer (and congratulations \--
you have also just received your first error message!).
#### Windows Subsystem for Linux
1. Go to <http://aka.ms/wsldocs> and follow the steps to install
WSL
2. Go to <https://aka.ms/vscode> and follow the steps to install
VSCode
3. Follow the
guide
and choose Get Started with C++ and
WSL
4. You will likely need to install a distribution such as Ubuntu and
    then set it up, installing GCC as described in the GNU/Linux section
    above.
The current stable (usable) version of GCC is 4.9.1 published on
2014-07-16, which supports several platforms. In fact, GCC is not only a
C compiler, but a family of compilers for several languages, such as
C++, Ada, Java, and
Fortran.
## Embedded systems
- Most CPUs are microcontrollers used in embedded systems, which are
    often programmed in C, but most of the compilers mentioned above
    (except GCC) do not support such CPUs. For specialized compilers
    that do support embedded systems, see Embedded Systems/C
    Programming.
## Other C compilers
We have a long list of C
compilers in a much later section
of this Wikibook. *Which of those compilers would be suitable for
beginning C programmers, that we should say a few words about getting
started with that particular compiler in this section of this Wikibook?*
# C Programming/Intro exercise
## The \"Hello, World!\" Program
Tradition dictates that we begin with a program that displays a \"Hello,
World!\" greeting to the screen, followed by a new line, and then exits.
Below is the C source code that does just that. Type this code into your
preferred text editor/IDE and save it to a file named **hello.c**.
:
``` {.c .numberLines}
#include <stdio.h>
int main(void)
{
printf("Hello, World!\n");
return 0;
}
```
### Source code analysis
Although this is a very simple program, a lot of hidden meaning is
packed into the many symbols you see in the code. Though your compiler
understands it, you can only guess at what the code, sprinkled with some
familiar English words, might do. One of your first jobs as a new
programmer will be to learn the many \"words\" and symbols of the C
programming language, the language your compiler understands. Once you
learn the meaning underlying the code, you will be able to \"talk to\"
the compiler and give it your own orders and build any kind of program
you are inventive and resourceful enough to create.
But note that knowing the meanings of arcane symbols is not all there is
to programming. You can\'t master another language by reading a
translation dictionary. To become fluent in another language, you have
to practice conversing in that language. Learning a programming language
is no different. You have to practice \"talking\" to the compiler with
the source code you write. So be sure to type in the code example above
and feel free to experiment and alter it with your curiosity as your
guide.
OK, so let\'s dive in and look at the first line in our program:
:
``` {.c .numberLines startFrom="1"}
#include <stdio.h>
```
Before understanding what this line does, you have to know that your
machine already comes pre-installed with some C software code. The code
is there to save you from the drudgery of writing code that performs
basic, common tasks. This reusable code is referred to as a **library.**
And so our first line in our example program signals to the compiler
that we\'d like to \"check out\" some code from the library and make use
of it in our program. Here, we are borrowing code that will help us
print text to the screen.
The way we tell the C compiler to include library code into our own code
is by using what's called a **preprocessor directive**.
One of the very first tasks your compiler will perform is to search
through your source code for preprocessor directives which modify your
source code in some way. In our case, the `#include` preprocessor
directive tells the compiler to copy source code from a library and
insert it directly into the code where the preprocessor directive is
found. Since our directive is at the very top of the file, the library
code will be inserted at the top of the source file. (Note that this all
happens in the computer\'s memory, so the original file on your disk
never actually gets altered.)
But which library code should the compiler insert? The next bit in the
line, the `<stdio.h>`, tells the compiler to copy and paste the C code
from the **stdio.h** file into your code. The angle brackets surrounding
the file name tell the compiler to look for the file in the standard
library as opposed to, say, your own personal library of reusable code.
Note that files with the **.h** extension are called **header files**.
The stdio.h header file contains many **functions** related to input and
output that are defined according to the C standard. Though this header
file gives us access to many different functions, the only library
function we are interested in from stdio.h is the `printf` function.
OK, but what, exactly, is a function? Let\'s take a look at the next
line in our code so we can begin to get an idea:
:
``` {.c .numberLines startFrom="3"}
int main(void)
```
Here we create a **function** named `main` that is the starting point
for all C programs. All C programs require a function called \"main\" or
they will not compile. Our function name is surrounded by two mysterious
symbols, **int** and **(void)**. The \"int\" bit tells the compiler what
kind of value our function will return while the \"(void)\" bit tells
our compiler what kind of values we will \"pass\" into our function.
We\'ll skip over what exactly this means for now as these values will be
covered in more detail later in the book. The most important thing to
understand right now is that together, these symbols **declare** our
function to the compiler and tell it that it exists.
So what is a function? In computer science, the term "function" is used
a bit more loosely than in mathematics, since functions often express
imperative ideas (as in the case of C) - that is, *how-to* process,
instead of declarations. For now, suffice it to say, functions define a
set of computer statements that work together to carry out a specific
task. In C, the statements associated with a function are placed between
a set of curly braces, `{ }`, which mark the beginning and end of the
statements. Together, the curly braces and the statements are called a
**block.** Let\'s take a look at the first line in our function\'s
block:
:
``` {.c .numberLines startFrom="5"}
printf("Hello World!\n");
```
This line of code is the heart of our program, the one that outputs our
greeting to the user's **console** (also known as the *terminal* in the
context of Unix-like operating systems), the text-based interface
installed on your computer. This statement is a **function call** and
has two main parts: the name of the library function used to print our
greeting, **printf**, followed by the data that we will pass to the
function, seen here between the pair of parentheses. The data we are
passing to the function is the **string**, "Hello, World!\\n". The
\"\\n\" part at the end of the string is a special kind of character
called an **escape sequence.** The \"\\n\" escape sequence generates the
new line at the end of our text. Strings and escape sequences will be
covered in more detail later. We terminate the function call statement
with a semicolon so the compiler knows that it should begin looking for
a new statement which it finds on the next line:
:
``` {.c .numberLines startFrom="6"}
return 0;
```
Here, we say that our `main` function returns an integer value using the
`return` keyword. The integer value we are returning is \"0\". But what
does this mean, exactly? In the specific context of the `main` function,
the value we return is called the **exit status**, which we report back
to the operating system to indicate whether our code ran without error.
As our programs grow in complexity, we can use other integers as codes
to indicate various types of errors. This style of providing exit status
is a long standing convention[^1]. We will go into much more detail on
return values of functions later in the book.
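As a small aside (this snippet is separate from the hello.c program above, and the file name in it is invented purely for illustration), a program might use a non-zero exit status to report a failure like this:

``` c
#include <stdio.h>

int main(void)
{
    /* "settings.txt" is a hypothetical file this program needs */
    FILE *fp = fopen("settings.txt", "r");
    if (fp == NULL) {
        /* report failure back to the operating system */
        return 1;
    }

    fclose(fp);
    return 0;   /* success */
}
```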
So that\'s a lot to take in, even for such a short program. Don\'t worry
if you don\'t understand all of it and don\'t worry about memorizing it.
You do not learn programming by memorizing, you learn by repetition and
by doing. Memorizing all the notes to Beethoven\'s 5th symphony does not
make you a concert pianist, you must get on the keyboard and practice
and play!
Next we will show you how to take the source code you typed in and turn
it into an executable file with your compiler.
### Compiling
Compiling is the process of translating the orders you gave in your
source code into the machine language that can be run by your operating
system and microprocessor. In this way,
your C compiler is a middle-man. You talk to the compiler in a language
it understands, C source code, and the compiler translates the source
into machine code to save you a lot of painstaking, tedious work writing
assembly code.
If the compiler finds your source code confusing, it will throw an error
along with a message to help you fix up your source code and clear up
any confusion. You will then need to try to recompile the code and
repeat the process until it compiles without error. Note that code that
compiles without error doesn\'t mean it\'s free of bugs. It just means
the compiler understands the instructions provided by your source code.
#### Unix-like
If you are using a Unix(-like) system, such as
GNU/Linux, Mac OS X,
or Solaris, it will
probably have GCC installed, otherwise on Linux you can install it using
the package manager of your distribution. Open the virtual console or a
terminal emulator and enter the following (be certain your current
working directory is the one containing your source code):
`gcc hello.c`
By default gcc will generate our executable binary with the name
*a.out*. To run your new generated program type:
`./a.out`
You should see `Hello, World!` printed after the last prompt.
To see the exit status of the last program you ran, type on your shell
command:
`echo $?`
This shows the value the `main` function has returned, which is 0 in the
above example.
There are a lot of options you can use with the gcc compiler. For
example, if you want the output to have a name other than a.out, you can
use the -o option. The following shows a few examples:
`-o`: indicates that the next parameter is the name of the resulting program (or library). If this option is not specified, the compiled program will, for historic reasons, end up in a file called \"a.out\" or \"a.exe\" (for cygwin users).
`-Wall`: indicates that gcc should warn about many types of suspicious code that are likely to be incorrect.
You can use these options to create a program called \"helloworld\"
instead of \"a.out\" by typing:
`gcc -o helloworld hello.c -Wall`
Now you can run it by typing:
`./helloworld`
All the options are well documented in the manual[^2] for GCC.
#### On IDEs
If you are using an IDE, you may have to create a console project; to
compile, you just select Build from the menu or the toolbar. The
executable will appear inside the project folder, and there is usually a
menu entry or button that lets you run the executable directly from the
IDE. The process is roughly the same on all IDEs.
## References
[^1]: <https://www.gnu.org/software/libc/manual/html_node/Exit-Status.html>
[^2]: <https://gcc.gnu.org/onlinedocs/>
# C Programming/Preliminaries
Before learning C syntax and programming constructs, it is important to
learn the meaning of a few key terms that are central in understanding
C.
## Block Structure, Statements, Whitespace, and Scope
Next we'll discuss the **basic structure** of a C program. If you're
familiar with PASCAL, you may have heard it referred to as a
**block-structured** language. C does
not have complete block structure (and you\'ll find out why when you go
over functions in detail) but it is still very important to understand
what blocks are and how to use them.
So what is in a **block**? Generally, a block consists of executable
**statements**.
But before we delve into blocks, let\'s examine statements. One way to
describe statements is they are the text (and surrounding whitespace)
the compiler will attempt to turn into executable instructions. A
simpler definition is statements are bits of code that do things. For
example:
``` c
int i = 6;
```
This **declares** an integer variable, which can be **accessed** with
the **identifier** \'i\', and **initializes** it to the value 6. The
various data types are introduced in the chapter
Variables.
You might have noticed the semicolon at the end of the statement.
Statements in C always end with a semicolon (;). Leaving off the
semicolon is a common mistake many people make, beginners and experts
alike! So until it becomes second nature, be sure to double check your
statements!
Since C is a \"free-format\" language, several statements can share a
single line in the source file, like this:
``` c
/* this declares the variables 'i', 'test', 'foo', and 'bar'
note that ONLY the variable named 'bar' is initialized to six! */
int i, test, foo, bar = 6;
```
There are several kinds of statements. You\'ve already seen some of
them, such as the assignment (`i = 6;`). A substantial portion of this
book deals with statement construction.
Back to our discussion of blocks. In C, blocks begin with an opening
brace `"{"` and end with a closing brace `"}"`. Blocks can contain other
blocks which can contain their own blocks, and so on.
Let\'s look at a block example.
``` c
int main(void)
{
/* this is a 'block' */
int i = 5;
{
/* this is also a 'block', nested inside the outer block */
int j = 6;
}
return 0;
}
```
You can use blocks with the preceding statements, such as the main
function declaration (and other statements we\'ve not yet covered), but
you can also use blocks by themselves.
**Whitespace** refers to the tab, space and newline characters that
separate the text characters that make up the source code.\
Like many things in life, it\'s hard to appreciate whitespace until
it\'s gone. To a C compiler, the source code
``` c
printf("Hello world"); return 0;
```
is the same as
``` c
printf("Hello world");
return 0;
```
which is also the same as
``` c
printf (
"Hello world") ;
return 0;
```
The compiler simply ignores most whitespace (except, for example, when
it separates `return` from `0`). However, it is common practice to use
spaces (or tabs) to organize source code for human readability.
Most of the time we do not want other functions or other programmer\'s
routines accessing data we are currently
manipulating, which is why it is important to understand the concept of
scope.
**Scope** describes the level at which a piece of data or a function is
visible. There are two types of scope in C, **local** and **global**.
When we speak of **global** scope, we\'re referring to something that
can be seen or manipulated from anywhere in the program. **Local** scope
applies to a program element that can be seen or manipulated only within
the block in which it was declared.
Let\'s look at some examples to get a better idea of scope.
``` c
int i = 5; /* this is a 'global' variable, it can be accessed from anywhere in the program */
/* this is a function, all variables inside of it
are "local" to the function. */
int main(void)
{
int i = 6; /* "local" 'i' is set to 6 */
printf("%d\n", i); /* prints a '6' to the screen, instead of the global variable of 'i', which is 5 */
return 0;
}
```
That shows an example of local and global. But what about different
scopes *inside* of functions?\
(you\'ll learn more about functions later, for now, just focus on the
\"main\" part.)
``` c
/* the main function */
int main(void)
{
/* this is the beginning of a 'block', you read about those above */
int i = 6; /* this is the first variable of this 'block', 'i' */
{
/* this is a new 'block', and because it's a different block, it has its own scope */
/* this is also a variable called 'i', but in a different 'block',
because it's in a different 'block' than the first variable named 'i', it doesn't affect the first one! */
int i = 5;
printf("%d\n", i); /* prints a '5' onto the screen */
}
/* now we're back into the first block */
printf("%d\n", i); /* prints a '6' onto the screen */
return 0;
}
```
## Basics of Using Functions
**Functions** are a big part of programming. A function is a special
kind of block that performs a well-defined task. If a function is
well-designed, it can enable a programmer to perform a task without
knowing anything about how the function works. The act of requesting a
function to perform its task is called a **function call**. Many
functions require a function call to hand it certain pieces of data
needed to perform its task; these are called **arguments**. Many
functions also return a value to the function call when they\'re
finished; this is called a **return value** (the return value in the
above program is **0**).
The things you need to know before calling a function are:
- What the function does
- The data type (discussed later) of the arguments and what they mean
- The data type of the return value and what it means
Many functions use the return value for the result of a computation.
Some functions use the return value to indicate whether they
successfully completed their work. As you have seen in the intro
exercise, the `main` function uses a return value to provide an exit
status to the operating system.
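As a minimal sketch (the `add` function below is invented purely for illustration), this is what defining a function, handing it arguments, and using its return value looks like:

``` c
#include <stdio.h>

/* a small function: takes two int arguments and returns their sum */
int add(int a, int b)
{
    return a + b;
}

int main(void)
{
    int result = add(2, 3);   /* a function call with the arguments 2 and 3 */
    printf("%d\n", result);   /* prints 5 */
    return 0;                 /* main's return value: the exit status */
}
```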
All code other than global data definitions and declarations needs to be
a part of a function.
Usually, you\'re free to call a function whenever you wish to. The only
restriction is that every executable program needs to have one, and only
one, **main** function, which is where the program begins executing.
We will discuss functions in more detail in a later chapter, C
Programming/Procedures and
functions.
## The Standard Library
In 1983, when C was in the process of becoming standardized, the
American National Standards
Institute (ANSI)
formed a committee to establish a standard specification of C known as
\"ANSI C\". That standard specification created a basic set of functions
common to each implementation of C, which is referred to as the
Standard Library. The Standard
Library provides functions for tasks such as input/output, string
manipulation, mathematics, files, and memory allocation. The Standard
Library does not provide functions that are dependent on specific
hardware or operating systems, like graphics, sound, or networking. In
the \"Hello, World\" program, a Standard Library function is used
(`printf`) which outputs lines of text to the standard
output stream.
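As a brief, hedged illustration of what using the Standard Library looks like (the functions shown are standard, but the program itself is just an example), the following uses one function each from the input/output, string, and mathematics headers:

``` c
#include <stdio.h>   /* input/output: printf */
#include <string.h>  /* string handling: strlen */
#include <math.h>    /* mathematics: sqrt */

int main(void)
{
    const char *greeting = "Hello, World!";

    /* strlen counts the characters in a string, not including the
       terminating null character */
    printf("The greeting is %zu characters long.\n", strlen(greeting));

    /* sqrt computes the square root of its argument */
    printf("The square root of 2 is roughly %f.\n", sqrt(2.0));

    return 0;
}
```

On some systems you may need to add `-lm` to the compile command so that the math library is linked in.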
# C Programming/Basics of compilation
Having covered the basic concepts of C programming, we can now briefly
discuss the process of *compilation*.
Like any programming language, C by itself is completely
incomprehensible to a microprocessor. Its
purpose is to provide an intuitive way for humans to provide
instructions that can be easily converted into machine code that *is*
comprehensible to a microprocessor. The ***compiler*** is what
translates our human-readable source code into machine code.
To those new to programming, this seems fairly simple. A naive compiler
might read in every source file, translate everything into machine code,
and write out an executable. That could work, but has two serious
problems. First, for a large project, the computer may not have enough
memory to read all of the source code at once. Second, if you make a
change to a single source file, you would have to recompile the *entire*
application.
To deal with these problems, compilers break the job into steps. For
each source file (each `.c` file), the compiler reads the file, reads
the files it references via the `#include` directive, and translates
them to machine code. The result of this is an \"object file\" (`.o`).
After all the object files are created, a \"linker\" program collects
all of the object files and writes the actual executable program. That
way, if you change one source file, only that file needs to be
recompiled, after which the application needs to be re-linked.
Without going into details, it can be beneficial to have a superficial
understanding of the compilation process.
## Preprocessor
The preprocessor provides the ability for the inclusion of so called
header files, macro expansions, conditional compilation and line
control. These features can be accessed by inserting the appropriate
preprocessor directives into
your code. Before compiling the source code, a special program, called
the preprocessor, scans the source code for tokens, or special strings,
and replaces them with other strings or code according to specific
rules. The C preprocessor is not technically part of the C language and
is instead a tool offered by your compiler\'s software.
All preprocessor directives begin with the hash character (#). You can
see one preprocessor directive in the Hello world
program. Example:
``` c
#include <stdio.h>
```
This directive causes the stdio header to be included into your program.
Other directives such as `#pragma` control compiler settings and macros.
The result of the preprocessing stage is a text string. You can think of
the preprocessor as a non-interactive text editor that modifies your
code to prepare it for compilation. The language of preprocessor
directives is agnostic to the grammar of C, so the C preprocessor can
also be used independently to process other kinds of text files.
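As a rough sketch of two common preprocessor features, macro expansion and conditional compilation (the `PI` and `DEBUG` names below are made up for the example, not anything defined by the standard library):

``` c
#include <stdio.h>

/* object-like macro: every later occurrence of PI is replaced with
   the text 3.14159 before the compiler proper sees the code */
#define PI 3.14159

/* conditional compilation: code guarded by #ifdef DEBUG is only
   compiled if DEBUG is defined (here, or e.g. with gcc -DDEBUG) */
#define DEBUG

int main(void)
{
    double circumference = 2.0 * PI * 5.0;

#ifdef DEBUG
    printf("debug build\n");
#endif

    printf("circumference: %f\n", circumference);
    return 0;
}
```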
## Syntax Checking
This step ensures that the code is valid and can be turned into an
executable program. Under most compilers, you may get messages or
warnings indicating potential issues with your program (such as a
conditional statement always being true or false, etc.).
When an error is detected in the program, the compiler will normally
report the file name and line that is preventing compilation.
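As a hedged illustration, the following code is valid and compiles, but many compilers will flag it when warnings are enabled (for example, GCC with `-Wall -Wextra` typically reports that the comparison is always false):

``` c
#include <stdio.h>

int main(void)
{
    unsigned int count = 0;

    /* an unsigned value can never be negative, so this condition is
       always false and the body can never run */
    if (count < 0) {
        printf("this line is unreachable\n");
    }

    return 0;
}
```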
## Object Code
The compiler produces a machine code equivalent of the source code that
can be linked into the final program. At this point the code itself
can\'t be executed, as it requires linking to do so.
It\'s important to note after discussing the basics that compilation is
a \"one way street\". That is, compiling a C source file into machine
code is easy, but \"decompiling\" (turning machine code into the C
source that creates it) is not. Decompilers for C do exist, but the code
they create is hard to understand and only useful for reverse
engineering.
## Linking
Linking combines the separate object files and libraries into one
complete program, producing either an executable program or a library.
Linking is performed by a linker program, which is often part of a
compiler suite.
Common errors during this stage are either missing or duplicate
functions.
## Automation
For large C projects, many programmers choose to automate compilation,
both in order to reduce user interaction requirements and to speed up
the process by recompiling only modified files.
Most Integrated Development Environments (IDE\'s) have some kind of
project management which makes such automation very easy. However, the
project management files are often usable only by users of the same
integrated development environment, so anyone desiring to modify the
project would need to use the same IDE.
On UNIX-like systems, make and Makefiles are often used to accomplish
the same thing. Make is traditional and flexible and is
available as one of the standard developer tools on most Unix and GNU
distributions.
Makefiles have been extended by the GNU
Autotools, composed of
Automake and
Autoconf for making software
compilable, testable, translatable and configurable on many types of
machines. Automake and Autoconf are described in detail in their
respective manuals.
The Autotools are often perceived to be complicated and various simpler
build systems have been developed. Many components of the GNOME
project now use the declarative Meson build
system which is less flexible, but instead
focuses on providing the features most commonly needed from a build
system in a simple way. Other popular build systems for programs written
in the C language include CMake and
Waf.
Once GCC is installed, it can be called with a list of C source files
that have been written but not yet compiled. For example, if the file
main.c uses functions declared in myfun.h and implemented in myfun_a.c
and myfun_b.c, then it is enough to write
` gcc main.c myfun_a.c myfun_b.c `
myfun.h is included in main.c, but if it is in a separate header file
directory, then that directory can be listed after a \"-I \" switch.
In larger programs, Makefiles and the GNU make program can be used to
compile C files into intermediate object files (ending with the suffix
.o), which can then be linked by GCC.
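For example, the same three files from above could be compiled separately and then linked in one final step (the `-c` switch tells GCC to stop after producing an object file; `myprog` is just an example name for the resulting executable):

` gcc -c main.c `

` gcc -c myfun_a.c `

` gcc -c myfun_b.c `

` gcc main.o myfun_a.o myfun_b.o -o myprog `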
How to compile each object file is usually described in the Makefile
with the object file as a target: a label ending with a colon, followed
by a list of the files it depends on (e.g. .c files and .o files built
by other rules), and on the next line the invocation of GCC that is
required. Note that this command line must begin with a tab character
(using spaces there often causes problems). Typing `man make` or
`info make` often gives the information needed on how to use make, as
well as GCC.
Although GCC has a lot of option switches, one often used is -g, which
generates debugging information so that gdb can show the source code
during a step-through of the machine code program. gdb has an 'h' (help)
command that shows what it can do, and it is usually started with
'gdb a.out' if a.out is the default-named executable file that was
compiled by GCC.
# C Programming/Structure and style
## C Structure and Style
This is a basic introduction to good coding style in the C Programming
Language. It is designed to provide information on how to effectively
use indentation, comments, and other elements that will make your C code
more readable. It is not a tutorial on actual C programming.
As a beginning programmer, the point of creating structure in the
program code might not be clear, as the compiler doesn\'t care about the
difference. However, as programs become complex, chances are that
writing the program has become a joint effort. (Or others might want to
see how it was accomplished. Or you may have to read it again years
later.) Well-written code also helps you get an overview of what the
code does.
In the following sections, we will attempt to explain good programming
practices that will in turn make your programs clearer.
## Introduction
In C, programs are composed of statements. Statements are terminated
with a semi-colon, and are collected in sections known as functions. By
convention, a statement should be kept on its own line, as shown in the
example below:
``` c
#include <stdio.h>
int main(void) {
printf("Hello, World!\n");
return 0;
}
```
The following block of code is essentially the same. While it contains
exactly the same code, and will compile and execute with the same
result, the removal of spacing causes an essential difference: it\'s
harder to read.
``` c
#include <stdio.h>
int main(void) {printf("Hello, World!\n");return 0;}
```
The simple use of indents and line breaks can greatly improve code
readability without impacting code performance. Readable code makes it
much easier to see where functions and procedures end and which lines
are part of which loops and procedures.
This lesson is going to focus on improving the coding style of an
example piece of code which applies a formula and prints the result.
Later, you\'ll see how to write code for such tasks in more detail. For
now, focus on how the code looks, not what it does.
## Line Breaks and Indentation
The addition of white space inside your code is arguably the most
important part of good code structure. Effective use of white space can
create a visual scale of how your code flows, which can be very
important when returning to your code when you want to maintain it.
### Line Breaks
With minimal line breaks, code is barely human-readable, and may be hard
to debug or understand:
``` {.c .numberLines}
#include <stdio.h>
int main(void) { int revenue = 80; int cost = 50; int roi; roi = (100 * (revenue - cost)) / cost; if (roi >= 0) { printf ("%d\n", roi); } return 0; }
```
Rather than putting everything on one line, it is much more readable to
break up long lines so that each statement and declaration goes on its
own line. After inserting line breaks, the code will look like this:
``` {.c .numberLines}
#include <stdio.h>
int main(void) {
int revenue = 80;
int cost = 50;
int roi;
roi = (100 * (revenue - cost)) / cost;
if (roi >= 0) {
printf ("%d\n", roi);
}
return 0;
}
```
### Blank Lines
Blank lines should be used to offset the main components of your code.
Always use them:
- After preprocessor directives.
- After new variables are declared.
- Use your own judgment for finding other places where components
should be separated.
Based on these two rules, there should now be at least two line breaks
added:
- After line 1, because line 1 has a preprocessor directive.
- After line 5, because line 5 contains a variable declaration.
This will make the code much more readable than it was before. The
following code has the blank lines added, but no indentation yet:
``` {.c .numberLines}
#include <stdio.h>

int main(void) {

int revenue = 80;
int cost = 50;

int roi;

roi = (100 * (revenue - cost)) / cost;

if (roi >= 0) {
printf ("%d\n", roi);
}

return 0;
}
```
But it\'s still not as readable as it can be.
### Indentation
Although adding simple line breaks between key blocks of code can make
code easier to read, it provides no information about the block
structure of the program. Using the tab key can be very helpful.
Indentation visually separates paths of execution by moving their
starting points to a new column. This simple practice will make it much
easier to read and understand code. Indentation follows a fairly simple
rule:
- All code inside a new block should be indented by one tab more than
    the code in the enclosing block. (A note on indentation characters:
    several programmers recommend "use spaces for indentation. Do not
    use tabs in your code. You should set your editor to emit spaces
    when you hit the tab key." Other programmers disagree. Regardless
    of whether you prefer spaces or tabs, make sure you keep it
    consistent within the projects you are working on. Mixing tabs and
    spaces can cause code to become unreadable.)
Based on the code from the previous section, there are two blocks
requiring indentation:
- Lines 4 to 16
- Line 13
``` {.c .numberLines}
#include <stdio.h>

int main(void) {

    int revenue = 80;
    int cost = 50;

    int roi;

    roi = (100 * (revenue - cost)) / cost;

    if (roi >= 0) {
        printf ("%d\n", roi);
    }

    return 0;
}
```
It is now fairly obvious as to which parts of the program fit inside
which blocks. You can tell which parts of the program the coder has
intended to be conditional, and which ones he or she has not. Although
it might not be immediately noticeable, once many nested paths get added
to the structure of the program, the use of indentation can be very
important. Thus, indentation makes the structure of your program clear.
It is claimed that research has shown that an indentation size of
between 2 and 4 characters is easier to read than 8-character
indents[^1]. However, an indent of 8 characters may still be in use for
some systems[^2].
## Comments
Comments in code can be useful for a variety of purposes. They provide
the easiest way to set off specific parts of code (and their purpose);
as well as providing a visual \"split\" between various parts of your
code. Having good comments throughout your code will make it much easier
to remember what specific parts of your code do.
Comments in modern flavors of C (and many other languages) can come in
two forms:
``` {.c .numberLines}
// Single Line Comments (added by C99 standard, famously known as c++ style of comments)
```
and
``` {.c .numberLines}
/* Multi-Line
Comments
(only form of comments supported by C89 standard)*/
```
Note that single-line comments are a more recent addition to C, so some
compilers may not support them. A recent version of GCC will have no
problems supporting them.
This section is going to focus on the various uses of each form of
commentary.
### Single-line Comments
Single-line comments are most useful for simple \'side\' notes that
explain what certain parts of the code do. The best places to put these
comments are next to variable declarations, and next to pieces of code
that may need explanation. Comments should make clear the intention and
ideas behind the corresponding code. What is immediately obvious from
reading the code does not belong in a comment.
Based on our previous program, there are various good places to place
comments:
- Line 5 and/or 6, to explain what \'int revenue\' and \'int cost\'
represent,
- Line 8, to explain what the variable \'roi\' is going to be used
for,
- Line 10, to explain the idea of the calculation,
- Line 12, to explain the purpose of the \'if\'.
This will make our program look something like:
``` {.c .numberLines}
#include <stdio.h>

int main(void) {

    int revenue = 80; // as of 2016
    int cost = 50;

    int roi; // return on investment in percent

    roi = (100 * (revenue - cost)) / cost; // formula from accounting book

    if (roi >= 0) { // we don't care about negative roi
        printf ("%d\n", roi);
    }

    return 0;
}
```
### Multi-line Comments
Multi-line comments are most useful for long explanations of code. They
can be used as copyright/licensing notices, and they can also be used to
explain the purpose of a block of code. This can be useful for two
reasons: They make your functions easier to understand, and they make it
easier to spot errors in code. If you know what a block is *supposed* to
do, then it is much easier to find the piece of code that is responsible
if an error occurs.
As an example, suppose we had a program that was designed to print
"Hello, World!" a specified number of times per line, over a certain
number of lines. There would be many for loops in this program. For this
example, we shall call the number of lines *i*, and the number of
strings per line *j*.
A good example of a multi-line comment that describes \'for\' loop
*i*\'s purpose would be:
``` c
/* For Loop (int i)
Loops the following procedure i times (for number of lines). Performs 'for' loop j on each loop,
and prints a new line at end of each loop.
*/
```
This provides a good explanation of what *i*\'s purpose is, whilst not
going into detail of what *j* does. By going into detail over what the
specific path does (and not ones inside it), it will be easier to
troubleshoot the path.
Similarly, you should always include a multi-line comment before each
function, to explain the role, preconditions and postconditions of each
function. Always leave the technical details to the individual blocks
inside your program - this makes it easier to troubleshoot.
A function descriptor should look something like:
``` c
/* Function : int hworld (int i,int j)
Input : int i (Number of lines), int j (Number of instances per line)
Output : 0 (on success)
Procedure: Prints "Hello, World!" j times, and a new line to standard output over i lines.
*/
```
This system allows for an at-a-glance explanation of what the function
should do. You can then go into detail over how each aspect of the
program is achieved later on in the program.
Finally, if you like to have aesthetically-pleasing source code, the
multi-line comment system allows for the easy addition of comment boxes.
These make the comments stand out much more than they would otherwise.
They look like this:
``` c
/***************************************
* This is a multi line comment
* That is nearly surrounded by a
* Cool, starry border!
***************************************/
```
Applied to our original program, we can now include a much more
descriptive and readable source code:
``` c
#include <stdio.h>
int main(void) {
/************************************************************************************
* Function: int main(void)
* Input : none
* Output : Returns 0 on success
* Procedure: Prints 2016's return on investment in percent if it is not negative.
************************************************************************************/
int revenue = 80; // as of 2016
int cost = 50;
int roi; // return on investment in percent
roi = (100 * (revenue - cost)) / cost; // formula from accounting book
if (roi >= 0) { // we don't care about negative roi
printf ("%d\n", roi);
}
return 0;
}
```
This gives anyone reading the program an easy way to comprehend what each
function does and how it operates. It also reduces confusion with other
similarly named functions.
A few programmers add a column of stars on the right side of a block
comment:
``` c
/***************************************
* This is a multi line comment *
* that is completely surrounded by a *
* cool, starry border! *
***************************************/
```
But most programmers don\'t put any stars on the right side of a block
comment. They feel that aligning the right side is a waste of time.
Comments written in source files can be used for documenting source code
automatically by using popular tools like Doxygen.[^3][^4]
## References
- Aladdin's C coding guidelines - a more definitive C coding guideline.
- C/C++ Programming Styles - GNU Coding Standards and the Linux Kernel Coding Style.
[^1]: <http://www.oualline.com/vim/vim-cook.html#drawing> Vim cookbook
[^2]: <https://www.kernel.org/doc/html/latest/process/coding-style.html>
Linux Kernel Coding Style
[^3]: \"Coding Conventions for C++ and
Java\"
\"all the block comments illustrated in this document have no pretty
stars on the right side of the block comment. This deliberate choice
was made because aligning those pretty stars is a large waste of
time and discourages the maintenance of in-line comments.\",
[^4]: c2: BigBlocksOfAsterisks; \"Code Craft\" by Pete Goodliffe, page 82;
    Falvotech \"C Programming Style Guide\"; Fedora Directory Server Coding
    Style.
# C Programming/Variables
Like most programming languages, C uses and processes **variables**. In
C, variables are human-readable names for the computer\'s memory
addresses used by a running program. Variables make it easier to store,
read and change the data within the computer\'s memory by allowing you
to associate easy-to-remember labels for the memory addresses that store
your program\'s data. The memory addresses associated with variables
aren\'t determined until after the program is compiled and running on
the computer.
At first, it\'s easiest to imagine variables as placeholders for values,
much like in mathematics. You can think of a variable as being
equivalent to its assigned value. So, if you have a variable *i* that is
**initialized** (set equal) to 4, then it follows that *i + 1* will
equal *5*. However, a skilled C programmer is more mindful of the
invisible layer of abstraction going on just under the hood: that a
variable is a stand-in for the memory address where the data can be
found, not the data itself. You will gain more clarity on this point
when you learn about **pointers**.
Since C is a relatively low-level programming language, before a C
program can utilize memory to store a variable it must claim the memory
needed to store the values for a variable. This is done by **declaring**
variables. Declaring variables is the way in which a C program shows the
number of variables it needs, what they are going to be named, and how
much memory they will need.
Within the C programming language, when managing and working with
variables, it is important to know the *type* of variables and the
*size* of these types. A type's size is the amount of computer memory
required to store one value of this type. Since C is a fairly low-level
programming language, the size of types can be specific to the hardware
and compiler used -- that is, how the language is made to work on one
type of machine can be different from how it is made to work on another.
All variables in C are **typed**. That is, every variable declared must
be assigned as a certain type of variable.
## Declaring, Initializing, and Assigning Variables
Here is an example of declaring an integer, which we\'ve called
`some_number`. (Note the semicolon at the end of the line; that is how
your compiler separates one program *statement* from another.)
``` c
int some_number;
```
This statement tells the compiler to create a variable called
`some_number` and associate it with a memory location on the computer.
We also tell the compiler the type of data that will be stored at that
address, in this case an `int`eger. Note that in C we must specify the
type of data that a variable will store. This lets the compiler know how
much total memory to set aside for the data (on most modern machines an
`int` is 4 bytes in length). We\'ll look at other data types in the next
section.
Multiple variables can be declared with one statement, like this:
``` c
int anumber, anothernumber, yetanothernumber;
```
In early versions of C, variables had to be declared at the beginning of
a block. C99 allows declarations and statements to be mixed arbitrarily,
but doing so is still not common practice: it is rarely necessary, some
compilers do not fully support C99 (portability), and, because it is
still uncommon, it may irritate fellow programmers (maintainability).
After declaring variables, you can assign a value to a variable later on
using a statement like this:
``` c
some_number = 3;
```
Giving a variable its first value is called *initialization*. The
above statement directs the compiler to insert an integer representation
of the number \"3\" into the memory address associated with
`some_number`. We can save a bit of typing by declaring *and* assigning
data to a memory address at the same time:
``` c
int some_new_number = 4;
```
You can also assign a variable the value of another variable, like so:
``` c
some_number = some_new_number;
```
Or assign multiple variables the same value with one statement:
``` c
anumber = anothernumber = yetanothernumber = 8;
```
This works because the assignment `x = y` evaluates to the value
assigned, y. For example, `some_number = 4` evaluates to 4. Thus
`x = y = z` is really shorthand for `x = (y = z)`.
### Naming Variables
Variable names in C are made up of letters (upper and lower case) and
digits. The underscore character (\"\_\") is also permitted. Names must
not begin with a digit. Unlike some languages (such as
Perl and some
BASIC dialects), C does not
use any special prefix characters on variable names.
Some examples of valid (but not very descriptive) C variable names:
``` c
foo
Bar
BAZ
foo_bar
_foo42
_
QuUx
```
Some examples of invalid C variable names:
``` c
2foo (must not begin with a digit)
my foo (spaces not allowed in names)
$foo ($ not allowed -- only letters, digits, and _)
while (language keywords cannot be used as names)
```
As the last example suggests, certain words are reserved as keywords in
the language, and these cannot be used as variable names.
It is not allowed to use the same name for multiple variables in the
same scope. When working with
other developers, you should therefore take steps to avoid using the
same name for global variables or function names. Some large projects
adhere to naming guidelines[^1] to avoid duplicate names and for
consistency.
In addition there are certain sets of names that, while not language
keywords, are reserved for one reason or another. For example, a C
compiler might use certain names \"behind the scenes\", and this might
cause problems for a program that attempts to use them. Also, some names
are reserved for possible future use in the C standard library. The
rules for determining exactly what names are reserved (and in what
contexts they are reserved) are too complicated to describe
here, and as a beginner you don\'t need to worry
about them much anyway. For now, just avoid using names that begin with
an underscore character.
The naming rules for C variables also apply to naming other language
constructs such as function names, struct tags, and macros, all of which
will be covered later.
## Literals
Anytime within a program in which you specify a value explicitly instead
of referring to a variable or some other form of data, that value is
referred to as a **literal**. In the initialization example above, 3 is
a literal. Literals can either take a form defined by their type (more
on that soon), or one can use hexadecimal (hex) notation to directly
insert data into a variable regardless of its
type. Hex numbers are always preceded with *0x*.
For now, though, you probably shouldn\'t be too concerned with hex.
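As a quick illustration (the variable names and values here are purely for demonstration), the same integer can be written either in decimal or in hex:
``` c
#include <stdio.h>

int main(void) {
    int mask = 0xFF;    /* hexadecimal literal: 255 in decimal */
    int same = 255;     /* the same value written as a decimal literal */
    printf("%d %d\n", mask, same);   /* prints "255 255" */
    return 0;
}
```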
## The Four Basic Data Types
In Standard C there are four basic data types. They are **`int`**,
**`char`**, **`float`**, and **`double`**.
### The `int` type
The `int` type stores integers in the form of \"whole numbers\". An
integer is typically the size of one machine word, which on most modern
home PCs is 32 bits (4 octets). Examples of literals are whole numbers
(integers) such as 1, 2, 3, 10, 100\... When `int` is 32 bits (4
octets), it can store any whole number (integer) between -2147483648 and
2147483647. A 32 bit word (number) has the possibility of representing
any one number out of 4294967296 possibilities (2 to the power of 32).
If you want to declare a new int variable, use the `int` keyword. For
example:
``` c
int numberOfStudents, i, j = 5;
```
In this declaration we declare three variables -- numberOfStudents, i,
and j -- of which only j is initialized, with the literal 5.
### The `char` type
The `char` type is capable of holding any member of the execution
character
set.
It stores the same kind of data as an `int` (i.e. integers), but
typically has a size of one byte. The size of a byte is specified by the
macro `CHAR_BIT` which specifies the number of bits in a char (byte). In
standard C it never can be less than 8 bits. A variable of type `char`
is most often used to store character data, hence its name. Most
implementations use the ASCII character set as the
execution character set, but it\'s best not to know or care about that
unless the actual values are important.
Examples of character literals are \'a\', \'b\', \'1\', etc., as well as
some special characters such as \'`\0`\' (the null character) and
\'`\n`\' (newline, recall \"Hello, World\"). Note that the `char` value
must be enclosed within single quotations.
When we initialize a character variable, we can do it two ways. One is
preferred, the other way is ***bad*** programming practice.
The first way is to write:
``` c
char letter1 = 'a';
```
This is *good* programming practice in that it allows a person reading
your code to understand that letter1 is being initialized with the
letter \'a\' to start off with.
The second way, which should *not* be used when you are coding letter
characters, is to write:
``` c
char letter2 = 97; /* in ASCII, 97 = 'a' */
```
This is considered by some to be extremely ***bad*** practice when we are
using it to store a character rather than a small number: anyone reading
your code is forced to look up which character corresponds to the number
97 in the encoding scheme. In the end, `letter1` and `letter2` both store
the same thing -- the letter 'a' -- but the first method is clearer,
easier to debug, and much more straightforward.
One important thing to mention is that characters for numerals are
represented differently from their corresponding numbers: '1' is not
equal to 1. In short, any single character enclosed within 'single
quotes' is a character literal, not a numeric value.
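A minimal sketch of the difference (the variable name is illustrative; the standard guarantees that the digits '0' through '9' are contiguous):
``` c
#include <stdio.h>

int main(void) {
    char digit = '1';
    printf("%d\n", digit == 1);   /* prints 0: the character '1' is not the number 1 */
    printf("%d\n", digit - '0');  /* prints 1: subtracting '0' yields the numeric value */
    return 0;
}
```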
There is one more kind of literal that needs to be explained in
connection with chars: the **string literal**. A string is a series of
characters, usually intended to be displayed. They are surrounded by
double quotations (\" \", not \' \'). An example of a string literal is
the \"Hello, World!\\n\" in the \"Hello, World\" example.
The string literal is assigned to a character **array**; arrays are
described later. Example:
``` c
const char MY_CONSTANT_PEDANTIC_ITCH[] = "learn the usage context.\n";
printf("Square brackets after a variable name means it is a pointer to a string of memory blocks the size of the type of the array element.\n");
```
### The `float` type
`float` is short for **floating point**. It stores inexact
representations of real numbers, both integer and non-integer values. It
can be used with numbers that are much greater than the greatest
possible `int`. `float` literals must be suffixed with F or f. Examples
are: 3.1415926f, 4.0f, 6.022e+23f.
It is important to note that floating-point numbers are inexact. Some
numbers like 0.1f cannot be represented exactly as `float`s but will
have a small error. Very large and very small numbers will have less
precision and arithmetic operations are sometimes not associative or
distributive because of a lack of precision. Nonetheless, floating-point
numbers are most commonly used for approximating real numbers and
operations on them are efficient on modern microprocessors.[^2]
Floating-point arithmetic is
explained in more detail on Wikipedia.
`float` variables can be declared using the `float` keyword. A `float`
is typically four bytes (one 32-bit word) in size, so it is used when
the full precision of a `double` is not required.
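A small sketch of this inexactness on typical IEEE 754 hardware (the exact digits printed may vary by platform):
``` c
#include <stdio.h>

int main(void) {
    float tenth = 0.1f;        /* stored as the nearest representable float */
    printf("%.9f\n", tenth);   /* typically prints 0.100000001 */

    float sum = 0.0f;
    for (int i = 0; i < 10; i++)
        sum += 0.1f;           /* rounding error accumulates across additions */
    printf("%d\n", sum == 1.0f);   /* typically prints 0 */
    return 0;
}
```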
### The `double` type
The `double` and `float` types are very similar. The `float` type allows
you to store single-precision floating point numbers, while the `double`
keyword allows you to store double-precision floating point numbers --
real numbers, in other words. Its size is typically two machine words,
or 8 bytes on most machines. Examples of `double` literals are
3.1415926535897932, 4.0, 6.022e+23 (scientific
notation). If you use 4 instead of
4.0, the 4 will be interpreted as an `int`.
The distinction between floats and doubles was made because of the
differing sizes of the two types. When C was first used, space was at a
minimum and so the judicious use of a float instead of a double saved
some memory. Nowadays, with memory more freely available, you rarely
need to conserve memory like this -- it may be better to use doubles
consistently. Indeed, some C implementations use doubles instead of
floats when you declare a float variable.
If you want to use a double variable, use the `double` keyword.
## `sizeof`
If you have any doubts as to the amount of memory actually used by any
variable (and this goes for types we\'ll discuss later, also), you can
use the **`sizeof`** operator to find out for sure. (For completeness,
it is important to mention that `sizeof` is a unary
operator, not a function.) Its syntax is:
``` c
sizeof object
sizeof(type)
```
The two expressions above return the size of the object and type
specified, in bytes. The return type is `size_t` (defined in the header
`<stddef.h>`) which is an unsigned value. Here\'s an example usage:
``` c
size_t size;
int i;
size = sizeof(i);
```
`size` will be set to 4, assuming `CHAR_BIT` is defined as 8, and an
integer is 32 bits wide. The value of `sizeof`\'s result is the number
of bytes.
Note that when `sizeof` is applied to a `char`, the result is 1; that
is:
``` c
sizeof(char)
```
always returns 1.
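For example, a short program can print the sizes on your own machine (the values in the comments are merely typical, not guaranteed):
``` c
#include <stdio.h>

int main(void) {
    printf("char:   %zu\n", sizeof(char));    /* always 1 */
    printf("short:  %zu\n", sizeof(short));   /* typically 2 */
    printf("int:    %zu\n", sizeof(int));     /* typically 4 */
    printf("double: %zu\n", sizeof(double));  /* typically 8 */
    return 0;
}
```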
## Data type modifiers
One can alter the data storage of any data type by preceding it with
certain modifiers.
**`long`** and **`short`** are modifiers that make it possible for a
data type to use either more or less memory. The `int` keyword need not
follow the `short` and `long` keywords. This is most commonly the case.
A `short` can be used where the values fall within a lesser range than
that of an `int`, typically -32768 to 32767. A `long` can be used to
contain an extended range of values. It is not guaranteed that a `short`
uses less memory than an `int`, nor is it guaranteed that a `long` takes
up more memory than an `int`. It is only guaranteed that sizeof(short)
\<= sizeof(int) \<= sizeof(long). Typically a `short` is 2 bytes, an
`int` is 4 bytes, and a `long` either 4 or 8 bytes. Modern C compilers
also provide `long long` which is typically an 8 byte integer.
In all of the types described above, one bit is used to indicate the
sign (positive or negative) of a value. If you decide that a variable
will never hold a negative value, you may use the **`unsigned`**
modifier to use that one bit for storing other data, effectively
doubling the range of values while mandating that those values be
positive. The `unsigned` specifier also may be used without a trailing
`int`, in which case the size defaults to that of an `int`. There is
also a **`signed`** modifier which is the opposite, but it is not
necessary, except for certain uses of `char`, and seldom used since all
types (except `char`) are signed by default.
The `long` modifier can also be used with `double` to create a
`long double` type. This floating-point type may (but is not required
to) have greater precision than the `double` type.
To use a modifier, just declare a variable with the data type and
relevant modifiers:
``` c
unsigned short int usi; /* fully qualified -- unsigned short int */
short si; /* short int */
unsigned long uli; /* unsigned long int */
```
## `const` qualifier
When the **`const`** qualifier is used, the declared variable must be
initialized at declaration. It is then not allowed to be changed.
While the idea of a variable that never changes may not seem useful,
there are good reasons to use `const`. For one thing, many compilers can
perform some small optimizations on data when it knows that data will
never change. For example, if you need the value of π in your
calculations, you can declare a const variable of `pi`, so a program or
another function written by someone else cannot change the value of
`pi`.
Note that a Standard-conforming compiler must issue a diagnostic if an
attempt is made to change a `const` variable -- but after issuing it, the
compiler is free to translate the program anyway, so `const` is not an
absolute safeguard.
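A minimal sketch (the name `PI` and its use are illustrative):
``` c
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979;     /* must be initialized at declaration */
    double circumference = 2.0 * PI * 5.0;  /* circle with radius 5 */
    /* PI = 3.0; */   /* uncommenting this line draws a diagnostic from the compiler */
    printf("%f\n", circumference);
    return 0;
}
```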
## Magic numbers
When you write C programs, you may be tempted to write code that will
depend on certain numbers. For example, you may be writing a program for
a grocery store. This complex program has thousands upon thousands of
lines of code. The programmer decides to represent the cost of a can of
corn, currently 99 cents, as a literal throughout the code. Now, assume
the cost of a can of corn changes to 89 cents. The programmer must now
go in and manually change each entry of 99 cents to 89. While this is
not that big a problem, considering the \"global find-replace\" function
of many text editors, consider another problem: the cost of a can of
green beans is also initially 99 cents. To reliably change the price,
you have to look at every occurrence of the number 99.
C provides two features for avoiding this problem. They are roughly
equivalent, though each can be more convenient than the other in
particular circumstances.
### Using the `const` keyword
The `const` keyword helps eradicate **magic numbers**. By declaring a
`const` variable such as `corn` at the beginning of a block, a programmer
can simply change that one definition and not have to worry about setting
the value elsewhere.
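For the grocery-store example above, a sketch might look like this (the names and prices are illustrative):
``` c
#include <stdio.h>

int main(void) {
    const int CORN_PRICE_CENTS  = 99;   /* change the price in one place only */
    const int BEANS_PRICE_CENTS = 99;   /* a separate constant, even though the value happens to match */

    printf("%d\n", 3 * CORN_PRICE_CENTS);    /* 297: three cans of corn */
    printf("%d\n", 2 * BEANS_PRICE_CENTS);   /* 198: two cans of green beans */
    return 0;
}
```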
There is also another method for avoiding magic numbers. It is much more
flexible than `const`, and also much more problematic in many ways. It
also involves the preprocessor, as opposed to the compiler. Behold\...
### `#define`
When you write programs, you can create what is known as a *macro*, so
when the computer is reading your code, it will replace all instances of
a word with the specified expression.
Here\'s an example. If you write
``` c
#define PRICE_OF_CORN 0.99
```
when you want to, for example, print the price of corn, you use the word
`PRICE_OF_CORN` instead of the number 0.99 -- the preprocessor will
replace all instances of `PRICE_OF_CORN` with 0.99, which the compiler
will interpret as the literal `double` 0.99. The preprocessor performs
substitution, that is, `PRICE_OF_CORN` is replaced by 0.99 so this means
there is no need for a semicolon.
It is important to note that `#define` has basically the same
functionality as the \"find-and-replace\" function in a lot of text
editors/word processors.
For some purposes, `#define` can be harmfully used, and it is usually
preferable to use `const` if `#define` is unnecessary. It is possible,
for instance, to `#define`, say, a macro `DOG` as the number 3, but if
you try to print the macro, thinking that `DOG` represents a string that
you can show on the screen, the program will have an error. `#define`
also has no regard for type. It disregards the structure of your
program, replacing the text *everywhere* (in effect, disregarding
scope), which could be advantageous in some circumstances, but can be
the source of problematic bugs.
You will see further instances of the `#define` directive later in the
text. It is good convention to write `#define`d words in all capitals,
so a programmer will know that this is not a variable that you have
declared but a `#define`d macro. It is not necessary to end a
preprocessor directive such as `#define` with a semicolon; in fact, some
compilers may warn you about unnecessary tokens in your code if you do.
## Scope
In the Basic Concepts section, the concept of scope was introduced. It
is important to revisit the distinction between local types and global
types, and how to declare variables of each. To declare a local
variable, you place the declaration at the beginning (i.e. before any
non-declarative statements) of the block to which the variable is deemed
to be local. To declare a global variable, declare the variable outside
of any block. If a variable is global, it can be read, and written, from
anywhere in your program.
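A minimal sketch of the difference (the names are illustrative):
``` c
#include <stdio.h>

int counter = 0;           /* global: declared outside every block */

void increment(void) {
    int step = 1;          /* local: exists only while increment() runs */
    counter += step;
}

int main(void) {
    increment();
    increment();
    printf("%d\n", counter);   /* prints 2 */
    return 0;
}
```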
Global variables are not considered good programming practice, and
should be avoided whenever possible. They inhibit code readability,
create naming conflicts, waste memory, and can create difficult-to-trace
bugs. Excessive usage of globals is usually a sign of laziness or poor
design. However, if there is a situation where local variables may
create more obtuse and unreadable code, there\'s no shame in using
globals.
## Other Modifiers
Included here, for completeness, are more of the modifiers that standard
C provides. For the beginning programmer, *static* and *extern* may be
useful. *volatile* is more of interest to advanced programmers.
*register* and *auto* are largely deprecated and are generally not of
interest to either beginning or advanced programmers.
### static
**`static`** is sometimes a useful keyword. It is a common misconception
that its only purpose is to make a variable stay in memory.
When you declare a function or global variable as *static*, you cannot
access the function or variable through the extern (see below) keyword
from other files in your project. This is called *static linkage*.
When you declare a local variable as *static*, it is created just like
any other variable. However, when the variable goes out of scope (i.e.
the block it was local to is finished) the variable stays in memory,
retaining its value. The variable stays in memory until the program
ends. While this behaviour resembles that of global variables, static
variables still obey scope rules and therefore cannot be accessed
outside of their scope. This is called *static storage duration*.
Variables declared static are initialized to zero (or for pointers,
NULL[^3][^4]) by default. They can be initialized explicitly on
declaration to any *constant* value. The initialization is performed just
once, before the program starts running.
You can use static in (at least) two different ways. Consider this code,
and imagine it is in a file called jfile.c:
``` c
#include <stdio.h>
static int j = 0;
void up(void)
{
/* k is set to 0 when the program starts. The line is then "ignored"
* for the rest of the program (i.e. k is not set to 0 every time up()
* is called)
*/
static int k = 0;
j++;
k++;
printf("up() called. k= %2d, j= %2d\n", k , j);
}
void down(void)
{
static int k = 0;
j--;
k--;
printf("down() called. k= %2d, j= %2d\n", k , j);
}
int main(void)
{
int i;
/* call the up function 3 times, then the down function 2 times */
for (i = 0; i < 3; i++)
up();
for (i = 0; i < 2; i++)
down();
return 0;
}
```
The `j` variable is accessible by both up and down and retains its
value. The `k` variables also retain their value, but they are two
different variables, one in each of their scopes. Static variables are a
good way to implement encapsulation, a term from the object-oriented way
of thinking that effectively means not allowing changes to be made to a
variable except through function calls.
Running the program above will produce the following output:
``` c
up() called. k= 1, j= 1
up() called. k= 2, j= 2
up() called. k= 3, j= 3
down() called. k= -1, j= 2
down() called. k= -2, j= 1
```
**Features of `static` variables :**
1. Keyword used - **static**
2. Storage - Memory
3. Default value - Zero
4. Scope - Local to the block in which it is declared
5. Lifetime - Value persists between different function calls
6. Keyword optionality - Mandatory to use the keyword
### extern
**`extern`** is used when a file needs to access a variable in another
file that it may not have `#include`d directly. Therefore, *extern* does
not allocate memory for the new variable, it just provides the compiler
with sufficient information to access a variable declared in another
file.
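A minimal two-file sketch, assuming both files are compiled and linked into the same program (the file and variable names are illustrative):
``` c
/* file1.c -- defines the variable, allocating its storage */
int shared_total = 0;

/* file2.c -- declares the same variable without allocating storage */
extern int shared_total;

void add_to_total(int n) {
    shared_total += n;   /* refers to the object defined in file1.c */
}
```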
**Features of `extern` variable :**
1. Keyword used - **extern**
2. Storage - Memory
3. Default value - Zero
4. Scope - Global (all over the program)
5. Lifetime - Value persists until the program's execution comes to an end
6. Keyword optionality - Optional if declared outside all the functions
### volatile
**`volatile`** is a special type of modifier which informs the compiler
that the value of the variable may be changed by external entities other
than the program itself. This is necessary for certain programs compiled
with optimizations -- if a variable were not defined `volatile` then the
compiler may assume that certain operations involving the variable are
safe to optimize away when in fact they aren\'t. *volatile* is
particularly relevant when working with embedded systems (where a
program may not have complete control of a variable) and multi-threaded
applications.
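A common sketch is a flag set from a signal handler and polled in the main loop; without `volatile` the compiler might cache the flag in a register and never notice the change (the handler and flag names are illustrative):
``` c
#include <signal.h>

static volatile sig_atomic_t stop_requested = 0;

static void handle_sigint(int sig) {
    (void)sig;               /* unused parameter */
    stop_requested = 1;      /* changed outside the normal flow of the program */
}

int main(void) {
    signal(SIGINT, handle_sigint);
    while (!stop_requested) {
        /* do work until the user presses Ctrl-C */
    }
    return 0;
}
```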
### auto
**`auto`** is a modifier which specifies an \"automatic\" variable that
is automatically created when in scope and destroyed when out of scope.
If you think this sounds like pretty much what you\'ve been doing all
along when you declare a variable, you\'re right: all declared items
within a block are implicitly \"automatic\". For this reason, the *auto*
keyword is more like the answer to a trivia question than a useful
modifier, and there are lots of very competent programmers that are
unaware of its existence.
**Features of `automatic` variables :**
1. Keyword used - **auto**
2. Storage - Memory
3. Default value - Garbage value (random value)
4. Scope - Local to the block in which it is defined
5. Lifetime - Value persists while the control remains within the block
6. Keyword optionality - Optional
### register
**`register`** is a hint to the compiler to attempt to optimize the
storage of the given variable by storing it in a register of the
computer\'s CPU when the program is run. Most optimizing compilers do
this anyway, so use of this keyword is often unnecessary. In fact, ANSI
C states that a compiler can ignore this keyword if it so desires -- and
many do. Microsoft Visual C++ is an example of an implementation that
completely ignores the *register* keyword.
**Features of `register` variables :**
1. Keyword used - **register**
2. Storage - CPU registers (values can be retrieved faster than from memory)
3. Default value - Garbage value
4. Scope - Local to the block in which it is defined
5. Lifetime - Value persists while the control remains within the block
6. Keyword optionality - Mandatory to use the keyword
### Concepts
- Variables
- Types
- Data Structures
- Arrays
### In this section
- C variables
- C arrays
## References
[^1]: Examples of naming guidelines are those of the GNOME
Project
or the parts of the Python
interpreter that are
written in C.
[^2]: Representations of real numbers other than floating-point numbers
exist but are not fundamental data types in C. Some C compilers
support fixed-point
arithmetic data types, but
these are not part of standard C. Libraries such as the GNU
Multiple Precision Arithmetic
Library
offer more data types for real numbers and very large numbers.
[^3]: 1 - What is NULL and how is it
defined?
[^4]: 2 - NULL or 0, which should
you use?
# C Programming/Operators and type casting
## Operators and Assignments
C has a wide range of operators that make simple math easy to handle.
The list of operators grouped into precedence levels is as follows:
### Primary expressions
*Identifiers* are names of things in C, and consist of either a letter
or an underscore ( `_` ) optionally followed by letters, digits, or
underscores. An identifier (or variable name) is a primary expression,
provided that it has been declared as designating an object (in which
case it is an lvalue \[a value that can be used as the left side of an
assignment expression\]) or a function (in which case it is a function
designator).
A *constant* is a primary expression. Its type depends on its form and
value. The types of constants are character constants (e.g. `' '` is a
space), integer constants (e.g. `2`), floating-point constants (e.g.
`0.5`), and enumerated constants that have been previously defined via
`enum`.
A *string literal* is a primary expression. It consists of a string of
characters within double quotes ( `"` ).
A parenthesized expression is a primary expression. It consists of an
expression within parentheses ( `(` `)` ). Its type and value are those
of the non-parenthesized expression within the parentheses.
In C11, an expression that starts with `_Generic` followed by (, an
initial expression, a list of values of the form *type: expression*
where type is either a named type or the keyword default, and )
constitutes a primary expression. The value is the expression that
follows the type of the initial expression or the default if not found.
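A small sketch of a C11 `_Generic` selection (the macro name is illustrative):
``` c
#include <stdio.h>

#define type_name(x) _Generic((x), \
    int:    "int",                 \
    double: "double",              \
    default: "other")

int main(void) {
    printf("%s\n", type_name(42));     /* prints "int" */
    printf("%s\n", type_name(3.14));   /* prints "double" */
    printf("%s\n", type_name(3.14f));  /* prints "other": float only matches the default case here */
    return 0;
}
```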
### Postfix operators
First, a primary expression is also a postfix expression. The following
expressions are also postfix expressions:
A postfix expression followed by a left square bracket (`[`), an
expression, and a right square bracket (`]`) in sequence constitutes an
invocation of the *array subscript operator*. One of the expressions
shall have type \"pointer to object *type*\" and the other shall have an
integer type; the result type is *type*. Successive array subscript operators
designate an element of a multidimensional array.
A postfix expression followed by a parenthesized, possibly empty,
argument list indicates an invocation of the *function call operator*.
The value of the function call operator is the return
value of the function called with the provided arguments. The parameters
to the function are copied on the stack **by value** (or at least the
compiler acts as if that is what happens; if the programmer wanted the
parameter to be copied by reference, then it is easier to pass the
address of the area to be modified by value, then the called function
can access the area through the respective pointer). The trend for
compilers is to pass the parameters from right to left onto the stack,
but this is not universal.
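A minimal sketch of the difference between passing a value and passing an address (the function names are illustrative):
``` c
#include <stdio.h>

void set_to_zero_by_value(int n) {
    n = 0;                 /* changes only the local copy */
}

void set_to_zero_by_pointer(int *p) {
    *p = 0;                /* changes the caller's variable through its address */
}

int main(void) {
    int x = 42;
    set_to_zero_by_value(x);
    printf("%d\n", x);     /* still 42 */
    set_to_zero_by_pointer(&x);
    printf("%d\n", x);     /* now 0 */
    return 0;
}
```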
A postfix expression followed by a dot (`.`) followed by an identifier
selects a member from a structure or union; a postfix expression
followed by an arrow (`->`) followed by an identifier selects a member
from the structure or union that is pointed to by the pointer on the
left-hand side of the expression.
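A short sketch of the two member-selection operators (the struct and variable names are illustrative):
``` c
#include <stdio.h>

struct point {
    int x;
    int y;
};

int main(void) {
    struct point p = { 3, 4 };
    struct point *pp = &p;

    printf("%d\n", p.x);     /* dot: select a member of the structure itself */
    printf("%d\n", pp->y);   /* arrow: select a member through a pointer */
    return 0;
}
```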
A postfix expression followed by the increment or decrement operators
(`++` or `--` respectively) indicates that the variable is to be
incremented or decremented as a side effect. The value of the expression
is the value of the postfix expression *before* the increment or
decrement. These operators only work on integers and pointers.
### Unary expressions
First, a postfix expression is a unary expression. The following
expressions are all unary expressions:
The increment or decrement operators followed by a unary expression is a
unary expression. The value of the expression is the value of the unary
expression *after* the increment or decrement. These operators only work
on integers and pointers.
The following operators followed by a cast expression are unary
expressions:
- `&` - Address-of; value is the location of the operand
- `*` - Contents-of; value is what is stored at the location
- `-` - Negation
- `+` - Value-of operator
- `!` - Logical negation ( (!E) is equivalent to (0==E) )
- `~` - Bit-wise complement
The keyword `sizeof` followed by a unary expression is a unary
expression. The value is the size of the type of the expression in
bytes. The expression is not evaluated.
The keyword `sizeof` followed by a parenthesized type name is a unary
expression. The value is the size of the type in bytes.
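The fact that the operand is not evaluated can be demonstrated with a short sketch:
``` c
#include <stdio.h>

int main(void) {
    int i = 5;
    size_t s = sizeof(i++);   /* i++ is not evaluated; only its type is inspected */
    printf("%zu %d\n", s, i); /* i is still 5 */
    return 0;
}
```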
### Cast operators
A unary expression is also a cast expression.
A parenthesized type name followed by any expression, including
literals, is a cast expression. The parenthesized type name has the
effect of forcing the cast expression into the type specified by the
type name in parentheses. For arithmetic types, this either does not
change the value of the expression, or truncates the value of the
expression if the expression is an integer and the new type is smaller
than the previous type.
An example of casting an int as a float:
``` C
int i = 5;
printf("%f\n", (float) i / 2); // Will print out: 2.500000
```
### Multiplicative and additive operators
First, a multiplicative expression is also a cast expression, and an
additive expression is also a multiplicative expression. This follows
the precedence that multiplication happens before addition.
In C, simple math is very easy to handle. The following operators exist:
**+** (addition), **-** (subtraction), **\*** (multiplication), /
(division), and **%** (modulus); You likely know all of them from your
math classes - except, perhaps, modulus. It returns the **remainder** of
a division (e.g. 5 % 2 = 1). (Modulus is not defined for floating-point
numbers, but the *math.h* library has an *fmod* function.)
Care must be taken with the modulus, because it\'s not the equivalent of
the mathematical modulus: (-5) % 2 is not 1, but -1. Division of
integers will return an integer, and the division of a negative integer
by a positive integer will round towards zero instead of rounding down
(e.g. (-5) / 3 = -1 instead of -2). However, it is always true that for
all integer a and nonzero integer b, `((a / b) * b) + (a % b) == a`.
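A short sketch of this behaviour (the operand values are arbitrary):
``` c
#include <stdio.h>

int main(void) {
    int a = -5, b = 3;
    printf("%d\n", a / b);                     /* -1: division rounds toward zero */
    printf("%d\n", a % b);                     /* -2: the remainder takes the sign of the dividend */
    printf("%d\n", ((a / b) * b) + (a % b));   /* -5: the identity always holds */
    return 0;
}
```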
There is no inline operator to do exponentiation (e.g. 5 \^ 2 is **not**
25 \[it is 7; **\^** is the exclusive-or operator\], and 5 \*\* 2 is an
error), but there is a power
function.
The mathematical order of operations does apply. For example (2 + 3) \*
2 = 10 while 2 + 3 \* 2 = 8. Multiplicative operators have precedence
over additive operators.
``` c
#include <stdio.h>
int main(void)
{
int i = 0, j = 0;
/* while i is less than 5 AND j is less than 5, loop */
while( (i < 5) && (j < 5) )
{
/* postfix increment, i++
* the value of i is read and then incremented
*/
printf("i: %d\t", i++);
/*
* prefix increment, ++j
* the value of j is incremented and then read
*/
printf("j: %d\n", ++j);
}
printf("At the end they have both equal values:\ni: %d\tj: %d\n", i, j);
getchar(); /* pause */
return 0;
}
```
will display the following:
i: 0 j: 1
i: 1 j: 2
i: 2 j: 3
i: 3 j: 4
i: 4 j: 5
At the end they have both equal values:
i: 5 j: 5
### The shift operators (which may be used to rotate bits)
A shift expression is also an additive expression (meaning that the
shift operators have a precedence just below addition and subtraction).
Shift functions are often used in low-level I/O hardware interfacing.
Shift and rotate functions are heavily used in cryptography and software
floating point emulation. Other than that, shifts can be used in place
of division or multiplication by a power of two. Many processors have
dedicated function blocks to make these operations fast \-- see
Microprocessor Design/Shift and Rotate
Blocks. On
processors which have such blocks, most C compilers compile shift and
rotate operators to a single assembly-language instruction \-- see X86
Assembly/Shift and Rotate.
#### shift left
The `<<` operator shifts the binary representation to the left, dropping
the most significant bits and appending it with zero bits. The result is
equivalent to multiplying the integer by a power of two.
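For example (the values are arbitrary):
``` c
#include <stdio.h>

int main(void) {
    unsigned int x = 5;
    printf("%u\n", x << 1);   /* 10: shifting left by 1 multiplies by 2 */
    printf("%u\n", x << 3);   /* 40: shifting left by 3 multiplies by 2^3 = 8 */
    return 0;
}
```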
#### unsigned shift right
The unsigned shift right operator, also sometimes called the logical
right shift operator, shifts the binary representation to the right,
dropping the least significant bits and prepending zero bits. For
unsigned integers, the `>>` operator is equivalent to division by a
power of two.
#### signed shift right
The signed shift right operator, also sometimes called the arithmetic
right shift operator, shifts the binary representation to the right,
dropping the least significant bits and prepending copies of the original
sign bit. For signed integers, the `>>` operator is not equivalent to
division.
In C, the behavior of the `>>` operator depends on the data type it acts
on. A signed and an unsigned right shift therefore look exactly the same
in the source code, but can produce different results.
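A short sketch of the difference; note that right-shifting a negative signed value is implementation-defined, so the arithmetic-shift result in the comment is only what most compilers on two's-complement machines produce:
``` c
#include <stdio.h>

int main(void) {
    unsigned int u = 0xFFFFFFF0u;   /* a large unsigned value */
    int s = -16;                    /* the same bit pattern on two's-complement machines */

    printf("%u\n", u >> 2);   /* logical shift: zero bits shifted in (1073741820) */
    printf("%d\n", s >> 2);   /* arithmetic shift on most compilers: -4 */
    return 0;
}
```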
#### rotate right
Contrary to popular belief, it is possible to write C code that compiles
down to the \"rotate\" assembly language instruction (on CPUs that have
such an instruction).
Most compilers recognize this idiom:
``` c
unsigned int x;
unsigned int y;
unsigned int shift;   /* assumed to be in the range 1..31 */
/* ... */
y = (x >> shift) | (x << (32 - shift));
```
and compile it to a single 32 bit rotate instruction. [^1] [^2]
On some systems, this may be \"#define\"ed as a macro or defined as an
inline function called something like \"rightrotate32\" or \"rotr32\" or
\"ror32\" in a standard header file like \"bitops.h\". [^3]
#### rotate left
Most compilers recognize this idiom:
``` c
unsigned int x;
unsigned int y;
unsigned int shift;   /* assumed to be in the range 1..31 */
/* ... */
y = (x << shift) | (x >> (32 - shift));
```
and compile it to a single 32 bit rotate instruction.
On some systems, this may be \"#define\"ed as a macro or defined as an
inline function called something like \"leftrotate32\" or \"rotl32\" in
a header file like \"bitops.h\".
### Relational and equality operators
A relational expression is also a shift expression; an equality
expression is also a relational expression.
The relational binary operators `<` (less than), `>` (greater than),
`<=` (less than or equal), and `>=` (greater than or equal) operators
return a value of 1 if the result of the operation is true, 0 if false.
The result of these operators is type `int`.
The equality binary operators `==` (equals) and `!=` (not equals)
operators are similar to the relational operators except that their
precedence is lower. They also return a value of 1 if the result of the
operation is true and 0 if it is false.
One thing with floating-point numbers and equality operators: because
floating-point operations produce approximations (e.g. 0.1 is a repeating
fraction in binary, so 0.1 + 0.2 is not exactly 0.3), it is unwise to use
the `==` operator with floating-point numbers. Instead, if a and b are
the numbers to compare, compare `fabs(a - b)` against a small tolerance.
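A minimal sketch of such a comparison (the tolerance value is arbitrary and should be chosen to suit the problem):
``` c
#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 0.1 + 0.2;
    double b = 0.3;
    double epsilon = 1e-9;   /* illustrative tolerance */

    printf("%d\n", a == b);                  /* 0 on IEEE 754 hardware */
    printf("%d\n", fabs(a - b) < epsilon);   /* 1: "close enough" for this tolerance */
    return 0;
}
```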
### Bitwise operators
The bitwise operators are `&` (and), `^` (exclusive or) and `|`
(inclusive or). The `&` operator has higher precedence than `^`, which
has higher precedence than `|`.
The values being operated upon must be integral; the result is integral.
One use for the bitwise operators is to emulate bit flags. These flags
can be set with OR, tested with AND, flipped with XOR, and cleared with
AND NOT. For example:
``` c
/* This code is a sample for bitwise operations. */
#define BITFLAG1 (1)
#define BITFLAG2 (2)
#define BITFLAG3 (4) /* They are powers of 2 */
unsigned bitbucket = 0U; /* Clear all */
bitbucket |= BITFLAG1; /* Set bit flag 1 */
bitbucket &= ~BITFLAG2; /* Clear bit flag 2 */
bitbucket ^= BITFLAG3; /* Flip the state of bit flag 3 from off to on or
vice versa */
if (bitbucket & BITFLAG3) {
/* bit flag 3 is set */
} else {
/* bit flag 3 is not set */
}
```
### Logical operators
The logical operators are `&&` (and), and `||` (or). Both of these
operators produce 1 if the relationship is true and 0 for false. Both of
these operators short-circuit; if the result of the expression can be
determined from the first operand, the second is ignored. The `&&`
operator has higher precedence than the `||` operator.
`&&` is used to evaluate expressions left to right, and returns 1 if
*both* operands are true, 0 if either of them is false. If the first
expression is false, the second is not evaluated.
``` c
int x = 7;
int y = 5;
if(x == 7 && y == 5) {
...
}
```
Here, the `&&` operator checks the left-most expression, then the
expression to its right. If there were more than two expressions chained
(e.g. `x && y && z`), the operator would check `x` first, then y (if `x`
is nonzero), then continue rightwards to z if neither x nor y is zero.
Since both statements return true, the `&&` operator returns true, and
the code block is executed.
``` c
if(x == 5 && y == 5) {
...
}
```
The && operator checks in the same way as before, and finds that the
first expression is false. The && operator stops evaluating as soon as
it finds a statement to be false, and returns a false.
`||` is used to evaluate expressions left to right, and returns 1 if
*either* of the expressions is true, 0 if both are false. If the first
expression is true, the second expression is not evaluated.
``` c
/* Use the same variables as before. */
if(x == 2 || y == 5) { // the || statement checks both expressions, finds that the latter is true, and returns true
...
}
```
The `||` operator here checks the left-most expression, finds it false,
but continues to evaluate the next expression. It finds that the next
expression returns true, stops, and returns a 1. Much how the `&&`
operator ceases when it finds an expression that returns false, the `||`
operator ceases when it finds an expression that returns true.
It is worth noting that C did not originally have a dedicated Boolean
type (C99 added `_Bool` and the `<stdbool.h>` header). It instead
interprets 0 as false and any nonzero value as true.
### Conditional operators
The ternary `?:` operator is the conditional operator. The expression
`(x ? y : z)` has the value of `y` if `x` is nonzero, `z` otherwise.
Example:
``` c
int x = 0;
int y;
y = (x ? 10 : 6); /* The parentheses are technically not necessary as assignment
has a lower precedence than the conditional operator, but
it's there for clarity. */
```
The expression `x` evaluates to 0. The ternary operator then looks for
the \"if-false\" value, which in this case, is 6. It returns that, so
`y` is equal to six. Had `x` been non-zero, the expression would have
returned 10.
### Assignment operators
The assignment operators are `=`, `*=`, `/=`, `%=`, `+=`, `-=`, `<<=`,
`>>=`, `&=`, `^=`, and `|=` . The `=` operator stores the value of the
right operand into the location determined by the left operand, which
must be an lvalue (a value that has an address, and
therefore can be assigned to).
For the others, `x op= y` is shorthand for `x = x op (y)` . Hence, the
following expressions are the same:
1. `x += y` is equivalent to `x = x + y`
2. `x -= y` is equivalent to `x = x - y`
3. `x *= y` is equivalent to `x = x * y`
4. `x /= y` is equivalent to `x = x / y`
5. `x %= y` is equivalent to `x = x % y`
The value of the assignment expression is the value of the left operand
after the assignment. Thus, assignments can be chained; e.g. the
expression `a = b = c = 0;` would assign the value zero to all three
variables.
### Comma operator
The operator with the least precedence is the comma operator. The value
of the expression `x, y` will evaluate both `x` and `y`, but provides
the value of `y`.
This operator is useful for including multiple actions in one statement
(e.g. within a for loop conditional).
Here is a small example of the comma operator:
``` c
int i, x; /* Declares two ints, i and x, in one declaration.
Technically, this is not the comma operator. */
/* this loop initializes x and i to 0, then runs the loop */
for (x = 0, i = 0; i <= 6; i++) {
printf("x = %d, and i = %d\n", x, i);
}
```
## References
[^1]: GCC: \"Optimize common rotate
constructs\"
[^2]: \"Cleanups in ROTL/ROTR DAG combiner
code\"
mentions that this code supports the \"rotate\" instruction in the
CellSPU
[^3]: \"replace private copy of bit rotation
routines\"
\-- recommends including \"bitops.h\" and using its rol32 and ror32
rather than copy-and-paste into a new program.
# C Programming/Arrays and strings
Arrays in C act to store related data under a single variable name with
an index, also known as a *subscript*. It is easiest to think of an
array as simply a list or ordered grouping for variables of the same
type. As such, arrays often help a programmer organize collections of
data efficiently and intuitively.
Later we will consider the concept of a *pointer*, fundamental to C,
which extends the nature of the array (array can be termed as a constant
pointer). For now, we will consider just their declaration and their
use.
## Arrays
C arrays are declared in the following form:
``` text
type name[number of elements];
```
For example, if we want an array of six integers (or whole numbers), we
write in C:
``` c
int numbers[6];
```
For a six character array called *letters*,
``` c
char letters[6];
```
and so on.
You can also initialize as you declare. Just put the initial elements in
curly brackets separated by commas as the initial value:
``` text
type name[number of elements]={comma-separated values}
```
For example, if we want to initialize an array with six integers, with
`0, 0, 1, 0, 0, 0` as the initial values:
``` c
int point[6]={0,0,1,0,0,0};
```
Though when the array is initialized as in this case, the array
dimension may be omitted, and the array will be automatically sized to
hold the initial data:
``` c
int point[]={0,0,1,0,0,0};
```
This is very useful in that the size of the array can be controlled by
simply adding or removing initializer elements from the definition
without the need to adjust the dimension.
If the dimension is specified, but not all elements in the array are
initialized, the remaining elements will contain a value of 0. This is
very useful, especially when we have very large arrays.
``` c
int numbers[2000]={245};
```
The above example sets the first value of the array to 245, and the rest
to 0.
If we want to access a variable stored in an array, for example with the
above declaration, the following code will store a 1 in the variable `x`
``` c
int x;
x = point[2];
```
Arrays in C are indexed starting at 0, as opposed to starting at 1. The
first element of the array above is `point[0]`. The index to the last
value in the array is the array size minus one. In the example above the
subscripts run from 0 through 5. C does not guarantee bounds checking on
array accesses. The compiler may not complain about the following
(though the best compilers do):
``` c
char y;
int z = 9;
char point[6] = { 1, 2, 3, 4, 5, 6 };
//examples of accessing outside the array. A compile error is not always raised
y = point[15];
y = point[-4];
y = point[z];
```
During program execution, an out of bounds array access does not always
cause a run time error. Your program may happily continue after
retrieving a value from point\[-1\]. To alleviate indexing problems, the
sizeof() expression is commonly used when coding loops that process
arrays.
Many people use a macro that in turn uses sizeof() to find the number of
elements in an array, a macro variously named \"lengthof()\",[^1]
\"MY_ARRAY_SIZE()\" or \"NUM_ELEM()\",[^2]
\"SIZEOF_STATIC_ARRAY()\",[^3] etc.
``` c
int ix;
short anArray[]= { 3, 6, 9, 12, 15 };
for (ix=0; ix< (sizeof(anArray)/sizeof(short)); ++ix) {
DoSomethingWith("%d", anArray[ix] );
}
```
Notice in the above example, the size of the array was not explicitly
specified. The compiler knows to size it at 5 because of the five values
in the initializer list. Adding an additional value to the list will
cause it to be sized to six, and because of the sizeof expression in the
`for` loop, the code automatically adjusts to this change. Good
programming practice is to declare a variable *size* and store the number
of elements in the array in it:

    size = sizeof(anArray) / sizeof(short);
C also supports multi dimensional arrays (or, rather, arrays of arrays).
The simplest type is a two dimensional array. This creates a rectangular
array - each row has the same number of columns. To get a char array
with 3 rows and 5 columns we write in C
`char two_d[3][5];`
To access/modify a value in this array we need two subscripts:
``` c
char ch;
ch = two_d[2][4];
```
or
``` c
two_d[0][0] = 'x';
```
Similarly, a multi-dimensional array can be initialized like this:
``` c
int two_d[2][3] = {{ 5, 2, 1 },
{ 6, 7, 8 }};
```
The number of columns must be explicitly stated; however, the compiler
will determine the appropriate number of rows from the initializer list.
There are also weird notations possible:
``` c
int a[100];
int i = 0;
if (a[i]==i[a])
{
printf("Hello world!\n");
}
```
a\[i\] and i\[a\] refer to the same location. (This is explained in the
next chapter.)
## Strings
*(Figure: the string \"Merkkijono\" stored in memory.)*
C has no string handling facilities built in; consequently, strings are
defined as arrays of characters. C allows a character array to be
represented by a character string rather than a list of characters, with
the null terminating character automatically added to the end. For
example, to store the string \"Merkkijono\", we would write
``` c
char string[11] = "Merkkijono";
```
or
``` c
char string[11] = {'M', 'e', 'r', 'k', 'k', 'i', 'j', 'o', 'n', 'o', '\0'};
```
In the first example, the string will have a null character
automatically appended to the end by the compiler; by convention,
library functions expect strings to be terminated by a null character.
The latter declaration indicates individual elements, and as such the
null terminator needs to be added manually.
Strings do not always have to be linked to an explicit variable. As you
have seen already, a string of characters can be created directly as an
unnamed string that is used directly (as with the printf functions.)
To create an extra long string, you will have to split the string into
multiple sections, by closing the first section with a quote, and
recommencing the string on the next line (also starting and ending in a
quote):
``` c
char string[58] = "This is a very, very long "
"string that requires two lines.";
```
While strings may also span multiple lines by putting the backslash
character at the end of the line, this method is deprecated.
There is a useful library of string handling routines which you can use
by including another header file.
``` c
#include <string.h> //new header file
```
This standard string library will allow various tasks to be performed on
strings, and is discussed in the Strings
chapter.
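A small sketch of a few of these routines (the details are covered in the Strings chapter):
``` c
#include <stdio.h>
#include <string.h>

int main(void) {
    char name[20];
    strcpy(name, "Merkkijono");                  /* copy a string into the array */
    printf("%zu\n", strlen(name));               /* 10: length, not counting '\0' */
    printf("%d\n", strcmp(name, "Merkkijono"));  /* 0: the strings compare equal */
    return 0;
}
```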
## References
[^1]: Pádraig Brady. \"C and C++
notes\".
[^2]: C Programming/Pointers and
arrays
[^3]: MINC/Reference/MINC1-volumeio-programmers-reference
# C Programming/Program flow control
Very few programs follow exactly one control path and have each
instruction stated explicitly. In order to program effectively, it is
necessary to understand how one can alter the steps taken by a program
due to user input or other conditions, how some steps can be executed
many times with few lines of code, and how programs can appear to
demonstrate a rudimentary grasp of logic. C constructs known as
conditionals and loops grant this power.
From this point forward, it is necessary to understand what is usually
meant by the word *block*. A block is a group of code statements that
are associated and intended to be executed as a unit. In C, the
beginning of a block of code is denoted with { (left curly), and the end
of a block is denoted with }. It is not necessary to place a semicolon
after the end of a block. Blocks can be empty, as in {}. Blocks can also
be nested; i.e. there can be blocks of code within larger blocks.
## Conditionals
There is likely no meaningful program written in which a computer does
not demonstrate basic decision-making skills. It can actually be argued
that there is no meaningful human activity in which some sort of
decision-making, instinctual or otherwise, does not take place. For
example, when driving a car and approaching a traffic light, one does
not think, \"I will continue driving through the intersection.\" Rather,
one thinks, \"I will stop if the light is red, go if the light is green,
and if yellow go only if I am traveling at a certain speed a certain
distance from the intersection.\" These kinds of processes can be
simulated in C using conditionals.
A conditional is a statement that instructs the computer to execute a
certain block of code or alter certain data only if a specific condition
has been met. The most common conditional is the If-Else statement, with
conditional expressions and Switch-Case statements typically used as
more shorthanded methods.
Before one can understand conditional statements, it is first necessary
to understand how C expresses logical relations. C treats logic as being
arithmetic. The value 0 (zero) represents false, and ***all other
values*** represent true. If you chose some particular value to
represent true and then compare values against it, sooner or later your
code will fail when your assumed value (often 1) turns out to be
incorrect. Code written by people uncomfortable with the C language can
often be identified by the usage of #define to make a \"TRUE\" value.
[^1]
Because logic is arithmetic in C, arithmetic operators and logical
operators are one and the same. Nevertheless, there are a number of
operators that are typically associated with logic:
### Relational and Equivalence Expressions:
a \< b: 1 if **a** is less than **b**, 0 otherwise.\
a \> b: 1 if **a** is greater than **b**, 0 otherwise.\
a \<= b: 1 if **a** is less than or equal to **b**, 0 otherwise.\
a \>= b: 1 if **a** is greater than or equal to **b**, 0 otherwise.\
a == b: 1 if **a** is equal to **b**, 0 otherwise.\
a != b: 1 if **a** is not equal to **b**, 0 otherwise
New programmers should take special note of the fact that the \"equal
to\" operator is ==, not =. This is the cause of numerous coding
mistakes and is often a difficult-to-find bug, as the expression
`(a = b)` sets `a` equal to `b` and subsequently evaluates to `b`; but
the expression `(a == b)`, which is usually intended, checks if `a` is
equal to `b`. It needs to be pointed out that, if you confuse = with ==,
your mistake will often not be brought to your attention by the
compiler. A statement such as `if (c = 20) {}` is considered perfectly
valid by the language, but will always assign 20 to `c` and evaluate as
true. A simple technique to avoid this kind of bug (in many, though not
all, cases) is to put the constant first. The compiler will then issue
an error if == is mistyped as =.
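A minimal sketch of the constant-first technique (the values are arbitrary):

``` c
int c = 5;

if (20 == c) {   /* valid comparison; evaluates to 0 here */
    /* ... */
}

/* if (20 = c) would not compile, because a constant cannot be
   assigned to, so the mistyped = is caught immediately */
```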
Note that C does not have a dedicated boolean type as many other
languages do. 0 means false and anything else true. So the following are
equivalent:
``` c
if (foo()) {
// do something
}
```
and
``` c
if (foo() != 0) {
// do something
}
```
Often `#define TRUE 1` and `#define FALSE 0` are used to work around the
lack of a boolean type. This is bad practice, since it makes assumptions
that do not hold. It is a better idea to indicate what you are actually
expecting as a result from a function call, as there are many different
ways of indicating error conditions, depending on the situation.
``` c
if (strstr(bar, "foo") != NULL) {
    // bar contains "foo"
}
```
Here, `strstr` returns a pointer to the first occurrence of the
substring "foo" within `bar`, or a null pointer if the substring is not
found. Note that comparing the result against the `TRUE` definition
mentioned in the previous paragraph would fail, since a valid pointer is
almost never equal to 1; testing the result against `NULL` (or simply
testing the pointer itself) expresses what is actually being checked.
One other thing to note is that the relational expressions do not
evaluate as they would in mathematical texts. That is, an expression
`myMin < value < myMax` does not evaluate as you probably think it
might. Mathematically, this would test whether or not *value* is between
*myMin* and *myMax*. But in C, what happens is that *value* is first
compared with *myMin*. This produces either a 0 or a 1. It is this value
that is compared against myMax. Example:
``` c
int value = 20;
/* ... */
if (0 < value < 10) { // don't do this! it always evaluates to "true"!
/* do some stuff */
}
```
Because *value* is greater than 0, the first comparison produces a value
of 1. Now 1 is compared to be less than 10, which is true, so the
statements in the if are executed. This probably is not what the
programmer expected. The appropriate code would be:
``` c
int value = 20;
/* ... */
if (0 < value && value < 10) { // the && means "and"
/* do some stuff */
}
```
### Logical Expressions
a \|\| b: when EITHER **a** or **b** is true (or both), the result is 1, otherwise the result is 0.\
a && b: when BOTH **a** and **b** are true, the result is 1, otherwise the result is 0.\
!a: when **a** is true, the result is 0; when **a** is 0, the result is 1.
Here\'s an example of a larger logical expression. In the statement:
` e = ((a && b) || (c > d));`
e is set equal to 1 if a and b are non-zero, or if c is greater than d.
In all other cases, e is set to 0.
C uses short circuit evaluation of logical expressions. That is to say,
once it is able to determine the truth of a logical expression, it does
no further evaluation. This is often useful as in the following:
``` c
int myArray[12];
/* .... */
if (i < 12 && myArray[i] > 3) {
    /* .... */
}
```
In this snippet, the comparison of i with 12 is done first. If it
evaluates to 0 (false), **i** would be out of bounds as an index to
**myArray**. In that case, the program never attempts to access
**myArray\[i\]**, since the truth of the whole expression is already
known to be false. Hence we need not worry about accessing an
out-of-bounds array element once it is known that i is 12 or greater. A
similar thing happens with expressions involving the or \|\| operator.
`while (doThis() || doThat()) ...`
doThat() is never called if doThis() returns a non-zero (true) value.
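Short-circuit evaluation is also a common way to guard a pointer dereference; the structure and function below are made up purely for illustration:

``` c
#include <stdio.h>

struct node {
    int value;
    struct node *next;
};

void inspect(const struct node *p)
{
    /* if p is NULL, the left operand is false and p->value is never evaluated */
    if (p != NULL && p->value > 3) {
        printf("value is %d\n", p->value);
    }
}
```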
### The If-Else statement
If-Else provides a way to instruct the computer to execute a block of
code only if certain conditions have been met. The syntax of an If-Else
construct is:
``` c
if (/* condition goes here */) {
/* if the condition is non-zero (true), this code will execute */
} else {
/* if the condition is 0 (false), this code will execute */
}
```
The first block of code executes if the condition in parentheses
directly after the *if* evaluates to non-zero (true); otherwise, the
second block executes.
The *else* and following block of code are completely optional. If there
is no need to execute code if a condition is not true, leave it out.
Also, keep in mind that an *if* can directly follow an *else* statement.
While this can occasionally be useful, chaining more than two or three
if-elses in this fashion is considered bad programming practice. We can
get around this with the Switch-Case construct described later.
Two other general syntax notes need to be made that you will also see in
other control constructs: First, note that there is no semicolon after
*if* or *else*. There could be, but the block (code enclosed in { and })
takes the place of that. Second, if you only intend to execute one
statement as a result of an *if* or *else*, curly braces are not needed.
However, many programmers believe that inserting curly braces anyway in
this case is good coding practice.
The following code sets a variable c equal to the greater of two
variables a and b, or 0 if a and b are equal.
``` c
if (a > b) {
c = a;
} else if (b > a) {
c = b;
} else {
c = 0;
}
```
Consider this question: why can\'t you just forget about *else* and
write the code like:
``` c
if (a > b) {
c = a;
}
if (a < b) {
c = b;
}
if (a == b) {
c = 0;
}
```
There are several answers to this. Most importantly, if your
conditionals are not mutually exclusive, *two* cases could execute
instead of only one. For instance, if the value of a or b changed
somehow during one of the blocks (e.g. you reset the lesser of a and b
to 0 after the comparison), multiple *if* statements could be invoked,
which is not your intent. Also, evaluating *if* conditionals takes
processor time. If you use *else* to handle these situations, then in
the case above, assuming (a \> b) is non-zero (true), the program is
spared the expense of evaluating the additional *if* statements. The
bottom line is that it is usually best to insert an *else* clause for
all cases in which a conditional will not evaluate to non-zero (true).
#### The conditional expression
A conditional expression is a way to set values conditionally in a more
shorthand fashion than If-Else. The syntax is:
`(/* logical expression goes here */) ? (/* if non-zero (true) */) : (/* if 0 (false) */)`
The logical expression is evaluated. If it is non-zero (true), the
overall conditional expression evaluates to the expression placed
between the ? and :, otherwise, it evaluates to the expression after the
:. Therefore, the above example (changing its function slightly such
that c is set to b when a and b are equal) becomes:
`c = (a > b) ? a : b;`
Conditional expressions can sometimes clarify the intent of the code.
Nesting the conditional operator should usually be avoided. It\'s best
to use conditional expressions only when the expressions for a and b are
simple. Also, contrary to a common beginner belief, conditional
expressions do not make for faster code. As tempting as it is to assume
that fewer lines of code result in faster execution times, there is no
such correlation.
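For instance, a conditional expression that keeps both branches simple might look like this (the variable names are just examples):

``` c
int x = -7;
int absolute_value = (x < 0) ? -x : x;                    /* 7 */
const char *sign = (x < 0) ? "negative" : "non-negative";
```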
### The Switch-Case statement
Say you write a program where the user inputs a number from 1 to 5,
corresponding to the student grades A (1) through D (4), plus F (5). The
program stores it in a variable **grade** and responds by printing the
associated letter grade to the screen. If you implemented this using
If-Else, your code would look something like this:
``` c
if (grade == 1) {
printf("A\n");
} else if (grade == 2) {
printf("B\n");
} else if /* etc. etc. */
```
Having a long chain of if-else-if-else-if-else can be a pain, both for
the programmer and anyone reading the code. Fortunately, there\'s a
solution: the Switch-Case construct, of which the basic syntax is:
``` c
switch (/* integer or enum goes here */) {
case /* potential value of the aforementioned int or enum */:
/* code */
case /* a different potential value */:
/* different code */
/* insert additional cases as needed */
default:
/* more code */
}
```
The Switch-Case construct takes a variable, usually an int or an enum,
placed after *switch*, and compares it to the value following the *case*
keyword. If the variable is equal to the value specified after *case*,
the construct \"activates\", or begins executing the code after the case
statement. Once the construct has \"activated\", there will be no
further evaluation of *case*s.
Switch-Case is syntactically \"weird\" in that no braces are required
for code associated with a *case*.
***Very important***: Typically, the last statement for each case is a
break statement. This causes program execution to jump to the statement
following the closing bracket of the switch statement, which is what one
would normally want to happen. However if the break statement is
omitted, program execution continues with the first line of the next
case, if any. This is called a *fall-through*. When a programmer desires
this action, a comment should be placed at the end of the block of
statements indicating the desire to fall through. Otherwise another
programmer maintaining the code could consider the omission of the
\'break\' to be an error, and inadvertently \'correct\' the problem.
Here\'s an example:
``` c
switch (someVariable) {
case 1:
printf("This code handles case 1\n");
break;
case 2:
printf("This prints when someVariable is 2, along with...\n");
/* FALL THROUGH */
case 3:
printf("This prints when someVariable is either 2 or 3.\n" );
break;
}
```
If a *default* case is specified, the associated statements are executed
if none of the other cases match. A *default* case is optional. Going
back to our grades example above, here is what it would look like as a
Switch-Case, corresponding to the sequence of if - else if statements:
``` c
switch (grade) {
case 1:
printf("A\n");
break;
case 2:
printf("B\n");
break;
case 3:
printf("C\n");
break;
case 4:
printf("D\n");
break;
default:
printf("F\n");
break;
}
```
A set of statements to execute can be grouped with more than one value
of the variable as in the following example. (the fall-through comment
is not necessary here because the intended behavior is obvious)
``` c
switch (something) {
case 2:
case 3:
case 4:
/* some statements to execute for 2, 3 or 4 */
break;
case 1:
default:
/* some statements to execute for 1 or other than 2,3,and 4 */
break;
}
```
Switch-Case constructs are particularly useful when used in conjunction
with user defined *enum* data types. Some compilers are capable of
warning about an unhandled enum value, which may be helpful for avoiding
bugs.
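A minimal sketch of switching over a user defined enum (the type and values are invented for illustration):

``` c
#include <stdio.h>

enum color { RED, GREEN, BLUE };

void print_color(enum color c)
{
    switch (c) {
    case RED:
        printf("red\n");
        break;
    case GREEN:
        printf("green\n");
        break;
    case BLUE:
        printf("blue\n");
        break;
    /* deliberately no default: some compilers can then warn when a
       newly added enum value is not handled here */
    }
}
```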
## Loops
Often in computer programming, it is necessary to perform a certain
action a certain number of times or until a certain condition is met. It
is impractical and tedious to simply type a certain statement or group
of statements a large number of times, not to mention that this approach
is too inflexible and unintuitive to be counted on to stop when a
certain event has happened. As a real-world analogy, someone asks a
dishwasher at a restaurant what he did all night. He will respond, \"I
washed dishes all night long.\" He is not likely to respond, \"I washed
a dish, then washed a dish, then washed a dish, then\...\". The
constructs that enable computers to perform certain repetitive tasks are
called loops.
### While loops
A while loop is the most basic type of loop. It will run as long as the
condition is non-zero (true). For example, if you try the following, the
program will appear to lock up and you will have to manually close the
program down. A situation where the conditions for exiting the loop will
never become true is called an infinite loop.
``` c
int a = 1;
while (42) {
a = a * 2;
}
```
Here is another example of a while loop. It prints out all the powers of
two less than 100.
``` c
int a = 1;
while (a < 100) {
printf("a is %d \n", a);
a = a * 2;
}
```
The flow of all loops can also be controlled by **break** and
**continue** statements. A break statement will immediately exit the
enclosing loop. A continue statement will skip the remainder of the
block and start at the controlling conditional statement again. For
example:
``` c
int a = 1;
while (42) { // loops until the break statement in the loop is executed
printf("a is %d ", a);
a = a * 2;
if (a > 100) {
break;
} else if (a == 64) {
continue; // Immediately restarts at while, skips next step
}
printf("a is not 64\n");
}
```
In this example, the computer prints the value of a as usual, and prints
a notice that a is not 64 (unless it was skipped by the continue
statement).
Similar to If above, braces for the block of code associated with a
While loop can be omitted if the code consists of only one statement,
for example:
``` c
int a = 1;
while (a < 100)
a = a * 2;
```
This will simply keep doubling a until a is no longer less than 100.
When the computer reaches the end of the while loop, it always goes back
to the while statement at the top of the loop, where it re-evaluates the
controlling condition. If that condition is \"true\" at that instant \--
even if it was temporarily 0 for a few statements inside the loop \--
then the computer begins executing the statements inside the loop again;
otherwise the computer exits the loop. The computer does not
\"continuously check\" the controlling condition of a while loop during
the execution of that loop. It only \"peeks\" at the controlling
condition each time it reaches the `while` at the top of the loop.
It is very important to note that even if the controlling condition of a
While loop becomes 0 (false) partway through an iteration, the loop will
not terminate until the block of code is finished and the conditional is
reevaluated at the top. If you need to terminate a While loop
immediately upon reaching a certain condition, consider using **break**.
A common idiom is to write:
``` c
int i = 5;
while (i--) {
printf("java and c# can't do this\n");
}
```
This executes the code in the while loop 5 times, with i having values
that range from 4 down to 0 (inside the loop). Conveniently, these are
the values needed to access every item of an array containing 5
elements.
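A sketch of that idiom used to walk an array from its last element to its first (the array contents are arbitrary):

``` c
#include <stdio.h>

int main(void)
{
    int items[5] = {10, 20, 30, 40, 50};
    int i = 5;

    while (i--) {   /* i is 4, 3, 2, 1, 0 inside the loop */
        printf("items[%d] = %d\n", i, items[i]);
    }
    return 0;
}
```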
### For loops
For loops generally look something like this:
`for (`*`initialization`*`; `*`test`*`; `*`increment`*`) {`\
` /* code */`\
`}`
The *initialization* statement is executed exactly once - before the
first evaluation of the *test* condition. Typically, it is used to
assign an initial value to some variable, although this is not strictly
necessary. The *initialization* statement can also be used to declare
and initialize variables used in the loop.
The *test* expression is evaluated each time before the code in the
*for* loop executes. If this expression evaluates as 0 (false) when it
is checked (i.e. if the expression is not true), the loop is not
(re)entered and execution continues normally at the code immediately
following the FOR-loop. If the expression is non-zero (true), the code
within the braces of the loop is executed.
After each iteration of the loop, the *increment* statement is executed.
This often is used to increment the loop index for the loop, the
variable initialized in the initialization expression and tested in the
test expression. Following this statement execution, control returns to
the top of the loop, where the *test* action occurs. If a *continue*
statement is executed within the *for* loop, the increment statement
would be the next one executed.
Each of these parts of the for statement is optional and may be omitted.
Because of the free-form nature of the for statement, some fairly fancy
things can be done with it. Often a for loop is used to loop through
items in an array, processing each item at a time.
``` c
int myArray[12];
int ix;
for (ix = 0; ix < 12; ix++) {
myArray[ix] = 5 * ix + 3;
}
```
The above for loop initializes each of the 12 elements of myArray. The
loop index can start from any value. In the following case it starts
from 1.
``` c
for (ix = 1; ix <= 10; ix++) {
printf("%d ", ix);
}
```
which will print
**`1 2 3 4 5 6 7 8 9 10`**
You will most often use loop indexes that start from 0, since arrays are
indexed at zero, but you will sometimes use other values to initialize a
loop index as well.
The *increment* action can do other things, such as *decrement*. So this
kind of loop is common:
``` c
for (i = 5; i > 0; i--) {
printf("%d ", i);
}
```
which yields
**`5 4 3 2 1`**
Here\'s an example where the test condition is simply a variable. If the
variable has a value of 0 or NULL, the loop exits, otherwise the
statements in the body of the loop are executed.
``` c
for (t = list_head; t; t = NextItem(t)) {
/* body of loop */
}
```
A WHILE loop can be used to do the same thing as a FOR loop; however, a
FOR loop is a more condensed way to perform a set number of repetitions,
since all of the necessary information is in a single one-line statement.
A FOR loop can also be given no conditions, for example:
``` c
for (;;) {
/* block of statements */
}
```
This is called an infinite loop since it will loop forever unless there
is a break statement within the statements of the for loop. The empty
test condition effectively evaluates as true.
It is also common to use the comma operator in for loops to execute
multiple statements.
``` c
int i, j, n = 10;
for (i = 0, j = 0; i <= n; i++, j += 2) {
printf("i = %d , j = %d \n", i, j);
}
```
Special care should be taken when designing or refactoring the
conditional part: whether to use \< or \<=, whether the start and stop
values need to be adjusted by 1, and whether prefix or postfix
increment/decrement is used. (On a 100-yard promenade with a tree every
10 yards there are 11 trees.)
``` c
int i, n = 10;
for (i = 0; i < n; i++)
printf("%d ", i); // processed n times => 0 1 2 3 ... (n-1)
printf("\n");
for (i = 0; i <= n; i++)
printf("%d ", i); // processed (n+1) times => 0 1 2 3 ... n
printf("\n");
for (i = n; i--;)
printf("%d ", i); // processed n times => (n-1) ...3 2 1 0
printf("\n");
for (i = n; --i;)
printf("%d ", i); // processed (n-1) times => (n-1) ...4 3 2 1
printf("\n");
```
### Do-While loops
A DO-WHILE loop is a post-check while loop, which means that it checks
the condition after each run. As a result, even if the condition is zero
(false), it will run at least once. It follows the form of:
``` c
do {
/* do stuff */
} while (condition);
```
Note the terminating semicolon. This is required for correct syntax.
Since this is also a type of while loop, **break** and **continue**
statements within the loop function accordingly. A **continue**
statement causes a jump to the test of the condition and a *break*
statement exits the loop.
It is worth noting that Do-While and While are functionally almost
identical, with one important difference: Do-While loops are always
guaranteed to execute at least once, but While loops will not execute at
all if their condition is 0 (false) on the first evaluation.
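As a small example of where the at-least-once behaviour is useful, a Do-While loop can re-prompt until the input is acceptable (the prompt text and limits are just illustrative):

``` c
#include <stdio.h>

int main(void)
{
    int n;

    do {
        printf("Enter a number between 1 and 10: ");
        if (scanf("%d", &n) != 1) {
            return 1;   /* give up on non-numeric input */
        }
    } while (n < 1 || n > 10);   /* condition is checked after each pass */

    printf("You entered %d\n", n);
    return 0;
}
```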
## One last thing: goto
**goto** is a very simple and traditional control mechanism. It is a
statement used to immediately and unconditionally jump to another line
of code. To use goto, you must place a label at a point in your program.
A label consists of a name followed by a colon (:) on a line by itself.
Then, you can type \"goto *label*;\" at the desired point in your
program. The code will then continue executing beginning with *label*.
This looks like:
``` c
MyLabel:
/* some code */
goto MyLabel;
```
The ability to transfer the flow of control enabled by gotos is so
powerful that, in addition to the simple if, all other control
constructs can be written using gotos instead. Here, we can let \"S\"
and \"T\" be any arbitrary statements:
``` c
if (cond) {
    S;
} else {
    T;
}
/* ... */
```
The same statement could be accomplished using two gotos and two labels:
``` c
if (cond) goto Label1;
T;
goto Label2;
Label1:
S;
Label2:
/* ... */
```
Here, the first goto is conditional on the value of \"cond\". The second
goto is unconditional. We can perform the same translation on a loop:
``` c
while (cond1) {
    S;
    if (cond2)
        break;
    T;
}
/* ... */
```
```
Which can be written as:
``` c
Start:
if (!cond1) goto End;
S;
if (cond2) goto End;
T;
goto Start;
End:
/* ... */
```
```
As these cases demonstrate, the structure of what your program is
doing can usually be expressed without using gotos. Undisciplined use of
gotos can create unreadable, unmaintainable code when more idiomatic
alternatives (such as if-elses, or for loops) can better express your
structure. Theoretically, the goto construct does not ever *have* to be
used, but there are cases when it can increase readability, avoid code
duplication, or make control variables unnecessary. You should consider
first mastering the idiomatic solutions, and use goto only when
necessary. Keep in mind that many, if not most, C style guidelines
*strictly forbid* use of **goto**, with the only common exceptions being
the following examples.
One use of goto is to break out of a deeply nested loop. Since **break**
will not work (it can only escape one loop), **goto** can be used to
jump completely outside the loop. Breaking outside of deeply nested
loops without the use of the goto is always possible, but often involves
the creation and testing of extra variables that may make the resulting
code far less readable than it would be with **goto**. A second accepted
use of **goto** is to undo actions in an orderly fashion when a function
fails partway through, typically to avoid failing to free memory that
had been allocated.
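Here is a sketch of both of these accepted uses; the sizes, names and allocation pattern are only illustrative:

``` c
#include <stdio.h>
#include <stdlib.h>

int find_target(int grid[10][10], int target)
{
    int row, col;

    for (row = 0; row < 10; row++) {
        for (col = 0; col < 10; col++) {
            if (grid[row][col] == target) {
                goto found;   /* leaves both loops at once */
            }
        }
    }
    return 0;   /* not found */

found:
    printf("found at row %d, column %d\n", row, col);
    return 1;
}

int allocate_pair(void)
{
    char *a = malloc(100);
    char *b = NULL;

    if (a == NULL) {
        goto fail;
    }
    b = malloc(100);
    if (b == NULL) {
        goto fail;
    }

    /* ... use a and b ... */

    free(b);
    free(a);
    return 0;

fail:                /* single, orderly cleanup path */
    free(b);         /* free(NULL) is a harmless no-op */
    free(a);
    return -1;
}
```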
Another accepted use is the creation of a state machine. This is a
fairly advanced topic though, and not commonly needed.
## Examples
``` c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
int years;
printf("Enter your age in years : ");
fflush(stdout);
errno = 0;
if (scanf("%d", &years) != 1 || errno)
return EXIT_FAILURE;
printf("Your age in days is %d\n", years * 365);
return 0;
}
```
## References
[^1]: C FAQ
# C Programming/Procedures and functions
In C programming, all executable code resides within a **function**.
Note that other programming languages may distinguish between a
\"function\", \"subroutine\", \"subprogram\", \"procedure\", or
\"method\" \-- in C, these are all functions. Functions are a
fundamental feature of any high level programming language and make it
possible to tackle large, complicated tasks by breaking tasks into
smaller, more manageable pieces of code.
At a lower level, a function is nothing more than a memory address where
the instructions associated with a function reside in your computer\'s
memory. In the source code, this memory address is usually given a
descriptive name which programmers can use to **call** the function and
execute the instructions that begin at the function\'s starting address.
The instructions associated with a function are frequently referred to
as a **block** of code. After the function\'s instructions finish
executing, the function can return a value and code execution will
resume with the instruction that immediately follows the initial call to
the function. If this doesn\'t make immediate sense to you, don\'t
worry. Understanding what is happening inside your computer at the
lowest levels can be confusing at first, but will eventually become very
intuitive as you develop your C programming skills.
For now, it\'s enough to know that a function and its associated block
of code is often executed (called) several times, from several different
places, during a single execution of a program.
As a basic example, suppose you are writing a program that calculates
the distance of a given (x,y) point to the x-axis and to the y-axis. You
will need to compute the absolute value of the whole numbers x and y. We
could write it like this (assuming we don\'t have a predefined function
for absolute value in any library):
``` c
#include <stdio.h>
/*this function computes the absolute value of a whole number.*/
int abs(int x)
{
if (x>=0) return x;
else return -x;
}
/*this program calls the abs() function defined above twice.*/
int main()
{
int x, y;
printf("Type the coordinates of a point in 2-plane, say P = (x,y). First x=");
scanf("%d", &x);
printf("Second y=");
scanf("%d", &y);
printf("The distance of the P point to the x-axis is %d. \n Its distance to the y-axis is %d. \n", abs(y), abs(x));
return 0;
}
```
The next example illustrates the usage of a function as a procedure.
It\'s a simplistic program that asks students for their grade for three
different courses and tells them if they passed a course. Here, we
created a function, called `check()` that can be called as many times as
we need to. The function saves us from having to write the same set of
instructions for each class the student has taken.
``` c
#include<stdio.h>
/*the 'check' function is defined here.*/
void check(int x)
{
if (x<60)
printf("Sorry! You will need to try this course again.\n");
else
printf("Enjoy your vacation! You have passed.\n");
}
/*the program starts here at the main() function, which calls the check() function three times.*/
int main()
{
int a, b, c;
printf("Type your grade in Mathematics (whole number). \n");
scanf("%d", &a);
check(a);
printf("Type your grade in Science (whole number). \n");
scanf("%d", &b);
check(b);
printf("Type your grade in Programming (whole number). \n");
scanf("%d", &c);
check(c);
/* this program should be replaced by something more meaningful.*/
return 0;
}
```
Notice that in the program above, there is no outcome value for the
\'check\' function. It only executes a procedure.
This is precisely what functions are for.
## More on functions
It\'s useful to conceptualize a function like a machine in a factory. On
the input side of the machine, you dump in the \"raw materials,\" or the
input data, that you want the machine to process. Then the machine goes
to work and spits out a finished product, the \"return value,\" to
the output side of the machine which you can collect and use for other
purposes.
In C, you must tell the machine exactly what raw materials it is
expected to process and what kind of finished product you want the
machine to return to you. If you supply the machine with different raw
materials than it expects, or if you try to return a product that\'s
different than what you told the machine to produce, the C compiler will
throw an error.
Note that a function isn\'t required to take any inputs. It doesn\'t
have to return anything back to us, either. If we modify the example
above to ask the user for their grade inside the `check` function, there
would be no need to pass the grade value into the function. And notice
that the `check` function doesn\'t pass a value back. The function just prints
out a message to the screen.
You should be familiar with some basic terminology related to functions:
- A function, call it *f*, that uses another function *g*, is said to
*call* *g*. For example, *f* calls *g* to print the squares of ten
numbers. *f* is referred to as the *caller* function and *g* is the
*callee*.
- The inputs we send to a function are called its *arguments*. When we
declare our function, we describe the *parameters* that determine
what type of *arguments* are acceptable to pass into the function.
We describe these parameters to the compiler inside a set of
parentheses next to the function\'s name.
- A function *g* that gives some kind of answer back to *f* is said to
*return* that answer or value. For example, *g* returns the sum of
its arguments.
## Writing functions in C
It\'s always good to learn by example. Let\'s write a function that will
return the square of a number.
``` c
int square(int x)
{
int square_of_x;
square_of_x = x * x;
return square_of_x;
}
```
To understand how to write a function like this, it may help to
look at what this function does as a whole. It takes in an `int`, x, and
squares it, storing it in the variable square_of_x. Now this value is
returned.
The first int at the beginning of the function declaration is the type
of data that the function returns. In this case when we square an
integer we get an integer, and we are returning this integer, and so we
write `int` as the return type.
Next is the name of the function. It is good practice to use meaningful
and descriptive names for functions you may write. It may help to name
the function after what it is written to do. In this case we name the
function \"square\", because that\'s what it does - it squares a number.
Next is the function\'s first and only argument, an `int`, which will be
referred to in the function as x. This is the function\'s *input*.
In between the braces is the actual guts of the function. It declares an
integer variable called square_of_x that will be used to hold the value
of the square of x. Note that the variable square_of_x can **only** be
used within this function, and not outside. We\'ll learn more about this
sort of thing later, and we will see that this property is very useful.
We then assign x multiplied by x, or x squared, to the variable
square_of_x, which is what this function is all about. Following this is
a `return` statement. We want to return the value of the square of x, so
we must say that this function returns the contents of the variable
square_of_x.
Finally, we add the closing brace, and we have finished the declaration.
Written in a more concise manner, this code performs exactly the same
function as the above:
``` c
int square(int x)
{
return x * x;
}
```
Note that this should look familiar: you have been writing functions
already. In fact, main is a function that every C program defines.
### In general
In general, if we want to declare a function, we write
` `*`type`*` `*`name`*`(`*`type1`*` `*`arg1`*`, `*`type2`*` `*`arg2`*`, ...)`\
` {`\
` /* `*`code`*` */`\
` } `
We\'ve previously said that a function can take no arguments, or can
return nothing, or both. What do we write if we want the function to
return nothing? We use C\'s `void` keyword. `void` basically means
\"nothing\" - so if we want to write a function that returns nothing,
for example, we write
``` c
void sayhello(int number_of_times)
{
int i;
for(i=1; i <= number_of_times; i++) {
printf("Hello!\n");
}
}
```
Notice that there is no `return` statement in the function above. Since
there\'s none, we write `void` as the return type. (Actually, one can
use the `return` keyword in a procedure to return to the caller before
the end of the procedure, but one cannot return a value as if it were a
function.)
What about a function that takes no arguments? If we want to do this, we
can write for example
``` c
float calculate_number(void)
{
float to_return=1;
int i;
for(i=0; i < 100; i++) {
to_return += 1;
to_return = 1/to_return;
}
return to_return;
}
```
Notice this function doesn\'t take any inputs, but merely returns a
number calculated by this function.
Naturally, you can combine both void return and void in arguments
together to get a valid function, also.
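A trivial sketch combining the two:

``` c
#include <stdio.h>

/* takes no arguments and returns nothing */
void print_banner(void)
{
    printf("=====================\n");
    printf(" Welcome to the demo \n");
    printf("=====================\n");
}
```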
### Recursion
Here\'s a simple function that does an infinite loop. It prints a line
and calls itself, which again prints a line and calls itself again, and
this continues until the stack overflows and the program crashes. A
function calling itself is called recursion, and normally you will have
a conditional that would stop the recursion after a small, finite number
of steps.
``` c
// don't run this!
void infinite_recursion()
{
printf("Infinite loop!\n");
infinite_recursion();
}
```
A simple check can be done like this. Note that ++depth is used so the
increment will take place before the value is passed into the function.
Alternatively you can increment on a separate line before the recursion
call. If you say print_me(3,0); the function will print the line
Recursion 3 times.
``` c
void print_me(int j, int depth)
{
if(depth < j) {
printf("Recursion! depth = %d j = %d\n",depth,j); //j keeps its value
print_me(j, ++depth);
}
}
```
Recursion is most often used for jobs such as directory tree scans,
searching for the end of a linked list, parsing a tree structure in a
database, and factorising numbers (or finding primes), among other
things.
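A classic, minimal example of recursion with a proper base case is the factorial function (no overflow checking is attempted here):

``` c
/* returns n! for small non-negative n */
unsigned long factorial(unsigned int n)
{
    if (n <= 1) {
        return 1;                    /* base case stops the recursion */
    }
    return n * factorial(n - 1);     /* recursive step */
}
```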
### Static functions
If a function is to be called only from within the file in which it is
declared, it is appropriate to declare it as a static function. When a
function is declared static, the compiler will know to compile an object
file in a way that prevents the function from being called from code in
other files. Example:
``` c
static int compare( int a, int b )
{
return (a+4 < b)? a : b;
}
```
## Using C functions
We can now *write* functions, but how do we use them? When we write
main, we place the function outside the braces that encompass main.
When we want to use that function, say, using our `calculate_number`
function above, we can write something like
``` c
float f;
f = calculate_number();
```
If a function takes in arguments, we can write something like
``` c
int square_of_10;
square_of_10 = square(10);
```
If a function doesn\'t return anything, we can just say
``` c
sayhello(3);
```
since we don\'t need a variable to catch its return value.
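Putting the pieces together, a complete sketch might look like this; note that the function is defined outside the braces of main and is simply called from inside it:

``` c
#include <stdio.h>

int square(int x)
{
    return x * x;
}

int main(void)
{
    int square_of_10;

    square_of_10 = square(10);
    printf("10 squared is %d\n", square_of_10);
    return 0;
}
```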
## Functions from the C Standard Library
While the C language itself does not provide any built-in functions, it
is usually linked with the C Standard Library. To use this library, you
need to add an #include directive at the top of the C file, which may be
one of the following headers from C89/C90:
- `<assert.h>`
- `<ctype.h>`
- `<errno.h>`
- `<float.h>`
- `<limits.h>`
- `<locale.h>`
- `<math.h>`
- `<setjmp.h>`
- `<signal.h>`
- `<stdarg.h>`
- `<stddef.h>`
- `<stdio.h>`
- `<stdlib.h>`
- `<string.h>`
- `<time.h>`
The functions available are:
- **`<assert.h>`**: assert(int)
- **`<ctype.h>`**: isalnum, isalpha, isblank; iscntrl, isdigit, isgraph; islower, isprint, ispunct; isspace, isupper, isxdigit; tolower, toupper
- **`<errno.h>`**: (errno)
- **`<float.h>`**: (constants)
- **`<limits.h>`**: (constants only)
- **`<locale.h>`**: struct lconv\* localeconv(void); char\* setlocale(int, const char\*)
- **`<math.h>`**: sin, cos, tan; asin, acos, atan, atan2; sinh, cosh, tanh; ceil; exp; fabs; floor; fmod; frexp; ldexp; log, log10; modf; pow; sqrt
- **`<setjmp.h>`**: int setjmp(jmp_buf env); void longjmp(jmp_buf env, int value)
- **`<signal.h>`**: int raise(int sig); void\* signal(int sig, void (\*func)(int))
- **`<stdarg.h>`**: va_start(va_list, ap); va_arg(ap, (type)); va_end(ap); va_copy(va_list, va_list)
- **`<stddef.h>`**: offsetof macro
- **`<stdio.h>`**: fclose; fopen, freopen; remove; rename; rewind; tmpfile; clearerr; feof, ferror; fflush; fgetpos, fsetpos; fgetc, fputc; fgets, fputs; ftell, fseek; fread, fwrite; getc, putc; getchar, putchar, fputchar; gets, puts; printf, vprintf; fprintf, vfprintf; sprintf, snprintf, vsprintf, vsnprintf; perror; scanf, vscanf; fscanf, vfscanf; sscanf, vsscanf; setbuf, setvbuf; tmpnam; ungetc
- **`<stdlib.h>`**: atof(char\*), atoi(char\*), atol(char\*); strtod(char \*str, char \*\*endptr), strtol(char \*str, char \*\*endptr), strtoul(char \*str, char \*\*endptr); rand(), srand(); malloc(size_t), calloc(size_t elements, size_t elementSize), realloc(void\*, int); free(void\*); exit(int), abort(); atexit(void (\*func)()); getenv; system; qsort(void \*, size_t number, size_t size, int (\*sortfunc)(void\*, void\*)); abs, labs; div, ldiv
- **`<string.h>`**: memcpy, memmove; memchr, memcmp, memset; strcat, strncat, strchr, strrchr; strcmp, strncmp, strcoll; strcpy, strncpy; strerror; strlen; strspn, strcspn; strpbrk; strstr; strtok; strxfrm
- **`<time.h>`**: asctime(struct tm\* tmptr); clock_t clock(); char\* ctime(const time_t\* timer); double difftime(time_t timer2, time_t timer1); struct tm\* gmtime(const time_t\* timer); struct tm\* gmtime_r(const time_t\* timer, struct tm\* result); struct tm\* localtime(const time_t\* timer); time_t mktime(struct tm\* ptm); time_t time(time_t\* timer); char \*strptime(const char\* buf, const char\* format, struct tm\* tptr); time_t timegm(struct tm \*brokentime)
## Variable-length argument lists
Functions with variable-length argument lists are functions that can
take a varying number of arguments. An example in the C standard library
is the `printf` function, which can take any number of arguments
depending on how the programmer wants to use it.
C programmers rarely find the need to write new functions with
variable-length arguments. If they want to pass a bunch of things to a
function, they typically define a structure to hold all those things \--
perhaps a linked list, or an array \-- and call that function with the
data in the arguments.
However, you may occasionally find the need to write a new function that
supports a variable-length argument list. To create a function that can
accept a variable-length argument list, you must first include the
standard library header `stdarg.h`. Next, declare the function as you
would normally. Next, add as the last argument an ellipsis (\"\...\").
This indicates to the compiler that a variable list of arguments is to
follow. For example, the following function declaration is for a
function that returns the average of a list of numbers:
``` c
float average (int n_args, ...);
```
Note that because of the way variable-length arguments work, we must
somehow, in the arguments, specify the number of elements in the
variable-length part of the arguments. In the `average` function here,
it\'s done through an argument called `n_args`. In the `printf`
function, it\'s done with the format codes that you specify in that
first string in the arguments you provide.
Now that the function has been declared as using variable-length
arguments, we must next write the code that does the actual work in the
function. To access the numbers stored in the variable-length argument
list for our `average` function, we must first declare a variable for
the list itself:
``` c
va_list myList;
```
The `va_list` type is a type declared in the `stdarg.h` header that
basically allows you to keep track of your list. To start actually using
`myList`, however, we must first assign it a value. After all, simply
declaring it by itself wouldn\'t do anything. To do this, we must call
`va_start`, which is actually a macro defined in `stdarg.h`. In the
arguments to `va_start`, you must provide the `va_list` variable you
plan on using, as well as the name of the last variable appearing before
the ellipsis in your function declaration:
``` c
#include <stdarg.h>
float average (int n_args, ...)
{
va_list myList;
va_start (myList, n_args);
va_end (myList);
}
```
Now that `myList` has been prepped for usage, we can finally start
accessing the variables stored in it. To do so, use the `va_arg` macro,
which pops off the next argument on the list. In the arguments to
`va_arg`, provide the `va_list` variable you\'re using, as well as the
primitive data type (e.g. `int`, `char`) that the variable you\'re
accessing should be:
``` c
#include <stdarg.h>
float average (int n_args, ...)
{
va_list myList;
va_start (myList, n_args);
int myNumber = va_arg (myList, int);
va_end (myList);
}
```
By popping `n_args` integers off of the variable-length argument list,
we can manage to find the average of the numbers:
``` c
#include <stdarg.h>
float average (int n_args, ...)
{
va_list myList;
va_start (myList, n_args);
int numbersAdded = 0;
int sum = 0;
while (numbersAdded < n_args) {
int number = va_arg (myList, int); // Get next number from list
sum += number;
numbersAdded += 1;
}
va_end (myList);
float avg = (float)(sum) / (float)(numbersAdded); // Find the average
return avg;
}
```
By calling `average (2, 10, 20)`, we get the average of `10` and `20`,
which is `15`.
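For reference, here is a complete, runnable version of this sketch, with a small main added to exercise it:

``` c
#include <stdarg.h>
#include <stdio.h>

float average(int n_args, ...)
{
    va_list myList;
    int numbersAdded = 0;
    int sum = 0;

    va_start(myList, n_args);
    while (numbersAdded < n_args) {
        sum += va_arg(myList, int);   /* pop the next int off the list */
        numbersAdded += 1;
    }
    va_end(myList);

    return (float)sum / (float)numbersAdded;
}

int main(void)
{
    printf("average of 10 and 20 is %g\n", average(2, 10, 20));
    printf("average of 1, 2 and 6 is %g\n", average(3, 1, 2, 6));
    return 0;
}
```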
# C Programming/Standard libraries
The **C standard library** is a standardized collection of
header files and library routines used to implement
common operations, such as input/output and character string handling.
Unlike other languages (such as COBOL, Fortran, and PL/I) C does not
include built in keywords for these tasks, so nearly all C programs rely
on the standard library to operate.
## History
The C programming language originally did not provide any elementary
functions, such as I/O operations. Over time, user communities of C
shared ideas and implementations to provide those functions. These ideas
became common, and were eventually incorporated into the definition of
the standardized C programming language in 1989. These are now called
the **C standard libraries**.
Both Unix and C were created at AT&T\'s Bell Laboratories in the late
1960s and early 1970s. During the 1970s the C programming language
became increasingly popular, with many universities and organizations
beginning to create their own variations of the language for their own
projects. By the start of the 1980s compatibility problems between the
various C implementations became apparent. In 1983 the American National
Standards Institute (ANSI) formed a committee to establish a standard
specification of C known as \"ANSI C\". This work culminated in the
creation of the so-called **C89** standard in 1989. Part of the
resulting standard was a set of software libraries called the **ANSI C
standard library**.
Later revisions of the C standard have added several new required header
files to the library. Support for these new extensions varies between
implementations.
The headers **\<iso646.h\>**, **\<wchar.h\>**, and **\<wctype.h\>** were
added with Normative Addendum 1 (hereafter abbreviated as **NA1**), an
addition to the C Standard ratified in 1995.
The headers **\<complex.h\>**, **\<fenv.h\>**, **\<inttypes.h\>**,
**\<stdbool.h\>**, **\<stdint.h\>**, and **\<tgmath.h\>** were added
with **C99**, a revision to the C Standard published in 1999.
## Design
The declaration of each function is kept in a header file, while the
actual implementations of the functions are separated into a library file.
The naming and scope of headers have become common but the organization
of libraries still remains diverse. The standard library is usually
shipped along with a compiler. Since C compilers often provide extra
functions that are not specified in ANSI C, a standard library with a
particular compiler is mostly incompatible with standard libraries of
other compilers.
Much of the C standard library has proven to be
well-designed. A few parts, with the benefit of hindsight, are regarded
as mistakes. The string input functions `gets()` (and the use of
`scanf()` to read string input) are the source of many buffer overflows,
and most programming guides recommend avoiding this usage. Another
oddity is `strtok()`, a function that is designed as a primitive
lexical analyser but is highly
\"fragile\" and difficult to use.
## ANSI Standard
The ANSI C standard library consists of 24 C header files which can be
included into a programmer\'s project with a single directive. Each
header file contains one or more function declarations, data type
definitions and macros. The contents of these header files follow.
In comparison to some other languages (for example Java) the standard
library is minuscule. The library provides a basic set of mathematical
functions, string manipulation, type conversions, and file and
console-based I/O. It does not include a standard set of \"container
types\" like the C++ Standard Template Library, let alone the complete
graphical user interface (GUI) toolkits, networking tools, and profusion
of other functions that Java provides as standard. The main advantage of
the small standard library is that providing a working ANSI C
environment is much easier than it is with other languages, and
consequently porting C to a new platform is relatively easy.
Many other libraries have been developed to supply equivalent functions
to that provided by other languages in their standard library. For
instance, the GNOME desktop environment project has developed the GTK+
graphics toolkit and GLib, a library of container data structures, and
there are many other well-known examples. Out of the variety of
libraries available, some superior toolkits have proven themselves over
time. The considerable downside is that they often do not work
particularly well together, that programmers are often familiar with
different sets of libraries, and that a different set of them may be
available on any particular platform.
### ANSI C library header files
----------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
**\<assert.h\>** Contains the assert macro, used to assist with detecting logical errors and other types of bug in debugging versions of a program.
**\<complex.h\>** A set of functions for manipulating complex numbers. (New with **C99**)
**\<ctype.h\>** This header file contains functions used to classify characters by their types or to convert between upper and lower case in a way that is independent of the used character set (typically ASCII or one of its extensions, although implementations utilizing EBCDIC are also known).
**\<errno.h\>** For testing error codes reported by library functions.
**\<fenv.h\>** For controlling floating-point environment. (New with **C99**)
**\<float.h\>** Contains defined constants specifying the implementation-specific properties of the floating-point library, such as the minimum difference between two different floating-point numbers (\_EPSILON), the maximum number of digits of accuracy (\_DIG) and the range of numbers which can be represented (\_MIN, \_MAX).
**\<inttypes.h\>** For precise conversion between integer types. (New with **C99**)
**\<iso646.h\>** For programming in ISO 646 variant character sets. (New with **NA1**)
**\<limits.h\>** Contains defined constants specifying the implementation-specific properties of the integer types, such as the range of numbers which can be represented (\_MIN, \_MAX).
**\<locale.h\>** For setlocale() and related constants. This is used to choose an appropriate locale.
**\<math.h\>** For computing common mathematical functions \-- see ../Further math/ or C++ Programming/Code/Standard C Library/Math for details.
**\<setjmp.h\>** Declares the macros setjmp/longjmp\|setjmp and longjmp, which are used for non-local exits
**\<signal.h\>** For controlling various exceptional conditions
**\<stdarg.h\>** For accessing a varying number of arguments passed to functions.
**\<stdbool.h\>** For a boolean data type. (New with **C99**)
**\<stdint.h\>** For defining various integer types. (New with **C99**)
**\<stddef.h\>** For defining several useful types and macros.
**\<stdio.h\>** Provides the core input and output capabilities of the C language. This file includes the venerable `printf` function.
**\<stdlib.h\>** For performing a variety of operations, including conversion, pseudo-random numbers, memory allocation, process control, environment, signalling, searching, and sorting.
**\<string.h\>** For manipulating several kinds of strings.
**\<tgmath.h\>** For type-generic mathematical functions. (New with **C99**)
**\<time.h\>** For converting between various time and date formats.
**\<wchar.h\>** For manipulating wide streams and several kinds of strings using wide characters - key to supporting a range of languages. (New with **NA1**)
**\<wctype.h\>** For classifying wide characters. (New with **NA1**)
----------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## Common support libraries
C programs may also depend on a runtime library of routines (not itself
part of the standard) which contain code the compiler uses at runtime. The code that
initializes the process for the operating system, for example, before
calling `main()`, is implemented in the C Run-Time Library for a given
vendor\'s compiler. The Run-Time Library code might help with other
language feature implementations, like handling uncaught exceptions or
implementing floating point code.
The C standard library only documents that the specific routines
mentioned in this article are available, and how they behave. Because
the compiler implementation might depend on these additional
implementation-level functions to be available, it is likely the
vendor-specific routines are packaged with the C Standard Library in the
same module, because they\'re both likely to be needed by any program
built with their toolset.
Though often confused with the C Standard Library because of this
packaging, the C Runtime Library is not a standardized part of the
language and is vendor-specific.
## Compiler built-in functions
Some compilers (for example, GCC) provide built-in
versions of many of the functions in the C standard library; that is,
the implementations of the functions are written into the compiled
object file, and the program calls the built-in versions instead of the
functions in the C library shared object file. This reduces function
call overhead, especially if function calls are replaced with inline
variants, and allows other forms of optimization (as the compiler knows
the control-flow characteristics of the built-in variants), but may
cause confusion when debugging (for example, the built-in versions
cannot be replaced with instrumented variants).
## POSIX standard library
POSIX, (along with the Single Unix Specification), specifies a number of
routines that should be available over and above those in the C standard
library proper; these are often implemented alongside the C standard
library functions, with varying degrees of closeness. For example, glibc
implements functions such as fork within libc.so, but before NPTL was
merged into glibc it constituted a separate library with its own linker
flag. Often, this POSIX-specified function will be regarded as part of
the library; the C library proper may be identified as the ANSI or ISO C
library.
The following libraries are recognized by POSIX:
- **c** This option shall make available all interfaces referenced in the System Interfaces volume of POSIX.1-2008, with the possible exception of those interfaces listed as residing in \<aio.h\>, \<arpa/inet.h\>, \<complex.h\>, \<fenv.h\>, \<math.h\>, \<mqueue.h\>, \<netdb.h\>, \<net/if.h\>, \<netinet/in.h\>, \<pthread.h\>, \<sched.h\>, \<semaphore.h\>, \<spawn.h\>, \<sys/socket.h\>, pthread_kill() and pthread_sigmask() in \<signal.h\>, \<trace.h\>, interfaces marked as optional in \<sys/mman.h\>, interfaces marked as ADV (Advisory Information) in \<fcntl.h\>, and interfaces beginning with the prefix clock\_ or time\_ in \<time.h\>. This option shall not be required to be present to cause a search of this library.
- **l** This option shall make available all interfaces required by the C-language output of lex that are not made available through the -l c option. (The flex program, a clone of lex, uses fl instead of l.)
- **pthread** This option shall make available all interfaces referenced in \<pthread.h\> and pthread_kill() and pthread_sigmask() referenced in \<signal.h\>. An implementation may search this library in the absence of this option.
- **m** This option shall make available all interfaces referenced in \<math.h\>, \<complex.h\>, and \<fenv.h\>. An implementation may search this library in the absence of this option.
- **rt** This option shall make available all interfaces referenced in \<aio.h\>, \<mqueue.h\>, \<sched.h\>, \<semaphore.h\>, and \<spawn.h\>, interfaces marked as optional in \<sys/mman.h\>, interfaces marked as ADV (Advisory Information) in \<fcntl.h\>, and interfaces beginning with the prefix clock\_ and time\_ in \<time.h\>. An implementation may search this library in the absence of this option.
- **trace** This option shall make available all interfaces referenced in \<trace.h\>. An implementation may search this library in the absence of this option.
- **xnet** This option shall make available all interfaces referenced in \<arpa/inet.h\>, \<netdb.h\>, \<net/if.h\>, \<netinet/in.h\>, and \<sys/socket.h\>. An implementation may search this library in the absence of this option.
- **y** This option shall make available all interfaces required by the C-language output of yacc that are not made available through the -l c option. (Some clones of yacc, including bison and byacc, include the entire library in the generated file, so it is not necessary to use -l y.)
## References
pl:C/Biblioteka standardowa
# C Programming/Beginning exercises
## Variables
### Naming
1. Can a variable name start with a number?
2. Can a variable name start with a typographical symbol (e.g. #, \*, \_)?
3. Give an example of a C variable name that would *not* work. Why
doesn\'t it work?
### Data Types
1. List at least three data types in C
1. On your computer, how much memory does each require?
2. Which ones can be used in place of another? Why?
1. Are there any limitations on these uses?
2. If so, what are they?
3. Is it necessary to do anything special to use the
alternative?
2. Can the name we use for a data type (e.g. \'int\', \'float\') be
used as a variable?
### Assignment
1. How would you assign the value 3.14 to a variable called pi?
2. Is it possible to assign an *int* to a *double*?
1. Is the reverse possible?
### Referencing
1. A common mistake for new students is reversing the assignment
statement. Suppose you want to assign the value stored in the
variable \"pi\" to another variable, say \"pi2\":
1. What is the correct statement?
2. What is the reverse? Is this a valid C statement (even if it
gives incorrect results)?
3. What if you wanted to assign a constant value (like 3.1415) to
\"pi2\":
: **a**. What would the correct statement look like?
: **b**. Would the reverse be a valid or invalid C statement?
## Simple I/O
### String manipulation
1\. Write a program that prompts the user for a string (pick a maximum
length), and prints its reverse.
2\. Write a program that prompts the user for a sentence (again, pick a
maximum length), and prints each word on its own line.
### Loops
1\. Write a function that outputs a right isosceles triangle of height
and width *n*, so *n = 3* would look like
```
*
**
***
```
2\. Write a function that outputs a sideways triangle of height *2n-1*
and width *n*, so the output for *n = 4* would be:
```
*
**
***
****
***
**
*
```
3\. Write a function that outputs a right-side-up triangle of height *n*
and width *2n-1*; the output for *n = 6* would be:
```
     *
    ***
   *****
  *******
 *********
***********
```
## Program Flow
1\. Build a program where control passes from main to four different
functions with 4 calls.
2\. Now make a while loop in main with the function calls inside it. Ask
for input at the beginning of the loop. End the while loop if the user
hits Q
3\. Next add conditionals to call the functions when the user enters
numbers, so 1 goes to function1, 2 goes to function 2, etc.
4\. Have function 1 call function a, which calls function b, which calls
function c
5\. Draw out a diagram of program flow, with arrows to indicate where
control goes
## Functions
1\. Write a function to check whether an integer is positive; the declaration
should look like `bool is_positive(int i);`
2\. Write a function to raise a floating point number to an integer
power, so for example to when you use it
``` c
float a = raise_to_power(2, 3); //a gets 8
float b = raise_to_power(9, 2); //b gets 81
float raise_to_power(float f, int power); //make this your declaration
```
## Math
1\. Write a function to calculate if a number is prime. Return 1 if it
is prime and 0 if it is not a prime.
2\. Write a function to determine the number of prime numbers below n.
3\. Write a function to find the square root by using Newton\'s
method.
4\. Write functions to evaluate the trigonometric functions.
5\. Try to write a random number generator.
6\. Write a function to determine the prime number(s) between 2 and 100.
## Recursion
#### Merge sort
1\. Write a C program to generate a random integer array with a given
length n , and sort it recursively using the Merge sort algorithm.
- The merge sort algorithm is a recursive algorithm.
- Sorting a one-element array is easy.
- Sorting two one-element arrays requires the merge operation. The merge
  operation looks at the two sorted arrays as lists, compares the heads of
  the lists, and whichever head is smaller is put on the sorted list and
  ticked off, so the next element becomes the head of that list. This is
  done until one of the lists is exhausted, and the other list is then
  copied onto the end of the sorted list.
- The recursion occurs because merging two one-element arrays produces one
  sorted two-element array, which can be merged with another sorted
  two-element array produced the same way. This produces a sorted
  four-element array, and the same applies to another sorted four-element
  array.
- So the basic merge sort is to check the size of the list to be sorted,
  and if it is greater than one, divide the array into two halves and call
  merge sort again on each half. Afterwards, merge the two halves in a
  temporary space of equal size, and then copy the final sorted result back
  onto the original array (see the sketch below).
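The following is one minimal sketch of this scheme, not a reference solution; the function and variable names (`merge_sort`, `tmp`) are only illustrative, and the scratch array is allocated once by the caller:

``` c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* sorts a[lo..hi-1], using tmp as scratch space of the same length as a */
static void merge_sort(int *a, int *tmp, int lo, int hi)
{
    if (hi - lo <= 1)
        return;                         /* a one-element array is already sorted */
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);        /* sort the two halves recursively */
    merge_sort(a, tmp, mid, hi);

    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)           /* merge: take the smaller head each time */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];  /* copy whichever list is left over */
    while (j < hi)  tmp[k++] = a[j++];
    for (k = lo; k < hi; k++)           /* copy the merged result back */
        a[k] = tmp[k];
}

int main(void)
{
    int n = 10;
    int *a = malloc(n * sizeof *a);
    int *tmp = malloc(n * sizeof *tmp);
    if (a == NULL || tmp == NULL)
        return 1;
    srand((unsigned) time(NULL));
    for (int i = 0; i < n; i++)
        a[i] = rand() % 100;            /* random test data */
    merge_sort(a, tmp, 0, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    free(a);
    free(tmp);
    return 0;
}
```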
#### Binary heaps
2\. **Binary heaps**:

- A binary max-heap or min-heap is an ordered structure where some nodes
  are guaranteed to be greater than other nodes, e.g. the parent vs. its
  two children. A binary heap can be stored in an array, where, given a
  position **i** (the parent), **i\*2** is the left child and **i\*2+1** is
  the right child.
- (C arrays begin at position 0, but 0 \* 2 = 0 and 0 \* 2 + 1 = 1, which
  is incorrect, so start the heap at position 1, or add 1 for
  parent-to-child calculations and subtract 1 for child-to-parent
  calculations.)
- Try to model this with pencil and paper, using 10 random unsorted
  numbers and inserting each of them into a "heapsort" array of 10
  elements.
- To insert into a heap, **end-add** and **swap-parent** while the new
  element is higher, until the parent is higher.
- To delete the top of a heap, move **end-to-top**, and
  **defer-higher-child** or **sift-down**, until no child is higher.
- Try it on pen and paper with the numbers 10, 4, 6, 3, 5, 11.
- The answer was 11, 5, 10, 3, 4, 6.
- EXERCISE: Now try removing each top element of 11, 5, 10, 3, 4, 6, using
  end-to-top and sift-down (or swap-higher-child), to get the numbers in
  descending order.
- A priority queue allows elements to be inserted with a priority and
  extracted according to priority. (This is particularly useful when the
  element has a paired structure, one part being the key and the other
  part the data. Otherwise, it is just a mechanism for sorting.)
- EXERCISE: Using the above technique of insert-at-end/challenge-parent and
  delete-front/last-to-front/defer-higher-child, implement either heap sort
  or a priority queue (a minimal sketch of the two heap operations is given
  below).
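Below is a minimal sketch under the assumptions above (1-based indexing inside the array, a max-heap); `heap_push` and `heap_pop` are illustrative names, not standard functions:

``` c
#include <stdio.h>

#define MAX 64
static int heap[MAX + 1];   /* index 0 is unused so parent/child math stays simple */
static int heap_size = 0;

void heap_push(int value)   /* end-add, then swap with the parent while larger */
{
    int i = ++heap_size;
    heap[i] = value;
    while (i > 1 && heap[i] > heap[i / 2]) {
        int t = heap[i]; heap[i] = heap[i / 2]; heap[i / 2] = t;
        i /= 2;
    }
}

int heap_pop(void)          /* end-to-top, then sift down */
{
    int top = heap[1];
    heap[1] = heap[heap_size--];
    int i = 1;
    for (;;) {
        int child = 2 * i;
        if (child > heap_size)
            break;
        if (child + 1 <= heap_size && heap[child + 1] > heap[child])
            child++;        /* defer to the higher child */
        if (heap[i] >= heap[child])
            break;
        int t = heap[i]; heap[i] = heap[child]; heap[child] = t;
        i = child;
    }
    return top;
}

int main(void)
{
    int input[] = { 10, 4, 6, 3, 5, 11 };
    for (int i = 0; i < 6; i++)
        heap_push(input[i]);
    while (heap_size > 0)
        printf("%d ", heap_pop());   /* prints the numbers in descending order */
    printf("\n");
    return 0;
}
```

Running it on 10, 4, 6, 3, 5, 11 builds the heap 11, 5, 10, 3, 4, 6 mentioned above and then prints the values in descending order.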
#### Dijkstra\'s algorithm
Dijkstra\'s algorithm is a searching algorithm using a priority queue.
It begins by inserting the start node with a priority value of 0. All
other nodes are inserted with a large priority value N. Each node has an
adjacency list of other nodes, a current distance to the start node, and a
previous pointer to the node through which its current distance was
calculated. An alternative to an adjacency list is an adjacency matrix,
which needs n × n entries.
The algorithm iterates over the priority queue, removing the front node,
examining its adjacent nodes, and updating each adjacent node with a
distance equal to the front node's distance plus the edge distance given
by the adjacency information.
After each node's update, the extra operation **"update priority"** is
used on that node:
*while* the node's distance is less than its parent node's (for this
priority queue, parents have lesser distances than their children), the
node is swapped with the parent.
After this, *while* the node has a greater distance than one or more of its
children, it is swapped with the least distant child, so the least distant
child becomes the parent of its more distant sibling and of the more
distant current node.
When a node's priority is updated, its back pointer is also changed to the
current node.
The algorithm ends when the target node becomes the current node removed
from the queue. The path to the start node can then be recorded in an
array by following back pointers, and the order of the array reversed
(much like a quick sort partition swap) to give the shortest path from the
start node to the target node. A compact sketch follows.
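The sketch below keeps the distance updates and back pointers described above, but for brevity it stores the graph in a small adjacency matrix and finds the closest unvisited node with a linear scan rather than the binary-heap priority queue; the graph data and the constant `INF` are made up for the example:

``` c
#include <stdio.h>

#define N 5
#define INF 1000000

int dist[N], prev[N], visited[N];

void dijkstra(int adj[N][N], int start)
{
    for (int i = 0; i < N; i++) {
        dist[i] = INF;      /* "large N" initial priority */
        prev[i] = -1;       /* no back pointer yet */
        visited[i] = 0;
    }
    dist[start] = 0;

    for (int k = 0; k < N; k++) {
        /* pick the unvisited node with the smallest distance (the "front" node) */
        int u = -1;
        for (int i = 0; i < N; i++)
            if (!visited[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        if (u == -1 || dist[u] == INF)
            break;
        visited[u] = 1;

        /* update every node adjacent to u (0 means "no edge") */
        for (int v = 0; v < N; v++)
            if (adj[u][v] && dist[u] + adj[u][v] < dist[v]) {
                dist[v] = dist[u] + adj[u][v];
                prev[v] = u;    /* back pointer used to recover the path */
            }
    }
}

int main(void)
{
    int adj[N][N] = {
        {0, 4, 1, 0, 0},
        {4, 0, 2, 5, 0},
        {1, 2, 0, 8, 0},
        {0, 5, 8, 0, 3},
        {0, 0, 0, 3, 0},
    };
    dijkstra(adj, 0);
    /* follow back pointers from the target; this prints the path in reverse */
    for (int v = 4; v != -1; v = prev[v])
        printf("%d ", v);
    printf("\n");
    return 0;
}
```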
#### Quick sort
3\. Write a C program to recursively sort using the Quick sort partition
exchange algorithm.
- You can use the "driver", or the random number test data, from Q1 on
  merge sort. This is "re-use", which is encouraged in general. An
  advantage of reuse is less writing time, debugging time, and testing
  time.
- the concept of partition exchange is that a partition element is
(randomly) selected, and every thing that needs sorted is put into 3
equivalence
classes : those elements less than the partition value, the partition
element, and everything above (and equal to ) the partition element.
- this can be done without allocating more space than one temporary
element space for swapping two elements. e.g a temporary integer for
integer data.
- However, the final position of the partition element within the original
  array space is not known in advance.
- This is usually implemented by putting the partition element at the end
  of the array to be sorted, and then placing two pointers, one at the
  start of the array and one at the element next to the partition element,
  and repeatedly scanning the left pointer right and the right pointer
  left.
- the left scan stops when an element equal to or greater than the
partition is found, and the right scan stops for a smaller element
than the partition value,
and these are swapped, which uses the temporary extra space.
- the left scan will always stop if it reaches the partition element ,
which is the last element; this means the entire array is less than
partition value.
- the right scan could reach the first element, if the entire array is
greater than the partition , and this needs to be tested for, else
the scan doesn\'t stop.
- the outer loop exits when then left and right pointers cross.
Testing for pointer crossing and outer loop exit
should occur before swapping, otherwise the right pointer may be
swapping a less-than-partition element previously scanned by the left
pointer.
- finally, the partition element needs to be put between the left and
right partitions, once the pointers cross.
- At pointer crossing, the left pointer may have stopped at the partition
  element's position (the last element of the array), with the right
  pointer not having progressed past the element just before it. This
  happens when all the elements are less than the partition.
- If the right pointer were chosen to swap with the partition, an incorrect
  state would result where the last element of the left array becomes less
  than the partition value.
- If the left pointer is chosen to swap with the partition, then the left
  array will be less than the partition, and the partition will have
  swapped either with an element whose value is greater than the partition
  or with itself.
- The corner case of quicksorting a 2-element **out-of-order** array has to
  be examined.
- The left pointer stops on the first **out-of-order** element. The right
  pointer begins on the first **out-of-order** element, but the outer loop
  exits because this is the leftmost position. The partition element is
  then swapped with the element at the left pointer, and the two elements
  are now **in order**.
- In the case of a 2-element **in-order** array, the left pointer skips the
  first element, which is less than the partition, and stops on the
  partition. The right pointer begins on the first element and exits
  because it is the first position. The pointers have crossed, so the outer
  loop exits. The partition swaps with itself, so the ordering is
  preserved.
- After doing a swap, the left pointer should be incremented and the right
  pointer decremented, so the same positions aren't scanned again;
  otherwise an endless loop can result (for instance when the left pointer
  stops on an element equal to the partition value and the right pointer
  stops on another element equal to the partition value). One
  implementation, Sedgewick's, starts the left pointer one before and the
  right pointer one after the intended initial scan positions, and uses the
  pre-increment and pre-decrement operators (e.g. ++i, --i).
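A minimal sketch of one possible partition-exchange implementation, following the scheme above (partition element kept at the end, pointers started one outside the intended scan positions, pre-increment/pre-decrement); the test data is arbitrary:

``` c
#include <stdio.h>

static void quick_sort(int *a, int lo, int hi)   /* sorts a[lo..hi] inclusive */
{
    if (lo >= hi)
        return;
    int pivot = a[hi];              /* partition element kept at the end */
    int i = lo - 1, j = hi;         /* one outside the initial scan positions */
    for (;;) {
        while (a[++i] < pivot)      /* left scan stops at >= pivot (the pivot at worst) */
            ;
        while (a[--j] > pivot)      /* right scan stops at <= pivot ... */
            if (j == lo)            /* ... or at the first element */
                break;
        if (i >= j)                 /* pointers crossed: test before swapping */
            break;
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;   /* put the pivot between the partitions */
    quick_sort(a, lo, i - 1);
    quick_sort(a, i + 1, hi);
}

int main(void)
{
    int a[] = { 9, 2, 7, 4, 4, 1, 8 };
    int n = (int)(sizeof a / sizeof a[0]);
    quick_sort(a, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}
```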
fr:Exercices en langage C
et:Programmeerimiskeel
C/Harjutused
pl:C/Ćwiczenia dla
początkujących
# C Programming/Advanced data types
In the chapter Variables we looked at the
primitive data types. However *advanced* data types allow us greater
flexibility in managing data in our program.
## Structs
Structs are data types made of variables of other data types (possibly
including other structs). They are used to group pieces of information
into meaningful units, and also permit some constructs not possible
otherwise. The variables declared in a struct are called \"members\".
One defines a struct using the `struct` keyword. For example:
``` c
struct mystruct {
int int_member;
double double_member;
char string_member[25];
} struct_var;
```
`struct_var` is a variable of type `struct mystruct`, which we declared
along with the definition of the new `struct mystruct` data type. More
commonly, struct variables are declared after the definition of the
struct, using the form:
``` c
struct mystruct struct_var;
```
It is common practice to make a *type synonym* so we don\'t have
to type \"struct mystruct\" all the time. C allows us the possibility to
do so using a `typedef` statement, which aliases a type:
``` c
typedef struct {
// ...
} Mystruct;
```
The `struct` itself is an *incomplete* type (by the absence of a name on
the first line), but it is aliased as `Mystruct`. Then the following may
be used:
``` c
Mystruct struct_var;
```
The members of a struct variable may be accessed using the member access
operator `.` (a dot) or the indirect member access operator `->` (an
arrow) if the struct variable is a pointer:
``` c
struct_var.int_member = 0;
struct_var->int_member = 0; // this statement is equivalent to: (*struct_var).int_member = 0;
```
(Pointers will be explained in the next chapter.) Structs may contain
not only their own variables but may also contain variables pointing to
other structs. This allows a recursive definition, which is very
powerful when used with pointers:
``` c
struct restaurant_order {
char description[100];
double price;
struct restaurant_order *next_order;
};
```
This is an implementation of the linked list
data structure. Each node (a restaurant order) is pointing to one other
node. The linked list is terminated on the last node (in our example,
this would be the last order) whose `next_order` variable would be
assigned to `NULL`.
A recursive struct definition can be tricky when used with `typedef`. It
is not possible to declare a struct variable inside its own type by
using its aliased definition, since the aliased definition by `typedef`
does not exist before the `typedef` statement is evaluated:
``` c
typedef struct Mystruct {
// ...
struct Mystruct *pointer; // Mystruct *pointer; would cause a compile-time error
} Mystruct;
```
The size of a struct type is at least the sum of the sizes of all its
members. But a compiler is free to insert padding bytes between the
struct members to align the members to certain constraints. For example,
a struct containing a char and a float will occupy 8 bytes on many
32-bit architectures.
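As a quick, implementation-dependent illustration, the following prints the component sizes and the padded struct size (commonly 1, 4 and 8, though the exact padding varies by compiler and platform):

``` c
#include <stdio.h>

struct padded {
    char c;     /* 1 byte, typically followed by 3 padding bytes */
    float f;    /* usually 4 bytes, aligned to a 4-byte boundary */
};

int main(void)
{
    /* the amount of padding is implementation-defined, so the last value may differ */
    printf("%zu %zu %zu\n", sizeof(char), sizeof(float), sizeof(struct padded));
    return 0;
}
```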
## Unions
The definition of a union is similar to that of a struct. The difference
between the two is that in a struct, the members occupy different areas
of memory, but in a union, the members occupy the same area of memory.
Thus, in the following type, for example:
``` c
union {
int i;
double d;
} u;
```
The programmer can access either `u.i` or `u.d`, but not both at the
same time. Since `u.i` and `u.d` occupy the same area of memory,
modifying one modifies the value of the other, sometimes in
unpredictable ways. This is also the main reason that unions are rarely
seen in practice.
The size of a union is the size of its largest member.
## Enumerations
Enumerations are artificial data types representing associations between
labels and integers. Unlike structs or unions, they are not composed of
other data types. An example declaration:
``` c
enum color {
red,
orange,
yellow,
green,
cyan,
blue,
purple,
} crayon_color;
```
In the example above, red equals 0, orange equals 1, and so on. It is
possible to assign explicit values to labels, within the range of `int`,
but each value must be an integer constant expression.
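For example, explicit values can be mixed with implicit ones; a label without a value continues counting from the previous one:

``` c
enum status {
    ok = 0,
    warning = 10,
    error,       /* error is 11, continuing from warning */
};
```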
Similar declaration syntax that applies for structs and unions also
applies for enums. Also, one *normally* doesn\'t need to be concerned
with the integers that labels represent:
``` c
enum weather weather_outside = rain;
```
This peculiar property makes enums especially convenient in switch-case
statements:
``` c
enum weather {
sunny,
windy,
cloudy,
rain,
} weather_outside;
// ...
switch (weather_outside) {
case sunny:
wear_sunglasses();
break;
case windy:
wear_windbreaker();
break;
case cloudy:
get_umbrella();
break;
case rain:
get_umbrella();
wear_raincoat();
break;
}
```
Using enum labels as array indices is a simple way to emulate associative arrays in C.
de:C-Programmierung: Komplexe
Datentypen
fr:Programmation C/Types
avancés pl:C/Typy
złożone
|
# C Programming/Pointers and arrays
(Figure: Pointer *a* pointing to variable *b*. Note that *b* stores a
number, whereas *a* stores the address of *b* in memory.)
A **pointer "wikilink")** is a value that
designates the address (i.e., the location in memory), of some value.
Pointers are variables that hold a memory location.
There are four fundamental things you need to know about pointers:
- How to declare them (with the address operator \'`&`\':
`int *pointer = &variable;`)
- How to assign to them (`pointer = NULL;`)
- How to reference the value to which the pointer points (known as
*dereferencing*, by using the dereferencing operator \'`*`\':
`value = *pointer;`)
- How they relate to arrays (the vast majority of arrays in C are
simple lists, also called \"1 dimensional arrays\", but we will
briefly cover multi-dimensional arrays with some pointers in a
later
chapter).
Pointers can reference any data type, even functions. We\'ll also
discuss the relationship of pointers with text strings and the more
advanced concept of function pointers.
## Declaring pointers
Consider the following snippet of code which declares two pointers:
``` {.c .numberLines}
struct MyStruct {
int m_aNumber;
float num2;
};
int main()
{
int *pJ2;
struct MyStruct *pAnItem;
}
```
Lines 1-4 define a
structure. Line 8
declares a variable that points to an `int`, and line 9 declares a
variable that points to something with structure MyStruct. So to declare
a variable as something that points to some type, rather than contains
some type, the asterisk (`*`) is placed before the variable name.
In the following, line 1 declares `var1` as a pointer to a long and
`var2` as a long and not a pointer to a long. In line 2, `p3` is
declared as a pointer to a pointer to an int.
``` c
long * var1, var2;
int ** p3;
```
Pointer types are often used as parameters to function calls. The
following shows how to declare a function which uses a pointer as an
argument. Since C passes function arguments by value, in order to allow
a function to modify a value from the calling routine, a pointer to the
value must be passed. Pointers to structures are also used as function
arguments even when nothing in the struct will be modified in the
function. This is done to avoid copying the complete contents of the
structure onto the stack. More about pointers as function arguments
later.
``` c
int MyFunction(struct MyStruct *pStruct);
```
## Assigning values to pointers
So far we\'ve discussed how to declare pointers. The process of
assigning values to pointers is next. To assign the address of a
variable to a pointer, the `&` or \'address of\' operator is used.
``` c
int myInt;
int *pPointer;
struct MyStruct dvorak;
struct MyStruct *pKeyboard;
pPointer = &myInt;
pKeyboard = &dvorak;
```
Here, pPointer will now reference myInt and pKeyboard will reference
dvorak.
Pointers can also be assigned to reference dynamically allocated memory.
The malloc() and calloc() functions are often used to do this.
``` c
#include <stdlib.h>
/* ... */
struct MyStruct *pKeyboard;
/* ... */
pKeyboard = malloc(sizeof *pKeyboard);
```
The malloc function returns a pointer to dynamically allocated memory
(or NULL if unsuccessful). The size of this memory will be appropriately
sized to contain the MyStruct structure.
The following is an example showing one pointer being assigned to
another and of a pointer being assigned a return value from a function.
``` c
static struct MyStruct val1, val2, val3, val4;
struct MyStruct *ASillyFunction( int b )
{
struct MyStruct *myReturn;
if (b == 1) myReturn = &val1;
else if (b==2) myReturn = &val2;
else if (b==3) myReturn = &val3;
else myReturn = &val4;
return myReturn;
}
struct MyStruct *strPointer;
int *c, *d;
int j;
c = &j; /* pointer assigned using & operator */
d = c; /* assign one pointer to another */
strPointer = ASillyFunction( 3 ); /* pointer returned from a function. */
```
When returning a pointer from a function, do not return a pointer that
points to a value that is local to the function or that is a pointer to
a function argument. Pointers to local variables become invalid when the
function exits. In the above function, the value returned points to a
static variable. Returning a pointer to dynamically allocated memory is
also valid.
## Pointer dereferencing
(Figure: The pointer `p` points to the variable `a`.)
To access a value to which a pointer points, the `*` operator is used.
Another operator, the `->` operator is used in conjunction with pointers
to structures. Here\'s a short example.
``` c
int c, d;
int *pj;
struct MyStruct astruct;
struct MyStruct *bb;
c = 10;
pj = &c; /* pj points to c */
d = *pj; /* d is assigned the value to which pj points, 10 */
pj = &d; /* now points to d */
*pj = 12; /* d is now 12 */
bb = &astruct;
(*bb).m_aNumber = 3; /* assigns 3 to the m_aNumber member of astruct */
bb->num2 = 44.3; /* assigns 44.3 to the num2 member of astruct */
*pj = bb->m_aNumber; /* equivalent to d = astruct.m_aNumber; */
```
The expression `bb->m_aNumber` is entirely equivalent to
`(*bb).m_aNumber`. They both access the `m_aNumber` element of the
structure pointed to by `bb`. There is one more way of dereferencing a
pointer, which will be discussed in the following section.
When dereferencing a pointer that points to an invalid memory location,
an error often occurs which results in the program terminating. The
error is often reported as a segmentation error. A common cause of this
is failure to initialize a pointer before trying to dereference it.
C is known for giving you just enough rope to hang yourself, and pointer
dereferencing is a prime example. You are quite free to write code that
accesses memory outside that which you have explicitly requested from
the system. And many times, that memory may appear as available to your
program due to the vagaries of system memory allocation. However, even
if 99 executions allow your program to run without fault, that 100th
execution may be the time when your \"memory pilfering\" is caught by
the system and the program fails. Be careful to ensure that your pointer
offsets are within the bounds of allocated memory!
The declaration `void *somePointer;` is used to declare a pointer of
some nonspecified type. You can assign a value to a void pointer, but
you must cast the variable to point to some specified type before you
can dereference it. Pointer arithmetic is also not valid with `void *`
pointers.
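A short illustration (the variable names are only for this example):

``` c
#include <stdio.h>

int main(void)
{
    int n = 42;
    void *somePointer = &n;    /* any object pointer converts to void * */

    /* convert back to a typed pointer before dereferencing */
    int *pInt = somePointer;
    printf("%d\n", *pInt);     /* prints 42 */
    return 0;
}
```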
## Pointers and Arrays
Up to now, we\'ve carefully been avoiding discussing arrays in the
context of pointers. The interaction of pointers and arrays can be
confusing but here are two fundamental statements about it:
- A variable declared as an array of some type acts as a pointer to
that type. When used by itself, it points to the first element of
the array.
- A pointer can be indexed like an array name.
The first case often is seen to occur when an array is passed as an
argument to a function. The function declares the parameter as a
pointer, but the actual argument may be the name of an array. The second
case often occurs when accessing dynamically allocated memory.
Let\'s look at examples of each. In the following code, the call to
`calloc()` effectively allocates an array of struct MyStruct items.
``` c
struct MyStruct {
int someNumber;
float otherNumber;
};
float returnSameIfAnyEquals(struct MyStruct *workingArray, int size, int bb)
{
/* Go through the array and check if any value in someNumber is equal to bb. If
* any value is, return the value in otherNumber. If no values are equal to bb,
* return 0.0f. */
for (int i = 0; i < size; i++) {
if (workingArray[i].someNumber == bb ) {
return workingArray[i].otherNumber;
}
}
return 0.0f;
}
// Declare our variables
float someResult;
int someSize = 10;   // size used for the dynamically allocated array below
struct MyStruct myArray[4];
struct MyStruct *secondArray; // Notice that this is a pointer
const int ArraySize = sizeof(myArray) / sizeof(*myArray);
// Initialization of myArray occurs
someResult = returnSameIfAnyEquals(myArray, ArraySize, 4);
secondArray = calloc(someSize, sizeof(struct MyStruct));
for (int i = 0; i < someSize; i++) {
/* Fill secondArray with some data */
secondArray[i].someNumber = i * 2;
secondArray[i].otherNumber = 0.304f * i * i;
}
```
Pointers and array names can pretty much be used interchangeably;
however, there are exceptions. You cannot assign a new pointer value to
an array name. The array name will always point to the first element of
the array. In the function `returnSameIfAnyEquals`, however, you could
assign a new value to `workingArray`, as it is just a pointer to the first
element of the array passed in. It is also valid for a function to return a
pointer to one of the array elements from an array passed as an argument
to a function. A function should never return a pointer to a local
variable, even though the compiler will probably not complain.
When declaring parameters to functions, declaring an array variable
without a size is equivalent to declaring a pointer. Often this is done
to emphasize the fact that the pointer variable will be used in a manner
equivalent to an array.
``` c
/* Two equivalent function prototypes */
int LittleFunction(int *paramN);
int LittleFunction(int paramN[]);
```
Now we\'re ready to discuss pointer arithmetic. You can add and subtract
integer values to/from pointers. If myArray is declared to be some type
of array, the expression `*(myArray+j)`, where j is an integer, is
equivalent to `myArray[j]`. For instance, in the above example where we
had the expression `secondArray[i].otherNumber`, we could have written
that as `(*(secondArray+i)).otherNumber` or more simply
`(secondArray+i)->otherNumber`.
Note that for addition and subtraction of integers and pointers, the
value of the pointer is not adjusted by the integer amount, but is
adjusted by the amount multiplied by the size of the type to which the
pointer refers in bytes. (For example, in terms of raw addresses,
`pointer + x` can be thought of as advancing the address by
`x * sizeof(*pointer)` bytes.)
One pointer may also be subtracted from another, provided they point to
elements of the same array (or the position just beyond the end of the
array). If you have a pointer that points to an element of an array, the
index of the element is the result when the array name is subtracted
from the pointer. Here\'s an example.
``` c
struct MyStruct someArray[20];
struct MyStruct *p2;
int i;
/* array initialization .. */
for (p2 = someArray; p2 < someArray+20; ++p2) {
if (p2->num2 > testValue)
break;
}
i = p2 - someArray;
```
You may be wondering how pointers and multidimensional arrays interact.
Let\'s look at this a bit in detail. Suppose A is declared as a two
dimensional array of floats (`float A[D1][D2];`) and that pf is declared
a pointer to a float. If pf is initialized to point to A\[0\]\[0\], then
\*(pf+1) is equivalent to A\[0\]\[1\] and \*(pf+D2) is equivalent to
A\[1\]\[0\]. The elements of the array are stored in row-major order.
``` c
float A[6][8];
float *pf;
pf = &A[0][0];
*(pf+1) = 1.3; /* assigns 1.3 to A[0][1] */
*(pf+8) = 2.3; /* assigns 2.3 to A[1][0] */
```
Let\'s look at a slightly different problem. We want to have a two
dimensional array, but we don\'t need to have all the rows the same
length. What we do is declare an array of pointers. The second line
below declares A as an array of pointers. Each pointer points to a
float. Here\'s some applicable code:
``` c
float linearA[30];
float *A[6];
A[0] = linearA; /* 5 - 0 = 5 elements in row */
A[1] = linearA + 5; /* 11 - 5 = 6 elements in row */
A[2] = linearA + 11; /* 15 - 11 = 4 elements in row */
A[3] = linearA + 15; /* 21 - 15 = 6 elements */
A[4] = linearA + 21; /* 25 - 21 = 4 elements */
A[5] = linearA + 25; /* 30 - 25 = 5 elements */
A[3][2] = 3.66;   /* assigns 3.66 to linearA[17] */
A[3][-3] = 1.44;  /* refers to linearA[12];
       negative indices are sometimes useful. But avoid using them as much as possible. */
```
We also note here something curious about array indexing. Suppose
`myArray` is an array and `i` is an integer value. The expression
`myArray[i]` is equivalent to `i[myArray]`. The first is equivalent to
`*(myArray+i)`, and the second is equivalent to `*(i+myArray)`. These
turn out to be the same, since the addition is commutative.
Pointers can be used with the increment and decrement operators, which is
sometimes done within a loop, as in the following example. The increment
and decrement applies to the pointer, not to the object to which the
pointer refers. In other words, `*pArray++` is equivalent to
`*(pArray++)`.
``` c
long myArray[20];
long *pArray;
int i;
/* Assign values to the entries of myArray */
pArray = myArray;
for (i=0; i<10; ++i) {
*pArray++ = 5 + 3*i + 12*i*i;
*pArray++ = 6 + 2*i + 7*i*i;
}
```
## Pointers in Function Arguments
Often we need to invoke a function with an argument that is itself a
pointer. In many instances, the variable is itself a parameter for the
current function and may be a pointer to some type of structure. The
ampersand (**`&`**) character is not needed in
this circumstance to obtain a pointer value, as the variable is itself a
pointer. In the example below, the variable `pStruct`, a pointer, is a
parameter to function `FunctTwo`, and is passed as an argument to
`FunctOne`.
The second parameter to `FunctOne` is an int. Since in function
`FunctTwo`, `mValue` is a pointer to an int, the pointer must first be
dereferenced using the \* operator, hence the second argument in the
call is `*mValue`. The third parameter to function `FunctOne` is a
pointer to a long. Since `pAA` is itself a pointer to a long, no
ampersand is needed when it is used as the third argument to the
function.
``` c
int FunctOne(struct someStruct *pValue, int iValue, long *lValue)
{
/* do some stuff ... */
return 0;
}
int FunctTwo(struct someStruct *pStruct, int *mValue)
{
int j;
long AnArray[25];
long *pAA;
pAA = &AnArray[13];
j = FunctOne( pStruct, *mValue, pAA ); /* pStruct already holds the address that the pointer will point to; there is no need to get the address of anything.*/
return j;
}
```
## Pointers and Text Strings
Historically, text strings in C have been implemented as arrays of
characters, with the last byte in the string being a zero, or the null
character \'\\0\'. Most C implementations come with a standard library
of functions for manipulating strings. Many of the more commonly used
functions expect the strings to be null terminated strings of
characters. To use these functions requires the inclusion of the
standard C header file \"string.h\".
A statically declared, initialized string would look similar to the
following:
``` c
static const char *myFormat = "Total Amount Due: %d";
```
The variable `myFormat` can be viewed as an array of 21 characters.
There is an implied null character (\'\\0\') tacked on to the end of the
string after the \'d\' as the 21st item in the array. You can also
initialize the individual characters of the array as follows:
``` c
static const char myFlower[] = { 'P', 'e', 't', 'u', 'n', 'i', 'a', '\0' };
```
An initialized array of strings would typically be done as follows:
``` c
static const char *myColors[] = {
"Red", "Orange", "Yellow", "Green", "Blue", "Violet" };
```
The initialization of an especially long string can be split across
lines of source code as follows.
``` c
static char *longString = "Hello. My name is Rudolph and I work as a reindeer "
"around Christmas time up at the North Pole. My boss is a really swell guy."
" He likes to give everybody gifts.";
```
The library functions that are used with strings are discussed in a
later chapter.
## Pointers to Functions
C also allows you to create pointers to functions. Pointers to functions
syntax can get rather messy. As an example of this, consider the
following functions:
``` c
#include <stdio.h>

static int Z = 0;

int *pointer_to_Z(int x) {
    /* function returning integer pointer, not pointer to function */
    return &Z;
}

int get_Z(int x) {
    return Z;
}

int main(void) {
    int (*function_pointer_to_Z)(int); /* pointer to function taking an int argument and returning an int */
    function_pointer_to_Z = &get_Z;
    printf("pointer_to_Z output: %d\n", *pointer_to_Z(3));
    printf("function_pointer_to_Z output: %d\n", (*function_pointer_to_Z)(3));
    return 0;
}
```
Declaring a typedef to a function pointer generally clarifies the code.
Here\'s an example that uses a function pointer, and a void \* pointer
to implement what\'s known as a callback. The `DoSomethingNice` function
invokes a caller supplied function `TalkJive` with caller data. Note
that `DoSomethingNice` really doesn\'t know anything about what
`dataPointer` refers to.
``` c
typedef int (*MyFunctionType)( int, void *); /* a typedef for a function pointer */
#define THE_BIGGEST 100
int DoSomethingNice( int aVariable, MyFunctionType aFunction, void *dataPointer )
{
int rv = 0;
if (aVariable < THE_BIGGEST) {
/* invoke function through function pointer (old style) */
rv = (*aFunction)(aVariable, dataPointer );
} else {
/* invoke function through function pointer (new style) */
rv = aFunction(aVariable, dataPointer );
};
return rv;
}
typedef struct {
int colorSpec;
char *phrase;
} DataINeed;
int TalkJive( int myNumber, void *someStuff )
{
/* recast void * to pointer type specifically needed for this function */
DataINeed *myData = someStuff;
/* talk jive. */
return 5;
}
static DataINeed sillyStuff = { BLUE, "Whatcha talkin 'bout Willis?" };
DoSomethingNice( 41, &TalkJive, &sillyStuff );
```
Some versions of C may not require an ampersand preceding the `TalkJive`
argument in the `DoSomethingNice` call. Some implementations may require
specifically casting the argument to the `MyFunctionType` type, even
though the function signature exactly matches that of the typedef.
Function pointers can be useful for implementing a form of polymorphism
in C. First one declares a structure whose members are function pointers
for the various operations that can be invoked polymorphically. A second,
base object structure containing a pointer to the previous structure is
also declared. A class is defined by extending the second structure with
the data specific to the class, plus a static variable of the type of the
first structure containing the addresses of the functions associated with
the class. This type of polymorphism is used in the standard library when
file I/O functions are called.
A similar mechanism can also be used for implementing a state
machine in C. A structure is defined
which contains function pointers for handling events that may occur
within a state, and for functions to be invoked upon entry to and exit
from the state. An instance of this structure corresponds to a state.
Each state is initialized with pointers to functions appropriate for the
state. The current state of the state machine is in effect a pointer to
one of these states. Changing the value of the current state pointer
effectively changes the current state. When some event occurs, the
appropriate function is called through a function pointer in the current
state.
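The following is a minimal sketch of such a table-driven state machine, with made-up state and event names rather than any particular library's API:

``` c
#include <stdio.h>

enum { IDLE, RUNNING, NUM_STATES };

typedef struct {
    void (*on_enter)(void);
    int  (*on_event)(int event);   /* returns the index of the next state */
} state;

static void enter_idle(void)    { printf("entering idle\n"); }
static void enter_running(void) { printf("entering running\n"); }

static int idle_event(int event)    { return event == 1 ? RUNNING : IDLE; }
static int running_event(int event) { return event == 0 ? IDLE : RUNNING; }

/* one table entry per state, each initialized with its handler functions */
static const state states[NUM_STATES] = {
    [IDLE]    = { enter_idle,    idle_event },
    [RUNNING] = { enter_running, running_event },
};

int main(void)
{
    const state *current = &states[IDLE];   /* the current state is a pointer */
    int events[] = { 1, 1, 0 };
    current->on_enter();
    for (int i = 0; i < 3; i++) {
        const state *next = &states[current->on_event(events[i])];
        if (next != current) {
            current = next;                  /* changing the pointer changes the state */
            current->on_enter();
        }
    }
    return 0;
}
```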
### Practical use of function pointers in C
One common practical use of function pointers is to replace a long switch
statement with a table of functions. First, an example using a switch
statement:
``` c
#include <stdio.h>
int add(int a, int b);
int sub(int a, int b);
int mul(int a, int b);
int div(int a, int b);
int main()
{
int i, result;
int a=10;
int b=5;
printf("Enter the value between 0 and 3 : ");
scanf("%d",&i);
switch(i)
{
case 0: result = add(a,b); break;
case 1: result = sub(a,b); break;
case 2: result = mul(a,b); break;
case 3: result = div(a,b); break;
}
}
int add(int i, int j)
{
return (i+j);
}
int sub(int i, int j)
{
return (i-j);
}
int mul(int i, int j)
{
return (i*j);
}
int div(int i, int j)
{
return (i/j);
}
```
Without using a switch statement:
``` c
#include <stdio.h>
int add(int a, int b);
int sub(int a, int b);
int mul(int a, int b);
int div(int a, int b);
int (*oper[4])(int a, int b) = {add, sub, mul, div};
int main()
{
int i,result;
int a=10;
int b=5;
printf("Enter the value between 0 and 3 : ");
scanf("%d",&i);
result = oper[i](a, b);
}
int add(int i, int j)
{
return (i+j);
}
int sub(int i, int j)
{
return (i-j);
}
int mul(int i, int j)
{
return (i*j);
}
int div(int i, int j)
{
return (i/j);
}
```
Function pointers may be used to create a struct member function:
``` c
typedef struct
{
int (*open)(void);
void (*close)(void);
int (*reg)(void);
} device;
int my_device_open(void)
{
/* ... */
}
void my_device_close(void)
{
/* ... */
}
int register_device(void)
{
    /* ... */
    return 0;
}
device create(void)
{
device my_device;
my_device.open = my_device_open;
my_device.close = my_device_close;
my_device.reg = register_device;
my_device.reg();
return my_device;
}
```
Function pointers can also be used to emulate a `this` pointer (the
following code would be placed in a library).
``` c
struct device_data
{
/* ... here goes data of structure ... */
};
static struct device_data obj;
typedef struct
{
int (*open)(void);
void (*close)(void);
int (*reg)(void);
} device;
static struct device_data create_device_data(void)
{
struct device_data my_device_data;
/* ... here goes constructor ... */
return my_device_data;
}
/* here I omit the my_device_open, my_device_close and register_device functions */
device create_device(void)
{
device my_device;
my_device.open = my_device_open;
my_device.close = my_device_close;
my_device.reg = register_device;
my_device.reg();
return my_device;
}
```
## Examples of pointer constructs
Below are some example constructs which may aid in creating your
pointer.
``` c
int i; // integer variable 'i'
int *p; // pointer 'p' to an integer
int a[]; // array 'a' of integers
int f(); // function 'f' with return value of type integer
int **pp; // pointer 'pp' to a pointer to an integer
int (*pa)[]; // pointer 'pa' to an array of integer
int (*pf)(); // pointer 'pf' to a function with return value integer
int *ap[]; // array 'ap' of pointers to an integer
int *fp(); // function 'fp' which returns a pointer to an integer
int ***ppp; // pointer 'ppp' to a pointer to a pointer to an integer
int (**ppa)[]; // pointer 'ppa' to a pointer to an array of integers
int (**ppf)(); // pointer 'ppf' to a pointer to a function with return value of type integer
int *(*pap)[]; // pointer 'pap' to an array of pointers to an integer
int *(*pfp)(); // pointer 'pfp' to function with return value of type pointer to an integer
int **app[]; // array of pointers 'app' that point to pointers to integer values
int (*apa[])[]; // array of pointers 'apa' to arrays of integers
int (*apf[])(); // array of pointers 'apf' to functions with return values of type integer
int ***fpp(); // function 'fpp' which returns a pointer to a pointer to a pointer to an int
int (*fpa())[]; // function 'fpa' with return value of a pointer to array of integers
int (*fpf())(); // function 'fpf' with return value of a pointer to function which returns an integer
```
## sizeof
The sizeof operator is often used to refer to the size of a static array
declared earlier in the same function.
To find the end of an array (example from wikipedia:Buffer
overflow):
``` c
#include <stdio.h>
#include <string.h>
int main(int argc, char *argv[])
{
char buffer[10];
if (argc < 2)
{
fprintf(stderr, "USAGE: %s string\n", argv[0]);
return 1;
}
strncpy(buffer, argv[1], sizeof(buffer));
buffer[sizeof(buffer) - 1] = '\0';
return 0;
}
```
To iterate over every element of an array, use
``` c
#define NUM_ELEM(x) (sizeof (x) / sizeof (*(x)))
for( i = 0; i < NUM_ELEM(array); i++ )
{
/* do something with array[i] */
;
}
```
Note that the `sizeof` operator only works on things defined earlier in
the same function. The compiler replaces it with some fixed constant
number. In this case, the `buffer` was declared as an array of 10
char\'s earlier in the same function, and the compiler replaces
`sizeof(buffer)` with the number 10 at compile time (equivalent to us
hard-coding 10 into the code in place of `sizeof(buffer)`). The
information about the length of `buffer` is not actually stored anywhere
in memory (unless we keep track of it separately) and cannot be
programmatically obtained at run time from the array/pointer itself.
Often a function needs to know the size of an array it was given \-- an
array defined in some other function. For example,
``` c
/* broken.c - demonstrates a flaw */
#include <stdio.h>
#include <string.h>
#define NUM_ELEM(x) (sizeof (x) / sizeof (*(x)))
int sum( int input_array[] ){
int sum_so_far = 0;
int i;
for( i = 0; i < NUM_ELEM(input_array); i++ ) // WON'T WORK -- input_array wasn't defined in this function.
{
sum_so_far += input_array[i];
};
return( sum_so_far );
}
int main(int argc, char *argv[])
{
int left_array[] = { 1, 2, 3 };
int right_array[] = { 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
int the_sum = sum( left_array );
printf( "the sum of left_array is: %d", the_sum );
the_sum = sum( right_array );
printf( "the sum of right_array is: %d", the_sum );
return 0;
}
```
Unfortunately, (in C and C++) the length of the array cannot be obtained
from an array passed in at run time, because (as mentioned above) the
size of an array is not stored anywhere. The compiler always replaces
sizeof with a constant. This sum() routine needs to handle more than
just one constant length of an array.
There are some common ways to work around this fact:
- Write the function to require, for each array parameter, a
\"length\" parameter (which has type \"size_t\"). (Typically we use
sizeof at the point where this function is called).
- Use of a convention, such as a null-terminated
string to mark the end
of the array.
- Instead of passing raw arrays, pass a structure that includes the
length of the array (such as \".length\") as well as the array (or a
pointer to the first element); similar to the `string` or `vector`
classes in C++.
``` c
/* fixed.c - demonstrates one work-around */
#include <stdio.h>
#include <string.h>
#define NUM_ELEM(x) (sizeof (x) / sizeof (*(x)))
int sum( int input_array[], size_t length ){
int sum_so_far = 0;
int i;
for( i = 0; i < length; i++ )
{
sum_so_far += input_array[i];
};
return( sum_so_far );
}
int main(int argc, char *argv[])
{
int left_array[] = { 1, 2, 3, 4 };
int right_array[] = { 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 };
int the_sum = sum( left_array, NUM_ELEM(left_array) ); // works here, because left_array is defined in this function
printf( "the sum of left_array is: %d", the_sum );
the_sum = sum( right_array, NUM_ELEM(right_array) ); // works here, because right_array is defined in this function
printf( "the sum of right_array is: %d", the_sum );
return 0;
}
```
It\'s worth mentioning that the sizeof operator has two variations:
`sizeof(type)` (for instance: `sizeof (int)` or
`sizeof (struct some_structure)`) and `sizeof expression` (for instance:
`sizeof some_variable.some_field` or `sizeof 1`).
## External Links
- *Pointer Fun with Binky* (video)
- \"Common Pointer
Pitfalls\"
by Dave Marshall
de:C-Programmierung: Zeiger
fr:Programmation C/Pointeurs
it:C/Vettori e puntatori/Interscambiabilità tra puntatori e
vettori
pl:C/Wskaźniki
# C Programming/Memory management
In C, you have already considered creating variables for use in the
program. You have created some arrays for use, but you may have already
noticed some limitations:
- the size of the array must be known beforehand
- the size of the array cannot be changed in the duration of your
program
*Dynamic memory allocation* in C is a way of circumventing these
problems.
## The `malloc` function
``` c
#include <stdlib.h>
void *calloc(size_t nmemb, size_t size);
void free(void *ptr);
void *malloc(size_t size);
void *realloc(void *ptr, size_t size);
```
The standard C function `malloc` is the means of implementing dynamic
memory allocation. It is defined in stdlib.h or malloc.h, depending on
what operating system you may be using. Malloc.h contains only the
definitions for the memory allocation functions and not the rest of the
other functions defined in stdlib.h. Usually you will not need to be so
specific in your program, and if both are supported, you should use
\<stdlib.h\>, since that is ANSI C, and what we will use here.
The corresponding call to release allocated memory back to the operating
system is `free`.
When dynamically allocated memory is no longer needed, `free` should be
called to release it back to the memory pool. Overwriting a pointer that
points to dynamically allocated memory can result in that data becoming
inaccessible. If this happens frequently, eventually the operating
system will no longer be able to allocate more memory for the process.
Once the process exits, the operating system is able to free all
dynamically allocated memory associated with the process.
Let\'s look at how dynamic memory allocation can be used for arrays.
Normally when we wish to create an array we use a declaration such as
``` c
int array[10];
```
Recall `array` can be considered a pointer which we use as an array. We
specify the length of this array is 10 `int`s. After `array[0]`, nine
other integers have space to be stored consecutively.
Sometimes it is not known at the time the program is written how much
memory will be needed for some data; for example, when it depends upon
user input. In this case we would want to dynamically allocate required
memory after the program has started executing. To do this we only need
to declare a pointer, and invoke `malloc` when we wish to make space for
the elements in our array, *or*, we can tell `malloc` to make space when
we first initialize the array. Either way is acceptable and useful.
We also need to know how much an int takes up in memory in order to make
room for it; fortunately this is not difficult, we can use C\'s builtin
`sizeof` operator. For example, if `sizeof(int)` yields 4, then one
`int` takes up 4 bytes. Naturally, `2*sizeof(int)` is how much memory we
need for 2 `int`s, and so on.
So how do we `malloc` an array of ten `int`s like before? If we wish to
declare and make room in one hit, we can simply say
``` c
int *array = malloc(10*sizeof(int));
```
We only need to declare the pointer; `malloc` gives us some space to
store the 10 `int`s, and returns the pointer to the first element, which
is assigned to that pointer.
**Important note!** `malloc` does *not* initialize the array; this means
that the array may contain random or unexpected values! Like creating
arrays without dynamic allocation, the programmer must initialize the
array with sensible values before using it. Make sure you do so, too.
(See the `memset` function, shown below, for a simple way to do this.)
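For example, assuming the allocation succeeded, the newly allocated block can be zeroed with `memset` (declared in `<string.h>`):

``` c
int *array = malloc(10 * sizeof(int));   /* contents are indeterminate */
if (array != NULL)
    memset(array, 0, 10 * sizeof(int));  /* now all 10 ints are zero */
```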
It is not necessary to immediately call `malloc` after declaring a
pointer for the allocated memory. Often a number of statements exist
between the declaration and the call to `malloc`, as follows:
``` c
int *array = NULL;
printf("Hello World!!!");
/* more statements */
array = malloc(10*sizeof(int)); /* delayed allocation */
/* use the array */
```
A more practical example of dynamic memory allocation would be the
following:
> Given an array of 10 integers, remove all duplicate elements from the
> array, and create a new array without duplicate elements (a set).
A simple algorithm to remove duplicate elements:
``` {.c .numberLines}
int arrl = 10; // Length of the initial array
int arr[10] = {1, 2, 2, 3, 4, 4, 5, 6, 5, 7}; // A sample array, containing several duplicate elements
for (int x = 0; x < arrl; x++)
{
for (int y = x + 1; y < arrl; y++)
{
if (arr[x] == arr[y])
{
for (int s = y; s < arrl; s++)
{
if (!(s + 1 == arrl))
arr[s] = arr[s + 1];
}
arrl--;
y--;
}
}
}
```
Because the length of our new array depends on the input, it must be
dynamically allocated:
``` c
int *newArray = malloc(arrl*sizeof(int));
```
The newly allocated array will contain indeterminate values, so we use
`memcpy` (declared in `<string.h>`) to copy the de-duplicated elements
into it:
``` c
memcpy(newArray, arr, arrl*sizeof(int));
```
### Error checking
When we want to use `malloc`, we have to be mindful that the pool of
memory available to the programmer is *finite*. Even if a modern PC will
have at least an entire gigabyte of memory, it is still possible and
conceivable to run out of it! In this case, `malloc` will return `NULL`.
In order to stop the program crashing from having no more memory to use,
one should always check that malloc has not returned `NULL` before
attempting to use the memory; we can do this by
``` c
int *pt = malloc(3 * sizeof(int));
if(pt == NULL)
{
fprintf(stderr, "Out of memory, exiting\n");
exit(1);
}
```
Of course, suddenly quitting as in the above example is not always
appropriate, and depends on the problem you are trying to solve and the
architecture you are programming for. For example, if the program is a
small, non critical application that\'s running on a desktop quitting
may be appropriate. However if the program is some type of editor
running on a desktop, you may want to give the operator the option of
saving their tediously entered information instead of just exiting the
program. A memory allocation failure in an embedded processor, such as
might be in a washing machine, could cause an automatic reset of the
machine. For this reason, many embedded systems designers avoid dynamic
memory allocation altogether.
## The `calloc` function
The `calloc` function allocates space for an array of items and
initializes the memory to zeros. The call
`mArray = calloc( count, sizeof(struct V))` allocates `count` objects,
each of whose size is sufficient to contain an instance of the structure
`struct V`. The space is initialized to all bits zero. The function
returns either a pointer to the allocated memory or, if the allocation
fails, `NULL`.
## The `realloc` function
``` c
void * realloc ( void * ptr, size_t size );
```
The `realloc` function changes the size of the object pointed to by
`ptr` to the size specified by `size`. The contents of the object shall
be unchanged up to the lesser of the new and old sizes. If the new size
is larger, the value of the newly allocated portion of the object is
indeterminate. If `ptr` is a null pointer, the `realloc` function
behaves like the `malloc` function for the specified size. Otherwise, if
`ptr` does not match a pointer earlier returned by the `calloc`,
`malloc`, or `realloc` function, or if the space has been deallocated by
a call to the `free` or `realloc` function, the behavior is undefined.
If the space cannot be allocated, the object pointed to by `ptr` is
unchanged. If `size` is zero and `ptr` is not a null pointer, the object
pointed to is freed. The `realloc` function returns either a null
pointer or a pointer to the possibly moved allocated object.
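A common usage pattern (a sketch, not the only correct one) is to assign the result to a temporary pointer first, so the original block is not lost if `realloc` returns a null pointer:

``` c
int *numbers = malloc(10 * sizeof *numbers);
/* ... later, room for 20 ints is needed ... */
int *tmp = realloc(numbers, 20 * sizeof *numbers);
if (tmp == NULL) {
    /* allocation failed; numbers is unchanged and must still be freed */
} else {
    numbers = tmp;   /* the block may have moved to a new address */
}
```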
## The `free` function
Memory that has been allocated using `malloc`, `realloc`, or `calloc`
must be released back to the system memory pool once it is no longer
needed. This is done to avoid perpetually allocating more and more
memory, which could result in an eventual memory allocation failure.
Memory that is not released with `free` is however released when the
current program terminates on most operating systems. Calls to `free`
are as in the following example.
``` c
int *myStuff = malloc( 20 * sizeof(int));
if (myStuff != NULL)
{
/* more statements here */
/* time to release myStuff */
free( myStuff );
}
```
### free with recursive data structures
It should be noted that `free` is neither intelligent nor recursive. The
following code that depends on the recursive application of free to the
internal variables of a struct
does not work.
``` c
typedef struct BSTNode
{
int value;
struct BSTNode* left;
struct BSTNode* right;
} BSTNode;
// Later: ...
BSTNode* temp = (BSTNode*) calloc(1, sizeof(BSTNode));
temp->left = (BSTNode*) calloc(1, sizeof(BSTNode));
// Later: ...
free(temp); // WRONG! don't do this!
```
The statement \"`free(temp);`\" will **not** free `temp->left`, causing
a memory leak. The correct way is to define a function that frees
*every* node in the data structure:
``` c
void BSTFree(BSTNode* node){
if (node != NULL) {
BSTFree(node->left);
BSTFree(node->right);
free(node);
}
}
```
Because C does not have a garbage collector, C programmers are
responsible for making sure there is a `free()` exactly once for each
time there is a `malloc()`. If a tree has been allocated one node at a
time, then it needs to be freed one node at a time.
### Don\'t free undefined pointers
Furthermore, using `free` when the pointer in question was never
allocated in the first place often crashes or leads to mysterious bugs
further along.
To avoid this problem, always initialize pointers when they are
declared. Either use `malloc` at the point they are declared (as in most
examples in this chapter), or set them to `NULL` when they are declared
(as in the \"delayed allocation\" example in this chapter). [^1]
### Write constructor/destructor functions
One way to get memory initialization and destruction right is to imitate
object-oriented programming. In this paradigm, objects are constructed
after raw memory is allocated for them, live their lives, and when it is
time for them to be destructed, a special function called a destructor
destroys the object\'s innards before the object itself is destroyed.
For example:
``` c
#include <stdlib.h> /* need malloc and friends */
/* this is the type of object we have, with a single int member */
typedef struct WIDGET_T {
int member;
} WIDGET_T;
/* functions that deal with WIDGET_T */
/* constructor function */
void
WIDGETctor (WIDGET_T *this, int x)
{
this->member = x;
}
/* destructor function */
void
WIDGETdtor (WIDGET_T *this)
{
/* In this case, I really don't have to do anything, but
if WIDGET_T had internal pointers, the objects they point to
would be destroyed here. */
this->member = 0;
}
/* create function - this function returns a new WIDGET_T */
WIDGET_T *
WIDGETcreate (int m)
{
WIDGET_T *x = 0;
x = malloc (sizeof (WIDGET_T));
if (x == 0)
abort (); /* no memory */
WIDGETctor (x, m);
return x;
}
/* destroy function - calls the destructor, then frees the object */
void
WIDGETdestroy (WIDGET_T *this)
{
WIDGETdtor (this);
free (this);
}
/* END OF CODE */
```
## References
- Memory Management
fr:Programmation C/Gestion de la
mémoire
[^1]: \"Bug 478901 \... libpng-1.2.34 and earlier might free undefined
pointers\"
# C Programming/Error handling
C does not provide direct support for error handling (also known as
exception handling). By convention, the programmer is expected to
prevent errors from occurring in the first place, and test return values
from functions. For example, -1 and NULL are used in several functions
such as socket() (Unix socket programming) or malloc() respectively to
indicate problems that the programmer should be aware about. In a worst
case scenario where there is an unavoidable error and no way to recover
from it, a C programmer usually tries to log the error and
\"gracefully\" terminate the program.
There is an external variable called `errno`, accessible to programs
after including `<errno.h>`. That header defines the error codes that
can occur on a given operating system (e.g. on Linux the definitions
live in include/asm-generic/errno.h) when programs ask for resources.
The variable holds the code of the most recent error, and a readable
description of it can be obtained with `strerror(errno)`.
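As a minimal sketch of this (the missing file name is made up), the following shows how `errno` and `strerror` can be used to report why a library call failed:

``` c
#include <stdio.h>   /* fopen, printf */
#include <string.h>  /* strerror */
#include <errno.h>   /* errno */

int main(void)
{
    FILE *fp = fopen("no_such_file.txt", "r"); /* hypothetical missing file */
    if (fp == NULL) {
        /* errno was set by fopen; strerror turns it into readable text */
        printf("fopen failed: %s\n", strerror(errno));
        return 1;
    }
    fclose(fp);
    return 0;
}
```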
The following code tests the return value from the library function
malloc to see if dynamic memory allocation completed properly:
``` c
#include <stdio.h> /* perror */
#include <errno.h> /* errno */
#include <stdlib.h> /* malloc, free, exit */
int main(void)
{
/* Pointer to char, requesting dynamic allocation of 2,000,000,000
* storage elements (declared as an integer constant of type
* unsigned long int). (If your system has less than 2 GB of memory
* available, then this call to malloc will fail.)
*/
char *ptr = malloc(2000000000UL);
if (ptr == NULL) {
perror("malloc failed");
/* here you might want to exit the program or compensate
for that you don't have 2GB available
*/
} else {
/* The rest of the code hereafter can assume that 2,000,000,000
* chars were successfully allocated...
*/
free(ptr);
}
exit(EXIT_SUCCESS); /* exiting program */
}
```
The code snippet above shows the use of the return value of the library
function malloc to check for errors. Many library functions have return
values that flag errors, and thus should be checked by the astute
programmer. In the snippet above, a NULL pointer returned from malloc
signals a failed allocation, so the program reports the error with
perror (and could exit or otherwise compensate). In more complicated
implementations, the program might try to handle the error and recover
from the failed memory allocation.
## Preventing divide by zero errors
A common pitfall made by C programmers is not checking if a divisor is
zero before a division command. The following code will produce a
runtime error and in most cases, exit.
``` c
int dividend = 50;
int divisor = 0;
int quotient;
quotient = (dividend/divisor); /* This will produce a runtime error! */
```
In ordinary arithmetic, division by zero
is undefined, so you must ensure that a divisor is never zero before
dividing. Alternatively, for \*nix processes, you can stop the OS from
terminating your process by blocking the SIGFPE signal.
The code below fixes this by checking if the divisor is zero before
dividing.
``` c
#include <stdio.h> /* for fprintf and stderr */
#include <stdlib.h> /* for exit */
int main( void )
{
int dividend = 50;
int divisor = 0;
int quotient;
if (divisor == 0) {
/* Example handling of this error. Writing a message to stderr, and
* exiting with failure.
*/
fprintf(stderr, "Division by zero! Aborting...\n");
exit(EXIT_FAILURE); /* indicate failure.*/
}
quotient = dividend / divisor;
exit(EXIT_SUCCESS); /* indicate success.*/
}
```
## Signals
In some cases, the environment may respond to a programming error in C
by raising a signal. Signals are events raised by the host environment
or operating system to indicate that a specific error or critical event
has occurred (e.g. a division by zero, interrupt, and so on.) However,
these signals are not meant to be used as a means of error catching;
they usually indicate a critical event that will interfere with normal
program flow.
To handle signals, a program needs to use the `signal.h` header file. A
signal handler will need to be defined, and the signal() function is
then called to allow the given signal to be handled. Some signals that
are raised in response to an error within your code (e.g. a division by
zero) are unlikely to allow your program to recover; their signal
handlers should instead ensure that resources are properly cleaned up
before the program terminates.
The C Standard Library only defines six signals; Unix systems define 15
more. Each signal has a number, called a signum, associated with it.
Here are a few common ones:
``` c
#define SIGHUP 1 /* Hangup the process */
#define SIGINT 2 /* Interrupt the process. C standard */
#define SIGQUIT 3 /* Quit the process */
#define SIGILL 4 /* Illegal instruction. C standard.*/
#define SIGTRAP 5 /* Trace trap, for debugging. Not in the C standard. */
#define SIGABRT 6 /* Abort. C standard. */
#define SIGFPE 8 /* Floating Point Error. C standard. */
#define SIGSEGV 11 /* Memory error. C standard. */
#define SIGTERM 15 /* Termination request. C standard. */
```
Signals are handled with the `signal()` function, declared in the
`signal.h` header. Its prototype is:
``` c
void (*signal(int sig, void (*handler)(int)))(int);
```
That is, `signal` takes the number of the signal to catch and a pointer
to a handler function that takes an `int`; it returns the previous
handler, or `SIG_ERR` if the handler cannot be installed.
Signals can be raised with `raise()` or `kill()`. `raise()` sends the
signal to the current process; `kill()` sends it to a specific process.
*Note that `signal` is now deprecated in favor of `sigaction()`, due to
a lack of portability between Unix systems and potential for unexpected
behavior. However, as `sigaction()`\'s use is more complicated, we will
stick with `signal()` to illustrate the concept here.*
To understand how signals work, here\'s a simple example:
``` c
#include <stdio.h>
#include <unistd.h> // Unix Standard library, used to import sleep()
#include <stdlib.h>
#include <signal.h>
void handler(int signum) {
printf("Signal received %d, coming out...\n", signum);
exit(1);
}
int main () {
signal(SIGINT, handler); // attaching the handler() function to SIGINT signals; i.e, ctrl+c, keyboard interrupt.
while(1) {
printf("Sleeping...\n");
sleep(1000); // sleep pauses the process for a given number of seconds, or until a signal is received.
}
return(0);
}
```
Try compiling and testing this on your machine; after you see
\"Sleeping\...\", send the interrupt signal by pressing `ctrl + c`.
Here\'s a more complex example. This creates a signal handler and raises
the signal:
``` c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
static void catch_function(int signal) {
puts("Interactive attention signal caught.");
}
int main(void) {
if (signal(SIGINT, catch_function) == SIG_ERR) {
fputs("An error occurred while setting a signal handler.\n", stderr);
return EXIT_FAILURE;
}
puts("Raising the interactive attention signal.");
if (raise(SIGINT) != 0) {
fputs("Error raising the signal.\n", stderr);
return EXIT_FAILURE;
}
puts("Exiting.");
return 0;
}
```
## setjmp
The setjmp function can be
used to emulate the exception handling feature of other programming
languages. The first call to setjmp stores a reference point to the
current execution point, and is valid as long as the function containing
setjmp() doesn\'t return or exit. A call to longjmp causes the execution
to return to the point of the associated setjmp call.
`setjmp` takes a \`jmp_buf\` (a type that will store an execution
context) as an argument, and returns `0` the first time it runs (i.e.,
when it sets the return point). When it runs a second time - when
`longjmp` is called - it then returns the value passed to `longjmp`.
`longjmp` takes a \`jmp_buf\` as an argument (one that\'s already been
passed to `setjmp`), and a value to pass to `setjmp` to return.
``` c
#include <stdio.h>
#include <stdlib.h>
#include <setjmp.h>
int main(void) {
int val;
jmp_buf environment;
val = setjmp(environment); // val is set to 0 the first time this is called
if (val !=0)
{
printf("You returned from a longjmp call, return value is %d", val); // now, value is 1, passed from longjmp()
exit(0);
}
puts("Calling longjmp now");
longjmp(environment, 1);
return(0);
}
```
Try running this with a compiler on your own machine.
The values of non-volatile local variables that were modified after the
call to setjmp are indeterminate when setjmp returns from a longjmp
call.
While setjmp() and longjmp() may be used for error handling, it is
generally preferred to use the return value of a function to indicate an
error, if possible. setjmp() and longjmp() are most useful when errors
occur in deeply nested function calls, and it would be tedious to check
return values all the way back to the point you wish to return to.
# C Programming/Stream IO
## Introduction
The `stdio.h` header declares a broad assortment of functions that
perform input and output to files and devices such as the console. It
was one of the earliest headers to appear in the C library. It declares
more functions than any other standard header and also requires more
explanation because of the complex machinery that underlies the
functions.
The device-independent model of input and output has seen dramatic
improvement over the years and has received little recognition for its
success. FORTRAN II was touted as a machine-independent language in the
1960s, yet it was essentially impossible to move a FORTRAN program
between architectures without some change. In FORTRAN II, you named the
device you were talking to right in the FORTRAN statement in the middle
of your FORTRAN code. So, you said `READ INPUT TAPE 5` on a
tape-oriented IBM 7090 but `READ CARD` to read a card image on other
machines. FORTRAN IV had more generic `READ` and `WRITE` statements,
specifying a *logical unit number* (LUN) instead of the device name. The
era of device-independent I/O had dawned.
Peripheral devices such as printers still had fairly strong notions
about what they were asked to do. And then, *peripheral interchange*
utilities were invented to handle bizarre devices. When cathode-ray
tubes came onto the scene, each manufacturer of consoles solved problems
such as console cursor movement in an independent manner, causing
further headaches.
It was into this atmosphere that Unix was born. Ken Thompson and Dennis
Ritchie, the developers of Unix, deserve credit for packing any number
of bright ideas into the operating system. Their approach to device
independence was one of the brightest.
The ANSI C `<stdio.h>` library is based on the original Unix file I/O
primitives but casts a wider net to accommodate the least-common
denominator across varied systems.
## Streams
Input and output, whether to or from physical devices such as terminals
and tape drives, or whether to or from files supported on structured
storage devices, are mapped into logical data streams, whose properties
are more uniform than their various inputs and outputs. Two forms of
mapping are supported: text streams and binary streams.
A text stream consists of one or more lines. A line in a text stream
consists of zero or more characters plus a terminating new-line
character. (The only exception is that in some implementations the last
line of a file does not require a terminating new-line character.) Unix
adopted a standard internal format for all text streams. Each line of
text is terminated by a new-line character. That\'s what any program
expects when it reads text, and that\'s what any program produces when
it writes text. (This is the most basic convention, and if it doesn\'t
meet the needs of a text-oriented peripheral attached to a Unix machine,
then the fix-up occurs out at the edges of the system. Nothing in
between needs to change.) The string of characters that go into, or come
out of a text stream may have to be modified to conform to specific
conventions. This results in a possible difference between the data that
go into a text stream and the data that come out. For instance, in some
implementations, when a space character precedes a new-line character in
the input, the space character is removed from the output. In
general, when the data only consists of printable characters and control
characters like horizontal tab and new-line, the input and output of a
text stream are equal.
Compared to a text stream, a binary stream is straightforward. A
binary stream is an ordered sequence of characters that can
transparently record internal data. Data written to a binary stream
shall always equal the data that gets read out under the same
implementation. Binary streams, however, may have an
implementation-defined number of null characters appended to the end of
the stream. There are no further conventions which need to be
considered.
Nothing in Unix prevents the program from writing arbitrary 8-bit binary
codes to any open file, or reading them back unchanged from an adequate
repository. Thus, Unix obliterated the long-standing distinction between
text streams and binary streams.
## Standard Streams
When a C program starts its execution the program automatically opens
three standard streams named `stdin`, `stdout`, and `stderr`. These are
attached for every C program.
The first standard stream is used for input and the other two are used
for output. These streams are sequences of bytes.
Consider the following program:
``` c
/* An example program. */
#include <stdio.h> /* declares scanf and printf */
int main(void)
{
int var;
scanf ("%d", &var); /* use stdin for scanning an integer from keyboard. */
printf ("%d", var); /* use stdout for printing the integer that was just scanned in. */
return 0;
}
/* end program. */
```
By default `stdin` points to the keyboard and `stdout` and `stderr`
point to the screen. It is possible under Unix and may be possible under
other operating systems to redirect input from or output to a file or
both.
## Pointers to streams
The `<stdio.h>` header contains a definition for a type `FILE` (usually
via a `typedef`) which is capable of processing all the information
needed to exercise control over a stream, including its file position
indicator, a pointer to the associated buffer (if any), an error
indicator that records whether a read/write error has occurred, and an
end-of-file indicator that records whether the end of the file has been
reached.
It is considered bad form to access the contents of `FILE` directly
unless the programmer is writing an implementation of `<stdio.h>` and
its contents. Better access to the contents of `FILE` is provided via
the functions in `<stdio.h>`. It can be said that the `FILE` type is an
early example of object-oriented
programming.
## Opening and Closing Files
To open and close files, the `<stdio.h>` library has three functions:
`fopen`, `freopen`, and `fclose`.
### Opening Files
``` c
#include <stdio.h>
FILE *fopen(const char *filename, const char *mode);
FILE *freopen(const char *filename, const char *mode, FILE *stream);
```
`fopen` and `freopen` open the file whose name is in the string pointed
to by `filename` and associate a stream with it. Both return a pointer
to the object controlling the stream, or a null pointer if the open
operation fails. On a successful open, the error and end-of-file
indicators for the stream are cleared. `freopen` differs from `fopen` in
that the stream pointed to by `stream` is first closed if it is already
open, and any errors from that close are ignored.
`mode` for both functions points to a string beginning with one of the
following sequences (additional characters may follow the sequences):
`r open a text file for reading`\
`w truncate to zero length or create a text file for writing`\
`a append; open or create text file for writing at end-of-file`\
`rb open binary file for reading`\
`wb truncate to zero length or create a binary file for writing`\
`ab append; open or create binary file for writing at end-of-file`\
`r+ open text file for update (reading and writing)`\
`w+ truncate to zero length or create a text file for update`\
`a+ append; open or create text file for update`\
`r+b or rb+ open binary file for update (reading and writing)`\
`w+b or wb+ truncate to zero length or create a binary file for update`\
`a+b or ab+ append; open or create binary file for update`
Opening a file with read mode (\'`r`\' as the first character in the
`mode` argument) fails if the file does not exist or cannot be read.
Opening a file with append mode (\'`a`\' as the first character in the
`mode` argument) causes all subsequent writes to the file to be forced
to the then-current end-of-file, regardless of intervening calls to the
`fseek` function. In some implementations, opening a binary file with
append mode (\'`b`\' as the second or third character in the above list
of `mode` arguments) may initially position the file position indicator
for the stream beyond the last data written, because of null character
padding.
When a file is opened with update mode (\'`+`\' as the second or third
character in the above list of `mode` argument values), both input and
output may be performed on the associated stream. However, output may
not be directly followed by input without an intervening call to the
`fflush` function or to a file positioning function (`fseek`, `fsetpos`,
or `rewind`), and input may not be directly followed by output without
an intervening call to a file positioning function, unless the input
operation encounters end-of-file. Opening (or creating) a text file with
update mode may instead open (or create) a binary stream in some
implementations.
When opened, a stream is fully buffered if and only if it can be
determined not to refer to an interactive device.
### Closing Files
``` c
#include <stdio.h>
int fclose(FILE *stream);
```
The `fclose` function causes the stream pointed to by `stream` to be
flushed and the associated file to be closed. Any unwritten buffered
data for the stream are delivered to the host environment to be written
to the file; any unread buffered data are discarded. The stream is
disassociated from the file. If the associated buffer was automatically
allocated, it is deallocated. The function returns zero if the stream
was successfully closed or `EOF` if any errors were detected.
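As a brief sketch of a typical open/use/close sequence (the file name here is only an example), with the error checking described above:

``` c
#include <stdio.h>

int main(void)
{
    int c;
    FILE *fp = fopen("example.txt", "r");  /* hypothetical input file */
    if (fp == NULL) {
        perror("fopen failed");
        return 1;
    }

    while ((c = fgetc(fp)) != EOF)   /* copy the file to stdout */
        putchar(c);

    if (fclose(fp) == EOF)
        perror("fclose failed");
    return 0;
}
```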
## Stream buffering functions
### The `fflush` function
``` c
#include <stdio.h>
int fflush(FILE *stream);
```
If `stream` points to an output stream or an update stream in which the
most recent operation was not input, the `fflush` function causes any
unwritten data for that stream to be delivered to the host environment
to be written to the file. The behavior of `fflush` is undefined for an
input stream.
If `stream` is a null pointer, the `fflush` function performs this
flushing action on all streams for which the behavior is defined above.
The `fflush` function returns `EOF` if a write error occurs, and zero
otherwise.
The reason for having a `fflush` function is that streams in C can
have buffered input/output; that is, functions that write to a file
actually write to a buffer inside the `FILE` structure. If the buffer is
filled to capacity, the write functions will call `fflush` to actually
\"write\" the data that is in the buffer to the file. Because `fflush`
is only called every once in a while, calls to the operating system to
do a raw write are minimized.
### The `setbuf` function
``` c
#include <stdio.h>
void setbuf(FILE *stream, char *buf);
```
Except that it returns no value, the `setbuf` function is equivalent to
the `setvbuf` function invoked with the values `_IOFBF` for `mode` and
`BUFSIZ` for `size`, or (if `buf` is a null pointer) with the value
`_IONBF` for `mode`.
### The `setvbuf` function
``` c
#include <stdio.h>
int setvbuf(FILE *stream, char *buf, int mode, size_t size);
```
The `setvbuf` function may be used only after the stream pointed to by
`stream` has been associated with an open file and before any other
operation is performed on the stream. The argument `mode` determines how
the stream will be buffered, as follows: `_IOFBF` causes input/output to
be fully buffered; `_IOLBF` causes input/output to be line buffered;
`_IONBF` causes input/output to be unbuffered. If `buf` is not a null
pointer, the array it points to may be used instead of a buffer
allocated by the `setvbuf` function. (The buffer must have a lifetime
at least as great as the open stream, so the stream should be closed
before a buffer that has automatic storage duration is deallocated upon
block exit.) The argument `size` specifies the size of the array. The
contents of the array at any time are indeterminate.
The `setvbuf` function returns zero on success, or nonzero if an invalid
value is given for `mode` or if the request cannot be honored.
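A minimal sketch of how `setvbuf` might be used to give a stream a full buffer of a chosen size (the buffer size and file name are only illustrative):

``` c
#include <stdio.h>

int main(void)
{
    static char buf[8192];                 /* static, so it outlives the stream */
    FILE *fp = fopen("example.txt", "r");  /* hypothetical input file */
    if (fp == NULL)
        return 1;

    /* must be called before any other operation on the stream */
    if (setvbuf(fp, buf, _IOFBF, sizeof buf) != 0)
        fprintf(stderr, "setvbuf failed\n");

    /* ... read from fp ... */
    fclose(fp);
    return 0;
}
```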
## Functions that Modify the File Position Indicator
The `stdio.h` library has five functions that affect the file position
indicator besides those that do reading or writing: `fgetpos`, `fseek`,
`fsetpos`, `ftell`, and `rewind`.
The `fseek` and `ftell` functions are older than `fgetpos` and
`fsetpos`.
### The `fgetpos` and `fsetpos` functions
``` c
#include <stdio.h>
int fgetpos(FILE *stream, fpos_t *pos);
int fsetpos(FILE *stream, const fpos_t *pos);
```
The `fgetpos` function stores the current value of the file position
indicator for the stream pointed to by `stream` in the object pointed to
by `pos`. The value stored contains unspecified information usable by
the `fsetpos` function for repositioning the stream to its position at
the time of the call to the `fgetpos` function.
If successful, the `fgetpos` function returns zero; on failure, the
`fgetpos` function returns nonzero and stores an implementation-defined
positive value in `errno`.
The `fsetpos` function sets the file position indicator for the stream
pointed to by `stream` according to the value of the object pointed to
by `pos`, which shall be a value obtained from an earlier call to the
`fgetpos` function on the same stream.
A successful call to the `fsetpos` function clears the end-of-file
indicator for the stream and undoes any effects of the `ungetc` function
on the same stream. After an `fsetpos` call, the next operation on an
update stream may be either input or output.
If successful, the `fsetpos` function returns zero; on failure, the
`fsetpos` function returns nonzero and stores an implementation-defined
positive value in `errno`.
### The `fseek` and `ftell` functions
``` c
#include <stdio.h>
int fseek(FILE *stream, long int offset, int whence);
long int ftell(FILE *stream);
```
The `fseek` function sets the file position indicator for the stream
pointed to by `stream`.
For a binary stream, the new position, measured in characters from the
beginning of the file, is obtained by adding `offset` to the position
specified by `whence`. Three macros in `stdio.h` called `SEEK_SET`,
`SEEK_CUR`, and `SEEK_END` expand to unique values. If the position
specified by `whence` is `SEEK_SET`, the specified position is the
beginning of the file; if `whence` is `SEEK_END`, the specified position
is the end of the file; and if `whence` is `SEEK_CUR`, the specified
position is the current file position. A binary stream need not
meaningfully support `fseek` calls with a `whence` value of `SEEK_END`.
For a text stream, either `offset` shall be zero, or `offset` shall be a
value returned by an earlier call to the `ftell` function on the same
stream and `whence` shall be `SEEK_SET`.
The `fseek` function returns nonzero only for a request that cannot be
satisfied.
The `ftell` function obtains the current value of the file position
indicator for the stream pointed to by `stream`. For a binary stream,
the value is the number of characters from the beginning of the file;
for a text stream, its file position indicator contains unspecified
information, usable by the `fseek` function for returning the file
position indicator for the stream to its position at the time of the
`ftell` call; the difference between two such return values is not
necessarily a meaningful measure of the number of characters written or
read.
If successful, the `ftell` function returns the current value of the
file position indicator for the stream. On failure, the `ftell` function
returns `-1L` and stores an implementation-defined positive value in
`errno`.
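As an illustration (the file name is only an example), `fseek` and `ftell` are often combined on a binary stream to find a file's size in characters, subject to the caveats above:

``` c
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("example.bin", "rb");  /* hypothetical binary file */
    if (fp == NULL)
        return 1;

    if (fseek(fp, 0L, SEEK_END) == 0) {     /* move to end of file */
        long size = ftell(fp);              /* position = number of characters */
        if (size != -1L)
            printf("file size: %ld\n", size);
        rewind(fp);                         /* back to the beginning */
    }

    fclose(fp);
    return 0;
}
```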
### The `rewind` function
``` c
#include <stdio.h>
void rewind(FILE *stream);
```
The `rewind` function sets the file position indicator for the stream
pointed to by `stream` to the beginning of the file. It is equivalent to
`(void)fseek(stream, 0L, SEEK_SET)`
except that the error indicator for the stream is also cleared.
## Error Handling Functions
### The `clearerr` function
``` c
#include <stdio.h>
void clearerr(FILE *stream);
```
The `clearerr` function clears the end-of-file and error indicators for
the stream pointed to by `stream`.
### The `feof` function
``` c
#include <stdio.h>
int feof(FILE *stream);
```
The `feof` function tests the end-of-file indicator for the stream
pointed to by `stream` and returns nonzero if and only if the
end-of-file indicator is set for `stream`, otherwise it returns zero.
### The `ferror` function
``` c
#include <stdio.h>
int ferror(FILE *stream);
```
The `ferror` function tests the error indicator for the stream pointed
to by `stream` and returns nonzero if and only if the error indicator is
set for `stream`, otherwise it returns zero.
### The `perror` function
``` c
#include <stdio.h>
void perror(const char *s);
```
The `perror` function maps the error number in the integer expression
`errno` to an error message. It writes a sequence of characters to the
standard error stream thus: first, if `s` is not a null pointer and the
character pointed to by `s` is not the null character, the string
pointed to by `s` followed by a colon (`:`) and a space; then an
appropriate error message string followed by a new-line character. The
contents of the error message are the same as those returned by the
`strerror` function with the argument `errno`, which are
implementation-defined.
## Other Operations on Files
The `stdio.h` library has a variety of functions that do some operation
on files besides reading and writing.
### The `remove` function
``` c
#include <stdio.h>
int remove(const char *filename);
```
The `remove` function causes the file whose name is the string pointed
to by `filename` to be no longer accessible by that name. A subsequent
attempt to open that file using that name will fail, unless it is
created anew. If the file is open, the behavior of the `remove` function
is implementation-defined.
The `remove` function returns zero if the operation succeeds, nonzero if
it fails.
### The `rename` function
``` c
#include <stdio.h>
int rename(const char *old_filename, const char *new_filename);
```
The `rename` function causes the file whose name is the string pointed
to by `old_filename` to be henceforth known by the name given by the
string pointed to by `new_filename`. The file named `old_filename` is no
longer accessible by that name. If a file named by the string pointed to
by `new_filename` exists prior to the call to the `rename` function, the
behavior is implementation-defined.
The `rename` function returns zero if the operation succeeds, nonzero if
it fails, in which case if the file existed previously it is still known
by its original name.
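A small sketch (the file names are only examples) of renaming a file and checking the result:

``` c
#include <stdio.h>

int main(void)
{
    if (rename("old_name.txt", "new_name.txt") != 0) {
        perror("rename failed");   /* the old file keeps its original name */
        return 1;
    }
    return 0;
}
```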
### The `tmpfile` function
``` c
#include <stdio.h>
FILE *tmpfile(void);
```
The `tmpfile` function creates a temporary binary file that will
automatically be removed when it is closed or at program termination. If
the program terminates abnormally, whether an open temporary file is
removed is implementation-defined. The file is opened for update with
`"wb+"` mode.
The `tmpfile` function returns a pointer to the stream of the file that
it created. If the file cannot be created, the `tmpfile` function
returns a null pointer.
### The `tmpnam` function
``` c
#include <stdio.h>
char *tmpnam(char *s);
```
The `tmpnam` function generates a string that is a valid file name and
that is not the name of an existing file.
The `tmpnam` function generates a different string each time it is
called, up to `TMP_MAX` times. (`TMP_MAX` is a macro defined in
`stdio.h`.) If it is called more than `TMP_MAX` times, the behavior is
implementation-defined.
The implementation shall behave as if no library function calls the
`tmpnam` function.
If the argument is a null pointer, the `tmpnam` function leaves its
result in an internal static object and returns a pointer to that
object. Subsequent calls to the `tmpnam` function may modify the same
object. If the argument is not a null pointer, it is assumed to point to
an array of at least `L_tmpnam` characters (`L_tmpnam` is another macro
in `stdio.h`); the `tmpnam` function writes its result in that array and
returns the argument as its value.
The value of the macro `TMP_MAX` must be at least 25.
## Reading from Files
### Character Input Functions
#### The `fgetc` function
``` c
#include <stdio.h>
int fgetc(FILE *stream);
```
The `fgetc` function obtains the next character (if present) as an
`unsigned char` converted to an `int`, from the stream pointed to by
`stream`, and advances the associated file position indicator for the
stream (if defined).
The `fgetc` function returns the next character from the stream pointed
to by `stream`. If the stream is at end-of-file or a read error occurs,
`fgetc` returns `EOF` (`EOF` is a negative value defined in `<stdio.h>`,
usually `(-1)`). The routines `feof` and `ferror` must be used to
distinguish between end-of-file and error. If an error occurs, the
global variable `errno` is set to indicate the error.
#### The `fgets` function
``` C
#include <stdio.h>
char *fgets(char *s, int n, FILE *stream);
```
The `fgets` function reads at most one less than the number of
characters specified by `n` from the stream pointed to by `stream` into
the array pointed to by `s`. No additional characters are read after a
new-line character (which is retained) or after end-of-file. A null
character is written immediately after the last character read into the
array.
The `fgets` function returns `s` if successful. If end-of-file is
encountered and no characters have been read into the array, the
contents of the array remain unchanged and a null pointer is returned.
If a read error occurs during the operation, the array contents are
indeterminate and a null pointer is returned.
Warning: Different operating systems may use different character
sequences to represent the end-of-line sequence. For example, some
filesystems use the terminator `\r\n` in text files; `fgets` may read
those lines, removing the `\n` but keeping the `\r` as the last
character of `s`. This spurious character should be removed from the
string `s` before the string is used for anything (unless the programmer
doesn't care about it). Unix-like systems typically use `\n` as their
end-of-line sequence, MS-DOS and Windows use `\r\n`, and Mac OS used
`\r` before OS X. Many compilers on operating systems other than Unix or Linux map
newline sequences to `\n` on input for text files; check your
compiler\'s documentation to discover what it does in this situation.
``` c
/* An example program that reads from stdin and writes to stdout */
#include <stdio.h>
#define BUFFER_SIZE 100
int main(void)
{
char buffer[BUFFER_SIZE]; /* a read buffer */
while( fgets (buffer, BUFFER_SIZE, stdin) != NULL)
{
printf("%s",buffer);
}
return 0;
}
/* end program. */
```
#### The `getc` function
``` C
#include <stdio.h>
int getc(FILE *stream);
```
The `getc` function is equivalent to `fgetc`, except that it may be
implemented as a macro. If it is implemented as a macro, the `stream`
argument may be evaluated more than once, so the argument should never
be an expression with side effects (i.e. have an assignment, increment,
or decrement operators, or be a function call).
The `getc` function returns the next character from the input stream
pointed to by `stream`. If the stream is at end-of-file, the end-of-file
indicator for the stream is set and `getc` returns `EOF` (`EOF` is a
negative value defined in `<stdio.h>`, usually `(-1)`). If a read error
occurs, the error indicator for the stream is set and `getc` returns
`EOF`.
#### The `getchar` function
``` C
#include <stdio.h>
int getchar(void);
```
The `getchar` function is equivalent to `getc` with the argument
`stdin`.
The `getchar` function returns the next character from the input stream
pointed to by `stdin`. If `stdin` is at end-of-file, the end-of-file
indicator for `stdin` is set and `getchar` returns `EOF` (`EOF` is a
negative value defined in `<stdio.h>`, usually `(-1)`). If a read error
occurs, the error indicator for `stdin` is set and `getchar` returns
`EOF`.
#### The `gets` function
``` C
#include <stdio.h>
char *gets(char *s);
```
The `gets` function reads characters from the input stream pointed to by
`stdin` into the array pointed to by `s` until an end-of-file is
encountered or a new-line character is read. Any new-line character is
discarded, and a null character is written immediately after the last
character read into the array.
The `gets` function returns `s` if successful. If the end-of-file is
encountered and no characters have been read into the array, the
contents of the array remain unchanged and a null pointer is returned.
If a read error occurs during the operation, the array contents are
indeterminate and a null pointer is returned.
This function and description is only included here for completeness.
Most C programmers nowadays shy away from using `gets`, as there is no
way for the function to know how big the buffer is that the programmer
wants to read into.
Commandment #5 of Henry Spencer's *The Ten Commandments for C
Programmers (Annotated Edition)* warns against unbounded string
operations, and its annotation singles out `gets` as a function to
avoid. The `gets` function was deprecated in later revisions of C99 and
removed entirely from the C11 standard; programmers should use the
`fgets` function instead.
#### The `ungetc` function
``` C
#include <stdio.h>
int ungetc(int c, FILE *stream);
```
The `ungetc` function pushes the character specified by `c` (converted
to an `unsigned char`) back onto the input stream pointed to by stream.
The pushed-back characters will be returned by subsequent reads on that
stream in the reverse order of their pushing. A successful intervening
call (with the stream pointed to by `stream`) to a file-positioning
function (`fseek`, `fsetpos`, or `rewind`) discards any pushed-back
characters for the stream. The external storage corresponding to the
stream is unchanged.
One character of pushback is guaranteed. If the `ungetc` function is
called too many times on the same stream without an intervening read or
file positioning operation on that stream, the operation may fail.
If the value of `c` equals that of the macro `EOF`, the operation fails
and the input stream is unchanged.
A successful call to the `ungetc` function clears the end-of-file
indicator for the stream. The value of the file position indicator for
the stream after reading or discarding all pushed-back characters shall
be the same as it was before the characters were pushed back. For a text
stream, the value of its file-position indicator after a successful call
to the `ungetc` function is unspecified until all pushed-back characters
are read or discarded. For a binary stream, its file position indicator
is decremented by each successful call to the `ungetc` function; if its
value was zero before a call, it is indeterminate after the call.
The `ungetc` function returns the character pushed back after
conversion, or `EOF` if the operation fails.
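A common use of `ungetc` is to "peek" at the next character and push it back if it is not wanted. A minimal sketch (the function name is only illustrative) that reads a decimal number from stdin and leaves the first non-digit unread:

``` c
#include <stdio.h>
#include <ctype.h>

/* read a decimal number from stdin, leaving the first non-digit unread */
long read_number(void)
{
    long value = 0;
    int c;
    while ((c = getchar()) != EOF && isdigit(c))
        value = value * 10 + (c - '0');
    if (c != EOF)
        ungetc(c, stdin);   /* push the non-digit back for later input */
    return value;
}

int main(void)
{
    printf("%ld\n", read_number());
    return 0;
}
```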
### EOF pitfall
A mistake when using `fgetc`, `getc`, or `getchar` is to assign the
result to a variable of type `char` *before* comparing it to `EOF`. The
following code fragments exhibit this mistake, and then show the correct
approach (using type int):
+--------------------------------+--------------------------------+
| Mistake | Correction |
+================================+================================+
| ``` c | ``` c |
| char c; | int c; |
| while ((c = getchar()) != EOF) | while ((c = getchar()) != EOF) |
| putchar(c); | putchar(c); |
| ``` | ``` |
+--------------------------------+--------------------------------+
Consider a system in which the type `char` is 8 bits wide, representing
256 different values. `getchar` may return any of the 256 possible
characters, and it also may return `EOF` to indicate
end-of-file, for a total of 257 different
possible return values.
When `getchar`\'s result is assigned to a `char`, which can represent
only 256 different values, there is necessarily some loss of
information---when packing 257 items into 256 slots, there must be a
collision. The `EOF` value, when
converted to `char`, becomes indistinguishable from whichever one of the
256 characters shares its numerical value. If that character is found in
the file, the above example may mistake it for an end-of-file indicator;
or, just as bad, if type `char` is unsigned, then because `EOF` is
negative, it can never be equal to any unsigned `char`, so the above
example will not terminate at end-of-file. It will loop forever,
repeatedly printing the character which results from converting `EOF` to
`char`.
This looping failure mode does not occur if the char type
is signed (C makes the signedness of the plain char type
implementation-dependent),[^1] assuming the commonly used `EOF` value
of -1. The fundamental issue remains, however:
if the `EOF` value is defined outside of the range of the `char`
type, then when assigned to a `char` the value is truncated and no longer
matches the full `EOF` value necessary to exit the loop. On the other
hand, if `EOF` is within range of `char`, this guarantees a collision
between `EOF` and a char value. Thus, regardless of how system types are
defined, never use `char` types when testing against `EOF`.
On systems where `int` and `char` are the same size (i.e., systems
incompatible with minimally the POSIX and C99 standards), even the
\"good\" example will suffer from the indistinguishability of `EOF` and
some character\'s value. The proper way to handle this situation is to
check `feof` and `ferror` after
`getchar` returns `EOF`. If `feof` indicates that end-of-file has not
been reached, and `ferror` indicates that no errors have occurred, then
the `EOF` returned by `getchar` can be assumed to represent an actual
character. These extra checks are rarely done, because most programmers
assume that their code will never need to run on one of these \"big
`char`\" systems. Another way is to use a compile-time assertion to make
sure that `UINT_MAX > UCHAR_MAX`, which at least prevents a program with
such an assumption from compiling in such a system.
### Direct input function: the `fread` function
``` C
#include <stdio.h>
size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);
```
The `fread` function reads, into the array pointed to by `ptr`, up to
`nmemb` elements whose size is specified by `size`, from the stream
pointed to by `stream`. The file position indicator for the stream (if
defined) is advanced by the number of characters successfully read. If
an error occurs, the resulting value of the file position indicator for
the stream is indeterminate. If a partial element is read, its value is
indeterminate.
The `fread` function returns the number of elements successfully read,
which may be less than `nmemb` if a read error or end-of-file is
encountered. If `size` or `nmemb` is zero, `fread` returns zero and the
contents of the array and the state of the stream remain unchanged.
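A short sketch of reading a block of binary data with `fread` (the file name and element count are only illustrative), checking how many elements were actually read:

``` c
#include <stdio.h>

int main(void)
{
    int numbers[100];
    FILE *fp = fopen("numbers.bin", "rb");  /* hypothetical binary file */
    if (fp == NULL)
        return 1;

    /* try to read up to 100 ints; n is how many were actually read */
    size_t n = fread(numbers, sizeof numbers[0], 100, fp);
    printf("read %zu elements\n", n);

    fclose(fp);
    return 0;
}
```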
### Formatted input functions: the `scanf` family of functions
``` C
#include <stdio.h>
int fscanf(FILE *stream, const char *format, ...);
int scanf(const char *format, ...);
int sscanf(const char *s, const char *format, ...);
```
The `fscanf` function reads input from the stream pointed to by
`stream`, under control of the string pointed to by `format` that
specifies the admissible sequences and how they are to be converted for
assignment, using subsequent arguments as pointers to the objects to
receive converted input. If there are insufficient arguments for the
format, the behavior is undefined. If the format is exhausted while
arguments remain, the excess arguments are evaluated (as always) but are
otherwise ignored.
The format shall be a multibyte character sequence, beginning and ending
in its initial shift state. The format is composed of zero or more
directives: one or more white-space characters; an ordinary multibyte
character (neither `%` nor a white-space character); or a conversion
specification. Each conversion specification is introduced by the
character `%`. After the `%`, the following appear in sequence:
- An optional assignment-suppressing character `*`.
- An optional nonzero decimal integer that specifies the maximum field
width.
- An optional `h`, `l` (ell) or `L` indicating the size of the
receiving object. The conversion specifiers `d`, `i`, and `n` shall
be preceded by `h` if the corresponding argument is a pointer to
`short int` rather than a pointer to `int`, or by `l` if it is a
pointer to `long int`. Similarly, the conversion specifiers `o`,
`u`, and `x` shall be preceded by `h` if the corresponding argument
is a pointer to `unsigned short int` rather than `unsigned int`, or
by `l` if it is a pointer to `unsigned long int`. Finally, the
conversion specifiers `e`, `f`, and `g` shall be preceded by `l` if
the corresponding argument is a pointer to `double` rather than a
pointer to `float`, or by `L` if it is a pointer to `long double`.
If an `h`, `l`, or `L` appears with any other format specifier, the
behavior is undefined.
- A character that specifies the type of conversion to be applied. The
valid conversion specifiers are described below.
The `fscanf` function executes each directive of the format in turn. If
a directive fails, as detailed below, the `fscanf` function returns.
Failures are described as input failures (due to the unavailability of
input characters) or matching failures (due to inappropriate input).
A directive composed of white-space character(s) is executed by reading
input up to the first non-white-space character (which remains unread)
or until no more characters remain unread.
A directive that is an ordinary multibyte character is executed by
reading the next characters of the stream. If one of the characters
differs from one comprising the directive, the directive fails, and the
differing and subsequent characters remain unread.
A directive that is a conversion specification defines a set of matching
input sequences, as described below for each specifier. A conversion
specification is executed in the following steps:
Input white-space characters (as specified by the `isspace` function)
are skipped, unless the specification includes a `[`, `c`, or `n`
specifier. (The white-space characters are not counted against the
specified field width.)
An input item is read from the stream, unless the specification includes
an `n` specifier. An input item is defined as the longest matching
sequences of input characters, unless that exceeds a specified field
width, in which case it is the initial subsequence of that length in the
sequence. The first character, if any, after the input item remains
unread. If the length of the input item is zero, the execution of the
directive fails; this condition is a matching failure, unless an error
prevented input from the stream, in which case it is an input failure.
Except in the case of a `%` specifier, the input item (or, in the case
of a `%n` directive, the count of input characters) is converted to a
type appropriate to the conversion specifier. If the input item is not a
matching sequence, the execution of the directive fails; this condition
is a matching failure. Unless assignment suppression was indicated by a
`*`, the result of the conversion is placed in the object pointed to by
the first argument following the `format` argument that has not already
received a conversion result. If this object does not have an
appropriate type, or if the result of the conversion cannot be
represented in the space provided, the behavior is undefined.
The following conversion specifiers are valid:
`d` : Matches an optionally signed decimal integer, whose format is the same as expected for the subject sequence of the `strtol` function with the value 10 for the `base` argument. The corresponding argument shall be a pointer to integer.
```{=html}
<!-- -->
```
`i` : Matches an optionally signed integer, whose format is the same as expected for the subject sequence of the `strtol` function with the value 0 for the `base` argument. The corresponding argument shall be a pointer to integer.
```{=html}
<!-- -->
```
`o` : Matches an optionally signed octal integer, whose format is the same as expected for the subject sequence of the `strtoul` function with the value 8 for the `base` argument. The corresponding argument shall be a pointer to unsigned integer.
```{=html}
<!-- -->
```
`u` : Matches an optionally signed decimal integer, whose format is the same as expected for the subject sequence of the `strtoul` function with the value 10 for the `base` argument. The corresponding argument shall be a pointer to unsigned integer.
```{=html}
<!-- -->
```
`x` : Matches an optionally signed hexadecimal integer, whose format is the same as expected for the subject sequence of the `strtoul` function with the value 16 for the `base` argument. The corresponding argument shall be a pointer to unsigned integer.
```{=html}
<!-- -->
```
`e`, `f`, `g` : Matches an optionally signed floating-point number, whose format is the same as expected for the subject string of the `strtod` function. The corresponding argument will be a pointer to floating.
```{=html}
<!-- -->
```
`s` : Matches a sequence of non-white-space characters. (No special provisions are made for multibyte characters.) The corresponding argument shall be a pointer to the initial character of an array large enough to accept the sequence and a terminating null character, which will be added automatically.
```{=html}
<!-- -->
```
`[` : Matches a nonempty sequence of characters (no special provisions are made for multibyte characters) from a set of expected characters (the *scanset*). The corresponding argument shall be a pointer to the initial character of an array large enough to accept the sequence and a terminating null character, which will be added automatically. The conversion specifier includes all subsequent characters in the `format` string, up to and including the matching right bracket (`]`). The characters between the brackets (the *scanlist*) comprise the scanset, unless the character after the left bracket is a circumflex (`^`), in which case the scanset contains all the characters that do not appear in the scanlist between the circumflex and the right bracket. If the conversion specifier begins with `[]` or `[^]`, the right-bracket character is in the scanlist and the next right bracket character is the matching right bracket that ends the specification; otherwise, the first right bracket character is the one that ends the specification. If a `-` character is in the scanlist and is not the first, nor the second where the first character is a `^`, nor the last character, the behavior is implementation-defined.
```{=html}
<!-- -->
```
`c` : Matches a sequence of characters (no special provisions are made for multibyte characters) of the number specified by the field width (1 if no field width is present in the directive). The corresponding argument shall be a pointer to the initial character of an array large enough to accept the sequence. No null character is added.
```{=html}
<!-- -->
```
`p` : Matches an implementation-defined set of sequences, which should be the same as the set of sequences that may be produced by the `%p` conversion of the `fprintf` function. The corresponding argument shall be a pointer to `void`. The interpretation of the input then is implementation-defined. If the input item is a value converted earlier during the same program execution, the pointer that results shall compare equal to that value; otherwise the behavior of the `%p` conversion is undefined.
```{=html}
<!-- -->
```
`n` : No input is consumed. The corresponding argument shall be a pointer to integer into which is to be written the number of characters read from the input stream so far by this call to the `fscanf` function. Execution of a `%n` directive does not increment the assignment count returned at the completion of execution of the `fscanf` function.
```{=html}
<!-- -->
```
`%` : Matches a single `%`; no conversion or assignment occurs. The complete conversion specification shall be `%%`.
If a conversion specification is invalid, the behavior is undefined.
The conversion specifiers `E`, `G`, and `X` are also valid and behave
the same as, respectively, `e`, `g`, and `x`.
If end-of-file is encountered during input, conversion is terminated. If
end-of-file occurs before any characters matching the current directive
have been read (other than leading white space, where permitted),
execution of the current directive terminates with an input failure;
otherwise, unless execution of the current directive is terminated with
a matching failure, execution of the following directive (if any) is
terminated with an input failure.
If conversion terminates on a conflicting input character, the offending
input character is left unread in the input stream. Trailing white space
(including new-line characters) is left unread unless matched by a
directive. The success of literal matches and suppressed assignments is
not directly determinable other than via the `%n` directive.
The `fscanf` function returns the value of the macro `EOF` if an input
failure occurs before any conversion. Otherwise, the `fscanf` function
returns the number of input items assigned, which can be fewer than
provided for, or even zero, in the event of an early matching failure.
The `scanf` function is equivalent to `fscanf` with the argument `stdin`
interposed before the arguments to `scanf`. Its return value is similar
to that of `fscanf`.
The `sscanf` function is equivalent to `fscanf`, except that the
argument `s` specifies a string from which the input is to be obtained,
rather than from a stream. Reaching the end of the string is equivalent
to encountering the end-of-file for the `fscanf` function. If copying
takes place between objects that overlap, the behavior is undefined.
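As a brief illustration of the conversion rules above (the input string here is made up), `sscanf` can pull typed fields out of a string and report how many were assigned:

``` c
#include <stdio.h>

int main(void)
{
    const char *line = "temp 36.6 3 readings";  /* example input */
    char word[16];
    double value;
    int count;

    /* %15s limits the field width, leaving room for the terminating '\0' */
    int assigned = sscanf(line, "%15s %lf %d", word, &value, &count);
    if (assigned == 3)
        printf("%s: %.1f over %d readings\n", word, value, count);
    else
        printf("only %d items matched\n", assigned);
    return 0;
}
```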
## Writing to Files
### Character Output Functions
#### The `fputc` function
`#include <stdio.h>`\
`int fputc(int c, FILE *stream);`
The `fputc` function writes the character specified by `c` (converted to
an `unsigned char`) to the stream pointed to by `stream` at the position
indicated by the associated file position indicator (if defined), and
advances the indicator appropriately. If the file cannot support
positioning requests, or if the stream is opened with append mode, the
character is appended to the output stream. The function returns the
character written, unless a write error occurs, in which case the error
indicator for the stream is set and `fputc` returns `EOF`.
#### The `fputs` function
`#include <stdio.h>`\
`int fputs(const char *s, FILE *stream);`
The `fputs` function writes the string pointed to by `s` to the stream
pointed to by `stream`. The terminating null character is not written.
The function returns `EOF` if a write error occurs, otherwise it returns
a nonnegative value.
#### The `putc` function
`#include <stdio.h>`\
`int putc(int c, FILE *stream);`
The `putc` function is equivalent to `fputc`, except that if it is
implemented as a macro, it may evaluate `stream` more than once, so the
argument should never be an expression with side effects. The function
returns the character written, unless a write error occurs, in which
case the error indicator for the stream is set and the function returns
`EOF`.
#### The `putchar` function
`#include <stdio.h>`\
`int putchar(int c);`
The `putchar` function is equivalent to `putc` with the second argument
`stdout`. It returns the character written, unless a write error occurs,
in which case the error indicator for `stdout` is set and the function
returns `EOF`.
#### The `puts` function
`#include <stdio.h>`\
`int puts(const char *s);`
The `puts` function writes the string pointed to by `s` to the stream
pointed to by `stdout`, and appends a new-line character to the output.
The terminating null character is not written. The function returns
`EOF` if a write error occurs; otherwise, it returns a nonnegative
value.
### Direct output function: the `fwrite` function
`#include <stdio.h>`\
`size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);`
The `fwrite` function writes, from the array pointed to by `ptr`, up to
`nmemb` elements whose size is specified by `size` to the stream pointed
to by `stream`. The file position indicator for the stream (if defined)
is advanced by the number of characters successfully written. If an
error occurs, the resulting value of the file position indicator for the
stream is indeterminate. The function returns the number of elements
successfully written, which will be less than `nmemb` only if a write
error is encountered.
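A minimal sketch of `fwrite` (the file name and data are only illustrative), writing an array of ints and checking how many elements were actually written:

``` c
#include <stdio.h>

int main(void)
{
    int numbers[5] = {1, 2, 3, 4, 5};
    FILE *fp = fopen("numbers.bin", "wb");  /* hypothetical output file */
    if (fp == NULL)
        return 1;

    size_t written = fwrite(numbers, sizeof numbers[0], 5, fp);
    if (written < 5)
        fprintf(stderr, "write error after %zu elements\n", written);

    fclose(fp);
    return 0;
}
```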
### Formatted output functions: the `printf` family of functions
`#include <stdarg.h>`\
`#include <stdio.h>`\
`int fprintf(FILE *stream, const char *format, ...);`\
`int printf(const char *format, ...);`\
`int sprintf(char *s, const char *format, ...);`\
`int vfprintf(FILE *stream, const char *format, va_list arg);`\
`int vprintf(const char *format, va_list arg);`\
`int vsprintf(char *s, const char *format, va_list arg);`
*Note: Some length specifiers and format specifiers are new in C99.
These may not be available in older compilers and versions of the stdio
library, which adhere to the C89/C90 standard. Wherever possible, the
new ones will be marked with (C99).*
The `fprintf` function writes output to the stream pointed to by
`stream` under control of the string pointed to by `format` that
specifies how subsequent arguments are converted for output. If there
are insufficient arguments for the format, the behavior is undefined. If
the format is exhausted while arguments remain, the excess arguments are
evaluated (as always) but are otherwise ignored. The `fprintf` function
returns when the end of the format string is encountered.
The format shall be a multibyte character sequence, beginning and ending
in its initial shift state. The format is composed of zero or more
directives: ordinary multibyte characters (not `%`), which are copied
unchanged to the output stream; and conversion specifications, each of
which results in fetching zero or more subsequent arguments, converting
them, if applicable, according to the corresponding conversion
specifier, and then writing the result to the output stream.
Each conversion specification is introduced by the character `%`. After
the `%`, the following appear in sequence:
- Zero or more flags (in any order) that modify the meaning of the
conversion specification.
- An optional minimum field width. If the converted value has fewer
characters than the field width, it is padded with spaces (by
default) on the left (or right, if the left adjustment flag,
described later, has been given) to the field width. The field width
takes the form of an asterisk `*` (described later) or a decimal
integer. (Note that 0 is taken as a flag, not as the beginning of a
field width.)
- An optional precision that gives the minimum number of digits to
appear for the `d`, `i`, `o`, `u`, `x`, and `X` conversions, the
number of digits to appear after the decimal-point character for
`a`, `A`, `e`, `E`, `f`, and `F` conversions, the maximum number of
significant digits for the `g` and `G` conversions, or the maximum
number of characters to be written from a string in `s` conversions.
The precision takes the form of a period (`.`) followed either by an
asterisk `*` (described later) or by an optional decimal integer; if
only the period is specified, the precision is taken as zero. If a
precision appears with any other conversion specifier, the behavior
is undefined. Floating-point numbers are *rounded* to fit the
precision; i.e. `printf("%1.1f\n", 1.19);` produces `1.2`.
- An optional length modifier that specifies the size of the argument.
- A conversion specifier character that specifies the type of
conversion to be applied.
As noted above, a field width, or precision, or both, may be indicated
by an asterisk. In this case, an `int` argument supplies the field width
or precision. The arguments specifying field width, or precision, or
both, shall appear (in that order) before the argument (if any) to be
converted. A negative field width argument is taken as a `-` flag
followed by a positive field width. A negative precision argument is
taken as if the precision were omitted.
The flag characters and their meanings are:
`-` : The result of the conversion is left-justified within the field. (It is right-justified if this flag is not specified.)\
`+` : The result of a signed conversion always begins with a plus or minus sign. (It begins with a sign only when a negative value is converted if this flag is not specified. The results of all floating conversions of a negative zero, and of negative values that round to zero, include a minus sign.)\
*space* : If the first character of a signed conversion is not a sign, or if a signed conversion results in no characters, a space is prefixed to the result. If the space and `+` flags both appear, the space flag is ignored.\
`#` : The result is converted to an \"alternative form\". For `o` conversion, it increases the precision, if and only if necessary, to force the first digit of the result to be a zero (if the value and precision are both 0, a single 0 is printed). For `x` (or `X`) conversion, a nonzero result has `0x` (or `0X`) prefixed to it. For `a`, `A`, `e`, `E`, `f`, `F`, `g`, and `G` conversions, the result always contains a decimal-point character, even if no digits follow it. (Normally, a decimal-point character appears in the result of these conversions only if a digit follows it.) For `g` and `G` conversions, trailing zeros are not removed from the result. For other conversions, the behavior is undefined.\
`0` : For `d`, `i`, `o`, `u`, `x`, `X`, `a`, `A`, `e`, `E`, `f`, `F`, `g`, and `G` conversions, leading zeros (following any indication of sign or base) are used to pad to the field width; no space padding is performed. If the `0` and `-` flags both appear, the `0` flag is ignored. For `d`, `i`, `o`, `u`, `x`, and `X` conversions, if a precision is specified, the `0` flag is ignored. For other conversions, the behavior is undefined.
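The following short sketch shows a few of these flags in combination; the outputs noted in the comments are what a conforming implementation produces:
``` c
#include <stdio.h>

int main(void)
{
    printf("%+d\n", 42);         /* prints "+42": '+' forces a sign           */
    printf("% d\n", 42);         /* prints " 42": space reserves a sign slot  */
    printf("%08.2f\n", 3.5);     /* prints "00003.50": '0' pads with zeros    */
    printf("[%-8d]\n", 42);      /* prints "[42      ]": '-' left-justifies   */
    printf("%#o %#x\n", 8, 255); /* prints "010 0xff": '#' alternative forms  */
    return 0;
}
```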
The length modifiers and their meanings are:
`hh` : (C99) Specifies that a following `d`, `i`, `o`, `u`, `x`, or `X` conversion specifier applies to a `signed char` or `unsigned char` argument (the argument will have been promoted according to the integer promotions, but its value shall be converted to `signed char` or `unsigned char` before printing); or that a following `n` conversion specifier applies to a pointer to a `signed char` argument.
`h` : Specifies that a following `d`, `i`, `o`, `u`, `x`, or `X` conversion specifier applies to a `short int` or `unsigned short int` argument (the argument will have been promoted according to the integer promotions, but its value shall be converted to `short int` or `unsigned short int` before printing); or that a following `n` conversion specifier applies to a pointer to a `short int` argument.
`l` (ell) : Specifies that a following `d`, `i`, `o`, `u`, `x`, or `X` conversion specifier applies to a `long int` or `unsigned long int` argument; that a following `n` conversion specifier applies to a pointer to a `long int` argument; (C99) that a following `c` conversion specifier applies to a `wint_t` argument; (C99) that a following `s` conversion specifier applies to a pointer to a `wchar_t` argument; or has no effect on a following `a`, `A`, `e`, `E`, `f`, `F`, `g`, or `G` conversion specifier.
`ll` (ell-ell) : (C99) Specifies that a following `d`, `i`, `o`, `u`, `x`, or `X` conversion specifier applies to a `long long int` or `unsigned long long int` argument; or that a following `n` conversion specifier applies to a pointer to a `long long int` argument.
`j` : (C99) Specifies that a following `d`, `i`, `o`, `u`, `x`, or `X` conversion specifier applies to an `intmax_t` or `uintmax_t` argument; or that a following `n` conversion specifier applies to a pointer to an `intmax_t` argument.
`z` : (C99) Specifies that a following `d`, `i`, `o`, `u`, `x`, or `X` conversion specifier applies to a `size_t` or the corresponding signed integer type argument; or that a following `n` conversion specifier applies to a pointer to a signed integer type corresponding to `size_t` argument.
`t` : (C99) Specifies that a following `d`, `i`, `o`, `u`, `x`, or `X` conversion specifier applies to a `ptrdiff_t` or the corresponding unsigned integer type argument; or that a following `n` conversion specifier applies to a pointer to a `ptrdiff_t` argument.
`L` : Specifies that a following `a`, `A`, `e`, `E`, `f`, `F`, `g`, or `G` conversion specifier applies to a `long double` argument.
If a length modifier appears with any conversion specifier other than as
specified above, the behavior is undefined.
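A minimal sketch of the more common length modifiers, assuming a C99 implementation (needed for `%lld`, `%zu`, and `%td`):
``` c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    long big = 1234567L;
    long long bigger = 9876543210LL;
    size_t count = sizeof(long long);
    ptrdiff_t diff = 4;

    printf("%ld\n", big);     /* 'l'  : long int            */
    printf("%lld\n", bigger); /* 'll' : long long int (C99) */
    printf("%zu\n", count);   /* 'z'  : size_t (C99)        */
    printf("%td\n", diff);    /* 't'  : ptrdiff_t (C99)     */
    return 0;
}
```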
The conversion specifiers and their meanings are:
`d`, `i` : The `int` argument is converted to signed decimal in the style *\[−\]dddd*. The precision specifies the minimum number of digits to appear; if the value being converted can be represented in fewer digits, it is expanded with leading zeros. The default precision is 1. The result of converting a zero value with a precision of zero is no characters.
`o`, `u`, `x`, `X` : The `unsigned int` argument is converted to unsigned octal (`o`), unsigned decimal (`u`), or unsigned hexadecimal notation (`x` or `X`) in the style *dddd*; the letters **`abcdef`** are used for `x` conversion and the letters **`ABCDEF`** for `X` conversion. The precision specifies the minimum number of digits to appear; if the value being converted can be represented in fewer digits, it is expanded with leading zeros. The default precision is 1. The result of converting a zero value with a precision of zero is no characters.
`f`, `F` : A `double` argument representing a (finite) floating-point number is converted to decimal notation in the style *\[−\]ddd*`.`*ddd*, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is zero and the `#` flag is not specified, no decimal-point character appears. If a decimal-point character appears, at least one digit appears before it. The value is rounded to the appropriate number of digits.\
(C99) A `double` argument representing an infinity is converted in one of the styles *\[-\]*`inf` or *\[-\]*`infinity` --- which style is implementation-defined. A `double` argument representing a NaN is converted in one of the styles *\[-\]*`nan` or *\[-\]*`nan(`*n-char-sequence*`)` --- which style, and the meaning of any *n-char-sequence*, is implementation-defined. The `F` conversion specifier produces `INF`, `INFINITY`, or `NAN` instead of `inf`, `infinity`, or `nan`, respectively. (When applied to infinite and NaN values, the `-`, `+`, and *space* flags have their usual meaning; the `#` and `0` flags have no effect.)
`e`, `E` : A `double` argument representing a (finite) floating-point number is converted in the style *\[−\]d*`.`*ddd*`e±`*dd*, where there is one digit (which is nonzero if the argument is nonzero) before the decimal-point character and the number of digits after it is equal to the precision; if the precision is missing, it is taken as 6; if the precision is zero and the `#` flag is not specified, no decimal-point character appears. The value is rounded to the appropriate number of digits. The `E` conversion specifier produces a number with `E` instead of `e` introducing the exponent. The exponent always contains at least two digits, and only as many more digits as necessary to represent the exponent. If the value is zero, the exponent is zero.\
(C99) A `double` argument representing an infinity or NaN is converted in the style of an `f` or `F` conversion specifier.
`g`, `G` : A `double` argument representing a (finite) floating-point number is converted in style `f` or `e` (or in style `F` or `E` in the case of a `G` conversion specifier), with the precision specifying the number of significant digits. If the precision is zero, it is taken as 1. The style used depends on the value converted; style `e` (or `E`) is used only if the exponent resulting from such a conversion is less than −4 or greater than or equal to the precision. Trailing zeros are removed from the fractional portion of the result unless the `#` flag is specified; a decimal-point character appears only if it is followed by a digit.\
(C99) A `double` argument representing an infinity or NaN is converted in the style of an `f` or `F` conversion specifier.
`a`, `A` : (C99) A `double` argument representing a (finite) floating-point number is converted in the style *\[−\]*`0x`*h*`.`*hhhh*`p±`*d*, where there is one hexadecimal digit (which is nonzero if the argument is a normalized floating-point number and is otherwise unspecified) before the decimal-point character (binary implementations can choose the hexadecimal digit to the left of the decimal-point character so that subsequent digits align to nibble \[4-bit\] boundaries) and the number of hexadecimal digits after it is equal to the precision; if the precision is missing and `FLT_RADIX` is a power of 2, then the precision is sufficient for an exact representation of the value; if the precision is missing and `FLT_RADIX` is not a power of 2, then the precision is sufficient to distinguish values of type `double` (the precision *p* is sufficient to distinguish values of the source type if 16^*p*−1^ \> *b*^*n*^ where *b* is `FLT_RADIX` and *n* is the number of base-*b* digits in the significand of the source type; a smaller *p* might suffice depending on the implementation's scheme for determining the digit to the left of the decimal-point character), except that trailing zeros may be omitted; if the precision is zero and the `#` flag is not specified, no decimal-point character appears. The letters **`abcdef`** are used for `a` conversion and the letters **`ABCDEF`** for `A` conversion. The `A` conversion specifier produces a number with `X` and `P` instead of `x` and `p`. The exponent always contains at least one digit, and only as many more digits as necessary to represent the decimal exponent of 2. If the value is zero, the exponent is zero.\
A `double` argument representing an infinity or NaN is converted in the style of an `f` or `F` conversion specifier.
`c` : If no `l` length modifier is present, the `int` argument is converted to an `unsigned char`, and the resulting character is written.\
(C99) If an `l` length modifier is present, the `wint_t` argument is converted as if by an `ls` conversion specification with no precision and an argument that points to the initial element of a two-element array of `wchar_t`, the first element containing the `wint_t` argument to the `lc` conversion specification and the second a null wide character.
`s` : If no `l` length modifier is present, the argument shall be a pointer to the initial element of an array of character type. (No special provisions are made for multibyte characters.) Characters from the array are written up to (but not including) the terminating null character. If the precision is specified, no more than that many characters are written. If the precision is not specified or is greater than the size of the array, the array shall contain a null character.\
(C99) If an `l` length modifier is present, the argument shall be a pointer to the initial element of an array of `wchar_t` type. Wide characters from the array are converted to multibyte characters (each as if by a call to the `wcrtomb` function, with the conversion state described by an `mbstate_t` object initialized to zero before the first wide character is converted) up to and including a terminating null wide character. The resulting multibyte characters are written up to (but not including) the terminating null character (byte). If no precision is specified, the array shall contain a null wide character. If a precision is specified, no more than that many characters (bytes) are written (including shift sequences, if any), and the array shall contain a null wide character if, to equal the multibyte character sequence length given by the precision, the function would need to access a wide character one past the end of the array. In no case is a partial multibyte character written. (Redundant shift sequences may result if multibyte characters have a state-dependent encoding.)
`p` : The argument shall be a pointer to `void`. The value of the pointer is converted to a sequence of printable characters, in an implementation-defined manner.
`n` : The argument shall be a pointer to signed integer into which is written the number of characters written to the output stream so far by this call to `fprintf`. No argument is converted, but one is consumed. If the conversion specification includes any flags, a field width, or a precision, the behavior is undefined.
`%` : A `%` character is written. No argument is converted. The complete conversion specification shall be `%%`.
If a conversion specification is invalid, the behavior is undefined. If
any argument is not the correct type for the corresponding conversion
specification, the behavior is undefined.
In no case does a nonexistent or small field width cause truncation of a
field; if the result of a conversion is wider than the field width, the
field is expanded to contain the conversion result.
For `a` and `A` conversions, if `FLT_RADIX` is a power of 2, the value
is correctly rounded to a hexadecimal floating number with the given
precision.
It is recommended practice that if `FLT_RADIX` is not a power of 2, the
result should be one of the two adjacent numbers in hexadecimal floating
style with the given precision, with the extra stipulation that the
error should have a correct sign for the current rounding direction.
It is recommended practice that for `e`, `E`, `f`, `F`, `g`, and `G`
conversions, if the number of significant decimal digits is at most
`DECIMAL_DIG`, then the result should be correctly rounded. (For
binary-to-decimal conversion, the result format\'s values are the
numbers representable with the given format specifier. The number of
significant digits is determined by the format specifier, and in the
case of fixed-point conversion by the source value as well.) If the
number of significant decimal digits is more than `DECIMAL_DIG` but the
source value is exactly representable with `DECIMAL_DIG` digits, then
the result should be an exact representation with trailing zeros.
Otherwise, the source value is bounded by two adjacent decimal strings
*L* \< *U*, both having `DECIMAL_DIG` significant
digits; the value of the resultant decimal string *D* should satisfy
*L* ≤ *D* ≤ *U*, with the extra stipulation that the error should have a
correct sign for the current rounding direction.
The `fprintf` function returns the number of characters transmitted, or
a negative value if an output or encoding error occurred.
The `printf` function is equivalent to `fprintf` with the argument
`stdout` interposed before the arguments to `printf`. It returns the
number of characters transmitted, or a negative value if an output error
occurred.
The `sprintf` function is equivalent to `fprintf`, except that the
argument `s` specifies an array into which the generated output is to be
written, rather than to a stream. A null character is written at the end
of the characters written; it is not counted as part of the returned
sum. If copying takes place between objects that overlap, the behavior
is undefined. The function returns the number of characters written in
the array, not counting the terminating null character.
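A small sketch of `sprintf` building a string in a caller-supplied buffer (the buffer size here is only an assumption; the caller must guarantee it is large enough):
``` c
#include <stdio.h>

int main(void)
{
    char buf[32];  /* must be large enough for the result plus the null byte */
    int n = sprintf(buf, "%s is %d years old", "Pat", 30);

    printf("\"%s\" (%d characters)\n", buf, n);  /* 19 characters; the null is not counted */
    return 0;
}
```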
The `vfprintf` function is equivalent to `fprintf`, with the variable
argument list replaced by `arg`, which shall have been initialized by
the `va_start` macro (and possibly subsequent `va_arg` calls). The
`vfprintf` function does not invoke the `va_end` macro. The function
returns the number of characters transmitted, or a negative value if an
output error occurred.
The `vprintf` function is equivalent to `printf`, with the variable
argument list replaced by `arg`, which shall have been initialized by
the `va_start` macro (and possibly subsequent `va_arg` calls). The
`vprintf` function does not invoke the `va_end` macro. The function
returns the number of characters transmitted, or a negative value if an
output error occurred.
The `vsprintf` function is equivalent to `sprintf`, with the variable
argument list replaced by `arg`, which shall have been initialized by
the `va_start` macro (and possibly subsequent `va_arg` calls). The
`vsprintf` function does not invoke the `va_end` macro. If copying takes
place between objects that overlap, the behavior is undefined. The
function returns the number of characters written into the array, not
counting the terminating null character.
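A common use of `vfprintf` is writing a wrapper that forwards its own variable arguments; here is a minimal sketch (the wrapper name `log_error` is ours, not part of any library):
``` c
#include <stdio.h>
#include <stdarg.h>

/* Hypothetical wrapper: prints a prefix, then forwards the format string
   and argument list to vfprintf. */
static void log_error(const char *fmt, ...)
{
    va_list args;

    va_start(args, fmt);
    fputs("error: ", stderr);
    vfprintf(stderr, fmt, args);
    va_end(args);  /* the caller of vfprintf is responsible for va_end */
}

int main(void)
{
    log_error("cannot open %s (code %d)\n", "data.txt", 2);
    return 0;
}
```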
## References
[^1]: C99 §6.2.5/15
# C Programming/String manipulation
A **string** in C is merely an array of characters. The length of a
string is determined by a terminating null character: `'\0'`. So, a
string with the contents, say, `"abc"` has four characters: `'a'`,
`'b'`, `'c'`, and the terminating null (`'\0'`) character.
The terminating null character has the value zero.
## Syntax
In C, string constants (literals) are surrounded by double quotes (`"`),
e.g. `"Hello world!"` and are compiled to an array of the specified
`char` values with an additional null terminating character (0-valued)
code to mark the end of the string. The type of a string constant is
`char []`.
### Backslash escapes
String literals may not directly contain embedded newlines or other
control characters in the source code, nor certain other characters
that have special meaning within a string.
To include such characters in a string, the backslash escapes may be
used, like this:
Escape Meaning
---------- -----------------------------------------------------------------------
`\\` Literal backslash
`\"` Double quote
`\'` Single quote
`\n` Newline (line feed)
`\r` Carriage return
`\b` Backspace
`\t` Horizontal tab
`\f` Form feed
`\a` Alert (bell)
`\v` Vertical tab
`\?` Question mark (used to escape trigraphs)
`\`*nnn* Character with octal value *nnn*
`\x`*hh* Character with hexadecimal value *hh*
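A brief sketch using a few of these escapes (the numeric escapes print 'A' and 'B' only on ASCII-based systems):
``` c
#include <stdio.h>

int main(void)
{
    printf("She said \"hi\"\tsecond column\n");  /* embedded quotes and a tab    */
    printf("Octal 101: \101  Hex 42: \x42\n");   /* 'A' and 'B' on ASCII systems */
    return 0;
}
```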
### Wide character strings
C supports wide character strings, defined as arrays of the type
`wchar_t`, values wide enough to hold any member of the largest extended
character set (commonly 16 or 32 bits). They are written with an L before
the string, like this:
: `wchar_t *p = L"Hello world!";`
This feature allows strings where more than 256 different possible
characters are needed (although also variable length `char` strings can
be used). They end with a zero-valued `wchar_t`. These strings are not
supported by the `<string.h>` functions. Instead they have their own
functions, declared in `<wchar.h>`.
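A minimal sketch of a wide string used with the `<wchar.h>` functions, here `wcslen` and `wprintf` with the `%ls` conversion:
``` c
#include <wchar.h>

int main(void)
{
    wchar_t greeting[] = L"Hello world!";

    /* wcslen counts wide characters, not bytes. */
    wprintf(L"%ls has %zu wide characters\n", greeting, wcslen(greeting));
    return 0;
}
```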
### Character encodings
What character encoding the `char` and `wchar_t` represent is not
specified by the C standard, except that the values 0x00 and 0x0000
specify the end of the string and not a character. Input and output
code is the part most directly affected by the character encoding; other
code should not be affected much. The editor must also be able to
handle the encoding if strings are to be written in the source
code.
There are three major types of encodings:
- One byte per character. Normally based on ASCII. There is a limit of
255 different characters plus the zero termination character.
- Variable-length `char` strings, which allow many more than 255
  different characters. Such strings are written as normal
  `char`-based arrays. These encodings are normally ASCII-based;
  examples include UTF-8 and Shift JIS.
- Wide character strings. They are arrays of `wchar_t` values.
UTF-16 is the most common such encoding, and it
is also variable-length, meaning that a character can be two
`wchar_t`.
## The `<string.h>` Standard Header
Because programmers find raw strings cumbersome to deal with, they wrote
the code in the `<string.h>` library. It represents not a concerted
design effort but rather the accretion of contributions made by various
authors over a span of years.
First, three types of functions exist in the string library:
- the `mem` functions manipulate sequences of arbitrary characters
without regard to the null character;
- the `str` functions manipulate null-terminated sequences of
characters;
- the `strn` functions manipulate sequences of non-null characters.
### The more commonly-used string functions
The nine most commonly used functions in the string library are:
- `strcat` - concatenate two strings
- `strchr` - string scanning operation
- `strcmp` - compare two strings
- `strcpy` - copy a string
- `strlen` - get string length
- `strncat` - concatenate one string with part of another
- `strncmp` - compare parts of two strings
- `strncpy` - copy part of a string
- `strrchr` - string scanning operation
Other functions, such as `strlwr` (convert to lower case), `strrev`
(return the string reversed), and `strupr` (convert to upper case) may
be popular; however, they are neither specified by the C Standard nor
the Single Unix Standard. It is also unspecified whether these functions
return copies of the original strings or convert the strings in place.
#### The `strcat` function
``` C
char *strcat(char * restrict s1, const char * restrict s2);
```
*Some people recommend using* `strncat()` *or* `strlcat()` *instead of
strcat, in order to avoid buffer overflow.*
The `strcat()` function shall append a copy of the string pointed to by
`s2` (including the terminating null byte) to the end of the string
pointed to by `s1`. The initial byte of `s2` overwrites the null byte at
the end of `s1`. If copying takes place between objects that overlap,
the behavior is undefined. The function returns `s1`.
This function is used to attach one string to the end of another string.
It is imperative that the first string (`s1`) have the space needed to
store both strings.
Example:
``` c
#include <stdio.h>
#include <string.h>
...
static const char *colors[] = {"Red","Orange","Yellow","Green","Blue","Purple" };
static const char *widths[] = {"Thin","Medium","Thick","Bold" };
...
char penText[20];
...
int penColor = 3, penThickness = 2;
strcpy(penText, colors[penColor]);
strcat(penText, widths[penThickness]);
printf("My pen is %s\n", penText); /* prints 'My pen is GreenThick' */
```
Before calling `strcat()`, the destination must currently contain a null
terminated string or the first character must have been initialized with
the null character (e.g. `penText[0] = '\0';`).
The following is a public-domain implementation of `strcat`:
``` c
#include <string.h>
/* strcat */
char *(strcat)(char *restrict s1, const char *restrict s2)
{
char *s = s1;
/* Move s so that it points to the end of s1. */
while (*s != '\0')
s++;
/* Copy the contents of s2 into the space at the end of s1. */
strcpy(s, s2);
return s1;
}
```
#### The `strchr` function
``` C
char *strchr(const char *s, int c);
```
The `strchr()` function shall locate the first occurrence of `c`
(converted to a `char`) in the string pointed to by `s`. The terminating
null byte is considered to be part of the string. The function returns
the location of the found character, or a null pointer if the character
was not found.
This function is used to find certain characters in strings.
At one point in history, this function was named `index`. The `strchr`
name, however cryptic, fits the general pattern for naming.
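A small usage sketch: locating a character and computing its index (the sample string is ours):
``` c
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *name = "hello.c";
    char *dot = strchr(name, '.');

    if (dot != NULL)
        printf("'.' found at index %ld\n", (long)(dot - name));  /* index 5 */
    else
        printf("'.' not found\n");
    return 0;
}
```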
The following is a public-domain implementation of `strchr`:
``` c
#include <string.h>
/* strchr */
char *(strchr)(const char *s, int c)
{
char ch = c;
/* Scan s for the character. When this loop is finished,
s will either point to the end of the string or the
character we were looking for. */
while (*s != '\0' && *s != ch)
s++;
return (*s == ch) ? (char *) s : NULL;
}
```
#### The `strcmp` function
``` C
int strcmp(const char *s1, const char *s2);
```
A rudimentary form of string comparison is done with the strcmp()
function. It takes two strings as arguments and returns a value less
than zero if the first is lexicographically less than the second, a value
greater than zero if the first is lexicographically greater than the
second, or zero if the two strings are equal. The comparison is done by
comparing the coded (e.g. ASCII) value of the characters, character by
character.
This simple type of string comparison is nowadays generally considered
unacceptable when sorting lists of strings. More advanced algorithms
exist that are capable of producing lists in dictionary sorted order.
They can also fix problems such as strcmp() considering the string
\"Alpha2\" greater than \"Alpha12\". (In the previous example,
\"Alpha2\" compares greater than \"Alpha12\" because \'2\' comes after
\'1\' in the character set.) What we\'re saying is, don\'t use this
`strcmp()` alone for general string sorting in any commercial or
professional code.
The `strcmp()` function shall compare the string pointed to by `s1` to
the string pointed to by `s2`. The sign of a non-zero return value shall
be determined by the sign of the difference between the values of the
first pair of bytes (both interpreted as type `unsigned char`) that
differ in the strings being compared. Upon completion, `strcmp()` shall
return an integer greater than, equal to, or less than 0, if the string
pointed to by `s1` is greater than, equal to, or less than the string
pointed to by `s2`, respectively.
Since comparing pointers by themselves is not practically useful unless
one is comparing pointers within the same array, this function lexically
compares the strings that two pointers point to.
This function is useful in comparisons, e.g.
`if (strcmp(s, "whatever") == 0) /* do something */`\
` ;`
The collating sequence used by `strcmp()` is equivalent to the
machine\'s native character set. The only guarantee about the order is
that the digits from `'0'` to `'9'` are in consecutive order.
The following is a public-domain implementation of `strcmp`:
``` c
#include <string.h>
/* strcmp */
int (strcmp)(const char *s1, const char *s2)
{
unsigned char uc1, uc2;
/* Move s1 and s2 to the first differing characters
in each string, or the ends of the strings if they
are identical. */
while (*s1 != '\0' && *s1 == *s2) {
s1++;
s2++;
}
/* Compare the characters as unsigned char and
return the difference. */
uc1 = (*(unsigned char *) s1);
uc2 = (*(unsigned char *) s2);
return ((uc1 < uc2) ? -1 : (uc1 > uc2));
}
```
#### The `strcpy` function
``` C
char *strcpy(char *restrict s1, const char *restrict s2);
```
*Some people recommend always using* `strncpy()` *instead of strcpy, to
avoid buffer overflow.*
The `strcpy()` function shall copy the C string pointed to by `s2`
(including the terminating null byte) into the array pointed to by `s1`.
If copying takes place between objects that overlap, the behavior is
undefined. The function returns `s1`. There is no value used to indicate
an error: if the arguments to `strcpy()` are correct, and the
destination buffer is large enough, the function will never fail.
Example:
``` c
#include <stdio.h>
#include <string.h>
/* ... */
static const char *penType="round";
/* ... */
char penText[20];
/* ... */
strcpy(penText, penType);
```
Important: You must ensure that the destination buffer (`s1`) is able to
contain all the characters in the source array, including the
terminating null byte. Otherwise, `strcpy()` will overwrite memory past
the end of the buffer, causing a buffer overflow, which can cause the
program to crash, or can be exploited by hackers to compromise the
security of the computer.
The following is a public-domain implementation of `strcpy`:
``` c
#include <string.h>
/* strcpy */
char *(strcpy)(char *restrict s1, const char *restrict s2)
{
char *dst = s1;
const char *src = s2;
/* Do the copying in a loop. */
while ((*dst++ = *src++) != '\0')
; /* The body of this loop is left empty. */
/* Return the destination string. */
return s1;
}
```
#### The `strlen` function
``` C
size_t strlen(const char *s);
```
The `strlen()` function shall compute the number of bytes in the string
to which `s` points, not including the terminating null byte. It returns
the number of bytes in the string. No value is used to indicate an
error.
The following is a public-domain implementation of `strlen`:
``` c
#include <string.h>
/* strlen */
size_t (strlen)(const char *s)
{
const char *p = s; /* pointer to character constant */
/* Loop over the data in s. */
while (*p != '\0')
p++;
return (size_t)(p - s);
}
```
Note how the line
``` C
const char *p = s
```
declares and initializes a pointer `p` to constant characters, i.e. the
characters that `p` points to cannot be modified through `p`.
#### The `strncat` function
``` C
char *strncat(char *restrict s1, const char *restrict s2, size_t n);
```
The `strncat()` function shall append not more than `n` bytes (a null
byte and bytes that follow it are not appended) from the array pointed
to by `s2` to the end of the string pointed to by `s1`. The initial byte
of `s2` overwrites the null byte at the end of `s1`. A terminating null
byte is always appended to the result. If copying takes place between
objects that overlap, the behavior is undefined. The function returns
`s1`.
The following is a public-domain implementation of `strncat`:
``` c
#include <string.h>
/* strncat */
char *(strncat)(char *restrict s1, const char *restrict s2, size_t n)
{
char *s = s1;
/* Loop over the data in s1. */
while (*s != '\0')
s++;
/* s now points to s1's trailing null character, now copy
up to n bytes from s2 into s stopping if a null character
is encountered in s2.
It is not safe to use strncpy here since it copies EXACTLY n
characters, NULL padding if necessary. */
while (n != 0 && (*s = *s2++) != '\0') {
n--;
s++;
}
if (*s != '\0')
*s = '\0';
return s1;
}
```
#### The `strncmp` function
``` C
int strncmp(const char *s1, const char *s2, size_t n);
```
The `strncmp()` function shall compare not more than `n` bytes (bytes
that follow a null byte are not compared) from the array pointed to by
`s1` to the array pointed to by `s2`. The sign of a non-zero return
value is determined by the sign of the difference between the values of
the first pair of bytes (both interpreted as type `unsigned char`) that
differ in the strings being compared. See `strcmp` for an explanation of
the return value.
This function is useful in comparisons, as the `strcmp` function is.
The following is a public-domain implementation of `strncmp`:
``` c
#include <string.h>
/* strncmp */
int (strncmp)(const char *s1, const char *s2, size_t n)
{
unsigned char uc1, uc2;
/* Nothing to compare? Return zero. */
if (n == 0)
return 0;
/* Loop, comparing bytes. */
while (n-- > 0 && *s1 == *s2) {
/* If we've run out of bytes or hit a null, return zero
since we already know *s1 == *s2. */
if (n == 0 || *s1 == '\0')
return 0;
s1++;
s2++;
}
uc1 = (*(unsigned char *) s1);
uc2 = (*(unsigned char *) s2);
return ((uc1 < uc2) ? -1 : (uc1 > uc2));
}
```
#### The `strncpy` function
``` C
char *strncpy(char *restrict s1, const char *restrict s2, size_t n);
```
The `strncpy()` function shall copy not more than `n` bytes (bytes that
follow a null byte are not copied) from the array pointed to by `s2` to
the array pointed to by `s1`. If copying takes place between objects
that overlap, the behavior is undefined. If the array pointed to by `s2`
is a string that is shorter than `n` bytes, null bytes shall be appended
to the copy in the array pointed to by `s1`, until `n` bytes in all are
written. The function shall return s1; no return value is reserved to
indicate an error.
It is possible that the function will **not**
return a null-terminated string, which happens if the `s2` string is
longer than `n` bytes.
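Because of this, a common defensive pattern is to copy at most `n - 1` bytes and terminate the destination yourself; a minimal sketch:
``` c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char dst[8];
    const char *src = "a fairly long source string";

    strncpy(dst, src, sizeof dst - 1);  /* copy at most 7 bytes */
    dst[sizeof dst - 1] = '\0';         /* guarantee null termination */
    printf("%s\n", dst);                /* prints "a fairl" */
    return 0;
}
```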
The following is a public-domain version of `strncpy`:
``` c
#include <string.h>
/* strncpy */
char *(strncpy)(char *restrict s1, const char *restrict s2, size_t n)
{
char *dst = s1;
const char *src = s2;
/* Copy bytes, one at a time. */
while (n > 0) {
n--;
if ((*dst++ = *src++) == '\0') {
/* If we get here, we found a null character at the end
of s2, so use memset to put null bytes at the end of
s1. */
memset(dst, '\0', n);
break;
}
}
return s1;
}
```
#### The `strrchr` function
``` C
char *strrchr(const char *s, int c);
```
The `strrchr` function is similar to the `strchr` function, except that
`strrchr` returns a pointer to the **last**
occurrence of `c` within `s` instead of the first.
The `strrchr()` function shall locate the last occurrence of `c`
(converted to a `char`) in the string pointed to by `s`. The terminating
null byte is considered to be part of the string. Its return value is
similar to `strchr`\'s return value.
At one point in history, this function was named `rindex`. The `strrchr`
name, however cryptic, fits the general pattern for naming.
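A typical use is picking out the last path separator or the file extension; a small sketch:
``` c
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *path = "/usr/local/archive.tar.gz";
    char *ext = strrchr(path, '.');  /* last '.', not the first one */

    if (ext != NULL)
        printf("extension: %s\n", ext);  /* prints ".gz" */
    return 0;
}
```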
The following is a public-domain implementation of `strrchr`:
``` c
#include <string.h>
/* strrchr */
char *(strrchr)(const char *s, int c)
{
const char *last = NULL;
/* If the character we're looking for is the terminating null,
we just need to look for that character as there's only one
of them in the string. */
if (c == '\0')
return strchr(s, c);
/* Loop through, finding the last match before hitting NULL. */
while ((s = strchr(s, c)) != NULL) {
last = s;
s++;
}
return (char *) last;
}
```
### The less commonly-used string functions
The less-used functions are:
- `memchr` - Find a byte in memory
- `memcmp` - Compare bytes in memory
- `memcpy` - Copy bytes in memory
- `memmove` - Copy bytes in memory with overlapping areas
- `memset` - Set bytes in memory
- `strcoll` - Compare bytes according to a locale-specific collating
sequence
- `strcspn` - Get the length of a complementary substring
- `strerror` - Get error message
- `strpbrk` - Scan a string for a byte
- `strspn` - Get the length of a substring
- `strstr` - Find a substring
- `strtok` - Split a string into tokens
- `strxfrm` - Transform string
#### Copying functions
##### The `memcpy` function
``` C
void *memcpy(void * restrict s1, const void * restrict s2, size_t n);
```
The `memcpy()` function shall copy `n` bytes from the object pointed to
by `s2` into the object pointed to by `s1`. If copying takes place
between objects that overlap, the behavior is undefined. The function
returns `s1`.
Because the function does not have to worry about overlap, it can do the
simplest copy it can.
The following is a public-domain implementation of `memcpy`:
``` c
#include <string.h>
/* memcpy */
void *(memcpy)(void * restrict s1, const void * restrict s2, size_t n)
{
char *dst = s1;
const char *src = s2;
/* Loop and copy. */
while (n-- != 0)
*dst++ = *src++;
return s1;
}
```
##### The `memmove` function
``` C
void *memmove(void *s1, const void *s2, size_t n);
```
The `memmove()` function shall copy `n` bytes from the object pointed to
by `s2` into the object pointed to by `s1`. Copying takes place as if
the `n` bytes from the object pointed to by `s2` are first copied into a
temporary array of `n` bytes that does not overlap the objects pointed
to by `s1` and `s2`, and then the `n` bytes from the temporary array are
copied into the object pointed to by `s1`. The function returns the
value of `s1`.
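A usage sketch with overlapping regions, where `memcpy` would be undefined but `memmove` is safe:
``` c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[] = "abcdef";

    /* Shift five bytes one position to the right within the same array;
       source and destination overlap, so memmove is required. */
    memmove(buf + 1, buf, 5);
    buf[0] = ' ';
    printf("%s\n", buf);  /* prints " abcde" */
    return 0;
}
```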
The easy way to implement this without using a temporary array is to
check for a condition that would prevent an ascending copy, and if
found, do a descending copy.
The following is a public-domain, though not completely portable,
implementation of `memmove`:
``` c
#include <string.h>
/* memmove */
void *(memmove)(void *s1, const void *s2, size_t n)
{
/* note: these don't have to point to unsigned chars */
char *p1 = s1;
const char *p2 = s2;
/* test for overlap that prevents an ascending copy */
if (p2 < p1 && p1 < p2 + n) {
/* do a descending copy */
p2 += n;
p1 += n;
while (n-- != 0)
*--p1 = *--p2;
} else
while (n-- != 0)
*p1++ = *p2++;
return s1;
}
```
#### Comparison functions
##### The `memcmp` function
``` C
int memcmp(const void *s1, const void *s2, size_t n);
```
The `memcmp()` function shall compare the first `n` bytes (each
interpreted as `unsigned char`) of the object pointed to by `s1` to the
first `n` bytes of the object pointed to by `s2`. The sign of a non-zero
return value shall be determined by the sign of the difference between
the values of the first pair of bytes (both interpreted as type
`unsigned char`) that differ in the objects being compared.
The following is a public-domain implementation of `memcmp`:
``` c
#include <string.h>
/* memcmp */
int (memcmp)(const void *s1, const void *s2, size_t n)
{
const unsigned char *us1 = (const unsigned char *) s1;
const unsigned char *us2 = (const unsigned char *) s2;
while (n-- != 0) {
if (*us1 != *us2)
return (*us1 < *us2) ? -1 : +1;
us1++;
us2++;
}
return 0;
}
```
##### The `strcoll` and `strxfrm` functions
``` C
int strcoll(const char *s1, const char *s2);
```
`size_t strxfrm(char *s1, const char *s2, size_t n);`
The ANSI C Standard specifies two locale-specific comparison functions.
The `strcoll` function compares the string pointed to by `s1` to the
string pointed to by `s2`, both interpreted as appropriate to the
`LC_COLLATE` category of the current locale. The return value is similar
to `strcmp`.
The `strxfrm` function transforms the string pointed to by `s2` and
places the resulting string into the array pointed to by `s1`. The
transformation is such that if the `strcmp` function is applied to the
two transformed strings, it returns a value greater than, equal to, or
less than zero, corresponding to the result of the `strcoll` function
applied to the same two original strings. No more than `n` characters
are placed into the resulting array pointed to by `s1`, including the
terminating null character. If `n` is zero, `s1` is permitted to be a
null pointer. If copying takes place between objects that overlap, the
behavior is undefined. The function returns the length of the
transformed string.
These functions are rarely used and nontrivial to code, so there is no
code for this section.
#### Search functions
##### The `memchr` function
``` C
void *memchr(const void *s, int c, size_t n);
```
The `memchr()` function shall locate the first occurrence of `c`
(converted to an `unsigned char`) in the initial `n` bytes (each
interpreted as `unsigned char`) of the object pointed to by `s`. If `c`
is not found, `memchr` returns a null pointer.
The following is a public-domain implementation of `memchr`:
``` c
#include <string.h>
/* memchr */
void *(memchr)(const void *s, int c, size_t n)
{
const unsigned char *src = s;
unsigned char uc = c;
while (n-- != 0) {
if (*src == uc)
return (void *) src;
src++;
}
return NULL;
}
```
##### The `strcspn`, `strpbrk`, and `strspn` functions
``` C
size_t strcspn(const char *s1, const char *s2);
```
``` C
char *strpbrk(const char *s1, const char *s2);
```
``` C
size_t strspn(const char *s1, const char *s2);
```
The `strcspn` function computes the length of the maximum initial
segment of the string pointed to by `s1` which consists entirely of
characters **not** from the string pointed to by
`s2`.
The `strpbrk` function locates the first occurrence in the string
pointed to by `s1` of any character from the string pointed to by `s2`,
returning a pointer to that character or a null pointer if not found.
The `strspn` function computes the length of the maximum initial segment
of the string pointed to by `s1` which consists entirely of characters
from the string pointed to by `s2`.
All of these functions are similar except in the test and the return
value.
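A brief usage sketch of all three on the same input; the values in the comments follow directly from the definitions above:
``` c
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "42abc;rest";

    printf("%zu\n", strspn(s, "0123456789")); /* 2: leading digits           */
    printf("%zu\n", strcspn(s, ";"));         /* 5: length before the ';'    */
    printf("%s\n",  strpbrk(s, ";,"));        /* ";rest": from the first ';' */
    return 0;
}
```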
The following are public-domain implementations of `strcspn`, `strpbrk`,
and `strspn`:
``` c
#include <string.h>
/* strcspn */
size_t (strcspn)(const char *s1, const char *s2)
{
const char *sc1;
for (sc1 = s1; *sc1 != '\0'; sc1++)
if (strchr(s2, *sc1) != NULL)
return (sc1 - s1);
return sc1 - s1; /* terminating nulls match */
}
```
``` c
#include <string.h>
/* strpbrk */
char *(strpbrk)(const char *s1, const char *s2)
{
const char *sc1;
for (sc1 = s1; *sc1 != '\0'; sc1++)
if (strchr(s2, *sc1) != NULL)
return (char *)sc1;
return NULL; /* terminating nulls match */
}
```
``` c
#include <string.h>
/* strspn */
size_t (strspn)(const char *s1, const char *s2)
{
const char *sc1;
for (sc1 = s1; *sc1 != '\0'; sc1++)
if (strchr(s2, *sc1) == NULL)
return (sc1 - s1);
return sc1 - s1; /* terminating nulls don't match */
}
```
##### The `strstr` function
``` C
char *strstr(const char *haystack, const char *needle);
```
The `strstr()` function shall locate the first occurrence in the string
pointed to by `haystack` of the sequence of bytes (excluding the
terminating null byte) in the string pointed to by `needle`. The
function returns the pointer to the matching string in `haystack` or a
null pointer if a match is not found. If `needle` is an empty string,
the function returns `haystack`.
The following is a public-domain implementation of `strstr`:
``` c
#include <string.h>
/* strstr */
char *(strstr)(const char *haystack, const char *needle)
{
size_t needlelen;
/* Check for the null needle case. */
if (*needle == '\0')
return (char *) haystack;
needlelen = strlen(needle);
for (; (haystack = strchr(haystack, *needle)) != NULL; haystack++)
if (memcmp(haystack, needle, needlelen) == 0)
return (char *) haystack;
return NULL;
}
```
##### The `strtok` function
``` C
char *strtok(char *restrict s1, const char *restrict delimiters);
```
A sequence of calls to `strtok()` breaks the string pointed to by `s1`
into a sequence of tokens, each of which is delimited by a byte from the
string pointed to by `delimiters`. The first call in the sequence has
`s1` as its first argument, and is followed by calls with a null pointer
as their first argument. The separator string pointed to by `delimiters`
may be different from call to call.
The first call in the sequence searches the string pointed to by `s1`
for the first byte that is not contained in the current separator string
pointed to by `delimiters`. If no such byte is found, then there are no
tokens in the string pointed to by `s1` and `strtok()` shall return a
null pointer. If such a byte is found, it is the start of the first
token.
The `strtok()` function then searches from there for a byte (or
multiple, consecutive bytes) that is contained in the current separator
string. If no such byte is found, the current token extends to the end
of the string pointed to by `s1`, and subsequent searches for a token
shall return a null pointer. If such a byte is found, it is overwritten
by a null byte, which terminates the current token. The `strtok()`
function saves a pointer to the following byte, from which the next
search for a token shall start.
Each subsequent call, with a null pointer as the value of the first
argument, starts searching from the saved pointer and behaves as
described above.
The `strtok()` function need not be reentrant. A function that is not
required to be reentrant is not required to be thread-safe.
Because the `strtok()` function must save state between calls, and you
could not have two tokenizers going at the same time, the Single Unix
Standard defined a similar function, `strtok_r()`, that keeps its state
in a caller-supplied pointer instead. Its prototype is this:
`char *strtok_r(char *s, const char *delimiters, char **lasts);`
The `strtok_r()` function considers the null-terminated string `s` as a
sequence of zero or more text tokens separated by spans of one or more
characters from the separator string `delimiters`. The argument `lasts`
points to a user-provided pointer which points to stored information
necessary for `strtok_r()` to continue scanning the same string.
In the first call to `strtok_r()`, `s` points to a null-terminated
string, `delimiters` to a null-terminated string of separator
characters, and the value pointed to by `lasts` is ignored. The
`strtok_r()` function shall return a pointer to the first character of
the first token, write a null character into `s` immediately following
the returned token, and update the pointer to which `lasts` points.
In subsequent calls, `s` is a null pointer and `lasts` shall be
unchanged from the previous call so that subsequent calls shall move
through the string `s`, returning successive tokens until no tokens
remain. The separator string `delimiters` may be different from call to
call. When no token remains in `s`, a NULL pointer shall be returned.
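A typical tokenizing loop using `strtok_r`; since `strtok_r` comes from the Single Unix Standard, this sketch assumes a POSIX system (the input must be a writable array, because the `strtok` family overwrites delimiters with null bytes):
``` c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[] = "alpha,beta;;gamma";  /* writable copy, not a string literal pointer */
    char *saved;
    char *token = strtok_r(line, ",;", &saved);

    while (token != NULL) {
        printf("token: %s\n", token);  /* alpha, then beta, then gamma */
        token = strtok_r(NULL, ",;", &saved);
    }
    return 0;
}
```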
The following public-domain code for `strtok` and `strtok_r` codes the
former as a special case of the latter:
``` c
#include <string.h>
/* strtok_r */
char *(strtok_r)(char *s, const char *delimiters, char **lasts)
{
char *sbegin, *send;
sbegin = s ? s : *lasts;
sbegin += strspn(sbegin, delimiters);
if (*sbegin == '\0') {
*lasts = "";
return NULL;
}
send = sbegin + strcspn(sbegin, delimiters);
if (*send != '\0')
*send++ = '\0';
*lasts = send;
return sbegin;
}
/* strtok */
char *(strtok)(char *restrict s1, const char *restrict delimiters)
{
static char *ssave = "";
return strtok_r(s1, delimiters, &ssave);
}
```
#### Miscellaneous functions
These functions do not fit into one of the above categories.
##### The `memset` function
``` C
void *memset(void *s, int c, size_t n);
```
The `memset()` function converts `c` into `unsigned char`, then stores
the character into the first `n` bytes of memory pointed to by `s`.
The following is a public-domain implementation of `memset`:
``` c
#include <string.h>
/* memset */
void *(memset)(void *s, int c, size_t n)
{
unsigned char *us = s;
unsigned char uc = c;
while (n-- != 0)
*us++ = uc;
return s;
}
```
##### The `strerror` function
``` C
char *strerror(int errorcode);
```
This function returns a locale-specific error message corresponding to
the parameter. Depending on the circumstances, this function could be
trivial to implement, but this author will not do that as it varies.
The Single Unix System Version 3 has a variant, `strerror_r`, with this
prototype:
`int strerror_r(int errcode, char *buf, size_t buflen);`
This function stores the message in `buf`, which is `buflen` bytes long.
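A small usage sketch with `errno` (most implementations leave a useful error code in `errno` when `fopen` fails, though ISO C does not strictly require it):
``` c
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    FILE *fp = fopen("no-such-file.txt", "r");

    if (fp == NULL) {
        /* Translate the code left in errno into readable text. */
        printf("fopen failed: %s\n", strerror(errno));
        return 1;
    }
    fclose(fp);
    return 0;
}
```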
## Examples
To determine the number of characters in a string, the `strlen()`
function is used:
``` c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>   /* for malloc and free */
...
int length, length2;
char *turkey;
static char *flower= "begonia";
static char *gemstone="ruby ";
length = strlen(flower);
printf("Length = %d\n", length); // prints 'Length = 7'
length2 = strlen(gemstone);
turkey = malloc( length + length2 + 1);
if (turkey) {
strcpy( turkey, gemstone);
strcat( turkey, flower);
printf( "%s\n", turkey); // prints 'ruby begonia'
free( turkey );
}
```
Note that the amount of memory allocated for \'turkey\' is one plus the
sum of the lengths of the strings to be concatenated. This is for the
terminating null character, which is not counted in the lengths of the
strings.
### Exercises
1. The string functions use a lot of looping constructs. Is there some
way to portably unravel the loops?
2. What functions are possibly missing from the library as it stands
now?
## References
- A Little C Primer/C String Function
Library
- C++
Programming/Code/IO/Streams/string
- Because so many functions in the standard `string.h` library are
vulnerable to buffer overflow errors, some
people recommend avoiding the
`string.h` library and \"C style strings\" and instead using a
dynamic string API, such as the ones listed in the String library
comparison.
- There\'s a tiny public domain concat() function, which will
allocate memory and safely concatenate any number of strings in
portable C/C++
code
# C Programming/Further math
The `<math.h>` header contains prototypes for several functions that
deal with mathematics. In the 1990 version of the ISO standard, only the
`double` versions of the functions were specified; the 1999 version
added the `float` and `long double` versions. To use these math
functions, you must link your program with the math library. For some
compilers (including GCC), you must specify the additional parameter
`-lm`[^1][^2].
The math functions may produce one of two kinds of errors. *Domain
errors* occur when the parameters to the functions are invalid, such as
a negative number as a parameter to `sqrt` (the square root function).
*Range errors* occur when the result of the function cannot be expressed
in that particular floating-point type, such as `pow(1000.0, 1000.0)` if
the maximum value of a double is around 10^308^.
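A brief sketch of detecting a domain error through `errno`; on C99 implementations this works when `math_errhandling & MATH_ERRNO` is nonzero, which is the common case:
``` c
#include <stdio.h>
#include <math.h>
#include <errno.h>

int main(void)
{
    double r;

    errno = 0;
    r = sqrt(-1.0);  /* domain error: negative argument */
    if (errno == EDOM)
        printf("sqrt reported a domain error (result: %f)\n", r);
    return 0;
}
```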
The functions can be grouped into the following categories:
## Trigonometric functions
### The `acos` and `asin` functions
The `acos` functions return the arccosine of their arguments in radians,
and the `asin` functions return the arcsine of their arguments in
radians. All functions expect the argument in the range \[-1,+1\]. The
arccosine returns a value in the range \[0,π\]; the arcsine returns a
value in the range \[-π/2,+π/2\].
``` c
#include <math.h>
float asinf(float x); /* C99 */
float acosf(float x); /* C99 */
double asin(double x);
double acos(double x);
long double asinl(long double x); /* C99 */
long double acosl(long double x); /* C99 */
```
### The `atan` and `atan2` functions
The `atan` functions return the arctangent of their arguments in
radians, and the `atan2` function return the arctangent of `y/x` in
radians. The `atan` functions return a value in the range \[-π/2,+π/2\]
(the reason why ±π/2 are included in the range is because the
floating-point value may represent infinity, and atan(±∞) = ±π/2); the
`atan2` functions return a value in the range \[-π,+π\]. For `atan2`, a
domain error may occur if both arguments are zero.
``` c
#include <math.h>
float atanf(float x); /* C99 */
float atan2f(float y, float x); /* C99 */
double atan(double x);
double atan2(double y, double x);
long double atanl(long double x); /* C99 */
long double atan2l(long double y, long double x); /* C99 */
```
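A short sketch of why `atan2` is usually preferred over `atan(y/x)` when converting Cartesian coordinates to an angle: it uses the signs of both arguments to select the correct quadrant.
``` c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = -1.0, y = 1.0;  /* a point in the second quadrant */

    printf("atan(y/x)   = %f\n", atan(y / x));  /* about -0.785: wrong quadrant  */
    printf("atan2(y, x) = %f\n", atan2(y, x));  /* about  2.356: 3*pi/4, correct */
    return 0;
}
```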
### The `cos`, `sin`, and `tan` functions
The `cos`, `sin`, and `tan` functions return the cosine, sine, and
tangent of the argument, expressed in radians.
``` c
#include <math.h>
float cosf(float x); /* C99 */
float sinf(float x); /* C99 */
float tanf(float x); /* C99 */
double cos(double x);
double sin(double x);
double tan(double x);
long double cosl(long double x); /* C99 */
long double sinl(long double x); /* C99 */
long double tanl(long double x); /* C99 */
```
## Hyperbolic functions
The `cosh`, `sinh` and `tanh` functions compute the hyperbolic cosine,
the hyperbolic sine, and the hyperbolic tangent of the argument
respectively. For the hyperbolic sine and cosine functions, a range
error occurs if the magnitude of the argument is too large.
The `acosh` functions compute the inverse hyperbolic cosine of the
argument. A domain error occurs for arguments less than 1.
The `asinh` functions compute the inverse hyperbolic sine of the
argument.
The `atanh` functions compute the inverse hyperbolic tangent of the
argument. A domain error occurs if the argument is not in the interval
\[-1, +1\]. A range error may occur if the argument equals -1 or +1.
``` c
#include <math.h>
float coshf(float x); /* C99 */
float sinhf(float x); /* C99 */
float tanhf(float x); /* C99 */
double cosh(double x);
double sinh(double x);
double tanh(double x);
long double coshl(long double x); /* C99 */
long double sinhl(long double x); /* C99 */
long double tanhl(long double x); /* C99 */
float acoshf(float x); /* C99 */
float asinhf(float x); /* C99 */
float atanhf(float x); /* C99 */
double acosh(double x); /* C99 */
double asinh(double x); /* C99 */
double atanh(double x); /* C99 */
long double acoshl(long double x); /* C99 */
long double asinhl(long double x); /* C99 */
long double atanhl(long double x); /* C99 */
```
## Exponential and logarithmic functions
### The `exp`, `exp2`, and `expm1` functions
The `exp` functions compute the base-*e* exponential function of `x`
(*e*^x^). A range error occurs if the magnitude of `x` is too large.
The `exp2` functions compute the base-2 exponential function of `x`
(2^x^). A range error occurs if the magnitude of `x` is too large.
The `expm1` functions compute the base-*e* exponential function of the
argument, minus 1. A range error occurs if the magnitude of `x` is too
large.
``` c
#include <math.h>
float expf(float x); /* C99 */
double exp(double x);
long double expl(long double x); /* C99 */
float exp2f(float x); /* C99 */
double exp2(double x); /* C99 */
long double exp2l(long double x); /* C99 */
float expm1f(float x); /* C99 */
double expm1(double x); /* C99 */
long double expm1l(long double x); /* C99 */
```
### The `frexp`, `ldexp`, `modf`, `scalbn`, and `scalbln` functions
These functions are heavily used in software floating-point emulators,
but are otherwise rarely directly called.
Inside the computer, each floating point number is represented by two
parts:
- The significand is either in the range \[1/2, 1), or it equals zero.
- The exponent is an integer.
The value of a floating point number $v$ is
$v = {\rm significand} \times 2^{\rm exponent}$.
The `frexp` functions break the argument floating point number `value`
into those two parts, the exponent and significand. After breaking it
apart, it stores the exponent in the `int` object pointed to by `ex`,
and returns the significand. In other words, the value returned is a
copy of the given floating point number but with an exponent replaced by
0. If `value` is zero, both parts of the result are zero.
The `ldexp` functions multiply a floating-point number by an integral
power of 2 and return the result. In other words, they return a copy of
the given floating point number with the exponent increased by `ex`. A range
error may occur.
The `modf` functions break the argument `value` into integer and
fraction parts, each of which has the same sign as the argument. They
store the integer part in the object pointed to by `*iptr` and return
the fraction part. The `*iptr` is a floating-point type, rather than an
\"int\" type, because it might be used to store an integer like 1 000
000 000 000 000 000 000 which is too big to fit in an int.
The `scalbn` and `scalbln` compute `x` × `FLT_RADIX`^`n`^. `FLT_RADIX`
is the base of the floating-point system; if it is 2, the functions are
equivalent to `ldexp`.
``` c
#include <math.h>
float frexpf(float value, int *ex); /* C99 */
double frexp(double value, int *ex);
long double frexpl(long double value, int *ex); /* C99 */
float ldexpf(float x, int ex); /* C99 */
double ldexp(double x, int ex);
long double ldexpl(long double x, int ex); /* C99 */
float modff(float value, float *iptr); /* C99 */
double modf(double value, double *iptr);
long double modfl(long double value, long double *iptr); /* C99 */
float scalbnf(float x, int ex); /* C99 */
double scalbn(double x, int ex); /* C99 */
long double scalbnl(long double x, int ex); /* C99 */
float scalblnf(float x, long int ex); /* C99 */
double scalbln(double x, long int ex); /* C99 */
long double scalblnl(long double x, long int ex); /* C99 */
```
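A short sketch of `frexp`, `ldexp`, and `modf`; the values shown assume the usual binary (`FLT_RADIX == 2`) floating-point representation:
``` c
#include <stdio.h>
#include <math.h>

int main(void)
{
    int ex;
    double frac, ipart, fpart;

    frac = frexp(96.0, &ex);               /* 96 = 0.75 * 2^7 */
    printf("96 = %f * 2^%d\n", frac, ex);  /* prints 0.750000 and 7 */

    printf("ldexp: %f\n", ldexp(0.75, 7)); /* rebuilds 96.000000 */

    fpart = modf(3.25, &ipart);            /* splits into 3.0 and 0.25 */
    printf("modf: %f + %f\n", ipart, fpart);
    return 0;
}
```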
Most C floating-point libraries also implement the IEEE 754-recommended
`nextafter()`, `nextUp()`, and `nextDown()` functions.
### The `log`, `log2`, `log1p`, and `log10` functions
The `log` functions compute the base-*e* natural logarithm of the
argument and return the result. A domain error occurs if the argument is
negative. A range error may occur if the argument is zero.
The `log1p` functions compute the base-*e* natural logarithm of one plus
the argument and return the result. A domain error occurs if the
argument is less than -1. A range error may occur if the argument is -1.
The `log10` functions compute the common (base-10) logarithm of the
argument and return the result. A domain error occurs if the argument is
negative. A range error may occur if the argument is zero.
The `log2` functions compute the base-2 logarithm of the argument and
return the result. A domain error occurs if the argument is negative. A
range error may occur if the argument is zero.
``` c
#include <math.h>
float logf(float x); /* C99 */
double log(double x);
long double logl(long double x); /* C99 */
float log1pf(float x); /* C99 */
double log1p(double x); /* C99 */
long double log1pl(long double x); /* C99 */
float log10f(float x); /* C99 */
double log10(double x);
long double log10l(long double x); /* C99 */
float log2f(float x); /* C99 */
double log2(double x); /* C99 */
long double log2l(long double x); /* C99 */
```
### The `ilogb` and `logb` functions
The `ilogb` functions extract the exponent of `x` as a signed int value.
If `x` is zero, they return the value `FP_ILOGB0`; if `x` is infinite,
they return the value `INT_MAX`; if `x` is not-a-number they return the
value `FP_ILOGBNAN`; otherwise, they are equivalent to calling the
corresponding `logb` function and casting the returned value to type
`int`. A range error may occur if `x` is zero. `FP_ILOGB0` and
`FP_ILOGBNAN` are macros defined in `math.h`; `INT_MAX` is a macro
defined in `limits.h`.
The `logb` functions extract the exponent of `x` as a signed integer
value in floating-point format. If `x` is subnormal, it is treated as if
it were normalized; thus, for positive finite `x`, 1 ≤ `x` ×
`FLT_RADIX`^`-logb(x)`^ \< `FLT_RADIX` . `FLT_RADIX` is the radix for
floating-point numbers, defined in the `float.h` header.
``` c
#include <math.h>
int ilogbf(float x); /* C99 */
int ilogb(double x); /* C99 */
int ilogbl(long double x); /* C99 */
float logbf(float x); /* C99 */
double logb(double x); /* C99 */
long double logbl(long double x); /* C99 */
```
## Power functions
### The `pow` functions
The `pow` functions compute `x` raised to the power `y` and return the
result. A domain error occurs if `x` is negative and `y` is not an
integral value. A domain error occurs if the result cannot be
represented when `x` is zero and `y` is less than or equal to zero. A
range error may occur.
``` c
#include <math.h>
float powf(float x, float y); /* C99 */
double pow(double x, double y);
long double powl(long double x, long double y); /* C99 */
```
### The `sqrt` functions
The `sqrt` functions compute the nonnegative square root of `x` and return
the result. A domain error occurs if the argument is negative.
``` c
#include <math.h>
float sqrtf(float x); /* C99 */
double sqrt(double x);
long double sqrtl(long double x); /* C99 */
```
### The `cbrt` functions
The `cbrt` functions compute the cube root of `x` and return the result.
``` c
#include <math.h>
float cbrtf(float x); /* C99 */
double cbrt(double x); /* C99 */
long double cbrtl(long double x); /* C99 */
```
### The `hypot` functions
The `hypot` functions compute the square root of the sum of the squares
of `x` and `y`, without overflow or underflow, and return the result.
``` c
#include <math.h>
float hypotf(float x, float y); /* C99 */
double hypot(double x, double y); /* C99 */
long double hypotl(long double x, long double y); /* C99 */
```
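The phrase "without overflow or underflow" is the main reason to prefer `hypot` over the obvious formula; a minimal sketch of the difference:
``` c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 1e200, y = 1e200;
    /* x*x overflows to infinity, so the naive formula prints "inf" */
    printf("sqrt(x*x + y*y) = %g\n", sqrt(x * x + y * y));
    /* hypot rescales internally and prints roughly 1.41421e+200 */
    printf("hypot(x, y)     = %g\n", hypot(x, y));
    return 0;
}
```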
## Nearest integer, absolute value, and remainder functions
### The `ceil` and `floor` functions
The `ceil` functions compute the smallest integral value not less than
`x` and return the result; the `floor` functions compute the largest
integral value not greater than `x` and return the result.
``` c
#include <math.h>
float ceilf(float x); /* C99 */
double ceil(double x);
long double ceill(long double x); /* C99 */
float floorf(float x); /* C99 */
double floor(double x);
long double floorl(long double x); /* C99 */
```
### The `fabs` functions
The `fabs` functions compute the absolute value of a floating-point
number `x` and return the result.
``` c
#include <math.h>
float fabsf(float x); /* C99 */
double fabs(double x);
long double fabsl(long double x); /* C99 */
```
### The `fmod` functions
The `fmod` functions compute the floating-point remainder of `x/y` and
return the value `x` - *i* \* `y`, for some integer *i* such that, if
`y` is nonzero, the result has the same sign as `x` and magnitude less
than the magnitude of `y`. If `y` is zero, whether a domain error occurs
or the `fmod` functions return zero is implementation-defined.
``` c
#include <math.h>
float fmodf(float x, float y); /* C99 */
double fmod(double x, double y);
long double fmodl(long double x, long double y); /* C99 */
```
### The `nearbyint`, `rint`, `lrint`, and `llrint` functions
The `nearbyint` functions round their argument to an integer value in
floating-point format, using the current rounding direction and without
raising the \"inexact\" floating-point exception.
The `rint` functions are similar to the `nearbyint` functions except
that they can raise the \"inexact\" floating-point exception if the
result differs in value from the argument.
The `lrint` and `llrint` functions round their arguments to the nearest
integer value according to the current rounding direction. If the result
is outside the range of values of the return type, the numeric result is
undefined and a range error may occur if the magnitude of the argument
is too large.
``` c
#include <math.h>
float nearbyintf(float x); /* C99 */
double nearbyint(double x); /* C99 */
long double nearbyintl(long double x); /* C99 */
float rintf(float x); /* C99 */
double rint(double x); /* C99 */
long double rintl(long double x); /* C99 */
long int lrintf(float x); /* C99 */
long int lrint(double x); /* C99 */
long int lrintl(long double x); /* C99 */
long long int llrintf(float x); /* C99 */
long long int llrint(double x); /* C99 */
long long int llrintl(long double x); /* C99 */
```
### The `round`, `lround`, and `llround` functions
The `round` functions round the argument to the nearest integer value in
floating-point format, rounding halfway cases away from zero, regardless
of the current rounding direction.
The `lround` and `llround` functions round the argument to the nearest
integer value, rounding halfway cases away from zero, regardless of the
current rounding direction. If the result is outside the range of values
of the return type, the numeric result is undefined and a range error
may occur if the magnitude of the argument is too large.
``` c
#include <math.h>
float roundf(float x); /* C99 */
double round(double x); /* C99 */
long double roundl(long double x); /* C99 */
long int lroundf(float x); /* C99 */
long int lround(double x); /* C99 */
long int lroundl(long double x); /* C99 */
long long int llroundf(float x); /* C99 */
long long int llround(double x); /* C99 */
long long int llroundl(long double x); /* C99 */
```
### The `trunc` functions
The `trunc` functions round their argument to the integer value in
floating-point format that is nearest but no larger in magnitude than
the argument.
``` c
#include <math.h>
float truncf(float x); /* C99 */
double trunc(double x); /* C99 */
long double truncl(long double x); /* C99 */
```
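A short sketch comparing the rounding families on the same halfway value makes the differences concrete:
``` c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = -2.5;
    printf("ceil(%g)  = %g\n", x, ceil(x));   /* -2 */
    printf("floor(%g) = %g\n", x, floor(x));  /* -3 */
    printf("trunc(%g) = %g\n", x, trunc(x));  /* -2 */
    printf("round(%g) = %g\n", x, round(x));  /* -3: halfway rounds away from zero */
    return 0;
}
```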
### The `remainder` functions
The `remainder` functions compute the remainder `x` REM `y` as defined
by IEC 60559. The definition reads, \"When *y* ≠ 0, the remainder *r* =
*x* REM *y* is defined regardless of the rounding mode by the
mathematical reduction *r* = *x* - *ny*, where *n* is the integer
nearest the exact value of *x*/*y*; whenever \|*n* - *x*/*y*\| = ½, then
*n* is even. Thus, the remainder is always exact. If *r* = 0, its sign
shall be that of *x*.\" This definition is applicable for all
implementations.
``` c
#include <math.h>
float remainderf(float x, float y); /* C99 */
double remainder(double x, double y); /* C99 */
long double remainderl(long double x, long double y); /* C99 */
```
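The difference between `fmod` and `remainder` is easy to miss. This illustrative sketch shows that `fmod` keeps the sign of `x`, while `remainder` uses the nearest multiple of `y` and so can return a negative result for a positive `x`:
``` c
#include <stdio.h>
#include <math.h>

int main(void)
{
    printf("fmod(5.5, 2.0)       = %g\n", fmod(5.5, 2.0));       /*  1.5 */
    printf("remainder(5.5, 2.0)  = %g\n", remainder(5.5, 2.0));  /* -0.5 */
    printf("fmod(-5.5, 2.0)      = %g\n", fmod(-5.5, 2.0));      /* -1.5 */
    printf("remainder(-5.5, 2.0) = %g\n", remainder(-5.5, 2.0)); /*  0.5 */
    return 0;
}
```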
### The `remquo` functions
The `remquo` functions return the same remainder as the `remainder`
functions. In the object pointed to by `quo`, they store a value whose
sign is the sign of `x`/`y` and whose magnitude is congruent modulo 2^n^
to the magnitude of the integral quotient of `x`/`y`, where *n* is an
implementation-defined integer greater than or equal to 3.
``` c
#include <math.h>
float remquof(float x, float y, int *quo); /* C99 */
double remquo(double x, double y, int *quo); /* C99 */
long double remquol(long double x, long double y, int *quo); /* C99 */
```
## Error and gamma functions
The `erf` functions compute the error function of the argument
$\frac{2}{\sqrt{\pi}}\int_{0}^x e^{-t^2}\,\mathrm dt$
The `erfc` functions compute the complementary error function of the
argument (that is, 1 - erf x). For the `erfc` functions, a range error
may occur if the argument is too large.
The `lgamma` functions compute the natural logarithm of the absolute
value of the gamma of the argument (that is, log~*e*~\|Γ(x)\|). A range
error may occur if the argument is a negative integer or zero.
The `tgamma` functions compute the gamma of the argument (that is,
Γ(x)). A domain error occurs if the argument is a negative integer or if
the result cannot be represented when the argument is zero. A range
error may occur.
``` c
#include <math.h>
float erff(float x); /* C99 */
double erf(double x); /* C99 */
long double erfl(long double x); /* C99 */
float erfcf(float x); /* C99 */
double erfc(double x); /* C99 */
long double erfcl(long double x); /* C99 */
float lgammaf(float x); /* C99 */
double lgamma(double x); /* C99 */
long double lgammal(long double x); /* C99 */
float tgammaf(float x); /* C99 */
double tgamma(double x); /* C99 */
long double tgammal(long double x); /* C99 */
```
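As a quick sanity check (an illustrative sketch, not part of the standard's description), `tgamma(n + 1)` reproduces the factorial for small positive integers, and `erf` and `erfc` of the same argument sum to 1:
``` c
#include <stdio.h>
#include <math.h>

int main(void)
{
    for (int n = 1; n <= 5; n++)
        printf("%d! = %.0f\n", n, tgamma(n + 1.0));
    printf("erf(1) + erfc(1) = %g\n", erf(1.0) + erfc(1.0)); /* 1 */
    return 0;
}
```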
## References
fr:Programmation
C/Mathématiques
pl:C/Zaawansowane operacje
matematyczne
[^1]: \"Why do you have to link the math library in C?\"
[^2]: \"Why do I have to explicitly link with libm?\"
# C Programming/Libraries
A *library* in C is a collection of header files, exposed for use by
other programs. The library therefore consists of an *interface*
expressed in a `.h` file (named the \"header\") and an *implementation*
expressed in a `.c` file. This `.c` file might be precompiled or
otherwise inaccessible, or it might be available to the programmer.
(Note: Libraries may call functions in other libraries such as the
Standard C or math libraries to do various tasks.)
The format of a library varies with the operating system and compiler
one is using. For example, in the Unix and Linux operating systems, a
library consists of one or more *object files*, which consist of object
code that is usually the output of a compiler (if the source language is
C or something similar) or an assembler (if the source language is
assembly language). These object files are then turned into a library in
the form of an archive by the *ar* archiver (a program that takes files
and stores them in a bigger file without regard to compression). The
filename for the library usually starts with \"lib\" and ends with
\".a\"; e.g. the *libc.a* file contains the Standard C library and the
\"libm.a\" the mathematics routines, which the linker would then link
in. Other operating systems such as Microsoft Windows use a \".lib\"
extension for libraries and an \".obj\" extension for object files. Some
programs in the Unix environment such as lex and yacc generate C code
that can be linked with the libl and liby libraries to create an
executable.
We\'re going to use as an example a library that contains one function:
a function to parse arguments from the command
line. Arguments on the command line could be by themselves:
` -i`
have an optional argument that is
concatenated to the letter:
` -ioptarg`
or have the argument in a separate argv-element:
` -i optarg`
The library also has four declarations that it exports in addition to
the function: three integers and a pointer to the optional argument. If
the argument does not have an optional argument, the pointer to the
optional argument will be null.
In order to parse all these types of arguments, we have written the
following \"getopt.c\" file:
``` c
#include <stdio.h> /* for fprintf() and EOF */
#include <string.h> /* for strchr() */
#include "getopt.h" /* consistency check */
/* variables */
int opterr = 1; /* getopt prints errors if this is on */
int optind = 1; /* token pointer */
int optopt; /* option character passed back to user */
char *optarg; /* flag argument (or value) */
/* function */
/* return option character, EOF if no more or ? if problem.
The arguments to the function:
argc, argv - the arguments to the main() function. An argument of "--"
stops the processing.
opts - a string containing the valid option characters.
an option character followed by a colon (:) indicates that
the option has a required argument.
*/
int
getopt (int argc, char **argv, char *opts)
{
static int sp = 1; /* character index into current token */
register char *cp; /* pointer into current token */
if (sp == 1)
{
/* check for more flag-like tokens */
if (optind >= argc || argv[optind][0] != '-' || argv[optind][1] == '\0')
return EOF;
else if (strcmp (argv[optind], "--") == 0)
{
optind++;
return EOF;
}
}
optopt = argv[optind][sp];
if (optopt == ':' || (cp = strchr (opts, optopt)) == NULL)
{
if (opterr)
fprintf (stderr, "%s: invalid option -- '%c'\n", argv[0], optopt);
/* if no characters left in this token, move to next token */
if (argv[optind][++sp] == '\0')
{
optind++;
sp = 1;
}
return '?';
}
if (*++cp == ':')
{
/* if a value is expected, get it */
if (argv[optind][sp + 1] != '\0')
/* flag value is rest of current token */
optarg = argv[optind++] + (sp + 1);
else if (++optind >= argc)
{
if (opterr)
fprintf (stderr, "%s: option requires an argument -- '%c'\n",
argv[0], optopt);
sp = 1;
return '?';
}
else
/* flag value is next token */
optarg = argv[optind++];
sp = 1;
}
else
{
/* set up to look at next char in token, next time */
if (argv[optind][++sp] == '\0')
{
/* no more in current token, so setup next token */
sp = 1;
optind++;
}
optarg = 0;
}
return optopt;
}
/* END OF FILE */
```
The interface would be the following \"getopt.h\" file:
``` c
#ifndef GETOPT_H
#define GETOPT_H
/* exported variables */
extern int opterr, optind, optopt;
extern char *optarg;
/* exported function */
int getopt(int, char **, char *);
#endif
/* END OF FILE */
```
At a minimum, a programmer has the interface file to figure out how to
use a library, although, in general, the library programmer also wrote
documentation on how to use the library. In the above case, the
documentation should say that the provided arguments `**argv` and
`*opts` both shouldn\'t be null pointers (or why would you be using the
`getopt` function anyway?). Specifically, it typically states what each
parameter is for and what return values can be expected in which
conditions. Programmers who use a library are normally not interested
in the implementation of the library \-- unless the implementation has a
bug, in which case they will want to report it.
Both the implementation of the getopt library and the programs that use
the library should state `#include "getopt.h"` in order to refer to the
corresponding interface. Now the library is \"linked\" to the program
\-- the one that contains the main() function. The program may refer to
dozens of interfaces.
In some cases, just placing `#include "getopt.h"` may appear correct but
will still fail to link properly. This indicates that the library is not
installed correctly, or there may be some additional configuration
required. You will have to check either the compiler\'s documentation or
library\'s documentation on how to resolve this issue.
## What to put in header files
As a general rule, headers should contain any declarations and macro
definitions (preprocessor `#define`s) to be \"seen\" by the other
modules in a program.
Possible declarations:
- struct, union, and enum declarations
- typedef declarations
- external function declarations
- global variable declarations
In the above `getopt.h` example file, one function (`getopt`) is
declared and four global variables (`optind`, `optopt`, `optarg`, and
`opterr`) are also declared. The variables are declared with the storage
class specifier `extern` in the header file because that keyword
specifies that the \"real\" variables are stored elsewhere (i.e. the
`getopt.c` file) and not within the header file.
The `#ifndef GETOPT_H`/`#define GETOPT_H` trick is colloquially called
**include guards**. It is used so that if the `getopt.h` file is
included more than once in a translation unit, the unit only sees its
contents once. Alternatively, `#pragma once` in a header file can also
be used to achieve the same thing in some compilers (`#pragma` is an
unportable catchall).
## Linking Libraries Into Executables
Linking libraries into executables varies by operating system and
compiler/linker used. In Unix, directories of linked object files can be
specified with the `-L` option to the cc command and individual
libraries are specified with the `-l` (small ell) option. The `-lm`
option specifies that the libm math library should be linked in, for
example.
## References
- C FAQ: \"I\'m wondering what to put in .c files and what to put in
.h files. (What does \".h\" mean,
anyway?)\"
- PIClist thread: \"Global variables in projects with many C
files.\"
- \"How do I use extern to share variables between source files in
C?\".
fr:Programmation C/Bibliothèque
standard
pl:C/Biblioteki
# C Programming/Common practices
With its extensive use, a number of common practices and conventions
have evolved to help avoid errors in C programs. These are
simultaneously a demonstration of the application of good software
engineering principles to a language and an indication of the
limitations of C. Although few are used universally, and some are
controversial, each of these enjoys wide use.
## Dynamic multidimensional arrays
Although one-dimensional arrays are easy to create dynamically using
malloc, and fixed-size multidimensional arrays are easy to create using
the built-in language feature, dynamic multidimensional arrays are
trickier. There are a number of different ways to create them, each with
different tradeoffs. The two most popular ways to create them are:
- They can be allocated as a single block of memory, just like static
multidimensional arrays. This requires that the array be
*rectangular* (i.e. subarrays of lower dimensions are static and
have the same size). The disadvantage is that the syntax for
declaring the pointer is a little tricky for programmers at first.
For example, if one wanted to create an array of ints of 3 columns
and `rows` rows, one would do
``` c
int (*multi_array)[3] = malloc(rows * sizeof(int[3]));
```
: (Note that here `multi_array` is a pointer to an array of 3 ints.)
: Because of array-pointer interchangeability, you can index this just
like static multidimensional arrays, i.e. `multi_array[5][2]` is the
element at the 6th row and 3rd column.
- Dynamic multidimensional arrays can be allocated by first allocating
an array of pointers, and then allocating subarrays and storing
their addresses in the array of pointers (see Adam N. Rosenberg,
\"A Description of One Programmer's Programming Style Revisited\",
2001, pp. 19-20).
(This approach is also known as an Iliffe
vector). The syntax for accessing elements
is the same as for multidimensional arrays described above (even though
they are stored very differently). This approach has the advantage of
the ability to make ragged arrays (i.e. with subarrays of different
sizes). However, it also uses more space and requires more levels of
indirection to index into, and can have worse cache performance. It also
requires many dynamic allocations, each of which can be expensive.
For more information, see the comp.lang.c FAQ, question
6.16.
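A minimal sketch of the array-of-pointers approach (the function name `alloc_2d` is invented for illustration):
``` c
#include <stdlib.h>

int **alloc_2d(size_t rows, size_t cols)
{
    int **array = malloc(rows * sizeof *array);
    if (array == NULL)
        return NULL;
    for (size_t r = 0; r < rows; r++) {
        array[r] = malloc(cols * sizeof *array[r]);
        if (array[r] == NULL) {
            /* roll back the rows allocated so far */
            while (r > 0)
                free(array[--r]);
            free(array);
            return NULL;
        }
    }
    return array; /* index as array[row][col]; free each row, then the array */
}
```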
In some cases, the use of multi-dimensional arrays can best be addressed
as an array of structures. Before user-defined data structures were
available, a common technique was to define a multi-dimensional array,
where each column contained different information about the row. This
approach is also frequently used by beginner programmers. For example,
columns of a two-dimensional character array might contain last name,
first name, address, etc.
In cases like this, it is better to define a structure that contains the
information that was stored in the columns, and then create an array of
pointers to that structure. This is especially true when the number of
data points for a given record might vary, such as the tracks on an
album. In these cases, it is better to create a structure for the album
that contains information about the album, along with a dynamic array
for the list of songs on the album. Then an array of pointers to the
album structure can be used to store the collection.
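A hedged sketch of the album example; the type and field names here are invented for illustration, not taken from any particular program:
``` c
#include <stddef.h>

struct album {
    char   *title;
    int     year;
    size_t  track_count;
    char  **tracks;       /* dynamic array of track titles */
};

/* the collection is then an array of pointers to albums */
struct album **collection;
size_t album_count;
```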
- Another useful way to create a dynamic multi-dimensional array is to
flatten the array and index manually. For example, a 2-dimensional
array with sizes x and y has x\*y elements, therefore can be created
by
``` c
int dynamic_multi_array[x*y];
```
The index is slightly trickier than before, but can still be obtained by
y\*i+j. You then access the array with
``` c
static_multi_array[i][j];
dynamic_multi_array[y*i+j];
```
Some more examples with higher dimensions:
``` c
int dim1[w];
int dim2[w*x];
int dim3[w*x*y];
int dim4[w*x*y*z];
dim1[i];
dim2[w*j+i];
dim3[w*(x*i+j)+k];       // index is k + w*j + w*x*i
dim4[w*(x*(y*i+j)+k)+l]; // index is w*x*y*i + w*x*j + w*k + l
```
Note that w\*(x\*(y\*i+j)+k)+l is equal to w\*x\*y\*i + w\*x\*j + w\*k +
l, but uses fewer operations (see Horner\'s
Method). It uses the
same number of operations as accessing a static array by
dim4\[i\]\[j\]\[k\]\[l\], so should not be any slower to use.
The advantage to using this method is that the array can be passed
freely between functions without knowing the size of the array at
compile time (since C sees it as a 1-dimensional array, although some
way of passing the dimensions is still necessary), and the entire array
is contiguous in memory, so accessing consecutive elements should be
fast. The disadvantage is that it can be difficult at first to get used
to how to index the elements.
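When the dimensions are only known at run time, the same flattened layout can be allocated with `malloc`; a minimal sketch:
``` c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t x = 4, y = 3;
    int *grid = malloc(x * y * sizeof *grid);
    if (grid == NULL)
        return 1;

    for (size_t i = 0; i < x; i++)
        for (size_t j = 0; j < y; j++)
            grid[y * i + j] = (int)(10 * i + j);  /* manual index: y*i + j */

    printf("grid[2][1] = %d\n", grid[y * 2 + 1]); /* prints 21 */
    free(grid);
    return 0;
}
```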
## Constructors and destructors
In most object-oriented languages, objects cannot be created directly by
a client that wishes to use them. Instead, the client must ask the class
to build an instance of the object using a special routine called a
constructor. Constructors are important because they allow an object to
enforce invariants about its internal state throughout its lifetime.
Destructors, called at the end of an object\'s lifetime, are important
in systems where an object holds exclusive access to some resource, and
it is desirable to ensure that it releases these resources for use by
other objects.
Since C is not an object-oriented language, it has no built-in support
for constructors or destructors. It is not uncommon for clients to
explicitly allocate and initialize records and other objects. However,
this leads to a potential for errors, since operations on the object may
fail or behave unpredictably if the object is not properly initialized.
A better approach is to have a function that creates an instance of the
object, possibly taking initialization parameters, as in this example:
``` c
#include <assert.h>
#include <stdlib.h>   /* malloc */
#include <string.h>   /* strlen, strdup (POSIX) */

struct string {
    size_t size;
    char *data;
};

struct string *create_string(const char *initial) {
    assert(initial != NULL);
    struct string *new_string = malloc(sizeof(*new_string));
    if (new_string != NULL) {
        new_string->size = strlen(initial);
        new_string->data = strdup(initial);
    }
    return new_string;
}
```
Similarly, if it is left to the client to destroy objects correctly,
they may fail to do so, causing resource leaks. It is better to have an
explicit destructor which is always used, such as this one:
``` c
void free_string(struct string *s) {
    assert(s != NULL);
    free(s->data); /* free memory held by the structure */
    free(s);       /* free the structure itself */
}
```
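A brief usage sketch of the constructor/destructor pair above (assuming the non-opaque definition, so the caller can read the fields directly):
``` c
#include <stdio.h>

int main(void)
{
    struct string *s = create_string("hello");
    if (s == NULL)
        return 1;   /* allocation failed */
    printf("\"%s\" has %zu characters\n", s->data, s->size);
    free_string(s);
    return 0;
}
```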
It is often useful to combine destructors with *#Nulling freed
pointers*.
Sometimes it is useful to hide the definition of the object to ensure
that the client does not allocate it manually. To do this, the structure
is defined in the source file (or a private header file not available to
users) instead of the header file, and a forward declaration is put in
the header file:
``` c
struct string;
struct string *create_string(const char *initial);
void free_string(struct string *s);
```
## Nulling freed pointers
As discussed earlier, after `free()` has been called on a pointer, it
becomes a dangling pointer. Worse still, most modern platforms cannot
detect when such a pointer is used before being reassigned.
One simple solution to this is to ensure that any pointer is set to a
null pointer immediately after being freed: [^1]
``` c
free(p);
p = NULL;
```
Unlike dangling pointers, a hardware exception will arise on many modern
architectures when a null pointer is dereferenced. Also, programs can
include error checks for the null value, but not for a dangling pointer
value. To ensure it is done at all locations, a macro can be used:
``` c
#define FREE(p) do { free(p); (p) = NULL; } while(0)
```
(To see why the macro is written this way, see *#Macro
conventions*.) Also, when this technique
is used, destructors should zero out the pointer that they are passed,
and their argument must be passed by reference to allow this. For
example, here\'s the destructor from *#Constructors and
destructors* updated:
``` c
void free_string(struct string **s) {
    assert(s != NULL && *s != NULL);
    FREE((*s)->data); /* free memory held by the structure */
    FREE(*s);         /* free the structure itself */
    *s = NULL;        /* zero the argument */
}
```
Unfortunately, this idiom will not do anything to any other pointers
that may be pointing to the freed memory. For this reason, some C
experts regard this idiom as dangerous due to creating a false sense of
security.
## Macro conventions
Because preprocessor macros in C work using simple token replacement,
they are prone to a number of confusing errors, some of which can be
avoided by following a simple set of conventions:
1. Placing parentheses around macro arguments wherever possible. This
ensures that, if they are expressions, the order of operations does
not affect the behavior of the expression. For example:
- Wrong: `#define square(x) x*x`
- Better: `#define square(x) (x)*(x)`
2. Placing parentheses around the entire expression if it is a single
expression. Again, this avoids changes in meaning due to the order
of operations.
- Wrong: `#define square(x) (x)*(x)`
- Better: `#define square(x) ((x)*(x))`
- Still dangerous; remember that the text is replaced verbatim. If
the code is `square(x++)`, `x` will be incremented twice after the
macro expands.
3. If a macro produces multiple statements, or declares variables, it
can be wrapped in a **do** { \... } **while**(0) loop, with no
terminating semicolon. This allows the macro to be used like a
single statement in any location, such as the body of an if
statement, while still allowing a semicolon to be placed after the
macro invocation without creating a null statement (see the
comp.lang.c FAQ, \"What\'s the best way to write a multi-statement
macro?\").[^2][^3][^4][^5][^6] Care must be taken that any new
variables do not potentially mask portions of the macro\'s arguments.
- Wrong: `#define FREE(p) free(p); p = NULL;`
- Better: `#define FREE(p) do { free(p); p = NULL; } while(0)`
4. Avoiding using a macro argument twice or more inside a macro, if
possible; this causes problems with macro arguments that contain
side effects, such as assignments.
5. If a macro may be replaced by a function in the future, consider
naming it like a function.
6. By convention, preprocessor values and macros defined by `#define`
are named in all uppercase letters (see \"What is the history for
naming constants in all uppercase?\").[^7][^8][^9][^10]
## Further reading
There are a huge number of C style guidelines.
- \"C and C++ Style
Guides\" by Chris Lott
lists many popular C style guides.
- The Motor Industry Software Reliability Association (MISRA)
publishes \"MISRA-C: Guidelines for the use of the C language in
critical systems\". (Wikipedia: MISRA C).
pl:C/Powszechne praktyki
[^1]: comp.lang.c FAQ list: \"Why isn\'t a pointer null after calling
free?\" mentions that
\"it is often useful to set \[pointer variables\] to NULL
immediately after freeing them\".
[^2]: \"The C Preprocessor: Swallowing the
Semicolon\"
[^3]: \"Why use apparently meaningless do-while and if-else statements
in
macros?\"
[^4]: \"do {\...} while (0) in
macros\"
[^5]: \"KernelNewbies: FAQ /
DoWhile0\".
[^6]: \"PRE10-C. Wrap multistatement macros in a do-while
loop\".
[^7]: \"The
Preprocessor\".
[^8]: \"C Language Style
Guide\".
[^9]: \"non capitalized macros are always
evil\".
[^10]: \"Exploiting the Preprocessor for Fun and
Profit\".
# C Programming/Preprocessor directives and macros
The preprocessor performs text processing on a C program before it is
actually compiled. Every C program is passed through the preprocessor
before compilation. The preprocessor scans the program for specific
instructions, called preprocessor directives, that it understands. All
preprocessor directives begin with the \# (hash) symbol. C++ compilers
use the same C preprocessor.[^1]
The preprocessor is a part of the compiler
which performs preliminary operations (conditionally compiling code,
including files etc\...) to your code before the compiler sees it. These
transformations are lexical, meaning that the output of the preprocessor
is still text.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
NOTE: Technically the output of the preprocessing phase for C consists of a sequence of tokens, rather than source text, but it is simple to output source text which is equivalent to the given token sequence, and that is commonly supported by compilers via a `-E` or `/E` option \-- although command line options to C compilers aren\'t completely standard, many follow similar rules.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## Directives
Directives are special instructions directed to the preprocessor
(preprocessor directive) or to the compiler
(compiler directive) on how it should process part or all of your source
code or set some flags on the final object and are used to make writing
source code easier (more portable for instance) and to make the source
code more understandable. Directives are handled by the preprocessor,
which is either a separate program invoked by the compiler or part of
the compiler itself.
### #include
C has some features as part of the language and some others as part of a
**standard library**, which is a repository of code that is available
alongside every standard-conformant C compiler. When the C compiler
compiles your program it usually also links it with the standard C
library. For example, on encountering a `#include <stdio.h>` directive,
it replaces the directive with the contents of the `stdio.h` header
file.
When you use features from the library, C requires you to *declare* what
you would be using. The first line in the program is a **preprocessing
directive** which should look like this:
`#include <stdio.h>`
The above line causes the C declarations which are in the `stdio.h`
header to be included for use in your
program. Usually this is implemented by just inserting into your program
the contents of a **header file** called `stdio.h`, located in a
system-dependent location. The location of such files may be described
in your compiler\'s documentation. A list of standard C header files is
listed below in the Headers table.
The `stdio.h` header contains various declarations for input/output
(I/O) using an abstraction of I/O mechanisms called **streams**. For
example there is an output stream object called `stdout` which is used
to output text to the standard output, which usually displays the text
on the computer screen.
If using angle brackets like the example above, the preprocessor is
instructed to search for the include file along the development
environment path for the standard includes.
`#include "other.h"`
If you use quotation marks (`" "`), the preprocessor is expected to
search in some additional, usually user-defined, locations for the
header file, and to fall back to the standard include paths only if it
is not found in those additional locations. It is common for this form
to include searching in the same directory as the file containing the
`#include` directive.
------------------------------------------------------------------------------------------------------------------------------------------------------------
NOTE: You should check the documentation of the development environment you are using for any vendor specific implementations of the `#include` directive.
------------------------------------------------------------------------------------------------------------------------------------------------------------
#### Headers
**The C90 standard headers list:**
- `<assert.h>`
- `<ctype.h>`
- `<errno.h>`
- `<float.h>`
- `<limits.h>`
- `<locale.h>`
- `<math.h>`
- `<setjmp.h>`
- `<signal.h>`
- `<stdarg.h>`
- `<stddef.h>`
- `<stdio.h>`
- `<stdlib.h>`
- `<string.h>`
- `<time.h>`
**Headers added since C90:**
- `<complex.h>`
- `<fenv.h>`
- `<inttypes.h>`
- `<iso646.h>`
- `<stdbool.h>`
- `<stdint.h>`
- `<tgmath.h>`
- `<wchar.h>`
- `<wctype.h>`
### #pragma
The **pragma** (pragmatic information) directive is part of the
standard, but the meaning of any pragma depends on the software
implementation of the standard that is used. The #pragma directive
provides a way to request special behavior from the compiler. This
directive is most useful for programs that are unusually large or that
need to take advantage of the capabilities of a particular compiler.
Pragmas are used within the source program.
`#pragma token(s)`
`#pragma` is usually followed by a single token, which represents a
command for the compiler to obey. You should check the software
implementation of the C standard you intend to use for a list of
the supported tokens. Not surprisingly, the set of commands that can
appear in #pragma directives is different for each compiler; you\'ll
have to consult the documentation for your compiler to see which
commands it allows and what those commands do.
For instance one of the most implemented preprocessor directives,
`#pragma once` when placed at the beginning of a header file, indicates
that the file where it resides will be skipped if included several times
by the preprocessor.
----------------------------------------------------------------------------------------------------
NOTE: Other methods exist to do this action that is commonly referred as using **include guards**.
----------------------------------------------------------------------------------------------------
### `#define`
Each `#define` preprocessor instruction defines a macro. For example,
` #define PI 3.14159265358979323846 /* pi */`
A macro defined with a space immediately after the name is called a
constant or literal. A macro defined with a parenthesis immediately
after the name is called a function-like macro.[^2]
+----------------------------------------------------------------------+
| WARNING: Preprocessor macros, although tempting, can produce quite |
| unexpected results if not done right. Always keep in mind that |
| macros are textual substitutions done to your source code before |
| anything is compiled. The compiler does not know anything about the |
| macros and never gets to see them. This can produce obscure errors, |
| amongst other negative effects. Prefer to use language features, if |
| there are equivalent. For example, use `const int` or `enum` instead |
| of `#define`d constants). |
| |
| That said, there are cases, where macros are very useful (see the |
| `debug` macro below for an example). |
+----------------------------------------------------------------------+
The `#define` directive is used to define macros. Macros are used by the
preprocessor to manipulate the program source code before it is
compiled. Because preprocessor macro definitions are substituted before
the compiler acts on the source code, any errors that are introduced by
`#define` are difficult to trace.
By convention, macros defined using `#define` are named in uppercase.
Although doing so is not a requirement, it is considered very bad
practice to do otherwise. This allows the macros to be easily identified
when reading the source code. (We mention many other common conventions
for using `#define` in a later chapter, C Programming/Common
practices).
Today, `#define` is primarily used to handle compiler and platform
differences. E.g., a define might hold a constant which is the
appropriate error code for a system call. The use of `#define` should
thus be limited unless absolutely necessary; `typedef` statements and
constant variables can often perform the same functions more safely.
Another feature of the `#define` command is that it can take arguments,
making it rather useful as a pseudo-function creator. Consider the
following code:
`#define ABSOLUTE_VALUE( x ) ( ((x) < 0) ? -(x) : (x) )`\
`...`\
`int x = -1;`\
`while( ABSOLUTE_VALUE( x ) ) {`\
`...`\
`}`\
` `
It\'s generally a good idea to use extra parentheses when using complex
macros. Notice that in the above example, the variable \"x\" is always
within its own set of parentheses. This way, it will be evaluated in
whole, before being compared to 0 or multiplied by -1. Also, the entire
macro is surrounded by parentheses, to prevent it from being
contaminated by other code. If you\'re not careful, you run the risk of
having the compiler misinterpret your code.
Because of side-effects it is considered a very bad idea to use macro
functions as described above.
`int x = -10;`\
`int y = ABSOLUTE_VALUE( x++ );`
If ABSOLUTE_VALUE() were a real function \'x\' would now have the value
of \'-9\', but because it was an argument in a macro it was expanded
twice and thus has a value of -8.
+----------------------------------------------------------------------+
| Example: |
| |
| To illustrate the dangers of macros, consider this naive macro |
| |
| `#define MAX(a,b) a>b?a:b` |
| |
| and the code |
| |
| `i = MAX(2,3)+5;`\ |
| `j = MAX(3,2)+5;` |
| |
| Take a look at this and consider what the value after execution |
| might be. The statements are turned into |
| |
| `int i = 2>3?2:3+5;`\ |
| `int j = 3>2?3:2+5;` |
| |
| Thus, after execution `i=8` and `j=3` instead of the expected result |
| of `i=j=8`! This is why you were cautioned to use an extra set of |
| parenthesis above, but even with these, the road is fraught with |
| dangers. The alert reader might quickly realize that if `a` or `b` |
| contains expressions, the definition must parenthesize every use of |
| `a,b` in the macro definition, like this: |
| |
| `#define MAX(a,b) ((a)>(b)?(a):(b))` |
| |
| This works, provided `a,b` have no side effects. Indeed, |
| |
| `i = 2;`\ |
| `j = 3;`\ |
| `k = MAX(i++, j++);` |
| |
| would result in `k=4`, `i=3` and `j=5`. This would be highly |
| surprising to anyone expecting `MAX()` to behave like a function. |
| |
| So what is the correct solution? The solution is not to use a macro  |
| at all. A global, inline function, like this                         |
| |
| `inline int max(int a, int b) { `\ |
| ` return a>b?a:b; `\                                               |
| `}` |
| |
| has none of the pitfalls above, but will not work with all types. |
| |
| +----------------------------------------------------------------+ |
| | NOTE: The explicit `inline` declaration is not really | |
| | necessary unless the definition is in a header file, since | |
| | your compiler can inline functions for you (with gcc this can | |
| | be done with `-finline-functions` or `-O3`). The compiler is | |
| | often better than the programmer at predicting which functions | |
| | are worth inlining. Also, function calls are not really | |
| | expensive (they used to be). | |
| | | |
| | The compiler is actually free to ignore the `inline` keyword. | |
| | It is only a hint (except that `inline` is necessary in order | |
| | to allow a function to be defined in a header file without | |
| | generating an error message due to the function being defined | |
| | in more than one translation unit). | |
| +----------------------------------------------------------------+ |
+----------------------------------------------------------------------+
### The `#` and `##` operators
The **\#** and **\##** operators are used with the `#define` macro.
Using \# causes the first argument after the **\#** to be returned as a
string in quotes. For example, the command
`#define as_string( s ) # s`\
` `
will make the compiler turn this command
`puts( as_string( Hello World! ) ) ;`\
` `
into
`puts( "Hello World!" );`\
` `
Using **\##** concatenates what\'s before the **\##** with what\'s after
it. For example, the command
`#define concatenate( x, y ) x ## y`\
`...`\
`int xy = 10;`\
`...`\
` `
will make the compiler turn
`printf( "%d", concatenate( x, y ));`\
` `
into
`printf( "%d", xy);`\
` `
which will, of course, display `10` to standard output.
It is possible to concatenate a macro argument with a constant prefix or
suffix to obtain a valid identifier as in
`#define make_function( name ) int my_ ## name (int foo) {}`\
`make_function( bar )`
which will define a function called `my_bar()`. But it isn\'t possible
to integrate a macro argument into a constant string using the
concatenation operator. In order to obtain such an effect, one can use
the ANSI C property that two or more consecutive string constants are
considered equivalent to a single string constant when encountered.
Using this property, one can write
`#define eat( what ) puts( "I'm eating " #what " today." )`\
`eat( fruit )`
which the macro-processor will turn into
`puts( "I'm eating " "fruit" " today." )`
which in turn will be interpreted by the C parser as a single string
constant.
The following trick can be used to turn numeric constants into string
literals:
`#define num2str(x) str(x)`\
`#define str(x) #x`\
`#define CONST 23`\
\
`puts(num2str(CONST));`
This is a bit tricky, since it is expanded in 2 steps. First
`num2str(CONST)` is replaced with `str(23)`, which in turn is replaced
with `"23"`. This can be useful in the following example:
`#ifdef DEBUG`\
`#define debug(msg) fputs(__FILE__ ":" num2str(__LINE__) " - " msg, stderr)`\
`#else`\
`#define debug(msg)`\
`#endif`
This will give you a nice debug message including the file and the line
where the message was issued. If DEBUG is not defined, however, the
debugging message will completely vanish from your code. Be careful not
to use this sort of construct with anything that has side effects, since
this can lead to bugs that appear and disappear depending on the
compilation parameters.
### macros
Macros are not type-checked, and their arguments are not evaluated
before substitution. Macros also do not obey scope rules: the
preprocessor simply takes the text passed to a macro and replaces each
occurrence of a macro parameter in the macro body with the actual
argument text (the code is literally copied into the location it was
called from).
An example on how to use a macro:
``` c
#include <stdio.h>
#define SLICES 8
#define ADD(x) ( (x) / SLICES )
int main(void)
{
    int a = 0, b = 10, c = 6;
    a = ADD(b + c);
    printf("%d\n", a);
    return 0;
}
```
\-- the result of \"a\" should be \"2\" (b + c = 16 -\> passed to ADD
-\> 16 / SLICES -\> result is \"2\")
+----------------------------------------------------------------------+
| NOTE:\ |
| It is usually bad practice to define macros in headers. |
| |
| A macro should be defined only when it is not possible to achieve |
| the same result with a function or some other mechanism. Some |
| compilers are able to optimize code to where calls to small |
| functions are replaced with inline code, negating any possible speed |
| advantage. Using typedefs, enums, and `inline` (in C99) is often a |
| better option. |
+----------------------------------------------------------------------+
One of the few situations where inline functions won\'t work \-- so you
are pretty much forced to use function-like macros instead \-- is to
initialize compile time constants (static initialization of structs).
This happens when the arguments to the macro are literals that the
compiler can optimize to another literal. [^3]
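For example, a function-like macro can appear in a static initializer where a function call would not be a constant expression; the `RGB` macro below is a hypothetical illustration:
``` c
#include <stdio.h>

#define RGB(r, g, b) (((unsigned)(r) << 16) | ((unsigned)(g) << 8) | (unsigned)(b))

/* static initialization: the macro arguments are literals, so the
   compiler folds each entry into a single constant */
static const unsigned int palette[] = {
    RGB(255, 0, 0),   /* red   */
    RGB(0, 255, 0),   /* green */
    RGB(0, 0, 255),   /* blue  */
};

int main(void)
{
    printf("0x%06X\n", palette[0]); /* prints 0xFF0000 */
    return 0;
}
```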
### #error
The **#error** directive halts compilation. When one is encountered the
standard specifies that the compiler should emit a diagnostic containing
the remaining tokens in the directive. This is mostly used for debugging
purposes.
Programmers use \"#error\" inside a conditional block, to immediately
halt the compiler when the \"#if\" or \"#ifdef\" \-- at the beginning of
the block \-- detects a compile-time problem. Normally the compiler
skips the block (and the \"#error\" directive inside it) and the
compilation proceeds.
``` c
#error message
```
### #warning
Many compilers support a **#warning** directive. When one is
encountered, the compiler emits a diagnostic containing the remaining
tokens in the directive.
``` c
#warning message
```
### #undef
The **#undef** directive undefines a macro. The identifier need not have
been previously defined.
### #if,#else,#elif,#endif (conditionals)
The **#if** command checks whether a controlling conditional expression
evaluates to zero or nonzero, and excludes or includes a block of code
respectively. For example:
``` c
#if 1
/* This block will be included */
#endif
#if 0
/* This block will not be included */
#endif
```
The conditional expression could contain any C operator except for the
assignment operators, the increment and decrement operators, the
address-of operator, and the sizeof operator.
One unique operator used in preprocessing and nowhere else is the
**defined** operator. It returns 1 if the macro name, optionally
enclosed in parentheses, is currently defined; 0 if not.
The **#endif** command ends a block started by `#if`, `#ifdef`, or
`#ifndef`.
The **#elif** command is similar to `#if`, except that it is used to
extract one from a series of blocks of code. E.g.:
#if /* some expression */
:
:
:
#elif /* another expression */
:
/* imagine many more #elifs here ... */
:
#else
/* The optional #else block is selected if none of the previous #if or
#elif blocks are selected */
:
:
#endif /* The end of the #if block */
### #ifdef,#ifndef
The **#ifdef** command is similar to `#if`, except that the code block
following it is selected if a macro name is defined. In this respect,
`#ifdef NAME`
is equivalent to
`#if defined NAME`
The **#ifndef** command is similar to **#ifdef**, except that the test
is reversed:
`#ifndef NAME`
is equivalent to
`#if !defined NAME`
### #line
This preprocessor directive is used to set the file name and the line
number of the line following the directive to new values. This is used
to set the \_\_FILE\_\_ and \_\_LINE\_\_ macros.
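A small sketch (the file name here is made up for the example):
``` c
#line 100 "generated.c"
/* this line is now reported as line 100 of "generated.c" */
int x; /* and this one as line 101 */
```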
## Useful Preprocessor Macros for Debugging
ANSI C defines some useful preprocessor macros and variables,[^4][^5]
also called \"magic constants\", include:
\_\_FILE\_\_ =\> The name of the current file, as a string literal\
\_\_LINE\_\_ =\> Current line of the source file, as a numeric literal\
\_\_DATE\_\_ =\> Current system date, as a string\
\_\_TIME\_\_ =\> Current system time, as a string\
\_\_TIMESTAMP\_\_ =\> Date and time (non-standard)\
\_\_cplusplus =\> undefined when your C code is being compiled by a C
compiler; 199711L when your C code is being compiled by a C++ compiler
compliant with 1998 C++ standard.\
\_\_func\_\_ =\> Current function name of the source file, as a string
(part of C99)\
\_\_PRETTY_FUNCTION\_\_ =\> \"decorated\" Current function name of the
source file, as a string (in GCC; non-standard)\
#### Compile-time assertions
Compile-time assertions can help you debug faster than using only
run-time assert() statements, because the compile-time assertions are
all tested at compile time, while it is possible that a test run of a
program may fail to exercise some run-time assert() statements.
Prior to the C11 standard, some people[^6][^7][^8] defined a
preprocessor macro to allow compile-time assertions, something like:
``` c
#define COMPILE_TIME_ASSERT(pred) switch(0){case 0:case pred:;}
COMPILE_TIME_ASSERT( BOOLEAN CONDITION );
```
The `static_assert.hpp` Boost
library defines a similar
macro.[^9]
Since C11, such macros are obsolete, as `_Static_assert` and its macro
equivalent `static_assert` are standardized and built-in to the
language.
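A minimal C11 sketch of the built-in form (the condition chosen here is one the standard guarantees, so it always compiles):
``` c
#include <assert.h>  /* provides the static_assert macro in C11 */
#include <limits.h>

/* checked entirely at compile time; no code is generated */
static_assert(CHAR_BIT >= 8, "a byte must have at least 8 bits");

int main(void)
{
    return 0;
}
```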
#### X-Macros
One little-known usage pattern of the C preprocessor is known as
\"X-Macros\".[^10][^11][^12][^13] An X-Macro is a header
file
or macro. Commonly these use the extension \".def\" instead of the
traditional \".h\". This file contains a list of similar macro calls,
which can be referred to as \"component macros\". The include file is
then referenced repeatedly in the following pattern. Here, the include
file is \"xmacro.def\" and it contains a list of component macros of the
style \"foo(x, y, z)\".
``` c
#define foo(x, y, z) doSomethingWith(x, y, z);
#include "xmacro.def"
#undef foo
#define foo(x, y, z) doSomethingElseWith(x, y, z);
#include "xmacro.def"
#undef foo
(etc...)
```
The most common usage of X-Macros is to establish a list of C objects
and then automatically generate code for each of them. Some
implementations also perform any `#undef`s they need inside the X-Macro,
as opposed to expecting the caller to undefine them.
Common sets of objects are a set of global configuration settings, a set
of members of a struct "wikilink"), a
list of possible XML tags for converting an XML file
to a quickly-traversable tree, or the body of an
enum declaration; other lists are
possible.
Once the X-Macro has been processed to create the list of objects, the
component macros can be redefined to generate, for instance, accessor
and/or mutator functions. Structure
serializing and deserializing are also
commonly done.
Here is an example of an X-Macro that establishes a struct and
automatically creates serialize/deserialize functions. For simplicity,
this example doesn\'t account for endianness or buffer overflows.
File **star.def**:
``` c
EXPAND_EXPAND_STAR_MEMBER(x, int)
EXPAND_EXPAND_STAR_MEMBER(y, int)
EXPAND_EXPAND_STAR_MEMBER(z, int)
EXPAND_EXPAND_STAR_MEMBER(radius, double)
#undef EXPAND_EXPAND_STAR_MEMBER
```
File **star_table.c**:
``` c
typedef struct {
#define EXPAND_EXPAND_STAR_MEMBER(member, type) type member;
#include "star.def"
} starStruct;
void serialize_star(const starStruct *const star, unsigned char *buffer) {
#define EXPAND_EXPAND_STAR_MEMBER(member, type) \
memcpy(buffer, &(star->member), sizeof(star->member)); \
buffer += sizeof(star->member);
#include "star.def"
}
void deserialize_star(starStruct *const star, const unsigned char *buffer) {
#define EXPAND_EXPAND_STAR_MEMBER(member, type) \
memcpy(&(star->member), buffer, sizeof(star->member)); \
buffer += sizeof(star->member);
#include "star.def"
}
```
Handlers for individual data types may be created and accessed using
token concatenation (\"`##`\") and quoting (\"`#`\") operators. For
example, the following might be added to the above code:
``` c
#define print_int(val) printf("%d", val)
#define print_double(val) printf("%g", val)
void print_star(const starStruct *const star) {
/* print_##type will be replaced with print_int or print_double */
#define EXPAND_EXPAND_STAR_MEMBER(member, type) \
printf("%s: ", #member); \
print_##type(star->member); \
printf("\n");
#include "star.def"
}
```
Note that in this example you can also avoid the creation of separate
handler functions for each datatype in this example by defining the
print format for each supported type, with the additional benefit of
reducing the expansion code produced by this header file:
``` c
#define FORMAT_(type) FORMAT_##type
#define FORMAT_int "%d"
#define FORMAT_double "%g"
void print_star(const starStruct *const star) {
/* FORMAT_(type) will be replaced with FORMAT_int or FORMAT_double */
#define EXPAND_EXPAND_STAR_MEMBER(member, type) \
printf("%s: " FORMAT_(type) "\n", #member, star->member);
#include "star.def"
}
```
The creation of a separate header file can be avoided by creating a
single macro containing what would be the contents of the file. For
instance, the above file \"star.def\" could be replaced with this macro
at the beginning of:
File **star_table.c**:
``` c
#define EXPAND_STAR \
EXPAND_STAR_MEMBER(x, int) \
EXPAND_STAR_MEMBER(y, int) \
EXPAND_STAR_MEMBER(z, int) \
EXPAND_STAR_MEMBER(radius, double)
```
and then all calls to `#include "star.def"` could be replaced with a
simple `EXPAND_STAR` statement. The rest of the above file would become:
``` c
typedef struct {
#define EXPAND_STAR_MEMBER(member, type) type member;
EXPAND_STAR
#undef EXPAND_STAR_MEMBER
} starStruct;
void serialize_star(const starStruct *const star, unsigned char *buffer) {
#define EXPAND_STAR_MEMBER(member, type) \
memcpy(buffer, &(star->member), sizeof(star->member)); \
buffer += sizeof(star->member);
EXPAND_STAR
#undef EXPAND_STAR_MEMBER
}
void deserialize_star(starStruct *const star, const unsigned char *buffer) {
#define EXPAND_STAR_MEMBER(member, type) \
memcpy(&(star->member), buffer, sizeof(star->member)); \
buffer += sizeof(star->member);
EXPAND_STAR
#undef EXPAND_STAR_MEMBER
}
```
and the print handler could be added as well as:
``` c
#define print_int(val) printf("%d", val)
#define print_double(val) printf("%g", val)
void print_star(const starStruct *const star) {
/* print_##type will be replaced with print_int or print_double */
#define EXPAND_STAR_MEMBER(member, type) \
printf("%s: ", #member); \
print_##type(star->member); \
printf("\n");
EXPAND_STAR
#undef EXPAND_STAR_MEMBER
}
```
or as:
``` c
#define FORMAT_(type) FORMAT_##type
#define FORMAT_int "%d"
#define FORMAT_double "%g"
void print_star(const starStruct *const star) {
/* FORMAT_(type) will be replaced with FORMAT_int or FORMAT_double */
#define EXPAND_STAR_MEMBER(member, type) \
printf("%s: " FORMAT_(type) "\n", #member, star->member);
EXPAND_STAR
#undef EXPAND_STAR_MEMBER
}
```
A variant which avoids needing to know the members of any expanded
sub-macros is to accept the operators as an argument to the list macro:
File **star_table.c**:
``` c
/*
Generic
*/
#define STRUCT_MEMBER(member, type, dummy) type member;
#define SERIALIZE_MEMBER(member, type, obj, buffer) \
memcpy(buffer, &(obj->member), sizeof(obj->member)); \
buffer += sizeof(obj->member);
#define DESERIALIZE_MEMBER(member, type, obj, buffer) \
memcpy(&(obj->member), buffer, sizeof(obj->member)); \
buffer += sizeof(obj->member);
#define FORMAT_(type) FORMAT_##type
#define FORMAT_int "%d"
#define FORMAT_double "%g"
/* FORMAT_(type) will be replaced with FORMAT_int or FORMAT_double */
#define PRINT_MEMBER(member, type, obj) \
printf("%s: " FORMAT_(type) "\n", #member, obj->member);
/*
starStruct
*/
#define EXPAND_STAR(_, ...) \
_(x, int, __VA_ARGS__) \
_(y, int, __VA_ARGS__) \
_(z, int, __VA_ARGS__) \
_(radius, double, __VA_ARGS__)
typedef struct {
EXPAND_STAR(STRUCT_MEMBER, )
} starStruct;
void serialize_star(const starStruct *const star, unsigned char *buffer) {
EXPAND_STAR(SERIALIZE_MEMBER, star, buffer)
}
void deserialize_star(starStruct *const star, const unsigned char *buffer) {
EXPAND_STAR(DESERIALIZE_MEMBER, star, buffer)
}
void print_star(const starStruct *const star) {
EXPAND_STAR(PRINT_MEMBER, star)
}
```
This approach can be dangerous in that the entire macro set is always
interpreted as if it was on a single source line, which could encounter
compiler limits with complex component macros and/or long member lists.
This technique was reported by Lars Wirzenius[^14] in a web page dated
January 17, 2000, in which he gives credit to Kenneth Oksanen for
\"refining and developing\" the technique prior to 1997. The other
references describe it as a method from at least a decade before the
turn of the century.
We discuss X-Macros more in a later section, Serialization and
X-Macros.
de:C-Programmierung:
Präprozessor
fr:Programmation
C/Préprocesseur
it:C/Compilatore e
precompilatore/Direttive
pl:C/Preprocesor
[^1]: Understanding C++/C
Preprocessor
[^2]: \"Exploiting the Preprocessor for Fun and
Profit\".
[^3]: David Hart, Jon Reid. \"9 Code Smells of Preprocessor
Use\". 2012.
[^4]: HP C Compiler Reference
Manual
[^5]: C++ reference: Predefined preprocessor
variables
[^6]: \"Compile Time Assertions in
C\" by Jon Jagger 1999
[^7]: Pádraig Brady. \"static
assertion\".
[^8]: \"ternary operator with a constant (true)
value?\".
[^9]: Wikipedia: C++0x#Static
assertions
[^10]: Wirzenius, Lars. C Preprocessor Trick For Implementing Similar
Data Types Retrieved
January 9, 2011.
[^11]:
[^12]:
[^13]: Keith Schwarz. \"Advanced Preprocessor
Techniques\".
2009. Includes \"Practical Applications of the Preprocessor II: The
X Macro Trick\".
[^14]: Wirzenius, Lars. C Preprocessor Trick For Implementing Similar
Data Types Retrieved
January 9, 2011.
# C Programming/Serialization
## Serialization
It is often necessary to send or receive complex data structures to or
from another program that may run on a different architecture or may
have been designed for different version of the data structures in
question. A typical example is a program that saves its state to a file
on exit and then reads it back when started.
The \'send\' function will typically start by writing a magic identifier
and version to the file or network socket and then proceed to write all
the data members one by one (i.e. in serial). If variable length arrays
are encountered (e.g. strings), it will either write a length followed
by the data or it will write the data followed by a special terminator.
The format is often XML or binary; in the latter case the htonl() set of
macros may come in handy.
The \'receive\' function will be nearly identical: it will read all the
items one by one. Variable length arrays are either handled by reading
the count followed by the data, or by reading the data until the special
terminator is reached.
Since these two functions often follow the same pattern as the
declaration of the data structures, it would be nice if they could all
be generated from a common definition.
## X-Macros
X-Macros uses the preprocessor to force the compiler to compile the same
piece of text more than once. Sometimes a special file (with extension
.def) is included multiple times. For example variables.def may look
like this :
`INT(value)`\
`INT(shift)`
In this example, the C program will then look like this:
`...`\
`#define INT(var) int var;`\
`#include "variables.def"`\
`#undef INT`\
`...`\
`printf ("version=1\n");`\
`#define INT(var) printf (#var "=%d\n", var);`\
`#include "variables.def"`\
`#undef INT`\
`...`
If including a separate file multiple times is undesirable, another
macro can be used. For example :
`#define VARIABLES INT(value) \`\
` INT(shift)`
The `#include`s can then be replaced with calls to the macro.
Using this method, one can also pass in the name(s) of (an)other
macro(s) that can operate on the list of values. For example:
`#define VAR_LIST(_) _(value) \`\
` _(shift)`\
`...`\
`#define VAR_INT_DECL(var) int var;`\
`VAR_LIST(VAR_INT_DECL)`\
`...`\
`printf ("version=1\n");`\
`#define VAR_INT_PRINTF(var) printf (#var "=%d\n", var);`\
`VAR_LIST(VAR_INT_PRINTF)`\
`...`
This does not require the redefinition of macros and can make the code
easier to understand and maintain.
X-Macros are also particularly useful for keeping mappings between
strings and enumerated types synchronized.
## Serialization with versioning
Suppose we want to add additional variables to the above example, but we
still want the program to be able to read the old version 1 files. Then
we would add a version parameter and a default value parameter to the
list processing macros:
`#define VAR_LIST(_) _(value,1,0) \`\
` _(shift,1,0) \`\
` _(mask,2,0xffff)`\
`...`\
`int inputVer;`\
`#define VAR_INT_DECL(var,varVer,default) int var;`\
`VAR_LIST(VAR_INT_DECL)`\
`...`\
`scanf ("version=%d", &inputVer);`\
`#define VAR_INT_SCN(var,varVer,default) if (varVer <= inputVer) scanf (#var "=%d", &var); else var = default;`\
`VAR_LIST(VAR_INT_SCN)`\
`...`\
`printf ("version=2\n"); /* Always output at highest known version */`\
`#define VAR_INT_PRT(var,varVer,default) printf (#var "=%d\n", var);`\
`VAR_LIST(VAR_INT_PRT)`\
`...`
|
# C Programming/Coroutines
A little-known fact is that most C environments provide primitives that
can be used for cooperative multitasking / coroutines: `setcontext`
(a POSIX facility) and `setjmp` (part of the standard C library).
## setjmp
The function `setjmp` is used in a pair with `longjmp` to transfer
execution to a different point in the code. It relies on an existing
`jmp_buf` declaration.
``` c
#include <setjmp.h>
int main (void)
{
jmp_buf buf1;
if (setjmp(buf1) == 0)
{
/* This code is executed on the first call to setjmp. */
longjmp(buf1, 1);
} else {
/* This code is executed once longjmp is called. */
}
return 0;
}
```
`setjmp()` stores the current execution point in memory, which remains
valid as long as the containing function doesn\'t return. It initially
returns `0`. Control is returned to `setjmp` once `longjmp` is called
with the original `jmp_buf` and the replacement return value.
Note that jmp_buf is passed to setjmp without using the address-of
operator.
The easiest way to understand `setjmp` and `longjmp` is that `setjmp`
saves the state of the CPU (the program counter, stack pointer and the
other registers, including the flags) into the `jmp_buf` object, which is
simply defined to be large enough to hold those registers for whatever
CPU is involved. A call to `longjmp(buf, val)` never returns: it restores
the CPU state from the contents of the `jmp_buf` previously filled in by
`setjmp`, so execution resumes just after that `setjmp` call, except that
`setjmp` now appears to return `val` (the second argument to `longjmp`)
instead of 0. This is similar to the `fork()` system call, which returns
0 to the child process and the PID of the child to the parent.

Coroutines built on these primitives are useful for implementing
cooperating state machines, for example a lexer that processes input text
and emits tokens, so that a parser can decide to store the current token
and ask for the next one, or act on the tokens it already has. Unlike
multithreaded programs synchronising on shared data, where a bug that
forgets to acquire a lock can cause a race condition, cooperative
routines based on `setjmp` and `longjmp` guarantee that only one of them
runs at a time, with no worries about a context switch waking up a
sleeping process: using separate static `jmp_buf` objects, each routine
can call `setjmp` on its own `jmp_buf` and later transfer control with
`longjmp`, continuing its own loop over the shared data whenever control
comes back (a non-zero return from `setjmp`). Note, however, that
`longjmp` may only target a `jmp_buf` whose `setjmp` call happened in a
function that has not yet returned; jumping back into a stack frame that
no longer exists is undefined behaviour, which is why general-purpose
coroutines are usually built on `setcontext`/`swapcontext` instead.
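The following is a minimal sketch of such a cooperative "ping-pong" using
the POSIX `<ucontext.h>` primitives (`getcontext`, `makecontext` and
`swapcontext`); these are marked obsolescent in POSIX.1-2008 but are
still provided by glibc. The stack size and the names used here are
illustrative only.
``` c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, coro_ctx;
static char coro_stack[64 * 1024];           /* private stack for the coroutine */

static void coroutine(void)
{
    for (int i = 0; i < 3; ++i) {
        printf("coroutine: step %d\n", i);
        swapcontext(&coro_ctx, &main_ctx);   /* yield back to main */
    }
    /* falling off the end resumes uc_link, i.e. main_ctx */
}

int main(void)
{
    getcontext(&coro_ctx);                   /* initialise the context          */
    coro_ctx.uc_stack.ss_sp   = coro_stack;
    coro_ctx.uc_stack.ss_size = sizeof coro_stack;
    coro_ctx.uc_link          = &main_ctx;   /* resumed when coroutine() returns */
    makecontext(&coro_ctx, coroutine, 0);

    for (int i = 0; i < 3; ++i) {
        printf("main: resuming coroutine\n");
        swapcontext(&main_ctx, &coro_ctx);   /* run coroutine until it yields   */
    }
    return 0;
}
```
Each `swapcontext` call saves the current context into its first argument
and resumes the one given as its second, so control bounces between
`main` and `coroutine` until the loops end.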
|
# C Programming/Particularities of C
C is an efficient, minimalist language that has some peculiarities that
a programmer must be aware of. To address these, sometimes a good
solution is to combine another language with C for added flexibility and
power, like the combination of Emacs-LISP and C used for Emacs.
Sometimes they can be addressed at the cost of slower speed and
increased complexity by using special constructs that will guarantee
function and security. Mostly however, through practice, C programmers
have no trouble with the things mentioned here, and prefer using a
language that closely models the general purpose, Von Neumann hardware
architecture.
Below are several of these particularities of ANSI C (that sometimes are
also its strengths), some minor and some major:
Lack of differentiation between arrays and pointers : The very first C (around 1973) did not have arrays at all; in modern implementations an array is a contiguous area of memory accessed with pointer arithmetic (note that a declared array cannot be assigned to like a pointer), which circumvents the need to declare arrays with a fixed size. This ability, however, can cause buffer overflow errors with careless use.
Arrays do not store their length : A consequence of the above feature. This means that the program might need to explicitly perform a bounds check before accessing an array. Unless a function is passed an array of a fixed size, there is no way for it to discover the length of the array it was given, so the function must be given the length separately, perhaps passed as an extra parameter or in a structure. Because of this, most implementations do not provide automatic array bounds checking, and manual bounds checking is error prone.
: If a C (or C++) program attempts to access an array element outside
of the actual allocated memory, then a buffer overflow occurs,
typically crashing the program. Buffer overflow bugs are a common
security vulnerability too. Many other computer languages provide
automatic bounds checking, and so they are nearly immune to such
bugs. [^1][^2][^3][^4][^5]
Variable Length Arrays : A VLA ‒ variable length array ‒ can only be used for function parameters and auto variables. VLAs cannot be used inside a structure (except as the last item in the structure). It\'s not possible to define a structure that corresponds to the standard Forth dictionary definition (which has 2 variable-length parts), except as an undifferentiated array of `char`.
Arbitrary-size built-in 2D or 3D arrays are not widely supported : This feature has been added starting with the C99 specification for variable-length arrays, although many C compilers still do not support it. Without VLAs, there is no way for a function to accept 2D or 3D arrays of arbitrary size. In particular, it\'s impossible to define a function that accepts `int a[5][4][3];` on one call, and later accepts `int b[10][10][10];` in a later call. Instead of using the built-in 2D or 3D array data type, C programmers use some other data type to hold (mathematical) 2D or 3D arrays of arbitrary size (multi-dimensional arrays) \-- see C Programming/Common practices#Dynamic multidimensional arrays for details.
No formal String data type : Strings are character arrays (lacking any abstraction) and inherit all their constraints (structs can provide an abstraction, to an extent).
Weak type safety : C is not very type-safe. The memory management functions operate on untyped pointers, there is no built-in run-time type enforcement, and the type system can be circumvented with pointers and casts. Additionally, typedef does not create a new type but only an alias, so it serves solely for code legibility. However, it is possible to use single-member structs to enforce type safety; a short sketch of this trick appears at the end of this list.
No garbage collection : As a low-level language designed for minimum overhead, C features only manual memory management, which can allow simple memory leaks to go on unchecked.
Local variables are uninitialized upon declaration : Local (but not global) variables must be initialized manually; before that, they contain whatever happened to be in memory at the time. Requiring manual initialization is not unusual in itself; what is unusual is that the C standard does not forbid access to such uninitialized variables.
Unwieldy function pointer syntax : Function pointers are declared in the form `return-type (*name)(parameter types)`, which makes them somewhat difficult to read and use. Typedefs can alleviate this burdensome syntax; for example, after `typedef int fn(int i);` a pointer can be declared simply as `fn *fp;`. See C Programming/Pointers and arrays#Pointers to Functions for more details.
No reflection : It is not possible for a C program, at runtime, to evaluate a string as if it were a source C code statement.
Nested functions are not standard : However, many C compilers do support nested functions as an extension, including GNU C (see "Extensions to the C Language: Nested Functions" in the GNU manual).
No formal exception handling : Some standard functions return special values that must be handled manually; for example, `malloc()` returns a null pointer upon failure, and the return value of `getchar()` must be stored in an `int` (not, as one might expect, in a `char`) in order to reliably detect the end of file (see the EOF pitfall). Programs that do not include appropriate error handling might work fine most of the time, but can crash or otherwise malfunction when exceptional cases occur. POSIX systems often use `signal()` to handle some kinds of exceptions (see C Programming/Error handling#Signals for details). Some programs use `setjmp()`, `longjmp()` or `goto` to manually handle some kinds of exceptions (see C Programming/Control#One last thing: goto and C Programming/Coroutines for details).
No anonymous function definitions
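As a small illustration of the "weak type safety" point above, here is a
minimal sketch (with made-up type names) of the single-member-struct
trick: wrapping a plain `int` in a struct gives `metres` and `seconds`
distinct, incompatible types, so accidentally mixing them becomes a
compile-time error instead of a silent bug.
``` c
#include <stdio.h>

typedef struct { int value; } metres;    /* distinct wrapper types ...    */
typedef struct { int value; } seconds;   /* ... even though both hold int */

static metres add_metres(metres a, metres b)
{
    return (metres){ a.value + b.value };
}

int main(void)
{
    metres  d1 = { 100 }, d2 = { 50 };
    seconds t  = { 30 };

    metres total = add_metres(d1, d2);   /* fine                          */
    /* add_metres(d1, t); */             /* error: incompatible type      */

    printf("distance: %d m, time: %d s\n", total.value, t.value);
    return 0;
}
```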
## References
[^1]: <http://projects.webappsec.org/Buffer-Overflow>
[^2]: <http://www.dwheeler.com/secure-programs/Secure-Programs-HOWTO/buffer-overflow.html>
[^3]: <http://searchsecurity.techtarget.com/news/article/0,289142,sid14_gci860185,00.html>
[^4]: <http://www.owasp.org/index.php/Buffer_Overflows>
[^5]: <http://cyclone.thelanguage.org/wiki/Why%20Cyclone>
|
# C Programming/Low-level IO
## File descriptors
While not specified by the C standard, many operating systems provide
the concept of a **file descriptor** (sometimes abbreviated as **fd**).
While the `FILE` type from `stdio.h` and its associated
functions encapsulate the low-level details of
a stream, a file descriptor is an integer that refers to a stream that
the operating system is keeping track of.
This section will explore file descriptors as they are implemented in
POSIX systems, such as Linux.
### Standard streams as file descriptors
When a process is being created, the operating system allocates, among
other resources, three streams for a process: the standard streams
`stdin`, `stdout`, and `stderr`. Typically, the standard streams are
interacted with using their `FILE`-based definitions in `stdio.h`, as
covered in an earlier section. These streams can also be interacted with
through their raw file descriptors, which are the same for each process:
`unistd.h` symbol stream File descriptor
------------------- ---------- -----------------
`STDIN_FILENO` `stdin` `0`
`STDOUT_FILENO` `stdout` `1`
`STDERR_FILENO` `stderr` `2`
Notice that these file descriptors are the same for every process, even
though the standard streams contain different data for each process.
This means that file descriptors are not necessarily unique system-wide;
each process may have a different view of which file descriptors map to
which streams, just like how each process has a different view of the
system\'s virtual address space.
### Basic reading and writing
Reading to and writing from a file descriptor can be performed using the
following functions:[^1]
``` c
#include <unistd.h>
ssize_t read(int fd, void *buf, size_t count);
ssize_t write(int fd, const void *buf, size_t count);
```
Compare and contrast these definitions with the `FILE`-based
functions:[^2]
``` c
#include <stdio.h>
char *fgets(char *s, int size, FILE *stream);
int fputs(const char *s, FILE *stream);
```
Three differences are apparent:
1. The data being read from and written to the stream are not assumed
to be strings.
2. File descriptors are taken as parameters instead of `FILE`s.
3. A consistent type is used for the return value.
`read` and `fgets` take similar sets of parameters: something
representing the stream, a buffer, and a size; additionally, if the
amount of data read equals the requested size, the buffer will have the
same contents regardless of the function used. However, these functions
behave differently in the case where the amount of data read does not
match the requested size. `fgets`, being intended for use with strings,
will stop reading early if a newline is encountered, and the function
may block if it is waiting for the rest of the string to appear in the
stream. `read`, on the other hand, won't stop reading early when a
particular value is encountered, but it may return early if not all of
the requested data is available in the stream yet. Since `read` can't guarantee
that something wholly usable has been written to the buffer (in the case
that it stops reading early), the return value contains the number of
bytes written to the buffer. This makes `read` more appropriate for
situations where the programmer needs more control over the type of the
data being read or is willing to trade receiving partially-read data for
reducing the number of blocking I/O operations.
Similarly, `write` needs an explicit size parameter since it can\'t
assume a NULL-terminated string is being written, and it will return the
number of bytes written so the program can determine whether the passed
data was fully written to the stream.
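As a brief sketch of the calls described above, the following program
copies its standard input to its standard output using only `read` and
`write` (POSIX, not part of ISO C); note that the return value of `read`
is checked on every iteration and that partially completed writes are
retried.
``` c
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    /* read() returns the number of bytes actually placed in buf,
       0 at end of input, or -1 on error */
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0) {
        ssize_t done = 0;
        while (done < n) {
            /* write() may also transfer fewer bytes than requested */
            ssize_t w = write(STDOUT_FILENO, buf + done, (size_t)(n - done));
            if (w < 0)
                return 1;            /* write error */
            done += w;
        }
    }
    return n < 0 ? 1 : 0;            /* -1 from read() indicates an error */
}
```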
### Obtaining and discarding file descriptors
### `FILE`-file descriptor conversions
### Security through `openat`
[^1]: read(2) and write(2), Linux Programmer\'s Manual, 2019-10-10
[^2]: fgets(3) and fputs(3), Linux Programmer\'s Manual, 2020-08-13
|
# C Programming/C trigraph
## Trigraphs
C was designed in English and assumes the common English character set,
which includes such characters as `{`, `}`, `[`, `]`, and so on. Some
other languages, however, do not have these or other characters which
are required by C. To solve this problem, the 1989 C standard in section
5.2.1.1 defined a set of *trigraph sequences* which can substitute for
those symbols and which will work in any situation. In fact, the first
translation phase of compilation specified in the 1989 C standard
(section 5.1.1.2) is to replace the trigraph sequences with their
corresponding single-character equivalents. Note that trigraphs are
removed from the language in its next major revision, C23.[^1]
The following trigraph sequences exist, and no others. Each question mark
`?` that does not begin one of the trigraph sequences listed is left
unchanged.
`Sequence Replacement`\
`======== ===========`\
` ??= #`\
` ??( [`\
` ??/ \`\
` ??) ]`\
` ??' ^`\
` ??< {`\
` ??! |`\
` ??> }`\
` ??- ~`
The effect of this is that statements such as
``` c
printf ("Eh???/n");
```
will, after the trigraph is replaced, be the equivalent of
``` c
printf ("Eh?\n");
```
Should the programmer want the trigraph *not* to be replaced, within
strings and character constants (which is the only place they would need
replacing and it would change things), the programmer can simply escape
the second question mark; e.g.
``` c
printf ("Two question marks in a row: ?\?!\n");
```
Amendment 1 (1995) to the C standard added these punctuators, sometimes
called *digraphs*; they are described in section 6.4.6 of the 1999
standard. They are equivalent to the following tokens except for their spelling:
`Digraph Equivalent`\
`======= ==========`\
` <: [`\
` :> ]`\
` <% {`\
` %> }`\
` %: #`\
` %:%: ##`
In other words, they behave differently when stringized as part of a
macro replacement, but are otherwise equivalent.
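As a short sketch, the following translation unit uses the digraph
spellings of `#`, `{`, `}`, `[` and `]`; it is equivalent to the
conventionally spelled version and should compile with any C95-or-later
compiler.
``` c
%:include <stdio.h>

int main(void)
<%
    int a<:3:> = <% 1, 2, 3 %>;      /* same as: int a[3] = { 1, 2, 3 }; */
    printf("%d\n", a<:1:>);          /* prints 2 */
    return 0;
%>
```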
## References
[^1]: <https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2940.pdf>
|
# C Programming/Language overloading and extensions
Most C compilers have one or more \"extensions\" to the standard C
language, to do things that are inconvenient to do in standard, portable
C.
Some examples of language extensions:
- in-line assembly language
- interrupt service routines
- variable-length data structure (a structure whose last item is a
  "zero-length array"); per the comp.lang.c FAQ (question 2.6), "C99
  introduces the concept of a flexible array member, which allows the
  size of an array to be omitted if it is the last member in a structure,
  thus providing a well-defined solution."
- re-sizeable multidimensional arrays
- various \"#pragma\" settings to compile quickly, to generate fast
code, or to generate compact code.
- bit manipulation, especially bit-rotations and things involving the
\"carry\" bit
- storage alignment
- Arrays whose length is computed at run time.
## External links
- GNU C: Extensions to the C
Language
- C/C++ interpreter Ch extensions to the C language for
scripting
- SDCC: Storage Class Language
Extensions
|
# C Programming/Mixing languages
## Assembler
See Embedded Systems/Mixed C and Assembly
Programming
## Cg
Make the main program (for the CPU) in C; it loads and runs the Cg
program (for the GPU).[^1][^2][^3]
### Header files
Add to C program:[^4]
``` c
#include <Cg/cg.h> /* To include the core Cg runtime API into your program */
#include <Cg/cgGL.h> /* to include the OpenGL-specific Cg runtime API */
```
### Minimal program
- by bobobobo[^5]
## Java
Using the Java native interface (JNI), Java applications can call C
libraries.
See also
- Java_Programming/Keywords/native
## Perl
To mix Perl and C, we can use XS. XS is an interface description file
format used to create an extension interface between Perl and C code (or
a C library) which one wishes to use with Perl.
The basic procedure is very simple. We can create the necessary
subdirectory structure by running the "h2xs" application (e.g. "h2xs -A
-n Modulename"). This will create, among others, a Makefile.PL, a .pm
Perl module and a .xs XSUB file in the subdirectory tree. We can edit
the .xs file by adding our code to it, let's say:
`void`\
`hello()`\
` CODE:`\
` printf("Hello, world!\n");`
and we can successfully use our new command at Perl side, after running
a \"perl Makefile.PL\" and \"make\".
Further details can be found on the
perlxstut
perldoc page.
## Python
Details about extending Python with modules written in C can be found in
the Python documentation. You might read about Cython and Pyrex as well,
which make it easier to create such modules by translating Python-like
code into C. Using the Python ctypes module, one can call functions from
compiled C libraries directly from Python.
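For instance, the C side of a ctypes-based extension can be as small as
the following sketch (the file and function names are illustrative);
built as a shared library with something like
`cc -shared -fPIC -o libhello.so hello.c`, the function can then be
called from Python with `ctypes.CDLL("./libhello.so").add(2, 3)`.
``` c
/* hello.c -- a tiny function exported from a shared library so that
   Python's ctypes module can call it directly */
int add(int a, int b)
{
    return a + b;
}
```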
## Further reading
- Embedded Systems/Mixed C and Assembly
Programming
## References
[^1]: Lesson: 47 from NeHe
Productions
[^2]: Cg Bumpmapping by Razvan Surdulescu at
GameDev
[^3]: Cg & HLSL Shading Language FAQ, by Fusion Industries.
    <http://www.fusionindustries.com/default.asp?page=cg-hlsl-faq>
[^4]: <http://http.developer.nvidia.com/CgTutorial/cg_tutorial_appendix_b.html>
NVidia Cg tutorial. Appendix B. The Cg Runtime
[^5]: Absolutely minimal CG program for good fundamentals
understanding
|
# C Programming/GObject
Since the C Programming-Language was not created with Object Oriented
Programming in mind, it has no explicit support for classes,
inheritance, polymorphism and other OO Concepts. Neither does it have
its own Virtual Table, which is found in object-oriented languages such
C++, Java
and C#. Therefore, it might not be as
easy to implement an object-oriented programming paradigm using only
C\'s language features and standard library. However, it can be done
using structures which contain both function pointers as well as data,
for example, or by using third-party libraries.
There are many third-party libraries designed to add support for
object-oriented programming in C. The most general-purpose and widely
used among these is the GObject System, which is part of Glib. The
GObject System comes with its own virtual table. To create an object in
C using the GObject system, it has to be sub-classed from the GObject
struct.
## Object-Creation
In this example a new object will be implemented directly derived from
GObject. For simplicity, the object is named *MyObject*.
### Declaring An Object
To create a simple non-derivable (final)
_object_, two structs must be declared, the
*instance* and the *class*. They are declared using a macro:
``` c
/* in myobject.h */
G_DECLARE_FINAL_TYPE (MyObject, my_object, MY, OBJECT, GObject)
```
This declares two structures, MyObject and MyObjectClass. MyObject must
be defined in the C implementation, and MyObjectClass is already defined
by the macro.
### Boiler-Plate Code
Since the GObject System is just a third-party library and therefore
cannot make any changes to the C Language itself, creating a new object
requires a lot of boiler-plate code. This is mostly handled by the macro
shown above. However, the following is also required:
``` c
/* in myobject.h */
#define MY_TYPE_OBJECT my_object_get_type ()
```
The macro defines several functions, namely MY_OBJECT () and
MY_OBJECT_CLASS (), used for casting, MY_IS_OBJECT () and
MY_IS_OBJECT_CLASS () for testing whether an object or class is of the
correct type and MY_OBJECT_GET_CLASS () for getting the class structure
from an instance.
### Defining The Object
Before use, the newly created object must be
_defined_, along with the instance structure.
``` c
/* in myobject.c */
struct _MyObject
{
GObject parent_instance;
/* other members */
};
G_DEFINE_TYPE (MyObject, my_object, G_TYPE_OBJECT)
```
### Static Functions
There are a few _static_ functions that may or may not need to be
defined, depending on your object. For a minimal object the following
two are compulsory:
``` c
/* in myobject.c */
static void
my_object_class_init (MyObjectClass *klass)
{
/* code */
}
static void
my_object_init (MyObject *self)
{
/* code */
}
```
### The Constructor
There is no internal way of _allocating memory_
for an object in C. Therefore an _explicit_
constructor must be declared for the new object.
``` c
/* in myobject.c */
GObject *
my_object_new (void)
{
  return g_object_new (MY_TYPE_OBJECT,
                       NULL);  /* NULL ends the (empty) property list */
}
```
### Object-Usage
Although creating the object using its own pointer-type is perfectly
valid, it is recommended to use the pointer type of the object at the
top of the hierarchy, i.e. the most distant base class. The newly created
object may now be used like this:
``` c
/* in main.c */
/* Note: GObject is at the top of the hierarchy. */
/* declaration and construction */
GObject *myobj = my_object_new ();
/* destruction */
g_object_unref (myobj);
```
## Inheritance
### Concept
Inheritance is one of the most widely used and useful OO Concepts. It
provides an efficient way to reuse existing code by wrapping it up into
an object and then sub-classing it. The new classes are known as derived
classes. Many object hierarchies can be created using inheritance.
Inheritance is also one of the most efficient ways of abstracting code.
### Implementation
In the GObject System, inheritance can be achieved by sub-classing
_GObject_. Since C provides no keyword or
operator for inheritance, a derived object is usually made by declaring
the base instance and base class as a *member* of the derived instance
and derived class respectively. In C code:
``` c
/* derived object instance */
struct _DerivedObject
{
/* the base instance is a member of the derived instance */
BaseObject parent_instance;
};
```
## Further reading
- Hanser. \"Object-oriented programming with
ANSI-C\". 1994. Hanser
describes another way of implementing classes, inheritance,
instances, methods, objects, vtables, polymorphism, late binding,
etc. in standard ANSI C.
- Gregory Naçu - C64OS.com. \"Object Orientation in 6502 (Take
2)\". 2019. Greg
Nacu describes another way of implementing classes, inheritance,
instances, methods, objects, etc., using very little memory, in
6502 Assembly language.
- Greg Kroah-Hartman. \"Everything you never wanted to know about
kobjects, ksets, and ktypes\".
mirror: \"Everything you never wanted to know about kobjects,
ksets, and
ktypes\".
2007.
|
# C Programming/Code library
The following is an implementation of the Standard C99 version of
`<assert.h>`:
``` c
/* assert.h header */
#undef assert
#ifdef NDEBUG
#define assert(_Ignore) ((void)0)
#else
void _Assertfail(char *, char *, int, char *);
#define assert(_Test) ((_Test)?((void)0):_Assertfail(#_Test,__FILE__,__LINE__,__func__))
#endif
/* END OF FILE */
```
``` c
/* xassertfail.c -- _Assertfail function */
#include <stdlib.h>
#include <stdio.h>
#include <assert.h>
void
_Assertfail(char *test, char *filename, int line_number, char *function_name)
{
fprintf(stderr, "Assertion failed: %s, function %s, file %s, line %d.",
test, function_name, filename, line_number);
abort();
}
/* END OF FILE */
```
|
# C Programming/Statements
A **statement** is a command given to the computer that instructs the
computer to take a specific action, such as display to the screen, or
collect input. A computer program is made up of a series of statements.
In C, a statement can be any of the following:
## Labeled Statements
A statement can be preceded by a label. Three types of labels exist in
C.
A simple identifier followed by a colon (`:`) is a label. Usually, this
label is the target of a `goto` statement.
Within `switch` statements, `case` and `default` labeled statements
exist.
A statement of the form
`case` *constant-expression* `:` *statement*
indicates that control will pass to this statement if the value of the
control expression of the `switch` statement matches the value of the
*constant-expression*. (In this case, the type of the
*constant-expression* must be an integer or character.)
A statement of the form
`default` `:` *statement*
indicates that control will pass to this statement if the control
expression of the `switch` statement does not match any of the
*constant-expressions* within the `switch` statement. If the `default`
statement is omitted, the control will pass to the statement following
the `switch` statement. Within a `switch` statement, there can be only
one `default` statement, unless the `switch` statement is within another
`switch` statement.
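For example, a `switch` statement using `case` and `default` labels
(a sketch with made-up values) might look like this:
``` c
#include <stdio.h>

void describe_grade(int grade)
{
    switch (grade) {
    case 1:                      /* case labels: constant expressions */
    case 2:
        printf("low\n");
        break;
    case 3:
        printf("middle\n");
        break;
    default:                     /* taken when no case label matches  */
        printf("high or unknown\n");
        break;
    }
}

int main(void)
{
    describe_grade(2);           /* prints "low"             */
    describe_grade(7);           /* prints "high or unknown" */
    return 0;
}
```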
## Compound Statements
A *compound statement* is the way C groups multiple statements into a
single statement. It consists of multiple statements and declarations
within braces (i.e. `{` and `}`). In the ANSI C Standard of 1989-1990, a
compound statement contained an optional list of declarations followed
by an optional list of statements; in more recent revisions of the
Standard, declarations and statements can be freely interwoven through
the code. The body of a function is also a compound statement by rule.
## Expression Statements
An *expression statement* consists of an optional expression followed by
a semicolon (`;`). If the expression is present, the statement may have
a value. If no expression is present, the statement is often called the
*null statement*.
The `printf` function calls are expressions, so statements such as
`printf ("Hello World!\n");` are expression statements.
## Selection Statements
Three types of selection statements exist in C:
`if` `(` *expression* `)` *statement*
In this type of if-statement, the sub-statement will be executed if and
only if the expression is non-zero.
`if` `(` *expression* `)` *statement* `else` *statement*
In this type of if-statement, the first sub-statement will be executed
if and only if the expression is non-zero; otherwise, the second
sub-statement will be executed. Each `else` matches up with the closest
unmatched `if`, so that the following two snippets of code are not
equal:
``` c
if (expression)
if (secondexpression) statement1;
else
statement2;
if (expression)
{
if (secondexpression) statement1;
}
else
statement2;
```
because in the first, the `else` statement matches up with the if
statement that has `secondexpression` for a control, but in the second,
the braces force the `else` to match up with the if that has
`expression` for a control.
Switch statements are also a type of selection statement. They have the
format
`switch` `(` *expression* `)` *statement*
The expression here is an integer or a character. The statement here is
usually compound and it contains case-labeled statements and optionally
a default-labeled statement. The compound statement should not have
local variables as the jump to an internal label may skip over the
initialization of such variables.
## Iteration Statements
C has three kinds of iteration statements. The first is a
while-statement with the form
`while` `(` *expression* `)` *statement*
The substatement of a while runs repeatedly as long as the control
expression evaluates to non-zero at the beginning of each iteration. If
the control expression evaluates to zero the first time through, the
substatement may not run at all.
The second is a do-while statement of the form
`do` *statement* `while` `(` *expression* `)` `;`
This is similar to a while loop, except that the controlling expression
is evaluated at the end of the loop instead of the beginning and
consequently the sub-statement must execute at least once.
The third type of iteration statement is the for-statement. In ANSI C
1989, it has the form
`for` `(` *expression~opt~* `;` *expression~opt~* `;` *expression~opt~*
`)` *statement*
In more recent versions of the C standard, a declaration can substitute
for the first expression. The *opt* subscript indicates that the
expression is optional.
The statement
``` c
for (e1; e2; e3)
s;
```
is the rough equivalent of
``` c
{
e1;
while (e2)
{
s;
e3;
}
}
```
except for the behavior of `continue` statements within `s`.
The `e1` expression represents an initial condition; `e2` a control
expression; and `e3` what to happen on each iteration of the loop. If
`e2` is missing, the expression is considered to be non-zero on every
iteration, and only a `break` statement within `s` (or a call to a
non-returning function such as `exit` or `abort`) will end the loop.
## Jump Statements
C has four types of jump statements. The first, the `goto` statement, is
used sparingly and has the form
`goto` *identifier* `;`
This statement transfers control flow to the statement labeled with the
given identifier. The statement must be within the same function as the
`goto`.
The second, the break statement, with the form
`break` `;`
is used within iteration statements and `switch` statements to pass
control flow to the statement following the while, do-while, for, or
switch.
The third, the continue statement, with the form
`continue` `;`
is used within the substatement of iteration statements to transfer
control flow to the place just before the end of the substatement. In
`for` statements the iteration expression (`e3` above) will then be
executed before the controlling expression (`e2` above) is evaluated.
The fourth type of jump statement is the `return` statement with the
form
`return` *expression~opt~* `;`
This statement returns from the function. If the function return type is
`void`, the function may not return a value; otherwise, the expression
represents the value to be returned.
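The following sketch exercises all four jump statements described above
(the data and function names are made up for illustration):
``` c
#include <stdio.h>

int first_even(const int *a, int n)
{
    for (int i = 0; i < n; i++) {
        if (a[i] % 2 != 0)
            continue;            /* continue: skip to the next iteration    */
        return i;                /* return: leave the function with a value */
    }
    return -1;
}

int main(void)
{
    int data[] = { 3, 5, 8, 13 };
    int i;

    for (i = 0; i < 4; i++)
        if (data[i] > 10)
            break;               /* break: leave the enclosing loop         */

    if (i == 4)
        goto done;               /* goto: jump to the labeled statement     */

    printf("stopped at index %d, first even value at index %d\n",
           i, first_even(data, 4));
done:
    return 0;
}
```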
|
# C Programming/Side effects and sequence points
In C and more generally in computer science, a function or expression is
said to have a **side effect** if it modifies a state outside its scope
or has an *observable* interaction with its calling functions or the
outside world. By convention, returning a value has an effect on the
calling function, but this is usually not considered as a side effect.
Some side effects are:
- Modification of a global variable or static variable
- Modification of function arguments
- Writing data to a display or file
- Reading data
- Calling other side-effecting functions
In the presence of side effects, a program\'s behaviour may depend on
history; that is, the order of evaluation matters. Understanding and
debugging a function with side effects requires knowledge about the
context and its possible histories.[^1][^2]
A **sequence point** defines any point in a computer program\'s
execution at which it is guaranteed that all side effects of previous
evaluations will have been performed, and no side effects from
subsequent evaluations have yet been performed. They are often mentioned
in reference to C, because they are a core concept for determining the
validity and, if valid, the possible results of expressions. Adding more
sequence points is sometimes necessary to make an expression defined and
to ensure a single valid order of evaluation. Between any two
evaluations in a program, one of three sequencing relationships holds:
1. An expression\'s evaluation can be **sequenced before** that of
another expression, or equivalently the other expression\'s
evaluation is **sequenced after** that of the first.
2. The expressions\' evaluation is **indeterminately sequenced,**
meaning one is sequenced before the other, but which is unspecified.
3. The expressions\' evaluation is **unsequenced.**
The execution of unsequenced evaluations can overlap, with catastrophic
undefined behavior if they share state. This situation can arise in
parallel computations, causing race conditions.
## Examples of ambiguity
Consider two functions `f()` and `g()`. In C, the `+` operator is not
associated with a sequence point, and therefore in the expression
`f()+g()` it is possible that either `f()` or `g()` will be executed
first. The comma operator introduces a sequence point, and therefore in
the code `f(),g()` the order of evaluation is defined: first `f()` is
called, and then `g()` is called.
Sequence points also come into play when the same variable is modified
more than once within a single expression. An often-cited example is the
C expression `i=i++`, which apparently both assigns `i` its previous
value and increments `i`. The final value of `i` is ambiguous, because,
depending on the order of expression evaluation, the increment may occur
before, after, or interleaved with the assignment. The definition of a
particular language might specify one of the possible behaviors or
simply say the behavior is undefined. In C, evaluating such an
expression yields undefined behavior.[^3]
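A short sketch contrasting a defined ordering (the comma operator) with
an unspecified one (`+`), and showing the undefined case only in a
comment:
``` c
#include <stdio.h>

int f(void) { printf("f "); return 1; }
int g(void) { printf("g "); return 2; }

int main(void)
{
    int i = 0;

    /* no sequence point between the operands of '+':
       f() and g() may be called in either order */
    int sum = f() + g();

    /* the comma operator introduces a sequence point:
       f() is called (and its side effects completed) before g() */
    int last = (f(), g());

    /* i = i++;   -- undefined behaviour: i would be modified twice
       between sequence points, so the line is left commented out */

    printf("\nsum=%d last=%d i=%d\n", sum, last, i);
    return 0;
}
```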
In C[^4], sequence points occur in the following places.
1. Between evaluation of the left and right operands of the && (logical
AND), \|\| (logical OR) (as part of short-circuit evaluation), and
comma operators. For example, in the expression
`*p++ != 0 && *q++ != 0`, all side effects of the sub-expression
`*p++ != 0` are completed before any attempt to access `q`.
2. Between the evaluation of the first operand of the ternary
\"question-mark\" operator and the second or third operand. For
example, in the expression `a = (*p++) ? (*p++) : 0` there is a
sequence point after the first `*p++`, meaning it has already been
incremented by the time the second instance is executed.
3. At the end of a full expression. This category includes expression
statements (such as the assignment `a=b;`), return statements, the
controlling expressions of `if`, `switch`, `while`, or `do`-`while`
statements, and all three expressions in a `for` statement.
4. Before a function is entered in a function call. The order in which
the arguments are evaluated is not specified, but this sequence
point means that all of their side effects are complete before the
function is entered. In the expression `f(i++) + g(j++) + h(k++)`,
`f` is called with a parameter of the original value of `i`, but `i`
is incremented before entering the body of `f`. Similarly, `j` and
`k` are updated before entering `g` and `h` respectively. However,
it is not specified in which order `f()`, `g()`, `h()` are executed,
nor in which order `i`, `j`, `k` are incremented. If the body of `f`
accesses the variables `j` and `k`, it might find both, neither, or
just one of them to have been incremented. (The function call
`f(a,b,c)` is *not* a use of the comma operator; the order of
evaluation for `a`, `b`, and `c` is unspecified.)
5. At a function return, after the return value is copied into the
calling context. (This sequence point is only specified in the C++
standard; it is present only implicitly in C.)
6. At the end of an initializer; for example, after the evaluation of
`5` in the declaration `int a = 5;`.
7. Between each declarator in each declarator sequence; for example,
between the two evaluations of `a++` in `int x = a++, y = a++`.
(This is *not* an example of the comma operator.)
8. After each conversion associated with an input/output format
specifier. For example, in the expression
`printf("foo %n %d", &a, 42)`, there is a sequence point after the
`%n` is evaluated and before printing `42`.
## References
## External links
- Question 3.8 of the FAQ for
comp.lang.c
[^1]: "Research Topics in Functional Programming" ed. D. Turner,
Addison-Wesley, 1990, pp 17--42. Retrieved from:
[^2]:
[^3]: Clause 6.5#2 of the C99 specification: \"*Between the previous and
next sequence point an object shall have its stored value modified
at most once by the evaluation of an expression. Furthermore, the
prior value shall be accessed only to determine the value to be
stored.*\"
[^4]: Annex C of the C99 specification lists the circumstances under
which a sequence point may be assumed.
|
# C Programming/Standard Library Reference
## Headers
### ANSI C (C89)/ISO C (C90)
------------------------------------------ ----------------------------------------
`assert.h` Verify program assertion file
`ctype.h` Character types file.
**`errno.h`** System error numbers file
**`float.h`** Floating types file
**`limits.h`** Implementation-defined constants file.
`locale.h` Category macros file.
`math.h` Mathematical declarations file.
`setjmp.h` Stack environment declarations file.
`signal.h` Signals file.
`stdarg.h` Handle variable argument list file.
`stddef.h` Standard type definitions file.
`stdio.h` Standard buffered input/output file.
`stdlib.h` Standard library definitions file.
`string.h` String operations file.
`time.h` Time types file.
------------------------------------------ ----------------------------------------
### ISO C (C94/C95), Amendment 1 (AMD1)
Very old compilers may not include some or all of these headers
------------------------------------------ ------------------------------------------------------
**`iso646.h`** Alternative spellings.
`wchar.h` Wide-character handling.
`wctype.h` Wide-character classification and mapping utilities.
------------------------------------------ ------------------------------------------------------
### ISO C (C99)
These are supported only in newer compilers
-------------------------------------------- -----------------------------
`complex.h` Complex arithmetic.
`fenv.h` Floating-point environment.
`inttypes.h` Fixed size integer types.
**`stdbool.h`** Boolean type and values.
**`stdint.h`** Integer types.
**`tgmath.h`** Type-generic macros.
-------------------------------------------- -----------------------------
### ISO C (C11)
These are supported only in newer compilers
---------------------------------------------------- ----------------------------------------------------------
**`stdalign.h`** Alignment keywords and macros.
`stdatomic.h` Atomic operations on data shared between threads.
**`stdnoreturn.h`** \_Noreturn function specifier macro.
`threads.h` Support for multiple threads of execution.
`uchar.h` Types and functions for manipulating Unicode characters.
---------------------------------------------------- ----------------------------------------------------------
## Table of functions
This table also includes function-like macros
### assert.h
`assert`
### complex.h
`cabs`, `cacos`, `cacosh`, `carg`, `casin`, `casinh`, `catan`, `catanh`, `ccos`, `ccosh`, `cexp`, `cimag`, `clog`, `conj`, `cpow`, `cproj`, `creal`, `csin`, `csinh`, `csqrt`, `ctan`, `ctanh`, together with their `float` and `long double` counterparts suffixed `f` and `l` (e.g. `cabsf`, `cabsl`).
### ctype.h
`isalnum`, `isalpha`, `isblank`, `iscntrl`, `isdigit`, `isgraph`, `islower`, `isprint`, `ispunct`, `isspace`, `isupper`, `isxdigit`, `tolower`, `toupper`
### fenv.h
`feclearexcept`, `fegetenv`, `fegetexceptflag`, `fegetround`, `feholdexcept`, `feraiseexcept`, `fesetenv`, `fesetexceptflag`, `fesetround`, `fetestexcept`, `feupdateenv`
### inttypes.h
`imaxabs`, `imaxdiv`, `strtoimax`, `strtoumax`, `wcstoimax`, `wcstoumax`
### locale.h
`localeconv`, `setlocale`
### math.h
`fpclassify`, `isfinite`, `isgreater`, `isgreaterequal`, `isinf`, `isless`, `islessequal`, `islessgreater`, `isnan`, `isnormal`, `isunordered`, `signbit`
### setjmp.h
`longjmp`, `setjmp`
### signal.h
`raise`
### stdarg.h
`va_arg`, `va_copy`, `va_end`, `va_start`
### stdatomic.h
`atomic_init`, `atomic_thread_fence`, `atomic_signal_fence`, `atomic_is_lock_free`, `atomic_store`, `atomic_store_explicit`, `atomic_load`, `atomic_load_explicit`, `atomic_exchange`, `atomic_exchange_explicit`, `atomic_compare_exchange_strong`, `atomic_compare_exchange_strong_explicit`, `atomic_compare_exchange_weak`, `atomic_compare_exchange_weak_explicit`, `atomic_fetch_key`, `atomic_fetch_key_explicit` (where *key* is one of `add`, `sub`, `or`, `xor`, `and`), `atomic_flag_test_and_set`, `atomic_flag_test_and_set_explicit`, `atomic_flag_clear`, `atomic_flag_clear_explicit`
### stddef.h
`offsetof`
### stdio.h
`clearerr`, `fclose`, `feof`, `ferror`, `fflush`, `fgetc`, `fgetpos`, `fgets`, `fopen`, `fprintf`, `fputc`, `fputs`, `fread`, `freopen`, `fscanf`, `fseek`, `fsetpos`, `ftell`, `fwrite`, `getc`, `getchar`, `gets`, `perror`, `printf`, `putc`, `putchar`, `puts`, `remove`, `rename`, `rewind`, `scanf`, `setbuf`, `setvbuf`, `sprintf`, `sscanf`, `tmpfile`, `tmpnam`, `ungetc`, `vfprintf`, `vprintf`, `vsprintf`
### stdlib.h
`abort`, `abs`, `atexit`, `atof`, `atoi`, `atol`, `bsearch`, `calloc`, `div`, `exit`, `free`, `getenv`, `labs`, `ldiv`, `malloc`, `mblen`, `mbstowcs`, `mbtowc`, `qsort`, `rand`, `realloc`, `srand`, `strtod`, `strtol`, `strtoul`, `system`, `wcstombs`, `wctomb`
### string.h
`memchr`, `memcmp`, `memcpy`, `memmove`, `memset`, `strcat`, `strchr`, `strcmp`, `strcoll`, `strcpy`, `strcspn`, `strerror`, `strlen`, `strncat`, `strncmp`, `strncpy`, `strpbrk`, `strrchr`, `strspn`, `strstr`, `strtok`, `strxfrm`
### threads.h
`call_once`, `cnd_broadcast`, `cnd_destroy`, `cnd_init`, `cnd_signal`, `cnd_timedwait`, `cnd_wait`, `mtx_destroy`, `mtx_init`, `mtx_lock`, `mtx_timedlock`, `mtx_trylock`, `mtx_unlock`, `thrd_create`, `thrd_current`, `thrd_detach`, `thrd_equal`, `thrd_exit`, `thrd_join`, `thrd_sleep`, `thrd_yield`, `tss_create`, `tss_delete`, `tss_get`, `tss_set`
### time.h
+----------------------+----------------------+----------------------+
| - [`asctime`] | - `gmtime` | - [`time |
| (C_Programming/time. | |
| #asctime "wikilink") | e#gmtime "wikilink") | |
| - `clock | - [`localtime` | ocaltime "wikilink") | |
| - `ctime | - [`mktime` | |
| ` | e#mktime "wikilink") | |
| - `difftime` | |
| difftime "wikilink") | | |
+----------------------+----------------------+----------------------+
### uchar.h
`mbrtoc16`, `c16rtomb`, `mbrtoc32`, `c32rtomb`
### wchar.h
`btowc`, `fgetwc`, `fgetws`, `fputwc`, `fputws`, `fwide`, `fwprintf`, `fwscanf`, `getwc`, `getwchar`, `mbrlen`, `mbrtowc`, `mbsinit`, `mbsrtowcs`, `putwc`, `putwchar`, `swprintf`, `swscanf`, `ungetwc`, `vfwprintf`, `vswprintf`, `vwprintf`, `wcrtomb`, `wcscat`, `wcschr`, `wcscmp`, `wcscoll`, `wcscpy`, `wcscspn`, `wcsftime`, `wcslen`, `wcsncat`, `wcsncmp`, `wcsncpy`, `wcspbrk`, `wcsrchr`, `wcsrtombs`, `wcsspn`, `wcsstr`, `wcstod`, `wcstok`, `wcstol`, `wcstoul`, `wcsxfrm`, `wctob`, `wmemchr`, `wmemcmp`, `wmemcpy`, `wmemmove`, `wmemset`, `wprintf`, `wscanf`
### wctype.h
+----------------+----------------+----------------+----------------+----------------+
| - `iswalnum`   | - `iswdigit`   | - `iswpunct`   | - `towctrans`  | - `wctype`     |
| - `iswalpha`   | - `iswgraph`   | - `iswspace`   | - `towlower`   |                |
| - `iswcntrl`   | - `iswlower`   | - `iswupper`   | - `towupper`   |                |
| - `iswctype`   | - `iswprint`   | - `iswxdigit`  | - `wctrans`    |                |
+----------------+----------------+----------------+----------------+----------------+
|
# C Programming/Preprocessor reference
## Preprocessor Reference
The following preprocessor statements exist:
```
Statement   Subsequent items on the control line   Meaning
=========   ====================================   =======
#if         conditional-expression                 conditional
#ifdef      identifier                             true iff identifier is a macro
#ifndef     identifier                             true iff identifier is not a macro
#elif       conditional-expression                 continues a conditional
#else                                              continues a conditional
#endif                                             ends a conditional
#include    header-name                            includes a file
#define     identifier                             defines a macro
#undef      identifier                             removes a previously defined macro
#line       number filename                        changes the line number and file name
#error      token-list                             specifies an error
#pragma     token-list                             catchall
```
Some nonstandard compilers also specify `#warning` and `#import`.
A *conditional-expression* above can include the defined operator.
The `#define` *identifier* above can be followed by an optional list of
parameters and then an optional list of replacement tokens. The left
parenthesis of the parameter list must have no preceding white space.
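For illustration only (not part of the reference table above), here is a sketch of how a function-like macro and a conditional block are typically written:

``` c
#include <stdio.h>

#define SQUARE(x) ((x) * (x))   /* the '(' follows the macro name with no space */

#ifdef VERBOSE                   /* true iff VERBOSE is a macro */
#define LOG(msg) fprintf(stderr, "%s\n", (msg))
#else
#define LOG(msg) ((void)0)
#endif

int main(void)
{
    LOG("computing a square");
    printf("%d\n", SQUARE(3 + 1));   /* expands to ((3 + 1) * (3 + 1)), i.e. 16 */
    return 0;
}
```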
|
# C Programming/POSIX Reference
The **C POSIX library** is a language-independent library (using C
calling conventions) that adds functions specific to POSIX systems.
POSIX (and the Single Unix Specification) specifies a number of routines
that should be available over and above those in the C standard library
proper. It was developed at the same time as the ANSI C standard and is
closely related to C. Some effort was made to make the C and POSIX
libraries compatible, but there are a few POSIX functions that were
never introduced into ANSI C.
Facilities are often implemented alongside the C standard library
functionality, with varying degrees of closeness. For example, glibc
implements functions such as fork within libc.so, but before NPTL was
merged into glibc it constituted a separate library with its own linker
flag. Often, this POSIX-specified functionality will be regarded as part
of the library; the C library proper may be identified as the ANSI or
ISO C library.
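As a minimal sketch of what POSIX adds on top of the standard library (an illustration, not text from the specification), the following program calls the POSIX routines `fork`, `getpid` and `waitpid` alongside ordinary C standard library I/O:

``` c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>   /* waitpid */
#include <unistd.h>     /* fork, getpid */

int main(void)
{
    pid_t pid = fork();               /* POSIX: duplicate the calling process */
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {                   /* child process */
        printf("child  pid=%ld\n", (long)getpid());
        return EXIT_SUCCESS;
    }
    waitpid(pid, NULL, 0);            /* parent waits for the child to finish */
    printf("parent pid=%ld\n", (long)getpid());
    return 0;
}
```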
## Header files
-------------------- -------------------------------------------------------------------
**aio.h** Asynchronous input and output.
**arpa/inet.h** Definitions for internet operations.
**cpio.h** Magic numbers for the cpio archive format.
**dirent.h** Allows the opening and listing of directories.
**fcntl.h** File opening, locking and other operations.
**fmtmsg.h** Message display structures.
**fnmatch.h** Filename-matching types.
**ftw.h** File tree traversal.
**glob.h** Pathname pattern-matching types.
**grp.h** User group information and control.
**iconv.h** Codeset conversion facility.
**langinfo.h** Language information constants.
**libgen.h** Definitions for pattern matching functions.
**monetary.h** Monetary types.
**mqueue.h** Message queues (REALTIME).
**ndbm.h** Definitions for ndbm database operations.
**net/if.h** Sockets local interfaces.
**netdb.h** Definitions for network database operations.
**netinet/in.h** Internet address family.
**netinet/tcp.h** Definitions for the Internet Transmission Control Protocol (TCP).
**nl_types.h** Data types.
**poll.h** Definitions for the poll() function.
**pthread.h** Defines an API for creating and manipulating POSIX threads.
**pwd.h** Passwd (user information) access and control.
**regex.h** Regular expression matching types.
**sched.h** Execution scheduling.
**search.h** Search tables.
**semaphore.h** Semaphores.
**spawn.h** Create a new process to run an executable program.
**strings.h** String operations.
**stropts.h** STREAMS interface (STREAMS).
**sys/ipc.h** Inter-process communication (IPC).
**sys/mman.h** POSIX memory management declarations.
**sys/msg.h**        XSI (System V) message queue facility.
**sys/resource.h** Definitions for XSI resource operations.
**sys/select.h** Select types.
**sys/sem.h**        XSI (System V) semaphore facility.
**sys/shm.h** XSI shared memory facility.
**sys/socket.h** Main sockets header.
**sys/stat.h** File information (stat et al.).
**sys/statvfs.h** VFS File System information structure.
**sys/time.h** Time and date functions and structures.
**sys/times.h**      Process execution times (struct tms and the times() function).
**sys/types.h** Various data types used elsewhere.
**sys/uio.h** Definitions for vector I/O operations.
**sys/un.h** Definitions for UNIX domain sockets.
**sys/utsname.h** uname and related structures.
**sys/wait.h** Status of terminated child processes.
**syslog.h** Definitions for system error logging.
**tar.h** Magic numbers for the tar archive format.
**termios.h** Allows terminal I/O interfaces.
**trace.h** Tracing.
**ulimit.h** ulimit commands.
**unistd.h** Various essential POSIX functions and constants.
**utime.h** File access and modification times.
**utmpx.h** User accounting database definitions.
**wordexp.h** Word-expansion types.
-------------------- -------------------------------------------------------------------
## Standard overlap headers
Headers that overlap/extend the C standard.
---------------- ------------------------------------------------------
**assert.h** Verify program assertion.
**complex.h** Complex arithmetic.
**ctype.h** Character types.
**fenv.h** Floating-point environment.
**float.h** Floating types.
**inttypes.h** Fixed size integer types.
**iso646.h** Alternative spellings.
**limits.h** Implementation-defined constants.
**locale.h** Category macros.
**math.h** Mathematical declarations.
**setjmp.h** Stack environment declarations.
**signal.h** Signals.
**stdarg.h** Handle variable argument list.
**stdbool.h** Boolean type and values.
**stddef.h** Standard type definitions.
**stdint.h** Integer types.
**stdio.h** Standard buffered input/output.
**stdlib.h** Standard library definitions.
**string.h** String operations.
**tgmath.h** Type-generic macros.
**time.h** Time types.
**wchar.h** Wide-character handling.
**wctype.h** Wide-character classification and mapping utilities.
---------------- ------------------------------------------------------
## References
- Official List of headers in the POSIX library on
opengroup.org
- Lists headers in the POSIX
library
- Description of the posix library from the Flux
OSKit
|
# C Programming/GNU C Library Reference
## Header files
---------------------------------------- ----------------------------------------------------
`argp.h` Interface for parsing unix-style argument vectors.
`argz.h` Allocate/grow argz vectors.
`envz.h`                                 Manipulate environment-style (name=value) string vectors.
`execinfo.h` Backtrace support.
`libintl.h`                              Message translation (gettext) functions.
---------------------------------------- ----------------------------------------------------
## Table of functions
### argp.h
+----------------------------------+----------------------------------+
| - `argp_error`                   | - `argp_parse`                   |
| - `argp_failure`                 | - `argp_state_help`              |
| - `argp_help`                    |                                  |
+----------------------------------+----------------------------------+
### argz.h
+----------------------+----------------------+----------------------+
| - `argz_add`         | - `argz_create`      | - `argz_insert`      |
| - `argz_add_sep`     | - `argz_create_sep`  | - `argz_next`        |
| - `argz_append`      | - `argz_delete`      | - `argz_replace`     |
| - `argz_count`       | - `argz_extract`     | - `argz_stringify`   |
+----------------------+----------------------+----------------------+
### envz.h
+----------------------------------+----------------------------------+
| - `envz_add`                     | - `envz_merge`                   |
| - `envz_entry`                   | - `envz_remove`                  |
| - `envz_get`                     | - `envz_strip`                   |
+----------------------------------+----------------------------------+
### execinfo.h
+----------------------------------------------------------------------+
| - `backtrace`                                                        |
| - `backtrace_symbols`                                                |
| - `backtrace_symbols_fd`                                             |
+----------------------------------------------------------------------+
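As a rough usage sketch (assuming glibc; linking with `-rdynamic` usually gives more readable symbol names), these three functions work together like this:

``` c
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

static void show_trace(void)
{
    void *frames[16];
    int n = backtrace(frames, 16);                  /* collect return addresses */
    char **names = backtrace_symbols(frames, n);    /* turn them into strings */
    if (names != NULL) {
        for (int i = 0; i < n; i++)
            printf("%s\n", names[i]);
        free(names);                                /* the array is malloc'd by glibc */
    }
}

int main(void)
{
    show_trace();
    return 0;
}
```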
### libintl.h
+--------------------------------+--------------------------------+
| - `bind_textdomain_codeset`    | - `dngettext`                  |
| - `bindtextdomain`             | - `gettext`                    |
| - `dcgettext`                  | - `ngettext`                   |
| - `dcngettext`                 | - `textdomain`                 |
| - `dgettext`                   |                                |
+--------------------------------+--------------------------------+
## Standard Library Extensions
Platform facilities that extend the standard library headers.
### assert.h
+-------------------------------------------------------------+
| - `assert_perror` |
+-------------------------------------------------------------+
### complex.h
+----------------------------------+----------------------------------+
| - `clog10`                       | - `clog10fN`                     |
| - `clog10f`                      | - `clog10fNx`                    |
| - `clog10l`                      |                                  |
+----------------------------------+----------------------------------+
### fenv.h
+---------------------------------------------------------------+
| - `fedisableexcept` |
| - `feenableexcept` |
| - `fegetexcept` |
+---------------------------------------------------------------+
### math.h
+----------------------+----------------------+----------------------+
| - `j0fN`             | - `pow10`            | - `y1fN`             |
| - `j0fNx`            | - `pow10f`           | - `y1fNx`            |
| - `j1fN`             | - `pow10l`           | - `ynfN`             |
| - `jnfN`             | - `sincos`           | - `ynfNx`            |
| - `jnfNx`            | - `sincosf`          |                      |
| - `lgammafN_r`       | - `sincosfN`         |                      |
| - `lgammafNx_r`      | - `sincosfNx`        |                      |
|                      | - `sincosl`          |                      |
+----------------------+----------------------+----------------------+
### signal.h
+---------------------------------------------------------+
| - `sysv_signal` |
+---------------------------------------------------------+
### stdio.h
+------------------------+------------------------+------------------------+
| - `asprintf`           | - `fputs_unlocked`     | - `open_memstream`     |
| - `clearerr_unlocked`  | - `fread_unlocked`     | - `snprintf`           |
| - `feof_unlocked`      | - `fwrite_unlocked`    | - `tmpnam_r`           |
| - `ferror_unlocked`    | - `getdelim`           | - `vasprintf`          |
| - `fgets_unlocked`     | - `getline`            | - `vsnprintf`          |
| - `fmemopen`           | - `obstack_printf`     |                        |
| - `fopencookie`        | - `obstack_vprintf`    |                        |
+------------------------+------------------------+------------------------+
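As a sketch of how one of these extensions is typically used, `getline` (originally a GNU extension, later adopted by POSIX) reads a whole line and grows its buffer as needed:

``` c
#define _GNU_SOURCE      /* expose glibc extensions such as getline */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>   /* ssize_t */

int main(void)
{
    char *line = NULL;   /* getline allocates and resizes this buffer */
    size_t cap = 0;
    ssize_t len;

    while ((len = getline(&line, &cap, stdin)) != -1)
        printf("read %zd bytes: %s", len, line);

    free(line);
    return 0;
}
```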
### stdlib.h
+------------------------------+------------------------------+------------------------------+
| - `alloca`                   | - `getpt`                    | - `ptsname_r`                |
| - `canonicalize_file_name`   | - `initstate_r`              | - `qecvt`                    |
| - `clearenv`                 | - `jrand48_r`                | - `qecvt_r`                  |
| - `drand48_r`                | - `lcong48_r`                | - `secure_getenv`            |
| - `ecvt_r`                   | - `lrand48_r`                | - `seed48_r`                 |
| - `erand48_r`                | - `mrand48_r`                | - `setstate_r`               |
|                              | - `nrand48_r`                | - `srand48_r`                |
+------------------------------+------------------------------+------------------------------+
|
# C Programming/MS Windows Reference
## Header files
-------------------------------------- ----------------------------
`alloc.h` Dynamic memory allocation.
`conio.h` Text user interfaces.
`process.h` Threads and processes.
-------------------------------------- ----------------------------
## Table of functions
### alloc.h
+--------------------------------------------------+
| - `farmalloc` |
+--------------------------------------------------+
### conio.h
+--------------------------------------------+
| - `getch` |
| - `getche` |
| - `gotoxy` |
| - `clrscr` |
+--------------------------------------------+
### process.h
|
# C Programming/C Compilers Reference List
For a brief introduction to setting up and using some of the more
beginner-friendly compilers and IDEs, see ../Using a
Compiler/.
## Free (or with a free version)
- Ch interpreter (http://www.softintegration.com) - The software works in Windows, Linux, Mac OS X, FreeBSD, Solaris, AIX and HP-UX. The Ch Standard
Edition is free for noncommercial use.
- Interactive C
(http://www.botball.org/educational-resources/ic.php).
- target platform: Handy Board (Freescale 68HC11); Lego RCX
- CINT is an interpreter for C and C++ code,
included in the data-analysis package ROOT. The
CINT interpreter is licensed under the X11/MIT license. (
<https://root.cern.ch/drupal/content/cint> ).
- PicoC is a very small C
interpreter, intended for small embedded systems with very little
code space or data space.
- PicoC target platform: x86-32, x86-64, powerpc, arm, ultrasparc,
HP-PA and blackfin processors; and is easy to port to new
targets.
- Extensible Interactive C
(EiC)
- lcc-win32
(http://www.cs.virginia.edu/\~lcc-win32) - Software copyrighted by
Jacob Navia. It is free for non-commercial use. Windows
(98/ME/XP/2000/NT).
- GNU Compiler Collection
(http://gcc.gnu.org) - GNU Compiler Collection. GNU General Public
License / GNU Lesser General Public License.
- MinGW (http://www.mingw.org/) provides GCC
for Windows
- clang (LLVM) (http://clang.llvm.org/) - Almost
everywhere
- Open Watcom (http://www.openwatcom.org) - An open source development community maintaining and enhancing the Watcom C/C++ and Fortran cross compilers and tools. Version 1.4 was released in December 2005.
- **Host Platforms:** Win32 systems (IDE and command line), 32-bit
OS/2 (IDE and command line), DOS (command line), and Windows 3.x
(IDE)
- **Target Platforms:** DOS (16-bit), Windows 3.x (16-bit), OS/2
1.x (16-bit), Extended DOS, Win32s, Windows 95/98/Me, Windows
NT/2000/XP, 32-bit OS/2, and Novell NLMs
- **Experimental / Development:** Linux, BSD, \*nix, PowerPC,
Alpha AXP, MIPS, and Sparc v8
- Tiny C Compiler
(http://www.tinycc.org) - A small C compiler designed to work for
slow computers with little disk space (e.g. on rescue disks).
- Portable C Compiler
(http://pcc.ludd.ltu.se) - Portable C Compiler. BSD Style
License(s).
- Small Device C Compiler
(SDCC)
- target platforms: Intel 8051-compatibles; Freescale (Motorola)
HC08; Microchip PIC16 and PIC18.
- FpgaC. Target platform: FPGA hardware via XNF
or VHDL files.
- C compilers for many digital signal processors (DSPs), many of them
are free, and are listed in the comp.dsp
FAQ.
- Microsoft Visual C++
(http://msdn.microsoft.com/visualc) - Free (partially limited)
version available (Express/Community Edition).
## Paid
- Intel C Compiler
(http://software.intel.com/en-us/intel-compilers) - Windows, Linux,
Mac, QNX, and embedded C/C++ compilers. Optimized for Intel 32-bit
and 64-bit CPUs.
- Impulse C - Target platform: FPGA hardware
via Hardware Description Language (HDL) files.
|
# C Programming/Index
This is an alphabetical index of the book.
## A
- `argv` - poorly treated
- ../Pointers and
arrays/#sizeof
- ../Arrays/
- Assignment
- ../Variables/#Declaring, Initializing, and Assigning
Variables
- ../Simple math/#Assignment
operators
- `auto`
## B
- Boolean
- ../Control#Conditionals
- `break`
## C
- `calloc`
- `case`
- Cast operators
- `char`
- ../Variables/#The char
type
- Comments
- Comparison
- ../Compiling/
- `const`
- `continue`
- Conditionals
- Control structures
## D
- Data types
- ../Variables/
- ../Complex types/
- `default`
- `do`
- `#define`
- `double`
- ../Variables/#The double
type
## E
- `else`
- `#else`
- `#elif`
- `#endif`
- `#error`
- `extern`
## F
- Files
- `float`
- ../Variables/#The float
type
- `for`
- `free`
- Functions
- Variadic
functions
## G
- `goto`
## I
- `if`
- `#if`
- `#ifdef`
- `#ifndef`
- `#include`
- Input and output
- ../Simple input and
output/
- ../File IO/
- `int`
- ../Variables/#The int
type
## L
- Logical operators
- ../Simple math/#Logical
operators
- ../Control#Logical
Expressions
- `long`
- ../Variables/#Data type
modifiers
- Loops
## M
- Macros
- `main` - poorly treated
- ../Pointers and
arrays/#sizeof
- `malloc`
- Math
- ../Simple math/ - addition,
subtraction, multiplication, division, and modulus
- ../Further math/ - functions from
`math.h` library
- ../Memory management/
- Multidimensional arrays
- ../Common practices/#Dynamic multidimensional
arrays
## O
- Operator
- ../Simple math/
- ../Reference Tables/#Table of
Operators
## P
- `#pragma`
- ../Preprocessor/
- `printf`
- ../Simple input and output/#Output using
printf()
- ../C
Reference/stdio.h/printf/
- ../Procedures and
functions/printf/
- Pointers
- ../Pointers and arrays/
- ../Complex
types/#Pointers
- Procedures
## R
- `realloc`
- `register`
- `return`
## S
- `short`
- ../Variables/#Data type
modifiers
- `signed`
- ../Variables/#Data type
modifiers
- `sizeof`
- Standard libraries
- ../Standard libraries/
- ../Procedures and functions/#Functions from the C Standard
Library
- ../C Reference/assert.h/
- ../C
Reference/complex.h/
- ../C Reference/ctype.h/
- ../C Reference/errno.h/
- ../C Reference/fenv.h/
- ../C Reference/float.h/
- ../C
Reference/inttypes.h/
- ../C Reference/iso646.h/
- ../C Reference/limits.h/
- ../C Reference/locale.h/
- ../C Reference/math.h/
- ../C Reference/setjmp.h/
- ../C Reference/signal.h/
- ../C Reference/stdarg.h/
- ../C
Reference/stdbool.h/
- ../C Reference/stddef.h/
- ../C Reference/stdint.h/
- ../C Reference/stdio.h/
- ../C Reference/stdlib.h/
- ../C Reference/string.h/
- ../C Reference/tgmath.h/
- ../C Reference/time.h/
- ../C Reference/wchar.h/
- ../C Reference/wctype.h/
- `static`
- Static
functions
- ../Strings/
- `struct`
- Subprograms
- `switch`
## T
- `typedef` - seems poorly treated
## U
- `#undef`
- `union`
- `unsigned`
- ../Variables/#Data type
modifiers
## V
- Variable-length argument
lists
- Variadic
functions
- `volatile`
## W
- `while`
## Variables
- ../Variables/
|
# C Programming/Links
Links to online resources relating to learning how to program in C:
- *The C Book*, second
edition by Mike Banahan, Declan Brady and Mark Doran, originally
published by Addison Wesley in 1991. This version is made freely
available.
- Programming in C: A
Tutorial, by Brian W.
Kernighan; 9600 words
- The GNU C Reference
Manual \-- a reference
for the C programming language, as implemented by the GNU C Compiler
- The GNU C Library \-- a
manual for the GNU C library, which defines all of the library
functions specified by the ISO C standard, and other functions
- Wikipedia:C syntax - circa 9700
words, as of March 2012
- Wikipedia:C (programming
language)
|
# Wikijunior:Biology/Introduction
## Introduction
**Biology** is the study of life. It helps us understand many things,
such as how our body works, how our body keeps warm, and what we are
made of. Biology is very important to know. Some things we can learn
about in biology are *genetics* (the study of genes and heredity), *zoology*
(the study of animals), *botany* (the study of plants), and *ecology*
(the study of relationships between all living things).
Someone who studies biology is called a *biologist*.
Image:Human female metaphase chromosomes.tif\|thumb\|genetics
Image:Begegnung-01.jpg\|thumb\|zoology Image: Pachycereus pringlei
forest.jpg \|thumb\|botany
Image:Sunrise_over_Veterans_Park_2420.jpg\|thumb\|ecology
## What is life?
Living things are different from things that are not alive. It is
usually easy to tell what is living and what is not, but it is sometimes
hard to tell, like with very small organisms.
Here are some properties of living things. You might notice that some
non-living things can also have some of these properties.
- **Living things can change and grow.** However, volcanoes can also
change and grow when they erupt.
- **Living things can move.** However, the wind is moving air, and
water always moves downhill.
- Just like animals, even plants can move. They can grow, and
sometimes move more rapidly than that, in response to things
such as the sun or water. One example is that sunflowers will
naturally turn during the course of the day so that they are
always facing the sun. Similarly, another example is that if a
plant gets tipped over, it will want to turn upwards to face the
sun.
- **Living things can reproduce,** which means that they can produce
copies of themselves, over and over. This is the most important
difference between living and non-living things.
- In order to reproduce, living things need nutrition, which are
nutrients and energy sources in order to assemble the materials
needed to reproduce themselves. In this process, living things
must excrete waste. Waste is material which is of no use to
living things, or in some cases, material that can be harmful.
Animals, bacteria, and plants are examples of living things. Rivers,
mountains, oceans, and soil are examples of non-living things, but they
are often homes for living things.
Cars and tables are also not living things, because they cannot
reproduce themselves.
Image:The_freshwater_alga_Spirogyra.jpg\|thumb\|Freshwater Alga
Image:Viburnum_opulus_fruits_close-up\_-\_Keila.jpg\|thumb\|Berry
Image:Anax imperator 2015 11 23 6807.jpg\|thumb\|Dragon-fly
Image:Uitrollend blad van een Polystichum setiferum. 19-04-2023. (d.j.b)
01.jpg\|thumb\|Fern Image:Hellenic pond turtle (Emys orbicularis
hellenica) Butrint.jpg\|thumb\|Turtle Image:Iceland Poppy Papaver
nudicaule \'Champagne Bubbles\' Orange Center.jpg\|thumb\|Bloom
## Levels of life
Living things can be of many different sizes. Size is very important in
biology, since biologists organize the structures and groupings of
living creatures according to size. A living creature is called an
organism. Organisms can consist of single cells or multiple different
types of cells grouped into tissues and organs.
From smallest to largest, these are how living things are grouped:
Cells
: Most cells are only a few microns wide, and are so small that they
can only be seen with a microscope. A micron is one thousandth of a
millimeter.
Tissues
: Tissues are groups of similar cells that are all doing similar
things, like a muscle, which pulls things together.
Organs
: Organs are made of lots of tissues. They all have a special
function, like the heart, which pumps blood.
Organ systems
: Organ systems are groups of organs which work together to do
something. For example, all the organs which digest your food make
up the digestive system.
Organisms
: An organism is a whole living thing, like you, or a tree.
Populations
: A population is a group of organisms which are all the same species
and live together.
Communities
: A community is a group of populations of different species, which
live together; for example, all the fish in a lake.
Ecosystems
: All the communities of organisms in an area, and the way they
interact with non-living things like rivers or the weather in that
area, form an ecosystem.
Biomes
: A biome is a large region of Earth that has a certain climate and
certain types of living things. Major biomes include tundra,
forests, grasslands, and deserts.
Biosphere
: The biosphere is the whole network of living things on planet Earth
--- eight thousand miles in diameter, twenty five thousand miles
around the equator.
Everything in this list is made up of the things above it. For example,
communities are made of many populations and populations are made up of
many organisms.
The earth, our home, our biotope in space.
|
# Wikijunior:Biology/Creatures
## Creatures
Many different creatures live on earth: plants, insects, birds, fish,
bacteria, humans and many more. They have many differences and
similarities.
<File:Monkey> of kembang island.jpg\|thumb\|Monkey <File:Pterois>
volitans Manado-e edit.jpg\|thumb\|Pterois volitans <File:Marabou> Stork
at Animal Kingdom Lodge.jpg\|thumb\|Marabou Stork <File:Jelly>
Monterey.jpg\|thumb\|Jelly Monterey <File:Common> carder
bee.jpg\|thumb\|Common carder bee
<File:Euglena.gracilis.jpg%7Cthumb%7CEuglena>
## Are living beings related to each other?
*According to the theory of evolution, all living beings on earth are
related to each other.*
We humans are therefore related to monkeys, cows and apple trees, but
also to mosquitoes and bacteria. All living organisms on earth today
share common ancestors.
## Tree of life
<File:Tree> of life by Haeckel.jpg\|thumb\|Tree of life by Haeckel
<File:Circular> timetree-of-life 2009.jpg\|thumb\|ohne\|Circular
timetree-of-life 2009
In 1879, the biologist **Ernst Haeckel** drew a family tree of living
things inspired by the family trees of noble families. At the roots are
the unicellular organisms, from which all higher living beings descend,
with humans at the top. Scientists are aware that scientific statements
can be wrong and therefore look for errors. They found errors in
Haeckel\'s picture. So, the birds are drawn in the wrong place.
Modern family trees are much more complex. Humans are now on a par with
other living organisms. On the left of the picture are the mammals, next
to them are the birds. One of the lines denotes the humans. Extinct
creatures such as dinosaurs are not shown here. Even the modern family
trees are probably wrong. Viruses can transfer genes between different
organisms and protozoa can exchange genes with other protozoa. There are
many unanswered questions, especially in the case of unicellular
organisms. There is still much to explore for future biologists.
|
# Wikijunior:Biology/Science and theory of evolution
## Science and theory of evolution
The aim of science is to understand the world better (knowledge) and to
produce new technology (innovation). Scientists develop mental models
(theories) and functional models (such as an engine). Then they test
these models through thought and experimentation. Appropriate models are
presented to the public. Scientists change these models and test them
again. *For scientists, models are not the truth.* They are therefore
always looking for improvements. Good theories are understandable and
thoroughly tested.
<File:Cucurbita_maxima_02_-_Orange.jpg%7CCucurbita> maxima
<File:Cucurbita_pepo_accidental_hybrid_Acchini.jpg%7CCucurbita> pepo
Acchini <File:2006-10-18Cucurbita_pepo02.jpg%7CCucurbita> pepo
File:Cucurbita_pepo\_\'Ufo\'\_-\_scallop_group_Weißer_Kürbis\_\"Ufo\".jpg\|Cucurbita
pepo \'Ufo\' <File:Assorted> Cucurbita pepo and maxima
gourds.jpg\|thumb\|Assorted Cucurbita pepo <File:Cucurbita> in Bayern -
Kürbis - Sortenvielfalt.jpg\|thumb\|Cucurbita from Bayern
## Development of the theory of evolution
For centuries, scientists have repeatedly suspected that today\'s
creatures had common ancestors. They knew that breeders of crops and
livestock can change the characteristics of living beings. So, the
pumpkin growers used only the seeds of the best pumpkins for the next
seed. In some regions, growers propagated red gourds, in other regions
green ones. This is how different shapes came about. However, the
scientists could not explain how breeding without a breeder could work.
!Charles Darwin
**Charles Darwin** showed that natural selection is possible without a breeder. For example, if the food is encased in a thick shell, a bird must be able to crack it to survive. A thick beak makes this possible. The problems in the habitat of living beings determine the goals. A narrow, pointed beak helps to capture insects. He realized this through observations on a trip to the Galapagos Islands.
<File:Evolution_sm.png> <File:Darwin's_finches.png%7CDarwin's> finches
<File:Darwin's_Finches,_Denver_Museum_of_Nature_and_Science.jpg>
## Evolution as natural selection
!DNA double helix
horizontal
The genes are located in the cells of living beings. The genes contain
the building and operating instructions of the living being. These are
made up of DNA.
- **Mutation**\
DNA can be altered by naturally occurring radioactive radiation.
This leads to changes in the offspring. As a result, children, such
as Darwin finch chicks, have different characteristics. They are
called variations.
- **Selection**\
Some chicks die, others survive. Creatures that are better adapted
to the environment have a better chance of surviving. Chicks with a
feeding beak are more likely to survive.
- **Propagation**\
The survivors can have children.
## Evolution as a scientific process
!Chick before the first
flight
Evolution and scientific work are based on a comparable process:
- **Mutation**\
Science: Scientists modify existing models.\
Evolution: Radiation changes DNA.
- **Selection**\
Science: Scientists test the models in experiments.\
Evolution: The realities of nature are a severe test for all living
beings.
- **Propagation**\
Science: Through a publication, other scientists learn about the
model.\
Evolution: Living beings have children.
The theory of evolution describes a scientific knowledge and innovation process.
## Direction and destination
A breeder strives for goals, the evolutionary process of nature does
not. But the mechanism of evolution has one direction: best possible
adaptation to the environment in order to survive. For example, being
able to fly is very useful. Many innovations were needed before the
physical and mental abilities were available for birds to fly and find
their way back home after a migration.
## Science and truth
In philosophy, there have been arguments about truth for centuries.
Since each side was convinced that they had the (absolute) truth at
their disposal, an agreement was impossible. Science has solved the
problem: on the one hand, science dispenses with absolute statements.
All statements are preliminary and can always be improved. On the other
hand, scientists have developed methods to check the quality of models.
Scientific statements are not true or untrue, but of high or low
quality. That doesn\'t mean scientists aren\'t confident in their
theories. However, good scientists are generally willing to revise their views.
## Is the theory of evolution a good theory?
- The theory of evolution is understandable. It is clear that
well-adapted creatures are more likely to survive.
- The theory of evolution is logical. It is based on the laws of
mathematics (statistics).
- The theory of evolution is often tested. It has been verified by
computer simulations, experiments and observations in nature.
- The theory of evolution is helpful. It explains many properties of
living things and processes in nature that would otherwise be
incomprehensible.
The theory of evolution is a high-quality theory.
In particular, questions of speciation, the Cambrian Explosion and the
origin of life are currently under discussion.
|
# Wikijunior:Biology/Life
## Definition of Life
Scientists have come up with over a hundred different definitions of the
term \"life\". Many definitions are similar. They usually belong to one
## Definitions
- **Enumeration of properties**\
*Life is a system that has a metabolism, can grow, multiply and move.*
Definitions should help to avoid misunderstandings when working
together. Scientists have therefore developed quality criteria for
definitions.
With enumerations, a problem arises with classification if not all
criteria are met. Enumerated definitions are of low quality.
Metabolism wip.png\|thumb\|Metabolism Cross-section of an Oak Log
Showing Growth Rings.jpg\|thumb\|Growth Rings Sonchus April
2010-1.jpg\|Seeds Christie (racing automobile)
LCCN2014682300.jpg\|thumb\|Racing Automobile Darwin Hybrid Tulip
Mutation 2014-05-01.jpg\|Tulip with Mutation
- **Matter and Energy**\
*Life is a system of nucleic acids and polymerases that absorbs and converts matter and energy.*
The definition is limited. It relates to earthly life.
Some scientists suspect that there are life forms in space that use
other substances.
RNA Polymerase II.png\|thumb\|RNA Polymerase
- **Information**\
*Life is a system that can absorb, process and deliver information.*
The definition is wide. It does not separate biology from technology.\
Papertape3.jpg\|thumb\|Information
- **Thermodynamics**\
*Life is an unstable system that creates and maintains order within itself.*
This definition applies to other aspects of life: Living beings try to
maintain order so they don\'t die.
That\'s true, but definitions are meant to be intuitive.\
Mischentropie.jpg\|thumb\|Entropie
The definitions listed distinguish between animate and inanimate.
In the following symbiosis-based definition, there is a graded
transition. Organisms are a higher life form than unicellular organisms
because they rely on more symbioses.
- **Scientifically - philosophically**\
*Life is based on symbioses, i.e. on cooperation for mutual benefit.
Philosophical: Life is based on the principle of love.*
Principle-based definitions are of higher quality than ad-hoc
definitions.
This is a principle-based definition: symbioses form the basis of life.
But biologists have different opinions on this question.
The symbiosis definition gives life an (ethical) value, which is
important for doctors and lawyers. But many biologists strictly reject
the use of philosophical terms like \"love\".\
Common clownfish curves dnsmpl.jpg\|thumb\|Clownfish with Sea anemone
African buffalo (Syncerus caffer caffer) male with cattle
egret.jpg\|thumb\|African buffalo with cattle egret Hummingbird hawkmoth
a.jpg\|thumb\|Hummingbird with Plant
PloverCrocodileSymbiosis.jpg\|thumb\|Crocodile with Plover Lasius niger
y Cinara tujafilina en Thuja orientalis.jpg\|thumb\|Ant with Aphid
Lichen Cladonia portentosa and Hypogymnia physodes.jpg\|thumb\|Lichen:
Mushroom with Algae
Life is diverse. Each of these definitions illuminates different aspects
of life.
|
# Wikijunior:Biology/Origin of Life
## Origin of Life
!Stromatolite about 3.4 billion years old
Within about a billion years after the formation of the earth, there were protozoa and stromatolites.
It is not known how life evolved. There are various theories, one of
which is presented below. Living things control their internal
chemistry. A main problem in the transition from chemistry to biology is
self-regulation. A comparison with the market economy should clarify
this.
\- When the warehouses in a factory are full, production is stopped.
(End products inhibit chemical reactions.) In a market economy,
production is regulated by supply and demand. (In chemistry, starting
products and end products control the reaction.)
\- There is a problem with waste products in the industry. These must be
disposed of. This reduces the profit. Many entrepreneurs have become
rich because they came up with an idea of how to make something valuable
out of the waste products. The waste product became the raw material.
(The end product of one chemical reaction can become the starting
product of another reaction.)
**Step 1: Cycle**
Starting products and catalysts promote chemical reactions. After the reaction, the catalysts are released and are available for further reactions. That's why it's called a cycle. End products inhibit chemical reactions.
**Step 2: Hypercycle**
The end product of a chemical reaction becomes the starting product in another reaction. This restarts the production of this substance.
**Both cycles control and sustain each other in a symbiosis.**
The metabolic processes of every cell are based on a large number of
hypercycles. According to the symbiosis-based definition, hypercycles
are very simple life forms based on chemistry, single-celled organisms
with their enormous number of hypercycles are a significantly higher
life form.
Living things need a boundary to hold their parts together. It is
possible that the first living beings arose in small cavities in the
rock between which chemical substances circulated. It was only much
later that individual cells left the cavities and settled elsewhere. The
first form of life was therefore not a single cell, but a habitat. The
first biological form of life was therefore the biotope.
400px \|MUexperiment
Living beings need to constantly take in food. This consists of chemicals and provides building material and energy. In order for life to arise, the necessary chemical substances such as sugars, fats and amino acids had to be available. Scientists have shown that for all important substances there are a number of ways in which they could have formed. Many scientists therefore believe that there are living beings on many planets and moons in space.
## Speciation
All living things live in a shared **environment**. They need to use
certain things (called **resources**) from their environment, like food,
water, and a place to live. These resources are limited, so when more
than one organism tries to use the same resource, they end up in
**competition**. When two living things compete for a common resource,
one of them will eventually win and **consume** (or use) that resource.
When something about a thing\'s body makes it better at competing for
resources, we call that special feature **adaptation**. Since these
adaptations can be passed from parents onto their children, as time goes
on, these adaptations become more common within a **population**, or
group of similar living things living together. This is called **Natural
Selection**, or **Evolution**.
When a small group of living things (a small population) gets separated
from the main population that they came from (like if they move over a
mountain range or a river, or if they move to a new island so that they
can\'t easily move back) they will often find themselves in a different
environment than they were in before. This new environment has different
resources and different competitors, so the new population will need
different features or adaptations to be a strong competitor than what
they had needed before. The original population hasn\'t changed at all,
they still need the same adaptations as before. Over time, as the new
population begins to adapt to their new environment, they start to look
less and less like the other population. Eventually, after thousands or
even millions of years, the two populations will look so different that
they can\'t be called the same species. We call this process
**speciation**, which just means the formation of new species.
Speciation is an unavoidable consequence and a very important part of
evolution.
The Earth itself was born *4 and a half billion years ago*. At first, it
was just a bunch of rock and water. There were no living things. But
then, about *3.8 billion years ago*, the first life was formed in the
oceans. It was no bigger than a single cell, but that single cell was
able to copy itself and form more and more cells. Over billions of
years, as that one cell evolved, it became more and more complex.
Eventually, about *1 billion years ago*, the first living things with
more than one cell were born. Many of the kinds of things that lived
that long ago can\'t be found living in the world any more, because
newer things have been better competitors and forced the older things to
**extinction**, but we know they existed because we can find their
**fossils**, which are traces of ancient living things buried deep in
the rocks.
## Literature
[^2]: Eigen & Schuster (1977) The Hypercycle. A Principle of Natural
Self-Organisation. Part A: Emergence of the
Hypercycle.
Naturwissenschaften Vol. 64, pp. 541--565.
|
# Wikijunior:Biology/Cells
## Cells
!Plant cells
All living things are made of cells. They are the components and
building blocks of life.
**What is a cell?** *A cell is a bag of liquid that holds in the stuff
of life.*
A cell is the smallest structural and functional unit of a living
organism. The word \"cell\" comes from the Latin word *cella*, which
means small room. If you look at living things under a microscope, you
will see that they are made of small squares or balls. **Robert Hooke**,
a biologist from England, saw these small squares in a hard material
called cork using a microscope in the year 1665. They looked like rooms,
and so he called them cells. He was also the first person to observe
dead cells.
**What types of cells are there?** There are two kinds of cells:
**eukaryotes**, which have a large ball in them called a *nucleus*, and
**prokaryotes**, which do not.
Most prokaryotes are very small. Two of the six kingdoms, *Bacteria* and
*Archaea*, are made up of prokaryotes. All of the rest of the kingdoms --
*Animalia*, *Plantae*, *Fungi*, and *Protista* -- are made up of
eukaryotes.
### Endosymbiosis Theory
Every human being is a living being. But each one is made up of a large
number of tiny living things called cells. But that\'s not all! Smaller
creatures live in every cell of our body. The larger cell nourishes the
smaller cells inside, and the small cells perform important functions.
Working together for mutual benefit is called symbiosis. According to
the endosymbiont theory, single-celled organisms were taken up by the
large cells long ago. The mitochondria are derived from aerobic
bacteria. The mitochondria gain energy from food and oxygen. The
chloroplasts found in plant cells are derived from cyanobacteria. The
chloroplasts gain energy from light.
<File:Euglena> mutabilis - 400x - 1 (10388739803).jpg\|thumb\|Euglena
<File:Euglena> viridis TK-UT.svg\|thumb\|Euglena
<File:Endosymbiosis.svg%7Cthumb%7CEndosymbiosis> <File:Serial>
Endosymbiosis Theory.png\|thumb\|Endosymbiosis Theory
## What do cells look like?
Cells are surrounded by a thin layer of oil called the *cell membrane*.
It separates the inside of the cell from the outside. Some cells also
have a firm box around them called a *cell wall* that keeps it from
breaking. The water that fills a cell is called the *cytoplasm*. Inside
a cell, knowledge is stored in something called a *chromosome*. It tells
the cell how to work, like steps in a book.
Eukaryotic cells hold their chromosomes in a structure called a
**nucleus**, which has its own oily membrane around it. Cells also have
many other things with membranes called **organelles**, which means
\"little organs\". Some organelles found in eukaryotic cells are called
*ribosomes*, *vacuoles*, *mitochondria*, and *chloroplasts*. !Human
sperm
cell{width="600"}
Cells that do different things have different shapes. A plant leaf cell
takes light and uses it to make sugar. To do this, it has green
organelles called *chloroplasts*. To get the most light, it pushes
cytoplasm in circles around a hollow bubble of water in the center of
the cell called a *vacuole*.
A human sperm cell carries its chromosomes, found in the nucleus, to an
egg cell in order to make a new baby. It has a large tail called a
*flagella* that helps it to swim. It also has many organelles called
*mitochondria* that give it power, like how gasoline powers a car.
#### Vocabulary words
: **nucleus** - A ball of membrane in the middle of the cell that
holds the chromosomes.
: **chromosomes** - Things that hold the knowledge of the cell.
: **prokaryote** - A cell without a nucleus.
: **eukaryote** - A cell with a nucleus.
: **organelles** - Little things inside a cell.
: **cytoplasm** - The gel-like inside of a cell.
: **membrane** - An oil bag that holds water.
: **vacuole** - An organelle full of water and waste inside a cell.
: **mitochondria** - An organelle that makes power in a cell.
: **chloroplast** - An organelle that makes sugar found in a plant or
protist.
: **flagella** - A tail on a cell that makes it swim.
: **Golgi body** - An organelle which helps in secretion.
: **ribosome** - An organelle which helps in synthesis of proteins.
|
# Wikijunior:Biology/Tissues
! Animal muscle
tissue
## Tissues
Organisms are made of tissues. Tissues are groups of
cells that work together. Plant leaves
have tissues that capture light and make sugar. Most animals have
**muscle** tissues that help them move.
When two or more tissues work together to do one thing, they make up
organs.
In plants, there are two types of tissues:
- **Meristematic tissue**: This has actively dividing cells.
- **Permanent tissue**: This type of tissue has developed cells. They
do not divide.
There are also two different types of permanent tissue:
- **Simple permanent tissue**: This type of permanent tissue has only
one kind of cells. Some examples of simple permanent tissues are:
- **Parenchyma**: They have loosely packed cells. The cells do not
have a particular function.
- **Collenchyma**: They have cells which have layers called
pectin. They contain chlorophyll.
- **Sclerenchyma**: They have dead cells. Between the cells, there
are layers called lignin.
- **Complex permanent tissue**: This type of permanent tissue contains
different kinds of cells. Some examples of complex permanent tissues
are:
- **Xylem**: This type of tissue contains mainly dead cells. They
help to move water from the roots to leaves.
- **Phloem**: This type of tissue contains mainly living cells.
They help moving food materials from leaves to other parts.
|
# Wikijunior:Biology/Organs
## Organs
! A heart\|thumb Many living things have
**organs**. Your heart, brain, lungs, liver, and kidneys are all
examples of organs.
Organs are made up of two or more
tissues.
All organs have something that they do to keep you healthy. For example,
the heart pumps blood, and the lungs give you air.
Organs work together in groups called organ
systems.
## Cambrian explosion and organ formation
The earth formed about 4.5 billion years ago. At most one billion years later, there were already unicellular organisms. Complex single-celled organisms, called eukaryotes, arose about 2 billion years ago. The first multicellular organisms, such as filamentous algae, jellyfish and worms, arose 1-2 billion years ago. Around 540 million years ago, animals with complex organs such as hearts and eyes emerged.
Just as the economy and way of life of people fundamentally changed during the industrial revolution in a short historical time, all animal phyla that exist today developed in a geologically short time. Biologists speak of the Cambrian explosion. The industrial revolution was triggered by basic innovations such as the steam engine. The Cambrian explosion was also probably triggered by basic innovations that made organ formation possible. The prerequisite is that cells specialize and then migrate to the right place.
<File:Medusina> asteroides.jpg <File:Life> in the Ediacaran sea.jpg
<File:DickinsoniaCostata.jpg> <File:20191020> Yohoia tenuis.png
<File:Opabinia> smithsonian.JPG <File:Figure> 27 04 03.jpg
|
# Wikijunior:Biology/Systems
## Organ systems
!A diagram of the reproductive system in
women{width="250"}
Two or more organs that work together make up an **organ system**.
Organ systems are found in all different kinds of living things.
Some of the organ systems found in humans include:
- Circulatory
system
- Respiratory
system
- Digestive system
- Endocrine system
- Reproductive
system
- Urinary system
- Immune system
- Muscular system
- Skeletal system
- Integumentary
system
- Nervous system
|
# Wikijunior:Biology/Kingdoms
## Kingdoms
When we look at living things we divide them up into groups and give the
groups names. This is called **classification**.
Living things are classified into groups of different sizes. The biggest
groups contain almost everything. The smallest groups have only a few
types of living things in them.
The groups are, from large to small:
: Domain
: Kingdom
: Phylum
: Class
: Order
: Family
: Genus
: Species
The domains are *Bacteria*, *Archaea*, and *Eukarya*, but most people
still find it easiest to divide things by **kingdom**.
The five kingdoms are:
: Archaea
: Bacteria
: Animalia (Animals)
: Plantae (Plants)
: Fungi (Funguses and
mushrooms)
|
# Wikijunior:Biology/Viruses
## Viruses
!A virus called a rotavirus, which can cause diarrhea.\|alt=Diarrhoea
causing virus,
Rotavirus.
Viruses are much smaller than other living things like
bacteria, so small that it
would take around one hundred viruses laid end to end just to make the
length of a bacterium! Viruses are not really alive. They fall in the
line between living things and non-living things. They do not do all of
the things that living things
do. They can only
make more copies of themselves when they are inside living cells.
Viruses often kill cells, and also make
you ill. Lots of diseases are caused by viruses, the most famous ones
are the viruses that cause the flu, colds, and Covid-19.
|
# MySQL/Introduction
## What is SQL?
For a more general introduction see the SQL Wikibook.
**S**tructured **Q**uery **L**anguage is a third generation language for
working with relational databases. Being a 3G language it is closer to
human language than machine language and therefore easier to understand
and work with.
- Dr. E. F. "Ted" Codd, who worked for IBM, described a relational model for databases in 1970.
- In 1992, ANSI (American National Standards Institute), the apex body, standardized most of the basic syntax.
- It is called SQL-92, and most databases (like Oracle, MySQL, Sybase, etc.) implement a subset of the standard (plus proprietary extensions that often make them incompatible).
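As a small taste of what SQL looks like (the `employees` table here is only a hypothetical example), a query reads almost like an English sentence:

``` sql
-- List the people in the Sales department, best paid first.
SELECT name, salary
FROM employees
WHERE department = 'Sales'
ORDER BY salary DESC;
```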
## Why MySQL?
- Free as in Freedom - Released with GPL version 2 license (though a
different license can be bought from Oracle, see below)
- Cost - Free!
- Support - Online tutorials, forums, mailing list (lists.mysql.com),
paid support contracts.
- Speed - One of the fastest databases available.
- Functionality - supports most of ANSI SQL commands.
- Ease of use - less need of training / retraining.
- Portability - easily import / export from Excel and other databases
- Scalable - Useful for both small as well as large databases
containing billions of records and terabytes of data in hundreds of
thousands of tables.
- Permission Control - selectively grant or revoke permissions to
users.
### The MySQL license
MySQL is available under a *dual-licensing* scheme:
1. Under the GNU General Public License, version 2, (\"or later\"
allowed in versions released before 2007): this is a Free (as in
freedom), copyleft software license that allows you to use MySQL for
commercial and non-commercial purposes in your application, as long
as your application is released under the GNU GPL. There is also a
\"FLOSS Exception\"
which essentially allows non-GPL\'d but Free applications (such as
the PHP programming language, under the PHP license) to connect to a
MySQL server. The exception lists a set of free and open-source
software license that can be used in addition to the GNU GPL for
your MySQL-dependent Free application.
2. A so-called \"commercial\" [^1], paid license, that is, a license
where MySQL grants you the right to integrate MySQL with a non-FLOSS
application that you are redistributing outside your own
organization. [^2]
## MySQL and its forks
MySQL is Free Software, so some forks and unofficial builds delivering
contributions from the community exist.
### MariaDB
In 2008 Sun Microsystems bought MySQL, and Sun was itself acquired by Oracle in 2010. After the acquisition, the development process changed. The team started to release new MySQL versions less frequently, so the new code is less tested. There were also fewer contributions from the community.
In 2009 Monty Widenius, the founder of MySQL, left the company and created a new one, called The Monty Program.
He started a new fork called MariaDB. The goals of MariaDB are to:
- import all the new code added to the main MySQL branch, but enhance it to make it more stable;
- clean up the MySQL code;
- add contributions from the community (new plugins, new features);
- develop the Aria storage engine, formerly named Maria;
- improve the performance;
- add new features to the server.
The license is the GNU GPLv2 (inherited from MySQL).
The primary platform for MariaDB is GNU/Linux, but it also works on proprietary systems. The following storage engines have been added:
- Aria (also used for internal tables)
- PBXT
- XtraDB
- FederatedX
- SphinxSE
- OQGRAPH
- Others may be added in the future.
### Drizzle
In 2008 Brian Aker, chief architect of MySQL, left the project to start
a new fork called Drizzle. While Oracle
initially funded the project, Drizzle is now funded by Rackspace. Its
characteristics are:
- only a small part of the MySQL code has survived in this fork, the
rest being removed: only essential features are implemented in the
Drizzle server;
- the survived code has been cleaned;
- Drizzle is modular: many features are or can be implemented as
plugins;
- the software is optimized for multiCPU and multicore 64 bit
machines;
- only GNU/Linux and UNIX systems are supported.
There are still no public releases of this fork. Its main license will
be the GNU GPLv2 (inherited from MySQL), but where possible the BSD
license is applied.
### OurDelta
OurDelta is another fork, maintained by Open
Query. The first branch, which has number 5.0, is based on MySQL 5.0.
The 5.1 branch is based on MariaDB. OurDelta includes some patches
developed by the community or by third parties. OurDelta provides
packages for some GNU/Linux distributions: Debian, Ubuntu, Red
Hat/CentOS. It is not available for other systems, but the source code
is freely available.
### Percona Server
Percona Server is a MySQL fork maintained by Percona. It provides the
XtraDB storage engine, which is a fork of InnoDB, and some patches
which mainly improve the performance.
## Notes
[^1]: Calling it \"commercial\" is misleading, because the GNU GPL can
be used in commercial (but non-proprietary) projects.
[^2]: Proprietary projects still can connect to a MySQL server without
purchasing this license by using old versions of the MySQL client
connection libraries (under the GNU Lesser General Public License).
However, these libraries cannot connect to the newest versions of
the MySQL server.
|
# MySQL/MySQL Practical Guide
## Installing MySQL
### All in one solutions
As MySQL alone isn't enough to run a real database server, the more practical way to install it is to deploy an all-in-one pack for this purpose, including all the needed additional elements: Apache and PHP.
1. On Linux: XAMPP or LAMP.
2. On Windows: XAMPP, WAMP, or EasyPHP.
**Attention on Windows 10:**
- The IIS server is launched by default, which forces Apache to change its
  port (888 instead of 80). To resolve this, just untick *Internet
  Information Services* in *Programs and Features*, *Turn Windows features
  on or off*. In the same way, the MySQL port can change from 3306 to 3388.
- Moreover, *EasyPHP development server* (alias *Devserver*, the red
  version) doesn't work properly (*MSVCR110.dll is missing*), but *EasyPHP
  hosting server* (alias *Webserver*, the blue one) does. However, it is
  launched automatically at each boot, which slows the system down
  significantly. To avoid this, execute *services.msc* and set the three
  services below to manual start. Then, to launch them on demand (as an
  administrator), create a script called *MySQL.cmd* containing the
  following lines:
``` dos
net start ews-dbserver
net start ews-httpserver
net start ews-dashboard
pause
net stop ews-dashboard
net stop ews-httpserver
net stop ews-dbserver
```
### Single installation
*This guide is written from the perspective of using the Linux Shell with
Ubuntu and apt-get.*
_If you want to solely use the Terminal_:
Make sure you have the MySQL client and server installed. To install the
client and the server under apt-get distributions (for example Debian
and Ubuntu), execute:
apt-get install mysql-client mysql-client-5.0 mysql-server mysql-server-5.0
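Once the packages are installed, a quick sanity check (the exact service name may differ between distributions and MySQL versions, so treat this as a rough sketch) is to confirm the client is present and the server is running:

    mysql --version
    sudo service mysql status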
_Having a secure installation_:
If all your answers are \"yes\" to what follows, this cleans up your
installation: it asks you to set a root password, remove anonymous users,
disallow remote root login, and remove the test database.
Just be careful. Be sure that you are configuring MySQL to the
specifications you want.
Here\'s the code:
mysql_secure_installation
## Creating your own MySQL account and database:
Now that MySQL is installed, you won't necessarily have your own
account yet, so you have to log in as root.
To do this, type:
sudo mysql -u root -p
(This means that you\'re logging on as the user \"root\" (**-u root**)
and that you\'re requesting the password for \"root\" (**-p**) )
Once you\'ve managed to log in, your command-line should look like this:
**mysql\>**
By the way, if your command-line ends up looking like this: **-\>** ,
there's an explanation behind it.
In MySQL each command has to end with **;** . That is how the client knows
where one command ends.
So to get out of there, simply type **;** . There will be more on this
later.
Now you can check what databases (if any) are available to your user (in
this case \"root\" ):
show databases;
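On a fresh installation the output will look roughly like this (the exact list depends on your MySQL version):

``` mysql
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
+--------------------+
2 rows in set (0.00 sec)
```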
Let\'s get straight to the chase and create our own database. Let\'s
call it **people**. While we\'re doing this we can also create our own
user account. Two birds with one stone.
So first create the database:
create database people;
(NOTE: in this particular case, you have to be \"root\" to create new
databases.)
Now we want to grant ( **GRANT** ) all user rights ( **ALL** ) on (
**ON** ) the entire ( **\*** ) **people** database to ( **TO** ) your
account ( *yourusername***\@localhost** ) with your user password being
*stuffedpoodle* ( **IDENTIFIED BY \"stuffedpoodle\"** ).
So we\'d input this as:
GRANT ALL ON people.* TO yourusername@localhost IDENTIFIED BY "stuffedpoodle";
Tada! You now have your own user account. Let\'s say you chose **ted**
as your username. You\'ve configured MySQL to say that **ted** can play
around with the **people** database in whatever ways he wishes.
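A quick note if you are on a newer MySQL version (8.0 and later): GRANT can no longer create a user implicitly, so the equivalent of the command above is two statements, creating the account first and then granting the rights:

    CREATE USER 'ted'@'localhost' IDENTIFIED BY 'stuffedpoodle';
    GRANT ALL ON people.* TO 'ted'@'localhost';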
Now get out of MySQL by typing
exit
To start working with the **people** database, you can now login as
**ted**:
mysql -u ted -p
## Creating tables with information in your database:
In MySQL information is stored in tables. Tables contain columns and
rows.
**Ted** has now created a **people** database. So we want now to enter
some information into a table.
Login as **ted**.
Firstly, we need to make sure we\'re working with the **people**
database. So typing:
select database();
will show you what database you\'re currently using. You should see a
**NULL** , meaning that you\'re working with nothing at the moment.
So to start using the people database, type:
\u people
(NOTICE: Typing **USE people** or logging in with **mysql -u ted -p
people** is also acceptable.)
So, how do we create a table?
Keep in mind that we need to define all the columns (like surname,
age, etc.).
Now, remember that annoying **-\>** symbol? MySQL reads your command as
just one command, not a series. So, **-\>** enables you to enter your
inputs in a nicer way than just writing everything on one line. (NOTE:
The problem with this method is that if you screw up on a line and press
ENTER to go to the next line, you can\'t go back and fix your mistake.
That\'s why a nice way to do this is using something like *SciTE Text
Editor* (set language to **SQL**) to write your code and just copy/paste
that into the shell.)
Another thing is that you must end each column definition with **,** ,
except the last one, which has no trailing comma; the statement then
closes with **);** on the final line.
First I have to explain a few things so you're not blown away by an
unfamiliar bunch of code.
If you don't know, we use brackets **()** to **encapsulate** code.
(Often called *parentheses*.)
The first thing we will write after **CREATE TABLE** *tableName* and the
opening bracket will be the *database ID* number (we use integers) of
each person, better known as the **Primary Key**. It's kinda like a
passport ID number. Each number is unique to its owner, and it has to be,
to prevent duplication and imposters.
Now, any variable in SQL is created as
variableNAME variableTYPE otherVariableAttributes
. So in order to **define** the Primary Key variable, we need to type
for example:
**peopleID**(variableNAME) **int**(variableTYPE - short for \"integer\")
**unsigned**(means we want our integer value to always be a positive
number) **not null**(we want each row to have a value, so obviously the
value can\'t be empty(NULL) ) **auto_increment**(this ensures that each
new row that is created will be a unique value) **primary key**(we are
saying that this particular variable will be our Primary Key for this
Table.)**,** (a reminder that the **,** symbol indicates the end of this
line so MySQL knows to go to the next line)
You already know about the **int** variable. There is another which is
kinda like *String* (for example: if you\'ve programmed in Java before).
It\'s called **varchar** which stands for *variable characters*. You set
the amount of characters someone is able to input into a **varchar**
variable. Like this: **nameOfFattestMooseAlive varchar(30)** So
**nameOfFattestMooseAlive** can have a maximum of 30 characters.
Okay, so let\'s see an example of how to create a table relating to the
**people** database:
CREATE TABLE peopleInfo
(
peopleID int unsigned not null auto_increment primary key,
firstName varchar(30),
lastName varchar(30),
age int,
gender varchar(13)
);
Just a note that I set the maximum value of **gender** to 13 because
\"hermaphrodite\" has 13 characters. :)
Now you can type **CREATE TABLE peopleInfo** and press ENTER if you'd
like to start **-\>** and write the rest of the code, or you can use
SciTE and copy/paste it into your shell.
Great. We now completed our first Table.
_Now comes the part where we have to get some actual people
into our **peopleInfo** table._
Since you're already using the **people** database, you can type
show tables;
to see what tables are currently in your database.
To see the *properties* of your table type:
describe peopleInfo;
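For the table created above, the output will look roughly like this (the exact type display varies between MySQL versions):

``` mysql
+-----------+------------------+------+-----+---------+----------------+
| Field     | Type             | Null | Key | Default | Extra          |
+-----------+------------------+------+-----+---------+----------------+
| peopleID  | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| firstName | varchar(30)      | YES  |     | NULL    |                |
| lastName  | varchar(30)      | YES  |     | NULL    |                |
| age       | int(11)          | YES  |     | NULL    |                |
| gender    | varchar(13)      | YES  |     | NULL    |                |
+-----------+------------------+------+-----+---------+----------------+
5 rows in set (0.00 sec)
```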
So, how to fill in our **peopleInfo** table with people\...
This is done by telling MySQL **which** *columns* you are filling in and the
**actual information/data** you want to fill in.
So we want to **insert into** our table (specifying the columns) and
input the **values** (actual data) that we want. (NOTE: We are not
filling in the primary key; **auto_increment** takes care of it.)
To create our first person you would type this:
INSERT INTO peopleInfo
(firstName, lastName, age, gender)
values
("Bill", "Harper", 17, "male");
Great. Now if you want to printout to the screen all the information
about your table, type:
select * from peopleInfo;
and there you have it. Your table now has one person stored in it.
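For the single row inserted above, the output should look something like this:

``` mysql
+----------+-----------+----------+------+--------+
| peopleID | firstName | lastName | age  | gender |
+----------+-----------+----------+------+--------+
|        1 | Bill      | Harper   |   17 | male   |
+----------+-----------+----------+------+--------+
1 row in set (0.00 sec)
```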
_Inserting lots of information into your table:_
A brief point that shall be covered later: MySQL backs itself up in .sql
files. The reason this is smart is that the backup is the actual SQL code
inside a text file.
Keeping this in mind, let\'s say we want to add 10 other people into
your peopleInfo table. It would be one hell of a hassle typing each
person into existence. What if there were a 1000?
So I\'ve graciously typed out the code of filling in 10 other people to
a database. :) Create a blank .txt file and copy/paste this information
into it, saving it as **tenPeople.sql** .
INSERT INTO peopleInfo (firstName, lastName, age, gender) values ("Mary", "Jones", 21, "female");
INSERT INTO peopleInfo (firstName, lastName, age, gender) values ("Jill", "Harrington", 19, "female");
INSERT INTO peopleInfo (firstName, lastName, age, gender) values ("Bob", "Mill", 26, "male");
INSERT INTO peopleInfo (firstName, lastName, age, gender) values ("Alfred", "Jinks", 23, "male");
INSERT INTO peopleInfo (firstName, lastName, age, gender) values ("Sandra", "Tussel", 31, "female");
INSERT INTO peopleInfo (firstName, lastName, age, gender) values ("Mike", "Habraha", 45, "male");
INSERT INTO peopleInfo (firstName, lastName, age, gender) values ("John", "Murry", 22, "male");
INSERT INTO peopleInfo (firstName, lastName, age, gender) values ("Jake", "Mechowsky", 34, "male");
INSERT INTO peopleInfo (firstName, lastName, age, gender) values ("Hobrah", "Hinbrah", 24, "hermaphrodite");
INSERT INTO peopleInfo (firstName, lastName, age, gender) values ("Laura", "Smith", 17, "female");
Excellent. Now we want to get all these people in our table. **exit**
MySQL and go to the directory where you saved the **tenPeople.sql**
file.
Once there, to get all the data into your database, type:
mysql -u ted -p people < tenPeople.sql
and enter your password.
Now log into MySQL and remember to select the database you're using. **\\u
people**
Now check again what information you have. There ya go.
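If you prefer not to leave the client at all, the mysql client can also read the file itself; assuming tenPeople.sql sits in the directory you started the client from and the **people** database is selected:

    source tenPeople.sql;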
## Manipulating your database:
Now that we have a database full of people, we can display that
information any way we want. A brief example would be
select firstName, lastName, gender from peopleInfo;
This would display to the screen only people's names, surnames, and
genders. You haven't specified that you want people's database IDs or
ages to be displayed. And the great thing is you can choose whatever
you want from the database to be displayed.
Now, if you want to delete your table, simply type:
drop table peopleInfo;
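As a small aside (not part of the original walkthrough): if you only want to empty the table while keeping its structure, you can remove the rows instead of dropping the whole table:

    delete from peopleInfo;        -- removes every row, keeps the table
    truncate table peopleInfo;     -- same effect, usually faster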
Extra conditions:
You can also use extra conditions (filters) when displaying data.
select * from peopleInfo where gender = 'female';
will display everyone who is female.
(NOTE: text values are enclosed with **\'** while numbers are plain.)
You can also compare numbers. For example:
select * from peopleInfo where age > 17;
will show everyone in your table who is older than 17.
Little index here:
> greater than
< less than
>= greater than or equal to
<= less than or equal to
<> not equal to
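Conditions can also be combined and the result sorted; as a small sketch against the table above:

    select firstName, age from peopleInfo where gender = 'female' and age >= 21 order by age;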
Let's say we wanted to display all the people whose first names begin with
the letter \"j\". We would use the LIKE condition. (Makes sense: is your
name LIKE the letter \"j\"? Well, it starts with j, so yes. :) )
select * from peopleInfo where firstName LIKE "j%";
(NOTE: LIKE \'s evil opposite cousin is NOT LIKE)
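For the record, LIKE patterns use two wildcards: **%** matches any sequence of characters (including none) and **_** matches exactly one character. A couple of sketches against our data (the first matches Bill and Jill, the second everyone who is not female):

    select * from peopleInfo where firstName LIKE '_ill';
    select * from peopleInfo where gender NOT LIKE 'f%';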
## Backing up and restoring your MySQL database:
There is a program called **mysqldump**. This is a way to back up your
database.
Remember how you managed to get information into your database from
**tenPeople.sql**? Well that\'s how you restore information to a
database.
(In this particular case you gotta make sure that in your database you
have a table called \"peopleInfo\")
Now\...
To back up your database (in this case the **people** database):
We will back it up to a file called **backupfile.sql** ; the shell
redirection below creates (or overwrites) that file for us.
Now we can type:
mysqldump -u ted -p people > backupfile.sql
Congratulations. You have now backed up your **people** database.
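To restore that backup later, feed the file back into mysql the same way you loaded tenPeople.sql:

    mysql -u ted -p people < backupfile.sql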
**WARNING!** `mysqldump` is one of the worst ways to back up production
databases, for the following reasons:
- it takes quite a lot of time to dump the data;
- it takes even more time to restore: depending on the data size, this can
  be counted in days!
- there are locking problems with MyISAM tables or a mixed environment.
Better solutions are based on binary copies. These allow you to perform
non-locking, consistent backups.
For MyISAM or mixed environment:
- LVM snapshots
For InnoDB:
- LVM snapshots
- ZFS snapshots (for Solaris systems)
- InnoDB Hot Backup
- XtraBackup (similar to InnoDB Hot Backup but free)
## phpMyAdmin
This graphical interface allows you to generate SQL code by selecting
options with the mouse. This software has its own wiki at
<http://wiki.cihar.com/pma/Welcome_to_phpMyAdmin_Wiki>.
## Hello world
To enter the SQL commands:
- Launch MySQL in a shell:
  - Linux: `mysql -h localhost -u root MyDB`
  - Windows:
    `"C:\Program Files (x86)\EasyPHP\binaries\mysql\bin\mysql.exe" -h localhost -u root MyDB`
- Or open an SQL window in phpMyAdmin (e.g.
  <http://localhost/modules/phpmyadmin/#PMAURL-1:server_sql.php?server=1>).
``` mysql
select "hello world";
+-------------+
| hello world |
+-------------+
| hello world |
+-------------+
1 row in set (0.00 sec)
```
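The same prompt will happily evaluate built-in functions too; for example (the output naturally depends on your server version and the current time):

``` mysql
select version(), now();
```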