12.4 Building the Sample Implementation
It is not necessary to compile the LSB-si for testing; the official packages released by the LSB project should be used for this purpose. However, it may be instructive to see how the LSB-si is constructed to get to a clean implementation of the LSB Written Specification. The remainder of this section describes the process.
The LSB-si is built from clean upstream package sources. The build instructions are captured in a set of XML files that serve as input to a tool called nALFS; the concept is derived from the Linux From Scratch project.
The build is a multistage process (Figure 12.1), so that the final result has been built by the LSB-si itself and the majority of dependencies on the build environment of the host machine are eliminated. Ideally, all the dependencies would be eliminated, but in practice a few minor things may leak through. In particular, the initial stage of the LSB-si build no longer performs the GCC fixincludes step, as this pulled details of the host system into the "fixed" header files, which were then used throughout the rest of the build process.
Figure 12.1 LSB-si Build
The first phase, or bootstrap, of the LSB-si build is to produce a minimal toolchain of static binaries as shown in Figure 12.1. Packages such as gcc, binutils, kernel-headers, and coreutils are built.
The second phase of the build is to use the bootstrap as a chroot environment to build a more complete toolchain as shown in Figure 12.1. As binaries are rebuilt, the new ones are installed on top of the old static copies built in the bootstrap phase so that by the end of the second phase, we have a complete development environment, using all dynamic libraries. This environment has the characteristic that it is entirely isolated from the details of the build host environment, since none of the tools from the build host have been used to compile the final binaries and libraries.
To reduce the rebuild time, the bootstrap phase is copied to another location before starting, and the copy is used as phase 2. During LSB-si development, there tend to be few changes to the bootstrap but many to the later phases. For a released LSB-si source tree, this really doesn't matter, except that it increases the space requirements of the build area a bit. Thus copying the bootstrap for the second phase is not essential to the build strategy, but rather a convenience for LSB-si developers.
This intermediate phase 2 of the build can be used as an LSB development environment; in effect, that is how it is used when building the final phase. The final phase does not contain a compilation environment, since one is not part of the LSB Written Specification. The intermediate phase 2 is designed to be used as a chroot environment; invoking the compiler directly (outside the chroot) won't work, because its internal paths resolve relative to the wrong places. Although the intermediate phase 2 targets the same architecture as the host machine, it behaves more like a cross-compilation environment. Note that producing a more usable build environment is a future direction; the current intermediate phase is not officially supported as such, and the bundle is not part of the released materials.
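As a sketch, entering the intermediate phase might look like the following; the /usr/src/si/phase2 path is hypothetical, since the actual directory layout is defined by the build profiles:

```shell
# Hypothetical path: the intermediate environment is assumed here to be
# under /usr/src/si/phase2; check your build area for the real name.
# This must be run as root.
chroot /usr/src/si/phase2 /bin/sh
# Inside the chroot, the toolchain's relative paths resolve correctly,
# so a compilation such as "cc -o hello hello.c" works as expected.
```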
The third phase is the construction of the actual LSB-si as it will be delivered, as shown in Figure 12.1. In this phase, the completed second phase is used in a chroot as the development environment, and each package is then compiled and installed to a target location in the LSB-si tree. During the third phase, care is taken not to install unnecessary binaries or libraries, because an upstream source package will often build and install more than is required by the LSB, and these need to be pruned from the final tree.
Since the LSB team has already anticipated several uses for the LSB-si that require more than the core set, there exists a fourth phase that builds add-on bundles that can be installed on top of the base LSB-si bundle to provide additional functionality, as shown in Figure 12.1. There are currently three subphases of the fourth phase: the first builds additional tools required for running the lsb-runtime-test suite on the LSB-si, the second builds additional binaries to make a bootable system, and the third builds additional binaries to make a User-Mode Linux system. The fourth phase is built by the second phase build environment just as the third phase is, and is completely independent of the third phase. That is, given a completed second phase, one could start a fourth phase build without ever building the third phase, and it would work fine. It is likely that additional fourth-phase subphases will be added in the future.
12.4.1 Sample Implementation Build Process
The source code for the LSB-si Development Environment can be obtained from the LSB CVS tree. The code can be checked out in several ways: as a snapshot, either by release tag or by date, or as a working CVS directory (even if you're not an LSB developer, a working directory lets you pick up new developments more quickly with a "cvs update"). For an example using a release-tag snapshot, see the build instructions in Section 12.4.2.
You can browse the CVS tree Web interface to determine the available release tags.
You will also need to check out (or export) the tools/nALFS directory to get the build tool. Again, see Section 12.4.2 for an example.
Source code for the patches to the base tarballs is in the CVS tree in si/build/patches. These patches should be copied to the package source directory. The base tarballs must be obtained separately. Once the build area has been configured, a provided tool can be used to populate the package source directory.
The same tool (extras/entitycheck.py) can be used to check that all the necessary files are present before starting a build. With the -c option, it performs a more rigorous test, checking md5sums rather than mere existence. Every effort has been made to describe reliable locations for the files, but sometimes a project moves an old version aside after releasing a new one (if a project has a history of doing so, the location where old versions are placed is probably already captured). The packages are also mirrored on the Free Standards Group Web site. Still, retrieval sometimes fails; entitycheck.py will report any missing files, and the expected locations are listed in extras/package_locations, so it's possible to fetch the missing packages manually.
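Concretely, the stronger check might be run like this from the top of the checked-out build tree (the invocation mirrors the -f form used in Section 12.4.2; the -c behavior is as described above):

```shell
# Verify not only that all package tarballs and patches are present,
# but also that their md5sums match what the build expects:
python extras/entitycheck.py -c
```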
12.4.2 Sample Implementation Build Steps
Obtain LSB-si sources from CVS:
$ export CVSROOT
$ CVSROOT=":pserver:firstname.lastname@example.org:/cvsroot/lsb"
$ cvs -d $CVSROOT export -r lsbsi-2.0_1 si/build
Use -D now instead of the release tag to grab the current development version.
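With the same CVSROOT as above, that would look like:

```shell
# Export the current development tree rather than the 2.0_1 release
# (requires network access to the CVS server):
cvs -d $CVSROOT export -D now si/build
```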
Configure the build environment. There's a simple configuration script that localizes the makefile, some of the entities, and other bits. The main question is where you're going to build the LSB-si. The default location is /usr/src/si. Make sure the build directory exists and is in a place that has enough space (see the note at the end of this section).
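A quick way to confirm the build area has room is a df check. This is a small sketch: the path assumes the default build location, and the 1.4GB figure comes from the space requirement noted at the end of this section.

```shell
# Report free space (in 1K blocks) on the filesystem that will hold the
# build area; the build needs roughly 1.4GB (about 1500000 1K blocks).
BUILDDIR=${BUILDDIR:-/usr/src/si}
# df on the directory if it exists, otherwise on its nearest existing parent
target=$BUILDDIR
while [ ! -d "$target" ]; do target=$(dirname "$target"); done
avail=$(df -Pk "$target" | awk 'NR==2 {print $4}')
echo "free 1K-blocks on $target: $avail"
```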
$ cd src/si
$ ./Configure
Answer the questions.
From here on, you'll need to operate as superuser, as the build process does mounts and the chroot command, operations restricted to root in most environments.
Copy patches to their final destination (substitute your build directory if not using the default):
# cp patches/* /usr/src/si/packages
Check that the package and patch area is up to date:
# python extras/entitycheck.py -f
You're now ready to build the LSB-si:

# make
If there's a problem, make should restart the build where it failed. If the interruption happened during the intermediate LSB-si phase, it is likely that the whole phase will be restarted; this is normal.
Building the add-on packages lsbsi-test, lsbsi-boot, and lsbsi-uml requires an additional step. This step is not dependent on the LSB-si (phase 3) step having completed, but it is dependent on the intermediate LSB-si (phase 2) step being complete:
# make addons
Now you can build the UML installable package (IA-32 build host or target only). This step is dependent on all of the other phases, including the add-ons, having completed:
# cd rpm
# make
The build takes a lot of space (around 1.4GB), and may take a lot of time. A full build on a fast dual-CPU Pentium 4 takes about 2.5 hours; depending on architecture, memory, and processor speed, it may take as much as 20 hours.
If the build stops, voluntarily or through some problem, there is a fair amount of support for restarting it, but this is not perfect. In particular, be cautious about cleaning out any of the build areas, as the package directory may still be bind-mounted. Each of the team members has accidentally removed the packages directory more than once, causing long delays while it was refetched (it pays to keep a copy of this directory somewhere else). Be careful! The makefile has a clear_mounts target that may be helpful.
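For example, before removing anything by hand, one might check /proc/mounts for leftover bind mounts and then let the makefile clean them up; the grep pattern here is illustrative:

```shell
# See whether the packages directory is still bind-mounted anywhere:
grep packages /proc/mounts || echo "no packages mounts found"
# Let the makefile unmount whatever the build set up (run from the
# top of the build tree, as root):
make clear_mounts
```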