[nSLUG] General Linux Question

Donald Teed dteed at artistic.ca
Sun Mar 23 23:29:44 AST 2003


On Sun, 23 Mar 2003 bdavidso at supercity.ns.ca wrote:

> On Sun, 23 Mar 2003, Donald Teed wrote:
> 
> > I asked in this group awhile ago whether there were any Linux
> > distros that handled everything in the so called "standard way"
> > for building from source.  When it comes to installation locations,
> > config file locations, etc., all ready to be used with the Make
> > from the software source, there were none.  The so called standard
> > is a myth.  Each distro has their own standard, loosely coupled
> > around a few key varieties that many are based on.
> 
> I don't disagree with your statement about the lack of a real standard.
> I also don't consider that a big problem most of the time.  However, your
> statement above suggests a misunderstanding of how source packages are
> built.  First of all, they don't come with a Makefile.  They come with a
> Makefile.in and a configure script.  The configure script builds the
> Makefile after checking the build environment for the presence and
> location of various needed components (supporting programs, libraries,
> headers, etc.), and allows you to specify install locations (or accept
> defaults).  It's a wonderful system that allows the administrator (or even
> user) to configure the package to suit their needs.  It's part of what
> allows cross-compilation, test environments, etc.  You want package foo
> version 1.0 to be "production" while you test foov2?  No problem.

No misunderstanding about what make does.  I've followed whatever
instructions come with the code.  If they say to use ./configure,
I do it, and I can see it testing the compiler, libraries, etc.
Usually the configure step ensures the application will compile,
but that is only part of the battle.
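
For example, an autoconf-based tarball will usually let you steer the
install locations at configure time.  Roughly like this, with the
package name and paths made up:

    ./configure --prefix=/usr/local/foo \
                --sysconfdir=/etc/foo \
                --localstatedir=/var/foo
    make
    make install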

I have had several experiences trying to make some package X work
with Redhat or Debian from source.

In a typical attempt, in my experience, I wanted to have one
database server listening on the usual port, or one particular
version of PHP linked in with Apache.  That can be tricky to do when
all of the package's documentation assumes it installs under
/usr/local/packagename, while something like Redhat puts some
executables in /bin, more files in /usr/lib, more in /usr/share, and
so on.  You read the instructions from the package vendor and they
refer to files under /usr/local/packagename.  When that application
has problems, you wonder whether the package provided with the Linux
distro left files in various locations (/var, /etc, deep under
/etc/somewhere, etc.) that might still be having some impact.  You
might also find that troubleshooting resources mentioned by the
package vendor don't exist, or that the file used to configure the
software differs from one Linux distro to the next.  Heck, even
modules.conf is a file that should not be edited directly in Debian -
a very different way of treating the file than in Redhat.
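
One thing that at least helps with the scavenger hunt is asking the
package manager where it put things; something along these lines
(package names are only placeholders):

    rpm -ql httpd        # Redhat: list every file the package installed
    rpm -qc httpd        # just its config files
    dpkg -L apache       # Debian: list the files a package owns
    dpkg -S /usr/bin/perl   # which package claims a given file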

One time, in Redhat 8, I managed to completely screw up Perl in my
attempts to go back one version of mod_perl, using source from the
Perl website and Apache.  I don't completely understand the layout of
the files under /usr/lib/perl, but I could see that the result of my
compiling efforts was to install newer versions of only some of the
files in the target directories affected there.  The symptoms of the
botched perl install were bogus errors running perl code, such as
flagging errors in any subroutine defined with a parameter list.
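
In principle a self-built perl can be kept clear of the distro's
/usr/lib/perl by giving it its own prefix; only a sketch, and the
path is made up:

    sh Configure -des -Dprefix=/opt/perl
    make
    make test
    make install
    /opt/perl/bin/perl -v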

I wasn't able to back out the changes to the system by reinstalling
the perl packages with rpm --force.  Rpm also prevented me from
uninstalling perl.  Manipulating the file system so that a forced
install would give me a fresh /usr/lib/perl didn't help either.
I found it was quicker to reinstall with Redhat 7.3 than to reverse
engineer what screw-ups existed in the system.
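
For what it's worth, rpm can at least report which of a package's own
files no longer match its database, though it won't clean up extra
files that a source install dropped on top:

    rpm -V perl                    # list files that differ from the rpm database
    rpm -Uvh --force perl-*.rpm    # reinstall the package's own files only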

I could see why there were people who had sworn off Redhat.

> So the point is, ALL distros are capable of configuring, compiling, and
> running packages from source tarballs.  I really don't see what the
> problem is.  Either your distro supplies the package as a binary and you
> can upgrade to the latest version, or they don't and you can install from
> source and configure it any way you want.  Or you get the pre-configured
> source as supplied by your distro.

I've had good results working with .deb and .rpm files, including
source .debs that I compile myself.
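
The source .deb route usually looks roughly like this (package name
and version made up, and it needs a deb-src line in sources.list):

    apt-get source foo           # fetch and unpack the source package
    apt-get build-dep foo        # pull in its build dependencies
    cd foo-1.0
    dpkg-buildpackage -rfakeroot -us -uc
    dpkg -i ../foo_1.0-1_i386.deb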

Getting a package to work all by itself is typically not hard
(say, the Netscape browser).  It is when you want modules from
different packages to talk to each other that you need more intimate
knowledge of how the gears and cogs make that work, or you can save
yourself the headache and see if there is a package-managed solution
that fits your distro.  Again, learning how the pieces connect would
be easier if there were a good match between what the software vendor
documents and what the Linux distro actually provides.
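
The PHP-into-Apache case is a good example of this: built from
source, PHP has to be pointed at whichever apxs the distro shipped,
and its location moves around from distro to distro.  Something like
the following, where the paths are only guesses:

    ./configure --with-apxs=/usr/sbin/apxs \
                --with-config-file-path=/etc \
                --with-mysql
    make
    make install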






