[nSLUG] Why you are not seeing software ported to Linux

D G Teed donald.teed at gmail.com
Wed Jun 22 16:10:55 ADT 2011


On Wed, Jun 22, 2011 at 2:05 PM, Ben Armstrong
<synrg at sanctuary.nslug.ns.ca> wrote:

> On 06/22/2011 11:04 AM, D G Teed wrote:
> > On Wed, Jun 22, 2011 at 7:35 AM, Ben Armstrong
> > <synrg at sanctuary.nslug.ns.ca> wrote:
> >     libc5 to libc6? You've *got* to be kidding ...
> >
> >
> > He did say "and not alienate them".
> >
> > Care to elaborate?  I'm not defending the blog, and
> > parts of it are historical but I think there is
> > something real about this perception from ISVs.
>
> You know the first release of Debian replacing libc5 with libc6 was in
> Debian 2.0 "Hamm" in 1998?  Debian even continued to include libc5 right
> through Debian 3.1 "Sarge" which wasn't discontinued until 2008. This is
> an example of "alienating" ISVs? Debian bends over backwards to continue
> to remain compatible, e.g. through the 'oldlibs' section of the archive,
> and with ia32-libs so that 32-bit binaries can continue to work on
> 64-bit systems.
>

No, I meant that a response like "give me a break" with no further
content is an example of alienating people.  But it was written early
in the morning, when the world can appear to be in a mess,
or one is in a hurry.

Yes, I saw that the examples given were dated, but it was
nevertheless a perception from a few years back, when talk of Linux
on the desktop was ramping up.  Optics like this sometimes stick.
Someone having trouble in Linux might only be doing QA for the ISV,
and not know the internals of the OS.  Unlike Solaris or Windows, in
Linux you are expected to add another package to provide the library
support, and then it works (e.g. ia32-libs).  That fix isn't obvious
to someone new to a particular Linux distro.  When googling
libc5/libc6 I did see some other open source devs complaining about
Linux changing the map every two years.  However, that was then.
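For the ia32-libs case mentioned above, on the 64-bit Debian releases
of that era it was typically one extra package (a sketch; newer
releases have since replaced ia32-libs with multiarch):

  # 32-bit compatibility libraries on amd64 (Debian 5/6 era)
  apt-get install ia32-libs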

I can provide a more recent small example of a change that
could mess things up.  GRUB 2 changed the partition numbering
so the first partition is now 1, not 0.  Oddly, the drive numbering still
starts at 0.  If this kind of engineering change is done
elsewhere, say in aircraft displays, people die.
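To make the numbering change concrete, a minimal sketch (the device
names here are only illustrative):

  # GRUB legacy (menu.lst): partitions count from 0
  root (hd0,0)

  # GRUB 2 (grub.cfg): drives still count from 0, partitions from 1
  set root=(hd0,1)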

In open source, we tend not to mind those changes too much, and
instead we try to be helpful by pointing out the potential pitfalls
to others.

> The Linux kernel is another example: while the ABI does change over
> time, with things constantly being added, they're quite conservative
> about making changes that would remove old things. When they do, there's
> usually just a smallish userspace around it that needs to be adjusted to
> deal with the changes.
>
> I think the kernel is doing OK compared to other projects.

However, I did have a problem booting an IBM x345 with
Debian 6's 2.6.32 kernel last week.  The previous kernel
from Debian 5 (2.6.24) booted fine on the same machine.
The new kernel required some esoteric parameters to work:

rootdelay=9 scsi_mod.scan=sync

This isn't the first time I've needed rootdelay on older IBM
equipment.
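For anyone hitting the same thing, a rough sketch of making those
parameters persistent (assuming the GRUB 2 setup Debian 6 installs by
default):

  # /etc/default/grub
  GRUB_CMDLINE_LINUX="rootdelay=9 scsi_mod.scan=sync"

  # then regenerate the boot configuration
  update-grub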

Contrast that with my experience installing Solaris 10 on
~10-year-old Sun equipment: I've had no hardware/kernel
problems, going back to an Ultra 2 from 1999.

Or another example: the inability of the Debian 6 ISO
to boot from CDROM on any Sparc64-based system I've
tried.  The assumption amongst users is that netboot
will do.  That might get you installed, but I can't see
using netboot for rescue work.  Again, "I got it working"
is the basic requirement.

When a CDROM won't boot Solaris or Windows, I know the
drive or media is bad.

> > Why can a Windows software product released in 1998 still work today?
> > Why do Sun sparc binaries from ages ago still run today?
>
> I have binaries on my oldest Debian system in /usr/local/bin dating back
> to 1997 that still work today. I'm sure there are many other examples in
> existence today.
>
> But the world marches on. Things do change. And old software that used
> to work in 1998 may *not* work today because of changes to the systems
> it runs on. Certainly there is a drive in the Windows software-using
> world to be on a constant upgrade treadmill. Is perfectly enduring ABI
> compatibility for timespans exceeding a decade really the holy grail you
> make it out to be?
>

I never spoke of the Holy Grail.  I'm just exploring the topic of
ABI and API standards.

All platforms have examples of programs that still work, and of
programs that have been broken by OS changes.  But in my experience,
projects developed on open source are more likely to be broken
if not run within a narrow range of supporting software.

I got into this topic area while looking at why I can't upgrade pear
on Redhat 5, which is a requirement for running Horde 4.
Oddly, Debian 5, which has a slightly older php release
than Redhat 5, can upgrade pear to the current release.
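For reference, the upgrade I mean is roughly this (a sketch; Horde 4's
installer checks for whatever minimum PEAR version it needs):

  pear channel-update pear.php.net
  pear upgrade PEAR

That is the step that falls over for me on Redhat 5 but goes through
on Debian 5.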

Overall, I find Debian to be a more flexible Linux than Redhat,
partly because it has a wide breadth of packages which have
come through a single large project.  With Redhat, unless one
uses it purely for something developed in-house, there will be
times when we need rpmforge, remi and other repositories.
We don't know whether they will always play together well, or what
assumptions they make about how much we will be using from the
alternate repository (some, like Jason's Utter Ramblings,
will replace the entire LAMP stack).  When using alternate
repositories, we don't know whether they will still be around, or
still offering the same materials, if we need to do a system restore.


> > As long as there is only compatibility
> > by moving everything forward, much of open source
> > works like a very large home brew project, where the
> > minimal goal is "I got it to work".  Of course projects
> > very often exceed that, stating and achieving goals beyond
> > the minimum, but in terms of a minimum standard common
> > to all open source, this seems to be it.
>
> That's a crazy comparison. There is a hell of a lot of crap proprietary
> software out there written for proprietary platforms that barely passes
> that minimum criteria. And *we* get blamed for being amateurish?
>

Yes, there are software companies releasing junk that barely works,
but we're talking about the standards around the platform here.

Windows has the Windows Installer service.  A developer can
build an installer on top of it, and it does predictable things,
following the requirements.  Really, the developer doesn't need to
know the details of how; it just gives them a clean result.

Contrast that with how I get hardware support for my nVidia card
(I want TwinView working).  nVidia provides a tarball; fortunately
I know how to find it and use it.  I run their script and it fails
because I don't have the kernel headers package it depends on.  I
install the missing package from Debian and try again.  Xorg now
prefers to run without an xorg.conf.  Fortunately I kept a copy from
when Xorg did require a config file, and then I get TwinView set up.
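Roughly, the steps look like this (a sketch; the installer filename
varies by driver version, and a real TwinView setup usually also
wants MetaModes or similar layout options):

  # the kernel headers the nVidia installer needs on Debian
  apt-get install linux-headers-$(uname -r)

  # run the downloaded installer
  sh NVIDIA-Linux-x86_64-xxx.xx.run

  # then in the Device section of /etc/X11/xorg.conf
  Section "Device"
      Identifier "nvidia"
      Driver     "nvidia"
      Option     "TwinView" "true"
  EndSection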

With a more standards-based framework, these interventions
would not be necessary.  nVidia could write an installer that
worked on all Linux platforms.  That is what ISVs would prefer
to see, but right now it is a gap.

Many vendors take the /opt/vendorname approach for
software installs.  I have desktop applications like
Thunderbird installed in that manner, and I use their
built-in GUI wizard to get updates.

The home brew approach of open source can be good.  It
puts the user in control of their system, and sometimes lets them
solve things in ways a vendor might block or make difficult
to cope with.  There are lots of advantages.  From the engineering
perspective there are also difficulties.

--Donald