[nSLUG] Why you are not seeing software ported to Linux

Daniel Morrison draker at gmail.com
Wed Jun 22 18:44:28 ADT 2011


Hi,

Miguel's post is close to 10 years old (?). Miguel also has a
reputation for saying things very bluntly, perhaps even overdoing
things a bit for effect. So we should not spend too long arguing the
details of his writing.

On 22 June 2011 16:10, D G Teed <donald.teed at gmail.com> wrote:
> Unlike something like Solaris or Windows, in Linux
> you are expected to add another package to provide the library
> support, and then it works (e.g. ia32-libs).

Sometimes it seems to me like many Linux distros are _purposefully_
making things hard for people. Both Redhat/CentOS and Ubuntu have this
strange habit of splitting packages into separate 'binary' and
'development' pieces. I can't count the number of times I've failed to
install something because, although the binaries and libraries are
present, the development headers are missing. With Slackware you don't
have this problem: the recommended default is to install everything in
the official distro, so you're never missing anything.

Granted, Slackware64 specifically does NOT include 32-bit
libraries. And I'm not suggesting Slackware as the best choice for Joe
Blow computer user.
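
When I do hit the missing-headers problem on Redhat/CentOS or Ubuntu,
the fix is usually just one more package. The package names below are
only an illustration, not a case from this thread:

  # RHEL/CentOS: the headers live in a *-devel package
  yum install openssl-devel

  # Debian/Ubuntu: the headers live in a *-dev package
  apt-get install libssl-dev

The annoyance is having to figure out *which* package, after
./configure has already fallen over.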

I think it comes down to: why do you care about binary compatibility?
The very thing that makes Linux strong is the widely varying group of
developers in a rich "open source ecosystem". Unfortunately this
causes a lot of trouble on the compatibility front. If you want a
magic bullet Linux system to compete readily with Windows or MacOS...
well, Linux is just not like that.

> I can provide a more recent small example of a change that
> could mess up things.  Grub 2 changed the partition numbering
> so the first partition is now 1, not 0.  Oddly, the drive numbering still
> starts at 0.  If this kind of engineering change is done
> elsewhere, say in aircraft displays, people die.

Even worse than that: e2fsprogs changed the default inode size from
128 to 256 bytes without much in the way of warning. GRUB 2 can handle
the new size, but the original GRUB (still used by RHEL/CentOS 5)
cannot. So you shrink your laptop's WinXP partition with GParted,
create a new Linux filesystem, and install CentOS on it. All proceeds
fine until you reboot and discover there is no GRUB. You have to wipe
the partition and reinstall to fix it!
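
If you know about the mismatch ahead of time you can work around it by
asking for the old inode size when you create the filesystem. A rough
sketch, with the device name as a placeholder (check the mke2fs man
page on your system):

  # see what an existing filesystem was created with
  tune2fs -l /dev/sda3 | grep 'Inode size'

  # make the new ext3 filesystem with 128-byte inodes, so the legacy
  # GRUB shipped with RHEL/CentOS 5 can still read it
  mkfs.ext3 -I 128 /dev/sda3

Of course, you usually only learn this after the first reboot into
nothing.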

> If I contrast that with my experience installing Solaris 10 on
> ~10 year old Sun equipment, I've had no hardware/kernel
> problems, going back to an Ultra 2 from 1999.

Apples and oranges. Sun (like Apple) has had complete control over the
hardware platform. Linux never did.

> Or another example, the inability for the Debian 6 Linux ISO
> to boot from CDROM on any Sparc64 based system I've
> tried.  The assumption amongst users is that netboot
> will do.  That might get you installed, but I can't see
> using netboot for rescue work.  Again, "I got it working"
> is the basic requirement.

There is limited time for open source development. Who runs Sun
hardware anymore? The distro is oriented towards x86_64. If you're
using unusual hardware you have to expect to jump through a few hoops.

(Also: you can't see using netboot for rescue work? Why ever not? Once
you go through all the effort to set up netboot for an install, it's
fabulous to keep it around for subsequent rescues. I never burn CDs
anymore... too much hassle when my netboot server is just a few
keystrokes away...)
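
(For anyone who wants to try it: the smallest netboot setup I know of
for x86 boxes is dnsmasq doing DHCP and TFTP together. Just a sketch;
the addresses, paths and PXE loader name are examples, adjust to your
network:

  # /etc/dnsmasq.conf fragment
  enable-tftp
  tftp-root=/srv/tftpboot
  dhcp-range=192.168.1.100,192.168.1.200,12h
  dhcp-boot=pxelinux.0

Drop pxelinux.0 plus your distro's netboot kernel and initrd under
/srv/tftpboot and you have an installer and a rescue environment on
tap. Sparc boxes netboot a bit differently, via TFTP driven from
OpenBoot rather than PXE, but the idea is the same.)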

> When a CDROM won't boot Solaris or Windows, I know the
> drive or media is bad.

You have an 'or' in there, which means you don't really know anything.
(The computer may also not have 'boot from CDROM' enabled. Or you
forgot to press the any key. Or...) The only way to know the media is
good is to run the checksum.
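
Something along these lines, where the filename is a placeholder for
whatever image you downloaded:

  # compare the downloaded image against the published checksum
  md5sum your-image.iso

  # verify the burned disc by reading back exactly the image's length
  dd if=/dev/cdrom bs=2048 \
     count=$(( $(stat -c %s your-image.iso) / 2048 )) | md5sum

If both sums match the one published on the mirror, then (and only
then) can you start blaming the drive, the firmware, or the OS.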

>> > Why can a Windows software product released in 1998 still work today?

A rather sweeping statement of questionable veracity.

>> > Why do Sun sparc binaries from ages ago still run today?

If you can find a sparc to run them on!

The serious answer is this: Sparc was a mature architecture from a
vendor with years of Unix experience when Linux was still a 0.x
release from an unknown grad student in Finland. You may as well
wonder why it is that you can plug in a vacuum cleaner from the 60s
and it still works on 120V AC, but you can no longer easily buy
batteries for a 1990s mobile phone. The mains power specification
matured and stabilized decades ago, while mobile phone batteries have
kept changing with every model.

> All platforms share examples of programs that still work, and
> those which are busted by OS changes.  But in my experience
> those projects developed on open source are more likely to be broken
> if not run within a narrow range of supporting software.

Strange experience. In my experience, old binary-only proprietary
software products are more likely to be irreparable. Open source
projects often have troubles too, but at least I can recompile them.

> I got into this topic area while looking at why I can't upgrade pear
> on Redhat 5, which is a requirement of running Horde 4.
> Oddly, Debian 5, having a slightly older php release
> than Redhat 5, can upgrade pear to current release.

You haven't given any details. It's hard to believe that a
distribution issue is preventing the installation of new software.
Sounds more like it's a PHP version issue. If you're complaining about
pre-compiled packages made available by the distro authors, that is
one thing, but you can always compile your own server software...
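
For example, here is a rough sketch of rolling your own PHP (and with
it a current pear) into /opt, leaving the distro packages untouched.
The version number and prefix are placeholders:

  tar xzf php-5.3.x.tar.gz && cd php-5.3.x
  ./configure --prefix=/opt/php53 --with-pear
  make && make install
  /opt/php53/bin/pear channel-update pear.php.net
  /opt/php53/bin/pear upgrade PEAR

Then you can install Horde against that pear without touching
Redhat's packaged php at all.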

>> > As long as there is only compatibility
>> > by moving everything forward, much of open source
>> > works like a very large home brew project, where the
>> > minimal goal is "I got it to work".  Of course projects
>> > very often exceed that, stating and achieving goals beyond
>> > the minimum, but in terms of a minimum standard common
>> > to all open source, this seems to be it.
>>
>> That's a crazy comparison. There is a hell of a lot of crap proprietary
>> software out there written for proprietary platforms that barely passes
>> that minimum criteria. And *we* get blamed for being amateurish?
>
> Yes, there are software companies releasing junk that barely works,
> but we're talking about the standards around the platform here.

You've lost me here. I thought you were talking about "much of open
source works". Comparing that to "much of shareware works" and "much
of proprietary works" seems perfectly reasonable.

> Windows has the Windows installer service.  A developer can
> use an installer based on that and it does predictable things
> following the requirements.   Really the developer
> doesn't need to know the details of how - it just gives them
> a clean result.
>
> Contrast that with how I get hardware support for my nVidia card
> (I want twinview working).

I can find exceptional packages for Windows that bypass the Windows
installer, too. Surely you are aware of the uproar caused by the
binary-only nVidia graphics drivers for Linux. By picking such a
controversial package (one of perhaps only three, the others being the
AMD/ATI graphics drivers and Adobe Crash(TM) products) you are skating
on very thin ice. There are dozens, maybe hundreds, of major
third-party software packages that are available as standard RPM/DEB
packages for the system installer to process.

> With a more standards based framework, these interventions
> would not be necessary.  nVidia could write a installer that
> worked on all Linux platforms.  That is what ISVs would prefer
> to see, but it is a gap.

I don't agree. nVidia already *could* provide a standard RPM and/or
DEB package, which would work with the system's installation scripts
to automatically install prerequisites. But they don't. Blame nVidia,
not Linux distributions!
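
They could even publish their own yum/apt repository so the system
tools handle the dependencies and updates. A sketch of the yum side;
the repository URL and package name are made up for illustration:

  # /etc/yum.repos.d/vendor.repo
  [vendor]
  name=Vendor packages
  baseurl=http://repo.example.com/el5/x86_64/
  gpgkey=http://repo.example.com/RPM-GPG-KEY-vendor
  gpgcheck=1
  enabled=1

  # then the usual installer does the rest, prerequisites included
  yum install vendor-driver

That is exactly the "predictable installer" experience being asked
for, and nothing in the distros prevents a vendor from providing it.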

Well, I think we're well off-topic now. The original post, as I read
it, was meant to highlight an essay arguing that a stable API and a
stable ABI are very important if you want to avoid alienating
third-party vendors. I think that is almost certainly true, and
difficult for anyone to argue against.

The rest of this thread seems to be arguing about whether or not the
various Linux distros and projects abide by this, or not.

So what is it that you're trying to say? That Linux and open source
are generally "not good enough" in this area? That volunteer-run
projects like Debian are less good at it than commercially run
projects like Ubuntu and RHEL? That in a perfect world we'd never have
to recompile anything?

It's a good issue to bring up, and a good reminder to "code for the
future" and to avoid releasing code built on stop-gap hacks that are
not future-proof. Apart from that -- I have plenty to bitch about in
the Linux world too, what with the binary/devel package idiocy I
mentioned earlier, the crazy "upstart" system that's made init
completely opaque, and horrible tools like rpm that don't adhere to
usability standards. But I'm sure glad these open source tools exist,
warts and all, and that I'm not stuck in a monolithic MS Windows
world.

-D.


