
Faster Linux World Domination

This was originally posted to Linus and the rest of the Linux kernel mailing list, and then to Tom’s Hardware.

“The future is open source everything.”
—Linus Torvalds

Dear LKML;

I have written a book that makes the case for Linux world domination. I find it interesting that the idea of Linux on the desktop is met with either yawns or derision. I think the reaction depends on whether you see Linux as a powerful operating system built by a million-man army, or as one filled with bugs and missing the cool stuff like speech recognition.

The points I wrote should be obvious to you all, but there are some pages on how to have Linux succeed faster that I thought I would summarize here. Given this is such a high-volume list, I figured it cannot decrease the signal-to-noise ratio very much! 😉 I didn’t see anything in the LKML FAQ disallowing such emails.

I’ve been using Linux since mid-2005, and considering how much better things are now compared to then, it surely is an interesting time to be involved with free software. From no longer having to compile my Intel wireless driver or hack xorg.conf, to the 3-D desktop, to better Flash and WMV support, to the countless kernel enhancements like OSS -> ALSA and better suspend/resume, things are moving along nicely. But this is a constant battle, as there must be 10,000 devices, with new ones arriving constantly, that all need to just work. Being better overall is not sufficient; every barrier needs to be worked on (http://www.joelonsoftware.com/articles/fog0000000052.html).

The Linux kernel:

The lack of iPod and iTunes support on Linux is not a bug the kernel alone can solve, but Step 1 of Linux World Domination is World Installation. Software incompatibilities will be better solved as soon as the hardware incompatibilities are better solved. The only problem you can’t work around is a hardware problem.

If you hit a kernel bug, it is quite possible the rest of the free software stack cannot be used. That is generally not the case for other software. Fixing kernel bugs faster will increase the pace of Linux desktop adoption, as each bug is a potential barrier. If you assume 50M users running Linux and each bug typically affecting 0.1% of them, that is 50,000 people per bug. Currently, the Linux kernel has 1,700 active bugs (http://tinyurl.com/LinuxBugs). Ubuntu has 76,371 bugs (https://launchpad.net/ubuntu/+bugs). I think bug-count goals of some kind would be good.

In general, Linux hardware support for the desktop is good, but it could get better faster. From Intel, to Dell, to IBM and Lenovo, to all of their suppliers, the ways in which they are all over-investing in the past at the expense of the future should be clear; the Linux newswires document them in detail on a daily basis. I was told by an Intel kernel engineer that his company invests 1% of the resources in Linux that it does in Windows. It is only because writing Linux drivers is so much easier that Intel is seen as quite a credible supporter of it. The few laptops from Dell that even ship with Linux still contain proprietary drivers, drivers that aren’t in the kernel, and so forth.

Peter Drucker wrote: “Management is doing things right, leadership is doing the right things.” Free software is better for hardware companies because it allows more money to go into their pockets. Are they waiting for it to hit 10% market share first? I recommend senior IBM employees be forced to watch their own 2003 Linux “Prodigy” video (http://www.youtube.com/watch?v=EwL0G9wK8j4) over and over, like in A Clockwork Orange, until they promise free, feature-complete drivers for every piece of hardware in the kernel tree before the device ships. How hard can it be to get companies to commit to that minuscule technical goal? In fact, it is hard to imagine being happy with a device without having a production Linux driver to test it with.

It is amazing that it all works as well as it does right now given this, and that is a testament to the generally high standard of many parts of the free software stack, but every hardware company could double its Linux kernel investment without breaking a sweat. The interesting thing is that PC vendors that don’t even offer Linux on their computers have no idea how many of their customers are actually running it. It might already be at the point where it would make sense for them to invest more, or simply push their suppliers to invest more.

There are more steps beyond Step 1, but we can work on all of them in parallel.

And to the outside community:
* Garbage collection is necessary but insufficient for reliable code. We should move away from C/C++ for user-mode code. For new efforts, I recommend Mono or Python. Moving to modern languages and fewer runtimes will increase the amount of code sharing and the pace of progress. There is a large bias against Python in the free software community because of performance, but it is overblown because there are multiple workarounds. There is a large bias against Mono that is also overblown.
* The research community has not adopted free software and shared codebases sufficiently. I believe there are enough PhDs today working on computer vision, but there are 200+ different codebases (http://www.cs.cmu.edu/~cil/v-source.html) plus countless proprietary ones. I think scientists should use SciPy (see the short sketch after this list).
* I don’t think IBM would have contributed back all of its enhancements to the kernel if it weren’t also a legal requirement. This is a good argument for GPL over BSD.
* Free software is better for the free market than proprietary software.
* The idea of Google dominating strong AI is scarier than Microsoft’s dominance with Windows and Office. It might be true that Microsoft doesn’t get free software, but neither do Google, Apple, and many others. Hadoop is good evidence of this.
* The split between Ubuntu and Debian is inefficient as you have separate teams maintaining the same packages, and no unified effort on the bug list. (http://keithcu.com/wordpress/?page_id=558)
* The Linux desktop can revive the idea of rich applications. HTML and Ajax keep improving, but the web defines the limits of what you can do, and I don’t think we want to live in a world of only HTML and JavaScript.
* Wine is an important transitional tool that needs lots of work (http://bit.ly/fT3pXr)
* OpenOffice is underfunded. You wonder whether Sun ever thought it could beat Microsoft when it put only 30 developers on it, which is tiny by MS standards. Web + OpenOffice + a desktop is the minimum, but the long tail of applications that demonstrate the power of free software all need a coat of polish. Modern tools, more attention to detail, and another doubling of users will help. But for the big apps like OpenOffice, it will take paid programmers to work on those important beasts.
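
As a minimal sketch of the SciPy point above (the random image array and the Sobel-filter choice are purely illustrative assumptions, not anyone’s actual research code), a few lines of shared NumPy/SciPy can replace yet another private edge detector:

    import numpy as np
    from scipy import ndimage

    # Stand-in for a real grayscale image from a vision dataset.
    image = np.random.rand(64, 64)

    # Gradient magnitude via Sobel filters: one shared, tested implementation
    # instead of every research codebase carrying its own.
    edges = np.hypot(ndimage.sobel(image, axis=0), ndimage.sobel(image, axis=1))
    print(edges.shape, float(edges.max()))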

There are other topics, but these are the biggest ones (http://keithcu.com/wordpress/?page_id=407). I’ve talked to a number of kernel and other hackers while researching this, and it was enjoyable and interesting. I cite Linus a fair amount because he is quotable and has the most credibility with the outside world 😉 Although Bill Gates has said some nice things about Linux as well.

If you want to respond off-list, you can comment here http://keithcu.com/wordpress/?p=272.

Thank you for your time.

Keep at it! Very warm regards,

-Keith

41 Comments

  1. The next sentence says: “If you have some ideas or bug reports to contribute, this is the place.”

    I would put my email under the category of ideas to contribute.

  2. I’m completely confused by the mention of iPod and iTunes in this letter. It seems random and pointless.

    Anyway, the lack of iPod support (BTW, libgpod?) or iTunes is not the fault of the Linux kernel but the fault of Apple. They purposefully designed their devices to be closed, DRM-laden and vendor-locked. If you don’t like that, don’t buy their products. Almost every other portable music player will let you simply mount it as a standard USB Mass Storage device, copy over your non-DRMed files, and enjoy the rest of your day.

    IMO, anyone who is a supporter of Linux or believer in freedom should not give money to Apple… and I think their products are ugly and overpriced anyway so it’s no big loss to me personally. 🙂

  3. Hello Paul;

    I mention the iPod because I read on Wikipedia that there are 220 million of them. I see that as a significant hardware barrier to using Linux. For best results, each barrier to Linux needs to be removed, especially the big ones. I think users should be able to run iTunes via WINE; very little of the code WINE needs is iTunes-specific.

    I agree that Apple doesn’t respect the user’s freedom. But that doesn’t mean Linux shouldn’t support iPods. Linux can play DVDs which have DRM.

    • Hi,

      Of course, if someone owns an iPod and wants to use Linux, they will benefit if the two work together.

      My point was that if you’re looking at it as Linux not supporting iPods and iTunes, you’re looking at it backwards. It is a matter of Apple not supporting Linux. And why would they? Linux is a direct competitor to their own OS.

      Apple may try to give the whole California hippie image in their advertisements but they behave like fascist leaders. It’s all Apple, all the time, and nothing else exists. You should only buy Apple products and can use only Apple software with your Apple device. The only appropriate response to this ridiculous behavior is, of course, undying devotion to the brand. Facts and reality are not as important as the belief that there is only one true technology brand and its name is Apple. All others are inferior, especially those with their crazy ideas about freedom, diversity and public sharing of knowledge.

      The only reason why iPods don’t work easily with Linux is because Apple intentionally crippled them and made them dependent on their software (which they then don’t provide in Linux). Almost every other PMP works fine in Linux, including all of those cheap generic Chinese ones, without any special drivers.

      I avoid buying things that I know won’t work with Linux. Of course it is sometimes unavoidable, but in general I see no benefit in rewarding a company who doesn’t care about Linux or at least standards that are usable across many platforms. Apple purposefully made their devices not work with Linux. I applaud the efforts of people who are able to reverse-engineer these kinds of closed devices and add drivers to Linux, but sincerely wish that time could be spent improving other, more open areas.

      By the way, in addition to libgpod, iTunes works in WINE, according to some reports:
      http://appdb.winehq.org/objectManager.php?sClass=application&iId=1347

      With regard to DVDs, it’s probably illegal to watch them on Linux in the USA. The DVD-Video and CSS copy protection are not open standards and were reverse-engineered and cracked in order to be implemented on Linux. (The DVD on-disc format in general is an ISO standard.) And now watch history repeating itself with Blu-ray movies, HDCP and these other crazy schemes.

      • I agree Apple should support Linux. Dream on! But iTunes on Linux needs to happen. Every barrier needs to be worked on.

        I agree Apple is a bad company. If free software had taken over sooner, a lot of these problems wouldn’t have happened. But we are letting those old barriers stand, and Linux’s irrelevance creates new barriers. We need to support the iPod, the BlackBerry, everything, every time.

        The thing is that we have the people.

  4. I presume posting on the LKML was a publicity stunt designed to spam your ideas out to as many people as possible? Really, I don’t think your post was relevant to everyday kernel work.

    As an aside, although I work on Linux myself and use Linux extensively, I don’t want it to take over the world; that would just lead to a dull monoculture without competition between projects or sufficient diversity to survive very long.

    • Hi Matthew;

      I don’t think of it as a stunt. The point of email is to share ideas! And in this are definitely thoughts on how to make the kernel better, and everyday ways of doing that.

      And you are wrong that you cannot have a ton of richness and diversity in a monoculture. Look at how many file systems Linux supports and how many articles Wikipedia has.

  5. I don’t think posting this to LKML was necessary or relevant. I can’t think of anything that the kernel is necessarily holding back these days. I and many others use Linux for day-to-day productivity, fun, and work. There is work to be done across the whole stack, but you can say the same about any industry. It goes without saying that things will get better with time.

    The Linux hardware inferiority thing is a myth.

    • The email talked a lot about the Linux kernel, so I think it was relevant.

      Free software today is great, I agree. But it can and should get better. Hardware companies provide lukewarm support today. Linux and the distros have too many bugs. And things are working inefficiently. And we need more Python!!

      • Since you are mentioning Python again, I didn’t get the part about a bias against it.

        I think that Python is a favored language on Linux and in the FLOSS developer scene overall.
        It is almost certainly more widely used on Linux (and other Free Software operating systems) than on any other platform.

        • Hi Kevin;

          I agree Python is widely used on Linux. However, I have talked to many FOSS programmers who love the language but refuse to use it for “real” projects because they believe it is “just a scripting language” and has perf problems, etc. That sentiment is absolutely wrong.

          More Python ASAP is one of the best things the free software community can do.

  6. Talking about what language to use in user-level code is such an obvious off-topic thing to do on the kernel mailing list…

  7. Having a common bytecode interpreter such as Mono makes sense. However, Mono is not released under GPLv3, and Microsoft is one of the few companies with a large patent portfolio that has not made a legally binding promise not to enforce it against free software. Quite the contrary (just remember the recent TomTom lawsuit and the FAT32 patent). Simply trusting Microsoft is not an option.
    There are alternatives, most notably the Java Virtual Machine (JVM). I would also like to use the opportunity to recommend Ruby as a scripting language 😉

    • Hi Jan;

      Mono has a compiler as well as an interpreter. This is one of the great things about Mono. Mono is not released under GPL, but it is released under a widely accepted free software license. And Microsoft has made several non-enforcement promises about .Net.

      I think Sun screwed up with Java by not making it free from the beginning so I don’t recommend it for new projects. Sun is not investing very much in it either.

      Ruby is a good language as well, but Python has a much richer set of libraries so it is more efficient to just keep using Python.

      • Hi Keith,
        Microsoft’s Community Promise only covers the core of DotNET. And if you want to run Silverlight under GNU+Linux, you even need to download a proprietary codec package from Microsoft’s homepage. See the analysis of the FSF for detailed discussion. I hope for the best and plan for the worst.
        Regarding Java. Unfortunately the JVM was proprietary software for a long time. It is free software now and as far as I know the project is very active. I recently saw an interesting presentation about JRuby (Ruby running on the JVM).
        Regarding Ruby. IMHO it’s a well designed language with open classes, continuations, blocks, and an indentation-independent syntax B-) I just found an interesting comparison of Ruby vs Python on Stackoverflow.com which will hopefully provide me with other strong arguments 😉

        • Hi Jan;

          You don’t have to use Silverlight when using Mono. It is important to look at all of these big pieces separately. And I think we can win any legal battles. What really is new in MS’s VM?

          I have talked to Sun employees about the size of their investments, and it is not very big. JRuby isn’t that big a deal, and I’ll bet it is just one dev, as their Jython effort is. You can do everything in Python that you can do in Java without the enormous mess.

          I think Ruby is great, but that Python is good enough. It has a lot of momentum. Look at Google with their unladen-swallow JIT efforts.

          I read that language comparison and think they are deep in the weeds for the most part. Even assuming Ruby is better than Python, it doesn’t matter. We need to move away from C/C++, and either Python or Ruby is light-years better, so the difference between them is not a big deal.

          • Using the Silverlight codecs is not the problem. The problem is not having the license to distribute them. Regarding legal battles: unless you have deep pockets, you will be forced to settle out of court. You won’t even get to show that there is prior art against a patent!
            I don’t use the JVM myself at the moment. I just wanted to point out that there are alternatives to Microsoft’s CLR. There are the JVM (Sun), LLVM (Apple), GIMPLE (GNU C), and V8 (Google). The JVM especially has very good runtime optimisation.
            I agree that it is desirable to move towards dynamically typed languages. The problem is that it is difficult to design a programming language where code is easy for others to read and understand while offering powerful meta-programming at the same time. If code is not easy for others to read, your language will never get mainstream adoption. But if the meta-programming facilities suck, it will limit what you can do in a lifetime.
            I think I have pointed out Paul Graham’s article about the hundred-year programming language before. I also read about embeddable grammars some time ago. I think there are possibilities for fundamental advances which may make today’s boundaries between programming languages and code bases less important.

          • The silverlight codecs don’t really matter. There is lots of code out there for one reason or another that can just be ignored.

            Regarding legal battles: I don’t think an MS lawsuit is likely or that it presents a big legal risk.

            I’m familiar with all the runtimes you mentioned, although what is up with the Javascript reference! 😉 I also promote Python which is a different runtime from .Net. Mono is a good community-built runtime with a mix of features, performance, etc. And you can do dynamic typed languages in it and use LLVM with Mono.

            A programming language can’t completely solve the problem of readability. Spaghetti logic can exist in any language. I think Python is readable and rich enough. We might not need programming languages in the future, but we sure need to use more of our better ones today.

  8. Just wanted to thank you for posting this. I did not consider the message to LKML to be spam, though I hope it does not spiral out into a huge thread. Other lists like linux-elitists would be more appropriate for that.

  9. Great comment, but LKML is for kernel devs discussing kernel patches, etc. And you are preaching to the choir – I’m sure they all agree with you re: Linux on the desktop.

    I think WINE and/or VirtualBox will be the key for Linux on the desktop. That is what Mac users are already accustomed to doing: running Parallels or VirtualBox for their Windows software needs. If we think of it that way, Linux on the desktop is here now.

    • Hi Scott;

      I might be preaching to the choir, but I do think it is possible to go faster.

      You make a very good point that hardware companies like Dell could offer VirtualBox with Windows 7 pre-installed. But I also think WINE is great long-term because you don’t really need the entire OS to run many Windows apps. Your solution is a great option right now, though. I didn’t know that many Mac users did this.

  10. Bias against Mono is not going to go away until at least 2012, due to the legal issues there. You call it overblown, but a lot of us have been on the wrong side of the MP3 patents, which made our lives a living hell with distributions shipping without MP3 support because of them. The same could happen at the end of 2011, when the MS patent coverage for .NET development ends. “Microsoft has made several non-enforcement promises about .Net.” The first of those expires at the end of 2011, and that is not far off.

    Python is one of many possible solutions. But good future languages include things like Google’s Go and Vala for GNOME. Also, GCC is heading down the path of containing a bytecode.

    Kernel bugs are not as large an issue as you make out. The Linux kernel has a very stable syscall interface, so kernel space can be old and userspace new, and most of the time it works perfectly. The big issue is distributions’ management of the kernel, limiting users to a small list of kernels that may not run on their hardware.
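
    For a rough idea of what that stability means in practice, here is a minimal sketch (assuming x86-64 and glibc; the getpid syscall number has been 39 there for many kernel releases):

        import ctypes

        libc = ctypes.CDLL(None, use_errno=True)   # the C library of whatever userspace is running
        SYS_getpid = 39                            # x86-64 syscall number, unchanged across kernels
        print("pid via raw syscall:", libc.syscall(ctypes.c_long(SYS_getpid)))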

    Driver support, other than the base buses and the drivers that need speed, could these days mostly be implemented in userspace, using syscalls that are kernel-version neutral.

    A lot of the issues have nothing to do with the Linux kernel, but with distribution management.

    “Garbage collection is necessary”? No, it is not. Good static checking of code can find where memory will be lost. Improved compilers allow deeper static checking, and lots of these coding errors disappear for good because the compiler can find them.

    Please note that where I am sitting, region coding is illegal. I repeat, illegal. So cracking CSS is perfectly legal, and the same goes for cracking Blu-ray, because both are region coded. There should be a lot of effort put into outlawing region coding, to allow more region-to-region competition on prices.

    Google gets open source software more than lots of companies. Google is slowly bringing its internal systems into line with the outside open source world.

    Google has a solution for the web (http://code.google.com/p/nativeclient/), but that brings back C and C++ again.

    • > Linux kernel has a very stable syscall system. So
      > the kernel space can be old and the userspace new
      > and most of the time it works perfectly.

      Sadly, though this is generally true for glibc, it is not true for all other programs. Two recent examples:

      – the latest version of the intel X driver requires kernel mode-setting support (first appeared in 2.6.29);

      – the latest version of udev cannot abide the old CONFIG_SYSFS_DEPRECATED feature (necessary for supporting _old_ udev, finally turned off by default in 2.6.31).

      If it is important to you to support old out-of-tree drivers, I am sure there are plenty of people who would appreciate it if you backport changes to teach the ancient kernels to work with modern userspace. The RHEL kernel and Willy Tarreau’s 2.4.x kernel might be good starting points.

      But a better fix is to forward-port the drivers.

      • Modern udev and old udev both work with 2.6.31 and newer kernels; just two different versions of the kernel have to be built. I am using the development 2.6.33 with old and new systems: two builds of the kernel. The option is simply no longer the default. It’s not a problem distributions could not choose to take on.

        Really, some items like udev should be upgraded when using a new kernel, like the module tools.

        The Intel driver requiring kernel mode setting is a rare limitation.

    • You cannot find all the memory leaks let alone buffer-overruns via static analysis. You need garbage collection. It was invented as the way to solve this problem (way back in 1959!) which is why it was given a name.

      The question you should ask yourself is why all user-mode code doesn’t use GC today. Imagine if your doctor were using medical devices from 1958.

      • The problem is you are badly wrong. All memory leaks and buffer overruns can be found by static analysis.

        But there are rules.
        The main rule: all the source, i.e. libraries, application, and kernel, has to be in the static analysis, basically everything the application uses. Otherwise the static analysis will miss things it should have detected.

        GCC is a particularly bad example. It does not even do link-time analysis yet, let alone dig library- or kernel-deep for flaws.

        This also includes finding incorrect function usage, i.e. function X is faster, but you called Y in a way that will always end up calling X.

        GC is a hack, a very bad hack. GC suffers from an event called object leaking, where the GC cannot work out whether it is allowed to free something or not. The result is complete exhaustion of memory. You still need to run static analysis on GC programs to detect this problem, and that static analysis has to be just as deep.

        Basically, GC screws up just as much as, if not more than, memory leaks and buffer overruns in C. Worse, it’s harder to find: is it a flaw in the GC, or did you just create a tree of objects?

        The solution to the memory issues is a properly working static analysis system, nothing else. Along the way it also finds other problems.

        Static analysis can even hunt for things like over-allocation of permissions. Again, the prime rule for static analysis to work right has to be respected: everything scanned.

        • Not all problems can be found, because static analysis doesn’t have all the information you have at runtime. Every instruction changes the hardware state of the machine. The system is constantly changing state via inputs that the static analysis cannot grasp. I agree you can find lots of problems and flag potential problem areas, but in the end, it is a lot of guessing unless it is really running the code. I’ve used those tools and seen their limitations. You can do static analysis of code written in a GC language before you run it as well. But you need the runtime support to be certain there are no memory leaks or buffer overruns.

          GC is required to solve the problem of not letting memory get lost or otherwise corrupt. For how long have you known that rebooting a computer very often solves your problem? It is because memory has gotten corrupt. GC is necessary and very helpful to solving this. And GC enables interesting programming features like reflection. I never said GC solves all problems, BTW.

          GC is not a hack. It is the idea that the programmer is no longer responsible for the lifetime of objects, but it is instead handled by the system. You’ll note it was created by a mathematician when he created Lisp, which is not considered a hack.

            • A GC runtime is a hack to deal with unknowns. Programmers get the stupid idea that they are not responsible for the lifetime of objects, just as you have. So sooner or later a programmer creates an unfreeable object tree that eats up all memory. The simple fact is that no matter what, the programmer has to take responsibility for the lifetime of objects, or bring the system down one day. It’s not as if the Linux out-of-memory killer always kills the right process when memory runs out.

              Again, you are wrong about static analysis. In systems designed for static analysis, extra information is included in the source code describing the range of inputs.

              Proper static analysis these days finds all possible code paths, even those one-in-a-million-years screw-up events that you will never find no matter how many times you run the program for real. It’s not exactly running the program, KeithCu; the simple fact is it does not have to run the program. It is mapping the program the way you map a workflow, including thread-to-thread relationships in some of the more advanced tools today.

            Static analysis is possible for one key reason. Computers are not illogical. They follow logic and rules.

              Current-day failures of static analysis break the simple rule I laid out for static analysis to work: scan all used parts. A partial map of the program does not work; it leads to false positives and other junk.

              Be aware that the Linux kernel internally does not have a GC; it does not need a GC; it depends purely on static analysis to find memory issues. It even has special data in the source code to help the static analysis do its job. The Linux kernel has an extremely low rate of memory errors of any form, i.e. object tree errors, buffer underflows or overflows, or misuse of data.

              On top of static analysis, it uses CPU features like the NX flag, so buffer overflows are impossible. So GC is not required to deal with these problems; tighter work between CPU features and the program is.

              A process corrupting memory in Linux? Does not happen. I have never had to reboot a Linux box to correct a memory problem. Restart processes, yes; the OS rebooting itself due to a memory violation somewhere in kernel space, yes; but me having to do it, never. That said, the most common cause of major disruption to the OS is memory running out, which GC alone makes more likely to happen.

              All memory allocations are made and linked to processes under Linux. When a process exits, all memory linked to that process and used only by that process gets freed. That simple. You might say this is garbage collection; the language running on Linux does not need to know about it. For security, tracking memory is a requirement. Release of application memory happens either when the application requests it or when the application is terminated by something else. No leaks here.

              Properly working GC can only be done by static analysis, to find where data will no longer be required and either reuse it as another block of data or free it.

              Runtime GC as anything more than a backstop is also heavier on RAM, due to segments not being freed when they should have been. Basically, runtime GC does not work; it doesn’t know enough. Guess what: the compiled form of Lisp does static analysis and places allocations and frees where they should be; it does not depend on runtime GC. So using the creator of Lisp as a shield for runtime GC is an insult to him.

              For example, here is a good case:

              data X
              function() {
                  X = (defined value)
                  operation done on X
                  return result
              }

              Now here, at the end of function(), X is not reused. So it should be freed, and since X is not going to be used anywhere else, it should really live inside function(). Static analysis will find this.

              Now a GC runtime, on the other hand, leaves data X hanging around forever, using more RAM and causing more problems. And what if the object that this function belongs to refers back to the same object, so that the object never dies? You now have an object tree error, and GC dooms the complete system.

            There are many reasons for deallocating as soon as items are no longer required.

              I am sorry to say that reflection does not depend on using a GC. If you had used Linux kernel tracing of memory allocations, you would have seen that the same information is obtainable from the Linux kernel even though a GC is not used.

              Reflection is just an extra bit of data next to, say, a void * saying what is in there; i.e. reflection and GC have nothing in common other than being linked to memory management, so each can be used alone. Now there is a problem with reflection being used on every bit of data: it is heavier on RAM without valid grounds, and CPU time is wasted writing reflection data that will never be used.

              Good systems like the Linux kernel use GC-style runtime designs where required, as a backstop for sections where you cannot determine what is running at all, i.e. the same reason just-in-time compilers use it. The price is always more RAM and CPU time. Determinable areas use non-GC because it is lighter on CPU time and RAM.

              Note that the Linux kernel is a hybrid for processes: GC plus free-on-request. That model is the best; the coder is still in control. The coder should not code as if a GC is there; they should always code as if it is not there, with the GC just a backstop for mistakes.

              The multi-process design of a lot of applications under Unix uses the natural GC of the OS. Using C or C++ without a GC does not mean you are not using one indirectly, by being multi-process, thus avoiding the need for a language with GC support.

              Mono, Java… all have been pushed as safe languages. Look far enough and you will find object trees eating up all memory and causing termination of programs. They are not safe. The lack of buffer-overflow terminations and the like means poor coders go unnoticed longer, and so are allowed to work on things more critical than their skill level. Even worse, their GC at times duplicates what the OS would have done anyway.

          • I don’t buy the argument that GC is a tool for poor programmers. It is a tool that makes all programmers more productive. It never makes a programmer worse.

            GC is not a hack to deal with unknowns. It is a design to deal with the real world. Static analysis cannot find all problems by definition.

            I agree GC doesn’t solve all problems, but by removing double-frees, use-after-free, and such, it has improved the situation. It is necessary but insufficient. And often GC can find instantly the bugs that might have taken a million years to find in C/C++. In fact, a lot of the static analysis you would do in C/C++ doesn’t make sense in a GC language.

            If you want to implement hardware boundary checking rather than using it as a natural function of GC that would be fine, but I don’t think you could do it efficiently because the code constantly changes what memory is in use at any point in time and allocations can be just a few bytes big.

            My point about rebooting computers or processes is the same: the memory got corrupt. It is because GC is so unused even today that computers are considered unreliable.

            I don’t really understand your point about Lisp and GC, but I continue to believe that you are not qualified to say that GC is a hack because it was created by the same person who created Lisp.

            Reflection and GC have something in common. You’ll notice nearly every GC system you can think of has it, and nearly every non-GC system doesn’t. The point stands that GC requires infrastructure which enables features.
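
            As a rough sketch of the two situations being argued about here (Python’s gc module is used only for illustration):

                import gc

                class Node:
                    def __init__(self):
                        self.other = None

                # (a) An unreachable reference cycle: the collector reclaims it
                # without anyone freeing it by hand.
                a, b = Node(), Node()
                a.other, b.other = b, a
                del a, b
                print("objects reclaimed by the cycle collector:", gc.collect())

                # (b) The "object tree" complaint above: objects that stay reachable
                # from a live container are never reclaimed, collector or not.
                cache = [Node() for _ in range(3)]
                print("still-reachable objects:", len(cache))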

  11. Don’t spend too much time with Oiaohm aka Ohioham aka Squanto.

    He’s a Linux troll who hates Windows and Mono and is always trying to come up with reasons to criticize them. What is your alternative solution here Ohio? Send a memo to all programmers reminding them to catch all memory leaks? As if the overhead of managed code even matters when new laptops come with 2gb of RAM.

  12. How can the Linux desktop revive the idea of desktop applications? It benefits from a move to web apps much more than Windows or OS X do.

    Linux would have better support if the kernel team provided a stable ABI. They take the position that open drivers are more important than having good support.

    Anyways most of your blog post could have been written in 2000. That’s something you need to think about.

    • Hi Ghost;

      Yes, Linux benefits from the web, which is cross-platform. But a Linux OS has thousands of applications. It has repositories which make code distribution easy, one of the biggest benefits of the web.

      I realize that much of this could have been written years ago. I wrote my first blog post criticizing Linux 4 years ago, in the first year of Ubuntu (http://keithcu.com/wordpress/?p=24). But it seemed more needed to be written. I see myself as an enthusiast/journalist, describing the world as it exists, and as it should be.

      • I guess I don’t see the benefits of writing a rich app for Linux over Windows or OSX. Writing a client app for Linux is a PITA unless you do it the Linux way which is to open the source and let package managers figure out all the adjustments the various distros need. Distribution is a major headache if your software is proprietary, especially if you want to support multiple distros.

        Expecting most software companies to open their source is unrealistic. The Red Hat model of selling services doesn’t work for most software. Ergo the basic expectations of Linux are unrealistic. Linux works fine on servers, where there is an incentive for hardware companies to fund it. Software companies don’t want to fund a movement that seeks to eliminate them.

        • Another of the reasons I wrote a book is because I believe that even though proprietary software should be allowed in a free market, its use is a mistake. Science is about making your results available so that other scientists have shoulders to stand on.

          I’ve been running 100% free software for 4 years and I use more different programs on Linux than I did on Windows. If there wasn’t an incentive to write all this code, it wouldn’t exist. There are reasons to write software other than a proprietary license. In short, software moves to a service consulting business, with a sprinkle of charity on top.

          As more large organizations depend on the Linux desktop, resources will flow in. I’m amazed at how good Linux is even though it only has 2-3% market share.

          • There is a ton of proprietary software for which there is no open source alternative. It is not a mistake to use the software that works the best and in many cases is the only solution.

            Programming is closer to traditional engineering than laboratory science. Bridges don’t get built by volunteers.

            Software as a service doesn’t work for all software. Most software doesn’t require support and programs like Photoshop would just be undermined by clones if Adobe released the code. People would not pay $500 to have Adobe support. Everyone would just download the $0 version and get free support from forums and FAQs.

            Linux has 1% of the market and that hasn’t changed in over 10 years. There aren’t enough resources to fund alternatives to the mountain of proprietary software that exists.

  13. I agree that some proprietary software has features that free software doesn’t. But that is because this business model has been the norm for many years. Free software is catching up, and with millions of people it has the potential to do more. And many of those proprietary codebases are old and creaky, because it takes an army to tend a garden the size of a city.

    Software is not a bridge. This is just the free software movement. And there are plenty of programmers already out there in the real world today who get paid to write free software. A service business is a commercial enterprise.

    I agree that Adobe Photoshop’s business is threatened by free software. Oh well. If they aren’t releasing their code to their users, they may as well not exist. Parts of Adobe get free software, because a number of their new initiatives are FOSS. But there is no outside developer community that understands the core Photoshop code anyway, so they are doomed and may as well make money on the way down.

    Avatar was created using a 35,000-core Linux cluster. If that can be done, photo-editing can be done as well. GIMP is good enough for most users, just as OpenOffice is.

    Linux has not had a static market share for 10 years. It is growing and has been growing every year. And now that it is at 2%, it starts to reach critical mass, because unlike Windows or the Mac, Linux gets better with more users, even if only 0.1% of them become contributors. There are deep similarities between Wikipedia and the free software movement.

    Ignorance of the idea that everyone should use free software is a problem, which is another reason why I wrote a book. There is no mountain of proprietary software. We were just talking about how the web helps Linux. There are just a few dozen pieces of proprietary software that matter, and I’ll bet that there are credible free alternatives today. And there is a ton of interesting free software out there that has no proprietary alternative.

    I don’t care when the last user quits using Photoshop. I just want the Linux user and developer base to double a few more times as soon as possible. And then maybe the people doing cancer research software and AI will start using FOSS and working in shared codebases.

  14. Linux does not have 2%
    http://gs.statcounter.com/#os-ww-monthly-200902-201003

    People have been claiming that the same revolution is coming for over a decade while Linux has stayed at 1%. You’re naive to think that most software companies can use open-source business models. Most companies would be ruined by allowing free clones of their software.

    This is the whole problem with Linux. It is rooted in this anti-proprietary attitude, when it is proprietary developers that add value to a platform. Just look at the iPhone and the App Store. Those are proprietary apps, and iPhone users love them. So what if users don’t have access to the source? They couldn’t care less, since they just want to run the applications.

    • Free software is bad for proprietary software companies, but better for everyone else, and there is money to be made in free software; IBM makes it.

      Free software is better for the free market than proprietary software because anyone can download the code for free, master it, and then become a consultant. Imagine being a lawyer without access to Lexis/Nexis or a law library.

      The fact that you don’t have a printing press means you don’t care about freedom of the press?
