
Category Archives: Uncategorized

Do Not Mess With This Guy


Faster Linux World Domination

This was originally posted to Linus and the rest of the Linux kernel mailing list, and then to Tom’s Hardware.

“The future is open source everything.”
—Linus Torvalds

Dear LKML;

I have written a book that makes the case for Linux world domination. I find it interesting that the idea of Linux on the desktop is met with either yawns or derision. I think the reaction depends on whether you see Linux as a powerful operating system built by a million-man army, or as one filled with bugs and missing cool features like speech recognition.

The points I make should be obvious to you all, but the book has some pages on how Linux could succeed faster that I thought I would summarize here. Given that this is such a high-volume list, I figured it cannot decrease the signal-to-noise ratio very much! 😉 I didn’t see anything in the LKML FAQ saying such emails are disallowed.

I’ve been using Linux since mid-2005, and considering how much better things are now compared to then, it surely is an interesting time to be involved with free software. From no longer having to compile my Intel wireless driver or hack xorg.conf, to the 3-D desktop, to better Flash and WMV support, to countless kernel enhancements like the move from OSS to ALSA and better suspend/resume, things are moving along nicely. But this is a constant battle, as there must be 10,000 devices, with new ones arriving constantly, that all need to just work. Being better overall is not sufficient; every barrier needs to be worked on (http://www.joelonsoftware.com/articles/fog0000000052.html).

The Linux kernel:

The lack of iPod and iTunes support on Linux is not a bug the kernel alone can solve, but Step 1 of Linux World Domination is World Installation. Software incompatibilities will get solved faster once the hardware incompatibilities are solved. The only problem you can’t work around is a hardware problem.

If you hit a kernel bug, it is quite possible the rest of the free software stack cannot be used; that is generally not the case for other software. Fixing kernel bugs faster will increase the pace of Linux desktop adoption, as each bug is a potential barrier. If you assume 50 million users running Linux, with each bug typically affecting 0.1% of those users, then each bug hits tens of thousands of people (0.1% of 50 million is 50,000). Currently, the Linux kernel has 1,700 active bugs (http://tinyurl.com/LinuxBugs). Ubuntu has 76,371 bugs (https://launchpad.net/ubuntu/+bugs). I think bug-count goals of some kind would be good.

In general, Linux hardware support for the desktop is good, but it could get better faster. From Intel, to Dell, to IBM and Lenovo, to all of their suppliers, the ways in which they are all over-investing in the past at the expense of the future should be clear; the Linux newswires document them in detail on a daily basis. An Intel kernel engineer told me that his company invests 1% of the resources in Linux that it does in Windows. It is only because writing Linux drivers is so much easier that Intel is seen as a quite credible supporter. The few Dell laptops that even ship with Linux still contain proprietary drivers, drivers that aren’t in the kernel tree, and so forth.

Peter Drucker wrote: “Management is doing things right; leadership is doing the right things.” Free software is better for hardware companies because it allows more money to go into their pockets. Are they waiting for it to hit 10% market share first? I recommend senior IBM employees be forced to watch their own 2003 Linux “Prodigy” video (http://www.youtube.com/watch?v=EwL0G9wK8j4) over and over, like in A Clockwork Orange, until they promise free, feature-complete drivers in the kernel tree for every piece of hardware before the device ships. How hard can it be to get companies to commit to that minuscule technical goal? In fact, it is hard to imagine being happy with a device without having a production Linux driver to test it with.

It is amazing that it all works as well as it does right now given this, which is a testament to the generally high standard of many parts of the free software stack, but every hardware company could double its Linux kernel investment without breaking a sweat. The interesting thing is that PC vendors that don’t even offer Linux on their computers have no idea how many of their customers are actually running it. It might already be at the point where it would make sense for them to invest more, or simply push their suppliers to invest more.

There are more steps beyond Step 1, but we can work on all of them in parallel.

And to the outside community:
* Garbage collection is necessary but insufficient for reliable code. We should move away from C/C++ for user-mode code. For new efforts, I recommend Mono or Python. Moving to modern languages and fewer runtimes will increase the amount of code sharing and the pace of progress. There is a large bias against Python in the free software community because of performance, but it is overblown because there are multiple workarounds (see the sketch after this list). There is a large bias against Mono that is also overblown.
* The research community has not adopted free software and shared codebases sufficiently. I believe there are enough PhDs today working on computer vision, but there are 200+ different codebases (http://www.cs.cmu.edu/~cil/v-source.html) plus countless proprietary ones. I think scientists should use SciPy.
* I don’t think IBM would have contributed back all of its enhancements to the kernel if it weren’t also a legal requirement. This is a good argument for GPL over BSD.
* Free software is better for the free market than proprietary software.
* The idea of Google dominating strong AI is scarier than Microsoft’s dominance of Windows and Office. It might be true that Microsoft doesn’t get free software, but neither do Google, Apple, and many others. Hadoop is good evidence of this.
* The split between Ubuntu and Debian is inefficient as you have separate teams maintaining the same packages, and no unified effort on the bug list. (http://keithcu.com/wordpress/?page_id=558)
* The Linux desktop can revive the idea of rich applications. HTML and Ajax keep improving, but the web defines the limits of what you can do, and I don’t think we want to live in a world of only HTML and JavaScript.
* Wine is an important transitional tool that needs lots of work (http://bit.ly/fT3pXr)
* OpenOffice is underfunded. You wonder whether Sun ever thought it could beat Microsoft while putting only 30 developers on it, which is tiny by MS standards. Web + OpenOffice + a desktop is the minimum, but the long tail of applications that demonstrate the power of free software all need a coat of polish. Modern tools, more attention to detail, and another doubling of users will help, but for the big apps like OpenOffice it will take paid programmers to work on those important beasts.
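To make the Python performance point above concrete, here is a minimal, hypothetical sketch of one common workaround: keep the program in Python but push the hot loop into a C-backed library such as NumPy. The function names and the 100,000-element test are mine, purely for illustration.

import time
import numpy as np

def sum_of_squares_pure(values):
    # Pure-Python loop: every iteration goes through the interpreter.
    total = 0.0
    for v in values:
        total += v * v
    return total

def sum_of_squares_numpy(values):
    # Same computation, but the loop runs in NumPy's compiled C code.
    arr = np.asarray(values, dtype=np.float64)
    return float(np.dot(arr, arr))

data = list(range(100_000))

start = time.time()
slow = sum_of_squares_pure(data)
print("pure Python:", time.time() - start, "seconds")

start = time.time()
fast = sum_of_squares_numpy(data)
print("NumPy:      ", time.time() - start, "seconds")

assert abs(slow - fast) < 1e-6  # identical result, very different speed

Workarounds in this spirit (NumPy, Cython, bindings to existing C libraries) are why I consider the performance objection overblown for most user-mode code.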

There are other topics, but these are the biggest ones (http://keithcu.com/wordpress/?page_id=407). I talked to a number of kernel and other hackers while researching this, and it was enjoyable and interesting. I cite Linus a fair amount because he is quotable and has the most credibility with the outside world 😉 although Bill Gates has said some nice things about Linux as well.

If you want to respond off-list, you can comment here http://keithcu.com/wordpress/?p=272.

Thank you for your time.

Keep at it! Very warm regards,

-Keith


Computer vision as codec

I’ve tried for a while to figure out why computer vision is mostly still in research labs, in spite of the fact that many thousands of people are working on it across many different algorithms and codebases. One analogy that occurs to me is image compression.

There are an infinite number of ways to compress an image, and each one gives a different result. In principle, we could have thousands of people around the world working by themselves on this very hard problem, but it would be better to take a combination of the best ideas and have everyone use that.

While codecs and computer vision seem quite different, they share an important similarity: in the computer vision pipeline, from pre-processing to feature extraction, each step produces a smaller amount of data. At the end of the analysis you might be left with the conclusion that this is an image of your house, which is just a few bytes. That compression is precisely what a codec does.
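Here is a toy sketch of that idea. The stages and thresholds are made up purely for illustration and are nothing like a real vision system; the shrinking sizes at each stage are the point.

import numpy as np

# A fake 640x480 color image standing in for raw camera data.
image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

gray = image.mean(axis=2).astype(np.uint8)               # pre-processing: drop color
edges = np.abs(np.diff(gray.astype(int), axis=1)) > 30   # crude "edge" map
features = np.histogram(gray, bins=64)[0]                 # a small feature vector
label = "house"                                            # the final, tiny description

print("raw image:     ", image.nbytes, "bytes")
print("grayscale:     ", gray.nbytes, "bytes")
print("edge map:      ", edges.size // 8, "bytes (packed to 1 bit per pixel)")
print("feature vector:", features.nbytes, "bytes")
print("label:         ", len(label), "bytes")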

Another similarity is that decoding is much simpler than encoding. Decompressing an image is faster than compressing it, and the encoders can typically get smarter while the decoder doesn’t even realize it. Likewise, we have plenty of software today that can generate a photo-realistic image of a house. The computer is doing the reverse process of what happens in our eyes.

So perhaps we have thousands of computer vision people around the world each taking an image and extracting the data, when some combination of their approaches would be the best. To be fair, this doesn’t tell us how hard the problem is. Will it take the best ideas of 3 people or 50?

Answering that involves looking at each piece. Note that there is plenty of good free code for image processing, which is an important piece of computer vision. When it gets to lines and edges, things seem less settled, but I suspect there are many workable approaches and we should just pick a robust one and move on. [More here]

I’ve discovered that the best codebase for people who want to work on computer vision is http://stefanv.github.com/scikits.image/index.html. It is built on Python and SciPy and is developed with a DVCS.
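As a taste of what working in it looks like, here is a minimal sketch: load a bundled sample image, run a standard edge detector, and continue with NumPy/SciPy from there. The exact module layout has shifted between releases, so treat the import paths as approximate rather than authoritative.

import numpy as np
from skimage import data, feature

image = data.camera()                   # a built-in grayscale test image
edges = feature.canny(image, sigma=2)   # Canny edge detection, a robust standard choice

print("image shape:", image.shape)
print("edge pixels:", int(np.count_nonzero(edges)))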

So let’s get going.

Virtual Darpa Grand Challenge slide deck

I have had this idea for a Virtual Darpa Grand Challenge for a couple of years now, and I’m shopping this slide deck around to angel and VC people. I don’t know many, but I am looking and learning.

But I thought I’d also put this out there to the Linux community and see what they think of it. I’ve never done anything like this before so I’m not even sure if I should take on this idea, but I’d be interested in hearing what people think of it and any advice on how to make it happen.

(The first few slides are background because I can’t assume someone knows about the benefits of free software.)

Thanks!

-Keith

Here is the latest version of the slide deck as a PDF, which you can also download.


Comment to Mark Shuttleworth

Mark announced he is stepping down as head of Canonical to work on design and quality, and this is what I wrote in his blog comments section:

You should focus on the buglist as one of your most important metrics, surely more important than boot time! You should use your bully pulpit to rally each team in the Ubuntu community, and those who work with Ubuntu, to get the buglist under control. Software that has bugs is like a house with 99% of a roof. It is impossible to have a quality product with an out-of-control buglist. I get concerned that every new release of Ubuntu fixates on some shiny new feature while the fundamentals are ignored.

The answer is not to be more or less “conservative” about software versions. I’m not going to argue that new Ubuntu releases are less reliable than previous ones, as all releases have had bugs. (Breezy would sometimes hang on boot on my dual-processor machine because of some sort of race condition.) But it is not getting better. I live in Seattle, and if a Boeing crashes, it is very bad news. Pretend the same for Ubuntu.

Set goals and measure progress against them. Perhaps the most important is hardware because you can’t use any of the software until your hardware works. Step 1 of Linux World Domination is World Installation.

My book has more than one chapter full of advice for the Linux community.