Post to the Linux Kernel Mailing List about Zero Bugs in Linux

Original link here.

Copied here:

Many interesting ideas on version numbering schemes. I like 2.11.X because it maps to years easily in people’s minds, but I look forward to seeing what is chosen. You guys break many of the rules of software development, so why not go backwards in version numbers 😉

While you are talking about arbitrary numbers and new goals, I want to offer that you could consider a push towards zero bugs. In general, as long as your reliability monotonically increases (no regressions), that is an acceptable minimum approach, because it means that you will never have a customer go from being happy to unhappy.

However, it is common in companies to make an effort to get towards zero bugs. Whether zero bugs is achievable is a philosophical discussion. If you look through your current list of bugs, nearly every one looks scary to me and important to someone. You currently have 2,800 active bugs; the last time I looked, I found the median age was 10 months. In general, bugs should be fixed in the next release, and therefore within 3 months.

A zero-bug bounce is hard for the others because they don’t have sufficient resources; however, I believe you easily do. I can’t say that anything technically magical will happen if you work on your bugs faster, but I can say that people I respect as much as you taught me this. My salary was based on my ability to promptly respond to my bugs, and zero was everyone’s goal. Hitting zero, even for a minute, could be a newsworthy event, as another way Linux is better than the others. It also shows leadership to user mode. I sometimes get the feeling that many in the FOSS community look at bugs as something they could work on when they get bored of adding new features, instead of: “Holy poop, there is someone unhappy out there.”

Warm regards,


Torcs-based driving simulator in Python

I have decided to restart OpenRacing. (Note: we may rename it to PyTorcs or pySpeed-Dreams[1].) I’ve decided to call it PyTorcs for now. See you on GitHub.

PyTorcs is based on the Torcs codebase, which is widely considered the best FOSS racing game and which already has autonomous cars. However, the codebase contains a lot of cruft.

So, it has been methodically re-written, first into C# and now into Python, ported to leverage the Ogre graphics engine, the ODE physics engine, OIS, standard widget APIs, and OpenAL, and extended with a more general track model.

The result will be a clean and 6x smaller codebase, with the heritage of Torcs, for simulating autonomous vehicles that can handle the complexities of urban scenarios and eventually navigate using a vision recognition engine and simulated sensors like radar and GPS.

There is nothing to announce yet but a plan. Here it is so far:

Python Port

  1. Do a mechanical port of the C# code
  2. Port the Swig wrapping tools to generate Python
  3. Debug

Steps that can then proceed independently:

  • Extra data attached to map model
  • Define map APIs for the robots to use. ([[1]], [[2]], [[3]]) In the short term, it seems we will need three spatial indices: inside the map, physics, and graphics engines.
  • Get basic autocar driving (waypoints, lanes)
  • Grab more features from [Simplix]
  • Networking
  • Windows & Mac port (remaining code is very portable)
  • Make map more pleasant to look at.
  • Simulate Lidar (Can we put the source of light behind the camera?)
  • Faster than realtime simulation runs (turn off graphics, optimize physics)
  • Better weather
  • Port over a new and prettier car model from Speed-Dreams and discuss the current set of difficulties in using their data with Ogre
  • Find / modify a big urban map with a highway exit
  • OpenStreetMap augmented-reality visual annotations
  • OpenStreetMap auto-generate mesh to drive through
  • Smart objects like street lights
  • Assisted-driving features (prevent crashes)
  • Port over automatic transmission, wheels, etc. from Torcs / Speed-Dreams
  • Plug into vision engine
  • Define, record, and replay simulations
  • Joystick, better keyboard
  • Port simuv3 to Python (low priority, isolated task)
  • Etc.
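To make one early milestone in the list above concrete, basic waypoint driving can be sketched in plain Python. This is a minimal illustration under assumptions of my own, not the PyTorcs API: the function names, the proportional gain, and the waypoint format are all hypothetical.

```python
import math

def steer_to_waypoint(x, y, heading, wx, wy, gain=1.0):
    """Proportional steering: return a steering angle (radians)
    that turns the car toward the waypoint (wx, wy)."""
    desired = math.atan2(wy - y, wx - x)
    # Wrap the heading error into [-pi, pi] so the car turns the short way.
    error = (desired - heading + math.pi) % (2 * math.pi) - math.pi
    return gain * error

def follow_waypoints(pos, heading, waypoints, reach_radius=2.0):
    """Pop waypoints as the car reaches them; return the steering
    command for the current target, or None when the route is done."""
    while waypoints:
        wx, wy = waypoints[0]
        if math.hypot(wx - pos[0], wy - pos[1]) < reach_radius:
            waypoints.pop(0)  # close enough: advance to the next waypoint
            continue
        return steer_to_waypoint(pos[0], pos[1], heading, wx, wy)
    return None
```

In a real simulation loop the returned angle would be fed into the physics engine’s steering input each tick; lane following would then refine this by biasing the target point toward the lane center.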

If you are interested in working on a Python driving simulator, please contact me or put some information below. There are plenty of big and little tasks. A handful of people, let alone 10, could accomplish a lot, chipping away at any of those areas. I’ll make another post here when the GitHub repository is ready, the C# is gone (GitHub here), and Python is running.

[1] Just kidding about the last one. I am looking for a good, readable, simple name, but I never got to choose the names of the codebases I worked on, so I’m not practiced at it.

Open Letter to Ableton

This is a rant I posted to the Ableton forum.

Open letter to Ableton;

I was very annoyed about the state of Ableton / Linux support, so I decided to come here and complain, and I found a thread — of course!

If an application supports Windows and Mac, supporting Linux is not much work; somehow, there are very many products that work on all 3 platforms. If you only supported Windows, you would be in much worse shape. I’ll bet a sandwich Ableton doesn’t even have one person working on Linux. 3-5 people could have a solid native port in a few months.

The Linux audio stack is getting mature. What is required now is a realization by you that your customers want Linux support. Note, WINE support for Ableton Live is getting solid today, but it does have problems. On the latest Ubuntu, it installs and runs, which is a big milestone, but it has some perf glitches (some things are very slow), and the audio doesn’t work. With Ableton supporting Linux directly, or via Wine, ideally both, these problems could easily and quickly get fixed.

A free / GPL Ableton would be very nice, but the proprietary version of Ableton on Linux enables users to run a free OS, which is even better. Not supporting Linux is damaging to the freedom of Ableton’s customers. Microsoft continues to win because of the lack of vision or laziness of others.

I don’t recommend rioting in the streets, but I do encourage customers to loudly remind every software vendor that the freedom to choose your own OS is very important, and companies should respect their customers’ hardware and software preferences.

You might think it isn’t worth it to build a Linux version today, but how can you know the demand for a product you don’t have? Linux market share is growing every year, and studies show that worldwide usage is comparable to the Macintosh today. It is true that not much music-making is done on Linux now, but that is partially your fault! Are you waiting for Linux to be dominant in music-making before you enter the market? Any businessman will tell you that is exactly backwards.

People may not use a product if it doesn’t run on all platforms: PDF, Flash, Firefox, Wikipedia, etc. are popular because they work on all platforms. Not having a Linux version puts the entire company at risk.

I know you are busy, but I also know you can afford it. It is not a matter of development being at capacity (as if people ever sit around), it is a matter of prioritizing. When you say you don’t have the resources, you are just saying it doesn’t seem important yet. You could actually make a major shift in priorities quickly if you wanted to. Requirements often show up mid-way through every development cycle and need to be incorporated, and it gets done. Ableton says it isn’t going to support Linux because it can’t be “all things to all people”. That is equating one feature with all features.

You either embrace the future or your competitors do it for you. I don’t care who builds it, but music-making software is one of the top challenges for the Linux desktop. Many people run 1 or 2 proprietary apps on Linux. Several of Ableton’s employees are long-time users of Debian Linux. It is sad that Linux has so many users who are not supporters. Supporting Linux can mean many things; I just ask you to start by creating a version of Ableton that runs on at least Debian. If you feel very busy, I can recommend moving away from C++ towards 99% Python. That will help speed the Linux port and every other feature.



P.S. Here is a quote:

Sometimes the real hurdle to renewal is not a lack of options, but a lack of flexibility in resource allocation. All too often, legacy projects get richly funded year after year while new initiatives go begging. This, more than anything, is why companies regularly forfeit the future — they overinvest in “what is” at the expense of “what could be.”

New projects are deemed “untested”, “risky”, or a “diversion of resources.” Thus while senior execs may happily fund a billion-dollar acquisition, someone a few levels down who attempts to “borrow” a half-dozen talented individuals for a new project, or carve a few thousand dollars out of a legacy budget, is likely to find the task on par with a dental extraction.

The resource allocation model is typically biased against new ideas, since it demands a level of certainty about volumes, costs, timelines, and profits that simply can’t be satisfied when an idea is truly novel. While it’s easy to predict the returns on a project that is a linear extension of an existing business, the payback on an unconventional idea will be harder to calculate.

Managers running established businesses seldom have to defend the strategic risk they take when they pour good money into a slowly decaying business model, or overfund an activity that is already producing diminishing returns.

How do you accelerate the redeployment of resources from legacy programs to future-focused initiatives?

—Gary Hamel, The Future of Management

GC Lingua Franca(s)

Science doesn’t always proceed at the speed of thought. It often proceeds at sociological or even demographic speed. — John Tooby

Open Letter to the Linux Kernel Mailing List (LKML);

If we were already talking to our computers, etc. as we should be, I wouldn’t feel a need to write this to you. Given current rates of adoption, Linux still seems a generation away from being the priceless piece of free software useful to every child and PhD. This army your kernel enables has millions of people, but they often lose to smaller proprietary armies because they are working inefficiently. My mail one year ago listed the biggest work items, but I realize now I should have focused on one. In a sentence, I have discovered that we need garbage-collected (GC) lingua franca(s).

Every Linux success builds momentum, but the desktop serves as a powerful daily reminder of the scientific tradition. Many software PhDs publish papers but not source, like Microsoft. I attended a human genomics conference and found that the biotech world is filled with proprietary software. IBM’s Jeopardy-playing Watson is proprietary, like Deep Blue was. This topic is not discussed in any of the news articles, as if the license does not matter. I find widespread fear of having ideas stolen in the software industry, and proprietary licenses encourage this. We need to get these paranoid programmers, hunched in the shadows, scribbled secrets clutched in their fists, working together, for any of them to succeed. Windows is not the biggest problem; it is the proprietary licensing model that has infected computing, and science. Desktop world domination is not necessary, but it is sufficient to get robotic chauffeurs and butlers.

There is, unsurprisingly, a consensus among kernel programmers that usermode is “a mess” today, which suggests there is a flaw in the Linux desktop programming paradigm. Consider the vast cosmic expanse of XML libraries in a Linux distribution. Like computer vision, there are not yet clear places for knowledge to accumulate. It is a shame that the kernel is so far ahead of most of the rest of user mode.

The most popular free computer vision codebase is OpenCV, but it is time-consuming to integrate because it defines an entire world in C++, down to the matrix class. Because C/C++ didn’t define a matrix, nor provide code, countless groups have created their own. It is easier to build your own computer vision library using standard classes that do math, I/O, and graphics than to integrate OpenCV. Getting productive in that codebase takes months of work, and people want to see results before then. Building it is a chore, and they have lost users because of that. Progress in the OpenCV core is very slow because the barriers to entry are high. OpenCV has some machine learning code, but they would do better to delegate that to others. They are now doing CUDA optimizations they could get from elsewhere. They also have 3 Python wrappers and several other wrappers; many groups spend more time working on wrappers than on the underlying code. Using wrappers is fine if you only want to call the software, but if you want to improve the underlying code, then the programming environment instantly becomes radically different and more complicated.

There is a team working on Strong AI called OpenCog, a C++ codebase created in 2001. They are evolving slowly because they do not have a constant stream of demos. They don’t consider that their codebase is a small amount of world-changing ideas buried in engineering baggage like the STL. Their GC language for small pieces is Scheme, an unpopular GC language in the FOSS community; some in their group recommend Erlang. The OpenCog team looks at their core of C++, and over to OpenCV’s core of C++, and concludes the situation is fine. One of the biggest features of ROS (the Robot OS), according to its documentation, is a re-implementation of RPC in C++, not what robotics was missing. I’ve emailed various groups; all know of GC, but they are afraid of any decrease in performance, and they do not think they will ever save time. The transition from brooms to vacuum cleaners was disruptive too, but we managed.

C/C++ makes it harder to share code amongst disparate scientists than a GC language. It doesn’t matter if there are lots of XML parsers or RSS readers, but it does matter if we don’t have an official computer vision codebase. This is not against any codebase or language, only for free software lingua franca(s) in certain places to enable faster knowledge accumulation. Even language researchers can improve and create variants of a common language, and tools can output it from other domains like math. Agreeing on a standard still gives us an uncountably infinite number of things to disagree over.

Because the kernel is written in C, you’ve strongly influenced the rest of the community. C is fully acceptable for a mature kernel like Linux, but many concepts aren’t so clear in user mode. What is the UI of OpenOffice when speech input is the primary means of control? Many scientists don’t understand the difference between the stack and the heap. Software isn’t buildable if those with the necessary expertise can’t use the tools they are given.

C is a flawed language for user mode because it is missing GC, which had been invented a decade earlier, and C++ added as much as it took away, as each feature came with an added cost of complexity. C++ compilers converting to C was a good idea, but being a superset was not. C/C++ never died in user mode because there are now so many GC replacements that there seems no clear place to go, a situation that has paralyzed many into inaction. Microsoft doesn’t have this confusion: its language, as of 2001, is C#. Microsoft is steadily moving to C#, but it is 10x easier to port a codebase like MySQL than SQL Server, which has an operating system inside. C# is taking over at the edges first, where innovation happens anyway. There is a competitive aspect to this.

Lots of free software technologies have multiple C/C++ implementations, because it is often easier to re-write than share, plus an implementation in each GC language. We all might not agree on the solution, so let’s start by agreeing on the problem. A good example of the benefit of GC is how a Mac port can go from weeks to hours. GC also prevents code from using memory after it is freed, freeing it twice, etc., and therefore user code is less likely to corrupt memory. If everyone in user mode were still writing in assembly language, you would obviously be concerned. If Git had been built in 98% Python and 2% C, it would have become easier to use faster, found ways to speed up Python, and set a good example. It doesn’t matter now, but it was an opportunity in 2005.

You can “leak” memory in GC, but that just means you are still holding a reference. GC requires the system to have a fuller understanding of the code, which enables features like reflection. It is helpful to consider that GC is a step up for programming like C was over assembly language. In Lisp, the first GC language, the binary was the source code — Lisp is free by default. The Baby Boomer generation didn’t bring the tradition of science to computers, and the biggest legacy of this generation is whether we remember it. Boomers gave us proprietary software, C, C++, Java, and the bankrupt welfare state. Lisp and GC were created / discovered by John McCarthy, a mathematician of the WW II greatest generation. He wrote that computers of 1974 were fast enough to do Strong AI. There were plenty of people working on it back then, but not in a group big enough to achieve critical mass. If they had, we’d know their names. If our scientists had been working together in free software and Lisp in 1959, the technology we would have developed by today would seem magical to us. The good news is that we have more scientists than we need.
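The point that a GC “leak” is just a held reference can be demonstrated in a few lines with the standard-library `gc` and `weakref` modules. The `Blob` class and the `cache` list are invented stand-ins for a large allocation and a long-lived container:

```python
import gc
import weakref

class Blob:
    """Stand-in for a large allocation."""
    def __init__(self):
        self.data = bytearray(10**6)

cache = []                 # a long-lived container, e.g. a global cache

blob = Blob()
cache.append(blob)         # the "leak": the cache still references the blob
probe = weakref.ref(blob)  # a weak reference does not keep the blob alive

del blob
gc.collect()
print(probe() is not None)  # True: still reachable via the cache

cache.clear()              # drop the last strong reference
gc.collect()
print(probe() is None)      # True: the collector has reclaimed it
```

The object is never reclaimed while the cache holds it, and is reclaimed as soon as nothing does; there is no way to use it after that point, which is exactly the use-after-free class of bugs GC removes.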

There are a number of good languages, and it doesn’t matter too much which one is chosen, but it seems the Python family (Cython / PyPy) requires the least amount of work to get what we need, as it has the most extensive libraries. I don’t argue that the Python language and implementation are perfect, only good enough, like how the shapes of the letters of the English language are good enough. Choosing and agreeing on a lingua franca will increase the results for the same amount of effort. No one has to understand the big picture; they just have to do their work in a place where knowledge can easily accumulate. A GC lingua franca isn’t a silver bullet, but it is the bottom piece of a solid science foundation and a powerful form of social engineering.

The most important thing is to get lingua franca(s) in key fields like computer vision and Strong AI. However, we should also consider a lingua franca for the Linux desktop. This will help, but not solve, the situation of the mass of Linux apps feeling dis-integrated. The Linux desktop is a lot harder because code here is 100x bigger than computer vision, and there is a lot of C/C++ in FOSS user mode today. In fact it seems hopeless to me, and I’m an optimist. It doesn’t matter; every team can move at a different pace. Many groups might not be able to finish a port for 5 years, but agreeing on a goal is more than half of the battle. The little groups can adopt it most quickly.

There are a lot of lurkers around codebases who want to contribute but don’t want to spend months getting up to speed on countless tedious things like learning a new error handling scheme. They would be happy to jump into a port as a way to get into a codebase. Unfortunately, many groups don’t encourage these efforts as they feel so busy. Many think today’s hardware is too slow, and that running any slower would doom the effort; they do not appreciate the steady doublings and forget that algorithm performance matters most. A GC system may add a one-time cost of 5-20%, but it has the potential to be faster, and it gives people more time to work on performance. There are also real-time, incremental, and NUMA-aware collectors. The ultimate in performance is taking advantage of parallelism in specialized hardware like GPUs, and a GC language can handle that because it supports arbitrary bitfields.
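The claim that algorithm performance matters most can be made concrete with a toy example. These two functions are purely illustrative, from no particular codebase; both count duplicate pairs in a list, one by comparing every pair and one with a hash table:

```python
from collections import Counter

def count_duplicates_quadratic(items):
    """O(n^2): compare every pair of elements."""
    count = 0
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b:
                count += 1
    return count

def count_duplicate_pairs_linear(items):
    """O(n): hash-based counting gives the same answer.
    A value appearing c times contributes c*(c-1)/2 pairs."""
    return sum(c * (c - 1) // 2 for c in Counter(items).values())
```

For a million items the second version wins by orders of magnitude, a gap that a one-time 5-20% collector overhead cannot touch, whatever language either is written in.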

Science moves at demographic speed when knowledge is not being reused among the existing scientists. A lingua franca makes more sense as more adopt it. That is why I send this message to the main address of the free software mothership. The kernel provides code and leadership, you have influence and the responsibility to lead the rest, who are like wandering ants. If I were Linus, I would threaten to quit Linux and get people going on AI 😉 There are many things you could do. I mostly want to bring this to your attention. Thank you for reading this.

I am posting a copy of this open letter on my blog as well. Reading the LKML for more than one week could be classified as torture under the Geneva conventions.

In liberty,


Article in Hindi

Here is some light New Year’s Eve reading for you. Original link (Translation per Shardul Pandey)
United States should elect Keith Curtis as President

I don’t know Keith Curtis personally. I have read his book, After the Software Wars, and after decades I have encountered an intelligent American like him. He is so intelligent that Republicans should choose him as their next President. I have discussed his wisdom with my political friends. We are giving him our complete attention.

He is a friend of Shardul and supports his project in the public interest. He has discovered a few drawbacks in our work, and we will improve it further. The United States must focus on Keith’s issues, as it is good both for the US and the world. I assure you that India is focusing. We had been thinking like Keith Curtis for a long time. We had thousands of hours of serious political discussion, and one day as I was leaving the room, Mr. L.K. Advani spontaneously asked me whether someone from Microsoft or Google had written something like that somewhere. I thought the answer was evidently negative. But a few days later I found a book in the British Library, and once again my thoughts about America were transformed.
— Indian political veteran Rajendra Kumar, who has been a close confidant of many prime ministers in India.