Here are a couple 3200×1800 wallpapers I rendered in Blender using the new Cycles engine based on a file I found.
I’ve been running Debian-based distros for my first 8 years with Linux, but I decided to try Arch for my new Lenovo Yoga 2 Pro. Most recently, I was running Mint-Debian, aka LMDE, but with this new hardware, I wanted to be more up to date with the latest software. For example, LMDE currently ships Gnome 3.4, whereas the latest is 3.10. LMDE also isn’t really rolling in the sense that Debian Testing or Arch is, as new things arrive only about 3 times per year. Between the time it takes for changes to get into Debian and the time to get the next batch of updates from LMDE, it can be many months after I read a news article about new LibreOffice features before those bits actually arrive on my computer. By then, I might have forgotten why I was excited about their work in the new release.
I installed Arch but also considered Manjaro. Its big benefit is a graphical installer:
Not everyone is comfortable in a command line, but I believe everyone should be. Today, however, many people, especially those who came from Windows or the Mac, have not yet learned it, and so Manjaro is doing a valuable service by improving the initial out-of-box experience. There is plenty of time to learn more about how your computer works once it is installed and configured. It is surely possible to take the necessary steps and the best practices in the wiki and put them together into a friendly and beautiful installer.
I haven’t tried Manjaro so I can’t say how it would have worked for me. However, given the hardware problems I ran into such as needing to use rfkill, setting the acpi_backlight kernel command-line, needing to tweak the Synaptics configuration to make the mouse close to usable, etc., it would have been almost as difficult for me even with the work they are doing to lower the barrier to installation on this new hardware.
I had to know how to tweak GRUB, systemd, and other configuration files, so I may as well set it up. I hope Linux one day doesn’t require anything tricky to get it working out of the box, but that is not today.
Manjaro has taken the mature Ubuntu installer and ported it to the Arch system. This is valuable for helping new people get into Linux, but assuming the install works, the benefits over running Arch on a daily basis drop to near zero. In fact, running it is possibly less useful than Arch, a topic I’ll discuss later. Note that you can’t switch to the Arch repos after you install Manjaro because your computer might not boot.
Manjaro does provide benefits after installation, but they are of much less value. For example, they have added graphical notification of new updates:
Arch by default handles updates only on the command line, and has no GUI to prompt users when new updates are available. This is a nice but very optional feature as the alternative is just running the trivial command pacman -Syu.
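For reference, keeping an Arch box current is just a couple of commands (a sketch; `checkupdates` ships with the pacman contrib tools and may need to be installed separately):

```shell
checkupdates        # list pending updates without modifying the system
sudo pacman -Syu    # sync the package databases and upgrade everything
```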
There are some things I disagree with about Manjaro. I don’t understand why they have their own repositories with a copy of all the packages when they are mostly focused on the problem of installation. Shipping a different installer entry point into the Arch world doesn’t require making a copy of everything. Manjaro might claim to gain benefits from having separate repositories, but they are changing only a few packages, so the improvements are tiny.
Arch is a bigger team, and so I trust them more to have the latest versions available with important security fixes, etc. Keeping up to date with the latest and best software is an ongoing and time-consuming process. It is the lifeblood of a rolling release. I don’t want to run something that isn’t a part of Arch in that important way. Manjaro should let its users run out of the Arch repositories rather than locking them in.
Manjaro takes released versions from Arch and runs additional tests on them. By doing this, they can claim to be more stable. However, if the underlying problem is that Arch is sometimes unstable, doesn’t that mean it just needs more people running the test builds? Why is waiting to start further testing a good idea?
Consider also the case where useful features don’t get enabled in the Arch packages. For example, if Manjaro wanted to offer x86 kernels with PAE support for machines with more than 4 gigabytes of RAM, it could set up its own repository with just that package. The cost to mirror and distribute a few packages is much smaller.
Or, people interested in doing the PAE work could just help out the Arch kernel developer, who is probably very busy. It is easy to see problems as a reason to create something new and better, but a fork is social engineering and should be undertaken with care. I also don’t understand why Manjaro has its own wiki. It seems like it should just be a few pages on the Arch wiki. The wiki is one of the best features of Arch; why start over rather than improve what is already there?
As a side note, Antergos is another (top-30) Arch-based distro with its own simple graphical installer but at the end points to the Arch repositories. For some reason, this team and community is less well-known than Manjaro. I don’t understand why so many small groups of people think they should make an OS. At Microsoft, a team of 4 developers would be maintaining a codec.
Mike Conlon has written papers about forks in software. In short, he said they should happen when people can’t get along. While we celebrate the diversity that comes from freedom, we shouldn’t always consider forks as a sign of something positive. Imagine living in a world that celebrated divorce instead of marriage. The benefits of community come with size and division of labor.
I don’t know much about the community of Arch. I have long respected Debian for its good people, generality, stance for freedom, sense of community, and vast institutional knowledge. The DPL election process itself encourages civil dialog about how things can improve. However, at some point with the kernel, systemd, Firefox, etc. the OS itself is just the tool to compile and distribute all the other components.
How Long Here?
I have run Arch for 6 weeks. I don’t know how long it will last, or whether I will find sufficient reason to leave. My review documented all the problems I ran into, but none are Arch’s fault. Put another way, I found no real problems in their package manager or their compile and install scripts.
Some people have left Arch, and before I installed, I read reviews and articles to understand their reasons.
I discovered various causes, but I don’t think any of them will be a problem for me. One of the common reasons was that some past transitions were disruptive for users: the switch to systemd broke some systems, for example, and there were others as well. Even if users can fix the problems, the sum of the inconveniences over the years can leave a mark.
The switch to systemd was cited several times, and some thought it was a bad idea being forced on them by a cabal. I personally believe it is important technology modernizing low-level user-mode aspects of the OS. The kernel is a great piece of technology, but at the end of initialization, it starts one user-mode process and then waits for work. The kernel doesn’t even write its error messages to disk, just to a memory buffer; policy decisions such as where to store the files and in what format are left to user mode.
There are multiple free init systems out there, so it is hard for groups to come to consensus about whether to switch and which one is right. Arch, because of its simpler social structures and smaller team, doesn’t have an open debate process like the one Debian is currently going through. However, it doesn’t really matter how you came to the realization as long as it was the right result!
In most cases, Arch can provide multiple choices such as whether to run OpenBox, KDE, or Gnome. Supporting multiple init systems was a choice they did not provide, but that was also a good decision. Via the configuration files and command-line utilities, you can tweak a system in innumerable low-level ways. The idea that you gain any further useful customizability via multiple init systems is incorrect.
Arch was relatively late (mid-2011) in supporting package signing. (The benefit of this feature is that you can independently verify that a package came from Arch, no matter whether you download it from their servers, or a mirror, or any other place on the Internet.) Some left Arch, or threatened to, because it didn’t sign packages.
But at the same time, you need to be the type that wears a tinfoil hat to believe that anyone would secretly disrupt Arch or its mirrors. If the NSA wanted a secret hook, they’d put a gun to Linus’s head. They wouldn’t bother hacking Arch as it would take too long to accomplish anything. It was one of those features that is a good idea because it adds end to end security, but was only ever a theoretical problem for Arch users.
In any case, Arch now has signed packages. It was simply a lack of resources and more people complaining than programming. Arch is run by people who see what other distributions do and can learn from them. You don’t need a formal social organization to be able to learn from the good ideas of others.
Because Arch doesn’t have elections, release parties, or yearly conferences, it doesn’t have as much of a sense of community. Some people left Arch because they found too much elitism and rudeness. I have already experienced it personally from one of the maintainers.
Upon filing a bug, I was told that it would have been easier for me to create the package I wanted than to keep discussing whether it even should be packaged. Of course, making your first package is much harder than installing one, just like making a cake is much harder than eating it. I told him that he should have just added the package like every other distro has, rather than keep coming up with invalid reasons not to, but that wasn’t convincing.
On Reddit, Arch users commented of my previous review that it was my fault for expecting it to work like Ubuntu, and that I deserved no “sympathy” for any issues I ran into. One fan of Arch even told me I shouldn’t be running it! I have been running Linux for 8 years and working with the command line since he was in diapers.
Of course, judging a community by individual emails and Reddit comments is scientifically invalid. Insanity in various forms is currently a pre-existing condition of society, and I don’t have reason to believe it is worse in Arch than Washington, DC or Los Angeles.
But distros that have a mentoring process will find that it removes the distinction between users, who are to be looked down upon, and contributors, who are the masterminds. Every contributor was initially a clueless user.
Arch has a process for moving from user to contributor as well, but it is much more about demonstrating past contributions, after which you are eventually given more upload permissions. It also has a user repository (the AUR) that requires no special rights to become a part of the system.
And in the AUR, there is a mechanism that allows people to vote on packages, and the most popular can be included in the main repository. These feedback loops are great forcing functions to make sure the developers listen to users. Bugs can also be voted on, to help developers prioritize and make the system more democratic. Even though Arch is missing some of the more refined social structures, it has its own.
I’ve also read about how some people left Arch because they had multiple breakages over time and lost trust. However, there are wiki pages that explain how to handle the situation and best practices for system stability. Almost always the problem is a mis-configured system, or a bug in the upstream code. I have two kernels installed, the mainline (3.12.3) and the LTS (3.10.22) version. So if a kernel upgrade breaks, I can always boot to a known-good one. Because Arch doesn’t have the resources of the big vendors to debug kernel problems, it can be nice to have an extra as backup.
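The two-kernel safety net is quick to set up; a sketch, assuming a GRUB bootloader like mine (linux-lts is the Arch package name for the long-term-support kernel):

```shell
pacman -S linux-lts                    # install the LTS kernel alongside mainline
grub-mkconfig -o /boot/grub/grub.cfg   # regenerate the menu so both kernels show up
```

After a reboot, both kernels appear in the GRUB menu, so a broken mainline upgrade is a one-reboot inconvenience rather than a rescue-disk evening.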
That doesn’t solve many other ways the system can break before it finishes booting, but it is nice to have. It is a fact of life that it is very easy to break an OS. Death is the norm in this world. I’ve read stories of people who accidentally configured their computer to sleep in response to the system start event.
The community of Arch is large enough that users run into all kinds of problems that slower-moving distros would never see. Of course, Arch isn’t large enough to debug every problem, but they have the resources to fix the bugs that “everyone” runs into very quickly. There is safety in numbers.
Many of the hardest transitions in Arch are in the past. I expect it to be smooth sailing now. I can think of little ways to improve it, but given that it is running all the latest software, I have no reason to install anything else right now.
Arch is under the radar compared to Ubuntu, Fedora, Mint, Debian, Suse, etc. It gets less than 1% of the news that those others get. It doesn’t have benevolent dictators making hardware deals and memorable pronouncements. It doesn’t have billion-dollar companies supporting it. Somehow, it has managed to survive.
The free software community is still surprising to me. 50% of computer users know of Linux. 10% of them know Ubuntu. 10% of them know Debian. 10% of them run Arch. 1% of them contribute to Arch. There are many good distributions and many people working on them. When you consider that Linux is only on 1% of desktops, it is astounding how many solid choices there already are. Arch is the secret anarchic distro that works perfectly.
I am writing this on my new Lenovo Yoga 2 Pro which comes with a beautiful screen and your Synaptics Clickpad. I have created this petition because there are a lot of problems with your hardware on Linux that are frustrating, time-wasting, and distracting. Right now, the pointer moves when it shouldn’t, and doesn’t move when it should, which is pretty close to the worst-case scenario for a mouse.
I’ve been using your Trackpads happily for years, but with your new model, things are such a mess that they get in the way of my ability to work. What if we users added up all the time we have collectively wasted and sent you the bill? Given that Linux already has about the same market share as the Mac, the total would have a lot of zeros. With the time I’ve wasted, I’d sign up for $200 / year in any class-action lawsuit.
I know you spend most of your effort working on the Windows drivers, and that is fine because Linux drivers are cheaper to maintain; the community will help you. You’ve written a proprietary Linux driver, but you never made it available. Why write something almost no one uses? There is a free Synaptics Linux kernel driver written for your users, but it is buggy and you haven’t ever helped with it.
I wrote a full review of Arch on my new hardware talking about the problems with your mouse in more detail. The point is that your new devices with an integrated button require smarter software to be usable. Well, now that you’ve pushed this new hardware of dubious benefit and definite downsides, what are you going to do?
Synaptics should respect our freedoms, and:
Windows might be the majority, but a big part of that is because the laptop manufacturers expend little to no effort on the alternative. Meanwhile, 50% of their customers would be happier running Linux if it were well set up! We can wonder why it hasn’t arrived on the desktop like it has for cellphones and servers, but in the meanwhile, it would be nice if the effing mouse worked well out of the box.
Linus can’t go around saying F-U to every hardware company, although surely Synaptics deserves it. If you are frustrated with Synaptics hardware, take 1 minute and sign this petition, where you can also add your comments! https://www.change.org/petitions/synaptics-corporation-help-maintain-linux-drivers
Warning: this is a 4,700-word review, so read when you have something to drink and some time. I tried to break it up into two posts, but I didn’t find a good place.
I have been running Debian-based distributions of Linux for the last 8 years, but I’ve decided to try Arch for a new laptop to replace my 2008 Thinkpad. I bought:
I was running such an old laptop because it still worked well, other than the old CCFL backlight, which had become very dim. Lenovo was also not adding much value in their newer models, and was even taking it away. For example, one of the great things about Lenovo machines is that they were user-serviceable, but this is becoming less true.
They are also producing an unending stream of indistinguishable laptop models (143 at last count, compared to Apple, which offers 5 MacBook Pros and 4 MacBook Airs), while making fewer that are user-customizable. Nike can make a fully custom shoe, whereas Lenovo appears to be heading in the other direction.
It has been interesting researching a HiDPI laptop. They are still hard to come by. Half of the models offered on the Dell and Lenovo websites are 1366×768. The rest are mostly 900p or top out at 1080p. Compare that to the Nexus 5, which runs 1080p on a 5″ screen.
Most pictures on the web are made for 96-dpi screens, so the only thing you can do is scale them up, and they don’t look any better. The only noticeable graphical improvement is with wallpapers. I searched on Google for some nice ones, such as this picture of a Maserati that actually looks like it’s inside my computer. (Here are a couple Arch wallpapers.) In the end, I’ll be happy with this machine because the 5.7 million pixels make reading text like looking at the output of a color laser printer.
The most controversial thing Lenovo did since 2008 was change the layout of their keyboards. It was apparently done by people who hadn’t used their previous model. They didn’t show a lot of respect towards the designs they inherited, nor to their customers whose fingers have been trained for years on them. One simple example is how they’ve swapped the left Fn and Ctrl keys. There is no possible benefit from that. The Linux kernel has a policy: “don’t break userspace”. Keyboards are one of the places in computing where backward compatibility is important.
Of course on smaller machines compromises must be made, but Lenovo used to put great keyboards into even their 13” machines. This laptop has almost an inch of empty space on the left and right edges of the typing area. If they had better used it, they could have built a more compatible layout. The chiclet keys work fine, but the idea of talking about an “efficient layout” for a keyboard means that inmates are running Lenovo.
So in addition to the muscle-memory re-learning required for the new keyboard, Lenovo (and others) are in the process of removing the mouse buttons. I worry we are heading towards the world of Idiocracy. If I wanted to re-learn the keyboard and mouse in an unmaintainable machine, I’d have bought a Mac. I almost expect them to next copy the Macbook Wheel:
Fortunately, they’ve got a compromise for now with the one-button ClickPad. Apparently it will still have mechanical wear issues so I’m not sure if it is actually an improvement.
By default, the left and right bottom areas are reserved for virtual mouse buttons. With a Clickpad, it can sense where the fingers are, and can guess whether the user meant a left-click or right click:
It is possible to live with a Clickpad, but it requires smarter drivers. The laptop shipped with a Windows driver written by Synaptics that worked fine. It was overkill with some of the extra gesture features they provided, but it was customizable and not flakey.
On Linux, by contrast, the out-of-the-box driver is almost unusable. I will talk more about the problems in the kernel section below. There is a free Synaptics Linux driver out there, but it isn’t built with any help from Synaptics the corporation. It supports gestures, but it has a number of bugs. Synaptics has apparently written a proprietary driver, but you can’t download it unless you are an OEM and make laptops! Since no one uses it, it is probably a mess anyway. Synaptics is perfectly happy to write drivers that don’t actually get into their customers’ hands. The issue of proprietary drivers on Linux is still a problem for video cards and a few other places, but in general most hardware companies realize it is inefficient and buggy to write closed drivers.
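The free driver’s soft-button behavior can at least be tuned by hand. A sketch of an xorg.conf.d snippet (the section identifier and coordinates here are my own placeholders; the SoftButtonAreas option needs xf86-input-synaptics 1.7 or newer):

```
Section "InputClass"
    Identifier "clickpad soft buttons"
    MatchDriver "synaptics"
    Option "ClickPad" "true"
    # 8 values: right-button left/right/top/bottom, then middle-button edges.
    # This puts the right button in the right 40% of the bottom 18%; 0 means
    # "to the pad edge", and all-zeros for the middle button disables it.
    Option "SoftButtonAreas" "60% 0 82% 0 0 0 0 0"
EndSection
```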
(UPDATE: I’ve created a petition to Synaptics Corporation about Linux drivers. Please sign it!)
CPU / GPU
The new CPU is a bit faster, and because of its smaller process size (22nm versus 45nm), it uses less energy. My old “Penryn”-class Intel 2.5 GHz dual-core CPU required 35 watts, while the newer one uses 15. The Yoga’s CPU is hyper-threaded and runs at 900 MHz to 2.6 GHz, depending on load. Note that Windows thought it was a 2.3 GHz processor, so there is a question as to whether these computers have been given an unintentional lobotomy by a bug in Microsoft’s code. The machine feels a lot faster, mostly because of the SSD.
The Intel HD 4400 graphics are much better than the GMA 965 I had in my old laptop. 2-D performance is plenty fast, even with 4x the pixels of a 1600×900 screen to move. It will run free little games like SuperTuxKart at 60 fps, but Second Life was unplayable at 2 fps.
The speed of an SSD is great. I am getting 500MB / sec reads whereas I would get about 50 MB / sec on my old 7,200 rpm drive. I was not usually waiting on the computer, but when re-booting, starting big apps like Firefox / LibreOffice, etc. I was sometimes I/O bound, so it is nice that this is gone. With an SSD, the computer is snappy.
I was paranoid about reliability, but it depends on factors such as the size of the silicon, the size of the drive, how many bits per cell, usage patterns, etc. My 22nm MLC drive should get 3,000 write cycles.
The typical use is when I’m working on a 1MB document. I’ve adjusted LibreOffice’s autosave to be every 30 minutes, which should be fine because it never crashes for me. With wear-leveling across a 256 Gig SSD and 3000 write cycles, I could write that file 768 million times. Given that I might write 20 revisions per day the drive should last 105,000 years.
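That arithmetic can be checked with shell arithmetic. This is an idealized upper bound: it assumes perfect wear-leveling and no write amplification, both of which are simplifications.

```shell
# Back-of-envelope SSD endurance for a 1 MB document saved 20 times a day.
drive_mb=$((256 * 1000))            # 256 GB drive, in MB
cycles=3000                          # rated program/erase cycles per cell
saves_per_day=20                     # revisions written per day
total_writes=$((drive_mb * cycles))  # how many 1 MB writes the drive absorbs
days=$((total_writes / saves_per_day))
echo "$((days / 365)) years"         # prints "105205 years"
```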
The next biggest write-heavy app for me is Firefox which out of the box was poorly optimized for SSD drives. It usually wrote about 4 megabytes per click, and on one page wrote 70 MB! The following are the fixes I applied:
1. Go to about:config and set browser.cache.disk.enable to false. I also adjusted the cache size to be 50 MB so as to not let Firefox fill up memory with web pages I don’t typically go back to.
2. I next turned off thumbnail generation, which was writing about 300KB per click. Sometimes it even generated two! It seems Firefox should only generate a thumbnail when it decides it actually needs it for the start screen, but this silliness can be worked around. Even worse, after turning off the cache and the thumbnails, I was still getting a couple of MB of writes per click.
3. So finally, I installed a tool called profile-sync-daemon, which sets a link to point your Mozilla profile to the /tmp directory, which is actually just RAM. It lets Mozilla write as much junk as it wants with no wear to the drive. Every hour, it does an update via rsync, which copies back only the bits that changed. With those 3 fixes, Firefox performs well.
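The tweaks from steps 1 and 2 can be kept in a user.js file in the Firefox profile directory so they survive upgrades. A sketch (the thumbnail pref usually has to be created by hand, and the cache value is in KB):

```
// user.js in the Firefox profile directory
user_pref("browser.cache.disk.enable", false);                 // no disk cache
user_pref("browser.cache.memory.capacity", 51200);             // ~50 MB memory cache, in KB
user_pref("browser.pagethumbnails.capturing_disabled", true);  // no page thumbnails
```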
When adding in system maintenance, media, etc. I should write about 200 MB / day. With that, the drive should last 10,500 years.
More good news is that SSDs are quite smart about fixing problems transparently. A drive has reserve blocks, and when it detects a problematic cell, it moves the data over to a spare and keeps going. Of course, when one cell goes, wear-leveling means all the other cells are also old and the drive should be replaced. One big difference between SSDs and HDDs is that old flash cells can lose their data entirely if turned off for days, so backups are still valuable.
Linux has a utility called smartctl which can run diagnostics, and tell you information from the controller such as the number of times the cells have been erased, how many spare cells are in use, etc. This way you can monitor it. I’m sure there is some GUI for Windows, but I prefer a text-mode app anyway.
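A sketch of what that monitoring looks like (smartctl comes in the smartmontools package; /dev/sda is an assumption about the device name, the command needs root, and the attribute names vary by vendor):

```shell
# Print the SMART attribute table; on SSDs look for wear/erase-count entries
smartctl -A /dev/sda | grep -i -E 'wear|erase|reallocat'
```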
Part II: Arch
Now to the OS: I decided to try Arch because while there are many good desktop distros, I wanted to try a rolling one that updates nearly all of its packages within a day of the component shipping a new version. Explaining my reasons against every other distro would take too long, so I will just mention why I decided against Debian Unstable, which is also rolling. (I wrote a separate article about why not Manjaro, which you can read here.) Debian used to have a reputation of reliable staleware, but this is less true:
Debian is generally close to upstreams, but there are still some problem areas: their integration of Gnome is 6 months behind. It is an interesting question why Debian, which is a bigger and older team than Arch, can’t keep up. The good news is that Debian is generally on a good trend with more people joining:
Debian is doing better than it used to, but Arch is close to perfect in regards to keeping up with the thousands of Linux applications.
Another downside of running Debian’s rolling release is that the official goal is to make the next stable version, so there isn’t the social culture that exists in Arch where everyone is depending on the build to always work.
I found lots of bugs while running Arch, but none that exist in their code, only in the kernel and the other applications they integrate. The faster I can get the latest code, the faster I can have a better Linux experience.
Arch’s primary assets are build / install scripts (here is the one for LibreOffice) and a wiki. The wiki is superb. I never considered using the non-existent one for Mint. When I ran Ubuntu, it sort of did everything for me, such that I never needed to read up on anything. I followed Arch’s Unofficial Beginner’s Guide, and after that spent some hours reading various articles about how to use and enable fancier features in the software. Using the command line is necessary for Arch, but the set of utilities required is not very large, and you can learn as you go. The wiki and the community assume people are comfortable with the shell, which is good because it also keeps the instructions simple and fast. Some people with little familiarity with computers sneak through because the wiki docs are so good that any literate person can run Arch.
One of my big decisions was to just forget about the UEFI support, which makes installing Linux twice as complicated. Lenovo doesn’t seem to offer any other OS for purchase on their laptops, but fortunately their BIOS still has the ability to boot in “Legacy” mode, which allowed me to ignore all the stuff I don’t care about.
In order to switch back to the old (MBR) partition table, I had to wipe my hard drive and couldn’t dual-boot into Windows 8.1 again. But after spending a few hours with it, I decided it wasn’t worth the 40 gigs of space. Linux on the desktop still seems far away, but there are lots of coders out there so it is useful and empowering and interesting to watch. My install is currently using only 4 gigabytes, so it seems a waste to reserve 10x the space on something bloated and inferior.
Legacy BIOS boot mode worked fine and I set things up the way I was used to with two partitions: a 30-gig root partition for the OS and applications, and the rest for “/home”.
I couldn’t just copy over the entire home directory from my old machine. I tried, but Gnome 3.10 got completely confused with missing icons, ugly widgets, etc. So I started fresh and copied things over selectively.
The install process was mostly straightforward, but with a few tricky spots. I needed to run:
# rfkill unblock wifi
on every boot to get the network card to function. Even after that command, wireless doesn’t wake up and start working on its own, so I also need to use:
# wifi-menu wlp1s0
Another tricky issue was that when I set up the machine, I didn’t know to have the package manager install the rfkill command. Rfkill comes on the Arch install media, but it isn’t installed to the drive by default. And so when I booted into text mode for the first time, I couldn’t connect to the network to get the GUI and the rest of the packages I needed.
And so I went back to the USB install stick. I had to follow the instructions from the top, but I was able to skip many of the steps such as the need to actually copy over all the bits. When I got near the end, I installed the package:
# pacman -S rfkill
and then I was ready to reboot into the OS, connect to the net and install everything else.
Another little issue I ran into is that on most operating systems, when you install Gnome, you also get the GDM display manager, but on Arch, this isn’t automatic, nor does anything prompt you. I needed to run:
# systemctl enable gdm.service
Fortunately, that line was at the top of the GDM wiki page, so it didn’t take me long to fix my problem. I’ve had to fiddle a bit with systemd while using Arch, and it seems to be a simple and reliable way to manage low-level aspects of the OS.
I like how the software in Arch is so granular. Gnome is broken up into two meta-packages, an essential one, and a collection of extras. I installed the main one, and some of the extras.
With LibreOffice, I installed Writer, Calc, Impress, and the English proofing tools. Before, I used to blindly download and install all the DEBs on every release, but with Arch I took the time and installed only the pieces I care about. Only those pieces will be downloaded and updated every few months as new versions come out.
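The selective install looks roughly like this (package names are from my system at the time; check with `pacman -Ss libreoffice`, since they change):

```shell
pacman -S gnome                   # the essential Gnome group
pacman -S libreoffice-writer libreoffice-calc libreoffice-impress
pacman -S hunspell-en             # English proofing tools
```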
Eventually, I did decide to fully re-install the OS. The first time, I put the 32-bit version on, but I discovered later that there is a kernel 3.11 bug (https://bugzilla.kernel.org/show_bug.cgi?id=61781) where the machine won’t resume from suspend properly. I saw it on a 2008 Lenovo and a 2013 Lenovo, so it was probably a regression for a large number of computers.
The subsystem maintainer has since stepped in and reverted the patch because the underlying maintainer hasn’t had time to find a proper fix. (Presumably the change was made for a reason.) Resume was only broken on x86 so switching to 64-bit Linux worked around that problem.
Also, the Arch kernel doesn’t enable PAE in their builds, unlike most other distros, so it could only see 4 gigabytes of memory. My old laptop had 2 gigabytes, which was typically more than enough, but I decided that the extra memory could be useful as a RAM drive. There are PAE builds in Arch’s user repository, but they are not widely used or supported, so switching to x86-64 solved that problem as well. I’m guessing a number of kernel developers are running 64-bit nowadays because otherwise the resume bug would have gotten fixed a lot faster, so perhaps 64-bit is more reliable.
I generally ran into 2 classes of problems, hardware bugs and HiDPI issues:
1. The screen’s backlight doesn’t work unless I put “acpi_backlight=vendor” into the kernel command line. Fixing this issue involves changing the GRUB bootloader configuration, but the wiki was clear and explained how it could be done. As a side-effect of this fix, the laptop buttons to adjust the brightness no longer work but I decided the tradeoff was worth it. I can do:
# echo 850 > /sys/class/backlight/intel_backlight/brightness
to tell the LEDs to back off a bit, but I don’t mind full brightness. I should get about 5 hours on a full charge, which is more than enough for my typical uses.
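For reference, the kernel command-line change from item 1 lives in /etc/default/grub; roughly like this, after which you regenerate the menu with `grub-mkconfig -o /boot/grub/grub.cfg` (the "quiet" flag is just the stock default on my install):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi_backlight=vendor"
```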
2. As I mentioned in the install, I need to unblock my wifi card to get wireless working.
3. The Synaptics Clickpad is almost unusable. In fact, for a while I was sorry I didn’t keep Windows around because a mouse on crack can prevent you from being able to focus on your work. The Synaptics driver built by the community is quite rich, but it is a pain to use:
4. The system still doesn’t always work when coming out of resume. Things are better with the x86-64 builds, but the mouse pointer and wireless don’t always come back, so I sometimes need to reboot.
5. The Intel graphics driver leaves artifacts when using SNA acceleration:
6. The cpupower command never shows the processor getting down to its minimum of 800MHz to save heat and power, even with the Intel powersave governor, which is supposed to be able to do that.
7. My logfile fills up with USB (and other) error messages.
usb 2-7: unable to read config index 0 descriptor/start -71
8. Even though I mount my SSD partitions with the “discard” option, if I run fstrim after a reboot, it still reports large amounts trimmed, which suggests the online discard isn’t actually working:
# fstrim -v /
/: 26.2 GiB (28103872512 bytes) trimmed
# fstrim -v /home
/home: 204.8 GiB (219841617920 bytes) trimmed
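A common workaround, which I have not settled on myself, is to drop the discard mount option and trim on a schedule instead; a sketch using a weekly cron script (the schedule and mount points are just examples):

```shell
#!/bin/sh
# /etc/cron.weekly/fstrim: trim once a week instead of on every delete
# (remember to make the script executable with chmod +x)
fstrim -v /
fstrim -v /home
```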
9. There is an option to disable hyperthreading by booting the kernel with maxcpus=2, but it appears to also disable the ability to adjust the CPU frequency.
In general the screen is beautiful, but I ran into several HiDPI problems.
1. Sometimes Gnome uses big cursors and sometimes it reverts to old habits of small ones:
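As a workaround, Gnome’s cursor and scaling settings can be pinned by hand with gsettings, which may help when it flips between sizes; a sketch (keys as of Gnome 3.10, and the values assume a 2x screen):

```shell
# Pin the cursor to one (large) size so it stops changing
gsettings set org.gnome.desktop.interface cursor-size 48

# Render the whole Gnome UI at 2x for the HiDPI panel
gsettings set org.gnome.desktop.interface scaling-factor 2
```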
5. Many apps like LibreOffice, Gimp and Audacity have toolbar buttons that are too small to recognize, let alone click. LibreOffice’s dialog boxes and text everywhere look fine, even pretty; it is just the buttons that are not scaling. Even LibreOffice’s large-icons setting doesn’t make them big enough. Interestingly, Apache OpenOffice does a better job here, as it sets the toolbar button size based on the system font size. All apps should do that.
6. Cinnamon and Xfce cannot detect that the screen is high-resolution and so need lots of tweaking to become usable.
7. Firefox needs the NoSquint plugin to make websites display at a reasonable size. Apparently nothing has been done in the product to officially detect the DPI of the screen, but at least there is an easy workaround. There are still some problems, such as the tiny buttons on the YouTube player, but it is manageable.
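Besides NoSquint, Firefox has a hidden preference that scales the entire UI and page content; a sketch of setting it via a user.js file in the profile directory (the value “2” assumes a 2x screen, and it can also be set through about:config):

```
// user.js in the Firefox profile directory, or set via about:config
user_pref("layout.css.devPixelsPerPx", "2");
```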
Fortunately, as laptops with high-resolution screens become more popular, these bugs will get noticed. There is a lot to be done. I’m surprised Gnome has so many issues, as their 3.10 release announcement made it seem like they had fixed them, and I don’t see any mention of further work.
I did try Windows 8.1 for a few hours, and it had mixed results in handling such a high-resolution screen, so I’m not necessarily any worse off with Linux. Teams in the free software community release code two to four times per year, so I’m sure it will get smarter relatively soon.
The good news is that Gnome’s Classic mode is almost good enough for someone who was happy in Gnome 2, but there are still various annoyances: most people in Gnome 2 used the left and right arrow keys to scroll through their workspaces, but in Gnome Classic you have to use up and down.
I may try Ubuntu’s Unity GUI, as there are builds for Arch, but Unity requires patches to so many upstream components that I may not want to risk it. I’m surviving with Gnome Classic plus some extensions, and I’ll be able to use this machine full-time. So, in spite of all these problems, I don’t think any of them are Arch’s bugs. The state of Arch is really the state of the Linux desktop. I’m surprised it isn’t among the top five most popular distributions on Distrowatch.
Linux on the desktop
It seems not one person at Lenovo installed Linux on this new Yoga before they released it. There are a number of bugs they could easily have fixed; the average kernel patch is about 20 lines of code. Unlike with Windows, they can make fixes anywhere they find problems. They also don’t need to build a multi-lingual installer to distribute drivers: they just improve the actual code and let the rest of the community distribute it and provide the UI.
If they had put in the same amount of effort towards Linux that they put into their fluff Windows UI nagware, it would be more than enough to make sure the Linux desktop worked well out of the box. There are even Lenovo-specific modules in the kernel that they aren’t contributing to. Other hackers are expected to figure out how to make every new model work by reverse-engineering.
I see lots of Linux users on Lenovo hardware, but because Lenovo doesn’t offer Linux, they have no idea how many users are running it, or how many would be happier if it came pre-installed. Lenovo has 33,000 employees and Dell has 100,000. They could easily enable a great out-of-box experience with a relatively small team. If I ran Lenovo, everyone would be required to dual-boot. It would be nice if they saw the trends in computing from cell phones to servers and realized that they should be part of a better future.
They could also customize it more nicely and more cheaply rather than building a bunch of junk outside and on top of Windows. The community would even help them. When I first started using Linux in 2005, I thought that Linux on the desktop would soon happen, but even in 2013, things haven’t improved much. I guess we Linux users need to complain more loudly to the employees of the laptop manufacturers.
The good news is that at least on the hardware side, the kernel has thousands of people, and so they in theory have the resources to fix these issues quickly. I’m looking into filing bugs, but there are already 25 active against the Synaptics driver alone so I’d need to do research and make sure I’m not filing duplicates.
One challenge for the kernel is they sometimes let bugs languish in their database for years. It was recently Halloween, so check out these 400+ scary bugs marked as regressions: http://bit.ly/LinuxRegressions. The Linux kernel currently has 2,194 bugs: http://bit.ly/LinuxBugs. Even scarier, a messy and unprioritized buglist discourages people from even entering bugs.
Linus and the other top maintainers see their job as making sure the hordes of programmers don’t screw things up, and that is a big key to its success. It requires a lot of work to watch over the massive rate of change. The problem is that the team focuses so much on evaluating the stream of incoming patches that they sort of ignore users. The Linux kernel has a policy of no regressions, but there is no enforcement.
One of the important things I learned at Microsoft was to try to get down to zero bugs as best you can. An early internal memo recommended that developers stop adding new features once they had 10 or more bugs assigned to them. Some people might laugh at the idea that Microsoft tried to write reliable software, but the poor results came from richer, more complicated, and older codebases. Peter Drucker had a line: “What’s measured improves.” If you never looked at yourself in the mirror or weighed yourself, you might get heavier than you realized. Linux runs on millions of machines, but there are billions of them.
Another issue is that the kernel dev community is not equally well-staffed in every part of the codebase. There are a lot of ARM contributors, but they can’t fix the random laptop issues. In a few cases, it seems like the hardware will be phased out before it works with Linux. The benefit of a bug list is that it helps you find the areas that need extra brainpower.
In many teams in the free software stack, people are working on their bugs as fast as they can. Groups that have 1000s of active bugs like LibreOffice are simply understaffed and need more money and people. In the case of the kernel, it is partially an uneven resource problem, but also a cultural issue.
The worst part about the current large active bug count is that it is the fans of Linux on the desktop who are getting hurt. There is a world of people out there who are inspired by Linux but can’t make kernel patches. Everyone who files a bug in the kernel appreciates this amazing thing that has been built. They would like it to be used everywhere, including on their computer.
In spite of the problems, I’m happy in Linux. With applications of every kind, it changes how you think about an OS. If I ran Windows, I’d still be using Firefox, LibreOffice, Audacity, VLC, Gimp, Python, Git, Tomboy, and other free software. Windows succeeds because of laziness and inertia. There are some applications that don’t run on Linux: games, Ableton, Solidworks, iTunes, etc., but for most uses today it is better code. I’m looking forward to the next few years of Linux on my laptop.
Note: I’ve created a petition to Synaptics Corporation about Linux drivers. Please sign it!
LibreOffice 4.0 was launched last week, and the news reports and activity on social media were massive, more than any release of LibreOffice or OpenOffice before, with better coverage than many of Microsoft’s well-funded introductions. There were numerous links sent around to the usual sites like LinuxToday.com, but also TechCrunch, VentureBeat, Time Magazine, etc. A fair amount of the chatter was people wondering what the difference is between the two versions. Some have basic questions like whether LibreOffice can import their OpenOffice documents.
LibreOffice is introducing their new name and community to the world. All the major Linux distros are already aware, but there are many Windows and Mac users who don’t understand what is going on. People even become attached to names for emotional reasons. Brands are powerful. If you were in a remote village in India on a hot day, you’d quite likely grab a Coke to cool your thirst if that was the only one with letters you recognized. Even people who like to travel and try new things might not want to take a risk on something that looks like carbonated, used bathwater with funky characters when they are tired, hot and thirsty.
In the realm of software, the considerations are different but related. Many are afraid to try new things because technologies so frequently come and go. People have been burned by Farmville, Zune, Tweetdeck, iTunes, Nvidia, Comcast, AT&T, Sprint, Sun, Adobe, Gnome 2.x, Microsoft, IBM, etc.
Some people look down on the LibreOffice / OpenOffice codebases because the user interface is more clunky than Microsoft’s Office, but many who spent time in it saw how it handled their files, has many features, and is generally stable, fast, portable and free. People became attached to “OpenOffice” during the hours they spent expressing their creative ideas. Many attach greatness to the name rather than to the people who built it. This makes people uneasy about trying LibreOffice.
If you were to explain to OpenOffice users that Oracle laid off all the programmers before handing the trademark to Apache, and their new team is legally unable to accept changes made by LibreOffice, they might realize they should try the newcomer. That disclaimer is currently not on the Apache website. It would also be a useful warning if they listed all the features missing from LibreOffice. The current full list is already mind-blowing (4.0, 3.6, 3.5, 3.4, 3.3), and they are just getting started (Easy hacks, GSoc).
The biggest issue to consider is the opportunity cost. Instead of enhancing the existing OpenOffice brand, the community is forced to rebuild a new one. That is especially unfortunate because there are many people in LibreOffice who contributed to OpenOffice and made the brand worth what it is today. As Apache OpenOffice is unable to accept LibreOffice changes, the brand is being squandered. And instead of adding resources, Apache is playing catch-up, mandating an inferior license for this codebase and inferior tools.
Because Apache OpenOffice has the brand, and a handful of full-time employees working on the codebase, they can always find ways to report good news and give the illusion of progress: “There have been 35M downloads, which saves the world $21M per day.” “Who wants to help with the wiki?” “We’ve now got 6 workitems tagged as Easy Bugs.” “Can someone dig up the documentation of our SDF format?” “It would be great to get someone to package OpenOffice into Fedora and give users choice.” “We found 50 naive^Wnew volunteers to help with QA in our recent call for help.” Etc.
This was an exchange that took place during Michael Meeks’ interesting Fosdem 2013 talk:
People in the Linux community are aware of the situation, but many don’t realize that there is very little LibreOffice can do to improve things. LibreOffice cannot prevent new forks from being created, and no one inside was threatening to fork. LibreOffice couldn’t prevent Oracle from giving away the trademark to anyone. LibreOffice couldn’t prevent Apache from creating a project that doesn’t accept their code. LibreOffice can’t prevent new people from getting confused when they see Apache, OpenOffice, and a pretty website, not realizing this is basically the “pet project” of an IBM employee.
It seems like people inside Apache could do something, but many of them liked the idea of having two “cores”. They see themselves as the upstream with the more open license, and LibreOffice is free to grab whatever code they find useful. Unfortunately, they don’t realize that as these codebases diverge, this becomes harder. LibreOffice no longer uses the SDF format for localization. So between the confusion, and the illusion of progress funded by a stream of money, we could be here a while. IBM has been around for 100 years. Perhaps they’re happy to wait until everyone is dead and hope the next generation of LibreOffice representatives is more amenable to their plans. As for things getting better, the best sign to look for would be IBM sending their representative new directives from the Home Office. You do see comments stating they’d like to end the fork. If only they had that wisdom before they created one. However, it appears they have no idea what to do next. More wisdom is yet required.
LibreOffice is doing very well for such a young team. The free software community is jumping in and improving the codebase in many ways. However, the community could easily use millions of dollars to hire more people to work full-time and mentor volunteers. Perhaps the greatest concern is a lack of people who understand the Writer layout code, which is the most complicated piece of logic in the entire suite. Code and people are valuable, but people who understand code even more so.
Note: I write about LibreOffice / OpenOffice because I don’t like to see brands and volunteers wasted.
With big decisions, it is nice to have a paper trail. I can find no supporting documentation backing up the decision other than one blog post written after the fact, which doesn’t give very much information.
It appears the decision was made in a meeting. It is great to have meetings to discuss things, and it is great to make decisions in meetings, but oftentimes the best results are about moving the decision-making process forward, not actually committing to big things. Even if there were many in that room, there are surely facts they didn’t have, and other interested parties who were not there. There is the risk of “tyranny” by a self-selected cabal. Hopefully the decision wasn’t made at a bar.
To be clear, some of the reasoning is explained. Here are my responses:
Python and many languages fit that description.
Next, it says:
Unfortunately, that is not really much of a reason. In fact, it could be perpetuating a bad plan with this logic.
Every language works to make itself fast. There are lots of efforts to make Python fast such as Cython and PyPy, and as many Gnome libraries will remain in C, this is hardly an issue even with the standard CPython implementation.
I’m not sure what the benefit of being embeddable is for a desktop UI, and Python is embeddable as well, inside apps like LibreOffice. I also don’t understand the benefit of being framework-agnostic. Every language needs libraries, and a rich set of libraries is a good thing.
Aren’t Windows 8, mobile, and local web applications supposed to be a worse experience than a Linux desktop? I imagine living exclusively in any of those platforms and shudder at the thought. They also aren’t planning on sharing code with any of those groups. Please don’t try to convince people the Gnome future is bright by using those three examples!
My day job is trying to finish a movie (trailer) endorsing Python as part of math literacy. Changing how math is taught to children could take a generation, but if Gnome gets going now, they will be ready, and hopefully also be better than Gnome 2.x by then. (I’m stuck in MATE. I believe the decision to remove Gnome 2.x was as good an idea as LibreOffice removing DOC import. This decision can also be revisited, but given how long ago it was made, I’m sure people are tired of the topic, so I will end here.)
Congratulations on leaving Microsoft. Unless you have bills to pay, you won’t regret it. I left at the end of 2004, and have since studied the vast, amazing, but still flawed world of computing out there.
For example, I discovered that we should already have cars that (optionally) drive us around and computers that talk to us. And that Linux on the desktop is powerful and rich but failing because of several strategic mistakes. Google claims to be a friend of Linux and free software, but most of their interesting AI code is locked up. Programming should be a part of basic math literacy for every child. The biotechnology world is proprietary like Microsoft, which is stunting progress in new medicines and safer devices.
The most important lesson is that the free software world outside Microsoft is much bigger and richer. No matter what aspect of technology you want to work on, there are codebases and communities out there. Even the large companies who write proprietary software like Amazon, Apple, Facebook, and Twitter use free software as their base. So you first find out what you want to work on, and then you find the existing codebases and communities to join. In some cases, there are multiple, so you need to decide which best meets your needs.
The good news is that there are already millions of smart people working on any aspect of technology you’d like to work on. That is important because now that you have left Microsoft, you lose much of the ability to control your own destiny using their technology.
When I first left Microsoft, I took on a consulting job helping a team build a website which used Microsoft Passport as the authentication mechanism. However, as I ran into problems, even Google wasn’t able to help because the knowledge and ability I needed to fix my problems was locked up behind the Microsoft firewalls. Fixing a problem in proprietary software can sometimes feel like performing witchcraft — you have to try lots of random incantations because you can’t know what is really going on. In the free software world, the code, buglists, specs, discussions, etc. are public, and anyone is welcome to contribute. A warning though, it can be like herding cats.
I read you have a Microsoft Surface. I recommend getting another machine and installing Mint-Debian Linux. You’ve probably heard of Ubuntu, but Debian is the 1000-person team that provides the rock Ubuntu builds upon. Mint is a very popular re-spin that adds mp3 playback and other features that have patent risks and can’t be part of the free Debian system. The Windows app store is a Potemkin village compared to what Linux offers. I remember you have a Unix background, so I recommend refreshing your knowledge of the command line and reading some new books. I felt like a stranger in a strange land for the first couple of months, but it became perfectly comfortable to me, and has numerous advantages, such that now I am about as interested in using Windows as I am in using DOS.
I don’t recommend you bother with Apple. They have a proprietary walled garden even smaller than Microsoft’s. If you find a problem with Apple’s technology, your best option is to wait. If you find a problem anywhere in the free software world, you can file a bug, talk to a person, (usually) find a workaround, write some code, hire someone — or wait.
The other nice thing about this global community is that you don’t have to go anywhere to join. You can write code in your pajamas from Seattle and send it to Linus Torvalds in Portland who works from home in his. The Linux kernel alone has 3,000 programmers, scattered all over the earth, some of whom live in countries that are officially at war with each other.
Enjoy your new-found freedom. I have written a book about much of this, which you can read for free. It contains many things I didn’t know until I left. There are many news sites to learn about what is going on in Linux. I personally use LinuxHomePage, but every community has blog aggregators.
I have just watched this video by Global Futures 2045:
This is my list of things I disagree with:
I moved my response here.
Copyright © 2013 keithcu.com - All Rights Reserved