RadeonHD and kompmgr

Thanks to the help of the guys in the #radeonhd channel on Freenode.net, I quickly figured out how to make DRI (and thus EXA) work with my laptop's onboard graphics... thingy... I'll let lspci speak for itself; it's much easier that way:

shadowm@bluecore:~$ lspci
00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
[...]
01:05.0 VGA compatible controller: ATI Technologies Inc RS780M/RS780MN [Radeon HD 3200 Graphics]
[...]

Let's not forget that this is my current laptop (it was new in December), not the old, broken lappy, and that I'm running Debian stable.

The thing is, with their help I installed a newer-than-Debian (or actually, newer-than-the-kernel-that-is-newer-than-Debian's) radeon drm module, and also fixed a little mistake that made X lock up after resuming from suspend-to-disk with DRI and EXA enabled. Now I can finally enjoy some real 2D acceleration!
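For reference (mostly my own), the relevant part of my xorg.conf ended up looking roughly like the snippet below; the option names are the ones I remember from the radeonhd man page, so double-check them against your version before copying anything blindly:

Section "Device"
    Identifier "RadeonHD 3200"
    Driver     "radeonhd"
    Option     "AccelMethod" "EXA"   # 2D acceleration through EXA
    Option     "DRI"         "on"    # direct rendering; this is what needs the newer radeon drm module
EndSection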

Although it seems, judging from the git repository's logs, that official EXA and XRender acceleration support for this chipset is pretty recent, it is very stable compared to a certain other driver for another onboard chipset which I won't mention here. I could even enable KWin's compositing manager (kompmgr)!
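In case anybody wants to try the same, this is more or less what it took on my side besides the driver work above. The Composite extension bit I'm sure about; the kwinrc key is from memory (the Control Center has a Translucency tab somewhere under the window behavior settings that flips the same switch), so take it as a sketch rather than gospel:

# xorg.conf: the Composite extension must be enabled, or kompmgr won't even start
Section "Extensions"
    Option "Composite" "Enable"
EndSection

# ~/.kde/share/config/kwinrc
[Translucency]
UseTranslucency=true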

I have not tried the other big compositing window managers (e.g. Compiz), nor do I intend to, as they require hardware OpenGL support which is not available yet for this chipset; and since they are labeled "window managers", I'm guessing I'd definitely lose the KWin look-and-feel that kompmgr, by design, doesn't try to replace.

Nonetheless, my very limited candy set-up has an incredibly low CPU overhead (either that or the radeonhd driver is f***ing awesome), and I can even leave it enabled while in powersave mode; no matter what I do, I don't see any impact on other programs' performance. I couldn't say the same about radeonhd before getting EXA.

One really odd thing I noticed, though, is that kompmgr doesn't seem to want to have anything to do with translucency for pop-up menus. As a side effect, it is KDE (Qt 3?) itself that has to provide an option for that, in the Control Center -> Style page. But even if I tell it to use XRender acceleration, something doesn't fit: changes happening in windows below a pop-up menu are never reflected, i.e. the pop-up menu's rendering is completely static, unlike regular translucent windows. I don't know what benefits this design decision may bring, but it's KDE 3.5.10 anyway, and KDE 4 users would laugh at my obsession with not switching to 4.x.

Note that I am aware that 4.2.x has fixed a lot of the stunningly awful usability and configurability regressions seen in earlier 4.x versions. But I am still pondering why all Qt 4 widget engines are pretty slow compared to Qt 3; so, until I get an answer of the kind "it's just you" or "it can be solved" or "it has been solved already", I won't consider switching to KDE 4 an option.

Build systems strike again

Since the introduction of the testing SCons-based build system to the [Battle for Wesnoth](http://www.wesnoth.org) project, I have been repeatedly annoyed by its decreased performance in comparison with the old autotools-based system we were using, and by its increased power consumption on my laptop.

I felt alone in this world... since March IIRC, until I stumbled upon our Debian packager's blog entry about it just yesterday.

I cannot deny that Loonycyborg and ESR have done a great job improving the SCons build recipe for Wesnoth over time, but there are tiny issues they cannot overcome without modifying SCons' source code itself and requiring all our users to run a patched version of SCons. 😕 Beyond that, it annoys me that users have to install a non-GNU tool just to be able to see the build options for the software. This is certainly one of the good things about autotools: you generate the processed recipe and it will run on any machine with the UNIX or GNU coreutils, a compatible sh and make! That is, of course, assuming the author of the raw recipe (configure.ac/in and friends) did not resort to what is called "bad practice" in it (bashisms, silent environment requirements, etc.). I have yet to find a processed autotools recipe (Makefile.in, configure) from a released source distribution of any FLOSS project that fails to run and do its job.

So with autotools one rarely needs to install the recipe-processor tools (aclocal, autoconf, automake, autoheader, autopoint) just to build the software rather than develop it. Yet with SCons and... CMake (the other candidate replacement for autotools at Wesnoth), it is necessary to install the equivalent of an Apache server on the client machine to do the equivalent of downloading a set of files from a public area of the server. Why?
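To make the contrast concrete, this is roughly how the two look from the user's chair (the commands are generic examples, not Wesnoth's exact invocations):

# autotools: a release tarball ships the processed recipe; sh and make are enough
$ ./configure --help    # lists the build options without any extra tool
$ ./configure && make

# SCons: the tool itself must be installed before you can even list the options
$ scons --help          # fails outright if the scons package is missing
$ scons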

That said, I'm personally sticking to autotools for managing builds (and having fun tailoring them to temporary needs at times!) in my personal project, Mesiga, until the GNU project comes up with a better solution... should that be possible. Moreover, I'd personally maintain the autotools recipe in Wesnoth if our Release Manager hadn't been so persistent about the "let it rot" policy.

The fact that the SCons project's homepage is filled with propaganda from big names in the software industry, such as id Software and ESR, is even more disturbing to me. It makes me think it... it... IT IS A TRAP! 😮 The "What makes SCons better?" section is fearsome... to me it looks like featuritis. There are so many features built into SCons that they overwhelm me. 😕 Why do users have to install this big piece of software on their machines if they just want to build something, I mean?

Thank goodness they didn't make adding new source code files to targets harder than it is with autotools.
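Just for illustration (the file and variable names below are invented, not Wesnoth's actual recipes), adding a file is a one-line change in either world:

# Makefile.am (autotools): append the new file to the target's source list
wesnoth_SOURCES += src/new_feature.cpp

# SConscript (SCons): likewise, append it to the Python list of sources
wesnoth_sources.append("src/new_feature.cpp")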

Half-assed commits

During my work on the Coordinated Wesnoth User-Made Content Development Project (which we dub "wesnoth-umc-dev" for short), I came up with an interesting concept related to Subversion's standard workflow. Half-assed commits are commits to the Subversion repository that are not completed because the Subversion client (or server!) process died unexpectedly, usually due to anything but a SIGTERM.

The obvious symptom of a half-assed commit in your local file system is a bunch of 'L' flags in the svn st output. These can be removed with svn cleanup, so most half-assed commits are harmless to you. However, according to the (holy) Subversion Book, they may leave garbage, half-assed transactions in the repository. Those are not visible to anyone but the repository admin, of course, and should not harm anyone provided the filesystem the repository resides on does not run out of space.
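The whole thing looks roughly like this (paths invented for the example):

$ svn st
  L     campaigns
  L     campaigns/scenarios
$ svn cleanup    # releases the stale locks left behind by the dead client
$ svn st
$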

😐 Yesterday afternoon I ran into a more harmful and painful sort of half-assed commit. I renamed some files in my working copy, invoked svn ci, and my crappy wireless LAN connection burped just when it was about to update the working copy with the changes introduced to the repository:

Transmitting file data ...svn: Commit failed (details follow):
svn: MERGE request failed on '/svnroot/wesnoth-umc-dev/trunk/Invasion_from_the_Unknown'
svn: MERGE of '/svnroot/wesnoth-umc-dev/trunk/Invasion_from_the_Unknown': Could not read status line: Connection reset by peer
(https://wesnoth-umc-dev.svn.sourceforge.net)
svn: Your commit message was left in a temporary file:
svn: '/home/shadowm/src/wesnoth-umc-dev/trunk/Invasion_from_the_Unknown/svn-commit.2.tmp'

Unsurprisingly, I was left with my files in an awful state that caused local conflicts with the repository. That is, the next svn update failed because the commit above had actually succeeded on the server, leaving the renamed files in the repository. SVN just didn't like that on my end, because those renamed files were already present in my working copy as a result of the svn move I had just (half-ass) committed.
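In case someone else hits the same wall, here is a rough sketch of the kind of cleanup that ought to bring the working copy back in sync; NEW_FILES stands for whatever renamed files were left behind unversioned, and when in doubt, a fresh checkout is the lazy but safe alternative:

$ svn cleanup        # clear any stale locks left by the dead client
$ svn revert -R .    # unschedule the local renames; the server already has them
$ rm NEW_FILES       # the renamed copies stay behind unversioned and would obstruct the update
$ svn up             # now the server-side renames and deletions come down cleanly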

Thanks for nothing, SVN! Seriously, the protocol should have the server request a final confirmation from the client before checking in the transaction, once its changes have been merged into the client's working copy. Or the inverse: have the client react in a smarter fashion to these situations that people like me often run into.

Mozilla Firefox 3.0

openSUSE 10.3 ships with Firefox 2. I switched to Firefox 3 from the "Mozilla" repository a few weeks after it came out (missed the download day/party). So far, so good. Most user interface changes are nifty, except for the change to the History menu layout: the history sidebar can no longer be enabled from there, unlike in previous versions; Ctrl+H or the View menu must be used instead. Awkward, but I can live with it thanks to keyboard shortcuts.

Pages embedding Flash (YouTube amongst others) work fine. No crashes when watching Flash videos, even though I use a crashy X.org display driver ('radeon'... don't even ask about 'fglrx').

The problems come when I use seemingly simpler features that I have known since at least Firefox 1.0. I am a laptop user, and I'm often disconnected from the Internet, so I use the browser's cache to read pages I have already skimmed, since I can't be bothered to keep zillions of HTML downloads in my home dir. That's where things go wrong.

Seemingly, this version of Firefox crashes at random, especially when its session has been running for a long time (t > 30 min) and one does stuff in the page view area, such as scrolling or clicking on text, while the sidebar is active. What a pity, because I like the history sidebar much better than the new separate "Full History" window. A bigger pity is that the cache gets invalidated after a session crash and its contents get wiped out. Of course... I'm not the best person to judge whether this is a bug or a feature, since I don't know much about computer security; but I can tell it annoys me to the point that I have to keep backups of the cache et al. after closing Firefox successfully:

$ rm -rf ~/.mozilla2 && cp -rf ~/.mozilla ~/.mozilla2

Then if Firefox 3 crashes, I restart it, close it again, and restore the Cache directory from .mozilla2/firefox/SeeminglyHashedSessionId so I can continue reading from it when I'm offline.
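In other words, the restore step amounts to something like this, with SeeminglyHashedSessionId standing in for whatever the profile directory is really called:

$ rm -rf ~/.mozilla/firefox/SeeminglyHashedSessionId/Cache
$ cp -a ~/.mozilla2/firefox/SeeminglyHashedSessionId/Cache \
        ~/.mozilla/firefox/SeeminglyHashedSessionId/Cache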

By the way, the offline cache (of any browser) seems to be frequently underestimated. Some time ago, after the www.wesnoth.org server crash, I pulled some forum pages out of Firefox's cache and uploaded them to this website in a hidden directory, to serve as a partial, temporary mirror so people would have a guide for getting back to work after the two-month rollback of the forum's database.

Cache issues aside, the fact that Firefox gets crashy for no reason disturbs me. Wasn't this version supposed to be more stable than 2.0, according to the announcements? Rendering also suffered some performance regressions: some pages take longer to render than to download, especially those that make heavy use of scripts. The affected pages are usually also sluggish to scroll up/down, whether or not I disable "smooth" scrolling. I never experienced any of this with the same pages on the same OS (openSUSE 10.3) and the same architecture (x86_64) with an earlier version (2.0.0.x) of Firefox.

Perhaps this whole download-day thing was just a trap. Or they paid more attention to the Windows and Mac OS X ports than to the GNU/Linux one. Or I am cursed with bad luck for the rest of my life. Whatever it is, I don't like it, and I'm seriously considering switching to a better open source browser for Linux; IceWeasel may be it, if it is a fork being developed on its own. I have yet to make the switch to Debian, though.