With EXA support enabled in xorg.conf for radeonhd, I started to get X.org lock-ups at random times when resuming from s2disk on the 2.6.28.8 kernel; sometimes after 2 days of uptime, sometimes after just 15 minutes. Since I'm using a laptop with that funny Fn key layout, pressing alt-sysrq-k (SAK), alt-sysrq-u (remount read-only) or alt-sysrq-i (kill all tasks but init, IIRC) makes the kernel believe I have pressed alt-sysrq-<insert numpad key here>, which does nothing but set the kernel loglevel. Not being able to see anything on the pitch-black screen, all I have left is alt-sysrq-s (emergency sync) and alt-sysrq-b (reboot), and letting fsck replay the ext3 journals while restarting.
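Not that it would have helped me with X frozen and no SSH session around, but for the record: when a working shell is still reachable, the same magic sysrq actions can be triggered without touching the keyboard at all, along these lines:

echo 1 > /proc/sys/kernel/sysrq   # make sure sysrq is enabled
echo s > /proc/sysrq-trigger      # emergency sync
echo u > /proc/sysrq-trigger      # remount all filesystems read-only
echo b > /proc/sysrq-trigger      # reboot immediately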
Hearing that the drm kernel modules at the freedesktop.org repositories are not kept in sync with mainline kernel changes, I decided to try out kernel 2.6.30-rc2. Fear. Despair. It locked up forever while suspending to disk.
Either it did something more than merely locking up, or the freedesktop.org drm and radeon kernel modules are more broken than I expected; some hours after rebooting into 2.6.28.8 and doing my homework on the laptop, most of the running GUI programs crashed with random SIGSEGVs and SIGABRTs. Yes, you read that right: even the KDE window manager got busted. Interestingly, I could bring it up again with some effort without restarting X, but then I noticed that one of the apps I was working with, Eclipse (3.4.x), didn't want to start again. When launched from the console, it would exit around 2 seconds after being invoked, with no explanation. After fiddling around with my ~/.eclipse dir and restarting X various times just to see knotify and artsd crashing like mad on KDE startup, I started a session as another (dummy) user in X and tried to launch Eclipse 3.2 instead; its explanation was a bit more satisfactory: missing libraries.
"Of course", I thought, "the ldconfig cache may have become corrupted".
My thoughts at that moment: WHAT THE F* HELL HAS HAPPENED!!!?!?! Considering that I hadn't run Wine for months, and it wasn't running at that moment either... I ran some ls's to see what had become of those libraries... it wasn't pretty. They had become character devices, block devices and FIFOs. 😕 I rebooted, and fsck still ignored the "clean" /usr (/dev/sda5) filesystem until I forced it to check all filesystems again by setting their mount counts to some funny values using tune2fs.
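I don't remember the exact values, but the trick was something along these lines:

# pretend the filesystem has been mounted more times than allowed, so
# fsck can no longer skip it as "clean" on the next boot
tune2fs -C 999 /dev/sda5
# or, alternatively, lower the maximum mount count instead:
tune2fs -c 1 /dev/sda5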
Indeed, /usr was corrupted.
After init dropped me to an emergency console to run fsck manually on sda5, running e2fsck threw some interesting stuff, which I forgot to save to a file, but it was something like this:
inode ####### has 'compressed' flag set on an unsupported filesystem
inode ####### has invalid size
inode ####### (fifo) has non-zero size
inode ######## has 'compressed' flag set on an unsupported filesystem
((ad infinitum, ad nauseam))
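The invocation itself was nothing special; something like this, with -f forcing a check even when the superblock claims the filesystem is clean:

e2fsck -f /dev/sda5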
After it finished, I noticed that the /usr/lost+found directory got populated with files and directories resembling parts of the installed Python 2.4 and 2.5 foundations. Of course, missing entire directories is a really bad sign. But I could recover the affected packages I noticed, with the help of apt-get install --reinstall. Those were anything aRts, Python, Wine, Pango and Akregator (which I actually remembered to reinstall just now, while writing this); I also reinstalled the Samba server I had set up just some hours before the crash, just in case. Apparently I didn't lose anything else. None of the other filesystems were affected by whatever smashed parts of /usr.
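For the curious, the recovery boiled down to runs like this one (the package names here are illustrative; I went by whatever looked broken):

apt-get install --reinstall arts python2.4 python2.5 wine libpango1.0-0 akregator samba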
The obvious course of action then was blacklisting drm and radeon, removing the Option "AccelMethod" "exa" line from xorg.conf, and nuking the 2.6.30-rc2 kernel image just in case. 😛 Eclipse and artsd worked again, and nothing else has shown any symptoms of being affected by fsck's decisions since then. It's been just 5 days, though, and I have many apps lying around that I only use when asked/forced to, so I don't even remember their names.
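In case anyone wants to follow my retreat, it amounts to roughly this (Debian paths; adapt as needed):

# /etc/modprobe.d/blacklist -- keep these modules from loading at boot
blacklist drm
blacklist radeon

# xorg.conf, Section "Device" -- comment out or delete the EXA line
#Option "AccelMethod" "exa"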
I didn't say goodbye to kompmgr, however; it has pretty decent performance and marginal CPU usage even while using a shadow framebuffer, with just one CPU core enabled at half of its maximum frequency, and even when the system is under load. So I figured that I didn't really need DRI or EXA all the time (radeonhd still doesn't have an OpenGL implementation, so having DRI enabled made no difference for clients). Still, it was nice to be able to scroll a huge Konsole or Iceweasel/Firefox window without experiencing flickering.
Thanks to the guys at the #radeonhd channel on Freenode, I could quickly find out how to make DRI (and thus EXA) work with my laptop's onboard graphics... thingy... I'll let lspci speak for itself; it's much easier that way:
01:05.0 VGA compatible controller: ATI Technologies Inc RS780M/RS780MN [Radeon HD 3200 Graphics]
[...]
Let's not forget that this is my current laptop (it was new in December), not the old, broken lappy, and I'm running Debian stable.
The thing is, with their help I installed a newer-than-Debian (or actually, newer-than-the-kernel-that-is-newer-than-Debian's) radeon drm module, and also solved a little mistake that made X lock up after resuming from suspend-to-disk with DRI and EXA enabled. Now I can enjoy some real 2D acceleration at last!
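For anyone with the same chipset, the relevant xorg.conf bits look more or less like this; whether the DRI option is needed may depend on the driver version, so take it as a hint rather than gospel:

Section "Device"
    Identifier "ATI RS780M"
    Driver     "radeonhd"
    Option     "AccelMethod" "exa"
    Option     "DRI" "on"
EndSection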
Although it seems, judging from the git repository's logs, that official EXA and XRender acceleration support for this chipset is pretty recent, it is very stable compared to a certain other driver for another onboard chipset which I won't mention here. I could even enable KWin's compositing manager (kompmgr)!
I have not tried the other big compositing/window managers (e.g. Compiz), nor do I intend to, as they require a hardware OpenGL interface which is not available yet for this chipset; and since they are labeled "window managers", I'm guessing that I'd definitely lose the KWin look-and-feel, which kompmgr doesn't intend to replace.
Nonetheless, my very limited candy set-up has an incredibly low CPU overhead (either that or the radeonhd driver is f***ing awesome), and I can even leave it enabled while in powersave mode; no matter what I do, I don't see any impact on other programs' performance. I couldn't say the same thing about radeonhd before getting EXA.
One really odd thing I noticed, though, is that kompmgr doesn't seem to want to have anything to do with translucency in pop-up menus. As a side effect, it is KDE (Qt3?) itself that must provide an option for that in the Control Center -> Styles page. But even if I tell it to use XRender acceleration, something doesn't fit: changes going on in windows below a pop-up menu are not shown through it, i.e. the pop-up menu's rendering is completely static, unlike regular translucent windows. I don't know what benefits this design decision may bring, but it's KDE 3.5.10 anyway, and KDE 4 users would laugh at me for my obsession with not switching to 4.x.
Note that I am aware that 4.2.x has fixed a lot of the stunningly awful usability and configurability regressions seen in earlier 4.x versions. But I am still pondering why all Qt 4 widget engines are pretty slow compared to Qt 3; so, until I get an answer of the kind "it's just you", "it can be solved" or "it has been solved already", I won't consider switching to KDE 4 an option.
Since the introduction of the testing SCons-based build system to the [Battle for Wesnoth](http://www.wesnoth.org) project, I have been annoyed repeatedly by its decreased performance in comparison with the old autotools-based system we were using, and its increased power consumption on my laptop.
I felt alone in this world... since March IIRC, until I stumbled upon our Debian packager's blog entry about it just yesterday.
I cannot deny that Loonycyborg and ESR have done a great job in making the SCons build recipe for Wesnoth better over time, but there are these tiny issues that they cannot overcome without modifying SCons' source code itself and requesting that all our users run a patched version of SCons. 😕 It also annoys me that users have to install a non-GNU tool to be able to even see the build options for the software. This is certainly one of the good things about autotools: you generate the processed recipe once, and it will run on any machine with the UNIX or GNU coreutils, a compatible sh and make! That is, of course, assuming the author of the raw recipe (configure.ac/in and friends) did not resort to what is called "bad practice" in it (bashisms, silent environment requirements, etc.). I have yet to find a processed autotools recipe (Makefile.in, configure) from a released source distribution of any FLOSS project that fails to run and do its job.
So with autotools it is rarely necessary to install the recipe-processor tools (aclocal, autoconf, automake, autoheader, autopoint) if one wants to run the software rather than develop it. Yet with SCons and... CMake (the other candidate replacement for autotools at Wesnoth), it is necessary to install the equivalent of an Apache server on the client machines for the equivalent of downloading a set of files from a public area of the server. Why?
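To put it in concrete terms, this is everything a user needs to type to build from a released autotools tarball, versus the SCons way, where SCons itself (and therefore Python) must be installed first:

# autotools: just sh, make and a compiler
./configure --prefix=/usr/local
make
make install

# SCons: install Python and SCons first, then
scons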
That said, I'm personally sticking to autotools for managing builds (and having fun tailoring them to temporary needs at times!) in my personal project, Mesiga, until the GNU project comes up with a better solution... should that be possible. Moreover, I'd personally maintain the autotools recipe in Wesnoth, had our Release Manager not been so persistent in the "let it rot" policy.
The fact that the SCons project's homepage is filled with propaganda from big names in the software industry, such as id Software and ESR, is even more disturbing to me. It makes me think it... it... IT IS A TRAP! 😮 The "What makes SCons better?" section is fearsome... to me it looks like 'featuritis'. There are so many features built into SCons that they overwhelm me. 😕 I mean, why do users have to install this big piece of software on their machines if they just want to build something?
Thank goodness they didn't make adding new source code files to targets harder than with autotools.
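To be fair, here is what I mean; a toy SConstruct (not Wesnoth's actual recipe) where adding a file to a target is just appending another element to a Python list:

# toy SConstruct, not the real Wesnoth recipe
env = Environment(CCFLAGS='-O2')
sources = ['main.cpp', 'game.cpp', 'newly_added_file.cpp']
env.Program('wesnoth-toy', sources)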
During my work on the Coordinated Wesnoth User-Made Content Development Project (which we dub "wesnoth-umc-dev" for short), I came up with an interesting concept related to Subversion's standard workflow. Half-assed commits are revision commits to the Subversion repository that are not completed because the Subversion client (or server!) process died unexpectedly, usually due to anything but a SIGTERM.
The obvious symptom of a half-assed commit in your local file system is a bunch of 'L' flags in the svn st command output. These can be removed with svn cleanup, so most half-assed commits are harmless to you. However, according to the (holy) Subversion Book, they may leave garbage, half-assed transactions in the repository. These are not visible to anyone but the repository admin, of course, and should not harm anyone provided the filesystem on which the repository resides does not run out of space.
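The routine for this harmless kind is short:

svn st        # locked paths show up with an 'L' flag
svn cleanup   # releases the stale locks left behind by the dead process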
😐 Yesterday afternoon I ran into a more harmful and painful sort of half-assed commit. I renamed some files in my working copy, invoked svn ci, and my crappy Wireless LAN connection burped just when it was about to update the working copy with the changes introduced to the repository:
Transmitting file data ...svn: Commit failed (details follow):
svn: MERGE request failed on '/svnroot/wesnoth-umc-dev/trunk/Invasion_from_the_Unknown'
svn: MERGE of '/svnroot/wesnoth-umc-dev/trunk/Invasion_from_the_Unknown': Could not read status line: Connection reset by peer
(https://wesnoth-umc-dev.svn.sourceforge.net)
svn: Your commit message was left in a temporary file:
Unsurprisingly, I was left with my files in an awful state that caused local conflicts with the repository. That is, the next svn update failed because the commit above had actually succeeded on the server, leaving the renamed files in the repository. SVN just didn't like that at my end, because I already had those renamed files in the working copy as a result of the svn move I had just (half-ass) committed.
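For reference, a generic way out of this state goes more or less like this (the file name is a made-up example; adapt to taste):

# the renames already made it to the repository, so move the local
# copies out of the way first, then let svn update bring them back
mv Renamed_File.cfg /tmp/
svn revert -R .
svn update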
Thanks for nothing, SVN! Seriously, the protocol should have the server request a final confirmation from the client before checking in the transaction, after its changes have been merged into the client's working copy. Or the inverse: have the client react in a smarter fashion to these situations that people like me often run into.
openSUSE 10.3 ships with Firefox 2. I switched to Firefox 3 from the "Mozilla" repository a few weeks after it came out (missed the download day/party). So far so good. Most user interface changes are nifty, except for the new History menu layout: the history sidebar cannot be enabled from there, unlike in previous versions; Ctrl-H or the View menu must be used instead. Awkward, but I can live with it thanks to keyboard shortcuts.
Pages embedding Flash (YouTube amongst others) work well. No crashes when watching Flash videos, even though I use a crashy X.org display adapter driver ('radeon'... don't even ask about 'fglrx').
The problems begin when I use seemingly simpler features that I have known since at least Firefox 1.0. I am a laptop user, and I'm often disconnected from the Internet. I use the browser's cache to read pages that I have already skimmed, since I can't be bothered to keep zillions of HTML downloads in my home dir. That's where things go wrong.
Seemingly, this version of Firefox crashes at random, especially when its session has been running for a long time (t > 30 min) and one does stuff in the page view area, such as scrolling or clicking on text, while the sidebar is active. What a pity, because I like the history sidebar much better than the new separate "Full History" window. A greater pity is that the cache gets invalidated after a session crash and its contents get wiped out. Of course... I'm not the best person to judge whether this is a bug or a feature, since I don't know much about computer security; but I can tell it annoys me to the point that I have to keep backups of the cache et al. after closing Firefox successfully:
Then if Firefox 3 crashes, I restart it, close it again, and restore the Cache directory from .mozilla2/firefox/SeeminglyHashedSessionId so I can continue reading from it when I'm offline.
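In shell terms, the ritual looks more or less like this ("SeeminglyHashedSessionId" being whatever the profile directory is really called):

# after closing Firefox cleanly, snapshot the cache
cp -a ~/.mozilla2/firefox/SeeminglyHashedSessionId/Cache ~/ff-cache-backup
# after a crash: start Firefox, close it cleanly again, then put the cache back
rm -rf ~/.mozilla2/firefox/SeeminglyHashedSessionId/Cache
cp -a ~/ff-cache-backup ~/.mozilla2/firefox/SeeminglyHashedSessionId/Cache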
By the way, the offline cache (of any browser) seems to be frequently underestimated. Some time ago, after the www.wesnoth.org server crash, I recovered some forum pages from Firefox's cache and uploaded them to this website in a hidden directory, to serve as a partial, temporary mirror so people would have a guide to get back to work after the 2-month rollback of the forums' database.
Cache issues aside, the fact that Firefox gets crashy for no reason disturbs me. Wasn't this version supposed to be more stable than 2.0, according to the announcements? Rendering also suffered some performance regressions. Some pages take longer to render than to download, especially those making heavy use of scripts. Those affected pages are usually also sluggish to scroll up/down, no matter whether I disable "smooth" scrolling. I never experienced any of this with the same pages on the same OS (openSUSE 10.3), the same architecture (x86_64) and an earlier version (2.0.0.x) of Firefox.
Perhaps this whole download-day thing was just a trap. Or they paid more attention to the Windows and MacOS X ports than to the GNU/Linux one. Or I am cursed to have bad luck for the rest of my life. Whatever it is, I don't like it, and I'm seriously considering switching to a better open source browser for Linux; Iceweasel may be it, if it is a fork being developed on its own. I have yet to make the switch to Debian.