Monday, October 24, 2011

Unity on Ubuntu 11.10 and no clock / date applet


When I upgraded to Ubuntu 11.10 I wanted the date and time on my indicator panel, but without the Evolution resource hogs that are dependencies of the standard Ubuntu 11.10 clock indicator. Since the Evolution calendar has (had?) a major memory leak in 11.04 when used with Google Calendar, I stopped using Evolution and uninstalled it. Unfortunately, I found the clock was impossible to use without the leaky Evolution data server. Worse still, I was unable to find any clock indicator compatible with the new appindicator standard. So I had no way of knowing how late I was working beyond how dark it was outside.
So here is my first attempt at an appindicator (and my first attempt at writing Python, for that matter). It is inspired by Mark Bokil and his "Show Desktop" indicator (via http://www.webupd8.org/2011/10/show-desktop-indicator-for-ubuntu-quick.html#more).
You'll need to install wmctrl first, then run the display-indicator.py file below (which I chmod'ed and added to my startup applications):
sudo apt-get install wmctrl




Wednesday, December 1, 2010

Ruby on Rails and inconsistent database results

I started stress-testing a RoR project that has grown pretty big. Some serious hands-on testing showed that functionality was working well and performance was fine, but sometimes I would get weird results from the database or ActiveRecord. I would create a new AR object, save it, use it a few times, update it, and then it would suddenly just disappear. I would start getting ActiveRecord::RecordNotFound exceptions from a Thingy.find(1234) when thingy #1234 definitely existed in the database. It would take a restart of Phusion Passenger, or for one of the workers to time out, before I would start seeing the object again, and if I refreshed a page with Thingy.all(:conditions=>c) the results would change, then change back. I'm using MySQL, so this was not at all what I expected to see.

I had issues in the past with some forking of processes that would run through to completion in the background; they were removed. I made sure that there were proper Thingy.transaction do ... end blocks covering my updates. Still, things were getting worse, not better.

Eventually I ended up hunting around code from the dim and distant past, the stuff I don't touch because it "just works". Well, I rolled up to an interesting section in a class:


sql = ActiveRecord::Base.connection
sql.execute "SET autocommit=0"
sql.begin_db_transaction
sql.delete 'delete from a_table where some_conditions'
sql.update sqlstring
sql.commit_db_transaction


This was understandable, as the SQL in sqlstring was complex to say the least. But since I removed this from the main flow of the application, things seem to have settled down considerably.

I'm guessing that my standard transactions were getting caught up in this attempt to borrow a connection from the pool explicitly, and who knows what was happening. Or maybe Passenger was losing its connection and recreating it. I don't know, but I'm not doing it again!
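For what it's worth, here is a toy model of the failure mode I suspect. This is plain Ruby, not Rails: the class and names are mine, purely for illustration of what can happen when a pooled connection is handed back with autocommit still disabled, so the next borrower's writes silently never commit.

```ruby
# Toy model of a pooled DB connection, illustrating the hazard above:
# caller A disables autocommit and returns the connection to the pool
# without restoring it, so caller B's ordinary save never commits.
class ToyConnection
  attr_accessor :autocommit
  attr_reader :committed, :pending

  def initialize
    @autocommit = true
    @committed = []   # rows visible to other sessions
    @pending = []     # rows written but not yet committed
  end

  def execute(row)
    if @autocommit
      @committed << row
    else
      @pending << row  # waits for an explicit commit that may never come
    end
  end

  def commit
    @committed.concat(@pending)
    @pending.clear
  end
end

pool = [ToyConnection.new]

# Caller A borrows the connection for hand-rolled transaction control
# and forgets to restore autocommit before returning it to the pool.
conn = pool.first
conn.autocommit = false
conn.execute("complex update")
conn.commit
# ...connection goes back to the pool still with autocommit = false.

# Caller B borrows the same connection and does an ordinary save.
conn = pool.first
conn.execute("thingy 1234")

puts conn.committed.include?("thingy 1234")  # => false: the row is stuck uncommitted
```

The real MySQL behavior has more moving parts than this, but the shape of the bug is the same: session state left on a shared connection outlives the code that set it.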

Thursday, November 11, 2010

Ajax request keeps Google Chrome pointer spinning

I have a wonderful instant messaging app running, using AJAX to maintain a long-term connection to a simple HTTP server. How I did that is a discussion for another time, but the problem was that I needed to create the connection when the user first hit a web page. So I simply put a Javascript call in body onload, something like:

<body onload="start_listener();">

Plain and simple, it worked. But in Google Chrome, the page continued to say 'Waiting for servername' and the mouse pointer kept spinning. I suffered this for a while, until I realized that the same call to start_listener() didn't exhibit the endless spinning if I fired it by clicking a link.

I must be the only person on the planet with this problem, as Google yielded no results. I tried moving the call to a script at the end of the page, and putting it into a script as window.onload, all to no avail. I double-checked that the request really was asynchronous. Chrome worked fine, but the cursor was disconcerting to say the least.

So I decided to search similar projects: node.js has a nice event-driven messaging demo, frequented by many observers and a handful of weirdos. They don't have the spinner problem, as they have you click a link to get started. But something in there made me think that I should attack this from another direction. What if I triggered an event that called the Javascript function, just as if I had clicked a link with the mouse?

Well, the brainwave finally kicked in: use a timer on the page to start the AJAX request soon after the page loads. The Javascript must be event based, right?

So I converted my onload to:

<body onload="setTimeout('start_listener()', 1000);">


effectively triggering the call to AJAX as an event. And at last, Chrome stops spinning.

What's the deal? I guess that if you put an AJAX call directly into the main page load, Chrome treats it as an extension of the original request (maybe more through the design of the page request handling than on purpose). But pull it out so that the AJAX request fires in a distinct event, and all works well. setTimeout lets you do that without needing the user to click anything.

Finally I can look at my application without getting freaked out that it is failing to load something...

Wednesday, October 27, 2010

Counting pages

I've been working on a Ruby on Rails project for a while. One area of it has morphed into a bit of document management, and for some users it is important to know how many pages a specific document has in it. At least for PDFs and TIFFs.

Well, ImageMagick is one approach: load the document, then review its properties. But as anybody who has used it will know, unless you are careful this can be a huge memory sink. In fact, I use ImageMagick's 'convert' as a way to force my machine to run out of memory during testing, to see if things fail gracefully.

So, I hunted around a bit and came up with these programs: tiffdump and pdfinfo. I also considered tiffinfo, although the 'rawness' of tiffdump just seemed more appealing when parsing out the data I needed.

To install them (on Ubuntu):

sudo apt-get install libtiff-tools poppler-utils

Then use the command line programs from Ruby, something like this:


require 'webrick'

path = '/home/someone/somewhere/somefile.xxx'
mime_type = WEBrick::HTTPUtils.mime_type(path, WEBrick::HTTPUtils::DefaultMimeTypes)
if mime_type == 'image/tiff'
  # tiffdump prints one 'Directory' header per page in the TIFF
  return `tiffdump '#{path}' | grep 'Directory'`.count("\n")
elsif mime_type == 'application/pdf'
  # pdfinfo prints a line like "Pages:  12"
  return `pdfinfo '#{path}' | grep 'Pages'`.split(':')[1].to_i
else
  # whatever
end


Not pretty, not clever, but a lot faster than RMagick, and a lot easier than the Ghostscript approaches I've seen discussed but never actually seen working.
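If you want the parsing to be testable without the tools installed, you can split the shelling-out from the string handling. This is my own sketch, and the sample outputs below are assumptions based on what these tools typically print, not captured from a real run:

```ruby
# Count pages from tool output, keeping the parsing free of any
# dependency on tiffdump/pdfinfo actually being installed.

def tiff_page_count(tiffdump_output)
  # tiffdump prints one "Directory" header per page in the TIFF
  tiffdump_output.lines.count { |line| line.include?('Directory') }
end

def pdf_page_count(pdfinfo_output)
  # pdfinfo prints a line like "Pages:          12"
  line = pdfinfo_output.lines.find { |l| l.start_with?('Pages:') }
  line ? line.split(':')[1].to_i : 0
end

# Hypothetical sample outputs for illustration:
sample_tiffdump = "Directory 0: offset 8\nDirectory 1: offset 4242\n"
sample_pdfinfo  = "Title: somefile\nPages:          12\nEncrypted: no\n"

puts tiff_page_count(sample_tiffdump)  # => 2
puts pdf_page_count(sample_pdfinfo)    # => 12
```

In the real code you would feed these methods the backtick output, e.g. `tiff_page_count(`tiffdump '#{path}'`)`, and keep the MIME-type branching where it is.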

Tuesday, October 12, 2010

Chroot - ooh now I can run OpenOffice

I've been struggling with OpenOffice crashes ever since I've been running Ubuntu 10.04 (Lucid). I've tried everything. I've added horrible red herrings to one of the many seemingly relevant bug reports on Launchpad. In the process I've tried debugging (the debug symbols seem to be inadequate), and then I saw a discussion about recreating a bug from a previous Ubuntu version in a basic installation inside a chroot. So I followed the instructions for creating a chroot with a basic Ubuntu installation, installed a few basic packages (nano, for example), and set up the en_US UTF-8 locale following Andrew Beacock's blog (necessary to install Java). I also had to add some archives for apt to pick up OpenOffice. Now I have Lucid running chroot'd inside Lucid.

I don't know chroot well and had heard of some issues around mounting disks, and I'm using ext4 with ecryptfs encryption for my home directory, which kinda gets in the way. So I went the roundabout route and mounted over ssh using sshfs (yes, I had to install both of these first).

Finally, I installed openoffice.org-ubuntu and openoffice.org-human-style, ran ooffice, and edited a document all day long with no crashes. I don't know if that's it, whether the chroot lets me avoid some strange library conflict, or whether tomorrow is another day and another crash. But for now I like the chroot method for testing a clean install without making a whole clean install, and without making it difficult to get at my documents, which I find a VM image tends to. It took me about half an hour of total time to get working, without chewing up half my disk or half my memory. Hopefully it will keep me productive for a while.

Tuesday, July 27, 2010

A lot going on in Ubuntu-land

I just ran the Ubuntu update for the day. A new release of the kernel and another new version of Firefox.

Frankly, I've moved to Google Chrome, since Firefox on Ubuntu seemed to be getting slower and slower. I've not had that complaint with Chrome, though I did add the FlashBlock extension, which prevents a lot of unnecessary advertising from chewing up CPU in the background.

As for the kernel, there are some fixes to the ext4 file system I committed to when I installed Lucid, and to the ecryptfs encryption module that I have decided might be a safe bet for encrypting my personal, business and backup data. The challenge now is to see if I can recompile the ecryptfs module on the Rackspace cloud server I've been using. I did it once; now let's see if I can do it again.

Oh, and if you've been reading this blog in the past, you'll know that I've been struggling with OpenOffice crashing. It still does, regularly, if a document is showing a graphic on a page when I swap between applications. Not nice, as some of the documents I've been working on are pretty graphics-intensive. I will continue to try to resolve this...

Tuesday, July 6, 2010

Canon PIXMA MX700 - print and scanning with Ubuntu

I own a nice-enough little all-in-one printer, scanner and fax unit, the Canon MX700. It was cheap, and I desperately needed a scanner at short notice. It worked nicely with my wife's WinXP laptop, but I've always struggled with it on Linux. An upgrade to Ubuntu 10.04 (Lucid) left me without a working printer again.

So here I go: when you set up your new printer, you won't find the MX700 in the list. You won't find an MX anything, in fact. So pick the Canon PIXMA MP520. This apparently was released around the same time and, apart from network printing options or something equally benign, probably offers the same printing firmware. At least over USB, this works nicely for me.

As for scanning, I previously had to mess around to get SANE to understand the device. But then I ran across this: the SANE backend that provides standard scanning support for Linux claims to support the MX700: http://www.sane-project.org/sane-mfgs.html#Z-CANON


PIXMA MX700: USB and Ethernet (0x04a9/0x1729), status Complete, flatbed and ADF scan, all resolutions supported (up to 2400 DPI), sane-pixma backend (0.16.1)


Well, I just tried it with my favorite app, gscan2pdf, and it just worked.

Finally, this stuff seems to be coming together nicely. Great work SANE team...