They say confession is good for the soul, so today, right here, in this very column, my Linux Walkabout, I will make public one of the deepest and darkest secrets of my soul. In making this confession, I will tell you about a great little tool for your Linux system. In that way, I may just help others who are similarly afflicted.
Steady Marcel, you can do this.
Okay, I'm ready. Here it is. I am a news junkie. I'm cursed with a maniacal and insatiable need to be up on the latest in current events, science, entertainment, politics, rumor, and whatever else you can think of. For reasons that defy explanation, I need a steady diet of information about anything and everything that is going on in the world; nay, in the entire universe. Though I must confess (again?) a particular fondness for information of a scientific or technical nature, I can just as easily be swayed by a review of the Canadian Opera Company's latest performance or by speculation on the newest blockbuster. Information...I must have information.
Oh, and fiction, too. Lots and lots and lots of fiction. Text, that's the key. <insert maniacal laugh here> Ah, I feel much better. This is probably part of the reason that one of my favorite toys is my Visor Prism, a Palm OS device with the signature springboard slot. In that slot, I have an expansion card, a MemPlug that gives my Visor an extra 64MB of storage space. What do I do with that extra space? I store the latest stories and headlines from my favorite news sources. I also keep a dozen or so novels there, but today I'm going to concentrate on the news.
Justin Mason must have been one of the afflicted. That's probably why he wrote Sitescooper, a nice little program that's actually just a clever piece of Perl code. What Sitescooper does is provide an easy way to download, clean up, and format your favorite news websites so that you can read them on your Palm (or similar) handheld device. Since there are a number of reader programs available for handhelds, Sitescooper knows about many of the more popular ones, such as iSilo, Plucker, RichReader, and any of the numerous programs that make use of the DOC format (AportisDoc, TealDoc, and so on). It collects the news by using site description files that you can write yourself, although you may never need to: Sitescooper comes with some 400 different site files preconfigured.
Intrigued? Then head on over to http://sitescooper.sourceforge.net and pick up the latest source. Installation is easy, as I will show you in a moment, but you can make life even easier by downloading prepackaged RPM files (for RedHat, SuSE, and others) and DEBs for the Debian folk. Or, you can go to the source.
tar -xzvf sitescooper-full.tar.gz
cd sitescooper-3.1.2
su -c "make install"
Earlier on, I talked about site description files. We'll get to those in a moment, but for now, let's look at Sitescooper's one configuration file. It is called sitescooper.cf, and you'll find it in the /etc directory for the global file or in your $HOME/.sitescooper directory. Since this is a simple text file, you can edit it with your favorite editor (mine is still vi). There are two lines that you need to look for and uncomment. Here's the first:
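In my copy, the line looks something like this once uncommented. (I'm showing an assumed key = value syntax and my own directory here; follow whatever form the commented-out template in your sitescooper.cf actually uses.)

```
PilotInstallDir = /home/marcel/pilot/install
```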
When Sitescooper creates files for you to hotsync to your Palm, it needs a place to store them. That's what the PilotInstallDir setting is for. I've got mine set to pilot/install in my home directory, but it can be pretty much anywhere you want. The alternative is to let Sitescooper figure these things out for itself by setting the PilotInstallApp parameter.
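The second line to uncomment looks something like this. (Again, the syntax shown is an assumption; mirror the commented-out template in your own sitescooper.cf.)

```
PilotInstallApp = InstallApp
```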
Replace the word InstallApp with the pilot sync program you are using: PilotManager, gnome-pilot, or JPilot. Once that is done, you are almost ready to go.
Now, type sitescooper. When you run Sitescooper for the first time, it will create a .sitescooper directory (don't forget the period) in your home directory. The first thing you will notice, however, is that you've just been placed in the middle of a configuration file for sites to download news from, and there are a lot of them. This is the format you'll see:
[ ] New Scientist
    URL: http://www.newscientist.com/contents/
    Filename: [samples]/science/new_scientist.site

[ ] Science Daily Headlines
    URL: http://www.sciencedaily.com/news/summaries.htm
    Filename: [samples]/science/science_daily.site
The idea is simple: put an X between the square brackets of the sites that interest you. This is a big list, so you might be busy for a while. The other option at this point is to ignore this file: don't change anything, and close it right now (which is what I did). What will happen then is that Sitescooper will create a number of additional files and directories in that .sitescooper directory I told you about earlier. When you run the program, it will expect to see a directory called sites under your home directory. This is where you keep individual site files. Take a moment and go back to where you extracted Sitescooper, and you will discover a directory called site_samples (if you installed from source, you'll also find these under /usr/share/sitescooper). There you'll find a number of other directories representing the various categories. In the science directory, I find this site file for the New Scientist website:
$ cat new_scientist_news.site
URL: http://www.newscientist.com/news/
Name: New Scientist News
Levels: 2
UseTableSmarts: 0
TableRender: flatten
ContentsStart: <div class\s*=\s*"crosshead">
ContentsEnd: <div class\s*=\s*"listfoot" align\s*=\s*"justify">
StoryURL: http://www.newscientist.com/news/news_\d+.html
StoryURL: http://www.newscientist.com/(daily)?news/news.jsp\?id=ns\d+
StoryHeadline: <title>New Scientist: (.*?)</title>
StoryStart: <div class\s*=\s*"body">
StoryEnd: <B class\s*=\s*subs>
As you can see, the format is quite simple. With over 400 site files pre-built for you, odds are you won't need to go looking for more on your first day, but just in case, you might want to take down the following link. It is a simple set of instructions for creating your own site files:
The following listing is a site file I wrote for myself. It scoops the weekend book review column (a must-read for the book lovers out there).
#The Globe and Mail is a Toronto newspaper
#This script scoops the weekend Books review column
URL: http://www.globebooks.com/
Name: Globe&Mail Books
Levels: 2
ContentsStart: <!-- Feature Review -->
ContentsEnd: <!-- Best of the Year -->
StoryURL: http://www.globebooks.com/servlet/.*
StoryStart: <!-- /fragments/emailtofriend.html ends -->
StoryEnd: <!-- Addendum -->
All right, let's get some news. If you run the program in its simplest form, it creates files in iSilo format by default. The resulting file is prefixed by the date of the story.
Personally, I don't like this naming convention. What I want is new news, not yesterday's or last week's news. Consequently, I would rather have the file without that prefix so that new files overwrite old files when I hotsync. For that, I use the -nodates option.
sitescooper -nodates -fullrefresh
Notice that I added another option: the -fullrefresh flag. That's because Sitescooper is pretty smart. It uses a cache to remember whether it has already downloaded today's news so that it doesn't go out and get the same thing over and over again. To override that, I tell it to fully refresh its cache and start from scratch. Now my file name looks a bit less dated.
Of course, not everyone will be using iSilo as their reader. The most common reader format is arguably Palm DOC, and many readers can handle this one. To generate Palm DOC files, use the -doc flag as well. This one is kind of boring, since it creates a single long document without an index. For a table of contents with cross-linked stories, you are best served by iSilo or Plucker. I personally use a number of readers, since no single one satisfies all my desires for formatted text. But I digress.
I brought up different formats at this time because this is where you may start to run into problems. The conversion to formats like iSilo, Palm DOC, Plucker, RichReader, and so on, requires external programs. In the case of iSilo, you may need the iSilo conversion tool for Linux. You can get it at this address:
The Plucker format also requires a conversion tool. This is also a free tool (available in RPM and binary tarball) and you can get it at the following address.
I've grown quite fond of the Plucker format (see Figure 1). I run it with the -mplucker flag to create multipage HTML documents readable on my Visor. When I ran it for the first time, the PDB files (which should have been created) were nowhere to be found. I also got an error telling me that db_name was no longer a valid option. A little search through the Sitescooper mailing list archives provided the answer to my problem.
Figure 1 Reading scooped web news with Plucker.
The problem resides in the Main.pm file distributed with the tarball (who knows, it may be fixed by the time you read this). If you installed from the tarball, this file will be living in the /usr/local/share/sitescooper/lib/Sitescooper directory. Edit the file and look for db_file, and change it to doc_file. You also need to change db_name to doc_name. There's only one reference for each one, so it should be fairly easy to spot them.
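If you'd rather not hunt through Main.pm by hand, sed can make both substitutions for you. Here's a sketch that demonstrates the edit on a scratch file first; once you're comfortable with it, point sed at the real Main.pm path given above. (The sample line contents are invented purely for the demonstration.)

```shell
# Try the two substitutions on a scratch copy first.
tmp=$(mktemp)
printf 'db_file => $file,\ndb_name => $name,\n' > "$tmp"
sed -i 's/db_file/doc_file/; s/db_name/doc_name/' "$tmp"
# cat now shows doc_file and doc_name in place of the originals.
cat "$tmp"
rm -f "$tmp"
```

Note that -i (in-place editing) is a GNU sed option; on other systems you may need to write the output to a new file and move it into place.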
This is all wonderful if you happen to own a Palm or a Visor, and you find yourself in situations where portable news is the only way to catch up. But what if you don't need to carry all this with you? What if you don't own one of those funky handhelds? Sitescooper can still be a great time-saver: it also lets you capture sites in all their HTML glory, but with the news only. Here's the command I use:
sitescooper -fullrefresh -mhtml -color -nodates -noheaders -nofooters
Since I've thrown a few additional flags at you, I should probably explain. The -mhtml flag tells Sitescooper to download HTML but maintain the multi-level format; the -html flag also downloads HTML, but the entire site becomes one big page. The -color flag tells Sitescooper that my device can display color, so there's no need for black-and-white conversion. The other flags of interest here are -noheaders and -nofooters.
I also run another collection command so that I can hotsync my daily Visor news fix:
sitescooper -mplucker -color -nodates -noheaders -nofooters
And speaking of "every day," running these commands might get a little tedious. So why not create a script with the appropriate commands and run that script via a cron job? What could be more perfect? You get to your computer in the morning, and there, waiting for you, is the latest of the day's news, collected while you slept. Create a small script (with the above commands, perhaps?), and set a date and time for the job to run. Let's pretend that my script is called get_news_now. Here's a cron entry for a job that runs every morning at 5 AM.
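A minimal sketch of what that get_news_now script might look like follows; the two sitescooper commands are the ones we covered above, and the guard at the top is just defensive plumbing I've added so the script fails quietly on a machine where Sitescooper isn't installed. Adjust to taste.

```shell
#!/bin/sh
# get_news_now -- collect the morning's news while I sleep.

# Do nothing (quietly) if sitescooper isn't on the PATH.
if command -v sitescooper >/dev/null 2>&1; then
    # Plucker files for the Visor hotsync...
    sitescooper -mplucker -color -nodates -noheaders -nofooters
    # ...and multi-page HTML for reading at the desk.
    sitescooper -fullrefresh -mhtml -color -nodates -noheaders -nofooters
else
    echo "get_news_now: sitescooper not found, nothing to do" >&2
fi
```

Remember to make the script executable (chmod +x) before pointing cron at it.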
0 5 * * * /home/marcel/scripts/get_news_now 1> /dev/null 2> /dev/null
If you are like me, you'll have discovered many opportunities for wasting valuable time with the help of this little program. Sitescooper may not have cured my news addiction, but at least I'll be well informed. Well, it's time to wrap it up for today. Besides, I've got a lot of reading to do.
Next time on the Walkabout, I'm going to take you into the hinterlands, the backwaters, the deepest, darkest corners of the Linux universe. Well, maybe it won't be quite that scary. Better stock up on that programmer food, just in case.
Until next time, I bid you great Linux adventures!