Broken Technology

The RSS feed for this blog seems to have broken when I posted the new design. When I go to my iGoogle page, the last article for this blog is still the entry from April 14, “Infoquake” Reviewed on Fast Forward. Which means there are certainly a number of readers who have no idea that I’ve redesigned the website, and who will just assume I’ve fallen into a crack in the Earth somewhere until they decide to come browsing this way again. This happened the last time I redesigned the site too.

I’m unclear why this has happened. The URLs for the feeds should still be in the same place. All of the articles that were in the old feed are still in the new feed. I did mess around in the database and fix a number of GUIDs (Globally Unique Identifiers, for those non-geeks in the audience) that were pointing to a temporary address. But that should only have affected your feed reader’s ability to mark the entry as read or not read.

At least you can delete and re-add the RSS feed to your feed reader. The syndication for my Amazon blog broke altogether several months ago, and my message to the Amazon technical support staff seems to have fallen into a crack in the Earth somewhere. Now I’m stuck adding new entries to my Amazon blog by hand.

Why is so much technology so goddamn fragile?

I joke about this all the time with my web programming customers. Chances are that if you see something drastically wrong with the website I’m managing — layout all fucked up, images floating all over the place, everything completely unreadable — it’s the fault of a single misplaced comma somewhere. Other industries don’t have this problem. I mean, if you’ve got a single board nailed crooked in your house, the whole thing doesn’t fall to pieces.


About the New Website Design

You might notice something different.

Yes, it’s that new website that I’ve been mentioning for months and months now. I actually started soliciting feedback on the DeepGenre blog way back in December (see my piece “What Works on an Author Website?”). And now you can see the results here. You might also want to take the opportunity to poke around the redesigned Infoquake and MultiReal websites.

It’s likely that I’ll be doing some continuous tweaking to the site over the next few weeks, both to improve functionality and to fix problems. I haven’t looked at the sites on IE6 in over a month, and I’m sure they look absolutely horrendous. I’ve never tried to view them on a Mac. Plus I keep stumbling across broken images all over the place, which is entirely WordPress’s fault, and has nothing whatsoever to do with my shoddy organizational skills and inability to follow directions or standard programming practices.

In the meantime, here are some of the highlights of the new website:


Building the Perfect User Interface (Part 2)

(Read Building the Perfect User Interface, Part 1.)

In my first ramble about user interface, I used the toaster as an example of something that is erroneously thought to have a perfect user interface. Perhaps a more apropos example for most techies is the Internet search engine.

Think of any piece of information you’d like to know. Who was the king of France in 1425? What’s the address and occupation of your best friend from junior high school? How many barrels of oil does Venezuela produce every day? Chances are, that piece of information is sitting on one of the billions of web pages cached in Google’s databases, and it’s accessible from your web browser right this instant.

You just have to figure out how to get to it — and Google’s job is to bring it to you in as few steps as possible. It’s all a question of interface, and that’s why user interface has been Google’s main preoccupation since day one.

It might seem the model of simplicity to click in a box, type in a search term, and click a button to get your results. But the Google model of searching is still an imperfect process at best. You may not realize it, but there are still a number of Rube Goldbergian obstacles between you and the information you’re trying to get to. For instance:

  1. You need to have an actual machine that can access the Internet, whether it’s a computer or a cell phone or a DVR.
  2. That machine has to be powered and correctly configured, and it relies on hundreds of other machines — routers, satellites, firewalls, network hubs — to be powered and correctly configured too.
  3. You need to know how to log in to one of these machines, fire up a piece of software like a web browser, and find the Google website.
  4. The object of your search has to be easily expressed in words. You can’t put an image or a color or a bar of music into the search box.
  5. Those words have to be in a language that Google currently recognizes and catalogs (and your machine has to be capable of rendering words in that language).
  6. You have to know how to spell those words with some degree of accuracy — which isn’t a problem when searching for “the king of France in 1425,” but can be a real problem if you’re looking for “Kweisi Mfume’s curriculum vitae.”
  7. You need to be able to type at a reasonable speed, which puts you at a disadvantage if you’re one-handed or using imperfect dictation software.
  8. Google has to be able to interpret what category of subject you’re looking for, in order to discern whether you’re trying to find apples, Apple computers, Apple Records, or Fiona Apple.

Some of these barriers between you and your information might seem laughable. But it all seems so easy for you because you’re probably reading this from the ideal environment for Google, i.e. sitting indoors at a desk staring at a computer that you’ve already spent hours of time and hundreds, if not thousands, of dollars to set up. If you’re running down the street trying to figure out which bus route to take, the barriers to using Google become much steeper. Or if you’re driving in your car, or if you’re a Chinese peasant without access to 3G wireless, or if you’re lounging in the pool, and so on.

Even in the best-case scenario, after you jump through all those hoops, you usually have to scan through at least a page of results from the Google search engine to find the one that contains the information you’re looking for. Google does no interpretation, summarization, or analysis on the data it throws back to you. Some search engines do some preliminary classification of results, or they try to anyway, but it’s generally quite rudimentary. Chances are you’ll need to spend at least a few seconds to a few minutes combing through pages to find one that’s suitable, and then you’ll need to search through that suitable page to find the information you want.

I don’t mean to minimize the achievement of the Google search engine. The fact that I can determine within minutes that a) the king of France in 1425 was Charles VII, b) my best friend from junior high school is currently heading the division of a high-definition audio company in Latin America, and c) in 2004, Venezuela produced 2.4 million barrels of oil a day — this is all pretty frickin’ amazing. But that doesn’t mean we shouldn’t note the search engine’s shortcomings. That doesn’t mean we shouldn’t point out that there are still a zillion ways to improve it. There’s still a huge mountain to climb before we can call Google an example of perfect user interface.

But don’t worry, because Google’s on the case.


On DeepGenre: What Works on an Author Website?

Today on DeepGenre, I’ve posted a little article asking for reader and book-buyer feedback on author websites, in particular SF author websites. Quick excerpt: So my question today is this: what do you find useful on an author’s website? I think we can all agree that excerpts help, and at the very least, having a blog doesn’t hurt. But what about the rest? Do you read additional material like chapter annotations, deleted scenes, and first …

Mini-Essay on the Internet and Publishing on SF Signal

I’ve got a mini-essay (three paragraphs) up today in the new “Mind Meld” feature of SF Signal. The question was about how the Internet has impacted publishing and the author’s ability to sell more books. Quick excerpt: But even more important, the Internet has allowed me to keep in touch with readers during the (too long) break between novels. Before the prevalence of websites and blogs, the only way for newer SF authors to keep …

The Plot to Understand Second Life

Last night I had the privilege of attending a reading and interview of renowned science fiction author Paul Levinson in support of his book The Plot to Save Socrates. I stayed in my bathrobe the whole time, because the event took place on Second Life.

I had an ulterior motive for attending. I’m in the process of evaluating promotional ideas for my upcoming novel MultiReal, and the idea of doing a book launch on Second Life has cropped up in my discussions more than once. I created a Second Life profile many moons ago, just to poke around and see what the fuss was about. After a few days, I quickly grew bored with the whole thing and uninstalled the software from my PC. But yesterday, in the service of book promotion, I resurrected it and went exploring once again.

And after attending Paul’s Second Life event, I can now officially say I don’t get it.

This was no fault of Paul Levinson’s. I’ve shared a couple of panels at cons with him, and he seems like a friendly, intelligent, and interesting fellow. The reading itself was quite lively, and the book The Plot to Save Socrates sounds like that perfect combination of thought-provoking and nerdy cool. The plot in a nutshell: a grad student in the future decides to travel back in time to save the ancient Greek philosopher Socrates from drinking the hemlock. (Go read more about it on Paul’s website.) The interviewer herself asked pertinent, thoughtful questions.

But the Second Life aspect of the event basically went like this: I logged in and teleported to a virtual auditorium. I sat down in a virtual chair along with about 25-30 other spectators. The virtual Paul Levinson and the virtual moderator sat in virtual chairs on the stage, next to a virtual spinning copy of The Plot to Save Socrates. And then we all just sat there for an hour doing nothing while the two of them had a very interesting chat on audio.

So besides the novelty factor, what does Second Life offer to book promotion that you couldn’t get by holding your reading on, say, FreeConferenceCall.com or WebEx?

I’m not saying that Second Life is a bad place to hold a book event. If you’re the author, you get to see who’s attending the reading. You get a direct conduit to your own personal bookstore, along with all the tracking that entails. You get the potential of interacting with people who live in remote places you’re not likely to ever hit on the real-world book tour. Oh, and it’s free.

But as I sat in front of my computer and watched my avatar watch Paul Levinson’s avatar watching the moderator’s avatar, I tried and failed to figure out what potential Second Life has for literature over the next ten years. It’s kinda neat. It’s kinda fun. Is that it?

I tried to extrapolate, to think big. What if my name was Stephen King or Dan Brown, and someone gave me $500,000 and six months to put on a fabulous Second Life book event? What could I possibly do? Hire Second Life actors to put on a clunky little pantomime while I read? Create big virtual sculptures of the creatures in my book to hang over the stage? I have a difficult time imagining what I could do that wouldn’t just look silly. I suppose in 15 or 20 years when you can see 3D Hollywood-quality monsters zooming around while you read, that will be pretty cool. But Second Life is still a long way off. Right now they’re closer to King’s Quest IV circa 1988 than they are to Peter Jackson’s The Lord of the Rings.

The problem is that literature is a very one-directional art form that doesn’t translate well into an immersive environment like Second Life. People are always talking about “updating” the reading experience, and so far it’s pretty much all been marketing hokum. Even if we all ditched paper and ink tomorrow and shifted over to Amazon Kindles or some other gee-whiz e-book reader, the basic reading experience wouldn’t change, only the distribution method. You’re still staring at a narrative of sequential words that you read from start to finish. What’s really changed about the narrative experience since the ancient Sumerians sat around the fire to hear The Epic of Gilgamesh? Only three things that I can think of: (1) writing, (2) paper, and (3) hypertext.


Behold, the New ISP

If you’re reading this article, then that means that you’re now viewing my blog at its new home of Bluehost.com. (And if you’re not reading this, then you have officially achieved a state of ultimate paradox. Congratulations.)

I’ve taken the opportunity of moving the blog to make a number of changes, which I list below:

  • Infoquake site is now on WordPress. The Infoquake website is now running on good ol’ WordPress (as opposed to ColdFusion, which is what I originally created the site in). This means it will be a heck of a lot easier for me to maintain, and will allow me to install nifty plug-ins and the like. Plus once I do the redesign in 2008, all I’ll need to do is modify the skin and I’m good to go — no need to reprogram the whole thing. But the coolest thing about moving to WordPress? I can use the IMM-Glossary plugin to give me automatic popup definitions for the terms in the book. Go look at the excerpt page and run your mouse over one of the words with the dotted underlines to see it in action. (Or just look at the screen cap to the right.)
  • The John Barth Information Center is also now on WordPress. Some of you may or may not realize that I’ve maintained a fan website devoted to the Postmodernist author John Barth for the past, oh, 12 or 13 years. (Go ahead, search Google for “John Barth” and see what comes up just below the Wikipedia article. I’ll wait.) This site is now running WordPress too, and seeing as I’ve been such a horrible steward of the site, I’m hoping to open it up to other John Barth fans to write, administer, comment, and manage.
  • All my personal websites are now running on LAMP. Yes, I do frequently defend Microsoft and have not always been keen on open source software. But I’ve decided to move to an all-open source LAMP (Linux, Apache, MySQL, and PHP) environment, because a) it’s cheaper, and b) WordPress runs better that way.
  • Cleaner permalinks. If you look at your browser address bar, you’ll notice that the “/blog” is gone. As is the “index.php” and the date-based URL. Why? Well, it’s cleaner, that’s why. Whereas the old WordPress installation had permalinks in the form www.davidlouisedelman.com/blog/index.php/year/month/day/title/, the new installation shows permalinks in the form www.davidlouisedelman.com/category/title/. Much easier to read, and much more search engine-friendly. (Yet thanks to the magic of John Godley’s Redirection plug-in, you can still follow old links to the new pages. Really, this thing is a miracle — regular expression-based redirection, just like in Apache, but you don’t have to leave WordPress or mess with .htaccess files. Plus 404 logging, and more.)
  • My old book reviews and interviews are now part of the blog. If you take a look at the full archives page, you’ll notice that I now have blog entries dating back to 1994. No, I wasn’t the most prescient individual in the world, I’ve simply moved all of my old book reviews and author interviews from the mid-90s into WordPress. This means they’re accessible through the search and the archives and get the benefits of tagging and all that WordPress-y goodness. You can read my Baltimore City Paper interviews with Tim O’Brien, Nicholson Baker, and Stephen Hunter here — still three of the most popular pages on the website — and more.
  • I’ve licensed the pieces on this blog with Creative Commons. The pieces on this blog now come with an Attribution-Share Alike Creative Commons license. Which means you’re free to copy them and remix them in any fashion you’d like, as long as you attribute the original to me and share your copies and remixes under the same license. I really have no idea what this is going to do for me; I figure that it will either a) help, or at least b) not cost me anything.
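The old-to-new permalink mapping is the kind of thing a single regular expression handles nicely. Here’s a rough Ruby sketch of the rewrite rule — purely illustrative, since the Redirection plugin is PHP running inside WordPress, and the category slugs in the lookup table below are made up for the example:

```ruby
# Illustrative only: shows the shape of a regex-based redirect from
# /blog/index.php/year/month/day/title/ to /category/title/.
OLD_PERMALINK = %r{\A/blog/index\.php/\d{4}/\d{2}/\d{2}/(?<title>[^/]+)/?\z}

# Hypothetical title-slug -> category-slug lookup (in WordPress this
# would come from the database, not a hard-coded hash).
CATEGORY_FOR = { "broken-technology" => "on-writing" }

def redirect(path)
  m = OLD_PERMALINK.match(path)
  return nil unless m  # not an old-style permalink; serve as-is
  category = CATEGORY_FOR.fetch(m[:title], "uncategorized")
  "/#{category}/#{m[:title]}/"
end
```

One regex plus one lookup covers every old URL, which is why a plugin like Redirection can do the whole migration without touching .htaccess.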


Shelfari: LibraryThing with a New Coat of Paint?

LibraryThing seems to have a new competitor. Or, at least, I’ve just become aware of them.

I’ve made no secret about the fact that I’m a big fan of LibraryThing. I’ve spent hours and hours tweaking my LibraryThing profile, adding books to my catalog, and just browsing around other people’s shelves. I’ve spoken with Tim Spalding, LibraryThing’s founder, and he’s taken the time to respond to e-mails of mine and feature me on the LibraryThing blog once or twice.

So I felt a little like a cheating spouse when I responded to someone’s invitation to sign up for a Shelfari account last week. But it was actually much easier than cheating on a spouse, because I didn’t have to go through that whole tedious seduction and getting-to-know-you routine. I exported my whole LibraryThing catalog in about three clicks, and imported it right into Shelfari. In a way, it was like moving in with your mistress and skipping straight to the seven-year-itch all in one shot.

Here are screen captures of my catalog on LibraryThing and Shelfari, side by side. (Visit my shelf on Shelfari.)

LibraryThing screen shot Shelfari screenshot

After noodling around with Shelfari a little bit, here’s a synopsis of my thought process:

  1. The name “Shelfari” is incredibly lame.
  2. Shelfari looks slicker than LibraryThing.
  3. Shelfari is more user-friendly than LibraryThing.
  4. LibraryThing is fairly slick and user-friendly in the first place.
  5. So why would I switch to Shelfari?

The big difference between LibraryThing and Shelfari is that LibraryThing caps its free accounts at 200 books; Shelfari doesn’t appear to have any limits. But keep in mind that the LibraryThing rates are eminently reasonable. $10 a year for all you can catalog, or $25 for a lifetime membership.

Oh yeah, and Shelfari has a Facebook application. (I see that LibraryThing is testing out MySpace and LiveJournal widgets, which is cool, but IMHO they need to get cranking on a Facebook app.)

But there’s a huge amount of functionality that LT has which Shelfari doesn’t seem to have. I went browsing through “my shelf” on Shelfari and discovered that my copy of Shel Silverstein’s Where the Sidewalk Ends doesn’t have Silverstein listed as the author, only as the illustrator; and despite the fact that there’s an “Edit” link next to Edition Details, there doesn’t seem to be any way to edit that information. I was able to change editions to one which does have the author listed… but this one doesn’t have an illustrator listed. LT, by contrast, lets you edit book details to your heart’s content and upload custom covers that the whole community can use. Does the system think that “J.D. Salinger” and “JD Salinger” are two different people? Easy enough to fix that in LibraryThing.
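That “J.D. Salinger” versus “JD Salinger” problem is a plain text-normalization issue. Something like this quick Ruby sketch — my own illustration, not anything either site actually runs — would catch most of the duplicates:

```ruby
# Collapse punctuation and spacing so "J.D. Salinger", "J. D. Salinger",
# and "JD Salinger" all reduce to the same key. Crude, but it catches
# the common initials-and-periods variants.
def author_key(name)
  name.downcase.gsub(/[^a-z0-9]/, "")
end
```

You’d still want a human to confirm merges (two genuinely different authors can collapse to the same key), which is exactly the kind of community cleanup LibraryThing already supports.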

This community focus is one of the things that makes LibraryThing so appealing. It’s kind of like — well, a library. It’s really, really easy to import and export your entire catalog so you can use it in other applications. Put it on your blog? Tie it in to your Firefox? Access it from your cell phone? No problem! If there are inaccuracies in the catalog, everybody pitches in to help fix it. If you read through the help menus and fine print, you’ll see quirky little bits of humor that give the site some attitude. “If the buzz page doesn’t convince you,” says a little blurb on the LibraryThing home page, “you cannot be convinced. Go away.” There’s a lack of commercial focus that’s very reminiscent of that library feeling. Come on in! Put your feet up, hang around as long as you like, buy some of the books on the Community Used Book table in the back if you’d like, but no pressure.


Dave on Ruby on Rails

Imagine you’ve never played the game of football (the American version) before. You’ve never even seen a football game, and you have no idea what the rules are. But somebody tells you it’s way hella cool, and you’ve got the build for it, so why don’t you come on down and join the team?

So you suit up and get on the field, but you still don’t have the foggiest idea what’s going on. Sometimes people are running with the ball, sometimes they’re throwing the ball, sometimes they’re kicking it or just pushing other players around and jumping on them for seemingly no reason. You try to ask the other players what’s going on, and they’re perfectly willing to help you — but all you can catch is a few seconds of their time between plays when they’re out of breath.

That’s kind of how I feel trying to learn Ruby on Rails.

What the hell is Ruby on Rails? For all you non-technical people out there, it’s a programming environment that’s supposed to make development super, mega easy.

Those with a more technical bent have probably already heard about Ruby on Rails. But for those who haven’t, it’s an open-source web framework where you can use the popular Ruby language to build robust applications using the Model-View-Controller pattern in astonishingly few lines of code.

How easy is it? Well, once you’ve got it installed properly, you literally type “rails book” and then “ruby script/generate scaffold chapter.” In the space of seconds, RoR generates all of the files you need for a project called “book” composed of multiple “chapters.”

From there on out, it’s amazingly simple too. You can describe the data model with two basic statements:

class Book < ActiveRecord::Base
  has_many :chapters
end

class Chapter < ActiveRecord::Base
  belongs_to :book
end

RoR takes care of generating all of the HTML files needed to make it work on the fly. Within five minutes, you can have an application that will let you seamlessly add, edit, and delete the chapters of a book. No more mucking around with granular SQL statements and spending hours debugging.
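To make that concrete without a database handy, here’s roughly what those two declarations buy you, sketched in plain Ruby. The association names mirror what Rails generates (book.chapters, chapter.book), but everything else here is my own stand-in, not the real ActiveRecord machinery:

```ruby
# Plain-Ruby stand-in for the one-to-many relationship that
# has_many/belongs_to wire up against the database.
class Book
  attr_reader :title, :chapters

  def initialize(title)
    @title = title
    @chapters = []
  end

  def add_chapter(chapter)
    chapter.book = self   # the belongs_to side
    @chapters << chapter  # the has_many side
    chapter
  end
end

class Chapter
  attr_reader :name
  attr_accessor :book

  def initialize(name)
    @name = name
  end
end

book = Book.new("Infoquake")
book.add_chapter(Chapter.new("Chapter 1"))
book.add_chapter(Chapter.new("Chapter 2"))
```

In real Rails you never write any of this plumbing; the two `has_many`/`belongs_to` lines plus the database schema give you the whole relationship, which is the point.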

The problem is, you’ve got to get it installed properly.

And getting Ruby on Rails installed properly is a bitch. It’s taken me days, and I’m still not sure I’ve got it done right. Luckily, you don’t need a separate web server to serve up the application, because RoR comes with a built-in lightweight web server called WEBrick. Oh, but wait, WEBrick isn’t powerful enough for a production server, so we need Apache. With the FastCGI module installed and configured for Ruby files. Oh, but wait, nobody uses FastCGI to do this anymore, everyone’s using something called Mongrel these days…


The End of MySpace

Ziff-Davis’ Baseline recently published an insider’s look at how MySpace functions on a technical level, and it’s quite revealing.

The common assumption among programming types about MySpace is that the system started off as somebody’s pet project and quickly mushroomed beyond the programmers’ control. Rather than cooling off growth to create a better infrastructure, the MySpace folks opted for growth at any costs. As a result, we end up with the buggy, unreliable usability nightmare that is MySpace today. Now, it’s assumed, the programmers and sysadmins are scrambling to play catchup.

This article pretty much confirms these assumptions. According to the article, MySpace started out as a ColdFusion-based project — and while ColdFusion is ridiculously easy to program, any developer can tell you it’s got a reputation (deserved or not) for being a little slow and resource-heavy. So as they’ve grown, MySpace has been moving to Microsoft’s ASP.Net and relying on emulators to port some of the older code over.

One can’t really blame MySpace for that growth-at-any-cost logic. It’s the kind of hot-air logic that propelled companies like Pets.com to the stratosphere back in the ’90s and made a ton of people oodles and oodles of cash. It’s Web 1.0 thinking. Using such Web 1.0 thinking, MySpace has quickly vaulted to become the most visited site on the Internet and gotten snatched up by Rupert Murdoch’s News Corp. in the process.

But as a result, they’ve built on an unsustainable foundation. They’ve made the classic gamble that short-term gain will trump long-term stability. And like so many Web 1.0 companies that came before them, MySpace is headed for a big, clumsy fall. Here’s why.

  • Easy come, easy go. The base audience for MySpace consists of teenagers and folks in their twenties. That’s not to say this is the only demographic using MySpace, but that’s the core audience. These people flocked to the service for the same reasons young people flock to anything: it was new, it was cool, it was free, and everyone they knew was doing it. Give them an alternative that’s newer, cooler, better functioning, and more reliable — not to mention backed by big corporate dollars — and they’ll flock there just as quickly.
  • Insecurity. Recently someone came up with the grand idea of distributing malicious code through a security vulnerability in embedded QuickTime videos. Folks have been taking advantage of CSS and HTML quirks to hack MySpace almost since the place began. More and more people are complaining about hacked profiles and hijacked identities. MySpace has demonstrated time and again that they’re behind the curve when it comes to security. So I think it’s highly likely that at some point in the near future, we’ll see a series of successful crippling attacks on MySpace that will send people running in a panicky exodus.
  • Slowing pace of innovation. Adapt or die, that’s the unofficial motto of the Internet. And unlike, say, Google, which continues to pump out features and applications by the gallon, MySpace has remained largely stagnant for the past year. They released a lamentable, old-school IM client and better video integration, but otherwise the system is pretty much the same as it was 18 months ago. As MySpace’s technical problems grow and their folks spend more and more time just keeping up with demand, they’re going to fall even further behind.
