
Improvements to the Materialized View API

[img_assist|nid=196|title=An eye-catching graphic, largely irrelevant to this blog post.|desc=|link=none|align=right|width=200]

Introduction

The Materialized View API (related posts) provides resources for pre-aggregation and indexing of data for use in complex queries. It does this by managing denormalized tables based on data living elsewhere in the database (and possibly outside it). As such, materialized views (MVs) must be populated and updated from large amounts of data. As users change data on the site, MVs must be intelligently updated to avoid complete (read: very slow) rebuilds. Part of performing these intelligent updates is calculating how user changes to data affect the MVs in use. Until now, these updates had limitations in scalability and capability.

Update and deletion propagation

In the first iteration of the Materialized View API (MV API), which is currently deployed to Drupal.org, update and deletion propagation were rather naïve: the hooks used to trap changes (hook_nodeapi and hook_comment) simply called the updater for both the entity itself and the updater for anything related. For example, hook_comment() called both the updaters for the comment itself and the node parent of the comment:

function materialized_view_comment($a1, $op) {
  $comment = (array) $a1;
  switch ($op) {
    case 'insert':
    case 'update':
    case 'publish':
    case 'unpublish':
    case 'delete':
      MVJobQueue::update('comment', $comment['cid']);
      // Also "update" the node for the comment.
      MVJobQueue::update('node', $comment['nid']);
  }
}

Calling updaters for related entities is important for aggregation-based data sources, like one that, for a given node, determines the later of when the node was changed and when the latest comment to the node was posted. A change to either the node or a comment related to the node may change the aggregated value:

class MVLastNodeActivityTimestamp extends MVColumn {
  public function getValue($entity_type, $entity_id) {
    $timestamp = db_result(db_query('SELECT MAX(c.timestamp) FROM {comments} c
      WHERE c.nid = %d', $entity_id));
    if (!$timestamp) {
      $timestamp = db_result(db_query('SELECT n.changed FROM {node} n
        WHERE n.nid = %d', $entity_id));
    }
    return $timestamp;
  }
  [...]
}

Building propagation into the change-capture hooks proved sufficient for the initial MV API uses, which were forum-centric, because update propagation was highly predictable: nodes map to themselves, and comments map to themselves and their parent nodes.

But a limitation quickly became apparent: this approach would not scale as more entity-to-entity relationships and more MV-supported entity types were introduced.

Here’s why:

  • Update notifications quickly became noisy: MVs based purely on node data would be updated whenever comments for the node changed, even if the node-based MV didn’t rely on comment data.
  • Mapping change propagation in the hooks created misplaced burdens. It’s impossible for a change-capture hook to predict all the possible relationships MV data sources might introduce. For example, if we wanted an MV based on the number of replies to a comment, we would have to trigger updates for every parent comment walking up the tree. Do we update hook_comment() yet again?

The solution was to put the change-propagation burden on the data sources, with the default change-propagation algorithm being “changes to X require updating rows related to X in the MVs.”

The default covers the standard entity-attribute data sources (e.g. the published status of a node) while allowing aggregated sources to become much smarter.

The default change mapper in the MVColumn abstract class:

abstract class MVColumn {
  [...]
  public function getChangeMapping($entity_type, $entity_id) {
    $changed = array();
    $changed[$entity_type] = array($entity_id);
    return $changed;
  }
  [...]
}

But a data source like MVLastNodeActivityTimestamp — which provides, for each node, the later of the last comment posting and the node’s change timestamp — has more complex change-propagation logic. (This code admittedly assumes that comments will post-date the last node changes.)

MVLastNodeActivityTimestamp’s change propagation logic:

class MVLastNodeActivityTimestamp extends MVColumn {
  [...]
  public function getChangeMapping($entity_type, $entity_id) {
    $changed = array();
 
    if ($entity_type == 'node') {
      // A change to a node only affects its own value.
      $changed['node'] = array($entity_id);
    }
    else if ($entity_type == 'comment') {
      $comment = MVEntityCache::get('comment', $entity_id, '_comment_load');
 
      // A change to a comment affects the value of the node it's attached to.
      $changed['node'] = array($comment['nid']);
    }
 
    return $changed;
  }
  [...]
}

getChangeMapping() effectively says:

  • This data source changes whenever a node changes or a comment changes.
  • A node change affects the value of this data source for that node.
  • A comment change affects the value of this data source for the parent node.

Now when an entity changes, the Materialized View API walks through data sources in use on any MV and establishes the unique set of entities needing updating. If a node-based MV doesn’t use any data based on comments, comment changes won’t trigger any changes in that MV. (See the new update() method for class MaterializedView.)
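
As a rough sketch of that walk (illustrative only, not the module’s actual update() code; the $data_sources_in_use parameter is an assumption), each data source reports which entities its value depends on, the results are de-duplicated, and one update job is queued per affected entity:

function example_mv_propagate($entity_type, $entity_id, $data_sources_in_use) {
  $changed = array();

  // Ask every data source in use on any MV which entities its value
  // depends on when this entity changes.
  foreach ($data_sources_in_use as $column) {
    foreach ($column->getChangeMapping($entity_type, $entity_id) as $type => $ids) {
      foreach ($ids as $id) {
        // Key by ID so each affected entity is queued only once.
        $changed[$type][$id] = $id;
      }
    }
  }

  // Queue one update job per affected entity.
  foreach ($changed as $type => $ids) {
    foreach ($ids as $id) {
      MVJobQueue::update($type, $id);
    }
  }
}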

But this caused a problem (already solved in the getChangeMapping() code above): while hook_comment() gets passed any comments being deleted, it’s not possible for a data source to later load those comments and look up the related nodes to calculate propagation. The solution for this also became a useful overall optimization: the entity cache.

The entity cache

The disconnection between change-capture hooks and data sources used to result in excessive object loading. For example, changing the timestamp on a comment would pass the comment to hook_comment(), but a data source relying on the timestamp for the comment would load the comment fresh from the DB while updating MVs at the end of the page request (when MV updates currently occur).

Now, change-capture hooks populate the entity cache, allowing most data sources to use statically cached entity data. The entity cache also transparently loads entities in the background, keeping the data source code clean.

Of course, the entity cache was originally created to solve the change propagation for deleted items problem. It solves that problem by caching deleted items in the change-capture hooks. MV data sources are then able to load basic data for deleted items despite running after the items disappear from the database.
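
For illustration, here is a minimal sketch of such a per-request static cache. Only the get() call with a load callback appears in the API used above; the class name, the set() method, and the internals are assumptions:

class ExampleEntityCache {
  protected static $entities = array();

  // Change-capture hooks store entities they already have in hand,
  // including entities that are about to be deleted from the database.
  public static function set($entity_type, $entity_id, $entity) {
    self::$entities[$entity_type][$entity_id] = (array) $entity;
  }

  // Data sources read from the static cache; on a miss, the entity is
  // loaded transparently using the supplied callback (e.g. '_comment_load').
  public static function get($entity_type, $entity_id, $load_callback = NULL) {
    if (!isset(self::$entities[$entity_type][$entity_id]) && $load_callback) {
      self::set($entity_type, $entity_id, call_user_func($load_callback, $entity_id));
    }
    return isset(self::$entities[$entity_type][$entity_id]) ? self::$entities[$entity_type][$entity_id] : NULL;
  }
}

In this sketch, hook_comment() would call ExampleEntityCache::set('comment', $comment['cid'], $comment) before the deletion is processed, so getChangeMapping() can still resolve the parent node afterward.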

Challenges ahead

Change propagation can be expensive: for modifying a taxonomy term, it’s O(n), where n is the number of nodes with the term. Eventually, change propagation will have to be batched and handled offline, which raises the next issue.
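
To illustrate the cost, a hypothetical taxonomy-based data source (not part of the module) would have to map a term change onto every node carrying that term:

class ExampleMVNodeTermData extends MVColumn {
  [...]
  public function getChangeMapping($entity_type, $entity_id) {
    $changed = array();

    if ($entity_type == 'node') {
      // A change to a node only affects its own value.
      $changed['node'] = array($entity_id);
    }
    else if ($entity_type == 'term') {
      // A change to a term affects every node tagged with it: O(n)
      // in the number of nodes carrying the term.
      $changed['node'] = array();
      $result = db_query('SELECT tn.nid FROM {term_node} tn WHERE tn.tid = %d', $entity_id);
      while ($row = db_fetch_object($result)) {
        $changed['node'][] = $row->nid;
      }
    }

    return $changed;
  }
  [...]
}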

It’s now more complex to queue MV updates to happen offline (read: during cron). The data necessary to calculate propagations lives in a static cache that disappears at the end of each page request. The only truly scalable option now is to have a persistent entity cache. That way, change propagation can happen offline, especially for large sets.

Some sets are so large that the most reasonable option may be to trigger reindexing for affected MVs. Changing a taxonomy term will fall under this category until change propagation can be batched.

The end scalability goal is for the real-time overhead of running MVs to be very small and linearly proportional to the number of entity changes requested by the user, but the challenges above need working solutions before we reach that goal.

Opportunities

This new architecture opens the door for an explosion of MV data sources and supported entity types. In particular, MV should expose every CCK field (or Field API field in D7) as an MV data source.

Work on the long-awaited automatic “Views to MV” conversion system is now underway. It will be possible to automatically convert many common Views to run on MV-based data sets, dramatically improving scalability for converted Views without requiring any external system (like Solr, Lucene, or CouchDB).

Advanced Drupal form theming: Take control of error styling with a form-item-error class

Note: This HOWTO covers Drupal 6.x.

By default, Drupal adds an .error class to the form element itself: textarea, select, input, and so on. Sometimes, that’s not good enough. Maybe a client needs the label’s color changed — or a big, red border encompassing both the label and input elements.

This can be achieved by overriding theme_form_element() to add an error class to div.form-item, the div that wraps all elements in a form.

Add these lines to the top of your theme_form_element() override:

function MYTHEME_form_element($element, $value) {
  // This is also used in the installer, pre-database setup.
  $t = get_t();
 
  // Start compiling classes for the .form-item wrapper
  $classes[] = 'form-item';
 
  // Add an error class to the .form-item wrapper
  $exempt_elements = array('checkbox', 'radio', 'password_confirm');
  if (form_get_error($element) && !in_array($element['#type'], $exempt_elements)) {
    $classes[] = 'form-item-error';
    $classes[] = 'form-item-error-'. $element['#type']; // Optional
  }
 
  // Build the .form-item wrapper
  $output = '<div class="'. implode(' ', $classes) .'"';
  if (!empty($element['#id'])) {
    $output .= ' id="'. $element['#id'] .'-wrapper"';
  }
  $output .= ">\n";

The lines above should replace the following:

function theme_form_element($element, $value) {
  // This is also used in the installer, pre-database setup.
  $t = get_t();
 
  $output = '<div class="form-item"';
  if (!empty($element['#id'])) {
    $output .= ' id="'. $element['#id'] .'-wrapper"';
  }
  $output .= ">\n";

All form elements except those listed in the $exempt_elements array will have two classes applied to div.form-item: .form-item-error and .form-item-error-ELEMENT_TYPE. Feel free to change these as you like.

Why are some elements exempt? Checkboxes and radio buttons usually come in groups of two or more wrapped inside a single div.form-item, and each individual box or button is wrapped in yet another div.form-item. This nesting can make theming difficult, so I’ve exempted them here. (You can, of course, customize this override to fit your needs.)

Here’s what the markup looked like before adding wrapping error classes. Note that the .error class is applied only to the form elements themselves, which makes theming some elements, like radio buttons, impossible in some browsers:

<div class="form-item" id="edit-first-name-wrapper">
  <label for="edit-first-name">First name: <span class="form-required" title="This field is required.">*</span></label>
 
  <input maxlength="255" name="first_name" id="edit-first-name" size="60" value="" class="form-text required error" type="text" />
</div>
 
<div class="form-item" id="edit-last-name-wrapper">
  <label for="edit-last-name">Last name: <span class="form-required" title="This field is required.">*</span></label>
 
  <input maxlength="255" name="last_name" id="edit-last-name" size="60" value="" class="form-text required error" type="text" />
</div>
 
<div class="form-item">
  <label>Gender: <span class="form-required" title="This field is required.">*</span></label>
 
  <div class="form-radios"><div class="form-item" id="edit-gender-m-wrapper">
  <label class="option" for="edit-gender-m"><input id="edit-gender-m" name="gender" value="m" class="form-radio error" type="radio" /> Male</label>
  </div>
 
  <div class="form-item" id="edit-gender-f-wrapper">
  <label class="option" for="edit-gender-f"><input id="edit-gender-f" name="gender" value="f" class="form-radio error" type="radio" /> Female</label>
  </div>
 
  </div>
</div>

And here’s the same markup after applying the override above:

<div class="form-item form-item-error form-item-error-textfield" id="edit-first-name-wrapper">
  <label for="edit-first-name"><span class="form-required" title="This field is required.">*</span> First name:</label>
 
  <input maxlength="255" name="first_name" id="edit-first-name" size="60" value="" class="form-text required error" type="text" />
</div>
 
<div class="form-item form-item-error form-item-error-textfield" id="edit-last-name-wrapper">
  <label for="edit-last-name"><span class="form-required" title="This field is required.">*</span> Last name:</label>
 
  <input maxlength="255" name="family_name" id="edit-last-name" size="60" value="" class="form-text required error" type="text" />
</div>
 
<div class="form-item form-item-error form-item-error-radios">
  <label><span class="form-required" title="This field is required.">*</span> Gender:</label>
 
  <div class="form-radios"><div class="form-item" id="edit-gender-m-wrapper">
  <label class="option" for="edit-gender-m"><input id="edit-gender-m" name="gender" value="m" class="form-radio error" type="radio" /> Male</label>
</div>
 
<div class="form-item" id="edit-gender-f-wrapper">
  <label class="option" for="edit-gender-f"><input id="edit-gender-f" name="gender" value="f" class="form-radio error" type="radio" /> Female</label>
</div>
  </div>
</div>

The wrapping .form-item divs now have error classes: form-item form-item-error form-item-error-textfield.

The technique described above was inspired by the function _form_set_class(), which is responsible for adding .required and .error classes to form elements.

Four Kitchens' website featured on 960.gs

Our website has been featured on 960.gs, home of the 960 grid system! This is quite an honor, as we’re big fans of grid-based design — especially 960.gs — and have begun implementing its principles and techniques in virtually every project.

[img_assist|nid=190|title=|desc=|link=none|align=center|width=450|height=470]

[img_assist|nid=189|title=12-column grid overlay|desc=|link=popup|align=right|width=300|height=306]

To see grid-based design in action, go to 960.gs and click the “show grid” button above the screenshot.

Note that every region on the page is contained neatly within the overlaid columns. The understated simplicity of the layout and the remarkably trim CSS used to achieve it are at the core of grid-based design.

Last month, I presented sessions at DrupalCamps in Copenhagen, Helsinki, and Stockholm on Drupal theming using 960.gs. You can download the slide deck on our presentations page.

In a few weeks, 960.gs creator Nathan Smith and I will co-present a session on accelerated grid theming using NineSixty, the Drupal port of 960.gs, at Drupal Design Camp Boston.

Thanks to Nathan Smith for featuring us. I’d also like to thank Joon Park, whose NineSixty theme has made major strides towards improving the functionality of grid-based design. (The Four Kitchens theme is a subtheme of NineSixty.)

David's Epic Presentation Megapost

Four Kitchens and GeekAustin present an evening of Drinks and Drupal

We’re co-hosting a party and you’re invited!

[img_assist|nid=186|title=|desc=Photo by Antonio Zugaldia on Flickr (CC-Attribution)|link=none|align=right|width=200|height=317]

On Wednesday, May 20th at 6:30pm, Four Kitchens is teaming up with GeekAustin to spread Drupal love in Austin. This free event is a chance for local Drupal professionals to share their passion with the curious and uninitiated masses of our fair city.

From Lynn Bender at GeekAustin:

We’ll have drinks out front and presentations in back. We’ve been sending personal invites to Drupalistas throughout Texas. So, if you have any unanswered Drupal questions, this will be the place to find the answers.

Master web chef David Strauss will be on hand to give a presentation on “Quick and maintainable site-building for charities and non-profits using Drupal and CiviCRM.”

So if you’re interested in learning how the Drupal CMS can benefit your project, come and meet the Austin and Texas folks that make it happen every day. And remember, just like Drupal, this event is FREE.

Drinks and Drupal
Co-hosted by Four Kitchens and GeekAustin
Date: Wednesday, May 20, 2009
Time: 6:30pm - 10:30pm
Venue: Union Park Austin
Street: 612 W Sixth St. Austin, Texas 78701
RSVP on Facebook

Update: The slide deck is attached below.

Alternatives to rebasing in Bazaar

A discussion recently arose on the Bazaar mailing list asking, “Why isn’t rebase support in core?” Rebase support is currently packaged as a plugin. This plugin is widely distributed, even in the standard Mac OS X installation bundle.

There are boring reasons that rebase support isn’t in core, like the lack of strong test coverage. More interesting are questions about the necessity of rebasing in typical workflows.

What is rebasing, and why should I care?

In large projects, there’s a mainline branch representing the current, global, coordinated development. In Drupal’s case, this is CVS HEAD. This mainline might not always be in perfect condition, but there’s a general sense that the mainline is not a sandbox for untested changes. Many changes are small enough that the developers simply work on and test a patch, but this workflow is inadequate for larger development projects like Fields in Core. Such large features require their own branch for development, a feature branch.

A feature branch allows development of a feature in isolation from the mainline but with the eventual intent of merging the changes back into the mainline. Because feature branches are created to foster long-term, divergent development from the mainline, it’s common for both feature development and mainline development to happen in parallel. This parallel development creates a problem: How do developers on the feature branch prepare for the eventual re-integration of their feature code into the mainline?

There are a few options:

  • Don’t sync changes. This option makes merging the feature back into the mainline painful. This option also defeats the purpose of developing and testing the feature in isolation because merging two tested (but divergent) branches often results in one broken (but converged) branch.
  • Merge the feature into the mainline before making any changes to the mainline and then re-branch for more feature work after making mainline changes. Merging an untested or incomplete feature into the mainline makes this option unattractive and impractical. This option is so silly, I only included it for completeness.
  • Periodically update the feature branch from the mainline. This is ideal because the feature branch continually answers the question “What if we merged this feature into the mainline?” and is ready for quick merging into the mainline without any disruption to mainline work.

The third option is the only practical one. But how should it work? What should the feature branch history look like after syncing from the mainline?

Back to rebasing…

Rebasing integrates the updates to the mainline as ancestors to the changes on the feature branch. The commit history is reorganized (read: rebased) as if the feature branch were freshly created from the mainline and all work were done on top of that. There are many theoretical objections to rebasing, and I won’t rehash them here. There’s general consensus that rebasing is sort of icky.

I find that many rebase users use the tool because they’re not aware of better workflows. I’ll address each (supposed) reason to use rebase in its own section.

“I want to keep my feature branch updated from the mainline.”

The better choice is to run bzr merge [mainline] on the feature branch. This command will update the common ancestry between the feature and mainline branches so that the feature branch includes the latest changes from the mainline and is ready for smooth merging back into the mainline.

“I want to view only the revisions that make up the feature I’ve been working on.”

With a rebase, it’s reasonably clear which revisions constitute the feature work: they’re the top ones. But rebasing is not the best choice for reviewing this list. Run bzr missing --mine-only [mainline] from the feature branch, and Bazaar will output all the feature branch’s unique revisions without mangling the actual history (the way rebasing does).

“I want a human-readable summary of how merging the feature into the mainline will affect the code.”

For background, a rebase user would run a diff from the oldest feature-specific commit to the latest commit, but there’s a better way. Instead, run bzr diff --old=[mainline], and Bazaar will provide the net diff for merging the feature into the mainline. Now, don’t use this diff for anything but human review; you should still use bzr merge from the mainline to integrate the feature branch’s changes and preserve all history.

Creating a merge directive with bzr send provides an identical human-readable diff to the method above, but a merge directive also includes all the binary data Bazaar needs to perform a history-preserving merge.

“I want to maintain a patch set on top of the mainline.”

Rebasing commits is an ugly way to do this because you don’t retain your own history of work on each patch or the history of how rebasing has changed each patch. Bazaar has a plug-in called “Looms” that provides direct support for a much better patch set workflow. I’m a touch skeptical of Looms’ stability, so I just do what Looms does under the hood: maintain multiple branches, each derived (branched) from the one below. Each branch represents a patch. This method retains full, original history, including any changes I’ve made to the patches. When the mainline updates, I simply merge the mainline changes up through my patches.

“I want to clean up my commit history prior to submitting my changes to the mainline.”

Rebasing may group the feature commits, but it doesn’t make them coherent or pretty. It’s more effective to do the following:

  1. bzr merge [mainline]
  2. Use bzr diff --old=[mainline] on the feature branch to create a net diff.
  3. Get a fresh branch from the mainline.
  4. Apply the net diff as a patch.
  5. Shelve all changes.
  6. Work through unshelving the changes and committing them to create a coherent, pretty history.
  7. Create a merge directive using bzr send.
  8. Submit the merge directive.

“[Your reason here]”

I’d like to hear from users of any distributed version-control system why they use “rebase” in their workflows, even if their reason is one I’ve discussed above.

Drupal.org redesign sprint San Francisco: Day 4

[img_assist|nid=170|title=|desc=Photo by Franco Folini on Flickr (CC-Attribution-ShareAlike)|link=none|align=center|width=600|height=445]

Despite being held on a Saturday, more than 15 dedicated Drupalers showed up for Day 4 of the San Francisco Drupal.org redesign sprint. Here’s what was achieved.

Josh Koenig:

  • Made a “token” contribution. (Get it? Tokens.)
  • Developed a UI for setting up activity logging (the following/tracking utility seen on the Dashboard).
  • Worked on a cross-site delivery system across the *.drupal.org infrastructure.

David Strauss:

  • Built the server-side components for activity reporting and logging.
  • Built a microformat-based facet system for activity reporting.
  • Finished the single sign-on system for all *.drupal.org properties.

Chris Bryant:

  • Developed content and site architecture maps comparing the Drupal.org structure to what we think Mark Boulton is suggesting in his prototypes.
  • Reviewed the list of Dashboard widgets and where they will appear throughout the site.
  • Mapped features needed to implement project-related functionality.

Mark Burdett:

  • Worked on the project browsing form.
  • Developed some tools for tracking and displaying CVS statistics.

Derek Wright:

  • Delegated tasks to other developers.
  • Reviewed several patches.
  • Fixed some project UI problems.

Neil Drumm:

  • Deleted more code than he added. (Nice.)
  • Worked on the “Add to Dashboard” links for widgets.
  • General UI/UX work for adding widgets to the Dashboard.
  • Figured out how cross-site widgets will work across *.drupal.org.
  • Ordered pizza. (Nice.)

Károly Négyesi (chx):

  • Worked on a new parser for the API module. His work will eventually allow commenting on api.drupal.org.

Dmitri Gaskin:

  • Built the jQuery-powered map on the front page.
  • Started work on remote Dashboard widgets.

Jerad Bitner:

  • Worked on the jQuery side of the Dashboard.
  • Recruited Nate Haug to help.

Courtney Miller:

  • Changed drop-down search form CSS to properly style the checkboxes after they were changed from radio buttons. She also made room for the “Popular Searches” feature in that form.
  • Worked on the user picture feature.
  • Fixed formatting and grid layout of user profiles.
  • Created logic to place section titles as non-header text while styling them to appear like <h1> elements.
  • Documented her side of the comment titles debate. (We DO NOT AGREE.)
  • Added styling of book navigation menus.

Erik Hopp:

  • Fixed $links styling for nodes.
  • Consolidated CSS.
  • Fixed the Views administration interface.
  • Added node type classes to node output.
  • Fixed styling of the “Develop with Drupal” block on the front page.
  • Researched styling of the Project module. Later, he will propose a new, more maintainable markup style.
  • Improved accessibility of navigation tabs.
  • Improved skip links.

Todd Nienkerk (that’s me!):

  • Created graphical elements for the world map on the front page.
  • Mapped all colors from the prototypes into a master color palette.
  • Created color palette for status, error, and help messages.
  • Added comment support to the Permalink module. The module now adds an anchored permalink for each comment.
  • Created comps for documentation pages. I still need to style classes for good versus bad examples of code, style, language, etc.

[img_assist|nid=169|title=Kieran Lal|desc=Photo by lolg42 on Flickr|link=none|align=right|width=150|height=223]

Thanks to the omniscient Kieran Lal for posting these earlier San Francisco sprint updates:

Kieran enjoys fine food, wine, and dispensing knowledge of San Francisco’s microclimates and underground economies. In his spare time, he works tirelessly to build momentum for the Drupal.org redesign.

Speaking of the redesign…

You can help!

We need you to help us launch the new Drupal.org. Here’s what you can do:

[img_assist|nid=171|title=|desc=Photo by wili_hybrid on Flickr (CC-Attribution)|link=none|align=center|width=600|height=400]

Drupal's vulnerability reports are not signs of security weakness

[img_assist|nid=166|title=|desc=Photo by loop_oh on Flickr.|link=none|align=right|width=300|height=200]

I’ve been tweeting back and forth with Alex Limi, one of the founders of Plone, about the validity of the security analysis from a CMS comparison report that includes Plone and Drupal. He’s proud of Plone’s infrequent vulnerability notices; it had two in the last year. Drupal had 26. Alex also cited a related IBM report on security in a later tweet.

While both reports above seem to identify Drupal (and Joomla! and WordPress, to be fair) as having notably bad security, they’re also both based on one superficial metric: self-reported vulnerabilities. Neither severity nor response time nor history of actual exploitation factored in.

The vulnerabilities in question have all (long) been fixed in Drupal, so Alex’s argument could only be that past vulnerability reports are a predictor of future security problems. Unfortunately, he merely assumes that correlation without demonstrating it, and that’s only the beginning of the problems with his argument.

Even if vulnerability reports were perfect indicators of future risk, vulnerability self-reporting carries a strong conflict of interest. This conflict is especially acute when, like Alex, you argue that the quantity of reports you issue should be held against your project.

The Drupal community (in both developers and users) is much larger than Plone’s, and the two continue to diverge:

[img_assist|nid=164|title=|desc=Drupal is red, and Plone is blue (to state the obvious).|link=none|align=center|width=580|height=260]

Many of us in the free software community are familiar with Linus’s Law: “given enough eyeballs, all bugs are shallow.” Vulnerabilities are merely a special class of bugs. All other things being equal, Drupal’s larger developer and user base should be expected to find and publish more vulnerability reports than Plone’s.

But Drupal had more than just community growth in 2008; it also experienced unprecedented security review thanks to work by Barry Jaspan, who presented his findings at Drupalcon Szeged 2008. Barry subjected Drupal’s core code to static and dynamic analysis, resulting in the discovery of several vulnerabilities. Has Plone undergone similar scrutiny? A quick search on Google didn’t uncover anything of the sort.

Despite Alex’s thoughts on my stubbornness, I am open to an honest evaluation of Drupal’s security versus similar tools. I’m just not willing to base the debate on a superficial metric of such questionable importance.

Check out Four Kitchens' hot new logos!

After many months of deliberation, we’ve decided to totally rebrand Four Kitchens. It was a tough decision — there’s so much work that needs to be done — but we decided, in the end, that our firm needed a new look.

Our goals for the rebranding are:

  • Identify Four Kitchens as a leading Drupal consulting firm.
  • Raise awareness of our design skills and portfolio. (We’re not just scalability experts!)
  • Create an iconic brand that associates Four Kitchens with quality, respect, and community involvement.

Please check out our ideas below. Any feedback is welcome. We really need your help!

Four Kitchens logo: version 1

(nid=161)

Inspiration: Building a website is like reading a book. First, you ride your fixed-gear bike to a locally owned, vegan bookstore and pick out something about World Trade Organization-sponsored coups. Then you turn the book over to see how much it costs. Finally, you pedal home — uphill — and buy it on Amazon in a hot minute.

Four Kitchens logo: version 2

(nid=162)

Inspiration: Building a website is like growing a plant. If you leave it in the sun too long, it will wither up and die. Then you have a dried up, dead plant, dummy. What a waste of five bucks. Oh, and you have to make a geospatial/SMS mashup that visualizes demographic data based on your proximity to a mailbox, squirrel population, volcanic activity, and roadside historical markers.

Four Kitchens logo: version 3

(nid=159)

Inspiration: Building a website is a lot like owning a giant, red robot. Everybody thinks it’s cute, but you’re the one that has to follow it around with plastic bags on your hands, cleaning up its piles of instructional DVDs.

UPDATE: Version 4 submitted by a loyal reader

(nid=163)

We’re not sure who made this, but we like it! They deserve a ping, whoever they are…

The Transatlantic Tacky Swag Swap has begun!

[img_assist|nid=153|title=Web Chef Aaron Stanush “mugs” for the camera. Get it?|desc=|link=popup|align=right|width=225|height=300]

Drupal themer extraordinaire Morten.dk, currently ranked #7 on Google for “king of Denmark”, has been bugging us for a Don’t Mess with Texas mug. Well, “bugging” may not be the right word. “Profanely demanding” is more appropriate.

Finding one was surprisingly difficult. While (lesser) cities like Dallas and Houston are lined with shops hawking rattlesnake heads and scorpions encased in plastic, there doesn’t seem to be much demand for Texas memorabilia in Austin.

Except at the airport, where you can find your name stamped on a fake Texas license plate or worn chunk of fencepost.

So, after scouring the great city of Austin for tacky crap, we proudly present Morten.dk’s Don’t Mess with Texas mug:

[img_assist|nid=154|title=Morten.dk’s Don’t Mess with Texas mug|desc=|link=none|align=center|width=600|height=450]

[img_assist|nid=155|title=Denmark or bust!|desc=|link=popup|align=right|width=300|height=225]

In return, we demand Morten.dk send us the tackiest Danish thing he can get his hands on. (I seem to remember him saying something about mermaids. Is that a Danish thing? Is Denmark known for mermaids or mer-related activity?)

Morten, the ball’s in your court. The gloves are off, and I’ve thrown down the gauntlet. There’s a line in the sand. You’re walking a tightrope.

That is to say, you have been challenged. The Transatlantic Tacky Swag Swap has begun!
