Step Into Step-through Debugging

How many times have you been stuck on a difficult problem that required lots of debugging? Hours go by and you make incremental progress on the solution. Finally, eureka, you've cracked it. On to the pull request! To your dismay, it gets shot down immediately because of a stray console.log, a forgotten var_dump(), or, science forbid, an alert() left behind in your diff.

What if I told you you were doing it wrong?

Let’s do it right

Print statements are a very simple way to debug your code. They're great; if I need something quick I'll still drop one in, and I'll even remember to remove it later, usually… Their biggest drawback, though, is that they limit your view into what's actually going on in your program: you only see the variables you chose to print, and nothing about how the program flows before and after your print statement.

If you want to really understand what's going on in your program at a given point in time you need to be using a step-through debugger. Step-through debuggers are commonplace in compiled languages, but in web development, at least in my experience, they haven't caught on. This needs to change.

Why, you ask? Let’s look at how a step-through debugger works. A debugger attaches itself to the program’s process and listens for events. If your program isn’t throwing any exceptions or forcing any breakpoints the program will execute as normal.

There are some common features of step-through debuggers that I'll reference in the rest of this blog post; here's a list:

  • Breakpoint - A line number of interest in your program. When the debugger is running it halts execution of the program at this line.
  • Call stack - A list of functions in the debugger that explains how the program got to where it currently is. Think of this as a live stack trace, without the exception.
  • Continue - An action to take in the debugger that will continue execution until the next breakpoint is reached or the program exits.
  • Step over - An action to take in the debugger that will step over a given line. If the line contains a function the function will be executed and the result returned without debugging each line.
  • Step into - An action to take in the debugger. If the line does not contain a function it behaves the same as “step over” but if it does the debugger will enter the called function and continue line-by-line debugging there.
  • Step out - An action to take in the debugger that returns to the line where the current function was called.
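A quick, concrete way to play with these actions is JavaScript's built-in `debugger;` statement, which forces a breakpoint whenever a debugger is attached and is a no-op otherwise. This is a minimal sketch of my own, not tied to any particular tool:

```javascript
// Pauses on each loop iteration when a debugger is attached;
// runs straight through otherwise.
function total(items) {
  var sum = 0;
  for (var i = 0; i < items.length; i++) {
    debugger; // try "step over", "continue", and inspecting `sum` here
    sum += items[i];
  }
  return sum;
}

console.log(total([1, 2, 3])); // 6
```

Run it under a debugger and you can step over each addition, or continue to skip ahead a whole iteration at a time.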

Breakpoints are the key, though. Consider this (contrived) example:

/**
 * Returns an array of random numbers up to *size*.
 * @param {int} size
 *   The size of the random array to return.
 * @return {array}
 *   Array of random integers.
 */
function loop(size) {
  var ary = [];
  var min = 0;
  for (var max = 1; max <= size; max++) {
    var rand = Math.floor(Math.random() * (max - min + 1) + min);
    ary.push(rand);
  }
  return ary;
}

var max = 100;
var min = 1;
var size = Math.floor(Math.random() * (max - min + 1) + min);
var res = loop(size);
console.log('Created an array with ' + res.length + ' results.');
console.log(res.join(', '));

Suppose we’re interested in the value of size before we enter the loop function. Sure we could print there, but what if we’re also interested in what happens after the value is created? Using a step-through debugger we can follow the execution path of the program and track our variable – and any other in-scope variables – as the program is running. It’s easier to show this than to write about it, so check this out:

We’ll start by adding a breakpoint at line 21 (22 in the debugger since node-inspector wraps our code in an IIFE). When the program is started it will immediately pause at this line. The first action we’ll take is to step over this line so the value is instantiated. In the debugger you’ll see that the size variable is listed with the value 78.

The next action we’ll take is to step into the loop function and step over the ary and min variable instantiations. You’ll see the initial values being set in the scope variables list as we step over each line.

Next we’ll step through the for loop. You’ll notice the ary array is growing with random values and that on each step we can see the value of rand before it is appended to the array. We’ll add a new breakpoint at the end of the array and continue to that point so we don’t have to step through each iteration of the loop.

Finally we'll step back out. In this case stepping over or stepping in would produce the same result, since we've reached the end of the function, but why not go ahead and use our one remaining action? After we review the results we'll continue to the end. The process the debugger is connected to will exit cleanly and the debugging session will end.

This is obviously a very simple example of how a step-through debugger works but hopefully it’s clear how powerful this can be.

To take this one step further: when you start dealing with nested function calls, the call stack list will display the function you are currently executing and the functions that were called to get you there. You can step back out to any one of those functions and observe the variables that were in scope when the function was called. So not only can you get a snapshot of where things are currently and where they are going, you can also see what things looked like in the past that led the program here. This is huge for debugging complex code, or just for learning how someone else's code works!

Ok, I’m sold, now what?

There are two major components for step-through debugging that are common regardless of what type of code you’re debugging:

  1. A process that is sending debug information.
  2. A debugger that is listening for debug information.

Typically you start a process for debugging by adding some conditional flags. I’ll cover the programming languages we work with the most at Four Kitchens here, but the process is going to be the same for pretty much everything.

For PHP you need to install xdebug on your server and add an xdebug.ini config file like this:
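A minimal sketch of such a config, with illustrative values (the zend_extension path varies by system):

```ini
; Load the xdebug extension (path varies by system).
zend_extension = /usr/lib/php5/20100525/xdebug.so

xdebug.remote_enable = 1
xdebug.remote_host = localhost
; Send debug info to port 9000.
xdebug.remote_port = 9000
; Begin in "debug mode" as soon as a PHP process starts.
xdebug.remote_autostart = 1
```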


The config variables are described in detail in the xdebug documentation so I won't go into them here, but take note of the remote_port and remote_autostart variables. These tell PHP to send debug info to port 9000 and to begin in "debug mode" as soon as a PHP process starts.

Similarly, in node.js you can start a process in debug mode as follows:

node --debug app.js

or use

node --debug-brk app.js

to halt on the first line of your script.

The next step is configuring the debugger. If you use an IDE like Eclipse this is often baked in. Since, according to my fellow web chefs, I'm a vim-using masochist, I use stand-alone debuggers: MacGDBP for PHP and node-inspector for node.js. MacGDBP isn't the best debugger I've ever used, but it gets the job done. Node-inspector is a great tool, and because its interface is Chrome DevTools it's very familiar. If you're not using DevTools, node-inspector is a great segue into using them. (If you're not using Chrome DevTools or similar for step-through debugging your front-end JavaScript, guess what: you're doing that wrong too!)

You’ll need to match the debugger’s port up with whatever your process is running with: 9000 in our PHP example and 5858 for node.js by default.

After that's configured you should be all set to start stepping through your code, never to be burdened by cleaning up debug print statements again! I can't tell you how much time I've saved by doing all of my debugging this way. Simple problems like misspelled variables or minor logic issues practically pop out at you when you're stepping through. Obscure pieces of code that you wrote months ago come back to haunt you for minutes, not hours. The code you write is more robust all around. I hope you found this post useful and will kick the print statement habit and start stepping through your code!

DevTools for mobile

Howdy perfers! In the US we’re gearing up for a long Thanksgiving weekend, so if you find yourself with some free time on your hands try digging into one of these awesome DevTools resources for mobile development.

Chrome Dev Summit mobile talks

Last week was the 2013 Chrome Dev Summit, so if you missed the action due to a busy work schedule, fear not. There are lots of slide decks to catch up on and a few videos too. Paul Irish brings us up to speed on the state of mobile web development, showing how you can use Chrome to emulate many more device properties such as pixel density, how to edit local dev environments from the browser using Workspaces, and a slew of performance tips thrown in as a bonus. Grab the slides, watch a video recorded prior to the dev summit, or read a longer article on HTML5 Rocks.

If you're keen to optimize the critical rendering path but aren't quite sure where to start, Bryan McQuade goes on a deep dive that walks you through that process. Many pieces are well-known frontend performance tips, but he helpfully explains how each one affects the rendering of the page, bringing some perspective on how much impact you can have on making a mobile site fast. Read the slides.

Paul Kinlan’s session about the Best UX patterns for mobile web apps explains how you can use some new UX tools within Google PageSpeed to help identify and resolve common problems that appear on responsive and mobile websites. I’ve appended the necessary query string ?ux=1 to the PageSpeed link so feel free to try your site out!

General reference

Just a reminder that you can always find up-to-date information at both the Official Chrome DevTools docs and the Firefox Aurora devtools.

You Don't Need Bootstrap

It will come as no surprise to anybody who has heard me speak that I am no friend of Bootstrap. One of my goals with the Responsive Web Design trainings that Four Kitchens gives at various Drupal events and for companies is to give developers the tools they need to avoid Bootstrap and other similar frameworks. I hope to clear up why I feel that Bootstrap is the wrong tool for most websites, and what you can use instead.

1. Anti-patterns

First off, Bootstrap supports far too many anti-patterns. An anti-pattern is a design idea that seems good and is reproduced often, but is generally a bad idea for a website. Bootstrap does not give you a truly responsive design. It supports the idea that there should be four breakpoints, based on the notion that we are building websites for phones, tablets, and desktops. This is contrary to the idea of designing and building a site based upon its content, and ensuring the site is usable no matter the device that views it. A good example of how devices do not fit into such discrete categories is the Samsung Galaxy Mega, which is either a large smartphone or a small tablet. The design should work with such a device, even though it is not specifically a tablet or a smartphone.

This is a problem with class-based grid systems in general as well: they limit us. Grids should work with the content itself, not impose a class structure. We should not limit ourselves to a 12-column layout with four breakpoints. Instead, let's design our sites around the content, creating awesome mobile-first layouts. In short: we deserve better.

2. You should use YOUR design

Bootstrap sites all look the same, and all ship the same front-end code. Even the themes apply a lot of styling that you might not need. Your site should have as little CSS as possible to create YOUR design. Why should you be overriding something when you can just use your own code from the beginning?

3. Better markup

I hate having excess, non-semantic classes littering my markup. I want my site to be clean, easy to read, and have as little interference as possible when I am creating markup. Besides my desire for clean markup, there are also practical reasons for not wanting this class-based system. Many CMSs, Drupal included, have their own opinions and methods for printing out markup and classes, and putting Bootstrap classes throughout that markup is very difficult. Yes, it can be done, and yes, there are ways of doing it. But it will take far more time and energy than just creating your own custom CSS from the beginning.

4. Sass makes it possible

This is the point in my argument where I usually get the reaction, "Yes, but I need to make quick prototypes, and I do not have the time to create custom CSS from day 1." This was true before the Sass community started making pluggable extensions to Compass that allow quick and easy prototypes to be built with 100% custom code. Not only can you do this with custom code, you are writing production-ready code from day 1 with your prototype. Team Sass has done a large amount of work to make custom grids, media queries, and styling incredibly easy. You can do it, and these tools will make it possible.
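As one sketch of what this looks like in practice, here is a content-driven media query using the Breakpoint extension to Compass (the 40em value and selector are invented for illustration):

```scss
// A minimal sketch using the Breakpoint Sass extension for Compass.
// The 40em breakpoint is arbitrary and content-driven, not device-driven.
@import "breakpoint";

.sidebar {
  @include breakpoint(40em) {
    float: left;
    width: 30%;
  }
}
```

No grid classes in the markup, no framework overrides: just the CSS your design actually needs.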

What Bootstrap is good for

I highly recommend not using Bootstrap, and would never advocate its use on a live site. That being said, it is not all bad; it was made by a team of amazing engineers and designers, and it gives some great ideas to the world. First, it is a good way to learn and to see how some interesting styling and JavaScript can be used. If you are new to front-end work, or want to see how responsive web design works at a fundamental level, check out the code. Also, if you are building a back-end administration page that is only used internally, Bootstrap is a great tool to give you interesting UI pieces without having to do a full design. As this code does not need to be perfectly performant, and it does not matter if its design borrows from other UI elements, use Bootstrap.

Wrap up

Do not use Bootstrap. No, really don’t. You can do better, you can design better, and you can build your code better. Using a CSS preprocessor like Sass will allow you to still have the rapid prototyping you need, and the community tools within Sass will be your bread and butter in this regard.

A flexible field model for tables

Part 1: Modeling Ambiguity

When you are developing data models for fields in Drupal sometimes the only thing you can count on is that there will be exceptions to the model. Thankfully, with Drupal, you can incorporate a lot of flexibility into your data model. Typically that flexibility comes at the cost of complexity for you, the developer or site builder, and likely, for your content contributors. It gets even more difficult when those fields are irregular in format, need to be responsive, and are a part of a time sensitive content launch involving content experts with little to no Drupal experience.

Recently, we were faced with a situation just like this - we needed a high degree of field flexibility for tabular data that ultimately needed to be responsive and the content contributor experience had to be dead simple. Thankfully the effort spent to develop this solution was rewarded the very first time we were able to reconfigure the field data model on the fly. I’d like to share our approach to the field model and the responsive no-tables end result. In the first part of this two-part series I’ll walk you through the process of modeling fields with a lot of flexibility and in the second part we’ll focus on how to represent that model in responsive design with a “no-tables” strategy.

The problem at hand

Let's start with a brief overview of the problem. A client was launching a new site on a deadline that required a large content entry effort from a number of Drupal novices. The scale and pacing made it necessary to simplify the entry process as much as possible. In fact, development and content entry needed to occur simultaneously to complete the site by the launch date. Our job as a development team was to enable content entry at the earliest date possible and fill in improvements and other functionality afterward. While much of the content was tabular in nature, individual tables varied occasionally but significantly, and there was simply too much content to audit and catalog every variation. Heavy tablet and mobile device use was predicted, so all of the content, including the tables, needed to be responsive.

In summary we needed to:

  1. preserve and enable content idiosyncrasies
  2. keep entry as simple as possible, and
  3. follow best practices on the presentation layer for responsive tables.

Field modeling with Rumsfeld: Known unknowns in your content

I love saying the phrase "modeling ambiguity". It has a nice mouthfeel, and it's a lot nicer than saying "nobody knows all the details just yet". I used to say this phrase a lot when I was building database schemas from scratch for applications without frameworks. Yuck. The reality of that phrase hurt a lot more back then. Well, thank goodness those dark days are gone. I haven't had to pull this phrase out all that often with Drupal, especially given all of the options a site builder has at their fingertips: multi-value fields, field collections, field groups, field formatters, and the powerful context surfacing and evaluation from CTools. However, my old friend reared its ugly head on this problem, and just in the nick of time.

Through our discovery phase we uncovered a number of similar fields for several content types. The fields had a heading, with one or more tables. Each table had two columns with one or more rows and a header. Each header had two column headings which may or may not have specific and unique headings per table. Table content had light formatting such as bolds, italics, bullets, and underlines, as well as html links.

In the end it looked something like this - desktop on the left and mobile on the right:

Desktop and mobile mockup of tabular data showing fields and optional repeating structures

You can have anything you want… for a price

The real problem in all of this was that each time we thought we had nailed down when a particular instance of this field model had a particular set of these variations — another survey of the actual content demonstrated that it was a little different. This is where all that experience with “modeling ambiguity” came in handy. I knew from experience that we could accommodate all of those variations. I knew that while it would be nice, we really didn’t need to know all of the variations before hand. So far that is a pretty rosy picture.

Ah, but I also knew that ambiguity comes at a price: usually a developer's blood, sweat, and tears, and sometimes a site's users'. Things get complex quickly when you don't have a lot nailed down. You have to think through all of the options and provide ways for contributors to self-select at entry time. Both of those can cause pain. I brainstormed a number of ways we might accommodate this variability programmatically, but I knew it was going to take some time to get all the pieces in place. Since time was a real concern there were really two options: create a fast solution that would almost certainly require rework and a lot of band-aids but enable quicker entry, or step back and create a more novel solution that was truly flexible. Both the client and I agreed that a longer-term, flexible solution was in the best interest of the site, timely delivery, and cost.

So I worked backwards from the most complex combination of all of these elements and understood that content contributors would ultimately need to:

  • add or remove any number of tables beneath a heading
  • edit table headers on a per-table basis, with default column headers provided
  • add or remove any number of rows in each table
  • use a WYSIWYG editor to manage light formatting in each table cell

Scaling with Variability

The obvious choice for tabular data is a table field, right? Right? Er, no. Not here. Don't get me wrong, it's a great solution for most tabular data needs, please use it. However, it didn't hit the right notes here, mainly because it simply doesn't scale with variability. This is another important concept in successfully modeling ambiguity: "scaling with variability" means that you should only introduce the additional complexity of handling variations when they are needed. With all of the fields set as table fields there is no more-or-less difficult version; it is tables all of the time. It's not particularly flexible for rich editing, nor for the heading variability. To top it off, storage for table fields is fairly unique, which means there would certainly be a data migration if the fields needed to be modified further when something new came up, and for instances that never need the extra complexity, there is no opportunity to devolve into a simpler state.

What we were really looking for was a nested multi-value field structure that could withstand on-the-fly changes. Significant field type changes are often disabled once there is data in a field, but what we were relying on in this case was that the gradual process of content entry would reveal the need for these variations on particular instances of this field in the content types. So, what field type comfortably accepts a single or delta value without significantly altering the storage? What field type accommodates a multi-field grouping approach? What multi-field type can nest inside itself? What options do we have for rich editing experiences for the table cells?

Arriving at a Model

The field model that answered all of these questions looked something like this:

Field Group {
  Field Collection for Table (1 or more / set to 1 initially) {
    Header 1 - text field (provided with default heading values)
    Header 2 - text field (provided with default heading values)
    Field Collection for Table Data (1 or more / set to 1 initially) {
      Row data 1 - text long (w/ WYSIWYG editing)
      Row data 2 - text long (w/ WYSIWYG editing)
    }
  }
}

How it works

Let's discuss how this structure works. The field group is the parent structure, which accommodates the field heading for all tables below. Then we have our nested field collection structure: one field collection that contains another field collection as a child. The first field collection allows us to have one or more tables underneath the heading, and the second gives us one or more rows of table data for the parent table field collection. By default we start with the simplest configuration, a single table with a single row value, which results in a very plain content entry experience. The row data columns are long text fields with WYSIWYG editing, giving our content entry team light formatting options with familiar tools. Additionally, if the fields change significantly we have a little more flexibility from the storage perspective. Our variable table headings are accounted for in the Header 1 and Header 2 text fields.
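To make the nesting concrete, here is a hypothetical sketch of the data one instance of this structure might hold (all property names and values are invented for illustration, not actual Drupal field names):

```javascript
// Hypothetical shape of one field group instance after content entry.
var tableGroup = {
  heading: 'Tuition and Fees',   // field group heading for all tables below
  tables: [                      // outer field collection: 1..n tables
    {
      header1: 'Program',        // editable per table, with a default value
      header2: 'Cost',
      rows: [                    // inner field collection: 1..n rows
        { col1: '<strong>Undergraduate</strong>', col2: '$10,000' },
        { col1: 'Graduate', col2: '$12,000' }
      ]
    }
  ]
};

console.log(tableGroup.tables.length, tableGroup.tables[0].rows.length); // 1 2
```

In the simplest configuration both collections hold a single item, and the contributor just sees one heading, two headers, and one WYSIWYG row.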

The nested field collection approach lets us leverage the strengths of individual field types' editing interfaces and their storage. It also gives us the flexibility to add the option of a multi-value field only when needed. When not in play, as a single value, each field looks just like a single WYSIWYG text field. If at any point during content entry a contributor discovers that a particular instance of this field needs multiple tables or multiple rows, we can enable that variation without fear of having to reroll the field. We simply make that change on the content type configuration screen. If we need a single multiple-row table we can do that; if we need multiple tables with multiple rows with default or empty headings that are required, we can do that too. It's a much more configurable solution that lets us dial each instance in at its required complexity.

Hopefully I’ve given you some tools and terms to think about the next time you need to model ambiguity. In the next part of this two part series we’ll take a look at how we translated this field structure into responsive table content with a “no-tables” approach.

Read Part 2 of this post here.

DevTools link roundup

Today I looked through my collection of links and realized I have enough resources pooled up to put together a decent little post on browser devtools. In case you’re not familiar, development tools ship with each web browser, enabling us to analyze and debug our increasingly complex websites and apps. From finding a background color to profiling frame rate issues, browser devtools bring sanity to the world of frontend development.

Discover (Chrome) DevTools

Put together by Google and Code School, Discover DevTools takes you from zero to master using Chrome DevTools. It’s a series of 17 videos with over 75 challenges for you to complete along the way. Although the course is Chrome-specific, other browsers’ tools are similar or identical in many cases, and the real takeaway is the ability to use these tools regardless of how the buttons are laid out in a given browser.

Secrets of the Browser Developer Tools

Secrets of the Browser Developer Tools provides bite-size tips for all browsers’ development tools. There are enough that you’ll probably learn something new just by skimming. They’re broken up into various categories, and each tip is accompanied by a list of platforms where the feature can be found. This fantastic resource is brought to you by @AndiSmith and he welcomes contributions.

DevTools can do THAT?

The last link is a set of performance-oriented slides from Ilya Grigorik detailing some of the finer points within Chrome DevTools. He highlights not only particular settings but how to configure them in order to easily test for different situations (e.g. cold/warm cache). Also included is a quick walkthrough explaining network waterfalls, the life of a request as visualized in DevTools, options for exporting and processing HAR files (a JSON format for storing network waterfalls), and overviews (plus examples) of Timeline and Profiles tabs, which help you debug rendering and memory issues.

Chrome DevTools Revolutions 2013

Update 2013-11-13: Since I published this, another awesome HTML5 Rocks tutorial has been posted showing some of the latest features of Chrome DevTools, including Sass source maps, remote debugging, and some sick tools to help you debug rendering performance. Eat it up ;)

Browsers want your feedback!

Although they have a ton of features, browser makers always want to hear from YOU, the one in the trenches. Most browsers have a dedicated developer relations team to ensure that the tools stay up to date and sensible for the hard-working people of the web, and you can reach those teams on Twitter.

Base64 Encoding Images With node.js

If you’re a follower of this blog or Four Kitchens in general you might have noticed that making the web accessible to mobile devices is something we’re interested in. One of the biggest challenges with mobile computing is unreliable network connections that are inevitably encountered in the wild. This is particularly troublesome when your website becomes more of a web application that should behave the same regardless of connection status. There are many, many issues that crop up when the wire goes dark, but today we’re going to focus on one strategy for dealing with images, and a solution we created to help with this.

As web developers we're all familiar with the <img> tag for embedding images in the document. Normally the src attribute of this tag points to a file on the server that the browser will fetch during the page load process. The src attribute can also contain what's called a data URI. Data URIs allow you to embed base64 encoded data directly in the HTML document; in the case of image tags this means we can embed all of the data for an image in the document. A 1x1 gif file looks like this when it's base64 encoded:

<img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///ywAAAAAAQABAAACAUwAOw==" />

Modern browsers will then interpret this data and display images as usual, avoiding another call to the server to fetch the image.
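Building such a data URI by hand is straightforward in node.js. This is a small sketch of my own (the helper name and stand-in bytes are illustrative, not part of any library's API):

```javascript
// Build a data URI from raw bytes; in practice the buffer would come from
// something like fs.readFileSync('pixel.gif').
function toDataUri(buffer, mimeType) {
  return 'data:' + mimeType + ';base64,' + buffer.toString('base64');
}

// Three stand-in bytes ("GIF" header) instead of a real image file:
var gifBytes = Buffer.from([0x47, 0x49, 0x46]);
console.log(toDataUri(gifBytes, 'image/gif')); // data:image/gif;base64,R0lG
```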

So what does this have to do with offline applications? Base64 encoding images and keeping them inline does two things:

  1. Reduces the number of calls to the server that the client needs to make.
  2. Moves the caching of these images into the cache entry for the document. If the document was cached by the browser, it’ll work offline and future loads won’t require server communication to fetch images. (Note: for simplicity’s sake I’m purposefully ignoring any JavaScript or CSS that might also be required for your application to function correctly offline).

It's not all fun and games from here, though; there are tradeoffs to base64 encoding images. Notably, file sizes tend to be a bit larger for base64+gzipped images than for their binary+gzipped counterparts. There can also be a higher CPU cost on initial page loads while the browser processes the inline images. These problems can be mitigated by being careful about which images you base64 encode, based on your use case.
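The size overhead is easy to see before gzip enters the picture: base64 encodes every 3 input bytes as 4 output characters, roughly a 33% increase. A quick sketch:

```javascript
// Base64 turns every 3 input bytes into 4 output characters.
var raw = Buffer.alloc(3000);            // 3000 bytes of "binary" data
var encoded = raw.toString('base64');
console.log(raw.length, encoded.length); // 3000 4000
```

Gzip claws some of this back, but the encoded form generally still comes out larger.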

So, now that we know about base64 encoded images and what they're good for, let's look at a way to replace non-base64 encoded images in an HTML document. There are lots of tools to do this, many of them online, but let's make this more interesting and assume that we need to do the replacements on the fly, before the document is served to clients. Let's say we have an RSS feed containing embedded images whose contents need to be parsed and distributed to client applications.

If you’re using node.js as a server backend or preprocessor of some sort, this is now incredibly easy thanks to the img64 module. img64 uses jsdom and jQuery to look for images in strings and replaces the source references with base64 encoded data URIs. Here’s an example of doing this:

var img64 = require('img64');
var string = "<p>Organic mattis pharetra eget lorem Brooklyn eget mattis sodales donec keytar vitae magna ut ipsum +1 sagittis. Congue proin auctor viral commodo pellentesque sem donec vinyl tellus sapien justo urna Austin non eros eros curabitur you probably haven't heard of them.</p><img src=\"\" />";
img64.encodeImgs(string, function encoded(err, doc) {
  // Voici! Our 1x1 gif is now base64 encoded and
  // embedded as a data uri! The string is otherwise
  // unchanged.
  console.log(doc);
});

Now you can include content from anywhere in your web application without needing to worry about whether or not the content provider has already base64 encoded their images!

In our example case this means that the HTML delivered to the browser will now contain base64 encoded images for all the feed content, regardless of its origin. In addition to the advantages of base64 encoding images we outlined above this also means that the images hosted on other servers will not need to be fetched and downloaded. Assuming you have permission to use these images this means your application will not need to be concerned if an image’s host goes down or an image is moved, etc: your app will always behave the same as long as the base64 encoded image is present.

The img64 module was created as a proof of concept and while it’s definitely a viable solution it hasn’t been battle tested yet. Check it out; if it works for you, awesome! If not, open a pull request. Happy encoding!

Creating Custom Panels Panes (and use substitution too!)

I spent several hours last night just trying to add some configurable social sharing buttons to my node pane, because I needed to use fields from the node itself within my code. After hours of Google searching and checking versions, I finally figured out how to do what I was looking for. Part of this confusion is due to ctools (Chaos Tool Suite) having a slightly different API depending on its version. Note, I am using ctools version 7.x-1.2 and panels version 7.x-3.3.

This quick tutorial will show you how to create your own custom pane, the form for its settings, and finally how to use substitution variables from the entity being rendered within those settings.

The .info File and .module File

First off, a custom panels pane is in reality a ctools content type, which has absolutely nothing to do with Drupal’s normal content types. This confusion of verbiage is found throughout a lot of documentation, so to differentiate here, I will only use the word “pane” to reference the ctools content type we are building. To create a pane, you must write your own custom module, although you can use an existing custom module if you want. Per usual, you will need a .info file, and you will need to ensure that you have ctools set as a dependency.

name = My Custom Panes
description = Provides several custom panes to panels.
core = 7.x
dependencies[] = ctools

Next, you will need to tell ctools where to look for your custom pane information. This is done with hook_ctools_plugin_directory(), a hook that must be placed within your .module file. This hook just returns a directory path (relative to your module’s base folder) where ctools should go looking for your plugin files. See the example code below:

/**
 * Implements hook_ctools_plugin_directory().
 */
function mymod_ctools_plugin_directory($owner, $plugin_type) {
  if ($owner == 'ctools' && $plugin_type == 'content_types') {
    return 'plugins/' . $plugin_type;
  }
}
In this example, the hook checks to see if the request is for ctools ($owner), and then if we are looking for the ‘content_types’ plugins. If you are going to be using more plugins, you can also include them in the ‘if’ statement; however, we will not need any other ctools plugins for this example.

Define the Ctools Content Type (Pane)

The last file we will need contains the full definition of the pane we want to make. It will live in the folder ‘plugins/content_types’ (taken from the hook return above) within your module directory. The file will have not only the definition, but also the settings form (if needed) and the render function. Do not worry about the name of the file, but ensure it is unique and namespaced to your module. The first bit is the plugin definition, which lives outside any function as the first piece of code within your file. Explanations of what each part means are inline.

$plugin = array(
  'single' => TRUE,  // Just do this one, it is needed.
  'title' => t('My Module Custom Pane'),  // Title to show up on the pane screen.
  'description' => t('ER MAH GERD, ERT DERS THINS'), // Description to show up on the pane screen.
  'category' => t('My Module'), // A category to put this under.
  'edit form' => 'mymod_pane_custom_pane_edit_form', // A function that will return the settings form for the pane.
  'render callback' => 'mymod_pane_custom_pane_render', // A function that will return the renderable content.
  'admin info' => 'mymod_pane_custom_pane_admin_info', // A function that will return the information displayed on the admin screen (optional).
  'defaults' => array( // Array of defaults for the settings form.
    'text' => '',
  ),
  'all contexts' => TRUE, // This is NEEDED to be able to use substitution strings in your pane.
);

At this point, once the module is enabled, your pane should show up when you click “Add content” on a panel. If it is not showing up, ensure that you do not have pending changes to the panel, as these will prevent panes from updating.

Create a Settings Form

Next, we need to write the functions to create the edit form, render the pane, and create the admin info display. The form is created much like any other form with the FAPI. Your current settings are available in the $form_state variable, and the submit function will automatically save any settings that are put into the $form_state['conf'] array.

/**
 * An edit form for the pane's settings.
 */
function mymod_pane_custom_pane_edit_form($form, &$form_state) {
  $conf = $form_state['conf'];
  $form['text'] = array(
    '#type' => 'textfield',
    '#title' => t('Panel Text'),
    '#description' => t('Text to display, it may use substitution strings'),
    '#default_value' => $conf['text'],
  );
  return $form;
}

/**
 * Submit function; note anything in $form_state['conf'] automatically gets saved.
 * Notice the magic that does that for you.
 */
function mymod_pane_custom_pane_edit_form_submit(&$form, &$form_state) {
  foreach (array_keys($form_state['plugin']['defaults']) as $key) {
    if (isset($form_state['values'][$key])) {
      $form_state['conf'][$key] = $form_state['values'][$key];
    }
  }
}

Render the Pane

Now we (finally!) get to actually render our content in the pane. For this, we will use the render function we defined above, and ensure that we return our content as a stdClass object with a ‘title’ and ‘content’. The content can be either HTML or a renderable array; it does not matter. The important part here is the function ctools_context_keyword_substitute(), which takes the $contexts variables passed through and replaces the substitution strings within.

/**
 * Run-time rendering of the body of the block (content type).
 * See ctools_plugin_examples for more advanced info.
 */
function mymod_pane_custom_pane_render($subtype, $conf, $args, $contexts) {
  // Initial content is the configured text.
  $content = $conf['text'];
  // Update the string to allow contexts.
  if (!empty($contexts)) {
    $content = ctools_context_keyword_substitute($content, array(), $contexts);
  }
  $block = new stdClass();
  $block->title = t('This is my title!'); // This will be overridden by the user within the panel options.
  $block->content = $content;
  return $block;
}

That’s it! You have made a rendered panel pane. You can of course make this more complex as your needs require. For me, I wrote a custom function that takes the $conf variables and creates a “tweet this” button, with variables for the hashtags, url, and text that should be automatically added to the tweet. The possibilities are endless.

The Admin Display Page

This last section is optional, but allows you to display information on the panel’s content configuration screen. By default, each pane will just display “No info available.” We have already defined our admin display within the $plugin definition above, so we just need to use the same function name we defined above:

/**
 * 'admin info' callback for panel pane.
 */
function mymod_pane_custom_pane_admin_info($subtype, $conf, $contexts) {
  if (!empty($conf)) {
    $block = new stdClass();
    $block->title = $conf['override_title'] ? $conf['override_title_text'] : '';
    $block->content = $conf['text'];
    return $block;
  }
}

You can, of course, change the $block->content to something more complex, but that is all that is needed for this display.

Other Links

Here are some other tutorials I found very helpful in my search. They provide more examples of some of the functionality here.

I also included the code files used here for you to look at all together. They work, I swear.

And just remember… when in doubt. Clear the cache.

New module: Dismiss

Anyone working with the Drupal theme layer knows that sometimes our frontend debugging is accompanied by some backend trouble as well. While this is manageable in some cases, other times you really need those error messages to go away so you can work on the theme.

Inspecting the element and deleting manually using browser devtools is tedious. Setting some development CSS is risky, because you might commit the change into your codebase. What’s a frontend dev to do?

Enter Dismiss

Dismiss is a very simple module that uses jQuery to add — you guessed it — a “Dismiss” button to each group of messages within Drupal. So each group of status, warning, and error messages will have a little X button that allows you to quickly get rid of them, rather than using one of the workarounds I listed above.

It can also be a nice feature for user-facing messages: once a user has read and understood a message, they can easily get rid of it.

I don’t have any further features planned, as I want it to stay simple. If you use it, let me know what you think!

CSS Specificity: ID overrides

Yesterday Chris Coyier kicked off a fun journey of discovery by voicing his surprise that 256 classes would override one HTML ID in a selector:

The CSS2 Spec allows for this

Although that's a TON of classes on one element, the spec clearly leaves room for this type of override. From the CSS2 spec (emphasis mine):

A selector's specificity is calculated as follows:

  • count 1 if the declaration is from is a 'style' attribute rather than a rule with a selector, 0 otherwise (= a) (In HTML, values of an element's "style" attribute are style sheet rules. These rules have no selectors, so a=1, b=0, c=0, and d=0.)
  • count the number of ID attributes in the selector (= b)
  • count the number of other attributes and pseudo-classes in the selector (= c)
  • count the number of element names and pseudo-elements in the selector (= d)

Concatenating the four numbers a-b-c-d (in a number system with a large base) gives the specificity.
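The four counts can be sketched in code. Here is a rough scorer for simple selectors (my own illustration, ignoring attribute selectors, pseudo-classes, and the other edge cases the spec covers):

```javascript
// Score a simple selector per the CSS2 a-b-c-d rule.
function specificity(selector) {
  var b = (selector.match(/#[\w-]+/g) || []).length;            // IDs
  var c = (selector.match(/\.[\w-]+/g) || []).length;           // classes
  var d = (selector.match(/(^|[\s>+~])[a-z]+/g) || []).length;  // element names
  return [0, b, c, d].join('-'); // a is 0 for rules in a stylesheet
}

console.log(specificity('#nav'));         // 0-1-0-0
console.log(specificity('ul li.active')); // 0-0-1-2
```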

The main source of confusion is that the spec never fixes the number base upon which specificity comparisons are made, and this causes a small interoperability deviation. It goes without saying that this is a somewhat academic discussion, as one would rarely, if ever, chain 256 classes together.


From Mr. Coyier's brief period of discovery, we seem to have determined that WebKit and Gecko both use an 8-bit number, or base 256 when determining specificity.

After a little bit of prompting from various devs, we finally got word on Opera’s internals: they use 16 bits, or base 65536.

So there you have it. IDs are much harder to override, but they can still be trumped by particularly long selectors. I suspect we could set up other academic-but-impractical cases using tags, combinators and so forth.
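In other words, the counts are effectively packed into fixed-width fields, and a field can overflow into its neighbor. A sketch of the 8-bit (base 256) packing WebKit and Gecko use (my own illustration, not browser source):

```javascript
// Pack (ids, classes, elements) into 8-bit fields, i.e. base 256.
function packedSpecificity(ids, classes, elements) {
  return (ids * 256 + classes) * 256 + elements;
}

console.log(packedSpecificity(1, 0, 0));   // one ID      -> 65536
console.log(packedSpecificity(0, 256, 0)); // 256 classes -> 65536, ties the ID
console.log(packedSpecificity(0, 257, 0)); // 257 classes -> 65792, beats it
```

With Opera’s 16-bit fields it would take 65536 chained classes to pull off the same trick.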

Building Custom Blocks with Drupal 7


There are times when you need to build a custom block that a site builder can utilize in various places on a page. Drupal 7 provides several hooks that allow you to accomplish this goal, including hook_block_info(), hook_block_configure(), hook_block_save(), and hook_block_view().

For this tutorial, we will be using these four hooks to build a custom block that will contain WYSIWYG markup and an uploaded picture file. This custom block will be built within a module called custom_block.

Making the Block Selectable

In order to use the block, we have to make it selectable in the block administration menu. To accomplish this, we will use hook_block_info(). Inside this hook, you can assign multiple blocks to your returned array. We will only be adding one block in this tutorial. Add the following code to your module (remember to swap out ‘custom_block’ with the name of your module):

/**
 * Implements hook_block_info().
 */
function custom_block_block_info() {
  $blocks = array();
  $blocks['my_block'] = array(
    'info' => t('My Custom Block'),
  );
  return $blocks;
}

After you load this code up in your custom module, you will have the option of selecting and configuring ‘My Custom Block’:

Screenshot of block selection with custom block added.

When you click on configure at this point, you will get the standard block configuration page. We will need additional code in order to enable more block options.

Building the Block Configuration Form

We want to add a WYSIWYG text form for html markup and a file upload widget for our picture file. This can be accomplished within hook_block_configure(). Additionally, we will need two separate form types to pull this off. Inspecting the Form API Reference for Drupal 7, we see that there are two form types that will work perfectly: #text_format and #managed_file.

For the #text_format (WYSIWYG) type form, we need to supply #type, #title, and #default_value. In order for Drupal to remember what you store in this form, you will need to assign a stored variable to #default_value using the variable_get() function.

For the #managed_file (AJAX file upload widget) type form, we need to supply #name, #type, #title, #description, #default_value, #upload_location, and #upload_validators. Again, in order for Drupal to remember the file that we uploaded, #default_value will need to be assigned to the stored file FID variable using the variable_get() function. Additionally, you can specify the subdirectory within your Drupal files directory using #upload_location and allowable file extensions using #upload_validators.

Add the following code to your module:

/**
 * Implements hook_block_configure().
 */
function custom_block_block_configure($delta = '') {
  $form = array();
  switch ($delta) {
    case 'my_block':
      // Text field form element.
      $form['text_body'] = array(
        '#type' => 'text_format',
        '#title' => t('Enter your text here in WYSIWYG format'),
        '#default_value' => variable_get('text_variable', ''),
      );
      // File selection form element.
      $form['file'] = array(
        '#name' => 'block_image',
        '#type' => 'managed_file',
        '#title' => t('Choose an Image File'),
        '#description' => t('Select an Image for the custom block.  Only *.gif, *.png, *.jpg, and *.jpeg images allowed.'),
        '#default_value' => variable_get('block_image_fid', ''),
        '#upload_location' => 'public://block_image/',
        '#upload_validators' => array(
          'file_validate_extensions' => array('gif png jpg jpeg'),
        ),
      );
      break;
  }
  return $form;
}

Once you’ve added the code to your module, you should have a block configure form that looks like this:

Screenshot of custom block configuration page.

Handling the File Saving

When saving the block, we want to make sure that Drupal will take the necessary steps in order to properly save our text and uploaded file. To accomplish this, we’ll make use of hook_block_save(). This hook is called whenever the ‘Save block’ button is clicked on the block modification form. Two arguments, $delta and $edit, are supplied into hook_block_save(). The $delta variable will contain a string identifier of which block is being configured and $edit will contain an array of submitted block form information.

In order to save our WYSIWYG text, we will use the variable_set() function. You’ll need to supply a name and the value as arguments. In our case, the text is captured in the $edit array, specifically within $edit['text_body']['value'].

Saving our uploaded file requires several more steps. First, we need to build a file object for the uploaded file. This is done using the file_load() function and supplying the FID value as the argument. Once the file object is established, the status value needs to be set to FILE_STATUS_PERMANENT. This constant tells Drupal that this file is permanent and should not be deleted. If the file status value is not changed, Drupal assumes that it is temporary and will delete it after the DRUPAL_MAXIMUM_TEMP_FILE_AGE time value is exceeded.

Next, we need to save our file object to the database using the file_save() function. Additionally, we will want to map our block’s usage to the file. To perform this, we first need to build a block object using block_load() while supplying the module name and block name arguments. Then we can set the file usage via the file_usage_add() function. This function passes in the file object, module name, object type associated with the file, and a numeric ID of the object containing the file (in our case, the block BID). Finally, the file FID gets associated with a variable using the variable_set() function, similar to how the text was stored earlier.

Add the following code to your module:

/**
 * Implements hook_block_save().
 */
function custom_block_block_save($delta = '', $edit = array()) {
  switch ($delta) {
    case 'my_block':
      // Saving the WYSIWYG text.
      variable_set('text_variable', $edit['text_body']['value']);
      // Saving the file, setting it to a permanent state, setting a FID variable.
      $file = file_load($edit['file']);
      $file->status = FILE_STATUS_PERMANENT;
      file_save($file);
      $block = block_load('custom_block', $delta);
      file_usage_add($file, 'custom_block', 'block', $block->bid);
      variable_set('block_image_fid', $file->fid);
      break;
  }
}

After implementing this code, uploading a file, and saving the block, you should have an entry in your file_usage table like this:

Screenshot of file_usage table entry from uploaded file.

Also, there should be an entry in your file_managed table like this:

Screenshot of the file_managed table entry from uploaded file.

Building the Block Output

Now all that is left to do is output the block contents in a meaningful way. To achieve this, we will need to use hook_block_view(). Within this hook all that needs to be done is return a renderable array. To decouple the renderable array construction, we’ll place this logic in a separate function called my_block_view() that simply returns the final renderable array.

Within the my_block_view() function, we retrieve the image file using file_load() and supply it with the stored file FID reference variable. We then check to see if the image file path exists and assign it if present. The image markup is built using theme_image() with an array of attributes getting passed in as the argument. The WYSIWYG text markup variable is retrieved using variable_get(). Finally, the block renderable array is constructed using two sub-arrays ‘image’ and ‘message’.

Add the following code to your module:

/**
 * Implements hook_block_view().
 */
function custom_block_block_view($delta = '') {
  $block = array();
  switch ($delta) {
    case 'my_block':
      $block['content'] = my_block_view();
      break;
  }
  return $block;
}

/**
 * Custom function to assemble the renderable array for the block content.
 *
 * @return
 *   A renderable array of block content.
 */
function my_block_view() {
  // Capture the image file path and form it into HTML with attributes.
  $image_file = file_load(variable_get('block_image_fid', ''));
  $image_path = '';
  if (isset($image_file->uri)) {
    $image_path = $image_file->uri;
  }
  $image = theme_image(array(
    'path' => $image_path,
    'alt' => t('Image description here.'),
    'title' => t('This is our block image.'),
    'attributes' => array('class' => 'class_name'),
  ));
  // Capture the WYSIWYG text from the variable.
  $text = variable_get('text_variable', '');
  // Block output in HTML with a div wrapper.
  $block = array(
    'image' => array(
      '#prefix' => '<div class="class_name">',
      '#type' => 'markup',
      '#markup' => $image,
    ),
    'message' => array(
      '#type' => 'markup',
      '#markup' => $text,
      '#suffix' => '</div>',
    ),
  );
  return $block;
}

Now you have all the necessary pieces in place to output the block on your page. Here is the un-styled block output within the ‘Featured’ section:

Screenshot of final custom block output in the featured section of the Drupal site.


As you have seen, Drupal 7 provides you with several tools to build custom blocks. Using hooks, the Form API, and various storage/retrieval functions, the block construction possibilities are endless. Have suggestions, comments, or better solutions? Let us know!