Rachael Arnold spends her days crafting code for the web, mastering the art of SAFe scrum, and encouraging women to jump into development. Her nights are spent crafting fabric things, reading speculative fiction, and encouraging her dog to pose for Instagram photos.
When I add new Event or PageView tracking code that is bound to click events, I want to be sure that the tracking code is fired without waiting for a site visitor to trigger it (or in some cases before the code is even on a live page). Because my own traffic is filtered out of all of my Google Analytics reports, I can’t rely on my clicks to show in my reports (not to mention the delay). But, even with the filter in place, the clicks are sent to Google, and I can check to be sure they’re sending the right information in Firefox using Firebug’s Net panel. Here’s how.
Enable the Net tab
If you don’t already have Firebug installed in Firefox, get it, because you need it for this.
Once it’s installed, enable the Net panel.
Load the page you want to test. Once you do, you’ll see a lot of requests pop up in the Firebug Net panel.
Look for the PageView
Once you have enabled the Net panel, you can see whether your Google code triggers the initial pageview. Look through the requests for one that starts with __utm.gif?. Expand it by clicking on the arrow to the left of it.
Looking at the Params tab (it defaults to Headers), you can see the information being passed to Google. The first line is your unique tracking ID. You’ll also see your page title, the domain, and most importantly, the URL being passed.
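To give a rough idea of what to look for, the query string includes parameters along these lines (names from the ga.js-era __utm.gif request; the values here are made up):

utmac = UA-XXXXXX-X (your unique tracking ID)
utmhn = example.com (the domain)
utmdt = Your%20Page%20Title (the page title)
utmp = /some/page/ (the URL path recorded as the pageview)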
If you don’t find the __utm.gif? entry, something is wrong with your basic GA tracking code and no data is being sent to Google.
Trigger your event or new pageview and find the tracking info
After verifying that the GA code is tracking the pageview, you can check whether your click event code is working. To do so, trigger an event that you’ve attached tracking code to. For my test, I open an external link, which I’ve coded to trigger a GA event. If you are triggering a link to a new page, it’s important to open it in a new tab or window so that your Firebug Net panel keeps showing the original page’s requests.
A new __utm.gif? line should show up in the Net panel. Expand this new entry. If you triggered another pageview, the params will look similar to those mentioned earlier. Events look a bit different.
The important field here is utme, which shows the information you passed in the _trackEvent call. The arguments you passed to _trackEvent are separated by asterisks. Mine reads Outbound Link*us.php.net/strtotime*Why is date() returning 12/31/1969. This is because I track my outbound links under the category Outbound Link, with the external URL as the action and the h1 text on the page the call was triggered from as the label.
Yours will likely differ depending on your schema for event tracking.
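For reference, the kind of click binding being tested above looks roughly like this. It’s a sketch using the async _gaq.push syntax (the older pageTracker._trackEvent(category, action, label) call works the same way); the selector and my category/action/label choices are just examples of one schema.

// a sketch: send a GA event when an outbound link is clicked
$("a[href^='http']").not("[href*='" + location.hostname + "']").click(function () {
    _gaq.push([
        "_trackEvent",
        "Outbound Link",        // category
        this.href,              // action: the external URL
        $("h1:first").text()    // label: the h1 text of the current page
    ]);
});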
Again, if you don’t find this new __utm.gif? call in the Net panel, your click event code isn’t running, or the GA _trackEvent/_trackPageview code is set up incorrectly. You’ll need to debug it, then check again.
Now you know how to check if your code is working. What do you use Google Event or Pageview tracking for?
I recently developed a sign-up form for a client that includes on-page price total calculation using JavaScript (jQuery). The premise is simple: the user provides information and specifies options, then clicks a radio button to choose a specific price plan. The initial total price calculation is triggered by the change() event for the radio button elements. But the client was concerned (and, with user testing, it turned out rightly so) because in IE, the price calculation didn’t happen until the user clicked somewhere else on the page. In cases where they first clicked one option, then a different one, the price would seemingly lag behind because of IE’s delayed change event firing. It was confusing to the user, but worse—confusing for me to “fix” IE’s implementation.
The awesome news is that this has been fixed in jQuery 1.4, but the concern is still valid for older versions (which my application was using) and straight-JavaScript implementations.
The problem
In Internet Explorer, the change event fires when focus leaves the form element (an event known as blur). That means the event happens only once a user has clicked on—or used the keyboard to navigate to—another element on the page (form or otherwise). In cases like mine, where a user is expecting instant feedback to their click, this causes issues with user experience. Unfortunately, this isn’t exactly a “bug,” as it’s how IE handles this event in IE 6, 7, and 8.
In other browsers—Firefox, Webkit-based (Safari/Chrome) and Opera—the event fires off immediately, so in order to have consistent, intelligent operation, we have to hack IE’s basic behavior. The easiest solution is to bind your function to a different event, such as the click event, but that’s generally not the right solution. There is a better one.
Why using the click event is wrong
One word: accessibility. Users—whether they have a disability that restricts their use of the mouse or like to tab about the page with the keyboard for speed—don’t always use the mouse to move from form element to form element. So, if you bind your functionality to the click event, you may end up messing with a user’s workflow, which makes for unhappy visitors. In some cases, it may even make a user unable to use your application. So don’t use that as your solution.
The real solution
If IE needs a blur event to know that the change event should fire, give it one when the element is clicked. In jQuery, that looks something like:
$("#element").click(function(){
// In IE, we need to do blur then focus to trigger a change event
if ($.browser.msie) {
this.blur();
this.focus();
}
}).change(function(){ actionIWantOnChange(); });
This code tricks IE into thinking that focus has been moved away from the element when it is clicked. But since the change event is also triggered, the actions attached to that event still happen. Keyboard navigability still works: even though there is no click, the change event still fires when the user moves to another field with the keyboard, so the feedback arrives when they expect it.
Now, you can probably improve my above example by using better feature-sniffing to test for IE instead of the browser object in jQuery, but my time for creating a fix was limited—and this code gets the job done.
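If you want to head in that direction, one possibility (an untested assumption on my part, not part of the original fix) is to check for IE’s proprietary onpropertychange event rather than relying on the UA string:

// a sketch: old IE exposes the proprietary onpropertychange event; other browsers don't
function supportsPropertyChange() {
    var el = document.createElement("input");
    if ("onpropertychange" in el) { return true; }
    el.setAttribute("onpropertychange", "return;");
    return typeof el.onpropertychange === "function";
}

$("#element").click(function () {
    if (supportsPropertyChange()) {
        this.blur();
        this.focus();
    }
}).change(function () { actionIWantOnChange(); });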
In a day of zero fun, a coworker and I ended up tasked with debugging a jQuery-based script that seemed to be perfectly fine, except of course, the part where it wasn’t working as expected in IE. After some tracing, it turns out the issue had everything to do with one line of code not returning a value: $("title").text();. To translate, the author of the code had been trying to retrieve the text of the title element using jQuery’s text function. But, turns out that doesn’t work in IE.
The solution
If you want to retrieve the value of the page title, use document.title. If you want to set the value of the page title—the title displayed at the top of the browser—use document.title.
It doesn’t matter if you’re working with straight JavaScript or a library like jQuery, the correct way to interact with the page title is through document.title.
Here’s a full code example:
var foo = document.title;
alert("The title was "+foo);
// on this site, the above alert would read
// "The title was The TITLE element and jQuery’s text() function"
document.title = "Now it’s been changed.";
alert(document.title);
// this alert = "Now it’s been changed"
// the display at the top of the browser changes too
The explanation
Don’t worry, I’m not going to leave you high and dry without explaining why the original approach didn’t work.
The problem exists in IE because IE doesn’t consider the title element to have child nodes. By W3C specs, the title element contains one, and only one, child node: a text node. It cannot have any child elements. Just the one text node. But according to IE, that text exists in some strange nebulous state outside of a normal text node. Here’s a code example.
var titles = document.getElementsByTagName("title");
var kids = titles[0].childNodes;
alert(kids.length);
If you run that code in IE, the alert will be 0. In Firefox, Chrome, Safari, Opera, etc., it will return 1, as expected.
Breaking down jQuery.text()
So, the reason $("title").text() doesn’t work in IE is jQuery.text()’s reliance on child nodes. Let’s take a look at the function (jQuery version 1.4.2, line 3418):
function getText( elems ) {
    var ret = "", elem;

    for ( var i = 0; elems[i]; i++ ) {
        elem = elems[i];

        // Get the text from text nodes and CDATA nodes
        if ( elem.nodeType === 3 || elem.nodeType === 4 ) {
            ret += elem.nodeValue;

        // Traverse everything else, except comment nodes
        } else if ( elem.nodeType !== 8 ) {
            ret += getText( elem.childNodes );
        }
    }

    return ret;
}
A quick translation of the code is as follows. The first line of the function sets up a variable ret as an empty string; this is the return value for the function. Then we jump into a loop that goes through each item in the array-like collection that was passed in (remember, $("title") returns a collection containing a single element). For each item, it checks whether it is a text or CDATA node. If it is, it appends the node’s value to the return value (ret). Otherwise, as long as the item isn’t a comment node, it recursively calls getText on that node’s childNodes and appends the result to ret. At the end, ret is returned.
But, remember, in IE the title element has no child nodes, so there is no text node to read. The title element itself isn’t a text node, so the recursive call to getText( elem.childNodes ) receives an empty list, its loop never runs, and it returns its initial empty string. So what does ret equal? An empty string.
The code continues to work in other browsers because they rightly treat the element as having a single child node—a text node—and that node’s value—the text—is added to ret.
innerText doesn’t work either
Even if you’re using plain JavaScript without a library, document.title is the way to do this. Typically, to set the text of an element in IE you would use the innerText property, but you shouldn’t do that in this situation. There are a couple reasons for this:
It doesn’t work on the title element in IE. In fact, it’s a documented issue dating back to IE 5, maybe earlier. At this point, we have to assume Microsoft has it working as they intend it to.
The innerText property doesn’t work cross-browser. In Firefox, you have to use the textContent property instead. Why bother writing extra code to be cross-browser compatible when document.title will work without issue?
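Just for illustration, here’s roughly what that extra cross-browser code looks like if you insist on treating title like any other element. It’s a sketch, and per the first point above the innerText branch is still unreliable for title in IE, which is exactly why document.title wins.

// a sketch of the fallback dance that document.title lets you skip entirely
function getTitleTheHardWay() {
    var el = document.getElementsByTagName("title")[0];
    if (typeof el.textContent === "string") {
        return el.textContent;   // Firefox, WebKit, Opera
    }
    if (typeof el.innerText === "string") {
        return el.innerText;     // IE, but unreliable for the title element
    }
    return document.title;       // the answer you wanted all along
}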
Wrapping it up
Here’s that first script again, showing the correct way to do it:
var foo = document.title;
alert("The title was "+foo);
document.title = "Now it’s been changed.";
alert(document.title);
So now you know why you can’t use jQuery.text() to retrieve the page title. But that’s ok, because doing so would be inefficient anyway when you can just call document.title. Good luck with your dev.
I’ve finally started development of a book recommendation widget for the musings and reviews on books I read section of my site. The general functionality is pretty simple: visitors have a few fields to complete with info about the book; upon submission, their recommendation is saved to a database; the new recommendation is shown to all and sundry in a “last recommendation” section; rinse & repeat. The no-JS, server-side-processed version is simple, straightforward, and was quickly completed. Being a front-end developer, however, I want to make sure this can all be done in a smooth JS-enhanced way as well (for some nifty UX). That’s where I encountered yet another annoying JavaScript problem.
Each browser interprets “clone” differently
Due to differences in how each browser implements cloneNode(), form element values don’t always persist to the copy of an element. Inconsistencies like this are one of the many reasons I’m a fan of using a JavaScript library for most projects: the library has usually worked out the issues and can handle copying without losing important data. Unfortunately, jQuery still has some issues to work out; it loses select and textarea values on clone(). Other input types don’t seem to have any issues, including hidden fields.
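A quick way to see the behavior described above (a sketch with a hypothetical #mySelect field; the exact result depends on your jQuery version and browser):

// set a value, clone the element, and compare
$("#mySelect").val("second-option");
alert($("#mySelect").val());           // "second-option"
alert($("#mySelect").clone().val());   // expected "second-option", but the clone reports the default option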
How this affected my widget
In order to display the submitted form data in the “last recommendation” section, I decided the best approach would be to copy the form, then replace each form element with an inline element containing the value based on type of element. It might not be the most elegant solution, but it seemed better than specifying each individual element by name and manipulating it that way.
So, step one: use clone() to copy the form (no need for cloning events and the like). Step two: do the replacement. Step three: realize that regardless of selection, the last recommendation always displays the first option in a select box. And an empty string for any text area.
Solutions
Honestly, my current solution is just a dirty hack. I only have two affected fields, so I explicitly copy the original values to where they need to go. I’m more concerned about getting this up and running, knowing that there won’t be much in the way of extension in the future. I am about 99% positive I won’t be adding any additional fields, at least.
For other projects, though, this could be a major issue for scalability and ongoing maintenance, especially if there are multiple affected elements. A quick search around the Internet shows a lot of inelegant solutions. One that comes close to a decent jQuery answer is this solution at mddev.co.uk, although it only tackles the select issue (a later update covers inputs as well, which is a bit of overkill per the comments, and it still skips textarea). That approach could work with some modifications, as sketched below.
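Here’s a rough jQuery sketch of that general idea, with hypothetical ids and markup (adjust to your own form):

var $form = $("#recommendForm");
var $copy = $form.clone();

// clone() drops select and textarea values, so copy them back by position
$copy.find("select, textarea").each(function (i) {
    $(this).val($form.find("select, textarea").eq(i).val());
});

// then swap each field for an inline element holding its value
$copy.find("input[type=text], select, textarea").each(function () {
    $(this).replaceWith($("<span/>").text($(this).val()));
});

$("#lastRecommendation").empty().append($copy.children());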
Have you seen this before? How’d you solve it/work around it?
When I was new to working with AJAX functions—especially in the realm of form submission—one hurdle I often encountered was how to handle processing errors in my back-end script and give meaningful feedback to my users. I didn’t understand how to designate a response as an error instead of successful processing. Early on, I sometimes employed a dirty hack of prepending any error with the string ERROR: and then adding in some quick checking to see if that substring existed in my response text. While that may get the job done, it’s not good form. It causes convoluted code, thumbs its nose at existing error handling functionality, and makes future maintenance a headache. There is a better way: simply use your processing language’s built-in header and error handling functionality.
N.B. From a JavaScript standpoint, I’m showing code based on the jQuery library, because I use it on a regular basis. The concept of triggering the XMLHttpRequest object error handling with proper headings is applicable to any type of JavaScript coding. Likewise, my server-side processing examples in this article are coded in PHP, but that is not the only applicable language. You can produce similar results with other languages as well, so long as that language allows you to send header information. If you’re willing to translate my examples into another language or non-libraried JavaScript, please do so in the comments or e-mail me (rae.arnold@gmail.com), and I’ll add it into this article (and give you credit, of course).
The information in this article refers to AJAX requests with return dataTypes of html or text. JSON and XML dataTypes are for another day.
The client side of things
Let’s say we’re working with a bare-bones comment form: your users add a name, e-mail address and their comment. Three fields, all required, easy-peasy. For the purposes of this article, we’re going to ignore all of the validation you would want to do on the form and focus solely on the process of sending it via AJAX to your PHP processing script. The resulting AJAX call with jQuery might look something like:
//[warning: most of this is pseudo-code, not code you can copy+paste and expect to immediately work]
$.ajax({
    type: "get",
    url: "/processing/process_comment.php",
    data: $("#commentForm").serialize(),
    dataType: "html",
    async: true,
    beforeSend: function(obj) {
        // give user feedback that something is happening
        showProcessing();
    },
    success: function(msg) {
        // add a success notice
        showNotice("success", msg);
        clearForm();
    },
    error: function(obj, text, error) {
        // show error
        showNotice("error", obj.responseText);
    },
    complete: function(obj, text) {
        // remove whatever user feedback was shown in beforeSend
        removeProcessing();
    }
});
Essentially, the above JS expects the server-side processing script to return a message to show the user. We’ll set up such a script next.
The server side of things
Processing our simple comment form is close to trivial. We’d want to do some basic validation, make sure the submitter hasn’t been blacklisted for spamming or other reasons (in this example, based on IP address), and then add the comment to a DB. The interesting part, however, is how to signal from the server that an error occurred during processing and have that error propagate back to the AJAX error handling properly. This is where the header and exit functions come in handy. Look at this example:
<?php
//[warning: the "processing" is pseudo-code functions, however the error throwing parts are valid]

// perform validation
if (validValues($_GET)) {
    if (blacklisted()) {
        header('HTTP/1.1 403 Forbidden');
        exit("Uh, hi. Your IP address has been blacklisted for too many spammy attempts. Back away from the keyboard slowly. And go away.");
    }

    if (addComment($_GET)) {
        // We have success!
        print("Your comment has been successfully added.");
        exit(0);
    }

    // if the code reaches this point, something went wrong with saving the comment to the db, so we should show an error
    header('HTTP/1.1 500 Internal Server Error');
    exit("Something went wrong when we tried to save your comment. Please try again later. Sorry for any inconvenience.");
} else {
    header('HTTP/1.1 400 Bad Request');
    exit("This is a message about your invalid fields. Please correct before resubmitting.");
}
?>
In PHP, the header function allows you to send headers to the client. In the syntax used above, it lets us specify an error status. For more info, head to the PHP manual on header. exit is a handy construct that ends script execution; when passed a string, it prints that string before ending, which is how the error messages above become the response body. Upon successful completion of the processing, we call exit(0), which conventionally signifies successful completion of the script (any non-zero integer status indicates an error). Learn more at the PHP manual on exit. For errors, you can also use the die construct, which is equivalent to exit.
The above examples for the error function are pretty simple, but you can create very elegant error handling solutions by utilizing other properties of the XMLHttpRequest object. One possibility is to take unique actions based on the status code returned. By using the status property of the object, you can customize your error handling for each status. For instance, an error function like the one sketched below would prompt a user to correct their form information when needed, but completely remove the form if it finds the user’s IP has been blacklisted (using the same server-side script from above).
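Here’s that sketch, reusing the showNotice() helper and #commentForm id from the earlier example; drop it in place of the error option in the $.ajax call above.

error: function(obj, text, error) {
    if (obj.status === 403) {
        // blacklisted: show the message and take the form away entirely
        showNotice("error", obj.responseText);
        $("#commentForm").remove();
    } else if (obj.status === 400) {
        // invalid fields: let the user correct and resubmit
        showNotice("error", obj.responseText);
    } else {
        // 500 or anything else unexpected
        showNotice("error", "Something went wrong. Please try again later.");
    }
}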
A: The bottom of your page, just before the </body> tag is your safest bet. Of course, with Web development, nothing is as easy as a blanket statement like that, right? But, when I’m helping people troubleshoot their JavaScript problems, 95% of the time the first step is to move the JS to the bottom, order the scripts properly and wrap it in some sort of function that starts only after the page is loaded. This not only fixes their problem, but often speeds up content loading. Read on to learn why this is a good rule.
You can’t affect an element that doesn’t yet exist
When attaching events to an object (one of the most common JavaScript tasks I see), that object must exist before an event can be attached to it. When you have a bit of script at the top of the page, it runs as soon as the browser reaches it, meaning the object you’re trying to attach the event listener to doesn’t exist yet. For starters, the DOM, which is the structure that allows you to interact with elements in your page, has not finished loading at this point, nor have your elements inside of the <body>.
If your script is at the bottom, however, even if the DOM isn’t complete, chances are good that your element will at least exist on the page, able to be manipulated. That’s why nothing is happening when your code in the <head> is trying to set up some cool thing to happen when you click an object; the object didn’t “hear” you tell it to do that cool thing—it was in rendering limbo, outside the realm of your script’s reach.
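A tiny illustration (doSomething is a hypothetical handler):

// placed in the <head>, this throws an error: the browser hasn't parsed
// #myButton yet, so getElementById returns null
document.getElementById("myButton").onclick = doSomething;

// the same line placed just before </body>, after the element has been
// parsed, attaches the handler as expected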
Waiting for the DOM to be ready
Simply moving the code to the bottom doesn’t necessarily mean everything will be loaded, however. The best practice is to explicitly tell your code not to run until either the body is loaded (for older browsers), or until the DOM is ready (for modern browsers). I can’t explain how to do this in every library that exists, because I haven’t used them all (check out their documentation), and that’d make for a very long article, but in jQuery, it can be done with this code:
$(document).ready(function() { ... all your code here ... });
Or the shorthand version that looks like:
$(function(){ ... all your code here ... });
However, if you’re manipulating images, you’ll need to wait until those are downloaded as well. That’s not necessarily done before the DOM says it’s ready. In that situation, you would wrap your code with:
$(window).load(function() { ... your code ... });
The above is triggered after every piece of the document has been downloaded, including all of your images.
In plain-Jane non-libraried JavaScript, you’d be using something along the lines of
window.onload = function(){ ... your code ... };
This is essentially the same as the <body onload="some code goes here"> method, which should not be used because you should be separating your behavior from your content. Using window.onload allows you to have your code in a separate file, easily included or changed in multiple documents.
Does anyone know how to check if the DOM is ready with plain JavaScript? Forgive me, but the process escapes me at the moment.
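For what it’s worth, one plain-JavaScript answer is the DOMContentLoaded event, assuming a browser that supports it (older IE did not, which is part of why the libraries wrap this up for you):

document.addEventListener("DOMContentLoaded", function () {
    // ... all your code here ...
}, false);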
On the subject of page speed
Now, a slow-loading page isn’t necessarily a broken page, although some research shows that visitors will quickly bounce if they have to wait too long for the page to load. But page speed can often be improved, and many times a page hangs because of scripts in the <head>. Most pages consist of a multitude of files in addition to the basic page: CSS, JavaScript, images, etc. To speed up loading times, browsers will try to download these in parallel—multiple files at the same time. JavaScript, however, is not pulled down in parallel. So, once a JavaScript file download starts, everything else is put on hold until the script finishes. If the JavaScript is in the <head> of your document, this means your users are staring at a blank screen while the script finishes. If your JavaScript is at the bottom of the page, your content loads first. Your well-written, interesting content should keep visitors busy while the rest of the scripts finish loading.
How I roll
I use the following order for setting up my scripts at the bottom of the page. I’ve found that it provides the best results for my uses both here on my personal site and on sites I develop for others.
JS Library (usually jQuery)
Plug-ins, if any (such as Lightbox, SimpleModal)
My site-specific code (usually invoking plug-ins, form validation, etc)
Social media plug-in (AddThis)
Tracking code
You can view the source of this page to see all of that code at the bottom. (Or don’t, I really need to clean it up! I’m causing you to load the contact form validation even when there is no form. That’s very naughty of me, and I promise to fix it post haste.) The exceptions are ads and the custom search script which both appear at the point in the code where they show on the page, due to requirements of the code and companies.
When should you not place it at the bottom?
When making such a generalized statement like “at the bottom,” there are going to be exceptions. The number one exception to placing the code at the very bottom of the page is: when the provided documentation, manual, or instructions say otherwise. Now, I rarely come across such instructions except for in outdated code that shouldn’t be used anyhow*. One notable exception is SWFObject, a wonderful script for use with Flash on a page. They say put the code in the <head>. I haven’t done enough testing to say that it’ll work with the script at the bottom, so <head> it is.
The other exceptions I see regularly are ads and widgets. Ad servers, such as Google AdSense, are usually structured so that you place the scripts wherever you want the ad to appear. Unfortunately, these slow down your page load, but there’s nothing to be done until they improve their scripts. Likewise, some widgets require placement at the point in the code where they should appear rather than at the bottom of the page. Try to find alternatives if possible, otherwise, do as the documentation instructs.
A final note, speaking of code that’s supposed to go in the <head> element…: please, please, for the love of all things sacred do not use MM_Preload or MM_Menu or any other old Dreamweaver/Fireworks JavaScript that is prefixed by “MM.” Without exception, I have never seen this code do anything that cannot be accomplished in a better way, often without the use of JavaScript to begin with. </rant>
Do you have any other quick JS debugging tips? Has this strategy fixed your JS issues?
I try to make sure to do my due diligence by checking my sites in the three major browsers. And usually, much to my chagrin, I also end up thoroughly testing in IE6, although I’m not willing to make them pixel perfect unless I’m being paid very well to do so. So, I was extremely surprised when a client got ahold of me about a display bug in a now-live site. Ok, so you have to believe me, I really did check in IE7. On multiple computers. I don’t know why the client’s installation was special, but sometimes that’s just how things go.
The problem
(The screenshot that accompanied this post showed an example menu: not the actual client site, but you get the idea.)
The solid-color slightly-transparent drop-down menu backgrounds were showing up as 80% alpha gray to completely transparent gradients, thus rendering most of the navigation illegible.
Code Foundations
These were pretty run-of-the-mill drop-down menus comprising
semantic, nested uls, with
CSS-styling, including hover states (where available), topped with
jQuery/Superfish enhancement,
although I was trying some CSS3 with the styling.
The CSS wasn’t beginner territory, although it’s nothing crazy. To start with, my backgrounds were transparent by way of RGBa background-colors for browsers that support it. Then it downgraded to an opaque grey for older versions of modern browsers/non-IE browsers. And then, in the IE-only style-sheets, I swapped in a 1×1px PNG-24 with alpha transparency (which was being fixed by DD_belatedPNG in IE6).
Ok, no problem so far. When I tested in IE6/7 my background image swapped itself in and repeated and everything looked fab.
Except for on the client’s computer.
The Cause
After much Googling that turned up very little, I found some promising complaints about PNGs going wonky in IE if they’re stretched. Then I was super confused. You can’t stretch a background image with CSS. But to keep it simple: for some reason, the JavaScript was hijacking my background-repeat and instead was scaling (stretching) the PNG under certain circumstances. And voilà: seemingly unexplained, phantom, uncoded gradients.
The Solution
Make the PNG 2×2px or 800×1px or … well, you get the idea. IE doesn’t always play nice with 1×1px PNG files with alpha transparency. Who knows why; it just doesn’t. But it’s an easy fix. Unless you want to complain about file size. But then you’d better go off and make me a gorgeous background sprite so you’re not killing your visitors with a bazillion extra HTTP requests, if you know what’s good for you.
Closing Arguments
In short: if the background image is getting stretched, just change from 1×1px to something larger.
Also, please don’t murder the teacher, advanced kids. I realize that my implementation also had some kinks that should be ironed out, but this post is just about those damn PNGs.