Digg Comment System is Not Retarded

…I am! Ok, so I wrote a post yesterday called “Digg Comment System is Retarded” in which I was complaining about the *new* way story comments are displayed. Shortly after the post hit Digg, one user pointed out the following:

you know you do have the option of viewing the comments in differing formats:

sort by most diggs
sort by date (show all)
sort by date (-10 diggs or higher)
sort by date (-4 diggs or higher)
sort by date (+0 diggs or higher)
sort by date (+5 diggs or higher)
sort by date (+10 diggs or higher)

it’s a drop down box right above the first comments number of diggs
sounds like this would solve most of your problems with digg’s comment system

I’m feeling a little sheepish about the whole thing now. In my defense, Digg adds features regularly, and after a fairly recent feature addition, the comment view defaulted to the “sort by most diggs” selection, which is what I was seeing every time I viewed story comments. I believed this was simply a feature change I had to live with, and how illogical it seemed was really getting under my skin.

Frankly I can’t imagine why anyone would want to view the comments this way except to verify that comments from more liberal/irreligious posts get dugg up and conservative/religious posts get dugg down, but the Digg programmers are pretty sharp and I’m sure they’ve got some reason for it. So really, I just need to say: hey folks at Digg, I apologize for the “Retarded” comment. I see you’re really just trying to accommodate as many features as possible. You all do great work and I’m sorry for doubting you.

Now, with all that being said, I still have to take issue with one of the items I named in my original post:

Posts are not rated based on whether they are good or well thought out, but rather on whether or not the person rating the comment agrees with what was said.

I acknowledge that this doesn’t take issue with the comment system, but rather with the commenters. And I do take issue with the fact that people do this. I doubt my mentioning it will cause people to change, but I do remember a time when we used terms like “good netizen”–meaning someone who is responsible on the net (tubes)–and there were whole documents on “netiquette”–that is, net etiquette. In case you’ve forgotten, etiquette means (and yes, it’s a dictionary definition) “conventional requirements as to social behavior; proprieties of conduct as established in any class or community or for any occasion.”

Having proper netiquette in the context of the Digg comments section would, I think, mean digging comments up or down based on whether they demonstrate well-reasoned points and are written with temperate language, rather than on whether or not you agree. If I disagree with someone but they’ve made their arguments well and with respect, I digg them up. There is so much juvenile name calling (yes, I realize I used the word Retarded to describe the comment system–nobody’s perfect) that it makes Digg a place not of enlightenment and discovery, but rather a place of vitriol and anger. It’s completely counterproductive.

In hindsight, it’s not the whole system, just the “sort by most diggs” feature that is umm, not so great (retarded is probably too strong), but at least I have choices. I am now viewing the comments using “sort by date (show all)”.

Digg Comment System is Retarded

Since its inception, every aspect of Digg has gone through refinements, the vast majority of which have been good. This latest version of the comment system, however, is the most retarded thing I have ever seen. Usually comment systems are either “flat”–meaning the comments are simply posted in ascending order according to the date each comment was posted–or “threaded”–meaning that when someone responds to a comment, the reply is placed, indented, right below the comment being responded to.

The folks at Digg have tried to keep some control over the way people post by limiting replies to top-level comments only (i.e., avoiding infinite thread cascading)–which was good–but now it appears that posts are listed on the page only according to the number of diggs–positive or negative–a post may get. Here’s what this has amounted to:

  • Replies to posts are scattered about and you never know who the person was responding to without doing a page search to find similar keywords.
  • Posts are not rated based on whether they are good or well thought out, but rather on whether or not the person rating the comment agrees with what was said.
  • Often the top rated comments are just jokes that barely relate to the topic at hand.
  • When the posts are of a controversial nature (e.g. religious, political, etc.), clearly the majority of “diggers” are non-religious and left wing. If you don’t believe me, just look through any of the political stories at Digg and you’ll see that if you’re a conservative, you get filed to the bottom of the page, and if you’re liberal, you get promoted to the top of the page.

I suppose you can never create a system that will make everyone happy, but this current version is an illogical, atrocious mess that really needs to be re-thought. I’m not sure what the underlying issues were with the previous iteration of the system, but this is definitely a step backwards.

FizzBuzz Is For ‘Real’ Programmers Too

In his follow-up to his post on Coding Horror about the FizzBuzz problem (Why Can’t Programmers… Program?, February 26, 2007), Jeff Atwood says:

It certainly wasn’t my intention, but a large portion of the audience interpreted FizzBuzz as a challenge. I suppose it’s like walking into Guitar Center and yelling ‘most guitarists can’t play Stairway to Heaven!’ You might be shooting for a rational discussion of Stairway to Heaven as a way to measure minimum levels of guitar competence.

And then a paragraph or so later he says:

The whole point of the original article was to think about why we have to ask people to write FizzBuzz. The mechanical part of writing and solving FizzBuzz, however cleverly, is irrelevant. Any programmer who cares enough to read programming blogs is already far beyond such a simple problem. FizzBuzz isn’t meant for us. It’s the ones we can’t reach– the programmers who don’t read anything– that we’re forced to give the FizzBuzz test to.

What I believe Jeff has discovered is that programmers, all programmers (especially the ones who read programming blogs), are insecure about their ability to write code.

When we programmers first come into contact with a rudimentary problem that might call our competency into question, we need to make sure we agree that the problem is in fact rudimentary, because if we are not able to solve it there can be only two possible conclusions: either the problem is not rudimentary after all, or we are not competent.

As I see it, upon encountering this post, most programmers went through four phases of competency validation as follows:

  1. Am I able to solve the problem at all?
  2. As a ‘senior’ programmer, am I able to solve it in under 10-15 minutes (remembering that a quote from Imran in the original post stated, “I’ve also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution.”)?
  3. Now that I have solved it under that time, how long did it actually take me? In other words, where do I rank among true programmers?
  4. Now that I know I can do it in under 3 minutes, I must create the most cleverly written one-liner ever, ever!
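For anyone who skipped the original posts, the problem itself is tiny: print the numbers 1 to 100, but print “Fizz” for multiples of three, “Buzz” for multiples of five, and “FizzBuzz” for multiples of both. A plain, un-clever C# version (no one-liner heroics) looks something like this:

[csharp]for (int i = 1; i <= 100; i++)
{
    if (i % 15 == 0)        // divisible by both 3 and 5
        Console.WriteLine("FizzBuzz");
    else if (i % 3 == 0)
        Console.WriteLine("Fizz");
    else if (i % 5 == 0)
        Console.WriteLine("Buzz");
    else
        Console.WriteLine(i);
}[/csharp]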

In my mind, when Jeff says,

And instead of writing FizzBuzz code, they should be thinking about ways to prevent us from needing FizzBuzz code in the first place.

he’s missing what actually happened. When programmers saw the post, they said, “I’ve never had to solve that problem before and if some senior level (self-proclaimed or otherwise) programmers took 10-15 minutes to solve it, I had better check to see that I can do it in less time (or at all for that matter)”.

Don’t you see what you’ve done, Jeff? Every programmer out there who considers himself competent just went and took a test to make sure he really is. Imagine someone who should know how to solve the problem, who currently holds a programming job, and who just failed the test miserably. Maybe he will come to his senses and stop wasting his company’s time and his own life. If this is truly a problem–that people get into positions because nobody asked them the FizzBuzz question when they were being interviewed–then the FizzBuzz post has just shown them the truth, and maybe they will now ‘see the light’ and finally stop making their fellow programmers look bad and go away.

Then again, they probably won’t, but at least now they know. They can’t deceive themselves any longer. They are not competent. The jury is no longer out for them.

ASP .NET 2.0 TreeView Strategy

The statelessness of the web continues to challenge every web developer, newbie and seasoned professional alike. Wherever I am in the development experience landscape, I am no exception.

While working with an ASP .NET 2.0 TreeView for the first time, I was hopeful that there might be some built-in capabilities that would make my experience similar to what you might expect in the WinForms TreeView.

When working with the TreeView in WinForms, I simply create a derived TreeNode object for every type of entity I expect to represent in the TreeView. That way, when I get a node-selection event, I can simply check the node type and respond accordingly. So I set up the event handler for my ASP .NET TreeView, went to look at the signature, and realized that the only things I was given were the sender (the TreeView itself) and a generic EventArgs object. This is different from the WinForms TreeView, which actually provides the selected node object in the event args. Well, I reasoned, just because I’m not given the node object doesn’t mean I can’t just grab it out of the TreeView’s SelectedNode property.

Now, keep in mind that I had created a base TreeNode type that all of my TreeNodes could inherit from, but to be safe I simply typecast to the TreeNode type when I accessed the selected node in the TreeView. Here is what I did:
[csharp]TreeNode selectedNode = ((TreeView)sender).SelectedNode;[/csharp]
I then used the GetType() method to determine whether I was getting back the type I was expecting. And herein lies the problem. As I moused over selectedNode while debugging, the type it was showing was TreeNode. No matter which of my inherited node types I had added to the TreeView on page load, the SelectedNode property always returned a plain TreeNode.

At this point, I realized that the statelessness of the web had gotten me again. I should have realized that this would be the case. The TreeView only knows how to make generic TreeNodes and since the TreeView has to be built on each successive trip to the server, it doesn’t know how to make my extended tree nodes.

In thinking it through a bit more, and with a bit of help from some guys on the CodeProject.com message boards, I realized that the only way to do what I wanted was to create my own derived TreeView and create a way to store the types I needed in the ViewState. Oy! This way of handling a TreeNode click was starting to seem much more complicated than it needed to be.

I decided to head back to the drawing board–or, actually, the MSDN documentation–and see what the TreeView had to offer me. What it all boils down to is three different properties I can access:

TreeNode.Text: This is the text that gets displayed for the particular node when rendered.

TreeNode.Value: This is the value of the node which has to be unique at any given hierarchy depth.

TreeNode.ValuePath: This is the full value path to the current node.

These properties make it pretty clear that the way to handle TreeNodes in ASP .NET 2.0 is to use the ValuePath to determine where the selected node lies in the hierarchy. The Text property is only useful in that it will display something to the user. The Value property is useful in that it will allow you to store an identifier for the current node. It’s the ValuePath property, however, that allows you to parse out everything you need to respond accordingly.

Now that I knew I needed a good way to parse the ValuePath, I stumbled upon a new problem. In the event handler for the node change, I was simply building a URL to redirect to and then calling Response.Redirect(). The problem is that if you redirect to the current page and display a View in the page (I was using a MultiView, another powerful control available in ASP .NET 2.0) based on parameters you’ve passed in the URL, you’re going to have problems with the TreeView not retaining its display. The TreeView expects a simple post back to the server, in which case it can retain its layout in the ViewState. In my case, though, clicking a TreeNode triggered a Response.Redirect, so when the new page loads it is not a PostBack, which means the TreeView re-renders in its default state (determined by the ExpandDepth property set in the designer or code views).

What this means is that each time I would click a node in the TreeView, when the page loaded again, my TreeView would be collapsed again to its default expansion depth which I had set to 1. So, how could I ensure that when the page loads, my TreeView is expanded properly?

I decided to take the ValuePath of the selected node and append it to the URL as a parameter as well and then use that on the other side to expand the TreeView to the same state as it was before the node was clicked. Of course, first you have to UrlEncode the ValuePath like this:

[csharp]string encodedValuePath = HttpUtility.UrlEncode(valuePath);[/csharp]

And then you can add it to the URL you are redirecting to, but here is how I handle expanding the nodes properly when the page is loaded again:
[csharp]if (Request.Params["ValPath"] != null)
{
    // Decode the ValPath parameter
    string valPath = HttpUtility.UrlDecode(Request.Params["ValPath"]);

    // Find the node we want to select according to the ValuePath
    TreeNode node = tvMain.FindNode(valPath);

    if (node != null)
    {
        // If we were able to find the node, we expand it and set its
        // selected property.
        node.Expand();
        node.Selected = true;

        // Now we simply walk up the TreeNode hierarchy, expanding each
        // parent node above us to ensure that the TreeView will display
        // properly.
        TreeNode tmpNode = node;
        while (tmpNode.Parent != null)
        {
            tmpNode.Parent.Expand();
            tmpNode = tmpNode.Parent;
        }
    }
}[/csharp]
I had originally thought that simply selecting and expanding the selected TreeNode would be enough, but it didn’t actually work correctly until I did things this way.

You may be wondering at this point why I don’t simply use the NavigateUrl property on each node when I initially populate the TreeView. The issue is that the ValuePath property gets set at a later time and is not available when you first instantiate a TreeNode. If I tried to append the ValuePath to the NavigateUrl property at the time when the TreeNode was created, it would have thrown a null reference exception. On the other hand, when you just redirect in the node changed event handler as I was doing, you can grab the ValuePath of the currently selected node without problem.

Ok, so now that was working correctly. There was only one more thing I needed to do. I had to have a good way to parse the ValuePath property when the node changed event was fired. For this, I implemented the Strategy Pattern and created a Strategy Factory and ValuePathStrategy objects for each of the TreeNode entities. Basically, my Strategy Factory (which is a Singleton) takes the ValuePath as a parameter and determines from the first or second level in the ValuePath which type of strategy object to create and passes that object back to the caller. The caller can then simply access the Url property on the ValuePathStrategy object and redirect accordingly. Here is what the code basically looks like:
[csharp]protected void tvMain_SelectedNodeChanged(object sender, EventArgs e)
{
    TreeView view = (TreeView)sender;
    TreeNode node = view.SelectedNode;

    if (node != null)
    {
        ValuePathStrategyBase vbase = ValuePathStrategyFactory
            .Instance.GetStrategy(node.ValuePath, view.PathSeparator);

        if (vbase != null)
        {
            Response.Redirect("~/Default.aspx?" + vbase.Url, true);
        }
    }
}[/csharp]
Notice I’ve also passed in the PathSeparator so each level of the path can be easily tokenized with the String.Split method. Each of what would originally have been an extended tree node is now a strategy object that provides different URL parameters based on what the page will need when it loads: the view, which specifies which view in my MultiView to display; the name of the parameter that view needs in order to load its data; and the parameter value itself, which, in the case of my application, gets passed to an ObjectDataSource that has a table adapter for the corresponding data in the database.
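To make the shape of this concrete, here is a minimal sketch of what such a factory might look like. The names here (ValuePathStrategyBase, CustomersStrategy, the “Customers” path token, the Url query format) are hypothetical stand-ins for illustration, not my actual types:

[csharp]public abstract class ValuePathStrategyBase
{
    // The query string fragment the page will be redirected with.
    public abstract string Url { get; }
}

// Hypothetical strategy for nodes under a "Customers" top-level node.
public class CustomersStrategy : ValuePathStrategyBase
{
    private readonly string[] tokens;
    public CustomersStrategy(string[] tokens) { this.tokens = tokens; }

    public override string Url
    {
        get { return "view=Customers&id=" + (tokens.Length > 1 ? tokens[1] : ""); }
    }
}

public class ValuePathStrategyFactory
{
    private static readonly ValuePathStrategyFactory instance =
        new ValuePathStrategyFactory();
    private ValuePathStrategyFactory() { }

    public static ValuePathStrategyFactory Instance
    {
        get { return instance; }
    }

    public ValuePathStrategyBase GetStrategy(string valuePath, char pathSeparator)
    {
        // Tokenize the ValuePath so each hierarchy level can be examined.
        string[] tokens = valuePath.Split(pathSeparator);

        // Pick a strategy based on the top-level entity in the path.
        switch (tokens[0])
        {
            case "Customers":
                return new CustomersStrategy(tokens);
            default:
                return null;
        }
    }
}[/csharp]

The Singleton keeps the token-to-strategy mapping in one place, and adding a new entity type to the tree only means adding one strategy class and one case.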

I’m not sure my methods for handling the TreeView are the best; however, I’m pretty happy with the outcome. I’ve been able to keep everything relatively clean code-wise, and the way I’ve done things seems to uphold the rules of designing an application for the web as opposed to WinForms.

Web Programming: A Hacker’s Dream?

So I recently came to realize that a web application I had developed had a security hole: someone had hacked the page with a SQL injection attack. I was thinking it was the underlying CMS (Content Management System) framework that provides the basis for the web application that was causing the problem, but on a whim I decided to go back in and check my own code just to make sure. I realized that while it may not have been my code that was used to mount the attack, it quite possibly was. Basically I had unchecked inputs, so a URL like this:


Could be turned into something like this:


which would simply delete all of the users from the database. Yikes!

As a seasoned programmer I should know better, so the only excuse I can think of is how long ago I wrote the code. I guess I had never done any security hardening to the application that I know I should have done. Fortunately, this lesson didn’t have terribly dire consequences, however it has made me wake up to the realities a bit.

Maybe it has already been done, but it seems to me that someone with malicious intent and a basic knowledge of web crawling could write a simple script to run amok on the Internet, cruising from site to site looking for vulnerabilities. Obviously companies and even individuals feel that they cannot be without a web site; however, building a site and having it accessible to the world should give us programmers pause. We need to do one of two things: a) write a killer EULA (End-User License Agreement) like Microsoft does that protects us from any culpability, which lowers a customer’s confidence in our work, or b) write such rock-solid code that these vulnerabilities are not easily found and exploited. Maybe the real solution is somewhere in between the two. Anyhow, this experience has helped me get better focus. I hope it will encourage you to get focused as well.
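The standard defense for this class of hole is to stop concatenating request parameters into SQL strings and use parameterized commands instead, so user input is always treated as data rather than executable SQL. A sketch in ADO.NET (the connection string and the Users table here are hypothetical, not from my application):

[csharp]// Never build SQL by concatenating request parameters into the string.
// A parameterized command sends the input as data, not as SQL text.
// "connectionString" and the Users table are hypothetical placeholders.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    SqlCommand cmd = new SqlCommand(
        "SELECT * FROM Users WHERE UserName = @userName", conn);
    cmd.Parameters.AddWithValue("@userName", Request.Params["user"]);

    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process rows...
        }
    }
}[/csharp]

With this in place, an injected value like `'; DELETE FROM Users; --` is just an odd-looking username, not a second statement.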

Lazy coding practices are the stuff Hacker’s Dreams are made of.

For an in depth introduction to SQL Injection Attacks, read this article.

New limits for cigarette marketing: no more “light” cigarettes… Crazy!

Read this on Digg today:

From today’s ruling against the tobacco companies: “Judge Kessler ordered the companies to stop labeling cigarettes as “low tar” or “light” or “natural” or with other “deceptive brand descriptors which implicitly or explicitly convey to the smoker and potential smoker that they are less hazardous to health than full-flavor cigarettes.””

read more | digg story

Ok. I hate the smell of cigarette smoke as much as the next non-smoker with olfactory senses still intact, but this stuff is crazy! In Colorado, where I live, there has been a statewide ban on cigarette smoke in all establishments, including clubs and bars. Now they are telling the cigarette companies that they are misleading the public? Come on! Do you really think that with all of the bans on cigarette smoking and continued growth in public loathing, these people are just too dumb to get it? Let’s fix the packaging now so that these people will finally “understand”? Hello? It’s a choice, people. People who smoke do so on purpose. They’re not being tricked into it.

Anyhow, I just think the government gets its hands in too much. They go too far. All of the terms listed are relative terms, and none of them is intentionally misleading. Does anyone really believe that any cigarette is safe? I have no opinion of the tobacco companies. They are a business like any other business that produces something with consumption risks, and the government rightly did require them to warn about the use of their products. That is a good use of government authority, but this? This is just stupid. Continuing to force the companies to change their practices without expecting the “poor saps who don’t get it” to change theirs is just ludicrous. What the heck happened to people taking responsibility for their own actions? It’s unreal.

Ok. I’m done.

Don’t Let Them Undervalue Your Work

In spite of many programming jobs being sent overseas, it remains relatively painless to get work as a programmer in the United States, even though a lot of it is in the context of freelancing.

Over the years different sites have come (and some gone) that provide a method for connecting companies who need programmers with the programmers who have the necessary skills.

I have never personally gotten work through these services, but I have friends who have and still do, and they find them to work relatively well. I do, however, have my email address on a list with Guru.com, from which I receive daily updates of new freelance job listings on the site. When I get the messages, I just quickly glance through the list to see the type of work people are interested in. This, I think, helps me keep my thumb on the pulse of the market–what types of skills I need to stay up on, etc. The problem I’ve seen, though, is that what employers are looking for more than anything is a free lunch, and here is why I say that.

Just today I received an email listing a job, and toward the end of the message describing the job were the following seemingly innocuous words:

“This should be pretty simple for an expert.”

I say seemingly innocuous because I think it has a much deeper meaning, and I take issue with its very sentiment. If I were to paraphrase, I would interpret the words this way: “I don’t know how to do this, but I believe that if there is an expert out there, she/he should be able to handle this with ease. And since it’s easy for you, it shouldn’t cost me much.” I’ve seen others with the same sentiment. They all say something to the effect of, “This shouldn’t take someone who knows what they’re doing very long.”

By this statement, the poster is initially admitting that they have no idea how to do what they want, but in the same sentence they have somehow become an expert in knowing how long this thing they know nothing about will take to finish? That’s just crazy. Even the “experts” themselves often don’t know exactly how long something will take. It is clearly a way of saying, “I need help, but I don’t want to pay for it.”

If you are a freelance developer and you find yourself responding to these types of posts on the freelance sites, stop trying to provide these people with bids. Read between the lines and see what they really want–your experience and knowledge for free. Did you get to where you are by someone just giving you a free lunch? No. You know what you know and are an expert because you worked hard to get there. You probably taught yourself or went to school, took the initiative and deepened your understanding, and now you are willing to give that away for free? People should be paying for your skills, and these skills are not simply the ability to write code, but also the knowledge and experience of a particular field that they know nothing about. They are saving money by asking for your advice alone. That should be valued.

If they can find someone overseas with the same skills who will work for next to nothing, there’s nothing you can do about it, but don’t sell yourself short just because these people are not bright enough to see the value in paying for good people who have the ability to make their business successful.

Don’t misunderstand what I am saying here. I have no need to get freelance work through these sites and I don’t even try, so this is not an attempt to just get everyone to raise their standards so I’ll have a better chance getting the rate I want. I don’t even know what rates people are getting on average through these sites these days. What I am saying is stop giving away your hard earned abilities and knowledge (real “intellectual property”) for nothing. Charge what you want to be competitive. That’s fine. Just don’t let anyone convince you that your expertise should come cheap because they figure it’s easy for you.

And in as much as you are able to, make sure any contract you take on, you explain that software development estimation is not an exact science and that you will be billing according to the hours you work. Don’t work for free. I see guys letting people take advantage of them all the time. Let your customer know from the start what kind of time frames are possible and make sure they understand that your work is valuable to them and they wouldn’t want this kind of work done on the cheap.

When I’ve bid contracts, I normally figure out what I think it will take to complete the job. Then I double the hours, and sometimes add another 20% when there are unknown aspects to the job. This may sound high to some people out there, but if you’ve ever had to eat some hours for an inaccurate estimate on your part, you’ll understand why. If you explain to your customer that you bill for the hours worked and assure them you will only be billing for those, they will have greater confidence that you’re not just jacking the price up on them for no reason. Being honest in business is the best policy, and the honesty needs to start up front with the estimate. If you have a bad gut feeling about an estimate you’ve given, it’s better to go back and renegotiate before starting than to get a ways into it and realize just how big a mistake your estimate was.

I’ve worked on jobs where, by the time all was said and done, I was working for a ridiculously low rate for the kind of skills and knowledge I was providing. All that does is make you bitter and frustrated with the job, to where you wish you had never done it in the first place. Bid it correctly from the start and you’ll be better for it in the long run. Even if it means you lose the contract, it’s a better way to go. Other contracts will come. The bottom line is that if you don’t value the work you do, no one else will either.

Single-Sided Floppies

I just got some new old music hardware from a friend–a Roland S-330 sampler module. It only takes floppies for loading samples, so I went digging through my old boxes for some floppies I could use. I found a couple, but then came to learn that not only does the system take floppies only, it also requires them to be single-sided 720K floppies.

After doing a little research on the web I found that the trick is to take a Double-Sided Double-Density floppy, cover the hole in the disk opposite the write-protect hole with tape and then format it as a 720K disk. Sounds easy enough and fortunately it is. The only problem now is that in Windows XP the format command:

C:\> format a: /F:720

does not work. The only option in XP is to use /F:1.44. I then dug a little deeper and came across this gem, which does the correct formatting:

C:\> format a: /T:80 /N:9

When I get a little more time, I’ll look up exactly what each of these does, but for now suffice it to say, it does the job, and I have now turned several perfectly good 1.44MB floppy disks into 720K floppy disks. Who would have ever thought that might be desirable? :-D


This is a reminder to myself for the next time I have problems getting the SQLXMLBULKLOAD library to work. According to several articles online, the bulk load capabilities Microsoft provides through this library only work in a single-threaded environment. You can read a bit more about it over at sqlxml.org.

The basic gist is this. When you want to run the bulk load utility in another thread, specify to that thread that it should run as a single threaded apartment, like this:

System.Threading.Thread.CurrentThread.ApartmentState = System.Threading.ApartmentState.STA;

You can also specify it when you create a thread by doing this:

System.Threading.Thread bulkLoaderThread
= new Thread( new ThreadStart( BulkLoadProc) );
bulkLoaderThread.ApartmentState = System.Threading.ApartmentState.STA;

For anyone who has never used this library before, you may want to take a look at it. It provides a great way to take large amounts of XML and load them into a database. Using the library is as simple as instantiating the object, setting a few parameters, and then calling Execute(), which takes the path to your schema file that provides the mapping between XML fields and database fields, and the path to the actual XML file you want to load. The library is limited in that it doesn’t provide any sort of record-by-record reporting or event hooking; however, it is a great tool when you are sure you have valid XML going in (i.e., it expects you’ve already validated the XML and data integrity).
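For reference, basic usage looks roughly like the sketch below. The connection string and file paths are placeholders, and I’m assuming the SQLXMLBULKLOADLib COM interop assembly is referenced (the interop class name may vary slightly depending on which SQLXML version is installed):

[csharp]// The bulk loader is a COM component, so this code must run on an
// STA thread as described above. All paths and the connection string
// below are placeholders.
SQLXMLBULKLOADLib.SQLXMLBulkLoad3Class loader =
    new SQLXMLBULKLOADLib.SQLXMLBulkLoad3Class();

loader.ConnectionString = "Provider=SQLOLEDB;Data Source=(local);" +
    "Initial Catalog=MyDatabase;Integrated Security=SSPI;";
loader.ErrorLogFile = @"C:\temp\bulkload_errors.xml";

// Execute takes the mapping schema first, then the XML data file.
loader.Execute(@"C:\temp\schema.xml", @"C:\temp\data.xml");[/csharp]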

SQL XML 3.0 SP3 Download