Digg API C# Library

I did a little search around the web for a C# library that encapsulates the Digg API and found this little gem written by Dan Atkinson. Thanks, Dan. It’s a nice little library: it places the data it gets back from the Digg API calls into a DataSet, making it simple to use however you want. He also wrote a helper function that returns the raw XML, which you can drop into an XmlDocument object and query with XPath to get the nodes you want. From the post on his blog, it sounds like Dan might be extending the library.
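
For example, once you have the raw XML in hand, querying it with XPath is straightforward. This is just a sketch: the helper name below is a stand-in for whatever Dan actually called his raw-XML function, and the XPath assumes the stories feed nests each story’s title under /stories/story.

[csharp]
using System;
using System.Xml;

// diggApi is an instance of the library's wrapper class; GetRawStoriesXml
// is a hypothetical name -- check Dan's code for the actual signature.
string rawXml = diggApi.GetRawStoriesXml();

XmlDocument doc = new XmlDocument();
doc.LoadXml(rawXml);

// Pull out the title of every story with an XPath query.
XmlNodeList titles = doc.SelectNodes("/stories/story/title");
foreach (XmlNode title in titles)
{
    Console.WriteLine(title.InnerText);
}
[/csharp]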

You can see Dan’s blog at http://www.dan-atkinson.com/blog/.

The Digg API allows you to collect all kinds of information about Digg stories, comments, users, diggs, and more. It’s quite interesting. I was thinking I might write a few apps that provide a little analysis of what kinds of comments get dugg the most in popular stories. I’ll write the app when I get a little time. Meanwhile, thanks again, Dan, for the library.

Javascript Doppler Radar Object

I was looking around for a way to display a radar map on a website based on a given location. If you go to the NOAA National Weather Service site, you can get the image you need; however, the image they use now is actually a composite that layers transparent images on top of each other and uses the z-index for each layer. This makes it much less intensive for their system, which really only has to render one, maybe two, layers. The majority of the layers, such as topographic, county lines, highways, etc., are static. Only the radar data and weather warnings need to be rendered.

I figured this was as good as anything, so I looked a little closer. What’s really cool is that the images you need can simply be referenced with a full URL to the NWS website–that is, there is no server-side coding or need for XMLHttpRequest (AJAX). You just need a little JavaScript to determine what the image name should be, again, based on location. You can download the JavaScript file I wrote that encapsulates this functionality here: NWSRadar.js.

The following is the basic usage. As usual, place this in the head of your HTML:

<script type="text/javascript" src="NWSRadar.js"></script>

And then instantiate the object like this:

var radar = new NWSRadar("PUX", 10, 10);

The constructor signature is like this:

function NWSRadar(radarid, left, top, width, height)

The only required parameter is radarid. Because this uses divs, I needed to explicitly specify where to place the image on the screen, so we are using absolute positioning. This also makes it so the width and height parameters work correctly. If you want the radar image to be in a table element or some other flow-type layout, simply wrap the script in a div whose style is set to relative positioning.

In case it’s not clear, here’s what each of the constructor parameters represent:

  • radarid: This is the ID for the location that is used by the National Weather Service. Go to the site and find the location you need. For my example above, I used “PUX”, which is Pueblo, Colorado. To find the radar ID you need, go to the NWS website and enter your city and state in the “Local Forecast by City, St” box in the upper left-hand corner. Then click on the leftmost image under the heading Radar and Satellite Images. On the ensuing page, you will see the id field in the URL. This is your radarid.
  • left: The absolute position of the left side of the image.
  • top: The absolute position of the top side of the image.
  • width: The width of the image. Defaults to 600px, which is the NWS default size.
  • height: The height of the image. Defaults to 550px, which is the NWS default size.

Once your object has been instantiated, simply call render() like this:

radar.render();

And here is what it should look like:

[Screenshot: the rendered NWS radar image for Pueblo, CO]

And here is a full HTML test page:

<html>
<head>
<title>NWS Weather Radar for Pueblo, CO</title>
<script type="text/javascript" src="NWSRadar.js"></script>
</head>
<body>
<script type="text/javascript">
var radar = new NWSRadar("PUX", 10, 10);
radar.render();
</script>
</body>
</html>

8051 Microcontroller Programming

8-bit MCUs continue to be popular for embedded systems because they are fairly simple to understand and write code for. I have been pursuing embedded programming as a hobby for several years now and have found the obstacles to understanding pretty difficult to overcome. In the course of the past year or so, though, I have begun taking steps to figure things out. My first step was to actually purchase a microcontroller development board along with some books. I ended up purchasing a dev board from Silicon Laboratories. The actual development kit I bought is the C8051F020DK. It provides everything you need to get started with microcontroller development. What it doesn’t include, though, is a quick way for programmers to learn electronics. This continues to be my biggest challenge. Here, however, is what I’ve learned so far for those of you who are Windows programmers but would like to venture into the embedded world.

Interrupts Are Like Events
As programmers we are used to responding to events constantly. An event handler on a button or a list control gets triggered when a user does something such as clicking the button or changing the selection in the list.

Things are similar in embedded programming. While embedded systems don’t have the layer upon layer of abstraction we’re used to, they do provide the tools you need to do what is necessary in the embedded world. Interrupts are a special abstraction that gets triggered by something like a timer overflow or a button press (a physical button on the dev board, for example). Timers are a special peripheral of these chips that count up and then trigger an interrupt when the count overflows. When that happens, we can figure out how much time actually elapsed and create more meaningful delays in order to achieve what we want.

While flashing an LED (Light Emitting Diode) is not terribly exciting, it is something that can be handled very easily in an embedded system using interrupts. You simply set the timer running and attach an interrupt function (similar to a callback, really), and when the timer overflows, your function gets called. It is at that point that the LED gets toggled on or off to make it flash.
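
Here is roughly what that looks like in Keil-style C. This is a generic 8051 sketch: the pin assignment and the 12 MHz clock are just illustrative, and the SiLabs part needs extra device setup (watchdog, oscillator, crossbar) that I’ve left out.

[c]
#include <reg51.h>            /* generic 8051 SFR definitions (Keil C51) */

sbit LED = P1^6;              /* assumption: LED wired to port 1, pin 6 */

/* Timer 0 overflow interrupt: fires each time TH0:TL0 rolls over. */
void Timer0_ISR(void) interrupt 1
{
    static unsigned int overflows = 0;

    TH0 = 0x3C;               /* reload for a 50 ms overflow at 12 MHz */
    TL0 = 0xB0;

    if (++overflows >= 10)    /* 10 overflows is roughly half a second */
    {
        overflows = 0;
        LED = !LED;           /* toggle the pin to make the LED flash */
    }
}

void main(void)
{
    TMOD = 0x01;              /* Timer 0 in 16-bit mode */
    TH0 = 0x3C;
    TL0 = 0xB0;
    ET0 = 1;                  /* enable the Timer 0 interrupt */
    EA = 1;                   /* global interrupt enable */
    TR0 = 1;                  /* start the timer */

    while (1)
        ;                     /* everything happens in the ISR */
}
[/c]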

Ports Turn Stuff On and Off
On the development board, I have 64 port pins that I can manage in code to turn things on and off. The C code to do so is very simple. First, we have to define the memory location of the pin we’re interested in. Basically, on my system there are 8 ports with 8 pins each. Only the lower-numbered ports 0-3 are bit addressable, which means their pins can be turned on and off independently of the rest of the pins on the given port. Byte-addressable-only ports require that you maintain state and use bit shifting and/or masking to make the changes you need.
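
To make that concrete, here is a small sketch. The P4 register name comes from the SiLabs c8051f020.h header, the pin choices are arbitrary, and device initialization is again omitted.

[c]
#include <c8051f020.h>        /* SiLabs header; defines P0-P7 and friends */

sbit GREEN_LED = P1^6;        /* ports 0-3 are bit addressable */

void toggle_pins(void)
{
    static unsigned char p4Shadow = 0;

    GREEN_LED = 1;            /* drive a single pin directly */

    /* Ports 4-7 are byte addressable only: keep a shadow copy of the
       port state, mask in the changes, and write the whole byte back. */
    p4Shadow |= (1 << 2);     /* turn pin 2 on */
    p4Shadow &= ~(1 << 5);    /* turn pin 5 off */
    P4 = p4Shadow;
}
[/c]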

Hooking It Up
I’ve hooked up my board to an “Electronics Learning Lab” from Radio Shack. You really don’t need to get one of those, but I had one and decided to use it. My first project was connecting it to a 7-segment display on the learning lab. For each of the LEDs in the 7 segment display, I connected a port pin from port 1. Below are the pinouts for the peripheral connections. I used Port 1, which is connected to pins C4, B4, A4, C3, B3, A3, C2, and B2. See the frame below for the complete pinout.

You can see the way the seven segment display is connected in the following illustration.

From Vast Resources to Few
Being a Windows programmer makes learning embedded systems a bit of a challenge. By specification, the 8051 only allows you a maximum of 64KB of program code. If your program is bigger than that, you are either out of luck or you have to use some embedded guru’s black magic that I am, clearly, not yet familiar with. This is a far cry from the amount of programming (and data for that matter) memory space we have at our disposal as Windows programmers.

As strange as it may seem, I really like the challenges that come with learning embedded systems. I think it makes you more likely to consider optimization in other types of programming which can be a good thing. As soon as I think of a killer project to create, I will prototype it, market it and sell the idea to highest bidder. Until then, I will just continue to fiddle around with LEDs and, well, LEDs. Does anybody have some ideas for a killer widget that is primarily LED related?

A Pluggable Framework

I’ve been wanting to blog about this for a while, but haven’t because it seemed it might be time-consuming to explain my thoughts. I’m going to dip into it a bit here and then follow up as I have time.

The company I work for provides data analysis for retail stores (read: grocery) found throughout the country. The companies we work with are fairly small, which is to say chains like Albertsons, King Soopers, Safeway, etc. are much larger than what we handle. We would like to see that change, and I think it will in the future; currently, however, the largest chain we work with has fewer than 30 stores total.

Part of being able to use the same data analysis tools for all of these different companies has meant being able to take transaction data (data from scans at the cash register stored in log files) and convert it to a format that can be easily loaded into the database schema we use when running our analysis tools.

I was tasked with writing the extraction processes for each of the different Point of Sale (POS) systems that we support and converting that data from raw log files to the format (XML) that can be loaded into our database schema.

As you might have guessed from the title of this post, I use a pluggable framework that allows me to select which ‘extractor’ to use at run time based upon the type of POS system the store is using. Keep in mind that we need to support numerous POS systems, since these smaller companies use a broader range of systems than what you might find at the large corporate chains I mentioned earlier. Also, though it doesn’t happen much, there may even be cases where a single company or even a single site (store) has a mix of POS systems in use. That’s why it’s important to be able to accommodate multiple types of POS systems.

I am using reflection for my pluggable framework. I’ll get to the details of that in a minute, but first let me talk about how to organize pluggable components.

Namespace Organization
In order to adhere to good programming practices, I created an abstract base class that contains some common functionality that all transaction log extraction modules will benefit from. I placed that class in its own assembly, though, which allows me to create a new extractor any time I need to without having to have a reference to the other modules. Organized into namespaces, it looks like this:


CompanyName.ApplicationName.Extractors
CompanyName.ApplicationName.Extractors.SuperPOS
CompanyName.ApplicationName.Extractors.HyperPOS
CompanyName.ApplicationName.Extractors.WonderPOS

Each of these is in its own project, and therefore each compiles to its own assembly. Each of the specific extractors has a reference to the CompanyName.ApplicationName.Extractors namespace and inherits from a class in there called “ExtractorBase”. Keep in mind that all of my namespaces here are bogus. In the real application they have their actual names, but I didn’t really want to give away anything here about the real POS systems we support.
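
To give a feel for the shape of it, here is a stripped-down sketch of the arrangement. The Execute signatures are illustrative (the real base class carries more than this), and the names are the bogus ones above.

[csharp]
using System;

namespace CompanyName.ApplicationName.Extractors
{
    // Common functionality shared by every transaction log extractor.
    public abstract class ExtractorBase
    {
        // Illustrative entry points; the real interface provides
        // several Execute overloads (more on that below).
        public abstract void Execute(string logFilePath);
        public abstract void Execute(string logFilePath, DateTime businessDate);
    }
}

namespace CompanyName.ApplicationName.Extractors.SuperPOS
{
    // A POS-specific extractor lives in its own project/assembly and
    // only needs a reference to the base Extractors assembly.
    public class SuperPOSExtractor : CompanyName.ApplicationName.Extractors.ExtractorBase
    {
        public override void Execute(string logFilePath)
        {
            // Parse the raw SuperPOS log and emit our XML load format...
        }

        public override void Execute(string logFilePath, DateTime businessDate)
        {
            // Same, restricted to a single business date...
        }
    }
}
[/csharp]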

The reason I’ve done things this way is that I want to be able to create new projects in the future that adhere to the interface (abstract class) without requiring me to recompile anything but the new assembly. So, say in the future I decide to add a new extractor with the namespace:

CompanyName.ApplicationName.Extractors.CoolPOS

I can simply create a new project that references CompanyName.ApplicationName.Extractors, inherit from CompanyName.ApplicationName.Extractors.ExtractorBase, and implement the interface specified. Then, once I’ve compiled the assembly, I can move CompanyName.ApplicationName.Extractors.CoolPOS.dll into my extractors directory, and the existing system will be able to load the new assembly using reflection based on some parameters I will add to an XML configuration file.

Reflections Of the Way Life Used to Be
Do you remember life before we had reflection? RTTI (RunTime Type Identification) was the closest thing I recall in C++. These days I can’t imagine how I could have done anything like this so easily without it. In the code that actually calls each extractor, I use reflection to find and invoke the method. As part of my interface (abstract class), I’ve provided a single entry point that has multiple implementations (i.e., polymorphism). The method is called Execute, and it provides multiple signatures in case some parameters are not needed or desired. The method invocation code is able to determine, based on the number of parameters passed in, which Execute overload should be called.

The method invoker uses strings that specify the full name of the type that is currently needed and is able to load the proper assembly. It then calls the Execute method on the extractor module found in that assembly. The code looks something like this:
[csharp]
string extractorAssemblyName =
    configManager.GetValue( "ApplicationPath" ) +
    "CompanyName.ApplicationName.Extractors." +
    this.extractorName + ".dll";

string extractorObjectName =
    "CompanyName.ApplicationName.Extractors." +
    this.extractorName + "." +
    this.extractorClassName;

DynamicMethodInvoker invoker = new DynamicMethodInvoker(
    extractorAssemblyName,
    extractorObjectName,
    "Execute" );
[/csharp]
The strings extractorName and extractorClassName are populated from an XML configuration file. In the case of the new POS system I referred to above as CompanyName.ApplicationName.Extractors.CoolPOS, the string extractorAssemblyName would contain “C:\path_to_application\CompanyName.ApplicationName.Extractors.CoolPOS.dll” and extractorObjectName would contain “CompanyName.ApplicationName.Extractors.CoolPOS.CoolPOSExtractor” (which is the actual name of the class we are going to use for this POS).
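
The configuration entries themselves look something like this. The element and attribute names here are illustrative, not the real file:

[xml]
<extractors applicationPath="C:\path_to_application\">
  <extractor name="CoolPOS" class="CoolPOSExtractor" />
  <extractor name="SuperPOS" class="SuperPOSExtractor" />
</extractors>
[/xml]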

All I need to do at this point is pass in the parameters that the Execute() method is expecting and call Invoke() like this:
[csharp]
invoker.AddParameter( param1 );
invoker.AddParameter( param2 );
invoker.AddParameter( param3 );
invoker.Invoke();
[/csharp]
And my Execute method for the new extractor, CompanyName.ApplicationName.Extractors.CoolPOS is now running without having to recompile my entire calling application.

At this point, I can’t remember where I got the DynamicMethodInvoker class (I believe it was off of MSDN, but heck if I can find a reference to it now), but if you are interested in taking a look at it and using it, just let me know and I’ll forward it on to you. DynamicMethodInvoker provides a simple interface for calling methods and examining reflected types and methods. It makes things a little simpler to implement.
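
In the meantime, here is a minimal sketch of what such an invoker might look like. The class and method names match the usage above, but the body is my own reconstruction, not the original MSDN code:

[csharp]
using System;
using System.Collections.Generic;
using System.Reflection;

public class DynamicMethodInvoker
{
    private readonly string assemblyPath;
    private readonly string typeName;
    private readonly string methodName;
    private readonly List<object> parameters = new List<object>();

    public DynamicMethodInvoker(string assemblyPath, string typeName,
                                string methodName)
    {
        this.assemblyPath = assemblyPath;
        this.typeName = typeName;
        this.methodName = methodName;
    }

    public void AddParameter(object value)
    {
        parameters.Add(value);
    }

    public object Invoke()
    {
        // Load the extractor assembly and locate the requested type.
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        Type type = assembly.GetType(typeName, true);

        // Pick the Execute overload whose parameters match what the
        // caller added (assumes exact runtime types and no nulls).
        Type[] argTypes = new Type[parameters.Count];
        for (int i = 0; i < parameters.Count; i++)
        {
            argTypes[i] = parameters[i].GetType();
        }

        MethodInfo method = type.GetMethod(methodName, argTypes);
        if (method == null)
        {
            throw new MissingMethodException(typeName, methodName);
        }

        // Instantiate the extractor and call the resolved overload.
        object instance = Activator.CreateInstance(type);
        return method.Invoke(instance, parameters.ToArray());
    }
}
[/csharp]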

Conclusion
I haven’t yet done any profiling to see if things would be much faster if I didn’t use reflection (i.e., if I just created a hard-coded class hierarchy); however, the system does seem to work well and fast enough to accomplish what we need. I had thought about using a provider model, but I realized later that the provider model is really intended for creating an abstraction in case you need to change the underlying system at a later date. A data provider, for instance, provides access to different databases based upon a specific implementation of a provider interface. If you ever needed to switch from a SQL Server database to a MySQL database, for example, you would simply need to implement the provider interface for the MySQL database and then specify in a config file that you want to use that provider instead.

The approach I’ve outlined here works similarly; in contrast to the provider model, however, this pluggable framework is intended to swap out the extractor in use during a single session, whereas the provider model is simply a design choice you make in order to support a different underlying data store should you need one in the future.

Well, that’s it for now. I will revisit this again as more comes to me. Meanwhile, good luck and have fun making pluggable frameworks with reflection.

Digg Comment System is Not Retarded

…I am! Ok, so I wrote a post yesterday called “Digg Comment System is Retarded” in which I was complaining about the *new* way story comments are displayed. A short time after the post was submitted to Digg, one user pointed out the following:

you know you do have the option of viewing the comments in differing formats:

sort by most diggs
sort by date (show all)
sort by date (-10 diggs or higher)
sort by date (-4 diggs or higher)
sort by date (+0 diggs or higher)
sort by date (+5 diggs or higher)
sort by date (+10 diggs or higher)

it’s a drop down box right above the first comments number of diggs
sounds like this would solve most of your problems with digg’s comment system

I’m feeling a little sheepish about the whole thing now. In my defense, Digg adds features regularly, and after a fairly recent addition the comment view defaulted to the “sort by most diggs” selection, which is what I was seeing every time I viewed story comments. I believed that this was just a feature change I had to live with, and it was really getting under my skin how illogical it was.

Frankly, I can’t imagine why anyone would want to view the comments this way except to verify that comments from more liberal/irreligious posts get dugg up and conservative/religious posts get dugg down, but the Digg programmers are pretty sharp and I’m sure they’ve got some reason for it. So really, I just need to say: hey folks at Digg, I apologize for the “Retarded” comment. I see you’re really just trying to make it accommodate as many features as possible. You all do great work and I’m sorry for doubting you.

Now, with all that being said, I still have to take issue with one of the items I named in my original post:

Posts are not rated based on whether they are good or well thought out, but rather on whether or not the person rating the comment agrees with what was said.

I acknowledge that this doesn’t take issue with the comment system, but rather with the commenters. And I do take issue with the fact that people do this. I doubt my mentioning it will cause people to change, but I do remember a time when we used terms like “good netizen”–meaning someone who is responsible on the net (tubes)–and there were whole documents on “netiquette”–that is, net etiquette. In case you’ve forgotten, etiquette means (and yes, it’s a dictionary definition) “conventional requirements as to social behavior; proprieties of conduct as established in any class or community or for any occasion.”

Having proper netiquette in the context of the Digg comments section, I think, would mean digging comments up or down based upon whether they demonstrate well-reasoned points and whether they are written with temperate language, rather than whether or not you agree. If I disagree with someone but they’ve made their arguments well and with respect, I digg them up. There is so much juvenile name calling (yes, I realize I used the word Retarded to describe the comment system–nobody’s perfect) that it makes it a place not of enlightenment and discovery, but rather a place of vitriol and anger. It’s completely counterproductive.

In hindsight, it’s not the whole system, just the “sort by most diggs” feature that is umm, not so great (retarded is probably too strong), but at least I have choices. I am now viewing the comments using “sort by date (show all)”.

Digg Comment System is Retarded

Since its inception, every aspect of Digg has gone through refinements, the vast majority of which have been good. This latest version of the comment system, however, is the most retarded thing I have ever seen. Usually comment systems are either what you call “flat”–meaning the comments are just posted in ascending order according to the date when each comment was posted–or “threaded”–meaning when someone responds to a comment, their comment is placed indented right below the comment to which they were responding.

The folks at Digg have tried to keep some control over the way people post by limiting replies to top-level comments only (i.e., avoiding infinite thread cascading)–which was good, but now it appears that posts are listed on the page only according to the number of diggs–positive or negative–a post may get. Here’s what this has amounted to:

  • Replies to posts are scattered about and you never know who the person was responding to without doing a page search to find similar keywords.
  • Posts are not rated based on whether they are good or well thought out, but rather on whether or not the person rating the comment agrees with what was said.
  • Often the top rated comments are just jokes that barely relate to the topic at hand.
  • When the posts are of a controversial nature (e.g. religious, political, etc.), clearly the majority of “diggers” are non-religious and left wing. If you don’t believe me, just look through any of the political stories at Digg and you’ll see that if you’re a conservative, you just need to file to the bottom of the page, and if you’re liberal, you will get promoted to the top of the page.

I suppose that you can never create a system that will make everyone happy, but this current version is an illogical, atrocious mess that really needs to be rethought. I’m not sure what the underlying issues were with the previous iteration of the system, but this is definitely a step backwards.

FizzBuzz Is For ‘Real’ Programmers Too

In his follow-up to his post on Coding Horror about the FizzBuzz problem (Why Can’t Programmers… Program?, February 26, 2007), Jeff Atwood says:

It certainly wasn’t my intention, but a large portion of the audience interpreted FizzBuzz as a challenge. I suppose it’s like walking into Guitar Center and yelling ‘most guitarists can’t play Stairway to Heaven!’* You might be shooting for a rational discussion of Stairway to Heaven as a way to measure minimum levels of guitar competence.

And then a paragraph or so later he says:

The whole point of the original article was to think about why we have to ask people to write FizzBuzz. The mechanical part of writing and solving FizzBuzz, however cleverly, is irrelevant. Any programmer who cares enough to read programming blogs is already far beyond such a simple problem. FizzBuzz isn’t meant for us. It’s the ones we can’t reach– the programmers who don’t read anything– that we’re forced to give the FizzBuzz test to.

What I believe Jeff has discovered is that programmers, all programmers (especially the ones who read programming blogs) are insecure about their ability to write code.

Programmers, when we first come into contact with a rudimentary problem that might call our competency into question, need to make sure we agree that the problem is in fact rudimentary, because if we are not able to solve it, there can only be two possible conclusions: either the problem is not rudimentary after all, or we are not competent.

As I see it, upon encountering this post, most programmers went through four phases of competency validation as follows:

  1. Am I able to solve the problem at all?
  2. As a ‘senior’ programmer, am I able to solve it in under 10-15 minutes (remembering that a quote from Imran in the original post stated, “I’ve also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution.”)
  3. Now that I have solved it under that time, how long did it actually take me? In other words, where do I rank among true programmers?
  4. Now that I know I can do it in under 3 minutes, I must create the most cleverly written one-liner ever, ever!
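
For anyone who hasn’t run across it, the problem is: print the numbers 1 to 100, but print “Fizz” for multiples of three, “Buzz” for multiples of five, and “FizzBuzz” for multiples of both. For the record, a plain, no-cleverness C# version looks like this:

[csharp]
using System;

class FizzBuzz
{
    static void Main()
    {
        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0)        // multiple of both 3 and 5
                Console.WriteLine("FizzBuzz");
            else if (i % 3 == 0)
                Console.WriteLine("Fizz");
            else if (i % 5 == 0)
                Console.WriteLine("Buzz");
            else
                Console.WriteLine(i);
        }
    }
}
[/csharp]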

In my mind, when Jeff says,

And instead of writing FizzBuzz code, they should be thinking about ways to prevent us from needing FizzBuzz code in the first place.

he’s missing what actually happened. When programmers saw the post, they said, “I’ve never had to solve that problem before and if some senior level (self-proclaimed or otherwise) programmers took 10-15 minutes to solve it, I had better check to see that I can do it in less time (or at all for that matter)”.

Don’t you see what you’ve done, Jeff? Every programmer out there who considers himself competent just went and took a test to make sure he really is. Imagine someone who should know how to solve the problem, who currently holds a programming job, and who just failed the test miserably. Maybe he will come to his senses and stop wasting his company’s time and his own life. If this is truly a problem–that people get into positions because nobody asked them the FizzBuzz question when they were being interviewed–then the FizzBuzz post has just shown them the truth, and maybe they will now ‘see the light’ and finally stop making their fellow programmers look bad and go away.

Then again, they probably won’t, but at least now they know. They can’t deceive themselves any longer. They are not competent. The jury is no longer out for them.

ASP .NET 2.0 TreeView Strategy

The statelessness of the web continues to challenge every web developer, whether you’re a newbie or a seasoned professional. Wherever I am in the development experience landscape, I am no exception.

While working with an ASP .NET 2.0 TreeView for the first time, I was hopeful that there might be some built-in capabilities that would make my experience similar to what you might expect in the WinForms TreeView.

When working with the TreeView in WinForms, I simply create a derived TreeNode object for every type of entity I expect to represent in the TreeView. That way, when I get a SelectedNodeChanged event, I can simply see what the node type is and respond accordingly. So I set up the event handler in my ASP .NET TreeView, went to look at the signature, and realized that the only thing I was given was the sender (the TreeView itself) and a generic EventArgs object. This is different from what you get in the WinForms TreeView, which actually provides the selected node object in the event args. Well, I reasoned, just because I’m not given the node object doesn’t mean that I can’t just grab it out of the TreeView’s SelectedNode property.

Now, keep in mind that I had created a base TreeNode type that all of my TreeNodes could inherit from, but to be safe I just typecast to the TreeNode type when I accessed the selected node in the TreeView.
Here is what I did:
[csharp]TreeNode selectedNode = ((TreeView)sender).SelectedNode;[/csharp]
I then used the GetType() method to determine whether I was getting back the type I was expecting. And herein lies the problem. As I moused over selectedNode while debugging, the type it was showing was TreeNode. No matter which of my inherited node types I had added to the TreeView on page load, the SelectedNode property always returned a TreeNode type.

At this point, I realized that the statelessness of the web had gotten me again. I should have seen this coming. The TreeView only knows how to make generic TreeNodes, and since the tree has to be rebuilt on each successive trip to the server, it doesn’t know how to make my extended tree nodes.

In thinking it through a bit more, and with a bit of help from some guys on the CodeProject.com message boards, I realized that the only way to do what I wanted was to create my own derived TreeView and create a way to store the types I needed in the ViewState. Oy! This way of handling a TreeNode click was starting to seem much more complicated than it needed to be.

I decided to head back to the drawing board or, actually, the MSDN documentation and see what all the TreeView had to offer me. What it all boils down to are three different properties I can access. They are:

TreeNode.Text: This is the text that gets displayed for the particular node when rendered.

TreeNode.Value: This is the value of the node which has to be unique at any given hierarchy depth.

TreeNode.ValuePath: This is the full value path to the current node.

These properties make it pretty clear that the way to handle TreeNodes in ASP .NET 2.0 is to use the ValuePath to determine where the selected node lies in the hierarchy. The Text property is only useful in that it will display something to the user. The Value property is useful in that it will allow you to store an identifier for the current node. It’s the ValuePath property, however, that allows you to parse out everything you need to respond accordingly.
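
For example (the node names here are made up), a node three levels deep carries a ValuePath like the one below, which you can break apart using the TreeView’s PathSeparator:

[csharp]
// Hypothetical node values of "Stores" -> "Store42" -> "Produce" yield a
// ValuePath of "Stores/Store42/Produce" (the PathSeparator, '/' by
// default, joins the Value of each ancestor node).
string[] levels = node.ValuePath.Split(tvMain.PathSeparator);
// levels[0] == "Stores", levels[1] == "Store42", levels[2] == "Produce"
[/csharp]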

Now that I knew I needed to determine a good way to parse the ValuePath, I stumbled upon a new problem. In the event handler for the node change, I was simply building a URL to redirect to and then calling Response.Redirect(). The problem is that if you want to simply redirect to the current page and display a View in the page (I was using a MultiView, which is another powerful control available in ASP .NET 2.0) based on parameters you’ve passed in the URL, you’re going to have problems with the TreeView not retaining its display. The TreeView is expecting that you will simply post back to the server so that it can retain its layout in the ViewState. In my case, though, when I clicked the TreeNode, it was doing a Response.Redirect which, when the new page loads, means the request is not a PostBack, which simply means that the TreeView will re-render in its default state (determined by the ExpandDepth property that is set in the designer or code views).

What this means is that each time I clicked a node in the TreeView, when the page loaded again, my TreeView would be collapsed back to its default expansion depth, which I had set to 1. So, how could I ensure that when the page loads, my TreeView is expanded properly?

I decided to take the ValuePath of the selected node and append it to the URL as a parameter as well and then use that on the other side to expand the TreeView to the same state as it was before the node was clicked. Of course, first you have to UrlEncode the ValuePath like this:

string encodedValuePath = HttpUtility.UrlEncode(valuePath);

And then you can add it to the URL you are redirecting to, but here is how I handle expanding the nodes properly when the page is loaded again:
[csharp]if (Request.Params["ValPath"] != null)
{
    // Decode the ValPath parameter
    string valPath = HttpUtility.UrlDecode(Request.Params["ValPath"]);

    // Find the node we want to select according to the ValuePath
    TreeNode node = tvMain.FindNode(valPath);

    if (node != null)
    {
        // If we were able to find the node, we expand it and set its
        // selected property.
        node.Expand();
        node.Select();

        // Now we simply walk up the TreeNode hierarchy, expanding each
        // parent node above us to ensure that the TreeView will display
        // properly.
        TreeNode tmpNode = node;
        while (tmpNode.Parent != null)
        {
            tmpNode.Expand();
            tmpNode = tmpNode.Parent;
        }
    }
}[/csharp]
I had originally thought that simply selecting and expanding the selected TreeNode would be enough, but it didn’t actually work correctly until I did things this way.

You may be wondering at this point why I don’t simply use the NavigateUrl property on each node when I initially populate the TreeView. The issue is that the ValuePath property gets set at a later time and is not available when you first instantiate a TreeNode. If I tried to append the ValuePath to the NavigateUrl property at the time when the TreeNode was created, it would have thrown a null reference exception. On the other hand, when you just redirect in the node changed event handler as I was doing, you can grab the ValuePath of the currently selected node without problem.

Ok, so now that was working correctly. There was only one more thing I needed to do. I had to have a good way to parse the ValuePath property when the node changed event was fired. For this, I implemented the Strategy Pattern and created a Strategy Factory and ValuePathStrategy objects for each of the TreeNode entities. Basically, my Strategy Factory (which is a Singleton) takes the ValuePath as a parameter and determines from the first or second level in the ValuePath which type of strategy object to create and passes that object back to the caller. The caller can then simply access the Url property on the ValuePathStrategy object and redirect accordingly. Here is what the code basically looks like:
[csharp]protected void tvMain_SelectedNodeChanged(object sender, EventArgs e)
{
    TreeView view = (TreeView)sender;
    TreeNode node = view.SelectedNode;
    if (node != null)
    {
        ValuePathStrategyBase vbase = ValuePathStrategyFactory.
            Instance.GetStrategy(node.ValuePath, view.PathSeparator);
        if (vbase != null)
        {
            Response.Redirect("~/Default.aspx?" + vbase.Url, true);
        }
    }
}[/csharp]
Notice I’ve also passed in the PathSeparator so each level of the path can be easily tokenized with the String.Split method. Each of what would originally have been extended tree nodes is now a strategy object that provides different URL parameters based on what will be needed when the page loads: the view, which specifies which view in my MultiView to display; the name of the parameter that view needs in order to load its data; and the parameter value itself, which, in the case of my application, gets passed to an ObjectDataSource that has a table adapter for the corresponding data in the database.
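
Here is a simplified sketch of the strategy side of things. The concrete class name, node values, and URL parameters are illustrative; the real objects carry more than this:

[csharp]
// Base strategy: each concrete strategy knows how to build the URL
// parameters for its corresponding view.
public abstract class ValuePathStrategyBase
{
    public abstract string Url { get; }
}

// One concrete strategy per TreeNode entity type (names are made up).
public class StoreValuePathStrategy : ValuePathStrategyBase
{
    private readonly string storeId;

    public StoreValuePathStrategy(string storeId)
    {
        this.storeId = storeId;
    }

    public override string Url
    {
        get { return "View=StoreView&StoreId=" + storeId; }
    }
}

public sealed class ValuePathStrategyFactory
{
    public static readonly ValuePathStrategyFactory Instance =
        new ValuePathStrategyFactory();

    private ValuePathStrategyFactory() { }

    // The first token or two of the ValuePath picks the strategy.
    public ValuePathStrategyBase GetStrategy(string valuePath, char separator)
    {
        string[] tokens = valuePath.Split(separator);

        if (tokens.Length > 1 && tokens[0] == "Stores")
        {
            return new StoreValuePathStrategy(tokens[1]);
        }

        // ... other entity types map to other strategies ...
        return null;
    }
}
[/csharp]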

I’m not sure that my methods for handling my TreeView are the best; however, I’m pretty happy with the outcome. I’ve been able to keep everything relatively clean code-wise, and the way I’ve done things seems to uphold the rules of designing an application for the web as opposed to WinForms.

Web Programming: A Hacker’s Dream?

So I recently came to realize that a web application I had developed had a security hole: someone had hacked the page with a SQL injection attack. At first I thought it was the underlying CMS (Content Management System) framework that provides the basis for the web application that was causing the problem, but on a whim I decided to go back in and check my own code just to make sure. I realized that while it may not have been my code that was used to mount the attack, it was quite possible that it had been. Basically, I had unchecked inputs, so a URL like this:

http://website/modules/mymodule.php?id=100

Could be turned into something like this:

http://website/modules/mymodule.php?id=100;DELETE%20FROM%20users

which would simply delete all of the users from the database. Yikes!
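
The fix is as simple as the hole: never concatenate request input into SQL; validate it and bind it as a parameter. My module was PHP, but the principle is the same everywhere. In C# with ADO.NET it looks something like this (the table and column names are made up):

[csharp]
using System;
using System.Data.SqlClient;

// idText comes straight from the query string. Validating it as an
// integer rejects "100;DELETE FROM users" outright, and parameter
// binding keeps anything that gets through from running as SQL.
public static string GetModuleName(string connectionString, string idText)
{
    int id;
    if (!int.TryParse(idText, out id))
        throw new ArgumentException("id must be an integer");

    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "SELECT name FROM modules WHERE id = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", id);
        conn.Open();
        return (string)cmd.ExecuteScalar();
    }
}
[/csharp]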

As a seasoned programmer I should know better, so the only excuse I can think of is how long ago I wrote the code. I guess I had never done the security hardening on the application that I know I should have done. Fortunately, this lesson didn’t have terribly dire consequences; however, it has made me wake up to the realities a bit.

Maybe it has already been done, but it seems to me that someone with malicious intent and a basic knowledge of web crawling could write a simple script to just run amok on the Internet, cruising from site to site looking for vulnerabilities. Obviously companies and even individuals feel that they cannot be without a web site; however, building a site and having it accessible to the world should give us programmers pause. We need to do one of two things: a) write a killer EULA (End-User License Agreement) like Microsoft does that protects us from any culpability, which lowers a customer’s confidence in our work, or b) write such rock-solid code as to ensure that these vulnerabilities are not so easily found and exploited. Maybe the real solution is somewhere in between the two. Anyhow, this experience has helped me get better focus. I hope it will encourage you to get focused as well.

Lazy coding practices are the stuff hackers’ dreams are made of.

For an in depth introduction to SQL Injection Attacks, read this article.

Don’t Let Them Undervalue Your Work

In spite of many programming jobs being sent overseas, it remains relatively painless to get work as a programmer in the United States, even though a lot of it is in the context of freelancing.

Over the years different sites have come (and some gone) that provide a method for connecting companies who need programmers with the programmers who have the necessary skills.

I have never personally gotten work through these services, but I have friends who have and still do, and they find them to work relatively well. I do, however, have my email address on a list with Guru.com, from which I receive daily updates of new freelance job listings on the site. When I get the messages, I just quickly glance through the list to see the type of work people are interested in. This, I think, helps me keep my thumb on the pulse of the market–what types of skills I need to stay up on, etc. The problem I’ve seen, though, is that what employers are looking for more than anything is a free lunch, and here is why I say that.

Just today I received an email listing a job, and toward the end of the message describing the job were the following seemingly innocuous words:

“This should be pretty simple for an expert.”

I say seemingly innocuous because I think it has a much deeper meaning, and I take issue with its very sentiment. If I were to paraphrase, I would interpret the words this way: “I don’t know how to do this, but I believe that if there is an expert out there, she/he should be able to handle this with ease. And since it’s easy for you, it shouldn’t cost me much.” I’ve seen others with the same sentiment. They all say something to the effect of, “This shouldn’t take someone who knows what they’re doing very long.”

By this statement, the poster is admitting that they have no idea how to do what they want, yet in the same sentence they have somehow become an expert in knowing how long this thing they know nothing about will take to finish. That’s just crazy. Even the “experts” themselves often don’t know exactly how long something will take. It is clearly a way to say, “I need help, but I don’t want to pay for it.”

If you are a freelance developer and you find yourself responding to these types of posts on the freelance sites, stop trying to provide these people with bids. Read between the lines and see what they really want–your experience and knowledge for free. Did you get to where you are by someone just giving you a free lunch? No. You know what you know and are an expert because you worked hard to get there. You probably taught yourself or went to school, took the initiative, and deepened your understanding, and now you are willing to give that away for free? People should be paying for your skills, and those skills are not simply the ability to write code, but also the knowledge and experience of the particular field that they know nothing about. They save money by asking for your advice alone. That should be valued.

If they can find someone overseas with the same skills who will work for next to nothing, there’s nothing you can do about it, but don’t sell yourself short just because these people are not bright enough to see the value in paying for good people who have the ability to provide what will make their business successful.

Don’t misunderstand what I am saying here. I have no need to get freelance work through these sites and I don’t even try, so this is not an attempt to just get everyone to raise their standards so I’ll have a better chance getting the rate I want. I don’t even know what rates people are getting on average through these sites these days. What I am saying is stop giving away your hard earned abilities and knowledge (real “intellectual property”) for nothing. Charge what you want to be competitive. That’s fine. Just don’t let anyone convince you that your expertise should come cheap because they figure it’s easy for you.

And inasmuch as you are able, make sure that for any contract you take on, you explain that software development estimation is not an exact science and that you will be billing according to the hours you work. Don’t work for free. I see guys letting people take advantage of them all the time. Let your customer know from the start what kinds of time frames are possible, and make sure they understand that your work is valuable to them and that they wouldn’t want this kind of work done on the cheap.

When I’ve bid contracts, I normally figure out what I think it will take to complete the job. Then I double the hours, and sometimes add another 20% when there are unknown aspects to the job (so a 40-hour estimate becomes 80 hours, or 96 with the extra 20%). This may sound high to some people out there, but if you’ve ever had to eat some hours due to inaccurate estimation on your part, you’ll understand why. If you explain to your customer that you bill for the hours worked and assure them you will only be billing for those, they will have greater confidence that you’re not just jacking the price up on them for no reason. Being honest in business is the best policy, and the honesty needs to start up front with the estimate. If you have a bad gut feeling about an estimate you’ve given, it’s better to go back and renegotiate before starting than to get into it a ways and realize just how big a mistake your estimate was.

I’ve worked on jobs where, by the time it was all said and done, I was working for a ridiculously low rate for the kind of skills and knowledge I was providing. All that does is make you bitter and frustrated with the job, to where you wish you had never done it in the first place. Bid it correctly from the start and you’ll be better off in the long run. Even if it means you lose the contract, it’s a better way to go. Other contracts will come. The bottom line is that if you don’t value the work you do, no one else will either.