The managed object model version used to open the persistent store is incompatible with the one that was used to create the persistent store.

I’ve put the whole error message in the title of this post so that people searching the web can find the answer to this problem more easily. The first Core Data application I built after starting to learn how to develop applications for Mac OS X Leopard threw me for a loop because I kept getting this error message whenever I tried to run the application:
Core Data Error

The managed object model version used to open the persistent store is incompatible with the one that was used to create the persistent store.

The problem would not happen as long as I hadn’t made any Core Data binding connections in Interface Builder, so I was puzzled. I looked around on the web for a while and couldn’t find any answers, so I asked Marcus. He knew exactly what was wrong.

Apparently, Core Data applications create an XML file in which they store your application’s data:

~/Library/Application Support/ApplicationName/ApplicationName.xml

where ~ is your home directory and ApplicationName is the name of your actual application.

All you have to do is delete that XML file and rebuild your app. It will then work fine, since it recreates the store file from the data model currently in Xcode.

What happened was this: I had created a Core Data application, started experimenting, made a complete mess of it, and decided to scrap the whole project. I went into the Finder, deleted the project directory completely, and started over. When I created the new Core Data application, I used the exact same name as the first project. So when I went to build the application, it looked in:

~/Library/Application Support/ApplicationName/ApplicationName.xml

and found the file there, so it tried to use it. Meanwhile, my data model had changed in the new project. The application didn’t know what to do with the incompatible model, threw up its hands, and gave me this error. Anyhow, it’s an interesting lesson and a very helpful one. Thanks to Marcus for providing the answer. Meanwhile, take a look at this code, which is part of the default Core Data template:
[c]
- (NSString *)applicationSupportFolder {
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSApplicationSupportDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : NSTemporaryDirectory();
    return [basePath stringByAppendingPathComponent:@"ApplicationName"];
}
[/c]
This code returns the path to your application’s own folder inside Application Support.

Hope this helps.

Getting Started with Cocoa and Objective-C

Most of my programming posts on this site center on .NET development using C# on Windows. Well, it looks like that era in my programming career has come to a close. I bought a MacBook Pro back in June and have been gearing up to learn how to develop applications for the Mac. I’ve waited until now for several reasons, one of which is that the new Xcode 3.0 and Objective-C 2.0 just shipped with Mac OS X Leopard. As I have time, I am going to start documenting my experiences here to help others figure things out as well. I could be wrong, but with the introduction of Mac OS X Leopard, I believe the demand for Mac developers is going to rise dramatically. Time will tell.

Anyhow, while my general programming experience is very useful as far as the logic and flow of developing an application are concerned, the Apple way of doing things is different and takes some getting used to. From what I’ve seen so far, though, the Apple way is also very cool. I am fortunate to work with Marcus Zarra, who is an independent software developer for the Macintosh. He is helping me close some gaps in my own knowledge, so I’m going to journal the things I’m learning here from now on.

So, let’s get coding…

Extract DVD Audio in Mac OS X

Handbrake is a great application for ripping your DVDs for use on your laptop or your iPod, but I’ve recently found it to be the path of least resistance for extracting audio tracks from movies as well.

Every now and again it’s fun to grab an audio clip to use in my own media projects, but this often seems cumbersome. Recently I’ve started grabbing video clips with Handbrake for video projects, but I realized that sometimes all I want is the audio. Here’s how I did it.

Determine the number of the scene from which you want to extract the audio (go to the scene selections in the DVD menu in your DVD Player app and find it that way). Once you know the scene number, close DVD Player and open Handbrake. Select the scene range in the Chapters section, setting the same number for both selections so that it extracts just the one scene (chapter). Choose whatever other settings you want and then click Start.

Handbrake

Once it has finished extracting, open iMovie HD and drag the new clip from the Desktop (or wherever you saved it in Handbrake) into the clips section of iMovie HD (I’m still using ’06). This will begin importing the clip. Once it has been imported, select the clip in the clips section and choose File | Export…. In the resulting dialog box, select “Expert Settings” in the “Compress movie for:” drop down. Then click Share.

iMovie HD Export Dialog

Another dialog box will display. In the “Export:” drop down box, select “Sound to Wave”. Choose a location to save the file to and click Save.

iMovie HD Export Dialog Expert Settings

That’s it. It’s really simple if you have the right tools, and in this case all of the tools are free (well, at least included, considering iMovie came with your Mac).

Have fun.

Band ‘Mute Math’ Does Own Arrangement of Transformers Theme

Anyone who grew up playing with Transformers has got to be really excited about the upcoming release of the Transformers Movie from Michael Bay on July 4th. Well, the band Mute Math has gone so far as to do their own arrangement of the Transformers Theme.

You can listen to it here or get it at their site.

Update: I removed the link on my site since this appears to be the official Transformers Theme from the Original Motion Picture Soundtrack. I just thought it was an arrangement of the song they did for fun. Apparently not! Way to go Mutemath.

Digg API C# Library

I did a little search around the web for a C# library that encapsulates the Digg API and found this little gem written by Dan Atkinson. Thanks, Dan. This is a nice little library. It places the data it gets back from the Digg API calls into a DataSet, making it simple to use however you want. He also wrote a helper function that returns the raw XML, which you can simply drop into an XmlDocument object and query with XPath to get the nodes you want. From the post on his blog, it sounds like Dan might be extending the library.
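
For the XPath route, the idea looks roughly like this. Note that the XML below is just a made-up snippet loosely shaped like a stories response, and rawXml stands in for whatever Dan’s helper returns (I’m not reproducing his library’s actual method names here):
[csharp]
using System;
using System.Xml;

class DiggXmlExample
{
    static void Main()
    {
        // Placeholder for the raw XML returned by the library's helper;
        // the element names here are illustrative only.
        string rawXml =
            "<stories>" +
            "<story id=\"1\"><title>First story</title><diggs>120</diggs></story>" +
            "<story id=\"2\"><title>Second story</title><diggs>45</diggs></story>" +
            "</stories>";

        XmlDocument doc = new XmlDocument();
        doc.LoadXml(rawXml);

        // Pull out just the story titles with an XPath query.
        foreach (XmlNode title in doc.SelectNodes("/stories/story/title"))
        {
            Console.WriteLine(title.InnerText);
        }
    }
}
[/csharp]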

You can see Dan’s blog at http://www.dan-atkinson.com/blog/.

The Digg API allows you to collect all kinds of information about Digg stories, comments, users, diggs, and more. It’s quite interesting. I was thinking I might write a few apps that provide a little analysis of what kinds of comments get dugg the most in popular stories. I’ll write them when I get a little time. Meanwhile, thanks again, Dan, for the library.

Climate Change Scientists, Critical Thinkers?

It seems logical to me, and probably to most people, that scientists are going to be some of the most critical thinkers. That is also their claim, but when it comes to discussions about climate change I am a little confused about why so much critical thinking *seems* to be missing. I’ll explain what I mean.

I first have to admit that my knowledge of the science behind climate change is limited. I’m no expert. There are some interesting climate-related things happening in the world, so I can go along with the idea that maybe something is up. However, when it comes to proving that the changes are *all* due to the effect humans have on the planet, I become a big skeptic. I’m just not much into believing every alarmist and his impassioned story just because it plays to fears of global devastation (who doesn’t love a good Armageddon story?).

Ok, so here’s what I mean when I say that the critical thinking seems to be missing. I’ve read parts of the IPCC Fourth Assessment Report on Climate Change. What is so striking to me is the terminology used to express this group’s certainty that climate change is caused by humans (i.e. anthropogenic). Here’s what the report says in the footnote on page 4 of the Working Group Summary for Policymakers (PDF), just so we’re clear on terminology:

In this Summary for Policymakers, the following terms have been used to indicate the assessed likelihood, using expert judgement, of an outcome or a result: Virtually certain > 99% probability of occurrence, Extremely likely > 95%, Very likely > 90%, Likely > 66%, More likely than not > 50%, Unlikely < 33%, Very unlikely < 10%, Extremely unlikely < 5%. (See Box TS.1.1 for more details).

Now maybe I’m just the dummy here and everybody else gets it, but I don’t understand how one can have such a high level of certainty when their assessment is based simply on using expert judgment.

In a court case someone who has knowledge in a particular field will be called an “expert witness” and their testimony will often be accepted as fact because of their knowledge and experience in that field. Ok, so I suppose we should apply the same thing here. We should simply accept that the people who have decided these things are experts and their judgment should be considered fact. There’s only one problem with that. The field we’re talking about is so new and unknown we would be foolish not to question the validity of anyone’s *expert* testimony on either side of the debate.

For those of you who are upset at this point because you feel I just don’t get it, please enlighten me. I don’t want a pointless debate. I want to learn and understand. On what basis do we blindly accept the testimony of these experts? Not only that, how is it that they can arrive at such a specific percentage of certainty? From what I’m told, there are computer models out there that can demonstrate this certainty, but if that is true, then why don’t we see statements in the report that say “90% certainty based upon computer modeling and calculations”? Why do we just get, in essence, “90% certainty based upon some smart dude’s expert judgment”?

Does that sound like critical thinking to anyone? If you tell me to just blindly accept what these people say as truth, I’m going to have a hard time considering your position valid. It sounds like blind faith to me and as everyone knows, scientists cannot accept blind faith. They need critical evidence. Don’t they? Otherwise, isn’t this just a religious debate? If I said to an atheist, “I’m 90% sure that God exists because I’ve known him all my life”, he wouldn’t accept my testimony even though I’ve *demonstrated* that I’m an expert on God (e.g. I’ve known him all my life).

And while we’re talking about evidence, can someone explain to me how computer models are able to determine which molecules or particles in the atmosphere come from fossil fuels and which ones come from natural sources created by the earth itself? I understand that trends since 1750 suggest that it’s warmer now than it was then, but we don’t have much in the way of climate data prior to that date. Isn’t it possible that warming occurred at another time earlier in the earth’s history that was clearly not due to the industrial revolution?

Wouldn’t a stronger case be made if we could go up into the atmosphere and take measurements and be able to conclusively say “well, these particles over here are from Acme Manufacturing while these were caused by those gases coming from that volcano over there and there’s clearly more coming from Acme”? Do we have this kind of technology? If we do, then great. Let’s see the data from those tests.

Call me crazy, but I think the jury is still out on the actual certainty of whether climate change is anthropogenic. If you can show me some evidence beyond “the expert is pretty sure, 90% sure even,” then I’ll be glad to hear it. Meanwhile, let’s stop with the fear-mongering over something for which nobody seems to have absolute, unquestionable evidence. Shouldn’t we require at least a scientifically *provable* (as opposed to arguable) level of certainty before we go requiring countries to reduce emissions to some arbitrary standard that may or may not make a difference even if the problem is caused by us?

Digg Encourages Dishonest Reporting

If you are what appears to be Digg’s main demographic, then you are a geek, male (largely redundant considering the first descriptor), politically liberal, atheist (or at least agnostic or non-religious) and apparently you have no qualms about lying.

I saw this headline today on one of the stories:

Fox News: Could Cho have been possessed by the Devil?

We all know that if you fit the demographic above, then to you the real devil is Faux News (that play on words was pretty funny several Y E A R S ago, by the way), but writing a title that implies Fox News *itself* thinks Cho may have been possessed by the devil is just plain dishonest.

I understand that the editorial board at Digg is the community itself, blah blah blah, but what gets me is that there are all these people who claim to be on some higher plane of thinking and reason and yet still stoop to posting these stories with dishonest titles to somehow demonize every establishment they disagree with. If you were to follow the link and actually read the story, you would realize that Fox News is making no such claim. They are reporting on someone who has made such a claim. If it were an editorial, they would probably demonstrate that they think it’s just as ridiculous as you do.

So why stoop to lying? You are misleading your lemming disciples. Just read the comments for the story and you’ll realize that a great many of the people who commented have no clue what the story actually says. Simply because of the way the title was phrased, they just assumed that Fox News thinks Cho may have been possessed by the devil (cause somebody once told them that Faux News is the real Debil). Don’t you have any conscience about misleading the little followers to believe some farce simply because of the way you posted your story, ‘jimripper’? If you are so enlightened and above the fray in ‘thinking progress’, why stoop to such a level? You may hate Fox News (and who doesn’t? it’s *the* most popular American pastime, second only to hating the President), but how can you act like you are somehow on a higher plane? You are as bad a liar as, if not worse than, the people you’re pointing your finger at.

To the supposedly enlightened ones of Digg, here’s a suggestion: you might be able to convince a few people of your ideas if only you were honest. I know that in your world, the one where morality is relative, telling a *little white lie* is allowable and the end justifies the means on a regular basis, but really. Stop claiming you have some sort of moral superiority *because* you are an atheist or un-religious. You are not honest and you make no apologies for it. Where I come from, we call lying and misleading people WRONG, morally and otherwise. Your dishonesty begets contempt.

I’m not blaming Digg for anything here. Self regulating editorial is a very cool idea and I’m all for it. You just have to take the bad with the good I suppose and until people take it upon themselves to report stories honestly, there will remain plenty of bad along with what makes Digg so good.

Javascript Doppler Radar Object

I was looking around for a way to display a radar map on a website for a given location. If you go to the NOAA National Weather Service site, you can get the image you need. However, the image they use now is actually a composite that layers transparent images on top of each other and uses a z-index for each layer. This makes rendering much less intensive for their system, since it really only has to regenerate one, maybe two, layers. The majority of the layers, such as topographic, county lines, highways, etc., are static. Only the radar data and weather warnings need to be rendered.

I figured this was as good as anything, so I looked a little closer. What’s really cool is that the images you need can simply be referenced with a full URL to the NWS website–that is, there is no server-side coding or need for XmlHttpRequest (AJAX). You just need a little JavaScript to determine what the image name should be, again, based on location. You can download the JavaScript file I wrote that encapsulates this functionality here: NWSRadar.js.

The following is the basic usage. As usual, place this in the head of your HTML:

<script type="text/javascript" src="NWSRadar.js"></script>
And then instantiate the object like this:

var radar = new NWSRadar("PUX", 10, 10);

The constructor signature is like this:

function NWSRadar(radarid, left, top, width, height)

The only required parameter is radarid. Because this uses divs, I needed to specify explicitly where to place the image on the screen, so we are using absolute positioning. This also makes the width and height parameters work correctly. If you want the radar image to be in a table element or some other flow-type layout, simply specify a div that encloses the script and has its style set to relative positioning.

In case it’s not clear, here’s what each of the constructor parameters represent:

  • radarid: This is the ID for the location that is used by the National Weather Service. Go to the site and find the location you need. For my example above, I used “PUX”, which is Pueblo, Colorado. To find the radar ID you need, go to the NWS website and enter your city and state in the “Local Forecast by City, St” box in the upper left-hand corner. Then click on the leftmost image under the heading Radar and Satellite Images. On the ensuing page, you will see the id field in the URL. This is your radarid.
  • left: The absolute position of the left side of the image.
  • top: The absolute position of the top side of the image.
  • width: The width of the image. Defaults to 600px which is the NWS default size.
  • height: The height of the image. Defaults to 550px which is the NWS default size.

Once your object has been instantiated, simply call render() like this:

radar.render();

And here is what it should look like:

NWS Radar Image for Pueblo, CO (PUX)

And here is a full HTML test page, titled “NWS Weather Radar for Pueblo, CO”: it simply includes NWSRadar.js in the head, instantiates the NWSRadar object as shown above, and calls render().

8051 Microcontroller Programming

8-bit MCUs continue to be popular microcontrollers for embedded systems because they are fairly simple to understand and write code for. I have been pursuing embedded programming as a hobby for several years now and have found the obstacles to understanding pretty difficult to overcome. In the course of the past year or so, though, I have begun taking steps to figure things out. My first step was to actually purchase a microcontroller development board along with some books. I ended up purchasing a dev board from Silicon Laboratories. The actual development kit I bought is the C8051F020DK. It provides everything you need to get started with microcontroller development. What it doesn’t include, though, is a quick way for programmers to learn electronics. This continues to be my biggest challenge. Here, however, is what I’ve learned so far for those of you who are Windows programmers but would like to venture into the embedded world.

Interrupts Are Like Events
As programmers we are used to responding to events constantly. An event handler attached to a button or a list control gets triggered when the user does something such as clicking the button or changing the selection in the list.

Things are similar in embedded programming. While embedded systems don’t have the layer upon layer of abstraction we’re used to, they do provide the tools you need to do what is necessary in the embedded world. Interrupts are a special abstraction that gets triggered by something like a timer overflow or a button press (a physical button on the dev board, for example). Timers are a peripheral of these chips that counts up and then triggers an interrupt when the count overflows. When that happens, we are able to figure out how much time actually elapsed and create more meaningful delays in order to achieve what we want.

While flashing an LED (Light Emitting Diode) is not terribly exciting, it is something that can be handled very easily in an embedded system using interrupts. You simply set the timer running and attach an interrupt function (similar to a callback, really), and when the timer overflows, your function gets called. It is at that point that the LED gets toggled on or off to make it flash.
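
To make the analogy with events concrete from the desktop side, here is a minimal C# sketch of the equivalent pattern (this is plain .NET code, not 8051 firmware; the timer’s Elapsed event plays the role of the timer-overflow interrupt, and a boolean stands in for the port pin driving the LED):
[csharp]
using System;
using System.Timers;

class BlinkDemo
{
    static bool ledOn; // stands in for the port pin that drives the LED

    static void Main()
    {
        // The Timer plays the role of the 8051's hardware timer; the Elapsed
        // event plays the role of the timer-overflow interrupt.
        Timer timer = new Timer(500);    // "overflow" every 500 ms
        timer.Elapsed += OnTimerElapsed; // attach the "interrupt" handler
        timer.AutoReset = true;
        timer.Start();

        Console.ReadLine();              // keep the program alive
    }

    static void OnTimerElapsed(object sender, ElapsedEventArgs e)
    {
        ledOn = !ledOn;                  // toggle the LED state
        Console.WriteLine(ledOn ? "LED on" : "LED off");
    }
}
[/csharp]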

Ports Turn Stuff On and Off
On the development board, I have 64 port pins that I can manage in code to turn things on and off. The C code to do so is very simple. First, we have to define the memory location of the pin we’re interested in. Basically, on my system there are 8 ports with 8 pins each. Only the lower-numbered ports 0-3 are bit addressable, which means their pins can be turned on and off independently of the rest of the pins on the given port. Ports that are only byte addressable require that you maintain the port’s state and use bit shifting and/or masking to make the changes that you need.
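
The read-modify-write masking pattern is the same in any C-like language. Here is a small sketch in C# rather than 8051 C, since the exact special function register declarations are toolchain-specific; the byte variable below is just a stand-in for a port register:
[csharp]
using System;

class PortMaskingDemo
{
    static void Main()
    {
        byte port = 0x00; // shadow copy of an 8-pin port's state

        // Set pin 5 (turn it on) without disturbing the other pins.
        port = (byte)(port | (1 << 5));

        // Clear pin 2 (turn it off) without disturbing the other pins.
        port = (byte)(port & ~(1 << 2));

        // Toggle pin 0.
        port = (byte)(port ^ (1 << 0));

        // On real hardware you would write this byte back to the port's
        // special function register; here we just print the bit pattern.
        Console.WriteLine(Convert.ToString(port, 2).PadLeft(8, '0'));
    }
}
[/csharp]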

Hooking It Up
I’ve hooked up my board to an “Electronics Learning Lab” from Radio Shack. You really don’t need to get one of those, but I had one and decided to use it. My first project was connecting it to a 7-segment display on the learning lab. For each of the LEDs in the 7 segment display, I connected a port pin from port 1. Below are the pinouts for the peripheral connections. I used Port 1, which is connected to pins C4, B4, A4, C3, B3, A3, C2, and B2. See the frame below for the complete pinout.

You can see the way the seven segment display is connected in the following illustration.

From Vast Resources to Few
Being a Windows programmer makes learning embedded systems a bit of a challenge. By specification, the 8051 only allows you a maximum of 64KB of program code. If your program is bigger than that, you are either out of luck or you have to use some embedded guru’s black magic that I am, clearly, not yet familiar with. This is a far cry from the amount of programming (and data for that matter) memory space we have at our disposal as Windows programmers.

As strange as it may seem, I really like the challenges that come with learning embedded systems. I think it makes you more likely to consider optimization in other types of programming, which can be a good thing. As soon as I think of a killer project to create, I will prototype it, market it, and sell the idea to the highest bidder. Until then, I will just continue to fiddle around with LEDs and, well, LEDs. Does anybody have some ideas for a killer widget that is primarily LED related?

A Pluggable Framework

I’ve been wanting to blog about this for a while, but haven’t because it seemed like it would be time consuming to explain my thoughts. I’m going to dip into it a bit here and then follow up as I have time.

The company I work for provides data analysis for retail stores (read: grocery) found throughout the country. The companies we work with are fairly small, which is to say that chains like Albertsons, King Soopers, Safeway, etc. are much larger than what we handle. We would like to see that change, and I think it will in the future; however, the largest chain we work with currently has fewer than 30 stores total.

Part of being able to use the same data analysis tools for all of these different companies has meant being able to take transaction data (data from scans at the cash register stored in log files) and convert it to a format that can be easily loaded into the database schema we use when running our analysis tools.

I was tasked with writing the extraction processes for each of the different Point of Sale (POS) systems that we support and converting that data from raw log files to the format (XML) that can be loaded into our database schema.

As you might have guessed from the title of this post, I use a pluggable framework that allows me to select which ‘extractor’ to use at run time based upon the type of POS system the store is using. Keep in mind that we need to support numerous POS systems, since these smaller companies use a broader range of systems than you might find at the large chains I mentioned earlier. Also, though it doesn’t happen too often, there may even be a mix of POS systems in use within a single company or a single site (store). That’s why it’s important to be able to accommodate multiple types of POS systems.

I am using reflection for my pluggable framework. I’ll get to the details of that in a minute, but first let me talk about how to organize pluggable components.

Namespace Organization
In order to adhere to good programming practices, I created an abstract base class that contains common functionality that all transaction log extraction modules benefit from. I placed that class in its own assembly, though, which allows me to create a new extractor any time I need to without needing a reference to the other modules. Organized into namespaces, it looks like this:


CompanyName.ApplicationName.Extractors
CompanyName.ApplicationName.Extractors.SuperPOS
CompanyName.ApplicationName.Extractors.HyperPOS
CompanyName.ApplicationName.Extractors.WonderPOS

Each of these is in its own project, and therefore each produces its own assembly. Each of the specific extractors has a reference to the CompanyName.ApplicationName.Extractors assembly and inherits from a class in there called “ExtractorBase”. Keep in mind that all of my namespaces here are bogus. In the real application they have their actual names, but I didn’t really want to give away anything here about the real POS systems we support.

The reason I’ve done things this way is that I want to be able to create new projects in the future that adhere to the interface (abstract class) without having to recompile anything but the new assembly. So, say in the future I decide to add a new extractor with the namespace:

CompanyName.ApplicationName.Extractors.CoolPOS

I can create a new project that references CompanyName.ApplicationName.Extractors, inherit from CompanyName.ApplicationName.Extractors.ExtractorBase, and implement the interface it specifies. Then, once I’ve compiled the assembly, I can move CompanyName.ApplicationName.Extractors.CoolPOS.dll into my extractors directory, and the existing system will be able to load the new assembly using reflection based on some parameters I will add to a configuration XML file.

Reflections Of the Way Life Used to Be
Do you remember life before we had reflection? RTTI (Run-Time Type Information) was the closest thing I recall in C++. These days I can’t imagine how I could have done anything like this so easily without it. In the code that actually calls each extractor, I use reflection to find and invoke the method. As part of my interface (abstract class), I’ve provided a single entry point with multiple overloaded signatures in case some parameters are not needed or desired. The method is called Execute, and the invocation code is able to determine which Execute overload should be called based on the number of parameters passed in.
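
As a rough illustration only (the post doesn’t show the real signatures, so the parameter lists and helper below are placeholders I made up), the base class might look something like this:
[csharp]
using System;

namespace CompanyName.ApplicationName.Extractors
{
    // Hypothetical sketch of the abstract base class; the real Execute
    // signatures and shared helpers will differ.
    public abstract class ExtractorBase
    {
        // Example of common functionality shared by every extractor,
        // e.g. writing the converted transactions out as XML.
        protected void WriteOutputXml(string outputPath)
        {
            // ... shared implementation ...
        }

        // Overloaded entry points; the invoker picks one based on the
        // number of parameters supplied at run time.
        public abstract void Execute(string logPath);
        public abstract void Execute(string logPath, string outputPath);
        public abstract void Execute(string logPath, string outputPath, DateTime businessDate);
    }
}
[/csharp]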

The method invoker takes strings that specify the assembly and the full type name of the extractor that is currently needed, loads the proper assembly, and then calls the Execute method on the extractor class found in that assembly. The code looks something like this:
[csharp]
string extractorAssemblyName =
    configManager.GetValue( "ApplicationPath" ) +
    "CompanyName.ApplicationName.Extractors." +
    this.extractorName + ".dll";

string extractorObjectName =
    "CompanyName.ApplicationName.Extractors." +
    this.extractorName + "." +
    this.extractorClassName;

DynamicMethodInvoker invoker = new DynamicMethodInvoker(
    extractorAssemblyName,
    extractorObjectName,
    "Execute" );
[/csharp]
The strings extractorName and extractorClassName were populated from an XML configuration file. In the case of my new POS system I referred to above as CompanyName.ApplicationName.Extractors.CoolPOS, the string extractorAssemblyName would now contain “C:\path_to_application\CompanyName.ApplicationName.Extractors.CoolPOS.dll” and extractorObjectName would now contain “CompanyName.ApplicationName.Extractors.CoolPOS.CoolPOSExtractor” (which is the actual name of the class we are going to use for this POS).
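
The post doesn’t show the layout of that configuration file, so purely as an illustration, it might map those values like this (the element and key names here are made up):
[xml]
<!-- Hypothetical configuration; the real element and key names will differ. -->
<configuration>
  <appSettings>
    <add key="ApplicationPath" value="C:\path_to_application\" />
    <add key="ExtractorName" value="CoolPOS" />
    <add key="ExtractorClassName" value="CoolPOSExtractor" />
  </appSettings>
</configuration>
[/xml]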

All I need to do at this point is pass in the parameters that the Execute() method is expecting and call Invoke() like this:
[csharp]
invoker.AddParameter( param1 );
invoker.AddParameter( param2 );
invoker.AddParameter( param3 );
invoker.Invoke();
[/csharp]
And my Execute method for the new extractor, CompanyName.ApplicationName.Extractors.CoolPOS, is now running without my having to recompile the entire calling application.

At this point, I can’t remember where I got the DynamicMethodInvoker class from, but if you are interested in taking a look at it and using it, just let me know and I’ll forward it on to you (I believe I got it off of MSDN, but heck if I can find a reference to it now). DynamicMethodInvoker provides a simple interface for calling methods and examining reflected types and methods. It makes things a little bit simpler to implement.
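
Since I can’t point you at the original source, here is a stripped-down sketch of the reflection calls a class like this typically wraps. This is my own reconstruction, not the actual DynamicMethodInvoker code, and it ignores error handling and any overload resolution beyond matching the parameter count:
[csharp]
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Minimal reconstruction of a dynamic method invoker; not the original class.
public class SimpleMethodInvoker
{
    private readonly string assemblyPath;
    private readonly string typeName;
    private readonly string methodName;
    private readonly List<object> parameters = new List<object>();

    public SimpleMethodInvoker(string assemblyPath, string typeName, string methodName)
    {
        this.assemblyPath = assemblyPath;
        this.typeName = typeName;
        this.methodName = methodName;
    }

    public void AddParameter(object parameter)
    {
        parameters.Add(parameter);
    }

    public object Invoke()
    {
        // Load the extractor assembly from disk and locate the type by name.
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        Type type = assembly.GetType(typeName, true); // throw if the type is missing

        // Pick the overload whose parameter count matches what was supplied.
        MethodInfo method = type.GetMethods()
            .First(m => m.Name == methodName &&
                        m.GetParameters().Length == parameters.Count);

        // Instantiate the extractor and invoke the chosen overload.
        object instance = Activator.CreateInstance(type);
        return method.Invoke(instance, parameters.ToArray());
    }
}
[/csharp]
Whatever the real class does beyond this (caching, richer overload matching), the core is just Assembly.LoadFrom plus MethodInfo.Invoke.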

Conclusion
I haven’t yet done any profiling to see if things would be much faster if I didn’t use reflection (i.e. if I just created a hard-coded class hierarchy); however, the system does seem to work well and fast enough to accomplish what we need. I had thought about using a provider model, but I realized later that the provider model is really intended for creating an abstraction in case you need to change the underlying system at a later date. A data provider, for instance, provides access to different databases based upon a specific implementation of a provider interface. If you ever needed to switch from a SQL Server database to a MySQL database, for example, you would simply need to implement the provider interface for the MySQL database and then specify in a config file that you want to use that provider instead.

The approach I’ve outlined here works similarly. In contrast to the provider model, however, this pluggable framework is intended to swap out the extractor in use during a single session, whereas the data provider model is simply a design choice you make in order to support a different underlying data store should you need one in the future.

Well, that’s it for now. I will revisit this again as more comes to me. Meanwhile, good luck and have fun making pluggable frameworks with reflection.