Archive for May 2009
The price jump from the free Dotfuscator Community Edition to the full Professional version is like going from $0 to $2,000 in six seconds 🙂
The good news is that the folks at PreEmptive Solutions (makers of Dotfuscator) have finally realized that we poor struggling developers have been suffering from sticker shock for quite some time now, and they have decided to come to our rescue with the Dotfuscator Micro Developer Edition (MDE). It’s priced at $399 a year (a subscription) and includes the following features:
- Ability to integrate Dotfuscator MDE through a command line interface and MSBuild
- Integration with Microsoft Visual Studio
- Regular product updates
- Access to dedicated customer support staff
- VS2010 CE features available today
- Tamper detection and defense
- Application expiry logic
- Feature tracking logic
Still more than some rivals, but less painful to your wallet.
I’m nearing the end of a database sharding project and I wanted to make some notes about what I’ve observed both for myself, so I can refer back here when I do it again, and for anyone who may be interested in the subject.
To Shard or not to Shard, That is the Question
It’s nice to dream, but do you really want to put in all the extra time and effort it takes to implement sharding before you know you will reach the volume that actually requires chopping your database into bits?
I’d say no, and the guys over at 37Signals agree: don’t do it until you have to.
Sharding is an evolutionary step that you should undertake only when you really need to.
Should you shard from the start?
Sitting in the Ivory Tower of design, it is easy to predict which entity you will be partitioning your shards on, but after running your system for a few months, or better still a few years, you are likely to find that your lofty theories don’t stand up to the harsh realities of production.
Example: In the system I’m working on, it was originally thought that the Users table would be the one to partition by: users had Contacts, the ContactActivities table would get huge, and that table seemed best chopped up by User. But as the business progressed and the true needs of the people using the system became clear, it turned out that a totally different table was the bottleneck and a totally different entity had to be used for partitioning.
If that database had been designed from scratch to shard across the Users table, the work necessary to “reshard” it would have ended up being much greater than the work to shard it across the correct entity after waiting a while to find out what that entity was.
So I’d say: don’t shard from the start. Shard when volume forces you to, and when you have enough data in your database to analyze and tell you what to shard on.
The one exception to this is where you are building a system that is a copy of an existing system which has already gone through the process and the sharding entity is well known.
For example, if you are building the next Facebook then you can use the same entity they do to shard by.
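To make the cost of choosing the wrong partitioning entity concrete, here is a minimal C# sketch (all names and connection strings are made up for illustration, not taken from the project described above) of the routing code that sharding introduces. Every data access call has to go through something like this, keyed on the partitioning entity, which is exactly why changing that entity later touches everything:

```csharp
using System;
using System.Collections.Generic;

public static class ShardRouter
{
    // Connection strings for each physical shard (illustrative values only).
    private static readonly List<string> Shards = new List<string>
    {
        "Server=db0;Database=App0;",
        "Server=db1;Database=App1;",
        "Server=db2;Database=App2;"
    };

    // Map the partitioning entity's key to a shard. If the partitioning
    // entity turns out to be wrong, every caller of this method -- and all
    // the data already distributed by it -- is affected.
    public static string GetConnectionString(int partitionKey)
    {
        int shardIndex = partitionKey % Shards.Count;
        return Shards[shardIndex];
    }
}
```

Simple modulo routing like this is only a sketch; real systems often use lookup tables or consistent hashing so shards can be added without redistributing everything.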
While checking out the T4 code generator that is built right into Visual Studio 2008 (but which ships with no code coloring or IntelliSense), I came across two free add-ins that fill in some of the gaps.
- T4 Editor Community Edition adds some limited code coloring for the T4 keywords and directives.
- Tangible modeling tools plus T4 Editor adds code coloring for T4 keywords and for embedded C#, SQL, VB and other languages AND it provides IntelliSense for some basic classes of the .NET Framework.
Both these products provide full code coloring and full IntelliSense in their Pro versions. Of the two free versions, I recommend the Tangible T4 Editor because it color-codes everything and its basic IntelliSense is very useful. I haven’t yet tried the modeling tools that come with it, but they look good too.
If you want to find out more about T4 and how you can use it to generate code, here are a few links:
- T4 (Text Template Transformation Toolkit) Code Generation – Best Kept Visual Studio Secret
- T4 Toolbox – Includes ready-made code generation templates and links to excellent tutorials and articles on using T4
- MSDN: Generating Artifacts By Using Text Templates
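As a taste of what these editors are coloring, here is a minimal T4 template (a .tt file; the namespace and class names are made up for illustration). The `<#@ ... #>` directives configure the template, `<# ... #>` blocks hold C# control flow, and `<#= ... #>` blocks emit values into the output. This one generates a C# file containing one stub class per name in a list:

```
<#@ template language="C#" #>
<#@ output extension=".cs" #>
// Generated file -- do not edit by hand.
namespace MyApp.Generated
{
<# foreach (var entity in new[] { "Customer", "Order", "Invoice" }) { #>
    public partial class <#= entity #>Repository { }
<# } #>
}
```

Everything outside the `<# #>` markers is copied to the output verbatim, which is what makes T4 handy for churning out repetitive boilerplate.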
There is tremendous confusion over the words “argument” and “parameter”.
Definition of argument from computeruser.com: “A value that is passed to a program, subroutine, procedure, or function by the calling program; one of the independent variables that determine the output.”
Definition of parameter from computeruser.com: “In computing, a value sent to a program or operation by the user.”
Several other sources said basically the same thing, which is that the two words are synonyms.
Well, despite the fact that most programmers I know use them to mean the same thing, they DON’T have the same meaning.
If you are going to understand the new “Named and Optional Arguments” feature in C# 4.0 (and if you’ve ever had to do Office Automation or COM Interop in C#, you will want to) then you are going to have to clear up the difference between these two terms.
Here is my attempt to clarify. I’m making it as simple as possible:
- Argument: what gets passed into a function.
- Parameter: the named slot in the signature of a function used to pass values to the function.
```csharp
int howMany = CounterMethod(myTable);       // example of calling a function

private int CounterMethod(DataTable table)  // example of a function signature
```
- “myTable” in the function call is an argument.
- “table” in the function signature is a parameter.
I hope that clarifies it for you. Now you can go read all about the new C# 4.0 feature “Named and Optional Arguments”.
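Since the C# 4.0 feature is named after exactly this distinction, here is a small runnable sketch (the method and values are made up for illustration) showing an argument matched to a parameter by name, and an optional parameter falling back to its default when the argument is omitted:

```csharp
using System;

class Program
{
    // "count" is a parameter; "label" is an optional parameter with a
    // default value. Names here are illustrative only.
    static string Describe(int count, string label = "items")
    {
        return count + " " + label;
    }

    static void Main()
    {
        // Positional argument only; "label" keeps its default.
        Console.WriteLine(Describe(3));                // 3 items

        // Named argument targets the parameter by name.
        Console.WriteLine(Describe(3, label: "rows")); // 3 rows
    }
}
```

Named arguments really earn their keep in Office Automation and COM Interop, where methods can have a dozen parameters and you only care about one or two.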
Available for download: Visual Studio 2010 and .NET Framework 4 Beta 1
I recently re-read SQL Server 2005 Bible, a great book that covers everything you ever wanted to know about SQL Server but were afraid to ask. At one point in the book the author says that your data abstraction layer should be in your database in the form of stored procedures and he gives some good arguments as to why.
Right after that I read Pro LINQ Object Relational Mapping in C# 2008, another excellent book that shows you how to create an ORM using LINQ (both LINQ to SQL and Entity Framework). When introducing the fact that you can use stored procedures in LINQ, the author very definitely says that even though they are supported, you shouldn’t use them, and he gives some good arguments as to why.
Two diametrically opposing views in two really good books. The error both authors make is thinking that one size fits all. The truth is that “all” can vary widely so one size just ain’t gonna cut it.
My view is that everything we have in our industry is a tool and each tool has a range of uses.
Example: I recently joined a project that, in line with the SQL Server Bible author, makes heavy use of stored procedures. I think that is appropriate given the huge amount of data the application has to deal with. Tuning many queries within an inch of their lives, including completely rewriting them or splitting a single query into several, is very necessary on this project. Using LINQ or some other ORM tool to generate all the SQL, and never using stored procedures, just would not work.
Another example: I was working on a project a couple of years ago where LINQ to SQL would have been perfect for quickly creating a Data Access Layer in an existing application that needed some major enhancements. Unfortunately, we still had to support users on Windows 2000 machines, and the .NET Framework 3.5 doesn’t support W2K, so we were stuck with typed datasets. But we didn’t use stored procedures for our abstraction layer; it really wasn’t necessary. Yes, some tuning was needed, but the app usage was mild and the amount of data relatively small, even after many years of use, so an ORM would have been great and would have made our lives simpler.
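To make the two styles concrete, here is a hedged C# fragment (the DataContext, table, and stored procedure names are all hypothetical) contrasting the approaches the two books advocate: letting LINQ to SQL generate the query, versus calling a tuned stored procedure, which LINQ to SQL maps onto a method of the DataContext:

```csharp
// Hypothetical LINQ to SQL DataContext; all names are illustrative.
var db = new AppDataContext();
var cutoff = DateTime.Today.AddDays(-30);

// Style 1: let the ORM generate the SQL. Quick to write, but the
// generated query is largely out of your hands.
var recent = from o in db.Orders
             where o.PlacedOn >= cutoff
             select o;

// Style 2: call a stored procedure that a DBA can rewrite, split, or
// tune without touching application code.
var recentTuned = db.GetRecentOrders(cutoff);
```

Either call returns the same shape of data to the application; the difference is who controls the SQL, which is exactly the axis the two authors disagree on.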
Anyway, the moral of the story is: use the tool that is appropriate, and remember that absolutes are unattainable.