Archive for August, 2004

Goodbye CLS: Is Microsoft effectively abandoning the Common Language Specification?

Monday, August 30th, 2004

Like many developers, I’ve started the process of getting acquainted with beta 1 of Visual Studio .NET 2005, along with the new versions of both C# and Visual Basic .NET. One thing has become increasingly apparent – Microsoft is effectively abandoning the Common Language Specification (CLS).

If you think back a couple of years when .NET was first announced, the CLS was one of the lynchpins of the .NET message. The idea that every .NET compatible language would be able to work seamlessly with any other .NET language was one of the key innovations of .NET. This meant you could create components in any language, and easily use them from any other – a vast improvement over the previous generation where mixed language development required great care to match calling conventions, parameter count and type, and responsibility for reference counting and memory management.

There is no doubt that .NET is a huge improvement over previous approaches. As a component developer, and someone who strongly believes in software reuse and component based software development, the CLS was gospel. I believed that every assembly should always be CLS compliant (even private code, in the hope that one day it might be refactored into components). Visual Basic .NET 2002-3 produces CLS compliant code by default. C# does not, though it’s not hard to maintain CLS compliance, and the compiler can check for it if you wish.

With Visual Studio 2005 it seems clear that Microsoft has effectively abandoned the CLS in favor of a new standard: the set of language constructs supported by Visual Basic .NET and C#.

Want proof? Visual Basic .NET 2005 no longer creates CLS compliant code by default.

And you know what? Microsoft made the right decision in this case. Leaving these features out of VB .NET would have crippled the language (both in perception and in reality, though mostly the former – a subject I’ll return to later).

The three major non-CLS compliant features I’ve seen so far are generics, unsigned variables and different access levels for property procedures. All of these are extraordinarily useful features for VB .NET. All will make it easier for VB .NET and C# assemblies to work together.

What does this mean to other language vendors? Can they leave out these features with the argument that they are not necessary because they are not CLS compliant? Of course not – how can you be less than 100% compatible with the two major .NET languages? In effect, VB .NET and C# will define the new de facto standard.

The idea of a common language specification is a good one, and the truth is, the languages are so close that it’s hard to see this as a big deal. But language interoperability is important – especially if we are ever going to convince developers to truly adopt component-based development. Both VB .NET and C# should produce CLS compliant code by default and require an explicit developer choice to turn it off (just as systems should be secure by default – same concept). But I’m not suggesting they change the default attributes for VB .NET and C# projects. Rather, Microsoft should update the CLS to match the new de facto standard and make that the default setting for VB .NET and C#. This will give other language and component vendors, and the folks developing Mono, a clear common guideline to work with, and in the long term ensure that the interoperability promised with .NET does not devolve into an illusion.

Related Articles:
How Microsoft Lost the API War by Joel Spolsky is essential reading for every Windows software developer. Check out his new book.

RSS feeds for sites referred to in this item:

Joel on Software

A Tale of Two Nerds

Thursday, August 26th, 2004

Ever since the movie “Revenge of the Nerds,” the word “nerd” hasn’t been so much an insult as a promise – or at least a suggestion that those kids who spend their time in geeky pursuits will one day employ the cool kids (or at least cash in some nice stock options along the way).

Well, stock options aren’t what they were (unless you work for Google), and jobs of nerds and former cool kids alike are finding their way to India and Asia, but geeky pursuits continue among both kids and adults.

Today I’d like to tell you about two teenage nerds I met recently. Wait, I mean two teenage geeks. Nowadays, around here at least, geek is much more complimentary than nerd. But since it may vary where you are, I’m just going to alternate between them – you’ll know what I mean.

Anyway, I want to tell you about them, because their pursuits today may give us some insight into the future. More important, I think their stories may shatter some misconceptions adults often have about what it means to be a nerd today. Consider this a plank in bridging today’s digital generation gap (something I’ll be writing more about).

When I was a young nerd, my main hobby was Amateur Radio (or Ham Radio). I had my advanced class license by 14, and passed the technical requirement for first class by 15 (didn’t quite make it on the Morse code). I could tear apart and repair a short wave radio, build Heathkits for fun, and spent weekends participating in contests and sometimes T-Hunts (first-generation geocaching, done the hard way).

We all know those kids have vanished. Heathkit is but a fond memory, and ham radio is largely relegated to senior citizens. The computer took over, and everybody knows that it’s on computers that you’ll find today’s young nerds. But let’s take a closer look at what everybody knows.

In “Revenge of the Nerds,” the technology nerds played with computers and built robots. In fact, any kid who used a computer was by definition a “computer nerd.” That is no longer true.

Almost every kid and teen today uses computers routinely, and most are on the Internet as well. And while it might feel nice to interpret this as a victory of sorts (yeah! all the kids are geeks now), it’s just not so. For most kids and teens the computer is a tool. They have little or no interest in the computer itself – just in what they can do with it. From using word processors for reports, to presentation software for classroom presentations, to online gaming, the computer is a means to an end.

So how did I recognize these two “computer nerds?”

I spotted the first one when we were having a casual conversation about computer monitors. That alone is not the mark of a nerd – monitors are a fact of life nowadays – but he used the term “frame rate,” which is not in the vocabulary of the average teen. Later we were discussing a dynamic state machine based AI he’s developing for a game he’s writing, and two things became apparent: first, that I’d forgotten more than he knows about state machines (with an emphasis on the “I’ve forgotten” part), and second, that he definitely qualifies as a nerd (oops, I mean a geek).
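
For readers who haven’t run into state machines, the idea is simpler than it sounds: the program is always in exactly one named state, and each update decides which transitions are legal from there. The teen’s actual design isn’t shown here; this is just a minimal sketch in Python, with hypothetical states and distance thresholds chosen purely for illustration.

```python
# Minimal sketch of a state-machine game AI. The states ("patrol",
# "chase", "attack") and the distance thresholds are hypothetical,
# chosen only to illustrate the technique.
class EnemyAI:
    def __init__(self):
        self.state = "patrol"

    def update(self, distance_to_player):
        # Each state lists the transitions that are legal from it.
        if self.state == "patrol":
            if distance_to_player < 50:
                self.state = "chase"
        elif self.state == "chase":
            if distance_to_player < 5:
                self.state = "attack"
            elif distance_to_player >= 50:
                self.state = "patrol"
        elif self.state == "attack":
            if distance_to_player >= 5:
                self.state = "chase"
        return self.state

ai = EnemyAI()
print(ai.update(100))  # patrol: player too far away to notice
print(ai.update(30))   # chase: player spotted
print(ai.update(3))    # attack: player in range
```

The appeal for game AI is that each state’s behavior is isolated and easy to reason about; a “dynamic” version like the one he described would presumably build or reweight the transition rules at runtime rather than hard-coding them.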

The second one I met at a LAN party. When not in school he runs his own part time business, charging friends and neighbors $25/hour to clean their computers of viruses, set up security, and install software. Yep, that definitely qualifies.

My nephew and his group of friends have quite a few geeks among them as well. In fact I had the pleasure of watching them participate in a STRUT contest last year in which their team field stripped a computer to its component parts, then reassembled it in a shockingly short time.
OK, this isn’t enough to play “spot the nerd,” but the key thing to remember is this: just because a kid is using a computer doesn’t make him or her a nerd. You have to look further.

Oh, one more thing.

Both of the teens I mention here, and most of the others in this group, have one other interesting thing in common: they spend much of their time using and playing with Linux. Today, that is definitely the mark of a geek. Tomorrow? Who knows….

Why Microsoft should not build antivirus protection into Windows

Monday, August 23rd, 2004

Most people in the technology field don’t realize how truly awful it is out there with regard to viruses and spyware. I’ve been traveling around the country promoting my book “Always Use Protection: A Teen’s Guide to Safe Computing,” and the stories I’ve heard are horrific. People are truly learning to hate their computers.

A recent post on Scobleizer is but a small example, but one of the comments, suggesting that Microsoft build antivirus capability into Windows, is way off.

Having Microsoft build antivirus protection into Windows would be a disaster.

Let’s ignore the antitrust issues. Let’s even ignore the question of whether Microsoft can be trusted to build a good antivirus product.

Right now there are a decent number of antivirus vendors. The competition among them is helping improve the overall protection of antivirus products. More important, the variety makes it very difficult for a virus or worm to target all of them (remember – viruses love to disable antivirus programs).

What would happen if Microsoft included antivirus protection in Windows?

It would immediately siphon off a huge amount of the revenue that antivirus companies depend on to survive and to develop their products. Even if another vendor had a better product, the vast majority of people would just stick with the built-in antivirus program. The Microsoft antivirus program would quickly gain a dominant market share. We’ve seen that story before.

Not only would this reduce the competition that drives improvement in antivirus programs, it would create a huge, juicy target for viruses. Just as most viruses now target Internet Explorer, most would target the built-in protection. Computer security is far too important to take a chance on this. Microsoft took the right approach with XP SP2, checking to see whether an antivirus program is present. They should continue with this approach.


Software Factories and why hardware engineers are so much better at reuse than software developers

Sunday, August 22nd, 2004

Why is it that hardware developers benefit from component reuse and software developers do not?

Jack Greenfield recently authored an article on MSDN called “The Case for Software Factories” that I encourage you to read. While I don’t disagree with this article or its sequel, there are some areas that I think deserve elaboration.

In his article, Greenfield distinguishes between economies of scale and economies of scope. Scale is where you can easily make copies of an existing item; scope is where you can create custom items by assembling components you already have. When it comes to scale, software has the edge: just duplicate a CD or post an installation file to a server. When it comes to scope, hardware has the edge.

A chip maker wouldn’t dream of designing a chip using individual transistors. They use libraries of components and advanced design software to assemble chips, from the simplest ASIC to the largest microprocessor.

Why don’t we do this with software? What do hardware engineers do that we don’t?

Ultimately, it comes down to two things:

  • Hardware components have their behavior specified in extraordinary detail. You can predict with great accuracy what even a complex hardware component will do in a system. Microprocessors, for example, come with documentation books that cover every aspect of the chip’s behavior from commands, to electrical characteristics, to the precise timing of signals.
  • Hardware developers use sophisticated tools to build systems from components, whether designing a chip from predesigned elements or building systems from commercial or custom components.

In software we have primitive versions of both. There is a third-party component industry (one that I’ve been a part of for years with my company Desaware), but the industry is tiny compared to the overall software market. (I discuss this further in my article “Impact of Source Code Availability on the Economics of Using Third Party Components.”)

The Heart Of The Problem

The fundamental reason that hardware developers are ahead of software developers in reuse is this: coding is cheap.

Intellectually, we know that software is expensive to develop. We know that coding is a very small part of the total lifecycle cost of software. But when we look at a software component, we still tend to think we can build it ourselves more cheaply than we can buy it off the shelf. The fact that coding is inexpensive gives us a distorted view of development costs.

If software components were specified and documented with the level of detail common for hardware components, perhaps it would overcome some of the reluctance people have to using components. But the nature of the software market prevents this: the component market is not large enough to justify the investment in documentation, customers do not demand it, and the marketing drive to constantly add features drives the cost of specifying and documenting components even higher.

As for high-level tools, it’s easy to justify the long-term investment in developing and deploying such tools. But the key words here are long term. Most companies do not think long term; time to market and this quarter’s profitability are stronger driving forces.

The Hardware Edge

Why have hardware developers turned to components and invested in high-end development tools? Because the economics are compelling in both the long term and the short term!

The costs to develop an integrated circuit are enormous. Prototyping a chip is time consuming and expensive – you can’t just mock up a chip the way you might mock up a user interface. Actually building a test chip is expensive as well, especially for complex chips. Testing is expensive too, often requiring complex hardware and programming.

In short, hardware development is so expensive, in both reality and perception, that at a certain level of complexity it is not merely costly but virtually impossible without componentization and sophisticated development tools that allow a high level of abstraction. It is this cost that forces hardware developers to buy components instead of building them (overcoming “not invented here” syndrome). It is this cost that overcomes the temptation to build something because it’s fun – you just can’t rationalize the extra development work the way you often can in software.

Curiously enough, even as the low cost of software development prevents us from enjoying the economies of scope (code reuse), the low cost of software reproduction makes piracy a greater problem in software than in hardware (though hardware piracy exists as well).


Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools by Jack Greenfield et al. I haven’t read this, but if you like the MSDN article, you’ll probably like this book.

Code Generation in Microsoft .NET by Kathleen Dollard. Practical rather than academic, if you want to try code generation with .NET, start here.

RSS feeds for sites referred to in this item:
Kathleen Dollard’s Blog

Telling time with .NET

Tuesday, August 17th, 2004

My latest ebook “Telling Time with .NET: Build your own Internet time component” is now available.
I’ve always wanted my own “Atomic” clock, and even though self-updating clocks have been available for years, I never got around to owning one. Internet time servers make it possible to come close though.
I ended up using a number of interesting techniques to improve accuracy, and while advanced .NET developers probably won’t learn much from this ebook, I think it will prove an interesting read for beginning and intermediate developers. Topics covered include .NET sockets, inheritance, code access security and regular expressions.
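
The ebook builds its component in .NET, but the core exchange is worth sketching in language-neutral terms. An SNTP client sends a 48-byte request over UDP port 123 and reads back the server’s transmit timestamp, which counts seconds since January 1, 1900. Here is a rough sketch in Python (the server name and timeout are arbitrary placeholder choices, not taken from the ebook):

```python
import socket
import struct
from datetime import datetime, timezone

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_DELTA = 2208988800

def build_sntp_request():
    # First byte: LI = 0, version = 3, mode = 3 (client); remaining 47 bytes zero
    return b"\x1b" + 47 * b"\x00"

def parse_transmit_time(packet):
    # The transmit timestamp occupies bytes 40-47 of the reply:
    # 32-bit whole seconds, then 32-bit fractional seconds, big-endian.
    seconds, fraction = struct.unpack("!II", packet[40:48])
    unix_seconds = seconds - NTP_DELTA + fraction / 2**32
    return datetime.fromtimestamp(unix_seconds, tz=timezone.utc)

def query_time(server="pool.ntp.org", timeout=5.0):
    # server and timeout are hypothetical defaults for illustration
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(build_sntp_request(), (server, 123))
        packet, _ = sock.recvfrom(48)
    return parse_transmit_time(packet)
```

A real client would also compensate for network round-trip delay – one of the accuracy techniques a serious implementation has to address – whereas this sketch simply trusts the server’s timestamp as received.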

Advice for future Computer Science Majors

Tuesday, August 17th, 2004

Like many computer professionals, I’m often asked for career advice by those considering entering this field. Given the recent drop in the number of students entering college as computer science majors (see the May 2004 issue of Computing Research News), offering good advice is more important than ever. Here’s my version.

You had better like change.

Many careers require that you keep studying to remain current. Doctors and lawyers have to stay on top of the latest treatments and legal precedents. Realtors study the latest regulations; contractors, their building codes. But what makes computer science intense is that not only do you have to keep learning technology that is changing at a rapid clip, but what you previously knew becomes obsolete.

Most developers like to learn new technology, or at least play with the latest toys. Sometimes we get so hung up on new technology that we don’t think clearly about the consequences of that technology (a topic for another time). But it is important to consider some of the consequences of the rapid change that occurs in this industry.

Because what you know will soon be obsolete, you’ll spend much of your career under intense pressure to stay up to date, the underlying fear being that if you don’t, you’ll end up unemployed and pathetic. This fear, though rarely admitted, is quite common, and can be a source of stress, which may not matter to you now, but is one of the reasons people leave the field. It’s like the Red Queen says in “Through the Looking Glass” – you have to run as fast as you can just to stay in place. You have to run even faster to get anywhere. Burn-out is a problem.

Being technologically savvy isn’t enough.

Being an extreme programmer is all very nice, but if you want to succeed in this industry it’s not nearly enough. You may have heard the political and economic pundits on the news talking about the “jobless recovery.” Bush is stressing because corporate profits are rising but employment is not; Kerry is promising to create jobs, but it’s not clear what he can do. Why? Because our economic system demands that businesses become more productive, and more productive means (among other things) doing more with fewer people, or doing more with cheaper people. We’re all familiar with how technology eliminates some jobs – ATMs reduce the need for bank tellers, self-service pumps allow gas stations to be staffed by a single person. There’s no clear sign of this happening to software developers, in the sense that few software development tools are so sophisticated as to replace programmers (though it’s coming – automatic code generation is a fascinating topic). But it is possible to replace expensive software developers in the U.S. with less expensive software developers in other countries. How big an impact this is having, and how big an impact it will continue to have, is a subject for debate. But it’s too significant to ignore.

And even if productivity isn’t an issue, the inevitable tides of our economy will be. You will at some point in your career be dealing with a tight job market. And it’s not your technological skills that will determine how well you succeed at those times.

It’s your personal skills that will count. How well do you communicate? You should know how to present your ideas both to individuals and small groups. Can you write clearly and somewhat grammatically? Do you come across as confident in yourself and your abilities? Do you have leadership skills (that often translate into management skills)? Are you responsible? Are you a nice person to have around (or at least not completely repulsive)? Yes, there are those who are so technologically brilliant they can get away with caring just about technology, but for most of us these other skills are essential.

So, as you go off to college, don’t let your technical classes get in the way of getting a good education. Take a writing class. Take a class or get involved in an activity that forces you to do some public speaking. Do some drama or improv. Join a club. Do some volunteer work. Do some tutoring. This kind of experience will have long term benefits to your career that you wouldn’t believe.

Take CS for the right reasons

The best technology professionals are almost without fail the ones who entered this field because they are fascinated with technology. We like to play with the latest and greatest toys. We share an underlying faith that technology can be used to solve problems and make the world better. In fact, we’re sometimes so blinded by technology that we fail to consider other factors in our decisions (like business and economic factors, social consequences, etc.) – but that is a subject for a later time.

The important thing is not to go into CS just because you think it’s going to make you a lot of money. Sure, some software developers got rich in the dot-com boom, but even then most of us ended up with at least some stock that ultimately became worthless. Choose this major because it’s fun, and you’ll end up having a great time. You’ll meet lots of smart people, most of them pretty nice. And when the inevitable stress and problems occur, you’ll at least know that you’re spending your days doing what you enjoy the most.

Do you have additional recommendations for future CS majors? Please post them (remember, comments on this blog are moderated and won’t show up right away).

The Always Use Protection Quiz

Tuesday, August 17th, 2004

I’ve started seeing all sorts of interesting comments since my interview with Robert Scoble was posted. Many of the comments relate to the quiz I have posted. Some of the comments are thoughtful. Others provide fascinating insight into the biases of the reader. I’ll be using this post (which I’ll update periodically) to respond to the most interesting of these comments.

Readers at PHP Everywhere wonder:

  • What do the FTC surveys say about teens suffering from identity theft?
    Answer: They don’t ask that question. There’s rarely money involved in the kinds of identity theft teens suffer from (and the FTC, being the Federal Trade Commission, is focused on financial issues). So until some grad student looking for a good thesis does a formal study, I’m afraid my numbers are the best I have. I feel comfortable using them because the numbers I’m seeing are so high (over 30%) compared to the identity theft numbers for adults (under 5% in 12 months, which is still very high).
  • I have some sort of financial motive for claiming free antivirus programs aren’t good enough.
    Answer: Anyone who asks this is missing the real point of the question. The key idea is that periodic scanning for viruses is never good enough. You need real-time scanning, and to my knowledge none of the free scanners currently include that feature. As soon as I find one that does, I’ll remove “free” from the question. Also, I don’t work for an antivirus company.
  • Cookies are a threat to privacy.
    Answer: Of course they are! And occasionally a stupid web site will include personal information (such as user ID and password) in the cookie. But most sites don’t. The point of this question is to encourage people to understand what cookies really are and the kinds of threats they pose, from minor (obfuscated first-party cookies), to more significant (obfuscated third-party cookies), to serious (cookies that contain personal information).

Readers at another site wonder:

  • Whether I just wrote this to plug an area of the market that is otherwise unplugged (the under-20s) – the implication being it’s just for the money.
    Answer: Well, yes. Obviously, if there had already been other security books for teens I wouldn’t have written this one. I would have just bought that book and handed it to the teens I know who need it. But any author will tell you that writing books nowadays is one of the least profitable ways to spend your time.
  • You play fast and loose with the terms virus and worm.
    Answer: You bet I do! The difference between them is important to security professionals and to those who are by nature precisionists (or anal retentive). From the perspective of a home user, viruses, worms and trojans all fall into the class of “bad things that a good antivirus program should clean or remove.” Spyware and adware fall into the class of “bad things that you may need a spyware/adware tool to remove, because many antivirus programs won’t catch them.” That’s why in the book, once I explain the difference between them, I tell readers that I’ll just use the more generic term “virus” throughout the rest of the book.

More to come…

RSS feeds for sites referred to in this item:
PHP Everywhere

RAD is not productivity

Monday, August 16th, 2004

Recently a software developer came to me with a fascinating argument. He claimed that VB .NET is a poor choice for software development because it was more RAD than C# (RAD = Rapid Application Development for those who have forgotten), and that RAD tools lead invariably to bad code that is unsupportable and more costly in the long run. He based this argument on extrapolation from VB6 days, where VB6 was (of course) very RAD, and this resulted in copious amounts of bad VB6 code.

Now, there’s no denying that lots of very bad VB6 code was written. But does RAD really result in bad code? Are RAD tools really best for quick prototyping and “throw away” code?

This question is especially important as we approach Whidbey, where much of the differentiation between C# and VB .NET is in the area of RAD features, with VB .NET gaining Edit and Continue and native support for the My classes, and C# gaining refactoring.

To answer the question, let’s agree first that increased productivity is a good thing (in fact, in an age of increased offshoring, it’s arguably critical). But how do you measure productivity? Is it lines of code per dollar spent? That’s a fair definition – but only if you qualify it. The lines of code have to meet a specified quality level, and the dollars spent have to include the lifetime cost of the software, not just the initial coding and testing.

RAD tools clearly allow you to write more code faster. But is it necessarily bad code?

I think not.

The reason that so much bad VB6 code was written was not because VB6 was RAD, but because it was easy. In fact, VB6 made writing software so easy that anyone could be a programmer, and so everyone was. Doctors, lawyers, bankers, hobbyists, kids – everyone was writing VB6 code with little or no training.

Now, I don’t know about you, but I still have copies of a few of the programs I wrote when I was just starting out, before I’d actually gone to school to learn a thing or two about software development. There was some BASIC, some Pascal, and looking at it now, it’s all pretty ugly.

So let’s get real. Bad programmers write bad code. Good programmers write good code. RAD lets bad programmers write bad code faster. RAD does NOT cause good programmers to suddenly start writing bad code.

RAD tools can make a good programmer more productive, because they speed up the coding process without compromising the level of quality that a good programmer is going to achieve.

I haven’t yet completed my comparison of C# and VB .NET for Whidbey (I’m in the early stages of updating my eBook “VB .NET or C#: Which to Choose”), so I can’t tell you yet which one is likely to be more productive. But if it turns out that one language is, in fact, more productive than the other, that will be a major factor in the results.

And the bad code is?

And by the way, it may be time for us to reconsider our definition of “bad” code as well. It’s the beautifully structured, object-oriented, “good” C++ code that’s giving us all a huge amount of grief due to security problems (buffer overruns, memory leaks, games with pointers, etc.). Barring use of Win32 API calls, VB6 was remarkably immune to all of these problems. Makes you wonder who’s been writing the bad code, doesn’t it?

Why Dan does blog after all…

Monday, August 16th, 2004

So I was at a Geek dinner in Seattle that Robert Scoble put together. Towards the end of dinner, there I was – surrounded by numerous bloggers, all chanting the mantra of blogging (whatever that may be). Finally, I cracked. I couldn’t take it. I left, a broken hulk of a man, promising Robert that yes, I would join the ranks of bloggers. Only a couple of months ago I posted “Why Dan don’t blog” on the Apress blog site, explaining my reasoning for avoiding it. And you know what? Most of those reasons are still true. No matter. Consistency is overrated anyway.

I still don’t plan to post daily. I still have no interest in posting the minutiae of my daily life. I’m still not an information aggregator. But I came to realize that not every blog takes that approach. Maybe there is room in blogging for the mavens of the world.

I decided to name this blog Kibitzing and Commentary (which are sort of the same thing, just one has attitude). I know many blogs focus on specific topics; I have no intention of doing so. So if you aren’t interested in straying from purely technological topics, I encourage you to subscribe to the technology or software categories only.

So here goes. I hope you find what follows interesting and entertaining.
