Like many developers, I’ve started the process of getting acquainted with beta 1 of Visual Studio .NET 2005, along with the new versions of both C# and Visual Basic .NET. One thing has become increasingly apparent – Microsoft is effectively abandoning the Common Language Specification (CLS).
If you think back a couple of years to when .NET was first announced, the CLS was one of the linchpins of the .NET message. The idea that every .NET compatible language would be able to work seamlessly with any other .NET language was one of the key innovations of .NET. It meant you could create components in any language and easily use them from any other – a vast improvement over the previous generation, where mixed-language development required great care to match calling conventions, parameter count and type, and responsibility for reference counting and memory management.
There is no doubt that .NET is a huge improvement over previous approaches. As a component developer, and as someone who strongly believes in software reuse and component-based software development, I treated the CLS as gospel. I believed that every assembly should always be CLS compliant (even private code, in the hope that one day it might be refactored into components). Visual Basic .NET 2002 and 2003 produce CLS compliant code by default. C# does not, though it’s not hard to maintain CLS compliance, and the compiler can check it for you if you wish.
With Visual Studio 2005 it seems clear that Microsoft has effectively abandoned the CLS in favor of a new de facto standard: the set of language constructs supported by Visual Basic .NET and C#.
Want proof? Visual Basic .NET 2005 no longer creates CLS compliant code by default.
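For reference, CLS checking is opt-in at the assembly level. Here is a minimal C# sketch of my own (the `Tally` class is purely illustrative): mark the assembly with `CLSCompliant(true)`, and the compiler warns about any non-compliant member on the public surface.

```csharp
using System;

// Opt the entire assembly into CLS compliance checking.
[assembly: CLSCompliant(true)]

public class Tally
{
    // Private members are exempt -- the CLS only governs the
    // externally visible surface of an assembly, so this field
    // passes without complaint.
    private uint total;

    // Warnings CS3001/CS3002: the uint parameter and return
    // type are not CLS-compliant on a public method.
    public uint Add(uint amount)
    {
        total += amount;
        return total;
    }
}
```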
And you know what? Microsoft made the right decision in this case. Leaving these features out of VB .NET would have crippled the language (both in perception and in reality, though mostly the former – a subject I’ll return to later).
The three major non-CLS-compliant features I’ve seen so far are generics, unsigned types, and mixed access levels for property procedures. All of these are extraordinarily useful additions to VB .NET, and all will make it easier for VB .NET and C# assemblies to work together.
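To make that concrete, here is a hedged C# 2.0 sketch of my own (the `Registry` class is purely illustrative, not from any Microsoft sample) that uses all three features in a single public type: a generic class, unsigned types, and a property whose setter is more restricted than its getter.

```csharp
using System.Collections.Generic;

// A generic registry keyed by an unsigned ID.
public class Registry<T>
{
    private Dictionary<uint, T> items = new Dictionary<uint, T>();
    private uint nextId;

    // Mixed accessor access levels: anyone can read the next ID,
    // but only this class can change it.
    public uint NextId
    {
        get { return nextId; }
        private set { nextId = value; }
    }

    // Unsigned parameter and return types -- handy, but not
    // CLS compliant.
    public uint Add(T item)
    {
        uint id = NextId;
        items[id] = item;
        NextId = id + 1;
        return id;
    }

    public T Get(uint id)
    {
        return items[id];
    }
}
```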
What does this mean to other language vendors? Can they leave out these features with the argument that they are not necessary because they are not CLS compliant? Of course not – how can you be less than 100% compatible with the two major .NET languages? In effect, VB .NET and C# will define the new de facto standard.
The idea of a common language specification is a good one, and in truth the CLS and this new de facto standard are so close that it’s hard to see the change as a big deal. But language interoperability is important – especially if we are ever going to convince developers to truly adopt component-based development. Both VB .NET and C# should produce CLS compliant code by default and require an explicit developer choice to turn it off (just as systems should be secure by default – same concept). But I’m not suggesting they change the default attributes for VB .NET and C# projects. Rather, Microsoft should update the CLS to match the new de facto standard and make that the default setting for VB .NET and C#. This would give vendors of other languages, and the folks developing Mono, a clear common guideline to work with, and in the long term it would ensure that the interoperability promised with .NET does not devolve into an illusion.
How Microsoft Lost the API War by Joel Spolsky is essential reading for every Windows software developer. Check out his new book.
RSS feeds for sites referred to in this item:
Joel on Software
Ever since the movie “Revenge of the Nerds,” the word “nerd” hasn’t been so much an insult as a promise; or at least a suggestion that those kids who spend their time in geeky pursuits will one day employ the cool kids (or at least cash in some nice stock options along the way).
Well, stock options aren’t what they were (unless you work for Google), and the jobs of nerds and former cool kids alike are finding their way to India and the rest of Asia, but geeky pursuits continue among both kids and adults.
Today I’d like to tell you about two teenage nerds I met recently. Wait, I mean two teenage geeks. Nowadays, around here at least, geek is much more complimentary than nerd. But since it may vary where you are, I’m just going to alternate between them – you’ll know what I mean.
Anyway, I want to tell you about them, because their pursuits today may give us some insight into the future. More important, I think their stories may shatter some misconceptions adults often have about what it means to be a nerd today. Consider this a plank in bridging today’s digital generation gap (something I’ll be writing more about).
When I was a young nerd, my main hobby was Amateur Radio (or Ham Radio). I had my advanced class license by 14, and passed the technical requirement for first class by 15 (I didn’t quite make it on the Morse code). I could tear apart and repair a shortwave radio and build Heathkits for fun, and I spent weekends participating in contests and sometimes T-Hunts (first-generation geocaching, done the hard way).
We all know those kids have vanished. Heathkit is but a fond memory, and ham radio is largely relegated to senior citizens. The computer took over, and everybody knows that it’s on computers that you’ll find today’s young nerds. But let’s take a closer look at what everybody knows.
In “Revenge of the Nerds,” the technology nerds played with computers and built robots. In fact, any kid who used a computer was by definition a “computer nerd.” That is no longer true.
Almost every kid and teen today uses computers routinely, and most are on the Internet as well. And while it might feel nice to interpret this as a victory of sorts (yeah! all the kids are geeks now), it’s just not so. For most kids and teens the computer is a tool. They have little or no interest in the computer itself – just in what they can do with it. From using word processors for reports, to presentation software for classroom presentations, to online gaming, the computer is a means to an end.
So how did I recognize these two “computer nerds?”
I spotted the first one when we were having a casual conversation about computer monitors. That alone is not the mark of a nerd – monitors are a fact of life nowadays – but he used the term “frame rate,” which is not in the vocabulary of the average teen. Later we were discussing a dynamic state-machine-based AI he’s developing for a game he’s writing, and two things became apparent: first, that I’ve forgotten more than he knows about state machines (with an emphasis on the “I’ve forgotten” part), and second, that he definitely qualifies as a nerd (oops, I mean a geek).
The second one I met at a LAN party. When not in school he runs his own part-time business, charging friends and neighbors $25/hour to clean their computers of viruses, set up security, and install software. Yep, that definitely qualifies.
My nephew and his group of friends have quite a few geeks among them as well. In fact I had the pleasure of watching them participate in a STRUT contest last year in which their team field stripped a computer to its component parts, then reassembled it in a shockingly short time.
Ok, this isn’t enough to play “spot the nerd,” but the key thing to remember is this: just because a kid is using a computer doesn’t make him or her a nerd. You have to look further.
Oh, one more thing.
Both of the teens I mention here, and most of the others in this group, have one other interesting thing in common: they spend much of their time using and playing with Linux. Today, that is definitely the mark of a geek. Tomorrow? Who knows….
Most people in the technology field don’t realize how truly awful it is out there with regard to viruses and spyware. I’ve been traveling around the country promoting my book “Always Use Protection: A Teen’s Guide to Safe Computing,” and the stories I’ve heard are horrific. People are truly learning to hate their computers.
A recent post on the Scobleizer is but a small example, but one of the comments, which suggested that Microsoft build antivirus capability into Windows, is way off.
Having Microsoft build anti-virus protection into Windows would be a disaster.
Let’s ignore the antitrust issues. Let’s even ignore the question of whether Microsoft can be trusted to build a good antivirus product.
Right now there are a decent number of antivirus vendors. The competition among them is helping improve the overall protection of antivirus products. More important, the variety makes it very difficult for a virus/worm to target all of them (remember – viruses love to disable antivirus programs).
What would happen if Microsoft included antivirus protection in Windows?
It would immediately suck away a huge amount of the revenue that antivirus companies use to survive and to develop their products. Even if another vendor had a better product, the vast majority of people would just stick with the built-in antivirus program. The Microsoft antivirus program would quickly gain a dominant market share. We’ve seen that story before.
Not only would this reduce the competition that drives improvement in antivirus programs, it would create a huge juicy target for viruses. Just as most viruses now target Internet Explorer, most would target the built-in protection. Computer security is far too important to take a chance on this. Microsoft took the right approach with XP SP2, checking to see if an antivirus program is present. They should continue with this approach.
Why is it that hardware developers benefit from component reuse and software developers do not?
Jack Greenfield recently authored an article on MSDN called “The Case for Software Factories” that I encourage you to read. While I don’t disagree with this article or its sequel, there are some areas that I think deserve elaboration.
In his article, Greenfield distinguishes between economies of scale and economies of scope. Scale is where you can easily make copies of an existing item; scope is where you can create custom items by assembling components you already have. When it comes to scale, software has the edge: just duplicate a CD or post an installation file to a server. When it comes to scope, hardware has the edge.
A chip maker wouldn’t dream of designing a chip using individual transistors. They use libraries of components and advanced design software to assemble chips from the simplest ASIC to the largest microprocessor.
Why don’t we do this with software? What do hardware engineers do that we don’t?
Ultimately, it comes down to two things:
- Hardware components have their behavior specified in extraordinary detail. You can predict with great accuracy what even a complex hardware component will do in a system. Microprocessors, for example, come with documentation books that cover every aspect of the chip’s behavior from commands, to electrical characteristics, to the precise timing of signals.
- Hardware developers use sophisticated tools to build systems from components, whether designing a chip from predesigned elements, to building systems using commercial or custom components.
In software we have primitive versions of both. There is a third-party component industry (one that I’ve been a part of for years with my company Desaware, www.desaware.com), but it is tiny compared to the overall software market (I discuss this further in my article “Impact of Source Code Availability on the Economics of Using Third Party Components”).
The Heart Of The Problem
The fundamental reason that hardware developers are ahead of software developers in reuse is this: coding is cheap.
Intellectually, we know that software is expensive to develop. We know that coding is a very small part of the total lifecycle cost of software. But when we look at a software component, we still tend to think we can do it ourselves more cheaply than buying something off the shelf. The fact that coding is inexpensive gives us a distorted view of development costs.
If software components were specified and documented with the level of detail common for hardware components, perhaps it would overcome some of the reluctance people have to using components, but the nature of the software market prevents this. The component market is not large enough to justify the investment in documentation, customers do not demand it, and the marketing drive to constantly add features to software drives up the cost of specifying and documenting components even further.
Regarding high-level tools, it’s easy to justify the long-term investment in developing and deploying them. But the key words here are “long term.” Most companies do not think long term. Time to market and this quarter’s profitability are stronger driving forces.
The Hardware Edge
Why have hardware developers turned to components and invested in high-end development tools? Because the economics are compelling in both the long term and the short term!
The cost to develop an integrated circuit is enormous. Prototyping a chip is time consuming and expensive – you can’t just mock up a chip the way you might mock up a user interface. Actually building a test chip is expensive as well – especially for complex chips. Testing is expensive, often requiring complex hardware and programming.
In short, hardware development is so expensive both in reality and perception that at a certain level of complexity it is not only costly, it’s virtually impossible to do without componentization and use of sophisticated development tools that allow a high level of abstraction. It is this cost that forces hardware developers to buy instead of build components (overcoming “not invented here” syndrome). It is this cost that overcomes the temptation to build something because it’s fun – you just can’t rationalize the extra development work the way you often can in software.
Curiously enough, even as the low cost of software development prevents us from enjoying the economies of scope (code reuse), the low cost of software reproduction makes piracy a greater problem in software than hardware (though hardware piracy exists as well).
Software Factories: Assembling Applications with Patterns, Models, Frameworks, and Tools by Jack Greenfield et al. I haven’t read this, but if you like the MSDN article, you’ll probably like this book.
Code Generation in Microsoft .NET by Kathleen Dollard. Practical rather than academic, if you want to try code generation with .NET, start here.
RSS feeds for sites referred to in this item:
Kathleen Dollard’s Blog
My latest ebook “Telling Time with .NET: Build your own Internet time component” is now available.
I’ve always wanted my own “Atomic” clock, and even though self-updating clocks have been available for years, I never got around to owning one. Internet time servers make it possible to come close though.
I ended up using a number of interesting techniques to improve accuracy, and while advanced .NET developers probably won’t learn much from this ebook, I think it will prove an interesting read for beginning and intermediate developers. Topics covered include .NET sockets, inheritance, code access security and regular expressions.
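To give a flavor of the sockets material, here is a rough C# sketch of my own (not code from the ebook – the server name, the class names, and the absence of error handling and network-delay compensation are all my simplifications) of the kind of SNTP query an Internet time component makes.

```csharp
using System;
using System.Net;
using System.Net.Sockets;

public class SntpSketch
{
    // Decode the server's transmit timestamp from a 48-byte SNTP
    // reply: a big-endian count of seconds since 1900-01-01 UTC,
    // starting at byte offset 40. (The fractional-second bytes at
    // offsets 44-47 are ignored in this sketch.)
    public static DateTime DecodeTransmitTime(byte[] reply)
    {
        ulong seconds = 0;
        for (int i = 40; i < 44; i++)
            seconds = (seconds << 8) | reply[i];
        return new DateTime(1900, 1, 1).AddSeconds(seconds);
    }

    public static void Main()
    {
        // 48-byte request; 0x1B in the first byte means
        // SNTP version 3, client mode.
        byte[] packet = new byte[48];
        packet[0] = 0x1B;

        UdpClient udp = new UdpClient("pool.ntp.org", 123);
        udp.Send(packet, packet.Length);
        IPEndPoint remote = null;
        byte[] reply = udp.Receive(ref remote);
        udp.Close();

        Console.WriteLine("Server time (UTC): " + DecodeTransmitTime(reply));
    }
}
```

A real component would also measure the round-trip delay and compare the server timestamps against the local clock to improve accuracy – which is exactly the sort of technique the ebook digs into.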