Dan Appleman: Kibitzing and Commentary

My personal blog

A Developer's View of Vista

I must admit, I rather like Vista. Ok, maybe “like” is too strong a word. I certainly don’t dislike it. At the same time, I have yet to see benefits that justify the learning curve (which is more a nuisance than a real problem). Maybe that will come with time.
I’ve played with it enough now to have come to some tentative conclusions about it – at least from a developer’s perspective.
First, I think upgrading an existing system to Vista is generally not worth the trouble. There’s still enough software that is “quirky” under Vista, and it’s demanding enough on computer resources, that if you have an XP system that’s working the way you like it, you should leave it alone. I don’t believe in upgrading existing OSes in general – there’s plenty of time to do that when you get a new system (which you probably do every year or two anyway).
So that means going for a new system – no big deal, as a decent Vista-capable system is well under $1,000. But here’s where I’d suggest going a step further. As long as you’re getting a new system anyway, get a 64 bit system and install Vista X64. The performance of 64 bit Vista on a fast machine is very nice indeed.
Next, install Virtual PC 2007 and bring up a 32 bit system of your choice (XP or Vista – or both), so you can be sure you’ll be able to run any other software you might need. Be sure to install the Virtual PC additions – they dramatically improve performance.
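One wrinkle worth knowing about with this kind of mixed setup: your code may run as a 32 bit process inside a VM one minute and as a 64 bit process on the host the next. Here’s a minimal sketch (plain .NET 2.0, nothing Vista specific) of how managed code can tell the difference – the WOW64 check relies on an environment variable that Windows defines only for 32 bit processes running on a 64 bit OS:

```csharp
using System;

class BitnessCheck
{
    static void Main()
    {
        // IntPtr is 8 bytes in a 64 bit process, 4 bytes in a 32 bit one.
        bool is64BitProcess = IntPtr.Size == 8;

        // Windows defines PROCESSOR_ARCHITEW6432 only for 32 bit processes
        // running under WOW64 on 64 bit Windows.
        bool isWow64 = !is64BitProcess &&
            Environment.GetEnvironmentVariable("PROCESSOR_ARCHITEW6432") != null;

        Console.WriteLine("64 bit process: {0}", is64BitProcess);
        Console.WriteLine("32 bit process on 64 bit Windows: {0}", isWow64);
    }
}
```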
How do you know that your system will support 64 bit Vista? Look at the vendor’s support web site and see if they are shipping Vista 64 bit drivers for the machine. If you see drivers and utilities released over the past couple of months that are either 64 bit specific, or explicitly state that they support 32 and 64 bit Vista, you should be in good shape. I’ve been working on a new ThinkPad R60, which installed Vista X64 just fine without the new drivers (leading me to suspect it was one of the systems they tested it on). The new drivers and utilities are nice, though, in that they support the ThinkPad-specific features (shock detection for the hard drive, fingerprint reader, custom TrackPoint control, etc.) better than the default Vista drivers.
I suspect with time I’ll find more things I actually like about Vista. But for now I’ll settle for the fact that I now have a reliable 64 bit development system to play with, along with several 32 bit virtual machines that run surprisingly fast. Oh yeah, the Aero interface does look cool. Not enough reason to upgrade, but as long as it’s there anyway…

The Ramifications of Google Custom Search

I’m a tech skeptic. Seriously. My first reaction to anything new is almost always doubt – especially if it comes with a ton of hype. And I stay skeptical for a long time. As a result, my track record for predicting which technology will be slow to catch on (or fail) is pretty good. Unfortunately, as with most people, my track record for predicting which technology is going to boom is average – I usually figure it out after it’s happened.
Once, I got it right: when I saw the Visual Basic 1.0 beta, I knew it was going to be huge and change the nature of software development. I responded to that change by launching Desaware.
This week I felt the same way when I saw Google Custom Search. Within 12 hours I had launched SearchDotNet.com – really as an experiment (and a tool for my own research, that is already proving useful).
The more I think about it, the more I’m convinced that Google Custom Search is going to do for search what VB did for Windows development – change the paradigm.

SearchDotNet.com – A Google custom search for .NET developers

So this morning I noticed that Google launched a new custom search tool, which lets you customize the Google search engine to search across a chosen set of sites and perform other customizations. Basically, a domain-specific search.
As I mentioned in my last post, I consider discoverability one of the greatest challenges facing developers today (it’s certainly the single greatest challenge I face in my daily work). Somewhere out there are answers to almost every technical problem – but how do you find them?
Using Google custom search to create a .NET domain specific search engine was a no-brainer. A quick visit to GoDaddy and who would have believed it: SearchDotNet.com was available!
So here it is – a Google-powered, domain-specific search for .NET. I’m still early in the process of adding sites. Some of the choices are obvious (MSDN, duh!). Others are sites where I’ve had the most luck finding answers to challenging problems.
I’m particularly interested in finding more “experts” sites – those that help answer really tough problems or carry advanced content, but that are often lost in the noise. These will get the “by_experts” tag, which allows them to really stand out.
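For those curious about the plumbing, a custom search engine is mostly just configuration: the sites and tags are described in an annotations XML file along these lines. Note that the site patterns and the internal label id below are illustrative placeholders, not my actual configuration:

```xml
<!-- Illustrative sketch of a Google Custom Search annotations file.
     Site patterns and the internal "_cse_..." label id are placeholders. -->
<Annotations>
  <Annotation about="msdn.microsoft.com/*">
    <Label name="_cse_exampleid"/>
  </Annotation>
  <Annotation about="experts.example.com/*">
    <Label name="_cse_exampleid"/>
    <!-- The extra tag that lets expert content stand out in results. -->
    <Label name="by_experts"/>
  </Annotation>
</Annotations>
```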
So, if you have favorite sites that you think just HAVE to be included in the list, please let me know (by email or in the comments). I’m not trying to build a list of all .NET sites – quite the opposite. There are plenty of aggregators (not to mention general Google search) that are great at searching everything. What I need (and am trying to implement) is a tool I can use for an initial search that has a higher probability of finding a good solution – especially on more advanced topics. Then, if it fails, I’ll go to the broader web search.
Plus, I’ll be adding content to the site on the topic of discoverability in general. Keeping up with rapidly changing technology is no easy thing, and hopefully I can make a contribution to that effort as well.

Update: 14 years have gone by and I’ve long since abandoned that site. However, the concept is still sound – and I’m using it now at SearchTheForce.com – a custom search engine for all things Salesforce.

The MSDN Wiki Project

I noticed earlier today that Rob Caron at Microsoft was glad I posted my first entry to the MSDN Wiki. While I really don’t consider my posting to be any sort of a milestone, it did remind me that the Wiki project is no longer under NDA so I can actually comment on it.
So here goes.
As someone who has been doing Windows software development since version 1.0, I have long considered MSDN the single most important product Microsoft has ever shipped – you had to have suffered through the painful lack of documentation early on to really appreciate what a revolution it was. The complexity of Windows programming is ever increasing, and MSDN remains the foundation that every developer relies on. While it’s true that for most of us the front end for searching MSDN is now Google, the content remains the gold standard.
But as good as MSDN is, it’s not good enough. There are far too many holes (and probably always will be – I doubt any doc team could keep up). Even now, it’s extremely common for me to have to search the web for solutions to problems – answers that should be in MSDN but are not.
None of the search engines are good enough for what is needed – a cross-linking of information (samples, best practices, caveats, and bug reports) relevant to each MSDN entry. This problem – discoverability of knowledge that already exists – is the biggest problem faced by any software developer today.
I’ve known about the MSDN Wiki project for a while, but have been too busy with other things to pay close attention. That said, I believe that the MSDN Wiki project is the single most important project going on at Microsoft in terms of software development.
My plan is to add content to it any time I run into something that I get stuck on and have to research – something that should have been in the docs in the first place. I invite and encourage everyone to join in.
I also challenge Microsoft to encourage every one of their software developers to contribute to those areas where they were involved in the development.
The MSDN Wiki project has enormous potential, and I am very excited to see it becoming a reality.
Check it out: http://msdnwiki.microsoft.com

Stunning Privacy Breach by AOL

By now you’ve probably read about the astonishing breach of privacy in which AOL posted the supposedly “anonymous” search records for 500,000 users over a three month period.
You can read more at:
SiliconBeat, TechCrunch, Digg, Reddit, and Zoli’s blog
Most of the comments on these sites point out the problem of people entering personally identifiable information in their searches – the idea being that if people search on topics that might identify them, and also search on topics that are embarrassing or illegal, the database effectively becomes a map to prosecution, blackmail, etc.
What most of the posts and comments miss is that the situation is even worse. Each search request also includes a very accurate (to the second) timestamp. So all the government would need to do to identify someone is match up a couple of requests to a government-owned web site by IP address and time (one can assume that while a company like Google might protect users’ privacy, government-owned web sites probably won’t).
So, to use a hypothetical example: if someone searches for how to pass a drug test, and the same user also paid a visit to the Department of Motor Vehicles and maybe a court site, it wouldn’t be too hard to pull the logs from those sites and see which IP address visited both at the times specified. Presto – you have some pretty solid evidence of what that user is up to, and a map of their searches (who knows what else it might turn up). Plus, since you now have their IP address, you can (as a tech-savvy prosecutor) subpoena their records from their ISP, and now you have some solid identification.
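To make the mechanics concrete, here’s a hypothetical sketch of that correlation. The record layouts are invented for the example (the real AOL data and any given server log will differ), but the point stands: a simple timestamp join is all it takes to attach an IP address to an “anonymous” user id, and two or three matching requests will usually narrow it to a single address.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical record types, for illustration only.
class SearchClick
{
    public int AnonId;          // the "anonymous" numeric user id
    public DateTime ClickTime;  // to-the-second timestamp from the search log
}

class ServerLogEntry
{
    public string IpAddress;
    public DateTime RequestTime;
}

class CorrelationSketch
{
    // Count, for each IP in a site's log, how many of one user's
    // click-throughs to that site line up in time. An IP that matches
    // several distinct clicks is almost certainly that user.
    static Dictionary<string, int> MatchIps(
        List<SearchClick> userClicks, List<ServerLogEntry> siteLog)
    {
        TimeSpan tolerance = TimeSpan.FromSeconds(2);
        Dictionary<string, int> hits = new Dictionary<string, int>();
        foreach (SearchClick click in userClicks)
        {
            foreach (ServerLogEntry entry in siteLog)
            {
                TimeSpan delta = entry.RequestTime - click.ClickTime;
                if (delta.Duration() <= tolerance)
                {
                    int count;
                    hits.TryGetValue(entry.IpAddress, out count);
                    hits[entry.IpAddress] = count + 1;
                }
            }
        }
        return hits;
    }
}
```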
Aside from being a gross violation of trust on the part of AOL, this represents a threat to the very future of the Internet. If every search you perform becomes part of your permanent record, how will that impact search?
One thing is clear – AOL cannot be trusted. This is too great a mistake to just brush off. Google has at least shown a willingness to protect users’ information, going to court over exactly this kind of data. I don’t know where Microsoft stands at the moment – if anyone has information on their record, please feel free to comment.

Fun Buying From Dell

Joel Spolsky just posted an item on Why Dell.com Still Feels Like Buying A Used Car that describes how Dell’s attempt to segment their customer base makes it that much harder to buy a computer (and know you’re getting a good deal).
I do have two small items to add:
First, they aren’t just trying to make more from business customers – they’re trying to make more from all customers and manage their supply chain efficiently. Thomas Friedman writes about this in his fantastic book “The World is Flat: A Brief History of the Twenty-First Century” where he convinces Dell to trace the history of all the components that make up his laptop.
Second, assuming you aren’t buying in volume and can’t negotiate a better deal, here’s a hint – always check prices on both the consumer and small business sites. The consumer site may seem cheaper, but they sometimes stack the small business site with serious rebates and premium service plans that can actually make it less expensive for a comparable or better machine.
For the biggest bang for the buck on PCs, the best deals are often the refurbished units or discontinued models, where you can get 6-month-old technology at a substantial discount over the latest and greatest. I discuss this in my article “The Best Deals on Desktop PCs”.

Real Geeks Use Tools

So today I saw a funny series of Blog posts starting with Robert Scoble’s defense of his “Geekhood” after a post by someone named Cody who hates fake computer geeks.
What’s interesting about these posts is the examples that both use to define geekiness. Cody complains that Scoble doesn’t host his own Blog software. Scoble defends his geek credentials by citing past experience installing NT 3.5. Either way, those definitions don’t reflect the reality of the information age.
To put this in context, let’s think back 15 years or so to the Visual Basic story. Here was a tool that provided a high level of abstraction over Windows. Who were the geeks? The C++ programmers who blew off VB as a “toy language” or “glue language”, or the millions who adopted VB, either as their first language or by migrating from another language?
The answer is obvious – both were. The only difference was that the VB geeks were much more productive (for a wide class of applications).
The world has changed, of course, and neither VB .NET nor C# provides the kind of abstraction levels that are needed going forward. We don’t have a tool that relates to the .NET Framework the way VB related to the Windows API. Or, put another way: VB was incredibly productive because it provided a level of abstraction over the underlying API, for which C/C++ was the “first class” language. Today, VB .NET and C# are the “first class” languages for .NET – but we don’t yet have that new paradigm, that new level of abstraction, that will bring us to the next level (of geekiness, as it were).
Or do we?
At least in one area, I’m beginning to think that we do.
When I look at ASP .NET, I see lots of great components and features for building great web applications. At the same time, the prospect of building a site with it is – well, about as exciting as Hello World was in C back in the 90’s. I’m working on a project now (not ready to talk about it yet) that is web based, and building it from scratch wasn’t even a consideration.
For web applications, tools like WordPress and CMS systems like Plone, Drupal, and DotNetNuke are compelling platforms on which to base new applications. Their open source nature and flexible architectures assure extensibility, in much the way that VB’s support for custom controls allowed the language to do things its developers never imagined.
This, by the way, is something Microsoft should pay close attention to – the vast majority of CMS systems today are LAMP based, and that is what might cost Microsoft the web platform war (not the quality of the platform itself).
Anyway, I digress. Cody, Robert – you’re both geeks in my book.
And for the record, this particular Blog runs on WordPress, which is in fact hosted on my own server – not because there is any geek value in doing so, but because my incremental cost of doing so is zero (which is, coincidentally, the cost of Robert’s hosting as well).