Archive for the 'Technology' Category

New course on Force.com for .NET Developers

Tuesday, February 26th, 2013

I’m pleased to announce my latest Pluralsight course “Force.com for .NET Developers”. This course is a prequel to my course “Force.com and Apex Fundamentals for Developers”, intended specifically for .NET developers who are curious about the platform.

For more information, check out Pluralsight’s blog post announcing the course.

Force.com is the next Visual Basic

Monday, September 24th, 2012

I just came back from the Dreamforce conference with an epiphany – Force.com is the next Visual Basic. Some less experienced software developers might think that’s an insult, but those of us who have been around know that it’s not merely a compliment – it’s an observation that, if true, represents a potential tectonic shift in our industry.

To understand why, I need to take you back over 20 years.

When Visual Basic 1.0 came out (I participated in the original beta program), the reactions to the product fell into three categories:

  • Most professional C++ programmers dismissed it. VB was a “toy language” or a “glue language” for components – not for serious software development.
  • An increasing number of software engineers embraced the language because, to put it simply, when it came to desktop applications you could be an order of magnitude more productive in VB than in C++. It may not have had the stature and features of a “real” professional language, but it sure was profitable to work in.
  • VB was easy enough for anyone to use, so everyone did. Doctors, lawyers, students – millions of VB developers sprang up out of nowhere and wrote a lot of code. Much of it was very bad code, but that’s what happens when a bunch of amateurs get in the game. Entire book, magazine and training industries grew up to help them get better, and many of them did and built entire careers around the platform.

By the time VB6 came around, it was the most popular software development language and platform in the world. Simply because it was easy, and it was productive.

Why was it productive? Because VB put an abstraction layer over the Windows API that was infinitely easier to use than coding to the native API or other available frameworks such as MFC or ATL. You couldn’t do everything in VB6, but you could do most of what you needed, and could call the API directly if you really needed to. Having a rich set of available components to purchase didn’t hurt either.

Microsoft did a lot of things right building the VB community. They had great developer and ISV relations. They supported several conferences. There were books, documentation, whitepapers and so on. They really set the standard on how to build a platform.

Then they created the .NET framework.

There was a lot of negative reaction from the original VB6 community towards VB .NET, some calling it “VB .NOT” or VB.Fred (coined by Bill Vaughn). Some programmers made the transition. Some switched to C#. But two things were clear. First, VB .NET was indeed a powerful, serious, professional language and platform for software developers. Personally, I love it, and still use it all the time. But it was equally clear that VB .NET is not easy. In fact, the entire .NET framework is robust, powerful, sophisticated and complex. It’s a great platform for software developers, but is it a platform that makes it easy for non-programmers to write line of business applications? Not even close.

Both VB .NET and C# are native languages to the .NET framework – the Windows API of today’s software. Missing was the magic of the original VB – that layer of abstraction that made it easy for anyone to write software.

I’ve been searching for that magic for a long time. I kept waiting for it to appear out of nowhere the way VB 1.0 did. I sure didn’t expect it to sneak up on me from behind.


Specialization is for Insects

Monday, April 25th, 2011

Recently, for a variety of reasons, I’ve been asked about my background. It’s a question that can take some time to answer. To explain why, allow me to share one of my favorite quotes, from Robert A. Heinlein:

Specialization is for Insects

A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.

I can only do about half of these myself, but I’ve always agreed with the sentiment. Back in college my advisors all explained why it is important in one’s career to specialize. But I’ve always chosen breadth over depth. My ideal has never been the specialist, but rather, the synergist – someone who can combine multiple skills and talents to come up with creative and unique solutions to problems.

Some would argue that when one tries to do many things, it becomes impossible to do any of them well. This is not true. What is true, is that if you do not specialize, it becomes impossible to be the very best at something. I know this, because for a time in my career I did specialize in a specific area of technology and became one of the best at it anywhere. Now, I can safely say that I am not the best at anything. But it does not mean I don’t do a lot of things very well.

What might Heinlein add to his list if he were writing for today’s technical society?

A human being should also be able to start and run a company, speak in front of a group (any size), write a book, film and edit a video, plan an event, write and place an ad, teach a class, read or write a financial statement, manage investments, work with spreadsheets, write production quality code (on several platforms and in multiple languages), handle a TV or radio interview, mentor a kid, build a computer, wire a network, play an instrument, configure a firewall, architect a complex software system.

Ok, that might not be his list, but it is at least part of mine. It turns out that if you spend time doing a lot of different things over enough years, you can actually become surprisingly good at them. You can even find connections – ways different knowledge sets overlap – that a specialist might never see.

So if you are still early in your career, I encourage you to create your own list. Don’t limit yourself to the confines of what you are taught in school or at work, or what you think you are supposed to be learning. And please don’t limit yourself just to technology – there’s a lot more going on out there.

And if you’re visiting this site to gain some insight into my background, suffice to say – it’s a long story. I like to think that were it possible for me to meet Heinlein today, he would, after some conversation, judge me to be a human being. Coming from him, I would consider it the highest of compliments.

Putting a Machine to Rest

Sunday, June 7th, 2009

You’d think getting rid of an old computer would be easy.

Now, if it’s not a very old computer – say, a year or three, it’s easy: just restore the computer from its restore partition or backup DVDs, run a program like Eraser to overwrite the unused space with random data to make sure no personal information is left, and give it to a friend or donate it to a local organization.

But what if it’s a really old computer – say 10 years or so? In that case you have a much bigger problem. Disposing of the hardware itself won’t be hard – your local electronics recycler will be glad to take it off your hands. But that’s the easy part.

I know what you’re thinking. What am I doing with a 10 year old computer? Well, as you know, I run a software company. And we still occasionally get support calls on very old legacy software, which means we need to maintain the ability to debug or build some pretty ancient code. So when a new machine comes in, the previous machine remains frozen in time.

Now you may think you don’t have this problem, but let me ask you – how long have you had a computer? It may well be 10 years or longer. If so, you may have some old documents from back then that you’ve been carefully backing up or copying to your new machines each time you upgraded. But can you read them?


This New Machine

Friday, May 22nd, 2009

About two years ago I wrote an entry titled “This Old Machine” in which I described my search for a travel laptop – something to use on casual trips when I didn’t need to lug around the full sized laptop I used for conferences. I had some specific requirements – it had to be small and light, and low stress – meaning that if it did get lost, broken or stolen, it wouldn’t be the end of the world. And while the goal was to use it primarily for Email, web access, and storage/editing of vacation photos, it had to have enough horsepower to handle serious applications in a pinch: Visual Studio, SQL Server Express, etc – the kinds of applications that a professional developer might need to use.

At the time, I ended up upgrading an old Thinkpad 240 – a machine that today would be called a Netbook, but that anticipated that class of machines by almost a decade. And it served tolerably well. It was slow enough to be annoying, and the battery life was worthless, but it was good enough.

This summer I wanted something better, and with all the talk about netbooks I decided it was time to take a look and see if any of the new machines would serve my needs. After reviewing specifications and reviews, I ended up snagging an Asus Eee PC 1000HE.

Honestly, I did not have high hopes – laptops are almost by definition studies in compromise. But I ended up being blown away. This is a nice machine. You can read the full review in my gadget column. Suffice to say that with a 2GB memory upgrade, it has plenty of power for casual development work, is easily small enough to fit into a book bag or backpack, and at a hair over 3 pounds with over 7 hours of battery life is truly portable.

Oh yes, and I will never buy another laptop without a multi-touch touchpad.


Why The Author’s Guild is Wrong about the Kindle

Friday, February 13th, 2009

The Author’s Guild has objected to the text-to-speech features of the new Kindle, suggesting that it somehow jeopardizes the rights of authors (See: will lawyers kill the Kindle). They are wrong on many counts.

First, it is not a copyright violation.

If you read a book out loud, is that different from reading it silently? If someone reads a book to you, does that mean you both have to buy a copy of the book? What if you hire someone to read to you? Of course not. So why would an automated reading device be any different? It is not.

Now a true audio book is different from a printed book. Why? Because it is a derivative work – a performance of a book. It is a new work that is derived from the original.

Some might argue that text-to-speech is also a performance of a work and subject to a new copyright – and it would be, if you tried to sell and market such a work. A similar situation exists with translations. If you wish to translate a book and sell the translation, you have to get permission from the copyright holder. But if a friend comes over to read a book in a foreign language and translates it for you as they read, that is perfectly fine. Text-to-speech is that high-tech friend.

But the copyright argument is not the biggest reason that the Author’s Guild is wrong about the Kindle. The real problem is that they are acting against the best interests of authors.

Here’s why.

Let’s consider audio books on CD in two categories. An Amazon search in general fiction shows 13,867 results; in SF and fantasy, 1,891 results.

Why would someone buy an audio book? Possibilities include:

  • Unable to read (visually impaired)
  • Too lazy to read
  • Wants to utilize commute time (while driving, on public transit)
  • Enjoys the performance

Let’s assume that the first three of these represent 75% of the market, and that this segment can be replaced by text-to-speech. Let’s also assume that few people would buy both the print and the audio book. Since audio books cost more than print books, text-to-speech technology should result in some drop in income to these authors as people choose to buy the print book instead of the audio book. If audio books represent 10% of a book’s total sales, and if we assume the audio book pays an author twice what a print book does, the author will lose 50% of 75% of 10% of their income – a drop of 3.75%.
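The back-of-the-envelope arithmetic above can be checked directly; here is a minimal sketch, with all the percentages taken from the post’s hypothetical assumptions rather than real sales data:

```python
# All inputs are hypothetical assumptions from the argument above,
# not real market figures.
audio_income_share = 0.10   # audio books' share of a book's total income
replaceable        = 0.75   # fraction of audio buyers text-to-speech could replace
print_pay_ratio    = 0.50   # a print sale pays the author half what an audio sale does

# Each replaced audio sale becomes a print sale, so the author keeps half
# of that income and loses the other half: 50% of 75% of 10%.
loss = audio_income_share * replaceable * (1 - print_pay_ratio)
print(round(loss * 100, 2))  # → 3.75
```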

Of course, this would have a much greater impact on audio book publishers – but then, why isn’t the audio book publishers’ guild complaining? Surely the Author’s Guild wouldn’t make such a fuss over a 3.75% drop in authors’ income.

Especially when you consider the following:

For printed books, Amazon lists 403,000 results just in general fiction, and almost 90,000 books in SF and fantasy. Put another way, maybe 3% of printed books have audio books available. If there’s one thing we know about the market, it’s that when prices drop, people tend to buy more. Text-to-speech effectively reduces the cost of audio books, which means people will buy more – and now they’ll be able to choose from any title, not just those with audio books available. Ultimately this will benefit far more authors as book sales increase overall.

While the numbers I use are largely hypothetical, the principle is clear – text-to-speech is good for authors. It makes their existing books more accessible and opens them to markets (commuters, visually impaired) that were otherwise closed to them. Authors win. The consumer wins. A few authors might lose a small amount. And audio book publishers potentially lose – they will have to market their goods purely on the quality of their performers, not just on the fact that it is an audio book.

The Author’s Guild should live up to its name and acknowledge the fact that the Kindle’s new text-to-speech feature is neither a copyright violation, nor is it counter to the interests of authors.

Tax time

Monday, February 2nd, 2009

As an active software developer, I know that technology advances rapidly. My nose is rubbed in that fact every day as I work to keep up, catch-up, and occasionally learn something new. It therefore leaves me somewhat bemused when I am surprised by huge changes in technology in areas that I don’t focus on daily. Intellectually I know they must be changing, but I’m too busy to pay attention to them, and when I do – I discover all sorts of surprises.

For many, many years tax preparation time was a routine – buy the latest edition of TurboTax, do my taxes and send them in. The only big change was switching to e-file from paper returns some years ago.

But this year that routine was shaken up. I was about to buy TurboTax when I saw that, for some reason, this year’s version was getting one-star rankings. What could have happened? I wrote about this in my gadget column in “TurboTax takes a wrong turn – will TaxCut become the #1 tax software”.

Because that price increase (since reversed) would have doubled my costs, I started looking for other options for the first time in many years. One of the things I found was that there were numerous online options for doing taxes. What seems to have happened is that the IRS was planning its own free online filing system (it’s far less expensive and far more accurate for them to process an electronic return than a paper one). Tax preparation software companies got together to oppose this and created the “Free File Alliance” to try to protect their revenue. The way they do this is by using their free filing options to upsell other features – and to charge for filing state returns. Though officially this free filing is limited to incomes of $56,000, in fact many of the vendors don’t abide by this – neither TurboTax, TaxCut, nor TaxAct mentions any income restrictions.

So, to my surprise, instead of reviewing the tax preparation software packages, I found myself first reviewing the free online services (See Free tax return software reviewed: TurboTax vs. TaxCut vs. TaxAct).

What really floored me was TaxAct. Their free edition could even handle my return (which is moderately complex). And the upgraded version is only $9.95. Now, I wouldn’t actually use it for my return – I find standalone software has other features that are important to me (something I’ll discuss when I review the software packages) – but it demonstrates that there has been some real progress in the area of tax return software that I had been completely oblivious to.

Makes me wonder what else I haven’t noticed recently….

On Device Convergence and Human Nature

Wednesday, November 19th, 2008

This week, Wired posted two articles on device convergence: Five Gadgets That Were Killed by the Cellphone and Seven (More) Gadgets Killed by the Cellphone. I responded in my column first with Five Gadgets that were supposedly killed by the cellphone, but aren’t dead yet and today with Five gadgets that won’t get killed by the cellphone.

It wasn’t until my second response that I tuned in to the real philosophical difference between my approach and that of Wired. I get a sense, reading the Wired columns, of a fundamental assumption that convergence is good and inevitable, and that ultimately cell phones will become good enough and powerful enough to effectively obsolete everything else.

Now, I’m enough of a science fiction fan to concede that someday this may be true – it’s quite possible that in a century or so we’ll all have cell phones embedded in our skulls providing full 3D sensory communication and gaming experiences, while shooting photos through our optically enhanced eyes and quietly irradiating our brains.

But barring a breakthrough of that magnitude, it seems to me that the Wired columns, like those of many authors, are far too focused on the technology and not nearly enough on the people who use it, and how they relate to it. For example: it’s easy to look at teens and young adults, see how they have adopted and use certain technology, and assume that is the future for everyone. But you can’t really extrapolate that way until you’ve seen what happens once they get older. The constant text messages and Facebook posts of the young 20-something professional might be replaced by something else once she’s a harried mother trying to maintain a career.

I invite you to visit my column, where I’ll always try to remember that the people are more important than the gadgets. And I’ll continue to post additional commentary here on my blog that doesn’t quite fit in the column itself.

Gadgets, gadgets and more gadgets

Wednesday, November 12th, 2008

Ok, I’m a geek. I love gadgets. But, I have a confession to make – I’m a flawed geek.

You see, a true gadget nut loves gadgets for their own sake. If it looks cool and has great features (the more the better), it’s great – that’s all that matters. Cost, value, reliability, technical support, practicality and usefulness – these tend to stay in the background; minor details to be excused or explained away.

I’m too much of a skeptic to be a true gadget freak. I think an intuitive and easy to use gadget with fewer features is far better than a fancy complex gadget with tons of features (most of which you’ll never use anyway). I’ll take six month old technology if it will save me 50% off the latest and greatest. And in today’s economy, I really want to make sure that every dollar I spend is worthwhile.

I’ve always wanted to write about gadgets, and I finally found the right opportunity. I’ve become the National Gadget Examiner at It’s an interesting concept – a sort of virtual newspaper that seems to be succeeding at attracting competent people to write about their topics (I’m speaking of the other writers, called Examiners – you can judge my competence for yourself).

I’m certainly having fun at it so far – gadgets is a broad topic (at least as I interpret it), and I hope my skeptical (and cheap) attitude will strike a chord. I invite you to check it out at Gadgets Examiner.

I’ve been cross-posting the articles to a new blog as well. I’ll also be using that for more in-depth articles that don’t fit on the Examiner site.

The Email Encryptor Reborn

Sunday, March 2nd, 2008

Email obfuscation using Javascript is a popular way to hide Email addresses from spam-bots. One of the most popular of these scripts was written by Jim Tucek at the Academy of Science at St. Louis. Unfortunately he hosted it on one of their student pages, and the other day (during a website move), all of those pages were taken down, leaving web developers unable to create new encrypted Email addresses for sites that use the script. After getting a panicked call from a friend (who has many clients using the script), I decided to step in and see what I could do to help. The result can be found on my new Email Encryptor page.

This actually posed an interesting challenge. Jim used RSA encryption (with roughly 10-bit keys) to encrypt the strings. I’m not sure why he chose an asymmetric algorithm instead of a symmetric one, as the primary result of this choice is to make it hard to create new Email addresses. (With a symmetric algorithm you could use the decryption key to encrypt as well, so the loss of the original page would have had little impact. With an asymmetric algorithm, having the decryption key offers no help.) Fortunately, the algorithm itself is reasonably simple and widely published.

Though I use cryptography a great deal, this was actually my first time doing the math, and translating some of the mathematical requirements into computer algorithms took a bit of thought. It was also the first time I’d really thought about modular arithmetic. Finally, there was the Javascript coding itself. Though my primary expertise remains VB .NET and C#, I’ve gotten to do quite a bit of Javascript in the past couple of years (both in web projects and some Ajax work), so that part was relatively simple. By the way, I love the Visual Studio 2008 Javascript debugger – it doesn’t get much attention, but it’s very cool. As a side effect, I also got a nice prime number example to use next week at SD-West when I talk about the TPL – I mean, “the Parallel Extensions to the .NET Framework”.
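To illustrate the asymmetry, here is a toy sketch of the idea (using tiny, well-known demo primes and a made-up address – not Jim’s original keys or script): the page ships the decryption exponent so the browser can display the address, while creating new encrypted addresses requires the separate encryption exponent.

```python
# Toy RSA demo with textbook demo primes (hypothetical values, not the
# original script's parameters). Real RSA uses enormous primes.
p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # encryption exponent (what Jim's lost page held)
d = pow(e, -1, phi)        # decryption exponent (what the web page ships);
                           # requires Python 3.8+ for modular inverse via pow

def encrypt(text):
    # encrypt each character code separately: c = m^e mod n
    return [pow(ord(ch), e, n) for ch in text]

def decrypt(blocks):
    # decrypt: m = c^d mod n
    return "".join(chr(pow(c, d, n)) for c in blocks)

address = ""   # made-up example address
cipher = encrypt(address)
print(decrypt(cipher) == address)  # → True
```

Shipping d lets every visitor’s browser decode the address on the fly, but knowing d alone doesn’t give you e – which is exactly why losing the encryption page stranded so many sites, and why a symmetric cipher would have avoided the problem.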

So ultimately I was able to get it to work. It’s now live and hopefully will serve as a good resource to those left stranded when Jim’s page went down, and the rest of us in our never-ending battle to fight spam.