
"Enterprise software manufacturers produce inferior products because they can."

Because they can is only ever half the story. Sure, producing inferior products is feasible, but is it optimal? Because they can does not address the question of optimality. Perhaps it's as simple as arguing that producing inferior software is cheaper.

But I suspect that the problem runs deeper. It's no cheaper to write "action this proposal" than anything else, but it is the type of thing that is commonly written by absolute morons. Absolute morons enjoy using pseudo-intellectual phrases. In a non-competitive market a firm that employs absolute morons doesn't face competition from firms that have intelligent and competent workers. So your software is written, at least in part, by morons who like to action things. It's not just because they can, it's also because they enjoy it.

Evan: "producing inferior products is feasible, but is it optimal? "

Yes, the issue is not so much that the products are not as good as they could be - a Toyota Matrix isn't as good a car as a BMW, but if I'm buying a car to sit in the driveway and rust, I'd rather have an inferior car at a low price than a superior car at a high price. The problem is that the software isn't as good value for money as it could be. An example some readers might be familiar with is Editorial Express. Sure, it's not a great software system. But it works perfectly well, and it costs a small fraction of the amount that commercial editorial management systems, like Manuscript Central, cost. So, sure, it's inferior, but it's good value, so it's fine.

I was involved in setting up the TA management software. It was a major job. We had to explain to the software guy what we wanted. But before we could do that, we actually had to stop and think about what we were actually doing. That was the hardest part. Who did what, in what order, and what information did they need to do it. None of us knew what the others were doing, and it was only when we all sat down with the programmer that we figured it out. Because we all had to explain our bits to him.

I could actually manage Carleton's in-house TA system more or less OK. It is Banner (the big US university software system) that always defeated me. As a young honours supervisor, I could actually work the clunky old in-house CP6 system, and get TRUTH, WISDOM, and CLASS, on the students and classes. But only the administrative staff, who use Banner every day, can actually work it.

And if we ever want to customise Banner, because Carleton isn't exactly the same as the typical US uni, it's a nightmare. "You can't touch that, or the whole system will crash!"

Building big software systems is also just really hard, and some parts are critical to the system and are harder to build than others. It's like an airplane. The hard parts are the airframe, wings, jet engines, avionics. A safe, reliable airplane need not be very comfortable (as anyone who has flown recently can attest). Same goes for big enterprise systems. A firm like Oracle has to devote huge resources to making the critical bits work. By the time they've done that, they don't want to spend tons more money on the software equivalent of fancy upholstery for the seats, and they don't have to, for the reasons you outline.

You forgot another reason - and this is true for most large enterprises with in-house IT shops. Very often the choice is not between enterprise software packages, but whether to build something in-house or buy, simply because most enterprises have some quirk or other that makes retrofitting prebuilt software difficult ("If you customize it too much, it will crash"). This further narrows down any choices you might have. According to most statistics (the Standish Group reports), about 80% of IT projects either run over budget, don't deliver what was promised, or are simply abandoned as they get too unwieldy. With that high a risk of failure, most IT managers are more than happy to buy a clunky package from a reputable vendor that does something, rather than try building it in-house and fail, which would most likely get them fired.

"The very nature of enterprise software means that the market for these products will be dominated by a handful of firms. Potential market entrants are deterred by the prohibitive start-up costs; the millions of dollars it takes to develop enterprise software from scratch. Incumbents meanwhile reap profits from their locked-in customer base. The costs of switching from one piece of enterprise software to another are so large that firms will tend to stick with their existing systems until they become seriously unsatisfactory."

Although it doesn't cost millions of dollars to develop enterprise software (at least not simple academic management software; some of the various enterprise management packages that exist are crappy, and as easy to build as the "I can't wake up!" app), this sentence holds some truth.

But maybe the problem here isn't poor competition in the software industry, but poor competition in the business (or academia, to stay within our example). If the costs of switching software are so great, and they really are, the drive toward better software over time should come from new firms entering the market with new and better software, driving the old firms with crappy software out of it.

My belief is ERP software sucks because it's mostly built to deal with the regulatory environment most companies/organizations have to deal with. Get rid of 90+% of overtly stupid regulations and you'll find software that looks much less arcane and painful.

An alternative explanation is that information technology has made information handling possible and generally productive, but that dealing with very big and diverse data requires more than exists right now - e.g. intelligent agents.

I prefer the former explanation - through significant experience in the area and a touch of confirmation bias. The complexity, redundancy, vagaries and sheer uselessness of government regulations, and the follow-on civil compliance distortion, creates the need for features that consume development/design time. Time that otherwise could be spent innovating simplicity and clarity. With development and SME resources tied up in regulatory knowledge and execution, e.g. Tax code or SEC regs, the available investment in innovation shrinks. Government regulation crowds out investment in innovation in organization management software.

Oracle and SAP make their money selling software that helps companies with compliance first, efficiency second.

Nick: "I could actually manage Carleton's in-house TA system more or less OK."

What frustrates me is that the TA management system hasn't changed at all since that initial exercise in which you participated. Everything that was put into the TA management system then has become The Way Things Are, and the TA management system has begun to actually dictate processes, rather than the other way around. The users don't sit down together as a group and talk about what works and what could be better, because everyone's busy, and who has an incentive to change things?

Satish: "most IT managers are more than happy to buy a clunky package from a reputed vendor that does some thing rather than trying building it in-house and fail which most likely would get them fired"

I can easily believe that.

Giovanni: "the drive to better software that arrives through time should be made by new firms in the businness arriving at the market using new and better software"

Interesting observation. And given that Canadian universities don't exactly operate in a competitive environment - it's extremely difficult for new universities to enter the market, for example - this might explain some of the frustrations in my life.

Todd - the original idea from this post came out of a chat with my nephew. He works in England, but has the same frustrations. Can compliance with US regulations be the explanation when the problem is international?

"most IT managers are more than happy to buy a clunky package from a reputed vendor that does some thing rather than trying building it in-house and fail which most likely would get them fired"

Totally. "No one ever got fired for buying IBM" is an IT industry idiom/cliche.

I once read an article in (I think) Wired magazine looking at this issue, particularly why the big companies seem never to have updated their software in fifteen years. The basic reason was that the code was so poorly written, style-wise, that updating it is actually near impossible. Any decent software written nowadays is designed to be as modular as possible, so updating one bit will have no effect on the rest; but a lot of the existing enterprise software was basically written as one giant gloop of code, so the costs to the software firm of updating are massive. Over time, though, these firms have basically ironed out all the unintentional bugs, and so can promise really good performance metrics, like uptime and data security, that they wouldn't be able to guarantee if they rolled out brand new software. When asked, the largest firms that purchase software are more willing to dance with the devil they know (99.9999% uptime, mediocre worker productivity) than the one they don't (99.9% uptime, improved worker productivity), because they've built up an internal structure over time to compensate for the limitations of the software, so they are not as affected by the shortcomings.

You would think, as I work for one of the accused, that I would have some insight into the issue, but I'm not sure I have much to add to what others have said. It is certainly true that uptime will win over a nice UI any time, that the market is oligopolistic, and that switching costs are massive. (Oligopoly without the switching costs produces different results, of course, as BlackBerry will tell you.)

I'm not here to defend the industry. I would add two other factors.

One is that enterprise software companies don't just sell software. This is present in some of the discussion above: vendor offerings are always a mix of software and services. All customers want some customization, and interoperability with the many different systems of a big customer is a huge problem (thousands of applications that need to act as a cohesive unit). This means that software flaws can be fixed by (or blamed on) the services/implementation effort, and leads to less clear-cut pressure on software quality.

A second is that the chain of principal-agent problems from the customer through to the software engineering teams of the vendor is long and daunting. The bigger the customer, and the bigger the vendor, the longer the chain. It's such a challenge that, to be honest, I'm surprised it holds together at all.

I'm not convinced that the mega-vendor market structure is the best one from a social good point of view, but any replacement will have to deal with this principal-agent chain or it will have to deal with replacing both N applications and also replacing N^2 interoperability constraints.
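The N versus N^2 point above is worth making concrete. A quick back-of-envelope sketch (my own illustration, not from the comment): with point-to-point integrations, every distinct pair of applications needs its own adapter, so the count grows roughly with the square of the number of applications, while a shared hub needs only one adapter per application.

```python
# Hypothetical illustration: adapter counts for point-to-point
# integration versus a shared integration hub.

def point_to_point_adapters(n_apps: int) -> int:
    """Each distinct pair of applications needs its own adapter: n(n-1)/2."""
    return n_apps * (n_apps - 1) // 2

def hub_adapters(n_apps: int) -> int:
    """Each application needs just one adapter to the shared hub."""
    return n_apps

for n in (10, 100, 1000):
    print(n, point_to_point_adapters(n), hub_adapters(n))
# At 1000 applications: 499500 point-to-point adapters vs 1000 hub adapters.
```

Which is part of why a big vendor's "it has it all" suite keeps winning: it internalizes most of those pairwise constraints.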

What Patrick said.

And... major software purchase decisions are typically not made by the end user.

People who choose the software want the use auditable. They want to be able to be accountable (ie produce the documentation trail) when doing tasks. Large systems do that stuff very well.

Also, and this is key: large systems come with default interfaces that are clunky but idiot-proof. I promise you, Nick, someone in the university could write an interface to your program to automate that task, or make the welcome screen look pleasant and intuitive. They are just too busy doing something else.

Joseph: "the largest firms that purchase software are more willing to dance with the devil they know (99.9999% uptime, mediocre worker productivity) than the one they don't (99.9% uptime, improved worker productivity)"

Interesting. Also uptime is much easier to measure than something like worker productivity, especially because the long term effects of a system on worker productivity might be quite different from the short-term effects. Features that seem really appealing at first, like being able to do everything by clicking menus, can turn into time sinks for the experienced user. This wouldn't be the only time when decision makers focus on what's measurable, and ignore stuff that's important but hard to quantify.

Tom: "Oligopoly without the switching costs produces different results, of course, as BlackBerry will tell you."

Yes - it would make sense that in a global industry a few million dollars in up-front investment costs wouldn't be as much of a barrier to entry as the ability to attract users.

Frances, I apologize for addressing my reply to Nick.

As others have said, it has to do with priorities all around (uptime/UI, capable designers busy on something else). Just wanted to emphasize that not only is uptime easier to measure, its effect on profitability is also much more visible, easy to measure, and *attributable*. If "the system is down", everyone can pass the blame to the vendor. If the system requires far more "clicking around" than necessary, well, line workers just aren't clicking fast enough. A.k.a. "It is what it is", so just deal with it. (I.e., there is a lot of organizational-structure pathology involved as well.)

A widely-known example of just the phenomenon I mentioned in the previous comment is of course the U.S. healthcare website rollout. It took a complete disaster before the "CEO" (the president) got directly involved with the high-tech requirements in all their complexity (and forced the organization to hire the right people for the job).

Chris J - I figured you were simply replying to Nick's earlier comment on this thread, which is perfectly appropriate.

Jeff - "If the system requires far more "clicking around" than necessary, well line workers just aren't clicking fast enough. A.k.a. "It is what it is" so just deal with it."

And, for the most part, people do. But there's both a direct and indirect organizational cost. The direct cost is obvious; the indirect one less so. Take, for example, our new curriculum management system. It requires Associate Deans to spend hours each year clicking 'approve' for curriculum changes, which is tedious and boring. The best way to avoid spending your life clicking is simply to avoid taking on administrative jobs at all. Unless you're the type of person who likes clicking buttons.

What is the impact on organizations if they attract into management people who enjoy clicking 'approve'?

Many excellent points have been made in this thread. I'd like to add that enterprise software is, more or less by definition, software that must be all things to all people. This notoriously results in clunky software, because those extra clicks do something you (for any given value of "you") don't need but that someone somewhere does.

Once upon a time this was why IBM couldn't write successful PC software. What happened was that their process had endless levels of review, in the course of which someone would claim that customers in Outer Mongolia absolutely needed the ability to do X. The result was something that was acceptable to everyone but was nobody's first choice.

But PC software in those days wasn't enterprise software. To get to that level, not only do you get clunky, you also get rigid, even though customizability exists. SAP, for instance, is notorious for forcing organizations to change their processes to do things SAP's way. It's also notorious that SAP installations are always behind schedule and over budget.

The fact that SAP has a market anyway shows something. What? I'm not experienced in this area, but my guess is that organizations decide that dealing with all the things mentioned, in this post and upthread, is better than dealing with the interoperability problems that come with having many less ambitious, and hence more streamlined, systems.

Note that individual users are typically not directly affected by interoperability problems. They see the problems caused, but not the benefits gained.

Gasp, is SAP still infuriating and clunky? It was air-dropped on our department fifteen years ago, and had holes that I could not believe such a world-wide system would tolerate. We could, for instance, create a bill of materials, but it was set up so that we could not be certain that we had saved it and transmitted it to the warehouse. I had to telephone the warehouse to make sure they had received it. Also, in their misguided urge to get our office to go paperless (ha-ha) they made it impossible to print out a copy of our BoM, because hey, you can always find it in the system. (We couldn't, usually.) So we learned to take screenshots, which was better than nothing and aided us when explaining why our department's efficiency had dropped 30% after getting the new system. There were other flaws, over which memory has thankfully drawn a veil. I took early retirement a year or so after SAP came, but still vividly remember whole mornings wasted in frustration and fruitless requests for assistance. Yes, they had seminars. No, they didn't help.

Until the Toyotas/Walmarts arrive to replace the GMs/Woolworths of SAP/Oracle, their profits will be bigger *because* the software is bad. In software more than any industry I can think of, incentives are perverse.

Thought experiment: imagine if software were easily customized by each user to her liking, data were stored in a way that could be readily adapted to any vendor's package, and software built according to best practice were reliable and resilient. It would mean the complete collapse of economic rents from needlessly intricate installation and parasitic support flowing to the sales and support morons Evan correctly identifies as the key assets of the current vendors.

For SAP, Oracle, or just a small shop with captured customers, it pays to be stupid.

Interesting post. A couple of thoughts from a wannabe (or startup, depending on your worldview :p) enterprise software provider:

1. Enterprise software has many characteristics similar to natural monopolies. Entering the market is very difficult, which leaves little competition once you manage to get inside. Not only is developing the software costly; earning customers' trust is costly too. Which is why (almost) only large firms can enter the enterprise software market when there is already a usable option (i.e., not a new market): a middle manager will probably not trust that a three-person startup can do what Oracle failed to do right. These costs usually grow super-fast as enterprises grow larger.

2. Not all users are buyers (or make decisions). This means, as noted in earlier comments, that the features you complain about are probably the ones not important to the people who actually decide. Back office departments usually have to deal with these problems because it is the front office that actually mandates requirements. This results in fairly suboptimal user interactions.

3. It is impossible to create a large system that accommodates everyone. This should mean that development does not stop once the system is implemented. Unfortunately, once the system is working "well enough", usually funding for development is cut drastically, which results in half-implemented software. As we saw in point 2, the person deciding "well enough" may not represent the opinion of all/most/any actual users.

4. Nobody really knows what the system ought to do. There are several interconnected issues: as Nick pointed out, the various stakeholders have no idea what the others require; when they introspect to find out what they want, they may not have thought everything through (leading either to requirements changes very late in the implementation, and hence delays and budget overruns, or to implementing the wrong system); and when they do figure out what they want, the developer may not understand it.

So, as an in-house developer of database-backed software systems (which typically use SAP and/or Oracle or another relational database as a backend), I have developed a few rules for building these systems. They require additional design and functionality not needed immediately, in order to meet future user needs. Engineered in at the beginning, life is easy. Not engineered in, life's a nightmare, both to maintain and to add functionality. Software companies make decisions early in their lifecycle that affect everything down the line, because they implement only what they need to meet the needs of their first, or most important, customers. The most famous case of this is probably the memory limit of the early MS-DOS operating system. By accident of fate and/or the market, they develop a dominant position. People become expert in developing against those systems, so the initial cost of implementation decreases. Add in a few overzealous sales and marketing types selling to unsophisticated buyers (typically not technologists) and you end up with the situation described. My rules:

1. Every field is searchable / sortable.
2. Every field is accessible through a hierarchical tree, including user annotations.
3. Every system has both a single-record and a multiple-record updatable user interface (this should include something like accepting Excel spreadsheets as inputs and outputs).
4. Every system has an API to a common language / data interchange format.
5. The reporting system is easily configurable, preferably by suitably trained end users.
6. The data model is as complete as possible, even for features not currently slated for use.
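A minimal sketch of what rules 1, 3, and 4 look like in practice: generic search and sort over any field, a bulk-update path, and export through a common interchange format (JSON here). All the names and data below are hypothetical, not from any real system.

```python
# Hypothetical sketch of rules 1 (search/sort any field),
# 3 (bulk update), and 4 (common interchange format).
import json

records = [
    {"id": 1, "course": "ECON 1000", "ta_hours": 130},
    {"id": 2, "course": "ECON 2020", "ta_hours": 65},
]

def search(rows, field, value):
    """Rule 1: filter on any field, with no special cases per field."""
    return [r for r in rows if r.get(field) == value]

def sort_by(rows, field):
    """Rule 1: sort on any field."""
    return sorted(rows, key=lambda r: r.get(field))

def bulk_update(rows, updates):
    """Rule 3: apply many {id: {field: value}} changes in one pass."""
    by_id = {r["id"]: r for r in rows}
    for rec_id, changes in updates.items():
        by_id[rec_id].update(changes)
    return rows

def export_json(rows):
    """Rule 4: everything is reachable through a standard format."""
    return json.dumps(rows, indent=2)

bulk_update(records, {1: {"ta_hours": 150}})
print(export_json(records))
```

The point of writing the helpers generically up front is exactly Halli's: engineered in at the beginning, adding the next field or screen is trivial; bolted on later, every new field means new code.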

Beyond that, I agree that the regulatory burden adds a greater degree of difficulty; when you're working with healthcare, for example, having your system HIPAA-compliant out of the box is a huge feature. Making changes to regulated systems incurs a testing cost almost equivalent to the initial development cost. A big problem here is that the regulatory agencies aren't technologically sophisticated, either.

As IT manager with similar concerns - some observations:

* There are "mature market" dynamics at play within many software categories serving the enterprise, so highly specialized requirements in a context of (s)low growth lead to few firms competing. The highly proprietary, pay-per-use model used by many enterprise software firms can also slow innovation. By contrast, high-growth software markets can see similar anti-competitive barriers through "tight integration" that encourages customers to use a vendor's product offerings with an "all in" adoption approach.

* Vendors with short-term financial pressures can also grind innovation to a halt with legacy system dependencies. A friend who used to work at one of Canada's largest and most successful software companies often complained that they couldn't attract fresh talent because their flagship products were all built with out-of-date code. A big problem in software companies is employee retention, so, interestingly, it was standard practice at this company to use the out-of-date codebase for competitive advantage: it helped ensure longstanding expertise didn't leave, because employees' skills became so specialized they wouldn't transfer well to other environments. Some new hires could be "de-skilled" into this model and get captured too, but the flip side was that some of the hot new talent needed to support succession and innovation refused to work with crappy legacy code and preferred "riskier startups" using newer programming frameworks. Although I'm told this "new talent" issue was partly addressed indirectly, the economic downturn of the past few years permitted the company to blissfully continue with incremental changes to its ancient codebase (and innovation be damned to a manageable trickle). Internally, some even refer to their flagship product suite as "Frankenstein", but large institutions keep buying it because "it has it all", and most can't distinguish the pretty interface makeovers from the 1990s technology it's built on top of.

* For customers, the problem with change comes down to opportunity cost and the crazily diverse set of requirements often expected of enterprise software. IT workers keen to support innovation are often up against the painful reality that few are willing to pay the price of adopting a promising competing platform. It's rare for a competing platform, built from nothing, to have it all from day one, so new marketplace entrants need customers willing to share the vision and *the risk* of adopting the underdog solution. There's also some pain to internalize when moving to a new platform, since "it may not all be there" from day one, and this or that business workgroup may have to wait a while for its requirements to be met. And who wants to volunteer for the greater good of moving "enterprise software" forward if it means their particular work area is impacted? Usually nobody!

So when the requirements analysis and selection takes place, if *everybody* is coming to the table saying "my requirements are mandatory - no compromise!" naturally the selection team will be biased towards the more mainstream, clunky solution that at some level "keeps everyone happy." Ergo, you have Banner, etc.

But there's some hope in a better mix of proprietary and open source solutions hitting the enterprise nowadays – I’m especially enthused about open source, with its more permissive software licensing model that forces the marketplace towards more competition at the service level, making monopolistic practices more difficult. This business model also permits greater customer involvement since users have a greater degree of control over the software, try-before-you-buy, and other advantages.

Halli - on your six rules: yes yes yes yes yes yes!

"I agree that the regulatory burden adds a greater degree of difficulty"

But you know what happens... people get fed up with working with the clunky user interface, and find a way to just download everything onto a spreadsheet, which goes onto the laptop, which is not password protected, and gets left at airport security or in a coffee shop or.... Forget Freedom of Information and Privacy Act protections!

George - fascinating insights into how the software labour market works. I may forward those to someone I know who's contemplating entering the industry....

Felipe: "Not all users are buyers (or make decisions)."

I absolutely agree that this is a crucial issue. I also suspect - and the tech people who are commenting can correct me if I'm wrong - that some buyers are under the impression that users are idiots, and either buy systems with limited capacity so that users can't mess anything up and/or don't listen to users' concerns.

Not having (yet) read all the comments:

Oh, you have PeopleSoft, too?

And course management software is also, well, not user-friendly. I'm currently forced to use Moodle, which has its points. But... I have to enter every evaluation instrument into the gradebook separately (or, in any event, I can't figure out how to "batch" process them, and neither can the support staff). And, since I give a daily quiz, that's 28 quizzes a semester. If I have the quizzes taken in Moodle, the scores can be auto-entered. But if I give the quizzes in class, I have to use an amazingly odd kludge to enter them manually.

But enough whining for the day...

A lot of software created over the last 30 years was bought and managed by people who were specialists in something other than software. Many, many of them truly did not know, and do not know now, what a computer actually is, and did not/do not know what software actually is.

Some of the staff and management were determined to use the computer system to reduce the head count. The rest of the staff guessed or suspected, and used the development process to sabotage the system so the head count could not be reduced.

Middle managers buying or managing software creation often realized that the new system would force a certain pattern of operation on the organization, and forced in their pet ideas. The C-suite, however, often had different and incompatible ideas.

Furthermore, management feels its job is to restructure and reinvent the pattern of organizational operation. So the software gets shuffled in mid-construction. As if one end of the bridge is for road traffic, and the other is for rail traffic. Chaos ensues.

The trouble runs the other way too. Software developers often have only the foggiest notion of how the organization actually does operate, and even less information on how the organization ought to operate.

The difference between working as a programmer for a non-computer company (a University or a retail chain) and for a computer company (Oracle, Google or Microsoft) is night and day. All the programming challenges are reversed.

Frances -- I was being sarcastic/cynical. No one in management *likes* to click "approve" endlessly themselves. They might like to delegate and then blame subordinates for slowness. Or they may just suffer like everyone else, because of higher management (or their own) decisions. But my main point was that management doesn't at all like to be dependent on someone they don't (nominally) "control" (uptime problems) vs. someone they do (subordinates with efficiency "problems").

I once worked selling enterprise software to the aviation industry. I was the software guy and a friend of mine was the marketing guy. When he got word of a project, ideally before the actual RFP, he would schmooze around the company and produce a corporate chart, based on the official corporate organizational chart, but highlighting whom he considered to be the key players in the project. This included the purchaser who would sign off on the project, the specifier who would write the RFP, the payer who would pay for it, the director who would be overseeing the project, the end users who would be stuck using it, the staff officer pushing it through the upper chambers, the IT team that would be operating it, and so on. Each party had to be sold on the project, and each party had his or her own concerns and constraints. This diffusion of decision and execution was never a guarantee of software excellence.

Another problem was that the company was already doing stuff. This meant that end users, dispatchers, meteorologists, cargo planners, pilots and the like, were already doing their jobs and had no time to explain what they were doing to the development team. Even finding out what the databases and data feeds looked like could require pulling teeth to get access to specifications or data samples. Sometimes it was just the corporate DNA trying to protect its assets. Sometimes there really was no one still working at the company who still understood what was going on.

There is also the problem that one cannot do half an implementation. The solution has to be at least complete, even if it requires some serious usage gymnastics. You can't half install it either. It has to be all or nothing, and if it turns out to be nothing you can wind up on the evening news. When you develop an app, you are working in a green field. Getting halfway there is often a real step forward. This is not so for most enterprise software.

There is also the contractual structure of most software projects. There needs to be a statement of work. There needs to be some agreement on the specification before design and coding can start, if only so that the contractor can get paid as the project is completed. This leads to some version of the dreaded waterfall model in which the code is poorly specified, then completely implemented, and then nearly impossible to improve once it reaches some minimal level of usability.

A corporation could take the project in house, but this requires building up enough staff to write the entire package and then maintaining it. The problem is that it usually takes a lot more people to build than maintain, so the company has to ramp up hiring and training, then lay people off to improve the morale of those sticking around to maintain and improve it. Some companies get around this by constantly building, but not everyone can afford this approach.

Bad, or at best, mediocre enterprise software is often the result of structural problems, as much as we'd like to believe it is a product of malicious hackers with perverse senses of humor.

Jeff: "But my main point was that management doesn't at all like to be dependent on someone they don't (nominally) "control" (uptime problems) vs. someone they do (subordinates with efficiency "problems")."

That's a good point. My point was that this has further and quite profound implications, that is, these enterprise software decisions can have profound implications for the structure and culture of an organization.

You get what you pay for. Developing a GUI is incredibly time-consuming, and in database systems end users' refusal to master relational algebra places the burden on the developer to anticipate every query and wrap each one in some kind of interface. Once you have a SQL database, most report generation can be handled in Excel. Teach your staff SQL. It's much more powerful. Heck, you can use R... just set up an ODBC connection with appropriate privileges.
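To illustrate the point about SQL making report generation cheap, here is a minimal sketch using Python's built-in sqlite3 module. The table and figures are made up for illustration; the point is that a canned "report" is one GROUP BY query, not a screen the developer had to anticipate.

```python
import sqlite3

# Hypothetical enrolment table, just to show how little code a SQL report needs.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE enrolment (course TEXT, term TEXT, students INTEGER)")
con.executemany(
    "INSERT INTO enrolment VALUES (?, ?, ?)",
    [("ECON 1000", "Fall", 120), ("ECON 1000", "Winter", 95), ("ECON 2030", "Fall", 60)],
)

# One ad hoc query replaces a canned report: total students per course.
for course, total in con.execute(
    "SELECT course, SUM(students) FROM enrolment GROUP BY course ORDER BY course"
):
    print(course, total)
```

Any staff member who learns this much SQL can answer questions the interface designer never thought of.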

When designing data entry systems there are two basic considerations:
- develop the bulk case first; most systems build this last
- realize most of the problem is data validation during bulk entry
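The two considerations above can be sketched together: validate the whole batch up front, report every row-level error at once, and only then bulk-insert the valid rows. This is a minimal illustration with an invented TA-hours table and made-up validation rules, not any particular system's schema.

```python
import sqlite3

def validate(row):
    """Return an error message for a bad row, or None if it is acceptable."""
    name, hours = row
    if not name.strip():
        return "missing name"
    if not 0 <= hours <= 130:
        return "hours out of range"
    return None

# Hypothetical TA-hours batch: validation covers the whole batch up front,
# so the user gets one list of errors instead of a dialog box per row.
batch = [("Alice", 60), ("", 40), ("Bob", 999), ("Carol", 85)]
errors = [(i, validate(row)) for i, row in enumerate(batch) if validate(row)]
good = [row for row in batch if validate(row) is None]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ta_hours (name TEXT, hours INTEGER)")
con.executemany("INSERT INTO ta_hours VALUES (?, ?)", good)  # the bulk path

print(errors)  # rows 1 and 2 are rejected, with reasons
```

Building the single-row form first, as most systems do, tends to bake per-row dialogs into the design and makes the bulk path an afterthought.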

Jon: "Once you have a SQL database, most report generation can be handled in Excel."

Yes, Excel's pivot tables are beautiful! Once I discovered how to use them, numbers that previously took me half an hour or so to retrieve could be obtained in minutes.

Which raises the question - we have all of this management training blah blah blah; students are taught all of this research methods and critical thinking blah blah blah. Why is the use of pivot tables so rarely a part of university or management training? (The only reason I figured out how to use them was that a friend who works for the Bank of Canada was telling me that they look for knowledge of pivot tables in their recruiting, and so I figured "these must be useful, perhaps they can help me sidestep my software's tedious reporting facilities").
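For readers who have never met one: a pivot table is just a cross-tabulation, grouping on two keys and aggregating the values. A minimal sketch in plain Python, with invented sales records, shows the whole idea:

```python
from collections import defaultdict

# Made-up records; each is (region, quarter, amount).
records = [
    ("East", "Q1", 100), ("East", "Q2", 150),
    ("West", "Q1", 80), ("West", "Q2", 120), ("East", "Q1", 50),
]

# What a pivot table does: group on two keys and aggregate the values.
pivot = defaultdict(lambda: defaultdict(int))
for region, quarter, amount in records:
    pivot[region][quarter] += amount

for region in sorted(pivot):
    print(region, dict(pivot[region]))
# East {'Q1': 150, 'Q2': 150}
# West {'Q1': 80, 'Q2': 120}
```

Excel's version adds drag-and-drop layout and a choice of aggregations, but the underlying operation is exactly this.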


General Nathan Bedford Forrest said that the battle goes to him who "gits thar the fustes' with the mostes'", or so they say. In the ethos of the software industry, getting there first is highly valued. Eliminating any and all bugs is impossible in software of any size, and speed of development is considered more important than reducing bugs and design flaws, even though both are valued, of course. Releasing buggy software is the industry standard.

Jeff Atwood, co-founder of the highly regarded website for software programmers and developers, Stack Overflow, wrote a blog post this year about "Complaint-Driven Development", which, he says, is how Stack Overflow and other projects were developed. The URL is http://blog.codinghorror.com/complaint-driven-development/ . Atwood states:

"{During beta testing} Identify and fix everything major that comes up. Your product will still be terrible, but it'll be slightly marginally less terrible, and you'll now be slightly marginally less screwed than you otherwise would. (This is what we business experts call a "competitive advantage". Look it up.)

"Rapidly get to a public launch. It will suck, but you will ship it anyway."

Why does your software suck? Because software sucks. At least, commercial software sucks. (It approaches perfection in the long run, hehe. ;)) No, it doesn't approach perfection in the long run. What happens is that new features are perceived to give a competitive advantage, so they are added and incorporated into new sucky software that is released as quickly as possible. That is why problems persist. Less effort is expended on eliminating problems than on adding bells and whistles.

That does not mean that no effort goes into perfecting software. If you are a researcher there is a good chance that another researcher doing similar research has written good software for herself and is selling it or giving it away. Mission critical software has to be good. CERN has good software. So does NASA. (Too bad Obama did not farm out the ACA website to NASA. ;))

Effort goes into perfecting commercial software, too. As Atwood indicates, developers do respond to user complaints. So be the squeaky wheel. Maybe they will listen to you.

Alternatively, academic institutions could use really user friendly and aesthetically pleasing software which says "Yo" :)

But I get what you mean. Perhaps they find it difficult to recruit sufficiently qualified people who wish to dedicate their time to trial runs of products before releasing them to market?

From the side of the institutions, perhaps they could mine the community of app developers, people who have demonstrated an ability to build user-friendly apps that respond to relatively complex institutional environments? But the tolerance for epic fails would have to be very high. If an app fails, you just get few or no downloads. If the software commissioned by the university completely fails, it's a little harder to take a "win some, lose some" sort of attitude.

Enterprise software sucks because the companies that make and use the software suck. Large companies are filled with backstabbing, petty losers who couldn't innovate their way out of a paper sack. And then you've got the business people who are bent on milking and defrauding their companies at every turn.

When you sell one of these Frankenstein software packages, they'll make sure it's deployed as badly as possible. And if you're a sales guy, they'll make sure your life becomes hell.

The answer: don't touch anything to do with a big company unless you've got no other options. Most people have gotten that memo already, and that's why we have so much B2C nowadays.
