Computer Stupidities

Programming

If teaching an individual to use a computer is bad, teaching someone to program one is worse.




One day I was in a public park, reading "C++ For Dummies" when someone came up and asked me what I was reading. I told him I was reading a book about C++. He responded, "Oh, HTML kicks C++'s @$$."




I was the night-time operator for a university in the northern part of the state. We ran our administrative jobs locally, and the students submitted their jobs to us. We read their card decks into our IBM 370/115 which transmitted them to the IBM 370/165 at the capital which sent the results back to be printed. We then wrapped the listing around the kiddies' card decks and put them out for them to pick up.

One evening a student was in the pickup area, looking at her listing, and crying. We operators were not required to help students, but if we had some extra time, we always did. I asked her what was wrong, and she said her program wasn't working right. I took the listing and looked it over. It was the first exercise given to first semester COBOL students. I saw nothing wrong with it. No compiler errors, no JCL errors, and the printout from the run even looked correct. So I said, "I don't see any errors."

At that point she let out a great wail and sob. "I know!" she cried. "That's the problem!"

"Huh?" I said.

It turned out that the instructor had told the class what all instructors tell their classes about the first computer program they ever write: "Don't worry about errors the first time you submit your deck. People always get errors the first time."

Well, through some fluke of improbability, this girl had managed to write a flawless program, key it into the keypunch flawlessly, and get a flawless run the very first time she tried. The instructor had told her to expect errors. She didn't get any, so she thought she was doing something wrong.


I once worked for the IT department of a small manufacturing company. The new Vice President of IT claimed that he had been a programmer for more than twenty years. One time we were in a meeting with a software company we had hired to build our web site for us. As they explained that the web pages would be written in HTML and Javascript, the VP stopped them cold and said, "None of my guys here work with any of that Javascript stuff! This is a SQL shop! I only want these web pages written in SQL so we can support it ourselves!"

Rather than correct a man who'd been a programmer for twenty years, I sat there with an amused look on my face for the remainder of the meeting. So did the people from the software company.


Someone else's shell script I saw at work today was extensively commented, including this gem of non-information.

export PATH # Export path



One thing many people run into in the computer industry is employers who are rather clueless and yet don't necessarily realize it. In 1996, a friend told me about a boss he had who needed a C program written for him. After a week, the boss complained that the program wasn't done and asked my friend what was taking so long.


I was making my way through MSDN, looking at Win32 API console functions so I could write my own gotoxy() function in Visual C++ 6.0. My C++ programming teacher looked at my screen and asked why I didn't just copy gotoxy() out of Borland C++ 3.1's conio.h header.

Unfortunately, Borland C++ 3.1 was designed for DOS and Win16, while Visual C++ targets Win32. Worse, a header only contains type and class declarations, defines, and function prototypes -- not the implementation. I don't know how my teacher thought that would work.
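
For reference, the Win32 route being researched comes down to one documented console call; a minimal sketch (the function name gotoxy and the coordinate types here are just illustrative):

#include <windows.h>

void gotoxy(short x, short y)
{
    COORD pos;                  /* console cell coordinates */
    pos.X = x;
    pos.Y = y;
    SetConsoleCursorPosition(GetStdHandle(STD_OUTPUT_HANDLE), pos);
}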



During a code review, when I asked why (besides the source control file headers) there was not a comment in 240,000 lines of code which was getting handed over to me for maintenance, the programmer replied, "I'm terse."


I found this comment in a program I was given to edit:

if x then
    #if condition is true
    [do something]
end if.

It literally said "if condition is true;" it wasn't an expansion on the significance of x.


I was helping a friend with some code. In the code, I found the line:

x = x;

and removed it. I made some further changes and sent the code back to him. He told me he still had errors. So he sent me his code again, and again I found the same line. I asked him why he kept putting that in there, and he replied, "So x doesn't lose its value."


One time a girl in my introduction to programming class told me that she hated Microsoft and started using UNIX to compile her programs. Later on, she emailed me and said she hated UNIX now, too, because it would compile her program but not allow her to retrieve her data. So I asked her to send her code to me, and I would take a look at it. I stumbled upon this:

int addandsubtract (int a, int b)
{
    return (a + b);
    return (b - a);
}

I asked her the purpose of this function, and she told me she wanted to first get the sum of a and b and then get the difference. She didn't understand why this wouldn't work, and it took me an hour or so to explain why.
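
A function can only return once, so the second "return" is unreachable. For reference, one conventional fix in C is to hand both results back through output parameters; a minimal sketch (the names are purely illustrative):

void add_and_subtract(int a, int b, int *sum, int *difference)
{
    *sum = a + b;            /* first result */
    *difference = b - a;     /* second result */
}

The caller passes the addresses of two variables and reads both results out of them afterwards.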


I teach a C programming course. For one of the assignments, somebody once copied a program verbatim from a fellow student who did the course two years before. He did pay attention, though: following the updated course material, which said that 'main' should return an error code, he changed:

void main (...) { ... }

to:

int void main (...) { ... }

Needless to say, the program didn't even compile.


In college, I worked as a teaching assistant for an introductory programming course. For most of the people in the class, this was probably their first and only programming class.

One day, I was doing program code reviews with a handful of students. This one girl gave me her code, and, after looking at it, I asked why she had written a certain line twice:

let x = 7;
let x = 7;

She said, "Just in case it didn't get set right the first time."


When a computer professor asked his students to comment all their programs, he got remarks like:


I found this comment in some code I had to maintain:

/* This function is BOOL but actually returns TRUE,
   FALSE and -2 because I've no time to change it
   to int */

Didn't it take more time to write the comment?


When I was studying programming, one of my classmates was having serious troubles with his program. When he asked me for help, I leaned over his screen and saw all of his code in comments. The reason: "Well, it compiles much faster that way."


In college I worked as a consultant. One day this grad student was having trouble with his Fortran program and brought the printout to me. He said he kept changing things but couldn't get it to run correctly. His analysis: "I get the feeling that the computer just skips over all the comments."


I tutored college students who were taking a computer programming course. A few of them didn't understand that computers are not sentient. More than one person used comments in their Pascal programs to put detailed explanations such as, "Now I need you to put these letters on the screen." I asked one of them what the deal was with those comments. The reply: "How else is the computer going to understand what I want it to do?" Apparently they would assume that since they couldn't make sense of Pascal, neither could the computer.


I was taking an introductory programming course. One assignment was to do a little payroll program, including some data validation. The program was supposed to accept terminal input and send output back to either the console or a printer.

Suddenly the printer began spewing out paper like crazy. One of the students (a particularly mouthy woman) had programmed a less-than-helpful error message ("YOU ARE WRONG") and provided no exit from the error-checking logic -- the program just re-read the last (failing) input and re-tested it. All in all, it was a very nice infinite loop.

After the printer had spat out about fifty pages of "YOU ARE WRONG," somebody cut power to it, and the instructor had to flush the print queue manually. He went back to the student and asked if she had tested the program by sending the output to the console before trying to print it, and she said yes, she had tested it on the console and ended up with a screen full of "YOU ARE WRONG" messages. Why, then, had she sent her output to the printer? "I thought I would be daring!"


A colleague wrote the documentation for the return codes from a set of functions in one of his DLLs. Among the documentation was this:

/* Return code=1: generic error condition
   Return code=2: all other error conditions */


I was taking a C programming class once, and the class was divided up into two programming teams. On my team we had a woman who was totally out of her league. What earned her legendary status was doing a global search and replace, swapping out asterisks for ampersands, because she felt the asterisks weren't "working."


I was just teaching an optional class on C programming; in the first class meeting, I asked, "Does anybody know anything about programming?"

To which one of my students gleefully replied, "I know how to use a chat program!"


I was asked to maintain a shell script that was taking too long to run and wasn't reliable. Among other horrors, the one that gave me the best mix of laughter and fear was a repeated construct like this:

display=`env | grep DISPLAY | sed 's/[^=]*=//g'`
DISPLAY=$display
export DISPLAY

This made me scratch my head for a moment, until I realized it was a complete no-op, equivalent to DISPLAY=$DISPLAY (except when the grep pulls out the wrong thing). This was repeated for something like a dozen environment variables. I still cannot fathom the logic of it. I ended up doing a complete rewrite.


I was asked about taking on a contract to maintain a piece of software. Something about the way it was presented made me wary, so I asked to look it over first. What a sight! I use it as an example of why not to use global variables. Among other things, there were files with suites of functions along the following lines:

adjust_alpha()
{
    alpha = gamma + offset * 3;
}

adjust_beta()
{
    beta = gamma + offset * 3;
}

Dozens of functions that differed only by the global variable they modified. Just picture it: a multi-thousand line program with a graphical interface and a database that never used function parameters.
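
The whole family could have been a single function taking its target as a parameter; a minimal sketch, keeping the original's gamma and offset globals and assuming int for illustration:

void adjust(int *target)
{
    *target = gamma + offset * 3;
}

/* adjust(&alpha); adjust(&beta); and so on */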

The original programmer painted himself into a corner with his variable names. Clearly if you need variables "up," "down," "left," and "right," you name them as such. When he found himself needing those direction names in different parts of his program but was stuck because global variable names had to be unique, his solution was to use names like:

up, _up, up_, Up, uP, UP, _Up, _UP
down, _down, down_, Down, dOWN, DOWN, _Down, _DOWN

...and so on. Even the densest of my students comprehended immediately why that was bad. Needless to say, I turned down the job.


While working on a programming project in high school with a friend, I mentioned to him that if he really wanted to name his variables things like x, xx, and xx2, he should at least put in comments saying what they were used for.

The next time I looked over his shoulder, I saw this:

int x; // x is an int


Some years ago, a friend and I were jointly writing a game in C++. We were repeatedly getting inexplicable access violation errors in a piece of code which should have been rock solid. Eventually we found something like this, obviously left over from a past debugging hack:

((class CNetwork *) 0x05af12b0)->Initialise();

It had gone unnoticed for a while because, out of sheer luck, all the builds we'd done since that hack hadn't changed the address in memory of that particular instance of CNetwork. Obviously we had eventually changed something which caused it to be allocated elsewhere: cue major chaos. If anyone has heard of a dumber programming practice than hardcoding a pointer, I'd like to see it!


This was found in code written by an ex-employee.

strcpy(vl_name,"00000000000000000");
strcpy(vl_volume,"000000");

strncpy(temp1,vl_lud,4);
temp1[4]='\0';

strncpy(temp2,vl_name+4,13);

temp2[13]='\0';
strcat(temp1,temp2);

strcpy(temp2,"");
sprintf(temp2,"%d",vl_serial_num);
temp1[7]='\0';
strcat(temp1,temp2);
strcat(temp1,"000000000");
temp1[8]='.';
strncpy(temp1,temp1,9);
temp1[9]='\0';
strcat(temp1,vl_data_set_name);
temp1[17]='\0';
strcpy(vl_name,temp1);
strcpy(vl_volume,"1");


My friend is a programming teacher at a local high school, where there are two programming classes -- one taught by him and one by another teacher. Recently he spent WEEKS preparing the major assessment that both classes would do, a large assignment that the students would work on for the next few months.

As well as making the question sheet for the students, he also made an answer sheet for the other teacher, so that she could familiarize herself with the assignment before giving it to her class.

But this other teacher knew NOTHING about programming and wasn't able to tell the difference between the question sheet and the answer sheet, so she wound up photocopying the answer sheet and handing it out to every student in her class.

She no longer teaches programming.


This little bit of Java was written as part of a group project at university. The friend who passed it to me has been bouncing off the walls about the quality of the guilty party's code (silly things like defining error and success codes with the same value so you don't know what the return code means and stuff like that), but this is the most obviously stupid bit.

public int convertItoi(Integer v)
{
    if (v.intValue()==1) return 1;
    if (v.intValue()==2) return 2;
    if (v.intValue()==3) return 3;
    if (v.intValue()==4) return 4;
    if (v.intValue()==5) return 5;
    if (v.intValue()==6) return 6;
    if (v.intValue()==7) return 7;
    return 0;
}


A few days ago I had to fix a bug in our software. The person who originally wrote the module had quit, so I had total control of the source code. I ended up rewriting half of it when I found things like:

int i;
memset(&i, 0, sizeof(int));

And:

switch (k) {
    case 9: printf("9\n");
    case 8: if (k==8) printf("8\n");
    case 7: if (k==7) printf("7\n");
    // and so on...
}

I wondered why he had put in the "if" clauses, but then I noticed that none of the cases had a "break" statement: without the ifs, when k was 9, the program would have printed 9, 8, 7, and so on. I think he added the "if" clauses to fix that behavior.
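
For what it's worth, the conventional fix is to end each case with a "break" rather than to guard every case with an "if"; a minimal sketch:

switch (k) {
    case 9: printf("9\n"); break;
    case 8: printf("8\n"); break;
    case 7: printf("7\n"); break;
    /* and so on... */
}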

The masterpiece, however, was the following, where two consecutive errors actually caused the program to work fine:

char msg[40];
unsigned char k,j;

memset(msg, 0, 41); /* to set the terminator */
j = k;
...

Of course the "memset" was supposed to clear the msg variable, but it also zeroed k, for which no other initialization was provided. That could have been a deliberate (if hackish and unreliable) trick, but the "set the terminator" comment gives it away. In fact, all over his code he added one extra byte for the "terminator," one byte past the end of whatever character array he was working on.
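
The usual way to avoid that sort of off-by-one is to let the compiler supply the buffer size rather than hard-coding it; a minimal sketch:

char msg[40];
memset(msg, 0, sizeof msg);   /* zeroes exactly the 40 bytes of msg, nothing more */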


About four years ago, I was working on a project that, among other things, involved porting several million lines of code. While not technically real-time, the code needed to be reasonably fast. At one point, I found the following gem:

unsigned long reverse(unsigned long theWord)
{
    unsigned long result = 0;
    int i;

    for (i = 0; i < 32; ++i) {
        if (theWord & (unsigned long) pow(2.0, (double) i))
            result += (unsigned long) pow(2.0, (double) (31 - i));
    }

    return result;
}

Obviously, the purpose was to reverse the bits in a word. Naturally, I called all of my colleagues over to see this, and we all marvelled at how someone would think that a conversion to floating-point, a function call, and a conversion to integer could be faster than one shift operation. To say nothing of the possibility of rounding errors completely screwing up the, um, algorithm.

Not wanting to leave an exercise for the reader, here's the replacement:

unsigned long reverse(unsigned long theWord)
{
    unsigned long result = 0;
    int i;

    for (i = 0; i < 32; ++i) {
        if (theWord & (1 << i))
            result += 1 << (31 - i);
    }

    return result;
}


An introductory programming student once asked me to look at his program and figure out why it was always churning out zeroes as the result of a simple computation. I looked at the program, and it was pretty obvious:

begin
    readln("Number of Apples", apples);
    readln("Number of Carrots", carrots);
    readln("Price for 1 Apple", a_price);
    readln("Price for 1 Carrot", c_price);
    writeln("Total for Apples", a_total);
    writeln("Total for Carrots", c_total);
    writeln("Total", total);
    total := a_total + c_total;
    a_total := apples * a_price;
    c_total := carrots + c_price;
end;


At my previous job, we were porting a UNIX system to Windows NT using Microsoft VC++. A colleague of mine, who was in the process of porting his portion of the code, came to me looking really upset.


I ran across this gem while debugging someone else's old code once:

if (value == 0)
    return value;
else
    return 0;


I found this buried in our code somewhere:

if (a)
{
    /* do something */
    return x;
}
else if (!a)
{
    /* do something else */
    return y;
}
else
{
    /* do something entirely different */
    return z;
}


I had a probationary programmer working for me. Needless to say, he never got to be permanent. One day I was inspecting his C code and found this:

if ( a = 1 ) {
    ...some code...
} else {
    ...some other code...
}

I told him the "else" clause would never get executed because of his "if" statement. I asked him to figure out why. He said he'd "investigate" it first. I allowed him to "investigate," since it wasn't a critical task.

A day later, he told me he had figured out the problem. He said he had used the wrong operator in the "if" statement -- it should have been == instead of = -- which was absolutely correct. But then he emailed me his revised code.

a = 1;
if ( a == 1 ) {
    ...some code...
} else {
    ...some other code...
}

What the...?

I asked him if the "a = 1" part was necessary and not just a fragment of debug code he forgot to remove. He said it was necessary. So I asked him if the "else" statement would ever be executed. He said yes. I asked him to give me a situation when such would occur. He said he'd get back to me with the explanation.

I kicked him out of the project that same afternoon.


Once I ran across code that did this to test the i-th bit in a byte-wide value:

if (value && (int)pow(2,i))
{
    ...
}
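
Note that "&&" is a logical AND, so that condition is true whenever value is non-zero at all; it doesn't isolate bit i even when pow() behaves. The idiomatic test is a bitwise AND against a shifted one; a minimal sketch:

if (value & (1u << i))
{
    ...
}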


Digging in the code a colleague wrote years ago, I found the following:

EndWhile = 0;
while (EndWhile == 0)
{
    ...
    if (index < MAX)
        EndWhile = 0;
    else
        EndWhile = 1;
    index = index + 1;
}
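
The flag buys nothing. Assuming the body doesn't modify index or EndWhile anywhere else, the same behavior falls out of letting the loop test the condition itself; a minimal sketch:

do
{
    ...
    index = index + 1;
}
while (index <= MAX);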


Years ago, I put a simple, fortune cookie style program out on an FTP site. It was too simplistic to check environment variables or configuration files for the location of the fortune cookie database file; the path was compiled into the executable. I provided the source, so if you wanted to change the installation path, you had to change it in the source file and recompile.

Since I put it out, every so often I'll get an email message commenting on it. Recently, I received a message asking for help getting the thing to work: the sender couldn't get the executable to find the database file properly. I answered him, and he mailed back saying nothing helped. I mailed him again, saying that the readme file included in the archive had very detailed instructions. He mailed me back saying the readme file didn't help him either. So he mailed me the source code file and asked me to change it to the way it should be and mail it back to him. I told him what to change, but as I was typing my final reply, a horrific thought occurred to me. So I asked:


I was working for a consulting firm that was called in to help another firm doing some fairly important UNIX work for a large Wall Street firm. They were all Mac programmers who had taken a week-long course in UNIX, C programming, and UI programming for this particular workstation. I took a look at their C code, and it was littered with the following statement:

strcat(string,"\0");

I asked why they were doing this. The reply was, in a "don't you know?" tone of voice: "All strings in C must end in a null zero!"

Trying to explain that strcat wouldn't work unless the null terminator was there already just got me blank stares.
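
A quick illustration of why the call is a no-op: strcat() has to find the existing terminator in its destination before it can append anything, and "\0" as a string literal is just an empty string, so nothing gets appended. A minimal sketch, with an illustrative buffer:

char buffer[16] = "abc";   /* already ends in '\0' */
strcat(buffer, "\0");      /* walks to the '\0' after "abc" and appends nothing */
                           /* buffer still holds "abc" */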


I've seen this code excerpt in a lot of freeware gaming programs for UNIX:

/*
* Bit values.
*/
#define BIT_0 1
#define BIT_1 2
#define BIT_2 4
#define BIT_3 8
#define BIT_4 16
#define BIT_5 32
#define BIT_6 64
#define BIT_7 128
#define BIT_8 256
#define BIT_9 512
#define BIT_10 1024
#define BIT_11 2048
#define BIT_12 4096
#define BIT_13 8192
#define BIT_14 16384
#define BIT_15 32768
#define BIT_16 65536
#define BIT_17 131072
#define BIT_18 262144
#define BIT_19 524288
#define BIT_20 1048576
#define BIT_21 2097152
#define BIT_22 4194304
#define BIT_23 8388608
#define BIT_24 16777216
#define BIT_25 33554432
#define BIT_26 67108864
#define BIT_27 134217728
#define BIT_28 268435456
#define BIT_29 536870912
#define BIT_30 1073741824
#define BIT_31 2147483648

A much easier way of achieving this is:

#define BIT_0 0x00000001
#define BIT_1 0x00000002
#define BIT_2 0x00000004
#define BIT_3 0x00000008
#define BIT_4 0x00000010
...
#define BIT_28 0x10000000
#define BIT_29 0x20000000
#define BIT_30 0x40000000
#define BIT_31 0x80000000

An easier way still is to let the compiler do the calculations:

#define BIT_0 (1)
#define BIT_1 (1 << 1)
#define BIT_2 (1 << 2)
#define BIT_3 (1 << 3)
#define BIT_4 (1 << 4)
...
#define BIT_28 (1 << 28)
#define BIT_29 (1 << 29)
#define BIT_30 (1 << 30)
#define BIT_31 (1 << 31)

But why go to all the trouble of defining 32 constants? The C language also has parameterized macros. All you really need is:

#define BIT(x) (1 << (x))
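
Typical usage, for illustration (assume flags is an unsigned integer holding bit flags):

flags |= BIT(3);               /* set bit 3 */
flags &= ~BIT(3);              /* clear bit 3 */
if (flags & BIT(7)) { ... }    /* test bit 7 */

One caveat: with 32-bit ints, BIT(31) shifts a signed 1 into the sign bit, which is undefined behavior, so a safer definition shifts an unsigned constant, e.g. (1UL << (x)).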

Anyway, I wonder whether the guy who wrote the original code used a calculator or just worked it all out on paper.


When I was still a student, I worked as an admin for the university CS department. Part of this job involved time in the student labs. Our network was a conglomeration of Suns and SGIs and was generally confusing for novice users who didn't understand the concept of multiuser, multitasking, networked computers.

Around the room are large signs explaining how to log in, along with big warnings about not cutting the power unless you like the idea of having a grad student, who has been running a several-million-variable modeling project for years, show up and beat you to death with research papers.

You would be amazed how many people try to type in a program at the "Login:" prompt, and then turn the machine off when they are done. The worst of the bunch then complain about not being able to find the program they just typed in at the login prompt.


I was looking through a shell script I had written recently, and I almost died when I saw some of the code. I'm embarrassed to admit it, but here's one thing I had done:

if ($var = value) then
    # do something
else
    # do the exact same thing as in the other code
endif


While in college, I used to tutor in the school's math lab. A student came in because his BASIC program would not run. He was taking a beginner course, and his assignment was to write a program that would calculate the recipe for oatmeal cookies, depending upon the number of people you're baking for. I looked at his program, and it went something like this:

10 Preheat oven to 350
20 Combine all ingredients in a large mixing bowl
30 Mix until smooth
.
.
.


A "software engineer" I used to work with once had a problem with his code that looked something like this:

a_pointer->fn();

It caused a General Protection error. He knew C, but not C++ -- I did, so he asked me for help. I told him to check to see if the pointer was NULL before making the call. A couple of hours later he came back; the problem was still happening.

if (a_pointer == NULL)
{
    LogError();
}

a_pointer->fn();

I said, "You need a return statement after the LogError call."

He said, thoughtfully, "Where does it return to?"
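
The fix he was being nudged toward, sketched (whether it's a bare return or an error code depends on the function's signature):

if (a_pointer == NULL)
{
    LogError();
    return;           /* bail out instead of falling through to the call */
}

a_pointer->fn();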


A friend of mine wanted to keep track of the other users on the UNIX systems of our university. There is a nice command "last" on UNIX which will list the last users to have logged in. So he wrote a script that'd log in to all workstations of the department by remote shell and run the "last" command, with the results sent back to the originating host, to be collected in aggregate form.

He called this little script "last" -- same name as the UNIX system command -- and put it in his home directory. His path was set up so his home directory had a higher precedence than the UNIX bin directories. So when he ran the "last" command, it would use his own script instead of the system command.

So he ran the script. It logged in to all the other workstations just fine. Then it ran the "last" command -- the one in his home directory, of course, not the system command. You can guess what happened: it got into an infinite loop that tried to log in to every workstation over and over. This very effectively took down the whole department, and all the workstations had to be shut down to make it stop.


One of our customers, a major non-US defense contractor, complained that their code ran too slowly. It was a comedy of errors.

Act I

Act II

So, on a hunch, we sent them the latest version of our software for Windows NT.

Act III

Finally, some of their code was declassified. We looked at it, and one piece of it contained a routine for reading one million or so integers from a file. Rather than opening the file once and reading them all in, there was a loop: it would open the file, read the first integer, and close it; then open it again, read the second integer, and close it; etc.