My career experience re: "academia"
Posted by Jim Lewandowski, August 29, 2001 11:36AM
I'm in the mainframe computer performance arena. My job is to optimize every single thing so that response times, aggregate throughput capacity, and so on are at their peak.

I happened to get involved with a new relational database, DB2 (it runs on many platforms, but this discussion covers only the IBM mainframe perspective), about five years ago.

I was on an Internet mailing list for DB2. I brought up findings of mine that reversed all previously held findings on a certain issue. The issue had to do with the hardware compression feature, which lets you compress rows/records of data in processor storage via a compression algorithm in order to reduce processor-storage requirements.

Every time a row/record that HAS been compressed is retrieved by a user or function, it must be decompressed so that it is intelligible to the user again. The early findings showed that the CPU (processor) overhead of this function was on the order of 5%. Very little. The reason CPU consumption is critical is that software licenses are priced on CPU usage (HUGE $ here).

My later-arriving findings (I was fairly new to mainframe DB2) showed that the overhead was on the order of 67%: 33% for the processor's compress/decompress hardware instruction, and the other 34% for the additional code that had to be executed to prepare for that instruction.
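To make that arithmetic concrete, here is a tiny illustrative calculation (a Python sketch only). The baseline CPU figure is invented purely for illustration; only the 33%/34% split comes from the findings described above.

    # Hypothetical workload: the baseline CPU figure is invented for
    # illustration; the 33% / 34% split is the breakdown quoted above.
    baseline_cpu_sec = 100.0                    # CPU to process the rows with no compression
    instruction_cost = 0.33 * baseline_cpu_sec  # compress/decompress hardware instruction
    setup_cost = 0.34 * baseline_cpu_sec        # code executed to prepare for that instruction

    compressed_cpu_sec = baseline_cpu_sec + instruction_cost + setup_cost
    overhead_pct = (compressed_cpu_sec - baseline_cpu_sec) / baseline_cpu_sec * 100
    print(f"total CPU overhead: {overhead_pct:.0f}%")  # prints 67%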

The admin of the list is an internationally renowned DB2 expert. What did he come back and say to the list (2,000 people around the world are on this DB2 mailing list)? That my new findings were completely untrue, baseless, misleading, and bordering on professional misconduct. He went so far as to say that he was going to discuss this with IBM and have them post a response to my erroneous findings.

For some quick background: the more complex a processor instruction is (the more function it performs, or the more data it operates on), the more processor cycles it takes. A compression/decompression instruction has a ton of housekeeping/setup work just to get to the point of actually performing its function.

Well, guess what: I could easily prove, via empirical, repeatable, consistent results, that I was indeed correct. There was no room for error. Basically, you embark on a "benchmark" test of compressed vs. non-compressed data, measure the CPU used in each case, and compute the difference. It's a very common thing to do for measurement/quantification purposes.
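For anyone who wants the shape of that test, here is a rough Python sketch of the difference calculation. The two workload functions are stand-ins for whatever repeatable query stream you drive against compressed and non-compressed copies of the same data, and the Python CPU timer is only a stand-in for however your platform actually reports CPU consumption; none of this is the actual tooling involved.

    import time

    def cpu_overhead_pct(run_uncompressed, run_compressed, runs=5):
        """Run each workload several times, keep the lowest CPU time for each
        (the most repeatable figure), and return the compressed workload's
        extra CPU as a percentage of the non-compressed baseline."""
        def best_cpu(workload):
            samples = []
            for _ in range(runs):
                start = time.process_time()  # CPU time, not wall-clock time
                workload()
                samples.append(time.process_time() - start)
            return min(samples)

        baseline = best_cpu(run_uncompressed)
        compressed = best_cpu(run_compressed)
        return (compressed - baseline) / baseline * 100.0

    # Hypothetical usage: both functions would run the same query stream,
    # one against a non-compressed table and one against a compressed copy.
    # overhead = cpu_overhead_pct(query_plain_table, query_compressed_table)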

Why did few people believe me? Because I didn't have the "credentials" that this international expert had; no one knew me from Adam. So, just because he showed up before I did, he gets the benefit of the doubt? So, it's not possible that a latecomer can be wiser and more astute than those who came before? So, it's not possible that, even in my few short years on DB2, I could run rings around this guy intellectually?

To this day I've never received an apology. I even published (on the mailing list) my results as well as HOW I got them, so others could compare their own findings. Gee, I'm just not surprised. Remember, I/T is a very, very empirical science, since we're dealing with repeatable scenarios.

No, it's not just this one incident that has changed my outlook on academia and the false concept of "credentials"; it's also a view of the historical record. Granted, much of this eventually does get overturned/corrected, but it usually takes a generation or two.

JL