Who wrote 2.6.20?

Posted Feb 21, 2007 16:23 UTC (Wed) by richardl@redhat.com (guest, #31678)
In reply to: Who wrote 2.6.20? by pr1268
Parent article: Who wrote 2.6.20?

LOC is a perfectly valid metric as long as you normalize against language, etc. In this case, LOC is used as a relative metric. The effort required to produce 100 LOC in C for the kernel is different from the effort required to produce 100 LOC in, say, Ruby for a webapp -- but that's not what the editor is doing here.

I'd be interested in hearing why you think LOC is "pure evil." I think it all depends on how you use it.



Who wrote 2.6.20?

Posted Feb 21, 2007 16:46 UTC (Wed) by lmb (subscriber, #39048) [Link]

LoC changed is a difficult metric, though. For example, I could iterate 100 times trying to get a single line of code right, and every one of those iterations would count as changed lines. But then, software metrics are hard.

One suggestion for a possibly interesting metric, so that I don't have to code it myself:

Annotate the whole of the tree: who last changed which line? Number of lines * age = author score.

This can then be extended to a historical score: who contributed how many lines of code, and how long did they remain in the tree before being removed or changed? Lines a developer replaces in their own code still accumulate to the same developer, so self-changes are essentially neutral.
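
A minimal sketch of the annotation pass, assuming a git tree and Python: git blame --line-porcelain is a real interface, but the blame_score helper and the scoring itself are just my reading of the proposal, not an existing tool.

    import subprocess, time

    def blame_score(path):
        # Sum, per author, the age in days of each line that author last
        # touched -- i.e. the "number of lines * age" score for one file.
        out = subprocess.run(
            ["git", "blame", "--line-porcelain", path],
            capture_output=True, text=True, check=True).stdout
        now = time.time()
        scores, author = {}, None
        for line in out.splitlines():
            if line.startswith("author "):
                author = line[len("author "):]
            elif line.startswith("author-time "):
                age_days = (now - int(line.split()[1])) / 86400
                scores[author] = scores.get(author, 0.0) + age_days
        return scores

Summing these per-file dictionaries over the output of git ls-files would give the per-author totals; the historical variant would have to replay every commit rather than annotate only the current tree.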

LOC metric

Posted Feb 23, 2007 1:23 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

...as long as you normalize against language, etc. In this case, LOC is used as a relative metric. The effort required to produce 100 LOC in C for the kernel is different from the effort required to produce 100 LOC in, say, Ruby for a webapp

I saw a study long ago that had the remarkable result that there is nothing to normalize here. It was looking specifically at the cost to develop and test new software, and found that 100 LOC costs the same regardless of the language or subject. What I've seen is consistent with that.

The study did find a few variables that added precision to a LOC-based estimate. For modifications of existing code, some measurements of the existing code base helped; I think the number of files touched added precision too.

Who wrote 2.6.20?

Posted Feb 24, 2007 11:05 UTC (Sat) by bockman (guest, #3650) [Link]

Well, for one thing, you can often accomplish the same thing with 1000 lines of dumb code or with 300 lines of very smart code. Most of the programming effort goes into figuring out the commonalities between potential code blocks and writing customizable code (loops, routines, classes, templates) that exploits those commonalities. But the more time a developer spends on this kind of exercise, the shorter the final code will be.
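
As a toy illustration (mine, not from any real project): the two fragments below do the same work, but the straight-line version grows by one line for every new dataset, while the factored version stays the same size.

    # Hypothetical data: three lists to reduce.
    data_a, data_b, data_c = [1, 2], [3, 4], [5, 6]

    # "Dumb" version: repeat the block once per dataset.
    total_a = sum(x * x for x in data_a)
    total_b = sum(x * x for x in data_b)
    total_c = sum(x * x for x in data_c)

    # "Smart" version: spot the commonality and factor it out once.
    # Fewer lines, but finding the factoring is where the effort went.
    def sum_squares(data):
        return sum(x * x for x in data)

    totals = [sum_squares(d) for d in (data_a, data_b, data_c)]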

I don't say that LOC measurements are meaningless, just that they are statistics and should not be used outside of that context (for instance, they should not be used to measure the productivity of a developer or even a team).

Ciao
-----
FB

Who wrote 2.6.20?

Posted Mar 1, 2007 21:00 UTC (Thu) by jboorn (guest, #43808) [Link]

So what? You can write really slow, naive, brute-force code for some problem in 300 lines. Or you can use a fancy, complicated algorithm that takes 1000 lines of code but is much faster.

In this case the code is all for the same project, and I think using lines of code within a project is good enough for the analysis sought here.

It is a bit annoying to see the same pointless argument about line-of-code counts come up again. Sure, it is possible to find examples of code that is smaller and as efficient (or more efficient) than a given larger implementation. But that does not exclude the existence of larger code that is more desirable for a given project based on a metric other than executable size.

