
                 A Software Metrics Tutorial
                             and
                   METRIC 1.0 User's Guide




                       Warren Harrison
                   SET Laboratories, Inc.
                       P.O. Box 03963
                    Portland, OR 97203


                         DISCLAIMER


     The field of software metrics is an evolving discipline.
This document and the METRIC 1.0 software are made available
on an "as-is" basis.  No claim is made that any of the
techniques discussed in this document or implemented in
METRIC 1.0 will accurately predict software size, cost,
effort or errors in every situation.  SET Laboratories, Inc.
does not warrant the fitness of METRIC 1.0 for any
particular purpose.  The user accepts full responsibility for
any and all damages, costs, losses, expenses, and other
liabilities resulting from the use of METRIC 1.0 and/or this
document.  The techniques discussed in this document and the
METRIC 1.0 software package should not be used until the
entire document has been read.  Use of the METRIC 1.0 package
is an implicit agreement on the part of the user that this
disclaimer has been read and understood, and that the user
agrees to all conditions and statements included in it.


          Copyright 1987 by SET Laboratories, Inc.


                        INTRODUCTION

     Since the first software product was developed,
programmers and programming managers have had to estimate
how much to budget for development, testing and maintenance.
It is this problem that users of software metrics are
interested in.  Their approach is based on the idea that
certain characteristics of the software have a major impact
on how much it costs to develop and maintain it.

     One of the major characteristics considered by software
metricians is "software complexity": the property of how
hard a program is to understand and work with.  A software
complexity metric is a measure of how complex a piece of
software is.  Many computer scientists think that the amount
of effort required to develop and/or maintain a piece of
code depends on how complex the software is.

     The usual approach to establishing a software
complexity metric is to identify certain properties of a
program which are thought to make it difficult to work with.
For example, one popular measure of complexity is simply the
number of decision-making statements in the code.  Most
complexity measures estimate the complexity a software
product actually exhibits through the degree to which these
properties exist in the code.

              SOFTWARE COMPLEXITY MEASURES

     Even when one has identified the characteristic(s) that
might lead to software complexity, devising a method of
"quantifying" the degree to which the characteristic(s)
exist in the code is not trivial.  This is especially true
if more than one characteristic is being considered, since
one must then determine the effect each has on overall
complexity.

     Since a large number of different characteristics have
been identified as impacting software complexity, a number
of metrics exist.  In general, software complexity metrics
can be divided into four categories:

(1) measures of program size;

(2) measures of control flow;

(3) measures of data structures and use;

(4) hybrid measures, combining two or more of the properties
    measured by the other types of metrics.

Metrics in each category have their respective merits.
However, most of the software complexity metric research
performed by computer scientists over the past ten years has
dealt only slightly with data structures and their use.  An
annotated bibliography of some of this research is included
in the appendix.  Some of the metrics which have come about
from this research are now listed:

(1) Program size metrics are perhaps the most common
    measures of program complexity.  For example, a count
    of source lines in a program can be easily obtained
    through a simple text editor.  In addition, the impact
    that size has on programming activities is quite
    obvious and understandable.  The most popular measures
    of program size include: nonblank lines of code without
    comments, nonblank lines of code with comments, and
    total number of lines in the source file.  Other
    popular measures of size include a count of tokens in
    the code, and the number of procedures included in the
    program.

(2) Another popular method of measuring program complexity
    involves assessing how complex the flow of control
    within the program is.  A very popular approach is a
    simple count of IF statements.  One technique which has
    gained great popularity among computer scientists is
    the cyclomatic complexity of a program.  This metric is
    grounded in mathematical graph theory; the interested
    reader is referred to the appendix for references to
    related papers.  Luckily, the usual method of computing
    the cyclomatic complexity of a program with a single
    entrance and a single exit is simply to count the
    number of "decision points" (i.e., statements such as
    IF, WHILE, FOR, REPEAT, etc. that change the flow of
    control from a sequential path) and add 1.  A number of
    variants have grown up around the cyclomatic measure,
    but it remains the most popular control flow metric.
    (A small worked example of this count follows this
    list.)

(3) Data structure and use metrics have had little
    attention paid to them by software metric researchers.
    There are currently no popular measures available of
    the complexity contributed by data items.  However, the
    dramatic impact that data structures have on complexity
    is bound to lead to some very good metrics being
    suggested in the near future.

(4) Hybrid metrics take two or more of the metrics which
    would fit in one of the above categories, and combine
    them.  Perhaps the best known such metric (in fact, set
    of metrics) is what is known as Software Science.
    Software Science is based on the idea that a program is
    made up of operators and operands.  The parameters of
    interest for Software Science are the number of unique
    operators, the number of unique operands*, the number
    of total operators used, and the number of total
    operands used.  Thus, Software Science combines
    measures of data structure and use with program size.
    From this set of parameters, a large number of distinct
    metrics can be assembled, including E (effort), V
    (volume), N^ (predicted program length) and B^
    (predicted number of bugs).  Each of these measures is
    discussed in more detail in the later section
    describing the METRIC 1.0 package.  Software Science is
    probably one of the most popular complexity metrics in
    use by software metric researchers, though many people
    question some of the assumptions the measures are based
    upon.

____________________
(*) An operand is a variable or constant.
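
     To make these counts concrete, consider the short
Pascal program below.  It is a hypothetical fragment written
for this guide (not code taken from METRIC 1.0), and the
tallies described afterward reflect just one plausible set
of counting rules; the rules METRIC 1.0 actually follows are
described in a later section.

     program CountDemo;
     { Hypothetical program used only to illustrate counting }
     type
       List = array[1..10] of integer;
     var
       data : List;
       j    : integer;

     function SumPositive(var a : List; count : integer) : integer;
     var
       i, total : integer;
     begin
       total := 0;
       i := 1;
       while i <= count do            { decision point 1 }
       begin
         if a[i] > 0 then             { decision point 2 }
           total := total + a[i];
         i := i + 1
       end;
       SumPositive := total
     end;

     begin
       for j := 1 to 10 do            { one decision point in }
         data[j] := j - 5;            { the main program      }
       writeln(SumPositive(data, 10))
     end.

SumPositive contains two decision points (the WHILE and the
IF), so its cyclomatic complexity is 2 + 1 = 3.  For the
Software Science parameters, one plausible tally treats each
reference to data, a, i, j, total and count and to the
constants 0, 1, 5 and 10 as an operand occurrence, and
tokens such as :=, <=, >, +, [] and the control keywords as
operator occurrences.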


                       USING METRICS

     Once a metric has been selected, one must consider the
impact the complexity of the software has on the development
and/or maintenance effort.  Quite often, these two issues
are confused.  One may have a metric that measures
complexity quite well, but lack a method of determining what
the impact of this complexity will be.  The impact that a
particular level of complexity has on the programming
process can best be determined empirically.  This can be
accomplished by analyzing previous projects, and relating
their measured complexity to some measure of the programming
process (e.g., development effort or number of bugs).
Ideally, the result of this process will be a model of how a
marginal increase in complexity impacts some selected
programming activity.

     When performing these activities, one must be careful
to avoid putting too much emphasis on the results.  While
they can be helpful indicators of development effort or
time, they are at best valid in a statistical sense only
(i.e., the errors in prediction, when spread over a large
number of projects, are acceptable, but the prediction for
one individual project may vary greatly from the actual
result).  As an example, consider the following projects
developed by the author of METRIC 1.0 over a period of
several years.  Various Software Science measures (V, E),
cyclomatic complexity (Vg), nonblank lines of code (LOC),
number of procedures (Proc), the person-hours of development
time predicted by the Software Science 'T' measure, and the
approximate number of person-hours that really went into
each project are listed.

                                                 Approximate
                                       Predicted   Actual
Project       V        E      Vg  LOC Proc  Time      Time
----------------------------------------------------------------
convert     1,126    141,806   9   44   2   2 hours   1 hour
editor     13,497  3,958,863  68  514  17  61 hours  55 hours
letter      3,179    453,339  22  131   7   7 hours   6 hours
list          569     21,906   5   39   0  <1 hour    2 hours
more          625     19,832   7   45   0  <1 hour    1 hour
mymetrics   9,083  1,506,016  31  281   5  23 hours  12 hours
paslex      4,058    850,658  25  181   7  13 hours  10 hours
paste       1,064     81,928  10   56   1   1 hour    1 hour
plot        2,680    247,326  14  155   6   4 hours   3 hours
redform    12,080  2,670,531  79  523  14  41 hours  68 hours
salt       16,606  4,001,407  68  447  13  62 hours  35 hours
scoot5        545     24,695   7   33   0  <1 hour   <1 hour
twocol      1,335     61,029  10   57   3   1 hour    3 hours
upasm       5,275    648,455  46  191   7  10 hours   6 hours
upbbs      16,801  3,096,789  53  560  19  48 hours  18 hours
upcom       3,222    273,820  14  131   6   4 hours   3 hours
vroff      16,735  4,148,886  98  591   8  64 hours  70 hours
----------------------------------------------------------------

Note that the actual observed time is only approximate.  No
attempt was made to control for interruptions, division of
attention between several concurrent activities, etc.
Likewise, the work on many of these projects took place over
a period of weeks or even months, with a few hours every
other day allocated to the project - clearly some uncounted
time may have gone into a project between periods of formal
work, and some time may have been wasted in 'getting up to
speed' when a long period of time separated formal work
periods.

     Only the time spent actually coding the product or
putting the design on paper is included in the 'actual
time'.  Other effort involved in developing the product is
not included.  For example, the observed time does not
include the time spent researching advanced features of the
language needed to complete a particular project, nor does
it include the time involved in writing user documentation
or inserting 'post-development' internal documentation.
Therefore the 'actual time' listed in the table may itself
differ by 20-30% from what someone else might consider it to
be.  Also, note that these are small projects, each
completed by a single person in less than one person-month.
Typically the prediction errors would be less noticeable in
larger, multi-person projects.

     Both these results, and results obtained by computer
scientists doing research in this area, suggest that due to
the great variability in programmers, applications and
environments, no one measure will always be 100% accurate in
assessing complexity.  Likewise, it is doubtful that a
single model of the impact of complexity upon programming
activities will ever exist.  However, these metrics can
often be useful in determining very coarse parameters, such
as predicting the order of magnitude of project effort.
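
     As an illustration of how such a coarse, statistical
relationship might be derived, the sketch below fits the
one-parameter model  hours = k * E  to five of the projects
in the preceding table by least squares.  This is a
hypothetical exercise written for this guide, not a feature
of METRIC 1.0, and the model form is only one simple choice
among many.

     program Tune;
     { Fit  hours = k * E  to past projects by least squares: }
     { k = sum(E*h) / sum(E*E).  The five data points are     }
     { taken from the project table above.                    }
     const
       Points = 5;
     var
       e, h : array[1..Points] of real;
       sumEH, sumEE, k : real;
       i : integer;
     begin
       e[1] :=  141806;  h[1] :=  1;    { convert }
       e[2] := 3958863;  h[2] := 55;    { editor  }
       e[3] :=  453339;  h[3] :=  6;    { letter  }
       e[4] :=  850658;  h[4] := 10;    { paslex  }
       e[5] := 4148886;  h[5] := 70;    { vroff   }
       sumEH := 0;  sumEE := 0;
       for i := 1 to Points do
       begin
         sumEH := sumEH + e[i] * h[i];
         sumEE := sumEE + e[i] * e[i]
       end;
       k := sumEH / sumEE;
       writeln('fitted hours per unit of E: ', k:12:9);
       writeln('predicted hours for E = 1,000,000: ', k * 1.0e6:6:1)
     end.

For these five projects the fitted constant comes out near
0.0000154 hours per unit of E, which happens to lie close to
the 1/(18*3600) implied by the 'T' measure discussed in a
later section.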

     By now, the reader should have noticed that the metrics
discussed so far require the existence of the source code
before they can be calculated.  Clearly, this is a great
limitation on the usefulness of a metric, effectively
eliminating its use as a tool for predicting development
time.  However, if one agrees with the idea that software
complexity adversely impacts the software development
process, it seems reasonable to extend this to say that
software complexity impacts the software maintenance
activity in a like manner, as well as the testing process.

     Therefore, complexity metrics can be used most
effectively to budget and schedule the testing and
maintenance activities, since by this point one typically
has code available for analysis.  Thus, based on the
complexity of a product, one may allocate more or less
time/resources for its maintenance or testing.  However, the
reader is cautioned to keep the discussion of the previous
section in mind.  No metric can ever be 100% accurate, and
likewise, no model of the impact of complexity on any human
activity can be 100% accurate.


                   THE METRIC 1.0 TOOL

     The METRIC 1.0 software complexity analysis tool
calculates a variety of software metrics, and implements a
limited number of models which describe the impact of
complexity on various aspects of the programming process.
The input to the tool is a program written in standard
Pascal which compiles cleanly (METRIC 1.0 does not check for
syntax errors).

     The metrics calculated by METRIC 1.0 include some of
the Software Science measures, using the counting rules
suggested by Salt in the March 1982 issue of ACM SIGPLAN
Notices.  A few differences do exist, however, between the
counting rules defined by Salt and the way METRIC 1.0
counts.  First, no distinction is made between unary and
binary arithmetic operations (Salt records these as separate
uses of '+' and '-').  Second, parameterless function
invocations (such as the Turbo Pascal MEMAVAIL standard
function) are interpreted as references to variables, and
thus such references are counted as operand references.
Third, labels in a label definition are treated as operands
(not operators), and the label indicator (the ':') is
treated as a separate operator.

     The Software Science measures implemented include the E
or effort measure, and the V or volume measure.  The reader
should note that effort is given as a unitless figure, and
therefore should not be construed to mean person-effort
(e.g., person-hours or person-months).

     The Software Science parameters n1, n2, N1, N2, n and N
are also provided, as well as an estimated program "length",
N^, which represents the length of a "pure" program with the
observed number of unique operators and operands.  The
difference between the N and N^ parameters is classically
attributed to certain "impurities" in the code that cause
the observed length to differ from the predicted "pure"
length.  These impurities include things such as "cancelling
of operators", "ambiguous operands", "synonymous operands",
"common subexpressions", "unnecessary replacements", and
"unfactored expressions".  METRIC 1.0 provides a measure
called the "purity ratio", the ratio of N^ to N.  A
"perfect" purity ratio of 1 should indicate that the program
contains no impurities.  No studies have been performed to
assess the relationship of the purity ratio to programming
activities.

     METRIC 1.0 also implements two Software Science based
models which attempt to reflect the impact that software
complexity has on various programming activities.  The first
is the B^ measure, an approximation of the number of errors
inserted into the code during development.  This model is
based on the idea that a program with n unique tokens and N
total tokens will require N*log2(n) "mental discriminations"
to "process" the entire program, since the n unique tokens
could be searched in log2(n) time using a binary search, and
N lookups of the list of n possible tokens would be
required.  It is interesting to note that this relation is
equivalent to the Volume measure (V).  Independent studies
(of activities other than programming) suggest that people
tend to make one mistake every 3,000 - 3,200 mental
discriminations.  Thus, the number of errors one might
expect to be inserted in the code is approximately
(N*log2(n))/3,200.

     The second Software Science based model is the 'T'
measure, which attempts to model the impact of program
complexity on program development time.  If a constant S
could be obtained which represents the speed of a programmer
in terms of the number of mental discriminations made per
second, then a predicted time for development could be
obtained.  Independent psychological studies suggest that
this number is between 5 and 20, with the value 18 typically
being used for Software Science studies.  Using the E metric
as a measure of the number of mental discriminations
involved in developing the software, we have T = E/18 (in
seconds).  E (effort) is essentially the N*log2(n) measure,
but normalized for the level of abstraction at which the
program is written, by dividing by (2/n1)(n2/N2).
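
     The sketch below pulls these definitions together.  It
is not part of the METRIC 1.0 distribution; it simply
recomputes the measures described above from the four basic
Software Science parameters, using as input the values from
the sample METRIC 1.0 report reproduced later in this
section (N^ is computed with Halstead's classical estimator
n1*log2(n1) + n2*log2(n2)).

     program SSDemo;
     const
       UniqOps   = 75;      { n1: unique operators }
       UniqOpnds = 216;     { n2: unique operands  }
       TotOps    = 2258;    { N1: total operators  }
       TotOpnds  = 1458;    { N2: total operands   }
     var
       Vocab, ProgLen          : integer;   { n, N     }
       Volume, Effort, PredLen : real;      { V, E, N^ }
       Purity, Bugs, DevHours  : real;      { N^/N, B^, T in hours }

     function Log2(x : real) : real;
     begin
       Log2 := ln(x) / ln(2.0)
     end;

     begin
       Vocab   := UniqOps + UniqOpnds;      { n = n1 + n2 = 291  }
       ProgLen := TotOps + TotOpnds;        { N = N1 + N2 = 3716 }
       Volume  := ProgLen * Log2(Vocab);    { V = N * log2(n)    }
       { E is V divided by the "level" (2/n1)(n2/N2) }
       Effort  := Volume /
                  ((2.0 / UniqOps) * (UniqOpnds / TotOpnds));
       PredLen := UniqOps * Log2(UniqOps) +
                  UniqOpnds * Log2(UniqOpnds);
       Purity   := PredLen / ProgLen;        { purity ratio   }
       Bugs     := Volume / 3200.0;          { B^ = V / 3,200 }
       DevHours := (Effort / 18.0) / 3600.0; { T = E/S, S=18  }
       writeln('V  = ', Volume:8:0,  '    E = ', Effort:10:0);
       writeln('N^ = ', PredLen:8:0, '    purity = ', Purity:5:2);
       writeln('B^ = ', Bugs:8:1, '    T = ', DevHours:6:1, ' hours')
     end.

Run with these inputs, the sketch reproduces the sample
report's figures (V of about 30415, N^ of 2142, purity ratio
0.58, roughly 10 predicted bugs and about 119 hours of
predicted development time); any small differences are due
to rounding.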

     Another complexity metric calculated by METRIC 1.0 is
the cyclomatic complexity.  Researchers in this field
suggest that 10 be an upper limit on the cyclomatic
complexity of a procedure or subprogram.  METRIC 1.0
calculates the cyclomatic complexity for the entire source
file (i.e., it is summed over all the procedures in the
source file).  In addition, an average cyclomatic complexity
per procedure/function is provided, to help the user
determine whether the source file should be broken up into
additional procedures to reduce the average cyclomatic
complexity.

     Finally, two of the classical size measures are
provided by METRIC 1.0: lines of code and a count of the
procedures/functions in the source file.

     The following display is an illustration of the report
produced by METRIC 1.0 for the tool itself:

-------------------------------------------------------------
| METRIC 1.0 Report for: METRIC.PAS                         |
| Copyright 1987 by SET Laboratories, Inc.                  |
| ----------------------------------------                  |
|                                                           |
| n1: 75        n2: 216        n: 291                       |
| N1: 2258      N2: 1458       N: 3716      N^: 2142        |
| Purity Ratio: 0.58                                        |
|                                                           |
| Volume: 30415                                             |
| Effort: 7698796                                           |
| Predicted Bugs: 10                                        |
| Predicted Development Time (minutes): 7129                |
|                              (hours): 119                 |
|                                                           |
|                                                           |
| Cyclomatic Number: 105                                    |
| Average Cyclomatic Number: 5                              |
|                                                           |
| Lines of Code: 887                                        |
| Number of Procedures/Functions: 23                        |
| Number of Executable Semi-colons: 412                     |
|                                                           |
| WARNING! See Disclaimer In User Guide Before Using Output |
-------------------------------------------------------------

It should be clear to the user after studying this document
that there is no single "magic number" that will solve all
the ills of software project management.  Even experts in
this field cannot agree on which measure works best.  What
METRIC 1.0 attempts to do is provide an easy way of
obtaining several of the popular metrics for a piece of
Pascal source code.  The user may then, after careful study
and analysis of previous projects and their measured
complexity, decide which, if any, of the metrics to use.
Just as important, the user is also responsible for
determining how the metrics are to be used.


     THE ROLE OF DATA COLLECTION IN METRIC APPLICATION

     One point which has been consistently stressed in this
document is that what may work for one person, in one
environment and one application, may not work for someone
else in a different environment and with a different
application.  Thus, if it is determined that metrics will be
used to help manage a development project, one must "tune"
the use of the metric to his particular situation.

     To do this "tuning", one must compute the metrics for
(many) past projects.  The measures obtained can then be
related to the performance actually encountered (e.g.,
errors, development time, etc.).  No doubt, various
relationships will be observed between the measured
complexity of the projects and the performance aspects of
interest.

     For example, perhaps one may notice that the actual
number of errors usually lies within 10% of B^.  Once
sufficient confidence is obtained, this knowledge may be
used to estimate the number of errors in a project
undergoing testing.  If the user is very risk averse,
testing might continue until (B^ * 1.10) errors - 10% more
than predicted - are found (or a significant amount of
testing time has been spent and no additional errors are
found).  On the other hand, if the user is more of a risk
taker, testing might continue only until (B^ * 0.90) errors
- 10% less than predicted - are encountered.  This estimate
could be used to determine how much time to allocate for
testing, and to determine when testing is complete.
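
     As a trivial sketch of such a stopping rule (the 10%
margins are just the example figures used above, not
recommended values, and the B^ figure would come from a
METRIC report for the project in question):

     program StopRule;
     var
       BHat : real;
     begin
       BHat := 10.0;   { e.g., predicted bugs from a METRIC run }
       writeln('risk-averse: stop after ',
               round(BHat * 1.10), ' errors found');
       writeln('risk-taking: stop after ',
               round(BHat * 0.90), ' errors found')
     end.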

     Without a historical database of previous projects, it
is not clear that complexity metrics should be used for any
form of prediction or estimation.  Even with a large
database of information on previous projects, one is still
making the assumption that the project and programmers in
question are similar enough to the projects in the database
to make use of those experiences.

     In order to build such a database, the minimum
information which should be maintained includes:

a.  software complexity measures - it would be even
    better if the actual source code could be
    maintained, so new metrics could be applied to the
    historical data as they become available.

b.  development time - the keyword here is CONSISTENT
    measurement of the development time - if design
    time is included in one project, it should be
    included in all - otherwise, one ends up comparing
    apples and oranges.

c.  error counts - as with development time, error
    counts for a project should be consistent - if
    only errors encountered during integration testing
    are counted in one project, errors encountered
    during integration testing should be available for
    each of the other projects in the database.

d.  maintenance effort - in addition to recording how
    much time is spent in maintenance, the actual
    maintenance tasks themselves should be recorded, so
    that allowances can be made during analysis to
    reflect the inherent difficulty of the activity.

Naturally, one may wish to record additional data; the above
information should serve as a minimal data collection
process.  It is important that this data collection be an
on-going effort.  The database should be kept up to date,
and metric tuning should be redone from time to time using
the current database.  Otherwise, the metric use will not
reflect recent changes in the environment in which projects
are being developed.  For help in establishing such a data
collection process, contact SET Laboratories.
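
     For illustration only, a record along the following
lines could hold the minimum items listed above for one
completed project.  The type and field names are
hypothetical (and the string[n] declarations are Turbo
Pascal style), not a format used by METRIC 1.0:

     program HistDemo;
     type
       ProjectEntry = record
         Name        : string[32];  { project identifier           }
         SourcePath  : string[64];  { where the source is archived }
         Volume      : real;        { complexity measures ...      }
         Effort      : real;
         Cyclomatic  : integer;
         LinesOfCode : integer;
         DevHours    : real;        { consistently measured        }
         ErrorCount  : integer;     { same counting phase for all  }
         MaintHours  : real;        { plus a log of the tasks      }
       end;
     var
       p : ProjectEntry;
     begin
       { values for the first project in the table above }
       p.Name := 'convert';  p.Effort := 141806;  p.DevHours := 1;
       writeln(p.Name, ': ', p.DevHours:4:1, ' hours')
     end.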


       USING, DISTRIBUTING AND REGISTERING METRIC 1.0

     METRIC 1.0 is distributed under the "shareware"
concept.  This means that one can make copies of METRIC 1.0,
distribute them to others (in fact, we wish you would) and
use METRIC 1.0, as long as the following conditions are met:

     You are encouraged to copy and share this program
     with other users, on the conditions that the program
     is not distributed in modified form, that no fee is
     charged for the program beyond reasonable copying
     and/or media charges, and that this notice is not
     bypassed or removed.

     Also, please note that since the impact of software
complexity on the process of software development or
maintenance depends on a number of factors, not all of which
are reflected in software complexity metrics, SET
Laboratories cannot warrant the fitness of the METRIC 1.0
tool or this manual for any particular purpose.  Software
metrics in general, and METRIC 1.0 in particular, should be
only one of a number of tools used by an individual to
manage the software development and/or maintenance activity.
Because of this, please observe the following disclaimer:

     This program is distributed on an "AS-IS" basis
     without warranty.  The entire risk as to the quality
     and performance of the program is with the user.  No
     warranties as to the quality/fitness of the program
     are made.

     If after using METRIC 1.0 you think it is a useful tool
- especially if you are using it professionally - you may
register it for $99 with SET Laboratories, Inc. at the
following address:

          SET Laboratories, Inc.
          PO Box 03963
          Portland, OR 97203

     Registration will put your name on a list to receive
the next major release of METRIC, which will implement
additional software complexity metrics and models.  Also,
starting with the next major release, we will support
additional languages besides Pascal - when registering,
please state your language preference.  Additionally, we
will be happy to provide limited support for registered
users of METRIC 1.0, both with the package and with software
complexity metrics in general.

     Even if you are not a registered user, if you use
METRIC 1.0 and have any comments, ideas, or just want to say
"hi", please write to us at the above address.


                    TECHNICAL MATTERS

     The METRIC 1.0 distribution package should contain five
files.  The first file, README.LIS, you have probably
already read; it is simply half a page or so describing how
to proceed.  The file METRIC.COM is the tool itself.  To run
it, simply type:

     METRIC

You will be asked for the file name of the program to
analyze (the suffix .PAS is assumed; if the file name does
not end in .PAS, specify the suffix explicitly).  After
entering the file name and a <CR>, a file called xxx.RPT
(where 'xxx' is the name of the source file) will be
created, containing the various complexity measures
calculated for the source program.

     Two additional files, PASRESWO.TAB and PASSTATE.TAB,
are also included.  The first lists all the Pascal tokens
which are considered operators, and the second lists all the
tokens which terminate a Pascal statement.  The first file
can be modified to reflect additional operators present due
to a local language extension, etc. (if adding tokens,
please ensure that the file remains ordered and that the new
tokens are entered in lower case!).  The remaining file,
METRIC.LIS, is this document.

              AN ANNOTATED BIBLIOGRAPHY OF
             THE SOFTWARE METRIC LITERATURE


[1]  Baker, A., "A Comparison of Measures of Control Flow
     Complexity", IEEE Transactions on Software Engineering,
     November 1980, pp 506-511.

     Compares various measures of the complexity of program
     control flow.

[2]  Curtis, B., S. Sheppard, P. Milliman, M. Borst and T.
     Love, "Measuring the Psychological Complexity of
     Software Maintenance Tasks with the Halstead and McCabe
     Metrics", IEEE Transactions on Software Engineering,
     March 1979, pp 96-104.

     Describes a controlled experiment where performance on
     various maintenance tasks was related to software
     complexity measures.

[3]  Evangelist, W., "Software Complexity Metric Sensitivity
     to Program Structuring Rules", The Journal of Systems
     and Software, August 1983, pp 231-243.

     Investigates the impact of various "structuring rules"
     on the complexity measurements obtained from various
     metrics.

[4]  Fitzsimmons, A. and T. Love, "A Review and Evaluation
     of Software Science", ACM Computing Surveys, March
     1978, pp 3-18.

     A classical paper discussing Software Science, its
     foundations and the empirical evidence which supports
     it.

[5]  Halstead, M., "Natural Laws Controlling Algorithm
     Structure?", ACM SIGPLAN Notices, February 1972, pp
     19-26.

     The seminal paper on Software Science - interesting
     reading.

[6]  Halstead, M., Elements of Software Science, Elsevier
     North Holland, New York, 1977.

     The most complete description of Software Science
     available - a must for anyone who is seriously
     interested in the area.

[7]  Harrison, W., K. Magel, R. Kluczny and A. DeKock,
     "Applying Software Metrics to Program Maintenance",
     IEEE Computer, September 1982, pp 65-79.

     Description and discussion of many popular complexity
     metrics in each of the four categories: control flow,
     size, data and hybrid.

[8]  Harrison, W., "Software Complexity Metrics", Journal of
     Systems Management, July 1984, pp 28-30.

     General discussion of software complexity metrics.

[9]  Harrison, W., "Applying McCabe's Complexity Measure to
     Multiple Exit Programs", Software - Practice &
     Experience, October 1984, pp 1004-1007.

     Describes a new "short-cut" method of calculating this
     complexity measure, and provides a set of theorems and
     proofs supporting the technique.

[10] Lassez, J., D. van der Knijff, J. Shepherd and C.
     Lassez, "A Critical Examination of Software Science",
     Journal of Systems and Software, May 1982, pp 105-112.

     Describes problems in the application of Software
     Science and its counting rules.

[11] McCabe, T., "A Complexity Measure", IEEE Transactions
     on Software Engineering, December 1976, pp 308-320.

     The first paper to combine the issue of control flow
     complexity with graph theory - a real classic in
     software metric circles.

[12] Oulsnam, G., "Cyclomatic Numbers Do Not Measure
     Complexity of Unstructured Programs", Information
     Processing Letters, December 1979, pp 207-211.

     Describes problems with McCabe's measure of complexity
     when applied to unstructured programs.

[13] Salt, N., "Defining Software Science Counting
     Strategies", ACM SIGPLAN Notices, March 1982, pp 58-67.

     Describes the set of Software Science counting rules
     which are used in the METRIC 1.0 tool.

[14] Shen, V., S. Conte and H. Dunsmore, "Software Science
     Revisited: A Critical Analysis of the Theory and Its
     Empirical Support", IEEE Transactions on Software
     Engineering, March 1983, pp 155-165.

     Critically discusses the theory behind Software Science
     and gives examples of situations where Software Science
     does not work.


                     Table of Contents


     INTRODUCTION
     SOFTWARE COMPLEXITY MEASURES
     USING METRICS
     THE METRIC 1.0 TOOL
     THE ROLE OF DATA COLLECTION IN METRIC APPLICATION
     USING, DISTRIBUTING AND REGISTERING METRIC 1.0
     TECHNICAL MATTERS
     AN ANNOTATED BIBLIOGRAPHY OF THE SOFTWARE METRIC LITERATURE