Homework #2
SE4351 / Spring 2021
Instructor Dr. Klyne Smith
Office Location ECSN 3.928
Cell Phone XXXXXXXXXX
Email Address XXXXXXXXXX
Purpose
To better understand Requirements Traceability
Assignment Overview
• Research Requirements Traceability (at least 2 sources)
• Research how to write a research paper
• Create a paper: 2-4 pages, column-formatted, with the following topic sections
o Abstract
o Introduction
o Background
o Findings
o Conclusion
Deliverables
1) HW Assignment 2 uploaded to eLearning as a PDF.
2) HW Assignment should be named as follows: FirstName-LastName-SE4351-Spring2021-HW2.PDF
Notes
Page length is less important than the quality of the work and the relationship between deliverables.
Plagiarism is not acceptable and will result in a score of 0.
Writing Good Software Engineering Research Papers
Minitutorial
Mary Shaw
Carnegie Mellon University
XXXXXXXXXX
Abstract
Software engineering researchers solve problems of
several different kinds. To do so, they produce several
different kinds of results, and they should develop
appropriate evidence to validate these results. They often
report their research in conference papers. I analyzed the
abstracts of research papers submitted to ICSE 2002 in
order to identify the types of research reported in the
submitted and accepted papers, and I observed the
program committee discussions about which papers to
accept. This report presents the research paradigms of
the papers, common concerns of the program committee,
and statistics on success rates. This information should
help researchers design better research projects and write
papers that present their results to best advantage.
Keywords: research design, research paradigms,
validation, software profession, technical writing
1. Introduction
In software engineering, research papers are customary
vehicles for reporting results to the research community.
In a research paper, the author explains to an interested
reader what he or she accomplished, and how the author
accomplished it, and why the reader should care. A good
research paper should answer a number of questions:
♦ What, precisely, was your contribution?
• What question did you answer?
• Why should the reader care?
• What larger question does this address?
♦ What is your new result?
• What new knowledge have you contributed that
the reader can use elsewhere?
• What previous work (yours or someone else’s)
do you build on? What do you provide a superior
alternative to?
• How is your result different from and better than
this prior work?
• What, precisely and in detail, is your new result?
♦ Why should the reader believe your result?
• What standard should be used to evaluate your
claim?
• What concrete evidence shows that your result
satisfies your claim?
If you answer these questions clearly, you’ll probably
communicate your result well. If in addition your result
represents an interesting, sound, and significant contribu-
tion to our knowledge of software engineering, you’ll
have a good chance of getting it accepted for publication
in a conference or journal.
Other fields of science and engineering have well-
established research paradigms. For example, the
experimental model of physics and the double-blind
studies of medicines are understood, at least in broad
outline, not only by the research community but also by
the public at large. In addition to providing guidance for
the design of research in a discipline, these paradigms
establish the scope of scientific disciplines through a
social and political process of "boundary setting" [5].
Software engineering, however, has not yet developed
this sort of well-understood guidance. I previously [19,
20] discussed early steps toward such understanding,
including a model of the way software engineering
techniques mature [17, 18] and critiques of the lack of
rigor in experimental software engineering [1, 22, 23, 24,
25]. Those discussions critique software engineering
research reports against the standards of classical
paradigms. The discussion here differs from those in that
this discussion reports on the types of papers that are
accepted in practice as good research reports. Another
current activity, the Impact Project [7], seeks to trace the
influence of software engineering research on practice.
The discussion here focuses on the paradigms rather than
the content of the research.
This report examines how software engineers answer
the questions above, with emphasis on the design of the
research project and the organization of the report. Other
sources (e.g., [4]) deal with specific issues of technical
writing. Very concretely, the examples here come from
the papers submitted to ICSE 2002 and the program
committee review of those papers. These examples report
research results in software engineering. Conferences
often include other kinds of papers, including experience
reports, materials on software engineering education, and
opinion essays.
2. What, precisely, was your contribution?
Before reporting what you did, explain what problem
you set out to solve or what question you set out to answer
—and why this is important.
2.1 What kinds of questions do software
engineers investigate?
Generally speaking, software engineering researchers
seek better ways to develop and evaluate software. Devel-
opment includes all the synthetic activities that involve
creating and modifying the software, including the code,
design documents, documentation, etc. Evaluation
includes all the analytic activities associated with predict-
ing, determining, and estimating properties of the software
systems, including both functionality and extra-functional
properties such as performance or reliability.
Software engineering research answers questions about
methods of development or analysis, about details of
designing or evaluating a particular instance, about gener-
alizations over whole classes of systems or techniques, or
about exploratory issues concerning existence or feasibil-
ity. Table 1 lists the types of research questions that are
asked by software engineering research papers and
provides specific question templates.
Table 1. Types of software engineering research questions

Method or means of development:
    How can we do/create/modify/evolve (or automate doing) X?
    What is a better way to do/create/modify/evolve X?

Method for analysis or evaluation:
    How can I evaluate the quality/correctness of X?
    How do I choose between X and Y?

Design, evaluation, or analysis of a particular instance:
    How good is Y? What is property X of artifact/method Y?
    What is a (better) design, implementation, maintenance, or adaptation for application X?
    How does X compare to Y?
    What is the current state of X / practice of Y?

Generalization or characterization:
    Given X, what will Y (necessarily) be?
    What, exactly, do we mean by X? What are its important characteristics?
    What is a good formal/empirical model for X?
    What are the varieties of X, how are they related?

Feasibility study or exploration:
    Does X even exist, and if so what is it like?
    Is it possible to accomplish X at all?
The first two types of research produce methods of
development or of analysis that the authors investigated in
one setting, but that can presumably be applied in other
settings. The third type of research deals explicitly with
some particular system, practice, design or other instance
of a system or method; these may range from narratives
about industrial practice to analytic comparisons of
alternative designs. For this type of research the instance
itself should have some broad appeal—an evaluation of
Java is more likely to be accepted than a simple evaluation
of the toy language you developed last summer.
Generalizations or characterizations explicitly rise above
the examples presented in the paper. Finally, papers that
deal with an issue in a completely new way are sometimes
treated differently from papers that improve on prior art,
so "feasibility" is a separate category (though no such
papers were submitted to ICSE 2002).
Newman's critical comparison of HCI and traditional
engineering papers [12] found that the engineering papers
were mostly incremental (improved model, improved
technique), whereas many of the HCI papers broke new
ground (observations preliminary to a model, brand new
technique). One reasonable interpretation is that the
traditional engineering disciplines are much more mature
than HCI, and so the character of the research might
reasonably differ [17, 18]. Also, it appears that different
disciplines have different expectations about the "size" of
a research result—the extent to which it builds on existing
knowledge or opens new questions. In the case of ICSE,
the kinds of questions that are of interest and the minimum
interesting increment may differ from one area to another.
2.2 Which of these are most common?
The most common kind of ICSE paper reports an
improved method or means of developing software—that
is, of designing, implementing, evolving, maintaining, or
otherwise operating on the software system itself. Papers
addressing these questions dominate both the submitted
and the accepted papers. Also fairly common are papers
about methods for reasoning about software systems,
principally analysis of correctness (testing and
verification). Analysis papers have a modest acceptance
edge in this very selective conference.
Table 2 gives the distribution of submissions to ICSE
2002, based on reading the abstracts (not the full papers—
but remember that the abstract tells a reader what to ex-
pect from the paper). For each type of research question,
the table gives the number of papers submitted and ac-
cepted, the percentage of the total paper set of each kind,
and the acceptance ratio within each type of question.
Figures 1 and 2 show these counts and distributions.
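As a brief arithmetic note (implied by the table's "Ratio Acc/Sub" column heading rather than stated explicitly in the paper), the acceptance ratio is computed per question type as:

\[
\text{Ratio}_{\text{Acc/Sub}} = \frac{\text{papers accepted of that type}}{\text{papers submitted of that type}}
\]

For example, a question type that drew 100 submissions and 13 acceptances would have a ratio of 13%; these numbers are illustrative only, not the redacted values in Table 2.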
Table 2. Types of research questions represented in ICSE 2002 submissions and acceptances
Type of question                                            Submitted     Accepted      Ratio Acc/Sub
Method or means of development                              XXXXXXXXXX    XXXXXXXXXX    13%
Method for analysis or evaluation                           XXXXXXXXXX    XXXXXXXXXX    20%
Design, evaluation, or analysis of a particular instance    XXXXXXXXXX    XXXXXXXXXX    XXXXXXXXXX