Last modified: Nov 20, 2013
… because everyone loves writing documentation.
McConnell has a good & balanced discussion on this.
Modern focus has shifted considerably away from commenting bodies towards API documentation.
double m; // mean average
double s; // standard deviation
double meanAverage;
double standardDeviation;
// Sum up the data
double sum = 0.0;
double sumSquares = 0.0;
// Add up the sums
for (int i = 0; i < numScores; ++i)
{
sum += scores[i];
sumSquares += scores[i]*scores[i];
}
// Compute the average and standard
// deviation
double meanAverage = sum / numScores;
double standardDeviation =
sqrt ((sumSquares - numScores*meanAverage*meanAverage)
/(numScores - 1.0));
// Subtract the average from each data
// item and divide by the standard
// deviation.
for (int i = 0; i < numScores; ++i)
{
scores[i] = (scores[i] - meanAverage)
/ standardDeviation;
}
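For reference, the loop accumulates the running sums Σxᵢ and Σxᵢ², from which the sample mean and sample standard deviation follow by the one-pass identity Σ(xᵢ − x̄)² = Σxᵢ² − n·x̄²:

```latex
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
s = \sqrt{\frac{\sum_{i=1}^{n} x_i^2 \;-\; n\,\bar{x}^2}{\,n-1\,}}
```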
// Compute summary statistics
double sum = 0.0;
double sumSquares = 0.0;
for (int i = 0; i < numScores; ++i)
{
sum += scores[i];
sumSquares += scores[i]*scores[i];
}
double meanAverage = sum / numScores;
double standardDeviation =
sqrt ((sumSquares - numScores*meanAverage*meanAverage)
/ (numScores - 1.0));
// Normalize the scores
for (int i = 0; i < numScores; ++i)
{
scores[i] = (scores[i] - meanAverage)
/ standardDeviation;
}
// Compute summary statistics
double sum = 0.0;
double sumSquares = 0.0;
for (int i = 0; i < numScores; ++i)
{
sum += scores[i];
sumSquares += scores[i]*scores[i];
}
double meanAverage = sum / numScores;
double standardDeviation =
sqrt ((sumSquares - numScores*meanAverage*meanAverage)
/(numScores - 1.0));
// Normalize the scores
for (int i = 0; i < numScores; ++i)
scores[i] = (scores[i] - meanAverage)
/ standardDeviation;
void computeSummaryStatistics (
const double* scores, // inputs
int numScores,
double& meanAverage, // outputs
double& standardDeviation)
{
double sum = 0.0;
double sumSquares = 0.0;
for (int i = 0; i < numScores; ++i)
{
sum += scores[i];
sumSquares += scores[i]*scores[i];
}
meanAverage = sum / numScores;
standardDeviation =
sqrt ((sumSquares - numScores*meanAverage*meanAverage)
/(numScores - 1.0));
}
void normalizeData (double* data,
int numData,
double center,
double spread)
{
for (int i = 0; i < numData; ++i)
data[i] = (data[i] - center) / spread;
}
⋮
double meanAverage;
double standardDeviation;
computeSummaryStatistics (scores, numScores,
meanAverage, standardDeviation);
normalizeData (scores, numScores,
meanAverage, standardDeviation);
void computeSummaryStatistics (
const double* scores, // inputs
int numScores,
double& meanAverage, // outputs
double& standardDeviation)
{
double sum = 0.0;
double sumSquares = 0.0;
for (int i = 0; i < numScores; ++i)
{
sum += scores[i];
sumSquares += scores[i]*scores[i];
}
meanAverage = sum / numScores;
standardDeviation =
sqrt ((sumSquares - numScores*meanAverage*meanAverage)
/(numScores - 1.0));
}
void normalizeData (double* data,
int numData,
double center,
double spread)
{
for (int i = 0; i < numData; ++i)
data[i] = (data[i] - center) / spread;
}
⋮
double meanAverage;
double standardDeviation;
computeSummaryStatistics (scores, numScores,
meanAverage, standardDeviation);
normalizeData (scores, numScores,
meanAverage, standardDeviation);
void computeSummaryStatistics (
const double* scores, // inputs
int numScores,
double& meanAverage, // outputs
double& standardDeviation)
{
double sum = accumulate(
scores, scores+numScores, 0.0);
double sumSquares = accumulate(
scores, scores+numScores, 0.0,
[](double x, double y)
{return x + y*y;});
meanAverage = sum / numScores;
standardDeviation =
sqrt ((sumSquares - numScores*meanAverage*meanAverage)
/(numScores - 1.0));
}
⋮
// Normalize the scores
double meanAverage;
double standardDeviation;
computeSummaryStatistics (scores, numScores,
meanAverage, standardDeviation);
transform (
scores, scores+numScores,
scores,
[=] (double d) {
return (d - meanAverage)
/ standardDeviation;});
How many forms of software documentation charting do you know?
Earliest examples were automatic flowchart generators
Generating flowcharts from source code.
As flowcharts declined in popularity, so did the demand for these tools.
A hallmark of so-called CASE (Computer-Aided Software Engineering) systems
API documentation tools are now more common
Reflect modern emphasis on re-usable interfaces
Generated from declarations and specially formatted blocks of comments embedded in the source code
Encourages updating comments as code is modified
Comments become a legitimately useful tool for application writers.
Application writers have less need to access actual code.
Generate linked documents to facilitate browsing of referenced type names and other entities
Some IDEs understand this markup as well and use it to enhance “live” help while editing code.
Perhaps the best known tool in this category is Javadoc:
/**
*
*/
package edu.odu.cs.extract.control;
import org.jdom.Document;
import edu.odu.cs.extract.dataflow.Dataflow;
import edu.odu.cs.extract.dataflow.QuickTransformer;
import edu.odu.cs.extract.dataflow.TransformationResult;
import edu.odu.cs.extract.inputprocessing.segmentation.Segmentation;
import edu.odu.cs.extract.utils.Properties;
/**
 * Transforms a PDF file dataflow into Raw IDM by attempting a direct
 * translation of text PDF, passing pages thought to be scanned on for
 * OCR-to-rawIDM conversion, and then trimming to a selected number of pages.
*
* @author zeil
*
*/
public class SegmentationTransformer extends QuickTransformer {
/**
*
*/
public SegmentationTransformer() {
super();
}
/* (non-Javadoc)
* @see edu.odu.cs.extract.dataflow.ThreadedTransformer#doTransform(edu.odu.cs.extract.dataflow.Dataflow[])
*/
@Override
public TransformationResult doTransform(Dataflow[] in) throws Exception {
String status = "success";
String message = "OK";
IDMDataflow inputDF = (IDMDataflow) in[0];
Document unsegmentedIDM = inputDF.getDocument();
String mergeFailed = unsegmentedIDM.getRootElement().getAttributeValue("OCRmerge");
if (mergeFailed != null && "failed".equals(mergeFailed)) {
status = "warning";
message = "unable to merge pages from OCR";
}
// Segment document
Document segmentedIDM = new Segmentation(unsegmentedIDM).reSegment();
IDMDataflow outputDF = new IDMDataflow (in[0].getTrace(), segmentedIDM);
return new TransformationResult(outputDF,status, message, null);
}
@Override
public String getOutputExtension() {
Properties p = Properties.getProperties();
return p.getProperty(Properties.Names.SEGMENTATION_OUT_EXT);
}
}
Common Javadoc Markup
@author authorName
@version versionNumber
@param name description
@return description
@throws exceptionClassName description
@see crossReference
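A sketch of these tags in use (the `Stats` class and `average` method here are invented for illustration, not part of the project shown above):

```java
/**
 * Simple statistics utilities.
 *
 * @author jdoe
 * @version 1.0
 */
class Stats {
    /**
     * Computes the mean of the first numScores entries of scores.
     *
     * @param scores the data values to average
     * @param numScores how many entries of scores to use
     * @return the mean of the selected entries
     * @throws IllegalArgumentException if numScores is not positive
     * @see java.util.Arrays
     */
    public static double average (double[] scores, int numScores) {
        if (numScores <= 0)
            throw new IllegalArgumentException("numScores must be positive");
        double sum = 0.0;
        for (int i = 0; i < numScores; ++i)
            sum += scores[i];
        return sum / numScores;
    }
}
```

javadoc renders each tag as a labeled section of the generated page; @see entries become cross-reference links in the generated HTML.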
Running javadoc
Command line
javadoc -d destinationDir -sourcepath sourceCodeDir \
    -link http://docs.oracle.com/javase/7/docs/api/
Eclipse: Project ⇒ Generate Javadoc ...
ant
<javadoc packagenames="edu.odu.cs.*"
destdir="target/javadoc"
classpathref="javadoc.classpath" Author="yes"
Version="yes" Use="yes" defaultexcludes="yes">
<fileset dir="." defaultexcludes="yes">
<include name="extractor/src/main/java/**" />
<include name="generatedSource/gen-src/**" />
<exclude name="**/*.html" />
</fileset>
<doctitle><![CDATA[<h1>ODU CS Extract
Project</h1>]]></doctitle>
</javadoc>
Output can be HTML, LaTeX, or RTF
Running doxygen
Command line
doxygen configFile
The config file can contain any of a bewildering set of options in typical property-file style:
PROJECT_NAME = C++ Spreadsheet
INPUT = src/model
OUTPUT_DIRECTORY = target/doc
EXTRACT_ALL = YES
CLASS_DIAGRAMS = YES
GENERATE_HTML = YES
GENERATE_LATEX = YES
USE_PDFLATEX = YES
Eclipse: Eclox plugin
Ant (3rd-party contributed task)
Because a documentation generator needs to parse module and function structure and function parameters, a distinct parser is needed for each programming language.
This leads to a variety of language-specific tools, e.g.,
JSDoc for JavaScript
YARD for Ruby
Sandcastle for .NET
We’ve already looked at JUnit, which can be used to generate test reports like this one.
This is generated in ant via the junitreport task:
<project name="code2html" basedir="." default="build">
<record name="ant.log" action="start" append="false" />
<taskdef classpath="JFlex.jar" classname="JFlex.anttask.JFlexTask" name="jflex" />
<echo>loading build-${os.name}.paths</echo>
<include file="build-${os.name}.paths"/>
<target name="generateSource">
<mkdir dir="src/main/java"/>
<jflex file="src/main/jflex/code2html.flex"
destdir="src/main/java"/>
<jflex file="src/main/jflex/code2tex.flex"
destdir="src/main/java"/>
<jflex file="src/main/jflex/list2html.flex"
destdir="src/main/java"/>
<jflex file="src/main/jflex/list2tex.flex"
destdir="src/main/java"/>
</target>
<target name="compile" depends="generateSource">
<mkdir dir="target/classes"/>
<javac srcdir="src/main/java" destdir="target/classes"
source="1.6" includeantruntime="false"/>
</target>
<target name="compile-tests" depends="compile">
<mkdir dir="target/test-classes"/>
<javac srcdir="src/test/java" destdir="target/test-classes"
source="1.6" includeantruntime="false">
<classpath refid="testCompilationPath"/>
</javac>
</target>
<target name="test" depends="compile-tests">
<property name="mypath" refid="testExecutionPath"/>
<echo>testExecutionPath is ${mypath}</echo>
<echoproperties/>
<mkdir dir="target/test-results/details"/>
<junit printsummary="yes"
haltonfailure="yes" fork="no"
>
<classpath refid="testExecutionPath"/>
<formatter type="xml"/>
<batchtest todir="target/test-results/details">
<fileset dir="target/test-classes">
<include name="**/*Test*.class"/>
</fileset>
</batchtest>
</junit>
<junitreport todir="target/test-results">
<fileset dir="target/test-results/details">
<include name="TEST-*.xml"/>
</fileset>
<report format="frames" todir="target/test-results/html"/>
</junitreport>
</target>
<target name="build" depends="test">
<jar destfile="codeAnnotation.jar" basedir="target/classes">
<manifest>
<attribute name="Main-Class"
value="edu.odu.cs.code2html.Code2HTML"/>
</manifest>
</jar>
</target>
<target name="clean">
<delete dir="target"/>
</target>
</project>
Other common test reports
Javadoc of unit test code
Coverage reports
Many tools that we will cover later for analyzing code can produce useful (or at least, impressive) documentation as a side effect.
Configuration managers (to be covered later) generate reports about the dependencies among the software components.
You can also add instructions to your build manager to post files to a website.
Sync to a website on a local file system:
<target name="deploy" depends="build"
description="Sync with the website directory"
>
<sync todir="${deploymentDestination}/reports"
includeEmptyDirs="true" granularity="2000">
<fileset dir="target/reports">
<include name="**/*.html"/>
<include name="**/*.png"/>
<exclude name="**/*.xml"/>
</fileset>
<preserveintarget>
<include name="**/.ht*"/>
</preserveintarget>
</sync>
</target>
Posting to a website
A website on a remote machine:
<target name="publish-reports" depends="reports"
description="send project reports to web server">
<tar destfile="target/project-reports.tz" compression="gzip"> ➊
<tarfileset dir="target">
<include name="project-reports/**/*"/>
</tarfileset>
</tar>
<input message="login name for ${webserver}:" addproperty="scp.login"/> ➋
<input message="password for ${webserver}:" addproperty="scp.password"/>
<scp file="target/project-reports.tz" sftp="true" ➌
remoteToDir="${scp.login}:${scp.password}@${webserver}:${website.path}"/>
<sshexec host="${webserver}" username="${scp.login}" password="${scp.password}"
command="cd ${website.path}; tar xzf project-reports.tz"/> ➍
</target>
A software forge is a collection of web services for the support of collaborative software development:
Project web sites
Communications (e.g., messaging, wikis, announcements)
Bug reporting and tracking
Project personnel management
Forge Examples
Among the best known forges are
the original, SourceForge, (1999)
Google Code, (2006)
GitHub, (2008)
The CS 350 course has its own forge