Recording

These slides accompany a recorded video.
Example of Early Analysis

This lesson works through an example of the early stages of analysis.

- Available documentation is natural language, fairly general in nature.
- We’re working on a domain model.
- What do we hope to accomplish?
- Mistakes will be made!
ODU offers a number of courses via the internet. A common requirement among these courses is for a system of online assessment. An assessment is any form of graded question-and-answer activity. Examples include exams, quizzes, exercises, and self-assessments. In preparation for automating such a system, our group has undertaken a study of assessment techniques in traditional classrooms.
An assessment can contain a number of questions. Questions come in many forms, including true/false, single-choice from among multiple alternatives, multiple choices, fill-in-the-blank, and essay. There may be other forms as well.
Students take assessments that are administered by instructors. The students’ responses to each question are collected by the instructor, who grades them by comparison to a rubric for each question. The instructor may also elect to provide feedback (written comments), particularly about incorrect responses.
A total score for the assessment is computed by the instructor. If this is a self-assessment, the score is for informational purposes only. For other kinds of assessments, the instructor records the score in his/her grade book.
Information is returned to the student about their performance. At a minimum, the student would learn of their score and any instructor-provided feedback. Depending upon the instructor, students may also receive the questions, a copy of their own responses, and the instructor’s correct answer.
For the initial list, mark up the description, looking for noun phrases and verb phrases.
Start by setting up some class diagrams.
- Assessment
- Exam
- Quiz
- Exercise
- Self-Assessment
- Question
- True/False Question
- Single-Choice Question
- Multiple Choices Question
- Fill-In-The-Blank Question
- Essay Question
- Student
- Instructor
- Response
- Rubric
- Feedback
- Score
- Grade Book
- Information
- Performance
- Correct Answer
Since all of the various kinds of assessments are likely to have similar attributes and operations, I’m going to set them aside for now.
I’ll do the same with the various kinds of questions.
- Assessment
- Question
- Student
- Instructor
- Response
- Rubric
- Feedback
- Score
- Grade Book
- Information
- Performance
- Correct Answer
Now fill in the operations known so far:
- contain (questions)
- take (assessment)
- administer
- collect (responses)
- grade
- provide (feedback)
- compute (score)
- record (score)
- return (information)
An assessment can contain a number of questions.
This is really a statement about attributes of an assessment.

| Assessment |
|---|
| : seq Question |
Students take assessments that are administered by instructors.
The language (plurals) is a bit tricky.
Instructors administer an assessment to an entire class.
Each student individually takes the assessment.
| Instructor |
|---|
| administer(: Assessment, to: seq Student) |
Taking or Administering?

Surprised that I put that in Instructor?

Remember the basic rule: if A does B to C, then “do B” is usually a responsibility of C.

- It would not be a responsibility of the assessment.
- Could it be a responsibility of the Student?
What’s involved in administering an assessment?
The problem statement tells us:
Students take assessments that are administered by instructors. The students’ responses to each question are collected by the instructor, who grades them … The instructor may also elect to provide feedback (written comments), particularly about incorrect responses.
A total score for the assessment is computed by the instructor. … Information is returned to the student about their performance.
We’re looking at the instructor’s method for administering an assessment.
taking an assessment
So we add the student’s role into our model:
| Student |
|---|
| take(:Assessment) |

| Instructor |
|---|
| administer(: Assessment, to: seq Student) |
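As a rough sketch of the interaction so far (hypothetical Python; all attribute names and bodies are our own invention, not from the problem statement):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str

@dataclass
class Assessment:
    questions: List[Question] = field(default_factory=list)

@dataclass
class Response:
    answers: List[str] = field(default_factory=list)

class Student:
    def take(self, assessment: Assessment) -> Response:
        # Placeholder: a real student would answer each question here.
        return Response(answers=["" for _ in assessment.questions])

class Instructor:
    def administer(self, assessment: Assessment, to: List[Student]) -> List[Response]:
        # Each student individually takes the assessment;
        # the instructor collects one Response per student.
        return [student.take(assessment) for student in to]
```

The point of the sketch is only the division of responsibility: `administer` lives in `Instructor` and delegates to each student's `take`.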
The students’ responses to each question are collected by the instructor
This is really just describing the output from the request sent to students asking them to take the assessment.
| Student |
|---|
| take(:Assessment): Response |
the instructor, who grades them by comparison to a rubric for each question.
| Response |
|---|
| grade(:Rubric) |
Working with Rubrics
We are told there is a separate rubric for each question. So the “comparison” is between a response to a single question and a rubric.
We don’t know what the proper terminology here would be, so we use a placeholder and make a note to consult the domain experts.
| Response |
|---|
| : seq of QuestionResponse? |
| : seq of Rubric |
| gradeAllQuestionResponses() |

| QuestionResponse? |
|---|
| grade(Rubric): score |
Wait a minute…
At this point, sanity reasserts itself
What’s the alternative?
| Response |
|---|
| : seq of QuestionResponse? |
| : seq of Rubric |
| gradeAllQuestionResponses() |

- First thought: the instructor does the grading.
- But not mentioning it at all in the model seems wrong.
Looking for Variant Behavior
One reason that we really want to model the grading process is that we know that we have many different kinds of questions:
- Question
  - True/False Question
  - Single-Choice Question
  - Multiple Choices Question
  - Fill-In-The-Blank Question
  - Essay Question
and we suspect that the grading method varies from one type of question to another.
| Question |
|---|
| grade(Rubric): score |
But I Don’t Like That Either

… and here’s why:

- Essay questions can cover all kinds of things.
- Are these really the same kind of question?
So maybe it’s the rubrics that capture this behavior
Grading - revised
| Response |
|---|
| : seq of QuestionResponse? |
| : seq of Rubric |
| gradeAllQuestionResponses() |

| QuestionResponse? |
|---|
| grade(Rubric): score |

| Question |
|---|
| : Rubric |

| Rubric |
|---|
| grade(QuestionResponse?): score |
I’m much happier with that
Can rubrics be “intelligent” or is this unacceptable anthropomorphism again?
We’re probably going to wind up with a hierarchy of Rubric classes for different variants.
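Such a hierarchy might be sketched as follows (hypothetical Python; the subclass names, attributes, and grading rules are our own assumptions, not part of the problem statement):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class QuestionResponse:
    answer: str

class Rubric:
    """Base class: each subclass encapsulates one grading variant."""
    def grade(self, response: QuestionResponse) -> float:
        raise NotImplementedError

@dataclass
class TrueFalseRubric(Rubric):
    correct_answer: str          # "true" or "false"
    points: float = 1.0

    def grade(self, response: QuestionResponse) -> float:
        # Case-insensitive comparison against the single correct answer.
        return self.points if response.answer.strip().lower() == self.correct_answer else 0.0

@dataclass
class FillInTheBlankRubric(Rubric):
    acceptable: Tuple[str, ...]  # the acceptable answers
    points: float = 1.0

    def grade(self, response: QuestionResponse) -> float:
        return self.points if response.answer.strip() in self.acceptable else 0.0
```

The variant behavior lives entirely in the rubric subclasses, so callers can grade any response without knowing which kind of question it answers.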
The instructor may also elect to provide feedback (written comments), particularly about incorrect responses.
| GradedQuestionResponse? |
|---|
| score |
| feedback |
This suggests that the various grading functions we have written, which have been returning scores, should really be returning a GradedQuestionResponse.
| QuestionResponse? |
|---|
| grade(Rubric): GradedQuestionResponse |

| Question |
|---|
| : Rubric |

| Rubric |
|---|
| grade(QuestionResponse?): GradedQuestionResponse |
A total score for the assessment is computed by the instructor.
| GradedResponse? |
|---|
| overall score |
| : seq of GradedQuestionResponse? |
| computeTotalScore(): score |

| QuestionResponse? |
|---|
| grade(Rubric): GradedQuestionResponse |
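A minimal sketch of the total-score computation (hypothetical Python; we assume here that the overall score is simply the sum of the per-question scores, which the problem statement does not actually specify):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GradedQuestionResponse:
    score: float
    feedback: Optional[str] = None   # instructor comments, if any

@dataclass
class GradedResponse:
    graded_questions: List[GradedQuestionResponse] = field(default_factory=list)

    def compute_total_score(self) -> float:
        # Assumption: total = sum of per-question scores.
        return sum(g.score for g in self.graded_questions)
```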
recording grades
the instructor records the score in his/her grade book.
| GradeBook |
|---|
| record(score, for: Student, on: Assessment) |

| Instructor |
|---|
| : GradeBook |
| administer(: Assessment, to: seq Student) |
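A sketch of the grade book (hypothetical Python; the keying scheme by student name and assessment title is our own simplification, not something the problem statement dictates):

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class GradeBook:
    # Sketch: entries keyed by (student name, assessment title).
    entries: Dict[Tuple[str, str], float] = field(default_factory=dict)

    def record(self, score: float, for_student: str, on_assessment: str) -> None:
        self.entries[(for_student, on_assessment)] = score

@dataclass
class Instructor:
    grade_book: GradeBook = field(default_factory=GradeBook)

    def record_score(self, score: float, student: str, assessment: str) -> None:
        # The instructor owns a grade book and records scores in it.
        self.grade_book.record(score, student, assessment)
```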
returning information
Information is returned to the student about their performance.
It’s a pretty good bet that we don’t want a class with as vague a name as “Information”.
The clue is the description: “At a minimum, the student would learn of their score and any instructor-provided feedback.”
| Student |
|---|
| take(:Assessment): Response |
| receive(: GradedResponse) |
Pulling the model together so far:

| Assessment |
|---|
| : seq of Question |

| GradeBook |
|---|
| record(score, for: Student, on: Assessment) |

| GradedQuestionResponse? |
|---|
| score |
| feedback |

| GradedResponse? |
|---|
| overall score |
| : seq of GradedQuestionResponse? |
| computeTotalScore(): score |

| Instructor |
|---|
| : GradeBook |
| administer(: Assessment, to: seq Student) |

| Question |
|---|
| : Rubric |

| QuestionResponse? |
|---|
| responseTo: Question |
| grade(Rubric): GradedQuestionResponse |

| Response |
|---|
| : seq of QuestionResponse? |
| : seq of Rubric? |
| gradeAllQuestionResponses() |

| Rubric |
|---|
| grade(QuestionResponse?): GradedQuestionResponse |

| Student |
|---|
| take(:Assessment): Response |
| receive(: GradedResponse) |
We might (cautiously) question whether some of the empty class boxes represent classes that we need to retain in the model.
Right now, we have as many questions as answers, but finding useful questions is part of the process.

We can’t go much further without more information, and it’s very dangerous to start making things up based on intuition about how we think the program could work.