Teacher Value-Added Summary
Technical Details
The Teacher Value-Added Summary is simply a summary of the teacher's value-added reports for individual grades, subjects, and courses. As a result, there are no additional calculations performed to generate the data in the summary. To understand how the values in the summary are generated, it's only necessary to understand how the values are generated for the Teacher Value-Added reports.
How PVAAS Measures Growth
Each year, the academic performance of students is evaluated using a variety of assessments. LEAs/Districts, schools, and teachers receive results from these assessments, which provide important information about the achievement level of their students in tested grades and subjects or Keystone content areas. This information includes the number and percentage of students who performed in each of the state's academic performance ranges—Advanced, Proficient, Basic, and Below Basic. Achievement data from previous years is also included for comparison.
But because the achievement data is based on different groups of students each year, direct comparisons of data across years are often not meaningful or useful. For example, comparing the performance of last year's fifth graders to the performance of this year's fifth graders does not tell us how much academic growth either group of fifth graders made.
We offer a different set of measures. The growth of each group of students is measured as they move from one grade to the next or enter and complete a Keystone course. This approach yields growth measures that are fair, reliable, and useful to educators.
The process begins by generating measures of the average entering achievement level of the group of students served by each teacher, school, and LEA/district. Then a similar measure is generated for the group's average achievement level at the end of the grade and subject or course. To ensure that the measures are precise and reliable, PVAAS incorporates state assessment data across years, grades, and subjects for each student.
The difference between these two achievement measures is calculated and then compared to a standard expectation of growth called the growth standard. Growth color indicators are then assigned to indicate how strong the evidence is that the group of students exceeded, met, or fell short of the growth standard.
Simply put, the expectation is that regardless of their entering achievement levels, students should not lose ground academically, relative to their peers in the same grade and subject or course in the reference group. This standard is reasonable and attainable regardless of the entering achievement of the students served.
With this approach, it's possible for a group of students to demonstrate high growth, even if all of them remain in the same state performance level from one year to the next. Each performance level includes a range of scores, so it's possible for a group's average achievement to rise or fall within a single state academic performance level.
To calculate the growth measures, PVAAS uses two different analytic models, depending on the assessments administered. The Growth Standard Methodology is used when students are tested with the same assessment in the same subject in consecutive grades. The Predictive Methodology is used for subjects in which students are not tested in consecutive grades and for tests, such as end-of-course assessments, that students might take in different grades.
Predictive Methodology
The Predictive Methodology can be used with any assessment that has sufficient prior testing data. These prior assessments, or predictors, can include assessments in the same or different subjects.
This model generates a predicted score for each student. Here, entering achievement reflects students' achievement before the current school year, or before they entered the grade and subject or Keystone content area.
A predicted score is the score the student would be expected to earn on the selected assessment if they made average, or typical, growth. To generate each student's predicted score, we build a robust statistical model of all students who took the selected assessment in the most recent year. The model includes the scores of all students in the reference group, along with their testing histories across years, grades, and subjects.
By considering how all other students performed on the assessment in relation to their testing histories, the model calculates a predicted score for each student based on their individual testing history.
To ensure precision in the predicted scores, for most subjects, a student must have at least three prior assessment scores. This does not mean three years of scores or three scores in the same subject, but simply three prior scores on state assessments across grades and subjects. There is one exception. To generate a predicted score for fourth-grade science, only two prior scores are required: third-grade math and ELA.
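To make the shape of this approach concrete, here is a minimal sketch in Python. It is an illustration only, not the PVAAS model: PVAAS uses a far more robust statistical methodology, and the data layout, column names, and simple linear model here are all assumptions for the example.

```python
# Illustrative sketch only -- NOT the actual PVAAS model, which is
# far more robust. Data layout and column names are assumptions.
import pandas as pd
from sklearn.linear_model import LinearRegression

# One row per student: prior state assessment scores (across grades
# and subjects) plus the score on the selected assessment.
PRIOR_COLS = ["prior_score_1", "prior_score_2", "prior_score_3"]

def predicted_scores(students: pd.DataFrame) -> pd.Series:
    # The general PVAAS rule: a student needs at least three prior
    # scores (in any grades and subjects) to get a predicted score.
    eligible = students.dropna(subset=PRIOR_COLS)

    # Fit the model on the whole reference group: how do testing
    # histories relate to scores on the selected assessment?
    model = LinearRegression()
    model.fit(eligible[PRIOR_COLS], eligible["actual_score"])

    # Apply those relationships to each student's own history.
    return pd.Series(model.predict(eligible[PRIOR_COLS]),
                     index=eligible.index, name="predicted_score")
```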
Let's consider an example. Zachary is a high-achieving student who has scored well on state assessments for the past few years, especially in math. To predict Zachary's score on the Keystone assessment, we:
- Determine the relationships between the testing histories of all students and their exiting achievement on this assessment in the same year.
- Use these relationships to determine what the expected score would be for Zachary, given his own personal testing history.
Based on Zachary's testing history, a score at the 83rd percentile would be a reasonable expectation for him.
In contrast, Adam is a low-achieving student who has struggled in math. His prior scores on state assessments are low. Just as with Zachary, we use the relationships between the testing histories of all students and their exiting achievement on the assessment statewide to determine a predicted score for Adam. Based on Adam's own personal testing history, a score at the 26th percentile would be a reasonable expectation for him.
Once a predicted score has been generated for each student in the group, the predicted scores are averaged. Because this average predicted score is based on the students' prior test scores, it represents the entering achievement in this subject for the group of students.
Next, we compare the students' exiting achievement on the assessment to their entering achievement. If a group of students scores what they were predicted to score, on average, we can say that the group made average, or typical, growth. In other words, their growth was similar to the growth of students at the same achievement level across the reference group. This is the definition of meeting the growth standard in the predictive methodology.
If a group of students scores significantly higher than predicted, we can conclude that the group made more growth than their peers across the reference group. If a group scores significantly lower than predicted, the group did not grow as much as their peers.
The growth measure is a function of the difference between the students' entering achievement and their exiting achievement. This value is expressed in scale score points and indicates how much higher or lower the group scored, on average, compared to what they were expected to score given their individual testing histories. For example, a growth measure of 9.3 indicates that, on average, this group of students scored 9.3 scale score points higher than expected. When generating growth measures for Teacher Value-Added reports, students are weighted for each teacher based on the proportion of instructional responsibility claimed during roster verification.
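Continuing the illustrative sketch (again, an approximation of the idea rather than the actual PVAAS computation), the growth measure can be pictured as a weighted average of the per-student differences between exiting and entering achievement, with hypothetical weights standing in for the instructional responsibility claimed during roster verification:

```python
import numpy as np

def growth_measure(actual, predicted, weight):
    """Weighted average of (actual - predicted), in scale score
    points. `weight` is assumed to hold each student's proportion
    of instructional responsibility claimed for this teacher."""
    return float(np.average(actual - predicted, weights=weight))

# Illustrative numbers only. A positive result means the group
# scored higher, on average, than their predicted scores.
actual = np.array([1520.0, 1480.0, 1510.0])
predicted = np.array([1505.0, 1490.0, 1495.0])
weight = np.array([1.0, 0.5, 1.0])  # fractional responsibility
print(growth_measure(actual, predicted, weight))  # 10.0
```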
Calculating the Growth Index
Each growth measure has an associated standard error, which quantifies the uncertainty around that estimate. The standard error is used in conjunction with the growth measure to calculate the growth index. Specifically, the growth index is the growth measure divided by its standard error. This calculation yields a robust measure of growth for the group of students that reflects both the growth and the amount of evidence. All index values are on the same scale and can be compared fairly across years, grades, and subjects throughout the reference group.
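For example, using illustrative numbers: a growth measure of 9.3 scale score points with a standard error of 3.1 yields a growth index of 9.3 / 3.1 = 3.0, placing the group 3 standard errors above the growth standard.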
Each growth index is color-coded to indicate how strong the evidence is that students exceeded, met, or fell short of the growth standard. The colors should be interpreted as follows:
Growth Color Indicator | Growth Index Compared to the Growth Standard | Interpretation |
---|---|---|
Well Above | At least 2 standard errors above | Significant evidence that the teacher's group of students exceeded the growth standard |
Above | Between 1 and 2 standard errors above | Moderate evidence that the teacher's group of students exceeded the growth standard |
Meets | Between 1 standard error above and 1 standard error below | Evidence that the teacher's group of students met the growth standard |
Below | Between 1 and 2 standard errors below | Moderate evidence that the teacher's group of students did not meet the growth standard |
Well Below | More than 2 standard errors below | Significant evidence that the teacher's group of students did not meet the growth standard |
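The table's thresholds translate directly into a simple classification. The sketch below is only an illustration of that mapping; the function name is invented, and the handling of exact boundary values follows one reasonable reading of the table.

```python
def growth_color(index: float) -> str:
    """Map a growth index (already in standard-error units) to its
    color indicator, per the thresholds in the table above."""
    if index >= 2:
        return "Well Above"   # significant evidence of exceeding
    if index > 1:
        return "Above"        # moderate evidence of exceeding
    if index >= -1:
        return "Meets"        # evidence the standard was met
    if index >= -2:
        return "Below"        # moderate evidence of falling short
    return "Well Below"       # significant evidence of falling short

print(growth_color(9.3 / 3.1))  # "Well Above"
```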