Thursday, November 12, 2009

Assignment 5

The survey Teacher reading strategy survey 1 was designed to gauge teachers’ attitudes about what they feel is important to teach students so that they can comprehend what they read. It also assists in determining teachers’ levels of knowledge regarding the explicit teaching of reading comprehension strategies. The purpose of this survey is to measure changes in teacher attitudes toward, and knowledge of, the importance of explicitly teaching reading comprehension strategies. Teachers would be required to complete the survey prior to the implementation of the Literacy in Action program and again once implementation comes to a close. The survey questions were designed to help measure outputs from the evaluation’s logic model, which included: increased student and teacher knowledge and utilization of reading strategies, increased numbers of students able to comprehend grade-level material, increased comprehension scores as measured on a common assessment tool, and proposals for how to provide support to teachers during implementation.

The initial survey was piloted with five teachers who had previously used the Literacy in Action resource or who had attended various workshops on the explicit teaching of reading comprehension strategies. The piloting process illuminated a few areas of the survey that could be clarified to ease completion and to increase the precision of the measurement instrument.

Some confusion arose as survey participants attempted to answer questions 13-15. These three questions were worded identically except for the words “before”, “during”, and “after”, which were typed in capital letters to draw attention to the change. Despite this textual emphasis, teachers did not immediately notice the distinction and answered the “before” question with answers about general reading strategies. Only upon attempting to answer the next question did they realize their mistake; therefore, the survey was revised to include a statement of explanation warning teachers of the requirements for these three questions before they begin answering them.
Another area that posed confusion for some participants was question 21, where teachers were asked to describe the amount of time they spend teaching English Language Arts. The multiple-choice options were clarified by adding descriptions of possible teaching loads alongside the answers.
When answering question 6, all of the participants agreed that low comprehension was a concern for their students, yet four out of five stated that their students were reading at or above grade level. This discrepancy raises the question of how teachers define success in reading. Another open-ended question, “What is the ultimate goal of effective reading instruction?”, was added to the survey to determine whether teachers view comprehension as the crucial outcome of reading instruction or whether they judge their students’ reading proficiency on another aspect of reading, such as accuracy or fluency.
Further revision was required for question 18, regarding supports that would assist teachers in increasing their students’ level of reading comprehension. Initially, the question was open-ended, but providing teachers with a list of possible answers to choose from may be better, so that they do not inadvertently omit a support they would find beneficial. Participants would still have the opportunity to suggest their own ideas for supports through the provision of an “other” option. The University of Saskatchewan survey tool does not allow for multiple answers or an “other” option, so these revisions will be made superficially on the revised tool. For this question to become functional in the future, another survey tool would have to be used.
Question 22 was also affected by the limitations of the survey tool, which did not allow for multiple answers to be selected. Multiple answers were nevertheless encouraged in the revised version. The initial survey question forced participants to choose the highest numeral applicable, thereby imposing a hierarchy on their answers. It is preferable to learn about participants’ academic backgrounds without valuing some experiences over others.
Unfortunately, one teacher experienced a technical difficulty while answering the survey and submitted her completed survey twice, which skewed the percentages for the Likert-scale items. Instructions included in the introductory letter need to urge participants to contact the program evaluator if they have difficulty submitting the survey.
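To illustrate how a single duplicate submission skews the reported percentages, here is a minimal Python sketch; the respondent IDs, responses, and data format are hypothetical and do not come from the actual survey tool.

```python
from collections import Counter

# Hypothetical export of responses to one Likert-scale item.
# Each tuple is (respondent_id, response); teacher "T3" submitted twice.
submissions = [
    ("T1", "Agree"),
    ("T2", "Strongly agree"),
    ("T3", "Disagree"),
    ("T3", "Disagree"),  # duplicate submission from the same teacher
    ("T4", "Agree"),
    ("T5", "Agree"),
]

def likert_percentages(rows):
    """Percentage of responses selecting each Likert option."""
    counts = Counter(response for _, response in rows)
    total = sum(counts.values())
    return {option: round(100 * n / total, 1) for option, n in counts.items()}

# Raw percentages are skewed by the duplicate: "Disagree" shows as 33.3%
# instead of the intended 20%.
print("Raw:", likert_percentages(submissions))

# Keeping only the first submission per respondent restores the intended weights.
seen, deduplicated = set(), []
for respondent, response in submissions:
    if respondent not in seen:
        seen.add(respondent)
        deduplicated.append((respondent, response))
print("Deduplicated:", likert_percentages(deduplicated))
```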
Overall, the survey was completed with ease. The average participant took approximately 15 minutes to complete the survey, which all considered an appropriate amount of time. The ease of completion may be partly due to the pilot participants’ familiarity with various reading strategies and with the importance of explicitly teaching them. A less knowledgeable pilot group may have had more difficulty answering the open-ended questions and may have needed more multiple-choice questions to support them in formulating their answers.
Although the University of Saskatchewan survey tool has an option to send a thank-you email, as a final touch the participants should be sent a thank-you letter expressing appreciation for their time. This letter could include a link to the survey results (with the clients’ permission). It is important to let the participants know that their input is valuable and necessary to the success of the evaluation.

Teacher reading strategy survey 1 https://survey.usask.ca/survey.php?sid=17823

Teacher reading strategy survey 2 https://survey.usask.ca/survey.php?sid=17945


Hello,

Thank you for agreeing to participate in this survey regarding reading comprehension strategies. Many strategies are referred to as reading strategies; this survey is designed to measure teacher knowledge and utilization of reading comprehension strategies in particular, that is, strategies employed to help students understand what they read. Explicitly teaching reading strategies involves the teacher modeling and describing the thinking that occurs as a reader engages in the strategy.

Please click on the link below to begin the survey. I would appreciate it if you could complete it by Friday, November 6, 2009. If you experience any difficulty accessing the survey or submitting your responses, please do not hesitate to contact me for assistance at (my email address). I appreciate your assistance in this matter as I know how busy you are. Thank you for your time and insight.

Survey link https://survey.usask.ca/survey.php?sid=17823

Sincerely,

Carrie S.



Saturday, September 19, 2009

Assignment 2

Provus’ Discrepancy model can be used to evaluate educational programs and would be an appropriate model for the evaluation of ECS Programming for Children with Severe Disabilities. Evaluators employing this model gather evidence of compliance with established standards, identify any discrepancies between the standards and actual performance, and recommend corrective actions if necessary (Regan, Triggs, Mitsopoulos, Duncan, Godley, & Wallace, 2000).

In order to begin an evaluation using the Discrepancy model, clear and specific program goals must be established. The school board in this case has so far failed to do so, mentioning only that a “child’s program must meet the child’s needs” and including an inspirational quote describing how “teachers should lead children to mystery” and help them “unlock the beauty” in what they see (Medicine Hat Catholic Separate Regional School Division No. 20, 2009). These hints at the mission of the program are much too vague to be measured or to help define the success and direction of the program. Participating in a Discrepancy evaluation will force the school division to establish explicit goals for this program.

In addition to determining clear objectives for the program, the first stage of the Discrepancy model is concerned with program design (Rose & Nyre, 1977). The school board has clearly delineated the students who are eligible for the program, the staff who need to be involved, the required time allotment, and the necessary documentation. The program’s regulations are clearly outlined, but the design neglects to include information on possible activities for meeting the program goals; therefore, in addition to program goals, program developers need to ascertain suitable learning activities to include in this program.

The second stage of the Discrepancy model involves ensuring that the implemented program is congruent with the school board’s plan (Rose & Nyre, 1977). The evaluators need to ensure that the program’s regulations are indeed being followed. They should determine whether all children in the division with severe disabilities are able to access the program, whether all children participating in the program meet the eligibility standards, whether staff are properly qualified, and whether required time allotments are being met and documentation completed.

The third stage is a formative evaluation, as it focuses on the program’s process. At this point, the evaluator begins to examine the alignment between the program’s standards and its performance (Rose & Nyre, 1977) in order to refine the program and maximize its instructional effectiveness during development (Regan et al., 2000).

It is during the fourth stage that the product of the program is assessed by comparing student attainment with the program’s standards and objectives (Rose & Nyre, 1977). Any discrepancies are identified at this time, the reasons for them are investigated, and steps are then taken to eliminate them (Regan et al., 2000). If students’ academic and behavioural achievement has not increased according to the program’s standards, corrective actions can be taken. The Discrepancy model lends itself equally well to quantitative and qualitative methods (Regan et al., 2000). In determining whether the majority of children have benefitted from their participation, not only should academic and behavioural assessment scores be considered, but the children, parents, teachers, and other professional staff should be interviewed to determine their satisfaction with the program.
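To make the stage-four discrepancy determination concrete, the following Python sketch compares measured performance against program standards; all of the criteria and numbers are hypothetical and are not drawn from the actual ECS program.

```python
# Hypothetical program standards and measured performance for stage four.
standards = {
    "percent of eligible children enrolled": 100.0,
    "percent of staff with required qualifications": 100.0,
    "average behavioural assessment score": 75.0,
    "average academic assessment score": 70.0,
}

performance = {
    "percent of eligible children enrolled": 92.0,
    "percent of staff with required qualifications": 100.0,
    "average behavioural assessment score": 68.5,
    "average academic assessment score": 73.0,
}

# A discrepancy exists wherever measured performance falls short of the
# standard; each one is flagged so its causes can be investigated and corrected.
discrepancies = {
    criterion: standards[criterion] - performance[criterion]
    for criterion in standards
    if performance[criterion] < standards[criterion]
}

for criterion, gap in discrepancies.items():
    print(f"Discrepancy on '{criterion}': {gap} below the standard")
```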

Finally, stage five involves a cost-benefit analysis (Rose & Nyre, 1977). The program can be compared to other similar programs in order to determine whether it is cost-effective and whether it needs to be modified to improve outcomes for students in the future.



References
Medicine Hat Catholic Separate Regional School Division No. 20. (2009). ECS programming for children with severe disabilities. Retrieved September 12, 2009, from http://www.mhcbe.ab.ca/

Regan, M. A., Triggs, T. J., Mitsopoulos, E., Duncan, C. C., Godley, S. T., & Wallace, P. (2000). Provus’ discrepancy evaluation of the Drivesmart novice driver CD-Rom training product. Retrieved September 12, 2009, from http://www.rsconference.com/pdf/RS000047.pdf?check=1

Rose, C., & Nyre, G. (1977). The practice of evaluation. Retrieved September 6, 2009, from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/35/ad/cc.pdf

Saturday, September 12, 2009

Assignment 1 revised with citations

Outcomes Linked to High-Quality Afterschool Programs: Longitudinal Findings from the Study of Promising Afterschool Programs is an outcome-based evaluation that measured the effects of afterschool programs on elementary and middle-school participants. The purpose of this summative evaluation was to determine the effectiveness of high-quality afterschool programs in improving the outcomes of the students who attended them.

A quasi-experimental design was utilized, as both participant and comparison groups were identified. The design was not truly experimental because the afterschool programs were not randomly chosen to take part in the study; rather, they were selected by the evaluators based on the programs’ prior evaluations, recommendations, and whether they met the evaluators’ criteria for high-quality afterschool programming.

The program evaluators adopted a model similar to Stake’s Countenance model, as they sought to judge the merit of high-quality afterschool programs. Countenance evaluators identify contingencies among antecedent conditions, instructional transactions, and outcomes (Rose & Nyre, 1977). The evaluators collected data to establish a relationship between the participation of students from low socio-economic backgrounds in afterschool programming and the outcomes of improved academic and behavioural success. They found congruencies between student participation in high-quality afterschool programming and achievement on standardized math tests, reports of improved work habits and task persistence, and reductions in misconduct and drug and alcohol use amongst middle-school students. The evaluators obtained standardized scores and participant observations to make a judgment upon which to recommend the continuation and expansion of high-quality afterschool programming.

The Countenance model allows for an experimental design to be adopted. An experimental design protects against threats to internal validity, thus strengthening the findings of the evaluation. Another advantage of the Countenance model is that it considers immediate, long-term, personal, cognitive, affective, and societal outcomes (Rose & Nyre, 1977). In this evaluation, behavioural outcomes were considered as well as academic ones. In accordance with Stake’s recommendations, the evaluators considered the judgments of those participating in the programs, including teachers and students (Rose & Nyre, 1977).

One of the main weaknesses of the Countenance model is also present in this example: once a judgment had been made regarding the merit of the programs, the evaluation was finished. In this evaluation there is no mention of how afterschool programming could be improved in the future, except to have it continued and expanded to more communities. If continuing feedback were part of this model, the evaluators could begin looking at why some students chose not to attend, how low attendees could be encouraged to participate, what characteristics distinguish high from low attendees, and how programs could be improved to further increase positive academic and behavioural outcomes. Participant outcomes were not categorized, so the impact on various groups could not be compared and improvements could not be targeted to diverse needs in the future. The longitudinal impact is also not considered within the Countenance model; there is no mention of expanding the study to determine the role afterschool programming may play in students’ lives once they stop attending and into adulthood.

Reference

Rose, C., & Nyre, G. (1977). The practice of evaluation. Retrieved September 6, 2009, from http://www.eric.ed.gov/ERICDocs/data/ericdocs2sql/content_storage_01/0000019b/80/35/ad/cc.pdf
