The workshop is over, and many staff naturally want to know: “What is going to happen next? How will we follow up on our good conversations?” This is of course a key question! So, how will you build upon the day? How will you avoid cynical comments about “This, too, shall pass”? How can you keep the momentum going? How can you show your colleagues that you are serious about long-term, focused, and well-supported renewal?

So I guess there's not going to be a lot of time left over after teachers get done with all this to administer a spelling test or two.
On the next pages we offer an array of possible actions and approaches for follow-up.
Examples of 10 possible ‘next step’ actions: Design/Analyze/Research
1. Design a model unit in teams. Ask staff to commit to a timeline for designing a unit or unit elements (e.g., try using essential questions next week; have a complete unit, designed and piloted, by semester’s end).
2. Design a model scoring rubric (supported by work samples) that makes “understanding” a clear, prominent, and explicit aim. Have staff use the rubric and work-sample models (‘anchors’) to clarify for students that the aim is understanding, not recall.
3. Design a transfer task for a key Standard (e.g., a complex, novel-looking math problem; a document-based question in history; a test of reading strategy and skill on a new piece of non-fiction). Analyze the Standard carefully: if that’s the Standard, what would count as performance evidence of meeting it? Then, sketch out a task. Design a protocol for administering the task in which students initially receive no hints or scaffolding but can receive hints if they truly need them. Use a graduated-prompting rubric to score the results (4 = needed no hints; 3 = needed 1-2 minor hints or reminders; 2 = needed reminders and hints all along the way; 1 = even with much scaffolding, could not produce an adequate response).
4. Design a Gradual-Release-of-Responsibility unit. Use the 4-step process (I do, you watch; I do, you help; You do, I help; You do, I watch) to design a unit or set of units deliberately aiming at autonomous transfer of student learning. Use a graduated-prompting rubric to score student performance.
5. Analyze model and typical units against UbD design standards. Have teams/departments assess units or lessons using our peer review process against the criteria for “good design” (see the UbD design standards, or use your own). Have teams report out what they learned from studying and self-assessing units against standards.
6. Analyze local assessments. For a targeted time frame (e.g., the month of November or the next marking period), collect all the assessments given in a building. Then, taking a sample of the assessments (e.g., every 4th assessment item), analyze the type and validity of the assessments. Rate the assessments against credible criteria (e.g., Bloom’s Taxonomy, Webb’s Depth of Knowledge, the six facets of understanding, state standards). Based on the findings, make 1-2 specific recommendations for improving the quality of local assessment.
7. Analyze local grading. Examine trends in local teacher grading to gauge the validity of grades (given Mission, state standards, critical thinking, understanding, etc.). Compare local grading standards to state performance standards (e.g., state-wide writing, college freshman exam scoring): how predictive are local grades of later important performance? Also, look at cross-teacher consistency by having teachers grade the same student work on their own, then discuss their grading in groups.
8. Analyze results on a common assessment that you design, making sure that the assessment includes higher-order as well as lower-order questions.
9. Research test results: go to your local site, or to the Florida FCAT, Massachusetts MCAS, or Pennsylvania PSSA websites, to download their released test items with analysis. Study the results, especially those in Reading and Mathematics. Note the hardest questions and the most common wrong answers. (All test reports in these states show the correct answer and what % of students picked which answer; they also code the purpose of each test question against the key state standard it is assessing.)
10. Research motivation in students. In teams, study a small group of ‘typical’ kids over the course of a day; you might each shadow one student through their day or half-day. In what work are they most motivated and engaged, in class and out of class (sports, computer games, the arts)? When are they least motivated? When do students persist with a challenge, and when do they quit? What general conclusions about motivation and engagement can you draw about how to make schoolwork less boring? (We have online student surveys you can use, too.)
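For teams that record their assessment inventory electronically, the systematic sampling and rating suggested in item 6 can be sketched in a few lines. This is a hypothetical illustration only: the list of Depth-of-Knowledge ratings is stand-in data, and the every-4th sampling interval is simply the example given above; substitute your own collected assessments and chosen criteria.

```python
from collections import Counter

# Hypothetical data: one Webb Depth-of-Knowledge rating (1-4) per
# collected assessment item, assigned by a reviewer. Replace with
# your building's actual collected assessments.
dok_ratings = [1, 2, 1, 3, 2, 1, 4, 2, 1, 3, 2, 1]

# Systematic sample: every 4th item, as suggested in item 6.
sample = dok_ratings[::4]

# Tally how the sampled items distribute across DoK levels.
dok_counts = Counter(sample)
total = len(sample)
for level in sorted(dok_counts):
    pct = 100 * dok_counts[level] / total
    print(f"DoK {level}: {dok_counts[level]} of {total} sampled items ({pct:.0f}%)")
```

A tally like this (or its spreadsheet equivalent) gives a quick picture of whether local assessments skew toward lower-order recall, which is the finding the 1-2 recommendations in item 6 would respond to.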
Also, if there are teams of grownups at school wanting to 'shadow' my 'typical kid' over the course of a day, I would like to be asked whether that is OK with me.
Then, when I inform the team that shadowing my typical kid over the course of a day is not OK with me, I would like the grownups to return to the classroom and teach.