ISTE Standard 2.7: "Educators understand and use data to drive their instruction and support students in achieving their learning goals."
2.7.a Educators provide alternative ways for students to demonstrate competency and reflect on their learning using technology.
2.7.b Educators use technology to design and implement a variety of formative and summative assessments that accommodate learner needs, provide timely feedback to students and inform instruction.
2.7.c Educators use assessment data to guide progress, personalize learning, and communicate feedback to education stakeholders in support of students reaching their learning goals.
I constantly offer alternative assessments in my classroom, along with choice within those assessment activities, so that students have opportunities to show what they have learned in ways that best fit their learning styles and abilities. But to provide those opportunities, I need data to see what each student needs.
This artifact highlights the importance of using data to drive instruction and assessment design. In teaching Florida's middle school Civics, the challenge lies in preparing students for the broad, high-stakes end-of-course exam (EOC), where teachers have limited access to question types and must rely on benchmarks. To meet this challenge, we use School City, a platform that provides detailed data points such as teacher, school, and district averages, as well as student-level benchmark performance. These insights allow me to see both classwide trends and individual student needs, making it possible to create targeted, personalized activities. For example, I developed a series of Kahoot reviews tailored to the specific benchmarks students had not yet mastered, alongside whole-class team games to reinforce skills. This approach aligns with ISTE Standard 2.7 by using data and digital tools to design meaningful assessments. Ultimately, leveraging data empowers me to provide equitable support and better prepare students for success on the EOC.
In creating this artifact, I explored and compared four key methods of data collection in education: surveys, observations, program analytics, and test scores. Each method offers unique strengths and limitations, and together they provide a more comprehensive picture of program effectiveness and student learning. Surveys are valuable for gathering direct feedback and gauging stakeholder perceptions, though they can be limited by bias or low response rates. Observations allow educators to see implementation in action, capturing successes and struggles in real time, but they can be time-consuming and influenced by the presence of the observer. Program analytics provide objective, real-time data and track usage trends, though they may not explain the "why" behind user behavior. Finally, test scores remain a key indicator of academic progress, revealing trends and learning gaps, but they do not measure all students fairly. Reflecting on these methods reminded me of the importance of drawing on multiple data sources for a balanced evaluation.
This page actually gets a bonus artifact. In this video, I walk through how digital tools and programs can help educators create powerful, meaningful assessments that align with their educational goals.