
A few weeks ago, I was asked to outline my thoughts on evaluating and measuring the effectiveness of an instructional technology program. Measuring the impact of professional development as a whole is a knotty problem, but with more and more districts investing in dedicated FTE positions for job-embedded, just-in-time coaching, the question is relevant. Sooner or later, especially when budget cuts must be made, the school board will be asking questions like, “Have we met our goals? Is the program still relevant? Is there still an envelope to push?”

Without a plan to measure and report value, the program is at risk.

Because I came from a district that lost its instructional techs, I’m pretty passionate about assessing this role in a way that quantifies impact. I think it can be done, even if it might feel uncomfortable. Our team started doing this really well toward the end of the five years I was there, but by then it was a bit too little, too late.

Metrics

I think there are three primary metrics I would want in place if I were leading a team of edtechs:

  1. How many teachers were reached each week, and what general level of support was given during each interaction? 
  2. How many examples of teacher work were promoted each month? 
  3. How many support resources were generated each month, and what topics did they cover? 

My thoughts behind these metrics are as follows:

  1. This metric (capturing teacher name, a general description of the help, and the level of help) can be used to express impact and growth over time. When I was a full-time edtech, I kept all my teacher connections in my calendar and then entered them into our database (a form set up by one of the team) at the end of every week. Once a quarter, my supervisor would ask us to review our data, and we’d talk about where we were not getting traction and bounce ideas off one another for keeping up momentum. He would report back on how many teachers we had reached by subject. If I had the chance to go back, we would have passed the data on to a progress monitoring coach (our data-focused team), and she could have given us growth statistics based on our interactions with teachers. (A rough sketch of how that kind of interaction log might be summarized follows this list.)
  2. I personally believe that an edtech role isn’t about the edtech specialist. It’s about the teachers they serve. As soon as someone moves into an edtech position, their voice carries less peer-to-peer weight when it comes to changing others’ practice. Because of that, an edtech specialist must promote the work of the teachers who are most interested in their help. In my mind, if I didn’t have something to share from one of the classrooms I served at least every week, I wasn’t doing my job (I was covering a full high school and middle school – there were plenty of stories to tell). If I was at a loss for ideas, I’d let someone guest blog, or I’d tweet out a good resource, but I was of the opinion that I needed to be telling authentic stories every week if I wanted to see change. Examples from that time are here: https://tech4practice.wordpress.com/
  3. As an edtech goes through their week, they will be creating lots of resources. Those resources should live in a common place where teachers can get to things when the edtech isn’t around. In my building, people knew that they could go to my delicious.com bookmarks to see what I was up to, to the blog for stories from the past few weeks, or, in the last two years of the program, to a common website where our tech integration specialist team was posting resources, arranged by tags: http://www.pkwy.k12.mo.us/tis/indexWide.cfm?goToLocation=resources.cfm&content=2
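To make the first metric concrete: the log itself can be as simple as a spreadsheet or CSV with a date, a teacher name, a support level, and a short description, and summarizing it takes only a few lines of code. The Python sketch below is purely illustrative – the column names, support levels, and file name are my assumptions, not what our team actually used – but it shows the kind of weekly “teachers reached” and “level of support” rollups I have in mind.

    # Minimal sketch of summarizing an edtech interaction log.
    # Assumes a CSV with columns: date, teacher, support_level, description
    # (column names and the file name below are illustrative only).
    import csv
    from collections import Counter, defaultdict
    from datetime import date

    def summarize(log_path):
        # ISO week label -> set of distinct teachers reached that week
        teachers_per_week = defaultdict(set)
        # support level -> total number of interactions at that level
        level_counts = Counter()

        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                iso = date.fromisoformat(row["date"]).isocalendar()
                week = "%d-W%02d" % (iso[0], iso[1])
                teachers_per_week[week].add(row["teacher"])
                level_counts[row["support_level"]] += 1

        for week in sorted(teachers_per_week):
            print(week, "-", len(teachers_per_week[week]), "distinct teachers reached")
        for level, count in level_counts.most_common():
            print(level, "-", count, "interactions")

    if __name__ == "__main__":
        summarize("interaction_log.csv")  # hypothetical file name

The point isn’t the code; it’s that once interactions are logged consistently, the weekly and quarterly summaries a supervisor or school board wants fall out almost for free.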

Accountability

I cannot overstate the importance of developing a positive relationship with administrators. I grabbed about 45 minutes of each of my head principals’ time each trimester for a “State of Tech Integration” meeting, where I highlighted projects, reported out how I was using my time, and updated them on district news they might not have heard about. A sample report is available here. When reporting on my work with teachers, I used Understanding by Design terms that we were applying to all curriculum efforts at that time – acquire, make meaning, transfer – and paired them with an edtech framework in favor at that time, the “Technology and Learning Spectrum”.

What did not work: a rubric

We did go through an iteration of evaluating our performance that involved a rubric. I found that this method, while familiar in an education setting, did not push us. It was more of an autopsy that we might or might not get to each year, based on a self-evaluation, a couple of in-person observations from our supervisor, and our performance in team meetings. All of those touch-points made up less than 10% of our work.

The rubric approach is great for a performance assessment within a classroom, but less effective for evaluating service delivery and impact. The metrics listed above are much closer to real time, are visible to a larger community than just the direct supervisor, and are more easily summarized and reported to a school board.

Summary thoughts

To sum up, I think an edtech should …

  •  track daily connections with teachers, and the level of support provided in each
  •  publicly promote the work of teachers every week through visible means (blog, tweet, Instagram, etc.)
  •  archive, index, and report on the resources they create each month
  •  regularly report to administration on their use of time and their impact on the community

When a program begins with these metrics, regularly evaluates performance against them, and collects testimony from teachers impacted, the annual program evaluation will be something both the edtech coach and her school can celebrate.
