I think the difficult question to answer will be what we should be trying to learn and therefore measure. Personally, I believe that with the rise of automation we will need to equip the next generation of students with the ability to construct and use knowledge across domains. Technology is, and will keep, getting better at answering questions within defined verticals. Ultimately I think that means we need to remove the artificial constructs of subjects, e.g., Math, English, History… and start exploring concepts in an integrated fashion.
However, I will leave that discussion to others for the moment, as I think it will be mainly noise for at least a decade. My concern, through my startup adapptED, is instead with how we can help learners better gain and hold onto knowledge and skills.
As for the question of balancing efficacy vs. adoption, I think there are a number of potential ways to approach it. In our context the issue revolves around algorithmic recommendations: what is the teacher's role in this environment? On one end, you could fully automate the process so the algorithm adjusts the student's learning environment with zero teacher involvement. Given a rigorous algorithm this would presumably lead to better learning outcomes, but also to low classroom adoption, as we hear frequently that teachers don't trust the system. On the other end, the system could provide the teacher with the data and keep them in charge. Anecdotes from current ed-tech products suggest teachers like this but generally do little to adjust classroom practice to benefit from the new information. A hybrid version involves giving the teacher the optics of control while the algorithm does its thing in the background; there are obvious ethical issues with this approach.
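To make the three configurations concrete, here is a minimal sketch of the spectrum. All names (TeacherRole, apply_recommendation, and the string outputs) are hypothetical illustrations, not any real product's API; the point is simply where the human decision sits in each mode.

```python
from enum import Enum, auto

class TeacherRole(Enum):
    """Hypothetical levels of teacher involvement in acting on a recommendation."""
    AUTOMATED = auto()   # algorithm applies changes directly; zero teacher involvement
    ADVISORY = auto()    # teacher sees the data and decides; algorithm never acts alone
    HYBRID = auto()      # teacher appears in control, but the algorithm acts anyway

def apply_recommendation(role: TeacherRole, recommendation: str,
                         teacher_approves: bool) -> str:
    """Return what actually happens to the student's learning environment."""
    if role is TeacherRole.AUTOMATED:
        # No human check: efficacy depends entirely on the algorithm.
        return f"applied: {recommendation}"
    if role is TeacherRole.ADVISORY:
        # Teacher gatekeeps: the recommendation only takes effect if approved.
        return (f"applied: {recommendation}" if teacher_approves
                else "no change (teacher declined)")
    # HYBRID: the recommendation is applied regardless of the teacher's
    # stated decision -- which is exactly the ethical problem noted above.
    return f"applied: {recommendation}"
```

In the advisory mode the teacher's decision is the only path to a change, which is where the "teachers do little with the data" adoption problem shows up; in the hybrid mode the `teacher_approves` flag is ignored, making the ethical issue visible in the control flow itself.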
Given the above, my current view is that the question hinges on building teacher trust in these systems. Ed-tech companies haven't done enough to explain how their systems work and why they arrive at the recommendations they do.