When we think about engineering and computing education, we immediately focus on what students know. But do we really know what our students know? They express that knowing on our exams, in our assignments, in the programs they write and the labs they complete. They reflect what they know in what they do on our projects. Or at least we think they do.
Take an example: most experienced instructors have seen a student take some XYZ engineering course, say networking, programming, or software engineering, and earn a good grade. In our class, they designed a robust, properly segmented network; they wrote clean, well-structured code; they did their reviews and wrote out their designs and test cases. We know they know it. They showed us, their peers, and themselves that they know it.
Or did they? Sometime later, on a capstone or other project, we find our hopes for quality and integration of knowledge shattered by what they didn't do. Some, if not all, of our expectations for quality work went out the window. We look under the covers: their circuits are sloppy, the code is a wreck, the planned design is unrecognizable in the implementation, there is no test plan, no tests, and miserable documentation, if there is any at all. Whatever they were supposed to learn in XYZ just did not happen.
And as an instructor, you feel frustrated, occasionally even embarrassed. Before reality sets in, I've sometimes wished I could retroactively fail them for a course if they couldn't be bothered to apply what they learned in their follow-on courses. So did they know it? Did they fake it? Learn it and forget it? The education literature calls this a 'transfer' problem, where a student fails to transfer what they learned in one course or domain to another. But what is really going on? If they knew XYZ, why didn't they do or apply XYZ when no one was looking (or sometimes even when someone was)?
We work hard to identify the knowledge, techniques, skills, or methods that we think matter to our students, or should (what Billy Koen might call the State-of-the-Art, or SoTA). We put that important knowledge, the SoTA, into course XYZ because we think students need it, or will need it. And they usually learn it, or we fail them in the course. But then they don't do it, even when it might have really helped them. Why?
Personally, I think this has very little to do with what students know or don't know. I think it is fundamentally a failure to value something. We spend so much time making students learn the What and the How of the SoTA that we don't necessarily get into the Why of it. And even when we do (and many instructors I've met over the years do an awesome job with the Why), what is it that we assess? We assess what they know. We assess the cognitive. We don't assess whether students' values, particularly their values around the SoTA we spend so much time on, actually connect with them. Of course, there are those who say we can't measure values. We can't judge values.
So if they don't necessarily value what we teach, why bother? What are we trying to do with education anyway if we don't care what students value? Studies have shown for years (centuries, perhaps?) that what students value is what they retain. What they value affects what they will do, especially when no one's looking.