The Value of the Human in an AI World
In many futuristic imaginings from movie producers and sci-fi writers, the future includes Artificial Intelligence, or AI. Most of us have had a conversation or two with Siri or Alexa. Companies like Netflix and Amazon use predictive technology to make very accurate guesses about what we’d like to buy or binge-watch. AI is not 30 to 50 years down the road; it’s here, impacting our daily lives in very meaningful ways. By the time our students leave our schools for the last time, many of the jobs known to us today will be done by a machine or augmented by one, machines that can learn and predict better than we can. Technology is spreading through every sector of the economy so rapidly, and the career landscape is changing so dramatically, that predicting the future has become a guessing game. This is a big challenge for educators. We exist to help prepare the next generation to contribute to society and to keep tackling the problems of our world, just like every generation before them. But how do we prepare them to contribute to a society full of jobs that haven’t even been created yet? How do we structure our classrooms, our curriculum, and our policies to prepare students for a job market shared with Artificial Intelligence?
Let’s start with that first question about preparing our students to contribute. Let me ask you this: what value does a human add to an automated world? The key thing to remember is that AI exists because we birthed it and fed it information, or data, that now allows it to run its algorithms and spit out solutions. We provide data to AI all the time via our Google searches or our song choices on Pandora. Our choices, decisions, and mindsets become the data. If we want our machines to help us live in a better world, we have to be careful about the caliber of data we provide. Therefore, the people who will continue to thrive as more jobs are turned over to AI are the ones who can work with AI by making those crucial decisions about what kind of data it gets. That job requires high-level thinking skills like analysis, ethical reasoning, evaluation, and synthesis, among others.
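To see what “our choices become the data” can look like in practice, here is a minimal sketch (toy data and a made-up recommend function, not any real product’s algorithm): a tiny “recommender” that can only ever suggest what the logged choices have taught it.

```python
# A toy "recommender" built from nothing but logged choices (hypothetical data).
from collections import Counter

# Every search, click, and song choice becomes a data point the system learns from.
logged_choices = [
    "true-crime podcast", "true-crime podcast", "jazz playlist",
    "true-crime podcast", "cooking video",
]

def recommend(history):
    """Suggest whatever the history makes most frequent; the model knows nothing else."""
    return Counter(history).most_common(1)[0][0]

print(recommend(logged_choices))                      # -> true-crime podcast
print(recommend(["poetry reading", "poetry reading",  # different data,
                 "jazz playlist"]))                   # -> a different "decision"
```

Real systems are vastly more sophisticated, but the principle is the same: the caliber of the data is the caliber of the output, which is exactly where human judgment comes in.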
Now we can start to answer that other question about how we structure our educational surroundings. We have to make sure that every time we ask anything of our students, we are asking them to reach those higher levels of thinking. This applies to all students at every grade level. Can a kindergartner learn to analyze? I argue that they can. Can a third grader learn to make well-developed evaluations? Certainly. The key is that we have to expect them to get to these higher levels of thinking.
When I was in third grade, I was asked to re-create a traditional Native American dwelling, the teepee. I looked up what teepees look like, got some simple supplies, and then built a teepee to look exactly like the one I found in my research. I got my A. But what kind of thinking had I really demonstrated? According to Bloom’s Taxonomy, I probably only reached the Understanding level. I was never asked to think more deeply about the subject. Therein lies the problem. In order to stay relevant in a world where computers are better and faster than we are, our value comes from our ability to think deeply about complex issues and about the data we feed to these computers. The forms of AI we’re seeing are trying to become more and more like us; they are programmed to learn and think the way we do. These machines have to start at low-level thinking and work their way up, just like we do when we learn. We can’t stay ahead and retain our value in the economy if we never move past the bottom of Bloom’s. This is our challenge as teachers, and it’s a challenge we have to pass on to our students. Let’s think higher.
One way a teacher, a school, or even an entire district can make sure their students are prepared to share a future with AI is to redesign their rubrics. Rubrics tend to guide the way a project is designed and ultimately assessed. My school recently took on this work and redesigned a few of our school-wide rubrics. The intent was to focus on a progression from low-level thinking skills toward high-level thinking skills, the skills that will always give us an edge over AI.
Old Rubric
| Emerging | Developing | Proficient | Advanced |
| --- | --- | --- | --- |
| Student knows little to none of the things we want. | Student knows some of the things we want. | Student knows most of the things we want. | Student knows all of the things we want. |
The issue with the old rubric above is that if the standard (the thing that we want) says something like “Students will identify causes of the Civil War,” then at the advanced end, all the student has proved is that they can identify ALL of the causes of the Civil War. Identify is a lower-level thinking skill, one that machines can already do better than we can. Granted, identifying is a crucial starting point in the learning process, but we have to get students to go past it.
New Rubric
| Emerging | Developing | Proficient | Advanced |
| --- | --- | --- | --- |
| Student can recall or identify content. | Student can explain or compare and contrast content. | Student can apply content to new situations. | Student can analyze, evaluate, connect, argue, create, etc. |
Using the same standard about identifying the causes of the Civil War, the new rubric blows right past it. Ideally, by the end of the unit or project, the student would progress from emerging (simply recalling causes of the war) to advanced (analyzing the causes, evaluating the effects, and arguing against opposing views). If a kindergartner studying fairy tales can recall the events of The Three Little Pigs, they are showing us those foundational skills of remembering. Given time, that same kindergartner might be able to evaluate the moral of the story. They might even be able to argue that the wolf is really the victim. They could then create a new fairy tale. They’ll do all of this in their own kindergartner style, and with appropriate guidance from their teacher, but they’ll still be practicing all levels of thinking, including those crucial higher levels, because they’ve been pushed to go deeper.
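To make that progression concrete, here is a minimal sketch (hypothetical names and verbs, not an actual grading tool) that represents the new rubric as a simple data structure and reports the highest band a student has demonstrated.

```python
# A sketch of the new rubric as data (hypothetical names, not a real grading system).
RUBRIC_BANDS = {
    "Emerging":   {"recall", "identify"},
    "Developing": {"explain", "compare", "contrast"},
    "Proficient": {"apply"},
    "Advanced":   {"analyze", "evaluate", "connect", "argue", "create"},
}

def highest_band(demonstrated_verbs):
    """Return the highest band whose thinking verbs the student has demonstrated."""
    result = "Emerging"
    for band, verbs in RUBRIC_BANDS.items():      # dict order runs Emerging -> Advanced
        if verbs & set(demonstrated_verbs):
            result = band
    return result

# The kindergartner who recalls The Three Little Pigs is Emerging;
# once they argue the wolf's case, the same rubric calls it Advanced.
print(highest_band(["recall"]))           # -> Emerging
print(highest_band(["recall", "argue"]))  # -> Advanced
```

The point is not to automate grading; it is that the bands are defined by the thinking verbs students demonstrate, not by how much content they can list.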
Start with shifting your rubrics and the natural consequence will be a shift in the caliber of assignments and projects you ask your students to do. It will also cause a shift in the level of thinking you do as an educator and what you do each day with your students. A shift in your rubrics, so that they align with the progression of Bloom’s Taxonomy, will result in deeper learning. If the human contribution to the future will be making crucial decisions about where AI is appropriate, how AI will learn, and what we will teach AI, then we need students who can analyze a situation, argue the gray area, evaluate consequences and create solutions we’ve never seen before. They can’t get there if we never require them to. Let’s get our students ready for a future where the awesome power of their minds will be the most valuable commodity.
Larry Leonard
Ms. Durfee, you write: "If we want our machines to help us live in a better world, we have to be careful about the caliber of data we provide." However, you had just expounded on the wide range of personal data we DO provide that is NOT vetted for quality AI development. The "human contribution" as to WHERE AI is appropriate has already been determined: it is pervasive. Knowing this, the VALUE a human adds is to be automation's facilitator. Like you in your classroom, when humans engage with technology, they become AI facilitators, supplying information to the algorithms so AI can think for itself. In either case, we do want students learning critical thinking skills. Your new rubric has been in place in hundreds of school districts in the USA for many years. Yet we both know that it is the integrity of implementation that makes it effective. Thanks for a good read.
Replies
RReeve
I don't think you're grasping how much AI will change our world. Within the next 5-10 years at the very most, AI will be so far superior to humans that there will be no benefit to the AI in having human facilitators who supply information, because there will be nothing we could possibly teach it.
This is not science fiction. The day a single AI is smarter than us, we become obsolete. Experts in this field are telling us we are almost there already. When we can build something that is smarter than ourselves, it's logical to say that this AI will also be able to build an AI which is smarter than itself.
When this happens it will create exponential growth at an ever-increasing rate that is impossible to predict, because it will impact EVERY area of our lives.
A generous estimate would be that about a year after we have built an AI smarter than ourselves, EVERY profession within EVERY industry will replace people with AI.
Not only will every industry on the planet begin to experience huge growth, the rate of innovation and development across all sectors will also grow exponentially, because AI will be incredibly cheap for the very short time before it becomes free.
At the beginning of this global exponential curve, an AI will be smarter and more productive than a team of 100 humans, probably more.
Humans won't be able to keep up with the levels of progress because changes will occur faster than we can comprehend. It really is a Pandora's box that is impossible to contain and control. Anyone who believes it will just be a tool at our disposal, assisting us with our own development at our own pace, hasn't fully understood the implications of AI.
(Also, it's important to note that as every industry experiences massive improvements from AI, this includes robotics.)
Within a very short space of time, we're talking months not years, we won't be able to tell who is a machine and who is human. I know that sounds impossible, but when you begin to comprehend the speed of growth on this exponential curve, this technology will slip through our fingers and we won't see it coming. THIS IS NOT SCIENCE FICTION. It's impossible for this not to happen when you fully understand AI and its potential. When we eventually hit this unavoidable technological singularity, technology and artificial intelligence will spread across the universe at the speed of light.
NOW is the time to act, because this is the biggest risk to our survival as a species. If nothing changes, this will 100% happen within this generation, without doubt. I've always said you would have to be insane to build a machine with this much intelligence. Many smart people are warning us to act now, but nobody is listening because few can comprehend the danger we are in. Job security is the very last topic we should be concerning ourselves with when our very existence is in question. It's incredibly arrogant and naive to assume we can keep control of this technology for our own benefit.
I would say AI is the next big discovery after humans discovered FIRE. Thankfully, early man managed to control and contain fire, but we will never be able to do the same with AI. Our only hope at this moment in time is that AI decides to allow humans to co-exist with it. It's 50/50 whether that will happen.