i-Ready is Ready for Prime Time

EdTech

Curriculum Associates has been a trusted provider of supplemental standards-based content and test prep services for decades. The team, led by former Score President Rob Waldron, recently released i-Ready, a very impressive cloud-based K-8 adaptive assessment engine.

i-Ready’s two components, adaptive diagnostic and standards preparation, provide a visually appealing and fun approach to educational materials. Both programs allow teachers, parents, and administrators to follow the progress of every student down to the skill level. The diagnostic program not only identifies a student’s grade level, but pinpoints the skills that need improvement and adapts lessons accordingly.

The i-Ready standards preparation program focuses on helping students understand key concepts that will be tested on their state’s exams. The products are customized for every state exam and the new Common Core State Standards. i-Ready can be used to power your blended learning lab, to accelerate your struggling students, or to support summer school supplemental activities.

What makes this even better is that i-Ready assessments queue up engaging learning experiences in K-6 reading and math. The products are an interesting mix of print materials that seamlessly integrate with online activities.

Compared to MAP, NWEA’s market-leading assessment, i-Ready is easier to use and gives teachers more student data. It also has a really clean link to engaging content. It’s aligned to Common Core and/or your state standards and is designed to let teachers drive what is happening with every student.

Curriculum Associates also provides reading, mathematics, and special education print materials. With a talented and driven team, they are striving to be the best company to partner with on this ever-changing education journey.

Tom Vander Ark

Tom Vander Ark is author of Smart Parents, Smart Cities and Getting Smart. He is co-founder of Getting Smart and Learn Capital and serves on the boards of 4.0 Schools, eduInnovation, Digital Learning Institute, Imagination Foundation, Charter Board Partners and Bloomboard. Follow Tom on Twitter, @tvanderark.

4 Comments

Melanie Nichols /

Have you heard any reviews of i-Ready since publishing this? My daughter’s school is doing a test pilot of i-Ready. I’m concerned about it. Her school was using Saxon Math before. She always excelled when doing Saxon. Now when doing i-Ready she brings home 60s and 80s. What are your thoughts?

Ms. E. Hawkins /

I understand your concerns. My daughter is in the second grade. She was previously a T1 student.

I have had IEP meetings and PTCs with my daughter’s teacher.

No one has mentioned the i-Ready test, but my daughter brought home a note from her teacher stating: “Your child is considered ‘below level’ on the mid-year i-Ready test.”

I don’t have any knowledge or information about this i-Ready test. This is just something else to burden our children and attach new labels to them under this Common Core.

I have 7 children. Four of my children are in college: the 1st has a Master’s degree, the 2nd a Bachelor’s degree, and the 3rd and 4th Associate’s degrees. My other three children are between grades 2-11.

Everyone should be concerned.

E Schaefer /

My 4th grade daughter is complaining about computer time for the first time in her 6 years in the district. I have spoken to the principal, and he identified the program as i-Ready. I am just beginning my investigation, but he told me he has heard from a number of students who are not satisfied. I am planning to check with him about what changes he is planning.

Alma /

i-Ready’s results are poorly correlated with other tests and with actual skills, knowledge, and capability. A 7th grade student who was shown by a college entrance test to be at college level, knows 95 percent or better of vocabulary presented to the student, has a verbal IQ in the 99th percentile, and was placed by the Bader test in 3rd grade at an 8th grade level, performed at only grade level on this test.

It is true that after the system crashed, it came back to the question the student was on, then dropped down to simplistic words and passages. However, given test questions I observed that were illogical, had no correct answer or multiple correct answers depending on usage, or were clearly wrong in how they were set up, it would seem random guessing rather than specific knowledge may be the cause of artificially low scores. Additionally, it appears that multiple answers were needed at times, but this was not made clear to many of the students, also causing them to score low. Random clicking and then entering the correct answer was not accepted by the software; it keeps the first entry even before the student moves on to the next question.

The test also breaks down at the higher levels, as the scores overlap. It should not be used to grade teaching effectiveness or student needs until it is better correlated and until the placement test questions and adaptive software are fixed.