Sunday, March 18, 2012

Student Attrition, Retention, Persistence

After a few weeks of reading Tinto, Bean and others about what makes students stay and what makes them go, I have to say the theories make sense. Each piece of the theories seems coherent. The challenges and extensions of Tinto seem either warranted or as if they are taking Tinto out of context and applying his work to staying rather than going. In each of the models, my mind gets stuck at one point -- if you buy what these theorists are selling, colleges and universities have a very limited ability to affect whether students come, stay or go. It seems as if 60% of what the theories explain comes down to pre-college traits. While this may lend credence to the increasing trend of enrollment management personnel connecting with students as early as middle school, I'm not sure how these theories will influence my work. It seems like common sense that you serve your students as best you can with the resources your institution has (though I sense this may not be the case for some of my counterparts at other institutions). The theories do give me a framework for where to look for service or program needs. ...

Saturday, February 25, 2012

Keeping Track of Students

The past couple of weeks we've read about different aspects of students, attrition, retention...I haven't posted about it because I've actually spent quite some time talking over higher education and students with one of our admissions officers, the one and only Luke Morgan. Luke is finishing his master's in higher education at Western Kentucky, so we've processed together quite a bit (I think I have great ideas for his thesis...I'm not so sure he agrees). My thoughts are actually revolving around how we track students, and how we use that information. I should be working on the IPEDS spring data collection right now. We are asked to report freshman-to-sophomore retention; completions at 100%, 150%, and 200% of normal time; and degrees awarded. Our financial aid and business offices also report data during these collections. The data reported through IPEDS goes into the NCES databases, searchable by anyone, and the retention and graduation data for a selected institution is shown to every person who submits a FAFSA. The federal government is lagging far behind the realities of higher education:

  • The government only collects data on first-time, full-time students. Transfers are aggregated by male/female and full-/part-time status and then ignored. First-time, half-time students are included in the FTIAC cohort. A retention rate is the percentage of the FTIAC cohort that returns to your institution for its second year of college. A graduation rate is the percentage of the FTIAC cohort that completes in 100% of the time to degree (four years for a four-year degree). I've sketched both calculations just after this list.
  • Transfer students are a lost entity. Half of our incoming classes for the last two years have been transfer students. They count nowhere and for nothing.
  • The assumption is that retention and graduation rates reflect something of the quality of the institution. 
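To make those definitions concrete, here is a minimal sketch of the arithmetic -- my own illustration in Python, not anything IPEDS publishes, and the example numbers are hypothetical:

    # Both rates are simple fractions over the first-time (FTIAC) cohort,
    # following the definitions in the list above.

    def retention_rate(cohort_size: int, returned_second_fall: int) -> float:
        # Share of the FTIAC cohort back at the same institution for year two.
        return returned_second_fall / cohort_size

    def graduation_rate(cohort_size: int, completed_in_normal_time: int) -> float:
        # Share of the FTIAC cohort finishing in 100% of normal time
        # (four years for a four-year degree).
        return completed_in_normal_time / cohort_size

    # Hypothetical example: a 54-student cohort, 40 back in the fall, 25 finishing on time.
    print(retention_rate(54, 40))    # ~0.74
    print(graduation_rate(54, 25))   # ~0.46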
Why is the federal government not tracking transfer students? Why isn't the government tracking a student through his/her college career? Let's look at how this affects an institution like Kuyper. My incoming class last fall was 89 students, 54 of whom were FTIACs. The 35 transfers poof into thin air -- who cares about them? So my FTIAC cohort for retention and graduation tracking is 54 students. ONE student leaving moves the rate by nearly two percentage points. When we consider that 75% retention is the high side of average, for my college that means retaining about 40 students and losing 14. That's not bad! Which is worse: losing 14 students out of a 54-student cohort, or losing 250 students out of a 5,000-student incoming class? Is there a comparison? 
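Putting my numbers next to that hypothetical 5,000-student class makes the scale problem obvious -- again, just my own back-of-the-envelope Python:

    # The same attrition looks very different depending on cohort size.
    small_cohort, small_lost = 54, 14      # my FTIAC cohort at roughly 75% retention
    large_cohort, large_lost = 5000, 250   # the hypothetical large incoming class

    print(small_lost / small_cohort * 100)   # ~25.9% of the cohort gone
    print(large_lost / large_cohort * 100)   # 5.0% of the cohort gone

    # And a single student swings the reported rate very differently:
    print(1 / small_cohort * 100)   # ~1.9 percentage points per student
    print(1 / large_cohort * 100)   # 0.02 percentage points per student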
And why doesn't the government take any account of the reasons a student leaves? Why is it implicitly assumed to be the college's fault? Why are traditional 18-year-old freshmen the only group tracked? This affects not only small, private or specialized colleges, but community colleges too. 
My question is, when is the federal government going to get with the times? 

Saturday, January 21, 2012

Students and Assessment

  This week's readings in EAD966 are from Astin's book on assessment. Though it's dated, he offers good baseline information about what we assess and how, both for our students and for our institutions as a whole. Much of what he writes about is the variability of the information that can be collected and the context in which we collect it. What are we trying to measure? How can we best measure it? Though he focuses more on institution-wide, big-picture outcomes, the underlying practices of assessment apply at the course level as well as the institutional level.
  Assessment is often about judgments based on best practices. Some things simply aren't good practice, but there are also areas that I (in a very technical manner) call "squiggy". There are some "hard and fast" rules, and there are grey areas where you can take some liberties as long as you do so advisedly. Deciding how to assess students at the course level is one of those areas where there are best-practice guidelines, but there often aren't right and wrong answers. It's a judgment call, and sometimes you just have to dive in, try it, and tweak it the next time around.
  I was privileged to watch upper-level students wrestle through this process yesterday. Our first-year core group facilitators worked through the formulation and use of a grading rubric for reflection essays, and the various philosophies that can guide assessing student development and progress. They have experienced these decisions being made by others behind the curtain, but had never worked through the process themselves as the decision makers about assessment. They agreed they wanted to take an instructional role with their first-year students, and will norm their use of the rubric we decided on among themselves.
  This might be the most tangible assessment result I've had in a while.