
Monday, June 16, 2014

Attribution Theory and Online Learning

In my earlier posts, I discussed the ESPRI and the factors associated with predicting student success. My research examines how the ESPRI can be used to improve student performance in online courses by addressing areas of weakness in a student’s “soft” skill set. To recap, the four areas covered by the ESPRI are technology self-efficacy, achievement beliefs, organizational skills, and academic risk-taking. In this post, I will discuss ways to intervene in the first two areas.
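
This post does not describe how ESPRI scores are turned into a prediction, but for readers who want a concrete picture, here is a minimal sketch of one way subscale scores could feed a success model. It assumes a simple logistic regression over the four areas named above; the data, variable names, and model choice are illustrative, not the ESPRI’s actual scoring procedure.

```python
# Illustrative sketch only: predicting course success from four ESPRI-style
# subscale scores using logistic regression. The actual ESPRI scoring model
# is not described in this post; all numbers below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: technology self-efficacy, achievement beliefs,
# organizational skills, academic risk-taking.
X = np.array([
    [4.2, 3.8, 4.0, 3.5],
    [2.1, 2.5, 1.8, 2.0],
    [3.9, 4.1, 3.2, 3.8],
    [1.5, 2.0, 2.2, 1.9],
    [3.0, 3.4, 2.9, 3.1],
    [2.4, 1.9, 2.0, 2.3],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = passed the online course, 0 = did not

model = LogisticRegression().fit(X, y)

# A low predicted probability of success could flag a student for an
# intervention targeting the relevant "soft" skill area.
new_student = np.array([[2.0, 2.2, 3.5, 2.1]])
print(model.predict_proba(new_student)[0, 1])
```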

To begin, some definitions. Self-efficacy refers to beliefs about one’s ability to successfully perform academic tasks, while self-concept is the knowledge and perceptions one holds about one’s own academic achievement (Ferla et al., 2009). Attribution theory (Weiner, 1992) concerns the causes to which students attribute academic outcomes, described along three dimensions: locus of control (i.e., is the cause internal or external to the learner?), stability (i.e., is the cause temporary or lasting?), and controllability (i.e., is the cause something the learner can control?).

Simply put, the more a student believes that he or she is in control of the outcome, the more likely the student is to persist, maintain motivation, and change behaviors to improve learning. Since many online learners are taking courses for credit recovery, or are in an alternative setting after being unsuccessful in a traditional school, changing mindsets is critical.

But how do you teach that? Can you teach that? Studies are somewhat scarce, and their results mixed. Walden and Ramey (1983) found that high-risk students who participated in a long-term intervention on internalizing control did see improvement in academic achievement. Robertson’s (2000) review of attribution retraining studies found mixed results and recommended that attribution retraining interventions include additional steps to ensure positive results. In other words, simply telling students to internalize attributions may lead to decreased motivation if they are still unsuccessful. Robertson suggested pairing retraining with other learning strategies; that way, if the student is not successful, the outcome can be viewed as a mistake in the strategy rather than a reflection of overall ability. Finally, Chodkiewicz and Boyle (2014) offered an overall critique of attribution studies as they relate to education, noting that much of the literature from the field of psychology comes from clinical settings, with little work taking place in the classroom.

My current research in online learning appears to be in line with Robertson’s recommendations, as we are looking to tackle multiple strategies rather than student belief systems alone.


Chodkiewicz, A. R., & Boyle, C. (2014). Exploring the contribution of attribution retraining to student perceptions and the learning process. Educational Psychology in Practice, 30(1), 78-87.

Ferla, J., Valcke, M., & Cai, Y. (2009). Academic self-efficacy and academic self-concept: Reconsidering structural relationships. Learning and Individual Differences, 19(4), 499-505.

Robertson, J. S. (2000). Is attribution training a worthwhile classroom intervention for K-12 students with learning difficulties? Educational Psychology Review, 12(1), 111-134.

Walden, T. A., & Ramey, C. T. (1983). Locus of control and academic achievement: Results from a preschool intervention program. Journal of Educational Psychology, 75(3), 347-358.

Weiner, B. (1992). Human motivation: Metaphors, theories, and research. Newbury Park, CA: Sage Publications.


Friday, April 18, 2014

Program-level measurements


Continuing our exploration of measures of success in online or blended settings, an important program-level complement to the course-specific factors identified in my previous post is program completion, or graduation rate. Conceptions of student success will vary by educational level – K-12, community college, 4-year college, and post-graduate degree – as well as by credentialing requirements, but completion of the courses required for graduation is a critical measure of program success.

Concerns about students’ persistence in online or blended programs surfaced shortly after institutions started offering these programs. Rovai (2003) explored research on this phenomenon and described a composite model to explain persistence and attrition in online courses and programs.   

A definition of retention that applies to online or blended programs comes from Boston, Ice, and Gibson (2011): “the progressive reenrollment in college, whether continuous from one term to the next or temporarily interrupted and then resumed” (p. 38). Students in online programs may not complete courses every term because of a lack of resources, changes in their profession, or other personal and professional commitments, but they should eventually complete their degrees.
A broader conception of online program success from Kuh et al. (2006) includes the following factors: “academic achievement, engagement in educationally purposeful activities, satisfaction, acquisition of desired knowledge, skills, and competencies, persistence, attainment of educational objectives, and post college performance” (p. 7). This definition reflects a more comprehensive view of student experiences and expectations in online or blended degree programs. 
A popular measurement of student satisfaction in higher education institutions is the student course evaluation (SCE). An analysis of two years of SCE data comparing our students’ perceptions of graduate course quality found no statistically significant differences based on course format (hybrid vs. online).
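
The analysis method used in our SCE comparison is not detailed here; as an illustration of one common approach, here is a minimal sketch of an independent-samples t-test on per-course mean ratings, with made-up numbers.

```python
# Illustrative sketch: comparing student course evaluation ratings by format.
# The test actually used in our study is not specified in this post; an
# independent-samples t-test on hypothetical per-course mean ratings is shown.
from scipy import stats

hybrid_means = [4.3, 4.1, 4.6, 3.9, 4.4, 4.2]  # hypothetical course-level means
online_means = [4.2, 4.5, 4.0, 4.3, 4.1, 4.4]  # hypothetical course-level means

t_stat, p_value = stats.ttest_ind(hybrid_means, online_means)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > .05 suggests no significant difference
```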

As part of our evaluation of a blended and online graduate degree in educational technology, we analyzed student success by format, along with GPA and retention, where the retention figure counts students who withdrew after the term started and those who received a failing grade in the course as not retained. For the 2011-2014 academic years, our graduate courses had retention rates of 98.8% in online courses and 97% in hybrid courses. We are still analyzing measures of student program success and will share that information in a future posting.
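
To make the retention figure concrete, here is a minimal sketch of the calculation implied by the definition above, with hypothetical enrollment numbers chosen only to illustrate the arithmetic.

```python
# Illustrative sketch of the retention calculation described above: students
# who withdrew after the term started or who failed the course are counted
# as not retained. All numbers are hypothetical.
def retention_rate(enrolled, withdrew_after_start, failed):
    retained = enrolled - (withdrew_after_start + failed)
    return retained / enrolled

print(f"{retention_rate(enrolled=500, withdrew_after_start=4, failed=2):.1%}")  # 98.8%
```
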
Other measurements of student success in online programs include interactions and a sense of community. Interactions are often cited in the research as linked with student satisfaction in online educational experiences, and we will explore conceptions of interaction in future postings. Exter et al. (2009) report on a study that measured students’ program-level sense of community and its possible link with overall success. A challenge for those implementing and supporting online programs is how to measure students’ sense of community at the program level and what actions can be taken to improve this affective element of students’ experience.
Knowing how many students complete an online program and graduate is an important indicator of success. Identifying the factors that contribute to students’ failure to graduate is equally critical, especially if our goal is to improve graduation rates. This becomes especially salient as the marketplace for online courses and programs grows to include non-traditional educational institutions and as stakeholders select programs based on their individual criteria.

Andy

References


Boston, W.E., Ice, P., & Gibson, A.M. (2011, spring). Comprehensive assessment of student retention in online learning environments. Online Journal of Distance Learning Administration, 14(1). Retrieved from: http://www.westga.edu/~distance/ojdla/spring141/index.php

Exter, M.E., Korkmaz, N., Harlin, N.M., & Bichelmeyer, B.A. (2009). Sense of community within a fully online program: Perspectives of graduate students. The Quarterly Review of Distance Education, 10(2), 177-194.

Kuh, G.D., Kinzie, J., Buckley, J.A., Bridges, B.K., & Hayek, J.C. (2006). What matters to student success: A review of the literature. Commissioned Report for the National Symposium on Postsecondary Student Success: Spearheading a Dialog on Student Success. Available online: http://nces.ed.gov/npec/pdf/kuh_team_report.pdf

Rovai, A.P. (2003). In search of higher persistence rates in distance education online programs. The Internet and Higher Education, 6(1), 1-16.

Monday, March 3, 2014

State of online education

“Navigating online education requires an understanding of the current state and the future direction of online teaching and learning.” Kim & Bonk, 2006.

Introduction


With the increased availability of web-based instruction and web-based learning environments (like Blackboard, Moodle, etc.), and with the growth of for-profit organizations in education, more and more instruction and assessment is occurring online. In higher education, a recent survey (Allen & Seaman, 2011) suggests that 77% of respondents at public universities agree with the statement “online education is critical to the long-term strategy of my institution” (p. 29). The same study reported that online enrollment represents 31.3% of total enrollment in the institutions surveyed.

Abundant growth is also occurring in K-12 online education. Ambient Insight (2011) reports that over 4 million K-12 students, or 6% of the overall K-12 student population, enrolled in online learning courses in the 2010-2011 school year. While there are many concerns about student success in virtual schools – e.g., a study by Miron, Horvitz & Gulosino (2013) reporting 37.6% graduation rates in 2011-2012 – for-profit corporations, like K12 Inc., are moving quickly to provide alternatives to traditional K-12 public and charter schools.

Some states are inviting for-profit institutions to offer online or blended classes that can replace K-12 in-class experiences. In Michigan, for example, high school students are required to complete an “online experience” before they can graduate. These efforts, together with the growing popularity of virtual schools, have led to an explosion in online training, courses, programs, and consulting services. In this blog, we will attempt to move beyond the hype and focus instead on what we know about effective methods, tools, and media for quality online education in higher education and K-12 settings.

Terminology


As we explore standards, research, and organizations involved in online education, it is important to recognize the different assumptions stakeholders make about basic concepts and outcomes. For example, what constitutes an “online course” or class is somewhat subjective. Some define online, virtual, or e-learning as “the majority of work completed online,” while others differentiate between online and “fully online,” where no on-campus activities are required. Add to this the notion of hybrid, blended, or flipped classrooms, and the conversation gets even more complicated. Finally, there are online or blended courses or classes, online or blended programs, and online or blended degrees.

Allen & Seaman (2011) define an “online course” as one with at least 80% of course content delivered online, while the Higher Learning Commission (HLC) uses a more restrictive definition: no required on-campus activities. On this blog we refer to the latter as “fully online” to differentiate between mostly online and totally (100%) online.
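
Because these format labels come up repeatedly on this blog, here is a minimal sketch of how the thresholds above might be applied. The 80% cutoff is Allen & Seaman’s and the “no on-campus activities” criterion is the HLC’s; the 30% lower bound for blended courses is not stated in this post and is assumed here only to round out the example.

```python
# Illustrative sketch of the course-format labels discussed above.
def course_format(pct_content_online: float, requires_on_campus: bool) -> str:
    if pct_content_online >= 80 and not requires_on_campus:
        return "fully online"    # HLC-style: no required on-campus activities
    if pct_content_online >= 80:
        return "online"          # Allen & Seaman's 80% threshold
    if pct_content_online >= 30:
        return "blended/hybrid"  # lower bound assumed for this sketch
    return "face-to-face"

print(course_format(100, False))  # fully online
print(course_format(85, True))    # online
print(course_format(50, True))    # blended/hybrid
```

Next we will explore efforts to develop standards and criteria for evaluating the quality of online educational offerings.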

Standards for online education


In the K-12 domain, standards for online instruction have been offered by several organizations. The National Education Association (NEA) offers standards for teaching in both online and blended K-12 settings. The International Association for K-12 Online Learning (iNACOL) provides another set of standards for online learning in K-12 settings. The iNACOL standards have gained widespread support, and some states are discussing whether to require K-12 teachers to hold a credential, or at least complete required courses, if they wish to teach online or blended classes.

In higher education, Quality Matters© (QM) provides a formal, peer-review-based process for evaluating and improving online courses (Legon & Runyon, 2007). QM’s evaluation criteria cover course overview and introduction, learning objectives, assessment and measurement, instructional materials, learner interaction and engagement, technology, learner support, and accessibility. QM has been adopted at our institution but, so far, has not been a required element of online course or program development, approval, or evaluation.

Another set of evaluation criteria for online programs in higher education comes from U.S. News & World Report (Brooks & Morse, 2014), which weights student selectivity in admissions (30%), student engagement (30%), faculty credentials and training (20%), and student services (20%). Student engagement includes graduation rate, best practices, program accreditation, class size, one-year retention, and time to degree completion.
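
As a back-of-the-envelope illustration of how category weights like these combine, here is a minimal sketch of a weighted composite score; the category scores are made up, and the actual U.S. News methodology involves additional data collection and normalization steps not shown here.

```python
# Illustrative sketch: combining U.S. News-style category weights into a
# composite score. Category scores below are hypothetical.
weights = {
    "student_selectivity": 0.30,
    "student_engagement": 0.30,
    "faculty_credentials_training": 0.20,
    "student_services": 0.20,
}
scores = {  # hypothetical 0-100 category scores for one program
    "student_selectivity": 72,
    "student_engagement": 85,
    "faculty_credentials_training": 90,
    "student_services": 78,
}
composite = sum(weights[k] * scores[k] for k in weights)
print(f"{composite:.1f}")  # 80.7
```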

We have aligned our online university courses and programs with standards specified by the HLC, which accredits our university through the North Central Association. Our institution currently offers an M.Ed. degree in educational technology in both hybrid (mostly online) and fully online formats, and we will begin offering our first online graduate degree in online/blended instruction and assessment in the summer of 2014. In our experiences developing, teaching, and evaluating online education, our work has been informed and enriched by research focused on online education.

Research


For those involved or interested in online education, it is important to consider the available research on effective online education when planning, developing, teaching or evaluating instruction. Online education as a scholarly domain includes an expanding base of knowledge and expertise that provides evidence-based ideas for effective instruction and assessment. 

For example, an article by Larreamendy-Joerns and Leinhardt (2006) explores the history of college-level online education based on published research in the field; a study by DiPietro (2010) examines the instructional practices of K-12 virtual school teachers; Ward, Peters & Shelley (2010) report on student and faculty perceptions of the quality of their online learning experiences; and Exter et al. (2009) examine the sense of community in a fully online graduate degree program.

A prominent organization in online education is the Sloan Consortium (Sloan-C), which offers standards as well as training, support, and research targeted at K-12 and higher education institutions. The Sloan-C Quality Scorecard© provides a set of standards and criteria for developing and evaluating online instruction based on five pillars of quality, and the Sloan-C website links to research on aspects of online education, including studies that influenced the development and use of the instrument. Other helpful resources include the United States Distance Learning Association (USDLA) and the International E-learning Association (iELA), both of which provide conferences, research, and other materials.

While there are clearly political and financial factors that will influence online education as it evolves, there are also evidence-based sources that can and should shape choices about online instruction and assessment. Paying attention to what is already known about online education can help improve the quality and effectiveness of these offerings and will ultimately benefit stakeholders. Key questions that research can address include: What factors influence student success in online or blended learning settings? How can online or hybrid courses and programs be evaluated for quality and effectiveness? How appropriate are online or virtual schools for K-12 students? How accessible are blended or online instruction and assessments for students with special needs or abilities? What opportunities do digital, web-based media and materials provide for students to extend or expand what is normally available in a traditional, time-limited class setting?

Immersing oneself in this research literature will hopefully ensure that online offerings are effective and meaningful for those who seek to benefit from online or blended/hybrid courses or programs. The future of online education looks especially bright if we continue exploring ways to teach and assess, learn from research in this domain and share our knowledge and experiences with stakeholders. We look forward to a rich and diverse conversation about online education on this blog! 

“More has been written about online education than is known.” Anonymous source.

References


Allen, I.E., & Seaman, J. (2011). Going the distance: Online education in the United States. Sloan Consortium. Retrieved from: http://www.onlinelearningsurvey.com/highered.html

Ambient Insight (2011). 2011 Learning technology research taxonomy: Research methodology, buyer segmentation, product definitions, and licensing model. Monroe, WA: Author. Retrieved from http://www.ambientinsight.com/Reports/eLearning.aspx

Blended learning: An NEA policy brief. National Education Association, Washington, DC. Retrieved from: http://www.nea.org/assets/docs/PB36blendedlearning2011.pdf

Brooks, E., & Morse, R. (2014, January 7). Methodology: Best Online Graduate Program Rankings. U.S. News & World Report. Retrieved from http://www.usnews.com/education/online-education/articles/2014/01/07/methodology-best-online-graduate-education-programs-rankings-2014

DiPietro, M. (2010). Virtual school pedagogy: The instructional practices of K-12 virtual school teachers. Journal of Educational Computing Research, 42(3), 327-354.

Exter, M.E., Korkmaz, N., Harlin, N.M., & Bichelmeyer, B.A. (2009). Sense of community within a fully online program: Perspectives of graduate students. The Quarterly Review of Distance Education, 10(2), 177-194.

Guide to teaching online courses. National Education Association, Washington, DC. Retrieved from: http://www.nea.org/technology/images/onlineteachguide.pdf

Guide to online high school courses. National Education Association, Washington, DC. Retrieved from: http://www.nea.org/technology/onlinecourseguide.html

Kim, K.-J., & Bonk, C.J. (2006). The future of online teaching and learning in higher education: The survey says … EDUCAUSE Quarterly, 29(4). Retrieved from: https://net.educause.edu/ir/library/pdf/eqm0644.pdf

Larreamendy-Joerns, J., & Leinhardt, G. (2006). Going the distance with online education. Review of Educational Research, 76(4), 567-605.

Legon, R., & Runyon, J. (2007). Research on the impact of the Quality Matters course review process. Presentation at the 23rd Annual Conference on Distance Teaching & Learning. Retrieved from: http://www.uwex.edu/disted/conference/Resource_library/proceedings/07_5284.pdf

Miron, G., Horvitz, B., & Gulosino, C. (2013). Virtual schools in the U.S. 2013: Politics, performance, policy, and research evidence. National Education Policy Center, School of Education, University of Colorado Boulder. Retrieved from: http://nepc.colorado.edu/files/nepc-virtual-2013-section-1-2.pdf

National Standards for Quality Online Teaching. International Association for K-12 Online Learning (iNACOL). Retrieved from: http://www.inacol.org/research/nationalstandards/iNACOL_CourseStandards_2011.pdf

Ward, M.E., Peters, G., & Shelley, K. (2010). Student and faculty perceptions of the quality of online learning experiences. International Review of Research in Open and Distance Learning, 11(3), 57-77.

Andy/03-03-14