Association for Educational Communications and Technology
Presentation Number 2108
Title ACCOUNTING FOR INDIVIDUAL DIFFERENCES IN LEARNING: Where do we start and what are the implications for online instruction?
Program Area Research and Theory
Date and Time: 11/8/01       Location: Dorval
Length and Type of Session: 60 Minutes    Presentation then Discussion
Presenter(s): Joanne Bentley, Utah State University

Short Description
A comparison of two instruments designed to measure individual differences in learning and a discussion concerning the implications for online instruction.
Abstract BACKGROUND Over the years there have been many attempts to account for individual differences in learning. However, the problems associated with obtaining a stable measure of these differences have led many to conclude that they are indeterminable. Without considering the dominant influence of emotions and intentions on learning, both Cronbach (1957, 1975) and Snow (1987; Snow et al., 1990) were unable to find stable cognitive aptitude-treatment interactions. However, both Snow and Cronbach found more stable attribute-treatment interactions at the conative level (Cronbach, 1975). In the late eighties, Snow (1987) described how cognitive psychology had demoted conation as a learning factor: because it seemed not to be a separable function, it was merged with affect, and together these factors were viewed as mere associates or products of cognition, and then ignored. He warned that individual-difference constructs, or aptitude complexes, needed greater consideration of the joint functioning of cognitive, conative, and affective processes.
Snow was in search of an information-processing model of cognition that would include (still as a secondary consideration) possible cognitive-conative-affective intersections. He was looking for a way to fit realistic aspects of mental life, such as mood, emotion, impulse, desire, volition, and purposive striving, into instructional models. According to Snow (1989), the best instruction matches treatments that differ in structure and completeness to learners' general ability. Highly structured treatments (e.g., high external control, explicit sequences and components) seem to help students with low ability but hinder those with high ability (relative to low-structure treatments). However, by treating individual differences in learning as a predominantly cognitive phenomenon, researchers may have unwittingly ignored a key element in the equation. More recent research (Snow & Jackson, 1993, 1997; Jackson, 1998; Martinez, 2000) suggests that may well be the case.

PURPOSE The purpose of this study was to discover how the Learning Orientation Questionnaire (LOQ) and the Herrmann Brain Dominance Instrument (HBDI) are related in an attempt to sharpen and elaborate their respective score meanings and theoretical interpretations in accounting for individual learning differences. This study was the foundation for my dissertation and is a portion of the validation argument for the LOQ.

REVIEW OF LITERATURE Understanding individual differences in learning has been a major research interest since World War I. Over the ensuing years there have been many attempts to account for individual differences in learning (Gagné, 1967; Glaser, 1972, 1976; Ackerman, Sternberg, & Glaser, 1989; Jonassen & Grabowski, 1993). In the fifties, Cronbach (1957) optimistically challenged the field to “find for each individual the treatment to which he can most easily adapt.” However, perhaps because of the systematically cognitive approach used by researchers of the time, this challenge proved more complex than originally anticipated. Problems in obtaining stable measures of these differences, unstable interactions with treatment alternatives, and limited, expensive technology made creating computerized instruction that accommodated a broad range of individual differences very costly and time intensive. During the era of media studies, it was common to assume that most people learned in a similar fashion. However, if we are intent on avoiding the “no significant differences” trap that Russell (1997) documents in his review of numerous media-impact studies, we should ask whether lumping together different types of learners may have confounded earlier research.
Accordingly, if some learners were helped by a certain form of delivery, some were frustrated, and others were not particularly affected either positively or negatively, then it would not be surprising that there was frequently “no significant difference” in learning outcomes. If learners can be classified, as Martinez (1998) suggests, into transforming, performing, conforming, and resistant learners with different preferences for how they like to interact with content, then it is little wonder that when multiple students' scores are combined there is frequently no significant difference between treatments that ignore individual learner preferences. With the development of XML, meta-data, and cascading style sheets, the potentially costly nature of re-working the delivery of content for individual learning preferences has been greatly reduced. “Designers are finally allowed to separate content from style of delivery” (Hall & Gottfredson, in press). Groups such as IMS, AICC, and IEEE are currently developing learning standards that would allow small units of instruction, sometimes referred to as learning objects, to be shared across different management systems. Using these and other technological advances in computing to support dynamic content adaptation for different learning styles will be a huge step toward true mass customization of instructional material.
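The separation of content from style of delivery can be sketched in a few lines. The following is a minimal, hypothetical illustration (the content unit, template names, and structure are invented for this example and do not come from any cited standard or system) of rendering the same content unit under different presentation templates keyed to learning orientations:

```python
# Hypothetical sketch: the SAME content unit is delivered under different
# presentation templates, one per learning orientation. All names and
# templates below are illustrative assumptions, not part of any standard.

CONTENT_UNIT = {
    "title": "Introduction to Photosynthesis",
    "body": "Photosynthesis converts light energy into chemical energy...",
}

# One presentation style per orientation; content stays untouched.
TEMPLATES = {
    "transforming": "{title}\n\nExplore on your own:\n{body}\n[open-ended resources]",
    "performing":   "{title}\n\nObjectives and practice:\n{body}\n[guided exercises]",
    "conforming":   "{title}\n\nStep-by-step:\n{body}\n[explicit sequence, frequent feedback]",
}

def render(unit: dict, orientation: str) -> str:
    """Render one content unit with the template matched to the learner.

    Falls back to the most structured template when the orientation
    is unknown, mirroring the idea that high structure helps most
    learners who have no measured preference.
    """
    template = TEMPLATES.get(orientation, TEMPLATES["conforming"])
    return template.format(**unit)

print(render(CONTENT_UNIT, "transforming"))
```

The point of the sketch is only the design choice: because the content dictionary never changes, re-targeting delivery for a different learner costs one template, not a full rework of the material.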
William S. Cohen, the U.S. Secretary of Defense, summarizes the vision of the Advanced Distributed Learning Network (ADLNet) as being to "provide access to the highest quality education and training, tailored to individual needs, delivered cost effectively, anywhere and anytime" (ADLNet, 2000, emphasis added). The technology should be the tool delivering shareable content, assembled on the fly, using a variety of learning management systems and new instructional design theories that take into account a broader range of individual differences in learning. Now, with the rapid expansion of the Internet, web-based courses purporting to meet individual needs abound. However, the notion of mass customization in instruction has come of age faster than the instructional design theories needed to support it.
Although the fledgling technology is now available for some personalization of instruction, there are still few substantive, prescriptive solutions for accounting for a variety of individual differences in web-based learning. Nowhere is the issue of mass customization in instruction more problematic than in the training arena, where establishing return on investment for new innovations in training is crucial. Martinez (1999a, 1999b, 1998, 1997; Martinez et al., 1999; Martinez & Bunderson, 1998) has begun an aggressive push to apply her theory of accounting for individual differences through learning orientations to web-based instruction. Martinez (1999a, 1999b, 1998, 1997) goes beyond the work of Bereiter and Scardamalia (1993) to provide an elaborated view of intentional learning by elevating intentionality to a primary, or dominant, influence on learning. This perspective combines beliefs, control, enjoyment, effort, and intentions as they relate to learning at three distinct levels of orientation (transforming, performing, and conforming). She believes that such a model, when used to determine learner orientation, can provide relevant information on how to mass customize and dynamically personalize instruction to meet the needs of individual learners. Martinez is one of the first in web-based instruction to attempt to account for individual differences in learning coupled with dynamic delivery of content. Establishing a stronger case for the validity of her diagnostic instrument, the Learning Orientation Questionnaire (LOQ), which is based on Intentional Learning Theory, therefore becomes a timely research endeavor.

METHODS Cronbach (1988) introduced the term validation argument to describe the process of establishing validity, which he described as an argument that “must link concepts, evidence, social and personal consequences, and values . . . The 30-year old idea of three types of validity, separate but equal, is an idea whose time is gone . . . validation is never finished”. Building on Cronbach (1988), Martinez, Bunderson, & Wiley (2000) propose that “the verification procedure in design experiments is a design process to establish the various aspects of construct validity and other aspects of a validity argument”, thereby taking the idea of “constructing construct validity” one step further. Martinez, Bunderson, & Wiley (2000) go on to describe how convergent and discriminant studies add to the verification process by “finding alternative measures of the same construct and comparing measurement outcomes across instruments, people, and occasions.
Measures of the same construct should converge to provide triangulated evidence for the construct” (p. 14). Finding “relationships among different methods of measuring the construct can be especially helpful in sharpening and elaborating score meaning and interpretation” (AERA, APA, NCME, 1999, p. 14). This study attempts to discover how the LOQ and the HBDI are related, and whether their items measure similar or distinctly different constructs, in an attempt to sharpen and elaborate their respective score meanings and interpretations. We expected to find some correlation between the LOQ and the HBDI, although exactly how they would be correlated was not known. We had three general hypotheses. First, we anticipated that the HBDI would have a broader scope across different domains than the LOQ but would emphasize cognitive and social constructs. Second, the LOQ would not span as many domains but would emphasize conative and affective constructs and de-emphasize cognitive, physical, social, and value constructs. Hence, the HBDI should correlate with the LOQ more strongly on the cognitive construct of effort, which involves strategies and planning, than on the conative/affective factor of intentions. Third, we believed the LOQ was more likely to correlate with multiple-quadrant combinations (three or four quadrants) than with single-quadrant scores, which should provide insight into both transforming learners (LOQ) and whole-brainedness (HBDI). To understand the convergent and discriminant patterns of relationship between the LOQ and the HBDI, the four scores on the LOQ were correlated with the four profile composite scores on the HBDI using a cumulative augmented quota sample of approximately 200 high school and college-age respondents.
Bivariate correlations were run using the Pearson product-moment method based on the four raw quadrant scores from the HBDI (Upper Left, Lower Left, Lower Right, Upper Right) and the raw scores from the LOQ (Intentions, Effort, Control, and total Learning Orientation).
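The analysis can be sketched in a few lines. The following is a minimal illustration of computing the 4x4 cross-correlation matrix between the two instruments' raw scores; the score labels follow the instruments' scales, but every numeric value below is invented purely for the example (the study's actual data appear in the final paper):

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented raw scores for five respondents (the study used ~200).
hbdi = {  # four HBDI quadrant scores
    "Upper Left":  [78, 65, 90, 55, 70],
    "Lower Left":  [60, 72, 50, 80, 66],
    "Lower Right": [55, 70, 48, 75, 62],
    "Upper Right": [85, 60, 95, 50, 74],
}
loq = {  # LOQ subscale and total scores
    "Intentions": [5.2, 4.1, 6.3, 3.8, 4.9],
    "Effort":     [4.8, 4.5, 5.9, 4.0, 4.7],
    "Control":    [5.0, 3.9, 6.1, 3.5, 4.6],
    "Total LO":   [5.0, 4.2, 6.1, 3.8, 4.7],
}

# Cross-correlation matrix: one row per HBDI quadrant,
# one column per LOQ scale.
for h_name, h_scores in hbdi.items():
    row = [pearson(h_scores, l_scores) for l_scores in loq.values()]
    print(f"{h_name:>11}: " + "  ".join(f"{r:+.2f}" for r in row))
```

In practice a statistics package would also report significance levels for each coefficient; the sketch shows only the raw coefficients themselves.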
The resulting data will be displayed in correlation matrices in the final paper. Subjects were asked to voluntarily participate in a study comparing two measures of assessing learning preferences: the LOQ and the HBDI. Assurances were made that all data collected would be kept confidential and that their responses would in no way affect their class grades, both ethical considerations being important to the design. Participants were reminded to be honest and then given the LOQ and the HBDI to complete. After the instruments were collected, a short debriefing seminar on learning styles was given to participants. This study used an incrementally augmented quota sampling design to select 150-200 high school and college-age subjects from a variety of backgrounds. Bailey (1982) describes quota sampling as “the nonprobability sampling equivalent of stratified sampling” (p. 97). In the traditional application of quota sampling, “each stratum is generally represented in the sample in the same proportion as in the entire population”; however, equal representation is not always possible (p. 97).
Due to the partially exploratory nature of this research project, there was some concern that a convenience sample might restrict the range of representation of each of the variables. It is understood that a nonrandom sample gives up the probable assurance of being representative of the population. However, we were not trying to generalize the results of this study to the population at large. This study was designed to measure the construct correlation between the LOQ and the HBDI and to reflect on the underlying theory. Rather than generalizing to a predefined population, it was concerned with ensuring that a full range of learning orientations and HBDI profiles were approximately equally represented in the sample. Approximately equivalent representation of the eight variables allowed for a more complete and reliable correlation.
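The quota-sampling procedure described above can be sketched as follows. This is a minimal, hypothetical illustration (the stratum labels, quota sizes, and candidate pool are all invented for the example) of accepting respondents one at a time until each stratum's quota is filled, so that every category ends up approximately equally represented:

```python
import random

def quota_sample(candidates, stratum_of, quota_per_stratum):
    """Accept candidates one at a time until every stratum's quota is full.

    candidates: iterable of respondent records
    stratum_of: function mapping a record to its stratum label
    quota_per_stratum: dict of stratum label -> target count
    """
    counts = {s: 0 for s in quota_per_stratum}
    sample = []
    for person in candidates:
        s = stratum_of(person)
        if s in counts and counts[s] < quota_per_stratum[s]:
            sample.append(person)
            counts[s] += 1
        if all(counts[s] >= quota_per_stratum[s] for s in counts):
            break  # all quotas filled, stop recruiting
    return sample

# Illustrative pool: respondents tagged with a hypothetical orientation.
random.seed(1)
orientations = ["transforming", "performing", "conforming", "resistant"]
pool = [{"id": i, "orientation": random.choice(orientations)}
        for i in range(1000)]

sample = quota_sample(pool, lambda p: p["orientation"],
                      {o: 50 for o in orientations})
print(len(sample))  # 200 when the pool contains enough of each stratum
```

Unlike proportional stratified sampling, the quotas here are deliberately equal rather than matched to population proportions, reflecting the study's goal of covering the full range of profiles rather than representing a population.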

SUMMARY OF RESULTS Based on expert judgment, items on the HBDI are primarily cognitive and items on the LOQ are primarily conative, confirming that the HBDI is more cognitively oriented and the LOQ more conative and affective. As experts sharpen distinctions between constructs, the clarity of their substantive processes increases, improving the construct validity of the instruments. Of practical importance, experts found the LOQ to measure constructs different from those of the HBDI. Even as one of the broadest measures of individual differences, the HBDI does not significantly overlap with the conative and affective constructs measured by the LOQ. The correlations between the LOQ and the HBDI shed light on the substantive processes operating in both instruments. The HBDI and the LOQ do converge around measures of high intentionality. High intentionality appears to be associated with HBDI scores in the upper right quadrant, right mode, cerebral mode, whole-brainedness, cerebral left whole-brained (CLWB), and cerebral right whole-brained (CRWB) profiles. LOQ total scores were more likely to correlate with multiple-quadrant combinations (whole-brainedness) of HBDI scores than with single-quadrant HBDI scores. The Upper Right quadrant was the single HBDI score most likely to correlate with high LOQ scores; however, high LOQ scores were also likely to correlate with multiple-quadrant combinations such as CRWB. (Additional results from the study, and the significance of these results, will be explained more fully in the paper and presentation.)

CONCLUSIONS & IMPLICATIONS Assessing individual differences in learning and then tailoring instruction to fit students' needs is less challenging when you can interact face-to-face with your students over time: if one strategy doesn't work, you can try another, using verbal and non-verbal feedback to refine the delivery process. Over time, a student's preference for certain content delivery styles becomes evident. The ability to identify students' individual differences in learning and the opportunity to dynamically tailor instruction for an individual have always been possible in a traditional classroom but have seldom, if ever, truly existed in computer-based instruction (CBI). Convergent and discriminant validation studies have been lacking for both instruments. This study has begun to address issues of overlap and redundancy among individual-difference instruments important in teaching and learning situations. Common areas in accounting for individual learning differences have been highlighted, while distinctly different concepts have been drawn to the attention of the authors of both instruments for further consideration.
As a result of this study, we have deepened our understanding of the content and substantive processes of construct validity for both instruments and have come closer to understanding how to account for individual differences in learning. In summary, the LOQ has been shown in this study to differ significantly from the HBDI in the constructs it measures. Its use can therefore take us one step further in finding new ways to assess individual differences in learning. Based on LOQ scores, those who understand the intentional learning construct claim to be able to tailor learning treatments to those to which an individual can most easily adapt. With further research these claims may prove valid and useful. If so, the LOQ may indeed be what researchers are looking for to more coherently account for, and adapt to, individual differences in learning. Although more research is needed to complete the validation argument for the LOQ, it is my hope that future research will build on these findings. By providing data for determining and understanding individual differences in learning, we have a better hope of creating instruction that meets individual needs. Building another piece of the case for the validity of the LOQ has been important, not only because it is part of the process of establishing validity for both instruments but because it has the potential to strengthen the learning-theory base that underpins instructional psychology generally and web-based instruction specifically.

REFERENCES

ADLNet. Advanced Distributed Learning Network (ADLNet) [Online]. Available: http://www.adlnet.org/ [June 26, 2000].
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological tests. Washington, DC: American Educational Research Association.
American Psychological Association. (1954). Technical recommendations for psychological tests and diagnostic techniques. Psychological Bulletin, 51(2, Pt. 2).
American Psychological Association, American Educational Research Association, & National Council on Measurement in Education. (1974). Standards for educational and psychological tests. Washington, DC: American Psychological Association.
American Psychological Association, American Educational Research Association, & National Council on Measurement in Education. (1985). Standards for educational and psychological tests. Washington, DC: American Psychological Association.
Ackerman, P. L., Sternberg, R. J., & Glaser, R. (1989). Learning and individual differences: Advances in theory and research. New York: W. H. Freeman.
Anderson, L. W., & Krathwohl, D. R. (Eds.) (in press). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Allyn & Bacon.
Babbie, E. R. (1986). The practice of social research (4th ed.). Belmont, CA: Wadsworth.
Bailey, K. D. (1982). Methods of social research (2nd ed.). New York: The Free Press.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
Bereiter, C., & Scardamalia, M. (1993). Surpassing ourselves: An inquiry into the nature and implications of expertise. Chicago: Open Court.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives. Handbook I: Cognitive domain. New York: David McKay.
Boyle, G. J. (1995). Myers-Briggs Type Indicator (MBTI): Some psychometric limitations. Australian Psychologist, 30(1), 71-74.
Bunderson, C. V. (1988). The validity of the Herrmann Brain Dominance Instrument. In N. Herrmann (Ed.), The creative brain. Lake Lure, NC: Brain Books.
Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671-684.
Cronbach, L. J., & Snow, R. E. (1977). Aptitudes and instructional methods: A handbook for research on interactions. New York: Irvington.
Cronbach, L. J. (1988). Five perspectives on validity argument. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 3-17). Hillsdale, NJ: Lawrence Erlbaum.
Curry, L. (1990). A critique of the research on learning styles. Educational Leadership, 48(2).
Felder, R. (1996). Matters of style. ASEE Prism, 6, 18-23.
Gagné, R. (1967). Learning and individual differences. Columbus, OH: Merrill.
Gall, J. P., Gall, M. D., & Borg, W. R. (1996). Applying educational research: A practical guide (4th ed.). New York: Longman.
Gay, L. R. (1996). Educational research: Competencies for analysis and application (5th ed.). Columbus, OH: Merrill.
Gardner, H. E. (1999). Intelligence reframed: Multiple intelligences for the 21st century. New York: Basic Books.
Gardner, H. E. (1993). Multiple intelligences: Theory in practice. New York: Basic Books.
Gardner, H. E. (1984). Art, mind, and brain: A cognitive approach to creativity. New York: Basic Books.
Gardner, W. L., & Martinko, M. J. (1996). Using the Myers-Briggs Type Indicator to study managers: A literature review and research agenda. Journal of Management, 22(1), 45-83.
Glaser, R. (1976). Components of a psychology of instruction: Toward a science of design. Review of Educational Research, 46(1), 1-24.
Glaser, R. (1972). Individuals and learning: The new aptitudes. Educational Researcher, 1(6), 5-13.
Gredler, M. E. (1997). Learning and instruction: Theory into practice (3rd ed.). Upper Saddle River, NJ: Prentice-Hall.
Hall, J. P., & Gottfredson, C. A. (in press). Evaluating web-based training: The quest for the information-age employee. In B. H. Khan (Ed.), Web-based training.
Herrmann, N. (1990). The creative brain. Lake Lure, NC: Brain Books.
Ho, K. T. (1988). The dimensionality and occupational discriminating power of the Herrmann Brain Dominance Instrument. Unpublished doctoral dissertation, Brigham Young University, Utah.
Jackson, D. (1998). An exploration of selected conative constructs and their relation to science learning (CRESST CSE Technical Report 467). Palo Alto, CA: Stanford University, Department of Education.
Jonassen, D. H., & Grabowski, B. L. (1993). Handbook of individual differences, learning and instruction. Hillsdale, NJ: Lawrence Erlbaum Associates.
Martinez, M. (2000a). Learning Orientation Construct (LOC) [Online]. Available: http://www.trainingplace.com/source/research/learningorientations.htm#loc [July 31, 2000].
Martinez, M. (2000b). Learning Orientation Questionnaire [Online]. Available: http://www.trainingplace.com/source/research/questionnaire.htm#manual [July 31, 2000].
Martinez, M., Bunderson, C. V., & Wiley, D. (2000, April). Verification in a design experiment context: Validity argument as design process. Symposium session at the annual meeting of the American Educational Research Association, New Orleans, LA.
Martinez, M., & Bunderson, C. V. (1999). Development of a self-report instrument for measuring learning orientations and sources for individual differences: Instrument testing and hypothesis refinement. Unpublished manuscript.
Martinez, M. (1999a). A mass customization approach to learning. ASTD's Technical Training Magazine, 10(4), 24-26.
Martinez, M. (1999b). An investigation into successful learning: Measuring the impact of learning orientation, a primary learner-difference variable, on learning (University Microfilms No. 992217).
Martinez, M., Bunderson, C. V., Nelson, L., & Ruttan, J. P. (1999). Successful learning in the new millennium: A new web learning paradigm. Proceedings CD of the Association for the Advancement of Computing in Education WebNet 99 World Conference, Honolulu, HI.
Martinez, M. (1998). Development and validation of the intentional learning orientation questionnaire. Unpublished manuscript, Brigham Young University, Utah.
Martinez, M., & Bunderson, C. V. (1998). Transformation: A description of intentional learning. The Researcher, 13(1), 27-35. (ERIC Document Reproduction Service No. ED 408 260)
Martinez, M. (1997). Designing intentional learning environments. Proceedings of the ACM SIGDOC 97 International Conference on Computer Documentation, Salt Lake City, UT, 173-180.
Merrill, M. D. (1975). Learner control: Beyond aptitude-treatment interactions. AV Communication Review, 23, 217-226.
Messick, S. (1976). Individuality in learning: Implications of cognitive styles and creativity for human development. San Francisco: Jossey-Bass.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed.). New York: American Council on Education/Macmillan.
Messick, S. (1995). Validity of psychological assessment. American Psychologist, 50(9), 741-749.
The Ned Herrmann Group. (1989). Herrmann Brain Dominance Instrument profile interpretation package [Brochure]. Lake Lure, NC: Author.
Reeves, T. (1993). Pseudoscience in computer-based instruction: The case of learner control research. Journal of Computer-Based Instruction, 20(2), 39-46.
Rock, D. (1983). The issues and concerns related to developing a construct validation program. Unpublished report. Princeton, NJ: Educational Testing Service.
Russell, T. (1997). Technology wars: Winners and losers. Educom Review, 32(2), 44-46.
Snow, R. E. (1989). Toward assessment of cognitive and conative structures in learning. Educational Researcher, 18(9), 8-14.
Snow, R. E. (1987). Aptitude complexes. In R. E. Snow & M. Farr (Eds.), Aptitude, learning, and instruction: Vol. 3. Conative and affective process analysis (pp. 11-34). Hillsdale, NJ: Lawrence Erlbaum Associates.
Snow, R. E., & Jackson, D., III. (1993). Assessment of conative constructs for educational research and evaluation: A catalogue (CRESST CSE Technical Report 354). Palo Alto, CA: Stanford University, Department of Education.
Snow, R. E., & Jackson, D., III. (1997). Individual differences in conation: Selected constructs and measures (CRESST CSE Technical Report 447). Palo Alto, CA: Stanford University, Department of Education.
Snow, R. E., Mandinach, E., & McVey, M. (1990). The topography of mastery assessment in instructional domains. Princeton, NJ: Educational Testing Service.
Sperry, R. W. (1977). Bridging science and values: A unifying view of mind and brain. American Psychologist, 32(4), 237-245.

Displayed with written permission from Phil Harris/AECT (April 04, 2005).

Presentation on this site is © 2001 by AECT
Association for Educational Communications and Technology
1800 N. Stonelake Dr. Suite 2
Bloomington, IN • 47408