The Wiley Handbook of Cognition and Assessment

Frameworks, Methodologies, and Applications

About This Book

This state-of-the-art resource brings together the most innovative scholars and thinkers in the field of testing to capture the changing conceptual, methodological, and applied landscape of cognitively-grounded educational assessments.

  • Offers a methodologically-rigorous review of cognitive and learning sciences models for testing purposes, as well as the latest statistical and technological know-how for designing, scoring, and interpreting results
  • Written by an international team of contributors at the cutting-edge of cognitive psychology and educational measurement under the editorship of a research director at the Educational Testing Service and an esteemed professor of educational psychology at the University of Alberta as well as supported by an expert advisory board
  • Covers conceptual frameworks, modern methodologies, and applied topics, in a style and at a level of technical detail that will appeal to a wide range of readers from both applied and scientific backgrounds
  • Considers emerging topics in cognitively-grounded assessment, including applications of emerging socio-cognitive models, cognitive models for human and automated scoring, and various innovative virtual performance assessments

Information

Year: 2016
ISBN: 9781118956618

1 Introduction to Handbook

André A. Rupp and Jacqueline P. Leighton

Motivation for Handbook

The field of educational assessment is changing in several important ways at the time of this writing. Most notably, there has been a shift toward embracing more complex ways of thinking about the relationship between core competencies, behaviors, and performances of learners at various developmental levels across the lifespan. These new ways of thinking have been fueled by new models of cognition that are increasingly inclusive and accepting of correlates of basic knowledge and skill sets. In many educational assessment contexts, considerations of how cognitive, meta‐cognitive, socio‐cognitive, and noncognitive characteristics of individual learners affect their behaviors and performances – and those of the teams in which they work – are becoming increasingly common. Clearly, at a basic level, the mere conceptual consideration of such broader characteristics and their interrelationships is not intellectually new, but the way in which they are nowadays explicitly articulated, operationalized, and used to drive instructional and assessment efforts is indeed something new.

Assessment of Twenty‐First‐Century Skills

In US policy, this trend is reflected in curricular movements such as the Common Core and its adoption by individual states as well as by collections of states in consortia such as the Partnership for Assessment of Readiness for College and Careers and Smarter Balanced. While the degree of influence of these two particular consortia is likely to change over time, the foundational tenets and goals of the Common Core are less likely to vanish from our educational landscape. Importantly, the Common Core standards articulate models of learning that are explicitly focused on the longitudinal development of learners across grades. Focal competencies include domain‐specific knowledge, skills, abilities, and professional practices, as well as broader cross‐domain competencies.
Such complex competencies are sometimes called “twenty‐first‐century skills” and include cognitive skills such as problem‐solving, systems thinking, and argumentation; intrapersonal skills such as self‐regulation, adaptability, and persistence; and interpersonal skills such as collaboration, leadership, and conflict resolution. Of note is the inclusion of information and communication technology skill sets, which are an integral part of the digitized life experiences of citizens around the world today. As a result, the kinds of intellectual and creative tasks that effective citizens need to be able to solve nowadays with digital tools are often qualitatively different in important ways from the tasks of the past. Consequently, considerations of smart assessment design, delivery, scoring, and reporting have become much more complex.
On the one hand, more “traditional” assessments constructed predominantly with various selected‐response formats such as multiple‐choice, true‐false, or drag‐and‐drop are certainly here to stay in some form, as their particular advantages in terms of efficiency of scoring, administration, and design are hard to overcome for many assessment purposes. This also implies the continued administration of such assessments in paper‐and‐pencil format rather than digital formats. While it clearly is possible to use tools such as tablets, smartphones, or personal computers for the delivery of innovative digital assessments, many areas of the world where education is critical do not yet have access to reliable state‐of‐the‐art technological infrastructures at a large scale.
On the other hand, there are numerous persistent efforts all over the world to create “smarter” digital learning and assessment environments such as innovative educational games, simulations, and other forms of immersive learning and assessment experiences. Sometimes these environments do not proclaim their assessment goals up front and may perform assessment quietly “behind the scenes” so as not to disturb the immersive experience – an effort called “stealth assessment” by some. Since the tasks that we create for learners are lenses that allow us to learn particular things about them and tell evidence‐based stories about them, we are nowadays confronted with the reality that these stories have become more complex rather than less complex. This is certainly a very healthy development since it forces assessment design teams to bring the same kinds of twenty‐first‐century skills to bear on the problem of assessment systems development that they want to measure and engender in the learners who eventually take such assessments.

Methodologies for Innovative Assessment

In the most innovative and immersive digital environments the nature of the data that are being collected for assessment purposes has also become much more complex. We now live in a world in which process and product data – the indicators from log files that capture response processes and the scores from work products that are submitted at certain points during activities – are often integrated or aligned to create more comprehensive narratives about learners. This has meant that specialists from the discipline of psychometrics have to learn how to play together – in a common and integrated methodological sandbox – with specialists from disciplines such as computer science, data mining, and learning science.
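As a purely illustrative sketch – not an approach prescribed in the Handbook, and with all identifiers and field names hypothetical – the following shows what aligning the two data streams can look like in practice:

# A minimal sketch of "aligning" process and product data; field names are hypothetical.
import pandas as pd

# Process data: indicators derived from log files (e.g., time on task, action counts)
process = pd.DataFrame({
    "learner_id": ["L01", "L02", "L03"],
    "task_id": ["T1", "T1", "T1"],
    "seconds_on_task": [412, 198, 655],
    "n_actions": [37, 12, 58],
})

# Product data: scores assigned to the work products submitted for the same tasks
product = pd.DataFrame({
    "learner_id": ["L01", "L02", "L03"],
    "task_id": ["T1", "T1", "T1"],
    "rubric_score": [3, 1, 4],
})

# Aligning the two streams by learner and task yields one integrated record per
# learner-task pair that downstream evidence models can draw on jointly.
integrated = process.merge(product, on=["learner_id", "task_id"])
print(integrated)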

Integrating disciplinary traditions.

Clearly, professionals deeply trained in psychometrics have a lot to offer when it comes to measuring uncertainty or articulating evidentiary threads for validity arguments when data traces such as log files are well structured. Similarly, professionals deeply trained in predominantly computational disciplines such as computer science or educational data mining have a lot to offer when it comes to thinking creatively through complex and less well‐structured data traces. Put somewhat simplistically, while traditional psychometrics is often seen as more of a top‐down architecture and confirmation enterprise, modern computational analytics is often seen as more of a bottom‐up architecture and exploration enterprise.
In the end, however, most assessment contexts require compromises for different kinds of design decisions and associated evidentiary argument components, so that effective collaboration and cross‐disciplinary fertilization are key to future success. This requires substantial strategic collaboration and communication, since professionals trained in different fields often speak different methodological languages or, at least, different methodological dialects within the same language.
Paradoxically, we are now at a time when conceptual frameworks like assessment engineering or evidence‐centered design – a framework that many authors in this Handbook make explicit reference to – will unfold their transformational power best, even though some of them have been around in the literature for over 20 years. None of these frameworks is a clear “how‐to” recipe, however. Instead, they are conceptual tools that can be used to engender common ways of thinking about critical design decisions along with a common vocabulary that can support effective decision‐making and a common perspective on how different types of evidence can be identified, accumulated, and aligned.

Integrating statistical modeling approaches.

Not surprisingly perhaps, the statistical models that we nowadays have at our disposal have also changed in important ways. Arguably, there has been a strong shift in recent decades toward the unification of statistical models into coherent specification, estimation, and interpretation frameworks. Examples of such efforts are the work on generalized linear and nonlinear mixed models, explanatory item response theory models, and diagnostic measurement models, to name just a few. Under each of these frameworks, one can find long histories of publications that discuss individual models in terms of their relative novelties, advantages, and disadvantages. The unified frameworks that have emerged have collected all of these models under common umbrellas and thus have laid bare the deep‐structure similarities across these seemingly loosely connected models.
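One compact way to see this deep‐structure similarity – sketched here for illustration rather than drawn from any particular chapter – is to write the Rasch model for a correct response $X_{pi}$ of person $p$ to item $i$ as a generalized linear mixed model:

\[
\Pr(X_{pi} = 1 \mid \theta_p) = \mathrm{logit}^{-1}(\theta_p - \beta_i), \qquad \theta_p \sim N(0, \sigma^2).
\]

Re‐expressing the item difficulty as a weighted combination of item properties, $\beta_i = \sum_k q_{ik}\,\eta_k$, yields an explanatory item response theory model, while restricting $\theta_p$ to a discrete profile of mastered and non‐mastered attributes yields a diagnostic measurement model – three seemingly distinct models written within a single specification framework.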
This has significantly restructured thinking around these models and has helped tremendously to scale back unwarranted, and rather naïve, claims from earlier times about the educational impact that certain kinds of statistical models could have by themselves. Put differently, it has helped many quantitative methodologists to re‐appreciate the fact that any model, no matter how elegantly it is specified or estimated, is, in the end, just a technological tool. Like any tool, it can be used thoughtfully, as a “healthy connective tissue” for evidence, or rather inappropriately, leading to serious evidentiary “injuries.”

Integrating assessment design and validity argumentation.

From a validity perspective, which is foundational for all educational assessment arguments, the constellation of design choices within an assessment life cycle has to be based on sound scientific reasoning and has to cohere rhetorically to provide added value to key stakeholders. This typically means that the information provided by such assessments should offer real insight into learning, performance, and the various factors that affect them.
As such, smart assessment design considers the system into which the assessment is embedded just as much as the tool itself. In fact, given the emphasis on measuring learning over time articulated in the Common Core, for instance, it is impossible to think of the diagnostic process as a one‐off event. Instead, assessment information needs to be interpreted, actions need to be taken, experiences need to be shaped, and new information needs to be collected in an ever‐continuing cycle of learning, assessment, and development. In this new world of cognition and assessment, such a longitudinal view will become more and more prevalent, thus forcing many communities of practice to change the way they design, deliver, score, report, and use assessments.
This perspective critically affects the societal reverberations that assessments can have when serving underrepresented or disadvantaged groups in order to improve the life experiences of all learners across the societal spectrum and lifespan. It may certainly be too much to ask of measurement specialists – or at least it may be rather impractical for workflow considerations – to always keep the bigger philanthropic goals of assessments in mind, as these do not always influence their work directly. For example, the optimal estimation of a complex latent variable model for nested data structures will not be directly affected by an understanding of whether this model is used in an assessment context where assessment scores are used to provide increased access to higher‐education institutions for minorities or in an educational survey context where they are used for accountability judgments.
However, ensuring that assessment arguments are thoughtful, differentiated, and responsible in light of the societal missions of assessment is important, especially in interdisciplinary teams that are charged with various critical design decisions throughout the assessment life cycle. It will help these teams be more motivated to keep track of controversial design decisions, limitations of assessment inferences, and critical assumptions. In short, it will help them to make sure they know what evidence they already have and what evidence still needs to be collected in order to support responsible interpretation and decision making. As mentioned earlier, such a shared understanding, perspective, and communal responsibility can be fostered by frameworks such as assessment engineering or evidence‐centered design.

Integrating professional development and practical workflows.

These last points speak to an aspect of assessment work that is often overlooked – or at least not taken as seriously as it could be – which is the professional development of specialists who have to work in interdisciplinary teams. There is still a notable gap in the way universities train graduate students with Master’s or PhD degrees in the practices of assessment design, deployment, and use. Similarly, many assessment companies or start‐ups are under immense business pressure to produce “smart” solutions with interdisciplinary teams under tight deadlines that take away critical reflection time.
In the world of the Common Core, for example, contracts from individual states or other stakeholders in which clients are asked to propose very complex design solutions on very short timelines can be problematic for these reflection processes. While such short turnaround times would be feasible if the needed products and solutions truly fit a plug‐and‐play approach, the truth is that the new assessment foci on more complex, authentic, collaborative, and digitally delivered assessment tasks require rather creative mindsets...

Table of contents

  1. Cover
  2. Title Page
  3. Table of Contents
  4. Notes on Contributors
  5. Foreword
  6. Acknowledgements
  7. 1 Introduction to Handbook
  8. Part I: Frameworks
  9. Part II: Methodologies
  10. Part III: Applications
  11. Glossary
  12. Index
  13. End User License Agreement