Using Games and Simulations for Teaching and Assessment
Key Issues

About This Book

Using Games and Simulations for Teaching and Assessment: Key Issues comprises a multidisciplinary investigation into the issues that arise when using games and simulations for educational purposes. Using both theoretical and empirical analyses, this collection examines cognitive, motivational, and psychometric issues with a focus on STEM content. Unlike other research-based volumes that focus solely on game design or the theoretical basis behind gaming, this book unites previously disparate communities of researchers—from civilian to military contexts as well as multiple disciplines—to critically explore current problems and illustrate how instructionally effective games and simulations should be planned and evaluated.

While computer-based simulations and games have the potential to improve the quality of education and training, Using Games and Simulations for Teaching and Assessment: Key Issues shows how the science of learning should underlie the use of such technologies. Through a wide-ranging yet detailed examination, chapter authors provide suggestions for designing and developing games, simulations, and intelligent tutoring systems that are scientifically based, outcomes-driven, and cost-conscious.

Information

By: Harold F. O'Neil, Eva L. Baker, Ray S. Perez
Publisher: Routledge
Year: 2016
ISBN: 9781317814665
Edition: 1
Pages: 312
Format: ePub
Language: English
Subjects: Computer Science & Human-Computer Interaction

PART I
Theory/Framework/Context

1

A FRAMEWORK TO CREATE EFFECTIVE LEARNING GAMES AND SIMULATIONS

Eva L. Baker and Girlie C. Delacruz

This chapter presents a broad framework for considering the design and evaluation of games and simulations. Design serves as the overarching rubric: from our perspective, design means the planning and re-planning of elements to be executed in a product intended for learning. We will describe the information that is included in the initial design and the elements that trigger revisions. Our own experience covers the development and evaluation of learning games and simulations created by academic, nonprofit research, and commercial organizations. The framework we provide is intended to be flexible enough to encompass a wide range of potential development, and we have selected our emphases to highlight importance, innovation, and durable features less affected by changes in interface or platform. The orientation we take is largely conceptual, with guidance on how these ideas might be turned into procedures, but without any notion that there is one right procedural way to do this design and evaluation work.

The Centrality of Design—What Is Under the Hood

Our framework is grounded in the essential but fundamentally unsexy topic of designing the infrastructure that supports rapid and iterative R&D of games and simulations. When an important goal of the game is to meet an educational objective, much of the front-end game design effort should focus on clearly operationalizing those educational goals at different levels of specificity (Baker, Chung, & Delacruz, 2008; Delacruz, 2014). There are differences in approaches to the design and development of games. For example, in commercial settings, design focuses on how to make games within constraints (time and budget) and on their aesthetic or marketing value. Agile development approaches support progress tracking of game software and the asset development needed for integration. They emphasize timely releases during the creation of the game. They may stop at playtesting rather than accumulate evidence of the attainment of learning outcomes. We have observed and have participated in sprints (incremental development), stand-ups (daily coordination meetings), and A/B testing (contrasts of different variations). This "agile" approach, which derives from general software development, is contrasted in the literature with so-called "waterfall" approaches, which are inaptly characterized as goal-focused, single-track development, involving minimal data collection and quality control, but without iteration or comparative version testing (Dybå & Dingsøyr, 2008).
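As a loose illustration of the A/B testing just mentioned (not a description of any particular studio's pipeline), the sketch below randomly assigns players to one of two game variants and compares a simple outcome metric; all names and data are invented.

```python
import random
from statistics import mean

# Hypothetical sketch of A/B testing: players are randomly assigned to one
# of two game variants and a simple outcome metric (here, level completion)
# is summarized per variant. All names and data are invented.

def assign_variant(player_id: str) -> str:
    """Randomly assign a player to variant 'A' or 'B'."""
    return random.choice(["A", "B"])

def completion_rates(results):
    """Return the observed completion rate for each variant."""
    rates = {}
    for variant in ("A", "B"):
        outcomes = [done for v, done in results if v == variant]
        rates[variant] = mean(outcomes) if outcomes else 0.0
    return rates

# Simulated play data for illustration only.
results = [(assign_variant(f"player{i}"), random.random() < 0.5) for i in range(200)]
print(completion_rates(results))
```
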
In the current state of the practice, commercial games may not hold learning outcomes paramount or offer clear evidence about their effectiveness; yet even though they are not intentionally instructional in the school-learning sense, they do support the development of some skill sets that can be applied in academic learning and in the workforce. Similarly, many games designed to be educational (or serious games) have aped the structure and strategy of commercial "play" games, offering highly compelling engagement while also reporting evidence of consistent learning, particularly of difficult skills and concepts (Clark, Tanner-Smith, & Killingsworth, 2015; Delacruz, Baker, & Chung, 2013).
In our own design work we use a combination of the agile and waterfall approaches. We use a model-based engineering process to design and develop games and simulations (Delacruz, 2014)—a simultaneous top-down and bottom-up design process that goes beyond the typical design document or storyboards. In this approach, a set of explicit models is developed and revised to produce the game/simulation instructional framework and assessment architecture—a transparent, externally represented, and coordinated system to guide game design, development, and evaluation activities (Baker, 2012; Baker & Delacruz, 2013; Delacruz, 2014). There is a focus on goals, ontologies, feature analysis, instructional strategies, assessment, and evaluation. We will discuss each of these foci in the remainder of the chapter.

Goals

In this approach, a general slate of learning goals is articulated. This goal set comprises the desirable outcomes to be achieved by the player, including potential optional goals that might be anticipated at the outset of development. One clear responsibility is to describe learning clearly and explicitly, without reverting to an empty statement of response format, such as passing a multiple-choice test on troubleshooting disasters aboard a naval ship, which tells the designer almost nothing important about how to build the intervention. Instead, at this point, the emphasis is on getting more operational about what situations or tasks will be presented to the learner, what conditions or constraints will define the nature of those tasks, and what range of behaviors, actions, or decisions could (and should) be made.
For example, when designing games to train naval officers on surface warfare tactics (e.g., Iseli, Koenig, Lee, & Wainess, 2010), the designer should be given detailed specifications of the class of problems the learner should be able to solve (e.g., mitigating communication failures due to poor data transmission), the conditions and context (e.g., equipment and personnel available), and the information provided (e.g., a few pieces of high-quality information versus a lot of low-quality information). It is important that the designer also be given operational descriptions of what the learner should consider or attend to (e.g., whether communication is completely lost or intermittent, proximity to other members of the fleet) when making the choices and decisions needed to meet the objective or solve the problem.
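One way such a specification might be externalized, purely as an illustration and not the authors' notation, is as structured data that designers, developers, and evaluators can share; the field names and example values below are assumptions.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: one way to externalize the kind of task
# specification described above so that designers, developers, and
# evaluators share the same operational definition. Field names and
# example values are hypothetical.

@dataclass
class TaskSpecification:
    problem_class: str                                     # class of problems to be solved
    conditions: list = field(default_factory=list)         # context and constraints
    information_provided: list = field(default_factory=list)
    cues_to_attend_to: list = field(default_factory=list)
    expected_actions: list = field(default_factory=list)   # acceptable decisions/behaviors

spec = TaskSpecification(
    problem_class="mitigate communication failures due to poor data transmission",
    conditions=["limited equipment available", "reduced personnel"],
    information_provided=["a few pieces of high-quality information"],
    cues_to_attend_to=["communication completely lost vs. intermittent",
                       "proximity to other members of the fleet"],
    expected_actions=["re-route traffic through an alternate link (hypothetical)"],
)
print(spec)
```
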
At this point the work is about saying what the learning is to accomplish and how the designer will likely know. This set of decisions presages important choices to be made during the process of designing instructional options, as well as during the development of the checkpoints or formative assessment options that tell the instructor how well the student is learning. Finally, the goals, as fleshed out, present the core elements of the evaluation frame: the way effectiveness is to be gauged and the way impact and return on investment are calculated. What is not decided yet is how the learning is to take place. It has been the practice in game development to begin at an opposite point of entry, that is, storyboarding the sequence the learner goes through and assuming that such a sequence will be optimal for learning.
To add depth to goals, we need to create options to be considered and reviewed. For example, we developed a game for the Defense Advanced Research Projects Agency (Delacruz et al., 2013). Our program manager had a general idea of developing STEM (science, technology, engineering, and math) games for children ages 5 to 9 in Grades K–4. We took that idea and moved it to a more specific goal premise: We wanted to teach children physics concepts that they could use in a game environment and that, as a consequence of play, they would learn. The first set of ideas was generated by knowledge of, and confirmation about, what these learners were likely to be able to do. Some of our design principles driven by goals were as follows:
• Children would need to have ways to see the concepts in action rather than as abstractions.
• They might be able to be successful with nine or so concepts in a short 40-minute game.
• No child would be asked to deal with more than three concepts simultaneously (e.g., mass, velocity, friction); a brief validation sketch of this constraint follows the list.
• The cognitive process on which we focused was problem solving in a game scenario. Using a structure created by inviting cognitive scientists to give us their structure of problem solving (Mayer, 2011) and subsequently revised in versions by Delacruz (2013) and Baker (Baker & Delacruz, 2013), we defined the elements of the problem-solving scenario (e.g., obstacles in the game).
• We also recognized that children at ages 5 to 9 might not have strong reading skills, so we needed to plan for scenarios with rebuses or pictures to convey words, or with audible supports.
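A constraint such as the three-concept limit above lends itself to a mechanical check during development; the sketch below is a hypothetical validation pass over invented scenario definitions, not part of the authors' toolchain.

```python
# Hypothetical sketch: checking planned game scenarios against the design
# principle that no child faces more than three concepts at once.
# Scenario names and concept sets are invented.

MAX_SIMULTANEOUS_CONCEPTS = 3

scenarios = {
    "ramp_level_1": {"mass", "friction"},
    "ramp_level_2": {"mass", "velocity", "friction"},
    "ramp_level_3": {"mass", "velocity", "friction", "gravity"},  # violates the limit
}

for name, concepts in scenarios.items():
    if len(concepts) > MAX_SIMULTANEOUS_CONCEPTS:
        print(f"{name}: {len(concepts)} concepts exceed the limit of {MAX_SIMULTANEOUS_CONCEPTS}")
```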

Ontologies

At our institution, the Center for Research on Evaluation, Standards, and Student Testing (CRESST), we have used ontologies in math (Phelan, Vendlinski, Choi, Herman, & Baker, 2011), physics (Delacruz et al., 2013), communication (Baker, 2011), problem solving (Baker & Delacruz, 2013), situation awareness (Baker, 2011), and decision making (Baker, 2011), to name some examples. The ontologies are represented in different ways (e.g., network graph, matrices, text descriptions) to provide operational definitions at different levels of granularity to support the needs of designers, game developers, programmers, and data analysts.
In the course of our work, beginning with simulations of M-16 rifle marksmanship (Baker, 2012) and moving through math learning for middle school (Baker, 2012), we have used our model of ontology development. Ontologies are usually content driven and represent the relationship of elements to one another (Baker, 2012). They are often developed as vertical "part-of" sequences; for instance, if the goal were to identify a vector, then part of that task is to be able to identify the marker showing direction. There are also ontologies that are less hierarchical but show relationships. For example, in solving a balance problem, the student does not need to address either the right or the left side of the fulcrum first. The process can begin on either side, so long as the result is achieved (Baker, 2012).
Specifically, we use ontologies for at least four purposes: First, they depict key elements to be learned. For example, in a network representation, the nodes with the most links may indicate a level of importance and criticality that should be considered in deciding on outcomes. They also display the range of the domain to be learned and support the idea of comprehensiveness. If it is decided that it is not possible to be comprehensive, the decision to exclude certain areas is made clear. This approach contrasts with a strategy that says the goal is to improve learners’ performance on the construct of math achievement, consisting of rational numbers, geometry, and algebra, with items sampled for each but not explicitly weighted to reflect importance or relationships among major topics.
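As a rough sketch of the network-graph representation described above (not the authors' tooling), an ontology can be held as a list of edges, with the most-linked nodes surfaced as candidates for emphasis; the node names below are invented.

```python
from collections import defaultdict

# Rough illustration of the network-graph view of an ontology: nodes are
# knowledge elements, edges are relationships, and node degree serves as a
# crude signal of importance. Node names are invented.

edges = [
    ("force", "mass"),
    ("force", "acceleration"),
    ("force", "friction"),
    ("velocity", "acceleration"),
    ("mass", "weight"),
]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# Nodes with the most links are candidates for emphasis when selecting outcomes.
for node, links in sorted(degree.items(), key=lambda item: item[1], reverse=True):
    print(node, links)
```
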
Second, because of their focus on relationships, visual ontologies can help decide which components of content are prerequisite to learning particular goals. The prerequisites follow the notion of venerable educational psychologist Robert Gagné (Gagné, 1968; Gagné & Medsker, 1996), who suggested we ask a simple but important question: What does the learner need to be able to do before he or she can begin to tackle the next question? Gagné called his work "task analysis," and, like ontologies, task analyses were created by experts with the expectation that they would be empirically verified. A variant on this procedure (Clark, Feldon, van Merriënboer, Yates, & Early, 2008) is cognitive task analysis, which involves interrogating an expert during the process of completing a task or answering a particular question to elicit what the expert is thinking. A set of student responses could then be compared to determine whether the designer was on target in estimating what elements were needed in instruction.
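To make the prerequisite idea concrete, one purely illustrative approach is to express the relationships as a directed graph and order it so that nothing is taught before its prerequisites; the elements below extend the vector example mentioned earlier and are otherwise invented.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical sketch: prerequisite relationships (in the spirit of Gagné's
# task analysis) expressed as a directed graph and ordered so that no
# element appears before its prerequisites. Elements are invented examples.

prerequisites = {
    "identify a vector": {"identify the marker showing direction"},
    "add two vectors": {"identify a vector"},
    "resolve forces on an incline": {"add two vectors"},
}

# static_order() lists prerequisites before the tasks that depend on them.
print(list(TopologicalSorter(prerequisites).static_order()))
```
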
Third, the ontologies can help generate a best case or set of alternative sequences of learning, sometimes called learning progressions (to connote increasing learning requirements) or learning trajectories (to show individual or group patterns toward various standards of achievement; Confrey & Maloney, 2012). As with component knowledge elements, these ideally need empirical verification for the types of learners and settings in which the product will be used. This verification process is expensive in terms of time and dollars. As a less expensive alternative to verification, we have used two other approaches. One is to use crowdsourcing (Law & von Ahn, 2011) as a verification tool to see the extent to which others, presumably knowledgeable in the domain, would identify the same components and sequences. A second approach is to use expert-versus-novice comparisons (Acton, Johnson, & Goldsmith, 1994). For example, one could look at how learners who presumably have mastered the tasks perform on what are thought to be subsets of needed tasks. These performances are then contrasted with those of learners who have not yet succeeded in the skill area.
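The expert-versus-novice contrast can be illustrated, in a deliberately simplified way, by comparing per-subtask success rates between learners who have and have not mastered the overall skill; the subtask names and data below are invented.

```python
from statistics import mean

# Hypothetical sketch of an expert-vs.-novice comparison: per-subtask
# success rates for learners who have mastered the target skill versus
# those who have not. All numbers are invented.

masters = {"subtask_A": [1, 1, 1, 0], "subtask_B": [1, 1, 1, 1]}
novices = {"subtask_A": [1, 0, 1, 1], "subtask_B": [0, 1, 0, 0]}

for subtask in masters:
    gap = mean(masters[subtask]) - mean(novices[subtask])
    # A large gap suggests the subtask really is needed on the way to mastery.
    print(f"{subtask}: success-rate gap = {gap:.2f}")
```
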
Because ontologies based on problem solving have meaning only as they are embedded in a domain, it is possible to extract elements of the cognitive ontology for every major content node. For example, to succeed, a learner must be able to recognize or describe a concept and distinguish it from other members of its set (for instance, to know the difference between "<" and ">") before responding to certain probabilistic statements. Thus an important task is integrating the cognitive demands with the content ontology (Koenig, Iseli, Wainess, & Lee, 2013; Koenig, Lee, Iseli, & Wainess, 2010).
A fourth use of ontologies is in the design of assessments or other measures to be used during the various trials of the intervention. These assessments may focus on prerequisites identified in the ontologies, or in-game or in-simulation requirements intended to suggest that the learner is ready to progress...

Table of contents

  1. Cover
  2. Half Title
  3. Title Page
  4. Copyright Page
  5. Table of Contents
  6. List of Contributors
  7. Preface
  8. Part I Theory/Framework/Context
  9. Part II Assessment
  10. Part III Cognitive/Motivational Issues
  11. Part IV Psychometric Issues
  12. Index