Multi-Camera Networks

Principles and Applications

eBook - ePub

  • 624 pages
  • English

About This Book

  • The first book, by the leading experts, on this rapidly developing field with applications to security, smart homes, multimedia, and environmental monitoring
  • Comprehensive coverage of fundamentals, algorithms, design methodologies, system implementation issues, architectures, and applications
  • Presents in detail the latest developments in multi-camera calibration, active and heterogeneous camera networks, multi-camera object and event detection, tracking, coding, smart camera architecture and middleware

This book is the definitive reference in multi-camera networks. It gives clear guidance on the conceptual and implementation issues involved in the design and operation of multi-camera networks, as well as presenting the state-of-the-art in hardware, algorithms and system development. The book is broad in scope, covering smart camera architectures, embedded processing, sensor fusion and middleware, calibration and topology, network-based detection and tracking, and applications in distributed and collaborative methods in camera networks. This book will be an ideal reference for university researchers, R&D engineers, computer engineers, and graduate students working in signal and video processing, computer vision, and sensor networks.

Hamid Aghajan is a Professor of Electrical Engineering (consulting) at Stanford University. His research is on multi-camera networks for smart environments, with applications to smart homes, assisted living and well-being, meeting rooms, and avatar-based communication and social interaction. He is Editor-in-Chief of the Journal of Ambient Intelligence and Smart Environments, and was general chair of ACM/IEEE ICDSC 2008.

Andrea Cavallaro is a Reader (Associate Professor) at Queen Mary, University of London (QMUL). His research is on target tracking and audiovisual content analysis for advanced surveillance and multi-sensor systems. He serves as Associate Editor of the IEEE Signal Processing Magazine and the IEEE Transactions on Multimedia, and has been general chair of IEEE AVSS 2007, ACM/IEEE ICDSC 2009, and BMVC 2009.

PART 1
MULTI-CAMERA CALIBRATION AND TOPOLOGY
CHAPTER 1 Multi-View Geometry for Camera Networks
Richard J. Radke
Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, New York
Abstract
Designing computer vision algorithms for camera networks requires an understanding of how images captured from different viewpoints of the same scene are related. This chapter introduces the basics of multi-view geometry in computer vision, including image formation and camera matrices, epipolar geometry and the fundamental matrix, projective transformations, and N-camera geometry. We also discuss feature detection and matching, and describe basic estimation algorithms for the most common problems that arise in multi-view geometry.
Keywords
image formation • epipolar geometry • projective transformations • structure from motion • feature detection and matching • camera networks

1.1 INTRODUCTION

Multi-camera networks are emerging as valuable tools for safety and security applications in environments as diverse as nursing homes, subway stations, highways, natural disaster sites, and battlefields. While early multi-camera networks were confined to lab environments and were fundamentally under the control of a single processor (e.g., [1]), modern multi-camera networks are composed of many spatially distributed cameras that may have their own processors or even power sources. To design computer vision algorithms that make the best use of the cameras’ data, it is critical to thoroughly understand the imaging process of a single camera and the geometric relationships involved among pairs or collections of cameras.
Our overall goal in this chapter is to introduce the basic terminology of multi-view geometry, as well as to describe best practices for several of the most common and important estimation problems. We begin in Section 1.2 by discussing the perspective projection model of image formation and the representation of image points, scene points, and camera matrices. In Section 1.3, we introduce the important concept of epipolar geometry, which relates a pair of perspective cameras, and its representation by the fundamental matrix. Section 1.4 describes projective transformations, which typically arise in camera networks that observe a common ground plane. Section 1.5 briefly discusses algorithms for detecting and matching feature points between images, a prerequisite for many of the estimation algorithms we consider. Section 1.6 discusses the general geometry of N cameras, and its estimation using factorization and structure-from-motion techniques. Section 1.7 concludes the chapter with pointers to further print and online resources that go into more detail on the problems introduced here.

1.2 IMAGE FORMATION

In this section, we describe the basic perspective image formation model, which for the most part accurately reflects the phenomena observed in images taken by real cameras. Throughout the chapter, we denote scene points by X = (X, Y, Z), image points by u = (x, y), and camera matrices by P.
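To make this notation concrete, here is a small NumPy sketch that builds a 3×4 pinhole camera matrix P and applies it to a homogeneous scene point. The factored form P = K [R | −RC] is the standard pinhole construction, not something stated in this excerpt, and the numbers (focal length, point coordinates) are made up for illustration.

```python
import numpy as np

# Illustration of the chapter's notation: a scene point X = (X, Y, Z),
# an image point u = (x, y), and a camera matrix P.
# Standard pinhole form assumed here: P = K [R | -R C].
f = 2.0
K = np.diag([f, f, 1.0])             # intrinsics: focal length only
R = np.eye(3)                        # orientation matrix in SO(3)
C = np.zeros(3)                      # center of projection at the origin
P = K @ np.hstack([R, (-R @ C)[:, None]])

X = np.array([1.0, 2.0, 4.0, 1.0])   # scene point in homogeneous coordinates
u_h = P @ X                          # homogeneous image point
u = u_h[:2] / u_h[2]                 # divide out the scale factor
print(u)  # [0.5 1. ]
```

The division by the third homogeneous coordinate is what makes the map projective rather than linear in the image coordinates.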

1.2.1 Perspective Projection

An idealized “pinhole” camera 𝒞 is described by:

  • A center of projection C ∈ ℝ³
  • A focal length f ∈ ℝ₊
  • An orientation matrix R ∈ SO(3)
The camera’s center and orientation are described with respect to a world coordinate system on ℝ³. A point X expressed in the world coordinate system as X = (X_o, Y_o, Z_o) can be expressed in the camera coordinate system of 𝒞 as

  (X_C, Y_C, Z_C)ᵀ = R((X_o, Y_o, Z_o)ᵀ − C)   (1.1)
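The rigid change of coordinates in (1.1) can be sketched in a few lines of NumPy; the function name below is illustrative, not from the chapter.

```python
import numpy as np

def world_to_camera(X_world, R, C):
    """Express a 3D world point in the camera coordinate system,
    per equation (1.1): X_C = R (X_world - C)."""
    return R @ (np.asarray(X_world, dtype=float) - C)

# Example: a camera at C = (0, 0, -5) with identity orientation.
# A scene point at the world origin lies at depth Z_C = 5 in
# front of the camera.
R = np.eye(3)
C = np.array([0.0, 0.0, -5.0])
Xc = world_to_camera([0.0, 0.0, 0.0], R, C)
print(Xc)  # [0. 0. 5.]
```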

The purpose of a camera is to capture a two-dimensional image of a three-dimensional scene, that is, a collection of points in ℝ³. This image is produced by perspective projection as follows. Each camera 𝒞 has an associated image plane located in the camera coordinate system at Z_C = f. As illustrated in Figure 1.1, the image plane inherits a natural orientation and two-dimensional coordinate system from the camera coordinate system’s XY-plane. It is important to note that the three-dimensional coordinate systems in this derivation are left-handed. This is a notational convenience, implying that the image plane lies between the center of projection and the scene, and that scene points have positive Z_C coordinates.

FIGURE 1.1 Pinhole camera,...
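Putting the two steps together, a world point is first mapped into the camera frame by the rigid transform X_C = R(X − C) and then projected onto the plane Z_C = f by perspective division. A minimal sketch, with an illustrative function name and made-up numbers:

```python
import numpy as np

def pinhole_project(X_world, R, C, f):
    """Map a world point to image coordinates: first into the
    camera frame via X_C = R (X_world - C), then onto the image
    plane Z_C = f by perspective division."""
    Xc, Yc, Zc = R @ (np.asarray(X_world, dtype=float) - C)
    # In the left-handed convention above, visible scene points
    # have positive Z_C.
    assert Zc > 0, "scene point must lie in front of the camera"
    return np.array([f * Xc / Zc, f * Yc / Zc])

# Camera at the world origin, identity orientation, f = 1:
u = pinhole_project([2.0, 1.0, 4.0], np.eye(3), np.zeros(3), 1.0)
print(u)  # [0.5  0.25]
```

Note how doubling the depth Z_C halves the image coordinates, which is exactly the foreshortening effect of perspective projection.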

Table of contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. Foreword
  6. Preface
  7. PART 1: MULTI-CAMERA CALIBRATION AND TOPOLOGY
  8. PART 2: ACTIVE AND HETEROGENEOUS CAMERA NETWORKS
  9. PART 3: MULTI-VIEW CODING
  10. PART 4: MULTI-CAMERA HUMAN DETECTION, TRACKING, POSE AND BEHAVIOR ANALYSIS
  11. PART 5: SMART CAMERA NETWORKS: ARCHITECTURE, MIDDLEWARE, AND APPLICATIONS
  12. Outlook
  13. Index