Science: Image in Action
eBook - ePub

Science: Image in Action

Proceedings of the 7th International Workshop on Data Analysis in Astronomy "Livio Scarsi and Vito Di Gesù"

  1. 328 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android


About This Book

The book gathers the articles presented at the seventh edition of the workshop "Data Analysis in Astronomy". It illustrates a current trend of searching for common expressions or models that transcend the usual disciplinary boundaries, possibly linked to a gap in the mathematics required to model complex systems. In that respect, data analysis would be at the epicentre of, and a key facilitator in, the current integrative phase of Science.

It is devoted to the question of "representation in Science", whence its name, IMAGe IN AcTION, and its three main thrusts:

  • Part A: Information: data organization and communication,
  • Part B: System: structure and behaviour,
  • Part C: Data — System representation.

Such a classification makes concepts such as "complexity" or "dynamics" appear as transverse notions: one measure among others, or one dimensional feature among others.

Part A broadly discusses the dialogue between experiments and information, whether information is extracted from or brought to experiments. The concept is fundamental in statistics and bears on the emergence of collective behaviours. Communication then calls for considerations of uncertainty — noise, indeterminacy or approximation — and its wider impact on the perception-action couple. Since clustering is largely a matter of uncertainty handling, representing the data set is not the only solution: introducing hierarchies with adapted metrics, or improving the data resolution a priori, are other methods in need of evaluation. Technology, together with increasing semantics, makes it possible to use synthetic data, such as simulation results, to multiply the sources.

Part B plays with another couple important for complex systems: state vs. transition. State-first descriptions would characterize physics, while transition-first descriptions would fit biology. That could stem from life producing dynamical systems in essence. With uncertainty joining causality here, geometry can bring answers: stable patterns in the state space involve constraints imposed by the consistency of some dynamics. Stable patterns of activity characterize biological systems too. In the living world, complexity — i.e. a global measure on both states and transitions — increases with consciousness: this might be a principle of evolution. Besides geometry and measures, operators and topology have their supporters for describing dynamical systems. Finally, targeting universality, the category theory of topological thermodynamics is proposed as a foundation for understanding dynamical systems.

Part C details examples of actual data-system relations with regard to explicit applications and experiments. It shows how pure computer display and animation techniques link models and representations to "reality" in some concrete, virtual manner. Such techniques are inspired by artificial life, with no connection to physical, biological or physiological phenomena! The Virtual Observatory is the second illustration of the evidence that simulation helps Science not only by giving access to more flexible parameter variability, but also through the associated data- and method-storing capabilities. It fosters interoperability, statistics on large corpora, efficient data mining (possibly through the web), etc.; in short, a reuse of resources in general, including novel ideas and competencies. Other examples deal more classically with inverse modelling and reconstruction, involving Bayesian techniques and chaos, but also fractals and symmetry.

Contents:

  • Information: Data Organization and Communication:
    • Statistical Information: A Bayesian Perspective (R B Stern & C A de B Pereira)
    • Multi-A(ge)nt Graph Patrolling and Partitioning (Y Elor & A M Bruckstein)
    • The Role of Noise in Brain Function (S Roy & R Llinás)
    • F-granulation, Generalized Rough Entropy and Image Analysis (S K Pal)
    • Fast Redshift Clustering with the Baire (Ultra) Metric (F Murtagh & P Contreras)
    • Interactive Classification Oriented Superresolution of Multispectral Images (P Ruiz, J V Talents, J Mateos, R Molina & A K Katsaggelos)
    • Blind Processing in Astrophysical Data Analysis (E Salerno & L Bedini)
    • The Extinction Map of the Orion Molecular Cloud (G Scandariato (Best Student's Paper), I Pagano & M Robberto)
  • System: Structure and Behaviour:
    • Common Grounds: The Role of Perception in Science and the Nature of Transitions (G Bernroider)
    • Looking the World from Inside: Intrinsic Geometry of Complex Systems (L Boi)
    • The Butterfly and the Photon: New Perspectives on Unpredictability, and the Notion of Causal Reality, in Quantum Physics (T N Palmer)
    • Self-replicated Wave Patterns in Neural Networks with Complex Threshold (V I Nekorkin)
    • A Local Explication of Causation (G Boniolo, R Faraldo & A Saggion)
    • Evolving Complexity, Cognition, and Consciousness (H Liljenström)
    • Self-Assembly, Modularity and Physical Complexity (S E Ahnert)
    • The Category of Topological Thermodynamics (R M Kiehn)
    • Anti-Phase Spiking Patterns (M P Igaev, A S Dmitrichev & V I Nekorkin)
  • Data/System Representation:
    • Reality, Models and Representations: The Case of Galaxies, Intelligence and Avatars (J-C Heudin)
    • Astronomical Images and Data Mining in the International Virtual Observatory Context (F Pasian, M Brescia & G Longo)
    • Dame: A Web Oriented Infrastructure for Scientific Data Mining and Exploration (S Cavuoti, M Brescia, G Longo, M Garofalo & A Nocella)
    • Galactic Phase Spaces (D Chakrabarty)
    • From Data to Images: A Shape Based Approach for Fluorescence Tomography (O Dorn & K E Prieto)
    • The Influence of Texture Symmetry in Marker Pointing: Experimenting with Humans and Algorithms (M Cardaci & M E Tabacchi)
    • A Multiscale Autocorrelation Function for Anisotropy Studies (M Scuderi, M De Domenico, A Insolia & H Lyberis)
    • A Multiscale, Lacunarity and Neural Network Method for γ/h Discrimination in Extensive Air Showers (A Pagliaro, F D'Anna & G D'Alì Staiti)
    • Bayesian Semi-Parametric Curve-Fitting and Clustering in SDSS Data (S Mukhopadhyay, S Roy & S Bhattacharya)


Readership: Students, researchers and academics.

Science: Image in Action, by Bertrand Zavidovique and Giosue' Lo Bosco, is available in PDF and ePUB formats.

Information

Publisher: WSPC
Year: 2011
ISBN: 9789814397186
PART A
Information: Data Organization and Communication
STATISTICAL INFORMATION: A BAYESIAN PERSPECTIVE
R.B. STERN and C.A. de B. PEREIRA*
Department of Statistics, IME-USP, São Paulo, SP, Brazil
* E-mail: [email protected]
We explore the concept of information in statistics: information about unknown quantities of interest, the parameters. We discuss intuitive ideas of what information in statistics should be. Our approach to information is divided into two scenarios: observed data and the planning of an experiment. In the first scenario, we discuss the Sufficiency Principle, the Conditionality Principle, the Likelihood Principle and their relationship with trivial experiments. We also apply some measures of information to an intuitive example. In the second scenario, the definition and new applications of Blackwell Sufficiency are presented. We discuss a new relationship between Blackwell Equivalence and the Likelihood Principle. Finally, the expected values of some measures of information are calculated.
Keywords: Blackwell Equivalence, Blackwell Sufficiency, Conditionality Principle, Likelihood Principle, Sufficiency Principle.
1. Introduction
The concept of information is perhaps one of the most controversial in Statistics. One can find innumerable different measures of the amount of information that an experiment or a given data set brings about unknown quantities of interest—parameters. It is important to explore these measures because, after all, extracting information seems to be the ultimate goal of Statistics. More precisely, the goal is to extract—from observed data or from an experiment to be performed—information about unknown quantities of interest. The intuitive definition of information that will guide this paper is given by Ref. 1 and is as follows:
“Information is what it does for you, it changes your opinion”.
This conceptual definition leads one to the following four questions:
• Information about what?
• Where is the information?
• How is information extracted?
• How much information is used?
We are interested in defining information about a parameter, θ, which assumes possible outcomes in Θ, the parameter space. Hence, the answer to the first question is straightforward.
A parameter represents a state of nature about which we are uncertain. For example, one can be interested in the number of days on which it will rain next year. In this case, θ is this number and Θ is the set of all natural numbers smaller than or equal to 366.
Next, we try to answer the second question. It is important to note that when defining Θ we are already using some previous knowledge about θ. In the example, we have stated that any year has at most 366 days and, therefore, θ must be at most this number. Besides stating Θ, one might also think that some values are more probable than others. This kind of knowledge is used to elicit the prior distribution for θ. Here the prior distribution represents our description of our present state of uncertainty about θ. The statistician's main goal is to decrease her/his uncertainty about this unknown quantity of interest. In order to reach this objective, (s)he observes data that, in his/her opinion, are related to the parameter. Consequently, one expects that there is information about θ in the data to be observed. That is, answering the second question, statistical (expected) information is contained in the collected data set (experiment to be performed).
It seems natural at this point to ask: how is the information contained in the observed data extracted? In order to answer this question properly, the "scientist" considers a global probability space involving a prior distribution on θ and the experimental distribution for every possible θ. Next, the "scientist" uses the Bayes operation to obtain the posterior distribution. The posterior distribution describes the uncertainty about the parameter after calibrating the prior by the observed data. Thus, we could say that the new information also depends on the statistical framework. The act of observing the data corresponds to a mechanism for transforming unknown quantities into known ones. We also call such a mechanism an experiment.
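The Bayes operation described above can be sketched on the rain-days example. This is a minimal, hypothetical illustration: the uniform prior, the sample size and the binomial sampling model are assumptions made here for concreteness, not choices made in the paper.

```python
from math import comb

# Hypothetical sketch of the Bayes operation on a discrete parameter space.
# theta = number of rainy days next year; Theta = {0, 1, ..., 366}.
Theta = range(367)

# Prior: uniform over Theta (a deliberately naive elicitation).
prior = {t: 1.0 / 367 for t in Theta}

# Experiment (assumed): among n = 30 randomly sampled days, k = 9 were rainy.
# Likelihood (assumed model): Binomial(n, t/366) evaluated at the observed k.
n, k = 30, 9

def likelihood(t):
    p = t / 366
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Bayes operation: posterior proportional to prior times likelihood.
unnorm = {t: prior[t] * likelihood(t) for t in Theta}
z = sum(unnorm.values())
posterior = {t: v / z for t, v in unnorm.items()}

mode = max(posterior, key=posterior.get)
print(mode)  # posterior mode, close to 366 * 9/30
```

With a uniform prior the posterior mode coincides with the maximum-likelihood value; a more opinionated prior would pull it towards the values the statistician judged more probable beforehand.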
The last question, which is important in the design of experiments, is: "How much information is extracted after the experiment is performed (after the data are observed)?". After obtaining an adequate answer, it is possible to modify the question into a pre-posterior analysis: "How much information do we expect to obtain from a specific experiment to be performed?". The heart of this paper is to explore possible answers to both questions.
Section 3 analyzes how informative a particular data set is. Section 3.1 introduces common principles in Statistics and their relationship with the Likelihood Principle. Section 3.2 presents a simple example and three information functions compatible with the principles of Section 3.1.
Section 4 is related to experimental design and tries to answer the following question: among the possible alternative experiments, which is our best choice? Blackwell Sufficiency is a possible criterion for comparing experiments. The definition of Blackwell Sufficiency, with examples, is presented in Section 4.1. The Likelihood Principle and its relationship with Blackwell Equivalence are discussed in Section 4.2. We argue that Blackwell Sufficiency is the best criterion whenever the experiments are comparable in that sense. Finally, since not all experiments are comparable in Blackwell's sense, Section 4.3 explores the metrics presented in Section 3.2, using the framework of decision theory to compare experiments.
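One standard pre-posterior criterion of the kind discussed above is to score an experiment by its expected information gain, i.e. the expected Kullback-Leibler divergence from prior to posterior, which equals the mutual information between θ and the data. The sketch below is illustrative only: the tiny parameter space, the binomial sampling models and the sample sizes are assumptions made here, not the paper's examples.

```python
from math import comb, log

# Hypothetical pre-posterior comparison of two experiments: score each by
# the mutual information I(theta; X), i.e. the expected KL divergence
# between posterior and prior.
Theta = [0.2, 0.5, 0.8]            # a toy parameter space (assumed)
prior = {t: 1 / 3 for t in Theta}  # uniform prior (assumed)

def expected_information(n):
    """Mutual information I(theta; X) for the experiment X ~ Binomial(n, theta)."""
    info = 0.0
    for k in range(n + 1):
        # Likelihood of observing k successes under each theta.
        lik = {t: comb(n, k) * t**k * (1 - t)**(n - k) for t in Theta}
        # Marginal (prior predictive) probability of observing k.
        marg = sum(prior[t] * lik[t] for t in Theta)
        for t in Theta:
            joint = prior[t] * lik[t]
            if joint > 0:
                info += joint * log(joint / (prior[t] * marg))
    return info

# A larger experiment is never expected to be less informative.
print(expected_information(5) < expected_information(20))  # True
```

Criteria of this kind only induce a total order on experiments for a fixed information measure; Blackwell Sufficiency, by contrast, is a partial order that does not depend on the choice of measure, which is why the paper treats it as the preferred criterion when it applies.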
2. Definitions
In the context of experimental information we will always be concerned with a probability space, that is, a triple (Ω, F, P) in which Ω is a set, F is a σ-algebra on Ω and P : F → [0,1] is a probability function.
A random quantity R corresponds to a function from Ω to some set X. We define the probability space induced by R, (X, F_R, P_R), where F_R is a σ-algebra on X and P_R(M) = P(R⁻¹[M]) for every M in F_R. Finally, the σ-algebra induced on Ω by a random quantity R is called σ(R) and corresponds to {R⁻¹[M] : M ∈ F_R}.
We call an experiment any random quantity which can be observed, that is, which can be known. The realization of an experiment corresponds to the observation of this random quantity. It is important to observe that classifying a random variable as an experiment has nothing to do with the probability space, but with the limitations which exist in the world.
On the other hand, a parameter is a random quantity of interest. If the parameter were an experiment, its value could be known and the work of the statistician would be easy. Nevertheless, in many cases the parameter is not an experiment. Therefore, it is necessary to learn about it in an indirect manner, that is, observing those random quantities which are experiments and applying the Bayes Theorem. Again, classify...
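The induced probability P_R(M) = P(R⁻¹[M]) from the definitions above can be sketched on a finite space. The two-coin space and the head-counting quantity below are illustrative assumptions, not taken from the paper.

```python
from fractions import Fraction

# A minimal finite-space sketch of the definitions above (all names are
# illustrative). Omega is the sample space, P assigns a probability to
# each outcome, and the random quantity R maps Omega into X = {0, 1, 2}.
Omega = ["HH", "HT", "TH", "TT"]
P = {w: Fraction(1, 4) for w in Omega}   # two fair coin tosses (assumed)

R = lambda w: w.count("H")               # R : Omega -> {0, 1, 2}

def P_R(M):
    """Induced probability P_R(M) = P(R^{-1}[M])."""
    preimage = [w for w in Omega if R(w) in M]
    return sum(P[w] for w in preimage)

print(P_R({1}))        # 1/2: exactly one head (outcomes HT, TH)
print(P_R({0, 1, 2}))  # 1: the whole induced space
```

Here R would be an experiment in the paper's sense only if the coins can actually be observed; the definition of P_R itself depends only on the probability space, not on that observability.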

Table of contents

  1. Cover Page
  2. Title Page
  3. Copyright Page
  4. Contents
  5. Workshop photograph
  6. Organizing Committees
  7. Sponsors
  8. Preface
  9. Memory of Vito Di Gesù
  10. Part A Information: Data Organization and Communication
  11. Part B System: Structure and Behaviour
  12. Part C Data/System Representation
  13. Author Index
  14. Participants