Abstract for Ramo Gencay
Title: Multi-Scale Information Processing in Economics & Finance
Abstract: Conventional time series analysis, focusing exclusively on a given data frequency, may lack the ability to explain the nature of the data-generating process. An econometric model which successfully explains daily price changes, for example, may be unable to characterize the nature of hourly price changes. On the other hand, the statistical properties of monthly price changes may often not be fully covered by a model based on daily price changes.
If, for instance, a data-generating process is subject to structural breaks (temporary or permanent), the way such breaks (jumps, mean shifts or volatility regimes) manifest themselves at microsecond, second, hourly, daily or lower-frequency sampling rates does not necessarily follow a linear aggregation law. This means that linear aggregation from a high-frequency sampling rate (e.g. second-by-second data) to a lower-frequency rate (e.g. hourly data) does not necessarily lead to correct characterizations and inference. One possible solution is to model multiple frequencies at the same time.
Gençay et al. (2010) simultaneously model regimes of volatility at multiple time scales through
wavelet-domain hidden Markov models. They establish an important stylized property of
volatility across different time scales. They call this property asymmetric vertical dependence. It
is asymmetric in the sense that a low volatility state (regime) at a long time horizon is most likely
followed by low volatility states at shorter time horizons. On the other hand, a high volatility
state at long time horizons does not necessarily imply a high volatility state at shorter time
horizons. These types of findings have important implications regarding the scaling behavior of
volatility, and, consequently, the calculation of risk at different time scales.
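As a rough numerical illustration of how volatility can be examined scale by scale (a minimal sketch using the PyWavelets package on synthetic returns; it does not reproduce the wavelet-domain hidden Markov model of Gençay et al. (2010)):

```python
# Minimal sketch: scale-by-scale volatility via a discrete wavelet transform.
# Synthetic returns with a volatility regime switch stand in for real data;
# the wavelet-domain hidden Markov machinery of Gençay et al. (2010) is not
# reproduced here.
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Synthetic returns whose volatility doubles halfway through the sample.
n = 4096
sigma = np.where(np.arange(n) < n // 2, 0.5, 2.0)
returns = rng.normal(0.0, sigma)

# Multi-level DWT: detail coefficients at level j capture fluctuations
# at (dyadic) time scale 2**j.
coeffs = pywt.wavedec(returns, "db4", level=5)
details = coeffs[1:]          # [cD5, cD4, ..., cD1]

for j, d in zip(range(5, 0, -1), details):
    print(f"scale 2^{j}: wavelet variance = {np.var(d):.3f}")
```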
References:
Gençay, R., Gradojevic, N., Selçuk, F. and Whitcher, B. (2010) Asymmetry of information flow between volatilities across time scales. Quantitative Finance, 10(8), 895–915.
Gençay, R., Selçuk, F. and Whitcher, B. (2001) An Introduction to Wavelets and Other Filtering Methods in Finance and Economics. Academic Press.
Abstract for Jan van Leeuwen
Title: The Scope of PIIP
Abstract:
• Information processing is a key issue in computer science. It became a metaphor in
understanding cognitive processes, leading to the multi-stage view of the way humans deal with
information. What do we understand information processing to be, today?
• The scene of information processing has changed to data science:
– Data now comes in streams and piles. Information is embedded in it, in some known or unknown way, and perhaps without our knowing precisely what it is.
– Information processing is a process of iterated filtering, with suitable means and with or
without quantitative restrictions on it.
– The programs that govern the processing learn and adapt, and may change depending on the
data they receive over time and the interactions they engage in.
• How do we look at information processing now? What do the insights about streaming, data- and process-mining, and adaptive processes imply? Why is information processed? What does one get from it?
• Some questions:
– Streams give an infinite dimension to data. What does this mean for the concept of
information?
– Is there a general classification of information processing, based on the means that are used
and the information that is processed?
– Suppose the observer and the observed interact, implying that the data exchanged become
interaction-dependent over the course of the exchange. Can information processing be explained
game-theoretically?
– Information processing and computation are generally equated, though not considered identical; this view, however, rests on a traditional conception of both. Can the difference, if any, be made more tangible, in view of the evolved conceptions of both information processing and computation?
– Can information processing, like computation, be captured in a model that allows one to
express its generic properties, whatever these properties are?
• Is the philosophy of data science a philosophy of information processing?
• What role does information processing play in the understanding of natural complex systems? Conversely, what do we learn about information processing from these systems?
Abstract for Giuseppe Primiero
Title: Negating Trust: intentionality and enemy mine
Abstract: Trust has recently become a crucial epistemic notion in many areas across the computational sciences, used to identify relevant, secure or preferred agents, communications and information contents. Over the last two decades, research on trust has developed mainly quantitative approaches focusing on the understanding, modelling and anticipation of trust propagation. Such analyses heavily rely on the correct formal representation of transitive trust: if Alice trusts Bob and Bob trusts Carol, then Alice trusts Carol. This is a central issue for security protocols, efficient reputation and recommendation systems, and algorithm personalisation. These solutions are being extended towards the identification of various quality criteria, among which an understanding of negative values of trust is essential. Transitivity applies in this case as well: Alice does not trust Bob; Bob does not trust Carol; does Alice trust Carol?
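As a schematic illustration of this asymmetry between positive and negative trust (a toy sketch; the names, the sign convention and the naive composition rule are invented for the example and do not represent the intentional account developed here):

```python
# Toy signed trust graph: +1 = trusts, -1 = does not trust.
# Illustrative only; not the intentional/procedural model of the talk.
trust = {
    ("Alice", "Bob"): +1,
    ("Bob", "Carol"): +1,   # positive chain: transitivity is plausible
    ("Alice", "Dave"): -1,
    ("Dave", "Eve"): -1,    # negative chain: no agreed composition rule
}

def compose(s1, s2):
    """Naive composition of two trust judgements along a two-step path."""
    if s1 == +1 and s2 == +1:
        return +1            # trust of trust: commonly taken as trust
    return None              # any negative link: the outcome is undetermined

print(compose(trust[("Alice", "Bob")], trust[("Bob", "Carol")]))   # +1
print(compose(trust[("Alice", "Dave")], trust[("Dave", "Eve")]))   # None
```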
To contribute to this debate, we offer an intentional interpretation of the semantics of data, changing the perspective on how (un)trust transitivity should be accounted for, and how agents are supposed to act in view of such assessments. We introduce an overview of the different meanings of trust and untrust in the computational context, considering how they have been influenced by interpretations from the social sciences. Afterwards, we present an account of the
notions of trust, distrust and mistrust that heavily relies on the analysis of the agents' intention
and procedures induced by their beliefs. We define distinct procedural protocols for each of these
notions and explain how enemy mine situations are specific to intentional transmission of false
information. An evaluation based on intentionality criteria can offer an appreciably better solution in many cases when combined with a quantitative and computationally feasible approach, improving conditions for a number of current applications based on simple and straightforward reputation methods, such as web search result ranking algorithms.
Abstract for Marcin J. Schroeder
Title: Structural and Quantitative Characteristics of Information from the Point of View of
Invariance
Abstract: The concept of symmetry, i.e. of invariance with respect to transformations of the object of study, is fundamental for scientific methodology. We can trace its role retroactively in the entire development of physics and other disciplines, even before the mathematical tool for the study of symmetry (group theory) was born. The perspective of symmetry and invariance became a commonly recognized program of mathematical inquiry in the 19th century (the Erlangen Program of Felix Klein), only to become in the next century the main tool of physics (Noether's Theorem), chemistry and biology, and finally to permeate psychology (Jean Piaget) and even cultural anthropology (Claude Lévi-Strauss). The discovery of symmetry breaking in nature marked turning points in physics (e.g. Special Relativity, the unification of physical interactions, etc.).
It is an interesting twist of the early history of information studies that it was invariance with respect to a change of encoding of information from which Ralph Hartley derived the measure of information in his 1928 article marking the beginning of information science. Hartley was aware of the importance of invariance, but because of his conviction that structural aspects of information belong to psychology, not engineering, he disregarded the constraints on transformations. Claude Shannon did not continue Hartley's investigation of invariance; he explored the issue of structural characteristics of texts in terms of conditional probabilities in sequential choices of characters, but without much success. Invariance disappeared from the study of information, dominated by a quantitative methodology focused on entropy, which is fully invariant with respect to arbitrary transformations of the underlying set (alphabet). The attempts of Yehoshua Bar-Hillel and Rudolf Carnap in 1952 to introduce a structural analysis of information in terms of logic did not bring lasting results. It is interesting that the linguistic revolution of Noam Chomsky did not revive interest in this subject. Although René Thom developed his study of information exactly in the spirit of invariance, he distanced himself from orthodox information theory so vigorously that his work did not influence information science.
The first step towards the structural study of information is its conceptualization incorporating both aspects of information – selective and structural. The presentation utilizes the approach developed by the present author in several publications in the last decade, in which information is defined as identification of a variety, i.e. that which makes one out of many. Selection is one mode of identification; a structure imposed on the variety is another. Both manifestations can be formalized mathematically in terms of closure spaces. The set of closed subsets in such a space has the structure of a complete lattice (a generalization of a Boolean algebra), which can be considered a generalization of logic to not necessarily linguistic forms of information. The properties of this logic of information may serve as a basis for the structural analysis of information. For instance, the level of decomposability of the logic into components can be identified with the level of information integration characterizing structural manifestations of information. On the other hand, if this logic admits a probability measure (which is not always possible), we can define quantitative measures of the entropy type to characterize selection (although it may be argued that better measures than entropy are possible).
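As a small illustration of the closure-space formalism (a sketch only; the finite set and the generating rules are invented for the example and are not taken from the author's publications), one can enumerate the closed sets of a closure operator and check that they form a complete lattice:

```python
# Minimal sketch: a closure operator on a finite set, its closed sets,
# and the lattice operations on them.  The generating rules are invented
# for illustration.
from itertools import chain, combinations

X = frozenset({0, 1, 2, 3})
RULES = [({0, 1}, 2), ({2, 3}, 0)]   # "if the premises are present, add the conclusion"

def closure(A):
    """Smallest superset of A closed under RULES (extensive, monotone, idempotent)."""
    A = set(A)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= A and conclusion not in A:
                A.add(conclusion)
                changed = True
    return frozenset(A)

def powerset(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

closed_sets = {closure(A) for A in powerset(X)}

# Meet = intersection, join = closure of the union; both stay inside the
# family, so the closed sets form a complete lattice.
for A in closed_sets:
    for B in closed_sets:
        assert A & B in closed_sets
        assert closure(A | B) in closed_sets

print(sorted(map(sorted, closed_sets)))
```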
The entropy type of measure is related to selection, so it is not a surprise that it is an invariant of a larger class of transformations than those associated with structural manifestations. In the presentation, a new quantitative measure of information integration, m*(L), which is an invariant of structure-preserving transformations, is proposed and its basic properties are reviewed.
Abstract for Paolo Rocchi
Title: What Information to Measure – How to Measure?
Abstract: This talk addresses the official topic of the present workshop, namely information metrics, and means to suggest a new strategy of research.
Information theorists have put forward several different definitions so far. Some of the definitions of information are more technical in nature, others are more abstract and broad-based. The concept of information is still fluid.
While theorists have not reached a definitive solution, as a matter of fact engineers build astonishing digital appliances. Other scholars have found intriguing informational phenomena in Nature: for instance, biologists have discovered and decoded the DNA of several living beings. I wonder:
What culture or what notions do these productive actions sustain?
I have observed that professionals and researchers coming from various domains exploit the semiotic scheme, which they use intuitively. They adopt the concepts of signifier and signified in a pragmatic manner and without using semiotic terminology. Shannon himself uses the notions of signifier and signified, without naming them, in his seminal work.
In conclusion, we cannot say what information is and consequently we cannot measure it, but we could follow a novel pragmatic vein of research. Given the popularity of the semiotic scheme, we could study the notions of signifier and signified from a scientific-mathematical perspective and measure the semiotic elements.
This talk does not develop a purely programmatic account; I have made some steps in this innovative direction of research.
I suggest a formal definition of the signifier by means of an inequality. Several measures can be derived from this inequality; in particular, I have demonstrated that the inequality lies at the base of information technology even though it is used only intuitively. In addition, the present theory can answer some vexed philosophical questions and can demonstrate in formal terms that:
#1 Digital signals are perfect, whereas analog signals are fuzzy.
#2 The perception-fallacy conundrum is an ill-posed argument and therefore has many solutions.
#3 Information is a relativistic concept.
#4 'Nothing' is a potential vehicle of information and does not deny information physicism.
The proposed formal solution offers benefits to practitioners and philosophers of information alike.
Abstract for J. Michael Dunn
Title: Relevance Logic as an Information-Based Logic
Abstract: There are obviously connections between formal logic and database theory, and hence
between formal logic and information processing. Unfortunately these connections, while in one sense obvious, are in a deeper sense not as well understood and utilized as one might naively expect they should be. There are many reasons for this, but one is surely that the concept of information is somewhat buried in the semantics (model theory) of classical logic. Here we shall explore information-based approaches to one of the best-known non-classical logics, variously called relevance logic or relevant logic. From the beginnings of relevance logic there has been much controversy about its semantics. First there was the complaint that it had none, and then the complaint that it had one, in particular the so-called "Routley-Meyer semantics" for relevance logic, which used the novelty of a ternary accessibility relation in rough analogy to the binary relation that Kripke used in his semantics for modal logic. I shall explore the idea that the terms of the relation are information states, and that Rabc can be understood in one of the following three ways:
1. Information Combining Interpretations: the piece of information a when combined with b
equals c (Urquhart) or is included in c (Fine).
2. Computational Interpretation: view information state a as "input" and view the information
state b as a stored program. Information state c is a potential result of running the program on
that input (Dunn).
3. Program Combining Interpretation: view information states a and b both as stored programs,
and view the result of composing these two programs as equal to (or included in) the information
state which is the stored program c (Dunn).
Informational interpretation 1, in Fine's version, was shown to be sound and complete with respect to the logic of relevant implication R. Informational interpretations 2 and 3 give different but related logics.
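A toy rendering of interpretation 1 in its inclusion form (purely illustrative, with information states modelled as finite sets of atomic facts; not a faithful reconstruction of the Routley-Meyer semantics or of Urquhart's and Fine's constructions):

```python
# Toy model of the ternary relation Rabc under the "information combining"
# reading: states are finite sets of atomic facts, combining is union, and
# Rabc holds when the combination of a and b is included in c.
# Illustrative only.
from itertools import combinations

ATOMS = ("p", "q", "r")

def states():
    """All information states over the atomic facts."""
    return [frozenset(s) for r in range(len(ATOMS) + 1)
            for s in combinations(ATOMS, r)]

def combine(a, b):
    """Combination of two information states (here: set union)."""
    return a | b

def R(a, b, c):
    """Rabc: the combination of a and b is included in c."""
    return combine(a, b) <= c

a, b = frozenset({"p"}), frozenset({"q"})
successors = [c for c in states() if R(a, b, c)]
print([sorted(c) for c in successors])   # exactly the states containing p and q
```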
Abstract for Jeff Racine
Title: Five Top Challenges in Philosophy of Information and Information Processing
Abstract:
• there has been a preoccupation with moment-based inference (which can lead to inconsistent inference, e.g., two distributions may have the same mean yet differ almost everywhere; see the sketch after this list)
• one challenge is to educate practitioners on the benefits of using entropic measures of
divergence
• implementation of entropic measures raises questions of ill-posedness, e.g., the PDF is the derivative of the CDF, yet uniformly consistent estimators such as the ECDF cannot be plugged into the definition of the density based on a derivative (this constitutes an "ill-posed problem")
• kernel smoothing techniques have emerged as a popular means of overcoming the ill-posed
problem
• however, this imparts information on the resulting entropic measures of divergence
• so a related challenge is how best to go beyond moment-based inference while quantifying the
amount of information imparted on the entropic measure by kernel smoothing
• so, to summarize, ideally we educate practitioners to move beyond restrictive moment-based inference, but perhaps also maximize the mutual information between the non-smooth empirical distribution and the smooth distributions made necessary by the ill-posedness of certain problems
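As a minimal sketch of the first point (illustrative only; the particular distributions, sample size, grid and bandwidth are arbitrary choices): two distributions with the same mean can be far apart in an entropic sense, and estimating that divergence already requires the kind of smoothing discussed above.

```python
# Two distributions with the same mean that differ almost everywhere:
# a comparison of means sees no difference, while a KDE-based estimate
# of KL divergence does.  Sketch only; all tuning choices are arbitrary.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
n = 5000

x = rng.normal(0.0, 1.0, n)                           # unimodal, mean 0
y = rng.normal(0.0, 1.0, n) + rng.choice([-2.0, 2.0], n)  # bimodal, mean 0

p_hat = gaussian_kde(x)        # kernel smoothing to obtain densities
q_hat = gaussian_kde(y)

grid = np.linspace(-8, 8, 2001)
dx = grid[1] - grid[0]
p = np.maximum(p_hat(grid), 1e-12)
q = np.maximum(q_hat(grid), 1e-12)

kl_pq = np.sum(p * np.log(p / q)) * dx   # numerical KL divergence estimate
print(f"means: {x.mean():.3f} vs {y.mean():.3f}")   # nearly identical
print(f"estimated KL(p||q): {kl_pq:.3f}")           # clearly positive
```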
Abstract for Ximing Wu
Title: Information-Theoretic Assessment of Copula-based Analysis
Abstract: Joint distributions of multivariate random variables can be expressed in terms of their marginal distributions together with a function that couples them, leading to the copula representation. Copula functions completely summarize the dependence among random variables. Copula-based approaches have become popular, partly because they separate the marginal distributions from the dependence structure. This note cautions that the advantage of copula-based analyses hinges on the simplicity of the copula. If a joint distribution is complicated while its copula function is simple, estimation can benefit from transformation into the copula domain. In contrast, in the presence of complicated copula functions, the costs of copula estimation may outweigh its benefits. We note that mutual information can be interpreted as copula entropy. Furthermore, copula entropy is also the relative entropy between a copula density and the uniform density, which reflects the complexity of a copula and hence the difficulty of its estimation. Consequently, information theory provides a natural framework for assessing the suitability of copula-based analyses.
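As a minimal numerical sketch of the observation that mutual information is (the negative of) copula entropy (illustrative only; the bivariate Gaussian example, the rank transform and the kernel density estimator are arbitrary choices, and boundary bias on the unit square is ignored):

```python
# Sketch: mutual information as negative copula entropy for a bivariate
# Gaussian sample.  Pseudo-observations come from a rank transform, the
# copula density is estimated by a Gaussian KDE, and the sample average of
# its log is compared with the analytic Gaussian mutual information.
import numpy as np
from scipy.stats import gaussian_kde, rankdata

rng = np.random.default_rng(2)
n, rho = 20000, 0.6

cov = [[1.0, rho], [rho, 1.0]]
xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# Rank transform to pseudo-observations on (0, 1): the copula domain.
u = rankdata(xy[:, 0]) / (n + 1)
v = rankdata(xy[:, 1]) / (n + 1)

# KDE of the copula density, evaluated at the pseudo-observations.
c_hat = gaussian_kde(np.vstack([u, v]))
log_c = np.log(np.maximum(c_hat(np.vstack([u, v])), 1e-12))

mi_estimate = log_c.mean()                   # E[log c(U,V)] = -copula entropy
mi_analytic = -0.5 * np.log(1.0 - rho ** 2)  # Gaussian mutual information

print(f"copula-entropy MI estimate (rough): {mi_estimate:.3f}")
print(f"analytic Gaussian MI:               {mi_analytic:.3f}")
```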