Program in detail

Program Index

Tuesday, September 11

Satellite Workshop
Track: Modeling Financial Systems
Lecture Hall CAB G 11

9.00 - 10.00, Rosario Mantegna, University of Palermo, Italy
EPJ Data Science Lecture: Stylized Facts in the Credit Market
I will briefly discuss some empirical observations about the flow-of-funds dynamics observed during the so-called period of “the great moderation” and will consider the degree of interrelations observed among different financial markets and different venues of the same market. My last focus will be on the credit market, which is the market directly linking the financial system to the real economy. Specifically, I will present some empirical results observed in the credit and investment relationships between banks and non-financial companies.
10.30 - 11.00, Imre Kondor, Parmenides Foundation, Germany
Strong random correlations in networks of heterogeneous agents
Slides
Correlations in a schematic model of binary agents (voting yes or no, trading or inactive, etc.) are considered. The agents are placed at the nodes of a network and they collaborate or compete with each other according to a fixed set of positive or negative links. They may also be subject to some external influence equally impacting each of them, and to some random noise. We study this system by running numerical simulations of its stochastic dynamics. A microscopic state of the system is a vector with binary components, describing the actual state of each agent. The totality of these vectors spans the "phase space" of the system. Under the dynamics the state vector executes a random walk in phase space. At high noise levels the system has a single attractor with a broad basin of attraction. As the noise level is lowered, the heterogeneous interactions between the agents come to the fore and divide phase space into typically several basins of attraction. For small system sizes it is possible to completely map out the attractors and the low-lying states belonging to their basins. This map defines a graph in phase space, and we study the random walk of the system on this graph. At low noise levels the system will spend a long period in the immediate vicinity of one of the attractors until it finds a low saddle point along which it escapes, only to be trapped in the basin of the next attractor. The dynamics of the system will thus be reminiscent of the punctuated-equilibrium type of evolution of biosystems or human societies. It is clear that evolution in such a landscape will depend on the initial condition, but the landscape itself will be extremely sensitive to details of the concrete distribution of interactions, as well as to small shifts in the values of the noise or the external field. The evolution is so slow that one can meaningfully speak of some quasi-equilibrium while the system is exploring the vicinity of one or the other attractor. Performing measurements of correlations in such a quasi-equilibrium state we find that (due to the heterogeneous nature of the system) these correlations are random both as to their sign and absolute value, but on average they fall off very slowly with distance. This means that the system is essentially non-local: small changes at one end may have a strong impact at the other, and small changes in the boundary conditions may influence the agents even deep inside. These long-range, random correlations tend to organize a large fraction of the agents into strongly correlated clusters that act together and behave as if they were occupying a complete graph where every agent interacts with every other one. If we think about this model as a distant metaphor of economic agents or bank networks, the systemic risk implications of this tendency are clear: any impact on even a single agent will spread, in an unforeseeable manner, to the whole system via the strong random correlations.
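As a rough illustration of the dynamics described above, the following minimal sketch (not the authors' code) runs Glauber-type, noise-driven updates of binary agents on a random network of fixed positive and negative links; the network density, noise level and field value are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50                                       # number of agents
J = rng.choice([-1.0, 1.0], size=(N, N)) * (rng.random((N, N)) < 0.1)
J = np.triu(J, 1); J = J + J.T               # fixed, symmetric random +/- links
h = 0.0                                      # common external influence
T = 0.5                                      # noise level

s = rng.choice([-1, 1], size=N)              # binary state of each agent

def sweep(s):
    """One asynchronous sweep: each agent flips with a noise-dependent probability."""
    for i in rng.permutation(N):
        local = J[i] @ s + h                 # net push from neighbours and external field
        p_up = 1.0 / (1.0 + np.exp(-2.0 * local / T))
        s[i] = 1 if rng.random() < p_up else -1
    return s

states = np.array([sweep(s).copy() for _ in range(2000)])
corr = np.corrcoef(states[1000:].T)          # pairwise correlations in quasi-equilibrium
print("mean |correlation|:", np.nanmean(np.abs(corr[np.triu_indices(N, 1)])))
```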
11.00 - 11.30, Giulia Iori, City University London, UK
Network analysis of the e-Mid interbank market: implications for systemic risk
Coauthors: Vasilis Hatzopoulos and Giulia Iori.
Slides
In this paper we examine the temporal evolution of the e-Mid interbank market transactions to gain understanding of the behaviour of network structural measures at or near events that were considered pivotal in the 2007-2008 credit crisis. Simple analysis of fundamental properties of the time-ordered set of networks defined over non-overlapping maintenance periods indicates a shrinking market size. The number of nodes (banks) present per maintenance period, the number of edges, the edge density and the average degree in the system continued to shrink at a roughly constant rate. In order to compare the evolution of various network metrics when the underlying network size changes over time, it becomes crucial to define appropriate network null models against which the statistical significance of a number of early warning indicators can be assessed. Given the directed and weighted nature of our connections we construct a randomised ensemble of networks using the edge swap procedure, but conserving the vertex in-out strength sequence rather than the in-out degree sequence. Each weighted directed edge with weight $w_{uv}$ is further inserted $w_{uv}-1$ times in the network and all edges have their weights set to $1$. The resulting multigraph is then rewired as a directed unweighted graph where each edge now indicates a single transaction and the number of edges between $u$ and $v$ corresponds to their number of transactions. The rewired multigraph is then collapsed to a directed weighted graph via the reverse procedure (i.e. all $m$ directed and unweighted edges between $u$ and $v$ are collapsed into a single edge with weight $m$) and the quantities of interest are then computed on this final graph. Note that as it stands this process only works for graphs with integer weights. We observe that all our network statistics show a non-trivial evolution in time. The networks have a trend of decreasing size and number of edges while at the same time the number of reachable pairs increases, with the fraction of reachable pairs remaining roughly constant. All the Pearson degree-degree and strength-strength correlation coefficients seem to increase over time. The directed unweighted clustering coefficients all show a decrease in time and take values at or below the random expectation. The middle clustering coefficient, with which a higher level of systemic risk is normally associated, dominates the others in magnitude. We also monitor some centrality measures that reveal the importance of a vertex in terms of channelling the flow of liquidity. As the number of edges decreases, closeness centrality goes down. On the other hand we observe a corresponding increase in betweenness centrality. The tendency towards preferential trading, as measured by the participation ratio, increases, especially so during the crisis, showing a tendency for banks to trade with particular partners.
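The strength-conserving randomisation described above can be sketched as follows; this is an illustrative reconstruction under the stated assumption of integer weights, with an added (optional) guard against self-loops, and all names are ours.

```python
import random
from collections import Counter

def strength_preserving_rewire(weighted_edges, n_swaps=10000, seed=1):
    """weighted_edges: dict {(u, v): w} with integer weights w.
    Expand to a multigraph of unit edges, randomise by target swaps
    (preserving every node's in- and out-strength), then collapse back."""
    rng = random.Random(seed)
    # each transaction becomes one unit edge
    multi = [(u, v) for (u, v), w in weighted_edges.items() for _ in range(w)]
    for _ in range(n_swaps):
        i, j = rng.randrange(len(multi)), rng.randrange(len(multi))
        (u1, v1), (u2, v2) = multi[i], multi[j]
        if u1 == v2 or u2 == v1:             # avoid creating self-loops
            continue
        multi[i], multi[j] = (u1, v2), (u2, v1)
    # collapse parallel unit edges back into integer weights
    return dict(Counter(multi))

null = strength_preserving_rewire({("A", "B"): 3, ("B", "C"): 2, ("C", "A"): 1})
print(null)
```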
11.30 - 12.00, Mario Eboli, Università 'G d'Annunzio', Italy
A flow network analysis of direct balance-sheet contagion in financial networks
Slides
In this paper we put forward a novel approach, based on the theory of flow networks, for the analysis of direct contagion in networks of agents connected among themselves by financial obligations. Such financial systems are here represented as flow networks (i.e., directed and weighted graphs endowed with source nodes and sink nodes) and the diffusion of losses and defaults is here represented as a flow. Using the properties of network flows, we analyse the dynamics of the flows of losses that cross a financial flow network and obtain four sets of results. First, we address a known problem of indeterminacy that arises, in financial networks, from the intercyclicity of payments. We establish necessary and sufficient conditions for the uniqueness of clearing intercyclical payments and use these conditions in an algorithm that, while computing the contagion process, controls for the occurrence of possible indeterminacies. Second, we investigate the relation between the shape of a financial network and its exposure to default contagion. We characterise first and final contagion thresholds (i.e., the value of the smallest shock capable of inducing default contagion and the value of the smallest shock capable of inducing the default of all agents in the network, respectively) for different network shapes, namely the complete, star-shaped, incomplete regular, and cycle-shaped networks. We find that the class of incomplete regular networks (which includes the cycle-shaped ones), compared to the classes of complete and star-shaped networks, is more exposed to episodes of contagion due to shocks of small magnitude and scope, and less exposed to the risk of complete system melt-downs. Third, we find that the ratio between the external debt of the agents in a network (i.e. the debt towards claimants who do not belong to the network, such as households) and their internal debt (i.e. the debt towards other agents in the network) determines the exposure of the network to contagion. Ceteris paribus, the larger the ratio between the intra-network exposures and the external debt of the agents in a network, the more the network is exposed to default contagion, both in terms of contagion thresholds and of scope of contagion, i.e. the number of defaults due to an external shock. Fourth, we study the distribution of losses between shareholders and debtholders. We find that: i) the larger the ratio between the intra-network obligations and the external debt of the financial intermediaries in a network, the larger the losses borne by their shareholders as a whole, in case of a contagion crisis, and the smaller the losses suffered by their external debtholders as a whole; ii) for any given shock, the losses borne by shareholders (debtholders) are larger (smaller) in complete networks than in incomplete networks.
14.00 - 14.30, Fulvio Corsi, Scuola Normale Superiore, Italy
Financial innovation, leverage, and diversification
Coauthors: Fabrizio Lillo, Stefano Marmi
We propose a simple model able to reproduce the procyclical dynamics of assets, leverage, and diversification and their effect of increasing the systemic fragility of the financial system during the boom period of the financial cycle. We consider the joint effect of (i) financial innovation, which allows more efficient diversification of risk, and (ii) leverage, which, by amplifying financial shocks, allows the expansion of the balance sheets of financial intermediaries. More specifically, we focus on the balance sheet amplification due to mark-to-market accounting rules and VaR constraints (arising both from capital requirements and margins on collateralized borrowing) which determine a procyclical behavior of financial intermediaries: an increase in asset prices relaxes the VaR constraint, permitting an expansion of balance sheets (see Adrian and Shin 2009). Balance sheet amplification due to VaR constraints thus induces a perverse demand function, creating positive feedback effects between asset prices and balance sheet sizes whose strength increases with the degree of leverage. In our model, financial intermediaries, which face costs of diversification and VaR constraints, choose the optimal leverage which maximizes the returns of an equally weighted portfolio. When the costs of diversification are high, the degree of diversification is small and thus the portfolios of the financial institutions tend to be heterogeneous. Because of this heterogeneity in portfolios and in profits and losses, the individual amplification effects coming from VaR constraints remain uncoordinated. The introduction of financial innovation in our model has several important consequences. First, a financial innovation which reduces the cost of diversification, by increasing the optimal level of diversification, reduces the volatility of the portfolio, which in turn increases the leverage of the institution. Second, by increasing diversification, the overlap in the portfolios of the different financial institutions becomes larger, increasing the "similarity" of the portfolio choices among the investors. Moreover, having diversified away a larger part of the idiosyncratic risk while remaining exposed to the common undiversifiable component increases the correlation among portfolios. Third, by increasing the leverage, the individual exposure to the undiversifiable macro factor risk increases; i.e., although each individual is more resilient to idiosyncratic shocks, they become more sensitive to shocks in the macro factor. As a consequence, individual reactions in terms of asset demands will be more aggressive (due to higher leverage) and more coordinated (because of the larger correlation in the profit-and-loss realizations). This rise in the strength and coordination of the individual reactions makes it more likely to have aggregate feedbacks in which the rise of the price of some asset leads to an excess of equity (via the realized capital gains) and, hence, to an expansion of balance sheets driving new demand for the asset, which pushes the price up, and so on. Finally, endogenizing the dynamics of financial innovation reinforces this type of feedback even further. This research is funded by the CRISIS project.
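The balance-sheet amplification loop described above (higher prices, higher mark-to-market equity, a relaxed VaR-type constraint, further purchases) can be illustrated with a stylised sketch; the leverage target, price-impact coefficient and starting balance sheet are invented for illustration and are not the paper's calibration.

```python
# Stylised leverage-targeting feedback with mark-to-market equity (illustration only).
price, shares, debt = 1.00, 100.0, 80.0    # illustrative starting balance sheet
target_leverage = 5.0                       # assets / equity allowed by the VaR-type constraint
impact = 0.0005                             # illustrative linear price impact per share bought

price *= 1.02                               # initial positive price shock
for step in range(5):
    assets = price * shares
    equity = assets - debt
    desired_assets = target_leverage * equity      # balance sheet permitted by the constraint
    demand = (desired_assets - assets) / price     # shares bought to restore target leverage
    shares += demand
    debt += demand * price                         # purchases funded by new borrowing
    price *= 1 + impact * demand                   # demand feeds back into the price
    print(f"step {step}: price={price:.4f}, equity={equity:.2f}, demand={demand:.2f}")
```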
14.30 - 15.00, Yoshi Fujiwara, University of Hyogo, Japan
Chained Financial Failures at Nation-wide Scale in Japan
Coauthors: Hideaki Aoyama (Kyoto University)
Slides
We will talk about recent studies, based on real data, of the propagation of financial failures at nation-wide scale in Japan, both in past financial crises and in the present one due to the earthquake. Leading credit research agencies in Tokyo and Nikkei have accumulated a huge amount of data on bank-firm and supplier-customer links with financial information and failures of nodes. By using these large-scale data, we measure the propagation of financial distress that actually occurred on real large-scale economic networks comprising firms, banks, and their relationships, numbering on the order of millions and more. Exogenous shocks due to the global financial crisis and mass destruction by disasters such as earthquakes cause propagation resulting in a sluggish relaxation, typically observed as an Omori law. We shall focus on this aspect as well as its possible mechanism.
15.00 - 15.30, Tarik Roukny, IRIDIA - Universite Libre de Bruxelles, Belgium
Assessing the role of topology in the emergence of systemic risk in financial networks
Coauthors: Stefano Battiston, Hugues Bersini, Hugues Pirotte
Slides
We model a financial system where each agent can borrow from and lend money to other agents inside and outside the system. If a severe shock occurs, a market with such an interconnected profile is known to be prone to contagion and default cascades. As in other natural systems, there exist different ways for the shock to propagate. Here, we implement two types of contagion sources. The first one is rather mechanical as it merely considers the creditor-debtor ties between agents. If a borrowing agent defaults, her creditors will suffer a loss on their assets which might exceed their financial robustness and thus lead them to default as well. If these latter agents are borrowing from others, their fall might propagate the shock throughout the system by triggering the same phenomenon. On top of this process, the second source introduces illiquidity and panic runs. In fact, in times of financial distress, an agent might find herself in a situation where, facing some counterparties' default, she needs to reimburse some of her lenders who are worried about her capacity to bear the losses. In order to fulfill her debts, she might be forced to sell some of her assets, including illiquid ones. Due to these fire sales, she will typically endure a price fall. In some situations, this subsequent loss can be critical and cause her default. While the former contagion process is conservative, this latter scenario models amplification of the distress propagation dynamics within the market. Taking these considerations into account, we analyze the impact that the market's structure (i.e. the way lending ties are distributed among the agents) can have on the financial system's stability as a whole, i.e. its systemic risk. Viewing the system as a directed and weighted network where nodes are financial agents and edges represent lending relationships from lenders to borrowers, we relate our study to considerations from the field of network theory. More precisely, we are interested in assessing the role of the degree distribution (i.e., the heterogeneity in the number of counterparties each agent has) on the system's robustness by comparing binomial (i.e. Erdos-Renyi networks) and power-law (i.e. scale-free networks) distributions under different scenarios. We show that these different topologies provide different profiles depending on the state of the system. More precisely, we emphasize how liquidity is an important aspect to take into account when trying to assess how stable a market is. As it appears, the structural role of an institution in terms of lending connections can be of a different nature depending on the system's level of liquidity. Moreover, we analyze the robustness of each topology when correlations exist between the financial robustness and the degree of each agent and when shocks become targeted. As a general result, we argue that no single structure emerges as the optimal solution for all situations. Hence, regulators should not neglect the complex role of markets' underlying topology when designing systemic risk policies.
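The first, counterparty-loss channel described above can be sketched as a simple cascade on a made-up exposure network (full loss given default is assumed, and the fire-sale channel is omitted):

```python
# Counterparty-loss cascade on a directed exposure network (lender -> borrower amounts).
exposures = {                      # illustrative interbank loans: lender -> {borrower: amount}
    "A": {"B": 5.0, "C": 3.0},
    "B": {"C": 4.0},
    "C": {"D": 6.0},
    "D": {},
}
equity = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0}

defaulted = {"D"}                  # initial shock: D fails
changed = True
while changed:
    changed = False
    for lender, loans in exposures.items():
        if lender in defaulted:
            continue
        # full loss given default on every exposure to a defaulted borrower
        loss = sum(a for borrower, a in loans.items() if borrower in defaulted)
        if loss >= equity[lender]:             # losses wipe out the capital buffer
            defaulted.add(lender)
            changed = True
print("defaulted banks:", sorted(defaulted))
```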
16.00 - 16.30, Irena Vodenska, Boston University, USA
Cascading Failures in Bi-partite Graphs: Model for Systemic Risk Propagation
Coauthors: Irena Vodenska, Xuqing Huang, Shlomo Havlin, H. Eugene Stanley
Slides
Understanding the systemic risk of complex networks is vital for the robust design and sustainability of such networks. For example, economic systems are becoming more interconnected, and certain exogenous or endogenous shocks can provoke cascading failures throughout the system that might be difficult to remedy or may cripple the system for a prolonged period of time. This concept is vital for policy makers seeking to create and implement safety measures that can halt such cascading failures or soften their impact on the overall system. The wide spread of the current EU sovereign debt crisis and the 2008 world financial crisis indicate that financial systems are nonlinear, characterized by complex relations among financial institutions, through which severe crises can spread dramatically. Thus it becomes very important and necessary to study the systemic risk of financial systems as complex networks. We study the United States commercial banks' balance sheet data from 1976 to 2008. We construct a bi-partite banking network that is composed of banks on one side and bank assets on the other. We propose a cascading failure model to simulate the crisis spreading process in such networks. We introduce a shock into the banking system by reducing a specific asset value and we monitor the cascading effect of this value reduction on banks and other assets. Furthermore, we test our model using the 2007 data to analyze the empirically failed banks, and find that, for specific realistic parameters, our model simulates the crisis spreading process well and identifies a significant portion of the actual failed banks from the FDIC failed bank database from 2008 to 2011. We suggest that our model for systemic risk propagation might be applicable to other complex systems, e.g. to model the effect of sovereign debt value deterioration on the global banking system or the impact of the depreciation or appreciation of certain currencies on the world economy.
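An illustrative sketch of a bank-asset cascade of the kind described above follows; the holdings matrix, the fire-sale devaluation rule and all parameters are ours, not the authors' calibration:

```python
import numpy as np

# rows = banks, columns = assets; entry = holding of that asset (illustrative numbers)
holdings = np.array([[5.0, 3.0, 0.0],
                     [0.0, 4.0, 2.0],
                     [2.0, 0.0, 4.0]])
equity = np.array([1.0, 1.5, 1.2])
asset_value = np.ones(3)            # all asset prices start at 1

asset_value[1] *= 0.6               # shock: asset 1 loses 40% of its value
alive = np.ones(3, dtype=bool)

while True:
    losses = holdings @ (1.0 - asset_value)          # mark-to-market loss per bank
    newly_failed = alive & (losses >= equity)
    if not newly_failed.any():
        break
    alive &= ~newly_failed
    # failed banks' holdings are liquidated, depressing those assets further
    sold = holdings[newly_failed].sum(axis=0)
    total = holdings.sum(axis=0)
    asset_value *= np.where(total > 0, 1.0 - 0.5 * sold / total, 1.0)

print("surviving banks:", np.where(alive)[0].tolist())
```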
16.30 - 17.00, Ben Craig, Federal Reserve Bank of Cleveland, USA
Interbank Tiering and Money Center Banks
Coauthors: Goetz von Peter
Slides
This paper provides evidence that interbank markets are tiered rather than flat, in the sense that most banks do not lend to each other directly but through money center banks acting as intermediaries. We capture the concept of tiering by developing a core-periphery model, and devise a procedure for fitting the model to real-world networks. Using Bundesbank data on bilateral interbank exposures among 1800 banks, we find strong evidence of tiering in the German banking system. Moreover, bank-specific features, such as balance sheet size, predict how banks position themselves in the interbank market. This link provides a promising avenue for understanding the formation of financial networks.
17.00 - 17.30, Delio Panaro, Università di Pisa, Italy
Credit Market in an Agent-Based Model of Endogenous Growth with Locally Interacting Agents
Coauthors: Mastrorillo, M., Ferraresi, T., Fagiolo, G. and Roventini, A.
Slides
The analysis of the determinants of the sources of self-sustained growth has received several contributions from both 'Endogenous Growth' and 'Evolutionary' models (e.g. Romer, 1990; Nelson and Winter, 1982). Furthermore, many attempts have been made to figure out the contribution to growth provided by credit institutions (e.g. Aghion, Angeletos and Manova, 2010; Aghion, Banerjee and Piketty, 1999; King and Levine, 1993). According to the Modigliani-Miller theorem, the optimal level of investment for a firm is independent of its financial structure and, at the margin, the price for investment in R&D and any other type of investment would be the same; in the presence of information asymmetries, however, external financing of an investment is not a perfect substitute for internal financing. With the present work we build on the existing literature about endogenous growth in an agent-based framework by introducing the accumulation of resources and a credit market into the model designed by Fagiolo and Dosi (2003) and Fagiolo (2000), in which there are three different types of agents: "miners", who extract/produce a homogeneous product that we can think of as GDP; "imitators", who abandon their old technology to adopt the one used by other agents; and "explorers", who are involved in R&D activity. In this way we contribute to the debate within the AB community on the need for comparability among different ABMs. The aim of the paper is to investigate how the patterns of imitation/exploration activated by the agents are modified by the introduction of a realistic credit market populated by boundedly rational financial intermediaries. In our version of the model, part of the output produced by the miners is consumed, while the remainder, which we can think of as saving, is stored in a credit institution. The productivity of each island can be tuned to be increasing or decreasing in the number of agents stationed on it. In order to allow for heterogeneity of credit institutions, we work with different specifications of the model: in the first, on each island there is a small savings bank which collects the savings of the agents mining the island; in the second, there is a unique big commercial bank that collects the savings of all agents. Both the savings banks and the big commercial bank follow the same rules to fund imitation/exploration; however, they are provided with different information about the distribution of the islands on the map. As in the original model, we run Monte Carlo simulations for some sets of parameters and compute statistical indexes summarizing the system's final results. We pay particular attention to the sources of self-sustained growth and study how different credit environments affect both macroeconomic growth and the patterns of imitation/exploration activated by the agents. We expect the organization of the credit market to influence economic growth. In particular, the different sets of information available to different credit institutions, as well as credit fractioning, will strongly influence macroeconomic performance and the emerging phenomena of imitation/exploration.
Track: Agent-Based Modeling in Economics
Lecture Hall CAB G 61

9.00 - 10.00, Giovanni Dosi, SSSUP Pisa, Italy
Wage formation, Investment Behavior and Growth Regimes: An Agent-Based Analysis
Slides
Using the "Keynes & Schumpeter" (K+S) agent-based model developed in Dosi et al. (2010) and Dosi et al. (2012), we study how the interplay between firms' investment behavior and income distribution shapes the short- and long-run dynamics of the economy at the aggregate level. We study the dynamics of investment under two different scenarios: one in which investment is fully determined by past profits, and one in which investment is tied to expectations about future consumption demand. We show that, independently of the investment scenario analyzed, the emergence of steady growth with low unemployment requires a balance in the income distribution between profits and wages. If this is not the case, the economy gets locked either into stagnation equilibria or into growth trajectories displaying high volatility and unemployment rates. Moreover, in the demand-led scenario we show the emergence of a non-linear relation between real wages and unemployment. Finally, we study whether increasing degrees of wage flexibility are able to restore growth, lower unemployment, and reduce volatility in the economy. We show that this is indeed the case only when investment is profit-led. In contrast, in the scenario where investment is driven by demand expectations, wage flexibility has no effect on either growth or unemployment. In turn, this result casts doubts on the ability of wage-flexibility policies to stabilize the economy.
10.30 - 11.00, Paul Ormerod, Volterra Partners, UK
Network effects on decisions among many similar choices
Coauthors: Alex Bentley and Bassel Tarbush
Slides
Current social network models often focus on the dynamics of popularity and how 'superior' alternatives come to the fore (e.g. [1, 2, 3]). This is relevant to modern consumer markets and the diffusion of innovation and technology [4]. Addressing this question in a novel way, Lieberman et al. [3] used an established evolutionary model (the Moran model) to show that the probability that a new, better invention becomes widely adopted depends fundamentally on the social network structure. Hierarchical networks did a better job at ensuring that inherently superior options would be adopted across the population [3]. Such models invoke binary choices where one option is identifiably superior [e.g. 3, 5]. Here we adapt a related evolutionary model (the Wright-Fisher model) in which all options are inherently indistinguishable, and in every period every agent either chooses from previously available options, with probability 1 - μ, or else invents something entirely new, with probability μ. The inability to distinguish between the attributes of alternatives is a realistic feature of many modern markets, given the stupendous proliferation of choice which exists [6]. Because there are no fitness advantages in our model, we explore not the correlation between fitness and probability of success [3], but rather how hierarchical network structure affects several patterns observable in real-world data: the highly non-Gaussian distribution of choices ranked by popularity and the distribution of lifespans of the choices among the top popularity rankings. We ran this process on four distinct large networks that match those studied by Lieberman et al. [3] which, from the least hierarchical to the most hierarchical, were: (a) a square lattice, where agents copy other agents near them on a grid, (b) a fully connected network, where each agent can copy any other agent, (c) a 'meta-funnel' network [2], where the structure funnels out from a central agent, and (d) a 'superstar' network [2], where agents are grouped and a central agent is connected to all groups. We find that the more hierarchical the network, the more right-skewed the distributions of choices ranked by popularity become, culminating in a winner-take-all distribution for the superstar network. Also, in non-hierarchical networks, the distributions of lifespans among the top 100 are log-normal or even power law, whereas in hierarchical networks a short-lived majority contrasts markedly with a very long-lasting winner. These results confirm that choices spread more rapidly and widely in more hierarchical networks. The more surprising result is that a clear winner emerges in hierarchical networks even without any inherent superiority. This therefore relates to the Achlioptas process [7] for sudden coalescence into sparse hierarchical networks, as has been observed for Wikipedia [8]. Our paper demonstrates that this seems to be a very general principle of social network markets. It is a feature of such markets regardless of whether agents are able to distinguish between the objective attributes of competing alternatives. References: 1. Zhao, Z. et al. (2010). Physical Review E 81(5): 056107. 2. Leskovec, J. et al. (2009). Proc. 15th ACM SIGKDD: 497-506. 3. Lieberman, E. et al. (2005). Nature 433: 312-6. 4. Antonelli, C., ed. (2011). Handbook on the Economic Complexity of Technological Change (Edward Elgar). 5. Watts, D.J. (2002). PNAS 99: 5766-71. 6. Beinhocker, E. (2007). Origins of Wealth. 7. Achlioptas, D. et al. (2009). Science 323: 1453-5. 8. Bounova, G. (2009). Topological Evolution of Networks. PhD Thesis, MIT.
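A minimal sketch of the copy-or-invent dynamics described above, shown here only for the fully connected case (the hierarchical topologies studied in the paper are not reproduced); the population size, μ and the number of periods are illustrative:

```python
import random
from collections import Counter

random.seed(0)
N, mu, steps = 200, 0.02, 500
choices = list(range(N))           # every agent starts with its own option
next_label = N

for _ in range(steps):
    new = []
    for _ in range(N):
        if random.random() < mu:   # invent something entirely new
            new.append(next_label)
            next_label += 1
        else:                      # copy a randomly chosen agent (fully connected case)
            new.append(random.choice(choices))
    choices = new

popularity = Counter(choices).most_common()
print("top 5 options by popularity:", popularity[:5])
```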
11.00 - 11.30, Ling Feng, National University of Singapore, Singapore
Linking agent-based models and stochastic models of financial markets
Coauthors: Ling Feng, Baowen Li, Boris Podobnik, Tobias Preis, and H. Eugene Stanley
Slides
We carry out a study to quantitatively link agent-based modeling to stochastic modeling. After reviewing selected empirical and theoretical evidence documenting the behavior of traders, we construct an agent-based model to quantitatively demonstrate that "fat" tails in return distributions arise when traders share similar technical trading strategies and decisions. Extending our behavioral model to a stochastic model, we derive and explain a set of quantitative scaling relations of long-term memory from the empirical behavior of individual market participants. Our analysis provides a behavioral interpretation of the long-term memory of absolute and squared price returns: they are directly linked to the way investors evaluate their investments by applying technical strategies at different investment horizons, and this quantitative relationship is in agreement with empirical findings. Our approach provides a possible behavioral explanation for stochastic models for financial systems in general and provides a method to parameterize such models from market data rather than from statistical fitting.
11.30 - 12.00, Yu Chen, The University of Tokyo, Japan
Informed Local Managers and the Global Financial Crisis
Coauthors: Tomoya Hasegawa and Hirotada Ohashi
Slides
The global financial crisis (GFC) is specifically defined, in the current study, as the synchronized crashes of stock indices in multiple countries. Recently, agent-based models have been applied to study the GFC [1], emphasizing the role played by so-called global managers. The base model was proposed by Friedman and Abraham [2] and is characterized by the inclusion of gradient dynamics for a continuous tuning of the trading strategy, as well as a risk aversion factor for the calculation of risk cost. In order to simulate the GFC, two modeled markets are coupled through the introduction of specific agents, namely portfolio managers who simultaneously invest in both markets. We introduce in our study two factors which reflect the characteristics of the GFC. The first one, similar to the previous study, is the frequency of synchronized crashes occurring in the time series of prices. The second one is the time evolution of the realized correlation between the two stock indices, which is found by investigating daily index data in various countries during the periods of the Lehman shock in 2008 and the Asian financial crisis in 1997. In particular, a sharp rise of the correlation coefficient right before the GFC, as well as a period of strong correlation after the shock, are discerned. On the other hand, we also found that the agent-based model with only portfolio and local managers could not explain this peculiar observation. To understand the time evolution of the correlation coefficient of stock indices, we suggest that a new kind of manager, the so-called informed local manager, has to be taken into account. Informed local managers are defined as agents who trade locally while being aware of the risk of neighboring markets. A cognitive model for the evaluation of risk is established. By employing a threshold and a memory curve, the hysteresis behavior in risk recognition is formulated for the informed local managers. Simulations with the suggested model remarkably recovered the peculiar behavior of the correlation coefficient before and after the synchronized crashes. The key role played by the informed local managers and their interactions with the global managers in the GFC is revealed. Furthermore, an interesting scenario for the eruption of the GFC is identified: if a crash occurs in a particular market, with a scale large enough that the informed managers in another market start to count the risk in this market, the two markets can become more correlated. If, unluckily, a second crash takes place in this period, the shock can immediately propagate to the other market and trigger crashes in all markets. More results on the GFC will be detailed in the workshop. [1] T. Feldman. Journal of Economic Behavior & Organization 75 (2010) 192-202. [2] D. Friedman and R. Abraham. Journal of Economic Dynamics & Control 33 (2009) 922-937.
14.00 - 14.30, Paolo Zeppini, Eindhoven University of Technology, Netherlands
A Percolation Model of Product Diffusion and Competition
Coauthors: Koen Frenken, Luis Izquierdo
Slides
We study the diffusion of innovative products in a market structured as a network of social relationships. Different topologies are implemented, such as regular lattices, Erdos-Renyi random networks and small-world networks. Besides network topology we consider two other factors: endogenous learning curves, with products becoming more attractive through adoption, and consumer preferences. To do this we specify the diffusion model as a percolation model, where adoption occurs conditionally on local information (a neighbour buys) and whenever the product quality is above an individual level (preference). A uniform distribution of preferences is our benchmark. To this we compare a Pareto distribution. In this way we can study how inequality affects product diffusion. An extension of the model addresses the diffusion of competing products. Here the question is whether the critical transition of percolation affects competition. This is important for the strategic behaviour of firms competing in the market, for instance where seeding (first users) also assumes a strategic relevance. Percolation models of social systems are Solomon et al. (2000), Silverberg and Verspagen (2005), Frenken et al. (2008), Hohnisch et al. (2008) and Cantono and Silverberg (2008). Geroski (2000) and Young (2009) study the time pattern of adoption for different mechanisms of social interactions. Economic models of diffusion with local strategic interactions are Blume (1995), Morris (2000), Jackson and Yariv (2007), Goyal and Kearns (2011). Analytical studies of network models of diffusion are Moore and Newman (2000), Newman et al. (2002), Watts (2002). The model has been implemented in NetLogo. Preliminary results show the following: with one product, learning shifts the percolation threshold to lower levels, as expected. If one works in price space, this has a positive impact on consumer surplus. The time required for diffusion presents a peak at the threshold, typical of critical transitions. The peak is more pronounced for low values of learning. Regular grid lattices are more efficient for diffusion than rings with the same degree, presenting a lower percolation threshold and shorter diffusion times. This indicates that average path length is more important than connectivity for percolation. For a given average connectivity, the random network has lower thresholds than regular networks. Just above the threshold the random network is more efficient, but for large quality (far above the threshold) less efficient than regular networks: this is due to unconnected components. Small-world networks do little better than rings: a low average path length is not enough for percolation. Clustering has a negative impact. The crucial feature for percolation is a "spreading" structure, where neighbours of successive orders increase in number, as in the grid. Regarding the diffusion of competing products, we observe a non-monotonic behaviour of market variety: initially entropy goes down, market shares diverge, and a dominant product emerges. Then entropy increases, with a reversion to the mean of market shares. The model reproduces the observed life-cycle of an industry (selection phase, dominant design phase, mature phase). This non-monotonic pattern is obtained even without product differentiation. The "rebound" of variety is crucial for understanding whether diffusion affects competition.
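The adoption rule described above (adopt when a neighbour has adopted and the product quality exceeds one's preference threshold) can be sketched as follows; the paper's implementation is in NetLogo, while this illustration uses Python/NetworkX with an Erdos-Renyi graph and uniform preferences:

```python
import random
from collections import deque
import networkx as nx

random.seed(1)
G = nx.erdos_renyi_graph(n=1000, p=0.005)
preference = {i: random.random() for i in G}      # uniform preference thresholds
quality = 0.5                                      # product quality to test

seeds = random.sample(list(G.nodes), 10)
adopted = {i for i in seeds if preference[i] <= quality}
queue = deque(adopted)
while queue:                                       # spread only through adopting neighbours
    i = queue.popleft()
    for j in G.neighbors(i):
        if j not in adopted and preference[j] <= quality:
            adopted.add(j)
            queue.append(j)

print(f"diffusion reached {len(adopted)/G.number_of_nodes():.1%} of the market")
```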
14.30 - 15.00, Antoine Mandel, University Paris 1 Pantheon-Sorbonne, France
Agent-based dynamics and general equilibrium
Slides
We start from a reading of the Sonnenschein-Mantel-Debreu theorem as a result characterizing the complexity of the general equilibrium framework. The aggregate excess demand can fit any homogeneous and continuous function satisfying Walras' law, while individual excess demands have to satisfy much more stringent conditions, such as the weak axiom of revealed preference. The multiplicity of equilibria and the richness of out-of-equilibrium dynamics can be seen as emergent properties at the macro level. Much of the subsequent work in general equilibrium theory has tried to cast aside this complexity by establishing general conditions under which the law of demand holds or, as far as computable general equilibrium (CGE) models are concerned, by discarding the micro level through the use of representative agents. This line of research has yielded useful results for policy analysis through local comparative statics exercises but has left general equilibrium theory mute as far as regime changes or equilibrium transitions are concerned. The main pitfall in this respect is the lack of models for out-of-equilibrium dynamics. In this paper, we investigate which insights on the stability of equilibrium and on transitions between multiple equilibria can be gained by equipping the general equilibrium model with out-of-equilibrium agent-based dynamics. We also wish to revisit the duality between the micro and the macro level in general equilibrium theory and in particular to show that bounded rationality at the micro level is not necessarily inconsistent with the emergence of equilibrium at the macro level. This has been shown for exchange economies in two recent contributions by Gintis. The validity range of these conclusions, obtained in a framework without capital accumulation, was then questioned. This questioning is the main driver of the work presented here. We introduce agent-based dynamics for a relatively large class of Arrow-Debreu economies, in particular allowing for capital accumulation and endogenous technological progress. In this framework, we first demonstrate that the micro-behavior of boundedly rational agents can lead to the emergence of equilibrium at the macro level provided some form of collective optimization takes place through evolutionary/genetic mechanisms. We then investigate which properties of out-of-equilibrium dynamics are crucial for such convergence and stability. This investigation puts forward the role of time scales: the relative speed of adjustment of prices and quantities, the speed at which expectations evolve, and the implicit temporal dimension embedded in agents' decision rules. Consistency between a model's notional time and empirical time also appears as a key criterion for validation, and the computational treatment of time in agent-based models as a major area for future research. A second set of issues to be analyzed concerns the use of agent-based models for the exploration of the dynamics of economies with multiple equilibria. In our setting, multiple equilibria materialize as a result of Monte Carlo simulations and we observe endogenous equilibrium transitions.
15.00 - 15.30, Philipp Harting, Bielefeld University, Germany
Spatial labor market frictions and economic convergence: policy implications from a heterogeneous agent model
Coauthors: Herbert Dawid, Michael Neugart
Slides
The coexistence of regions characterized by significantly different productivities within an integrated economic area is problematic for numerous reasons, as can be seen in the current European debt crisis. We use a two-region agent-based macroeconomic model (the Eurace@Unibi model) to analyze the medium- and long-term effects of two distinct local policy measures which attempt to foster the convergence of two differently developed regions. The first, a human capital policy, leads to an upgrade of the general skill level in the population of the less developed region. The second, a technology policy, aims at closing the technological gap by providing subsidies to those firms within the weaker region that invest in the most recent vintage of the technology. In order to explore the role of spatial labor market frictions in the effectiveness of these policies we consider two experimental setups. They differ with respect to the level of integration of the two local labor markets, where the extreme cases of full integration and full separation are treated. It turns out that in the case of fully integrated labor markets the human capital policy is counterproductive, since it has a positive effect on the economically stronger region and a clear negative effect on the production of the weaker region at which it is targeted. However, the technology policy and the combination of both policies have a positive output effect in the target region and a negative effect on the neighboring region. In the case where we set the frictions such that the two local labor markets are separated, we find that all policies have clear positive effects on the development of the weak region and are helpful in supporting economic convergence. In this scenario, however, the human capital policy in particular has a negative impact on the speed of technological change in the strong region.
16.00 - 16.30, Silvano Cincotti, University of Genoa, Italy
Macroprudential policies in the EURACE artificial economy
Coauthors: Marco Raberto, Andrea Teglio
The paper presents the main modelling features of the Eurace agent-based macroeconomic framework. Eurace is a large-scale agent-based model and simulator representing a fully integrated macroeconomy consisting of three economic spheres: the real sphere (consumption goods, investment goods, and labour markets), the financial sphere (credit and financial markets), and the public sector (Government and Central Bank). Eurace economic agents, characterized by bounded rationality and adaptive behaviour, interact directly in decentralized markets. A balance-sheet approach, coupled with stock flow consistency checks, has been used both as modelling paradigm and validation instrument. A set of computational results is presented, showing the real effects on the Eurace economy of monetary aggregates dynamics, i.e., endogenous credit money supplied by commercial banks (loans to firms) and fiat money created by the central bank by purchasing bonds (quantitative easing). Results show a higher fragility of the economy when the amount of credit money in the economic system is pushed by a high leverage of banks, and a negligible effect of quantitative easing. Business cycles emerge as a consequence of the interplay between the real economic activity and its financing through the credit market.
16.30 - 17.00, Matthias Lengnick, University of Kiel, Germany
Money Creation and Financial Instability: An Agent-Based Credit Network Approach
Coauthors: Sebastian Krug, Hans-Werner Wohltmann
Slides
The recent crisis has worked as a catalyst for bringing critical ideas onto the economics agenda. Among them are: agent-based computational economics (ACE), stock-flow consistency (SFC), network economics, disequilibrium processes, non-rationality, systemic risk and endogenous crises. We develop a model of the credit market that unites all these characteristics. We start with the simple model of money creation (the monetary multiplier) as it can be found in almost every introductory macroeconomics textbook. But instead of calculating the equilibrium results directly, we set up a population of heterogeneous agents. We then allow households to place their deposits at banks and allow banks to grant credit to the real sector. For each agent, we set up a balance sheet to account for every single transaction. We also show that the individuals, just by economic interaction, endogenously create money and weave a network of debt relations. One nice property of our model is that, although it is generally described by a disequilibrium, it also contains the mainstream equilibrium solution as a limiting case. In an extended version we also introduce an interbank market for credit. Obviously, this extension leads to a higher interconnectedness of agents through debt claims (especially of banks). We show that the existence of an interbank market has two effects on the economy. It allows banks that are illiquid to get credit and continue to operate. At the same time, however, it allows for bankruptcy cascades that produce the threat of systemic and deep crises. Since we model every agent and his balance sheet, we are able to give a detailed and individual-based illustration of how systemic risk builds up. If one bank happens to be unable to fulfill its debt obligations and enters insolvency, other banks (which have credit relations with the bankrupt one) suffer from a depreciation of assets. They might therefore also become insolvent and transmit the bankruptcy cascade. Mainstream macroeconomic theory has had problems finding an adequate role for money. Most contemporaneous models simply introduce money on top of a real exchange equilibrium. Money in this case becomes a residual: it adjusts to whatever is necessary for all other equations to be met. Additionally, mainstream models have a strong intrinsic tendency to be stable. Therefore they have become blind to endogenous crises. Our analysis casts serious doubt on this way of modeling because it shows that the simple process of money creation that is present in every macroeconomics textbook goes hand in hand with financial instability.
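For reference, the textbook money-creation process that the model takes as its starting point can be traced transaction by transaction as below; the reserve ratio is illustrative, and the deposit total converges to base money divided by the reserve ratio, the equilibrium limiting case mentioned above:

```python
# Textbook monetary multiplier traced round by round (illustration only).
reserve_ratio = 0.1
base_money = 100.0                           # initial cash deposited by households

deposits, reserves, loans = 0.0, 0.0, 0.0
cash_in_circulation = base_money

for _ in range(60):
    deposits += cash_in_circulation          # households deposit their cash
    reserves += cash_in_circulation
    lendable = reserves - reserve_ratio * deposits
    loans += lendable                        # bank lends out its excess reserves
    reserves -= lendable
    cash_in_circulation = lendable           # borrowers spend; money returns as new deposits

print(f"total deposits: {deposits:.2f} (limit = {base_money / reserve_ratio:.2f})")
```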
17.00 - 17.30, Marco van der Leij, CeNDEF, University of Amsterdam, The Netherlands
Complex Methods in Economics: An Example of Behavioral Heterogeneity in House Prices
Coauthors: Wilko Bolt, Maria Demertzis, Cees Diks, Cars Hommes
Slides
Recent work by Scheffer et al. (2009) argues that critical transitions in the natural sciences, such as the desertification of the Sahara or the breakdown of coral reefs, can be predicted in advance by monitoring simple time-series properties. We show, however, how these simple statistical techniques for capturing critical transitions used in the natural sciences fail to capture economic regime shifts. This implies that we need to use model-based approaches to identify critical transitions. We apply a heterogeneous agents model à la Brock & Hommes (JEDC, 1998) in a standard housing market model to show that this family of models generates non-linear responses that can capture such transitions. We estimate this model on house prices in the United States and the Netherlands and find, first, that the data do capture the heterogeneity in expectations and, second, that the qualitative predictions of such nonlinear models are very different from those of standard linear benchmark models. It would be important to identify which approach can serve best as an early warning indicator.
Track: Econophysics Colloquium
Lecture Hall CAB G 51

9.00 - 10.00, Jean-Philippe Bouchaud, Ecole Polytechnique Paris, France
Anomalous price impact and the critical fragility of financial markets
Slides
Price impact refers to the correlation between an incoming order (to buy or to sell) and the subsequent price change. Empirical studies reveal that the impact of a metaorder of volume Q grows, quite surprisingly, as the square root of Q. This appears to be a universal result, independent of markets, epoch, trading strategy or tick size. We propose a generic dynamical theory of liquidity that predicts that the average supply/demand profile vanishes around the current price. The anomalously small local liquidity induces a breakdown of linear response and a diverging impact of small orders, explaining the "square-root" impact law. This suggests that liquidity in financial markets is inherently fragile, which explains why "micro-crises" are so frequent and why markets are so volatile.
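In the notation commonly used in this literature, the square-root law referred to above is often parameterised as below, with Y a constant of order one, σ the daily volatility and V the daily traded volume; the abstract itself only asserts the square-root dependence on Q:

```latex
\[
  \mathcal{I}(Q) \;\simeq\; Y \,\sigma\, \sqrt{\frac{Q}{V}}
\]
```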
10.30 - 11.00, Tiziana Di Matteo, King's College London, UK
Embedding High Dimensional Data on Networks
Coauthors: T. Di Matteo, Won-Min Song, T. Aste
The continuous increase in the capability of automatic data acquisition and storage is providing an unprecedented potential for science. The ready accessibility of these technologies is posing new challenges concerning the necessity to reduce data-dimensionality by filtering out the most relevant and meaningful information with the aid of automated systems. In complex datasets information is often hidden by a large degree of redundancy and grouping the data into clusters of elements with similar features is essential in order to reduce complexity. However, the reduction of the system into a set of separated local communities may hide properties associated with the global organization. For instance, in complex systems, relevant features are typically both local and global and different levels of organization emerge at different scales in a way that is intrinsically not reducible. It is therefore essential to detect clusters together with the different hierarchical gatherings above and below the cluster levels. In this talk I will introduce a graph-theoretic approach to extract clusters and hierarchies in complex datasets in an unsupervised and deterministic manner, without the use of any prior information [1,2]. This is achieved by building topologically embedded networks containing the subset of most significant links and analyzing the network structure. For a planar embedding [3,4] this method provides both the intra-cluster hierarchy, which describes the way clusters are composed, and the inter-cluster hierarchy which describes how clusters gather together. I will discuss performance, robustness and reliability of this method by investigating several synthetic datasets, finding that it can outperform significantly other established approaches. Applications to financial datasets show that industrial sectors and specific activities can be extracted and meaningfully identified from the analysis of the collective fluctuations of prices in an equity market. [1] Won-Min Song, T. Di Matteo, T. Aste, "Nested Hierarchy in planar graphs", Discrete Applied Mathematics 159 (2011) 2135. [2] Won-Min Song, T. Di Matteo, T. Aste, "Hierarchical information clustering by means of topologically embedded graphs", PLoS One 7(3) (2012) e31929. [3] T. Aste, T. Di Matteo, S. T. Hyde, "Complex networks on hyperbolic surfaces", Physica A 346 (2005) 20-26. [4] M. Tumminello, T. Aste, T. Di Matteo, R. N. Mantegna, "A tool for filtering information in complex systems", PNAS 102, n. 30 (2005) 10421.
11.00 - 11.30, Fabrizio Lillo, Scuola Normale Superiore di Pisa, Italy
Market shocks: should High Frequency Traders be blamed?
Coauthors: Maria Frolova, Sergey Ivliev
In the last years there has been a growing interest of scholars, regulators, and professionals in the role of High Frequency Traders (HFTs) in the dynamics of financial markets. In particular, one of the main concerns is that HFTs contribute significantly to destabilizing markets, increasing volatility and creating large swings of the price not connected to fundamental reasons. A paradigmatic case is the Flash Crash of May 6, 2010. However, conclusions are still controversial, mainly because they are poorly supported by empirical analyses. Here we make a contribution in this direction thanks to the availability of a unique database of the Russian equity market, specifically the Moscow Interbank Currency Exchange (MICEX), for 29 stocks in the MICEX index over a four month period in 2010. The database contains all the limit order book events and, more importantly, allows us to identify, in a coded way, the agent responsible for each order. Our work can be divided into three parts: (i) identification of market shocks, (ii) identification of agents behaving as HFTs, (iii) investigation of the trading behavior of HFTs before, during, and after these shocks. As a first step we identified market shocks (or jumps), i.e. abrupt changes of price which are several times larger than the local volatility, at different time scales (hours, minutes, ticks). We apply specific filters for each scale and study the dynamics of the price path, cumulative return, buy imbalance, trading volume and bid-ask spread around the identified events. We identified 1820 events on the hour scale, 13368 events on the minute scale, and 369 events on the tick scale, allowing us to perform large-scale statistical analyses. We found that the buy imbalance in volume can be considered a precursor for the minute-scale shocks at least 2 minutes before the shock. In the next step we built simple algorithms for the identification of HFTs based on the characteristics of the trading and order submission behavior of single agents. In particular, a large number of limit orders, a very high ratio of canceled orders, and a short lifetime of limit orders seem to be good indicators to identify HFTs. Generically, we find roughly ten HFTs placing several thousand limit orders per day in a single stock but executing a very small fraction of them, typically between 10^-2 and 10^-3. Finally, we investigated the behavior of the identified HFTs in a 2 hour interval around the minute-scale shocks. We observed that the volume traded by HFTs rises compared to the rest of the market, peaking at the shock time. Surprisingly, this is observed more frequently for positive shocks than for negative ones. The obtained results shed some light on the controversial nature of price jumps and the role of HFTs in creating these extreme events.
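The scale-dependent shock filter described above can be sketched as follows; the window length, the threshold multiple and the synthetic price series are illustrative, not the authors' calibration:

```python
import numpy as np

def detect_shocks(prices, window=60, k=4.0):
    """Flag returns whose magnitude exceeds k times the local volatility
    estimated over the preceding window (any time scale)."""
    returns = np.diff(np.log(prices))
    shocks = []
    for t in range(window, len(returns)):
        local_vol = returns[t - window:t].std()
        if local_vol > 0 and abs(returns[t]) > k * local_vol:
            shocks.append(t)
    return shocks

rng = np.random.default_rng(2)
prices = 100 * np.exp(np.cumsum(0.001 * rng.standard_normal(5000)))
prices[3000:] *= 1.02              # inject an abrupt 2% jump
print("shock indices:", detect_shocks(prices))
```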
11.30 - 12.00, Damien Challet, École Centrale Paris, France
Beyond tick-by-tick data: limit order book phenomenology from real trader behaviour
Coauthors: David Morton de Lachapelle
Brokerage data brings new insights on financial market phenomenology. While most previous studies using such data have focused on trader transactions, we exploit a unique large data set that records the raw actions of all the clients of Swissquote Bank SA from 2002 to 2012. In particular, one has access to all the actions of traders, including those which did not result in a transaction. This makes it possible not only to provide a more complete description of limit order book dynamics but also to test or uncover relationships that could only be guessed indirectly. This talk will first re-assess current wisdom on limit order placement and its dependence on price, spread, time of the day, and patience. It will then use robust measures to understand what makes some categories of traders become active. A third part aims at characterizing the full trader timescale heterogeneity.
14.00 - 14.30, Victor Yakovenko, University of Maryland, USA
Statistical Mechanics of Money, Income, Debt, and Energy Consumption
Slides
By analogy with the probability distribution of energy in physics, entropy maximization results in the exponential Boltzmann-Gibbs probability distribution of money among the agents in a closed economic system. Analysis of the empirical data shows that income distribution in the USA and other countries has a well-defined two-class structure. The majority of the population (about 97%) belongs to the lower class characterized by the exponential ("thermal") distribution. The upper class (about 3% of the population) is characterized by the Pareto power-law ("superthermal") distribution, and its share of the total income expands and contracts dramatically during bubbles and busts in financial markets. The probability distribution of energy consumption per capita around the world also follows the exponential Boltzmann-Gibbs law. Web links: http://physics.umd.edu/~yakovenk/econophysics/ http://physics.umd.edu/~yakovenk/econophysics/animation.html http://physics.umd.edu/~yakovenk/papers/RMP-81-1703-2009.pdf http://physics.umd.edu/~yakovenk/papers/2010-NJP-v12-n7-p075032.pdf
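A standard kinetic-exchange sketch (not necessarily the speaker's own simulation) shows how random, money-conserving transfers between agents drive the distribution towards the exponential Boltzmann-Gibbs form:

```python
import random

random.seed(3)
N, T = 1000, 500_000
money = [100.0] * N                          # everyone starts with the same amount

for _ in range(T):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    eps = random.random()                    # the pair's total money is randomly re-split
    total = money[i] + money[j]
    money[i], money[j] = eps * total, (1.0 - eps) * total

mean = sum(money) / N
money.sort()
# for an exponential (Boltzmann-Gibbs) distribution the median equals mean * ln 2
print(f"mean: {mean:.1f}  median: {money[N // 2]:.1f}  exponential prediction: {0.693 * mean:.1f}")
```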
14.30 - 15.00, Boris Podobnik, Zagreb School of Economics and Management, Croatia
The competitiveness versus the wealth of a country
Coauthors: Davor Horvatic, Dror Y. Kenett, and H. Eugene Stanley
Slides
Over the past decade, Switzerland, Singapore, the Nordic countries, and the USA have been considered the most competitive. Generally speaking, rich countries are considered more competitive than poor countries, implying that there is a functional dependence between the Global Competitiveness Index (GCI) and per capita GDP (gdp). Here we address two questions: (i) What is the expected level of competitiveness for a country with a given level of wealth? (ii) What is the probability that a country will substantially improve its wealth and its level of competitiveness? Politicians world-wide frequently promise a better life for their citizens within a span of a few years. We test how meaningful this promise is. By using a ranking approach, we find that the probability that a country will increase its gdp rank R within a decade follows an exponential distribution with decay constant lambda = 0.12. To estimate the probability, e.g., that Bulgaria's gdp will reach Germany's gdp in 10 years, we calculate the difference between the current Bulgarian rank and German rank and, using an exponential function, determine the probability 0.5 exp(- lambda |Delta R|) that at least this change in rank will occur within a decade. Besides the GCI, we also study the Corruption Perception Index (CPI) and find that the distribution of change in CPI rank and the distribution of change in GCI rank follow exponential functions with almost the same exponent as lambda, implying that the dynamics of gdp, CPI, and GCI may share the same origin. Then we quantify the relationship between the gdp and the GCI with a power law. We introduce a new measure to assess the competitiveness relative to gdp as the difference D between the actual GCI value and the expected value of GCI obtained from the power-law fit between GCI and gdp---the more negative D is, the smaller will be the relative competitiveness of a given country. Thus for a country below the regression line, our regression suggests the expected level of competitiveness a country should aspire to in order to achieve at least the average relative competitiveness, and the final outcome if a country does not improve its competitiveness. For all European and EU countries during the 2008--2011 economic downturn we find that the drop in gdp in countries that were more competitive relative to their gdp was significantly smaller than in relatively less competitive countries, which is valuable information for policymakers. We finally propose a model for political corruption and nepotism. In our model of a corrupt country, people often occupy public sector jobs because of political corruption and nepotism, not because of their skills. We suggest that a common mechanism at the EU level to control corruption is needed, e.g., an anti-nepotism law. If corruption is allowed to continue, uncorrupt EU countries will increasingly be paying the bills of the corrupt. Fighting corruption is thus fighting for an increase in competitiveness, and this must be a top EU priority.
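A hedged worked example of the exponential rank-change estimate quoted above: the sketch evaluates 0.5 exp(-lambda |Delta R|) with the reported decay constant lambda = 0.12; the rank gap used is a hypothetical placeholder, not an actual 2012 ranking.

```python
import math

LAMBDA = 0.12  # decay constant reported in the abstract

def prob_rank_change(delta_rank: int, lam: float = LAMBDA) -> float:
    """Probability that a change of at least |delta_rank| places in the gdp
    ranking occurs within a decade, using the form 0.5 * exp(-lam * |dR|)."""
    return 0.5 * math.exp(-lam * abs(delta_rank))

# Hypothetical rank gap of 30 places (placeholder, not an actual 2012 figure):
print(f"P(closing a 30-place gap within a decade) ~ {prob_rank_change(30):.4f}")
```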
15.00 - 15.30, Austin Gerig, University of Oxford, United Kingdom
High-Frequency Trading: What is it Good for?
In recent years, financial markets have experienced a dramatic increase in high-frequency trading (HFT), i.e., extremely fast, autonomous computerized trade. There is growing evidence that high-frequency trading makes markets more efficient -- increasing the accuracy of prices and lowering transaction costs -- but it is unclear how HFT activity provides these benefits. Using a special dataset from the NASDAQ stock exchange, I show that HFT synchronizes market prices so that the values of related securities change contemporaneously. With a simple model, I demonstrate how synchronization increases market efficiency: prices are more accurate and transaction costs are reduced. This research highlights an important role that HFT plays in markets, explains how HFT can increase market efficiency yet also be profitable, and should be useful for policy makers who are currently deciding how to regulate HFT.
16.00 - 16.30, Klaus Pawelzik, University of Bremen, Germany
Annihilation of Information Destabilizes Speculative Markets
Coauthors: Felix Patzelt
Slides
Financial markets are believed to be efficient in the sense that no profit can be made by speculation using only publicly available information. This implies that the behavior of market participants collectively exerts a stabilizing control on predictable price changes, leaving only residual, unpredictable returns. In contrast, speculative markets occasionally exhibit extremely large price changes, which indicates destabilizing mechanisms. Here we investigate whether this long-standing antinomy in economics can be resolved by a recent non-economic theory: efficient adaptive control can in fact drive a dynamical system into a state of extreme susceptibility [1]. We introduce a minimal trading model where asset redistribution by trading captures the generation of market efficiency by removing the influence of reoccurring information on price changes. In a strongly self-referential market this control mechanism is found to perpetually destabilize local equilibria and make the market jump between distinct dynamical regimes. Our model exemplifies the Information Annihilation Instability as a novel principle to explain the inherent criticality of markets as systems that by construction exert a stabilizing control and tend towards an equilibrium. The model is simple enough to be rigorously tractable in detail, it can be mapped to a minority game, and simulations reproduce stylized facts of price time series in qualitative and quantitative detail: returns have leptokurtic distributions and exhibit long-time correlations of volatility. [1] Patzelt, F. and Pawelzik, K.: Criticality of Adaptive Control Dynamics. Phys. Rev. Lett. 107, 238103 (2011).
16.30 - 17.00, Hans Danielmeyer, Institut für Neuro-und Biotechnologie, Germany
The role of spare time and inherited capacities for long term economic growth
Coauthors: Thomas Martinetz
Slides
Economic growth is unpredictable unless demand is quantified. We solve this problem by introducing the demand for unpaid spare time and a user quantity named human capacity. Human capacity organizes and amplifies the spare time required for enjoying affluence, just as physical capital, the technical infrastructure for production, organizes and amplifies working time for supply. The sum of annual spare and working time is fixed by the universal flow of time; this yields the first macroeconomic equilibrium condition. Both storable quantities form stabilizing feedback loops. They are driven by the general and technical knowledge embodied with parts of the supply through education and construction. Linear amplification yields S-functions as the only analytic solutions. Destructible physical capital controls medium-term recoveries from disaster. Indestructible human capacity controls the collective long-term industrial evolution. It is immune even to world wars and has run from 1800 to date parallel to the unisex life expectancy in the pioneering nations. This is the first quantitative information on long-term demand. The theory is self-consistent: it reproduces all peaceful data from 1800 to date without adjustable parameters, and it has full forecasting power since the decisive parameters are constants of the human species. They predict an asymptotic maximum for the economic level per capita. Long-term economic growth thus appears as a part of natural science.
17.00 - 17.30, Hayafumi Watanabe, Tokyo Institute of Technology, Japan
Connection between sales of Japanese firms and the structure of the nation-wide inter-firm trading network
Coauthors: Hayafumi Watanabe, Hideki Takayasu and Misako Takayasu
Slides
The circulation of money is often likened to the circulation of blood in an animal body. Because of the recent accumulation of data, we can now observe an inter-firm trading network containing nation-wide business relationships. In the case of blood flow we can estimate the transport of blood from the vascular structure of the animal body. Is it possible to estimate the transport of money from the structure of the inter-firm trading network? We analyzed empirical data for an inter-firm trading network, which consists of about one million firms covering practically all active firms in Japan, together with financial information such as the annual sales of the corresponding firms. First, we analyzed the empirical relationship between the sales of a firm and the sales of its nearest neighbors. We found a simple linear relationship between sales and a weighted sum of the sales of nearest neighbors. Based on this empirical local money-flow relation we introduced a simple money transport model of the global system. In this model, a buyer firm distributes money to its neighboring seller firms in proportion to the in-degrees of the destinations. From intensive numerical simulations, we confirmed that the steady flows derived from this model can approximately reproduce the distribution of sales of actual firms, which obeys Zipf's law. Moreover, the sales of individual firms deduced from the money-transport model are shown to be proportional, on average, to the real sales. In our presentation, we will discuss the results of the data analysis and the numerical simulations. In addition, we will discuss the relation between Zipf's law and the negative degree-degree correlation of the network, and a mean-field approximation of our model. [1] Watanabe H, Takayasu H and Takayasu M 2012 New J. Phys. 14 043034
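A hedged sketch of this kind of money-transport dynamics: money flows along directed buyer-to-seller links, with each buyer splitting its money among sellers in proportion to their in-degrees. The random network, uniform initial endowment, and iteration count are illustrative assumptions, not the Japanese inter-firm data or the authors' exact specification.

```python
import numpy as np
import networkx as nx

def steady_state_sales(G: nx.DiGraph, n_iter: int = 200) -> dict:
    """Iterate a simple money-transport rule on a buyer->seller network:
    each buyer splits its money among its sellers in proportion to the
    sellers' in-degrees, and 'sales' are read off as the steady inflow."""
    nodes = list(G.nodes())
    money = {v: 1.0 for v in nodes}          # uniform initial endowment (assumption)
    in_deg = dict(G.in_degree())
    for _ in range(n_iter):
        inflow = {v: 0.0 for v in nodes}
        for buyer in nodes:
            sellers = list(G.successors(buyer))
            if not sellers:
                inflow[buyer] += money[buyer]   # nothing to buy: the firm keeps its money
                continue
            weights = np.array([in_deg[s] for s in sellers], dtype=float)
            weights /= weights.sum()
            for seller, w in zip(sellers, weights):
                inflow[seller] += money[buyer] * w
        money = inflow
    return money

# Illustrative usage on a random directed network:
G = nx.gnp_random_graph(1000, 0.01, directed=True, seed=42)
sales = steady_state_sales(G)
print("max / median steady-state 'sales':", max(sales.values()) / np.median(list(sales.values())))
```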

Wednesday, September 12

Behavioral Economics
9.00 - 9.45, Daniel Houser, George Mason University, USA (Lecture Hall CAB G 11)
Self-control and Altruism at Work
Slides Video
Self-control resolves conflict between altruistic and selfish impulses. Self-control requires energy, and in work environments controlling one’s short-run desires can have a detrimental impact on subsequent productivity. Further, controlling selfish impulses is more difficult when costs of altruistic effort for others are monetized. Brain imaging data suggest altruism is mediated by social reward systems. These systems may be difficult to activate (that is, self-control more difficult) in the presence of pecuniary costs, as money is perceived as an individual resource.
9.45 - 10.30, Roberto Weber, University of Zurich, Switzerland (Lecture Hall CAB G 11)
Multiple Equilibria and Economic Theory
Slides Video
Many economic contexts possess multiple equilibria. These situations are important for many reasons, including because they are often where traditional theoretical approaches fail to generate precise or accurate predictions. I discuss recent experimental studies that demonstrate how, in situations with multiple equilibria, behavior can change dramatically in ways unaccounted for by current theoretical models. This evidence highlights the need for improved behavioral theories of equilibrium selection, comparable to advances in other areas of behavioral economic research.
11.00 - 11.30, Arno Riedl, Maastricht University, Netherlands (Lecture Hall CAB G 11)
On the interaction of economic theory and experimental economics: Studies on incomplete preferences and partner choice
Slides Video
Experiments in economics and psychology have critically contributed to the development of new theoretical (behavioral) models of individual and social behavior. Experiments may not only be used to falsify existing models but can also suggest the right way of modeling. This will be exemplified by a novel study on the incompleteness of preferences in decisions under uncertainty where the major (behavioral) models fail to account for observed behavior. Moreover, in a strategic setting it will be shown that economics experiments that ignore the power of partner choice in social interaction are likely doomed to produce misleading predictions for field behavior and to give wrong guidance for theory development.
11.30 - 12.00, Matthias Sutter, University of Innsbruck, Austria (Lecture Hall CAB G 11)
Experimental Choices and Field Behavior: On Impatience, Saving and Smoking
Slides Video
Experimental economics applies controlled conditions to investigate the causal factors that drive economic behavior. Recent work has been focusing on how experimental choices relate to field behavior. We link teenagers' decisions in an intertemporal choice experiment to their savings decisions and health-related behavior (such as smoking). We not only find important correlations, but also a predictive power of experimental choices for field behavior a few years later.
12.00 - 12.30, Dirk Helbing, ETH Zurich, Switzerland (Lecture Hall CAB G 11)
Rethinking Macroeconomics Based on Complexity Theory
Slides Video
We argue that, if we are to find a more satisfactory approach to tackling the major socio-economic problems with which we are faced, we may need to thoroughly rethink the basic assumptions of macroeconomic and financial theory. Making minor modifications to the standard models to remove “imperfections” may not be enough; the whole framework may need to be reconstructed. Let us first enumerate some of the standard assumptions and postulates of economic theory. The first of these is the idea that an economy is an equilibrium system, in other words a system in which all markets systematically clear at each point of time but where the equilibrium may be perturbed, from time to time, by exogenous shocks. The second is that the selfish or greedy behaviour of individuals yields a result which is beneficial to society, a modern and inadequate restatement of Adam Smith’s description of “the invisible hand”. Thirdly, individuals and companies decide rationally. By this is meant that individuals optimize under the constraints with which they are faced and that their choices satisfy some standard axioms of consistency. Fourthly, the behaviour of all agents together can be treated as corresponding to that of the average or representative individual. Fifthly, when the financial sector is analysed, it is assumed that financial markets are efficient, efficiency here meaning that all the relevant information about the price of an asset is reflected by the price of that asset. Thus no individual has any incentive to seek information for himself. Sixthly, in financial markets it is assumed that the more liquid they are, the better they function. Lastly, it is assumed that the more connected the network of links between individuals and institutions, the more risk is spread and the more stable and robust the system becomes. We will show computer simulations or analyses of other social systems that question assumptions such as the above, but also give a perspective of how a new theoretical approach may be developed that is in better agreement with real-world evidence.
14.00 - 14.30, Ryan Murphy, ETH Zurich, Switzerland (Lecture Hall CAB G 11)
Simple Stochastic Games: Risk Taking in Strategic Contexts
Slides Video
Stochastic game theory unifies both strategic interactions and random processes into a single analytic framework. Along these lines, we develop a simple risky choice problem, and then extend that decision theoretic problem into a strategic context. We derive the equilibrium for this simple 2-player zero sum game and show that its mixed strategy equilibrium is both complicated and highly sensitive to the stochastic process. Further we show a non-zero sum version of this game, and then outline several experiments along these lines.
14.30 - 15.00, Christian Zehnder, University of Lausanne, Switzerland (Lecture Hall CAB G 11)
On the Psychology of Contracts
Slides Video
Recent theoretical work on incomplete contracts suggests that contracts may not only define trading parties' rights and obligations but may also have important psychological effects. In particular, it has been hypothesized that competitively negotiated ex ante contracts may provide salient reference points which shape perceived entitlements in ex post trade. A series of papers demonstrates that the existence of such contractual reference points has a number of important implications for the theory of the firm. We have conducted a series of controlled laboratory experiments testing the empirical relevance of the underlying behavioral assumptions of this new strand of literature. Our evidence is highly supportive of the hypothesis that contracts serve as reference points. Specifically, we find that there is an important trade-off between contractual rigidity and flexibility. While the existence of this trade-off is in line with the theory of contractual reference points, it is in strong contrast to both standard economic theory and established behavioral models of social preferences. Further experimental conditions also reveal that the central behavioral mechanism underlying the concept of contractual reference points is robust to the presence of informal agreements and ex post renegotiation.
15.00 - 15.30, Massimo Molinari, University of Trento, Italy (Lecture Hall CAB G 11)
Competition Policy as a Tool for the Macroprudential Regulation of the Banking Sector
Coauthors: Edoardo Gaffeo
Slides Video
In this paper we employ network analysis to re-assess competition policy within a macroprudential framework. Such an exercise seems to be relevant as it explicitly addresses a question posed forcefully by Haldane (2009), that is, whether policy interventions can alter the topological network structure with the declared aim of improving network robustness. Here we concentrate on the idea that central banks and antitrust authorities have the opportunity to design the structure of the industry by choosing how banks are allowed to merge. Mergers change the topology of the system for three reasons: 1) larger banks are formed as the sum of smaller ones; 2) the total number of active banks decreases; 3) large banks generally possess more connections than small banks. One can imagine that different competition policies (e.g., letting just one very big bank form by allowing it to acquire a large number of smaller banks; limiting the size of each merger to just two small units, etc.) lead to different network topologies, which could in principle be characterized by different degrees of resilience to shocks. If this is the case, competition policy can be interpreted as an additional tool for macro-prudential regulation aimed at preventing systemic crises. We build an agent-based computational laboratory of an interbank network and employ three different types of M&A strategies as network-changing devices, in order to evaluate their effect on the resilience of the system. Our results suggest that topologies are not all alike: more specifically, it appears that a concentrated and yet asymmetric system is better geared to cope with an external shock. By contrast, concentrated and symmetric markets turn out to be in fact more fragile than a competitive one. The extent of the damage to the system depends on the exposure to interbank claims, the degree of connectivity, the structure of the network and capital requirements. In addition, we put forward the idea that capital requirements should be network-varying. Different shock-amplifying dynamics are observed because flat capital requirements force an inefficient allocation of net worth within the system. For example, it turns out that large banks are forced to hold too much capital whereas small institutions have less than what is necessary, and this misallocation renders the system less resilient. Once we introduce network-varying capital requirements, the robustness of the system improves and this aligns the performance of different topologies. The clear policy implication is that the regulator should closely monitor the structure of the network and its evolution over time, because policy on capital requirements is sensitive to it and one size does not fit all. We need to improve our effort towards the production of reliable and up-to-date data that allow us to map banking networks as precisely as possible.
16.00 - 16.45, Jean-Robert Tyran, University of Vienna, Austria (Lecture Hall CAB G 11)
The Economics of Money Illusion
Slides Video
Money illusion refers to a tendency to think about economic transactions in terms of nominal rather than real values. While standard economics assumes that all economic agents are free from money illusion, increasing evidence suggests that thinking in nominal terms is common, that purely nominal changes can affect individual choices, and that money illusion can shape outcomes in labor, housing and asset markets. The lecture argues that experiments can be used to understand when money illusion matters for economic outcomes – and when it does not.
16.45 - 17.30, Lorenz Goette, University of Lausanne, Switzerland (Lecture Hall CAB G 11)
The Weave of Social Life: How Community Participation Shapes the Individual
Coauthors: Rene Algesheimer and Ernst Fehr
Video
Does society shape individuals? Examining this question is difficult, as individuals influence the collective just as the collective may influence the individual. We use a large-scale field experiment to solve this causality problem and show that groups with stronger community participation render their members generally more altruistic and trusting towards anonymous strangers. Moreover, stronger community participation also causes a boost in trust towards those who reciprocate favours, thus generating stronger implicit punishment for untrustworthy individuals. Increased community participation enhances the strategic sophistication of individuals and raises the prevalence of Machiavellian strategies.

Thursday, September 13

Systemic Risk
9.00 - 9.45, Carsten Detken, European Central Bank, Germany (Lecture Hall CAB G 11)
Is early warning against systemic risk feasible? The ECB’s newly developed analytical support to the European Systemic Risk Board
Slides Video
The lecture will address the question whether the construction of early warning systems against systemic risks might not be a futile attempt to safeguard financial stability, as the orthodox academic scepticism towards the early warning literature might suggest. The position taken here is that a careful optimism is defendable, due to lessons learned, methodological advances, improvements in data availability, as well as policy makers' changed attitude towards type I versus type II errors. The newly developed risk identification approach as well as some examples of models and tools employed by the ECB to provide analytical support to the European Systemic Risk Board will be presented. Experience with different models, with hindsight, reveals the usefulness of some structural indicators, like global credit gaps, and the uselessness of market-price-based indicators for early warning purposes.
9.45 - 10.30, Joseph E. Stiglitz, Columbia University, USA (Lecture Hall CAB G 11)
Crisis, Contagion, and the Need for a New Paradigm
Slides Video
Joseph E. Stiglitz is Professor of Economics at Columbia University New York and recipient of the 2001 Nobel Memorial Prize in Economic Sciences. He is not only one of the most influential academic economists of the last decades, but also has been the Chairman of the Council of Economic Advisers during the Clinton administration, Chief Economist of the World Bank and member of the Intergovernmental Panel on Climate Change.
11.00 - 12.00, Panel Discussion (Lecture Hall CAB G 11)
Systemic Risk: Are There Lessons To Be Learned?
Video
Jürg Blum (Swiss National Bank), Rama Cont (Imperial College London), Carsten Detken (European Central Bank), Peter Fischer (Neue Zürcher Zeitung), Jean Charles Rochet (University of Zurich), Didier Sornette (ETH Zurich), Joseph E. Stiglitz (Columbia University, New York) and Frank Schweitzer (ETH Zurich) will discuss the role of systemic risks in a modern, highly interconnected world. In particular, the question "Are There Lessons To Be Learned?" will be addressed, triggered by, but not limited to, the recent economic crisis.
14.00 - 14.45, Jean Charles Rochet, Swiss Finance Institute and University of Zurich, Switzerland (Lecture Hall CAB G 11)
Taming Systemically Important Financial Institutions
Coauthors: Xavier Freixas (UPF Barcelona)
Slides Video
We model a Systemically Important Financial Institution (SIFI) that is too big (or too interconnected) to fail. Without credible regulation and strong supervision, the shareholders of this institution might deliberately let its managers take excessive risk. We propose a solution to this problem, showing how insurance against systemic shocks can be provided without generating moral hazard. The solution involves levying a systemic tax needed to cover the costs of future crises and more importantly establishing a Systemic Risk Authority endowed with special resolution powers, including the control of bankers' compensation packages during crisis periods.
14.45 - 15.30, Rama Cont, Imperial College London, UK (Lecture Hall CAB G 11)
Channels of Contagion: Identifying and Monitoring Systemic Risk in the Financial System
Slides Video
The recent financial crisis has simultaneously underlined the importance of systemic risk and the absence of an appropriate framework for assessing, monitoring and regulating it. Modeling systemic risk requires changing the traditional focus of risk modeling and examining the structure and stability of the financial system as a whole, with special attention given to contagion mechanisms which may lead to large-scale instabilities in the financial system. We present some recent work on the quantitative modeling of systemic risk, focusing on three key channels for financial contagion: balance sheet contagion, illiquidity cascades and price-mediated contagion generated by feedback effects. Finally, we discuss the implications of these results for the monitoring of systemic risk.
16.00 - 16.30, Matteo Luciani, ECARES - Université libre de Bruxelles, Belgium (Lecture Hall CAB G 11)
Ranking systemically important institutions
Coauthors: Mardi Dungey (Cambridge and Tasmania Univ.) and David Veredas (ECARES - Univ. libre de Bruxelles)
Slides Video
Based on the definition of systemic risk given by Jean-Claude Trichet at Clare College in Cambridge (Dec. 2009), we propose a simple methodology for ranking systemically important institutions. We incorporate both the cross-sectional aspects of risks through firms' interrelations and the time series aspects of the evolution of this interconnectedness over time. We view firms' risks as a network with vertices equal to the volatility shocks and edges equal to their correlations. Dynamic centrality measures allow us to rank the firms in terms of risk connectedness and firm characteristics. The resulting global systemic risk (GS) measure from applying this approach to all firms in the S&P500 for 2003-2011 reveals that the systemic risk in the financial sector stocks peaked in September 2008, but was greatly reduced by the introduction of TARP. Anxiety about European debt markets saw the systemic risk begin to rise again from April 2010. We further decompose these results to find that the systemic risk of insurance and deposit-taking institutions differs importantly: the latter experienced generally declining systemic risk from late 2007, in line with the burst of the housing price bubble, while risk for insurance companies continued to climb up to the rescue of AIG. Our systemic risk index emphasises interconnectedness: a comparison of this with the capital shortfall approach of Brownlees and Engle (2011) shows that while risk due to interconnectedness declined post September 2008, capital shortfall risk remained at sustained levels. The two approaches offer complementary information. Further, we show the importance of including the interconnectedness of the financial sector with firms in the real economy in producing measures of systemic risk.
16.00 - 16.30, Kimmo Soramäki, Financial Network Analytics, Spain (Lecture Hall CAB G 51)
Identifying Systemically Important Banks in Payment Systems
Coauthors: Samantha Cook, PhD
Slides Video
The ability to accurately estimate the extent to which the failure of a bank disrupts the financial system is very valuable for regulators of the financial system. One important part of the financial system is the interbank payment system. The paper develops a robust measure, SinkRank, that not only accurately predicts the magnitude of disruption caused by a given bank in a payment system, but also informs about which banks are most affected by the failure. In interbank payment networks banks (nodes) transfer payments related to customer requests or their own trading along directed links of the network. When a payment is made the money is no longer available to the sender, and the receiver of the funds can make a payment to any other bank in the system. The transfer process takes place along walks in the network, as any bank can pay any other multiple times without constraints. Traditional measures of centrality that have been developed in network theory with other types of processes in mind (e.g. processes transmitted along geodesic paths or trails, or processes based on duplication instead of transfer) are not able to accurately identify central nodes in systems based on transfers along walks and with the feedback loops present in payment systems. SinkRank is based on absorbing Markov chains, which are well suited to model transfer processes along walks in a network. An absorbing state is a state from which there is a zero probability of exiting. The theory accurately reflects the process of bank failure in a payment system: any funds sent to the failing bank stay in its account until the bank resumes operations. Because actual bank failures are rare and the data is not generally publicly available, the metric is tested by simulating payment networks and inducing failures in them. In the simulations each bank is set in turn to be unable to send any payments during the day. The failing bank continues, however, to receive payments and traps some of the total liquidity on its account, becoming a sink. As a consequence other banks run short of liquidity and queues build up, first causing existing liquidity buffers to be used more and eventually causing payments to be delayed. We use two metrics to evaluate the magnitude of the disturbance: first, the duration of delays in the system aggregated over all banks (‘Congestion’), and second, the average reduction in available funds of the other banks due to the failing bank (‘Liquidity Dislocation’). We test the measure on Barabasi-Albert types of scale-free networks, random networks and lattice networks. We find that the SinkRank of a node correlates very strongly, and more strongly than the other topological measures considered, with both the Congestion and the Liquidity Dislocation caused by its simulated failure.
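A hedged sketch of the absorbing-Markov-chain machinery behind a SinkRank-style measure: the failing bank is treated as an absorbing state of the payment-transfer chain, and the fundamental matrix gives the expected number of transfer steps before funds are trapped. The toy transition matrix and the final aggregation (inverse mean absorption time) are plausible choices for illustration, not necessarily the published definition.

```python
import numpy as np

def absorption_times(P: np.ndarray, sink: int) -> np.ndarray:
    """Expected number of transfer steps before funds starting at each
    non-sink bank are absorbed by the failing bank 'sink'.
    P is a row-stochastic matrix of payment-transfer probabilities."""
    keep = [i for i in range(P.shape[0]) if i != sink]
    Q = P[np.ix_(keep, keep)]                 # transitions among surviving banks only
    N = np.linalg.inv(np.eye(len(keep)) - Q)  # fundamental matrix of the absorbing chain
    return N.sum(axis=1)                      # expected steps to absorption

def sinkrank_like(P: np.ndarray, sink: int) -> float:
    """Higher values = funds get trapped faster = more disruptive failure.
    (One plausible aggregation; the published SinkRank may differ in detail.)"""
    return 1.0 / absorption_times(P, sink).mean()

# Toy 4-bank payment network (rows sum to 1):
P = np.array([[0.0, 0.5, 0.3, 0.2],
              [0.4, 0.0, 0.4, 0.2],
              [0.3, 0.3, 0.0, 0.4],
              [0.2, 0.3, 0.5, 0.0]])
for bank in range(4):
    print(f"bank {bank}: sinkrank-like score = {sinkrank_like(P, bank):.3f}")
```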
16.30 - 17.00, Giovanni di Iasio, Bank of Italy, Italy (Lecture Hall CAB G 11)
Contagion in Financial Networks
Coauthors: Stefano Battiston, Luigi Infante, Federico Pierobon
Video
A default of a bank has cascade effects in a financial network in which entities are tightly intertwined. The cascade may propagate sequentially with additional defaults, from close neighbors to distant banks. Many contributions show that banking systems seem to be fairly stable to contagion via credit risk, as very large shocks are needed to simulate cascades of a meaningful size. We use a novel method, DebtRank, from previous contributions of one of the authors, to assess the centrality of a bank in a network, accounting for the propagation of distress even in the absence of defaults in the cascade. Indeed, an event that weakens the balance sheet of a bank j has a negative spillover on the balance sheets of the claim-holders of j (contagion through distress). In this respect, the centrality of a bank is (i) proportional to its relative exposure toward the source of distress and (ii) dependent on its financial soundness. DebtRank solves the infinite reverberation problem typical of contagion in networks with loops. We estimate the total potential loss to the financial system caused either by an initial default of a single institution or by a common shock to several institutions. The method also allows us to find candidate subsets of institutions that, together, may constitute systemically important groups. We use a unique dataset of supervisory reports to the Bank of Italy that includes (i) bilateral exposures (secured and unsecured, short and long term) between all Italian banks, (ii) the links with major foreign financial institutions and (iii) balance sheet data (capital, total and encumbered assets, ...).
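A hedged sketch of distress propagation in the spirit of DebtRank: distress levels spread along an impact matrix, each node propagates only once to avoid infinite reverberation around loops, and the result is a value-weighted measure of the distress added to the system. The impact matrix and economic values below are toy inputs, and details may differ from the published algorithm.

```python
import numpy as np

def debtrank(W: np.ndarray, v: np.ndarray, shocked, h0: float = 1.0) -> float:
    """Distress propagation in the spirit of DebtRank.

    W[i, j] -- impact of i's distress on j (e.g. the exposure of j toward i
               relative to j's capital, capped at 1)
    v       -- relative economic value of each node (sums to 1)
    shocked -- indices of the initially distressed institutions
    Each node propagates its distress only once, which avoids infinite
    reverberation around loops in the network."""
    n = W.shape[0]
    h = np.zeros(n)                          # distress levels in [0, 1]
    h[shocked] = h0
    active = set(shocked)                    # nodes whose distress still has to propagate
    ever_active = set(shocked)
    while active:
        increment = np.zeros(n)
        for i in active:
            increment += h[i] * W[i]         # distress that i passes on to its creditors
        h = np.minimum(1.0, h + increment)
        active = set(np.nonzero(increment)[0]) - ever_active
        ever_active |= active
    # Value-weighted distress added to the system beyond the initial shock.
    return float(h @ v - h0 * v[list(shocked)].sum())

# Toy example with three banks of equal economic value:
W = np.array([[0.0, 0.6, 0.0],
              [0.2, 0.0, 0.8],
              [0.0, 0.1, 0.0]])
v = np.ones(3) / 3
print("DebtRank of bank 0:", round(debtrank(W, v, shocked=[0]), 3))
```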
16.30 - 17.00, Co-Pierre Georg, UC3M & Oxford University, United Kingdom (Lecture Hall CAB G 51)
Financial Linkages, Macroprudential Policy, and Systemic Risk
Coauthors: Silvia Gabrieli, Banque de France
Slides Video
With the financial crisis of 2007/2008 systemic risk took center stage and challenged our understanding of a financial system that has become highly interconnected and increasingly complex. Policy makers and academics alike are faced with the key task of developing new models of systemic risk that account for agent heterogeneity, interconnectedness, and complexity. In recent years, financial networks and agent-based models have gained increasing attention as tools to model and understand systemic risk. In this paper we analyse the interplay of different forms of systemic risk and assess the effectiveness of macroprudential measures to facilitate financial stability. We develop a multi-agent simulation of the banking system that features all relevant forms of systemic risk: interbank contagion caused by counterparty risk; endogenously generated fire sales caused by common asset holdings; and information contagion triggered by either an initial bank default or an ongoing fire sale. The novelty of our contribution is the simultaneous occurrence of various sources of financial fragility, which allows us to take feedback effects between the different forms of systemic risk into account. In addition, we allow for varying macroeconomic conditions during the course of a simulation, analysing the effect of financial fragility building up in good times and manifesting during a recession. Hence, our model helps to bridge the gap between the time dimension of systemic risk (i.e. how it builds over time) and the cross-sectional dimension (i.e. how it spreads when a shock hits the system). We use our framework to assess the effectiveness of various macroprudential measures, including countercyclical capital requirements, different liquidity ratios, a leverage ratio, and surcharges for systemically important financial institutions. We model agents as optimising a portfolio of risky real assets (i.e. loans to the real economy), risky financial assets (i.e. interbank loans and repurchase agreements) and riskless assets (i.e. cash or US treasury bonds). Agent heterogeneity is introduced through varying risk, return, and liquidity preferences. When a shock hits the system, (myopic) agents optimally rebalance their portfolios. This endogenously changes the interbank network structure and the correlations of banks' portfolios originating from common asset holdings. Information contagion emerges whenever a shock hits the system. This gives rise to feedback effects aggravating interbank market freezes, credit crunches (i.e. substantially reduced investment in real assets), fire sales, and interbank contagion.
17.00 - 17.30, Vladimir Filimonov, ETH Zurich, D-MTEC, Switzerland (Lecture Hall CAB G 11)
Quantifying reflexivity in financial markets: towards a prediction of flash crashes
Coauthors: Didier Sornette
Slides Video
We introduce a new measure of activity of financial markets that provides direct access to their level of endogeneity. This measure quantifies how much of the price changes are due to endogenous feedback processes, as opposed to exogenous news. For this, we calibrate the self-excited conditional Poisson Hawkes model, which combines in a natural and parsimonious way exogenous influences with self-excited dynamics, to the E-mini S&P 500 futures contracts traded in the Chicago Mercantile Exchange from 1998 to 2010. We find that the level of endogeneity has increased significantly from 1998 to 2010, with the fraction of price changes resulting from some revealed exogenous information falling from about 70% in 1998 to less than 30% since 2007. Analogous to nuclear plant safety concerned with avoiding "criticality", our measure provides a direct quantification of the distance of the financial market from a critical state, defined precisely as the limit of diverging trading activity in the absence of any external driving. This talk represents work with D. Sornette (PRE 85 (5), 2012: 056108).
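A hedged sketch of how a branching ratio (the "level of endogeneity") can be estimated for a Hawkes process with an exponential kernel: the endogenous fraction of events is alpha/beta and the exogenous fraction is 1 - alpha/beta. The simulation parameters and the plain maximum-likelihood fit are illustrative; the study calibrates the model to E-mini S&P 500 data with its own estimation procedure.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning for a Hawkes process with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0

    def intensity(s):
        return mu + alpha * np.exp(-beta * (s - np.array(events))).sum()

    while True:
        lam_bar = intensity(t)               # valid upper bound: intensity decays between events
        t += rng.exponential(1.0 / lam_bar)
        if t > T:
            return np.array(events)
        if rng.random() <= intensity(t) / lam_bar:
            events.append(t)

def neg_log_likelihood(params, times, T):
    mu, alpha, beta = params
    if mu <= 0 or alpha < 0 or beta <= 0:
        return np.inf
    A = np.zeros(len(times))                 # A[i] = sum_{j<i} exp(-beta (t_i - t_j)), via recursion
    for i in range(1, len(times)):
        A[i] = np.exp(-beta * (times[i] - times[i - 1])) * (1.0 + A[i - 1])
    log_lik = np.log(mu + alpha * A).sum()
    log_lik -= mu * T + (alpha / beta) * (1.0 - np.exp(-beta * (T - times))).sum()
    return -log_lik

T = 2000.0
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=T)   # true branching ratio 0.8/1.2 ~ 0.67
fit = minimize(neg_log_likelihood, x0=np.array([1.0, 0.5, 1.0]),
               args=(times, T), method="Nelder-Mead")
mu_hat, alpha_hat, beta_hat = fit.x
print("estimated branching ratio (endogenous fraction):", round(alpha_hat / beta_hat, 3))
print("estimated exogenous fraction:", round(1 - alpha_hat / beta_hat, 3))
```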
17.00 - 17.30, Andreas Krause, University of Bath, Great Britain (Lecture Hall CAB G 51)
The Role of Interbank Lending in the Prediction of Individual Bank Failures during a Banking Crisis
Coauthors: Simone Giansante
Slides Video
We analyze the determinants of individual bank failures arising from solvency and liquidity shortages in a stylized banking system following Krause/Giansante (2012, forthcoming JEBO) where banks are characterized by the amount of capital, cash reserves and their exposure to the interbank loan market as borrowers as well as lenders. A network of interbank lending is established that is used as a transmission mechanism for the failure of banks through the system. We trigger a potential banking crisis by exogenously failing a bank and then investigate the likelihood of an individual bank failing. Most notably we find that the probability of a bank failing depends on the characteristics of the network of interbank loans and the market structure, while balance sheet relationships are of limited influence. We also establish different determinants for failures arising from solvency and liquidity shortages.

Friday, September 14

Economic Networks
9.00 - 9.45, Giorgio Fagiolo, Scuola Superiore Sant'Anna Pisa, Italy (Lecture Hall CAB G 11)
The International Trade Network: Statistical Properties and Modeling
Slides Video
In recent years, complex-network analysis has been applied to several fields in economics, giving rise to a wide literature, both empirical and theoretical. In this talk, I will overview some recent work exploring the properties of the international-trade network (ITN), defined as the graph where nodes are world countries and links represent bilateral trade flows (imports or exports). I address five main questions: (1) Why may characterizing trade flows using a network representation be relevant for trade economists? (2) Can the knowledge of the ITN topological properties shed new light on issues like growth, globalization and trade integration? (3) Can we separate ITN topological properties that are the sheer outcome of randomness from those that are instead statistically significant? (4) Is the gravity model of trade able to replicate the observed ITN structure? (5) Can we explain the properties of the ITN in terms of standard economic forces such as country specialization and comparative advantage?
9.45 - 10.30, Sanjeev Goyal, Cambridge University, UK (Lecture Hall CAB G 11)
Network Resilience
Slides Video
Connections between individuals facilitate the exchange of goods, resources and information and create benefits. However, the connections are costly to create and also serve as conduits for the spread of attacks and viruses. What are the implications of this trade-off for network design and the nature of contagion in networks? The talk will present an overview of theoretical models and empirical studies of network resilience.
11.00 - 11.30, Sergio Souza, Banco Central do Brasil and UCB, Brazil (Lecture Hall CAB G 11)
Directed clustering coefficient as a measure of Systemic Risk in complex banking networks
Coauthors: Daniel O Cajueiro; Marcelo Takami; Jadson Rocha
Slides Video
Recent literature has focused on the study of systemic risk in complex networks. It is clear now, after the crisis of 2008, that the aggregate behavior of the interaction among the agents is not straightforward and is very difficult to predict. Contributing to this debate, this paper shows that the directed clustering coefficient may be used as a measure of systemic risk in complex networks. Furthermore, using data from the Brazilian interbank network, we show that the directed in-clustering coefficient is negatively correlated with domestic interest rates.
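For illustration, a minimal numpy sketch of directed clustering coefficients computed from an adjacency matrix, in the spirit of Fagiolo's (2007) generalization; whether this matches the exact variant used in the paper is an assumption.

```python
import numpy as np

def directed_clustering(A: np.ndarray):
    """Directed clustering coefficients from a binary adjacency matrix A
    (A[i, j] = 1 if there is a link i -> j). Returns the 'total' and 'in'
    variants in the spirit of Fagiolo (2007); other conventions exist, so
    treat this as one plausible choice."""
    A = (A > 0).astype(float)
    np.fill_diagonal(A, 0)
    d_in, d_out = A.sum(axis=0), A.sum(axis=1)
    d_tot = d_in + d_out
    d_bi = np.diag(A @ A)                       # number of bilateral links per node
    S = A + A.T
    with np.errstate(divide="ignore", invalid="ignore"):
        c_tot = np.diag(S @ S @ S) / (2 * (d_tot * (d_tot - 1) - 2 * d_bi))
        c_in = np.diag(A.T @ A @ A) / (d_in * (d_in - 1))
    return np.nan_to_num(c_tot), np.nan_to_num(c_in)

# Toy directed interbank network:
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [0, 1, 0, 0]], dtype=float)
c_tot, c_in = directed_clustering(A)
print("total clustering:", np.round(c_tot, 3))
print("in-clustering:   ", np.round(c_in, 3))
```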
11.00 - 11.30, Andrea Tacchella, La Sapienza - University of Rome, Italy (Lecture Hall CAB G 51)
A New Metric for the Economic Complexity of Countries and Products
Coauthors: Andrea Tacchella, Matthieu Cristelli, Luciano Pietronero
Slides Video
We discuss a new approach to the complexity of countries and products in the spirit of the recent work by Hidalgo and Hausmann (PNAS 2009). The basic information is represented by the matrix of countries and exported products. The standard economic analysis is essentially based on the GDP, but the diversification of this into a series of different products provides an additional element of fitness, in the spirit of biodiversification in a fluctuating environment. According to the standard analysis, the specialization of countries towards certain specific products should be optimal, but this is valid only in a static condition. The strongly dynamical situation of the world market suggests that flexibility and adaptability are even more important. We propose a novel metric, defined as the fixed point of two nonlinear coupled maps, which is able to quantify these qualitative observations. Our new metric has the following fundamental properties: 1. It defines a Fitness for countries and a Complexity for products which are improved by iteration but always keep their original meaning. 2. The iteration adds complexity to the distributions, which become broad and Pareto-like. 3. Test cases show that the fixed point of the iteration is weakly perturbed by noise. This is crucial as real export data are unavoidably noisy. 4. Test cases and real applications are strongly improved with respect to previous approaches. The information provided by this new metric can be used in various ways. The direct comparison of the Fitness with the country GDP gives an assessment of the non-expressed potential of the country. Also, for each country it is possible to define the Complexity of the products exported and how competitive this country is with respect to the other countries which produce the same product. The behavior of the countries in this new space is rather heterogeneous for different groups of countries. This heterogeneity is crucial to identify a predictive power for the GDP or for the stock indices. The method also permits a scientific test of the ratings, and the new variables are shown to be far superior to the standard ratings in identifying risky situations long before the collapse.
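A hedged sketch of the coupled nonlinear maps described above, iterated on a toy country-product matrix; the uniform initial conditions, per-step normalization, and iteration count are practical choices for illustration rather than part of the metric's definition.

```python
import numpy as np

def fitness_complexity(M: np.ndarray, n_iter: int = 100):
    """Iterate the two coupled nonlinear maps on a binary country-product
    matrix M (countries x products): country fitness is the complexity-weighted
    diversification, and product complexity is penalized by low-fitness exporters."""
    n_countries, n_products = M.shape
    F = np.ones(n_countries)
    Q = np.ones(n_products)
    for _ in range(n_iter):
        F_new = M @ Q                          # fitness: sum of complexities of exported products
        Q_new = 1.0 / (M.T @ (1.0 / F))        # complexity: dominated by the least-fit exporters
        F = F_new / F_new.mean()               # normalize at every step
        Q = Q_new / Q_new.mean()
    return F, Q

# Toy example: 4 countries, 5 products (1 = country exports the product competitively)
M = np.array([[1, 1, 1, 1, 1],
              [1, 1, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 0]])
F, Q = fitness_complexity(M)
print("country fitness:   ", np.round(F, 3))
print("product complexity:", np.round(Q, 3))
```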
11.30 - 12.00, Xiaobing Feng, Shanghai Jiaotong University, SHIFT, PR China (Lecture Hall CAB G 11)
Measurement and Internalization of Systemic Risk in a Global Bank Trading & Clearing Network
Coauthors: Haibo Hu, Matt Pritsker and Beomjun Kim
Slides Video
The negative externalities from an individual bank failure to the whole system can be huge. One of the key purposes of bank regulation is to internalize the social costs of potential bank failures via capital charges. This study proposes a method to evaluate and allocate the systemic risk to different countries using an SIR type of epidemic spreading model and the Shapley value in game theory. The paper also explores features of a constructed bank network using real globe-wide banking data. The major findings are that the magnitude of the systemic risk at the national level is related to the degree distribution of a bank in a nonlinear fashion. To be more specific, it depends on whether the network is more heterogeneous, such as a scale-free network, or more homogeneous, such as an exponential or even a regular network. The constructed global banking network includes over 30,000 public and private overseas banks all over the world. The systemically important institutions are identified. The detected modularity of the global network indicates that geographical location still plays a role in shaping the communities. The systemic risk is internalized by capital charges required from each country. The capital charge is evaluated based on the country-level systemic risk. A type-two Holling function is used to convert systemic risk to capital charge. Finally we suggest that individual risk control policy should be combined with systemic risk control policy to maintain the stability of the system, neither of which can be ignored. This advice differs from the current policy stance that emphasizes only the safety of individual banks.
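A hedged sketch of the SIR-type spreading ingredient of this approach, run on a toy scale-free network; the infection and recovery rates are illustrative assumptions, and the Shapley-value allocation and Holling-function capital charge described in the abstract are not covered by this sketch.

```python
import numpy as np
import networkx as nx

def sir_on_network(G: nx.Graph, seeds, beta: float = 0.1, gamma: float = 0.3,
                   max_steps: int = 1000, rng_seed: int = 0):
    """Discrete-time SIR spreading on a bank network: each infected (distressed)
    node infects each susceptible neighbour with probability beta per step and
    recovers with probability gamma per step. Returns the set of ever-infected nodes."""
    rng = np.random.default_rng(rng_seed)
    susceptible = set(G.nodes()) - set(seeds)
    infected = set(seeds)
    recovered = set()
    for _ in range(max_steps):
        if not infected:
            break
        newly_infected = set()
        for i in infected:
            for j in G.neighbors(i):
                if j in susceptible and rng.random() < beta:
                    newly_infected.add(j)
        newly_recovered = {i for i in infected if rng.random() < gamma}
        susceptible -= newly_infected
        infected = (infected - newly_recovered) | newly_infected
        recovered |= newly_recovered
    return recovered | infected

# Toy heterogeneous (scale-free-like) banking network:
G = nx.barabasi_albert_graph(500, 2, seed=1)
reached = sir_on_network(G, seeds=[0])
print("fraction of banks reached by the contagion:", len(reached) / G.number_of_nodes())
```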
11.30 - 12.00, Stefano Battiston, ETH Zurich, Switzerland (Lecture Hall CAB G 51)
The Network of Global Corporate Control
Coauthors: Stefania Vitali and James B. Glattfelder
Video
The structure of the control network of transnational corporations affects global market competition and financial stability. So far, only small national samples were studied and there was no appropriate methodology to assess control globally. We present the first investigation of the architecture of the international ownership network, along with the computation of the control held by each global player. We find that transnational corporations form a giant bow-tie structure and that a large portion of control flows to a small tightly-knit core of financial institutions. This core can be seen as an economic “super-entity” that raises new important issues both for researchers and policy makers.
12.00 - 12.30, Antonio Scala, CNR-ISC "La Sapienza", IMT Lucca and LIMS London, Italy (Lecture Hall CAB G 11)
Mitigating distress cascades in financial networks
Coauthors: Guido Caldarelli
Slides Video
We use a simple model of distress propagation (the sandpile model) to show how financial systems are naturally subject to the risk of systemic failures. Taking into account possible network structures among financial institutions, we investigate whether simple policies can limit financial distress propagation to avoid system-wide crises, i.e. to dampen systemic risk. We therefore compare different immunization policies (targeted help to financial institutions) and find that the information coming from the network topology allows us to mitigate systemic cascades by targeting just a few institutions. Furthermore, our analysis points out that “Rich Clubs” can significantly enhance the effects of targeted policies for securing the financial network. This result represents a highly controversial point from the perspective of policy makers trying to enforce a free market and to avoid oligopolies.
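A hedged sketch of a sandpile-style distress cascade on a network, comparing cascade sizes with and without "immunizing" (targeted help to) the most connected institutions; thresholds equal to node degree, the dissipation rate, and the toy network are illustrative assumptions rather than the model's actual calibration.

```python
import numpy as np
import networkx as nx

def sandpile_cascades(G: nx.Graph, n_grains: int = 10000, dissipation: float = 0.05,
                      immune=(), seed: int = 0):
    """Sandpile-style toy model of distress propagation: a node topples when its
    load reaches its degree, passing one unit to each neighbour; each transferred
    unit is lost with probability `dissipation`, and immunized nodes absorb
    distress without toppling. Returns the avalanche size for each dropped grain."""
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    threshold = dict(G.degree())
    load = {v: 0 for v in nodes}
    sizes = []
    for _ in range(n_grains):
        v = nodes[rng.integers(len(nodes))]
        load[v] += 1
        toppled = 0
        unstable = [v] if load[v] >= threshold[v] and v not in immune else []
        while unstable:
            u = unstable.pop()
            if load[u] < threshold[u] or u in immune:
                continue
            load[u] -= threshold[u]
            toppled += 1
            for w in G.neighbors(u):
                if rng.random() < dissipation:
                    continue                      # distress dissipated outside the system
                load[w] += 1
                if load[w] >= threshold[w] and w not in immune:
                    unstable.append(w)
            if load[u] >= threshold[u]:
                unstable.append(u)
        sizes.append(toppled)
    return np.array(sizes)

# Compare cascade sizes with and without protecting the most connected nodes:
G = nx.barabasi_albert_graph(300, 3, seed=2)
hubs = sorted(G.nodes(), key=G.degree, reverse=True)[:5]
print("mean avalanche size, no protection:   ", sandpile_cascades(G).mean())
print("mean avalanche size, 5 hubs protected:", sandpile_cascades(G, immune=set(hubs)).mean())
```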
12.00 - 12.30, Serafin Martinez-Jaramillo, Banco de Mexico
An Empirical Study of the Mexican Banking System's Network and its Implications for Systemic Risk
Coauthors: Biliana Alexandrova-Kabadjova, Bernardo Bravo-Benitez and Juan Pablo Solorzano-Margain
Slides Video
With the purpose of measuring and monitoring systemic risk, some topological properties of the interbank exposures and payments system networks are studied. We propose non-topological measures which are useful to describe the individual behavior of banks in both networks. The evolution of such networks is also studied and some important conclusions from the systemic risk perspective are drawn. A unified measure of interconnectedness is also created. The main findings of this study are: the payments system network is strongly connected, in contrast to the interbank exposures network; the type of exposures and payment size reveal different roles played by banks; the behavior of banks in the exposures network changed considerably after Lehman's failure; and the interconnectedness of a bank, estimated by the unified measure, is not necessarily related to its asset size.
14.00 - 14.30, Hamed Amini, EPFL (Swiss Finance Institute), Switzerland
Contagious Defaults in Financial Networks
Coauthors: Rama Cont and Andreea Minca
Slides Video
Propagation of balance-sheet or cash-flow insolvency across financial institutions may be modeled as a cascade process on a network representing their mutual exposures. In the first part, we derive rigorous asymptotic results for the magnitude of contagion in a large financial network and give an analytical expression for the asymptotic fraction of defaults, in terms of network characteristics. Our results extend previous studies on contagion in random graphs to inhomogeneous directed graphs with a given degree sequence and arbitrary distribution of weights. We introduce a criterion for the resilience of a large financial network to the insolvency of a small group of financial institutions and quantify how contagion amplifies small shocks to the network. Our results emphasize the role played by 'contagious links' and show that institutions which contribute most to network instability in case of default have both large connectivity and a large fraction of contagious links. The asymptotic results show good agreement with simulations for networks with realistic sizes. This part of the talk is based on joint work with Rama Cont and Andreea Minca. In the second part, we consider the problem of a lender of last resort who seeks to minimize the magnitude of contagion under budget constraints. In the case where the lender observes the interbank exposures progressively, as banks report their exposures to banks in default, we can model distress propagation under intervention as a Markov decision process. We find the optimal intervention policy as a result of Hamilton-Jacobi-Bellman equations. Our results show that, in the case of non-anticipative information, the optimal strategy depends in a non-linear way on the fraction of banks that use short-term funding. This part is based on joint work with Jean-Philippe Chancelier, Andreea Minca and Agnes Sulem.
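A hedged sketch of a threshold default cascade on an exposure network: a bank defaults once its losses on exposures to defaulted counterparties exceed its capital. The random network, unit exposures, and full loss-given-default are simplifying assumptions for illustration; the talk's results are rigorous asymptotics for far more general degree and weight distributions.

```python
import numpy as np
import networkx as nx

def default_cascade(exposures: np.ndarray, capital: np.ndarray, initial: list) -> set:
    """Threshold contagion on an exposure matrix: exposures[i, j] is the amount
    bank i is owed by bank j (lost in full if j defaults, a simplifying assumption).
    A bank defaults when its cumulative losses exceed its capital."""
    n = exposures.shape[0]
    defaulted = set(initial)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in defaulted:
                continue
            loss = sum(exposures[i, j] for j in defaulted)
            if loss > capital[i]:
                defaulted.add(i)
                changed = True
    return defaulted

# Illustrative random exposure network:
rng = np.random.default_rng(3)
n = 200
A = nx.to_numpy_array(nx.gnp_random_graph(n, 0.03, directed=True, seed=3))
exposures = A * 1.0                      # unit exposure on every lending link (assumption)
capital = rng.uniform(0.5, 2.5, size=n)  # heterogeneous capital buffers (assumption)
hit = default_cascade(exposures, capital, initial=[0])
print(f"{len(hit)} of {n} banks default after the initial failure of bank 0")
```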
14.30 - 15.00, Frank Page, Indiana University, USA
Rollover Risk and Endogenous Network Dynamics
Coauthors: Jose Pedro Fique
Slides Video
One of the most striking phenomena of the 2007-2009 financial crisis was the rapidity with which liquidity in the interbank markets dried up, especially in long-term maturities. In network literature terminology, the once dense interbank network that allowed highly liquid banks to channel liquidity to those banks with investment opportunities transited to a sparse architecture. The sudden failure of the once well-functioning interbank loan network during the recent financial crisis has given momentum to the movement toward major, worldwide regulatory reform to minimize the possibility of another interbank network failure and to make the financial network more robust. The shortcomings of the regulatory framework exposed by the crisis led to the design and implementation of new instruments aimed at ensuring the stability of the financial system. One of these instruments, now in use in several European countries, is a special banking levy/tax. The levy/tax aims not only to raise funds to reduce the cost to taxpayers incurred with past (or future) rescues of the financial infrastructure but also to provide banks with the correct incentives for risk taking. The purpose of this paper is twofold: (i) to analyze within a dynamic network formation game how macroeconomic conditions, such as investors' risk appetite, affect rollover decisions and (ii) to determine the effects that a levy has on the endogenous dynamics of network formation. We find that because the existence of linkages between market participants generates an informational externality, the newly formed network is strongly conditioned by past network architectures. Simulations show that this inertia is strongly dependent on macroeconomic conditions, such as investors' risk appetite. The numerical exercises reveal that for intermediate values of the risk appetite parameter, the inability to maintain a threshold number of linkages may push the market into a gridlock. Moreover, this tipping-point property implies that the recovery from a market freeze requires good conditions of a magnitude considerably greater than the magnitude of the bad conditions that precipitated the crisis in the first place, leading to a network-induced inertia. Finally, since we model the special banking levy as a cost to the activation of interbank connections, we find that a substantial decline in the tax burden is required in order to re-start lending activity when the market experiences severe stress. Thus, while we find that banking levy instruments play an important role in aligning private and social incentives for risk taking, and therefore constitute an important part of the regulatory landscape, if the activation of each interbank connection carries an externality that is particularly onerous during periods of financial stress, the weight of this instrument (i.e., banking levies) should be counter-cyclical, as are the recent capital requirement proposals supported both by academia and regulatory bodies.
15.00 - 15.30, Sam Langfield, European Systemic Risk Board, Germany and UK
Mapping the UK interbank market
Coauthors: Tomohiro Ota and Zijun Liu
Slides Video
This paper describes the features of the UK interbank system, using a newly available regulatory dataset on counterparty-level interbank exposures. To our knowledge, this dataset is the most granular representation of a large interbank market available worldwide. We present recently developed metrics which characterise the network from the point of view of financial-system stability. We pay particular attention to four complexities. Firstly, the network exhibits multiple layering: nodes are connected by up to 150 types of financial instruments, including prime lending, fixed income, CDS, repos, derivatives and others, at a spectrum of maturities. Secondly, each link is directed from bank A to bank B. Thirdly, each link has a weight, which corresponds to the pound sterling value of the exposure or funding source. Fourthly, nodes are diverse in their balance sheet characteristics. Comprehensive matching between the interbank exposures dataset, a regulatory balance-sheet dataset and public data allows us to capture this heterogeneity. The interbank network clearly exhibits a 'hub and spoke' structure. Most of the 176 banks resident in the UK are exposed to a handful of money-centre banks. We infer that the UK interbank system is 'robust yet fragile': it is resilient against random shocks, but vulnerable to targeted attacks. We conclude by suggesting avenues for future research, particularly on how financial policy might respond to such network structure in order to improve financial-system stability.
16.00 - 16.45, Fernando Vega-Redondo, European University Institute Florence, Italy
Globalization and Social Networks
Slides Video
We propose a stylised dynamic model to understand the role of social networks in the phenomenon we call "globalization." This term refers to the process by which even agents who are geographically far apart come to interact, thus overcoming what would otherwise be a fast saturation of local opportunities. A key feature of our model is that the social network is the main channel through which agents search and exploit new opportunities. Thus only if the social network becomes global (heuristically, "reaches far") can global interaction be steadily sustained. To shed light on the conditions under which such a transformation may, or may not, take place is the main objective of the paper. One of our interesting insights is that in order for a local social network to turn global, the economy needs to display a degree of "geographical cohesion" that is neither too high (for then global opportunities simply do not arise) nor too low (in which case there is too little social structure for the process to take off). And if globalization does arise, we show that it often occurs abruptly and consolidates as a robust state of affairs. We also show how it is affected by improvements in the rate at which information travels in the network, or the range at which the social network helps to monitor behavior.
16.45 - 17.30, Pier-Paolo Saviotti, Universität Hohenheim, Germany
Networks of Knowledge
Slides Video
In this paper the representation of the knowledge base of firms, research organizations or fields of knowledge as a network will be described. Networks of this type have nodes constituted by units of knowledge defined at a given level of aggregation and links determined by the interactions of such units. Examples of such units are the technological classes associated with patents or the themes that can be identified in scientific publications or patents. The paper will describe the application of this approach to the dynamics of knowledge in firms, research organizations or fields of knowledge by mapping the changes occurring in the structure of knowledge and by measuring some relevant properties of the knowledge bases, such as their coherence, variety or cognitive distance. Among other applications this approach can allow us to detect the presence of knowledge discontinuities, such as technological paradigms, and their impact on the behaviour of firms or research organizations.

September 11 to September 14

Poster Session
1) Sule Akkoyunlu, WTI University of Bern, Switzerland
Remittances and Financial Development: Is there a direct link? Evidence from Turkish Data
This study investigates whether there is a direct link between remittances and financial development, using time series data on Turkish workers’ remittances from Germany over five decades. Studies of the direct or indirect links between remittances, financial development and growth have given mixed results depending on the sample of countries and regions chosen. In this study, rather than analysing groups of developing countries or regions, we concentrate on a single developing country, Turkey, where financial markets are rather well developed. The Toda and Yamamoto (1995) procedure is adopted for testing the direct link, i.e. for Granger non-causality, as the variables may be integrated of arbitrary order and may not be cointegrated. We find no evidence of causality from remittances to financial development or from financial development to remittances. Possible reasons for this result are explored.
2) Giulia Andrighetto, ISTC-CNR and EUI, Italy
Self-Policing Through Norm Internalization. A Cognitive Solution to the Tragedy of the Digital Commons
Coauthors: Daniel Villatoro; Rosaria Conte; Jordi Sabater
In the seminal work “An Evolutionary Approach to Norms”, Axelrod identified internalization as one of the key mechanisms that support the spreading and stabilization of norms. This process entails several advantages for the maintenance of social order. For example, when norms are internalized, norm compliance is expected to be more robust than in cases where norms are external reasons for conduct. But how does this process work? Why and how do agents internalize social inputs, such as commands, values, norms and tastes, transforming them into endogenous states? What are the specific implications of different forms of norm compliance for society and its governance? These questions have received no conclusive answer so far. In particular, no explicit, controllable, and reproducible model of the process of internalization within the mind is available yet, and this work aims to fill this gap. Specifically, it advocates a rich cognitive model of different types, degrees and factors of internalization. Norm internalization is here characterized as a multi-step process, giving rise to more or less robust compliance and also allowing, in certain circumstances, automatic behaviours to be unblocked and deliberate normative reasoning to be restored. Fully operationalising a multi-level model of norm internalization requires a complex agent architecture. Unlike the vast majority of simulation models in which heterogeneous agents interact according to simple local rules, in our model agents are provided with normative mental modules, allowing them to internalize norms and to block the automatic normative behaviour when necessary. The effect of norm internalization is analyzed in a self-regulated scenario that recreates a “Tragedy of the Digital Commons”. We present a provider-consumer scenario, populated by providers that might suffer from the tragedy of the commons caused by the consumers’ over-exploitation. Agents have to be responsible for the appropriate consumption and sharing of the services in the absence of a centralized system in charge of regulating the market. Our simulation results show that when facing a “Tragedy of the Digital Commons” situation, agents able to internalize norms achieve higher levels of cooperation and prove more adaptive than agents whose decision-making is based on utility calculation.
3) George Bagaturia, International Black Sea University, Georgia
Bayes Methodology for the Forecasting of Global Economic Crisis
“He who does not think of distant difficulties is sure to face troubles close at hand.” – Confucius. There is a large body of economic research devoted to the problems of world economic development and the causes of economic crises, yet these problems have no real solution. The existing economic theories cannot explain the nature of modern economic relations or forecast economic crises. As a result, economic crises happen suddenly, and only afterwards do economists try to explain their causes. These explanations come too late; economic crises have grave consequences, especially for poor and small countries with transition economies. From the cybernetic point of view, forecasting economic crises is a problem of management under incomplete information. In this setting it is possible to conduct optimal management (i.e. to forecast economic crises) on the basis of factor analysis and consideration of the probabilistic characteristics of the “noise”. This research considers the possibility of forecasting economic crises on the basis of Bayesian methodology, which gives countries integrated into the global economy a chance to avoid or weaken the effects of a global economic crisis, and gives business companies a chance to craft an optimal development strategy during a crisis. For forecasting an economic crisis we can use Bayesian methodology as follows. Consider an event X (a crisis) that can be brought about by various causes Yi, i=1,2,...,n. Suppose we know the prior probabilities P(Yi) of these causes. The question is: how does the distribution over these causes change if a crisis occurs? For this purpose we can use Bayes’ formula and calculate the posterior probabilities P(Yi|X), i=1,2,...,n. It is thus possible to estimate more accurately the probability distribution over the causes of a crisis, P(Yi|X), on the basis of the information obtained, i.e. to forecast a crisis with a certain probability. For the practical use of this methodology it is necessary to list and estimate: the main causes of crises, Yi; the prior probabilities of these causes, P(Yi); and the conditional probabilities of a crisis occurring, P(X|Yi). This requires studying and analysing the environment in which the economic system functions, which is the goal of this project. As a result of the research, the following will be determined: the main causes of crises; the prior probabilities of these causes; the conditional probabilities of a crisis occurring; and the posterior probabilities of the causes given a crisis, i.e. the basis on which an economic crisis can be predicted.
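For reference, the Bayes formula invoked above can be written in its standard textbook form (not reproduced explicitly in the abstract) as
\[
P(Y_i \mid X) \;=\; \frac{P(X \mid Y_i)\,P(Y_i)}{\sum_{j=1}^{n} P(X \mid Y_j)\,P(Y_j)}, \qquad i = 1,\dots,n,
\]
so the posterior weight of each cause Y_i combines its prior probability P(Y_i) with the likelihood P(X | Y_i) of a crisis occurring under that cause.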
5) Hugues Bersini, IRIDIA, Belgium
Why Should the Economy Be Competitive?
Coauthors: Nicolas van Zeebroeck
Is a competitive free market the most efficient way to maximize welfare and to allocate scarce resources equitably among economic agents? Many economists tend to think this is the case. Our work is a preliminary attempt, through an object-oriented multi-agent model, to address the famous efficiency-equality trade-off. We aim to go beyond the common analysis of market efficiency in the agent-based computational economics literature in order to better tackle the distributive aspects. In our model, agents, which are alternately producers, sellers, buyers and consumers, participate in a market to increase their welfare (by consuming products endowed with a given utility). Two market models are compared: a competitive one, a classical double auction market in which agents attempt to outbid each other in order to buy and sell products, and a random one, in which products are allocated at random. The comparison focuses on aggregate wealth creation (a common study) and above all on inequalities in resource allocation (much less frequently addressed). We compute and compare in both cases the money and utility generated within the markets as well as the Gini index that characterizes the degree of inequality in the allocation of these values among agents. We also compare welfare creation and distribution with more or less intelligent agents who use more or less of the information provided by the market. The results of thousands of rather robust simulations are analyzed; our main findings are: 1) As is well known, the competitive structure of the market greatly improves the circulation, through prices, of information about consumers' tastes and needs. 2) In a market with no budgetary constraint, the competitive market is always more efficient in terms of aggregate utility: it creates more value overall. Nevertheless, if utility does not decrease with each consumption, the competitive market turns out to be very “unfair” compared with the random one, while, in the presence of decreasing marginal utility, the competitive market is as egalitarian as the random one. 3) In the case of no budgetary constraints (at the start of the simulations, all agents are provided with enough money to buy any desired product during the entire simulation) and in the presence of decreasing marginal utility, both efficiency and equality objectives are simultaneously maximized by the competitive market and no trade-off has to be deplored. 4) In the case of strong budgetary constraints, our results change dramatically. There the competitive market acts as a self-reinforcing inequality mechanism. Even in the presence of producers with very similar skills, tiny initial differences are amplified over time and only the most successful producers (who become the richest agents) can satisfy their consumption needs. By acknowledging the exponential inequalities created by a competitive market constrained in liquidity, we finally discuss its moral acceptability in the current philosophical literature, since very equal opportunities can nevertheless lead to uncontrolled and explosive inequalities.
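To make the inequality measure concrete, here is a minimal sketch (not the authors' code; the lognormal allocations are purely hypothetical stand-ins for the model's outputs) of a Gini-index computation applied to agents' final wealth in two market runs:

```python
import numpy as np

def gini(values):
    """Gini coefficient of a 1-D array of non-negative values (0 = perfect equality)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    # Equivalent formulation of the mean absolute difference: G = sum_i (2i - n - 1) x_i / (n * sum(x))
    index = np.arange(1, n + 1)
    return float(np.sum((2 * index - n - 1) * x) / (n * x.sum()))

# Hypothetical final-wealth distributions from a 'competitive' and a 'random' market run.
rng = np.random.default_rng(0)
competitive_wealth = rng.lognormal(mean=0.0, sigma=1.2, size=1000)
random_wealth = rng.lognormal(mean=0.0, sigma=0.4, size=1000)
print(gini(competitive_wealth), gini(random_wealth))
```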
6) Giacomo Bormetti, Scuola Normale Superiore, Italy
Stochastic Volatility with Heterogeneous Time Scales
Coauthors: Danilo Delpini
Agents' heterogeneity has been recognized as a driving mechanism for the persistence of financial volatility. We focus on the multiplicity of investment strategies' horizons; we embed this concept in a continuous-time stochastic volatility framework and prove that a parsimonious, two-scale version effectively captures the long memory as measured from real data. Since estimating parameters in a stochastic volatility model is a challenging task, we introduce a robust, knowledge-driven methodology based on the Generalized Method of Moments. In addition to volatility clustering, the estimated model also captures other relevant stylized facts, emerging as a minimal but realistic and complete framework for modeling financial time series. A preprint version of the manuscript is available at http://arxiv.org/abs/1206.0026.
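As an illustration of the general idea of heterogeneous volatility time scales (not the specific model of the paper, which is detailed in the preprint linked above), a generic two-timescale stochastic-volatility simulation might look as follows; all parameter values are hypothetical:

```python
import numpy as np

# Log-volatility is the sum of a fast and a slow Ornstein-Uhlenbeck factor;
# the slow factor is what produces long-lived autocorrelation of |returns|.
rng = np.random.default_rng(1)
n, dt = 50_000, 1.0 / 252                  # daily grid, hypothetical horizon
kappa_fast, kappa_slow = 50.0, 0.5          # mean-reversion rates (per year)
xi_fast, xi_slow = 1.0, 0.3                 # vol-of-vol of each factor
y_fast = y_slow = 0.0
returns = np.empty(n)
for t in range(n):
    y_fast += -kappa_fast * y_fast * dt + xi_fast * np.sqrt(dt) * rng.standard_normal()
    y_slow += -kappa_slow * y_slow * dt + xi_slow * np.sqrt(dt) * rng.standard_normal()
    sigma = 0.2 * np.exp(y_fast + y_slow)   # instantaneous volatility
    returns[t] = sigma * np.sqrt(dt) * rng.standard_normal()

# Autocorrelation of absolute returns decays slowly because of the slow factor.
print([round(np.corrcoef(np.abs(returns[:-k]), np.abs(returns[k:]))[0, 1], 3) for k in (1, 10, 100, 1000)])
```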
7) Joanna Bryson, University of Bath, United Kingdom
Costly Punishment and the Regulation of Public Goods Investment
Coauthors: Karolina Sylwester, James Mitchell, Simon Powers, Daniel Taylor
Behavioural economics often constructs situations where the advantages of public goods are limitless, but in the real world, where the heuristic strategies underlying social behaviour are learned and evolved, over-investment in a public good is a real possibility. We recently conducted a study seeking to explain the variation in anti-social punishment (ASP, the costly penalising of those who contribute to the public good) that has been observed across global regions. Herrmann, Thöni & Gächter (Science, 2008) showed, through a series of behavioural economics experiments conducted on undergraduates attending regionally leading universities, that individuals in Southern Europe (Athens, Istanbul), the Middle East (Riyadh, Muscat) and the former Soviet Union (Samara, Minsk, Dnipropetrovs'k) are more likely to punish those who contribute more to the public good than are subjects from Northern Europe (Bonn, Copenhagen, Zurich, St. Gallen, Nottingham, Boston (USA)) or the Far East (Seoul, Chengdu, Melbourne*). Previous studies on costly punishment have suggested that the altruistic punishment (AP) of free-riders might explain human cooperativeness, but we show that AP and free-riding are relatively invariant across cultures, while mixed strategies including ASP vary, and better correspond to the correlations with GDP and the rule of law found by Herrmann et al. We hypothesise that ASP is part of a distributed mechanism for regulating public goods investment to levels that are sensible given the socio-economic environment, based on individuals' (probably implicit) perception of that environment derived from recent local outcomes. The average level of public goods production is determined by a mixed population of over- and under-producers. The proportions of individuals playing each of these strategies are proximately controlled via psychological assessments of in-group / out-group status. We support this hypothesis via agent-based models and a meta-analysis of Herrmann et al.'s data. *Melbourne students expressed behaviour more like the Chinese and Korean cities than the UK or USA.
8) Andrzej Buda, Wydawnictwo Niezalezne, Poland
Hierarchical Structure in Phonographic and Financial Markets
I find a topological arrangement of assets traded in phonographic and financial markets which has a meaningful economic taxonomy associated with it. I use the Minimal Spanning Tree and the correlations between assets, extending the approach beyond stock markets: the record industry offers artists instead of stocks, and the value of an artist is defined by record sales. The graph is obtained starting from the matrix of correlation coefficients computed between the world's 30 most popular artists by considering the synchronous time evolution of the difference of the logarithm of weekly record sales. This method provides the hierarchical structure of the phonographic market and information on which music genres are meaningful according to customers. Statistical properties (including the Hurst exponent) of weekly record sales in the phonographic market are also discussed.
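A minimal sketch of the standard correlation-to-MST pipeline described above (illustrative only; synthetic series stand in for the weekly record-sales data):

```python
import numpy as np
import networkx as nx

# Synthetic stand-in for log-differences of weekly sales of 30 artists.
rng = np.random.default_rng(0)
series = rng.standard_normal((30, 200))

corr = np.corrcoef(series)                                    # correlation matrix C_ij
dist = np.sqrt(np.clip(2.0 * (1.0 - corr), 0.0, None))        # Mantegna's metric d_ij = sqrt(2(1 - C_ij))

g = nx.Graph()
n = dist.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        g.add_edge(i, j, weight=dist[i, j])
mst = nx.minimum_spanning_tree(g)                             # hierarchical backbone of the market
print(sorted(mst.edges(data="weight"))[:5])
```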
9) Gabriella Cagliesi, University of Greenwich, UK
A Multidisciplinary Approach to Male Worklessness: An Endogenous Switching Regime of the UK Labour Market
Coauthors: Denise Hawkes, Riccardo De Vita
In this empirical study we propose a new cross-disciplinary approach combining labour economics, behavioural economics (BE) and social network analysis (SNA) to explain agents' decisions over employment, non-employment and the various inactivity categories in the labour market. We abandon the traditional welfare approach and use the more general framework of capabilities and refined functionings proposed by Amartya Sen to interpret how different types of constraints, ranging from socioeconomic conditions and environmental background to specific individual idiosyncrasies, affect preferences and functionings. In our model, labour market opportunities, choices and achievements are all affected by the interrelations and interactions of individuals' demographic and psychological characteristics (such as age, gender, heuristics, perceptions, beliefs, attitudes, goals and ambitions) with external factors (such as geographical, socio-cultural and economic conditions). Using the British Household Panel data we estimate an endogenous switching regime model and a multinomial logit model of choices and decisions that account for the richness of these interactions, in order to investigate the labour market behaviour of a more "human" homo economicus. In line with standard results, we find that traditional factors such as human capital, material opportunities and income considerations affect labour market choices. However, our richer approach indicates that different labour market statuses are also the result of personal beliefs, values, social norms and other psychological and social factors. Overall our results indicate that a cross-disciplinary approach combining labour economics, behavioural economics and social network analysis can generate significant benefits in terms of policy making and policy prescriptions, because it provides useful insights into inactivity that can better orient the design of effective labour market policies. For instance, a deeper understanding of how social networks affect worklessness decisions and attitudes, employment perceptions, social mobility and human capital investments can have crucial implications for labour market policies, the subsidization of education and decisions on unemployment benefits. Our results suggest that the proposed redesign of the benefit system and additional support for those not currently employed need to allow for a degree of heterogeneity in the recipient base. A handful of policies, such as the previous New Deal and the current Tax Credits system, have been designed with this heterogeneity in mind, by age and type of inactivity. The results above point out that considering factors beyond the standard labour economics variables when designing labour market policies may provide fruitful returns.
10) Claudia Champagne, Sherbrooke University, Canada
The International Syndicated Loan Market Network: An “Unholy Trinity”?
Coauthors: Claudia Champagne
The structure of the network of lenders underlying the syndicated loan market is formally examined with network analysis tools. Overall, we find that the global network of syndicated lenders, as well as sub-networks of countries or institution types, presents the “unholy trinity” of characteristics that are common in other financial networks. Specifically, we observe an increase in the number and frequency of connections between lenders, which increases the complexity of the network. We also conclude that the global network and the sub-networks of countries and lender types exhibit small-world characteristics. Overall, variations in the global small-world statistic reveal i) the progressive dispersion of the network with the increasing number of lenders, and ii) the local insertion of new lenders into the network, which causes an increase in the size or number of cliques. Small-world network structures are highly effective for the communication of information across actors. However, a small-world structure can increase the fragility of the network because it can turn a local problem into a global one through proximity and clustering. Finally, we find that the global network presents scale-free characteristics with preferential attachment: new lenders are attracted to highly connected lenders. Scale-free network structures are particularly robust to random disruption of network ties and exogenous shocks. However, they become fragile if failures or attacks affect important lenders (hubs).
11) Po-yuan Chen, Jinwen University of Sci & Tech, Taiwan
Financial Alliance Strategy under Uncertainty: A Model of Supply Chain Integration
A real option model is employed in this paper to investigate the relationships among option values, the agents' thresholds and the dynamics of the spot price in a two-echelon supply chain context. The uncertainty of the integrated profits comes mainly from the stochastic spot price in the consumer market, which evolves as a geometric Brownian motion. In our proposed model, the second agent (retailer) has the right to place orders and receive goods from the first agent (supplier), depending on the spot price and the price threshold. The second agent (retailer) incurs the transportation costs. To prevent operational losses, the second agent (retailer) can decide when to suspend sales to final consumers and keep the goods as inventory when the spot price reverses into a downtrend. After the suspension, the second agent (retailer) waits for the next opportunity to capture profits when the spot price rises back to the level of the price threshold. In short, the second agent (retailer) holds a perpetual option while waiting for the sales opportunity, and the first agent (supplier) waits for its delivery opportunity, which is triggered whenever the spot price reaches the level of the supplier's threshold. Once the spot price falls below this threshold, the first agent (supplier) ceases delivering goods to the second agent (retailer). The price thresholds relating to the timing of financial alliance and integration of the two agents are derived analytically. Sensitivity analyses are performed in this work. We conclude that the price threshold of the retailer increases with increasing price volatility and risk-free rate, whereas the price threshold of the supplier decreases with increasing price volatility and risk-free rate. The sensitivity of the real option value is similar to that of financial options. At the end of the paper, a decision support process is presented to verify the applicability of our model. Further studies could focus on a more general framework with a multi-agent complex system using a Monte Carlo simulation approach. This paper provides the analytical basis for further research.
12) Po-yuan Chen, Jinwen University of Sci & Tech, Taiwan
The Optimal Capital Structure with APR Violation under Product Price Uncertainty
Coauthors: Sheng-syan Chen
This paper extends the model of Goldstein, Ju, and Leland (2001) to investigate the optimal capital structure strategy by explicitly identifying the source of risk as product price uncertainty, where the product price evolves as a geometric Brownian motion. Many empirical studies confirm the existence of violations of the absolute priority rule (APR) in many countries. For example, Unal et al. (2001) consider APR violation in the estimation of recovery rates; they allow for the condition that junior bondholders receive payments before senior bondholders receive full payments when the firm goes bankrupt. Weiss (1990) confirms that APR violation occurs in practice because bankruptcy law gives junior bondholders the possibility of delaying final resolution; senior bondholders are therefore willing to violate priority in order not to incur the legal costs caused by a delay of the bankruptcy resolution. Franks and Torous (1994) provide empirical evidence of deviations from the absolute priority rule assumed by creditors in different priority classes. Related research on APR violation can be found in Franks and Torous (1989), Eberhart, Moore, and Roenfeldt (1989), Eberhart and Sweeney (1992), Altman and Eberhart (1994), and Betker (1995). Based on the above empirical studies, our framework relaxes the APR assumption of Goldstein, Ju, and Leland (2001). This paper explores the interactions between the product and financial markets and the effect of APR violation on optimal financial decisions. We conclude that the optimal debt ratio increases with increasing product price volatility and with an increasing degree of APR violation. As a result, a firm should integrate its product and financial decisions in response to the degree of APR violation.
13) Suzy Moat, University College London, United Kingdom
Space-Time Structure of Global Stock Indices
Coauthors: Tao Cheng, James Haworth, Dror Y Kenett and Jiaqiu Wang
Space-time structure has a fundamental influence on the relative performance of global stock market indices. An understanding of this is vital when modeling and forecasting the spatio-temporal evolution of stock market trends and shocks. Much work has focused on the analysis of temporal and spatial correlations of stock indices, in order to quantify inter-market relations, with emphasis on events that have led to worldwide economic crises. However, the global market is fundamentally interdependent in both space and time, requiring an integrated spatio-temporal approach to quantify these interdependencies. This paper presents methodologies to fill this gap, by examining the spatio-temporal autocorrelations of the daily stock indices of 92 countries during 2007-2011. Focusing on the crash of September 16th 2008, we first examine the spatial correlation between stock prices at a global level. The empirical semi-variance is calculated, revealing a clear distance-decay relationship consistent with positive spatial autocorrelation. However, an interesting phenomenon presents itself at a spatial range of around 10 thousand miles, where the semi-variance falls again. This may possibly be accounted for by the distance between some of the major trading centers. The purely spatial analysis provides us with a snapshot of the spatial correlation in stock market index prices. To examine the evolution of the spatio-temporal interdependencies in stock market index prices in an integrated way, we use the spatio-temporal autocorrelation function (ST-ACF). The ST-ACF, an extension of the Pearson coefficient, is calculated in successive distance bands of width one thousand miles. The resultant pattern reflects a non-stationary space-time process with very slow decay, which is typical of financial series. The strength of the ST-ACF decreases as the distance between trading centers increases, confirming the findings of the spatial analysis. Again, the exception to this is at a separation distance of between 9 and 10 thousand miles, at which unusually high space-time autocorrelation is exhibited compared with separation distances of between 4 and 9 thousand miles. To examine the findings in more detail, we will examine local space-time autocorrelations in order to disclose their dynamics in time and heterogeneity in space, which are masked by the global indicators. We propose that this study provides the first detailed picture of the spatio-temporal autocorrelation of global stock market index prices. The findings have significant implications for space-time modeling of global financial markets. By untangling the complex interdependencies, we can devise space-time model structures that are able to harness the spatio-temporal autocorrelation structure to produce accurate forecasts. Given the non-stationarity and heterogeneity of the space-time process, traditional space-time models that assume stationarity are insufficient. Nonlinear machine learning approaches are likely to be more effective in forecasting spatio-temporally correlated economic processes.
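A rough sketch of a banded spatio-temporal autocorrelation estimate of the kind described above (the exact ST-ACF definition used in the paper may differ in detail; the return series and distances below are synthetic placeholders):

```python
import numpy as np

# Placeholders: returns[t, i] is the daily log-return of index i, and
# dist[i, j] a pairwise distance in miles (a real study would use
# great-circle distances between trading centres).
rng = np.random.default_rng(3)
n_days, n_idx = 500, 92
returns = rng.standard_normal((n_days, n_idx))
dist = rng.uniform(0, 12_000, size=(n_idx, n_idx))
dist = (dist + dist.T) / 2.0
np.fill_diagonal(dist, 0.0)

def st_acf(returns, dist, lag, band):
    """Mean Pearson correlation of r_i(t) with r_j(t + lag) over pairs whose distance lies in band."""
    lo, hi = band
    vals = []
    n = returns.shape[1]
    for i in range(n):
        for j in range(n):
            if i != j and lo <= dist[i, j] < hi:
                vals.append(np.corrcoef(returns[:-lag, i], returns[lag:, j])[0, 1])
    return float(np.mean(vals)) if vals else float("nan")

# One-day-lag autocorrelation in three example 1000-mile-wide distance bands.
print({b: round(st_acf(returns, dist, lag=1, band=(b, b + 1000)), 3) for b in (0, 4000, 9000)})
```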
14) Jens Christian Claussen, University of Luebeck, Germany
From Agent-Based Models to Stochastic Differential Equations in Evolutionary Dynamics of Strategies
Coauthors: Jens Christian Claussen (Luebeck), Arne Traulsen (Ploen), Christoph Hauert (Vancouver)
Behavioral strategies of agents can co-evolve quite differently, under weak selection in biology, or in a strong selection limit for individuals playing social roles or facing economic dilemmas. In addition, a finite number of agents introduces demographic stochasticity, which must be incorporated in quantitative models and can give rise to new effects. Starting from a microscopic interaction process, one can derive mean-field models for the macroscopic variables, including finite-size corrections via a Fokker-Planck equation, as we have discussed for the framework of evolutionary game theory [1], extended to mutations and several strategies [2], and then proceeding to stochastic differential equations [3], which can have a significant computational advantage. Depending on the agents' interaction process, the noise can be game-independent or game-dependent [3], and even the resulting (macroscopic) differential equations can differ, resulting in stability reversals in bimatrix games or asymmetric conflicts [1,4] and in non-zero-sum cyclic games if the bank loses [5]. [1] Phys Rev Lett 95, 238701 (2005) [2] Phys Rev E 74, 011901 (2006) [3] Phys Rev E 85, 041901 (2011) [4] Eur Phys J B 60, 391-399 (2007) [5] Phys Rev Lett 100, 058104 (2008) http://www.inb.uni-luebeck.de/~claussen/
15) Ana Contreras-Reyes, National Autonomous University of Mexico, Mexico
Price Dynamics Using an Order Book Model
Coauthors: Hernan Larralde
In this work we present a simple model for the dynamics of the limit order book in a liquid market. In an order-driven market, buyers and sellers submit their limit orders, and when the price reaches the best (lowest) ask or the best (highest) bid, these are executed. For instance, as a result of a "sell transaction", in which the lowest ask order is executed, the best ask price is shifted to what was the next best ask in the order book. Thus, for all practical purposes, a sell transaction causes an increase in the current ask price. A similar thing happens to the bid price after "buy transactions", which decrease the bid price and thus increase the spread, i.e. the difference between the best ask and the best bid. On the other hand, if the orders are not executed, new orders are placed such that the spread gets smaller. We treat the spread as a multiplicative process which can be solved analytically. We present numerical examples and a comparison with financial data from the DAX.
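A toy sketch in the spirit of the spread dynamics described above (the update rules and all parameters are illustrative assumptions, not the authors' exact specification):

```python
import numpy as np

# A transaction removes the best quote and widens the spread multiplicatively;
# a new limit order placed inside the book narrows it; a tick size acts as a
# reflecting lower barrier for the spread.
rng = np.random.default_rng(2)
p_trade, widen, narrow, tick = 0.4, 1.5, 0.7, 0.01
n = 200_000
spread = np.empty(n)
spread[0] = tick
for t in range(1, n):
    if rng.random() < p_trade:                       # market order executes the best bid/ask
        spread[t] = spread[t - 1] * widen
    else:                                            # limit order narrows the spread
        spread[t] = max(spread[t - 1] * narrow, tick)

# Multiplicative dynamics with a lower barrier produce a heavy-tailed spread distribution.
print(np.percentile(spread, [50, 90, 99, 99.9]))
```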
16) Matthieu Cristelli, ISC-CNR, Istituto dei Sistemi Complessi, Italy
Turning Agent-Based Models Into a Metrics for Systemic Instabilities of Markets
Coauthors: M. Cristelli, A. Tacchella, L. Pietronero
In the last two decades many Agent-Based Models (ABMs) have been developed with the aim of reproducing and understanding the Stylized Facts (SF) observed in financial time series. These models are typically capable of reproducing the main characteristics of financial markets (the SF) in some regions of their parameter spaces. Their weakness is that the experimental background of the SF is limited, and an improvement of this body of knowledge appears to be the bottleneck of the field. In this respect, ABMs have allowed us to move beyond standard theories of markets and concepts such as the homogeneity of agents and market equilibrium. To the present day, however, ABMs can still be used only as metaphors of real economic systems. ABMs are capable of explaining the role of elements such as heterogeneity of strategies and time horizons, contagion dynamics, and intrinsically large (endogenous) fluctuations, but they still cannot be concretely useful in policy-making processes. This represents the main challenge of the ABM field. In a series of papers on a minimal ABM [1,2,3,4,5] we have shown that a key element for measuring systemic risk and financial distress is the effective number of agents or, in other words, the number of effectively independent strategies active in the market. This observation has been the hint to develop a data-oriented measure of the effective number of strategies in real markets, in order to define a proxy for systemic risk and market instability. We believe that a concrete way to turn ABMs into real tools for policy makers is the development of a metrics for the degree of coherence of the markets, that is, the number of effectively active agents. We discuss two possible approaches to this issue: 1) The analysis of the spectrum of the correlation matrices of price time series is a promising candidate for developing a measure of the agents' coherence. While the spectrum of correlation matrices is well described by random matrix theory in low-volatility periods, the strong increase of the average correlation among stocks (within and across sectors) produces spectra markedly different from the signal of a random matrix. 2) Intra-stock pattern recognition is the second candidate under investigation, used to reconstruct portfolio compositions and detect portfolio coherence, that is, their degree of overlap. In this sense pattern recognition can be a way to define a measure of market coherence: the more similar the strategies, the closer the price trajectories on which these strategies are acting. [1] Alfi V., Pietronero L. and Zaccaria A., EPL, 86 (2009) 58003. [2] Alfi V., Cristelli M., Pietronero L. and Zaccaria A., Eur. Phys. J. B, 67 (2009) 385. [3] Alfi V., Cristelli M., Pietronero L. and Zaccaria A., Eur. Phys. J. B, 67 (2009) 399. [4] Alfi V., Cristelli M., Pietronero L. and Zaccaria A., J. Stat. Mech., (2009) P03016. [5] M. Cristelli, A. Zaccaria, L. Pietronero, Proceedings of the School of Physics ”E.
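A sketch of the first approach mentioned above, comparing the eigenvalue spectrum of an equal-time correlation matrix of returns with the Marchenko-Pastur band expected for purely random data (synthetic returns stand in for real price series; this is an illustration of the standard random-matrix comparison, not the authors' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(4)
n_stocks, n_days = 100, 500
returns = rng.standard_normal((n_days, n_stocks))
returns += 0.5 * rng.standard_normal((n_days, 1))      # inject a common "market mode" (high-coherence regime)

corr = np.corrcoef(returns, rowvar=False)               # N x N correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

q = n_days / n_stocks                                   # aspect ratio T/N
lam_max = (1 + 1 / np.sqrt(q)) ** 2                     # Marchenko-Pastur upper edge for pure noise
n_outliers = int(np.sum(eigvals > lam_max))             # eigenvalues carrying genuine collective structure
print("largest eigenvalue:", round(eigvals[0], 2), "MP upper edge:", round(lam_max, 2), "outliers:", n_outliers)
```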
17) Julien Cujean, EPFL, Switzerland
The Social Dynamics of Outperformance
Mutual fund managers’ performance does not persist. I develop a rational-expectations equilibrium model that reproduces this fact. I define managerial ability as managers’ informational advantage. In my model, managers may improve their informational advantage through word-of-mouth communications, which create a rich diversity of managerial abilities. By aggregating private conversations among managers, prices accelerate learning in the market. As a result, it is almost impossible for managers to preserve their informational advantage before the market catches up with them; most managers’ performance does not persist, despite heterogeneous abilities. Three important implications of the model underlie this result. First, social interactions have a strong herding impact on managers’ trading. Second, herding produces stock return momentum, which enhances low-ability managers’ performance. Third, by aligning with the market consensus, managers lose their timing skills. These implications together make the ranking of managers difficult to implement; most managers are unable to generate alpha, except a small group of top performers.
18) Christos Emmanouilides, Aristotle University of Thessaloniki, Greece
Statistical Mechanics Models for Innovation Diffusion Modeling and Forecasting
The paper presents a class of statistical mechanics models for innovation diffusion modeling and forecasting. The models are specified as discrete choice models for the innovation adoption probability. The adoption utility function is assumed to be linearly separable, with an exogenous component of individual characteristics and external influences (marketing effort, mass media, etc.), and an interactive component for the expectation formation process through multi-agent interactions. These interactions are assumed to take place in a network defined over a multivariate set of characteristics and spatial structures and may take a variety of forms. Under normality and independence assumptions, dynamic mean-field equations (MFE) are derived for the aggregate diffusion process. These equations describe the temporal evolution of the moments of the distribution of adoption probabilities and are used to study diffusion dynamics with simulations. Then, reduced (i.e. homogenized to some degree) model forms that are estimable under a variety of data availability scenarios (e.g. with macro-level aggregate diffusion data, individual-level panel or cross-sectional data) are obtained. Preliminary results on the properties of the available estimators are discussed, together with implications regarding inference about the diffusion data-generating process. Findings indicate that the developed statistical mechanics models can be reliably estimated on appropriate multilevel data and provide valid inferences for the multi-agent interactions and the external influences that drive the diffusion process. Finally, the forecasting performance of the simpler (i.e. fully homogeneous) model is compared to that of 15 established aggregate innovation diffusion models. This is a single-equation model for the first moment (essentially a Mean Field Model, MFM) that can be estimated with the usually available aggregate diffusion data and used to generate forecasts. Comparisons are made using a large number of real and simulated diffusion series, for a range of forecast origins and lead times, in order to assess the models' short- and long-term forecasting performance at various stages of diffusion. The analysis of forecasts leads to several results about the average forecasting performance of the alternative diffusion models. It turns out that the proposed MFM, when it is empirically supported by the data, forecasts as well as or better than most diffusion models employed in the study, thus providing a preferable alternative.
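A deliberately homogenized sketch of the kind of mean-field recursion such models reduce to (illustrative only; the paper's mean-field equations track higher moments and richer interaction structures, and all parameter values here are hypothetical):

```python
import numpy as np

# Each period, every remaining non-adopter adopts with a logistic probability
# driven by an external-influence term beta0 and a social-interaction term
# J times the current adoption share m (the mean field).
beta0, J, T = -4.0, 6.0, 40
m = np.zeros(T)
for t in range(1, T):
    p_adopt = 1.0 / (1.0 + np.exp(-(beta0 + J * m[t - 1])))   # adoption hazard for a non-adopter
    m[t] = m[t - 1] + (1.0 - m[t - 1]) * p_adopt
print(np.round(m, 3))   # produces the familiar S-shaped diffusion curve
```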
19) Daniel Fricke, Kiel Institute for the World Economy, Germany
Core-Periphery Structure in the Overnight Money Market: Evidence from the e-MID Trading Platform
Coauthors: Thomas Lux
We explore the network topology arising from a dataset of the overnight interbank transactions on the e-MID trading platform from January 1999 to December 2010. In order to shed light on the hierarchical structure of the banking system, we estimate different versions of a core-periphery model. Our main findings are: (1) A core-periphery structure provides a better fit for these interbank data than alternative network models, (2) the identified core is quite stable over time, consisting of roughly 28% of all banks before the global financial crisis (GFC) and 23% afterwards, (3) the majority of core banks can be classified as intermediaries, i.e. as banks both borrowing and lending money, (4) allowing for asymmetric `coreness’ with respect to lending and borrowing considerably improves the fit, and reveals more concentration in borrowing than lending activity of money center banks. During the financial crisis of 2008, the reduction of interbank lending was mainly due to core banks’ reducing their numbers of active outgoing links.
20) Guido Germano, Philipps-Universität Marburg, Germany
Distributions of conserved quantities like energy or wealth in physics and economics
Coauthors: Enrico Scalas
Analogies between statistical physics and economics have been noticed by different researchers for over a century; interdisciplinary approaches have become particularly popular since the mid-1990s, yielding unexpected results and cross-fertilization in both directions. In particular, a class of statistical multi-agent models has received attention for its capability of describing aggregate features of economic systems such as the distribution of wealth among people. This has in turn spurred new interest in revisiting the exact mathematical expression for the distribution of molecular energies in classical statistical mechanical ensembles below the thermodynamic limit.
21) Guido Germano, Philipps-Universität Marburg, Germany
Fourier Transform Techniques for the Pricing of Path-Dependent Derivatives
Coauthors: Daniele Marazzina, Gianluca Fusai
Several stochastic methods used in finance originate from the solution of problems posed originally in physics. Recently there has been a growing interest in Fourier-based techniques in finance. Here we will present an application to the fast and accurate pricing of various exotic path-dependent derivative contracts under discrete monitoring when the underlying is modeled as an exponential Levy process. Using a z-transform and the Euler summation for its inversion, the pricing problem is solved through a fixed number of independent Wiener-Hopf or Fredholm integral equations; remarkably, the number of equations and thus the computational cost does not depend on the number of monitoring dates. The integral equations can be solved by Wiener-Hopf factorization exploiting the Hilbert transform. Since the latter can be computed through the fast Fourier transform, a method results whose computational cost grows as N log N, while its error decreases quadratically or even exponentially with the grid size N. Numerical results are presented in order to validate the pricing algorithms and compare their performance with that of older ones.
22) Jaba Ghonghadze, University of Kiel, Germany
Stock Prices and the Volatility Curvature Hypothesis
Coauthors: None
This paper postulates new models for stock price dynamics. It shows that a strictly stationary univariate Ito diffusion without jumps, with inaccessible boundaries and a uni-modal stationary distribution, is capable of generating price series and log-returns with the following properties: (1) Even though the price series are mean-reverting, they might look like a random walk; (2) The implied log-returns do not follow a predictable pattern and appear to be quite random; (3) The log-return series display pronounced volatility clustering; (4) The distribution of log-returns is highly leptokurtic and displays power-law tail behavior stretching over more than 10 standard deviations; (5) The unconditional moment properties of the log-returns might be disconnected from those of the prices in an unconventional way; (6) An option pricing exercise reveals the presence of pronounced “volatility smiles”; (7) These effects arise without (a) any inclusion of jumps, (b) a priori embedding of GARCH features, or (c) bi-modality of the price distributions. The mechanism responsible for the replication of the stylized facts is identified as “volatility curvatures at specific price levels”. This leads to the formulation of the “volatility curvature hypothesis”. The parameters supporting volatility curvatures are found to be statistically significant in our application study. Interestingly, the inverse-cubic-law pattern of Gabaix et al. (2006), which has been established for real log-returns with magnitudes up to 80 standard deviations, turns out to be a natural outcome of price processes following the new models. These findings provide some support for considering the stationarity of prices as not restrictive in financial modeling, thus offering at least a partial answer to the question raised by Andrew Lo (1991, p. 1283).
23) Vygintas Gontis, Vilnius University, Lithuania
The Class of Nonlinear Stochastic Equations as a Background for Modeling Financial Systems
Coauthors: Aleksejus Kononovicius
We have introduced a class of nonlinear stochastic differential equations (SDEs) providing time series with power-law statistics, reproducing 1/f spectral density and exhibiting bursty behavior [1]. Our modeling of financial market variables using these nonlinear stochastic differential equations is based on empirical analysis and on microscopic, agent-based reasoning for the proposed equations [2]. In this contribution we will briefly review our recent work on the modeling of financial markets [3]. We will demonstrate that the general class of SDEs can be transformed into the Bessel process, which represents a special family of diffusion models applicable in econometric analysis [4]. This allows us to derive an explicit form of the burst statistics generated by the nonlinear SDE. The exponent of multiplicativity η is a key parameter of these statistics. We will also present an analysis of empirical data providing evidence that the burst statistics of returns in financial markets can be modeled by nonlinear stochastic differential equations with multiplicativity as high as η = 5/2. We will introduce a herding interaction of three agent groups and derive a macroscopic description of such a system by two coupled SDEs. This enables us to reproduce the power spectral density of absolute returns and trading activity exhibited in financial markets. There are many different attempts at microscopic modeling of sophisticated systems, such as financial markets or other social systems, intended to reproduce the same empirically established properties. The ambiguity of microscopic descriptions in complex systems is an objective obstacle for quantitative modeling. Simple agent-based models with an established or expected corresponding macroscopic description are therefore indispensable in the modeling of more sophisticated systems. In this contribution we will discuss various extensions and applications of Kirman’s herding model. References [1] Ruseckas, J., Kaulakys, B. and Gontis, V., Herding model and 1/f noise, EPL 96 (2011) 60007. [2] A. Kononovicius, V. Gontis. Agent based reasoning for the non-linear stochastic models of long-range memory. Physica A 391 4, 2012, pp. 1309-1314. [3] A. Kononovičius, V. Gontis and V. Daniunas. Agent-based Versus Macroscopic Modeling of Competition and Business Processes in Economics and Finance, to be published in International Journal On Advances in Intelligent Systems, 2012, http://arxiv.org/pdf/1202.3533.pdf [4] V. Gontis, A. Kononovicius, S. Reimann. The class of nonlinear stochastic models as a background for the bursty behavior in financial markets. ACS 15 (supp01), 2012, pp. 1250071 (1-13).
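As an illustration only, an Euler-Maruyama integration of one commonly quoted member of this class of nonlinear SDEs, dx = (η − λ/2) x^(2η−1) dt + x^η dW, might be sketched as follows; the parameter values, the simple reflecting boundaries and the fixed time step are assumptions of this sketch (a careful calculation, as in the references above, would treat the boundaries and the step size more rigorously, e.g. with an adaptive time step):

```python
import numpy as np

rng = np.random.default_rng(7)
eta, lam = 2.5, 3.0                      # multiplicativity and power-law exponent (hypothetical values)
x_min, x_max = 1.0, 10.0                 # crude reflecting boundaries to keep the process finite
dt, n = 1e-6, 500_000
x = np.empty(n)
x[0] = 2.0
for t in range(1, n):
    drift = (eta - 0.5 * lam) * x[t - 1] ** (2 * eta - 1)
    diff = x[t - 1] ** eta
    x[t] = x[t - 1] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    x[t] = min(max(x[t], x_min), x_max)  # clip back into the allowed range

# The trajectory spends most time near x_min and makes occasional large bursts.
print("mean:", round(x.mean(), 3), "99th percentile:", round(np.percentile(x, 99), 3))
```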
24) Martin Greiner, Aarhus University, Dept. Engineering & Mathematics, Denmark
A 100% Renewable Power System in Europe - Economics Will Follow Technology
Coauthors: Gorm Andresen, Morten Rasmussen, Rolando Rodrigues, Sarah Becker
Today's overall macro energy system, based on fossil and nuclear resources, will transform into a future system relying predominantly on fluctuating renewable resources. At the moment it is not really clear what the best transitional pathway between the current and the future energy system will be. In this respect it makes sense to think backwards: in a first step, obtain a good functional understanding of fully renewable energy systems, and in a second step, bridge from there back to today's energy system. Based on state-of-the-art high-resolution meteorological and electrical load data, simple spatio-temporal modelling, solid time-series analysis and the physics of complex networks, fundamental properties of a fully renewable pan-European power system are determined. Among such characteristics are the optimal mix of wind and solar power generation [1,2], the optimal combination of storage and balancing, the optimal extension of the transmission network, as well as the optimal ramp-down of fossil and nuclear power generation during the transitional phase. These results indicate that the pathways into future energy systems will be driven by an optimal systemic combination of technologies, and that economy and markets have to follow technology. [1] Heide D, Bremen L, Greiner M, Hoffmann C, Speckmann M and Bofinger S: Seasonal optimal mix of wind and solar power in a future, highly renewable Europe. Renewable Energy 35 (2010) 2483-2489. [2] Heide D, Greiner M, Bremen L and Hoffmann C: Reduced storage and balancing needs in a fully renewable European power system with excess wind and solar power generation. Renewable Energy 36 (2011) 2515-2523.
25) Tomasz Gubiec, University of Warsaw, Poland
Share Price Movements as Non-Independent Continuous-Time Random Walk
Coauthors: Ryszard Kutner
We observed, in the context of stock exchange trading, backward price jumps of stocks. These jumps are reminiscent of the bid-ask bounce phenomenon, where consecutive jumps have the same or almost the same lengths but opposite signs; that is, they are backward correlated. To model these backward price jumps we extended the Continuous-Time Random Walk formalism to include negative feedback, e.g. defined by a one-step memory. Within our exactly solvable approach we describe the stochastic evolution of a typical share price on a stock exchange at a high-frequency time scale. Our approach was validated by the good agreement of the theoretical velocity autocorrelation function with its empirical counterpart. The agreement is even better if we use the version of our formalism containing a two-step memory. Notably, the parameters of the model were obtained from separate data sets, so that the comparison of the theoretical velocity autocorrelation function with the corresponding empirical curve involves no free parameters.
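A toy illustration of the one-step-memory idea (not the authors' exact CTRW formalism): each price jump has unit magnitude and reverses the sign of the previous jump with probability q, mimicking the bid-ask bounce; q > 1/2 gives the negative one-lag correlation of price changes described above.

```python
import numpy as np

rng = np.random.default_rng(11)
q, n = 0.8, 100_000                       # reversal probability and number of jumps (hypothetical)
signs = np.empty(n)
signs[0] = 1.0
for t in range(1, n):
    signs[t] = -signs[t - 1] if rng.random() < q else signs[t - 1]
price = 100.0 + np.cumsum(signs)          # tick-level price path

# For this toy rule the lag-1 autocorrelation of price changes is 1 - 2q.
lag1 = np.corrcoef(signs[:-1], signs[1:])[0, 1]
print(round(lag1, 3), 1 - 2 * q)
```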
26) Beniamino Guerra, ETHZ, Switzerland
Cascading Processes in Economic Networks
Coauthors: Antonios Garas, Claudio Tessone, Frank Schweitzer
In many fields the level of interconnection (networking) among agents keeps growing, driven first of all by the increasing globalization of today's world (and the resulting coupling among systems): think of the World Wide Web, the Internet, flight connections, mobile phone networks, and so on. It is therefore of primary importance to understand how the functionality of these systems, and of the dynamics running on top of them, is affected by the structure of the underlying network. Phenomena like traffic congestion, power grid overloads, the spread of diseases or financial defaults have received growing attention in recent years, stimulating a deep investigation of the robustness of complex systems. In particular we focus on a typical case of dynamical robustness (the robustness of a system subject to a flow of a physical quantity), namely cascading processes, which are widely studied theoretically in complex networks. It is of primary importance to investigate how these cascading dynamics are affected by the structural features of the network (connectivity, heterogeneity, correlations) and by the individual properties of the agents (the distributions of loads and thresholds among the nodes). Understanding these mechanisms could bring large benefits in designing systems that are more resistant to the spread of damage. We use a cascading process ruled by a load-redistribution mechanism: each agent is characterized by a load, the stress the agent is subject to, and a threshold, the stress the agent is able to handle. An initial defaulting agent is chosen; the shock to this node causes its load to be redistributed among its neighbours, whose new total load may exceed their thresholds, inducing further failures; these failures trigger another round of load redistribution, and so on. The process can thus develop into a cascade involving the whole network even though the triggering event was a single default. We do not include any amplification mechanism such as self-reinforcing trends (for instance, in finance, dynamics like financial acceleration) or macroscopic influences (for instance, market panic). Our goal is to understand the role of heterogeneity in the distribution of the intrinsic features of the agents (basically load and threshold) and of macroscopic features of the network such as the mean connectivity and the degree distribution.
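A minimal sketch of the load-redistribution cascade described above (the network, load and threshold distributions are illustrative assumptions):

```python
import numpy as np
import networkx as nx

# Each agent carries a load and a threshold; when an agent fails, its load is
# split equally among its surviving neighbours, which may push them over their
# own thresholds and propagate the failure.
rng = np.random.default_rng(5)
g = nx.erdos_renyi_graph(n=200, p=0.03, seed=5)
load = {v: rng.uniform(0.0, 1.0) for v in g}
threshold = {v: load[v] + rng.uniform(0.1, 0.5) for v in g}   # headroom above the initial load

failed = set()
queue = [0]                          # shock a single initial agent
while queue:
    v = queue.pop()
    if v in failed:
        continue
    failed.add(v)
    alive = [u for u in g.neighbors(v) if u not in failed]
    if not alive:
        continue
    share = load[v] / len(alive)     # redistribute the failed agent's load
    for u in alive:
        load[u] += share
        if load[u] > threshold[u]:
            queue.append(u)
print("cascade size:", len(failed), "of", g.number_of_nodes())
```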
27) Jean-Cyprien Heam, CREST and French Supervisory Authority (ACP), France
Liquidation Equilibrium with Seniority and Hidden CDO
Coauthors: Christian Gourieroux, Jean-Cyprien Heam, Alain Monfort
We consider the analysis of systemic solvency risk in a financial sector by considering the interconnections between financial institutions through their balance sheets. We complete the existing structural literature on liquidation equilibrium by allowing for different levels of seniority of the debt. This equilibrium framework highlights the difficulty of pricing the junior and senior debts of a bank. These contractual single-name assets are in fact CDOs written on the names of all interconnected banks. This modifies the standard CDO pricing formulas by adding a pricing component that captures the contagion effects.
28) Jun-ichi Inoue, Hokkaido University, Japan
Job-Matching Processes in Probabilistic Labor Markets for University Graduates
Coauthors: He Chen
We introduce a probabilistic model to analyse job-matching processes in recent Japanese labour markets by means of statistical physics. In our preliminary study [1], we found that some macroscopic properties of such markets can be described in terms of a phase transition due to 'condensation'. In [2], we showed that the aggregation probability of each company can be rewritten by means of a non-linear map under several conditions. Mathematical treatment of the map enabled us to discuss the conditions under which the rankings of two arbitrary companies are reversed during the dynamics. Moreover, we derived the stationary distribution in the high-temperature limit and confirmed that the approximation can be improved systematically by means of a 'high-temperature expansion'. In this presentation, we shall focus on 'social differentials' in job-hunting. Social differentials in the community are one of the serious problems in Japan; sometimes such differentials propagate from parents to their children or even to their grandchildren. To deal with such complicated social systems exhibiting differentials, here we extend our probabilistic modeling of labour markets [1][2] for university graduates. 'Artificial' differentials in the number of informal acceptances for employment are induced by taking into account the student's score. In our modeling, each company receiving entry sheets (CVs) over its quota selects talented persons among the candidates according to the students' scores, such as the grade point average (GPA). We investigate the system-parameter dependence of both micro- and macroscopic quantities and reveal the conditions under which differentials in the number of informal decisions emerge. We numerically find that the so-called Gini index, a traditional measure of social differentials, with respect to the acceptance ratio (= number of informal acceptances / number of entry sheets) grows rapidly as the average number of entry sheets increases. This implies that informal acceptances in the labour market are 'monopolized' by a small fraction of students with relatively high scores. [1] H. Chen and J. Inoue, 'Statistical mechanics of labor markets', New Economic Window, Springer (2012) (Proc. of Econophysics-Kolkata VI, Econophysics of systemic risk and network dynamics, F. Abergel, B.K. Chakrabarti, A. Chakraborti and A. Ghosh (Eds)), in press. [2] H. Chen and J. Inoue, 'Dynamics of probabilistic labor markets: statistical physics perspective', Lecture Notes in Economics and Mathematical Systems, Springer (2012) (Proc. of Artificial Economics 2012), in press.
29) Jukka Isohätälä, Loughborough University Department of Physics, UK
Ergodic Instability in a Model of Firms Under Threat of Liquidation and Analogies to Physical Systems
Coauthors: Feo Kusmartsev and Alistair Milne
Our proposed satellite workshop presentation will build on a current working paper, "A model of investment subject to financing constraints", by Jukka Isohatala (Loughborough University Department of Physics), Alistair Milne (Loughborough University School of Business and Economics) and Donald Robertson (University of Cambridge, Faculty of Economics). That working paper investigates a model of firms acting under financial constraints and a threat of liquidation. The analysis relates to a number of papers in the economics literature exploring the impact of financing constraints on firm behaviour and the transmission of economic shocks. These include the discrete-time models of the financial accelerator of Bernanke and Gertler (AER, 1989) and Bernanke, Gertler and Gilchrist (Handbook of Macroeconomics, 1999), and continuous-time models of firm behaviour when there are constraints on borrowing, such as Milne and Robertson (JEDC, 1996) and Brunnermeier and Sannikov (Princeton University, working paper, 2012). Isohatala, Milne and Robertson (Loughborough University, 2012) develop a continuous-time model which nests both Milne and Robertson and Brunnermeier and Sannikov as special cases. The firms follow an optimal investment policy and (in the version closest to the assumptions of Brunnermeier and Sannikov) have the option of renting out capital in order to minimize exposure to risk at the cost of reduced output. Importantly, the model they explore has a more transparent derivation, and is more tractable to analysis, than Brunnermeier and Sannikov. Under certain reasonable choices of parameters, the model recreates a phenomenon reported in Brunnermeier and Sannikov: a bimodal equilibrium distribution of net worth, whereby a significant proportion of firms occupy the lower net-worth range. As a consequence of this bimodality, the model appears to be capable of producing dynamic behaviour (a form of ergodic instability) that mimics some features observed following financial crises such as that of 2007-2009, i.e. an initial aggregate shock leading to a substantial and protracted decline in net worth, investment and output. Our proposed workshop presentation will discuss the parallels between the equations describing the stochastic evolution of firms under optimal control (as explored in Isohatala, Milne and Robertson and in Brunnermeier and Sannikov) and the nonlinear dynamics of particles in the presence of spatially varying diffusion or noise. In physics, multimodal densities have been linked to instabilities in the underlying nonlinear deterministic dynamics of some physical systems and can be related to noise-induced multistability (for example in [Jung and Hänggi, Phys. Rev. Lett. _65_ 3365 (1990)], [Boissonade, Phys. Lett. A _121_ 121 (1987)], [Doering, Phys. Rev. A _34_ 2564 (1986)], and [Park et al, Phys. Rev. E _56_ 5178 (1997)], among other papers). This workshop presentation will explore the parallels between these economic and physical models and assess how insights from physical modelling may contribute to our understanding of financial crises and macroeconomic instability.
30) Klaus Jaffe, Universidad Simón Bolívar, Venezuela
A Virtual Exploration of the Concept "Economic Temperature" Using the Agent Based Simulation Model Sociodynamica
The maturing of the agent-based simulation tool Sociodynamica since the year 2000 has confirmed that economic dynamics is full of non-linear phenomena, showing that even very simple economies produce surprising, non-predictable outcomes. Sociodynamica allows exploring economic properties of dynamic systems that are not easily studied in the real world, such as the “Economic Temperature”. Here I present simulations showing that monetary velocity, simulated as the frequency of interactions between economic agents, affects the overall economic dynamics in a strongly non-linear manner. The results suggest that each economy has its own optimal working temperature. Applying this insight to real economies suggests the existence of diffuse effects between distinct sectors of the economy that cannot be reduced to simple chains of cause-effect interactions, but rather act through “temperature diffusion”. I present empirical data from 183 countries on research activity and economic development, showing that the intensity of scientific productivity in the basic sciences has been the best available predictor of a country's future economic growth during the last decades. This predictor is several times more accurate than the novel Economic Complexity Index recently developed by Hausmann et al. 2011, or any of the numerous indicators compiled by the World Bank.
31) Emmenegger Jean-François, University of Fribourg, Switzerland
Revival of Input-Output Analysis in Higher Education Curricula of Economics
Within the framework of the SNSF-SCOPES project “Analysis of Institutional and Technological Changes in Market and Transition Economies on the Background of the Present Financial Crisis”, Nr. 127’962, period 2011-2012, the eight researchers from Switzerland and Ukraine have taken Classical-Keynesian Political Economy as the theoretical starting point of reflection. At the level of economic analysis, however, the research team takes the concept of Leontief's Input-Output Tables as the essential tool to treat the economic questions. Indeed, importantly, Input-Output Analysis is essentially considered to be independent of doctrines. Leontief said: what matters is that the analysis starts from and stays in close touch with “observable magnitudes”. Input-Output theory is the classical theory of general interdependence. Adopting this view in the present project, the results show that Input-Output Analysis is a formidable tool to describe, present and understand economic phenomena. But today, higher education curricula of economics are mainly based on mainstream economics, and Input-Output Analysis plays only a marginal role in this field. One of the issues of our project is the understanding that the basis for a revival of Input-Output Analysis in higher education curricula has to be set now. This contribution attempts to describe and justify this issue. At the same time, recent results elaborated by the Cybernetic Institute of the Ukrainian Academy of Sciences, Kiev, will be taken into consideration. Among them are applications of the Perron-Frobenius theorem (1907) to Leontief matrices. Economic models with exhaustible resources, recently developed by Kurz (2009) and also based on that theorem, are introduced in this curriculum. A treatise on this curriculum is conceptually presented, together with a proposed sequence of economic problems of increasing difficulty and their solutions. Jean-François Emmenegger, 30.06.2012
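For readers unfamiliar with the Leontief apparatus mentioned above, the following short Python sketch shows the standard quantity model and the Perron-Frobenius productivity check; the three-sector technology matrix and final demand vector are invented for illustration only.

```python
import numpy as np

# Hypothetical 3-sector technology matrix A: A[i, j] is the input from sector i
# needed per unit of output of sector j (illustrative numbers only).
A = np.array([[0.20, 0.30, 0.10],
              [0.15, 0.25, 0.20],
              [0.10, 0.05, 0.30]])
final_demand = np.array([100.0, 150.0, 80.0])

# Perron-Frobenius: for a productive Leontief matrix the dominant eigenvalue is
# strictly below one, so (I - A) is invertible with a non-negative inverse.
lam = max(abs(np.linalg.eigvals(A)))
print("dominant eigenvalue:", round(lam, 4), "(productive)" if lam < 1 else "(not productive)")

# Leontief quantity model: gross output x solves x = A x + d.
leontief_inverse = np.linalg.inv(np.eye(3) - A)
x = leontief_inverse @ final_demand
print("gross outputs:", np.round(x, 2))
```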
32) Antonis Kalis, Athens Information Technology, Greece
Maxwell's Equations are Perfect for Describing Economic Activity
Although economic activity has a strong spatial signature, with products propagating within real dimensions, economic theory has only recently gained interest in economic geography. However, even in recent works, economic metrics that clearly have spatial characteristics, such as trade, the current account, exchange rates etc., are still treated as distance-dependent scalars. A new modeling method is presented for describing the space-time dynamics of economics and finance, which takes into account the spatial dimensions of economic activity, using spatial vectors for analyzing economic activity flows. Vectors are mathematical tools that describe not only the magnitude of a given metric, but also its direction within a set of dimensions. This last property makes them quite attractive in the analysis of space-dependent systems and systems of flows, such as the ones created in an economic environment where products, services and money propagate. A major contribution of this work is the description of the dynamic behavior of economic vectors through a system of simple equations that capture the dynamics of the economy when time-varying metrics are involved. Time in this approach is no longer divided into a stream of discrete steps, wherein all metrics are considered constant. Time is considered continuous, and economic metrics are constantly changing, leading to a more accurate representation of real economic conditions. The proposed set of equations, which focuses on the interactions of trade and money, can pave the way for describing the economic environment where waves of products, services and money propagate within the economy, without conflicting with current economic theory, while at the same time extending our understanding of how the economy works.
33) Ladislav Kristoufek, Charles University in Prague, Czech Republic
Non-Stationary Volatility with Highly Anti-Persistent Increments: An Alternative Paradigm?
We introduce an alternative paradigm for the volatility modeling of financial securities. Using the example of three stocks of highly capitalized companies, we show that the volatility process is non-stationary and that its logarithmic transformation, together with the logarithmic increments, is approximately normally distributed. This is in contrast with the mainstream volatility modeling paradigm, which considers (or rather assumes) the volatility process to be stationary. Further, the increments of log-volatility are shown to be highly anti-persistent. Together with the assertion that logarithmic returns are normally distributed and uncorrelated, with time-varying volatility, we propose a new returns-generating process. Contrary to standard methods in economics, the proposed procedure is based on empirical observations without any limiting assumptions. We construct returns series which remarkably mimic the real-world series and possess the standard stylized facts - uncorrelated returns with heavy tails, strongly autocorrelated absolute returns and volatility clustering. Therefore, the proposed methodology opens a wholly new field in the research of financial volatility. Importantly, the procedure is based solely on the normal distribution, and fat tails emerge from the generating process naturally, i.e. they are not artificially imposed on the process as is usually done in financial economics/econometrics. The proposed methodology has a marked potential for volatility forecasting, Value-at-Risk and risk management in general.
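To make the flavour of such a construction concrete, here is a deliberately crude Python sketch: non-stationary log-volatility built from negatively autocorrelated Gaussian increments (an MA(1) stand-in for genuinely anti-persistent fractional noise), modulating otherwise Gaussian returns. It is an editorial toy under stated assumptions, not the authors' calibrated process.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10_000

# Crude stand-in for anti-persistent increments of log-volatility:
# an MA(1) with negative coefficient has negative lag-1 autocorrelation.
eps = rng.normal(size=T + 1)
dlogvol = 0.02 * (eps[1:] - 0.7 * eps[:-1])

log_vol = np.cumsum(dlogvol) + np.log(0.01)      # non-stationary log-volatility
returns = np.exp(log_vol) * rng.normal(size=T)   # Gaussian returns, time-varying scale

# Heavy tails and volatility clustering emerge from the construction itself.
kurt = ((returns - returns.mean())**4).mean() / returns.var()**2 - 3
acf1_abs = np.corrcoef(np.abs(returns[1:]), np.abs(returns[:-1]))[0, 1]
print("excess kurtosis of returns:", round(float(kurt), 2))
print("lag-1 autocorr of |returns|:", round(float(acf1_abs), 3))
```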
34) Jiri Kukacka, Charles University in Prague - IES FSV, Czech Republic
Behavioural Breaks in the Heterogeneous Agent Model
Coauthors: Jozef Barunik
The main aim of this work is to incorporate selected findings from behavioural finance into a Heterogeneous Agent Model using the Brock and Hommes (1998) framework. In particular, we analyse the dynamics of the model around the so-called 'Break Point Date', when behavioural elements are injected into the system, and compare it to our empirical benchmark sample. Behavioural patterns are thus embedded into an asset pricing framework, which allows us to examine their direct impact. The price behaviour of 30 Dow Jones Industrial Average constituents covering five particularly turbulent U.S. stock market periods reveals an interesting pattern. To replicate it, we apply numerical analysis using the Heterogeneous Agent Model extended with selected findings from behavioural finance: herding, overconfidence, and market sentiment. We show that these behavioural breaks can be well modelled via the Heterogeneous Agent Model framework and that they extend the original model considerably. Various modifications lead to significantly different results, and the model with behavioural breaks is also able to partially replicate the price behaviour found in the data during turbulent stock market periods.
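As background, a minimal two-type Brock and Hommes (1998) style sketch in Python is given below (fundamentalists versus trend followers with discrete-choice switching), before any behavioural extension; the parameter values are illustrative and chosen for numerical stability rather than to reproduce the paper's regimes.

```python
import numpy as np

rng = np.random.default_rng(2)
R, beta, g, C = 1.01, 10.0, 0.95, 0.01   # gross return, intensity of choice, trend, info cost
T = 2000
x = np.zeros(T)                          # price deviation from the fundamental value
U_f = U_c = 0.0                          # accumulated fitness of the two predictor types

for t in range(2, T - 1):
    # Discrete-choice fractions based on past fitness (multinomial logit).
    n_f = 1.0 / (1.0 + np.exp(-beta * (U_f - U_c)))
    n_c = 1.0 - n_f
    # Expectations: fundamentalists predict reversion to 0, chartists extrapolate the trend.
    e_f, e_c = 0.0, g * x[t - 1]
    x[t + 1] = (n_f * e_f + n_c * e_c) / R + 0.05 * rng.normal()
    # Realized profit of each predictor updates its fitness (fundamentalists pay a cost).
    profit = lambda e: (x[t + 1] - R * x[t]) * (e - R * x[t])
    U_f = 0.9 * U_f + profit(e_f) - C
    U_c = 0.9 * U_c + profit(e_c)

print("std of simulated deviations:", round(float(x.std()), 4))
```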
35) Ryszard Kutner, Faculty of Physics, University of Warsaw, Poland
Catastrophic Bifurcation Breakdowns on Financial Markets vs. Dragon Kings
Coauthors: M. Kozlowska, T. Gubiec, M. Denys, A. Sienkiewicz, T. Werner, R. Kutner, Z. Struzik
We present a systematic, empirical verification of the flickering and 'dragon king' phenomena typical of small and medium to large market capitalisations. In particular, we consider daily index data of the DAX, WIG and DJIA, and analyse bubbles induced by the recent worldwide financial crisis. By using a range of quantifiers, we confirm that crash phenomena on the financial markets considered are preceded by both flickering and the 'dragon king'. The crash events resemble catastrophic regime shifts -- over a relatively short period of time -- empirically observed among other complex systems, in ecosystems, and theoretically accounted for by Rene Thom’s catastrophe theory. The phenomenon of catastrophic slowing down, which is a counterpart of the critical slowing down in thermodynamics, is evidenced by all the quantifiers considered, on approach to the crashes identified in the data considered. Slowing down (both critical and catastrophic) is the most reliable early warning phenomenon, and a refined indicator that the system is approaching either the critical threshold or a catastrophic (tipping) point. Recently, such early-warning signals have been identified in other real-life systems of high complexity, e.g. ecosystems, climate dynamics and medicine (epileptic seizures and asthma attacks). We foresee further applications of the methodology presented.
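Two of the standard slowing-down quantifiers referred to above can be sketched in a few lines of Python: rolling-window lag-1 autocorrelation and variance, both of which rise on the approach to a tipping point. The input below is a synthetic AR(1) series with slowly increasing persistence, not DAX/WIG/DJIA data, and the window length is an arbitrary choice.

```python
import numpy as np

def rolling_early_warning(series, window=250):
    """Lag-1 autocorrelation and variance in a sliding window (classic early-warning signals)."""
    ac1, var = [], []
    for start in range(len(series) - window):
        w = series[start:start + window]
        w = w - w.mean()
        ac1.append(float(np.corrcoef(w[:-1], w[1:])[0, 1]))
        var.append(float(w.var()))
    return np.array(ac1), np.array(var)

# Synthetic AR(1) whose persistence slowly increases, mimicking an approach to a tipping point.
rng = np.random.default_rng(3)
T = 3000
phi = np.linspace(0.2, 0.97, T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi[t] * x[t - 1] + rng.normal()

ac1, var = rolling_early_warning(x)
print("lag-1 autocorr: start %.2f -> end %.2f" % (ac1[0], ac1[-1]))
print("variance:       start %.2f -> end %.2f" % (var[0], var[-1]))
```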
36) Mirko Kämpf, Martin-Luther-Universität Halle-Wittenberg, Germany
Does Wikipedia Reflect Economic Cycles?
Coauthors: Mirko Kämpf (1), Jan W. Kantelhardt (1), Dror Y. Kenett (2), Tobias Preis (2,3,4)
(1) Institut für Physik, Martin-Luther-Universität Halle-Wittenberg, 06099 Halle (Saale), Germany; (2) Center for Polymer Studies & Dept. of Physics, Boston University, Boston, MA 02215, USA; (3) ETH Zurich, Chair of Sociology, in particular of Modeling and Simulation, CH-8092 Zurich, Switzerland; (4) Artemis Capital Asset Management GmbH, 65558 Holzheim, Germany
Financial markets are complex systems driven by incoming information from the real-world economy and by opinion formation among the market participants. Both driving factors are partially reflected in users' activities on the WWW. Previous work has shown a relation between Google search activity and economic performance (see, e.g., [1]). Here, we study the page views of articles in the online encyclopaedia Wikipedia in relation to the performance of large stocks from the German DAX and the US S&P500. We assume that changes in the daily page-view rates for Wikipedia articles regarding specific companies reflect changing general interest in the companies and thus may be related to changes in stock prices or trading volumes. Conceptually, (Pearson) cross-correlation coefficients and event synchronization indices [2] between time series can be used to characterize the relations between elements forming a complex system, if proper significance testing is employed. Since the considered system can be represented as a bipartite graph, we reconstruct and characterize the intra-component correlation networks between elements of the same type and the inter-component correlations between elements of different types, expressing the coupling of the two subsystems. In addition, the importance of time delays is studied. The results of several filtering techniques are compared to characterize the networks as a function of time. The approach is in line with time series analysis methods that often depend on the filtering of the raw data. We show how the correct choice of the computational approach depends on the properties of the measured data and the kind of effects that should be analyzed. Specifically, we characterize the role of social online networks in economic cycles by quantifying and analyzing the correlations between time series of access rates for selected groups of Wikipedia articles and time series of prices and trading volumes of the corresponding stocks. Significant correlations could be important for individual decisions and also help identify global states of the systems and their dynamics. We find that there are highly significant correlations (p < 0.001) between the log returns of several stocks and the corresponding Wikipedia access rates, while significance levels are weaker for absolute log returns and trading volumes. [1] Preis, T; Moat, HS; Stanley, HE; Bishop, SR; Scientific Reports 2, 350 (2012). [2] Quiroga RQ; Kreuz T; Grassberger P; Phys. Rev. E 66, 041904 (2002).
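The kind of significance-tested cross-correlation described above can be sketched as follows in Python: a Pearson coefficient between two daily series plus a p-value from shuffled surrogates. The two input series here are synthetic placeholders standing in for page-view changes and log returns.

```python
import numpy as np

def corr_with_shuffle_test(x, y, n_shuffles=2000, seed=0):
    """Pearson correlation plus a p-value estimated from shuffled surrogates."""
    rng = np.random.default_rng(seed)
    r = float(np.corrcoef(x, y)[0, 1])
    null = np.empty(n_shuffles)
    for i in range(n_shuffles):
        null[i] = np.corrcoef(rng.permutation(x), y)[0, 1]
    p = float((np.abs(null) >= abs(r)).mean())
    return r, p

# Placeholder series standing in for daily page-view changes and stock log returns.
rng = np.random.default_rng(4)
common = rng.normal(size=1000)
views = common + rng.normal(scale=2.0, size=1000)
returns = 0.3 * common + rng.normal(scale=1.0, size=1000)

r, p = corr_with_shuffle_test(views, returns)
print(f"correlation {r:.3f}, shuffle p-value {p:.4f}")
```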
37) Luis F. Lafuerza, IFISC, Spain
On the Role of Heterogeneity in Agent Based Models
Coauthors: Luis F. Lafuerza, Raul Toral
In recent years, methods and ideas developed mainly in statistical physics have been transferred to other disciplines, such as ecology, epidemiology, sociology and economics, allowing for the study of collective and emergent phenomena in what is known as complexity science. Unlike the systems traditionally studied in physics, which consist of identical (sometimes even indistinguishable) units (such as molecules, atoms or electrons), the new applications require the consideration of systems which are characterized by a large degree of heterogeneity among their constituent units. Furthermore, very often these systems can be modelled only at a stochastic level, since complete knowledge of all the variables, the precise dynamics of the units and the interaction with the environment is not available. This is certainly the case in economic systems, which present stochasticity and heterogeneity as prominent features. One way to include heterogeneity in the modelling is to consider that the interactions between the agents are not homogeneous but mediated by some complex network, an approach that has attracted enormous attention in recent years. An issue that has been less studied from the analytical point of view is the intrinsic heterogeneity in the behavior of the agents themselves. The effect of heterogeneity in deterministic systems has been considered before, but the combined effects of stochasticity and heterogeneity have not been studied systematically, with few exceptions, due to the difficulty of their tractability. In this work, we analyze the combined effect of stochasticity and heterogeneity in agent-based models. We present a general approximation method suitable for the analytical study of this type of system. We show that heterogeneity can have an ambivalent effect on the fluctuations, enhancing or decreasing them depending on the form of the system and the way heterogeneity is introduced. In the case of independent agents, heterogeneity in the parameters always decreases the size of the global fluctuations. We also demonstrate that it is possible to obtain precise information about the degree and the form of the heterogeneity present in the system by measuring only global variables and their fluctuations. In particular, the correlation function qualitatively changes its form when heterogeneity is present. We exemplify our method by applying it to study a generalised version of the Kirman herding model in which agents have different intrinsic characteristics. We are able to derive exact analytical expressions that shed light on the effect of heterogeneity in the model. Heterogeneity among agents is a very generic feature of economic systems. Our work provides a framework for the systematic study of the effect of heterogeneity in stochastic systems, and thus has a wide range of applicability.
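A minimal Python sketch of a Kirman-type herding model with agent-specific switching rates, in the spirit of the heterogeneity discussed above, follows; the heterogeneity distribution and parameter values are assumptions for illustration, not the authors' exact generalization.

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 100, 50_000
eps = rng.uniform(0.001, 0.02, size=N)    # heterogeneous idiosyncratic switching propensities
herd = 0.01                               # strength of recruitment by the opposite group
state = rng.integers(0, 2, size=N)        # binary opinion/position of each agent

fraction = np.empty(T)
for t in range(T):
    i = rng.integers(N)                                   # pick a random agent
    n_other = int(np.count_nonzero(state != state[i]))    # agents currently in the other state
    # Switching probability: idiosyncratic term plus a herding term (Kirman-style).
    p_switch = eps[i] + herd * n_other / (N - 1)
    if rng.random() < p_switch:
        state[i] = 1 - state[i]
    fraction[t] = state.mean()

print("mean fraction in state 1:", round(float(fraction.mean()), 3))
print("std of fraction (global fluctuations):", round(float(fraction.std()), 3))
```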
38) Carlo Lucheroni, Università di Camerino, Italy
An Extended FitzHugh-Nagumo Model for Spikes and Antispikes in Electricity Market Data
Electricity price hourly data series display a behavior which is very different from the behavior typical of the much more studied stock or bond price series. For example, electricity prices display upward spikes and downward spikes (antispikes). Moreover, differently from stock or bond prices, electricity prices are explicitly correlated to an external process, electricity consumption, for which data series are often available in the same dataset as the price data. Demand is periodic, and spikes appear only in specific demand conditions, but occasionally, not always. Microeconomically, this can be related to the fact that electricity markets are commodity markets that can find themselves in tight conditions above some levels of demand, due to production capacity constraints or to transmission grid limitations. When modeling electricity price data, periodicity, spiking, stochasticity, and an intrinsic demand threshold mechanism have to be taken into account and linked together. An analysis of this kind of data is proposed here by means of an unconventional stochastic nonlinear model, derived from the FitzHugh-Nagumo equations of the biomathematics of neural systems, tuned close to a Hopf bifurcation point, so that noise and a periodic forcing can interact with nonlinearity to produce upward and downward spikes around a base level. The model includes one or more implicit thresholds that can be related directly to the microeconomics of the data. This model is an extension of a model introduced in Lucheroni, Phys. Rev. E 76, p. 056116, 2007. The phenomenology and inner workings of the model will be discussed with the help of phase-space analysis, and an econometric calibration technique suitable for this model will be shown and applied to data.
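For orientation, here is an Euler-Maruyama sketch in Python of the standard textbook FitzHugh-Nagumo system with additive noise and periodic forcing; the author's extended model adds further structure that is not reproduced here, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, T = 0.01, 200.0
n = int(T / dt)
eps, a, b = 0.08, 0.7, 0.8                        # classic FitzHugh-Nagumo parameters
A, omega, sigma = 0.3, 2 * np.pi / 24.0, 0.25     # forcing amplitude/frequency, noise level

x = np.zeros(n)   # fast variable (the spiking component in this analogy)
y = np.zeros(n)   # slow recovery variable
for t in range(n - 1):
    forcing = A * np.sin(omega * t * dt)          # periodic drive (e.g. a demand cycle)
    dx = x[t] - x[t]**3 / 3.0 - y[t] + forcing
    dy = eps * (x[t] + a - b * y[t])
    x[t + 1] = x[t] + dx * dt + sigma * np.sqrt(dt) * rng.normal()
    y[t + 1] = y[t] + dy * dt

print("largest excursion of the fast variable:", round(float(np.abs(x).max()), 2))
```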
39) Haydee Lugo, Universidad Complutense de Madrid, Spain
Popularity-Based Decisions on Structured Populations
We study the presence of a spatial structure within and between two populations whose members measure the degree of optimality of their plays by observing the choices made by their fellow agents. To capture this feature, we consider an evolutionary version of the theoretical model proposed by Cabrales and Iriarte [2012, Journal of Evolutionary Economics] in which individuals are sensitive to the popularity of their current strategies. We show the long-run outcome of the structured populations governed by popularity-based selection dynamics. We address issues such as the dependency on the size of the populations. This allows a comparison of theoretical results valid for an infinitely large population with simulation results. Also, we examine the effect of locality and clustering by considering random networks. By increasing the neighborhood size and the connectivity (mean degree of the network), the limit of fully connected networks can be recovered.
40) Eliza Lungu, National Institute for Labor and Social Protection, Romania
Patterns in Frontier Stock Markets
Academic research on frontier stock markets is limited, due to data availability and credibility issues, and, to my knowledge, there is almost no econophysics research on them. This paper aims to fill that gap: first, by providing an updated literature review on the subject of frontier markets and, second, by analyzing these markets from an econophysics perspective. I consider for this study the countries that are classified as FMs by the S&P Frontier BMI, the MSCI Frontier Markets Index and the FTSE Frontier 50 Index, in total 43 countries from all over the world: EUROPE (Bosnia Herzegovina, Bulgaria, Croatia, Cyprus, Estonia, Latvia, Lithuania, Macedonia, Malta, Romania, Serbia, Slovakia, Slovenia, Ukraine), AFRICA (Botswana, Ghana, Ivory Coast, Kenya, Mauritius, Namibia, Nigeria, Tunisia, Zambia, Zimbabwe), MIDDLE EAST (Bahrain, Jordan, Kuwait, Lebanon, Oman, Qatar, U.A.E.), ASIA (Bangladesh, Cambodia, Pakistan, Kazakhstan, Sri Lanka, Vietnam) and LATIN AMERICA (Argentina, Colombia, Ecuador, Jamaica, Panama, Trinidad & Tobago). First, I investigate the tail distribution of the daily volatility of the main stock market indices of the considered countries for power-law behavior using the Hill estimator. Second, I look for patterns of long-range correlations in the considered indices with R/S analysis and detrended fluctuation analysis (DFA). Finally, I choose one representative country, in my view, from each region and perform a network analysis. I look at centrality measures, density and motifs in order to extract patterns. I tackle the subject from different directions because I see this study as an exploratory one, with the scope of identifying patterns in small emerging markets and laying the ground for new questions.
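The Hill estimator mentioned above is compact enough to sketch directly; the Python example below applies it to synthetic Pareto-tailed data (tail exponent 3) in place of index volatility, with the number of order statistics k left as the usual free choice.

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill tail-index estimate from the k largest observations."""
    x = np.sort(np.asarray(sample, dtype=float))[::-1]   # descending order
    top, threshold = x[:k], x[k]
    return 1.0 / np.mean(np.log(top / threshold))

# Synthetic Pareto-tailed "volatility" sample with true tail exponent alpha = 3.
rng = np.random.default_rng(7)
alpha_true = 3.0
sample = (1.0 / rng.random(20_000)) ** (1.0 / alpha_true)

for k in (100, 500, 2000):
    print(f"k = {k:4d}  Hill alpha = {hill_estimator(sample, k):.2f}")
```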
41) Daniel Längle, ETH Zürich / MTEC, Austria
A Simple Model for Global Defaulting Cascades in Financial Networks
Based on the paper `Systemic risk in a unifying framework for cascading processes on networks' by J. Lorenz, S. Battiston and F. Schweitzer, we develop an agent-based model that simulates the cascading process of bank defaults in a network of financial interlinkages. The mechanism is based on the balance sheet structure of a bank and the spread of financial distress via mutual holdings. When a bank defaults, its assets are reduced due to bankruptcy costs and used to pay back the liabilities. The remaining unpaid liabilities have to be written off the balance sheets of neighboring banks which held assets of the defaulting party, which in turn increases their own risk of defaulting. This simple mechanism allows for the observation of cascading defaults in the system and reveals `tipping points' at which the system switches from stability to breakdown after a small change of the input parameters. We are able to make some meaningful statements about the influence of the average equity ratio and of diversity across the network on the overall stability of the system. We also investigate the influence of different assumptions about the underlying network structure and the number of connections between the banks.
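A minimal Python sketch of the write-off cascade described above follows; the random network, balance-sheet values and recovery rate are placeholders chosen for illustration, not calibrated to the model in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 50
# exposure[i, j]: interbank asset of bank i against bank j (random placeholder network).
exposure = (rng.random((N, N)) < 0.1) * rng.uniform(0.5, 2.0, (N, N))
np.fill_diagonal(exposure, 0.0)
external_assets = rng.uniform(10.0, 20.0, N)
equity = 0.05 * (external_assets + exposure.sum(axis=1))   # thin equity buffers
recovery = 0.4                                             # 1 minus the bankruptcy cost share

defaulted = np.zeros(N, dtype=bool)
equity[rng.integers(N)] = -1.0        # initial shock: one bank's equity is wiped out

while True:
    newly = (equity < 0) & ~defaulted
    if not newly.any():
        break
    defaulted |= newly
    # Creditors write off the unrecovered part of their claims on the newly failed banks.
    losses = (1 - recovery) * exposure[:, newly].sum(axis=1)
    equity = equity - losses
    exposure[:, newly] = 0.0

print("banks defaulted:", int(defaulted.sum()), "out of", N)
```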
42) Thomas Maillart, ETH Zurich, Switzerland
Exploring the Limits of Safety Analysis in Complex Technological Systems
Coauthors: Didier Sornette, Thomas Maillart, Wolfgang Kröger
From biotechnology to cyber-risks, most extreme technological risks cannot be reliably estimated from historical statistics. Engineers resort to probabilistic safety analysis (PSA), which consists in developing models to simulate accidents and potential scenarios, together with their severity and frequency. However, even the best safety analysis struggles to account for evolving risks resulting from inter-connected networks and cascade effects. Taking nuclear risks as an example, the predicted plant-specific distribution of losses is found to be significantly underestimated when compared with available empirical records. A simple cascade model suggests that the classification of the different possible safety regimes is intrinsically unstable in the presence of cascades. Even the best probabilistic safety analysis requires additional continuous validation, making the best use of realized incidents, near misses and accidents as they are experienced.
43) Vladimir Marbukh, NIST, USA
Towards Modeling Effect of Agents’ Bounded Rationality on Performance of Selfish Resource Allocation
According to conventional economic thinking, the interrelated assumptions of Perfect Competition (PC) and the Efficient Market Hypothesis (EMH) imply that market resource allocation schemes have built-in stabilizing mechanisms. However, recent events suggest that these mechanisms may be losing their ability to eliminate or mitigate unstable system behavior. It appears difficult to completely attribute this trend to the market fallibility and reflexivity articulated by George Soros, since these traits were present throughout the market's existence. Moreover, the EMH suggests that modern technology, enabling market participants to acquire and rationally exploit near real-time market state information, should have a positive effect on the overall system performance. The big question is identifying the market mechanisms, in addition to fallibility and reflexivity, that are responsible for the increasing market tendency towards unstable behavior. Econophysics may be able to answer this question by adopting concepts and models developed in physics, particularly statistical physics and thermodynamics. Similarities between macroeconomics and equilibrium thermodynamics are mostly due to (a) the optimization principle: utility maximization in economic systems or entropy maximization in closed thermodynamic systems; and (b) the underlying stochastic micro-dynamics: selfish dynamics of market participants or dynamics of the microscopic particles comprising the physical system. Our conjecture is that the increasing market tendency towards unstable behavior may be explained by the combination of inherent micro-level stochasticity and the increasing availability of dynamic system state information. Indeed, the technological inability of selfish agents to exploit system fluctuations effectively eliminated “Maxwell Demons” from the system, and thus made selfish resource allocation schemes perform similarly to a closed thermodynamic system. It is known that in a closed thermodynamic system fluctuations dissipate and the system converges to the equilibrium state of maximum entropy, given exogenous constraints. However, in an open non-linear thermodynamic system, noise amplification through interactions of non-linearity and feedback loops can result in continuous (second-order) and discontinuous (first-order) phase transitions. In the case of continuous phase transitions, noise amplification results in the initial system equilibrium losing its stability and the emergence of another equilibrium. In the case of discontinuous phase transitions, noise amplification results in the emergence of a distant metastable equilibrium which coexists with the original equilibrium. We conjecture that the selfish agents' ability to exploit system fluctuations transforms these agents into “Maxwell Demons” capable of fluctuation amplification, potentially causing system instability. The above conjecture, assuming it has something to do with reality, poses numerous questions which have important implications for the design and control of complex infrastructures with selfish components. Among the issues deserving further investigation are the following: (a) the possibility of maintaining optimal levels of agents' bounded rationality through control of information availability, (b) whether rising system fluctuations may be a source rather than the consequence of emerging instability, as some recent publications suggest, and (c) whether it is possible to create instability by manipulating the noise level in the system.
In our presentation we intend to discuss the above conjecture as well as some of its implications and examples.
44) Nikolitsa Markou, Imperial College, United Kingdom
The Percolation Perspective to The Eurozone Dilemma
Coauthors: Dr. Moez Draief
Financial markets have recently experienced unprecedented turmoil, triggered not only by economic phenomena, such as the debt crisis that spreads across Europe, but also by social and political phenomena, such as the recent Greek elections. This turmoil has raised serious doubts about the future of economic coalitions and it has a significant share of responsibility for the worldwide spread of instability and insecurity, producing further uncertainty for the markets. It is thus inevitable to think that both economic and social aspects of our world are largely interconnected. We propose a binary opinion threshold model that may describe the dilemma about staying in or exiting the Eurozone; a dilemma faced by European countries nowadays. Each country interacts with its neighbours and, if the fraction of opposite neighbours is above a certain threshold, the country decides to change strategy. The threshold consists of a combination of payoff and cost; as payoff we consider the long-term benefit of each choice, while as cost we consider the short-term losses - political, economic and social - that countries may suffer due to a strategy change. We assume that the long-term benefit of both decisions is equal and that the cost incurred by a strategy change is the same for all nodes. However, costs that correspond to different directions of change may differ. All European countries consider exiting the Eurozone with some probability independent of the others; this probability is called the initial condition. Simulations of this model showed similar behaviour for various network models: either consensus or a stable coexistence of strategies is reached, depending on the initial condition. It is observed that the regions of consensus strongly depend on the initial condition of the network and the cost values. The emergence of stable configurations is considered responsible for the coexistence of opinions. For small values of the initial condition, the clusters of the minority strategy are small and isolated, but as the initial condition reaches a critical value, a large cluster starts to grow. At the same value, the second largest cluster shows a sharp increase, characteristic of a second-order transition. This behaviour is reminiscent of standard percolation, with the difference that interaction between neighbouring nodes is allowed. Simulations of this model on a square lattice, where the initial condition is not independent but depends on the node's location in the network, showed that for various cost combinations this model is in the same universality class as standard percolation. Even though this model belongs to the same universality class as standard percolation, it exhibits some significant differences, especially regarding the local interaction. The existence of various cost combinations can better describe the conditions under which a strategy becomes dominant in a network. Seen in an economic context, this model may provide us with some perspective on how to predict and avoid the propagation of financial distress and the failure of an economic network, such as the Eurozone.
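A minimal Python sketch of such a threshold dynamics on a square lattice is given below: nodes flip when the fraction of opposing neighbours exceeds a direction-dependent threshold, starting from an independent initial condition. The two threshold values stand in abstractly for the payoff/cost combination described above and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
L, steps = 50, 200
theta_exit, theta_stay = 0.5, 0.5       # direction-dependent thresholds (stand-ins for payoff + cost)
p_init = 0.15                           # initial probability of preferring "exit"
state = (rng.random((L, L)) < p_init).astype(int)   # 1 = exit, 0 = stay

def opposing_fraction(s):
    """Fraction of the four lattice neighbours (periodic boundaries) holding the opposite strategy."""
    same = sum(np.roll(s, shift, axis) == s
               for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
    return (4 - same) / 4.0

for _ in range(steps):
    frac = opposing_fraction(state)
    flip_to_stay = (state == 1) & (frac > theta_exit)
    flip_to_exit = (state == 0) & (frac > theta_stay)
    state[flip_to_stay] = 0
    state[flip_to_exit] = 1

print("final fraction choosing 'exit':", round(float(state.mean()), 3))
```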
45) Stefano Marmi, Scuola Normale Superiore, Italy
Value Matters: Predictability of Stock Index Returns
Coauthors: Natascia Angelini, Giacomo Bormetti, and Franco Nardini
The aim of this paper is twofold: to provide a theoretical framework and to give further empirical support to Shiller's test of the appropriateness of prices in the stock market based on the Cyclically Adjusted Price Earnings (CAPE) ratio. We devote the first part of the paper to the empirical analysis, and we show that the CAPE is a powerful predictor of future long-run performance of the market, not only for the U.S. but also for countries such as Belgium, France, Germany, Japan, the Netherlands, Norway, Sweden and Switzerland. We show four relevant empirical facts: i) the striking ability of the logarithmically averaged earnings-over-price ratio to predict returns of the index, with an R squared which increases with the time horizon; ii) how this evidence strengthens when switching from returns to gross returns; iii) moving over different time horizons, the regression coefficients are constant in a statistically robust way; and iv) the poorness of the prediction when the precursor is adjusted with the long-term interest rate. In the second part we provide a theoretical justification of the empirical observations. Indeed, we propose a simple model of the price dynamics in which the return growth depends on three components: a) a momentum component, naturally justified in terms of agents' belief that expected returns are higher in bullish markets than in bearish ones; b) a fundamental component proportional to the log earnings-over-price ratio at time zero, where the initial value of the ratio determines the reference growth level from which the actual stock price may deviate as an effect of random external disturbances; and c) a driving component ensuring the diffusive behaviour of stock prices. Under these assumptions, we are able to prove that, if we consider a sufficiently large number of periods, the expected rate of return and the expected gross return are linear in the initial time value of the log earnings-over-price ratio, and their variance goes to zero with a rate of convergence equal to minus one. Ultimately this means that, in our model, the stock price dynamics may generate bubbles and crashes in the short and medium run, whereas for future long-term returns the valuation ratio remains a good predictor. A preprint version of the manuscript is available at http://ssrn.com/abstract=2031406.
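The type of predictive regression behind fact (i) can be sketched as follows in Python: regress cumulative future returns on the initial log earnings-over-price ratio and report the R squared as the horizon grows. The data-generating process below is a toy mean-reverting valuation model with invented coefficients, not the authors' data or model.

```python
import numpy as np

rng = np.random.default_rng(10)
T = 1200                                  # months of toy data
log_ep = np.zeros(T)                      # log earnings-over-price ratio
ret = np.zeros(T)                         # monthly log return
for t in range(1, T):
    log_ep[t] = 0.99 * log_ep[t - 1] + 0.02 * rng.normal()
    ret[t] = 0.003 + 0.02 * log_ep[t - 1] + 0.04 * rng.normal()   # valuation predicts returns

def r_squared(horizon_months):
    """OLS R^2 of cumulative future returns on the initial log(E/P)."""
    y = np.array([ret[t + 1:t + 1 + horizon_months].sum()
                  for t in range(T - horizon_months - 1)])
    x = log_ep[:T - horizon_months - 1]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

for years in (1, 5, 10):
    print(f"{years:2d}-year horizon: R^2 = {r_squared(12 * years):.2f}")
```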
46) Robert Marx, TU-Dresden, Germany
Risk-adjusted behavior of financial intermediaries in a world of alpha-stable distributed returns
Coauthors: none
The world financial system was hit by a financial crisis in 2007/2008. Many reasons have been put forward, such as bank size, moral hazard, deregulation, the leverage ratio, etc. All these reasons have something to do with the risk-adjusted behavior of financial intermediaries. Financial market theory is usually a closed (µ,σ)-world: financial market movements are modeled as a Gaussian process, which converges to a stable, theoretically describable distribution, and financial crisis scenarios are excluded. But empirical distributions of returns on financial markets have so-called fat tails (leptokurtosis), like a Lévy distribution. According to Benoit Mandelbrot, returns are alpha-stable distributed (with α < 2). This means the variance is not finite, and financial crises (black swans) can be part of the distribution. Unfortunately, an exact calculation of risk is then not possible. So risk-adaptive behavior has to be based on a contingent selection procedure and cannot be based on a minimizing or maximizing calculus. In a world with α < 2, analytic solutions are impossible to calculate. Therefore risk-adaptive behavior has to be simulated (implemented in Java). The idea is to create a simulation environment based on realistic return distributions for testing different value-at-risk regimes and different regimes of regulation (such as a leverage ratio). VaR is a method to measure risk which is usually based on (µ,σ) assumptions, but there exists a so-called historical VaR which can be used instead. The model consists of (x) agents and (n) stocks with alpha-stable returns. Financial intermediaries such as banks, insurance companies and hedge funds are the agents. They are distinguished by their balance sheet rules and they are able to buy and sell stocks out of a market portfolio using the historical VaR. The budget is based on their equity (E) and their liabilities (L). The agents maximize the equity return (rE) with a satisficing concept, based on the Modigliani-Miller theorem (irrelevance hypothesis). The interest rate (rL) is given and stable. Strangely, theory implies an unbounded solution: if the return on investment (rS) is higher than (>) the cost of borrowing money (rL), companies should invest as much as possible, with infinite leverage. Therefore, in an optimal world (normally distributed returns) profits would be infinite and financial institutions would gradually increase risk. That can be observed in reality and is therefore regulated. In the USA, for example, the leverage ratio is set to E/L = 1/30. But this regulation is not based on a theoretical calculus; it is determined by history and could be different. That is the reason why the leverage ratio is examined as a regulation method in the simulation. Only one company (x = 1) and several stocks exist in the basic simulation. Depending on the α setting, on the leverage ratio setting and on the decision-making procedure, it can be shown that the illiquidity possibilities differ and that in the long run, regardless of the regulation, presumably all kinds of financial intermediaries become illiquid if they can borrow.
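The core ingredient, alpha-stable returns combined with a historical VaR, can be sketched in Python as below (the abstract's own simulation is in Java); distribution parameters and the confidence level are illustrative, and the Gaussian VaR is shown only for contrast.

```python
import numpy as np
from scipy.stats import levy_stable, norm

alpha, skew = 1.7, 0.0            # heavy-tailed, symmetric alpha-stable returns (alpha < 2)
returns = levy_stable.rvs(alpha, skew, loc=0.0, scale=0.01,
                          size=10_000, random_state=11)

level = 0.99
# Historical VaR: an empirical loss quantile, no (mu, sigma) assumption needed.
var_hist = -np.quantile(returns, 1 - level)
# A Gaussian VaR fitted to the same sample understates the tail risk here.
var_gauss = -(returns.mean() + norm.ppf(1 - level) * returns.std())

print(f"99% historical VaR: {var_hist:.4f}")
print(f"99% Gaussian VaR:   {var_gauss:.4f}")
```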
47) Matthias Meyer, Hamburg University of Technology, Germany
Manipulation in Prediction Markets: Combining Economic Laboratory Experiments and Agent-Based Models
Coauthors: Frank Klingert
Prediction markets are a promising instrument for drawing on the ``wisdom of the crowds''. For example, firms use them successfully to forecast sales numbers or project risks, drawing on the heterogeneous knowledge of decentralized actors in and outside companies. Manipulation constitutes an important aspect of prediction market research, as it can negatively affect prediction market accuracy. We focus on externally motivated trade-based manipulation because of its high relevance in prediction markets. For example, the manager of a project may take part not to contribute his knowledge concerning the deadline of his project, but because he will not meet the deadline and does not want this information to become public via the prediction market. Current research on manipulation in prediction markets is inconclusive. On the one hand, empirical research is regularly not able to identify a significant effect on prediction market accuracy [Wolfers and Zitzewitz 2004]. On the other hand, exceptions are reported in large-scale prediction markets about elections [Hansen et al. (2004)] as well as in small prediction markets about sport events [Christiansen (2007)]. We contribute to this discussion by introducing an agent-based simulation model derived from the laboratory experiments of Hanson et al. [2006]. While this experiment is not able to find instances of successful manipulation, the simulation model shows that increasing the manipulation strength of manipulators leads to an increase of the accuracy error. Furthermore, the effects of manipulation are analyzed in interaction with environmental as well as market mechanism-related effects. The analysis of the simulation experiments shows that an increased number of traders does not generally lead to lower manipulation. However, the influence of manipulation decreases when manipulation by several manipulators is considered as a public good game and cooperation among an increased number of manipulators is assumed to be less likely. Finally, the influence of different trading mechanisms is investigated. The logarithmic market scoring rule is shown to be more robust against manipulation than the continuous double auction. On a methodological level, this paper also reflects on ways to combine economic laboratory experiments with agent-based modeling. While experiments represent an appropriate method to demonstrate the directional influence of the independent variables analyzed, they are usually limited to investigating two or three levels of these variables. Deriving results about the magnitude of an effect or about the functional relationship (linear, convex or concave) between two variables is very time- and resource-consuming. Moreover, for the same reasons, the potential to combine more than two or three different factors in order to investigate their interaction is limited. The basic idea of combining the two methods is to partly replace economic laboratory experiments by agent-based models, as these allow for investigating a large number of parameters in relatively short time and at comparatively low cost. These simulations can extend the experimental results to other levels of the independent variable and therefore provide evidence as to the functional relationship. Moreover, they can be used to identify crucial settings and “tipping points”, which can subsequently be tested in the laboratory.
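For readers unfamiliar with the logarithmic market scoring rule compared above, a short Python sketch of Hanson's LMSR follows: the cost function C(q) = b·ln Σ exp(q_i/b), the prices as its gradient, and the payment a would-be manipulator makes to move the price. The liquidity parameter and trade size are arbitrary.

```python
import numpy as np

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    q = np.asarray(q, dtype=float)
    return b * np.log(np.sum(np.exp(q / b)))

def lmsr_price(q, b=100.0):
    """Instantaneous prices (gradient of the cost), interpretable as probabilities."""
    q = np.asarray(q, dtype=float)
    e = np.exp(q / b)
    return e / e.sum()

q = np.array([0.0, 0.0])                 # outstanding shares of outcomes A and B
print("initial prices:", lmsr_price(q))

# A would-be manipulator buys 80 shares of outcome A; the payment is the cost difference.
q_new = q + np.array([80.0, 0.0])
payment = lmsr_cost(q_new) - lmsr_cost(q)
print("prices after the trade:", np.round(lmsr_price(q_new), 3))
print("cost of the manipulation:", round(payment, 2))
```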
48) Dubravka Mijuca, University UNION- Nikola Tesla, Serbia
On Accessing the Accurate Parameters in Bank Risk Evaluation of EE Investments in Buildings
Contemporary bank risk evaluation of EE investments in buildings is often based on reports elaborated by experts from the HVAC (heating, ventilation, and air conditioning) community. They traditionally use simplified methods to obtain estimates only of the capacity of the HVAC mechanical systems that should be installed to achieve comfort in a specific building. Their calculations use data on linear heat transfer through walls per building zone. Nevertheless, the design of an energy-efficient building requires extremely accurate quantitative values of the integral performance of the building, expressed as kWh/year/m2, taking into account all building aspects, such as location, meteorology, composition of the building envelope, doors, windows, metabolism, air flow, and surface or volume heat transfer, in one simulation framework, where answers are given to investors or creditors at a day's notice. Namely, in order to lower the risk, the creditors must know the exact cost/performance (investment/(kWh/year/m2)) ratio for a number of different options of the above-mentioned parameters. If sustainable development sees a building as a structure that should heat and cool itself, using an external mechanical system only when comfort cannot otherwise be achieved, this implies an integral calculation of energy efficiency. The knowledge profile of the technical consultant hired by a bank to help estimate the risk of EE investments in buildings, as well as the state-of-the-art computational physics methodology for that endeavor, are presented. The difference between the old and the new EE evaluation processes is illustrated on buildings erected or reconstructed in Europe.
49) Felix Patzelt, University of Bremen, Germany
Extreme Events in Small-Scale Minority Games
Coauthors: Klaus Pawelzik
A striking feature of financial markets is the heavy-tailed distribution of returns: price changes that are orders of magnitude larger than the typically observed ones occur much more frequently than would be expected from a Gaussian distribution. On the one hand, agent-based models like the minority game are used to explain extreme price movements as collective phenomena arising from a competition of strategies. On the other hand, stochastic models are used for characterizing the statistical properties of price fluctuations. Many of these models are based on the efficient market hypothesis. It is often presumed that traders behave maximally rationally and that price fluctuations are natural consequences of very effectively balancing out arbitrage opportunities. However, the high frequency of extreme price movements like crashes seems to contradict market efficiency. Such events are often believed to be related either to malign behavior of a few potent market participants or to irrational herding effects of speculators. Further, studies in behavioral economics suggest that humans may not behave like ideal rational traders. However, subjects are typically studied in isolation or in very small interacting groups. We investigated whether it is possible to study large collective phenomena in model markets experimentally with moderately sized groups of participants. We found that many approaches towards understanding price fluctuations can be mapped in a mathematically precise way to a particularly simple and very illustrative game. The rules correspond to a modified minority game which is combined with a simple yet effective visualization that allows subjects to easily understand the game. Strong herding behavior robustly emerged even with just 10 to 15 players. However, this effect was found not to contradict collective market efficiency. On the contrary, herding is caused by subjects exploiting predictable price changes. Simulations of larger markets consisting of stochastic agents modeled after the experimental findings exhibit power-law log-return distributions and volatility clustering. We also investigated an online game where different subjects competed against adaptive, partially predictable virtual players over the course of several months. Subjects were able to successfully exploit the virtual players, and the resulting time series again exhibit the aforementioned stylized facts. We are currently preparing experiments with larger numbers of participants via the internet. Since the game is entertaining to play, it can further be used to playfully convey scientific approaches as well as basic insights of game-theoretical approaches in economics.
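To convey the mechanism behind the (modified) minority game mentioned above, here is a compact Python sketch of the standard minority game with strategy scoring; the experimental modification and the visualization used in the study are not reproduced, and the parameters are the usual textbook ones.

```python
import numpy as np

rng = np.random.default_rng(12)
N, M, S, T = 15, 3, 2, 2000          # players, memory, strategies per player, rounds
n_hist = 2 ** M
# Each strategy maps every possible M-bit history to an action in {-1, +1}.
strategies = rng.choice([-1, 1], size=(N, S, n_hist))
scores = np.zeros((N, S))
history = rng.integers(n_hist)
attendance = np.empty(T)

for t in range(T):
    best = scores.argmax(axis=1)                           # each player uses her best strategy
    actions = strategies[np.arange(N), best, history]
    A = actions.sum()
    attendance[t] = A
    winning = -np.sign(A) if A != 0 else rng.choice([-1, 1])   # the minority side wins
    # Virtual payoff: a strategy is rewarded when it would have chosen the minority side.
    scores += (strategies[:, :, history] == winning).astype(float)
    history = ((history << 1) | (1 if winning > 0 else 0)) % n_hist

print("volatility sigma^2 / N:", round(float(attendance.var() / N), 2))
```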
50) Josep Perello, Universitat de Barcelona, Spain
Statistical Analysis of Non Expert Investors' Behaviour
Coauthors: Mario Gutiérrez-Roig
Collective human behaviour in society is increasingly being studied empirically by scientists. Nowadays, globalization, the internet and our ICT society provide very large amounts of data ready to be studied empirically. Most of the challenges can be summarized by the question of how microscopic interactions between agents trigger macroscopic phenomena at a large-scale level. In the context of financial markets, this information is particularly accessible and easy to quantify because each single decision, for instance to buy or sell, is recorded. It is generally assumed that the heterogeneous dynamics of financial trading by individuals (say the nano-level) could check the assumptions made in order-book modelling (say the micro-level) and eventually lead to a better understanding of the nature of the empirical stylized facts of the price and volatility of a given asset (say the macro-level). Unfortunately, nano-level data on individual decisions is not easily accessible for empirical studies. Research is mostly limited to the study of order placement in the book without knowing the identity behind each trading decision, and when modelling agents' behaviour some untested but reasonable hypotheses are adopted. We here present a study that aims to fill this lack of information at the individual level. We study a dataset that contains more than 7 million individual records from 29,930 non-expert investors (clients) of a particular investment firm. All of them traded between 2000 and 2007 in 120 of the most traded assets of the Spanish stock market IBEX. The price, date, number of shares traded and associated manager are recorded for each transaction, so that we can study variables like the number of operations, trading volume, income and other observables for each individual. First results show an extremely large heterogeneity in the fundamental observables. We also study cross-correlations between the above-mentioned fundamental observables and we find that the trading volume scales linearly with the number of operations. The activity profile of each client is also an important property to analyse. Activity clustering for individual investors and synchronisation between the most active ones are observed. We map this synchronisation by creating a network for each asset and studying its properties. Different communities of synchronised agents also arise. We can see how these communities share some of the properties and observables described above. It is also reasonable to consider that the main influence on investors' responses is the price of the asset they are trading. Therefore we also study the correlation between the daily return of the asset price and the daily position, i.e. the number of shares bought minus the number of shares sold. As expected, on average these two variables are negatively correlated. This means that, on average, people buy more when the market falls and sell more when the market rises. For investors with a closed position, we also calculate the correlation between the daily return of the asset and the daily income earned by the investor.
51) Denis Phan, GEMASS (CNRS & University Paris IV Sorbonne), France
Comparison Between Two Approaches to Modelling Schooling Segregation: Cognitive vs. Behavioural Agents
Coauthors: N. Bulle, A. Diallo, J.C. Francois, H. Mathian, J.P. Muller, D. Phan, L. Sanders, R. Waldeck
We compare two variants of an ACE model with individual choices about schooling localisation and analyse the resulting segregation phenomenon in an abstract urban zone based on empirical observations of Paris and its suburbs. The empirical question deals with the French educational system. It is organized by way of a ``school mapping system'' in which each pupil is assigned to a school according to his living area. A few years ago, the constraint imposed by the school mapping was relaxed, authorizing parents to apply to schools outside their living area under certain conditions. The political questions behind the flexibility of the schooling choice (school mapping relaxation) concern both the educational efficiency and the reduction of inequality in the educational system. On the theoretical side, the two variants are based on different ``points of view'' about agent theory. (1) A behavioural assumption on the one hand: the choice of a particular school by a family depends on the cultural, economic and social context, defined at the social group level, and inferred from empirical data. (2) A cognitive assumption on the other: the individual school choices are of a micro-economic decision type where individual preferences over different objectives may be parameterized by economic, social or cultural values. Starting from a stylized (but empirically based) segregated residential configuration, the two variants of the model have been built to simulate emergent forms of the distribution of the schoolchild population, e.g. heterogeneity in educational level across schools, the relationship between residential population patterns and the schoolchild population through different measures of segregation or inequality, similarity or difference between closely localized schools, etc. The simulation analysis focuses on the evolution of the characteristics of schools' populations in relation to the following factors: global school policy (constraining family choices by pre-determined sectors or not); the forms of selection (based on ``merit'' and performance or not); the educational policy of schools, e.g. specific curricula, etc. In order to compare the two variants and avoid implementation bias, the two variants share large parts of a common ontology and are programmed on MIMOSA, a MAS platform developed at CIRAD by J.P. Muller that is able to explicitly integrate models with multiple points of view. Each step of the model building activity (ontology, specification and calibration, initialisation) has been discussed from three academic fields: Sociology and Economics on the one hand, Geography on the other. The aim was to neutralize minor differences, i.e. those not based on strongly justified differences in the conceptual (paradigmatic) point of view: ``behavioural'' versus ``cognitive''. As a consequence, the variants focus on only a limited, but significant, set of differences in the way agents (pupil, school principal and school administration) make their choices. The comparison of the intermediate results obtained aims to answer the following questions: under what conditions do these two approaches lead to similar or different results? At the interpretation level (making of sense), what kind of meaning and conclusions could be built within both points of view about the decision making by social agents? What kind of knowledge could be deduced from the pertinence or difference of these points of view?
52) Satnam Reehal, Imperial College London, UK
Testing the Validity of Common Assumptions about Financial Networks against Real World Data
Coauthors: Lavan Mahadeva
A reaction to the ongoing financial crisis is that cross-sectional systemic risk in the financial system is now considered by policymakers to be key to managing similar future scenarios. Financial network models that use interlinkages to look beyond the immediate ``point of impact'' of a shock or default to the likely spillovers that arise are being employed. These models make assumptions about network structure, the properties of banks and how failures propagate. This paper aims to assess these assumptions and understand their implications by comparing the properties of these models to a real-life banking network data set. The paper finds the assumptions that banks are on average uniform in size and in linkages to be very unrealistic.
53) Andries Richter, CEES, University of Oslo, Norway
Moral Subjectivism in Peer Sanctioning Overcomes Social Dilemmas Efficiently
Coauthors: Åke Brännström, Ulf Dieckmann
Social sanctions are powerful mechanisms for enforcing social norms that stabilize cooperation (Gurerk et al. 2006), especially when rewarding and punishing are combined (Andreoni et al. 2003). In particular, such sanctions can mitigate the kind of social dilemmas that inevitably arise in the joint management of common-pool resources (Ostrom 1990). While peer sanctions can help stabilize cooperation, they are often inefficiently costly, and may be used by non-cooperators to undermine any cooperative attempts (Herrmann et al. 2008). The question of when peer sanctions are effective in solving social dilemmas is still unresolved (Dreber et al. 2008, Gächter et al. 2008). Employing both an analytical and an agent-based model, in which economic decisions and sanctioning preferences evolve endogenously, we show that social dilemmas can be overcome effectively if one's own behaviour is used as the moral demarcation line between good and bad behaviour, with peers gauging penalties and rewards accordingly. Sanctioning based on this simplistic moral code, which we refer to as moral subjectivism (Posner 1998), engenders cooperative behaviour even when sanctions are weak or costly, individuals make mistakes, and the socially optimal exploitation level is unknown. Moral subjectivism is a successful moral code when economic decisions are revised frequently relative to changes in the moral preference for sanctioning peers (Williamson 2000, North 2005). Unexpectedly, we find that sanctions are much less efficient when not one's own, but average group behaviour is used as the moral yardstick. We also find that moral subjectivism can compensate, up to a certain level, for obstacles to the maintenance of cooperation, such as large group sizes or high incentives for selfish acts. When key determinants, such as perception errors or disregard for social sanctions, pass a critical threshold, the tendency to sanction collapses dramatically and individuals abruptly switch from socially optimal exploitation to massive overexploitation.
54) Leonidas Sandoval, Insper, Instituto de Ensino e Pesquisa, Brazil
Causality Relations Among International Stock Market Indices
The understanding of how international financial market crises propagate is of great importance for the development of efficient policies for blocking their dissemination. Correlations between stock market indices have been used previously in order to establish how those indices relate to each other, but any sort of correlation measure, being symmetric, fails to establish causality relations between them. In this work, the author uses a measure originating from information theory and based on the Shannon entropy, called transfer entropy, in order to probe the causality structure of 92 international stock market indices for the period ranging from 2007 to 2011. Transfer entropy has been successfully applied to the theory of cellular automata, to the study of the human brain, and also to stock market indices, although in only a few studies. It measures the average information contained in the source about the next state of the destination (in our case, both source and destination are indices) that was not already contained in the destination's past. This measure is both asymmetric and dynamic, and does not depend on any assumptions about the type of relations between the indices. Taking into account the cyclical nature of the opening hours of the world stock market indices, when markets are open in Asia but not in America and so on, the work uses, in addition to the 92 indices, those same indices lagged by one day. The resulting network with 184 nodes has been previously studied, using correlation, to unravel a structure that is typical of world indices, namely the division into two clusters, one of lagged indices and another of unlagged ones, both connected by Pacific Asian and Oceanian indices. The same set of indices is now used in the calculation of transfer entropy, resulting in a directed network that reveals causality relations, with a good amount of transfer entropy between North American indices among themselves and also between Western European ones among themselves. There is also a very clear transfer of information from Pacific Asian countries to the Western ones operating on the same day, and a large amount of information flowing from the Western countries to the next day of the Pacific Asian ones and also to themselves. The undirected network is also used in order to establish the most influential indices and the robustness of connections. Transfer entropy is a difficult measure to use, though, because it is highly dependent on the number of bins one uses in order to calculate the probability distributions of events in a time series, and it is also influenced by the inner entropy of each time series. The dependence of transfer entropy on the number of bins is also analysed in this work, and the problem of the intrinsic entropies of the indices may be partially solved by calculating the transfer entropy of randomized data and then removing it from the transfer entropy of the original time series. An additional analysis is made of the eigenvalues and eigenvectors, now complex, of the transfer entropy matrix.
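A hedged Python sketch of a histogram-based transfer entropy estimate T(Y→X), following the definition quoted above, is given below; the bin count is the free parameter the abstract warns about, and a shuffle baseline is printed in the spirit of the randomization correction. The two input series are synthetic, with x deliberately driven by lagged y.

```python
import numpy as np

def transfer_entropy(x, y, bins=6):
    """Histogram estimate of T(Y->X) = sum p(x1,x0,y0) log[p(x1|x0,y0) / p(x1|x0)]."""
    # Discretize each series into (roughly) equally populated bins.
    dx = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    dy = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    x1, x0, y0 = dx[1:], dx[:-1], dy[:-1]
    p_xxy, _ = np.histogramdd(np.column_stack([x1, x0, y0]), bins=(bins, bins, bins))
    p_xxy /= p_xxy.sum()
    p_xy = p_xxy.sum(axis=0, keepdims=True)      # p(x0, y0)
    p_xx = p_xxy.sum(axis=2, keepdims=True)      # p(x1, x0)
    p_x = p_xxy.sum(axis=(0, 2), keepdims=True)  # p(x0)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = (p_xxy * p_x) / (p_xx * p_xy)
        terms = np.where(p_xxy > 0, p_xxy * np.log(ratio), 0.0)
    return float(terms.sum())

rng = np.random.default_rng(13)
y = rng.normal(size=5000)
x = 0.6 * np.roll(y, 1) + 0.8 * rng.normal(size=5000)   # x is driven by lagged y
x[0] = rng.normal()

print("T(y -> x):", round(transfer_entropy(x, y), 4))
print("T(x -> y):", round(transfer_entropy(y, x), 4))
print("shuffle baseline:", round(transfer_entropy(x, rng.permutation(y)), 4))
```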
55) Niccolo' Stamboglis, City University London, United Kingdom
An Agent Based Model of the Interbank Market
Coauthors: Giulia Iori
The current paper introduces an agent-based model describing capital exchanges in the overnight money market among banking institutions. In the model, banks select their counterparties by processing both public and private information to assess the level of counterparty risk. The information available to market participants arises from three different informational sources: private information resulting from individual transactions, market information available to all participants, and the information obtained from the analysis of the interest rates observed in previously concluded transactions. The paper aims at understanding how market participants form preferential relationships based on available information, and how banks assess their counterparties in a period of financial ambiguity. Once the mechanisms of interbank preferential lending are defined, the paper aims at explaining how preferential relationships may influence the stability and efficiency of the credit market during a period of financial distress. In particular this paper aims at determining: (i) if the capacity of the interbank market to reallocate liquidity is affected by the proportion of relationship lending; (ii) if and how relationship lending influences interbank market stability; (iii) if and how relationship lending is affected by periods of financial distress. The model should also be able to predict the decisions of banks to leave the interbank market, possibly generating adverse selection, and segmentation between domestic and foreign banks leading to poor market integration.
56) Attilio Stella, Physics Department, University of Padova, Italy
Anomalous Scaling as a Guideline for Modeling Market Dynamics
Coauthors: F. Baldovin, M. Caporin, M. Caraglio, M. Zamparo
The renormalization group in statistical mechanics justifies and determines scaling properties of complex systems through the self-consistent application of a coarse-graining operation. In the case of financial time series, one can use the empirical, anomalous scaling as a guideline for modeling the underlying stochastic process, by implementing a plausible inversion of such a coarse-graining operation. The ingredients at the basis of this inversion naturally lead to modeling the market dynamics as the result of both endogenous and exogenous mechanisms. The former are directly responsible for volatility clustering, deviations from aggregated Gaussianity and long memory effects. The latter trigger volatility switches, induce multiscaling effects, and account for the lack of invariance under time reversal of the dynamics. In mathematical terms the model for successive returns is based on a convex combination of purely Gaussian independent processes (endogenous component), modulated by amplitudes which evolve as a Markov chain (exogenous component). A relevant feature of such a construction is that returns, while strongly dependent in order to account for anomalous scaling, are strictly uncorrelated at the linear level. Besides reproducing a large number of statistical stylized facts of financial time series, the model we present has the advantage of a high degree of analytical tractability and uses a rather limited number of parameters with simple and direct interpretation. The analytical tractability makes it possible to demonstrate some important properties, such as the stationarity of increments. The small number of parameters makes the present model more advantageous than ARCH and GARCH processes for calibration purposes. We will discuss successful calibration protocols exploiting the above features and based on both moment optimization and maximum likelihood estimation. Prospective applications to problems such as the description of Omori regimes after major crashes, or option pricing, will also be outlined.
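A loose, schematic sketch of the two-component idea (not the authors' exact specification) can be written as Gaussian returns whose amplitude is modulated by a two-state Markov chain; the amplitudes, the switching matrix and the crude stand-in for the convex combination of Gaussians are all assumptions made for illustration.

```python
# Schematic toy: exogenous Markov-switching amplitude times an endogenous Gaussian part.
import numpy as np

rng = np.random.default_rng(0)
T = 5000
amps = np.array([0.5, 2.0])                  # assumed low / high volatility amplitudes
P = np.array([[0.99, 0.01],                  # assumed Markov switching matrix (exogenous component)
              [0.02, 0.98]])

state = 0
sigmas = rng.choice([0.5, 1.0, 1.5], size=T)  # crude stand-in for a mixture of Gaussian components
returns = np.empty(T)
for t in range(T):
    returns[t] = amps[state] * sigmas[t] * rng.normal()   # Gaussian part modulated by the amplitude
    state = rng.choice(2, p=P[state])

# linear autocorrelation ~ 0, while squared returns remain persistent (volatility clustering)
ac = np.corrcoef(returns[:-1], returns[1:])[0, 1]
ac2 = np.corrcoef(returns[:-1] ** 2, returns[1:] ** 2)[0, 1]
print(f"lag-1 autocorr of returns: {ac:.3f}, of squared returns: {ac2:.3f}")
```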
57) Jose Luis Subias, University of Zaragoza, Spain
Spin Model with Negative Absolute Temperatures for Stock Market Forecasting
A spin model relating physical to financial variables is presented. Based on this model, an algorithm evaluating negative temperatures was applied to New York Stock Exchange quotations from May 2005 up to the present. Stylized patterns resembling known processes in phenomenological thermodynamics were found, namely, population inversion and the magnetocaloric effect. Moreover, the magnitude of negative temperature peaks correlates with subsequent price or index movements (upward or downward), such graphs being a good leading indicator for trend changes in markets. As complementary material, an online database, updated daily with the latest data from the New York Stock Exchange, is freely available to every reader wanting to review graphs like those presented. A preprint is also available: http://arxiv.org/abs/1206.1272
58) Paolo Tasca, ETH Zurich, Switzerland
The Matrix of Market Procyclicality and Systemic Risk
Coauthors: Stefano Battiston
We describe the systemic risk consequences of the balance sheet amplification mechanism at work in a system of interconnected banks whose balance sheets include claims against other banks and external assets related to the real side of the economy. Banks actively manage their balance sheets, in line with risk management practice, to adjust the Value-at-Risk to a target level. A matrix of market procyclicality can be defined in terms of (1) the strength of banks’ compliance with capital requirements, and (2) the asset market liquidity. If the system is perturbed by a common asset-price shock, the probability (surface) of a systemic default changes according to the area of market procyclicality. Our findings bear important policy implications. We show the existence of an area of weak procyclicality where systemic risk remains at low levels; an area of strong procyclicality where systemic risk is at high levels; and two areas of medium procyclicality with systemic risk at an intermediate level. Moreover, we find that the lower the value of the interbank market compared to the banks’ exposure towards the external market, the higher the probability of a systemic default.
59) Pietro Terna, Dep. of Economics and Statistics - Unito, Italy
SLAPP with learning facilities: L-SLAPP
The Swarm-Like Agent Protocol in Python (SLAPP) project (http://eco83.econ.unito.it/terna/slapp/) has the goal of offering, to scholars interested in agent-based models, a set of programming examples that can be easily understood in all their details and adapted to other applications. The project is based on the fundamental Swarm idea of the 1990s (www.swarm.org). Why rebuild Swarm in Python? Quoting from its main web page (www.python.org): «Python is a dynamic object-oriented programming language that can be used for many kinds of software development. It (...), comes with extensive standard libraries, and can be learned in a few days.» The first implementation of SLAPP dates from 2009 and is strictly related to the original Swarm protocol, in a both simple and complete object-oriented framework. The next step, introduced in 2010, is a new simulation layer written upon SLAPP, named AESOP (Agents and Emergencies for Simulating Organizations in Python), intended to be a simplified way to describe and generate interaction within artificial agents. We have there two different main families of agent: (i) bland agents (simple, unspecific, basic, insipid, …), used to populate our simulated world with agents doing basic actions (e.g., in a stock market, zero-intelligence agents), and (ii) tasty agents (specialized, with given skills, acting in a discretionary way, …), used to specify important roles in the simulation scenario. The schedules defining and managing agents’ actions operate (a) “in the background”, for all the agents or only for the bland ones, but in both cases written internally in the code, or (b) “in the foreground”, explicitly managed via an external scripting system, using text files or the spreadsheet formalism. Bland and tasty agents are built with different sets of agents, each containing a different number of elements; specifications are made via external text files. In 2011, a further step was demonstrating how AESOP can run very sophisticated scripts describing (flexible) simulation situations, also with internal IF structures, opening the perspective of repeated trial-and-error steps to take advantage of reinforcement learning techniques. Currently, a new step is demonstrating the ease of implementing reinforcement learning and neural network training within a SLAPP model. Agents behave randomly and generate a series of actions, evaluated as successful or unsuccessful via the simulation model and its environment. We memorize positive and negative actions, with their ex-ante data and with the evaluation of the related effects, in one or more neural networks (NNs). The NNs operate as a private or public knowledge base. After training, when agents have to act, they ask their neural networks for a set of guesses about the consequences of each possible action; then, they decide. NN training and application are made in R (http://www.r-project.org); R is connected to the Python environment via pyRserve (http://pypi.python.org/pypi/pyRserve/). Finally we have L-SLAPP: Learning in a Swarm-Like Agent Protocol in Python.
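The learning loop can be illustrated with a few lines of Python; this sketch uses a scikit-learn regressor as a stand-in for the R neural networks reached via pyRserve, and the environment, action codes and reward are hypothetical.

```python
# Illustrative sketch only: agents log (state, action) -> outcome, train a network on the
# log, then query it for "guesses" about each possible action before deciding.
import random
from sklearn.neural_network import MLPRegressor

ACTIONS = [0, 1, 2]                                   # hypothetical action codes
log_X, log_y = [], []

def environment(state, action):
    """Toy environment: the reward depends on matching the action to the hidden state."""
    return 1.0 if action == state % 3 else -1.0

# 1) random exploration phase: memorize ex-ante data and the evaluated effects
for _ in range(500):
    state = random.randrange(10)
    action = random.choice(ACTIONS)
    log_X.append([state, action])
    log_y.append(environment(state, action))

knowledge = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
knowledge.fit(log_X, log_y)                           # the NN acts as a (private or public) knowledge base

# 2) acting phase: ask the network for a guess about each possible action, then decide
def decide(state):
    guesses = knowledge.predict([[state, a] for a in ACTIONS])
    return ACTIONS[int(guesses.argmax())]

print([decide(s) for s in range(6)])
```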
60) Mario Vincenzo Tomasello, ETH Zurich, Switzerland
The Evolution of R&D Networks Across Industries
Coauthors: Mauro Napoletano, Antonios Garas, Frank Schweitzer
This paper investigates the evolution of the R&D alliances network over a 25-year period, from 1984 to 2009, across several industrial sectors. R&D alliances among firms are recognized to play a key role in fostering technology innovation and economic growth. The 1990s witnessed an unprecedented growth of strategic alliances and joint R&D activities aimed at innovation. Especially in high-tech industries - where technological change is rapid and technological knowledge is widely dispersed among firms - inter-organizational collaborations have become a central component of firms' innovation strategy. The increasing importance of R&D alliances led to the appearance of R&D collaboration networks, often showing a complex structure. Several works - both theoretical and empirical - have already studied the main features of R&D networks. In particular, empirical studies have either focused on the correlation between firm network position and firm innovation performance, or they have analyzed the evolution of the structural properties of the network over time. One shortcoming of the latter strand of research is the exclusive focus on one or a few industrial sectors. We improve upon this literature by performing a cross-sectoral analysis of the structural properties of the R&D network and by considering not only manufacturing sectors, but also services and education sectors. In addition, we track the evolution of the network for a longer time period than is usually considered in the literature. We use the SDC Platinum alliances database, provided by Thomson Reuters. This dataset includes all publicly announced R&D partnerships, from 1984 to 2009, between all kinds of economic actors (including firms, investors, banks and universities). Every company is associated with a SIC (Standard Industrial Classification) code, allowing us to assign all firms to the right industrial sector. The dataset contains the beginning date of every alliance, but there is no information about their duration. To overcome this issue we assume that every partnership lasts 3 years, consistent with empirical work on the average duration of alliances. After processing the data we are able to characterize the time evolution over 25 years of a large R&D network linking a large number of companies (over 14,000) from various industrial sectors. Our results highlight the existence of a cyclic fluctuation in the size of the global network of R&D alliances. The network size increases to a peak in 1995 and then shrinks again. In addition, the fraction of nodes belonging to the largest connected component follows the same pattern, reaching a peak in 1995 and then shrinking again, leaving a periphery of disconnected dyads. The emergence of a giant component in the network is of particular interest, as different theoretical works have stressed the importance of the relation between high network connectedness and efficiency in terms of aggregate profits. Moreover, we find that the emergence of a giant component is very robust to sectoral disaggregation, as we find it in almost all the sub-networks representing the different industrial sectors. A similar behavior is observed for the trend of other network characteristics such as nestedness, small world properties (high clustering associated with short path length), and the presence of a core-periphery hierarchical structure.
Taken together, our results seem to indicate the existence of some "universal" features of the formation process of R&D collaborations that hold independently of the aggregation scale or the sector in which alliances occur.
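A minimal sketch of the sliding-window construction described above, with the stated 3-year alliance lifetime as an assumption and a hypothetical list of (year, firm, firm) alliance records:

```python
# Build the yearly R&D network from dated alliance records and track the largest component.
import networkx as nx

alliances = [(1995, "A", "B"), (1995, "B", "C"), (1997, "C", "D"), (2001, "E", "F")]
DURATION = 3   # assumed alliance lifetime in years

def network_in_year(year, records, duration=DURATION):
    g = nx.Graph()
    for y, a, b in records:
        if y <= year < y + duration:          # alliance assumed active for `duration` years
            g.add_edge(a, b)
    return g

for year in range(1995, 2003):
    g = network_in_year(year, alliances)
    if g.number_of_nodes():
        giant = max(nx.connected_components(g), key=len)
        frac = len(giant) / g.number_of_nodes()
    else:
        frac = 0.0
    print(year, g.number_of_nodes(), f"largest-component fraction: {frac:.2f}")
```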
61) Mario Vincenzo Tomasello, ETH Zurich, Switzerland
Network Dynamics and the Creation of Knowledge in R&D Networks
Coauthors: Claudio Juan Tessone, Frank Schweitzer
In this work, we model the dynamics of creation and extinction of Research and Development (R&D) alliances between companies. The alliances between firms are represented by a monogamous network, i.e. a network where every agent is linked to only one other agent at every time step. If the knowledge distance of two linked agents is smaller than a given interaction radius, a knowledge exchange takes place and the two agents' positions get closer. Using an agent-based model approach, we find that the rewiring of links and the mutual interactions over time eventually lead all the agents in the network to cluster around one or a few attractors in the knowledge space, as observed in real R&D networks. We find that the position of these attractors is not predictable, being an emergent property of the system and that, for intermediate values of the interaction radius, the number of the attractors increases with the rewiring rate. Moreover, we observe the existence of an optimal alliance rewiring rate that maximises the distance covered by the companies in the knowledge space.
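A minimal sketch of such a dynamic, assuming a one-dimensional knowledge space, random rewiring and invented parameter values (not the authors' calibration):

```python
# Monogamous links, random rewiring, and knowledge exchange only within the interaction radius.
import random

N, STEPS = 50, 5000
RADIUS, APPROACH, REWIRE_RATE = 0.2, 0.1, 0.05   # assumed parameters
position = [random.random() for _ in range(N)]   # knowledge positions in [0, 1]
partner = [(i + 1) % N for i in range(N)]        # one link per agent (monogamous network)

for _ in range(STEPS):
    i = random.randrange(N)
    if random.random() < REWIRE_RATE:            # alliance extinction and re-creation
        partner[i] = random.choice([j for j in range(N) if j != i])
    j = partner[i]
    if abs(position[i] - position[j]) < RADIUS:  # close enough: knowledge exchange
        shift = APPROACH * (position[j] - position[i])
        position[i] += shift
        position[j] -= shift

# agents end up clustered around one or a few emergent attractors
print(sorted(round(p, 2) for p in position))
```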
62) Lena Tonzer, European University Institute, Italy
Cross-Border Interbank Networks, Banking Risk and Contagion
Recent events emphasize the role of cross-border linkages between banking systems in transmitting local developments across national borders. This paper analyzes whether international linkages in interbank markets affect the stability of interconnected banking systems and channel financial distress within a network consisting of the banking systems of the main advanced countries for the period 1993-2009. Methodologically, I use a spatial modelling approach to test for spillovers in cross-border interbank markets. The approach makes it possible to formalize contagion among countries through financial linkages in a straightforward way. The results suggest that foreign exposures in banking, both on the liability and on the asset side of the balance sheet, play a significant role in channelling banking risk: I find that countries which are linked through foreign borrowing or lending positions to more stable banking systems abroad are significantly affected by positive spillover effects. From a policy point of view, this implies that especially in stable times linkages in the banking system can be beneficial, while they must be treated with care in times of financial turmoil affecting the whole system.
63) Mathieu Trepanier, SIAW-HAS, Switzerland
Linguistic-Based Perceptual Shocks and Quantile Regression: Emotions and Financial Markets
The paper proposes a novel approach for the empirical investigation of the role of emotions in financial markets relying on linguistic-based perceptual shocks derived from the varying affective content of media coverage about companies. A quantile regression framework is used to document the impact of the shocks on subsequent firm performance for both under and over performing firms. The informational foundations of perceptual shocks are also examined. Finally, attention is given to the respective role of expected and unexpected perceptual shocks for explaining subsequent performance. Overall, the findings offer substantial evidence supporting the use of the quantile regression approach over least squares alternatives for estimates of the conditional mean. The main findings suggest that the perceptual shocks based both on “negative” and “positive” lexica impact the scale of the subsequent stock returns distribution, but not its location. The informational foundations are found to vary significantly across perceptual shock measures derived from different lexica. Finally, both expected and unexpected perceptual shocks are significant predictors of the next-day stock returns distribution, consistent with linguistic-based perceptual shocks being proxies for certain common risk factors as well as for these shocks impacting investor beliefs more directly.
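As an illustration of the estimation framework (with synthetic data standing in for the linguistic perceptual shocks), a quantile regression of next-day returns on a shock variable can be run as follows; the scale-not-location effect shows up as slopes that differ across quantiles while the median slope stays near zero.

```python
# Quantile regression sketch: a shock that widens the return distribution without shifting it.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
shock = rng.normal(size=1000)                              # stand-in for a perceptual shock measure
ret = rng.normal(size=1000) * (1 + 0.5 * np.abs(shock))    # shock affects scale, not location

X = sm.add_constant(shock)
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(ret, X).fit(q=q)
    print(q, res.params)   # lower and upper quantile slopes diverge; the median slope is ~0
```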
64) Tolga Ulusoy, Kastamonu University, Turkey
Quantum Formulations of Predictive Stock Market Fluctuations in Econophysics: The Case of FTSE, DJIA & ISE
In this paper the key quantities of entropy, temperature and energy of stock markets are obtained in the manner of statistical physics. On the basis of some hypotheses of quantum mechanics, this paper considers stock markets as quantum systems and investors as particles. A quantum model of stock price fluctuations is defined within a theoretical framework. Essentially, the models are based upon models of statistical physics and quantum mechanics in which energy is conserved in exchange processes. The relative entropy is used as a measure of the stability and maturity of financial markets, computed from the financial information of an emerging market (Turkey) and of mature markets (England, United States). The model is calculated analytically and simulated for the FTSE100, DJIA and ISE100 indexes, and a basic predictive model in econophysics is discussed.
65) Marco Valente, University of L'Aquila, Italy
Describing vs. Explaining: On the Assessment of ABMs
Consider geographical maps as a metaphor for a model, since in both cases we have symbolic representations of reality. A political map serves certain purposes, and it is neither more nor less detailed than a physical map of the same region. A hypothetical map including all details of administrative borders, towns, roads, rivers and lakes, mountain chains, train lines, the electricity grid, oil and gas pipelines, etc. would be far more detailed than any specialized map, but also far less useful for any practical purpose. Users of agent-based models are in the awkward position of having a tool that permits reaching unprecedented levels of detail in representing phenomena ill-adapted to be described by other modelling tools, such as mathematical equations. However, they are also subjected to heavy criticism for lacking a methodological protocol to assess their models. The literature on this issue focuses on validation of the model results, proposing statistical techniques to compare empirical and theoretical results, under the assumption that the closer the series produced by the model are to the data collected from reality, the better the model. Validation of models, though obviously attractive, is not a universal methodological approach, and there are cases in which its use as an assessment criterion for models may be irrelevant or, worse, grossly misleading. Some simulation models are developed not so much for describing specific realities, but rather are designed as tools for explaining general classes of events. In these cases, empirical evidence is either not available or widely differentiated, so validation is not an option. We propose, instead, to consider as assessment criterion the capacity of the model to provide useful explanations for the phenomena represented by the simulation. An explanation is the connection between one state of the relevant system, accepted as antecedent, and the logically and/or temporally following state. Building a chain of individual explanations can identify a connection between very different states that, without the explicit indication of the individual steps, may appear unrelated. Building upon the concept of explanation it is possible to formalise a general methodological protocol able also to deal with agent-based simulation models. The protocol applied to ABMs is based on the consideration of the results from simulations as emergent properties of a complex system. The nature of discrete-time simulation allows, in principle, any aspect of a simulation result to be ascribed to the originating content of the simulation, down, if necessary, to the single line of code or to the individual initialization value. In conclusion, the assessment of simulation models should be based on validation up to the point of ensuring that the model results resemble the phenomena of interest to the researcher. However, validation is not sufficient, since modellers should not only replicate reality, but also show that their models provide an additional value beyond mere replication. This added value consists of the explanations that can be induced by the study of the simulation runs generating the eventual data.
66) Eveline van Leeuwen, VU University Amsterdam, Netherlands
Hotelling Beyond $\rho$-Concave Distributions: an Agent-Based Approach
Coauthors: Dr. Mark Lijesen
Hotelling’s metaphor of spatial competition on Main Street has become one of the most important models in understanding strategic product differentiation. A vast literature has evolved, discussing extensions of the model with respect to the nature of transport costs, distributions of consumers (or preferences), dimensions and so on. However, equilibrium in the Hotelling model of spatial competition is only guaranteed if the distribution of consumers is $\rho$-concave. In reality, nothing guarantees such a distribution, rendering the analytical model unable to assess firm behavior for complex and (probably) more realistic distributions of consumer preferences. We develop an agent-based model of spatial competition that is capable of reproducing analytical results as well as providing results for cases where the distribution of consumers is more complex. Multi-stage games like the Hotelling game are solved analytically by backward induction. For an agent-based model, backward induction is not suitable. Instead, we solve the model using iterating loops of shop choice (by consumers), price optimization and location optimization (by firms). We show outcomes of several model runs using pre-defined distributions that are too complex to solve analytically, but clearly reflect realistic situations such as time-of-use preferences and urban-rural interactions. We use the agent-based model to find optimal firm locations for 452 randomly generated distributions of consumer preferences. Out of the 452 distributions, 171 (38%) resulted in one or more Nash equilibria in prices and locations. Unique solutions were found for 36 (8%) distributions. Using a probit estimation, we analyze which characteristics of the distribution (such as skewness, weighted position and spread) determine whether a solution is found. Moreover, we regress the characteristics of the valid location equilibria against the characteristics of the distribution to assess the relationship between them. The analysis suggests that the equilibria, although they could not have been derived analytically, are generally consistent with findings from the theoretical literature. We show that the agent-based model is capable of solving models that would be impossible, or at least intractable, to solve analytically. This opens up a wide range of opportunities for solving the model for distributions that are more in line with real-life distributions. The model can therefore also be used to generate primers for empirical research, provided that the researcher has information on the underlying distribution.
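A very small sketch of the iteration described above, with two firms on [0, 1], quadratic transport costs and a crude grid search in place of the paper's optimization routines (all parameters are assumptions):

```python
# Iterate shop choice (consumers), price optimization and location optimization (firms).
import numpy as np

rng = np.random.default_rng(0)
consumers = rng.random(500)                      # any consumer distribution can be plugged in here
loc = np.array([0.25, 0.75])
price = np.array([1.0, 1.0])
price_grid = np.linspace(0.0, 2.0, 41)
loc_grid = np.linspace(0.0, 1.0, 41)

def shares(loc, price):
    # each consumer picks the shop with the lowest price plus quadratic transport cost
    cost = price[None, :] + (consumers[:, None] - loc[None, :]) ** 2
    choice = cost.argmin(axis=1)
    return np.array([(choice == k).mean() for k in range(2)])

for _ in range(50):
    for k in range(2):       # best-response price, holding locations fixed
        price[k] = max(price_grid, key=lambda p: p * shares(loc, np.where(np.arange(2) == k, p, price))[k])
    for k in range(2):       # best-response location, holding prices fixed
        loc[k] = max(loc_grid, key=lambda x: price[k] * shares(np.where(np.arange(2) == k, x, loc), price)[k])

print("locations:", loc, "prices:", price)
```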
67) Wouter Vermeer, RSM Erasmus University Rotterdam, Netherlands
Distinguishing Contagion from Contagion: Three Dimensions of Propagation.
Coauthors: Otto Koppius
Many network studies of financial and economic systems have focused on the process by which one actor in the network affects its alters, i.e. propagation in networks. These studies have drawn on different fields such as epidemiology, finance, sociology and business and, perhaps as a consequence, terminology has varied: terms such as contagion (of disease), diffusion (of innovation), spread (of information) and (social) influence have been used frequently and seemingly interchangeably to describe the propagation process. However, the precise meaning and usage of each of these terms has been far from consistent and, as a result, comparing propagation across different settings and translating the lessons learned from one field to another has become a challenge. To solve this problem and create a consistent typology of propagation processes in networks, we distinguish between three dimensions of shock propagation: 1. The width dimension, referring to the proportion of actors close to the origin of the shock that are affected; 2. The strength dimension, referring to how strongly these actors are affected; 3. The depth dimension, referring to the distance from the shock at which actors are affected. We link each of these dimensions to the existing terminology and by doing so create a typology of propagation processes. We illustrate the different dimensions of propagation by looking at the global trade network. Using the BACI dataset (Gaulier & Zignago, 2010), describing global trade over a period of 15 years, we show that each of the three dimensions covers a distinctive part of propagation. By dividing the propagation process into three dimensions we are not only able to categorize the network dynamics literature and create cohesion in its terminology, but also able to generalize the propagation framework to different networked settings. Furthermore, by better understanding the propagation process we are one step closer to understanding the dynamics within financial and economic systems and consequently the dynamics of the system as a whole.
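As an illustration, the three dimensions can be computed for a given shock on a network as follows; the network, the shock origin and the effect sizes are hypothetical placeholders, not the BACI trade data.

```python
# Width, strength and depth of a shock's propagation on a networkx graph (toy data).
import networkx as nx

g = nx.karate_club_graph()                             # placeholder network
origin = 0
effect = {1: 0.9, 2: 0.4, 3: 0.2, 7: 0.1, 13: 0.05}    # hypothetical affected nodes and magnitudes

dist = nx.single_source_shortest_path_length(g, origin)
neighbours = set(g.neighbors(origin))

width = len(neighbours & effect.keys()) / len(neighbours)   # share of direct neighbours affected
strength = sum(effect.values()) / len(effect)               # average magnitude among the affected
depth = max(dist[n] for n in effect)                        # farthest distance at which an effect appears

print(width, strength, depth)
```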
68) Goetz von Peter, Bank for International Settlements, Switzerland
Interbank Tiering and Money Center Banks
Coauthors: Ben Craig
This paper provides evidence that interbank markets are tiered rather than flat, in the sense that most banks do not lend to each other directly but through money center banks acting as intermediaries. We capture the concept of tiering by developing a core-periphery model, and devise a procedure for fitting the model to real-world networks. Using Bundesbank data on bilateral interbank exposures among 1800 banks, we find strong evidence of tiering in the German banking system. Moreover, bank-specific features, such as balance sheet size, predict how banks position themselves in the interbank market. This link provides a promising avenue for understanding the formation of financial networks.
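As a rough illustration of the fitting idea (simplified here to an undirected graph, unlike the directed exposures used in the paper), one can score a candidate core set by counting deviations from the ideal tiered pattern and search for the core that minimizes the error count:

```python
# Ideal tiering: core banks are all linked to each other; periphery banks never link to each other.
import itertools
import networkx as nx

def tiering_errors(g, core):
    core = set(core)
    err = 0
    for u, v in itertools.combinations(g.nodes, 2):
        linked = g.has_edge(u, v)
        if u in core and v in core and not linked:
            err += 1            # missing core-core link
        elif u not in core and v not in core and linked:
            err += 1            # forbidden periphery-periphery link
    return err

# toy example: banks 0-2 act as money-center intermediaries, 3-7 are periphery banks
g = nx.Graph([(0, 1), (0, 2), (1, 2), (0, 3), (1, 4), (1, 5), (2, 6), (2, 7)])
best = min((c for r in range(1, 5) for c in itertools.combinations(g.nodes, r)),
           key=lambda c: tiering_errors(g, c))
print("best-fitting core:", sorted(best), "errors:", tiering_errors(g, best))
```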
69) Taylan Yenilmez, Erasmus University Rotterdam, Netherlands
Which Products Bring More: Measuring Product Contribution to Export Diversification
Export diversification has positive effects on economic growth, especially for developing countries. Defining the right strategy for export diversification, however, is an open-ended question. To be able to answer this question, how new products contribute to export diversification should be analyzed. If products are considered as independent objects, the addition of a new product to an export basket means only a single increment. On the other hand, if products are viewed as a network, a new product brings not only a single increment but also new connections. Hidalgo et al. (2007) provide a novel methodology to think of products as a network, namely the “Product Space,” in which each product pair is connected according to a pairwise proximity measure. They show that the product space is a highly heterogeneous structure with a dense core and a sparse periphery. Hausmann and Klinger (2008) show that countries are more likely to export new products which are nearer to their current export baskets in the network. A new product is nearer to an export basket if there are stronger connections between the new product and the products in the export basket. This paper uses the product space to measure the expected contribution of new products to the export diversification of a country. New products that are nearer to an export basket are more likely to be added. Once they are added, products with higher network centrality bring more connections and make more products nearer. Based on these assumptions, I simply formulate that the product of nearness and centrality indicates the expected contribution of a new product to export diversification. To find evidence for this formulation, HS 6-digit product-specific world trade data for the period 1995-2010 are used. I track the configuration of new product clusters in countries’ export baskets to calculate the number of products that a former new product brought into the basket after its addition. Econometric regressions show that the probability of a positive contribution depends on nearness. On the other hand, given that a product is added to the export basket, the number of other products it brings depends on the centrality of the product. Finally, a general econometric model reveals that the expected contribution of a new product depends on the product of nearness and centrality. Therefore, the empirical results confirm my theoretical formulation. Measuring the expected contribution of new products to export diversification has strong policy implications. Policy makers can shape their industrial policy by targeting products with higher expected contribution or sectors with higher average expected contribution. This kind of tool can be beneficial especially for developing countries to diversify their export baskets. References: Hausmann, R. and Klinger, B. (2008), South Africa's export predicament. Economics of Transition, 16: 609–637. Hidalgo, C. A., Klinger, B., Barabasi, A.-L. and Hausmann, R. (2007), The Product Space Conditions the Development of Nations. Science, 27: 482-487.
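A hedged sketch of the proposed measure follows: the expected contribution of a candidate product is computed as its nearness to the current export basket times its centrality in the product space; the proximity matrix, the basket and the simple strength centrality are invented for illustration.

```python
# Expected contribution of a candidate product = nearness to basket x centrality in product space.
import numpy as np

products = ["p0", "p1", "p2", "p3"]
proximity = np.array([[1.0, 0.6, 0.1, 0.2],     # hypothetical pairwise proximities
                      [0.6, 1.0, 0.3, 0.1],
                      [0.1, 0.3, 1.0, 0.7],
                      [0.2, 0.1, 0.7, 1.0]])
basket = {0}                                     # country currently exports only p0

def nearness(p, basket):
    return np.mean([proximity[p, b] for b in basket])

def centrality(p):
    return proximity[p].sum() - 1.0              # simple strength centrality, excluding self-proximity

for p in range(len(products)):
    if p not in basket:
        contribution = nearness(p, basket) * centrality(p)
        print(products[p], round(contribution, 3))
```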
70) Yoshihiro Yura, Tokyo Institute of Technology, Japan
Replication of Non-Trivial Directional Motion in Multi-Scales Observed by Runs Test
Coauthors: Takaaki Ohnishi, Hideki Takayasu, Misako Takayasu
Non-trivial autocorrelation in the up-down statistics of financial market price fluctuations is revealed by a multi-scale runs test (Wald-Wolfowitz test). We apply two models, a stochastic price model and an agent-based model, to understand this property. In both approaches we successfully reproduce the non-stationary directional price motions consistent with the runs test by tuning parameters in the models. We find that two types of dealers exist in the markets, a short-time-scale trend-follower and an extended-time-scale contrarian, who are active in different time periods. Secondly, we use the aforementioned stochastic price model, called the PUCK model, and apply the particle filter for quick and accurate estimation of the switching point and the parameters of the model, especially when sudden news hits the market. The PUCK model has been proposed as a model that fulfills the empirically established stylized facts, such as the fat-tailed distribution of price changes and anomalous diffusion on short time scales. Here, we pay attention to the largest intervention in the foreign exchange market, the US dollar-Japanese yen market, on October 31, 2011, as a sample of an extraordinary time series of exchange rates. Through this data analysis we can detect deviation from a pure random walk quantitatively, and the functional form of the underlying potential force is estimated accordingly. It is found that a nonlinear potential force, which causes a directional motion, appeared right after the start of the intervention.
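For reference, a minimal Wald-Wolfowitz runs test on the up-down sign sequence of returns can be written as below; applying it to returns aggregated over several time scales gives a multi-scale version in the spirit of the abstract (the aggregation scales and the synthetic data are assumptions).

```python
# Runs test z-statistic on the signs of returns, repeated over several aggregation scales.
import numpy as np

def runs_test_z(returns):
    signs = np.sign(returns)
    signs = signs[signs != 0]                   # drop flat ticks
    n_pos, n_neg = (signs > 0).sum(), (signs < 0).sum()
    runs = 1 + (np.diff(signs) != 0).sum()      # number of maximal up/down runs
    n = n_pos + n_neg
    mean = 2 * n_pos * n_neg / n + 1
    var = 2 * n_pos * n_neg * (2 * n_pos * n_neg - n) / (n ** 2 * (n - 1))
    return (runs - mean) / np.sqrt(var)         # ~N(0, 1) under the random-walk null

rng = np.random.default_rng(0)
r = rng.normal(size=5000)                       # synthetic returns; real data would replace this
for scale in (1, 5, 20):                        # aggregate returns over several time scales
    coarse = r[:len(r) // scale * scale].reshape(-1, scale).sum(axis=1)
    print(scale, round(runs_test_z(coarse), 2))
```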
71) Paolo Zeppini, Eindhoven University of Technology, Netherlands
Branching Innovation, Recombinant Innovation, and Endogenous Technological Transitions
Coauthors: Koen Frenken, Luis Izquierdo
Among the most challenging questions in the social sciences is the explanation of societal transitions. To explain the dynamics of technological transitions, we develop a model where agents enjoy positive network externalities from using the same technology, while some agents, called innovators, ignore these externalities and introduce new technologies. Previous models of network externalities (David, 1985, Arthur, 1989, Bruckner et al., 1996) only explain how a technology becomes dominant in a population, and do not explain the emergence of new technological paths. There are two different types of innovations in our model: branching innovations are technological improvements along a particular path, while recombinant innovations represent fusions of multiple paths. After a new technology has been created, non-innovator agents make decisions about technology adoption. Adopting agents only adopt a new technology if it gives higher returns net of the switching costs. In the event that all agents switch to a better technology, we speak of a technological transition. Technologies form a graph, as in Vega-Redondo (1994); there the graph is a tree, while in our model technologies can also be recombined. Models of recombinant innovation proposed hitherto are rare, both theoretical (Zeppini and van den Bergh, 2011) and empirical (Fleming, 2001). Our network of technologies evolves endogenously through the actions of agents, which means that we do not need to make assumptions about the nature of the technology graphs that agents are exploring. The model replicates some stylized facts of technological transitions, such as technological lock-in, experimental failure, punctuated change and irreversibility. Punctuated change is reflected in the rare occurrence of transitions, which are irreversible in nature. By running an extensive simulation experiment we have analyzed the role of innovation effort under different conditions of population size and network externalities. There is an optimal innovation effort strongly correlated with the number of recombinations. The main message is that innovation effort has the biggest impact on technological progress when it is just large enough to create new varieties that can subsequently be fused through recombinant innovation, triggering a technological transition. Recombinant innovations create "short-cuts" to higher quality in the technology graph at low switching costs, allowing technological transitions that would be hard to realize otherwise. A counterintuitive result is that larger switching costs are associated with larger transitions when the innovation effort is low. The reason is that larger costs prevent non-innovating agents from immediately following branching innovators, which means more variety and more recombinations. The policy lessons are twofold. First, subsidizing innovation is a balancing act between the risk of under-spending, which fails to lock a population out of existing technologies, and the risk of over-spending, which wastes resources on redundant efforts. Second, innovation policy aimed at fostering technological transitions should not only promote the development of new varieties, but also the recombination of these varieties with elements of the old locked-in technology, so as to trigger lagging agents to switch to new technologies.
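The adoption rule at the heart of the model can be illustrated with a toy decision function; the externality strength, switching cost and quality numbers below are hypothetical.

```python
# Adopt a new technology only if its returns, net of switching costs, beat the current one.
def wants_to_switch(quality_new, adopters_new, quality_old, adopters_old,
                    externality=0.1, switching_cost=0.5):
    # returns = intrinsic quality plus network externality from fellow adopters
    return_new = quality_new + externality * adopters_new - switching_cost
    return_old = quality_old + externality * adopters_old
    return return_new > return_old

# a small branching improvement is not enough to escape lock-in...
print(wants_to_switch(quality_new=1.2, adopters_new=1, quality_old=1.0, adopters_old=9))   # False
# ...but a recombinant "short-cut" to much higher quality triggers the switch
print(wants_to_switch(quality_new=3.0, adopters_new=1, quality_old=1.0, adopters_old=9))   # True
```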
72) Gilles Zumbach, Switzerland
Option Pricing and ARCH Processes
Recent progress in option pricing using ARCH processes for the underlying is summarized. The stylized facts are multiscale heteroscedasticity, fat-tailed distributions, time reversal asymmetry, and leverage. The process equations are based on a finite time increment, relative returns, fat-tailed innovations, and multiscale ARCH volatility. The European option price is the expected payoff in the physical measure P weighted by the change of measure dQ/dP, and an expansion in the process increment dt allows for numerical evaluations. A cross-product decomposition of the implied volatility surface allows option prices, Greeks, replication cost, replication risk, and real option prices to be computed efficiently. The theoretical implied volatility surface and the empirical mean surface for options on the SP500 index are in excellent agreement.
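As a rough illustration only, a Monte Carlo valuation under an ARCH(1)-style process with Student-t innovations shows the ingredients named above (finite time increment, relative returns, fat tails, heteroscedasticity); the parameters are invented, and the paper's change of measure dQ/dP and multiscale volatility are not reproduced here.

```python
# Toy Monte Carlo of a European call under an ARCH(1) process with fat-tailed innovations.
import numpy as np

rng = np.random.default_rng(0)
S0, K, T_steps, r_free = 100.0, 105.0, 60, 0.0   # spot, strike, daily steps, risk-free rate (assumed)
omega, alpha, nu = 1e-5, 0.15, 5.0               # assumed ARCH(1) parameters and t-distribution dof

n_paths = 5000
payoffs = np.empty(n_paths)
for i in range(n_paths):
    s, var = S0, 1e-4
    for _ in range(T_steps):
        eps = rng.standard_t(nu) * np.sqrt((nu - 2) / nu)   # unit-variance, fat-tailed innovation
        ret = np.sqrt(var) * eps                            # relative return over one finite increment
        s *= 1.0 + ret
        var = omega + alpha * ret ** 2                      # ARCH(1) volatility update
    payoffs[i] = max(s - K, 0.0)

print("toy European call value:", payoffs.mean() * np.exp(-r_free * T_steps / 252))
```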