Musings on Economics

Thursday, March 31

Business-speak in the Millennium Ecosystem Assessment

In the best tradition of Anglo-Saxon journalism, this article on CommonDreams.org about the newly released environmental report does not tell you what the report is called until the 5th paragraph, or who issued it until the 8th paragraph.

Anyway, the report is interesting because of what it says and how it says it. The top-level summary (Statement of the Board) includes:

These [human] changes [to ecosystems] have helped to improve the lives of billions, but at the same time they weakened nature’s ability to deliver other key services such as purification of air and water, protection from disasters, and the provision of medicines.

I beg your pardon? Nature itself is now in the business of delivering services? Is that how far our everything-is-business and human-beings-are-consumers mentality has gone? More of the same:
Among the outstanding problems identified by this assessment are ...; the intense vulnerability of the 2 billion people living in dry regions to the loss of ecosystem services, including water supply; ...

And
The loss of services derived from ecosystems is a significant barrier to the achievement of the Millennium Development Goals to reduce poverty, hunger, and disease.

This almost sounds like ecosystems have let us down by defaulting on their provision of services.

Then come the following common-sense conclusions which, sadly, sound radical in our current unbridled capitalist mentality:
The pressures on ecosystems will increase globally in coming decades unless human attitudes and actions change.
Measures to conserve natural resources are more likely to succeed if local communities are given ownership of them, share the benefits, and are involved in decisions.


Then comes what can be construed (by me at least) as a critique of standard economics:
Even today’s technology and knowledge can reduce considerably the human impact on ecosystems. They are unlikely to be deployed fully, however, until ecosystem services cease to be perceived as free and limitless, and their full value is taken into account.

See this article for a critique of how standard economics perceives natural resources as limitless, sometimes with the argument that, as resources get exhausted, their price will rise until it is high enough to make new sources economical. The assumption, of course, is that resources are indeed unlimited, and that it is just a matter of the rising cost of exploiting them. This is nonsense.

Then comes more troublesome language, talking about "natural assets" and "the productivity of ecosystems":
Better protection of natural assets will require coordinated efforts across all sections of governments, businesses, and international institutions. The productivity of ecosystems depends on policy choices on investment, trade, subsidy, taxation, and regulation, among others.

Tuesday, March 8

Betting and the Bayesian approach to probability

One sometimes sees the frequentist interpretation of probability justified by a betting argument. It goes something like this: suppose you want to assign a probability p to an event E happening in a given situation S. To do this, we set up the following game: we reproduce S and bet on whether the event E is observed or not, and we do so repeatedly. If you bet £p on E and I bet £(1-p) against you, the expected balance of the game is zero. If the bets are not in the ratio p:(1-p), then the game is skewed in one direction and one of us will make money at the expense of the other. This determines the probability p.

However, there are some problematic assumptions implicit in this "definition" of probability.

  1. We are assuming that the situation S can be replicated at will, as often as necessary. In fact, it is easy to see that this procedure does not apply to the probability of single events (such as whether the Sun will rise tomorrow). This is, in fact, one of the thorny questions in the philosophy of probability which frequentists dismiss as meaningless.
  2. We are assuming that the game can go on indefinitely. Now, even though the game is fair and we are expected to break even, the standard deviation of the balance after N bets is £1 times the square root of p(1-p)N. In other words, it grows without bound and, if the game goes on long enough, one of the two players is bound to go bankrupt. The problem with this is that bankruptcy is precisely how the right choice of bet amounts is enforced: if you believe that the probability of E is q>p, you will be happy to bet £q against £(1-q), but you will lose £(q-p) per bet, on average. It should be obvious why the certainty of eventual bankruptcy even for players who choose the "right" odds is problematic.
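The unbounded growth of the balance's spread is easy to check numerically. Here is a minimal Monte Carlo sketch (the function name and parameters are mine, not from the argument above): a fair game of £p staked against £(1-p), repeated many times.

```python
import math
import random

def simulate_fair_game(p, n_bets, bankroll, seed=0):
    """Repeatedly stake £p on an event of true probability p.

    Each round the bettor wins £(1-p) with probability p and loses the
    £p stake otherwise, so the expected gain per bet is zero.  Returns
    the balance history, stopping early if the bankroll is exhausted.
    """
    rng = random.Random(seed)
    balance = bankroll
    history = [balance]
    for _ in range(n_bets):
        balance += (1 - p) if rng.random() < p else -p
        history.append(balance)
        if balance <= 0:          # bankruptcy ends the game
            break
    return history

# The standard deviation of the balance after N bets is sqrt(p(1-p)N):
p, n = 0.5, 10_000
gains = [simulate_fair_game(p, n, bankroll=1e9, seed=s)[-1] - 1e9
         for s in range(200)]
empirical_sd = math.sqrt(sum(g * g for g in gains) / len(gains))
predicted_sd = math.sqrt(p * (1 - p) * n)   # = 50.0
print(round(predicted_sd, 1))               # 50.0
```

With a huge bankroll the walk is unconstrained and the empirical spread matches sqrt(p(1-p)N); with a small bankroll the same code shows how often a player betting the "right" odds is nevertheless wiped out.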


There is an alternative betting scheme which takes care of both problems, at the cost of requiring a subjective interpretation of probability. For those of us who are Bayesians (or at least sympathetic to them), this is actually a good thing. The betting scheme is as follows: we assume that there is a large number of players, each with their own subjective best estimate of the probabilities involved, and we allow only one round of betting. This allows us to consider the probabilities of single events. We simply ask everyone to bet £1 and divide the proceeds evenly among those who bet on the outcome which actually takes place. The probability of each outcome is simply the fraction of players betting on it, and is therefore a subjective "market" probability.

So far, so good. But suppose now that the event in question is repeatable, and that those who still have £1 left are asked to bet again. They now have information about the probabilities implied by the previous round of betting, and can change their bets accordingly.

Consider the following example. Suppose that there are two possible outcomes, A and B, and that the numbers of bets in the first round are 11 and 33, respectively. If the outcome is A, for instance, each winner collects £4 from the £44 pool (a £3 profit), and the losers lose their £1. The implied returns are thus 4:1 for A and 4:3 for B. In the second round of betting, there are two kinds of players:
  • Those who believe the probability of A is less than 7/22, who will bet for B on the expectation of making £4/3 with probability at least 15/22 and losing £1 with probability at most 7/22, so they expect to make at least £13/22 on average.
  • Those who believe the probability of A is more than 7/22, who will bet for A on the expectation of making £4 with probability at least 7/22 and losing £1 with probability at most 15/22, therefore also expecting to make at least £13/22 on average.

Interestingly, this time around it is likely that more players will bet on A than on B, and everyone thinks that the game is in their favour! Is this leading to a model of the volatility (and stupidity) of the stock market? Assuming a uniform distribution of the estimates of the probability of A, we would expect 30 A and 14 B bets in the second round. Those who bet on A would then make £7/15 (which is less than they expected) if they won, while those who bet on B would make £15/7.
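The arithmetic of this example can be checked with exact fractions. A small sketch (variable names are mine): the indifference point q = 7/22 is where the expected profits of the two bets coincide, taking the first-round returns as the winnings on offer.

```python
from fractions import Fraction as F

# First round: 11 bets of £1 on A, 33 on B; the £44 pool is divided
# evenly among the winners.
a_bets, b_bets = 11, 33
pool = a_bets + b_bets
profit_A = F(pool, a_bets) - 1    # £3 profit per winning bet on A
profit_B = F(pool, b_bets) - 1    # £1/3 profit per winning bet on B

# Second round: players take the first-round returns (4:1 on A, 4:3 on B)
# as the winnings on offer.  A player with subjective probability q for A
# is indifferent between the two bets when the expected profits coincide:
#   q*4 - (1-q) == (1-q)*(4/3) - q   =>   q == 7/22
q = F(7, 22)
ev_A = q * 4 - (1 - q)
ev_B = (1 - q) * F(4, 3) - q
print(ev_A, ev_B)                 # 13/22 13/22

# With estimates of q uniform on [0,1], the expected second-round split:
print(44 * (1 - q), 44 * q)       # 30 14
```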

Tuesday, October 19

More on pricing

To summarize my earlier post on asset pricing, in the absence of arbitrage opportunities assets are random variables and prices are expectations. The probabilities are normalized prices, not probabilities of random events, and the expectation is not a risk expectation but a market expectation. I called an asset with vanishing market variance a benchmark, and questioned whether such an asset can actually exist.

Apparently, in the literature what I am calling market expectation is usually called "risk-neutral pricing with respect to normalized price probabilities". Note the underlying confusion of applying the concept of risk aversion to normalized prices, as opposed to the probabilities of actual events.

Now, since the price is not related to actual risk but is just an abstract mathematical expectation, benchmarks cannot be argued away on the grounds that risk-free assets are implausible. Mathematically, one would consider a finite collection of assets, estimate their "market covariance" matrix and, if the latter were singular, the associated portfolio would be a benchmark. The problem is how one can estimate the market variance from a time series of market expectations. Some sort of underlying model is required that reproduces the observed expectation dynamics, and that can then be used to reconstruct the underlying variances.
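As a toy illustration of the singularity test (synthetic data; all names are mine), here is the easy direction: when one asset is an exact linear combination of others, the sample covariance matrix is singular and the corresponding portfolio has zero variance. The hard problem noted above, estimating market variances from a time series of market expectations, is not addressed here.

```python
import random

def covariance_matrix(series):
    """Sample covariance matrix of a list of equally long value series."""
    n = len(series[0])
    means = [sum(s) / n for s in series]
    return [[sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (n - 1)
             for b, mb in zip(series, means)]
            for a, ma in zip(series, means)]

rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(500)]
y = [rng.gauss(0, 1) for _ in range(500)]
z = [xi + yi for xi, yi in zip(x, y)]   # the third "asset" is redundant

cov = covariance_matrix([x, y, z])

# The weights (1, 1, -1) lie in the null space of cov, so the portfolio
# x + y - z has vanishing variance: a benchmark in the sense above.
w = (1, 1, -1)
portfolio_var = sum(w[i] * cov[i][j] * w[j]
                    for i in range(3) for j in range(3))
print(abs(portfolio_var) < 1e-9)   # True
```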

Tuesday, October 12

The fraud that is the central bank

Let's see if I got this right...

  • Money is created by the central bank by lending it to the private banks at interest.
  • The state treasury then funds government projects by borrowing money from the banks in the form of bond issues.
  • The interest rate that the treasury pays for bonds is always higher than the interest rate at which the central bank lends money to the private banks.
In this way, the private banks get to milk the treasury through the central bank. Notice that both the treasury and the central bank are branches of government: independent central banks are still part of the government; they are just independent of the executive. The modern separation of powers is thus into four branches: executive, legislative, judiciary, and monetary.

The inescapable conclusion is, though, that the idea of a central bank independent of the state's treasury is a fraud designed to allow private banks to rob the country of its resources at an exponential rate.

Why doesn't the government just create the money it needs to fund its own projects? It already has the authority (delegated to the central bank) to create money, and doing so would be cheaper: the current system is equivalent to the government creating money to fund its projects and then subsidizing the private banks in proportion to the money created.
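A toy calculation makes the spread concrete (the rates and amounts are made up, purely for illustration):

```python
# Made-up rates, purely illustrative: the central bank lends newly
# created money to the private banks at r_cb, and the treasury borrows
# it back from the banks at r_gov > r_cb.
r_cb, r_gov = 0.03, 0.05
borrowed = 1_000_000              # £ raised by the treasury in bonds

bank_cost = borrowed * r_cb       # interest the banks owe the central bank
bank_income = borrowed * r_gov    # interest the treasury pays the banks
print(round(bank_income - bank_cost))   # 20000: the yearly flow to the banks
```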

Check out this newsletter on debt-free money.

Sunday, October 3

A formalization of the asset pricing problem

Updated October 6

We assume that there is a given collection of assets. A linear combination of assets is a portfolio. Negative coefficients in a portfolio represent short-selling or borrowing of the corresponding asset. More general functions of assets are called derivative assets. An asset is positive if it is always possible to sell it for a positive price. The problem is to find a fair price P for all assets. The fair price must be linear (the price of a portfolio is the linear combination of the component asset prices) and positive (a positive asset must have a positive price) in order to avoid the possibility of arbitrage (getting something for nothing).

What I just described is the algebraic axiomatization of probability theory. This means that assets are random variables, and the price is an expectation. In other words, assets are characterized by the probability distributions of their values, whose expectation is their fair price. This is not to say that the expected value is calculated with respect to the probabilities of actual events in the future (or not necessarily). It could also be some sort of market expectation of the value of the asset. I used to think that the price was the risk expectation (i.e., with respect to uncertainty of future value), and that zero-price-variance assets are risk-free assets. That seems not to be the case necessarily, and I quickly got confused trying to follow its logical consequences. In particular, it seems that identifying the market expectation with the risk expectation leads to a contradiction with the Capital Asset Pricing Model.
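As a toy illustration of the axioms (all numbers are made up): on a finite set of future states, a linear and positive pricing functional is fixed by the prices of the elementary state-contingent claims, and normalizing those prices turns every fair price into a discounted expectation.

```python
# Three future states; the price today of £1 contingent on each state
# (illustrative numbers only).
state_prices = [0.3, 0.5, 0.1]
discount = sum(state_prices)      # price of a sure £1, here 0.9

# Normalizing the state prices yields "probabilities" -- not chances of
# actual events, just normalized prices.
probabilities = [q / discount for q in state_prices]

def price(payoffs):
    """A linear, positive pricing functional: a discounted expectation."""
    return discount * sum(p * v for p, v in zip(probabilities, payoffs))

asset = [2.0, 1.0, 4.0]                  # payoff in each state
portfolio = [2 * v for v in asset]       # a doubled position in the asset
print(round(price(asset), 10))           # 1.5  (= 0.3*2 + 0.5*1 + 0.1*4)
print(round(price(portfolio), 10))       # 3.0  (linearity)
```

Positivity holds because every state price is positive, so any payoff that is positive in all states gets a positive price; linearity is immediate from the sum.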

I am thus going to adopt the interpretation that the price is some sort of market average of perceived value. Then, if an asset x existed such that P(x²)=P(x)², this would mean that there is only one possible price for the asset that anyone will pay. I'll call such an asset a benchmark. It is questionable whether such an asset exists. Back in the time of standard-backed currencies it might have made sense to say that gold was a benchmark, but in a world of freely floating fiat currencies I don't think there are any benchmarks left. Anyway, in a real market with imperfect information or bounded rationality there can't be any benchmarks either.


Friday, September 24

Investment "science"

I recently bought the book Investment Science by David Luenberger. I chose it because it was one of the best-sellers on wilmott.com, and it appeared to have the right mix of mathematics and economics. I am not unhappy with my purchase; it's a nice little book. I have read it once at an accelerating pace, and now I am going through it a second time in more detail.

The first thing that comes to my mind is the somewhat impertinent question (borrowed from a critic of computer science): just which part of the scientific method does "investment science" use?

The book is organized around the idea of cash-flow analysis. The first part of the book is about deterministic cash flow streams, interest rates, and valuations. The second and third parts introduce random cash flow streams of increasing complexity. The fourth part is about futures, options, and related problems. Put this way, the theory of investment becomes easy and reasonable. The basic kind of deterministic cash flow stream is the bond. Because bonds are standardized, there is a well-established theory of them. The presentation would have made more sense to me on a first reading if the price/yield curve of a bond had been explicitly related to the concept of internal rate of return introduced earlier for general cash flow streams.

The second "flaw" I found in the book came when the author discussed the "term structure" of interest rates. Basically, there is a different market interest rate for money loaned for different lengths of time. Usually the interest rate increases monotonically with the loan term, although sometimes this is not so. Right now, looking at Bloomberg.com, one finds that the interest rate for UK government bonds peaks at maturities of about 8 years and decreases for longer maturities. This is not the usual pattern: the bonds issued by all the other countries monitored by Bloomberg show monotonic (sometimes very steep) yield curves.

Anyway, the book gives three different "explanations" of the term structure of interest rates. The first is "market segmentation": the theory that investors fall into different classes according to the maturities of the bonds they prefer to buy, so that there need be no relation between the interest paid by bonds of different maturities. Since the yields are not observed to fluctuate wildly with maturity date, this theory does not seem very compelling.

The second theory has to do with risk. A longer maturity involves a larger risk that the issuer of the bond will default, and so should be associated with a higher yield. The problem is that this is an essentially probabilistic situation, yet the book says nothing about it, not even that a fuller analysis must wait until later. Now, what does this theory imply about the market expectations for the British economy, given the Bloomberg data?

And, now that I mention "market expectations", this is the third theory about the term structure of interest presented, and apparently the one preferred by the author. The argument goes as follows. A given term structure carries implicit information about the market expectations for the term structure in the future. For instance, money invested today for 2 years at the 2-year spot rate should produce the same yield as if it were invested at the 1-year spot rate and then reinvested at next year's 1-year spot rate. This allows one to compute the forward rate for a 1-year investment made 1 year from now. The idea is that, if this forward rate did not match the market expectations, it would be possible for investors to make money by playing with the difference between the two (a no-arbitrage argument). The problem with this is that arbitrage arguments involve zero-risk profit, and loaning money at interest always involves risk. Moreover, if the spot rates increase with maturity, as they usually do, the expectation dynamics predict that interest rates will grow in the future, sometimes considerably. The troubling thing is that the book admits that this is not the observed behaviour, but claims that "expectation dynamics are very logical" and so prefers this explanation to the others.
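The forward-rate computation described above is a one-liner. A sketch with illustrative rates (the numbers are mine):

```python
# Illustrative spot rates (made up): s1 for a 1-year loan, s2 for 2 years.
# No-arbitrage between the two strategies requires
#   (1 + s2)**2 == (1 + s1) * (1 + f)
# where f is the implied forward rate for the second year.
s1, s2 = 0.04, 0.05
f = (1 + s2) ** 2 / (1 + s1) - 1
print(round(f, 6))   # 0.060096: an upward-sloping curve implies rising rates
```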

Why not simply admit that without an analysis of default risk one cannot explain the term structure and move on to random cash flows? Why insist on using expectation dynamics all over the place and proving invariance theorems about them if that is not the way the market behaves? I find it hard to believe that people will actually invest money according to expectation dynamics... that sounds like a losing strategy.

Wednesday, July 14

Not capitalism, corporatism

I am convinced that our economic system would be better described by the term corporatism rather than by the more commonly used capitalism. I contend that we live, in fact, in a corporatist economy which is not very different, in terms of efficiency, from a statist economy. I hold, in addition, that corporatist or statist economies are both more compatible with totalitarianism than with democracy. After the statist economic model collapsed under its own weight around 1989, it has become increasingly clear that the corporatist model is equally totalitarian. But this is old news. After all, it was Mussolini himself who said "Fascism should more appropriately be called Corporatism, because it is a merger of State and corporate power".

Capital, stock, means of production



Capital is what Adam Smith called stock, but I tend to prefer Marx's more descriptive term means of production. It is clear that this is what Adam Smith meant by stock, but when we think today about the stock exchange or capital we tend to think just of money and not of the broader meaning, which includes tools and raw materials. That capital should be synonymous with means of production is borne out by the fact that capital goods refers to goods that are used not as ends in themselves but as means to produce other goods. It is in this sense that capital investments detract from current consumption in exchange for larger future production (and consumption).

Anyway, from a Marxian point of view ownership of the means of production is an important way of classifying economic systems. I can see four kinds of ownership of the means of production:

  • individual ownership
  • communal ownership
  • corporate ownership
  • state ownership

The first is the only ownership by physical persons; the last three are ownership by legal persons. My contention here is that a capitalist economy is one in which ownership of capital is mostly individual, and that we live, in fact, in a corporatist economy which is not very different, in terms of efficiency, from a statist economy.

Market failure



I will argue from the point of view of free-market economics. The basic idea (going back to Adam Smith) is as follows: the market mechanism is the most efficient way of allocating resources, but it depends on each economic agent having sufficient freedom, and on the inability of any single economic agent to affect prices. If individuals do not have the freedom to change their economic activity at will to better suit their needs, or if a single agent can control prices (usually by having control of a substantial fraction of the available supply or demand), then there is market failure, and corrective measures need to be taken from outside the market to restore a functioning market. The argument will be that corporatism leads to market failure both because corporations are too large and because they take away economic freedom from individuals.

Corporations and market failure



Corporations lead to market failure because of their size. A corporation the size of Microsoft, Wal-Mart, Starbucks or Bank of America has such control of the supply in its market that in theory, by its very existence, it should threaten the health of the market in which it operates. In fact, the business model of these large corporations is known to be detrimental to competition and to a free market. I will summarize the influence of Microsoft and Wal-Mart on their respective markets: computer software and retail sales.

Wal-Mart's business model is basically to open a huge new store in a community, sell products below cost for as long as necessary until all the neighbouring small retailers go out of business, and then enjoy a local monopoly in retail for the foreseeable future. It can afford to do this because the losses of one megastore are more than balanced by the large number of other profitable monopoly stores that Wal-Mart has elsewhere. Once Wal-Mart has achieved a monopoly, the consequences for employees and suppliers are disastrous, as Wal-Mart will push for lower and lower wages and pay lower and lower prices for the goods it sells; being a monopoly, there is little the workers or the suppliers can do about it. All large retailers ("megastores") operate in this destructive fashion: well-known examples include Ikea in the furniture business and Barnes and Noble in the bookselling business.

Microsoft has a history of buying out promising startups not so much to develop their ideas (for example, Hotmail) as to kill them if they threaten Microsoft's business in any way (Corel, Progeny). Its strategy also includes subtly altering standards ("embracing and extending" and "decommoditizing protocols") to create incompatibilities between its products and those of other software companies (because of its larger user base, Microsoft's nonstandard version of the standard wins out). We have seen this happen with web browsers, Java, and PostScript. See the Halloween Documents for more on Microsoft's business model.

Large corporations result in poor product quality, reduced consumer choice and obstacles to accessing the market, all of which are instances of market failure.

For a while it seemed, though, that everyone in the US who could save a little money was buying stock. Together with the increased popularity of mutual funds (even encouraged by the government as an alternative to social security retirement pensions), doesn't that give us individual ownership of the means of production? Not really. For one thing, many of these small stock owners were day traders. They don't own the stock so much as gamble with it. People who put their money in mutual funds don't directly own the stock either. In practice, they are putting money in a high-interest savings account, and it is the bank or investment firm that actually owns the stock. In fact, these firms obtain a higher profit buying stock with your money than the interest they pay you on it; otherwise they wouldn't offer the service. The reality is that the majority of stock in large corporations is held by a very small number of very wealthy people, and by other corporations: a mixture of oligarchy and corporatism.

State ownership and market failure



The negative effects of state ownership of the means of production are not very different from the negative effects of large corporations. They may be worse because the state amounts to a single holding having a monopoly in many sectors, as opposed to a number of smaller corporations having a monopoly in their respective sectors. Because the state already gets to influence demand by focused spending of tax money, state ownership of the means of production can be even more negative. However, at least in a democracy the state administrators can be voted out of office for engaging in distasteful practices, while nobody can vote Nike executives out of office for using sweatshops. Boycotts may be effective in altering corporate policies somewhat, but they take longer to organize and are generally less effective than just waiting for the next election to come around.

In the US, the collusion between the state and the corporation is so great that, although nominally the state is democratic and does not own a substantial part of the economy, in fact it is impossible to be a successful politician without corporate financial backing, and the Defense Department, through its huge budget, effectively owns the huge "military industrial complex".