What is referential transparency?

  1. Denotational semantics is based on modelling languages by building domains that constitute denotable values.
  2. Functional programmers use the term value to describe the convergence of a computation based on the rewrite rules of the language, i.e. its operational semantics.

In (1) there are clearly two languages in question:

  • the one being modelled, the object language
  • the language of modelling, the meta language

In (2), thanks to the closeness of the object and meta languages, the two can easily be confused.

As a language implementer, I find that I need to constantly remember this distinction.

So, Prof. Reddy, may I paraphrase you thus :-)

In the contexts of functional programming and semantics, the term Referential Transparency is not referentially transparent.

What does the term referential transparency mean? I've heard it described as "it means you can replace equals with equals" but this seems like an inadequate explanation.


A referentially transparent function is one which acts like a mathematical function; given the same inputs, it will always produce the same outputs. It implies that the state passed in is not modified, and that the function has no state of its own.
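
For instance, here is a minimal Java sketch (the names are purely illustrative) contrasting such a function with one that keeps state of its own:

// Referentially transparent: the result depends only on the argument,
// so square(3) can always be replaced by 9.
int square(int x) {
    return x * x;
}

// Not referentially transparent: the result also depends on (and changes)
// hidden state, so two calls with the same argument return different values.
int callCount = 0;

int squareAndCount(int x) {
    callCount++;
    return x * x + callCount;
}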


An expression is referentially transparent if it can be replaced with its value, without changing the algorithm, yielding an algorithm that has the same effects and output on the same input.


For those in need of a concise explanation I will hazard one (but read the disclosure below).

Referential transparency in a programming language promotes equational reasoning -- the more referential transparency you have the easier it is to do equational reasoning. E.g. with a (pseudo) function definition,

f x = x + x,

the ease with which you can (safely) replace f(foo) with foo + foo in the scope of this definition, without having too many constraints on where you can perform this reduction, is a good indication of how much referential transparency your programming language has.

For example, if foo were x++ in the C programming sense then you could not perform this reduction safely (which is to say, if you were to perform this reduction you wouldn't end up with the same program that you started with).
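
To make this concrete, here is a rough Java analogue (Java shares C's ++ operator; the method names are just illustrative):

int f(int x) {
    return x + x;
}

int original() {
    int foo = 1;
    return f(foo++);      // foo++ is evaluated once: returns f(1) == 2
}

int afterReduction() {
    int foo = 1;
    return foo++ + foo++; // foo++ is now evaluated twice: returns 1 + 2 == 3
}

The two versions return different results (and leave foo holding different values), so in the presence of side effects the "replace f(foo) with foo + foo" rewrite is not meaning-preserving.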

In practical programming languages you won't see perfect referential transparency, but functional programmers care about it more than most (cf. Haskell, where it is a core objective).

(Full disclosure: I am a functional programmer so by the top answer you should take this explanation with a grain of salt.)


In functional programming, referential transparency is generally defined as the property that an expression, in a program, may be replaced by its value (or anything having the same value) without changing the result of the program. This implies that methods should always return the same value for a given argument, without having any other effect. This functional programming concept also applies to imperative programming, though, and can help you make your code clearer.

Referential Transparency

The expression referential transparency is used in various domains, such as mathematics, logic, linguistics, philosophy and programming. It has quite different meanings in each of these domains. Here, we will deal only with computer programs, although we will show an analogy with maths (simple maths, don't worry). Note, however, that computer scientists do not agree on the meaning of referential transparency in programming. What we will look at is referential transparency as it is used by functional programmers.

Referential Transparency in Maths

In maths, referential transparency is the property of expressions that can be replaced by other expressions having the same value without changing the result in any way. Consider the following example:

x = 2 + (3 * 4)

We may replace the subexpression (3 * 4) with any other expression having the same value without changing the result (the value of x). The most evident expression to use is, of course, 12:

x = 2 + 12

Any other expression having the value 12 (maybe (5 + 7)) could be used without changing the result. As a consequence, the subexpression (3 * 4) is referentially transparent.

We may also replace the expression 2 + 12 by another expression having the same value without changing the value of x, so it is referentially transparent too:

x = 14

You can easily see the benefit of referential transparency: it allows reasoning about an expression in isolation. Without it, we could not resolve any expression without considering other elements of the program.

Referential Transparency in Programming

In programming, referential transparency applies to programs. As programs are composed of subprograms, which are programs themselves, it applies to those subprograms, too. Subprograms may be represented, among other things, by methods. That means a method can be referentially transparent, which is the case if a call to that method may be replaced by its return value:

int add(int a, int b) {
    return a + b;
}

int mult(int a, int b) {
    return a * b;
}

int x = add(2, mult(3, 4));

In this example, the mult method is referentially transparent because any call to it may be replaced with the corresponding return value. This may be observed by replacing mult(3, 4) with 12:

int x = add(2, 12);

In the same way, add(2, 12) may be replaced with the corresponding return value, 14:

int x = 14;

None of these replacements will change the result of the program, whatever it does. Note that we could use any other expression having the same value, which is useful when refactoring.

On the other hand, consider the following program:

int add(int a, int b) {
    int result = a + b;
    System.out.println("Returning " + result);
    return result;
}

Replacing a call to the add method with the corresponding return value will change the result of the program, since the message will no longer be printed. In that case, it would only remove the side effect but in other cases, it might change the value returned by the method:

public static void main(String... args) {
    printFibs(10);
}

public static void printFibs(int limit) {
    Fibs fibs = new Fibs();
    for (int i = 0; i < limit; i++) {
        System.out.println(fibs.next());
    }
}

static class Fibs {
    private int previous = -1;
    private int last = 1;

    public Integer next() {
        // Mutates both fields on every call, which is why successive calls
        // return successive Fibonacci numbers rather than a single fixed value.
        last = previous + (previous = last);
        return previous + last;
    }
}

Here, the next method can’t be replaced with anything having the same value, since the method is designed to return a different value on each call.

Using such non-referentially-transparent methods requires strong discipline in order not to share the mutable state involved in the computation. Functional style avoids such methods in favor of referentially transparent versions.
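
A referentially transparent alternative (a sketch, not part of the original example) passes the state around explicitly instead of mutating it:

static class FibPair {
    final int previous;
    final int last;

    FibPair(int previous, int last) {
        this.previous = previous;
        this.last = last;
    }
}

// Same input pair, same output pair, every time; nothing is mutated.
static FibPair nextFib(FibPair p) {
    return new FibPair(p.last, p.previous + p.last);
}

static void printFibs(int limit) {
    FibPair p = new FibPair(0, 1);
    for (int i = 0; i < limit; i++) {
        System.out.println(p.last);   // printing is still an effect,
        p = nextFib(p);               // but nextFib itself is referentially transparent
    }
}

Here a call such as nextFib(new FibPair(0, 1)) may always be replaced with new FibPair(1, 1) without changing the result of the program.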

Referential Transparency in Imperative Programming

Both imperative and functional programming use functions. Although functional programming uses only pure functions, imperative programming uses:

  • pure functions: methods returning values and having no other effects
  • pure effects: methods returning nothing but changing something outside of them
  • functions with side effects: methods both returning a value and changing something

As we all know, it is good practice to avoid functions with side effects. This leaves imperative programmers with pure functions and pure effects. Referential transparency is then a powerful tool for imperative programmers to make their programs easier to reason about, and easier to test.
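
In Java terms, the three kinds of methods listed above might look like this (a sketch with made-up names):

// Pure function: returns a value and has no other effect.
int area(int width, int height) {
    return width * height;
}

// Pure effect: returns nothing, only changes something outside of it.
void logArea(int area) {
    System.out.println("Area: " + area);
}

// Function with side effects: returns a value and changes something (best avoided).
int areaAndLog(int width, int height) {
    int area = width * height;
    System.out.println("Area: " + area);
    return area;
}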


Note that this concept of "meaning" is something that happens in the mind of the observer. Thus, the same "reference" can mean different things to different people. So, for example, we have an Edinburgh disambiguation page in Wikipedia.

A related issue which can show up in the context of programming might be polymorphism.

And perhaps we should have a name for the special case of polymorphism (or perhaps even casting) where, for our purposes, the differing polymorphic cases are semantically equivalent, as opposed to just being similar. For example, the number 1 -- which might be represented using an integer type, a complex type or any of a variety of other types -- can be treated polymorphically.


The following answer, I hope, adds to and qualifies the controversial 1st and 3rd answers.

Let us grant that an expression denotes or refers to some referent. However, a question is whether these referents can be encoded isomorphically as part of expressions themselves, calling such expressions 'values'. For example, literal number values are a subset of the set of arithmetic expressions, truth values are a subset of the set of boolean expressions, etc. The idea is to evaluate an expression to its value (if it has one). So the word 'value' may refer to a denotation or to a distinguished element of the set of expressions. But if there is an isomorphism (a bijection) between the referent and the value we can say they are the same thing. (This said, one must be careful in defining the referents and the isomorphism, as the field of denotational semantics shows. To take an example mentioned in replies to the 3rd answer, the algebraic data type definition data Nat = Zero | Suc Nat does not correspond as expected to the set of natural numbers.)

Let us write E[·] for an expression with a hole, also known in some quarters as a 'context'. Two context examples for C-like expressions are [·]+1 and [·]++.

Let us write [[·]] for the function that takes an expression (with no hole) and delivers its meaning (referent, denotation, etc.) in some meaning-providing universe. (I'm borrowing notation from the field of denotational semantics.)

Let us adapt Quine's definition somewhat formally as follows: a context E[·] is referentially transparent iff given any two expressions E1 and E2 (no holes there) such that [[E1]] = [[E2]] (i.e. the expressions denote/refer-to the same referent) then it is the case that [[E[E1]]] = [[E[E2]]] (i.e. filling-in the hole with either E1 or E2 results in expressions that also denote the same referent).

Leibniz's rule of substituting equals for equals is typically expressed as 'if E1 = E2 then E[E1] = E[E2]', which says that E[·] is a function. A function (or for that matter a program computing the function) is a mapping from a source to a target such that there is at most one target element for each source element. Non-deterministic functions are a misnomer; they are really relations, functions delivering sets, etc. If in Leibniz's rule the equality = is denotational then the double brackets are simply taken for granted and elided. So a referentially transparent context is a function. And Leibniz's rule is the main ingredient of equational reasoning, so equational reasoning is definitely related to referential transparency.

Although [[·]] is a function from expressions to denotations, it could be a function from expressions to 'values' understood as a restricted subset of expressions, and [[·]] can be understood as evaluation.

Now, if E1 is an expression and E2 is a value, we have what I think most people mean when defining referential transparency in terms of expressions, values, and evaluation. But as illustrated by the 1st and 3rd answers on this page, this is an inaccurate definition.

The problem with contexts such as [·]++ is not the side effect, but that its value is not defined in C isomorphically to its meaning. Functions are not values (well, pointers to functions are) whereas in functional programming languages they are. Landin, Strachey, and the pioneers of denotational semantics were quite clever in using functional worlds to provide meaning.

For imperative C-like languages we can (roughly) provide semantics to expressions using the function [[·]] : Expression -> (State -> State x Value).

Value is a subset of Expression. State contains pairs (identifier,value). The semantic function takes an expression and delivers as its meaning a function from the current state to the pair with the updated state and a value. For example, [[x]] is the function from the current state to the pair whose first component is the current state and whose second component is the value of x. In contrast, [[x++]] is the function from the current state to the pair whose first component is a state in which the value of x is incremented, and whose second component is that very value. In this sense, the context [·]++ is referentially transparent iff it satisfies the definition given above.
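
A minimal Java sketch of that semantic function, covering just a variable read and a post-increment (the types and names here are hypothetical, and State is simplified to a map from identifiers to integers):

import java.util.HashMap;
import java.util.Map;

// The State x Value pair delivered by the meaning of an expression.
class StateAndValue {
    final Map<String, Integer> state;
    final int value;

    StateAndValue(Map<String, Integer> state, int value) {
        this.state = state;
        this.value = value;
    }
}

// [[.]] : Expression -> (State -> State x Value)
interface Expression {
    StateAndValue meaning(Map<String, Integer> state);
}

// [[x]]: the current state unchanged, paired with the value of x.
class Var implements Expression {
    final String name;
    Var(String name) { this.name = name; }

    public StateAndValue meaning(Map<String, Integer> state) {
        return new StateAndValue(state, state.get(name));
    }
}

// [[x++]]: a state in which x has been incremented, paired with that value
// (mirroring the description above).
class PostInc implements Expression {
    final String name;
    PostInc(String name) { this.name = name; }

    public StateAndValue meaning(Map<String, Integer> state) {
        Map<String, Integer> updated = new HashMap<>(state);
        int incremented = state.get(name) + 1;
        updated.put(name, incremented);
        return new StateAndValue(updated, incremented);
    }
}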

I think functional programmers are entitled to use referential transparency in the sense that they naturally recover [[·]] as a function from expressions to values. Functions are first-class values and the state can also be a value, not a denotation. The state monad is (in part) a clean mechanism for passing (or threading) the state.


The term "referential transparency" comes from analytical philosophy, the branch of philosophy that analyzes natural language constructs, statements and arguments based on the methods of logic and mathematics. In other words, it is the closest subject outside computer science to what we call programming language semantics. The philosopher Willard Quine was responsible for initiating the concept of referential transparency, but it was also implicit in the approaches of Bertrand Russell and Alfred Whitehead.

At its core, "referential transparency" is a very simple and clear idea. The term "referent" is used in analytical philosophy to talk about the thing that an expression refers to. It is roughly the same as what we mean by "meaning" or "denotation" in programming language semantics. Using Andrew Birkett's example (blog post), the term "the capital of Scotland" refers to the city of Edinburgh. That is a straightforward example of a "referent".

A context in a sentence is "referentially transparent" if replacing a term in that context by another term that refers to the same entity doesn't alter the meaning. For example

The Scottish Parliament meets in the capital of Scotland.

means the same as

The Scottish Parliament meets in Edinburgh.

So the context "The Scottish Parliament meets in ..." is a referentially transparent context. We can replace "the capital of Scotland" with "Edinburgh" without altering the meaning. To put another way, the context only cares about what the term refers to and nothing else. That is the sense in which the context is "referentially transparent."

On the other hand, in the sentence,

Edinburgh has been the capital of Scotland since 1999.

we can't do such a replacement. If we did, we would get "Edinburgh has been Edinburgh since 1999", which is a nutty thing to say, and doesn't convey the same meaning as the original sentence. So, it would seem that the context "Edinburgh has been ... since 1999" is referentially opaque (the opposite of referentially transparent). It apparently cares about something more than what the term refers to. What is it?

Things such as "the capital of Scotland" are called definite terms and they gave no lean amount of head ache to logicians and philosophers for a long time. Russell and Quine sorted them out saying that they are not actually "referential", i.e., it is a mistake to think that the above examples are used to refer to entities. The right way to understand "Edinburgh has been the capital of Scotland since 1999" is to say

Scotland has had a capital since 1999 and that capital is Edinburgh.

This sentence cannot be transformed into a nutty one. Problem solved! Quine's point was that natural language is messy, or at least complicated, because it is made to be convenient for practical use, but philosophers and logicians should bring clarity by understanding it in the right way. Referential transparency is a tool to be used for bringing such clarity of meaning.

What does all this have to do with programming? Not very much, actually. As we said, referential transparency is a tool to be used in understanding language, i.e., in assigning meaning. Christopher Strachey, who founded the field of programming language semantics, used it in his study of meaning. His foundational paper "Fundamental concepts in programming languages" is available on the web. It is a beautiful paper and everybody can read and understand it. So, please do so. You will be much enlightened. He introduces the term "referential transparency" in this paragraph:

One of the most useful properties of expressions is that called by Quine referential transparency. In essence this means that if we wish to find the value of an expression which contains a sub-expression, the only thing we need to know about the sub-expression is its value. Any other features of the sub-expression, such as its internal structure, the number and nature of its components, the order in which they are evaluated or the colour of the ink in which they are written, are irrelevant to the value of the main expression.

The use of "in essence" suggests that Strachey is paraphrasing it in order to explain it in simple terms. Functional programmers seem to understand this paragraph in their own way. There are 9 other occurrences of "referential transparency" in the paper, but they don't seem to bother about any of the others. In fact, the whole paper of Strachey is devoted to explaining the meaning of imperative programming languages. But, today, functional programmers claim that imperative programming languages are not referentially transparent. Strachey would be turning in his grave.

We can salvage the situation. We said that natural language is "messy, or at least complicated" because it is made to be convenient for practical use. Programming languages are the same way. They are "messy, or at least complicated" because they are made to be convenient for practical use. That does not mean that they need to confuse us. They just have to be understood the right way, using a meta language that is referentially transparent so that we have clarity of meaning. In the paper I cited, Strachey does exactly that. He explains the meaning of imperative programming languages by breaking them down into elementary concepts, never losing clarity anywhere. An important part of his analysis is to point out that expressions in programming languages have two kinds of "values", called l-values and r-values. Before Strachey's paper, this was not understood and confusion reigned supreme. Today, the definition of C mentions it routinely and every C programmer understands the distinction. (Whether the programmers in other languages understand it equally well is hard to say.)
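
For instance, in an assignment the same identifier plays both roles (a tiny illustration, not from Strachey's paper):

x = x + 1;   // the x on the left denotes a location (its l-value);
             // the x on the right denotes the current contents of that location (its r-value)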

Both Quine and Strachey were concerned with the meaning of language constructions that involve some form of context-dependence. For example, our example "Edinburgh has been the capital of Scotland since 1999" signifies the fact that "capital of Scotland" depends on the time at which it is being considered. Such context-dependence is a reality, both in natural languages and programming languages. Even in functional programming, free and bound variables are to be interpreted with respect to the context in which they appear. Context dependence of any kind blocks referential transparency in some way or another. If you try to understand the meaning of terms without regard to the contexts they depend on, you will again end up with confusion. Quine was concerned with the meaning of modal logic. He held that modal logic was referentially opaque and that it should be cleaned up by translating it into a referentially transparent framework (e.g., by regarding necessity as provability). He largely lost this debate. Logicians and philosophers alike found Kripke's possible world semantics to be perfectly adequate. A similar situation also reigns with imperative programming. State-dependence explained by Strachey and store-dependence explained by Reynolds (in a manner similar to Kripke's possible world semantics) are perfectly adequate. Functional programmers don't know much of this research. Their ideas on referential transparency are to be taken with a large grain of salt.

[Additional note: The examples above illustrate that a simple phrase such as "capital of Scotland" has multiple levels of meaning. At one level, we might be talking about the capital at the current time. At another level, we might be talking about all possible capitals that Scotland might have had through the course of time. We can "zoom into" a particular context and "zoom out" to span all contexts quite easily in normal practice. The efficiency of natural language makes use of our ability to do so. Imperative programming languages are efficient in very much the same way. We can use a variable x on the right hand side of an assignment (the r-value) to talk about its value in a particular state. Or, we might talk about its l-value, which spans all states. People are rarely confused by such things. However, they may or may not be able to precisely explain all the layers of meaning inherent in language constructs. All such layers of meaning are not necessarily 'obvious' and it is a matter of science to study them properly. However, the inability of ordinary people to articulate such layered meanings doesn't imply that they are confused about them.]

A separate "postscript" below relates this discussion to the concerns of functional and imperative programming.




