Compiled vs. Interpreted Languages







First, a clarification: Java is not fully statically compiled and linked the way C++ is. It is compiled into bytecode, which is then interpreted by a JVM. The JVM may also just-in-time compile the bytecode to native machine code, but it doesn't have to.

More to the point: I think interactivity is the main practical difference. Since everything is interpreted, you can take a small excerpt of code, then parse and run it against the current state of the environment. Thus, if you had already executed code that initialized a variable, you would still have access to that variable, and so on. It really lends itself to things like a functional style.
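Not part of the original answer, but a minimal sketch of that interactivity in Python, the asker's interpreted language: each excerpt is executed against the same namespace dictionary, so state created by an earlier excerpt is still visible to a later one, much as in a REPL.

    # One shared namespace plays the role of "the current state of the environment".
    session = {}

    exec("total = 10", session)        # an earlier excerpt initializes a variable
    exec("total += 5", session)        # a later excerpt can still see and change it
    print(session["total"])            # -> 15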

Interpretation, however, costs a lot, especially when you have a large system with many references and a lot of context. By definition it is wasteful, because identical code may have to be interpreted and optimized twice (although most runtimes have some caching and other optimizations for that). Still, you pay a runtime cost and often need a runtime environment. You are also less likely to see complex interprocedural optimizations, because at present performing them is not fast enough to stay interactive.
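As a concrete instance of the "some caching" mentioned above, here is a small sketch using CPython, assuming a writable temp directory, bytecode writing not disabled, and a made-up module name: the interpreter stores the bytecode it compiled for an imported module so it doesn't have to re-translate the source on every run.

    import pathlib, sys, tempfile

    workdir = pathlib.Path(tempfile.mkdtemp())
    (workdir / "cached_demo.py").write_text("VALUE = 40 + 2\n")   # hypothetical module

    sys.path.insert(0, str(workdir))
    import cached_demo                 # first import: compile to bytecode, then cache it
    print(cached_demo.VALUE)           # -> 42

    # The cached bytecode now sits in __pycache__/ next to the source file.
    print([p.name for p in (workdir / "__pycache__").glob("*.pyc")])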

Therefore, for large systems that are not going to change much, and for certain languages, it makes more sense to precompile and prelink everything and do all the optimization you can up front. This ends up with a very lean runtime that is already optimized for the target machine.

As for generating executables, that has little to do with it, IMHO. You can often create an executable from a compiled language. But you can also create an executable from an interpreted language, except that the interpreter and runtime are already packaged in the executable and hidden from you. This means that you generally still pay the runtime costs (although I am sure that for some languages there are ways to translate everything into a true executable).

I disagree that all languages could be made interactive. Certain languages, like C, are so tied to the machine and to the entire link structure that I'm not sure you can build a meaningful, fully-fledged interactive version.

I'm trying to get a better understanding of the difference. I've found a lot of explanations online, but they tend towards the abstract differences rather than the practical implications.

Most of my programming experience has been with CPython (dynamic, interpreted) and Java (static, compiled). However, I understand that there are other kinds of interpreted and compiled languages. Aside from the fact that programs written in compiled languages can be distributed as executable files, are there any advantages/disadvantages to each type? Oftentimes I hear people arguing that interpreted languages can be used interactively, but I believe that compiled languages can have interactive implementations as well, correct?




Compiling is the process of creating an executable program from code written in a compiled programming language. It allows the computer to run and understand the program without needing the programming software used to create it. When a program is compiled, it is usually compiled for a specific platform (e.g. the IBM PC platform), so it works on compatible computers but not on other platforms (e.g. the Apple platform). The first compiler was developed by Grace Hopper while working on the Harvard Mark I computer. Today, most high-level languages include their own compiler or have toolkits available that can be used to compile the program. A good example of a compiler used with Java is the one built into Eclipse, and an example of a compiler used with C and C++ is the gcc command. Depending on how big the program is, compiling may take anywhere from a few seconds to a few minutes, and if no errors are encountered during compilation, an executable file is created.




A language itself is neither compiled nor interpreted; only a specific implementation of a language is. Java is a perfect example. There is a bytecode-based platform (the JVM), a native compiler (gcj), and an interpreter for a superset of Java (bsh). So what is Java now: bytecode-compiled, native-compiled, or interpreted?

Other languages that are both compiled and interpreted include Scala, Haskell, and OCaml. Each of these languages has an interactive interpreter as well as a compiler to bytecode or native machine code.

So generally categorizing languages by "compiled" and "interpreted" doesn't make much sense.




The biggest advantage of interpreted source code over compiled source code is PORTABILITY.

If your source code is compiled, you need to compile a different executable for each type of processor and/or platform that you want your program to run on (e.g. one for Windows x86, one for Windows x64, one for Linux x64, and so on). Furthermore, unless your code is completely standards compliant and does not use any platform-specific functions/libraries, you will actually need to write and maintain multiple code bases!

If your source code is interpreted, you only need to write it once and it can be interpreted and executed by an appropriate interpreter on any platform! It's portable! Note that an interpreter itself is an executable program that is written and compiled for a specific platform.

An advantage of compiled code is that it hides the source code from the end user (which might be intellectual property) because instead of deploying the original human-readable source code, you deploy an obscure binary executable file.




It's rather difficult to give a practical answer, because the difference is about the language definition itself. It's possible to build an interpreter for every compiled language, but it's not possible to build a compiler for every interpreted language. It's very much about the formal definition of a language: that theoretical computer-science stuff nobody likes at university.




The Python Book (© 2015 Imagine Publishing Ltd) distinguishes the two with the following hint on page 10:

An interpreted language such as Python is one where the source code is converted to machine code and then executed each time the program runs. This is different from a compiled language such as C, where the source code is only converted to machine code once – the resulting machine code is then executed each time the program runs.




From http://www.quora.com/What-is-the-difference-between-compiled-and-interpreted-programming-languages

There is no difference, because “compiled programming language” and “interpreted programming language” aren’t meaningful concepts. Any programming language, and I really mean any, can be interpreted or compiled. Thus, interpretation and compilation are implementation techniques, not attributes of languages.

Interpretation is a technique whereby another program, the interpreter, performs operations on behalf of the program being interpreted in order to run it. If you can imagine reading a program and doing what it says to do step-by-step, say on a piece of scratch paper, that’s just what an interpreter does as well. A common reason to interpret a program is that interpreters are relatively easy to write. Another reason is that an interpreter can monitor what a program tries to do as it runs, to enforce a policy, say, for security.

Compilation is a technique whereby a program written in one language (the “source language”) is translated into a program in another language (the “object language”), which hopefully means the same thing as the original program. While doing the translation, it is common for the compiler to also try to transform the program in ways that will make the object program faster (without changing its meaning!). A common reason to compile a program is that there’s some good way to run programs in the object language quickly and without the overhead of interpreting the source language along the way.

You may have guessed, based on the above definitions, that these two implementation techniques are not mutually exclusive, and may even be complementary. Traditionally, the object language of a compiler was machine code or something similar, which refers to any number of programming languages understood by particular computer CPUs. The machine code would then run “on the metal” (though one might see, if one looks closely enough, that the “metal” works a lot like an interpreter). Today, however, it’s very common to use a compiler to generate object code that is meant to be interpreted—for example, this is how Java used to (and sometimes still does) work. There are compilers that translate other languages to JavaScript, which is then often run in a web browser, which might interpret the JavaScript, or compile it to virtual machine or native code. We also have interpreters for machine code, which can be used to emulate one kind of hardware on another. Or, one might use a compiler to generate object code that is then the source code for another compiler, which might even compile code in memory just in time for it to run, which in turn . . . you get the idea. There are many ways to combine these concepts.
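A toy sketch (not from the quoted answer) of those two definitions side by side, in Python: the "program" is a made-up expression language encoded as nested tuples. The interpreter walks the program and does what each node says; the "compiler" translates the same program into another language (here, a Python expression) that can then be executed without looking at the original form again.

    # A made-up expression language: programs are nested tuples such as
    # ("add", ("num", 2), ("mul", ("num", 3), ("num", 4))).

    def interpret(node):
        """Interpreter: walk the program and do what each node says, step by step."""
        op = node[0]
        if op == "num":
            return node[1]
        if op == "add":
            return interpret(node[1]) + interpret(node[2])
        if op == "mul":
            return interpret(node[1]) * interpret(node[2])
        raise ValueError(f"unknown operation: {op}")

    def compile_to_python(node):
        """'Compiler': translate the program into another language (a Python expression)."""
        op = node[0]
        if op == "num":
            return str(node[1])
        if op == "add":
            return f"({compile_to_python(node[1])} + {compile_to_python(node[2])})"
        if op == "mul":
            return f"({compile_to_python(node[1])} * {compile_to_python(node[2])})"
        raise ValueError(f"unknown operation: {op}")

    program = ("add", ("num", 2), ("mul", ("num", 3), ("num", 4)))

    print(interpret(program))                   # interpretation: 14
    object_code = compile_to_python(program)    # compilation: the "object language" is Python
    print(object_code, "=", eval(object_code))  # (2 + (3 * 4)) = 14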




A compiler and an interpreter do the same job: translating a programming language into another programming language, usually one closer to the hardware, often directly executable machine code.

Traditionally, "compiled" means that this translation happens all in one go, is done by a developer, and the resulting executable is distributed to users. Pure example: C++. Compilation usually takes quite a while and tries to do lots of expensive optimization so that the resulting executable runs faster. End users don't have the tools and knowledge to compile things themselves, and the executable often has to run on a variety of hardware, so you can't do many hardware-specific optimizations. During development, the separate compilation step means a longer feedback cycle.

Traditionally, "interpreted" means that the translation happens "on the fly", when the user wants to run the program. Pure example: vanilla PHP. A naive interpreter has to parse and translate every piece of code every time it runs, which makes it very slow. It can't do complex, costly optimizations because they'd take longer than the time they save in execution. But it can fully use the capabilities of the hardware it runs on. The lack of a separate compilation step reduces feedback time during development.
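A rough sketch of that cost in Python, using its own machinery: evaluating a snippet from source text re-parses and re-translates it on every call, while compiling it once up front leaves only the execution. The absolute numbers depend on the machine; only the ratio is interesting.

    import timeit

    snippet = "x * 2 + 1"
    env = {"x": 21}

    reparsed = timeit.timeit(lambda: eval(snippet, env), number=100_000)   # parse + translate + run, every call
    code = compile(snippet, "<snippet>", "eval")                           # translate once...
    precompiled = timeit.timeit(lambda: eval(code, env), number=100_000)   # ...then only run

    print(f"re-parsed every call: {reparsed:.3f}s  compiled once: {precompiled:.3f}s")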

But nowadays "compiled vs. interpreted" is not a black-or-white issue; there are shades in between. Naive, simple interpreters are pretty much extinct. Many languages use a two-step process where the high-level code is translated to a platform-independent bytecode (which is much faster to interpret). Then you have "just-in-time compilers" which compile code at most once per program run, sometimes cache results, and even intelligently decide to interpret code that runs rarely while doing powerful optimizations for code that runs a lot. During development, debuggers are capable of swapping code inside a running program even for traditionally compiled languages.




Start thinking in terms of: a blast from the past.

Once upon a time, long long ago, there lived in the land of computing interpreters and compilers. All kinds of fuss ensued over the merits of one over the other. The general opinion at that time was something along the lines of:

  • Interpreter: Fast to develop (edit and run). Slow to execute because each statement had to be interpreted into machine code every time it was executed (think of what this meant for a loop executed thousands of times).
  • Compiler: Slow to develop (edit, compile, link and run. The compile/link steps could take serious time). Fast to execute. The whole program was already in native machine code.

A difference of one or two orders of magnitude in runtime performance existed between an interpreted program and a compiled program. Other distinguishing points, such as run-time mutability of the code, were also of some interest, but the major distinction revolved around the run-time performance issues.

Today the landscape has evolved to such an extent that the compiled/interpreted distinction is pretty much irrelevant. Many compiled languages call upon run-time services that are not completely machine code based. Also, most interpreted languages are "compiled" into byte-code before execution. Byte-code interpreters can be very efficient and rival some compiler generated code from an execution speed point of view.

The classic difference is that compilers generated native machine code ahead of time, while interpreters read the source code and executed it on the fly using some sort of run-time system. Today there are very few classic interpreters left; almost all of them compile into bytecode (or some other semi-compiled state) which then runs on a virtual "machine".
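To see the "compile into bytecode which then runs on a virtual machine" model concretely, CPython exposes its own bytecode through the standard dis module (the exact instruction names vary between Python versions):

    import dis

    def average(values):
        return sum(values) / len(values)

    # Prints the VM instructions (e.g. LOAD_FAST, CALL, BINARY_OP on recent
    # versions) that the CPython bytecode interpreter executes for this function.
    dis.dis(average)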




I am looking into the same concept, and found this promising article by Joel Pobar:

Create a Language Compiler for the .NET Framework

He discusses high-level compiler concepts and proceeds to invent his own language for the .NET Framework. Although it's aimed at the .NET Framework, many of the concepts should be reproducible elsewhere. The article covers:

  1. Language definition
  2. Scanner
  3. Parser (the bit I'm mainly interested in; a tiny sketch follows below)
  4. Targeting the .NET Framework
  5. The Code Generator

There are other topics as well, but you get the gist.

It's aimed at people starting out, and written in C# (not quite Java).
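Not from Pobar's article (which is C#), but a tiny Python sketch of what the scanner and parser stages look like for a toy expression grammar with + and *, producing the kind of nested tree a code generator would walk:

    import re

    TOKEN = re.compile(r"\s*(?:(\d+)|(.))")   # scanner pattern: numbers or single characters

    def scan(source):
        """Scanner: turn raw text into a flat list of (kind, value) tokens."""
        tokens = []
        for number, other in TOKEN.findall(source):
            if number:
                tokens.append(("num", int(number)))
            elif not other.isspace():
                tokens.append(("op", other))
        return tokens

    def parse(tokens):
        """Parser: turn the token list into a nested tree (* binds tighter than +)."""
        pos = 0

        def parse_factor():              # factor := NUMBER
            nonlocal pos
            kind, value = tokens[pos]
            assert kind == "num", f"expected a number at position {pos}"
            pos += 1
            return ("num", value)

        def parse_term():                # term := factor ('*' factor)*
            nonlocal pos
            node = parse_factor()
            while pos < len(tokens) and tokens[pos] == ("op", "*"):
                pos += 1
                node = ("mul", node, parse_factor())
            return node

        def parse_expr():                # expr := term ('+' term)*
            nonlocal pos
            node = parse_term()
            while pos < len(tokens) and tokens[pos] == ("op", "+"):
                pos += 1
                node = ("add", node, parse_term())
            return node

        return parse_expr()

    print(parse(scan("2 + 3 * 4")))
    # -> ('add', ('num', 2), ('mul', ('num', 3), ('num', 4)))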

HTH

bones





java python compiler-construction programming-languages interpreter