Some notable languages of the time, including Java and C++, had gained considerable traction partly by virtue of their robust error-handling mechanisms. Error handling was taken seriously in these languages because they were always considered professional languages, meant for professionals.
Likewise, JavaScript gives us `try` and `catch` statements to capture and deal with errors in a given program, and even the `throw` keyword to manually dispatch errors.
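A minimal sketch of this machinery is shown below (the `divide()` function is a made-up example of our own):

```javascript
// A function that throws an error for an invalid argument.
function divide(a, b) {
  if (b === 0) {
    // Manually dispatch an error using the throw keyword.
    throw new Error('Division by zero is not allowed.');
  }
  return a / b;
}

var result;
try {
  // Code that might throw an error goes inside try.
  result = divide(10, 0);
} catch (e) {
  // The thrown Error object is captured here.
  result = e.message;
}
```

Since the divisor is `0`, control jumps from `divide()` straight into the `catch` block, where the error object becomes available for inspection.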
What are errors?
When running a program, there are many ways we could end up in a situation where the program doesn't run at all, stops executing midway, or produces unexpected output.
Whatever the end result may be, each of these errors nonetheless has an underlying cause.
For instance, if the program doesn't run at all, it's certain that the code has illegal syntax (this is the cause). Similarly, if the program runs but stops executing midway, it might be the case that a reference to a non-existent variable was made in it (this is the cause).
Anyhow, whatever the error and whatever its cause, we as developers are tasked with finding these errors, finding their causes, and then ideally solving them.
This activity is so common in programming that it has a name of its own — debugging.
Following the nomenclature of the word itself, debugging literally means to remove bugs (problems) from a program.
But before we can start debugging our programs, we need a sufficient amount of knowledge about the different kinds of errors that programs can run into.
Some errors end up not allowing the execution of the program at all, some end up terminating it in between, some end up just producing gibberish output, and so on.
Based on the nature of the errors, we can divide them into three broad categories, as follows:
- Syntax error — means that there is some issue with the grammar of the code. The obvious solution is to look for any invalid symbols, identifiers, or statements, and then rewrite them in the proper syntax.
- Semantic error — means that there is some kind of a problem with the meaning (i.e. the semantics) of the code. For example, code accessing a variable might be syntactically correct, but semantically erroneous by virtue of referring to a non-existent variable.
- Logical error — means that there is a problem in the logic of the program. These errors are typically very difficult to find, since they don't cause any visible error messages.
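To make the last category concrete, here's a small sketch of a logical error (the `average()` function is a made-up example): the code runs to completion without any error message, yet produces a wrong result.

```javascript
// Intended: compute the average of two numbers.
// Logical error: the division applies only to b, not to (a + b).
function average(a, b) {
  return a + b / 2; // should be (a + b) / 2
}

var result = average(4, 6); // gives 7, not the expected 5
```

No engine can flag this; only the developer, knowing the intended behavior, can spot that the output is wrong.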
These three broad categories of errors might contain further distinctions downstream for a particular language.
Note that these classes only represent errors that are syntactic and/or semantic in nature. A logical error can't be represented by any class whatsoever, simply because it can only be recognized by the developer of the program, not by the engine executing it.
- `Error` — a generic class that represents all errors.
- `SyntaxError` — means that there is a problem in the syntax of the code.
- `TypeError` — means that a value is used in a way in which it can't be used.
- `ReferenceError` — means that a reference to a non-existent value is made.
- `RangeError` — means that a given value is out of range.
- `URIError` — means that a URI-processing function was used in the wrong way.
- `EvalError` — means that a problem was encountered while running the global `eval()` function.
- `AggregateError` — serves to group multiple errors, as thrown in a chain of promises.
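As a quick sketch of the last class, an `AggregateError` can also be constructed directly, assuming an environment that supports ES2021 (this is the same kind of object that `Promise.any()` produces when every promise rejects):

```javascript
// Group two individual errors under a single AggregateError.
var e1 = new TypeError('First failure');
var e2 = new RangeError('Second failure');

var aggregate = new AggregateError([e1, e2], 'All operations failed');

var count = aggregate.errors.length; // number of grouped errors, i.e. 2
var message = aggregate.message;     // 'All operations failed'
```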
The best thing is that we can even define our own custom error classes based on the ones shown above.
For instance, we can define a derived class of `Error`, say `ArgumentError`, to represent a case where a function is called without a required argument.
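A sketch of such a class, assuming an environment with ES2015 class syntax (the `greet()` function here is a made-up example):

```javascript
// A custom error class derived from the native Error class.
class ArgumentError extends Error {
  constructor(message) {
    super(message);
    this.name = 'ArgumentError'; // override the inherited name
  }
}

function greet(name) {
  if (name === undefined) {
    throw new ArgumentError("The 'name' argument is required.");
  }
  return 'Hello, ' + name + '!';
}

var caughtName;
try {
  greet(); // called without the required argument
} catch (e) {
  caughtName = e.name; // 'ArgumentError'
}
```

Because `ArgumentError` extends `Error`, a caught instance still works with `instanceof Error`, while its `name` pinpoints the more specific problem.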
Each of these classes defines the following two properties:
- `name` — a string representing the name of the error class. For example, the `name` of a `SyntaxError` instance is `'SyntaxError'`.
- `message` — a string describing the error.
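A quick sketch of inspecting these two properties on a caught error:

```javascript
var errorName, errorMessage;

try {
  undefinedFunction(); // no such function exists
} catch (e) {
  errorName = e.name;       // 'ReferenceError'
  errorMessage = e.message; // a description of what went wrong
}
```

Note that while `name` is consistent across engines, the exact wording of `message` can differ from one engine to another.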
As stated before, `Error` is a generic class that represents all errors in a given program. The only purpose of `Error` is to serve as the base of the seven derived error classes shown above (obviously, apart from `Error` itself) and even of custom error classes.
The `SyntaxError` class is used to represent an error in the syntax of the code. Typically, this kind of error is raised while parsing the text of a program, right before anything is executed.
Consider the following code:
```javascript
var = 10;
```
Here, the `var` keyword is immediately followed by an equals sign (`=`) where the parser otherwise expects an identifier. Since this is clearly invalid syntax, the engine throws an error — precisely, a `SyntaxError`.
This is apparent in the console output below (from Google Chrome):
The name `SyntaxError` is clearly shown in the error displayed, confirming that the error has something to do with the ill-formed syntax of the underlying code.
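Note that a `SyntaxError` in the main script can't be caught by that script itself, since parsing fails before any of its code runs. However, a `SyntaxError` raised while parsing a string at runtime, e.g. by `JSON.parse()`, can be caught. A sketch:

```javascript
var caught;

try {
  // Parsing happens at runtime here, so the error is catchable.
  JSON.parse('{ invalid json }');
} catch (e) {
  caught = e instanceof SyntaxError; // true
}
```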
The `TypeError` class is used to represent an error in using a given value.
Common cases where a `TypeError` is raised are:
- Passing in an argument to a function that belongs to a different type than the one the function actually expects.
- Using a value in a way that it's not meant for, e.g. calling `null` as if it were a function.
- Trying to change a non-writable value, e.g. setting `Math.PI` equal to `null` (in strict mode).
Let's quickly consider a few examples.
Consider the following code:
```javascript
var value = null;
value();
```
Here, `value`, which is `null`, is called as if it were a function, which it is not. Hence, a `TypeError` is thrown, as `null` is used in a way that it's not meant for.
Take a look at the following error message generated in the console (Google Chrome):
It clearly pinpoints the cause of the underlying problem, i.e. that `value` is called as if it were a function. Even the position where the error occurs is shown in the message, i.e. `2:1` (`2` is the line number, `1` is the column number).
Time for another example. Consider the following code:
```javascript
'use strict';
Math.PI = 100;
```
Here, we try to change the value of the `PI` property of the predefined global `Math` object.
Since the property is configured to be non-writable by the engine, and since strict mode is enabled by the `'use strict'` directive at the start of the code, the code throws a `TypeError`, as illustrated below:
The message is once again pretty self-explanatory: we are trying to assign to a read-only (i.e. non-writable) property and thus end up with an error.
The reason for shifting the script above into strict mode, via the `'use strict'` directive, is that outside strict mode the assignment would simply fail silently, without throwing any error at all.
A `ReferenceError` is thrown when a reference to a non-existent variable is made.
Such an error happens quite often in code as a result of hitting the wrong keys while typing. For instance, while typing `contains`, one might end up with `continas` (a non-existent variable), and consequently a `ReferenceError`.
An example follows:
```javascript
var message = 'Typos are common!';
console.log(mesage);
```
After reading the error message, we'd immediately visit line 2 and notice that the variable name `mesage` is missing an 's'.
Whenever we encounter a `ReferenceError`, our first course of action must be to inspect why the given variable doesn't exist. In the case above, since we had just two lines of code, the reason was apparent, i.e. a typo.
However, in larger and more complex code, the reason for a `ReferenceError` might have something to do with the scopes of the given variables. This requires us to carefully work our way through the code, finding the declarations of the variables and making them global or local depending on the scenario at hand.
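A sketch of such a scope-related `ReferenceError` (`setup()` and `localValue` are made-up names):

```javascript
var result;

function setup() {
  // localValue is local to setup() — it doesn't exist outside.
  var localValue = 42;
}
setup();

try {
  result = localValue; // ReferenceError: out of scope here
} catch (e) {
  result = e.name; // 'ReferenceError'
}
```

The fix here would be to either declare `localValue` globally, or return it from `setup()`, depending on the scenario at hand.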
A `RangeError` is thrown whenever a given value is out of its range. For example, if a value is meant to be between `1` and `10`, and we set it to `-1`, this would lead to a `RangeError`.
Consider the following code:
```javascript
var num = 56.715;
num.toPrecision(0);
```
The `toPrecision()` method, as called on a number, rounds it to a given number of significant figures. Since the number of significant figures can't ever be less than 1, it's invalid to call the method with a number less than `1`.
The console snippet below confirms this:
The error message does a great job of explaining the reason for the error to us, i.e. we have called the method with a value that's outside the range 1 - 100.
Note that `toPrecision()` requires a number argument (a float argument is rounded down to the greatest integer), and what we gave above, i.e. `0`, was a number as well. Hence, the error thrown shouldn't be a `TypeError`, and it really isn't.
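To confirm this, we could catch the error and check its class (a sketch):

```javascript
var num = 56.715;
var outcome;

try {
  outcome = num.toPrecision(0); // 0 is outside the valid range 1 - 100
} catch (e) {
  // Confirm that the thrown error is a RangeError, not a TypeError.
  outcome = (e instanceof RangeError) ? 'RangeError' : e.name;
}
```

With a valid argument, say `num.toPrecision(3)`, the call succeeds and returns the string `'56.7'` instead.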