Course: JavaScript


JavaScript Touch Events - Basics

Chapter 75 · 36 mins

Learning outcomes:

  1. The touchstart, touchmove and touchend events
  2. The TouchEvent interface
  3. The touches object
  4. Detecting swipe gestures
  5. The touchcancel event


In the last chapter, we saw a very brief overview of touch events in JavaScript. Now we shall explore how to set up a simple program that operates around touch events.

In particular, we'll explore the touchstart, touchmove, touchend and touchcancel events. Not only this, but we'll also cover the TouchList interface which represents a list of all the respective touch points as Touch instances.

Time to begin learning.

Setting up the events

Before we can start to create touch-powered JavaScript applications, we need to understand all the different events fired throughout the lifecycle of a touch point in contact with a touch surface.

In particular, there are four standard events to consider as follows:

  1. touchstart — fired when a touch point comes into contact with the touch surface.
  2. touchmove — fired constantly as the touch point moves across the touch surface.
  3. touchend — fired when a touch point leaves the touch surface.
  4. touchcancel — an implementation-dependent event, typically fired when the touch is interrupted or couldn't be correctly detected.

The touchstart and touchend events could be thought of as mousedown and mouseup. That is, when the touch point goes down onto the screen, touchstart fires, and when it goes up away from the screen, touchend fires.

To read more about mousedown and mouseup, please refer to JavaScript Mouse Events.

Each of these events can be handled on any given element or on the whole document. They can be handled directly via the onevent category of properties or by using the addEventListener() method.

The corresponding HTML attributes are provided as well, but we won't use them since inline event attributes aren't good development practice at all.

Hence, to handle the touchstart event on the whole <body> element, we could use any of the following statements:

document.body.ontouchstart = function(e) {};
// or
document.body.addEventListener('touchstart', function(e) {});

We'll go with the latter approach, i.e. using addEventListener(), since, once again, the former isn't good practice, especially when we want to assign multiple event handlers to an event on a single target.

Using addEventListener() also has the benefit of supporting passive listeners, something that isn't possible with the mere onevent-like properties.

With this done, let's now create a very very elementary program that notifies the user when a touch point comes into contact with the screen and when it leaves it.

Here's a live example:

Live Example

First we have the following HTML and CSS to create a considerably large touch area with a light grey background color, and an element to showcase the output made by the script:

<div id="touch-region"></div>
<div id="output"></div>
#touch-region {
   height: 200px;
   background: #ddd;
}

And then we have the following script to enable touch interaction on it:

var touchRegionElement = document.getElementById('touch-region');
var outputElement = document.getElementById('output');

touchRegionElement.addEventListener('touchstart', function(e) {
   outputElement.innerText = 'Touch begins';
});

touchRegionElement.addEventListener('touchend', function(e) {
   outputElement.innerText = 'Touch ends';
});

The workflow of this program is extremely basic.

When a touch point touches the screen in the area defined by #touch-region, a touchstart event is emitted and likewise the handler above outputs 'Touch begins'. Similarly, when the touch point leaves the surface, a touchend event fires and likewise the handler above outputs 'Touch ends'.

How does touchend really work?

It's extremely important to take note of the way touchend works.

That is, it only fires when the touch point leaves the screen (either by dragging it outside the physical bounds of the screen or by lifting the touch point), NOT when it leaves the element.

It doesn't fire when the touch point moves out of the element on which the event is handled.

For instance, in the example above, try initiating a touch inside #touch-region, then taking your finger all the way out of the element, and then finally leaving the screen.

You'll still notice the touchend event fire and that's because it fires only when the physical touch action comes to an end, regardless of whether it happens inside or outside the respective element.

The events touchstart and touchend could've been named touchdown and touchup respectively, but does that really make sense?

For a mouse, yes the names mousedown and mouseup make sense, owing to the nature of the buttons on the mouse that literally go down and up during a mouse interaction. Likewise, we have the events mousedown and mouseup.

But touchdown and touchup don't make much sense, if at all, when thought from the perspective of a touch point, be that a finger or a stylus.

Simple, wasn't it?

Let's consider another example, this time a bit more complex, utilizing the touchmove event:

The idea is to create a circle for each of the events touchstart, touchmove and touchend, and display them one after another inside a <div> element. The circle for touchstart should be green while the circle for touchend should be red. All the circles in between should be yellow.

Here's a simple demonstration:

Live Example

Now we suggest you try developing this program on your own first. It's superbly simple to code.

Alright, assuming that you've given it a try, let's code the program together.

First we have the following HTML and CSS. The HTML is exactly the same as before, while the CSS has one addition, i.e. styles for .circle elements, which will eventually be added inside #output:

<div id="touch-region"></div>
<div id="output"></div>
#touch-region {
   height: 200px;
   background: #ddd;
}

.circle {
   margin: 2px;
   padding: 4px;
   display: inline-block;
   border-radius: 100%;
}

Now, let's talk about the JavaScript code.

One action that's common to each event's handler is creating a circle. Moreover, each circle gets a different color based on the event fired. This hints at defining a function to create circles that takes one argument: the background color to apply to the circle.

Following is the definition of the function which we call createCircle():

function createCircle(backgroundColor) {
   var circleElement = document.createElement('div');
   circleElement.className = 'circle';
   circleElement.style.backgroundColor = backgroundColor;
   outputElement.appendChild(circleElement);
}


The code is pretty much self-explanatory — first a <div> element node is created, then the class 'circle' given to it, followed by the desired background color, and finally placed right inside the #output element.

With this function at hand, now we are only left to call it inside the three event handlers.

This is accomplished below:

var touchRegionElement = document.getElementById('touch-region');
var outputElement = document.getElementById('output');

function createCircle(backgroundColor) { /* ... */ }

touchRegionElement.addEventListener('touchstart', function(e) {
   createCircle('green');
});

touchRegionElement.addEventListener('touchmove', function(e) {
   createCircle('yellow');
});

touchRegionElement.addEventListener('touchend', function(e) {
   createCircle('red');
});

Live Example


Now we could go on and create many such programs but they will all be boring until and unless we use the information captured by each of the fired events.

That's where the TouchEvent interface comes into the game. It's time to explore it...

The TouchEvent interface

Whenever a touch event is fired, the event object passed in to the respective handler function is a TouchEvent instance. As with all event objects, it contains a lot of useful information about the fired event.

TouchEvent inherits from the UIEvent interface as it represents an event that takes place in the user interface, which in turn (as we know) inherits from the Event interface.

Shown below are some of the common TouchEvent properties:

  1. touches
  2. changedTouches
  3. targetTouches
  4. target
  5. type
  6. timeStamp

You might be wondering why there aren't any clientX/clientY, or pageX/pageY, or screenX/screenY properties in here like in a MouseEvent object.

After all, how could we track the touch point(s) moving across the touch surface without these properties?

Here's the simple answer: a MouseEvent object works differently than a TouchEvent in that it provides information for the mouse pointer, of which there can only be one. Likewise, it directly holds the respective properties clientX/clientY, pageX/pageY, and screenX/screenY.

In contrast, TouchEvent provides information for an entirely different medium i.e. a touch point such as a finger or stylus over a touch surface. It's quite common for touch surfaces to have multi-touch support where we could have more than one touch point working on the surface simultaneously.

Hence, for a TouchEvent object to provide information for all the touch points, it obviously can't have just one pair of properties clientX/clientY, or pageX/pageY, or screenX/screenY. Rather, it ought to provide a list of elements, each of which represents a given touch point and thereby carries these properties on it.

This is where the two interfaces TouchList and Touch step into the equation.

TouchList is merely a list containing Touch instances, each of which represents a given touch point on the touch surface.

The following three properties of a TouchEvent object are TouchList instances: touches, changedTouches and targetTouches.

We'll cover TouchList and all three of these instances in the next chapter, but for now, it's worthwhile to know a little bit about them.

  1. touches is a list of Touch objects representing all the touch points currently in contact with the touch surface.
  2. changedTouches is a list of Touch objects representing the touch points that fired the current event.
  3. targetTouches is a list of Touch objects representing the touch points that have the same target element as the target of the touch point which fired the current event.
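To make the distinction concrete, here's a sketch using plain objects standing in for a TouchEvent and its three TouchList properties. This is an assumption for illustration only; real TouchList instances are created by the browser, never constructed by hand, and the function name summarizeLists is ours:

```javascript
// Summarize the sizes of the three lists carried by a touch event.
function summarizeLists(e) {
   return e.touches.length + ' on surface, ' +
          e.changedTouches.length + ' changed, ' +
          e.targetTouches.length + ' on target';
}

// Hypothetical scenario: two fingers down on the same element,
// one of which just moved (and hence fired the current event).
var fakeEvent = {
   touches: [{}, {}],
   changedTouches: [{}],
   targetTouches: [{}, {}]
};

console.log(summarizeLists(fakeEvent)); // 2 on surface, 1 changed, 2 on target
```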

For now, we'll focus on the first one, i.e. touches. It's the simplest of all.

The touches object

As stated before, touches is a property of the TouchEvent object passed into a touch event handler function.

Describing it in simple words:

touches is a TouchList instance representing all of the touch points that are currently in contact with the touch surface.

Each element in the list is a Touch instance with numerous properties on it.

The most useful for us are

  1. clientX/clientY
  2. pageX/pageY
  3. screenX/screenY

There are a couple more useful properties to consider but we'll leave them for the next-to-next chapter on JavaScript Touch Events — The Touch interface.

Anyway, it's time to consider an example using the touches object.

In the code below, we listen to the two events, touchstart and touchmove, and display the co-ordinates of the touch point out on the document as they occur.

var touchRegionElement = document.getElementById('touch-region');
var outputElement = document.getElementById('output');

function showCoordinates(e) {
   outputElement.innerText = `${e.touches[0].clientX}, ${e.touches[0].clientY}`;
}

touchRegionElement.addEventListener('touchstart', showCoordinates);
touchRegionElement.addEventListener('touchmove', showCoordinates);

Note that we're assuming that the #touch-region and #output elements have already been set up in the HTML, as we did in the previous code snippets.

As for the reason for not listening to the touchend event: on touchend, the touches list is empty, given that the whole touch interaction involved just one touch point.

There is a different way to track the co-ordinates of the touch point upon touchend, which we'll see later on below.

Here's a demonstration of this program:

Live Example

As you can see, working with touch events isn't as difficult as it might seem.

In the example below, we demonstrate yet another touch application. This time, we showcase the total number of contact points on the touch surface using the length property of the touches object.

The idea is really simple: when a new contact is made i.e. when touchstart gets fired and when an existing contact is removed from the surface i.e. when touchend gets fired, we output touches.length.

For either event, touches holds as many elements as there are touch points currently on the surface. Likewise, its length property would provide us with this exact number.

Shown below is the complete code:

var touchRegionElement = document.getElementById('touch-region');
var outputElement = document.getElementById('output');

function showTouchPoints(e) {
   outputElement.innerText = e.touches.length;
}

touchRegionElement.addEventListener('touchstart', showTouchPoints);
touchRegionElement.addEventListener('touchmove', showTouchPoints);
touchRegionElement.addEventListener('touchend', showTouchPoints);

Live Example

Guess what? Our inventory of examples hasn't ended yet.

It's time for another example, this time to display a circle right at the point of contact on the touch surface.

What we mean is demonstrated below:

Live Example

Whenever we put our finger in contact with the touch surface on the document's viewport, a circle is shown right at that point. When the finger leaves contact, the circle remains there.

Note that we are assuming that only one touch point is interacting in the example above. With multi-touch interaction, the program won't work because it isn't made to handle that. To handle multi-touch interaction, we ought to use the changedTouches object instead of touches. We'll cover it in the next chapter.

How to accomplish this task?

What we need is once again a function to create a circle but now instead of providing it the desired background color (as we did before), we'll need to provide the x and y co-ordinates of the given touch point so that the circle (which would obviously be fixed positioned) could correctly be positioned.

Here we do require a little bit of logic and math to create the program. But don't worry; it won't be a second course on calculus, but just some very elementary arithmetic.

One more thing that's quite clear is that we'll need to set up event handlers on the whole document instead of on just one single element therein. For this, we'll use the window object.

Instead of window, we could also use document, document.documentElement or document.body. They are all the same as far as the handling of each event goes.

So to begin with, here's the CSS code for the .circle class which will ultimately be given to each element created at the location of the touch point upon touchstart:

.circle {
   position: fixed;
   padding: 20px;
   background-color: rgba(0, 0, 0, 0.4);
   border-radius: 100%;
}

The JavaScript is also very basic: just a function that creates a circle, set up to handle the touchstart event.

Here's the JavaScript code:

function createCircle(e) {
   var circleElement = document.createElement('div');
   circleElement.className = 'circle';
   circleElement.style.left = e.touches[0].clientX - 20 + 'px';
   circleElement.style.top = e.touches[0].clientY - 20 + 'px';
   document.body.appendChild(circleElement);
}

window.addEventListener('touchstart', createCircle);

Live Example

Perhaps the most important statements to look over here are the two that set style.left and style.top. They serve to correctly position the circle right at the place where the touch point initiated the touchstart event.

The reason for subtracting 20 from both clientX and clientY is so that the mid-point of the circle created can coincide with the mid-point of the touch point.
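This centering arithmetic can be isolated into a tiny helper. The function name centerToTopLeft is hypothetical, but the math mirrors the snippet above: the circle's diameter is 40px (20px of padding on each side), so half of it is subtracted from each co-ordinate:

```javascript
// Given a touch point's co-ordinates and the circle's diameter,
// compute the CSS left/top values that center the circle on it.
function centerToTopLeft(clientX, clientY, diameter) {
   return {
      left: clientX - diameter / 2 + 'px',
      top: clientY - diameter / 2 + 'px'
   };
}

console.log(centerToTopLeft(100, 60, 40)); // { left: '80px', top: '40px' }
```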

And that's it. Simple, as always.

Swipe gestures

In this section, we see how to use the events touchstart and touchend and the touches object to detect whether the user performed a swipe.

But what is a swipe?

A swipe gesture is typically taken as a rapid movement of a touch point in roughly one direction on the touch surface before leaving contact with it.

There are generally four kinds of swipe gestures following from the four typical directions used in computing: swipe-up, swipe-down, swipe-left and swipe-right.

For instance, swipe-up means that we start off from the bottom of the touch surface and then move the touch point upwards swiftly before leaving contact with the surface.

Swiping gestures are performed all day long on touch devices — they are the most basic and most intuitive of all kinds of complex touch gestures performed.

Now the question is, how to detect a swipe action in JavaScript?

Or let's refine the question further to tackle a more specific problem — how to detect a swipe-left action in JavaScript?

Let's think about it...

We need essentially two things to detect a swipe-left gesture:

  1. Check if the direction of the gesture is towards the left.
  2. Check if the gesture is quick enough.

How to accomplish the first of these?

Well, let's see it with the help of some examples.

Suppose that we start off a touch gesture at the filled grey mark below and then end it right at the hollow mark. The direction is shown with a line connecting the marks.

Does this gesture seem to go leftwards? Well, it surely doesn't. Instead, it's going to the top-right.

Now consider the following:

Does this seem to go leftwards? Once again, no. It's going to the top-left, whereas we want something close to going strictly to the left.

Time for another example:

Does this seem to go leftwards? Well, yes. The gesture seems to be in a perfect straight line towards the left.

And here's the final example:

Does this seem to go leftwards? Well, yes. Although the gesture doesn't draw a perfectly horizontal 180° line, it still constitutes a swipe-left. That's because we allow for a certain level of deflection in the gesture either above the starting point or below it.

After all, no one performs swipe-left gestures in perfect 180° angles!

Now based on all these examples, it's clear that we need to make two computations in order to detect whether a touch gesture is really going to the left:

  1. The change in the x co-ordinate at the end of the gesture.
  2. The change in the y co-ordinate at the end of the gesture.

Both the readings for these co-ordinates are first obtained on the touchstart event and then on the touchend event. It's during touchend that the change is calculated by subtracting the previous values from the latest values.

If the change in the x co-ordinate is greater than a given threshold value and the change in the y co-ordinate is less than a maximum threshold value, then the touch gesture is considered to have met the first condition of a swipe-left gesture.

But how do we figure out the co-ordinates of the touch point on touchend?

As we know, touches represents a list of all the touch points currently in contact with the touch surface, however in this case our touch point isn't on the surface. How could we obtain information for something that just isn't there?

Well, surely we could, all thanks to the changedTouches object.

changedTouches is another TouchList instance that exists on a TouchEvent object, however it's a little bit different than touches.

The exact detail and examples of the difference between these are both left for the next chapter, but let's try to summarize it for the sake of this example.

changedTouches is a TouchList instance representing all the touch points that triggered the current event.

In the case of touchend, changedTouches contains all those touch points that left the touch surface to ultimately cause the touchend event to be fired.

Perfect. Our problem has been effectively solved.

Now it's time to look into the second condition, which is regarding timing.

Here's the same example as before:

It's possible to perform this gesture in two ways: one which spans more than 1 second and one which spans less than 300ms. Only the latter here constitutes a swipe; not the former.

That's because a swipe is taken to be a touch gesture that happens quickly, not something that happens in units of seconds.

So how to determine the time taken by the gesture in JavaScript?

Well, the answer lies in working with dates and times in JavaScript which happens via the Date interface.

It's covered in the last unit of this course where we showcase all the miscellaneous concepts of JavaScript. The chapter doesn't require any of the following units, hence you could go there right now and learn about the interface if you haven't already, before continuing on reading below.

Coming back to the discussion, we can use the Date() constructor to time the touch gesture and then check whether the time taken is less than the given amount or not.

As with computing the change in the co-ordinates of the touch point, first a reading for the time is taken on touchstart and then on touchend. The difference between these represents the time span of the gesture, in milliseconds.
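The millisecond arithmetic can be sketched in isolation. Subtracting one Date object from another coerces both to their numeric timestamps; the fixed timestamps below are made up purely for illustration:

```javascript
// Two readings, as if taken on touchstart and touchend respectively.
var initialTime = new Date(2024, 0, 1, 12, 0, 0, 0);
var endTime = new Date(2024, 0, 1, 12, 0, 0, 250);

// Subtraction coerces both Dates to milliseconds since the epoch.
var deltaTime = endTime - initialTime;

console.log(deltaTime); // 250
```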

If the time span is less than 300ms, the second condition for a swipe-left is met.

Using 300 here isn't a fixed value proposed by some kind of a standard. It's just a good approximation of how fast a gesture should be in order to count as a swipe. If you want to, you could scale this value up to allow slower gestures to be treated as swipes, or maybe even scale it down to require the action to be performed even faster.
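Putting both conditions together, the whole check could be factored into a pure helper. The name isSwipeLeft and the exact threshold values (30px, 100px, 300ms) are our choices, mirroring the discussion above rather than any standard:

```javascript
// deltaX: change in x (negative means leftwards movement)
// deltaY: change in y
// deltaTimeMs: duration of the gesture in milliseconds
function isSwipeLeft(deltaX, deltaY, deltaTimeMs) {
   return deltaX <= -30 &&            // moved at least 30px to the left
          Math.abs(deltaY) <= 100 &&  // limited vertical deflection
          deltaTimeMs <= 300;         // quick enough to count as a swipe
}

console.log(isSwipeLeft(-80, 10, 150)); // true
console.log(isSwipeLeft(-80, 10, 900)); // false (too slow)
console.log(isSwipeLeft(-10, 5, 150));  // false (barely moved left)
```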

With the logic for both the conditions of a swipe understood, it's time to start coding.

One thing to note before we start coding is that both touches and changedTouches can hold multiple elements. However, we want to focus on only one single touch point in our simple detector. Hence, we'll directly access the first element of both these lists and use that element in our computations.

Alright, everything's set by now and so it's time to write the program.

For the HTML and CSS, we're using the same setup as before, i.e. with the #touch-region and #output elements.

Shown below is the JavaScript:

var touchRegionElement = document.getElementById('touch-region');
var outputElement = document.getElementById('output');

var initialX, initialY, initialTime;

touchRegionElement.addEventListener('touchstart', function(e) {
   initialX = e.touches[0].clientX;
   initialY = e.touches[0].clientY;
   initialTime = new Date();
});

touchRegionElement.addEventListener('touchend', function(e) {
   var deltaX = e.changedTouches[0].clientX - initialX;
   var deltaY = Math.abs(e.changedTouches[0].clientY - initialY);
   var deltaTime = new Date() - initialTime;

   if (deltaX <= -30 && deltaY <= 100 && deltaTime <= 300) {
      outputElement.innerText = 'Swipe-left detected'; = 'green';
   }
   else {
      outputElement.innerText = 'Not a swipe-left'; = 'red';
   }
});

The idea is that when a swipe-left gesture is detected, the message 'Swipe-left detected' is output on the document in green to signal success. However, if this is not the case, then the message 'Not a swipe-left' is output in red.

Let's try it out:

Live Example

It just works flawlessly!