The dataset is a supplement to the following paper:
V. Phoha and Z. Wang, "Which Verifiers Work?: A Benchmark Evaluation
of Touch-based Authentication Algorithms," in IEEE BTAS, Washington,
DC, Sept. 29 - Oct. 2, 2013.
(For ready reference, the abstract of the paper follows the data description.)
Touch strokes captured with the phone held in landscape orientation are in separate files from those captured with the phone held in portrait orientation. The files "LandscapeSession1.xlsx" and "LandscapeSession2.xlsx" contain the raw data from the landscape strokes during Sessions 1 and 2 respectively. The corresponding portrait-mode files are "PortraitSession1.xlsx" and "PortraitSession2.xlsx".
Each file has 7 fields, which are described in detail below:
UserID --- a unique identifier for each user.
SwipeID --- For each UserID, the SwipeID uniquely identifies each touch stroke.
X and Y --- the X and Y coordinates of each touch point on the screen.
Pressure --- the pressure exerted by the finger on the screen at the point having coordinates (X,Y).
Area --- the area occluded between the finger and the screen at the point having coordinates (X,Y).
EventTime --- the time at which the finger touches the point having coordinates (X,Y).
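To illustrate the schema, the sketch below groups rows into per-stroke point sequences keyed by (UserID, SwipeID). The sample values are invented for illustration; only the field names and their meanings come from the list above.

```python
from collections import defaultdict

# Each row carries the seven fields described above:
# (UserID, SwipeID, X, Y, Pressure, Area, EventTime)
# The sample values below are invented, not taken from the dataset.
rows = [
    (1, 1, 120.0, 340.5, 0.42, 0.15, 1000),
    (1, 1, 135.2, 338.1, 0.45, 0.16, 1016),
    (1, 2, 300.0, 200.0, 0.50, 0.18, 5000),
    (2, 1, 410.7, 610.2, 0.38, 0.14, 2000),
]

def group_strokes(rows):
    """Group touch points into strokes keyed by (UserID, SwipeID),
    sorted by EventTime within each stroke."""
    strokes = defaultdict(list)
    for user_id, swipe_id, x, y, pressure, area, t in rows:
        strokes[(user_id, swipe_id)].append((t, x, y, pressure, area))
    for points in strokes.values():
        points.sort()  # chronological order within a stroke
    return dict(strokes)

strokes = group_strokes(rows)
print(len(strokes))          # number of distinct strokes -> 3
print(len(strokes[(1, 1)]))  # touch points in user 1's first stroke -> 2
```

Note that SwipeID is only unique within a UserID, which is why the grouping key must combine both fields.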
In the paper we only studied the performance of users who had at least 80 strokes of a given category after outlier filtering. (We use the term 'category' to refer to any of: portrait-horizontal strokes, portrait-vertical strokes, landscape-horizontal strokes, and landscape-vertical strokes.) Our method of distinguishing between vertical and horizontal strokes is described in Section 3 of the paper.
The dataset posted here contains all users' data, irrespective of whether they completed 80 strokes. Data was collected using Google Nexus S phones.
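The filtering described above can be sketched as follows. The orientation rule here is a simple net-displacement heuristic assumed for illustration; it is not necessarily the method from Section 3 of the paper. The 80-stroke threshold mirrors the per-category eligibility criterion stated above.

```python
def stroke_orientation(points):
    """Classify a stroke as 'horizontal' or 'vertical' by comparing the
    net displacement along each axis between the first and last touch
    points. This is an assumed heuristic for illustration only.
    `points` is a chronologically ordered list of (X, Y) coordinates."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    return "horizontal" if abs(x1 - x0) >= abs(y1 - y0) else "vertical"

def eligible_users(stroke_counts, min_strokes=80):
    """Keep users with at least `min_strokes` strokes in one category,
    mirroring the paper's per-category threshold.
    `stroke_counts` maps UserID -> stroke count for that category."""
    return {user for user, n in stroke_counts.items() if n >= min_strokes}

# A mostly left-to-right swipe classifies as horizontal.
print(stroke_orientation([(100, 200), (400, 230)]))  # horizontal
# Users 1 and 3 meet the 80-stroke threshold; user 2 does not.
print(eligible_users({1: 95, 2: 40, 3: 80}))
```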
Abstract: Despite the tremendous need for the evaluation of touch-based authentication as an extra security layer for mobile devices, the huge disparity in the experimental methodology used by different researchers makes it hard to determine how much research in this area has progressed. Critical variables such as the types of features and how they are pre-processed, the training and testing methodology and the performance evaluation metrics, to mention but a few, vary from one study to the next. Additionally, most datasets used for these evaluations are not openly accessible, making it impossible for researchers to carry out comparative analysis on the same data. This paper takes the first steps towards bridging this gap. We evaluate the performance of ten state-of-the-art touch-based authentication classification algorithms under a common experimental protocol, and present the associated benchmark dataset for the community to use. Using a series of statistical tests, we rigorously compare the performance of the algorithms, and also evaluate how the "failure to enroll" phenomenon would impact overall system performance if users exceeding certain EERs were barred from using the system. Our results and benchmark dataset open the door to future research that will enable the community to better understand the potential of touch gestures as a biometric authentication modality.