2.1 Data Acquisition, Processing, and Segmentation




B.A. Anima et al.



(I) Mouse Move, (II) Press Left Button, (III) Release Left Button, (IV) Press Right Button, and (V) Release Right Button. X-Coordinate and Y-Coordinate are the pixel locations of the mouse pointer on the screen. Table 1 shows four sample actions recorded by the tool RUI. The raw mouse data are then processed into three upper-level mouse actions: Mouse Move, Point-and-Click on the left or right mouse button, and Drag-and-Drop.

Table 1. Example of four mouse action instances recorded by the mouse logging tool RUI.

Elapsed time (in ms)   Action          X-coordinate (in pixels)   Y-coordinate (in pixels)
0.33                   Moved           204                        492
0.338                  Moved           206                        479
0.354                  Pressed Left    206                        479
0.394                  Released Left   206                        479



Fig. 1. Direction of mouse movement divided by octants of 45° intervals.
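The octant partition of Fig. 1 can be computed directly from a movement vector. The following is a minimal sketch (the function name and the numbering of sectors counter-clockwise from 0° are our own choices; note that screen coordinates usually have the y axis pointing down, so the raw dy may need its sign flipped first):

```python
import math

def movement_octant(dx: float, dy: float) -> int:
    """Map a mouse displacement (dx, dy) to one of eight 45-degree
    direction sectors, numbered 0..7 counter-clockwise from 0 degrees."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0  # 0 <= angle < 360
    return int(angle // 45.0)
```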



In the segmentation step, the processed data are divided into blocks of different sizes based on the number of mouse actions; a block consists of a set of the aforementioned mouse actions. Block sizes of 350, 400, 450, 500, 550, and 600 actions are used, and a set of features is extracted from each block.

2.2 Feature Extraction and Normalization

In this step, features are extracted from the preprocessed dataset. The features are selected so that the system stays compact and efficient while still capturing characteristics unique to an individual.

For each action type, twenty-two features are calculated from each block. These are: the mean and standard deviation of the time (in milliseconds) to perform a specific type of action in a block; the mean and standard deviation of the travel distance (in pixels) to perform a specific type of action in a block; the number of a specific type of mouse action ($N$) in a block; the ratio between the number of mouse actions ($N$) and the total number of actions in a block ($N_B$); the proposed direction-specific mean time ($\bar{X}_{tj}^{K}$); and the proposed direction-specific mean mouse movement distance ($\bar{X}_{dj}^{K}$). Here, the direction of the mouse movement is described by octants of 45° intervals spanning 0° to 360° [see Fig. 1] for every mouse action. Thus, there are 66 such features for the three mouse action types. The newly proposed features are described below.

User Authentication from Mouse Movement Data Using SVM Classifier

Table 2. List of features extracted from each block.

Features                                             Number of features
Mean of Time                                         3
Standard Deviation of Time                           3
Mean of Travel Distance                              3
Standard Deviation of Travel Distance                3
Number of Mouse Actions                              3
Ratio of Mouse Action and Total Number of Actions    3
Direction Specific Mean Time                         24
Direction Specific Mean Mouse Movement Distance      24
Total Mouse Movement Distance in each direction      8
Total Features                                       74

The proposed direction-specific mean time to perform a specific type of action in a block ($\bar{X}_{tj}^{K}$) is the ratio between the total time to perform a type of action in direction $K$ and the total time to perform the same type of action throughout the block:

$$\bar{X}_{tj}^{K} = \frac{\sum_{j=1}^{M} X_{tj}^{K}}{\sum_{i=1}^{N} X_{ti}} \qquad (1)$$

where $X_{tj}^{K}$ is the time to perform an action for sample $j\,(1, 2, \ldots, M)$ in direction $K\,(1, 2, \ldots, 8)$, and $X_{ti}$ is the time to perform an action for sample $i\,(1, 2, \ldots, N)$.

The proposed direction-specific mean mouse movement distance to perform a specific type of action in a block ($\bar{X}_{dj}^{K}$) is the ratio between the total travel distance to perform a type of action in direction $K$ and the total travel distance to perform the same type of action throughout the block:

$$\bar{X}_{dj}^{K} = \frac{\sum_{j=1}^{M} X_{dj}^{K}}{\sum_{i=1}^{N} X_{di}} \qquad (2)$$

where $X_{dj}^{K}$ is the mouse movement distance for sample $j\,(1, 2, \ldots, M)$ in direction $K\,(1, 2, \ldots, 8)$, and $X_{di}$ is the mouse movement distance for sample $i\,(1, 2, \ldots, N)$.
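As an illustration, the ratio features of Eqs. (1) and (2) can be computed in a few lines. This is a sketch under our own data layout (a list of (direction, time, distance) tuples for one action type within a block), not the authors' implementation:

```python
def direction_specific_means(samples):
    """Compute the direction-specific mean time and mean distance ratios
    of Eqs. (1) and (2): for each direction K in 1..8, the total time
    (distance) spent in K divided by the total time (distance) over the
    whole block. Assumes the block totals are nonzero."""
    total_t = sum(t for _, t, _ in samples)
    total_d = sum(d for _, _, d in samples)
    time_ratio = {k: 0.0 for k in range(1, 9)}
    dist_ratio = {k: 0.0 for k in range(1, 9)}
    for k, t, d in samples:
        time_ratio[k] += t / total_t
        dist_ratio[k] += d / total_d
    return time_ratio, dist_ratio
```

Each of the two dictionaries sums to 1 over the eight directions, which matches the interpretation of Eqs. (1) and (2) as per-direction shares of the block total.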

Eight more features are also calculated: the total mouse movement distance in each direction, $\sum_{j=1}^{M} X_{dj}^{K}$, where $X_{dj}^{K}$ is the mouse movement distance for sample $j\,(1, 2, \ldots, M)$ in direction $K\,(1, 2, \ldots, 8)$. Therefore, the total number of features is 74, of which 48 are newly proposed features for the three mouse action types. See Table 2 for the full list of features. These features are used to construct a






feature vector for each user. The dimension of each feature vector is the number of selected features, which is 74. Before classification, the feature values are normalized so that attributes with larger numeric ranges do not overshadow those with smaller ranges and so that the training and testing data share the same scale. In this system, the data are normalized to the range zero to one using Min-Max normalization.
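A minimal sketch of such Min-Max scaling (our own illustration; the paper does not give code) computes the per-column minimum and maximum on the training vectors and reuses them for test data:

```python
def min_max_normalize(vectors):
    """Scale each feature dimension to [0, 1] via
    x' = (x - min) / (max - min), with min/max taken per column over
    the given (training) vectors; constant columns map to 0. Returns
    the scaled vectors and the scaling function for reuse on test data."""
    dims = len(vectors[0])
    lo = [min(v[i] for v in vectors) for i in range(dims)]
    hi = [max(v[i] for v in vectors) for i in range(dims)]

    def scale(v):
        return [(x - l) / (h - l) if h > l else 0.0
                for x, l, h in zip(v, lo, hi)]

    return [scale(v) for v in vectors], scale
```

Applying the training-set minima and maxima to the test vectors keeps both sets on the same scale, which is the point made in the text above.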

2.3 Training and Classification



To analyze how the classifier verifies a genuine user, the classifier is first trained with a set of randomly selected data for a selected user from the dataset. The training data contain patterns of the legitimate user together with labeled imposter patterns. The remaining portion of the dataset, treated as testing patterns, is then applied to the classifier. After testing, how well the system classifies genuine data is analyzed by examining the predicted labels.

In this system, a Support Vector Machine (SVM) [3] classifier is used for training and testing. We adopted SVM since it has been widely used in object recognition, speech recognition, biometrics, image retrieval, image regression, etc. It is a well-accepted classifier that offers good performance and sometimes outperforms other classifiers such as neural networks.

With SVM, two techniques are applied: one uses the original feature vector (74 features), and the other uses a dimensionally reduced feature vector obtained with Principal Component Analysis (PCA) [4]. PCA is a mathematical technique for finding patterns in high-dimensional data. It reduces the dimensionality of the data, so for larger datasets PCA plays an important role by reducing the dimensions and selecting a subset of components.

To implement the system with the SVM classifier, the open-source package LIBSVM [5] is used. The kernel function is the popular Gaussian Radial Basis Function (RBF), and the kernel parameters are obtained by fivefold cross-validation. The system applies SVM to the original feature space as well as to the feature space dimensionally reduced with PCA.
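The fivefold cross-validation parameter search can be sketched as follows. This is our own stdlib illustration, not the authors' code; `train_eval` is a hypothetical callback that, in a real setup, would wrap LIBSVM's `svm_train`/`svm_predict` and return the test accuracy:

```python
import itertools

def five_fold_indices(n_samples, k=5):
    """Split sample indices 0..n_samples-1 into k interleaved folds."""
    return [list(range(i, n_samples, k)) for i in range(k)]

def grid_search(train_eval, samples, c_grid, gamma_grid, k=5):
    """Pick the RBF-kernel parameters (C, gamma) maximizing the mean
    k-fold cross-validation accuracy reported by train_eval."""
    folds = five_fold_indices(len(samples), k)
    best_params, best_acc = None, -1.0
    for C, gamma in itertools.product(c_grid, gamma_grid):
        accs = []
        for i in range(k):
            test = [samples[j] for j in folds[i]]
            train = [samples[j] for f in folds[:i] + folds[i + 1:] for j in f]
            accs.append(train_eval(train, test, C, gamma))
        mean_acc = sum(accs) / k
        if mean_acc > best_acc:
            best_params, best_acc = (C, gamma), mean_acc
    return best_params, best_acc
```

Every sample appears in exactly one held-out fold, so each (C, gamma) pair is scored on the whole dataset before the best pair is retained.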



3 Experimental Results and Discussion

The proposed system is implemented on a Windows 7 machine with a 1.70 GHz Intel Core i3-4005U CPU and 4.00 GB of RAM. The remaining parts of the system, such as processing, segmentation, scaling, and classification, were performed with MATLAB R2013a.

The proposed system is tested using a public benchmark dataset [6, 7]. In this dataset, four types of actions are defined: (1) Mouse Movement (MM), normal mouse movement; (2) Silence, the time when the mouse does not move; (3) Point and Click (PC), mouse movement followed by a mouse button press and release; and (4) Drag and Drop (DD), the sequential combination of mouse movement, mouse button press, and release. Before the experiments, the data for the Silence action are removed from the benchmark dataset. Note that from these four actions, the three upper-level actions are derived as mentioned in Sect. 2.1.

Performance is measured by computing the False Acceptance Rate (FAR) and the False Rejection Rate (FRR).
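These two rates can be computed from the predicted labels as follows (an illustrative helper, not the authors' code; the labels are assumed to be the strings 'genuine' and 'imposter'):

```python
def far_frr(true_labels, predicted_labels):
    """FAR: fraction of imposter attempts wrongly accepted as genuine.
    FRR: fraction of genuine attempts wrongly rejected as imposter.
    Both are returned as percentages, as reported in Table 3."""
    pairs = list(zip(true_labels, predicted_labels))
    imposters = [p for t, p in pairs if t == 'imposter']
    genuine = [p for t, p in pairs if t == 'genuine']
    far = 100.0 * sum(1 for p in imposters if p == 'genuine') / len(imposters)
    frr = 100.0 * sum(1 for p in genuine if p == 'imposter') / len(genuine)
    return far, frr
```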

3.1 Results of Classification



Experiments are performed on blocks of different sizes (350, 400, 450, 500, 550, and 600 actions), each with 74 features derived from the public dataset. Table 3 shows that among the different block sizes, a block size of 600 actions provides the best results. For a block size of 600 actions, SVM and SVM (+PCA) show an FRR of 1.1594 % and 1.2081 %, respectively, and an FAR of 1.9053 % and 2.3604 %, respectively.

Table 3. Performance for different block sizes using SVM and SVM (+PCA).

Block size            SVM                   SVM (+PCA)
(number of actions)   FRR (%)   FAR (%)     FRR (%)   FAR (%)
350                   1.4631    2.3358      1.5291    2.6496
400                   1.3685    2.2234      1.4616    2.5512
450                   1.2917    2.2114      1.3746    2.4789
500                   1.1902    2.0379      1.3030    2.3574
550                   1.1619    2.0020      1.2327    2.3941
600                   1.1594    1.9053      1.2081    2.3604



Fig. 2. Comparison of SVM and SVM (+PCA) Classifiers based on FRR.






Fig. 3. Comparison of SVM and SVM (+PCA) Classifiers based on FAR.



After studying the performance results of the different classification techniques, it is observed that SVM with the original feature space offers the better result. The comparison based on FRR and FAR is shown in Figs. 2 and 3, respectively.

3.2 Comparison with Related Works



Our experimental results are compared with those of Ahmed et al. [7], which is considered a benchmark in the field of mouse dynamics. The features of the existing system by Ahmed et al. [7] are extracted from the public benchmark dataset and applied to the proposed system. These features are the Movement Speed compared to Travelled Distance (MSD) curve, Average Movement Speed per Movement Direction (MDA), Movement Direction Histogram (MDH), Average Movement Speed per Type of Action (ATA), Action Type Histogram (ATH), Travelled Distance Histogram (TDH), and Movement elapsed Time Histogram (MTH). Twelve points are computed through periodic sampling over the MSD curve. For TDH, values in the ranges of 0–100 pixels and 100–200 pixels are used. For MTH, values within the ranges of 0.0–0.5 s, 0.5–1.0 s, and 1.0–1.5 s are collected. In total, the number of features is 39.

For a block size of 600 actions, SVM and SVM (+PCA) with the existing feature set of [7] give an FRR of 1.6001 % and 1.7851 %, respectively, which is higher than the FRRs achieved by our proposed system on the same data and block size. Likewise, they give an FAR of 2.9798 % and 2.9042 %, respectively, which is also higher than ours. This clearly indicates the merit of our newly proposed features.

Several other studies have shown impressive results in recent times. Below we mention the notable works and compare their outcomes with ours.






(1) Ahmed et al. [7] report an FRR of 2.4614 % and an FAR of 2.4649 %. To reach this performance, 2000 actions are required, where the actions include point and click, drag and drop, mouse move, and silence.
(2) Nakkabi et al. [6] show an FRR of 0.36 % and an FAR of 0 % for the same number of mouse actions. However, the number of mouse actions is large, and requiring users to play a tile game is not always practical.
(3) Pusara and Brodley [8] offered a web-based re-authentication system using a decision tree as the classifier. It shows good results, with a false negative rate of 1.75 % and a false positive rate of 0.43 %. However, it involved only eleven users.
(4) Muthumari et al. [9] reported an FRR of 6.25 % and an FAR of 7.25 % using the Learning Vector Quantization (LVQ) method.
(5) In their other work [10], the Kernel Principal Component Analysis (KPCA) method is used to reduce the dimension of the feature vector, and a one-class support vector machine is used as the classifier, yielding an FRR of 8.25 % and an FAR of 8.98 %.
(6) In the method of Lakshmipriya et al. [11], holistic and procedural features are used and the Nearest Neighbor algorithm is applied. It offers an FRR of 7.70 % and an FAR of 8.75 %.
(7) In the method of Rahman et al. [12], a similarity score based on the statistical normal distribution is used. They found an equal error rate (EER) of 6.7 %.

Compared with the above existing methods, our method shows significantly lower error rates while processing even fewer actions (at most 600). The works that show lower error rates than ours suffer either from an inadequate population size (as in [8]) or from impracticality due to a restricted testing environment (as in [6]).



4 Conclusion

In this system, three types of mouse actions are obtained: Mouse Move, Point-and-Click on the left or right mouse button, and Drag-and-Drop. The processed data are divided into blocks, where a block is a set of a specific number of mouse actions. Seventy-four features are extracted from each block to form a feature vector, of which forty-eight are new features. For each type of mouse action, the features are calculated from the mean and standard deviation of the travel distance, the mean and standard deviation of the elapsed time to perform an action, the number of mouse actions, the proposed direction-specific mean time of an action, and the direction-specific mean travel distance. The direction of the mouse movement is described by octants of 45° intervals. Using these features, a person's mouse movement distance and total time to perform an action are described with eight values, one per direction, instead of one. The data of the feature vector are normalized to the range zero to one. After normalization, the feature vector is applied to the classifiers. Support Vector Machine (SVM) with the original feature space and SVM with the feature space dimensionally reduced by Principal Component Analysis (PCA) are used in the system. To test the system, a public benchmark dataset is used. Performance is measured and analyzed for six different block sizes. After






the experiments, it is observed that the system performs best with a block size of 600. The results show that with the original feature space, SVM offers an FRR of 1.1594 % and an FAR of 1.9053 %; with the feature space dimensionally reduced by PCA, SVM offers an FRR of 1.2081 % and an FAR of 2.3604 %.

This system did not consider some actions due to the inadequacy of the benchmark dataset. In the future, more types of actions, such as Double Click and Mouse Wheel, will be considered, and a larger dataset is expected to be gathered and tested against our system. With these promising initial results, we believe this system could be combined with other conventional authentication systems to build a multi-modal authentication system.

Acknowledgements. This work was supported by the Ministry of Posts, Telecommunications and Information Technology Fellowship, given by the Information and Communication Technology Division of the Ministry of Posts, Telecommunications and Information Technology, Government of the People's Republic of Bangladesh.



References

1. Jain, A.K., Pankanti, S.: Biometric identification. Commun. ACM 43, 91–98 (2000)
2. Kukreja, U., Stevenson, W.E., Ritter, F.E.: RUI: recording user input from interfaces under Windows and Mac OS X. Behav. Res. Methods 38(4), 656–659 (2006)
3. Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, New York (2000)
4. Jolliffe, I.: Principal Component Analysis. Springer, New York (1986)
5. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. 2, 1–27 (2011). Article No. 27
6. Nakkabi, Y., Traoré, I., Ahmed, A.A.E.: Improving mouse dynamics biometric performance using variance reduction via extractors with separate features. IEEE Trans. Syst. Man Cybern. Part A 40(6), 1345–1353 (2010)
7. Ahmed, A.A.E., Traoré, I.: A new biometric technology based on mouse dynamics. IEEE Trans. Dependable Secure Comput. 4(3), 165–179 (2007)
8. Pusara, M., Brodley, C.E.: User re-authentication via mouse movements. In: ACM Workshop on Visualization and Data Mining for Computer Security. ACM Press (2004)
9. Muthumari, R.S., Pepsi, M.B.B.: Mouse gesture based authentication using machine learning algorithm. In: International Conference on Advanced Communication Control and Computing Technologies (2014)
10. Muthumari, G., Shenbagaraj, R., Pepsi, M.B.B.: Authentication of user based on mouse-behavior data using classification. In: IEEE International Conference on Innovations in Engineering and Technology (ICIET) (2014)
11. Lakshmipriya, D., Balakrishnan, J.R.: Holistic and procedural features for authenticating users 16, 98–101 (2014)
12. Rahman, K.A., Moormann, R., Dierich, D., Hossain, M.: Continuous user verification via mouse activities. In: Dziech, A., et al. (eds.) MCSS 2015. CCIS, vol. 566, pp. 170–181. Springer, Heidelberg (2015). doi:10.1007/978-3-319-26404-2_14



Distance Bounding Based on PUF

Mathilde Igier and Serge Vaudenay(B)

EPFL, 1015 Lausanne, Switzerland

serge.vaudenay@epfl.ch

http://lasec.epfl.ch



Abstract. Distance Bounding (DB) is designed to mitigate relay attacks. This paper provides a complete study of the DB protocol of Kleber et al. based on Physical Unclonable Functions (PUFs). We contradict the claim that it resists Terrorist Fraud (TF). We propose slight modifications to increase the security of the protocol and formally prove TF-resistance, as well as resistance to Distance Fraud (DF) and Man-In-the-Middle attacks (MiM), which include relay attacks.



1 Introduction



Wireless devices are subject to relay attacks. This is problematic because these devices are the basis for authentication in many domains such as payment with credit cards, building access control, or biometric passports [15,16]. To ensure the security of wireless devices against relay attacks, Brands and Chaum [8] introduced the notion of Distance Bounding (DB) protocols in 1993. The idea is that a prover P must prove that he is close to a verifier V. Several attack models exist in which the verifier is made to accept although the prover is too far away. The attacks described in the literature are: 1. Distance Fraud attacks (DF) [8]: a far-away prover P tries to make V accept; no participant is close to V. 2. Mafia Fraud attacks (MF) [10]: a malicious actor A who does not hold the secret tries to make V accept using an honest but far-away prover P. 3. Terrorist Fraud (TF) [10]: a malicious actor A who does not hold the secret tries to make V accept by colluding with a malicious far-away prover P who holds the secret.

Avoine et al. [1] proposed the complete but rather informal ABKLM model. Dürholz et al. [11] provided a formal model to prove the security of the protocols. However, this model is too strong, as admitted by the authors [12], and it is difficult to prove TF security in this model. Another model was proposed by Boureanu et al. [4].

Most of the proposed protocols are vulnerable to TF attacks, but a few protocols provide security against all types of threats: the protocol of Fischlin and Onete [13], the SKI protocol [5,6], the DBopt protocols [7], the public-key DB protocols ProProx [26] and eProProx [25], and the anonymous DB protocol SPADE [9]. However, all these proofs are made under the assumption that in TF the prover does not want to give his credential to the adversary for further use. This assumption is weak and does not correspond to reality. None of the DB protocols in the plain model can provide TF security without this assumption, so

© Springer International Publishing AG 2016
S. Foresti and G. Persiano (Eds.): CANS 2016, LNCS 10052, pp. 701–710, 2016.
DOI: 10.1007/978-3-319-48965-0_48






we should consider alternate models. DF and TF security are easier to provide using tamper-resistant hardware on the prover side, because the prover cannot access his secret. Kılınç and Vaudenay [19] provide a new model for distance bounding protocols with secure hardware. In this model, the game consists of several verifier instances including a distinguished one V, hardware with their instances, and instances of provers and actors. There is one distinguished hardware h with instances far away from V. The winning condition of this game is that V accepts.

– The DB protocol is DF-secure if the winning probability is negligible whenever there is no instance close to V.
– The DB protocol is MiM-secure if the winning probability is negligible whenever an honest prover is holding h (i.e. it can only be accessed by an honest and far-away prover).
– The DB protocol is TF-secure if the winning probability is negligible.

PUFs are tamper-resistant hardware used in counterfeiting detection [22,23] and authentication protocols [3,14]. A PUF is a physical component which maps a challenge to a response. By definition, a PUF as described in [21] has the following properties: it is non-clonable and non-emulable, a response Ri gives negligible information on a response Rj with Ri ≠ Rj, and a PUF cannot be distinguished from a random oracle (as discussed in [2]). For simplicity, we treat PUFs as random oracles with access limited to their holder. The aim of our work is to provide a provably secure DB protocol using a PUF. A TF-secure DB protocol based on a PUF was proposed in [18]. Nevertheless, that protocol assumes that provers implement their protocol while using a PUF. In the model of Kleber et al. [20], the prover can implement any malicious protocol while accessing the PUF, and the protocol in [18] is trivially TF-insecure in this stronger model.¹ Kleber et al. design a protocol in [20] which is claimed to be secure in their model. However, we contradict that fact in this paper and propose modifications to improve its security.

Our contributions in this paper are as follows: 1. We show that the protocol proposed by Kleber et al. [20] is not secure against Terrorist Fraud, which contradicts the claims of its authors; 2. We provide slight modifications of this protocol, which we call pufDB, to improve its security; 3. We provide proofs of security for the pufDB protocol against Distance Fraud and Mafia Fraud; 4. We prove the security of the pufDB protocol against Terrorist Fraud when the prover is limited in the number of bits per round he can send. The security strengthens as the distance from the prover to the verifier increases. To the best of our knowledge, pufDB is the first protocol which provides TF security even when the prover is allowed to leak his secret.

Due to limited space, proofs of our results are deferred to the full version of this paper [17]. The full version includes the analysis of two other threat models, impersonation fraud and distance hijacking, and describes some attacks that lower-bound the necessary number of rounds for security.

¹ In this protocol, the PUF is not used during the fast phase, so the malicious prover can give whatever is needed to complete the protocol to a close-by adversary.



2 The Kleber et al. Protocol

2.1 Details of the Protocol



The verifier is called V and the prover P. The main idea of the protocol proposed by Kleber et al. [20] is to replace the PRF in P of conventional Distance Bounding protocols by a PUF. In this protocol, it is possible to use both a challenge-response PUF and a public PUF.² The protocol is made of two distinct phases: the preparation phase and the time-critical phase.

Prior to the protocol, it is assumed that V can query the PUF and store a number of challenge-response pairs (CRPs) such that, at round i, $r_i = \mathrm{PUF}(C_i)$. A CRP is defined as $(C_i, r_i)$, $0 \le i < n$, with n the number of rounds. There is always a set of CRPs corresponding to PC to complete the run. A set of CRPs shall not be used in more than one protocol run.

In the time-critical phase, only one bit can be sent from V to P in a round. However, the PUF needs a large challenge space to be secure. Therefore, V transmits a pre-challenge PC to P during the preparation phase. Then, in the time-critical phase, the pre-challenge is combined with the challenges $c_i$ received by P to generate a challenge $C_i = PC_0 \ldots PC_{n-2-i} \| c_0 c_1 \ldots c_i$ for the PUF. It is assumed that the hardware is such that the PUF can precompute $C_i$, and when the prover receives the last bit of $C_i$ he can return the response $r_i$ in almost no time. The time-critical phase consists of n transmission rounds. The verifier V starts the clock when he sends a challenge $c_i$ and stops the clock when he receives the response $r_i$. In the paper, $T_{max}$ and $E_{max}$ are defined: $T_{max}$ is the maximal number of responses which may arrive too late, and $E_{max}$ is the maximal number of errors admitted in the responses. (A late response is not checked.)

We note that if one $c_i$ is incorrectly received by P, then all subsequent PUF computations will produce random outputs, independent of the expected $r_i$. So, this protocol is not tolerant to reception errors by P.
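The per-round challenge assembly can be sketched as follows. The PUF itself is replaced by a keyed hash acting as the random oracle the paper assumes; the function names and the SHA-256 stand-in are ours:

```python
import hashlib

def round_challenge(pc_bits, c_bits, i):
    """Build the round-i PUF input C_i = PC_0..PC_{n-2-i} || c_0..c_i.
    pc_bits is the (n-1)-bit pre-challenge string, c_bits holds the
    challenge bits received so far. Every C_i is exactly n bits long."""
    n = len(pc_bits) + 1
    return pc_bits[: n - 1 - i] + c_bits[: i + 1]

def puf_response(challenge_bits, device_secret=b"device-A"):
    """Stand-in for the PUF: a keyed hash of the challenge, reduced to
    one response bit. A real PUF derives this from physical variation
    and cannot be copied or emulated."""
    digest = hashlib.sha256(device_secret + challenge_bits.encode()).digest()
    return digest[0] & 1
```

Note how each new challenge bit displaces one pre-challenge bit, so the PUF input stays n bits wide while depending on all challenges received so far.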

The protocol is claimed by Kleber et al. [20] to be provably secure against all types of fraud. They prove the security of their protocol in the model of Dürholz et al. [11], but they only give a proof of security against Terrorist Fraud attacks. In fact, in the model defined by Kılınç et al. [19], when the protocol uses hardware, a proof that the protocol is secure against Terrorist Fraud attacks yields a proof of security against all the other types of attacks. However, when there is no additional restriction in the protocol, this protocol is insecure against Terrorist Fraud attacks, as we show in Sect. 2.2. To prove the security against Terrorist Fraud, Kleber et al. assume that the probability for the adversary to win the game is equal to $\left(\frac{1}{2}\right)^{n - E_{max} - T_{max}}$. We contradict this assumption.

² Normally, a PUF is non-emulable, so the verifier should first borrow the PUF to get input-output pairs. To avoid this, one can use a public PUF, also called a SIMPL system (SIMulation Possible but Laborious). SIMPL systems guarantee that the response to a challenge cannot be computed faster with a simulator of the PUF than with the real PUF: anyone can compute the right response, but it takes much more time with the simulator.



2.2 A Terrorist Fraud Attack



Notations. $d_{VP}$ is the distance between V and the far-away prover P, and $t_{VP}$ is the signal propagation time between V and P (it is assumed that $d_{VP}/t_{VP}$ is a constant such as the speed of light). Similarly, $d_{AP}$ is the distance between A and the far-away prover P, and $t_{AP}$ is the signal propagation time between A and P. B is the maximal distance allowed by the protocol, and $t_B$ is the maximal signal propagation time over the distance B. Finally, T is the time between sending two consecutive challenges $c_i$ and $c_{i+1}$.

In this scenario, a malicious far-away prover colludes with an adversary close to the verifier. In the protocol of Kleber et al., the adversary receives PC from the verifier and can send it to the malicious prover who holds the PUF. There is no constraint on the distance $d_{AP}$ between P and A, nor on the time T between rounds. A forwards every message from V to P. To answer a challenge $c_i$ on time, P is missing m bits. He computes $2^m$ PUF values and sends them to A so that A will always be able to respond on time. If $t_m$ denotes the time it takes for P to compute the $2^m$ values and to transmit them to A (without time of flight), the attack works if

$$t_{AP} + t_{VA} \le t_B + \frac{mT - t_m}{2} \qquad (1)$$



As an example, with m = 1, P has two PUF values to compute and to send, and the condition is $t_{AP} + t_{VA} \le t_B + \frac{T - t_1}{2}$. Since there is no constraint on $d_{AP}$, $d_{VA}$, and T, we can have $d_{AP} = B$, $d_{VA} = B$, and $T \ge t_1 + 2t_B$; in that configuration, Eq. (1) holds. Then A can pass the round in this configuration, and he can pass all rounds with high probability, so the protocol is not secure against Terrorist Fraud.

More concretely, we assume m = 1, B = 3 m, and $t_B$ = 10 ns. We consider V running at 1 GHz with one clock cycle between rounds, so T = 1 µs. We consider a faster malicious prover P running at 10 GHz, so that he can evaluate the two possible challenges for m = 1 with the PUF in $t_m$ = 200 ns. With $d_{VA} = B$, the attack succeeds for $t_{AP}$ = 400 ns, i.e. $d_{VP}$ = 120 m. The attack is possible because there is a large amount of time between the reception of $r_i$ and the emission of $c_{i+1}$, and these figures clearly show it is a quite realistic scenario.
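Inequality (1) and the numbers above are easy to check mechanically; this is a small sketch of ours with all times in nanoseconds:

```python
def tf_attack_feasible(t_AP, t_VA, t_B, T, t_m, m):
    """Check inequality (1): the terrorist-fraud relay succeeds when
    t_AP + t_VA <= t_B + (m*T - t_m) / 2. All times in nanoseconds."""
    return t_AP + t_VA <= t_B + (m * T - t_m) / 2

# Scenario from the text: m = 1, t_B = 10 ns (so B = 3 m at roughly
# 0.3 m/ns), T = 1 us, t_m = 200 ns, adversary at distance B
# (t_VA = 10 ns), prover at t_AP = 400 ns (about 120 m away).
print(tf_attack_feasible(t_AP=400, t_VA=10, t_B=10, T=1000, t_m=200, m=1))  # prints True
```

The scenario sits exactly on the boundary (410 ns on each side of the inequality), so any larger T or faster prover gives the attacker slack.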

2.3 Slight Modifications of the Protocol



We slightly modify the protocol of Kleber et al. [20] to improve its security. We call the new protocol pufDB; it is presented in Fig. 1. First, we impose a regular rhythm for sending the challenges; second, the (n − 1) bits of PC are sent with the same rhythm as the challenges of the time-critical phase, but expecting no answer. The prover begins to send responses when he receives the first bit of challenge $c_0$. With this slight change, we make sure that there is no more time left for attacks between the transmission of PC and $c_0$ than there is between the transmissions of consecutive $c_i$, and this time is bounded.
