2 Case 2: Disclosing Location at Most 3 Times per Day



R. Pardo et al.

controls the policy automaton of each individual. When the message is received, the automaton of the user specified by uid will be updated. This update will

increase the value of the automaton variable posts, whose initial value is 0. After

sending the message, Diaspora* waits for the answer of the automaton, in case

an update of the privacy policies of the user is required. While posts is at most 3, there is no need to update the privacy policies, so the message do-nothing is sent back. Once posts exceeds 3, the

automaton will move to the state where the policy forbidding the disclosure of locations must be activated; thus, it will send the message disable-posting to

Diaspora*. Note that it is not required to specify the user id in the reply since

Diaspora* initiated the communication.

As for the event midnight, Diaspora* sends the message midnight to the

monitors of all users every day at 23:59. If the monitors are in the state where

the disclosure of location is forbidden, they take the transition to the initial state.

This transition involves, firstly, resetting the variable posts to 0, and secondly,

sending the message uid;enable-posting;location back to Diaspora*, which

removes the privacy policy that prevents the location of user uid from being disclosed.

If the automaton is already in the initial state, it simply resets posts to 0.
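The protocol above can be sketched in C. This is a minimal, hypothetical rendering (state, struct, and function names are ours; the reply strings are those exchanged with Diaspora* in the text), with one automaton instance per user:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the Case-2 policy automaton; one instance per user. */
enum state { POSTING_ALLOWED, POSTING_FORBIDDEN };

struct policy_automaton {
    enum state st;
    int posts;   /* location disclosures so far today; initially 0 */
};

/* Handle a location-disclosure notification for this user; the reply
 * omits the user id because Diaspora* initiated the exchange. */
const char *on_post(struct policy_automaton *a) {
    a->posts++;
    if (a->st == POSTING_ALLOWED && a->posts > 3) {
        a->st = POSTING_FORBIDDEN;
        return "disable-posting";   /* activate the forbidding policy */
    }
    return "do-nothing";            /* no policy update needed */
}

/* Handle the daily 23:59 midnight message.  In the real system the
 * enable reply is prefixed with the user id:
 * uid;enable-posting;location. */
const char *on_midnight(struct policy_automaton *a) {
    a->posts = 0;                   /* reset in either state */
    if (a->st == POSTING_FORBIDDEN) {
        a->st = POSTING_ALLOWED;
        return "enable-posting;location";  /* lift the restriction */
    }
    return "do-nothing";
}
```

The fourth disclosure of a day flips the automaton to the forbidding state; midnight resets the counter and, if needed, reverts the state.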


Related Work

The lack of a temporal dimension in privacy policies was already pointed out by

Riesner et al. [19]. In their survey, they show that there is no OSN that supports

policies that automatically change over time. The authors mention that Facebook

allows users to apply a default audience to all their own old posts, but there is a

big gap between that privacy policy and the family of evolving policies that we

introduce in this paper.

Specifying and reasoning about temporal properties in multi-agent systems using epistemic logic has been a subject of study for a long time. It began with the so-called interpreted systems (IS). In [7], Fagin et al. introduce IS as

a model to interpret epistemic formulae with temporal operators such as box

and diamond. IS have been used for security analyses of multi-agent systems.

Though we do consider a temporal aspect, the focus and objectives of our work differ from those of interpreted systems, at least regarding the application domain and the scope of the approach. In our case, the policies

themselves are the ones evolving based on events, rather than the information

on what is known to different agents at a given time.

Recent research has been carried out in extending IS to be able to reason

about past or future knowledge. In [2] Ben-Zvi and Moses extend K_i with a timestamp, K_{i,t}, making it possible to express properties such as “Alice knows at time 5 that Bob knew p at time 3”, i.e., K_{Alice,5} K_{Bob,3} p. In the same spirit, but including real time, Woźna and Lomuscio present TCTLKD [22], a

combination of epistemic logic, CTL, a deontic modality and real time. In these,

and other related work, the intention is to be able to model the time differences

in the knowledge acquired by different agents due to delay in communication channels.

Evolving Privacy Policies for Social Networks

Although both our motivation and our application domain differ from those of the aforementioned logics, it is worth mentioning that they could indeed be useful to express certain real-time policies not currently supported in

our formalism.

Despite the richness of both timed epistemic logics, TCTLKD [22] and the epistemic logic with timestamps [2], neither would be able to express recurrent policies as we do. We are, of course, adding a separate layer beyond the power

of the logical formalism by using automata to precisely express when to switch

from one policy to another. It remains an interesting question what the expressivity of policy automata would be if PPF were enhanced with timed extensions, as done in some of the above works, in order to express richer (static) policies.

We have not defined here a theory of privacy policies (we have not given a

formal definition in terms of traces or predicates), nor have we developed a formal

theory of enforcement of privacy policies. To the best of our knowledge such a

characterisation does not exist for privacy policies. There is, however, work done

in the context of security policies, for instance the work by Le Guernic et al. on

using automata to monitor and enforce non-interference [9,11] or by Schneider on

security automata [20]. It could be instructive to further develop the theoretical foundations of policy automata and relate them to security automata and their successors (e.g., edit automata [13]).



Conclusions
We have presented a novel technique to define and implement evolving privacy

policies (i.e., recurrent policies that are (de)activated depending on events) for

OSNs. We have defined policy automata as a formalism to express such policies. Moreover, we have introduced the notions of parallel composition, subsumption and conflict between policy automata, and we have proved some of

their properties. We have defined a translation from policy automata to DATEs

which enables their implementation by means of the tool Larva. Furthermore,

we have described how to connect Larva monitors to the OSN Diaspora* so that

policy automata can effectively be implemented. In fact, the presented approach

would allow policy automata to be plugged into any OSN with a built-in enforcement

of static privacy policies. Finally, as a proof-of-concept, we have implemented a

prototype of two evolving privacy policies.

The policy automata approach has some limitations. For instance, consider

that Alice enables the following policy “Only my friends can see my pictures

during the weekend”. Imagine that Alice and Bob are not friends. If Alice shares

a picture on Saturday, Bob will not have access to it. However, on Monday this

policy would be deactivated. What would be the effect of turning off this policy?

It might be possible that Bob gains access to all the pictures that Alice posted

during the weekend, since no restrictions are specified outside the scope of the

weekend. In order to address this problem we might need a policy language able

to express real-time aspects, with an element of access memory integrated within

policy automata.



We are currently also extending policy automata with timing events such

as timeouts. This extension will be almost immediately implementable using

Larva since DATEs already support timeouts in their transitions. Another line

of work is to extend policy automata with location events. Users normally access

OSNs through mobile devices. These devices could directly report the location of

users to their policy automata, which avoids having to constantly report users’

location to the OSN.

Acknowledgements. This research has been supported by: the Swedish funding agency SSF under the grant Data Driven Secure Business Intelligence; the Swedish Research Council (Vetenskapsrådet) under grant Nr. 2015-04154 (PolUser: Rich User-Controlled Privacy Policies); the European ICT COST Action IC1402 (Runtime Verification beyond Monitoring (ARVI)); and the University of Malta Research Fund.


References

1. Alexa-ranking. http://www.alexa.com/topsites. Accessed 11 May 2016
2. Ben-Zvi, I., Moses, Y.: Agent-time epistemics and coordination. In: Lodaya, K. (ed.) Logic and Its Applications. LNCS, vol. 7750, pp. 97–108. Springer, Heidelberg
3. Harvard student loses Facebook internship after pointing out privacy flaws. http://www.boston.com/news/nation/2015/08/12/harvard-student-loses-facebook-internship-after-pointing-out-privacy-flaws/. Accessed 11 May 2016
4. Colombo, C., Pace, G.J., Schneider, G.: Dynamic event-based runtime monitoring of real-time and contextual properties. In: Cofer, D., Fantechi, A. (eds.) FMICS 2008. LNCS, vol. 5596, pp. 135–149. Springer, Heidelberg (2009)
5. Colombo, C., Pace, G.J., Schneider, G.: LARVA - a tool for runtime monitoring of Java programs. In: 7th IEEE International Conference on Software Engineering and Formal Methods (SEFM 2009), pp. 33–37. IEEE Computer Society (2009)
6. Diaspora*. https://diasporafoundation.org/. Accessed 11 May 2016
7. Fagin, R., Halpern, J.Y., Moses, Y., Vardi, M.Y.: Reasoning about Knowledge, vol. 4. MIT Press, Cambridge (2003)
8. Diaspora*. Test pod: https://ppf-diaspora.raulpardo.org, Code: https://github.com/raulpardo/ppf-diaspora (2016)
9. Guernic, G.L.: Automaton-based confidentiality monitoring of concurrent programs. In: 20th IEEE Computer Security Foundations Symposium (CSF 2007), pp. 218–232 (2007)
10. Johnson, M., Egelman, S., Bellovin, S.M.: Facebook and privacy: it's complicated. In: Proceedings of the Eighth Symposium on Usable Privacy and Security, SOUPS 2012, pp. 9:1–9:15. ACM, New York (2012)
11. Guernic, G., Banerjee, A., Jensen, T., Schmidt, D.A.: Automata-based confidentiality monitoring. In: Okada, M., Satoh, I. (eds.) ASIAN 2006. LNCS, vol. 4435, pp. 75–89. Springer, Heidelberg (2007). doi:10.1007/978-3-540-77505-8_7
12. Lenhart, A., Purcell, K., Smith, A., Zickuhr, K.: Social media & mobile internet use among teens and young adults. Pew Internet & American Life Project (2010)
13. Ligatti, J., Bauer, L., Walker, D.: Edit automata: enforcement mechanisms for run-time security policies. Int. J. Inf. Secur. 4, 2–16 (2005)
14. Liu, Y., Gummadi, K.P., Krishnamurthy, B., Mislove, A.: Analyzing Facebook privacy settings: user expectations vs. reality. In: Proceedings of the 2011 ACM SIGCOMM Conference on Internet Measurement Conference, IMC 2011, pp. 61–70. ACM (2011)
15. Madejski, M., Johnson, M., Bellovin, S.: A study of privacy settings errors in an online social network. In: IEEE International Conference on Pervasive Computing and Communication Workshops (PERCOM Workshops 2012), pp. 340–345 (2012)
16. Madejski, M., Johnson, M.L., Bellovin, S.M.: The failure of online social network privacy settings. Columbia University Computer Science Technical Reports (2011)
17. Pardo, R.: Formalising privacy policies for social networks. Licentiate thesis, Department of Computer Science and Engineering, Chalmers University of Technology, p. 102 (2015)
18. Pardo, R., Schneider, G.: A formal privacy policy framework for social networks. In: Giannakopoulou, D., Salaün, G. (eds.) SEFM 2014. LNCS, vol. 8702, pp. 378–392. Springer, Heidelberg (2014)
19. Riesner, M., Netter, M., Pernul, G.: An analysis of implemented and desirable settings for identity management on social networking sites. In: 2012 Seventh International Conference on Availability, Reliability and Security (ARES), pp. 103–112, August 2012
20. Schneider, F.B.: Enforceable security policies. ACM Trans. Inf. Syst. Secur. 3(1), 30–50 (2000)
21. Weitzner, D.J., Abelson, H., Berners-Lee, T., Feigenbaum, J., Hendler, J.A., Sussman, G.J.: Information accountability. Commun. ACM 51(6), 82–87 (2008)
22. Woźna, B., Lomuscio, A.: A logic for knowledge, correctness, and real time. In: Leite, J., Torroni, P. (eds.) CLIMA 2004. LNCS (LNAI), vol. 3487, pp. 1–15. Springer, Heidelberg (2005). doi:10.1007/11533092_1

TrackOS: A Security-Aware Real-Time Operating System

Lee Pike¹, Pat Hickey², Trevor Elliott¹, Eric Mertens¹, and Aaron Tomb¹

¹ Galois, Inc., Portland, USA
² Helium, Portland, USA


Abstract. We describe an approach to control-flow integrity protection for real-time systems. We present TrackOS, a security-aware real-time operating system. TrackOS checks a task's control stack against a statically-generated call graph, produced by an abstract-interpretation-based tool that requires no source code. The monitoring is done from a dedicated task, the schedule of which is controlled by the real-time operating system scheduler. Finally, we implement a version of software-based attestation (SWATT) to ensure program-data integrity, strengthening our control-flow integrity checks. We demonstrate the feasibility of our approach by monitoring an open source autopilot in flight.



Introduction
Cyber-physical systems are becoming more pervasive and autonomous without

an associated increase in security. For example, recent work demonstrates how

easy it is to gain access to and subvert the software of a modern automobile [4]. In

this paper, we focus on software integrity attacks aimed at modifying a program’s

control flow. Traditional methods for launching software integrity attacks include

code injection and return-to-libc attacks.

Control-flow attacks are well known, and protections like canaries [5,10] and

address-space layout randomization [21] have been developed to thwart them.

However, for each of these protections, researchers have shown ways to circumvent them, using techniques such as return-oriented programming [4].

More recently proposed protections based on control-flow integrity (CFI), originally developed by Abadi et al. [1], are more difficult to circumvent. CFI implements run-time checks

to ensure that a program respects its statically-built control-flow graph. If the

control stack is invalid, then some other program is being executed; modulo false

positives, it is a program resulting from a malicious attack.

Consequently, the CFI approach to security has been favored recently as

the way forward in protecting program integrity. For example, Checkoway et al.

demonstrate how to execute return-to-libc attacks without modifying return

addresses [4]. In reference to traditional kinds of defenses, the authors write:

What we show in this paper is that these defenses would not be worthwhile

even if implemented in hardware. Resources would instead be better spent

deploying a comprehensive solution, such as CFI.

© Springer International Publishing AG 2016
Y. Falcone and C. Sánchez (Eds.): RV 2016, LNCS 10012, pp. 302–317, 2016.
DOI: 10.1007/978-3-319-46982-9_19



Fig. 1. TrackOS RTOS integration

The traditional technique for implementing CFI requires program instrumentation (the instrumentation can be done at various levels of abstraction, from the source to the binary). Instrumentation is not suitable for critical hard real-time systems code for at least two reasons. First, instrumentation fundamentally changes the timing characteristics of the program. Not only can instrumentation introduce delay, but it can also introduce jitter, since CFI checks are control-flow dependent. Second, safety-critical or security-critical systems are often certified, and instrumenting application code with CFI checks may require recertification. Our approach allows real-time CFI without instrumenting application code.

The question we answer in this paper is how to provide CFI protections

for critical embedded software. Our answer is a CFI-aware real-time operating

system (RTOS) called TrackOS.

TrackOS has built-in support for performing CFI checks over its tasks (as processes are generally known on an RTOS). TrackOS tasks do not require any special instrumentation or runtime modifications to be checked. TrackOS overcomes the delay and jitter issues associated with CFI program instrumentation:

rather than instrumenting a program, CFI checks are performed by a separate

monitor task as shown in Fig. 1. This task is responsible for performing CFI

checks on other untrusted tasks. The monitor task is scheduled by the RTOS,

just like any other task. However, the task is privileged by the RTOS and is

allowed access to other tasks’ memory (this is why we show the task overlapped

with the RTOS in Fig. 1).

An insight of TrackOS is that RTOS design already addresses the problem of

real-time scheduling, and CFI monitoring in a real-time setting is just an instance

of the task scheduling problem. Furthermore, as an instance of the real-time

task scheduling problem, the user has the freedom to decide how to temporally

integrate CFI into the overall system design, given the timing constraints. For

example, a developer could decide to make CFI monitoring a high-priority task

if there is sufficient slack in the schedule or instead monitor intermittently as

the schedule allows.

Summary of Contributions

1. Static analysis: Before execution, we analyze a task’s executable to generate a call graph that is stored in non-volatile memory (program memory).

We implement a lightweight static analysis that is able to analyze a 200 KB

machine image (compiled from an approx. 10kloc autopilot) and generate a

call graph in just over 10 s on a modern laptop.



2. Control-flow integrity: At runtime, a monitor task traverses the observed

task’s control stack from the top of the stack, containing the most recent

return addresses, to the bottom of the stack. The control stack is compared

against the static call graph stored in memory. In our approach, we do not

assume frame pointers, so the analysis must parse the stack. We make optimizations to ensure checks have very low overhead. Most importantly, the

overhead is completely controllable by the user using the RTOS’s scheduler,

just like any other task.

This approach implements call-stack monitoring rather than just checking the well-formedness of function pointers, like many rootkit detection mechanisms [11,14,15]. The approach supports concurrency (i.e., multiple tasks

can be monitored simultaneously).

3. Program-data integrity: Our CFI approach is only valid as long as it is executing. An attacker that can reflash a microcontroller can simply overwrite

TrackOS and any of its tasks. Consequently, we need a check that the program

memory has not been modified. We implement a software-based attestation

framework to provide evidence to this effect. The framework is not novel

to us; we borrow the SoftWare-based ATTestation (SWATT) approach tailored to attestation in embedded systems [19]. Our full implementation therefore answers a challenge by the authors of SWATT, in which they note that

“software-based attestation was primarily designed to achieve code integrity,

but not control-flow integrity” [13]. As far as we know, this is the first integration of software-based program-data integrity attestation with control-flow

integrity; de Clercq et al. previously combine CFI and data integrity relying

on hardware support [6].
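The flavor of a SWATT-style check can be conveyed with a small C sketch. This is our simplification, not the actual SWATT design (all names are ours): the verifier sends a random, nonzero seed, the device mixes pseudorandomly-chosen program-memory bytes into an order-sensitive checksum, and the verifier checks both the returned value and the response time, so an attacker who must redirect reads around modified code pays a detectable time penalty.

```c
#include <stdint.h>
#include <stddef.h>

/* 16-bit xorshift PRG (Metcalf's 7/9/8 triple); seed must be nonzero. */
static uint16_t prg_next(uint16_t *s) {
    *s ^= (uint16_t)(*s << 7);
    *s ^= (uint16_t)(*s >> 9);
    *s ^= (uint16_t)(*s << 8);
    return *s;
}

/* Checksum over pseudorandomly-chosen locations of the program image.
 * `flash` stands in for program memory; on a real device this would
 * read flash directly. */
uint8_t swatt_checksum(const uint8_t *flash, size_t flash_len,
                       uint16_t seed, unsigned iterations) {
    uint8_t c = 0;
    for (unsigned i = 0; i < iterations; i++) {
        uint16_t addr = (uint16_t)(prg_next(&seed) % flash_len);
        c = (uint8_t)((c + flash[addr]) ^ (c >> 1));  /* order-sensitive mix */
    }
    return c;
}
```

Because the walk is seed-dependent, the expected checksum cannot be precomputed by the device; the verifier recomputes it over a known-good image.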

Assumptions and Constraints. Regarding system assumptions, while not fundamental to our approach, we assume execution on a Harvard or modified Harvard

architecture in which the program and data are stored in separate memories

(e.g., Flash and SRAM, respectively). Return-oriented programming is still feasible on a Harvard architecture [8]. We do not assume the hardware supports

virtual memory or provides read-write memory protections. We do not assume

that programs have debugging symbols. We also do not assume the existence of

frame pointers.

We assume the attacker does not have physical access to the hardware. However, she may have perfect knowledge of the software including exploitable vulnerabilities in the software, including the bootloader. She may have unlimited

network access to the controller. We assume that the microcontroller’s fuses

allow all memory, including program memory, to be written to. Furthermore,

any control-flow transfer technique is in scope for the attacker.


Static Analysis

TrackOS compares the control stack against a statically-generated call graph of

each monitored task. The call graphs are generated via a binary static analysis tool called StackApprox; no sources or debugging symbols are required. StackApprox

currently targets AVR binaries.

TrackOS: A Security-Aware Real-Time Operating System


StackApprox is similar in spirit to a tool developed by Regehr et al. [16],

although the use cases are different. In Regehr’s case, the focus is on statically

determining control-stack bounds, whereas our primary use case is to generate

representations of call graphs as C code, although StackApprox approximates

stack sizes, too. StackApprox uses standard abstract interpretation techniques

to efficiently generate a call graph; for the sake of space, we elide details about

the tool’s design and implementation.

Like in Regehr et al. [16], StackApprox analyzes direct jumps automatically

but requires the user to explicitly itemize indirect jumps. Doing so ensures that

all indirect jumps are specified and not the result of unintended or undefined

(with respect to C source semantics) behavior. Moreover, large numbers of indirect jumps are not common in hard real-time systems (we itemized 30 targets

for a 10K LOC autopilot, including interrupts).

For the purposes of CFI checking, we generate four tables or maps from the

generated call graph. Only values for functions reachable from the start address

are generated. Typically, the start address is the entry point for an RTOS task.

– Loop map: A mapping from return addresses to callers’ return addresses associated with their call-sites.

– Top map: A mapping from call-targets (usually the start of a function definition) to the set of return addresses associated with the functions’ call-sites.

– Local stack usage map: A mapping from call-targets to the maximum number

of data bytes pushed on the stack, not including callees’ stack usage.

– Contiguous region map: Pairs representing the start and stop address that

define a contiguous region.
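The four tables might be laid out as follows. This is a hypothetical sketch (field and type names are ours; the paper describes the maps only abstractly), together with the kind of lookup the checker performs over the contiguous-region table. On AVR, code addresses fit in 16 bits.

```c
#include <stdint.h>
#include <stddef.h>

typedef uint16_t addr_t;   /* assumed 16-bit AVR code address */

/* Loop map: return address -> callers' return addresses. */
struct loop_entry   { addr_t ret; const addr_t *caller_rets; size_t n; };

/* Top map: call-target -> return addresses of its call-sites. */
struct top_entry    { addr_t target; const addr_t *rets; size_t n; };

/* Local stack usage: call-target -> max data bytes pushed locally. */
struct usage_entry  { addr_t target; uint16_t max_bytes; };

/* Contiguous region: start/stop addresses of one reachable function. */
struct region_entry { addr_t start; addr_t stop; };

/* Binary search over a region table sorted by start address:
 * returns 1 iff `pc` lies inside some reachable function. */
int in_reachable_function(const struct region_entry *tab, size_t n, addr_t pc)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (pc < tab[mid].start)     hi = mid;
        else if (pc > tab[mid].stop) lo = mid + 1;
        else                         return 1;
    }
    return 0;
}
```

Emitting the tables as `const` arrays lets the linker place them in program (flash) memory, which matters on a Harvard architecture with scarce SRAM.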

Our build system calls StackApprox, which generates C sources containing the four maps, and then integrates the generated C files into the build automatically. The basis of TrackOS, FreeRTOS (see Sect. 3), like many embedded

RTOSes, statically links the operating system and its tasks. Consequently, there

is a circular-dependency problem: because the call-graph data is statically linked

into the program, it is needed to build the program, but the program binary must

be available to generate the call-graph data. Our solution is to split compilation

into two rounds. First, we generate dummy call-graph data that contains empty

structures but provide the necessary definitions for building an ELF file. This

ELF is then analyzed to extract the actual call-graph data, which is linked with

the target program to produce the final ELF file.

Note that this approach requires that the call-graph data be located after

the program it is linked with (i.e., the .text segment) to ensure the addresses

are not modified by populating the call-graph data.


TrackOS Architecture

Before describing the CFI monitoring algorithm in the following section, we highlight here the aspects of integrating the CFI checker with the RTOS, including

the definition of task control blocks, context switching, and finally, a scheduler


L. Pike et al.

Fig. 2. Stack layout for a swapped-out task. The saved context is on the top, target_stack points to the beginning of the saved control stack, and a fixed address, 0x456, marks the bottom.

addition we call restartable tasks. Our prototype of TrackOS is a derivative of FreeRTOS, an open source, commercially available RTOS written in C and available for major embedded architectures.

TrackOS Task Control Blocks. TrackOS extends FreeRTOS's task control blocks

with the following additional state:

1. Stack location: a pointer to the portion of a stack that comes after its saved

context is added to the TCB. When a task has been swapped out by the

scheduler, its control stack will first contain its saved context (i.e., its saved

registers and a pointer to its task control block). The saved context is a fixed

size. On the top of the stack is the task’s saved context; on the bottom of the

stack is a return address to the task’s initialization function. A hypothetical

task control stack is shown in Fig. 2.

2. Timing: timing variables are used to track the timing behavior of the observed

task to provide TrackOS with the duration the task has executed in its most

recent time slice. This can be used, for example, to control when stack checking is run (e.g., it might be delayed until after initialization) or even to have

time-dependent stack-checking properties (e.g., “after 500 ms of execution,

function f() should not appear on the stack”).

3. Restarting: “restarting” variables allow the CFI task to be restarted as necessary; we explain the concept in Sect. 4.2. To do this, we save a code pointer

to the CFI initialization code and its initial parameters, as well as a pointer to

a shared “restart mutex” with the observed task.
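The three extensions can be pictured as a struct. This is an illustrative sketch only: field names and types are ours, and FreeRTOS's actual task control block and the real TrackOS fields may differ.

```c
#include <stdint.h>

/* Hypothetical TrackOS additions to a FreeRTOS task control block. */
struct trackos_tcb_ext {
    /* 1. Stack location: points just past the saved context, i.e. at
     *    the first proper stack frame of the swapped-out task. */
    uint8_t *stored_stack;

    /* 2. Timing: when the task last started running and how long it
     *    ran in its most recent time slice, e.g. to delay checking
     *    until after initialization or to support time-dependent
     *    stack-checking properties. */
    uint32_t slice_start;
    uint32_t slice_duration;

    /* 3. Restarting: state needed to restart an interrupted CFI check
     *    (Sect. 4.2): the check's entry point, its initial parameter,
     *    and a mutex shared with the observed task. */
    void (*cfi_init)(void *);
    void *cfi_init_arg;
    void *restart_mutex;
};
```

Keeping these fields in the TCB means the context-switch path can update them with a handful of stores, preserving the RTOS's timing behavior.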

Context Switching. In Fig. 3 (top right), we show FreeRTOS's context switching

routine (ported to the AVR architecture), together with the extensions necessary

for TrackOS . This routine is used to swap the context of all tasks (including the

monitoring task), whether they are checked or not by the monitor task, and

it may be called from the timer interrupt during preemption or explicitly by a

task during a cooperative yield (interrupts are disabled when vPortYield() is

called). After saving a task’s context, TrackOS updates its pointer to the top of

the stack, after the saved context. Additionally, it saves the execution time of

the saved task. After scheduling a new task in the context switch (Fig. 3, top right), all that has to be done

is record the execution start time for the newly-scheduled task.



TrackOS: A Security-Aware Real-Time Operating System

// Left: CFI procedure to discover the task's yield location.
void check_stack(stack_t *target_stack) {
  current = target_stack;
  // Preemptive yield
  if (preemptive_yield_ret(current)) {
    current = preemptive_stack(current);
    stack_loop(current);
  }
  // Cooperative yield
  else if (coop_yield_ret(current)) {
    stack_loop(current);
  }
  // Cooperative yield from an ISR
  else if (search_ret_isrs(current)) {
    current = preemptive_stack(current);
    stack_loop(current);
  }
  else { error(); }
}

// Check a preemptive function
stack_t *preemptive_stack(stack_t *current) {
  ...                                    // skip the return address inside the ISR
  func = find_current_func(current);
  if (interrupt_in_main(func, current))
    ...                                  // special case: interrupted in initialization
  return find_caller_ret(func, current);
}

// Top right: context switch in TrackOS.
void vPortYield(void) {
  ...                                    // save the yielding task's context
#ifdef TRACKOS
  pxCurrentTCB->pxStoredStack = ...;     // point just past the saved context
  ...                                    // save the task's execution time
#endif
  ...                                    // schedule the next task
#ifdef TRACKOS
  ...                                    // record the new task's start time
#endif
  asm volatile ("ret");
}

// Bottom right: TrackOS CFI procedure to walk the stack.
void stack_loop(stack_t *current) {
  while (!inside_main(current)) {
    stack_t *valid_rets = lookup_valid_rets(current);
    if (NULL == valid_rets) { error(); }
    else {
      current = loop_find_next(current, valid_rets);
      if (NULL == current) { error(); }
    }
  }
  if (at_stack_end(current)) { ... }
  else { error(); }
}

Fig. 3. Left: CFI procedure to discover the task's yield location. Top right: Context switch in TrackOS. Bottom right: TrackOS CFI procedure to walk the stack.


Control-Flow Integrity

In this section, we overview the control-flow integrity algorithm implemented in

TrackOS , which is the heart of the approach. We begin by describing the basic

algorithm in Sect. 4.1, then we describe two extensions to basic real-time stack

checking in Sect. 4.2.


Basic Algorithm

The CFI algorithm described below is the heart of TrackOS . There are two main

procedures: first, we find the top return address in the stack, resulting from an

interrupt or an explicit yield by the task. Second, once a valid return address is

found, it serves as an “entry point” to the rest of the control stack. The second

procedure walks the control stack, moving from stack frame to stack frame.



We describe each procedure in turn. Pseudo-code representations of the two

procedures are in Fig. 3. For readability, we elide details from the implementation, including hooks for performing restartable checks (see Sect. 4.2), helper

functions (e.g., binary search), memory manipulations, type conversions, error

codes, special cases to deal with hardware idiosyncrasies, and other integrated

stack checks for aberrant conditions. In addition, for the sake of readability, utility functions in pseudo-code listings that are underlined are described in the text

without being defined.

In the following, we assume the maps generated by the StackApprox static

analysis tool are available to the CFI checker. We do not assume that frame

pointers are present, so the stack must be parsed by the CFI algorithm to distinguish data bytes from return addresses.

Yield Address Algorithm. While a task is in the task queue waiting to be executed, its context is saved on its control stack. The CFI checker's entry point is just after the saved context, pointed to by the target_stack variable. (The stack_t type is the size of stack elements, which are one byte in our implementation.)


The entry point to the stack checker algorithm is check stack(), shown in

Fig. 3, left. The invariant that holds after calling check stack() is that either the

check has been aborted due to an error, or the function returns a stack pointer

to the first proper stack frame on the stack (pointing to the frame’s return

address). check_stack() is executed within a critical section, ensuring that the

CFI checker, whenever it executes, always checks that the current location of

the observed task’s execution is valid.

There are three cases to consider at the entry point of the stack: a preemptive

yield, a cooperative yield, and a cooperative yield from an interrupt service

routine. These cases correspond to the three cases in the body of check_stack()

in Fig. 3, left.

Preemptive Yield. In this case, the RTOS scheduler preempts the task via a

timer interrupt. Inside the interrupt service routine (ISR), there is a call to a

function that performs a preemptive context switch; if this is a preemptive yield,

the top of the stack should contain the return address inside the ISR from that

function. (The return address is found by StackApprox at compile time.) The

function preemptive_yield_ret() performs this check.

In the case of a preemptive yield, we call preemptive_stack() (Fig. 3, left). In that function, we first increment the stack pointer: the next

value on the stack following the return address inside the timer ISR is the interrupt address for the task. The function find_current_func() takes an arbitrary

address and searches through a map containing the address ranges of reachable

functions generated by StackApprox . If a function that contains the interrupt

address cannot be found, the procedure returns an error. Assuming a reachable

function is found, interrupt_in_main() checks that the function is not the initialization function for the task. If it is the initialization function, then there are
