



2. Incident management choreography: This process stems from [11, p.18]. This

process has nine tasks, six gateways and four conforming traces. We generated

120 not conforming traces. We implemented it with and without (i) a payment

option and (ii) data manipulation in a mediator.

3. Insurance claim handling: This process is taken from the industrial prototype

Regorous². Choreographies tend to result in a simplified view of a collaborative process, as can be seen when comparing Figs. 1 and 3. To test the

conformance checking feature with a more complex process, we added a third

use case which was originally not a choreography. This process has 13 tasks,

eight gateways and nine conforming traces. We generated 17 correct and 262

not conforming traces.

4.3 Identification of Not Conforming Traces



For this part of the evaluation, we investigate if our implementation accurately

identifies the not conforming traces that have been generated for each of the

models. The results are shown in Table 2. All log traces were correctly classified.

This was our expectation: any other outcome would have pointed at severe issues

with our approach or implementation.

Table 2. Process use case characteristics and conformance checking results

Process                                   Tasks  Gateways  Trace type           Traces  Correctness
Supply chain process of Fig. 3             10       2      Conforming                5  100 %
                                                           Not conforming           57  100 %
Incident management                         9       6      Conforming                4  100 %
                                                           Not conforming          120  100 %
Incident management with payment            9       6      Conforming                4  100 %
                                                           Not conforming           19  100 %
Incident mgmt. with data transformation     9       6      Calculation              10  100 %
                                                           String manipulation      10  100 %
Insurance claim                            13       8      Conforming               17  100 %
                                                           Not conforming          262  100 %
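To make the check itself concrete, the following minimal Python sketch replays a log trace against a process model encoded as a finite-state automaton. It is only an illustration with hypothetical task and state names, not the Solidity conformance logic generated by the translator.

```python
# Minimal sketch (not the generated Solidity) of replaying a log trace against
# a process model encoded as a finite-state automaton.
from typing import Dict, List, Set, Tuple

Automaton = Dict[Tuple[str, str], str]   # (state, task) -> next state

def is_conforming(trace: List[str], transitions: Automaton,
                  start: str, accepting: Set[str]) -> bool:
    state = start
    for task in trace:
        if (state, task) not in transitions:
            return False                  # task not enabled: not conforming
        state = transitions[(state, task)]
    return state in accepting             # trace must end in a final state

# Toy model: A, then either B or C, then D.
model = {("s0", "A"): "s1", ("s1", "B"): "s2",
         ("s1", "C"): "s2", ("s2", "D"): "s3"}
print(is_conforming(["A", "C", "D"], model, "s0", {"s3"}))   # True
print(is_conforming(["A", "D"], model, "s0", {"s3"}))        # False
```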



4.4 Analysis of Cost and Latency



In this part of the evaluation, we investigate the cost and latency of involving the

blockchain in the process execution, since these are the non-functional properties

that are most different from solutions currently used in practice.



² http://www.regorous.com/. A subset of the authors is involved in this project.






Cost. In our experiments on the private blockchain, we executed a total of 7923

transactions, at zero cost. On the public Ethereum blockchain, we ran 32 process

instances with a total of 256 transactions. The deployment of the factory contract cost 0.032 Ether, and each run of the Incident Management process, with

automatic payments and data transformations, cost on average 0.0347 Ether, or

approx. US$ 0.40 at the time of writing. The data (transactions and contract

effects) of the experiment on the public blockchain is publicly viewable from the

factory contract’s address, e.g. via Etherscan³.

Latency. We measure latency as the time taken from when the trigger receives

an API call until it sends the response with conformance outcome, transaction

hash, block number, etc. A test script iterates over the events in a trace and

synchronously calls the trigger for each event. Therefore, the test script sends

the next request very soon after receiving a response. This distorts the latency

measurement to a degree, since the trigger adds the next transaction to the

transaction pool just after the previous block has been mined, and it needs

to wait there until mining for the block after the current one is started. Our

measurements should thus be regarded as an upper bound, rather than the

typical case. A more detailed explanation is given in the technical report [23].
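The measurement loop itself can be sketched as follows; the endpoint URL and the event payload format are assumptions and not the actual trigger API.

```python
# Hedged sketch of the latency measurement: iterate over the events of a trace
# and call the trigger's API synchronously, timing each round trip.
import time
import requests

TRIGGER_URL = "http://localhost:8080/event"   # hypothetical trigger endpoint

def measure_trace_latency(trace):
    latencies = []
    for event in trace:
        start = time.monotonic()
        # The trigger turns the call into a blockchain transaction and replies
        # with the conformance outcome, transaction hash, block number, etc.
        resp = requests.post(TRIGGER_URL, json=event, timeout=300)
        resp.raise_for_status()
        latencies.append(time.monotonic() - start)
    return latencies
```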

An overview of the latency measurements is shown in Fig. 5⁴. The time it takes for a block to be mined stems from the complexity of the mining task, which

is deliberately designed to be computationally hard. On the public Ethereum

blockchain, the target median time between blocks at the time of writing is set to

around 13 s, with the actual time measured at 14.4 s. On our private blockchain,

we can control the complexity mechanism to reduce mining time (shown as Private fast in Fig. 5) or leave the default implementation in place (Private uncontrolled). As can be seen, the variance is high. On the public Ethereum blockchain,

the median latency was 23.0 s. In our private fast setting we achieved a median

latency of 2.8 s, which should be sufficient for many practical deployments. For

any application, this tradeoff needs to be considered: public blockchains offer

much higher trustworthiness in return for higher cost and latency.

4.5 Discussion



Conflict Resolution. Following up on the conflict example from Sect. 2.1, we

discuss how conflict resolution can be implemented in our approach. Recall that

there was disagreement about the amount of supplies ordered. The blockchain

inherently provides an immutable audit trail, thus it is trivial to review the

original order and waybill messages – the culprit can be identified through such

inspection. Say, the Supplier was at fault, but the Manufacturer paid crypto-coins

³ https://etherscan.io/address/0x09890f52cdd5d0743c7d13abe481e705a2706384.
⁴ Note that, instead of the typical error bars with min and max in box plots, we here show the 1st and the 99th percentile, to reduce the effect of the worst outliers. For Private uncontrolled, the max was 183 s – almost twice as much as the 99th percentile.






Fig. 5. Latency in seconds, using private blockchain with/without speed modification,

and public Ethereum blockchain (box plot)



into escrow – how does it get its money back? The conditions for reimbursement

from escrow need to be specified in the smart contract, but then they can be

invoked at a later time. For instance, the participants may agree upfront that

the Manufacturer gets reimbursed only if the Middleman agrees to that; then

the Middleman sends a transaction to that effect, and the Manufacturer’s money

is transferred back to its account.
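The agreed reimbursement rule can be illustrated with a toy example. The following plain-Python sketch is only a conceptual stand-in for the corresponding Solidity escrow logic; participant names and the amount are hypothetical.

```python
# Toy illustration (plain Python, not the generated Solidity) of a
# reimbursement rule agreed upfront: the escrowed amount is returned to the
# Manufacturer only if the Middleman approves.
class Escrow:
    def __init__(self, payer: str, arbiter: str, amount: int):
        self.payer, self.arbiter, self.amount = payer, arbiter, amount
        self.refunded = False

    def approve_refund(self, sender: str):
        # Only the agreed arbiter may trigger the reimbursement, and only once.
        if sender != self.arbiter or self.refunded:
            raise PermissionError("refund not authorized")
        self.refunded = True
        return self.payer, self.amount    # funds flow back to the payer

escrow = Escrow(payer="Manufacturer", arbiter="Middleman", amount=10)
print(escrow.approve_refund("Middleman"))   # ('Manufacturer', 10)
```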

Trust. Blockchain provides a trustworthy environment, without requiring trust

in any single entity. In contrast, in the traditional model participants who do

not trust each other need to agree on a third party which is trusted by all.

Blockchain can replace this trusted third party. This is of particular interest in

cases of coopetition. If multiple parties come together to achieve a joint business

goal, but some of the organizations are in coopetition, it is important that the

entity which executes the joint business process is neutral. Say, Org1, Org2, and

Org3 are in coopetition, but want to have a joint process to achieve some business

goal. However, Org1 would not accept Org2 or Org3 to control the process, and

neither of those would accept Org1. With our approach, the blockchain takes this role, enabling trustless collaboration since it is not controlled by a single entity. Our translator allows the deployment of business processes on a blockchain network

without the need to manually implement the corresponding smart contract. Trust

in the deployed bytecode for a process is established as follows: each participant

has access to the process model, translates it to Solidity with our translator, and

uses an agreed-upon Solidity compiler. This results in the same bytecode, and

each participant can verify that the deployed bytecode has not been manipulated.

Finally, the trigger allows for seamless integration into service-based message






exchanges. However, each trigger is a fully trusted party, and by default we

assume each organization hosts its own trigger.
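The bytecode check can be sketched as follows, assuming web3.py and py-solc-x as tooling (any equivalent Ethereum client library and compiler wrapper would do); the compiler version, node URL, and contract name are placeholders.

```python
# Hedged sketch: each participant compiles the translator's Solidity output
# with the agreed-upon compiler and compares it to the code deployed on-chain.
import solcx
from web3 import Web3

def verify_deployed_bytecode(node_url: str, contract_address: str,
                             solidity_source: str, contract_name: str,
                             solc_version: str = "0.8.21") -> bool:
    solcx.install_solc(solc_version)
    solcx.set_solc_version(solc_version)
    compiled = solcx.compile_source(solidity_source,
                                    output_values=["bin-runtime"])
    expected = compiled[f"<stdin>:{contract_name}"]["bin-runtime"].lower()

    w3 = Web3(Web3.HTTPProvider(node_url))
    deployed = w3.eth.get_code(Web3.to_checksum_address(contract_address))
    deployed = deployed.hex().removeprefix("0x").lower()

    # In practice the metadata hash solc appends to the runtime bytecode may
    # differ between builds and would need to be stripped before comparing.
    return deployed == expected
```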

Privacy. Public blockchains do not guarantee any data privacy: anyone can

join a public blockchain network without permission, and information on the

blockchain is public. Thus, for scenarios like collaborative process execution, a

permissioned blockchain may be more appropriate: joining it requires explicit

permission. Even with permission management, the information on blockchain

is still available to all the participants of the blockchain network. While we

propose a method to encrypt the data payload of messages, the process status

information is publicly available. As such, if Org1 ’s competitor, Org4, knows

which account address belongs to which participant, it can infer with whom

Org1 is doing business and how frequently. This can be mitigated by creating a

new account address for each process instance: the space of addresses is huge, and

account creation trivial. However, this method prevents building a reputation,

at least on the blockchain.

Off-Chain Data Store. For large data payloads, we propose to store only metadata with a URI on-chain, and to keep the actual payload off-chain – accessible

with the URI. Due to size limits for data storage on current blockchains [14] and

associated costs, this solution can be highly advantageous. There are existing

solutions that provide a data layer on top of blockchains, such as Factom [14].

Distributed data storage, like IPFS, DHT (Distributed Hash Table), or AWS

S3, can also be used in combination with the blockchain to build decentralized

applications.
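A minimal sketch of this pattern follows: only a URI and a content hash are recorded on-chain, and the payload retrieved from the off-chain store can be re-hashed for verification. The storage backend and the on-chain write itself are omitted, and the URI is hypothetical.

```python
# Minimal sketch of the off-chain storage pattern: metadata on-chain,
# payload off-chain, verifiable via its content hash.
import hashlib
import json

def onchain_metadata(payload: bytes, uri: str) -> str:
    """Compact record that would be stored on-chain instead of the payload."""
    return json.dumps({"uri": uri, "sha256": hashlib.sha256(payload).hexdigest()})

def verify_payload(payload: bytes, metadata: str) -> bool:
    """Re-hash the retrieved payload and compare it with the on-chain record."""
    return hashlib.sha256(payload).hexdigest() == json.loads(metadata)["sha256"]

doc = b"large waybill document ..."
meta = onchain_metadata(doc, "https://example.org/waybills/42")  # hypothetical URI
print(verify_payload(doc, meta))   # True
```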

Threats to Validity. There are several limitations to our study. To start, we

made some assumptions when implementing our evaluation scenario, which bear

threats to validity. First, we considered a supply chain scenario in which seconds

of latency are typically not an issue. We expect that scenarios in other industries, such as automatic financial trading, would have stronger requirements in

terms of latency, which could limit the applicability of our technique. Second,

we worked with a network of limited size. A global network might have stronger

requirements in terms of minimal block-to-block latency to ensure correct replication. These threats emphasize the need to conduct further application studies in different settings. Furthermore, there are open questions regarding technology acceptance, including management perception and legal issues of using

blockchain technology.



5 Conclusion



Collaborative process execution is problematic if the participants involved lack trust in each other. In this paper, we propose the use of blockchain and its

smart contracts to circumvent the traditional need for a centralized trusted party






in a collaborative process execution. First, we devise a translator to translate

process specifications into smart contracts that can be executed on a blockchain.

Second, we utilize the computational infrastructure of blockchain to coordinate business processes. Third, to connect the smart contracts on blockchain

with the external world, we propose and implement the concept of triggers. A trigger converts API calls to blockchain transactions directed at a smart contract,

and receives status updates from the contract that it converts to API calls.

Triggers can thus act as a bridge between the blockchain and an organization’s private process implementations. We ran a large number of experiments

to demonstrate the feasibility of this approach, using a private as well as a public blockchain. While latency is low on a private, customized blockchain, the

latency on the public blockchain may be considered too high for fast-paced scenarios. Additional benefits of our approach include the option to build escrow

and automated payments into the process, and that the blockchain transactions

from process executions form an immutable audit trail.

Acknowledgments. We thank Chao Li for integrating the trigger prototype with

POD-Viz and recording the screencast video.



References

1. Carminati, B., Ferrari, E., Tran, N.H.: Secure web service composition with

untrusted broker. In: IEEE ICWS, pp. 137–144. IEEE (2014)

2. Decker, G., Weske, M.: Interaction-centric modeling of process choreographies. Inf.

Syst. 36(2), 292–312 (2011)

3. Fdhila, W., Rinderle-Ma, S., Knuplesch, D., Reichert, M.: Change and compliance

in collaborative processes. In: IEEE SCC, pp. 162–169 (2015)

4. Flynn, B.B., Huo, B., Zhao, X.: The impact of supply chain integration on performance: a contingency and configuration approach. J. Oper. Manag. 28(1), 58–71

(2010)

5. Kemme, B., Alonso, G.: Database replication: a tale of research across communities.

Proc. VLDB Endow. 3(1–2), 5–12 (2010)

6. Li, G., Muthusamy, V., Jacobsen, H.A.: A distributed service-oriented architecture

for business process execution. ACM TWEB 4(1), 2 (2010)

7. Mendling, J., Hafner, M.: From WS-CDL choreography to BPEL process orchestration. J. Enterp. Inf. Manag. 21(5), 525–542 (2008)

8. Mont, M.C., Tomasi, L.: A distributed service, adaptive to trust assessment, based

on peer-to-peer e-records replication and storage. In: IEEE FTDCS (2001)

9. Nakamoto, S.: Bitcoin: a peer-to-peer electronic cash system. https://bitcoin.org/

bitcoin.pdf. Accessed 19 July 2015

10. Narayanan, S., Jayaraman, V., Luo, Y., Swaminathan, J.M.: The antecedents of

process integration in business process outsourcing and its effect on firm performance. J. Oper. Manag. 29(1), 3–16 (2011)

11. Object Management Group, June 2010. BPMN 2.0 by Example. www.omg.org/

spec/BPMN/20100601/10-06-02.pdf. Version 1.0. Accessed 10 Mar 2016

12. Omohundro, S.: Cryptocurrencies, smart contracts, and artificial intelligence. AI

Matters 1(2), 19–21 (2014)






13. Panayides, P.M., Lun, Y.V.: The impact of trust on innovativeness and supply

chain performance. J. Prod. Econ. 122(1), 35–46 (2009)

14. Snow, P., Deery, B., Lu, J., Johnston, D., Kirby, P.: Business processes secured by

immutable audit trails on the blockchain (2014)

15. Squicciarini, A., Paci, F., Bertino, E.: Trust establishment in the formation of

virtual organizations. In: ICDE Workshops, IEEE Computer Society (2008)

16. Subramanian, S., Thiran, P., Narendra, N., Mostéfaoui, G., Maamar, Z.: On the

enhancement of BPEL engines for self-healing composite web services. In: Proceedings of SAINT Symposium, pp. 33–39 (2008)

17. Tschorsch, F., Scheuermann, B.: Bitcoin and beyond: a technical survey on decentralized digital currencies. IACR Cryptology ePrint Archive, 2015, 464 (2015)

18. van der Aalst, W., ter Hofstede, A.H.M., Kiepuszewski, B., Barros, A.P.: Workflow

patterns. Distrib. Parallel Databases 14(1), 5–51 (2003)

19. van der Aalst, W.M.P., Dumas, M., Ouyang, C., Rozinat, A., Verbeek, E.: Conformance checking of service behavior. ACM Trans. Internet Technol. 8(3) (2008)

20. van der Aalst, W.M.P., Weske, M.: The P2P approach to interorganizational workflows. In: Dittrich, K.R., Geppert, A., Norrie, M. (eds.) CAiSE 2001. LNCS, vol.

2068, pp. 140–159. Springer, Heidelberg (2001)

21. Viriyasitavat, W., Martin, A.: In the relation of workflow and trust characteristics,

and requirements in service workflows. In: Abd Manaf, A., Zeki, A., Zamani, M.,

Chuprat, S., El-Qawasmeh, E. (eds.) ICIEIS 2011, Part I. CCIS, vol. 251, pp.

492–506. Springer, Heidelberg (2011)

22. Weber, I., Haller, J., Mülle, J.: Automated derivation of executable business processes from choreographies in virtual organizations. Int. J. Bus. Process Integr.

Manag. (IJBPIM) 3(2), 85–95 (2008)

23. Weber, I., Xu, X., Riveret, R., Governatori, G., Ponomarev, A., Mendling, J.:

Using blockchain to enable untrusted business process monitoring and execution.

Technical report UNSW-CSE-TR-09, University of New South Wales (2016)

24. Zeng, L., Benatallah, B., Ngu, A., Dumas, M., Kalagnanam, J., Chang, H.: QoS-aware middleware for web services composition. IEEE TSE 30(5), 311–327 (2004)

25. Muehlen, M., Recker, J.: How much language is enough? Theoretical and practical

use of the business process modeling notation. In: Bellahsène, Z., Léonard, M.

(eds.) CAiSE 2008. LNCS, vol. 5074, pp. 465–479. Springer, Heidelberg (2008)



Classification and Formalization

of Instance-Spanning Constraints

in Process-Driven Applications

Walid Fdhila(B) , Manuel Gall, Stefanie Rinderle-Ma, Juergen Mangler,

and Conrad Indiono

Faculty of Computer Science, University of Vienna, Vienna, Austria

{walid.fdhila,manuel.gall,stefanie.rinderle-Ma,

juergen.mangler,conrad.indiono}@univie.ac.at



Abstract. In process-driven applications, typically, instances share

human, computer, and physical resources and hence cannot be executed independently of each other. This necessitates the definition, verification, and enforcement of restrictions and conditions across multiple

instances by so called instance-spanning constraints (ISC). ISC might

refer to instances of one or several process types or variants. While realworld applications from, e.g., the logistics, manufacturing, and energy

domain crave for the support of ISC, only partial solutions can be found.

This work provides a systematic ISC classification and formalization that

enables the verification of ISC during design and runtime. Based on a

collection of 114 ISC from different domains and sources the relevance

and feasibility of the presented concepts is shown.



Keywords: Instance-spanning constraints · Process-Aware Information Systems · Compliance

1 Introduction



Checking and enforcing constraints such as regulations or security policies is

the key concern of business process compliance [29]. Enterprises have to invest

significantly in compliance projects; large companies, for example, spend $4.6 million on the management of internal controls alone [31]. BPM research has provided several solutions for compliance at design time, e.g., [6], and at runtime (cf. survey in [15]). Despite these large efforts, an important type of constraint has not received sufficient attention: Instance-Spanning Constraints (ISC). ISC

are constraints that refer to more than one instance of one or several process

types. Logistics is a domain where ISC play a crucial role for the bundling or

rebundling of cargo over several transport processes [4]. Other domains craving for ISC support are health care [7] and security [33]. Specifically, in highly

adaptive process-driven applications where processes dynamically evolve during

runtime [10], ISC provide the means for ensuring a certain level of control.

© Springer International Publishing Switzerland 2016
M. La Rosa et al. (Eds.): BPM 2016, LNCS 9850, pp. 348–364, 2016.
DOI: 10.1007/978-3-319-45348-4_20






ISC support is scattered over a few approaches [7,13,17,18,27,33], but a

comprehensive support for ISC formalization, verification, and enforcement is

missing. Here, the property comprehensive refers to the context of ISC such as

multiple instances or processes, the expressiveness, e.g., ISC referring to data

or time, and the process life cycle phase the ISC is referring to. For a sufficient

understanding of these requirements, a systematic classification of ISC is needed.

An ISC formalization can then be chosen based on the ISC classification and

additional requirements such as complexity of the verification. The following

research questions address these needs:

1. How to systematically classify ISC?

2. How to formalize ISC based on ISC classification?

3. Do ISC classification and formalization meet real-world ISC requirements?

Questions 1–3 will be tackled following the milestones set out in Fig. 1. First, objectives that must be met by an ISC classification (Question 1) and formalization (Question 2) are harvested from the literature. The ISC classification will be created as a new artifact. The ISC formalization choice (Question 2) is

based on an analysis of existing languages. Based on an ISC collection of 114

examples from practice, literature, and experience, relevance and feasibility of

the ISC classification are evaluated (Question 3 ). Moreover, the ISC formalization will be validated by formalizing and implementing representatives along the

provided ISC classification (Question 3 ). In summary, this work provides an ISC

classification and formalization as well as an evaluation based on an extensive

meta study on ISC examples (cf. [26] for a complete description and all 114 ISC

examples).



Fig. 1. Milestones following the research methodology in [22]



Section 2 provides ISC objectives and the ISC classification. Section 3 discusses alternatives for formalization languages. In Sect. 4, relevance and feasibility of the ISC classification are evaluated. ISC representatives are formalized and

implemented in Sect. 5. Section 6 discusses related approaches and Sect. 7 closes

with a summary.






2 ISC Classification



Following the milestones set out in Fig. 1, a collection of objectives on the ISC

classification and formalization is harvested from literature. ISC have a strong

runtime focus [33] and can thus be regarded as related to compliance monitoring in business processes. In [15], objectives on compliance monitoring have been

selected and evaluated as Compliance Monitoring Functionalities (CMF). The

CMFs are grouped along modeling, execution, and user requirements. For the

ISC classification, the focus is currently on modeling and execution requirements. User requirements will play an important role later on when investigating

feedback options and handling of ISC violations and conflicts. According to [15],

modeling and execution requirements are CMF 1: Constraints referring to time,

CMF 2: Constraints referring to data, CMF 3: Constraints referring to resources,

CMF 4: Supporting non-atomic activities, CMF 5: Supporting activity life cycles,

CMF 6: Supporting multiple instances constraints.

Although CMF 6 suggests the use of CMFs for ISC, the CMF framework does

not deal with ISC, but rather with multiple activity instantiations. Hence, we

complement the elicitation of objectives by including requirements stated in literature on ISC, i.e., [7,13,17,18,27,33]. These works partly confirm CMF 1–CMF

6 and extend it by the context of a constraint [13,17,18], i.e., whether it refers

to a single/multiple processes and/or single/multiple instances. An example for

an ISC spanning multiple instances of a single process is a security constraint

restricting the loan sum granted by one employee over all her customers [33]. An

example for an ISC spanning single instances of multiple processes is imposing

an order between two activities of different treatment processes [7].

Concluding, we state as objectives for ISC classification and formalization:

Objective 1: coverage and support of CMF 1–CMF 3 (modeling)

Objective 2: coverage and support of CMF 4–CMF 6 (execution)

Objective 3: coverage and support of context single/multiple instances for single/multiple processes

Objective 4: support during design/runtime

Regarding Objective 4: ISC might not only become effective during runtime,

but also during design time, e.g., imposing restrictions on different process variants and their instances that can be checked during design time, such as static

information about roles in a process-spanning separation-of-duty scenario. Thus,

support of ISC during design time is added to the objectives.

Figure 2 depicts the proposed ISC classification designed along Objectives 1–4. Objective 1 suggests a classification along the modeling requirements time,

data and resource. Here, the classification of an ISC into several requirements is

conceivable. The ISC “A user is not allowed to do t2 if the total loan amount per day exceeds $1M” [33], for example, can be classified as time and data. For a selective

classification, ISC should not fit into multiple categories, but be assigned to

exactly one category. For this reason, the modeling requirements are grouped

into single and multiple requirements. Multiple modeling requirements describe

ISC for which more than one modeling requirement is existing such as in the






example above. An ISC is classified as single modeling requirement if none or

one modeling requirement is present. Objective 2 is not considered for the ISC

classification. In turn, the underlying CMFs are relevant for the formalization

and for the interplay with a process execution engine which manages task states

and multiple instances of a task.



Fig. 2. ISC classification according to objectives.



Objective 3 requires to extend the classification by the spanning property

of constraints, e.g., imposing a restriction that must hold across several process

instances. In the iUPC logical description [13,17,18,27], for example, the spanning part is described as context. ISC can span over processes and/or instances.

An ISC is considered single spanning if the constraint spans over processes or

instances and multi spanning when the constraint spans across both.

ISC can be enforced during design and run time (Objective 4). The proposed ISC classification considers both, but due to the strong runtime focus of

ISC, design time forms a single group while runtime is divided into the four classes provided by modeling requirements and context. A more extensive

discussion on design and runtime support of ISC is provided in Sect. 3.1.
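For illustration, classifying a single ISC along these two dimensions can be sketched as follows; the data structure and field names are illustrative and not part of the classification artifact itself.

```python
# Illustrative sketch of the grouping in Fig. 2: single vs. multiple modeling
# requirements (time/data/resource) and single vs. multi spanning context.
from dataclasses import dataclass, field
from typing import Set, Tuple

@dataclass
class ISC:
    description: str
    requirements: Set[str] = field(default_factory=set)  # subset of {"time", "data", "resource"}
    spans_instances: bool = False
    spans_processes: bool = False

def classify(isc: ISC) -> Tuple[str, str]:
    req = "multiple" if len(isc.requirements) > 1 else "single"
    span = "multi" if (isc.spans_instances and isc.spans_processes) else "single"
    return req, span

loan_limit = ISC("user may not do t2 if the total loan amount per day exceeds $1M",
                 requirements={"time", "data"}, spans_instances=True)
print(classify(loan_limit))   # ('multiple', 'single')
```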



3 Analysis of Existing Formalisms for ISC Support



In Sect. 2, we have identified four objectives essential for the classification and formalization of ISC. In the following, we use these four objectives to evaluate a

list of existing formalisms and compare them to ISC requirements.

3.1 ISC Support During Design and Runtime



We start with a discussion of ISC requirements on verification at design time

and runtime (cf. Objective 4).

Design time checking aims at verifying the process model compliability

with respect to the defined ISC, detecting and resolving conflicts between multiple ISC, and checking the reachable states of the instances with respect to the

defined ISC. This might imply generating and combining possible traces to be

checked against the ISC. One of the techniques used at design time is model

checking. This technique suffers from the well-known problem of state explosion and

is not well suited for checking constraints that refer to runtime data.






Runtime checking becomes necessary as soon as ISC refer to execution

data, time, or resources. Moreover, at runtime it is possible to deviate from the

original process model, and therefore a monitoring approach to check possible

violations becomes essential. In contrast to design time checking, the process

models are not used in the monitoring of constraints (unless for conformance

checking), but the runtime events instead. At runtime, we differentiate between

two checking possibilities: (i) using partial traces, where events are analyzed

against the constraints when they arrive, and (ii) post checking, i.e., using complete traces, which assumes that the analyzed instances have completed. ISC span

multiple instances. Hence, the fact that an instance or a set of instances satisfies an ISC at the time of their completion does not necessarily ensure that this ISC will not be violated by the execution of future instances, i.e., combined with the completed ones. Consequently, it becomes crucial for ISC monitoring to correctly define the window for analyzing the instances against the constraints.
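As an illustration, the following sketch checks the loan-limit ISC from Sect. 2 against a stream of partial-trace events spanning multiple instances, using a per-user, per-day window; the event format is an assumption.

```python
# Hedged sketch of runtime checking over partial traces: events from many
# instances arrive as they occur and are checked against the ISC "a user is
# not allowed to do t2 if the total loan amount per day exceeds $1M".
from collections import defaultdict
from datetime import datetime

LIMIT = 1_000_000
granted = defaultdict(float)        # (user, day) -> amount granted so far

def on_event(event: dict) -> bool:
    """Return False if executing this event would violate the ISC."""
    if event["task"] != "t2":
        return True
    key = (event["user"], event["timestamp"].date())
    if granted[key] + event["amount"] > LIMIT:
        return False                # violation across instances in this window
    granted[key] += event["amount"]
    return True

print(on_event({"task": "t2", "user": "u1", "amount": 600_000,
                "timestamp": datetime(2016, 9, 1, 10)}))    # True
print(on_event({"task": "t2", "user": "u1", "amount": 500_000,
                "timestamp": datetime(2016, 9, 1, 14)}))    # False
```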

3.2 Analysis of Formal Languages



In this section, we have analyzed the commonly used formalisms in the areas of

business process compliance and concurrent systems as follows.

Event-B is a specification language that describes how the system is allowed

to evolve. In particular, it specifies the properties that the system must fulfill [1].

Event-B is mainly used for distributed systems, using artifacts, i.e., blueprints, to reason about the behavior and the constraints of the future system. The main advantage of Event-B is that it allows different levels of abstraction through stepwise refinement. Event-B is based on events, expresses the constraints between them, and supports modality, i.e., time operators (CMF 1). In the context of

business processes, Event-B has been used for verifying cloud resource allocation and consumption [3] (CMF 2–3).

TLA+ is a syntactic extension of TLA (Temporal Logic of Actions), a specification language for describing and reasoning about asynchronous, nondeterministic concurrent systems [9]. TLA+ combines temporal logic with a logic of actions, is suited for reasoning about protocols, and can be used to specify safety

and liveness properties. Similarly to Event-B, TLA+ allows different levels of

abstraction through refinement.

Both TLA+ and Event-B can be appropriate for specifying and checking

ISC at design time. In particular, structural parts of ISC might be checked before

runtime to detect inconsistencies or incorrect specifications. Both formalisms

are very expressive, support time, data and resources (Objective 1), and can

ensure properties such as liveness, fairness or safety at design time. However,

this does not prevent deviations from the specified model at run time. To our

knowledge, TLA+ and Event-B are meant to be used for specifying correct and

compliant models, but not for monitoring the system properties at run-time; i.e., they do not satisfy Objective 4. Both languages are used for distributed and

concurrent systems and can support Objective 3.

LTL (Linear Temporal Logic) is a formal language, introduced by Pnueli

[24], referring to the temporal modality (CMF 1), and used for reactive and


