2 Solving a Maximum Entropy Problem Analytically - the Symmetric Case as Illustrative Example


Computational Issues in the Design of Transition Probabilities



$p_{ij} = 2^{-\ln 2 + \lambda_3 + \lambda_2 v_{ij}}$ for $j = i+1, \ldots, i+D$.   (12)



According to the definition of $b_s$ and $A_s$, constraint $g_2(p_i) = 0$ is now $\sum_{j=1}^{D} p_{ij}\, j^2 - \frac{V}{2} = 0$, and constraint $g_3(p_i) = 0$ is $\sum_{j=1}^{D} p_{ij} = \frac{1 - p_{ii}}{2}$.

To make $g_2(p_i) = 0$ hold, we now let $p_{ij}(\lambda_2) := 2^{\lambda_2 v_{ij}} \frac{V}{2\, v(\lambda_2)}$, where $v(\lambda_2) := \sum_{j=i+1}^{i+D} 2^{\lambda_2 v_{ij}}\, v_{ij} = \sum_{j=1}^{D} 2^{\lambda_2 j^2} j^2$. Then, to make $g_3(p_i) = 0$ hold as well, we let $p_{ii} := 1 - 2 \sum_{j=1}^{D} p_{ij}$. Note, this is only well defined as long as $\sum_{j=1}^{D} p_{ij} < \frac{1}{2}$ holds for a choice of $\lambda_2$.


If we define $\lambda_3 := \log_2\!\left( \frac{V\, 2^{\ln 2}}{2\, v(\lambda_2)} \right)$, then also (12') holds.

So far, the candidate probability distribution $p_i$ actually depends on the choice of $\lambda_2$, i.e. $p_i = p_i(\lambda_2)$. The final step is finding the maximum of (9), that is, maximizing the entropy $S(p_i(\lambda_2))$ with respect to $\lambda_2$. This is a straightforward, though a little cumbersome, analytical exercise. Saving the effort here, Fig. 1a presents a graphical solution for the case of D = 2. Obviously, the maximum is around $\lambda_2 = 0.9$. Figure 1b shows the resulting maximum entropy probability distribution.
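The $\lambda_2$ line search behind Fig. 1a is easy to reproduce numerically. The following pure-Python sketch evaluates $S(p_i(\lambda_2))$ on a grid for the symmetric case with D = 2; the value V = 2 is an assumption made here purely for illustration (the parameter value behind Fig. 1 is not stated in this excerpt), so the maximizing $\lambda_2$ found by the sketch need not coincide with the one in the figure.

```python
import math

D = 2
V = 2.0  # assumed variance parameter (illustrative only)

def distribution(lam2):
    """Candidate p_i(lam2) for the symmetric case: p_ij = 2^(lam2*j^2) * V/(2*v(lam2)).
    Returns [p_ii, p_{i(i+1)}, ..., p_{i(i+D)}], or None if p_ii would be <= 0."""
    v = sum(2 ** (lam2 * j * j) * j * j for j in range(1, D + 1))
    tail = [2 ** (lam2 * j * j) * V / (2 * v) for j in range(1, D + 1)]
    p_ii = 1 - 2 * sum(tail)
    return None if p_ii <= 0 else [p_ii] + tail

def entropy(p):
    """Shannon entropy of the full symmetric distribution (2D+1 support points)."""
    full = [p[0]] + 2 * p[1:]  # each tail probability occurs on both sides of the diagonal
    return -sum(q * math.log2(q) for q in full if q > 0)

# grid search over 0.3 <= lam2 <= 1, the range shown in Fig. 1a
grid = [0.3 + 0.001 * k for k in range(701)]
best = max((lam for lam in grid if distribution(lam) is not None),
           key=lambda lam: entropy(distribution(lam)))
```

By construction the candidate satisfies the sum and variance constraints for every admissible $\lambda_2$, so the search only has to compare entropies.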


As a final observation: with $a := 2^{-\ln 2 + \lambda_3}$ and $b := (\ln 2)\lambda_2$, (12) can be rewritten as $p_{ij} = a\, e^{b v_{ij}}$, reminiscent of the functional form of a normal distribution with zero mean.


Fig. 1a. Shannon’s entropy for the instance D = 2; symmetric case; $0.3 \le \lambda_2 \le 1$

Fig. 1b. Maximum entropy distribution for the instance D = 2; symmetric case

2.3 Software Solution for the Maximum Entropy Problem

While it might be feasible to implement a little procedure to carry out the steps of the approach taken in the illustrative example for the symmetric case, for the general case this would be difficult. It is therefore advisable to resort to the well-known algorithms for non-linear optimization problems. To obtain the solutions discussed in Sect. 2.4, the NLopt package [11] has been used via its R-interface routine nloptr [16].

From this package we decided to use a sequential quadratic programming algorithm for

nonlinearly constrained gradient-based optimization (supporting both inequality and

equality constraints), based on the implementation described in [12] and [13]. In order


S. Giessing

to execute an instance, seven arguments have to be passed to the interface routine

nloptr. See the Appendix for a list and illustration of those arguments.


2.4 Adding Constraints to the Maximum Entropy Based Approach

A simple measure of information loss, easy to communicate to data users, is the probability for data to preserve their true value, $p_{ii}$, or the cumulated probability for the data to change by less than $d$ (for small $d$, like $d = 1$): $\sum_{j=-d}^{d} p_{i(i+j)}$. Disregarding the fact that this will reduce the entropy (when the other important parameter, i.e. the variance, is fixed), one might be tempted to prefer transition probability distributions with high probabilities in the center (even though this will be at the cost of increasing the tails of the distribution, i.e. the probabilities for larger changes). This can be achieved with the software solution outlined in Sect. 2.3 and in the Appendix by adding constraints to impose lower bounds on the respective probabilities. Especially in combination with such additional constraints, it sometimes happens that in the solution obtained the left-hand side of the probability distribution does not increase monotonically: we get $p_{i(i-k)} < p_{i(i-j)}$ for some $j > k$, which may not be appreciated. To avoid this, more constraints can be added to enforce monotonicity.
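Such a violation is easy to detect programmatically. The sketch below is a hypothetical helper (not from the paper's code) that flags positions on the left-hand tail where a probability further away from the diagonal exceeds one closer to it:

```python
def monotonicity_violations(left_tail):
    """left_tail: [p_{i(i-1)}, p_{i(i-2)}, ..., p_{i(i-D)}], ordered outward
    from the diagonal.  Moving outward the probabilities should not increase;
    return the 0-based index pairs (k, k+1) that violate this."""
    return [(k, k + 1)
            for k in range(len(left_tail) - 1)
            if left_tail[k] < left_tail[k + 1]]

# a left tail whose probabilities do not decrease monotonically outward:
print(monotonicity_violations([0.20, 0.05, 0.08]))  # → [(1, 2)]
```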

Figures 2a and b compare three transition probability distributions for maximum perturbation D = 5 and condition (3) requiring that no cell values 1 and 2 appear in the perturbed data, i.e. $j_s := 2$. Two of the distributions were obtained as maximum entropy solutions for parameter settings of V = 2 vs. V = 3. For V = 2, the figures also present another distribution, computed with the LP-heuristic explained in [9]. Figure 2a presents the distributions obtained for i = 3, and Fig. 2b those obtained for the symmetric case.

Fig. 2a. Transition probability distributions for D = 5; i = 3, $j_s = 2$

Fig. 2b. Transition probability distributions for D = 5; symmetric case ($i \ge 8$)

Note that a distribution almost identical to the leptokurtic distribution resulting from the LP-heuristic of [9] is obtained by the maximum entropy approach, when we add constraints on the elements on the diagonal of the transition matrix, $p_{ii}$, and on the off-diagonal elements $p_{i(i-1)}$ and $p_{i(i+1)}$, to define the respective probabilities of the “target” distribution as lower bounds. This is, however, at the “expense” of higher probabilities in the tails of the distribution (see footnote 4), which are of course rather undesirable.

Another observation from experiments with the approach is that changing the

maximum perturbation (to, for example, D = 4 or D = 10), while keeping the variance

V = 2 or V = 3 constant, reproduces the probability distribution almost exactly, for

larger D of course with longer, but very flat tails.

The software solution outlined in Sect. 2.3 facilitates the computation of probability distributions satisfying the basic properties of the random noise outlined at the beginning of Sect. 2, and allows experimenting with the main parameters (V, D, $p_{ii}$), which obviously affect the usability of the perturbed data and are easy to interpret. In order to determine these parameters appropriately, however, disclosure risk issues must also be taken into account.

3 Evaluating Disclosure Risk with the Shuttle Algorithm

Disclosure risk measures of the type that measure, for example, the percentage of perturbed data identical with the true data, as proposed and discussed in [1, 14], lean in some way on a scenario where the users of the data mistake perturbed data for “true” data, ignoring information on the SDC method offered by the data provider. The present paper assumes a different scenario: we assume a data user (rather, a “snooper”) who is well aware of the perturbation and tries to undo it with certain differencing attacks. On the basis of this scenario the following general requirement for tabular data protection methods makes sense also in the context of a perturbative method not preserving additivity, i.e. not preserving the “logical” table relations (interior cells sum up to the respective margins) for the perturbed data.

A method that is in this sense non-additive (see footnote 5) should ensure that the feasibility intervals that can be computed for protected, sensitive data can be expected not to be disclosive (cf. [10], Sect. 4.3.1). Notably, the idea is to assess this property of a method (viz.: a choice of parameters) by a rigorous preparative study, not in the production phase of table generation and perturbation!

Feasibility intervals are typically computed by means of linear programming (LP) methods (see for example [5]). Solutions can be obtained for each cell of interest by minimizing and maximizing a variable representing its cell value, subject to a set of constraints expressing the “logical” table relations, and some a priori upper and lower bounds on unknown cell values. Note that in the setting of a perturbative SDC method, generally all the cells are assumed to be “unknown”. Our intruder is only assumed to know a lower and an upper bound for each cell. This of course increases the size of the problem to be solved, and hence computation times, especially when table relations tend to be “long”, with many interior cells contributing to the same marginal.



Footnote 4: For example, in the symmetric case, with D = 5: $p_{i(i-5)} = p_{i(i+5)} \approx 0.0006$ without the additional constraints, $\approx 0.009$ with the constraint on the diagonal element, and $\approx 0.01$ with the additional constraint on the off-diagonal elements.

Footnote 5: Notably, the ABS Census TableBuilder product is additive in this sense, because of a subsequent algorithm implemented in the product that restores additivity to the table.



The objective of the research step envisioned is to compute intervals for all cells of large high-dimensional tables for various settings, regarding for example the assumptions on the a priori bounds made by the intruder, different settings for the noise, etc. We decided not to rely on an LP-based tool, but to implement a generalized version of the shuttle algorithm [2] in SAS, which also had certain practical advantages in the processing and management of the instances, even though there might be a theoretical disadvantage: there might be cases where the LP algorithm produces better solutions. In such cases the shuttle algorithm would underestimate the true risks (see footnote 6).

The mathematical formulation of the version of the shuttle algorithm we use is briefly presented in the following (for more details and a toy example the reader is referred to [8]):

Let a table be “defined” by a set of cell indices I and a set of table relations J. For a cell $c \in I$ let $J_T(c)$ identify the set of table relations in which this cell is the margin, and $J_B(c)$ the set of table relations in which cell c is one of the interior cells. For a relation $j \in J$ let $t_j \in I$ denote the index of the margin cell and $B_j \subseteq I$ the set of indices of the “bottom” cells contributing to this margin. Finally, for $c \in I$ let $x_c^U$ and $x_c^L$ denote upper and lower bounds for its true cell value, which may change in the course of executing the algorithm.

Like the “original” algorithm, the generalized version consists basically of two steps (referred to in the following as L-step and U-step) which are repeated alternatingly. Starting from the original a priori bounds (the ones assumed to be known to the intruder) for $x_c^U$ and $x_c^L$, each step will use the results of the previous step to compute several new candidate bounds for all interior cells and compare those candidate bounds to the “previous” bound: for a given interior cell $b \in \bigcup_{j \in J} B_j$, one candidate bound is computed for every table relation $j \in J_B(b)$ containing this cell.

In the U-step, a candidate bound for cell b is computed by taking the difference between the upper bound of the marginal cell $x_{t_j}^U$ and the sum of all lower bounds on the other interior cells in the same table relation, $\sum_{i \in B_j, i \ne b} x_i^L$. The upper bound of cell b, $x_b^U$, will be replaced in the U-step by $x_b^U(j) := x_{t_j}^U - \sum_{i \in B_j, i \ne b} x_i^L$, if for one $j \in J_B(b)$ the candidate bound is tighter, i.e. $x_b^U(j) < x_b^U$.

The L-step works in the same way, but here we take the difference between the lower bound of the marginal cell $x_{t_j}^L$ and the sum of all upper bounds $x_i^U$ on the other interior cells in the same table relation. The lower bound of cell b, $x_b^L$, will be replaced, if for one $j \in J_B(b)$ the candidate bound is tighter, i.e. $x_b^L(j) > x_b^L$. The algorithm stops when no bound improvement is reached in the previous pair of steps, L or U.

In the original setting of [2], the cell values of the margins are assumed to be known. In our setting they are not. We therefore compute, when the algorithm starts and after every U-step (before starting the L-step), for every margin cell $t \in \bigcup_{j \in J} \{t_j\}$ new candidate upper bounds $x_t^U(j) := \sum_{i \in B_j} x_i^U$, replacing in this additional step $x_t^U$ by $x_t^U(j)$, if for one $j \in J_T(t)$ the candidate bound is tighter, i.e. $x_t^U(j) < x_t^U$. Similarly, at the start of the algorithm and between an L- and the next U-step, a lower bound $x_t^L$ will be replaced by $x_t^L(j) := \sum_{i \in B_j} x_i^L$, if $x_t^L(j) > x_t^L$ for one $j \in J_T(t)$.

Footnote 6: In the literature [3, 15] there is some discussion as to which extent it could be expected that solutions obtained by the shuttle algorithm coincide with the solutions that would be obtained solving the respective linear programming problems. For some (admittedly: few) test tables we have actually compared the results and found no differences.
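The two steps and the margin updates just described can be sketched in a few lines of Python (an illustrative reimplementation, not the SAS code used in the paper). The toy instance below corresponds to the base-3 retrieval situation analysed in Sect. 4: a relation with two inner cells, true counts 2, 2 and margin 4, all rounded to 3, where the shuttle recovers the true counts exactly.

```python
def shuttle(xL, xU, relations, max_rounds=20):
    """Generalized shuttle algorithm.  relations is a list of (t, B) pairs:
    t is the index of the margin cell, B the list of interior-cell indices."""
    def margin_update():
        for t, B in relations:
            xU[t] = min(xU[t], sum(xU[i] for i in B))  # candidate upper bound for margin
            xL[t] = max(xL[t], sum(xL[i] for i in B))  # candidate lower bound for margin
    margin_update()                                    # executed when the algorithm starts
    for _ in range(max_rounds):
        before = (list(xL), list(xU))
        for t, B in relations:                         # U-step
            for b in B:
                cand = xU[t] - sum(xL[i] for i in B if i != b)
                if cand < xU[b]:
                    xU[b] = cand
        margin_update()                                # between U- and L-step
        for t, B in relations:                         # L-step
            for b in B:
                cand = xL[t] - sum(xU[i] for i in B if i != b)
                if cand > xL[b]:
                    xL[b] = cand
        margin_update()                                # between L- and next U-step
        if before == (list(xL), list(xU)):             # stop: no improvement in the last pair of steps
            break
    return xL, xU

# Toy instance: true counts a = b = 2, margin t = 4, all rounded to base 3 (-> 3, 3, 3),
# so the a priori bounds are [3 - 1, 3 + 1] = [2, 4] for every cell:
xL, xU = shuttle([2, 2, 2], [4, 4, 4], [(2, [0, 1])])
```

Here the margin update first tightens $x_t^L$ to 4, after which the U-step discloses both inner cells exactly.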

4 Test Application

Our test data set consists of 31 two-way tables available for about 270 geographical units on different levels of a hierarchy. Taking those relations (e.g. the district–municipality relations) into account too, the tables are actually 3-dimensional. The 31 two-way tables are given by cross-combinations of nine variables: two with 2 categories, one with 3, two with 4, the others with 7, 9, 19 and 111 categories. The geography relations also tend to be “long”, often with more than 10, or even up to 100, categories. The current implementation does not consider any hierarchical relations: bounds are computed independently for all tables. The test tables are tables of population counts.

The shuttle algorithm is used to compare the effects of different protection methods: we look at a number of variants of deterministic rounding and at variants of additive noise. The rounding methods were included for the sake of comparison, in the first place. For the rounding we assume that data users are aware of the a priori bounds which directly result from the rounding rule. Denote by R the rounding base and by $n_c^R$ the rounded count for cell $c \in I$. Then $x_c^L := \max(0, n_c^R - R^-)$ and $x_c^U := n_c^R + R^+$ are the known a priori bounds. $R^+$, $R^-$ denote the maximum upper and lower rounding deviations. For example, in case of rounding base R = 3: $R^+ = R^- = 1$.

For the additive noise we assume two different scenarios: A. The maximum perturbation D is published. B. The maximum perturbation D is not published, but the intruder is able to estimate it at D ± 2. Under scenario A, $x_c^L := \max(0, \tilde{x}_c - D)$ and $x_c^U := \tilde{x}_c + D$ are known a priori bounds for a true count $x_c$, given the perturbed count $\tilde{x}_c$. Under scenario B we assume estimated a priori bounds $x_c^L := \max(0, \tilde{x}_c - D - 2)$ and $x_c^U := \tilde{x}_c + D + 2$.
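The a priori bounds for the different methods can be written down directly. The helper below assumes $R^+ = R^- = (R-1)/2$, which holds for the odd rounding bases (as in the R = 3 example above); it is an illustrative sketch, not code from the paper.

```python
def bounds_rounding(rounded, R):
    """A priori bounds for a count rounded to odd base R (R+ = R- = (R-1)//2),
    truncated at 0 since counts are non-negative."""
    d = (R - 1) // 2
    return max(0, rounded - d), rounded + d

def bounds_noise(perturbed, D, published=True):
    """A priori bounds under scenario A (D published) or scenario B
    (D not published, estimated by the intruder at D +/- 2)."""
    slack = 0 if published else 2
    return max(0, perturbed - D - slack), perturbed + D + slack
```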

In the test application, we used bases of 3, 5, 7, 9, 10 and 11 for the rounding

methods. The additive noise was created with the transition probabilities derived in [9]

for maximum perturbations D = 3, 4 and 5.

During execution of the shuttle algorithm, we record the cases where one of the bounds $x_b^L$, $x_b^U$, $x_t^L$, $x_t^U$ is replaced by a tighter (candidate) bound, noting in particular the iteration step and the equation j at which a replacement occurs. After the execution the results are investigated. Cases where final upper and lower bounds for a cell coincide with each other, matching the true cell value (i.e. they are exact bounds), are categorized according to that original cell value (1; 2; 3–10; >10). Figures 3a, b, 4a, b and c present indicators computed on the basis of the evaluation.

Every data point in those figures refers to one of the 31 structural types of our test tables, resulting from the crossing of pairs of the nine variables mentioned above with the geography dimension. Data points are sorted by table size (#cells = number of table cells, summed across all geographical units) to visualize an obvious general trend towards higher values of the indicators in smaller tables.

For the indicators, we count for each cell the number of equations (at most 3, for the 3-dimensional test data) with bound replacements, i.e. where during execution of the algorithm an upper or lower bound was replaced by a tighter one. Results are summed across all cells and across all tables relating to the data point, divided by the respective number of table cells and multiplied by 100, so the theoretical maximum for the indicators is 300 %. We refer to the indicator defined this way as the “indicator for disclosure risk potential”.


For a second, more direct indicator, the “disclosure risk indicator”, we count in the numerator of the indicator only equations with bound replacements for those cells where the algorithm finally returns exact bounds, i.e. where the algorithm discloses the exact true counts.
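The two indicators thus differ only in which cells contribute to the numerator. A minimal sketch (the per-cell data layout is hypothetical, chosen only to make the computation concrete):

```python
def indicators(replaced_eq_counts, disclosed):
    """replaced_eq_counts[c]: number of equations (at most 3 for 3-dimensional
    tables) in which a bound of cell c was replaced by a tighter one.
    disclosed[c]: True if the final bounds for cell c are exact.
    Returns (disclosure risk potential, disclosure risk), both in percent."""
    n = len(replaced_eq_counts)
    potential = 100 * sum(replaced_eq_counts) / n
    risk = 100 * sum(k for k, d in zip(replaced_eq_counts, disclosed) if d) / n
    return potential, risk

# four cells, one of them disclosed exactly:
pot, risk = indicators([3, 1, 0, 2], [True, False, False, False])  # → 150.0, 75.0
```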

Figures 3a and b present results for the disclosure risk indicator, grouped by

original count size (1; 2; 3–10; >10) of the disclosed cells, for rounding to base 3

(Fig. 3a) and rounding to base 5 (Fig. 3b).

Fig. 3a. Disclosure risk indicator for rounding to base 3, by original count size

Fig. 3b. Disclosure risk indicator for rounding to base 5, by original count size

An interesting pattern is that many of the peaks in the observations for rounding to base 3 are observed for table types where one of the variables in the respective pair of spanning variables has only 2 categories. For rounding to base 5, on the other hand, most peaks occur at table types involving a 4-category variable. This can be regarded as empirical confirmation of a theoretical result explained in [7]:

The original values of a table row with l inner cells and a margin cell, protected by rounding to base R, can be retrieved exactly when $l R^+ + R^-$ or $l R^- + R^+$ is a multiple of R.

For rounding base 3 such a relation exists for the case of l = 2 categories ($2R^+ + R^- = 2 + 1 = 3$). For rounding base 5 it exists in the case of l = 4 categories ($4R^+ + R^- = 8 + 2 = 10$).
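Under the assumption $R^+ = R^- = (R-1)/2$ for odd bases, the condition of [7] can be checked mechanically. The sketch below finds, for each base, the smallest row length l for which retrieval is arithmetically possible, recovering l = 2 for base 3 and l = 4 for base 5; for larger bases the condition requires correspondingly longer rows.

```python
def smallest_vulnerable_row_length(R, l_max=50):
    """Smallest number of inner cells l for which l*R+ + R- is a multiple of R,
    for odd base R with maximum deviations R+ = R- = (R-1)//2."""
    d = (R - 1) // 2
    for l in range(2, l_max + 1):
        if (l * d + d) % R == 0:
            return l
    return None

print({R: smallest_vulnerable_row_length(R) for R in (3, 5, 7, 9, 11)})
# → {3: 2, 5: 4, 7: 6, 9: 8, 11: 10}
```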



For rounding bases 7 and 10, exact bound retrievals were very rare (see footnote 7); for rounding bases 9 and 11 no such cases were observed, nor for any of the additive noise variants. Especially for the additive noise this is not so surprising, considering another result of [7]:

The original values of a table row with l inner cells and a margin cell, protected by rounding to base R, can be retrieved exactly when the pattern of the rounding differences is either (I) $R^+ R^+ \ldots R^+ \,|\, R^-$, or (II) $R^- R^- \ldots R^- \,|\, R^+$.

So, all inner cells must have been perturbed by the maximum amount D in one direction, and the total must have been perturbed by D in the other direction. For the stochastic noise method such events occur with approximate probability $(p_{i(i+D)})^{l+1}$, which is very small, considering the flat tails of the transition probabilities used to create the noise.

Retrieval of exact bounds, and hence the original count, is a rather direct risk of disclosure, especially where small cells are concerned. The disclosure risk indicator is thus a very informative disclosure risk measure. On the other hand, one should not interpret an indicator value of zero as evidence for zero disclosure risk. After all, such cases might occur in other tables not taken into account in the testing, or they might have occurred if the observed data had been a little different, or if additional complexities, like hierarchical relations between some of the variables, had been taken into account.

This risk of underestimating the disclosure risk is reflected in the indicator for disclosure risk potential introduced above. Figures 4a and b compare this indicator for the additive noise variants with maximum perturbations D = 3, 4 and 5, for the intruder scenarios A (ABS3, ABS4, ABS5, cf. Fig. 4a) and B (with a priori interval bounds increased by +2: ABS3_2, ABS4_2, ABS5_2, cf. Fig. 4b), and for the stronger variants of rounding, i.e. rounding bases 7, 9, 10 and 11 (Fig. 4c).

For the noise, we observe a strong reduction in the indicators when maximum

perturbation is increased. While for ABS3 the indicator values vary between about

15 % and 3 %, for ABS5 the range is on a different scale, between ca. 1.9 % and

0.5 %. The effect is similar (even slightly stronger) when the maximum perturbation is

kept, but not published: actually the range for the scenario B variant ABS3_2, for

which the shuttle algorithm starts with a priori intervals of the same size (i.e.: 10) as for

scenario A variant ABS5, varies between 1.8 % and 0.02 %.

Notably, 10 is also the size of the a priori intervals for rounding to base 11, but here

indicator values are again on a much higher scale (compared to noise), varying between

about 60 % and 3 %. For all deterministic rounding variants (including rounding to

base 3 and to base 5 not presented in the figure) the indicator values are remarkably

similar. Also interesting is the effect that the general trend for higher indicator values in

smaller tables, visible for the rounding variants and the least protective noise variant

ABS3, vanishes in the variants with the lowest indicator values.


Footnote 7: Exact bounds were obtained for only 17 of the 31 structural types, for altogether 44 cell/equation cases. Apart from two structural types with disclosure risk indicators of 0.25 % and 0.1 % for original count sizes 3–10, indicators were at most 0.05 %. For rounding base 10, exact bounds were obtained for 20 cell/equation cases, all in the same structural type and only concerning cells with original counts >10.



Fig. 4a. Disclosure risk potential indicator for additive noise, scenario A

Fig. 4b. Disclosure risk potential indicator for additive noise, scenario B

Fig. 4c. Disclosure risk potential indicator for Rounding to bases 7, 9, 10 and 11

A final observation is that most bound improvements occur during the first combination of U- and L-step. For the additive noise, this is the case in more than 95 % of the cases for almost all structural types of test tables. For deterministic rounding, at least 75 % of bound improvements occur during this first step. The algorithm always came to an end after at most 6 steps.

5 Summary and Outlook

In its first part, this paper has explored the suggestion of [14] to determine the transition probability distributions of additive noise by maximizing entropy, looking at the mathematical formulation of the approach. For the sake of illustration, the paper has explained how to derive a solution analytically in a relatively simple special case. The first part of the paper concludes with suggestions for how to employ the R-interface routine nloptr for the NLopt nonlinear-optimization package to compute transition probabilities, presenting some examples of distributions produced this way.

The topic addressed in the second part is disclosure risk estimation. The paper has outlined an implementation of a generalized version of the shuttle algorithm [2]. As a test application, the algorithm has been used on a sizeable set of 3-dimensional tables. We have compared the disclosure risks associated with several variants of deterministic rounding and additive noise, using one more and one less direct indicator for disclosure risk. As a side result, this test application has delivered empirical proof of the high disclosure risks associated with rounding to base 3 as a protection method. The paper has also shown empirically that although rounding to larger rounding bases reduces disclosure risks effectively, the disclosure risk potential of additive noise is considerably lower than that of deterministic rounding when we compare variants with the same maximum deviation. For additive noise, looking at only the more direct risk indicator of the two, no cases of disclosure risk were observed, even for rather weak settings of the noise.

For future work, it is foreseen to use the algorithm for a rigorous assessment and

comparison of disclosure risks and risk potentials for several variants (e.g.: parameter

settings) of additive noise, taking into account also more than 3-dimensional instances

and, overall, a much larger set of instances. However, it is not at all intended to cover

the enormous set of tables which can theoretically be generated by ‘on-the-fly’ table

production. When fixing parameters, this “under-coverage issue” in disclosure risk

estimation must be taken into account appropriately.

Acknowledgements. The author gratefully acknowledges the contribution of Jonas Peter, who during his time as a student at Destatis developed the R-code described in Sect. 2.3 and the


Appendix

Arguments passed in R to the interface routine nloptr for the NLopt package:

• x0: start solution for $p_i$. Use for example a vector of 1's. The vector length must equal the number of columns of the coefficient matrix A.
• eval_f: list with two entries representing the objective function of the maximum entropy problem and its gradient $(1/\ln 2 + \log_2(p_{ij}))_{j \in J_i}$, which for nloptr can simply be stated as: “objective” = sum(x * log2(x)), and “gradient” = log2(x) + 1/log(2).
• lb, ub: vectors of lower and upper bounds for $p_i$. Same vector length as x0. For ub, a vector of 1's will do. For lb, use a vector of $\epsilon$ ($\epsilon > 0$, e.g. $\epsilon := 10^{-7}$). For discussion of more elaborate settings, see Sect. 2.4.

• eval_g_ineq: list with two entries representing the inequality constraints, to be stated formally as, e.g., “constraints” = constr and “jacobian” = grad. The vector constr consists of (at least) two elements. One is the second term on the left-hand side of (7), which could be stated as sum(v^2 * x) − V (see footnote 8). The other one relates to condition (6) and could be stated as x − 1. Matrix grad is composed of (at least) the two respective gradient vectors, which are actually identical to the last two rows of matrix A. Further inequality constraints could be implemented to enforce monotonicity (see footnote 9), cf. Sect. 2.4.

Footnote 8: In our present code the second equality constraint ($g_2(p_i) = 0$, i.e. the constraint expressing the requirement that perturbations shall have a fixed variance of V) is handled as an inequality constraint, defining V as an upper bound, as in most practical instances the variance of the maximum entropy solution assumes the value of this parameter anyway.

• eval_g_eq: list with two entries representing the equality constraints, to be stated formally as, e.g., “constraints” = constr and “jacobian” = grad, where constr is a vector listing the first and the last term of the left-hand side of (7), e.g. as (sum(v * x) − 0, sum(x) − 1), and grad is a matrix composed of the two respective gradient vectors, which is obviously identical to the matrix that results when we delete the second row of matrix A.
• opts: argument for the selection of a specific optimization algorithm and respective settings (see footnote 10).
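The objective/gradient pair given for eval_f can be sanity-checked numerically: since d/dq of q·log2(q) is log2(q) + 1/ln 2, the gradient stated above should match finite differences. The Python fragment below stands in for the R expressions (an illustrative check, not part of the described implementation):

```python
import math

def objective(x):
    # R: sum(x * log2(x))
    return sum(q * math.log2(q) for q in x)

def gradient(x):
    # R: log2(x) + 1/log(2)   (since d/dq of q*log2(q) = log2(q) + 1/ln 2)
    return [math.log2(q) + 1 / math.log(2) for q in x]

# finite-difference check of the gradient at an arbitrary interior point
x = [0.2, 0.3, 0.5]
h = 1e-7
for k in range(len(x)):
    xp = list(x)
    xp[k] += h
    fd = (objective(xp) - objective(x)) / h
    assert abs(fd - gradient(x)[k]) < 1e-5
```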



References

1. Andersson, K., Jansson, I., Kraft, K.: Protection of frequency tables – current work at Statistics Sweden. In: Joint UNECE/Eurostat Work Session on Statistical Data Confidentiality, Helsinki, Finland, 5–7 October 2015. http://www1.unece.org/stat/
2. Buzzigoli, L., Giusti, A.: An algorithm to calculate the lower and upper bounds of the elements of an array given its marginals. In: Statistical Data Protection (SDP 1998) Proceedings, Eurostat, Luxembourg, pp. 131–147 (1998)

3. Buzzigoli, L., Giusti, A.: Disclosure control on multi-way tables by means of the shuttle algorithm: extensions and experiences. In: Bethlehem, J.G., van der Heijden, P.G.M. (eds.) Computational Statistics 2000, COMPSTAT Proceedings in Computational Statistics. Physica-Verlag, Heidelberg (2000)

4. EUROSTAT, Methodology for Statistics Unit B1: Minutes of the Working Group on

Methodology, 7 April 2016

5. Fischetti, M., Salazar-González, J.J.: Models and algorithms for optimizing cell suppression

problem in tabular data with linear constraints. J. Am. Stat. Assoc. 95, 916–928 (2000)

6. Fraser, B., Wooton, J.: A proposed method for confidentialising tabular output to protect

against differencing. In: Monographs of Official Statistics, Work session on Statistical Data

Confidentiality, Eurostat-Office for Official Publications of the European Communities,

Luxembourg, pp. 299–302 (2006)

7. Giessing, S.: Anonymisierung von Fallzahltabellen durch Rundung. In: Paper presented at

the Sitzung des Arbeitskreises für Fragen der mathematischen Methodik am 17.06.1991 in

Wiesbaden (in German), Statistisches Bundesamt

8. Giessing, S.: Report on issues in the design of transition probabilities and disclosure risk

estimation for additive noise. Statistisches Bundesamt (Unpublished manuscript)

Footnote 9: Add one element (x[j] − x[j+1]) to the vector constr for each index j pointing to the right-hand side of the distribution. Extend matrix grad by an additional column vector: the vector entries referring to indices j and j+1 should be 1 and (−1), respectively. All other entries are 0.


Footnote 10: In our implementation we have defined opts as the following list: “algorithm” = “NLOPT_LD_SLSQP”, “xtol_rel” = 1.0e−7, “maxeval” = 100000, and “local_opts” = local_opts, where local_opts is another list: (“algorithm” = “NLOPT_LD_MMA”, “xtol_rel” = 1.0e−7).




9. Giessing, S., Höhne, J.: Eliminating small cells from census counts tables: some

considerations on transition probabilities. In: Domingo-Ferrer, J., Magkos, E. (eds.) PSD

2010. LNCS, vol. 6344, pp. 52–65. Springer, Heidelberg (2010)

10. Hundepool, A., Domingo-Ferrer, J., Franconi, L., Giessing, S., Schulte Nordholt, E., Spicer,

K., de Wolf, P.P.: Statistical Disclosure Control. Wiley, Chichester

11. Johnson, S.G.: The NLopt nonlinear-optimization package. http://ab-initio.mit.edu/nlopt

12. Kraft, D.: A software package for sequential quadratic programming, Technical Report

DFVLR-FB 88-28, Institut für Dynamik der Flugsysteme, Oberpfaffenhofen, July 1988

13. Kraft, D.: Algorithm 733: TOMP–Fortran modules for optimal control calculations. ACM

Trans. Math. Softw. 20(3), 262–281 (1994)

14. Marley, J.K., Leaver, V.L.: A method for confidentialising user-defined tables: statistical

properties and a risk-utility analysis. In: Proceedings of 58th World Statistical Congress,

pp. 1072–1081 (2011)

15. Roehrig, S.F.: Auditing disclosure in multi-way tables with cell suppression: simplex and

shuttle solutions. In: Paper Presented at: Joint Statistical Meeting 1999, Baltimore, 5–12

August 1999

16. Ypma, J.: Introduction to nloptr: an R interface to NLopt (2014). https://cran.r-project.org/
