Besides polynomial approximation, another way to cope with the intractability of min **independent** **dominating** **set** (as well as of any other NP-hard problem) is to design algorithms able to solve it to optimality with a worst-case exponential running time as low as possible. Since min **independent** **dominating** **set** can trivially be solved in O(2^{|V|} p(|V|)) by simply enumerating all the subsets of V, the stake of such a research issue is to solve it within O(2^{c|V|} p(|V|)), where c is a constant lower than 1 and p is some polynomial. Since, compared with the slightest improvement of c, p is not relevant, we use from now on the notation O*(2^{c|V|}) to measure such running times, omitting polynomial factors.
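The trivial O*(2^{|V|}) enumeration mentioned above can be sketched as follows (a toy illustration, not the improved algorithm of the paper; the function name is ours):

```python
from itertools import combinations

def min_independent_dominating_set(n, edges):
    """Brute-force search over all subsets of V, smallest first.

    Runs in O*(2^n): for each subset we check independence and
    domination in polynomial time."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            S = set(S)
            # independent: no member of S is adjacent to another member
            independent = all(adj[u].isdisjoint(S - {u}) for u in S)
            # dominating: every vertex is in S or has a neighbour in S
            dominating = all(v in S or adj[v] & S for v in range(n))
            if independent and dominating:
                return S
    return set()
```

On the path 0-1-2-3, the smallest independent dominating set has two vertices.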

Keywords: distributed computing, fault tolerance, self-stabilization, k-**independent** **dominating** **set**, k-**dominating** **set**, k-**independent** **set**
1 Introduction
In this paper, we consider the problem of computing a distance-k **independent** **dominating** **set** in a self-stabilizing manner in the case where k > 1. A set of nodes is a distance-k **independent** **dominating** **set** if and only if it is both a distance-k **independent** **set** and a distance-k **dominating** **set**. A **set** I of nodes is distance-k **independent** if every node in I is at distance at least k + 1 from any other node of I. A **set** D of nodes is distance-k **dominating** if every node not belonging to D is at distance at most k from a node in D. We propose a very simple and fast protocol, called FID. The protocol FID reaches a terminal configuration in at most 4n + k rounds, where n is the network size. FID requires (k + 1)·log(n + 1) bits per node. The obtained distance-k **independent** **dominating** **set** contains at most ⌊2n/k + 2⌋ nodes.
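The two distance-k conditions above can be checked directly with breadth-first search truncated at depth k. A minimal centralized sketch (the function names are ours and are unrelated to protocol FID itself):

```python
from collections import deque

def dist_within(adj, src, targets, k):
    """BFS from src, truncated at depth k; return the targets reached."""
    seen = {src: 0}
    q = deque([src])
    found = set()
    while q:
        u = q.popleft()
        if u in targets:
            found.add(u)
        if seen[u] == k:      # do not expand beyond distance k
            continue
        for w in adj[u]:
            if w not in seen:
                seen[w] = seen[u] + 1
                q.append(w)
    return found

def is_distance_k_ids(adj, S, k):
    """S is distance-k independent dominating iff:
    - any two distinct nodes of S are at distance >= k + 1 (independence),
    - every node outside S is within distance k of some node of S."""
    nodes = set(adj)
    for v in S:
        if dist_within(adj, v, S - {v}, k):   # another member too close
            return False
    dominated = set()
    for v in S:
        dominated |= dist_within(adj, v, nodes, k)
    return nodes - S <= dominated
```

For example, on the path 0-1-2-3-4-5 with k = 2, the set {0, 3} is distance-2 independent dominating, while {0, 2} is not (0 and 2 are at distance 2 ≤ k).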

that when each obligation of Π is a singleton, an IDO is just an **independent** **dominating** **set** of G, which can be constructed with a greedy algorithm. Among other things, we show that determining whether an instance (G = (V, E), Π) has an IDO is NP-complete, even if G is a path (each vertex of G has exactly two neighbours, except the two extremities, which have one), all the obligations are **independent** sets of G, and they all have the same constant size λ > 1. We also show that the problem remains NP-complete even if Π is composed of √|V| **independent** obligations, each of size √|V|, or if the diameter of G is three. Our results clearly show that this problem is very difficult, even on extremely restricted instances. Hence, in the second part of the paper, we relax the condition of dominating all the vertices of G. However, we show that determining whether (G = (V, E), Π) contains an **independent** **set** D respecting the obligations and **dominating** at least 3√|V| − 2 vertices of G is NP-complete, even if G is a collection of disjoint paths and the obligations are all **independent** sets of G. On the positive side, we propose a greedy algorithm constructing a solution **dominating** at least 2√|V| − 1 vertices in any instance (G = (V, E), Π) whose obligations are all **independent** sets of G.
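For the singleton-obligation case, the greedy construction of an independent dominating set alluded to above is standard: repeatedly pick a not-yet-dominated vertex and mark its closed neighbourhood as dominated. A minimal sketch (our naming, not the paper's pseudocode):

```python
def greedy_independent_dominating_set(adj):
    """Greedy construction.  The result is independent (we never pick
    an already-dominated vertex, and every neighbour of a chosen vertex
    is dominated as soon as it is chosen) and dominating (the scan only
    ends once every vertex is dominated)."""
    I, dominated = set(), set()
    for v in sorted(adj):          # any vertex order works; sorted for determinism
        if v not in dominated:
            I.add(v)
            dominated.add(v)
            dominated |= set(adj[v])
    return I
```

On the path 0-1-2-3 this returns {0, 2}, which is indeed independent and dominating.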

In this section, we prove, by reductio ad absurdum, that all maximal computations of protocol SID under any unfair distributed scheduler are finite.
Let e be a maximal computation starting from a configuration, named c0. In a configuration c reached by e, for every node v, firstHead(v)_c.id is either the identifier of a node or a value that appears in the initial configuration (i.e., there is a node u such that firstHead(v)_c.id = firstHead(u)_{c0}.id ∨ firstHead(v)_c.id = secondHead(u)_{c0}.id). So the values taken by a variable firstHead in e belong to a bounded **set**. Similarly, we prove that the values taken by a variable secondHead in e belong to a bounded **set**.

(α_min − 1)√(1 − γ)
with q* the size of an optimal quasi-**independent** **set** and α_min the size of a minimum **independent** **dominating** **set** of the input graph. This ratio tends to ∆√(1 − γ).
Algorithm γ-QIS has been run on 20 randomly generated graphs of each size (10, 20 and 30 vertices); the edges of an instance have been generated with a probability p, 0.1 < p < 0.5. Optimal solutions have been computed with the exact method of Section 4. Table 1 summarizes the experimental results obtained: for every value of γ, it contains the worst, best and average ratios and the percentage of optima returned by the algorithm.

In Table 3, we perform a comparative study of the running times of Algorithms RIS1 and RIS2 for some ratio values. As one can see from Tables 2 and 3, Algorithm RIS2 dominates Algorithm RIS1 since, in fact, the former is a refinement of the latter.
The second improvement follows a different approach, based upon an exhaustive lookup of all the candidate values for α(G), and using an exact algorithm for min vertex cover rather than for max **independent** **set**. Informally, the underlying idea for this approach (leading to Algorithm RIS3) is that randomization allows us to split the input graph into "small" subgraphs, on which a fixed-parameter algorithm can be efficiently used to reach both a good overall running time and any a priori fixed approximation ratio. Then, Algorithm RIS3 consists of running Algorithm OPT_VC on subgraphs of size βn < rn taken at random, for a sufficiently large number of times, where β is optimally determined as a function of α(G).
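The sample-and-solve idea can be illustrated by the rough sketch below. This is not Algorithm RIS3 itself: the brute-force min vertex cover stands in for the fixed-parameter OPT_VC, and β and the number of trials are illustrative parameters, not the optimal values derived in the paper. The sketch relies on the classical fact that the complement of a vertex cover is an independent set.

```python
import itertools
import random

def min_vertex_cover_size(adj, vertices):
    """Exact min vertex cover on the subgraph induced by `vertices`
    (a brute-force stand-in for a fixed-parameter algorithm)."""
    vs = sorted(vertices)
    edges = [(u, v) for u in vs for v in adj[u] if v in vertices and u < v]
    for size in range(len(vs) + 1):
        for C in itertools.combinations(vs, size):
            C = set(C)
            if all(u in C or v in C for u, v in edges):
                return size
    return len(vs)

def sampled_independent_set_lower_bound(adj, beta, trials, rng=None):
    """Repeatedly sample an induced subgraph on ~beta*n vertices, solve
    min vertex cover exactly on it, and keep the best value of
    |sample| - vc as an independent-set lower bound for the whole graph."""
    rng = rng or random.Random(0)
    n = len(adj)
    m = max(1, int(beta * n))
    best = 0
    for _ in range(trials):
        sample = set(rng.sample(sorted(adj), m))
        best = max(best, m - min_vertex_cover_size(adj, sample))
    return best
```

With β = 1 the sample is the whole graph and the bound is exact; smaller β trades accuracy for the running time of the exact subroutine.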

Let us consider a **set** S that does not verify the lemma. The **set** S is defined in some call A to Procedure 15 or 27. If S is defined in Step 1, 2, or 3 of Procedure 15, then the lemma is trivially verified for S. We can thus assume that A is either a call to Procedure 27, or that Step 5 is reached. We consider the **set** C as it is last defined in call A, and if A is a call to Procedure 27, then let X = ∅. Let K be the component of S defined from a component C′ of G[C] (K is defined from C′ by removing some vertices that have degree 1 in C′ and degree 2 in G). By construction, either C′ contains a 3-vertex with a giving edge, or |C′| ≥ k + 1. Suppose |C′| ≥ k + 1. For every vertex u of C′ \ X, since u was not put in I in Step 4, u has a neighbour that is not in C (one that was in I when C was defined), and thus not in C′. For every vertex u of X, u is adjacent to its supervisor, which is not in C′. Therefore

Now, whenever the Alternating strategy requires a guard to move in from a border vertex, it requires another guard to move out from the interior vertices. The exchange is easy to facilitate, since the guard moving out of the interior will always move onto the same border path that the guard moving into the interior previously occupied. In all three of the possible configurations of the guards on the border vertices, the guards occupy a **dominating** **set** of the row or column of vertices adjacent to them in the 7 × 7 subgrid. Hence, there is a guard available to move to whichever vertex requires one, and the guard leaving the interior can always move onto the border path, as the guards can easily maneuver to leave an adjacent vertex empty for him while maintaining one of the three formations.

Because **dominating** sets are among the most studied objects in graph theory and algorithms, their enumeration (and counting) has attracted increasing attention over the past 10 years. The problem of enumerating minimal **dominating** sets (hereafter referred to as Dom-Enum) has a notable feature: it is equivalent to the extensively studied hypergraph problem Trans-Enum. In Trans-Enum, one is given a hypergraph H (i.e., a collection of sets, called hyperedges) and is asked to enumerate all the minimal transversals of H (i.e., the inclusion-minimal sets of elements that meet every hyperedge). It is not hard to see that Dom-Enum is a particular case of Trans-Enum: the minimal **dominating** sets of a graph G are exactly the minimal transversals of the hypergraph of closed neighborhoods of G. Conversely, Kanté, Limouzy, Mary, and Nourine proved that every instance of Trans-Enum can be reduced to a co-bipartite instance of Dom-Enum [17]. Currently, the best output-sensitive algorithm for Trans-Enum is due to Fredman and Khachiyan and runs in quasi-polynomial time [9]. It is a long-standing open problem whether this complexity bound can be improved (see for instance the surveys [6, 8]). Therefore, the equivalence between the two problems is an additional motivation to study Dom-Enum, with the hope that techniques from graph theory can be used to obtain new results on the Trans-Enum problem. So far, output-polynomial algorithms have been obtained for Dom-Enum in several classes of graphs, including planar graphs and degenerate graphs [7], classes of graphs of bounded tree-width, clique-width [4], or mim-width [10], path graphs and line graphs [16], interval graphs and permutation graphs [18], split graphs [19], graphs of girth at least 7 [12], chordal graphs [19], and chordal bipartite graphs [11]. A succinct survey of results on Dom-Enum can be found in [20].
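The Dom-Enum-to-Trans-Enum direction can be made concrete on toy instances. The sketch below (exponential brute force, for illustration only; the function names are ours) builds the closed-neighbourhood hypergraph and enumerates its minimal transversals, which are exactly the minimal dominating sets:

```python
from itertools import combinations

def closed_neighborhood_hypergraph(adj):
    """Hyperedges are the closed neighbourhoods N[v] = {v} ∪ N(v)."""
    return [frozenset({v} | set(adj[v])) for v in sorted(adj)]

def minimal_hitting_sets(universe, hyperedges):
    """Brute-force enumeration of inclusion-minimal transversals."""
    hitting = [set(S) for r in range(len(universe) + 1)
               for S in combinations(sorted(universe), r)
               if all(set(S) & e for e in hyperedges)]
    return [S for S in hitting if not any(T < S for T in hitting)]

def minimal_dominating_sets(adj):
    """Minimal dominating sets of G = minimal transversals of the
    closed-neighbourhood hypergraph of G."""
    return minimal_hitting_sets(set(adj), closed_neighborhood_hypergraph(adj))
```

On the path 0-1-2 this yields exactly the two minimal dominating sets {1} and {0, 2}.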
The authors of [19] state as an open problem the design of an output-polynomial algorithm for bipartite graphs (the problem also appeared in [20, 11]). We address this problem with the following result.

bounded pathwidth and bounded treewidth graphs.
4 Polynomial-time algorithms
In this section, we focus on graph classes for which DSR TS can be solved in polynomial time. A natural way to solve this problem is to distinguish a special **dominating** **set** (which we call canonical) and then show that each **dominating** **set** can be reconfigured into this special one [17]. The canonical **dominating** **set** is not part of the original instance, so it is crucial to be able to compute it in polynomial time if we aim to compute the reconfiguration sequence in polynomial time. However, this is not an issue if we are only interested in the decision problem. We emphasize that this canonical **dominating** **set** must be uniquely defined, i.e., the **set** of vertices that hold a token, as well as the number of tokens on each of these vertices, must be fixed.

1 Introduction
Given a graph G = (V, E), a **dominating** clique is a clique that is also a **dominating** **set** for G. Determining whether a **dominating** clique exists is known to be NP-hard ([5]), and so are the two related problems min **dominating** clique and max **dominating** clique, where we are asked for a **dominating** clique of minimum and of maximum size, respectively. These problems can of course be solved by enumerating all the subsets of V. So, an interesting problem is to devise algorithms able to optimally solve the three problems existing **dominating** clique, min **dominating** clique and max **dominating** clique within time O(2^{c|V|} p(|V|)), where c is a constant lower than 1 and p some polynomial function. Notice that, compared to the slightest improvement of c, p is not relevant. So, from now on, we use the notation O*(2^{c|V|}) in order to omit polynomial factors.
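The trivial subset enumeration solving all three problems at once can be sketched as follows (a toy O*(2^n) illustration, not the improved algorithms discussed here; names are ours):

```python
from itertools import combinations

def dominating_cliques(n, edges):
    """Enumerate all subsets of V and keep those that are both
    cliques and dominating sets.  Min and max dominating clique are
    then the smallest and largest sets in the returned list."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    out = []
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            S = set(S)
            clique = all(v in adj[u] for u, v in combinations(sorted(S), 2))
            dominating = all(v in S or adj[v] & S for v in range(n))
            if clique and dominating:
                out.append(S)
    return out
```

On the path 0-1-2, the dominating cliques are {1}, {0, 1} and {1, 2}, so the min and max dominating cliques have sizes 1 and 2.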

5 Price of extension
Considering the possibility that some fixed **set** U might not be extendible to any minimal solution, one might ask how wrong U is as a fixed choice for an extension problem. One way to evaluate this is to ask how much U has to be altered to reach a minimal solution. Put differently for our extension problems at hand, we want to discuss how many vertices of U have to be deleted (for Ext VC) or added (for Ext IS) in order to arrive at a yes-instance of the extension problem. The magnitude of how much U has to be altered can be seen as the price that has to be paid to ensure extendibility.

Keywords: open locating-**dominating** code problem, locating total-**dominating** code problem, polyhedral approach
1 Introduction
For a graph G that models a facility, detection devices can be placed at its nodes to locate an intruder (such as a fire, a thief or a saboteur). Depending on the features of the detection devices (whether they detect an intruder only if it is present at the node where the detector is installed, and/or also at any of its neighbors), different **dominating** sets can be used to determine the optimal distribution of the detection devices in G. In the following, we study three problems arising in this context, all of which have been actively studied during the last decade; see e.g. the bibliography maintained by Lobstein [16].
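As one concrete example of such a structure, a locating-dominating set requires every non-detector node to trigger a non-empty and distinct pattern of alarms among its detector neighbours, so the alarm pattern identifies the intruded node. A minimal verification sketch (our naming; this checks only the locating-dominating variant, not all three problems studied here):

```python
def is_locating_dominating(adj, D):
    """D is locating-dominating iff every vertex outside D has a
    non-empty set of neighbours inside D (domination), and these sets
    are pairwise distinct (location): the pattern of alarms raised by
    the detectors in D uniquely identifies the intruded vertex."""
    signatures = {}
    for v in set(adj) - set(D):
        sig = frozenset(set(adj[v]) & set(D))
        if not sig or sig in signatures.values():
            return False            # undominated, or ambiguous alarm pattern
        signatures[v] = sig
    return True
```

On the path 0-1-2-3-4, {1, 3} is locating-dominating, but {1, 4} is not: vertices 0 and 2 would both raise only the alarm of detector 1.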

Bovykin and Weiermann [1] introduced the Erdős–Moser theorem in reverse mathematics and proved that EM together with the chain-antichain principle (CAC) is equivalent to RT^2_2 over RCA_0. This was refined into an equivalence between EM + ADS and RT^2_2 by Montalbán (see [1]), and the equivalence still holds between the stable versions of the statements. Lerman, Solomon and Towsner [19] proved that EM is strictly weaker than RT^2_2 by constructing an ω-model of EM which is not a model of the stable ascending descending sequence principle (SADS). SADS is the restriction of ADS to linear orders of order type ω + ω* [14]. The author noticed in [22] that their construction can be adapted to obtain a separation of EM from the stable thin **set** theorem for pairs (STS(2)). Wang strengthened this separation by constructing in [31] a standard model of many theorems, including EM, COH and weak König's lemma (WKL), which is neither a model of STS(2) nor a model of SADS. The author later refined in [24, 26] the forcing technique of Lerman, Solomon and Towsner and showed that it is strong enough to obtain the same separations as Wang.


Conversely, assume that G has an identifying code C of size at most f(H̊). By Lemmas 2 and 4, the code C contains at least γ^ℓ_j/2 internal vertices of p^ℓ_j, and at least 3m vertices in each variable-gadget. Moreover, C must contain at least one vertex among C_j, o_j so as to cover o_j. Hence, the code C contains exactly that number of vertices in each of the subgraphs mentioned. Thus, by Lemma 4, for each variable-gadget K_i, either F_i ⊂ C and T_i ∩ C = ∅, or T_i ⊂ C and F_i ∩ C = ∅. If T_i ⊂ C then we **set** the variable X_i to be true, otherwise false. Consider now an arbitrary clause C_j: we infer that at least one neighbour of C_j different from o_j also belongs to C (otherwise C would not separate C_j and o_j). Consider the path p^ℓ

Annegret K. Wagler
Université Blaise Pascal (LIMOS, UMR 6158 CNRS), Clermont-Ferrand, France
Abstract
The locating-**dominating** **set** problem is a special domination problem, challenging both from a theoretical and from a computational point of view. Hence, a typical line of attack is to determine minimum locating-**dominating** sets of special graphs or to provide bounds for their size. Here, we study the locating-**dominating** **set** problem from a polyhedral point of view and demonstrate how the associated polyhedra can be entirely described for some basic families of graphs. The latter enables us to determine minimum-weight locating-**dominating** sets in the studied graph classes for arbitrary integral node weights. We discuss further lines of research aiming to apply similar techniques to other graph classes and to obtain exact values or strong lower bounds for the size of minimum locating-**dominating** sets, stemming from linear relaxations of the polyhedra, enhanced by suitable cutting planes.

Keywords: **set** covering, locating-**dominating** **set**, facet-descriptions of polyhedra


We deal with two versions of branch-&-bound. The first one is the exhaustive search method; it can be seen as a branch-&-bound where the search tree is pruned only on branches rooted at nodes that correspond to infeasible partial solutions. In the second version, the nodes of the search tree are provided with an evaluation function that, as described above, is an upper bound on the best solution that can be built when the node is completely unfolded. For max **independent** **set**, a simple evaluation for a node t of the branch-&-bound tree is the function u(S) = |S| + (n − ℓ(t)), where S denotes the partial solution already constructed at node t of the search tree, n denotes the order of the input graph, and ℓ(t) is the level of t in the tree. Recall that at level ℓ, the branch-&-bound algorithm has handled ℓ vertices of G. The function u(S), also called the "evaluation of S", simply expresses the fact that if S is the partial **independent** **set** built when handling some node t, then ℓ(t) vertices of G have been fixed, and the ideal solution one can hope for from t is the one that adds to S the whole **set** of the n − ℓ(t) unfixed vertices of G. Recall that the evaluation of a leaf (since infeasible subtrees have already been discarded) is the size of the **independent** **set** corresponding to this leaf.
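The second version can be sketched as follows, with the evaluation u(S) = |S| + (n − ℓ(t)) used as the pruning bound (a minimal illustration in our own naming, not the authors' implementation):

```python
def max_independent_set_bb(adj):
    """Branch-and-bound for max independent set.  Vertices are handled
    in a fixed order; a node at level l has fixed l vertices, and its
    partial solution S is evaluated as u(S) = |S| + (n - l): the best
    one can hope for is to add every still-unfixed vertex to S."""
    order = sorted(adj)
    n = len(order)
    best = []

    def bb(level, S):
        nonlocal best
        if len(S) + (n - level) <= len(best):   # prune: u(S) cannot beat best
            return
        if level == n:                          # leaf: evaluation is |S|
            best = list(S)
            return
        v = order[level]
        if not (set(adj[v]) & set(S)):          # feasible branch: take v
            bb(level + 1, S + [v])
        bb(level + 1, S)                        # other branch: skip v

    bb(0, [])
    return set(best)
```

On the path 0-1-2-3-4 this finds an independent set of size 3, e.g. {0, 2, 4}.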
