Counting List Homomorphisms from Graphs of Bounded Treewidth: Tight Complexity Bounds

The goal of this work is to give precise bounds on the counting complexity of a family of generalized coloring problems (list homomorphisms) on bounded-treewidth graphs. Given graphs G, H, and lists L(v) ⊆ V(H) for every v ∈ V(G), a list homomorphism from (G, L) to H is a function f: V(G) → V(H) that preserves the edges (i.e., uv ∈ E(G) implies f(u)f(v) ∈ E(H)) and respects the lists (i.e., f(v) ∈ L(v)). Standard techniques show that if G is given with a tree decomposition of width t, then the number of list homomorphisms can be counted in time |V(H)|^t · n^{O(1)}. Our main result determines, for every fixed graph H, how much the base |V(H)| in the running time can be improved. For a connected graph H, we define irr(H) in the following way: if H has a loop or is nonbipartite, then irr(H) is the maximum size of a set S ⊆ V(H) where any two vertices have different neighborhoods; if H is bipartite, then irr(H) is the maximum size of such a set that is fully contained in one of the bipartition classes. For disconnected H, we define irr(H) as the maximum of irr(C) over every connected component C of H. It follows from earlier results that if irr(H) = 1, then the problem of counting list homomorphisms to H is polynomial-time solvable, and otherwise it is #P-hard. We show that, for every fixed graph H, the number of list homomorphisms from (G, L) to H
• can be counted in time irr(H)^t · n^{O(1)} if a tree decomposition of G having width at most t is given in the input, and
• given that irr(H) ≥ 2, cannot be counted in time (irr(H) − ε)^t · n^{O(1)} for any ε > 0, even if a tree decomposition of G having width at most t is given in the input, unless the Counting Strong Exponential-Time Hypothesis (#SETH) fails.
Thereby, we give a precise and complete complexity classification featuring matching upper and lower bounds for all target graphs with or without loops.


Introduction
Many of the NP-hard problems studied in the literature are known to be polynomial-time solvable when restricted to graphs of bounded treewidth. In fact, the majority of these problems can be solved in time f(t) · n^{O(1)} if a tree decomposition of width t is given in the input, that is, they are fixed-parameter tractable (FPT) parameterized by treewidth. As algorithms working on tree decompositions are useful building blocks in many types of FPT results and approximation schemes, determining the optimal dependence f(t) on the width of the decomposition received significant attention.
On the upper bound side, new algorithmic techniques (such as Fast Subset Convolution, Cut & Count, and representative sets) were developed to obtain improved algorithms. For lower bounds, conditional complexity results were given ruling out certain forms of running times. Lokshtanov, Marx, and Saurabh [22] considered problems that are known to be solvable in time c^t · n^{O(1)} if a tree decomposition of width t is given in the input, and showed that these algorithms are essentially optimal, as no algorithm with running time (c − ε)^t · n^{O(1)} can exist for any ε > 0, assuming the Strong Exponential-Time Hypothesis (SETH). In particular, for Vertex Coloring with c colors, the textbook c^t · n^{O(1)} algorithm based on dynamic programming cannot be improved to (c − ε)^t · n^{O(1)}. By now, there is a growing collection of tight lower bounds of this form in the literature [2, 6, 7, 11, 20, 23, 26, 27].
Vertex coloring with c colors can be seen as a homomorphism problem. Given graphs G and H, a homomorphism is a mapping f: V(G) → V(H) that preserves the edges of G, that is, uv ∈ E(G) implies f(u)f(v) ∈ E(H). Let us observe that a c-coloring of G can be seen equivalently as a homomorphism from G to K_c, the complete graph with c vertices. Thus generalized coloring problems defined by homomorphisms to a fixed graph H were intensively studied in the combinatorics literature and subsequently from the viewpoint of computational complexity [14, 15, 17-19, 24, 25]. The homomorphism problem can be generalized from graphs to arbitrary relational structures, giving a very direct connection to constraint satisfaction problems (CSPs), which are often described using the terminology of homomorphisms [13, 17-19].
Introducing lists of allowed images gives a more robust variant of homomorphism problems: formally, given graphs G and H, a list assignment is a function L: V(G) → 2^{V(H)}. Then a list homomorphism from (G, L) to H is a homomorphism f: V(G) → V(H) that additionally respects the lists, that is, f(v) ∈ L(v) for every v ∈ V(G). Studying the list version of a homomorphism problem can be seen as analogous to studying the conservative version of CSP, where every unary constraint is allowed [1, 4, 5]. The study of conservative CSP often served as a starting point before more general investigations, motivating the exploration of the list version of homomorphism problems.
The polynomial-time solvable cases of the (list) homomorphism problem are well understood. Nešetřil and Hell [15] showed that the non-list problem is polynomial-time solvable if H is bipartite or has a loop, and NP-hard when restricted to any other H. The complexity of the list version was characterized by Feder, Hell, and Huang [12]: it is polynomial-time solvable if H is a so-called bi-arc graph, and NP-hard for every other fixed H. But are there nontrivial algorithmic ideas that can help us obtain improved running times for the NP-hard cases? Similarly to coloring, the problem of finding a (list) homomorphism to H can be solved in time |V(H)|^t · n^{O(1)} if G is given with a tree decomposition of width t. Can this straightforward dynamic programming algorithm be improved for a given fixed H, and if so, by how much?
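For intuition, the straightforward dynamic program is easiest to state when G is a tree (i.e., a decomposition of width 1): process G bottom-up and count, for each possible image of a vertex, the extensions to its subtree. Below is a minimal Python sketch (the adjacency-dict representation and function name are ours, not from the paper; a loop at h is represented by h appearing in H_adj[h]):

```python
def count_list_homs_tree(tree_adj, root, H_adj, lists):
    """Count list homomorphisms from a tree G to a graph H.

    tree_adj: adjacency dict of the tree G; H_adj: adjacency dict of H,
    with a loop at h represented by h appearing in H_adj[h];
    lists: maps each vertex of G to its allowed images L(v) in V(H).
    """
    def dp(v, parent):
        # counts[h] = number of list homomorphisms of the subtree rooted
        # at v that map v to h
        counts = {h: 1 for h in lists[v]}
        for child in tree_adj[v]:
            if child == parent:
                continue
            child_counts = dp(child, v)
            for h in list(counts):
                # the child may map to any H-neighbor of h on its own list
                counts[h] *= sum(child_counts.get(hc, 0) for hc in H_adj[h])
        return counts

    return sum(dp(root, None).values())
```

On the path (a − b − c) with H = K_2 and full lists this returns 2, the two proper 2-colorings. For width t, the same idea keeps one table per bag of the decomposition, with up to |V(H)|^{t+1} entries, giving the |V(H)|^t · n^{O(1)} bound.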
This question was resolved first by Egri, Marx, and Rzążewski [11] for the list homomorphism problem in the special case when H is reflexive (that is, every vertex has a loop), which was extended by Okrasa, Piecyk, and Rzążewski [26] to every H (where each vertex may or may not have a loop). It turns out that there are a couple of algorithmic ideas that can be used to obtain c^t · n^{O(1)} time algorithms with c < |V(H)|. In particular, there are delicate notions of decompositions of H such that, if they are present, then the problem can be reduced in a nontrivial way to multiple instances with some graph H′ having fewer vertices than H. To formalize the optimality of these ideas, a combinatorial parameter i*(H) was introduced, and it was shown that the problem can be solved in time i*(H)^t · n^{O(1)}, but there is no (i*(H) − ε)^t · n^{O(1)} time algorithm for any ε > 0, assuming the SETH. This means that the technical ideas behind the i*(H)^t · n^{O(1)} time algorithm already capture all the possible algorithmic insights that can be exploited when solving the problem on a given tree decomposition. A similar tight result was obtained by Okrasa and Rzążewski [27] for the non-list version of the problem. In that version, a different set of algorithmic ideas becomes relevant (reduction to a homomorphic core and factorizations), and the optimality of these ideas was proved assuming not only the SETH, but also two long-standing conjectures from algebraic graph theory.
A well-known phenomenon in computational complexity is that even if it is possible to find a solution to a combinatorial problem efficiently, counting the number of solutions can be hard. The most notable example is the perfect matching problem in bipartite graphs: finding a perfect matching is polynomial-time solvable, but counting the number of perfect matchings is #P-hard by the seminal result of Valiant [29]. There are algorithmic ideas that can be generalized from decision to counting (for example, simple forms of dynamic programming), but others cannot. In particular, arguments of the form "if there is a solution, then there is a solution with property P, hence we only need to look for solutions with property P" do not immediately generalize to counting, as they would fail to count the potential solutions that do not have property P. Unfortunately, the i*(H)^t · n^{O(1)} time algorithm of Okrasa, Piecyk, and Rzążewski [26] for list homomorphism heavily relies on such arguments. Careful observation shows that only a very limited set of algorithmic ideas remains relevant for the counting problem:
• Connected components. We may assume that G is connected: otherwise the number of solutions is the product of the number of solutions for each component of G. Furthermore, if G is connected, then every vertex of G is mapped into the same connected component of H. Thus we may assume that H is connected as well: the number of solutions is the sum of the number of solutions when restricted to each component of H.
• Identical neighborhoods. If two vertices u and v of H have identical neighborhoods, then we may pretend that only one of them, say u, is present in H. Then we need to solve a weighted problem that takes into account that whenever a vertex of G is mapped to u, we can actually choose from two different copies of u. The extension to this weighted problem can be easily implemented in the dynamic programming algorithm working over a tree decomposition. Therefore, the number of possible images of a vertex can be restricted to the maximum size of a set S ⊆ V(H) in which the neighborhoods are pairwise different. We call such a set S an irredundant set.
• Bipartite classes. If H is bipartite (and has no loops), then G has to be bipartite as well. This means that each bipartite class of G is mapped to one of the two bipartite classes of H, resulting in two cases that we can treat separately. In each case, the possible images of a vertex v of G are restricted to a bipartite class of H. Now the maximum number of possible images of v can be bounded by the maximum size of an irredundant set S that is contained in one bipartite class of H.
Our main negative result shows that these are all the algorithmic ideas that can be exploited in the list homomorphism problem for a fixed H. To state this formally, we define the following graph parameter (cf. the abstract): for a connected graph H, if H has a loop or is nonbipartite, then irr(H) is the maximum size of an irredundant set S ⊆ V(H); if H is bipartite, then irr(H) is the maximum size of an irredundant set contained in one of the bipartition classes; for disconnected H, irr(H) is the maximum of irr(C) over the connected components C of H. Note that if v has a loop, then the neighborhood of v includes v itself. This has to be carefully taken into account when interpreting different neighborhoods. For example, in a reflexive clique (where every vertex has a loop) every vertex has the same neighborhood.
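The parameter is easy to compute directly from this definition. The sketch below (our own illustrative code, not from the paper) finds the components, tests bipartiteness, treats a loop at v as putting v into its own neighborhood, and counts distinct neighborhoods on the appropriate vertex set:

```python
from collections import deque

def irr(H_adj):
    """irr(H) per the definition above; H_adj is a symmetric adjacency
    dict, with a loop at v represented by v appearing in H_adj[v]."""
    nbhd = {v: frozenset(H_adj[v]) for v in H_adj}
    best, seen = 0, set()
    for s in H_adj:
        if s in seen:
            continue
        # BFS over the component of s, attempting a 2-coloring
        color = {s: 0}
        comp, queue, bipartite = [s], deque([s]), True
        while queue:
            u = queue.popleft()
            for w in H_adj[u]:
                if w == u:                  # loop
                    bipartite = False
                    continue
                if w not in color:
                    color[w] = 1 - color[u]
                    comp.append(w)
                    queue.append(w)
                elif color[w] == color[u]:  # odd cycle
                    bipartite = False
        seen.update(comp)
        if any(v in H_adj[v] for v in comp) or not bipartite:
            val = len({nbhd[v] for v in comp})
        else:
            val = max(len({nbhd[v] for v in comp if color[v] == side})
                      for side in (0, 1))
        best = max(best, val)
    return best
```

For example, it returns 3 on K_3, 2 on the path P_4, and 1 on the biclique K_{1,2}; the last value matches the polynomial-time solvable case irr(H) = 1.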
For a fixed graph H (possibly with loops), we denote by #LHom(H) the problem of counting the number of list homomorphisms from the given (G, L) to H. Our main result shows for #LHom(H) that irr(H) is indeed the best possible base of the exponent that can appear in the running time. We obtain the lower bound under the #SETH, the counting version of the SETH, which is the natural variant of the assumption for lower bounds on counting problems (see [8]). Note that the SETH implies the #SETH, thus proving a result under the latter assumption makes it somewhat stronger.
Theorem 1.2. Let H be a graph that has a component which is neither a biclique nor a reflexive clique. On n-vertex instances of treewidth tw, the #LHom(H) problem
1. can be solved in time irr(H)^{tw} · n^{O(1)}, provided that a tree decomposition of width tw is given as part of the input;
2. cannot be solved in time (irr(H) − ε)^{tw} · n^{O(1)} for any ε > 0, even if a tree decomposition of width tw is given as part of the input, unless the #SETH fails.
While the counting complexity of homomorphism problems is well studied, we may argue that our lower bound brings a new level of understanding of the real hardness of the problem. It is easy to deduce from earlier results that if H has a component that is neither a biclique nor a reflexive clique, then H contains an induced subgraph H′ on at most 4 vertices such that #LHom(H′) is #P-hard. As we can use the lists to restrict the problem to V(H′), it follows that #LHom(H) is #P-hard as well. Dyer and Greenhill [10] showed that for such an H the homomorphism counting problem is #P-hard even if no lists are allowed. Since one cannot use the lists to restrict the problem to V(H′), this proof is significantly more complicated. What all these proofs have in common is that they identify only one level of complexity: #P-hardness. Intuitively, counting 10-colorings feels harder than counting 3-colorings, as we need to consider a larger number of possible values at each vertex. Saying that both problems are #P-hard ignores this perceived difference of hardness. On the other hand, Theorem 1.2 establishes, in a formal way, a difference in complexity between #LHom(K_10) and #LHom(K_3): the best possible base is 10 in the former and 3 in the latter.
Typical #P-hardness proofs use as the basis of the reduction a few known #P-hard problems, such as counting independent sets or counting 3-colorings. Once it is established that the problem can express one of these hard problems, the proof can stop, as it is irrelevant whether some even harder problem can also be expressed. On the other hand, the lower bound proof of Theorem 1.2 has to delve deeper into the complexity of the problem, as it should extract as much hardness from the problem as possible. So instead of trying to show that the problem can express a simple relation corresponding to, e.g., independent set or 3-coloring, our main goal is to show that, in a formal sense, #LHom(H) can express every relation over a domain of size irr(H). Going beyond the context of bounded-treewidth graphs, the main message of this paper is that the #LHom(H) problem is sufficiently rich to express all such relations.

Proof overview
As described above, the algorithmic part of the main result is fairly simple: we need only a few changes to the standard dynamic programming algorithm to take into account connected components, vertices with identical neighborhoods, and bipartiteness. Therefore, we focus on the main steps of the lower bound proof in this section.
From SAT to CSP. The #SETH states that for every ε > 0, there is a d ≥ 1 such that n-variable #d-SAT (i.e., counting satisfying assignments of a Boolean formula where every clause has d literals) cannot be solved in time O((2 − ε)^n). Therefore, we need to show that the hypothetical fast algorithm for #LHom(H) would give an algorithm for #SAT violating this lower bound. It will be convenient to introduce an intermediate problem. By #CSP(q, r), we denote the problem of counting solutions of a CSP instance with domain [q] and each constraint having arity at most r. It follows from the work of Lampis [21] that if for some q ≥ 2 there is an ε > 0 such that #CSP(q, r) can be solved in time (q − ε)^n · (n + m)^{O(1)} for every r, then the #SETH fails. Therefore, to prove our main lower bound for #LHom(H), we take a #CSP(q, r) instance with q = irr(H) and show that it can be reduced to #LHom(H).
Focusing on bipartite H. Similarly to [26], first we prove the main result for bipartite H; then it is a surprisingly simple task to lift the result to general H. Given a (not necessarily bipartite) graph H, we define the associated bipartite graph H* in the following way (see Figure 1): for every vertex v of H, there are two vertices v′, v″ in H*, and for every edge uv ∈ E(H) we introduce the edges u′v″ and u″v′ (in particular, a loop at v gives the edge v′v″). It can be shown that irr(H*) = irr(H) and the lower bound for #LHom(H) can be obtained by a simple reduction from #LHom(H*). Thus in the following we can assume that H is bipartite. Furthermore, we may assume that H is irredundant, that is, any two vertices have distinct neighborhoods. It is easy to see that the lower bound for irredundant H can be extended to the general case.
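Concretely, we read H* as the standard bipartite double cover of H (an assumption on our part, consistent with irr(H*) = irr(H)); a sketch:

```python
def bipartite_double_cover(H_adj):
    """H*: vertices (v, 0) and (v, 1) for each vertex v of H, with an
    edge between (u, 0) and (v, 1) whenever uv is an edge of H (a loop
    at u gives the edge (u, 0)-(u, 1)); H_adj is a symmetric adjacency
    dict with loops represented by u in H_adj[u]."""
    star = {}
    for v in H_adj:
        star[(v, 0)] = [(w, 1) for w in H_adj[v]]
        star[(v, 1)] = [(w, 0) for w in H_adj[v]]
    return star
```

For H = K_3 this produces a 6-cycle; H* is bipartite by construction, the two sides being given by the second coordinate.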
New relations. An input to #LHom(H) can be seen as a CSP instance where the vertices are variables with domain V(H) and each edge is a binary constraint restricting the combination of values appearing on two vertices. We can augment the problem by allowing a fixed finite set R = {R_1, ..., R_c} of other relations in the instance. Of course, introducing new relations may make the problem harder. If for every 1 ≤ i ≤ c we can provide an appropriate reduction from the problem augmented with {R_1, ..., R_i} to the problem augmented only with {R_1, ..., R_{i−1}}, then this chain of reductions shows that the problem augmented with R is not harder than the original #LHom(H) problem; hence any lower bound proved for the augmented problem holds for #LHom(H) as well. Therefore, we start with #LHom(H) and try to introduce new relations R_i one by one, in a way that does not make the problem any harder. We do this until we can introduce every r-ary relation over a domain S of size q = irr(H). At this point, the lower bound for #CSP(q, r) applies to our augmented problem, and hence to the original #LHom(H) problem as well.
Gadgets and interpolation. How can we show that introducing a new relation R_i does not make the problem harder? This can be done by showing that each occurrence of the R_i relation can be replaced by an appropriate gadget that, in a certain sense, simulates this relation. A gadget is a small instance where a subset (x_1, ..., x_r) of the vertices is distinguished as the interface. Now if we fix the images f(x_1) = v_1, ..., f(x_r) = v_r, then, for each vector (v_1, ..., v_r), there is some number of ways the mapping f can be extended to a list homomorphism from the gadget to H. If it so happens that for every (v_1, ..., v_r) ∈ R_i there is exactly one extension, and for every (v_1, ..., v_r) ∉ R_i there are zero extensions, then this gadget effectively expresses a constraint with the relation R_i on the vertices (x_1, ..., x_r). Now we can replace every occurrence of R_i with a copy of this gadget, showing that introducing R_i does not make the problem harder. However, it is rare that a gadget can express the new relation R_i in such a clean way. Fortunately, the nature of counting problems allows us to use gadgets in a much more general setting. Suppose the number of possible extensions for every (v_1, ..., v_r) ∈ R_i is always an integer from a set A (for example, A = {4, 5, 8, 11}), while the number of extensions for (v_1, ..., v_r) ∉ R_i is always an integer from a set B (for example, B = {3, 7, 9}). Suppose that every a ∈ A and b ∈ B are coprime (that is, there is no prime p dividing both an element of A and an element of B); this condition holds in our example. Then a polynomial interpolation technique of Dyer and Greenhill [10] allows us to count those assignments where (v_1, ..., v_r) ∈ R_i is satisfied, effectively introducing a constraint with relation R_i.
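The interpolation step can be sketched numerically: attaching k parallel copies of the gadget makes an assignment with extension count w contribute w^k, so the totals for k = 1, ..., |W| form a nonsingular (Vandermonde-type) linear system in the multiplicities of each w. A toy Python sketch with made-up counts (weights 4 and 5 taken from A, weight 3 from B):

```python
from fractions import Fraction

def recover_multiplicities(totals, weights):
    """Solve sum_w c_w * w**k = totals[k-1] for k = 1..len(weights) by
    Gauss-Jordan elimination over the rationals (the weights are
    distinct and nonzero, so the system is nonsingular)."""
    n = len(weights)
    M = [[Fraction(w) ** (k + 1) for w in weights] for k in range(n)]
    t = [Fraction(x) for x in totals]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        t[col], t[piv] = t[piv], t[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
                t[r] -= f * t[col]
    return [t[i] / M[i][i] for i in range(n)]

# Made-up instance: 2 assignments have extension count 4 and 1 has
# count 5 (these satisfy R_i); 3 assignments have count 3 (these do not).
weights = [3, 4, 5]
totals = [sum(c * w ** k for c, w in zip((3, 2, 1), weights))
          for k in (1, 2, 3)]          # what the k-copy instances report
mults = recover_multiplicities(totals, weights)
satisfying = mults[1] + mults[2]        # multiplicities of the A-weights
```

Here the recovered multiplicities are 3, 2, and 1, so the number of assignments satisfying R_i is 3. The coprimality of A and B is what guarantees, in the actual proof, that the A-weights and B-weights cannot collide.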
A simple application of this technique allows us to express compositions of relations. Given two binary relations R_1 and R_2, their composition is defined as R_1;R_2 = {(u, w) : there is a z with (u, z) ∈ R_1 and (z, w) ∈ R_2}. Given two interface vertices v_1 and v_2, we can construct a gadget by introducing a new vertex z, adding the constraint R_1 on v_1 and z, and adding the constraint R_2 on z and v_2. Now if the interface vertices are assigned a pair of values from R_1;R_2, then there is at least one extension to z; if they are assigned a pair of values not from R_1;R_2, then there is no extension. Thus by the technique mentioned in the previous paragraph, we can express the relation R_1;R_2.
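Written out, the composition and the gadget's extension counts look as follows (illustrative code; the extension count for (u, w) is the number of witnesses z, which is nonzero exactly when (u, w) ∈ R_1;R_2):

```python
def compose(R1, R2):
    """The composition R1;R2 as a set of pairs."""
    return {(u, w) for (u, z) in R1 for (z2, w) in R2 if z == z2}

def extensions(u, w, R1, R2):
    """Number of images for the middle vertex z of the gadget that puts
    the constraint R1 on (v1, z) and R2 on (z, v2), given f(v1) = u and
    f(v2) = w."""
    return len({z for (a, z) in R1 if a == u}
               & {z for (z, b) in R2 if b == w})
```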
Relations on a P_4. The path on four vertices (a P_4) is the smallest example of a connected bipartite graph H that is not a complete bipartite graph, and hence the #LHom(H) problem is #P-hard. Intuitively, #P-hardness should mean that whenever a P_4 appears in H, we should be able to express arbitrarily complicated constraints when restricted to these vertices; however, this has not been stated explicitly in earlier work. We make this expectation formal by showing that if (a − b − c − d) is a P_4 appearing in H, then any relation R ⊆ {a, c}^r can be simulated with appropriate gadgets. For this statement it is essential that we consider the counting problem; an analogous statement does not hold for the decision version. To see this, observe that in the path (a − b − c − d), the neighborhood of c is a superset of the neighborhood of a. Thus if the list of a vertex contains both a and c, then a in a solution can always be replaced by c. This makes it impossible to simulate even very simple relations such as R = {aa, cc} in the decision version, since allowing aa and cc would automatically allow ac and ca as well.
We will be particularly interested in forcers where α and β are the vertices a and c of a P_4 (a − b − c − d). Informally, an (x, y, S)-forcer with respect to (α, β) is a relation R ⊆ S × {α, β} that relates every element of S to at least one of α and β, relates x only to α, and relates y only to β. The following claim will be crucial for our proof: if we can show that, for every x, y ∈ S, there are good (x, y, S)-forcers with respect to (a, c) of the same P_4, then we can realize any r-ary relation R over S. Therefore, if we choose an S with |S| = irr(H) = q, then we can reduce #CSP(q, r) to our problem, which was our main goal. Let us sketch how to prove the claim by realizing an r-ary relation R with the help of the forcers. Let s = |S|. Intuitively, by applying all the s(s − 1) possible (x, y, S)-forcers on a vertex v, we can extract s(s − 1) bits of information about the value of v. These bits are sufficient to identify the value of v, as for any two different values x ≠ y, one of the bits will be different. Therefore, we can translate the values of r variables into rs(s − 1) bits and then implement R by an rs(s − 1)-ary relation over {a, c}. As a and c are on a P_4, we have already seen that such a relation can be realized.
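The bit-extraction argument can be checked on a toy scale. Below we model an (x, y, S)-forcer schematically as the relation that sends x only to a, y only to c, and every other element of S to both (this captures only the combinatorial shape of a forcer, not the paper's gadget); the tuple of allowed targets over all s(s − 1) forcers then identifies each value of S uniquely:

```python
from itertools import permutations

def forcer(x, y, S):
    """Schematic (x, y, S)-forcer with respect to (a, c): x may only be
    moved to 'a', y only to 'c', every other element of S to both."""
    R = set()
    for s in S:
        if s != y:
            R.add((s, 'a'))
        if s != x:
            R.add((s, 'c'))
    return R

def signature(v, S):
    """Allowed targets of value v under every (x, y, S)-forcer."""
    return tuple(frozenset(t for (u, t) in forcer(x, y, S) if u == v)
                 for x, y in permutations(sorted(S), 2))

S = {1, 2, 3}
sigs = {v: signature(v, S) for v in S}
# sigs has pairwise distinct values: the coordinate of the (x, y)-forcer
# pins x to {'a'} and y to {'c'}, so any two values differ somewhere.
```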
The connected structure of P_4's. As we have seen in the previous paragraph, a crucial step in the proof is to realize an (x, y, S)-forcer for every x, y ∈ S. An important detail is that we need all these (x, y, S)-forcers to be with respect to the same (a, c). But what happens if we have two different P_4's: can an (x, y, S)-forcer with respect to (a′, c′) of one P_4 be turned into one with respect to (a, c) of the other? We can show that if the two P_4's are "adjacent" in the sense that they share two vertices in the same bipartite class of H, then this is indeed possible. Even if the two P_4's are not adjacent, but are in the "same connected component" under this definition of adjacency, then multiple applications of this argument show that a forcer with respect to (a′, c′) can be turned into a forcer with respect to (a, c). A graph-theoretic analysis shows that if H is connected and irredundant, then the P_4's have only a single connected component. Therefore, it does not matter in the definition of good forcers exactly where a and c are, as long as they are in a P_4. We can show that even this last condition is unnecessary: a forcer with respect to any two distinct vertices (α, β) that are on the same side can be turned into a forcer with respect to (a, c).
The inductive proof. We prove by induction that there is a good (x, y, S)-forcer for every S ⊆ V(H) and x, y ∈ S: when proving the statement for (x, y, S), we assume that it is true for every (x′, y′, S′) where either |S′| < |S|, or |S′| = |S| and dist({x′, y′}, S′ \ {x′, y′}) < dist({x, y}, S \ {x, y}). To explain the main ideas of the induction step, we need two new concepts. A weaker version of the forcer is the distinguisher, which additionally may allow (y, α) ∈ R as well. For example, the following relation is an (x, y, {x, y, z, q, w})-distinguisher with respect to (α, β): R = {xα, yα, yβ, zα, zβ, qα, wβ}.
As another application of the interpolation technique, we show that if we can realize a distinguisher, then we can realize a forcer as well, that is, we can eliminate the unwanted possibility (y, α).
For a partition (X, Y) of S, the (X, Y)-partitioner with respect to (α, β) is the relation (X × {α}) ∪ (Y × {β}). It can be seen as a stricter version of the (x, y, S)-forcer: it is now specified for every element of S whether the relation should move it to α or to β. But we can show that if we realize an (x, y, S)-forcer for every x, y ∈ S, then we can realize an (X, Y)-partitioner for every partition (X, Y) of S; the argument is similar to how forcers were used to realize arbitrary relations over S. Now let us overview the main cases of the inductive proof.
• Base case: S = {x, y}. If x and y have a common neighbor, then x and y are part of a P_4 (as we assumed that H is irredundant) and we essentially have an (x, y, S)-distinguisher, which we can turn into an (x, y, S)-forcer. If x and y have distance at least 3, then we can use the edge relation E of H to move x and y closer to each other, until they have a common neighbor.
• Case 1: S contains a vertex z at distance two from {x, y}. Using that z has a neighbor that is also adjacent to one of x and y, we can define a set S′ with S ⊆ N(S′) and |S′| < |S| (see Figure 2). Furthermore, we can define S′ in a way that, without loss of generality, the set N(y) \ N(x) is nonempty. As |S′| < |S|, the induction hypothesis implies that there are (x′, y′, S′)-forcers for every x′, y′ ∈ S′. Therefore, we may also assume that there are (X′, Y′)-partitioners for every partition (X′, Y′) of S′. Let R be an (N(x) ∩ S′, S′ \ N(x))-partitioner with respect to some (α, β). If E is the edge relation of H, then we claim that the composition E;R is an (x, y, S)-distinguisher. Indeed, it moves x to α (as E has to move x into N(x) ∩ S′ first, which is moved to α by R), y can move to β (as E can move y into the nonempty (N(y) \ N(x)) ∩ S′, which is moved to β by R), and every element of S can move to at least one of α and β (as S ⊆ N(S′)).
Then this (x, y, S)-distinguisher can be turned into an (x, y, S)-forcer, as required.
• Case 2: Every vertex of S \ {x, y} is at distance at least 4 from {x, y}. Let P be a shortest path between {x, y} and S \ {x, y}. There are a few similar cases to consider, but let us assume, as a representative case, that P goes from x to some vertex z, and there is a vertex y′ ∈ N(y) \ N(x) (see Figure 2). Let x′ be the neighbor of x on P and let z′ be the neighbor of z on P. We define S′ starting from {x′, y′, z′} and adding a neighbor of each vertex of S \ {x, y, z}; clearly, we have |S′| ≤ |S|. Furthermore, dist({x′, y′}, S′ \ {x′, y′}) is strictly less than dist({x, y}, S \ {x, y}): the vertices x′ and z′ are closer to each other than x and z. Thus by the induction hypothesis, an (x′, y′, S′)-forcer R exists with respect to some (α, β). Then we claim that the composition E;R is an (x, y, S)-distinguisher. Indeed, x is moved to α (as x′ is the only neighbor of x in S′, which is moved to α by R), y can move to β (as y′ is a neighbor of y and R can move y′ to β), and every element of S can move to at least one of α and β (as S ⊆ N(S′)).
Then this (x, y, S)-distinguisher can be turned into an (x, y, S)-forcer, as required.

Preliminaries
For an integer n, we define [n] := {1, 2, ..., n}. Given a set S, by 2^S we denote the power set of S.
Graph theory. By (p_1 − p_2 − ... − p_q) we denote the path on q vertices whose consecutive vertices are p_1, p_2, ..., p_q. Let H be a graph. For vertices u, v ∈ V(H), dist(u, v) is the length of a shortest path between u and v in H.
The Gaifman graph of a structure H has vertex set V(H), and distinct u, v ∈ V(H) are adjacent if and only if there is a relation R ∈ R(H) that involves both u and v. We define the treewidth and a tree decomposition (resp., the pathwidth and a path decomposition) of H as the treewidth and a tree decomposition (resp., the pathwidth and a path decomposition) of the Gaifman graph of H.
Given two structures G and H with the same signature σ, a function f: V(G) → V(H) is a homomorphism from G to H if it preserves every relation, that is, for each relation symbol R of σ of some arity p and each tuple (x_1, ..., x_p) in the relation R of G, the tuple (f(x_1), ..., f(x_p)) is in the relation R of H. We also use the usual list notation: a list assignment L maps elements of V(G) to subsets of V(H), and a homomorphism f from (G, L) to H must additionally satisfy f(x) ∈ L(x) for each x in the domain of L. Note that x being outside the domain of L is as restrictive as having L(x) = V(H). By hom((G, L) → H) we denote the number of homomorphisms from (G, L) to H.
Let H = (U, S) be a fixed structure and let J = (Z, F) be a structure with the same signature as H. For some integer p, let x = (x_1, x_2, ..., x_p) ∈ Z^p and y = (y_1, y_2, ..., y_p) ∈ U^p. Then hom((J, x) → (H, y)) is the number of homomorphisms from J to H that map x_i to y_i for each i ∈ [p]. A p-tuple of distinguished (not necessarily pairwise distinct) elements of Z is also called an interface (of size p) of J.
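A brute-force reference implementation of hom((J, x) → (H, y)) for graphs with lists is handy for checking gadget extension counts (exponential time; the representation and names are ours, not the paper's):

```python
from itertools import product

def hom_pinned(J_adj, lists, H_adj, interface, targets):
    """Count list homomorphisms f from the graph J to the graph H with
    f(x_i) = y_i for each interface element, by exhaustive enumeration.
    J_adj and H_adj are symmetric adjacency dicts; lists maps each
    vertex of J to its allowed images in V(H)."""
    pin = dict(zip(interface, targets))
    verts = list(J_adj)
    count = 0
    for values in product(*(lists[v] for v in verts)):
        f = dict(zip(verts, values))
        if any(f[v] != h for v, h in pin.items()):
            continue  # violates the interface pinning
        if all(f[w] in H_adj[f[u]] for u in J_adj for w in J_adj[u]):
            count += 1  # every edge of J is preserved
    return count
```

For J a single edge with both lists {0, 1}, H = K_2, and the interface pinning one endpoint to 0, there is exactly one extension.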
Counting Problems. Let H = (V, R) be a structure. Then #Hom(H) is the problem that takes as input a structure G with the same signature as H and asks to determine hom(G → H).
Note that the power set 2^V is the set of all unary relations over V. Then #LHom(H) is the problem #Hom(H^+), where H^+ := (V, R ∪ (2^V \ {∅})). Intuitively, H^+ is the structure obtained from H by adding all non-empty unary relations over the universe of H. In the special case where the structure H is a graph H, we simplify the notation by writing instances of #LHom(H) as (G, L), where G is a graph and L is a function from the vertices of G to subsets of V(H) specifying the unary relations of the input (lists). For convenience, we assume that for such instances every vertex of G has a list. Then #LHom(H) on input (G, L) asks to determine hom((G, L) → H).
Pathwidth and pathwidth-preserving reductions. Let A and B be computational problems that take as input some structure given along with a path decomposition. A pathwidth-preserving reduction from A to B is a polynomial-time Turing reduction from A to B such that the corresponding algorithm, if executed on an instance G of A given with a path decomposition of width at most t, makes B-oracle calls only on structures of size polynomial in ||G||, given with a path decomposition of width at most t + O(1).
Note that pathwidth-preserving reductions are transitive in the sense that a pathwidth-preserving reduction from A to B together with a pathwidth-preserving reduction from B to C implies that there is a pathwidth-preserving reduction from A to C. The concept of realizable relations is transitive as well: if a relation R is realizable by a structure H = (V, R), and a relation R′ is realizable by (V, R ∪ {R}), then R′ is realizable by H.
However, there is a caveat here: we are only allowed to combine a constant number of pathwidth-preserving reductions to make sure that the total increase of the pathwidth bound is constant as well.
We say that a relation R over V is realizable by the structure H = (V, R) if there is a pathwidth-preserving reduction from #LHom((V, R ∪ {R})) to #LHom(H). Note that, since we can use arbitrary lists, if H′ is a structure that contains H as a substructure, then we can see each instance of #LHom(H) as an instance of #LHom(H′) in which each element of the input universe has a list that is a subset of V. This gives us a simple pathwidth-preserving reduction from #LHom(H) to #LHom(H′). Therefore, if a relation R is realizable by H, it is also realizable by H′. We will use this fact implicitly throughout this work.

Relations and Operators
Let us now define some notation concerning relations.
• Given a graph H and two sets U, W ⊆ V(H), we define the restricted edge relation {(u, w) ∈ U × W : uw ∈ E(H)}. Note that for a graph H, sets U, W ⊆ V(H), and u ∈ U, the images of u under this relation are exactly the vertices of N(u) ∩ W. The following lemma shows three simple ways to obtain realizable relations.
Lemma 2.1. Let H = (V, R). The following relations R are realizable by H: every relation in R; the edge relation of a graph H restricted to U × W for sets U, W ⊆ V, where R contains E(H) for some graph H with vertex set V; and the composition of two relations realizable by H. The proof of Lemma 2.1 is postponed to the next subsection, where we introduce the necessary tools to prove the third item.

Pathwidth-Preserving Reductions and Interpolation
One well-known tool for proving hardness of exact counting problems is interpolation. The following lemma is a modification of [10, Corollary 3.3] that generalizes to structures and makes the reduction pathwidth-preserving. Note that the conditions on the numbers are slightly different from the conditions on the weights in [10, Corollary 3.3].
Lemma 2.3. Let H = (V, R) be a structure with signature σ and, for some positive integer p, let R ⊆ V^p be a relation that is not in R. Suppose that there is a structure J with signature σ and an interface x of size p such that, for each f ∈ R and g ∈ V^p \ R, we have the following:
1. hom((J, x) → (H, f)) ≠ 0.
2. hom((J, x) → (H, g)) ≠ 1.
3. If hom((J, x) → (H, g)) ≠ 0, then there is no prime that divides both hom((J, x) → (H, f)) and hom((J, x) → (H, g)).
Then R is realizable by H.
Proof. In order to show that R is realizable by H, we need to show, for H′ = (V, R ∪ {R}), that there is a pathwidth-preserving reduction from #LHom(H′) to #LHom(H).
Consider an input G = (U, S) of the #LHom(H′) problem, given along with a path decomposition of width t. Slightly overloading notation, we will use R to denote both the relation and its symbol, so the corresponding relation in G is R_G. We set n = ||G|| and m = |R_G|. Note that if m = 0, then we are done, as G can be cast as a structure with signature σ and therefore can be interpreted as an instance of the #LHom(H) problem.
Let k be a positive integer. For each (v_1, v_2, …, v_p) ∈ R_G, we introduce k copies of the structure J with interface x = (x_1, …, x_p). For each copy we identify x_1, …, x_p with v_1, v_2, …, v_p, respectively. Then we remove the relation R_G from G. Let us call the obtained structure G_k^J. Note that the signature of G_k^J is σ.
Let Φ be the class of functions from U to V that respect all relations from σ (but not necessarily R).Let Φ + be the class of functions in Φ that respect R.
For each function f ∈ Φ, define
w(f) := ∏_{(v_1, …, v_p) ∈ R_G} hom((J, x) → (H, (f(v_1), …, f(v_p)))).
We also say that f is a w(f)-function. For each integer w, by Φ(w) we denote the set of w-functions in Φ. Define
W_+ := {w(f) | f ∈ Φ_+},  W_− := {w(f) | f ∈ Φ \ Φ_+} \ {0},  and  W := W_+ ∪ W_−.
First note that for f ∈ Φ_+, we have w(f) ≠ 0 according to property 1 as given in the statement of the lemma. Thus,
hom(G → H′) = |Φ_+| = Σ_{w ∈ W_+} |Φ(w)|.  (1)
We now argue that W_+ and W_− are disjoint. Let w_+ ∈ W_+ and w_− ∈ W_−. We have w_− ≠ 0, and by properties 2 and 3 as given in the statement of the lemma, we observe that w_− has a prime factor that does not divide w_+. Therefore, w_+ ≠ w_−. We can conclude that W_+ and W_− partition W. Furthermore, we have
hom(G_k^J → H) = Σ_{w ∈ W} |Φ(w)| · w^k.  (2)
Let a = (w)_{w ∈ W} and b = (hom(G_k^J → H))_{k ∈ [|W|]}. Note that a factor of the form hom((J, x) → (H, f(v))) can have at most |V|^p different values, each of which can have multiplicity between 0 and m in the product w(f). Thus, there are at most (m + 1)^{|V|^p} distinct values for w(f), and the set W can be computed in time polynomial in n.
So, the tuple a can be computed in time polynomial in n. The tuple b can be computed using the algorithm that solves #LHom(H) for each k ∈ [|W|]. Consider a tuple (v_1, v_2, …, v_p) ∈ R_G and note that it corresponds to a clique in the Gaifman graph of G, so in a path decomposition X of this graph there is a bag that contains {v_1, v_2, …, v_p}. Let B be the first such bag. We modify X by inserting right after B the bags B_1, B_2, …, B_k, where B_i is the union of B and the vertex set of the i-th copy of J with interface (v_1, v_2, …, v_p). Clearly the obtained sequence X′ is a path decomposition of G_k^J. As the size of J depends only on R and H, and thus is a constant, we obtain that the width of X′ is t + O(1).
Thus, b can be computed using |W| calls to the #LHom(H) oracle, and we have shown that |W| ∈ n^{O(1)} and that each of the oracle calls is on an instance given with a path decomposition of width at most t + O(1).
Note that the values of a are non-zero by definition of W. Using (2) and the fact that a contains non-zero pairwise distinct values, we can apply Lemma 2.2. Thus, we obtain x = (|Φ(w)|)_{w ∈ W} from a and b in time polynomial in n. By computing, for each f ∈ R, the value of hom((J, x) → (H, f)), one can decide in polynomial time which values of W belong to W_+ (recall that each w_− ∈ W_− has a prime factor that does not appear in any value hom((J, x) → (H, f)) with f ∈ R). Thus, using (1), the sought-for number of (list) homomorphisms can be computed from x = (|Φ(w)|)_{w ∈ W} in time polynomial in n.
The following simple corollary formalizes the modeling of relations by gadgets.
Corollary 2.4. Let H = (V, R) be a structure with signature σ and, for some positive integer p, let R ⊆ V^p be a relation whose relation symbol is not in σ. Suppose that there is a structure J with signature σ and an interface x of size p such that, for each f ∈ R and g ∈ V^p \ R, we have hom((J, x) → (H, f)) ≠ 0 and hom((J, x) → (H, g)) = 0. Then R is realizable by H.

Now we are ready to prove Lemma 2.1, which we recall here.
Lemma 2.1. Let H = (V, R). The following relations R are realizable by H:
1. R = R_1 ∩ R_2, where R_1, R_2 ∈ R are of the same arity,
2. R_{U→W}, where R contains E(H′) for some graph H′ with vertex set V, and U, W ⊆ V, and
3. R = R_1 ; R_2, where R_1, R_2 ∈ R are of arity 2.

Proof. For each case we will build a structure J = (U, S) with the same signature as H and an interface x, which satisfies the assumptions of Corollary 2.4.
1. Let R = R_1 ∩ R_2 for some R_1, R_2 ∈ R of the same arity p. We set U = {x_1, …, x_p}, x = (x_1, …, x_p), and R_1^J = R_2^J = {(x_1, …, x_p)}.
2. Let R = R_{U→W}. We set the universe to {x, y}, x = (x, y), E^J = {(x, y)}, L(x) = U, and L(y) = W.
3. Let R = R_1 ; R_2 for some R_1, R_2 ∈ R of arity 2. We set U = {x, y, z}, x = (x, z), R_1^J = {(x, y)}, and R_2^J = {(y, z)}.
Note that in the first two cases all vertices of J are in x, so it is straightforward to observe that hom((J, x) → (H, f)) = 1 if f ∈ R and hom((J, x) → (H, f)) = 0 otherwise. In the last case, we observe that hom((J, x) → (H, f)) > 0 if f ∈ R and hom((J, x) → (H, f)) = 0 otherwise. Thus the lemma follows from Corollary 2.4.

Hardness of counting satisfying assignments to CSP(q, r)
Recall that the #SETH states that for every ε > 0, there is a d ≥ 1 such that n-variable #d-SAT cannot be solved in time O((2 − ε)^n). Note that since d is a constant, this is equivalent to saying that n-variable m-clause #d-SAT cannot be solved in time (2 − ε)^n · (n + m)^{O(1)}.
For integers r, q ≥ 2, by CSP(q, r) we denote the CSP problem with domain [q] and each constraint of arity at most r. By #CSP(q, r) we denote the problem of counting satisfying valuations of a given instance of CSP(q, r). In this section we show the hardness of computing #CSP(q, r), conditioned on the #SETH.
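To make the object of study concrete, here is a minimal brute-force counter for #CSP(q, r). The instance encoding (a constraint as a scope of variable indices together with a set of allowed value tuples) is our own convention, and the q^n-time enumeration below is exactly the baseline that Theorem 2.5 shows cannot be substantially improved under the #SETH.

```python
from itertools import product

def count_csp(q, n, constraints):
    """Count satisfying valuations of a CSP(q, r) instance.

    A valuation assigns a value from [q] = {0, ..., q-1} to each of the
    n variables; a constraint is a pair (scope, allowed), where scope is
    a tuple of variable indices and allowed is a set of value tuples.
    """
    count = 0
    for valuation in product(range(q), repeat=n):
        if all(tuple(valuation[i] for i in scope) in allowed
               for scope, allowed in constraints):
            count += 1
    return count

# Example: domain [2], two variables, one binary "not-equal" constraint.
neq = {(0, 1), (1, 0)}
print(count_csp(2, 2, [((0, 1), neq)]))  # 2 satisfying valuations
```

The arity bound r plays no role in the brute-force counter; it only matters for the fine-grained bounds discussed below.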
The decision version of this result, conditioned on the SETH, was proven by Lampis [21].Our proof is just an adaptation of the original one to the counting world, so we will not elaborate on the details and refer the reader to the paper of Lampis [21].
Theorem 2.5. For every integer q ≥ 2 and ε > 0 there is an integer r such that the following holds. Unless the #SETH fails, #CSP(q, r) with n variables and m constraints cannot be solved in time (q − ε)^n · (n + m)^{O(1)}.
Proof. Fix q ≥ 2 and ε > 0 and suppose that for every r there is an algorithm solving n-variable m-constraint #CSP(q, r) in time (q − ε)^n · (n + m)^{O(1)}. Suppose we are given a #d-SAT instance Φ with N variables and M clauses, where d is a constant. Without loss of generality we may assume that each variable is involved in some clause.
In the reduction we will carefully choose r = r(d, q, ε) and build a #CSP(q, r) instance Ψ. To avoid confusion, we will refer to an assignment of values to the variables of Φ as a truth assignment, while an assignment of values to the variables of Ψ will be called a valuation.
First, select an integer p and a real δ > 0 such that there exists an integer t satisfying 2^t ≤ q^p and (q − ε)^{p/t} ≤ 2 − δ (see [21] for the argument that such a choice is possible). Note that δ does not depend on d. Now group the variables of Φ into γ := ⌈N/t⌉ groups, each with at most t variables. Call these groups V_1, V_2, …, V_γ. For each i ∈ [γ], we create a group X_i of p variables of Ψ. Thus the total number of variables in Ψ is n := γ · p ≤ pN/t + p. Note that the total number of truth assignments of the variables in V_i is at most 2^t ≤ q^p, so it does not exceed the number of possible valuations of X_i. Let us fix some injective function that maps each truth assignment on V_i to a distinct valuation of X_i. Now, let us define the constraints of Ψ. Consider any clause c of Φ and let V_{i_1}, V_{i_2}, …, V_{i_{d′}} for d′ ≤ d be the groups that contain the variables of c. Note that each truth assignment of the variables in V_{i_1}, V_{i_2}, …, V_{i_{d′}} that satisfies c corresponds to some valuation of the variables in X_{i_1}, X_{i_2}, …, X_{i_{d′}}. We introduce a new constraint C(c) on all variables from X_{i_1} ∪ X_{i_2} ∪ … ∪ X_{i_{d′}} that accepts only the valuations that correspond to the truth assignments that satisfy c. Summing up, Ψ has n = γ · p ≤ pN/t + p variables, m = M constraints, its domain is [q], and each constraint has arity at most r := dp.
It is straightforward to observe that Φ has a satisfying truth assignment if and only if Ψ has a satisfying valuation. However, a stronger property holds as well: there is a bijection between truth assignments that satisfy Φ and valuations that satisfy Ψ. Indeed, recall that every variable of Φ appears in some clause, and thus, for each i ∈ [γ], there is a constraint of Ψ involving all variables of X_i. Since the constraints of Ψ were defined in a way that the only accepted valuations are in one-to-one correspondence with the truth assignments of Φ, we observe that the claimed bijection between truth assignments and valuations exists. Hence, the solution to #CSP(q, r) on the instance Ψ is precisely the number of satisfying assignments of Φ, i.e., the solution of our instance of #d-SAT. Let us call our hypothetical algorithm for Ψ. Its running time is (q − ε)^n · (n + m)^{O(1)} ≤ (q − ε)^{pN/t + p} · (N + M)^{O(1)}. As t depends only on q and ε, this running time is (2 − δ)^N · (N + M)^{O(1)}, which violates the #SETH.
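The grouping step can be sketched as follows. The proof only needs some injective map from truth assignments on t Boolean variables to valuations of p variables over [q]; the base-q re-encoding below is one concrete choice of ours, not necessarily the one used in [21].

```python
from itertools import product

def encode(assignment, q, p):
    """Injectively map a tuple of t Booleans to p values from [q],
    assuming 2**t <= q**p: read the Booleans as a binary number and
    re-encode that number in base q using p digits."""
    value = 0
    for bit in assignment:
        value = 2 * value + int(bit)
    digits = []
    for _ in range(p):
        digits.append(value % q)
        value //= q
    return tuple(digits)

# With q = 3 and p = 2 we can pack t = 3 Boolean variables (2^3 <= 3^2).
images = {encode(a, 3, 2) for a in product([False, True], repeat=3)}
print(len(images))  # 8 distinct valuations, so the map is injective
```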

Counting list homomorphisms to bipartite graphs H
Let H be a bipartite graph. We say that a set S ⊆ V(H) is one-sided if it is contained in one bipartition class. Then recall from Definition 1.1 that irr(H) is the maximum size of a one-sided irredundant subset of V(H), i.e., a one-sided set in which any two vertices have different neighborhoods. We extend this to disconnected bipartite graphs by setting irr(H) to be the maximum value of irr(H′) over all connected components H′ of H. Observe that the following conditions are equivalent for every bipartite graph H: (i) irr(H) ≥ 2, (ii) H has a connected component that is not a biclique, (iii) H contains an induced P_4.
For a connected bipartite graph H, an instance (G, L) of #LHom(H) is consistent if:
• G is connected and bipartite; let X and Y be its bipartition classes,
• ⋃_{x∈X} L(x) is contained in one bipartition class of H, and ⋃_{y∈Y} L(y) is contained in the other bipartition class of H.

Algorithm
Theorem 3.1. For each bipartite graph H, the #LHom(H) problem on n-vertex instances given along with a tree decomposition of width at most t can be solved in time irr(H)^t · n^{O(1)}.
Proof. Let (G, L) be an instance of #LHom(H), where G has n vertices and is given along with a tree decomposition of width at most t. First observe that if G is not bipartite, then there is no homomorphism from G to H, thus we return 0. So from now on assume that G is bipartite.
First, assume that G and H are both connected. Let X, Y be the bipartition classes of G and A, B be the bipartition classes of H. We observe that in every homomorphism from G to H, either each vertex of X is mapped to a vertex of A and each vertex of Y is mapped to a vertex of B, or each vertex of X is mapped to a vertex of B and each vertex of Y is mapped to a vertex of A.
Thus we can reduce the problem to solving two consistent instances of #LHom(H) as follows. Let L_1 be the lists obtained from L by setting L_1(x) := L(x) ∩ A for every x ∈ X and L_1(y) := L(y) ∩ B for every y ∈ Y. Similarly, define L_2 as follows: L_2(x) := L(x) ∩ B for every x ∈ X and L_2(y) := L(y) ∩ A for every y ∈ Y. By the above reasoning, we obtain that hom((G, L) → H) = hom((G, L_1) → H) + hom((G, L_2) → H). So let us focus on computing hom((G, L_1) → H), as computing hom((G, L_2) → H) is analogous. We define lists L_1′ and an auxiliary weight function w : V(G) × V(H) → N as follows. For each v ∈ V(G), we partition the vertices of L_1(v) into subsets consisting of vertices with exactly the same neighborhood. From each such subset W we include in L_1′(v) exactly one vertex, say u, and set w(v, u) = |W|. On the other hand, for every u ∉ L_1′(v) we set w(v, u) = 0.
Observe that for each v ∈ V(G) the set L_1′(v) is irredundant and contained in one bipartition class of H. Thus, for each v ∈ V(G), we have Σ_{u ∈ L_1′(v)} w(v, u) = |L_1(v)|. Furthermore, denoting by Ω the set of all homomorphisms from (G, L_1′) to H, we observe that:
hom((G, L_1) → H) = Σ_{f ∈ Ω} ∏_{v ∈ V(G)} w(v, f(v)).  (3)
Since every list in L_1′ has size at most irr(H), a standard bottom-up dynamic programming approach can be used to compute hom((G, L_1′) → H) in time irr(H)^t · n^{O(1)}. Furthermore, one can easily modify the algorithm to actually determine the sum in (3): whenever we decide to assign color u to some vertex v ∈ V(G), we multiply the number of solutions by w(v, u). This modification does not increase the running time.

Now consider the general case that the graphs G and H are possibly disconnected. Let G_1, G_2, …, G_p be the connected components of G and let H_1, H_2, …, H_q be the connected components of H. It is straightforward to observe that
hom((G, L) → H) = ∏_{i=1}^{p} Σ_{j=1}^{q} hom((G_i, L_j) → H_j),  where L_j(v) := L(v) ∩ V(H_j).
Thus, the total running time of such an algorithm is irr(H)^t · n^{O(1)}.
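The list-reduction step (merging vertices with identical neighborhoods into one weighted representative) can be checked by brute force on a small example. This sketch is ours: it replaces the tree-decomposition dynamic programming with plain enumeration, so it only illustrates identity (3), not the claimed running time.

```python
from itertools import product

def count_hom(G_vertices, G_edges, H_adj, lists):
    """Brute-force number of list homomorphisms from G to H."""
    total = 0
    for choice in product(*(lists[v] for v in G_vertices)):
        f = dict(zip(G_vertices, choice))
        if all(f[v] in H_adj[f[u]] for u, v in G_edges):
            total += 1
    return total

def reduce_lists(H_adj, lists):
    """Keep one representative per neighborhood class in each list and
    record the class size as the weight w(v, u)."""
    new_lists, w = {}, {}
    for v, lst in lists.items():
        classes = {}
        for u in lst:
            classes.setdefault(frozenset(H_adj[u]), []).append(u)
        new_lists[v] = [cls[0] for cls in classes.values()]
        for cls in classes.values():
            w[v, cls[0]] = len(cls)
    return new_lists, w

def count_hom_weighted(G_vertices, G_edges, H_adj, lists, w):
    """Weighted count over homomorphisms into the reduced lists."""
    total = 0
    for choice in product(*(lists[v] for v in G_vertices)):
        f = dict(zip(G_vertices, choice))
        if all(f[v] in H_adj[f[u]] for u, v in G_edges):
            prod = 1
            for v in G_vertices:
                prod *= w[v, f[v]]
            total += prod
    return total

# H = K_{2,3}: vertices c, d, e are twins, and so are a, b.
H_adj = {'a': {'c', 'd', 'e'}, 'b': {'c', 'd', 'e'},
         'c': {'a', 'b'}, 'd': {'a', 'b'}, 'e': {'a', 'b'}}
G_vertices, G_edges = ['u', 'v'], [('u', 'v')]
lists = {'u': ['a', 'b'], 'v': ['c', 'd', 'e']}
reduced, w = reduce_lists(H_adj, lists)
print(count_hom(G_vertices, G_edges, H_adj, lists))                # 6
print(count_hom_weighted(G_vertices, G_edges, H_adj, reduced, w))  # 6
```

After the reduction each list has a single entry, yet the weighted count still equals the plain one, as identity (3) predicts.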

Hardness for bipartite target graphs
The following lemma is the main building block of our hardness reduction.

Lemma 3.2. Let H = (V, E) be a connected bipartite graph with irr(H) ≥ 2, and let S ⊆ V be a one-sided irredundant set. For every fixed p ≥ 1, every relation R ⊆ S^p is realizable by H.
We postpone the proof of Lemma 3.2 to Section 3.6, and now we show how it implies the lower bounds for bipartite graphs H.

Theorem 3.3. Let H be a bipartite graph with irr(H) ≥ 2. Assuming the #SETH, there is no ε > 0 such that #LHom(H) on consistent n-vertex instances given with a path decomposition of width t can be solved in time (irr(H) − ε)^t · n^{O(1)}.
Proof. Let H be as in the assumptions and let S be a maximum-size irredundant set contained in a bipartition class of some connected component H′ of H. Let q := |S| = irr(H). Note that H′ and S satisfy the assumptions of Lemma 3.2. Suppose that there is ε > 0 and an algorithm that solves every consistent n-vertex m-edge instance (G, L) of #LHom(H) that is given along with a path decomposition of G of width t in time (q − ε)^t · (n + m)^{O(1)}.
We reduce from #CSP(q, r), where r is the constant given for q and ε by Theorem 2.5. Consider an instance Ψ with variables U, domain D = [q], and let R be the set of relations used in the constraints of Ψ. Note that |R| depends only on q and r, i.e., |R| is a constant. Furthermore, the number of constraints in Ψ is polynomial in |U|.
Recall that |S| = q, so by fixing an arbitrary bijection between S and [q], we can equivalently see Ψ as an instance with domain S. In other words, the instance Ψ can be seen as a structure, which is an instance of #LHom((S, R)). Note that the pathwidth of Ψ is clearly at most |U|.
As the arity of each relation in R is at most r, by Lemma 3.2, all relations in R are realizable by H′ and thus by H. This means that there is a pathwidth-preserving reduction from #LHom((S, R)) to #LHom(H).
Let us point out that our reduction is in fact a chain of pathwidth-preserving reductions. However, the total number of steps in this chain is O(|R|), which is a constant. Thus the total increase of the pathwidth is O(1).
So, using our hypothetical algorithm for #LHom(H), we can count satisfying assignments for Ψ in time (q − ε)^{|U|} · |U|^{O(1)}, where we use the fact that q is a constant. By Theorem 2.5 the existence of such an algorithm for #CSP(q, r) contradicts the #SETH.

Special case: H = P_4
As a warm-up, let us start with the special case H = P_4. The following lemma can be seen as a restriction of Lemma 3.2 to this case. The proof will illustrate our approach.
Lemma 3.4. Consider P_4 = (a − b − c − d) and fix a positive integer q. Then any R ⊆ {a, c}^q is realizable by P_4.
Proof. We first show that the relations NEQ := {(a, c), (c, a)} and OR_q := {a, c}^q \ {c^q} are realizable by P_4. We will then use these relations to show the statement of the lemma.
First, let us focus on NEQ. We aim to use Lemma 2.3. Let J_NEQ be a five-vertex path (v_1 − v_2 − v_3 − v_4 − v_5) with interface x = (v_1, v_5) and lists L(v_1) = L(v_5) = {a, c}. Let us analyze the values of hom((J_NEQ, x) → (P_4, f)): this number counts the walks of length four in P_4 from f(v_1) to f(v_5), so hom((J_NEQ, x) → (P_4, f)) = 3 if f ∈ {(a, c), (c, a)}, hom((J_NEQ, x) → (P_4, f)) = 2 if f = (a, a), hom((J_NEQ, x) → (P_4, f)) = 5 if f = (c, c), and hom((J_NEQ, x) → (P_4, f)) = 0 otherwise.
So, for each f ∈ NEQ and g ∈ V^2 \ NEQ, we have hom((J_NEQ, x) → (P_4, f)) = 3 and hom((J_NEQ, x) → (P_4, g)) ∈ {0, 2, 5}. Thus, J_NEQ satisfies the assumptions of Lemma 2.3 and we obtain that NEQ is realizable by P_4.

Now let us consider OR_q. Define J_{OR_q} as follows. We introduce q vertices v_1, v_2, …, v_q and one additional vertex w, adjacent to all v_i's. We set L(v_i) = {a, c} for all i ∈ [q], and x = (v_1, …, v_q). For f ∈ V(P_4)^q we have hom((J_{OR_q}, x) → (P_4, f)) = 1 if f ∈ OR_q, hom((J_{OR_q}, x) → (P_4, f)) = 2 if f = c^q, and hom((J_{OR_q}, x) → (P_4, f)) = 0 otherwise. Again, J_{OR_q} satisfies the assumptions of Lemma 2.3 and we obtain that OR_q is realizable by P_4.

Finally, consider an arbitrary relation R ⊆ {a, c}^q. Let {f_1, f_2, …, f_p} = {a, c}^q \ R, and for each i ∈ [p] let R_i := {a, c}^q \ {f_i}. Clearly R = ⋂_{i=1}^{p} R_i. Thus, by Lemma 2.1, it is sufficient to show that each R_i is realizable. Fix some i and let J be the set of indices j ∈ [q] such that the j-th coordinate of f_i is a (and the other coordinates are c). If J = ∅, then R_i = OR_q and we are done. So suppose that |J| ≥ 1 and that we can already realize all relations R′ = {a, c}^q \ {f}, where the number of coordinates of f that are equal to a is smaller than |J|.
So let us choose some j ∈ J. For each tuple f ∈ {a, c}^q let f′ be the tuple in {a, c}^q obtained from f by changing the j-th coordinate from a to c or from c to a, whichever applies. Consider R_i′ := {a, c}^q \ {f_i′}. Since j ∈ J, the tuple f_i′ is obtained from f_i by swapping the j-th coordinate from a to c. By the inductive assumption, R_i′ is realizable.
Note that R_i = {f′ | f ∈ R_i′}. We will use Corollary 2.4 to show that R_i is realizable by the structure H′ = (V(P_4), {E(P_4), R_i′, NEQ}), which implies that R_i is realizable by P_4 = (V(P_4), E(P_4)). Slightly abusing notation, we use E, R_i′, and NEQ also as the corresponding relation symbols in the signature of H′. We define a gadget J on q + 1 vertices {v_1, …, v_q, u} with interface x = (v_1, …, v_q). We apply R_i′ to the tuple (v_1, …, v_{j−1}, u, v_{j+1}, …, v_q) and we apply NEQ to (u, v_j), i.e., R_i′^J = {(v_1, …, v_{j−1}, u, v_{j+1}, …, v_q)}, NEQ^J = {(u, v_j)}, and E^J = ∅. Clearly, J has the same signature as H′. Moreover, hom((J, x) → (H′, f)) = 1 if f ∈ R_i and hom((J, x) → (H′, f)) = 0 otherwise, so R_i is realizable by Corollary 2.4.

Note that Lemma 3.4 together with the proof of Theorem 3.3 already yields the following result.
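The homomorphism counts behind the NEQ gadget can be verified by brute force. Here we assume, as in our reading of the construction, that J_NEQ is the five-vertex path whose interface consists of its two endpoints with lists {a, c}; the count then equals the number of length-four walks in P_4 between the interface images.

```python
from itertools import product

# P4 = a - b - c - d
ADJ = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}

def gadget_count(end1, end2):
    """Number of homomorphisms from the five-vertex path
    v1-v2-v3-v4-v5 to P4 with v1, v5 mapped to end1, end2
    (equivalently, walks of length four in P4 from end1 to end2)."""
    total = 0
    for v2, v3, v4 in product(ADJ, repeat=3):
        if (v2 in ADJ[end1] and v3 in ADJ[v2]
                and v4 in ADJ[v3] and end2 in ADJ[v4]):
            total += 1
    return total

print({(s, t): gadget_count(s, t) for s in 'ac' for t in 'ac'})
# (a, c) and (c, a) get value 3, (a, a) gets 2, (c, c) gets 5
```

These are exactly the values 3 and {0, 2, 5} invoked when applying Lemma 2.3 to NEQ.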
Corollary 3.5. Assuming the #SETH, there is no ε > 0 such that #LHom(P_4) on n-vertex instances given with a path decomposition of width t can be solved in time (2 − ε)^t · n^{O(1)}.

Structure of P_4s in H
Let H be a bipartite graph with bipartition (X, Y). We define P_4(H) (resp. P_3(H)) as the family of vertex sets S ⊆ V(H) such that H[S] induces a four-vertex (resp. three-vertex) path. By Adj_{P_4}(H) we denote the following graph: its vertex set is P_4(H), and sets S, S′ ∈ P_4(H) are adjacent if |S ∩ S′ ∩ X| = 2 or |S ∩ S′ ∩ Y| = 2. Informally speaking, we think of two induced four-vertex paths as adjacent if they share two vertices from one bipartition class. Let us point out that in the definition of E(Adj_{P_4}(H)) we do not insist that S ≠ S′. Therefore the graph Adj_{P_4}(H) is reflexive, i.e., every vertex has a loop. This is a technical detail that allows us to simplify some arguments.

Lemma 3.6. Let H be a connected irredundant bipartite graph. Then Adj_{P_4}(H) is connected.
Proof. Note that if |V(H)| ≤ 3, then Adj_{P_4}(H) has no vertices and we are done. Thus, suppose that H has at least four vertices. Let C_1, C_2, …, C_p be the connected components of Adj_{P_4}(H). Let c : P_4(H) → [p] be the function such that c(S) = i if and only if S belongs to C_i.
First, let us observe the following.
Claim 3.6.1. Every edge and every induced P_3 in H is contained in some induced P_4.
Proof of Claim. First, let us argue that every induced P_3 is contained in some induced P_4. Let (x − y − z) be an induced P_3. Since the vertices x and z do not have the same neighborhood, one of them, say x, has a neighbor w that is not adjacent to z. Then (w − x − y − z) is an induced P_4. Now consider an edge xy. Since H is connected and has at least three vertices, one of the end-vertices of xy, say y, has a neighbor z other than x. Since H is bipartite, (x − y − z) is an induced P_3. As we have already shown, it is contained in some induced P_4.
Note that if A ∈ P_3(H), then the set S_A := {S ∈ P_4(H) | A ⊆ S} is non-empty (by Claim 3.6.1) and induces a clique in Adj_{P_4}(H). In particular, for any S, S′ ∈ S_A it holds that c(S) = c(S′). We introduce a labeling ℓ : P_3(H) → [p], where ℓ(A) = i if and only if c(S_A) = {i}.

Claim 3.6.2. For any edge xy ∈ E(H) and any A, B ∈ P_3(H) such that {x, y} ⊆ A ∩ B, it holds that ℓ(A) = ℓ(B). Consequently, all sets S ∈ P_4(H) that contain xy belong to the same connected component of Adj_{P_4}(H).
Proof of Claim. Write A = {x, y, a} and B = {x, y, b}; if a = b there is nothing to show. If {x, y, a, b} ∈ P_4(H), then {x, y, a, b} ∈ S_A ∩ S_B and hence ℓ(A) = ℓ(B). So suppose that {x, y, a, b} ∉ P_4(H). This is possible in two cases: (1) if a and b are adjacent to the same vertex from {x, y} (and thus H[{x, y, a, b}] is a star with 3 leaves), or (2) if ab ∈ E(H) (and thus H[{x, y, a, b}] is a 4-cycle).
By Claim 3.6.1, each of the sets A, B belongs to some induced P_4, i.e., there are vertices c, d such that {x, y, a, c}, {x, y, b, d} ∈ P_4(H). Note that it is possible that c = d.
We will show that there is a walk from {x, y, a, c} to {x, y, b, d} in Adj_{P_4}(H), which will prove that c({x, y, a, c}) = c({x, y, b, d}) and consequently ℓ(A) = ℓ(B). We consider some cases.
Case 1. Suppose that H[{x, y, a, b}] is a star with 3 leaves. By symmetry we assume that a, b, x ∈ N(y). We consider the possible positions of c and d, distinguishing the cases shown in Figure 3.

Claim 3.6.2 allows us to define a labeling ℓ of the edges of H, analogous to ℓ on P_3(H): for xy ∈ E(H), we set ℓ(xy) = c(S), where S is any element of P_4(H) with {x, y} ⊆ S. Note that by Claim 3.6.1 such a set S always exists, and by Claim 3.6.2 the value of ℓ(xy) does not depend on the choice of S. Note that for every induced path (x − y − z) we have ℓ({x, y, z}) = ℓ(xy) = ℓ(yz). Now we claim that for any two edges e, f ∈ E(H) we have ℓ(e) = ℓ(f). For contradiction, suppose this is not the case. Since H is connected, this means that there are two edges xy and xz, where y ≠ z, such that ℓ(xy) ≠ ℓ(xz). However, this cannot happen, as ℓ(xy) = ℓ({x, y, z}) = ℓ(xz), since {x, y, z} ∈ P_3(H). Hence all elements of P_4(H) receive the same label, i.e., p = 1 and Adj_{P_4}(H) is connected.
Let H be a bipartite graph, let S be a one-sided subset of V(H) containing two distinct vertices x and y, let (α, β) be a one-sided pair of vertices of H, and let R ⊆ S × V(H) be a binary relation.
1. R is an (x, y, S)-distinguisher with respect to (α, β) if it has the following properties: ∅ ≠ R(z) ⊆ {α, β} for every z ∈ S, R(x) = {α}, and β ∈ R(y).
2. R is an (x, y, S)-forcer with respect to (α, β) if it is an (x, y, S)-distinguisher with respect to (α, β) with the additional property: R(y) = {β}.
We say that a relation R is an (x, y, S)-distinguisher (resp. (x, y, S)-forcer) if there is a one-sided pair (α, β) such that R is an (x, y, S)-distinguisher (resp. (x, y, S)-forcer) with respect to (α, β).
The following lemma is a crucial building block that we will use repeatedly in the results leading up to the proof of Lemma 3.2. Here, given a realizable (x, y, S)-distinguisher with respect to some pair of vertices on a four-vertex path, we use Lemma 2.3 to turn this distinguisher into a forcer.
Therefore, we can apply Lemma 2.3 to complete the proof.
Let R′ be the (x, y, S)-forcer with respect to (a, c) given by Claim 3.8.1. To obtain item 2, notice that R′ ; NEQ({a, c}) is an (x, y, S)-forcer with respect to (c, a), and by Lemma 2.1 it is realizable by H. Now let us focus on item (3). Note that R′ ; R_{{a,c}→{b,d}} is an (x, y, S)-distinguisher with respect to (b, d), which is realizable by H by Lemma 2.1. Thus, applying Claim 3.8.1 with a switched to b and c switched to d, we obtain that an (x, y, S)-forcer R″ with respect to (b, d) is realizable by H.
We show a strengthening of Lemma 3.8: given a realizable (x, y, S)-distinguisher with respect to some pair of vertices that are potentially far apart in H, we can obtain a realizable (x, y, S)-forcer with respect to some pair of vertices on an induced four-vertex path. Moreover, since P_4s form a connected structure in H (recall Lemma 3.6), we can even choose a P_4 in H and obtain a forcer with respect to any one-sided pair from this very P_4.

Lemma 3.9. Let H = (V, E) be an irredundant connected bipartite graph. Fix an induced 4-vertex path (a − b − c − d) in H. Let S be a one-sided subset of V with distinct x, y ∈ S. Suppose there is an (x, y, S)-distinguisher R that is realizable by H. Then there is an (x, y, S)-forcer R′ with respect to (a, c) such that R′ is realizable by H.
Proof. The proof is divided into two parts. First, we show that there is a realizable forcer with respect to a one-sided pair of vertices on some induced P_4 in H. Let (a′ − b′ − c′ − d′) be as in Claim 3.9.1. Note that H satisfies the assumptions of Lemma 3.6, so there is a sequence P^(1), P^(2), …, P^(s) of subsets of V(H) such that (i) each P^(i) induces a P_4 in H, (ii) P^(1) = {a′, b′, c′, d′} and P^(s) = {a, b, c, d}, and (iii) for each i ≤ s − 1, the sets P^(i) and P^(i+1) share two vertices from the same bipartition class.
We prove the statement by induction on s.
If s = 1, then {a′, b′, c′, d′} = {a, b, c, d} and (a′, c′) ∈ {(a, c), (c, a), (b, d), (d, b)}. By Lemma 3.8, in either case there is a realizable (x, y, S)-forcer with respect to (a, c) and we are done. So assume that s ≥ 2 and that there is an (x, y, S)-forcer R′ with respect to (a′, c′), such that R′ is realizable by H, where H[P^(s−1)] = (a′ − b′ − c′ − d′). Let (u, w) ∈ {(a′, c′), (b′, d′)} be such that u, w ∈ P^(s−1) ∩ P^(s). Applying Lemma 3.8 for the path H[P^(s−1)], we obtain that an (x, y, S)-forcer with respect to (u, w) is realizable by H. Since u, w ∈ P^(s), where u and w are in the same bipartition class, we have (u, w) ∈ {(a, c), (c, a), (b, d), (d, b)}. Now using Lemma 3.8 for the path H[P^(s)] = (a − b − c − d), we conclude that an (x, y, S)-forcer with respect to (a, c) is realizable by H.
Recall from Lemma 3.4 that the structure of a 4-vertex path is rich enough to encode basic binary relations. In Lemma 3.9 we showed how to obtain forcers with respect to some specified pair (a, c) on a 4-vertex path. In the next lemma we show that such a collection of forcers lets us realize more expressive relations.

Lemma 3.10. Let H = (V, E) be an irredundant connected bipartite graph. Let (a − b − c − d) be an induced 4-vertex path in H, and let S be a one-sided subset of V. If for every distinct x, y ∈ S there is an (x, y, S)-distinguisher realizable by H, then, for every fixed p ≥ 1 and q ≥ 0, every relation R ⊆ S^p × {a, c}^q is realizable by H.
Proof. Let |S| = s and enumerate S as {x_1, x_2, …, x_s}. Let (a − b − c − d) be a fixed 4-vertex path in H. By the assumption of the lemma, for all distinct x_i, x_j ∈ S there is some (x_i, x_j, S)-distinguisher realizable by H. Thus, by Lemma 3.9, there is also an (x_i, x_j, S)-forcer R_{i,j} with respect to (a, c), realizable by H.
Let us now introduce one more special type of relation. Consider a bipartite graph H = (V, E) and a subset S ⊆ V. Let (X, Y) be a partition of S. Moreover, let a, c be two distinct vertices in V. A relation R ⊆ S × {a, c} is an (X, Y)-partitioner with respect to (a, c) if:
• for every x ∈ X it holds that R(x) = {a}, and
• for every y ∈ Y it holds that R(y) = {c}.
As each partitioner is a relation contained in S × {a, c}, Lemma 3.10 immediately yields the following.

Corollary 3.11. Let H = (V, E) be an irredundant connected bipartite graph. Let (a − b − c − d) be an induced 4-vertex path in H, and let S be a one-sided subset of V. If for every distinct x, y ∈ S there is an (x, y, S)-distinguisher realizable by H, then for every partition (X, Y) of S, the (X, Y)-partitioner with respect to (a, c) is realizable by H.

Proof of Lemma 3.2
In this section we finally prove Lemma 3.2. Let us first discuss the plan. Lemma 3.2 will easily follow from Lemma 3.10 (for q = 0), provided that for all distinct x, y ∈ S there is an (x, y, S)-distinguisher realizable by H. We prove this statement inductively, essentially deriving new forcers from forcers that are "smaller" with respect to some measure on (x, y, S). On the way, we use Lemma 3.9 to turn distinguishers into forcers, and we use Corollary 3.11 to turn forcers into partitioners.

Lemma 3.12. Let H = (V, E) be an irredundant connected bipartite graph with irr(H) ≥ 2.
Let S be a one-sided subset of V , and let x, y be distinct vertices in S. Fix any induced 4-vertex path (a − b − c − d) in H. Then there is an (x, y, S)-forcer R with respect to (a, c) such that R is realizable by H.
Proof. Let S and S′ be one-sided irredundant sets (not necessarily in the same part of the bipartition), let x, y be distinct vertices of S, and let x′, y′ be distinct vertices of S′. We define a well-founded order < on such triples (x′, y′, S′). We prove the statement of the lemma by induction with respect to this order. Note that (x, y, S) is a minimal element if S = {x, y}.
Suppose that the following holds: there is an (x, y, S)-distinguisher or a (y, x, S)-distinguisher realizable by H.
In the first case we can apply Lemma 3.9 to obtain an (x, y, S)-forcer with respect to (a, c) that is realizable by H. In the second case, note that a (y, x, S)-forcer with respect to (a, c), obtained by the application of Lemma 3.9, is an (x, y, S)-forcer with respect to (c, a). Thus, by Lemma 3.8, in both cases an (x, y, S)-forcer with respect to (a, c) is realizable by H.

We are now ready to prove Lemma 3.2.

Proof. Let S′ be a maximum-size irredundant superset of S in V. Note that every maximum-size irredundant set in H contains a vertex from each class of vertices with identical neighborhoods. Thus, H′ = H[S′] is connected, as H is connected. Furthermore, we have irr(H′) = irr(H) ≥ 2.
Since H′ is an induced subgraph of H, it suffices to show that every relation R ⊆ S^p is realizable by H′. Since irr(H′) ≥ 2, the graph H′ is not a complete bipartite graph and therefore it contains an induced four-vertex path (a − b − c − d). By Lemma 3.12, for every pair of distinct x, y ∈ S, there is an (x, y, S)-forcer R with respect to (a, c) such that R is realizable by H′. The statement of the lemma follows from Lemma 3.10 for q = 0.

Counting list homomorphisms to general graphs H
In this section we discuss how to lift the results from Section 3, where we assumed H to be bipartite, to the general case.

Associated bipartite graphs
For a graph H = (V, E), by H* we denote its associated bipartite graph, i.e., the graph with vertex set {v′, v″ | v ∈ V} in which x′ and y″ are adjacent if and only if xy ∈ E. Furthermore, we observe that there is a correspondence between the connected components of H and the connected components of H*. Let H′ be a connected component of H. If H′ is bipartite, then (H′)* consists of two disjoint copies of H′, and so irr(H′) = irr((H′)*). If H′ is nonbipartite, then (H′)* is connected (and bipartite). Indeed, this follows from the fact that for every two vertices x, y of H′ there is an even x-y-walk and an odd x-y-walk in H′. Thus all vertices x′, x″, y′, y″ are in the same connected component of (H′)*.
Consequently, each bipartite component H′ of H corresponds to two components of H*, both isomorphic to H′, and each nonbipartite component H′ of H corresponds to one connected component of H*, namely (H′)*. Now the claim easily follows from the previous observations about S, S′, and S″.
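The component behaviour of the associated bipartite graph is easy to check computationally. The encoding below, with copies (v, 1) and (v, 2) standing for v′ and v″, is our own.

```python
def associated_bipartite(vertices, edges):
    """Build H*: two copies v' and v'' of each vertex, with x' adjacent
    to y'' whenever xy is an edge of H (loops, i.e. x == y, allowed)."""
    star_vertices = [(v, 1) for v in vertices] + [(v, 2) for v in vertices]
    star_edges = set()
    for x, y in edges:
        star_edges.add(((x, 1), (y, 2)))
        star_edges.add(((y, 1), (x, 2)))
    return star_vertices, star_edges

def components(vertices, edges):
    """Number of connected components, by depth-first search."""
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    seen, comps = set(), 0
    for v in vertices:
        if v not in seen:
            comps += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
    return comps

# A bipartite H (a single edge) doubles into two components ...
print(components(*associated_bipartite([0, 1], [(0, 1)])))  # 2
# ... while a nonbipartite H (a triangle) stays connected.
print(components(*associated_bipartite([0, 1, 2], [(0, 1), (1, 2), (0, 2)])))  # 1
```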
Note that H* is a biclique if and only if H is a reflexive clique. Thus, we observe that irr(H) ≥ 2 if and only if H has a connected component that is neither a biclique nor a reflexive clique. This allows us to restate the complexity dichotomy for #LHom(H), which was originally observed as a simple consequence of the non-list result of Dyer and Greenhill [10] by Díaz, Serna, and Thilikos [9], and also by Hell and Nešetřil [16].

Theorem 4.2 ([9, 10, 16]). Let H be a fixed graph. If irr(H) = 1, then #LHom(H) is polynomial-time solvable, and otherwise it is #P-complete.

Algorithm for general graphs H
For an instance (G, L) of #LHom(H), we define its associated instance (G*, L*) of #LHom(H*), where for all v ∈ V(G) we set L*(v′) := {x′ | x ∈ L(v)} and L*(v″) := {x″ | x ∈ L(v)}. We say that a homomorphism f from (G*, L*) to H* is clean if for every v ∈ V(G) there is a vertex x ∈ V(H) such that f(v′) = x′ and f(v″) = x″. One can check that mapping a clean homomorphism f to the function σ(f) : v ↦ x, where f(v′) = x′, gives a bijection σ between the clean homomorphisms from (G*, L*) to H* and the homomorphisms from (G, L) to H; this is the content of Lemma 4.3. Using Lemma 4.3 we can show the algorithmic statement of Theorem 1.2. Note that it follows from the subsequent, slightly stronger result.

Theorem 4.4. For each graph H, the #LHom(H) problem on n-vertex instances given along with a tree decomposition of width at most t can be solved in time irr(H)^t · n^{O(1)}.
Proof. Consider an instance (G, L) of #LHom(H). By Lemma 4.3, it suffices to count the number of clean homomorphisms from (G*, L*) to H*.
Let T be a tree decomposition of G with width at most t. We modify it as follows: in every bag of T we replace every vertex v ∈ V(G) with the two vertices v′, v″ ∈ V(G*). It is straightforward to verify that this way we obtain a tree decomposition of G* with width at most 2t + 1; let us call this decomposition T*.
First, just like in the proof of Theorem 3.1, we reduce the problem to its equivalent weighted version with each list of size at most irr(H*) = irr(H). Consider one bag of T* and recall that we are only interested in counting the number of clean homomorphisms from G* to H*. Therefore, even though each bag contains two copies of every original vertex, at most irr(H)^{t+1} colorings of the bag can possibly be extended to a clean homomorphism from G* to H*. Thus, by an argument analogous to the one in the proof of Theorem 3.1, we obtain the desired running time.

Hardness for general graphs H
Recall the definition of consistent instances from the beginning of Section 3. The following lemma is a crucial tool used in our hardness reduction.

Proof. For a homomorphism f : (G, L′) → H*, define σ(f) : V(G) → V(H) as follows: if f(v) ∈ {x′, x″}, then σ(f)(v) = x. It is straightforward to verify that σ(f) is a homomorphism from (G, L) to H. Furthermore, σ is a bijection: as the instance (G, L′) is consistent, for no v ∈ V(G) and x ∈ V(H) it holds that both x′, x″ ∈ L′(v).
Proof. Let G = (V, E) be an n-vertex graph and let L : V → 2^[q] be a list function. Suppose that the treewidth of G is tw and G is given along with a tree decomposition T of width tw.
Let G' be the graph obtained from G as follows. First, we introduce a q-vertex clique K with vertices {x_1, x_2, …, x_q}. Next, for each v ∈ V and each i ∈ [q], we make v adjacent to x_i if and only if i ∉ L(v). It is straightforward to observe that (G, L) is a yes-instance of list q-coloring if and only if G' is a yes-instance of q-coloring. Furthermore, each proper list coloring of (G, L) corresponds to exactly q! proper colorings of G', one for each proper coloring of the vertices of K. Thus hom((G, L) → K_q) = (1/q!) · hom(G' → K_q). The number of vertices of G' is n + q. Now let us deal with the treewidth. We can easily modify the tree decomposition T of G into a tree decomposition T' of G' by including all vertices of K in every bag. This proves that the treewidth of G' is at most tw + q. Let us further modify the instance so that the treewidth is exactly tw + q. We use a trick similar to the one in the proof of Theorem 4.6.
Let v be an arbitrary vertex of G and let G'' be obtained from G' by introducing 2(tw + q) − 1 new vertices, which, together with v, form a biclique K_{tw+q,tw+q}. Recall that the treewidth of K_{tw+q,tw+q} is exactly tw + q, so the treewidth of G'' is at least tw + q. On the other hand, we can turn the tree decomposition T' of G' into a tree decomposition T'' of G'' as follows. Let T_K be an optimal tree decomposition of the biclique K_{tw+q,tw+q}. We choose any bag of T_K containing v and make it adjacent to any bag of T' containing v. This way we obtain a tree decomposition of G'' with width tw + q. The number of vertices of G'' is n + q + 2(tw + q) − 1 = O(n). Now, let f(q, tw) be the number of proper q-colorings of K_{tw+q,tw+q} with the color of one vertex fixed. We observe that hom(G' → K_q) = (1/f(q, tw)) · hom(G'' → K_q). Thus, if we could count the number of proper q-colorings of G'' in time (q − ε)^{tw+q} · n^O(1), we could count the number of list q-colorings of (G, L) in time (q − ε)^{tw+q} · n^O(1) = (q − ε)^{tw} · n^O(1). By Theorem 4.4, this would contradict the #SETH.
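The clique-augmentation step of this reduction (from list q-coloring to plain q-coloring) can be sketched as follows; the data representation and the helper count_colorings are our own, and only the construction of G' is modeled (the biclique padding that fixes the treewidth is omitted):

```python
from itertools import product

def add_palette_clique(G_edges, lists, q):
    """Build G' from the proof: add a palette clique x_1..x_q and join
    v to x_i exactly when color i is NOT on the list L(v), so that
    #(proper q-colorings of G') = q! * #(list q-colorings of (G, L))."""
    palette = [('x', i) for i in range(1, q + 1)]
    edges = list(G_edges)
    edges += [(palette[i], palette[j])            # the clique K on the palette
              for i in range(q) for j in range(i + 1, q)]
    for v, L in lists.items():                    # forbid color i on v
        edges += [(v, ('x', i)) for i in range(1, q + 1) if i not in L]
    return edges

def count_colorings(edges, q):
    """Brute-force number of proper q-colorings (for checking only)."""
    vertices = sorted({u for e in edges for u in e}, key=str)
    return sum(
        all(f[u] != f[v] for u, v in edges)
        for choice in product(range(1, q + 1), repeat=len(vertices))
        for f in [dict(zip(vertices, choice))]
    )
```

For example, a single edge with L(0) = {1} and L(1) = {1, 2} has exactly one list 2-coloring, and the augmented graph G' then has q! · 1 = 2 proper 2-colorings.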

Conclusion
Let us conclude the paper with a discussion of a potential extension of our results to the non-list variant of the #LHom(H) problem, i.e., the problem of counting homomorphisms to a fixed graph H. Denote this problem by #Hom(H). The complexity dichotomy for #Hom(H) was provided by Dyer and Greenhill [10], and it is exactly the same as for the list variant: the problem is polynomial-time solvable if every component of H is either a reflexive clique or an irreflexive biclique, and otherwise it is #P-complete.
While the algorithmic statement of Theorem 1.2 clearly carries over to #Hom(H) (as #Hom(H) is the restriction of #LHom(H) in which all lists are equal to V(H)), our hardness proof heavily exploits non-trivial lists. The simple tricks we used in Corollaries 4.7 and 4.8 to reduce the list variant of coloring to the non-list variant cannot be easily generalized to arbitrary graphs H.
Let us point out that the fine-grained complexity of the decision variant of #Hom(H), i.e., Hom(H), parameterized by the treewidth of the instance graph, is not fully understood [27].
Typically, the hardness proofs concerning the complexity of (non-list) graph homomorphism problems involve some tools from universal algebra and algebraic graph theory [3,15,28].
For graphs H_1, H_2, …, H_p, their direct product H_1 × ⋯ × H_p has vertex set V(H_1) × ⋯ × V(H_p), with edges defined coordinate-wise as in Observation 5.1. As an immediate consequence, if H can be obtained as a direct product of some factors H_1, H_2, …, H_p, then we can reduce solving #Hom(H) to solving #Hom(H_i) for all i (with the same instance graph). As irr(H_i) can be much smaller than irr(H), we observe that for some graphs H, the #Hom(H) problem can be solved faster than the #LHom(H) problem. In particular, while irr(H) is the correct base of the exponential factor for #LHom(H), it is not always the correct base appearing in the complexity of an optimal algorithm solving #Hom(H).

Figure 1: A graph H and its associated bipartite graph H*.

Figure 2: The two cases in the inductive proof. The set S is highlighted in red, the set S' is highlighted in yellow. Case 1: as z and y have a common neighbor, we can define S' with |S'| < |S|. Vertex q ∈ S is in N(y) \ N(x). Case 2: in S, the distance of {x, y} from the rest of the vertices is 6 (a shortest path P is highlighted in gray), while in (x', y', S') this distance is only 4.

Figure 4: Possible configurations in the proof of Case 2 in Claim 3.6.2. Dashed edges may, but do not have to, exist.

Claim 3.9.1. There exists an induced 4-vertex path (a' − b' − c' − d') in H and an (x, y, S)-forcer with respect to (a', c') that is realizable by H.

Proof of Claim. Let {α, β} be a one-sided set such that R is an (x, y, S)-distinguisher with respect to (α, β). If α and β have a common neighbor then, since H is irredundant and has at least four vertices, there is an induced 4-vertex path in H of the form (α − b' − β − d') or of the form (d' − α − b' − β). In either case, the statement follows from Lemma 3.8. Otherwise, there is a shortest path P = (p_1 − … − p_k) from α to β (p_1 = α, p_k = β) with k vertices. Since α and β are in the same bipartition class, we have k ≥ 5. For each i ∈ {1, …, k − 4}, the relation

Lemma 4.3. There is a bijection between homomorphisms from (G, L) to H and clean homomorphisms from (G*, L*) to H*.

Proof. Consider a homomorphism f :

E(H_1 × ⋯ × H_p) = {(x_1, …, x_p)(y_1, …, y_p) | x_iy_i ∈ E(H_i) for all i ∈ [p]}.

The following observation is straightforward.

Observation 5.1. Let G and H = H_1 × ⋯ × H_p be two graphs. A function f : V(G) → V(H) is a homomorphism if and only if, for every i ∈ [p], the function Π_i(f) is a homomorphism from G to H_i.
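Observation 5.1 is easy to check mechanically; a sketch of the direct product on adjacency-dictionary graphs (our own representation):

```python
from itertools import product

def direct_product(factors):
    """Direct product H_1 x ... x H_p of graphs given as adjacency dicts:
    (x_1..x_p)(y_1..y_p) is an edge iff x_i y_i in E(H_i) for every i."""
    verts = list(product(*(sorted(h) for h in factors)))
    return {x: {y for y in verts
                if all(yi in h[xi] for xi, yi, h in zip(x, y, factors))}
            for x in verts}
```

A map f into the product is then a homomorphism exactly when each coordinate projection Π_i(f) is one, since the edge condition decomposes coordinate-wise.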
Definition 1.1. Given a graph H, we say that S ⊆ V(H) is irredundant if any two vertices of S have different neighborhoods. We say that the graph H is irredundant if V(H) is irredundant. For a connected bipartite graph H, a set S is one-sided if it is fully contained in one of the bipartition classes. For a connected graph H, we define irr(H) in the following way. If H has a loop or is nonbipartite, then irr(H) is the maximum size of an irredundant set; if H is bipartite, then irr(H) is the maximum size of a one-sided irredundant set. For disconnected H, we define irr(H) as the maximum of irr(C) over every connected component C of H.
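Since irredundance only asks that the chosen vertices have pairwise distinct neighborhoods, irr(H) for a connected H is simply the number of distinct neighborhoods (taken one-sided in the loopless bipartite case). A sketch under that reading of Definition 1.1, with loops encoded as v ∈ N(v):

```python
def two_coloring(H_adj):
    """BFS 2-coloring of a connected graph; returns vertex -> {0, 1},
    or None if the graph is not bipartite."""
    start = min(H_adj)
    col, queue = {start: 0}, [start]
    while queue:
        u = queue.pop()
        for w in H_adj[u]:
            if w not in col:
                col[w] = 1 - col[u]
                queue.append(w)
            elif col[w] == col[u]:
                return None
    return col

def irr_connected(H_adj):
    """irr(H) for a connected graph H, per Definition 1.1."""
    verts = sorted(H_adj)
    has_loop = any(v in H_adj[v] for v in verts)
    side = None if has_loop else two_coloring(H_adj)
    if side is None:  # a loop or an odd cycle: any irredundant set qualifies
        return len({frozenset(H_adj[v]) for v in verts})
    # loopless bipartite: take the best one-sided irredundant set
    return max(len({frozenset(H_adj[v]) for v in verts if side[v] == c})
               for c in (0, 1))
```

K_{2,2} giving irr = 1 matches the polynomial-time case (irreflexive bicliques), while the path P_4 with irr = 2 already falls into the #P-hard regime.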
forcer with respect to (p_{i+2}, p_k), and R_i is realizable by H according to Lemma 2.1. So, by Lemma 2.1, the relation R; R_1; …; R_{k−4} is a realizable (x, y, S)-distinguisher with respect to (p_{k−2}, p_k), where (p_{k−3} − p_{k−2} − p_{k−1} − p_k) is an induced path in H. By Lemma 3.8, there is also a realizable (x, y, S)-forcer with respect to (p_{k−2}, p_k).
If H = H_1 + H_2, where + denotes the disjoint sum, then H* = H*_1 + H*_2. Recall that if H is connected and nonbipartite, then irr(H) is the cardinality of the largest irredundant set in H. Let us point out that associated bipartite graphs allow us to provide a uniform definition of irr(H), which does not need to distinguish bipartite and nonbipartite graphs: for every graph H it holds that irr(H) = irr(H*). First, observe that S ⊆ V(H) is irredundant if and only if S
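The exact definition of H* is not reproduced in this excerpt; assuming it is the standard bipartite double cover H × K_2 (an assumption, but one consistent with Figure 1 and with the identity H* = H*_1 + H*_2 for disjoint sums), the construction can be sketched as:

```python
def associated_bipartite(H_adj):
    """Assumed construction of H*: two copies v' = (v, 0) and v'' = (v, 1)
    of each vertex, with u' adjacent to v'' exactly when uv in E(H)
    (a loop at v becomes the ordinary edge v'v'')."""
    return {(v, side): {(u, 1 - side) for u in H_adj[v]}
            for v in H_adj for side in (0, 1)}
```

Under this reading, H* is always bipartite (its sides are the primed and double-primed copies) and loopless, which is what allows the uniform, one-sided definition of irr above.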