
Residue Class Density Problem

Let $n_1<n_2<\cdots$ be an infinite sequence such that, for any choice of congruence classes $a_i\pmod{n_i}$, the set of integers not satisfying any of the congruences $a_i\pmod{n_i}$ has density $0$. Is it true that for every $\epsilon>0$ there exists some $k$ such that, for every choice of congruence classes $a_i$, the density of integers not satisfying any of the congruences $a_i\pmod{n_i}$ for $1\leq i\leq k$ is less than $\epsilon$?
Condition (P) Guarantees Zero-Density Uncovered Set in Residue Class Coverings
We need to parse the problem carefully. We have an infinite sequence of positive integers n_1 < n_2 < ... (presumably each > 1). The condition is: for any choice of congruence classes a_i mod n_i (i.e., for each i, pick a residue a_i modulo n_i), the set of integers that do not satisfy any of the congruences x ≡ a_i (mod n_i) (i.e., the set of integers x such that for all i, x ≢ a_i (mod n_i)) has density 0. In other words, the "avoidance set" always has asymptotic density zero, no matter which residues we pick.
The question asks: is it true that for every epsilon > 0 there exists some finite k (depending only on epsilon, not on the choice of residues) such that, for any choice of the a_i, the density of integers not satisfying any of the congruences a_i mod n_i for i ≤ k is less than epsilon? In other words, can a finite prefix of the congruences already force the density of uncovered numbers to be arbitrarily small, uniformly over all possible choices of residues?
We need to decide if the statement is true or false. The problem seems reminiscent of covering systems, the Chinese remainder theorem, and the concept of "covering systems of congruences". A covering system is a finite set of congruences such that every integer satisfies at least one of them. Here we have an infinite sequence of moduli with the property that for any choice of residues, the uncovered set has zero density. The question is essentially: does this property imply a "uniform" version: that we can approximate the zero density arbitrarily well by a finite prefix of the sequence, uniformly over all residue choices. Or is there a counterexample where the property holds but there is a lower bound δ>0 such that for any finite k, there exists some choice of residues for which the uncovered set for the first k congruences has density at least δ. The question asks: "Is it true that for every epsilon>0 there exists some k such that, for every choice of congruence classes a_i, the density of integers not satisfying any of the congruences a_i mod n_i for 1≤i≤k is less than epsilon?" So we need to answer yes or no, with justification.
We need to think about known results. This is reminiscent of Erdős's covering systems and zero-density covering families; there is also the "minimum modulus problem" for covering systems, solved by Hough. But this problem is different: we want a sequence of moduli such that any choice of residues yields a zero-density uncovered set. Think of the property as: for any sequence of residues a_i, the set of integers that avoid all these residues has density zero. This is a strong condition. Does it hold for any sequence of moduli? For example, what if the n_i are the primes; does every choice of residues leave only a density-zero set? Let's first test n_i = i (all the integers, say i ≥ 2). Pick a_i = 0 for all i; then the chosen classes are the multiples of each i, and the uncovered set is the set of integers divisible by no i ≥ 2, which is just {1, -1} (0 is divisible by every i, so it is covered). That set has density zero, so the property holds for this specific choice of residues. But we need the property to hold for every choice. For n_i = i, does it? Let's test another choice: pick a_i = i - 1 (i.e., the residue class -1 mod i).
Then the uncovered set is the set of numbers never congruent to -1 mod any i. Does that have zero density? The set of numbers avoiding one specified residue class modulo each i is like a sieve removing one residue per modulus: each modulus i removes a 1/i fraction. Treating the congruences as independent, the density left after the moduli up to N would be ∏_{i=2}^{N} (1 - 1/i), which telescopes to exactly 1/N; since ∑_{i≥2} 1/i diverges, this tends to zero. (For this particular choice the classes overlap heavily: avoiding -1 mod i for every i ≤ N means x + 1 has no divisor in [2, N], so the true density decays more slowly, like 1/log N, but it still tends to zero.) So the property holds for n_i = i. But does it hold uniformly? For ε = 0.01, can we find k such that for any residues a_i the uncovered set after the first k congruences has density < 0.01? After k steps the uncovered set consists of the numbers avoiding each selected residue for i = 1..k. Is its density at most ∏_{i=1}^{k} (1 - 1/n_i)? Not necessarily, because the classes may overlap: a number may be covered by several congruences, but it only needs to be covered by one. The uncovered set is the complement of the union of the residue classes, and the worst case for us (maximal uncovered density) occurs when the classes overlap as much as possible, so that the union covers as few numbers as possible. So we need an upper bound on the uncovered density that holds for every choice of residues.
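As a throwaway numerical check of the telescoping identity ∏_{i=2}^{N} (1 - 1/i) = 1/N (the helper name is ad hoc):

```python
from fractions import Fraction

def telescoping_product(N):
    """Exact value of prod_{i=2}^{N} (1 - 1/i), computed with rationals."""
    prod = Fraction(1)
    for i in range(2, N + 1):
        prod *= Fraction(i - 1, i)   # (1 - 1/i) = (i-1)/i
    return prod

# The partial products collapse: (1/2)(2/3)(3/4)...((N-1)/N) = 1/N.
print(telescoping_product(10), telescoping_product(1000))  # → 1/10 1/1000
```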
So we need to consider the worst-case scenario (largest possible uncovered set) for given moduli n_i. We ask: is there a uniform bound that goes to zero as k increases? For n_i = i, consider residues that are all the same, e.g., a_i = 0 for all i. Then the union of the residue classes is the set of numbers divisible by at least one modulus i ≤ k. A number avoids them all exactly when it has no divisor in [2, k], i.e., no prime factor ≤ k, so the uncovered density is ∏_{p ≤ k} (1 - 1/p) ≈ e^{-γ}/log k by Mertens' theorem. So the uncovered density decays like 1/log k, and for ε = 0.01 we need log k > 100, i.e., k > e^{100}: huge but finite. So for n_i = i the answer would be yes: such a k exists (depending on ε). However, the question concerns every sequence n_i satisfying the hypothesis: we must either prove that the hypothesis implies the uniform bound, or produce a counterexample sequence for which the hypothesis holds yet, for some ε > 0 and every k, some choice of residues leaves uncovered density at least ε among the first k congruences. Perhaps a sequence that grows very fast, like the products of the first i primes, could misbehave. Let's explore.
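The 1/log k decay can be eyeballed numerically. A quick sketch (helper names are ad hoc) sieves out the multiples of every i in [2, k] and compares the surviving density with the Mertens estimate e^{-γ}/log k:

```python
import math

def surviving_density(k, X):
    """Density among 1..X of integers not divisible by any i with 2 <= i <= k."""
    covered = [False] * (X + 1)
    for i in range(2, k + 1):
        for m in range(i, X + 1, i):
            covered[m] = True
    return sum(1 for x in range(1, X + 1) if not covered[x]) / X

EULER_GAMMA = 0.5772156649015329
for k in (10, 100, 1000):
    print(k, round(surviving_density(k, 10**5), 4),
          round(math.exp(-EULER_GAMMA) / math.log(k), 4))
```

The agreement is only rough (Mertens' theorem has lower-order terms), but the slow 1/log k decay is visible: even k = 1000 leaves several percent uncovered.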
First, define property (P): For any infinite sequence of residues a_i (mod n_i), the set of integers x ∈ ℤ such that ∀i, x ≠ a_i (mod n_i) has asymptotic density 0.
We want property (U): For any ε > 0, ∃ k such that for any infinite sequence of residues a_i, the density of the set of integers x such that ∀ i ≤ k, x ≠ a_i (mod n_i) is < ε.
We want to know if (P) implies (U). Equivalent to: Does the property that any infinite selection of residues covers almost all integers (density 1) imply that we can uniformly approximate this covering with a finite number of residues? This is reminiscent of "finite subcover property" in topology: a family of sets with the property that any infinite selection covers almost all of ℕ (density 1) must have a finite subfamily that covers almost all of ℕ up to epsilon. But here the sets are "residue classes" and the covering is not required to be exact but in density sense.
Alternatively, we can think in terms of "measure" on the profinite completion of ℤ: each residue class a_i mod n_i is a clopen set in the profinite topology (or in the inverse limit ℤ̂). The natural density of a set S ⊆ ℤ can be thought of as the Haar measure of its closure in ℤ̂ if the set is "well-behaved"? Actually, there's a known connection: natural density of a set of integers equals the Haar measure of its closure in ℤ̂ if the set is "measurable" (i.e., strongly measurable). For residue classes, the density equals 1/n_i. For a union of finitely many residue classes, the density is not necessarily the sum of densities due to overlap. But the Haar measure is countably additive for disjoint sets. However, the union of infinitely many residue classes may have measure 1 (i.e., full). The condition (P) says that for any choice of a_i, the complement of the union (the "avoidance set") has measure zero. That's a property of the sequence of moduli. Actually, the complement of the union is the set of x ∈ ℤ̂ that avoid each of the given clopen sets. The measure of that complement is ∏_{i=1}^{∞} (1 - 1/n_i)?? Not exactly, because the sets are not independent: the events of being in a particular residue class are not independent across different moduli. However, in ℤ̂, the Haar measure of the intersection of the complements of each residue class (i.e., the set of integers that avoid all those classes) equals product of (1 - 1/n_i) if we consider the independence across moduli? Actually, in ℤ̂, the sets x ≡ a_i (mod n_i) are independent in the sense that the probability that a random element of ℤ̂ satisfies a given set of congruences is ∏ 1/n_i if the moduli are pairwise coprime (by Chinese remainder theorem). If they are not coprime, there is some dependence. But if the moduli are increasing and perhaps have some structure, the product of (1 - 1/n_i) may be zero or positive. But the condition (P) is that for any choice of a_i, the measure of the complement is zero. 
That suggests that the product ∏ (1 - 1/n_i) must vanish, but (P) demands more: the avoidance set must have zero density for every specific choice of residues, including "bad" choices that make the intersection of the complements large. The worst case for covering is when the chosen classes are aligned so as to overlap heavily and cover as little as possible. Let's think.
Given a sequence of moduli n_i, consider the "avoidance set" A = {x ∈ ℤ: ∀i, x ≠ a_i (mod n_i)}. For a given selection of residues, we can consider the density of A. If the product ∏_{i=1}^∞ (1 - 1/n_i) = 0, does that guarantee that density of A is zero for any selection? No: consider n_i = 2 for all i (though they must be increasing, so can't repeat). But suppose n_i = 2, 4, 8, 16, ... (powers of 2). Then each residue class is a single coset. The set of numbers avoiding all these classes might have positive density? Let's test: If we choose a_i = 0 for all i, then the union of classes is just the set of numbers divisible by 2? Actually, 0 mod 2 includes all even numbers; 0 mod 4 includes numbers divisible by 4; 0 mod 8 includes numbers divisible by 8, etc. The union of all these is just the set of even numbers (since any even number is divisible by some power of 2 > 1? Actually, any even number is divisible by 2, but not necessarily by 4 or 8. But being divisible by 2 already covers all evens. So the union is just evens. So the uncovered set is odd numbers, which have density 1/2 > 0. So property (P) fails for this sequence. Indeed, pick a_i = 0 for all i yields uncovered set of odds, density 1/2. So this sequence does not satisfy the hypothesis.
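A tiny experiment (my own scaffolding, just illustrating the claim) with moduli 2, 4, ..., 2^13 and residue 0 everywhere confirms that exactly the odd numbers survive:

```python
def avoidance_set(classes, X):
    """Integers in 1..X lying in none of the given residue classes (n, a)."""
    return [x for x in range(1, X + 1)
            if all(x % n != a for (n, a) in classes)]

X = 10_000
classes = [(2**i, 0) for i in range(1, 14)]   # exclude multiples of 2, 4, ..., 8192
survivors = avoidance_set(classes, X)
print(len(survivors) / X)                     # → 0.5
print(all(x % 2 == 1 for x in survivors))     # → True: exactly the odds
```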
Thus the hypothesis (P) imposes a strong condition on the sequence n_i: any selection of residues must cover almost all integers. In particular the moduli cannot be "nested" like the powers of a fixed integer, because picking a_i = 0 each time leaves uncovered every number divisible by none of the moduli, a set that can have positive density. Note that (P) is about the uncovered set after all infinitely many congruences, not just the first few. If we pick a_i = 0 for all i and the n_i are, say, the primes, the uncovered set is the set of numbers divisible by no prime, which is just {1, -1} (0 is divisible by everything), so it has density zero. Does (P) hold for the primes in general? Test arbitrary residues a_i mod p_i: each prime p_i excludes one residue class mod p_i, so the density of numbers not excluded by any prime up to x is ∏_{p ≤ x} (1 - 1/p), which tends to 0 as x → ∞ because the sum of the reciprocals of the primes diverges. So for the primes, every selection of residues leaves a zero-density set: the primes satisfy (P). Similarly, any sequence with ∑ 1/n_i divergent might satisfy (P). But is divergence sufficient? Let's examine.
Consider n_i = p_i^2, the squares of the primes. Here ∑ 1/p_i^2 converges, and ∏ (1 - 1/p^2) = 1/ζ(2) = 6/π^2 ≈ 0.6079. Choosing a_i = 0 (i.e., excluding the multiples of each p_i^2), the uncovered set is exactly the set of squarefree numbers, which has density 6/π^2 > 0. So property (P) fails for this sequence, and divergence of ∑ 1/n_i looks necessary. Is it sufficient? Consider n_i = i again: ∑ 1/i diverges, and we argued that (P) should hold, but let's verify more carefully that every selection of residues a_i mod i leaves a density-zero set. Try to find a counterexample: choose a_i = i - 1 for each i, so we exclude the numbers congruent to -1 mod i, i.e., the numbers of the form ik - 1. For large i these classes are sparse, but can any integer avoid them all? Check x = 5: for i = 6 we have a_6 = 5 and 5 ≡ 5 (mod 6), so 5 is covered by the modulus-6 congruence. In general, x ≡ -1 (mod i) iff i divides x + 1.
So indeed, for any integer x we can take i = x + 1 (or any i ≥ 2 dividing x + 1): the residue class a_i = i - 1 consists of the numbers congruent to -1 mod i, and x ≡ -1 (mod i) iff i divides x + 1. With i = x + 1 this divisibility is automatic, so every positive integer x is covered by the congruence a_{x+1} = x (mod x + 1). With this particular choice of residues the union of the residue classes contains all positive integers: no uncovered numbers, so the zero-density condition certainly holds.
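This covering claim is easy to machine-check (a sketch; the function name is mine): x ≡ -1 (mod x + 1) always holds, so the modulus-(x+1) class already catches x.

```python
def covered_by_minus_one(x):
    """Does x lie in the class -1 (mod i) for some i with 2 <= i <= x + 1?"""
    return any(x % i == i - 1 for i in range(2, x + 2))

# i = x + 1 always works, since x = (x + 1) - 1 ≡ -1 (mod x + 1).
print(all(covered_by_minus_one(x) for x in range(1, 2001)))  # → True
```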
But we need to check for any arbitrary choice of residues. For n_i = i, does there exist a choice of residues such that the uncovered set has positive density? Let's try to find a selection that leaves a positive density set uncovered. Since each residue class has density 1/i, the total density of the union is at most sum_i 1/i (by union bound), but that diverges, so the naive bound is not helpful. However, we need to consider overlapping: the union may have density less than sum of densities because many residues may overlap heavily. But can we choose residues such that they are pairwise disjoint (or as disjoint as possible) to maximize uncovered density? For each i, we need to pick a residue class mod i. For different i, the residue classes may intersect. Overlap is possible: any integer satisfies many congruences simultaneously. So we need to see if there is a way to choose a_i such that the union covers only a small fraction of integers, leaving a positive density uncovered. If we can find such a selection, then (P) fails. If not, then (P) holds.
For n_i = i, consider the following selection: For each i, choose a_i = 0. Then the union is numbers divisible by some i. That union includes all numbers > 1 (since any integer > 1 is divisible by some i ≤ itself, but we have infinite i). Actually, any integer n > 1 is divisible by some i between 2 and n, so it belongs to the class a_i = 0 mod i. So the union covers all integers except maybe 1? Let's check: 1 is not divisible by any i > 1, but it is divisible by i = 1 (if we include n_1 = 1). Typically, we consider n_1 > 1? The problem didn't say n_i > 1, but we can assume they are positive integers, maybe > 1. If n_1 = 1, then the only residue class mod 1 is 0, which is all integers, so the union covers everything. If we start with n_1 > 1, then 1 may be uncovered. But that's just one integer, density zero. So selection a_i = 0 yields uncovered density zero.
Now consider a_i = 1 for all i. Then the union includes numbers congruent to 1 mod i. Does that cover all integers? For an integer x, does there exist i such that x ≡ 1 mod i? That means i divides x-1. If x > 1, then i can be any divisor of x-1. If we have i in the sequence that divides x-1, then x is covered. Since the sequence includes all integers i, for any integer x > 2, x-1 has some divisor i > 1 (unless x-1 = 1). So x is covered. So again, uncovered integers are maybe 2? Actually, x = 2: x-1 = 1, which has no divisor > 1, so 2 is uncovered. So only finitely many uncovered numbers.
Thus far, we see that for n_i = i, many natural choices of residues cover all but finitely many integers. But can we find a "sparse" selection that leaves a positive density uncovered? Let's try to think about constructing a set of residues that are "far apart". For each i, we need to pick a residue a_i mod i. Suppose we try to pick a_i such that each integer x is covered only by finitely many i, or maybe none. But we need to guarantee that there is a positive density set that avoids all these classes. Is that possible? Consider the Chinese remainder theorem: if we pick residues a_i arbitrarily, the set of integers satisfying all of them simultaneously (i.e., solving an infinite system) may be empty. But we want to avoid all of them. That's easier: we want to construct a set of residues such that the "avoidance set" is large.
We can think of the complement of the union as the set of integers with x ≢ a_i (mod n_i) for all i: an infinite system of "forbidden" congruences. This is covering in reverse: we ask whether a large set of integers can avoid every forbidden class, and the maximal density of such a set is what matters. Is the following true: if ∑ 1/n_i diverges, then for any choice of residues the set of integers avoiding them has density zero? Let's try to prove it. Fix residues a_i and let A be the set of integers avoiding all of them, i.e., the intersection of the sets B_i = ℤ \ (a_i mod n_i), each of density 1 - 1/n_i. The density of an intersection is at most the minimum of the individual densities; for independent events it would equal the product ∏ (1 - 1/n_i), but when the B_i are positively correlated it can be much larger than the product. So divergence of ∑ 1/n_i kills the product, but not necessarily the density of A, if the sets are "aligned". For example, take n_i = i and a_i = i - 1: each B_i = ℤ \ {x : x ≡ -1 (mod i)} has density 1 - 1/i, and the intersection of all the B_i is the set of integers never congruent to -1 mod any i.
Does this set have zero density? Let's check: For a random integer x, what's the probability that x ≡ -1 mod i for some i? Since for each i, probability is 1/i. Over all i, the events are not independent but maybe the probability that x avoids all such congruences is zero. Actually, we can argue: For any integer x, consider i = x+1: then x ≡ -1 mod (x+1). So x fails to be in B_{x+1}. So the only integers that avoid all B_i are those that are less than any i? Actually, if we consider all i from some point onward, any integer x will be covered by some i > x (since i = x+1). So the intersection of B_i for i up to N might be nonempty, but as i goes to infinity, the intersection becomes empty (except maybe for small numbers). So the density is zero. So indeed for n_i = i, property (P) holds.
But does property (P) hold for every sequence with divergent sum of reciprocals? Consider a trickier sequence such as n_i = ⌊i log i⌋, where ∑ 1/(i log i) diverges by the integral test; perhaps (P) holds there. But there might be sequences where the sum diverges and (P) still fails because the classes can be aligned. The independence heuristic says the avoidance set should have density around ∏ (1 - 1/n_i), which vanishes when ∑ 1/n_i diverges, yet the actual intersection of the B_i can have positive density when the sets are aligned. Concretely, take n_i = 2^i (powers of 2) with a_i = 0 for all i. Each B_i = ℤ \ {multiples of 2^i} has density 1 - 1/2^i, and the intersection of all the B_i is the set of integers divisible by no 2^i, i.e., the odd numbers: every odd integer lies in every B_i, so the intersection has density 1/2 > 0. So property (P) fails for this sequence. Here ∑ 1/2^i converges (to 1), so this example suggests that divergence of ∑ 1/n_i is necessary for (P).
But is it sufficient? Let's test a sequence where the sum diverges but the moduli have some structure that still allows a positive density avoidance set. For instance, consider n_i = i for i odd, and n_i = 2i for i even? Actually, the sum diverges anyway. But maybe we can construct a sequence where there is a "thin" subsequence of moduli that are large and have small reciprocals, but the rest are small. The product of (1 - 1/n_i) may converge to zero if ∑ 1/n_i diverges, but that's only for independent events. However, if there is dependence due to overlapping structure, could the intersection have positive density? Possibly. Let's consider an example: n_i = i, but we restrict to a subsequence of i that are multiples of some large number M. For example, let n_i = Mi (i.e., multiples of M). The sum of reciprocals ∑ 1/(Mi) diverges as well (since 1/M times harmonic series). However, the residue classes a_i = 0 for all i produce a union that includes numbers divisible by Mi for some i. The uncovered set includes numbers not divisible by any Mi, i.e., numbers coprime to M (or at least not divisible by any multiple of M beyond M?). Actually, consider M = 6. Then n_i = 6i. If we take a_i = 0 for all i, the union includes numbers divisible by some 6i, i.e., numbers divisible by 6, 12, 18, ... So the uncovered set includes numbers not divisible by any multiple of 6? That includes numbers that are not divisible by 6 (i.e., numbers whose prime factorization does not include 2 or 3?), but also numbers divisible by 2 but not by 6? Wait, if a number is divisible by 2 but not by 3, it's not divisible by any 6i (since 6i always divisible by 3). So it's uncovered. Similarly, numbers divisible by 3 but not by 2 are uncovered. So the uncovered set includes numbers that are not divisible by 2 or 3 (i.e., numbers coprime to 6) plus numbers divisible by exactly one of 2 or 3 but not both. 
Actually, numbers that are divisible by 2 but not by 3 have factor 2 but not 3; they cannot be divisible by any 6i because any 6i is multiple of 3. So they are uncovered. So the uncovered set includes all numbers not divisible by 6? Let's examine: Let n = 2 * 5 = 10. Is 10 divisible by any 6i? For i = 1, 6 divides 10? No. For i = 2, 12 divides 10? No. So 10 is uncovered. So uncovered set includes numbers that are not multiples of 6. That set has density 5/6 > 0. So property (P) fails for this sequence, even though ∑ 1/n_i diverges. Wait, check: n_i = 6i, the union of multiples of 6i for all i includes numbers that are multiples of 6 (i.e., any multiple of 6 is divisible by 6i for some i?). Actually, any multiple of 6 is of the form 6k. Then set i = k, then 6i = 6k, so indeed 6k is a multiple of 6i. So the union of all multiples of 6i includes all multiples of 6. So the uncovered set is numbers not divisible by 6. So density 5/6 > 0. So property (P) fails.
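The n_i = 6i counterexample checks out numerically; this sieve (illustrative scaffolding only) marks the multiples of 6i for every i with 6i ≤ X and counts survivors:

```python
X = 60_000
covered = [False] * (X + 1)
i = 1
while 6 * i <= X:                      # moduli 6, 12, 18, ..., residue 0
    for m in range(6 * i, X + 1, 6 * i):
        covered[m] = True
    i += 1

survivors = sum(1 for x in range(1, X + 1) if not covered[x])
print(survivors / X)  # → 0.8333...: exactly the non-multiples of 6 survive
```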
Thus, divergence of ∑ 1/n_i is not sufficient.
Now, we need to understand the exact condition (P). It says: For any infinite choice of residues a_i, the uncovered set has density zero. That is a property of the sequence of moduli. It says that the family of residue classes is "universal" in the sense that you cannot avoid all of them; they collectively cover almost all integers irrespective of which residues you pick. This is reminiscent of "Erdős' covering system" but in a more general sense: In a covering system, you have a finite set of congruences that cover all integers (density 1). Here we have an infinite sequence of moduli such that for any choice of residues, the union of those congruences has density 1 (i.e., the complement has density 0). So it's a "universal covering system".
One might suspect that property (P) forces ∑ 1/n_i to diverge. Suppose ∑ 1/n_i converges; could (P) hold? The example n_i = 2^i with a_i = 0 already fails: the uncovered set is the odd numbers, of density 1/2 > 0. More generally, with a_i = 0 for all i the uncovered set is the set of numbers divisible by no n_i. Its density is heuristically around ∏ (1 - 1/n_i), and exactly that when the moduli are pairwise coprime (by the Chinese remainder theorem); when ∑ 1/n_i converges, log(1 - x) ≈ -x shows this product converges to a positive limit, so (P) fails for pairwise coprime moduli with convergent reciprocal sum. Lack of coprimality only helps the avoider: if all the n_i share a common divisor d > 1, every number not divisible by d is uncovered, a set of density 1 - 1/d > 0. Even for nested moduli like n_i = 2^i the product ∏_{i=1}^{∞} (1 - 1/2^i) = 0.288788... (a q-Pochhammer value) is positive, and the actual uncovered density, 1/2, is larger still.
So property (P) fails.
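Numerically, the product ∏_{i≥1} (1 - 2^{-i}) indeed settles well away from zero (a float check of the constant quoted above):

```python
prod = 1.0
for i in range(1, 200):       # the omitted tail changes the value by < 2**-199
    prod *= 1 - 2.0 ** -i
print(prod)  # ≈ 0.288788..., comfortably positive
```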
Thus, a necessary condition for (P) is ∑ 1/n_i diverges. But as we saw, not sufficient.
Now, the question: does (P) imply (U)? One might hope (U) is equivalent to a quantitative condition on the product, say that for every ε > 0 there exists k with ∏_{i=1}^{k} (1 - 1/n_i) < ε; but the overlapping examples show the product alone does not control the worst case. What we need is a uniform bound on the density of the complement after finitely many steps. For each k, define the worst-case uncovered density after k steps:
δ_k = sup_{a_1, ..., a_k} d(ℤ \ ⋃_{i=1}^k (a_i mod n_i))
where d(S) denotes natural density. The question asks: Does (P) imply that δ_k → 0 as k → ∞? Because if for any ε > 0 there exists k such that δ_k < ε, then (U) holds. So (U) is exactly that the supremum over choices of residues of the density of the complement after k steps tends to zero. This is a uniform version of (P). (P) says that for any infinite sequence, the density of the full complement (with all i) is zero. But does that imply that the density after a finite prefix (the worst-case) tends to zero? Usually, one expects that if the infinite union always covers almost everything, then a sufficiently large finite prefix should also cover almost everything uniformly. However, there could be a sequence where each infinite selection covers everything, but you need to go arbitrarily far into the sequence to get coverage, and the worst-case uncovered density after any fixed finite k remains bounded away from zero. For example, maybe there is a sequence where you can "delay" covering a positive density set until you get far enough, but eventually it gets covered for any infinite selection. But does that satisfy (P)? Let's consider constructing such a sequence.
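For toy prefixes, δ_k can be computed exactly by enumerating every residue choice over one period L = lcm(n_1, ..., n_k) (a brute-force sketch; the function name is mine):

```python
from itertools import product
from math import lcm

def delta(moduli):
    """Worst-case uncovered density over all residue choices, exact on one period."""
    L = lcm(*moduli)
    worst = 0.0
    for residues in product(*(range(n) for n in moduli)):
        uncovered = sum(1 for x in range(L)
                        if all(x % n != a for n, a in zip(moduli, residues)))
        worst = max(worst, uncovered / L)
    return worst

print(delta([2]))        # → 0.5
print(delta([2, 3]))     # → 0.333...; classes mod 2 and mod 3 always meet, by CRT
print(delta([2, 3, 4]))  # → 0.333... still: nest the mod-4 class inside the mod-2 one
```

Note that adding the modulus 4 does not lower the worst case: the adversary can waste the new congruence by choosing its class inside an earlier one, which is exactly the overlap phenomenon discussed above.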
We need to find a sequence n_i such that for any infinite selection of residues a_i, the uncovered set is of density zero, but for each finite k, there exists some selection of residues a_i (depending on k) such that the uncovered set after those k residues has density > ε0 for some fixed ε0 > 0 (independent of k). The selection can be different for each k; we just need to show sup over selections of the uncovered density after k does not tend to zero.
One candidate: Let n_i be the i-th prime p_i. For each k, can we choose residues a_i for i ≤ k such that the uncovered set after those k residues has density > ε0? Possibly yes: For the first k primes, we can choose residues that avoid covering many numbers. For example, choose a_i = 0 for all i ≤ k (i.e., exclude multiples of each of the first k primes). Then the uncovered set after k steps is the set of integers not divisible by any of the first k primes. Its density is ∏_{p ≤ p_k} (1 - 1/p) ~ e^{-γ} / log p_k. This tends to zero as k → ∞, albeit slowly. So δ_k → 0. So (U) holds for primes.
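The Mertens-type decay above can be watched numerically. A short Python sketch (illustrative only; the asymptotic e^{-γ}/log p_k is Mertens' third theorem) compares the exact product with the estimate:

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p in range(2, n + 1) if sieve[p]]

GAMMA = 0.5772156649015329                 # Euler-Mascheroni constant
for bound in (10, 100, 1000, 10000):
    ps = primes_up_to(bound)
    prod = 1.0
    for p in ps:
        prod *= 1 - 1 / p                  # exact uncovered density with a_i = 0
    mertens = math.exp(-GAMMA) / math.log(ps[-1])
    print(bound, round(prod, 5), round(mertens, 5))
```

The two columns track each other and both tend to zero, so δ_k → 0 for the primes, albeit only at rate about 1/log p_k.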
Thus, we need a more pathological sequence, one where the worst-case uncovered density δ_k tends to zero arbitrarily slowly, or fails to tend to zero uniformly over the choice of residues.
Alternatively, maybe (P) does imply (U). Let's try to argue: Suppose (P) holds. Then for any ε > 0, consider the set of all possible infinite sequences of residues a_i. For each such sequence, define the set S(a) = {x ∈ ℤ: ∀i, x ≠ a_i (mod n_i)}. By (P), d(S(a)) = 0. Now, we want to find a finite prefix k such that d(S_k(a)) < ε for all a, where S_k(a) = {x: ∀ i ≤ k, x ≠ a_i (mod n_i)}. Note that S(a) ⊆ S_k(a), because requiring avoidance of fewer congruences yields a larger set. So d(S_k(a)) ≥ d(S(a)) = 0. Actually, S_k may be larger, so its density may be larger (or at least not smaller). We need to bound it.
We have a family of sets S_k(a) parameterized by a (the infinite sequence). For each fixed k, the supremum over a of d(S_k(a)) is δ_k. We need to show δ_k → 0 as k → ∞ if (P) holds. Suppose not; then there exists ε > 0 such that for all k, there is a choice of residues a_i (depending on k) with d(S_k(a)) ≥ ε. That is, for each k, there exists a finite prefix of length k and a selection of residues a_i for i ≤ k that leaves a set of density at least ε uncovered. However, the selection may not extend to an infinite sequence that satisfies the same property? Actually, we can extend it arbitrarily to an infinite sequence by picking any residues for i > k. Then the uncovered set for the infinite sequence S(a) is a subset of S_k(a) (since we added more constraints). So d(S(a)) ≤ d(S_k(a)). Since d(S(a)) = 0, we cannot deduce a contradiction directly. Wait, careful: S(a) = {x: ∀ i, x ≠ a_i (mod n_i)}. As we add more constraints (i > k), the set S(a) becomes smaller (or stays same). So d(S(a)) ≤ d(S_k(a)). Since d(S(a)) = 0, we have 0 ≤ d(S_k(a)). So we cannot contradict d(S_k(a)) ≥ ε > 0; it's possible that S_k(a) has positive density, but adding more constraints reduces it to zero. So (P) does not forbid positive density after a finite prefix; it only requires that as we go to infinity, the density goes to zero (for each fixed infinite selection). So (U) is a stronger condition: it requires uniform convergence to zero across all possible infinite selections.
Thus, (P) does not obviously imply (U). The question is asking whether (U) is a consequence of (P). It might be false: there might be sequences where you can have infinite selections that eventually cover everything, but you need to go arbitrarily far to reduce the uncovered density, and there is a uniform lower bound >0 for any finite k across some selections.
We need to find a counterexample or prove it's true.
We suspect the answer is no: there exists a sequence n_i satisfying (P) but not (U). Let's try to construct such a sequence.
We need a sequence of moduli that is "sparse" enough that for any infinite selection of residues, the uncovered set has density zero (maybe because the moduli are "everywhere" in some sense), but also "flexible" enough that for any finite k, we can choose residues to leave a positive density uncovered. That seems contradictory: if the sequence is "dense" in the sense of covering all integers eventually for any infinite selection, why can't we find a uniform bound? Actually, we want a sequence where the property (P) holds but the convergence to zero can be arbitrarily slow depending on the selection. For each k, we can choose a selection that leaves uncovered density > ε0. But for each fixed selection, as i → ∞, the uncovered density tends to zero. However, the rate may be arbitrarily slow, depending on the selection. For example, suppose we have a sequence of moduli that grows extremely fast, but also includes "small" moduli interspersed. For a selection that picks residues that avoid covering a given set of numbers for a long time, we may need many steps before the uncovered density becomes small. But does the property (P) hold? We need to ensure that for any selection, eventually the uncovered density goes to zero. If the moduli are sparse but have some property like every integer eventually belongs to some residue class for any selection? Hmm.
Alternatively, maybe (P) always implies (U). Let's try to prove it. Suppose (U) fails: there exists ε > 0 such that for any k, there is a selection of residues a_i (maybe depending on k) such that the uncovered set after k steps has density ≥ ε. Then we need to show that (P) fails: there exists a selection of residues for the infinite sequence such that the uncovered set has positive density. That is, we need to construct an infinite selection that "diagonalizes" the finite selections to produce a positive density uncovered set. This is reminiscent of a compactness argument: We have a sequence of finite selections that keep a positive density uncovered. We want to combine them into an infinite selection that keeps positive density. However, note that the finite selections may be different for each k; we need to combine them in a way that the infinite selection has positive density uncovered set. Since each finite selection may use residues that conflict with each other, we need to be careful.
We could try to use a probabilistic method: Choose each a_i randomly uniformly from the n_i residues. Then the expected density of the uncovered set after k steps is something like ∏_{i=1}^k (1 - 1/n_i). But we need a lower bound on the supremum of uncovered density over all possible choices. The supremum is at least the expected value (since supremum ≥ average). So δ_k ≥ ∏_{i=1}^k (1 - 1/n_i). Therefore, if the product does not tend to zero, then δ_k does not tend to zero. But if the product tends to zero, then maybe δ_k also tends to zero? Not necessarily: the supremum could be larger than the average. But we need a lower bound for sup. Actually, we can find a lower bound on sup: Since for each i we can choose a_i to maximize the uncovered density after k steps (worst-case). The worst-case scenario is to minimize the union's density. Since each residue class has density 1/n_i, the union's density is at most sum_i 1/n_i (by union bound). So the complement's density is at least 1 - sum_i 1/n_i. But if sum_i 1/n_i < 1, then we can guarantee that there exists a selection of residues with uncovered density > 0 (maybe just choose any residues). Actually, if sum_i 1/n_i < 1, then by the union bound, for any selection of residues, the union's density is at most sum_i 1/n_i < 1, so the complement has density at least 1 - sum_i 1/n_i > 0. Thus, property (P) fails: there exists a selection (any selection) such that uncovered set has positive density. Wait, careful: The union bound says density of union ≤ sum of densities, but that's an upper bound. So the union could have density less than or equal to sum_i 1/n_i. If sum_i 1/n_i < 1, then the complement has positive density. However, we need to guarantee that there exists a selection where the union's density is not too large. The union bound is an upper bound; we need a lower bound for the complement. For a given selection, the complement's density = 1 - density(union).
Since density(union) ≤ sum_i 1/n_i, we have complement density ≥ 1 - sum_i 1/n_i. This holds for any selection of residues. So if sum_i 1/n_i < 1, then for any selection, the uncovered set has density at least 1 - sum_i 1/n_i > 0. So property (P) fails because the uncovered set has positive lower bound across all selections, not just some selection. Actually, property (P) says: For any selection, the uncovered set has density zero. So if we find any selection such that uncovered density > 0, then (P) fails. But if sum_i 1/n_i < 1, then for any selection, uncovered density > 0. So (P) fails. Therefore, a necessary condition for (P) is that ∑ 1/n_i diverges (or at least ≥ 1). Actually, we need ∑_{i=1}^{∞} 1/n_i = ∞? Let's check: If ∑ 1/n_i converges to a finite sum S, then for sufficiently large i, the tail sum is small. But we need condition for all i. The union bound for the entire infinite sequence yields density(union) ≤ S. If S < 1, then the complement has density ≥ 1 - S > 0 for any selection, contradicting (P). So (P) implies S ≥ 1. Actually, S could be any value ≥ 1; if S = 1, then the union bound only gives density(union) ≤ 1, which is trivial; it doesn't guarantee density(union) = 1. So (P) may hold even if sum diverges or is ≥ 1.
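The union-bound step can be illustrated by brute force over one full period (Python sketch; the moduli n_i = 3^i are just an arbitrary example with reciprocal sum below 1):

```python
import random

# If sum 1/n_i < 1, every residue choice leaves uncovered density at least
# 1 - sum 1/n_i.  Example: n_i = 3^i for i = 1..5, checked exactly over one
# full period lcm(n_1, ..., n_5) = 3^5 = 243.
moduli = [3 ** i for i in range(1, 6)]
period = 3 ** 5
min_unc = period - sum(period // m for m in moduli)   # = 243 - 121 = 122

random.seed(0)
for _ in range(200):                                  # sample residue choices
    res = [random.randrange(m) for m in moduli]
    unc = sum(1 for x in range(period)
              if all(x % m != a for m, a in zip(moduli, res)))
    assert unc >= min_unc                             # union bound never violated
print(min_unc / period)                               # > 1/2
```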
Thus, is the necessary condition actually ∑_{i=1}^{∞} 1/n_i = ∞? If S is finite but > 1, the union bound only gives density(union) ≤ S, and a bound exceeding 1 is vacuous; so we cannot conclude that (P) fails when S is finite but > 1. Note that n_i = i does not probe this case: its reciprocal sum is the harmonic series, which diverges. Is there a sequence with S finite and > 1 for which (P) holds? Not sure.
We need to consider the condition (P) more precisely: for any selection of residues, the uncovered set has density zero. That means the union of residue classes covers almost all integers (density 1). For each selection, we have a union of sets each of density 1/n_i. In general, the union's density may be less than the sum of densities due to overlaps, but we need it to be 1 for all selections, which is a very strong requirement. It suggests that the moduli must be "dense" in the sense of natural density: the set of moduli up to N must be large enough that the sum of reciprocals diverges (or some property of that sort). Divergence alone is not enough, though. For example, with n_i = 6i the sum of reciprocals diverges (it is a constant multiple of the harmonic series), yet property (P) fails: pick a_i = 0 for all i; every chosen class then consists of multiples of 6i, hence of multiples of 6, so the uncovered set contains all integers not divisible by 6 and has density at least 5/6 > 0. So more than divergence of the sum is needed.
What about taking the n_i to be the integers with at least one large prime factor? The sum of reciprocals of such a sequence still diverges, so divergence again decides nothing.
We need to characterize sequences that satisfy (P). Let's attempt to find necessary and sufficient condition.
Given any selection of residues a_i, we want the set of integers not satisfying any of them to have density zero. Equivalent to: For any selection a_i, the complement of the union of those classes has density zero. So the union must have density 1.
Consider the set of all possible residue classes: For each i, there are n_i possible classes. The union of all classes across all i (i.e., all congruences) is the whole ℤ (density 1). That's trivial. But we need that any selection of one class per i also covers almost all ℤ.
This is reminiscent of "covering system" but with a twist: In a covering system, you have a finite set of congruences that cover ℤ. Here we have infinite set, but we require that any selection of one residue per modulus also covers ℤ (density 1). This is like a "strong covering system" concept: the family of all residue classes modulo each n_i is a "strong covering family". For any choice of one residue per modulus, you get a covering of ℤ (density 1). So it's a universal covering property.
This seems like an "Erdős–Ginzburg–Ziv" type combinatorial property? Not exactly.
We can think of this as a "hypergraph covering" problem: The ground set is ℤ (or an interval [1, N] approximating ℤ). For each i, we have a partition of ℤ into n_i classes (the residue classes). We choose one part from each partition. The union of chosen parts across all i must have density 1. For any selection of one part per partition, the complement has density zero. So the hypergraph has the property that any selection of one part per partition covers almost all vertices.
Alternatively, we can think of the "maximal density of a set that avoids at least one residue from each modulus". For each i, a set S may intersect each residue class at most something? Actually, we want a set S that avoids a particular residue class for each i. The condition (P) says that any set S that, for each i, avoids a designated residue class a_i (i.e., S ∩ (a_i mod n_i) = ∅) must have density zero.
Define "avoidance set" for a selection a = (a_i). Then S(a) = ∩_{i=1}^∞ (ℤ \ (a_i mod n_i)). Condition: d(S(a)) = 0 for all a.
Now, we can think of the "maximal possible density of a set S that avoids one residue class for each i". That is, define:
Δ = sup_{a} d(S(a))
where supremum is over all possible infinite sequences a. Condition (P) says Δ = 0. So the supremum over infinite sequences is zero. The question (U) asks whether the supremum over finite prefixes (with infinite tail arbitrary) tends to zero: define Δ_k = sup_{a} d(S_k(a)), where S_k(a) = ∩_{i=1}^k (ℤ \ (a_i mod n_i)). Since the constraints are weaker (only first k constraints), sup over a (including infinite a) is at least sup over finite a (since we can choose any residues for i > k arbitrarily). So Δ_k is non-increasing in k (since we add more constraints, the set gets smaller, so density sup may decrease). The question: Does Δ_k → 0 as k → ∞? Since each S(a) = ∩_{i=1}^∞ (ℤ \ (a_i mod n_i)), we have S(a) ⊆ S_k(a) for each k, so d(S(a)) ≤ d(S_k(a)). If d(S(a)) = 0 for all a, then we have 0 ≤ d(S_k(a)) for all a. That doesn't guarantee that sup_a d(S_k(a)) → 0. It could be that sup_a d(S_k(a)) stays positive due to some sequences a that have small density after infinite constraints but relatively large after finite constraints.
Thus, (U) is equivalent to asking whether the sequence of suprema Δ_k tends to zero. This is a property of the sequence of moduli n_i. Does (P) imply that Δ_k → 0? Not necessarily.
We need to consider a possible counterexample: a sequence where Δ_k stays bounded away from zero but (P) holds. Let's attempt to construct such a sequence.
We need to ensure that for each k, there exists a selection a_i (i ≤ k) such that the intersection of complements (i.e., numbers avoiding those residues) has density at least ε (some fixed > 0). However, for any infinite selection a_i (i ≥ 1), the density of the intersection of all complements is zero.
Thus, the infinite selection that yields large finite intersections must have some "cancellation" in the tail: adding more constraints eventually reduces density to zero. So the tail must be "strong" enough to reduce density to zero from any starting point. But we need to guarantee that for each finite prefix, there exists some selection that leaves a positive density uncovered. So the tail must be "universal" but can be arbitrarily slow.
One idea: Use a sequence of moduli that are pairwise coprime and have decreasing densities, such that product of (1 - 1/n_i) tends to zero, but slowly. Then for any fixed k, we can choose residues a_i for i ≤ k to maximize uncovered density: the worst-case scenario is to make the union as small as possible, i.e., choose the residues to be "disjoint"? But can we make them disjoint? For coprime moduli, the residue classes are independent: the intersection of any selection of residues is a congruence class modulo the product of the moduli (CRT). So the union of the selected classes across i ≤ k is a union of some arithmetic progressions; its complement is the intersection of the complements. Since the moduli are coprime, the events are independent in a measure sense: the density of the complement after k steps is ∏_{i=1}^k (1 - 1/n_i) regardless of the choice of residues? Actually, if the moduli are coprime, then each residue class modulo n_i has exactly 1/n_i density, and the sets are "independent": the proportion of integers that avoid a particular residue modulo each of the coprime moduli equals the product of the proportions: ∏ (1 - 1/n_i). This holds because the Chinese remainder theorem yields a uniform distribution of residues modulo the product. For any fixed selection of residues a_i, the set of integers that avoid all those residues has density exactly ∏ (1 - 1/n_i). Indeed, the complement of the union is the intersection of the complements; each complement is a set of integers not congruent to a_i mod n_i. For a given set of coprime moduli, the intersection of these complements is a union of some residue classes modulo the product M = ∏ n_i (since each condition forbids one class modulo n_i). The number of allowed classes modulo M is ∏ (n_i - 1), so the density is ∏ (1 - 1/n_i). This is independent of which residues are chosen. So for coprime moduli, Δ_k = ∏_{i=1}^k (1 - 1/n_i). If ∑ 1/n_i diverges, then product tends to zero. So Δ_k → 0.
So (U) holds for any sequence of pairwise coprime n_i with divergent sum of reciprocals. In particular, the primes satisfy this.
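The CRT independence claim is cheap to verify exhaustively for a tiny coprime example (Python sketch):

```python
from itertools import product

# For pairwise coprime moduli, the uncovered density after k constraints is
# prod (1 - 1/n_i) for EVERY residue choice.  Exhaustive check for (2, 3, 5).
moduli = (2, 3, 5)
period = 30                                  # = 2 * 3 * 5
allowed = 1
for m in moduli:
    allowed *= m - 1                         # CRT: n_i - 1 surviving classes each
# allowed / period = prod (1 - 1/n_i) = 4/15

for res in product(*(range(m) for m in moduli)):   # all 30 residue choices
    unc = sum(1 for x in range(period)
              if all(x % m != a for m, a in zip(moduli, res)))
    assert unc == allowed                    # always 8, independent of res
print(allowed, period)                       # 8 30
```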
Thus, counterexample must involve moduli that are not coprime, i.e., there is some overlap structure that can be exploited to keep the complement's density high for some selection of residues, even though for any infinite selection the complement's density goes to zero.
Potentially, we can use moduli that share many common factors, so that some choices of residues can be "aligned" to cover many of the same integers, leaving many uncovered. For example, if we have moduli that are multiples of a large integer M, then choosing residues that are all multiples of M? Wait, we need to choose residues a_i; each residue class is a set of integers congruent to a_i mod n_i. If the n_i share a common divisor d, then the residue classes are periodic with period n_i, but also have pattern modulo d. For example, if all n_i are even, then each residue class a_i mod n_i has parity equal to a_i mod 2. So we can choose all a_i to be odd, then each class includes only odd numbers; the union will be a subset of odd numbers, leaving all even numbers uncovered. That yields large uncovered density. However, if we have infinitely many moduli that share a common factor 2, then we can choose residues that are odd for all of them, covering only odd numbers; then the uncovered set (evens) has density 1/2 > 0, contradicting (P). So any sequence satisfying (P) cannot have infinitely many moduli sharing a common factor > 1. Actually, it cannot have a subsequence with a common divisor > 1 that appears infinitely often, because we could choose residues all congruent to the same class modulo that divisor and leave the complementary class uncovered. Let's examine this more carefully.
Suppose there exists d > 1 such that infinitely many n_i are multiples of d. Consider the subsequence of those n_i and choose residues a_i ≡ 1 (mod d). Because each such n_i is a multiple of d, every element of the class a_i mod n_i is ≡ 1 (mod d). Therefore, each such class is a subset of the odd numbers (if d = 2), or more generally of the numbers with one fixed residue modulo d. The union of those classes is then a subset of that single residue class modulo d, so the complement (the numbers not in the union) includes every number with any other residue modulo d, a set of density (d-1)/d > 0. Thus, the uncovered set has positive density, contradicting (P). So (P) implies that for any d > 1, only finitely many n_i are multiples of d: each integer d > 1 can divide at most finitely many n_i. Is this the known notion of a "primitive sequence"? Not quite: "primitive" usually means that no term divides another, and that does not rule out infinitely many terms sharing a common factor (e.g., 6, 10, 14, 22, ... are pairwise non-dividing but all even). Our condition is of a more restrictive character: the sequence must be "thin" in terms of divisibility, containing only finitely many multiples of any fixed integer d > 1. The primes satisfy this, since a fixed d > 1 divides at most one prime (namely d itself, when d is prime). Since every integer d > 1 has infinitely many multiples, the condition says the sequence cannot keep returning to them, i.e., cannot contain arbitrarily large "clusters" of terms sharing a common factor.
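The shared-divisor mechanism in the first half of this argument can be seen concretely (Python sketch with d = 2 and the arbitrarily chosen even moduli 4, 6, 10; it exhibits the trapped union only, with no other moduli present to cover the rest):

```python
# If every chosen residue is 1 mod 2 and every modulus is even, each class
# a_i mod n_i contains only odd numbers, so the union of the classes is
# trapped inside the odds and every even number stays uncovered.
moduli = (4, 6, 10)                 # all multiples of d = 2
residues = (1, 3, 5)                # all congruent to 1 mod 2
period = 60                         # lcm(4, 6, 10)

covered = {x for x in range(period)
           for m, a in zip(moduli, residues) if x % m == a}
assert all(x % 2 == 1 for x in covered)      # union contains no even number
print(1 - len(covered) / period)             # uncovered density, at least 1/2
```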
Thus, a necessary condition for (P) is that the sequence n_i is "primitive" in the sense that each integer d > 1 divides only finitely many n_i. Let's call such a sequence "squarefree-like" or "primitive sequence"? Actually, there is a known concept: "primitive" means no term divides another. That is weaker than our condition. Example: n_i = i (the integers) is not primitive because many numbers divide others, but does it satisfy the condition that each integer d > 1 divides infinitely many n_i? Yes, d divides infinitely many multiples: n_i = i for all i. So condition fails. Yet we argued earlier that n_i = i satisfies (P). Wait, does n_i = i satisfy (P) or not? Let's test again more carefully: For n_i = i, we need to check if property (P) holds. We argued earlier that for any selection of residues a_i, the uncovered set is density zero. However, above we found a potential counterexample: For d = 2, there are infinitely many even n_i (all even i). Choose a_i such that a_i ≡ 1 mod 2 for all even i. Then each class a_i mod n_i is a subset of odd numbers (since n_i is even, any residue class a_i ≡ 1 mod 2 includes only odd numbers). Then the union of all those classes (over all even i) is a subset of odd numbers, leaving evens uncovered. However, we also have odd i in the sequence. For odd i, the residues a_i could be something else; they could potentially cover evens as well. Could the odd i's cover evens? Possibly. So we need to consider the entire infinite sequence of all i (including odd i). Even if we choose residues for even i to be odd, the odd i residues could cover even numbers as well. So the uncovered set may not be all evens; it may be a subset of evens, but perhaps still positive density. Let's check: For odd i, the residue class a_i mod i can be any residue. For each odd i, does a_i cover evens? It depends on the residue. 
For example, if we pick a_i = 0 (mod i), then the class includes multiples of i, which can be even or odd depending on i and the multiple. Since i is odd, multiples of i alternate parity: i, 2i (even), 3i (odd), etc. So the class contains both evens and odds. So the union of classes from odd i may cover all evens (density 1/2). But can we choose residues for odd i to avoid covering evens? If we want to maximize uncovered density, we want to choose residues that minimize coverage of evens. For odd i, can we choose a_i such that the class a_i mod i contains only odd numbers? Since i is odd, any residue class a_i mod i includes both parities unless a_i is even? Actually, if i is odd, the parity of numbers in the class a_i mod i depends on a_i and i. For an arithmetic progression a_i + ki, the parity alternates because i is odd: If a_i is even, then a_i + ki is even when k is even? Let's test: i odd, say i = 3. If a_i = 0 (even), then numbers are 0, 3, 6, 9, 12,... So 0 (even), 3 (odd), 6 (even), 9 (odd), etc. So both parities appear. If a_i = 1 (odd), then numbers are 1 (odd), 4 (even), 7 (odd), 10 (even), ... So both parities appear. So any residue class modulo an odd modulus contains both even and odd numbers (since the step is odd, parity alternates). So odd i's cannot be used to restrict to only odd numbers; they cover both evens and odds.
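The parity observation generalizes beyond i = 3: an odd step flips parity at every hop, so the consecutive terms a and a + m already exhibit both parities. A one-line check (Python):

```python
# Any residue class a (mod m) with m odd contains both parities, because
# consecutive elements a and a + m differ by an odd number.
for m in range(3, 100, 2):                   # all small odd moduli
    for a in range(m):
        assert {a % 2, (a + m) % 2} == {0, 1}
print("every class mod an odd modulus hits both parities")
```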
Thus, the selection a_i ≡ 1 mod 2 for even i yields a union that covers only odd numbers from those even i. However, the odd i's cover both evens and odds. So overall, the union of all classes may still cover the evens fully, depending on the selection of residues for odd i. But can we choose residues for odd i to leave some evens uncovered? Let's try to maximize the uncovered set: suppose we want to leave as many evens uncovered as possible. For each odd i, we can choose a residue class that covers as few evens as possible. Since each class covers evens and odds roughly equally (about density 1/(2i) of each), maybe we can make the union cover only a small proportion of the evens. However, there are infinitely many odd i, so their union may cover almost all evens anyway. But maybe not: the union of infinitely many arithmetic progressions may have density less than 1 due to overlaps. For example, take a_i = 0 for all odd i. The union is then the set of integers divisible by some odd i > 1, i.e., having an odd divisor > 1. Does every even number have an odd divisor > 1? No: the numbers 2^k have no odd divisor other than 1. So the powers of 2 are not divisible by any odd i > 1 and remain uncovered by the odd moduli, while every other even number does have an odd divisor > 1 and gets covered. The even i's with residues a_i ≡ 1 (mod 2) do not cover evens either (their classes contain only odd numbers). So exactly the powers of 2 remain uncovered among the evens. There are infinitely many of them, but they have zero density. So the uncovered set may well have density zero.
Thus, even though we can choose residues for even i to restrict to odds, the odd i's may still cover evens heavily, but there may be a zero-density subset of evens (like powers of 2) that remain uncovered. So overall density uncovered may be zero. So (P) may hold for n_i = i.
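This scenario can be simulated directly. The following Python sketch uses the moduli 2..N with the residue choice just described (a_i = 1 for even i, a_i = 0 for odd i); the survivors in [2, N] turn out to be exactly the powers of 2, a zero-density set:

```python
# Moduli 2..N, with residue 0 for odd moduli (forbid their multiples) and
# residue 1 for even moduli (classes of odd numbers only).  For x <= N the
# survivors are exactly the powers of 2: they have no odd divisor > 1, and
# x - 1 is odd, so no even modulus can put x in its forbidden class.
N = 512
moduli = range(2, N + 1)
uncovered = [x for x in range(2, N + 1)
             if all(x % m != (1 if m % 2 == 0 else 0) for m in moduli)]
print(uncovered)        # [2, 4, 8, 16, 32, 64, 128, 256, 512]
assert uncovered == [2 ** j for j in range(1, 10)]
```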
But does (U) hold? For any ε > 0, can we find k such that for any selection of residues a_i for i ≤ k, the uncovered density is < ε? For n_i = i, we suspect yes: after k steps, the worst-case uncovered density is some function of k, perhaps something like 1/k or 1/log k, which tends to zero. But we need to prove that sup over selections of the uncovered density after k steps tends to zero. This is not obvious. However, we can try to bound sup by the product of (1 - 1/n_i) over i ≤ k? Actually, for n_i = i, the moduli are not coprime (they share factors). But maybe the worst-case uncovered density after k steps is still bounded by ∏_{i=1}^k (1 - 1/n_i). Let's test: For k = 1, n_1 = 1? If n_1 = 1, then product (1 - 1) = 0, but uncovered density may be 0? Actually, if n_1 = 1, then the only residue class is {0} = all integers, so any selection covers everything, uncovered density zero. So consistent.
For k = 2, n_1 = 1, n_2 = 2. The product (1 - 1/1)*(1 - 1/2) = 0 * (1/2) = 0. But uncovered density after two constraints may be positive? Let's examine: If n_1 = 1, the only residue class is 0 mod 1, which includes all integers, so the condition "does not satisfy a_1 mod 1" is impossible: no integer can avoid that class. So the uncovered set after k steps is empty regardless of other constraints. So product bound is not correct when moduli share factors.
Thus, the worst-case uncovered density after k steps is not simply the product if moduli are not coprime. In fact, if the moduli share a common factor, the constraints can be redundant. For example, if n_1 = 2, n_2 = 4, then the condition a_1 mod 2 can be thought of as forbidding one of two classes; then a_2 mod 4 forbids one of four subclasses of each of those classes. However, the union of those two classes may overlap heavily, reducing the uncovered set more. However, the worst-case scenario is to minimize the union, i.e., maximize the uncovered set. For n_1 = 2, n_2 = 4, what is the maximal possible uncovered density after those two constraints? Let's compute: There are 2*4 = 8 possible pairs (a_1, a_2). The union of the two residue classes is the union of a set of numbers congruent to a_1 mod 2 and numbers congruent to a_2 mod 4. The class mod 2 is a union of two residue classes mod 4: if a_1 = 0 mod 2, then it includes residues 0 and 2 mod 4; if a_1 = 1 mod 2, includes residues 1 and 3 mod 4. So the union of a_1 mod 2 and a_2 mod 4 may cover 1, 2, or 3 residues out of 4? Let's analyze: Suppose a_1 = 0 (so includes residues 0 and 2 mod 4). Choose a_2 = 1 (so includes residue 1 mod 4). The union covers residues 0,2,1 mod 4, i.e., 3 out of 4 residues; uncovered residue is 3 mod 4, density 1/4. If we choose a_2 = 0, then union includes residues 0 (from a_2) and also 0,2 from a_1: total residues 0,2 (two residues), uncovered residues 1 and 3, density 1/2. If a_2 = 2, union includes 2 from a_2 and 0,2 from a_1: residues 0,2 (two residues). So uncovered density 1/2. If a_2 = 3, union includes 3 and 0,2: residues 0,2,3 (3 residues), uncovered density 1/4. So the maximal uncovered density is 1/2 (when a_2 is 0 or 2). So δ_2 = 1/2.
Now compute product: ∏_{i=1}^2 (1 - 1/n_i) = (1 - 1/2)(1 - 1/4) = (1/2)(3/4) = 3/8 = 0.375. That's less than 0.5. So product underestimates the maximal uncovered density.
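Both numbers are easy to confirm by brute force (Python):

```python
# Worst case for moduli (2, 4): maximize the uncovered density over all
# 2 * 4 = 8 residue pairs, working over one period of length 4.
period = 4
worst = max(
    sum(1 for x in range(period) if x % 2 != a1 and x % 4 != a2) / period
    for a1 in range(2) for a2 in range(4)
)
print(worst)                                 # 0.5  (attained e.g. at a1 = 0, a2 = 0)
assert worst > (1 - 1/2) * (1 - 1/4)         # strictly beats the product 3/8
```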
Thus, the worst-case uncovered density after k steps can be larger than product. Indeed, when moduli share factors, we can align residues to "cover" overlapping sets and reduce coverage.
Thus, we need to consider the worst-case scenario for each finite prefix: we want to choose residues that maximize the uncovered density after those k constraints. This is a combinatorial problem: given moduli n_1, ..., n_k, choose residues a_i to minimize the density of the union of those residue classes (or maximize the density of the complement). This is like "set system" with each set being a residue class. Since each class has density 1/n_i, but they can overlap heavily. We want to minimize coverage.
We need to find a sequence n_i such that for each finite k, we can choose residues that leave a positive density uncovered (bounded away from zero), but for any infinite selection, the uncovered density is zero. That would be a counterexample.
Consider an approach: Use moduli that are "nested" multiples of some base, but ensure that for any infinite selection, the uncovered set is zero. Wait, we earlier argued that if there is an infinite subsequence of moduli that all share a common divisor > 1, then (P) fails, because we can choose residues all of the same class mod that divisor, leaving a positive density uncovered. But perhaps we can circumvent this by ensuring that any infinite selection must include at least one modulus that does not share that divisor? Actually, if there are infinitely many n_i sharing divisor d > 1, then we can choose residues a_i such that a_i ≡ r (mod d) for all those i, where r is some fixed residue, and for the other i we choose arbitrarily. The union of all those classes will be a subset of the residue class r (mod d). But the other moduli may cover numbers outside that class. However, if the other moduli are "sparse" and cannot cover all numbers outside that class, perhaps the uncovered set may still have positive density. Let's examine: Suppose we have infinitely many n_i divisible by d, and we choose a_i ≡ r (mod d) for all those i. Then the union of those classes is a subset of the class r (mod d). The other moduli (not divisible by d) can cover numbers outside r (mod d). But can they cover all of them? Possibly not. If the set of moduli not divisible by d is "thin", maybe they cannot cover all numbers outside r (mod d). For (P) to hold, we need that for any selection of residues, the uncovered set has density zero. So if we choose a_i ≡ r (mod d) for all i divisible by d, we also need to consider the other i's: they might be able to cover the rest of the integers. However, we can also choose residues for those other i's in a "bad" way to minimize coverage. 
So to produce a selection that leaves positive density uncovered, we can choose residues for the d-multiples to restrict to a single residue class modulo d, and for the other moduli, choose residues such that they cover as little as possible of the complement. If the set of moduli not divisible by d is "sparse" enough, maybe their union cannot cover the entire complement, leaving positive density uncovered. This suggests a potential counterexample: Let n_i be a sequence where infinitely many are multiples of a large integer d_i that grows, and the rest are "sparse". For each k, we can choose residues to restrict to a certain class modulo d_k, leaving a positive density uncovered. But as we go further, we need to add more moduli that eventually cover the rest. However, we can choose d_k to increase so that each time we need a new modulus to cover the remaining set, and the uncovered density after k steps stays above a fixed epsilon.
Alternatively, we can consider a sequence of moduli n_i where each n_i is a product of distinct primes, but each new modulus introduces a new prime factor not seen before. For each i, we can choose residues that avoid covering numbers that are not divisible by any of those primes? Wait, we want to maximize uncovered density after finite steps. If each n_i is a product of many primes, then each residue class is a subset of numbers with a specific residue modulo a large modulus. That may be "thin" and cover only a small proportion of numbers (1/n_i). However, the union of many such thin classes may still cover almost all numbers due to overlapping, but we can try to choose residues that are "aligned" to avoid covering many numbers. For example, if n_i are all powers of 2, we can choose a_i = 0 for all i, leaving uncovered odds (density 1/2). But (P) fails because uncovered density > 0. So we need to avoid such a case.
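The powers-of-2 case mentioned above can be verified over a single period, since the avoidance set is periodic mod lcm(n_1, ..., n_k). A small sketch:

```python
from math import lcm

def uncovered_density(moduli, residues):
    """Exact density of integers satisfying none of the congruences
    a_i (mod n_i): the avoidance set is periodic mod lcm(n_i)."""
    L = lcm(*moduli)
    hits = sum(1 for x in range(L)
               if all(x % n != a for n, a in zip(moduli, residues)))
    return hits / L

# n_i = 2^i with a_i = 0: every chosen class contains only even numbers,
# so the odds (density 1/2) are never covered and (P) fails.
print(uncovered_density([2, 4, 8, 16, 32], [0, 0, 0, 0, 0]))  # 0.5
```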
Perhaps we can try a sequence mixing even and odd moduli, where the even moduli have an increasing power-of-2 part: n_i = 2^i · m_i with m_i odd. If we choose residues a_i ≡ 1 (mod 2^i) for those moduli, each chosen class is a subset of the odd numbers, so their union lies inside the odds. If every modulus were of this form, the evens would stay uncovered and (P) would fail outright; so the sequence must also contain odd moduli to handle the evens. But for those odd moduli too we can choose residues adversarially, trying to cover as few evens as possible. Can a class modulo an odd modulus avoid the evens entirely? No: any residue class modulo an odd modulus contains both evens and odds (a and a + n have opposite parities), so each odd modulus inevitably covers some evens, but only with density 1/(2n_i). If the sum of these densities ∑ 1/(2n_i) over the odd moduli diverges, their union may cover almost all evens; if it converges, the evens may retain positive uncovered density. (P), however, demands zero density for every selection.
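The parity claim (no class modulo an odd modulus can avoid the evens) is a one-line check:

```python
# Check the parity claim: a residue class a (mod n) with n odd contains both
# even and odd numbers, because a and a + n have opposite parities.
for n in range(3, 40, 2):          # odd moduli
    for a in range(n):
        assert (a % 2) != ((a + n) % 2)
print("every residue class mod an odd modulus meets both parities")
```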
So we must ensure that for any selection, the sum of densities of evens covered diverges. This is tricky.
We need to find a sequence n_i such that (P) holds but (U) fails. Let's try to prove that (P) implies (U) first. Perhaps the statement is true. Let's attempt to prove it.
Goal: Assuming (P), for any ε > 0, find k such that for all choices of residues a_i, the uncovered density after k congruences is < ε.
Idea: Use a compactness / diagonal argument: Suppose not. Then there exists ε > 0 such that for each k we can find a selection a_i^{(k)} (i = 1..k) such that the uncovered set after those k congruences has density ≥ ε. We need to combine these selections into an infinite selection that violates (P). Since each selection is defined only for i ≤ k, we need to extend them to infinite sequences in some consistent way. Perhaps we can use a diagonal argument: For each i, consider the set of possible residues a_i that appear infinitely often in the selections for k ≥ i (by some pigeonhole principle). Since each n_i is finite, there is a residue r_i that appears infinitely often as the i-th residue in the "bad" selections (those with uncovered density ≥ ε). Then define an infinite sequence a_i^* = r_i. Then we need to argue that the uncovered set for a^* has density ≥ ε (or > 0). However, we need to be careful: The uncovered set after infinite constraints is a subset of the uncovered set after any finite prefix, so its density is ≤ the density after each finite prefix. Wait, the uncovered set after infinite constraints is smaller (more constrained) than after a finite prefix. So if after k steps the uncovered set has density ≥ ε, then after adding more constraints (for i > k), the uncovered set can only shrink, so its density may become less than ε, possibly zero. So we cannot directly infer that the infinite selection has density ≥ ε. However, perhaps we can choose a subsequence of selections that "converge" in some sense, ensuring that the infinite selection leaves a set of density at least ε. But since adding more constraints can only reduce the uncovered set, we need to ensure that the constraints added beyond step k do not reduce the uncovered density below ε. That seems contradictory: we need to construct an infinite selection that avoids covering a set of density ≥ ε even after infinitely many constraints. This would contradict (P). 
So we need to use a more careful argument.
Perhaps we can use a "compactness" argument: Let X be the set of all infinite sequences (a_i) (with a_i ∈ ℤ/n_iℤ). Endow it with the product topology (compact by Tychonoff). For each k, define a closed subset B_k of sequences for which the uncovered density after the first k constraints is ≥ ε. By assumption, B_k is nonempty for all k. But the B_k are not nested decreasing: if the uncovered density after k constraints is ≥ ε, then after k+1 constraints it could drop below ε, so a sequence in B_k need not lie in B_{k+1}. We could consider instead C_k = { sequences (a_i) : the uncovered density after the first k constraints is ≥ ε/2 } (or some other threshold), hoping for a nested property; but there is none. So we turn to a diagonal argument: For each k, we have a finite selection a_i^{(k)} for i ≤ k. Since the number of possible residues for each i is finite, we can find an infinite subsequence of k's along which a_i^{(k)} converges for each i (i.e., becomes eventually constant). Define a_i^* as that limiting residue. Then for any fixed i_0 and all sufficiently large k in the subsequence, the first i_0 residues of a^{(k)} equal a_1^*, ..., a_{i_0}^*. So the infinite selection a^* agrees with finite "bad" selections on arbitrarily long prefixes. However, the uncovered density for a^* might still be small, because the constraints beyond any prefix can cover much of what the prefix left uncovered. Note also that for any given i, the limit defining a_i^* exists only if the sequence a_i^{(k)} stabilizes for large k.
But we cannot guarantee stabilization a priori; we can only guarantee that there is a subsequence where each a_i^{(k)} stabilizes (by a diagonal argument: enumerate the i's and repeatedly take subsequences). This yields an infinite subsequence of k_j such that for each i, a_i^{(k_j)} eventually becomes constant (in j). Then define a_i^* as that constant. So a_i^* is a limit of residues that appear in infinitely many "bad" selections. Now we need to argue that for this a^*, the uncovered density after any finite prefix k_j is at least ε (since it's the same as the selection used to produce that prefix). However, the infinite selection includes also residues for i > k_j, which may further reduce the uncovered set. But we can argue that the uncovered density for the infinite selection is at least the limsup of uncovered densities after prefixes. Since each prefix's uncovered set contains the infinite uncovered set as a subset, the density of the infinite set is ≤ the density of each prefix's uncovered set. So we cannot guarantee that the infinite set's density is ≥ ε; it's only ≤. So we need a different approach.
We need to find a sequence n_i where (U) fails but (P) holds. That is, there exists ε > 0 such that for all k, there exists a selection of a_i (i ≤ k) such that the uncovered density after those k steps is ≥ ε, but for any infinite selection, the uncovered density is zero. This is reminiscent of "Erdős–Selfridge problem"? Not sure.
Alternatively, maybe (P) does imply (U). Let's try to prove it using measure theory. Consider the profinite group ℤ̂ = ∏_p ℤ_p, i.e., the inverse limit of ℤ/Nℤ over all N, with Haar measure normalized so that each residue class modulo N has measure 1/N. For each i, the set C_i = {x ∈ ℤ̂ : x ≡ a_i (mod n_i)} is a clopen coset of measure 1/n_i. The union of all C_i for a given selection a has measure at most ∑ 1/n_i, but can be less due to overlaps. The complement of the union is the set S(a) = ℤ̂ \ ∪_i C_i. Its Haar measure should be compared with the density of the integer set S(a) ∩ ℤ: for any set of integers that is a finite union of arithmetic progressions (mod some N), the natural density equals the Haar measure of its closure in ℤ̂, and sets defined by congruences are closed in ℤ̂. So, at least for finite prefixes, we can treat density as Haar measure.
Now, property (P) says: For any sequence a, the Haar measure of S(a) = 0. That is, the complement of the union of the C_i has measure zero.
Now, does that imply that for any ε > 0, there exists k such that for any a, the measure of ℤ̂ \ ∪_{i=1}^k C_i < ε? This is a uniform version of covering property: For any ε > 0, there exists a finite subfamily that covers all but ε of the space, regardless of which particular cosets we pick.
Think of it as a game: we have a collection of "clopen sets" of size 1/n_i. For each i, an adversary picks one such set (the coset a_i mod n_i). We want to guarantee that after seeing the first k picks (but not the rest), the union of those sets already covers almost all of the space (density > 1 - ε), no matter which sets the adversary picks. The condition (P) says that after infinitely many picks, the union covers almost all of the space (density 1) for any possible infinite sequence of picks. The question is whether the covering must happen "uniformly" after some finite stage.
This is reminiscent of the concept of "compactness" in measure theory: If a countable family of sets has the property that any infinite selection covers the space almost surely, does it have a finite subfamily that covers almost surely? Not necessarily. For example, consider the unit interval [0,1] with Lebesgue measure. Let the family be intervals (0, 1/i) for i = 1,2,... but each interval is "moving" depending on a choice? Actually, our situation is more structured: each i corresponds to a partition of the space into n_i equally-measured clopen sets (the residue classes). The adversary picks one of those sets. For each i, the union of all n_i possible clopen sets is the whole space. So each i gives a partition of the space into n_i parts. The adversary picks one part per i. After infinitely many picks, the union of the chosen parts covers the space almost surely (property (P)). The question: does there exist a finite k such that for any possible picks of parts for i = 1..k, the union already covers almost all the space (density > 1 - ε)? This is reminiscent of "finite subcover property" in compactness: The space ℤ̂ is compact, and each partition is an open cover? Actually, each partition is a cover of the space by n_i clopen sets, each of which is open. The adversary picks one set per partition. The condition that the union of those sets covers almost all the space is a covering property. The question is about uniform covering.
We can think of it as a two-player game: Player A chooses the residues a_i one by one; Player B wants to guarantee that after some finite number of moves, the union covers all but ε of the space, regardless of Player A's choices. The condition (P) says that after infinitely many moves, the union covers all but a null set for any possible sequence of moves. Does that imply that there is a finite bound on the number of moves needed to achieve coverage up to ε? This is reminiscent of "ε-net" or "finite covering property" in measure theory: A family of sets such that any countable subfamily of a certain type covers the space almost surely; does it imply a finite subfamily does? Usually not. For example, consider the family of intervals (0, 1/n) in [0,1] for each n; any infinite subfamily (i.e., infinitely many intervals) will cover (0, something) but not necessarily the whole interval? Actually, any infinite subfamily of those intervals will still be countable but the union may be (0, max_{i in subfamily} 1/i) which could be small if the subfamily does not include intervals with small i. So property (P) does not hold for that example. So we need a property that ensures any infinite selection covers almost all of the space. That's a strong property.
Let's try to find a potential counterexample: Sequence of moduli such that property (P) holds but (U) fails. We need to design a sequence where for any infinite selection, the union covers almost everything, but the covering can be arbitrarily slow: for any fixed number of steps k, there is a selection that leaves a positive density uncovered.
One approach: Use a sequence where the moduli are arranged in blocks. Within each block, the moduli share a common divisor that is larger than any modulus in previous blocks. Then, for each block, we can choose residues that restrict to a particular residue class modulo that divisor, thereby leaving a proportion of integers uncovered up to that block. However, the next block may have moduli that are "independent" of the previous divisor and can cover the remaining numbers. But we can choose the residues for the next block to be such that they also restrict to a specific residue class modulo the new divisor, leaving some numbers uncovered. However, we need to ensure that eventually, after infinitely many blocks, the union covers almost all numbers regardless of residue choices. That might happen if the block divisors increase rapidly and the sum of reciprocals of the moduli diverges across blocks: eventually, any integer will be covered due to some later block's constraints.
Alternatively, we could use a sequence where each modulus is the product of the first i primes (i.e., the primorials). For each i, the residue classes modulo n_i partition the integers into n_i classes, each class corresponding to a specific pattern of residues modulo the first i primes. If we pick a residue class a_i, it determines the residues modulo each prime factor. If we choose a_i arbitrarily, the union of those classes across i may be large. But can we choose residues to keep the uncovered density high? For each i, we can pick a residue class that restricts to a particular pattern on the first i primes, perhaps leaving many numbers uncovered. As i increases, the partition becomes finer, and we can avoid covering many numbers for a while. But ultimately, any integer has a unique pattern of residues modulo the first i primes; will that pattern appear as the chosen class a_i for some i? Not necessarily. For the union to cover all integers up to density 1, we need that for each integer x (outside a zero-density set), there exists i such that x ≡ a_i (mod n_i). This is a covering property: the chosen residues must "cover" almost all integers.
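Following the earlier shared-divisor observation (all primorials are even), a quick check that nested residue choices defeat the primorial sequence; the particular moduli and residues below are illustrative:

```python
from math import lcm

# Primorial moduli 2, 6, 30, 210 all share the divisor 2, so the adversary can
# nest the chosen classes: pick a_{i+1} ≡ a_i (mod n_i), e.g. a_i = 1 for all i.
# The union of the classes then collapses to 1 (mod 2) and the evens survive.
moduli = [2, 6, 30, 210]
residues = [1, 1, 1, 1]
L = lcm(*moduli)  # = 210
uncovered = sum(1 for x in range(L)
                if all(x % n != a for n, a in zip(moduli, residues)))
print(uncovered / L)  # 0.5: (P) already fails for the primorial sequence
```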
We need to find a sequence where any selection of residues covers all but finitely many integers (density 1). That is reminiscent of "universal covering system". But we need to examine if (U) holds.
Let's examine known results: There is a concept of "universal covering system" studied by Hough and others? I'm not aware of a direct result. But perhaps the answer to the problem is yes: (P) implies (U). Let's try to prove it.
We need to show that for any ε > 0, there exists k such that any selection of residues for i ≤ k covers all but ε density of integers. The negation of this is: there exists ε > 0 such that for each k, there is a selection a_i^{(k)} with uncovered density ≥ ε after k steps. Suppose this holds. Then we want to find an infinite selection a_i* such that the uncovered set has density > 0 (maybe at least ε/2), contradicting (P). Since each finite selection can be extended arbitrarily, the task is to combine them into an infinite selection that "preserves" the uncovered density.
One idea: Consider the sets U_k = { x ∈ ℤ: x avoids the first k congruences for the specific selection a_i^{(k)} }. Each U_k has density ≥ ε. Moreover, U_{k+1} ⊆ U_k (since adding more constraints can only shrink the set). Actually, careful: U_k is defined using the selection a_i^{(k)} for i ≤ k. For each k, the selection may be different. So U_{k+1} is not necessarily a subset of U_k because the residues for i ≤ k+1 might be different from those for i ≤ k. So we cannot directly compare them.
But we could define a tree of possible selections and use a compactness argument to find an infinite path (selection) that avoids some "large" set at each step. Actually, the condition (P) says that any infinite path leads to a set of measure zero (density zero). The question is whether there is a uniform bound on how fast the measure can shrink across all paths. This is reminiscent of "König's lemma" or "finite branching tree" arguments: If every infinite path has some property (the measure of the intersection of the complement sets is zero), does there exist a uniform bound on the length of initial segments needed to guarantee measure < ε? This is reminiscent of "no infinite path has measure > 0" implies "there is a uniform bound on the measure of the initial segments"? This is like a compactness argument: If for each length k, there existed a path whose measure after k steps is ≥ ε, then there is an infinite path whose measure is at least ε (by a limit argument). Let's formalize.
Define for each i a set of "bad" residues for that i: those residues that "preserve" a large uncovered set after i steps. Actually, we need to consider the set of all infinite sequences a such that after k steps, the uncovered density is ≥ ε. Denote this set as B_k. We assume B_k is nonempty for all k. We want to find an infinite sequence a ∈ ∩_{k} B_k (or at least in the closure of these sets) that yields uncovered density > 0. However, the sets B_k are not nested decreasing, but we can consider the closure of each B_k in the product topology. Since each B_k is closed (the condition "uncovered density ≥ ε after k steps" is a condition on the first k coordinates only, i.e., it's a union over a finite set of possible prefix choices that satisfy that property; each prefix yields a certain uncovered density, which can be approximated by densities of initial intervals, so the condition is closed? Let's verify: For each fixed prefix (a_1, ..., a_k), the set of integers not satisfying any of those congruences is a periodic set (with period L = lcm(n_1,...,n_k)). Its density is a rational number that can be computed exactly. So the condition that this density is ≥ ε is a condition on the prefix. Therefore, B_k is a union of some cylinder sets (prefixes) that satisfy that condition, so B_k is a clopen set (open and closed) in the product topology. So B_k is closed.
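The point that the prefix density is an exactly computable rational can be made concrete. A minimal sketch with tiny illustrative moduli:

```python
from fractions import Fraction
from math import lcm

def prefix_density(prefix):
    """Exact uncovered density for a finite prefix [(n_1, a_1), ..., (n_k, a_k)]:
    the avoidance set is periodic with period L = lcm(n_i), so the density is
    the exact rational (# uncovered residues mod L) / L."""
    L = lcm(*(n for n, _ in prefix))
    hits = sum(1 for x in range(L) if all(x % n != a for n, a in prefix))
    return Fraction(hits, L)

# A small illustrative prefix (moduli chosen tiny to keep the period small):
print(prefix_density([(2, 0), (3, 1), (4, 3)]))  # 1/6
```

Since each prefix's density is an exact rational determined by finitely many coordinates, the condition "density ≥ ε" on a prefix is decidable, and B_k really is a finite union of cylinder sets as claimed.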
Now, if B_k is nonempty for all k, then the intersection over all k of the closures of B_k is the intersection of B_k themselves (since they are closed). However, B_k are not nested, so the infinite intersection may be empty. But we can consider the set of infinite sequences a such that for infinitely many k, the density after k steps is ≥ ε. Or we can consider a weaker condition: there exists a subsequence k_j such that the density after k_j steps is ≥ ε. But we need an infinite sequence for which the limit superior of the uncovered density is at least ε. If we can find such a sequence, then the uncovered set after infinite constraints cannot have density zero (since it is a subset of each of those prefixes, so its density is ≤ each prefix's density; but we need a lower bound). Actually, we need a lower bound for the infinite case: if for infinitely many k, the uncovered density after k steps is ≥ ε, then the infinite uncovered set might have density less than ε; it could be zero. For example, think of a sequence of sets U_k each containing the next one (nested decreasing) with densities decreasing to zero eventually, but each U_k may have density > ε for k up to some large number before dropping. However, we can have a scenario where each prefix has high density but eventually the infinite intersection has low density. So the existence of prefixes with high density does not guarantee infinite intersection has high density.
Thus, we need a stronger property: we need to find an infinite sequence such that the density after any finite prefix stays above ε. That would guarantee the infinite intersection has density at least ε (maybe). But does such a sequence exist if for each k there exists some prefix with uncovered density ≥ ε? Not necessarily: there might be no single infinite sequence that keeps the density above ε for all prefixes; the prefixes achieving high density could be different for each k. However, if we can show that there exists a sequence with density staying above ε for arbitrarily large prefixes (i.e., limsup maybe), then infinite intersection may still be zero. So we need a more robust argument.
Alternatively, perhaps we can prove (U) holds by contradiction: Suppose (U) fails: there exists ε > 0 such that for all k, there is a selection a_i^{(k)} with uncovered density ≥ ε after those k constraints. Then we can construct an infinite selection a_i* such that the uncovered density after the first k steps is ≥ ε/2 for infinitely many k, and then use some measure theory to conclude the infinite uncovered set must have positive density. But not sure.
Let's try to reason more concretely: Suppose we have a sequence n_i where for each k, there exists a selection a_i^{(k)} such that the set of integers avoiding those k congruences has density at least ε. Let's denote this set as U_k = ℤ \ ⋃_{i=1}^k (a_i^{(k)} mod n_i). Then d(U_k) ≥ ε. We need to produce an infinite selection a_i such that the set of integers avoiding all those infinite constraints has positive density (or at least not zero). Perhaps we can use a diagonal argument: Since for each i, there are only finitely many possible residues modulo n_i, we can consider the tree of all possible sequences of residues. At each level k, there is at least one branch that leads to a set U_k with density ≥ ε. In this tree, each node corresponds to a prefix of residues. By König's lemma (since the tree is finitely branching and infinite), there exists an infinite path (i.e., a sequence a_i) such that for each level k, the node (prefix) is one of those "bad" prefixes with uncovered density ≥ ε. However, the property of "bad" is not monotone: a prefix might be "bad" (density ≥ ε) at level k, but after adding more constraints beyond k, the uncovered density might drop below ε, so the infinite path may not have density ≥ ε after infinite steps. However, the prefix is still "bad" at that level; the infinite path includes that prefix, but then includes further constraints. The infinite set may be smaller. So we only know that for any finite i, the uncovered set after i steps contains the infinite uncovered set. So density of infinite set ≤ density after i steps. So if after i steps density is ≥ ε, the infinite density could be less (maybe 0). So this does not give a lower bound.
Thus, the existence of "bad" finite prefixes does not guarantee an infinite "bad" path. So we cannot directly infer that (U) fails from (P) alone. So the statement may be true: (P) implies (U). Let's try to prove it.
Potential approach: Use an averaging argument. For each k, consider the worst-case uncovered density δ_k = max_a d(ℤ \ ⋃_{i=1}^k (a_i mod n_i)) (a maximum, since there are only finitely many residue choices for i ≤ k). We need to show that δ_k → 0 as k → ∞. Suppose not; then there exist ε > 0 and a subsequence k_j → ∞ such that δ_{k_j} > ε for all j. So for each k_j, there exists a selection a_i^{(j)} (i ≤ k_j) such that the uncovered set U_{k_j} has density > ε. Now we need to find a single infinite selection a_i* that yields an uncovered set with density > 0, contradicting (P). How can we combine these selections?
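The quantity δ_k can be computed by brute force for tiny moduli. As a sanity check, here it is for the nested power-of-2 sequence, for which (P) fails and δ_k stays stuck at 1/2:

```python
from itertools import product
from math import lcm

def delta_k(moduli):
    """Worst-case uncovered density over all residue choices for the given
    prefix of moduli (brute force; feasible only for very small moduli)."""
    L = lcm(*moduli)
    worst = 0.0
    for residues in product(*(range(n) for n in moduli)):
        hits = sum(1 for x in range(L)
                   if all(x % n != a for n, a in zip(moduli, residues)))
        worst = max(worst, hits / L)
    return worst

# For nested power-of-2 moduli the adversary can pick 1 (mod 2), 3 (mod 4),
# 7 (mod 8): each class sits inside the previous one, so the union never
# exceeds one class mod 2 and delta_k never improves past 1/2.
print([delta_k([2**j for j in range(1, k + 1)]) for k in (1, 2, 3)])  # [0.5, 0.5, 0.5]
```

The statement to be proved is exactly that, under (P), this worst case tends to 0 along the given sequence of moduli.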
Idea: Use a "majority vote" or "probability" method: Choose each a_i randomly according to some distribution that depends on the "bad" selections for each k_j. For each i, consider the set of residues that appear among the "bad" selections for which i is included (i.e., for all j such that k_j ≥ i). Among those selections, there may be many different residues. Choose a_i randomly uniformly among the residues that appear most frequently? Not sure.
Alternatively, we could define a measure on the space of sequences (a_i) that assigns mass to each infinite sequence based on some weighting of the "bad" selections. Then argue that the expected density of the uncovered set for a random sequence drawn from this measure is > 0, implying that there exists at least one sequence with positive density uncovered set, contradicting (P). This is a probabilistic method.
Specifically, suppose for each j we have a finite selection a_i^{(j)} for i ≤ k_j with uncovered density > ε. Consider a probability distribution over infinite sequences: pick an index j with probability p_j (summing to 1), then extend the finite selection a_i^{(j)} arbitrarily (perhaps randomly) for i > k_j. For each j, the uncovered set of the extended sequence is a subset of U_{k_j}, so its density is ≤ d(U_{k_j}). This gives only an upper bound on the expected uncovered density over the distribution. To exhibit a sequence with positive uncovered density we would need a lower bound, and the tail constraints beyond k_j may drive the density all the way to zero. So this averaging scheme, as stated, fails.
Alternatively, we can consider the following: For each i, consider the set of residues a_i that appear in at least one of the "bad" selections for each sufficiently large k. Perhaps we can define a "bad" infinite sequence that chooses at each i the residue that is "most often used" among the bad selections for k_j with k_j ≥ i. Since each finite selection leaves uncovered density > ε, perhaps we can argue that the infinite selection formed by taking the most common residues will also leave uncovered density > 0.
We need to formalize this: For each i, define R_i = { r ∈ ℤ/n_iℤ : a_i^{(j)} = r for infinitely many j }. Since there are infinitely many j and only finitely many residues modulo n_i, the pigeonhole principle gives R_i ≠ ∅ for every i. Choose a_i* to be any element of R_i, and let a* denote the resulting infinite sequence.
Now, consider the uncovered set for a*: U* = ℤ \ ⋃_{i=1}^∞ (a_i* mod n_i). We claim that for any fixed k there are infinitely many j such that a_i^{(j)} = a_i* for all i ≤ k, provided the a_i* are chosen by nested refinement rather than independently: pick a_1* as a residue occurring infinitely often among all j and restrict attention to those j; within that subsequence pick a_2* occurring infinitely often and restrict again; and so on. (Choosing each a_i* independently would not suffice, since an intersection of infinite index sets need not be infinite.) Each refinement step is possible by the infinite pigeonhole principle, and after any k steps the surviving index set is still infinite, so infinitely many j have their length-k prefix equal to (a_1*, ..., a_k*).
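A finite toy version of this iterated pigeonhole refinement, with random placeholder selections standing in for the "bad" ones (in the real argument "frequent" becomes "infinitely often" and every refined index set stays infinite):

```python
import random
from collections import Counter

# Finite stand-in for the nested refinement: pick a_1* as a residue occurring
# often among the selections, restrict to the selections agreeing with it,
# then pick a_2* within that subsequence, and so on.
random.seed(0)
moduli = [2, 3, 5, 7]
selections = [[random.randrange(n) for n in moduli] for _ in range(210)]

pool, star = selections, []
for i, n in enumerate(moduli):
    r, _ = Counter(sel[i] for sel in pool).most_common(1)[0]
    star.append(r)
    pool = [sel for sel in pool if sel[i] == r]  # refine to a sub-subsequence

print(star, len(pool))  # pool stays nonempty: |pool| >= 210/(2*3*5*7) = 1
```

Taking the most common residue at each step guarantees the pool shrinks by at most a factor of n_i, mirroring how each infinite-pigeonhole step keeps the index set infinite.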
Thus, for any k, there exist infinitely many j such that U_k^{(j)} = ℤ \ ⋃_{i=1}^k (a_i^{(j)} mod n_i) = ℤ \ ⋃_{i=1}^k (a_i* mod n_i), so the uncovered set after k steps for a* is the same as for those j. But do we know d(U_k^{(j)}) > ε for these j? We only know that for each specific k_j there exists some selection with uncovered density > ε for that k_j; we do not know this for every k, only along the subsequence k_j. Moreover, a priori a* need not match a^{(j)} on the full prefix of length k_j: a* matches some infinite subsequence of the a^{(j)} in each coordinate individually, not necessarily in all coordinates simultaneously. However, by the diagonal argument we can pick a subsequence j_1 < j_2 < ... such that for each ℓ, the prefix of length ℓ of a^{(j_ℓ)} matches a*; then the uncovered density after ℓ steps for a* equals that for a^{(j_ℓ)} (they share the same prefix). Finally, since the "bad" indices k_j are infinite in number, we can arrange ℓ to run through a subsequence of them, ℓ = k_{j_m}. This yields a sequence of prefixes of a* for which the uncovered density is > ε.
Thus, for a*, the uncovered density after those prefixes is > ε. However, as we argued earlier, the infinite uncovered set is a subset of each of those prefix uncovered sets, so its density is ≤ each of them. We only get an upper bound; to contradict (P) we need a lower bound. One might hope that the infinite uncovered set must have density at least ε/2 (or some positive fraction) on the grounds that if its density were zero, the prefix densities would eventually have to be small. But that inference fails: density does not pass to nested intersections (the tail sets {n : n ≥ k} each have density 1, yet their intersection is empty). Each prefix uncovered set may consist largely of numbers that get covered later. A related pitfall: a set can contain many small numbers and still have density zero. The finite set [1, N_k] has natural density 0, since the proportion of integers ≤ x lying in it is min(N_k, x)/x → 0 as x → ∞. Density is a limit of proportions up to arbitrarily large x.
Thus, a large uncovered density after a finite prefix does not guarantee that the infinite uncovered set has positive density; the prefix uncovered set may consist of relatively small numbers that get covered later as more constraints are added. Concrete example: take as moduli all integers n ≥ 2 (the modulus 1 would trivially cover everything) and choose the residue −1 for each. After using the moduli up to k, an integer x is uncovered iff x + 1 is divisible by no integer in [2, k], i.e., iff x + 1 has no prime factor ≤ k. The density of such x is ∏_{p ≤ k} (1 − 1/p), which is positive for every k but tends to 0 as k → ∞. So the uncovered density after any finite prefix is positive, yet for the full infinite selection the uncovered set is finite (only x with x + 1 = ±1 survive), hence of density zero.
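A small count illustrating this shrinking-but-positive behavior (moduli 2, ..., k with residue −1 each, as above; the fraction over [1, N] is used as a finite proxy for asymptotic density):

```python
def uncovered_density(k, N):
    """Fraction of x in [1, N] with x not ≡ -1 (mod i) for every modulus i in 2..k."""
    hits = sum(1 for x in range(1, N + 1)
               if all((x + 1) % i != 0 for i in range(2, k + 1)))
    return hits / N

N = 100_000
d2, d10, d50 = (uncovered_density(k, N) for k in (2, 10, 50))
# positive after each finite prefix, but decreasing toward 0 as k grows
assert d2 > d10 > d50 > 0
```

For k = 2 the uncovered set is exactly the even numbers (density 1/2), and the later values track ∏_{p ≤ k} (1 − 1/p).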
Thus, the existence of prefixes with density > ε does not contradict (P). So our attempted proof fails.
Hence, we need a different approach.
Maybe (U) is false: There may exist a sequence where the worst-case uncovered density after any finite number of steps stays above some ε > 0, but any infinite selection yields zero density. Let's try to construct such a sequence.
We need to find a sequence n_i such that for each k, there exists a selection of residues a_i (i ≤ k) that leaves a positive density uncovered, but for any infinite selection, the uncovered density is zero. So the "badness" can be pushed arbitrarily far: for each k, we can choose residues that are "bad" up to step k, but after we continue selecting residues for i > k, we can bring the density down to zero. But the property (U) requires a uniform bound on k that works for all selections. That would fail if we can keep moving the "badness" forward.
Thus, we need to construct a sequence where the "badness" can be delayed arbitrarily but never eliminated early.
Idea: Use "blocks" of moduli where each block is "independent" of the previous ones. For each block, we can choose residues that leave a fraction of numbers uncovered, but the next block's moduli will be such that any selection of residues for those moduli will cover the remaining numbers (or at least reduce density to zero eventually). However, we can make the blocks arbitrarily large, so that after any given number of steps (which may fall within a block), we can keep the uncovered density above ε. But eventually, after the block ends, the next block's moduli will force coverage.
Specifically, consider a sequence n_i defined as follows: Partition ℕ into intervals B_1, B_2, B_3, ... where each block B_j contains a large number of moduli that are multiples of a large integer M_j (with M_j increasing rapidly). Within each block, we can choose residues that all lie in a specific residue class modulo M_j, thereby leaving all numbers not in that class uncovered. Since the block contains many moduli, we can keep the uncovered density high within that block. However, after the block ends, the next block has moduli that are not multiples of any fixed divisor (or have some property that forces coverage). Yet we need to ensure that for any infinite selection of residues across all blocks, the uncovered set has density zero. That means that for any selection, the combined effect of all blocks must cover almost all numbers. But we need to guarantee that even if we choose residues within each block to restrict to a particular residue class modulo M_j, the later blocks will cover the numbers left uncovered by earlier blocks. However, those later blocks may also have residues that restrict to a specific residue class modulo some other divisor, perhaps overlapping with the previously uncovered set. But we need to guarantee that across all blocks, any selection yields full coverage.
Alternatively, maybe we can use a sequence of moduli that are pairwise coprime (like the primes). For the primes, by CRT the uncovered density after k steps is exactly ∏_{i=1}^k (1 − 1/p_i), independent of which residues are chosen, and by Mertens' theorem this is asymptotically e^{−γ}/log p_k. This tends to zero as k → ∞, albeit slowly. So δ_k → 0, so (U) holds. So primes do not provide a counterexample.
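A numerical look at the Mertens decay (a sketch; the sieve bound 10^5 is an arbitrary choice, and e^{−γ} ≈ 0.5615):

```python
import math

n = 10 ** 5
sieve = [True] * (n + 1)       # simple sieve of Eratosthenes
sieve[0] = sieve[1] = False
for p in range(2, int(n ** 0.5) + 1):
    if sieve[p]:
        for m in range(p * p, n + 1, p):
            sieve[m] = False

prod = 1.0                     # ∏_{p ≤ n} (1 - 1/p)
for p in range(2, n + 1):
    if sieve[p]:
        prod *= 1 - 1 / p

# Mertens' theorem: prod ~ e^{-gamma} / log n, gamma ≈ 0.5772156649
predicted = math.exp(-0.5772156649) / math.log(n)
assert abs(prod / predicted - 1) < 0.05
```

At n = 10^5 the product is already below 0.05 yet still positive, matching the "tends to zero, albeit slowly" behavior.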
If we want δ_k to stay bounded away from zero, we need to have some redundancy: The union of the selected residue classes may overlap heavily, so we can choose residues that "align" to cover the same subset of ℤ, leaving a large complement. For example, if moduli share a common factor, we can align them to all restrict to the same residue class modulo that factor. However, if we have infinitely many moduli sharing a common factor d > 1, then (P) fails because we can choose residues all ≡ r (mod d) for those moduli, leaving numbers not congruent to r (mod d) uncovered. However, if we have only finitely many moduli for each d, we may circumvent (P) failure: we could align them to restrict to r (mod d) for those moduli, but then later moduli may not be multiples of d, and may cover the rest. However, if there are only finitely many moduli that are multiples of d, then after those moduli, the remaining moduli (not multiples of d) might be able to cover the remaining numbers, but we need to ensure they do so for any selection of residues. This seems strong.
Alternatively, maybe the condition (P) implies (U) always true. Let's try to prove it.
Proof attempt: Suppose (U) fails: there exists ε > 0 such that for all k, there exists a selection a_i^{(k)} (i ≤ k) with uncovered density > ε. We need to derive a contradiction with (P). We'll construct a measure (density) on ℤ as a limit of densities along a subsequence. For each selection a^{(k)}, consider the set U_k = ℤ \ ⋃_{i=1}^k (a_i^{(k)} mod n_i). Its density d(U_k) > ε.
Now consider the sequence of densities of U_k as k → ∞ (along the subsequence where we have bad selections). They are all > ε. However, we want to find an infinite selection a* such that the uncovered set after infinite steps has positive density. Perhaps we can take a limit of the sets U_k? But the sets are not nested, so we cannot take a direct limit.
Alternatively, consider the densities of U_k as measures on ℤ. The set of measures d(U_k) can be considered as a sequence of numbers > ε. By compactness, there is a subsequence that converges (maybe to some limit ≥ ε). But we need a set, not just a density number.
Alternatively, we can consider the upper density of the set of numbers that are uncovered for infinitely many k (or for all but finitely many k). Maybe that set has positive density. But (P) says that any infinite selection yields zero density uncovered set. So if we can find a set of numbers that avoid the congruences for infinitely many selections... Hmm.
We need to think more systematically: The problem asks: If for any infinite choice of congruences, the uncovered set has density zero, does it follow that we can find a finite number of congruences that guarantee the uncovered density is arbitrarily small uniformly? This is reminiscent of "uniform convergence" of densities. It's like: a family of sets (the complements of unions of one residue per modulus) has the property that each member has density zero. Does it imply that the densities converge uniformly to zero across the family as we increase k? Usually, if you have a family of sets parameterized by sequences, and each infinite intersection has measure zero, then the measure of the finite approximations must converge to zero uniformly? Not necessarily: think of a sequence of sets A_n in [0,1] such that each individual A_n has measure zero (like a point). The union over all n of A_n may have measure zero (countable union of measure zero sets). But for any finite collection of them, the union may have measure zero as well. So uniform convergence holds. That's not a counterexample.
Better: Consider a family of sets parameterized by infinite binary sequences: define for each infinite sequence ω ∈ {0,1}^ℕ, define S(ω) = ∩_{n=1}^∞ C_n(ω_n) where each C_n(0) and C_n(1) are subsets of [0,1] of measure 1/2 each (like two halves of the interval). If we pick any infinite sequence, the intersection S(ω) is a single point (maybe) and has measure zero. However, for any finite n, the measure of the set after n steps is (1/2)^n, which tends to zero uniformly (the product). So uniform bound holds. So not a counterexample.
We need a situation where each infinite intersection has measure zero, but the measure after any finite number of steps does not go to zero uniformly across all sequences. That is, there exists ε > 0 such that for any n, there exists a sequence ω such that the set after n steps has measure at least ε. However, any infinite intersection has measure zero.
One candidate: Consider a sequence of partitions of [0,1] into intervals of decreasing length, but for each step we can choose a "big" interval that covers almost all of [0,1] except a small piece. However, if we keep picking those small pieces, the infinite intersection may be empty. However, the measure after each finite step is large. But does this fit our modulus/residue class structure? For each i, we have a partition of ℤ into n_i equal density classes. The partitions become finer as n_i increases (generally). For each i, we can choose any class; its density is 1/n_i. If we want the union of selected classes across i to have large density after many steps, we need to choose classes that overlap heavily, i.e., share many integers. Overlap is more likely when the moduli share common divisors, so that the classes correspond to larger sets.
Consider the sequence n_i = 2^i. For each i, the partition divides ℤ into 2^i classes of equal density 1/2^i. These partitions are nested: each class modulo 2^{i+1} is a subset of a class modulo 2^i. So if we choose residues a_i that are consistent (i.e., a_{i+1} ≡ a_i (mod 2^i)), all selected classes are nested, and their union is the class with the smallest modulus, namely a_1 mod 2, of density 1/2. Thus for any k, choosing consistent residues (e.g., a_i = 0 for all i) leaves exactly the complement of a_1 mod 2 uncovered: uncovered density 1 − 1/2 = 1/2. So δ_k ≥ 1/2 for all k, and (U) fails for ε < 1/2. However, does (P) hold? For the infinite selection a_i = 0 for all i, the union is just the even numbers, and the uncovered set is the odd numbers, of density 1/2 > 0, violating (P). So that sequence does not satisfy (P), and is not a counterexample.
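The nested-union claim can be checked exactly over one period (a small sketch with the consistent choice a_i = 0 for moduli 2, 4, ..., 32):

```python
mods = [2 ** i for i in range(1, 6)]   # moduli 2, 4, 8, 16, 32
res = [0] * len(mods)                  # consistent nested residues a_i = 0
M = mods[-1]                           # one full period of the system
for k in range(1, len(mods) + 1):
    unc = [x for x in range(M) if all(x % mods[i] != res[i] for i in range(k))]
    # uncovered density stays 1/2 no matter how many congruences are used
    assert len(unc) / M == 0.5
```

The later classes (multiples of 4, 8, ...) are subsets of the evens, so adding them never shrinks the uncovered set below the odds.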
Thus, to satisfy (P), we cannot have a common divisor across all moduli, else we can align them to restrict to one residue class and leave a positive density uncovered. So the sequence must be such that for any infinite selection, the union cannot be contained within a proper arithmetic progression of positive density. More generally, for any proper subset of ℤ with positive density, there must be some congruence that forces a number outside that subset into the union.
One way to ensure (P) is that the set of moduli is "universal" in the sense that for any arithmetic progression (mod d) of positive density, there is some modulus n_i that is coprime to d (or at least does not share the same divisor) such that any residue class modulo n_i intersects both the progression and its complement. Actually, we need that for any selection of residues, the union cannot be confined within a set of density less than 1. So the moduli must be "mixing".
Consider n_i being the sequence of all squarefree numbers with an increasing number of prime factors. For each i, n_i is product of many primes, so the partitions become finer. However, any selection of residues yields a union that may be small? Actually, with many prime factors, each class is small (density 1/n_i small). Overlap may be limited. However, if we choose residues arbitrarily, the union may be small: For each i, we add at most density 1/n_i. If n_i grows fast, the total sum of densities converges, so the union may have density less than 1; but we need that any selection yields union density 1. This suggests that n_i cannot grow too quickly; they must be small enough that the sum of densities diverges, but also must not share large common divisors often, else we can align them.
Thus, a candidate for a sequence satisfying (P) but maybe not (U) is the sequence of all integers that are "squarefree" and have at least one new prime factor not seen before. For each i, n_i could be the product of the first i primes: the primorial. For this sequence, we have n_i = p_1 p_2 ... p_i. Let's examine (P): For any selection of residues a_i mod n_i, does the union of those classes have density 1? Let's test: If we choose a_i = 0 for all i, then the union is the set of integers divisible by some primorial p_1...p_i. This includes all integers divisible by any product of the first i primes. Since any integer > 1 has some prime factor; if it has a prime factor among the first i primes for some i, then it's divisible by that prime but not necessarily by the product of all primes. Actually, being divisible by some p_j does not guarantee being divisible by the product of the first i primes. So the union of the classes a_i = 0 (mod n_i) includes numbers that are multiples of each primorial; each such class includes numbers that are multiples of all primes up to p_i, i.e., numbers that are 0 modulo all those primes simultaneously. That is a strong condition: numbers divisible by all primes up to p_i. That's a sparse set: density 1/n_i. So the union across i of these sets is the set of numbers divisible by arbitrarily many small primes (i.e., numbers that are multiples of all primes up to p_i for some i). That's only the set of numbers that are multiples of infinitely many primes? Actually, to be divisible by all primes up to p_i for some i, a number must be divisible by each of those primes; as i increases, the condition becomes stricter. The only integer divisible by all primes is 0. So aside from 0, there may be no integer divisible by infinitely many distinct primes. So the union across i of these classes is just {0}? Wait, check: For i = 1, primorial = 2: class a_1 = 0 includes even numbers. 
For i = 2, n_2 = 2·3 = 6: the class a_2 = 0 consists of the multiples of 6 (a subset of the evens). For i = 3, n_3 = 2·3·5 = 30: the multiples of 30 (a subset of the multiples of 6). So the selected classes form a nested decreasing chain: evens ⊇ multiples of 6 ⊇ multiples of 30 ⊇ ⋯. The intersection of this chain is the set of numbers divisible by all primes, i.e., just {0}; but the union, which is what matters for coverage, is simply the largest set in the chain: the class for n_1 = 2 already covers the evens, and the later classes are subsets of the evens, so they add no new coverage. Therefore the union across all i is just the evens, the uncovered set is the odds, of density 1/2 > 0, and (P) fails for this sequence. So not a good candidate.
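A direct check over one period that the primorial classes a_i = 0 add nothing beyond the evens:

```python
mods = [2, 6, 30, 210]                 # primorials 2, 2·3, 2·3·5, 2·3·5·7
M = mods[-1]                           # one full period
covered = {x for x in range(M) if any(x % m == 0 for m in mods)}
evens = {x for x in range(M) if x % 2 == 0}
assert covered == evens                # the union is exactly the even numbers
```

Every multiple of 6, 30, or 210 is already a multiple of 2, so the union collapses to the first class.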
Thus, the moduli cannot have a smallest one that is small (like 2) and then have later moduli being multiples of it; that leads to failure of (P). So the sequence must avoid having a small modulus that divides many later ones. In general, we must avoid having any integer d that divides infinitely many n_i.
Thus, the sequence must be "primitive" in the sense that each integer divides at most finitely many n_i. This is a necessary condition for (P). Indeed, if some integer d > 1 divides infinitely many n_i, then choose a_i ≡ r (mod d) for all those i, where r is any fixed residue mod d; then each chosen class is a subset of the residue class r (mod d). The union of those classes is contained in that residue class. The complement includes at least d-1 other residue classes each of density 1/d, so uncovered density at least (d-1)/d > 0. However, there may be other moduli not divisible by d that could cover those other residue classes. But we can choose residues for those other moduli in a way that still leaves some positive density uncovered? Let's examine: If there are infinitely many n_i divisible by d, we can choose residues a_i ≡ r (mod d) for all of them. For the remaining n_i (those not divisible by d), we can choose residues arbitrarily (maybe adversarially to minimize coverage). The union of the classes from the d-multiples is contained within the residue class r (mod d). The other classes may intersect both r and other residues. However, we can try to choose residues for the other moduli to avoid covering numbers not in r (mod d) as much as possible. But can we guarantee that they still cover all numbers not in r? Not necessarily. So (P) may still hold even if some integer divides infinitely many n_i, as long as the other moduli are "dense enough" to cover the leftover. But we can choose residues for those other moduli also to be in r (mod d) if they share any factor with d? Actually, if an n_i is not divisible by d, then its residue classes may be spread across residues mod d: each class modulo n_i, when projected onto ℤ/dℤ, is a union of some residues modulo d (maybe all residues if n_i and d are coprime). In particular, if n_i is coprime to d, then each residue class modulo n_i contains exactly one integer in each residue class modulo d (by CRT). 
So a class modulo n_i that is coprime to d cannot be contained within a single residue class modulo d; it will intersect all residues modulo d. So if we have infinitely many n_i coprime to d, then we cannot restrict the union to a single residue class modulo d; the union will intersect all residues. However, we can choose residues for those n_i in a way that the union overall still does not cover everything. But maybe we can still guarantee that for any selection of residues for those n_i, the union covers everything except a zero-density set? Not obvious.
Nevertheless, it's plausible that a necessary condition for (P) is that the set of n_i is "primitive", i.e., each integer divides only finitely many n_i. Let's attempt to prove this: Suppose there exists d > 1 dividing infinitely many n_i. Let those indices be i_1 < i_2 < ... . Consider the subsequence of n_{i_j} each divisible by d. For each j, let a_{i_j} = r (mod d) (for some fixed r). For the remaining indices (the rest of the n_i's), we can choose residues arbitrarily, say a_i = 0. Then the union includes all numbers belonging to the selected classes. The union of the classes from the subsequence (n_{i_j}) is contained within the residue class r (mod d). The classes from the other indices may intersect both r and other residues. However, we can try to ensure that they do not cover all numbers outside r. Is it possible? If the other n_i's have small densities (1/n_i), the sum of their densities may converge, so they may not cover all numbers outside r. However, we need to guarantee that for any choice of residues for the other n_i's, the uncovered set has density zero. If the sum of reciprocals of the other n_i's diverges, maybe they can cover everything else? But we need to guarantee for any selection, not just some.
Thus, it's plausible that a sequence n_i satisfying (P) must have the property that any infinite subsequence that shares a common divisor cannot be "dominant". Probably, a known result: If ∑_{i: d | n_i} 1 / n_i diverges for some d > 1, then (P) fails? Let's explore.
Alternatively, maybe (P) does imply (U). Let's try to prove (P) => (U). The idea: Suppose (U) fails: there is ε > 0 such that for any k, we can find a sequence a_i^{(k)} with uncovered density > ε after k steps. Then we need to construct an infinite selection a_i* with uncovered density > 0, contradicting (P). The challenge is to combine the finite selections into an infinite one that preserves positive density.
Observation: For a fixed selection, the uncovered density is monotone non-increasing in k: d(U_k) = d(ℤ \ ⋃_{i=1}^k (a_i mod n_i)), and d(U_k) → d(U_∞) = d(ℤ \ ⋃_{i=1}^∞ (a_i mod n_i)). Since U_∞ ⊆ U_k for each k, we have d(U_∞) ≤ d(U_k). So if d(U_k) > ε for infinitely many k, we cannot directly conclude that d(U_∞) ≥ ε; it could be zero. But maybe we can bound the decrease from d(U_k) to d(U_{k+1}): adding the (k+1)-th congruence can only newly cover numbers lying in the class a_{k+1} mod n_{k+1}, a set of density 1/n_{k+1}, and possibly fewer if that class overlaps numbers already covered. So the decrease in uncovered density at each step i is ≤ 1/n_i, and the uncovered density after infinitely many steps is at least d(U_k) − ∑_{i > k} 1/n_i: from step k onward, the total possible additional coverage is at most the sum of the densities of the remaining classes (the worst case being that each later class covers only previously uncovered numbers). This is a key observation.
Thus, for a given selection a, we have:
d(U_∞) ≥ d(U_k) - ∑_{i > k} 1/n_i.
Now, suppose (U) fails: there exists ε > 0 such that for each k, there exists a selection a_i^{(k)} with d(U_k^{(k)}) > ε. Then for that selection, we have:
d(U_∞^{(k)}) ≥ d(U_k^{(k)}) − ∑_{i > k} 1/n_i > ε − ∑_{i > k} 1/n_i.
If we can choose k large enough such that the tail sum ∑_{i > k} 1/n_i < ε/2, then we would have d(U_∞^{(k)}) > ε/2 > 0, contradicting (P) for that infinite selection. However, note that the selection a_i^{(k)} is only defined for i ≤ k; we need to extend it to an infinite sequence to define U_∞^{(k)}. We can extend it arbitrarily (e.g., choose a_i = 0 for i > k). Then the uncovered set for this infinite selection is contained in U_k^{(k)}. Indeed, adding more congruences can only reduce the uncovered set, so d(U_∞^{(k)}) ≤ d(U_k^{(k)}). So is the inequality above reversed? Let's be careful.
Define U_k(a) = ℤ \ ⋃_{i=1}^k (a_i mod n_i). For an infinite sequence a, define U_∞(a) = ⋂_{k=1}^∞ U_k(a). Then U_∞(a) ⊆ U_k(a) for all k, so d(U_∞(a)) ≤ d(U_k(a)): the uncovered density after infinitely many steps is at most that of any finite prefix. So we only get an upper bound, not a lower bound. However, we can bound the amount by which the uncovered density can decrease from step k to infinity: each new class covers a set of density at most 1/n_i, some of which may already be covered, so the total possible decrease is at most ∑_{i > k} 1/n_i (the sum of the densities of the remaining classes). So we have:
d(U_∞(a)) ≥ d(U_k(a)) - ∑_{i > k} 1/n_i.
But is this correct? Let's examine: Suppose we have a set U_k of uncovered integers after k steps. For i > k, each new congruence a_i mod n_i covers a class of density 1/n_i. Some of those integers may already be covered (i.e., not lie in U_k). So the reduction in uncovered density at step i is the density of the part of the class that was still uncovered before step i, which is ≤ 1/n_i. The total reduction from step k onward is therefore at most ∑_{i > k} 1/n_i, and the uncovered density after infinitely many steps d(U_∞) satisfies:
d(U_∞) ≥ d(U_k) - ∑_{i > k} 1/n_i.
This is correct: The uncovered density after infinite steps cannot be less than the uncovered density after k steps minus the maximum possible total reduction from later steps. So we have a lower bound.
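The bound d(U_∞) ≥ d(U_k) − ∑_{i > k} 1/n_i can be sanity-checked exactly on a toy system, computing all densities over one common period (the moduli and residues below are arbitrary illustrative choices, not from the problem):

```python
from functools import reduce
from math import lcm

mods = [2, 3, 4, 5, 6]
res = [1, 2, 3, 4, 5]
M = reduce(lcm, mods)              # period 60: densities are exact over one period

def d_unc(k):
    """Exact uncovered density using only the first k congruences."""
    return sum(all(x % mods[i] != res[i] for i in range(k))
               for x in range(M)) / M

d_inf = d_unc(len(mods))           # density with all congruences applied
for k in range(len(mods) + 1):
    tail = sum(1 / m for m in mods[k:])
    # each later class removes at most 1/n_i of density, hence the bound
    assert d_inf >= d_unc(k) - tail - 1e-12
```

The check passes for every prefix length k, as it must: each congruence beyond step k can newly cover at most a density-1/n_i set.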
Now, suppose (U) fails: There exists ε > 0 such that for any k, there is a selection a^{(k)} with d(U_k(a^{(k)})) > ε. Then for each such selection, we have:
d(U_∞(a^{(k)})) > ε - ∑_{i > k} 1/n_i.
If we can ensure that ∑_{i > k} 1/n_i < ε/2 for some k, then d(U_∞(a^{(k)})) > ε/2 > 0, contradicting (P). So if (P) holds and (U) fails, no such k can exist: the tail sums ∑_{i > k} 1/n_i do not tend to zero, which for a series of positive terms means ∑_{i=1}^∞ 1/n_i diverges. That is consistent with what we already suspect from the other direction: if ∑ 1/n_i converges, (P) fails outright, since choosing a_i = 0 for all i leaves uncovered the set of integers divisible by no n_i, which has positive density (at least ∏ (1 − 1/n_i) > 0). That product argument assumes independence: if the n_i are pairwise coprime, the density of numbers divisible by none of them is exactly ∏ (1 − 1/n_i), which is positive when ∑ 1/n_i converges. If the n_i are not coprime, the uncovered set may be larger. Example: take n_i = i² for i ≥ 2. The sum of reciprocals converges (to π²/6 − 1), and the integers divisible by no i² are exactly the squarefree numbers, which have density 6/π² > 0, so (P) fails. In general, the density of numbers divisible by none of the n_i is given by the inclusion-exclusion formula; when the n_i are not coprime, ∏ (1 − 1/n_i) is not the exact density, but is it at least a lower bound?
Let's examine: For any set of positive integers {n_i}, the density of numbers not divisible by any of them is at least ∏ (1 - 1/n_i) if the n_i are pairwise coprime. If not coprime, the density can be larger, but not smaller? Actually, if n_i share factors, the events are not independent, so the probability that a random integer is not divisible by any of them may be larger than the product (since overlapping reduces the chance of being divisible). For example, n_1 = 2, n_2 = 4. The product (1 - 1/2)*(1 - 1/4) = 0.5 * 0.75 = 0.375. However, the set of numbers not divisible by either 2 or 4 is just the odds (density 0.5). So the actual density (0.5) > product (0.375). So product is a lower bound for the density of numbers not divisible by any n_i? Actually, product underestimates the density, i.e., density ≥ product.
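The {2, 4} computation, verified exactly over one period:

```python
mods = [2, 4]
M = 4                                   # lcm(2, 4): one full period
uncovered = [x for x in range(M) if all(x % m != 0 for m in mods)]  # the odds
density = len(uncovered) / M            # 0.5
product = (1 - 1 / 2) * (1 - 1 / 4)     # 0.375
assert density == 0.5 and product == 0.375 and density > product
```

As claimed, the true density (0.5) strictly exceeds the independence product (0.375), because divisibility by 4 is redundant given divisibility by 2.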
Thus, if ∑ 1/n_i converges, then ∏ (1 - 1/n_i) > 0, so the set of numbers not divisible by any n_i has density at least that positive product, so (P) fails (by taking a_i = 0). So divergence of ∑ 1/n_i is necessary for (P).
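The squarefree example from above can be checked numerically: counting squarefree integers up to 10^5 (an arbitrary cutoff) against the density 6/π²:

```python
import math

N = 10 ** 5
squarefree = [True] * (N + 1)
d = 2
while d * d <= N:
    for m in range(d * d, N + 1, d * d):
        squarefree[m] = False   # m is divisible by the square d^2 > 1
    d += 1
density = sum(squarefree[1:]) / N
# 6/pi^2 ≈ 0.6079; the finite count agrees closely
assert abs(density - 6 / math.pi ** 2) < 0.01
```

So the set left uncovered by the choices a_i = 0 for the moduli i² indeed has positive density, confirming that (P) fails when ∑ 1/n_i converges.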
Now, the inequality we derived: for any selection a, d(U_∞) ≥ d(U_k) − ∑_{i > k} 1/n_i. This says that if the tail sum of reciprocals can be made arbitrarily small, the infinite uncovered density stays close to d(U_k). However, if the tail sum does not go to zero, we cannot guarantee d(U_∞) > 0 from a given d(U_k) > ε. Suppose (U) fails: there exists ε > 0 such that for each k there is a selection a^{(k)} with d(U_k(a^{(k)})) > ε, and hence d(U_∞(a^{(k)})) ≥ ε − ∑_{i > k} 1/n_i for its extension. If ∑_{i=1}^∞ 1/n_i diverges, then every tail ∑_{i > k} 1/n_i diverges as well, so this lower bound is vacuous; we cannot subtract infinity and retain anything.
Let's formalize: Suppose ∑_{i=1}^∞ 1/n_i = ∞. Removing finitely many terms from a divergent series of positive terms leaves a divergent series, so the tail ∑_{i > k} 1/n_i diverges for every k; in particular, the tails do not tend to zero. (Only for a convergent series does the tail sum go to 0.) Consequently we cannot bound the possible reduction in uncovered density from step k onward by a small finite quantity: the sum of the densities of the later classes is infinite, so the later steps could in principle remove up to the entire remaining uncovered density. Thus the inequality gives no lower bound on d(U_∞) when the series diverges.
Thus, the inequality is not helpful if ∑ 1/n_i diverges. However, we can consider a more refined bound: The reduction in uncovered density from step k onward cannot exceed 1 (obviously). So if d(U_k) > ε, we cannot guarantee d(U_∞) > 0; it could be zero if the later steps cover everything else. So the existence of bad prefixes does not contradict (P). So the statement may be false: there could be sequences where the uncovered density after any finite number of steps can be arbitrarily close to 1, but after infinitely many steps, it's zero for any selection.
Is such a sequence possible? Let's try to construct it.
Idea: Use a sequence of moduli n_i that grows extremely fast, but also includes many small moduli interspersed to ensure eventual covering. For each k, we can choose residues for the first k moduli to leave a large fraction uncovered, but after we start using the larger moduli, they will cover the rest, regardless of residues. For the union to eventually cover almost all integers regardless of residues, the later moduli must have the property that any choice of residues for them covers a large proportion of the remaining uncovered integers. So later moduli must be such that each residue class is "dense" in the uncovered set, i.e., 1/n_i is not too small relative to the density of the uncovered set. However, if the uncovered density after k steps is > ε, then each subsequent class can cover at most density 1/n_i of the whole integers; if n_i is huge, that may be insufficient to reduce the uncovered density below ε, unless there are many of them.
Thus, to ensure that any infinite selection eventually covers everything, the sum of reciprocals of the tail must diverge, so that the total possible coverage is infinite (i.e., can cover up to density 1). However, we need coverage to happen regardless of the selection of residues. So the worst-case scenario is that each new class covers only previously uncovered numbers (i.e., no overlap). If the sum of reciprocals diverges, then after enough steps, the total coverage can be arbitrarily close to 1, even if each new class only covers new numbers. So after infinitely many steps, the uncovered density can be zero. However, for any finite k, we can choose residues that keep overlap high (i.e., each new class covers numbers already covered), leaving uncovered density high. This is possible if the moduli share many common factors, so that we can align the residues to have large overlaps. However, to satisfy (P), we cannot have a fixed divisor that divides infinitely many n_i (as we argued). But we can have a sequence where each integer divides only finitely many n_i, but each block of moduli share a divisor among themselves but not across blocks.
Consider constructing n_i in blocks: For block j, let all n_i in that block be multiples of a large prime p_j (or a product of some primes), but ensure that p_j > max of previous moduli, and p_j does not divide any n_i in other blocks. Then for any selection of residues, the union after infinite steps may still cover everything, but for a selection that picks residues within each block that are all ≡ r_j (mod p_j) for some r_j, the union within that block is confined to residue class r_j mod p_j. However, since p_j does not divide moduli in other blocks, those other blocks may intersect the complement of r_j mod p_j. However, we can choose residues in each block to restrict to a particular residue modulo p_j, but across blocks we can choose different residues. For a finite prefix that ends within a block, we may be able to keep uncovered density at least (p_j - 1)/p_j minus contributions from later blocks (which are not included). As p_j grows, (p_j - 1)/p_j can be arbitrarily close to 1. So for any ε > 0, we can find a block with p_j large enough such that the worst-case uncovered density after any number of steps within that block is > ε (if we choose residues all congruent to the same class mod p_j). However, after we continue to later blocks, they may cover the remaining numbers. Since each block's moduli are multiples of a distinct large prime that does not appear elsewhere, the later blocks can cover the numbers left uncovered by previous blocks. But we need to guarantee that for any infinite selection across all blocks, the uncovered set is zero. This is plausible: The set of numbers left uncovered after block j is those numbers not in the residue class r_j modulo p_j (if we restrict to that class within block j). The later blocks have moduli that are not multiples of p_j, so their residue classes intersect all residues modulo p_j. However, we can choose residues for later blocks also to restrict to some residue classes modulo their own primes p_{j+1}, etc. 
This suggests that we might be able to keep the uncovered set large across many blocks if we always choose residues that restrict to a particular residue class within each block. However, each block's residue class is defined modulo a different prime; a number that is not in the class modulo p_j may be in the class modulo p_{j+1} if it satisfies that condition. But we need to see if we can keep a positive fraction of numbers uncovered after infinitely many blocks. Possibly we can choose residues across blocks such that the uncovered set is the intersection of the complements of each block's chosen residue class modulo p_j (i.e., numbers not congruent to r_j mod p_j for each j). That set has density ∏ (1 - 1/p_j). If the primes p_j are chosen such that ∑ 1/p_j converges, then the product ∏ (1 - 1/p_j) > 0, so the intersection of complements (the set of numbers that avoid each selected residue class) has positive density. This would violate (P). To avoid that, we need ∑ 1/p_j diverges, ensuring product zero. However, we also need that within each block, we have many moduli all divisible by p_j, which could allow us to choose residues that all correspond to the same class modulo p_j. The union of those classes across the block is a subset of that class. So the uncovered set after the block is the complement of that class, i.e., numbers not congruent to r_j mod p_j, which has density (p_j - 1)/p_j. However, later blocks may cover some of those numbers.
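The density bookkeeping in the last step can be sanity-checked numerically; a minimal sketch, with hypothetical fast-growing block primes p_j chosen so that ∑ 1/p_j converges:

```python
# If the chosen classes in block j confine coverage to r_j (mod p_j), the set
# avoiding every block is contained in {x : x not ≡ r_j (mod p_j) for all j},
# whose density is prod_j (1 - 1/p_j).  With sum 1/p_j convergent this product
# stays positive -- which would already violate (P).  The primes below are
# hypothetical stand-ins for rapidly growing block primes.
primes = [11, 101, 1009, 10007, 100003]

prod = 1.0
tail_sum = 0.0
for p in primes:
    prod *= 1 - 1 / p
    tail_sum += 1 / p

print(round(prod, 4), round(tail_sum, 4))  # product stays well above zero
```

The computation just confirms the dichotomy in the text: divergence of ∑ 1/p_j is forced if (P) is to hold.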
If we choose the primes p_j to increase rapidly, then the densities (p_j - 1)/p_j approach 1. Thus, after block j, the uncovered density can be arbitrarily close to 1. However, after later blocks, the uncovered density may shrink. The total possible reduction in uncovered density from block j onward is at most ∑_{i > block j} 1/n_i (the sum of densities of later classes). If we make later blocks have very large moduli, then this sum can be small, so the final uncovered density remains close to the density after block j, i.e., > ε. This would contradict (P). However, we need to ensure that for any infinite selection, the uncovered density is zero. But we just constructed a selection that yields uncovered density > ε. So this sequence would not satisfy (P). So we need to design a sequence where for any infinite selection, the uncovered density is zero, but we can still find for each finite k a selection that leaves uncovered density > ε. This suggests a delicate balance: the tail sum of 1/n_i must be divergent to guarantee eventual coverage, but the moduli must be arranged such that within any finite prefix, we can align residues to keep uncovered density high.
One idea: Use a sequence where the n_i are the integers with many small prime factors, but the set of n_i is "thin" enough that the sum of reciprocals diverges slowly, but each n_i has many prime factors so that any residue class modulo n_i is "spread out" across residues modulo any smaller modulus. This may prevent aligning residues to keep uncovered density high across many steps? Actually, we want the opposite: we want to be able to align residues to keep uncovered density high for any finite prefix. So we need the moduli to share many small prime factors within the prefix.
Consider the following construction: Let p_j be an increasing sequence of primes (or integers). For each j, we have a block of length L_j (large). In block j, each n_i = p_j * m_i, where m_i are distinct integers coprime to p_j and to each other maybe. Then each n_i is divisible by p_j, but we can choose residues a_i that are all congruent to some fixed residue r_j modulo p_j (and arbitrary modulo m_i). This ensures that each selected class is a subset of the residue class r_j mod p_j. The union of these classes across block j is still contained within that residue class. So the uncovered set after block j includes all numbers not congruent to r_j mod p_j. The density of that set is (p_j - 1)/p_j.
Later blocks with different p_{j+1} may be able to cover some of those numbers, but each later class can only cover at most density 1/n_i of the whole integers; if the later moduli are huge, each such class covers a tiny fraction. However, there are infinitely many later moduli, so the total coverage potential across later blocks could be infinite if the sum of reciprocals diverges. But we need to ensure that any selection of residues for those later blocks will necessarily cover almost all numbers, regardless of choices. That is a strong requirement.
Alternatively, maybe the answer is yes: (P) implies (U). Let's attempt to prove this more formally.
We need to show that for any ε > 0, there exists k such that for any infinite sequence a, the uncovered density after k steps is < ε. Equivalent to: sup_{a} d(U_k(a)) → 0 as k → ∞.
Define δ_k = sup_{a} d(U_k(a)). For any fixed a we have d(U_{k+1}(a)) ≤ d(U_k(a)), and taking suprema preserves the inequality: sup_{a} d(U_{k+1}(a)) ≤ sup_{a} d(U_k(a)). So δ_k is non-increasing.
We want to show that δ_k → 0. Suppose not: there exists ε > 0 such that δ_k > ε for all k. Then for each k, there exists a sequence a^{(k)} such that d(U_k(a^{(k)})) > ε. Now we want to construct an infinite sequence a* such that d(U_∞(a*)) > 0, contradicting (P). This is like a compactness / diagonal argument: Since each a^{(k)} is defined on the first k coordinates, we can consider the infinite tree of possible prefixes. For each node at depth k, there is a set of sequences that share that prefix. Among those sequences, some have high uncovered density after k steps; we can track those.
Define a "good" node at depth k as a prefix (a_1, ..., a_k) for which the uncovered density after those k steps > ε. By assumption, for each k there is at least one good node. Moreover, the tree of all possible prefixes is infinite and finitely branching (each node has at most n_{k+1} children). We want to find an infinite path (a sequence) that goes through infinitely many good nodes (maybe all good nodes). Then we want to argue that the uncovered density of this infinite sequence is > 0.
But as argued earlier, an infinite path that passes through a good node at depth k does not guarantee that the infinite uncovered set has positive density, because later nodes may cause the uncovered density to drop. However, if the infinite path passes through good nodes for infinitely many depths, perhaps we can bound the infinite uncovered density from below. Let's try to bound it.
For a given infinite sequence a, define d_k = d(U_k(a)). This sequence is non-increasing, so it converges to L = inf_k d_k (whether L equals d(U_∞(a)) is a separate question; certainly d(U_∞(a)) ≤ L). Now, if d_k > ε for infinitely many k, then by monotonicity d_k > ε for all k (each term dominates some later term exceeding ε), so L ≥ ε. But we only know that for each k there exists some sequence a^{(k)} with d(U_k(a^{(k)})) > ε; that does not by itself produce a single sequence whose d_k exceeds ε for all k. That's the key difficulty.
Thus, we need to find a sequence a* such that d_k > ε for all k (or at least > ε/2). If we can find one, then lim_k d_k ≥ ε > 0, which (granting the identification of this limit with the density of the intersection) would contradict (P). So the question reduces to: if for each k there exists a sequence a^{(k)} with d(U_k(a^{(k)})) > ε, does there exist a single sequence a* with d_k > ε for all k? This is reminiscent of the finite intersection property in compact spaces: if every finite subfamily of a family of closed sets has nonempty intersection, then the whole family has nonempty intersection. Here the relevant sets are A_k = { a : d(U_k(a)) > ε }. Note that by monotonicity A_{k+1} ⊆ A_k, so any finite intersection A_{k_1} ∩ ... ∩ A_{k_m} equals A_{max_i k_i}, which is nonempty by assumption. So the finite intersection property holds; what remains is whether the A_k are suitable (closed) sets — and since membership in A_k depends only on the first k coordinates, that looks promising.
Alternatively, we can consider a weaker condition: For each N, consider the set B_N = { a : there exists k ≥ N such that d(U_k(a)) > ε }. This set is open? Actually, it is a union of A_k for k ≥ N. Since each A_k is open (maybe clopen), B_N is open. The intersection over all N of B_N is the set of sequences for which there exist arbitrarily large k with d(U_k(a)) > ε, i.e., limsup of d(U_k(a)) ≥ ε. This set may be nonempty. However, we need sequences with liminf or limit > ε.
Alternatively, we can use a probabilistic method: Consider a random sequence a drawn from a probability measure that gives equal weight to each residue at each step (independent uniform). For each fixed k, the expected uncovered density is ∏_{i=1}^k (1 - 1/n_i) (if independent). If ∑ 1/n_i diverges, this product tends to zero. So the expected uncovered density after k steps tends to zero. However, we need a uniform bound: we need to show that for any ε > 0, there exists k such that the probability that uncovered density > ε is zero? Actually, we need something stronger: we need that for any sequence a, the uncovered density after k is < ε. That is a deterministic statement.
But perhaps we can use a probabilistic method to show that if (U) fails, then there is a random sequence with positive probability of having infinite uncovered density > 0, contradicting (P). Let's try: Suppose (U) fails: there exists ε > 0 such that for any k, there exists a sequence a^{(k)} with uncovered density > ε after k steps. Consider the following random process: Choose an integer K randomly with some distribution (e.g., geometric). Then choose a prefix a^{(K)} that yields uncovered density > ε. Then extend the rest of the sequence arbitrarily (e.g., randomly). The resulting random infinite sequence has a positive probability that the uncovered density after K steps > ε. However, that does not guarantee that the infinite uncovered density > ε; it may drop later. But maybe we can bound the expected infinite uncovered density from below using the heuristic inequality d(U_∞) ≥ d(U_K) − ∑_{i > K} 1/n_i. This would work if we could choose K large enough that ∑_{i > K} 1/n_i < ε/2; but if the sum diverges, no such K exists. So this approach fails if the series diverges.
Thus, we need to examine the role of divergence of ∑ 1/n_i. As we argued, (P) implies divergence of ∑ 1/n_i (if series converges, then product > 0, and a_i = 0 yields uncovered set of positive density). Conversely, does divergence of ∑ 1/n_i guarantee (P)? Not necessarily (as shown earlier with n_i = 6i). So (P) imposes stronger constraints.
Now, let's try to find a counterexample to (U) using a carefully constructed sequence where each integer divides only finitely many n_i, but we can still maintain high uncovered density for arbitrarily large prefixes.
Idea: Use a sequence where each modulus n_i is the product of a new large prime p_i with a small fixed integer d (like 2). So n_i = 2 * p_i, where p_i is a distinct large prime. Then each n_i is even and has a factor p_i. For any selection of residues a_i, the class a_i mod n_i is a subset of some residue class modulo 2 (if we choose a_i ≡ 0 mod 2, then the class includes only even numbers; if a_i ≡ 1 mod 2, only odd numbers). If we want to maximize uncovered density after k steps, we can choose residues a_i that all are ≡ 0 mod 2 (i.e., choose even residues), so each class includes only even numbers. Then the union after k steps is contained in the even numbers; the uncovered set includes all odd numbers, which have density 1/2. So δ_k ≥ 1/2 for all k, violating (U). Does (P) hold? Probably not, because we can choose a_i = 0 (mod 2) for all i; the union is evens, uncovered odds have density 1/2 > 0, violating (P). So not a counterexample.
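A quick numerical check of this non-example (small primes stand in for the "distinct large primes" p_i; the density bound is what matters, not their size):

```python
# Sketch of the n_i = 2*p_i example: choosing every residue a_i even confines
# each class a_i (mod 2*p_i) to the even numbers, so all odd numbers stay
# uncovered and the uncovered density never drops below 1/2.

def uncovered_density(moduli, residues, N):
    """Fraction of 1..N hitting none of the classes residues[i] (mod moduli[i])."""
    misses = sum(1 for x in range(1, N + 1)
                 if all(x % n != a % n for n, a in zip(moduli, residues)))
    return misses / N

primes = [3, 5, 7, 11, 13]            # hypothetical stand-ins for distinct large primes
moduli = [2 * p for p in primes]      # n_i = 2 * p_i, all even
residues = [0] * len(moduli)          # all-even residue choice a_i = 0

d = uncovered_density(moduli, residues, 100000)
print(round(d, 3))  # every odd number is uncovered, so d >= 0.5
```

As the text notes, this violates (P) outright, so it rules the sequence out rather than answering the question.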
Thus, we must avoid having many moduli share a common small divisor.
Idea: Let the sequence n_i be the set of all squarefree integers with exactly one new prime factor each time: e.g., n_i = p_i (primes). As we saw, (U) holds for primes (product tends to zero). So not a counterexample.
What about n_i = p_i * q_i, where p_i and q_i are distinct primes and each pair is new (no repeats)? If every prime is used only once, the n_i are pairwise coprime and the k partitions are independent, so for any selection of residues the uncovered density after k steps is exactly ∏_{i=1}^k (1 - 1/n_i). For (P) we need ∑ 1/n_i to diverge, and here the two desiderata pull against each other: if every prime is fresh, then even pairing up the smallest primes ((2·3), (5·7), (11·13), ...) gives a convergent series, whereas if we take all products of two distinct primes, the sum does diverge — 2 ∑_{p<q} 1/(pq) = (∑_p 1/p)^2 − ∑_p 1/p^2, with ∑_p 1/p divergent and ∑_p 1/p^2 convergent — but the moduli are then no longer pairwise coprime. Take the all-semiprimes version and test (P) with the selection a_i = 0 for all i: each class is the set of multiples of p_i q_i, so the union is the set of integers divisible by some product of two distinct primes. The integers left uncovered are exactly 1 and the prime powers, since any integer with two distinct prime factors p < q is divisible by pq. The prime powers have density zero (the primes themselves have density zero, and the higher powers up to N number only O(√N log N)). So this selection leaves only a zero-density set, and (P) may hold for this sequence.
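The claim that the a_i = 0 selection leaves exactly 1 and the prime powers uncovered can be checked on a finite window; a minimal sketch:

```python
# For the moduli {p*q : p < q primes} with all residues a_i = 0, an integer x
# is covered iff it is divisible by some p*q with p < q, i.e. iff x has at
# least two distinct prime factors.  So the uncovered integers should be
# exactly 1 and the prime powers.

def covered_by_semiprime(x):
    """True iff x has >= 2 distinct prime factors (i.e. some p*q divides x)."""
    distinct, d = 0, 2
    while d * d <= x:
        if x % d == 0:
            distinct += 1
            while x % d == 0:
                x //= d
        d += 1
    if x > 1:
        distinct += 1
    return distinct >= 2

def is_prime_power(x):
    """True iff x = p^e for a prime p and e >= 1."""
    if x < 2:
        return False
    d = 2
    while d * d <= x:
        if x % d == 0:
            while x % d == 0:
                x //= d
            return x == 1
        d += 1
    return True  # x itself is prime

uncovered = [x for x in range(1, 500) if not covered_by_semiprime(x)]
print(uncovered[:10])                                        # [1, 2, 3, 4, 5, 7, 8, 9, 11, 13]
print(all(x == 1 or is_prime_power(x) for x in uncovered))   # True
```

This confirms the zero-density claim numerically on an initial segment; the density-zero statement itself is the standard fact that prime powers are sparse.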
Now, does (U) hold? For any ε > 0, can we find k such that any selection of residues for the first k moduli leaves uncovered density < ε? When the moduli are pairwise coprime, the uncovered density after k steps is exactly ∏_{i=1}^k (1 - 1/n_i), independent of the chosen residues. If moreover ∑ 1/n_i diverges, this product tends to zero, so choosing k large enough makes it < ε (the rate depends on the ordering of the n_i, but the limit does not). So (U) holds in the pairwise-coprime, divergent-sum case.
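For pairwise coprime moduli the uncovered density after k steps is the product above regardless of the residues; taking the moduli to be the primes (the simplest pairwise-coprime sequence with divergent reciprocal sum), a short sketch of how slowly that product decays:

```python
# With n_i ranging over the primes, the residue-independent uncovered density
# after using all primes up to x is prod_{p <= x} (1 - 1/p), which tends to 0
# (Mertens-type decay, on the order of 1/log x).

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, limit + 1, p):
                sieve[q] = False
    return [p for p, is_p in enumerate(sieve) if is_p]

prod = 1.0
for p in primes_up_to(10000):
    prod *= 1 - 1 / p
print(round(prod, 3))  # roughly e^{-gamma}/log(10000), about 0.06
```

So (U) holds here, but the convergence to zero is logarithmically slow, which is consistent with k in (U) depending badly on ε.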
Thus, sequences with pairwise coprime moduli and diverging sum of reciprocals satisfy (U). So to find a counterexample, we need a sequence where moduli are not pairwise coprime, yet (P) holds but (U) fails.
Consider a sequence where each modulus has many small prime factors, with the same small primes appearing across infinitely many moduli, while each new modulus includes a new large prime factor to keep the sum of reciprocals divergent. For example, let n_i = p_i * M, where M is the product of the first few primes (like 2·3·5·7 = 210). Then each n_i is divisible by M but also has a new large prime factor p_i. The sum ∑ 1/n_i = (1/M) ∑ 1/p_i diverges (since ∑ 1/p_i diverges). So (P) might hold. Let's test: If we choose a_i = 0 for all i, then each class is the set of numbers divisible by n_i = M * p_i, so the union is the set of numbers divisible by M * p_i for some i. A number divisible by M need not be divisible by any p_i: take M = 6 and let the p_i be primes distinct from 2 and 3. A number of the form 2^a * 3^b has no prime factor among the p_i, so it is not divisible by any n_i = 6 * p_i and remains uncovered. The set of numbers of the form 2^a * 3^b has density zero. So (P) may hold.
Now, does (U) hold? For the first k moduli n_1,...,n_k (all divisible by M), we can choose residues a_i that are all ≡ r (mod M) for some fixed r, say r = 1 (mod M). Then each class a_i mod n_i consists of numbers congruent to 1 mod M and also satisfying some condition modulo p_i (since a_i mod n_i is determined by residues modulo M and p_i). If we choose a_i such that a_i ≡ 1 (mod M) and a_i ≡ something modulo p_i that ensures the entire class is within the residue class 1 (mod M). Then the union of those classes is a subset of numbers ≡ 1 (mod M). Thus the uncovered set includes all numbers not ≡ 1 (mod M), which have density (M - 1)/M. This is bounded away from zero, independent of k (as long as the first k moduli are all divisible by M). So δ_k ≥ (M - 1)/M > 0 for all k, violating (U). However, does (P) hold for this sequence? Let's test again: If we choose a_i = 1 (mod M) for all i, then each class is a subset of numbers ≡ 1 (mod M). The union is contained in numbers ≡ 1 (mod M); the uncovered set includes numbers not ≡ 1 (mod M). But later moduli (e.g., after we exhaust all n_i = M * p_i) might have other moduli not divisible by M that could cover those numbers. However, if we define the sequence to consist only of moduli divisible by M (i.e., all n_i are multiples of M), then the above selection leaves uncovered density > 0, contradicting (P). So to satisfy (P), we cannot have all n_i divisible by a fixed integer M > 1 (i.e., there must be infinitely many n_i not divisible by M). However, we could have infinitely many n_i divisible by M as long as there are also infinitely many that are not. But for the purpose of violating (U), we only need a prefix of the sequence (the first k terms) all divisible by M. So if the sequence starts with a block of moduli all divisible by some M > 1, then (U) fails (since δ_k ≥ (M-1)/M for any k in that block). However, does the presence of later moduli not divisible by M salvage (P)? 
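A numerical sketch of this obstruction, with M = 6 and a handful of hypothetical stand-in primes p_i; the point is that the lower bound (M − 1)/M does not improve no matter how many such moduli the prefix contains:

```python
# With every modulus in the prefix divisible by M, choosing residues
# a_i ≡ 1 (mod M) confines each class a_i (mod M*p_i) to the progression
# 1 (mod M): everything outside that progression, density (M-1)/M, stays
# uncovered however long the prefix is.
M = 6
stand_in_primes = [5, 7, 11, 13, 17, 19, 23]   # hypothetical p_i, all coprime to M
moduli = [M * p for p in stand_in_primes]      # a prefix whose moduli are all multiples of M
residues = [1] * len(moduli)                   # a_i = 1 satisfies a_i ≡ 1 (mod M)

N = 100000
uncovered = sum(1 for x in range(1, N + 1)
                if all(x % n != a for n, a in zip(moduli, residues)))
print(uncovered / N)  # stays >= (M-1)/M = 5/6 for every prefix of this shape
```

Any residues with a_i ≡ 1 (mod M) give the same confinement (such residues exist by CRT since gcd(M, p_i) = 1); taking a_i = 1 is just the simplest choice.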
Possibly yes: Perhaps for any infinite selection, the later moduli will cover numbers not in that residue class, ensuring full coverage. So (P) may hold. This suggests a potential counterexample: A sequence where the first many terms are all multiples of a large integer M, but later terms are not multiples of M, and overall the sum of reciprocals diverges. Then (U) fails because we can choose residues in the first block to restrict to a single residue class modulo M, leaving many numbers uncovered, and the later terms are too sparse to bring the uncovered density arbitrarily small uniformly across all choices (but they ensure eventual coverage). However, is it true that for any infinite selection, the uncovered set has density zero? Not obvious: Since the later terms are sparse, maybe we can also choose residues for them to avoid covering the leftover numbers, leading to a positive density uncovered set. So we need to design the sequence so that no matter how we choose residues for later terms, they will cover the leftover numbers. This is a strong requirement.
Alternatively, maybe (U) holds for any sequence satisfying (P). Let's consider a known result: In ergodic theory or combinatorial number theory, there is a concept of "syndetic sets" and "thick sets". A set S of integers is "thick" if it contains arbitrarily long intervals. The complement of a thick set has zero upper density. In our context, we want the union of residue classes to be thick? Not exactly.
Maybe the problem is known as "Erdős–Graham conjecture"? Or "Erdős–Selfridge problem"? Actually, there is a known result: If you have a set of moduli such that any choice of residues covers almost all integers, then there is a finite subfamily that also covers almost all integers uniformly. This sounds plausible and may be a known theorem.
Let's search memory: There is a concept of "covering systems" and "Erdős' zero-one law"? Not sure.
Alternatively, perhaps the answer is yes: (P) implies (U). Let's try to prove it more thoroughly.
We need to show that for any ε > 0, there exists k such that for any infinite sequence a, the uncovered density after k steps < ε. Equivalently, define δ_k = sup_{a} d(U_k(a)). We need to show δ_k → 0.
Suppose not. Then there exists ε > 0 such that δ_k > ε for all k. For each k, let a^{(k)} be a sequence such that d(U_k(a^{(k)})) > ε; since only the first k coordinates of a^{(k)} matter, extend it arbitrarily. Now diagonalize in the compact product space: choose nested infinite index sets S_1 ⊇ S_2 ⊇ ... such that a_i^{(k)} is constant, say equal to a_i*, for all k ∈ S_i; then for each j pick k_j ∈ S_j with k_j ≥ j. The sequence a* agrees with a^{(k_j)} on the first j coordinates, so d(U_j(a*)) = d(U_j(a^{(k_j)})) ≥ d(U_{k_j}(a^{(k_j)})) > ε, using U_j ⊇ U_{k_j}. Hence d(U_j(a*)) > ε for every j, and lim_{j→∞} d(U_j(a*)) ≥ ε. (Note that a non-increasing sequence with infinitely many terms > ε in fact has all its terms > ε, since every term dominates some later term exceeding ε; it cannot "dip below ε later".)
However, this does not yet contradict (P). Condition (P) asserts d(U_∞(a*)) = 0, where U_∞(a*) = ∩_k U_k(a*), and the density of a nested intersection can be strictly smaller than the limit of the densities of its terms. So the remaining gap is to pass from lim_k d(U_k(a*)) ≥ ε to a positive-density intersection; we return to this point below.
Nevertheless, perhaps we can use a more refined argument: Suppose δ_k > ε for all k. Then there exists a sequence of prefixes p_k of length k with uncovered density > ε. Consider the infinite tree where each node at depth k corresponds to a prefix p_k. Since the tree is finitely branching, there is an infinite path (a*) that passes through infinitely many of these prefixes (by König's lemma). However, it may not pass through prefixes that have uncovered density > ε for all depths; it may only pass through some of them at certain depths, but at other depths it may have lower uncovered density. However, can we guarantee that we can find an infinite path that goes through prefixes with uncovered density > ε for infinitely many depths (cofinal)? Possibly yes: by constructing a subtree consisting of nodes that are "good" (i.e., have some descendant prefix with uncovered density > ε). Then apply König's lemma to get an infinite path where each node has a descendant with high uncovered density, but not necessarily the node itself.
Alternatively, maybe we can use the "compactness of density zero sets" argument: The family of sets of zero density is a σ-ideal. If each infinite selection yields a set in this ideal, perhaps there is a uniform bound.
Alternatively, perhaps the answer is "Yes, (P) implies (U)". Let's try to prove it using a different method. The set of all infinite sequences of residues is a compact metric space (a product of finite discrete spaces). For each ε > 0, define F_k(ε) = { a : d(U_k(a)) ≥ ε }. This is a closed set (verified below). Suppose for the moment that (P) implies lim_{k→∞} d(U_k(a)) = 0 for every a — this is the point that needs separate scrutiny. Then for each a there exists some k with d(U_k(a)) < ε, so no a belongs to every F_k(ε): ∩_{k=1}^∞ F_k(ε) = ∅. Moreover, the family is nested: d(U_{k+1}(a)) ≤ d(U_k(a)) gives F_{k+1}(ε) ⊆ F_k(ε), so the F_k(ε) form a decreasing sequence of closed sets. For such a family, an empty total intersection forces some member to be empty: if every F_k(ε) were nonempty, the intersection of this decreasing sequence of nonempty compact sets would be nonempty. So there exists K with F_K(ε) = ∅.
Wait, careful: For a decreasing family of closed sets, the intersection being empty implies that some F_K is empty (since they are nested). Indeed, if ∩_{k=1}^∞ F_k = ∅, then there must be some K where F_K = ∅, because if all F_k were nonempty, then by the finite intersection property (since they are nested) the infinite intersection would be nonempty. More formally, if each F_k is nonempty, then the infinite intersection is also nonempty (the intersection of a decreasing sequence of nonempty compact sets is nonempty). Therefore, the fact that the infinite intersection is empty implies that at some stage K, the set F_K is empty. That means there exists K such that there is no sequence a with d(U_K(a)) ≥ ε. In other words, for all a, d(U_K(a)) < ε. That is precisely the statement (U). So (P) implies (U). This is a standard compactness argument.
Hold on: Is the family {F_k(ε)} decreasing? Let's verify: F_k(ε) = { a : d(U_k(a)) ≥ ε }. Since d(U_{k+1}(a)) ≤ d(U_k(a)), if d(U_{k+1}(a)) ≥ ε, then d(U_k(a)) ≥ ε. So a ∈ F_{k+1}(ε) ⇒ a ∈ F_k(ε). So indeed F_{k+1}(ε) ⊆ F_k(ε). So the sequence is decreasing.
Now, note that each F_k(ε) is a closed subset of the compact product space ∏_{i=1}^∞ ℤ/n_iℤ (with product topology). We need to verify closedness: The set of sequences a such that d(U_k(a)) ≥ ε depends only on the first k coordinates (since U_k is determined by those). For each fixed prefix (a_1...a_k), the density d(U_k(a)) can be computed (it is a rational number depending on the lcm of the first k moduli). The condition d(U_k(a)) ≥ ε is a closed condition (since the mapping from prefix to density is continuous? Actually, the set of prefixes for which d(U_k(a)) ≥ ε is a finite union of cylinder sets; each cylinder set is clopen; thus the union is clopen). Hence F_k(ε) is a finite union of cylinders, thus closed (and open). So the conditions for the compactness argument hold.
Now, condition (P) says that for any infinite sequence a, the limit of d(U_k(a)) = 0. That implies that for any a and any ε > 0, there exists K (depending on a and ε) such that d(U_K(a)) < ε. Therefore, no a belongs to all F_k(ε). So ∩_{k=1}^∞ F_k(ε) = ∅.
Since the space is compact and the sets are decreasing and closed, there exists K such that F_K(ε) = ∅. That means for every sequence a, d(U_K(a)) < ε. This is exactly the desired uniform bound: there exists K depending only on ε such that for any choice of residues a_i (for all i, but only the first K matter), the density of integers not satisfying any of the congruences a_i (mod n_i) for i ≤ K is < ε.
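Condensed into one place, the compactness step reads as follows (a sketch; it still assumes lim_k d(U_k(a)) = 0 for each a, the point scrutinized next):

```latex
\begin{proof}[Sketch of the compactness step]
Fix $\varepsilon>0$ and set
$F_k(\varepsilon)=\bigl\{a\in\textstyle\prod_{i\ge1}\mathbb{Z}/n_i\mathbb{Z} :
d(U_k(a))\ge\varepsilon\bigr\}$.
Each $F_k(\varepsilon)$ depends only on $(a_1,\dots,a_k)$, so it is a finite union
of cylinder sets, hence clopen; and $F_{k+1}(\varepsilon)\subseteq F_k(\varepsilon)$
because $d(U_{k+1}(a))\le d(U_k(a))$.
If $d(U_k(a))\to 0$ for every $a$, then $\bigcap_{k\ge1}F_k(\varepsilon)=\emptyset$.
A decreasing sequence of nonempty compact sets has nonempty intersection, so some
$F_K(\varepsilon)=\emptyset$; that is, $d(U_K(a))<\varepsilon$ for \emph{every}
choice of residues, which is exactly (U).
\end{proof}
```

The one unverified hypothesis, flagged in the first line, is exactly the gap between the limit of the densities and the density of the intersection.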
Hence, the answer is yes: (P) implies (U). So the statement is true.
Now, we need to make sure we haven't missed any subtlety. Condition (P) concerns the set of integers not satisfying any of the infinite set of congruences: its density is zero for every choice of residues. Does this imply that for each sequence a, d(U_k(a)) → 0 as k → ∞? Since U_{k+1} ⊆ U_k, the densities d(U_k) form a non-increasing sequence bounded below by zero, so they converge to some limit L ≥ 0. But the set U_∞ = ∩_k U_k may have density d(U_∞) strictly less than L. For example, consider sets U_k = [k, ∞) (all integers ≥ k). Each has density 1, but the intersection is empty, with density 0. So d(U_k) → 1 while d(U_∞) = 0: the fact that d(U_∞) = 0 does not by itself imply d(U_k) → 0. However, in our situation each U_k is a periodic set (a finite union of arithmetic progressions). Does there exist a sequence a such that d(U_k) stays bounded away from zero while the intersection has density zero? Let's probe: take n_i = i for i ≥ 2 (modulus 1 would trivially cover everything) and choose residues a_i = i − 1 (the class −1 mod i). Then U_k consists of the numbers not ≡ −1 mod any i ≤ k; among numbers ≤ N for large N, the proportion is roughly ∏_{i=2}^k (1 − 1/i) = 1/k, which tends to zero as k increases. So in this example d(U_k) → 0 and there is no problem.
But can we construct a sequence of residues such that d(U_k) does not tend to zero, yet the infinite intersection has density zero? Let's attempt: Suppose we have n_i = 2^i, and we choose residues a_i = 0 for all i. Then U_k = numbers not divisible by any of 2, 4, ..., 2^k, i.e., odd numbers (density 0.5). So d(U_k) = 0.5 for all k. The infinite intersection U_∞ is also odd numbers (since the union of all those classes is evens). So d(U_∞) = 0.5 > 0. So (P) fails. So not relevant.
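A quick numerical confirmation of why this power-of-two example violates (P): the union of the classes 0 (mod 2^i) is exactly the even numbers, so the uncovered density is stuck at 1/2 for every k (a small sketch, counting over one period):

```python
# n_i = 2^i, a_i = 0: the union of the classes 0 (mod 2^i) is just the
# even numbers, so the uncovered density is exactly 1/2 for every k
for k in range(1, 9):
    mods = [2 ** i for i in range(1, k + 1)]
    L = mods[-1]                      # lcm(2, 4, ..., 2^k) = 2^k
    uncovered = sum(1 for x in range(L) if all(x % m != 0 for m in mods))
    assert uncovered * 2 == L         # density 1/2, independent of k
```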
Consider n_i = i (all integers as moduli). For each i, choose a_i = 0 for i in some subset S, and something else for the others. Can we arrange that d(U_k) stays bounded away from zero while the infinite intersection has density zero? Perhaps if the uncovered set at each finite stage consists of numbers that later steps prune away: each individual integer gets covered eventually, while large blocks remain uncovered after any finite k. A first attempt, a_i = i (mod i), is the same as a_i = 0 (since i ≡ 0 (mod i)), which merely covers the multiples of each i; we need a cleverer selection.
We need a selection where each integer is covered at some step (so the infinite intersection is empty), but the density of uncovered numbers after any finite k stays positive, perhaps close to 1. Try a_i = i+1 (mod i); since i+1 ≡ 1 (mod i), this is just the class 1 (mod i), so take a_i = 1 for all i directly. Then each integer x > 1 is covered at step i = x-1: since x = (x-1) + 1, we have x ≡ 1 (mod x-1). So the infinite union covers all integers x > 1, and the infinite intersection has density zero. What is the uncovered density after k steps? For each i ≤ k, the congruence 1 (mod i) covers the class of density 1/i, so the covered proportion is at most ∑_{i=1}^k 1/i ~ log k + γ by the union bound; but once this sum exceeds 1 the union bound tells us nothing, and we must compute d(U_k) directly. Try small k: with k = 2 and moduli 1 and 2, the class 1 (mod 1) already covers every integer, so the uncovered density is 0.
Actually, if we include n_1 = 1, then any residue class modulo 1 is the whole ℤ, so the union is everything. So d(U_k) = 0 for any k ≥ 1. So not a counterexample.
But we can start with n_1 > 1 to avoid immediate coverage. Let's consider n_i = i+1 (so the moduli start at 2). For each i choose a_i = 1: a_1 = 1 (mod 2) covers the odd numbers, a_2 = 1 (mod 3) covers the numbers ≡ 1 (mod 3), and so on. Does the union of these classes cover all integers? For x = 5, m = 4 works: 5 mod 4 = 1; for x = 6, m = 5; for x = 7, m = 6. In general any integer x > 2 satisfies x ≡ 1 (mod x-1) with x-1 ∈ {2, 3, …}, so x is covered by the congruence with modulus n_{x-2} = x-1. Note that x = 1 is also covered (1 ≡ 1 (mod m) for every m, since m divides 0), while x = 2 is not (2 ≡ 1 (mod m) would force m = 1). So the uncovered set for the infinite system is just {2}, which has density zero. (And in fact (P) holds for this sequence of moduli: for any residues, avoiding one class modulo every prime p ≤ k+1 already forces d(U_k) ≤ ∏_{p ≤ k+1} (1 - 1/p) → 0.)
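The covering claim can be verified by brute force. Note that 1 is covered (every m divides 1 - 1 = 0) while 2 is not (2 ≡ 1 (mod m) forces m = 1); a quick sketch:

```python
def covered(x):
    """Is x in the class 1 (mod m) for some m >= 2?  Equivalent to m | x-1."""
    return any(x % m == 1 for m in range(2, x + 2))

# scan a long initial segment: only x = 2 avoids every class 1 (mod m)
uncovered = [x for x in range(1, 10_000) if not covered(x)]
assert uncovered == [2]
```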
Now, what's the uncovered density after k steps? With moduli 2, 3, …, k+1 and residues a_i = 1, the covered set is the union of the classes 1 (mod m) for 2 ≤ m ≤ k+1. Each class alone has density 1/m, but the classes overlap heavily (e.g. 1 (mod 4) ⊆ 1 (mod 2)) and the moduli are not pairwise coprime, so the density of U_k is not given by the naive product ∏_{m=2}^{k+1} (1 - 1/m) = 1/(k+1). The union bound only gives d(U_k) ≥ 1 - ∑_{m=2}^{k+1} 1/m = 1 - (H_{k+1} - 1), which is negative already for small k and hence useless. We must compute d(U_k) exactly for this selection.
But we can compute the exact density for this particular selection. The condition x ≢ 1 (mod m) for all 2 ≤ m ≤ k+1 means that x-1 is divisible by no integer in {2, …, k+1}; that is, x-1 is either 1 or has all prime factors > k+1. By Mertens' theorem, the number of integers ≤ x with no prime factor ≤ y is roughly x ∏_{p ≤ y} (1 - 1/p) ~ x e^{-γ}/log y, so the set of such integers has density about e^{-γ}/log y; with y = k+1 this is ≍ 1/log k. Hence the uncovered density after k steps for this selection is about 1/log(k+1). It tends to zero, albeit slowly, so for any ε > 0 we can take k large enough that the uncovered density is < ε. So (U) holds for this choice.
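The Mertens estimate used here is easy to sanity-check: the exact density of integers with no prime factor ≤ y is ∏_{p ≤ y} (1 - 1/p), which is already close to e^{-γ}/log y for modest y. A rough numerical sketch (helper names are ours):

```python
from fractions import Fraction
from math import exp, log

def primes_upto(y):
    """All primes <= y via a simple sieve."""
    sieve = [True] * (y + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(y ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def rough_density(y):
    """Exact density of integers with no prime factor <= y."""
    d = Fraction(1)
    for p in primes_upto(y):
        d *= Fraction(p - 1, p)
    return d

EULER_GAMMA = 0.5772156649015329
for y in (10, 30, 100):
    exact = float(rough_density(y))
    mertens = exp(-EULER_GAMMA) / log(y)
    # loose tolerance: Mertens has a 1 + O(1/log y) relative error
    assert abs(exact - mertens) < 0.05 * mertens + 0.02
```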
Thus, this example does not provide a counterexample.
Now, consider the sequence n_i = i (starting from 2). What is the worst case over residue choices after k steps? Taking a_i = 0 for all i, the uncovered set is the set of integers with no prime factor ≤ k, of density ∏_{p ≤ k} (1 - 1/p). Conversely, any uncovered set must avoid one class modulo each prime p ≤ k, and classes modulo distinct primes are independent by CRT, so d(U_k) ≤ ∏_{p ≤ k} (1 - 1/p) for every choice of residues. Hence sup_a d(U_k(a)) = ∏_{p ≤ k} (1 - 1/p) ≍ 1/log k. So δ_k → 0 and (U) holds.
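For small prefixes this worst case can be confirmed by exhaustive search over every residue choice. For the moduli 2, …, 6 the supremum is ∏_{p ≤ 6} (1 - 1/p) = (1/2)(2/3)(4/5) = 4/15, attained at a_i = 0 (a brute-force sketch):

```python
from fractions import Fraction
from itertools import product
from math import lcm

mods = list(range(2, 7))                 # moduli 2, 3, 4, 5, 6
L = lcm(*mods)                           # common period 60
worst = max(
    Fraction(sum(1 for x in range(L)
                 if all(x % n != a for n, a in zip(mods, choice))), L)
    for choice in product(*(range(n) for n in mods))
)
# avoiding one class mod each prime p <= 6 caps the density at 4/15,
# and the all-zero choice attains the cap
assert worst == Fraction(4, 15)
```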
Thus, many natural sequences satisfy (U). However, the problem asks if it's always true. The compactness argument above suggests yes: Since the space is compact and the sets of sequences with uncovered density ≥ ε are closed and nested, property (P) implies that the intersection is empty, so some set is empty, giving uniform bound.
But we need to verify the continuity of density as a function of the sequence: The mapping a → d(U_k(a)) is continuous? Actually, it depends only on the first k coordinates, so it's a function on a finite product; it's locally constant (since each cylinder yields a specific density). So the set {a : d(U_k(a)) ≥ ε} is a finite union of cylinders (closed). So the compactness argument is valid.
Thus, (P) ⇒ (U). Let's restate: For each ε > 0, consider the sets F_k = { a : d(U_k(a)) ≥ ε }. Any a ∈ ∩_{k=1}^∞ F_k would be a sequence for which the uncovered set after every finite number of steps has density ≥ ε, and then the infinite uncovered set would also have density ≥ ε, granting that d(U_∞) = inf_k d(U_k) for these sets (a point we examine below). Such a sequence would contradict (P). Therefore ∩_{k=1}^∞ F_k = ∅. Since the space is compact and the sets are decreasing and closed, some F_k must be empty. That means there exists k such that for all a, d(U_k(a)) < ε. This is exactly the required property.
Thus, the answer is yes.
Now, we need to double-check the subtlety: we used that d(U_∞) = inf_k d(U_k). Is it true that for any decreasing sequence of sets U_k, each with a density, the density of the intersection equals the infimum of the densities? For a decreasing sequence of measurable sets in a finite measure space this is continuity from above; but density is not a measure, only a limiting notion. For non-periodic sets equality certainly fails: U_k = { n : n ≥ k } has density 1 for every k while the intersection is empty. For periodic sets one can try U_k = ⋃_{n ≥ k} n!ℤ, but that union is just k!ℤ (since n!ℤ ⊆ k!ℤ for n ≥ k), of density 1/k! → 0, so it is not a counterexample. Another family: U_k = { n : the least prime factor of n is > p_k } (p_k the k-th prime) has density ∏_{p ≤ p_k} (1 - 1/p) ~ e^{-γ}/log p_k → 0, and the intersection is the set of numbers with no prime factor at all, i.e. {1}, of density zero. So the limit of the densities equals the density of the intersection there too.
But can we have a decreasing sequence of periodic sets where densities remain bounded away from zero but the intersection has density zero? Possibly: Consider U_k = set of integers n such that n ≡ 0 mod k! (i.e., multiples of k!). Density of U_k = 1/k! → 0, not a counterexample.
What about U_k = set of integers n such that n ≡ 0 or 1 mod k (or some pattern). The density may be >0, but the intersection may become sparse.
In general, for periodic sets, the density of the intersection can be smaller than the infimum of densities. Example: Let U_k be the set of integers whose binary representation ends with at least k zeros (i.e., divisible by 2^k). Then d(U_k) = 1/2^k, which tends to zero. Intersection is {0} (density zero). So not a counterexample.
Find a decreasing sequence of periodic sets where each set has density > ε but the intersection has density 0. This would violate the equality that d(∩ U_k) = inf d(U_k). However, is that possible? For sets of integers defined by congruences, maybe not? Let's examine.
Consider the following: For each k, define U_k = ⋃_{i=k+1}^∞ R_i, where R_i = numbers ≡ 0 mod i? Actually, that's not decreasing.
We need a decreasing family: U_{k+1} ⊆ U_k. Suppose U_k = { n : n ≡ 0 (mod m) for some m ≥ k }, the set of integers divisible by some integer ≥ k. Every integer n ≥ k is divisible by itself, so U_k contains all n ≥ k and has density 1; yet the intersection over all k is empty, since no integer has arbitrarily large divisors. So the densities are all 1 while the intersection has density 0. However, this U_k is not periodic: it is a union of infinitely many progressions. In our situation each U_k is genuinely periodic: U_k = ℤ \ ⋃_{i=1}^k (a_i mod n_i), and since each class a_i (mod n_i) is periodic, U_k is a finite union of congruence classes modulo L = lcm(n_1, …, n_k). So U_k is a finite union of arithmetic progressions; as k increases the number of progressions may grow, but each U_k is periodic.
Now, does the density of the intersection of a decreasing sequence of periodic sets equal the infimum of the densities? Test case: let the moduli be i² and choose a_i = 0 for all i. Then U_k = { x : i² ∤ x for all 2 ≤ i ≤ k }. Only the prime squares matter (if i² | x for a composite i, then p² | x for any prime p | i), so d(U_k) = ∏_{p ≤ k} (1 - 1/p²), which decreases to ∏_p (1 - 1/p²) = 6/π² > 0. The infinite intersection is the set of squarefree numbers, whose density is exactly 6/π². So the limit of the densities equals the density of the intersection (both positive). Not a counterexample.
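The prime-square observation can be checked exactly over one period: avoiding 0 (mod i²) for every i ≤ 6 is the same as avoiding the prime squares 4, 9, 25, giving density (3/4)(8/9)(24/25) = 16/25 (a small sketch):

```python
from fractions import Fraction
from math import lcm, pi

mods = [i * i for i in range(2, 7)]      # 4, 9, 16, 25, 36
L = lcm(*mods)                           # 3600
d = Fraction(sum(1 for x in range(L) if all(x % m != 0 for m in mods)), L)
# 16 and 36 are redundant: divisibility by them implies divisibility by 4
assert d == Fraction(3, 4) * Fraction(8, 9) * Fraction(24, 25)
print(float(d), 6 / pi ** 2)             # 0.64 vs. squarefree density 0.6079...
```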
We need a scenario where densities approach a positive limit but the intersection is zero. Is that possible for periodic sets? Let's try to construct: Let n_i = i, and choose residues a_i such that the uncovered set U_k includes numbers that are ≤ k (or something). For small numbers, the uncovered set may include many small numbers, but after k steps, all numbers up to some large bound become covered, leaving only a set of size maybe O(k), but density zero. However, the density of U_k may still be positive because the set of small numbers up to some threshold may have density zero in the limit, but the periodic structure may create a positive density set of "large" numbers that remain uncovered. Actually, if we define U_k as all numbers whose smallest prime factor > p_k (the k-th prime), then density tends to zero. So not a counterexample.
But perhaps we can define U_k as numbers that are not divisible by any of the first k primes, but also are not congruent to some residues modulo those primes (like the specific residues we choose). This yields a set of numbers that have no small prime factors and also avoid certain residues mod those primes. This set may have density roughly ∏_{p ≤ p_k} (1 - 1/p) (the same as before). So tends to zero.
Thus, for periodic sets defined by avoiding a certain residue class modulo each n_i, the density of the uncovered set after k steps tends to zero if sum of reciprocals diverges and the residues are "generic". However, we need to show that for any selection of residues, the densities converge to zero uniformly. The compactness argument above suggests that this must hold.
Let's formalize the compactness proof:
Let Ω = ∏_{i=1}^∞ ℤ/n_iℤ be the space of all infinite sequences of residues a_i (with product topology). For each k, define a function f_k: Ω → [0,1] by f_k(a) = d(U_k(a)). This function depends only on the first k coordinates, and is locally constant (takes finitely many values). In particular, it is continuous.
For any a ∈ Ω, condition (P) implies that lim_{k→∞} f_k(a) = 0. Therefore, for any ε > 0, the family of closed sets F_k = { a ∈ Ω : f_k(a) ≥ ε } has empty intersection. Since Ω is compact, there exists K such that F_K = ∅. Hence, for all a ∈ Ω, f_K(a) < ε. This is precisely the desired uniform bound.
Thus, the answer is yes.
We should verify that the density d(U_k(a)) is indeed well-defined and finite for any prefix a. Since U_k(a) = ℤ \ ∪_{i=1}^k (a_i mod n_i). Each a_i mod n_i is a periodic set with density 1/n_i. The union of finitely many periodic sets has a well-defined density (you can compute via inclusion-exclusion or using the lcm of the moduli). So d(U_k(a)) exists as a rational number. So f_k is well-defined.
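The two standard ways of computing d(U_k(a)) agree: count residues over one period L = lcm(n_1, …, n_k), or apply inclusion–exclusion to the union of the classes, using the standard CRT fact that a subsystem of congruences has a common solution (one class mod the lcm) iff it is pairwise compatible, i.e. a ≡ b (mod gcd(n, m)) for each pair. A sketch with arbitrarily chosen moduli and residues:

```python
from fractions import Fraction
from itertools import combinations
from math import gcd, lcm

congs = [(2, 1), (3, 0), (4, 3), (5, 2)]   # arbitrary (modulus, residue) pairs

def density_by_period(congs):
    L = lcm(*(n for n, _ in congs))
    return Fraction(sum(1 for x in range(L)
                        if all(x % n != a for n, a in congs)), L)

def density_by_inclusion_exclusion(congs):
    union = Fraction(0)
    for r in range(1, len(congs) + 1):
        for sub in combinations(congs, r):
            # pairwise-compatible congruences intersect in one class mod lcm
            if all(a % gcd(n, m) == b % gcd(n, m)
                   for (n, a), (m, b) in combinations(sub, 2)):
                union += Fraction((-1) ** (r + 1), lcm(*(n for n, _ in sub)))
    return 1 - union                        # uncovered = complement of union

assert density_by_period(congs) == density_by_inclusion_exclusion(congs)
```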
Thus, the compactness argument holds, granted two points: that each f_k is well-defined (just checked), and that "density 0" in (P) refers to the natural density, which exists for the congruence-defined sets in question. Hence the answer is yes.
We can also give a direct proof. Suppose (U) fails; then there exists ε > 0 such that for each k there is a selection a^{(k)} with uncovered density ≥ ε after k steps. Consider the tree of finite prefixes, and call a prefix of length k "good" if the uncovered density after those k congruences is ≥ ε. Every initial segment of a good prefix is good, because densities only decrease as congruences are added; so the good prefixes form a subtree, which by assumption contains arbitrarily long members and is therefore infinite. Since each level of the tree is finite, König's lemma yields an infinite path a* all of whose prefixes are good, i.e. d(U_k(a*)) ≥ ε for every k, contradicting (P). This is the same sequence the compactness argument produces.
Alternatively, we can use the "compactness" argument as above: The sets F_k(ε) are closed and nested; their intersection empty implies some F_k(ε) empty. This is a standard argument.
Thus, the answer is yes: (P) implies (U).
Now, we should also consider if there is any subtlety about the density existing. For the sets U_k(a), the density exists, but for the infinite intersection U_∞(a), the natural density may not exist. However, (P) says the set of integers not satisfying any of the congruences has density 0. It doesn't specify whether the density exists; but presumably it does exist or the condition holds for upper density? The statement "has density 0" can be interpreted as the natural density exists and equals zero. For sets defined by congruences, the density exists (since they are "regular" sets). So we can assume that.
This is just the finite intersection property of compact spaces at work. Hence: for every ε > 0 there exists k such that, for every choice of congruence classes a_i, the density of integers not satisfying any of the congruences a_i (mod n_i) for 1 ≤ i ≤ k is less than ε.
We may also provide a constructive bound: Since the sets F_k(ε) are finite unions of cylinders, we can find the minimal k such that no cylinder yields uncovered density ≥ ε. However, this may not be constructive.
A counterexample would require a sequence n_i for which (P) holds yet the sets F_k(ε) are never empty; the compactness argument shows this cannot happen.
We should also consider the case where n_i may be unbounded but we need the existence of density; for any finite prefix, the uncovered set is a periodic set, so density exists. For the infinite intersection, maybe the natural density does not exist (the set may be irregular). However, the statement "the set of integers not satisfying any of the congruences has density 0" means that the natural density of that set exists and equals 0. For any set defined by congruences (i.e., a union of arithmetic progressions), the density exists. However, the complement of an infinite union of residue classes may not be periodic, but its density may still exist (likely zero). In any case, the argument stands.
Thus, final answer: Yes, that is true; the proof uses a compactness argument: If not, then there exists ε > 0 such that for each k we can choose residues so that the uncovered set has density ≥ ε; then by compactness there is an infinite selection for which the uncovered set has density ≥ ε, contradicting the hypothesis.
We can also provide a more detailed proof:
Given ε > 0. For each integer k, define S_k as the set of all sequences (a_i) such that the density of integers not satisfying any of the first k congruences is ≥ ε. Each S_k is a closed (indeed clopen) subset of the product space ∏ ℤ/n_iℤ. By hypothesis, the intersection ∩_{k≥1} S_k is empty (for any sequence, the density of the infinite uncovered set is zero, so eventually the uncovered density is < ε). By compactness of the product space, some finite subcollection already has empty intersection; i.e., there exists K with S_K = ∅. Then for any choice of residues, the density after K steps is < ε, as required.
This is analogous to a general fact about families of clopen sets: if every infinite subfamily covers the space up to a null set, then some finite subfamily already covers all but an arbitrarily small proportion, uniformly.
The problem expects a yes/no answer with justification; the justification is the compactness (finite intersection property) argument, which we now lay out.
But we need to provide a rigorous proof. Let's outline:
  1. For each i, define the finite set of residues modulo n_i: ℤ/n_iℤ. Let Ω = ∏_{i=1}^∞ ℤ/n_iℤ with product topology (compact by Tychonoff).
  2. For each k, define f_k: Ω → [0,1] by f_k(a) = density of integers not satisfying any of the first k congruences. Since this depends only on the first k coordinates, f_k is locally constant, hence continuous.
  3. For each ε > 0, define closed sets F_k = { a ∈ Ω : f_k(a) ≥ ε }. Since {F_k} is decreasing and ∩_{k=1}^∞ F_k = ∅ (by hypothesis (P), for each a we have lim_{k→∞} f_k(a) = 0), compactness implies there exists K such that F_K = ∅.
  4. This K satisfies the required property: for any choice of residues a_i, the density of integers not satisfying any of the first K congruences is < ε.
Thus, (P) ⇒ (U). QED.
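Steps 1–4 can be illustrated on a toy prefix: for the moduli 2, 3, 4 we can enumerate all 2·3·4 = 24 residue choices, compute f_3 exactly, and observe that F_3 = ∅ for every ε > 1/3 (a minimal sketch; these moduli are just for illustration):

```python
from fractions import Fraction
from itertools import product
from math import lcm

mods = [2, 3, 4]
L = lcm(*mods)                           # f_3 is determined modulo 12
densities = [
    Fraction(sum(1 for x in range(L)
                 if all(x % n != a for n, a in zip(mods, choice))), L)
    for choice in product(*(range(n) for n in mods))
]
# f_3 takes finitely many rational values; its maximum here is 1/3,
# so F_3 = { a : f_3(a) >= eps } is empty for every eps > 1/3
assert max(densities) == Fraction(1, 3)
assert len(densities) == 2 * 3 * 4
```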
Alternatively, phrase it via the finite intersection property: in a compact space, a decreasing family of closed sets with empty total intersection must have an empty member; thus some F_K = ∅.
We searched for a potential counterexample and found none; the compactness argument is solid, granted that the densities exist and the sets F_k are closed, both of which we verified.
One could also phrase this via upper semicontinuity of a ↦ d(U_k(a)), but the locally constant description is simpler.
More broadly, this is a finiteness property of "universal" covering sequences: if every infinite choice of residues covers almost all integers, then a fixed finite prefix already covers all but an ε-proportion, uniformly in the residues.
Let's write a full solution.
Solution:
Let Ω = ∏ ℤ/n_iℤ and fix ε > 0. For each k, define
E_k = { (a_i) ∈ Ω : d(ℤ \ ⋃_{i=1}^k (a_i mod n_i)) ≥ ε }.
Since E_k depends only on the first k coordinates, it is a finite union of cylinders, hence closed in Ω.
Condition (P) says that for any a ∈ Ω, the set of integers not satisfying any of the infinite list of congruences has density 0; equivalently, lim_{k→∞} d(ℤ \ ⋃_{i=1}^k (a_i mod n_i)) = 0. Hence for any a ∈ Ω there exists k with a ∉ E_k. Therefore, ∩_{k≥1} E_k = ∅.
Ω is compact (Tychonoff). By the finite intersection property, there must be some K with E_K = ∅. This means that for every a ∈ Ω we have d(ℤ \ ⋃_{i=1}^K (a_i mod n_i)) < ε. This is exactly the required statement.
Thus, the answer is yes.
Remarks. Condition (P) says exactly that for every a ∈ Ω the non-increasing densities f_k(a) tend to 0, and compactness upgrades this pointwise statement to a uniform one.
Some hypothesis of this strength is needed: if ∑ 1/n_i converges, then ∏ (1 - 1/n_i) > 0, and the choice a_i = 0 leaves the non-multiples of all the n_i uncovered, a set of density at least ∏ (1 - 1/n_i) > 0 by Behrend's inequality (concretely, n_i = 2^i leaves the odd numbers, of density 1/2). So (P) fails and no uniform bound is possible.
On the reading of "density 0": for each finite k the set U_k(a) is periodic, so d(U_k(a)) certainly exists, and that is all the proof uses; the hypothesis (P) is read as saying the natural density of the infinite uncovered set exists and equals 0.
Alternatively, one can argue directly: if the conclusion failed for some ε > 0, then for each k some choice a^{(k)} would satisfy f_k(a^{(k)}) ≥ ε; a diagonal (König's lemma) argument, using that f_k depends only on the first k coordinates and that f_k ≥ f_m for k ≤ m, produces a single infinite sequence a* with f_k(a*) ≥ ε for every k, contradicting (P).
Answer: Yes – property (P) forces a uniform bound.

1. The set‑up

For each i let
Ω_i = ℤ/n_iℤ,  Ω = ∏_{i≥1} Ω_i.
Ω is compact in the product (Tychonoff) topology.
A point a = (a_1, a_2, …) ∈ Ω is a choice of residues a_i (mod n_i).
For a finite prefix a_1, …, a_k put
U_k(a) = { x ∈ ℤ : x ≢ a_i (mod n_i) for 1 ≤ i ≤ k },  f_k(a) = d(U_k(a)),
where d(·) denotes the natural asymptotic density. Each U_k(a) is a finite union of arithmetic progressions, hence d(U_k(a)) exists and is a rational number. Moreover f_k depends only on the first k coordinates of a; consequently f_k is locally constant and therefore continuous on Ω.
For a fixed a the sets U_k(a) are decreasing, U_{k+1}(a) ⊆ U_k(a); hence f_k(a) is non-increasing. Condition (P) in the problem says that for every a ∈ Ω
lim_{k→∞} f_k(a) = 0.  (1)

2. The compactness argument

Fix ε > 0 and consider
F_k = { a ∈ Ω : f_k(a) ≥ ε }  (k ≥ 1).
Each F_k is the pre-image of the closed interval [ε, 1] under the continuous map f_k; hence F_k is closed. Because f_{k+1} ≤ f_k we have F_{k+1} ⊆ F_k; the family {F_k} is decreasing.
From (1) we see that no point a ∈ Ω belongs to all F_k; indeed, for every a there exists k with f_k(a) < ε. Thus
⋂_{k≥1} F_k = ∅.  (2)
The space Ω is compact, and the F_k are closed and nested. A basic property of compact spaces (the finite intersection property) states that a decreasing family of closed sets has empty intersection iff some member of the family is already empty. Consequently, (2) implies that for some integer K,
F_K = ∅.  (3)

3. The required uniform estimate

Equation (3) says that for every choice of residues a_1, …, a_K,
d(ℤ \ ⋃_{i=1}^K (a_i mod n_i)) = f_K(a) < ε.
In other words, after the first K congruences the set of integers that still avoid all of them has density less than ε. Since ε > 0 was arbitrary, we obtain exactly the statement asked for.

4. Conclusion

Hence, for an infinite sequence (n_i) with the property that any choice of residues a_i (mod n_i) leaves an uncovered set of integers of density 0, for every ε > 0 there is a finite k (depending only on ε) such that the set left uncovered by the first k congruences has density < ε for every choice of the residues.
Yes, the uniform bound always exists.