Ulam matrix

In mathematical set theory, an Ulam matrix is an array of subsets of a cardinal number with certain properties. Ulam matrices were introduced by Stanislaw Ulam in his 1930 work on measurable cardinals: they may be used, for example, to show that a real-valued measurable cardinal is weakly inaccessible.[1]

Definition

Suppose that κ and λ are cardinal numbers, and let F be a λ-complete filter on λ. An Ulam matrix is a collection of subsets A_{αβ} of λ, indexed by α ∈ κ and β ∈ λ, such that

  • If β ≠ γ, then A_{αβ} and A_{αγ} are disjoint.
  • For each β, the union over α of the sets A_{αβ}, that is ⋃_{α∈κ} A_{αβ}, is in the filter F.
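
As a concrete illustration, the following LaTeX sketch records the classical construction (going back to Ulam's original argument, though not spelled out in the cited source) for the case κ = ω and λ = ω₁, with F taken to be the ω₁-complete filter of co-countable subsets of ω₁; the names f_γ and A_{nβ} are local notation for this example.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Classical Ulam matrix with \kappa = \omega, \lambda = \omega_1, and
% F = the (\omega_1-complete) filter of co-countable subsets of \omega_1.
% The injections f_\gamma and the sets A_{n\beta} are notation local to
% this sketch.
For every ordinal $\gamma < \omega_1$ fix an injection
$f_\gamma \colon \gamma \to \omega$ (possible because $\gamma$ is
countable), and set
\[
  A_{n\beta} = \{\, \gamma < \omega_1 : \beta < \gamma
    \text{ and } f_\gamma(\beta) = n \,\},
  \qquad n \in \omega,\ \beta \in \omega_1 .
\]
% Disjointness: for fixed n and \beta \neq \beta', no \gamma lies in both
% A_{n\beta} and A_{n\beta'}, since f_\gamma(\beta) = n = f_\gamma(\beta')
% would contradict the injectivity of f_\gamma.
% Filter condition: for each \beta, the union over n of the A_{n\beta} is
% \{\gamma : \beta < \gamma\} = \omega_1 \setminus (\beta + 1), whose
% complement \beta + 1 is countable, so the union belongs to F.
\end{document}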

References

  1. Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics (Third Millennium ed.), Berlin, New York: Springer-Verlag, p. 131, ISBN 978-3-540-44085-7, Zbl 1007.03002