Spatial representations that capture both structural and semantic characteristics of urban environments are essential for urban modeling. Traditional spatial embeddings often prioritize spatial proximity while underutilizing fine-grained contextual information from places. To address this limitation, we introduce CaLLiPer+, an extension of the CaLLiPer model that systematically integrates Point-of-Interest (POI) names alongside categorical labels within a multimodal contrastive learning framework. We evaluate its effectiveness on two downstream tasks, land use classification and socioeconomic status distribution mapping, demonstrating consistent performance gains of 4% to 11% over baseline methods. Additionally, we show that incorporating POI names enhances location retrieval, enabling models to capture complex urban concepts with greater precision. Ablation studies further reveal the complementary role of POI names and the advantages of leveraging pretrained text encoders for spatial representations. Overall, our findings highlight the potential of integrating fine-grained semantic attributes and multimodal learning techniques to advance the development of urban foundation models.
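To make the multimodal contrastive setup concrete, the following is a minimal sketch of a CLIP-style alignment between a coordinate encoder and text embeddings of POI descriptions, where each POI name is concatenated with its category label on the text side. The MLP location encoder, embedding dimension, temperature, and example POIs are illustrative assumptions, not the authors' exact architecture or training recipe.

# Hedged sketch: symmetric InfoNCE alignment between location and text
# embeddings. Assumes a frozen pretrained sentence encoder produces the
# text embeddings; here a random tensor stands in for its outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocationEncoder(nn.Module):
    """Maps (lon, lat) coordinates to an embedding (placeholder MLP)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 128), nn.ReLU(),
            nn.Linear(128, dim),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        return self.net(coords)


def contrastive_loss(loc_emb: torch.Tensor, txt_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over matched (location, POI text) pairs."""
    loc = F.normalize(loc_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = loc @ txt.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(loc.size(0))           # i-th location matches i-th text
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Text side: POI name + category concatenated into one sentence,
    # to be embedded by a pretrained (assumed frozen) text encoder.
    poi_texts = ["British Museum. Museum.", "Borough Market. Food market."]
    coords = torch.tensor([[-0.1269, 51.5194], [-0.0910, 51.5055]])
    txt_emb = torch.randn(2, 256)   # stand-in for frozen text-encoder outputs
    loc_emb = LocationEncoder(256)(coords)
    print(f"contrastive loss: {contrastive_loss(loc_emb, txt_emb).item():.4f}")

In this sketch, enriching the text side simply means feeding longer, name-bearing strings to the pretrained text encoder; the contrastive objective itself is unchanged, which is consistent with the abstract's framing of CaLLiPer+ as an extension of CaLLiPer.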
@InProceedings{liu_et_al:LIPIcs.GIScience.2025.3,
  author    = {Liu, Junyuan and Wang, Xinglei and Cheng, Tao},
  title     = {{Enriching Location Representation with Detailed Semantic Information}},
  booktitle = {13th International Conference on Geographic Information Science (GIScience 2025)},
  pages     = {3:1--3:15},
  series    = {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN      = {978-3-95977-378-2},
  ISSN      = {1868-8969},
  year      = {2025},
  volume    = {346},
  editor    = {Sila-Nowicka, Katarzyna and Moore, Antoni and O'Sullivan, David and Adams, Benjamin and Gahegan, Mark},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address   = {Dagstuhl, Germany},
  URL       = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GIScience.2025.3},
  URN       = {urn:nbn:de:0030-drops-238322},
  doi       = {10.4230/LIPIcs.GIScience.2025.3},
  annote    = {Keywords: Location Embedding, Contrastive Learning, Pretrained Model}
}