What Do You Mean You're in Trafalgar Square? Comparing Distance Thresholds for Geospatial Prepositions

Authors: Niloofar Aflaki, Kristin Stock, Christopher B. Jones, Hans Guesgen, Jeremy Morley, Yukio Fukuzawa

  • Filesize: 1.5 MB
  • 14 pages

Author Details

Niloofar Aflaki
  • Massey Geoinformatics Collaboratory, Massey University, Auckland, New Zealand
Kristin Stock
  • Massey Geoinformatics Collaboratory, Massey University, Auckland, New Zealand
Christopher B. Jones
  • School of Computer Science and Informatics, Cardiff University, UK
Hans Guesgen
  • Massey Geoinformatics Collaboratory, Massey University, Auckland, New Zealand
Jeremy Morley
  • Ordnance Survey, Southampton, UK
Yukio Fukuzawa
  • School of Natural and Computational Sciences, Massey University, Auckland, New Zealand

Cite As

Niloofar Aflaki, Kristin Stock, Christopher B. Jones, Hans Guesgen, Jeremy Morley, and Yukio Fukuzawa. What Do You Mean You're in Trafalgar Square? Comparing Distance Thresholds for Geospatial Prepositions. In 15th International Conference on Spatial Information Theory (COSIT 2022). Leibniz International Proceedings in Informatics (LIPIcs), Volume 240, pp. 1:1-1:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


Natural language location descriptions frequently describe object locations relative to other objects (the house near the river). Geospatial prepositions (e.g. near) are a key element of these descriptions, and the distances associated with proximity, adjacency and topological prepositions are thought to depend on the context of a specific scene. By context, we include properties of the relatum such as its feature type, size and associated image schema. In this paper, we extract spatial descriptions from the Google search engine for nine prepositions across three locations, compare their acceptance thresholds (the distances at which different prepositions are acceptable), and study variations across contexts using cumulative graphs and scatter plots. Our results show that the adjacency prepositions next to and adjacent to are used over a wide range of distances, in contrast to beside; and that the topological prepositions in, at and on can all indicate proximity as well as containment and collocation. We also found that the reference object's image schema influences the selection of geospatial prepositions such as near and in.
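The acceptance-threshold comparison described above can be illustrated with a minimal sketch: group observed (preposition, distance) pairs, sort each group's distances, and read off the distance below which a given fraction of uses fall. This is not the authors' actual pipeline; the function name, the percentile choice, and the toy data below are invented purely for illustration.

```python
from collections import defaultdict

def acceptance_thresholds(observations, percentile=0.95):
    """For each preposition, estimate the distance (in metres) below
    which `percentile` of its observed uses fall -- a simple proxy
    for an acceptance threshold."""
    by_prep = defaultdict(list)
    for prep, dist in observations:
        by_prep[prep].append(dist)
    thresholds = {}
    for prep, dists in by_prep.items():
        dists.sort()
        # Index of the percentile-th value in the sorted sample,
        # clamped to the last element for small samples.
        k = min(len(dists) - 1, int(percentile * len(dists)))
        thresholds[prep] = dists[k]
    return thresholds

# Toy sample: distances (metres) at which each preposition was observed.
sample = [
    ("next to", 5), ("next to", 40), ("next to", 300),
    ("beside", 3), ("beside", 8), ("beside", 15),
    ("near", 50), ("near", 400), ("near", 900),
]
print(acceptance_thresholds(sample))
```

Plotting each preposition's sorted distances as a cumulative curve, as the paper does, makes the contrast visible: a preposition like beside saturates at short distances, while next to keeps accumulating uses far out along the distance axis.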

Subject Classification

ACM Subject Classification
  • Computing methodologies → Natural language processing

Keywords and phrases
  • contextual factors
  • spatial descriptions
  • acceptance model
  • spatial template
  • applicability model
  • geospatial prepositions



