When you view the world in AR, your commlink or cyberdeck can overlay icons for any (and all) nearby matrix devices onto your vision. This is rather overwhelming - in an urban area, the local mesh can contain thousands of icons. So most people run filtering routines that hide the majority and show only the ones deemed important. On a crowded street, for example, you might show icons only for the commlinks of people you know, and hide the rest.
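Conceptually, such a filtering routine is just a predicate applied to icon metadata. A purely illustrative sketch (all names and fields here are hypothetical, since no real API for fictional commlinks exists):

```python
from dataclasses import dataclass

@dataclass
class MatrixIcon:
    device_id: str
    owner: str
    kind: str  # e.g. "commlink", "drone", "vending"

def filter_icons(icons, contacts, show_kinds=("commlink",)):
    """Keep only icons of the given kinds whose owner is a known contact."""
    return [
        icon for icon in icons
        if icon.kind in show_kinds and icon.owner in contacts
    ]

# Example: on a crowded street, show only a friend's commlink.
street = [
    MatrixIcon("c1", "alice", "commlink"),
    MatrixIcon("c2", "stranger", "commlink"),
    MatrixIcon("v1", "soycaf-corp", "vending"),
]
visible = filter_icons(street, contacts={"alice"})
```

The same predicate structure extends naturally to whitelists, blacklists, or per-location profiles.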
The mesh networking routing protocols that keep the wireless matrix working track the approximate position and motion of all these devices, so they can predict when devices are about to move out of range of each other and prepare fallback routes to keep traffic flowing. AR leverages this information to position icons in the user's sensorium in roughly the right place, relative to where the device physically is.
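A toy sketch of how such a router might use tracked position and velocity to predict an out-of-range event in time to prepare a fallback route. The radio range, time horizon, and dead-reckoning step are all invented for illustration:

```python
import math

RANGE_M = 100.0  # hypothetical radio range in metres

def time_until_out_of_range(pos_a, vel_a, pos_b, vel_b, horizon=30.0, step=1.0):
    """Dead-reckon both devices forward and return the first time (in
    seconds) at which their separation exceeds RANGE_M, or None if they
    stay in range for the whole horizon."""
    t = 0.0
    while t <= horizon:
        ax, ay = pos_a[0] + vel_a[0] * t, pos_a[1] + vel_a[1] * t
        bx, by = pos_b[0] + vel_b[0] * t, pos_b[1] + vel_b[1] * t
        if math.hypot(ax - bx, ay - by) > RANGE_M:
            return t
        t += step
    return None

# Two commlinks 90 m apart, walking away from each other at 2 m/s each:
t = time_until_out_of_range((0, 0), (-2, 0), (90, 0), (2, 0))
# If t is not None, the mesh has t seconds to pre-compute a relay route
# through an intermediate device before the direct link drops.
```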
When the user has line-of-sight to the device, this positioning is quite accurate; glance at a coffee machine in AR and you'll see its glowing matrix icon hovering just over it. When there's no line of sight, position accuracy drifts randomly, often by a few metres. If you are in a shopping mall and your friend is in a store a few doors down, you'll see an icon for their commlink, but it'll appear vague and fuzzed out, so you know its position is only approximate.
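An illustrative sketch of how an AR layer might implement this: snap the icon to the device when there is line-of-sight, otherwise jitter the last-known position and flag the icon as fuzzy so it renders vague. The drift magnitude and function names are assumptions for the sake of the example:

```python
import random

def place_icon(true_pos, has_line_of_sight, drift_m=3.0, rng=None):
    """Return (display_pos, is_fuzzy). With line-of-sight the icon sits
    exactly on the device; without it, the displayed position drifts by
    up to drift_m metres on each axis and the icon is marked fuzzy."""
    if has_line_of_sight:
        return true_pos, False
    rng = rng or random.Random()
    dx = rng.uniform(-drift_m, drift_m)
    dy = rng.uniform(-drift_m, drift_m)
    return (true_pos[0] + dx, true_pos[1] + dy), True

# The coffee machine in front of you: exact placement.
exact, fuzzy_a = place_icon((2.0, 1.0), has_line_of_sight=True)
# Your friend's commlink a few doors down: approximate, fuzzed out.
approx, fuzzy_b = place_icon((40.0, 5.0), has_line_of_sight=False)
```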
When using VR inside a host, there is no need to make things correspond to meatspace. Icon positioning is arbitrary and governed by the sculpting of the host. Some hosts look like glowing neon wireframes, with icons clustered across an infinite 2D plane. Others are painstakingly rendered 3D environments with icons grouped logically and scattered across rooms or areas. The possibilities are limitless.