Navigation on a website relies on web pages that visually communicate extensive information virtually instantaneously, including content, overall semantics, orientation cues, and navigation possibilities. For users who are visually impaired, or who cannot look at a screen while performing other tasks (e.g., driving or walking), this multidimensional communication may be difficult or even impossible to access. Existing aural technologies (e.g., screen readers and aural browsers) and web accessibility standards, although powerful and enabling, do not fully address this problem: they read content aloud rather than conceptually translating a complex communication process. In this context, audio is a strictly linear channel, which makes aural navigation of large information architectures a very difficult and frustrating task. Supported by the National Science Foundation, my group is exploring innovative design strategies for the aural navigation of complex web information architectures, in which users exclusively or primarily listen to, rather than look at, content and navigational prompts. We iteratively create and refine aural design solutions for back and history navigation and for browsing large collections, and we evaluate the impact of these navigation strategies on the user experience. To establish the potential and limits of these aural navigation paradigms for enhancing the effectiveness of web navigation, we conduct a series of evaluation studies involving visually impaired participants using screen readers and sighted participants using mobile devices.