- LowLat uses a server-side presending mechanism with a receiver-side cache. The protocol is transparent to the server and client. The design uses bandwidth only when available – either excess capacity or idle periods. Client cache hits are forwarded to the presender (but not the server), loosely coupling the presender/cache interaction. It degenerates to a conventional request/response interaction when the server is loaded, bandwidth isn’t available, or the client cache is small or not implemented (i.e., it’s backward compatible). LowLat is based on the Mirage formal model of communication latency. Key feature: the server tries to presend the contents of ALL linked items; ordering and history statistics can be used to limit the presender’s actions.
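The LowLat interaction above can be sketched in a few lines. This is an illustrative simulation only – the class and method names (`Presender`, `ClientCache`, etc.) are invented here, not taken from the LowLat implementation:

```python
# Toy sketch of the LowLat presend/cache interaction (names hypothetical).

class Presender:
    """Server-side presender: pushes linked items into spare bandwidth."""
    def __init__(self, links):
        self.links = links      # page -> list of linked pages
        self.hit_log = []       # cache-hit notices forwarded by the client

    def presend(self, page, bandwidth_available=True):
        # Presend ALL linked items, but only when bandwidth is available;
        # otherwise degenerate to ordinary request/response (presend nothing).
        return list(self.links.get(page, [])) if bandwidth_available else []

    def note_hit(self, page):
        # Cache hits come here, not to the server: the presender/cache
        # interaction stays loosely coupled to normal request handling.
        self.hit_log.append(page)

class ClientCache:
    def __init__(self, presender):
        self.store = set()
        self.presender = presender

    def receive_presend(self, pages):
        self.store.update(pages)

    def request(self, page):
        if page in self.store:
            self.presender.note_hit(page)   # notify presender, not server
            return "cache"
        return "server"                      # fall back to a normal fetch

links = {"index.html": ["a.html", "b.html"]}
p = Presender(links)
c = ClientCache(p)
c.receive_presend(p.presend("index.html"))
print(c.request("a.html"))  # "cache"
print(c.request("z.html"))  # "server"
```

Note how a client with no cache (or a loaded server presending nothing) simply takes the `"server"` path every time, which is the backward-compatible degenerate case described above.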
- Joe Touch
- List of publications, including:
- “Mirage: A Model for Ultra-High-Speed Protocol Analysis and Design”, Protocols for High Speed Networks 1, 1989.
- This paper describes the basis for the Mirage protocol model, based on analogies from quantum interactions. It models latency as it affects state imprecision, and is the earliest known reference to trading bandwidth for perceived latency via source anticipation.
- Joe’s 1992 PhD dissertation “Mirage, a Model for Latency in Communication”
- Mirage is a model describing the effects of latency on communication. It describes reducing latency by increasing bandwidth while decreasing the precision of state, as a converse to Shannon’s error-coding theorem (which trades coding length, i.e., latency, for error). Experiments measured the effects on a time protocol (NTP) and on processor/memory communication (patented).
- “Parallel Communication”, J. Touch, Infocom ’93.
- Parallel Communication appears to be one of the first papers to explicitly describe source anticipation as a way to reduce the effect of speed-of-light latency in hypermedia navigation (pg. 4).
- “An Experiment in Latency Reduction”, J. Touch, D. Farber, Infocom ’94.
- Describes the use of source anticipation to reduce latency in FTP requests. An experiment used FTP traces at several locations to measure the expected increase in bandwidth (7x) and reduction in latency (by 2/3). Describes planned experiments to apply these techniques to the Web.
- Key feature: server tries to presend the contents of linked items based on prior request history.
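One simple way to use prior request history to limit presending, as hinted above, is a frequency cutoff: only presend linked items that past clients actually went on to request. A minimal sketch, with invented names and a hypothetical threshold parameter:

```python
from collections import Counter

def presend_candidates(page, links, history, threshold=2):
    """Return linked items seen at least `threshold` times in past requests.

    `links` maps a page to its linked items; `history` is a flat list of
    previously requested URLs. Both the structure and the threshold are
    illustrative assumptions, not a documented interface.
    """
    follows = Counter(history)
    return [l for l in links.get(page, []) if follows[l] >= threshold]

links = {"index.html": ["a.html", "b.html", "c.html"]}
history = ["a.html", "a.html", "b.html"]
print(presend_candidates("index.html", links, history))  # ['a.html']
```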
- Object Caching Environments for Applications and Network Services group
- Prefetching and Speculative Service on the Web project
- Prof. Azer Bestavros, Carlos Cunha
- Jeff Mogul’s server-side mechanisms
- Key feature: Server-side presending using Markov chains to model the prediction stream.
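A first-order Markov model over the request stream can be sketched as follows; this is a generic illustration of the technique named above, not Mogul's actual implementation, and the probability threshold is an assumed parameter:

```python
from collections import defaultdict, Counter

# Sketch: learn next-request transition counts from a trace, then
# predict (and presend) successors whose probability clears a threshold.

def build_transitions(trace):
    trans = defaultdict(Counter)
    for prev, nxt in zip(trace, trace[1:]):
        trans[prev][nxt] += 1
    return trans

def predict(trans, current, p_min=0.5):
    counts = trans[current]
    total = sum(counts.values())
    return [url for url, n in counts.items() if total and n / total >= p_min]

trace = ["/", "/a", "/", "/a", "/", "/b"]
t = build_transitions(trace)
print(predict(t, "/"))  # only "/a" (probability 2/3) clears the threshold
```

Raising `p_min` makes the predictor more conservative, trading prefetch coverage for less wasted bandwidth.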
- Key feature: Geographical push caching.
- Predictive prefetching (client-side), using pipelined GETALL and GETLIST to avoid multiple RTTs, and pipelining in a long-lived connection. Performed by Venkata while a summer intern at DECWRL. Key feature: optimize the time per traditional request/response interaction by aggregating requests and caching connections.
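The win from aggregating requests over a long-lived connection is easy to see with a back-of-the-envelope model (times in milliseconds; the numbers are illustrative, not measured):

```python
# Why GETALL/GETLIST-style aggregation beats one-request-per-round-trip:
# the conventional scheme pays a full RTT per item; the aggregated scheme
# pays one RTT for the whole batch, then streams the transfers.

def total_time_ms(n_items, rtt_ms, xfer_ms, pipelined):
    if pipelined:
        return rtt_ms + n_items * xfer_ms   # one RTT amortized over all items
    return n_items * (rtt_ms + xfer_ms)     # each request pays its own RTT

print(total_time_ms(10, 100, 10, pipelined=False))  # 1100 ms
print(total_time_ms(10, 100, 10, pipelined=True))   # 200 ms
```

With a 100 ms RTT and 10 items, aggregation cuts the total from 1.1 s to 0.2 s; the advantage grows with the number of items and with the RTT.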
- Fetches anchors and in-line images in HTML, and prefetches; the cache saves its own prefetches. Key feature: supports the Internet Cache Protocol (ICP) (QUERY, HIT, and MISS); also supports a “catalyst mode”, which separates the prefetching proxy from the primary caching proxy.
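The ICP exchange mentioned above (QUERY answered by HIT or MISS) can be sketched as a toy lookup across sibling caches; the message handling here is deliberately simplified and not wire-accurate ICP:

```python
# Toy ICP-style sibling query: send QUERY for a URL to each sibling cache,
# fetch from the first one that answers HIT, else fall through to the
# origin server (or a parent cache).

def icp_query(siblings, url):
    for name, cached_urls in siblings.items():
        reply = "HIT" if url in cached_urls else "MISS"
        if reply == "HIT":
            return name
    return None  # all siblings answered MISS

siblings = {"proxy-a": {"/x"}, "proxy-b": {"/y"}}
print(icp_query(siblings, "/y"))  # 'proxy-b'
print(icp_query(siblings, "/z"))  # None
```

In real ICP the queries go out in parallel over UDP and the requester races the replies; the sequential loop above only shows the HIT/MISS decision logic.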
- Key feature: Third-party prefetching, at a “broker” that manages cache coherence and updates when items are stale. Uses statistical usage information.
- (from the Siva pages:) Siva is a hierarchical caching HTTP proxy server. The goal is to greatly increase performance of WWW access by reducing overall traffic, through highly-distributed caching of documents. See the project notes for more information. The current implementation is highly experimental, but the source could be useful to some.
- Provides a public WWW cache for the Higher Education community in the UK. The UK has terrible problems with international bandwidth and we would be willing to act as a test site for any worthy schemes. As we run a real user service any scheme would have to be pretty stable, but our user-base is large and realistic. We currently serve about 1,000,000 requests a day. We can’t contribute a great deal except testing other people’s ideas. Neil Smith