Using conditional GET or wget timestamping for the catalog files

Peter Bonivart bonivart at
Mon Oct 28 13:34:31 CET 2013

On Mon, Oct 28, 2013 at 11:05 AM, Dagobert Michelsen <dam at> wrote:
> Hi folks,
> Am 28.10.2013 um 10:37 schrieb Maciej (Matchek) Bliziński <maciej at>:
> > Hey Peter (B) and maintainers,
> >
> > I spoke to Dago a few days ago, and we had a chat about a large portion of traffic from our main mirror being just the catalog files, that is, the files named 'catalog' that are downloaded and re-downloaded a countless number of times. The mirror can withstand it, but it's a constant stream of a few megabytes per second, day and night.
> Some numbers: we have constantly 3-4 MB per second. This is not a problem ATM as we
> have a direct gigabit uplink to the internet, but summing this up it is roughly
> 10 TB. Just as a comparison: Amazon would charge $0.12 per GB, resulting in about $1,200!
> So I would like to take the initiative and see that we save bandwidth now that we still
> have the cheap mirror.
> > Perhaps this can be helped by using a conditional GET with its possible HTTP 304 Not Modified response, or by timestamping. wget has an option to timestamp files, and it can issue just a HEAD request to skip downloading the whole file. Here's some information I found:
> >
> >
> >
> > Have we considered this in the past? I don't recall it. Maybe we should take a look, it could be simple to implement, and we would save some bandwidth on our main mirror and on other mirrors worldwide.
> Just adding --timestamping would already be a great benefit.
> Peter, what do you think?

I could do some tests, I guess. What I did was set the default expiry
for catalogs to 14 days, but I think most people add -U to their
command line all the time anyway.
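For reference, the conditional GET that Maciej mentions could look roughly like this in Python (just a sketch for discussion; the function name and URL handling are made up, this isn't anything pkgutil actually does):

```python
# Sketch of a conditional GET: send If-Modified-Since based on the local
# file's mtime, and treat 304 Not Modified as "keep the cached copy".
import email.utils
import os
import urllib.error
import urllib.request

def fetch_catalog(url, local_path):
    """Return True if a fresh copy was downloaded, False on 304."""
    request = urllib.request.Request(url)
    if os.path.exists(local_path):
        mtime = os.path.getmtime(local_path)
        # HTTP dates are RFC 1123, e.g. 'Mon, 28 Oct 2013 12:34:31 GMT'
        request.add_header("If-Modified-Since",
                           email.utils.formatdate(mtime, usegmt=True))
    try:
        with urllib.request.urlopen(request) as response:
            data = response.read()
    except urllib.error.HTTPError as err:
        if err.code == 304:  # server says our copy is still current
            return False
        raise
    with open(local_path, "wb") as out:
        out.write(data)
    return True
```

On a 304 the server sends only headers, no body, which is where the bandwidth saving would come from.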

Is timestamping available in our old static wget binaries (those I
distribute with pkgutil as a last resort)?
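For comparison, the HEAD-based timestamping approach (ask for Last-Modified first, download only if the remote copy is newer than the local file) could be approximated like this; again just an illustrative sketch, with a made-up helper name:

```python
# Rough equivalent of wget's timestamping check: issue a HEAD request,
# compare the remote Last-Modified against the local file's mtime.
import email.utils
import os
import urllib.request

def is_remote_newer(url, local_path):
    request = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(request) as response:
        last_modified = response.headers.get("Last-Modified")
    if last_modified is None or not os.path.exists(local_path):
        return True  # no way to compare, so fetch to be safe
    remote_mtime = email.utils.parsedate_to_datetime(last_modified).timestamp()
    return remote_mtime > os.path.getmtime(local_path)
```

With wget itself, adding -N (--timestamping) to the catalog fetch should get this effect without any extra code, assuming the mirrors send Last-Modified.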


More information about the maintainers mailing list